Tagged: Google

  • Joe 11:50 on May 9, 2007 Permalink
    Tags: excellent tools, Google, MeasureMap, much better tool

    Google Analytics updated 

    Yesterday Google announced the new, updated version of their Analytics tool.
    [Google Analytics screenshot]
    Under the hood, most data is captured just like in the current version. But the user interface and data presentation are a whole different story. A Flash demo shows many excellent tools to analyze trends and zoom in on visitor navigation paths, decision funnels, trends over time, and everything you could possibly want to know about keyword conversion (organic as well as paid-for AdWords).

    The basis for this overhaul was the acquisition of MeasureMap in 2006.

    Over the next few weeks every current Analytics account will be migrated to the new version, so most of us will have to wait in anticipation of a much, much better tool. Again, Google sets the industry standard at a very high level; tough times for the competition to catch up.

    Read the official announcement on the Analytics blog.

    (On a side note: I tried writing this review with the WordPress plugin Structured Blogging, which would mark the review up according to the hReview microformat standard. It didn’t quite work out; maybe HTML is not allowed in the review body. More to come…)
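
    For reference, this is roughly the shape of hReview markup. A minimal sketch using the standard hReview class names (hreview, item, fn, reviewer, dtreviewed, rating, description); the rating value and summary text are made up for illustration:

        <!-- Minimal hReview sketch; the rating and summary are illustrative -->
        <div class="hreview">
          <span class="item"><span class="fn">Google Analytics (new version)</span></span>
          reviewed by <span class="reviewer vcard"><span class="fn">Joe</span></span>
          on <abbr class="dtreviewed" title="2007-05-09">May 9, 2007</abbr>.
          Rating: <span class="rating">4</span> out of 5.
          <div class="description"><p>The updated interface is a big step forward.</p></div>
        </div>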


     
  • Joe 09:56 on April 13, 2007 Permalink
    Tags: Google

    Google Maps with KML data 

    Some time ago I wrote about the introduction of a new feature for the Google Maps API: you can now use the same definition file format as with Google Earth (technically speaking, this is the KML 2.1 format).

    All cool and neat, so I did a little experiment to try it out. And guess what: it didn’t work! No error messages; my KML file was just completely ignored.

    Well, it appears that the KML file must be accessible to Google to read and parse; in other words, it is not the client-side API (JavaScript) that reads the KML, but the Google service. Apparently they parse the file, calculate the correct viewport, and then send all geo data back to the client API.

    As I hosted the file on my local computer (http://localhost/), this did not work. Duh!

    This behavior was not immediately obvious to me from the API documentation. And it might not be strictly necessary: as long as the KML file is hosted on the same server as the client HTML, the file could just be retrieved through XMLHttpRequest (the same-origin policy would be satisfied). Performance might be an issue, though, as XML parsing in the browser is not very efficient. And setting the viewport for the map would require an extra round trip, and thus extra delay.

    Conclusion: be sure to host your KML file such that it is accessible to Google (over HTTP), otherwise it will simply not work.

    // The KML URL must be publicly reachable over HTTP; Google fetches and parses it server-side
    var geoXml = new GGeoXml('http://www.yourhost.com/geo/map.kml');
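
    For context, here is how that line fits into a complete page script for the Maps API of the time (v2); a minimal sketch, where the element id and the centering coordinates are made up for illustration:

        // Minimal Google Maps API v2 setup; element id and coordinates are illustrative
        var map = new GMap2(document.getElementById('map'));
        map.setCenter(new GLatLng(52.37, 4.89), 10); // initial view, before the KML loads

        // Google fetches and parses the KML server-side, so the URL must be public
        var geoXml = new GGeoXml('http://www.yourhost.com/geo/map.kml');
        map.addOverlay(geoXml);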


     
  • Joe 11:06 on April 12, 2007 Permalink
    Tags: early applications, geeky applications, Google, open IM protocol, prototypical web

    Twitter vs Jabber 

    [Twitter logo] As you might know, Twitter has been the hype of the last few months. Everybody and their dog are updating their current activities like crazy.

    As a spin-off, many secondary services are being created on top of the public Twitter API.

    This reminds me of the old days when Jabber was started as an open IM protocol. Lots of geeky applications sprang to life, like monitoring incoming email (headers), keeping an eye on your computer logs, and such. Now the Jabber protocol (XMPP) is being used as the basis for a couple of IM platforms, like Google Talk. Many of those early applications are now official XMPP extensions. The platform has matured, but lost its appeal to the geeky crowd.

    Today, these kinds of applications are being built on Twitter by the dozens. Without any effort I found lots and lots of them, and I estimate these are less than 5% of all Twitter applications out there, so the list is getting really huge.

    • MoniTwitter (answering one simple question: What’s your website doing?)
    • TwitterIsWeird (displays pairs of twitter quotes in comic balloons)
    • PingTwitter (update Twitter when you publish a new blog post)
    • TwitterChat (2-way live shoutbox-twitter integration)
    • Twitterrific (Mac OS X client application)

    And then we have the Twitterforum, an unofficial Twitter-related discussion site, listing even more Twitter-related applications and sites.

    So does the Twitter API’s popularity have to do with its incredible simplicity? And its pluggability for the prototypical Web 2.0 platform (yes, it has a JSON interface)?

    I’m not sure, but I hacked together my own little contribution to this madness in just half an hour: Browse with Twitter, a Greasemonkey script for Firefox.

    It updates your twitter.com status with the message “Browsing: [document.title]” whenever you load a web page.
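
    The core of the script is little more than one authenticated POST to the Twitter REST API on every page load. A minimal sketch of the idea; the credentials are placeholders you would obviously fill in yourself:

        // ==UserScript==
        // @name     Browse with Twitter (sketch)
        // @include  http://*
        // ==/UserScript==

        // Illustrative placeholders; replace with your own Twitter credentials
        var USER = 'yourname';
        var PASS = 'yourpass';

        // Post the current page title as a status update (2007-era Twitter API, basic auth)
        GM_xmlhttpRequest({
          method: 'POST',
          url: 'http://twitter.com/statuses/update.json',
          headers: {
            'Authorization': 'Basic ' + btoa(USER + ':' + PASS),
            'Content-Type': 'application/x-www-form-urlencoded'
          },
          data: 'status=' + encodeURIComponent('Browsing: ' + document.title)
        });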

    Fair warning: don’t install this script if you value your privacy (or at least restrict it to the sites you explicitly want to show up on Twitter).


     
  • Joe 23:03 on April 9, 2007 Permalink
    Tags: Google

    Microsummary plugin goes commercial? 

    Well, no.
    But the other day I got a Google Alert which told me that the Microsummary plugin had made it into a commercial WordPress bundle.

    I’m surprised that this business exists, because you will still have to upload and manage the whole shebang on a PHP- and MySQL-enabled hosting account. And those of us who are able to do that are surely capable of installing WordPress themselves. Maybe the added value is in the selection of bundled plugins, I don’t know.

    Anyway, there’s not much documentation (e.g. it’s not clear whether all 100+ bundled plugins are enabled by default). I just noticed the plugin is part of their list.

    And their fair warning to their customers:

    […] For instance, many core WordPress files got changed in the course of the new WordPress release. The same can be said about all the bundled enhancements.

    This has one big, important consequence: you must upgrade carefully. Simply uploading the new files and replacing the old ones won’t cut it. You need to remove all old files before uploading the new ones; that will help you avoid lingering stale/old files, which could cause WordPress to malfunction. Oh, and don’t forget to back up any files you’ve changed.

    Hmm…

     
    • Rudd-O 03:44 on April 10, 2007 Permalink

      Hello, and thanks for noting Turbocharged in your blog. I’m the lead engineer and owner of the (admittedly still small) business. I want to let you in on a little secret: since one of the big hurdles is usually installation, besides supporting it, we’re building a remote WordPress/Turbocharged installer for customers. Once it’s done, feel free to ask for an account so you can test it.

  • Joe 11:45 on March 29, 2007 Permalink
    Tags: access site, Benelux, Bernhard Seefeld, Brandon Badger, CRM, a site, GMaps Utility Library, Google, Newfoundland and Labrador, open web, Rotterdam, satellite imagery, Sidney Mock, USD, web crawler, making a web application, www.nederkaart.nl

    Google Geo Day 1 

    Google Geo Day, part 1

    Today the Google Geo Day is being held at Amsterdam Expo XXI.

    The morning programme consisted of three talks, mostly about Google Maps and Google Earth, aimed at developers who want to create mashups based on the Maps API.

    A bunch of quick notes follow, some in Dutch (mostly with the Dutch speakers).

    (More …)

     
  • Joe 22:18 on March 24, 2007 Permalink
    Tags: car sales, Conrad Black, correct tools, Dennis Furr, Google, Japan, limit services, MySpace, proper semantic web technology, Rupert Murdoch, semantic web initiatives, semweb technology, Stephen Downes, Web people, Web Will Fail

    Why the Semantic Web will NOT Fail 

    [W3C Semantic Web stack diagram, taken from the W3C web site] On LinkedIn Answers, Krzysztof Pająk asks the question “Why the Semantic Web will Fail?”
    Update: the person at LinkedIn apparently ripped his question literally off a blog post by Stephen Downes, Why the Semantic Web Will Fail, which I just found out about.

    I posted the following clarification to LinkedIn Answers:

    I hereby leave my answer as general insight for this thread, but I have no respect for the way you’re apparently doing business. This smells a lot like plagiarism.

    The original blog post is much more about trust and control, while the LinkedIn thread seems to focus more on business models and cost. Just be sure to read Stephen’s blog.

    Quoted, from Stephen Downes:

    I was thinking about the edgy things of Web 2.0, and where they’re working, and more importantly, where they’re beginning to show some cracks.

    A few of the key things today:

    - Yahoo is forcing people to give up their Flickr identities and to join the mother ship, and
    - MySpace is blocking all the widgets that aren’t supported by some sort of business deal with MySpace
    - the rumour that Google is turning off the search API

    And that’s when I realized:

    The Semantic Web will never work because it depends on businesses working together, on them cooperating.

    We are talking about the most conservative bunch of people in the world, people who believe in greed and cut-throat business ethics. People who would steal one another’s property if it weren’t nailed down. People like, well, Conrad Black and Rupert Murdoch.

    And they’re all going to play nice and create one seamless Semantic Web that will work between companies – competing entities choreographing their responses so they can work together to grant you a seamless experience?

    Then, Dennis Furr answered:

    Another way to look at this is from the perspective of the SME. Let the big players cause restrictions and limit services, and their clients will abandon them. This will create new opportunities for new and existing SMEs to demonstrate their worth.

    - Yahoo doesn’t force anyone to do anything. We make choices.
    - If MySpace doesn’t provide the correct tools to satisfy their customers, then the customers will vote with their feet.
    - If Google (foolishly) turned off the search API, then someone else would provide a replacement service.

    Consumers aren’t loyal to brands, they are loyal to what these brands deliver. Look at the US automobile industry in the 1970s. US auto manufacturers were building large cars that didn’t get very good fuel economy. Japanese car sales flourished. After much pain and agony, US auto manufacturers developed relationships with their Japanese competitors and started manufacturing cars that were more attractive in terms of fuel economy. They even built cars with engines manufactured in Japan that were also used in Japanese cars.

    My point is that if large players in an industry choose not to “play nice”, then this will likely create a place in the market for the SME. By developing seamless working relationships, collectively, the SMEs may develop enough momentum to displace larger traditional providers.

    Excellent.
    But there’s more.

    Why the Semantic Web will NOT fail

    First, Dennis gives a most excellent answer to the question about greed and conservatism.

    Then, about the technology: things may evolve slightly differently than foreseen back in 2000, when the term “the semantic web” emerged.

    Back then, the perspective came mostly from the AI folks and librarians, where the interpretation and categorization of data were thought of in a very top-down way. Basically, we needed massive centralized ontologies, which cost tons of money to define and maintain.

    The cost of such a system could easily be prohibitive according to the scenario of Stephen Downes.

    But then came around the developments tagged “Web 2.0”. The key factor, in my opinion, is the third point of Tim O’Reilly’s What is Web 2.0 article: data is the next “Intel Inside”. In my words, this means that users gain by sharing their data (the sum adds more value to the individual items) and smart companies can benefit from exploiting this data in a sensible way.

    We have seen this in the form of tagging on sites like Flickr and del.icio.us. Individual users get the benefit of putting their data in the context of the rest; the service gets the benefit of being able to do all kinds of data mining and exploitation (e.g. advertising). The key point here is: users add their own metadata, for their own benefit.

    Right now these so-called folksonomies are becoming more and more mainstream. The center of this bottom-up movement is the microformats initiative.
    This hasn’t gone unnoticed by the Semantic Web people, and the first initiative to build a bridge between folksonomies, like microformats, and proper semantic web technology (RDF and ontologies) is being finalized right now: the W3C GRDDL recommendation. So we could finally get the benefits of both: massive amounts of metadata, all entered by normal users, and carefully mapped ontologies, created by professionals for some specific purpose.
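
    Mechanically, GRDDL is pleasantly simple: a page declares the GRDDL profile in its head and links to an XSLT transformation that extracts RDF from the microformatted markup. A minimal sketch; the profile URI is the real GRDDL one, but the stylesheet URL is made up for illustration:

        <!-- GRDDL-enabled page: the profile announces GRDDL support, the link names
             the transformation that turns the microformatted HTML into RDF.
             The XSLT URL is illustrative, not a published stylesheet. -->
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head profile="http://www.w3.org/2003/g/data-view">
          <title>Review archive</title>
          <link rel="transformation" href="http://example.org/xslt/hreview2rdf.xsl" />
        </head>
        <body>
          <!-- hReview (or other microformat) content goes here -->
        </body>
        </html>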

    I would not be surprised if 2007 turns out to be the year of the first successful, mainstream semantic web initiatives. Interesting fact: the new video-on-demand service Joost.com is heavily supported by semweb technology at the back end.

    Here is the LinkedIn thread, in case you’re interested…

     