Tagged: XML

  • Joe 23:29 on June 14, 2007 Permalink
    Tags: Apple Computer, DTD, Martin Ott, XML   

    Installing Subversion on Mac OS X 

    Yesterday it was the third time in two years that I needed to install Subversion from scratch on Mac OS X. The third time to reinvent the wheel and learn from earlier mistakes. So this time I wrote down some notes, which I'd like to share…

    First, the installation

    Currently the most up-to-date binary builds with an installer come from Martin Ott. Download the package (currently 1.4.3) and install. No sweat, no pain. Then I want to make sure the svnserve daemon is running whenever I boot my Mac. Time to get my hands dirty. (More …)

    • Guido Haarmans 20:19 on June 15, 2007 Permalink

      Subversion 1.4.4 binaries are actually available from CollabNet. The Mac OS X binaries are a fully packaged, complete distribution of Subversion. They are Universal binaries, so they run on both Intel® and PowerPC based Macs. The download includes the Ruby, Perl™, Python® and Java™ bindings suitable for Subclipse and other Java-based applications. The server side supports both Berkeley DB and FSFS. And it's all in a proper OS X package: just download and double-click.

  • Joe 09:56 on April 13, 2007 Permalink
    Tags: XML   

    Google Maps with KML data 

    Some time ago I wrote about the introduction of a new feature for the Google Maps API: you can now use the same definitions file format as with Google Earth (technically speaking this is the KML 2.1 format).

    All cool and neat, so I did a little experiment to try it out. And guess what: it didn’t work! No error messages, just my KML file was completely ignored.

    Well, it appears that the KML file must be accessible for Google to read and parse; in other words, it is not the client-side API (JavaScript) that reads the KML, but the Google service. Apparently what they do is parse the file, calculate the correct viewport and then send all geo data back to the client API.
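What the service presumably does can be mimicked in a few lines: parse the placemark coordinates and compute the bounding box that determines the initial viewport. A hypothetical Python sketch, nothing more than an illustration of the principle (the placemark data is invented, and Google's actual processing is of course not published):

```python
import xml.etree.ElementTree as ET

# Two invented placemarks, just enough to show the principle.
KML = """<kml xmlns="http://earth.google.com/kml/2.1"><Document>
<Placemark><Point><coordinates>4.90,52.37,0</coordinates></Point></Placemark>
<Placemark><Point><coordinates>5.12,52.09,0</coordinates></Point></Placemark>
</Document></kml>"""

ns = {'kml': 'http://earth.google.com/kml/2.1'}
points = []
for c in ET.fromstring(KML).findall('.//kml:coordinates', ns):
    # KML stores lon,lat[,alt]; keep only the first two values
    lon, lat = [float(v) for v in c.text.strip().split(',')[:2]]
    points.append((lat, lon))

# The bounding box determines the initial viewport of the map.
south, north = min(p[0] for p in points), max(p[0] for p in points)
west, east = min(p[1] for p in points), max(p[1] for p in points)
print(south, west, north, east)  # 52.09 4.9 52.37 5.12
```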

    As I hosted the file on my local computer (http://localhost/), this did not work. Duh!

    This feature was not immediately obvious to me from the API documentation. And it might not be strictly necessary: as long as the KML file is hosted on the same server as the client HTML, the file could just be retrieved through XMLHttpRequest (the same-origin policy would be satisfied). Performance might be an issue, though, as XML parsing in the browser is not very efficient. And setting the viewport for the map would require another round trip and delay.

    Conclusion: be sure to host your KML file such that it is accessible for Google (over http), otherwise it will just not work.

    // Google fetches and parses the KML server-side, so the URL must be publicly reachable
    var geoXml = new GGeoXml('http://www.yourhost.com/geo/map.kml');
    map.addOverlay(geoXml); // assuming an existing GMap2 instance named map


  • Joe 11:02 on April 4, 2007 Permalink
    Tags: Christian Heilmann, corporate software/intranet stuff, semantic web, web techniques, XML, XSLT   

    Reasons for stripping down 

    Christian Heilmann won’t go naked tomorrow.

    His reasons not to do so are:

    • Most CSS naked sites are generated from templates, so what is the individual blog author’s effort?
    • The target audience is missed; it is merely preaching to already-converted style purists
    • The target market is missed: those really crappy intranets within big corporations

    My reasons for still participating with my (slightly modified) WordPress blog…

    You definitely have a point with the generated code/templates-based blogs. Last year, I adapted my site layout to have the content first, with navigation and boilerplate stuff at the bottom of the HTML stream. That was fun and even sped up the apparent rendering of my site with CSS enabled. So I learned something useful in the process as well.

    That site structure is generated from templates as well, but all handcrafted XML/XSLT stuff (in Firefox, select the alternate style CSS Naked Day to see how it works).

    Now I’m using WordPress and, indeed, just installed a plugin for ‘Naked Day’. OK, so my contribution was to adapt that plugin for WP 2.1.x.

    About the corporate software/intranet stuff: you are completely right. But here I feel that bottom-up advocacy does work in the longer term. In my former (large) organisation, I got quite a few corporate J2EE developers interested in standards compliant CSS web techniques, especially after they had a very bad time trying to meet the requirements from our User Interaction specialists. The advance of Ajax does the rest.

    In just another 5 years or so, even those big vendors might have “got it” (and then wonder what that “semantic web” stuff is all about ;-))

  • Joe 21:41 on February 2, 2007 Permalink
    Tags: semantic web context, Tim Berners-Lee, web service, XML   

    Making sense of tagging 

    By now almost everyone and their dog are familiar with the Web 2.0 meme and its common attributes. One of the more prominent features is tagging: assigning free-text keywords to your photos, bookmarks and everything else.
    This has many benefits, as you can generate nice tag clouds or find interesting bookmarks by tag subject.

    But there are problems as well, most prominently the fact that my tag word may mean something rather different depending on context.

    Over the past years I have been struggling with this problem, especially for tagging my photos. At first I cooked up my own solution, based on a modified version of the Exif parser jhead (with added XML output) and a sticky ball of XSL transformation scripts (never published).
    Then I switched to iPhoto. Adding tags is a real pain in iPhoto itself, but that problem is solved by the excellent Keyword Assistant. The problem, however, is still making sense of those keywords. I mean, there must at least be an option to export this metadata together with the image files, for archival (I’m rather sure the iPhoto 6 format will be forgotten in a mere 15 to 20 years from now).

    There appear to be a couple of half finished projects to export iPhoto metadata to RDF. This looks like a promising route, but for some reason these didn’t gain traction and seem to have been abandoned.

    Of course, exporting just tags does not give the definitive answer to what exactly these tags mean, especially a couple of years from now. Context matters very much: if I tag a photo with a certain keyword, this may well mean something different than the same keyword for, let’s say, a song.

    So I conceived a very nice contextual tagging system, all in my head. Working title: TagLib. This would be a service-like application, always sitting in the background (or maybe running remotely as a web service) and waiting for tagging activity. Then, whenever a tag needs to be entered, all kinds of context would be considered. For instance, the kind of subject. When tagging a photo, the tag could be associated with the media type (photo) and time. The time could be compared with events in iCalendar and – if a matching event was found – the photo and event could be coupled. RDF would be the natural choice for the data format, which then naturally extends to related data, e.g. FOAF for people’s names and Dublin Core for lots of other metadata.

    I still think that such a tagging service would make a lot of sense. Especially if it were open and available for the general public to extend, you would get a kick start in assigning meaningful keywords to whatever you want to tag.

    It would work something along these lines:

    • start tagging operation (e.g. right click, context menu)
    • tagging interface invoked with context (object type, time, previous tagging)
    • suggested tags appear with auto-completion, based on context
    • user action: inspect context of suggested tag
    • when satisfied, apply tag
    • otherwise, create a personal “fork” for your context, e.g. by referring to a name in a FOAF file, etc.

    Example: the first time you enter the tag bush, you would be offered the choice between the president of the USA and a wilderness scene. Or maybe you know someone else by the name bush, and you point the tool to the bush in your address book (facilitated through FOAF or some other mechanism).
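The suggestion step could be sketched as a ranking of candidate senses against the tagging context. A hypothetical Python illustration (the sense records, scoring rules and data are all invented; the real thing would of course draw on RDF, FOAF and iCalendar as described above):

```python
# Hypothetical sketch of context-aware tag suggestion (working title: TagLib).
def suggest(tag_text, context, known_senses):
    """Rank candidate senses of a free-text tag by how well they
    match the tagging context (media type, address book, ...)."""
    candidates = known_senses.get(tag_text.lower(), [])

    def score(sense):
        s = 0
        # e.g. a wilderness scene makes sense for a photo, less so for a song
        if context.get('media_type') in sense.get('media_types', ()):
            s += 1
        # a person sense gains weight when an address book is in context
        if context.get('address_book') and sense.get('kind') == 'person':
            s += 1
        return s

    return sorted(candidates, key=score, reverse=True)

senses = {
    'bush': [
        {'label': 'George W. Bush (person)', 'kind': 'person'},
        {'label': 'bush (wilderness scenery)', 'kind': 'scene',
         'media_types': ('photo',)},
    ],
}
print(suggest('bush', {'media_type': 'photo'}, senses)[0]['label'])
# bush (wilderness scenery)
```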

    This all is a rough concept, stuck at the thought-model level. I would have kept this all to myself, if I had not come across an article by Tim Berners-Lee: Using labels to give semantics to tags. In short: applying well-defined (semantic) labels to liberally tagged objects, in order to give them presence in the semantic web context. In Tim BL’s words: “The concept of a label as a preset set of data which is applied to things and classes of things provides an intuitive user interface for an operation which should be simple for untrained users.”

    Excellent, there’s still a way to go!

  • Joe 19:54 on December 4, 2006 Permalink
    Tags: XML   

    Microsummary Generator Wizard Extension 

    After some fiddling around with my Microsummaries Generator Extension, I’m feeling that I’m getting on the right user interaction track.

    The latest release lets you select any content block on a web page. Then, a microsummary generator is constructed, which you can either install right away, or refine in a two-step wizard process.

    Continue reading for a walkthrough with screenshots…

    (More …)

    • Jason Barnabe 22:18 on December 7, 2006 Permalink

      I’ve put together a microsummary generator repository at http://userstyles.org/livetitle/ . I’m interested in providing an interface to microsummary extensions. For example, your extension could allow the generators created by your users to be easily posted to my site. If you’re interested in collaborating, mail me!

  • Joe 10:54 on July 21, 2006 Permalink
    Tags: Greasemonkey, user interface solution, XML   

    Create Microsummaries with Greasemonkey 

    Building your own Microsummaries Generator in XML for the new Firefox 2.0 (beta 1) can be a daunting task.

    Figuring out the whole XPath string can be an annoying experience, especially counting the number of nested divs, table rows and such. To alleviate this, Greasemonkey comes to the rescue with my Microsummary Generator user script.
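For reference, a microsummary generator is a small XML document wrapping an XSLT transform. A minimal skeleton along the lines of the Firefox 2 format (the name, XPath expression and page pattern below are placeholders, not taken from the script):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<generator xmlns="http://www.mozilla.org/microsummaries/0.1"
           name="Example microsummary">
  <template>
    <transform xmlns="http://www.w3.org/1999/XSL/Transform" version="1.0">
      <output method="text"/>
      <template match="/">
        <!-- the XPath the user script computes for the clicked element -->
        <value-of select="//div[@id='statstab']"/>
      </template>
    </transform>
  </template>
  <pages>
    <include>http://example\.com/.*</include>
  </pages>
</generator>
```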

    The general idea is that you start the script on your target page. Then all individually discernible elements on the page will be highlighted as you move the mouse over them (just like, for instance, the DOM Inspector). When you click the desired element, the script generates the proper Microsummaries Generator XML document for that element.

    Determining the XPath for the Microsummary headline is currently done with these simple rules in mind:

    Calculate the full XPath location path, down to the document root element (html). This results in a full nesting of named elements, indexed by their position if not the first of the same kind. Example:


    If an element has an ID attribute, take this as the starting point. Example:


    The general idea is that an “id’ed div” bears more semantic meaning, as intended by the website architect, and as such is much more likely to survive (minor) design and markup changes.
    This, however, fails whenever an ID is generated for some other reason, like the title elements of this weblog (id=”post-31″). I have no solution for this yet, but I’m considering some user interface solution like this:

        |          |   |      |     |
        |          |   |      |     +- ( ) id='id2e7f2ab'
        |          |   |      +------- (*) id='statstab'
        |          |   +-------------- ( ) id='stats'
        |          +------------------ ( ) id='main'
        +----------------------------- ( ) use root element

    Description: every element that has an ID attribute can be selected as the root of the XPath expression. 
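The two path-building rules above can be sketched in a few lines. A hypothetical Python illustration using ElementTree (the actual user script runs against the live DOM in JavaScript; the sample markup and function names are mine):

```python
import xml.etree.ElementTree as ET

HTML = "<html><body><div id='stats'><p>one</p><p>two</p></div></body></html>"

def step(parent_map, node):
    """One location step: tag name, indexed when siblings share the tag."""
    p = parent_map.get(node)
    siblings = [c for c in p if c.tag == node.tag] if p is not None else [node]
    index = '[%d]' % (siblings.index(node) + 1) if len(siblings) > 1 else ''
    return node.tag + index

def full_xpath(root, target):
    """Rule 1: the full location path down from the document root."""
    parent_map = {c: p for p in root.iter() for c in p}
    steps, node = [], target
    while node is not None:
        steps.append(step(parent_map, node))
        node = parent_map.get(node)
    return '/' + '/'.join(reversed(steps))

def id_xpath(root, target):
    """Rule 2: start from the nearest ancestor-or-self with an ID."""
    parent_map = {c: p for p in root.iter() for c in p}
    chain, node = [], target
    while node is not None and not node.get('id'):
        chain.append(node)
        node = parent_map.get(node)
    if node is None:  # no ID anywhere: fall back to the full path
        return full_xpath(root, target)
    anchor = "//%s[@id='%s']" % (node.tag, node.get('id'))
    return '/'.join([anchor] + [step(parent_map, c) for c in reversed(chain)])

root = ET.fromstring(HTML)
second_p = root.find('.//div')[1]
print(full_xpath(root, second_p))  # /html/body/div/p[2]
print(id_xpath(root, second_p))    # //div[@id='stats']/p[2]
```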

    So the current version of the script has a few rough edges, and there are still many manual steps needed to get the resulting XML generator properly installed into Firefox. Be sure to share your thoughts for improvement!

    • Johan Sundström 18:42 on July 21, 2006 Permalink

      id=”post-31″ might not have been the ideal example of a nonsemantic id IMO, but I see your point. The sketched-out UI for picking a preferred XPath reference hook looks great; it’s just the kind of tool I’d want to have in the XPath extension, or better still in FireBug.

      A GM script to do that kind of thing would be a great start, though.

