Tagged: RDF

  • Joe 20:44 on May 25, 2009 Permalink
    Tags: RDF, seo

    Next phase for semweb take-up 

    The recent announcement from Google that they will start indexing RDFa and microformats flew mostly under the radar, but it didn’t go completely unnoticed (see the Zemanta links below).

    I personally think that this marks the start of “real world” adoption of semweb, albeit through a surrogate approach via microformats.
    Why now? Because improved representation of your content in Google is simply too big to ignore. If embedding microformatted content (or, hopefully, RDFa) gives you an advantage in Google PageRank, web site owners and SEO specialists will rapidly adopt the technology. Without the Google indexing incentive, this would never happen.

    The other side is that data quality may get diluted. Until now we have been used to working with reasonably clean and consistent collections (like DBpedia and MusicBrainz, to name a few), where data quality matters all by itself. That is radically different from entering some code just to crank up your rank in the search engines.

    Maybe a year from now we will all be busy implementing trust and reputation systems for linked data instead of spreading the word. I’m curious whether the nature of linked data makes this job any easier than it is on the unstructured web of documents.

    Update: Ivan Herman tells it all in a nutshell: RDFa, Google.

     
  • Joe 13:50 on May 6, 2008 Permalink
    Tags: developer extension, RDF, Ted Mielczarek, xpi

    Extension Developer’s Extension for FF 3.0 

    If you’re developing add-ons for Firefox, you likely know the Extension Developer’s Extension by Ted Mielczarek.

    It appears that this extension is perfectly compatible with the current Firefox 3.0 beta releases. However, as with Live HTTP Headers, the Developer’s Extension does not install out of the box (observed with Firefox 3.0 beta 5). This is because Firefox now refuses to install extensions that specify a non-secure URL for auto-updates.

    The fix is really easy:

    • download the extension
    • uncompress it (an .xpi file is just a ZIP archive)
    • edit the top-level file install.rdf: remove the em:updateURL property
    • zip the whole shebang again and name the file somefile.xpi
    • Firefox will now install the extension if you drop the file on an open window.
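
    If you do this more often, the repack can be scripted. Here is a minimal Python sketch (untested against every extension layout; the filenames are placeholders, and it assumes updateURL appears as an <em:updateURL>…</em:updateURL> element rather than as an attribute):

        import re
        import zipfile

        SRC = "extensiondev.xpi"            # original extension (an .xpi is just a ZIP archive)
        DST = "extensiondev-no-update.xpi"  # repacked copy without the update URL

        with zipfile.ZipFile(SRC) as src, \
             zipfile.ZipFile(DST, "w", zipfile.ZIP_DEFLATED) as dst:
            for item in src.infolist():
                data = src.read(item.filename)
                if item.filename == "install.rdf":
                    # strip the em:updateURL element that Firefox objects to
                    text = re.sub(r"<em:updateURL>.*?</em:updateURL>\s*", "",
                                  data.decode("utf-8"), flags=re.DOTALL)
                    data = text.encode("utf-8")
                dst.writestr(item, data)

    Drop the resulting .xpi on an open Firefox window, as above.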

    For your convenience, you can download the modified extension here: extensiondev-030-no-update. But you really shouldn’t: download the original and make the modification yourself instead!

    @Marc K, did you see this article: http://kb.mozillazine.org/Extensions.checkUpdateSecurity

    My guess is that this property is not available at all by default, so you need to add the property yourself in the about:config screen.
    To do so, right-click anywhere in the preference list, select “New” => “Boolean” and paste in “extensions.checkUpdateSecurity”. Set its value to false and you should be there.
    I haven’t tried this myself…
     
    • Marc K 11:11 on July 3, 2008 Permalink

      What’s annoying me is that I read on a blog that you can set extensions.checkUpdateSecurity to “false” in the pre-release versions of FF3, but I can’t find that property anywhere in my about:config for the final version released in June… Just as I’m starting to learn about that crazy XUL soup, and I was referred to the Extensions Developers’ extension by Mozilla’s own tutorial.

  • Joe 11:26 on April 4, 2008 Permalink
    Tags: artificial intelligence, basic semweb technologies, central blog site, data smarter vs make software smarter, internet ventures, Leah Culver, netxweb, Nova Spivack, online, pure mathematical algorithms, radar networks, RDF, real estate brokers, social networks, software builds, twine, twones, Twones.com, Web approach

    The NextWeb 2008 (day one) 

    Although last year’s NextWeb conference had good coverage in the blogosphere, this year everything around the yearly event has been professionalized. One of these improvements is nextweb.org, which has become a central blog site where professional bloggers keep up with the developments around new internet ventures.

    You can read about all noteworthy and sometimes even anecdotal events there, so I will limit myself to some personal observations here.

    Noteworthy was the first keynote by Adeo Ressi, “Get Funding for Your Dream”. According to him, now is the best time ever to start a new venture. But at the same time, there are many dangers lurking in VC funding that you should be aware of.

    One of the most central statements: you are strictly on your own when reviewing the contract terms as it comes to closing a deal. Your legal advisor will be honest with you up to the point where you sign a contract with them, as they have just one incentive left afterwards: close the deal and collect the percentage of the value negotiated earlier. Every delay is just a waste of time for them, so forget about honest advice on VC terms.

    This reminds me of the peculiar situation we have with real estate brokers and financial advisers over here: these people all work for a percentage of the deal, so nobody is on your side when it comes to choosing the truly best option, let alone a careful review of the terms.

    The rest of the talk was about what to expect when going through the motions: choosing investors, preparing your references (they will be interviewed, even the unlikely ones, and should always be unconditionally positive about you) and, indeed, telling bad terms from acceptable ones.

    Interesting – and enlightening: it looks like we are doing pretty well with our own startup Twones.com.

    The keynote by Leah Culver of Pownce was charming and, most of all, gave insight into the networking aspects of starting an online business. Her suggestion to talk more about the how and why of OAuth was not taken up by the audience. A pity; I would have liked a quick introduction to this emerging standard as an alternative to all the proprietary solutions of all those social networks.

    Nova Spivack of Twine held the keynote I was looking forward to the most. This time, surprisingly, the audience chose an introduction to the semantic web rather than a presentation about Twine.

    And the presentation was well done. There were no new or surprising elements for those who follow Nova Spivack’s blog (his “CEO blog” at Radar Networks), but I am sure that many people in the audience will have “got it”. From personal experience I know how difficult it is to explain the relevance of the highly abstract and often complex elements of the semantic web.

    What I liked was the perspective in which Nova places the semweb:

    Tagging approach
    pro: easy to do
    con: easy to do (inconsistency, no “meaning”)

    Statistical approach (Google)
    pro: pure mathematical algorithms
    con: no understanding of the content

    Linguistic approach
    pro: true language understanding
    con: computationally intensive, scales badly, one domain at a time

    Semantic Web approach (Radar Networks, DBpedia, Metaweb, Talis)
    pro: more precise queries (metadata)
    con: lack of tools; who creates the metadata?

    Artificial Intelligence approach (Cycorp)
    pro: this is the holy grail!
    con: never finished and always outdated (the holy grail)

    Now, the Semantic Web approach sits in the middle: the software needs some improvement, and you need metadata.
    But the advantages add up to a network effect: if I enhance my data, I get the benefit in return that my data can now be linked automatically in all kinds of related contexts, especially those I could never have imagined myself.
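
    To make that network effect concrete, here is a minimal sketch (assuming the Python rdflib package; the local URI is made up for illustration) of enhancing one record with an owl:sameAs link to DBpedia, after which generic linked-data clients can join it with everything else published about that resource:

        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import OWL, RDFS

        g = Graph()
        me = URIRef("http://example.org/bands/radiohead")  # hypothetical local identifier

        # my own, locally maintained data
        g.add((me, RDFS.label, Literal("Radiohead")))

        # the "enhancement": declare identity with a well-known linked-data URI,
        # so statements about the DBpedia resource now apply to mine as well
        g.add((me, OWL.sameAs, URIRef("http://dbpedia.org/resource/Radiohead")))

        print(g.serialize(format="turtle"))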

    And this is taking off at increasing speed; see the updated graph of open, linked data on the web.

    [Image: The Growing Linked Data Universe]
    Characteristics of the semantic web approach:

    • Make data smarter vs make software smarter
    • Metadata vs AI & linguistics
    • Open data enables network effects

    Approaches:

    • Bottom up (you need to learn RDF and such) – this is not going to happen (note: basic semweb technologies have existed since around 2000).
    • Top down: the software builds all the RDF and OWL and stuff for you. Not surprisingly, this is what Twine aims at.

    Some notes on the practical side. Nova dislikes the term Semantic Web as being too vague; “Web of Data” would be more appropriate. And, an old theme by now, he adapts the popular but heavily overloaded term “web 2.0” to mean “the second decade of the web”, and so web 3.0 becomes the third decade, roughly 2010 – 2020. So we have a timeline. And right now the early adopters are emerging; the first killer apps will be launched roughly within the next two years.

    Finally, a critical note on business models: how do I protect my business if all data has to be open and free?

    The bottom line is that every entrepreneur needs to decide for themselves, but in the long run people will move away from closed environments where they only put effort in, without being able to get the value of their own data back, let alone benefit from the network effect.

    Again, this is an area where Twones will shine: our business model scales along with the network effect; the more open and the more shared each user’s data is, the more value everyone will get out of it.

    Oh, and I got my private Twine invite (looks good, many thanks Nova!).

    Got curious about Twones?

    We will launch an invitation-only beta at the end of the month; you can register for the beta waiting list at http://www.twones.com

     
  • Joe 15:35 on October 18, 2007 Permalink
    Tags: RDF, real world applications, screen scraping, semantic web objects, simplest imaginable solution, web browsers

    From Microformats to RDF 

    In response to Microformats vs. RDF: How Microformats Relate to the Semantic Web.

    Indeed, microformats are not an alternative to RDF, not even a “poor man’s version”. But that was never a design goal. What’s more: microformats are not first-class semantic web objects in any way either. Rather, they are the simplest imaginable solution for semantically correct markup, limited to the most common data formats out there.

    To rephrase the microformats charter: they want to be the common man’s solution, aimed at the well-intentioned webmaster crowd. As such, microformats can be hugely successful (analogous to the “html as tag soup” success story). Fine.

    Next, as we end up having millions of valid hCard, hReview and what-not items on the web, there is GRDDL to instantly promote all this content to full-fledged RDF.

    The good news is that all components are currently available – many microformats are auto-generated from well-designed CMS templates – and GRDDL has been a Proposed Recommendation since 6 July 2007.
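
    In lieu of a full GRDDL pipeline (which would apply an XSLT transform), here is a crude scraping sketch of that promotion step, using only the Python standard library; the hCard snippet and the short property list are invented for illustration:

        from html.parser import HTMLParser

        # an invented hCard snippet, the kind a CMS template would emit
        HTML = '<div class="vcard"><a class="fn url" href="http://example.org/">Joe Example</a></div>'

        class HCardParser(HTMLParser):
            """Collect hCard property values into crude (subject, property, value) triples."""
            def __init__(self):
                super().__init__()
                self.pending = []   # property names waiting for text content
                self.triples = []

            def handle_starttag(self, tag, attrs):
                attrs = dict(attrs)
                classes = attrs.get("class", "").split()
                self.pending = [c for c in classes if c in ("fn", "url", "email")]
                if "url" in self.pending and "href" in attrs:
                    self.triples.append(("_:card", "vcard:url", attrs["href"]))
                    self.pending.remove("url")

            def handle_data(self, data):
                for prop in self.pending:
                    self.triples.append(("_:card", "vcard:" + prop, data))
                self.pending = []

        p = HCardParser()
        p.feed(HTML)
        for triple in p.triples:
            print(triple)   # e.g. ('_:card', 'vcard:fn', 'Joe Example')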

    What we’re waiting for is a business need to discover, transform and aggregate all of this data. I would be surprised if nobody were working on this right now. Google, or a Google killer?

    Bottom line: the semantic web has been lacking real-world content (notwithstanding DBpedia, Freebase and such) and real-world applications for the common man for too long. Microformats can and will have a place in advocacy for this large target audience: people who grasp html and basic data constructs, but who are not interested in graph theory.

    This audience will only jump on the bandwagon if they can instantly understand the intent from a view-source inspection. Compare the success of RSS 2.0 over the semantically superior (but more complex, RDF-based) RSS 1.0 version.

    In the end it will just not matter: most content will be “good enough” to be useful for the semweb (through GRDDL transformations and screen scraping), just like today’s html is good enough to be rendered, in some way, in our web browsers. By that time we will have a load of other problems, like semantic spam and the need for provenance tracking and trust levels for semantic information. But that is another story…

    Update: Semantic Report writes about Using Microformats to Get Started with the Semantic Web. So, there then!


     
  • Joe 08:12 on May 31, 2007 Permalink
    Tags: Cory Doctorow, data web, Henry Story, RDF, semantic web tools

    Context as Metadata 

    [Image: Context – (c) Jeremy Noble]

    More than a year ago, Henry Story blogged about Keeping track of Context in Life and on the Web. It is about the context of the story you’re telling: essential background information for the general audience and, at the same time, distracting bloat for the initiated.

    The conclusion is that, using a semantic web approach, you could provide links to as many contextual facts as you like, without needing to expose them directly to the observing end user. Just use those links for queries and matching algorithms wherever appropriate.

    In other words: don’t bug me with redundant metadata if I don’t need it. This might be even more true for content creation: just read Cory Doctorow’s Metacrap article again and you know why.

    Years ago, almost immediately after I bought my first digital photo camera, I started to realize why metadata is important. In a few words: taking pictures is easy, storage space is cheap, and deleting images is a pain, because you need to carefully compare and make sure you pick the best one. So hundreds, soon thousands, of images started to pile up as unimaginatively named blobs like “IMG_1123.JPG”. Essentially, these images get lost like the proverbial needle in a haystack.

    Now you could put all those images in folders, labeled after an event, date, person or whatever. But this is a tedious job and only provides a very flat view (you don’t even want to think about creating nested or linked structures on your file system).

    I soon found out that every digicam image has embedded EXIF metadata, which proved to be of huge value for tracing back those lost images. If I know that a shot was made during some event, I only need to look up the event’s date and browse all images shot during that period.
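
    As an illustration, a minimal sketch of that lookup (assuming the Pillow imaging library; the Photos folder name is a placeholder), grouping photos by the EXIF date they were taken:

        from pathlib import Path
        from PIL import Image
        from PIL.ExifTags import TAGS

        def taken_on(path):
            """Return the EXIF DateTime string of an image, or None."""
            exif = Image.open(path).getexif()
            for tag_id, value in exif.items():
                if TAGS.get(tag_id) == "DateTime":
                    return value
            return None

        # group the anonymous blobs in a hypothetical Photos folder by shooting date
        by_date = {}
        for jpg in Path("Photos").glob("*.JPG"):
            date = (taken_on(jpg) or "unknown").split(" ")[0]  # "YYYY:MM:DD HH:MM:SS"
            by_date.setdefault(date, []).append(jpg.name)

        for date, names in sorted(by_date.items()):
            print(date, names)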

    Then iPhoto came around, with the possibility to add tags (with a terrible interface; use Keyword Assistant instead!), ratings and multiple album folders, providing even more metadata and control to find your images later.

    There’s just one problem left: entering and assigning all that metadata by hand is still a lot of work if you have hundreds of images to go. Errors are quickly made and hard to detect when you’re focused on other things, such as composition and image quality. (More …)

     
  • Joe 08:38 on May 8, 2007 Permalink
    Tags: Adding Fleck, Alex King, cluttering, gif, RDF

    Adding Fleck to ShareThis plugin 

    You may have noticed that I just added one of those immensely popular social bookmark sharing plugins to this blog. It is called ShareThis, developed by Alex King. I especially love the stylish, RDF-like Share This icon.

    I felt the ShareThis functionality overlaps with the Fleck plugin to a great extent, so instead of having both of them clutter every blog post, I just added Fleck to the ShareThis set.
    (More …)

     
    • Henri van den Hoof 16:06 on May 18, 2007 Permalink

      Looks good. I wrote a plugin myself and included Fleck in it as well, but I think this Share This one is very well done too. Maybe I’ll have a look at whether I can use and customize it myself 🙂

