GRDDL

The details of data in documents; GRDDL, profiles, and HTML5

Submitted by connolly on Fri, 2008-08-22 14:09.
GRDDL, a mechanism for putting RDF data in XML/XHTML documents, is specified mostly at the XPath data model level. Some GRDDL software goes beyond XML and supports HTML as she is spoke, aka tag soup. HTML 5 is intended to standardize the connection between tag soup and XPath. The tidy use case for GRDDL anticipates that using HTML 5 concrete syntax rather than XHTML 1.x concrete syntax involves no changes at the XPath level.

But in GRDDL and HTML5, Ian Hickson, editor of HTML 5, advocates dropping the profile attribute of the HTML head element in favor of rel="profile" or some such. I dropped by the #microformats channel to think out loud about this stuff, and Tantek said similarly, "we may solve this with rel="profile" anyway." The rel-profile topic in the microformats wiki shows the idea goes pretty far back.
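For concreteness, here is a rough sketch (my own guess, not anything from the GRDDL or HTML 5 specs) of what supporting both conventions could look like in a GRDDL client, using lxml. The XPath details are assumptions: a real implementation would tokenize @rel rather than string-match it, and would cope with tag-soup input too.

import lxml.etree as etree

XHTMLNS = {"h": "http://www.w3.org/1999/xhtml"}

def profile_uris(doc):
    """Yield profile URIs from an XHTML document tree.

    Checks head/@profile (HTML 4 / XHTML 1.x) and link[@rel='profile']
    (the proposed HTML 5 replacement).
    """
    for head in doc.xpath("//h:head[@profile]", namespaces=XHTMLNS):
        for uri in head.get("profile").split():  # @profile is space-separated
            yield uri
    for link in doc.xpath("//h:link[@rel='profile'][@href]",
                          namespaces=XHTMLNS):
        yield link.get("href")

doc = etree.parse("some-page.xhtml")  # placeholder file name
for uri in profile_uris(doc):
    print(uri)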

Possibilities I see include:
  • GRDDL implementors add support for rel="profile" along with HTML 5 concrete syntax
  • GRDDL implementors don't change their code, so people who want to use GRDDL with HTML 5 features such as <video> stick to XML-wf-happy HTML 5 syntax and they use the head/@profile attribute anyway, despite what the HTML 5 spec says.
  • People who want to use GRDDL stick to XHTML 1.x.
  • People who want to put data in their HTML documents use RDFa.

I don't particularly care for the rel="profile" design, but one should choose one's battles and I'm not inclined to choose this one. I'm content for the market to choose.

 

sidekick calendar subscription for SXSW

Submitted by connolly on Sat, 2008-03-08 12:57.

At a conference, like in a good coding session, it's too easy to lose track of time, so I rely heavily on a PDA to remind me of appointments. The SXSW program has just the features I want:

  • an "add this to my calendar" button next to each session
  • a calendar feed of my choices

But I carry a hiptop, which doesn't support calendar subscription. I could copy-and-paste a few critical sessions to my hiptop, but when the climbing geeks offer an hCalendar feed, it becomes worthwhile to use iCal on the laptop, i.e. something that groks calendar subscription, as the master calendar device.

I have had a system for exporting my mobile calendar as a feed, but it's a tedious 4 step shell command sequence; it's OK once or twice a week, but here at SXSW, I want to sync up several times a day.

I have been moving my palmagent project from shell commands and Makefiles to a RESTful Web service, and this pushed me over the edge to add calendar feed support.

As usual, to pull the data from the hiptop's data servers:

  1. Make a directory to hold hiptop accounts and put it in hip_config.py:
    AccountsDir = "/Users/connolly/Desktop/danger-accts"
  2. Start hipwsgi.py running:
    pbjam:~/projects/palmagent$ python hipwsgi.py &
    Serving HTTP on 0.0.0.0 port 8080 ...
  3. Use dangerSync.py to log in and get some session credentials for half an hour of use:
    ~/Desktop/danger-accts/ACCT $ python ~/projects/palmagent/dangerSync.py \
    --prod --user ACCT \
    --passwd YOUR_PASSWORD_HERE \
    >session-id
  4. Visit http://0.0.0.0:8080/pim/ACCT and hit the Pull button.

Now you have event, task, contact, and note directories containing a JSON file for each record, and hipwsgi.py lets you navigate them in a few different ways.
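For illustration, here is a guess at the shape of one of those event records, rendered as a Python literal. The start_time, end_time, and timezone field names come from the XMLRPC interface (more on that in a later entry below); the values and the other keys are invented:

# hypothetical contents of event/someid.json; only start_time, end_time,
# and timezone are field names documented elsewhere in these notes
event_record = {
    "id": "event123",                      # invented
    "summary": "SXSW session",             # invented
    "start_time": "2008-03-08T15:00:00Z",  # Z/UTC time
    "end_time": "2008-03-08T16:00:00Z",    # Z/UTC time
    "timezone": "America/Chicago",         # separate timezone field
}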

The pull feature is incremental; it grabs just the records that have changed since you previously pulled:

[screenshot: the "Pull … from danger hiptop service" page, with a link back to sync options and a link to the event category]

The new feature today is the ical export, linked from the event categories page:

[screenshot: the event categories page, with its ical export link and a "back to sync options" link]

You can copy the address of that ical export link and subscribe to it from iCal, and bingo, there it is, merged with the SXSW calendar and such.

@@screenshot pending 

 

I can only imagine...

Submitted by connolly on Sun, 2007-12-09 09:30.

I have a new bookmark. No, not a del.icio.us bookmark; not some bits in a file. This is the kind you have to go there to get... go to Cleveland, that is. It reads:

Thank you
for you love & support
for the Ikpia & Ogbuji families
At this time of real need.
We will never forget
Imose, Chica, & Anya

Abundant Life International Church
Highland Heights, OH

After working with Chime for a year or so on the GRDDL Working Group (he was the difference between a hodge-podge of test files and a nicely organized GRDDL Test Cases technical report), I was really excited to meet him at the W3C Technical Plenary in Cambridge in early November. His Fuxi work is one of the best implementations of the way I think semantic web rules and proofs should go. When he told me some people didn't see the practical applications, it made me want to fly there and tell them how I think this will save lives and end world hunger.

So this past Tuesday, when I read the news about his family, the only way I could make my peace with it was to go and be with him. I can only imagine what he is going through. Eric Miller and Brian and David drove me to the funeral, but the line to say hi to the family was too long. And the interment service didn't really provide an opportunity to talk. So I was really glad that after I filled my plate at the reception, a seat across from Chime and Roschelle opened up for me and I got to sit and share a meal with them.

Grandpa Linus was at the table, too. His eulogy earlier at the funeral ended with the most inspiring spoken rendition of a song that I have ever heard:

Now The Hacienda's Dark The Town Is Sleeping
Now The Time Has Come To Part The Time For Weeping
Vaya Con Dios My Darling
Vaya Con Dios My Love

His eulogy is also posted as Grandparents' lament on The Kingdom Kids web site, along with details about a fund to help the family get back on their feet.

Soccer schedules, flight itineraries, timezones, and python web frameworks

Submitted by connolly on Wed, 2007-09-12 17:17.

The schedule for this fall soccer season came out August 11th. I got the itinerary for the trip I'm about to take on July 26. But I just now got them synchronized with the family calendar.

The soccer league publishes the schedule in somewhat reasonable HTML; to get that into my sidekick, I have a Makefile that does these steps:

  1. Use tidy to make the markup well-formed.
  2. Use 100 lines of XSLT (soccer-schedfix.xsl) to add hCalendar markup.
  3. Use glean-hcal.xsl to get RDF calendar data.
  4. Use hipAgent.py to upload the calendar items via XMLRPC to the danger/t-mobile service, which magically updates the sidekick device.
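Those four steps, redone as a small Python sketch in case the pipeline is easier to read this way. The file names are placeholders, and note that tidy exits with status 1 when it merely issues warnings:

import subprocess

def run(cmd, ok_status=(0,)):
    """Run a command, tolerating the listed exit statuses."""
    status = subprocess.call(cmd)
    if status not in ok_status:
        raise RuntimeError("%r exited with status %d" % (cmd, status))

# 1. make the markup well-formed (tidy returns 1 on warnings, 2 on errors)
run(["tidy", "-asxhtml", "-o", "sched-wf.html", "schedule.html"],
    ok_status=(0, 1))
# 2. add hCalendar markup
run(["xsltproc", "--output", "sched-hcal.html",
     "soccer-schedfix.xsl", "sched-wf.html"])
# 3. extract RDF calendar data
run(["xsltproc", "--output", "sched.rdf",
     "glean-hcal.xsl", "sched-hcal.html"])
# 4. the hipAgent.py upload needs account credentials, so it's omitted here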

But oops! The timezones come out wrong. Ugh... manually fix the times of 12 soccer games... better than manually keying in all the data... then sync with the family calendar. My usual calendar sync Makefile does the following:

  1. Use dangerSync.py to download the calendar and task data via XMLRPC.
  2. Use hipsrv.py to filter by category=family, convert from danger/sidekick/hiptop conventions to iCalendar standard conventions, and pour the records into a kid template to produce RDF Calendar (and hCalendar).
  3. Use toIcal.py to convert RDF Calendar to .ics format.
  4. Upload to family WebDAV server using curl.

Then check the results on my mac to make sure that when my wife refreshes her iCal subscriptions it will look right.

Oh no! The timezones are wrong again!

The sidekick has no visible support for timezones, but the start_time and end_time fields in the XMLRPC interface are in Z/UTC time, and there's a timezone field. However, after years with this device, I'm still mystified about how it works. The Makefiles approach is not conducive to tinkering at this level, so I worked on my REST interface, hipwsgi.py until it had crude support for editing records (using JSON syntax in a form field). What I discovered is that once you post an event record with a mixed up timezone, there's no way to fix it. When you use the device UI to change the start time, it looks OK, but the Z time via XMLRPC is then wrong.

So I deleted all the soccer game records, carefully factored the danger/iCalendar conversion code out of hipAgent.py into calitems.py for ease of testing, and got it working for local Chicago-time events.

Then I went through the whole story again with my itinerary. Just replace tidy and soccer-schedfix.xsl with flightCal.py to get the itinerary from SABRE's text format to hCalendar:

  1. Upload itinerary to the sidekick.
  2. Manually fix the times.
  3. Sync with iCal. Bzzt. Off by several hours.
  4. Delete the flights from the sidekick.
  5. Work on calitems.py some more.
  6. Upload to the sidekick again. Ignore the sidekick display, which is right for the parts of the itinerary in Chicago, but wrong for the others.
  7. Sync with iCal. Win!

I suppose I'm resigned to the fact that the only way to get the XMLRPC POST/upload right (the stored Z times, at least, if not the display) is to know what timezone the device is set to when the POST occurs. Sigh.
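The crux of what calitems.py has to get right, as a minimal sketch (using the modern zoneinfo module, which the actual code predates): interpret the wall-clock time in whatever zone the device is set to, then convert to the Z/UTC times the XMLRPC interface stores.

from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; pytz offers the same idea

def to_utc(wallclock, device_tz):
    """Interpret a naive local datetime in the device's zone; return UTC.

    >>> to_utc(datetime(2007, 9, 15, 9, 0), "America/Chicago").isoformat()
    '2007-09-15T14:00:00+00:00'
    """
    return wallclock.replace(tzinfo=ZoneInfo(device_tz)).astimezone(
        ZoneInfo("UTC"))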

A March 2005 review corroborates my findings:

The Sidekick and the sync software do not seem to be aware of time zones. That means that your PC and your Sidekick have to be configured for the same time zone when they synchronize, else your appointments will be all wrong.

 

hipwsgi.py is about my 5th iteration on this idea of a web server interface to my PDA data. It uses WSGI and JSON and Genshi, following Joe G's stuff. Previous iterations include:

  1. pdkb.pl - quick n dirty perl hack (started April 2001)
  2. hipAgent.py - screen scraping (Dec 2002)
  3. dangerSync.py - XMLRPC with a python shelf and hardcoded RDF/XML output (Feb 2004)
  4. hipsrv.py - conversion logic in python with kid templates and SPARQL-like filters over JSON-shaped data (March 2006)
It's pretty raw right now, but fleshing out the details looks like fun. Wish me luck.

Linked Data at WWW2007: GRDDL, SPARQL, and Wikipedia, oh my!

Submitted by connolly on Thu, 2007-05-17 16:29.

Last Tuesday, TimBL started to gripe that the WWW2007 program had lots of stuff that he wanted to see all at the same time; we both realized pretty soon: that's a sign of a great conference.

That afternoon, Harry Halpin and I gave a GRDDL tutorial. Deploying Web-scale Mash-ups by Linking Microformats and the Semantic Web is the title Harry came up with... I was hesitant to be that sensationalist when we first started putting it together, but I think it actually lived up to the billing. It's too bad last-minute complications prevented Murray Maloney from being there to enjoy it with us.

For one thing, GRDDL implementations are springing up all over. I donated my list to the community as the GrddlImplementations wiki topic, and when I came back after the GRDDL spec went to Candidate Recommendation on May 2, several more had sprung up.

What's exciting about these new implementations is that they go beyond the basic "here's some RDF data from one web page" mechanism. They're integrated with RDF map/timeline browsers, and SPARQL engines, and so on.

The example from the GRDDL section of the semantic web client library docs (by Chris Bizer, Tobias Gauß, and Richard Cyganiak) is just "tell me about events on Dan's travel schedule" but that's just the tip of the iceberg: they have implemented the whole LinkedData algorithm (see the SWUI06 paper for details).

With all this great new stuff popping up all over, I felt I should include it in our tutorial materials. I'm not sure how long OpenLink Virtuoso has had GRDDL support (along with database integration, WEBDAV, RSS, Bugzilla support, and on and on), but it was news to me. But I also had to work through some bugs in the details of the GRDDL primer examples with Harry (not to mention dealing with some unexpected input on the HTML 5 decision). So the preparation involved some late nights...

I totally forgot to include the fact that Chime got the Semantic Technologies conference web site using microformats+GRDDL, and Edd did likewise with XTech.

But the questions from the audience showed they were really following along. I was a little worried when they didn't ask any questions about the recursive part of GRDDL; when I prompted them, they said they got it. I guess verbal explanations work; I'm still struggling to find an effective way to explain it in the spec. Harry followed up with some people in the halls about the spreadsheet example; as mnot said, Excel spreadsheets contain the bulk of the data in the enterprise.

One person was even following along closely enough to help me realize that the slide on monotonicity/partial understanding uses a really bad example.

The official LinkedData session was on Friday, but it spilled over to a few impromptu gatherings; on Wednesday evening, TimBL was browsing around with the tabulator, and he asked for some URIs from the audience, and in no time, we were browsing proteins and diseases, thanks to somebody who had re-packaged some LSID-based stuff as HTTP+RDF linked data.

Giovanni Tummarello showed a pretty cool back-link service for the Semantic Web. It included support for finding SPARQL endpoints relevant to various properties and classes, a contribution to the serviceDescription issue that the RDF Data Access Working Group postponed. I think I've seen a few other related ideas here and there; I'll try to put them in the ServiceDescription wiki topic when I remember the details...

Chris Bizer showed that dbpedia is the catalyst for an impressive federation of linked data. Back in March 2006, Toward Semantic Web data from Wikipedia was my wish into the web, and it's now granted. All those wikipedia infoboxes are now out there for SPARQLing. And other groups are hooking up musicbrainz and wordnet and so on. After such a long wait, it seems to be happening so fast! 
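Just to make "out there for SPARQLing" concrete, here is a quick sketch of querying the dbpedia endpoint with nothing but the Python standard library; the resource and query are arbitrary examples:

import json
import urllib.parse
import urllib.request

query = """
SELECT ?p ?o WHERE {
  <http://dbpedia.org/resource/Edinburgh> ?p ?o .
} LIMIT 10
"""
url = "http://dbpedia.org/sparql?" + urllib.parse.urlencode(
    {"query": query, "format": "application/sparql-results+json"})
with urllib.request.urlopen(url) as resp:
    results = json.load(resp)
for row in results["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])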

Speaking of fast, the Semantic MediaWiki project itself is starting to do performance testing with a full copy of wikipedia, Denny told us on Friday afternoon in the DevTrack.

Also speaking of fast, how did OpenLink go from not-on-my-radar to supporting every Semantic Web Technology I have ever heard of in about a year? I got part of the story in the halls... it started with ODBC drivers about a decade ago, which explains why their database integration is so good. Kingsley, here's hoping we get to play volleyball sometime. It's a shame we had just a few short moments together in the halls...

tags: (photos), grddl, www2007, travel

A design for web content labels built from GRDDL and rules

Submitted by connolly on Thu, 2007-01-25 13:35.

In #swig discussion, Tim mentioned he did some writing on labels and rules and OWL, which prompted me to flesh out some related ideas I had. The result is a Makefile and four tests with example labels. One of them is:

All resources on example.com are accessible for all users and meet WAI AA guidelines except those on visual.example.com which are not suitable for users with impaired vision.

I picked an XML syntax out of the air and wrote visaa.lbl:

<label
  xmlns="http://www.w3.org/2007/01/lbl22/label"
  xmlns:mobilebp="http://www.w3.org/2007/01/lbl22/mobilebp@@#"
  xmlns:wai="http://www.w3.org/2007/01/lbl22/wai@@#"
  >
  <scope>
    <domain>example.com</domain>
    <except>
      <domain>visual.example.com</domain>
    </except>
  </scope>
  <audience>
    <wai:AAuser />
  </audience>
</label>

And then in testdata.ttl we have:

<http://example.com/pg1simple> a webarch:InformationResource.
<http://visual.example.com/pg2needsVision> a webarch:InformationResource.
:charlene a wai:AAuser.

Then we run the test thusly...

$ make visaa_test.ttl
xsltproc --output visaa.rdf label2rdf.xsl visaa.lbl
python ../../../2000/10/swap/cwm.py visaa.rdf lblrules.n3 owlAx.n3 testdata.ttl \
    --think --filter=findlabels.n3 --n3 >visaa_test.ttl

and indeed, it concludes:

    <http://example.com/pg1simple>     lt:suitableFor :charlene .

but doesn't conclude that pg2needsVision is OK for charlene.

The .lbl syntax is RDF data via GRDDL and label2rdf.xsl. Then owlAx.n3 contains rules that derive from the RDFS and OWL specs, i.e. stuff that's already standard. As Tim wrote, "A label is a fairly direct use of OWL restrictions. This is very much the sort of thing OWL is designed for." Only the lblrules.n3 bit goes beyond what's standardized, and it's written in the N3 Rules subset of N3, which, assuming a few built-ins, maps pretty neatly to recent RIF designs.

A recent item from Bijan notes a SPARQL-rules design by Axel; I wonder if these rules fit in that design too. I hope to take a look soonish.

A new Basketball season brings a new episode in the personal information disaster

Submitted by connolly on Thu, 2006-11-16 12:39.

Basketball season is here. Time to copy my son's schedule to my PDA. The organization that runs the league has their schedules online (yay!) in HTML (yay!). But with events separated by <br>s rather than enclosed in elements (whimper). Even after running it thru tidy, it looks like:

<br />
<b>Event Date:</b> Wednesday, 11/15/2006<br />
<b>Start Time:</b> 8:15<br />
<b>End Time:</b> 9:30<br />
...
<br />
<b>Event Date:</b> Wednesday, 11/8/2006<br />
<b>Start Time:</b> 8:15<br />

So much for XSLT. Time for a nasty perl hack.

Or maybe not. Between my "no more undocumented, untested code" new year's resolution and the maturity of the python libraries, my usual doctest-driven development worked fine; I was able to generate JSON-shaped structures without hitting that "oh screw it; I'll just use perl" point. The gist of the code is:

import re
import sys

import kid  # the kid templating package

def main(argv):
    dataf, tplf = argv[1], argv[2]
    tpl = kid.Template(file=tplf)
    tpl.events = eachEvent(file(dataf))

    for s in tpl.generate(output='xml', encoding='utf-8'):
        sys.stdout.write(s)

def eachEvent(lines):
    """turn an iterator over lines into an iterator over events
    """
    e = None
    for l in lines:
        if 'Last Name' in l:
            surname = findName(l)
            e = mkobj("practice", "Practice w/%s" % surname)
        elif 'Event Date' in l:
            if 'dtstart' in e:
                yield e
                e = mkobj("practice", "Practice w/%s" % surname)
            e['date'] = findDate(l)
        elif 'Start Time' in l:
            e['dtstart'] = e['date'] + "T" + findTime(l)
        elif 'End Time' in l:
            e['dtend'] = e['date'] + "T" + findTime(l)
    if e is not None and 'dtstart' in e:
        yield e  # don't drop the final event

next = 0  # note: shadows the next() builtin in today's Python
def mkobj(pfx, summary):
    global next
    next += 1
    return {'id': "%s%d" % (pfx, next),
            'summary': summary,
            }

def findTime(s):
    """
    >>> findTime("<b>Start Time:</b> 8:15<br />")
    '20:15:00'
    >>> findTime("<b>End Time:</b> 9:30<br />")
    '21:30:00'
    """
    m = re.search(r"(\d+):(\d+)", s)
    hh, mm = int(m.group(1)), int(m.group(2))
    return "%02d:%02d:00" % (hh + 12, mm)  # game times are all PM, per the doctests

...

It uses my palmagent hackery: event-rdf.kid produces RDF/XML, which hipAgent.py can upload to my PDA. I also used the event.kid template to generate an hCalendar/XHTML version for archival purposes, though I didn't use that directly to feed my PDA.

The development took half an hour or so squeezed into this morning:

changeset:   5:7d455f25b0cc
user:        Dan Connolly http://www.w3.org/People/Connolly/
date:        Thu Nov 16 11:31:07 2006 -0600
summary:     id, seconds in time, etc.

changeset:   2:2b38765cec0f
user:        Dan Connolly http://www.w3.org/People/Connolly/
date:        Thu Nov 16 09:23:15 2006 -0600
summary:     finds date, dtstart, dtend, and location of each event

changeset:   1:e208314f21b2
user:        Dan Connolly http://www.w3.org/People/Connolly/
date:        Thu Nov 16 09:08:01 2006 -0600
summary:     finds dates of each event

Talking with U.T. Austin students about the Microformats, Drug Discovery, the Tabulator, and the Semantic Web

Submitted by connolly on Sat, 2006-09-16 21:36.

Working with the MIT tabulator students has been such a blast that while I was at U.T. Austin for the research library symposium, I thought I would try to recruit some undergrads there to get into it. Bob Boyer invited me to speak to his PHL313K class on why the heck they should learn logic, and Alan Cline invited me to the Dean's Scholars lunch, which I used to attend when I was at U.T.

To motivate logic in the PHL313K class, I started with their experience with HTML and blogging and explained how the Semantic Web extends the web by looking at links as logical propositions. I used my XML 2005 slides to talk a little bit about web history and web architecture, and then I moved into using hCalendar (and GRDDL, though I left that largely implicit) to address the personal information disaster. This was the first week or so of class; they had just started learning propositional logic, and hadn't even gotten as far as predicate calculus, where atomic formulas like those in RDF show up. And none of them had heard of microformats. I promised not to talk for the full hour but then lost track of time and didn't get to the punch line, "so the computer tells you that no, you can't go to both the conference and Mom's birthday party because you can't be in two places at once," until it was time for them to head off to their next class.

One student did stay after to pose a question that is very interesting and important, if only tangentially related to the Semantic Web: with technology advancing so fast, how do you maintain balance in life?

While Boyer said that talk went well, I think I didn't do a very good job of connecting with them; or maybe they just weren't really awake; it was an 8am class after all. At the Dean's Scholars lunch, on the other hand, the students were talking to each other so loudly as they grabbed their sandwiches that Cline had to really work to get the floor to introduce me as a "local boy done good." They responded with a rousing ovation.

Elaine Rich had provided the vital clue for connecting with this audience earlier in the week. She does AI research and had seen TimBL's AAAI talk. While she didn't exactly give the talk four stars overall, she did get enough out of it to realize it would make an interesting application to add to a book that she's writing, where she's trying to give practical examples that motivate automata theory. So after I took a look at what she had written about URIs and RDF and OWL and such, she reminded me that not all the Deans Scholars are studying computer science; but many of them do biology, and I might do well to present the Semantic Web more from the perspective of that user community.

So I used TimBL's Bio-IT slides. They weren't shy when I went too fast with terms like hypertext, and there were a lot of furrowed brows for a while. But when I got to the drug discovery diagram, the one showing that FOAF, OMM, UMLS, SNP, Uniprot, BioPax, and Patents all have some overlap with drug target ontology, I said I didn't even know some of these words and asked them which ones they knew. After a chuckle about "drug", one of them explained about SNP, i.e. single nucleotide polymorphism, and another told me about OMM, and the discussion really got going. I didn't make much more use of Tim's slides. One great question about integrating data about one place from lots of sources prompted me to tempt the demo gods and try the tabulator. The demo gods were not entirely kind; perhaps I should have used the released version rather than the development version. But I think I did give them a feel for it. In answer to "so what is it you're trying to do, exactly?" I gave a two-part answer:

  1. Recruit some of them to work on the tabulator so that their name might be on the next paper like the SWUI06 paper, Tabulator: Exploring and Analyzing linked data on the Semantic Web.
  2. Integrate data across applications and across administrative boundaries all over the world, like the Web has done for documents.

We touched on the question of local and global consistency, and someone asked if you can reason about disagreement. I said that yes, I had presented a paper in Edinburgh just this May that demonstrated formally a disagreement between several parties.

One of the last questions was "So what is computer science research anyway?" which I answered by appeal to the DIG mission statement:

The Decentralized Information Group explores technical, institutional and public policy questions necessary to advance the development of global, decentralized information environments.

And I said how cool it is to have somebody in the TAMI project with real-world experience with the privacy act. One student followed up and asked if we have anybody with real legal background in the group, and I pointed him to Danny. He asked me afterward how to get involved, and it turned out that IRC and freenode were known to him, so the #swig channel was in our common neighborhood in cyberspace, even though geography would separate us as I headed to the airport to fly home.


Blogged with Flock

converting vcard .vcf syntax to hcard and catching up on CALSIFY

Submitted by connolly on Thu, 2006-06-29 00:17.

A while back I wrote about using JSON and templates to produce microformat data. I swapped some of those ideas in today while trying to figure out a simple, consistent model for recurring events using floating times plus locations.

I spent a little time catching up on the IETF CALSIFY WG; they meet Wednesday, July 12 at 9am in Montreal. I wonder how much progress they'll make on issues like the March 2007 DST change and the CalConnect recommendations toward an IANA timezone registry.

When I realized I didn't have a clear problem or use case in mind, I went looking for something that I could chew on in test-driven style.

So I picked up the hcard tests and built a vcard-to-hcard converter sort of out of spare parts. icslex.py handles low-level lexical details of iCalendar, which turn out to have lots in common with vCard: line breaking, escaping, that sort of thing. On top of that, I wrote vcardin.py, which has enough vcard schema smarts to break down the structured N and ADR and GEO properties so there's no microparsing below the JSON level. Then contacts.kid is a kid template that spits out the data following the hcard spec.
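To give a flavor of those low-level lexical details, here is a minimal, illustrative take on the unfolding and unescaping that the two formats share; this is a sketch, not the actual icslex.py:

import re

def unfold(text):
    """Undo line folding: a CRLF (or LF) followed by a space or tab.

    >>> unfold("FN:Tan\\r\\n tek")
    'FN:Tantek'
    """
    return re.sub(r"\r?\n[ \t]", "", text)

def unescape(value):
    r"""Undo the \n, \,, \;, and \\ escapes in a property value.

    >>> unescape(r"123 Main St.\, Austin")
    '123 Main St., Austin'
    """
    return re.sub(r"\\([nN;,\\])",
                  lambda m: "\n" if m.group(1) in "nN" else m.group(1),
                  value)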

It works like this:

python vcardin.py contacts.kid hcard/01-tantek-basic.vcf >,01.html

Then I used X2V to convert the results back to .vcf format, compared them using hcard testing tools (normalize.pl and such), and fixed the breakage. Lather, rinse, repeat... I have pretty much all the tests working except 16-honorific-additional-multiple.

It really is a pain to set up a loop for the additional-name field when that field is almost never used, let alone used with multiple values. This sort of structure is more natural in XML/XHTML/hCard than in JSON, in a way. And if I change the JSON structure from a string to a list, does that mean the RDF property should use a list/collection? Probably not... I probably don't need additional-name to be an owl:FunctionalProperty.

Hmm... meanwhile, this contacts.kid template should mesh nicely with my wikipedia airport scraping hack...

See also: IRC notes from #microformats, from #swig.

WWW2006 in Edinburgh: Identity, Reference, and Meaning

Submitted by connolly on Fri, 2006-06-02 14:40.

I went to Edinburgh last week for WWW2006.

I spent Tuesday in the workshop on Identity, Reference, and the Web (IRW2006). I didn't really finish my presentation slides in time, but I think my paper, A Pragmatic Theory of Reference for the Web, is mostly coherent. Each section of the workshop got an entry in a semantic wiki; mine is the one that started at 12:00.

The IRE formalism presented by Valentina and Aldo was thought-provoking. I think their proxy-for is like foaf:topic (modulo the way they mix in time). And exact-proxy-for is like foaf:primaryTopic. Very handy. I wonder if foaf:primaryTopic should be promoted to its own thing, separate from all the social networking stuff in foaf.

Ginsberg's talk hit on one of the most important questions: "Do I commit to a document just because I use one of its terms?" His answer was basically to reify everything; I think we can do better than that. Peter Patel-Schneider's talk basically gave a 'no' answer to the question. I don't think we should go that far either, though from a standardization point of view, that's sorta where we're at.

Steve Pepper talked about published subjects and public resource identifiers; I can sympathize with his point that we have too many URL/URI/URN/IRI/XRI/etc. terms, but when he suggests that the answer is to make a new one, I'm not sure I agree. He argues to deprecate all the others, but as URI Activity lead at W3C, I'm not in a position where I can overrule people and deprecate things that they say they want. I agree with him that the 303 redirection is too much trouble, but he doesn't seem to be willing to use the HashURI pattern either, and as I said in the advice section of my paper, that's asking for trouble.

On Thursday, I was on a panel about tagging versus the Semantic Web: Meaning on the Web: Evolution or Intelligent Design? Frank started by debunking 4 myths about the Semantic Web. I gotta find Frank's slides: "I'll hold up one finger whenever anybody says myth #1, and so on." As the other Frank was talking about tagging, Frank held up 2 and 3 fingers, and the audience pointed out that he should have held up 1 finger.

I talked without slides. I think I got away with it. I said that I don't expect symbolic reasoning to beat statistical methods when it comes to the wisdom of crowds, but who wants to delegate their bank balance or the targets of their mail messages to the wisdom of crowds? Sometimes we mean exactly what we say, not just something close.

I suggested that GRDDL+microformats is a practical way to get lots of Semantic Web data. And I brought up the problem with iCalendar timezones and noted that while timezones data should be published by the government entities that govern them, Semantic Web data from wikipedia might be a more straightforward mechanism and might be just as democratic.

So much for philosophical discussions; stay tuned for another item about SPARQL and databases and running code.
