00:03:37 nm. minihackathon this afternoon
00:04:45 ericP, trop tard
00:05:06 port drat
00:11:33 all worky now?
00:29:00 no ericP sparql delete gives an exception
00:30:02 Oh, hey ericP
00:30:07 just the man
00:47:13 hmmm .. built a test harness and found a bug
00:47:28 s/harness/for the harness/
01:19:42 oops
01:21:03 mcherian has quit (Read error: Operation timed out)
01:21:32 http://dig.xvm.mit.edu/wiki/tabulator/tracker/state won't parse as N3
01:22:03 Ha ... same problem as I had on the client side at one point
01:22:07 "This is to test a two-^
01:22:07 line description."
01:22:14 in line 1088
01:23:07 In js that's
01:23:07 str = str.replace(/\\/g, '\\\\'); // escape backslashes
01:23:07 str = str.replace(/\"/g, '\\"'); // escape quotes
01:23:08 str = str.replace(/\n/g, '\\n'); // escape newlines
01:23:45 on output or
01:23:59 str = str.replace(/\\"/g, '"'); // unescape quotes '
01:23:59 str = str.replace(/\\n/g, '\n'); // unescape newlines
01:23:59 str = str.replace(/\\\\/g, '\\'); // unescape backslashes
01:24:02 on input
01:25:06 It is actually the last line of the file
01:25:38 maybe after that got added the delete stopped working because it couldn't parse the file
01:26:15 Are you doing the inserts just by appending to the file, I wonder? In which case one could end up with dups?
01:53:12 melvster has quit (Ping timeout: 246 seconds)
01:53:26 knappy (~haoqi@dhcp-18-111-36-78.dyn.mit.edu) has joined #dig
02:09:09 Hi knappy
02:09:19 hi
02:10:29 I've been having a go at the issue pane, more or less usable now, but the dig.xvm.mit.edu new wiki is down -- it was working very nicely yesterday
02:11:21 new wiki?
02:12:16 EricP and Joe P have been building a new version of the wiki - they started on Friday, and made great progress.
02:12:25 oh yeah
02:12:30 But currently now you can insert things but not delete them
02:13:00 ok
02:13:21 Have a good weekend?
02:13:38 I actually didn't do anything this weekend
02:13:39 ><
02:13:42 How about you?
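The escape/unescape chains quoted above can be sketched in Python to show why the ordering matters: on output, backslashes must be escaped first, or the backslashes introduced by the later replacements get doubled again. On input, sequential replaces can mangle edge cases (an escaped backslash followed by a literal "n" becomes a newline), so this sketch uses a single left-to-right pass instead. This is only an illustration of the technique, not the tabulator's actual code:

```python
import re

def escape_n3(s):
    # Order matters on output: backslashes first, or the backslashes
    # introduced by the quote/newline escapes would be doubled again.
    s = s.replace('\\', '\\\\')   # escape backslashes
    s = s.replace('"', '\\"')     # escape quotes
    s = s.replace('\n', '\\n')    # escape newlines
    return s

def unescape_n3(s):
    # A single left-to-right pass avoids the edge case where sequential
    # replaces turn an escaped backslash followed by 'n' into a newline.
    return re.sub(r'\\(.)',
                  lambda m: {'n': '\n'}.get(m.group(1), m.group(1)),
                  s)

text = 'a two-\nline "description" containing a literal \\n'
assert unescape_n3(escape_n3(text)) == text
```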
02:14:48 >< is supposed to be a face
02:14:48 Drove up to NH and back yesterday
02:14:52 oh wow
02:14:54 for what?
02:15:02 to drop off my son
02:15:09 oh
02:15:13 but anyway
02:15:29 The tracker is broken atm
02:15:37 ok
02:15:42 I wonder if I could just fix the file
02:19:16 fixed
02:19:24 nice
02:20:40 If you pull the latest tabulator and look at http://dig.csail.mit.edu/2010/issues/track#TabTracker
02:20:52 ok
02:21:22 nunnun_away is now known as nunnun
02:21:31 done
02:22:26 Ok, so there are various test bugs and a couple of real ones.
02:22:33 You can make a new bug
02:22:51 You can't change the state of a bug because that means deleting the old state
02:23:10 That is waiting for ericP to fix deletion.
02:23:54 k
02:26:08 I also added a bit to the library, like $rdf.graph() for making a new formula, and $rdf.term() for making a literal out of a javascript value. That is the state of the tabulator at the moment. ..
02:28:34 What things should I put in the issue tracker
02:28:51 I'd only play with it until it is more stable
02:29:09 You can note things which need to be fixed
02:29:54 In general, the issue tracker is a sort of common agenda, a list of wishes and bugs to be done, and a way of making sure we don't forget them.. so you put in things to remind yourself or someone else, or to suggest a feature for discussion.
02:30:04 The comment feature is a bit crude just now.
02:30:57 I like the "dependency tracking" part of issue tracker :)
02:30:58 I see
02:31:44 notice that Bugzilla and Redmine have such features but not roundtrip without addon.
02:32:10 dependency?
02:32:17 Links between issues. Should be made to allow cross domain links.
02:32:27 Yes. very true.
02:32:29 block
02:32:34 doAfter
02:32:34 But I thought I hadn't put them in yet
02:32:36 like that.
02:33:03 doAfter is interesting -- for PERT charts
02:33:37 also just a dependency on either any of or all of a set of subtasks.
02:33:45 yeah, "block" and "doAfter" have a slight semantic difference in Redmine
02:33:55 yes, "block" is used for subtasks
02:33:57 "this subtask is a requirement for that"
02:34:08 Mozilla calls them "tracking bugs"
02:34:10 or "this subtask would completely satisfy that"
02:34:16 HTML tracking bug, Acid3 tracking bug, etc.
02:34:20 HTML5
02:34:30 HTML5 is a super container bug?
02:34:43 Yeah
02:35:07 One thing I like about this RDF-driven one is it is very easily configurable
02:35:38 So people can design their own states and state transitions and the process around them.
02:35:52 But then if you link bugs between groups you have to have some commonality.
02:36:06 Maybe we ask every bug state to be classified as open or closed
02:36:35 timbl, this is an example https://bugzilla.mozilla.org/show_bug.cgi?id=acid3
02:36:42 Or we just have one "closed" terminal state which then propagates through the dependencies.
02:37:37 I see the moz one
02:38:25 I have been using the time in milliseconds since 1970 for bug IDs
02:38:36 Do you think we should have sequential ones
02:38:55 I don't know.
02:39:51 You can generate a next ID by storing it in the wiki and doing a single SPARQL update to increment it, deleting one number and inserting the next. If the sparql succeeds you know no one else has grabbed that same number.
02:40:00 But I thought that was a bit complicated
02:40:40 but a cute way of doing it I thought.
02:40:52 I think I like sequential ones, but yeah, synchronization is a big problem.
02:41:40 So long as sparql update doesn't fail silently (ericP) then you can use it for mutex
02:42:31 DELETE { reservedby } INSERT { reservedby } sort of thing in one operation.
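The DELETE/INSERT counter trick described above can be sketched as a small Python helper that builds the atomic update. The counter resource and property URIs here are made up for illustration, and the DELETE/INSERT shape simply mirrors what the conversation quotes; the real wiki may use different terms:

```python
def next_id_update(counter_uri, prop_uri, current):
    """Build a SPARQL update that atomically bumps an ID counter.

    The DELETE only matches if the counter still holds `current`, so
    if two clients race, only one update succeeds and the loser must
    re-read and retry.  counter_uri / prop_uri are hypothetical names
    invented for this sketch, not anything the wiki actually defines.
    """
    return (
        "DELETE {{ <{c}> <{p}> {old} . }}\n"
        "INSERT {{ <{c}> <{p}> {new} . }}"
    ).format(c=counter_uri, p=prop_uri, old=current, new=current + 1)

print(next_id_update("http://example.org/tracker#counter",
                     "http://example.org/ns#nextId", 41))
```

If the server reports failure (rather than failing silently), a failed DELETE tells the client somebody else grabbed the number first.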
02:42:41 So we need an error code for "delete failed"
02:44:18 Anyway, I can't wait for "Google Todos"
02:44:50 They will come, if they aren't there already -- let's get the distributed version out first
02:49:53 hi timbl ericP kennyluck
02:51:00 That's better:
02:51:01 js> $rdf.term(new Date()).toNT()
02:51:01 "2010-8-9T2:50:38Z"^^
02:51:12 Hi, there.
02:52:01 Sorry but I have to go. Gotta do some Linked Data tutoring.
02:52:15 is $rdf part of jquery
02:52:21 cya kennyluck
02:56:34 Bye Kenny
02:56:54 $rdf is not. It is the name we have adopted for the rdf library
02:57:04 $ is normally jquery
02:57:21 I think the plan is that $.rdf = $rdf
02:57:32 in the tabulator environment.
02:57:46 But I think we are trying to keep the jquery code independent of the rdf code
02:58:14 and the rdfquery code, which is an extension to jquery, depends on both
02:58:39 I haven't learned how to use jquery yet .. probably would save me some coding time
03:12:08 sorry for absenteeism -- called off on a mission
03:12:16 back now
03:12:31 so there's a repeatable failure for update
03:12:54 yes, anything which involves a DELETE.
03:14:26 i left you a backtrace and how to reproduce
03:14:57 huh, looks like i never tested DELETE without a GRAPH constraint
03:15:25 So you have a regression test?
03:16:16 http://swobjects.svn.sourceforge.net/viewvc/swobjects/branches/sparql11/tests/test_SPARUL.cpp?view=markup&pathrev=1125#l_411
03:16:23 starting line 36
03:16:30 but they all have GRAPHs in them
03:16:39 how does swobjects-t.py get generated
03:17:16 t_SWObjects and hand-written
03:17:45 Eg: sparql: sending update to
03:17:45 query=DELETE { .
03:17:45 }
03:17:46 INSERT { .
03:17:46 }
03:17:46 would be interesting to try to replicate all the tests into python, perl, java, but haven't tried to do that
03:17:46 sparql: update complete for status=500, text.length=534
03:18:28 BasicGraphPattern::deletePattern (this=0x7ea880, target=0x0, rs=0x7f1350, bgp=0x0)
03:18:29 Well, running a test of the wiki across the network would mean the client could be written in anything
03:18:45 i bet it's not graceful about default graph
03:19:41 the only one we need :)
03:19:44 ericP, i copied swobjects.[py,so] so rebuilds don't break the actual wiki space
03:20:15 copy to /srv/dig/lib/python to 'install'
03:21:12 roger that
03:25:03 writing tests -- then fixing code
03:25:08 ETA ~ 20 mins
03:37:55 hey, all my new delete tests fail
03:38:03 who writes such junk?
03:39:46 actually, going back to the earlier SVN conversation, (during this 20min) presbrey, i think maybe in the long run it is crazy to use SVN when we already have a set of diffs in the set of requests.
03:40:05 Seems crazy and slow for SVN to do a diff and generate a patch
03:41:14 If we want to regenerate the state at any given time we just run all the requests up to that point through the server.
03:42:03 presbrey has quit (Ping timeout: 264 seconds)
03:48:34 presbrey (~presbrey@SCRIPTS.MIT.EDU) has joined #dig
03:50:42 DIGlogger, pointer
03:50:42 See http://dig.csail.mit.edu/irc/dig/2010-08-09#T03-50-42
03:50:45 hmm, tests passed -- could be harder
03:51:28 (once i figured out the convention i used for data files)
03:54:17 have you thought anymore about handling the segfaults more generally
03:56:39 no new inspirations since the safety scissors python wrapper
03:57:13 but that only works for python
03:57:15 could make it a separate process, but that seems extravagant
03:57:55 well, there's always fixing the bugs, but that doesn't make it much safer wrt odd input
04:01:21 Do these segfaults come as C++ exceptions, or do they break the whole exception system?
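The `$rdf.term(new Date()).toNT()` output quoted earlier ("2010-8-9T2:50:38Z", with the datatype after `^^` apparently swallowed by the channel logger) is not quite a valid xsd:dateTime lexical form, which requires zero-padded month, day, and hour fields. A hedged Python sketch of the padded serialization, assuming the intended datatype was xsd:dateTime:

```python
from datetime import datetime, timezone

# Assumed datatype; the log's "^^" was followed by something the
# logger ate, presumably this URI.
XSD_DATETIME = "http://www.w3.org/2001/XMLSchema#dateTime"

def datetime_to_nt(dt):
    # xsd:dateTime requires zero-padded fields, unlike the
    # "2010-8-9T2:50:38Z" form quoted in the log above.
    lex = dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return '"{}"^^<{}>'.format(lex, XSD_DATETIME)

print(datetime_to_nt(datetime(2010, 8, 9, 2, 50, 38, tzinfo=timezone.utc)))
# "2010-08-09T02:50:38Z"^^<http://www.w3.org/2001/XMLSchema#dateTime>
```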
04:02:34 ha delete on the default graph is quite odd
04:02:53 odd?
04:03:19 this input isn't odd, but there are places where the SWIG interface doesn't enforce proper types
04:05:14 presbrey, where's the script now?
04:05:36 i think that, once again, the ResultSet needs to have the target db set
04:05:53 what script, test.py is in /srv/dig/www/wiki
04:05:58 i've got another fix for that, but compilation will take a bit
04:06:50 the one which imports SWObjects, parses and executes
04:08:21 sure /srv/dig/www/wiki/test.py should be good for that
04:08:37 there's a delete.sparql in the same dir
04:08:48 it was on issues.n3 i think in tabulator
04:09:10 though i don't think it gets far enough for that to matter
04:09:26 last argv is the base-uri
04:09:58 anyways, try the new SWObjects.py and _SWObjects.so in /usr/local/src/SWObjects/branches/sparql11/swig/python/
04:12:44 status?
04:13:43 ?
04:15:36 no segv
04:15:37 i built new libs, was wondering if they worked
04:15:56 that's not "no, segv", is it?
04:17:07 yes delete works on the wiki now
04:17:13 just restarted it
04:17:26 rockin'
04:20:04 ok, bed time
04:20:48 working?
04:21:47 Ok, looks good
04:22:56 Nice -- thanks ericP and presbrey !!!!!!!!!!!!!!!!
04:22:59 good night
04:32:50 good night
04:33:45 presbrey has left #dig
04:34:13 presbrey (~presbrey@SCRIPTS.MIT.EDU) has joined #dig
05:08:30 knappy has quit (Quit: knappy)
05:09:04 kennyluck has quit (Quit: kennyluck)
05:10:33 knappy (~haoqi@dhcp-18-111-36-78.dyn.mit.edu) has joined #dig
05:33:12 knappy has quit (Quit: knappy)
05:34:38 knappy (~haoqi@dhcp-18-111-36-78.dyn.mit.edu) has joined #dig
05:46:20 kennyluck (~kennyluck@118-168-65-157.dynamic.hinet.net) has joined #dig
05:51:57 knappy has left #dig
07:12:01 drrho has quit (Ping timeout: 276 seconds)
07:13:44 drrho (~rho@chello213047112079.11.11.vie.surfer.at) has joined #dig
07:14:11 drrho has quit (Remote host closed the connection)
08:06:34 mcherian (~mathew@bne75-8-88-161-125-97.fbx.proxad.net) has joined #dig
08:25:00 mcherian has quit (Ping timeout: 265 seconds)
11:24:44 RalphS_ (~swick@30-7-139.wireless.csail.mit.edu) has joined #dig
11:28:46 DIGlogger (~dig-logge@groups.csail.mit.edu) has joined #dig
11:28:46 topic is: Decentralized Information Group @ MIT http://dig.csail.mit.edu/
11:28:46 Users on #dig: DIGlogger RalphS_ kennyluck presbrey lkagal timbl Yudai_ ericP gbot46 sandro nunnun
12:40:16 mcherian (~mathew@78.41.129.5) has joined #dig
13:12:27 RalphS_ has quit (Ping timeout: 260 seconds)
13:23:02 nice
13:34:25 timbl_ (~timbl@pool-96-237-236-72.bstnma.fios.verizon.net) has joined #dig
13:34:25 timbl_ has quit (Client Quit)
13:36:59 nunnun is now known as nunnun_away
13:38:25 timbl has quit (Ping timeout: 245 seconds)
14:08:59 RalphS_ (~swick@30-7-139.wireless.csail.mit.edu) has joined #dig
14:14:12 nunnun_away is now known as nunnun
14:24:44 melvster (~melvster@p579F9A07.dip.t-dialin.net) has joined #dig
14:27:09 kennyluck has quit (Quit: kennyluck)
15:08:57 lkagal has quit (Quit: lkagal)
15:11:40 kennyluck (~kennyluck@114-25-243-107.dynamic.hinet.net) has joined #dig
15:33:43 amy (~amy@31-35-122.wireless.csail.mit.edu) has joined #dig
16:00:21 lkagal (~lkagal@30-6-179.wireless.csail.mit.edu) has joined #dig
16:00:46 ericP
16:01:08 I'm going to get a little python wrapper for SWObjects going
16:03:44 its in /srv/dig/lib/python/swobjects
16:25:02 drrho (~rho@chello213047112079.11.11.vie.surfer.at) has joined #dig
16:31:49 just kidding, /srv/dig/www/wiki/swobjects.py
16:48:10 mcherian has quit (Read error: Operation timed out)
18:02:06 new data wiki does POST text/turtle append
18:08:11 nunnun is now known as nunnun_away
18:08:14 when would an insert not be an append?
18:08:40 oh silly me, when it's RDF
18:08:46 NOT RDF
18:16:29 simple PUT and DELETE also now implemented
18:17:31 OK I think that covers it. nothing special in python for OPTIONS
18:22:23 nunnun_away is now known as nunnun
18:22:28 don't forget M_DAV_META_PUT, presbrey
18:24:14 whats that
18:24:28 this handler is only advertising SPARQL, not DAV
18:28:37 i think i send sparql, webdav and the cors thingy
18:32:32 Access-Control-Allow-Origin: *
18:33:40 presbrey, i was just noting your vigilance at covering all of the HTTP verbs
18:34:34 then demonstrating that by parodying the verbs i've seen in other specs (HTTP Extensions, DAV)
18:39:08 I think get, post, put, and delete covers enough for now
18:39:42 melvster, curl -X OPTIONS http://dig.xvm.mit.edu/wiki/presbrey -v to see what we send
18:40:28 oh no, how could I forget HEAD!
18:40:28 cool!
18:41:12 ericP, did you work on the ?graph= business?
18:41:51 I'm looking at w3.org/TR/sparql11-http-rdf-update/
18:41:57 presbrey, CONNECT
18:42:07 though i have no idea how people use it
18:42:27 curious how timbl is thinking to merge into that
18:42:30 not sure if i have or haven't. remind me what it was?
18:42:58 ?graph= looks equivalent to GRAPH { } in the SPARUL
18:44:33 service?default-graph-uri="foo"&query="INSERT { 1 }" vs. service?query="INSERT { GRAPH { 1 } }" ?
18:52:41 yes looks to be something like that
19:13:09 timbl (~timbl@31-35-252.wireless.csail.mit.edu) has joined #dig
19:13:20 A new one: sparql: update complete for status=415, text.length=0
19:13:28 415
19:16:43 wiki HEAD requests work now too
19:17:16 afternoon timbl
19:17:30 415 = HTTP Error 415 Unsupported media type
19:17:36 sparql POST takes application/sparql-query or text/turtle
19:17:45 what are you sending?
19:17:59 I thought, same as I was sending yesterday!
19:18:08 today I'm checking
19:18:12 which you send
19:18:20 and text/turtle does an append
19:18:24 instead of sparql query
19:18:39 excellent
19:18:48 indeed
19:19:05 DELETE, PUT, and HEAD all work too
19:19:36 testing with curl, all the same things work from yesterday
19:20:56 BTW there is a spec very close to this from the DAWG (sparql group)
19:20:57 xhr.setRequestHeader('Content-type', 'application/sparql-query');
19:21:12 (This is your code, this file ... sparqlUpdate.js)
19:21:44 It hasn't changed
19:22:24 ha
19:23:16 can you go post again please
19:23:22 :-) credit where credit is due
19:23:35 posted
19:23:38 ohhh POST application/sparql-query; charset=UTF-8
19:24:00 always forget that semi-;
19:24:34 I wonder who put on the UTF-8 -- firefox??!?
19:25:04 those sneaks. try again
19:25:08 Or is that what you expect?
19:25:39 looks like you got it that time
19:25:56 silly me putting in new features and not testing in tabulator
19:26:11 good thing yours is working tim thanks ;)
19:26:16 you should be using tabulator every moment of course
19:26:19 :)
19:26:36 its still hard to write Python in tabulator otherwise I would be
19:26:48 A regression test of curl scripts or python would seem a good idea
19:27:17 I have some for swobjects python of insert vs. append
19:27:41 some curl ones would be good yes
19:27:53 re. the DAWG spec
19:28:20 I did read your comments to chimezie
19:28:37 as I commented to ericP earlier
19:28:52 ?graph= argument in their spec seems sketchy within our model
19:29:32 you go about '?' in general
19:29:42 yes, that's no way to run a restaurant -- that is, you want to access something with a uri, go there. anything else is a kludge, perhaps a useful kludge but secondary.
19:30:04 I'm not sure other ?kw break things as much as ?graph
19:30:08 Yes, I don't see why people so often stick all the info behind a "?"
19:30:28 shows how they code, but doesn't make it better
19:30:42 ?graph is really the worst offender =~ GRAPH{} if you want to use HTTP and do things based on the HTTP URI like ACL
19:31:24 otherwise all you can ACL is the endpoint and not the documents therein, at least with webid authz
19:31:35 of course you know these things ;)
19:32:12 anyway timbl you should test that you like how HEAD and DELETE work
19:32:27 these are two new methods in our data wiki
19:32:28 Confusion between graph and document is an issue.
19:34:18 Do you think we should have MS-Author-Via: SPARQL,DAV
19:34:21 ?
19:34:43 so that DAV-only clients can use it?
19:34:49 not unless we go make sure DAV clients can use it
19:34:59 true
19:35:16 The code in the tabulator I have moved and have not tested it
19:35:21 DAV 1 is probably easily achievable in python, DAV 2 needs mod_webdav for sure
19:35:41 tabulator needs minimal DAV -- basically PUT
19:35:52 oshani (~oshani@c-67-161-2-233.hsd1.ca.comcast.net) has joined #dig
19:35:55 MS-Author-Via: SPARQL,PUT
19:36:00 ha
19:36:14 the DAV headers I know of say:
19:36:21 DAV: [version-list]
19:36:24 like DAV: 1
19:36:28 vs. DAV: 1, 2
19:36:38 That's a separate DAV: header
19:36:43 so we could say Author-Via: SPARQL, DAV
19:36:46 and additionally specify DAV: 1
19:36:59 But do we do all of DAV 1
19:42:57 rfc4918 won't tell me but I bet 1 is doable
19:45:37 dav class 1 is generally short for 'no locking'
19:45:56 I'm not sure its worth doing much beyond PUT if thats what tabulator needs
19:46:14 lets not then
19:48:32 Lets just get this functionality solid
19:48:33 we can always turn on mod_webdav behind the data wiki and try to make them friends
19:48:42 sparql: update complete for status=500, text.length=534
19:49:03 That was doing a delete, insert
19:49:18 no just an insert
19:50:07 bet its the quotes
19:52:58 INSERT { "The tabulator \"Find All\" button does not work as the views have been turned off." . }
19:55:46 nunnun is now known as nunnun_away
19:56:08 nunnun_away is now known as nunnun
20:00:06 That's one to remember for the test suite :)
20:00:15 The error got into the file: Error trying to parse as Notation3: Line 1 of : Bad syntax: expected '.' or '}' or ']' at end of statement at: "Find All" button does not work"
20:01:05 If you are escaping " and \n remember to escape \ too
20:01:55 did you reupload it? the file looks ok to me
20:02:08 I don't escape post data
20:02:14 The tabulator uploaded it and complained
20:02:46 cwm http://dig.xvm.mit.edu/wiki/tabulator/tracker/state fails too same way
20:03:47 The last line of the file ends: "The tabulator "Find All" button does not work as the views have been turned off." .
20:03:55 without escaping
20:04:46 looks like a bug in ericP's turtle serializer to me
20:05:13 yes.
20:05:30 where is the code for that
20:05:53 We should put cwm and it and rdflib.js through the same series of tests
20:06:43 https://swobjects.svn.sourceforge.net/svnroot/swobjects/
20:07:19 https://swobjects.svn.sourceforge.net/svnroot/swobjects/branches/sparql11/lib/
20:10:00 nunnun is now known as nunnun_away
20:10:01 nunnun_away is now known as nunnun
20:16:43 then not much to do on the wiki until next swobjects release
20:20:47 Indeed. Looks like this could be an issue:
20:20:47 virtual void rdfLiteral (const RDFLiteral* const, std::string lexicalValue, const URI* datatype, LANGTAG* p_LANGTAG) {
20:20:47 ret << '"' << lexicalValue << '"';
20:20:48 if (datatype != NULL) { ret << "^^<" << datatype->getLexicalValue() << '>'; }
20:20:48 if (p_LANGTAG != NULL) { ret << '@' << p_LANGTAG->getLexicalValue(); }
20:20:48 ret << ' ';
20:21:09 .
20:21:11 in https://swobjects.svn.sourceforge.net/svnroot/swobjects/branches/sparql11/lib/SPARQLSerializer.hpp
20:21:22 RalphS_ has quit (Quit: leaving ...)
20:21:29 just quotes the literal value without escaping
20:22:02 But that is in the sparql serializer, could be reused as NT serializer.
20:23:24 O
20:23:27 I'm calling
20:23:42 _SWObjects.RdfDB.toString('text/turtle')
20:26:18 you can see my swobjects wrapper eg. DefaultGraph in swobjects.py in wiki
20:37:11 oshani has quit (Quit: Mama nidi!)
20:38:57 timbl, how do you imagine sw conneg working?
20:38:57 -------------------------------
20:39:17 if (posStr[0] == '"' && posStr[posStr.size()-1] == '"')
20:39:17 return getRDFLiteral(posStr.substr(1, posStr.size()-2), NULL, NULL);
20:39:18 -------- in https://swobjects.svn.sourceforge.net/svnroot/swobjects/branches/sparql11/lib/SWObjects.cpp
20:39:20 all the q= quality, preference arguments still apply?
20:39:37 looks like a bug -- no handling of data types, or language
20:40:11 umm yes, in fact you have to be careful with q's
20:40:11 unless getRDFLiteral works
20:40:34 If there is a lang or a datatype, the last char won't be a "
20:40:55 like "123"^^
20:41:05 or "chat"@fr
20:41:46 and what about Accept langs?
20:41:57 is that going to only give you triples in the output of a certain lang?
20:42:12 yosi (~chatzilla@static-71-243-122-114.bos.east.verizon.net) has joined #dig
20:42:18 There, the RDF system tends to supply multiple language labels in ontologies and so the conneg isn't used.
20:42:51 so lang is ignored in accept header
20:42:56 or is it invalid to send
20:42:58 But for something which is basically RDF but where an html rendering is possible, you should make sure the server prefers RDF. Or the tabulator, which handles RDF and HTML, won't get the RDF.
20:43:16 I was mostly thinking of the data wiki
20:43:38 effects on transforms and projections I'll leave for another day
20:44:02 and data in general, which would be nice to not always store n times for each of n rdf formats
20:44:12 my foaf already
20:45:07 The tabulator will send lang pref to the server, the server can ignore it
20:45:15 for the data wiki
20:45:54 Oh, as regards N formats, we should be able for these languages to do fast streaming conversion
20:46:34 The cwm NT parser and RDF/XML serializer will work in a stream.
20:48:04 so the python wrapper could convert on the fly from the internal n3 to people who wanted rdf/xml
20:48:36 I was thinking of a separate layer in straight C using redland/rapper
20:48:59 If they stream, sure,
20:49:10 C should be faster
20:49:29 lots more formats (and tested with string escapes)
20:49:46 eg. nt, ttl, rdfxml, rss, atom, dot, json-triples, json
20:50:19 dot?
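The two bugs quoted above (the rdfLiteral visitor quoting without escaping, and getRDFLiteral blindly stripping a trailing quote even when a `^^<datatype>` or `@lang` follows) suggest what a correct literal serializer needs. A Python sketch, not the C++ fix itself:

```python
def serialize_literal(lexical, datatype=None, lang=None):
    """N-Triples-style literal serialization, sketching what the
    rdfLiteral visitor above is missing: escape the lexical form
    before quoting, then append ^^<datatype> or @lang (so a parser
    must not assume the last character of the term is a quote)."""
    escaped = (lexical.replace('\\', '\\\\')   # backslashes first
                      .replace('"', '\\"')
                      .replace('\n', '\\n'))
    out = '"%s"' % escaped
    if datatype is not None:
        out += '^^<%s>' % datatype
    elif lang is not None:
        out += '@%s' % lang
    return out

print(serialize_literal('The tabulator "Find All" button does not work'))
# "The tabulator \"Find All\" button does not work"
print(serialize_literal('chat', lang='fr'))
# "chat"@fr
```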
20:50:25 ha a graphviz format
20:50:42 For sparql results, CSV is handy for many people out there
20:50:52 C should mean faster, separate should mean easier to get others to install independent of sparql
20:51:11 I do indeed have some RDF to dot converter, wasn't sure you meant the same dot!
20:51:12 conneg to me just means reserializing RDF data, no sparql
20:51:22 yes.
20:51:55 And I think generating JSON is a reasonable thing for rdf. but dot should be the application side.
20:52:15 The idea of this is commodity storage.
20:52:28 The storage should not include any application
20:52:39 it would be cool if you could say Accept:text/json to a HTML w/ RDFa document
20:52:40 so it will work with all applications and be timeless
20:53:16 well, that functionality should be put in the client.
20:53:28 The client has total control over what it wants to do with the data.
20:53:55 why don't you want to conneg json, turtle, rdfxml, etc. from RDFa?
20:53:58 There are limits to conneg.
20:54:10 It might seem neat from a programmer POV to ask for JSON.
20:54:17 But then the JSON they get back has no URI
20:54:44 You start quoting curl commands instead of URIs. Then the URI has failed.
20:55:06 Better to have a link to the data resource linked from the HTML web page, so each has a URI.
20:55:08 if my browser asks for HTML on an RDF and it gives me some RDFa, I think it should give me back the RDF from RDFa
20:55:51 ETOOMANYUNBOUNDPREPOSITIONS
20:56:03 sorry
20:56:34 not shouting, just emulating an IBM system
20:57:20 its ok I always read errors as implied whisper
20:57:32 IBM doesn't know any better
20:58:03 so end-to-end, I expect a server that constructs RDFa from RDF on the fly to also do the reverse. RDF from RDFa.
20:58:16 I'm not sure about the URI argument, I'll have to reread that.
20:58:47 but it seems like we're already cheating the URI in a sense that all the data is actually *.n3*
20:59:07 When a page is HTML, then you never know what you are missing when you look at only the RDFa.
20:59:27 No, not cheating at all. We define the URI space.
20:59:42 We omit details of how it is stored, so we can change that without issue.
20:59:59 when a page is HTML, an agent who can only use (Accept:) RDF/RDFa won't be missing since it can't read HTML anyway
21:00:08 Then to actually map onto a file, as we are using files this time, we can do whatever we want that makes it easy to run the server.
21:00:48 we might want to change that on the server now to .ttl instead since its not really n3
21:00:58 yes about not missing it, but what about a tabulator user, who follows a link and can handle data and hypertext?
21:01:13 Should the window be divided into two views?
21:01:47 multiple panes? thats what tabulator is all about eh?
21:02:12 Well, I'd prefer a smooth upgrade path to n3. yes, maybe we should use .ttl .. not sure.
21:02:28 (reminds me... view-source with tabulator still doesn't show the data)
21:10:51 ok I'm gonna go fix my python rasqal bindings
21:17:35 actually, there is one good ?kw= I'd like to support
21:17:39 ?callback=
21:22:19 but its still just more conneg
21:33:03 Maybe we need a source pane
21:33:26 It isn't trivial because of the way Jambo made the tabr work now.
21:34:11 ooops
21:34:12 sparql: update failed for status=500, Internal Server Error, body length=534
21:34:12 for query: INSERT { "Before the refactor, View Source would work on an RDF file. Now it doesn't. " .
21:34:12 }
21:34:50 After I carefully scanned the data for quotes too
21:35:03 Ha .. maybe the single '
21:35:15 can't be
21:37:12 lkagal has quit (Quit: lkagal)
21:37:33 Darn, the file has been rewritten with that stupid quote again.
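The ?callback= support mentioned above is plain JSONP: wrap the JSON body in a call to the named function. A minimal sketch; the payload shape here (a list of {uri: ...} objects) is invented for illustration and is not the wiki's actual triple encoding:

```python
import json

def jsonp(payload, callback=None):
    """Serialize `payload` as JSON, optionally wrapped in a
    ?callback= function call (JSONP) so a browser can load it
    cross-origin via a <script> tag."""
    body = json.dumps(payload)
    if callback:
        return "%s(%s);" % (callback, body)
    return body

print(jsonp([{"uri": "http://example.org/#i1"}], callback="test"))
# test([{"uri": "http://example.org/#i1"}]);
```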
21:42:35 Now it won't return the file
21:44:36 It will return other files
21:45:29 Has multiple line problem
21:46:05 got the javascript callback working
21:46:07 http://dig.xvm.mit.edu/wiki/tabulator/tracker/state?callback=test
21:47:41 Is that output a standard?
21:48:02 the JSON or the callback?
21:48:22 the callback is, maybe the json isnt
21:48:44 oh, no it should be indexed I think
21:48:46 There is a more friendly JSON Sandro has been discussing and I like where a string is just a string and a number just a number and a node with a URI just {uri: "htt..." }
21:48:51 json triples should be indexed s-p at least
21:49:48 yes it should really be more lispy
21:50:04 and how do you figure the @en into a literal?
21:50:16 You don't
21:50:25 Actually I was just playing with Rhino
21:50:36 ok retry http://dig.xvm.mit.edu/wiki/tabulator/tracker/state?callback=test
21:50:42 str = "chat"
21:50:48 str.lang = "en"
21:51:06 It would remember str.lang for a bit and then forget it
21:51:40 like if I tried to put on str.toNT = function(){...} then it would drop the lang ..
21:51:43 gbot46, ?
21:51:44 nothing known about
21:51:46 gbot46, help?
21:51:49 nothing known about help
21:51:52 who is gbot46 ?
21:52:13 gbot46, presbrey?
21:52:13 nothing known about presbrey
21:52:16 ouch
21:52:51 gbot?
21:53:05 gbot, help
21:53:14 gbot, who are you?
21:53:22 gbot, who is your daddy?
21:54:05 from config.fsf.org
21:56:47 so chrome's accept header is application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
21:56:57 I guess q=1.0 is assumed if none is given
21:57:18 wait is this a comma-delimited list?
21:57:34 no it must be semi-colon
21:57:53 Yes, beware the comma is highest precedence
21:58:01 comma is exactly like two headers
21:58:55 what q= do text/plain and image/png get?
21:59:48 text/plain gets 0.8
21:59:56 split it first by comma
22:00:28 In fact biking in I was thinking our code which turns these into RDF should make separate statements for each comma-delimited bit
22:00:50 Then the semicolon separates a content-type from the q param
22:00:58 That is the rfc822 way
22:01:01 does image/png have no q= or 0.8 or 0.5?
22:01:23 image/png has q=1
22:01:37 any other junk has q=.5
22:01:39 ok
22:02:48 We should kick gbot?
22:03:01 I don't know gbot46
22:03:09 its probably too late. all the data wiki secrets are exposed!
22:03:19 oh nooooooooo!
22:04:17 im not doing any negotiation for you yet
22:04:22 only ?callback is done
22:04:55 yours makes the comma separation much more clear
22:04:58 application/rdf+xml, application/xhtml+xml;q=0.3, text/xml;q=0.2, application/xml;q=0.2, text/html;q=0.3, text/plain;q=0.1, text/n3;q=1.0, text/rdf+n3;q=1, application/x-turtle;q=1, text/turtle;q=1
22:05:58 I love fixing the file, clicking one of the many red dots and seeing them all go yellow for a moment then green :)
22:06:29 youre only doing GET though, are you fixing it on the server?
22:06:42 yes
22:06:46 emacs
22:07:00 oic
22:07:21 Thanks for access to it .. thought it would be useful
22:07:49 So we have to avoid quotes and newlines until ericP is free
22:07:57 hehe yep
22:08:10 I could try but it will be so much better and faster if he does it
22:16:39 I guess these q= are floats
22:17:03 amy has quit (*.net *.split)
22:18:20 There'll be a spec somewhere -- I wouldn't expect an exponent application/flash 1.0e-14
22:18:26 amy (~amy@31-35-122.wireless.csail.mit.edu) has joined #dig
22:19:00 for sorting in python
22:19:16 {'text/html': 0.90000000000000002, 'image/png': 0.0, 'application/xml': 0.0, '*/*': 0.5, 'text/plain': 0.80000000000000004, 'application/xhtml+xml': 0.0}
22:19:43 ouch those 0.0 are supposed to be 1.0
22:20:06 ok fixed {'text/html': 0.90000000000000002, 'image/png': 1.0, 'application/xml': 1.0, '*/*': 0.5, 'text/plain': 0.80000000000000004, 'application/xhtml+xml': 1.0}
22:20:10 I guess I could just 100x them all
22:20:19 figuring no one is using q=0.001
22:20:42 doesn't matter much, python can sort floats just fine
22:30:12 melvster has quit (Read error: Connection reset by peer)
22:35:08 timbl, whats the content_type for ntriples?
22:35:34 dunno good q
22:35:46 I have not pushed that one. Might not be registered
22:35:55 but should say in the ntriples spec
22:36:03 ericP is here
22:40:08 muahahahaha
22:42:45 curl -v http://dig.xvm.mit.edu/wiki/tabulator/tracker/state -H 'Accept: application/rdf+xml'
22:42:49 works now
22:49:07 also curl -v http://dig.xvm.mit.edu/wiki/tabulator/tracker/state -H 'Accept: application/rdf+xml;q=0.5,text/turtle;q=0.6'
22:49:12 vs. curl -v http://dig.xvm.mit.edu/wiki/tabulator/tracker/state -H 'Accept: application/rdf+xml;q=0.7,text/turtle;q=0.6'
22:49:16 converting on the fly?
22:49:54 yes
22:50:05 r29529
22:50:18 cool
22:50:32 very friendly.
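The Accept-header parsing rules worked out above (split on comma first, then semicolon separates the media type from its parameters, missing q defaults to 1.0) can be sketched in Python; this mirrors the discussion, not the wiki's actual parser:

```python
def parse_accept(header):
    """Parse an Accept header into {media_type: q}.

    Comma separates alternatives (exactly like repeated headers);
    semicolon separates a media type from its parameters; a missing
    q defaults to 1.0, which is the bug that produced the 0.0 entries
    in the dict quoted in the log above.
    """
    prefs = {}
    for part in header.split(','):
        pieces = [p.strip() for p in part.split(';')]
        mtype, q = pieces[0], 1.0
        for param in pieces[1:]:
            if param.startswith('q='):
                q = float(param[2:])
        prefs[mtype] = q
    return prefs

chrome = ("application/xml,application/xhtml+xml,text/html;q=0.9,"
          "text/plain;q=0.8,image/png,*/*;q=0.5")
prefs = parse_accept(chrome)
assert prefs['image/png'] == 1.0 and prefs['text/plain'] == 0.8
```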
22:50:40 W3C standards even :)
22:51:09 conneg is a great way of avoiding conflict and user upset
22:51:26 it will end up hiding parse errors
22:51:32 from bad serializations
22:51:42 raptor library is lenient and will try to skip malformed triples
22:51:47 You mean unused code will rot?
22:52:01 no I mean tabulator will get back valid turtle now
22:52:03 hiding parse errors in data is bad
22:52:16 lkagal (~lkagal@pool-96-237-240-136.bstnma.fios.verizon.net) has joined #dig
22:52:29 if it says accept: text/turtle, the negotiate will cause a reserialize and the bad triples will get dropped
22:52:47 all you guys have fios!?! I'm so jealous
22:53:30 :) You will have google soon maybe
22:53:41 if you're very very good
22:54:21 ok, /me impressed by the conversion on the fly
22:56:37 bye timbl
22:59:37 timbl has quit (Quit: timbl)
23:04:37 melvster (~melvster@p579F9A07.dip.t-dialin.net) has joined #dig
23:13:33 mcherian (~mathew@bne75-8-88-161-125-97.fbx.proxad.net) has joined #dig
23:20:13 yosi has quit (Quit: ChatZilla 0.9.86 [Firefox 3.6.8/20100722145641])
23:23:03 nunnun is now known as nunnun_away
23:30:39 mcherian has quit (Ping timeout: 240 seconds)
23:34:40 oshani (~oshani@c-67-161-2-233.hsd1.ca.comcast.net) has joined #dig