Map and Territory in RDF APIs

Submitted by connolly on Tue, 2010-04-27 14:30.

RDF specs and APIs have made a bit of a mess out of a couple of pretty basic tools of math and computing: graphs and logic formulas. With the RDF next steps workshop coming up, Pat Hayes re-thinking RDF semantics, and Sandro thinking out loud about RDF2, I'd like us to think about RDF in more traditional terms. The scala programming language seems to be an interesting framework in which to explore how they relate to RDF.

The Feb 1999 RDF spec wasn't very clear about the map and the territory. It said that statements are made out of parts in the territory, rather than features on the map, which doesn't make very much sense. RDF APIs seem to inherit this confusion; e.g. from an RDF::Value class for ruby:


Checking if a value is a resource (blank node or URI reference)


Blank nodes and URI references are parts of the map; resources are in the territory.

Likewise in Package org.jrdf.graph:

Resource: A resource stands for either a Blank Node or a URI Reference.

The 2004 RDF specs take great pains to clarify these use/mention distinctions, but they also go on at great length.

Let's review Wikipedia on graphs:

In mathematics, a graph is an abstract representation of a set of objects where some pairs of the objects are connected by links. ...  The edges may be directed (asymmetric) or undirected (symmetric) ... and the edges are called directed edges or arcs; ... graphs which have labeled edges are called edge-labeled graphs.

With that in mind, in the swap-scala project, we summarize the RDF abstract syntax as an edge-labelled directed graph with just one or two wrinkles:

package org.w3.swap.rdf

trait RDFGraphParts {
  type Arc = (SubjectNode, Label, Node)

  type Node
  type Literal <: Node
  type SubjectNode <: Node
  type BlankNode <: SubjectNode
  type Label <: SubjectNode
}

The wrinkles are:

  • Arcs can only start from BlankNodes or Labels, i.e. SubjectNodes
  • Arc labels may also appear as Nodes

We use another trait to relate concrete datatypes to these abstract types:

trait RDFNodeBuilder extends RDFGraphParts {
  def uri(i: String): Label

  type LanguageTag = Symbol
  def plain(s: String, lang: Option[LanguageTag]): Literal
  def typed(s: String, dt: String): Literal
  def xmllit(content: scala.xml.NodeSeq): Literal
}

This doesn't pin down what a Label is, but in any concrete implementation, you can build one from a String using the uri method. The RDFNodeBuilder trait is used to implement RDF/XML, RDFa, and turtle parsers that are agnostic to the concrete implementation of an RDF graph.
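For concreteness, here is a toy Python analogue of the same factoring (the names are mine, not from swap-scala or any other codebase mentioned here): parsers call factory methods and stay agnostic to the concrete node classes.

```python
# A toy analogue of the RDFNodeBuilder factoring (hypothetical names):
# the parser sees only the factory methods; the concrete node classes
# behind them can be swapped out.
from collections import namedtuple

Label = namedtuple('Label', ['uri'])             # a URI reference
BlankNode = namedtuple('BlankNode', ['id'])
Plain = namedtuple('Plain', ['lexical', 'lang'])
Typed = namedtuple('Typed', ['lexical', 'datatype'])

class SimpleNodeBuilder:
    def uri(self, i):
        return Label(i)
    def plain(self, s, lang=None):
        return Plain(s, lang)
    def typed(self, s, dt):
        return Typed(s, dt)
```

An arc is then a (subject, label, node) triple built entirely through the factory, e.g. `(b.uri('http://example/#bob'), b.uri('http://xmlns.com/foaf/0.1/name'), b.plain('Bob'))` with `b = SimpleNodeBuilder()`.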

Now let's look at terms of first order logic:

 The set of terms is inductively defined by the following rules:

  1. Variables. Any variable is a term.
  2. Functions. Any expression f(t1,...,tn) of n arguments (where each argument ti is a term and f is a function symbol of valence n) is a term.
This is represented straightforwardly in scala a la:
package org.w3.swap.logic1

/** A Term is either a Variable or a FunctionTerm. */
sealed abstract class Term { ... }

class Variable extends Term { ... }

abstract class FunctionTerm() extends Term {
  def fun: Any
  def args: List[Term]
}

The core RDF doesn't cover all of first order logic; it corresponds fairly closely to the conjunctive query fragment:

The conjunctive queries are simply the fragment of first-order logic given by the set of formulae that can be constructed from atomic formulae using conjunction \wedge and existential quantification \exists, but not using disjunction \lor, negation \neg, or universal quantification \forall.

We can then excerpt just the relevant parts of the definition of formulas:

The set of formulas is inductively defined by the following rules:

  1. Predicate symbols. If P is an n-ary predicate symbol and t1, ..., tn are terms then P(t1,...,tn) is a formula.
  2. Binary connectives. If φ and ψ are formulas, then (φ \rightarrow ψ) is a formula. Similar rules apply to other binary logical connectives.
  3. Quantifiers. If φ is a formula and x is a variable, then \forall x \varphi and \exists x \varphi are formulas.
Our scala representation follows straightforwardly:
package org.w3.swap.logic1ec 

sealed abstract class ECFormula
case class Exists(vars: Set[Variable], g: And) extends ECFormula
sealed abstract class Ground extends ECFormula
case class And(fmlas: Seq[Atomic]) extends Ground
case class Atomic(rel: Symbol, args: List[Term]) extends Ground
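In the same spirit, a quick Python sketch of those datatypes (hypothetical names mirroring the Scala case classes above; the Scala is the real representation) shows that a conjunctive formula is just a value:

```python
# Sketch of the conjunctive-query formula datatypes (hypothetical
# Python names mirroring the Scala case classes).
from collections import namedtuple

Variable = namedtuple('Variable', ['name'])
Atomic = namedtuple('Atomic', ['rel', 'args'])   # P(t1, ..., tn)
And = namedtuple('And', ['fmlas'])               # conjunction
Exists = namedtuple('Exists', ['vars', 'g'])     # existential closure

# The formula  \exists x. holds(x, name, "Dan C.")  as a value:
x = Variable('x')
f = Exists(frozenset([x]),
           And((Atomic('holds', (x, 'name', '"Dan C."')),)))
```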

Now that we have scala representations for RDF graphs and conjunctive query formulas, how do we relate them? This is the fun part:

package org.w3.swap.rdflogic

import swap.rdf.RDFNodeBuilder
import swap.logic1.{Term, FunctionTerm, Variable}
import swap.logic1ec.{Exists, And, Atomic, ECProver, ECFormula}

/** RDF has only ground, 0-ary function terms. */
abstract class Ground extends FunctionTerm {
  override def fun = this
  override def args = Nil
}

case class Name(n: String) extends Ground
case class Plain(s: String, lang: Option[Symbol]) extends Ground
case class Data(lex: String, dt: Name) extends Ground
case class XMLLit(content: scala.xml.NodeSeq) extends Ground

/** Implement RDF Nodes (except BlankNode) using FOL function terms. */
trait TermNode extends RDFNodeBuilder {
  type Node = Term
  type SubjectNode = Term
  type Label = Name

  def uri(i: String) = Name(i)

  type Literal = Term
  def plain(s: String, lang: Option[Symbol]) = Plain(s, lang)
  def typed(s: String, dt: String): Literal = Data(s, Name(dt))
  def xmllit(e: scala.xml.NodeSeq): Literal = XMLLit(e)
}

The abstract RDFGraphBuilder node types are implemented as first order logic terms. For formulas, we use a "holds" predicate:

object RDFLogic extends ... {
  def atom(s: Term, p: Term, o: Term): Atomic = {
    Atomic('holds, List(s, p, o))
  }
  def atom(arc: (Term, Term, Term)): Atomic = {
    Atomic('holds, List(arc._1, arc._2, arc._3))
  }
}

Then all the semantic machinery up to simple entailment between RDF graphs just falls out of conjunctive query.
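A sketch of that mapping in Python (hypothetical names, not the swap-scala code): every arc becomes a holds atom, and blank nodes become existential variables.

```python
# Sketch: an RDF graph as one conjunctive formula over a single
# 'holds' predicate (hypothetical names, not swap-scala).
from collections import namedtuple

Variable = namedtuple('Variable', ['name'])   # stands in for a blank node
Atomic = namedtuple('Atomic', ['rel', 'args'])
And = namedtuple('And', ['fmlas'])
Exists = namedtuple('Exists', ['vars', 'g'])

def graph_formula(arcs):
    """Map (s, p, o) arcs to Exists(blank vars, And(holds atoms))."""
    atoms = [Atomic('holds', (s, p, o)) for (s, p, o) in arcs]
    blanks = frozenset(t for a in atoms for t in a.args
                       if isinstance(t, Variable))
    return Exists(blanks, And(atoms))
```

For example, `graph_formula([(Variable('b'), 'foaf:name', '"Dan"')])` yields a formula with one existential variable and one conjunct.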

I haven't done RDFS Entailment yet; the plan is to do basic rules first (N3rules or RIF BLD) and then use that for RDFS, OWL2-RL, and the like.



Existentials in ACL2 and Milawa make sense; how about level breakers?

Submitted by connolly on Wed, 2010-01-20 18:20.

Since my Sep 2006 visit to the ACL 2 seminar, I've been trying to get my head around existentials in ACL2. The lightbulb finally went off this week while reading Jared's Dec 2009 Milawa thesis.

3.7 Provability

Now that we have a proof checker, we can use existential quantification to
decide whether a particular formula is provable. Recall from page 61 the notion
of a witnessing (Skolem) function.
We begin by introducing a witnessing function,
logic.provable-witness, whose defining axiom is as follows.

Definition 92: logic.provable-witness
(por* (pequal* ...))

Intuitively, this axiom can be understood as: if there exists an appeal which is
a valid proof of x, then (logic.provable-witness x axioms thms atbl) is such
an appeal.

Ding! Now I get it.

This witnessing stuff is published in other ACL2 publications, notably:

  • Structured Theory Development for a Mechanized Logic, M. Kaufmann and J Moore, Journal of Automated Reasoning 26, no. 2 (2001), pp. 161-203.

But I can't fit those in my tiny brain.

Thanks, Jared, for explaining it at my speed!

Here's hoping I can turn this new knowledge into code that converts N3Rules to ACL2 and/or Milawa's format. N3Rules covers RDF, RDFS, and, I think, OWL2-RL and some parts of RIF. Roughly what stuff FuXi covers.

I'm somewhat hopeful that the rest of N3 is just quoting. That's the intuition that got me looking into ACL2 and Milawa again after working on some TAG stuff using N3Logic to encode ABLP logic. Last time I tried turning N3 {} terms into lisp quote expressions was when looking at IKL as a semantic framework for N3. I didn't like the results that time; I'm not sure why I expect it to be different this time, but somehow I do...

Another question that's keeping me up at night lately: is there a way to fit level-breakers such as log:uri (or name and denotation, if not wtr from KIF) in the Milawa architecture somehow?


Accountability Appliances: What Lawyers Expect to See - Part II (Structure)

Submitted by kkw on Thu, 2008-01-10 14:16.

Building accountability appliances involves a challenging intersection between business, law, and technology. In my first blog about how to satisfy the legal portion of the triad, I explained that - conceptually - the lawyer would want to know whether particular digital transactions had complied with one or more rules. Lawyers, used to having things their own way, want more... they want to get the answer to that question in a particular structure.

All legal cases are decided using the same structure. As first year law students, we spend a year with highlighter in hand, trying to pick out the pieces of that structure from within the torrent of words of court decisions. Over time, we become proficient and -- like the child who stops moving his lips when he reads -- the activity becomes internalized and instinctive. From then on, we only notice that something's not right by its absence.

The structure is as follows:

  • ISSUE - the legal question that is being answered. Most typically it begins with the word "whether": "Whether the Privacy Act was violated?" Though the bigger question is whether an entire law was violated, because laws tend to have so many subparts and variables, we often frame a much narrower issue based upon a subpart that we think was violated, such as "Whether the computer matching prohibition of the Privacy Act was violated?"
  • RULE - provides the words and the source of the legal requirement. This can be the statement of a particular law, such as "The US Copyright law permits unauthorized use of copyrighted work based upon four conditions - the nature of use, the nature of the work, the amount of the work used, and the likely impact on the value of the work. 17 USC § 107." Or, it can be a rule created by a court to explain how the law is implemented in practical situations: "In our jurisdiction, there is no infringement of a copyrighted work when the original is distributed widely for free because there is no diminution of market value. Field v. Google, Inc., 412 F. Supp 2d. 1106 (D.Nev. 2006)." [Note: The explanation of the citation formats for the sources has filled books and blogs. Here's a good brief explanation from Cornell.]
  • FACTS - the known or asserted facts that are relevant to the rule we are considering and the source of the information. In a Privacy Act computer matching case, there will be assertions like "the defendant's CIO admitted in deposition that he matched the deadbeat dads list against the welfare list and if there were matches he was to divert the benefits to the custodial parent." In a copyright case fair use case, a statement of facts might include "plaintiff admitted that he has posted the material on his website and has no limitations on access or copying the work."
  • ANALYSIS - is where the facts are pattern-matched to the rule. "The rule does not permit US persons to lose benefits based upon computer matched data unless certain conditions are met. Our facts show that many people lost their welfare benefits after the deadbeat data was matched to the welfare rolls without any of the other conditions being met." Or "There can be no finding of copyright infringement where the original work was so widely distributed for free that it had no market value. Our facts show that Twinky Co. posted its original material on the web on its own site and every other site where it could gain access without any attempt to control copying or access."

  • CONCLUSION - whether a violation has or has not occurred. "The computer matching provision of the Privacy Act was violated." or "The copyright was not infringed."

In light of this structure, we've been working on parsing the tremendous volume of words into their bare essentials so that they can be stored and computed to determine whether certain uses of data occurred in compliance with law. Most of our examples have focused on privacy.

Today, the number of sub-rules, elements of rules, and facts are often so voluminous that there is not enough time for a lawyer or team of lawyers to work through them all. So, the lawyer guesses what's likely to be a problem and works from there; the more experienced or talented the lawyer, the more likely that the guess leads to a productive result. Conversely, this likely means that many violations are never discovered. One of the great benefits of our proposed accountability appliance is that it could quickly reason over a massive volume of sub-rules, elements, and facts to identify the transactions that appear to violate a rule or for which there's insufficient information to make a determination.

Although we haven't discussed it, I think there also will be a benefit to be derived from all of the reasoning that concludes that activities were compliant. I'm going to try to think of some high value examples.



Two additional blogs are coming:

Physically, what does the lawyer expect to see? At the simplest level, lawyers are expecting to see things in terms they recognize and without unfamiliar distractions; even the presence of things like curly brackets or metatags will cause most to insist that the output is unreadable. Because there is so much information, visualization tools present opportunities for presentations that will be intuitively understood.


The 1st Lawyer to Programmer/Programmer to Lawyer Dictionary! Compliance, auditing, privacy, and a host of other topics now have lawyers and system developers interacting regularly. As we've worked on DIG, I've noticed how the same words (e.g., rules, binding, fact) have different meanings.


I can only imagine...

Submitted by connolly on Sun, 2007-12-09 09:30.

I have a new bookmark. No, not a bookmark; not some bits in a file. This is the kind you have to go there to get... go to Cleveland, that is. It reads:

Thank you
for you love & support
for the Ikpia & Ogbuji families
At this time of real need.
We will never forget
Imose, Chica, & Anya

Abundant Life International Church
Highland heights, OH

After working with Chime for a year or so on the GRDDL Working Group (he was the difference between a hodge-podge of test files and a nicely organized GRDDL Test Cases technical report), I was really excited to meet him at the W3C Technical Plenary in Cambridge in early November. His Fuxi work is one of the best implementations of the way I think semantic web rules and proofs should go. When he told me some people didn't see the practical applications, it made me want to fly there and tell them how I think this will save lives and end world hunger.

So this past Tuesday, when I read the news about his family, the only way I could make my peace with it was to go and be with him. I can only imagine what he is going through. Eric Miller and Brian and David drove me to the funeral, but the line to say hi to the family was too long. And the interment service didn't really provide an opportunity to talk. So I was really glad that after I filled my plate at the reception, a seat across from Chime and Roschelle opened up for me and I got to sit and share a meal with them.

Grandpa Linus was at the table, too. His eulogy earlier at the funeral ended with the most inspiring spoken rendition of a song that I have ever heard:

Now The Hacienda's Dark The Town Is Sleeping
Now The Time Has Come To Part The Time For Weeping
Vaya Con Dios My Darling
Vaya Con Dios My Love

His eulogy is also posted as Grandparents' lament on The Kingdom Kids web site, along with details about a fund to help the family get back on their feet.

Units of measure and property chaining

Submitted by connolly on Tue, 2007-07-31 13:42.

We're long overdue for standard URIs for units of measure in the Semantic Web.

The SUMO stuff has a nice browser (e.g. see meter), a nice mapping from wordnet, and nice licensing terms. Of course, it's not RDF-native. In particular, it uses n-ary relations in the form of functions of more than one argument; 1 hour is written (&%MeasureFn 1 &%HourDuration). I might be willing to work out a mapping for that, but other details in the KIF source bother me a bit: a month is modelled conservatively as something between 28 and 31 days, but a year is exactly 365 days, despite leap-years. Go figure.

There's a nice Units in MathML note from November 2003, but all the URIs are incomplete, e.g. http://.../units/yard .

The Sep 2006 OWL Time Working Draft has full URIs such as, but its approach to n-ary relations is unsound, as I pointed out in a Jun 2006 comment.

Tim sketched the Interpretation Properties idiom back in 1998; I don't suppose it fits in OWL-DL, but it appeals to me quite a bit as an approach to units of measure. He just recently fleshed out some details in Units of measure are modelled as properties that relate quantities to magnitudes; for example:

 track length [ un:mile 0.25].

This Interpretation Properties approach allows us to model composition of units in the natural way:

W is o2:chain of (A V).

where o2:chain is like property chaining in OWL 1.1 (we hope).

Likewise, inverse units are modelled as inverse properties:

s a Unit; rdfs:label "s".
hz rdfs:label "Hz"; owl:inverseOf s.

Finally, scalar conversions are modelled using product; for example, mile is defined in terms of meter like so:

(m 0.0254) product inch.
(inch 12) product foot.
(foot 3) product yard.
(yard 22) product chain.
(chain 10) product furlong.
(furlong 8) product mile.

I supplemented his ontology with some test/example cases, unit_ex.n3, and then added a few rules to flesh out the modelling. These rules convert between meters and miles:

# numeric multiplication associates with unit multiplication
{ (?U1 ?S1) un:product ?U2.
(?U2 ?S2) un:product ?U3.
(?S1 ?S2) math:product ?S3
} => { (?U1 ?S3) un:product ?U3 }

# scalar conversions between units
{ ?X ?UNIT ?V.
(?BASE ?CONVERSION) un:product ?UNIT.
(?V ?CONVERSION) math:product ?V2.
} => { ?X ?BASE ?V2 }.

Put them together and out comes:

    ex:track     ex:length  [
:chain 20.0;
:foot 1320.0;
:furlong 2.0;
:inch 15840.0;
:m 402.336;
:mile 0.25;
:yard 440.0 ] .
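The arithmetic behind that output can be replayed in a few lines of Python; this is a sketch of the numeric side only, not the N3 rules themselves.

```python
# Replaying the conversion chain numerically (a sketch, not the N3
# rules). Factors from the post: inch = 0.0254 m; foot = 12 in;
# yard = 3 ft; chain = 22 yd; furlong = 10 chains; mile = 8 furlongs.
METERS_PER = {'m': 1.0}
METERS_PER['inch'] = 0.0254
METERS_PER['foot'] = METERS_PER['inch'] * 12
METERS_PER['yard'] = METERS_PER['foot'] * 3
METERS_PER['chain'] = METERS_PER['yard'] * 22
METERS_PER['furlong'] = METERS_PER['chain'] * 10
METERS_PER['mile'] = METERS_PER['furlong'] * 8

def convert(value, from_unit, to_unit):
    """Scalar conversion via the meter, like chaining un:product factors."""
    return value * METERS_PER[from_unit] / METERS_PER[to_unit]
```

`convert(0.25, 'mile', 'm')` gives 402.336, matching the :m figure above.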

The rules I wrote for pushing conversion factors into chains aren't fully general, but they work in cases like converting from this:

(un:foot un:hz) o2:chain fps.
bullet speed [ fps 4000 ].

to this:

    ex:bullet     ex:speed  [
ex:fps 4000;
:mps 1219.2 ] .

As I say, I find this approach quite appealing. I hope to discuss it with people working on units of measure in development of a Delivery Context Ontology.

IKL by Hayes et al. provides a semantics for N3?

Submitted by connolly on Thu, 2007-05-17 14:25.

On my trip to Duke, just after I arrived on Thursday, Pat Hayes gave a talk about IKL; it's a logic with nice Web-like properties, such as: any collection of well-formed IKL sentences is itself well-formed. As he was talking, I saw lots of parallels to N3... propositions as terms, log:uri, etc.

By Friday night I was exhausted from travel, lack of sleep, and conference-going, but I couldn't get the IKL/N3 ideas out of my head, so I had to code it up as another output mode of

The superman case works, though it's a bit surprising that rdf:type gets contextualized along with superman. The thread continues with the case of "if your homepage says you're vegetarian, then for the purpose of registration for this conference, you're vegetarian". I'm still puzzling over Pat's explanation a bit, but it seems to make sense.

Along with the IKL spec and IKL Guide, Pat also suggests:

A design for web content labels built from GRDDL and rules

Submitted by connolly on Thu, 2007-01-25 13:35.

In #swig discussion, Tim mentioned he did some writing on labels and rules and OWL which prompted me to flesh out some related ideas I had. The result is a Makefile and four tests with example labels. One of them is:

All resources on are accessible for all users and meet WAI AA guidelines except those on which are not suitable for users with impaired vision.

I picked an XML syntax out of the air and wrote visaa.lbl:

<wai:AAuser />

And then in testdata.ttl we have:

<> a webarch:InformationResource.
<> a
:charlene a wai:AAuser.

Then we run the test thusly...

$ make visaa_test.ttl
xsltproc --output visaa.rdf label2rdf.xsl visaa.lbl
python ../../../2000/10/swap/ visaa.rdf lblrules.n3 owlAx.n3
testdata.ttl \
--think --filter=findlabels.n3 --n3 >visaa_test.ttl

and indeed, it concludes:

    <>     lt:suitableFor :charlene .

but doesn't conclude that pg2needsVision is OK for charlene.

The .lbl syntax is RDF data via GRDDL and label2rdf.xsl. Then owlAx.n3 contains rules derived from the RDFS and OWL specs; i.e. stuff that's already standard. As Tim wrote, A label is a fairly direct use of OWL restrictions. This is very much the sort of thing OWL is designed for. Only the lblrules.n3 bit goes beyond what's standardized, and it's written in the N3 Rules subset of N3, which, assuming a few built-ins, maps pretty neatly to recent RIF designs.

A recent item from Bijan notes a SPARQL-rules design by Axel; I wonder if these rules fit in that design too. I hope to take a look soonish.

She's a witch and I have the proof (in N3)

Submitted by connolly on Tue, 2007-01-02 22:28.

A while back, somebody turned the Monty Python Burn the Witch sketch into an example resolution proof. Bijan and Kendall had some fun turning it into OWL.

I'm still finding bugs pretty regularly, but the cwm/n3 proof stuff is starting to mature; it works for a few PAW demo scenarios. Ralph asked me to characterize the set of problems it works for. I don't have a good handle on that, but this witch example seems to be in the set.

Transcribing the example resolution FOL KB to N3 is pretty straightforward; the original is preserved in the comments:

@prefix : <witch#>.
@keywords is, of, a.

#[1] BURNS(x) /\ WOMAN(x) => WITCH(x)

{ ?x a BURNS. ?x a WOMAN } => { ?x a WITCH }.


#[3] \forall x, ISMADEOFWOOD(x) => BURNS(x)
{ ?x a ISMADEOFWOOD. } => { ?x a BURNS. }.

#[4] \forall x, FLOATS(x) => ISMADEOFWOOD(x)
{ ?x a FLOATS } => { ?x a ISMADEOFWOOD }.



#[6] \forall x,y FLOATS(x) /\ SAMEWEIGHT(x,y) => FLOATS(y)

{ ?x a FLOATS. ?x SAMEWEIGHT ?y } => { ?y a FLOATS }.

# and, by experiment


Then we run cwm to generate the proof and then run the proof checker in report mode:

$ witch.n3  --think --filter=witch-goal.n3  --why >witch-pf.n3
$ --report witch-pf.n3 >witch-pf.txt

The report is plain text; I'll enrich it just a bit here. Note that in the N3 proof format, some formulas are elided. It makes some sense not to repeat the whole formula you get by parsing an input file, but I'm not sure why cwm elides results of rule application. It seems to give the relevant formula on the next line, at least:

  1. ...
    [by parsing <witch.n3>]

  2. :GIRL a :WOMAN .
    [by erasure from step 1]

    [by erasure from step 1]

  4. :DUCK a :FLOATS .
    [by erasure from step 1]

  5. @forAll :x, :y . { :x a wit:FLOATS; wit:SAMEWEIGHT :y . } log:implies {:y a wit:FLOATS . } .
    [by erasure from step 1]

  6. ...
    [by rule from step 5 applied to steps [3, 4]
    with bindings {'y': '<witch#GIRL>', 'x': '<witch#DUCK>'}]

  7. :GIRL a :FLOATS .
    [by erasure from step 6]

  8. @forAll :x . { :x a wit:FLOATS . } log:implies {:x a wit:ISMADEOFWOOD . } .
    [by erasure from step 1]

  9. ...
    [by rule from step 8 applied to steps [7]
    with bindings {'x': '<witch#GIRL>'}]

    [by erasure from step 9]

  11. @forAll :x . { :x a wit:ISMADEOFWOOD . } log:implies {:x a wit:BURNS . } .
    [by erasure from step 1]

  12. ...
    [by rule from step 11 applied to steps [10]
    with bindings {'x': '<witch#GIRL>'}]

  13. :GIRL a :BURNS .
    [by erasure from step 12]

  14. @forAll witch:x . { witch:x a :BURNS, :WOMAN . } log:implies {witch:x a :WITCH . } .
    [by erasure from step 1]

  15. ...
    [by rule from step 14 applied to steps [2, 13]
    with bindings {'x': '<witch#GIRL>'}]

  16. :GIRL a :WITCH .
    [by erasure from step 15]

All the files are in the swap/test/reason directory: witch.n3, witch-goal.n3, witch-pf.n3, witch-pf.txt. Enjoy.
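The derivation above can be replayed with a naive forward chainer; here's a sketch (not cwm). The facts below include ones elided from the excerpt but recoverable from the proof steps: the girl is a woman, the duck floats, and the duck is the same weight as the girl.

```python
# A naive forward chainer replaying the witch example (a sketch, not
# cwm). Facts are triples; rule variables are '?x'-style strings.
def match(pattern, fact, binding):
    """Extend binding so pattern equals fact, or return None."""
    b = dict(binding)
    for p, f in zip(pattern, fact):
        if p.startswith('?'):
            if b.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return b

def solve(body, facts, binding):
    """Yield every binding that satisfies all patterns in body."""
    if not body:
        yield binding
        return
    for fact in facts:
        b = match(body[0], fact, binding)
        if b is not None:
            yield from solve(body[1:], facts, b)

def close(rules, facts):
    """Apply the rules to a fixed point (simple forward closure)."""
    facts = set(facts)
    while True:
        new = {tuple(b.get(t, t) for t in head)
               for body, head in rules
               for b in solve(body, facts, {})}
        if new <= facts:
            return facts
        facts |= new

RULES = [
    ([('?x', 'a', 'BURNS'), ('?x', 'a', 'WOMAN')], ('?x', 'a', 'WITCH')),
    ([('?x', 'a', 'ISMADEOFWOOD')], ('?x', 'a', 'BURNS')),
    ([('?x', 'a', 'FLOATS')], ('?x', 'a', 'ISMADEOFWOOD')),
    ([('?x', 'a', 'FLOATS'), ('?x', 'SAMEWEIGHT', '?y')],
     ('?y', 'a', 'FLOATS')),
]
FACTS = {('GIRL', 'a', 'WOMAN'), ('DUCK', 'a', 'FLOATS'),
         ('DUCK', 'SAMEWEIGHT', 'GIRL')}
```

Running `close(RULES, FACTS)` derives the goal ('GIRL', 'a', 'WITCH'), mirroring steps 6 through 16 of the report.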

ACL 2 seminar at U.T. Austin: Toward proof exchange in the Semantic Web

Submitted by connolly on Sat, 2006-09-16 21:15.


In our PAW and TAMI projects, we're making a lot of progress on the practical aspects of proof exchange: in PAW we're working out the nitty gritty details of making an HTTP client (proxy) and server that exchange proofs, and in TAMI, we're working on user interfaces for audit trails and justifications and on integration with a truth maintenance system.

It doesn't concern me too much that cwm does some crazy stuff when finding proofs; it's the proof checker that I expect to deploy as part of trusted computing bases and the proof language specification that I hope will complete the Semantic Web standards stack.

But N3 proof exchange is no longer a completely hypothetical problem; the first examples of interoperating with InferenceWeb (via a mapping to PML) and with Euler are working. So it's time to take a close look at the proof representation and the proof theory in more detail.

My trip to Austin for a research library symposium at the University of Texas gave me a chance to re-connect with Bob Boyer. A while back, I told him about RDF and asked him about Semantic Web logic issues and he showed me the proof checking part of McCune's Robbins Algebras Are Boolean result:

Proofs found by programs are always questionable. Our approach to this problem is to have the theorem prover construct a detailed proof object and have a very simple program (written in a high-level language) check that the proof object is correct. The proof checking program is simple enough that it can be scrutinized by humans, and formal verification is probably feasible.

In my Jan 2000 notes, that excerpt is followed by...

I offer a 500 brownie-point bounty to anybody who converts it to Java and converts the ()'s in the input format to <>'s.

5 points for perl. ;-)

Bob got me invited to the ACL2 seminar this week; in my presentation, Toward proof exchange in the Semantic Web, I reviewed a bit of Web Architecture and the standardization status of RDF, RDFS, OWL, and SPARQL as background to demonstrating that we're close to collecting that bounty. (Little did I know in 2000 that TimBL would pick up python so that I could avoid Java as well as perl ;-)

Matt Kaufmann and company gave all sorts of great feedback on my presentation. I had to go back to the Semantic Web Wave diagram a few times to clarify the boundary between research and standardization:

  • RDF is fully standardized/ratified
  • turtle has the same expressive capability as RDF's XML syntax, but isn't fully ratified, and
  • N3 goes beyond the standards in both syntax and expressiveness

One of the people there who knew about RDF and OWL and such really encouraged me to get N3/turtle done, since every time he does any Semantic Web advocacy, the RDF/XML syntax is a deal-killer. I tried to show them my work on a turtle bnf, but what I was looking for was in June mailing list discussion, not in my February bnf2turtle breadcrumbs item.

They asked what happens if an identifier is used before it appears in an @forAll directive and I had to admit that I could test what the software does if they wanted to, but I couldn't be sure whether that was by design or not; exactly how quantification and {}s interact in N3 is sort of an open issue, or at least something I'm not quite sure about.

Moore noticed that our conjunction introduction (CI) step doesn't result in a formula whose main connective is conjunction; the conjunction gets pushed inside the quantifiers. It's not wrong, but it's not traditional CI either.

I asked about ACL2's proof format, and they said what goes in an ACL2 "book" is not so much a proof as a sequence of lemmas and such, but Jared was working on Milawa, a simple proof checker that can be extended with new proof techniques.

I started talking a little after 4pm; different people left at different times, but it wasn't until about 8 that Matt realized he was late for a squash game and headed out.

I went back to visit them in the U.T. tower the next day to follow up on ACL2/N3 connections and Milawa. Matt suggested a translation of N3 quantifiers and {}s into ACL2 that doesn't involve quotation. He offered to guide me as I fleshed it out, but I only got as far as installing lisp and ACL2; I was too tired to get into a coding fugue.

Jared not only gave me some essential installation clues, but for every technical topic I brought up, he printed out two papers showing different approaches. I sure hope I can find time to follow up on at least some of this stuff.


Equality and inconsistency in the rules layer

Submitted by connolly on Mon, 2006-06-05 17:13.

In the working group that originally did RDFS, there wasn't enough confidence to standardize things that could express inconsistencies, such as disjointness of classes, nor to do equality reasoning a la foaf:mbox. The 2004 versions of RDF and RDFS are now properly grounded in mathematical logic, and OWL specifies owl:disjointWith and owl:differentFrom as well as owl:InverseFunctionalProperty and owl:sameAs.

I have gotten quite a bit of mileage out of N3 rules that capture at least some of the meaning of these constructs. If we feed this to cwm or Euler:

{ ?p a owl:InverseFunctionalProperty.
?u ?p ?w.
?v ?p ?w.
} => { ?u = ?v }.

:Dan foaf:mbox <>;
foaf:name "Dan C.".
:Daniel foaf:mbox <>;
foaf:name "Daniel W. Connolly".

foaf:mbox a owl:InverseFunctionalProperty.

then they will conclude...

:Dan = :Daniel.
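The IFP rule can be replayed procedurally with a short sketch (not cwm's algorithm; the mailbox value is hypothetical, since the post elides it):

```python
# Sketch of the owl:InverseFunctionalProperty rule over ground triples:
# two subjects sharing a value for an IFP denote the same thing.
# (Hypothetical mailbox value; the post elides the real one.)
def ifp_same_as(graph, ifps):
    """Return (u, v) pairs that the IFP rule concludes are equal."""
    return {(u, v)
            for (u, p, w) in graph if p in ifps
            for (v, q, x) in graph
            if q == p and x == w and u != v}

GRAPH = {(':Dan', 'foaf:mbox', '<mailto:dan@example>'),
         (':Dan', 'foaf:name', '"Dan C."'),
         (':Daniel', 'foaf:mbox', '<mailto:dan@example>'),
         (':Daniel', 'foaf:name', '"Daniel W. Connolly"')}
```

`ifp_same_as(GRAPH, {'foaf:mbox'})` yields the (:Dan, :Daniel) equality in both directions; the differing foaf:name values trigger nothing, since foaf:name is not an IFP.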

And we can use cwm's equality reasoning mode or explicit substitution-of-equals-for-equals rules a la { ?S1 ?P ?O. ?S1 = ?S2 } => { ?S2 ?P ?O } to conclude...

:Dan     = :Dan,
foaf:mbox <>;
foaf:name "Dan C.",
"Daniel W. Connolly" .

To capture owl:disjointWith, I use:

{ [ is rdf:type of ?x ] owl:disjointWith
[ is rdf:type of ?y ] }
=> { ?x owl:differentFrom ?y }.

Taking an example from the DAML walkthru that I worked on with Lynn Stein and Deb McGuinness back in 2000:

    :Female     a rdfs:Class;
daml:disjointWith :Male;
rdfs:subClassOf :Animal .

... and adding :bob a :Male. :jill a :Female gives :bob owl:differentFrom :jill. And from :pat a :Male, :Female, we get :pat owl:differentFrom :pat which is clearly inconsistent.
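That reduction can be checked with a small sketch (not cwm); a pair (x, x) in the output is exactly the inconsistency:

```python
# Sketch: reducing owl:disjointWith to owl:differentFrom over rdf:type
# facts, following the rule above.
def different_from(types, disjoint):
    """types: set of (individual, class); disjoint: set of (class, class)."""
    return {(x, y)
            for (x, c) in types
            for (y, d) in types
            if (c, d) in disjoint}

TYPES = {(':bob', ':Male'), (':jill', ':Female'),
         (':pat', ':Male'), (':pat', ':Female')}
DISJOINT = {(':Male', ':Female'), (':Female', ':Male')}
```

Here the result contains (:bob, :jill) as expected, and also (:pat, :pat), the tell-tale clash.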

It's pretty straightforward to write rules to express these features; the axiomatic semantics from Fikes and McGuinness in 2001 represents them in KIF. It's much less straightforward to be sure there are no bugs, such as the inconsistency reported by Baclawski of the UBOT project. So I think ter Horst's paper, Completeness, decidability and complexity of entailment for RDF Schema and a semantic extension involving the OWL vocabulary is a particularly interesting contribution. My study of his rules is in owlth.n3. In many cases, the rules are the ones I have been using for years:

# rdfp1
{ ?p a owl:FunctionalProperty.
?u ?p ?v.
?u ?p ?w.
} => { ?v = ?w }.

But the semantics he gives does not always interpret owl:sameAs as identity; I'd like to understand better why not. He looks for clashes of the form ?v owl:differentFrom ?w; owl:sameAs ?w and ?v owl:disjointWith ?w. ?u a ?v, ?w; isn't it enough to just look for ?x owl:differentFrom ?x and reduce owl:disjointWith to owl:differentFrom by the rule I gave above?
