Public Policy and the Web

Accountability Appliances: What Lawyers Expect to See - Part III (User Interface)

I've written in the last two blogs about how lawyers operate in a very structured environment. This will have a tremendous impact on what they'll consider acceptable in a user interface. They might accept something which seems a bit like an outline or a form, but years of experience tell me that they will rail at anything code-like.

For example, we see

:MList a rdf:List

and automatically read

"MList" is the name of a list written in rdf


air:pattern {

and know that we are asking our system to look for a pattern in the data in which a particular "member" is in a particular list of members. Perhaps because learning the law is already learning to read, speak, and think in another language, most lawyers look at lines like those above and see no meaning.
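As a toy illustration (not the project's actual parser), a few lines of Python can make explicit the translation a technologist does instinctively; `triple_to_english` is an invented helper and handles only this one pattern:

```python
def triple_to_english(line: str) -> str:
    """Render a single N3-style triple like ':MList a rdf:List' in plain English.

    A toy illustration only; real N3/Turtle needs a proper parser.
    """
    subj, pred, obj = line.strip().rstrip(" .").split()
    subj = subj.lstrip(":")
    if pred == "a":
        # 'a' abbreviates rdf:type; split the object into prefix and class name
        prefix, _, cls = obj.partition(":")
        return f'"{subj}" is the name of a {cls.lower()} written in {prefix}'
    return f'"{subj}" {pred} "{obj}"'

print(triple_to_english(":MList a rdf:List"))
# "MList" is the name of a list written in rdf
```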

Our current work-in-progress produces output that includes:

bjb reject bs non compliant with S9Policy 1


phone record 2892 category HealthInformation


bs request instruction bs request content
type Request
bs request content intended beneficiary customer351
type Benefit Action Instruction
customer351 location MA
xphone record 2892 about customer351

Nearly every output item is a hotlink to something which provides definition, explanation, or derivation. Much of it is in "Tabulator", the cool tool that aggregates just the bits of data we want to know.

From a user-interface-for-lawyers perspective, this version of output is an improvement over our earlier ones because it removes a lot of things programmers do to solve computation challenges. It removes colons and semi-colons from places they're not commonly used in English (e.g., at the beginning of a term) and mostly uses words that are known to the general population. It also parses "humpbacks" - the programmers' traditional concatenation of a string of words - back into separate words. And it replaces hyphens and underscores - also used for concatenation - with blank spaces.
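That clean-up step can be sketched in a few lines; `humanize` here is a hypothetical stand-in for whatever the real pipeline does:

```python
import re

def humanize(identifier: str) -> str:
    """Turn a programmer-style identifier into plain words:
    replace hyphens/underscores with spaces and split "humpbacks"
    (camel case) at lower-to-upper boundaries."""
    # hyphens and underscores become spaces
    s = re.sub(r"[-_]+", " ", identifier)
    # insert a space where a lowercase letter or digit meets an uppercase letter
    s = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", s)
    # also split an acronym from a following capitalized word (e.g. "XMLFile")
    s = re.sub(r"(?<=[A-Z])(?=[A-Z][a-z])", " ", s)
    return s

print(humanize("HealthInformation"))   # Health Information
print(humanize("phone_record-2892"))   # phone record 2892
```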

At last week's meeting, we talked about the possibility of generating output which simulates short English sentences. These might be stilted but would be most easily read by lawyers. Here's my first attempt at the top-level template:


Issue: Whether the transactions in [TransactionLogFilePopularName] {about [VariableName] [VariableValue]} comply with [MasterPolicyPopularName]?

Rule: To be compliant, [SubPolicyPopularName] of [MasterPolicyPopularName] requires [PatternVariableName] of an event to be [PatternValue1].

Fact: In transaction [TransactionNumber] [PatternVariableName] of the event was [PatternValue2].

Analysis: [PatternValue2] is not [PatternValue1].

Conclusion: The transactions appear to be non-compliant with [SubPolicyPopularName] of [MasterPolicyPopularName].

This seems to me approximately correct in the context of requests for the appliance to reason over millions of transactions with many sub-rules. A person seeking an answer from the system would create the Issue question. The Issue question is almost always going to ask whether some series of transactions violated a super-rule and often will have a scope limiter (e.g., in regards to a particular person or within a date scope or by one entity), denoted here by {}.
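Filling that template is straightforward string substitution; the bindings below are hypothetical stand-ins for what the reasoner would actually supply:

```python
# The five-part template, with placeholders for the reasoner's bindings.
TEMPLATE = """\
Issue: Whether the transactions in {log}{scope} comply with {master}?
Rule: To be compliant, {sub} of {master} requires {var} of an event to be {value1}.
Fact: In transaction {txn} {var} of the event was {value2}.
Analysis: {value2} is not {value1}.
Conclusion: The transactions appear to be non-compliant with {sub} of {master}."""

# Hypothetical bindings for Scenario 9, Transaction 15.
bindings = {
    "log": "Xphone's Customer Service Log",
    "scope": " about Person Bob Same",   # the optional {} scope limiter
    "master": "MA Disability Discrimination Law",
    "sub": "Denial of Service Rule",
    "var": "reason",
    "txn": "Xphone Record 2892",
    "value1": "other than disability",
    "value2": "Infectious Disease",
}

print(TEMPLATE.format(**bindings))
```

For multiple violations, the display logic would print the Issue line once and repeat the Rule-through-Conclusion lines for each non-compliant result.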

From the lawyer perspective, the interesting part of the result is the finding of non-compliance or possible non-compliance. So, the remainder of the output would be generated to describe only the failure(s) in a pattern-matching for one or more sub-rules. If there's more than one violation, the interface would display the Issue once and then the Rule to Conclusion steps for each non-compliant result.

I tried this out on a lawyer I know. He insisted it was unintelligible when the []'s were left in but said it was manageable when he saw the same text without them.

For our Scenario 9, Transaction 15, an idealized top level display would say:

Issue: Whether the transactions in Xphone's Customer Service Log about Person Bob Same comply with MA Disability Discrimination Law?

Rule: To be compliant, Denial of Service Rule of MA Disability Discrimination Law requires reason of an event to be other than disability.

Fact: In transaction Xphone Record 2892 reason of the event was Infectious Disease.

Analysis: Infectious disease is not other than disability.

Conclusion: The transactions appear to be non-compliant with Denial of Service Rule of MA Disability Discrimination Law.

Each one of the bound values should have a hotlink to a Tabulator display that provides background or details.

Right now, we might be able to produce:

Issue: Whether the transactions in Xphone's Customer Service Log about Betty JB reject Bob Same comply with MA Disability Discrimination Law?

Rule: To be non-compliant, Denial of Service Rule of MA Disability Discrimination Law requires REASON of an event to be category Health Information.

Fact: In transaction Xphone Record 2892 REASON of the event was category Health Information.

Analysis: category Health Information is category Health Information.

Conclusion: The transactions appear to be non-compliant with Denial of Service Rule of MA Disability Discrimination Law.

This example highlights a few challenges.

1) It's possible that only failures of policies containing comparative matches (e.g., :v1 sameAs :v2; :v9 greaterThan :v3; :v12 withinDateRange :v4) are legally relevant. This needs more thought.

2) We'd need to name every sub-policy or have a default called UnnamedSubPolicy.

3) We'd need to be able to translate statute numbers to popular names and have a default instruction to include the statute number when no popular name exists.

4) We'd need some taxonomies (e.g., infectious disease is a sub-class of disability).

5) In a perfect world, we'd have some way to trigger a couple of alternative displays. For example, it would be nice to be able to trigger one of two rule structures: either one that says a rule requires a match or one that says a rule requires a non-match. The reason is that if we always have to use the same structure, about half of the outputs will be very stilted and cause the lawyers to struggle to understand.

6) We need some way to deal with things the system can't reason about. If the law requires the reason to be disability and the system doesn't know whether health information is the same as or different from disability, then it ought to be able to produce an analysis that says something along the lines of "The relationship between Health Information and disability is unknown" and a conclusion that says "Whether the transaction is compliant is unknown." If we're reasoning over millions of transactions, there are likely to be quite a few of these, and they ought to be presented after the non-compliant ones.
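The sixth challenge amounts to three-valued logic. Here is a minimal sketch, with an invented `TAXONOMY` and `assess` helper, of classifying each transaction and presenting unknowns after the non-compliant ones:

```python
from enum import Enum

class Compliance(Enum):
    NON_COMPLIANT = 0   # present these first
    UNKNOWN = 1         # then the ones the system couldn't decide
    COMPLIANT = 2

# Hypothetical sub-class taxonomy: child -> parent.
TAXONOMY = {"Infectious Disease": "disability"}

def assess(observed: str, prohibited: str) -> Compliance:
    """Is the observed reason the prohibited one, or a sub-class of it?
    Returns UNKNOWN when the taxonomy says nothing about the observed value."""
    if observed == prohibited:
        return Compliance.NON_COMPLIANT
    if observed not in TAXONOMY:
        return Compliance.UNKNOWN   # relationship to the prohibited class is unknown
    value = observed
    while value in TAXONOMY:        # walk up the sub-class chain
        value = TAXONOMY[value]
        if value == prohibited:
            return Compliance.NON_COMPLIANT
    return Compliance.COMPLIANT

results = [("Xphone Record 2892", assess("Infectious Disease", "disability")),
           ("Xphone Record 3001", assess("Health Information", "disability"))]
# sort so non-compliant transactions come first, unknowns after
results.sort(key=lambda r: r[1].value)
for txn, verdict in results:
    print(txn, verdict.name)
```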



Accountability Appliances: What Lawyers Expect to See - Part II (Structure)

Submitted by kkw on Thu, 2008-01-10 14:16.

Building accountability appliances involves a challenging intersection between business, law, and technology. In my first blog about how to satisfy the legal portion of the triad, I explained that - conceptually - the lawyer would want to know whether particular digital transactions had complied with one or more rules. Lawyers, used to having things their own way, want more... they want to get the answer to that question in a particular structure.

All legal cases are decided using the same structure. As first year law students, we spend a year with highlighter in hand, trying to pick out the pieces of that structure from within the torrent of words of court decisions. Over time, we become proficient and -- like the child who stops moving his lips when he reads -- the activity becomes internalized and instinctive. From then on, we only notice that something's not right by its absence.

The structure is as follows:

  • ISSUE - the legal question that is being answered. Most typically it begins with the word "whether": "Whether the Privacy Act was violated?" Though the bigger question is whether an entire law was violated, because laws tend to have so many subparts and variables, we often frame a much narrower issue based upon a subpart that we think was violated, such as "Whether the computer matching prohibition of the Privacy Act was violated?"
  • RULE - provides the words and the source of the legal requirement. This can be the statement of a particular law, such as "The US Copyright law permits unauthorized use of copyrighted work based upon four conditions - the nature of use, the nature of the work, the amount of the work used, and the likely impact on the value of the work. 17 USC § 107." Or, it can be a rule created by a court to explain how the law is implemented in practical situations: "In our jurisdiction, there is no infringement of a copyrighted work when the original is distributed widely for free because there is no diminution of market value. Field v. Google, Inc., 412 F. Supp 2d. 1106 (D.Nev. 2006)." [Note: The explanation of the citation formats for the sources has filled books and blogs. Here's a good brief explanation from Cornell.]
  • FACTS - the known or asserted facts that are relevant to the rule we are considering and the source of the information. In a Privacy Act computer matching case, there will be assertions like "the defendant's CIO admitted in deposition that he matched the deadbeat dads list against the welfare list and if there were matches he was to divert the benefits to the custodial parent." In a copyright case fair use case, a statement of facts might include "plaintiff admitted that he has posted the material on his website and has no limitations on access or copying the work."
  • ANALYSIS - is where the facts are pattern-matched to the rule. "The rule does not permit US persons to lose benefits based upon computer matched data unless certain conditions are met. Our facts show that many people lost their welfare benefits after the deadbeat data was matched to the welfare rolls without any of the other conditions being met." Or "There can be no finding of copyright infringement where the original work was so widely distributed for free that it had no market value. Our facts show that Twinky Co. posted its original material on the web on its own site and every other site where it could gain access without any attempt to control copying or access."

  • CONCLUSION - whether a violation has or has not occurred. "The computer matching provision of the Privacy Act was violated." or "The copyright was not infringed."

In light of this structure, we've been working on parsing the tremendous volume of words into their bare essentials so that they can be stored and computed to determine whether certain uses of data occurred in compliance with law. Most of our examples have focused on privacy.

Today, the number of sub-rules, elements of rules, and facts is often so voluminous that there is not enough time for a lawyer or team of lawyers to work through them all. So, the lawyer guesses what's likely to be a problem and works from there; the more experienced or talented the lawyer, the more likely that the guess leads to a productive result. Conversely, this likely means that many violations are never discovered. One of the great benefits of our proposed accountability appliance is that it could quickly reason over a massive volume of sub-rules, elements, and facts to identify the transactions that appear to violate a rule or for which there's insufficient information to make a determination.

Although we haven't discussed it, I think there also will be a benefit to be derived from all of the reasoning that concludes that activities were compliant. I'm going to try to think of some high value examples.



Two additional blogs are coming:

Physically, what does the lawyer expect to see? At the simplest level, lawyers are expecting to see things in terms they recognize and without unfamiliar distractions; even the presence of things like curly brackets or metatags will cause most to insist that the output is unreadable. Because there is so much information, visualization tools present opportunities for presentations that will be intuitively understood.


The 1st Lawyer to Programmer/Programmer to Lawyer Dictionary! Compliance, auditing, privacy, and a host of other topics now have lawyers and system developers interacting regularly. As we've worked on DIG, I've noticed how the same words (e.g., rules, binding, fact) have different meanings.


Accountability Appliances: What Lawyers Expect to See - Part I

Submitted by kkw on Wed, 2008-01-02 12:59.

Just before the holidays, Tim suggested I blog about "what lawyers expect to see" in the context of our accountability appliances projects. Unfortunately, being half-lawyer, my first response is that maddening answer of all lawyers - "it depends." And, worse, my second answer is - "it depends upon what you mean by 'see'". Having had a couple of weeks to let this percolate, I think I can offer some useful answers.

Conceptually, what does the lawyer expect to see? The practice of law has a fundamental dichotomy. The law is a world of intense structure -- the minutiae of sub-sub-sub-parts of legal code, the precise tracking of precedents through hundreds of years of court decisions, and so on. But the lawyers valued most highly are not those who are most structured. Instead, it is those who are most creative at manipulating the structure -- conjuring compelling arguments for extending a concept or reading existing law in just enough of a different light to convince others that something unexpected supersedes something expected. In our discussions, we have concluded that an accountability appliance we build now should address the former and not the latter.

For example, a lawyer could ask our accountability appliance if a single sub-rule had been complied with: "Whether the federal Centers for Disease Control was allowed to pass John Doe's medical history from its Epidemic Investigations Case Records system to a private hospital under the Privacy Act Routine Use rules for that system?" Or, he could ask a question which requires reasoning over many rules. Asking "Whether the NSA's data mining of telephone records is compliant with the Privacy Act?" would require reasoning over the nearly thirty sub-rules contained within the Privacy Act and would be a significant technical accomplishment. Huge numbers of hours are spent to answer these sorts of questions and the automation of the more linear analysis would make it possible to audit vastly higher numbers of transactions and to do so in a consistent manner.

If the accountability appliance determined that a particular use was non-compliant, the lawyer could not ask the system to find a plausible exception somewhere in all of law. That would require reasoning, prioritizing, and de-conflicting over possibly millions of rules -- presenting challenges from transcribing all the rules into process-able structure and creating reasoning technology that can efficiently process such a volume. Perhaps the biggest challenge, though, is the ability to analogize. The great lawyer draws from everything he's ever seen or heard about to assimilate into the new situation to his client's benefit. I believe that some of the greatest potential of the semantic web is in the ability to make comparisons -- I've been thinking about a "what's it like?" engine -- but this sort of conceptual analogizing seems still a ways in the future.


Stay tuned for two additional blogs:

Structurally, what does the lawyer expect to see? The common law (used in the UK, most of its former colonies, including the US federal system, and most of US states) follows a standard structure for communicating. Whether a lawyer is writing a motion or a judge is writing a decision, there is a structure embedded within all of the verbiage. Each well-formed discussion includes five parts: issue, rule, fact, analysis, and conclusion.

Physically, what does the lawyer expect to see? At the simplest level, lawyers are expecting to see things in terms they recognize and without unfamiliar distractions; even the presence of things like curly brackets or metatags will cause most to insist that the output is unreadable. Because there is so much information, visualization tools present opportunities for presentations that will be intuitively understood.


The 1st Lawyer to Programmer/Programmer to Lawyer Dictionary! Compliance, auditing, privacy, and a host of other topics now have lawyers and system developers interacting regularly. As we've worked on DIG, I've noticed how the same words (e.g., rules, binding, fact) have different meanings.

On the Future of Research Libraries at U.T. Austin

Submitted by connolly on Sat, 2006-09-16 17:14.

Wow. What a week!

I'm always on the lookout for opportunities to get back to Austin, so I was happy to accept an invitation to this 11 - 12 September symposium, The Research Library in the 21st Century, run by University of Texas Libraries:

In today's rapidly changing digital landscape, we are giving serious thought to shaping a strategy for the future of our libraries. Consequently, we are inviting the best minds in the field and representatives from leading institutions to explore the future of the research library and new developments in scholarly communication. While our primary purpose is to inform a strategy for our libraries and collections, we feel that all participants and their institutions will benefit.

I spent the first day getting a feel for this community, where evidently a talk by Clifford Lynch of CNI is a staple. "There is no scholarship without scholarly communication," he said, quoting Courant. He noted that traditionally, publishers disseminate and libraries preserve, but we're shifting to a world where the library helps disseminate and makes decisions on behalf of the whole world about which works to preserve. He said there's a company (I wish I had made a note of the name) that has worked out the price of an endowed web site; at 4% annual return, they figure it at $2500/gigabyte.

James Duderstadt from the University of Michigan told us that the day when the entire contents of the library fits on an iPod (or "a device the size of a football" for other audiences that didn't know about iPods ;-) is not so far off. He said that the University of Michigan started digitizing their 7.8 million volumes even before becoming a Google Book Search library partner. They initially estimated it would take 10 years, but the current estimate is 6 years and falling. He said that yes, there are copyright issues and other legal challenges, and he wouldn't be surprised to end up in court over it; he had done that before. Even the Sakai project might face litigation. What got the most attention, I think, was when he relayed first-hand experience from the Spellings Commission on the Future of Higher Education; their report is available to those that know where to look, though it is not due for official release until September 26.

He also talked about virtual organizations, i.e. groups of researchers from universities all over, and even the "meta university," with no geographical boundaries at all. That sort of thing fueled my remarks for the Challenges of Access and Preservation panel on the second day. I noted that my job is all about virtual organizations, and if the value of research libraries is connected to recruiting good people, you should keep in mind the fact that "get together and go crazy" events like football games are a big part of building trust and loyalty.

Kevin Guthrie, President of ITHAKA, made a good point that starting new things is usually easier than changing old things, which was exactly what I was thinking when President Powers spoke of "preserving our investment" in libraries in his opening address. U.T. invested $650M in libraries since 1963. That's not counting bricks and mortar; that's special collections, journal subscriptions, etc.

My point that following links is 96% reliable sparked an interesting conversation; it was misunderstood as "96% of web sites are persistent" and then "96% of links persist"; when I clarified that it's 96% of attempts to follow links that succeed, and this is because most attempts to follow links are from one popular resource to another, we had an interesting discussion of ephemera vs. the scholarly record and which parts need what sort of attention and what sort of policies. The main example was that 99% of political websites about the California run-off election went offline right after the election. My main point was: for the scholarly record, HTTP/DNS is as good as it gets for the foreseeable future; don't throw up your hands at the 4% and wait for some new technology; apply your expertise in curation and organizational change to the existing technologies.

In fact, I didn't really get beyond URIs and basic web architecture in my remarks. I had prepared some points about the Semantic Web, but I didn't have time for them in my opening statement and they didn't come up much later in the conversation, except when Ann Wolpert, Director of Libraries at MIT, brought up DSpace a bit.

Betsy Wilson of the University of Washington suggested that collaboration would be the hallmark of the library of the future. I echoed that back in the wrap-up session referring to library science as the "interdisciplinary discipline"; I didn't think I was making that up (and a google search confirms I did not), but it seemed to be new to this audience.

By the end of the event I was pretty much up to speed on the conversation; but on the first day, I felt a little out of place and when I saw the sound engineer getting things ready, I mentioned to him that I had a little experience using and selling that sort of equipment. It turned out that he's George Geranios, sound man for bands like Blue Oyster Cult for about 30 years. We had a great conversation on digital media standards and record companies. I'm glad I sat next to David Seaman of the DLF at lunch; we had a mutual colleague in Michael Sperberg-McQueen. I asked him about IFLA, one of the few acronyms from the conversation that I recognized; he helped me understand that IFLA conferences are relevant, but they're about libraries in general, and the research library community is not the same. And Andrew Dillon got me up to speed on all sorts of things and made the panel I was on fun and pretty relaxed.

Fred Heath made an oblique reference to a New York Times article about moving most of the books out of the U.T. undergraduate library as if everyone knew, but it was news to me. Later in the week I caught up with Ben Kuipers; we didn't have time for my technical agenda of linked data and access limited logic, but we did discover that both of us were a bit concerned with the fragility of civilization as we know it and the value of books over DVDs if there's no reliable electricity.

The speakers' comments at the symposium were recorded; there's some chance that edited transcripts will appear in a special issue of a journal. Stay tuned for that. And stay tuned for more breadcrumbs items on talks I gave later in the week, where I did get beyond the basic HTTP/DNS/URI layer of Semantic Web Architecture.


on Wikimania 2006, from a few hundred miles away

Submitted by connolly on Thu, 2006-08-10 16:26.

Wikimania 2006 was last week in Boston; I had it on my travel schedule, tentatively, months in advance, but I didn't really come up with a solid justification, and there were conflicts, so I ended up not going.

I was very interested to see the online participation options, but I didn't get my hopes up too high, because I know that ConnectingAudiences is challenging.

I tried to participate in the transcription stuff real-time; installation of the Gobby collaborative editor went smoothly enough (it looks like an interesting alternative to SubEthaEdit, though it's client/server, not peer-to-peer; they're talking about switching to the Jabber protocol...) but I couldn't seem to connect to any sessions while people were active in them.

The real-time video feed of mako on a definition of Freedom was surprisingly good, though I couldn't give it my full attention during the work day. I didn't understand the problem he was speaking to (isn't GFDL good enough?) until I listened to Lessig on Free Culture and realized that CC share-alike and GFDL don't interoperate. (Yet another reason to keep the test of independent invention in mind at all times.)

Lessig read this quote, but only referred to the author using a photo that I couldn't see via the audio feed; when I looked it up, I realized there was a gap in this student's free culture education:

If we don't want to live in a jungle, we must change our attitudes. We must start sending the message that a good citizen is one who cooperates when appropriate, not one who is successful at taking from others.

RMS, 1992

These sessions on the wikipedia process look particularly interesting; I hope to find time to see or listen to a recording:

I bumped into TimBL online and reminded him about the Wikipedia and the Semantic Web panel; he had turned it down because of other travel obligations, but he just managed to stop by after all. I hope it went all right; he was pretty jet-lagged.

I see WikiSym 2006 coming up August 21-23, 2006 in Odense, Denmark. I'm not sure I can find justification to make travel plans on just a few weeks of notice. But Denny's hottest conference ever item burns like salt in an open wound and motivates me to give it a try. It looks like the SweetWiki folks, who participate in the GRDDL WG, will be there; that's the start of a justification...

Net Neutrality: This is serious

Submitted by timbl on Wed, 2006-06-21 16:35.

( real video, download m4v )

When I invented the Web, I didn't have to ask anyone's permission. Now, hundreds of millions of people are using it freely. I am worried that that is going to end in the USA.

I blogged on net neutrality before, and so did a lot of other people (see e.g. Danny Weitzner, etc.). Since then, some telecommunications companies have spent a lot of money on public relations and TV ads, and the US House seems to have wavered from the path of preserving net neutrality. There has been some misinformation spread about. So here are some clarifications. ( real video Mpegs to come)

Net neutrality is this:

If I pay to connect to the Net with a certain quality of service, and you pay to connect with that or greater quality of service, then we can communicate at that level.
That's all. It's up to the ISPs to make sure they interoperate so that that happens.

Net Neutrality is NOT asking for the internet for free.

Net Neutrality is NOT saying that one shouldn't pay more money for high quality of service. We always have, and we always will.

There have been suggestions that we don't need legislation because we haven't had it. These are nonsense, because in fact we have had net neutrality in the past -- it is only recently that real explicit threats have occurred.

Control of information is hugely powerful. In the US, the threat is that companies control what I can access for commercial reasons. (In China, control is by the government for political reasons.) There is a very strong short-term incentive for a company to grab control of TV distribution over the Internet even though it is against the long-term interests of the industry.

Yes, regulation to keep the Internet open is regulation. And mostly, the Internet thrives on lack of regulation. But some basic values have to be preserved. For example, the market system depends on the rule that you can't photocopy money. Democracy depends on freedom of speech. Freedom of connection, with any application, to any party, is the fundamental social basis of the Internet, and, now, the society based on it.

Let's see whether the United States is capable of acting according to its important values, or whether it is, as so many people are saying, run by the misguided short-term interests of large corporations.

I hope that Congress can protect net neutrality, so I can continue to innovate in the internet space. I want to see the explosion of innovations happening out there on the Web, so diverse and so exciting, continue unabated.

Neutrality of the Net

Submitted by timbl on Tue, 2006-05-02 15:22.

Net Neutrality is an international issue. In some countries it is addressed better than others. (In France, for example, I understand that the layers are separated, and my colleague in Paris attributes getting 24Mb/s net, a phone with free international dialing, and digital TV for 30 euros/month to the resulting competition.) In the US, there have been threats to the concept, and a wide discussion about what to do. That is why, though I have written and spoken on this many times, I blog about it now.

Twenty-seven years ago, the inventors of the Internet[1] designed an architecture[2] which was simple and general. Any computer could send a packet to any other computer. The network did not look inside packets. It is the cleanness of that design, and the strict independence of the layers, which allowed the Internet to grow and be useful. It allowed the hardware and transmission technology supporting the Internet to evolve through a thousandfold increase in speed, yet still run the same applications. It allowed new Internet applications to be introduced and to evolve independently.

When, seventeen years ago, I designed the Web, I did not have to ask anyone's permission. [3] The new application rolled out over the existing Internet without modifying it. I tried then, and many people still work very hard, to make the Web technology, in turn, a universal, neutral platform. It must not discriminate against particular hardware, software, underlying network, language, culture, disability, or against particular types of data.

Anyone can build a new application on the Web, without asking me, or Vint Cerf, or their ISP, or their cable company, or their operating system provider, or their government, or their hardware vendor.

It is of the utmost importance that, if I connect to the Internet, and you connect to the Internet, we can then run any Internet application we want, without discrimination as to who we are or what we are doing. We pay for connection to the Net as though it were a cloud which magically delivers our packets. We may pay for a higher or a lower quality of service. We may pay for a service which has the characteristics of being good for video, or quality audio. But while we each pay to connect to the Net, no one can pay for exclusive access to me.

When I was a child, I was impressed by the fact that the installation fee for a telephone was everywhere the same in the UK, whether you lived in a city or on a mountain, just as the same stamp would get a letter to either place.

To actually design legislation which allows creative interconnections between different service providers, but ensures neutrality of the Net as a whole, may be a difficult task. It is a very important one. The US should do it now, and, if it turns out to be the only way, be so draconian as to require financial isolation between IP providers and businesses in other layers.

The Internet is increasingly becoming the dominant medium binding us. The neutral communications medium is essential to our society. It is the basis of a fair competitive market economy. It is the basis of democracy, by which a community should decide what to do. It is the basis of science, by which humankind should decide what is true.

Let us protect the neutrality of the net.

  1. Vint Cerf, Bob Kahn and colleagues
  2. TCP and IP
  3. I did have to ask for port 80 for HTTP