There were two immediate and strong criticisms of my RELATIONSHIP post of last Tuesday. The first and broader criticism, by Ian Davis, suggests I’ve misunderstood both the relative newness and the general flexibility of the work — multi-variate relationships can be expressed in multi-variate terms, missing characterizations can be added, and so on.

The second, in a comment by bardia, says that all the objections I raise and more have been discussed by the people on the FOAF list, and that if these were fatal problems, that group, smart as they are, would have caught them.

I want to deal with these in turn, but first I want to re-state my views on the subject, because Ian in particular seems to have misconstrued them as practical objections. For the record, I do not believe that RELATIONSHIP suffers from practical problems; I do not believe that it is underdeveloped, or that there are missing but critical implementation details. I believe instead that it suffers from a philosophical error, one that cannot be fixed by any future iteration of the current line of reasoning. In particular, I believe that a formal and explicit ontology for human relations is unworkable, for several reasons. First, I believe that most such relations cannot be expressed formally — try detailing any reasonably thick relationship of yours using this vocabulary, with any extensions you’d like to add (a sketch of the exercise follows below). You will need nuance that is not there, leading to so many new relations — pitchedBusinessIdeaTo, wasFiredBy, usedToRunWithBackInTheDay — that you will drag the vocabulary down the sink of natural language parsing.
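To make the exercise concrete, here is a minimal sketch of such a description in Turtle syntax. The foaf: terms are standard; the rel: namespace and its friendOf and employedBy terms are assumed from the published RELATIONSHIP vocabulary; the ex: terms are the hypothetical extensions named above, invented here purely for illustration:

    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix rel:  <http://purl.org/vocab/relationship/> .
    @prefix ex:   <http://example.org/terms/> .   # hypothetical extension namespace

    <#me> a foaf:Person ;
        foaf:name "Clay" ;
        rel:friendOf <#liz> ;                  # expressible in the vocabulary
        rel:employedBy <#liz> ;                # multi-variate, so far so good
        ex:pitchedBusinessIdeaTo <#liz> ;      # the nuance the vocabulary lacks
        ex:usedToRunWithBackInTheDay <#dan> .  # and each new nuance needs a new coinage

Every pass at greater fidelity produces more coinages like the ex: terms, which is the sink of natural language parsing described above.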

Next, I believe most human relations cannot be made explicit without changing the nature of the relationship — transient states such as kindOfInLoveWith, thinkingOfSeveringTiesWith, and thisCloseToScreamingAt are simultaneously vital and inexpressible in any straightforward way. Furthermore, other humans can read those states without their ever needing to be rendered explicit. Leaving them out dooms RELATIONSHIP to the shallow end of the expressiveness pool.

(I also believe that ontology is the machine learning of the current age, an immensely appealing notion that will fail to achieve almost everything currently expected of it. However, that is part of a larger conversation, so I’ll set that aside for now.)

To bardia’s comment first: the fact that the issues raised here have already been discussed on the FOAF list doesn’t seem like a serious objection, because the FOAF list is self-selecting for people who believe that human relations can be described in explicit terms.

FOAF list critiques are made by insiders, and insiders and outsiders have, by definition, different points of view. There were many brilliant theorists of Communism, but it took von Hayek to point out that only markets could aggregate the information needed to allocate resources efficiently at scale. Whatever internal criticisms the Communists were subjecting themselves to, and there were many, the ineffectiveness of centralization wasn’t one of them.

In the same way, whatever objections to RELATIONSHIP the members of the FOAF list may have entertained, they were unlikely to conclude that the basic problem, as conceived, was unsolvable.

The second and more substantive criticism, from Ian, is the idea that the RELATIONSHIP vocabulary is both flexible and extensible. This is a subtler critique, and the fact that Ian raised it suggests poor writing on my part — I didn’t spell out my objections carefully enough in the earlier piece.

You can see this misunderstanding when Ian says “Without these vocabularies, incomplete and imperfect as they are, we would be mute in the machine readable web, unable to express ourselves in any meaningful way.” Note the sense of inevitability here — if my critique were correct, there would be aspects of human life that could not be rendered sensible to machines. Ian seems to regard this as unthinkable, and therefore assumes I must not really believe what I seem to be saying.

But of course I _do_ believe that there are aspects of human life that cannot be rendered sensible to machines. This is the AI argument of the last 50 years recapitulated as a conversation about social intelligence.

The AI debate, in its broadest form, involved two theories of the relation between machine and human intelligence — difference of degree and difference of kind. The difference-of-degree camp (Minsky, Kurzweil) assumes human intelligence is just fancy computation, and therefore that more computing power will be enough to create artificial intelligence. (As Hubert Dreyfus noted, they seem to think this not because they have evidence that this is how the mind works, but because if it isn’t true, we won’t be able to make computers think, which violates a core theological tenet of AI.) The difference-of-kind camp (Dreyfus, McDermott) says that when humans think they are doing something different from computation, so more computing power isn’t enough — faster machines are wonderful, but they won’t add up to intelligence.

I think we are now seeing the same split around social intelligence. Human social networks are plainly vital, and we think about them all day long. Machines are fantastically good at consuming explicit structure and returning results computed from that structure. There is a camp (Davis, bardia, et al.) that thinks that what humans do when they think about social networks is a kind of computation, and can readily be rendered in a form suitable for machine input, and there is a camp (boyd, Weinberger, me) that thinks that what humans do when they think about social networks is a different _kind_ of thing than computation. Human social calculations are, in particular, the kind of thing that cannot be made formal or explicit without changing them so fundamentally that the model no longer points to the things it is modeled on.

When Ian criticizes my earlier piece on the ground that characterizations can be infinitely multi-variate and flexible, it’s obvious I wasn’t clear enough. The flaw in RELATIONSHIP is not that you can’t characterize someone as a colleague _and_ an employee, but rather that you can’t completely specify the fullness of any reasonably complex relationship, that you can’t know in advance which of those characterizations you would use in what circumstances, and that you can’t make even a subset of those things explicit without changing the thing you are trying to describe.

Note that this doesn’t imply a complete criticism of FOAF, merely of the idea that the FOAF container can and should carry explicit semantics. (I’ve run across this before, where a criticism of the Semantic Web is assumed to be a criticism of RDF, even though RDF is a general-purpose tool.) We have many examples of places where link structure is informative, without semantics needing to be attached to the individual links — Google, of course, and LiveJournal (though they seem to be in danger of forgetting this). Meanwhile Orkut provides a good example of the UI perils of trying to add semantics to link declarations.
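For contrast, here is what the bare-links approach looks like, a minimal sketch using only the standard foaf:knows term, which asserts a connection without characterizing it:

    @prefix foaf: <http://xmlns.com/foaf/0.1/> .

    <#me> a foaf:Person ;
        foaf:name "Clay" ;
        foaf:knows <#liz>, <#dan> .   # untyped links; whatever meaning exists lives in the shape of the graph

This is the Google-style bet: the structure of the links is informative even when the individual links carry no declared semantics.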

Like the AI debate, this is at base a theological question — there is a group that regards the world as both clearly knowable and describable, and assumes that a lack of clarity is a synonym for a lack of explicitness, and there is a group that assumes that humans possess a core set of social capabilities that cannot be rendered explicit. And we are so early in the overlap of social network theory and computation that we hardly have the language to make those positions clear to one another.
