
Thursday, November 30, 2006


Pete Lacey

When I was working at Systinet (now part of HP), we used to call this kind of thinking "science fiction." I'm curious, though, where you encountered this notion. In my experience, reasonable people (SOA or ROA, SOAP or REST) do not promote anything like this. It typically comes from the overzealous sales and marketing guys or the clueless technology reporter. It's along the same lines as being told that someday your fridge will order more milk for you when you're low.

BTW, I think you mean "omniscient" not "omnipotent."


Dan Pritchett

First, thanks for correcting my grammar. All-powerful, all-knowing: both seem unlikely, but yes, you are correct; I meant omniscient and have updated the article accordingly.

Perhaps it's overzealous proponents, or maybe I'm just reading too much into the postings, but the material is there. And I'm happy to see you agree it's science fiction.

Bill de hOra

When I was working at Systinet (now part of HP), we used to call this kind of thinking "science fiction."

It's now called the semantic web.

Alex Bunardzic

I've commented in more detail here:


Chris Stiles

IMHO, every generation of programmers is doomed to relive this attempt to define some kind of 'services architecture': an IT version of Groundhog Day in which we attempt to find an environment that fosters 'ultimate' reuse.

The previous iteration of such attempts ground to a halt because of problems with tooling, and because the then-extant processors meant that everything was so damn slow.

Now we have tools, but we are approaching two areas that are restatements of other problems; that of complex type inference and that of defining complete and practical formal languages.

We are chasing after things that won't be possible in practice for many computer generations - unless you have an implementation of large-scale quantum computing - and may not even be practically possible for real systems.

Chimeras all. The only thing that matters with SOA is roughly what can be done with it now. Most of the rest is just persiflage.

Vadim Geshel

My simple theory for why discussions of REST seem to imply omniscient inference is this. Many proponents of REST say, basically: just use what we've used since '95, why add complexity, things already work. Which tends to leave out, or at least not make explicit, the fact that what we've used since '95 is not just resources and statelessness and four verbs, but also a single data format with fixed semantics (HTML), client apps with those semantics hard-coded (browsers), and, most importantly, humans, aka (sufficiently) omniscient inference engines.


I want to disagree with Vadim about whether there are any semantics to speak of in HTML.

I think part of our problem is that we tend to believe that markup conveys semantics.

I don't think semantics is coded into formats and utilities much; it is more that semantics tends to be preserved by the tool, not that it is in any sense incorporated in the tool.

We are the ones who make all the meaning, and we are mostly unaware of our doing that, which is why there is so much reliance on tacit knowledge in systems.

In all other respects, I am completely aligned, and I think this is a great post and discussion.

Roy T. Fielding

Sounds like a paper tiger to me. There are certainly a lot of software sales companies and marketing brochures for tools that describe various buzzwords as a means of enabling omniscient applications, but lumping REST proponents into that category is bizarre. I have yet to see a REST sales pitch. I wouldn't even know where to start one.

Most REST-based applications have a human in the loop directing the selection of transitions based on the content of data presented to them -- they don't need to be omniscient, just aware of the current state and trusting that the resource owner is not misrepresenting the transitions (links, buttons, etc.). We know that works well enough to use it as an example of REST in action and see the advantages of that architecture over other architectures that also have a human in the loop.

Some REST-based applications do not have a human in the loop. I wrote one of the very first -- MOMspider -- which performs a simple task through a preconfigured set of interactions. It only needed to know enough about HTML to recognize safe links and follow them, but the principle is the same in general. The more standardized a media type is for a given purpose, the more automation can be applied to the transitions presented by that media type.
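That principle can be sketched in a few lines. The following is a hypothetical illustration, not MOMspider itself: a client that extracts only the "safe" transitions from an HTML page, namely plain hyperlinks that can be followed with GET, while ignoring state-changing controls such as forms. The page markup here is invented for the example.

```python
# Hypothetical sketch of a MOMspider-style traversal rule: treat only
# plain <a href> links as safe transitions to follow via GET, and
# ignore forms and other controls that could change state.
from html.parser import HTMLParser

class SafeLinkExtractor(HTMLParser):
    """Collect href targets of <a> tags -- the only transitions a
    spider may follow without risking side effects."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def safe_transitions(html_page):
    parser = SafeLinkExtractor()
    parser.feed(html_page)
    return parser.links

page = '<p><a href="/next">next</a> <form action="/buy">...</form></p>'
print(safe_transitions(page))  # only the link; the form action is ignored
```

The point mirrors Fielding's: the spider needs no global knowledge, only enough media-type awareness to tell safe transitions from unsafe ones.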

If you can preconfigure an application with enough semantics to find and select the appropriate transitions based on the current state presented to them by the server, then a REST-based application can be performed without a human in the loop.

The distinction here between SOA/ROA/Py and REST is not the presence or absence of omniscient behavior, but rather how much knowledge is necessary to comprehend the application alternatives at each step of the application such that the desired end state is reached (without accidentally purchasing a pink motorhome along the way).

I argue that REST-based architecture is better for that purpose because the states are simpler to understand. Obviously, if you had omniscience then the degree of simplicity in understanding each state would be irrelevant. That is why it makes no sense to say that REST advocates are not aware of the need.

In other words, I agree that this question of omniscience is critical to understanding claims about architecture, but I disagree that anyone who actually understands REST would make such claims. The purpose of REST constraints is to construct an artificial context in which the shared understanding and opportunities for partial failures are as small as possible. It would be absurd to go to all that trouble if we thought such understanding is unnecessary.

I also disagree with your characterization that companies will not define standard media types because they feel a need to compete. The only companies that compete based on data formats are closed-product vendors. The people who buy software, or make it in-house with open source tools, clamor for open standards in data formats and frequently require them in RFPs. They do that for vendor independence and the serendipitous reuse gained from using data formats that can be analyzed independently from the applications used to create them.

Dan Pritchett

I agree with your characterizations of what is required to write applications. And I thank you for clearly stating what context a REST client must have.

I think the lack of standardized types is already prevalent. Yes, it is driven by closed-product companies, but ultimately it still occurs. Video offers a concrete example: MPEG is a perfectly reasonable format, yet any robust player needs to support QuickTime and Windows Media as well. Google and Yahoo both offer map interfaces, but these are not compatible. What we see from consumer-driven demand is the requirement for applications to support all of the formats, not a demand for a single format.



I always thought the REST omniscience arguments went a little like this:

It'll be pretty easy to jigger with the RESTful API by just making GET, POST, PUT, and DELETE calls against the server and looking at the XML output that gets handed back to you.

So instead of looking through WS-Whatever, you just get your code working with the subset of application state you care about.

As a bonus, given a sufficiently smart REST client, you get some intelligence about the HTTP codes, a set of standard headers, existing HTTP authentication, and one or two other things I forget about.

If you get lucky, a couple of people *have* standardized on some XML data representation and your code now works, no fuss, no muss, with all of them (or, well, optimistically all of them; someone probably misread something).
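A minimal sketch of the client style described above, with invented names and an invented XML shape: the code handles generic HTTP semantics (status codes, standard headers) first, then pulls out only the subset of the representation it cares about.

```python
# Hypothetical REST-client sketch: generic HTTP intelligence plus a
# narrow interest in the application state. The XML vocabulary
# (<feed>/<item>/<title>) is invented for illustration.
import xml.etree.ElementTree as ET

def handle_response(status, headers, body):
    """React to standard HTTP status codes first, then extract only
    the fields this client needs from the XML body."""
    if status == 404:
        return None                              # resource gone; nothing to parse
    if status in (301, 302):
        return ("redirect", headers.get("Location"))
    root = ET.fromstring(body)
    return [item.findtext("title") for item in root.iter("item")]

body = ("<feed><item><title>first</title></item>"
        "<item><title>second</title></item></feed>")
print(handle_response(200, {}, body))  # -> ['first', 'second']
```

Everything above the parsing step comes for free from HTTP itself, which is the "bonus" intelligence the comment describes; only the last two lines are application-specific.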

See, I thought the whole SemWeb thing was about getting the applications to figure out what the data meant in an omniscient manner.
