Paul Prescod claims: The next generation of web services will use individual data objects as endpoints. Software component boundaries will be invisible and irrelevant.
The article is located here: http://www.xml.com/pub/a/2002/02/06/rest.html
Do you have any thoughts on the vision posed in the article?
Can this really come to fruition?
The author's example of locating business entities via URIs is interesting, and his arguments for doing so are solid, but I am not sure how this concept would be used for actually locating and using services. My understanding is that with UDDI, a client application specifies a tModel (essentially at compile time), and then services matching this tModel are discovered at run time. SOAP is then required because a language-neutral transport is needed to move data between the client app and the web service. The author states that "no matter what your problem, you can and should think about it as a data resource manipulation problem rather than as an API design problem." Does anyone know what this means in terms of locating and interacting with services in the REST architecture, which I assume means without SOAP and with a more "robust" version of UDDI?
Do you think that we will realistically be able to have programs go to UDDI, get services, and use them?
I am not sure that will work. Imagine a leather service. A computer could try to work out the cheapest service from UDDI, but does it have a notion of quality? Can we trust the various orgs? I see it more as a yellow pages where we can look things up, but trust relationships will probably need to be set up, and that is a human interaction.
I believe that UDDI - or some other similar directory, discovery and integration service - while being extremely overhyped and oversimplified, does have a promising future. Provided that some things are put into place, that is.
As things stand, there is no way for an application to determine the quality of service, adapt itself to the contracts of n separate web services, and continue to function while constantly swapping between the most "appropriate" providers. Anything is possible, but we are far from something of this nature, and it sure as hell won't be UDDI when it happens.
For this to occur, a UDDI provider would have to establish standard external contracts for each type of web service, since adapting interfaces is not accounted for in the current SOAP specification. Or, perhaps, a UDDI provider could expose only a single web service and determine at run time which third-party delegate to use, based on parameters passed by the client and some form of authentication. This would ensure a standard contract.
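That single-front-end idea can be sketched in a few lines. This is only an illustration of the shape of it, not anything from the UDDI or SOAP specs; the provider names, the `preference` parameter, and the authentication flag are all made up.

```python
# Sketch: the registry exposes ONE stable contract (front_end) and picks a
# third-party delegate at run time from parameters the client passed.
# Provider names and selection criteria are invented for illustration.
DELEGATES = {
    "cheap": lambda order: ("provider-a", order),
    "fast":  lambda order: ("provider-b", order),
}

def front_end(order, preference="cheap", authenticated=True):
    """One public contract; the delegate is chosen per request."""
    if not authenticated:
        return None                       # reject unauthenticated callers
    delegate = DELEGATES.get(preference, DELEGATES["cheap"])
    return delegate(order)
```

The client only ever sees the `front_end` contract, which is the point: the interface-adaptation problem is pushed behind the registry instead of onto the client.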
To eliminate at least some questions regarding quality of service, directory services could be certified and managed by trusted sources. Similar to the way J2EE servers are certified by Sun, any web services that end up being published would be of a certain quality, and service level agreements would be established. (I know that intrinsic SLAs are a proposed extension to SOAP, but as of yet this has not been incorporated.)
The technology is here (although a bit immature), we simply don't have the standards or levels of cooperation to implement a largely successful transparent-interoperability platform. Not yet, at least.
xml.com's Anti-awards 2001 had the entry "Most Technically Deficient Initiative Kept Alive by Marketing Dollars". Care to guess who won?
<<The winner of this category is the much-vaunted UDDI, whose press-intensive launch depleted entire rain forests, yet somehow failed to produce anything of any use whatever. The sheer momentum of the bandwagon has meant a UDDI 2 release was required to fix up the ailing specification -- a possible contender for next year's awards.>>
The first time I encountered REST and tried to figure out what it was, I gave up. The name is totally uninformative ("Representational State Transfer"???) and I couldn't find any good material describing it in a simple fashion.
With this article, the light finally went on in my head; the UDDI example immediately made sense. To my surprise I realized that I've been working on a product that very closely implements the REST model of UDDI as described. Not because we were trying to follow REST, but because we were basing our system on the eCo Framework, which as far as I can tell pre-dates this REST stuff.
I've also implemented SOAP messaging within our system (along the lines of JAXM), which is what UDDI uses as its transport. For simple web services such as getting the details of a business out of a registry, it is hard to see where SOAP provides any benefit.
Yes, this is definitely the way to do it. In my opinion there is no need to make things more complicated than this. There are many existing web applications that receive HTTP parameters and leverage XML data. These applications are in fact ad hoc web services and it is hard to see the need for adding a SOAP layer on top of them.
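A minimal sketch of such an ad hoc web service: a handler that takes ordinary HTTP-style parameters and returns plain XML, with no SOAP envelope. The handler name, the element names, and the toy in-memory "registry" are all illustrative, standing in for a real application behind a web server.

```python
# Sketch: an "ad hoc web service" -- query parameters in, XML out, no SOAP.
# REGISTRY stands in for a real backend; names are illustrative only.
import xml.etree.ElementTree as ET

REGISTRY = {"ibm": {"name": "IBM Corporation"}}

def get_business(params):
    """Handle something like GET /businessDetail?key=... and return XML."""
    entity = REGISTRY.get(params.get("key", ""))
    if entity is None:
        return None  # a real server would answer 404 Not Found
    root = ET.Element("businessDetail")
    biz = ET.SubElement(root, "businessEntity", {"businessKey": params["key"]})
    ET.SubElement(biz, "name").text = entity["name"]
    return ET.tostring(root, encoding="unicode")
```

Any client that can issue an HTTP GET and parse XML can consume this; no SOAP toolkit is required on either side.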
In future web applications, one thing we can be absolutely certain of is that we will need to integrate information and services from the "ordinary" web with information gathered from web services. But why draw a line between them? The REST approach integrates the existing web seamlessly with web services.
Furthermore, for a truly dynamic web, issues like security and service lookup need to be solved in a wider web context instead of just in an RPC/web services context.
I found this article intriguing, but guilty of the same type of hype as the UDDI folks. The idea that everything can be as simple as "just use a URI along with a GET or POST" is compelling, but it doesn't really help us solve practical problems. It's a laudable goal to simplify the Web Services mess, and the author gives an example of how this could be used to make essentially a CRUD (create, retrieve, update, delete) style of application. Anything more complicated, and the model is stretched to the breaking point.
I browsed the REST discussions after reading the original thesis paper, and by and large, they seem focused on coming up with clever ways of working around the severe limitations of having only GET, POST, PUT, and DELETE as API calls, and HTTP as the primary transport, and trying to handle asynchronous calls, events, and all the other things that HTTP is really bad at. I won't even mention the pedantic discussions I read on URL style and REST principles.
I'm not claiming that any of the current Web Services "standards" solve this problem any better, but if this process has taught us anything, it's that you should use the right tool for the right job, not force everything to look the same even if it doesn't fit.
David is right.
I can think of many CRUD type services where this would work fine. It is a very attractive proposition. However, I'm currently working on a design where I have to interface to a third party provider where, if I'm lucky, I might get a response in 24 hours (the SLA says max 36 hours). Now I might be able to use a REST type approach for this, but only by polling a URI. I suppose I could expose a URI for the remote service to call back. Hang on, now I need an HTTP server at the client too.
Sorry, I'm not going to do it. MOM is the answer to this problem.
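For what it's worth, the polling workaround dismissed above looks roughly like this. `fetch()` here is a simulated stand-in for an HTTP GET against a status URI, so the 24-hour wait is compressed into a few calls; nothing in it comes from any real provider's API.

```python
# Sketch: client-side polling of a status URI until the slow provider has
# produced a result. fetch() simulates HTTP GETs that return None (no
# result yet) until the nth poll.
import itertools

def make_fake_fetch(ready_after):
    """Simulate a provider whose result appears on the nth poll."""
    counter = itertools.count(1)
    def fetch(uri):
        return "result" if next(counter) >= ready_after else None
    return fetch

def poll(uri, fetch, max_tries):
    """Re-fetch the URI until a result appears or we give up."""
    for _ in range(max_tries):
        result = fetch(uri)
        if result is not None:
            return result
    return None
```

In practice each try would be separated by hours, which is exactly why the poster prefers MOM: a queue delivers the response when it is ready instead of making the client keep asking.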
I think this REST stuff is right on, but I wish there were a short description somewhere. The name REST doesn't impart any meaning. I would read the dissertation if I were still in college and had all that time to waste. I would read the "shortened" version of the dissertation if I could get past the first page without falling asleep.
Now, in the end, the article says all this web services stuff is over the top. We don't need no stinkin' UDDI, or even SOAP, to achieve what's needed. It waves its hands at MOM, but I'll grant it can be done.
Anyone care to defend SOAP?
Finally, I think it's funny that smart people are already defining the next generation of web services when the first generation still isn't in place.
You definitely can access UDDI information using classical URIs if you wish. Each time a businessEntity structure is saved via a call to save_business, the UDDI registry node generates a URL (a discoveryURL in UDDI terminology). The generated URL points to the instance of businessEntity wrapped in a businessDetail message. This businessDetail message is the same message as the one you would get by invoking get_businessDetail using SOAP. Here is an example discoveryURL that points to the details of the IBM Corporation businessEntity:
Now, with regard to information management, the article's author explains that delete_business could simply be replaced by an HTTP DELETE, which would remove from the Web server the XML file corresponding to the businessEntity. This is far too simplistic. A businessEntity specifies businessServices, so suppressing a businessEntity implies suppressing the specified businessServices as well. If it were possible to access a specific businessService using a URI, invoking that URI should now lead to a "404 Not Found" reply from the Web server. So, in addition to HTTP DELETE, you would have to implement server-side logic to appropriately maintain the XML definitions. Put differently, you would need to implement... a UDDI registry to accurately maintain the information.
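The cascade just described can be sketched as follows. The data model is invented for illustration, and bare status codes stand in for real HTTP responses; the point is only that a naive file-level DELETE cannot produce this behavior.

```python
# Sketch: deleting a businessEntity must cascade to its businessServices,
# so that their URIs answer 404 afterwards. Keys and structure are
# illustrative, not the actual UDDI data model.
ENTITIES = {"e1": {"services": ["s1", "s2"]}}
SERVICES = {"s1": {}, "s2": {}}

def delete_business(entity_key):
    """DELETE /businessEntity/<key>: remove the entity AND its services."""
    entity = ENTITIES.pop(entity_key, None)
    if entity is None:
        return 404
    for service_key in entity["services"]:
        SERVICES.pop(service_key, None)   # these URIs must now 404 too
    return 200

def get_service(service_key):
    """GET /businessService/<key>."""
    return 200 if service_key in SERVICES else 404
```

This referential bookkeeping is exactly the server-side logic the post says you end up writing, which is to say: a registry.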
In addition, I am sceptical about firewalls letting HTTP DELETE calls pass through. Don't forget that it is a design goal of SOAP to be firewall friendly; firewall-friendliness is really what explains the failure of CORBA on the Net.
My personal conclusion is that instead of inventing "second generation Web services", it would be better to first acquire a real understanding of what first generation Web services already bring.