By craigmcc on Apr 01, 2009
It started with a simple question or two, which boil down to:
- If Hypermedia as the Engine of Application State (HATEOAS) is so cool, why is it not being used by more REST APIs today?
- Are there any short-term benefits to go along with the stated long-term benefits, like adaptability to change?
Here's the answer I sent back to the rest-discuss mailing list, which I've decided to memorialize by turning it into a blog entry. Note that it was written yesterday, so the publication date (April 1) is not semantically meaningful :-).
I know exactly where you are coming from with these questions ... I felt the same way until recently. I've designed several REST APIs over the last couple of years, but up until the most recent one, I designed and documented them in the "typical" way, describing the URI structure of the application and letting the client figure out what to send when. My most recent effort is contributing to the design of the REST architecture for the Sun Cloud API to control virtual machines and so on. In addition, I'm very focused on writing client language bindings for this API in multiple languages (Ruby, Python, Java) ... so I get a first-hand feel for programming to this API at a very low level.
We started from the presumption that the service would publish only one well-known URI (returning a cloud representation containing representations for, and/or URI links to representations for, all the cloud resources that are accessible to the calling user). Every other URI in the entire system (including all those that do state changes) is discovered by examining these representations. Even in the early days, I can see some significant, practical, short-term benefits we have gained from taking this approach:
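To make the single-entry-point idea concrete, here is a minimal sketch in Python. The field names and URIs are hypothetical illustrations, not the actual Sun Cloud API wire format:

```python
# Hypothetical cloud representation, as a client might see it after
# GET-ing the one well-known URI. Everything below the entry point is
# discovered from the representation itself.
cloud_representation = {
    "uri": "/cloud",
    "vms": [
        {"name": "web-1",
         "uri": "/cloud/vms/1234",
         "links": {"deploy": "/cloud/vms/1234/deploy"}},
    ],
}

def discover_vm_uris(cloud):
    """Collect every VM URI from the representation, rather than
    constructing URIs from a documented path template."""
    return {vm["name"]: vm["uri"] for vm in cloud["vms"]}
```

The only thing the client hard-codes is the well-known entry URI; every other URI it ever uses came out of a representation the server handed back.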
- REDUCED CLIENT CODING ERRORS. Looking back at all the REST client side interfaces that I, or people I work with, have built, about 90% of the bugs have been in the construction of the right URIs for the server. Typical mistakes are leaving out path segments, getting them in the wrong order, or forgetting to URL encode things. All this goes away when the server hands you exactly the right URI to use for every circumstance.
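As a sketch of the class of bug this eliminates (the URIs and field names here are made up for illustration):

```python
from urllib.parse import quote

# Error-prone style: the client assembles the URI itself, and has to
# remember segment order, separators, and URL encoding. Forgetting
# quote() on a name like "my vm" yields an invalid URI.
def handmade_uri(base, vm_name):
    return f"{base}/vms/{quote(vm_name, safe='')}/start"

# HATEOAS style: the server already handed us the exact URI to use,
# so there is nothing to assemble -- and nothing to get wrong.
def server_supplied_uri(vm_representation):
    return vm_representation["links"]["start"]
```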
- REDUCED INVALID STATE TRANSITION CALLS. When the client decides which URI to call and when, they run the risk of attempting to request state transitions that are not valid for the current state of the server side resource. An example from my problem domain ... it's not allowed to "start" a virtual machine (VM) until you have "deployed" it. The server knows about URIs to initiate each of the state changes (via a POST), but the representation of the VM lists only the URIs for state transitions that are valid from the current state. This makes it extremely easy for the client to understand that trying to start a VM that hasn't been deployed yet is not legal, because there will be no corresponding URI in the VM representation.
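A client can detect an illegal transition simply by checking whether the server offered a link for it. A minimal sketch, again with hypothetical representations:

```python
# The server includes a link only for transitions that are legal in
# the VM's current state (illustrative field names).
undeployed_vm = {"name": "web-1", "state": "created",
                 "links": {"deploy": "/cloud/vms/1234/deploy"}}
deployed_vm = {"name": "web-1", "state": "deployed",
               "links": {"start": "/cloud/vms/1234/start",
                         "undeploy": "/cloud/vms/1234/undeploy"}}

def transition_uri(vm, action):
    """Return the URI to POST for a state transition, or None if the
    server did not offer that transition from the current state."""
    return vm["links"].get(action)
```

A client asking for the "start" URI on an undeployed VM gets nothing back, instead of issuing a doomed request and parsing an error response.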
- FINE-GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS. At any given time, the client of any REST API is going to be programmed with some assumptions about what the system can do. But, if you document a restriction to "pay attention to only those aspects of the representation that you know about", plus a server side discipline to add things later that don't disrupt previous behavior, you can evolve APIs fairly quickly without breaking all clients, or having to support multiple versions of the API simultaneously on your server. You don't have to wait years for serendipity benefits :-). This is especially valuable compared to something like SOAP, where the syntax of your representations is versioned (in the WSDL), so you have to mess with the clients on every single change.
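The "pay attention only to what you know about" discipline can be sketched like this (field names are hypothetical):

```python
# The set of fields this version of the client understands.
KNOWN_FIELDS = {"name", "state", "links"}

def parse_vm(representation):
    """Keep only the fields this client knows about; silently ignore
    anything the server added after this client was written."""
    return {k: v for k, v in representation.items() if k in KNOWN_FIELDS}
```

When the server later starts sending, say, an extra field this client has never heard of, the old client keeps working unmodified, because it never assumed the field set was fixed.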
Having drunk the HATEOAS koolaid now, I would have a really hard time going back :-).