HTTP And The Zombie Apocalypse
HTTP is the world’s most successful application protocol. Yet it is widely maligned and misunderstood. Part of the problem is a poor developer experience. Part of the problem is that a Ph.D. dissertation is usually not the best introduction to a subject. And part of the problem is that building network applications is hard and we blame HTTP for that when we shouldn’t.
But HTTP is conceptually simple. HTTP cares about the things that network applications care about. No protocol can address the needs of every such application, but we can try to address the needs they all share: the challenges every network application must deal with.
These are, in no particular order: naming, state, actions, versioning, authorization, caching and compression, and error handling. Everything on that list is essential for building distributed applications. And I don’t think most developers would even argue that point. But they might argue whether that list describes HTTP.
But it does.
The HTTP Feature Set
Here’s the same list, this time with each concern mapped to its HTTP equivalent; a short sketch of them in action follows the table.
| Concern | HTTP Feature |
|---|---|
| naming | URLs (and redirects) |
| state | GET, PUT, and DELETE |
| actions | POST |
| versioning | content negotiation |
| authorization | Authorization header |
| caching | caching headers |
| compression | encoding headers |
| error handling | 4xx and 5xx error codes |
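As a minimal sketch, assuming a hypothetical API at api.example.com (the path, bearer token, and vendor media type are likewise invented), Python’s standard http.client can exercise several rows of that table in a single exchange: the URL names the resource, Accept negotiates the representation (one common way to carry a version), Authorization carries credentials, Accept-Encoding invites compression, and the status code says whether something went wrong and whose fault it was.

```python
# A rough sketch, not a complete client. The host, path, token, and media type
# below are hypothetical; the headers and status-code ranges are plain HTTP.
import http.client

conn = http.client.HTTPSConnection("api.example.com")
conn.request(
    "GET",
    "/orders/42",  # naming: the URL identifies the resource
    headers={
        "Accept": "application/vnd.example.v2+json",  # versioning via content negotiation
        "Authorization": "Bearer <token>",            # authorization header
        "Accept-Encoding": "gzip",                    # invite compression
    },
)
resp = conn.getresponse()

if 200 <= resp.status < 300:
    body = resp.read()  # may be gzip-encoded; check the Content-Encoding header
elif 400 <= resp.status < 500:
    print("client error:", resp.status, resp.reason)  # our mistake: bad name, bad credentials, bad request
else:
    print("server error:", resp.status, resp.reason)  # their problem (or the zombies got to the server)
conn.close()
```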
HTTP is not trying to complicate our lives. Network applications already do that quite well, because networks are unreliable, slow, and insecure. Networks are the dark night in the zombie apocalypse. HTTP is trying to help you survive in that hostile environment. Each feature of the protocol serves as both a warning —you may want to consider caching that— and a way to express your intentions —please cache this if you can— so that disparate clients and servers can cooperate—and survive until morning.
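The “please cache this if you can” half of that conversation is just response headers. Here’s a toy sketch using Python’s built-in http.server, with the resource, its ETag value, and the sixty-second lifetime all made up for illustration: the handler advertises Cache-Control and an ETag, and answers a conditional revalidation with 304 Not Modified instead of resending the body.

```python
# Sketch only: a toy handler that expresses caching intent with standard headers.
# The resource, the ETag value, and the max-age are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b'{"status": "no zombies detected"}'
ETAG = '"v1-abc123"'  # would normally be derived from the representation itself

class CachingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # If the client already holds this representation, say so cheaply.
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)  # Not Modified: reuse your cached copy
            self.send_header("ETag", ETAG)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Cache-Control", "max-age=60")  # "please cache this if you can"
        self.send_header("ETag", ETAG)                    # "and here's how to revalidate it"
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CachingHandler).serve_forever()
```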
Honor The Fallen
Even better, HTTP distills a lot of hard-won lessons from its predecessor protocols, like Sun RPC or CORBA. That’s why it often seems counterintuitive at first glance. Developers gravitate toward an RPC style of programming, which makes sense because that’s what the entire industry did, back in the 80s and early 90s. But it turns out that modeling network applications in terms of remote function calls was an oversimplification. Because networks are hostile environments, frighteningly unlike the cozy confines of a shared process space.
The early casualties were enormous.
REST And The Art Of War
When REST advocates criticize an API, what they’re really saying is that the API is rediscovering or reinventing some aspect of HTTP and thus is unlikely to survive a zombie attack, also known as a slow, unreliable, and insecure network. Unfortunately, these concerns get expressed in terms of REST, which makes them sound academic, because they are literally academic. That tends to happen with Ph.D. dissertations. Sometimes it’s better to say “use a shotgun” than to recite from The Art Of War. But if you learned everything you know about fighting zombies from The Art Of War, you may not even be aware that there’s a simpler way to explain something.
And it’s not that REST is bad. It’s just not the right place to start. I haven’t said anything about hypermedia or stateless servers or uniform interfaces, and all those ideas are important. They come from those hard-won lessons of the past. But they also obfuscate answers to practical questions, like how to implement versioning or avoid request chaining. It’s the zombie apocalypse. And it’s dark. Our concerns are immediate and practical. We need to know how to use a can of vegetable oil as a candle or purify rainwater. So, for the love of all things that are not undead, please stop quoting Sun Tzu!
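For what it’s worth, the versioning answer is usually sitting in that earlier table: negotiate it. Here’s one rough sketch of the idea; the vendor media types and payload shapes are hypothetical, while Accept, Vary, and 406 Not Acceptable are plain HTTP.

```python
# A hedged sketch of versioning via content negotiation, framework-free.
# The media types and payloads are made up; the mechanism is standard HTTP.
V1 = "application/vnd.example.v1+json"
V2 = "application/vnd.example.v2+json"

def negotiate(accept_header: str) -> tuple[int, dict, bytes]:
    """Pick a representation of a resource based on the client's Accept header."""
    if V2 in accept_header or "*/*" in accept_header:
        return 200, {"Content-Type": V2, "Vary": "Accept"}, b'{"order": 42, "currency": "USD"}'
    if V1 in accept_header:
        return 200, {"Content-Type": V1, "Vary": "Accept"}, b'{"order": 42}'
    return 406, {}, b""  # Not Acceptable: no representation we can agree on

# Old clients keep working while new clients opt into the new shape.
print(negotiate(V1))  # serves the v1 payload
print(negotiate(V2))  # serves the v2 payload
```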
Our Fearless Leader
We can all move on, and it’s worth it to do so, because HTTP is still here and it’s better than ever. And there aren’t too many of us who know how to ~~survive a zombie apocalypse~~ design network application protocols better than Roy Fielding. And if you think you do, you’re probably wrong. Without Fielding, the Web might not have survived. Here’s how the MIT Technology Review described Fielding’s contributions back in 1999, when they recognized him as an Innovator Under 30:
Fielding’s first big contribution came in 1994, when he invented a way for browsers to efficiently update stored Web pages, by transmitting information only if something has changed. Without this traffic-saving advance, the Web might have collapsed under its own explosive growth. Thanks to that success, Fielding was tapped by WWW inventor Tim Berners-Lee to author the latest version of the Hypertext Transfer Protocol (HTTP)…Fielding, who is due to receive his PhD this year…is also co-founder and chairman of the Apache Group…whose free software now powers more than half of all Web servers—trouncing competition from Microsoft and Netscape.
Invented HTTP caching? Check. Was asked by the inventor of the Web to write a key spec? Check. Founded what eventually grew into the Apache Software Foundation? Check.
Which is all to say that it’s worth taking the time and making the effort to understand and articulate the features of HTTP in their own right, just as it’s worth taking the time to brush up on your survival skills in the face of a zombie apocalypse.
What About The Zombies?!
A substantial percentage of the people who’ve read this far are likely wondering what the hell to do about the zombies. Or, more specifically, what naming has to do with URLs, or how content negotiation is related to versioning, and so forth. Perhaps they’re thinking that’s just as confusing as talking about hypermedia or stateless constraints or what have you. For the moment, I’ll leave that as an exercise for the reader, along with the assurance that these mundane concerns are indeed the purview of HTTP, even if the manner of their resolution is not immediately evident. And that there’s a good reason for taking the path less traveled. (Because there were zombies on the other paths.)