Microservices And Serverless Architecture
One of our favorite patterns at Panda Strike is to have an HTTP API that dispatches jobs to workers. We call this the dispatcher-worker pattern, though it goes by many names and has even more variations. In particular, it's a variation on the microservices pattern.
The basic idea is that the HTTP API is concerned only with validating requests, dispatching them to the right worker, and, when necessary, relaying a result from a worker back to the client. Meanwhile, each worker does only one job. For example, one worker might handle registrations, while another handles feed requests.
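To make the division of labor concrete, here's a minimal sketch of the pattern in Node.js. Everything here is hypothetical: the routes, the worker behavior, and the in-process dispatch table stand in for whatever transport you'd actually put between the dispatcher and the workers.

```javascript
"use strict";
var http = require("http");

// Each worker does exactly one job. (Hypothetical examples.)
var workers = {
  "/registrations": function (body, respond) {
    // ...create the registration...
    respond(201, { registered: true });
  },
  "/feed": function (body, respond) {
    // ...build the feed...
    respond(200, { items: [] });
  }
};

// The dispatcher only routes the request and relays the result.
http.createServer(function (request, response) {
  var worker = workers[request.url];
  if (!worker) {
    response.writeHead(404);
    return response.end();
  }
  var data = "";
  request.on("data", function (chunk) { data += chunk; });
  request.on("end", function () {
    worker(data, function (status, result) {
      response.writeHead(status, { "Content-Type": "application/json" });
      response.end(JSON.stringify(result));
    });
  });
}).listen(8080);
```

The point of the sketch is the shape, not the plumbing: the dispatcher never knows how a job gets done, and a worker never sees a request it wasn't meant to handle.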
So you can imagine our excitement about Amazon’s support for “serverless” architecture, which happens to fit this pattern perfectly.
Yes, Of Course There Are Servers
Of course, there are always servers. But with serverless architecture, you don't think much about them. Behind the scenes, there's a massive cluster of servers, with all the usual complexity that implies. But the interface to the execution environment, the unit of deployment, sits at the process or even the function level, a layer above the servers.
Lambda Grows Up
Amazon Web Services introduced support for Node 4.x in its Lambda service back in April. In combination with the AWS API Gateway, this meant we could implement our dispatcher-worker pattern without concerning ourselves with managing servers. The API Gateway acts as the dispatcher, and Lambda functions act as the workers.
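A worker in this setup is just a Lambda handler. Here's a sketch using the Node 4.3 programming model, which is what added the callback parameter; the event fields are hypothetical and depend on how your API Gateway integration maps the request.

```javascript
// Sketch of a single-job worker: a registration handler.
// With the Node 4.3 runtime, a handler receives (event, context, callback).
exports.handler = function (event, context, callback) {
  // The `email` field is hypothetical; the event shape depends on
  // your API Gateway mapping.
  if (!event.email) {
    // Signal a failure back to API Gateway.
    return callback(new Error("Bad Request: missing email"));
  }
  // ...do the actual registration work here...
  callback(null, { registered: event.email });
};
```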
All The Pros, None Of The Cons
And we get all the same benefits, but without the complexity of managing clusters of servers. For example, one property of microservice architectures is granular elasticity: you add capacity only to the services that need it, which is more cost-effective than the coarser elasticity intrinsic to monolithic architectures. And this is true for AWS Lambda-based microservices, too, since your costs are based on use.
Another example is separation of concerns. Our dispatchers validate requests, ensuring that bad requests, including those from would-be attackers, never reach the process space of the workers, where they can do real damage. The AWS API Gateway works the same way, allowing us to validate the requests — checking the schema of request bodies, verifying API tokens, and so forth — before they ever reach the Lambda functions.
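For request body validation, API Gateway models are JSON Schema (draft-04) documents. A sketch of a model for the hypothetical registration worker above might look like this:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Registration",
  "type": "object",
  "required": ["email"],
  "properties": {
    "email": { "type": "string" }
  }
}
```

A request that fails to match the schema can be rejected at the gateway, so the worker's code path is never exercised by malformed input.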
The Bottom Line Is The Bottom Line
For most services, the cost-effectiveness of this architecture is evident from the pricing of Lambda functions. The AWS pricing page for Lambda gives a few examples. Even for a function invoked a million times per day, the cost is likely to be far less than the cost of even one EC2 server. The API Gateway is a bit more expensive, but in most cases it's still more cost-effective than running your own servers.
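To make that concrete, here's the arithmetic behind one of AWS's published examples, using the Lambda prices at the time of writing ($0.20 per million requests and $0.00001667 per GB-second, after a monthly free tier of one million requests and 400,000 GB-seconds): a 128MB function invoked a million times a day, running for 200ms each time.

```
30M invocations × 0.2 s × 0.125 GB      = 750,000 GB-seconds
(750,000 − 400,000 free) × $0.00001667  ≈ $5.83 compute
(30M − 1M free) × $0.20 per million     = $5.80 requests
total                                   ≈ $11.63 per month
```

Call it roughly twelve dollars a month for thirty million requests, and you only pay it when the requests actually arrive.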
The hype around containers is driven, in part, by the ability to build out an infrastructure layer that supports serverless architecture. A great deal of our DevOps research was consequently focused on containers. But the maturity of these AWS services means that we don't need to build out our own serverless architecture layer; we can simply use Amazon's.
Kicking The Tires
We're still kicking the tires, particularly on performance and availability. Amazon has yet to clearly state what guarantees, if any, it offers around these services. And the case studies so far are limited. But assuming this combination of API Gateway and Lambda holds up under testing and real-world use, we'll be strongly recommending that our clients focus on serverless architecture over container-based alternatives.
Coming Soon…
This, in turn, means our current research and development is focused on serverless architecture. Our container-based open source experiments, Huxley and P42, have been tabled for the time being. (The closest thing to what we were doing is Convox, so if you’re doing container-based services, check them out.) And we’ll have an announcement soon on our first serverless project. Stay tuned!