Choosing the right tool for the job is not always as easy as it seems. Sometimes there is more than one tool for doing something, and such is the case for cloud-based applications. In my last “Choosing between” post, I looked at how to choose between containers and virtual machines, and the same tension exists here, because in many cases the choice is not clear.
Two technologies (really, movements) that changed how people think about applications both emerged into the mainstream about 4 years ago, namely containers and serverless computing. Technically speaking, containers are nothing new, but their widespread use is. Both of these technologies attempt to solve a common problem – getting away from managing virtual machines and infrastructure and moving toward less “server” and more “as a service” models for computing.
Because both of these technology movements are fairly new, any app weighing them as possible deployment models is also likely to be new, so legacy apps are probably not a consideration. Generally speaking, then, microservices are probably the common motivator behind both containers and serverless computing. Microservice architecture says that applications should be broken up into numerous, autonomous components that are loosely coupled and communicate with standard protocols like MQTT or HTTP. Each component can be managed independently of all the other components and can also be scaled independently.
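To make that concrete, here is a minimal sketch of one such autonomous component: a tiny service that owns its own data and exposes it only over HTTP. The “inventory” data and the route name are hypothetical, invented purely for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# This component owns its own data; other services can only reach it
# through the HTTP contract below (hypothetical example data).
INVENTORY = {"widgets": 42}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/inventory":
            body = json.dumps(INVENTORY).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep demo output quiet

def serve():
    # Port 0 lets the OS pick a free port, so the sketch runs anywhere.
    server = ThreadingHTTPServer(("127.0.0.1", 0), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = serve()
    url = f"http://127.0.0.1:{srv.server_address[1]}/inventory"
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode())  # the component's entire public surface
    srv.shutdown()
```

Because the only coupling is the HTTP contract, this component can be scaled, replaced, or redeployed without touching any of its neighbors – which is exactly the property that makes both containers and serverless a natural fit.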
Briefly defined, a container is fundamentally a way of packaging software such that the package has only what it needs to run the software – things like runtimes, utilities, and other dependencies. Containers are then deployed to a container engine that can run the software in the container. The container engine provides the infrastructure to run the container. A container is like a virtual machine, but the bright line for the container is at the kernel level: everything above the kernel in a compute stack is what runs in the container.
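A Dockerfile is the usual way of expressing that packaging. This one is a minimal, illustrative sketch (the file names and base image are hypothetical): the image carries only the runtime and the app’s declared dependencies, nothing else.

```dockerfile
# Start from a slim base image that provides only the runtime
FROM python:3.12-slim
WORKDIR /app
# Install just the declared dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Add the application itself and define how to start it
COPY app.py .
CMD ["python", "app.py"]
```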
Serverless architecture is another way of running software. Serverless environments provide a fully managed platform as a service, and the user deploys event-driven, on-demand “functions” to the environment, which has all of the runtimes and supporting infrastructure to run them. The user can still add libraries and other assets to the function that are not provided by the environment. Serverless computing is not limited to user-defined functions, though. The concept can be applied to databases, caching services, web services, queues, and many other common compute components – the main thing to remember about serverless is that none of it requires the user to manage the server that runs the services.
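An entire serverless function can be as small as the sketch below. The handler signature follows AWS Lambda’s convention as an assumption, and the event shape is hypothetical – the point is that the code describes only what to do with one event, while the platform owns everything else.

```python
# A Lambda-style, event-driven function. The platform supplies the runtime
# and invokes the handler once per event; there is no server code here.
# The event fields ("name") are a made-up example, not a platform contract.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

if __name__ == "__main__":
    # Locally we can simulate the platform by calling the handler directly.
    print(handler({"name": "serverless"}))
```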
One common misconception is that serverless commits you to vendor lock-in. While this may have been true at first, third-party abstractions for development, as well as broader support for popular technology as a service by major cloud providers, have mitigated the problem. Many developers, though, program for a specific environment anyway, so portability is not as much of a concern.
The fundamental question to ask when considering containers versus serverless probably stems from technical implementation, but there are certainly non-technical aspects to consider as well, such as cost and control. That said, the question is, “what is the best fit for my application and organization?” Here are some key areas to look at…
Containers by their nature can support a wider range of technology than serverless can. When an application requires a technology that is not supported by serverless providers, containers are likely the better choice, and serverless is simply not a good fit.
Serverless is fundamentally a platform as a service. The model fits well when the platform’s assumptions can support the application’s assumptions. Cloud providers, though, usually have pretty broad support for nearly every popular development platform and allow for third-party libraries.
Part of the application needs to be “always on”. For some applications, there are agents that run continuously and perform actions. Note that in many cases these agents could easily be replaced in a serverless context with event-driven functions that respond to whatever the agent is monitoring or performing. Functions can be triggered by queues, HTTP requests, timers, and other similar events. If the “always on” component is doing more than simply waiting for something to happen, then containers might be a good fit. Otherwise, choose serverless.
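The contrast between an always-on agent and its event-driven replacement can be sketched in a few lines. The message shapes and processing step here are hypothetical; the queue stands in for whatever the agent would be watching.

```python
import queue

def process(message):
    # Stand-in for whatever real work the agent performs
    return message.upper()

# Container-style agent: a loop that is "always on", polling for work
# even when there is nothing to do.
def polling_agent(work_queue, results, max_items):
    handled = 0
    while handled < max_items:
        try:
            msg = work_queue.get(timeout=0.1)
        except queue.Empty:
            continue  # nothing yet; keep waiting
        results.append(process(msg))
        handled += 1

# Serverless-style replacement: no loop at all. The platform invokes this
# function once per message, and nothing runs in between.
def on_message(msg):
    return process(msg)
```

Notice that the serverless version is only the `process` call itself – the waiting, the loop, and the wiring to the event source all move to the platform.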
In contrast with containers needing to be “always on”, serverless functions don’t have to be, and furthermore they are generally stateless, meaning they don’t persist data within the function between requests (caching solutions such as Redis typically handle persistence). The events that trigger the functions, such as timers, queues, and requests, are all managed by the platform. When applications can be more passive, rather than active, serverless is ideal.
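Statelessness in practice means any data that must survive between invocations lives in an external store. In this sketch a plain dict stands in for something like Redis (the store API is a deliberate simplification, not a real client):

```python
# A stateless handler: nothing persists inside the function between
# requests, so the visit count is read from and written back to an
# external store. The dict here is a stand-in for a real cache like Redis.
def count_visit(store, user_id):
    visits = store.get(user_id, 0) + 1  # read-modify-write against the store
    store[user_id] = visits
    return visits

if __name__ == "__main__":
    external_store = {}
    print(count_visit(external_store, "alice"))  # 1
    print(count_visit(external_store, "alice"))  # 2
```

Because the function carries no state of its own, the platform is free to run any number of copies of it, or none at all, between requests.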
Containers are one of the few truly cloud-agnostic technologies that are widely supported. They can be run on premises as easily as they are run in the cloud.
Platform agnosticism is probably less important when it comes to serverless. This does not mean that serverless implies vendor lock-in, as stated earlier. While there are abstractions for serverless platforms, usually these abstractions apply only at the code-level API. There are, though, a few notable technologies widely offered as managed services by cloud providers, such as Redis, MQTT and AMQP queues, Mongo, MySQL, and PostgreSQL.
The container DevOps story is pretty well baked, and with many cloud providers and services alike coalescing around Docker containers on Kubernetes, the story for deploying all things containers is going to be pretty homogeneous regardless of what the container actually does.
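That homogeneity is visible in a Kubernetes manifest: the same shape of Deployment works on any conformant cluster, whatever is inside the image. The names, labels, image, and port below are all hypothetical placeholders.

```yaml
# Minimal illustrative Kubernetes Deployment. Nothing here is specific to a
# cloud provider or to what the container actually runs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0
          ports:
            - containerPort: 8080
```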
Serverless CI/CD pipelines, though, are likely to be much more arduous, because provisioning services differs across clouds even when the application uses popular technologies or abstractions. Each cloud provider has its own way of provisioning services, and each comes with a different set of assumptions, even though the end product may be very similar between providers.
Unlike serverless, containers can be fine-tuned, with full control over what is running. There may also be performance gains from that tuning, given that serverless is more generic while containers don’t have to be. But unless there is a really compelling reason to have fine-grained control, serverless is certainly a great choice.
Containers usually require a compute reservation in order to operate, while one of the main advantages of serverless is that it can substantially reduce costs by using a per-use billing model rather than the reserved compute model that containers require. This typically lowers the operating cost of an application.
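A back-of-the-envelope comparison shows why. Every number below is hypothetical and illustrative – not any provider’s actual rate – but the structure of the two bills is the real point: reserved compute is billed around the clock, per-use billing only counts actual work.

```python
HOURS_PER_MONTH = 730  # average hours in a month

def container_monthly_cost(hourly_rate):
    # Reserved compute is billed whether or not any requests arrive.
    return hourly_rate * HOURS_PER_MONTH

def serverless_monthly_cost(invocations, price_per_million,
                            gb_seconds, price_per_gb_second):
    # Per-use billing: pay per invocation plus per unit of compute consumed.
    return (invocations / 1_000_000 * price_per_million
            + gb_seconds * price_per_gb_second)

if __name__ == "__main__":
    # Hypothetical rates, chosen only to illustrate the shape of each bill.
    print(round(container_monthly_cost(0.05), 2))
    print(round(serverless_monthly_cost(2_000_000, 0.20, 50_000, 0.0000166), 2))
```

For a lightly or unevenly used app, the per-use bill can come out far below the always-on reservation; for a constantly saturated app, the comparison can flip, which is why it is worth running your own numbers.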
This list is by no means exhaustive, but it hits on many of the considerations between serverless and containers. At the end of the day, the best fit for a given app might not necessarily be the most optimal technologically, or even the most cost-efficient, for other reasons. The thing to do is consider the consequences of choosing, weigh the costs, and get the app up and running.