Myths around containers. Part 3: Speed
Containers are considerably faster than virtual machines – at least that’s what most people say. But do they actually bring more speed to the overall development and deployment process? Let’s find out in the third part of my article series.
If you haven’t read the first and second parts, I highly encourage you to do so. But let’s start with the most common myth on speed – the actual time it takes to launch a container.
Myth: Containers launch faster than virtual machines
Basically, when you launch a container you spawn a new process with more isolation, without all the overhead you get when you launch a traditional virtual machine. And obviously it’s a lot faster – there’s no doubt about it. There’s one caveat, however – if the image of the container you’re launching isn’t available on the host, it must be downloaded first, and that may take some time. That’s why it is crucial for speed to create and use slim images that minimize the initial download time. There are other methods to mitigate this issue (e.g. pre-downloading images, using a common base image, etc.), but in general it is good practice to keep container images small.
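As a sketch of the "slim image" advice, a multi-stage Dockerfile is one common way to keep the final image small: the build stage carries the full toolchain, and only the compiled artifact ends up in the image you ship (the Go app and paths here are hypothetical):

```dockerfile
# Build stage: full toolchain, discarded after the build
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: minimal base image containing only the binary
FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image is typically tens of megabytes instead of the hundreds that a full build image would weigh, which directly shortens that first-pull time.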
It’s true – launching a container is like launching a process, and it’s very quick provided that your container image is small.
Myth: Deployment of applications is faster when they are containerized
When your applications are available as container images, you can leverage the features that come with the best container orchestrator – Kubernetes. The deployment process becomes easier, more flexible and faster than ever before. Rolling updates with rollback reduce the risk of failure, don’t require downtime and enable thorough testing (including A/B tests, blue/green deployments, multiple environments created on demand and more). All you need to prepare is a proper configuration describing this process, plus some additional features in your applications, like healthchecks or externalized configuration (e.g. credentials, parameters, connection strings).
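The pieces mentioned above – a rolling update strategy, a healthcheck, and externalized configuration – all live in the Deployment manifest. A minimal sketch (the app name, image and secret are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # hypothetical application name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # keep full capacity during the rollout
      maxSurge: 1              # add one extra pod at a time
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0   # hypothetical image
        readinessProbe:        # healthcheck that gates the rollout
          httpGet:
            path: /healthz
            port: 8080
        env:                   # externalized configuration
        - name: DB_CONNECTION_STRING
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: db-connection
```

Because new pods only receive traffic once the readiness probe passes, a broken release stalls instead of taking the service down, and `kubectl rollout undo` brings back the previous version.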
Yes, containers launched on a proper orchestrator engine (i.e. Kubernetes) can be deployed rapidly.
Myth: Development process is faster
If deployment is faster, you might assume that development is too. This, however, depends on your approach to creating containers, because you still need to develop the same code regardless of whether you put it in a container or not. You (or the team responsible for a particular service/app) could choose to define each container using a Dockerfile with all those “nasty” things written in shell, and use Jenkins to build it automatically on each commit. This, however, can get messy, complex and often not very secure. A better option is to have a unified way of injecting the app into a prebuilt image using the source-to-image method promoted and used in OpenShift. You can make the whole process incredibly easy and almost transparent for developers – I personally think it’s the best choice for most cases. With an app available as a container, you can easily launch it in a very similar environment using an on-demand environment on a Kubernetes cluster, or you can use a one-node development “cluster” available as a light virtual machine (minikube).
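As a rough sketch of what source-to-image looks like in OpenShift, a BuildConfig points at a Git repository and a builder image, and the platform assembles the application image without a Dockerfile (the repository URL, app name and builder tag here are all hypothetical):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp                  # hypothetical application name
spec:
  source:
    type: Git
    git:
      uri: https://example.com/myorg/myapp.git   # hypothetical repo
  strategy:
    type: Source               # source-to-image build
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.8       # builder image providing the s2i scripts
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
  triggers:
  - type: ConfigChange         # rebuild when this config changes
```

Developers just push code; the builder image already knows how to install dependencies and run the app, which is what makes the process nearly transparent.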
If you choose to build containers from scratch it can add more complexity, but with other techniques (like source-to-image) it is less painful and faster.
Myth: Application in container will run faster
Without the overhead of virtualization your app could run faster, but nowadays hypervisors and CPUs can bring this overhead down to as little as a few percent. So launching a container is definitely faster, but once the application is running it matters much less whether it runs in a container or a virtual machine. In fact, most container deployments run on virtual machines, so the same overhead applies there as well – only a small number of clusters will benefit from the “raw” power available on bare metal. And that leads us to a conclusion – the speed of your app still depends on its architecture, code quality, and the resources assigned and available to it, NOT on whether it runs in a container or not. One thing could bring it to the next level – scaling. For all stateless applications Kubernetes brings scaling (even autoscaling) out of the box, so for workloads dealing with many small transactions it can actually decrease response time.
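The out-of-the-box autoscaling mentioned above is a one-object affair: a HorizontalPodAutoscaler watches a metric and adjusts the replica count of a Deployment. A minimal sketch (the target name and thresholds are hypothetical):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:              # the Deployment being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # add pods when average CPU exceeds 70%
```

Note that this only helps stateless workloads, as the article says – adding replicas of a stateful app without coordination won’t distribute the load safely.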
Unfortunately, containers won’t speed up your application; however, they enable scaling it horizontally, which may help significantly.
To sum up: containers launch immediately, because they are just fancier processes, similar to the ones you launch every day on your desktop. You can deploy them faster across many environments using Kubernetes, or on any machine using Minikube, making your development easier and more rapid as well. They won’t bring a significant boost to your app itself – unless you haven’t previously deployed multiple instances behind a load balancer, and scaling is something that would distribute your workload and decrease response time.
Yes, containers are definitely a great technology that can be treated as a significant milestone. Personally, I see them as a major part of the Cloud Native approach to developing, deploying and maintaining modern environments. If you choose to embrace containers in your organization, you will not only change the way you work, but you’ll also be ready for even more radical solutions such as serverless. That’s, however, a topic for a different article or a similar series 🙂
This was the third and last part of my article series on myths around containers. I hope I shed some light and debunked the myths we’ve all been hearing recently – this may help you choose wisely whether to invest your time and dive in, or stay away and wait for the next “big thing”.
Written by Tomasz Cholewa
Cloud Architect, Automation Freak