The traditional style of deployment is a risk and change management process in which success is measured by whether service and stability can be guaranteed without interruption. The exact process can take many forms, but some of the most common look something like the following:
- At worst → finish development work, throw the code over the wall to operations, hope the change board approves it and that operations can deploy it with success.
- Better → you have a manual change process where you upload your code to the server via SSH or FTP and run some upgrade steps directly on the server.
- Best → you work in a virtual machine, a CI/CD pipeline delivers your code to a server, changes are made to the application via scripts or configuration deployment tools, but you probably don’t have access to the machine and cannot verify or validate its entire state.
The Known Deployment Way
The application is probably hard-wired to its environment in some way. But the environment is likely mutable, because people are allowed SSH access. Additionally, this environment probably looks nothing like the development environment, and your developers may not even have control over what the live environment looks like. It is also extremely likely that deployments involve multiple manual processes, which have to be performed in just the right way for the deployment to succeed. All of these things make change extremely hard, and as a result experimentation and creativity are discouraged because of the cost.
It is little wonder that the deployment process is seen as high risk: there are multiple places where untracked breaking changes can be introduced. Although the processes described above are designed to minimize risk, what they actually do is create a culture of fear around releases and actively impede the delivery of value. This is also why change management and the absence of service disruption are seen as the key metrics for success. In this scenario it is extremely hard to change anything and extremely easy to break everything!
How To Make It Easier
Containerized deployment practices, by contrast, focus on velocity as the key metric: the ability of developers to push code to production and thus create value for the stakeholder. To achieve this, it is necessary to break down the traditional walls between engineering and operations and to allow developers to make (controlled) infrastructure changes through automated tools that configure the environment based on version-tracked configuration files. This requires some new concepts in design and deployment.
Mutability is an anti-pattern: in the containerized world you cannot SSH in and change a setting. Instead, you redeploy the application environment via configuration files, and automated build pipelines are triggered from version control systems. This fast, automated process bakes the concept of deployment into the heart of application design, but it also demands a whole new way of thinking.
Infrastructure as Code
By treating the infrastructure as something that is versioned, strictly separated and deployable, you can easily track changes, perform rollbacks and run applications almost anywhere. The aim is to create accountability for runtime environment changes and to eliminate brittle server setups by developing in (virtually) the same environment you deploy to live and rebuilding that environment on a regular (daily or hourly) basis. Docker, for example, makes this process extremely lightweight and flexible, significantly decreasing build times compared to virtual machines.
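As a minimal sketch, the runtime environment can be declared in a Dockerfile that lives in version control (the base image, port and entrypoint here are illustrative assumptions, not a prescription):

```dockerfile
# Illustrative example: a minimal image for a Python web service.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Run as an unprivileged user; the image itself stays immutable.
RUN useradd --create-home appuser
USER appuser

EXPOSE 8000
CMD ["python", "app.py"]
```

Because this file is versioned alongside the application code, every change to the runtime environment becomes a reviewable, revertible commit rather than an untracked tweak on a server.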
Because containers are so lightweight and fast to build and deploy, you can extend the single responsibility principle to your application runtimes: every container is responsible for only one thing. This lets you tailor each container's running conditions to best support it and scale it at runtime to meet demand, and it speeds up deployments because you only rebuild the parts of your application that actually change.
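A hypothetical Docker Compose file illustrates the idea; the service names and images are assumptions, but the principle is one responsibility per container:

```yaml
# Illustrative sketch: each container does exactly one thing.
services:
  web:                     # serves HTTP traffic, nothing else
    build: ./web
    ports:
      - "8000:8000"
  worker:                  # processes background jobs, nothing else
    build: ./worker
    depends_on:
      - queue
  queue:                   # message broker, nothing else
    image: redis:7
  db:                      # persistent storage, nothing else
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

A change to the web code only triggers a rebuild of the `web` image; the worker, queue and database containers remain untouched and can be scaled independently.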
Automation is King
The entire build process should run automatically, ideally with no human interaction at all (in a production environment, a single final-stage deploy button may be an appropriate guard). Simply pushing a change into your application (or container) repository should trigger a pipeline of tests and then a deployment to an environment that exactly mirrors production.
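Sketched in GitHub Actions syntax (one CI system among many; the job names, image tag and deploy step are assumptions), such a pipeline might look like this:

```yaml
# Illustrative CI/CD sketch: a push to main builds, tests, then deploys.
name: build-and-deploy
on:
  push:
    branches: [main]          # every push to main triggers the pipeline

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:${{ github.sha }} .
      - run: docker run --rm myapp:${{ github.sha }} pytest   # run the test suite inside the image

  deploy:
    needs: test               # deploy only runs if the tests pass
    runs-on: ubuntu-latest
    environment: production   # can be configured to require manual approval: the "deploy button"
    steps:
      - run: echo "push the image to a registry and roll it out via your platform's CLI"
```

The same image that passed the tests is the one that gets deployed, so the artifact in production is exactly the one that was verified.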
It should be easy to see how this alone helps to combat fear around deployments: what could give you more confidence in your process than having done it in exactly the same way hundreds of times before?
This automation also provides benefits at runtime. Used correctly, containers can be self-healing: if a container crashes, a new instance is started immediately. Scaling is easy, as container platforms such as Kubernetes support auto-scaling when resources are in demand. And immutable runtime images support information security, since confidential information can be mounted into temporary filesystems at runtime rather than baked into the image.
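These three runtime properties can be sketched in Kubernetes manifests (names, image and thresholds below are illustrative assumptions):

```yaml
# Illustrative sketch: self-healing replicas, runtime-mounted secrets, autoscaling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # Kubernetes restarts crashed pods to keep 3 running (self-healing)
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0   # immutable, versioned image
          volumeMounts:
            - name: credentials
              mountPath: /secrets                   # confidential data mounted at runtime
              readOnly: true
      volumes:
        - name: credentials
          secret:
            secretName: myapp-credentials           # Secrets are exposed to the pod via tmpfs
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU load exceeds 70%
```

Nothing here requires a human to SSH anywhere: the desired state is declared, and the platform continuously works to keep reality matching it.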
And What About The Platform?
All of these ideas lead to one important new problem domain...
- ... how can you build an application, or modify an existing one, to make it suitable to run in a container and to support rebuilds, scaling and immutability?
- ... how can you control that complexity?
- ... who looks after the systems that allow you to support that development workflow?
Although the use of container technology increased significantly from 2018 to 2019 (according to RightScale), in our experience complexity is still one of the main impediments for companies switching to containers. It is therefore highly recommended to cooperate with a partner who manages the underlying platform, guides you towards the ideal application, makes sure your application runs smoothly and develops your personal deployment strategy.
PS: If you are interested in further information about the basics or the implementation of container technology, we recommend our knowledge page about container technology.