As more organizations go through the container revolution, let’s take a holistic look at the ecosystem and challenges to overcome.
Lots of organizations are headed down the journey of the container revolution: a revolution where applications and application infrastructure are being containerized. If you were to start your journey today and ask someone about container technology, the first words out of their mouth would typically be Docker or something Docker related. Docker was not the first take on Linux container technology; LXC [Linux Containers] dates back to 2008, but Docker is certainly the most famous.
Knock Knock, it’s Docker!
The very first time I heard of Docker I was working at Red Hat several years ago. Coincidentally, at the time I was wearing Dockers khakis and was confused why someone was talking about pants that I bought at Kohl’s.
As my colleague started to explain that he was not talking about pants, my mind raced back to Java containers like Catalina [Tomcat’s servlet container]. What stuck with me during that introduction to Docker was the promise of being the same across environments and being very portable. Imagine a supercharged application distribution with everything you need to run. As a Java EE developer, I was making analogies to Spring Boot’s mantra to “Just Run”.
The history of Docker is a relatively short one: open-sourced by dotCloud in 2013, the Docker ecosystem continues to bloom, encompassing the tooling needed to make the container format robust and scalable.
Containers, Images, oh my!
As the computing world migrated off the mainframe to x86, then to virtual machines, the revolution has now focused on the mighty container [some might argue serverless is next].
Some lingo: in Docker terms, an image is the template that turns into a running container or, more often than not, many running containers.
Think of a container as application jail. An application can only have as many resources as defined by the manifest/specification of the image.
Like your gasoline-powered car needing the spark, air, fuel, and compression to run, your applications need compute, memory, storage, and networking to be of value. A container specification helps define all of these required components.
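Those four pillars map directly to flags on the Docker CLI. As a sketch (the image name, limits, and paths here are illustrative, not prescriptive):

```shell
# Compute and memory: cap the container at one CPU and 256 MiB of RAM
# Storage: mount a host directory into the container, read-only
# Networking: publish container port 80 on host port 8080
docker run -d \
  --cpus="1.0" \
  --memory="256m" \
  -v "$(pwd)/html:/usr/share/nginx/html:ro" \
  -p 8080:80 \
  nginx
```

If the application tries to exceed those limits, the container runtime holds it to them: that is the jail.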
Why the Hoopla?
The big draw of a container is the ability to have consistency and portability. In the software world, thanks to abstraction, we have been aiming for write once, run anywhere [WORA], and Java is the de facto standard for this cross-platform goal. Though writing an application is only part of what you need for the application to run: the quintessential bootstrap problem, aka what to run first. At lower levels, your applications need system libraries to run, and Docker/containers help package those system libraries.
You can certainly see the benefit: if you are able to package up exactly what you need to run, system libraries and all, you can take exactly what you need, i.e. right-sizing. With that, you can start to increase density.
By design a container is immutable, meaning it cannot change. Since a container is a running image, if you need to make a change, you re-create the image and spin up new containers. If you are making a change, you are making a new one; no more hot patching, which certainly makes change control folks happy, though it is possible to get a shell inside a running container.
Though hot patching is not a good idea anyway, because another trait of containers is that they are ephemeral, meaning short-lived or transient in nature. Even if you do patch a running container, the container is made to die, so your changes will not persist beyond that single container.
I need a Container Now!
With the vast number of parts available to us through open source, the availability of Docker images has been exploding thanks to public galleries such as Docker Hub. Need yourself a MongoDB? You can quickly pull from hundreds if not thousands of images that contain MongoDB.
Unlike an application dependency, for example the database driver your Java app needs to talk to MongoDB, which you would get from Maven Central, you can get a working instance of MongoDB itself from Docker Hub.
Getting started is as easy as downloading one of the Docker Engine runtimes. If you are on a Mac, the easiest way is to install Docker Desktop for Mac. After you install it and the service starts up, you can head to Terminal, type in “docker”, and the magic will start.
Let’s get the Magic Started!
Getting started with Docker commands is easy. For now, two basics are pull and run. With those two you are dangerous.
Ready to run your very first container? Pull down nginx (a web server) with a docker pull; then you can immediately run it. The Docker for Mac client comes pre-configured to point to Docker Hub as the default registry, so by running “docker pull nginx”, you are getting the latest published version of nginx on Docker Hub. Almost there!
Next, get ready to run your newly minted nginx image by leveraging the docker run command. We will add a few arguments so we can expose and bind the default nginx port 80 with the -p flag and name the container with the --name attribute: “docker run --name your-first-nginx -p 80:80 nginx”. Once you run that, you are all set to head to the nginx launch page by going to your browser and entering localhost.
To stop the running container, just run the stop command. One way is to find the container ID and pass it to the stop command: run the list-containers command with “docker ps -a” and copy the container ID of “your-first-nginx”. All you have to do now is run “docker stop your_container_ID”. Since you named the container, “docker stop your-first-nginx” works too.
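Putting the steps above together, the whole first-container session looks something like this (the -d flag, which the walkthrough leaves out, runs the container in the background so it does not hold your terminal):

```shell
# Pull the latest published nginx image from Docker Hub
docker pull nginx

# Run it in the background, named, with port 80 published to the host
docker run -d --name your-first-nginx -p 80:80 nginx

# List all containers (running and stopped) to find yours
docker ps -a

# Stop it by name, then remove it so the name can be reused
docker stop your-first-nginx
docker rm your-first-nginx
```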
Bonus Round - Make and Share Something
The primary mechanism for making your very own Docker image is a Dockerfile; Docker Compose then lets you describe how one or more of those containers should run together as an application.
Making your own is not that difficult either. A simple Dockerfile or Compose file and you are on your way.
You need to define a few specifications, such as how to access your container and the resource limits for your application jail.
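As a sketch of what that specification can look like in Docker Compose (the service name, port mapping, and limits below are illustrative; the deploy.resources syntax assumes a Compose version that implements the Compose specification):

```yaml
# compose.yaml: one nginx service with access (ports) and
# resource limits for the application jail
services:
  web:
    image: nginx
    ports:
      - "8080:80"        # host port 8080 -> container port 80
    deploy:
      resources:
        limits:
          cpus: "1.0"    # at most one CPU
          memory: 256M   # at most 256 MiB of RAM
```

Bring it up with docker compose up and tear it down with docker compose down (docker-compose on older installs).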
A primary reason for making your own image is to modify the configuration of the application infrastructure that will run inside the container. This excellent and very detailed article on dev.to goes through configuring nginx as a reverse proxy and packaging all of those config changes into a new image.
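The core of that workflow is a Dockerfile; a minimal sketch, assuming a hypothetical local nginx.conf that carries your configuration changes:

```dockerfile
# Start from the official nginx image and overlay our config.
# nginx.conf is a hypothetical local file with the changes to bake in.
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
```

Build it with docker build -t my-nginx . and the config changes travel with the image.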
Once you have made something, you can share it with the world with a push. Docker push needs a Docker registry to publish to; if you don’t have a private one (in Nexus, for example), the public Docker Hub is great.
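Pushing boils down to tagging the image with your registry namespace and then pushing that tag; a sketch, where my-nginx and your-dockerhub-user are placeholders for your own locally built image and Docker Hub account:

```shell
# Re-tag a locally built image under your Docker Hub namespace
docker tag my-nginx your-dockerhub-user/my-nginx:1.0

# Authenticate, then publish
docker login
docker push your-dockerhub-user/my-nginx:1.0
```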
Why are all my workloads not in a container?
Going back to the point that containers are ephemeral and made to die: certain workloads do not do well with this, especially stateful workloads. Workloads that require state or have specific persistence requirements might not have been built to live in a container. Application clustering mechanisms can be taxed as containers come and go at a faster velocity than the cluster can re-balance.
Even my favorite language, Java, has been going through an evolution to be containerized more easily. When containerizing a runtime or language, a lot of internals need to be considered, for example how the runtime respects resource limits.
We are lucky that there have been a lot of ecosystem improvements around the four pillars of need (remember, applications need compute, memory, storage, and networking to be of value), allowing more application infrastructure to be containerized. Take a look at how the Cloud Native Computing Foundation’s landscape changes year to year; there are so many projects to help us along the journey that it can be overwhelming.
What if we have more than one of these containers?
There are a lot of arguments about the number of concurrent processes that should run in a container. Because containers are easy to spin up, they lend themselves to being home to a single process, and just like magic you are marching towards microservices before your very eyes.
But applications certainly do not run, or scale, by themselves; thus the need for a platform to orchestrate all of these running containers. Cue the music, welcome Kubernetes!