Here’s a quick 4-1-1 on what developers need to know to deploy their applications to Kubernetes.
Getting into the container:
Containers allow you to package your application and everything it needs to run: the application’s dependencies, supporting files, and its runtime environment.
Containers, as the name suggests, are portable. You can spin up containers in different environments and on different infrastructure without hearing, “well, it works on my machine.” That portability also points to the main difference between containers and VMs: VMs are like ships, and containers are like boats. You can put a boat on a ship, but you can’t put a ship on a boat; a VM carries a full operating system, while a container shares the host’s kernel and carries only what the application needs.

A container is a running instance of an image. A container image contains the source code, libraries, and dependencies of your application. If your app uses SSH, for example, you would make sure the container image has SSH installed. Images are like templates, and you can add layers to them to give your containers additional functionality.
Container images are stored in public or private repositories hosted on a registry, like Docker Hub. Typically you pull down an image from a repository, or build your own container image, and use that to spin up a container. Docker is a container engine that does exactly this: it builds images and runs them as containers.
Shipping the containers:
Kubernetes is a tool used to orchestrate and manage containers. There are also container platforms like OpenShift or Mesos that help you build, deploy, and manage your containers. In such an ecosystem, you need to know a few extra things about containers (you can also read the Design and Architecture for Kubernetes here).
Pods group one or more containers so they can run on your infrastructure. A pod is the smallest deployable unit created and managed by Kubernetes. Pods run on machines managed by Kubernetes called nodes.
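To make that concrete, here is a minimal sketch of a single-container pod manifest; the names and the nginx image are placeholders chosen for the example, not something this article prescribes:

```yaml
# pod.yaml - a minimal single-container pod (names and image are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app           # label used later to address the pod via a Service
spec:
  containers:
    - name: my-app
      image: nginx:1.25   # an image pulled from a public or private registry
      ports:
        - containerPort: 80
```

Applying this with kubectl apply -f pod.yaml asks Kubernetes to schedule the pod onto one of the cluster’s nodes.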
Pods can contain a single container (this is the most common use case) or multiple containers. An additional container deployed into the same pod alongside your application is called a sidecar container. The sidecar pattern works because the containers share the same set of resources at the pod level, such as the network namespace and any volumes.
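As an illustration of the sidecar pattern, the sketch below pairs an nginx container with a small log-tailing sidecar; the shared emptyDir volume, the images, and the paths are assumptions chosen for the example:

```yaml
# sidecar-pod.yaml - a main container and a sidecar sharing a pod-level volume
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-sidecar
spec:
  volumes:
    - name: shared-logs    # pod-level resource shared by both containers
      emptyDir: {}
  containers:
    - name: my-app         # main application container
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-sidecar    # sidecar that reads what the main container writes
      image: busybox:1.36
      command: ["sh", "-c", "touch /var/log/nginx/access.log && tail -f /var/log/nginx/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
```

Both containers see the same volume and share the pod’s network, which is what makes the pattern work.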
A DaemonSet is often considered and compared as an alternative to the sidecar pattern. A DaemonSet is a Kubernetes resource that ensures an instance of a pod runs on every node in a cluster. The DaemonSet pattern consumes fewer resources than the sidecar pattern because you run only one instance per node rather than one per pod.
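A hedged sketch of a DaemonSet follows; the busybox “agent” is a stand-in for something you would actually run on every node, such as a log collector or monitoring agent:

```yaml
# daemonset.yaml - run one copy of this pod on every node in the cluster
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: node-agent
          image: busybox:1.36   # placeholder; a real per-node agent would go here
          command: ["sh", "-c", "echo node agent running; while true; do sleep 3600; done"]
```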
Because pods act as a wrapper around your container(s), you can make them addressable using a Kubernetes Service. You can configure your container platform to specify how you want to build your images, and how to deploy and operationalize your application.
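As a minimal sketch of that idea, the Service below makes any pod carrying the label app: my-app (like the pod example earlier) reachable under one stable name inside the cluster; the name and ports are assumptions:

```yaml
# service.yaml - give the pods labelled app: my-app a stable, addressable name
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # matches the label on the pod(s) to route traffic to
  ports:
    - port: 80         # port the Service exposes inside the cluster
      targetPort: 80   # port the container is listening on
```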
What happens after:
There are a couple of ways to ensure that your application performs well. Health checks provide liveness and readiness functionality. A liveness probe checks whether a container is still running, and the container is restarted if it is not. A readiness probe determines whether a container is ready to service requests; it can be configured to do an HTTP check, so you’ll know the service is ready to receive traffic. Check the Kubernetes documentation for the probes available.
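Here is a sketch of how the two probes might look on a container; the paths, ports, and timings are assumptions you would adapt to your own application:

```yaml
# probes.yaml - liveness and readiness probes on a single container
apiVersion: v1
kind: Pod
metadata:
  name: my-app-probes
spec:
  containers:
    - name: my-app
      image: nginx:1.25
      livenessProbe:        # is the container still alive? restart it if not
        httpGet:
          path: /           # placeholder; many apps expose a /healthz endpoint
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:       # is the container ready to receive traffic?
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 5
```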
The final guidance is related to performance. Each container running on a node consumes compute resources. A slowdown in performance or capability can often be traced back to the CPU and memory resources allocated to the pods. Every pod can be limited in how much memory and CPU it consumes while on a node, and in some cases pods are terminated if they surpass a memory limit. A typical example of resource negligence is an instance of Jenkins performing builds slowly because it has a CPU limit of 500m, or half a core.
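To make the resource guidance concrete, the sketch below sets requests and limits on a container; the 500m CPU limit mirrors the Jenkins example above, and the exact numbers are assumptions to tune for your workload:

```yaml
# resources.yaml - CPU and memory requests and limits for a container
apiVersion: v1
kind: Pod
metadata:
  name: my-app-resources
spec:
  containers:
    - name: my-app
      image: nginx:1.25
      resources:
        requests:          # what the scheduler reserves for the pod
          cpu: 250m        # a quarter of a core
          memory: 256Mi
        limits:            # hard caps enforced on the node
          cpu: 500m        # half a core, as in the Jenkins example
          memory: 512Mi    # exceeding this can get the container OOM-killed
```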