Deploying Your Application as a Container

Simplifying Deployment with Containers: How Containers Solve Real-World Problems. Okay, you can call it Docker 🐳

Do Exploit

Understanding Application Deployment

An “application” in software development refers to a program or system designed to perform specific tasks for end users. Deployment is the process of delivering applications, updates, or patches from developers to users. Whether it’s pushing code via File Transfer Protocol (FTP), deploying on virtual machines, or using CI/CD pipelines, it’s all part of deployment.

There are always differences between computers

Source: https://www.pinterest.com/pin/programmer-tshirt-it-works-on-my-machine-by-originto--679762137481755733/

But here’s the catch: deployment is hard. You’ve probably heard the joke, “It works on my machine.” It’s funny because it’s true. Every developer’s machine — and every server — is unique. Different operating systems, libraries, configurations, and dependencies create inconsistencies. And this isn’t just a problem on personal laptops — it extends to servers, where applications go through multiple stages (development, staging, production) before reaching real users.

Let’s break down the key problems this creates.

The Problems

1. Conflicting Dependencies

Imagine this: your server runs two applications. App X needs Python 3.10 and kernel X, while App Y requires Python 3.12 and kernel Y. These conflicting dependencies can’t coexist cleanly on the same server. The result? You end up with one server per application.
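Containers dissolve this conflict: each application ships with its own interpreter and libraries, so both can run side by side on one host. A minimal sketch as a Compose file (service names are hypothetical; note that containers share the host kernel, so a true kernel-level conflict would still need separate machines or VMs):

```yaml
# docker-compose.yml — two apps with conflicting Python versions on one server
services:
  app-x:
    image: python:3.10-slim    # App X gets exactly Python 3.10
    command: python --version
  app-y:
    image: python:3.12-slim    # App Y gets exactly Python 3.12
    command: python --version
```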

2. Slow Developer Onboarding

When a new engineer joins your team, they shouldn’t spend days setting up their environment. But in reality, they often do. Missing libraries, mismatched dependency versions, or OS-specific tools can turn onboarding into a frustrating puzzle. Instead of contributing to the project, they’re stuck solving setup issues.
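With a containerized project, day-one setup can shrink to a clone and a single command. A sketch using this article’s demo repository (assuming Docker and Docker Compose are already installed):

```shell
# A new engineer's entire environment setup — no manual library installs
git clone https://github.com/michaelact/example-container
cd example-container
docker compose up -d   # builds the images and starts every service
```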

3. Multi-platform Deployment

In today’s world, your application needs to run on multiple platforms — Windows, Linux, macOS, and more. Supporting all these platforms means maintaining separate builds, configurations, and testing environments. It’s time-consuming, and often leads to the “It works on my machine” scenario.
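Containers narrow this down to a single build definition. Docker’s buildx can produce one image that covers several CPU architectures (the image tag here is hypothetical, and --push assumes a registry you are logged in to):

```shell
# One command, one tag, two platforms
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myapp:1.0 --push .
```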

4. Hard to Rollback

Deployments fail. But rolling back can be hard. If a new version of your application breaks, reverting isn’t as simple as rolling back the code. You might need to downgrade libraries, reconfigure services, or even rebuild the entire environment. And if you’re running multiple applications on the same server, one broken app can take everything else down with it.

The safest way? Spin up a new server and deploy the application there. But this approach is costly, slow, and often limited by licensing or resource constraints.
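With containers, every release is an immutable, versioned image, so rolling back means redeploying the previous tag instead of rebuilding an environment. A sketch (image name and tags are hypothetical):

```yaml
services:
  web:
    image: myapp:1.4.0   # broken release? change this line back to
                         # myapp:1.3.0 and run `docker compose up -d`;
                         # the old image — old libraries included —
                         # is still sitting in the registry
```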

The Bigger Picture

But what if there was a better way? What if you could eliminate environment inconsistencies, simplify onboarding, and make deployments predictable and reliable? That’s where containers come in.

Potential Solutions

Introducing Containerization

A container is a standard unit of software that packages your application’s code, dependencies, and runtime into a single, lightweight, and portable unit. Think of it as a self-contained box that includes everything your application needs to run — no matter where it’s deployed.

Source: https://www.redhat.com/en/topics/containers/whats-a-linux-container

Here’s how it works:

  • Application/Services: Your app and its components.
  • Supporting Files/Runtime: Libraries, dependencies, operating system, and runtime environments.
  • Host Operating System: The underlying OS where the container runtime (e.g., Docker) is installed.

Containers ensure that your application runs consistently across different environments — whether it’s your laptop, a development server, or a production cluster. And the best part? Containers are lightweight and fast, unlike virtual machines.

Docker: The Popular Choice

Before Docker, there was Linux Containers (LXC), which provided basic container functionality. Docker, introduced in 2013, revolutionized containerization by adding user-friendly tools and services that made it accessible to developers and operations teams alike. Docker simplified the process of building, shipping, and running containers.

Deep Dive Into Docker Container

I recommend watching and following the demo section before proceeding here, to get an overview of the workflow from building to deploying containers.

Virtual Machine vs Containers

To understand why containers are so powerful, let’s compare them to virtual machines (VMs):

Source: https://www.redhat.com/en/topics/containers/whats-a-linux-container

Containers are less resource-intensive than VMs, expose a standardized interface (start, stop, environment variables, and so on), and retain application isolation while being easier to manage as part of a larger application (multiple containers that communicate).

Ecosystem

Source: https://middleware.io/blog/understanding-the-docker-ecosystem/

Docker provides an ecosystem for building, deploying, and managing containers. Here’s how it works:

  1. Dockerfile
    A text file that defines the steps to build a container image. Each instruction in the Dockerfile is executed in order to create a reproducible build process.
  2. Container Image
    A snapshot of the build results that includes everything needed to run the application — code, runtime, dependencies, and file system objects. Images are stored in registries like Docker Hub.
  3. Container
    A running instance of a container image. When you deploy an image, it becomes a container.
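The three pieces connect in a straight line: a Dockerfile builds an image, and an image runs as a container. A minimal sketch for a Python app (file and tag names are hypothetical):

```dockerfile
# Dockerfile — step 1: the build recipe
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

Running docker build -t myapp:1.0 . turns this recipe into an image (step 2), and docker run myapp:1.0 starts a container from that image (step 3).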

What’s under the hood?

Isolation

Container isolation involves isolating a containerized application’s runtime environment from the host operating system and other processes running on the host. This isolation takes several forms, including file system isolation, process isolation, and network isolation.

File System Isolation (Mount Namespace)

Source: https://www.toptal.com/linux/separation-anxiety-isolating-your-system-with-linux-namespaces

The mount namespace gives each container its own isolated view of the filesystem. This means the container can’t access or interfere with files belonging to other containers or the host system. Instead of using the host’s default filesystem, the container gets its own “virtual” filesystem.

  • The host system uses an overlay filesystem to back the Python container.
  • Inside the Python container, it sees its own filesystem, which looks similar to the host’s but is completely separate.
  • By default, other containers cannot access the filesystem of the Python container.
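You can observe this isolation from any machine with Docker installed; an illustrative transcript (requires a running Docker daemon):

```shell
# The container's root filesystem comes from its image, not the host
docker run --rm python:3.12-slim ls /

# Files written inside the container never appear on the host, and
# vanish when the container is removed
docker run --rm python:3.12-slim sh -c 'echo hello > /tmp/demo && cat /tmp/demo'
```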

Process Isolation (PID Namespace)

Source: https://www.toptal.com/linux/separation-anxiety-isolating-your-system-with-linux-namespaces

The PID namespace ensures that each container has its own isolated view of running processes. This means the container can only see and manage its own processes, not those of other containers or the host (by default).

  • On the host system, you can see all processes, including those created by the Python container.
  • Inside the Python container, only its own processes are visible (e.g., 3 processes). It cannot see processes from the host or other containers.
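A quick way to see the PID namespace in action (illustrative; requires Docker):

```shell
# Inside a fresh container, the very first process gets PID 1 and no
# host processes are visible in the listing
docker run --rm alpine ps
# On the host, by contrast, that same container's processes show up
# in an ordinary ps listing alongside everything else
```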

Network Isolation (Network Namespace)

Source: https://www.toptal.com/linux/separation-anxiety-isolating-your-system-with-linux-namespaces

The network namespace gives each container its own isolated network environment, including network interfaces, IP addresses, and routing rules. This ensures that containers can use the ports they need without conflicting with each other, and traffic can be directed to the correct container.

  • The host system creates a bridge interface to manage network traffic for all containers.
  • Each container has its own virtual network interface and IP address.
  • Firewall and routing rules on the host ensure that traffic is forwarded to the correct container.
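A sketch of how this plays out with port mappings (nginx is used purely as an example image; requires Docker):

```shell
# Both containers listen on port 80 *inside* their own network
# namespace — no conflict, because each has its own interfaces
docker run -d -p 8080:80 nginx
docker run -d -p 8081:80 nginx
# The host's firewall/NAT rules forward host port 8080 to the first
# container and 8081 to the second
```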

Demo

In this demo, we’ll simulate how containers solve the Conflicting Dependencies problem. For a hands-on experience, you can find the complete guide here:

https://github.com/michaelact/example-container

Explanation

Build

To build the application, we use the docker build command. This command reads the Dockerfile (a text file with instructions) and creates a container image for the application.

What’s in the Dockerfile?
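The real file lives in the linked repository; a representative Dockerfile for a Python application looks roughly like this (the file names are assumptions):

```dockerfile
FROM python:3.10-slim                 # pin the exact runtime the app needs
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependencies baked into the image
COPY . .
CMD ["python", "main.py"]
```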

Deploy

Once the image is built, we can deploy it using docker compose up -d. Docker Compose is a tool that lets you define and run multi-container applications with a single configuration file.

What’s in the Docker Compose File?

A text file that defines how the container should be run.
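A roughly equivalent sketch (the service name and port are assumptions, not the repository’s actual file):

```yaml
services:
  app:
    build: .              # build the image from the Dockerfile in this directory
    ports:
      - "8000:8000"       # forward host port 8000 to the container
    restart: unless-stopped
```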


Conclusion

Deploying applications doesn’t have to be complicated. As we’ve seen, traditional deployment brings frustrating issues like conflicting dependencies, slow onboarding, and hard rollbacks. But with Docker and containerization, these challenges become a thing of the past.

Containers provide a consistent, portable, and efficient way to package and deploy applications. They ensure your app runs the same way on your laptop, in staging, and in production — no more “It works on my machine” excuses. Plus, tools like Docker Compose make it easy to manage and scale your applications.

So, are you ready to simplify your deployments? Give Docker a try, and see how containers can transform the way you work.

References

  1. https://www.redhat.com/en/topics/containers/whats-a-linux-container
  2. https://www.docker.com/resources/what-container/
  3. https://aws.amazon.com/docker/
  4. https://snyk.io/blog/best-practices-for-container-isolation/
  5. https://middleware.io/blog/understanding-the-docker-ecosystem/
  6. https://securitylabs.datadoghq.com/articles/container-security-fundamentals-part-2/


Written by Do Exploit

I share stories about what I've learned in the past and now. Let's connect to Instagram! @do.exploit
