Most developers think they “know Docker” because they can run:
`docker build`, `docker run`, `docker-compose up`
That’s not understanding. That’s muscle memory. If this is where your Docker knowledge stops, you are operating at cargo-cult level:
- You copy commands
- You don’t understand consequences
- You panic when things break
Docker is not magic. Docker is Linux primitives glued together with tooling.
Until you understand what actually happens under the hood, you will:
- Debug blindly in production
- Lose data due to bad volume configuration
- Break networking and blame Docker
- Ship bloated images
This article strips Docker down to its bones.

What Docker Really Is (No Marketing Nonsense)
Let’s kill the myths first. Docker is not a virtual machine replacement, not a deployment platform, and not a magic packaging tool. What Docker actually does is much simpler and far more important: it uses Linux namespaces to isolate processes, cgroups to control resource usage, and union filesystems to build layered images, all coordinated by a long-running daemon called dockerd. Everything else you interact with — Dockerfiles, the CLI, Docker Compose — is just user experience built on top of these primitives. Docker didn’t invent containers; Linux did. Docker’s real contribution was making those low-level Linux features usable for everyday developers.
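You can see these primitives directly on any Linux machine, no Docker required. A minimal sketch (Linux only):

```shell
# Every Linux process already lives inside a set of namespaces.
# Each symlink here is one isolation dimension: pid, net, mnt, uts, ipc, ...
ls -l /proc/self/ns

# And every process belongs to cgroups, which cap its resource usage.
cat /proc/self/cgroup
```

Docker's job is simply to create fresh namespaces and cgroups for the process it launches, instead of inheriting the host's.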

Containers vs Virtual Machines (The Lie You Were Told)
People often say: “Containers are lightweight VMs.” That sentence has caused more production failures than most actual bugs.
| Aspect | Virtual Machine | Container |
|---|---|---|
| Kernel | Separate | Shared with host |
| Boot time | Minutes | Milliseconds |
| Isolation | Hardware-level | Process-level |
| Overhead | Heavy | Lightweight |
| Security boundary | Stronger | Weaker (by design) |
The Uncomfortable Truth
A container is just a process with constraints: namespaces limit what it can see, and cgroups limit the resources it can use.
Same kernel. Same OS. Same host underneath.
If that sentence makes you uncomfortable, good. It means you’re starting to understand Docker properly.
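You can verify this yourself if you have a Docker daemon running; the alpine image is just an illustrative choice:

```shell
# Start a throwaway container running a single process.
cid=$(docker run -d --rm alpine sleep 300)

# Ask Docker what PID that process has *on the host*.
pid=$(docker inspect --format '{{.State.Pid}}' "$cid")

# It appears in the host's process table like any other process.
ps -p "$pid"

docker rm -f "$cid"
```

No hypervisor, no guest kernel: just a host process that `ps` can see.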
The Filesystem Illusion (Union FS Explained Simply)

When you pull a Docker image, Docker does not download one big file. It downloads layers.
Typical layers look like:
- Base OS layer
- Runtime layer (Python, Node, Go)
- Dependency layer
- Application code layer
Important facts about these layers:
- They are read-only
- They are shared across containers
- They are cached aggressively
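You can inspect those layers on any image you have locally; `python:3.12-slim` below is just an example tag:

```shell
# One row per layer: its size and the Dockerfile instruction that created it.
docker history python:3.12-slim

# The content-addressed digests of the layers themselves.
# Two images that share a base image share these digests — and the disk space.
docker image inspect --format '{{json .RootFS.Layers}}' python:3.12-slim
```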
What Happens When a Container Starts?
Docker:
- Stacks all read-only layers
- Adds one thin writable layer on top
All file changes go only to that writable layer.
When the container is deleted:
- The writable layer disappears
- Your data disappears with it
That’s why:
- Writing data inside containers is a rookie mistake
- Containers are disposable by design
- Volumes exist
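The disposability is easy to demonstrate, assuming a running daemon and the alpine image:

```shell
# Write a file into a container's writable layer, then delete the container.
docker run --name scratch alpine sh -c 'echo important > /data.txt'
docker rm scratch

# A fresh container from the same image starts from the read-only layers only.
docker run --rm alpine cat /data.txt   # fails: the file is gone
```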
Volumes: Where Most Systems Go to Die

Storage is where Docker setups usually collapse. You don’t get many choices — and choosing the wrong one guarantees pain.
There are three options.
1. Container filesystem (bad)
- Data lives inside the container’s writable layer
- Destroy the container → data is gone
- Only acceptable for temporary, throwaway files
Use this for anything important and you’ve built a self-destructing system.
2. Docker volumes (correct)
- Managed by Docker
- Independent of the container lifecycle
- Portable, predictable, and easy to back up
This is what production systems are supposed to use.
3. Bind mounts (dangerous)
- Directly map host filesystem paths into containers
- Environment-specific and brittle
- Easy to break, painful to debug
Great for local development. Risky in production unless you know exactly what you’re doing.
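Of the three, named volumes are the one worth internalizing. A quick sketch (volume and mount names are arbitrary; assumes a running daemon):

```shell
# Create a named volume managed by Docker.
docker volume create appdata

# Write into it from one container...
docker run --rm -v appdata:/data alpine sh -c 'echo hello > /data/greeting'

# ...and read it back from a completely different container.
docker run --rm -v appdata:/data alpine cat /data/greeting   # prints: hello
```

Both containers are gone by the end; the data is not.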
One Rule You Must Remember
App code lives in images. App data lives in volumes.
Break this rule and production will punish you.
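In Compose terms, the rule looks like this; the service and volume names are illustrative:

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # app data lives in a named volume

volumes:
  db-data:   # survives `docker compose down` (unless you pass -v)
```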
Networking: Why localhost Betrays You

Inside a container:
`localhost` points only to the container itself:
- Not the host
- Not other containers
- Not “where the database runs”
This is where many systems break.
What Docker Actually Sets Up
Docker doesn’t rely on magic. It creates:
- Virtual network bridges
- Virtual network interfaces
- An internal DNS server
Every container on the same Docker network gets automatic service discovery.
That’s why this works:
`db:5432`
The Hard Truth
Docker resolves service names through its internal DNS. The moment you hardcode IP addresses or depend on localhost across containers, you’ve brought fragility into the system. It may appear to work today, in your environment, on your machine — but it will fail in production, usually under load or during a redeploy.
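The DNS behavior is easy to see for yourself; the network and container names below are arbitrary (assumes a running daemon):

```shell
# User-defined bridge networks get Docker's embedded DNS server.
docker network create appnet

# Start a container whose name becomes a DNS name on that network.
docker run -d --rm --name db --network appnet alpine sleep 300

# Any other container on the same network resolves it by name.
docker run --rm --network appnet alpine ping -c 1 db

docker rm -f db
docker network rm appnet
```

Note that the default `bridge` network does not provide this name resolution; only user-defined networks do.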
The Docker Daemon: Single Point of Control
Everything in Docker flows through dockerd:
- Building images
- Pulling images
- Creating networks
- Managing volumes
- Running containers
If dockerd crashes:
- Containers keep running
- You lose control and orchestration
This surprises people. It shouldn’t.
Containers are Linux processes. Docker is just the manager.
This is not a bug. This is how Linux works.
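Whether containers survive a *clean* daemon restart (not just a crash) is configurable. With live-restore enabled in the daemon config, containers keep running while dockerd is stopped or upgraded:

```json
{
  "live-restore": true
}
```

This goes in `/etc/docker/daemon.json` and takes effect after the daemon reloads its configuration.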
What You Should Take Away
If you remember only five things:
- Containers are not VMs
- Containers are processes
- Filesystems are layered illusions
- Data inside containers is disposable
- Docker is Linux with a nice CLI
Once this clicks:
- Docker stops being scary
- Debugging becomes logical
- Production failures make sense
Ignore this, and Docker will keep “mysteriously” failing.
What’s Next
In the next article, we’ll dissect:
Why 90% of Dockerfiles are inefficient — and how to fix them
Most Dockerfiles in the wild are bloated, slow, insecure, and poorly cached, usually because they are written by copying patterns without understanding how Docker actually builds images. And yes, you are probably doing it wrong — and fixing it will immediately make your builds faster, your images smaller, and your systems easier to run in production.