Containers, Images, Networking, and Best Practices
| Aspect | Virtual Machines | Containers |
|---|---|---|
| Isolation | Hardware-level (hypervisor) | OS-level (namespaces, cgroups) |
| OS | Full OS per VM (kernel + userspace) | Shares host kernel, isolated userspace |
| Size | GBs (entire OS) | MBs (app + dependencies) |
| Startup Time | Minutes | Seconds (or milliseconds) |
| Resource Overhead | High (separate kernel, drivers) | Low (shared kernel) |
| Density | 10s per host | 100s-1000s per host |
| Security | Stronger isolation | Weaker (shared kernel) |
Virtual Machines:              Containers:
┌─────────────────────┐        ┌─────────────────────┐
│  App A  │  App B    │        │ App A │ App B │App C│
├───────────┼─────────┤        ├─────────────────────┤
│ Bins/Libs │         │        │      Bins/Libs      │
├─────────────────────┤        ├─────────────────────┤
│ Guest OS  │Guest OS │        │  Container Runtime  │
├─────────────────────┤        ├─────────────────────┤
│     Hypervisor      │        │       Host OS       │
├─────────────────────┤        ├─────────────────────┤
│       Host OS       │        │      Hardware       │
├─────────────────────┤        └─────────────────────┘
│      Hardware       │
└─────────────────────┘
Namespaces isolate what a process can see:
| Namespace | Isolates | Example |
|---|---|---|
| PID | Process IDs | Container sees PID 1 as its init, host sees it as PID 12345 |
| NET | Network stack | Container has own network interfaces, routing tables, firewall |
| MNT | Filesystem mounts | Container sees own root filesystem |
| UTS | Hostname | Container can have different hostname than host |
| IPC | Inter-process communication | Shared memory, semaphores isolated |
| USER | User/Group IDs | Root in container ≠ root on host (user namespacing) |
# See container namespaces
docker inspect --format '{{.State.Pid}}' container_name
ls -la /proc/PID/ns/
# Example output:
lrwxrwxrwx 1 root root 0 net:[4026532208]
lrwxrwxrwx 1 root root 0 pid:[4026532209]
lrwxrwxrwx 1 root root 0 mnt:[4026532206]
...
Control groups (cgroups) limit what resources a process can use:
| Resource | Control | Docker Flag |
|---|---|---|
| CPU | CPU shares, cores, quota | --cpus=2, --cpu-shares=512 |
| Memory | Memory limit, swap | --memory=1g, --memory-swap=2g |
| Block I/O | Disk read/write limits | --device-read-bps, --device-write-bps |
| Network | Bandwidth limits | (requires tc/iptables) |
# Run container with resource limits:
#   --cpus=2           2 CPU cores
#   --memory=1g        1GB RAM
#   --memory-swap=2g   2GB total (RAM + swap)
#   --pids-limit=100   max 100 processes
# (Inline comments can't follow a trailing backslash; they break the line continuation.)
docker run -d \
  --name myapp \
  --cpus=2 \
  --memory=1g \
  --memory-swap=2g \
  --pids-limit=100 \
  nginx
# View resource usage
docker stats myapp
Container filesystems stack read-only image layers under a single writable top layer:
Container Filesystem (OverlayFS):
┌─────────────────────────────────┐
│   Container Layer (R/W)         │ ← Changes here
│   /tmp/myfile.txt               │
├─────────────────────────────────┤
│   Image Layer 3 (R/O)           │
│   ADD app.jar /app/             │
├─────────────────────────────────┤
│   Image Layer 2 (R/O)           │
│   RUN apt-get install openjdk   │
├─────────────────────────────────┤
│   Image Layer 1 (R/O)           │
│   FROM ubuntu:22.04             │
└─────────────────────────────────┘
Copy-on-Write (CoW): modifying a file from a read-only layer first copies it up into the writable layer; the underlying image layer is never changed.
An image is a read-only template with instructions for creating a container, built from layers stacked on top of each other.
# View image layers
docker history nginx:latest
IMAGE          CREATED BY                                      SIZE
f9c14fe76a38   /bin/sh -c #(nop)  CMD ["nginx" "-g" "daemon…   0B
<missing>      /bin/sh -c #(nop)  EXPOSE 80                    0B
<missing>      /bin/sh -c apt-get update && apt-get install…   54MB
<missing>      /bin/sh -c #(nop)  WORKDIR /usr/share/nginx/…   0B
<missing>      /bin/sh -c #(nop)  FROM debian:bookworm         124MB
| Driver | Filesystem | Performance | Use Case |
|---|---|---|---|
| overlay2 | OverlayFS | Best | Default on modern Linux (recommended) |
| aufs | AUFS | Good | Legacy Ubuntu (deprecated) |
| devicemapper | LVM thin provisioning | OK | RHEL/CentOS 7 (legacy) |
| btrfs | Btrfs | OK | Special cases (CoW filesystem) |
| zfs | ZFS | Good | Special cases (enterprise features) |
# Check storage driver
docker info | grep "Storage Driver"
Storage Driver: overlay2
FROM node:18
WORKDIR /app
# Copies everything (Dockerfile comments must start the line; an inline
# comment after COPY would be parsed as extra arguments)
COPY . .
RUN npm install # Cache invalidated on ANY file change
CMD ["node", "server.js"]
Problem: npm install runs on every code change!
FROM node:18
WORKDIR /app
# Copy only package files first
COPY package*.json ./
RUN npm install # Cached unless package.json changes
# Copy code after dependencies installed
COPY . .
CMD ["node", "server.js"]
Result: npm install cached when only code changes!
Reduce final image size by using separate build and runtime stages.
FROM golang:1.21
WORKDIR /app
COPY . .
RUN go build -o myapp
CMD ["./myapp"]
# Final image: 1.2GB (includes Go compiler, build tools)
# Stage 1: Build
FROM golang:1.21 AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o myapp
# Stage 2: Runtime
FROM alpine:3.19
RUN apk --no-cache add ca-certificates
WORKDIR /app
COPY --from=builder /app/myapp .
CMD ["./myapp"]
# Final image: 15MB (only binary + Alpine)
Result: 80x smaller image!
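Because the build above uses CGO_ENABLED=0, the binary is fully static and the runtime stage can shrink further to `scratch`. A minimal sketch; copying ca-certificates from the builder is only needed if the binary makes TLS calls:

```dockerfile
# Stage 1: Build (same as above)
FROM golang:1.21 AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o myapp

# Stage 2: scratch is an empty base image; nothing but the binary
FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/myapp /myapp
ENTRYPOINT ["/myapp"]
```

Trade-off: no shell in the final image, so `docker exec` debugging is impossible; Alpine keeps a shell at a small size cost.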
# Python with compiled dependencies
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY . .
ENV PATH=/root/.local/bin:$PATH
CMD ["python", "app.py"]
# React/Node.js app
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y nginx
RUN apt-get install -y vim
# 4 layers!
RUN apt-get update && apt-get install -y \
curl \
nginx \
vim \
&& rm -rf /var/lib/apt/lists/*
# 1 layer, cleaned up package cache
# Good: Cleanup in same RUN command
RUN apt-get update && apt-get install -y curl \
&& rm -rf /var/lib/apt/lists/*
# Bad: Separate RUN for cleanup (doesn't reduce image size!)
RUN apt-get update && apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/* # Too late, layer already created
Exclude files from build context (like .gitignore):
# .dockerignore
node_modules
npm-debug.log
.git
.env
*.md
.DS_Store
Dockerfile
.dockerignore
.vscode
.idea
coverage/
dist/
*.test.js
# Bad: can break unexpectedly
FROM node:latest
# Bad: ambiguous (3.9? 3.10? 3.11?)
FROM python:3

# Good: specific, pinned versions
FROM node:18.19-alpine
FROM python:3.11.7-slim
FROM nginx:1.25-alpine
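For stricter reproducibility than a tag (which can be re-pushed), an image can be pinned by digest. A sketch; the digest below is a placeholder, not a real value — look up the actual one with `docker images --digests`:

```dockerfile
# Tag is documentation; the digest is what actually gets pulled
FROM node:18.19-alpine@sha256:<digest-from-docker-images--digests>
```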
| Driver | Use Case | Isolation |
|---|---|---|
| bridge | Default. Containers on same host. | Internal network, port mapping for external access |
| host | Container uses host's network stack (no isolation) | No isolation, best performance |
| none | No networking | Complete isolation |
| overlay | Multi-host networking (Swarm, Kubernetes) | Containers across hosts communicate |
| macvlan | Assign MAC address to container (appears as physical device) | Direct network access |
# Create custom bridge network
docker network create mynet
# Run containers on same network
docker run -d --name web --network mynet nginx
docker run -d --name api --network mynet node:18
# Containers can communicate by name
# From 'web' container: curl http://api:3000
Host Network Namespace:
┌──────────────────────────────────────┐
│ eth0 (192.168.1.100)                 │
│                                      │
│ Docker Bridge (docker0)              │
│   172.17.0.1                         │
│      │                               │
│      ├─ Container 1: 172.17.0.2:80   │
│      │     nginx                     │
│      │                               │
│      └─ Container 2: 172.17.0.3:3000 │
│            node app                  │
└──────────────────────────────────────┘

External → 192.168.1.100:8080 → 172.17.0.2:80 (port mapping)
# Port mapping
docker run -d -p 8080:80 nginx
# Host port 8080 → Container port 80
# Publish all exposed ports to random host ports
docker run -d -P nginx
# Container uses host network stack directly
docker run -d --network host nginx
# No port mapping needed
# Container port 80 = Host port 80
# Better performance, less isolation
# Docker provides built-in DNS
docker network create myapp
docker run -d --name db --network myapp postgres
docker run -d --name api --network myapp node:18
# From the 'api' container, the hostname 'db' resolves
# to the db container's IP via Docker's embedded DNS:
getent hosts db
# Pass database URL to app container
docker run -d --name api --network myapp \
-e DATABASE_URL=postgresql://db:5432/mydb \
node:18
| Type | Location | Use Case | Performance |
|---|---|---|---|
| Volume | Docker-managed (/var/lib/docker/volumes/) | Persistent data (databases, uploads) | Best |
| Bind Mount | Any host path | Development (code sync), config files | Good |
| tmpfs | Host memory | Temporary data (not persistent) | Fastest |
# Create named volume
docker volume create pgdata
# Use volume
docker run -d --name postgres \
-v pgdata:/var/lib/postgresql/data \
postgres:15
# Volume persists after container deletion
docker rm -f postgres
docker run -d --name postgres2 \
-v pgdata:/var/lib/postgresql/data \
postgres:15
# Data still there!
# List volumes
docker volume ls
# Inspect volume
docker volume inspect pgdata
# Remove volume
docker volume rm pgdata
# Mount host directory into container
docker run -d --name web \
-v /home/user/website:/usr/share/nginx/html:ro \
nginx
# :ro = read-only
# Changes on host immediately visible in container
# Development example
docker run -it --rm \
-v $(pwd):/app \
-w /app \
node:18 \
npm run dev
# Mount tmpfs (in-memory, not persistent)
docker run -d --name app \
--tmpfs /tmp:rw,size=100m,mode=1777 \
myapp
# Use case: Sensitive temporary files, fast I/O
# Backup a volume using a throwaway container
docker run --rm -v pgdata:/data -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz /data

# docker-compose.yml
version: '3.8'
services:
web:
image: nginx:alpine
ports:
- "8080:80"
volumes:
- ./html:/usr/share/nginx/html:ro
networks:
- frontend
depends_on:
- api
api:
build:
context: ./api
dockerfile: Dockerfile
environment:
- DATABASE_URL=postgresql://postgres:secret@db:5432/mydb
- REDIS_URL=redis://cache:6379
networks:
- frontend
- backend
depends_on:
- db
- cache
db:
image: postgres:15
environment:
- POSTGRES_PASSWORD=secret
- POSTGRES_DB=mydb
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- backend
cache:
image: redis:7-alpine
networks:
- backend
networks:
frontend:
backend:
volumes:
pgdata:
# Commands
docker compose up -d # Start all services
docker compose ps # List services
docker compose logs -f api # Follow logs
docker compose exec api sh # Shell into container
docker compose down # Stop and remove
docker compose down -v # Stop and remove volumes
| Component | Purpose | Used By |
|---|---|---|
| Docker | Complete platform (daemon, CLI, API, registry) | Developers, Docker Swarm |
| containerd | Container runtime (daemon) | Docker, Kubernetes, AWS ECS |
| runc | Low-level runtime (creates containers) | containerd, CRI-O |
| CRI-O | Kubernetes-specific runtime | Kubernetes (lightweight alternative) |
Stack Layers:

Docker:                  Kubernetes (bypasses Docker):
┌──────────────┐         ┌──────────────┐
│  docker CLI  │         │   kubectl    │
├──────────────┤         ├──────────────┤
│   dockerd    │         │   kubelet    │
├──────────────┤         ├──────────────┤
│  containerd  │         │  containerd  │ or CRI-O
├──────────────┤         ├──────────────┤
│     runc     │         │     runc     │
└──────────────┘         └──────────────┘
FROM node:18
COPY . /app
CMD ["node", "server.js"]
# Runs as root (UID 0)!
FROM node:18
RUN useradd -m -u 1000 appuser
COPY --chown=appuser:appuser . /app
USER appuser
CMD ["node", "server.js"]
# Runs as appuser
# Image sizes:
ubuntu:22.04 77MB
debian:bookworm 124MB
alpine:3.19 7.3MB # Smallest
scratch 0MB # For static binaries (Go)
distroless ~20MB # Google's minimal images
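As a sketch of the distroless option, a multi-stage build can copy a static binary onto Google's distroless base. Image and user names follow the distroless project's conventions; verify the tag before relying on it:

```dockerfile
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o myapp

# distroless: no shell, no package manager, minimal attack surface
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/myapp /
USER nonroot:nonroot
ENTRYPOINT ["/myapp"]
```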
# Docker Scout (built-in)
docker scout cves myimage:latest
# Trivy (open-source)
trivy image myimage:latest
# Snyk
snyk container test myimage:latest
# Bad: Secrets in Dockerfile
ENV API_KEY=abc123secretkey
# Bad: Secrets in image
COPY .env /app/.env
# Good: Pass at runtime
docker run -e API_KEY=$API_KEY myimage
# Good: Docker secrets (Swarm)
docker secret create api_key ./secret.txt
docker service create --secret api_key myimage
# Good: Kubernetes secrets
kubectl create secret generic api-key --from-literal=key=abc123
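Outside Swarm and Kubernetes, Docker Compose also supports file-based secrets, mounted into the container at /run/secrets/<name>. A minimal sketch; the `./api_key.txt` path and service name are illustrative:

```yaml
# docker-compose.yml
services:
  api:
    image: myimage
    secrets:
      - api_key          # mounted at /run/secrets/api_key
secrets:
  api_key:
    file: ./api_key.txt  # kept out of the image and out of git
```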
# Run container with read-only root filesystem
docker run --read-only --tmpfs /tmp myimage
# Good for security (prevents file modifications)
# Drop all capabilities, add only needed ones
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myimage
# Default capabilities are too permissive
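The same hardening flags have docker-compose equivalents. A sketch combining read-only root filesystem, tmpfs scratch space, and a minimal capability set (service and image names are placeholders):

```yaml
services:
  web:
    image: myimage
    read_only: true          # --read-only
    tmpfs:
      - /tmp                 # writable scratch space
    cap_drop:
      - ALL                  # --cap-drop=ALL
    cap_add:
      - NET_BIND_SERVICE     # --cap-add=NET_BIND_SERVICE
```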
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1
# Or in docker-compose.yml:
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 3s
retries: 3
start_period: 40s
docker run -d \
--memory=1g \
--memory-reservation=512m \
--cpus=2 \
--pids-limit=100 \
myimage
# Configure log driver
docker run -d \
--log-driver=json-file \
--log-opt max-size=10m \
--log-opt max-file=3 \
myimage
# Or use centralized logging (syslog, fluentd, splunk)
docker run -d --restart=unless-stopped myimage
# Policies:
# no - Don't restart
# on-failure - Restart on non-zero exit
# always - Always restart
# unless-stopped - Always restart unless manually stopped
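In docker-compose, the restart policy and log rotation settings live on the service definition. A sketch mirroring the flags above (service and image names are placeholders):

```yaml
services:
  app:
    image: myimage
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```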