Docker — A Complete Command Reference
Core concepts, essential commands, choosing the right base image, and a full 3-service Python walkthrough from scratch.
Docker lets you package any application and its dependencies into a lightweight, portable container that runs identically across every environment — your laptop, a CI server, or a cloud VM. This guide covers everything from the vocabulary to a production-ready multi-service stack.
- **Image:** A read-only template built from a Dockerfile. The blueprint for a container.
- **Container:** A running instance of an image. Isolated, ephemeral, and fast to start.
- **Dockerfile:** A text file of instructions used to build a custom image layer by layer.
- **Registry:** A storage hub for images. Docker Hub is the default public registry.
- **Volume:** Persistent storage that survives container restarts and removals.
- **Network:** Virtual networks that control how containers communicate with each other.
The base image sets your container's OS, package manager, shell, and final size. Below are solid picks and example Dockerfiles for common stacks.
**Python**

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

**Node.js**

```dockerfile
# Stage 1 — install deps
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2 — lean runtime
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
CMD ["node", "server.js"]
```

**Java**

```dockerfile
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

FROM eclipse-temurin:21-jre-alpine
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
```

**Go**

```dockerfile
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o server .

FROM scratch
COPY --from=builder /app/server /server
ENTRYPOINT ["/server"]
```

**Frontend (static build served by Nginx)**

```dockerfile
FROM node:20-alpine AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM nginx:1.27-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

**PostgreSQL (official image, no Dockerfile needed)**

```bash
docker run -d \
  --name pg \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=mydb \
  -v pg_data:/var/lib/postgresql/data \
  postgres:16-alpine
```

**Redis**

```bash
docker run -d --name cache redis:7-alpine
```
Pull, list, inspect, and clean up images.
```bash
docker pull nginx:latest
```

Downloads from Docker Hub. Specify a tag to pin a version.
```bash
docker images        # or: docker image ls
```

```bash
docker build -t myapp:1.0 .
```

`-t` tags the image; `.` is the build context (the directory whose contents are sent to the Docker daemon, usually the one containing the Dockerfile).

```bash
docker image prune -a   # -a = remove all unused images, not just dangling ones
```
```bash
docker run -d -p 8080:80 --name web nginx
```

`-d` = detached · `-p host:container` = port mapping · `--name` = friendly name.
```bash
docker ps -a
docker stop web
docker rm -f web     # force-stop then delete
```

```bash
docker exec -it web bash   # or sh on alpine images
```

```bash
docker logs -f web
```

```bash
docker volume create mydata
docker run -v mydata:/app/data myapp
```

```bash
docker network create mynet
docker run --network mynet --name db postgres
docker run --network mynet --name api myapp
```
Containers on the same user-defined network resolve each other by container name (the default bridge network does not provide this DNS).
| Command | What it does |
|---|---|
| `docker compose up -d` | Start all services in the background |
| `docker compose down` | Stop and remove containers & networks |
| `docker compose down -v` | Also delete volumes |
| `docker compose logs -f` | Stream logs from all services |
| `docker compose ps` | List service containers and status |
| `docker compose build` | Rebuild images defined in the file |
| `docker compose exec api bash` | Open a shell in a running service |
| `docker compose pull` | Pull the latest images |
```bash
docker system prune -a --volumes
```

Removes stopped containers, unused images, build cache, and volumes. Use with care.

```bash
docker system df     # show disk usage by images, containers, and volumes
```
We'll build a real web application made of three containers that communicate over a private Docker network:
- 🟢 Nginx — public-facing reverse proxy on port 80
- 🔵 Flask API — Python web service on internal port 5000
- 🔴 Redis — in-memory counter store on internal port 6379
A user visits localhost:80 → Nginx proxies to Flask → Flask increments a counter in Redis and returns JSON. This is the canonical microservice pattern used in production.
```text
Browser ──► Nginx (nginx:1.27-alpine)        reverse proxy,  host:80 → :80
                │
                ▼
            Flask API (python:3.12-slim)     Python service, internal :5000
                │
                ▼
            Redis (redis:7-alpine)           counter store,  internal :6379
```
Start with a clean directory. Everything Docker needs will live here — the Python code, configuration files, and the Compose file that wires it all together.
```text
myapp/
├── api/
│   ├── app.py            # Flask application
│   ├── requirements.txt  # Python dependencies
│   └── Dockerfile        # How to build the API image
├── nginx/
│   └── default.conf      # Nginx proxy config
└── docker-compose.yml    # Orchestrates all 3 services
```

```bash
mkdir -p myapp/api myapp/nginx
cd myapp
```
The entire Python backend. It uses Flask to handle HTTP and redis-py to talk to the Redis container. The REDIS_HOST env variable defaults to redis — the exact service name in Compose — enabling automatic DNS resolution inside the Docker network.
```python
import os

import redis
from flask import Flask, jsonify

app = Flask(__name__)

# Connect to Redis using the hostname from docker-compose.yml
# Docker's internal DNS resolves "redis" → the Redis container IP
r = redis.Redis(
    host=os.getenv("REDIS_HOST", "redis"),
    port=6379,
    decode_responses=True
)

@app.route("/")
def index():
    # Atomically increment the visit counter in Redis
    count = r.incr("visits")
    return jsonify({
        "message": "Hello from Flask!",
        "visits": count
    })

@app.route("/health")
def health():
    # Health check endpoint — Nginx uses this to confirm the API is up
    return jsonify({"status": "ok"}), 200

if __name__ == "__main__":
    # Bind to 0.0.0.0 so Nginx can reach Flask through Docker's network
    # (127.0.0.1 would be unreachable from other containers)
    app.run(host="0.0.0.0", port=5000)
```
Tells pip exactly what to install. Pinning versions guarantees identical builds everywhere.
- flask — lightweight web framework
- redis — Python client for Redis
- gunicorn — production WSGI server (replaces Flask's dev server in the container)
```text
flask==3.0.3
redis==5.0.4
gunicorn==22.0.0
```
This Dockerfile turns our Python code into a container image. Each instruction creates a cached layer. We deliberately copy requirements.txt and run pip install before copying the rest of the source — this way a code-only change skips the slow dependency install entirely.
```dockerfile
# Base image: Debian slim with Python 3.12 pre-installed (~45 MB)
FROM python:3.12-slim

# Set /app as the working directory — all paths below are relative to it
WORKDIR /app

# COPY requirements first — if requirements.txt hasn't changed,
# Docker reuses the cached pip layer and skips re-installing deps
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Now copy the rest of the application source code
COPY . .

# Document the port (informational only — Compose handles actual exposure)
EXPOSE 5000

# Start with Gunicorn (4 workers) — never use Flask's dev server in prod
# "app:app" = from file app.py, load the object named app
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:app"]
```
Nginx acts as the entry point. All HTTP traffic on port 80 arrives here first. It forwards every request to Flask on port 5000. Because both containers are on the same Docker network, Nginx resolves Flask using the service name api — no IP address ever needed.
```nginx
# "upstream" names our Flask backend — Docker DNS resolves "api"
upstream flask_api {
    server api:5000;
}

server {
    # Accept HTTP on port 80
    listen 80;

    location / {
        # Forward all requests to the Flask upstream
        proxy_pass http://flask_api;

        # Pass client details to Flask
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # How long to wait for Flask to respond
        proxy_read_timeout 60s;
        proxy_connect_timeout 5s;
    }
}
```
This single file declares all three services, images, environment variables, port mappings, dependencies, and the shared network.
- depends_on — Compose starts Redis → Flask → Nginx in that order (start order only; it does not wait for a service to be ready)
- networks — all services join appnet, enabling hostname-based DNS
- volumes — Redis data persists in a named volume across restarts
- Only Nginx exposes a port — Flask and Redis are internal-only
```yaml
# (no top-level "version:" key; it is obsolete in Compose v2)
services:
  # ── SERVICE 1: Redis ──────────────────────────────────
  redis:
    image: redis:7-alpine        # official image, no custom Dockerfile needed
    container_name: redis
    restart: unless-stopped      # auto-restart on crash or host reboot
    networks:
      - appnet
    volumes:
      - redis_data:/data         # persist the RDB snapshot between restarts

  # ── SERVICE 2: Flask API ──────────────────────────────
  api:
    build: ./api                 # build the image from api/Dockerfile
    container_name: flask_api
    restart: unless-stopped
    environment:
      - REDIS_HOST=redis         # app.py reads this via os.getenv()
    depends_on:
      - redis                    # Redis must start before Flask
    networks:
      - appnet
    # No "ports:" — Flask is only reachable from within the Docker network

  # ── SERVICE 3: Nginx ──────────────────────────────────
  nginx:
    image: nginx:1.27-alpine     # no custom Dockerfile — just mount config
    container_name: nginx_proxy
    restart: unless-stopped
    ports:
      - "80:80"                  # only Nginx is exposed to the outside world
    volumes:
      # Mount our config, overriding Nginx's default. :ro = read-only
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - api                      # Flask must start before Nginx
    networks:
      - appnet

# Shared bridge network — enables container-to-container DNS
networks:
  appnet:
    driver: bridge

# Named volume — data persists even after "docker compose down"
volumes:
  redis_data:
```
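One caveat on `depends_on`: Compose only orders container startup; it does not wait for Redis to actually accept connections, so the API may briefly see connection errors at boot. A resilient client retries with backoff. A minimal sketch in plain Python, where the `connect` callable is a hypothetical stand-in for something like `redis.Redis(...).ping()`:

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=0.1):
    """Call connect() until it succeeds, backing off exponentially.

    `connect` stands in for a real call such as redis.Redis(...).ping().
    """
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # 0.1s, 0.2s, 0.4s, ...: gives the dependency time to boot
            time.sleep(base_delay * (2 ** attempt))

# Demo: a fake backend that only comes up on the third attempt
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("redis not ready yet")
    return "connected"

print(connect_with_retry(flaky_connect))  # → connected
```

Compose can also express readiness directly (`depends_on` with `condition: service_healthy` plus a `healthcheck:` on the Redis service), but an in-app retry works everywhere, including plain `docker run`.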
From the myapp/ root, one command builds the Flask image and starts all three containers in the correct order.
```bash
docker compose up --build -d
# --build = rebuild the api image even if a cached version exists
# -d     = run in detached (background) mode
```

```bash
docker compose ps
# Expected output:
# NAME          STATUS    PORTS
# flask_api     Up        5000/tcp
# nginx_proxy   Up        0.0.0.0:80->80/tcp
# redis         Up        6379/tcp
```

```bash
curl http://localhost
# {"message": "Hello from Flask!", "visits": 1}
curl http://localhost
# {"message": "Hello from Flask!", "visits": 2}
```
Common operations during development:
```bash
docker compose logs -f
docker compose logs -f api   # only Flask logs
```

```bash
docker compose exec api bash
# inside: env | grep REDIS     → REDIS_HOST=redis
# inside: getent hosts redis   → confirms DNS resolution (slim images ship without ping)
```

```bash
docker compose exec redis redis-cli
# redis> GET visits   → "2"
```

```bash
docker compose down      # stop + remove containers and network
docker compose down -v   # also wipe the redis_data volume
```
Here's the full journey of a single HTTP request:
- 🌐 Browser sends GET http://localhost:80
- 🟢 Nginx receives it on port 80, proxies to api:5000 via Docker DNS
- 🔵 Flask handles the request, calls r.incr("visits") on Redis
- 🔴 Redis atomically increments the key, returns the new integer
- 🔵 Flask wraps it in a JSON response and sends it back to Nginx
- 🟢 Nginx forwards the response to the browser
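Step 4 is the interesting one: `INCR` is atomic, so concurrent requests can never read-modify-write the same value and lose an update. That guarantee can be sketched in plain Python, with a lock standing in for Redis's single-threaded command loop (an illustration of the semantics, not how Redis is implemented):

```python
import threading

class MiniCounterStore:
    """Toy in-memory stand-in for Redis INCR semantics."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()  # Redis executes commands one at a time

    def incr(self, key):
        # The read-modify-write happens under the lock, so no update is lost
        with self._lock:
            self._data[key] = self._data.get(key, 0) + 1
            return self._data[key]

store = MiniCounterStore()

# 100 concurrent "requests" across 4 threads; the final count is exactly 100
threads = [threading.Thread(target=lambda: [store.incr("visits") for _ in range(25)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

print(store.incr("visits"))  # → 101 (100 concurrent incrs plus this one)
```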
A Dockerfile is a recipe — each instruction creates a new read-only layer stacked on top of the previous one. Getting the order and structure right saves build time, reduces image size, and avoids security pitfalls.
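The caching rule that follows from this layering can be modeled simply: each layer's cache key is derived from the previous layer's key plus the instruction (and, for `COPY`, the content being copied), so the first changed instruction invalidates every layer after it. A toy model of that chaining (ours, not Docker's actual algorithm):

```python
import hashlib

def layer_keys(instructions):
    """Chained cache key per instruction, mimicking layer-cache invalidation.

    Toy model: key_i = sha256(key_{i-1} + instruction_i).
    """
    keys, parent = [], ""
    for ins in instructions:
        parent = hashlib.sha256((parent + ins).encode()).hexdigest()
        keys.append(parent)
    return keys

v1 = layer_keys(["FROM python:3.12-slim",
                 "COPY requirements.txt .",
                 "RUN pip install -r requirements.txt",
                 "COPY . ."])

# Change only the last step (a code-only change): the first three keys are
# identical, so those layers come from cache; only the last one rebuilds.
v2 = layer_keys(["FROM python:3.12-slim",
                 "COPY requirements.txt .",
                 "RUN pip install -r requirements.txt",
                 "COPY . .  # source changed"])

print(v1[:3] == v2[:3], v1[3] == v2[3])  # → True False
```

This is exactly why the dependency-install step goes before `COPY . .` in the Dockerfiles above.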
Every Dockerfile instruction explained with best-practice context:
| Instruction | Purpose | Best practice |
|---|---|---|
| `FROM` | Base image to build on | Use slim or alpine variants (`python:3.12-slim`) to minimize size |
| `WORKDIR` | Set the working directory inside the image | Always set before `COPY` / `RUN`; avoids path confusion |
| `COPY` | Copy files from host into the image | Copy `requirements.txt` first, then source — exploits layer caching |
| `ADD` | Like `COPY`, but also extracts archives and fetches URLs | Prefer `COPY` unless you specifically need tar extraction |
| `RUN` | Execute a command during build | Chain with `&&` and clean up in the same layer to keep size small |
| `ENV` | Set environment variables | Use for config; never put secrets here (they appear in `docker inspect`) |
| `ARG` | Build-time variable (not persisted in the final image) | Use for version pins; avoid secrets — values can surface in `docker history` |
| `EXPOSE` | Document which port the app listens on | Documentation only — you still need `-p` at runtime |
| `CMD` | Default command when the container starts | Use JSON array form: `["python", "app.py"]` |
| `ENTRYPOINT` | Fixed executable; `CMD` becomes its arguments | Combine with `CMD` for flexible CLI containers |
| `USER` | Switch to a non-root user | Always drop privileges before `CMD` — never run as root in prod |
| `HEALTHCHECK` | How Docker monitors container health | Use with web services so orchestrators can restart unhealthy containers |
| `VOLUME` | Declare a mount point for external volumes | Use for data that must survive container recreation (DBs, uploads) |
| `LABEL` | Metadata key-value pairs | Add maintainer, version, and description for traceability |
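The `ENTRYPOINT`/`CMD` pairing deserves a quick illustration. In this sketch (a hypothetical image), `CMD` supplies default arguments that the `docker run` command line can override, while the executable stays fixed:

```dockerfile
FROM alpine:3.20
ENTRYPOINT ["ping"]           # fixed executable
CMD ["-c", "3", "localhost"]  # default arguments, replaceable at run time

# docker run img                → runs: ping -c 3 localhost
# docker run img -c 1 8.8.8.8   → runs: ping -c 1 8.8.8.8
```

Anything after the image name in `docker run` replaces `CMD` wholesale but leaves `ENTRYPOINT` in place, which is what makes this pattern work for CLI-style containers.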
A production-quality Dockerfile with every decision explained:
```dockerfile
# ── Layer group: base ─────────────────────────────────────────
# slim variant = ~45MB vs ~900MB for python:3.12 (full Debian)
FROM python:3.12-slim AS base

# Keep Python from buffering stdout/stderr (better logs in Docker)
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1

# All subsequent paths are relative to this directory
WORKDIR /app

# ── Layer group: deps ─────────────────────────────────────────
# Copy ONLY the requirements file first.
# Docker caches this layer; it only rebuilds when requirements change.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# --no-cache-dir saves ~30MB by not storing the pip download cache

# ── Layer group: app ──────────────────────────────────────────
# Now copy the actual source — changes here don't invalidate the deps layer
COPY . .

# Create a non-root user and switch to it (security best practice)
RUN addgroup --system appgroup && \
    adduser --system --ingroup appgroup appuser
USER appuser

# Declare the port (documentation; doesn't open it)
EXPOSE 5000

# Health check: Docker marks the container unhealthy if /health fails.
# The slim image ships without curl, so probe with Python's stdlib instead.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:5000/health')" || exit 1

# Use array form so signals reach the process directly (clean shutdown)
CMD ["python", "app.py"]
```
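Expanded from the health-check one-liner, the probe logic looks like this; the `/health` URL is the endpoint the Flask app in this guide defines, and the `healthy` helper name is ours:

```python
import urllib.request
from urllib.error import URLError

def healthy(url, timeout=5):
    """Return True if the endpoint answers with HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        # Connection refused, DNS failure, timeout → unhealthy
        return False

# As a container health check you would exit with the code Docker expects:
#   sys.exit(0 if healthy("http://localhost:5000/health") else 1)
```

Docker only looks at the exit status: 0 marks the container healthy, anything else counts toward the `--retries` threshold.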
Always create a .dockerignore file alongside your Dockerfile — it works like .gitignore and prevents junk from bloating your build context:
```text
# Python
__pycache__/
*.pyc
.venv/
*.egg-info/

# Version control
.git/
.gitignore

# Dev artifacts
.env
*.log
tests/
docs/
README.md
```
Docker networking controls how containers talk to each other and to the outside world. Understanding drivers, DNS, and port mapping prevents most connectivity bugs.
The five built-in network drivers:
- **bridge** — Private virtual LAN on a single host. Containers get IPs in 172.17.0.0/16 by default. Isolated from the host network. Best for: most use cases, Compose stacks.
- **host** — Container shares the host's network namespace. No port mapping needed — but also no isolation. Best for: performance-critical apps, monitoring agents.
- **none** — No network interface at all. Completely isolated. Best for: batch jobs, security-sensitive containers with no network needs.
- **overlay** — Spans multiple Docker hosts — required for Docker Swarm or cross-host container communication. Uses VXLAN tunnels under the hood.
- **macvlan** — Container gets its own MAC address and appears as a physical device on the LAN. Best for: legacy apps that expect a real NIC, IoT, network monitoring.
Automatic DNS inside user-defined networks:
```bash
# Create a custom network (NOT the default bridge — it has no DNS)
docker network create mynet

# Start two containers on that network
docker run -d --network mynet --name db postgres:16
docker run -d --network mynet --name api myapp

# The "api" container can now reach "db" by hostname, no IPs needed:
#   postgres://db:5432/mydb   ← just use the container name as the host

# Inspect to confirm
docker network inspect mynet
```
Port mapping — exposing containers to the outside:
```bash
# host_port:container_port — most common
docker run -p 8080:80 nginx

# Bind to a specific host IP (useful on multi-NIC servers)
docker run -p 127.0.0.1:8080:80 nginx

# Let Docker pick a random free host port
docker run -p 80 nginx
docker port <container>   # find out which port was assigned

# Map multiple ports at once
docker run -p 80:80 -p 443:443 nginx

# Publish ALL declared ports to random host ports (not recommended for prod)
docker run -P nginx
```
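To make the `-p` grammar concrete, here is a small parser for the three publish forms shown above (`containerPort`, `hostPort:containerPort`, `ip:hostPort:containerPort`). The helper is purely illustrative, not part of any Docker API, and it ignores protocol suffixes like `80/udp` and port ranges:

```python
def parse_publish(spec):
    """Parse a docker -p publish spec into (host_ip, host_port, container_port).

    "80"                → (None, None, 80)          random host port
    "8080:80"           → (None, 8080, 80)
    "127.0.0.1:8080:80" → ("127.0.0.1", 8080, 80)
    """
    parts = spec.split(":")
    if len(parts) == 1:
        return (None, None, int(parts[0]))
    if len(parts) == 2:
        return (None, int(parts[0]), int(parts[1]))
    if len(parts) == 3:
        return (parts[0], int(parts[1]), int(parts[2]))
    raise ValueError(f"invalid publish spec: {spec!r}")

print(parse_publish("127.0.0.1:8080:80"))  # → ('127.0.0.1', 8080, 80)
```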
Network management commands:
| Command | What it does |
|---|---|
| `docker network ls` | List all networks |
| `docker network inspect <name>` | Show IPs, connected containers, subnet |
| `docker network create --driver bridge mynet` | Create a custom bridge network |
| `docker network connect mynet <container>` | Attach a running container to a network |
| `docker network disconnect mynet <container>` | Detach a container from a network |
| `docker network rm mynet` | Delete a network (must have no connected containers) |
| `docker network prune` | Remove all unused networks |
Three production-style setups that cover the most common architectures: a containerized database, a full-stack web app, and a CI/CD pipeline pattern.
① PostgreSQL with persistent storage
The classic problem: how do you run a database in Docker without losing data when the container is recreated?
```bash
docker run -d \
  --name postgres \
  -e POSTGRES_USER=admin \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=mydb \
  -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  --restart unless-stopped \
  postgres:16-alpine

# Connect from the host with psql
psql -h localhost -U admin -d mydb

# Or directly inside the container
docker exec -it postgres psql -U admin -d mydb
```
② Full-stack app: Next.js + Node API + MongoDB
A realistic three-tier web app: React frontend, Express backend, and MongoDB — all defined in one Compose file.
```yaml
services:
  frontend:
    build: ./frontend            # builds from ./frontend/Dockerfile
    ports:
      - "3000:3000"
    environment:
      # Note: NEXT_PUBLIC_* vars are baked into the client bundle and read
      # by the browser, which cannot resolve Docker hostnames — point
      # client-side calls at the host-exposed URL instead
      NEXT_PUBLIC_API_URL: http://localhost:4000
    depends_on:
      - api
    networks:
      - appnet

  api:
    build: ./backend
    ports:
      - "4000:4000"              # exposed for dev; remove in prod
    environment:
      MONGO_URI: mongodb://mongo:27017/mydb
      NODE_ENV: production
    depends_on:
      - mongo
    networks:
      - appnet

  mongo:
    image: mongo:7
    restart: unless-stopped
    volumes:
      - mongodata:/data/db       # persist database files
    networks:
      - appnet
    # No ports: — Mongo is internal only, not reachable from outside

networks:
  appnet:
    driver: bridge

volumes:
  mongodata:
```

```bash
docker compose up --build -d

# Live logs while you test
docker compose logs -f api

# Check Mongo from inside the api container
docker compose exec api node -e "require('mongoose').connect(process.env.MONGO_URI).then(()=>console.log('ok'))"
```
③ CI/CD pipeline with Docker
The standard pattern for building, testing, and pushing an image with GitHub Actions — zero dependencies on the runner beyond Docker.
```yaml
name: Build & Push Docker Image

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      # Set up QEMU for multi-platform builds (arm64 + amd64)
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Buildx
        uses: docker/setup-buildx-action@v3

      # Authenticate with Docker Hub (secrets stored in repo settings)
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # Build & push a multi-platform image, tagged with the commit SHA
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            yourusername/myapp:${{ github.sha }}
            yourusername/myapp:latest
          cache-from: type=gha        # GitHub Actions cache for faster builds
          cache-to: type=gha,mode=max
```

```bash
# On any server with Docker installed:
docker pull yourusername/myapp:latest
docker run -d -p 80:5000 --name myapp --restart unless-stopped yourusername/myapp:latest

# Or pin to a specific commit SHA for reproducible deploys:
docker run -d -p 80:5000 yourusername/myapp:a3f8c21
```