Docker: A Complete Command Reference

DevOps · Containers

Core concepts, essential commands, choosing the right base image, and a full 3-service Python walkthrough from scratch.

🐳

Docker lets you package any application and its dependencies into a lightweight, portable container that runs identically across every environment — your laptop, a CI server, or a cloud VM. This guide covers everything from the vocabulary to a production-ready multi-service stack.

Core Concepts

📦 Image

A read-only template built from a Dockerfile. The blueprint for a container.

🚢 Container

A running instance of an image. Isolated, ephemeral, and fast to start.

📋 Dockerfile

A text file of instructions used to build a custom image layer by layer.

🏭 Registry

A storage hub for images. Docker Hub is the default public registry.

🔗 Volume

Persistent storage that survives container restarts and removals.

🌐 Network

Virtual networks that control how containers communicate with each other.

Choosing a Base Image

The base image sets your container's OS, package manager, shell, and final size. The sections below cover the best picks and example Dockerfiles for each major language.


Python Images

python:3.12-slim · python:3.12-alpine · python:3.12

Three flavours: slim (Debian-based, ~45 MB) is the best default — small yet compatible with most compiled packages. alpine (~8 MB) is smaller but breaks packages needing glibc. The full python:3.12 (~900 MB) includes every build tool — dev only.

✓ Recommended: slim · pip / venv · Debian base
slim ~45 MB · alpine ~8 MB · full ~900 MB
Recommended Dockerfile (python)
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
💡
Always pin the version tag (e.g. 3.12-slim) — never use :latest in production.

Node.js Images

node:20-alpine · node:20-slim · node:20

node:20-alpine (~60 MB) is the community favourite for production. Works well unless a native addon needs glibc. node:20-slim (~80 MB) is the safer default. Use multi-stage builds to separate the node_modules install from the final image.

✓ Recommended: alpine · npm / yarn / pnpm · musl libc
alpine ~60 MB · slim ~80 MB · full ~900 MB
Multi-stage production build (node)
# Stage 1 — install deps
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev    # modern replacement for the deprecated --only=production

# Stage 2 — lean runtime
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
CMD ["node", "server.js"]

Java / JVM Images

eclipse-temurin:21-jre-alpine · amazoncorretto:21

Use eclipse-temurin:21-jre-alpine for runtime-only containers (~90 MB). Use the JDK variant in the builder stage. amazoncorretto is AWS's optimised OpenJDK. Avoid the old openjdk:* images — they are deprecated.

✓ Recommended: temurin JRE · Avoid openjdk (deprecated) · Maven / Gradle in builder
JRE alpine ~90 MB · JDK slim ~200 MB
Maven multi-stage (java)
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

FROM eclipse-temurin:21-jre-alpine
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]

Go Images

golang:1.22-alpine → scratch / distroless

Go compiles to a single static binary, so the final image can be scratch (0 MB OS) or gcr.io/distroless/static (~2 MB). Use golang:1.22-alpine as the builder and copy only the binary across. This is the leanest pattern in Docker.

✓ Final: scratch or distroless · CGO_ENABLED=0 required
scratch final ~0 MB · distroless ~2 MB
Scratch final stage (go)
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o server .

FROM scratch
COPY --from=builder /app/server /server
ENTRYPOINT ["/server"]

Nginx / Static Sites

nginx:1.27-alpine · nginx:stable-alpine

nginx:alpine (~25 MB) is the go-to for serving static files, React/Vue builds, and as a reverse proxy. Pair with a multi-stage build: compile your frontend in Node, copy dist/ into the nginx image.

✓ Recommended: nginx:alpine · Reverse proxy · Static files
alpine ~25 MB · full ~50 MB
React build → nginx (nginx)
FROM node:20-alpine AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM nginx:1.27-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80

Database Images

postgres · mysql · redis · mongo

Database images are typically pulled directly — no custom Dockerfile needed. Pass credentials via environment variables and mount a named volume so data persists when the container is recreated.

postgres:16-alpine · mysql:8.4 · redis:7-alpine · mongo:7
Never store data in the container filesystem.
Postgres with persistent volume (db)
docker run -d \
  --name pg \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=mydb \
  -v pg_data:/var/lib/postgresql/data \
  postgres:16-alpine
Redis (db)
docker run -d --name cache redis:7-alpine
Image Management

Pull, list, inspect, and clean up images.

Pull an image (images)
docker pull nginx:latest

Downloads from Docker Hub. Specify a tag to pin a version.

List images (images)
docker images          # or: docker image ls
Build from a Dockerfile (images)
docker build -t myapp:1.0 .

-t tags the image. . is the build context (location of Dockerfile).

Remove unused images (cleanup)
docker image prune -a  # -a = including tagged
Container Lifecycle

Run a container (containers)
docker run -d -p 8080:80 --name web nginx

-d = detached · -p host:container = port mapping · --name = friendly name.

List / stop / remove (containers)
docker ps -a
docker stop web
docker rm -f web    # force-stop then delete

Shell into a running container (debug)
docker exec -it web bash   # or sh for alpine

Follow logs (debug)
docker logs -f web
Volumes & Networks

Named volume (storage)
docker volume create mydata
docker run -v mydata:/app/data myapp

Custom network (network)
docker network create mynet
docker run --network mynet --name db  postgres
docker run --network mynet --name api myapp

Containers on the same network resolve each other by container name.

Docker Compose Cheatsheet

Command                        | What it does
docker compose up -d           | Start all services in the background
docker compose down            | Stop and remove containers & networks
docker compose down -v         | Also delete volumes
docker compose logs -f         | Stream logs from all services
docker compose ps              | List service containers and status
docker compose build           | Rebuild images defined in the file
docker compose exec api bash   | Open a shell in a running service
docker compose pull            | Pull the latest images
💡
Add restart: unless-stopped to Compose services so they recover automatically after a host reboot.
System Cleanup

Full prune (cleanup)
docker system prune -a --volumes

Removes stopped containers, unused images, build cache, and volumes. Use with care.

Disk usage (info)
docker system df
Full Example — 3-Service Python App

We'll build a real web application made of three containers that communicate over a private Docker network:

 🟢 Nginx — public-facing reverse proxy on port 80
 🔵 Flask API — Python web service on internal port 5000
 🔴 Redis — in-memory counter store on internal port 6379

A user visits localhost:80 → Nginx proxies to Flask → Flask increments a counter in Redis and returns JSON. This is the canonical microservice pattern used in production.

🌐 nginx
:1.27-alpine

Reverse Proxy
host:80 → :80
🐍 flask api
python:3.12-slim

Python Service
internal :5000
⚡ redis
redis:7-alpine

Counter Store
internal :6379
1
Create the project folder structure

Start with a clean directory. Everything Docker needs will live here — the Python code, configuration files, and the Compose file that wires it all together.

Folder layout (bash)
myapp/
├── api/
│   ├── app.py             # Flask application
│   ├── requirements.txt   # Python dependencies
│   └── Dockerfile         # How to build the API image
├── nginx/
│   └── default.conf       # Nginx proxy config
└── docker-compose.yml     # Orchestrates all 3 services
Create directories (bash)
mkdir -p myapp/api myapp/nginx
cd myapp
2
Write the Flask API — api/app.py

The entire Python backend. It uses Flask to handle HTTP and redis-py to talk to the Redis container. The REDIS_HOST env variable defaults to redis — the exact service name in Compose — enabling automatic DNS resolution inside the Docker network.

api/app.py (python)
import os
import redis
from flask import Flask, jsonify

app = Flask(__name__)

# Connect to Redis using the hostname from docker-compose.yml
# Docker's internal DNS resolves "redis" → the Redis container IP
r = redis.Redis(
    host=os.getenv("REDIS_HOST", "redis"),
    port=6379,
    decode_responses=True
)

@app.route("/")
def index():
    # Atomically increment the visit counter in Redis
    count = r.incr("visits")
    return jsonify({
        "message": "Hello from Flask!",
        "visits": count
    })

@app.route("/health")
def health():
    # Health check endpoint — Nginx uses this to confirm the API is up
    return jsonify({"status": "ok"}), 200

if __name__ == "__main__":
    # Bind to 0.0.0.0 so Nginx can reach Flask through Docker's network
    # (127.0.0.1 would be unreachable from other containers)
    app.run(host="0.0.0.0", port=5000)
🔍
Why host="0.0.0.0"? Flask defaults to 127.0.0.1 (loopback). Inside a container, Nginx contacts Flask via Docker's virtual network interface — not loopback. Binding to 0.0.0.0 makes Flask accept connections from any interface inside the container.
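The reason the walkthrough uses r.incr rather than a read-then-set pair is atomicity. A minimal local sketch (plain Python threads standing in for concurrent Gunicorn workers; names are invented for the demo, and no Redis is required) shows why a guarded increment never loses updates:

```python
import threading

counter = 0
lock = threading.Lock()

def hit(n):
    global counter
    for _ in range(n):
        with lock:            # drop the lock and some increments can be lost
            counter += 1

threads = [threading.Thread(target=hit, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 80000: every increment counted
```

Redis's INCR gives the same guarantee server-side, with no locking needed in the client.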
3
List Python dependencies — api/requirements.txt

Tells pip exactly what to install. Pinning versions guarantees identical builds everywhere.

  • flask — lightweight web framework
  • redis — Python client for Redis
  • gunicorn — production WSGI server (replaces Flask's dev server in the container)

api/requirements.txt (pip)
flask==3.0.3
redis==5.0.4
gunicorn==22.0.0
4
Write the API Dockerfile — api/Dockerfile

This Dockerfile turns our Python code into a container image. Each instruction creates a cached layer. We deliberately copy requirements.txt and run pip install before copying the rest of the source — this way a code-only change skips the slow dependency install entirely.

api/Dockerfile — fully annotated (dockerfile)
# Base image: Debian slim with Python 3.12 pre-installed (~45 MB)
FROM python:3.12-slim

# Set /app as the working directory — all paths below are relative to it
WORKDIR /app

# COPY requirements first — if requirements.txt hasn't changed,
# Docker reuses the cached pip layer and skips re-installing deps
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Now copy the rest of the application source code
COPY . .

# Document the port (informational only — Compose handles actual exposure)
EXPOSE 5000

# Start with Gunicorn (4 workers) — never use Flask's dev server in prod
# "app:app" = from file app.py, load the object named app
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:app"]
Layer caching trick: Copy requirements.txt and run pip install before COPY . . — that way, a change to your Python code only rebuilds the last two layers. The dependency install is served from cache and takes milliseconds instead of minutes.
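The caching trick above can be modelled in a few lines. This is an illustrative toy, not Docker's real cache-key algorithm: each layer's key folds in the previous key, the instruction, and any copied content, so a source-only change leaves the dependency layers' keys, and therefore the cache, intact:

```python
import hashlib

def layer_key(prev, instruction, content=""):
    # Toy cache key: depends on everything that came before plus this step
    return hashlib.sha256((prev + instruction + content).encode()).hexdigest()

def build_keys(requirements, source):
    k1 = layer_key("", "FROM python:3.12-slim")
    k2 = layer_key(k1, "COPY requirements.txt .", requirements)
    k3 = layer_key(k2, "RUN pip install -r requirements.txt")
    k4 = layer_key(k3, "COPY . .", source)
    return [k1, k2, k3, k4]

a = build_keys("flask==3.0.3", "print('v1')")
b = build_keys("flask==3.0.3", "print('v2')")   # code change only

# Layers 1-3 are cache hits; only the final COPY layer rebuilds.
print(a[:3] == b[:3], a[3] == b[3])  # True False
```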
5
Configure Nginx — nginx/default.conf

Nginx acts as the entry point. All HTTP traffic on port 80 arrives here first. It forwards every request to Flask on port 5000. Because both containers are on the same Docker network, Nginx resolves Flask using the service name api — no IP address ever needed.

nginx/default.conf — annotated (nginx)
# "upstream" names our Flask backend — Docker DNS resolves "api"
upstream flask_api {
    server api:5000;
}

server {
    # Accept HTTP on port 80
    listen 80;

    location / {
        # Forward all requests to the Flask upstream
        proxy_pass          http://flask_api;

        # Pass client details to Flask
        proxy_set_header    Host              $host;
        proxy_set_header    X-Real-IP         $remote_addr;
        proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;

        # How long to wait for Flask to respond
        proxy_read_timeout    60s;
        proxy_connect_timeout 5s;
    }
}
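What $proxy_add_x_forwarded_for evaluates to can be expressed as a tiny Python sketch (the function name is invented for illustration): the incoming X-Forwarded-For value with the client address appended, or just the client address when the header is absent:

```python
def proxy_add_x_forwarded_for(existing_header, client_ip):
    """Mimic nginx: append the client IP, or start the header if absent."""
    if existing_header:
        return f"{existing_header}, {client_ip}"
    return client_ip

first_hop = proxy_add_x_forwarded_for(None, "203.0.113.9")
second_hop = proxy_add_x_forwarded_for(first_hop, "10.0.0.2")
print(first_hop)    # 203.0.113.9
print(second_hop)   # 203.0.113.9, 10.0.0.2
```

Each proxy in a chain appends its downstream client, so the leftmost entry is the original browser.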
6
Write docker-compose.yml — the orchestration file

This single file declares all three services, images, environment variables, port mappings, dependencies, and the shared network.

  • depends_on — Compose starts Redis → Flask → Nginx in order (start order only; it does not wait for a service to be ready)
  • networks — all services join appnet, enabling hostname-based DNS
  • volumes — Redis data persists in a named volume across restarts
  • Only Nginx exposes a port — Flask and Redis are internal-only

docker-compose.yml — fully annotated (compose)
version: "3.9"                  # optional; Compose v2 ignores this key

services:

  # ── SERVICE 1: Redis ──────────────────────────────────
  redis:
    image: redis:7-alpine        # official image, no custom Dockerfile needed
    container_name: redis
    restart: unless-stopped       # auto-restart on crash or host reboot
    networks:
      - appnet
    volumes:
      - redis_data:/data           # persist the RDB snapshot between restarts

  # ── SERVICE 2: Flask API ──────────────────────────────
  api:
    build: ./api                   # build the image from api/Dockerfile
    container_name: flask_api
    restart: unless-stopped
    environment:
      - REDIS_HOST=redis           # app.py reads this via os.getenv()
    depends_on:
      - redis                       # Redis must start before Flask
    networks:
      - appnet
    # No "ports:" — Flask is only reachable from within the Docker network

  # ── SERVICE 3: Nginx ──────────────────────────────────
  nginx:
    image: nginx:1.27-alpine       # no custom Dockerfile — just mount config
    container_name: nginx_proxy
    restart: unless-stopped
    ports:
      - "80:80"                     # only Nginx is exposed to the outside world
    volumes:
      # Mount our config, overriding Nginx's default. :ro = read-only
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - api                         # Flask must start before Nginx
    networks:
      - appnet

# Shared bridge network — enables container-to-container DNS
networks:
  appnet:
    driver: bridge

# Named volume — data persists even after "docker compose down"
volumes:
  redis_data:
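One caveat worth coding around: depends_on controls start order only, so Flask can come up before Redis is actually accepting connections. A common fix is a small retry loop at application startup, sketched below with a stand-in connect function (in the real app, connect would be something like lambda: r.ping()):

```python
import time

def wait_for(connect, attempts=5, delay=0.01):
    """Retry `connect` with exponential backoff until it succeeds."""
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            time.sleep(delay * (2 ** attempt))
    raise ConnectionError("service never became ready")

# Simulate a dependency that only accepts connections on the third try
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("not ready yet")
    return "connected"

result = wait_for(flaky)
print(result, calls["n"])  # connected 3
```

Compose can also express this declaratively with a healthcheck plus depends_on condition: service_healthy.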
7
Build and start everything

From the myapp/ root, one command builds the Flask image and starts all three containers in the correct order.

Build and launch (compose)
docker compose up --build -d

# --build  = rebuild the api image even if a cached version exists
# -d       = run in detached (background) mode
Verify all services are running (compose)
docker compose ps

# Expected output:
# NAME          STATUS    PORTS
# flask_api     Up        5000/tcp
# nginx_proxy   Up        0.0.0.0:80->80/tcp
# redis         Up        6379/tcp
Test the endpoint (bash)
curl http://localhost
# {"message": "Hello from Flask!", "visits": 1}

curl http://localhost
# {"message": "Hello from Flask!", "visits": 2}
Each request increments the counter stored in Redis. Restart the entire stack with docker compose restart — the count persists because Redis data lives in the redis_data named volume, not in the container filesystem.
8
Debug, inspect, and tear down

Common operations during development:

Stream logs from all services (debug)
docker compose logs -f
docker compose logs -f api    # only Flask logs

Open a shell in the Flask container (debug)
docker compose exec api bash
# inside: env | grep REDIS     →  REDIS_HOST=redis
# inside: getent hosts redis   →  confirms DNS resolution works (slim images ship no ping)

Inspect Redis directly (debug)
docker compose exec redis redis-cli
# redis> GET visits  →  "2"

Stop and clean up (compose)
docker compose down            # stop + remove containers and network
docker compose down -v         # also wipe the redis_data volume
⚠️
down -v permanently deletes volume data. Use it only when you want to reset state (e.g. wipe the counter), or in CI pipelines where fresh state is required between runs.
9
End-to-end data flow recap

Here's the full journey of a single HTTP request:

  • 🌐 Browser sends GET http://localhost:80
  • 🟢 Nginx receives it on port 80, proxies to api:5000 via Docker DNS
  • 🔵 Flask handles the request, calls r.incr("visits") on Redis
  • 🔴 Redis atomically increments the key, returns the new integer
  • 🔵 Flask wraps it in a JSON response and sends it back to Nginx
  • 🟢 Nginx forwards the response to the browser
All container-to-container traffic travels over the internal appnet bridge network — nothing is reachable from outside except the single port 80 exposed by Nginx. This is exactly how production microservice stacks are secured.

📋 Dockerfile Writing Guide

A Dockerfile is a recipe — each instruction creates a new read-only layer stacked on top of the previous one. Getting the order and structure right saves build time, reduces image size, and avoids security pitfalls.

Every Dockerfile instruction explained with best-practice context:

Instruction | Purpose | Best practice
FROM        | Base image to build on | Use slim or alpine variants (python:3.12-slim) to minimize size
WORKDIR     | Set the working directory inside the image | Always set before COPY / RUN; avoids path confusion
COPY        | Copy files from host into the image | Copy requirements.txt first, then source — exploits layer caching
ADD         | Like COPY, but also extracts archives and fetches URLs | Prefer COPY unless you specifically need tar extraction
RUN         | Execute a command during build | Chain with && and clean up in the same layer to keep size small
ENV         | Set environment variables | Use for config; never put secrets here (they appear in docker inspect)
ARG         | Build-time variable (not persisted in final image) | Use for version pins; avoid secrets, since values linger in image history
EXPOSE      | Document which port the app listens on | Documentation only — you still need -p at runtime
CMD         | Default command when container starts | Use exec (JSON array) form: ["python", "app.py"]
ENTRYPOINT  | Fixed executable; CMD becomes its arguments | Combine with CMD for flexible CLI containers
USER        | Switch to a non-root user | Always drop privileges before CMD — never run as root in prod
HEALTHCHECK | How Docker monitors container health | Use with web services so orchestrators can restart unhealthy containers
VOLUME      | Declare a mount point for external volumes | Use for data that must survive container recreation (DBs, uploads)
LABEL       | Metadata key-value pairs | Add maintainer, version, and description for traceability

A production-quality Dockerfile with every decision explained:

Dockerfile — Python web app, annotated (dockerfile)
# ── Stage: base ───────────────────────────────────────────────
# slim variant = ~45MB vs ~900MB for python:3.12 (full Debian)
FROM python:3.12-slim AS base

# Keeps Python from buffering stdout/stderr (better logs in Docker)
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1

# All subsequent paths are relative to this directory
WORKDIR /app

# ── Stage: deps ───────────────────────────────────────────────
# Copy ONLY the requirements file first.
# Docker caches this layer; it only rebuilds when requirements change.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# --no-cache-dir saves ~30MB by not storing the pip download cache

# ── Stage: app ────────────────────────────────────────────────
# Now copy the actual source — changes here don't invalidate the deps layer
COPY . .

# Create a non-root user and switch to it (security best practice)
RUN addgroup --system appgroup && \
    adduser  --system --ingroup appgroup appuser
USER appuser

# Declare the port (documentation; doesn't open it)
EXPOSE 5000

# Health check: Docker marks the container unhealthy if /health stops responding
# (python:3.12-slim ships no curl, so probe with Python's standard library)
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:5000/health')" || exit 1

# Use exec (array) form so Python runs as PID 1 and receives stop signals directly (clean shutdown)
CMD ["python", "app.py"]
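Since slim images ship without curl, a HEALTHCHECK probe is often written with Python's standard library instead. The sketch below exercises such a probe against a throwaway local /health server (class and helper names are invented for the demo):

```python
import http.server
import threading
import urllib.request

class Health(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # 200 on the health endpoint, 404 everywhere else
        self.send_response(200 if self.path == "/health" else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

    def log_message(self, *args):   # keep the demo quiet
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), Health)  # port 0 = any free port
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

def healthy(url):
    """Return True iff the endpoint answers 200 — the HEALTHCHECK's job."""
    try:
        return urllib.request.urlopen(url, timeout=5).status == 200
    except OSError:
        return False

ok = healthy(f"http://127.0.0.1:{port}/health")
bad = healthy(f"http://127.0.0.1:{port}/nope")
print(ok, bad)  # True False
srv.shutdown()
```

The same one-liner probe works inside the container, exiting non-zero on failure so Docker flips the health status.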

Always create a .dockerignore file alongside your Dockerfile — it works like .gitignore and prevents junk from bloating your build context:

.dockerignore (config)
# Python
__pycache__/
*.pyc
.venv/
*.egg-info/

# Version control
.git/
.gitignore

# Dev artifacts
.env
*.log
tests/
docs/
README.md
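A rough sense of what a .dockerignore matcher does can be had with fnmatch (a simplification: the real format also supports ! negation and anchored patterns; the patterns below are illustrative):

```python
import fnmatch

PATTERNS = ["__pycache__/*", "*.pyc", ".git/*", ".env", "*.log", "tests/*"]

def ignored(path):
    """True if any pattern matches — the file stays out of the build context."""
    return any(fnmatch.fnmatch(path, pattern) for pattern in PATTERNS)

for f in ["app.py", "app.pyc", ".env", "tests/test_app.py"]:
    print(f, ignored(f))
# app.py False · app.pyc True · .env True · tests/test_app.py True
```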
🏗️
Multi-stage builds let you compile/test in a fat image and copy only the binary into a tiny final image. Add AS builder to your heavy stage, then COPY --from=builder /app/dist . in the lean final stage. A Go binary can shrink from 800MB to 15MB this way.
Layer cache order matters. Always put instructions that change rarely (RUN apt-get install, pip install) before instructions that change often (COPY . .). A cache miss invalidates every subsequent layer.
🌐 Docker Networking Deep-Dive

Docker networking controls how containers talk to each other and to the outside world. Understanding drivers, DNS, and port mapping prevents most connectivity bugs.

The five built-in network drivers:

🔵
bridge (default)

Private virtual LAN on a single host. Containers get IPs in 172.17.0.0/16. Isolated from host network. Best for: most use cases, Compose stacks.

🟢
host

Container shares the host's network namespace. No port mapping needed — but also no isolation. Best for: performance-critical apps, monitoring agents.

none

No network interface at all. Completely isolated. Best for: batch jobs, security-sensitive containers with no network needs.

🟡
overlay

Spans multiple Docker hosts — required for Docker Swarm or cross-host container communication. Uses VXLAN tunnels under the hood.

🔸
macvlan

Container gets its own MAC address and appears as a physical device on the LAN. Best for: legacy apps that expect a real NIC, IoT, network monitoring.

Automatic DNS inside user-defined networks:

how containers find each other by namenetwork
# Create a custom network (NOT the default bridge — it has no DNS)
docker network create mynet

# Start two containers on that network
docker run -d --network mynet --name db  postgres:16
docker run -d --network mynet --name api myapp

# The "api" container can now reach "db" by hostname, no IPs needed:
# postgres://db:5432/mydb   ← just use the container name as the host

# Inspect to confirm
docker network inspect mynet

Port mapping — exposing containers to the outside:

-p syntax variations (network)
# host_port:container_port  — most common
docker run -p 8080:80 nginx

# Bind to a specific host IP (useful on multi-NIC servers)
docker run -p 127.0.0.1:8080:80 nginx

# Let Docker pick a random free host port
docker run -p 80 nginx
docker port <container>   # find out which port was assigned

# Map multiple ports at once
docker run -p 80:80 -p 443:443 nginx

# Expose ALL declared ports randomly (not recommended for prod)
docker run -P nginx
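The random-port form (-p 80) works because Docker asks the kernel for any free ephemeral port, which is the same thing binding to port 0 does, as this small sketch shows (helper name invented):

```python
import socket

def pick_free_port():
    """Bind to port 0 and let the kernel choose, as `docker run -p 80` does."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = pick_free_port()
print(port)  # e.g. 49731, then look it up afterwards, like `docker port <container>`
```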

Network management commands:

Command                                      | What it does
docker network ls                            | List all networks
docker network inspect <name>                | Show IPs, connected containers, subnet
docker network create --driver bridge mynet  | Create a custom bridge network
docker network connect mynet <container>     | Attach a running container to a network
docker network disconnect mynet <container>  | Detach a container from a network
docker network rm mynet                      | Delete a network (must have no connected containers)
docker network prune                         | Remove all unused networks
🔒
Security principle: Only expose the minimum. Keep internal services (databases, caches, workers) on a private network with no published ports. Only the public-facing gateway (Nginx, Traefik) should have -p mappings. This is the Docker equivalent of a firewall DMZ.
🚀 Real-World Project Examples

Three production-style setups that cover the most common architectures: a containerized database, a full-stack web app, and a CI/CD pipeline pattern.

① PostgreSQL with persistent storage

The classic problem: how do you run a database in Docker without losing data when the container is recreated?

Postgres with a named volume (database)
docker run -d \
  --name    postgres \
  -e        POSTGRES_USER=admin \
  -e        POSTGRES_PASSWORD=secret \
  -e        POSTGRES_DB=mydb \
  -p        5432:5432 \
  -v        pgdata:/var/lib/postgresql/data \
  --restart unless-stopped \
  postgres:16-alpine

# Connect from host with psql
psql -h localhost -U admin -d mydb

# Or directly inside the container
docker exec -it postgres psql -U admin -d mydb
💾
The named volume pgdata survives docker rm postgres. To back up: docker exec postgres pg_dump -U admin mydb > backup.sql. To restore: pipe the SQL back through docker exec -i postgres psql -U admin mydb < backup.sql.

② Full-stack app: Next.js + Node API + MongoDB

A realistic three-tier web app: React frontend, Express backend, and MongoDB — all defined in one Compose file.

docker-compose.yml — Next.js stack (compose)
services:

  frontend:
    build: ./frontend          # builds from ./frontend/Dockerfile
    ports:
      - "3000:3000"
    environment:
      NEXT_PUBLIC_API_URL: http://api:4000   # resolvable only server-side; browser code needs a public URL
    depends_on:
      - api
    networks:
      - appnet

  api:
    build: ./backend
    ports:
      - "4000:4000"             # exposed for dev; remove in prod
    environment:
      MONGO_URI:  mongodb://mongo:27017/mydb
      NODE_ENV:   production
    depends_on:
      - mongo
    networks:
      - appnet

  mongo:
    image:          mongo:7
    restart:        unless-stopped
    volumes:
      - mongodata:/data/db     # persist database files
    networks:
      - appnet
    # No ports: — Mongo is internal only, not reachable from outside

networks:
  appnet:
    driver: bridge

volumes:
  mongodata:
Launch and verify (bash)
docker compose up --build -d

# Live logs while you test
docker compose logs -f api

# Check Mongo from inside the api container
docker compose exec api node -e "require('mongoose').connect(process.env.MONGO_URI).then(()=>console.log('ok'))"

③ CI/CD pipeline with Docker

The standard pattern for building, testing, and pushing an image with GitHub Actions — zero dependencies on the runner beyond Docker.

.github/workflows/docker.yml (ci/cd)
name: Build & Push Docker Image

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:

      - name: Checkout code
        uses: actions/checkout@v4

      # Set up QEMU for multi-platform builds (arm64 + amd64)
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Buildx
        uses: docker/setup-buildx-action@v3

      # Authenticate with Docker Hub (secrets stored in repo settings)
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # Build & push a multi-platform image, tagged with commit SHA
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context:    .
          platforms:  linux/amd64,linux/arm64
          push:       true
          tags:       yourusername/myapp:${{ github.sha }},yourusername/myapp:latest
          cache-from: type=gha    # GitHub Actions cache for faster builds
          cache-to:   type=gha,mode=max
Pull and run the published image (deploy)
# On any server with Docker installed:
docker pull yourusername/myapp:latest
docker run -d -p 80:5000 --name myapp --restart unless-stopped yourusername/myapp:latest

# Or pin to a specific commit SHA for reproducible deploys:
docker run -d -p 80:5000 yourusername/myapp:a3f8c21
🔄
Zero-downtime deploys: combine this with docker compose pull && docker compose up -d on your server via SSH in the workflow, or use a proper orchestrator (Kubernetes, Docker Swarm) for blue/green deployments.
🔑
Never put secrets in images. Use --env-file .env at runtime, Docker secrets (Swarm), or a secrets manager (Vault, AWS Secrets Manager). Anything baked into a Docker image is visible to anyone who can pull it.
