Docker for Beginners: Install, Run & Deploy Your First Container
A complete beginner's guide to Docker — from installation to running containers, building images, Docker Compose, and deploying real applications. No prior container experience needed.
What Is Docker?
Docker packages your application and all its dependencies into a container — a lightweight, portable unit that runs the same way everywhere. No more "it works on my machine" problems.
Think of it like this:
- A virtual machine emulates an entire OS (heavy, slow to start)
- A container shares the host OS kernel but isolates the application (lightweight, starts in seconds)
Traditional: App → OS → Hardware
Virtual Machine: App → Guest OS → Hypervisor → Host OS → Hardware
Docker: App → Container Runtime → Host OS → Hardware
Why developers use Docker:
- Consistent environments across dev, staging, and production
- Instant setup — `docker run` instead of installing 20 dependencies
- Easy scaling — run 10 copies of your app with one command
- Clean system — remove a container and everything is gone
Step 1: Install Docker
Ubuntu / Debian
# Remove old versions
sudo apt remove docker docker-engine docker.io containerd runc 2>/dev/null
# Install prerequisites
sudo apt update
sudo apt install ca-certificates curl gnupg -y
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add the repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
# Install Docker
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
CentOS / RHEL / Fedora
sudo dnf install dnf-plugins-core -y
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
sudo systemctl enable --now docker
Windows / macOS
Download Docker Desktop from docker.com. It includes Docker Engine, CLI, and Docker Compose.
Post-Install: Run Docker Without sudo
sudo usermod -aG docker $USER
newgrp docker
Verify Installation
docker --version
docker run hello-world
You should see "Hello from Docker!" — your first container just ran.
Step 2: Core Concepts
Images vs Containers
| Concept | Analogy | Description |
|---------|---------|-------------|
| Image | Blueprint / Recipe | Read-only template with OS, dependencies, and app code |
| Container | Running instance | A live process created from an image |
| Registry | App Store | Where images are stored (Docker Hub, GitHub Container Registry) |
One image can spawn multiple containers:
# One image, three containers
docker run -d --name web1 nginx
docker run -d --name web2 nginx
docker run -d --name web3 nginx
Docker Hub
Docker Hub is the default public registry. Thousands of pre-built images are available:
docker pull nginx # Web server
docker pull postgres:16 # Database
docker pull redis:alpine # Cache (lightweight Alpine variant)
docker pull python:3.12 # Python runtime
docker pull node:22 # Node.js runtime
Step 3: Essential Docker Commands
Run a Container
# Run Nginx web server
docker run -d -p 8080:80 --name my-web nginx
- `-d` — run in background (detached)
- `-p 8080:80` — map host port 8080 to container port 80
- `--name my-web` — give it a friendly name
Open http://localhost:8080 — you'll see the Nginx welcome page.
Manage Containers
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop a container
docker stop my-web
# Start a stopped container
docker start my-web
# Restart
docker restart my-web
# Remove a container
docker rm my-web
# Remove a running container (force)
docker rm -f my-web
View Logs
# View logs
docker logs my-web
# Follow logs in real-time
docker logs -f my-web
# Last 50 lines
docker logs --tail 50 my-web
Execute Commands Inside a Container
# Open a shell inside a running container
docker exec -it my-web bash
# Run a single command
docker exec my-web cat /etc/nginx/nginx.conf
# Open shell in Alpine-based containers (no bash)
docker exec -it my-web sh
Manage Images
# List downloaded images
docker images
# Remove an image
docker rmi nginx
# Remove all unused images
docker image prune -a
Clean Up Everything
# Remove all stopped containers, unused networks, dangling images
docker system prune
# Nuclear option — remove everything
docker system prune -a --volumes
Step 4: Build Your Own Image
Create a Simple Node.js App
Create a project folder:
mkdir my-app && cd my-app
Create app.js:
const http = require("http");
const server = http.createServer((req, res) => {
res.writeHead(200, { "Content-Type": "application/json" });
res.end(JSON.stringify({
message: "Hello from Docker!",
hostname: require("os").hostname(),
timestamp: new Date().toISOString(),
}));
});
server.listen(3000, () => {
console.log("Server running on port 3000");
});
Create package.json:
{
"name": "my-app",
"version": "1.0.0",
"main": "app.js",
"scripts": {
"start": "node app.js"
}
}
Write a Dockerfile
Create Dockerfile:
# Use Node.js 22 on Alpine (small image)
FROM node:22-alpine
# Set working directory
WORKDIR /app
# Copy package files first (better caching)
COPY package.json ./
# Install dependencies
RUN npm install --production
# Copy application code
COPY . .
# Expose port 3000
EXPOSE 3000
# Start the app
CMD ["npm", "start"]
Build the Image
docker build -t my-app:1.0 .
- `-t my-app:1.0` — tag it with name and version
- `.` — build context (current directory)
Run Your Custom Image
docker run -d -p 3000:3000 --name my-app my-app:1.0
Test it:
curl http://localhost:3000
# {"message":"Hello from Docker!","hostname":"a1b2c3d4e5f6","timestamp":"2026-03-11T10:00:00.000Z"}
Dockerfile Best Practices
# 1. Use specific version tags (not :latest)
FROM node:22-alpine
# 2. Create a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# 3. Copy dependency files first for better layer caching
COPY package.json package-lock.json ./
RUN npm ci --production
# 4. Copy app code after dependencies
COPY . .
# 5. Switch to non-root user
USER appuser
# 6. Use EXPOSE to document the port
EXPOSE 3000
# 7. Use exec form for CMD (proper signal handling)
CMD ["node", "app.js"]
Step 5: Volumes — Persistent Data
Containers are ephemeral — when removed, their data is lost. Use volumes to persist data:
# Create a named volume
docker volume create my-data
# Run PostgreSQL with persistent data
docker run -d \
--name my-db \
-e POSTGRES_PASSWORD=secretpass \
-v my-data:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:16
Even if you remove and recreate the container, the database data persists.
Bind Mounts (map host folder)
# Mount current directory into the container
docker run -d \
  -v "$(pwd)/html":/usr/share/nginx/html \
-p 8080:80 \
nginx
Edit files on your host, changes appear instantly in the container. Great for development.
Volume Commands
# List volumes
docker volume ls
# Inspect a volume
docker volume inspect my-data
# Remove unused volumes
docker volume prune
Step 6: Docker Compose — Multi-Container Apps
Most real applications need multiple services (app + database + cache). Docker Compose defines everything in one YAML file.
Example: Node.js + PostgreSQL + Redis
Create docker-compose.yml:
services:
app:
build: .
ports:
- "3000:3000"
environment:
- DATABASE_URL=postgres://postgres:secretpass@db:5432/myapp
- REDIS_URL=redis://cache:6379
depends_on:
- db
- cache
restart: unless-stopped
db:
image: postgres:16-alpine
environment:
POSTGRES_DB: myapp
POSTGRES_PASSWORD: secretpass
volumes:
- pg-data:/var/lib/postgresql/data
ports:
- "5432:5432"
restart: unless-stopped
cache:
image: redis:7-alpine
ports:
- "6379:6379"
restart: unless-stopped
volumes:
pg-data:
Compose Commands
# Start all services
docker compose up -d
# View logs
docker compose logs -f
# Stop all services
docker compose down
# Stop and remove volumes (careful — deletes data!)
docker compose down -v
# Rebuild after code changes
docker compose up -d --build
# Scale a service (remove the fixed "3000:3000" host port mapping first,
# or the extra replicas will fail to bind the same host port)
docker compose up -d --scale app=3
Example: Monitoring Stack (Grafana + Prometheus)
services:
prometheus:
image: prom/prometheus:latest
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prom-data:/prometheus
ports:
- "9090:9090"
restart: unless-stopped
grafana:
image: grafana/grafana:latest
volumes:
- grafana-data:/var/lib/grafana
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
depends_on:
- prometheus
restart: unless-stopped
volumes:
prom-data:
grafana-data:
One command and you have a full monitoring stack:
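The compose file above mounts a ./prometheus.yml that you need to supply yourself. A minimal sketch that just makes Prometheus scrape its own metrics (extend scrape_configs with your real targets):

```yaml
# prometheus.yml — minimal config; Prometheus scrapes itself on port 9090
global:
  scrape_interval: 15s   # how often to pull metrics

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
```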
docker compose up -d
Step 7: Networking
Default Bridge Network
Containers defined in the same Docker Compose file are placed on a shared network, so they can reach each other by service name:
# From the "app" container, you can connect to:
# db:5432 (PostgreSQL)
# cache:6379 (Redis)
# Docker DNS resolves these automatically
Custom Networks
# Create a network
docker network create my-network
# Run containers on the same network
docker run -d --name web --network my-network nginx
docker run -d --name api --network my-network my-app:1.0
# "web" can now reach "api" by name
docker exec web curl http://api:3000
Network Commands
# List networks
docker network ls
# Inspect a network
docker network inspect my-network
# Connect a running container to a network
docker network connect my-network existing-container
Step 8: Environment Variables & Secrets
Pass Environment Variables
# Inline
docker run -e DATABASE_URL="postgres://localhost/mydb" my-app
# From a file
docker run --env-file .env my-app
.env File
DATABASE_URL=postgres://postgres:secretpass@db:5432/myapp
REDIS_URL=redis://cache:6379
NODE_ENV=production
API_KEY=your-api-key-here
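The same KEY=VALUE format also works outside Docker. As a quick sanity check, you can source a .env file in a POSIX shell (a sketch, not a Docker feature; it assumes values need no quoting, and the file below just recreates two entries from the listing above):

```shell
# Write a sample .env file (same KEY=VALUE format docker --env-file expects)
cat > .env <<'EOF'
NODE_ENV=production
API_KEY=your-api-key-here
EOF

# set -a auto-exports every variable assigned until set +a
set -a
. ./.env
set +a

echo "$NODE_ENV"   # prints: production
```

Handy for running the app locally with the same configuration you pass to the container.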
Security tip: Never bake secrets into your Docker image. Always pass them at runtime via environment variables or Docker secrets.
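Docker Compose supports file-based secrets, which are mounted at /run/secrets/&lt;name&gt; inside the container instead of appearing in the environment. A minimal sketch (the file name db_password.txt is an assumption for illustration):

```yaml
services:
  app:
    image: my-app:1.0
    secrets:
      - db_password   # readable at /run/secrets/db_password inside the container

secrets:
  db_password:
    file: ./db_password.txt   # keep this file out of git and out of the image
```

Unlike environment variables, the secret never shows up in `docker inspect` output or in the image layers.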
Step 9: Deploy to Production
Push to Docker Hub
# Login
docker login
# Tag your image
docker tag my-app:1.0 yourusername/my-app:1.0
# Push
docker push yourusername/my-app:1.0
Deploy on a Remote Server
# On your production server
docker pull yourusername/my-app:1.0
docker run -d -p 80:3000 --restart unless-stopped yourusername/my-app:1.0
Deploy with Docker Compose
Copy your docker-compose.yml to the server and run:
docker compose up -d
Restart Policies
| Policy | Behavior |
|--------|----------|
| no | Never restart (default) |
| always | Always restart, including on boot |
| unless-stopped | Restart unless manually stopped |
| on-failure:3 | Restart up to 3 times on failure |
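In Compose, the same policies go under each service's restart key. A sketch using a hypothetical worker service:

```yaml
services:
  worker:
    image: my-app:1.0
    restart: on-failure:3   # retry up to 3 times on non-zero exit, then give up
```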
Common Mistakes to Avoid
1. Using :latest tag in production
# Bad — you don't know which version you'll get
FROM node:latest
# Good — pinned version, reproducible builds
FROM node:22-alpine
2. Running as root
# Add after installing dependencies
USER node
3. Not using .dockerignore
Create .dockerignore:
node_modules
.git
.env
*.log
Dockerfile
docker-compose.yml
This prevents copying unnecessary files into the image, making builds faster and images smaller.
4. Too many layers
# Bad — 3 layers
RUN apt update
RUN apt install curl -y
RUN apt clean
# Good — 1 layer
RUN apt update && apt install curl -y && apt clean
Quick Reference
# Images
docker pull image:tag # Download
docker build -t name:tag . # Build
docker images # List
docker rmi image:tag # Remove
# Containers
docker run -d -p H:C image # Run (H=host port, C=container port)
docker ps # List running
docker ps -a # List all
docker stop name # Stop
docker rm name # Remove
docker logs -f name # Follow logs
docker exec -it name bash # Shell access
# Compose
docker compose up -d # Start all
docker compose down # Stop all
docker compose logs -f # Follow logs
docker compose up -d --build # Rebuild and restart
# Cleanup
docker system prune -a # Remove everything unused