Introduction: Why your Docker skills still suck (and how to fix that)
Let’s be honest: running docker run hello-world doesn’t mean you “know Docker.”
You’ve probably followed a dozen YouTube tutorials, watched a few folks make a container dance in their terminal, maybe even got a Django app running once before it mysteriously broke after a restart.
But here’s the catch: Docker isn’t something you understand just by watching. It’s something you get good at by doing, especially when things go wrong.
Real-world Docker skills come from solving real problems: broken ports, flaky volumes, weird networking bugs that make you question your existence.
That’s where this article comes in.
I’ve collected 10 practical, not-boring, actually useful Docker projects that force you to learn the stuff most tutorials skip. Stuff like:
- Linking multiple containers the right way
- Using docker-compose like a wizard
- Debugging containers that misbehave in production
- Making your dev environment bulletproof (and maybe even beautiful)
These aren’t enterprise-grade Kubernetes monsters. They’re small, fun, and surprisingly educational: perfect for leveling up at your own pace.
Let’s start with the easiest one and build from there. Your future DevOps brain will thank you.
Project 1: Containerize a static website (because you gotta start somewhere)
What you’re building:
A simple static website served via Docker using Nginx or http-server. Nothing fancy, just HTML, CSS, maybe a splash of JS.
But here’s the twist: you’ll set it up cleanly with a Dockerfile, serve it on localhost, and hot-reload changes.
Skills you’ll learn:
- Writing a basic Dockerfile
- Using COPY, EXPOSE, and CMD properly
- Mounting local directories with volumes
- Running containers with port forwarding (-p 8080:80)
How it works:
- Create a folder with your HTML/CSS/JS.
- Write a Dockerfile that uses the Nginx base image.
- Mount your local folder into the container so you don’t need to rebuild every time you change a file.
- Serve and view on localhost:8080.
# Simple Dockerfile using Nginx
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 80
Try this:
docker build -t my-static-site .
docker run -p 8080:80 my-static-site
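Want the no-rebuild workflow from the mounting step above with the Nginx image too? Bind-mount your folder over the html directory instead of baking it in (a quick sketch, assuming your site files are in the current directory):
# Serve whatever is in the current folder, live
docker run -p 8080:80 -v ${PWD}:/usr/share/nginx/html my-static-site
Edit a file, refresh the browser, and Nginx serves the new version. No rebuild.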
Want live reload? Use http-server with Node:
FROM node:alpine
RUN npm install -g http-server
WORKDIR /app
COPY . /app
EXPOSE 8080
CMD ["http-server", ".", "-p", "8080", "-c-1"]
Then run:
docker build -t my-live-site .
docker run -p 8080:8080 -v ${PWD}:/app my-live-site
Now every time you change a file, just refresh the browser. No rebuilds.
Why this matters:
This is your “Hello World,” but with real muscle. You learn how to:
- Serve files in containers
- Handle ports
- Use volumes to avoid rebuilds
- Build a basic mental model of Docker’s filesystem
You’re laying the foundation. The boring part. But don’t skip it: people who rush through this usually get smacked in Project 7.
Project 2: Build your dev portfolio in a Docker container
What you’re building:
A portfolio site (React, Vue, Svelte, Astro, pick your poison), containerized for both dev and prod. Bonus: you’ll set up a multi-stage build so your final image isn’t bloated with Node junk.
You’ll go from:
npm run dev on your laptop → docker-compose up anywhere
Skills you’ll learn:
- Multi-stage Docker builds
- Exposing ports for development vs production
- Optimizing image size
- Using .dockerignore like .gitignore (sample below)
How to do it (React/Vite example):
1. Create your React app
npm create vite@latest my-portfolio -- --template react
cd my-portfolio
npm install
2. Write a multi-stage Dockerfile
# Stage 1: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build
# Stage 2: Serve
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
3. Build + run it:
docker build -t my-portfolio .
docker run -p 8080:80 my-portfolio
Boom. Now your portfolio runs in a clean production image, separate from all that node_modules mess.
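And about that .dockerignore from the skills list: it works like .gitignore for your build context. A minimal sketch for a Vite project might be:
node_modules
dist
.git
*.log
Then check what the multi-stage build saved you:
docker images my-portfolio
The nginx:alpine-based final image should land in the tens of megabytes, versus several hundred for a naive single-stage Node image.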
Dev tip:
If you want hot reloading in dev mode too, you can mount your local folder into a node container and run npm run dev:
docker run -it -p 5173:5173 -v ${PWD}:/app -w /app node:18-alpine sh
# Then inside:
npm install
npm run dev -- --host  # Vite binds to localhost by default; --host makes it reachable through the mapped port
Why this project matters:
Everyone tells you to make a portfolio site. But deploying it properly in Docker forces you to think like an engineer, not just a developer:
- What does “production-ready” actually mean?
- Why is my container 700MB?
- Where do I separate build vs run?
Once you master this, you’re no longer the junior who ships containers with 300MB of unused dependencies.
Project 3: Set up a full LAMP stack (because real apps need real databases)
What you’re building:
A classic Linux + Apache + MySQL + PHP (LAMP) stack using docker-compose. You’ll run a PHP app (like WordPress or your own mini CMS) with a proper backend and persistent data.
This is where you stop thinking in containers and start thinking in systems.
Skills you’ll learn:
- Using docker-compose.yml to manage multi-container setups
- Persistent volumes for databases
- Linking containers by service name
- Environment variables for DB configs
- Container networking
Quick Setup
Create a project folder and add a docker-compose.yml like this:
version: '3.8'
services:
  web:
    image: php:8.2-apache
    ports:
      - "8081:80"
    volumes:
      - ./web:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: mydb
      MYSQL_USER: user
      MYSQL_PASSWORD: pass
      MYSQL_ROOT_PASSWORD: rootpass
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:
Create a basic index.php inside the web/ folder:
<?php
$mysqli = new mysqli("db", "user", "pass", "mydb");
if ($mysqli->connect_error) {
echo "Connection failed: " . $mysqli->connect_error;
} else {
echo "Connected to MySQL!";
}
Run it:
docker-compose up -d
Visit http://localhost:8081. If it says “Connected to MySQL!” you nailed it.
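One heads-up: the stock php:8.2-apache image doesn’t ship with the mysqli extension enabled. If PHP complains about a missing mysqli class, build a tiny custom image instead (a sketch: put this Dockerfile in web/ and swap image: php:8.2-apache for build: ./web in the compose file):
FROM php:8.2-apache
# Enable the MySQL driver the official image leaves out
RUN docker-php-ext-install mysqli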
Why this project matters:
Most tutorials ignore multi-container reality. But real web apps have:
- A backend
- A database
- Shared configs
- Persistent data
Using docker-compose like this teaches you how to think like a backend dev and an ops engineer at the same time.
You’ll break stuff, especially the DB. That’s part of the learning.
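And when the DB really gets wedged, a clean slate is one command away, because the data lives in a named volume:
docker-compose down -v   # stops containers AND deletes the volumes
docker-compose up -d     # fresh database, fresh start
Use -v deliberately: it wipes db_data for real.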
Project 4: Run a WordPress blog in Docker (and customize it like a hacker)
What you’re building:
A full-blown WordPress site, powered entirely by containers. You’ll spin up WordPress + MySQL, use volumes for data persistence, and actually mess with themes/plugins from your local machine.
Yup, Dockerized blogging, fully editable.
Skills you’ll learn:
- Real-world multi-container orchestration
- Using depends_on and volumes for persistent app + DB state
- Customizing apps running inside containers
- Exposing ports and paths for CMS platforms
Your docker-compose.yml:
version: '3.8'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8082:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: user
      WORDPRESS_DB_PASSWORD: pass
      WORDPRESS_DB_NAME: wpdb
    volumes:
      - wp_data:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: wpdb
      MYSQL_USER: user
      MYSQL_PASSWORD: pass
      MYSQL_ROOT_PASSWORD: rootpass
    volumes:
      - db_data:/var/lib/mysql
volumes:
  wp_data:
  db_data:
Run it:
docker-compose up -d
Then go to http://localhost:8082 and run through the classic WordPress setup wizard.
Customizing themes/plugins:
Your WordPress files are stored in the wp_data volume. Want to edit the theme?
- Stop the container.
- Mount a local folder into /var/www/html/wp-content/themes in docker-compose.yml (sketch below).
- Edit files locally, refresh the browser. Instant dev workflow.
You can even copy your existing blog over or build a new theme if you’re feeling spicy.
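A minimal sketch of that theme mount, assuming your theme lives in a local ./my-theme folder (the name is just an example):
  wordpress:
    volumes:
      - wp_data:/var/www/html
      - ./my-theme:/var/www/html/wp-content/themes/my-theme
Run docker-compose up -d again and the theme shows up in the WordPress admin.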
Why this project matters:
WordPress isn’t just a blog, it’s a real-world app with:
- DB connection requirement
- Volume persistence
- Plugin/theme customization
- Configs that break often
This project shows you how Docker helps isolate the chaos — while still letting you hack, tweak, and experiment.
Optional challenge: Try backing up your WordPress + DB data and restoring it on another machine. Real DevOps vibes.
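There’s no single official way to do that, but here’s one rough sketch (using the service and volume names from the compose file above; note that compose usually prefixes volume names with your project folder, so check docker volume ls first):
# dump the database to a local file (-T disables the TTY so the redirect stays clean)
docker-compose exec -T db sh -c 'mysqldump -u root -prootpass wpdb' > wpdb.sql
# archive the WordPress files out of the named volume
docker run --rm -v wp_data:/data -v ${PWD}:/backup alpine tar czf /backup/wp_files.tar.gz -C /data .
Restore by reversing both steps on the target machine.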
Project 5: Self-host your own cloud IDE with code-server (VS Code in the browser, baby)
What you’re building:
Your very own VS Code in the browser, running inside a container. You’ll be able to code anywhere (even from your iPad) while keeping all your dev tools isolated.
It’s like having a dev laptop… inside Docker.
Skills you’ll learn:
- Running third-party devtools in containers
- Port mapping and security considerations
- Using bind mounts to persist your code
- Environment management inside containers
Using the official code-server image:
version: '3.8'
services:
  code-server:
    image: codercom/code-server:latest
    container_name: vscode
    restart: always
    environment:
      - PASSWORD=secret123
    volumes:
      - ./projects:/home/coder/project
    ports:
      - "8443:8080"
Create a projects folder in the same directory. That’s where all your files live and get edited inside the container.
Run it:
docker-compose up -d
Then open:
http://localhost:8443
Log in with the password secret123, and you’ve got full VS Code in the browser.
Pro Tip:
Mount your local ~/.ssh or Git credentials into the container if you want to commit code directly from code-server. But don’t do this on shared or exposed servers unless you lock things down properly.
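If you do go that route on a trusted machine, read-only bind mounts keep the blast radius small (a sketch of extra lines for the code-server service above):
    volumes:
      - ./projects:/home/coder/project
      - ~/.ssh:/home/coder/.ssh:ro          # :ro = read-only
      - ~/.gitconfig:/home/coder/.gitconfig:ro
The :ro flag means nothing inside the container can rewrite your keys.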
Why this project matters:
This is where Docker stops being “just deployment stuff” and starts being your dev environment.
You’ll learn:
- How to isolate tools
- How to persist data across restarts
- How to serve secure dev tools via the browser
And if you’ve ever wanted to code on a Chromebook, iPad, or PotatoPC™, this project makes it possible.
Project 6: Make a personal file drop app (because you don’t need Big Cloud for everything)
What you’re building:
A simple web-based app to upload, store, and download files, self-hosted and containerized. You can use tools like nextcloud, minio, or even roll your own Flask upload tool.
Great for sharing files across devices or with friends without third-party servers.
Skills you’ll learn:
- Mounting volumes for persistent uploads
- Setting up secure upload endpoints
- Managing storage in containers
- Optional: adding reverse proxies for SSL and auth
Option A: Run Nextcloud in Docker (the full suite)
version: '3.8'
services:
nextcloud:
image: nextcloud
ports:
- "8083:80"
volumes:
- nextcloud_data:/var/www/html
restart: always
volumes:
nextcloud_data:
Then run:
docker-compose up -d
Visit http://localhost:8083, set your admin user, and boom — you’ve got Dropbox-level functionality.
Option B: Make a mini file uploader with Flask
If you want to go hacker-mode and build your own:
Create app.py:
from flask import Flask, request
from werkzeug.utils import secure_filename
import os

app = Flask(__name__)
UPLOAD_FOLDER = '/uploads'
os.makedirs(UPLOAD_FOLDER, exist_ok=True)

@app.route('/', methods=['POST'])
def upload():
    file = request.files['file']
    # secure_filename strips path tricks like ../../etc/passwd
    file.save(os.path.join(UPLOAD_FOLDER, secure_filename(file.filename)))
    return 'Uploaded!', 200

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the container's mapped port is reachable from the host
    app.run(host='0.0.0.0', port=5000)
And Dockerfile:
FROM python:3.10-slim
WORKDIR /app
COPY app.py .
RUN pip install flask
EXPOSE 5000
CMD ["python", "app.py"]
Then:
docker build -t file-drop .
docker run -p 5000:5000 -v ${PWD}/uploads:/uploads file-drop
Now curl -F 'file=@yourfile.txt' localhost:5000 works. You made a file drop from scratch!
Why this project matters:
You’ll get your hands dirty with:
- File systems inside containers
- Upload logic and file permissions
- Persistent data across restarts
- Optional: adding authentication, HTTPS
This project helps you bridge backend logic with DevOps concepts. It’s a practical exercise in “how would I host my own service?”
Project 7: Run your own web analytics with Plausible (because Google doesn’t need your data)
What you’re building:
A fully containerized privacy-friendly analytics dashboard using Plausible (or Umami). It tracks visitors to your websites without cookies, without creepy tracking, and without relying on Google.
Yes, you’ll finally know who’s visiting your portfolio without selling their souls.
Skills you’ll learn:
- Running full-stack apps with Postgres
- Managing environment variables for app configs
- Using docker-compose for production apps
- Reverse proxying (optional) with Nginx or Traefik
Basic Plausible setup:
version: "3.3"
services:
plausible:
image: plausible/analytics
ports:
- "8084:8000"
depends_on:
- postgres
- clickhouse
environment:
- BASE_URL=http://localhost:8084
- SECRET_KEY=super_secret
volumes:
- plausible_data:/plausible
postgres:
image: postgres:13
environment:
POSTGRES_DB=plausible_db
POSTGRES_USER=plausible
POSTGRES_PASSWORD=plausible
volumes:
- postgres_data:/var/lib/postgresql/data
clickhouse:
image: clickhouse/clickhouse-server:22.6
volumes:
- clickhouse_data:/var/lib/clickhouse
volumes:
plausible_data:
postgres_data:
clickhouse_data:
Run it:
docker-compose up -d
Then open:
http://localhost:8084
Set up your admin account and paste the <script> tag into your website.
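The snippet Plausible hands you looks roughly like this (data-domain is whatever site name you register in the dashboard; swap the src host for your real domain once the instance sits behind a proxy):
<script defer data-domain="yourdomain.com" src="http://localhost:8084/js/script.js"></script>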
Want SSL and a domain?
Add a reverse proxy container like Nginx Proxy Manager or Traefik to expose it securely via HTTPS and domain name.
Why this project matters:
This setup teaches you to:
- Host full apps with databases and analytics logic
- Manage long-term data storage
- Run “production-like” tools without managed hosting
- Respect user privacy while still getting insight
Bonus? You’ll never touch Google Analytics again. And clients love seeing a slick dashboard that doesn’t scream “Google owns your traffic.”
Project 8: Create a CI/CD pipeline with Jenkins or Drone (automate everything)
What you’re building:
A fully Dockerized CI/CD pipeline using either Jenkins or Drone CI. You’ll trigger builds, run tests, and auto-deploy your apps — inside containers.
This is where Docker becomes more than dev tools — it’s infrastructure.
Skills you’ll learn:
- Containerizing build pipelines
- Mounting code volumes or Git repos
- Automating builds, tests, and deploys
- Managing secrets and credentials
Option A: Jenkins in Docker
Create docker-compose.yml:
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8085:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
    restart: always
volumes:
  jenkins_home:
Start it:
docker-compose up -d
Go to http://localhost:8085, unlock it using the password from:
docker-compose exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
Install basic plugins and start creating pipelines. You can configure a GitHub webhook → Jenkins → Docker build + deploy flow.
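The Docker-build part of such a pipeline can be a surprisingly small Jenkinsfile. A sketch (myapp is a placeholder, the deploy stage is stubbed, and the Jenkins agent needs access to a Docker daemon):
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'docker build -t myapp:${BUILD_NUMBER} .' }
    }
    stage('Test') {
      steps { sh 'docker run --rm myapp:${BUILD_NUMBER} npm test' }
    }
    stage('Deploy') {
      steps { sh 'echo "push to a registry / restart the service here"' }
    }
  }
}
BUILD_NUMBER is set by Jenkins itself, so the shell expands it even inside the single-quoted strings.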
Option B: Drone CI (lightweight and modern)
Drone is simpler and uses a .drone.yml file, similar to GitHub Actions:
version: '3'
services:
  drone:
    image: drone/drone:2
    ports:
      - "8086:80"
    volumes:
      - drone_data:/data
    environment:
      - DRONE_GITEA_SERVER=https://your-gitea.com
      - DRONE_RPC_SECRET=supersecret
      - DRONE_SERVER_HOST=localhost:8086
      - DRONE_SERVER_PROTO=http
volumes:
  drone_data:
Requires a Git repo integration like Gitea/GitHub + Drone plugin. But once set up, it’s so smooth.
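For reference, the .drone.yml in your repo can be as small as this sketch (the image and commands are placeholders for your own stack):
kind: pipeline
type: docker
name: default

steps:
  - name: build-and-test
    image: node:18-alpine
    commands:
      - npm install
      - npm test
Push a commit, and Drone picks it up automatically.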
Why this project matters:
You’re now building the kind of automation infrastructure used by real dev teams.
You’ll understand:
- How build tools interact with source code
- Secrets management
- Running builds in isolation
- Triggering deploys from a Git push
Whether you’re freelancing or scaling your startup, this one’s a game-changer.
Project 9: Reverse proxy everything with Traefik or Nginx Proxy Manager (like a traffic ninja)
What you’re building:
A reverse proxy setup that routes traffic to your various containerized apps based on domain or subdomain. You’ll expose multiple services on a single server (or localhost), with optional SSL, basic auth, and load balancing.
Basically: make your Docker world look professional.
Skills you’ll learn:
- Configuring reverse proxies with Docker
- Using labels and automatic service discovery
- Generating SSL certs with Let’s Encrypt
- Managing multiple apps on a single port (via hostnames)
Option A: Traefik + Docker labels
Create a docker-compose.yml:
version: "3.8"
services:
reverse-proxy:
image: traefik:v2.9
command:
- "--api.insecure=true"
- "--providers.docker=true"
- "--entrypoints.web.address=:80"
ports:
- "80:80"
- "8088:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
my-app:
image: nginx
labels:
- "traefik.http.routers.myapp.rule=Host(myapp.localhost
)"
- "traefik.http.services.myapp.loadbalancer.server.port=80"
Visit:
http://myapp.localhost
(after adding it to your /etc/hosts)
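On Linux/macOS that’s a single line in /etc/hosts (many systems already resolve *.localhost to 127.0.0.1, so try it without first):
127.0.0.1 myapp.localhost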
Option B: Nginx Proxy Manager (GUI)
Add this to docker-compose.yml:
version: '3.8'
services:
  npm:
    image: jc21/nginx-proxy-manager
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    volumes:
      - npm_data:/data
      - npm_letsencrypt:/etc/letsencrypt
volumes:
  npm_data:
  npm_letsencrypt:
Run it and go to:
http://localhost:81
Log in (default: admin@example.com / changeme) and start adding domains + SSL with a few clicks.
Why this project matters:
Most real-world setups need some kind of proxy for:
- Clean URLs (myapp.com, not localhost:3000)
- SSL (Let’s Encrypt)
- Subdomains per service
- Security (basic auth, IP whitelisting)
This is your step into infra-engineer territory: routing, traffic management, and zero-downtime restarts.
Okay. Let’s take it all home in the next one.
Project 10: Build a mini SaaS stack with Node, MongoDB, and Nginx (your startup, containerized)
What you’re building:
A fully containerized mini SaaS application — backend in Node.js, database in MongoDB, served to the world through Nginx.
You’ll mimic a real app setup with:
- API server
- Database
- Frontend (optional)
- Reverse proxy
- SSL-ready networking
This is your Docker-powered MVP launchpad.
Skills you’ll learn:
- Multi-tier architecture in Docker
- Managing environment variables
- Connecting app → DB → frontend
- Production-like configs with Nginx reverse proxy
- Network separation + naming conventions
Project structure:
/mini-saas
  /backend
    server.js
    Dockerfile
  /nginx
    default.conf
  docker-compose.yml
Sample backend/Dockerfile:
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 5000
CMD ["node", "server.js"]
Sample server.js:
const express = require("express");
const mongoose = require("mongoose");

// "mongo" is the compose service name, resolved by Docker's internal DNS
// (useNewUrlParser/useUnifiedTopology are no longer needed in modern Mongoose)
mongoose
  .connect("mongodb://mongo:27017/saas")
  .then(() => console.log("Connected to MongoDB"))
  .catch((err) => console.error("Mongo connection failed:", err));

const app = express();
app.get("/", (req, res) => res.send("SaaS Backend Running"));
app.listen(5000, () => console.log("Server up on 5000"));
Sample nginx/default.conf:
server {
    listen 80;

    location / {
        proxy_pass http://backend:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
docker-compose.yml:
version: "3.8"
services:
backend:
build: ./backend
container_name: backend
depends_on:
- mongo
mongo:
image: mongo
container_name: mongo
volumes:
- mongo_data:/data/db
nginx:
image: nginx:alpine
container_name: nginx
ports:
- "8089:80"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- backend
volumes:
mongo_data:
Run it:
docker-compose up --build
Then visit:
http://localhost:8089
You just simulated launching a micro-SaaS backend.
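Quick smoke test from another terminal:
curl http://localhost:8089
# → SaaS Backend Running
That proves Nginx is proxying to the backend; check docker-compose logs backend to confirm the Mongo connection too.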
Why this is your final Docker flex:
You’ve now built a:
- REST API
- Database backend
- Reverse proxy with Nginx
- Full networked architecture using Docker
Want frontend too? Add another service with React or Vue, and proxy it through Nginx. You’ve got the skills now.
Conclusion: You don’t need more tutorials, you need projects that break you
Let’s be real: most people know Docker like they know Git, barely enough to not panic.
But if you go through these 10 projects, you’ll:
- Stop guessing how volumes work
- Actually understand container networking
- Build real apps you can deploy tomorrow
- Think in systems, not commands
These are the Docker reps that build DevOps muscle.