This guide covers Docker installation, creating and optimizing Dockerfiles, pulling images, managing containers, working with volumes and networks, and troubleshooting commands, with real-world examples using our base project, EasyShop.
1. Installation for Each Platform
Linux (Ubuntu/Debian)
sudo apt update
sudo apt install docker.io -y
sudo systemctl enable docker
sudo systemctl start docker
docker --version
Mac & Windows
Download Docker Desktop from the official Docker documentation, then verify the installation:
docker --version
2. Docker Architecture
Docker Client
The Docker Client (the Docker CLI, or Command Line Interface) is how you communicate with the Docker daemon. This is where you run commands like docker run, docker pull, and docker build.
Docker Host
It is a physical or virtual machine that runs the Docker Engine. It is the main environment that provides the necessary resources (CPU, memory, storage, networking) to execute Docker containers.
What is dockerd?
dockerd is the Docker daemon which runs in the background. It listens for Docker API requests and handles tasks like:
Building Images
Running and stopping containers
Managing Networks and Volumes
You can check the status using:
systemctl status docker
DockerHub
It is a cloud-based public registry where you can store your Docker images.
You can push your images to Docker Hub and pull them when needed for development or deployment.
Docker Hub integrates seamlessly with Continuous Integration and Continuous Delivery (CI/CD) pipelines, and allows for automated builds and deployments.
3. Pulling Docker Images
These commands pull Docker images from artifact registries like DockerHub:
docker pull nginx
docker pull mongo
docker pull <your dockerhub imagename>
4. Checking Stats of running Containers
docker stats
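By default, docker stats shows a live-updating stream of CPU, memory, network, and disk I/O usage. For a one-shot snapshot, or to pick specific columns, you can add a couple of standard flags (the column selection below is just an example):

```shell
# Print a single snapshot of all running containers, then exit
docker stats --no-stream

# Show only selected columns using a Go-template format string
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```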
5. Images and Container Management
Images
What are docker Images?
Images are executable packages that contain everything needed to create containers: application code, dependencies (e.g., Node.js, Python), configuration (e.g., environment variables), and software packages.
Container
What is container ?
A container is a lightweight, standalone unit of software that includes everything required to run an application, including code, runtime, and dependencies, ensuring it runs quickly and reliably across different computing environments.
Commands:
To get the list of Docker Images
docker images
To get the list of only running containers
docker ps
To get the list of all running and stopped containers
docker ps -a
To Stop the container
docker stop <container_id or container name>
To Remove Container
docker rm <container_id or container name>
To remove the image
docker rmi <image_id>
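Putting these commands together, a typical lifecycle for a throwaway container looks like this (using nginx as an example image):

```shell
docker pull nginx                 # get the image
docker run -d --name web nginx    # start a container in the background
docker ps                         # verify it is running
docker stop web                   # stop it
docker rm web                     # remove the stopped container
docker rmi nginx                  # remove the image itself
```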
7. Troubleshooting Commands
To Check container logs
docker logs <container_id or container name>
To Inspect containers/images
docker inspect <container_id or image_id>
To Enter container shell/ your container
docker exec -it <container_id> /bin/bash
To check Disk usage and cleanup
docker system df
docker system prune
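A few extra flags make these commands more useful day to day (all are standard Docker CLI options; the container name below is a placeholder):

```shell
CID=my-container   # replace with your container name or ID

docker logs --tail 100 "$CID"                   # only the last 100 lines (add -f to follow)
docker inspect --format '{{.State.Status}}' "$CID"   # pull one field instead of the full JSON
docker system prune -a                          # also remove unused images, not just dangling ones
```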
8. Builder Cache & Clean Images
Each instruction in the Dockerfile adds a layer to the image. The intermediate layers are generated during the build process. When we rebuild the image without making any changes to the Dockerfile or source code, Docker can reuse the cached layers to speed up the build process.
To clean cache:
docker builder prune
To avoid caching during build
docker build --no-cache -t easyshop .
Dockerfile Instruction Explanation with Layer-by-Layer Breakdown & Caching Tips
FROM node:18
Here we define the base image, which includes everything needed to run the app; e.g., the node:18 image comes with Node.js pre-installed. This layer is cached unless you change the Node version. Always pin a specific version like node:18 to ensure consistency and make the most of Docker’s layer caching.
WORKDIR /app
This sets the working directory inside the container to /app. All subsequent commands will run from this location. It doesn’t change often, so it’s always cached and adds to Dockerfile readability and structure.
COPY package*.json ./
This step copies only package.json and package-lock.json. It’s one of the most important steps for caching because dependencies don’t change as often as your application code. Keeping this separate helps Docker cache the npm install layer effectively.
RUN npm install
This installs all the dependencies listed in the package files. If the previous layer is unchanged, Docker reuses this cached layer and skips re-installing, and significantly speeds up builds.
COPY . .
Now that dependencies are installed, we copy the rest of the app files from the host system (source) into the container (destination). This ordering ensures that if you change only your app code, Docker won’t repeat the installation step; only this layer gets rebuilt. That’s the whole idea behind leveraging Docker’s layer caching.
EXPOSE 5000
This exposes port 5000 so others know which port the app is running on. It doesn’t affect caching or container behaviour directly, but is useful for documentation and when using Docker Compose.
CMD ["node", "index.js"]
This is the default command that runs when the container starts and you can override this command. It usually remains unchanged and gets cached normally.
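Because COPY . . copies everything in the build context, a .dockerignore file keeps unnecessary files (especially node_modules and .git) out of the image and stops them from needlessly invalidating the cache. A minimal example for a Node.js project like this one:

```
node_modules
npm-debug.log
.git
.env
```

With node_modules excluded, the dependencies installed by RUN npm install inside the image are never overwritten by whatever happens to be on your host.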
9. Volumes — Persistent Storage
What are Volumes? Why Use Docker Volumes?
Volumes provide a way to persist data outside the container’s file system, making it available even after container restarts or deletions, and easy to back up or share. Volumes also allow multiple containers to share the same data.
On Linux, volumes are stored under /var/lib/docker/volumes/
Commands:
To create volume
docker volume create <volume name>
To run a container and mount the named volume to /app/data inside the container
docker run -v <volume name>:/app/data <image name>
To bind-mount a host directory instead of a named volume
docker run -v $(pwd)/data:/app/data <image name>
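For example, persisting EasyShop’s MongoDB data with a named volume (MongoDB stores its data under /data/db inside the container; the container and volume names here are examples):

```shell
docker volume create mongo-data
docker run -d --name easyshop-db -v mongo-data:/data/db mongo
docker volume inspect mongo-data   # shows the mountpoint under /var/lib/docker/volumes/
```

Even if easyshop-db is removed, mongo-data survives and can be attached to a new container.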
10. Docker Networks
What are Docker Networks?
Docker provides built-in networking features that enable secure and efficient communication between containers and with the host machine. It also provides isolation between containers, allowing you to control which containers can communicate with each other.
Types of Networks
Bridge Network
This is the default network. It creates a virtual bridge on the host to connect containers, and you bind host ports to container ports with port mapping.
Host Network
With the host network, the container shares the host machine’s network stack, so container ports are the host’s ports. With this network, there is no need for port mapping.
Custom Bridge Network (User-Defined Network)
This means creating your own secure network and assigning it to containers. Containers on the same user-defined network can reach each other by container name.
Overlay Network
Enables communication between containers on different hosts; it is used in clustered environments such as Docker Swarm.
Commands:
To create Custom Bridge Network
docker network create <network name>
To run containers on the same network
docker run -d --name <container name> --network <network name> <image name>
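For EasyShop, a custom bridge network lets the backend reach MongoDB by container name instead of an IP address (the network, container, and image names below are examples):

```shell
docker network create easyshop-net
docker run -d --name mongo --network easyshop-net mongo
docker run -d --name backend --network easyshop-net -p 5000:5000 easyshop-backend
# inside the backend container, MongoDB is now reachable at mongodb://mongo:27017
```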
Dockerfile Creation and Pushing Images to DockerHub
Dockerfile — EasyShop Backend (Node.js + Express)
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["node", "index.js"]
To build the Dockerfile and create an image from it
docker build -t your-image-name:tag .
DockerHub
To push the images to DockerHub
docker login -u <username>
It will ask for your Docker Hub username and password (or personal access token).
To tag the image with your Docker Hub username before pushing it to DockerHub
docker tag local-image-name:tag dockerhub-username/repository-name:tag
Eg:
sudo docker tag easyshop-backend:latest poojabhavani08/easyshop-backend:latest
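After tagging, push the image to DockerHub with docker push (using the same example name):

```shell
docker push poojabhavani08/easyshop-backend:latest
```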
EasyShop — Base Demo App
EasyShop is our demo e-commerce app built with:
Frontend: React
Backend: Node.js/Express
Database: MongoDB
Clone the repository:
https://github.com/iemafzalhassan/easyshop--demo
You can use Docker to:
Containerize each part
Connect them using Docker networks
Persist data with Docker volumes
Practice cleanup, image builds, and container orchestration
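One way to wire all three parts together is Docker Compose. Here is a minimal sketch; the service names, ports, and build paths are assumptions for illustration, not taken from the repo:

```yaml
services:
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - mongo
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
volumes:
  mongo-data:
```

Compose puts all three services on a shared network automatically, so the backend can reach the database at mongo:27017 by service name.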
Thanks all. Good luck out there!
Follow for more such amazing content 🙂
Happy Learning 😊