
Docker for Developers Who Keep Putting It Off

The practical starting point for containerizing your apps without getting lost in orchestration theory. What Docker actually does, when to use it, and a real workflow.

8 min read · February 2, 2026 · By FreeToolKit Team · Free to read

Docker's documentation starts with the big picture — images, registries, orchestration, Kubernetes. Most developers need to containerize a specific app for local development or deployment, not learn container theory. Start here.

A Dockerfile That Actually Works for a Node App

Dockerfile

FROM node:20-alpine

WORKDIR /app

# Copy and install dependencies first (Docker layer caching)
COPY package*.json ./
RUN npm ci

# Copy source code
COPY . .

# Build if needed
RUN npm run build

# Expose port and start
EXPOSE 3000
CMD ["node", "dist/index.js"]

The order matters: copy package.json and install before copying source code. Docker caches each layer — if only your source code changed (not dependencies), Docker reuses the cached dependency layer, making rebuilds significantly faster.
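If image size matters, the same caching idea extends to a multi-stage build: install and compile in one stage, then copy only the build output into a slim runtime image. A sketch, assuming the same npm scripts and dist/ output as above:

```dockerfile
# Build stage: full install (including devDependencies) and compile
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: production dependencies plus the compiled output only
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]
```

The final image never contains your source tree or devDependencies, which typically cuts hundreds of megabytes.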

The Commands You'll Use 90% of the Time

  • docker build -t myapp . — build an image from the Dockerfile in the current directory, tag it 'myapp'.
  • docker run -p 3000:3000 myapp — run the container, map port 3000 in the container to port 3000 on your host.
  • docker run -d --name myapp -p 3000:3000 myapp — detached mode (background), with a name.
  • docker ps — list running containers. docker ps -a shows stopped containers too.
  • docker logs myapp -f — follow logs from a container. Essential for debugging.
  • docker exec -it myapp sh — get a shell inside a running container. Alpine-based images ship sh but not bash; on Debian-based images, bash is available too.
  • docker stop myapp && docker rm myapp — stop and remove a container.
  • docker images — list local images. docker rmi myapp — delete an image.
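If you find yourself retyping these, a small Makefile can wrap the loop. A sketch, assuming the image and container are both named myapp (adjust names and port to your project):

```make
IMAGE = myapp
PORT  = 3000

.PHONY: build run stop logs shell

build:   ## build the image from the local Dockerfile
	docker build -t $(IMAGE) .

run: build   ## run detached, mapping the app port to the host
	docker run -d --name $(IMAGE) -p $(PORT):$(PORT) $(IMAGE)

stop:   ## stop and remove the container
	docker stop $(IMAGE) && docker rm $(IMAGE)

logs:   ## follow container logs
	docker logs $(IMAGE) -f

shell:   ## open a shell inside the running container
	docker exec -it $(IMAGE) sh
```

make run then make logs covers most of the day-to-day cycle.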

Docker Compose for Local Development

docker-compose.yml

version: '3.8'
services:
  api:
    build: .
    ports:
      - '3000:3000'
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
    depends_on:
      - db
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data:

docker-compose up starts both services. The db hostname resolves to the Postgres container automatically, and the postgres_data volume persists database data between restarts. docker-compose down stops everything; docker-compose down -v also deletes the volumes (destroying the database).
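For faster iteration, you can layer a development override on top of the base file — docker-compose automatically applies a docker-compose.override.yml if one exists next to docker-compose.yml. A sketch that bind-mounts your source tree for live edits; it assumes your package.json has a "dev" script (e.g. nodemon):

```yaml
# docker-compose.override.yml — picked up automatically by docker-compose up
services:
  api:
    volumes:
      - .:/app             # bind-mount source so edits appear without rebuilding
      - /app/node_modules  # keep the image's node_modules, not the host's
    command: npm run dev   # assumes a "dev" script in package.json
```

The anonymous /app/node_modules volume prevents the host's node_modules (which may be empty or built for a different OS) from shadowing the one installed inside the image.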

The .dockerignore File You're Forgetting

Without it, COPY . . copies your node_modules (500MB+), .git directory, .env files, and everything else into the image. Create .dockerignore:

.dockerignore

node_modules
.git
.env
*.log
dist
.next

Frequently Asked Questions

What problem does Docker actually solve?
The 'works on my machine' problem. Docker packages an application with its entire runtime environment — OS libraries, dependencies, configuration — into a container. That container runs identically on your laptop, your colleague's MacBook, your CI server, and your production Linux box. Without Docker, subtle differences in OS versions, library versions, or environment variables cause bugs that only appear in certain environments. Docker eliminates the environment as a variable.
What's the difference between a Docker image and a container?
An image is the blueprint — a read-only template with everything needed to run your application. A container is a running instance of an image. The analogy: an image is like a class definition, a container is like an object instance. You can run multiple containers from the same image. Images are built once (or rebuilt when you change them); containers are ephemeral — they start, run, and stop. Data inside a container is lost when it stops unless you use volumes for persistence.
Do I need Kubernetes if I'm already using Docker?
Not usually, and not to start. Docker alone handles running single containers or small groups of containers with Docker Compose. Kubernetes adds orchestration for running many containers across multiple servers with automatic scaling, health checking, and rolling deployments. It's enormously complex. If you're deploying one or a few services on a single server or using a managed platform (Railway, Render, Fly.io), Docker is sufficient. Kubernetes is for when you have enough traffic and complexity that the overhead is worth it — which is not as soon as most tutorials imply.
What is Docker Compose and when do I need it?
Docker Compose lets you define and run multiple containers together with a single configuration file. Your app might need: a Node.js API, a PostgreSQL database, a Redis cache, and an Nginx reverse proxy. Docker Compose defines all four in docker-compose.yml and starts them all with docker-compose up. Without it, you'd have to start each container manually and configure networking between them. For local development, Docker Compose is the right tool. For production with multiple servers, Kubernetes or a similar orchestrator is more appropriate.


FreeToolKit Team


We build free browser-based tools and write practical guides without the fluff.

Tags:

docker, devops, containers, developer