Docker Compose Cloud Hosting: Deploy Multi-Container Apps Without Kubernetes
Docker Compose deployment on managed infrastructure — databases, queues, and apps in one stack.
Docker Cloud Hosting: How to Run Any Application Stack in Production
Docker has become the standard unit of application deployment. If your application runs in a Docker container locally, deploying it to a Docker-native cloud platform should take minutes — not a weekend of infrastructure work. Here's how that works in practice, and what to look for in a platform that actually delivers on the promise.
Why Docker-First Deployment Changes Everything
Traditional cloud hosting platforms were built around specific languages and runtimes. You deployed a Python app, or a PHP app, or a Node.js app — and the platform had opinions about how each should be structured. Dockerfile-based deployment eliminates that abstraction:
If it runs in Docker, it runs. The platform doesn't need to understand your framework, your language, your dependency manager, or your startup sequence. The Dockerfile is the complete specification of how to build and run your application.
This has significant practical consequences:
Any language, any framework: Rust API servers, Go microservices, Elixir Phoenix apps, .NET Core applications — none of these have first-class support on most PaaS platforms. They all work perfectly as Docker containers.
Exact parity between dev and production: Your local development environment and production run the same container image. The "works on my machine" problem is structurally eliminated — if your container works locally, it works in production.
Custom system dependencies: Applications that require specific system libraries (ImageMagick for image processing, FFmpeg for video, GDAL for geospatial work, LibreOffice for document conversion) can't be deployed on platforms that don't give you OS-level control. With Docker, you install exactly what you need.
Complex multi-process applications: A single container can run multiple processes via a process supervisor. An application with a web server, a background worker, and a cron daemon can ship as a single deployable unit.
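As an illustration, a minimal supervisord configuration along these lines can run a web server and a worker in one container (program names, commands, and paths here are purely illustrative, not from a real project):

```ini
[supervisord]
nodaemon=true                ; keep supervisord in the foreground as PID 1

[program:web]
command=gunicorn myapp.wsgi:application --bind 0.0.0.0:8000
stdout_logfile=/dev/stdout   ; forward program output to the container's stdout
stdout_logfile_maxbytes=0

[program:worker]
command=python manage.py runworker
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
```

The container's CMD then launches supervisord, which starts and supervises both programs.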
Writing a Production-Ready Dockerfile
The Dockerfile you use for local development is usually not suitable for production. Production Dockerfiles optimize for image size, build reproducibility, security, and runtime performance.
Multi-Stage Builds
Multi-stage builds separate the build environment from the runtime environment. The result is a smaller production image that doesn't include build tools, development dependencies, or compilation artifacts.
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
# Copy dependency files first (better layer caching)
COPY package.json package-lock.json ./
RUN npm ci
# Copy source and build
COPY . .
RUN npm run build
# Stage 2: Production runtime
FROM node:20-alpine AS runner
WORKDIR /app
# Create non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy only what the runtime needs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]
The final image contains only the Node.js runtime and compiled application code — no TypeScript compiler, no source files, no development dependencies.
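Because `COPY . .` pulls in the whole build context, a `.dockerignore` file is worth adding alongside the Dockerfile to keep local artifacts and secrets out of the image. A typical starting point for a Node project (entries are illustrative):

```
node_modules
.next
.git
.env
*.log
```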
Python Example
FROM python:3.12-slim AS base
# Install system dependencies
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY . .
# Create non-root user
RUN useradd --no-create-home --shell /bin/false appuser
USER appuser
EXPOSE 8000
CMD ["gunicorn", "myapp.wsgi:application", "--bind", "0.0.0.0:8000", "--workers", "4"]
Go Example
Go compiles to a single static binary, which means the production image can be extremely minimal:
# Build stage
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o main ./cmd/server
# Production stage — just the binary
FROM scratch
COPY --from=builder /app/main /main
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8080
CMD ["/main"]
This produces a production image of under 20MB containing nothing but the compiled binary and SSL certificates. There's no shell, no OS, no attack surface beyond the application itself.
Environment Variables and Secrets
Docker containers receive configuration through environment variables. This separation of configuration from code is a core tenet of the twelve-factor app methodology and the foundation of sane secrets handling.
For secrets management:
Platform-managed secrets: The best cloud platforms let you set environment variables through a secure UI; the values are injected into containers at runtime and never appear in your Dockerfile or codebase.
Never COPY .env in production: Baking secrets into image layers is a security risk — anyone with access to the image can extract them. Environment variables injected at runtime can be rotated without rebuilding the image.
Build ARGs vs ENV: Use ARG for values needed only during the build process (like access tokens for private packages) and ENV for values needed at runtime. Note that ARG values used in RUN commands can still surface via docker history; for genuinely sensitive tokens, BuildKit secret mounts (RUN --mount=type=secret) are the safer option.
# Build-time token for private package access
ARG NPM_TOKEN
RUN npm config set //registry.npmjs.org/:_authToken=$NPM_TOKEN \
    && npm ci \
    && npm config delete //registry.npmjs.org/:_authToken
# Caveat: the ARG value may still be visible in `docker history`;
# BuildKit secret mounts avoid leaving any trace in the image.
Health Checks
Docker supports native health checks that platforms use to determine container readiness:
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
Parameters:
- --interval=30s: Check every 30 seconds
- --timeout=10s: A check that takes more than 10 seconds counts as a failure
- --start-period=15s: Don't count failures during the first 15 seconds (app startup grace period)
- --retries=3: Mark unhealthy after 3 consecutive failures
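One caveat: curl isn't installed in many slim or distroless images. For a Python-based image, the check can lean on the standard library instead (a sketch; port 8000 matches the Python Dockerfile above):

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health', timeout=5)" || exit 1
```

urlopen raises on any non-2xx response or connection failure, so the command exits nonzero exactly when the endpoint is unhealthy.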
A proper health endpoint does more than return 200:
// Go example
func healthHandler(w http.ResponseWriter, r *http.Request) {
    // Check database connectivity
    if err := db.Ping(); err != nil {
        w.WriteHeader(http.StatusServiceUnavailable)
        json.NewEncoder(w).Encode(map[string]string{
            "status": "degraded",
            "reason": "database unreachable",
        })
        return
    }
    w.WriteHeader(http.StatusOK)
    json.NewEncoder(w).Encode(map[string]string{"status": "healthy"})
}
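The same logic in Python, factored as a plain function so it's easy to test (here db_ping is a hypothetical callable that raises when the database is unreachable; wire it to your framework's route handler):

```python
import json

def health_response(db_ping):
    """Return (status_code, body) for a /health endpoint.

    db_ping is any callable that raises an exception when the
    database is unreachable (e.g. a connection-ping wrapper).
    """
    try:
        db_ping()
    except Exception as exc:
        return 503, json.dumps({"status": "degraded", "reason": str(exc)})
    return 200, json.dumps({"status": "healthy"})

# Healthy database:
print(health_response(lambda: None))  # (200, '{"status": "healthy"}')
```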
Docker Compose for Multi-Service Applications
Most production applications aren't a single container. A typical stack might include a web application, a background worker, a database, and a cache. Docker Compose defines and connects these services:
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgresql://user:password@db:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped

  worker:
    build: .
    command: ["python", "manage.py", "runworker"]
    environment:
      DATABASE_URL: postgresql://user:password@db:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      - db
      - redis
    restart: unless-stopped

  db:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: unless-stopped

volumes:
  db_data:
Cloud platforms that support Docker Compose deployment let you define your entire stack in a single file and deploy it as a unit. Services communicate over the internal Docker network using service names as hostnames — db:5432, redis:6379. Nothing is exposed externally except the ports you explicitly publish.
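The hostname in those connection strings is just the compose service name, which Docker's embedded DNS resolves to the container's internal IP. Parsing the injected URL makes that visible (a small standard-library illustration; the fallback value is only there so the snippet runs standalone):

```python
import os
from urllib.parse import urlsplit

# DATABASE_URL as injected by the compose file above; a default is
# provided here only so the snippet runs outside a container.
url = urlsplit(os.environ.get("DATABASE_URL",
                              "postgresql://user:password@db:5432/myapp"))

print(url.hostname)  # db   (the compose service name, not a public host)
print(url.port)      # 5432 (under the default URL above)
```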
Persistent Storage
Container filesystems are ephemeral — data written to the container filesystem is lost when the container restarts. Applications that need to persist data have two options:
Volumes: Named volumes managed by Docker (or the cloud platform) that persist independently of the container. Mount them at specific paths:
VOLUME ["/app/uploads"]
Or in Docker Compose:
volumes:
  - uploads_data:/app/uploads
Volumes are the correct solution for database data, upload directories, and any stateful application data.
External object storage: For file uploads specifically, storing in S3-compatible object storage (Cloudflare R2, Backblaze B2, MinIO) is often better than volumes for horizontally scaled applications. Multiple container instances can all access the same storage, which is impossible with local volumes.
Container Security Best Practices
Run as non-root: The default Docker behavior runs processes as root inside the container. This is a security risk — if an attacker escapes the container, they're root. Always create a dedicated user:
RUN useradd --no-create-home --shell /bin/false appuser
USER appuser
Read-only filesystem: If your application only writes to specific directories, make the rest of the filesystem read-only:
docker run --read-only --tmpfs /tmp myapp
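The same hardening can be declared in Docker Compose (a fragment; it assumes /tmp is the only path the application writes to):

```yaml
services:
  app:
    read_only: true   # root filesystem mounted read-only
    tmpfs:
      - /tmp          # in-memory scratch space for the app
```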
Minimal base images: Prefer alpine or slim variants. Fewer packages means fewer vulnerabilities. Use distroless images for Go or Java applications that compile to static binaries.
Scan images for vulnerabilities: Tools like Trivy scan Docker images for known CVEs in OS packages and application dependencies. Run scans in your CI pipeline before pushing images to production.
Logs and Observability
Write all logs to stdout and stderr. Avoid writing to log files inside the container:
# Python — logging to stdout
import logging
import sys
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(name)s %(message)s'
)
Cloud platforms capture stdout/stderr from containers and make them available through their log viewing interface. Writing to /var/log/myapp.log inside a container means those logs are lost on restart and invisible to the platform's log aggregation.
Structured logging (JSON format) makes logs easier to query:
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            'timestamp': self.formatTime(record),
            'level': record.levelname,
            'message': record.getMessage(),
            'logger': record.name,
        })

# Attach the formatter to a stdout handler so each log line is one JSON object
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)
What a Good Docker-Native Platform Provides
When evaluating cloud platforms for Docker deployment:
Image registry: Push your built image to a registry, and the platform pulls it for deployment. Or let the platform build from a Dockerfile in your repository — your CI/CD pipeline triggers the build.
Zero-downtime deploys: New container starts and passes health checks before old container stops. Traffic switches atomically. Zero dropped requests.
Rollback: If a new deployment starts failing health checks, the platform should revert to the previous container automatically or provide a one-click rollback.
Resource limits: Specify CPU cores and memory per container. Prevents a misbehaving container from consuming all host resources.
Private networking: Containers communicate over internal network without traffic leaving the host. Databases aren't exposed to the internet.
Persistent volumes: Platform-managed volumes that survive container restarts, with backup support.
SSH/exec access: The ability to run commands inside a running container is essential for debugging. docker exec -it container_id /bin/sh should be available through the platform's UI or CLI.
The Docker ecosystem has made application packaging a solved problem. The remaining variable is where you run those containers. Platforms that treat Docker as a first-class deployment primitive — not an afterthought — give you the full benefit of containerization without rebuilding your application around platform-specific abstractions.
Deploy Your App with Git Push
Automatic builds, environment variables, live logs, rollback, and custom domains. No server management required.
Deploy Free — No Card Required