Docker Containerization for Python Flask Apps: From Development to Production
A practical guide to containerizing Python Flask applications with Docker, covering multi-stage builds, environment management, and Railway deployment.
Why Docker for Flask?
Python environments are notoriously fragile. Different Python versions, conflicting package versions, and OS-specific dependencies make "it works on my machine" a constant problem. Docker eliminates this by packaging your app with its exact dependencies.
Basic Dockerfile
Start with a simple, production-ready Dockerfile:
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first (cached layer)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Create non-root user
RUN useradd -m appuser && chown -R appuser:appuser /app
USER appuser
EXPOSE 5000
CMD ["gunicorn", "-w", "2", "-b", "0.0.0.0:5000", "app:app"]
Why python:3.11-slim?
- slim is ~120MB vs. ~900MB for the full image
- Includes enough for most Python packages
- Missing build tools can be added when needed
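When a dependency ships only as source (for example, packages with C extensions and no prebuilt wheel), the toolchain can be added in the install layer. A minimal sketch, assuming the apt-based python:3.11-slim image:

```dockerfile
# Install build tools only when a package must compile C extensions;
# clean the apt cache in the same layer so it doesn't bloat the image
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
```

If you need this, prefer the multi-stage approach below so the toolchain never reaches the final image.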
Multi-Stage Build for AI/ML Apps
For Pixel Prep (AI image processing), the build stage includes model downloads and compilation, while the final image stays lean:
# Stage 1: Build
FROM python:3.11-slim AS builder
WORKDIR /app
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt
# Stage 2: Runtime
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
RUN useradd -m appuser && chown -R appuser:appuser /app
USER appuser
EXPOSE 5000
CMD ["gunicorn", "-w", "2", "-b", "0.0.0.0:5000", "app:app"]
This approach keeps build-only dependencies out of the final image.
Docker Compose for Local Development
version: '3.8'

services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    environment:
      - FLASK_ENV=development
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/stocksync
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=stocksync
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
Hot Reloading in Development
Mount your source code as a volume and use Flask's debug mode:
services:
  web:
    volumes:
      - .:/app
    command: flask run --host=0.0.0.0 --debug
    environment:
      - FLASK_APP=app.py
      - FLASK_DEBUG=1
Environment Variable Management
Never bake secrets into your Docker image. Use environment variables:
import os

class Config:
    DATABASE_URL = os.environ.get('DATABASE_URL', 'sqlite:///dev.db')
    SECRET_KEY = os.environ.get('SECRET_KEY', 'dev-key-change-me')
    SHOPIFY_TOKEN = os.environ.get('SHOPIFY_TOKEN')

    @property
    def is_production(self):
        return os.environ.get('RAILWAY_ENVIRONMENT') == 'production'
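The fallback behavior is easy to verify in isolation. A minimal standalone sketch of the same pattern (load_database_url is a hypothetical helper for illustration, not part of the app):

```python
import os

def load_database_url():
    # Mirrors the Config pattern: a container-provided value wins,
    # otherwise the local dev default applies
    return os.environ.get('DATABASE_URL', 'sqlite:///dev.db')

os.environ.pop('DATABASE_URL', None)   # simulate a bare dev machine
print(load_database_url())             # sqlite:///dev.db

os.environ['DATABASE_URL'] = 'postgresql://postgres:postgres@db:5432/stocksync'
print(load_database_url())             # the injected value
```

In the container, `docker run -e DATABASE_URL=...` (or the Compose `environment:` block) supplies the real value.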
APScheduler in Docker
The stock sync dashboard uses APScheduler for periodic scraping. In Docker, the scheduler runs in the same process as the Flask app:
import os

from apscheduler.schedulers.background import BackgroundScheduler
from flask import Flask

def create_app():
    app = Flask(__name__)
    # Gunicorn workers would each start a duplicate scheduler,
    # so gate it behind an env var and enable it in only one place
    if os.environ.get('SCHEDULER_ENABLED', 'true') == 'true':
        scheduler = BackgroundScheduler()
        scheduler.add_job(
            run_stock_sync,
            'interval',
            hours=4,
            id='stock_sync'
        )
        scheduler.start()
    return app
Important: With Gunicorn using 2+ workers, the scheduler runs in each worker. Either use 1 worker for the scheduler service or use a proper task queue like Celery.
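One way to guarantee a single scheduler across multiple workers is an exclusive file lock: whichever worker acquires it first starts the scheduler, the rest skip it. A sketch using Python's standard library (Unix-only fcntl; the lock path and function name are assumptions for illustration):

```python
import fcntl
import os

def try_acquire_scheduler_lock(path='/tmp/scheduler.lock'):
    """Return True in exactly one process; others get False and skip the scheduler."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    try:
        # Non-blocking exclusive lock: only one open file description can hold it
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True   # intentionally leak fd so the lock is held for the process lifetime
    except BlockingIOError:
        os.close(fd)
        return False
```

In create_app(), guard scheduler.start() with this check instead of (or in addition to) the env flag. For anything beyond light periodic jobs, a real task queue like Celery remains the better answer.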
Health Checks
Note that python:3.11-slim doesn't ship curl, so probe with Python's standard library instead:
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:5000/health')" || exit 1
from flask import jsonify
from sqlalchemy import text

@app.route('/health')
def health():
    try:
        db.session.execute(text('SELECT 1'))
        return jsonify({'status': 'healthy'}), 200
    except Exception as e:
        return jsonify({'status': 'unhealthy', 'error': str(e)}), 503
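The decision logic can be exercised without Flask or a live database by factoring it into a plain function (health_check and db_probe are hypothetical names for illustration, not from the app):

```python
def health_check(db_probe):
    # db_probe is any callable that raises when the database is unreachable
    try:
        db_probe()
        return {'status': 'healthy'}, 200
    except Exception as e:
        return {'status': 'unhealthy', 'error': str(e)}, 503

print(health_check(lambda: None))      # ({'status': 'healthy'}, 200)

def broken_db():
    raise RuntimeError('connection refused')

print(health_check(broken_db))         # status 'unhealthy', code 503
```

The route then just wraps this with jsonify, which keeps the health logic unit-testable.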
Optimizing Image Size
Every megabyte matters for deployment speed:
| Optimization | Size Reduction |
|-------------|----------------|
| python:slim vs python | ~780MB |
| --no-cache-dir in pip | ~50-200MB |
| Multi-stage build | ~200MB+ |
| .dockerignore | ~10-50MB |
Essential .dockerignore
.git
.env
__pycache__
*.pyc
venv/
.pytest_cache
node_modules
*.md
.github
tests/
Deploying to Railway
Railway detects Dockerfiles automatically. Your deployment flow:
- Push to GitHub
- Railway builds the Docker image
- Image is deployed with environment variables from Railway dashboard
- Health check confirms the service is running
No Dockerfile changes needed for Railway — it handles port mapping, HTTPS termination, and container orchestration.
Common Issues
- gunicorn not found — ensure it's in requirements.txt, not just installed locally
- Permission denied — use a non-root user in the Dockerfile
- Port mismatch — bind to 0.0.0.0:$PORT for Railway, not localhost
- Large image size — inspect layer sizes with docker history
- Slow builds — order Dockerfile commands from least to most frequently changed
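The PORT binding can be handled with a small fallback so the same image works locally and on Railway. A sketch (gunicorn_bind and the 5000 default are assumptions for illustration):

```python
import os

def gunicorn_bind():
    # Railway injects PORT at runtime; fall back to 5000 for local runs
    port = int(os.environ.get('PORT', '5000'))
    return f'0.0.0.0:{port}'

os.environ.pop('PORT', None)   # simulate a local run
print(gunicorn_bind())         # 0.0.0.0:5000
```

This could live in a gunicorn.conf.py as `bind = gunicorn_bind()`; alternatively, a shell-form CMD (`CMD gunicorn -b 0.0.0.0:$PORT app:app`) lets the shell expand $PORT directly.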
Conclusion
Docker turns Python's environment chaos into reproducible, deployable containers. For Flask applications — whether simple APIs or AI-powered processing tools — a well-structured Dockerfile and Docker Compose setup eliminates deployment surprises and makes local development consistent across the team.
Related Project
Supplier Stock Sync Dashboard — Automated inventory policy sync between three supplier stock feeds and Shopify, with a real-time web dashboard for manual triggers and live progress monitoring.