# Docker Deployment

Docker provides consistent, reproducible deployments across any environment.
## Prerequisites

- Docker installed
- Docker Compose (included with Docker Desktop)
- Discord bot token
## Quick Start

1. Clone the repository:

   ```sh
   git clone https://github.com/KevinTrinh1227/discord-forum-api.git
   cd discord-forum-api
   ```

2. Create the environment file:

   ```sh
   cp .env.example .env
   ```

3. Edit `.env` with your credentials.

4. Start with Docker Compose:

   ```sh
   docker-compose up -d
   ```
## Docker Compose Configuration

Create `docker-compose.yml` in the project root:

```yaml
version: '3.8'

services:
  bot:
    build:
      context: .
      dockerfile: docker/Dockerfile.bot
    env_file: .env
    environment:
      - NODE_ENV=production
    volumes:
      - ./data:/app/data
    restart: unless-stopped
    depends_on:
      - api
    networks:
      - forum-network

  api:
    build:
      context: .
      dockerfile: docker/Dockerfile.api
    env_file: .env
    environment:
      - NODE_ENV=production
      - API_PORT=3000
    ports:
      - "3000:3000"
    volumes:
      - ./data:/app/data
    restart: unless-stopped
    networks:
      - forum-network

networks:
  forum-network:
    driver: bridge
```

Both services share the `./data` bind mount so the bot and API read the same SQLite file. (A named volume variant is shown under [Persistent Data] below.)

## Dockerfiles
### Bot Dockerfile

Create `docker/Dockerfile.bot`:

```dockerfile
# Build stage
FROM node:20-alpine AS builder

WORKDIR /app

# Enable pnpm
RUN corepack enable pnpm

# Copy package manifests
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY packages/db/package.json packages/db/
COPY packages/bot/package.json packages/bot/

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy source code
COPY packages/db packages/db
COPY packages/bot packages/bot
COPY tsconfig.json ./

# Build
RUN pnpm --filter @discolink/db build
RUN pnpm --filter @discolink/bot build

# Production stage
FROM node:20-alpine

WORKDIR /app

RUN corepack enable pnpm

# Copy built files
COPY --from=builder /app/package.json /app/pnpm-lock.yaml /app/pnpm-workspace.yaml ./
COPY --from=builder /app/packages/db/package.json packages/db/
COPY --from=builder /app/packages/db/dist packages/db/dist
COPY --from=builder /app/packages/bot/package.json packages/bot/
COPY --from=builder /app/packages/bot/dist packages/bot/dist

# Install production dependencies only
RUN pnpm install --frozen-lockfile --prod

# Create data directory
RUN mkdir -p /app/data

CMD ["node", "packages/bot/dist/index.js"]
```

### API Dockerfile
Create `docker/Dockerfile.api`:

```dockerfile
# Build stage
FROM node:20-alpine AS builder

WORKDIR /app

RUN corepack enable pnpm

COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY packages/db/package.json packages/db/
COPY packages/api/package.json packages/api/

RUN pnpm install --frozen-lockfile

COPY packages/db packages/db
COPY packages/api packages/api
COPY tsconfig.json ./

RUN pnpm --filter @discolink/db build
RUN pnpm --filter @discolink/api build

# Production stage
FROM node:20-alpine

WORKDIR /app

RUN corepack enable pnpm

COPY --from=builder /app/package.json /app/pnpm-lock.yaml /app/pnpm-workspace.yaml ./
COPY --from=builder /app/packages/db/package.json packages/db/
COPY --from=builder /app/packages/db/dist packages/db/dist
COPY --from=builder /app/packages/api/package.json packages/api/
COPY --from=builder /app/packages/api/dist packages/api/dist

RUN pnpm install --frozen-lockfile --prod

RUN mkdir -p /app/data

EXPOSE 3000

CMD ["node", "packages/api/dist/index.js"]
```

## Environment File
Create `.env`:

```
# Discord
DISCORD_TOKEN=your_bot_token
DISCORD_CLIENT_ID=your_client_id
DISCORD_CLIENT_SECRET=your_client_secret

# Database (SQLite for Docker)
DATABASE_TYPE=sqlite
DATABASE_PATH=/app/data/discord-forum.db

# API
API_PORT=3000
CORS_ORIGIN=https://yourdomain.com
NODE_ENV=production
```

## Commands
### Build and Start

```sh
# Build images
docker-compose build

# Start services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop services
docker-compose down
```

### Individual Services

```sh
# Start only the bot
docker-compose up -d bot

# Rebuild a specific service
docker-compose build api
docker-compose up -d api

# View logs for a specific service
docker-compose logs -f bot
```

### Shell Access

```sh
# Access the bot container
docker-compose exec bot sh

# Access the API container
docker-compose exec api sh
```

## With Reverse Proxy
### Nginx Configuration

Create `docker/nginx.conf`:

```nginx
upstream api {
    server api:3000;
}

server {
    listen 80;
    server_name api.yourdomain.com;

    location / {
        proxy_pass http://api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
```

### Docker Compose with Nginx
```yaml
version: '3.8'

services:
  bot:
    # ... same as before

  api:
    # ... same as before, but replace ports with expose
    expose:
      - "3000"

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./docker/nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/conf:/etc/letsencrypt:ro
    depends_on:
      - api
    networks:
      - forum-network

  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
```

## Health Checks
Add a health check to `docker-compose.yml`. Note that `node:20-alpine` ships BusyBox `wget` but not `curl`, so use `wget` for the probe:

```yaml
services:
  api:
    # ... other config
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```

## Production Optimizations
### Multi-stage Build

The Dockerfiles above use multi-stage builds to:

- Keep final images small
- Exclude dev dependencies
- Separate build tools from runtime
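A `.dockerignore` complements the multi-stage builds by keeping the build context small and secrets out of images. The entries below are a sketch for this repo's layout; adjust to what your tree actually contains:

```
.git
node_modules
**/node_modules
**/dist
data
.env
```

Excluding `.env` is worth doing even though the images read it at runtime via `env_file`: it prevents credentials from ever being baked into a layer.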
Security
services: api: # Run as non-root user user: "1000:1000" # Read-only filesystem (except data) read_only: true tmpfs: - /tmp # Limit capabilities cap_drop: - ALLResource Limits
```yaml
services:
  bot:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M

  api:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
```

## Persistent Data
### SQLite Volume

```yaml
volumes:
  forum-data:
    driver: local

services:
  bot:
    volumes:
      - forum-data:/app/data
  api:
    volumes:
      - forum-data:/app/data
```

### Backup
```sh
# Back up the database from inside the container
docker-compose exec bot cp /app/data/discord-forum.db /app/data/backup-$(date +%Y%m%d).db

# Or copy it out to the host
docker cp $(docker-compose ps -q bot):/app/data/discord-forum.db ./backup.db
```

## Using with Turso
For Turso instead of SQLite:

```
DATABASE_TYPE=turso
TURSO_DATABASE_URL=libsql://your-db.turso.io
TURSO_AUTH_TOKEN=your_token
```

Remove the volumes for `/app/data`, since Turso is remote.
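With the data directory gone, the service definitions shrink accordingly. A pared-down `bot` entry might look like this (a sketch; the `api` service changes the same way):

```yaml
services:
  bot:
    build:
      context: .
      dockerfile: docker/Dockerfile.bot
    env_file: .env
    environment:
      - NODE_ENV=production
    restart: unless-stopped
    networks:
      - forum-network
```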
## CI/CD Integration

### GitHub Actions

Pushing to GHCR requires authenticating first, so the workflow needs a login step and `packages: write` permission:

```yaml
name: Docker Build

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: docker/Dockerfile.api
          push: true
          tags: ghcr.io/${{ github.repository }}/api:latest
```

## Troubleshooting
### Container Won’t Start

```sh
# Check logs
docker-compose logs bot

# Rebuild from scratch to check whether the build succeeds
docker-compose build --no-cache bot
```

### Database Locked
Multiple containers writing to the same SQLite file can cause lock errors:

- Ensure only one bot instance is running
- Consider Turso for multi-container setups
- Enable WAL mode on the SQLite database
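Write-ahead logging lets a single writer coexist with concurrent readers, which reduces lock contention. A minimal sketch, assuming the `sqlite3` CLI is installed on the host and `./data` is the bind-mounted data directory from `docker-compose.yml`:

```shell
# Switch the journal mode; sqlite3 prints "wal" on success.
# WAL is persistent, so this only needs to run once per database file.
sqlite3 ./data/discord-forum.db 'PRAGMA journal_mode=WAL;'
```

Run it while the containers are stopped so no other connection holds the file during the switch.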
### Out of Memory

```sh
# Check container memory usage
docker stats

# If usage sits at the cap, increase limits in docker-compose.yml
```

### Network Issues
```sh
# Check the network
docker network ls
docker network inspect discord-forum-api_forum-network

# Verify services can communicate
docker-compose exec api ping bot
```