Complete reference for all configuration options in preview-deployer.

Repository Configuration (preview-config.yml)

Place this file in your repository root to configure preview deployments. The orchestrator reads it after cloning the repo; any values you set here override defaults and auto-detection (e.g. framework is taken from this file when present, otherwise detected from the repo).

Required Fields

# Framework selection (overrides auto-detection when set)
framework: nestjs # Options: nestjs, go, laravel

# Database type
database: postgres # Options: postgres, mysql, mongodb

# Health check endpoint path (used when waiting for the app to be ready)
health_check_path: /health # Path to health check endpoint

Optional Fields

# Commands run on the host in repo root before docker compose up (e.g. copy .env).
# Non-zero exit fails the deployment. Use for file setup, not for npm install (that stays in Dockerfile).
build_commands:
  - cp .env.example .env
  - mkdir -p uploads

# Extra infra: list of known template names. App is wired automatically (e.g. REDIS_URL for redis).
extra_services:
  - redis # Adds Redis service; app gets REDIS_URL=redis://redis:6379 (BullMQ, cache, etc.)

# Environment variables (injected at runtime into the app container; keep .env in .dockerignore)
env:
  - NODE_ENV=preview
  - DEBUG=true
  - API_KEY=value

# Optional: Env file(s) relative to repo root (e.g. .env from build_commands). Loaded by Compose at runtime.
# Use with build_commands: [cp .env.example .env] so the file exists before docker compose up.
# env_file: .env
# env_file:
#   - .env
#   - .env.preview

# Commands run inside the app container before the main process (migrations, seeding, etc.).
# Runs in order; non-zero exit fails the container. Then the app starts as usual.
startup_commands:
  - npm run migration:run
  - npm run seed
# Or: npx prisma migrate deploy && npx prisma db seed

# Custom Dockerfile path (relative to repo root)
dockerfile: ./Dockerfile
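Putting just the required fields together, a minimal preview-config.yml for a NestJS + Postgres app looks like:

```yaml
framework: nestjs
database: postgres
health_check_path: /health
```

Optional fields can be added incrementally as your preview needs grow.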

Repo-owned preview Compose (docker-compose.preview.yml)

You can provide your own Docker Compose file used only for previews by placing docker-compose.preview.yml or docker-compose.preview.yaml in your repository root (exact names only; no fuzzy matching). When present, the orchestrator uses it instead of generating one from the framework templates (the same idea as providing your own Dockerfile).
  • File names: docker-compose.preview.yml or docker-compose.preview.yaml (in repo root). Only these two exact filenames are accepted. If you use .yaml, the orchestrator renames it to .yml so one standard path is used everywhere.
  • When used: If either file exists after clone/checkout, the orchestrator parses it, injects host port mappings (see Ports), and writes docker-compose.preview.generated.yml in the deployment directory. That generated file is used for docker compose up/down. Otherwise the orchestrator generates docker-compose.preview.yml from templates.
  • Ports: Do not specify host ports for the app or db services in your compose file. The orchestrator injects them at runtime so each preview gets unique ports and nginx can route correctly. Use service names app and db. Container ports are inferred from framework (NestJS 3000, Go 8080, Laravel 8000) and database type (Postgres 5432, MySQL 3306, MongoDB 27017). If you omit ports for app/db, we add them; if you had host ports, we override them.
  • Project name: The orchestrator always runs with -p <deploymentId> (e.g. myorg-myapp-12). Do not rely on a fixed project name in your file.
  • Rebuild/cleanup: Update and cleanup use the same generated file: docker-compose.preview.generated.yml when you provide repo compose, otherwise docker-compose.preview.yml (orchestrator-generated).
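Following the rules above, a minimal repo-owned compose file might look like this sketch (the Postgres image tag and credentials are illustrative choices, not requirements; note the service names app and db and the absence of host port mappings):

```yaml
services:
  app:
    build: .
    depends_on:
      - db
    # No ports: entry here — the orchestrator injects a unique host port mapping at deploy time.
  db:
    image: postgres:16-alpine # example image; pick what your app needs
    environment:
      POSTGRES_USER: preview
      POSTGRES_PASSWORD: preview
```

Anything else (volumes, extra services, healthchecks) is yours to define; only the app/db naming and port conventions are imposed.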

CLI Configuration (~/.preview-deployer/config.yml)

Generated by preview init. Sensitive values are stored in OS keychain.
digitalocean:
  token: keychain # Stored in OS keychain
  region: nyc3
  droplet_size: s-2vcpu-4gb

github:
  token: keychain # Stored in OS keychain
  webhook_secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  repositories:
    - owner/repo-name

orchestrator:
  cleanup_ttl_days: 7
  max_concurrent_previews: 10

Environment Variables

Orchestrator Service

Set in Ansible or systemd service file:
# GitHub Configuration
GITHUB_TOKEN=ghp_xxxxxxxxxxxx
GITHUB_WEBHOOK_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ALLOWED_REPOS=owner/repo1,owner/repo2

# Preview Configuration
PREVIEW_BASE_URL=http://YOUR_SERVER_IP
CLEANUP_TTL_DAYS=7
MAX_CONCURRENT_PREVIEWS=10

# Server Configuration
ORCHESTRATOR_PORT=3000
NODE_ENV=production

# Deployment Paths
DEPLOYMENTS_DIR=/opt/preview-deployments
NGINX_CONFIG_DIR=/etc/nginx/preview-configs
DEPLOYMENTS_DB=/opt/preview-deployer/deployments.json

# Logging
LOG_LEVEL=info  # Options: debug, info, warn, error

Terraform Variables

Set in terraform/terraform.tfvars:
do_token       = "your-digital-ocean-api-token"
ssh_public_key = "ssh-rsa AAAA..."
region         = "nyc3"  # Options: nyc1, nyc3, sfo3, ams3, etc.
droplet_size   = "s-2vcpu-4gb"  # Options: s-1vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb
project_name   = "preview-deployer"

Available Regions

  • nyc1, nyc3: New York
  • sfo3: San Francisco
  • ams3: Amsterdam
  • sgp1: Singapore
  • lon1: London
  • fra1: Frankfurt
  • tor1: Toronto
  • blr1: Bangalore

Available Droplet Sizes

  • s-1vcpu-2gb: $12/month
  • s-2vcpu-4gb: $24/month (default)
  • s-4vcpu-8gb: $48/month
  • s-8vcpu-16gb: $96/month

Ansible Variables

Set via -e flag or in playbook:
deployment_user: preview-deployer
orchestrator_dir: /opt/preview-deployer
orchestrator_port: 3000
github_token: ghp_xxxxxxxxxxxx
github_webhook_secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
allowed_repos: owner/repo1,owner/repo2
server_ip: 1.2.3.4
preview_base_url: http://1.2.3.4
cleanup_ttl_days: 7
max_concurrent_previews: 10

Optional SSL (Let’s Encrypt)

When you have a domain pointing at the server, set these so the nginx role obtains a certificate and serves HTTPS:
preview_domain: preview.example.com # FQDN that resolves to the droplet
ssl_email: [email protected] # Used for Let's Encrypt agree-tos
Then set preview_base_url (and orchestrator env) to https://preview.example.com so PR comments get HTTPS preview links. Certbot runs via the nginx role (webroot); HTTP is redirected to HTTPS and ACME challenges are served on port 80 for renewal.

Docker Compose Templates

Templates are located in orchestrator/templates/:
  • docker-compose.nestjs.yml.hbs: NestJS application template
  • docker-compose.go.yml.hbs: Go application template

Template Variables

  • {{prNumber}}: PR number
  • {{appPort}}: Allocated app port
  • {{dbPort}}: Allocated database port

Container Resources

Default limits (configurable in templates):
deploy:
  resources:
    limits:
      cpus: '0.5'
      memory: 512M
    reservations:
      cpus: '0.25'
      memory: 256M

Nginx Configuration

The nginx role installs nginx and a default server block (port 80, or 80+443 when SSL is enabled). Optional SSL is handled inside the same role: when preview_domain and ssl_email are set, it installs certbot, obtains a certificate (webroot), and re-deploys nginx with listen 443 and an HTTP→HTTPS redirect.
  • Preview configs are generated in /etc/nginx/preview-configs/. That directory is owned by the deployment user (e.g. preview-deployer) so the orchestrator can create and remove config files without root.
  • After writing a config, the orchestrator runs nginx -t and nginx -s reload via sudo. Ansible deploys a sudoers fragment at /etc/sudoers.d/preview-deployer-nginx so the deployment user can run only those two nginx commands without a password.
  • The orchestrator systemd unit sets NoNewPrivileges=false so sudo can be used for this limited reload, and ReadWritePaths includes /var/log/nginx and /run so the nginx child process can write its error log and pid file when testing/reloading.
A generated preview config looks like:
location /{PROJECT_SLUG}/pr-{PR_NUMBER}/ {
    proxy_pass http://localhost:{APP_PORT}/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_cache_bypass $http_upgrade;
    proxy_read_timeout 300s;
    proxy_connect_timeout 75s;
}
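The sudoers fragment deployed by Ansible might look like the following sketch (the nginx binary path and username are assumptions; match them to your install):

```
# /etc/sudoers.d/preview-deployer-nginx
# Allow only config test and reload, nothing else, without a password.
preview-deployer ALL=(root) NOPASSWD: /usr/sbin/nginx -t, /usr/sbin/nginx -s reload
```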

Health Check Configuration

Your application must expose a health check endpoint:

NestJS Example

import { Controller, Get } from '@nestjs/common';

@Controller()
export class AppController {
  @Get('health')
  health() {
    return { status: 'ok' };
  }
}

Go Example

package main

import "net/http"

func healthHandler(w http.ResponseWriter, r *http.Request) {
    w.WriteHeader(http.StatusOK)
    w.Write([]byte(`{"status":"ok"}`))
}
The orchestrator will poll this endpoint until it returns 200 OK.
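The readiness wait can be pictured as the following sketch (an illustration of the polling behavior, not the orchestrator's actual code; timeout and interval values are assumptions):

```typescript
// Poll a health endpoint until it returns HTTP 200 or a deadline passes.
// Uses the global fetch available in Node 18+.
async function waitForHealthy(
  url: string,
  timeoutMs = 120_000,
  intervalMs = 2_000,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url);
      if (res.status === 200) return; // app is ready
    } catch {
      // app not accepting connections yet; keep waiting
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Health check at ${url} did not return 200 within ${timeoutMs} ms`);
}
```

Deployments whose health check never succeeds are reported as failed rather than left half-up.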

Framework detection and Dockerfiles

The orchestrator detects the app framework (NestJS, Go, or Laravel) from the cloned repo and uses the matching docker-compose template. If the repo has no Dockerfile, the orchestrator injects a default one for that framework from orchestrator/templates/ (e.g. Dockerfile.nestjs, Dockerfile.go, Dockerfile.laravel). Repos can override by providing their own Dockerfile at the repo root. Detection order: NestJS (nest-cli.json or @nestjs/core in package.json) → Go (go.mod) → Laravel (laravel/framework in composer.json). If none match, NestJS is assumed.
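The detection order above can be sketched as follows (a simplified illustration, not the orchestrator's actual code):

```typescript
import * as fs from "fs";
import * as path from "path";

type Framework = "nestjs" | "go" | "laravel";

// Detection order: NestJS markers, then go.mod, then Laravel; NestJS as fallback.
function detectFramework(repoDir: string): Framework {
  const has = (f: string) => fs.existsSync(path.join(repoDir, f));
  const readJson = (f: string) => {
    try {
      return JSON.parse(fs.readFileSync(path.join(repoDir, f), "utf8"));
    } catch {
      return undefined; // missing or unparseable file
    }
  };

  const pkg = readJson("package.json");
  if (has("nest-cli.json") || pkg?.dependencies?.["@nestjs/core"]) return "nestjs";
  if (has("go.mod")) return "go";
  const composer = readJson("composer.json");
  if (composer?.require?.["laravel/framework"]) return "laravel";
  return "nestjs"; // nothing matched: assume NestJS
}
```

Setting framework in preview-config.yml skips this detection entirely.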

Custom Dockerfiles

If your repository has a custom Dockerfile, ensure it:
  1. Exposes the correct port (3000 NestJS, 8080 Go, 8000 Laravel)
  2. Runs an app that serves the endpoint configured in health_check_path, so the orchestrator's readiness polling succeeds
  3. Runs as non-root user (recommended)
  4. Handles SIGTERM gracefully

Example Dockerfile (NestJS)

FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

# The build step needs devDependencies; drop them afterwards to slim the image
RUN npm prune --omit=dev

EXPOSE 3000

USER node

CMD ["node", "dist/main.js"]

Database Configuration

PostgreSQL (Default)

database: postgres
Connection string format:
postgresql://preview:preview@db:5432/pr_{PR_NUMBER}

MySQL

database: mysql
Connection string format:
mysql://preview:preview@db:3306/pr_{PR_NUMBER}

MongoDB

database: mongodb
Connection string format:
mongodb://preview:preview@db:27017/pr_{PR_NUMBER}

GitHub Webhook Configuration

Webhooks are automatically created by the CLI. Manual configuration:
  1. Go to repository Settings > Webhooks
  2. Add webhook:
    • Payload URL: http://YOUR_SERVER_IP/webhook/github
    • Content type: application/json
    • Secret: From ~/.preview-deployer/config.yml
    • Events: Select “Pull requests”
    • Active: Checked
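Incoming deliveries are authenticated with the webhook secret: GitHub signs each payload with HMAC SHA-256 and sends the result in the X-Hub-Signature-256 header. A verification sketch (illustrative; the orchestrator's actual handler may differ):

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Verify a GitHub webhook delivery: recompute the HMAC SHA-256 of the raw
// payload with the shared secret and compare it, in constant time, against
// the "sha256=<hex>" value GitHub sent in X-Hub-Signature-256.
function verifySignature(secret: string, payload: string, signatureHeader: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(payload).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Requests that fail verification should be rejected before any deployment work starts.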

Logging Configuration

Orchestrator Logs

Location: /opt/preview-deployer/logs/
  • orchestrator.log: Application logs
  • orchestrator-error.log: Error logs
View logs:
ssh root@YOUR_SERVER_IP
journalctl -u preview-deployer-orchestrator -f

Docker Logs

View container logs:
docker logs {projectSlug}-pr-{PR_NUMBER}-app
docker logs {projectSlug}-pr-{PR_NUMBER}-db

Nginx Logs

Location: /var/log/nginx/
  • access.log: Access logs
  • error.log: Error logs

Troubleshooting Configuration

See Troubleshooting Guide for common configuration issues.