Configuration

FOVEA services are configured through environment variables and configuration files. This page covers common configuration scenarios and best practices.

Configuration Methods

Environment Variables

Environment variables control service behavior and connections. Set them in:

  1. .env file in the project root (Docker Compose reads it automatically)
  2. docker-compose.yml service definitions (see the example below)
  3. Shell environment before running services
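
For example, a variable set directly in a service definition (method 2) looks like the following; the backend service name matches the Compose snippets later on this page, and the value shown is the development default:

backend:
  environment:
    DATABASE_URL: postgresql://fovea:fovea@postgres:5432/fovea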

See the Environment Variables Reference for the complete list of variables.

Configuration Files

Some services use configuration files for detailed settings:

Service          File                                 Purpose
Model Service    model-service/config/models.yaml     Model selection and parameters
Backend          server/prisma/schema.prisma          Database schema
OTEL Collector   otel-collector-config.yaml           Telemetry configuration
Prometheus       prometheus.yml                       Metrics scrape config

Quick Configuration

Basic Setup

For default configuration, no changes are needed:

docker compose up -d

Custom Configuration

Create .env file in project root:

cp .env.example .env

Edit .env with your settings. Key variables:

# Database
POSTGRES_PASSWORD=your_secure_password

# GPU Configuration (GPU mode only)
CUDA_VISIBLE_DEVICES=0,1

# Security
GF_SECURITY_ADMIN_PASSWORD=grafana_password

Backend Configuration

Database Connection

Set database connection in .env:

DATABASE_URL=postgresql://user:password@host:port/database

Development (Docker Compose default):

DATABASE_URL=postgresql://fovea:fovea@postgres:5432/fovea

Production (external database):

DATABASE_URL=postgresql://fovea_user:secure_pass@db.example.com:5432/fovea_prod

Redis Configuration

REDIS_URL=redis://redis:6379

For Redis with password:

REDIS_URL=redis://:password@redis:6379
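
To verify connectivity from the Compose network, ping Redis directly (assuming the service is named redis, as in the URL above; add -a password if one is set):

docker compose exec redis redis-cli ping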

CORS Configuration

Set allowed origins for API access:

CORS_ORIGIN=http://localhost:3000

Multiple origins (comma-separated):

CORS_ORIGIN=http://localhost:3000,https://fovea.example.com

Production: Use specific domains, not wildcards.
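
To check that an origin is allowed, send a request with an Origin header and look for Access-Control-Allow-Origin in the response. The /health path below is only a placeholder; substitute any backend route:

curl -s -D - -o /dev/null -H "Origin: http://localhost:3000" http://localhost:3001/health | grep -i access-control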

Model Service Configuration

models.yaml File

Configure AI models in model-service/config/models.yaml:

models:
  video_summarization:
    selected: "llama-4-maverick"
    options:
      max_tokens: 1024
      temperature: 0.7

  ontology_augmentation:
    selected: "llama-4-scout"
    options:
      max_tokens: 512

  object_detection:
    selected: "yolo-world-v2"
    options:
      confidence_threshold: 0.7

  video_tracking:
    selected: "samurai"
    options:
      tracking_mode: "default"

Available Models

Video Summarization (VLM):

  • llama-4-maverick (default)
  • qwen2-vl-7b-instruct

Ontology Augmentation (LLM):

  • llama-4-scout (default)
  • Custom models from HuggingFace

Object Detection:

  • yolo-world-v2 (default)
  • grounding-dino

Video Tracking:

  • samurai (default)
  • bytetrack
  • bot-sort
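
For example, to switch object detection from the default to grounding-dino, change the selected field in models.yaml:

models:
  object_detection:
    selected: "grounding-dino"

Then restart the model service so the new selection is picked up (this assumes the config is read at startup; the service name matches the validation commands below):

docker compose restart model-service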

GPU Memory Configuration

Control CUDA memory allocation in .env:

# Limit memory fragmentation
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

# Additional options
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512,roundup_power2_divisions:16
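
If the variable is forwarded to the GPU container in docker-compose.yml, you can confirm it took effect with:

docker compose exec model-service-gpu printenv PYTORCH_CUDA_ALLOC_CONF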

Device Selection

CPU mode:

DEVICE=cpu

GPU mode (single GPU):

DEVICE=cuda
CUDA_VISIBLE_DEVICES=0

Multi-GPU:

DEVICE=cuda
CUDA_VISIBLE_DEVICES=0,1,2,3
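
How the model service consumes DEVICE internally is not shown here; the sketch below is an assumption about typical PyTorch usage. Note that CUDA_VISIBLE_DEVICES re-indexes the visible GPUs, so with CUDA_VISIBLE_DEVICES=0,1 they appear inside the process as cuda:0 and cuda:1:

import os

import torch

# DEVICE is "cpu" or "cuda", as set in .env (assumed consumption pattern)
device = torch.device(os.environ.get("DEVICE", "cpu"))

if device.type == "cuda":
    # With CUDA_VISIBLE_DEVICES=0,1 only two GPUs are visible,
    # re-indexed as cuda:0 and cuda:1 regardless of physical IDs
    print("Visible GPUs:", torch.cuda.device_count())
else:
    print("Running in CPU mode")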

Frontend Configuration

API Endpoint

Set the backend URL in .env or annotation-tool/.env:

Note: with npm run dev (local development) the frontend is served on port 5173; in the Docker deployment it is served on port 3000.

VITE_API_URL=http://localhost:3001
VITE_VIDEO_BASE_URL=http://localhost:3001/videos

Production (behind reverse proxy):

VITE_API_URL=https://api.fovea.example.com
VITE_VIDEO_BASE_URL=https://api.fovea.example.com/videos

Build-Time Variables

Frontend environment variables are embedded at build time. After changing variables, rebuild:

docker compose build frontend
docker compose up -d frontend

Observability Configuration

OpenTelemetry

Configure OTEL endpoint in backend:

OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318

Disable telemetry for development:

OTEL_SDK_DISABLED=true
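
To confirm the backend is exporting to the collector, watch the collector logs (assuming the Compose service is named otel-collector, as in the endpoint above; what gets logged depends on otel-collector-config.yaml):

docker compose logs -f otel-collector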

Prometheus

Edit prometheus.yml to adjust scrape intervals:

scrape_configs:
  - job_name: 'otel-collector'
    scrape_interval: 15s  # Default: 15s
    static_configs:
      - targets: ['otel-collector:8889']
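
Prometheus does not reload prometheus.yml automatically; restart the container after editing (assuming the Compose service is named prometheus):

docker compose restart prometheus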

Grafana

Change admin credentials in .env:

GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=secure_password_here

Disable anonymous access:

GF_AUTH_ANONYMOUS_ENABLED=false

Production Configuration

Security Checklist

  1. Change default passwords:
POSTGRES_PASSWORD=strong_random_password
GF_SECURITY_ADMIN_PASSWORD=another_strong_password
  2. Restrict CORS:
CORS_ORIGIN=https://fovea.example.com
  3. Use HTTPS with a reverse proxy (nginx, Traefik); see the sketch after this list

  4. Set NODE_ENV to production:

NODE_ENV=production
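
A minimal nginx sketch for item 3, terminating TLS and proxying to the Docker deployment. The host names, certificate paths, and upstream ports (3001 for the backend API, 3000 for the frontend) are assumptions; adjust them to your environment:

# Backend API, matching VITE_API_URL=https://api.fovea.example.com
server {
    listen 443 ssl;
    server_name api.fovea.example.com;

    ssl_certificate     /etc/ssl/certs/fovea.crt;
    ssl_certificate_key /etc/ssl/private/fovea.key;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

# Frontend, matching CORS_ORIGIN=https://fovea.example.com
server {
    listen 443 ssl;
    server_name fovea.example.com;

    ssl_certificate     /etc/ssl/certs/fovea.crt;
    ssl_certificate_key /etc/ssl/private/fovea.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }
}

With this split, CORS_ORIGIN on the backend matches the frontend host, as in item 2.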

Resource Limits

Add resource limits in docker-compose.yml:

model-service:
  deploy:
    resources:
      limits:
        cpus: '4'
        memory: 8G
      reservations:
        memory: 4G

Database Connection Pooling

Configure connection pool in Prisma (backend):

DATABASE_URL=postgresql://user:pass@host:5432/db?connection_limit=10&pool_timeout=20

Common Configuration Scenarios

Development with Hot Reload

Backend hot reload:

backend:
  volumes:
    - ./server/src:/app/src
  command: npm run dev

Frontend hot reload (already enabled by default in dev):

cd annotation-tool
npm run dev

Using External PostgreSQL

Stop the built-in PostgreSQL service and point the backend at an external database:

# Comment out postgres service in docker-compose.yml
# postgres:
#   image: postgres:16
#   ...

Update backend connection:

DATABASE_URL=postgresql://user:pass@external-db.example.com:5432/fovea

Custom Model Path

Mount custom models directory:

model-service:
  volumes:
    - ./custom-models:/app/models
  environment:
    MODEL_CONFIG_PATH: /app/models/custom-models.yaml

Multiple GPU Configuration

Assign specific GPUs to service:

CUDA_VISIBLE_DEVICES=0,1  # Use first two GPUs

Or in docker-compose.yml:

model-service-gpu:
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            device_ids: ['0', '1']
            capabilities: [gpu]

Configuration Validation

Check Current Configuration

View environment variables in running container:

docker compose exec backend env
docker compose exec model-service env

Test Database Connection

docker compose exec backend npx prisma db execute --stdin <<< "SELECT 1"

Verify Model Configuration

docker compose exec model-service python -c "
import yaml
with open('/app/config/models.yaml') as f:
    config = yaml.safe_load(f)
print(config)
"

Check GPU Configuration

docker compose exec model-service-gpu python -c "
import torch
print(f'CUDA available: {torch.cuda.is_available()}')
print(f'Device count: {torch.cuda.device_count()}')
"

Troubleshooting Configuration

Configuration Not Applied

If changes do not take effect:

  1. Restart services:
docker compose restart backend
  2. Rebuild if needed (build-time variables):
docker compose up -d --build frontend
  3. Clear cached state by recreating the containers:
docker compose down
docker compose up -d

Invalid YAML Syntax

Check models.yaml syntax:

# Install yamllint if needed
yamllint model-service/config/models.yaml
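
If yamllint is not available, PyYAML (already used by the model service) catches basic syntax errors:

python -c "import yaml; yaml.safe_load(open('model-service/config/models.yaml'))"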

Environment Variable Not Found

Check that the variable is set:

# In container
docker compose exec backend printenv | grep DATABASE_URL

# Or
docker compose config

Next Steps