# Docker Deployment
Deploy Obot using Docker for local development, testing, and proof-of-concept scenarios.
## Overview
Docker deployment is the fastest way to get Obot running. It's ideal for:
- Local development and testing
- Single-machine deployments
- Proof-of-concept and evaluation
- Small team usage
For production deployments, see Kubernetes Deployment.
## Prerequisites
- Docker installed
- 2+ CPU cores and 4GB RAM available
- 10GB disk space
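Before starting, it can help to sanity-check the Docker environment. A minimal sketch using standard Docker CLI commands (the disk-usage check is optional and only reports space already used by Docker, not free space):

```bash
# Confirm the Docker CLI is installed and the daemon is reachable
docker version

# Optional: see how much disk existing images, containers, and volumes use
docker system df
```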
## Quick Start

### Basic Deployment (Built-in PostgreSQL)
Run Obot with the built-in PostgreSQL instance (suitable for development and testing):
```bash
docker run -d \
  --name obot \
  -v obot-data:/data \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 8080:8080 \
  -e OPENAI_API_KEY=your-openai-key \
  ghcr.io/obot-platform/obot:latest
```
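After the container starts, it is worth confirming that it stayed up and that the server finished booting. A quick check, assuming the container name `obot` used above:

```bash
# The container should report an "Up ..." status
docker ps --filter name=obot

# Follow the startup logs; press Ctrl-C to stop following
docker logs -f obot
```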
### With Authentication
```bash
docker run -d \
  --name obot \
  -v obot-data:/data \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 8080:8080 \
  -e OPENAI_API_KEY=your-openai-key \
  -e OBOT_SERVER_ENABLE_AUTHENTICATION=true \
  -e OBOT_BOOTSTRAP_TOKEN=your-bootstrap-token \
  ghcr.io/obot-platform/obot:latest
```
### Using a Custom Port
If you want to expose Obot on a different port (e.g., `-p 9999:8080`), you must also set `OBOT_SERVER_HOSTNAME` so that authentication redirects and MCP server connection URLs work correctly:
```bash
docker run -d \
  --name obot \
  -v obot-data:/data \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 9999:8080 \
  -e OPENAI_API_KEY=your-openai-key \
  -e OBOT_SERVER_HOSTNAME=localhost:9999 \
  ghcr.io/obot-platform/obot:latest
```
## Accessing Obot
Once started, access Obot at `http://localhost:8080` (or the custom port you mapped).
If you enabled authentication, use your bootstrap token to log in as the owner and set up an authentication provider. If you didn't supply a bootstrap token, a random one will be generated and can be found in the container's logs by searching for "Bootstrap token".
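If you didn't set `OBOT_BOOTSTRAP_TOKEN`, the generated token can be pulled out of the logs with a case-insensitive grep. A sketch, assuming the container name `obot`; the exact log line format may vary between releases:

```bash
# Print any log lines mentioning the bootstrap token
docker logs obot 2>&1 | grep -i 'bootstrap token'
```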
## Workflow Sharing Storage
Workflow sharing is available through Obot's Nanobot integration and stores published workflows separately from workspace files.
For local Docker deployments:
- The default local published-workflow store is usually sufficient
- Keep the Obot data volume mounted if you want published workflows to survive container replacement
If you want published workflows in Docker to use external object storage instead of local disk, configure the published workflow storage environment variables described in Server Configuration and Workflow Sharing.
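As a concrete example of why the data volume matters: replacing the container (for instance, to upgrade the image) can reuse the same `obot-data` named volume, so locally stored data survives. A sketch, assuming the basic deployment shown above:

```bash
# Pull the newer image, remove only the container, and start a fresh one.
# The obot-data named volume is NOT deleted by `docker rm`.
docker pull ghcr.io/obot-platform/obot:latest
docker stop obot && docker rm obot
docker run -d \
  --name obot \
  -v obot-data:/data \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 8080:8080 \
  -e OPENAI_API_KEY=your-openai-key \
  ghcr.io/obot-platform/obot:latest
```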
## Next Steps
- **Configure Authentication**: Set up auth providers for secure access
- **Configure Model Providers**: Connect OpenAI, Anthropic, and other model providers
- **Set Up MCP Servers**: Configure MCP servers for extended functionality