Helm

Deploy Agentfield on Kubernetes using Helm charts

Helm Deployment

Values-driven Kubernetes install with full customization

Helm provides the most flexible way to deploy Agentfield on Kubernetes. Use values.yaml to customize storage, authentication, scaling, and demo agents.

For plain Kubernetes manifests without Helm, see the Kustomize guide.


Quick Start

Install with PostgreSQL and Demo Agent

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set postgres.enabled=true \
  --set controlPlane.storage.mode=postgres \
  --set demoPythonAgent.enabled=true
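
Optionally wait for the control plane to finish rolling out before port-forwarding. This is a minimal check and assumes the Deployment follows the same naming as the Service used below (agentfield-control-plane):

kubectl -n agentfield rollout status deployment/agentfield-control-plane --timeout=300s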

Port-Forward the UI/API

kubectl -n agentfield port-forward svc/agentfield-control-plane 8080:8080

Open http://localhost:8080/ui/

Wait for Demo Agent

The Python agent installs dependencies on first boot:

kubectl -n agentfield wait --for=condition=Ready pod -l app.kubernetes.io/component=demo-python-agent --timeout=600s
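
If the wait takes a while or times out, you can follow the dependency install using the same label selector:

kubectl -n agentfield logs -l app.kubernetes.io/component=demo-python-agent -f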

Execute an Agent

curl -X POST http://localhost:8080/api/v1/execute/demo-python-agent.hello \
  -H "Content-Type: application/json" \
  -d '{"input":{"name":"World"}}'

Storage Options

Local Storage (SQLite/BoltDB)

Best for development and single-node deployments:

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set controlPlane.storage.mode=local

PostgreSQL (Bundled or External)

Deploy with the bundled PostgreSQL StatefulSet:

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set postgres.enabled=true \
  --set controlPlane.storage.mode=postgres

Or use an external managed database (AWS RDS, Cloud SQL, etc.):

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set controlPlane.storage.mode=postgres \
  --set controlPlane.env.AGENTFIELD_POSTGRES_HOST=your-db.rds.amazonaws.com \
  --set controlPlane.env.AGENTFIELD_POSTGRES_USER=agentfield \
  --set controlPlane.env.AGENTFIELD_POSTGRES_PASSWORD=your-password \
  --set controlPlane.env.AGENTFIELD_POSTGRES_DB=agentfield
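
To keep the database password out of shell history and CI logs, one option is to supply it from an environment variable at install time. A minimal sketch using the same chart keys as above (the variable name is illustrative):

export DB_PASSWORD='your-password'   # e.g. read from your secret manager

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set controlPlane.storage.mode=postgres \
  --set controlPlane.env.AGENTFIELD_POSTGRES_HOST=your-db.rds.amazonaws.com \
  --set controlPlane.env.AGENTFIELD_POSTGRES_USER=agentfield \
  --set-string controlPlane.env.AGENTFIELD_POSTGRES_PASSWORD="$DB_PASSWORD" \
  --set controlPlane.env.AGENTFIELD_POSTGRES_DB=agentfield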

Demo Agents

Python Demo Agent (No Build Required)

Installs the SDK from PyPI at startup:

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set demoPythonAgent.enabled=true

Go Demo Agent (Requires Custom Image)

Build and load the image first (see Dockerfile.demo-go-agent):

# Build the image
docker build -t agentfield-demo-go-agent:local -f deployments/docker/Dockerfile.demo-go-agent .

# Load it into minikube...
minikube image load agentfield-demo-go-agent:local

# ...or into a kind cluster
kind load docker-image agentfield-demo-go-agent:local --name <cluster-name>
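
Optionally, confirm the image is visible to the cluster before enabling the agent. These checks assume a standard minikube install and kind's default node naming (<cluster-name>-control-plane):

# minikube
minikube image ls | grep agentfield-demo-go-agent

# kind (crictl runs inside the node container)
docker exec <cluster-name>-control-plane crictl images | grep agentfield-demo-go-agent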

Then enable in Helm:

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set demoAgent.enabled=true

API Authentication

Enable API key authentication for the control plane:

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set apiAuth.enabled=true \
  --set apiAuth.apiKey='your-secret-key'

When enabled, API calls must include the X-API-Key header (the UI remains accessible without a key):

curl -H "X-API-Key: your-secret-key" http://localhost:8080/api/v1/nodes

Common Configurations

Full Production Setup

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set postgres.enabled=true \
  --set controlPlane.storage.mode=postgres \
  --set controlPlane.replicas=3 \
  --set apiAuth.enabled=true \
  --set apiAuth.apiKey='production-secret-key'
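
For anything beyond a couple of flags, the same settings are easier to maintain in a values file. A sketch equivalent to the command above (the keys mirror the --set paths used throughout this guide):

cat > production-values.yaml <<'EOF'
postgres:
  enabled: true
controlPlane:
  storage:
    mode: postgres
  replicas: 3
apiAuth:
  enabled: true
  apiKey: production-secret-key
EOF

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  -f production-values.yaml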

Development with Local Storage

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set controlPlane.storage.mode=local \
  --set demoPythonAgent.enabled=true

Useful Commands

# Check release status
helm status agentfield -n agentfield

# View computed values
helm get values agentfield -n agentfield

# Upgrade with new values
helm upgrade agentfield deployments/helm/agentfield \
  -n agentfield \
  --reuse-values \
  --set controlPlane.replicas=5

# Uninstall
helm uninstall agentfield -n agentfield

Notes

  • The chart sets AGENTFIELD_CONFIG_FILE=/dev/null so the control plane uses built-in defaults plus environment variables
  • Admin gRPC listens on port 8180 (HTTP port + 100) and is exposed via the Service port named grpc
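
If you need to reach the admin gRPC API from your machine, you can port-forward it the same way as the HTTP port (assuming the default 8080/8180 split noted above):

kubectl -n agentfield port-forward svc/agentfield-control-plane 8180:8180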
