SDK Configuration
Configure AI, memory, and async execution behavior
The Agentfield Python SDK provides comprehensive configuration options for AI behavior, memory management, and async execution. All configuration classes support environment variable overrides and sensible defaults.
AIConfig
Configure LLM behavior, model selection, multimodal settings, and rate limiting.
from agentfield import Agent, AIConfig
ai_config = AIConfig(
model="gpt-4o",
temperature=0.7,
max_tokens=2000,
vision_model="dall-e-3",
audio_model="tts-1-hd",
timeout=60
)
app = Agent(node_id="my_agent", ai_config=ai_config)
AIConfig parameters fall into four groups: Core, Multimodal, Reliability, and LiteLLM Integration. The usage examples below exercise each group.
Usage Examples
from agentfield import AIConfig
# Simple configuration
config = AIConfig(
model="gpt-4o",
temperature=0.7,
max_tokens=2000
)
# Use with agent
app = Agent(node_id="my_agent", ai_config=config)
from agentfield import AIConfig
# Multimodal configuration
config = AIConfig(
model="gpt-4o",
vision_model="dall-e-3",
audio_model="tts-1-hd",
image_quality="high",
audio_format="mp3"
)
# Generate images
response = await app.ai_with_vision(
"A serene mountain landscape",
size="1792x1024",
quality="hd"
)
from agentfield import AIConfig
# Robust rate limit handling
config = AIConfig(
model="gpt-4o",
enable_rate_limit_retry=True,
rate_limit_max_retries=20,
rate_limit_base_delay=1.0,
rate_limit_max_delay=300.0,
retry_attempts=3
)
from agentfield import AIConfig
# Load from environment variables
# OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.
config = AIConfig.from_env()
# Override specific settings
config = AIConfig.from_env(
model="gpt-4o",
temperature=0.8
)
LiteLLM automatically detects API keys from environment variables
(OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.). You only need to set api_key if
using a non-standard configuration.
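The rate-limit settings shown earlier (rate_limit_base_delay, rate_limit_max_delay, rate_limit_max_retries) suggest a capped exponential backoff. A minimal sketch, assuming delays double per attempt; the SDK's exact formula may differ:

```python
def backoff_delays(base: float, cap: float, retries: int) -> list[float]:
    """Capped exponential backoff: base * 2**attempt, never exceeding cap."""
    return [min(base * (2 ** attempt), cap) for attempt in range(retries)]

# With base=1.0 and cap=300.0: 1, 2, 4, 8, ... then flat at 300 seconds
delays = backoff_delays(1.0, 300.0, 20)
```

The cap matters: without it, attempt 15 alone would wait over nine hours, so rate_limit_max_delay keeps worst-case waits bounded while still spreading retries out.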
MemoryConfig
Configure memory behavior including auto-injection, retention policies, and caching.
from agentfield import Agent, MemoryConfig
memory_config = MemoryConfig(
auto_inject=["user_context", "conversation_history"],
memory_retention="persistent",
cache_results=True
)
app = Agent(node_id="my_agent", memory_config=memory_config)
Parameters
The key parameters are auto_inject (memory keys injected into every AI call), memory_retention ("session" or "persistent"), and cache_results.
Usage Examples
from agentfield import MemoryConfig
# Session-scoped memory (cleared after session)
config = MemoryConfig(
memory_retention="session",
cache_results=True
)
app = Agent(node_id="session_agent", memory_config=config)
@app.reasoner
async def chat(message: str, user_id: str) -> str:
# Memory cleared when session ends
history = await app.memory.get(f"user_{user_id}_chat", default=[])
history.append(message)
await app.memory.set(f"user_{user_id}_chat", history)
return await app.ai(
system=f"Chat history: {history[-5:]}",
user=message
)
from agentfield import MemoryConfig
# Persistent memory (never cleared)
config = MemoryConfig(
memory_retention="persistent",
cache_results=True
)
app = Agent(node_id="persistent_agent", memory_config=config)
@app.reasoner
async def learn_preference(user_id: str, preference: dict) -> str:
# Persists across all sessions
prefs = await app.memory.get(f"user_{user_id}_prefs", default={})
prefs.update(preference)
await app.memory.set(f"user_{user_id}_prefs", prefs)
return f"Learned preference: {preference}"
from agentfield import MemoryConfig
# Auto-inject memory into AI calls
config = MemoryConfig(
auto_inject=["user_context", "conversation_history"],
memory_retention="persistent"
)
app = Agent(node_id="context_agent", memory_config=config)
# Memory automatically included in AI context
@app.reasoner
async def personalized_response(message: str) -> str:
# user_context and conversation_history
# automatically injected into AI call
return await app.ai(user=message)
AsyncConfig
Configure async execution behavior including polling strategies, timeouts, and resource limits.
from agentfield import Agent
from agentfield.async_config import AsyncConfig
async_config = AsyncConfig(
enable_async_execution=True,
max_execution_timeout=3600,
polling_timeout=20,
enable_batch_polling=True
)
app = Agent(node_id="my_agent", async_config=async_config)
AsyncConfig parameters are grouped into: Execution Control, Timeout Configuration, Polling Strategy, Resource Limits & Networking, Batch Processing, Caching, Memory Management, Retry & Backoff, Circuit Breaker, Logging & Monitoring, and Feature Flags & Streaming.
Usage Examples
from agentfield.async_config import AsyncConfig
# Enable async execution
config = AsyncConfig(
enable_async_execution=True,
max_execution_timeout=3600,
fallback_to_sync=True
)
app = Agent(node_id="async_agent", async_config=config)
# Long-running tasks automatically use async execution
result = await app.call(
"research_agent.deep_analysis",
topic="quantum computing"
)
from agentfield.async_config import AsyncConfig
# Aggressive polling for fast response
config = AsyncConfig(
initial_poll_interval=0.01, # 10ms
fast_poll_interval=0.05, # 50ms
medium_poll_interval=0.2, # 200ms
slow_poll_interval=1.0, # 1s
max_poll_interval=2.0 # 2s max
)
app = Agent(node_id="fast_agent", async_config=config)
from agentfield.async_config import AsyncConfig
# Load from environment variables
# BRAIN_ASYNC_MAX_EXECUTION_TIMEOUT=1800
# BRAIN_ASYNC_BATCH_SIZE=50
config = AsyncConfig.from_environment()
app = Agent(node_id="env_agent", async_config=config)
Environment Variables Reference
All configuration classes support environment variable overrides. This section provides a comprehensive reference of all available environment variables.
Agent Configuration
Docker Deployment Note
When running the Agentfield control plane in Docker with agents on the host machine, you must set:
export AGENT_CALLBACK_URL=http://host.docker.internal:8001
This is required because Docker containers cannot reach localhost on the host machine. See Docker Deployment Guide for details.
# LiteLLM automatically detects these
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
# Agent configuration
export AGENTFIELD_SERVER="http://localhost:8080"
export AGENT_CALLBACK_URL="http://host.docker.internal:8001" # For Docker
export AGENT_PORT="8001"
# AI configuration overrides (optional)
export BRAIN_AI_MODEL="gpt-4o"
export BRAIN_AI_TEMPERATURE="0.7"
export BRAIN_AI_MAX_TOKENS="2000"
# Async execution settings
export BRAIN_ASYNC_ENABLE_ASYNC_EXECUTION="true"
export BRAIN_ASYNC_MAX_EXECUTION_TIMEOUT="3600"
export BRAIN_ASYNC_POLLING_TIMEOUT="20"
export BRAIN_ASYNC_BATCH_SIZE="100"
export BRAIN_ASYNC_CONNECTION_POOL_SIZE="64"
Async Execution Performance Tuning
The Python SDK uses adaptive polling to efficiently track long-running agent executions. These settings control how aggressively the SDK polls the control plane for execution results.
Polling Intervals
The SDK starts with fast polling and gradually backs off for longer tasks:
| Environment Variable | Default | Description |
|---|---|---|
| AGENTFIELD_ASYNC_INITIAL_POLL_INTERVAL | 0.03 (30ms) | Initial poll interval for very fast tasks |
| AGENTFIELD_ASYNC_FAST_POLL_INTERVAL | 0.08 (80ms) | Fast polling for tasks under 1 second |
| AGENTFIELD_ASYNC_MEDIUM_POLL_INTERVAL | 0.4 (400ms) | Medium polling for tasks under 10 seconds |
| AGENTFIELD_ASYNC_SLOW_POLL_INTERVAL | 1.5 (1.5s) | Slow polling for long-running tasks |
| AGENTFIELD_ASYNC_MAX_POLL_INTERVAL | 4.0 (4s) | Maximum polling interval (prevents excessive delays) |
Adaptive Polling Strategy
The SDK automatically adjusts polling frequency based on execution duration:
- 0-1s: Fast polling (30-80ms intervals) for quick responses
- 1-10s: Medium polling (400ms intervals) for moderate tasks
- 10s+: Slow polling (1.5-4s intervals) for long-running tasks
This balances responsiveness with server load. Most users don't need to change these defaults.
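The tier boundaries above can be expressed as a simple interval selector. This is an illustrative approximation using the default intervals, not the SDK's actual implementation:

```python
def poll_interval(elapsed: float) -> float:
    """Choose a poll interval (seconds) from how long the execution has been running."""
    if elapsed < 1.0:
        return 0.08   # fast tier: 80ms for sub-second tasks
    if elapsed < 10.0:
        return 0.4    # medium tier: 400ms for moderate tasks
    return 1.5        # slow tier: 1.5s, bounded by the 4.0s maximum
```

The effect is that a task finishing in 200ms is noticed within tens of milliseconds, while a task running for minutes generates under one poll per second.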
Example: More aggressive polling for latency-sensitive applications
AGENTFIELD_ASYNC_INITIAL_POLL_INTERVAL=0.01 # 10ms initial
AGENTFIELD_ASYNC_FAST_POLL_INTERVAL=0.05 # 50ms fast
AGENTFIELD_ASYNC_MAX_POLL_INTERVAL=2.0 # 2s maximumTimeouts
| Environment Variable | Default | Description |
|---|---|---|
| AGENTFIELD_ASYNC_MAX_EXECUTION_TIMEOUT | 21600.0 (6 hours) | Absolute maximum timeout for any execution |
| AGENTFIELD_ASYNC_DEFAULT_EXECUTION_TIMEOUT | 7200.0 (2 hours) | Default timeout if not specified per-call |
| AGENTFIELD_ASYNC_POLLING_TIMEOUT | 20.0 (20s) | HTTP timeout for individual poll requests |
Example: Shorter timeouts for web applications
AGENTFIELD_ASYNC_DEFAULT_EXECUTION_TIMEOUT=300.0 # 5 minutes
AGENTFIELD_ASYNC_POLLING_TIMEOUT=10.0 # 10s per poll
Concurrency & Batching
| Environment Variable | Default | Description |
|---|---|---|
| AGENTFIELD_ASYNC_MAX_CONCURRENT_EXECUTIONS | 4096 | Maximum simultaneous executions to track |
| AGENTFIELD_ASYNC_MAX_ACTIVE_POLLS | 512 | Maximum concurrent polling operations |
| AGENTFIELD_ASYNC_CONNECTION_POOL_SIZE | 64 | HTTP connection pool size for control plane API |
| AGENTFIELD_ASYNC_BATCH_SIZE | 100 | Executions to check in single batch request |
High-Throughput Configuration
For agents handling thousands of concurrent requests:
AGENTFIELD_ASYNC_MAX_CONCURRENT_EXECUTIONS=8192
AGENTFIELD_ASYNC_MAX_ACTIVE_POLLS=1024
AGENTFIELD_ASYNC_CONNECTION_POOL_SIZE=128
AGENTFIELD_ASYNC_BATCH_SIZE=200
Monitor memory usage - each tracked execution consumes ~1-2KB.
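At the stated ~1-2KB per tracked execution, peak tracking overhead for the configuration above is straightforward to estimate (using the 2KB upper bound):

```python
max_executions = 8192                      # AGENTFIELD_ASYNC_MAX_CONCURRENT_EXECUTIONS
bytes_per_execution = 2 * 1024             # upper end of the ~1-2KB estimate
overhead_mb = max_executions * bytes_per_execution / (1024 * 1024)
print(f"peak tracking overhead: ~{overhead_mb:.0f} MB")  # ~16 MB
```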
Feature Flags
| Environment Variable | Default | Description |
|---|---|---|
| AGENTFIELD_ASYNC_ENABLE_ASYNC_EXECUTION | true | Master switch - disable to force synchronous mode |
| AGENTFIELD_ASYNC_ENABLE_BATCH_POLLING | true | Batch multiple status checks (more efficient) |
| AGENTFIELD_ASYNC_ENABLE_RESULT_CACHING | true | Cache completed results to reduce API calls |
| AGENTFIELD_ASYNC_FALLBACK_TO_SYNC | true | Auto-retry failed async calls as synchronous |
| AGENTFIELD_ASYNC_ENABLE_EVENT_STREAM | false | Use Server-Sent Events for real-time updates (experimental) |
Example: Debugging async issues (force synchronous mode)
AGENTFIELD_ASYNC_ENABLE_ASYNC_EXECUTION=false
See also: Environment Variables Reference for complete async configuration details.
Production Deployment Configuration
Critical Variables for Production
When deploying agents to production, ensure these environment variables are properly configured:
Essential:
- AGENTFIELD_SERVER - Control plane URL (e.g., https://agentfield.company.com)
- OPENAI_API_KEY / ANTHROPIC_API_KEY - AI provider credentials
- AGENT_CALLBACK_URL - Required for containers (e.g., http://agent-service:8000)
- PORT - Agent server port (default 8000, auto-detected by most platforms)
Performance:
- UVICORN_WORKERS - Number of worker processes (set to 2× CPU cores + 1)
- AGENTFIELD_ASYNC_MAX_CONCURRENT_EXECUTIONS - Concurrent execution limit
- AGENTFIELD_ASYNC_CONNECTION_POOL_SIZE - Connection pool size
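The UVICORN_WORKERS heuristic can be computed at deploy time. A minimal sketch; note that os.cpu_count() may return None in restricted environments, hence the fallback:

```python
import os

# Heuristic from above: 2 x CPU cores + 1 worker processes
cores = os.cpu_count() or 1
workers = 2 * cores + 1
print(f"UVICORN_WORKERS={workers}")
```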
Logging:
- AGENTFIELD_LOG_LEVEL - Set to INFO or WARNING (avoid DEBUG in production)
- AGENTFIELD_LOG_TRUNCATE - Log message truncation length
Example: Production Configuration
# Agent connectivity
AGENTFIELD_SERVER=https://agentfield.company.com
AGENT_CALLBACK_URL=http://my-agent.internal:8000
PORT=8000
# AI providers
OPENAI_API_KEY=sk-proj-...
# Performance
UVICORN_WORKERS=4
AGENTFIELD_ASYNC_MAX_CONCURRENT_EXECUTIONS=2048
AGENTFIELD_ASYNC_CONNECTION_POOL_SIZE=128
# Logging
AGENTFIELD_LOG_LEVEL=INFO
AGENTFIELD_LOG_TRUNCATE=500
See Production Best Practices for comprehensive deployment guidance.
Logging Configuration
Control SDK logging verbosity and output format:
| Environment Variable | Default | Description |
|---|---|---|
| AGENTFIELD_LOG_LEVEL | WARNING | Log level: DEBUG, INFO, WARNING, ERROR, SILENT |
| AGENTFIELD_LOG_TRUNCATE | 200 | Maximum log message length (characters) |
| AGENTFIELD_LOG_PAYLOADS | false | Log full request/response payloads (very verbose) |
| AGENTFIELD_LOG_TRACKING | false | Log execution tracking details |
| AGENTFIELD_LOG_FIRE | false | Log fire-and-forget workflow operations |
Example: Verbose debugging
AGENTFIELD_LOG_LEVEL=DEBUG
AGENTFIELD_LOG_PAYLOADS=true
AGENTFIELD_LOG_TRACKING=true
Example: Production logging (quiet)
AGENTFIELD_LOG_LEVEL=INFO
AGENTFIELD_LOG_TRUNCATE=500
Complete Example
from agentfield import Agent, AIConfig, MemoryConfig
from agentfield.async_config import AsyncConfig
# Production configuration
app = Agent(
node_id="production_agent",
agentfield_server="https://agentfield.company.com",
version="2.0.0",
ai_config=AIConfig(
model="gpt-4o",
temperature=0.7,
max_tokens=2000,
timeout=60,
retry_attempts=3,
enable_rate_limit_retry=True
),
memory_config=MemoryConfig(
auto_inject=["user_context", "session_data"],
memory_retention="persistent",
cache_results=True
),
async_config=AsyncConfig(
enable_async_execution=True,
max_execution_timeout=3600,
polling_timeout=20,
enable_batch_polling=True
),
dev_mode=False
)
Related
- Agent Class - Agent initialization and lifecycle
- Python SDK Overview - Getting started guide
- Environment Variables Reference - Complete environment variable reference
- Docker Deployment - Container deployment with networking
- Production Best Practices - Production deployment patterns
- app.ai() - AI interface using AIConfig
- app.memory - Memory interface using MemoryConfig
- Async Execution - Long-running task patterns