Python SDK Overview
Build intelligent AI agents with the Agentfield Python SDK
The Agentfield Python SDK enables you to build intelligent AI agents that can reason, communicate across agent networks, and maintain persistent memory. Built on FastAPI, it provides a decorator-based API for defining AI-powered functions (reasoners) and deterministic business logic (skills).
Installation
Install the Agentfield SDK via pip:
```bash
pip install agentfield
```

Quick Start
Create your first agent in under 5 minutes:
```python
from agentfield import Agent
from pydantic import BaseModel

# Initialize agent
app = Agent(
    node_id="support_agent",
    agentfield_server="http://localhost:8080"
)

# Define an AI-powered reasoner
class TicketAnalysis(BaseModel):
    category: str
    priority: str
    suggested_action: str

@app.reasoner
async def analyze_ticket(ticket_text: str) -> TicketAnalysis:
    """Analyze support ticket and suggest resolution."""
    return await app.ai(
        system="You are a support ticket analyzer.",
        user=f"Analyze this ticket: {ticket_text}",
        schema=TicketAnalysis
    )

# Define a deterministic skill
@app.skill()
def format_response(category: str, priority: str) -> str:
    """Format analysis into user-friendly response."""
    return f"Category: {category}\nPriority: {priority}"

# Start the agent server
if __name__ == "__main__":
    app.serve(port=8001)
```

The agent automatically registers with the Agentfield server, creates REST API endpoints, and enables cross-agent communication.
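The return annotation doubles as the output contract: app.ai parses the model's response into the Pydantic schema, so malformed outputs fail loudly instead of propagating to callers. This standalone sketch of that validation behavior uses plain Pydantic with illustrative sample values, no server required:

```python
from pydantic import BaseModel, ValidationError

class TicketAnalysis(BaseModel):
    category: str
    priority: str
    suggested_action: str

# A well-formed result parses into a typed object
analysis = TicketAnalysis.model_validate({
    "category": "billing",
    "priority": "high",
    "suggested_action": "Escalate to the billing team",
})

# A malformed result (missing required fields) is rejected up front
try:
    TicketAnalysis.model_validate({"category": "billing"})
except ValidationError as exc:
    missing = [e["loc"][0] for e in exc.errors()]

print(analysis.priority)  # high
```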
Core Concepts
Agent
The Agent class is the foundation of your AI agent. It inherits from FastAPI and provides:
- Automatic REST API generation
- Agentfield server integration
- Workflow tracking and DAG building
- Memory management
- Cross-agent communication
```python
from agentfield import Agent, AIConfig, MemoryConfig

app = Agent(
    node_id="my_agent",
    agentfield_server="http://localhost:8080",
    ai_config=AIConfig(model="gpt-4o", temperature=0.7),
    memory_config=MemoryConfig(
        auto_inject=["user_context"],
        memory_retention="persistent",
        cache_results=True
    )
)
```

Reasoners
Reasoners are AI-powered functions decorated with @app.reasoner. They use LLMs to process inputs and generate intelligent outputs:
```python
from pydantic import BaseModel

class SentimentResult(BaseModel):
    sentiment: str
    confidence: float
    reasoning: str

@app.reasoner
async def analyze_sentiment(text: str) -> SentimentResult:
    """Analyze text sentiment with structured output."""
    return await app.ai(
        system="You are a sentiment analyzer.",
        user=text,
        schema=SentimentResult
    )
```

Key Features:
- Automatic workflow tracking
- Pydantic validation
- REST API endpoints
- Execution context propagation
Skills
Skills are deterministic functions decorated with @app.skill(). They handle business logic, integrations, and data processing:
```python
@app.skill(tags=["database", "user"])
def get_user_profile(user_id: int) -> dict:
    """Retrieve user profile from database."""
    user = database.query(User).filter_by(id=user_id).first()
    return {
        "id": user.id,
        "name": user.name,
        "email": user.email
    }
```

Key Features:
- Type-safe function signatures
- Automatic API generation
- Tagging for organization
- No AI overhead
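Because skills carry no AI overhead, the logic stays an ordinary, deterministic Python function. This standalone sketch repeats the formatting logic from the Quick Start (without the decorator) to show it can be exercised and unit-tested directly:

```python
def format_response(category: str, priority: str) -> str:
    """Format analysis into a user-friendly response (same logic as the skill)."""
    return f"Category: {category}\nPriority: {priority}"

# Deterministic: the same inputs always produce the same output
result = format_response("billing", "high")
print(result)
# Category: billing
# Priority: high
```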
Cross-Agent Communication
Use app.call() to invoke reasoners and skills on other agents:
```python
@app.reasoner
async def comprehensive_analysis(ticket_text: str) -> dict:
    """Multi-agent ticket analysis."""
    # Call sentiment analyzer on a different agent
    sentiment = await app.call(
        "sentiment_agent.analyze_sentiment",
        text=ticket_text
    )
    # Call priority classifier
    priority = await app.call(
        "priority_agent.classify_priority",
        ticket_text=ticket_text
    )
    return {
        "sentiment": sentiment,
        "priority": priority,
        "ticket": ticket_text
    }
```

All cross-agent calls automatically build workflow DAGs showing the complete execution flow.
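The two calls above are independent, so they can also be dispatched concurrently. This standalone sketch stubs app.call with local coroutines (an actual run requires a live Agentfield server and the two target agents) to show the asyncio.gather fan-out pattern:

```python
import asyncio

# Stand-ins for app.call(...) — local coroutines with hard-coded illustrative results
async def call_sentiment(text: str) -> dict:
    return {"sentiment": "negative", "confidence": 0.92}

async def call_priority(text: str) -> dict:
    return {"priority": "high"}

async def comprehensive_analysis(ticket_text: str) -> dict:
    # Both calls run concurrently: total latency is roughly the slower call, not the sum
    sentiment, priority = await asyncio.gather(
        call_sentiment(ticket_text),
        call_priority(ticket_text),
    )
    return {"sentiment": sentiment, "priority": priority, "ticket": ticket_text}

result = asyncio.run(comprehensive_analysis("My invoice is wrong again!"))
print(result["priority"])  # {'priority': 'high'}
```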
Memory System
Access persistent and session-based storage with app.memory:
```python
@app.reasoner
async def personalized_response(user_id: str, message: str) -> str:
    """Generate response with user context."""
    # Get user preferences from memory
    preferences = await app.memory.get(f"user_{user_id}_preferences", default={})
    # Generate personalized response
    response = await app.ai(
        system=f"User preferences: {preferences}",
        user=message
    )
    # Update conversation history
    history = await app.memory.get(f"user_{user_id}_history", default=[])
    history.append({"message": message, "response": response})
    await app.memory.set(f"user_{user_id}_history", history)
    return response
```

Development Workflow
```python
# Enable development mode for detailed logging
app = Agent(
    node_id="dev_agent",
    agentfield_server="http://localhost:8080",
    dev_mode=True  # Enhanced logging and debugging
)

# Start in development mode
if __name__ == "__main__":
    app.serve(
        port=8001,
        dev=True  # Enhanced logging and error reporting
    )
```

```python
# Production configuration
app = Agent(
    node_id="prod_agent",
    agentfield_server="https://agentfield.company.com",
    ai_config=AIConfig(
        model="gpt-4o",
        max_tokens=2000,
        timeout=30
    ),
    async_config=AsyncConfig(
        enable_async_execution=True,
        max_execution_timeout=3600
    )
)

# Start production server
if __name__ == "__main__":
    app.serve(
        port=8080,
        host="0.0.0.0"
    )
```

```python
import json

# AWS Lambda / Cloud Functions
app = Agent(
    node_id="serverless_agent",
    auto_register=False  # Manual registration
)

def adapter(event: dict) -> dict:
    """Normalize a provider-specific event into the shape the SDK expects."""
    body = event.get("body")
    if isinstance(body, str):
        try:
            body = json.loads(body)
        except json.JSONDecodeError:
            body = {}
    return {
        "path": event.get("rawPath") or event.get("path") or "/execute",
        "headers": event.get("headers") or {},
        "target": event.get("target") or event.get("reasoner"),
        "input": (body or {}).get("input") or body or event.get("input", {}),
        "executionContext": event.get("executionContext") or event.get("execution_context"),
    }
```
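The adapter is plain Python, so its normalization can be checked locally without any cloud infrastructure. This standalone sketch reproduces the adapter logic and feeds it an illustrative API Gateway-style event (the sample event shape is an assumption for demonstration):

```python
import json

def adapter(event: dict) -> dict:
    # Same normalization logic as the adapter above
    body = event.get("body")
    if isinstance(body, str):
        try:
            body = json.loads(body)
        except json.JSONDecodeError:
            body = {}
    return {
        "path": event.get("rawPath") or event.get("path") or "/execute",
        "headers": event.get("headers") or {},
        "target": event.get("target") or event.get("reasoner"),
        "input": (body or {}).get("input") or body or event.get("input", {}),
        "executionContext": event.get("executionContext") or event.get("execution_context"),
    }

# An API Gateway v2-style event with a JSON string body (illustrative)
event = {
    "rawPath": "/execute",
    "headers": {"content-type": "application/json"},
    "body": json.dumps({"input": {"ticket": "My order never arrived"}}),
    "target": "serverless_agent.process_event",
}
normalized = adapter(event)
print(normalized["input"])   # {'ticket': 'My order never arrived'}
print(normalized["target"])  # serverless_agent.process_event
```

An unparseable body falls back to an empty input rather than raising, so the handler always receives a well-formed request.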
```python
@app.reasoner
async def process_event(data: dict) -> dict:
    return await app.ai(user=f"Process: {data}")

# Lambda handler (adapter normalizes provider-specific event shapes)
def lambda_handler(event, context):
    return app.handle_serverless(event, adapter=adapter)
```

Configuration
The SDK provides comprehensive configuration options:
AI Configuration
Configure LLM behavior with AIConfig:
```python
from agentfield import AIConfig

ai_config = AIConfig(
    model="gpt-4o",
    temperature=0.7,
    max_tokens=2000,
    vision_model="dall-e-3",
    audio_model="tts-1-hd",
    timeout=60
)

app = Agent(node_id="my_agent", ai_config=ai_config)
```

Memory Configuration
Control memory behavior with MemoryConfig:
```python
from agentfield import MemoryConfig

memory_config = MemoryConfig(
    auto_inject=["user_context", "conversation_history"],
    memory_retention="persistent",
    cache_results=True
)

app = Agent(node_id="my_agent", memory_config=memory_config)
```

Async Execution
Configure async execution with AsyncConfig:
```python
from agentfield import AsyncConfig

async_config = AsyncConfig(
    enable_async_execution=True,
    max_execution_timeout=3600,
    polling_timeout=20
)

app = Agent(node_id="my_agent", async_config=async_config)

# Register a webhook when queuing async work
from agentfield.client import AgentFieldClient
from agentfield.types import WebhookConfig

async def queue_research() -> str:
    client = AgentFieldClient(async_config=async_config)
    execution_id = await client.execute_async(
        target="research-agent.deep_analysis",
        input_data={"topic": "autonomous software architecture trends"},
        webhook=WebhookConfig(
            url="https://app.example.com/webhooks/agentfield",
            secret="your-webhook-secret",
            headers={"X-Custom-ID": "research-123"},
        ),
    )
    return execution_id
```

Next Steps
Core Concepts
- Agent Class - Initialize and configure agents
- @app.reasoner - Define AI-powered functions
- @app.skill() - Create deterministic skills
AI Integration
- app.ai() - LLM interface
- app.call() - Cross-agent communication
- app.memory - Persistent storage
Advanced Features
- Workflow API - Execution tracking
- AgentRouter - Organize reasoners
- Configuration - AI, Memory, Async settings
Deployment
- Async Execution - Long-running tasks
- Webhooks - Event notifications
- Serverless Agents - Lambda/Cloud Functions
Examples
Explore complete examples in the Agentfield repository:
- Deep Research - Recursive web research with memory
- Documentation Chatbot - RAG-based documentation assistant
- Agentic RAG - Advanced retrieval-augmented generation
- Simulation Engine - Multi-agent simulation framework