Agent Execution

Execute AI agents synchronously via HTTP and get immediate results

Execute reasoners and skills synchronously and receive results immediately. Agentfield automatically tracks execution metadata, enabling distributed tracing and workflow visualization without manual instrumentation.

Quick Start

Execute any reasoner or skill using the unified execution endpoint:

curl -X POST http://localhost:8080/api/v1/execute/support-agent.analyze_sentiment \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "message": "This is the third time I am calling!",
      "conversation_history": []
    }
  }'

Response:

{
  "execution_id": "exec_abc123",
  "run_id": "run_def456",
  "status": "succeeded",
  "result": {
    "sentiment": "frustrated",
    "confidence": 0.92,
    "recommendation": "escalate"
  },
  "duration_ms": 1247,
  "finished_at": "2024-01-15T10:30:45Z"
}

Calling Router-Organized Functions

If your agent uses AgentRouter to organize functions, the router prefix becomes part of the endpoint:

# Agent code with router
from agentfield.router import AgentRouter

support = AgentRouter(prefix="support")

@support.reasoner()
async def analyze_sentiment(message: str) -> dict:
    """Analyze customer sentiment."""
    return await support.ai(...)

app.include_router(support)  # Registered as: support_analyze_sentiment

API call includes the router prefix:

# Note: "support" prefix is now part of the function name
curl -X POST http://localhost:8080/api/v1/execute/support-agent.support_analyze_sentiment \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "message": "This is the third time I am calling!"
    }
  }'

  • Without router: support-agent.analyze_sentiment
  • With router prefix "support": support-agent.support_analyze_sentiment

See AgentRouter documentation for details on how prefixes translate to API endpoints.

Endpoint

POST /api/v1/execute/{agent}.{function}

Examples

# Execute sentiment analysis with session and actor headers
curl -X POST http://localhost:8080/api/v1/execute/support-agent.analyze_sentiment \
  -H "Content-Type: application/json" \
  -H "X-Session-ID: session_user123" \
  -H "X-Actor-ID: user_john_doe" \
  -d '{
    "input": {
      "message": "This is the third time I am calling about this issue!",
      "conversation_history": [
        {"role": "user", "content": "My order never arrived"},
        {"role": "agent", "content": "Let me check that for you"}
      ]
    }
  }'

Response Fields

| Field | Type | Description |
|---|---|---|
| `execution_id` | string | Unique identifier for this execution |
| `run_id` | string | Workflow run identifier grouping related executions |
| `status` | string | Execution status: `succeeded`, `failed`, `waiting` |
| `result` | object | The reasoner/skill output (structure defined by your agent) |
| `error_message` | string | Error details if status is `failed` |
| `duration_ms` | number | Execution duration in milliseconds |
| `finished_at` | string | Completion timestamp (ISO 8601) |

Execution Flow

When you execute an agent, Agentfield automatically:

  1. Validates the request against your agent's Pydantic schema
  2. Persists execution metadata for tracing and observability
  3. Routes the request to your agent's HTTP endpoint
  4. Tracks execution duration and status
  5. Returns the result with complete metadata

Under the Hood: Agentfield uses a distributed execution model where each agent runs as an independent service. The control plane orchestrates requests, tracks execution state, and constructs workflow DAGs automatically—no manual instrumentation needed.

Multi-Agent Workflows

Chain multiple agents together using the execution response:

// Step 1: Analyze sentiment
const sentiment = await fetch(
  'http://localhost:8080/api/v1/execute/support-agent.analyze_sentiment',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-Workflow-ID': 'wf_123' },
    body: JSON.stringify({
      input: { message: customerMessage }
    })
  }
).then(r => r.json());

// Step 2: Route based on AI decision
if (sentiment.result.recommendation === 'escalate') {
  await fetch(
    'http://localhost:8080/api/v1/execute/escalation-agent.create_ticket',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-Workflow-ID': 'wf_123' },
      body: JSON.stringify({
        input: {
          customer_id: customerId,
          sentiment: sentiment.result,
          priority: 'high'
        }
      })
    }
  );
}

Agentfield automatically links these executions into a workflow DAG using the X-Workflow-ID header.

Error Handling

Handle execution errors gracefully:

import requests

try:
    response = requests.post(
        'http://localhost:8080/api/v1/execute/support-agent.analyze_sentiment',
        json={'input': {'message': 'Help!'}},
        timeout=30
    )
    response.raise_for_status()

    result = response.json()

    if result['status'] == 'failed':
        print(f"Execution failed: {result.get('error_message')}")
    else:
        print(f"Result: {result['result']}")

except requests.exceptions.Timeout:
    print("Execution timed out")
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")

Best Practices

1. Use Workflow Headers

Always include workflow headers for related executions:

curl -X POST http://localhost:8080/api/v1/execute/agent.reasoner \
  -H "X-Workflow-ID: wf_customer_support_001" \
  -H "X-Session-ID: session_user_456" \
  -H "X-Actor-ID: user_456" \
  -d '{"input": {...}}'

This enables:

  • Automatic workflow DAG construction
  • Session-scoped memory access
  • Actor-based attribution and auditing
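A small helper can keep these headers consistent across every call in the same workflow. This is a sketch — the `workflow_headers` function is illustrative, not part of the SDK:

```python
def workflow_headers(workflow_id: str, session_id: str, actor_id: str) -> dict[str, str]:
    """Headers Agentfield uses to link executions into one workflow DAG
    and scope memory/attribution to a session and actor."""
    return {
        "Content-Type": "application/json",
        "X-Workflow-ID": workflow_id,
        "X-Session-ID": session_id,
        "X-Actor-ID": actor_id,
    }
```

Build the dict once and pass it as the `headers=` argument of every request in the workflow, so each execution lands in the same DAG.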

2. Handle Timeouts

Set appropriate timeouts for AI operations:

const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 30000); // 30s

try {
  const response = await fetch(url, {
    signal: controller.signal,
    // ... other options
  });
} finally {
  clearTimeout(timeout);
}

3. Validate Input Schemas

Ensure your input matches the agent's Pydantic schema:

# Your agent definition
class SentimentInput(BaseModel):
    message: str
    conversation_history: List[Dict[str, str]] = []

# Valid request
{
  "input": {
    "message": "Help!",
    "conversation_history": [
      {"role": "user", "content": "Previous message"}
    ]
  }
}
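You can reuse the same model client-side to catch schema errors before the HTTP round trip. This sketch assumes Pydantic v2 is installed; the server-side model remains the source of truth, and `validate_payload` is an illustrative helper:

```python
from typing import Dict, List

from pydantic import BaseModel, ValidationError


class SentimentInput(BaseModel):
    message: str
    conversation_history: List[Dict[str, str]] = []


def validate_payload(raw: dict) -> dict:
    """Validate and normalize the input locally; raises ValueError on mismatch."""
    try:
        return SentimentInput(**raw).model_dump()
    except ValidationError as e:
        raise ValueError(f"input does not match schema: {e}") from e
```

Failing fast locally gives a clearer error than a remote validation failure and saves a round trip.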

Long-Running Tasks

For tasks that take more than a few seconds, use async execution instead:

# Queue long-running task
curl -X POST http://localhost:8080/api/v1/execute/async/research-agent.deep_analysis \
  -H "Content-Type: application/json" \
  -d '{
    "input": {"topic": "market analysis"},
    "webhook": {
      "url": "https://app.example.com/webhooks/agentfield",
      "secret": "your-secret"
    }
  }'
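When the webhook fires, verify that the callback really came from Agentfield using the shared secret. The header name and signing scheme below (HMAC-SHA256 of the raw request body, hex-encoded) are assumptions for illustration — confirm the actual scheme against your deployment's webhook documentation:

```python
import hashlib
import hmac


def verify_webhook(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """Compare an HMAC-SHA256 signature of the raw body against the header value.

    Assumes a hex-encoded HMAC-SHA256 scheme -- verify against your deployment.
    Uses hmac.compare_digest to avoid timing attacks.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```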

Human-in-the-Loop: Waiting State

When an agent needs human approval before continuing, the execution enters a waiting state. The agent has paused itself and is waiting for a human decision. This is triggered when agent code calls app.pause(), which signals the control plane to suspend execution until a reviewer responds.

The full flow:

  1. Agent calls app.pause() with an approval request ID and URL
  2. Control plane sets execution status to waiting
  3. A human reviewer visits the approval URL and approves, rejects, or requests changes
  4. Control plane resumes the execution and the agent continues from where it paused

When you poll an execution in the waiting state, the response includes a waiting_for object with details about the pending approval:

{
  "execution_id": "exec_abc123",
  "status": "waiting",
  "waiting_for": {
    "type": "approval",
    "approval_request_id": "claim-12345",
    "approval_request_url": "https://dashboard.example.com/claims/12345",
    "expires_at": "2024-01-16T10:30:45Z"
  }
}

The waiting_for.approval_request_url is the URL where a human reviewer can take action. Your application is responsible for notifying the reviewer (via email, Slack, or another channel) that their input is needed.

Note: The waiting state is specific to Human-in-the-Loop workflows. Executions in other states (pending, queued, running) do not include a waiting_for field.

Approval Endpoints

These endpoints manage the approval lifecycle for executions in the waiting state.

Request Approval

Called automatically by the agent SDK when app.pause() is invoked. You do not need to call this directly unless you are building a custom agent runtime.

Example request:

curl -X POST http://localhost:8080/api/v1/executions/exec_abc123/request-approval \
  -H "Content-Type: application/json" \
  -d '{
    "approval_request_id": "claim-12345",
    "approval_request_url": "https://dashboard.example.com/claims/12345",
    "expires_in_hours": 24
  }'

Check Approval Status

Poll this endpoint to check whether a reviewer has responded to an approval request.

Example request:

curl "http://localhost:8080/api/v1/executions/exec_abc123/approval-status?approval_request_id=claim-12345"

The status field reflects the reviewer's decision: pending, approved, rejected, or request_changes.

Approve or Reject

Called by a human reviewer or your dashboard when a decision is made. Submitting a decision resumes the paused execution with the outcome.

Example request:

curl -X POST http://localhost:8080/api/v1/webhooks/approval \
  -H "Content-Type: application/json" \
  -d '{
    "approval_request_id": "claim-12345",
    "decision": "approved",
    "feedback": "Claim verified against policy"
  }'

After a decision is submitted, the execution transitions out of waiting and resumes. The agent receives the decision and feedback via the return value of app.pause().

For more on building Human-in-the-Loop workflows, see Human-in-the-Loop concepts and the Python SDK pause reference.