
Overview

Agents are the core intelligent entities in OrbitAI that execute tasks autonomously. Each agent is designed with a specific role, purpose, and context, and is equipped with tools and capabilities to accomplish its assigned work.

Autonomous

Self-directed execution with intelligent decision-making

Specialized

Configured with specific roles and domain expertise

Tool-Enabled

Access to tools for extended capabilities

Context-Aware

Maintains memory and understands task context

Collaborative

Works with other agents in orchestrated workflows

Monitored

Comprehensive metrics and performance tracking

Key Characteristics

Each agent has a defined role that shapes its behavior and decision-making process. Roles provide context for how the agent should approach tasks and interact with other agents.
Agents are created with specific purposes that guide their actions and define their primary objectives within the system.
Agents maintain context through detailed background information, memory systems, and knowledge bases, enabling sophisticated reasoning.
Agents leverage Large Language Models for natural language understanding, reasoning, and content generation.

Agent Architecture

Agent (Core Entity)
    ├── Identity
    │   ├── ID (OrbitAIID)
    │   ├── Role (String)
    │   └── Purpose (String)
    ├── Intelligence
    │   ├── LLM Provider
    │   ├── Context/Background
    │   └── Temperature
    ├── Capabilities
    │   ├── Tools (Array)
    │   ├── Memory System
    │   └── Knowledge Base
    ├── Execution
    │   ├── OrbitAgentExecutor
    │   ├── Task Queue
    │   └── Output Generation
    └── Monitoring
        ├── Usage Metrics
        ├── Performance Data
        └── Execution History

Core Components

  • Agent Structure
  • Agent Executor
  • Agent Factory
The Agent actor is the fundamental unit that encapsulates:
  • Identity and configuration
  • LLM integration
  • Tool access
  • Memory management
  • Execution state
public actor Agent: Identifiable, Codable, Sendable {
    public let id: OrbitAIID
    public let role: String
    public let purpose: String
    public let context: String
    // ... additional properties
}

Agent Parameters

Core Properties

  • Required
  • LLM Configuration
  • Capabilities
  • Memory & Knowledge
  • Execution & Output
| Property | Type | Description |
| --- | --- | --- |
| id | OrbitAIID | Unique identifier (UUID) for the agent |
| role | String | The agent’s role or title (e.g., “Senior Data Analyst”) |
| purpose | String | Primary objective and responsibility |
| context | String | Background information and expertise details |
let agent = Agent(
    role: "Financial Analyst",
    purpose: "Analyze financial data and provide investment insights",
    context: """
    Expert financial analyst with 15 years of experience in:
    - Financial statement analysis
    - Market trend identification
    - Risk assessment and portfolio management
    - Investment recommendations
    """
)
The role, purpose, and context form the agent’s “system message” that shapes all LLM interactions.

State Properties

| Property | Type | Description |
| --- | --- | --- |
| currentTask | ORTask? | Currently executing task |
| taskHistory | [ORTask] | Completed task history |
| totalUsageMetrics | UsageMetrics | Cumulative usage statistics |
| executionCount | Int | Number of tasks executed |
| averageExecutionTime | TimeInterval | Average task completion time |
State properties are automatically managed during agent execution and provide insights into agent performance.
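These values can be read directly from the agent actor. A minimal sketch, assuming the property names in the table above (actor-isolated properties require await):

```swift
// Inspect an agent's state after it has executed some tasks.
let count = await agent.executionCount
let avgTime = await agent.averageExecutionTime
let history = await agent.taskHistory

print("Executed \(count) tasks (avg \(avgTime)s), \(history.count) in history")
```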

Critical Parameters

Role, Purpose, and Context

The most critical parameters that define agent behavior:
1. Role: Define the Agent's Identity

The role establishes who the agent is. Be specific and professional:
// Good roles
role: "Senior Financial Analyst"
role: "Technical Writer specializing in API documentation"
role: "Customer Support Specialist"

// Poor roles
role: "Helper"
role: "AI"
role: "Agent"
Use professional titles that reflect real-world expertise levels and specializations.
2. Purpose: State the Primary Objective

The purpose clarifies what the agent should accomplish:
// Good purposes
purpose: "Analyze financial statements and provide investment recommendations"
purpose: "Create comprehensive API documentation from code specifications"
purpose: "Resolve customer inquiries and escalate complex issues"

// Poor purposes
purpose: "Help with stuff"
purpose: "Do tasks"
purpose: "Be useful"
3. Context: Provide Background and Expertise

The context gives the agent detailed background knowledge:
context: """
You are a senior financial analyst with 15+ years of experience in:
- Equity research and valuation modeling
- Financial statement analysis (10-K, 10-Q reports)
- Market trend analysis and forecasting
- Risk assessment and portfolio management
- Regulatory compliance (SEC, FINRA)

Your analysis is data-driven and considers:
- Industry benchmarks and peer comparisons
- Macroeconomic factors and market conditions
- Technical and fundamental indicators
- Risk-adjusted returns

Always provide specific, actionable recommendations backed by data.
Use appropriate financial terminology and explain complex concepts clearly.
"""
Avoid generic context. Provide specific expertise, methodologies, and guidelines that shape agent behavior.

Temperature Configuration

Temperature controls randomness and creativity:
  • Low (0.0-0.3): Deterministic and focused. Use for data analysis, code generation, factual reporting, structured outputs, and mathematical tasks.
  • Medium (0.4-0.7): Balanced between consistency and variety; suits general-purpose tasks.
  • High (0.8-1.0): More varied and creative output.
let analyst = Agent(
    role: "Data Analyst",
    purpose: "Analyze datasets and generate reports",
    context: "Expert in statistical analysis",
    temperature: 0.2  // Deterministic, focused
)

Memory and Context

Memory Systems

OrbitAI provides multiple memory systems for context retention:

Short-Term Memory

Enabled with memory: true. Retains information within a single conversation or task sequence.
memory: true
Use cases:
  • Multi-turn conversations
  • Sequential task workflows
  • Context building within sessions

Long-Term Memory

Enabled with longTermMemory: true. Persists information across sessions and executions.
longTermMemory: true
Use cases:
  • User preference tracking
  • Historical data retention
  • Cross-session learning

Entity Memory

Enabled with entityMemory: true. Tracks and remembers named entities (people, places, organizations).
entityMemory: true
Use cases:
  • Customer relationship management
  • Knowledge graph building
  • Entity relationship tracking

Knowledge Sources

Configured with knowledgeSources: [paths]. Loads external knowledge from files.
knowledgeSources: [
    "./docs/api-reference.md",
    "./data/product-catalog.json"
]
Supported formats:
  • PDF documents
  • Markdown files
  • JSON data
  • Plain text

Memory Configuration

Fine-tune memory behavior with MemoryConfiguration:
let memoryConfig = MemoryConfiguration(
    maxMemoryItems: 100,          // Maximum stored items
    persistencePath: "./memory",   // Storage location
    embeddingModel: "text-embedding-ada-002",
    similarityThreshold: 0.75,    // Relevance threshold
    compressionEnabled: true       // Automatic summarization
)

let agent = Agent(
    role: "Knowledge Assistant",
    purpose: "Provide informed responses using memory",
    context: "Assistant with excellent recall",
    memory: true,
    longTermMemory: true,
    memoryConfig: memoryConfig
)
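The similarityThreshold above is a relevance cutoff: retrieval typically compares the query's embedding against stored-item embeddings and keeps only items scoring at or above the threshold. A self-contained sketch of that gating, using plain cosine similarity (illustrative only, not OrbitAI's actual retrieval internals):

```swift
import Foundation

/// Cosine similarity between two embedding vectors.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    precondition(a.count == b.count, "Vectors must have equal length")
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = sqrt(a.map { $0 * $0 }.reduce(0, +))
    let magB = sqrt(b.map { $0 * $0 }.reduce(0, +))
    guard magA > 0, magB > 0 else { return 0 }
    return dot / (magA * magB)
}

/// Keep only stored items whose similarity to the query meets the threshold.
func relevantItems(
    query: [Double],
    items: [(text: String, embedding: [Double])],
    threshold: Double = 0.75
) -> [String] {
    items
        .filter { cosineSimilarity(query, $0.embedding) >= threshold }
        .map(\.text)
}
```

Raising the threshold makes retrieval stricter (fewer, more relevant memories enter the prompt); lowering it recalls more but risks injecting noise.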

Context Window Management

Manage token limits and context overflow:
let agent = Agent(
    role: "Long-Context Analyst",
    purpose: "Analyze large documents",
    context: "Expert in processing extensive content",
    respectContextWindow: true,    // Enforce limits
    maxTokens: 4000               // Per-response limit
)
When respectContextWindow: true, the system automatically:
  1. Monitors token usage
  2. Prunes old messages when approaching limits
  3. Retains system messages and recent context
  4. Maintains conversation coherence
// Automatic pruning when context exceeds limits
if tokenCount > maxContextTokens * 0.8 {
    messages = pruneContext(messages, keepRecent: 10)
}
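The pruneContext call above can be sketched in plain Swift. This is a hedged implementation assuming a simple keep-system-plus-recent policy, not OrbitAI's actual internals:

```swift
struct Message {
    let role: String   // "system", "user", or "assistant"
    let content: String
}

/// Drop old messages once the context grows too large,
/// always retaining system messages and the most recent turns.
func pruneContext(_ messages: [Message], keepRecent: Int = 10) -> [Message] {
    let system = messages.filter { $0.role == "system" }
    let rest = messages.filter { $0.role != "system" }
    return system + rest.suffix(keepRecent)
}
```

Because the system message carries the agent's role, purpose, and context, it is never pruned; only older conversational turns are dropped.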
Memory Systems:
  • Store information externally
  • Retrieve relevant data as needed
  • Not limited by context window
  • Slower access (retrieval step)
Context Window:
  • All data in active conversation
  • Immediate access
  • Limited by token count
  • Faster processing
Use memory for large knowledge bases and long-term retention. Keep recent, relevant information in the context window.

Execution Control

Iteration and Reasoning

Control how agents reason through complex problems:
let agent = Agent(
    role: "Problem Solver",
    purpose: "Solve complex analytical problems",
    context: "Expert in systematic problem-solving",
    maxIter: 20,                  // Allow up to 20 reasoning steps
    allowDelegation: true          // Can delegate subtasks
)
The maxIter parameter limits the number of reasoning cycles, preventing infinite loops while allowing thorough analysis.

Rate Limiting

Prevent API throttling and control costs:
let agent = Agent(
    role: "High-Volume Assistant",
    purpose: "Handle multiple concurrent requests",
    context: "Efficient assistant for batch processing",
    maxRPM: 60,                   // 60 requests per minute
    maxExecutionTime: 300         // 5 minute timeout
)
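A maxRPM cap is equivalent to enforcing a minimum spacing between requests. A self-contained sketch of that arithmetic (illustrative only, not OrbitAI's internal scheduler):

```swift
/// Minimum number of seconds between requests for a given RPM cap.
func minInterval(forRPM maxRPM: Int) -> Double {
    precondition(maxRPM > 0, "maxRPM must be positive")
    return 60.0 / Double(maxRPM)
}

// maxRPM: 60 means at most one request per second.
let spacing = minInterval(forRPM: 60)
```

A scheduler honoring the cap would sleep for at least this interval between successive LLM calls.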

Delegation

Enable agents to delegate to other agents:
// Manager agent that can delegate
let manager = Agent(
    role: "Project Manager",
    purpose: "Coordinate complex projects across teams",
    context: "Experienced PM with delegation skills",
    allowDelegation: true,
    tools: ["task_delegator", "progress_tracker"]
)

// Worker agents for specialized tasks
let dataAgent = Agent(
    role: "Data Processor",
    purpose: "Process and transform data",
    context: "Data engineering expert"
)

let reportAgent = Agent(
    role: "Report Generator",
    purpose: "Create formatted reports",
    context: "Technical writing specialist"
)
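With these three agents defined, a single high-level task can be handed to the manager; because allowDelegation is true, it may route subtasks to the workers at runtime. A hypothetical usage sketch (the ORTask initializer follows the form used elsewhere in this guide):

```swift
// Give the high-level task to the manager; with allowDelegation: true
// it can hand the processing and reporting subtasks to the worker agents.
let projectTask = ORTask(
    description: """
    Clean the raw quarterly sales data, then produce a formatted
    summary report of the results.
    """,
    expectedOutput: "Formatted summary report based on the processed data",
    agent: manager.id
)
```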

Callbacks and Monitoring

Track agent execution with callbacks:
let agent = Agent(
    role: "Monitored Agent",
    purpose: "Execute tasks with detailed tracking",
    context: "Agent with comprehensive monitoring",
    verbose: true,                // Log all steps
    stepCallback: "onAgentStep"   // Custom callback function
)

// Callback implementation
func onAgentStep(step: AgentStep) {
    print("Agent: \(step.agentId)")
    print("Action: \(step.action)")
    print("Thought: \(step.thought)")
    print("Observation: \(step.observation)")
}

Agent Tools

Tool Integration

Agents gain capabilities through tools:
let agent = Agent(
    role: "Research Analyst",
    purpose: "Conduct comprehensive market research",
    context: "Expert researcher with analytical capabilities",
    tools: [
        "web_search",          // Search the internet
        "data_analyzer",       // Analyze datasets
        "chart_generator",     // Create visualizations
        "pdf_reader",          // Read PDF documents
        "calculator",          // Perform calculations
        "report_writer"        // Generate reports
    ]
)

Built-in Tools

Web Search

Search the internet for current information
tools: ["web_search"]

Calculator

Perform mathematical calculations
tools: ["calculator"]

File Operations

Read, write, and manipulate files
tools: ["file_reader", "file_writer"]

Data Analysis

Analyze and process datasets
tools: ["data_analyzer", "csv_processor"]

Code Execution

Execute code safely
tools: ["code_executor", "python_repl"]

API Integration

Call external APIs
tools: ["api_caller", "rest_client"]

Custom Tools

Create domain-specific tools:
  • Define Tool
  • Register Tool
  • Tool Best Practices
import OrbitAI

final class DatabaseQueryTool: BaseTool {
    override var name: String { "database_query" }
    override var description: String {
        "Query the company database for information"
    }

    override var parametersSchema: JSONSchema {
        JSONSchema(
            type: .object,
            properties: [
                "query": JSONSchema(
                    type: .string,
                    description: "SQL query to execute"
                ),
                "limit": JSONSchema(
                    type: .integer,
                    description: "Maximum rows to return"
                )
            ],
            required: ["query"]
        )
    }

    override func execute(
        with parameters: Metadata
    ) async throws -> ToolResult {
        guard let query = parameters["query"]?.stringValue else {
            throw OrbitAIError.invalidToolParameters("Missing query")
        }

        let limit = parameters["limit"]?.intValue ?? 100

        // Execute database query
        let results = try await database.execute(query, limit: limit)

        var resultData = Metadata()
        resultData["rows"] = .array(results.map { .dictionary($0) })
        resultData["count"] = .int(results.count)

        return ToolResult(success: true, data: resultData)
    }
}

Tool Selection

Agents automatically select appropriate tools based on:
  1. Task requirements: Tools mentioned in task description
  2. Agent configuration: Available tools in agent’s tool list
  3. LLM reasoning: Model determines when tools are needed
  4. Context: Previous tool usage and results
// Agent selects tools based on task
let task = ORTask(
    description: """
    Research current AI trends, analyze the data,
    and create a visualization chart.
    """,
    expectedOutput: "Research report with charts"
)

// Agent with multiple tools
let agent = Agent(
    role: "Research Analyst",
    purpose: "Conduct and visualize research",
    context: "Expert researcher with data viz skills",
    tools: [
        "web_search",      // Used for research
        "data_analyzer",   // Used for analysis
        "chart_generator"  // Used for visualization
    ]
)

// Agent automatically selects tools in order:
// 1. web_search for research
// 2. data_analyzer for analysis
// 3. chart_generator for visualization

Structured Outputs

Type-Safe Output Generation

Use Swift’s Codable for structured responses:
struct AnalysisReport: Codable, Sendable {
    let title: String
    let executiveSummary: String
    let findings: [Finding]
    let recommendations: [Recommendation]
    let confidence: Double
    let generatedAt: Date

    struct Finding: Codable, Sendable {
        let category: String
        let description: String
        let impact: Impact
        let evidence: [String]
    }

    struct Recommendation: Codable, Sendable {
        let title: String
        let description: String
        let priority: Priority
        let estimatedCost: Double?
    }

    enum Impact: String, Codable {
        case high, medium, low
    }

    enum Priority: String, Codable {
        case critical, high, medium, low
    }
}

// Agent configured for structured output
let agent = Agent(
    role: "Business Analyst",
    purpose: "Generate structured analysis reports",
    context: "Expert analyst with strong reporting skills",
    temperature: 0.3  // Lower for consistent structure
)

// Task with structured output
let task = ORTask.withStructuredOutput(
    description: "Analyze Q4 2024 business performance",
    expectedType: AnalysisReport.self,
    agent: agent.id
)

JSON Schema Validation

Define schemas for output validation:
let reportSchema = JSONSchema(
    type: .object,
    properties: [
        "title": JSONSchema(type: .string),
        "summary": JSONSchema(type: .string),
        "metrics": JSONSchema(
            type: .object,
            properties: [
                "revenue": JSONSchema(type: .number),
                "growth": JSONSchema(type: .number),
                "customers": JSONSchema(type: .integer)
            ]
        ),
        "recommendations": JSONSchema(
            type: .array,
            items: .init(value: JSONSchema(type: .string))
        )
    ],
    required: ["title", "summary", "metrics"]
)

let task = ORTask(
    description: "Generate quarterly business report",
    expectedOutput: "Structured report with metrics and recommendations",
    outputFormat: .structured(reportSchema)
)
Use structured outputs for:
  • API integrations
  • Database storage
  • UI rendering
  • Data pipelines
  • System integrations

Best Practices

Agent Design

Single Responsibility

Design agents with one clear purpose.

Good:
role: "Email Response Specialist"
purpose: "Draft professional email responses"
Bad:
role: "Everything Agent"
purpose: "Do all tasks"

Domain Expertise

Provide specific, relevant expertise.

Good:
context: """
Expert in GDPR compliance with knowledge of:
- Data protection principles
- Legal requirements
- Implementation best practices
"""
Bad:
context: "Knows about privacy"

Appropriate Tools

Only include necessary tools.

Good:
tools: ["web_search", "data_analyzer"]
// Relevant for research tasks
Bad:
tools: ["web_search", "image_editor",
        "video_processor", "code_executor"]
// Too many unrelated tools

Temperature Tuning

Match temperature to task type.

Good:
// Data analysis
temperature: 0.2

// Creative writing
temperature: 0.9
Bad:
// Always using default
temperature: 0.7

Performance Optimization

Enable memory only when needed:
// Short tasks - no memory needed
let quickAgent = Agent(
    role: "Quick Responder",
    purpose: "Answer simple questions",
    context: "General knowledge assistant",
    memory: false  // No memory overhead
)

// Multi-turn conversations - enable memory
let conversationalAgent = Agent(
    role: "Conversational Assistant",
    purpose: "Engage in ongoing conversations",
    context: "Friendly assistant with good memory",
    memory: true,
    longTermMemory: true
)
Benefits:
  • Reduced resource usage
  • Faster execution
  • Lower storage costs
Optimize tool assignment:
// Specific tools for specific tasks
let codeAgent = Agent(
    role: "Code Reviewer",
    purpose: "Review code for issues",
    context: "Expert code reviewer",
    tools: [
        "code_analyzer",
        "security_scanner"
    ]  // Only relevant tools
)
Avoid:
  • Giving every agent all available tools
  • Including tools the agent won’t use
  • Complex tools for simple tasks
Manage context efficiently:
let agent = Agent(
    role: "Document Processor",
    purpose: "Process long documents",
    context: "Expert in document analysis",
    respectContextWindow: true,  // Auto-manage
    maxTokens: 2000             // Per-response limit
)
Strategies:
  • Enable respectContextWindow for auto-pruning
  • Set appropriate maxTokens limits
  • Use knowledge sources for large documents
  • Implement chunking for very long content
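The last strategy, chunking, can be sketched in plain Swift. This hedged example splits text into fixed-size character chunks with overlap between neighbors so content spanning a boundary is not lost (the sizes are illustrative, not OrbitAI defaults):

```swift
/// Split text into fixed-size chunks with overlap between neighbors,
/// so sentences spanning a boundary appear in both chunks.
func chunk(_ text: String, size: Int = 2000, overlap: Int = 200) -> [String] {
    precondition(size > overlap, "Chunk size must exceed overlap")
    var chunks: [String] = []
    var start = text.startIndex
    while start < text.endIndex {
        let end = text.index(start, offsetBy: size,
                             limitedBy: text.endIndex) ?? text.endIndex
        chunks.append(String(text[start..<end]))
        if end == text.endIndex { break }
        // Step back by the overlap so the next chunk repeats the tail.
        start = text.index(end, offsetBy: -overlap)
    }
    return chunks
}
```

Each chunk can then be processed as a separate task, with the overlap preserving continuity between consecutive chunks.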
Prevent throttling:
let agent = Agent(
    role: "Batch Processor",
    purpose: "Process items in batches",
    context: "Efficient batch processing agent",
    maxRPM: 50,              // Stay under API limits
    maxExecutionTime: 600    // 10 minute timeout
)
Benefits:
  • Avoid API rate limit errors
  • Control costs
  • Predictable performance

Security and Safety

1. Input Validation

Validate all inputs and parameters:
// Sanitize user input
let sanitizedInput = userInput
    .trimmingCharacters(in: .whitespacesAndNewlines)
    .replacingOccurrences(of: #"<script.*?</script>"#,
                          with: "",
                          options: .regularExpression)
2. Access Control

Limit agent access to sensitive operations:
let publicAgent = Agent(
    role: "Public Assistant",
    purpose: "Help public users",
    context: "Public-facing assistant",
    tools: [
        "web_search",      // Safe
        "calculator"       // Safe
    ]
    // No file system access
    // No database access
    // No email sending
)
3. Output Filtering

Filter potentially harmful outputs:
let agent = Agent(
    role: "Content Generator",
    purpose: "Generate user-facing content",
    context: "Professional content creator",
    guardrails: [
        .noHarmfulContent,
        .noPII,  // No personal information
        .contentFilter
    ]
)
4. Audit Logging

Track agent actions for security:
let agent = Agent(
    role: "Privileged Agent",
    purpose: "Perform sensitive operations",
    context: "Authorized for sensitive tasks",
    verbose: true,  // Detailed logging
    tools: ["database_access", "file_writer"]
)

// Monitor logs for suspicious activity

Troubleshooting

Symptoms: Agent outputs are off-target or low quality.

Common Causes:
  1. Vague or unclear role/purpose/context
  2. Inappropriate temperature setting
  3. Missing necessary tools
  4. Insufficient context information
Solutions:
// Before: Vague configuration
let vague = Agent(
    role: "Helper",
    purpose: "Help users",
    context: "Helpful assistant"
)

// After: Clear, specific configuration
let specific = Agent(
    role: "Technical Support Specialist",
    purpose: "Diagnose and resolve technical issues for software users",
    context: """
    Expert technical support specialist with:
    - 5+ years experience in software troubleshooting
    - Deep knowledge of common technical issues
    - Excellent problem-solving and communication skills
    - Systematic approach to diagnostics

    Always:
    1. Ask clarifying questions
    2. Provide step-by-step solutions
    3. Explain technical concepts clearly
    4. Verify issue resolution
    """,
    temperature: 0.4,  // Balanced for support tasks
    tools: ["knowledge_base", "diagnostic_tool"]
)
Symptoms: Tasks fail with timeout errors.

Common Causes:
  1. Insufficient maxExecutionTime
  2. Complex reasoning requiring many iterations
  3. Slow tool execution
  4. LLM provider latency
Solutions:
// Increase execution time and iterations
let agent = Agent(
    role: "Complex Problem Solver",
    purpose: "Solve intricate problems",
    context: "Expert problem solver",
    maxExecutionTime: 1800,  // 30 minutes
    maxIter: 25,             // More reasoning steps
    tools: ["complex_analyzer"]
)

// Or use async execution
let task = ORTask(
    description: "Long-running analysis",
    expectedOutput: "Comprehensive analysis",
    agent: agent.id,
    async: true  // Don't block
)
Monitor averageExecutionTime to set appropriate timeouts.
Symptoms: Agent reports tool execution errors.

Common Causes:
  1. Tool not registered with ToolsHandler
  2. Invalid tool parameters
  3. Tool permissions issues
  4. Tool dependencies missing
Solutions:
// Verify tool registration
let toolsHandler = ToolsHandler.shared
let isRegistered = await toolsHandler.isToolAvailable(
    named: "my_custom_tool"
)

if !isRegistered {
    await toolsHandler.registerTool(MyCustomTool())
}

// Add better error handling in tools
override func execute(
    with parameters: Metadata
) async throws -> ToolResult {
    do {
        // Tool logic
        return ToolResult(success: true, data: result)
    } catch {
        return ToolResult(
            success: false,
            error: "Tool failed: \(error.localizedDescription)"
        )
    }
}
Symptoms: High memory usage or memory-related errors.

Common Causes:
  1. Memory enabled but not needed
  2. Too many memory items stored
  3. Large knowledge sources
  4. Memory not being pruned
Solutions:
// Configure memory limits
let memoryConfig = MemoryConfiguration(
    maxMemoryItems: 50,        // Limit storage
    compressionEnabled: true,   // Auto-summarize
    pruneOldItems: true        // Remove old entries
)

let agent = Agent(
    role: "Memory-Efficient Agent",
    purpose: "Work efficiently with memory",
    context: "Efficient agent design",
    memory: true,
    memoryConfig: memoryConfig
)

// Or disable memory if not needed
let simpleAgent = Agent(
    role: "Simple Task Agent",
    purpose: "Handle simple one-off tasks",
    context: "Quick task handler",
    memory: false  // No memory overhead
)
Symptoms: Agent produces different outputs for the same inputs.

Common Causes:
  1. High temperature setting
  2. Non-deterministic tool behavior
  3. Memory state differences
  4. Random sampling in LLM
Solutions:
// Lower temperature for consistency
let consistentAgent = Agent(
    role: "Data Processor",
    purpose: "Process data consistently",
    context: "Reliable data processing agent",
    temperature: 0.0,  // Deterministic
    tools: ["data_processor"]
)

// Use structured outputs
struct ProcessingResult: Codable, Sendable {
    let status: String
    let processedItems: Int
    let errors: [String]
}

let task = ORTask.withStructuredOutput(
    description: "Process data batch",
    expectedType: ProcessingResult.self,
    agent: consistentAgent.id
)
Symptoms: API rate limit exceeded errors.

Common Causes:
  1. No maxRPM configured
  2. Too many concurrent agents
  3. Rapid successive requests
  4. Insufficient retry backoff
Solutions:
// Configure rate limiting
let rateLimitedAgent = Agent(
    role: "Controlled Agent",
    purpose: "Respect API rate limits",
    context: "Rate-aware agent",
    maxRPM: 50,  // Limit requests per minute
    tools: ["api_caller"]
)

// Implement exponential backoff
func executeWithBackoff(
    agent: Agent,
    task: ORTask,
    maxRetries: Int = 3
) async throws -> TaskOutput {
    var lastError: Error?

    for attempt in 1...maxRetries {
        do {
            return try await executor.executeTask(
                agent: agent,
                task: task
            )
        } catch let error as OrbitAIError {
            if case .llmRateLimitExceeded = error {
                let delay = pow(2.0, Double(attempt))
                try await Task.sleep(for: .seconds(delay))
                lastError = error
            } else {
                throw error
            }
        }
    }

    throw lastError ?? OrbitAIError.taskExecutionFailed("Max retries exceeded")
}

Debugging Strategies

1. Enable Verbose Logging

let debugAgent = Agent(
    role: "Debug Subject",
    purpose: "Agent being debugged",
    context: "Test agent for debugging",
    verbose: true,  // Detailed logs
    stepCallback: "onAgentStep"
)

func onAgentStep(step: AgentStep) {
    print("=== Agent Step ===")
    print("Thought: \(step.thought)")
    print("Action: \(step.action)")
    print("Observation: \(step.observation)")
    print("================")
}
2. Monitor Metrics

// Check agent performance
print("Execution count: \(await agent.executionCount)")
print("Avg time: \(await agent.averageExecutionTime)s")
print("Total tokens: \(await agent.totalUsageMetrics.totalTokens)")

// Analyze tool usage
if let taskOutput = result.output {
    for toolUsage in taskOutput.toolsUsed {
        print("Tool: \(toolUsage.toolName)")
        print("Time: \(toolUsage.executionTime)s")
        print("Success: \(toolUsage.success)")
    }
}
3. Test with Simple Tasks

// Start simple
let simpleTask = ORTask(
    description: "What is 2 + 2?",
    expectedOutput: "The number 4"
)

let result = try await executor.executeTask(
    agent: agent,
    task: simpleTask
)

// Verify basic functionality before complex tasks
assert(result.rawOutput.contains("4"))
4. Isolate Components

// Test without tools
let noToolsAgent = Agent(
    role: agent.role,
    purpose: agent.purpose,
    context: agent.context,
    tools: nil  // Remove tools to isolate issue
)

// Test without memory
let noMemoryAgent = Agent(
    role: agent.role,
    purpose: agent.purpose,
    context: agent.context,
    memory: false,
    longTermMemory: false
)

// Identify which component causes issues

Next Steps

For additional support, consult the GitHub Discussions or check the Issue Tracker.