Overview
Agents are the core intelligent entities in OrbitAI that execute tasks autonomously. Each agent is designed with a specific role, purpose, and context, and is equipped with tools and capabilities to accomplish its assigned work.
Autonomous: Self-directed execution with intelligent decision-making
Specialized: Configured with specific roles and domain expertise
Tool-Enabled: Access to tools for extended capabilities
Context-Aware: Maintains memory and understands task context
Collaborative: Works with other agents in orchestrated workflows
Monitored: Comprehensive metrics and performance tracking
Key Characteristics
Each agent has a defined role that shapes its behavior and decision-making process. Roles provide context for how the agent should approach tasks and interact with other agents.
Agents are created with specific purposes that guide their actions and define their primary objectives within the system.
Agents maintain context through detailed background information, memory systems, and knowledge bases, enabling sophisticated reasoning.
Agents leverage Large Language Models for natural language understanding, reasoning, and content generation.
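Putting these characteristics together, a minimal agent can be declared with just a role, purpose, context, and an LLM provider. The snippet below is a small sketch using the initializer parameters documented later on this page; the role and provider values are illustrative:

// Minimal sketch: identity (role/purpose/context) plus an LLM provider.
// Parameter names follow the Agent Parameters tables below.
let reviewer = Agent(
    role: "Code Review Assistant",
    purpose: "Review Swift code and suggest concrete improvements",
    context: "Experienced Swift engineer familiar with concurrency and API design",
    llmID: .openAI,       // provider choice is illustrative
    temperature: 0.3      // lower temperature for analytical work
)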
Agent Architecture
Agent (Core Entity)
├── Identity
│   ├── ID (OrbitAIID)
│   ├── Role (String)
│   └── Purpose (String)
│
├── Intelligence
│   ├── LLM Provider
│   ├── Context/Background
│   └── Temperature
│
├── Capabilities
│   ├── Tools (Array)
│   ├── Memory System
│   └── Knowledge Base
│
├── Execution
│   ├── OrbitAgentExecutor
│   ├── Task Queue
│   └── Output Generation
│
└── Monitoring
    ├── Usage Metrics
    ├── Performance Data
    └── Execution History
Core Components
Agent Structure
The Agent actor is the fundamental unit that encapsulates:
Identity and configuration
LLM integration
Tool access
Memory management
Execution state
public actor Agent: Identifiable, Codable, Sendable {
    public let id: OrbitAIID
    public let role: String
    public let purpose: String
    public let context: String
    // ... additional properties
}
Agent Executor
The OrbitAgentExecutor handles:
Task execution logic
Tool orchestration
Memory integration
Output formatting
Error handling
public actor OrbitAgentExecutor {
    func executeTask(
        agent: Agent,
        task: ORTask,
        context: TaskExecutionContext
    ) async throws -> TaskOutput
}
Agent Factory
The AgentFactory provides pre-configured agent templates:
Research agents
Writing agents
Analysis agents
Custom specialized agents
let researcher = AgentFactory.createResearchAgent(
    goal: "Conduct market research",
    llmID: .openAI,
    tools: ["web_search", "data_analysis"]
)
Agent Parameters
Core Properties
Required

| Property | Type | Description |
| --- | --- | --- |
| id | OrbitAIID | Unique identifier (UUID) for the agent |
| role | String | The agent’s role or title (e.g., “Senior Data Analyst”) |
| purpose | String | Primary objective and responsibility |
| context | String | Background information and expertise details |
let agent = Agent(
    role: "Financial Analyst",
    purpose: "Analyze financial data and provide investment insights",
    context: """
    Expert financial analyst with 15 years of experience in:
    - Financial statement analysis
    - Market trend identification
    - Risk assessment and portfolio management
    - Investment recommendations
    """
)
The role, purpose, and context form the agent’s “system message” that shapes all LLM interactions.
LLM Configuration

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| llmID | LLMProviderID? | nil | LLM provider to use |
| llm | String? | nil | Legacy string-based provider (deprecated) |
| temperature | Double? | 0.7 | LLM temperature (0.0-1.0) |
| maxTokens | Int? | nil | Maximum tokens per response |
let agent = Agent(
    role: "Creative Writer",
    purpose: "Generate engaging marketing content",
    context: "Expert copywriter with SEO knowledge",
    llmID: .anthropic,    // Use Anthropic Claude
    temperature: 0.9,     // High creativity
    maxTokens: 2000       // Longer responses
)
Use lower temperatures (0.1-0.3) for analytical tasks and higher (0.7-0.9) for creative work.
Capabilities

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| tools | [String]? | nil | Available tool names |
| allowDelegation | Bool | false | Can delegate to other agents |
| maxIter | Int? | 15 | Maximum reasoning iterations |
| maxRPM | Int? | nil | Rate limit (requests per minute) |
| maxExecutionTime | TimeInterval? | nil | Maximum execution time |
let agent = Agent(
    role: "Research Specialist",
    purpose: "Conduct comprehensive research",
    context: "Expert researcher with analytical skills",
    tools: [
        "web_search",
        "data_analyzer",
        "chart_generator",
        "report_writer"
    ],
    allowDelegation: true,
    maxIter: 20,    // More iterations for complex research
    maxRPM: 60      // 60 requests per minute
)
Memory & Knowledge

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| memory | Bool | false | Enable short-term memory |
| longTermMemory | Bool | false | Enable long-term memory storage |
| entityMemory | Bool | false | Track named entities |
| memoryConfig | MemoryConfiguration? | nil | Memory system configuration |
| knowledgeSources | [String]? | nil | Knowledge base file paths |
let agent = Agent(
    role: "Personal Assistant",
    purpose: "Provide personalized assistance",
    context: "Helpful assistant with memory of user preferences",
    memory: true,            // Short-term memory
    longTermMemory: true,    // Persistent storage
    entityMemory: true,      // Track people, places, things
    knowledgeSources: [
        "./company-policies.pdf",
        "./product-catalog.json",
        "./faq-database.txt"
    ]
)
Memory systems consume additional resources. Enable only when needed for context retention.
Execution & Output

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| verbose | Bool | false | Enable detailed logging |
| stepCallback | String? | nil | Callback for step events |
| respectContextWindow | Bool | true | Enforce context limits |
| cacheHandler | CacheHandler? | nil | Custom cache implementation |
| systemTemplate | String? | nil | Custom system message template |
| promptTemplate | String? | nil | Custom prompt template |
| responseTemplate | String? | nil | Custom response template |
let agent = Agent(
    role: "Debug Assistant",
    purpose: "Help debug code issues",
    context: "Experienced debugger with knowledge of Swift",
    verbose: true,                 // Detailed logging
    respectContextWindow: true,    // Prevent token overflow
    systemTemplate: """
    You are {role}.
    Your purpose: {purpose}
    Background: {context}
    Always provide clear explanations with code examples.
    """
)
State Properties
| Property | Type | Description |
| --- | --- | --- |
| currentTask | ORTask? | Currently executing task |
| taskHistory | [ORTask] | Completed task history |
| totalUsageMetrics | UsageMetrics | Cumulative usage statistics |
| executionCount | Int | Number of tasks executed |
| averageExecutionTime | TimeInterval | Average task completion time |
State properties are automatically managed during agent execution and provide insights into agent performance.
Critical Parameters
Role, Purpose, and Context
The most critical parameters that define agent behavior:
Role: Define the Agent's Identity
The role establishes who the agent is. Be specific and professional:
// Good roles
role: "Senior Financial Analyst"
role: "Technical Writer specializing in API documentation"
role: "Customer Support Specialist"

// Poor roles
role: "Helper"
role: "AI"
role: "Agent"
Use professional titles that reflect real-world expertise levels and specializations.
Purpose: State the Primary Objective
The purpose clarifies what the agent should accomplish:
// Good purposes
purpose: "Analyze financial statements and provide investment recommendations"
purpose: "Create comprehensive API documentation from code specifications"
purpose: "Resolve customer inquiries and escalate complex issues"

// Poor purposes
purpose: "Help with stuff"
purpose: "Do tasks"
purpose: "Be useful"
Context: Provide Background and Expertise
The context gives the agent detailed background knowledge:
context: """
You are a senior financial analyst with 15+ years of experience in:
- Equity research and valuation modeling
- Financial statement analysis (10-K, 10-Q reports)
- Market trend analysis and forecasting
- Risk assessment and portfolio management
- Regulatory compliance (SEC, FINRA)

Your analysis is data-driven and considers:
- Industry benchmarks and peer comparisons
- Macroeconomic factors and market conditions
- Technical and fundamental indicators
- Risk-adjusted returns

Always provide specific, actionable recommendations backed by data.
Use appropriate financial terminology and explain complex concepts clearly.
"""
Avoid generic context. Provide specific expertise, methodologies, and guidelines that shape agent behavior.
Temperature Configuration
Temperature controls randomness and creativity:
Low (0.0-0.3)
Use for:
Data analysis
Code generation
Factual reporting
Structured outputs
Mathematical tasks
let analyst = Agent(
    role: "Data Analyst",
    purpose: "Analyze datasets and generate reports",
    context: "Expert in statistical analysis",
    temperature: 0.2    // Deterministic, focused
)
Medium (0.4-0.7)
Use for:
General assistance
Balanced tasks
Problem-solving
Question answering
Standard workflows
let assistant = Agent(
    role: "General Assistant",
    purpose: "Help with various tasks",
    context: "Versatile assistant with broad knowledge",
    temperature: 0.5    // Balanced approach
)
High (0.8-1.0)
Use for:
Creative writing
Brainstorming
Marketing content
Story generation
Artistic tasks
let writer = Agent(
    role: "Creative Writer",
    purpose: "Generate engaging marketing copy",
    context: "Award-winning copywriter",
    temperature: 0.9    // Highly creative
)
Memory and Context
Memory Systems
OrbitAI provides multiple memory systems for context retention:
Short-Term Memory
Enabled with memory: true. Retains information within a single conversation or task sequence. Use cases:
Multi-turn conversations
Sequential task workflows
Context building within sessions
Long-Term Memory
Enabled with longTermMemory: true. Persists information across sessions and executions. Use cases:
User preference tracking
Historical data retention
Cross-session learning
Entity Memory
Enabled with entityMemory: true. Tracks and remembers named entities (people, places, organizations). Use cases:
Customer relationship management
Knowledge graph building
Entity relationship tracking
Knowledge Sources
Configured with knowledgeSources: [paths]. Load external knowledge from files.
knowledgeSources: [
    "./docs/api-reference.md",
    "./data/product-catalog.json"
]
Supported formats:
PDF documents
Markdown files
JSON data
Plain text
Memory Configuration
Fine-tune memory behavior with MemoryConfiguration:
let memoryConfig = MemoryConfiguration(
    maxMemoryItems: 100,                          // Maximum stored items
    persistencePath: "./memory",                  // Storage location
    embeddingModel: "text-embedding-ada-002",
    similarityThreshold: 0.75,                    // Relevance threshold
    compressionEnabled: true                      // Automatic summarization
)

let agent = Agent(
    role: "Knowledge Assistant",
    purpose: "Provide informed responses using memory",
    context: "Assistant with excellent recall",
    memory: true,
    longTermMemory: true,
    memoryConfig: memoryConfig
)
Context Window Management
Manage token limits and context overflow:
let agent = Agent(
    role: "Long-Context Analyst",
    purpose: "Analyze large documents",
    context: "Expert in processing extensive content",
    respectContextWindow: true,    // Enforce limits
    maxTokens: 4000                // Per-response limit
)
When respectContextWindow: true, the system automatically:
Monitors token usage
Prunes old messages when approaching limits
Retains system messages and recent context
Maintains conversation coherence
// Automatic pruning when context exceeds limits
if tokenCount > maxContextTokens * 0.8 {
    messages = pruneContext(messages, keepRecent: 10)
}
Memory Systems:
Store information externally
Retrieve relevant data as needed
Not limited by context window
Slower access (retrieval step)
Context Window:
All data in active conversation
Immediate access
Limited by token count
Faster processing
Use memory for large knowledge bases and long-term retention. Keep recent, relevant information in the context window.
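As a sketch of that guidance, the configuration below keeps large reference material in knowledge sources (retrieved on demand) while the context window is reserved for recent conversation; the file paths are illustrative:

// Large material lives in knowledge sources; memory retains recent and
// cross-session context; respectContextWindow keeps the active prompt bounded.
let supportAgent = Agent(
    role: "Product Support Specialist",
    purpose: "Answer product questions using the documentation set",
    context: "Support specialist familiar with the product line",
    memory: true,
    longTermMemory: true,
    knowledgeSources: [
        "./docs/user-guide.pdf",      // illustrative paths
        "./docs/release-notes.md"
    ],
    respectContextWindow: true,
    maxTokens: 2000
)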
Execution Control
Iteration and Reasoning
Control how agents reason through complex problems:
let agent = Agent(
    role: "Problem Solver",
    purpose: "Solve complex analytical problems",
    context: "Expert in systematic problem-solving",
    maxIter: 20,             // Allow up to 20 reasoning steps
    allowDelegation: true    // Can delegate subtasks
)
The maxIter parameter limits the number of reasoning cycles, preventing infinite loops while allowing thorough analysis.
Rate Limiting
Prevent API throttling and control costs:
let agent = Agent(
    role: "High-Volume Assistant",
    purpose: "Handle multiple concurrent requests",
    context: "Efficient assistant for batch processing",
    maxRPM: 60,               // 60 requests per minute
    maxExecutionTime: 300     // 5 minute timeout
)
Delegation
Enable agents to delegate to other agents:
// Manager agent that can delegate
let manager = Agent(
    role: "Project Manager",
    purpose: "Coordinate complex projects across teams",
    context: "Experienced PM with delegation skills",
    allowDelegation: true,
    tools: ["task_delegator", "progress_tracker"]
)

// Worker agents for specialized tasks
let dataAgent = Agent(
    role: "Data Processor",
    purpose: "Process and transform data",
    context: "Data engineering expert"
)

let reportAgent = Agent(
    role: "Report Generator",
    purpose: "Create formatted reports",
    context: "Technical writing specialist"
)
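One way to drive this setup is to hand the coordinating task to the manager; because allowDelegation is enabled, the executor may route subtasks to the worker agents. A small sketch using the ORTask initializer shown elsewhere on this page:

// Sketch: give the top-level task to the manager agent. Delegation to
// dataAgent and reportAgent is decided at execution time by the executor.
let projectTask = ORTask(
    description: """
    Clean the raw sales data, then produce a formatted quarterly report.
    """,
    expectedOutput: "A formatted report built from the processed data",
    agent: manager.id
)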
Callbacks and Monitoring
Track agent execution with callbacks:
let agent = Agent(
    role: "Monitored Agent",
    purpose: "Execute tasks with detailed tracking",
    context: "Agent with comprehensive monitoring",
    verbose: true,                  // Log all steps
    stepCallback: "onAgentStep"     // Custom callback function
)

// Callback implementation
func onAgentStep(step: AgentStep) {
    print("Agent: \(step.agentId)")
    print("Action: \(step.action)")
    print("Thought: \(step.thought)")
    print("Observation: \(step.observation)")
}
Tools
Agents gain capabilities through tools:
let agent = Agent(
    role: "Research Analyst",
    purpose: "Conduct comprehensive market research",
    context: "Expert researcher with analytical capabilities",
    tools: [
        "web_search",        // Search the internet
        "data_analyzer",     // Analyze datasets
        "chart_generator",   // Create visualizations
        "pdf_reader",        // Read PDF documents
        "calculator",        // Perform calculations
        "report_writer"      // Generate reports
    ]
)
Built-in tools include:
Web Search: Search the internet for current information
Calculator: Perform mathematical calculations
File Operations: Read, write, and manipulate files (tools: ["file_reader", "file_writer"])
Data Analysis: Analyze and process datasets (tools: ["data_analyzer", "csv_processor"])
Code Execution: Execute code safely (tools: ["code_executor", "python_repl"])
API Integration: Call external APIs (tools: ["api_caller", "rest_client"])
You can also create domain-specific custom tools; a rough sketch follows.
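The tool-definition API itself is not covered on this page, so the snippet below is only a hypothetical sketch: the Tool protocol, its requirements, and the registration step are assumptions, not confirmed OrbitAI API. Only the string-based tool name matches how agents reference tools elsewhere in these docs.

// HYPOTHETICAL sketch: protocol name, requirements, and registration are assumed.
struct CurrencyConverterTool: Tool {
    let name = "currency_converter"
    let description = "Convert an amount between two currencies"

    func execute(_ input: String) async throws -> String {
        // A real implementation would call an exchange-rate service here.
        return "Converted: \(input)"
    }
}

// Hypothetical registration so agents can list the tool by name:
// ToolRegistry.register(CurrencyConverterTool())
// ... then reference it with: tools: ["currency_converter"]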
Agents automatically select appropriate tools based on:
Task requirements: Tools mentioned in task description
Agent configuration: Available tools in agent’s tool list
LLM reasoning: Model determines when tools are needed
Context: Previous tool usage and results
// Agent selects tools based on task
let task = ORTask(
    description: """
    Research current AI trends, analyze the data,
    and create a visualization chart.
    """,
    expectedOutput: "Research report with charts"
)

// Agent with multiple tools
let agent = Agent(
    role: "Research Analyst",
    purpose: "Conduct and visualize research",
    context: "Expert researcher with data viz skills",
    tools: [
        "web_search",        // Used for research
        "data_analyzer",     // Used for analysis
        "chart_generator"    // Used for visualization
    ]
)

// Agent automatically selects tools in order:
// 1. web_search for research
// 2. data_analyzer for analysis
// 3. chart_generator for visualization
Structured Outputs
Type-Safe Output Generation
Use Swift’s Codable for structured responses:
struct AnalysisReport: Codable, Sendable {
    let title: String
    let executiveSummary: String
    let findings: [Finding]
    let recommendations: [Recommendation]
    let confidence: Double
    let generatedAt: Date

    struct Finding: Codable, Sendable {
        let category: String
        let description: String
        let impact: Impact
        let evidence: [String]
    }

    struct Recommendation: Codable, Sendable {
        let title: String
        let description: String
        let priority: Priority
        let estimatedCost: Double?
    }

    enum Impact: String, Codable {
        case high, medium, low
    }

    enum Priority: String, Codable {
        case critical, high, medium, low
    }
}

// Agent configured for structured output
let agent = Agent(
    role: "Business Analyst",
    purpose: "Generate structured analysis reports",
    context: "Expert analyst with strong reporting skills",
    temperature: 0.3    // Lower for consistent structure
)

// Task with structured output
let task = ORTask.withStructuredOutput(
    description: "Analyze Q4 2024 business performance",
    expectedType: AnalysisReport.self,
    agent: agent.id
)
JSON Schema Validation
Define schemas for output validation:
let reportSchema = JSONSchema(
    type: .object,
    properties: [
        "title": JSONSchema(type: .string),
        "summary": JSONSchema(type: .string),
        "metrics": JSONSchema(
            type: .object,
            properties: [
                "revenue": JSONSchema(type: .number),
                "growth": JSONSchema(type: .number),
                "customers": JSONSchema(type: .integer)
            ]
        ),
        "recommendations": JSONSchema(
            type: .array,
            items: .init(value: JSONSchema(type: .string))
        )
    ],
    required: ["title", "summary", "metrics"]
)

let task = ORTask(
    description: "Generate quarterly business report",
    expectedOutput: "Structured report with metrics and recommendations",
    outputFormat: .structured(reportSchema)
)
Use structured outputs for:
API integrations
Database storage
UI rendering
Data pipelines
System integrations
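When consuming a structured result in one of these integrations, the generated JSON can be decoded back into the Swift type. The sketch below assumes the task output exposes its text as rawOutput (as in the debugging examples later on this page) and that the model returned valid JSON:

import Foundation

// Sketch: decode a structured task result into the AnalysisReport type above.
func decodeReport(from output: TaskOutput) throws -> AnalysisReport {
    guard let data = output.rawOutput.data(using: .utf8) else {
        throw OrbitAIError.taskExecutionFailed("Output was not valid UTF-8")
    }
    let decoder = JSONDecoder()
    decoder.dateDecodingStrategy = .iso8601   // assumed format for generatedAt
    return try decoder.decode(AnalysisReport.self, from: data)
}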
Best Practices
Agent Design
Single Responsibility
Design agents with one clear purpose.
✅ Good:
role: "Email Response Specialist"
purpose: "Draft professional email responses"
❌ Bad:
role: "Everything Agent"
purpose: "Do all tasks"

Domain Expertise
Provide specific, relevant expertise.
✅ Good:
context: """
Expert in GDPR compliance with knowledge of:
- Data protection principles
- Legal requirements
- Implementation best practices
"""
❌ Bad:
context: "Knows about privacy"

Appropriate Tools
Only include necessary tools.
✅ Good:
tools: ["web_search", "data_analyzer"]
// Relevant for research tasks
❌ Bad:
tools: ["web_search", "image_editor",
        "video_processor", "code_executor"]
// Too many unrelated tools

Temperature Tuning
Match temperature to task type.
✅ Good:
// Data analysis
temperature: 0.2
// Creative writing
temperature: 0.9
❌ Bad:
// Always using default
temperature: 0.7
Enable memory only when needed:
// Short tasks - no memory needed
let quickAgent = Agent(
    role: "Quick Responder",
    purpose: "Answer simple questions",
    context: "General knowledge assistant",
    memory: false    // No memory overhead
)

// Multi-turn conversations - enable memory
let conversationalAgent = Agent(
    role: "Conversational Assistant",
    purpose: "Engage in ongoing conversations",
    context: "Friendly assistant with good memory",
    memory: true,
    longTermMemory: true
)
Benefits:
Reduced resource usage
Faster execution
Lower storage costs
Manage context efficiently:
let agent = Agent(
    role: "Document Processor",
    purpose: "Process long documents",
    context: "Expert in document analysis",
    respectContextWindow: true,    // Auto-manage
    maxTokens: 2000                // Per-response limit
)
Strategies:
Enable respectContextWindow for auto-pruning
Set appropriate maxTokens limits
Use knowledge sources for large documents
Implement chunking for very long content (see the sketch below)
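A simple chunking helper needs no OrbitAI-specific API; the sketch below splits a long document into word-based chunks that can then be processed as separate tasks and summarized afterwards (the chunk size is illustrative):

// Plain Swift sketch: split long text into roughly equal word-based chunks.
func chunkDocument(_ text: String, maxWordsPerChunk: Int = 800) -> [String] {
    let words = text.split(separator: " ")
    var chunks: [String] = []
    var start = 0
    while start < words.count {
        let end = min(start + maxWordsPerChunk, words.count)
        chunks.append(words[start..<end].joined(separator: " "))
        start = end
    }
    return chunks
}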
Prevent throttling:
let agent = Agent(
    role: "Batch Processor",
    purpose: "Process items in batches",
    context: "Efficient batch processing agent",
    maxRPM: 50,               // Stay under API limits
    maxExecutionTime: 600     // 10 minute timeout
)
Benefits:
Avoid API rate limit errors
Control costs
Predictable performance
Security and Safety
Input Validation
Validate all inputs and parameters:
// Sanitize user input
let sanitizedInput = userInput
    .trimmingCharacters(in: .whitespacesAndNewlines)
    .replacingOccurrences(of: #"<script.*?</script>"#,
                          with: "",
                          options: .regularExpression)
Access Control
Limit agent access to sensitive operations:
let publicAgent = Agent(
    role: "Public Assistant",
    purpose: "Help public users",
    context: "Public-facing assistant",
    tools: [
        "web_search",    // Safe
        "calculator"     // Safe
    ]
    // No file system access
    // No database access
    // No email sending
)
Output Filtering
Filter potentially harmful outputs:
let agent = Agent(
    role: "Content Generator",
    purpose: "Generate user-facing content",
    context: "Professional content creator",
    guardrails: [
        .noHarmfulContent,
        .noPII,    // No personal information
        .contentFilter
    ]
)
Audit Logging
Track agent actions for security:
let agent = Agent(
    role: "Privileged Agent",
    purpose: "Perform sensitive operations",
    context: "Authorized for sensitive tasks",
    verbose: true,    // Detailed logging
    tools: ["database_access", "file_writer"]
)
// Monitor logs for suspicious activity
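For a lightweight audit trail, the step callback documented earlier can append each step to a log file. This is a sketch: the file path and entry format are illustrative, and the AgentStep fields follow the callback example shown above.

import Foundation

// Sketch: append each agent step to an audit log file (path is illustrative).
func onAgentStep(step: AgentStep) {
    let entry = "\(Date()) agent=\(step.agentId) action=\(step.action)\n"
    let url = URL(fileURLWithPath: "./agent-audit.log")
    guard let data = entry.data(using: .utf8) else { return }
    if let handle = try? FileHandle(forWritingTo: url) {
        defer { try? handle.close() }
        _ = try? handle.seekToEnd()
        try? handle.write(contentsOf: data)
    } else {
        try? data.write(to: url)   // create the file on first write
    }
}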
Troubleshooting
Agent Not Producing Expected Results
Symptoms: Agent outputs are off-target or low quality.
Common causes:
Vague or unclear role/purpose/context
Inappropriate temperature setting
Missing necessary tools
Insufficient context information
Solutions:
// Before: Vague configuration
let vague = Agent(
    role: "Helper",
    purpose: "Help users",
    context: "Helpful assistant"
)

// After: Clear, specific configuration
let specific = Agent(
    role: "Technical Support Specialist",
    purpose: "Diagnose and resolve technical issues for software users",
    context: """
    Expert technical support specialist with:
    - 5+ years experience in software troubleshooting
    - Deep knowledge of common technical issues
    - Excellent problem-solving and communication skills
    - Systematic approach to diagnostics

    Always:
    1. Ask clarifying questions
    2. Provide step-by-step solutions
    3. Explain technical concepts clearly
    4. Verify issue resolution
    """,
    temperature: 0.4,    // Balanced for support tasks
    tools: ["knowledge_base", "diagnostic_tool"]
)
Task Execution Timeouts
Symptoms: Tasks fail with timeout errors.
Common causes:
Insufficient maxExecutionTime
Complex reasoning requiring many iterations
Slow tool execution
LLM provider latency
Solutions:
// Increase execution time and iterations
let agent = Agent(
    role: "Complex Problem Solver",
    purpose: "Solve intricate problems",
    context: "Expert problem solver",
    maxExecutionTime: 1800,    // 30 minutes
    maxIter: 25,               // More reasoning steps
    tools: ["complex_analyzer"]
)

// Or use async execution
let task = ORTask(
    description: "Long-running analysis",
    expectedOutput: "Comprehensive analysis",
    agent: agent.id,
    async: true    // Don't block
)
Monitor averageExecutionTime to set appropriate timeouts.
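One way to act on that metric (a sketch using the state properties listed earlier; the multiplier and floor are illustrative):

// Sketch: derive a timeout budget from the agent's observed average runtime.
let average = await agent.averageExecutionTime
let suggestedTimeout = max(120, average * 3)   // seconds
print("Average: \(average)s, suggested maxExecutionTime: \(suggestedTimeout)s")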
High Memory Usage
Symptoms: High memory usage or memory-related errors.
Common causes:
Memory enabled but not needed
Too many memory items stored
Large knowledge sources
Memory not being pruned
Solutions:
// Configure memory limits
let memoryConfig = MemoryConfiguration(
    maxMemoryItems: 50,          // Limit storage
    compressionEnabled: true,    // Auto-summarize
    pruneOldItems: true          // Remove old entries
)

let agent = Agent(
    role: "Memory-Efficient Agent",
    purpose: "Work efficiently with memory",
    context: "Efficient agent design",
    memory: true,
    memoryConfig: memoryConfig
)

// Or disable memory if not needed
let simpleAgent = Agent(
    role: "Simple Task Agent",
    purpose: "Handle simple one-off tasks",
    context: "Quick task handler",
    memory: false    // No memory overhead
)
Inconsistent Outputs
Symptoms: Agent produces different outputs for the same inputs.
Common causes:
High temperature setting
Non-deterministic tool behavior
Memory state differences
Random sampling in LLM
Solutions:
// Lower temperature for consistency
let consistentAgent = Agent(
    role: "Data Processor",
    purpose: "Process data consistently",
    context: "Reliable data processing agent",
    temperature: 0.0,    // Deterministic
    tools: ["data_processor"]
)

// Use structured outputs
struct ProcessingResult: Codable, Sendable {
    let status: String
    let processedItems: Int
    let errors: [String]
}

let task = ORTask.withStructuredOutput(
    description: "Process data batch",
    expectedType: ProcessingResult.self,
    agent: consistentAgent.id
)
Rate Limit Errors
Symptoms: API rate limit exceeded errors.
Common causes:
No maxRPM configured
Too many concurrent agents
Rapid successive requests
Insufficient retry backoff
Solutions:
// Configure rate limiting
let rateLimitedAgent = Agent(
    role: "Controlled Agent",
    purpose: "Respect API rate limits",
    context: "Rate-aware agent",
    maxRPM: 50,    // Limit requests per minute
    tools: ["api_caller"]
)

// Implement exponential backoff
func executeWithBackoff(
    agent: Agent,
    task: ORTask,
    maxRetries: Int = 3
) async throws -> TaskOutput {
    var lastError: Error?
    for attempt in 1...maxRetries {
        do {
            return try await executor.executeTask(
                agent: agent,
                task: task
            )
        } catch let error as OrbitAIError {
            if case .llmRateLimitExceeded = error {
                let delay = pow(2.0, Double(attempt))
                try await Task.sleep(for: .seconds(delay))
                lastError = error
            } else {
                throw error
            }
        }
    }
    throw lastError ?? OrbitAIError.taskExecutionFailed("Max retries exceeded")
}
Debugging Strategies
Enable Verbose Logging
let debugAgent = Agent(
    role: "Debug Subject",
    purpose: "Agent being debugged",
    context: "Test agent for debugging",
    verbose: true,    // Detailed logs
    stepCallback: "onAgentStep"
)

func onAgentStep(step: AgentStep) {
    print("=== Agent Step ===")
    print("Thought: \(step.thought)")
    print("Action: \(step.action)")
    print("Observation: \(step.observation)")
    print("================")
}
Monitor Metrics
// Check agent performance
print("Execution count: \(await agent.executionCount)")
print("Avg time: \(await agent.averageExecutionTime)s")
print("Total tokens: \(await agent.totalUsageMetrics.totalTokens)")

// Analyze tool usage
if let taskOutput = result.output {
    for toolUsage in taskOutput.toolsUsed {
        print("Tool: \(toolUsage.toolName)")
        print("Time: \(toolUsage.executionTime)s")
        print("Success: \(toolUsage.success)")
    }
}
Test with Simple Tasks
// Start simple
let simpleTask = ORTask(
    description: "What is 2 + 2?",
    expectedOutput: "The number 4"
)

let result = try await executor.executeTask(
    agent: agent,
    task: simpleTask
)

// Verify basic functionality before complex tasks
assert(result.rawOutput.contains("4"))
Isolate Components
// Test without tools
let noToolsAgent = Agent(
    role: agent.role,
    purpose: agent.purpose,
    context: agent.context,
    tools: nil    // Remove tools to isolate issue
)

// Test without memory
let noMemoryAgent = Agent(
    role: agent.role,
    purpose: agent.purpose,
    context: agent.context,
    memory: false,
    longTermMemory: false
)

// Identify which component causes issues
Next Steps