Overview
OrbitAI Tasks are the fundamental execution units in the OrbitAI framework. A task represents a specific piece of work that needs to be completed by an AI agent, with defined requirements, constraints, and expected outputs.
Type Safety: Built with Swift's strong typing for compile-time safety
Async/Await: Native Swift concurrency for efficient execution
Structured Outputs: Support for text and structured data outputs
Validation: Built-in guardrails and validation mechanisms
Monitoring: Comprehensive telemetry and performance tracking
Flexibility: Various output formats and execution patterns
Architecture
OrbitAI’s task system consists of several key components working together to orchestrate task execution:
ORTask (Task Definition)
↓
TaskExecutionEngine (Orchestrator)
├── Agent Selection
├── ToolsHandler Integration
├── MemoryStorage Access
├── KnowledgeBase Queries
└── TelemetryManager Tracking
↓
TaskResult → TaskOutput
Key Components
ORTask: The main task definition structure containing all task specifications, requirements, and configuration.
TaskExecutionEngine: Central orchestrator that manages task execution across agents, handles concurrency, and enforces constraints.
TaskResult: Success/failure result wrapper that encapsulates execution outcomes and errors.
TaskOutput: Contains execution results, metadata, usage metrics, and tool execution information.
ExecutionContext: Provides execution context, including metadata and outputs from dependent tasks.
Task Parameters
Core Properties
Required

Property | Type | Description
id | OrbitAIID | Unique identifier (UUID) for the task
description | String | Detailed description of what to accomplish
expectedOutput | String | Specification of output format and content
let task = ORTask(
    description: "Analyze Q4 2024 sales data and identify top products",
    expectedOutput: "JSON report with top 10 products by revenue"
)
Optional

Property | Type | Description
agent | OrbitAIID? | Optional agent ID assignment
tools | [String]? | List of tool names available
async | Bool | Asynchronous execution support
context | [OrbitAIID]? | IDs of tasks providing context
outputFormat | OutputFormat | Format specification (default: .text)
outputFile | String? | File path for saving output
humanInput | Bool | Requires human input
let task = ORTask(
    description: "Generate market analysis report",
    expectedOutput: "Comprehensive analysis in markdown",
    agent: analystAgent.id,
    tools: ["web_search", "data_analyzer"],
    outputFormat: .markdown,
    context: [researchTask.id]
)
Execution Control

Property | Type | Description
maxExecutionTime | TimeInterval? | Maximum execution time (seconds)
guardrails | [TypeSafeGuardrail]? | Safety and validation constraints
retryLimit | Int? | Maximum retry attempts
validationStrictness | ValidationStrictness? | Validation strictness level
let task = ORTask(
    description: "Complex data processing task",
    expectedOutput: "Processed data report",
    maxExecutionTime: 600, // 10 minutes
    guardrails: [.noHarmfulContent, .tokenLimit],
    retryLimit: 3,
    validationStrictness: .standard
)
Execution State

Property | Type | Description
status | TaskStatus | Current execution status
result | TaskResult? | Execution result (after completion)
startTime | Date? | When execution began
endTime | Date? | When execution completed
executionTime | TimeInterval? | Total execution duration
usageMetrics | UsageMetrics? | Token and usage metrics
Execution state properties are automatically managed by the TaskExecutionEngine during task execution.
Output Formats

OrbitAI supports multiple output formats to meet different use cases:
Text: Plain text output for human-readable results
JSON: Generic JSON output for flexible data structures
Markdown: Markdown formatted text for documentation
CSV: Comma-separated values for tabular data
XML: XML structured output for legacy systems
Structured: Schema-validated structured output (outputFormat: .structured(schema))
Structured Output Example
struct ReportData: Codable, Sendable {
    let title: String
    let summary: String
    let findings: [String]
    let recommendations: [String]
    let confidence: Double
}

let task = ORTask.withStructuredOutput(
    description: "Generate quarterly business report",
    expectedType: ReportData.self,
    agent: analystAgent.id
)
Use structured outputs with Swift’s Codable types for type-safe data handling and compile-time validation.
Task Execution
Execution Patterns
Sequential
Hierarchical
Flow-Based
Tasks execute one after another, with context updated between tasks:

let results = try await taskEngine.executeTasks(
    tasks: [researchTask, analysisTask, reportTask],
    agents: availableAgents,
    process: .sequential,
    context: executionContext
)
Use when:
Tasks depend on previous results
Order of execution matters
Context needs to build progressively
Sequential execution is the default and most common pattern for task workflows.
A manager agent coordinates and delegates task assignment:

let results = try await taskEngine.executeTasks(
    tasks: complexTasks,
    agents: workerAgents,
    process: .hierarchical,
    context: executionContext,
    manager: managerAgent
)
Use when:
Complex coordination is needed
Dynamic task distribution is required
Manager oversight is beneficial
Hierarchical execution is ideal for complex projects requiring intelligent task delegation.
Complex dependency-based execution with topological sorting:

let taskFlow = TaskFlow(
    name: "Data Processing Pipeline",
    tasks: [dataIngestionTask, transformTask, analysisTask],
    stopOnFailure: true
)

let result = try await taskEngine.executeTaskFlow(
    taskFlow: taskFlow,
    agents: agents,
    context: context
)
Use when:
Tasks have complex dependencies
Parallel execution is possible
DAG-based workflows are needed
Flow-based execution automatically detects and prevents circular dependencies.
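Conceptually, a flow-based run groups tasks into dependency "waves": each wave contains only tasks whose dependencies already completed, so everything within a wave can run in parallel. The sketch below illustrates the idea with standalone, hypothetical types; it is not the framework's own implementation.

```swift
// Illustrative stand-in for a flow task with explicit dependencies.
struct FlowTask {
    let id: String
    let dependsOn: [String]
}

// Kahn-style levelling: repeatedly take every task whose dependencies
// are already done. Each returned wave can execute in parallel.
func parallelWaves(_ tasks: [FlowTask]) -> [[String]] {
    var remaining = tasks
    var done = Set<String>()
    var waves: [[String]] = []
    while !remaining.isEmpty {
        let ready = remaining.filter { $0.dependsOn.allSatisfy(done.contains) }
        if ready.isEmpty { break } // circular dependency: nothing can proceed
        waves.append(ready.map(\.id))
        done.formUnion(ready.map(\.id))
        remaining.removeAll { done.contains($0.id) }
    }
    return waves
}

let pipeline = [
    FlowTask(id: "ingest", dependsOn: []),
    FlowTask(id: "transform", dependsOn: ["ingest"]),
    FlowTask(id: "analyze", dependsOn: ["transform"]),
    FlowTask(id: "report", dependsOn: ["transform"]),
]
print(parallelWaves(pipeline))
// [["ingest"], ["transform"], ["analyze", "report"]]
```

Note how "analyze" and "report" land in the same wave: both depend only on "transform", which is exactly the situation where flow-based execution can parallelize.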
Agent Selection
The TaskExecutionEngine uses a multi-step approach for optimal agent selection:
Explicit Assignment
Respects task-agent assignments when specified:

let task = ORTask(
    description: "Security vulnerability assessment",
    expectedOutput: "Security report",
    agent: securityExpertAgent.id // Explicit assignment
)
Compatibility Scoring
Evaluates tool compatibility and role relevance:

// Engine calculates scores based on:
// - Tool availability (5 points per matching tool)
// - Domain expertise matching
// - Task complexity alignment
// - Previous success rates
LLM-Based Selection
Uses LLM reasoning for close decisions when scores are similar.
Caching
Caches selections for similar tasks to improve performance.
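The tool-availability part of the scoring step can be sketched in a few lines. Only the "5 points per matching tool" figure comes from the description above; the types, agents, and weights here are hypothetical stand-ins:

```swift
// Hypothetical candidate agent with a set of available tools.
struct CandidateAgent {
    let name: String
    let tools: Set<String>
}

// 5 points for each task tool the agent can actually provide.
func toolScore(taskTools: [String], agent: CandidateAgent) -> Int {
    taskTools.filter { agent.tools.contains($0) }.count * 5
}

let taskTools = ["web_search", "data_analyzer"]
let analyst = CandidateAgent(name: "Analyst",
                             tools: ["web_search", "data_analyzer", "chart_generator"])
let writer = CandidateAgent(name: "Writer", tools: ["web_search"])

print(toolScore(taskTools: taskTools, agent: analyst)) // 10
print(toolScore(taskTools: taskTools, agent: writer))  // 5
```

In a close race like 10 vs. 5 plus similar role scores, this is where the LLM-based tiebreak described above would kick in.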
Task Dependencies
Context Dependencies
Tasks can depend on outputs from previous tasks:
let researchTask = ORTask(
    description: "Research AI trends in healthcare",
    expectedOutput: "Research findings with key trends"
)

let analysisTask = ORTask(
    description: """
    Analyze research findings for business impact.
    Use the previous research: {task_0_output}
    """,
    expectedOutput: "Business analysis report",
    context: [researchTask.id] // Depends on research output
)
Reference previous task outputs using {task_N_output} syntax in task descriptions, where N is the task index.
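To make the placeholder mechanics concrete, here is one plausible way {task_N_output} markers could be resolved before a description reaches an agent. This is an illustrative sketch; the engine's actual substitution logic is not shown in this document and may differ.

```swift
import Foundation

// Replace each {task_N_output} marker with the output of the task at index N.
func resolvePlaceholders(_ description: String, outputs: [String]) -> String {
    var resolved = description
    for (index, output) in outputs.enumerated() {
        resolved = resolved.replacingOccurrences(of: "{task_\(index)_output}",
                                                 with: output)
    }
    return resolved
}

let template = "Analyze the findings. Use the previous research: {task_0_output}"
let resolved = resolvePlaceholders(template,
                                   outputs: ["AI adoption in healthcare grew in 2024."])
print(resolved)
// Analyze the findings. Use the previous research: AI adoption in healthcare grew in 2024.
```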
Conditional Tasks
Execute tasks based on previous results:
let conditionalTask = ConditionalTask(
    task: followUpTask,
    condition: TaskCondition(
        type: .outputContains,
        taskId: previousTask.id,
        expectedValue: "requires follow-up"
    )
)
Dependency Resolution
The system automatically handles dependency resolution:
Tasks are automatically sorted to ensure dependencies execute in the correct order:

// Automatic topological sort ensures proper execution order
private func topologicalSort(tasks: [ORTask]) throws -> [ORTask] {
    var sorted: [ORTask] = []
    var visited: Set<OrbitAIID> = []
    var visiting: Set<OrbitAIID> = []

    func visit(_ task: ORTask) throws {
        if visiting.contains(task.id) {
            throw OrbitAIError.taskExecutionFailed("Circular dependency detected")
        }
        if visited.contains(task.id) { return }
        visiting.insert(task.id)

        // Visit dependencies first
        if let dependencies = task.dependencies {
            for depId in dependencies {
                if let depTask = tasks.first(where: { $0.id == depId }) {
                    try visit(depTask)
                }
            }
        }

        visiting.remove(task.id)
        visited.insert(task.id)
        sorted.append(task)
    }

    for task in tasks {
        try visit(task)
    }
    return sorted
}
Circular Dependency Detection
The system automatically detects circular dependencies and refuses to execute them: when a cycle is found, task execution fails with OrbitAIError.taskExecutionFailed.
Task Results
Result Structure
public enum TaskResult: Codable, Sendable {
    case success(TaskOutput)
    case failure(OrbitAIError)

    public var output: TaskOutput? { /* ... */ }
    public var error: OrbitAIError? { /* ... */ }
    public var isSuccess: Bool { /* ... */ }
}
Task Output Components
public struct TaskOutput: Codable, Sendable {
    public let rawOutput: String                   // Raw output text
    public let structuredOutput: StructuredOutput? // Parsed structured data
    public let usageMetrics: UsageMetrics          // Token/resource usage
    public let toolsUsed: [ToolUsage]              // Tools that were executed
    public let agentId: OrbitAIID                  // Executing agent
    public let taskId: OrbitAIID                   // Source task
    public let timestamp: Date                     // Generation timestamp
}
Type-Safe Decoding
Direct Decoding

// Decode output to specific type
let reportData: ReportData = try taskOutput.decode(as: ReportData.self)

Safe Decoding

// Check if output can be decoded
if taskOutput.canDecode(as: ReportData.self) {
    let data = try taskOutput.decode(as: ReportData.self)
    // Process structured data
} else {
    // Fallback to raw text
    print(taskOutput.rawOutput)
}

Fallback Decoding

// Decode with automatic fallback strategies
let data = try taskOutput.decodeWithFallback(as: ReportData.self)
Fallback strategies include:
Direct decoding
Wrapper unwrapping (data, result, response, content keys)
Field normalization (alternative field names)
Partial extraction with defaults
Markdown cleaning
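Two of these strategies, markdown cleaning and wrapper unwrapping, can be sketched with plain JSONDecoder. This is only an illustration of the idea; the framework's actual decodeWithFallback internals are not shown in this document.

```swift
import Foundation

// Common wrapper keys an LLM might nest the payload under.
struct Wrapper<T: Decodable>: Decodable {
    let data: T?
    let result: T?
    let response: T?
    let content: T?
    var value: T? { data ?? result ?? response ?? content }
}

func decodeWithFallback<T: Decodable>(_ type: T.Type, from raw: String) -> T? {
    // Markdown cleaning: strip ```json fences an LLM may have added.
    var cleaned = raw.trimmingCharacters(in: .whitespacesAndNewlines)
    if cleaned.hasPrefix("```") {
        cleaned = cleaned
            .replacingOccurrences(of: "```json", with: "")
            .replacingOccurrences(of: "```", with: "")
            .trimmingCharacters(in: .whitespacesAndNewlines)
    }
    guard let data = cleaned.data(using: .utf8) else { return nil }
    let decoder = JSONDecoder()
    // Direct decoding first, then wrapper unwrapping.
    if let direct = try? decoder.decode(T.self, from: data) { return direct }
    return (try? decoder.decode(Wrapper<T>.self, from: data))?.value
}

struct Finding: Decodable, Equatable { let title: String }

let fenced = "```json\n{\"title\": \"Q4 growth\"}\n```"
let wrapped = "{\"result\": {\"title\": \"Q4 growth\"}}"
print(decodeWithFallback(Finding.self, from: fenced)!)  // Finding(title: "Q4 growth")
print(decodeWithFallback(Finding.self, from: wrapped)!) // Finding(title: "Q4 growth")
```

Both inputs are malformed in a different way, yet both recover to the same typed value, which is the point of layered fallbacks.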
Validation & Guardrails
OrbitAI includes a comprehensive type-safe guardrail system for validation and safety:
Content Safety (NoHarmfulContentGuardrail): Checks for harmful keywords and patterns in content.

guardrails: [.noHarmfulContent]

Token Limits (TokenLimitGuardrail): Enforces maximum token limits for content.

TokenLimitGuardrail(
    maxTokens: 8000,
    model: "gpt-4o"
)

Rate Limiting (RateLimitGuardrail): Prevents excessive requests from agents.

RateLimitGuardrail(
    maxRequestsPerMinute: 60
)

Custom Guardrails (OrbitGuardrail Protocol): Implement custom validation logic.

protocol OrbitGuardrail {
    func check(context: Context) async throws -> GuardrailResult
}
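A custom guardrail conformance might look like the sketch below. The Context and GuardrailResult definitions here are minimal stand-ins, since only the protocol signature appears in this document; the framework's real types will differ.

```swift
// Minimal stand-ins (assumptions) for the framework's types.
struct Context {
    let content: String
}

enum GuardrailResult {
    case passed
    case failed(reason: String)
}

protocol OrbitGuardrail {
    func check(context: Context) async throws -> GuardrailResult
}

// Example: reject outputs that appear to leak an API key.
struct NoSecretsGuardrail: OrbitGuardrail {
    func check(context: Context) async throws -> GuardrailResult {
        if context.content.contains("sk-") {
            return .failed(reason: "Possible API key in output")
        }
        return .passed
    }
}
```

Because check is async and throwing, a guardrail can also call out to external services (moderation APIs, policy engines) without blocking the engine.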
Validation Strictness
Control how strictly task outputs are validated:
public enum ValidationStrictness: String, Codable {
    case lenient  // Partial approvals accepted
    case standard // Balanced validation
    case strict   // Even partial approvals require revision
}
Manager Validation
Tasks can be validated by manager agents with feedback:
public enum TaskValidationResult {
    case approved
    case partiallyApproved(feedback: String)
    case needsRevision(feedback: String)
    case validationFailed
}
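Combining the two enums, a caller might decide whether to send a task back for revision as sketched below. The shouldRetry helper is illustrative, not part of the framework; the enum shapes mirror the definitions above.

```swift
enum ValidationStrictness { case lenient, standard, strict }

enum TaskValidationResult {
    case approved
    case partiallyApproved(feedback: String)
    case needsRevision(feedback: String)
    case validationFailed
}

func shouldRetry(_ result: TaskValidationResult,
                 strictness: ValidationStrictness) -> Bool {
    switch result {
    case .approved:
        return false
    case .partiallyApproved:
        // Strict mode sends even partial approvals back for revision.
        return strictness == .strict
    case .needsRevision, .validationFailed:
        return true
    }
}

print(shouldRetry(.partiallyApproved(feedback: "Add sources"), strictness: .lenient)) // false
print(shouldRetry(.partiallyApproved(feedback: "Add sources"), strictness: .strict))  // true
```

The only branch that depends on strictness is partiallyApproved, which matches the enum comments: lenient and standard accept partial approvals, strict does not.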
Metrics & Monitoring
Usage Metrics
Comprehensive metrics are collected for each task execution:
public struct UsageMetrics: Codable, Sendable {
    public let promptTokens: Int       // Tokens in prompts
    public let completionTokens: Int   // Tokens in responses
    public let totalTokens: Int        // Total token usage
    public let successfulRequests: Int // Successful API calls
    public let totalRequests: Int      // Total API calls made
}

public struct ToolUsage: Codable, Sendable {
    public let toolName: String            // Name of the tool used
    public let executionTime: TimeInterval // Tool execution duration
    public let success: Bool               // Execution success status
    public let inputSize: Int              // Size of tool input
    public let outputSize: Int             // Size of tool output
}
Key metrics tracked include:
Task completion duration
LLM token consumption
Tool execution statistics
Success/failure ratios
Resource consumption tracking:
Memory usage
API call counts
Cache hit rates
Concurrent task counts
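For workflow-level reporting, per-task metrics can be rolled up into totals. The aggregate helper below is illustrative (not a framework API); only the UsageMetrics field layout comes from the definition above.

```swift
// Mirrors the UsageMetrics fields shown above.
struct UsageMetrics {
    let promptTokens: Int
    let completionTokens: Int
    let totalTokens: Int
    let successfulRequests: Int
    let totalRequests: Int
}

// Sum each field across all tasks in a workflow.
func aggregate(_ metrics: [UsageMetrics]) -> UsageMetrics {
    UsageMetrics(
        promptTokens: metrics.reduce(0) { $0 + $1.promptTokens },
        completionTokens: metrics.reduce(0) { $0 + $1.completionTokens },
        totalTokens: metrics.reduce(0) { $0 + $1.totalTokens },
        successfulRequests: metrics.reduce(0) { $0 + $1.successfulRequests },
        totalRequests: metrics.reduce(0) { $0 + $1.totalRequests }
    )
}

let perTask = [
    UsageMetrics(promptTokens: 1200, completionTokens: 400, totalTokens: 1600,
                 successfulRequests: 3, totalRequests: 3),
    UsageMetrics(promptTokens: 800, completionTokens: 300, totalTokens: 1100,
                 successfulRequests: 2, totalRequests: 3),
]
let total = aggregate(perTask)
print("\(total.totalTokens) tokens, \(total.successfulRequests)/\(total.totalRequests) requests")
// 2700 tokens, 5/6 requests
```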
Best Practices
Task Design
Clear Descriptions: Write specific, actionable task descriptions.

✅ Good:
description: """
Analyze Q4 2024 sales data and provide:
1. Top 10 products by revenue
2. Regional performance trends
3. Customer segment analysis
"""

❌ Bad:
description: "Look at sales stuff"

Defined Outputs: Specify the expected output format clearly.

✅ Good:
expectedOutput: """
JSON report with:
- products: Array of {name, revenue}
- trends: Object with regional data
- analysis: String summary
"""

❌ Bad:
expectedOutput: "Some insights"

Appropriate Scope: Keep tasks focused and reasonably sized. Break large tasks into smaller, manageable subtasks for better control and debugging.

Tool Selection: Choose tools relevant to the task's requirements.

tools: [
    "csv_analyzer",
    "chart_generator",
    "statistical_analyzer"
]
Error Handling
Configure appropriate error handling and retry mechanisms:
let task = ORTask(
    description: "Generate comprehensive market analysis",
    expectedOutput: "Market analysis report with trends",
    maxExecutionTime: 600, // 10 minutes
    retryLimit: 3,
    guardrails: [.noHarmfulContent, .tokenLimit],
    validationStrictness: .standard
)
Always set reasonable execution timeouts to prevent tasks from running indefinitely.
Structured Outputs
Use strongly-typed output structures when possible:
struct MarketAnalysis: Codable, Sendable {
    let executiveSummary: String
    let marketTrends: [TrendData]
    let competitorAnalysis: [CompetitorInfo]
    let recommendations: [Recommendation]
    let confidence: Double
}

let task = ORTask.withStructuredOutput(
    description: "Generate Q4 2024 market analysis for tech sector",
    expectedType: MarketAnalysis.self,
    agent: analystAgent.id
)
Troubleshooting
Task Timeouts

Symptoms: Tasks fail with timeout errors.

Causes:
Insufficient execution time limits
Complex tasks requiring more processing
Agent inefficiency or loops

Solutions:

// Increase timeout for complex tasks
let task = ORTask(
    description: "Complex data analysis task",
    expectedOutput: "Detailed analysis report",
    maxExecutionTime: 1800 // 30 minutes
)

// Use async execution for long-running tasks
let asyncTask = ORTask(
    description: "Long-running background analysis",
    expectedOutput: "Analysis results",
    async: true
)
Structured Output Parsing Failures
Symptoms: JSON parsing errors, type mismatches.

Causes:
LLM returning malformed JSON
Unexpected output format
Missing required fields

Solutions:

// Use fallback decoding with error recovery
do {
    let data = try taskOutput.decodeWithFallback(as: ExpectedType.self)
} catch {
    // Handle parsing failure gracefully
    print("Failed to parse: \(error)")
    let rawText = taskOutput.rawOutput
    // Implement custom parsing
}

// Allow retries for malformed output
let task = ORTask.withStructuredOutput(
    description: "Generate structured report",
    expectedType: ReportData.self,
    retryLimit: 2
)
Agent Selection Problems

Symptoms: Tasks assigned to inappropriate agents.

Causes:
Insufficient capability matching
Missing tool assignments
Poor role alignment

Solutions:

// Explicit agent assignment for critical tasks
let criticalTask = ORTask(
    description: "High-priority security analysis",
    expectedOutput: "Security assessment report",
    agent: securityExpertAgent.id // Explicit
)

// Better agent configuration
let specializedAgent = Agent(
    role: "Security Analyst",
    purpose: "Perform security assessments",
    context: "Expert in cybersecurity with 10+ years experience",
    tools: [
        "vulnerability_scanner",
        "threat_analyzer",
        "compliance_checker"
    ]
)
Guardrail Violations

Symptoms: Tasks fail due to guardrail violations.

Causes:
Content safety issues
Token limits exceeded
Rate limits hit

Solutions:

// Adjust guardrail thresholds
let tokenGuardrail = TokenLimitGuardrail(
    maxTokens: 8000, // Increase limit
    model: "gpt-4o"
)

// Use lenient validation for development
let task = ORTask(
    description: "Development task",
    expectedOutput: "Development output",
    validationStrictness: .lenient
)
Debugging Tips
Enable Verbose Logging
let taskEngine = TaskExecutionEngine(
    agentExecutor: executor,
    enableVerboseLogging: true
)

let verboseAgent = Agent(
    role: "Debug Agent",
    purpose: "Debugging task execution",
    context: "Debug context",
    verbose: true
)
Monitor Execution Status
// Check execution status periodically
let status = await taskEngine.getExecutionStatus()
print("Queued: \(status.queuedTasks)")
print("Active: \(status.activeTasks)")
print("Completed: \(status.completedTasks)")

// Monitor task progress
if let result = task.result {
    switch result {
    case .success(let output):
        print("Execution time: \(task.executionTime ?? 0)s")
        print("Tokens used: \(output.usageMetrics.totalTokens)")
    case .failure(let error):
        print("Task failed: \(error.localizedDescription)")
    }
}
Analyze Task Metrics
// Review detailed metrics
if let metrics = task.usageMetrics {
    print("Prompt tokens: \(metrics.promptTokens)")
    print("Completion tokens: \(metrics.completionTokens)")
    print("Success rate: \(metrics.successfulRequests)/\(metrics.totalRequests)")
}

// Analyze tool usage
if let output = task.result?.output {
    for toolUsage in output.toolsUsed {
        print("Tool: \(toolUsage.toolName)")
        print("Time: \(toolUsage.executionTime)s")
        print("Success: \(toolUsage.success)")
    }
}
Test with Simple Tasks
// Start with simple test tasks
let testTask = ORTask(
    description: "Simple test: add 2 + 2",
    expectedOutput: "The number 4",
    maxExecutionTime: 30
)

// Gradually increase complexity
let complexTask = ORTask(
    description: "Complex analysis task...",
    expectedOutput: "Detailed analysis...",
    tools: ["analyzer_tool"],
    maxExecutionTime: 300
)
Next Steps