Overview
Orbits are the top-level orchestration constructs in OrbitAI that bring together agents, tasks, and processes into cohesive, executable workflows. An orbit manages the complete lifecycle of multi-agent execution, from initialization through task distribution to result aggregation.
Orchestration: Coordinates multiple agents and tasks
Lifecycle Management: Manages execution from start to finish
LLM Integration: Automatic LLM provider setup
Memory Systems: Shared and agent-specific memory
Context Flow: Manages data flow between tasks
Telemetry: Comprehensive metrics and monitoring
What is an Orbit?
An Orbit is a complete workflow system that:
Manages agents: Configures and coordinates AI agents
Executes tasks: Runs tasks according to process type
Handles state: Tracks execution progress and results
Provides infrastructure: LLM providers, memory, tools, knowledge bases
Monitors performance: Collects metrics and usage data
Ensures reliability: Error handling and recovery mechanisms
Think of an Orbit as a “project” or “workflow instance” that contains everything needed to execute a multi-agent system.
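For example, a minimal orbit wires a single agent to a single task and runs it end to end. This is a sketch built only from the Orbit.create and orbit.start() calls documented later in this guide; the agent, task, and names are placeholders:
let summarizer = Agent(
    role: "Summarizer",
    purpose: "Summarize input text clearly and concisely",
    context: "Concise technical writer"
)
let summaryTask = ORTask(
    description: "Summarize the attached release notes",
    expectedOutput: "Three bullet points",
    agent: summarizer.id
)
// One orbit = agents + tasks + a process type
let orbit = try await Orbit.create(
    name: "Release Notes Summary",
    agents: [summarizer],
    tasks: [summaryTask],
    process: .sequential
)
let result = try await orbit.start()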
Key Characteristics
Orbits encapsulate all necessary components—agents, tasks, LLM providers, memory, and tools—making them portable and reusable.
Execution follows one of three process types (Sequential, Hierarchical, Flow-Based), providing predictable orchestration patterns.
An orbit maintains an execution context that flows through tasks, enabling agents to build on previous work and share information.
An orbit provides comprehensive visibility into execution through verbose logging, metrics, and telemetry integration.
Orbit Lifecycle
Understanding the orbit lifecycle helps you manage execution and handle edge cases effectively.
┌─────────────────────────────────────────────────────┐
│ ORBIT LIFECYCLE │
└─────────────────────────────────────────────────────┘
1. INITIALIZATION
├─ Create Orbit instance
├─ Validate configuration
├─ Setup LLM providers
├─ Initialize memory systems
└─ Register tools
↓
2. READY
├─ Orbit configured and ready
├─ Waiting for start() call
└─ All components initialized
↓
3. RUNNING
├─ Executing tasks
├─ Agents processing work
├─ Context flowing
└─ Metrics collecting
↓
4. COMPLETING
├─ Final tasks finishing
├─ Aggregating results
└─ Collecting final metrics
↓
5. COMPLETED / FAILED
├─ OrbitOutput generated
├─ Resources cleaned up
└─ Final state recorded
Lifecycle States
Initialization
Ready
Running
Completing
Completed/Failed
Phase: Orbit construction and setup
Activities:
Validate required parameters
Create internal components
Setup LLM manager
Initialize memory systems
Register tools with ToolsHandler
Prepare knowledge bases
Validate agent configurations
Validate task configurations
Code Example:
// Initialization phase
let orbit = try await Orbit. create (
name : "Data Analysis Pipeline" ,
description : "Analyze customer data and generate insights" ,
agents : [dataAgent, analysisAgent, reportAgent],
tasks : [extractTask, analyzeTask, reportTask],
process : . sequential ,
verbose : true ,
memory : true ,
usageMetrics : true
)
// Orbit is now in READY state
Common Errors:
Missing required parameters
Invalid agent configurations
LLM provider setup failures
Tool registration issues
Initialization errors are thrown immediately—catch and handle them before attempting execution.
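A sketch of catching these early, using the OrbitAIError.configuration case described under Error Handling below (agent and task variables as in the example above):
do {
    let orbit = try await Orbit.create(
        name: "Data Analysis Pipeline",
        agents: [dataAgent, analysisAgent, reportAgent],
        tasks: [extractTask, analyzeTask, reportTask]
    )
    // Safe to proceed to orbit.start() from here
} catch OrbitAIError.configuration(let message) {
    // Invalid or missing parameters: fix the configuration rather than retrying
    print("Configuration error: \(message)")
} catch {
    // Provider setup or tool registration failures also surface during creation
    print("Initialization failed: \(error)")
}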
Phase: Orbit configured and awaiting execution
State:
All components initialized
Agents registered
Tasks validated
LLM providers ready
Memory systems active
Available Operations:
// Inspect configuration
let agentCount = await orbit. getAgents (). count
let taskCount = orbit. tasks . count
let processType = orbit. process
// Add additional configuration
let customProvider = try OpenAIProvider (
model : . gpt4o ,
apiKey : apiKey
)
await orbit. configureLLMProvider (customProvider)
// Add knowledge sources
await orbit. addKnowledgeSource ( "./docs/guide.pdf" )
Transition to Running:
// Start execution
let result = try await orbit. start ()
// Orbit transitions to RUNNING state
Phase: Active task execution
Activities:
Task execution engine running
Agents processing tasks
Tools being called
Memory being accessed/updated
Context flowing between tasks
Metrics being collected
Telemetry events being recorded
Monitoring:
// During execution (in another task/thread)
Task {
while await orbit. isRunning () {
let status = await orbit. getExecutionStatus ()
print ( "Active tasks: \( status. activeTasks ) " )
print ( "Completed: \( status. completedTasks ) " )
print ( "Queued: \( status. queuedTasks ) " )
try await Task. sleep ( for : . seconds ( 5 ))
}
}
State Management:
Task states updated in real-time
Context accumulates results
Memory systems record information
Metrics aggregate continuously
The RUNNING state persists until all tasks complete or an unrecoverable error occurs.
Phase: Final task completion and result aggregation
Activities:
Last tasks finishing execution
Results being aggregated
Final metrics calculated
Memory being persisted (if enabled)
Telemetry final events sent
OrbitOutput being constructed
Duration: Usually brief (milliseconds to seconds)
Code Path:
// Internal orbit completion logic
private func completeExecution (
taskResults : [TaskResult]
) async throws -> OrbitOutput {
// Aggregate all task outputs
let outputs = taskResults. compactMap { $0 . output }
// Calculate final metrics
let totalMetrics = calculateTotalMetrics (outputs)
// Persist memory if enabled
if memory {
await memoryStorage. persist ()
}
// Create output
return OrbitOutput (
taskOutputs : outputs,
usageMetrics : totalMetrics,
executionTime : totalExecutionTime
)
}
Phase: Execution finished
Completed State:
All tasks executed successfully
OrbitOutput contains results
Metrics finalized
Resources cleaned up
let result = try await orbit. start ()
// Success - orbit in COMPLETED state
print ( "Tasks completed: \( result. taskOutputs . count ) " )
print ( "Total tokens: \( result. usageMetrics . totalTokens ) " )
print ( "Execution time: \( result. executionTime ) s" )
Failed State:
Task execution failed
Error captured
Partial results may be available
Cleanup performed
do {
let result = try await orbit. start ()
} catch let error as OrbitAIError {
// Error - orbit in FAILED state
print ( "Orbit failed: \( error ) " )
// Check partial results
let completedTasks = orbit. tasks . filter {
$0 . status == . completed
}
print ( "Completed before failure: \( completedTasks. count ) " )
}
Once in COMPLETED or FAILED state, an orbit cannot be restarted. Create a new orbit instance for re-execution.
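To re-run a workflow, rebuild the orbit from the same agents and tasks. A sketch of that pattern (the helper is illustrative; if your ORTask instances carry state from a previous run, recreate them as well):
func rerun(
    name: String,
    agents: [Agent],
    tasks: [ORTask]
) async throws -> OrbitOutput {
    // Orbits are single-use: build a fresh instance for each execution
    let freshOrbit = try await Orbit.create(
        name: name,
        agents: agents,
        tasks: tasks,
        process: .sequential
    )
    return try await freshOrbit.start()
}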
Parameters and Configuration
Core Parameters
Required
Process & Execution
Memory & Knowledge
Monitoring & Output
Additional Config
Parameter | Type | Description
name | String | Human-readable name for the orbit
agents | [Agent] | Array of agents available for task execution
tasks | [ORTask] | Array of tasks to execute
let orbit = try await Orbit. create (
name : "Content Creation Workflow" ,
agents : [researcher, writer, editor],
tasks : [researchTask, writeTask, editTask]
)
These three parameters are the minimum required to create a functioning orbit.
Parameter | Type | Default | Description
process | Process | .sequential | Execution orchestration pattern
manager | Agent? | nil | Manager agent (for hierarchical)
verbose | Bool | false | Enable detailed logging
maxConcurrentTasks | Int? | 3 | Max parallel tasks (flow-based)
let orbit = try await Orbit. create (
name : "Parallel Processing" ,
agents : workers,
tasks : processingTasks,
process : . sequential , // or .hierarchical
manager : coordinatorAgent, // required for hierarchical
verbose : true , // detailed logs
maxConcurrentTasks : 5 // parallel execution limit
)
Parameter | Type | Default | Description
memory | Bool | false | Enable short-term memory
longTermMemory | Bool | false | Enable long-term memory
entityMemory | Bool | false | Track named entities
memoryConfig | MemoryConfiguration? | nil | Memory system config
knowledgeSources | [String]? | nil | Knowledge base file paths
let orbit = try await Orbit. create (
name : "Knowledge-Based Assistant" ,
agents : [assistant],
tasks : assistanceTasks,
memory : true ,
longTermMemory : true ,
entityMemory : true ,
knowledgeSources : [
"./docs/product-guide.pdf" ,
"./data/faq.json" ,
"./policies/company-policies.md"
]
)
Memory systems enable agents to learn from interactions and maintain context across sessions.
Parameter | Type | Default | Description
usageMetrics | Bool | true | Collect usage metrics
stepCallback | String? | nil | Callback for execution steps
outputDirectory | String? | nil | Directory for output files
let orbit = try await Orbit. create (
name : "Monitored Workflow" ,
agents : agents,
tasks : tasks,
usageMetrics : true , // track token usage
stepCallback : "onStepComplete" , // execution callback
outputDirectory : "./outputs" // save outputs
)
// Callback implementation
func onStepComplete ( step : ExecutionStep) {
print ( "Step: \( step. description ) " )
print ( "Agent: \( step. agentId ) " )
print ( "Duration: \( step. duration ) s" )
}
Parameter | Type | Default | Description
description | String? | nil | Orbit description
id | OrbitAIID? | Auto-generated | Custom orbit ID
cacheHandler | CacheHandler? | nil | Custom cache implementation
telemetryManager | TelemetryManager? | Shared | Custom telemetry manager
let orbit = try await Orbit. create (
name : "Custom Orbit" ,
description : "Specialized workflow with custom config" ,
agents : agents,
tasks : tasks,
id : OrbitAIID ( uuidString : "custom-orbit-id" ),
cacheHandler : customCache,
telemetryManager : customTelemetry
)
Creating and Configuring Orbits
Basic Creation
Define Agents
Create agents with appropriate roles and capabilities:
let researcher = Agent (
role : "Research Analyst" ,
purpose : "Conduct thorough research on topics" ,
context : "Expert researcher with analytical skills" ,
tools : [ "web_search" , "data_analyzer" ]
)
let writer = Agent (
role : "Content Writer" ,
purpose : "Create engaging, well-written content" ,
context : "Professional writer with storytelling skills" ,
tools : [ "content_optimizer" , "grammar_checker" ]
)
Define Tasks
Create tasks with clear descriptions and expected outputs:
let researchTask = ORTask (
description : "Research current trends in AI for healthcare" ,
expectedOutput : "Comprehensive research report with sources" ,
agent : researcher. id
)
let writingTask = ORTask (
description : "Write article based on research: {task_0_output}" ,
expectedOutput : "1500-word article in markdown" ,
agent : writer. id ,
context : [researchTask. id ]
)
Create Orbit
Bring agents and tasks together:
let orbit = try await Orbit. create (
name : "Article Creation Pipeline" ,
description : "Research and write articles on AI topics" ,
agents : [researcher, writer],
tasks : [researchTask, writingTask],
process : . sequential ,
verbose : true
)
Execute
Start the orbit and get results:
let result = try await orbit. start ()
// Access outputs
for (index, output) in result.taskOutputs. enumerated () {
print ( "Task \( index ) : \( output. rawOutput ) " )
}
// Check metrics
print ( "Total tokens: \( result. usageMetrics . totalTokens ) " )
print ( "Execution time: \( result. executionTime ) s" )
Advanced Configuration
Custom LLM Setup
Memory Configuration
Knowledge Sources
Tool Registration
Configure specific LLM providers:
// Create orbit
let orbit = try await Orbit. create (
name : "Custom LLM Workflow" ,
agents : agents,
tasks : tasks
)
// Add primary provider
let openAIProvider = try OpenAIProvider (
model : . gpt4o ,
apiKey : openAIKey
)
await orbit. configureLLMProvider (
openAIProvider,
asDefault : true
)
// Add fallback provider
let anthropicProvider = AnthropicProvider (
model : . claude35Sonnet ,
apiKey : anthropicKey
)
await orbit. configureLLMProvider (anthropicProvider)
// Execute with configured providers
let result = try await orbit. start ()
Set up advanced memory systems:
// Configure memory
let memoryConfig = MemoryConfiguration (
maxMemoryItems : 100 ,
persistencePath : "./memory/orbit-memory" ,
embeddingModel : "text-embedding-ada-002" ,
similarityThreshold : 0.75 ,
compressionEnabled : true ,
autoSummarize : true
)
let orbit = try await Orbit. create (
name : "Memory-Enhanced Workflow" ,
agents : agents,
tasks : tasks,
memory : true ,
longTermMemory : true ,
entityMemory : true ,
memoryConfig : memoryConfig
)
Add external knowledge:
let orbit = try await Orbit. create (
name : "Knowledge-Based System" ,
agents : [knowledgeAgent],
tasks : queryTasks,
knowledgeSources : [
// PDF documents
"./docs/product-manual.pdf" ,
"./docs/api-reference.pdf" ,
// Markdown files
"./wiki/getting-started.md" ,
"./wiki/advanced-topics.md" ,
// JSON data
"./data/product-catalog.json" ,
"./data/customer-segments.json" ,
// Plain text
"./data/faq.txt" ,
"./policies/terms-of-service.txt"
]
)
// Or add sources after creation
await orbit. addKnowledgeSource ( "./new-doc.pdf" )
Register custom tools:
// Create orbit
let orbit = try await Orbit. create (
name : "Custom Tools Workflow" ,
agents : agents,
tasks : tasks
)
// Register custom tools
let toolsHandler = ToolsHandler. shared
await toolsHandler. registerTool ( CustomDatabaseTool ())
await toolsHandler. registerTool ( CustomAPITool ())
await toolsHandler. registerTool ( CustomAnalysisTool ())
// Agents can now use these tools
let agent = Agent (
role : "Data Processor" ,
purpose : "Process data using custom tools" ,
context : "Specialist in custom data operations" ,
tools : [
"custom_database" ,
"custom_api" ,
"custom_analysis"
]
)
Execution Process
Execution Flow
Sequential
Hierarchical
Flow-Based
orbit.start()
↓
Initialize LLM Manager
↓
Load Knowledge Sources
↓
Setup Memory Systems
↓
Create Task Execution Engine
↓
Execute Task 1 (Agent A)
↓
Update Context with Output 1
↓
Execute Task 2 (Agent B)
↓
Update Context with Output 2
↓
Execute Task 3 (Agent C)
↓
Update Context with Output 3
↓
Aggregate Results
↓
Calculate Metrics
↓
Return OrbitOutput
Characteristics:
Strict linear execution
Each task waits for previous
Context builds sequentially
Predictable timing
orbit.start()
↓
Initialize Infrastructure
↓
Manager Agent Analyzes Workflow
↓
Manager Creates Execution Plan
↓
┌────────────────┼────────────────┐
│ │ │
Manager Delegates:
Task 1 → Agent A
Task 2 → Agent B
Task 3 → Agent C
↓ ↓ ↓
Execute in Parallel/Sequence (Manager decides)
↓ ↓ ↓
└────────────────┼────────────────┘
↓
Manager Validates Results
↓
Manager Coordinates Next Phase
↓
Aggregate and Return Results
Characteristics:
Manager-driven coordination
Dynamic task assignment
Quality validation
Adaptive execution
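A compact hierarchical setup combines the process and manager parameters from the configuration tables above; the coordinator agent here is illustrative:
let coordinator = Agent(
    role: "Project Coordinator",
    purpose: "Plan work, delegate tasks, and validate results",
    context: "Experienced delivery manager"
)
let orbit = try await Orbit.create(
    name: "Coordinated Research",
    agents: [researcher, writer, editor],
    tasks: [researchTask, writeTask, editTask],
    process: .hierarchical,   // manager-driven coordination
    manager: coordinator      // required for hierarchical execution
)
let result = try await orbit.start()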
orbit.start()
↓
Initialize Infrastructure
↓
Build Dependency Graph
↓
Topological Sort Tasks
↓
Identify Parallelizable Tasks
↓
Execute Independent Tasks in Parallel
┌────────┬────────┬────────┐
Task 1 Task 2 Task 3
(No dependencies)
└────────┴────────┴────────┘
↓
Execute Dependent Tasks
(After dependencies complete)
↓
Continue Until All Complete
↓
Aggregate and Return Results
Characteristics:
Dependency-driven
Maximum parallelization
Topologically sorted
Efficient execution
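The dependency graph is derived from each task's context list. A sketch with three illustrative tasks: the first two declare no dependencies and are eligible to run in parallel, while the third waits for both:
let fetchUsers = ORTask(
    description: "Fetch the list of active users",
    expectedOutput: "User list"
)
let fetchOrders = ORTask(
    description: "Fetch recent orders",
    expectedOutput: "Order list"
)
let joinReport = ORTask(
    description: "Combine {task_0_output} and {task_1_output} into a report",
    expectedOutput: "Combined report",
    context: [fetchUsers.id, fetchOrders.id]   // depends on both tasks above
)
// fetchUsers and fetchOrders have no context entries, so they can start
// together; joinReport runs only after both dependencies complete.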
Execution Context
The execution context flows through tasks:
public struct TaskExecutionContext : Sendable {
// Previous task outputs
public var taskOutputs: [TaskOutput]
// Orbit-level inputs
public var inputs: Metadata
// Shared memory
public var memory: MemoryStorage ?
// Knowledge base access
public var knowledgeBase: KnowledgeBase ?
// Available tools
public var availableTools: [ String ]
}
Context Flow Example:
// Task 1 executes
let output1 = TaskOutput ( rawOutput : "Research findings..." )
// Context updated
context. taskOutputs . append (output1)
// Task 2 receives context with output1
// Can reference via {task_0_output}
// Task 2 executes
let output2 = TaskOutput ( rawOutput : "Article based on research..." )
// Context updated again
context. taskOutputs . append (output2)
// Task 3 receives both outputs
// Can reference {task_0_output} and {task_1_output}
Provide dynamic inputs to orbits:
Basic Usage
Typed Inputs
Complex Inputs
Default Values
// Define task with placeholders
let task = ORTask (
description : """
Analyze {industry} market for {product} in {region}.
Focus on {timeframe} trends.
""" ,
expectedOutput : "Market analysis report"
)
let orbit = try await Orbit. create (
name : "Market Analysis" ,
agents : [analyst],
tasks : [task]
)
// Provide inputs at runtime
let inputs = OrbitInput ( Metadata ([
"industry" : . string ( "healthcare" ),
"product" : . string ( "AI diagnostics" ),
"region" : . string ( "North America" ),
"timeframe" : . string ( "2024 Q4" )
]))
let result = try await orbit. start ( inputs : inputs)
// Task description becomes:
// "Analyze healthcare market for AI diagnostics in North America.
// Focus on 2024 Q4 trends."
// Define input structure
struct AnalysisInputs : Codable {
let industry: String
let product: String
let region: String
let timeframe: String
let budget: Double
let priorities: [ String ]
}
// Create inputs
let inputData = AnalysisInputs (
industry : "healthcare" ,
product : "AI diagnostics" ,
region : "North America" ,
timeframe : "2024 Q4" ,
budget : 50000.0 ,
priorities : [ "accuracy" , "speed" , "cost" ]
)
// Convert to OrbitInput
let inputs = try OrbitInput ( from : inputData)
// Use in tasks
let task = ORTask (
description : """
Market analysis for {product} in {industry}.
Region: {region}
Period: {timeframe}
Budget: ${budget}
Priorities: {priorities}
""" ,
expectedOutput : "Detailed analysis"
)
// Nested and complex data structures
let complexInputs = OrbitInput ( Metadata ([
"config" : . dictionary ([
"api_endpoint" : . string ( "https://api.example.com" ),
"timeout" : . int ( 30 ),
"retries" : . int ( 3 )
]),
"filters" : . array ([
. string ( "active" ),
. string ( "verified" ),
. string ( "premium" )
]),
"thresholds" : . dictionary ([
"min_confidence" : . double ( 0.85 ),
"max_results" : . int ( 100 )
])
]))
// Access in task descriptions
let task = ORTask (
description : """
Fetch data from {config.api_endpoint}
Apply filters: {filters}
Confidence threshold: {thresholds.min_confidence}
Max results: {thresholds.max_results}
""" ,
expectedOutput : "Filtered dataset"
)
// Define tasks with default values
let task = ORTask (
description : """
Analyze {industry:technology} market in {region:global}.
Period: {timeframe:last quarter}
""" ,
expectedOutput : "Analysis report"
)
// Execute without inputs - uses defaults
let result1 = try await orbit. start ()
// "Analyze technology market in global.
// Period: last quarter"
// Execute with partial inputs - overrides some defaults
let inputs = OrbitInput ( Metadata ([
"industry" : . string ( "healthcare" )
]))
let result2 = try await orbit. start ( inputs : inputs)
// "Analyze healthcare market in global.
// Period: last quarter"
Variable Interpolation
Reference previous task outputs:
let task1 = ORTask (
description : "Research AI trends" ,
expectedOutput : "Research report"
)
let task2 = ORTask (
description : """
Write article based on:
{task_0_output}
""" ,
expectedOutput : "Article" ,
context : [task1. id ]
)
let task3 = ORTask (
description : """
Edit article for publication.
Original research: {task_0_output}
Article draft: {task_1_output}
""" ,
expectedOutput : "Final article" ,
context : [task1. id , task2. id ]
)
Variable Format:
{task_0_output} - Output of the first task
{task_1_output} - Output of the second task
{task_N_output} - Output of the task at index N (zero-based)
Conditional Interpolation
Use conditional values:
let inputs = OrbitInput ( Metadata ([
"mode" : . string ( "detailed" ),
"include_charts" : . bool ( true )
]))
let task = ORTask (
description : """
Generate report in {mode} mode.
{if include_charts}Include visualizations and charts.{endif}
{if mode=detailed}Provide comprehensive analysis with examples.{endif}
""" ,
expectedOutput : "Report"
)
Conditional interpolation is processed before task execution.
Outputs
OrbitOutput Structure
public struct OrbitOutput : Codable , Sendable {
// Task execution results
public let taskOutputs: [TaskOutput]
// Aggregated usage metrics
public let usageMetrics: UsageMetrics
// Total execution time
public let executionTime: TimeInterval
// Orbit metadata
public let orbitId: OrbitAIID
public let orbitName: String
// Completion timestamp
public let completedAt: Date
}
Accessing Results
Basic Access
Structured Outputs
Error Results
Exporting Results
let result = try await orbit. start ()
// Access task outputs
print ( "Total tasks: \( result. taskOutputs . count ) " )
for (index, output) in result.taskOutputs. enumerated () {
print ( " \n === Task \( index ) ===" )
print ( "Output: \( output. rawOutput ) " )
print ( "Agent: \( output. agentId ) " )
print ( "Tokens: \( output. usageMetrics . totalTokens ) " )
}
// Access metrics
print ( " \n === Metrics ===" )
print ( "Total tokens: \( result. usageMetrics . totalTokens ) " )
print ( "Execution time: \( result. executionTime ) s" )
// Define output type
struct AnalysisReport : Codable , Sendable {
let summary: String
let findings: [ String ]
let recommendations: [ String ]
}
// Task with structured output
let task = ORTask. withStructuredOutput (
description : "Analyze data and generate report" ,
expectedType : AnalysisReport. self ,
agent : analyst. id
)
let result = try await orbit. start ()
// Decode structured output
if let taskOutput = result.taskOutputs. first {
let report = try taskOutput. decode ( as : AnalysisReport. self )
print ( "Summary: \( report. summary ) " )
print ( "Findings: \( report. findings . count ) " )
print ( "Recommendations: \( report. recommendations . count ) " )
}
do {
let result = try await orbit. start ()
// Success case
print ( "Success: \( result. taskOutputs . count ) tasks completed" )
} catch let error as OrbitAIError {
// Error case
print ( "Orbit failed: \( error ) " )
// Check partial results
let tasks = await orbit. getTasks ()
let completed = tasks. filter { $0 . status == . completed }
let failed = tasks. filter { $0 . status == . failed }
print ( "Completed: \( completed. count ) " )
print ( "Failed: \( failed. count ) " )
// Access completed outputs
for task in completed {
if let result = task.result ? .output {
print ( "Task \( task. id ) : \( result. rawOutput ) " )
}
}
// Identify failed task
for task in failed {
print ( "Failed task: \( task. description ) " )
if let error = task.result ? . error {
print ( "Error: \( error ) " )
}
}
}
let result = try await orbit. start ()
// Export to JSON
let encoder = JSONEncoder ()
encoder. outputFormatting = . prettyPrinted
let jsonData = try encoder. encode (result)
try jsonData. write ( to : URL ( fileURLWithPath : "./results.json" ))
// Export individual outputs
for (index, output) in result.taskOutputs. enumerated () {
let filename = "./output_task_ \( index ) .txt"
try output. rawOutput . write (
toFile : filename,
atomically : true ,
encoding : . utf8
)
}
// Generate summary report
let summary = """
Orbit: \( result. orbitName )
Tasks: \( result. taskOutputs . count )
Tokens: \( result. usageMetrics . totalTokens )
Time: \( result. executionTime ) s
Completed: \( result. completedAt )
"""
try summary. write (
toFile : "./summary.txt" ,
atomically : true ,
encoding : . utf8
)
Monitoring and Telemetry
Real-Time Monitoring
Execution Status
Agent Metrics
Task Progress
Custom Telemetry
// Monitor orbit execution
let orbit = try await Orbit. create (
name : "Long-Running Workflow" ,
agents : agents,
tasks : tasks,
verbose : true
)
// Start execution in background
Task {
do {
let result = try await orbit. start ()
print ( "Orbit completed!" )
} catch {
print ( "Orbit failed: \( error ) " )
}
}
// Monitor progress
while await orbit. isRunning () {
let status = await orbit. getExecutionStatus ()
print ( "Status Update:" )
print ( " Queued: \( status. queuedTasks ) " )
print ( " Active: \( status. activeTasks ) " )
print ( " Completed: \( status. completedTasks ) " )
print ( " Failed: \( status. failedTasks ) " )
print ( " Progress: \( status. completionPercentage ) %" )
try await Task. sleep ( for : . seconds ( 5 ))
}
// Get per-agent metrics
let agents = await orbit. getAgents ()
for agent in agents {
let metrics = await agent. totalUsageMetrics
print ( " \n === \( agent. role ) ===" )
print ( "Executions: \( await agent. executionCount ) " )
print ( "Avg time: \( await agent. averageExecutionTime ) s" )
print ( "Total tokens: \( metrics. totalTokens ) " )
print ( "Success rate: \( metrics. successfulRequests ) / \( metrics. totalRequests ) " )
}
// Monitor individual tasks
let tasks = await orbit. getTasks ()
for task in tasks {
print ( " \n Task: \( task. description ) " )
print ( "Status: \( task. status ) " )
if let startTime = task.startTime {
print ( "Started: \( startTime ) " )
}
if let endTime = task.endTime {
print ( "Ended: \( endTime ) " )
print ( "Duration: \( task. executionTime ?? 0 ) s" )
}
if let metrics = task.usageMetrics {
print ( "Tokens: \( metrics. totalTokens ) " )
}
}
// Integrate with custom telemetry
let customTelemetry = CustomTelemetryManager ()
let orbit = try await Orbit. create (
name : "Monitored Workflow" ,
agents : agents,
tasks : tasks,
telemetryManager : customTelemetry,
stepCallback : "onStepComplete"
)
// Callback receives execution events
func onStepComplete ( step : ExecutionStep) {
// Log to custom system
customTelemetry. logEvent (
name : "task.step.completed" ,
properties : [
"orbit" : step. orbitId ,
"task" : step. taskId ,
"agent" : step. agentId ,
"duration" : step. duration
]
)
// Update dashboard
dashboardService. updateProgress (
orbitId : step. orbitId ,
progress : step. progressPercentage
)
}
Metrics Collection
// Enable metrics collection
let orbit = try await Orbit. create (
name : "Tracked Workflow" ,
agents : agents,
tasks : tasks,
usageMetrics : true // Default: true
)
let result = try await orbit. start ()
// Access aggregated metrics
let metrics = result. usageMetrics
print ( "Token Usage:" )
print ( " Prompt: \( metrics. promptTokens ) " )
print ( " Completion: \( metrics. completionTokens ) " )
print ( " Total: \( metrics. totalTokens ) " )
print ( " \n API Calls:" )
print ( " Successful: \( metrics. successfulRequests ) " )
print ( " Total: \( metrics. totalRequests ) " )
print ( " Success rate: \( ( Double (metrics. successfulRequests ) / Double (metrics. totalRequests )) * 100 ) %" )
// Calculate costs (example for OpenAI)
let inputCost = Double (metrics. promptTokens ) * 0.00001 // $0.01 per 1K
let outputCost = Double (metrics. completionTokens ) * 0.00003 // $0.03 per 1K
let totalCost = inputCost + outputCost
print ( " \n Estimated Cost: $ \( String ( format : "%.4f" , totalCost) ) " )
Error Handling
Error Types
Configuration Errors
When: During orbit creation
do {
let orbit = try await Orbit. create (
name : "Test" ,
agents : [], // Empty!
tasks : [] // Empty!
)
} catch OrbitAIError. configuration ( let msg) {
print ( "Config error: \( msg ) " )
}
Common Causes:
Missing required parameters
Invalid agent configurations
Empty agents or tasks arrays
Execution Errors
When: During orbit.start()
do {
let result = try await orbit. start ()
} catch OrbitAIError. taskExecutionFailed ( let msg) {
print ( "Execution failed: \( msg ) " )
}
Common Causes:
Task execution failures
Agent errors
Tool failures
Timeout exceeded
LLM Errors
When: LLM provider issues
do {
let result = try await orbit. start ()
} catch OrbitAIError. llmRateLimitExceeded ( let msg) {
print ( "Rate limited: \( msg ) " )
// Implement backoff
} catch OrbitAIError. llmRequestFailed ( let msg) {
print ( "LLM error: \( msg ) " )
}
Common Causes:
Rate limits
API key issues
Provider unavailability
Token limits exceeded
Resource Errors
When: Resource constraints
do {
let result = try await orbit. start ()
} catch OrbitAIError. memoryExhausted {
print ( "Out of memory" )
} catch OrbitAIError. timeout {
print ( "Execution timeout" )
}
Common Causes:
Memory constraints
Timeout limits
Disk space issues
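One way to guard against runaway executions is to impose an external wall-clock limit. A sketch using a throwing task group; the wrapper is not part of OrbitAI, and it assumes orbit.start() responds to task cancellation:
func startWithTimeout(
    _ orbit: Orbit,
    limit: Duration
) async throws -> OrbitOutput {
    try await withThrowingTaskGroup(of: OrbitOutput.self) { group in
        group.addTask { try await orbit.start() }
        group.addTask {
            try await Task.sleep(for: limit)
            throw OrbitAIError.timeout
        }
        // Whichever child finishes first wins; cancel the other.
        let output = try await group.next()!
        group.cancelAll()
        return output
    }
}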
Error Recovery Strategies
Retry Logic
Fallback Providers
Partial Results
Graceful Degradation
func executeOrbitWithRetry (
orbit : Orbit,
maxRetries : Int = 3
) async throws -> OrbitOutput {
var lastError: Error ?
for attempt in 1 ... maxRetries {
do {
return try await orbit. start ()
} catch let error as OrbitAIError {
lastError = error
switch error {
case . llmRateLimitExceeded :
// Exponential backoff
let delay = pow ( 2.0 , Double (attempt))
print ( "Rate limited, retrying in \( delay ) s..." )
try await Task. sleep ( for : . seconds (delay))
case . taskExecutionFailed ( let msg) where msg. contains ( "timeout" ) :
// Increase timeout for retry
print ( "Timeout, increasing limits for retry..." )
// Would need to recreate orbit with higher limits
default :
// Other errors - don't retry
throw error
}
}
}
throw lastError ?? OrbitAIError. taskExecutionFailed ( "Max retries exceeded" )
}
do {
// Try primary provider
let orbit = try await Orbit. create (
name : "Workflow" ,
agents : agents,
tasks : tasks
)
let primaryProvider = try OpenAIProvider (
model : . gpt4o ,
apiKey : openAIKey
)
await orbit. configureLLMProvider (primaryProvider, asDefault : true )
let result = try await orbit. start ()
} catch OrbitAIError. llmRequestFailed {
print ( "Primary provider failed, using fallback..." )
// Retry with fallback provider
let newOrbit = try await Orbit. create (
name : "Workflow" ,
agents : agents,
tasks : tasks
)
let fallbackProvider = AnthropicProvider (
model : . claude35Sonnet ,
apiKey : anthropicKey
)
await newOrbit. configureLLMProvider (fallbackProvider, asDefault : true )
let result = try await newOrbit. start ()
}
do {
let result = try await orbit. start ()
return result
} catch {
print ( "Orbit failed, recovering partial results..." )
// Get completed tasks
let tasks = await orbit. getTasks ()
let completed = tasks. filter { $0 . status == . completed }
if completed. isEmpty {
throw error // Nothing to recover
}
print ( "Recovered \( completed. count ) completed tasks" )
// Extract outputs
let outputs = completed. compactMap { $0 . result ? . output }
// Create partial result
let partialResult = OrbitOutput (
taskOutputs : outputs,
usageMetrics : calculatePartialMetrics (outputs),
executionTime : calculatePartialTime (tasks),
orbitId : orbit. id ,
orbitName : orbit. name ,
completedAt : Date ()
)
return partialResult
}
struct RobustOrbitExecutor {
func execute ( orbit : Orbit) async -> Result<OrbitOutput, OrbitError> {
do {
let result = try await orbit. start ()
return . success (result)
} catch let error as OrbitAIError {
// Log error
logger. error ( "Orbit execution failed: \( error ) " )
// Attempt recovery
if let recovery = await attemptRecovery ( orbit : orbit, error : error) {
return . success (recovery)
}
// Return detailed error
return . failure (OrbitError. executionFailed (
underlying : error,
partialResults : await extractPartialResults (orbit),
failedTask : await identifyFailedTask (orbit)
))
}
}
private func attemptRecovery (
orbit : Orbit,
error : OrbitAIError
) async -> OrbitOutput ? {
// Implement recovery strategies
switch error {
case . llmRateLimitExceeded :
// Wait and retry
try ? await Task. sleep ( for : . seconds ( 60 ))
return try ? await orbit. start ()
case . taskExecutionFailed :
// Try to recover partial results
return await createPartialOutput (orbit)
default :
return nil
}
}
}
Best Practices
Orbit Design
Appropriate Scope
Do: Create focused orbits for specific workflows
// Good: Focused workflow
let contentOrbit = try await Orbit. create (
name : "Content Creation" ,
agents : [researcher, writer, editor],
tasks : [research, write, edit]
)
Don’t: Create monolithic orbits
// Bad: Too many responsibilities
let everythingOrbit = try await Orbit. create (
name : "Everything" ,
agents : [ 50 different agents],
tasks : [ 100 different tasks]
)
Agent Specialization
Do: Assign specialized agents to relevant tasks
let codeAgent = Agent (
role : "Senior Developer" ,
purpose : "Write production code" ,
context : "Expert in Swift" ,
tools : [ "code_generator" , "linter" ]
)
let codeTask = ORTask (
description : "Implement authentication" ,
agent : codeAgent. id
)
Don’t: Use generic agents for everything
let genericAgent = Agent (
role : "Helper" ,
purpose : "Do stuff"
)
Memory Management
Do: Enable memory only when needed
// Conversational: needs memory
let chatOrbit = try await Orbit. create (
name : "Chat" ,
agents : [chatAgent],
tasks : chatTasks,
memory : true
)
// One-off: no memory needed
let reportOrbit = try await Orbit. create (
name : "Report" ,
agents : [reportAgent],
tasks : [reportTask],
memory : false
)
Error Boundaries
Do: Implement proper error handling
do {
let result = try await orbit. start ()
await processSuccess (result)
} catch {
await handleFailure (error)
await notifyStakeholders (error)
await cleanupResources ()
}
Don’t: Ignore errors
// Bad
try ? await orbit. start ()
Choose Appropriate Process
// Sequential: for linear workflows
let simpleWorkflow = Orbit. create (
process : . sequential
)
// Hierarchical: for complex coordination
let complexWorkflow = Orbit. create (
process : . hierarchical ,
manager : coordinator
)
// Flow-based: for parallel opportunities
let parallelWorkflow = Orbit. create (
process : . flowBased // Not a real enum, just for illustration
)
// Use TaskFlow for flow-based execution
Optimize Task Granularity
// Good: Balanced task size
let tasks = [
extractDataTask, // ~30s
transformDataTask, // ~45s
analyzeDataTask, // ~60s
reportTask // ~20s
]
// Bad: Too granular (high overhead)
let tooManyTasks = [
loadFile1, loadFile2, loadFile3, // Each 1s
parseFile1, parseFile2, parseFile3,
// ... 50 more tiny tasks
]
// Bad: Too coarse (long blocking)
let tooFewTasks = [
doEverythingTask // 30 minutes
]
Configure Concurrency
// Set appropriate limits
let orbit = try await Orbit. create (
name : "Parallel Processing" ,
agents : agents,
tasks : tasks,
maxConcurrentTasks : 5 // Balance parallelism vs resources
)
// Consider:
// - API rate limits (e.g., 60 req/min)
// - Memory constraints
// - CPU availability
Monitor and Tune
let result = try await orbit. start ()
// Analyze performance
print ( "Total time: \( result. executionTime ) s" )
// Identify bottlenecks
for (index, output) in result.taskOutputs. enumerated () {
if let task = orbit.tasks[ safe : index] {
let time = task. executionTime ?? 0
if time > 60 { // Tasks over 1 minute
print ( "Slow task: \( task. description ) ( \( time ) s)" )
}
}
}
// Optimize based on findings
Troubleshooting
Orbit Fails to Initialize
Symptoms: Errors during Orbit.create()
Common Causes:
Missing required parameters
Invalid agent/task configurations
LLM provider setup failure
Solutions:
do {
let orbit = try await Orbit. create (
name : "Test Orbit" ,
agents : agents,
tasks : tasks
)
} catch OrbitAIError. configuration ( let message) {
print ( "Configuration error: \( message ) " )
// Check specific issues
if agents. isEmpty {
print ( "Error: No agents provided" )
}
if tasks. isEmpty {
print ( "Error: No tasks provided" )
}
// Validate agent configs
for agent in agents {
if agent.role. isEmpty {
print ( "Error: Agent missing role" )
}
}
}
Tasks Not Executing
Symptoms: Orbit starts but tasks don’t run
Common Causes:
Agent assignment issues
Missing tools
LLM provider not configured
Solutions:
// Verify agents have necessary tools
for agent in agents {
let tools = await agent. getToolNames ()
print ( " \( agent. role ) tools: \( tools ) " )
}
// Verify LLM provider
let llmManager = await orbit. getLLMManager ()
let providers = await llmManager. getAvailableProviders ()
print ( "Available providers: \( providers ) " )
if providers. isEmpty {
// Configure provider
let provider = try OpenAIProvider. fromEnvironment (
model : . gpt4o
)
await orbit. configureLLMProvider (provider, asDefault : true )
}
// Enable verbose logging
let orbit = try await Orbit. create (
name : "Debug Orbit" ,
agents : agents,
tasks : tasks,
verbose : true // See what's happening
)
Context Variables Not Resolving
Symptoms: {variable} appears literally in outputs
Common Causes:
Missing context declaration
Wrong variable names
Inputs not provided
Solutions:
// Ensure context is declared
let task = ORTask (
description : "Process: {task_0_output}" ,
expectedOutput : "Result" ,
context : [previousTask. id ] // Required!
)
// Provide orbit inputs
let inputs = OrbitInput ( Metadata ([
"variable" : . string ( "value" )
]))
let result = try await orbit. start ( inputs : inputs)
// Verify variable names match
// Description: "Analyze {industry}"
// Input key must be: "industry"
Excessive Memory Usage
Symptoms: Memory consumption growing excessively
Common Causes:
Memory enabled unnecessarily
Large outputs accumulating
Knowledge bases loaded but not needed
Solutions:
// Disable unnecessary memory
let orbit = try await Orbit. create (
name : "Lightweight Orbit" ,
agents : agents,
tasks : tasks,
memory : false , // Disable if not needed
longTermMemory : false ,
entityMemory : false
)
// Configure memory limits
let memoryConfig = MemoryConfiguration (
maxMemoryItems : 50 , // Limit storage
compressionEnabled : true , // Auto-summarize
pruneOldItems : true // Remove old entries
)
// Extract only needed data
let summaryTask = ORTask (
description : "Summarize key points from research" ,
expectedOutput : "5 bullet points" // Compact output
)
Slow Execution
Symptoms: Orbit takes much longer than expected
Common Causes:
Sequential execution when parallel possible
Large context windows
Inefficient tool usage
No concurrency limits set
Solutions:
// Use flow-based for parallelization
let taskFlow = TaskFlow (
tasks : independentTasks
)
let engine = TaskExecutionEngine (
agentExecutor : executor,
maxConcurrentTasks : 5 // Enable parallelism
)
// Optimize context
let task = ORTask (
description : "Use summary: {task_1_output}" ,
context : [summaryTask. id ] // Reference specific task, not all
)
// Enable context window management
let agent = Agent (
role : "Agent" ,
purpose : "Process efficiently" ,
context : "..." ,
respectContextWindow : true // Auto-prune context
)
// Profile execution
let result = try await orbit. start ()
for (i, output) in result.taskOutputs. enumerated () {
if let task = orbit.tasks[ safe : i] {
print ( "Task \( i ) : \( task. executionTime ?? 0 ) s" )
}
}
Inconsistent Results
Symptoms: Different outputs for same inputs
Common Causes:
High temperature settings
Non-deterministic tools
Memory state differences
Random LLM sampling
Solutions:
// Lower temperature for consistency
let agent = Agent (
role : "Data Processor" ,
purpose : "Process data consistently" ,
context : "Deterministic processing" ,
temperature : 0.0 // Fully deterministic
)
// Use structured outputs
struct ProcessingResult : Codable , Sendable {
let status: String
let count: Int
}
let task = ORTask. withStructuredOutput (
description : "Process data" ,
expectedType : ProcessingResult. self ,
agent : agent. id
)
// Disable memory for stateless execution
let orbit = try await Orbit. create (
name : "Stateless Orbit" ,
agents : [agent],
tasks : [task],
memory : false // No state
)
Next Steps