Overview
Orbits are the top-level orchestration constructs in OrbitAI that bring together agents, tasks, and processes into cohesive, executable workflows. An orbit manages the complete lifecycle of multi-agent execution, from initialization through task distribution to result aggregation.
Orchestration: Coordinates multiple agents and tasks
Lifecycle Management: Manages execution from start to finish
LLM Integration: Automatic LLM provider setup
Memory Systems: Shared and agent-specific memory
Context Flow: Manages data flow between tasks
Telemetry: Comprehensive metrics and monitoring
What is an Orbit?
An Orbit is a complete workflow system that:
Manages agents: Configures and coordinates AI agents
Executes tasks: Runs tasks according to process type
Handles state: Tracks execution progress and results
Provides infrastructure: LLM providers, memory, tools, knowledge bases
Monitors performance: Collects metrics and usage data
Ensures reliability: Error handling and recovery mechanisms
Think of an Orbit as a “project” or “workflow instance” that contains everything needed to execute a multi-agent system.
Key Characteristics
Orbits encapsulate all necessary components—agents, tasks, LLM providers, memory, and tools—making them portable and reusable.
Execution follows one of three process types (Sequential, Hierarchical, Flow-Based), providing predictable orchestration patterns.
An orbit maintains an execution context that flows through tasks, enabling agents to build upon previous work and share information.
Orbits provide comprehensive visibility into execution through verbose logging, metrics, and telemetry integration.
Orbit Lifecycle
Understanding the orbit lifecycle helps you manage execution and handle edge cases effectively.
┌─────────────────────────────────────────────────────┐
│ ORBIT LIFECYCLE │
└─────────────────────────────────────────────────────┘
1. INITIALIZATION
├─ Create Orbit instance
├─ Validate configuration
├─ Setup LLM providers
├─ Initialize memory systems
└─ Register tools
↓
2. READY
├─ Orbit configured and ready
├─ Waiting for start() call
└─ All components initialized
↓
3. RUNNING
├─ Executing tasks
├─ Agents processing work
├─ Context flowing
└─ Metrics collecting
↓
4. COMPLETING
├─ Final tasks finishing
├─ Aggregating results
└─ Collecting final metrics
↓
5. COMPLETED / FAILED
├─ OrbitOutput generated
├─ Resources cleaned up
└─ Final state recorded
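The five phases above can be modeled as a simple state machine. The enum below is an illustrative sketch only (an `OrbitState` type is not part of the OrbitAI API shown here); it just encodes which forward transitions the diagram permits:

```swift
// Sketch: lifecycle states and their legal forward transitions.
// `OrbitState` is a hypothetical name for illustration only.
enum OrbitState: Equatable {
    case initialization, ready, running, completing, completed, failed

    // States reachable from the current one
    var validNextStates: [OrbitState] {
        switch self {
        case .initialization: return [.ready, .failed]      // setup can throw
        case .ready:          return [.running]             // start() called
        case .running:        return [.completing, .failed]
        case .completing:     return [.completed, .failed]
        case .completed, .failed: return []                 // terminal states
        }
    }
}
```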
Lifecycle States
Initialization
Ready
Running
Completing
Completed/Failed
Phase: Orbit construction and setup
Activities:
Validate required parameters
Create internal components
Setup LLM manager
Initialize memory systems
Register tools with ToolsHandler
Prepare knowledge bases
Validate agent configurations
Validate task configurations
Code Example:

// Initialization phase
let orbit = try await Orbit.create(
    name: "Data Analysis Pipeline",
    description: "Analyze customer data and generate insights",
    agents: [dataAgent, analysisAgent, reportAgent],
    tasks: [extractTask, analyzeTask, reportTask],
    process: .sequential,
    verbose: true,
    memory: true,
    usageMetrics: true
)
// Orbit is now in the READY state
Common Errors:
Missing required parameters
Invalid agent configurations
LLM provider setup failures
Tool registration issues
Initialization errors are thrown immediately—catch and handle them before attempting execution.
Parameters and Configuration
Core Parameters
Required
Process & Execution
Memory & Knowledge
Monitoring & Output
Additional Config
Parameter | Type | Description
name | String | Human-readable name for the orbit
agents | [Agent] | Array of agents available for task execution
tasks | [ORTask] | Array of tasks to execute
let orbit = try await Orbit.create(
    name: "Content Creation Workflow",
    agents: [researcher, writer, editor],
    tasks: [researchTask, writeTask, editTask]
)
These three parameters are the minimum required to create a functioning orbit.
Creating and Configuring Orbits
Basic Creation
Define Agents
Create agents with appropriate roles and capabilities:

let researcher = Agent(
    role: "Research Analyst",
    purpose: "Conduct thorough research on topics",
    context: "Expert researcher with analytical skills",
    tools: ["web_search", "data_analyzer"]
)

let writer = Agent(
    role: "Content Writer",
    purpose: "Create engaging, well-written content",
    context: "Professional writer with storytelling skills",
    tools: ["content_optimizer", "grammar_checker"]
)
Define Tasks
Create tasks with clear descriptions and expected outputs:

let researchTask = ORTask(
    description: "Research current trends in AI for healthcare",
    expectedOutput: "Comprehensive research report with sources",
    agent: researcher.id
)

let writingTask = ORTask(
    description: "Write article based on research: {task_0_output}",
    expectedOutput: "1500-word article in markdown",
    agent: writer.id,
    context: [researchTask.id]
)
Create Orbit
Bring agents and tasks together:

let orbit = try await Orbit.create(
    name: "Article Creation Pipeline",
    description: "Research and write articles on AI topics",
    agents: [researcher, writer],
    tasks: [researchTask, writingTask],
    process: .sequential,
    verbose: true
)
Execute
Start the orbit and get results:

let result = try await orbit.start()

// Access outputs
for (index, output) in result.taskOutputs.enumerated() {
    print("Task \(index): \(output.rawOutput)")
}

// Check metrics
print("Total tokens: \(result.usageMetrics.totalTokens)")
print("Execution time: \(result.executionTime)s")
Advanced Configuration
Custom LLM Setup
Memory Configuration
Knowledge Sources
Tool Registration
Configure specific LLM providers:

// Create orbit
let orbit = try await Orbit.create(
    name: "Custom LLM Workflow",
    agents: agents,
    tasks: tasks
)

// Add primary provider
let openAIProvider = try OpenAIProvider(
    model: .gpt4o,
    apiKey: openAIKey
)
await orbit.configureLLMProvider(
    openAIProvider,
    asDefault: true
)

// Add fallback provider
let anthropicProvider = AnthropicProvider(
    model: .claude35Sonnet,
    apiKey: anthropicKey
)
await orbit.configureLLMProvider(anthropicProvider)

// Execute with configured providers
let result = try await orbit.start()
Execution Process
Execution Flow
Sequential
Hierarchical
Flow-Based
orbit.start()
↓
Initialize LLM Manager
↓
Load Knowledge Sources
↓
Setup Memory Systems
↓
Create Task Execution Engine
↓
Execute Task 1 (Agent A)
↓
Update Context with Output 1
↓
Execute Task 2 (Agent B)
↓
Update Context with Output 2
↓
Execute Task 3 (Agent C)
↓
Update Context with Output 3
↓
Aggregate Results
↓
Calculate Metrics
↓
Return OrbitOutput
Characteristics:
Strict linear execution
Each task waits for previous
Context builds sequentially
Predictable timing
Execution Context
The execution context flows through tasks:
public struct TaskExecutionContext: Sendable {
    // Previous task outputs
    public var taskOutputs: [TaskOutput]

    // Orbit-level inputs
    public var inputs: Metadata

    // Shared memory
    public var memory: MemoryStorage?

    // Knowledge base access
    public var knowledgeBase: KnowledgeBase?

    // Available tools
    public var availableTools: [String]
}
Context Flow Example:

// Task 1 executes
let output1 = TaskOutput(rawOutput: "Research findings...")

// Context updated
context.taskOutputs.append(output1)

// Task 2 receives context with output1
// Can reference via {task_0_output}

// Task 2 executes
let output2 = TaskOutput(rawOutput: "Article based on research...")

// Context updated again
context.taskOutputs.append(output2)

// Task 3 receives both outputs
// Can reference {task_0_output} and {task_1_output}
Provide dynamic inputs to orbits:
Basic Usage
Typed Inputs
Complex Inputs
Default Values
// Define task with placeholders
let task = ORTask(
    description: """
    Analyze {industry} market for {product} in {region}.
    Focus on {timeframe} trends.
    """,
    expectedOutput: "Market analysis report"
)

let orbit = try await Orbit.create(
    name: "Market Analysis",
    agents: [analyst],
    tasks: [task]
)

// Provide inputs at runtime
let inputs = OrbitInput(Metadata([
    "industry": .string("healthcare"),
    "product": .string("AI diagnostics"),
    "region": .string("North America"),
    "timeframe": .string("2024 Q4")
]))

let result = try await orbit.start(inputs: inputs)

// Task description becomes:
// "Analyze healthcare market for AI diagnostics in North America.
//  Focus on 2024 Q4 trends."
Variable Interpolation
Reference previous task outputs:

let task1 = ORTask(
    description: "Research AI trends",
    expectedOutput: "Research report"
)

let task2 = ORTask(
    description: """
    Write article based on:
    {task_0_output}
    """,
    expectedOutput: "Article",
    context: [task1.id]
)

let task3 = ORTask(
    description: """
    Edit article for publication.
    Original research: {task_0_output}
    Article draft: {task_1_output}
    """,
    expectedOutput: "Final article",
    context: [task1.id, task2.id]
)
Variable Format:
{task_0_output} - Output of the first task
{task_1_output} - Output of the second task
{task_N_output} - Output of the Nth task
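Under the hood, this substitution amounts to a string-replacement pass over the task description. A minimal self-contained sketch (the `interpolate` helper is hypothetical, not an OrbitAI API; the framework's actual resolver may differ):

```swift
import Foundation

// Sketch: resolve {task_N_output} placeholders against prior task outputs.
// `interpolate` is a hypothetical helper for illustration only.
func interpolate(_ template: String, taskOutputs: [String]) -> String {
    var result = template
    for (index, output) in taskOutputs.enumerated() {
        // {task_0_output} maps to taskOutputs[0], and so on
        result = result.replacingOccurrences(
            of: "{task_\(index)_output}",
            with: output
        )
    }
    return result
}

let resolved = interpolate(
    "Write article based on: {task_0_output}",
    taskOutputs: ["Research findings on AI trends"]
)
// resolved == "Write article based on: Research findings on AI trends"
```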
Conditional Interpolation
Use conditional values:

let inputs = OrbitInput(Metadata([
    "mode": .string("detailed"),
    "include_charts": .bool(true)
]))

let task = ORTask(
    description: """
    Generate report in {mode} mode.
    {if include_charts}Include visualizations and charts.{endif}
    {if mode=detailed}Provide comprehensive analysis with examples.{endif}
    """,
    expectedOutput: "Report"
)
Conditional interpolation is processed before task execution.
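A minimal sketch of how an `{if key}...{endif}` block could be resolved (the `applyConditional` helper is hypothetical, handles only a single non-nested block, and the framework's real parser is likely richer):

```swift
import Foundation

// Sketch: keep or drop the text between {if key} and {endif}.
// `applyConditional` is illustrative only, not an OrbitAI API.
func applyConditional(_ template: String, key: String, enabled: Bool) -> String {
    let open = "{if \(key)}"
    guard let start = template.range(of: open),
          let end = template.range(of: "{endif}",
                                    range: start.upperBound..<template.endIndex)
    else { return template }  // no matching block: leave text unchanged
    let inner = String(template[start.upperBound..<end.lowerBound])
    return template.replacingCharacters(
        in: start.lowerBound..<end.upperBound,
        with: enabled ? inner : ""
    )
}

let text = "Generate report. {if include_charts}Include charts.{endif}"
let withCharts = applyConditional(text, key: "include_charts", enabled: true)
// withCharts == "Generate report. Include charts."
let withoutCharts = applyConditional(text, key: "include_charts", enabled: false)
// withoutCharts == "Generate report. "
```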
Outputs
OrbitOutput Structure
public struct OrbitOutput: Codable, Sendable {
    // Task execution results
    public let taskOutputs: [TaskOutput]

    // Aggregated usage metrics
    public let usageMetrics: UsageMetrics

    // Total execution time
    public let executionTime: TimeInterval

    // Orbit metadata
    public let orbitId: OrbitAIID
    public let orbitName: String

    // Completion timestamp
    public let completedAt: Date
}
Accessing Results
Basic Access
Structured Outputs
Error Results
Exporting Results
let result = try await orbit.start()

// Access task outputs
print("Total tasks: \(result.taskOutputs.count)")
for (index, output) in result.taskOutputs.enumerated() {
    print("\n=== Task \(index) ===")
    print("Output: \(output.rawOutput)")
    print("Agent: \(output.agentId)")
    print("Tokens: \(output.usageMetrics.totalTokens)")
}

// Access metrics
print("\n=== Metrics ===")
print("Total tokens: \(result.usageMetrics.totalTokens)")
print("Execution time: \(result.executionTime)s")
Monitoring and Telemetry
Real-Time Monitoring
Execution Status
Agent Metrics
Task Progress
Custom Telemetry
// Monitor orbit execution
let orbit = try await Orbit.create(
    name: "Long-Running Workflow",
    agents: agents,
    tasks: tasks,
    verbose: true
)

// Start execution in background
Task {
    do {
        let result = try await orbit.start()
        print("Orbit completed!")
    } catch {
        print("Orbit failed: \(error)")
    }
}

// Monitor progress
while await orbit.isRunning() {
    let status = await orbit.getExecutionStatus()
    print("Status Update:")
    print("  Queued: \(status.queuedTasks)")
    print("  Active: \(status.activeTasks)")
    print("  Completed: \(status.completedTasks)")
    print("  Failed: \(status.failedTasks)")
    print("  Progress: \(status.completionPercentage)%")
    try await Task.sleep(for: .seconds(5))
}
Metrics Collection
// Enable metrics collection
let orbit = try await Orbit.create(
    name: "Tracked Workflow",
    agents: agents,
    tasks: tasks,
    usageMetrics: true  // Default: true
)

let result = try await orbit.start()

// Access aggregated metrics
let metrics = result.usageMetrics
print("Token Usage:")
print("  Prompt: \(metrics.promptTokens)")
print("  Completion: \(metrics.completionTokens)")
print("  Total: \(metrics.totalTokens)")

print("\nAPI Calls:")
print("  Successful: \(metrics.successfulRequests)")
print("  Total: \(metrics.totalRequests)")
if metrics.totalRequests > 0 {  // Avoid division by zero
    let successRate = Double(metrics.successfulRequests) / Double(metrics.totalRequests) * 100
    print("  Success rate: \(successRate)%")
}

// Calculate costs (example rates for OpenAI; check current pricing)
let inputCost = Double(metrics.promptTokens) * 0.00001      // $0.01 per 1K
let outputCost = Double(metrics.completionTokens) * 0.00003 // $0.03 per 1K
let totalCost = inputCost + outputCost
print("\nEstimated Cost: $\(String(format: "%.4f", totalCost))")
Error Handling
Error Types
Configuration Errors
When: During orbit creation

do {
    let orbit = try await Orbit.create(
        name: "Test",
        agents: [],  // Empty!
        tasks: []    // Empty!
    )
} catch OrbitAIError.configuration(let msg) {
    print("Config error: \(msg)")
}

Common Causes:
Missing required parameters
Invalid agent configurations
Empty agents or tasks arrays
Execution Errors
When: During orbit.start()

do {
    let result = try await orbit.start()
} catch OrbitAIError.taskExecutionFailed(let msg) {
    print("Execution failed: \(msg)")
}

Common Causes:
Task execution failures
Agent errors
Tool failures
Timeout exceeded
LLM Errors
When: LLM provider issues

do {
    let result = try await orbit.start()
} catch OrbitAIError.llmRateLimitExceeded(let msg) {
    print("Rate limited: \(msg)")
    // Implement backoff
} catch OrbitAIError.llmRequestFailed(let msg) {
    print("LLM error: \(msg)")
}

Common Causes:
Rate limits
API key issues
Provider unavailability
Token limits exceeded
Resource Errors
When: Resource constraints

do {
    let result = try await orbit.start()
} catch OrbitAIError.memoryExhausted {
    print("Out of memory")
} catch OrbitAIError.timeout {
    print("Execution timeout")
}

Common Causes:
Memory constraints
Timeout limits
Disk space issues
Error Recovery Strategies
Retry Logic
Fallback Providers
Partial Results
Graceful Degradation
func executeOrbitWithRetry(
    orbit: Orbit,
    maxRetries: Int = 3
) async throws -> OrbitOutput {
    var lastError: Error?
    for attempt in 1...maxRetries {
        do {
            return try await orbit.start()
        } catch let error as OrbitAIError {
            lastError = error
            switch error {
            case .llmRateLimitExceeded:
                // Exponential backoff
                let delay = pow(2.0, Double(attempt))
                print("Rate limited, retrying in \(delay)s...")
                try await Task.sleep(for: .seconds(delay))
            case .taskExecutionFailed(let msg) where msg.contains("timeout"):
                // Increase timeout for retry
                print("Timeout, increasing limits for retry...")
                // Would need to recreate the orbit with higher limits
            default:
                // Other errors - don't retry
                throw error
            }
        }
    }
    throw lastError ?? OrbitAIError.taskExecutionFailed("Max retries exceeded")
}
Best Practices
Orbit Design
Appropriate Scope
Do: Create focused orbits for specific workflows

// Good: Focused workflow
let contentOrbit = try await Orbit.create(
    name: "Content Creation",
    agents: [researcher, writer, editor],
    tasks: [research, write, edit]
)

Don’t: Create monolithic orbits

// Bad: Too many responsibilities
let everythingOrbit = try await Orbit.create(
    name: "Everything",
    agents: [/* 50 different agents */],
    tasks: [/* 100 different tasks */]
)
Agent Specialization
Do: Assign specialized agents to relevant tasks

let codeAgent = Agent(
    role: "Senior Developer",
    purpose: "Write production code",
    context: "Expert in Swift",
    tools: ["code_generator", "linter"]
)

let codeTask = ORTask(
    description: "Implement authentication",
    agent: codeAgent.id
)

Don’t: Use generic agents for everything

let genericAgent = Agent(
    role: "Helper",
    purpose: "Do stuff"
)
Memory Management
Do: Enable memory only when needed

// Conversational: needs memory
let chatOrbit = try await Orbit.create(
    name: "Chat",
    agents: [chatAgent],
    tasks: chatTasks,
    memory: true
)

// One-off: no memory needed
let reportOrbit = try await Orbit.create(
    name: "Report",
    agents: [reportAgent],
    tasks: [reportTask],
    memory: false
)
Error Boundaries
Do: Implement proper error handling

do {
    let result = try await orbit.start()
    await processSuccess(result)
} catch {
    await handleFailure(error)
    await notifyStakeholders(error)
    await cleanupResources()
}

Don’t: Ignore errors

// Bad
try? await orbit.start()
Choose Appropriate Process
// Sequential: for linear workflows
let simpleWorkflow = Orbit.create(
    process: .sequential
)

// Hierarchical: for complex coordination
let complexWorkflow = Orbit.create(
    process: .hierarchical,
    manager: coordinator
)

// Flow-based: for parallel opportunities
let parallelWorkflow = Orbit.create(
    process: .flowBased  // Not a real enum, just for illustration
)
// Use TaskFlow for flow-based execution
Optimize Task Granularity
// Good: Balanced task size
let tasks = [
    extractDataTask,    // ~30s
    transformDataTask,  // ~45s
    analyzeDataTask,    // ~60s
    reportTask          // ~20s
]

// Bad: Too granular (high overhead)
let tooManyTasks = [
    loadFile1, loadFile2, loadFile3,  // Each 1s
    parseFile1, parseFile2, parseFile3,
    // ... 50 more tiny tasks
]

// Bad: Too coarse (long blocking)
let tooFewTasks = [
    doEverythingTask  // 30 minutes
]
Configure Concurrency
// Set appropriate limits
let orbit = try await Orbit.create(
    name: "Parallel Processing",
    agents: agents,
    tasks: tasks,
    maxConcurrentTasks: 5  // Balance parallelism vs resources
)

// Consider:
// - API rate limits (e.g., 60 req/min)
// - Memory constraints
// - CPU availability
Monitor and Tune
let result = try await orbit.start()

// Analyze performance
print("Total time: \(result.executionTime)s")

// Identify bottlenecks
for (index, output) in result.taskOutputs.enumerated() {
    if let task = orbit.tasks[safe: index] {
        let time = task.executionTime ?? 0
        if time > 60 {  // Tasks over 1 minute
            print("Slow task: \(task.description) (\(time)s)")
        }
    }
}

// Optimize based on findings
Troubleshooting
Orbit Fails to Initialize
Symptoms: Errors during Orbit.create()
Common Causes:
Missing required parameters
Invalid agent/task configurations
LLM provider setup failure
Solutions:

do {
    let orbit = try await Orbit.create(
        name: "Test Orbit",
        agents: agents,
        tasks: tasks
    )
} catch OrbitAIError.configuration(let message) {
    print("Configuration error: \(message)")

    // Check specific issues
    if agents.isEmpty {
        print("Error: No agents provided")
    }
    if tasks.isEmpty {
        print("Error: No tasks provided")
    }

    // Validate agent configs
    for agent in agents {
        if agent.role.isEmpty {
            print("Error: Agent missing role")
        }
    }
}
Tasks Not Executing
Symptoms: Orbit starts but tasks don’t run
Common Causes:
Agent assignment issues
Missing tools
LLM provider not configured
Solutions:

// Verify agents have necessary tools
for agent in agents {
    let tools = await agent.getToolNames()
    print("\(agent.role) tools: \(tools)")
}

// Verify LLM provider
let llmManager = await orbit.getLLMManager()
let providers = await llmManager.getAvailableProviders()
print("Available providers: \(providers)")

if providers.isEmpty {
    // Configure provider
    let provider = try OpenAIProvider.fromEnvironment(
        model: .gpt4o
    )
    await orbit.configureLLMProvider(provider, asDefault: true)
}

// Enable verbose logging
let orbit = try await Orbit.create(
    name: "Debug Orbit",
    agents: agents,
    tasks: tasks,
    verbose: true  // See what's happening
)
Context Variables Not Resolving
Symptoms: {variable} appears literally in outputs
Common Causes:
Missing context declaration
Wrong variable names
Inputs not provided
Solutions:

// Ensure context is declared
let task = ORTask(
    description: "Process: {task_0_output}",
    expectedOutput: "Result",
    context: [previousTask.id]  // Required!
)

// Provide orbit inputs
let inputs = OrbitInput(Metadata([
    "variable": .string("value")
]))
let result = try await orbit.start(inputs: inputs)

// Verify variable names match
// Description: "Analyze {industry}"
// Input key must be: "industry"
Excessive Memory Usage
Symptoms: Memory consumption growing excessively
Common Causes:
Memory enabled unnecessarily
Large outputs accumulating
Knowledge bases loaded but not needed
Solutions:

// Disable unnecessary memory
let orbit = try await Orbit.create(
    name: "Lightweight Orbit",
    agents: agents,
    tasks: tasks,
    memory: false,  // Disable if not needed
    longTermMemory: false,
    entityMemory: false
)

// Configure memory limits
let memoryConfig = MemoryConfiguration(
    maxMemoryItems: 50,        // Limit storage
    compressionEnabled: true,  // Auto-summarize
    pruneOldItems: true        // Remove old entries
)

// Extract only needed data
let summaryTask = ORTask(
    description: "Summarize key points from research",
    expectedOutput: "5 bullet points"  // Compact output
)
Slow Execution
Symptoms: Orbit takes much longer than expected
Common Causes:
Sequential execution when parallel possible
Large context windows
Inefficient tool usage
No concurrency limits set
Solutions:

// Use flow-based for parallelization
let taskFlow = TaskFlow(
    tasks: independentTasks
)
let engine = TaskExecutionEngine(
    agentExecutor: executor,
    maxConcurrentTasks: 5  // Enable parallelism
)

// Optimize context
let task = ORTask(
    description: "Use summary: {task_1_output}",
    context: [summaryTask.id]  // Reference specific task, not all
)

// Enable context window management
let agent = Agent(
    role: "Agent",
    purpose: "Process efficiently",
    context: "...",
    respectContextWindow: true  // Auto-prune context
)

// Profile execution
let result = try await orbit.start()
for (i, output) in result.taskOutputs.enumerated() {
    if let task = orbit.tasks[safe: i] {
        print("Task \(i): \(task.executionTime ?? 0)s")
    }
}
Inconsistent Results
Symptoms: Different outputs for same inputs
Common Causes:
High temperature settings
Non-deterministic tools
Memory state differences
Random LLM sampling
Solutions:

// Lower temperature for consistency
let agent = Agent(
    role: "Data Processor",
    purpose: "Process data consistently",
    context: "Deterministic processing",
    temperature: 0.0  // Minimize sampling randomness
)

// Use structured outputs
struct ProcessingResult: Codable, Sendable {
    let status: String
    let count: Int
}

let task = ORTask.withStructuredOutput(
    description: "Process data",
    expectedType: ProcessingResult.self,
    agent: agent.id
)

// Disable memory for stateless execution
let orbit = try await Orbit.create(
    name: "Stateless Orbit",
    agents: [agent],
    tasks: [task],
    memory: false  // No state
)
Next Steps