Overview

OrbitAI Tasks are the fundamental execution units in the OrbitAI framework. A task represents a specific piece of work that needs to be completed by an AI agent, with defined requirements, constraints, and expected outputs.

Type Safety

Built with Swift’s strong typing for compile-time safety

Async/Await

Native Swift concurrency for efficient execution

Structured Outputs

Support for text and structured data outputs

Validation

Built-in guardrails and validation mechanisms

Monitoring

Comprehensive telemetry and performance tracking

Flexibility

Various output formats and execution patterns

Architecture

OrbitAI’s task system consists of several key components working together to orchestrate task execution:
ORTask (Task Definition)
        │
        ▼
TaskExecutionEngine (Orchestrator)
    ├── Agent Selection
    ├── ToolsHandler Integration
    ├── MemoryStorage Access
    ├── KnowledgeBase Queries
    └── TelemetryManager Tracking
        │
        ▼
TaskResult → TaskOutput

Key Components

  • ORTask: The main task definition structure containing all task specifications, requirements, and configuration.
  • TaskExecutionEngine: Central orchestrator that manages task execution across agents, handles concurrency, and enforces constraints.
  • TaskResult: Success/failure result wrapper that encapsulates execution outcomes and errors.
  • TaskOutput: Contains execution results, metadata, usage metrics, and tool execution information.
  • ExecutionContext: Provides execution context including metadata and outputs from dependent tasks.

Task Parameters

Core Properties

Core properties fall into four groups: Required, Optional, Execution Control, and Execution State. The required properties are:

  Property        Type       Description
  id              OrbitAIID  Unique identifier (UUID) for the task
  description     String     Detailed description of what to accomplish
  expectedOutput  String     Specification of output format and content

let task = ORTask(
    description: "Analyze Q4 2024 sales data and identify top products",
    expectedOutput: "JSON report with top 10 products by revenue"
)

Output Formats

OrbitAI supports multiple output formats to meet different use cases:

Text

Plain text output for human-readable results
outputFormat: .text

JSON

Generic JSON output for flexible data structures
outputFormat: .json

Markdown

Markdown formatted text for documentation
outputFormat: .markdown

CSV

Comma-separated values for tabular data
outputFormat: .csv

XML

XML structured output for legacy systems
outputFormat: .xml

Structured

Schema-validated structured output
outputFormat: .structured(schema)

Structured Output Example

struct ReportData: Codable, Sendable {
    let title: String
    let summary: String
    let findings: [String]
    let recommendations: [String]
    let confidence: Double
}

let task = ORTask.withStructuredOutput(
    description: "Generate quarterly business report",
    expectedType: ReportData.self,
    agent: analystAgent.id
)
Use structured outputs with Swift’s Codable types for type-safe data handling and compile-time validation.

Task Execution

Execution Patterns

  • Sequential
  • Hierarchical
  • Flow-Based
Tasks execute one after another, with context updated between tasks:
let results = try await taskEngine.executeTasks(
    tasks: [researchTask, analysisTask, reportTask],
    agents: availableAgents,
    process: .sequential,
    context: executionContext
)
Use when:
  • Tasks depend on previous results
  • Order of execution matters
  • Context needs to build progressively
Sequential execution is the default and most common pattern for task workflows.

Agent Selection

The TaskExecutionEngine uses a multi-step approach for optimal agent selection:
1. Explicit Assignment

Respects task-agent assignments when specified:
let task = ORTask(
    description: "Security vulnerability assessment",
    expectedOutput: "Security report",
    agent: securityExpertAgent.id // Explicit assignment
)
2. Compatibility Scoring

Evaluates tool compatibility and role relevance:
// Engine calculates scores based on:
// - Tool availability (5 points per matching tool)
// - Domain expertise matching
// - Task complexity alignment
// - Previous success rates
3. LLM-Based Selection

Uses LLM reasoning for close decisions when scores are similar.
4. Caching

Caches selections for similar tasks to improve performance.
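The compatibility-scoring step above can be sketched in plain Swift. The types and the `compatibilityScore` function below are illustrative stand-ins, not OrbitAI's actual implementation; only the 5-points-per-matching-tool weighting comes from the documentation, and the other scoring signals (role relevance, success rates) are omitted for brevity:

```swift
// Hypothetical sketch of tool-compatibility scoring. CandidateAgent and
// ScoredTask are stand-in types, not part of the OrbitAI API.
struct CandidateAgent {
    let name: String
    let tools: Set<String>
}

struct ScoredTask {
    let requiredTools: Set<String>
}

/// Awards 5 points for each required tool the agent provides,
/// mirroring the weighting described above.
func compatibilityScore(agent: CandidateAgent, task: ScoredTask) -> Int {
    let matchingTools = agent.tools.intersection(task.requiredTools)
    return matchingTools.count * 5
}

let task = ScoredTask(requiredTools: ["csv_analyzer", "chart_generator"])
let analyst = CandidateAgent(
    name: "Analyst",
    tools: ["csv_analyzer", "chart_generator", "statistical_analyzer"]
)
let writer = CandidateAgent(name: "Writer", tools: ["markdown_formatter"])

compatibilityScore(agent: analyst, task: task) // 10
compatibilityScore(agent: writer, task: task)  // 0
```

When two candidates score this close together (10 vs. 0 is clear-cut, but 10 vs. 10 is not), the engine falls back to LLM-based selection as described in step 3.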

Task Dependencies

Context Dependencies

Tasks can depend on outputs from previous tasks:
let researchTask = ORTask(
    description: "Research AI trends in healthcare",
    expectedOutput: "Research findings with key trends"
)

let analysisTask = ORTask(
    description: """
    Analyze research findings for business impact.
    Use the previous research: {task_0_output}
    """,
    expectedOutput: "Business analysis report",
    context: [researchTask.id] // Depends on research output
)
Reference previous task outputs using {task_N_output} syntax in task descriptions, where N is the task index.

Conditional Tasks

Execute tasks based on previous results:
let conditionalTask = ConditionalTask(
    task: followUpTask,
    condition: TaskCondition(
        type: .outputContains,
        taskId: previousTask.id,
        expectedValue: "requires follow-up"
    )
)
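How a condition like `.outputContains` gates execution can be sketched as a small predicate over the previous task's output. The `ConditionType`, `Condition`, and `shouldExecute` names below are illustrative stand-ins shaped after the example above, not OrbitAI's real types:

```swift
// Stand-in condition model; OrbitAI's TaskCondition may support more types.
enum ConditionType {
    case outputContains
    case outputEquals
}

struct Condition {
    let type: ConditionType
    let expectedValue: String
}

/// Returns true when the previous task's raw output satisfies the condition,
/// meaning the dependent task should run.
func shouldExecute(_ condition: Condition, previousOutput: String) -> Bool {
    switch condition.type {
    case .outputContains:
        return previousOutput.contains(condition.expectedValue)
    case .outputEquals:
        return previousOutput == condition.expectedValue
    }
}

let condition = Condition(type: .outputContains, expectedValue: "requires follow-up")
shouldExecute(condition, previousOutput: "Analysis done; requires follow-up in EMEA") // true
shouldExecute(condition, previousOutput: "Analysis done; no issues found")            // false
```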

Dependency Resolution

The engine resolves dependencies automatically, sorting tasks so that each one runs only after everything it depends on has completed:
// Automatic topological sort ensures proper execution order
private func topologicalSort(tasks: [ORTask]) throws -> [ORTask] {
    var sorted: [ORTask] = []
    var visited: Set<OrbitAIID> = []
    var visiting: Set<OrbitAIID> = []

    func visit(_ task: ORTask) throws {
        if visiting.contains(task.id) {
            throw OrbitAIError.taskExecutionFailed("Circular dependency detected")
        }

        if visited.contains(task.id) { return }

        visiting.insert(task.id)

        // Visit dependencies first
        if let dependencies = task.dependencies {
            for depId in dependencies {
                if let depTask = tasks.first(where: { $0.id == depId }) {
                    try visit(depTask)
                }
            }
        }

        visiting.remove(task.id)
        visited.insert(task.id)
        sorted.append(task)
    }

    for task in tasks {
        try visit(task)
    }

    return sorted
}
Circular dependencies are detected automatically and cause task execution to fail with OrbitAIError.taskExecutionFailed.
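The same depth-first approach can be tried out in isolation over plain string IDs. This standalone version follows the engine code shown above but uses stand-in `Node` and `SortError` types so it compiles on its own:

```swift
// Self-contained dependency sort over string IDs, mirroring the
// engine's algorithm. Node and SortError are illustrative stand-ins.
struct Node {
    let id: String
    let dependencies: [String]
}

enum SortError: Error { case circularDependency }

func topologicalSort(_ nodes: [Node]) throws -> [String] {
    let byId = Dictionary(uniqueKeysWithValues: nodes.map { ($0.id, $0) })
    var sorted: [String] = []
    var visited: Set<String> = []
    var visiting: Set<String> = []

    func visit(_ node: Node) throws {
        // A node already on the current path means a cycle.
        if visiting.contains(node.id) { throw SortError.circularDependency }
        if visited.contains(node.id) { return }
        visiting.insert(node.id)
        for depId in node.dependencies {
            if let dep = byId[depId] { try visit(dep) }
        }
        visiting.remove(node.id)
        visited.insert(node.id)
        sorted.append(node.id)
    }

    for node in nodes { try visit(node) }
    return sorted
}

let order = try topologicalSort([
    Node(id: "report", dependencies: ["analysis"]),
    Node(id: "analysis", dependencies: ["research"]),
    Node(id: "research", dependencies: []),
])
// order == ["research", "analysis", "report"]
```

Note that dependencies always land before their dependents regardless of the input order, and a cycle such as a → b → a throws before any task would run.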

Task Results

Result Structure

public enum TaskResult: Codable, Sendable {
    case success(TaskOutput)
    case failure(OrbitAIError)

    public var output: TaskOutput? { /* ... */ }
    public var error: OrbitAIError? { /* ... */ }
    public var isSuccess: Bool { /* ... */ }
}
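The elided accessors can be implemented with pattern matching over the two cases. The sketch below uses stand-in `DemoOutput`/`DemoError` types to stay self-contained; the real OrbitAI implementation may differ:

```swift
// Stand-in result type showing one plausible implementation of the
// elided accessors above. DemoOutput and DemoError are illustrative.
struct DemoOutput { let rawOutput: String }
enum DemoError: Error { case failed(String) }

enum DemoResult {
    case success(DemoOutput)
    case failure(DemoError)

    /// The output on success, nil on failure.
    var output: DemoOutput? {
        if case .success(let output) = self { return output }
        return nil
    }

    /// The error on failure, nil on success.
    var error: DemoError? {
        if case .failure(let error) = self { return error }
        return nil
    }

    var isSuccess: Bool {
        if case .success = self { return true }
        return false
    }
}
```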

Task Output Components

public struct TaskOutput: Codable, Sendable {
    public let rawOutput: String           // Raw output text
    public let structuredOutput: StructuredOutput? // Parsed structured data
    public let usageMetrics: UsageMetrics  // Token/resource usage
    public let toolsUsed: [ToolUsage]     // Tools that were executed
    public let agentId: OrbitAIID         // Executing agent
    public let taskId: OrbitAIID          // Source task
    public let timestamp: Date            // Generation timestamp
}

Type-Safe Decoding

  • Direct Decoding
  • Safe Decoding
  • Fallback Decoding
// Decode output to specific type
let reportData: ReportData = try taskOutput.decode(as: ReportData.self)
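Direct decoding throws when the model wraps its JSON in prose. A fallback strategy can be sketched in plain Swift with `JSONDecoder`; the `decodeLenient` helper and `Finding` type below are illustrative, not OrbitAI's `decode(as:)` or `decodeWithFallback(as:)`:

```swift
import Foundation

// Illustrative lenient decoder: try the raw text first, then fall back
// to the first {...} span found in it. Not part of the OrbitAI API.
struct Finding: Codable, Equatable {
    let title: String
    let confidence: Double
}

struct LenientDecodeError: Error {}

func decodeLenient<T: Decodable>(_ type: T.Type, from rawOutput: String) throws -> T {
    let decoder = JSONDecoder()
    if let direct = try? decoder.decode(T.self, from: Data(rawOutput.utf8)) {
        return direct
    }
    // Fallback: carve out the outermost brace-delimited span and retry.
    guard let start = rawOutput.firstIndex(of: "{"),
          let end = rawOutput.lastIndex(of: "}") else {
        throw LenientDecodeError()
    }
    let slice = String(rawOutput[start...end])
    return try decoder.decode(T.self, from: Data(slice.utf8))
}

let messy = "Here is the report: {\"title\": \"Q4\", \"confidence\": 0.9} Hope this helps!"
let finding = try decodeLenient(Finding.self, from: messy)
// finding.title == "Q4"
```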

Validation & Guardrails

OrbitAI includes a comprehensive type-safe guardrail system for validation and safety:

Content Safety

NoHarmfulContentGuardrail: checks for harmful keywords and patterns in content.
guardrails: [.noHarmfulContent]

Token Limits

TokenLimitGuardrail: enforces a maximum token limit for content.
TokenLimitGuardrail(
    maxTokens: 8000,
    model: "gpt-4o"
)

Rate Limiting

RateLimitGuardrail: prevents excessive requests from agents.
RateLimitGuardrail(
    maxRequestsPerMinute: 60
)

Custom Guardrails

OrbitGuardrail protocol: implement custom validation logic.
protocol OrbitGuardrail {
    func check(context: Context) async throws -> GuardrailResult
}
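A custom guardrail, then, is just a type conforming to the protocol. The sketch below uses stand-in `CheckContext`/`CheckResult` types shaped after the signature above (OrbitAI's actual `Context` and `GuardrailResult` are not shown here), with a hypothetical length limit as the validation logic:

```swift
// Illustrative custom guardrail. CheckContext, CheckResult, and Guardrail
// are stand-ins modeled on the protocol above, not the framework's types.
struct CheckContext {
    let content: String
}

enum CheckResult {
    case passed
    case failed(reason: String)
}

protocol Guardrail {
    func check(context: CheckContext) async throws -> CheckResult
}

/// Rejects outputs longer than a configured character budget.
struct MaxLengthGuardrail: Guardrail {
    let maxCharacters: Int

    func check(context: CheckContext) async throws -> CheckResult {
        if context.content.count > maxCharacters {
            return .failed(reason: "Output exceeds \(maxCharacters) characters")
        }
        return .passed
    }
}
```

The `check` method is async so a real guardrail can call out to a moderation service or the LLM itself before deciding.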

Validation Strictness

Control how strictly task outputs are validated:
public enum ValidationStrictness: String, Codable {
    case lenient   // Partial approvals accepted
    case standard  // Balanced validation
    case strict    // Even partial approvals require revision
}

Manager Validation

Tasks can be validated by manager agents with feedback:
public enum TaskValidationResult {
    case approved
    case partiallyApproved(feedback: String)
    case needsRevision(feedback: String)
    case validationFailed
}
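One way to read the interaction between strictness and validation results: strictness decides whether a partial approval is good enough to proceed. The `acceptResult` helper below is an illustrative sketch of that decision, not a documented OrbitAI API:

```swift
// Stand-in types mirroring ValidationStrictness and TaskValidationResult
// above; acceptResult is an illustrative decision function.
enum Strictness { case lenient, standard, strict }

enum ValidationOutcome {
    case approved
    case partiallyApproved(feedback: String)
    case needsRevision(feedback: String)
    case validationFailed
}

/// Returns true when the task can proceed without another revision pass.
func acceptResult(_ outcome: ValidationOutcome, strictness: Strictness) -> Bool {
    switch outcome {
    case .approved:
        return true
    case .partiallyApproved:
        // Strict mode sends even partial approvals back for revision.
        return strictness != .strict
    case .needsRevision, .validationFailed:
        return false
    }
}

acceptResult(.partiallyApproved(feedback: "minor gaps"), strictness: .lenient) // true
acceptResult(.partiallyApproved(feedback: "minor gaps"), strictness: .strict)  // false
```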

Metrics & Monitoring

Usage Metrics

Comprehensive metrics are collected for each task execution:
public struct UsageMetrics: Codable, Sendable {
    public let promptTokens: Int      // Tokens in prompts
    public let completionTokens: Int  // Tokens in responses
    public let totalTokens: Int       // Total token usage
    public let successfulRequests: Int // Successful API calls
    public let totalRequests: Int     // Total API calls made
}
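Because every field is an additive counter, per-task metrics can be rolled up into run-level totals with a simple reduce. The `Metrics` type and `combined` helper below are a self-contained sketch shaped after the struct above, not OrbitAI's own aggregation API:

```swift
// Stand-in metrics type mirroring the fields of UsageMetrics above;
// combined(with:) is an illustrative aggregation helper.
struct Metrics {
    var promptTokens = 0
    var completionTokens = 0
    var totalTokens = 0
    var successfulRequests = 0
    var totalRequests = 0

    func combined(with other: Metrics) -> Metrics {
        Metrics(
            promptTokens: promptTokens + other.promptTokens,
            completionTokens: completionTokens + other.completionTokens,
            totalTokens: totalTokens + other.totalTokens,
            successfulRequests: successfulRequests + other.successfulRequests,
            totalRequests: totalRequests + other.totalRequests
        )
    }
}

let perTask = [
    Metrics(promptTokens: 120, completionTokens: 80, totalTokens: 200,
            successfulRequests: 1, totalRequests: 1),
    Metrics(promptTokens: 300, completionTokens: 150, totalTokens: 450,
            successfulRequests: 1, totalRequests: 2),
]
let runTotal = perTask.reduce(Metrics()) { $0.combined(with: $1) }
// runTotal.totalTokens == 650, runTotal.totalRequests == 3
```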

Tool Usage Tracking

public struct ToolUsage: Codable, Sendable {
    public let toolName: String       // Name of the tool used
    public let executionTime: TimeInterval // Tool execution duration
    public let success: Bool         // Execution success status
    public let inputSize: Int        // Size of tool input
    public let outputSize: Int       // Size of tool output
}

Performance Monitoring

Key metrics tracked include:
  • Task completion duration
  • LLM token consumption
  • Tool execution statistics
  • Success/failure ratios
Per-agent execution metrics:
  • Average task completion time
  • Success rate
  • Token efficiency
  • Tool utilization
Resource consumption tracking:
  • Memory usage
  • API call counts
  • Cache hit rates
  • Concurrent task counts

Best Practices

Task Design

Clear Descriptions

Write specific, actionable task descriptions.
Good:
description: """
Analyze Q4 2024 sales data and provide:
1. Top 10 products by revenue
2. Regional performance trends
3. Customer segment analysis
"""
Bad:
description: "Look at sales stuff"

Defined Outputs

Specify the expected output format clearly.
Good:
expectedOutput: """
JSON report with:
- products: Array of {name, revenue}
- trends: Object with regional data
- analysis: String summary
"""
Bad:
expectedOutput: "Some insights"

Appropriate Scope

Keep tasks focused and reasonably sized. Break large tasks into smaller, manageable subtasks for better control and debugging.

Tool Selection

Choose tools relevant to the task's requirements:
tools: [
    "csv_analyzer",
    "chart_generator",
    "statistical_analyzer"
]

Error Handling

Configure appropriate error handling and retry mechanisms:
let task = ORTask(
    description: "Generate comprehensive market analysis",
    expectedOutput: "Market analysis report with trends",
    maxExecutionTime: 600, // 10 minutes
    retryLimit: 3,
    guardrails: [.noHarmfulContent, .tokenLimit],
    validationStrictness: .standard
)
Always set reasonable execution timeouts to prevent tasks from running indefinitely.
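The kind of guarantee `maxExecutionTime` provides can be sketched with structured concurrency: race the work against a sleeping task that throws. This is a generic Swift pattern, not OrbitAI's internal timeout mechanism, and `withTimeout`/`TimeoutError` are illustrative names:

```swift
// Generic timeout wrapper: whichever child finishes first wins the race.
// Illustrative sketch only; the framework's mechanism may differ.
struct TimeoutError: Error {}

func withTimeout<T: Sendable>(
    seconds: Double,
    operation: @escaping @Sendable () async throws -> T
) async throws -> T {
    try await withThrowingTaskGroup(of: T.self) { group in
        group.addTask { try await operation() }
        group.addTask {
            try await Task.sleep(nanoseconds: UInt64(seconds * 1_000_000_000))
            throw TimeoutError()
        }
        // First child to finish decides the outcome; cancel the loser.
        guard let result = try await group.next() else { throw TimeoutError() }
        group.cancelAll()
        return result
    }
}
```

Under these assumptions, wrapping a long-running call as `try await withTimeout(seconds: 600) { ... }` fails with `TimeoutError` once the budget elapses instead of running indefinitely.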

Structured Outputs

Use strongly-typed output structures when possible:
struct MarketAnalysis: Codable, Sendable {
    let executiveSummary: String
    let marketTrends: [TrendData]
    let competitorAnalysis: [CompetitorInfo]
    let recommendations: [Recommendation]
    let confidence: Double
}

let task = ORTask.withStructuredOutput(
    description: "Generate Q4 2024 market analysis for tech sector",
    expectedType: MarketAnalysis.self,
    agent: analystAgent.id
)

Troubleshooting

Task Timeouts

Symptoms: Tasks fail with timeout errors.
Causes:
  • Insufficient execution time limits
  • Complex tasks requiring more processing
  • Agent inefficiency or loops
Solutions:
// Increase timeout for complex tasks
let task = ORTask(
    description: "Complex data analysis task",
    expectedOutput: "Detailed analysis report",
    maxExecutionTime: 1800 // 30 minutes
)

// Use async execution for long-running tasks
let asyncTask = ORTask(
    description: "Long-running background analysis",
    expectedOutput: "Analysis results",
    async: true
)
Output Parsing Failures

Symptoms: JSON parsing errors, type mismatches.
Causes:
  • LLM returning malformed JSON
  • Unexpected output format
  • Missing required fields
Solutions:
// Use fallback decoding with error recovery
do {
    let data = try taskOutput.decodeWithFallback(as: ExpectedType.self)
} catch {
    // Handle parsing failure gracefully
    print("Failed to parse: \(error)")
    let rawText = taskOutput.rawOutput
    // Implement custom parsing
}

// Allow retries for malformed output
let task = ORTask.withStructuredOutput(
    description: "Generate structured report",
    expectedType: ReportData.self,
    retryLimit: 2
)
Agent Selection Issues

Symptoms: Tasks assigned to inappropriate agents.
Causes:
  • Insufficient capability matching
  • Missing tool assignments
  • Poor role alignment
Solutions:
// Explicit agent assignment for critical tasks
let criticalTask = ORTask(
    description: "High-priority security analysis",
    expectedOutput: "Security assessment report",
    agent: securityExpertAgent.id // Explicit
)

// Better agent configuration
let specializedAgent = Agent(
    role: "Security Analyst",
    purpose: "Perform security assessments",
    context: "Expert in cybersecurity with 10+ years experience",
    tools: [
        "vulnerability_scanner",
        "threat_analyzer",
        "compliance_checker"
    ]
)
Resource Problems

Symptoms: High memory usage, slow execution.
Causes:
  • Too many concurrent tasks
  • Insufficient resource limits
  • Memory leaks
Solutions:
// Configure resource limits
let taskEngine = TaskExecutionEngine(
    agentExecutor: executor,
    maxConcurrentTasks: 3, // Limit concurrency
    enableVerboseLogging: false
)

// Use resource guardrails
let task = ORTask(
    description: "Memory-intensive analysis",
    expectedOutput: "Analysis results",
    guardrails: [.resourceLimit]
)
Guardrail Violations

Symptoms: Tasks fail due to guardrail violations.
Causes:
  • Content safety issues
  • Token limits exceeded
  • Rate limits hit
Solutions:
// Adjust guardrail thresholds
let tokenGuardrail = TokenLimitGuardrail(
    maxTokens: 8000, // Increase limit
    model: "gpt-4o"
)

// Use lenient validation for development
let task = ORTask(
    description: "Development task",
    expectedOutput: "Development output",
    validationStrictness: .lenient
)

Debugging Tips

1. Enable Verbose Logging

let taskEngine = TaskExecutionEngine(
    agentExecutor: executor,
    enableVerboseLogging: true
)

let verboseAgent = Agent(
    role: "Debug Agent",
    purpose: "Debugging task execution",
    context: "Debug context",
    verbose: true
)
2. Monitor Execution Status

// Check execution status periodically
let status = await taskEngine.getExecutionStatus()
print("Queued: \(status.queuedTasks)")
print("Active: \(status.activeTasks)")
print("Completed: \(status.completedTasks)")

// Monitor task progress
if let result = task.result {
    switch result {
    case .success(let output):
        print("Execution time: \(task.executionTime ?? 0)s")
        print("Tokens used: \(output.usageMetrics.totalTokens)")
    case .failure(let error):
        print("Task failed: \(error.localizedDescription)")
    }
}
3. Analyze Task Metrics

// Review detailed metrics
if let metrics = task.usageMetrics {
    print("Prompt tokens: \(metrics.promptTokens)")
    print("Completion tokens: \(metrics.completionTokens)")
    print("Success rate: \(metrics.successfulRequests)/\(metrics.totalRequests)")
}

// Analyze tool usage
if let output = task.result?.output {
    for toolUsage in output.toolsUsed {
        print("Tool: \(toolUsage.toolName)")
        print("Time: \(toolUsage.executionTime)s")
        print("Success: \(toolUsage.success)")
    }
}
4. Test with Simple Tasks

// Start with simple test tasks
let testTask = ORTask(
    description: "Simple test: add 2 + 2",
    expectedOutput: "The number 4",
    maxExecutionTime: 30
)

// Gradually increase complexity
let complexTask = ORTask(
    description: "Complex analysis task...",
    expectedOutput: "Detailed analysis...",
    tools: ["analyzer_tool"],
    maxExecutionTime: 300
)

Next Steps

For additional support, consult the GitHub Discussions or check the Issue Tracker.