# Configuration Guide

AgenticFleet provides flexible configuration options to customize its behavior for your specific needs.

## Environment Variables

Set up your environment variables in a `.env` file:

```bash
# API Keys
OPENAI_API_KEY=your_openai_key
AZURE_OPENAI_KEY=your_azure_key
AZURE_OPENAI_ENDPOINT=your_azure_endpoint
AZURE_OPENAI_VERSION=your_azure_version

# Application Settings
DEBUG=false
LOG_LEVEL=INFO
HOST=localhost
PORT=8000

# OAuth Settings (Optional)
OAUTH_CLIENT_ID=your_client_id
OAUTH_CLIENT_SECRET=your_client_secret
```
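
If you want to load the `.env` file yourself (AgenticFleet may also read it automatically), a minimal sketch using the python-dotenv package looks like this:

```python
# Minimal sketch: load .env into the environment, then read individual values.
# Assumes the python-dotenv package is installed; AgenticFleet may also pick
# up these variables on its own.
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into os.environ

openai_key = os.getenv("OPENAI_API_KEY")
debug = os.getenv("DEBUG", "false").lower() == "true"
port = int(os.getenv("PORT", "8000"))
```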

## Configuration File

Create a `config.yaml` file in your project root:

```yaml
application:
  debug: false
  log_level: INFO
  host: localhost
  port: 8000
  max_retries: 3
  timeout: 30

agents:
  default_model: gpt-4
  temperature: 0.7
  max_tokens: 2000
  tools_enabled: true

fleets:
  max_agents: 10
  coordination_pattern: sequential
  timeout: 300
  auto_scaling: true

memory:
  storage_type: redis
  ttl: 86400
  max_size: 1000000

oauth:
  enabled: true
  provider: github
  scopes:
    - read:user
    - repo
```
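
AgenticFleet may pick this file up automatically; if you prefer to load it by hand, the sketch below uses PyYAML (an assumption) and maps the sections onto the config classes documented in the rest of this guide:

```python
# Sketch: load config.yaml manually and build config objects from it.
# Assumes PyYAML is installed and that the config classes accept the
# keyword arguments shown in the examples below.
import yaml

from agentic_fleet import AgentConfig, ApplicationConfig

with open("config.yaml") as f:
    raw = yaml.safe_load(f)

app_config = ApplicationConfig(**raw["application"])
agent_config = AgentConfig(
    model=raw["agents"]["default_model"],
    temperature=raw["agents"]["temperature"],
    max_tokens=raw["agents"]["max_tokens"],
    tools_enabled=raw["agents"]["tools_enabled"],
)
```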

## Application Configuration

The `ApplicationConfig` class manages core settings:

```python
from agentic_fleet import ApplicationConfig

config = ApplicationConfig(
    debug=False,
    log_level="INFO",
    host="localhost",
    port=8000
)
```

### Available Settings

| Setting       | Type | Default       | Description        |
|---------------|------|---------------|--------------------|
| `debug`       | bool | `False`       | Enable debug mode  |
| `log_level`   | str  | `"INFO"`      | Logging level      |
| `host`        | str  | `"localhost"` | Server host        |
| `port`        | int  | `8000`        | Server port        |
| `max_retries` | int  | `3`           | Max retry attempts |
| `timeout`     | int  | `30`          | Request timeout    |
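
The example above omits the retry and timeout settings from this table; a construction that sets everything explicitly would look like this (values shown are the documented defaults):

```python
from agentic_fleet import ApplicationConfig

config = ApplicationConfig(
    debug=False,
    log_level="INFO",
    host="localhost",
    port=8000,
    max_retries=3,  # give up after three failed attempts
    timeout=30,     # per-request timeout
)
```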

## Agent Configuration

Configure individual agents:

```python
from agentic_fleet import AgentConfig

agent_config = AgentConfig(
    model="gpt-4",
    temperature=0.7,
    max_tokens=2000,
    tools_enabled=True
)
```

### Model Settings

| Setting         | Type  | Default   | Description         |
|-----------------|-------|-----------|---------------------|
| `model`         | str   | `"gpt-4"` | Model to use        |
| `temperature`   | float | `0.7`     | Response randomness |
| `max_tokens`    | int   | `2000`    | Max response length |
| `tools_enabled` | bool  | `True`    | Enable tool usage   |
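
`temperature` is the main knob for how deterministic an agent's responses are. A rough illustration (the two configurations below are examples, not recommended values):

```python
from agentic_fleet import AgentConfig

# Low temperature: more deterministic, repeatable output (e.g. data extraction)
extraction_config = AgentConfig(model="gpt-4", temperature=0.0, max_tokens=2000)

# Higher temperature: more varied, exploratory output (e.g. brainstorming)
brainstorm_config = AgentConfig(model="gpt-4", temperature=0.9, max_tokens=2000)
```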

## Fleet Configuration

Configure fleet behavior:

```python
from agentic_fleet import FleetConfig

fleet_config = FleetConfig(
    max_agents=10,
    coordination_pattern="sequential",
    timeout=300,
    auto_scaling=True
)
```

### Fleet Settings

| Setting                | Type | Default        | Description             |
|------------------------|------|----------------|-------------------------|
| `max_agents`           | int  | `10`           | Max agents per fleet    |
| `coordination_pattern` | str  | `"sequential"` | Fleet pattern           |
| `timeout`              | int  | `300`          | Fleet operation timeout |
| `auto_scaling`         | bool | `True`         | Enable auto-scaling     |

## Memory Configuration

Configure memory storage:

```python
from agentic_fleet import MemoryConfig

memory_config = MemoryConfig(
    storage_type="redis",
    ttl=86400,
    max_size=1000000
)
```

### Memory Settings

| Setting        | Type | Default   | Description            |
|----------------|------|-----------|------------------------|
| `storage_type` | str  | `"redis"` | Storage backend        |
| `ttl`          | int  | `86400`   | Time-to-live (seconds) |
| `max_size`     | int  | `1000000` | Max storage size       |
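
The default `ttl` of 86400 seconds corresponds to 24 hours; writing it as an expression makes that intent explicit:

```python
from agentic_fleet import MemoryConfig

memory_config = MemoryConfig(
    storage_type="redis",
    ttl=24 * 60 * 60,   # 86400 seconds: entries expire after one day
    max_size=1_000_000,
)
```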

## OAuth Configuration

Configure OAuth settings:

```python
from agentic_fleet import OAuthConfig

oauth_config = OAuthConfig(
    enabled=True,
    provider="github",
    scopes=["read:user", "repo"]
)
```

### OAuth Settings

| Setting    | Type | Default    | Description    |
|------------|------|------------|----------------|
| `enabled`  | bool | `True`     | Enable OAuth   |
| `provider` | str  | `"github"` | OAuth provider |
| `scopes`   | list | `[]`       | OAuth scopes   |

## Best Practices

1. **Environment Management**
   - Use `.env` for sensitive data such as API keys and secrets
   - Use `config.yaml` for application settings
   - Override file settings with environment variables (see the sketch after this list)
2. **Security**
   - Never commit API keys
   - Rotate secrets regularly
   - Use the minimum required OAuth scopes
3. **Performance**
   - Adjust timeouts for your use case
   - Configure memory TTL appropriately
   - Enable auto-scaling as needed
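
A sketch of the override pattern from point 1, starting from `config.yaml` values and letting environment variables take precedence. The precedence rule shown here is an illustrative convention, not necessarily AgenticFleet's built-in behavior, and it assumes PyYAML and python-dotenv are installed:

```python
# Sketch: file values first, environment variables win when present.
import os

import yaml
from dotenv import load_dotenv

from agentic_fleet import ApplicationConfig

load_dotenv()  # pull .env values into os.environ

with open("config.yaml") as f:
    app_section = yaml.safe_load(f)["application"]

config = ApplicationConfig(
    debug=os.getenv("DEBUG", str(app_section["debug"])).lower() == "true",
    log_level=os.getenv("LOG_LEVEL", app_section["log_level"]),
    host=os.getenv("HOST", app_section["host"]),
    port=int(os.getenv("PORT", app_section["port"])),
)
```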