
Installation

No module named 'fleet_rlm'

Ensure fleet-rlm is added to the active uv project:
uv tree | rg fleet-rlm
If missing, add it:
uv add fleet-rlm
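To confirm the package actually resolves in the project environment, try importing it (a quick sanity check, not a documented fleet-rlm command):
uv run python -c "import fleet_rlm"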

Python version mismatch

fleet-rlm requires Python 3.10 or later. Check your version:
python3 --version
Install a newer Python via uv if needed:
uv python install 3.12
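If the project should use the newer interpreter by default, pin it so uv records the version in .python-version, then confirm what the project environment runs:
uv python pin 3.12
uv run python --version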

Permission denied on CLI commands

Always invoke fleet-rlm through uv run inside the project rather than relying on a global tool install:
uv run fleet --help
uv run fleet-rlm --help

Configuration

Daytona configuration missing

Add Daytona credentials to .env or export them in the shell:
export DAYTONA_API_KEY=your-daytona-api-key
# Optional override:
export DAYTONA_API_URL=https://app.daytona.io/api
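To verify the key is visible to processes launched from this shell (note: values set only in .env are read by the application at startup, not by the shell):
uv run python -c "import os; print(bool(os.environ.get('DAYTONA_API_KEY')))"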

DSPY_LM_MODEL not set

Create a .env file with at minimum the LLM model and key:
DSPY_LM_MODEL=openai/gpt-4o-mini
DSPY_LLM_API_KEY=sk-...
If you have a .env.example in your project, copy it:
cp .env.example .env
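Then confirm both variables are present in the file:
rg "DSPY_" .env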

Web UI / API server

Connection refused at localhost:8000

The default port may already be in use. Check what is bound to it:
lsof -i :8000
Run on a different port:
uv run fleet-rlm serve-api --port 8001
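Alternatively, if the process holding the port is one of your own and is safe to stop, free the port (lsof -t prints just the PID):
kill $(lsof -t -i :8000)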

Web UI loads but chat hangs

Most often this is a missing or invalid LLM API key. Inspect the readiness endpoint:
curl http://127.0.0.1:8000/ready
If the response reports planner: missing, your DSPY_LM_MODEL / DSPY_LLM_API_KEY are not configured correctly.
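To inspect the full readiness payload, pretty-print it (assumes jq is installed; plain curl output works too):
curl -s http://127.0.0.1:8000/ready | jq .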

WebSocket disconnects behind a reverse proxy

Ensure your proxy upgrades HTTP/1.1 connections to WebSockets and does not buffer them. fleet-rlm exposes:
  • /api/v1/ws/execution
  • /api/v1/ws/execution/events
Both require the same auth as HTTP endpoints when AUTH_REQUIRED=true.
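For nginx specifically, a minimal location block along these lines handles the upgrade and disables buffering (a sketch assuming the API listens on 127.0.0.1:8000; adapt paths and timeouts to your deployment):
location /api/v1/ws/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_buffering off;
    proxy_read_timeout 3600s;  # keep idle WebSocket sessions alive
}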

Daytona / sandbox

Smoke test before debugging

Validate Daytona connectivity in isolation, without invoking an LM:
uv run fleet-rlm daytona-smoke \
  --repo https://github.com/Qredence/fleet-rlm.git \
  --ref main
A successful run confirms credentials, network connectivity, and the sandbox lifecycle.

Child sandbox creation fails

If recursive delegation fails to spawn child sandboxes, set the fork fallback:
RLM_CHILD_FORK_FALLBACK=clean
This retries with a clean child sandbox when forking the parent fails. See Recursive RLM for the full isolation policy.
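Set it like any other environment variable, either in .env or exported for the session:
export RLM_CHILD_FORK_FALLBACK=clean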

Recursive runs hit the call budget

The shared rlm_max_llm_calls budget covers the entire recursive tree. If you see budget-exhaustion errors, either:
  • Raise the budget for that runtime, or
  • Reduce max_iterations or sibling fan-out (sub_rlm_batched caps at 4).

Database / persistence

database: missing in /ready

Set DATABASE_URL to a valid Postgres/Neon connection string and confirm DATABASE_REQUIRED=true is intentional. For local development:
DATABASE_REQUIRED=false
This disables the Neon-backed multi-tenant store and falls back to local storage.
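When the store is required, DATABASE_URL follows the standard Postgres URL shape (placeholder values shown; a Neon connection string has the same form):
DATABASE_URL=postgresql://user:password@host:5432/dbname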

MLflow

Tracing not appearing

Confirm:
  • MLFLOW_ENABLED=true (default).
  • MLFLOW_TRACKING_URI points at a reachable MLflow server.
  • In APP_ENV=local, MLFLOW_AUTO_START is not set to false.
Start the local MLflow server explicitly:
make mlflow-server
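If you point at an already-running server instead, export its address (MLflow's local default port is 5000; adjust if yours differs):
export MLFLOW_TRACKING_URI=http://127.0.0.1:5000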

Still stuck?