## Documentation Index
Fetch the complete documentation index at: https://docs.qredence.ai/llms.txt
Use this file to discover all available pages before exploring further.
## Installation

### No module named 'fleet_rlm'
Ensure fleet-rlm is added to the active uv project:
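With uv, the package is added to the active project like so:

```shell
uv add fleet-rlm
```

This records the dependency in `pyproject.toml` and installs it into the project's virtual environment, so `uv run` can resolve the module.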
### Python version mismatch

fleet-rlm requires Python 3.10 or later. Check your version with `uv run python --version`.

### Permission denied on CLI commands
Always invoke fleet-rlm through `uv run` inside the project instead of relying on a global tool install:
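For example (the `--help` flag here is only a smoke test of the entry point; substitute your actual subcommand):

```shell
uv run fleet-rlm --help
```

Running through `uv run` guarantees the command executes inside the project's environment rather than a globally installed, possibly stale, copy.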
## Configuration

### Daytona configuration missing
Add Daytona credentials to .env or export them in the shell:
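A shell-export sketch is shown below. The variable names follow Daytona's common conventions but are assumptions here; confirm the exact keys against your Daytona dashboard and the project's `.env.example`:

```shell
# Variable names assumed from Daytona conventions — verify against your setup.
export DAYTONA_API_KEY=your-daytona-api-key
export DAYTONA_API_URL=https://app.daytona.io/api
```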
### DSPY_LM_MODEL not set
Create a .env file with at minimum the LLM model and key:
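A minimal `.env` using the variable names referenced elsewhere on this page; the model string below is illustrative, not a requirement:

```shell
# .env — model string is an example; use the provider/model you actually have a key for.
DSPY_LM_MODEL=openai/gpt-4o-mini
DSPY_LLM_API_KEY=sk-...
```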
If there is a `.env.example` in your project, copy it:
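Copying the template keeps all expected variable names in place; you then only fill in values:

```shell
cp .env.example .env
```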
## Web UI / API server

### Connection refused at localhost:8000
The default port may be in use. Check:
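Either of these shows what is bound to port 8000 (`lsof` works on macOS and Linux, `ss` on Linux):

```shell
lsof -i :8000
# or, on Linux:
ss -ltnp | grep 8000
```

If another process holds the port, stop it or start the server on a different port.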
### Web UI loads but chat hangs
Most often this is a missing or invalid LLM API key. Inspect the readiness endpoint; if it reports `planner: missing`, your DSPY_LM_MODEL / DSPY_LLM_API_KEY are not configured correctly.
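The readiness endpoint can be queried directly, assuming the default port:

```shell
curl -s localhost:8000/ready
```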
### WebSocket disconnects behind a reverse proxy
Ensure your proxy upgrades HTTP/1.1 connections and does not buffer. fleet-rlm exposes two WebSocket endpoints:

- `/api/v1/ws/execution`
- `/api/v1/ws/execution/events`
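For nginx, a sketch of the required upgrade and no-buffering directives (the upstream address is an assumption; adjust to your deployment):

```nginx
# Assumes the API server listens on 127.0.0.1:8000.
location /api/v1/ws/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;
    proxy_read_timeout 3600s;
}
```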
Also check authentication if the server runs with AUTH_REQUIRED=true.
## Daytona / sandbox

### Smoke test before debugging
Validate Daytona connectivity in isolation, without invoking an LM.

### Child sandbox creation fails
If recursive delegation fails to spawn child sandboxes, set the fork fallback.

### Recursive runs hit the call budget
The shared `rlm_max_llm_calls` budget covers the entire recursive tree. If you see budget-exhaustion errors, either:
- Raise the budget for that runtime, or
- Reduce `max_iterations` or sibling fan-out (`sub_rlm_batched` caps at 4).
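As a back-of-envelope check (illustrative arithmetic, not the library's exact accounting): a parent plus the capped fan-out of 4 children, each allowed 10 iterations, already needs on the order of 50 calls from the shared budget.

```shell
# Worst case for a depth-1 tree: one parent plus fan_out children,
# each consuming up to max_iterations LLM calls.
max_iterations=10
fan_out=4
total=$(( max_iterations * (1 + fan_out) ))
echo "$total"   # prints 50
```

Deeper recursion multiplies this again per level, which is why budgets exhaust faster than a single-run intuition suggests.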
## Database / persistence

### `database: missing` in `/ready`
Set DATABASE_URL to a valid Postgres/Neon connection string and confirm DATABASE_REQUIRED=true is intentional. For local development:
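A typical local Postgres connection string; the credentials and database name here are illustrative:

```shell
# .env — local Postgres; user, password, and database name are placeholders.
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/fleet_rlm
```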
## MLflow

### Tracing not appearing
Confirm:

- `MLFLOW_ENABLED=true` (the default).
- `MLFLOW_TRACKING_URI` points at a reachable MLflow server.
- In `APP_ENV=local`, `MLFLOW_AUTO_START` is not set to `false`.
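These can be pinned in `.env`; the tracking URI below assumes a local MLflow server on its default port (5000):

```shell
# .env — tracking URI assumes a locally running MLflow server.
MLFLOW_ENABLED=true
MLFLOW_TRACKING_URI=http://localhost:5000
```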
## Still stuck?
- File an issue: github.com/Qredence/fleet-rlm/issues
- Read the source: `src/fleet_rlm/api/main.py` is the canonical entry point, and the architecture page lists the right reading order.