Documentation Index
Fetch the complete documentation index at: https://docs.qredence.ai/llms.txt
Use this file to discover all available pages before exploring further.
fleet-rlm exposes two command entrypoints:
fleet-rlm — Typer-based command group with subcommands for server modes, optimization, and terminal chat.
fleet — Lightweight launcher for terminal chat and Web UI startup.
fleet-rlm
Usage: fleet-rlm [OPTIONS] COMMAND [ARGS]...
Commands:
serve-api Run the FastAPI server surface (used by `fleet web`).
optimize Run DSPy optimization workflows.
chat Start standalone in-process interactive terminal chat.
daytona-smoke Run a native Daytona smoke validation without invoking an LM.
fleet-rlm serve-api
Run the FastAPI HTTP/WebSocket server. This is the backend for the Web UI and is invoked by `fleet web`.
Usage: fleet-rlm serve-api [OPTIONS]
Options:
--host TEXT Bind host [default: 127.0.0.1]
--port INTEGER Bind port [default: 8000]
Examples:
# Default host:port
uv run fleet-rlm serve-api
# Bind to all interfaces on a custom port
uv run fleet-rlm serve-api --host 0.0.0.0 --port 8080
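The defaults above can be overridden per environment. A minimal launcher sketch, assuming `FLEET_HOST` and `FLEET_PORT` as hypothetical convenience variables (they are not options `fleet-rlm` reads itself):

```shell
# Hypothetical wrapper: read host/port from environment variables,
# falling back to the documented serve-api defaults, then print the
# command that would be executed.
HOST="${FLEET_HOST:-127.0.0.1}"
PORT="${FLEET_PORT:-8000}"
echo "uv run fleet-rlm serve-api --host ${HOST} --port ${PORT}"
```

Replace the final `echo` with `exec` to actually launch the server.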
fleet-rlm chat
Start a standalone, in-process interactive terminal chat session with the RLM agent.
Usage: fleet-rlm chat [OPTIONS]
Options:
--docs-path PATH Optional document path to preload as active context
--trace / --no-trace Enable verbose thought/status display
--trace-mode TEXT Trace display mode: compact, verbose, or off
--volume-name TEXT Optional Daytona volume name for persistent storage
Trace modes:
| Mode | Description |
|---|---|
| compact | Condensed trace output (default) |
| verbose | Full thought/status display |
| off | Disable trace output |
Examples:
# Interactive chat with default trace
uv run fleet-rlm chat
# Preload a document as active context
uv run fleet-rlm chat --docs-path ./docs/architecture.md
# Verbose trace output
uv run fleet-rlm chat --trace --trace-mode verbose
# Persist chat state to a specific Daytona volume
uv run fleet-rlm chat --volume-name my-volume
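When scripting chat startup, it can help to pass `--docs-path` only when the file is actually present, so chat still starts cleanly without preloaded context. A sketch (the path is illustrative):

```shell
# Only pass --docs-path when the document exists; otherwise start chat
# with no preloaded context. The path below is illustrative.
DOCS="./docs/architecture.md"
ARGS=""
if [ -f "$DOCS" ]; then
  ARGS="--docs-path $DOCS"
fi
echo "uv run fleet-rlm chat $ARGS --trace-mode compact"
```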
fleet-rlm optimize
Run GEPA offline optimization for a registered DSPy module.
Usage: fleet-rlm optimize [OPTIONS] MODULE [DATASET]
Arguments:
MODULE Registered module slug (use 'list' to see available modules).
DATASET Path to JSON or JSONL dataset.
Options:
--output-path, -o PATH Where to save the optimized DSPy module artifact.
--train-ratio FLOAT Training split ratio for GEPA compilation. [default: 0.8]
--auto TEXT Optimization intensity: light, medium, heavy. [default: light]
--report Print a markdown summary after optimization.
Examples:
# List registered optimization modules
uv run fleet-rlm optimize list
# Default light optimization
uv run fleet-rlm optimize my-module dataset.jsonl
# Heavy optimization with a custom output path
uv run fleet-rlm optimize my-module dataset.jsonl \
--auto heavy \
--output-path ./artifacts/optimized.pkl
# Generate a markdown report
uv run fleet-rlm optimize my-module dataset.jsonl --report
Optimized artifacts are saved for manual review and are not auto-loaded into the live runtime.
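Since `optimize` accepts JSON or JSONL datasets, it is worth validating the file before a long run. A sketch with placeholder field names (the `question`/`answer` keys are assumptions, not a documented schema; check what your module expects):

```shell
# Build a tiny two-example JSONL dataset and check that every line
# parses as JSON before handing it to `optimize`. The question/answer
# field names are placeholders, not a documented schema.
DATASET="$(mktemp)"
cat > "$DATASET" <<'EOF'
{"question": "What does serve-api do?", "answer": "Runs the FastAPI server."}
{"question": "What is GEPA?", "answer": "An offline optimization workflow."}
EOF
while IFS= read -r line; do
  printf '%s' "$line" | python3 -m json.tool > /dev/null || exit 1
done < "$DATASET"
echo "dataset ok"
```

Then pass the validated file: `uv run fleet-rlm optimize my-module "$DATASET"`.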
fleet-rlm daytona-smoke
Run a native Daytona smoke validation without invoking an LM. Useful for verifying credentials and sandbox lifecycle in isolation.
Usage: fleet-rlm daytona-smoke [OPTIONS]
Options:
--repo TEXT Repository URL to clone into the Daytona sandbox. [required]
--ref TEXT Optional branch or commit SHA to checkout after clone.
Example:
uv run fleet-rlm daytona-smoke \
--repo https://github.com/Qredence/fleet-rlm.git \
--ref main
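Because the smoke test exists to verify credentials, a wrapper can fail soft when they are absent. A sketch, assuming `DAYTONA_API_KEY` as the credential variable (the name is an assumption, not documented fleet-rlm configuration):

```shell
# Fail-soft credential check before the smoke test. DAYTONA_API_KEY is
# an assumed variable name, not documented fleet-rlm configuration.
if [ -z "${DAYTONA_API_KEY:-}" ]; then
  MSG="missing DAYTONA_API_KEY; skipping daytona-smoke"
else
  MSG="uv run fleet-rlm daytona-smoke --repo https://github.com/Qredence/fleet-rlm.git --ref main"
fi
echo "$MSG"
```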
fleet
Lightweight launcher for terminal chat and Web UI startup.
usage: fleet [-h] [--docs-path DOCS_PATH]
[--trace-mode {compact,verbose,off}]
[--volume-name VOLUME_NAME]
[{web}]
Examples:
# Terminal chat
uv run fleet
# Web UI (alias for `fleet-rlm serve-api` + SPA)
uv run fleet web
# Preload a document into terminal chat
uv run fleet --docs-path README.md
# Verbose terminal trace
uv run fleet --trace-mode verbose
Hydra overrides are supported as key=value tokens after the command.
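As a sketch of what that token form looks like, a wrapper can separate `key=value` overrides from ordinary flags before forwarding them (the override keys below are illustrative, not real fleet config keys):

```shell
# Split a command line into ordinary flag arguments and Hydra-style
# key=value override tokens. The override keys are illustrative only.
OVERRIDES=""
FLAGS=""
for tok in --trace-mode verbose model.temperature=0.2 server.port=8080; do
  case "$tok" in
    *=*) OVERRIDES="$OVERRIDES $tok" ;;
    *)   FLAGS="$FLAGS $tok" ;;
  esac
done
echo "flags:$FLAGS"
echo "overrides:$OVERRIDES"
```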
See also