fleet-rlm’s execution substrate is Daytona. The interpreter, recursive child sandboxes, durable storage, and host-callback bridge all live in src/fleet_rlm/integrations/daytona/. The provider is async-first internally via AsyncDaytona; sync helpers are compatibility shims only.
This page is the deep dive into what DaytonaInterpreter actually does. For the higher-level architecture, see Architecture.
## What is directly built on the Daytona SDK
The runtime uses the official Daytona Python SDK directly — no abstraction layer in between:
| Need | Daytona SDK surface |
|---|---|
| Client construction | from daytona import AsyncDaytona, DaytonaConfig |
| Sandbox bootstrap and resume | DaytonaSandboxRuntime over the native SDK |
| Repo checkout | sandbox.git.clone(...) |
| Local context staging | sandbox.fs.* (aread_file, awrite_file, alist_files) |
| Persistent volume | client.volume.get(volume_name, create=True) + VolumeMount(...) |
| Stateful Python execution | sandbox.code_interpreter.create_context(...) + run_code(...) |
| Process sessions (broker) | sandbox.process.create_session(...) + execute_session_command(...) |
| Preview URLs | sandbox.get_preview_link(...) |
There is no parallel runtime contract — daytona_pilot is the public runtime path, built on the shared dspy.ReAct + dspy.RLM architecture.
## DaytonaInterpreter: the public facade
src/fleet_rlm/integrations/daytona/interpreter.py exposes DaytonaInterpreter, the single facade everything outside integrations/daytona/ uses. Behind it sit typed collaborators with narrow responsibilities:
| Collaborator | Owns |
|---|---|
| workspace_manager.py | Workspace config, session lifecycle, persisted Daytona state, runtime metadata, import/export |
| workspace_config.py | Normalized immutable workspace configuration boundary (Pydantic v2) |
| sandbox_executor.py | Code execution, sanitization, bridge/setup state, tool callback dispatch, result finalization |
| child_delegation.py | Concrete interpreter hooks for recursive child sandbox creation |
| child_isolation.py | Isolation-mode policy decisions (auto / clean / context) |
| runtime.py | Workspace bootstrap, context staging, snapshot helpers |
| bridge.py | Minimal sandbox-side broker for host callbacks |
| filesystem.py | Volume-aware filesystem helpers |
| diagnostics.py | Structured diagnostics + smoke validation |
| types.py | Provider-local configuration, staged context, smoke result, chat/session contracts |
| volumes.py | Provider-specific volume browsing helpers |
These collaborators are wired via small internal Protocols, not mixin-style dynamic forwarding. Pydantic v2 is used for normalized configuration/state boundaries (WorkspaceConfig); hot execution-path carriers (DaytonaExecutionResponse) remain lightweight dataclasses.
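The wiring style can be sketched with a toy Protocol-based pair. Names here (CodeExecutor, Facade, EchoExecutor) are illustrative stand-ins, not the actual fleet-rlm classes:

```python
from typing import Protocol


class CodeExecutor(Protocol):
    """Narrow contract a facade needs from its executor collaborator."""

    def run(self, code: str) -> str: ...


class EchoExecutor:
    """Toy executor; satisfies the protocol structurally, no inheritance."""

    def run(self, code: str) -> str:
        return f"ran: {code}"


class Facade:
    """Depends only on the protocol, never on a concrete collaborator."""

    def __init__(self, executor: CodeExecutor) -> None:
        self._executor = executor

    def execute(self, code: str) -> str:
        return self._executor.run(code)


facade = Facade(EchoExecutor())
print(facade.execute("1 + 1"))  # → ran: 1 + 1
```

Because the dependency is a structural Protocol rather than a base class, collaborators stay swappable in tests without mixin-style dynamic forwarding.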
## Sandbox lifecycle
### Snapshot vs. image fallback
Sandbox creation prefers the reusable fleet-rlm-base Daytona snapshot, an environment template that pre-bakes the default Python runtime packages (dspy-ai, numpy, pandas, httpx, pydantic).
If the snapshot is missing or not active, sandbox creation falls back to the same declarative image build so startup still has the expected dependencies. Operators bootstrap or refresh the template with:
```bash
uv run fleet-rlm daytona-snapshot
uv run fleet-rlm daytona-snapshot --refresh
```
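The snapshot-vs-image preference amounts to a small decision rule. A minimal sketch with hypothetical names (CreateParams and the snapshot-state string are stand-ins; the real creation call lives inside the provider):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class CreateParams:
    """Hypothetical sandbox-creation parameters, for illustration only."""

    snapshot: Optional[str] = None
    image: Optional[str] = None


def creation_params(snapshot_state: Optional[str]) -> CreateParams:
    """Prefer the pre-baked snapshot; otherwise fall back to the same
    declarative image build so startup still has expected dependencies.

    `snapshot_state` stands in for whatever the Daytona API reports about
    the fleet-rlm-base snapshot ("active", "building", or missing).
    """
    if snapshot_state == "active":
        return CreateParams(snapshot="fleet-rlm-base")
    # Snapshot missing or not yet active: build from the declarative image.
    return CreateParams(image="fleet-rlm-base-image")
```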
### Session continuity model
The Daytona runtime treats sandbox continuity as the default operating mode for a chat session:
- One long-lived root Daytona sandbox session per agent session.
- One persistent Daytona code-interpreter context reused across warm turns.
- Repo/ref/context changes reconcile in place inside that sandbox.
- The mounted volume remains the canonical durable target for memory/, artifacts/, buffers/, and meta/.
The runtime deliberately separates two concerns:
- Sandbox identity — the long-lived Daytona sandbox and mounted volume.
- Workspace configuration — the repo checkout, ref selection, staged .fleet-rlm/context inputs, and helper setup inside that sandbox.
Repo, ref, or staged-context changes are not automatic reasons to delete the root sandbox. Instead, the runtime:
- Clones a repo if the desired checkout is missing.
- Fetches and updates the checkout in place when the ref changes.
- Clears and re-stages .fleet-rlm/context when host context inputs change.
- Reruns sandbox helper setup so the live interpreter context retargets the new workspace path without discarding its in-memory state.
The runtime only forces sandbox recreation when continuity would be unsafe:
- Explicit session reset / force_new_session.
- Mounted volume incompatibility.
- Unrecoverable sandbox or reconcile failure.
- Resume failure for a persisted sandbox/context snapshot.
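Put together, the recreate-vs-reconcile decision reduces to a predicate over a handful of safety signals. A minimal sketch, assuming hypothetical field names for the per-turn inputs:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TurnRequest:
    """Hypothetical per-turn inputs relevant to the continuity decision."""

    force_new_session: bool = False
    volume_compatible: bool = True
    reconcile_failed: bool = False
    resume_failed: bool = False


def should_recreate_sandbox(req: TurnRequest) -> bool:
    """Recreate only when continuity would be unsafe; otherwise the
    runtime reconciles in place (clone, fetch, re-stage, re-setup)."""
    return (
        req.force_new_session
        or not req.volume_compatible
        or req.reconcile_failed
        or req.resume_failed
    )


# Repo/ref/context changes alone never force recreation:
assert should_recreate_sandbox(TurnRequest()) is False
assert should_recreate_sandbox(TurnRequest(force_new_session=True)) is True
```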
This is the foundation for deeper dspy.RLM analysis flows — warm turns continue in the same sandbox, durable outputs accumulate on the mounted volume, and resumed sessions become a first-class continuity path.
## Three storage layers
The Daytona runtime separates three distinct memory layers. Mixing them is the most common source of confusion.
| Layer | Lifetime | What lives there |
|---|---|---|
| Reusable environment template | Across sandboxes | fleet-rlm-base snapshot — only for faster sandbox creation; not a session persistence mechanism |
| Volatile execution-context state | Across warm turns in one sandbox | Python globals, imports, helper functions, in-memory objects in the code-interpreter context |
| Durable mounted-volume storage | Across sandbox restarts | Files under /home/daytona/memory/ with memory/, artifacts/, buffers/, meta/ |
Repos, staged context, package installs, caches, and scratch files in the workspace are not durable by default. Files survive context reset, sandbox restart, or session resume only when they are explicitly promoted into the mounted-volume durable directories.
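Promotion is an explicit copy, not a sync. A local-filesystem sketch of the idea, where temp dirs stand in for the sandbox workspace and the mounted volume (the real durable root is /home/daytona/memory):

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for the sandbox workspace and the mounted-volume root.
workspace = Path(tempfile.mkdtemp())
volume_root = Path(tempfile.mkdtemp())

# A scratch file in the workspace: NOT durable by default.
scratch = workspace / "report.md"
scratch.write_text("# findings\n")

# Nothing syncs automatically; durable outputs must be promoted
# explicitly into the volume's canonical directories.
artifacts = volume_root / "artifacts"
artifacts.mkdir(parents=True, exist_ok=True)
shutil.copy2(scratch, artifacts / scratch.name)

print((artifacts / "report.md").read_text())
```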
### Workspace vs. volume vs. context
| Root | Purpose |
|---|---|
| Workspace root | Live repo checkout plus transient execution files inside the sandbox |
| Context root | Run-scoped host inputs staged into the workspace under .fleet-rlm/context |
| Mounted volume root | Durable storage only — /home/daytona/memory |
Workspace-aware tools target the live sandbox workspace. Volume-aware tools target the canonical durable directories. There is no automatic workspace-to-volume sync — code that needs durable memory or artifacts must explicitly write to the mounted volume.
### Volume naming
- The Daytona persistent volume name is derived from the authenticated workspace/tenant claim.
- DAYTONA_TARGET is Daytona SDK routing/config input only — never a workspace id, sandbox id, or volume name.
- Session manifests live under meta/workspaces/<workspace_id>/users/<user_id>/react-session-<session_id>.json.
- Root and recursive child runs share the same workspace-scoped volume when one is configured, while still using distinct sandbox sessions per child.
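The manifest layout above can be captured as a small path builder. manifest_path here is a hypothetical helper for illustration, not the runtime's actual API:

```python
from pathlib import PurePosixPath


def manifest_path(workspace_id: str, user_id: str, session_id: str) -> PurePosixPath:
    """Build the session-manifest path under the volume's meta/ layer,
    relative to the mounted volume root."""
    return PurePosixPath(
        "meta",
        "workspaces",
        workspace_id,
        "users",
        user_id,
        f"react-session-{session_id}.json",
    )


print(manifest_path("ws1", "u1", "abc"))
# → meta/workspaces/ws1/users/u1/react-session-abc.json
```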
## The host-callback bridge
The provider is intentionally hybrid:
- Direct async Daytona SDK calls for client, sandbox, volume, filesystem, preview, process-session, and code-interpreter operations.
- A minimal guide-style broker bridge (bridge.py) for host callbacks only.
The broker runs as a sandbox-side process started via sandbox.process.create_session(...). It exists so code running inside the sandbox can reach back to the host for:
- llm_query / llm_query_batched — semantic LLM calls.
- sub_rlm / sub_rlm_batched — recursive child RLM creation.
- Custom tool dispatch.
- SUBMIT(...) — final-artifact capture.
This is the only path sandbox code uses to reach the host. Budget enforcement and MLflow trace continuity happen at this boundary.
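The broker protocol is essentially request/response dispatch over a narrow op set. A self-contained sketch of the host-side routing idea (handler bodies and the message shape are stand-ins; the real bridge also enforces budgets and trace continuity at this point):

```python
import json
from typing import Any, Callable, Dict

# Hypothetical dispatch table mirroring the callback surface above.
HANDLERS: Dict[str, Callable[..., Any]] = {
    "llm_query": lambda prompt: f"echo:{prompt}",       # stand-in for an LM call
    "SUBMIT": lambda artifact: {"captured": artifact},  # final-artifact capture
}


def dispatch(message: str) -> str:
    """Decode one callback request and route it to its handler.
    Budget enforcement and trace propagation would also happen here."""
    req = json.loads(message)
    result = HANDLERS[req["op"]](*req.get("args", []))
    return json.dumps({"ok": True, "result": result})


print(dispatch('{"op": "llm_query", "args": ["hi"]}'))
```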
## Execution path
The interpreter uses Daytona’s stateful Python execution context, not a fresh REPL per call:
- sandbox.code_interpreter.create_context(...) is created once at sandbox start. Python state lives here across calls.
- sandbox.code_interpreter.run_code(code, context_id=...) is the primary execution path.
- The broker process is started only when host callbacks are needed.
This keeps the provider aligned with the Daytona SDK while preserving the extra RLM contract the shared runtime needs: host callbacks, custom tools, SUBMIT(...), and stable result translation.
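The stateful-context behavior can be illustrated without the SDK: a single persistent namespace plays the role of the code-interpreter context, so later calls see state created by earlier ones. This is a concept sketch, not the Daytona API:

```python
# Persistent namespace standing in for one code-interpreter context.
context_globals: dict = {}


def run_code(code: str) -> object:
    """Execute code in the shared namespace; return `result` if the
    snippet set one. State persists between calls, like warm turns."""
    exec(code, context_globals)
    return context_globals.get("result")


run_code("x = 21")                 # first call defines state
print(run_code("result = x * 2"))  # later call still sees x → 42
```

A fresh-REPL-per-call model would lose `x` between the two calls; reusing one context is what makes warm turns cheap.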
## Filesystem helpers
Sandbox/file helper code should treat DaytonaSandboxSession as the canonical interface:
- Async flows use aread_file, awrite_file, alist_files.
- Sync helpers use _ensure_session_sync() only at the public sync boundary.
- Helper code should not fall back to raw sandbox.fs.* access or mixed ad-hoc session shapes.
AsyncDaytona clients owned by an interpreter must be closed when the interpreter is discarded — otherwise HTTP sessions leak.
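The async-first-with-sync-shim pattern looks roughly like this toy session (illustrative names and storage only, not the real DaytonaSandboxSession surface):

```python
import asyncio
from typing import Dict


class Session:
    """Toy session: async-first file access, with a sync shim kept
    strictly at the public boundary."""

    def __init__(self) -> None:
        # In-memory stand-in for sandbox files.
        self._files: Dict[str, str] = {"notes.txt": "hello"}

    async def aread_file(self, path: str) -> str:
        return self._files[path]

    def read_file(self, path: str) -> str:
        # Sync compatibility shim: valid only outside a running event
        # loop, which is why it stays at the outermost public boundary.
        return asyncio.run(self.aread_file(path))


print(Session().read_file("notes.txt"))  # → hello
```

Keeping the shim at one boundary avoids nested-event-loop errors and keeps all internal call paths purely async.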
## What is intentionally not Daytona’s responsibility
The runtime is SDK-owned. Repo-side .daytona configs, devcontainer configs, and Declarative Builder configs are not consulted at runtime in this iteration. Declarative Builder is relevant only as a future base-image strategy.
## Diagnostics and smoke validation
Validate sandbox connectivity in isolation, without invoking an LM:
```bash
uv run fleet-rlm daytona-smoke \
  --repo https://github.com/Qredence/fleet-rlm.git \
  --ref main
```
A successful run confirms credentials, network, sandbox lifecycle, volume mount, and basic execution. The smoke result type is fleet_rlm.integrations.daytona.types.DaytonaSmokeResult.
For programmatic checks, hit the runtime test endpoints:
```
POST /api/v1/runtime/tests/daytona
POST /api/v1/runtime/tests/lm
```
## See also