# Qredence Documentation

## Docs

- [Adaptive workspace contract](https://docs.qredence.ai/fleet-pi/adaptive-workspace.md): The accepted canonical workspace contract for Fleet Pi: manifest, section families, and projection boundary.
- [Agent workspace](https://docs.qredence.ai/fleet-pi/agent-workspace.md): `agent-workspace/` is Fleet Pi's durable, repo-local home for agent memory, plans, skills, and artifacts.
- [API reference](https://docs.qredence.ai/fleet-pi/api-reference.md): HTTP API for the Fleet Pi local web app, including the chat NDJSON stream and supporting endpoints.
- [Architecture](https://docs.qredence.ai/fleet-pi/architecture.md): Runtime boundaries between the browser client, the TanStack Start backend, the agent workspace, and AWS Bedrock.
- [Codex setup](https://docs.qredence.ai/fleet-pi/codex.md): Use Fleet Pi's shared Codex local environment and worktree bootstrap flow for advanced multi-agent setups.
- [Introduction](https://docs.qredence.ai/fleet-pi/introduction.md): Fleet Pi is a local browser workspace for Pi-powered coding agents, with durable plans, memory, and repo-scoped tools in Git.
- [Project structure](https://docs.qredence.ai/fleet-pi/project-structure.md): Overview of the Fleet Pi monorepo workspace layout, key dependencies, and runtime data flow.
- [Quickstart](https://docs.qredence.ai/fleet-pi/quickstart.md): Install Fleet Pi, configure AWS Bedrock, and launch the local workspace in under five minutes.
- [Runbooks](https://docs.qredence.ai/fleet-pi/runbooks.md): Operational runbooks for incident response and troubleshooting the Fleet Pi chat application.
- [Runtime SDK integration seams](https://docs.qredence.ai/fleet-pi/runtime-sdk-integration.md): Current Fleet Pi runtime integration boundaries for deeper platform and adaptive workspace work.
- [Architecture](https://docs.qredence.ai/fleet-rlm/concepts/architecture.md): Thin FastAPI/WebSocket transport, runtime core, and Daytona substrate: three layers with intentional boundaries.
- [Core concepts](https://docs.qredence.ai/fleet-rlm/concepts/overview.md): ReAct orchestration, recursive long-context execution, and the Daytona-backed runtime.
- [Recursive RLM](https://docs.qredence.ai/fleet-rlm/concepts/recursive-rlm.md): How the ReAct agent delegates to `dspy.RLM`, child sandbox isolation, and the shared LLM-call budget.
- [Deploying the API server](https://docs.qredence.ai/fleet-rlm/guides/deployment.md): Run fleet-rlm in production with Entra auth, Neon-backed persistence, and Daytona sandboxes.
- [DSPy integration](https://docs.qredence.ai/fleet-rlm/guides/dspy-integration.md): How fleet-rlm builds on `dspy.ReAct`, `dspy.RLM`, and the offline GEPA optimization layer.
- [Troubleshooting](https://docs.qredence.ai/fleet-rlm/guides/troubleshooting.md): Common installation, configuration, and runtime issues with fleet-rlm.
- [Installation](https://docs.qredence.ai/fleet-rlm/installation.md): Install fleet-rlm from PyPI for end users or from source for contributors.
- [Introduction](https://docs.qredence.ai/fleet-rlm/introduction.md): fleet-rlm is a Daytona-backed web workspace for running recursive language-model tasks on top of DSPy.
- [Quickstart](https://docs.qredence.ai/fleet-rlm/quickstart.md): Install fleet-rlm, configure Daytona and an LLM provider, and launch the Web UI.
- [CLI reference](https://docs.qredence.ai/fleet-rlm/reference/cli.md): The `fleet-rlm` and `fleet` command-line surfaces.
- [Configuration](https://docs.qredence.ai/fleet-rlm/reference/configuration.md): Environment variables for fleet-rlm: LLM, Daytona, auth, database, MLflow, and recursive RLM.
- [HTTP & WebSocket API](https://docs.qredence.ai/fleet-rlm/reference/http-api.md): REST and WebSocket surface exposed by `src/fleet_rlm/api/main.py`.
- [Authoring a plugin](https://docs.qredence.ai/qredence-plugins/authoring.md): Folder layout, manifests, and the marketplace checklist.
- [Claude Code usage](https://docs.qredence.ai/qredence-plugins/claude.md): Install and run Qredence plugins inside Claude Code.
- [Codex usage](https://docs.qredence.ai/qredence-plugins/codex.md): Install and run Qredence plugins inside OpenAI Codex.
- [Introduction](https://docs.qredence.ai/qredence-plugins/introduction.md): User-focused plugins for Claude Code and OpenAI Codex.
- [autoresearch-dspy](https://docs.qredence.ai/qredence-plugins/plugins/autoresearch-dspy.md): Ratchet-style autonomous experiment loops with DSPy 3.1.3 and `dspy.RLM`.
- [development](https://docs.qredence.ai/qredence-plugins/plugins/development.md): Skill-library plugin for stress-testing plans, rendering data viz, and scoring skills.
- [harness-engineering](https://docs.qredence.ai/qredence-plugins/plugins/harness-engineering.md): Repo legibility and validation workflows for Claude and Codex.
- [legal](https://docs.qredence.ai/qredence-plugins/plugins/legal.md): Audit Terms of Service and privacy policies for unfair clauses.
- [meta-harness](https://docs.qredence.ai/qredence-plugins/plugins/meta-harness.md): Automated harness engineering for benchmarked LLM tasks.
- [rlm-wiki](https://docs.qredence.ai/qredence-plugins/plugins/rlm-wiki.md): Daytona-backed markdown wiki workflows for Fleet-RLM.
- [symphony](https://docs.qredence.ai/qredence-plugins/plugins/symphony.md): Reference Symphony service for Linear-driven Codex orchestration.
- [Quickstart](https://docs.qredence.ai/qredence-plugins/quickstart.md): Install and run a Qredence plugin in Claude Code or Codex.

## OpenAPI Specs

- [openapi](https://docs.qredence.ai/openapi.json)

## Optional

- [GitHub Organization](https://github.com/qredence)
- [Discord Community](https://discord.gg/qredence)
- [Status Page](https://status.qredence.com)