
Configuration

CLI Flags

| Flag | Description |
|------|-------------|
| `--provider <name>` | Force provider (`openai`, `anthropic`, or `gemini`) |
| `--model <name>` | Override model name |
| `--verbose` / `-v` | Show debug-level log entries in TUI |
| `--no-tui` | Disable TUI, use plain text output |
| `--autonomy <level>` | Set autonomy level (`low`, `medium`, `high`, `full`) |
| `--log-file <dir>` | Override session log directory |
| `--mcp` | Run as MCP server on stdio (replaces TUI) |
| `--control-socket` | Enable Unix control socket at `/tmp/intendant-<pid>.sock` |
| `--json` | JSONL structured output to stdout (implies `--no-tui`) |
| `--sandbox` | Enable Landlock filesystem sandboxing (Linux kernel 5.13+) |
| `--direct` | Force single-agent direct mode (skip orchestrator even for complex tasks) |
| `--no-presence` | Disable the presence layer (direct agent interaction) |
| `--continue` / `-c` | Resume most recent session for this project |
| `--resume <id>` / `-r <id>` | Resume specific session by ID or prefix |
| `--web [PORT]` | Start web dashboard with Activity/Usage/Terminal/Displays tabs + optional voice (default port 8765) |
| `--transcription` | Enable server-side audio transcription (Whisper API) |

The TUI launches only when both stdin and stdout are terminals. When piping input/output or in sub-agent mode, intendant falls back to headless mode.
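With `--json`, output arrives as one JSON object per line, which a caller can parse incrementally. A minimal sketch of consuming such a stream — note the field names (`type`, `text`) are invented for illustration, not a documented schema:

```python
import json

# Hypothetical JSONL lines as --json mode might emit them; the exact
# field names ("type", "text") are assumptions, not a documented schema.
sample = "\n".join([
    '{"type": "status", "text": "starting"}',
    '{"type": "output", "text": "done"}',
])

def parse_jsonl(stream: str) -> list[dict]:
    """Parse one JSON object per non-empty line."""
    return [json.loads(line) for line in stream.splitlines() if line.strip()]

events = parse_jsonl(sample)
```

Parsing line by line (rather than buffering the whole stream) is what makes JSONL suitable for piping from a long-running headless session.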

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `OPENAI_API_KEY` / `OPENAI` | — | OpenAI API key |
| `ANTHROPIC_API_KEY` / `ANTHROPIC` | — | Anthropic API key |
| `GEMINI_API_KEY` | — | Google AI (Gemini) API key |
| `PROVIDER` | auto-detect | `"openai"`, `"anthropic"`, or `"gemini"` (used when multiple keys are set) |
| `MODEL_NAME` | per-provider default | Model to use (e.g. `gpt-5.2-codex`, `claude-sonnet-4-5-20250929`, `gemini-2.5-pro`) |
| `USE_NATIVE_TOOLS` | `true` | Enable native API tool calling; `false` falls back to text-based JSON extraction |
| `MODEL_CONTEXT_WINDOW` | per-model default | Context window size in tokens |
| `MAX_OUTPUT_TOKENS` | per-model default | Max output tokens per API call (sent to API) |
| `STRUCTURED_OUTPUT` | `true` for gpt-5+/o3/o4 | Enable JSON object mode for deterministic parsing |
| `REASONING_EFFORT` | — | Reasoning effort for GPT-5/o3/o4 models (`low`, `medium`, `high`) |
| `REASONING_SUMMARY` | — | Reasoning summary mode (`auto`, `concise`, `detailed`) |
| `PRESENCE_PROVIDER` | — | Override provider for the presence layer (fallback: `PROVIDER`) |
| `PRESENCE_MODEL` | — | Override model for the presence layer |
| `INTENDANT_LOG_DIR` | auto | Session log directory (set automatically by caller for the runtime) |

Sub-Agent Environment Variables

These are set automatically when spawning sub-agents (see Multi-Agent Orchestration):

| Variable | Description |
|----------|-------------|
| `INTENDANT_ROLE` | Sub-agent role (`orchestrator`, `research`, `implementation`, `testing`) |
| `INTENDANT_ID` | Unique sub-agent identifier |
| `INTENDANT_TASK` | Task description for the sub-agent |
| `INTENDANT_RESULT_FILE` | Path for sub-agent to write final results |
| `INTENDANT_PROGRESS_FILE` | Path for sub-agent to write periodic progress |
| `INTENDANT_PARENT_KNOWLEDGE` | Path to parent's knowledge store for inheritance |
| `INTENDANT_INHERIT_MEMORY` | `1` to inherit project memory |
| `INTENDANT_SANDBOX_WRITE_PATHS` | Landlock write paths (set by caller when sandboxing) |
| `INTENDANT_MCP_RELOAD` | `1` when process was exec'd for MCP hot-reload |
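From the sub-agent's side, these variables amount to a small startup context. A hedged sketch of reading them — the defaults and the shape of the resulting dict are illustrative assumptions, not the runtime's actual internals:

```python
# Hypothetical view of what a spawned sub-agent reads from its
# environment at startup; field names and defaults are illustrative.
def subagent_context(env: dict) -> dict:
    return {
        "role": env.get("INTENDANT_ROLE", "implementation"),
        "agent_id": env.get("INTENDANT_ID"),
        "task": env.get("INTENDANT_TASK", ""),
        "result_file": env.get("INTENDANT_RESULT_FILE"),
        "progress_file": env.get("INTENDANT_PROGRESS_FILE"),
        "inherit_memory": env.get("INTENDANT_INHERIT_MEMORY") == "1",
    }
```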

The agent runner's hard timeout defaults to 120 s and is automatically extended to 600 s when `askHuman` is present in the command batch.
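The timeout rule above reduces to a single branch; a minimal sketch, assuming the batch can be represented as a list of command names:

```python
# Documented timeout rule: 120 s by default, extended to 600 s when the
# batch contains an askHuman command (which waits on a human response).
# The list-of-names batch representation is an assumption.
DEFAULT_TIMEOUT_S = 120
ASK_HUMAN_TIMEOUT_S = 600

def batch_timeout(commands: list[str]) -> int:
    return ASK_HUMAN_TIMEOUT_S if "askHuman" in commands else DEFAULT_TIMEOUT_S
```

The extension exists because a batch blocked on human input would otherwise be killed by the ordinary 120 s limit.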

Project Configuration

Create intendant.toml in the project root:

```toml
[memory]
enabled = true  # default: true

[model]
context_window = 200000       # override per-model default
max_output_tokens = 8192      # override per-model default

[orchestrator]
max_parallel_agents = 4       # max concurrent sub-agents
sub_agent_dir = ".intendant/subagents"  # where sub-agent workspaces are created

[approval]
file_read = "auto"            # auto-approve file reads
file_write = "ask"            # ask before file writes (default)
file_delete = "ask"           # ask before file deletes (default)
command_exec = "auto"         # auto-approve command execution
network = "auto"              # auto-approve network requests
destructive = "ask"           # ask before destructive commands (default)

[presence]
enabled = true                # enable the conversational presence layer (default: true)
provider = "gemini"           # provider for the presence model (optional, falls back to PROVIDER)
model = "gemini-2.5-flash"    # model for the presence layer (optional)
live_provider = "gemini"      # provider for browser-side live presence (optional)
live_model = "gemini-2.5-flash-native-audio-preview-12-2025"  # model for browser-side live presence (optional)
context_window = 32768        # context window for the presence conversation (default: 32768)

[transcription]
enabled = false               # enable server-side audio transcription (default: false)
provider = "openai"           # transcription provider (default: "openai")
model = "whisper-1"           # transcription model (default: "whisper-1")
language = "en"               # ISO-639-1 language hint (optional, auto-detect if omitted)
# endpoint = "http://..."     # custom endpoint for self-hosted whisper.cpp

[sandbox]
enabled = false               # enable Landlock filesystem sandboxing (default: false)
extra_write_paths = ["/var/log"]  # additional writable paths beyond project root, /tmp, log dir

# External MCP servers to connect to as a client
[[mcp_servers]]
name = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]

[[mcp_servers]]
name = "github"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]

[mcp_servers.env]
GITHUB_TOKEN = "ghp_..."
```

Skills

Skills are named instruction sets stored as SKILL.md files with YAML frontmatter. They are discovered from two directories (project-scoped first):

  1. <project_root>/.intendant/skills/<name>/SKILL.md
  2. ~/.intendant/skills/<name>/SKILL.md

Example SKILL.md:

```markdown
---
name: deploy
description: Deploy the application to production
autonomy: high
disable-auto-invocation: true
---

## Steps

1. Run tests
2. Build release binary
3. Deploy to server
```

Frontmatter fields:

  • name — skill identifier (required)
  • description — shown in skill catalog (required)
  • autonomy — override session autonomy level when active (optional)
  • disable-auto-invocation — if true, only user can trigger this skill (optional, default false)
  • sandbox — override session sandbox setting (optional)

Project skills take precedence over personal skills with the same name. Available skills are formatted into a catalog and injected into the agent’s conversation.

When sandboxing is enabled (via --sandbox or [sandbox].enabled = true), runtime command execution is restricted to read-only filesystem access plus writes to project root, /tmp, session log directory, ~/.intendant, and extra_write_paths. On kernels without Landlock support, sandboxing is silently skipped.

INTENDANT.md Project Instructions

Place an INTENDANT.md file in your project root or at ~/.config/intendant/INTENDANT.md for global instructions. These are injected into the conversation at session start, before knowledge/memory. Both files are loaded if present (global first, project-local second).
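The global-then-project load order can be sketched as follows, with each file injected only if it exists:

```python
from pathlib import Path

# Sketch of the documented load order: global instructions first, then
# project-local; a missing file is simply skipped.
def load_instructions(project_root: Path, config_dir: Path) -> list[str]:
    texts = []
    for path in (config_dir / "INTENDANT.md", project_root / "INTENDANT.md"):
        if path.is_file():
            texts.append(path.read_text())
    return texts
```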

System Prompts

System prompts are compiled into the binary at build time, so intendant works from any directory without needing the source tree. Two base prompt variants exist:

  • SysPrompt.md — Full prompt with JSON schema and per-function documentation (used with text-based JSON extraction)
  • SysPrompt_tools.md — Condensed prompt for native tool calling mode (function docs live in API tool definitions, reducing system prompt tokens)

The active variant is selected automatically based on whether the provider has native tool calling enabled.

Prompts are resolved using a 3-layer cascade (highest priority first):

  1. Project root — <git-root>/SysPrompt.md or SysPrompt_tools.md (per-project customization)
  2. Global config — ~/.config/intendant/SysPrompt.md or SysPrompt_tools.md (user-wide customization)
  3. Compiled-in default — always available, zero-config
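The cascade amounts to a first-match search over the two filesystem layers, falling back to the compiled-in text. A minimal sketch, with the built-in prompt represented here as a string constant:

```python
from pathlib import Path

# Sketch of the 3-layer cascade: project git root, then the user config
# directory, then the compiled-in default (a string stand-in here).
COMPILED_IN_DEFAULT = "...built-in prompt..."

def resolve_prompt(filename: str, git_root: Path, config_dir: Path) -> str:
    for base in (git_root, config_dir):
        candidate = base / filename
        if candidate.is_file():
            return candidate.read_text()
    return COMPILED_IN_DEFAULT
```

Because the last layer is compiled in, resolution always succeeds even with no config files present.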

Role-specific prompts (SysPrompt_orchestrator.md, SysPrompt_research.md, SysPrompt_implementation.md) follow the same cascade and are appended to the base prompt. The presence layer uses its own standalone prompt (SysPrompt_presence.md).

To customize prompts for a specific project, place your modified .md files in the project’s git root. For user-wide customization, place them in ~/.config/intendant/.