[{"data":1,"prerenderedAt":143},["ShallowReactive",2],{"search-api":3},[4,18,35,53,69,84,100,117,134],{"id":5,"path":6,"dir":7,"title":8,"description":9,"keywords":10,"body":17},"content:docs:_index.md","\u002Fdocs\u002F_index","docs","Hermes Agent Architecture Course","",[11,12,13,14,15,16],"Welcome, Brian! 👋","📚 Course Structure","🎯 Learning Outcomes","🛠️ Prerequisites","📖 How to Use This Course","🚀 Quick Start","  Hermes Agent Architecture Course  Welcome, Brian! 👋  This course will take you from zero to deep understanding of how Hermes Agent works under the hood. We'll explore the codebase step by step, with hands-on examples and exercises.   📚 Course Structure     Module  Topic  Estimated Time     01   Overview & Mental Model  30 min    02   Core Agent Loop & Conversation Flow  60 min    03   Tools System & Toolsets  90 min    04   Gateway & Messaging Platforms  60 min    05   Sessions, Memory & Persistence  45 min    06   Skills System & Self-Improvement  75 min    07   Advanced Topics: Subagents, Cron, MCP  60 min   🎯 Learning Outcomes  By the end of this course, you'll understand:    How Hermes works as a whole  — The big picture architecture and data flow   The agent loop in detail  — How conversations progress turn by turn   Tool orchestration  — How tools are discovered, registered, and called   Multi-platform support  — How one codebase serves CLI, Telegram, Discord, etc.   
Persistence layer  — Sessions, memory, and how context survives across restarts   Learning loop  — How skills are created, improved, and reused   Advanced features  — Subagents, scheduling, MCP integration   🛠️ Prerequisites   Basic Python knowledge (functions, classes, decorators)  Familiarity with LLMs and chat interfaces  The cloned repo at   ~\u002Fgit\u002Fnous-hermes-agent   📖 How to Use This Course    Read each module in order  — They build on each other   Follow along in the codebase  — Open files as we reference them   Try the exercises  — Hands-on practice reinforces learning   Ask questions  — Use your Telegram chat to explore concepts interactively   🚀 Quick Start  Start with Module 01:     cd   ~\u002Fgit\u002Fnous-hermes-agent\u002Fcourse\u002F01-overview\n  Then read   README.md  and follow the guided tour.    Course created: March 24, 2026  Based on Hermes Agent from NousResearch\u002Fhermes-agent",{"id":19,"path":20,"dir":21,"title":22,"description":9,"keywords":23,"body":34},"content:docs:01-overview:overview.md","\u002Fdocs\u002F01-overview\u002Foverview","01-overview","Module 01: Overview & Mental Model",[24,25,26,27,28,29,30,31,32,33],"🎯 What You'll Learn","1.1 What is Hermes Agent?","1.2 High-Level Architecture","1.3 Core Components Deep Dive","1.4 Data Flow: A Typical Turn","1.5 File Dependency Chain","1.6 Key Files to Explore First","1.7 Hands-On Exercise","1.8 Common Questions","✅ Module 1 Checklist","  Module 01: Overview & Mental Model  🎯 What You'll Learn   The big picture: what Hermes Agent actually is  High-level architecture diagram  Key components and their responsibilities  How data flows through the system  Where to find things in the codebase   1.1 What is Hermes Agent?  Hermes is a   tool-using AI agent  that can:   Chat with you naturally (CLI or messaging apps)  Execute tools (terminal, files, web, browser, etc.)  Remember past conversations and learn from experience  Run autonomously on schedules or in the background  Delegate work to subagents for parallel processing  Key Design Principles    Model-agnostic  — Use any LLM provider (OpenRouter, Nous Portal, local, etc.)   Platform-agnostic  — Same agent, multiple interfaces (CLI, Telegram, Discord...)   Self-improving  — Creates and refines skills from experience   Persistent  — Remembers across sessions via SQLite + memory system   Research-ready  — Batch processing, trajectory logging, RL environments   1.2 High-Level Architecture   ┌─────────────────────────────────────────────────────────────┐\n│                    USER INTERFACES                          │\n│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐   │\n│  │   CLI    │  │ Telegram │  │ Discord  │  │ Slack... 
│   │\n│  └────┬─────┘  └────┬─────┘  └────┬─────┘  └────┬─────┘   │\n└───────┼────────────┼────────────┼────────────┼──────────────┘\n        │            │            │            │\n        └────────────┴─────┬──────┴────────────┘\n                           │\n              ┌────────────▼────────────┐\n              │     GATEWAY LAYER       │\n              │  (gateway\u002Frun.py)       │\n              │  - Message routing      │\n              │  - Platform adapters    │\n              │  - Session management   │\n              └────────────┬────────────┘\n                           │\n              ┌────────────▼────────────┐\n              │     AGENT CORE          │\n              │  (run_agent.py)         │\n              │  - Conversation loop    │\n              │  - Tool orchestration   │\n              │  - Context building     │\n              └────────────┬────────────┘\n                           │\n        ┌──────────────────┼──────────────────┐\n        │                  │                  │\n┌───────▼────────┐ ┌──────▼───────┐ ┌────────▼────────┐\n│   TOOL LAYER   │ │  MEMORY LAYER│ │  SKILLS LAYER   │\n│ (tools\u002F*.py)   │ │(hermes_state.│ │ (optional-skills│\n│ - terminal     │ │      py)     │ │  + ~\u002F.hermes\u002F   │\n│ - files        │ │ - sessions   │ │    skills\u002F)     │\n│ - web          │ │ - memories   │ │ - skill defs    │\n│ - browser      │ │ - search     │ │ - templates     │\n└────────────────┘ └──────────────┘ └─────────────────┘\n   1.3 Core Components Deep Dive  Gateway Layer (  gateway\u002F )   Purpose:  Bridge between messaging platforms and the agent core.     File  Responsibility     run.py  (255KB!)  Main gateway loop, message dispatch, slash commands    session.py  SessionStore — persists conversations to SQLite    platforms\u002F  Telegram, Discord, Slack adapters    config.py  Gateway configuration   Key Concept:  The gateway runs as a   separate process  from the CLI. 
It polls for incoming messages and dispatches them to the agent.   Agent Core (  run_agent.py ,   model_tools.py )   Purpose:  The brain — manages conversations and tool execution.     File  Responsibility     run_agent.py  AIAgent class, conversation loop    model_tools.py  Tool discovery, function call handling    cli.py  Interactive CLI (separate from gateway)   Key Concept:  The agent loop is   synchronous : receive message → build context → call LLM → execute tools → repeat.   Tools Layer (  tools\u002F )   Purpose:  Implementations of all callable functions.   tools\u002F\n├── registry.py        # Central tool registry (schemas + dispatch)\n├── terminal_tool.py   # Shell command execution\n├── file_tools.py      # Read\u002Fwrite\u002Fsearch\u002Fpatch files\n├── web_tools.py       # Web search + extraction\n├── browser_tool.py    # Browser automation (Browserbase)\n├── delegate_tool.py   # Spawn subagents\n├── mcp_tool.py        # MCP client (~1050 lines)\n└── ... (20+ more)\n   Key Concept:  All tools register themselves with   registry.register()  at import time. The registry builds the schema sent to the LLM.   Memory Layer (  hermes_state.py )   Purpose:  Persistent storage for sessions, memories, and search.     Feature  Implementation    Sessions  SQLite with FTS5 full-text search   Memories  Key-value store in   ~\u002F.hermes\u002Fmemories\u002F   User profile  JSON in   ~\u002F.hermes\u002Fconfig.yaml   Skills Layer (  optional-skills\u002F ,   ~\u002F.hermes\u002Fskills\u002F )   Purpose:  Procedural memory — reusable workflows.   skill\u002F\n├── SKILL.md           # Instructions (YAML frontmatter + markdown)\n├── references\u002F        # Supporting docs\n├── templates\u002F         # Reusable templates\n└── scripts\u002F          # Helper Python scripts\n   Key Concept:  Skills are   loaded as user messages , not system prompts — preserves prompt caching.   
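 The skill-loading detail above can be sketched in a few lines. This is a hypothetical helper (the function name and message format are assumptions, not the repo's actual code): by appending the skill as a user message, the system prompt stays byte-identical across turns, which is what keeps provider-side prompt caching warm.

```python
from pathlib import Path

def inject_skill(messages: list, skill_dir: Path) -> list:
    # Hypothetical helper illustrating the "skills as user messages" idea.
    # The system message (messages[0]) is never touched, so a cached prefix
    # of the prompt remains valid; only a new user message is appended.
    skill_text = (skill_dir / "SKILL.md").read_text()
    return messages + [
        {"role": "user",
         "content": f"[Skill loaded: {skill_dir.name}]\n\n{skill_text}"}
    ]
```

If skills were concatenated into the system prompt instead, every skill change would invalidate the cached prefix and force the provider to re-process the whole context.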
1.4 Data Flow: A Typical Turn  Let's trace what happens when you send a message:  Step 1: Message Arrival (Gateway)     # gateway\u002Frun.py - Main loop\n   while True:\n       messages = platform.poll()  # Telegram, Discord, etc.\n       for msg in messages:\n           session = session_store.get(msg.chat_id)\n           response = agent.chat(msg.text, session.context)\n           platform.send(response)\n  Step 2: Context Building (Agent Core)     # run_agent.py - AIAgent.run_conversation()\n   messages = [\n       {\"role\": \"system\", \"content\": system_prompt},\n       *session.history,  # Past conversation\n       {\"role\": \"user\", \"content\": new_message}\n   ]\n   tool_schemas = registry.get_available_tools()  # From enabled toolsets\n  Step 3: LLM Call (Model Tools)     # model_tools.py\n   response = client.chat.completions.create(\n       model=\"anthropic\u002Fclaude-opus-4.6\",\n       messages=messages,\n       tools=tool_schemas\n   )\n  Step 4: Tool Execution (Registry)     # model_tools.py - handle_function_call()\n   if response.tool_calls:\n       for tool_call in response.tool_calls:\n           result = registry.dispatch(tool_call.name, tool_call.args)\n           messages.append({\"role\": \"tool\", \"content\": result})\n  Step 5: Response (Back to Gateway)     # Agent returns final text → Gateway sends to platform\n   platform.send(final_response)\n   session_store.save(history + [user_msg, assistant_msg])\n   1.5 File Dependency Chain  From the AGENTS.md documentation:   tools\u002Fregistry.py  (no deps — imported by all tool files)\n       ↑\ntools\u002F*.py  (each calls registry.register() at import time)\n       ↑\nmodel_tools.py  (imports tools\u002Fregistry + triggers tool discovery)\n       ↑\nrun_agent.py, cli.py, batch_runner.py, environments\u002F\n   Why this matters:  If you add a new tool:   Create   tools\u002Fmy_tool.py  with   registry.register()  Add import in   model_tools.py    _discover_tools()  Add to toolset in  
 toolsets.py   1.6 Key Files to Explore First     File  Lines  What It Does     run_agent.py  ~800  Core agent loop    model_tools.py  ~500  Tool orchestration    tools\u002Fregistry.py  ~300  Tool registration + dispatch    gateway\u002Frun.py  ~2500  Gateway main loop (large but readable)    hermes_cli\u002Fcommands.py  ~770  Slash command registry    hermes_state.py  ~950  Session storage + search   1.7 Hands-On Exercise  Exercise 1: Trace the Code   Open   ~\u002Fgit\u002Fnous-hermes-agent\u002Frun_agent.py  Find the   AIAgent.run_conversation()  method (around line 150)  Read through the main while loop — identify where:\n   The LLM is called  Tool results are appended to messages  The final response is returned  Exercise 2: Find Your Session     # Look at your current session file\n   cat   ~\u002F.hermes\u002Fsessions\u002Fsession_20260322_124359_6b542f4e.json   |   head   -50\n  What fields do you see? What does   session_key  mean?  Exercise 3: List Available Tools     # In your Telegram chat, try:\n   \u002Ftools   list\n   \n   # Or in CLI:\n   cd   ~\u002Fgit\u002Fnous-hermes-agent\n   source   venv\u002Fbin\u002Factivate\n   hermes\n   tools   list\n  How many tools are enabled? What toolsets do they belong to?   1.8 Common Questions   Q: Why is   gateway\u002Frun.py  so large (255KB)? \nA: It contains the main loop, all platform adapters, slash command handling, and delivery logic — essentially the entire messaging interface.   Q: What's the difference between CLI and gateway? \nA: CLI (  hermes_cli\u002Fmain.py ) is for interactive terminal use. Gateway (  gateway\u002Frun.py ) runs as a background service polling messaging platforms.   Q: How does the agent \"know\" which tools to offer the LLM? \nA:   model_tools.py  builds tool schemas from the registry based on enabled toolsets in config.   Q: Where do skills come from? \nA: Built-in skills are in   optional-skills\u002F . User-installed skills go in   ~\u002F.hermes\u002Fskills\u002F .   
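 The toolset-filtering answer above can be made concrete with a small sketch. The dictionary shape and function name here are illustrative assumptions; the real logic lives in model_tools.py and tools\u002Fregistry.py.

```python
# Illustrative sketch: the registry maps tool names to info dicts, and only
# schemas whose toolset is enabled in config are offered to the LLM.
# (REGISTRY and schemas_for are made-up names for illustration.)
REGISTRY = {
    "terminal_tool":    {"toolset": "core",    "schema": {"name": "terminal_tool"}},
    "web_search":       {"toolset": "web",     "schema": {"name": "web_search"}},
    "browser_navigate": {"toolset": "browser", "schema": {"name": "browser_navigate"}},
}

def schemas_for(enabled_toolsets: set) -> list:
    """Collect the schemas belonging to enabled toolsets only."""
    return [info["schema"] for info in REGISTRY.values()
            if info["toolset"] in enabled_toolsets]
```

With only "core" and "web" enabled, browser_navigate never appears in the schema list, so the LLM cannot be tempted to call it.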
✅ Module 1 Checklist     Understand the high-level architecture    Know where key components live in the codebase    Trace a message through the system (gateway → agent → tools → response)    Complete all three exercises    Next:    Module 02: Core Agent Loop & Conversation Flow",{"id":36,"path":37,"dir":38,"title":39,"description":9,"keywords":40,"body":52},"content:docs:02-core-loop:core-loop.md","\u002Fdocs\u002F02-core-loop\u002Fcore-loop","02-core-loop","Module 02: Core Agent Loop & Conversation Flow",[24,41,42,43,44,45,46,47,48,49,50,51],"2.1 The Agent Loop: Step by Step","2.2 The Core Loop (Simplified)","2.3 Context Building","2.4 Tool Discovery & Schema Generation","2.5 Function Call Handling","2.6 Context Compression","2.7 Iteration Budget & Loop Control","2.8 Special Agent-Internal Tools","2.9 Hands-On Exercise","2.10 Common Pitfalls","✅ Module 2 Checklist","  Module 02: Core Agent Loop & Conversation Flow  🎯 What You'll Learn   The synchronous agent loop in detail  How context is built for each LLM call  Tool discovery and schema generation  Function call handling and 
result injection  Context compression strategies  Iteration limits and budget management   2.1 The Agent Loop: Step by Step  The entire agent logic lives in   run_agent.py . Let's break it down:  Entry Point:   AIAgent.chat()     # run_agent.py (simplified)\n   class AIAgent:\n       def chat(self, message: str) -> str:\n           \"\"\"Simple interface — returns final response string.\"\"\"\n           result = self.run_conversation(user_message=message)\n           return result[\"final_response\"]\n  The Real Work:   run_conversation()     def run_conversation(self, user_message: str, system_message: str = None,\n                        conversation_history: list = None, task_id: str = None) -> dict:\n       \"\"\"\n       Main agent loop. Returns dict with final_response + messages.\n       \n       This is where the magic happens — synchronous turn-by-turn execution.\n       \"\"\"\n   2.2 The Core Loop (Simplified)     # run_agent.py - Main loop structure\n   api_call_count = 0\n   while api_call_count \u003C self.max_iterations and self.iteration_budget.remaining > 0:\n       # Step 1: Build context for this turn\n       messages = self._build_context(user_message, conversation_history)\n       tool_schemas = self._get_tool_definitions()\n       \n       # Step 2: Call the LLM\n       response = client.chat.completions.create(\n           model=self.model,\n           messages=messages,\n           tools=tool_schemas if tool_schemas else None,\n           temperature=self.temperature,\n       )\n       \n       # Step 3: Check for tool calls\n       if response.tool_calls:\n           # Execute each tool and inject results\n           for tool_call in response.tool_calls:\n               result = handle_function_call(\n                   tool_call.name, \n                   tool_call.args, \n                   task_id=task_id\n               )\n               messages.append({\n                   \"role\": \"tool\",\n                   \"name\": 
tool_call.name,\n                   \"content\": result\n               })\n           api_call_count += 1\n       else:\n           # No tools — LLM is done, return response\n           return {\n               \"final_response\": response.content,\n               \"messages\": messages,\n               \"api_calls\": api_call_count,\n           }\n   2.3 Context Building  System Prompt Assembly     # agent\u002Fprompt_builder.py\n   def build_system_prompt(\n       personality: str,\n       enabled_toolsets: list,\n       skills: list = None,\n       user_profile: dict = None,\n   ) -> str:\n       \"\"\"\n       Constructs the system prompt from multiple sources.\n       \n       Order matters for prompt caching!\n       \"\"\"\n       parts = [\n           CORE_SYSTEM_PROMPT,           # Fixed base\n           f\"Personality: {personality}\",\n           format_tool_descriptions(enabled_toolsets),\n       ]\n       \n       if skills:\n           parts.append(format_skills(skills))  # Loaded as user message, not system!\n       \n       if user_profile:\n           parts.append(f\"User profile: {user_profile}\")\n       \n       return \"\\n\\n---\\n\\n\".join(parts)\n  Conversation History     # run_agent.py - _build_context()\n   def _build_context(self, user_message: str, conversation_history: list = None):\n       messages = [\n           {\"role\": \"system\", \"content\": self.system_prompt},\n       ]\n       \n       if conversation_history:\n           # Compress if needed\n           if len(conversation_history) > self.context_threshold:\n               conversation_history = self._compress_context(conversation_history)\n           messages.extend(conversation_history)\n       \n       messages.append({\"role\": \"user\", \"content\": user_message})\n       return messages\n   2.4 Tool Discovery & Schema Generation  The Registry Pattern  All tools register themselves at import time:     # tools\u002Fregistry.py\n   class ToolRegistry:\n       def 
__init__(self):\n           self._tools = {}  # name -> tool info\n           self._toolsets = defaultdict(list)  # toolset -> [tool names]\n       \n       def register(self, name: str, toolset: str, schema: dict, handler: Callable,\n                    check_fn: Callable = None, requires_env: list = None):\n           \"\"\"\n           Register a tool. Called at import time in each tool file.\n           \"\"\"\n           self._tools[name] = {\n               \"schema\": schema,\n               \"handler\": handler,\n               \"check_fn\": check_fn,  # Optional availability check\n               \"requires_env\": requires_env or [],\n               \"toolset\": toolset,\n           }\n  Tool Discovery in Model Tools     # model_tools.py - _discover_tools()\n   def _discover_tools(self) -> dict:\n       \"\"\"\n       Collect all registered tools, filter by enabled toolsets.\n       \n       Returns: {tool_name: schema_dict}\n       \"\"\"\n       available = {}\n       for name, info in registry._tools.items():\n           # Check if toolset is enabled\n           if info[\"toolset\"] not in self.enabled_toolsets:\n               continue\n           \n           # Check environment requirements\n           if info.get(\"requires_env\"):\n               if not all(os.getenv(env) for env in info[\"requires_env\"]):\n                   continue  # Skip if API keys missing\n           \n           # Check availability function\n           if info.get(\"check_fn\") and not info[\"check_fn\"]():\n               continue\n           \n           available[name] = info[\"schema\"]\n       \n       return available\n  Post-Processing: Cross-Tool References     # model_tools.py - get_tool_definitions()\n   def get_tool_definitions(self) -> list:\n       schemas = self._discover_tools()\n       \n       # Add dynamic cross-references (avoid hardcoded mentions)\n       for name, schema in schemas.items():\n           if \"browser_navigate\" in name and 
\"web_search\" in schemas:\n               # Dynamically add reference if both tools available\n               schema[\"description\"] += \"\\nYou can also use web_search to find URLs.\"\n       \n       return list(schemas.values())\n   Why dynamic?  If a tool mentions another by name but that tool is disabled, the LLM will hallucinate calls to non-existent tools.   2.5 Function Call Handling  Dispatch Logic     # model_tools.py - handle_function_call()\n   def handle_function_call(tool_name: str, args_json: str, task_id: str = None) -> str:\n       \"\"\"\n       Execute a tool call and return JSON result.\n       \n       All handlers MUST return JSON strings!\n       \"\"\"\n       # Parse arguments\n       try:\n           args = json.loads(args_json)\n       except json.JSONDecodeError:\n           return json.dumps({\"error\": f\"Invalid JSON: {args_json}\"})\n       \n       # Find handler\n       if tool_name not in registry._tools:\n           return json.dumps({\"error\": f\"Unknown tool: {tool_name}\"})\n       \n       tool_info = registry._tools[tool_name]\n       handler = tool_info[\"handler\"]\n       \n       # Execute with task_id for background process tracking\n       try:\n           result = handler(args, task_id=task_id)\n           return result  # Already a JSON string from the handler\n       except Exception as e:\n           return json.dumps({\"error\": str(e), \"tool\": tool_name})\n  Example: Terminal Tool     # tools\u002Fterminal_tool.py\n   def terminal_tool(command: str, background: bool = False, \n                     timeout: int = 180, task_id: str = None) -> str:\n       \"\"\"\n       Execute a shell command.\n       \n       Returns JSON with output, exit_code, and optionally process_id.\n       \"\"\"\n       if background:\n           # Start background process\n           proc = subprocess.Popen(\n               command, shell=True,\n               stdout=subprocess.PIPE,\n               stderr=subprocess.STDOUT,\n       
        text=True\n           )\n           \n           # Register with process registry for tracking\n           process_registry.register(task_id, proc)\n           \n           return json.dumps({\n               \"status\": \"running\",\n               \"process_id\": task_id,\n               \"message\": f\"Started background process: {command[:50]}...\"\n           })\n       else:\n           # Foreground - wait for completion\n           result = subprocess.run(\n               command, shell=True,\n               capture_output=True,\n               text=True,\n               timeout=timeout\n           )\n           \n           return json.dumps({\n               \"output\": result.stdout,\n               \"error\": result.stderr,\n               \"exit_code\": result.returncode\n           })\n   2.6 Context Compression  Why Compress?  LLMs have context limits (e.g., 128K, 200K tokens). As conversations grow, we need to:   Remove redundant information  Summarize older turns  Keep recent context intact  Automatic Compression     # agent\u002Fcontext_compressor.py\n   def compress_context(messages: list, max_tokens: int) -> list:\n       \"\"\"\n       Compress conversation history when it exceeds token limit.\n       \n       Strategy:\n       1. Keep system prompt + recent N turns intact\n       2. Summarize older turns into a single block\n       3. Preserve tool results (they contain important state)\n       \"\"\"\n       if count_tokens(messages) \u003C= max_tokens:\n           return messages\n       \n       # Separate recent and old\n       recent = messages[-10:]  # Keep last 10 turns\n       old = messages[:-10]\n       \n       # Summarize old conversation\n       summary = summarize_conversation(old)\n       \n       return [\n           {\"role\": \"system\", \"content\": \"[Context compressed. 
Previous conversation summarized below.]\\n\\n\" + summary},\n           *recent,\n       ]\n  Manual Compression     # User can trigger with \u002Fcompress command\n   @slash_command(\"compress\")\n   def compress(session: Session):\n       \"\"\"Manually compress conversation context.\"\"\"\n       compressed = compress_context(session.history, target_tokens=50000)\n       session.history = compressed\n       return \"Context compressed. Token count reduced by X%.\"\n   2.7 Iteration Budget & Loop Control  Max Iterations     # run_agent.py - __init__\n   def __init__(self, max_iterations: int = 90, ...):\n       self.max_iterations = max_iterations\n       self.iteration_budget = IterationBudget(max_iterations)\n   What counts as an iteration?  Each LLM API call that results in tool execution. If the LLM responds without tools, that's the final turn (doesn't count against budget).  Early Termination Conditions     while api_call_count \u003C self.max_iterations and self.iteration_budget.remaining > 0:\n       # ... LLM call ...\n       \n       if response.tool_calls:\n           # Execute tools\n           api_call_count += 1\n       else:\n           break  # Done!\n  Manual Stop     # User can interrupt with Ctrl+C (CLI) or \u002Fstop (gateway)\n   def handle_interrupt(session: Session):\n       \"\"\"Kill all running background processes and stop the loop.\"\"\"\n       process_registry.kill_all()\n       return \"Interrupted. 
Background processes terminated.\"\n   2.8 Special Agent-Internal Tools  Some tools are intercepted before   handle_function_call() :     # run_agent.py - _intercept_internal_tools()\n   def _intercept_internal_tools(self, tool_name: str, args: dict) -> Optional[str]:\n       \"\"\"\n       Handle special tools that don't go through the normal registry.\n       \n       These are agent-internal operations (todo, memory).\n       \"\"\"\n       if tool_name == \"todo\":\n           return self.todo_tool.execute(args)\n       elif tool_name == \"memory\":\n           return self.memory_tool.execute(args)\n       return None  # Pass through to normal registry\n  Todo Tool Example     # tools\u002Ftodo_tool.py\n   def todo_tool(action: str, content: str = None, id: str = None) -> str:\n       \"\"\"\n       Manage a task list within the conversation.\n       \n       Actions: create, update, complete, cancel, list\n       \"\"\"\n       if action == \"create\":\n           todo_list.append({\"id\": id or gen_id(), \"content\": content, \"status\": \"pending\"})\n       elif action == \"complete\":\n           for t in todo_list:\n               if t[\"id\"] == id:\n                   t[\"status\"] = \"completed\"\n       \n       return json.dumps({\"todos\": todo_list})\n   2.9 Hands-On Exercise  Exercise 1: Read the Agent Loop     cd   ~\u002Fgit\u002Fnous-hermes-agent\n  Open   run_agent.py  and find:   The   AIAgent.run_conversation()  method (around line 150)  The main while loop (search for   while api_call_count )  Where tool results are appended to messages  Where the final response is returned   Questions:   What's the default   max_iterations ?  How does it handle errors during tool execution?  Where does context compression happen?  
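 While working through the loop code, it can help to keep a toy model of the budget in mind. This is a minimal sketch consistent with the loop condition shown in 2.7 (an object built as IterationBudget(max_iterations) exposing a remaining count); the real class in run_agent.py may track more state, so treat the internals here as assumptions:

```python
class IterationBudget:
    # Toy model of the budget consulted by the main loop's
    # `iteration_budget.remaining > 0` check. Internals are an assumption;
    # only the constructor shape and `remaining` come from the loop code.
    def __init__(self, max_iterations: int = 90):
        self.max_iterations = max_iterations
        self.used = 0

    @property
    def remaining(self) -> int:
        return self.max_iterations - self.used

    def consume(self) -> None:
        # Called once per LLM API call that triggers tool execution.
        self.used += 1
```

Note the loop checks both api_call_count against max_iterations and the budget's remaining count, so either counter can end the run.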
 Exercise 2: Trace a Tool Call   Open   model_tools.py  Find the   handle_function_call()  function  Follow the flow:\n   Parse JSON args  Look up handler in registry  Execute with task_id  Return result   Questions:   What happens if the tool name doesn't exist?  Why must handlers return JSON strings?  How is   task_id  used?  Exercise 3: Inspect Tool Registry     # Activate the venv, then start a Python shell\n   cd ~\u002Fgit\u002Fnous-hermes-agent\n   source venv\u002Fbin\u002Factivate\n   python\n   \n   >>> from tools.registry import registry\n   >>> len(registry._tools)  # How many total tools?\n   >>> list(registry._tools.keys())[:10]  # First 10 tool names\n   >>> registry._tools[\"terminal_tool\"]  # Inspect one tool's info\n  What fields do you see in the tool info dict? What's the schema structure?  Exercise 4: Test Context Building     # Start Hermes CLI\n   hermes\n   \n   # Send a simple message, then watch what happens\n   # Try:\n   \u002Fverbose   verbose      # See detailed tool progress\n   \n   # Now send: \"What's 2+2?\"\n   # Watch the tool calls and responses in real-time\n   Observe:   Does it use any tools for this simple question?  How many API calls does it take to respond?  What does the verbose output show?   2.10 Common Pitfalls  ❌ Don't Modify Context Mid-Conversation     # BAD: This breaks prompt caching!\n   def bad_approach():\n       messages = build_context()\n       # ... later ...\n       messages[0][\"content\"] = \"modified\"  # Cache invalidates!\n   Why:  The LLM's cached context becomes stale. Forces full re-computation.  
 ✅ Do Use Immutable Updates     # GOOD: Create new message list\n   def good_approach():\n       messages = [\n           {\"role\": \"system\", \"content\": updated_prompt},\n           *new_history,\n       ]\n  ❌ Don't Hardcode Cross-Tool References     # BAD: In tool schema description\n   schema[\"description\"] = \"Use browser_navigate to open URLs.\"\n   # What if browser_tool is disabled?\n  ✅ Do Add Dynamic References     # GOOD: In get_tool_definitions()\n   if \"browser_navigate\" in schemas and \"web_search\" in schemas:\n       schema[\"description\"] += \"\\nYou can also use web_search to find URLs.\"\n   ✅ Module 2 Checklist     Understand the synchronous agent loop structure    Trace how context is built for each LLM call    Explain tool discovery and schema generation    Follow a function call from LLM response to result injection    Understand context compression strategies    Complete all four exercises    Next:    Module 03: Tools System & Toolsets",{"id":54,"path":55,"dir":56,"title":57,"description":9,"keywords":58,"body":68},"content:docs:03-tools-system:tools-system.md","\u002Fdocs\u002F03-tools-system\u002Ftools-system","03-tools-system","Module 03: Tools System & Toolsets",[24,59,60,61,62,63,64,65,66,67],"3.1 Tool Architecture Overview","3.2 The Registry Pattern","3.3 Toolsets System","3.4 Core Tool Categories","3.5 Background Process Management","3.6 MCP Integration (tools\u002Fmcp_tool.py)","3.7 Hands-On Exercise","3.8 Common Pitfalls","✅ Module 3 Checklist","  Module 03: Tools System & Toolsets  🎯 What You'll Learn   How tools are registered and discovered  The toolset system for organizing capabilities  Implementing a new tool from scratch  Environment variable requirements  Background process management  MCP integration basics   3.1 Tool Architecture Overview  Three-Layer Design   ┌─────────────────────────────────────┐\n│         TOOLSETS (toolsets.py)      │\n│   - Group tools by capability       │\n│   - User enables\u002Fdisables sets      │\n└──────────────┬──────────────────────┘\n               │\n┌──────────────▼──────────────────────┐\n│        REGISTRY (registry.py)       │\n│   - Central dispatch                │\n│   - Schema collection               │\n│   - Availability checking           │\n└──────────────┬──────────────────────┘\n               │\n┌──────────────▼──────────────────────┐\n│      TOOL IMPLEMENTATIONS           │\n│   tools\u002Fterminal_tool.py            │\n│   tools\u002Ffile_tools.py               │\n│   tools\u002Fweb_tools.py                │\n   ... 
(20+ files)                   │\n└─────────────────────────────────────┘\n   3.2 The Registry Pattern  Registration at Import Time     # tools\u002Fregistry.py\n   from collections import defaultdict\n   from typing import Callable\n   \n   class ToolRegistry:\n       def __init__(self):\n           self._tools = {}  # name -> tool info dict\n           self._toolsets = defaultdict(list)  # toolset -> [names]\n       \n       def register(self, name: str, toolset: str, schema: dict,\n                    handler: Callable, check_fn: Callable = None,\n                    requires_env: list = None):\n           \"\"\"\n           Register a tool. Called at module import time.\n           \n           Args:\n               name: Unique tool identifier (e.g., \"terminal_tool\")\n               toolset: Which set it belongs to (e.g., \"core\", \"web\")\n               schema: OpenAI-compatible function schema\n               handler: Function that executes the tool\n               check_fn: Optional availability check (returns bool)\n               requires_env: List of required env vars\n           \"\"\"\n           self._tools[name] = {\n               \"schema\": schema,\n               \"handler\": handler,\n               \"check_fn\": check_fn,\n               \"requires_env\": requires_env or [],\n               \"toolset\": toolset,\n           }\n           \n           self._toolsets[toolset].append(name)\n   \n   # Global instance\n   registry = ToolRegistry()\n  Example: Registering a Tool     # tools\u002Fexample_tool.py\n   import json, os\n   from tools.registry import registry\n   \n   def check_requirements() -> bool:\n       \"\"\"Return True if tool can run (has API keys, etc.).\"\"\"\n       return bool(os.getenv(\"EXAMPLE_API_KEY\"))\n   \n   def example_tool(query: str, limit: int = 10, task_id: str = None) -> str:\n       \"\"\"\n       Example tool that does something.\n       \n       Args:\n           query: Search query\n           limit: Max results to return\n           task_id: Optional ID for background tracking\n  
     \n       Returns:\n           JSON string with result\n       \"\"\"\n       # Do the work...\n       results = {\"query\": query, \"count\": limit}\n       \n       return json.dumps({\"success\": True, \"data\": results})\n   \n   # Register at import time\n   registry.register(\n       name=\"example_tool\",\n       toolset=\"example\",\n       schema={\n           \"name\": \"example_tool\",\n           \"description\": \"Does something useful with a query.\",\n           \"parameters\": {\n               \"type\": \"object\",\n               \"properties\": {\n                   \"query\": {\"type\": \"string\", \"description\": \"Search query\"},\n                   \"limit\": {\"type\": \"integer\", \"default\": 10, \"description\": \"Max results\"},\n               },\n               \"required\": [\"query\"],\n           },\n       },\n       handler=lambda args, **kw: example_tool(\n           query=args.get(\"query\", \"\"),\n           limit=args.get(\"limit\", 10),\n           task_id=kw.get(\"task_id\")\n       ),\n       check_fn=check_requirements,\n       requires_env=[\"EXAMPLE_API_KEY\"],\n   )\n   Key Points:   Registration happens   at import time  (when Python loads the module)  Handler must return a   JSON string   task_id  is passed for background process tracking   check_fn  and   requires_env  control availability   3.3 Toolsets System  Defining Toolsets     # toolsets.py\n   _HERMES_CORE_TOOLS = [\n       \"terminal_tool\",\n       \"file_read\", \"file_write\", \"file_search\", \"file_patch\",\n       \"todo\",\n   ]\n   \n   _TOOLSETS = {\n       \"core\": {\n           \"description\": \"Core tools for terminal, files, and task management\",\n           \"tools\": _HERMES_CORE_TOOLS,\n           \"enabled_by_default\": True,\n       },\n       \"web\": {\n           \"description\": \"Web search and content extraction\",\n           \"tools\": [\"web_search\", \"web_extract\"],\n           \"enabled_by_default\": False,  # 
Requires API key\n       },\n       \"browser\": {\n           \"description\": \"Browser automation via Browserbase\",\n           \"tools\": [\"browser_navigate\", \"browser_click\", \"browser_type\"],\n           \"enabled_by_default\": False,\n       },\n   }\n  Enabling\u002FDisabling Toolsets     # User config in ~\u002F.hermes\u002Fconfig.yaml\n   enabled_toolsets:\n     - core\n     - web\n     \n   disabled_toolsets: []  # Override defaults\n  Runtime Filtering     # model_tools.py - _discover_tools()\n   def _discover_tools(self) -> dict:\n       \"\"\"Collect available tools based on enabled toolsets.\"\"\"\n       available = {}\n       \n       for name, info in registry._tools.items():\n           # Filter by enabled toolset\n           if info[\"toolset\"] not in self.enabled_toolsets:\n               continue\n           \n           # Check environment requirements\n           if info.get(\"requires_env\"):\n               missing = [env for env in info[\"requires_env\"] if not os.getenv(env)]\n               if missing:\n                   continue  # Skip if API keys missing\n           \n           # Run availability check\n           if info.get(\"check_fn\") and not info[\"check_fn\"]():\n               continue\n           \n           available[name] = info[\"schema\"]\n       \n       return available\n   3.4 Core Tool Categories  Terminal Tools (  tools\u002Fterminal_tool.py )     def terminal_tool(command: str, background: bool = False,\n                     timeout: int = 180, workdir: str = None,\n                     task_id: str = None) -> str:\n       \"\"\"\n       Execute shell commands.\n       \n       Modes:\n       - Foreground (default): Wait for completion\n       - Background: Return immediately with process ID\n       \"\"\"\n       if background:\n           # Start in background\n           proc = subprocess.Popen(\n               command, shell=True,\n               stdout=subprocess.PIPE,\n               
stderr=subprocess.STDOUT,\n               text=True,\n               cwd=workdir\n           )\n           \n           # Register for tracking\n           process_registry.register(task_id, proc)\n           \n           return json.dumps({\n               \"status\": \"running\",\n               \"process_id\": task_id,\n               \"command\": command[:100]\n           })\n       else:\n           # Foreground - wait for result\n           result = subprocess.run(\n               command, shell=True,\n               capture_output=True,\n               text=True,\n               timeout=timeout,\n               cwd=workdir\n           )\n           \n           return json.dumps({\n               \"output\": result.stdout,\n               \"error\": result.stderr,\n               \"exit_code\": result.returncode\n           })\n  File Tools (  tools\u002Ffile_tools.py )     def file_read(path: str, offset: int = 1, limit: int = 500) -> str:\n       \"\"\"Read a text file with line numbers and pagination.\"\"\"\n       try:\n           with open(path, 'r') as f:\n               lines = f.readlines()\n           \n           # Apply pagination\n           start = offset - 1\n           end = min(start + limit, len(lines))\n           content = ''.join(lines[start:end])\n           \n           return json.dumps({\n               \"content\": content,\n               \"total_lines\": len(lines),\n               \"offset\": offset,\n               \"limit\": limit,\n           })\n       except Exception as e:\n           return json.dumps({\"error\": str(e)})\n   \n   \n   def file_patch(path: str, old_string: str, new_string: str,\n                  replace_all: bool = False) -> str:\n       \"\"\"Find-and-replace edit in a file.\"\"\"\n       try:\n           with open(path, 'r') as f:\n               content = f.read()\n           \n           if old_string not in content:\n               return json.dumps({\n                   \"success\": False,\n          
         \"error\": \"String not found\",\n                   \"path\": path\n               })\n           \n           new_content = content.replace(old_string, new_string)\n           \n           with open(path, 'w') as f:\n               f.write(new_content)\n           \n           return json.dumps({\n               \"success\": True,\n               \"replacements\": content.count(old_string) if replace_all else 1\n           })\n       except Exception as e:\n           return json.dumps({\"error\": str(e)})\n  Web Tools (  tools\u002Fweb_tools.py )     def web_search(query: str, num_results: int = 10) -> str:\n       \"\"\"\n       Search the web using Parallel search or Firecrawl.\n       \n       Requires: PARALLEL_API_KEY or FIRECRAWL_API_KEY\n       \"\"\"\n       if os.getenv(\"PARALLEL_API_KEY\"):\n           # Use Parallel search\n           results = parallel_search(query, limit=num_results)\n       elif os.getenv(\"FIRECRAWL_API_KEY\"):\n           # Use Firecrawl\n           results = firecrawl_search(query, limit=num_results)\n       else:\n           return json.dumps({\"error\": \"No web search API key configured\"})\n       \n       return json.dumps({\"results\": results})\n   3.5 Background Process Management  Process Registry     # tools\u002Fprocess_registry.py\n   class ProcessRegistry:\n       def __init__(self):\n           self._processes = {}  # task_id -> proc info\n       \n       def register(self, task_id: str, proc: subprocess.Popen):\n           \"\"\"Track a background process.\"\"\"\n           self._processes[task_id] = {\n               \"proc\": proc,\n               \"started_at\": datetime.now(),\n               \"status\": \"running\",\n           }\n       \n       def get_status(self, task_id: str) -> dict:\n           \"\"\"Check if process is still running.\"\"\"\n           if task_id not in self._processes:\n               return {\"error\": \"Unknown process ID\"}\n           \n           proc_info = 
self._processes[task_id]\n           proc = proc_info[\"proc\"]\n           \n           if proc.poll() is None:\n               status = \"running\"\n           else:\n               status = \"completed\"\n               proc_info[\"status\"] = \"completed\"\n           \n           return {\n               \"task_id\": task_id,\n               \"status\": status,\n               \"started_at\": str(proc_info[\"started_at\"]),\n           }\n       \n       def kill(self, task_id: str) -> dict:\n           \"\"\"Terminate a background process.\"\"\"\n           if task_id not in self._processes:\n               return {\"error\": \"Unknown process ID\"}\n           \n           proc = self._processes[task_id][\"proc\"]\n           proc.terminate()\n           \n           return {\"success\": True, \"task_id\": task_id}\n       \n       def kill_all(self):\n           \"\"\"Kill all running background processes.\"\"\"\n           for task_id in list(self._processes.keys()):\n               self.kill(task_id)\n  Using Background Processes     # In a tool call\n   result = terminal_tool(\n       command=\"python long_script.py\",\n       background=True,\n       task_id=\"my-task-123\"\n   )\n   \n   # Later, check status\n   status = process_registry.get_status(\"my-task-123\")\n   \n   # Or kill it\n   process_registry.kill(\"my-task-123\")\n   3.6 MCP Integration (  tools\u002Fmcp_tool.py )  What is MCP?  MCP (Model Context Protocol) is a standard for connecting AI agents to external tools and data sources.  
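To make the idea concrete before diving into the client code, here is a minimal, hypothetical sketch of how tools advertised by an MCP server could be bridged into the registry pattern from section 3.2. The descriptor shape, the `mcp_<server>_<tool>` namespacing, and the helper names (`bridge_mcp_tools`, `fake_register`, `fake_call_tool`) are illustrative assumptions, not Hermes's actual API:

```python
import json

# Hypothetical shape of what an MCP server might advertise;
# the real protocol returns richer tool descriptors.
advertised = [
    {"name": "read_file",
     "description": "Read a file from the sandbox",
     "parameters": {"type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"]}},
]

def bridge_mcp_tools(server_name, tools, call_tool, register):
    """Wrap each MCP-advertised tool as a registry entry, namespaced by server."""
    for tool in tools:
        qualified = f"mcp_{server_name}_{tool['name']}"
        schema = {**tool, "name": qualified}

        def handler(args, _name=tool["name"], **kw):
            # Route back to the MCP server under the original tool name.
            return json.dumps({"result": call_tool(server_name, _name, args)})

        register(name=qualified, toolset="mcp", schema=schema, handler=handler)

# Demo with stubs standing in for the real registry and MCP client.
registered = {}

def fake_register(name, toolset, schema, handler, **kw):
    registered[name] = {"toolset": toolset, "schema": schema, "handler": handler}

def fake_call_tool(server, name, args):
    return f"{server}/{name}({args.get('path')})"

bridge_mcp_tools("filesystem", advertised, fake_call_tool, fake_register)
print(list(registered))   # ['mcp_filesystem_read_file']
out = registered["mcp_filesystem_read_file"]["handler"]({"path": "/tmp/x"})
print(out)
```

Namespacing by server keeps two MCP servers that both export a `read_file` tool from colliding in the flat registry.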
MCP Client Implementation     # tools\u002Fmcp_tool.py (~1050 lines)\n   import json, os\n   import yaml\n   \n   class MCPClient:\n       def __init__(self, config_path: str = \"~\u002F.hermes\u002Fconfig.yaml\"):\n           self.servers = self._load_servers(config_path)\n           self.clients = {}  # server_name -> MCP client\n       \n       def _load_servers(self, config_path: str) -> dict:\n           \"\"\"Load MCP server configs from YAML.\"\"\"\n           with open(os.path.expanduser(config_path)) as f:\n               config = yaml.safe_load(f)\n           \n           return config.get(\"mcp\", {}).get(\"servers\", {})\n       \n       def connect(self, server_name: str):\n           \"\"\"Connect to an MCP server.\"\"\"\n           if server_name not in self.servers:\n               raise ValueError(f\"Unknown server: {server_name}\")\n           \n           config = self.servers[server_name]\n           client = MCPClientFor(config)  # Platform-specific\n           client.connect()\n           \n           self.clients[server_name] = client\n       \n       def list_tools(self, server_name: str) -> list:\n           \"\"\"List available tools from an MCP server.\"\"\"\n           if server_name not in self.clients:\n               self.connect(server_name)\n           \n           return self.clients[server_name].list_tools()\n       \n       def call_tool(self, server_name: str, tool_name: str, args: dict) -> str:\n           \"\"\"Call a tool on an MCP server.\"\"\"\n           if server_name not in self.clients:\n               self.connect(server_name)\n           \n           result = self.clients[server_name].call_tool(tool_name, args)\n           return json.dumps({\"result\": result})\n  Configuring MCP Servers     # ~\u002F.hermes\u002Fconfig.yaml\n   mcp:\n     servers:\n       filesystem:\n         command: \"npx -y @modelcontextprotocol\u002Fserver-filesystem\"\n         args: [\"\u002Fhome\u002Fbrian\"]\n       github:\n         command: \"npx -y 
@modelcontextprotocol\u002Fserver-github\"\n         env:\n           GITHUB_TOKEN: \"${GITHUB_TOKEN}\"\n   3.7 Hands-On Exercise  Exercise 1: Inspect the Registry     cd ~\u002Fgit\u002Fnous-hermes-agent\n   source venv\u002Fbin\u002Factivate\n   python\n   \n   >>> from tools.registry import registry\n   >>>\n   >>> # How many total tools?\n   >>> len(registry._tools)\n   >>>\n   >>> # What toolsets exist?\n   >>> list(registry._toolsets.keys())\n   >>>\n   >>> # Tools in the 'core' set\n   >>> registry._toolsets['core']\n   >>>\n   >>> # Inspect a specific tool\n   >>> import pprint\n   >>> pprint.pprint(registry._tools['terminal_tool'])\n   Questions:   How many tools are registered?  What's the schema structure for   terminal_tool ?  Does it have environment requirements?  Exercise 2: Trace Tool Discovery     # Create a minimal test\n   from model_tools import ModelTools\n   \n   agent = ModelTools(\n       enabled_toolsets=[\"core\", \"web\"],\n       disabled_toolsets=[]\n   )\n   \n   tools = agent._discover_tools()\n   print(f\"Available tools: {len(tools)}\")\n   for name in sorted(tools.keys()):\n       print(f\"  - {name}\")\n   Questions:   How many tools are available with just \"core\" enabled?  What changes when you add \"web\"? 
(Requires API key)  Exercise 3: Write a Simple Tool  Create   ~\u002Fgit\u002Fnous-hermes-agent\u002Ftools\u002Fhello_tool.py :     import json\n   from tools.registry import registry\n   \n   def hello_tool(name: str, greeting: str = \"Hello\") -> str:\n       \"\"\"Say hello to someone.\"\"\"\n       message = f\"{greeting}, {name}!\"\n       return json.dumps({\"message\": message})\n   \n   registry.register(\n       name=\"hello_tool\",\n       toolset=\"core\",  # Always available\n       schema={\n           \"name\": \"hello_tool\",\n           \"description\": \"Say hello to someone by name.\",\n           \"parameters\": {\n               \"type\": \"object\",\n               \"properties\": {\n                   \"name\": {\"type\": \"string\", \"description\": \"Person's name\"},\n                   \"greeting\": {\"type\": \"string\", \"default\": \"Hello\", \n                               \"description\": \"Greeting word\"},\n               },\n               \"required\": [\"name\"],\n           },\n       },\n       handler=lambda args, **kw: hello_tool(\n           name=args.get(\"name\", \"World\"),\n           greeting=args.get(\"greeting\", \"Hello\")\n       ),\n   )\n  Now test it:     # In Python shell\n   from tools.registry import registry\n   result = registry._tools['hello_tool']['handler']({\"name\": \"Brian\"})\n   print(result)\n  Exercise 4: Test Background Processes     # Start Hermes CLI\n   hermes\n   \n   # Run a background command\n   \u002Fterminal   sleep   30   &&   echo   \"Done!\"   --background\n   \n   # Check status (if supported in your version)\n   \u002Fstatus\n   \n   # Or kill it\n   \u002Fstop\n   Observe:   What response do you get immediately?  How can you track the process?   
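If you want to observe the same lifecycle outside of Hermes, the following self-contained sketch mirrors the ProcessRegistry pattern from section 3.5. The helper names (`start_background`, `get_status`) are illustrative, not the real Hermes API:

```python
import json
import subprocess

# Minimal stand-in for the ProcessRegistry pattern, so the status
# transition Exercise 4 asks about can be observed directly.
processes = {}

def start_background(task_id: str, command: str) -> str:
    """Launch a shell command and track it by task_id."""
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    processes[task_id] = proc
    return json.dumps({"status": "running", "process_id": task_id})

def get_status(task_id: str) -> str:
    """poll() returns None while the process is still alive."""
    return "running" if processes[task_id].poll() is None else "completed"

print(start_background("demo-1", "sleep 1"))
first = get_status("demo-1")   # checked immediately after launch
processes["demo-1"].wait()     # block until the command exits
second = get_status("demo-1")
print(first, second)
```

The immediate JSON response with a `process_id` is the key design point: the caller gets control back right away and polls for completion instead of blocking.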
3.8 Common Pitfalls  ❌ Don't Forget JSON Return Values     # BAD: Returns plain string\n   def bad_tool():\n       return \"Success!\"  # LLM expects JSON!\n   \n   # GOOD: Always return JSON\n   def good_tool():\n       return json.dumps({\"success\": True, \"message\": \"Success!\"})\n  ❌ Don't Hardcode Paths     # BAD\n   def read_config():\n       with open(\"\u002Fhome\u002Fbrian\u002F.hermes\u002Fconfig.yaml\") as f:  # Won't work for others!\n           return yaml.safe_load(f)\n   \n   # GOOD\n   def read_config():\n       config_path = os.getenv(\"HERMES_HOME\", \"~\u002F.hermes\") + \"\u002Fconfig.yaml\"\n       with open(os.path.expanduser(config_path)) as f:\n           return yaml.safe_load(f)\n  ❌ Don't Block on Long Operations Without Background Option     # BAD: No way to interrupt\n   def long_process():\n       for i in range(1000000):\n           time.sleep(1)  # Blocks forever!\n   \n   # GOOD: Support background mode\n   def process(data, background=False):\n       if background:\n           return start_background_task(data)\n       else:\n           return run_synchronously(data)\n   ✅ Module 3 Checklist     Understand the registry pattern and registration at import time    Explain how toolsets filter available tools    Trace a tool call from LLM to execution to result    Implement a simple tool from scratch    Understand background process management    Complete all four exercises    Next:    Module 04: Gateway & Messaging Platforms  ",
{"id":70,"path":71,"dir":72,"title":73,"description":9,"keywords":74,"body":83},"content:docs:04-gateway-platforms:gateway-platforms.md","\u002Fdocs\u002F04-gateway-platforms\u002Fgateway-platforms","04-gateway-platforms","Module 04: Gateway & Messaging Platforms",[24,75,76,77,78,79,80,81,82],"4.1 Gateway Architecture Overview","4.2 Platform Adapters","4.3 Session Management","4.4 Slash Command Routing","4.5 Background Process Notifications","4.6 Hands-On Exercise","4.7 Common Pitfalls","✅ Module 4 Checklist","  Module 04: Gateway & Messaging Platforms  🎯 What You'll Learn   How the gateway layer works  Platform adapters (Telegram, Discord, Slack, etc.)  
Session management across platforms  Slash command routing and autocomplete  Background process notifications   4.1 Gateway Architecture Overview  The Gateway Pattern     # gateway\u002Frun.py - Main loop structure\n   while True:\n       # Poll all connected platforms for new messages\n       for platform in self.platforms.values():\n           messages = platform.poll()\n           \n           for msg in messages:\n               # Resolve which session this belongs to\n               session_key = self._resolve_session(msg)\n               session = self.session_store.get(session_key)\n               \n               # Dispatch to agent\n               response = self.agent.chat(msg.text, session.context)\n               \n               # Send back to platform\n               platform.send(msg.chat_id, response)\n               \n               # Persist conversation\n               session.add_message(msg.role, msg.text)\n  Key Components     Component  File  Responsibility     Main Loop   gateway\u002Frun.py  (255KB)  Polling, dispatch, delivery    Session Store   gateway\u002Fsession.py  Conversation persistence    Platform Adapters   gateway\u002Fplatforms\u002F  Telegram, Discord, Slack, etc.    
Config   gateway\u002Fconfig.py  Gateway settings   4.2 Platform Adapters  Telegram Adapter (  gateway\u002Fplatforms\u002Ftelegram.py )     class TelegramPlatform:\n       def __init__(self, bot_token: str):\n           self.bot = telegram.Bot(token=bot_token)\n           self.offset = 0  # For polling updates\n       \n       def poll(self) -> list:\n           \"\"\"\n           Fetch new messages from Telegram.\n           \n           Returns list of Message objects.\n           \"\"\"\n           updates = self.bot.get_updates(offset=self.offset)\n           \n           messages = []\n           for update in updates:\n               # Advance the offset for every update, even non-message\n               # ones, or polling would re-fetch them forever\n               self.offset = update.update_id + 1\n               \n               if update.message:\n                   msg = Message(\n                       chat_id=update.message.chat.id,\n                       text=update.message.text,\n                       role=\"user\",\n                       platform=\"telegram\",\n                   )\n                   messages.append(msg)\n           \n           return messages\n       \n       def send(self, chat_id: str, text: str):\n           \"\"\"Send a message to a Telegram chat.\"\"\"\n           # Split long messages (Telegram has 4096 char limit)\n           for chunk in self._chunk_text(text):\n               self.bot.send_message(chat_id=chat_id, text=chunk)\n       \n       def _chunk_text(self, text: str) -> list:\n           \"\"\"Split text into chunks under 4096 chars.\"\"\"\n           chunks = []\n           while len(text) > 4096:\n               # Find last newline before limit\n               split_at = text.rfind('\\n', 0, 4096)\n               if split_at == -1:\n                   split_at = 4096\n               chunks.append(text[:split_at])\n               text = text[split_at:].lstrip()\n           \n           if text:\n               chunks.append(text)\n           \n           return chunks\n  Discord Adapter (  gateway\u002Fplatforms\u002Fdiscord.py )   
  class DiscordPlatform:\n       def __init__(self, bot_token: str):\n           self.client = discord.Client(intents=discord.Intents.default())\n           self.message_queue = asyncio.Queue()\n           \n           @self.client.event\n           async def on_message(message):\n               if message.author.bot:\n                   return\n               \n               # Queue the message for processing\n               await self.message_queue.put(Message(\n                   chat_id=message.channel.id,\n                   text=message.content,\n                   role=\"user\",\n                   platform=\"discord\",\n               ))\n       \n       def poll(self) -> list:\n           \"\"\"Fetch messages from queue.\"\"\"\n           messages = []\n           while not self.message_queue.empty():\n               messages.append(self.message_queue.get_nowait())\n           return messages\n       \n       async def send(self, channel_id: str, text: str):\n           \"\"\"Send a message to a Discord channel.\"\"\"\n           channel = await self.client.fetch_channel(channel_id)\n           await channel.send(text)\n  Slack Adapter (  gateway\u002Fplatforms\u002Fslack.py )     class SlackPlatform:\n       def __init__(self, bot_token: str):\n           self.client = slack_sdk.WebClient(token=bot_token)\n           self.rtm_client = slack_sdk.RTMClient(token=bot_token)\n           \n           @self.rtm_client.on(\"message\")\n           async def on_message(event):\n               # Process Slack message\n               pass\n       \n       def poll(self) -> list:\n           \"\"\"Fetch messages from RTM or Events API.\"\"\"\n           # Implementation depends on Slack's API mode\n           pass\n   4.3 Session Management  Session Keys  Each conversation is identified by a unique key:     # gateway\u002Fsession.py\n   def make_session_key(platform: str, chat_id: str, thread_id: str = None) -> str:\n       \"\"\"\n       Generate a unique session 
key.\n       \n       Examples:\n           telegram:dm:5490634439          # DM with user\n           telegram:group:-1001234567890   # Group chat\n           discord:#engineering            # Discord channel\n           cli                             # CLI (single session)\n       \"\"\"\n       if thread_id:\n           return f\"{platform}:{chat_id}:{thread_id}\"\n       else:\n           return f\"{platform}:{chat_id}\"\n  Session Store     class SessionStore:\n       def __init__(self, db_path: str = \"~\u002F.hermes\u002Fstate.db\"):\n           self.conn = sqlite3.connect(os.path.expanduser(db_path))\n           self._setup_tables()\n       \n       def _setup_tables(self):\n           \"\"\"Create SQLite tables with FTS5 for search.\"\"\"\n           cursor = self.conn.cursor()\n           \n           # Main sessions table\n           cursor.execute(\"\"\"\n               CREATE TABLE IF NOT EXISTS sessions (\n                   session_key TEXT PRIMARY KEY,\n                   session_id TEXT,\n                   created_at TIMESTAMP,\n                   updated_at TIMESTAMP,\n                   platform TEXT,\n                   chat_id TEXT,\n                   display_name TEXT\n               )\n           \"\"\")\n           \n           # Messages table\n           cursor.execute(\"\"\"\n               CREATE TABLE IF NOT EXISTS messages (\n                   id INTEGER PRIMARY KEY AUTOINCREMENT,\n                   session_key TEXT,\n                   role TEXT,  -- 'user' or 'assistant'\n                   content TEXT,\n                   timestamp TIMESTAMP,\n                   FOREIGN KEY (session_key) REFERENCES sessions(session_key)\n               )\n           \"\"\")\n           \n           # FTS5 for full-text search\n           cursor.execute(\"\"\"\n               CREATE VIRTUAL TABLE IF NOT EXISTS messages_fts USING fts5(\n                   content,\n                   content='messages',\n                   content_rowid='id'\n               )\n      
     \"\"\")\n           \n           self.conn.commit()\n       \n       def get(self, session_key: str) -> Session:\n           \"\"\"Get or create a session.\"\"\"\n           cursor = self.conn.cursor()\n           cursor.execute(\n               \"SELECT * FROM sessions WHERE session_key = ?\",\n               (session_key,)\n           )\n           row = cursor.fetchone()\n           \n           if row:\n               return Session.from_row(row)\n           else:\n               # Create new session\n               session_id = generate_session_id()\n               cursor.execute(\"\"\"\n                   INSERT INTO sessions (session_key, session_id, created_at, updated_at)\n                   VALUES (?, ?, ?, ?)\n               \"\"\", (session_key, session_id, datetime.now(), datetime.now()))\n               self.conn.commit()\n               \n               return Session(session_key=session_key, session_id=session_id)\n       \n       def add_message(self, session_key: str, role: str, content: str):\n           \"\"\"Add a message to a session.\"\"\"\n           cursor = self.conn.cursor()\n           cursor.execute(\"\"\"\n               INSERT INTO messages (session_key, role, content, timestamp)\n               VALUES (?, ?, ?, ?)\n           \"\"\", (session_key, role, content, datetime.now()))\n           \n           # Also add to FTS index\n           cursor.execute(\"\"\"\n               INSERT INTO messages_fts(rowid, content)\n               SELECT id, content FROM messages WHERE rowid = last_insert_rowid()\n           \"\"\")\n           \n           # Update session timestamp\n           cursor.execute(\"\"\"\n               UPDATE sessions SET updated_at = ? 
WHERE session_key = ?\n           \"\"\", (datetime.now(), session_key))\n           \n           self.conn.commit()\n       \n       def search(self, query: str, limit: int = 10) -> list:\n           \"\"\"Search all messages using FTS5.\"\"\"\n           cursor = self.conn.cursor()\n           cursor.execute(\"\"\"\n               SELECT m.*, s.display_name\n               FROM messages_fts f\n               JOIN messages m ON f.rowid = m.id\n               JOIN sessions s ON m.session_key = s.session_key\n               WHERE f MATCH ?\n               ORDER BY m.timestamp DESC\n               LIMIT ?\n           \"\"\", (query, limit))\n           \n           return cursor.fetchall()\n   4.4 Slash Command Routing  Central Registry (  hermes_cli\u002Fcommands.py )  All slash commands are defined in one place:     @dataclass(frozen=True)\n   class CommandDef:\n       \"\"\"Definition of a slash command.\"\"\"\n       name: str                      # \"resume\"\n       description: str               # \"Resume a previously-named session\"\n       category: str                  # \"Session\"\n       aliases: tuple[str, ...] = ()  # (\"reset\",)\n       args_hint: str = \"\"            # \"[name]\"\n       cli_only: bool = False         # Only in CLI?\n       gateway_only: bool = False     # Only in messaging?\n   \n   # Central registry\n   COMMAND_REGISTRY: list[CommandDef] = [\n       CommandDef(\"new\", \"Start a new session\", \"Session\",\n                  aliases=(\"reset\",)),\n       CommandDef(\"resume\", \"Resume a named session\", \"Session\",\n                  args_hint=\"[name]\"),\n       # ... 
more commands\n   ]\n  CLI Dispatch (  hermes_cli\u002Fmain.py )     def process_command(self, command: str) -> str:\n       \"\"\"\n       Parse and dispatch a slash command in the CLI.\n       \n       Example: \u002Fresume my-session\n       \"\"\"\n       # Parse command\n       parts = command.strip().split()\n       cmd_name = parts[0][1:]  # Remove leading slash\n       args = ' '.join(parts[1:])\n       \n       # Resolve to canonical name (handles aliases)\n       canonical = resolve_command(cmd_name)  # \"resume\"\n       \n       if not canonical:\n           return f\"Unknown command: {cmd_name}\"\n       \n       # Dispatch\n       if canonical == \"new\":\n           return self._handle_new()\n       elif canonical == \"resume\":\n           return self._handle_resume(args)\n       # ... more handlers\n  Gateway Dispatch (  gateway\u002Frun.py )     async def handle_message(self, msg: Message) -> str:\n       \"\"\"\n       Handle an incoming message from any platform.\n       \n       If it's a slash command, dispatch immediately.\n       Otherwise, pass to agent for normal processing.\n       \"\"\"\n       text = msg.text.strip()\n       \n       # Check if it's a slash command\n       if text.startswith('\u002F'):\n           return await self._handle_slash_command(msg)\n       \n       # Normal message - send to agent\n       session = self.session_store.get(msg.session_key)\n       response = self.agent.chat(text, session.context)\n       return response\n   \n   async def _handle_slash_command(self, msg: Message) -> str:\n       \"\"\"\n       Parse and dispatch a slash command in the gateway.\n       \n       Similar to CLI but platform-aware.\n       \"\"\"\n       parts = msg.text.strip().split()\n       cmd_name = parts[0][1:]  # Remove leading slash\n       args = ' '.join(parts[1:])\n       \n       # Resolve to canonical name\n       canonical = resolve_command(cmd_name)\n       \n       if not canonical:\n           return f\"Unknown 
command: {cmd_name}\"\n       \n       # Check platform restrictions. This handler only runs in the gateway,\n       # so gateway_only commands are always valid here; refuse cli_only ones.\n       cmd_def = get_command_def(canonical)\n       if cmd_def.cli_only:\n           return \"This command is only available in the CLI.\"\n       \n       # Dispatch\n       if canonical == \"new\":\n           return await self._handle_new(msg)\n       elif canonical == \"resume\":\n           return await self._handle_resume(msg, args)\n       # ... more handlers\n  Autocomplete (  hermes_cli\u002Fcommands.py )     class SlashCommandCompleter(Completer):\n       \"\"\"\n       Tab completion for slash commands in the CLI.\n       \n       Provides suggestions as you type.\n       \"\"\"\n       def __init__(self):\n           # Build flat dict of all commands + aliases\n           self.commands = {}\n           for cmd_def in COMMAND_REGISTRY:\n               self.commands[cmd_def.name] = cmd_def\n               for alias in cmd_def.aliases:\n                   self.commands[alias] = cmd_def\n       \n       def get_completions(self, document, complete_event):\n           \"\"\"\n           Suggest commands as user types.\n           \n           Example: User types \"\u002Fre\" → suggests \"\u002Fresume\", \"\u002Fretry\"\n           \"\"\"\n           text = document.text_before_cursor\n           \n           if not text.startswith('\u002F'):\n               return\n           \n           # Extract partial command\n           parts = text.split()\n           partial = parts[-1]  # Last word, still includes the slash\n           \n           # Find matches\n           for cmd_name, cmd_def in self.commands.items():\n               if cmd_name.startswith(partial.lstrip('\u002F')):\n                   yield Completion(\n                       '\u002F' + cmd_name,  # Keep the leading slash that start_position consumes\n                       start_position=-len(partial),\n                       display_meta=cmd_def.description[:50],\n        
           )\n   4.5 Background Process Notifications  Check Interval Pattern     # tools\u002Fterminal_tool.py\n   def terminal_tool(command: str, background: bool = False,\n                     check_interval: int = None, **kwargs) -> str:\n       \"\"\"\n       Execute a shell command.\n       \n       Args:\n           command: Shell command to run\n           background: Run asynchronously?\n           check_interval: Seconds between status updates (gateway only)\n       \"\"\"\n       if background and check_interval:\n           # Start with periodic notifications\n           proc = subprocess.Popen(\n               command, shell=True,\n               stdout=subprocess.PIPE,\n               stderr=subprocess.STDOUT,\n               text=True\n           )\n           \n           # Register watcher in gateway\n           process_registry.register(task_id=kwargs.get('task_id'),\n                                     proc=proc,\n                                     check_interval=check_interval)\n       \n       # ... 
rest of implementation\n  Gateway Watcher (  gateway\u002Frun.py )     class BackgroundWatcher:\n       \"\"\"\n       Monitor background processes and push updates to users.\n       \n       Respects user's notification preferences from config.\n       \"\"\"\n       def __init__(self, session_store: SessionStore):\n           self.session_store = session_store\n           self.watchers = {}  # task_id -> watcher thread\n       \n       def start_watching(self, task_id: str, proc: subprocess.Popen,\n                          check_interval: int, chat_id: str):\n           \"\"\"\n           Start watching a background process.\n           \n           Pushes updates to the user's chat at check_interval seconds.\n           \"\"\"\n           def watcher_loop():\n               while proc.poll() is None:\n                   time.sleep(check_interval)\n                   \n                   # Get latest output\n                   output = self._get_new_output(proc)\n                   \n                   if output:\n                       # Check user preferences\n                       prefs = self._get_notification_prefs(chat_id)\n                       \n                       if prefs == 'all':\n                           self._send_update(chat_id, output)\n           \n           thread = threading.Thread(target=watcher_loop, daemon=True)\n           thread.start()\n           self.watchers[task_id] = thread\n       \n       def _get_notification_prefs(self, chat_id: str) -> str:\n           \"\"\"\n           Get user's notification preferences.\n           \n           Options: 'all', 'result', 'error', 'off'\n           \"\"\"\n           # Load from config.yaml or session settings\n           return 'all'  # Default\n  Config Option     # ~\u002F.hermes\u002Fconfig.yaml\n   display  :\n     background_process_notifications  :   all    # 'all', 'result', 'error', 'off'\n   4.6 Hands-On Exercise  Exercise 1: Inspect Gateway Code     cd   
~\u002Fgit\u002Fnous-hermes-agent\u002Fgateway\n  Open   run.py  and find:   The main polling loop (search for   while True )  Platform adapter initialization  Slash command dispatch logic  Session resolution   Questions:   How does it handle multiple platforms simultaneously?  Where are Telegram, Discord adapters initialized?  What's the structure of a Message object?  Exercise 2: Trace a Telegram Message     # Simulate a Telegram message flow\n   from gateway.session import SessionStore\n   from gateway.run import Gateway\n   \n   # Create session store\n   store = SessionStore()\n   \n   # Get or create a session\n   session_key = \"telegram:dm:5490634439\"\n   session = store.get(session_key)\n   \n   print(f\"Session ID: {session.session_id}\")\n   print(f\"Platform: {session.platform}\")\n   Questions:   What session_key format is used for Telegram DMs?  How does the session get persisted to SQLite?  Exercise 3: List Slash Commands     # In your Telegram chat, try:\n   \u002Fhelp\n   \n   # Or in CLI:\n   hermes\n   \u002Fhelp\n   Observe:   What categories of commands are shown?  Which commands have aliases (e.g.,   \u002Freset  =   \u002Fnew )?  Are there any gateway-only commands you can't use in CLI?  Exercise 4: Test Session Persistence     # Start Hermes CLI\n   hermes\n   \n   # Send a message and get a response\n   \"What's your name?\"\n   \n   # Exit with \u002Fquit\n   \u002Fquit\n   \n   # Restart Hermes\n   hermes\n   \n   # Check if history is preserved\n   \u002Fhistory\n   Observe:   Is the previous conversation visible?  What's the session ID now vs. before?  How many messages are in history?   
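Both dispatch paths in 4.4 call `resolve_command()` to collapse aliases (like `\u002Freset` → `\u002Fnew`) into canonical names, but the helper itself never appears in the excerpts. Here is a minimal, hypothetical sketch of how it could be built from `COMMAND_REGISTRY` — the real implementation in `hermes_cli\u002Fcommands.py` may differ:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CommandDef:
    name: str
    description: str
    category: str
    aliases: tuple = ()

COMMAND_REGISTRY = [
    CommandDef("new", "Start a new session", "Session", aliases=("reset",)),
    CommandDef("resume", "Resume a named session", "Session"),
]

# Flatten canonical names and aliases into one lookup, built once
_LOOKUP = {}
for _cmd in COMMAND_REGISTRY:
    _LOOKUP[_cmd.name] = _cmd.name
    for _alias in _cmd.aliases:
        _LOOKUP[_alias] = _cmd.name

def resolve_command(name: str):
    """Map a typed command (or alias) to its canonical name, or None."""
    return _LOOKUP.get(name.lstrip('/').lower())
```

With this shape, `resolve_command("reset")` and `resolve_command("/reset")` both yield `"new"`, so the CLI and gateway dispatchers can branch on a single canonical name.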
4.7 Common Pitfalls  ❌ Don't Assume Single Session Per User     # BAD: One user = one session\n   user_sessions = {\"brian\": session1}\n   \n   # GOOD: One chat = one session (users can have multiple chats)\n   session_keys = {\n       \"telegram:dm:5490634439\": session1,\n       \"telegram:group:-1001234567890\": session2,\n       \"cli\": session3,\n   }\n  ❌ Don't Ignore Platform Limits     # BAD: Send long messages as-is\n   def send(text):\n       bot.send_message(chat_id, text)  # Telegram has a 4096-char limit!\n   \n   # GOOD: Chunk long messages\n   def send(text):\n       for chunk in chunk_text(text, max_length=4096):\n           bot.send_message(chat_id, chunk)\n  ❌ Don't Block the Main Loop     # BAD: Blocking I\u002FO in gateway loop\n   def poll():\n       time.sleep(5)  # Blocks entire gateway!\n       return messages\n   \n   # GOOD: Long polling (or async)\n   def poll():\n       updates = bot.get_updates(timeout=10)  # Returns as soon as updates arrive\n       return updates\n   ✅ Module 4 Checklist     Understand the gateway main loop structure    Explain how platform adapters work (Telegram, Discord, Slack)    Trace a message from arrival to response    Understand session key format and persistence    Explain slash command routing across CLI and gateway    Complete all four exercises    Next:    Module 05: Sessions, Memory & Persistence  
",{"id":85,"path":86,"dir":87,"title":88,"description":9,"keywords":89,"body":99},"content:docs:05-sessions-memory:sessions-memory.md","\u002Fdocs\u002F05-sessions-memory\u002Fsessions-memory","05-sessions-memory","Module 05: Sessions, Memory & Persistence",[24,90,91,92,93,94,95,96,97,98],"5.1 Session Storage Architecture","5.2 Session CRUD Operations","5.3 Full-Text Search with FTS5","5.4 Memory System","5.5 User Profiles","5.6 Context Compression Strategies","5.7 Hands-On Exercise","5.8 Common Pitfalls","✅ Module 5 Checklist","  Module 05: Sessions, Memory & Persistence  🎯 What You'll Learn   Session storage architecture (SQLite + FTS5)  How conversation history is persisted and retrieved  The memory system (key-value store)  User profiles and cross-session continuity  Context compression strategies  Searching past conversations   5.1 Session Storage Architecture  SQLite Database Structure     # hermes_state.py - SessionDB class\n   class SessionDB:\n       \"\"\"\n       Persistent session storage using SQLite with FTS5 full-text search.\n       \n       Tables:\n           - sessions: Metadata (session_key, timestamps, platform info)\n           - messages: Conversation content (role, content, timestamp)\n           - memories: Key-value user preferences and facts\n       \"\"\"\n       \n       def __init__(self, db_path: str = \"~\u002F.hermes\u002Fstate.db\"):\n           # Expand ~ first; sqlite3.connect() does not understand it\n           self.conn = sqlite3.connect(os.path.expanduser(db_path))\n           self._setup_tables()\n  Schema Definition     # hermes_state.py - _setup_tables()\n   def _setup_tables(self):\n       cursor = self.conn.cursor()\n       \n       # Sessions table\n       cursor.execute(\"\"\"\n           CREATE TABLE IF NOT EXISTS sessions (\n               
session_key TEXT PRIMARY KEY,\n               session_id TEXT UNIQUE NOT NULL,\n               created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,\n               updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,\n               platform TEXT NOT NULL,\n               chat_id TEXT NOT NULL,\n               display_name TEXT,\n               input_tokens INTEGER DEFAULT 0,\n               output_tokens INTEGER DEFAULT 0,\n               total_tokens INTEGER DEFAULT 0\n           )\n       \"\"\")\n       \n       # Messages table\n       cursor.execute(\"\"\"\n           CREATE TABLE IF NOT EXISTS messages (\n               id INTEGER PRIMARY KEY AUTOINCREMENT,\n               session_key TEXT NOT NULL,\n               role TEXT NOT NULL,  -- 'user' or 'assistant'\n               content TEXT NOT NULL,\n               timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,\n               FOREIGN KEY (session_key) REFERENCES sessions(session_key)\n           )\n       \"\"\")\n       \n       # FTS5 virtual table for full-text search\n       cursor.execute(\"\"\"\n           CREATE VIRTUAL TABLE IF NOT EXISTS messages_fts USING fts5(\n               content,\n               role,\n               content='messages',\n               content_rowid='id'\n           )\n       \"\"\")\n       \n       # Indexes for performance\n       cursor.execute(\"\"\"\n           CREATE INDEX IF NOT EXISTS idx_messages_session\n           ON messages(session_key, timestamp)\n       \"\"\")\n       \n       self.conn.commit()\n  Session Key Format     # Unique identifier for each conversation thread\n   def make_session_key(platform: str, chat_id: str, thread_id: str = None) -> str:\n       \"\"\"\n       Generate a unique session key.\n       \n       Examples:\n           cli                                    # CLI mode (single session)\n           telegram:dm:5490634439                # Telegram DM with user\n           telegram:group:-1001234567890         # Telegram group chat\n           
discord:#engineering                  # Discord channel\n           slack:C012AB3CD                       # Slack channel\u002Fuser\n       \"\"\"\n       if thread_id:\n           return f\"{platform}:{chat_id}:{thread_id}\"\n       else:\n           return f\"{platform}:{chat_id}\"\n   5.2 Session CRUD Operations  Create\u002FGet Session     # hermes_state.py\n   def get_or_create_session(self, platform: str, chat_id: str,\n                             display_name: str = None) -> Session:\n       \"\"\"\n       Get existing session or create a new one.\n       \n       Returns Session object with metadata and message history.\n       \"\"\"\n       session_key = self.make_session_key(platform, chat_id)\n       \n       cursor = self.conn.cursor()\n       \n       # Try to find existing session\n       cursor.execute(\"\"\"\n           SELECT * FROM sessions WHERE session_key = ?\n       \"\"\", (session_key,))\n       \n       row = cursor.fetchone()\n       \n       if row:\n           # Return existing session\n           return Session.from_db_row(row)\n       else:\n           # Create new session\n           session_id = self._generate_session_id()\n           created_at = datetime.now()\n           \n           cursor.execute(\"\"\"\n               INSERT INTO sessions \n               (session_key, session_id, platform, chat_id, display_name, created_at)\n               VALUES (?, ?, ?, ?, ?, ?)\n           \"\"\", (session_key, session_id, platform, chat_id, display_name, created_at))\n           \n           self.conn.commit()\n           \n           return Session(\n               session_key=session_key,\n               session_id=session_id,\n               platform=platform,\n               chat_id=chat_id,\n               display_name=display_name,\n               messages=[]  # Fresh session\n           )\n  Save Message     # hermes_state.py\n   def add_message(self, session_key: str, role: str, content: str):\n       \"\"\"\n       Add a 
message to a session and update FTS index.\n       \n       This is called after every user\u002Fassistant turn.\n       \"\"\"\n       cursor = self.conn.cursor()\n       \n       # Insert message\n       cursor.execute(\"\"\"\n           INSERT INTO messages (session_key, role, content)\n           VALUES (?, ?, ?)\n       \"\"\", (session_key, role, content))\n       \n       # Sync to FTS index\n       cursor.execute(\"\"\"\n           INSERT INTO messages_fts(rowid, content, role)\n           SELECT id, content, role FROM messages \n           WHERE rowid = last_insert_rowid()\n       \"\"\")\n       \n       # Update session timestamp and token counts\n       if role == 'user':\n           cursor.execute(\"\"\"\n               UPDATE sessions SET updated_at = ?, input_tokens = input_tokens + ?\n               WHERE session_key = ?\n           \"\"\", (datetime.now(), estimate_tokens(content), session_key))\n       else:\n           cursor.execute(\"\"\"\n               UPDATE sessions SET updated_at = ?, output_tokens = output_tokens + ?\n               WHERE session_key = ?\n           \"\"\", (datetime.now(), estimate_tokens(content), session_key))\n       \n       self.conn.commit()\n  Get Session History     # hermes_state.py\n   def get_messages(self, session_key: str, limit: int = None,\n                    offset: int = 0) -> list:\n       \"\"\"\n       Retrieve conversation history for a session.\n       \n       Args:\n           session_key: The session identifier\n           limit: Max messages to return (None = all)\n           offset: Skip first N messages (for pagination)\n       \n       Returns:\n           List of Message objects in chronological order\n       \"\"\"\n       cursor = self.conn.cursor()\n       \n       query = \"\"\"\n           SELECT role, content, timestamp FROM messages\n           WHERE session_key = ?\n           ORDER BY timestamp ASC\n       \"\"\"\n       \n       params = [session_key]\n       \n       if limit:\n  
         query += \" LIMIT ? OFFSET ?\"\n           params.extend([limit, offset])\n       else:\n           query += f\" LIMIT 1000 OFFSET {offset}\"  # Safety cap\n       \n       cursor.execute(query, params)\n       \n       return [\n           Message(role=row[0], content=row[1], timestamp=row[2])\n           for row in cursor.fetchall()\n       ]\n   5.3 Full-Text Search with FTS5  SQLite FTS5 Overview  FTS5 is SQLite's full-text search extension. It creates a virtual table that:   Indexes text content for fast searching  Supports boolean queries (AND, OR, NOT)  Returns results ranked by relevance  Search Implementation     # hermes_state.py\n   def search_messages(self, query: str, limit: int = 10,\n                       platform: str = None) -> list:\n       \"\"\"\n       Search all messages using FTS5 full-text search.\n       \n       Args:\n           query: Search terms (FTS5 syntax supported)\n           limit: Max results to return\n           platform: Filter by platform (optional)\n       \n       Returns:\n           List of (session_key, role, content, timestamp, display_name) tuples\n       \"\"\"\n       cursor = self.conn.cursor()\n       \n       # Build query with optional platform filter\n       sql = \"\"\"\n           SELECT DISTINCT m.session_key, m.role, m.content, m.timestamp,\n                  s.display_name, s.platform\n           FROM messages_fts f\n           JOIN messages m ON f.rowid = m.id\n           JOIN sessions s ON m.session_key = s.session_key\n           WHERE f MATCH ?\n       \"\"\"\n       \n       params = [query]\n       \n       if platform:\n           sql += \" AND s.platform = ?\"\n           params.append(platform)\n       \n       sql += \" ORDER BY m.timestamp DESC LIMIT ?\"\n       params.append(limit)\n       \n       cursor.execute(sql, params)\n       \n       return [\n           {\n               'session_key': row[0],\n               'role': row[1],\n               'content': row[2],\n               
'timestamp': row[3],\n               'display_name': row[4],\n               'platform': row[5]\n           }\n           for row in cursor.fetchall()\n       ]\n  FTS5 Query Syntax     # Examples of valid search queries\n   search_messages(\"terminal AND python\")      # Both terms required\n   search_messages(\"git OR github\")            # Either term\n   search_messages(\"python NOT docker\")        # Exclude a term (NOT is binary in FTS5)\n   search_messages(\"\\\"error message\\\"\")        # Exact phrase\n   search_messages(\"bug*\")                     # Wildcard (prefix)\n  LLM-Powered Summarization     # agent\u002Fauxiliary_client.py\n   def summarize_search_results(results: list, query: str) -> str:\n       \"\"\"\n       Use an LLM to summarize search results for the user.\n       \n       This provides a natural language summary instead of raw matches.\n       \"\"\"\n       context = \"\\n\\n---\\n\\n\".join([\n           f\"[{r['timestamp']}] {r['display_name']} ({r['platform']}):\\n{r['content']}\"\n           for r in results[:10]  # Top 10 matches\n       ])\n       \n       prompt = f\"\"\"\n       Search query: \"{query}\"\n       \n       Found {len(results)} matching messages from past conversations.\n       \n       Summarize what the user was asking about or working on:\n       \n       {context}\n       \n       Summary:\n       \"\"\"\n       \n       summary = call_llm(prompt, model=\"small\u002Fcheap\")\n       return summary\n   5.4 Memory System  Key-Value Storage     # hermes_state.py - MemoryStore class\n   class MemoryStore:\n       \"\"\"\n       Simple key-value store for persistent facts and preferences.\n       \n       Stored in ~\u002F.hermes\u002Fmemories\u002F as individual JSON files.\n       \"\"\"\n       \n       def __init__(self, base_path: str = \"~\u002F.hermes\u002Fmemories\"):\n           self.base_path = os.path.expanduser(base_path)\n           os.makedirs(self.base_path, exist_ok=True)\n       \n       def get(self, key: str) -> 
Optional[str]:\n           \"\"\"Get a memory value by key.\"\"\"\n           path = os.path.join(self.base_path, f\"{key}.json\")\n           \n           if not os.path.exists(path):\n               return None\n           \n           with open(path) as f:\n               data = json.load(f)\n           \n           return data.get('value')\n       \n       def set(self, key: str, value: str):\n           \"\"\"Set a memory value.\"\"\"\n           path = os.path.join(self.base_path, f\"{key}.json\")\n           \n           with open(path, 'w') as f:\n               json.dump({\n                   'key': key,\n                   'value': value,\n                   'updated_at': datetime.now().isoformat()\n               }, f, indent=2)\n       \n       def delete(self, key: str):\n           \"\"\"Delete a memory.\"\"\"\n           path = os.path.join(self.base_path, f\"{key}.json\")\n           \n           if os.path.exists(path):\n               os.remove(path)\n       \n       def list_all(self) -> dict:\n           \"\"\"List all memories as {key: value} dict.\"\"\"\n           memories = {}\n           \n           for filename in os.listdir(self.base_path):\n               if filename.endswith('.json'):\n                   key = filename[:-5]  # Remove .json\n                   memories[key] = self.get(key)\n           \n           return memories\n  Memory Tool     # tools\u002Fmemory_tool.py\n   def memory_tool(action: str, key: str = None, value: str = None) -> str:\n       \"\"\"\n       Manage persistent memories.\n       \n       Actions:\n           - get: Retrieve a memory by key\n           - set: Store a memory (key + value)\n           - delete: Remove a memory\n           - list: Show all memories\n       \"\"\"\n       store = MemoryStore()\n       \n       if action == 'get':\n           result = store.get(key)\n           if result is None:\n               return json.dumps({\"error\": f\"No memory found for key: {key}\"})\n           
return json.dumps({\"key\": key, \"value\": result})\n       \n       elif action == 'set':\n           store.set(key, value)\n           return json.dumps({\"success\": True, \"key\": key})\n       \n       elif action == 'delete':\n           store.delete(key)\n           return json.dumps({\"success\": True, \"key\": key})\n       \n       elif action == 'list':\n           memories = store.list_all()\n           return json.dumps({\"memories\": memories})\n       \n       else:\n           return json.dumps({\"error\": f\"Unknown action: {action}\"})\n  Automatic Memory Nudges     # agent\u002Fauxiliary_client.py - Periodic memory maintenance\n   def periodic_memory_nudge(session_history: list):\n       \"\"\"\n       Analyze conversation and suggest new memories to store.\n       \n       This runs periodically (e.g., every 10 turns) to help the agent\n       remember important facts about the user.\n       \"\"\"\n       recent = session_history[-20:]  # Only the last 20 turns\n       \n       prompt = f\"\"\"\n       Based on this conversation, what important facts about the user\n       should be remembered for future sessions?\n       \n       Examples:\n           - User's name is Brian\n           - User prefers Python over JavaScript\n           - User works on a homelab project\n           - User's favorite color is orange\n       \n       Conversation:\n       {recent}\n       \n       Suggested memories (one per line, key=value format):\n       \"\"\"\n       \n       suggestions = call_llm(prompt)\n       \n       # Parse and store suggestions\n       for line in suggestions.strip().split('\\n'):\n           if '=' in line:\n               key, value = line.split('=', 1)\n               memory_tool(action='set', key=key.strip(), value=value.strip())\n   5.5 User Profiles  Profile Structure     # ~\u002F.hermes\u002Fconfig.yaml (user profile section)\n   user  :\n     name  :   \"Brian Caffey\"\n     timezone  :   \"America\u002FNew_York\"\n     preferences  :\n       model  :   \"anthropic\u002Fclaude-opus-4.6\"\n       personality  :   \"warm, playful assistant\"\n       display  :\n  
       skin  :   \"default\"\n         verbose  :   true\n  Loading User Profile     # hermes_cli\u002Fconfig.py\n   def load_user_profile() -> dict:\n       \"\"\"\n       Load user-specific preferences from config.yaml.\n       \n       This is injected into the system prompt for personalization.\n       \"\"\"\n       config_path = os.path.expanduser(\"~\u002F.hermes\u002Fconfig.yaml\")\n       \n       if not os.path.exists(config_path):\n           return {}  # Default profile\n       \n       with open(config_path) as f:\n           config = yaml.safe_load(f)\n       \n       return {\n           'name': config.get('user', {}).get('name'),\n           'timezone': config.get('user', {}).get('timezone'),\n           'preferences': config.get('user', {}).get('preferences', {}),\n       }\n  Profile in System Prompt     # agent\u002Fprompt_builder.py\n   def build_system_prompt(user_profile: dict = None, **kwargs):\n       \"\"\"\n       Construct the full system prompt.\n       \n       Includes user profile for personalization.\n       \"\"\"\n       parts = [\n           CORE_SYSTEM_PROMPT,\n       ]\n       \n       if user_profile:\n           profile_text = f\"\"\"\n   ---\n   User Profile:\n     Name: {user_profile.get('name', 'User')}\n     Timezone: {user_profile.get('timezone', 'UTC')}\n     Preferences:\n   {yaml.dump(user_profile.get('preferences', {}), indent=4)}\n   \"\"\"\n           parts.append(profile_text)\n       \n       return \"\\n\\n---\\n\\n\".join(parts)\n   5.6 Context Compression Strategies  Why Compress?   LLM context limits (128K, 200K tokens)  Cost reduction (fewer input tokens)  Performance (faster API calls with smaller contexts)  Automatic Compression     # agent\u002Fcontext_compressor.py\n   def compress_context(messages: list, max_tokens: int = 100000) -> list:\n       \"\"\"\n       Compress conversation history when it exceeds token limit.\n       \n       Strategy:\n           1. 
Keep system prompt + recent N turns intact\n           2. Summarize older turns into a single block\n           3. Preserve tool results (they contain important state)\n       \"\"\"\n       if count_tokens(messages) \u003C= max_tokens:\n           return messages  # No compression needed\n       \n       # Separate recent and old\n       recent_count = 15  # Keep last 15 turns intact\n       recent = messages[-recent_count:]\n       old = messages[:-recent_count]\n       \n       # Summarize older conversation\n       summary = summarize_conversation(old)\n       \n       return [\n           {\n               \"role\": \"system\",\n               \"content\": (\n                   f\"[Context compressed. Previous conversation summarized below.]\\n\\n\"\n                   f\"{summary}\\n\\n\"\n                   f\"---\\n\\nRecent conversation follows:\"\n               )\n           },\n           *recent\n       ]\n  Manual Compression Command     # hermes_cli\u002Fmain.py - \u002Fcompress command\n   @slash_command(\"compress\")\n   def compress(session: Session, target_tokens: int = 50000):\n       \"\"\"\n       Manually compress conversation context.\n       \n       Usage: \u002Fcompress [target_tokens]\n       \"\"\"\n       original_count = count_tokens(session.messages)\n       \n       compressed = compress_context(session.messages, target_tokens)\n       session.messages = compressed\n       \n       new_count = count_tokens(compressed)\n       reduction = ((original_count - new_count) \u002F original_count) * 100\n       \n       return f\"Context compressed from {original_count:,} to {new_count:,} tokens ({reduction:.1f}% reduction)\"\n   5.7 Hands-On Exercise  Exercise 1: Inspect Your Session Database     # Navigate to Hermes home\n   cd   ~\u002F.hermes\n   \n   # Open SQLite database\n   sqlite3   state.db\n   \n   # List tables\n   .tables\n   \n   # See your sessions\n   SELECT   session_key,   session_id,   platform,   display_name,   
created_at   \n   FROM   sessions   \n   ORDER   BY   updated_at   DESC  ;\n   \n   # See message count per session\n   SELECT   session_key,   COUNT  (  *  )   as   msg_count   \n   FROM   messages   \n   GROUP   BY   session_key   \n   ORDER   BY   msg_count   DESC  ;\n   \n   # Search for specific content\n   SELECT   role,   substr  (  content,   1,   100  ) \n   FROM   messages_fts   \n   WHERE   messages_fts   MATCH   'terminal OR python'   \n   LIMIT   5  ;\n   \n   # Exit SQLite\n   .quit\n  Exercise 2: Test FTS5 Search     # In Python shell\n   cd ~\u002Fgit\u002Fnous-hermes-agent\n   source venv\u002Fbin\u002Factivate\n   \n   >>> from hermes_state import SessionDB\n   >>> db = SessionDB()\n   >>> \n   >>> # Search for specific terms\n   >>> results = db.search_messages(\"terminal\", limit=5)\n   >>> for r in results:\n   ...     print(f\"[{r['timestamp']}] {r['role']}: {r['content'][:100]}\")\n   Questions:   How many results did you get?  What's the relevance ranking based on?  Exercise 3: Inspect Memory Files     # List all memory files\n   ls   -la   ~\u002F.hermes\u002Fmemories\u002F\n   \n   # Read a specific memory\n   cat   ~\u002F.hermes\u002Fmemories\u002Fuser_name.json\n   \n   # Or list via CLI (if supported)\n   hermes\n   \u002Fmemory   list\n   Observe:   What memories are stored?  How is the data formatted?  Exercise 4: Test Context Compression     # Start Hermes CLI\n   hermes\n   \n   # Have a long conversation (or load an existing one)\n   # Then trigger compression\n   \u002Fcompress   50000\n   \n   # Check token usage\n   \u002Fusage\n   Observe:   How many tokens before compression?  How many after?  Is the conversation still usable?   
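This module's persistence and compression code leans on `estimate_tokens()` and `count_tokens()`, which are never defined in the excerpts above. A rough, hypothetical sketch using the common ~4-characters-per-token heuristic (the real helpers may well use an actual tokenizer library instead):

```python
# Hypothetical token-estimation helpers, assumed by add_message() and
# compress_context(); a heuristic sketch, not the real implementation.
def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def count_tokens(messages: list) -> int:
    """Sum estimated tokens across a message list, plus a small
    per-message overhead for role markers and formatting."""
    return sum(estimate_tokens(m["content"]) + 4 for m in messages)

history = [
    {"role": "user", "content": "What's your name?"},
    {"role": "assistant", "content": "I'm Hermes."},
]
total = count_tokens(history)
```

Because this is only an estimate, the compression thresholds in 5.6 are best checked with a safety margin (e.g. compress when the estimate reaches 90% of the model's limit) rather than treated as exact.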
5.8 Common Pitfalls  ❌ Don't Ignore Token Limits     # BAD: No context size checking\n   messages = build_context()  # Could be millions of tokens!\n   response = llm.chat(messages)\n   \n   # GOOD: Check and compress\n   if count_tokens(messages) > MAX_TOKENS * 0.9:\n       messages = compress_context(messages, int(MAX_TOKENS * 0.8))\n  ❌ Don't Store Sensitive Data in Memories     # BAD: Storing secrets as memories\n   memory_tool('set', 'api_key', 'sk-1234567890abcdef')\n   \n   # GOOD: Read secrets from environment variables or a secure vault\n   api_key = os.environ['API_KEY']  # Set outside the codebase (shell, secrets manager)\n  ❌ Don't Assume Session Continuity     # BAD: Assuming session persists forever\n   session = db.get(session_key)\n   # ... hours later ...\n   session.messages  # Might be stale!\n   \n   # GOOD: Always reload from database\n   def get_current_session(session_key):\n       return db.get(session_key)  # Fresh read each time\n   ✅ Module 5 Checklist     Understand SQLite schema for sessions and messages    Explain FTS5 full-text search integration    Trace how messages are persisted after each turn    Understand memory system (key-value store)    Explain context compression strategies    Complete all four exercises    Next:    Module 06: Skills System & Self-Improvement  
",{"id":101,"path":102,"dir":103,"title":104,"description":9,"keywords":105,"body":116},"content:docs:06-skills-learning:skills-learning.md","\u002Fdocs\u002F06-skills-learning\u002Fskills-learning","06-skills-learning","Module 06: Skills System & Self-Improvement",[24,106,107,108,109,108,109,110,111,112,113,114,115],"6.1 Skills Overview","6.2 Skill Structure","Pitfalls","Verification","6.4 Automatic Skill Creation","6.5 Skill Search & Installation","6.6 Skill Self-Improvement","6.7 Hands-On Exercise","6.8 Common Pitfalls","✅ Module 6 Checklist","  Module 06: Skills System & Self-Improvement  🎯 What You'll Learn   How skills work as procedural memory  Skill structure (SKILL.md format)  Automatic skill creation from experience  Skill search and installation  Skill execution and context passing  Updating skills when they break   6.1 Skills Overview  What Are Skills?  Skills are   reusable workflows  that the agent can execute. 
They're like functions with:   Instructions (what to do)  References (supporting docs)  Templates (reusable snippets)  Scripts (helper code)  Key Design Principles    Procedural Memory  — Skills persist across sessions   Self-Improving  — Agent can update skills when they break   User-Extensible  — Install new skills from community or create your own   Context-Aware  — Skills load into the agent's context as needed  Skill Storage Locations     # Built-in skills (from repo)\n   ~  \u002Fgit\u002Fnous-hermes-agent\u002Foptional-skills\u002F\n   \n   # User-installed skills\n   ~  \u002F.hermes\u002Fskills\u002F\n   \n   # Skill format: Each skill is a directory with:\n   my-skill\u002F\n   ├──   SKILL.md            # Main instructions\n   ├──   references\u002F         # Supporting documentation\n   ├──   templates\u002F          # Reusable templates (YAML, JSON, etc.)\n   └──   scripts\u002F            # Helper Python scripts\n   6.2 Skill Structure  SKILL.md Format     # Skill Name\n   \n   Short description of what this skill does.\n   \n   ---\n   \n   ## Trigger Conditions\n   \n   Use this skill when:\n   -   Condition 1\n   -   Condition 2\n   \n   Do NOT use this skill when:\n   -   Condition A\n   -   Condition B\n   \n   ---\n   \n   ## Prerequisites\n   \n   Required tools\u002FAPIs:\n   -   Tool X must be installed\n   -   API key Y must be set\n   \n   ---\n   \n   ## Steps\n   \n   1.   **Step name**  : Description of what to do\n      -   Sub-step if needed\n      -   Another sub-step\n   \n   2.   
**Another step**: More details\n      ```python\n      # Example code snippet\n      result = do_something()\n      ```\n\n   3. **Final step**: Wrap up\n\n   ---\n\n   ## Pitfalls\n\n   Common mistakes to avoid:\n   - Don't do X (explain why)\n   - Watch out for Y (edge case)\n\n   ---\n\n   ## Verification\n\n   After completing this skill, verify:\n   - [ ] Result looks correct\n   - [ ] No errors in output\n   - [ ] Files are in expected locations\n   \n### Example Skill: GitHub PR Workflow\n\n```markdown\n# github-pr-workflow\n\nFull pull request lifecycle — create branches, commit changes, open PRs.\n\n---\n\n## Trigger Conditions\n\nUse this skill when:\n- User wants to create a new pull request\n- User needs to update an existing PR\n- User wants to review code changes\n\nDo NOT use this skill when:\n- Just viewing repo (use github-repo-management instead)\n- Simple file edit without version control\n\n---\n\n## Prerequisites\n\nRequired tools\u002FAPIs:\n- `gh` CLI installed and authenticated (`gh auth status`)\n- Git configured with user.name and user.email\n- Write access to target repository\n\n---\n\n## Steps\n\n1. **Understand the request**: Clarify what changes need to be made\n   - Ask for branch name if not specified\n   - Confirm target repository\n\n2. 
**Create\u002Fcheckout branch**:\n   ```bash\n   git checkout -b feature\u002Fmy-feature\n   # or: git checkout existing-branch\n   ```\n\n3. **Make changes**: Use file tools to edit code\n   - Read files first to understand context\n   - Make targeted edits with `file_patch`\n   - Test if applicable\n\n4. **Commit changes**:\n   ```bash\n   git add .\n   git commit -m \"feat: descriptive message\"\n   ```\n\n5. **Push branch**:\n   ```bash\n   git push origin feature\u002Fmy-feature\n   ```\n\n6. **Create PR**: Use `gh pr create`\n   ```bash\n   gh pr create \\\n     --title \"Descriptive title\" \\\n     --body \"## Description\\nWhat this does\\n\\n## Changes\\n- Change 1\\n- Change 2\"\n   ```\n\n---\n\n## Pitfalls\n\n- **Don't commit without reviewing**: Always show the user a diff before committing\n- **Don't push to main**: Always create a feature branch first\n- **Watch for merge conflicts**: Check if upstream has new commits\n\n---\n\n## Verification\n\nAfter completing this skill, verify:\n- [ ] PR is created and visible on GitHub\n- [ ] Branch is pushed to remote\n- [ ] Local working directory is clean (`git status`)\n```\n\n---\n\n## 6.3 Skill Loading & Execution\n\n### Skills Hub (`hermes_cli\u002Fskills_hub.py`)\n\n```python\nimport os\nfrom typing import Optional\n\n\nclass SkillsHub:\n    \"\"\"\n    Search, browse, install, and manage skills.\n    \n    Handles both built-in and user-installed skills.\n    \"\"\"\n    \n    def __init__(self):\n        self.builtin_path = \"optional-skills\u002F\"\n        self.user_path = os.path.expanduser(\"~\u002F.hermes\u002Fskills\u002F\")\n    \n    def list_all(self) -> list:\n        \"\"\"List all available skills (built-in + user).\"\"\"\n        skills = []\n        \n        # Built-in skills\n        if os.path.exists(self.builtin_path):\n            for name in os.listdir(self.builtin_path):\n                path = os.path.join(self.builtin_path, name)\n                if os.path.isdir(path) and os.path.exists(f\"{path}\u002FSKILL.md\"):\n                    skills.append({\n                        'name': name,\n                        'type': 'builtin',\n         
               'path': path\n                    })\n        \n        # User skills\n        if os.path.exists(self.user_path):\n            for name in os.listdir(self.user_path):\n                path = os.path.join(self.user_path, name)\n                if os.path.isdir(path) and os.path.exists(f\"{path}\u002FSKILL.md\"):\n                    skills.append({\n                        'name': name,\n                        'type': 'user',\n                        'path': path\n                    })\n        \n        return skills\n    \n    def get_skill(self, name: str) -> Optional[dict]:\n        \"\"\"Get a specific skill's content.\"\"\"\n        # Check user skills first (they override built-ins)\n        user_path = os.path.join(self.user_path, name, \"SKILL.md\")\n        if os.path.exists(user_path):\n            return self._load_skill(name, user_path)\n        \n        # Check built-in\n        builtin_path = os.path.join(self.builtin_path, name, \"SKILL.md\")\n        if os.path.exists(builtin_path):\n            return self._load_skill(name, builtin_path)\n        \n        return None\n    \n    def _load_skill(self, name: str, path: str) -> dict:\n        \"\"\"Load a skill from disk.\"\"\"\n        with open(path) as f:\n            content = f.read()\n        \n        # Parse YAML frontmatter only when the file STARTS with '---';\n        # a '---' used later as a horizontal rule is not metadata.\n        parts = content.split('---\\n', 2)\n        if content.startswith('---\\n') and len(parts) == 3:\n            import yaml\n            metadata = yaml.safe_load(parts[1])\n            body = parts[2]\n        else:\n            metadata = {}\n            body = content\n        \n        return {\n            'name': name,\n            'metadata': metadata,\n            'content': body,\n            'path': path\n        }\n  Skill Slash Command (  hermes_cli\u002Fskill_commands.py )     def skill_command(skill_name: str, args: dict = None) -> str:\n       \"\"\"\n       Execute a skill as part of the conversation.\n       \n       Skills are loaded as USER messages (not 
system prompt) to preserve caching.\n       \"\"\"\n       hub = SkillsHub()\n       skill = hub.get_skill(skill_name)\n       \n       if not skill:\n           return f\"Skill '{skill_name}' not found. Use \u002Fskills search to find available skills.\"\n       \n       # Build skill context\n       skill_context = f\"\"\"\n   ---\n   SKILL: {skill['name']}\n   \n   {skill['content']}\n   \n   ---\n   Context:\n   {json.dumps(args or {})}\n   \"\"\"\n       \n       # Return as user message (will be injected into conversation)\n       return skill_context\n  Usage in Conversation     # run_agent.py - Inject skills when needed\n   def build_context(user_message: str, skills: list = None):\n       messages = [\n           {\"role\": \"system\", \"content\": system_prompt},\n           *conversation_history,\n       ]\n       \n       # Load skills as user message (not system!)\n       if skills:\n           for skill_name in skills:\n               skill_content = skill_command(skill_name)\n               messages.append({\"role\": \"user\", \"content\": skill_content})\n       \n       messages.append({\"role\": \"user\", \"content\": user_message})\n       return messages\n   6.4 Automatic Skill Creation  When to Create Skills     # agent\u002Fauxiliary_client.py - Periodic skill suggestion\n   def suggest_new_skills(conversation: list):\n       \"\"\"\n       Analyze conversation and suggest new skills to create.\n       \n       Triggers after complex multi-step tasks (5+ tool calls).\n       \"\"\"\n       if count_tool_calls(conversation) \u003C 5:\n           return None  # Not complex enough\n       \n       prompt = f\"\"\"\n       Based on this conversation, could a reusable skill be created?\n       \n       Look for patterns like:\n       - Repeated multi-step workflows\n       - Common tasks the user asks about\n       - Procedures that could be documented\n       \n       Conversation excerpt:\n       {conversation[-30:]}  # Last 30 turns\n       \n   
    If a skill makes sense, suggest:\n       1. Skill name (kebab-case)\n       2. When to use it\n       3. Step-by-step instructions\n       4. Common pitfalls\n       \n       Otherwise say \"No skill needed.\"\n       \"\"\"\n       \n       suggestion = call_llm(prompt)\n       return parse_skill_suggestion(suggestion)\n  Creating a Skill from Suggestion     def create_skill_from_suggestion(suggestion: dict):\n       \"\"\"\n       Create a new skill directory with SKILL.md.\n       \n       Asks user for approval before saving.\n       \"\"\"\n       name = suggestion['name']\n       path = os.path.expanduser(f\"~\u002F.hermes\u002Fskills\u002F{name}\u002FSKILL.md\")\n       \n       # NOTE: per the docstring, confirm with the user before writing anything\n       \n       # Create directory\n       os.makedirs(os.path.dirname(path), exist_ok=True)\n       \n       # Write skill content\n       with open(path, 'w') as f:\n           f.write(suggestion['instructions'])\n       \n       return {\n           'success': True,\n           'name': name,\n           'path': path,\n           'message': f\"Skill '{name}' created! 
You can now use \u002F{name} in conversations.\"\n       }\n   6.5 Skill Search & Installation  Skills Hub CLI (  \u002Fskills  command)     # hermes_cli\u002Fmain.py - \u002Fskills slash command\n   @slash_command(\"skills\")\n   def skills(subcommand: str = None, query: str = None):\n       \"\"\"\n       Manage skills.\n       \n       Usage:\n           \u002Fskills              # List all available skills\n           \u002Fskills search \u003Cq>   # Search skills by keyword\n           \u002Fskills inspect \u003Cname>  # Show a specific skill\n           \u002Fskills install \u003Curl>   # Install from GitHub\n       \"\"\"\n       hub = SkillsHub()\n       \n       if not subcommand:\n           # List all skills\n           skills = hub.list_all()\n           return format_skills_list(skills)\n       \n       elif subcommand == 'search':\n           # Search by keyword in skill content\n           results = []\n           for skill in hub.list_all():\n               skill_data = hub.get_skill(skill['name'])\n               if query.lower() in skill_data['content'].lower():\n                   results.append(skill)\n           \n           return format_search_results(results, query)\n       \n       elif subcommand == 'inspect':\n           # Show specific skill\n           skill = hub.get_skill(query)\n           if not skill:\n               return f\"Skill '{query}' not found.\"\n           \n           return f\"# {skill['name']}\\n\\n{skill['content']}\"\n       \n       elif subcommand == 'install':\n           # Install from GitHub URL\n           return install_skill_from_github(query)\n  Installing from GitHub     def install_skill_from_github(url: str) -> str:\n       \"\"\"\n       Install a skill from a GitHub repository.\n       \n       Expected format: https:\u002F\u002Fgithub.com\u002Fuser\u002Frepo\u002Ftree\u002Fmain\u002Fskill-name\n       Or: user\u002Frepo\u002Fskill-name (short form)\n       \"\"\"\n       # Parse URL to get repo and 
skill name\n       parsed = parse_github_url(url)\n       \n       if not parsed:\n           return \"Invalid GitHub URL format.\"\n       \n       # Clone or download the skill directory\n       target_path = os.path.expanduser(f\"~\u002F.hermes\u002Fskills\u002F{parsed['skill_name']}\")\n       \n       if os.path.exists(target_path):\n           return f\"Skill '{parsed['skill_name']}' already exists. Use \u002Fskills update first.\"\n       \n       # Download from GitHub\n       success = download_skill_from_github(parsed, target_path)\n       \n       if success:\n           return f\"Skill '{parsed['skill_name']}' installed! You can now use \u002F{parsed['skill_name']}.\"\n       else:\n           return \"Failed to install skill. Check the URL and try again.\"\n   6.6 Skill Self-Improvement  Updating Skills When They Break     def update_skill_from_error(skill_name: str, error_context: dict):\n       \"\"\"\n       Automatically update a skill when it fails.\n       \n       This is the \"self-improving\" part of Hermes.\n       \"\"\"\n       hub = SkillsHub()\n       skill = hub.get_skill(skill_name)\n       \n       if not skill:\n           return f\"Skill '{skill_name}' not found.\"\n       \n       prompt = f\"\"\"\n       This skill failed with an error. Update the instructions to prevent this.\n       \n       Skill: {skill['name']}\n       Current instructions:\n       {skill['content']}\n       \n       Error context:\n       - What went wrong: {error_context['error']}\n       - Steps taken before failure: {error_context['steps']}\n       - Environment state: {error_context['state']}\n       \n       Update the SKILL.md to:\n       1. Add this scenario to \"Pitfalls\" section\n       2. Modify steps if needed\n       3. 
Add verification checks\n       \n       Return only the updated SKILL.md content.\n       \"\"\"\n       \n       updated_content = call_llm(prompt)\n       \n       # Save updated skill\n       path = os.path.expanduser(f\"~\u002F.hermes\u002Fskills\u002F{skill_name}\u002FSKILL.md\")\n       with open(path, 'w') as f:\n           f.write(updated_content)\n       \n       return f\"Skill '{skill_name}' updated to handle this error case.\"\n  Manual Skill Updates     # User can also manually update skills\n   @slash_command(\"skills\")\n   def skills_update(skill_name: str, changes: str):\n       \"\"\"\n       Manually update a skill.\n       \n       Usage: \u002Fskills update \u003Cname> \u003Cchanges>\n       \"\"\"\n       hub = SkillsHub()\n       skill = hub.get_skill(skill_name)\n       \n       if not skill:\n           return f\"Skill '{skill_name}' not found.\"\n       \n       # Apply changes (simple find-and-replace or full rewrite)\n       updated_content = apply_changes(skill['content'], changes)\n       \n       path = os.path.expanduser(f\"~\u002F.hermes\u002Fskills\u002F{skill_name}\u002FSKILL.md\")\n       with open(path, 'w') as f:\n           f.write(updated_content)\n       \n       return f\"Skill '{skill_name}' updated.\"\n   6.7 Hands-On Exercise  Exercise 1: List Available Skills     # In your Telegram chat or CLI\n   \u002Fskills\n   \n   # Or search for specific skills\n   \u002Fskills   search   github\n   \u002Fskills   search   terminal\n   Observe:   How many built-in skills are available?  What categories do they fall into?  Can you find a skill related to something you do often?  Exercise 2: Inspect a Skill     # Look at a specific skill's content\n   \u002Fskills   inspect   github-pr-workflow\n   \n   # Or directly read the file\n   cat   ~\u002Fgit\u002Fnous-hermes-agent\u002Foptional-skills\u002Fgithub\u002Fgithub-pr-workflow\u002FSKILL.md\n   Questions:   What are the trigger conditions?  What pitfalls does it warn about?  
How would you improve this skill?  Exercise 3: Use a Skill in Conversation     # Start a conversation and ask for help with something that has a skill\n   \"Help me create a GitHub pull request for my changes\"\n   \n   # The agent should recognize the task and load the appropriate skill\n   Observe:   Does the agent mention using a skill?  How does it guide you through the steps?  Are there any pitfalls it warns about?  Exercise 4: Create Your Own Skill     # Create a simple custom skill\n   mkdir   -p   ~\u002F.hermes\u002Fskills\u002Fmy-custom-task\n   \n   cat   >   ~\u002F.hermes\u002Fskills\u002Fmy-custom-task\u002FSKILL.md   \u003C\u003C   'EOF'\n   # my-custom-task\n   \n   A custom workflow for doing X.\n   \n   ---\n   \n   ## Trigger Conditions\n   \n   Use this when:\n   - User wants to do X\n   - User needs help with Y\n   \n   ---\n   \n   ## Steps\n   \n   1. Do step 1\n   2. Do step 2\n   3. Verify result\n   \n   ---\n   \n   ## Pitfalls\n   \n   - Don't forget to check Z\n   EOF\n   \n   # Test it\n   hermes\n   \u002Fmy-custom-task\n   Observe:   Does the agent recognize your custom skill?  Can you use it in conversation?   
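Before testing Exercise 4 in the CLI, you can verify your skill is discoverable the same way `SkillsHub.list_all` finds skills (a directory is a skill iff it contains a `SKILL.md`). A self-contained sketch using a temp directory instead of `~/.hermes/skills/`:

```python
import os
import tempfile

# Stand-in skills root (the real one is ~/.hermes/skills/).
root = tempfile.mkdtemp()
skill_dir = os.path.join(root, "my-custom-task")
os.makedirs(skill_dir)
with open(os.path.join(skill_dir, "SKILL.md"), "w") as f:
    f.write("# my-custom-task\n\nA custom workflow for doing X.\n")

# Same discovery rule as list_all: keep only directories
# that contain a SKILL.md file.
found = sorted(
    name
    for name in os.listdir(root)
    if os.path.isdir(os.path.join(root, name))
    and os.path.exists(os.path.join(root, name, "SKILL.md"))
)
print(found)  # → ['my-custom-task']
```

If the skill does not show up in `/skills` output, this is the first invariant to check: the directory name and the presence of `SKILL.md`.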
6.8 Common Pitfalls  ❌ Don't Store Skills Only in System Prompt     # BAD: Skills in the system prompt (breaks prompt caching)\n   def build_system_prompt():\n       skills = load_all_skills()  # Every skill, every time!\n       return f\"{CORE_PROMPT}\\n\\nSkills:\\n{skills}\"\n   \n   # GOOD: Load as user messages on demand\n   def build_context(user_message, needed_skills=None):\n       messages = [{\"role\": \"system\", \"content\": CORE_PROMPT}]\n       \n       if needed_skills:\n           for skill in needed_skills:\n               content = load_skill(skill)\n               messages.append({\"role\": \"user\", \"content\": f\"SKILL: {content}\"})\n       \n       messages.append({\"role\": \"user\", \"content\": user_message})\n       return messages\n  ❌ Don't Create Skills for One-Off Tasks     # BAD: Too specific, won't be reused\n   skill_name = \"fix-this-specific-bug-in-project-x\"\n   \n   # GOOD: Generalize the pattern\n   skill_name = \"debug-terminal-tool-errors\"\n  ❌ Don't Let Skills Rot When They Break     # BAD: Leave skills stale as APIs change\n   def execute_skill(skill):\n       # Old code that no longer works\n       old_api_call()  # Returns error!\n   \n   # GOOD: Update skills proactively\n   def update_broken_skills():\n       for skill in list_all_skills():\n           if skill_has_deprecated_apis(skill):\n               suggest_update(skill)\n   ✅ Module 6 Checklist     Understand skill structure (SKILL.md format)    Explain how skills are loaded and executed    Trace automatic skill creation from experience    Use the \u002Fskills command to search\u002Finspect\u002Finstall    Understand skill self-improvement when skills break    Complete all four exercises    Next:    Module 07: Advanced Topics
",{"id":118,"path":119,"dir":120,"title":121,"description":9,"keywords":122,"body":133},"content:docs:07-advanced-topics:advanced-topics.md","\u002Fdocs\u002F07-advanced-topics\u002Fadvanced-topics","07-advanced-topics","Module 07: Advanced Topics",[24,123,124,125,126,127,128,129,130,131,132],"7.1 Subagent Delegation","7.2 Cron Scheduler","7.3 MCP (Model Context Protocol) Integration","7.4 Terminal Backends","7.5 Batch Trajectory Generation","7.6 RL Training Environments (Atropos)","7.7 Hands-On Exercise","7.8 Common Pitfalls","✅ Module 7 Checklist","🎉 Course Complete!","  Module 07: Advanced Topics  🎯 What You'll Learn   Subagent delegation for parallel workstreams  Cron scheduler for automated tasks  MCP (Model Context Protocol) integration  Terminal backends (local, Docker, SSH, Modal)  Batch trajectory generation for research  RL training environments (Atropos)   7.1 Subagent Delegation  Why Subagents?  
Subagents allow the agent to:    Parallelize work  — Run multiple independent tasks simultaneously   Isolate context  — Each subagent has its own clean environment   Reduce token usage  — Subagent results don't pollute parent's context   Handle complex workflows  — Break big tasks into smaller pieces  Delegate Tool (  tools\u002Fdelegate_tool.py )     def delegate_task(goal: str, context: str = None, \n                     toolsets: list = None) -> str:\n       \"\"\"\n       Spawn a subagent to work on a task.\n       \n       Args:\n           goal: What the subagent should accomplish\n           context: Background info the subagent needs\n           toolsets: Which tools to enable (defaults to parent's)\n       \n       Returns:\n           Summary of what the subagent accomplished\n       \"\"\"\n       # Create isolated agent instance\n       subagent = AIAgent(\n           model=parent.model,\n           enabled_toolsets=toolsets or parent.enabled_toolsets,\n           session_id=generate_session_id(),\n           platform=\"subagent\",\n       )\n       \n       # Build system prompt with context\n       system_prompt = f\"\"\"\n   You are an autonomous agent working on a delegated task.\n   \n   Goal: {goal}\n   \n   Context:\n   {context or 'No additional context provided.'}\n   \n   Work independently and report back when done.\n   \"\"\"\n       \n       # Run the subagent (blocks until completion)\n       result = subagent.chat(\n           message=\"Start working on your assigned task.\",\n           system_message=system_prompt,\n           max_iterations=50  # Limit iterations\n       )\n       \n       return f\"Subagent completed: {result}\"\n  Batch Delegation (Parallel Subagents)     def delegate_batch(tasks: list) -> list:\n       \"\"\"\n       Spawn multiple subagents in parallel.\n       \n       Args:\n           tasks: List of {goal, context, toolsets} dicts\n       \n       Returns:\n           List of results (one per task)\n       
\"\"\"\n       import concurrent.futures\n       \n       def run_task(task):\n           return delegate_task(\n               goal=task['goal'],\n               context=task.get('context'),\n               toolsets=task.get('toolsets')\n           )\n       \n       # Run all tasks in parallel\n       with concurrent.futures.ThreadPoolExecutor(max_workers=len(tasks)) as executor:\n           results = list(executor.map(run_task, tasks))\n       \n       return results\n  Usage Example     # In a conversation\n   \"I need to research three topics: Python, Rust, and Go. \n   Please delegate each to a separate subagent for parallel work.\"\n   \n   # Agent creates 3 subagents:\n   delegate_task(\n       goal=\"Research Python's strengths and use cases\",\n       context=\"User is comparing programming languages\"\n   )\n   delegate_task(\n       goal=\"Research Rust's strengths and use cases\",\n       context=\"User is comparing programming languages\"\n   )\n   delegate_task(\n       goal=\"Research Go's strengths and use cases\",\n       context=\"User is comparing programming languages\"\n   )\n   \n   # Wait for all to complete, then synthesize results\n   \"Based on the three subagent reports, create a comparison table.\"\n   7.2 Cron Scheduler  Overview  Hermes has a built-in cron scheduler for:   Daily\u002Fweekly automated reports  Periodic backups  Scheduled research tasks  Recurring reminders  Job Structure (  cron\u002Fjobs.py )     class CronJob:\n       \"\"\"\n       A scheduled task.\n       \n       Attributes:\n           job_id: Unique identifier\n           name: Human-readable name\n           schedule: Cron expression (e.g., \"0 9 * * *\")\n           prompt: Task to execute when triggered\n           skills: Skills to load before running\n           deliver: Where to send results (origin, telegram, local file)\n           model: Model override (optional)\n           repeat: Number of times to repeat (None = forever)\n           status: 
'active', 'paused', or 'completed'\n       \"\"\"\n       \n       def __init__(self, name: str, schedule: str, prompt: str,\n                    deliver: str = \"origin\", skills: list = None):\n           self.job_id = generate_job_id()\n           self.name = name\n           self.schedule = schedule  # Cron format\n           self.prompt = prompt\n           self.skills = skills or []\n           self.deliver = deliver\n           self.status = 'active'\n           self.last_run = None\n           self.next_run = self._calculate_next_run()\n  Creating a Cron Job     # Via CLI\n   @slash_command(\"cron\")\n   def cron_create(name: str, schedule: str, prompt: str,\n                   deliver: str = \"origin\", skills: list = None):\n       \"\"\"\n       Create a scheduled task.\n       \n       Usage:\n           \u002Fcron create \"Daily Report\" \"0 9 * * *\" \\\n               \"Summarize today's news from Hacker News\" \\\n               deliver=telegram\n       \"\"\"\n       job = CronJob(\n           name=name,\n           schedule=schedule,\n           prompt=prompt,\n           deliver=deliver,\n           skills=skills\n       )\n       \n       scheduler.add_job(job)\n       return f\"Job '{name}' scheduled for {schedule}. 
Next run: {job.next_run}\"\n  Cron Expression Format     # Standard cron format: \"minute hour day month weekday\"\n   schedules = {\n       \"every minute\": \"* * * * *\",\n       \"hourly\": \"0 * * * *\",\n       \"daily at 9am\": \"0 9 * * *\",\n       \"weekly on Monday\": \"0 0 * * 1\",\n       \"every 30 minutes\": \"*\u002F30 * * * *\",\n       \"first of month\": \"0 0 1 * *\",\n   }\n  Scheduler Loop (  cron\u002Fscheduler.py )     class Scheduler:\n       def __init__(self):\n           self.jobs = []  # List of CronJob objects\n           self.running = False\n       \n       def start(self):\n           \"\"\"Start the scheduler loop.\"\"\"\n           self.running = True\n           \n           while self.running:\n               now = datetime.now()\n               \n               for job in self.jobs:\n                   if job.status != 'active':\n                       continue\n                   \n                   if now >= job.next_run:\n                       # Trigger the job\n                       self._run_job(job)\n                       \n                       # Calculate next run time\n                       job.last_run = now\n                       job.next_run = self._calculate_next_run(job.schedule, now)\n               \n               # Check every minute\n               time.sleep(60)\n       \n       def _run_job(self, job: CronJob):\n           \"\"\"\n           Execute a scheduled job.\n           \n           Runs in isolated context with specified skills loaded.\n           \"\"\"\n           print(f\"Running scheduled job: {job.name}\")\n           \n           # Create agent with job's skills\n           agent = AIAgent(\n               model=job.model or default_model,\n               enabled_toolsets=get_enabled_toolsets(),\n               session_id=f\"cron:{job.job_id}:{timestamp()}\",\n           )\n           \n           # Load skills before running prompt\n           for skill_name in job.skills:\n           
    load_skill(skill_name)\n           \n           # Execute the job's prompt\n           result = agent.chat(job.prompt)\n           \n           # Deliver result\n           if job.deliver == \"origin\":\n               send_to_origin(result)  # Back to where cron was created\n           elif job.deliver == \"telegram\":\n               send_to_telegram(result)\n           elif job.deliver == \"local\":\n               save_to_file(f\"~\u002F.hermes\u002Fcron\u002F{job.job_id}.txt\", result)\n  Example Jobs     # ~\u002F.hermes\u002Fconfig.yaml (example cron jobs)\n   cron:\n     - name: \"Daily Hacker News Summary\"\n       schedule: \"0 9 * * *\"    # Daily at 9am\n       prompt: \"Fetch top 10 stories from HN and summarize each in one sentence\"\n       skills: [\"hn\"]\n       deliver: telegram\n\n     - name: \"Weekly Backup\"\n       schedule: \"0 0 * * 0\"    # Sunday midnight\n       prompt: \"Backup all ~\u002F.hermes\u002Fsessions to compressed archive\"\n       deliver: local\n\n     - name: \"Token Usage Report\"\n       schedule: \"0 12 * * *\"    # Daily at noon\n       prompt: \"Show token usage for all sessions this week\"\n       skills: [\"token-usage\"]\n       deliver: origin\n   7.3 MCP (Model Context Protocol) Integration  What is MCP?  
MCP is a standard protocol for connecting AI agents to external tools and data sources:   Filesystem access  Database queries  API integrations  Custom tool servers  MCP Client (  tools\u002Fmcp_tool.py )     class MCPClient:\n       \"\"\"\n       Built-in MCP client for connecting to MCP servers.\n       \n       Configured via ~\u002F.hermes\u002Fconfig.yaml\n       \"\"\"\n       \n       def __init__(self, config_path: str = \"~\u002F.hermes\u002Fconfig.yaml\"):\n           self.servers = self._load_servers(config_path)\n           self.clients = {}  # server_name -> MCP client connection\n       \n       def _load_servers(self, config_path: str) -> dict:\n           \"\"\"\n           Load MCP server configurations from YAML.\n           \n           Example config:\n               mcp:\n                 servers:\n                   filesystem:\n                     command: \"npx -y @modelcontextprotocol\u002Fserver-filesystem\"\n                     args: [\"\u002Fhome\u002Fbrian\"]\n                   github:\n                     command: \"npx -y @modelcontextprotocol\u002Fserver-github\"\n                     env:\n                       GITHUB_TOKEN: \"${GITHUB_TOKEN}\"\n           \"\"\"\n           # expanduser so the \"~\" default actually resolves\n           with open(os.path.expanduser(config_path)) as f:\n               config = yaml.safe_load(f)\n           \n           return config.get('mcp', {}).get('servers', {})\n       \n       def connect(self, server_name: str):\n           \"\"\"\n           Connect to an MCP server.\n           \n           Spawns the server process and establishes a stdio connection.\n           \"\"\"\n           if server_name not in self.servers:\n               raise ValueError(f\"Unknown MCP server: {server_name}\")\n           \n           config = self.servers[server_name]\n           \n           # Spawn server process; shlex.split handles commands given as a\n           # single string (e.g. \"npx -y @modelcontextprotocol\u002Fserver-filesystem\")\n           proc = subprocess.Popen(\n               shlex.split(config['command']) + config.get('args', []),\n               stdin=subprocess.PIPE,\n               
stdout=subprocess.PIPE,\n               stderr=subprocess.PIPE,\n               env={**os.environ, **config.get('env', {})},\n               text=True\n           )\n           \n           # Create client wrapper\n           client = MCPClientFor(proc)\n           client.connect()\n           \n           self.clients[server_name] = client\n       \n       def list_tools(self, server_name: str) -> list:\n           \"\"\"\n           List available tools from an MCP server.\n           \n           Returns a list of {name, description, parameters} dicts.\n           \"\"\"\n           if server_name not in self.clients:\n               self.connect(server_name)\n           \n           return self.clients[server_name].list_tools()\n       \n       def call_tool(self, server_name: str, tool_name: str,\n                     args: dict) -> str:\n           \"\"\"\n           Call a tool on an MCP server.\n           \n           Returns the result as a string (usually JSON).\n           \"\"\"\n           if server_name not in self.clients:\n               self.connect(server_name)\n           \n           result = self.clients[server_name].call_tool(tool_name, args)\n           return json.dumps({\"result\": result})\n  MCP Tool Wrapper     # tools\u002Fmcp_tool.py - Register as a regular tool\n   def mcp_call(server: str, tool: str, args_json: str) -> str:\n       \"\"\"\n       Call an MCP server tool.\n       \n       Usage: mcp_call(\"filesystem\", \"read_file\", '{\"path\": \"\u002Fhome\u002Fbrian\u002Ffile.txt\"}')\n       \"\"\"\n       client = MCPClient()\n       return client.call_tool(server, tool, json.loads(args_json))\n   \n   # Register with registry\n   registry.register(\n       name=\"mcp_call\",\n       toolset=\"mcp\",\n       schema={\n           \"name\": \"mcp_call\",\n           \"description\": \"Call a tool on an MCP server.\",\n           \"parameters\": {\n               \"type\": \"object\",\n               \"properties\": {\n                   
\"server\": {\"type\": \"string\", \"description\": \"MCP server name\"},\n                   \"tool\": {\"type\": \"string\", \"description\": \"Tool to call\"},\n                   \"args_json\": {\"type\": \"string\", \"description\": \"JSON args for the tool\"}\n               },\n               \"required\": [\"server\", \"tool\", \"args_json\"]\n           }\n       },\n       handler=lambda args, **kw: mcp_call(\n           server=args[\"server\"],\n           tool=args[\"tool\"],\n           args_json=args[\"args_json\"]\n       )\n   )\n  Using MCP in Conversations     # User asks:\n   \"Read the file \u002Fhome\u002Fbrian\u002Fproject\u002Fconfig.yaml and tell me what's in it\"\n   \n   # Agent uses MCP filesystem tool:\n   mcp_call(\n       server=\"filesystem\",\n       tool=\"read_file\",\n       args_json='{\"path\": \"\u002Fhome\u002Fbrian\u002Fproject\u002Fconfig.yaml\"}'\n   )\n   \n   # Returns file contents, agent summarizes for user\n   7.4 Terminal Backends  Overview  Hermes supports multiple terminal execution backends:     Backend  Description  Use Case     local  Run on current machine  Development, testing    docker  Run in Docker container  Isolation, reproducibility    ssh  Run on remote server via SSH  Remote development    daytona  Serverless containers  Persistent environments    modal  Serverless GPU compute  ML training, heavy workloads    singularity  HPC container platform  Research clusters  Backend Selection     # ~\u002F.hermes\u002Fconfig.yaml\n   terminal:\n     backend: \"local\"  # or \"docker\", \"ssh\", \"daytona\", \"modal\"\n     \n     # Backend-specific config\n     ssh:\n       host: \"myserver.example.com\"\n       user: \"brian\"\n       port: 22\n     \n     docker:\n       image: \"python:3.11-slim\"\n       volumes:\n         - \"\u002Fhome\u002Fbrian\u002Fproject:\u002Fworkspace\"\n     \n     modal:\n       gpu: \"T4\"  # or \"A10G\", \"A100\"\n  Docker Backend Example     # 
### Docker Backend Example

```python
# tools/environments/docker.py
import subprocess


class DockerBackend:
    def __init__(self, image: str = "python:3.11-slim",
                 volumes: dict = None):
        self.image = image
        self.volumes = volumes or {}

    def execute(self, command: str, workdir: str = None) -> dict:
        """
        Run a command in a Docker container.
        """
        # Build volume mounts
        volume_args = []
        for host_path, container_path in self.volumes.items():
            volume_args.extend(["-v", f"{host_path}:{container_path}"])

        # Build working-directory args as separate list elements;
        # a single "-w /path" string would reach docker as one argument.
        workdir_args = ["-w", workdir] if workdir else []

        # Run the command in an ephemeral container
        cmd = [
            "docker", "run", "--rm",
            *volume_args,
            *workdir_args,
            self.image,
            "bash", "-c", command
        ]

        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=300
        )

        return {
            "output": result.stdout,
            "error": result.stderr,
            "exit_code": result.returncode
        }
```

## 7.5 Batch Trajectory Generation

### Purpose

For research and training-data generation:

- Generate many agent trajectories for analysis
- Create training data for fine-tuning
- Benchmark performance across tasks

### Batch Runner (`batch_runner.py`)

```python
import json
import os


class BatchRunner:
    """
    Run multiple agent tasks in parallel.

    Used for research, benchmarking, and dataset generation.
    """

    def __init__(self, num_workers: int = 4):
        self.num_workers = num_workers

    def run_tasks(self, tasks: list) -> list:
        """
        Execute multiple tasks in parallel.

        Args:
            tasks: List of {prompt, expected_output, metadata} dicts

        Returns:
            List of {task_id, trajectory, success, metrics} dicts
        """
        import concurrent.futures

        def run_task(task):
            agent = AIAgent(
                model="anthropic/claude-opus-4.6",
                max_iterations=50,
                save_trajectories=True,
            )

            result = agent.chat(task['prompt'])
            trajectory = agent.get_trajectory()

            return {
                'task_id': task.get('id'),
                'trajectory': trajectory,
                'success': self._evaluate(result, task.get('expected_output')),
                'metrics': self._compute_metrics(trajectory)
            }

        with concurrent.futures.ThreadPoolExecutor(
            max_workers=self.num_workers
        ) as executor:
            results = list(executor.map(run_task, tasks))

        return results

    def save_results(self, results: list, output_path: str):
        """
        Save batch results to a file for analysis.
        """
        # expanduser so paths like "~/trajectories.jsonl" resolve
        with open(os.path.expanduser(output_path), 'w') as f:
            for result in results:
                json.dump(result, f)
                f.write('\n')  # JSON Lines format
```

### Usage Example

```python
# Generate training data
tasks = [
    {"id": "task_1", "prompt": "Write a Python function to sort a list"},
    {"id": "task_2", "prompt": "Debug this error: ..."},
    # ...
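    # (Tasks can also be loaded from a JSONL file, one {"id": ..., "prompt": ...}
    # object per line — the same JSON Lines format that save_results writes.)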
    # more tasks
]

runner = BatchRunner(num_workers=8)
results = runner.run_tasks(tasks)
runner.save_results(results, "~/trajectories.jsonl")
```

## 7.6 RL Training Environments (Atropos)

### Overview

Hermes includes RL training environments for fine-tuning agents:

- **Atropos**: custom environment for tool-use RL
- Compatible with TRL, Axolotl, and Unsloth for fine-tuning

### Environment Structure (`environments/`)

```python
# environments/atropos/env.py
import random

import gym


class AtroposEnv(gym.Env):
    """
    RL environment for training tool-using agents.

    Observation: current task + conversation history
    Action: tool call or final answer
    Reward: task success + intermediate steps
    """

    def __init__(self, tasks: list):
        super().__init__()
        self.tasks = tasks
        self.current_task = None
        self.history = []

    def reset(self):
        """Start a new task."""
        self.current_task = random.choice(self.tasks)
        self.history = [{"role": "user", "content": self.current_task['prompt']}]
        return self._get_observation()

    def step(self, action: dict) -> tuple:
        """
        Execute an action (tool call or answer).

        Returns: (observation, reward, done, info)
        """
        if action['type'] == 'tool_call':
            # Execute the tool
            result = execute_tool(action['name'], action['args'])
            self.history.append({
                "role": "assistant",
                "content": f"Tool call: {action['name']}"
            })
            self.history.append({
                "role": "tool",
                "content": result
            })

            return (
                self._get_observation(),
                0.1,  # Small reward for taking action
                False,
                {}
            )

        elif action['type'] == 'answer':
            # Final answer
            success = self._evaluate(action['content'])
            reward = 1.0 if success else -1.0

            return (
                self._get_observation(),
                reward,
                True,  # Episode done
                {"success": success}
            )

        # Fail loudly instead of silently returning None
        raise ValueError(f"Unknown action type: {action['type']}")

    def _get_observation(self) -> str:
        """
        Build the observation for the agent.

        Returns the conversation history as a string.
        """
        return "\n".join(
            f"{msg['role']}: {msg['content']}"
            for msg in self.history
        )
```

### Training with TRL

```python
# Train an agent using the Atropos environment
from trl import PPOConfig, PPOTrainer
from environments.atropos.env import AtroposEnv

# Create the environment
env = AtroposEnv(tasks=load_tasks("~/tasks.jsonl"))

# Configure training
config = PPOConfig(
    model_name="anthropic/claude-3.5-sonnet",
    learning_rate=1e-5,
    batch_size=32,
)

# Train
trainer = PPOTrainer(config=config, env=env)
trainer.train(num_steps=10000)
```

## 7.7 Hands-On Exercises

### Exercise 1: Test Subagent Delegation

```shell
# Start the Hermes CLI
hermes

# Ask for parallel work
"Research three AI frameworks (PyTorch, JAX, TensorFlow) in parallel using subagents.
Create a comparison table of their strengths."

# Observe:
# - How many subagents are spawned?
# - How long does it take vs. sequential?
# - What's the final output like?
```

### Exercise 2: Create a Cron Job

```shell
# In CLI or Telegram
/cron create "Daily News Summary" "0 9 * * *" \
    "Fetch top stories from Hacker News and summarize each in one sentence" \
    deliver=telegram
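# The five schedule fields are minute, hour, day-of-month, month, day-of-week,
# so "0 9 * * *" fires at 09:00 every day.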

# Check that it was created
/cron list

# Pause it if you don't want it running
/cron pause "Daily News Summary"
```

### Exercise 3: Configure an MCP Server

```shell
# Add the filesystem MCP server to the config
cat >> ~/.hermes/config.yaml << 'EOF'
mcp:
  servers:
    filesystem:
      command: "npx -y @modelcontextprotocol/server-filesystem"
      args: ["/home/brian"]
EOF

# Reload MCP servers
/reload-mcp

# Test it
hermes
"Read the file /home/brian/git/hermes-agent/README.md and summarize it"
```

### Exercise 4: Run a Batch Task

```shell
# Create a simple batch script
cat > ~/batch_test.py << 'EOF'
import os
import sys

# expanduser so the "~" in the repo path resolves correctly
sys.path.insert(0, os.path.expanduser('~/git/nous-hermes-agent'))

from batch_runner import BatchRunner

# Define tasks
tasks = [
    {"id": "math_1", "prompt": "What is 234 * 567?"},
    {"id": "code_1", "prompt": "Write a Python function to reverse a string"},
    {"id": "text_1", "prompt": "Summarize: The quick brown fox jumps over the lazy dog"}
]

# Run the batch
runner = BatchRunner(num_workers=2)
results = runner.run_tasks(tasks)

for r in results:
    print(f"\nTask {r['task_id']}:")
    print(f"  Success: {r['success']}")
    print(f"  Trajectory length: {len(r['trajectory'])} turns")
EOF

# Run it
python ~/batch_test.py
```

## 7.8 Common Pitfalls

### ❌ Don't Overuse Subagents

```python
# BAD: Too much overhead for simple tasks
"What's 2+2?"  # No need to delegate!

# GOOD: Use subagents for complex, independent work
"Research these 5 topics and create a comparison report"
```

### ❌ Don't Forget Cron Job Cleanup

```python
# BAD: Let jobs accumulate forever
cron.create("Daily Task", "0 * * * *", prompt)  # Never removed!

# GOOD: Set a repeat limit or manually clean up
cron.create("One-time Report", schedule, prompt, repeat=1)
cron.remove("Old Job")
```

### ❌ Don't Ignore MCP Server Errors

```python
# BAD: Assume MCP servers always work
result = mcp_call("filesystem", "read_file", args)  # Might fail!

# GOOD: Handle errors gracefully
try:
    result = mcp_call("filesystem", "read_file", args)
except Exception as e:
    log_error(f"MCP call failed: {e}")
    fallback_to_alternative()
```

### ✅ Module 7 Checklist

- [ ] Understand subagent delegation and parallelization
- [ ] Create and manage cron jobs
- [ ] Configure and use MCP servers
- [ ] Explain the different terminal backends
- [ ] Run batch trajectory generation
- [ ] Understand RL training environments (Atropos)
- [ ] Complete all four exercises

## 🎉 Course Complete!

Congratulations! You've completed the full Hermes Agent architecture course.

### What You've Learned

1. **Overview & Mental Model** — Big-picture architecture and data flow
2. **Core Agent Loop** — How conversations progress turn by turn
3. **Tools System** — Tool registration, discovery, and execution
4. **Gateway & Platforms** — Multi-platform messaging support
5. **Sessions & Memory** — Persistence, search, and context management
6. **Skills System** — Procedural memory and self-improvement
7. **Advanced Topics** — Subagents, cron, MCP, backends, research tools

### Next Steps

- **Explore the codebase**: Dive deeper into the modules that interest you
- **Contribute**: Fix bugs, add features, improve documentation
- **Build skills**: Create custom skills for your workflows
- **Experiment**: Try different models, toolsets, and configurations
- **Join the community**: Discord, GitHub issues, discussions

### Resources

- **Documentation**: https://hermes-agent.nousresearch.com/docs/
- **GitHub**: https://github.com/NousResearch/hermes-agent
- **Discord**: https://discord.gg/NousResearch

*Course created: March 24, 2026*
*Based on Hermes Agent from
NousResearch/hermes-agent*