<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <title>GitHub All Languages Monthly Trending</title>
    <description>Monthly Trending of All Languages in GitHub</description>
    <pubDate>Sat, 18 Apr 2026 01:48:31 GMT</pubDate>
    <link>http://mshibanami.github.io/GitHubTrendingRSS</link>
    
    <item>
      <title>NousResearch/hermes-agent</title>
      <link>https://github.com/NousResearch/hermes-agent</link>
      <description>&lt;p&gt;The agent that grows with you&lt;/p&gt;&lt;hr&gt;&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/NousResearch/hermes-agent/main/assets/banner.png&quot; alt=&quot;Hermes Agent&quot; width=&quot;100%&quot; /&gt; &lt;/p&gt; 
&lt;h1&gt;Hermes Agent ☤&lt;/h1&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Docs-hermes--agent.nousresearch.com-FFD700?style=for-the-badge&quot; alt=&quot;Documentation&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://discord.gg/NousResearch&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Discord-5865F2?style=for-the-badge&amp;amp;logo=discord&amp;amp;logoColor=white&quot; alt=&quot;Discord&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/NousResearch/hermes-agent/raw/main/LICENSE&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/License-MIT-green?style=for-the-badge&quot; alt=&quot;License: MIT&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://nousresearch.com&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Built%20by-Nous%20Research-blueviolet?style=for-the-badge&quot; alt=&quot;Built by Nous Research&quot; /&gt;&lt;/a&gt; &lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;The self-improving AI agent built by &lt;a href=&quot;https://nousresearch.com&quot;&gt;Nous Research&lt;/a&gt;.&lt;/strong&gt; It&#39;s the only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions. Run it on a $5 VPS, a GPU cluster, or serverless infrastructure that costs nearly nothing when idle. It&#39;s not tied to your laptop — talk to it from Telegram while it works on a cloud VM.&lt;/p&gt; 
&lt;p&gt;Use any model you want — &lt;a href=&quot;https://portal.nousresearch.com&quot;&gt;Nous Portal&lt;/a&gt;, &lt;a href=&quot;https://openrouter.ai&quot;&gt;OpenRouter&lt;/a&gt; (200+ models), &lt;a href=&quot;https://build.nvidia.com&quot;&gt;NVIDIA NIM&lt;/a&gt; (Nemotron), &lt;a href=&quot;https://platform.xiaomimimo.com&quot;&gt;Xiaomi MiMo&lt;/a&gt;, &lt;a href=&quot;https://z.ai&quot;&gt;z.ai/GLM&lt;/a&gt;, &lt;a href=&quot;https://platform.moonshot.ai&quot;&gt;Kimi/Moonshot&lt;/a&gt;, &lt;a href=&quot;https://www.minimax.io&quot;&gt;MiniMax&lt;/a&gt;, &lt;a href=&quot;https://huggingface.co&quot;&gt;Hugging Face&lt;/a&gt;, OpenAI, or your own endpoint. Switch with &lt;code&gt;hermes model&lt;/code&gt; — no code changes, no lock-in.&lt;/p&gt; 
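&lt;p&gt;For example, a minimal sketch of switching models (the &lt;code&gt;provider:model&lt;/code&gt; value here is a placeholder, not a real identifier):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;hermes model              # interactive picker for provider and model
# or, from inside a conversation, use the shared slash command:
#   /model provider:model
&lt;/code&gt;&lt;/pre&gt; 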
&lt;table&gt; 
 &lt;tbody&gt;
  &lt;tr&gt;
   &lt;td&gt;&lt;b&gt;A real terminal interface&lt;/b&gt;&lt;/td&gt;
   &lt;td&gt;Full TUI with multiline editing, slash-command autocomplete, conversation history, interrupt-and-redirect, and streaming tool output.&lt;/td&gt;
  &lt;/tr&gt; 
  &lt;tr&gt;
   &lt;td&gt;&lt;b&gt;Lives where you do&lt;/b&gt;&lt;/td&gt;
   &lt;td&gt;Telegram, Discord, Slack, WhatsApp, Signal, and CLI — all from a single gateway process. Voice memo transcription, cross-platform conversation continuity.&lt;/td&gt;
  &lt;/tr&gt; 
  &lt;tr&gt;
   &lt;td&gt;&lt;b&gt;A closed learning loop&lt;/b&gt;&lt;/td&gt;
   &lt;td&gt;Agent-curated memory with periodic nudges. Autonomous skill creation after complex tasks. Skills self-improve during use. FTS5 session search with LLM summarization for cross-session recall. &lt;a href=&quot;https://github.com/plastic-labs/honcho&quot;&gt;Honcho&lt;/a&gt; dialectic user modeling. Compatible with the &lt;a href=&quot;https://agentskills.io&quot;&gt;agentskills.io&lt;/a&gt; open standard.&lt;/td&gt;
  &lt;/tr&gt; 
  &lt;tr&gt;
   &lt;td&gt;&lt;b&gt;Scheduled automations&lt;/b&gt;&lt;/td&gt;
   &lt;td&gt;Built-in cron scheduler with delivery to any platform. Daily reports, nightly backups, weekly audits — all in natural language, running unattended.&lt;/td&gt;
  &lt;/tr&gt; 
  &lt;tr&gt;
   &lt;td&gt;&lt;b&gt;Delegates and parallelizes&lt;/b&gt;&lt;/td&gt;
   &lt;td&gt;Spawn isolated subagents for parallel workstreams. Write Python scripts that call tools via RPC, collapsing multi-step pipelines into zero-context-cost turns.&lt;/td&gt;
  &lt;/tr&gt; 
  &lt;tr&gt;
   &lt;td&gt;&lt;b&gt;Runs anywhere, not just your laptop&lt;/b&gt;&lt;/td&gt;
   &lt;td&gt;Six terminal backends — local, Docker, SSH, Daytona, Singularity, and Modal. Daytona and Modal offer serverless persistence — your agent&#39;s environment hibernates when idle and wakes on demand, costing nearly nothing between sessions. Run it on a $5 VPS or a GPU cluster.&lt;/td&gt;
  &lt;/tr&gt; 
  &lt;tr&gt;
   &lt;td&gt;&lt;b&gt;Research-ready&lt;/b&gt;&lt;/td&gt;
   &lt;td&gt;Batch trajectory generation, Atropos RL environments, trajectory compression for training the next generation of tool-calling models.&lt;/td&gt;
  &lt;/tr&gt; 
 &lt;/tbody&gt;
&lt;/table&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Quick Install&lt;/h2&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Works on Linux, macOS, WSL2, and Android via Termux. The installer handles the platform-specific setup for you.&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Android / Termux:&lt;/strong&gt; The tested manual path is documented in the &lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/getting-started/termux&quot;&gt;Termux guide&lt;/a&gt;. On Termux, Hermes installs a curated &lt;code&gt;.[termux]&lt;/code&gt; extra because the full &lt;code&gt;.[all]&lt;/code&gt; extra currently pulls Android-incompatible voice dependencies.&lt;/p&gt; 
 &lt;p&gt;&lt;strong&gt;Windows:&lt;/strong&gt; Native Windows is not supported. Please install &lt;a href=&quot;https://learn.microsoft.com/en-us/windows/wsl/install&quot;&gt;WSL2&lt;/a&gt; and run the command above.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;p&gt;After installation:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;source ~/.bashrc    # reload shell (or: source ~/.zshrc)
hermes              # start chatting!
&lt;/code&gt;&lt;/pre&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Getting Started&lt;/h2&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;hermes              # Interactive CLI — start a conversation
hermes model        # Choose your LLM provider and model
hermes tools        # Configure which tools are enabled
hermes config set   # Set individual config values
hermes gateway      # Start the messaging gateway (Telegram, Discord, etc.)
hermes setup        # Run the full setup wizard (configures everything at once)
hermes claw migrate # Migrate your settings, memories, and skills from OpenClaw
hermes update       # Update to the latest version
hermes doctor       # Diagnose any issues
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;📖 &lt;strong&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/&quot;&gt;Full documentation →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;CLI vs Messaging Quick Reference&lt;/h2&gt; 
&lt;p&gt;Hermes has two entry points: start the terminal UI with &lt;code&gt;hermes&lt;/code&gt;, or run the gateway and talk to it from Telegram, Discord, Slack, WhatsApp, Signal, or Email. Once you&#39;re in a conversation, many slash commands are shared across both interfaces.&lt;/p&gt; 
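&lt;p&gt;A minimal end-to-end sketch of both entry points (commands taken from this README; platform-specific setup prompts will vary):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Entry point 1: terminal UI
hermes

# Entry point 2: messaging gateway (e.g. Telegram)
hermes gateway setup    # configure platform credentials
hermes gateway start    # run the gateway, then message the bot from your platform
&lt;/code&gt;&lt;/pre&gt; 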
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Action&lt;/th&gt; 
   &lt;th&gt;CLI&lt;/th&gt; 
   &lt;th&gt;Messaging platforms&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Start chatting&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;hermes&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Run &lt;code&gt;hermes gateway setup&lt;/code&gt; + &lt;code&gt;hermes gateway start&lt;/code&gt;, then send the bot a message&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Start fresh conversation&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/new&lt;/code&gt; or &lt;code&gt;/reset&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/new&lt;/code&gt; or &lt;code&gt;/reset&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Change model&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/model [provider:model]&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/model [provider:model]&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Set a personality&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/personality [name]&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/personality [name]&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Retry or undo the last turn&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/retry&lt;/code&gt;, &lt;code&gt;/undo&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/retry&lt;/code&gt;, &lt;code&gt;/undo&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Compress context / check usage&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/compress&lt;/code&gt;, &lt;code&gt;/usage&lt;/code&gt;, &lt;code&gt;/insights [--days N]&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/compress&lt;/code&gt;, &lt;code&gt;/usage&lt;/code&gt;, &lt;code&gt;/insights [days]&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Browse skills&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/skills&lt;/code&gt; or &lt;code&gt;/&amp;lt;skill-name&amp;gt;&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/skills&lt;/code&gt; or &lt;code&gt;/&amp;lt;skill-name&amp;gt;&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Interrupt current work&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;Ctrl+C&lt;/code&gt; or send a new message&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/stop&lt;/code&gt; or send a new message&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Platform-specific status&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/platforms&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/status&lt;/code&gt;, &lt;code&gt;/sethome&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;For the full command lists, see the &lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/cli&quot;&gt;CLI guide&lt;/a&gt; and the &lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/messaging&quot;&gt;Messaging Gateway guide&lt;/a&gt;.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Documentation&lt;/h2&gt; 
&lt;p&gt;All documentation lives at &lt;strong&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/&quot;&gt;hermes-agent.nousresearch.com/docs&lt;/a&gt;&lt;/strong&gt;:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Section&lt;/th&gt; 
   &lt;th&gt;What&#39;s Covered&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/getting-started/quickstart&quot;&gt;Quickstart&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Install → setup → first conversation in 2 minutes&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/cli&quot;&gt;CLI Usage&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Commands, keybindings, personalities, sessions&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/configuration&quot;&gt;Configuration&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Config file, providers, models, all options&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/messaging&quot;&gt;Messaging Gateway&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Telegram, Discord, Slack, WhatsApp, Signal, Home Assistant&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/security&quot;&gt;Security&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Command approval, DM pairing, container isolation&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/features/tools&quot;&gt;Tools &amp;amp; Toolsets&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;40+ tools, toolset system, terminal backends&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/features/skills&quot;&gt;Skills System&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Procedural memory, Skills Hub, creating skills&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/features/memory&quot;&gt;Memory&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Persistent memory, user profiles, best practices&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/features/mcp&quot;&gt;MCP Integration&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Connect any MCP server for extended capabilities&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/features/cron&quot;&gt;Cron Scheduling&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Scheduled tasks with platform delivery&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/user-guide/features/context-files&quot;&gt;Context Files&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Project context that shapes every conversation&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/developer-guide/architecture&quot;&gt;Architecture&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Project structure, agent loop, key classes&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/developer-guide/contributing&quot;&gt;Contributing&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Development setup, PR process, code style&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/reference/cli-commands&quot;&gt;CLI Reference&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;All commands and flags&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/reference/environment-variables&quot;&gt;Environment Variables&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Complete env var reference&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Migrating from OpenClaw&lt;/h2&gt; 
&lt;p&gt;If you&#39;re coming from OpenClaw, Hermes can automatically import your settings, memories, skills, and API keys.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;During first-time setup:&lt;/strong&gt; The setup wizard (&lt;code&gt;hermes setup&lt;/code&gt;) automatically detects &lt;code&gt;~/.openclaw&lt;/code&gt; and offers to migrate before configuration begins.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Anytime after install:&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;hermes claw migrate              # Interactive migration (full preset)
hermes claw migrate --dry-run    # Preview what would be migrated
hermes claw migrate --preset user-data   # Migrate without secrets
hermes claw migrate --overwrite  # Overwrite existing conflicts
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;What gets imported:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;code&gt;SOUL.md&lt;/code&gt;&lt;/strong&gt; — persona file&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Memories&lt;/strong&gt; — &lt;code&gt;MEMORY.md&lt;/code&gt; and &lt;code&gt;USER.md&lt;/code&gt; entries&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Skills&lt;/strong&gt; — user-created skills → &lt;code&gt;~/.hermes/skills/openclaw-imports/&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Command allowlist&lt;/strong&gt; — approval patterns&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Messaging settings&lt;/strong&gt; — platform configs, allowed users, working directory&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;API keys&lt;/strong&gt; — allowlisted secrets (Telegram, OpenRouter, OpenAI, Anthropic, ElevenLabs)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;TTS assets&lt;/strong&gt; — workspace audio files&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Workspace instructions&lt;/strong&gt; — &lt;code&gt;AGENTS.md&lt;/code&gt; (with &lt;code&gt;--workspace-target&lt;/code&gt;)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;See &lt;code&gt;hermes claw migrate --help&lt;/code&gt; for all options, or use the &lt;code&gt;openclaw-migration&lt;/code&gt; skill for an interactive agent-guided migration with dry-run previews.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;We welcome contributions! See the &lt;a href=&quot;https://hermes-agent.nousresearch.com/docs/developer-guide/contributing&quot;&gt;Contributing Guide&lt;/a&gt; for development setup, code style, and PR process.&lt;/p&gt; 
&lt;p&gt;Quick start for contributors — clone and go with &lt;code&gt;setup-hermes.sh&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
./setup-hermes.sh     # installs uv, creates venv, installs .[all], symlinks ~/.local/bin/hermes
./hermes              # auto-detects the venv, no need to `source` first
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Manual path (equivalent to the above):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv venv --python 3.11
source venv/bin/activate
uv pip install -e &quot;.[all,dev]&quot;
python -m pytest tests/ -q
&lt;/code&gt;&lt;/pre&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;RL Training (optional):&lt;/strong&gt; To work on the RL/Tinker-Atropos integration:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git submodule update --init tinker-atropos
uv pip install -e &quot;./tinker-atropos&quot;
&lt;/code&gt;&lt;/pre&gt; 
&lt;/blockquote&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Community&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;💬 &lt;a href=&quot;https://discord.gg/NousResearch&quot;&gt;Discord&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;📚 &lt;a href=&quot;https://agentskills.io&quot;&gt;Skills Hub&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;🐛 &lt;a href=&quot;https://github.com/NousResearch/hermes-agent/issues&quot;&gt;Issues&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;💡 &lt;a href=&quot;https://github.com/NousResearch/hermes-agent/discussions&quot;&gt;Discussions&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;🔌 &lt;a href=&quot;https://github.com/AaronWong1999/hermesclaw&quot;&gt;HermesClaw&lt;/a&gt; — Community WeChat bridge: Run Hermes Agent and OpenClaw on the same WeChat account.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;h2&gt;License&lt;/h2&gt; 
&lt;p&gt;MIT — see &lt;a href=&quot;https://raw.githubusercontent.com/NousResearch/hermes-agent/main/LICENSE&quot;&gt;LICENSE&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;Built by &lt;a href=&quot;https://nousresearch.com&quot;&gt;Nous Research&lt;/a&gt;.&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/f53606a4bff2ed01ddc60e285b5cb62fee8c77ff76b3ee5d184ca1ede0ddda55/NousResearch/hermes-agent" medium="image" />
      
    </item>
    
    <item>
      <title>forrestchang/andrej-karpathy-skills</title>
      <link>https://github.com/forrestchang/andrej-karpathy-skills</link>
      <description>&lt;p&gt;A single CLAUDE.md file to improve Claude Code behavior, derived from Andrej Karpathy&#39;s observations on LLM coding pitfalls.&lt;/p&gt;&lt;hr&gt;&lt;h1&gt;Karpathy-Inspired Claude Code Guidelines&lt;/h1&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;Check out my new project &lt;a href=&quot;https://github.com/multica-ai/multica&quot;&gt;Multica&lt;/a&gt; — an open-source platform for running and managing coding agents with reusable skills.&lt;/p&gt; 
 &lt;p&gt;Follow me on X: &lt;a href=&quot;https://x.com/jiayuan_jy&quot;&gt;https://x.com/jiayuan_jy&lt;/a&gt;&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;p&gt;A single &lt;code&gt;CLAUDE.md&lt;/code&gt; file to improve Claude Code behavior, derived from &lt;a href=&quot;https://x.com/karpathy/status/2015883857489522876&quot;&gt;Andrej Karpathy&#39;s observations&lt;/a&gt; on LLM coding pitfalls.&lt;/p&gt; 
&lt;h2&gt;The Problems&lt;/h2&gt; 
&lt;p&gt;From Andrej&#39;s post:&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&quot;The models make wrong assumptions on your behalf and just run along with them without checking. They don&#39;t manage their confusion, don&#39;t seek clarifications, don&#39;t surface inconsistencies, don&#39;t present tradeoffs, don&#39;t push back when they should.&quot;&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&quot;They really like to overcomplicate code and APIs, bloat abstractions, don&#39;t clean up dead code... implement a bloated construction over 1000 lines when 100 would do.&quot;&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&quot;They still sometimes change/remove comments and code they don&#39;t sufficiently understand as side effects, even if orthogonal to the task.&quot;&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h2&gt;The Solution&lt;/h2&gt; 
&lt;p&gt;Four principles in one file that directly address these issues:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Principle&lt;/th&gt; 
   &lt;th&gt;Addresses&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Think Before Coding&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Wrong assumptions, hidden confusion, missing tradeoffs&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Simplicity First&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Overcomplication, bloated abstractions&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Surgical Changes&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Orthogonal edits, touching code you shouldn&#39;t&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Goal-Driven Execution&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Leverage through tests-first, verifiable success criteria&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;The Four Principles in Detail&lt;/h2&gt; 
&lt;h3&gt;1. Think Before Coding&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Don&#39;t assume. Don&#39;t hide confusion. Surface tradeoffs.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;LLMs often pick an interpretation silently and run with it. This principle forces explicit reasoning:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;State assumptions explicitly&lt;/strong&gt; — If uncertain, ask rather than guess&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Present multiple interpretations&lt;/strong&gt; — Don&#39;t pick silently when ambiguity exists&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Push back when warranted&lt;/strong&gt; — If a simpler approach exists, say so&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Stop when confused&lt;/strong&gt; — Name what&#39;s unclear and ask for clarification&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;2. Simplicity First&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Minimum code that solves the problem. Nothing speculative.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Combat the tendency toward overengineering:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;No features beyond what was asked&lt;/li&gt; 
 &lt;li&gt;No abstractions for single-use code&lt;/li&gt; 
 &lt;li&gt;No &quot;flexibility&quot; or &quot;configurability&quot; that wasn&#39;t requested&lt;/li&gt; 
 &lt;li&gt;No error handling for impossible scenarios&lt;/li&gt; 
 &lt;li&gt;If 200 lines could be 50, rewrite it&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;The test:&lt;/strong&gt; Would a senior engineer say this is overcomplicated? If yes, simplify.&lt;/p&gt; 
&lt;h3&gt;3. Surgical Changes&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Touch only what you must. Clean up only your own mess.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;When editing existing code:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Don&#39;t &quot;improve&quot; adjacent code, comments, or formatting&lt;/li&gt; 
 &lt;li&gt;Don&#39;t refactor things that aren&#39;t broken&lt;/li&gt; 
 &lt;li&gt;Match existing style, even if you&#39;d do it differently&lt;/li&gt; 
 &lt;li&gt;If you notice unrelated dead code, mention it — don&#39;t delete it&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;When your changes create orphans:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Remove imports/variables/functions that YOUR changes made unused&lt;/li&gt; 
 &lt;li&gt;Don&#39;t remove pre-existing dead code unless asked&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;The test:&lt;/strong&gt; Every changed line should trace directly to the user&#39;s request.&lt;/p&gt; 
&lt;h3&gt;4. Goal-Driven Execution&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Define success criteria. Loop until verified.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Transform imperative tasks into verifiable goals:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Instead of...&lt;/th&gt; 
   &lt;th&gt;Transform to...&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&quot;Add validation&quot;&lt;/td&gt; 
   &lt;td&gt;&quot;Write tests for invalid inputs, then make them pass&quot;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&quot;Fix the bug&quot;&lt;/td&gt; 
   &lt;td&gt;&quot;Write a test that reproduces it, then make it pass&quot;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&quot;Refactor X&quot;&lt;/td&gt; 
   &lt;td&gt;&quot;Ensure tests pass before and after&quot;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;For multi-step tasks, state a brief plan:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;1. [Step] → verify: [check]
2. [Step] → verify: [check]
3. [Step] → verify: [check]
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Strong success criteria let the LLM loop independently. Weak criteria (&quot;make it work&quot;) require constant clarification.&lt;/p&gt; 
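&lt;p&gt;An illustrative (hypothetical) example of turning a weak criterion into strong, verifiable ones using the plan format above:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;Weak:   &quot;Make the date parser work.&quot;
Strong: 1. Add failing tests for ISO-8601 and epoch inputs → verify: tests fail for the right reason
        2. Implement parsing → verify: new tests pass
        3. Run the full suite → verify: no regressions
&lt;/code&gt;&lt;/pre&gt; 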
&lt;h2&gt;Install&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;Option A: Claude Code Plugin (recommended)&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;From within Claude Code, first add the marketplace:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;/plugin marketplace add forrestchang/andrej-karpathy-skills
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Then install the plugin:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;/plugin install andrej-karpathy-skills@karpathy-skills
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This installs the guidelines as a Claude Code plugin, making the skill available across all your projects.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Option B: &lt;code&gt;CLAUDE.md&lt;/code&gt; (per-project)&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;New project:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -o CLAUDE.md https://raw.githubusercontent.com/forrestchang/andrej-karpathy-skills/main/CLAUDE.md
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Existing project (append):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;echo &quot;&quot; &amp;gt;&amp;gt; CLAUDE.md
curl https://raw.githubusercontent.com/forrestchang/andrej-karpathy-skills/main/CLAUDE.md &amp;gt;&amp;gt; CLAUDE.md
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Key Insight&lt;/h2&gt; 
&lt;p&gt;From Andrej:&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&quot;LLMs are exceptionally good at looping until they meet specific goals... Don&#39;t tell it what to do, give it success criteria and watch it go.&quot;&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;p&gt;The &quot;Goal-Driven Execution&quot; principle captures this: transform imperative instructions into declarative goals with verification loops.&lt;/p&gt; 
&lt;h2&gt;How to Know It&#39;s Working&lt;/h2&gt; 
&lt;p&gt;These guidelines are working if you see:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Fewer unnecessary changes in diffs&lt;/strong&gt; — Only requested changes appear&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Fewer rewrites due to overcomplication&lt;/strong&gt; — Code is simple the first time&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Clarifying questions come before implementation&lt;/strong&gt; — Not after mistakes&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Clean, minimal PRs&lt;/strong&gt; — No drive-by refactoring or &quot;improvements&quot;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Customization&lt;/h2&gt; 
&lt;p&gt;These guidelines are designed to be merged with project-specific instructions. Add them to your existing &lt;code&gt;CLAUDE.md&lt;/code&gt; or create a new one.&lt;/p&gt; 
&lt;p&gt;For project-specific rules, add sections like:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;## Project-Specific Guidelines

- Use TypeScript strict mode
- All API endpoints must have tests
- Follow the existing error handling patterns in `src/utils/errors.ts`
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Tradeoff Note&lt;/h2&gt; 
&lt;p&gt;These guidelines bias toward &lt;strong&gt;caution over speed&lt;/strong&gt;. For trivial tasks (simple typo fixes, obvious one-liners), use judgment — not every change needs the full rigor.&lt;/p&gt; 
&lt;p&gt;The goal is reducing costly mistakes on non-trivial work, not slowing down simple tasks.&lt;/p&gt; 
&lt;h2&gt;License&lt;/h2&gt; 
&lt;p&gt;MIT&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/715e21e00d7f1d6e19dcb546d5d281898c7c6820566f611a9edee1f492346c37/forrestchang/andrej-karpathy-skills" medium="image" />
      
    </item>
    
    <item>
      <title>Crosstalk-Solutions/project-nomad</title>
      <link>https://github.com/Crosstalk-Solutions/project-nomad</link>
      <description>&lt;p&gt;Project N.O.M.A.D, is a self-contained, offline survival computer packed with critical tools, knowledge, and AI to keep you informed and empowered—anytime, anywhere.&lt;/p&gt;&lt;hr&gt;&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/main/admin/public/project_nomad_logo.webp&quot; width=&quot;200&quot; height=&quot;200&quot; /&gt; 
 &lt;h1&gt;Project N.O.M.A.D.&lt;/h1&gt; 
 &lt;h3&gt;Node for Offline Media, Archives, and Data&lt;/h3&gt; 
 &lt;p&gt;&lt;strong&gt;Knowledge That Never Goes Offline&lt;/strong&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;a href=&quot;https://www.projectnomad.us&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Website-projectnomad.us-blue&quot; alt=&quot;Website&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://discord.com/invite/crosstalksolutions&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Discord-Join%20Community-5865F2&quot; alt=&quot;Discord&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://benchmark.projectnomad.us&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Benchmark-Leaderboard-green&quot; alt=&quot;Benchmark&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;hr /&gt; 
&lt;p&gt;Project N.O.M.A.D. is a self-contained, offline-first knowledge and education server packed with critical tools, knowledge, and AI to keep you informed and empowered—anytime, anywhere.&lt;/p&gt; 
&lt;h2&gt;Installation &amp;amp; Quickstart&lt;/h2&gt; 
&lt;p&gt;Project N.O.M.A.D. can be installed on any Debian-based operating system (we recommend Ubuntu). Installation is completely terminal-based, and all tools and resources are designed to be accessed through a browser, so there&#39;s no need for a desktop environment if you&#39;d rather set up N.O.M.A.D. as a &quot;server&quot; and access it from other clients.&lt;/p&gt; 
&lt;p&gt;&lt;em&gt;Note: sudo/root privileges are required to run the install script&lt;/em&gt;&lt;/p&gt; 
&lt;h3&gt;Quick Install (Debian-based OS Only)&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-get update &amp;amp;&amp;amp; \
sudo apt-get install -y curl &amp;amp;&amp;amp; \
curl -fsSL https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/install/install_nomad.sh \
  -o install_nomad.sh &amp;amp;&amp;amp; \
sudo bash install_nomad.sh
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Project N.O.M.A.D. is now installed on your device! Open a browser and navigate to &lt;code&gt;http://localhost:8080&lt;/code&gt; (or &lt;code&gt;http://DEVICE_IP:8080&lt;/code&gt;) to start exploring!&lt;/p&gt; 
&lt;p&gt;For a complete step-by-step walkthrough (including Ubuntu installation), see the &lt;a href=&quot;https://www.projectnomad.us/install&quot;&gt;Installation Guide&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Advanced Installation&lt;/h3&gt; 
&lt;p&gt;For more control over the installation process, copy and paste the &lt;a href=&quot;https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/install/management_compose.yaml&quot;&gt;Docker Compose template&lt;/a&gt; into a &lt;code&gt;docker-compose.yml&lt;/code&gt; file and customize it to your liking (be sure to replace any placeholders with your actual values). Then, run &lt;code&gt;docker compose up -d&lt;/code&gt; to start the Command Center and its dependencies. Note: this method is recommended for advanced users only, as it requires familiarity with Docker and manual configuration before starting.&lt;/p&gt; 
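&lt;p&gt;A minimal sketch of that advanced path (the local filename is your choice; the template URL is the one linked above):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -fsSL https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/install/management_compose.yaml -o docker-compose.yml
# edit docker-compose.yml and replace the placeholders with your actual values, then:
docker compose up -d
&lt;/code&gt;&lt;/pre&gt; 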
&lt;h2&gt;How It Works&lt;/h2&gt; 
&lt;p&gt;N.O.M.A.D. is a management UI (&quot;Command Center&quot;) and API that orchestrates a collection of containerized tools and resources via &lt;a href=&quot;https://www.docker.com/&quot;&gt;Docker&lt;/a&gt;. It handles installation, configuration, and updates for everything — so you don&#39;t have to.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Built-in capabilities include:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;AI Chat with Knowledge Base&lt;/strong&gt; — local AI chat powered by &lt;a href=&quot;https://ollama.com/&quot;&gt;Ollama&lt;/a&gt; or you can use OpenAI API compatible software such as LM Studio or llama.cpp, with document upload and semantic search (RAG via &lt;a href=&quot;https://qdrant.tech/&quot;&gt;Qdrant&lt;/a&gt;)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Information Library&lt;/strong&gt; — offline Wikipedia, medical references, ebooks, and more via &lt;a href=&quot;https://kiwix.org/&quot;&gt;Kiwix&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Education Platform&lt;/strong&gt; — Khan Academy courses with progress tracking via &lt;a href=&quot;https://learningequality.org/kolibri/&quot;&gt;Kolibri&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Offline Maps&lt;/strong&gt; — downloadable regional maps via &lt;a href=&quot;https://protomaps.com&quot;&gt;ProtoMaps&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Data Tools&lt;/strong&gt; — encryption, encoding, and analysis via &lt;a href=&quot;https://gchq.github.io/CyberChef/&quot;&gt;CyberChef&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Notes&lt;/strong&gt; — local note-taking via &lt;a href=&quot;https://github.com/dullage/flatnotes&quot;&gt;FlatNotes&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;System Benchmark&lt;/strong&gt; — hardware scoring with a &lt;a href=&quot;https://benchmark.projectnomad.us&quot;&gt;community leaderboard&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Easy Setup Wizard&lt;/strong&gt; — guided first-time configuration with curated content collections&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;N.O.M.A.D. also includes built-in tools like a Wikipedia content selector, ZIM library manager, and content explorer.&lt;/p&gt; 
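&lt;p&gt;Because everything the Command Center manages runs as a Docker container, a standard Docker command is enough to see what is currently running (container names will vary with what you have installed):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo docker ps --format &#39;table {{.Names}}\t{{.Status}}&#39;   # names and status of running containers
&lt;/code&gt;&lt;/pre&gt; 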
&lt;h2&gt;What&#39;s Included&lt;/h2&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Capability&lt;/th&gt; 
   &lt;th&gt;Powered By&lt;/th&gt; 
   &lt;th&gt;What You Get&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Information Library&lt;/td&gt; 
   &lt;td&gt;Kiwix&lt;/td&gt; 
   &lt;td&gt;Offline Wikipedia, medical references, survival guides, ebooks&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;AI Assistant&lt;/td&gt; 
   &lt;td&gt;Ollama + Qdrant&lt;/td&gt; 
   &lt;td&gt;Built-in chat with document upload and semantic search&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Education Platform&lt;/td&gt; 
   &lt;td&gt;Kolibri&lt;/td&gt; 
   &lt;td&gt;Khan Academy courses, progress tracking, multi-user support&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Offline Maps&lt;/td&gt; 
   &lt;td&gt;ProtoMaps&lt;/td&gt; 
   &lt;td&gt;Downloadable regional maps with search and navigation&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Data Tools&lt;/td&gt; 
   &lt;td&gt;CyberChef&lt;/td&gt; 
   &lt;td&gt;Encryption, encoding, hashing, and data analysis&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Notes&lt;/td&gt; 
   &lt;td&gt;FlatNotes&lt;/td&gt; 
   &lt;td&gt;Local note-taking with markdown support&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;System Benchmark&lt;/td&gt; 
   &lt;td&gt;Built-in&lt;/td&gt; 
   &lt;td&gt;Hardware scoring, Builder Tags, and community leaderboard&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;Device Requirements&lt;/h2&gt; 
&lt;p&gt;While many similar offline survival computers are designed to be run on bare-minimum, lightweight hardware, Project N.O.M.A.D. is quite the opposite. To install and run the available AI tools, we highly encourage the use of a beefy, GPU-backed device to make the most of your install.&lt;/p&gt; 
&lt;p&gt;At its core, however, N.O.M.A.D. is still very lightweight. For a barebones installation of the management application itself, the following minimal specs are required:&lt;/p&gt; 
&lt;p&gt;&lt;em&gt;Note: Project N.O.M.A.D. is not sponsored by any hardware manufacturer and is designed to be as hardware-agnostic as possible. The hardware listed below is for example/comparison purposes only.&lt;/em&gt;&lt;/p&gt; 
&lt;h4&gt;Minimum Specs&lt;/h4&gt; 
&lt;ul&gt; 
 &lt;li&gt;Processor: 2 GHz dual-core processor or better&lt;/li&gt; 
 &lt;li&gt;RAM: 4GB system memory&lt;/li&gt; 
 &lt;li&gt;Storage: At least 5 GB free disk space&lt;/li&gt; 
 &lt;li&gt;OS: Debian-based (Ubuntu recommended)&lt;/li&gt; 
 &lt;li&gt;Stable internet connection (required during install only)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;To run LLMs and other included AI tools:&lt;/p&gt; 
&lt;h4&gt;Optimal Specs&lt;/h4&gt; 
&lt;ul&gt; 
 &lt;li&gt;Processor: AMD Ryzen 7 or Intel Core i7 or better&lt;/li&gt; 
 &lt;li&gt;RAM: 32 GB system memory&lt;/li&gt; 
 &lt;li&gt;Graphics: NVIDIA RTX 3060 or AMD equivalent or better (more VRAM = run larger models)&lt;/li&gt; 
 &lt;li&gt;Storage: At least 250 GB free disk space (preferably on SSD)&lt;/li&gt; 
 &lt;li&gt;OS: Debian-based (Ubuntu recommended)&lt;/li&gt; 
 &lt;li&gt;Stable internet connection (required during install only)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;For detailed build recommendations at three price points ($150–$1,000+), see the &lt;a href=&quot;https://www.projectnomad.us/hardware&quot;&gt;Hardware Guide&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Again, Project N.O.M.A.D. itself is quite lightweight - it&#39;s the tools and resources you choose to install with N.O.M.A.D. that will determine the specs required for your unique deployment.&lt;/p&gt; 
&lt;h4&gt;Running AI models on a different host&lt;/h4&gt; 
&lt;p&gt;By default, N.O.M.A.D.&#39;s installer will attempt to set up Ollama on the host when the AI Assistant is installed. However, if you would like to run the AI model on a different host, you can go to the settings of the AI Assistant and enter the URL of either an Ollama or OpenAI-compatible API server (such as LM Studio).&lt;br /&gt; Note that if you use Ollama on a different host, you must start the server with the option &lt;code&gt;OLLAMA_HOST=0.0.0.0&lt;/code&gt;.&lt;br /&gt; Ollama is the preferred backend for the AI Assistant because it supports features such as model downloads that the OpenAI API does not; when using LM Studio, for example, you will have to download models through LM Studio itself. You are responsible for setting up the Ollama or OpenAI-compatible server on the other host.&lt;/p&gt; 
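&lt;p&gt;A sketch of the remote-Ollama setup (11434 is Ollama&#39;s default port; substitute your actual host address):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# On the remote host: expose Ollama on all interfaces so N.O.M.A.D. can reach it
OLLAMA_HOST=0.0.0.0 ollama serve
# Then, in the AI Assistant settings, point N.O.M.A.D. at http://REMOTE_HOST_IP:11434
&lt;/code&gt;&lt;/pre&gt; 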
&lt;h2&gt;Frequently Asked Questions (FAQ)&lt;/h2&gt; 
&lt;p&gt;For answers to common questions about Project N.O.M.A.D., please see our &lt;a href=&quot;https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/main/FAQ.md&quot;&gt;FAQ&lt;/a&gt; page.&lt;/p&gt; 
&lt;h2&gt;About Internet Usage &amp;amp; Privacy&lt;/h2&gt; 
&lt;p&gt;Project N.O.M.A.D. is designed for offline usage. An internet connection is only required during the initial installation (to download dependencies) and if you (the user) decide to download additional tools and resources at a later time. Otherwise, N.O.M.A.D. does not require an internet connection and has ZERO built-in telemetry.&lt;/p&gt; 
&lt;p&gt;To test internet connectivity, N.O.M.A.D. attempts to make a request to Cloudflare&#39;s utility endpoint, &lt;code&gt;https://1.1.1.1/cdn-cgi/trace&lt;/code&gt; and checks for a successful response.&lt;/p&gt; 
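&lt;p&gt;You can reproduce the same connectivity check manually if you want to see what N.O.M.A.D. sees:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -fsS https://1.1.1.1/cdn-cgi/trace    # prints key=value lines when the internet is reachable
&lt;/code&gt;&lt;/pre&gt; 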
&lt;h2&gt;About Security&lt;/h2&gt; 
&lt;p&gt;By design, Project N.O.M.A.D. is intended to be open and available without hurdles - it includes no authentication. If you decide to connect your device to a local network after install (e.g. for allowing other devices to access its resources), you can block/open ports to control which services are exposed.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Will authentication be added in the future?&lt;/strong&gt; Maybe. It&#39;s not currently a priority, but if there&#39;s enough demand for it, we may consider building in an optional authentication layer in a future release to support use cases where multiple users need access to the same instance but with different permission levels (e.g. family use with parental controls, classroom use with teacher/admin accounts, etc.). We have a suggestion for this on our public roadmap, so if this is something you&#39;d like to see, please upvote it here: &lt;a href=&quot;https://roadmap.projectnomad.us/posts/1/user-authentication-please-build-in-user-auth-with-admin-user-roles&quot;&gt;https://roadmap.projectnomad.us/posts/1/user-authentication-please-build-in-user-auth-with-admin-user-roles&lt;/a&gt;&lt;/p&gt; 
&lt;p&gt;For now, we recommend using network-level controls to manage access if you&#39;re planning to expose your N.O.M.A.D. instance to other devices on a local network. N.O.M.A.D. is not designed to be exposed directly to the internet, and we strongly advise against doing so unless you really know what you&#39;re doing, have taken appropriate security measures, and understand the risks involved.&lt;/p&gt; 
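&lt;p&gt;One possible network-level control, assuming a ufw-based firewall (adjust the subnet and tooling to your environment):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo ufw allow OpenSSH                                # keep SSH reachable if you manage the box remotely
sudo ufw allow from 192.168.1.0/24 to any port 8080   # allow only your LAN to reach the Command Center
sudo ufw enable
&lt;/code&gt;&lt;/pre&gt; 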
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;Contributions are welcome and appreciated! Please see &lt;a href=&quot;https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/main/CONTRIBUTING.md&quot;&gt;CONTRIBUTING.md&lt;/a&gt; for guidelines on how to contribute to the project.&lt;/p&gt; 
&lt;h2&gt;Community &amp;amp; Resources&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Website:&lt;/strong&gt; &lt;a href=&quot;https://www.projectnomad.us&quot;&gt;www.projectnomad.us&lt;/a&gt; - Learn more about the project&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Discord:&lt;/strong&gt; &lt;a href=&quot;https://discord.com/invite/crosstalksolutions&quot;&gt;Join the Community&lt;/a&gt; - Get help, share your builds, and connect with other NOMAD users&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Benchmark Leaderboard:&lt;/strong&gt; &lt;a href=&quot;https://benchmark.projectnomad.us&quot;&gt;benchmark.projectnomad.us&lt;/a&gt; - See how your hardware stacks up against other NOMAD builds&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Troubleshooting Guide:&lt;/strong&gt; &lt;a href=&quot;https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/main/TROUBLESHOOTING.md&quot;&gt;TROUBLESHOOTING.md&lt;/a&gt; - Find solutions to common issues&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;FAQ:&lt;/strong&gt; &lt;a href=&quot;https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/main/FAQ.md&quot;&gt;FAQ.md&lt;/a&gt; - Find answers to frequently asked questions&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;License&lt;/h2&gt; 
&lt;p&gt;Project N.O.M.A.D. is licensed under the &lt;a href=&quot;https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/main/LICENSE&quot;&gt;Apache License 2.0&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Helper Scripts&lt;/h2&gt; 
&lt;p&gt;Once installed, Project N.O.M.A.D. has a few helper scripts should you ever need to troubleshoot issues or perform maintenance that can&#39;t be done through the Command Center. All of these scripts are found in Project N.O.M.A.D.&#39;s install directory, &lt;code&gt;/opt/project-nomad&lt;/code&gt;.&lt;/p&gt; 
&lt;h6&gt;Start Script - Starts all installed project containers&lt;/h6&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo bash /opt/project-nomad/start_nomad.sh
&lt;/code&gt;&lt;/pre&gt; 
&lt;h6&gt;Stop Script - Stops all installed project containers&lt;/h6&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo bash /opt/project-nomad/stop_nomad.sh
&lt;/code&gt;&lt;/pre&gt; 
&lt;h6&gt;Update Script - Attempts to pull the latest images for the Command Center and its dependencies (i.e. mysql) and recreate the containers. Note: this &lt;em&gt;only&lt;/em&gt; updates the Command Center containers. It does not update the installable application containers - that should be done through the Command Center UI&lt;/h6&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo bash /opt/project-nomad/update_nomad.sh
&lt;/code&gt;&lt;/pre&gt; 
&lt;h6&gt;Uninstall Script - Need to start fresh? Use the uninstall script to make your life easy. Note: this cannot be undone!&lt;/h6&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -fsSL https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/install/uninstall_nomad.sh -o uninstall_nomad.sh &amp;amp;&amp;amp; sudo bash uninstall_nomad.sh
&lt;/code&gt;&lt;/pre&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/3e2ce387b53281fee38944db0e7ef927e55ea7d477ca4897fe7e88dbec4a77c2/Crosstalk-Solutions/project-nomad" medium="image" />
      
    </item>
    
    <item>
      <title>bytedance/deer-flow</title>
      <link>https://github.com/bytedance/deer-flow</link>
      <description>&lt;p&gt;An open-source long-horizon SuperAgent harness that researches, codes, and creates. With the help of sandboxes, memories, tools, skill, subagents and message gateway, it handles different levels of tasks that could take minutes to hours.&lt;/p&gt;&lt;hr&gt;&lt;h1&gt;🦌 DeerFlow - 2.0&lt;/h1&gt; 
&lt;p&gt;English | &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/README_zh.md&quot;&gt;中文&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/README_ja.md&quot;&gt;日本語&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/README_fr.md&quot;&gt;Français&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/README_ru.md&quot;&gt;Русский&lt;/a&gt;&lt;/p&gt; 
&lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/backend/pyproject.toml&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Python-3.12%2B-3776AB?logo=python&amp;amp;logoColor=white&quot; alt=&quot;Python&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/Makefile&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Node.js-22%2B-339933?logo=node.js&amp;amp;logoColor=white&quot; alt=&quot;Node.js&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/LICENSE&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/License-MIT-yellow.svg?sanitize=true&quot; alt=&quot;License: MIT&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p&gt;&lt;a href=&quot;https://trendshift.io/repositories/14699&quot; target=&quot;_blank&quot;&gt;&lt;img src=&quot;https://trendshift.io/api/badge/repositories/14699&quot; alt=&quot;bytedance%2Fdeer-flow | Trendshift&quot; style=&quot;width: 250px; height: 55px;&quot; width=&quot;250&quot; height=&quot;55&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;On February 28th, 2026, DeerFlow claimed the 🏆 #1 spot on GitHub Trending following the launch of version 2. Thanks a million to our incredible community — you made this happen! 💪🔥&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;p&gt;DeerFlow (&lt;strong&gt;D&lt;/strong&gt;eep &lt;strong&gt;E&lt;/strong&gt;xploration and &lt;strong&gt;E&lt;/strong&gt;fficient &lt;strong&gt;R&lt;/strong&gt;esearch &lt;strong&gt;Flow&lt;/strong&gt;) is an open-source &lt;strong&gt;super agent harness&lt;/strong&gt; that orchestrates &lt;strong&gt;sub-agents&lt;/strong&gt;, &lt;strong&gt;memory&lt;/strong&gt;, and &lt;strong&gt;sandboxes&lt;/strong&gt; to do almost anything — powered by &lt;strong&gt;extensible skills&lt;/strong&gt;.&lt;/p&gt; 
&lt;p&gt;&lt;a href=&quot;https://github.com/user-attachments/assets/a8bcadc4-e040-4cf2-8fda-dd768b999c18&quot;&gt;https://github.com/user-attachments/assets/a8bcadc4-e040-4cf2-8fda-dd768b999c18&lt;/a&gt;&lt;/p&gt; 
&lt;div class=&quot;markdown-alert markdown-alert-note&quot;&gt;
 &lt;p class=&quot;markdown-alert-title&quot;&gt;
  &lt;svg class=&quot;octicon octicon-info mr-2&quot; viewbox=&quot;0 0 16 16&quot; version=&quot;1.1&quot; width=&quot;16&quot; height=&quot;16&quot; aria-hidden=&quot;true&quot;&gt;
   &lt;path d=&quot;M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8Zm8-6.5a6.5 6.5 0 1 0 0 13 6.5 6.5 0 0 0 0-13ZM6.5 7.75A.75.75 0 0 1 7.25 7h1a.75.75 0 0 1 .75.75v2.75h.25a.75.75 0 0 1 0 1.5h-2a.75.75 0 0 1 0-1.5h.25v-2h-.25a.75.75 0 0 1-.75-.75ZM8 6a1 1 0 1 1 0-2 1 1 0 0 1 0 2Z&quot;&gt;&lt;/path&gt;
  &lt;/svg&gt;Note&lt;/p&gt;
 &lt;p&gt;&lt;strong&gt;DeerFlow 2.0 is a ground-up rewrite.&lt;/strong&gt; It shares no code with v1. If you&#39;re looking for the original Deep Research framework, it&#39;s maintained on the &lt;a href=&quot;https://github.com/bytedance/deer-flow/tree/main-1.x&quot;&gt;&lt;code&gt;1.x&lt;/code&gt; branch&lt;/a&gt; — contributions there are still welcome. Active development has moved to 2.0.&lt;/p&gt; 
&lt;/div&gt; 
&lt;h2&gt;Official Website&lt;/h2&gt; 
&lt;p&gt;&lt;a href=&quot;https://deerflow.tech&quot;&gt;&lt;img width=&quot;2880&quot; height=&quot;1600&quot; alt=&quot;image&quot; src=&quot;https://github.com/user-attachments/assets/a598c49f-3b2f-41ea-a052-05e21349188a&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p&gt;Learn more and see &lt;strong&gt;real demos&lt;/strong&gt; on our &lt;a href=&quot;https://deerflow.tech&quot;&gt;&lt;strong&gt;official website&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Coding Plan from ByteDance Volcengine&lt;/h2&gt; 
&lt;img width=&quot;4808&quot; height=&quot;2400&quot; alt=&quot;Ark (English)&quot; src=&quot;https://github.com/user-attachments/assets/2ecc7b9d-50be-4185-b1f7-5542d222fb2d&quot; /&gt; 
&lt;ul&gt; 
 &lt;li&gt;We strongly recommend using Doubao-Seed-2.0-Code, DeepSeek v3.2 and Kimi 2.5 to run DeerFlow&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://www.byteplus.com/en/activity/codingplan?utm_campaign=deer_flow&amp;amp;utm_content=deer_flow&amp;amp;utm_medium=devrel&amp;amp;utm_source=OWO&amp;amp;utm_term=deer_flow&quot;&gt;Learn more&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://www.volcengine.com/activity/codingplan?utm_campaign=deer_flow&amp;amp;utm_content=deer_flow&amp;amp;utm_medium=devrel&amp;amp;utm_source=OWO&amp;amp;utm_term=deer_flow&quot;&gt;Developers in mainland China: click here&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;InfoQuest&lt;/h2&gt; 
&lt;p&gt;DeerFlow now integrates &lt;a href=&quot;https://docs.byteplus.com/en/docs/InfoQuest/What_is_Info_Quest&quot;&gt;InfoQuest&lt;/a&gt;, the intelligent search and crawling toolset independently developed by BytePlus (a free online trial is available).&lt;/p&gt; 
&lt;a href=&quot;https://docs.byteplus.com/en/docs/InfoQuest/What_is_Info_Quest&quot; target=&quot;_blank&quot;&gt; &lt;img src=&quot;https://sf16-sg.tiktokcdn.com/obj/eden-sg/hubseh7bsbps/20251208-160108.png&quot; alt=&quot;InfoQuest_banner&quot; /&gt; &lt;/a&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Table of Contents&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#-deerflow---20&quot;&gt;🦌 DeerFlow - 2.0&lt;/a&gt; 
  &lt;ul&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#official-website&quot;&gt;Official Website&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#coding-plan-from-bytedance-volcengine&quot;&gt;Coding Plan from ByteDance Volcengine&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#infoquest&quot;&gt;InfoQuest&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#table-of-contents&quot;&gt;Table of Contents&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#one-line-agent-setup&quot;&gt;One-Line Agent Setup&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#quick-start&quot;&gt;Quick Start&lt;/a&gt; 
    &lt;ul&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#configuration&quot;&gt;Configuration&lt;/a&gt;&lt;/li&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#running-the-application&quot;&gt;Running the Application&lt;/a&gt; 
      &lt;ul&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#deployment-sizing&quot;&gt;Deployment Sizing&lt;/a&gt;&lt;/li&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#option-1-docker-recommended&quot;&gt;Option 1: Docker (Recommended)&lt;/a&gt;&lt;/li&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#option-2-local-development&quot;&gt;Option 2: Local Development&lt;/a&gt;&lt;/li&gt; 
      &lt;/ul&gt; &lt;/li&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#advanced&quot;&gt;Advanced&lt;/a&gt; 
      &lt;ul&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#sandbox-mode&quot;&gt;Sandbox Mode&lt;/a&gt;&lt;/li&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#mcp-server&quot;&gt;MCP Server&lt;/a&gt;&lt;/li&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#im-channels&quot;&gt;IM Channels&lt;/a&gt;&lt;/li&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#langsmith-tracing&quot;&gt;LangSmith Tracing&lt;/a&gt;&lt;/li&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#langfuse-tracing&quot;&gt;Langfuse Tracing&lt;/a&gt;&lt;/li&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#using-both-providers&quot;&gt;Using Both Providers&lt;/a&gt;&lt;/li&gt; 
      &lt;/ul&gt; &lt;/li&gt; 
    &lt;/ul&gt; &lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#from-deep-research-to-super-agent-harness&quot;&gt;From Deep Research to Super Agent Harness&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#core-features&quot;&gt;Core Features&lt;/a&gt; 
    &lt;ul&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#skills--tools&quot;&gt;Skills &amp;amp; Tools&lt;/a&gt; 
      &lt;ul&gt; 
       &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#claude-code-integration&quot;&gt;Claude Code Integration&lt;/a&gt;&lt;/li&gt; 
      &lt;/ul&gt; &lt;/li&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#sub-agents&quot;&gt;Sub-Agents&lt;/a&gt;&lt;/li&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#sandbox--file-system&quot;&gt;Sandbox &amp;amp; File System&lt;/a&gt;&lt;/li&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#context-engineering&quot;&gt;Context Engineering&lt;/a&gt;&lt;/li&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#long-term-memory&quot;&gt;Long-Term Memory&lt;/a&gt;&lt;/li&gt; 
    &lt;/ul&gt; &lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#recommended-models&quot;&gt;Recommended Models&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#embedded-python-client&quot;&gt;Embedded Python Client&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#documentation&quot;&gt;Documentation&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#%EF%B8%8F-security-notice&quot;&gt;⚠️ Security Notice&lt;/a&gt; 
    &lt;ul&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#improper-deployment-may-introduce-security-risks&quot;&gt;Improper Deployment May Introduce Security Risks&lt;/a&gt;&lt;/li&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#security-recommendations&quot;&gt;Security Recommendations&lt;/a&gt;&lt;/li&gt; 
    &lt;/ul&gt; &lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#contributing&quot;&gt;Contributing&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#license&quot;&gt;License&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#acknowledgments&quot;&gt;Acknowledgments&lt;/a&gt; 
    &lt;ul&gt; 
     &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#key-contributors&quot;&gt;Key Contributors&lt;/a&gt;&lt;/li&gt; 
    &lt;/ul&gt; &lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/#star-history&quot;&gt;Star History&lt;/a&gt;&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;One-Line Agent Setup&lt;/h2&gt; 
&lt;p&gt;If you use Claude Code, Codex, Cursor, Windsurf, or another coding agent, you can hand it the setup instructions in one sentence:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;Help me clone DeerFlow if needed, then bootstrap it for local development by following https://raw.githubusercontent.com/bytedance/deer-flow/main/Install.md
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;That prompt is intended for coding agents. It tells the agent to clone the repo if needed, choose Docker when available, and finish by reporting the exact next command to run plus any missing configuration the user still needs to provide.&lt;/p&gt; 
&lt;h2&gt;Quick Start&lt;/h2&gt; 
&lt;h3&gt;Configuration&lt;/h3&gt; 
&lt;ol&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Clone the DeerFlow repository&lt;/strong&gt;&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
&lt;/code&gt;&lt;/pre&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Run the setup wizard&lt;/strong&gt;&lt;/p&gt; &lt;p&gt;From the project root directory (&lt;code&gt;deer-flow/&lt;/code&gt;), run:&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;make setup
&lt;/code&gt;&lt;/pre&gt; &lt;p&gt;This launches an interactive wizard that guides you through choosing an LLM provider, an optional web search provider (which you can skip for now), and execution/safety preferences such as sandbox mode, bash access, and file-write tools. It generates a minimal &lt;code&gt;config.yaml&lt;/code&gt; and writes your keys to &lt;code&gt;.env&lt;/code&gt;. The whole flow takes about two minutes.&lt;/p&gt; &lt;p&gt;Run &lt;code&gt;make doctor&lt;/code&gt; at any time to verify your setup and get actionable fix hints.&lt;/p&gt; 
  &lt;blockquote&gt; 
   &lt;p&gt;&lt;strong&gt;Advanced / manual configuration&lt;/strong&gt;: If you prefer to edit &lt;code&gt;config.yaml&lt;/code&gt; directly, run &lt;code&gt;make config&lt;/code&gt; instead to copy the full template. See &lt;code&gt;config.example.yaml&lt;/code&gt; for the complete reference including CLI-backed providers (Codex CLI, Claude Code OAuth), OpenRouter, Responses API, and more.&lt;/p&gt; 
  &lt;/blockquote&gt; 
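   &lt;p&gt;If you take that manual path, it boils down to roughly the following sketch (&lt;code&gt;$EDITOR&lt;/code&gt; stands in for whatever editor you use):&lt;/p&gt; 
   &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;make config            # copy the full config.example.yaml template to config.yaml
$EDITOR config.yaml    # pick models, web search, and sandbox settings by hand
make doctor            # verify the result and get actionable fix hints
&lt;/code&gt;&lt;/pre&gt; 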
  &lt;details&gt; 
   &lt;summary&gt;Manual model configuration examples&lt;/summary&gt; 
   &lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;models:
  - name: gpt-4o
    display_name: GPT-4o
    use: langchain_openai:ChatOpenAI
    model: gpt-4o
    api_key: $OPENAI_API_KEY

  - name: openrouter-gemini-2.5-flash
    display_name: Gemini 2.5 Flash (OpenRouter)
    use: langchain_openai:ChatOpenAI
    model: google/gemini-2.5-flash-preview
    api_key: $OPENROUTER_API_KEY
    base_url: https://openrouter.ai/api/v1

  - name: gpt-5-responses
    display_name: GPT-5 (Responses API)
    use: langchain_openai:ChatOpenAI
    model: gpt-5
    api_key: $OPENAI_API_KEY
    use_responses_api: true
    output_version: responses/v1

  - name: qwen3-32b-vllm
    display_name: Qwen3 32B (vLLM)
    use: deerflow.models.vllm_provider:VllmChatModel
    model: Qwen/Qwen3-32B
    api_key: $VLLM_API_KEY
    base_url: http://localhost:8000/v1
    supports_thinking: true
    when_thinking_enabled:
      extra_body:
        chat_template_kwargs:
          enable_thinking: true
&lt;/code&gt;&lt;/pre&gt; 
   &lt;p&gt;OpenRouter and similar OpenAI-compatible gateways should be configured with &lt;code&gt;langchain_openai:ChatOpenAI&lt;/code&gt; plus &lt;code&gt;base_url&lt;/code&gt;. If you prefer a provider-specific environment variable name, point &lt;code&gt;api_key&lt;/code&gt; at that variable explicitly (for example &lt;code&gt;api_key: $OPENROUTER_API_KEY&lt;/code&gt;).&lt;/p&gt; 
   &lt;p&gt;To route OpenAI models through &lt;code&gt;/v1/responses&lt;/code&gt;, keep using &lt;code&gt;langchain_openai:ChatOpenAI&lt;/code&gt; and set &lt;code&gt;use_responses_api: true&lt;/code&gt; with &lt;code&gt;output_version: responses/v1&lt;/code&gt;.&lt;/p&gt; 
   &lt;p&gt;For vLLM 0.19.0, use &lt;code&gt;deerflow.models.vllm_provider:VllmChatModel&lt;/code&gt;. For Qwen-style reasoning models, DeerFlow toggles reasoning with &lt;code&gt;extra_body.chat_template_kwargs.enable_thinking&lt;/code&gt; and preserves vLLM&#39;s non-standard &lt;code&gt;reasoning&lt;/code&gt; field across multi-turn tool-call conversations. Legacy &lt;code&gt;thinking&lt;/code&gt; configs are normalized automatically for backward compatibility. Reasoning models may also require the server to be started with &lt;code&gt;--reasoning-parser ...&lt;/code&gt;. If your local vLLM deployment accepts any non-empty API key, you can still set &lt;code&gt;VLLM_API_KEY&lt;/code&gt; to a placeholder value.&lt;/p&gt; 
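    &lt;p&gt;As a rough, hypothetical launch for a local vLLM OpenAI-compatible server backing the &lt;code&gt;qwen3-32b-vllm&lt;/code&gt; entry above (exact flags depend on your vLLM version and model; the parser name is a placeholder):&lt;/p&gt; 
    &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export VLLM_API_KEY=placeholder      # any non-empty value if your deployment does not check keys
vllm serve Qwen/Qwen3-32B \
  --port 8000 \
  --reasoning-parser YOUR_PARSER     # only for reasoning models; use the parser vLLM documents for your model
&lt;/code&gt;&lt;/pre&gt; 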
   &lt;p&gt;CLI-backed provider examples:&lt;/p&gt; 
   &lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;models:
  - name: gpt-5.4
    display_name: GPT-5.4 (Codex CLI)
    use: deerflow.models.openai_codex_provider:CodexChatModel
    model: gpt-5.4
    supports_thinking: true
    supports_reasoning_effort: true

  - name: claude-sonnet-4.6
    display_name: Claude Sonnet 4.6 (Claude Code OAuth)
    use: deerflow.models.claude_provider:ClaudeChatModel
    model: claude-sonnet-4-6
    max_tokens: 4096
    supports_thinking: true
&lt;/code&gt;&lt;/pre&gt; 
   &lt;ul&gt; 
    &lt;li&gt;Codex CLI reads &lt;code&gt;~/.codex/auth.json&lt;/code&gt;&lt;/li&gt; 
    &lt;li&gt;Claude Code accepts &lt;code&gt;CLAUDE_CODE_OAUTH_TOKEN&lt;/code&gt;, &lt;code&gt;ANTHROPIC_AUTH_TOKEN&lt;/code&gt;, &lt;code&gt;CLAUDE_CODE_CREDENTIALS_PATH&lt;/code&gt;, or &lt;code&gt;~/.claude/.credentials.json&lt;/code&gt;&lt;/li&gt; 
    &lt;li&gt;ACP agent entries are separate from model providers — if you configure &lt;code&gt;acp_agents.codex&lt;/code&gt;, point it at a Codex ACP adapter such as &lt;code&gt;npx -y @zed-industries/codex-acp&lt;/code&gt;&lt;/li&gt; 
    &lt;li&gt;On macOS, export Claude Code auth explicitly if needed:&lt;/li&gt; 
   &lt;/ul&gt; 
   &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;eval &quot;$(python3 scripts/export_claude_code_oauth.py --print-export)&quot;
&lt;/code&gt;&lt;/pre&gt; 
   &lt;p&gt;API keys can also be set manually in &lt;code&gt;.env&lt;/code&gt; (recommended) or exported in your shell:&lt;/p&gt; 
   &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;OPENAI_API_KEY=your-openai-api-key
TAVILY_API_KEY=your-tavily-api-key
&lt;/code&gt;&lt;/pre&gt; 
  &lt;/details&gt; &lt;/li&gt; 
&lt;/ol&gt; 
&lt;h3&gt;Running the Application&lt;/h3&gt; 
&lt;h4&gt;Deployment Sizing&lt;/h4&gt; 
&lt;p&gt;Use the table below as a practical starting point when choosing how to run DeerFlow:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Deployment target&lt;/th&gt; 
   &lt;th&gt;Starting point&lt;/th&gt; 
   &lt;th&gt;Recommended&lt;/th&gt; 
   &lt;th&gt;Notes&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Local evaluation / &lt;code&gt;make dev&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;4 vCPU, 8 GB RAM, 20 GB free SSD&lt;/td&gt; 
   &lt;td&gt;8 vCPU, 16 GB RAM&lt;/td&gt; 
   &lt;td&gt;Good for one developer or one light session with hosted model APIs. &lt;code&gt;2 vCPU / 4 GB&lt;/code&gt; is usually not enough.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Docker development / &lt;code&gt;make docker-start&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;4 vCPU, 8 GB RAM, 25 GB free SSD&lt;/td&gt; 
   &lt;td&gt;8 vCPU, 16 GB RAM&lt;/td&gt; 
   &lt;td&gt;Image builds, bind mounts, and sandbox containers need more headroom than pure local dev.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Long-running server / &lt;code&gt;make up&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;8 vCPU, 16 GB RAM, 40 GB free SSD&lt;/td&gt; 
   &lt;td&gt;16 vCPU, 32 GB RAM&lt;/td&gt; 
   &lt;td&gt;Preferred for shared use, multi-agent runs, report generation, or heavier sandbox workloads.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;ul&gt; 
 &lt;li&gt;These numbers cover DeerFlow itself. If you also host a local LLM, size that service separately.&lt;/li&gt; 
 &lt;li&gt;Linux plus Docker is the recommended deployment target for a persistent server. macOS and Windows are best treated as development or evaluation environments.&lt;/li&gt; 
 &lt;li&gt;If CPU or memory usage stays pinned, reduce concurrent runs first, then move to the next sizing tier.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h4&gt;Option 1: Docker (Recommended)&lt;/h4&gt; 
&lt;p&gt;&lt;strong&gt;Development&lt;/strong&gt; (hot-reload, source mounts):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;make docker-init    # Pull sandbox image (only once or when image updates)
make docker-start   # Start services (auto-detects sandbox mode from config.yaml)
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;code&gt;make docker-start&lt;/code&gt; starts &lt;code&gt;provisioner&lt;/code&gt; only when &lt;code&gt;config.yaml&lt;/code&gt; uses provisioner mode (&lt;code&gt;sandbox.use: deerflow.community.aio_sandbox:AioSandboxProvider&lt;/code&gt; with &lt;code&gt;provisioner_url&lt;/code&gt;).&lt;/p&gt; 
&lt;p&gt;Docker builds use the upstream &lt;code&gt;uv&lt;/code&gt; registry by default. If you need faster mirrors in restricted networks, export &lt;code&gt;UV_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple&lt;/code&gt; and &lt;code&gt;NPM_REGISTRY=https://registry.npmmirror.com&lt;/code&gt; before running &lt;code&gt;make docker-init&lt;/code&gt; or &lt;code&gt;make docker-start&lt;/code&gt;.&lt;/p&gt; 
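&lt;p&gt;For example, a restricted-network build using those mirrors would look like this:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export UV_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple
export NPM_REGISTRY=https://registry.npmmirror.com
make docker-init     # build/pull with the mirror settings applied
make docker-start
&lt;/code&gt;&lt;/pre&gt; 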
&lt;p&gt;Backend processes automatically pick up &lt;code&gt;config.yaml&lt;/code&gt; changes on the next config access, so model metadata updates do not require a manual restart during development.&lt;/p&gt; 
&lt;div class=&quot;markdown-alert markdown-alert-tip&quot;&gt;
 &lt;p class=&quot;markdown-alert-title&quot;&gt;
  &lt;svg class=&quot;octicon octicon-light-bulb mr-2&quot; viewbox=&quot;0 0 16 16&quot; version=&quot;1.1&quot; width=&quot;16&quot; height=&quot;16&quot; aria-hidden=&quot;true&quot;&gt;
   &lt;path d=&quot;M8 1.5c-2.363 0-4 1.69-4 3.75 0 .984.424 1.625.984 2.304l.214.253c.223.264.47.556.673.848.284.411.537.896.621 1.49a.75.75 0 0 1-1.484.211c-.04-.282-.163-.547-.37-.847a8.456 8.456 0 0 0-.542-.68c-.084-.1-.173-.205-.268-.32C3.201 7.75 2.5 6.766 2.5 5.25 2.5 2.31 4.863 0 8 0s5.5 2.31 5.5 5.25c0 1.516-.701 2.5-1.328 3.259-.095.115-.184.22-.268.319-.207.245-.383.453-.541.681-.208.3-.33.565-.37.847a.751.751 0 0 1-1.485-.212c.084-.593.337-1.078.621-1.489.203-.292.45-.584.673-.848.075-.088.147-.173.213-.253.561-.679.985-1.32.985-2.304 0-2.06-1.637-3.75-4-3.75ZM5.75 12h4.5a.75.75 0 0 1 0 1.5h-4.5a.75.75 0 0 1 0-1.5ZM6 15.25a.75.75 0 0 1 .75-.75h2.5a.75.75 0 0 1 0 1.5h-2.5a.75.75 0 0 1-.75-.75Z&quot;&gt;&lt;/path&gt;
  &lt;/svg&gt;Tip&lt;/p&gt;
 &lt;p&gt;On Linux, if Docker-based commands fail with &lt;code&gt;permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock&lt;/code&gt;, add your user to the &lt;code&gt;docker&lt;/code&gt; group and re-login before retrying. See &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/CONTRIBUTING.md#linux-docker-daemon-permission-denied&quot;&gt;CONTRIBUTING.md&lt;/a&gt; for the full fix.&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;Production&lt;/strong&gt; (builds images locally, mounts runtime config and data):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;make up     # Build images and start all production services
make down   # Stop and remove containers
&lt;/code&gt;&lt;/pre&gt; 
&lt;div class=&quot;markdown-alert markdown-alert-note&quot;&gt;
 &lt;p class=&quot;markdown-alert-title&quot;&gt;
  &lt;svg class=&quot;octicon octicon-info mr-2&quot; viewbox=&quot;0 0 16 16&quot; version=&quot;1.1&quot; width=&quot;16&quot; height=&quot;16&quot; aria-hidden=&quot;true&quot;&gt;
   &lt;path d=&quot;M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8Zm8-6.5a6.5 6.5 0 1 0 0 13 6.5 6.5 0 0 0 0-13ZM6.5 7.75A.75.75 0 0 1 7.25 7h1a.75.75 0 0 1 .75.75v2.75h.25a.75.75 0 0 1 0 1.5h-2a.75.75 0 0 1 0-1.5h.25v-2h-.25a.75.75 0 0 1-.75-.75ZM8 6a1 1 0 1 1 0-2 1 1 0 0 1 0 2Z&quot;&gt;&lt;/path&gt;
  &lt;/svg&gt;Note&lt;/p&gt;
 &lt;p&gt;The LangGraph agent server currently runs via &lt;code&gt;langgraph dev&lt;/code&gt; (the open-source CLI server).&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;Access: &lt;a href=&quot;http://localhost:2026&quot;&gt;http://localhost:2026&lt;/a&gt;&lt;/p&gt; 
&lt;p&gt;See &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/CONTRIBUTING.md&quot;&gt;CONTRIBUTING.md&lt;/a&gt; for detailed Docker development guide.&lt;/p&gt; 
&lt;h4&gt;Option 2: Local Development&lt;/h4&gt; 
&lt;p&gt;If you prefer running services locally:&lt;/p&gt; 
&lt;p&gt;Prerequisite: complete the &quot;Configuration&quot; steps above first (&lt;code&gt;make setup&lt;/code&gt;). &lt;code&gt;make dev&lt;/code&gt; requires a valid &lt;code&gt;config.yaml&lt;/code&gt; in the project root (can be overridden via &lt;code&gt;DEER_FLOW_CONFIG_PATH&lt;/code&gt;). Run &lt;code&gt;make doctor&lt;/code&gt; to verify your setup before starting. On Windows, run the local development flow from Git Bash. Native &lt;code&gt;cmd.exe&lt;/code&gt; and PowerShell shells are not supported for the bash-based service scripts, and WSL is not guaranteed because some scripts rely on Git for Windows utilities such as &lt;code&gt;cygpath&lt;/code&gt;.&lt;/p&gt; 
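&lt;p&gt;If you keep &lt;code&gt;config.yaml&lt;/code&gt; outside the project root, the override mentioned above might be used roughly like this (a sketch with a hypothetical path):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Hypothetical location; DEER_FLOW_CONFIG_PATH points the services away from ./config.yaml
DEER_FLOW_CONFIG_PATH=/path/to/config.yaml make doctor
DEER_FLOW_CONFIG_PATH=/path/to/config.yaml make dev
&lt;/code&gt;&lt;/pre&gt; 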
&lt;ol&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Check prerequisites&lt;/strong&gt;:&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;make check  # Verifies Node.js 22+, pnpm, uv, nginx
&lt;/code&gt;&lt;/pre&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Install dependencies&lt;/strong&gt;:&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;make install  # Install backend + frontend dependencies
&lt;/code&gt;&lt;/pre&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;(Optional) Pre-pull sandbox image&lt;/strong&gt;:&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Recommended if using Docker/Container-based sandbox
make setup-sandbox
&lt;/code&gt;&lt;/pre&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;(Optional) Load sample memory data for local review&lt;/strong&gt;:&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;python scripts/load_memory_sample.py
&lt;/code&gt;&lt;/pre&gt; &lt;p&gt;This copies the sample fixture into the default local runtime memory file so reviewers can immediately test &lt;code&gt;Settings &amp;gt; Memory&lt;/code&gt;. See &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/backend/docs/MEMORY_SETTINGS_REVIEW.md&quot;&gt;backend/docs/MEMORY_SETTINGS_REVIEW.md&lt;/a&gt; for the shortest review flow.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Start services&lt;/strong&gt;:&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;make dev
&lt;/code&gt;&lt;/pre&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Access&lt;/strong&gt;: &lt;a href=&quot;http://localhost:2026&quot;&gt;http://localhost:2026&lt;/a&gt;&lt;/p&gt; &lt;/li&gt; 
&lt;/ol&gt; 
&lt;h4&gt;Startup Modes&lt;/h4&gt; 
&lt;p&gt;DeerFlow supports multiple startup modes across two dimensions:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Dev / Prod&lt;/strong&gt; — dev enables hot-reload; prod uses pre-built frontend&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Standard / Gateway&lt;/strong&gt; — standard uses a separate LangGraph server (4 processes); Gateway mode (experimental) embeds the agent runtime in the Gateway API (3 processes)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;&lt;/th&gt; 
   &lt;th&gt;&lt;strong&gt;Local Foreground&lt;/strong&gt;&lt;/th&gt; 
   &lt;th&gt;&lt;strong&gt;Local Daemon&lt;/strong&gt;&lt;/th&gt; 
   &lt;th&gt;&lt;strong&gt;Docker Dev&lt;/strong&gt;&lt;/th&gt; 
   &lt;th&gt;&lt;strong&gt;Docker Prod&lt;/strong&gt;&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Dev&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --dev&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make dev&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --dev --daemon&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make dev-daemon&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/docker.sh start&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make docker-start&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;—&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Dev + Gateway&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --dev --gateway&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make dev-pro&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --dev --gateway --daemon&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make dev-daemon-pro&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/docker.sh start --gateway&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make docker-start-pro&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;—&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Prod&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --prod&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make start&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --prod --daemon&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make start-daemon&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;—&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/deploy.sh&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make up&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Prod + Gateway&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --prod --gateway&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make start-pro&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --prod --gateway --daemon&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make start-daemon-pro&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;—&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/deploy.sh --gateway&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make up-pro&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Action&lt;/th&gt; 
   &lt;th&gt;Local&lt;/th&gt; 
   &lt;th&gt;Docker Dev&lt;/th&gt; 
   &lt;th&gt;Docker Prod&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Stop&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --stop&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make stop&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/docker.sh stop&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make docker-stop&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/deploy.sh down&lt;/code&gt;&lt;br /&gt;&lt;code&gt;make down&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Restart&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/serve.sh --restart [flags]&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;./scripts/docker.sh restart&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;—&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Gateway mode&lt;/strong&gt; eliminates the LangGraph server process — the Gateway API handles agent execution directly via async tasks, managing its own concurrency.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h4&gt;Why Gateway Mode?&lt;/h4&gt; 
&lt;p&gt;In standard mode, DeerFlow runs a dedicated &lt;a href=&quot;https://langchain-ai.github.io/langgraph/&quot;&gt;LangGraph Platform&lt;/a&gt; server alongside the Gateway API. This architecture works well but has trade-offs:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;&lt;/th&gt; 
   &lt;th&gt;Standard Mode&lt;/th&gt; 
   &lt;th&gt;Gateway Mode&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Gateway (REST API) + LangGraph (agent runtime)&lt;/td&gt; 
   &lt;td&gt;Gateway embeds agent runtime&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Concurrency&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;--n-jobs-per-worker&lt;/code&gt; per worker (requires license)&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;--workers&lt;/code&gt; × async tasks (no per-worker cap)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Containers / Processes&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;4 (frontend, gateway, langgraph, nginx)&lt;/td&gt; 
   &lt;td&gt;3 (frontend, gateway, nginx)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Resource usage&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Higher (two Python runtimes)&lt;/td&gt; 
   &lt;td&gt;Lower (single Python runtime)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;LangGraph Platform license&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Required for production images&lt;/td&gt; 
   &lt;td&gt;Not required&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Cold start&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Slower (two services to initialize)&lt;/td&gt; 
   &lt;td&gt;Faster&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;Both modes are functionally equivalent — the same agents, tools, and skills work in either mode.&lt;/p&gt; 
&lt;h4&gt;Docker Production Deployment&lt;/h4&gt; 
&lt;p&gt;&lt;code&gt;deploy.sh&lt;/code&gt; supports building and starting separately. Images are mode-agnostic — runtime mode is selected at start time:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# One-step (build + start)
deploy.sh                    # standard mode (default)
deploy.sh --gateway          # gateway mode

# Two-step (build once, start with any mode)
deploy.sh build              # build all images
deploy.sh start              # start in standard mode
deploy.sh start --gateway    # start in gateway mode

# Stop
deploy.sh down
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Advanced&lt;/h3&gt; 
&lt;h4&gt;Sandbox Mode&lt;/h4&gt; 
&lt;p&gt;DeerFlow supports multiple sandbox execution modes:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Local Execution&lt;/strong&gt; (runs sandbox code directly on the host machine)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Docker Execution&lt;/strong&gt; (runs sandbox code in isolated Docker containers)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Docker Execution with Kubernetes&lt;/strong&gt; (runs sandbox code in Kubernetes pods via provisioner service)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;For Docker development, service startup follows the sandbox mode set in &lt;code&gt;config.yaml&lt;/code&gt;. In Local and Docker modes, the &lt;code&gt;provisioner&lt;/code&gt; service is not started.&lt;/p&gt; 
&lt;p&gt;See the &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/backend/docs/CONFIGURATION.md#sandbox&quot;&gt;Sandbox Configuration Guide&lt;/a&gt; to configure your preferred mode.&lt;/p&gt; 
&lt;h4&gt;MCP Server&lt;/h4&gt; 
&lt;p&gt;DeerFlow supports configurable MCP servers and skills to extend its capabilities. For HTTP/SSE MCP servers, OAuth token flows are supported (&lt;code&gt;client_credentials&lt;/code&gt;, &lt;code&gt;refresh_token&lt;/code&gt;). See the &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/backend/docs/MCP_SERVER.md&quot;&gt;MCP Server Guide&lt;/a&gt; for detailed instructions.&lt;/p&gt; 
&lt;h4&gt;IM Channels&lt;/h4&gt; 
&lt;p&gt;DeerFlow supports receiving tasks from messaging apps. Channels auto-start when configured — no public IP required for any of them.&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Channel&lt;/th&gt; 
   &lt;th&gt;Transport&lt;/th&gt; 
   &lt;th&gt;Difficulty&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Telegram&lt;/td&gt; 
   &lt;td&gt;Bot API (long-polling)&lt;/td&gt; 
   &lt;td&gt;Easy&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Slack&lt;/td&gt; 
   &lt;td&gt;Socket Mode&lt;/td&gt; 
   &lt;td&gt;Moderate&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Feishu / Lark&lt;/td&gt; 
   &lt;td&gt;WebSocket&lt;/td&gt; 
   &lt;td&gt;Moderate&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;WeChat&lt;/td&gt; 
   &lt;td&gt;Tencent iLink (long-polling)&lt;/td&gt; 
   &lt;td&gt;Moderate&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;WeCom&lt;/td&gt; 
   &lt;td&gt;WebSocket&lt;/td&gt; 
   &lt;td&gt;Moderate&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;&lt;strong&gt;Configuration in &lt;code&gt;config.yaml&lt;/code&gt;:&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;channels:
  # LangGraph Server URL (default: http://localhost:2024)
  langgraph_url: http://localhost:2024
  # Gateway API URL (default: http://localhost:8001)
  gateway_url: http://localhost:8001

  # Optional: global session defaults for all mobile channels
  session:
    assistant_id: lead_agent  # or a custom agent name; custom agents are routed via lead_agent + agent_name
    config:
      recursion_limit: 100
    context:
      thinking_enabled: true
      is_plan_mode: false
      subagent_enabled: false

  feishu:
    enabled: true
    app_id: $FEISHU_APP_ID
    app_secret: $FEISHU_APP_SECRET
    # domain: https://open.feishu.cn       # China (default)
    # domain: https://open.larksuite.com   # International

  wecom:
    enabled: true
    bot_id: $WECOM_BOT_ID
    bot_secret: $WECOM_BOT_SECRET

  slack:
    enabled: true
    bot_token: $SLACK_BOT_TOKEN     # xoxb-...
    app_token: $SLACK_APP_TOKEN     # xapp-... (Socket Mode)
    allowed_users: []               # empty = allow all

  telegram:
    enabled: true
    bot_token: $TELEGRAM_BOT_TOKEN
    allowed_users: []               # empty = allow all

  wechat:
    enabled: false
    bot_token: $WECHAT_BOT_TOKEN
    ilink_bot_id: $WECHAT_ILINK_BOT_ID
    qrcode_login_enabled: true      # optional: allow first-time QR bootstrap when bot_token is absent
    allowed_users: []               # empty = allow all
    polling_timeout: 35
    state_dir: ./.deer-flow/wechat/state
    max_inbound_image_bytes: 20971520
    max_outbound_image_bytes: 20971520
    max_inbound_file_bytes: 52428800
    max_outbound_file_bytes: 52428800

    # Optional: per-channel / per-user session settings
    session:
      assistant_id: mobile-agent  # custom agent names are also supported here
      context:
        thinking_enabled: false
      users:
        &quot;123456789&quot;:
          assistant_id: vip-agent
          config:
            recursion_limit: 150
          context:
            thinking_enabled: true
            subagent_enabled: true
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Notes:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;assistant_id: lead_agent&lt;/code&gt; calls the default LangGraph assistant directly.&lt;/li&gt; 
 &lt;li&gt;If &lt;code&gt;assistant_id&lt;/code&gt; is set to a custom agent name, DeerFlow still routes through &lt;code&gt;lead_agent&lt;/code&gt; and injects that value as &lt;code&gt;agent_name&lt;/code&gt;, so the custom agent&#39;s SOUL/config takes effect for IM channels.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Set the corresponding API keys in your &lt;code&gt;.env&lt;/code&gt; file:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Telegram
TELEGRAM_BOT_TOKEN=123456789:ABCdefGHIjklMNOpqrSTUvwxYZ

# Slack
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...

# Feishu / Lark
FEISHU_APP_ID=cli_xxxx
FEISHU_APP_SECRET=your_app_secret

# WeChat iLink
WECHAT_BOT_TOKEN=your_ilink_bot_token
WECHAT_ILINK_BOT_ID=your_ilink_bot_id

# WeCom
WECOM_BOT_ID=your_bot_id
WECOM_BOT_SECRET=your_bot_secret
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Telegram Setup&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Chat with &lt;a href=&quot;https://t.me/BotFather&quot;&gt;@BotFather&lt;/a&gt;, send &lt;code&gt;/newbot&lt;/code&gt;, and copy the HTTP API token.&lt;/li&gt; 
 &lt;li&gt;Set &lt;code&gt;TELEGRAM_BOT_TOKEN&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt; and enable the channel in &lt;code&gt;config.yaml&lt;/code&gt;.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;Slack Setup&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Create a Slack App at &lt;a href=&quot;https://api.slack.com/apps&quot;&gt;api.slack.com/apps&lt;/a&gt; → Create New App → From scratch.&lt;/li&gt; 
 &lt;li&gt;Under &lt;strong&gt;OAuth &amp;amp; Permissions&lt;/strong&gt;, add Bot Token Scopes: &lt;code&gt;app_mentions:read&lt;/code&gt;, &lt;code&gt;chat:write&lt;/code&gt;, &lt;code&gt;im:history&lt;/code&gt;, &lt;code&gt;im:read&lt;/code&gt;, &lt;code&gt;im:write&lt;/code&gt;, &lt;code&gt;files:write&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;Enable &lt;strong&gt;Socket Mode&lt;/strong&gt; → generate an App-Level Token (&lt;code&gt;xapp-…&lt;/code&gt;) with &lt;code&gt;connections:write&lt;/code&gt; scope.&lt;/li&gt; 
 &lt;li&gt;Under &lt;strong&gt;Event Subscriptions&lt;/strong&gt;, subscribe to bot events: &lt;code&gt;app_mention&lt;/code&gt;, &lt;code&gt;message.im&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;Set &lt;code&gt;SLACK_BOT_TOKEN&lt;/code&gt; and &lt;code&gt;SLACK_APP_TOKEN&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt; and enable the channel in &lt;code&gt;config.yaml&lt;/code&gt;.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;Feishu / Lark Setup&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Create an app on &lt;a href=&quot;https://open.feishu.cn/&quot;&gt;Feishu Open Platform&lt;/a&gt; → enable &lt;strong&gt;Bot&lt;/strong&gt; capability.&lt;/li&gt; 
 &lt;li&gt;Add permissions: &lt;code&gt;im:message&lt;/code&gt;, &lt;code&gt;im:message.p2p_msg:readonly&lt;/code&gt;, &lt;code&gt;im:resource&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;Under &lt;strong&gt;Events&lt;/strong&gt;, subscribe to &lt;code&gt;im.message.receive_v1&lt;/code&gt; and select &lt;strong&gt;Long Connection&lt;/strong&gt; mode.&lt;/li&gt; 
 &lt;li&gt;Copy the App ID and App Secret. Set &lt;code&gt;FEISHU_APP_ID&lt;/code&gt; and &lt;code&gt;FEISHU_APP_SECRET&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt; and enable the channel in &lt;code&gt;config.yaml&lt;/code&gt;.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;WeChat Setup&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Enable the &lt;code&gt;wechat&lt;/code&gt; channel in &lt;code&gt;config.yaml&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;Either set &lt;code&gt;WECHAT_BOT_TOKEN&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt;, or set &lt;code&gt;qrcode_login_enabled: true&lt;/code&gt; for first-time QR bootstrap.&lt;/li&gt; 
 &lt;li&gt;When &lt;code&gt;bot_token&lt;/code&gt; is absent and QR bootstrap is enabled, watch backend logs for the QR content returned by iLink and complete the binding flow.&lt;/li&gt; 
 &lt;li&gt;After the QR flow succeeds, DeerFlow persists the acquired token under &lt;code&gt;state_dir&lt;/code&gt; for later restarts.&lt;/li&gt; 
 &lt;li&gt;For Docker Compose deployments, keep &lt;code&gt;state_dir&lt;/code&gt; on a persistent volume so the &lt;code&gt;get_updates_buf&lt;/code&gt; cursor and saved auth state survive restarts.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;&lt;strong&gt;WeCom Setup&lt;/strong&gt;&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Create a bot on the WeCom AI Bot platform and obtain the &lt;code&gt;bot_id&lt;/code&gt; and &lt;code&gt;bot_secret&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;Enable &lt;code&gt;channels.wecom&lt;/code&gt; in &lt;code&gt;config.yaml&lt;/code&gt; and fill in &lt;code&gt;bot_id&lt;/code&gt; / &lt;code&gt;bot_secret&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;Set &lt;code&gt;WECOM_BOT_ID&lt;/code&gt; and &lt;code&gt;WECOM_BOT_SECRET&lt;/code&gt; in &lt;code&gt;.env&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;Make sure backend dependencies include &lt;code&gt;wecom-aibot-python-sdk&lt;/code&gt;. The channel uses a WebSocket long connection and does not require a public callback URL.&lt;/li&gt; 
 &lt;li&gt;The current integration supports inbound text, image, and file messages. Final images/files generated by the agent are also sent back to the WeCom conversation.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;When DeerFlow runs in Docker Compose, IM channels execute inside the &lt;code&gt;gateway&lt;/code&gt; container. In that case, do not point &lt;code&gt;channels.langgraph_url&lt;/code&gt; or &lt;code&gt;channels.gateway_url&lt;/code&gt; at &lt;code&gt;localhost&lt;/code&gt;; use container service names such as &lt;code&gt;http://langgraph:2024&lt;/code&gt; and &lt;code&gt;http://gateway:8001&lt;/code&gt;, or set &lt;code&gt;DEER_FLOW_CHANNELS_LANGGRAPH_URL&lt;/code&gt; and &lt;code&gt;DEER_FLOW_CHANNELS_GATEWAY_URL&lt;/code&gt;.&lt;/p&gt; 
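&lt;p&gt;For example, a Docker Compose deployment can keep &lt;code&gt;config.yaml&lt;/code&gt; unchanged and set the in-network URLs through the environment variables named above:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# In the gateway container environment (or the .env used by docker compose)
DEER_FLOW_CHANNELS_LANGGRAPH_URL=http://langgraph:2024
DEER_FLOW_CHANNELS_GATEWAY_URL=http://gateway:8001
&lt;/code&gt;&lt;/pre&gt; 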
&lt;p&gt;&lt;strong&gt;Commands&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Once a channel is connected, you can interact with DeerFlow directly from the chat:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Command&lt;/th&gt; 
   &lt;th&gt;Description&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/new&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Start a new conversation&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/status&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Show current thread info&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/models&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;List available models&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/memory&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;View memory&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/help&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Show help&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;Messages without a command prefix are treated as regular chat — DeerFlow creates a thread and responds conversationally.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h4&gt;LangSmith Tracing&lt;/h4&gt; 
&lt;p&gt;DeerFlow has built-in &lt;a href=&quot;https://smith.langchain.com&quot;&gt;LangSmith&lt;/a&gt; integration for observability. When enabled, all LLM calls, agent runs, and tool executions are traced and visible in the LangSmith dashboard.&lt;/p&gt; 
&lt;p&gt;Add the following to your &lt;code&gt;.env&lt;/code&gt; file:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
LANGSMITH_API_KEY=lsv2_pt_xxxxxxxxxxxxxxxx
LANGSMITH_PROJECT=xxx
&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;Langfuse Tracing&lt;/h4&gt; 
&lt;p&gt;DeerFlow also supports &lt;a href=&quot;https://langfuse.com&quot;&gt;Langfuse&lt;/a&gt; observability for LangChain-compatible runs.&lt;/p&gt; 
&lt;p&gt;Add the following to your &lt;code&gt;.env&lt;/code&gt; file:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;LANGFUSE_TRACING=true
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;If you are using a self-hosted Langfuse instance, set &lt;code&gt;LANGFUSE_BASE_URL&lt;/code&gt; to your deployment URL.&lt;/p&gt; 
&lt;h4&gt;Using Both Providers&lt;/h4&gt; 
&lt;p&gt;If both LangSmith and Langfuse are enabled, DeerFlow attaches both tracing callbacks and reports the same model activity to both systems.&lt;/p&gt; 
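&lt;p&gt;Enabling both is simply the union of the two snippets above in the same &lt;code&gt;.env&lt;/code&gt; file:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
LANGSMITH_API_KEY=lsv2_pt_xxxxxxxxxxxxxxxx
LANGSMITH_PROJECT=xxx

LANGFUSE_TRACING=true
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com
&lt;/code&gt;&lt;/pre&gt; 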
&lt;p&gt;If a provider is explicitly enabled but missing required credentials, or if its callback fails to initialize, DeerFlow fails fast when tracing is set up during model creation, and the error message names the provider that caused the failure.&lt;/p&gt; 
&lt;p&gt;For Docker deployments, tracing is disabled by default. Set &lt;code&gt;LANGSMITH_TRACING=true&lt;/code&gt; and &lt;code&gt;LANGSMITH_API_KEY&lt;/code&gt; in your &lt;code&gt;.env&lt;/code&gt; to enable it.&lt;/p&gt; 
&lt;h2&gt;From Deep Research to Super Agent Harness&lt;/h2&gt; 
&lt;p&gt;DeerFlow started as a Deep Research framework — and the community ran with it. Since launch, developers have pushed it far beyond research: building data pipelines, generating slide decks, spinning up dashboards, automating content workflows. Things we never anticipated.&lt;/p&gt; 
&lt;p&gt;That told us something important: DeerFlow wasn&#39;t just a research tool. It was a &lt;strong&gt;harness&lt;/strong&gt; — a runtime that gives agents the infrastructure to actually get work done.&lt;/p&gt; 
&lt;p&gt;So we rebuilt it from scratch.&lt;/p&gt; 
&lt;p&gt;DeerFlow 2.0 is no longer a framework you wire together. It&#39;s a super agent harness — batteries included, fully extensible. Built on LangGraph and LangChain, it ships with everything an agent needs out of the box: a filesystem, memory, skills, sandbox-aware execution, and the ability to plan and spawn sub-agents for complex, multi-step tasks.&lt;/p&gt; 
&lt;p&gt;Use it as-is. Or tear it apart and make it yours.&lt;/p&gt; 
&lt;h2&gt;Core Features&lt;/h2&gt; 
&lt;h3&gt;Skills &amp;amp; Tools&lt;/h3&gt; 
&lt;p&gt;Skills are what make DeerFlow do &lt;em&gt;almost anything&lt;/em&gt;.&lt;/p&gt; 
&lt;p&gt;A standard Agent Skill is a structured capability module — a Markdown file that defines a workflow, best practices, and references to supporting resources. DeerFlow ships with built-in skills for research, report generation, slide creation, web pages, image and video generation, and more. But the real power is extensibility: add your own skills, replace the built-in ones, or combine them into compound workflows.&lt;/p&gt; 
&lt;p&gt;Skills are loaded progressively — only when the task needs them, not all at once. This keeps the context window lean and makes DeerFlow work well even with token-sensitive models.&lt;/p&gt; 
&lt;p&gt;When you install &lt;code&gt;.skill&lt;/code&gt; archives through the Gateway, DeerFlow accepts standard optional frontmatter metadata such as &lt;code&gt;version&lt;/code&gt;, &lt;code&gt;author&lt;/code&gt;, and &lt;code&gt;compatibility&lt;/code&gt; instead of rejecting otherwise valid external skills.&lt;/p&gt; 
&lt;p&gt;Tools follow the same philosophy. DeerFlow comes with a core toolset — web search, web fetch, file operations, bash execution — and supports custom tools via MCP servers and Python functions. Swap anything. Add anything.&lt;/p&gt; 
&lt;p&gt;Gateway-generated follow-up suggestions now normalize both plain-string model output and block/list-style rich content before parsing the JSON array response, so provider-specific content wrappers do not silently drop suggestions.&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;# Paths inside the sandbox container
/mnt/skills/public
├── research/SKILL.md
├── report-generation/SKILL.md
├── slide-creation/SKILL.md
├── web-page/SKILL.md
└── image-generation/SKILL.md

/mnt/skills/custom
└── your-custom-skill/SKILL.md      ← yours
&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;Claude Code Integration&lt;/h4&gt; 
&lt;p&gt;The &lt;code&gt;claude-to-deerflow&lt;/code&gt; skill lets you interact with a running DeerFlow instance directly from &lt;a href=&quot;https://docs.anthropic.com/en/docs/claude-code&quot;&gt;Claude Code&lt;/a&gt;. Send research tasks, check status, manage threads — all without leaving the terminal.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Install the skill&lt;/strong&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npx skills add https://github.com/bytedance/deer-flow --skill claude-to-deerflow
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Then make sure DeerFlow is running (default at &lt;code&gt;http://localhost:2026&lt;/code&gt;) and use the &lt;code&gt;/claude-to-deerflow&lt;/code&gt; command in Claude Code.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;What you can do&lt;/strong&gt;:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Send messages to DeerFlow and get streaming responses&lt;/li&gt; 
 &lt;li&gt;Choose execution modes: flash (fast), standard, pro (planning), ultra (sub-agents)&lt;/li&gt; 
 &lt;li&gt;Check DeerFlow health, list models/skills/agents&lt;/li&gt; 
 &lt;li&gt;Manage threads and conversation history&lt;/li&gt; 
 &lt;li&gt;Upload files for analysis&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Environment variables&lt;/strong&gt; (optional, for custom endpoints):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;DEERFLOW_URL=http://localhost:2026            # Unified proxy base URL
DEERFLOW_GATEWAY_URL=http://localhost:2026    # Gateway API
DEERFLOW_LANGGRAPH_URL=http://localhost:2026/api/langgraph  # LangGraph API
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;See &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/skills/public/claude-to-deerflow/SKILL.md&quot;&gt;&lt;code&gt;skills/public/claude-to-deerflow/SKILL.md&lt;/code&gt;&lt;/a&gt; for the full API reference.&lt;/p&gt; 
&lt;h3&gt;Sub-Agents&lt;/h3&gt; 
&lt;p&gt;Complex tasks rarely fit in a single pass. DeerFlow decomposes them.&lt;/p&gt; 
&lt;p&gt;The lead agent can spawn sub-agents on the fly — each with its own scoped context, tools, and termination conditions. Sub-agents run in parallel when possible, report back structured results, and the lead agent synthesizes everything into a coherent output.&lt;/p&gt; 
&lt;p&gt;This is how DeerFlow handles tasks that take minutes to hours: a research task might fan out into a dozen sub-agents, each exploring a different angle, then converge into a single report — or a website — or a slide deck with generated visuals. One harness, many hands.&lt;/p&gt; 
&lt;h3&gt;Sandbox &amp;amp; File System&lt;/h3&gt; 
&lt;p&gt;DeerFlow doesn&#39;t just &lt;em&gt;talk&lt;/em&gt; about doing things. It has its own computer.&lt;/p&gt; 
&lt;p&gt;Each task gets its own execution environment with a full filesystem view — skills, workspace, uploads, outputs. The agent reads, writes, and edits files. It can view images and, when configured safely, execute shell commands.&lt;/p&gt; 
&lt;p&gt;With &lt;code&gt;AioSandboxProvider&lt;/code&gt;, shell execution runs inside isolated containers. With &lt;code&gt;LocalSandboxProvider&lt;/code&gt;, file tools still map to per-thread directories on the host, but host &lt;code&gt;bash&lt;/code&gt; is disabled by default because it is not a secure isolation boundary. Re-enable host bash only for fully trusted local workflows.&lt;/p&gt; 
&lt;p&gt;This is the difference between a chatbot with tool access and an agent with an actual execution environment.&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;# Paths inside the sandbox container
/mnt/user-data/
├── uploads/          ← your files
├── workspace/        ← agents&#39; working directory
└── outputs/          ← final deliverables
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Context Engineering&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Isolated Sub-Agent Context&lt;/strong&gt;: Each sub-agent runs in its own isolated context and cannot see the context of the lead agent or of other sub-agents. This keeps every sub-agent focused on its own task instead of being distracted by unrelated history.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Summarization&lt;/strong&gt;: Within a session, DeerFlow manages context aggressively — summarizing completed sub-tasks, offloading intermediate results to the filesystem, compressing what&#39;s no longer immediately relevant. This lets it stay sharp across long, multi-step tasks without blowing the context window.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Strict Tool-Call Recovery&lt;/strong&gt;: When a provider or middleware interrupts a tool-call loop, DeerFlow now strips provider-level raw tool-call metadata on forced-stop assistant messages and injects placeholder tool results for dangling calls before the next model invocation. This keeps OpenAI-compatible reasoning models that strictly validate &lt;code&gt;tool_call_id&lt;/code&gt; sequences from failing with malformed history errors.&lt;/p&gt; 
&lt;h3&gt;Long-Term Memory&lt;/h3&gt; 
&lt;p&gt;Most agents forget everything the moment a conversation ends. DeerFlow remembers.&lt;/p&gt; 
&lt;p&gt;Across sessions, DeerFlow builds a persistent memory of your profile, preferences, and accumulated knowledge. The more you use it, the better it knows you — your writing style, your technical stack, your recurring workflows. Memory is stored locally and stays under your control.&lt;/p&gt; 
&lt;p&gt;Memory updates now skip duplicate fact entries at apply time, so repeated preferences and context do not accumulate endlessly across sessions.&lt;/p&gt; 
&lt;h2&gt;Recommended Models&lt;/h2&gt; 
&lt;p&gt;DeerFlow is model-agnostic — it works with any LLM that implements the OpenAI-compatible API. That said, it performs best with models that support:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Long context windows&lt;/strong&gt; (100k+ tokens) for deep research and multi-step tasks&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Reasoning capabilities&lt;/strong&gt; for adaptive planning and complex decomposition&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Multimodal inputs&lt;/strong&gt; for image understanding and video comprehension&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Strong tool-use&lt;/strong&gt; for reliable function calling and structured outputs&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Embedded Python Client&lt;/h2&gt; 
&lt;p&gt;DeerFlow can be used as an embedded Python library without running the full HTTP services. (The HTTP Gateway also exposes &lt;code&gt;DELETE /api/threads/{thread_id}&lt;/code&gt; to remove DeerFlow-managed local thread data after the LangGraph thread itself has been deleted.) The &lt;code&gt;DeerFlowClient&lt;/code&gt; provides direct in-process access to all agent and Gateway capabilities, returning the same response schemas as the HTTP Gateway API:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from deerflow.client import DeerFlowClient

client = DeerFlowClient()

# Chat
response = client.chat(&quot;Analyze this paper for me&quot;, thread_id=&quot;my-thread&quot;)

# Streaming (LangGraph SSE protocol: values, messages-tuple, end)
for event in client.stream(&quot;hello&quot;):
    if event.type == &quot;messages-tuple&quot; and event.data.get(&quot;type&quot;) == &quot;ai&quot;:
        print(event.data[&quot;content&quot;])

# Configuration &amp;amp; management — returns Gateway-aligned dicts
models = client.list_models()        # {&quot;models&quot;: [...]}
skills = client.list_skills()        # {&quot;skills&quot;: [...]}
client.update_skill(&quot;web-search&quot;, enabled=True)
client.upload_files(&quot;thread-1&quot;, [&quot;./report.pdf&quot;])  # {&quot;success&quot;: True, &quot;files&quot;: [...]}
&lt;/code&gt;&lt;/pre&gt; 
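&lt;p&gt;For the &lt;code&gt;DELETE&lt;/code&gt; route mentioned above, a minimal sketch (assuming the Gateway is reachable through the default proxy at &lt;code&gt;http://localhost:2026&lt;/code&gt;) could look like:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Remove DeerFlow-managed local data for a thread whose LangGraph thread was already deleted
curl -X DELETE http://localhost:2026/api/threads/my-thread
&lt;/code&gt;&lt;/pre&gt; 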
&lt;p&gt;All dict-returning methods are validated against Gateway Pydantic response models in CI (&lt;code&gt;TestGatewayConformance&lt;/code&gt;), ensuring the embedded client stays in sync with the HTTP API schemas. See &lt;code&gt;backend/packages/harness/deerflow/client.py&lt;/code&gt; for full API documentation.&lt;/p&gt; 
&lt;h2&gt;Documentation&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/CONTRIBUTING.md&quot;&gt;Contributing Guide&lt;/a&gt; - Development environment setup and workflow&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/backend/docs/CONFIGURATION.md&quot;&gt;Configuration Guide&lt;/a&gt; - Setup and configuration instructions&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/backend/CLAUDE.md&quot;&gt;Architecture Overview&lt;/a&gt; - Technical architecture details&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/backend/README.md&quot;&gt;Backend Architecture&lt;/a&gt; - Backend architecture and API reference&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;⚠️ Security Notice&lt;/h2&gt; 
&lt;h3&gt;Improper Deployment May Introduce Security Risks&lt;/h3&gt; 
&lt;p&gt;DeerFlow has key high-privilege capabilities including &lt;strong&gt;system command execution, resource operations, and business logic invocation&lt;/strong&gt;, and is designed by default to be &lt;strong&gt;deployed in a local trusted environment (accessible only via the 127.0.0.1 loopback interface)&lt;/strong&gt;. If you deploy the agent in untrusted environments — such as LAN networks, public cloud servers, or other multi-endpoint accessible environments — without strict security measures, it may introduce security risks, including:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Unauthorized invocation&lt;/strong&gt;: Agent functionality could be discovered by unauthorized third parties or malicious internet scanners and triggered through bulk requests that execute high-risk operations such as system commands and file reads/writes, with potentially serious security consequences.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Compliance and legal risks&lt;/strong&gt;: If the agent is abused to conduct cyberattacks, data theft, or other illegal activities, the operator may face legal liability and compliance exposure.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Security Recommendations&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Note: We strongly recommend deploying DeerFlow in a local trusted network environment.&lt;/strong&gt; If you need cross-device or cross-network deployment, you must implement strict security measures, such as:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;IP allowlist&lt;/strong&gt;: Use &lt;code&gt;iptables&lt;/code&gt;, or deploy hardware firewalls / switches with Access Control Lists (ACL), to &lt;strong&gt;configure IP allowlist rules&lt;/strong&gt; and deny access from all other IP addresses (see the sketch after this list).&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Authentication gateway&lt;/strong&gt;: Configure a reverse proxy (e.g., nginx) and &lt;strong&gt;enable strong pre-authentication&lt;/strong&gt;, blocking any unauthenticated access.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Network isolation&lt;/strong&gt;: Where possible, place the agent and trusted devices in the &lt;strong&gt;same dedicated VLAN&lt;/strong&gt;, isolated from other network devices.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Stay updated&lt;/strong&gt;: Continue to follow DeerFlow&#39;s security feature updates.&lt;/li&gt; 
&lt;/ul&gt; 
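&lt;p&gt;As a minimal sketch of the allowlist idea (adjust port, interface, and trusted address to your deployment; &lt;code&gt;203.0.113.10&lt;/code&gt; is only a documentation example):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Allow one trusted client to reach the DeerFlow web port, then drop everyone else
sudo iptables -A INPUT -p tcp --dport 2026 -s 203.0.113.10 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 2026 -j DROP
&lt;/code&gt;&lt;/pre&gt; 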
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;We welcome contributions! Please see &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/CONTRIBUTING.md&quot;&gt;CONTRIBUTING.md&lt;/a&gt; for development setup, workflow, and guidelines.&lt;/p&gt; 
&lt;p&gt;Regression coverage includes Docker sandbox mode detection and provisioner kubeconfig-path handling tests in &lt;code&gt;backend/tests/&lt;/code&gt;. Gateway artifact serving now forces active web content types (&lt;code&gt;text/html&lt;/code&gt;, &lt;code&gt;application/xhtml+xml&lt;/code&gt;, &lt;code&gt;image/svg+xml&lt;/code&gt;) to download as attachments instead of inline rendering, reducing XSS risk for generated artifacts.&lt;/p&gt; 
&lt;h2&gt;License&lt;/h2&gt; 
&lt;p&gt;This project is open source and available under the &lt;a href=&quot;https://raw.githubusercontent.com/bytedance/deer-flow/main/LICENSE&quot;&gt;MIT License&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Acknowledgments&lt;/h2&gt; 
&lt;p&gt;DeerFlow is built upon the incredible work of the open-source community. We are deeply grateful to all the projects and contributors whose efforts have made DeerFlow possible. Truly, we stand on the shoulders of giants.&lt;/p&gt; 
&lt;p&gt;We would like to extend our sincere appreciation to the following projects for their invaluable contributions:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/langchain-ai/langchain&quot;&gt;LangChain&lt;/a&gt;&lt;/strong&gt;: Their exceptional framework powers our LLM interactions and chains, enabling seamless integration and functionality.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/langchain-ai/langgraph&quot;&gt;LangGraph&lt;/a&gt;&lt;/strong&gt;: Their innovative approach to multi-agent orchestration has been instrumental in enabling DeerFlow&#39;s sophisticated workflows.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;These projects exemplify the transformative power of open-source collaboration, and we are proud to build upon their foundations.&lt;/p&gt; 
&lt;h3&gt;Key Contributors&lt;/h3&gt; 
&lt;p&gt;A heartfelt thank you goes out to the core authors of &lt;code&gt;DeerFlow&lt;/code&gt;, whose vision, passion, and dedication have brought this project to life:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/hetaoBackend/&quot;&gt;Daniel Walnut&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/magiccube/&quot;&gt;Henry Li&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Your unwavering commitment and expertise have been the driving force behind DeerFlow&#39;s success. We are honored to have you at the helm of this journey.&lt;/p&gt; 
&lt;h2&gt;Star History&lt;/h2&gt; 
&lt;p&gt;&lt;a href=&quot;https://star-history.com/#bytedance/deer-flow&amp;amp;Date&quot;&gt;&lt;img src=&quot;https://api.star-history.com/svg?repos=bytedance/deer-flow&amp;amp;type=Date&quot; alt=&quot;Star History Chart&quot; /&gt;&lt;/a&gt;&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/c798534cb4c906ff10b4c3e84eca43a6dae9f3246afc59a1a204fa0c1deea0a3/bytedance/deer-flow" medium="image" />
      
    </item>
    
    <item>
      <title>siddharthvaddem/openscreen</title>
      <link>https://github.com/siddharthvaddem/openscreen</link>
      <description>&lt;p&gt;Create stunning demos for free. Open-source, no subscriptions, no watermarks, and free for commercial use. An alternative to Screen Studio.&lt;/p&gt;&lt;hr&gt;&lt;div class=&quot;markdown-alert markdown-alert-warning&quot;&gt;
 &lt;p class=&quot;markdown-alert-title&quot;&gt;
  &lt;svg class=&quot;octicon octicon-alert mr-2&quot; viewbox=&quot;0 0 16 16&quot; version=&quot;1.1&quot; width=&quot;16&quot; height=&quot;16&quot; aria-hidden=&quot;true&quot;&gt;
   &lt;path d=&quot;M6.457 1.047c.659-1.234 2.427-1.234 3.086 0l6.082 11.378A1.75 1.75 0 0 1 14.082 15H1.918a1.75 1.75 0 0 1-1.543-2.575Zm1.763.707a.25.25 0 0 0-.44 0L1.698 13.132a.25.25 0 0 0 .22.368h12.164a.25.25 0 0 0 .22-.368Zm.53 3.996v2.5a.75.75 0 0 1-1.5 0v-2.5a.75.75 0 0 1 1.5 0ZM9 11a1 1 0 1 1-2 0 1 1 0 0 1 2 0Z&quot;&gt;&lt;/path&gt;
  &lt;/svg&gt;Warning&lt;/p&gt;
 &lt;p&gt;This is very much in beta and might be buggy here and there (but hope you have a good experience!).&lt;/p&gt; 
&lt;/div&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/siddharthvaddem/openscreen/main/public/openscreen.png&quot; alt=&quot;OpenScreen Logo&quot; width=&quot;64&quot; /&gt; &lt;br /&gt; &lt;br /&gt; &lt;a href=&quot;https://deepwiki.com/siddharthvaddem/openscreen&quot;&gt; &lt;img src=&quot;https://deepwiki.com/badge.svg?sanitize=true&quot; alt=&quot;Ask DeepWiki&quot; /&gt; &lt;/a&gt; &amp;nbsp; &lt;a href=&quot;https://discord.gg/yAQQhRaEeg&quot;&gt; &lt;img src=&quot;https://img.shields.io/discord/pHAUbcqNd?logo=discord&amp;amp;label=Discord&amp;amp;color=5865F2&quot; alt=&quot;Join Discord&quot; /&gt; &lt;/a&gt; &lt;/p&gt; 
&lt;h1&gt;&lt;p align=&quot;center&quot;&gt;OpenScreen&lt;/p&gt;&lt;/h1&gt; 
&lt;p align=&quot;center&quot;&gt;&lt;strong&gt;OpenScreen is your free, open-source alternative to Screen Studio (sort of).&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;If you don&#39;t want to pay $29/month for Screen Studio but want a much simpler version that does what most people actually need (beautiful product demos and walkthroughs), here&#39;s a free-to-use app for you. OpenScreen does not offer every Screen Studio feature, but it covers the basics well!&lt;/p&gt; 
&lt;p&gt;Screen Studio is an awesome product and this is definitely not a 1:1 clone. OpenScreen is a much simpler take, just the basics for folks who want control and don&#39;t want to pay. If you need all the fancy features, your best bet is to support Screen Studio (they really do a great job, haha). But if you just want something free (no gotchas) and open, this project does the job!&lt;/p&gt; 
&lt;p&gt;OpenScreen is 100% free for personal and commercial use. Use it, modify it, distribute it. (Just be cool 😁 and give a shoutout if you feel like it!)&lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/siddharthvaddem/openscreen/main/public/preview3.png&quot; alt=&quot;OpenScreen App Preview 3&quot; style=&quot;height: 0.2467; margin-right: 12px;&quot; /&gt; &lt;img src=&quot;https://raw.githubusercontent.com/siddharthvaddem/openscreen/main/public/preview4.png&quot; alt=&quot;OpenScreen App Preview 4&quot; style=&quot;height: 0.1678; margin-right: 12px;&quot; /&gt; &lt;/p&gt; 
&lt;h2&gt;Core Features&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;Record specific windows or your whole screen.&lt;/li&gt; 
 &lt;li&gt;Add automatic or manual zooms (adjustable depth levels) and customize their duration and position.&lt;/li&gt; 
 &lt;li&gt;Record microphone and system audio.&lt;/li&gt; 
 &lt;li&gt;Crop video recordings to hide parts.&lt;/li&gt; 
 &lt;li&gt;Choose between wallpapers, solid colors, gradients or a custom background.&lt;/li&gt; 
 &lt;li&gt;Motion blur for smoother pan and zoom effects.&lt;/li&gt; 
 &lt;li&gt;Add annotations (text, arrows, images).&lt;/li&gt; 
 &lt;li&gt;Trim sections of the clip.&lt;/li&gt; 
 &lt;li&gt;Customize the speed of different segments.&lt;/li&gt; 
 &lt;li&gt;Export in different aspect ratios and resolutions.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Installation&lt;/h2&gt; 
&lt;p&gt;Download the latest installer for your platform from the &lt;a href=&quot;https://github.com/siddharthvaddem/openscreen/releases&quot;&gt;GitHub Releases&lt;/a&gt; page.&lt;/p&gt; 
&lt;h3&gt;macOS&lt;/h3&gt; 
&lt;p&gt;If you encounter issues with macOS Gatekeeper blocking the app (since it does not come with a developer certificate), you can bypass this by running the following command in your terminal after installation:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;xattr -rd com.apple.quarantine /Applications/Openscreen.app
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Note: Give your terminal Full Disk Access in &lt;strong&gt;System Settings &amp;gt; Privacy &amp;amp; Security&lt;/strong&gt;, then run the above command.&lt;/p&gt; 
&lt;p&gt;After running this command, go to &lt;strong&gt;System Settings &amp;gt; Privacy &amp;amp; Security&lt;/strong&gt; to grant the &quot;screen recording&quot; and &quot;accessibility&quot; permissions. Once permissions are granted, you can launch the app.&lt;/p&gt; 
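&lt;p&gt;A quick way to confirm the quarantine flag is actually gone (a minimal check, assuming the app was installed to &lt;code&gt;/Applications&lt;/code&gt;):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# List the bundle&#39;s extended attributes; com.apple.quarantine should no longer appear
xattr /Applications/Openscreen.app
&lt;/code&gt;&lt;/pre&gt; 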
&lt;h3&gt;Linux&lt;/h3&gt; 
&lt;p&gt;Download the &lt;code&gt;.AppImage&lt;/code&gt; file from the releases page. Make it executable and run:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;chmod +x Openscreen-Linux-*.AppImage
./Openscreen-Linux-*.AppImage
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;You may need to grant screen recording permissions depending on your desktop environment.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If the app fails to launch due to a &quot;sandbox&quot; error, run it with &lt;code&gt;--no-sandbox&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;./Openscreen-Linux-*.AppImage --no-sandbox
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Limitations&lt;/h3&gt; 
&lt;p&gt;System audio capture relies on Electron&#39;s &lt;a href=&quot;https://www.electronjs.org/docs/latest/api/desktop-capturer&quot;&gt;desktopCapturer&lt;/a&gt; and has some platform-specific quirks:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;macOS&lt;/strong&gt;: Requires macOS 13+. On macOS 14.2+ you&#39;ll be prompted to grant audio capture permission. macOS 12 and below does not support system audio (mic still works).&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Windows&lt;/strong&gt;: Works out of the box.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Linux&lt;/strong&gt;: Needs PipeWire (default on Ubuntu 22.04+, Fedora 34+). Older PulseAudio-only setups may not support system audio (mic should still work).&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Built with&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;Electron&lt;/li&gt; 
 &lt;li&gt;React&lt;/li&gt; 
 &lt;li&gt;TypeScript&lt;/li&gt; 
 &lt;li&gt;Vite&lt;/li&gt; 
 &lt;li&gt;PixiJS&lt;/li&gt; 
 &lt;li&gt;dnd-timeline&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;p&gt;&lt;em&gt;I&#39;m new to open source, idk what I&#39;m doing lol. If something is wrong please raise an issue 🙏&lt;/em&gt;&lt;/p&gt; 
&lt;h2&gt;Documentation&lt;/h2&gt; 
&lt;p&gt;See the documentation here: &lt;a href=&quot;https://deepwiki.com/siddharthvaddem/openscreen&quot;&gt;OpenScreen Docs&lt;/a&gt;&lt;/p&gt; 
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;Contributions are welcome! If you’d like to help out, take a look at the open issues and the &lt;a href=&quot;https://github.com/users/siddharthvaddem/projects/3&quot;&gt;project roadmap&lt;/a&gt; to see what’s currently being worked on and find ways to contribute.&lt;/p&gt; 
&lt;h2&gt;License&lt;/h2&gt; 
&lt;p&gt;This project is licensed under the &lt;a href=&quot;https://raw.githubusercontent.com/siddharthvaddem/openscreen/main/LICENSE&quot;&gt;MIT License&lt;/a&gt;. By using this software, you agree that the authors are not liable for any issues, damages, or claims arising from its use.&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/84553f63be39e17d6abbce548952f2506365682a9990732c61d05bd24f843870/siddharthvaddem/openscreen" medium="image" />
      
    </item>
    
    <item>
      <title>FujiwaraChoki/MoneyPrinterV2</title>
      <link>https://github.com/FujiwaraChoki/MoneyPrinterV2</link>
      <description>&lt;p&gt;Automate the process of making money online.&lt;/p&gt;&lt;hr&gt;&lt;h1&gt;MoneyPrinter V2&lt;/h1&gt; 
&lt;p&gt;Sponsored by Post Bridge&lt;/p&gt; 
&lt;a href=&quot;https://www.post-bridge.com/?ref=moneyprinter&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/FujiwaraChoki/MoneyPrinterV2/main/docs/repo/PostBridgeBanner.png&quot; alt=&quot;Post Bridge integration banner&quot; width=&quot;720&quot; /&gt; &lt;/a&gt; 
&lt;p&gt;&lt;a href=&quot;https://github.com/FujiwaraChoki/MoneyPrinterV2&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&amp;amp;labelColor=orange&quot; alt=&quot;madewithlove&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p&gt;&lt;a href=&quot;https://www.buymeacoffee.com/fujicodes&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Buy%20Me%20A%20Coffee-Donate-brightgreen?logo=buymeacoffee&quot; alt=&quot;Buy Me A Coffee&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/FujiwaraChoki/MoneyPrinterV2/raw/main/LICENSE&quot;&gt;&lt;img src=&quot;https://img.shields.io/github/license/FujiwaraChoki/MoneyPrinterV2?style=for-the-badge&quot; alt=&quot;GitHub license&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/FujiwaraChoki/MoneyPrinterV2/issues&quot;&gt;&lt;img src=&quot;https://img.shields.io/github/issues/FujiwaraChoki/MoneyPrinterV2?style=for-the-badge&quot; alt=&quot;GitHub issues&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/FujiwaraChoki/MoneyPrinterV2/stargazers&quot;&gt;&lt;img src=&quot;https://img.shields.io/github/stars/FujiwaraChoki/MoneyPrinterV2?style=for-the-badge&quot; alt=&quot;GitHub stars&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://dsc.gg/fuji-community&quot;&gt;&lt;img src=&quot;https://img.shields.io/discord/1134848537704804432?style=for-the-badge&quot; alt=&quot;Discord&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p&gt;An application that automates the process of making money online. MPV2 (MoneyPrinter Version 2) is, as the name suggests, the second version of the MoneyPrinter project. It is a complete rewrite of the original project, with a focus on a wider range of features and a more modular architecture.&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; MPV2 needs Python 3.12 to function effectively. Watch the YouTube video &lt;a href=&quot;https://youtu.be/wAZ_ZSuIqfk&quot;&gt;here&lt;/a&gt;&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h2&gt;Features&lt;/h2&gt; 
&lt;ul class=&quot;task-list&quot;&gt; 
 &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; id=&quot;cbx_0&quot; checked=&quot;true&quot; disabled=&quot;true&quot; /&gt;&lt;label for=&quot;cbx_0&quot;&gt; Twitter Bot (with CRON Jobs =&amp;gt; &lt;code&gt;scheduler&lt;/code&gt;)&lt;/label&gt;&lt;/li&gt; 
 &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; id=&quot;cbx_1&quot; checked=&quot;true&quot; disabled=&quot;true&quot; /&gt;&lt;label for=&quot;cbx_1&quot;&gt; YouTube Shorts Automator (with CRON Jobs =&amp;gt; &lt;code&gt;scheduler&lt;/code&gt;)&lt;/label&gt;&lt;/li&gt; 
 &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; id=&quot;cbx_2&quot; checked=&quot;true&quot; disabled=&quot;true&quot; /&gt;&lt;label for=&quot;cbx_2&quot;&gt; Affiliate Marketing (Amazon + Twitter)&lt;/label&gt;&lt;/li&gt; 
 &lt;li class=&quot;task-list-item&quot;&gt;&lt;input type=&quot;checkbox&quot; id=&quot;cbx_3&quot; checked=&quot;true&quot; disabled=&quot;true&quot; /&gt;&lt;label for=&quot;cbx_3&quot;&gt; Find local businesses &amp;amp; cold outreach&lt;/label&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Versions&lt;/h2&gt; 
&lt;p&gt;MoneyPrinter has different versions for multiple languages developed by the community for the community. Here are some known versions:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Chinese: &lt;a href=&quot;https://github.com/harry0703/MoneyPrinterTurbo&quot;&gt;MoneyPrinterTurbo&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;If you would like to submit your own version/fork of MoneyPrinter, please open an issue describing the changes you made to the fork.&lt;/p&gt; 
&lt;h2&gt;Installation&lt;/h2&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;⚠️ If you are planning to reach out to scraped businesses by email, please first install the &lt;a href=&quot;https://golang.org/&quot;&gt;Go Programming Language&lt;/a&gt; (a quick check follows this note).&lt;/p&gt; 
&lt;/blockquote&gt; 
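&lt;p&gt;If you are unsure whether Go is already installed, here is a minimal check (this is the standard Go CLI, not an MPV2 command):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Prints the installed Go toolchain version; a &quot;command not found&quot; error means Go still needs to be installed
go version
&lt;/code&gt;&lt;/pre&gt; 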
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/FujiwaraChoki/MoneyPrinterV2.git

cd MoneyPrinterV2
# Copy Example Configuration and fill out values in config.json
cp config.example.json config.json

# Create a virtual environment
python -m venv venv

# Activate the virtual environment - Windows
.\venv\Scripts\activate

# Activate the virtual environment - Unix
source venv/bin/activate

# Install the requirements
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Usage&lt;/h2&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Run the application
python src/main.py
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Documentation&lt;/h2&gt; 
&lt;p&gt;All relevant documents can be found &lt;a href=&quot;https://raw.githubusercontent.com/FujiwaraChoki/MoneyPrinterV2/main/docs/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Scripts&lt;/h2&gt; 
&lt;p&gt;For easier usage, there are some scripts in the &lt;code&gt;scripts&lt;/code&gt; directory that can be used to directly access the core functionality of MPV2 without the need for user interaction.&lt;/p&gt; 
&lt;p&gt;All scripts need to be run from the root directory of the project, e.g. &lt;code&gt;bash scripts/upload_video.sh&lt;/code&gt;.&lt;/p&gt; 
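&lt;p&gt;For example, a typical invocation from the project root, using the upload script mentioned above (other scripts in the &lt;code&gt;scripts&lt;/code&gt; directory follow the same pattern; activating the virtual environment first is an assumption, in case the script calls into Python):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Run from the root of the repository
cd MoneyPrinterV2
source venv/bin/activate
bash scripts/upload_video.sh
&lt;/code&gt;&lt;/pre&gt; 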
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;Please read &lt;a href=&quot;https://raw.githubusercontent.com/FujiwaraChoki/MoneyPrinterV2/main/CONTRIBUTING.md&quot;&gt;CONTRIBUTING.md&lt;/a&gt; for details on our code of conduct, and the process for submitting pull requests to us. Check out &lt;a href=&quot;https://raw.githubusercontent.com/FujiwaraChoki/MoneyPrinterV2/main/docs/Roadmap.md&quot;&gt;docs/Roadmap.md&lt;/a&gt; for a list of features that need to be implemented.&lt;/p&gt; 
&lt;h2&gt;Code of Conduct&lt;/h2&gt; 
&lt;p&gt;Please read &lt;a href=&quot;https://raw.githubusercontent.com/FujiwaraChoki/MoneyPrinterV2/main/CODE_OF_CONDUCT.md&quot;&gt;CODE_OF_CONDUCT.md&lt;/a&gt; for details on our code of conduct.&lt;/p&gt; 
&lt;h2&gt;License&lt;/h2&gt; 
&lt;p&gt;MoneyPrinterV2 is licensed under &lt;code&gt;Affero General Public License v3.0&lt;/code&gt;. See &lt;a href=&quot;https://raw.githubusercontent.com/FujiwaraChoki/MoneyPrinterV2/main/LICENSE&quot;&gt;LICENSE&lt;/a&gt; for more information.&lt;/p&gt; 
&lt;h2&gt;Acknowledgments&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/KittenML/KittenTTS&quot;&gt;KittenTTS&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/xtekky/gpt4free&quot;&gt;gpt4free&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Disclaimer&lt;/h2&gt; 
&lt;p&gt;This project is for educational purposes only. The author will not be responsible for any misuse of the information provided. All the information on this website is published in good faith and for general information purposes only. The author does not make any warranties about the completeness, reliability, and accuracy of this information. Any action you take upon the information you find on this website (FujiwaraChoki/MoneyPrinterV2) is strictly at your own risk. The author will not be liable for any losses and/or damages in connection with the use of our website.&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/df8bdb7e9a52adbace0ae4056035f1578efb8881c01dbba438580108af0046d8/FujiwaraChoki/MoneyPrinterV2" medium="image" />
      
    </item>
    
    <item>
      <title>google-ai-edge/gallery</title>
      <link>https://github.com/google-ai-edge/gallery</link>
      <description>&lt;p&gt;A gallery that showcases on-device ML/GenAI use cases and allows people to try and use models locally.&lt;/p&gt;&lt;hr&gt;&lt;h1&gt;Google AI Edge Gallery ✨&lt;/h1&gt; 
&lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/google-ai-edge/gallery/main/LICENSE&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/License-Apache%202.0-blue.svg?sanitize=true&quot; alt=&quot;License&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/google-ai-edge/gallery/releases&quot;&gt;&lt;img src=&quot;https://img.shields.io/github/v/release/google-ai-edge/gallery&quot; alt=&quot;GitHub release (latest by date)&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Explore, Experience, and Evaluate the Future of On-Device Generative AI with Google AI Edge.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;AI Edge Gallery is the premier destination for running the world&#39;s most powerful open-source Large Language Models (LLMs) on your mobile device. Experience high-performance Generative AI directly on your hardware—fully offline, private, and lightning-fast.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Now Featuring: Gemma 4&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;The latest version brings official support for the newly released Gemma 4 family. As the centerpiece of this release, Gemma 4 allows you to test the cutting edge of on-device AI. Experience advanced reasoning, logic, and creative capabilities without ever sending your data to a server.&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Install the app today from Google Play&lt;/strong&gt;&lt;/th&gt; 
   &lt;th style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Install the app today from App Store&lt;/strong&gt;&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;a href=&quot;https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&quot;&gt;&lt;img alt=&quot;Get it on Google Play&quot; height=&quot;120&quot; src=&quot;https://play.google.com/intl/en_us/badges/static/images/badges/en_badge_web_generic.png&quot; /&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;a href=&quot;https://apps.apple.com/us/app/google-ai-edge-gallery/id6749645337?itscg=30200&amp;amp;itsct=apps_box_badge&amp;amp;mttnsubad=6749645337&quot; style=&quot;display: inline-block;&quot;&gt; &lt;img src=&quot;https://toolbox.marketingtools.apple.com/api/v2/badges/download-on-the-app-store/black/en-us?releaseDate=1771977600&quot; alt=&quot;Download on the App Store&quot; style=&quot;width: 246px; height: 90px; vertical-align: middle; object-fit: contain;&quot; /&gt;&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;For users without Google Play access, install the apk from the &lt;a href=&quot;https://github.com/google-ai-edge/gallery/releases/latest/&quot;&gt;&lt;strong&gt;latest release&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;h2&gt;App Preview&lt;/h2&gt; 
&lt;img width=&quot;480&quot; alt=&quot;01&quot; src=&quot;https://github.com/user-attachments/assets/a809ad78-aef4-4169-91ee-de7213cbb3bd&quot; /&gt; 
&lt;img width=&quot;480&quot; alt=&quot;02&quot; src=&quot;https://github.com/user-attachments/assets/1effd10d-f45a-4f7b-9435-f50f1bdd36b6&quot; /&gt; 
&lt;img width=&quot;480&quot; alt=&quot;03&quot; src=&quot;https://github.com/user-attachments/assets/e5089e41-2c18-4fbe-9011-ebe9e5a02044&quot; /&gt; 
&lt;img width=&quot;480&quot; alt=&quot;04&quot; src=&quot;https://github.com/user-attachments/assets/0f39d3ed-7403-4606-a7c6-b2c7e51ba6c1&quot; /&gt; 
&lt;img width=&quot;480&quot; alt=&quot;05&quot; src=&quot;https://github.com/user-attachments/assets/8c229e96-b598-4735-9f60-e96907e1d5d5&quot; /&gt; 
&lt;img width=&quot;480&quot; alt=&quot;06&quot; src=&quot;https://github.com/user-attachments/assets/ac9fb77b-81de-4197-9ed3-f6fe58290b3e&quot; /&gt; 
&lt;img width=&quot;480&quot; alt=&quot;07&quot; src=&quot;https://github.com/user-attachments/assets/bc86ba07-2eaf-49b1-980f-8a87a85c596f&quot; /&gt; 
&lt;img width=&quot;480&quot; alt=&quot;08&quot; src=&quot;https://github.com/user-attachments/assets/061564ed-030f-4630-810b-13a7863fce4c&quot; /&gt; 
&lt;h2&gt;✨ Core Features&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Agent Skills&lt;/strong&gt;: Transform your LLM from a conversationalist into a proactive assistant. Use the Agent Skills tile to augment model capabilities with tools like Wikipedia for fact-grounding, interactive maps, and rich visual summary cards. You can even load modular skills from a URL or browse community contributions on GitHub Discussions.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;AI Chat with Thinking Mode&lt;/strong&gt;: Engage in fluid, multi-turn conversations and toggle the new Thinking Mode to peek &quot;under the hood.&quot; This feature allows you to see the model’s step-by-step reasoning process, which is perfect for understanding complex problem-solving. Note: Thinking Mode currently works with supported models, starting with the Gemma 4 family.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Ask Image&lt;/strong&gt;: Use multimodal power to identify objects, solve visual puzzles, or get detailed descriptions using your device’s camera or photo gallery.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Audio Scribe&lt;/strong&gt;: Transcribe and translate voice recordings into text in real-time using high-efficiency on-device language models.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Prompt Lab&lt;/strong&gt;: A dedicated workspace to test different prompts and single-turn use cases with granular control over model parameters like temperature and top-k.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Mobile Actions&lt;/strong&gt;: Unlock offline device controls and automated tasks powered entirely by a finetune of FunctionGemma 270m.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Tiny Garden&lt;/strong&gt;: A fun, experimental mini-game that uses natural language to plant and harvest a virtual garden using a finetune of FunctionGemma 270m.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;Model Management &amp;amp; Benchmark&lt;/strong&gt;: Gallery is a flexible sandbox for a wide variety of open-source models. Easily download models from the list or load your own custom models. Manage your model library effortlessly and run benchmark tests to understand exactly how each model performs on your specific hardware.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;100% On-Device Privacy&lt;/strong&gt;: All model inferences happen directly on your device hardware. No internet is required, ensuring total privacy for your prompts, images, and sensitive data.&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;🏁 Get Started in Minutes!&lt;/h2&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Check OS Requirement&lt;/strong&gt;: Android 12 and up, and iOS 17 and up.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Download the App:&lt;/strong&gt; 
  &lt;ul&gt; 
   &lt;li&gt;Install the app from &lt;a href=&quot;https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&quot;&gt;Google Play&lt;/a&gt; or &lt;a href=&quot;https://apps.apple.com/us/app/google-ai-edge-gallery/id6749645337&quot;&gt;App Store&lt;/a&gt;.&lt;/li&gt; 
   &lt;li&gt;For users without Google Play access: install the apk from the &lt;a href=&quot;https://github.com/google-ai-edge/gallery/releases/latest/&quot;&gt;&lt;strong&gt;latest release&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Install &amp;amp; Explore:&lt;/strong&gt; For detailed installation instructions (including for corporate devices) and a full user guide, head over to our &lt;a href=&quot;https://github.com/google-ai-edge/gallery/wiki&quot;&gt;&lt;strong&gt;Project Wiki&lt;/strong&gt;&lt;/a&gt;!&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h2&gt;🛠️ Technology Highlights&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Google AI Edge:&lt;/strong&gt; Core APIs and tools for on-device ML.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;LiteRT:&lt;/strong&gt; Lightweight runtime for optimized model execution.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Hugging Face Integration:&lt;/strong&gt; For model discovery and download.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;⌨️ Development&lt;/h2&gt; 
&lt;p&gt;Check out the &lt;a href=&quot;https://raw.githubusercontent.com/google-ai-edge/gallery/main/DEVELOPMENT.md&quot;&gt;development notes&lt;/a&gt; for instructions about how to build the app locally.&lt;/p&gt; 
&lt;h2&gt;🤝 Feedback&lt;/h2&gt; 
&lt;p&gt;This is an &lt;strong&gt;experimental Beta release&lt;/strong&gt;, and your input is crucial!&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;🐞 &lt;strong&gt;Found a bug?&lt;/strong&gt; &lt;a href=&quot;https://github.com/google-ai-edge/gallery/issues/new?assignees=&amp;amp;labels=bug&amp;amp;template=bug_report.md&amp;amp;title=%5BBUG%5D&quot;&gt;Report it here!&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;💡 &lt;strong&gt;Have an idea?&lt;/strong&gt; &lt;a href=&quot;https://github.com/google-ai-edge/gallery/issues/new?assignees=&amp;amp;labels=enhancement&amp;amp;template=feature_request.md&amp;amp;title=%5BFEATURE%5D&quot;&gt;Suggest a feature!&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;📄 License&lt;/h2&gt; 
&lt;p&gt;Licensed under the Apache License, Version 2.0. See the &lt;a href=&quot;https://raw.githubusercontent.com/google-ai-edge/gallery/main/LICENSE&quot;&gt;LICENSE&lt;/a&gt; file for details.&lt;/p&gt; 
&lt;h2&gt;🔗 Useful Links&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/google-ai-edge/gallery/wiki&quot;&gt;&lt;strong&gt;Project Wiki (Detailed Guides)&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://huggingface.co/litert-community&quot;&gt;Hugging Face LiteRT Community&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/google-ai-edge/LiteRT-LM&quot;&gt;LiteRT-LM&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://ai.google.dev/edge&quot;&gt;Google AI Edge Documentation&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/dc59d544abb0a2446d0d0cd56da32a9662a734f4c9ba55301c4b95c26a5a0971/google-ai-edge/gallery" medium="image" />
      
    </item>
    
    <item>
      <title>mvanhorn/last30days-skill</title>
      <link>https://github.com/mvanhorn/last30days-skill</link>
      <description>&lt;p&gt;AI agent skill that researches any topic across Reddit, X, YouTube, HN, Polymarket, and the web - then synthesizes a grounded summary&lt;/p&gt;&lt;hr&gt;&lt;h1&gt;/last30days&lt;/h1&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;a href=&quot;https://github.com/mvanhorn/last30days-skill&quot;&gt; &lt;img src=&quot;https://img.shields.io/badge/%231-Repository%20Of%20The%20Day-6f42c1?style=for-the-badge&amp;amp;logo=github&amp;amp;label=GITHUB%20TRENDING&quot; alt=&quot;GitHub Trending #1 Repository Of The Day&quot; /&gt; &lt;/a&gt; &lt;br /&gt; &lt;a href=&quot;https://trendshift.io/repositories/21997&quot; target=&quot;_blank&quot;&gt; &lt;img src=&quot;https://trendshift.io/api/badge/repositories/21997&quot; alt=&quot;mvanhorn/last30days-skill | Trendshift&quot; style=&quot;width: 250px; height: 55px;&quot; width=&quot;250&quot; height=&quot;55&quot; /&gt; &lt;/a&gt; &lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;An AI agent-led search engine scored by upvotes, likes, and real money - not editors.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;This README tracks the current v3 pipeline. The runtime skill spec lives in &lt;a href=&quot;https://raw.githubusercontent.com/mvanhorn/last30days-skill/main/skills/last30days/SKILL.md&quot;&gt;skills/last30days/SKILL.md&lt;/a&gt;, which is the source of truth for the latest command and setup behavior.&lt;/p&gt; 
&lt;p&gt;Claude Code:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;/plugin marketplace add mvanhorn/last30days-skill
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;OpenClaw:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;clawhub install last30days-official
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Hermes:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;# The skill auto-deploys when you run sync.sh
# Or manually copy to ~/.hermes/skills/research/last30days/
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Zero config. Reddit, HN, Polymarket, and GitHub work immediately. Run it once and the setup wizard unlocks X, YouTube, TikTok, and more in 30 seconds.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;p&gt;Reddit upvotes. X likes. YouTube transcripts. TikTok engagement. Polymarket odds backed by real money and insider information. That&#39;s millions of people voting with their attention and their wallets every day. /last30days searches all of it in parallel, scores it by what real people actually engage with, and an AI agent judge synthesizes it into one brief.&lt;/p&gt; 
&lt;p&gt;Google aggregates editors. /last30days searches people.&lt;/p&gt; 
&lt;p&gt;You can&#39;t get this search anywhere else because no single AI has access to all of it. Google search doesn&#39;t touch Reddit comments or X posts. ChatGPT has a deal with Reddit but can&#39;t search X or TikTok. Gemini has YouTube but not Reddit. Claude has none of them natively. Each platform is a walled garden with its own API, its own tokens, its own auth. But you can bring your own keys and browser sessions, and suddenly an AI agent can search all of them at once, score them against each other, and tell you what actually matters.&lt;/p&gt; 
&lt;p&gt;That&#39;s the unlock. Not one better search engine. A dozen disconnected platforms, bridged by an agent.&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;/last30days Peter Steinberger
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;You have a meeting tomorrow. You Google them. You get their LinkedIn from 2023. /last30days gives you what they&#39;re actually doing this month: joined OpenAI to work on Codex, fighting Anthropic&#39;s ban on third-party agents, shipping 23 PRs at 85% merge rate, building &quot;LobsterOS&quot; for cross-device agent control, and r/ClaudeCode hit 569 upvotes debating whether he&#39;s a hero or &quot;insufferable.&quot; Scattered across X posts, Reddit threads, YouTube transcripts, and GitHub commits. None of it was on Google.&lt;/p&gt; 
&lt;h2&gt;Why this exists&lt;/h2&gt; 
&lt;p&gt;I built it to keep up in AI. Everything changes every day and the Reddit and X nerds are always on top of it first. I needed better prompts, and the training data was always months behind what the community had already figured out.&lt;/p&gt; 
&lt;p&gt;But it turned into something bigger. Now I run it before a sales call to know the last 30 days&#39; truth about a business. Before a meeting to read someone&#39;s recent tweets and podcast transcripts. Before a Disney World trip to know which rides are closed and what the community says about Genie+. Before I build anything to know what problems people are actually hitting.&lt;/p&gt; 
&lt;p&gt;If you&#39;re meeting with a CEO, have you read all their tweets and YouTube transcripts from the last 30 days? I have.&lt;/p&gt; 
&lt;h2&gt;Sources, scored by the people&lt;/h2&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Source&lt;/th&gt; 
   &lt;th&gt;What the people tell you&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Reddit&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;The unfiltered take. Top comments with upvote counts, free via public JSON. The real opinions that Google buries.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;X / Twitter&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;The hot take, the expert thread, the breaking reaction. First to know, first to argue.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;YouTube&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;The 45-minute deep dive. Full transcripts searched for the 5 quotable sentences that matter.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;TikTok&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;The creator reaching 3.6M people with a take you&#39;ll never find on Google.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Instagram Reels&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;The influencer perspective with spoken-word transcripts. The visual culture signal.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Hacker News&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;The developer consensus. 825 points, 899 comments. Where technical people actually argue.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Polymarket&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Not opinions. Odds. Backed by real money. 96% confidence on album sales. 4% on an acquisition.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;GitHub&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;For people: PR velocity, top repos by stars, release notes. For topics: issues and discussions.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Threads&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;The post-Twitter text layer. Conversations from creators and brands.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Pinterest&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Visual discovery. Pins, saves, and comments on products and ideas.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Bluesky&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;The decentralized social layer. AT Protocol posts from the post-Twitter migration.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Perplexity&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Grounded web search with citations via Sonar Pro.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Web&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;The editorial coverage, the blog comparisons. One signal of many, not the only one.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;Community contributors keep adding more. Truth Social, Xiaohongshu (RED), and others are in the engine with more on the way.&lt;/p&gt; 
&lt;p&gt;A Reddit thread with 1,500 upvotes is a stronger signal than a blog post nobody read. A TikTok with 3.6M views tells you more about what&#39;s culturally relevant than a press release. Polymarket odds backed by $66K in volume are harder to argue with than a pundit&#39;s guess.&lt;/p&gt; 
&lt;p&gt;The synthesis ranks by what real people actually engaged with. Social relevancy, not SEO relevancy.&lt;/p&gt; 
&lt;h2&gt;What people actually use it for&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;Before a meeting.&lt;/strong&gt; &lt;code&gt;/last30days Peter Steinberger&lt;/code&gt; - joined OpenAI&#39;s Codex team, fighting Anthropic&#39;s ban on third-party agents, 23 PRs merged at 85% merge rate on GitHub, building LobsterOS for cross-device agent control. r/ClaudeCode: &quot;Ever since OpenClaw released, it was widely known that if you run it through anything other than the API, you were gonna get banned eventually&quot; (227 upvotes). That&#39;s not on LinkedIn.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;When something drops.&lt;/strong&gt; &lt;code&gt;/last30days Kanye West&lt;/code&gt; - UK blocked his visa, Wireless Festival canceled, sponsors fled. But BULLY debuted #2 on Billboard. Fantano came back from his &quot;Yay sabbatical&quot; to review it (653K views). SoFi Homecoming brought out Lauryn Hill and Travis Scott for 44 songs. Polymarket: &quot;Will Kanye tweet again?&quot; 86% Yes. 23 Reddit threads, 17 YouTube videos, 86K upvotes.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;To compare tools.&lt;/strong&gt; &lt;code&gt;/last30days OpenClaw vs Hermes vs Paperclip&lt;/code&gt; - &quot;These aren&#39;t competitors, they&#39;re layers.&quot; OpenClaw is the executor (351K GitHub stars, live), Hermes is the self-improving brain (31K stars), Paperclip is the org chart (49K stars). Star counts pulled live from the GitHub API, not stale blog posts. Side-by-side table with architecture, memory, security, best-for. Per @IMJustinBrooke: &quot;OpenClaw = Charmander, Hermes = Charizard.&quot;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;To understand the world.&lt;/strong&gt; &lt;code&gt;/last30days Iran vs USA&lt;/code&gt; - Day 38 of the war. Trump&#39;s Tuesday deadline for Iran to reopen the Strait of Hormuz. Two US warplanes downed. Oil at $126/barrel. The IEA called it &quot;the largest supply disruption in the history of the global oil market.&quot; Polymarket: ceasefire by Dec 31 at 74%. 27 X posts, 10 YouTube videos, 20 prediction markets.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Before a trip.&lt;/strong&gt; &lt;code&gt;/last30days Universal Epic Universe&lt;/code&gt; - Expansion already under construction. &quot;Project 680&quot; permit filed. Fireworks show confirmed by infrastructure but unannounced. Wait times: Mine-Cart Madness averaging 148 minutes. No annual pass yet, and locals are frustrated. Stardust Racers down for refurbishment through April 5.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;To learn something fast.&lt;/strong&gt; &lt;code&gt;/last30days Nano Banana Pro prompting&lt;/code&gt; - JSON-structured prompts are replacing tag soup. @pictsbyai&#39;s nested format prevents &quot;concept bleeding.&quot; Edit-first workflow beats regeneration. Then it writes you a production prompt using exactly what the community said works.&lt;/p&gt; 
&lt;h2&gt;What v3 Changed&lt;/h2&gt; 
&lt;h3&gt;Intelligent search: the killer feature&lt;/h3&gt; 
&lt;p&gt;The v3 engine doesn&#39;t just search for your topic. It figures out &lt;em&gt;where&lt;/em&gt; to search before the search begins. Type &quot;OpenClaw&quot; and the engine resolves @steipete (Peter Steinberger, the creator), r/openclaw, r/ClaudeCode, and the right YouTube channels and TikTok hashtags - all via a new Python pre-research brain built by &lt;a href=&quot;https://github.com/j-sperling&quot;&gt;@j-sperling&lt;/a&gt;. The old engine searched keywords. The new engine understands your topic first, then searches the right people and communities.&lt;/p&gt; 
&lt;p&gt;This is why v3 finds content v2 never could. &quot;Paperclip&quot; resolves @dotta. &quot;Dave Morin&quot; resolves @davemorin plus @OpenClaw plus the TWiST podcast. &quot;Peter Steinberger&quot; resolves @steipete on X and steipete on GitHub. Bidirectional: person to company, product to founder, name to GitHub profile. The right subreddits, the right handles, the right hashtags - resolved before a single API call fires.&lt;/p&gt; 
&lt;h3&gt;Best Takes&lt;/h3&gt; 
&lt;p&gt;Reddit and X people are funny. The old engine buried their best stuff because it scored for relevance, not cleverness. v3 has a second judge that scores every result for humor, wit, and virality alongside the relevance score. Tommy Lloyd&#39;s &quot;My Michael Jordan is Steve Kerr&quot; scores low on relevance to &quot;Arizona Basketball&quot; but off the charts on fun. Now every brief ends with a &quot;Best Takes&quot; section - the cleverest one-liners, the most viral quotes, the reactions that make you want to share the research. Built in, not a toggle.&lt;/p&gt; 
&lt;h3&gt;Cross-source cluster merging&lt;/h3&gt; 
&lt;p&gt;When the same story appears on Reddit, X, and YouTube, v3 merges them into one cluster instead of showing three separate items. Entity-based overlap detection catches matches even when the titles use different words.&lt;/p&gt; 
&lt;h3&gt;Single-pass comparisons&lt;/h3&gt; 
&lt;p&gt;&quot;CLI vs MCP&quot; used to run three serial passes (12+ minutes). v3 runs one pass with entity-aware subqueries for both sides simultaneously. Same depth, 3 minutes.&lt;/p&gt; 
&lt;h3&gt;GitHub person-mode&lt;/h3&gt; 
&lt;p&gt;When the topic is a person, the engine switches from keyword search to author-scoped queries. Instead of &quot;who mentioned this name in an issue body,&quot; it answers: what are they shipping and where is it landing?&lt;/p&gt; 
&lt;p&gt;&lt;code&gt;/last30days Peter Steinberger --github-user=steipete&lt;/code&gt; shows 22 PRs merged across 3 repos at 85% merge rate. Own projects with README summaries, star counts, and top feature requests. Release notes for what shipped this month. The synthesizer weaves it into the narrative alongside X posts and Reddit threads.&lt;/p&gt; 
&lt;h3&gt;ELI5 mode&lt;/h3&gt; 
&lt;p&gt;Say &quot;eli5 on&quot; after any research run. The synthesis rewrites in plain language. No jargon. Same data, same sources, same citations - just clearer. &quot;Arizona wins by being physical&quot; instead of &quot;Arizona&#39;s identity is paint scoring (50%+ shooting, 9th nationally).&quot; Say &quot;eli5 off&quot; to go back.&lt;/p&gt; 
&lt;h3&gt;Everything else in v3&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Free Reddit comments.&lt;/strong&gt; Public JSON gives you threads + top comments with upvote counts. No API key, no ScrapeCreators. Just works.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;YouTube transcripts that actually work.&lt;/strong&gt; Widened candidate pool 3x past music videos to reach talk/review content with captions.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Threads, Pinterest, YouTube + TikTok comments.&lt;/strong&gt; Opt-in sources via ScrapeCreators. Set &lt;code&gt;INCLUDE_SOURCES=tiktok,instagram&lt;/code&gt; and add threads, pinterest, youtube_comments, tiktok_comments for more (see the sketch after this list). &lt;code&gt;youtube_comments&lt;/code&gt; and &lt;code&gt;tiktok_comments&lt;/code&gt; surface top comments with vote counts the same way Reddit does.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Perplexity Sonar.&lt;/strong&gt; Grounded web search with citations via OpenRouter. Add &lt;code&gt;OPENROUTER_API_KEY&lt;/code&gt; to unlock.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Polymarket noise filtering.&lt;/strong&gt; Common-word disambiguation prevents &quot;Apple&quot; from matching &quot;Will Apple release a car?&quot;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Resilient Reddit.&lt;/strong&gt; Timeout budgets and runtime fallback. One slow thread doesn&#39;t kill the whole run.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Fun judge v2.&lt;/strong&gt; Humor scoring baked into the narrative. Reddit&#39;s cleverest one-liners mixed into the synthesis where they fit, not dumped in a separate section.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Polymarket odds, not dollars.&lt;/strong&gt; The % odds are the magic. Dollar volumes removed from display.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Per-author cap.&lt;/strong&gt; Max 3 items per author prevents any single voice from dominating your brief.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Entity disambiguation.&lt;/strong&gt; When the engine resolves handles, the synthesis trusts them. No more Mallorca resorts winning over Washington athletic clubs.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;OpenClaw first-class citizen.&lt;/strong&gt; Auto-resolve for engine-side pre-research. Device auth for frictionless ScrapeCreators signup.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;1,012 tests passing.&lt;/strong&gt;&lt;/li&gt; 
&lt;/ul&gt; 
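&lt;p&gt;A minimal sketch of enabling the opt-in sources from the list above, assuming &lt;code&gt;INCLUDE_SOURCES&lt;/code&gt; and &lt;code&gt;OPENROUTER_API_KEY&lt;/code&gt; are read as environment variables (both names come from this README; SKILL.md remains the source of truth for setup):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Opt into the ScrapeCreators-backed sources alongside the zero-config defaults
export INCLUDE_SOURCES=tiktok,instagram,threads,pinterest,youtube_comments,tiktok_comments
# Unlocks Perplexity Sonar grounded web search via OpenRouter (placeholder value, not a real key)
export OPENROUTER_API_KEY=your-openrouter-key
&lt;/code&gt;&lt;/pre&gt; 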
&lt;h2&gt;Install&lt;/h2&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Surface&lt;/th&gt; 
   &lt;th&gt;Install&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;&lt;a href=&quot;http://claude.ai&quot;&gt;claude.ai&lt;/a&gt;&lt;/strong&gt; (web)&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/mvanhorn/last30days-skill/releases/latest/download/last30days.skill&quot;&gt;Download &lt;code&gt;last30days.skill&lt;/code&gt;&lt;/a&gt; and upload via Settings &amp;gt; Capabilities &amp;gt; Skills &amp;gt; +&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/plugin marketplace add mvanhorn/last30days-skill&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;OpenClaw&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;clawhub install last30days-official&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Gemini CLI&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Clone then &lt;code&gt;gemini extensions install ./last30days-skill&lt;/code&gt; (see below)&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h3&gt;&lt;a href=&quot;http://claude.ai&quot;&gt;claude.ai&lt;/a&gt; (web)&lt;/h3&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/mvanhorn/last30days-skill/releases/latest/download/last30days.skill&quot;&gt;Download &lt;code&gt;last30days.skill&lt;/code&gt;&lt;/a&gt; from the latest release&lt;/li&gt; 
 &lt;li&gt;Go to &lt;a href=&quot;https://claude.ai/settings/capabilities&quot;&gt;claude.ai Settings &amp;gt; Capabilities &amp;gt; Skills&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;Click the &lt;code&gt;+&lt;/code&gt; button in the Skills panel and drop the file in&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;Enable &quot;Code execution and file creation&quot; under Capabilities first - skills won&#39;t run without it.&lt;/p&gt; 
&lt;h3&gt;Claude Code&lt;/h3&gt; 
&lt;pre&gt;&lt;code&gt;/plugin marketplace add mvanhorn/last30days-skill
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Update later with &lt;code&gt;claude plugin update last30days@last30days-skill&lt;/code&gt;.&lt;/p&gt; 
&lt;h3&gt;OpenClaw&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;clawhub install last30days-official
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Gemini CLI&lt;/h3&gt; 
&lt;p&gt;Gemini CLI v0.9.0 has an upstream installer bug that can fail with &lt;code&gt;Configuration file not found at /tmp/gemini-extensionXXXXXX/gemini-extension.json&lt;/code&gt; (&lt;a href=&quot;https://github.com/google-gemini/gemini-cli/issues/11452&quot;&gt;upstream issue&lt;/a&gt;). Workaround:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/mvanhorn/last30days-skill
gemini extensions install ./last30days-skill
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Manual (developer)&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/mvanhorn/last30days-skill.git ~/.claude/skills/last30days
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Or build the &lt;a href=&quot;http://claude.ai&quot;&gt;claude.ai&lt;/a&gt; &lt;code&gt;.skill&lt;/code&gt; file from source: &lt;code&gt;bash scripts/build-skill.sh&lt;/code&gt; produces &lt;code&gt;dist/last30days.skill&lt;/code&gt;.&lt;/p&gt; 
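&lt;p&gt;Put together, a from-source build might look like this (the clone URL, &lt;code&gt;scripts/build-skill.sh&lt;/code&gt;, and the &lt;code&gt;dist/last30days.skill&lt;/code&gt; output path are all taken from this README):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Clone the repo, build the claude.ai skill bundle, and confirm the artifact exists
git clone https://github.com/mvanhorn/last30days-skill.git
cd last30days-skill
bash scripts/build-skill.sh
ls dist/last30days.skill
&lt;/code&gt;&lt;/pre&gt; 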
&lt;p&gt;Reddit (with comments), Hacker News, Polymarket, and GitHub work immediately. Zero configuration. Run &lt;code&gt;/last30days&lt;/code&gt; once and the setup wizard unlocks more sources in 30 seconds.&lt;/p&gt; 
&lt;h2&gt;Bring your own keys&lt;/h2&gt; 
&lt;p&gt;These platforms don&#39;t have relationships with each other. X doesn&#39;t know what Reddit thinks. YouTube doesn&#39;t see TikTok. But you can bring your own API keys and browser tokens, and suddenly you have access to all of them at once.&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Sources&lt;/th&gt; 
   &lt;th&gt;What you need&lt;/th&gt; 
   &lt;th&gt;Cost&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Reddit (with comments) + HN + Polymarket + GitHub&lt;/td&gt; 
   &lt;td&gt;Nothing&lt;/td&gt; 
   &lt;td&gt;Free&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;X / Twitter&lt;/td&gt; 
   &lt;td&gt;Log into &lt;a href=&quot;http://x.com&quot;&gt;x.com&lt;/a&gt; in any browser&lt;/td&gt; 
   &lt;td&gt;Free&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;YouTube&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;brew install yt-dlp&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Free&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Bluesky&lt;/td&gt; 
   &lt;td&gt;App password from bsky.app&lt;/td&gt; 
   &lt;td&gt;Free&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;TikTok + Instagram + Threads + Pinterest + YouTube comments&lt;/td&gt; 
   &lt;td&gt;ScrapeCreators key&lt;/td&gt; 
   &lt;td&gt;10,000 free calls&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Perplexity Sonar&lt;/td&gt; 
   &lt;td&gt;OpenRouter key&lt;/td&gt; 
   &lt;td&gt;Pay as you go&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Web search&lt;/td&gt; 
   &lt;td&gt;Brave Search key&lt;/td&gt; 
   &lt;td&gt;2,000 free queries/month&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;How it works&lt;/h2&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;You type a topic.&lt;/strong&gt; Person, company, product, technology, &quot;X vs Y.&quot; Anything.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;The agent resolves who matters.&lt;/strong&gt; Finds X handles (including founders), GitHub repos, subreddits, TikTok hashtags, YouTube channels. For &quot;Kanye West&quot; it knows r/hiphopheads, @kanyewest, and &quot;bully review&quot; on YouTube. For &quot;OpenClaw&quot; it resolves openclaw/openclaw on GitHub and fetches live star counts.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;All sources searched in parallel.&lt;/strong&gt; Multi-query expansion. Results scored by engagement, relevance, freshness.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;The depth nobody else has.&lt;/strong&gt; Full YouTube transcripts from reaction videos. Top Reddit comments with upvote counts. TikTok captions. Polymarket odds. Not just titles and links.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Same story, merged.&lt;/strong&gt; Wireless Festival announced on Reddit, discussed on X, ticket prices on TikTok = one cluster, not three separate items.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Synthesized into one brief.&lt;/strong&gt; Grounded in specific data. Cited by source. Ranked by what people actually engage with. Not &quot;here&#39;s what I found.&quot; It&#39;s &quot;here&#39;s what matters.&quot;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Then it becomes your expert.&lt;/strong&gt; After one run, your Claude session knows everything the community knows. Ask follow-up questions. Have it write prompts, draft emails, plan trips, architect systems - all grounded in what&#39;s real right now.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h2&gt;What people are saying&lt;/h2&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&quot;I found a Claude Code skill that researches any topic across Reddit, X, YouTube, and HN from the last 30 days. Then writes the prompts for you. I&#39;ve been manually searching Reddit and X for research before every piece of content I write. Tab by tab. Thread by thread. That&#39;s the part that takes 90 minutes. This eliminates it.&quot; -@itsjasonai&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&quot;This one skill replaced my entire research workflow. You give it a topic, it scrapes Reddit, X, and the web for what people are actually talking about. Not old blog posts. Real conversations from the last 30 days.&quot; -@itswilsoncharles&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&quot;5 of the 10 trending repos on GitHub today are Claude tools. #1: mvanhorn/last30days-skill&quot; -@yieldhunter95&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h2&gt;Open source&lt;/h2&gt; 
&lt;p&gt;MIT license. No tracking. No analytics. Your research stays on your machine. 1,012 tests.&lt;/p&gt; 
&lt;p&gt;Built with Python 3.12+, yt-dlp, Node.js (vendored Bird client for X search), and ScrapeCreators API. v3 engine architecture by &lt;a href=&quot;https://github.com/j-sperling&quot;&gt;@j-sperling&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;See &lt;a href=&quot;https://raw.githubusercontent.com/mvanhorn/last30days-skill/main/CHANGELOG.md&quot;&gt;CHANGELOG.md&lt;/a&gt; for version history.&lt;/p&gt; 
&lt;h2&gt;Star History&lt;/h2&gt; 
&lt;a href=&quot;https://star-history.com/#mvanhorn/last30days-skill&amp;amp;Date&quot;&gt; 
 &lt;picture&gt; 
  &lt;source media=&quot;(prefers-color-scheme: dark)&quot; srcset=&quot;https://api.star-history.com/svg?repos=mvanhorn/last30days-skill&amp;amp;type=Date&amp;amp;theme=dark&quot; /&gt; 
  &lt;source media=&quot;(prefers-color-scheme: light)&quot; srcset=&quot;https://api.star-history.com/svg?repos=mvanhorn/last30days-skill&amp;amp;type=Date&quot; /&gt; 
  &lt;img alt=&quot;Star History Chart&quot; src=&quot;https://api.star-history.com/svg?repos=mvanhorn/last30days-skill&amp;amp;type=Date&quot; /&gt; 
 &lt;/picture&gt; &lt;/a&gt; 
&lt;hr /&gt; 
&lt;p&gt;&lt;strong&gt;@slashlast30days&lt;/strong&gt; · &lt;a href=&quot;https://github.com/mvanhorn/last30days-skill&quot;&gt;github.com/mvanhorn/last30days-skill&lt;/a&gt;&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/8a0356a3039a4e5e8130511221af0d04bb8169d7bffd3383bce0b6050e92f679/mvanhorn/last30days-skill" medium="image" />
      
    </item>
    
    <item>
      <title>affaan-m/everything-claude-code</title>
      <link>https://github.com/affaan-m/everything-claude-code</link>
      <description>&lt;p&gt;The agent harness performance optimization system. Skills, instincts, memory, security, and research-first development for Claude Code, Codex, Opencode, Cursor and beyond.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Language:&lt;/strong&gt; English | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/pt-BR/README.md&quot;&gt;Português (Brasil)&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/README.zh-CN.md&quot;&gt;简体中文&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/zh-TW/README.md&quot;&gt;繁體中文&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/ja-JP/README.md&quot;&gt;日本語&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/ko-KR/README.md&quot;&gt;한국어&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/tr/README.md&quot;&gt;Türkçe&lt;/a&gt;&lt;/p&gt; 
&lt;h1&gt;Everything Claude Code&lt;/h1&gt; 
&lt;p&gt;&lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/stargazers&quot;&gt;&lt;img src=&quot;https://img.shields.io/github/stars/affaan-m/everything-claude-code?style=flat&quot; alt=&quot;Stars&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/network/members&quot;&gt;&lt;img src=&quot;https://img.shields.io/github/forks/affaan-m/everything-claude-code?style=flat&quot; alt=&quot;Forks&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/graphs/contributors&quot;&gt;&lt;img src=&quot;https://img.shields.io/github/contributors/affaan-m/everything-claude-code?style=flat&quot; alt=&quot;Contributors&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://www.npmjs.com/package/ecc-universal&quot;&gt;&lt;img src=&quot;https://img.shields.io/npm/dw/ecc-universal?label=ecc-universal%20weekly%20downloads&amp;amp;logo=npm&quot; alt=&quot;npm ecc-universal&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://www.npmjs.com/package/ecc-agentshield&quot;&gt;&lt;img src=&quot;https://img.shields.io/npm/dw/ecc-agentshield?label=ecc-agentshield%20weekly%20downloads&amp;amp;logo=npm&quot; alt=&quot;npm ecc-agentshield&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/marketplace/ecc-tools&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/GitHub%20App-150%20installs-2ea44f?logo=github&quot; alt=&quot;GitHub App Install&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/LICENSE&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/license-MIT-blue.svg?sanitize=true&quot; alt=&quot;License&quot; /&gt;&lt;/a&gt; &lt;img src=&quot;https://img.shields.io/badge/-Shell-4EAA25?logo=gnu-bash&amp;amp;logoColor=white&quot; alt=&quot;Shell&quot; /&gt; &lt;img src=&quot;https://img.shields.io/badge/-TypeScript-3178C6?logo=typescript&amp;amp;logoColor=white&quot; alt=&quot;TypeScript&quot; /&gt; &lt;img src=&quot;https://img.shields.io/badge/-Python-3776AB?logo=python&amp;amp;logoColor=white&quot; alt=&quot;Python&quot; /&gt; &lt;img src=&quot;https://img.shields.io/badge/-Go-00ADD8?logo=go&amp;amp;logoColor=white&quot; alt=&quot;Go&quot; /&gt; &lt;img src=&quot;https://img.shields.io/badge/-Java-ED8B00?logo=openjdk&amp;amp;logoColor=white&quot; alt=&quot;Java&quot; /&gt; &lt;img src=&quot;https://img.shields.io/badge/-Perl-39457E?logo=perl&amp;amp;logoColor=white&quot; alt=&quot;Perl&quot; /&gt; &lt;img src=&quot;https://img.shields.io/badge/-Markdown-000000?logo=markdown&amp;amp;logoColor=white&quot; alt=&quot;Markdown&quot; /&gt;&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;140K+ stars&lt;/strong&gt; | &lt;strong&gt;21K+ forks&lt;/strong&gt; | &lt;strong&gt;170+ contributors&lt;/strong&gt; | &lt;strong&gt;12+ language ecosystems&lt;/strong&gt; | &lt;strong&gt;Anthropic Hackathon Winner&lt;/strong&gt;&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;hr /&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;p&gt;&lt;strong&gt;Language / 语言 / 語言 / Dil&lt;/strong&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/README.md&quot;&gt;&lt;strong&gt;English&lt;/strong&gt;&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/pt-BR/README.md&quot;&gt;Português (Brasil)&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/README.zh-CN.md&quot;&gt;简体中文&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/zh-TW/README.md&quot;&gt;繁體中文&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/ja-JP/README.md&quot;&gt;日本語&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/ko-KR/README.md&quot;&gt;한국어&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/tr/README.md&quot;&gt;Türkçe&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;hr /&gt; 
&lt;p&gt;&lt;strong&gt;The performance optimization system for AI agent harnesses. From an Anthropic hackathon winner.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Not just configs. A complete system: skills, instincts, memory optimization, continuous learning, security scanning, and research-first development. Production-ready agents, skills, hooks, rules, MCP configurations, and legacy command shims evolved over 10+ months of intensive daily use building real products.&lt;/p&gt; 
&lt;p&gt;Works across &lt;strong&gt;Claude Code&lt;/strong&gt;, &lt;strong&gt;Codex&lt;/strong&gt;, &lt;strong&gt;Cursor&lt;/strong&gt;, &lt;strong&gt;OpenCode&lt;/strong&gt;, &lt;strong&gt;Gemini&lt;/strong&gt;, and other AI agent harnesses.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;The Guides&lt;/h2&gt; 
&lt;p&gt;This repo is the raw code only. The guides explain everything.&lt;/p&gt; 
&lt;table&gt; 
 &lt;tbody&gt;
  &lt;tr&gt; 
   &lt;td width=&quot;33%&quot;&gt; &lt;a href=&quot;https://x.com/affaanmustafa/status/2012378465664745795&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/assets/images/guides/shorthand-guide.png&quot; alt=&quot;The Shorthand Guide to Everything Claude Code&quot; /&gt; &lt;/a&gt; &lt;/td&gt; 
   &lt;td width=&quot;33%&quot;&gt; &lt;a href=&quot;https://x.com/affaanmustafa/status/2014040193557471352&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/assets/images/guides/longform-guide.png&quot; alt=&quot;The Longform Guide to Everything Claude Code&quot; /&gt; &lt;/a&gt; &lt;/td&gt; 
   &lt;td width=&quot;33%&quot;&gt; &lt;a href=&quot;https://x.com/affaanmustafa/status/2033263813387223421&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/assets/images/security/security-guide-header.png&quot; alt=&quot;The Shorthand Guide to Everything Agentic Security&quot; /&gt; &lt;/a&gt; &lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td align=&quot;center&quot;&gt;&lt;b&gt;Shorthand Guide&lt;/b&gt;&lt;br /&gt;Setup, foundations, philosophy. &lt;b&gt;Read this first.&lt;/b&gt;&lt;/td&gt; 
   &lt;td align=&quot;center&quot;&gt;&lt;b&gt;Longform Guide&lt;/b&gt;&lt;br /&gt;Token optimization, memory persistence, evals, parallelization.&lt;/td&gt; 
   &lt;td align=&quot;center&quot;&gt;&lt;b&gt;Security Guide&lt;/b&gt;&lt;br /&gt;Attack vectors, sandboxing, sanitization, CVEs, AgentShield.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt;
&lt;/table&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Topic&lt;/th&gt; 
   &lt;th&gt;What You&#39;ll Learn&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Token Optimization&lt;/td&gt; 
   &lt;td&gt;Model selection, system prompt slimming, background processes&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Memory Persistence&lt;/td&gt; 
   &lt;td&gt;Hooks that save/load context across sessions automatically&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Continuous Learning&lt;/td&gt; 
   &lt;td&gt;Auto-extract patterns from sessions into reusable skills&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Verification Loops&lt;/td&gt; 
   &lt;td&gt;Checkpoint vs continuous evals, grader types, pass@k metrics&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Parallelization&lt;/td&gt; 
   &lt;td&gt;Git worktrees, cascade method, when to scale instances&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Subagent Orchestration&lt;/td&gt; 
   &lt;td&gt;The context problem, iterative retrieval pattern&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;hr /&gt; 
&lt;h2&gt;What&#39;s New&lt;/h2&gt; 
&lt;h3&gt;v1.10.0 — Surface Refresh, Operator Workflows, and ECC 2.0 Alpha (Apr 2026)&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Dashboard GUI&lt;/strong&gt; — New Tkinter-based desktop application (&lt;code&gt;ecc_dashboard.py&lt;/code&gt; or &lt;code&gt;npm run dashboard&lt;/code&gt;) with dark/light theme toggle, font customization, and project logo in header and taskbar.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Public surface synced to the live repo&lt;/strong&gt; — metadata, catalog counts, plugin manifests, and install-facing docs now match the actual OSS surface: 38 agents, 156 skills, and 72 legacy command shims.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Operator and outbound workflow expansion&lt;/strong&gt; — &lt;code&gt;brand-voice&lt;/code&gt;, &lt;code&gt;social-graph-ranker&lt;/code&gt;, &lt;code&gt;connections-optimizer&lt;/code&gt;, &lt;code&gt;customer-billing-ops&lt;/code&gt;, &lt;code&gt;ecc-tools-cost-audit&lt;/code&gt;, &lt;code&gt;google-workspace-ops&lt;/code&gt;, &lt;code&gt;project-flow-ops&lt;/code&gt;, and &lt;code&gt;workspace-surface-audit&lt;/code&gt; round out the operator lane.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Media and launch tooling&lt;/strong&gt; — &lt;code&gt;manim-video&lt;/code&gt;, &lt;code&gt;remotion-video-creation&lt;/code&gt;, and upgraded social publishing surfaces make technical explainers and launch content part of the same system.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Framework and product surface growth&lt;/strong&gt; — &lt;code&gt;nestjs-patterns&lt;/code&gt;, richer Codex/OpenCode install surfaces, and expanded cross-harness packaging keep the repo usable beyond Claude Code alone.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;ECC 2.0 alpha is in-tree&lt;/strong&gt; — the Rust control-plane prototype in &lt;code&gt;ecc2/&lt;/code&gt; now builds locally and exposes &lt;code&gt;dashboard&lt;/code&gt;, &lt;code&gt;start&lt;/code&gt;, &lt;code&gt;sessions&lt;/code&gt;, &lt;code&gt;status&lt;/code&gt;, &lt;code&gt;stop&lt;/code&gt;, &lt;code&gt;resume&lt;/code&gt;, and &lt;code&gt;daemon&lt;/code&gt; commands. It is usable as an alpha, not yet a general release.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Ecosystem hardening&lt;/strong&gt; — AgentShield, ECC Tools cost controls, billing portal work, and website refreshes continue to ship around the core plugin instead of drifting into separate silos.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;v1.9.0 — Selective Install &amp;amp; Language Expansion (Mar 2026)&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Selective install architecture&lt;/strong&gt; — Manifest-driven install pipeline with &lt;code&gt;install-plan.js&lt;/code&gt; and &lt;code&gt;install-apply.js&lt;/code&gt; for targeted component installation. State store tracks what&#39;s installed and enables incremental updates.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;6 new agents&lt;/strong&gt; — &lt;code&gt;typescript-reviewer&lt;/code&gt;, &lt;code&gt;pytorch-build-resolver&lt;/code&gt;, &lt;code&gt;java-build-resolver&lt;/code&gt;, &lt;code&gt;java-reviewer&lt;/code&gt;, &lt;code&gt;kotlin-reviewer&lt;/code&gt;, &lt;code&gt;kotlin-build-resolver&lt;/code&gt; expand language coverage to 10 languages.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;New skills&lt;/strong&gt; — &lt;code&gt;pytorch-patterns&lt;/code&gt; for deep learning workflows, &lt;code&gt;documentation-lookup&lt;/code&gt; for API reference research, &lt;code&gt;bun-runtime&lt;/code&gt; and &lt;code&gt;nextjs-turbopack&lt;/code&gt; for modern JS toolchains, plus 8 operational domain skills and &lt;code&gt;mcp-server-patterns&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Session &amp;amp; state infrastructure&lt;/strong&gt; — SQLite state store with query CLI, session adapters for structured recording, skill evolution foundation for self-improving skills.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Orchestration overhaul&lt;/strong&gt; — Harness audit scoring made deterministic, orchestration status and launcher compatibility hardened, observer loop prevention with 5-layer guard.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Observer reliability&lt;/strong&gt; — Memory explosion fix with throttling and tail sampling, sandbox access fix, lazy-start logic, and re-entrancy guard.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;12 language ecosystems&lt;/strong&gt; — New rules for Java, PHP, Perl, Kotlin/Android/KMP, C++, and Rust join existing TypeScript, Python, Go, and common rules.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Community contributions&lt;/strong&gt; — Korean and Chinese translations, biome hook optimization, video processing skills, operational skills, PowerShell installer, Antigravity IDE support.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;CI hardening&lt;/strong&gt; — 19 test failure fixes, catalog count enforcement, install manifest validation, and full test suite green.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;v1.8.0 — Harness Performance System (Mar 2026)&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Harness-first release&lt;/strong&gt; — ECC is now explicitly framed as an agent harness performance system, not just a config pack.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Hook reliability overhaul&lt;/strong&gt; — SessionStart root fallback, Stop-phase session summaries, and script-based hooks replacing fragile inline one-liners.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Hook runtime controls&lt;/strong&gt; — &lt;code&gt;ECC_HOOK_PROFILE=minimal|standard|strict&lt;/code&gt; and &lt;code&gt;ECC_DISABLED_HOOKS=...&lt;/code&gt; for runtime gating without editing hook files.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;New harness commands&lt;/strong&gt; — &lt;code&gt;/harness-audit&lt;/code&gt;, &lt;code&gt;/loop-start&lt;/code&gt;, &lt;code&gt;/loop-status&lt;/code&gt;, &lt;code&gt;/quality-gate&lt;/code&gt;, &lt;code&gt;/model-route&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;NanoClaw v2&lt;/strong&gt; — model routing, skill hot-load, session branch/search/export/compact/metrics.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Cross-harness parity&lt;/strong&gt; — behavior tightened across Claude Code, Cursor, OpenCode, and Codex app/CLI.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;997 internal tests passing&lt;/strong&gt; — full suite green after hook/runtime refactor and compatibility updates.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;v1.7.0 — Cross-Platform Expansion &amp;amp; Presentation Builder (Feb 2026)&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Codex app + CLI support&lt;/strong&gt; — Direct &lt;code&gt;AGENTS.md&lt;/code&gt;-based Codex support, installer targeting, and Codex docs&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;code&gt;frontend-slides&lt;/code&gt; skill&lt;/strong&gt; — Zero-dependency HTML presentation builder with PPTX conversion guidance and strict viewport-fit rules&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;5 new generic business/content skills&lt;/strong&gt; — &lt;code&gt;article-writing&lt;/code&gt;, &lt;code&gt;content-engine&lt;/code&gt;, &lt;code&gt;market-research&lt;/code&gt;, &lt;code&gt;investor-materials&lt;/code&gt;, &lt;code&gt;investor-outreach&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Broader tool coverage&lt;/strong&gt; — Cursor, Codex, and OpenCode support tightened so the same repo ships cleanly across all major harnesses&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;992 internal tests&lt;/strong&gt; — Expanded validation and regression coverage across plugin, hooks, skills, and packaging&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;v1.6.0 — Codex CLI, AgentShield &amp;amp; Marketplace (Feb 2026)&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Codex CLI support&lt;/strong&gt; — New &lt;code&gt;/codex-setup&lt;/code&gt; command generates &lt;code&gt;codex.md&lt;/code&gt; for OpenAI Codex CLI compatibility&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;7 new skills&lt;/strong&gt; — &lt;code&gt;search-first&lt;/code&gt;, &lt;code&gt;swift-actor-persistence&lt;/code&gt;, &lt;code&gt;swift-protocol-di-testing&lt;/code&gt;, &lt;code&gt;regex-vs-llm-structured-text&lt;/code&gt;, &lt;code&gt;content-hash-cache-pattern&lt;/code&gt;, &lt;code&gt;cost-aware-llm-pipeline&lt;/code&gt;, &lt;code&gt;skill-stocktake&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;AgentShield integration&lt;/strong&gt; — &lt;code&gt;/security-scan&lt;/code&gt; skill runs AgentShield directly from Claude Code; 1282 tests, 102 rules&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;GitHub Marketplace&lt;/strong&gt; — ECC Tools GitHub App live at &lt;a href=&quot;https://github.com/marketplace/ecc-tools&quot;&gt;github.com/marketplace/ecc-tools&lt;/a&gt; with free/pro/enterprise tiers&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;30+ community PRs merged&lt;/strong&gt; — Contributions from 30 contributors across 6 languages&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;978 internal tests&lt;/strong&gt; — Expanded validation suite across agents, skills, commands, hooks, and rules&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;v1.4.1 — Bug Fix (Feb 2026)&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Fixed instinct import content loss&lt;/strong&gt; — &lt;code&gt;parse_instinct_file()&lt;/code&gt; was silently dropping all content after frontmatter (Action, Evidence, Examples sections) during &lt;code&gt;/instinct-import&lt;/code&gt;. (&lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/issues/148&quot;&gt;#148&lt;/a&gt;, &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/pull/161&quot;&gt;#161&lt;/a&gt;)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;v1.4.0 — Multi-Language Rules, Installation Wizard &amp;amp; PM2 (Feb 2026)&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Interactive installation wizard&lt;/strong&gt; — New &lt;code&gt;configure-ecc&lt;/code&gt; skill provides guided setup with merge/overwrite detection&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;PM2 &amp;amp; multi-agent orchestration&lt;/strong&gt; — 6 new commands (&lt;code&gt;/pm2&lt;/code&gt;, &lt;code&gt;/multi-plan&lt;/code&gt;, &lt;code&gt;/multi-execute&lt;/code&gt;, &lt;code&gt;/multi-backend&lt;/code&gt;, &lt;code&gt;/multi-frontend&lt;/code&gt;, &lt;code&gt;/multi-workflow&lt;/code&gt;) for managing complex multi-service workflows&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Multi-language rules architecture&lt;/strong&gt; — Rules restructured from flat files into &lt;code&gt;common/&lt;/code&gt; + &lt;code&gt;typescript/&lt;/code&gt; + &lt;code&gt;python/&lt;/code&gt; + &lt;code&gt;golang/&lt;/code&gt; directories. Install only the languages you need&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Chinese (zh-CN) translations&lt;/strong&gt; — Complete translation of all agents, commands, skills, and rules (80+ files)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;GitHub Sponsors support&lt;/strong&gt; — Sponsor the project via GitHub Sponsors&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Enhanced &lt;code&gt;CONTRIBUTING.md&lt;/code&gt;&lt;/strong&gt; — Detailed PR templates for each contribution type&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;v1.3.0 — OpenCode Plugin Support (Feb 2026)&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Full OpenCode integration&lt;/strong&gt; — 12 agents, 24 commands, 16 skills with hook support via OpenCode&#39;s plugin system (20+ event types)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;3 native custom tools&lt;/strong&gt; — run-tests, check-coverage, security-audit&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;LLM documentation&lt;/strong&gt; — &lt;code&gt;llms.txt&lt;/code&gt; for comprehensive OpenCode docs&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;v1.2.0 — Unified Commands &amp;amp; Skills (Feb 2026)&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Python/Django support&lt;/strong&gt; — Django patterns, security, TDD, and verification skills&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Java Spring Boot skills&lt;/strong&gt; — Patterns, security, TDD, and verification for Spring Boot&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Session management&lt;/strong&gt; — &lt;code&gt;/sessions&lt;/code&gt; command for session history&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Continuous learning v2&lt;/strong&gt; — Instinct-based learning with confidence scoring, import/export, evolution&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;See the full changelog in &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/releases&quot;&gt;Releases&lt;/a&gt;.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Quick Start&lt;/h2&gt; 
&lt;p&gt;Get up and running in under 2 minutes:&lt;/p&gt; 
&lt;h3&gt;Step 1: Install the Plugin&lt;/h3&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;NOTE: The plugin is convenient, but the OSS installer below is still the most reliable path if your Claude Code build has trouble resolving self-hosted marketplace entries.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Add marketplace
/plugin marketplace add https://github.com/affaan-m/everything-claude-code

# Install plugin
/plugin install everything-claude-code@everything-claude-code
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Naming + Migration Note&lt;/h3&gt; 
&lt;p&gt;ECC now has three public identifiers, and they are not interchangeable:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;GitHub source repo: &lt;code&gt;affaan-m/everything-claude-code&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;Claude marketplace/plugin identifier: &lt;code&gt;everything-claude-code@everything-claude-code&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;npm package: &lt;code&gt;ecc-universal&lt;/code&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;This is intentional. Anthropic marketplace/plugin installs are keyed by a canonical plugin identifier, so ECC standardized on &lt;code&gt;everything-claude-code@everything-claude-code&lt;/code&gt; to keep the listing name, &lt;code&gt;/plugin install&lt;/code&gt;, &lt;code&gt;/plugin list&lt;/code&gt;, and repo docs aligned to one public install surface. Older posts may still show the old short-form nickname; that shorthand is deprecated. Separately, the npm package stayed on &lt;code&gt;ecc-universal&lt;/code&gt;, so npm installs and marketplace installs intentionally use different names.&lt;/p&gt; 
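&lt;p&gt;To make the mapping concrete, here is where each identifier shows up in practice; the commands below are the same ones used in the install steps of this README:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# GitHub source repo (manual installs, rules, hooks runtime)
git clone https://github.com/affaan-m/everything-claude-code.git

# Claude marketplace/plugin identifier (inside Claude Code)
/plugin install everything-claude-code@everything-claude-code

# npm surface (compatibility entrypoint shown in Step 2)
npx ecc-install typescript
&lt;/code&gt;&lt;/pre&gt; 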
&lt;h3&gt;Step 2: Install Rules (Required)&lt;/h3&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;WARNING: &lt;strong&gt;Important:&lt;/strong&gt; Claude Code plugins cannot distribute &lt;code&gt;rules&lt;/code&gt; automatically. Install them manually:&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;If your local Claude setup was wiped or reset, that does not mean you need to repurchase ECC. Start with &lt;code&gt;ecc list-installed&lt;/code&gt;, then run &lt;code&gt;ecc doctor&lt;/code&gt; and &lt;code&gt;ecc repair&lt;/code&gt; before reinstalling anything. That usually restores ECC-managed files without rebuilding your setup. If the problem is account or marketplace access for ECC Tools, handle billing/account recovery separately.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Clone the repo first
git clone https://github.com/affaan-m/everything-claude-code.git
cd everything-claude-code

# Install dependencies (pick your package manager)
npm install        # or: pnpm install | yarn install | bun install

# macOS/Linux

# Recommended: install everything (full profile)
./install.sh --profile full

# Or install for specific languages only
./install.sh typescript    # or python or golang or swift or php
# ./install.sh typescript python golang swift php
# ./install.sh --target cursor typescript
# ./install.sh --target antigravity typescript
# ./install.sh --target gemini --profile full
&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Windows PowerShell

# Recommended: install everything (full profile)
.\install.ps1 --profile full

# Or install for specific languages only
.\install.ps1 typescript   # or python or golang or swift or php
# .\install.ps1 typescript python golang swift php
# .\install.ps1 --target cursor typescript
# .\install.ps1 --target antigravity typescript
# .\install.ps1 --target gemini --profile full

# npm-installed compatibility entrypoint also works cross-platform
npx ecc-install typescript
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;For manual install instructions see the README in the &lt;code&gt;rules/&lt;/code&gt; folder. When copying rules manually, copy the whole language directory (for example &lt;code&gt;rules/common&lt;/code&gt; or &lt;code&gt;rules/golang&lt;/code&gt;), not the files inside it, so relative references keep working and filenames do not collide.&lt;/p&gt; 
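&lt;p&gt;A minimal sketch of that rule, assuming you are in the cloned repo root from Step 2:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Copy whole language directories so each one stays in its own folder
mkdir -p ~/.claude/rules
cp -r rules/common ~/.claude/rules/
cp -r rules/golang ~/.claude/rules/

# Avoid flattening the files into a single directory; filenames can collide:
# cp rules/common/*.md rules/golang/*.md ~/.claude/rules/
&lt;/code&gt;&lt;/pre&gt; 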
&lt;h3&gt;Step 3: Start Using&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Skills are the primary workflow surface.
# Existing slash-style command names still work while ECC migrates off commands/.

# Plugin install uses the namespaced form
/ecc:plan &quot;Add user authentication&quot;

# Manual install keeps the shorter slash form:
# /plan &quot;Add user authentication&quot;

# Check available commands
/plugin list everything-claude-code@everything-claude-code
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;That&#39;s it!&lt;/strong&gt; You now have access to 48 agents, 183 skills, and 79 legacy command shims.&lt;/p&gt; 
&lt;h3&gt;Dashboard GUI&lt;/h3&gt; 
&lt;p&gt;Launch the desktop dashboard to visually explore ECC components:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npm run dashboard
# or
python3 ./ecc_dashboard.py
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Tabbed interface: Agents, Skills, Commands, Rules, Settings&lt;/li&gt; 
 &lt;li&gt;Dark/Light theme toggle&lt;/li&gt; 
 &lt;li&gt;Font customization (family &amp;amp; size)&lt;/li&gt; 
 &lt;li&gt;Project logo in header and taskbar&lt;/li&gt; 
 &lt;li&gt;Search and filter across all components&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Multi-model commands require additional setup&lt;/h3&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;WARNING: &lt;code&gt;multi-*&lt;/code&gt; commands are &lt;strong&gt;not&lt;/strong&gt; covered by the base plugin/rules install above.&lt;/p&gt; 
 &lt;p&gt;To use &lt;code&gt;/multi-plan&lt;/code&gt;, &lt;code&gt;/multi-execute&lt;/code&gt;, &lt;code&gt;/multi-backend&lt;/code&gt;, &lt;code&gt;/multi-frontend&lt;/code&gt;, and &lt;code&gt;/multi-workflow&lt;/code&gt;, you must also install the &lt;code&gt;ccg-workflow&lt;/code&gt; runtime.&lt;/p&gt; 
 &lt;p&gt;Initialize it with &lt;code&gt;npx ccg-workflow&lt;/code&gt;.&lt;/p&gt; 
 &lt;p&gt;That runtime provides the external dependencies these commands expect, including:&lt;/p&gt; 
 &lt;ul&gt; 
  &lt;li&gt;&lt;code&gt;~/.claude/bin/codeagent-wrapper&lt;/code&gt;&lt;/li&gt; 
  &lt;li&gt;&lt;code&gt;~/.claude/.ccg/prompts/*&lt;/code&gt;&lt;/li&gt; 
 &lt;/ul&gt; 
 &lt;p&gt;Without &lt;code&gt;ccg-workflow&lt;/code&gt;, these &lt;code&gt;multi-*&lt;/code&gt; commands will not run correctly.&lt;/p&gt; 
&lt;/blockquote&gt; 
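&lt;p&gt;A minimal sketch of that setup flow; the prompt passed to &lt;code&gt;/multi-plan&lt;/code&gt; is illustrative:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# One-time: install the external ccg-workflow runtime the multi-* commands depend on
npx ccg-workflow

# Then, inside Claude Code (example prompt is illustrative):
/multi-plan &quot;Split the checkout feature into backend and frontend workstreams&quot;
&lt;/code&gt;&lt;/pre&gt; 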
&lt;hr /&gt; 
&lt;h2&gt;Cross-Platform Support&lt;/h2&gt; 
&lt;p&gt;This plugin now fully supports &lt;strong&gt;Windows, macOS, and Linux&lt;/strong&gt;, alongside tight integration across major IDEs (Cursor, OpenCode, Antigravity) and CLI harnesses. All hooks and scripts have been rewritten in Node.js for maximum compatibility.&lt;/p&gt; 
&lt;h3&gt;Package Manager Detection&lt;/h3&gt; 
&lt;p&gt;The plugin automatically detects your preferred package manager (npm, pnpm, yarn, or bun) with the following priority:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Environment variable&lt;/strong&gt;: &lt;code&gt;CLAUDE_PACKAGE_MANAGER&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Project config&lt;/strong&gt;: &lt;code&gt;.claude/package-manager.json&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;package.json&lt;/strong&gt;: &lt;code&gt;packageManager&lt;/code&gt; field&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Lock file&lt;/strong&gt;: Detection from package-lock.json, yarn.lock, pnpm-lock.yaml, or bun.lockb&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Global config&lt;/strong&gt;: &lt;code&gt;~/.claude/package-manager.json&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Fallback&lt;/strong&gt;: First available package manager&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;To set your preferred package manager:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Via environment variable
export CLAUDE_PACKAGE_MANAGER=pnpm

# Via global config
node scripts/setup-package-manager.js --global pnpm

# Via project config
node scripts/setup-package-manager.js --project bun

# Detect current setting
node scripts/setup-package-manager.js --detect
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Or use the &lt;code&gt;/setup-pm&lt;/code&gt; command in Claude Code.&lt;/p&gt; 
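&lt;p&gt;If you would rather pin the choice in the repository itself, the standard &lt;code&gt;packageManager&lt;/code&gt; field in &lt;code&gt;package.json&lt;/code&gt; is one of the detection sources listed above; the version in this sketch is only illustrative:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Pin pnpm via package.json&#39;s packageManager field (illustrative version)
npm pkg set packageManager=pnpm@9.0.0
&lt;/code&gt;&lt;/pre&gt; 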
&lt;h3&gt;Hook Runtime Controls&lt;/h3&gt; 
&lt;p&gt;Use runtime flags to tune strictness or disable specific hooks temporarily:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Hook strictness profile (default: standard)
export ECC_HOOK_PROFILE=standard

# Comma-separated hook IDs to disable
export ECC_DISABLED_HOOKS=&quot;pre:bash:tmux-reminder,post:edit:typecheck&quot;
&lt;/code&gt;&lt;/pre&gt; 
&lt;hr /&gt; 
&lt;h2&gt;What&#39;s Inside&lt;/h2&gt; 
&lt;p&gt;This repo is a &lt;strong&gt;Claude Code plugin&lt;/strong&gt; - install it directly or copy components manually.&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;everything-claude-code/
|-- .claude-plugin/   # Plugin and marketplace manifests
|   |-- plugin.json         # Plugin metadata and component paths
|   |-- marketplace.json    # Marketplace catalog for /plugin marketplace add
|
|-- agents/           # 36 specialized subagents for delegation
|   |-- planner.md           # Feature implementation planning
|   |-- architect.md         # System design decisions
|   |-- tdd-guide.md         # Test-driven development
|   |-- code-reviewer.md     # Quality and security review
|   |-- security-reviewer.md # Vulnerability analysis
|   |-- build-error-resolver.md
|   |-- e2e-runner.md        # Playwright E2E testing
|   |-- refactor-cleaner.md  # Dead code cleanup
|   |-- doc-updater.md       # Documentation sync
|   |-- docs-lookup.md       # Documentation/API lookup
|   |-- chief-of-staff.md    # Communication triage and drafts
|   |-- loop-operator.md     # Autonomous loop execution
|   |-- harness-optimizer.md # Harness config tuning
|   |-- cpp-reviewer.md      # C++ code review
|   |-- cpp-build-resolver.md # C++ build error resolution
|   |-- go-reviewer.md       # Go code review
|   |-- go-build-resolver.md # Go build error resolution
|   |-- python-reviewer.md   # Python code review
|   |-- database-reviewer.md # Database/Supabase review
|   |-- typescript-reviewer.md # TypeScript/JavaScript code review
|   |-- java-reviewer.md     # Java/Spring Boot code review
|   |-- java-build-resolver.md # Java/Maven/Gradle build errors
|   |-- kotlin-reviewer.md   # Kotlin/Android/KMP code review
|   |-- kotlin-build-resolver.md # Kotlin/Gradle build errors
|   |-- rust-reviewer.md     # Rust code review
|   |-- rust-build-resolver.md # Rust build error resolution
|   |-- pytorch-build-resolver.md # PyTorch/CUDA training errors
|
|-- skills/           # Workflow definitions and domain knowledge
|   |-- coding-standards/           # Language best practices
|   |-- clickhouse-io/              # ClickHouse analytics, queries, data engineering
|   |-- backend-patterns/           # API, database, caching patterns
|   |-- frontend-patterns/          # React, Next.js patterns
|   |-- frontend-slides/            # HTML slide decks and PPTX-to-web presentation workflows (NEW)
|   |-- article-writing/            # Long-form writing in a supplied voice without generic AI tone (NEW)
|   |-- content-engine/             # Multi-platform social content and repurposing workflows (NEW)
|   |-- market-research/            # Source-attributed market, competitor, and investor research (NEW)
|   |-- investor-materials/         # Pitch decks, one-pagers, memos, and financial models (NEW)
|   |-- investor-outreach/          # Personalized fundraising outreach and follow-up (NEW)
|   |-- continuous-learning/        # Legacy v1 Stop-hook pattern extraction
|   |-- continuous-learning-v2/     # Instinct-based learning with confidence scoring
|   |-- iterative-retrieval/        # Progressive context refinement for subagents
|   |-- strategic-compact/          # Manual compaction suggestions (Longform Guide)
|   |-- tdd-workflow/               # TDD methodology
|   |-- security-review/            # Security checklist
|   |-- eval-harness/               # Verification loop evaluation (Longform Guide)
|   |-- verification-loop/          # Continuous verification (Longform Guide)
|   |-- videodb/                   # Video and audio: ingest, search, edit, generate, stream (NEW)
|   |-- golang-patterns/            # Go idioms and best practices
|   |-- golang-testing/             # Go testing patterns, TDD, benchmarks
|   |-- cpp-coding-standards/         # C++ coding standards from C++ Core Guidelines (NEW)
|   |-- cpp-testing/                # C++ testing with GoogleTest, CMake/CTest (NEW)
|   |-- django-patterns/            # Django patterns, models, views (NEW)
|   |-- django-security/            # Django security best practices (NEW)
|   |-- django-tdd/                 # Django TDD workflow (NEW)
|   |-- django-verification/        # Django verification loops (NEW)
|   |-- laravel-patterns/           # Laravel architecture patterns (NEW)
|   |-- laravel-security/           # Laravel security best practices (NEW)
|   |-- laravel-tdd/                # Laravel TDD workflow (NEW)
|   |-- laravel-verification/       # Laravel verification loops (NEW)
|   |-- python-patterns/            # Python idioms and best practices (NEW)
|   |-- python-testing/             # Python testing with pytest (NEW)
|   |-- springboot-patterns/        # Java Spring Boot patterns (NEW)
|   |-- springboot-security/        # Spring Boot security (NEW)
|   |-- springboot-tdd/             # Spring Boot TDD (NEW)
|   |-- springboot-verification/    # Spring Boot verification (NEW)
|   |-- configure-ecc/              # Interactive installation wizard (NEW)
|   |-- security-scan/              # AgentShield security auditor integration (NEW)
|   |-- java-coding-standards/     # Java coding standards (NEW)
|   |-- jpa-patterns/              # JPA/Hibernate patterns (NEW)
|   |-- postgres-patterns/         # PostgreSQL optimization patterns (NEW)
|   |-- nutrient-document-processing/ # Document processing with Nutrient API (NEW)
|   |-- docs/examples/project-guidelines-template.md  # Template for project-specific skills
|   |-- database-migrations/         # Migration patterns (Prisma, Drizzle, Django, Go) (NEW)
|   |-- api-design/                  # REST API design, pagination, error responses (NEW)
|   |-- deployment-patterns/         # CI/CD, Docker, health checks, rollbacks (NEW)
|   |-- docker-patterns/            # Docker Compose, networking, volumes, container security (NEW)
|   |-- e2e-testing/                 # Playwright E2E patterns and Page Object Model (NEW)
|   |-- content-hash-cache-pattern/  # SHA-256 content hash caching for file processing (NEW)
|   |-- cost-aware-llm-pipeline/     # LLM cost optimization, model routing, budget tracking (NEW)
|   |-- regex-vs-llm-structured-text/ # Decision framework: regex vs LLM for text parsing (NEW)
|   |-- swift-actor-persistence/     # Thread-safe Swift data persistence with actors (NEW)
|   |-- swift-protocol-di-testing/   # Protocol-based DI for testable Swift code (NEW)
|   |-- search-first/               # Research-before-coding workflow (NEW)
|   |-- skill-stocktake/            # Audit skills and commands for quality (NEW)
|   |-- liquid-glass-design/         # iOS 26 Liquid Glass design system (NEW)
|   |-- foundation-models-on-device/ # Apple on-device LLM with FoundationModels (NEW)
|   |-- swift-concurrency-6-2/       # Swift 6.2 Approachable Concurrency (NEW)
|   |-- perl-patterns/             # Modern Perl 5.36+ idioms and best practices (NEW)
|   |-- perl-security/             # Perl security patterns, taint mode, safe I/O (NEW)
|   |-- perl-testing/              # Perl TDD with Test2::V0, prove, Devel::Cover (NEW)
|   |-- autonomous-loops/           # Autonomous loop patterns: sequential pipelines, PR loops, DAG orchestration (NEW)
|   |-- plankton-code-quality/      # Write-time code quality enforcement with Plankton hooks (NEW)
|
|-- commands/         # Legacy slash-entry shims; prefer skills/
|   |-- tdd.md              # /tdd - Test-driven development
|   |-- plan.md             # /plan - Implementation planning
|   |-- e2e.md              # /e2e - E2E test generation
|   |-- code-review.md      # /code-review - Quality review
|   |-- build-fix.md        # /build-fix - Fix build errors
|   |-- refactor-clean.md   # /refactor-clean - Dead code removal
|   |-- learn.md            # /learn - Extract patterns mid-session (Longform Guide)
|   |-- learn-eval.md       # /learn-eval - Extract, evaluate, and save patterns (NEW)
|   |-- checkpoint.md       # /checkpoint - Save verification state (Longform Guide)
|   |-- verify.md           # /verify - Run verification loop (Longform Guide)
|   |-- setup-pm.md         # /setup-pm - Configure package manager
|   |-- go-review.md        # /go-review - Go code review (NEW)
|   |-- go-test.md          # /go-test - Go TDD workflow (NEW)
|   |-- go-build.md         # /go-build - Fix Go build errors (NEW)
|   |-- skill-create.md     # /skill-create - Generate skills from git history (NEW)
|   |-- instinct-status.md  # /instinct-status - View learned instincts (NEW)
|   |-- instinct-import.md  # /instinct-import - Import instincts (NEW)
|   |-- instinct-export.md  # /instinct-export - Export instincts (NEW)
|   |-- evolve.md           # /evolve - Cluster instincts into skills
|   |-- prune.md            # /prune - Delete expired pending instincts (NEW)
|   |-- pm2.md              # /pm2 - PM2 service lifecycle management (NEW)
|   |-- multi-plan.md       # /multi-plan - Multi-agent task decomposition (NEW)
|   |-- multi-execute.md    # /multi-execute - Orchestrated multi-agent workflows (NEW)
|   |-- multi-backend.md    # /multi-backend - Backend multi-service orchestration (NEW)
|   |-- multi-frontend.md   # /multi-frontend - Frontend multi-service orchestration (NEW)
|   |-- multi-workflow.md   # /multi-workflow - General multi-service workflows (NEW)
|   |-- orchestrate.md      # /orchestrate - Multi-agent coordination
|   |-- sessions.md         # /sessions - Session history management
|   |-- eval.md             # /eval - Evaluate against criteria
|   |-- test-coverage.md    # /test-coverage - Test coverage analysis
|   |-- update-docs.md      # /update-docs - Update documentation
|   |-- update-codemaps.md  # /update-codemaps - Update codemaps
|   |-- python-review.md    # /python-review - Python code review (NEW)
|
|-- rules/            # Always-follow guidelines (copy to ~/.claude/rules/)
|   |-- README.md            # Structure overview and installation guide
|   |-- common/              # Language-agnostic principles
|   |   |-- coding-style.md    # Immutability, file organization
|   |   |-- git-workflow.md    # Commit format, PR process
|   |   |-- testing.md         # TDD, 80% coverage requirement
|   |   |-- performance.md     # Model selection, context management
|   |   |-- patterns.md        # Design patterns, skeleton projects
|   |   |-- hooks.md           # Hook architecture, TodoWrite
|   |   |-- agents.md          # When to delegate to subagents
|   |   |-- security.md        # Mandatory security checks
|   |-- typescript/          # TypeScript/JavaScript specific
|   |-- python/              # Python specific
|   |-- golang/              # Go specific
|   |-- swift/               # Swift specific
|   |-- php/                 # PHP specific (NEW)
|
|-- hooks/            # Trigger-based automations
|   |-- README.md                 # Hook documentation, recipes, and customization guide
|   |-- hooks.json                # All hooks config (PreToolUse, PostToolUse, Stop, etc.)
|   |-- memory-persistence/       # Session lifecycle hooks (Longform Guide)
|   |-- strategic-compact/        # Compaction suggestions (Longform Guide)
|
|-- scripts/          # Cross-platform Node.js scripts (NEW)
|   |-- lib/                     # Shared utilities
|   |   |-- utils.js             # Cross-platform file/path/system utilities
|   |   |-- package-manager.js   # Package manager detection and selection
|   |-- hooks/                   # Hook implementations
|   |   |-- session-start.js     # Load context on session start
|   |   |-- session-end.js       # Save state on session end
|   |   |-- pre-compact.js       # Pre-compaction state saving
|   |   |-- suggest-compact.js   # Strategic compaction suggestions
|   |   |-- evaluate-session.js  # Extract patterns from sessions
|   |-- setup-package-manager.js # Interactive PM setup
|
|-- tests/            # Test suite (NEW)
|   |-- lib/                     # Library tests
|   |-- hooks/                   # Hook tests
|   |-- run-all.js               # Run all tests
|
|-- contexts/         # Dynamic system prompt injection contexts (Longform Guide)
|   |-- dev.md              # Development mode context
|   |-- review.md           # Code review mode context
|   |-- research.md         # Research/exploration mode context
|
|-- examples/         # Example configurations and sessions
|   |-- CLAUDE.md             # Example project-level config
|   |-- user-CLAUDE.md        # Example user-level config
|   |-- saas-nextjs-CLAUDE.md   # Real-world SaaS (Next.js + Supabase + Stripe)
|   |-- go-microservice-CLAUDE.md # Real-world Go microservice (gRPC + PostgreSQL)
|   |-- django-api-CLAUDE.md      # Real-world Django REST API (DRF + Celery)
|   |-- laravel-api-CLAUDE.md     # Real-world Laravel API (PostgreSQL + Redis) (NEW)
|   |-- rust-api-CLAUDE.md        # Real-world Rust API (Axum + SQLx + PostgreSQL) (NEW)
|
|-- mcp-configs/      # MCP server configurations
|   |-- mcp-servers.json    # GitHub, Supabase, Vercel, Railway, etc.
|
|-- ecc_dashboard.py  # Desktop GUI dashboard (Tkinter)
|
|-- assets/           # Assets for dashboard
|   |-- images/
|       |-- ecc-logo.png
|
|-- marketplace.json  # Self-hosted marketplace config (for /plugin marketplace add)
&lt;/code&gt;&lt;/pre&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Ecosystem Tools&lt;/h2&gt; 
&lt;h3&gt;Skill Creator&lt;/h3&gt; 
&lt;p&gt;Two ways to generate Claude Code skills from your repository:&lt;/p&gt; 
&lt;h4&gt;Option A: Local Analysis (Built-in)&lt;/h4&gt; 
&lt;p&gt;Use the &lt;code&gt;/skill-create&lt;/code&gt; command for local analysis without external services:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/skill-create                    # Analyze current repo
/skill-create --instincts        # Also generate instincts for continuous-learning-v2
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This analyzes your git history locally and generates &lt;code&gt;SKILL.md&lt;/code&gt; files.&lt;/p&gt; 
&lt;h4&gt;Option B: GitHub App (Advanced)&lt;/h4&gt; 
&lt;p&gt;For advanced features (10k+ commits, auto-PRs, team sharing):&lt;/p&gt; 
&lt;p&gt;&lt;a href=&quot;https://github.com/apps/skill-creator&quot;&gt;Install GitHub App&lt;/a&gt; | &lt;a href=&quot;https://ecc.tools&quot;&gt;ecc.tools&lt;/a&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Comment on any issue:
/skill-creator analyze

# Or auto-triggers on push to default branch
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Both options create:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;code&gt;SKILL.md&lt;/code&gt; files&lt;/strong&gt; - Ready-to-use skills for Claude Code&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Instinct collections&lt;/strong&gt; - For continuous-learning-v2&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Pattern extraction&lt;/strong&gt; - Learns from your commit history&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;AgentShield — Security Auditor&lt;/h3&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;Built at the Claude Code Hackathon (Cerebral Valley x Anthropic, Feb 2026). 1282 tests, 98% coverage, 102 static analysis rules.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;p&gt;Scan your Claude Code configuration for vulnerabilities, misconfigurations, and injection risks.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Quick scan (no install needed)
npx ecc-agentshield scan

# Auto-fix safe issues
npx ecc-agentshield scan --fix

# Deep analysis with three Opus 4.6 agents
npx ecc-agentshield scan --opus --stream

# Generate secure config from scratch
npx ecc-agentshield init
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;What it scans:&lt;/strong&gt; &lt;code&gt;CLAUDE.md&lt;/code&gt;, settings.json, MCP configs, hooks, agent definitions, and skills across 5 categories — secrets detection (14 patterns), permission auditing, hook injection analysis, MCP server risk profiling, and agent config review.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;The &lt;code&gt;--opus&lt;/code&gt; flag&lt;/strong&gt; runs three Claude Opus 4.6 agents in a red-team/blue-team/auditor pipeline. The attacker finds exploit chains, the defender evaluates protections, and the auditor synthesizes both into a prioritized risk assessment. Adversarial reasoning, not just pattern matching.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Output formats:&lt;/strong&gt; Terminal (color-graded A-F), JSON (CI pipelines), Markdown, HTML. Exit code 2 on critical findings for build gates.&lt;/p&gt; 
&lt;p&gt;Use &lt;code&gt;/security-scan&lt;/code&gt; in Claude Code to run it, or add to CI with the &lt;a href=&quot;https://github.com/affaan-m/agentshield&quot;&gt;GitHub Action&lt;/a&gt;.&lt;/p&gt; 
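&lt;p&gt;If you wire the scan into CI yourself instead of using the Action, a plain shell gate on the documented exit code is enough; this is only a sketch and assumes the job is not running under &lt;code&gt;set -e&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Sketch of a CI gate: exit code 2 signals critical findings
npx ecc-agentshield scan
status=$?
if [ &quot;$status&quot; -eq 2 ]; then
  echo &quot;AgentShield reported critical findings&quot; &amp;gt;&amp;amp;2
  exit 1
fi
&lt;/code&gt;&lt;/pre&gt; 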
&lt;p&gt;&lt;a href=&quot;https://github.com/affaan-m/agentshield&quot;&gt;GitHub&lt;/a&gt; | &lt;a href=&quot;https://www.npmjs.com/package/ecc-agentshield&quot;&gt;npm&lt;/a&gt;&lt;/p&gt; 
&lt;h3&gt;Continuous Learning v2&lt;/h3&gt; 
&lt;p&gt;The instinct-based learning system automatically learns your patterns:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/instinct-status        # Show learned instincts with confidence
/instinct-import &amp;lt;file&amp;gt; # Import instincts from others
/instinct-export        # Export your instincts for sharing
/evolve                 # Cluster related instincts into skills
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;See &lt;code&gt;skills/continuous-learning-v2/&lt;/code&gt; for full documentation. Keep &lt;code&gt;continuous-learning/&lt;/code&gt; only when you explicitly want the legacy v1 Stop-hook learned-skill flow.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Requirements&lt;/h2&gt; 
&lt;h3&gt;Claude Code CLI Version&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Minimum version: v2.1.0 or later&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;This plugin requires Claude Code CLI v2.1.0+ due to changes in how the plugin system handles hooks.&lt;/p&gt; 
&lt;p&gt;Check your version:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;claude --version
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Important: Hooks Auto-Loading Behavior&lt;/h3&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;WARNING: &lt;strong&gt;For Contributors:&lt;/strong&gt; Do NOT add a &lt;code&gt;&quot;hooks&quot;&lt;/code&gt; field to &lt;code&gt;.claude-plugin/plugin.json&lt;/code&gt;. This is enforced by a regression test.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;p&gt;Claude Code v2.1+ &lt;strong&gt;automatically loads&lt;/strong&gt; &lt;code&gt;hooks/hooks.json&lt;/code&gt; from any installed plugin by convention. Explicitly declaring it in &lt;code&gt;plugin.json&lt;/code&gt; causes a duplicate detection error:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;Duplicate hooks file detected: ./hooks/hooks.json resolves to already-loaded file
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;History:&lt;/strong&gt; This has caused repeated fix/revert cycles in this repo (&lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/issues/29&quot;&gt;#29&lt;/a&gt;, &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/issues/52&quot;&gt;#52&lt;/a&gt;, &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/issues/103&quot;&gt;#103&lt;/a&gt;). The behavior changed between Claude Code versions, leading to confusion. We now have a regression test to prevent this from being reintroduced.&lt;/p&gt; 
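&lt;p&gt;If you want a quick local check before opening a PR (the regression test enforces the same thing), a simple grep over the manifest is enough; this sketch only assumes the manifest path shown above:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Fail fast if plugin.json declares a hooks field; Claude Code v2.1+ auto-loads hooks/hooks.json
if grep -q &#39;&quot;hooks&quot;&#39; .claude-plugin/plugin.json; then
  echo &quot;Remove the hooks field from .claude-plugin/plugin.json&quot; &amp;gt;&amp;amp;2
  exit 1
fi
&lt;/code&gt;&lt;/pre&gt; 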
&lt;hr /&gt; 
&lt;h2&gt;Installation&lt;/h2&gt; 
&lt;h3&gt;Option 1: Install as Plugin (Recommended)&lt;/h3&gt; 
&lt;p&gt;The easiest way to use this repo - install as a Claude Code plugin:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Add this repo as a marketplace
/plugin marketplace add https://github.com/affaan-m/everything-claude-code

# Install the plugin
/plugin install everything-claude-code@everything-claude-code
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Or add directly to your &lt;code&gt;~/.claude/settings.json&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;extraKnownMarketplaces&quot;: {
    &quot;ecc&quot;: {
      &quot;source&quot;: {
        &quot;source&quot;: &quot;github&quot;,
        &quot;repo&quot;: &quot;affaan-m/everything-claude-code&quot;
      }
    }
  },
  &quot;enabledPlugins&quot;: {
    &quot;everything-claude-code@everything-claude-code&quot;: true
  }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This gives you instant access to all commands, agents, skills, and hooks.&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The Claude Code plugin system does not support distributing &lt;code&gt;rules&lt;/code&gt; via plugins (&lt;a href=&quot;https://code.claude.com/docs/en/plugins-reference&quot;&gt;upstream limitation&lt;/a&gt;). You need to install rules manually:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Clone the repo first
git clone https://github.com/affaan-m/everything-claude-code.git

# Option A: User-level rules (applies to all projects)
mkdir -p ~/.claude/rules
cp -r everything-claude-code/rules/common ~/.claude/rules/
cp -r everything-claude-code/rules/typescript ~/.claude/rules/   # pick your stack
cp -r everything-claude-code/rules/python ~/.claude/rules/
cp -r everything-claude-code/rules/golang ~/.claude/rules/
cp -r everything-claude-code/rules/php ~/.claude/rules/

# Option B: Project-level rules (applies to current project only)
mkdir -p .claude/rules
cp -r everything-claude-code/rules/common .claude/rules/
cp -r everything-claude-code/rules/typescript .claude/rules/     # pick your stack
&lt;/code&gt;&lt;/pre&gt; 
&lt;/blockquote&gt; 
&lt;hr /&gt; 
&lt;h3&gt;Option 2: Manual Installation&lt;/h3&gt; 
&lt;p&gt;If you prefer manual control over what&#39;s installed:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Clone the repo
git clone https://github.com/affaan-m/everything-claude-code.git

# Copy agents to your Claude config
cp everything-claude-code/agents/*.md ~/.claude/agents/

# Copy rules directories (common + language-specific)
mkdir -p ~/.claude/rules
cp -r everything-claude-code/rules/common ~/.claude/rules/
cp -r everything-claude-code/rules/typescript ~/.claude/rules/   # pick your stack
cp -r everything-claude-code/rules/python ~/.claude/rules/
cp -r everything-claude-code/rules/golang ~/.claude/rules/
cp -r everything-claude-code/rules/php ~/.claude/rules/

# Copy skills first (primary workflow surface)
# Recommended (new users): core/general skills only
cp -r everything-claude-code/.agents/skills/* ~/.claude/skills/
cp -r everything-claude-code/skills/search-first ~/.claude/skills/

# Optional: add niche/framework-specific skills only when needed
# for s in django-patterns django-tdd laravel-patterns springboot-patterns; do
# cp -r everything-claude-code/skills/$s ~/.claude/skills/
# done

# Optional: keep legacy slash-command compatibility during migration
mkdir -p ~/.claude/commands
cp everything-claude-code/commands/*.md ~/.claude/commands/
&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;Install hooks&lt;/h4&gt; 
&lt;p&gt;Do not copy the raw repo &lt;code&gt;hooks/hooks.json&lt;/code&gt; into &lt;code&gt;~/.claude/settings.json&lt;/code&gt; or &lt;code&gt;~/.claude/hooks/hooks.json&lt;/code&gt;. That file is plugin/repo-oriented and is meant to be installed through the ECC installer or loaded as a plugin, so raw copying is not a supported manual install path.&lt;/p&gt; 
&lt;p&gt;Use the installer to install only the Claude hook runtime so command paths are rewritten correctly:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# macOS / Linux
bash ./install.sh --target claude --modules hooks-runtime
&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Windows PowerShell
pwsh -File .\install.ps1 --target claude --modules hooks-runtime
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;That writes resolved hooks to &lt;code&gt;~/.claude/hooks/hooks.json&lt;/code&gt; and leaves any existing &lt;code&gt;~/.claude/settings.json&lt;/code&gt; untouched.&lt;/p&gt; 
&lt;p&gt;If you installed ECC via &lt;code&gt;/plugin install&lt;/code&gt;, do not copy those hooks into &lt;code&gt;settings.json&lt;/code&gt;. Claude Code v2.1+ already auto-loads plugin &lt;code&gt;hooks/hooks.json&lt;/code&gt;, and duplicating them in &lt;code&gt;settings.json&lt;/code&gt; causes duplicate execution and cross-platform hook conflicts.&lt;/p&gt; 
&lt;p&gt;Windows note: the Claude config directory is &lt;code&gt;%USERPROFILE%\.claude&lt;/code&gt;, not &lt;code&gt;~/.claude&lt;/code&gt;.&lt;/p&gt; 
&lt;h4&gt;Configure MCPs&lt;/h4&gt; 
&lt;p&gt;Copy desired MCP server definitions from &lt;code&gt;mcp-configs/mcp-servers.json&lt;/code&gt; into your official Claude Code config in &lt;code&gt;~/.claude/settings.json&lt;/code&gt;, or into a project-scoped &lt;code&gt;.mcp.json&lt;/code&gt; if you want repo-local MCP access.&lt;/p&gt; 
&lt;p&gt;If you already run your own copies of ECC-bundled MCPs, set:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export ECC_DISABLED_MCPS=&quot;github,context7,exa,playwright,sequential-thinking,memory&quot;
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;ECC-managed install and Codex sync flows will skip or remove those bundled servers instead of re-adding duplicates.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Replace &lt;code&gt;YOUR_*_HERE&lt;/code&gt; placeholders with your actual API keys.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Key Concepts&lt;/h2&gt; 
&lt;h3&gt;Agents&lt;/h3&gt; 
&lt;p&gt;Subagents handle delegated tasks with limited scope. Example:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;---
name: code-reviewer
description: Reviews code for quality, security, and maintainability
tools: [&quot;Read&quot;, &quot;Grep&quot;, &quot;Glob&quot;, &quot;Bash&quot;]
model: opus
---

You are a senior code reviewer...
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Skills&lt;/h3&gt; 
&lt;p&gt;Skills are the primary workflow surface. They can be invoked directly, suggested automatically, and reused by agents. ECC still ships &lt;code&gt;commands/&lt;/code&gt; during migration, but new workflow development should land in &lt;code&gt;skills/&lt;/code&gt; first.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# TDD Workflow

1. Define interfaces first
2. Write failing tests (RED)
3. Implement minimal code (GREEN)
4. Refactor (IMPROVE)
5. Verify 80%+ coverage
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Hooks&lt;/h3&gt; 
&lt;p&gt;Hooks fire on tool events. Example - warn about console.log:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;matcher&quot;: &quot;tool == \&quot;Edit\&quot; &amp;amp;&amp;amp; tool_input.file_path matches \&quot;\\\\.(ts|tsx|js|jsx)$\&quot;&quot;,
  &quot;hooks&quot;: [{
    &quot;type&quot;: &quot;command&quot;,
    &quot;command&quot;: &quot;#!/bin/bash\ngrep -n &#39;console\\.log&#39; \&quot;$file_path\&quot; &amp;amp;&amp;amp; echo &#39;[Hook] Remove console.log&#39; &amp;gt;&amp;amp;2&quot;
  }]
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Rules&lt;/h3&gt; 
&lt;p&gt;Rules are always-follow guidelines, organized into &lt;code&gt;common/&lt;/code&gt; (language-agnostic) + language-specific directories:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;rules/
  common/          # Universal principles (always install)
  typescript/      # TS/JS specific patterns and tools
  python/          # Python specific patterns and tools
  golang/          # Go specific patterns and tools
  swift/           # Swift specific patterns and tools
  php/             # PHP specific patterns and tools
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;See &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/rules/README.md&quot;&gt;&lt;code&gt;rules/README.md&lt;/code&gt;&lt;/a&gt; for installation and structure details.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Which Agent Should I Use?&lt;/h2&gt; 
&lt;p&gt;Not sure where to start? Use this quick reference. Skills are the canonical workflow surface; slash entries below are the compatibility form most users already know.&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;I want to...&lt;/th&gt; 
   &lt;th&gt;Use this command&lt;/th&gt; 
   &lt;th&gt;Agent used&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Plan a new feature&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/ecc:plan &quot;Add auth&quot;&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;planner&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Design system architecture&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/ecc:plan&lt;/code&gt; + architect agent&lt;/td&gt; 
   &lt;td&gt;architect&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Write code with tests first&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/tdd&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;tdd-guide&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Review code I just wrote&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/code-review&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;code-reviewer&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Fix a failing build&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/build-fix&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;build-error-resolver&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Run end-to-end tests&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/e2e&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;e2e-runner&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Find security vulnerabilities&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/security-scan&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;security-reviewer&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Remove dead code&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/refactor-clean&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;refactor-cleaner&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Update documentation&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/update-docs&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;doc-updater&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Review Go code&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/go-review&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;go-reviewer&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Review Python code&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;/python-review&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;python-reviewer&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Review TypeScript/JavaScript code&lt;/td&gt; 
   &lt;td&gt;&lt;em&gt;(invoke &lt;code&gt;typescript-reviewer&lt;/code&gt; directly)&lt;/em&gt;&lt;/td&gt; 
   &lt;td&gt;typescript-reviewer&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Audit database queries&lt;/td&gt; 
   &lt;td&gt;&lt;em&gt;(auto-delegated)&lt;/em&gt;&lt;/td&gt; 
   &lt;td&gt;database-reviewer&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h3&gt;Common Workflows&lt;/h3&gt; 
&lt;p&gt;Slash forms are shown below because they remain the fastest, most familiar entrypoint. Under the hood, ECC is shifting these workflows toward skills-first definitions.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Starting a new feature:&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;/ecc:plan &quot;Add user authentication with OAuth&quot;
                                              → planner creates implementation blueprint
/tdd                                          → tdd-guide enforces write-tests-first
/code-review                                  → code-reviewer checks your work
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Fixing a bug:&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;/tdd                                          → tdd-guide: write a failing test that reproduces it
                                              → implement the fix, verify test passes
/code-review                                  → code-reviewer: catch regressions
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Preparing for production:&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;/security-scan                                → security-reviewer: OWASP Top 10 audit
/e2e                                          → e2e-runner: critical user flow tests
/test-coverage                                → verify 80%+ coverage
&lt;/code&gt;&lt;/pre&gt; 
&lt;hr /&gt; 
&lt;h2&gt;FAQ&lt;/h2&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;How do I check which agents/commands are installed?&lt;/b&gt;&lt;/summary&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/plugin list everything-claude-code@everything-claude-code
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;This shows all available agents, commands, and skills from the plugin.&lt;/p&gt; 
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;My hooks aren&#39;t working / I see &quot;Duplicate hooks file&quot; errors&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;This is the most common issue. &lt;strong&gt;Do NOT add a &lt;code&gt;&quot;hooks&quot;&lt;/code&gt; field to &lt;code&gt;.claude-plugin/plugin.json&lt;/code&gt;.&lt;/strong&gt; Claude Code v2.1+ automatically loads &lt;code&gt;hooks/hooks.json&lt;/code&gt; from installed plugins. Explicitly declaring it causes duplicate detection errors. See &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/issues/29&quot;&gt;#29&lt;/a&gt;, &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/issues/52&quot;&gt;#52&lt;/a&gt;, &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/issues/103&quot;&gt;#103&lt;/a&gt;.&lt;/p&gt; 
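 &lt;p&gt;In other words, if your manifest looks like the sketch below, delete the &lt;code&gt;hooks&lt;/code&gt; line. The other field and the exact value here are illustrative only; what matters is that no &lt;code&gt;hooks&lt;/code&gt; key is present:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;// .claude-plugin/plugin.json
{
  &quot;name&quot;: &quot;everything-claude-code&quot;,
  &quot;hooks&quot;: &quot;./hooks/hooks.json&quot;  // remove this line; hooks/hooks.json is loaded automatically
}
&lt;/code&gt;&lt;/pre&gt; 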
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Can I use ECC with Claude Code on a custom API endpoint or model gateway?&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;Yes. ECC does not hardcode Anthropic-hosted transport settings. It runs locally through Claude Code&#39;s normal CLI/plugin surface, so it works with:&lt;/p&gt; 
 &lt;ul&gt; 
  &lt;li&gt;Anthropic-hosted Claude Code&lt;/li&gt; 
  &lt;li&gt;Official Claude Code gateway setups using &lt;code&gt;ANTHROPIC_BASE_URL&lt;/code&gt; and &lt;code&gt;ANTHROPIC_AUTH_TOKEN&lt;/code&gt;&lt;/li&gt; 
  &lt;li&gt;Compatible custom endpoints that speak the Anthropic API Claude Code expects&lt;/li&gt; 
 &lt;/ul&gt; 
 &lt;p&gt;Minimal example:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export ANTHROPIC_BASE_URL=https://your-gateway.example.com
export ANTHROPIC_AUTH_TOKEN=your-token
claude
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;If your gateway remaps model names, configure that in Claude Code rather than in ECC. ECC&#39;s hooks, skills, commands, and rules are model-provider agnostic once the &lt;code&gt;claude&lt;/code&gt; CLI is already working.&lt;/p&gt; 
 &lt;p&gt;Official references:&lt;/p&gt; 
 &lt;ul&gt; 
  &lt;li&gt;&lt;a href=&quot;https://docs.anthropic.com/en/docs/claude-code/llm-gateway&quot;&gt;Claude Code LLM gateway docs&lt;/a&gt;&lt;/li&gt; 
  &lt;li&gt;&lt;a href=&quot;https://docs.anthropic.com/en/docs/claude-code/model-config&quot;&gt;Claude Code model configuration docs&lt;/a&gt;&lt;/li&gt; 
 &lt;/ul&gt; 
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;My context window is shrinking / Claude is running out of context&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;Too many MCP servers eat your context. Each MCP tool description consumes tokens from your 200k window, potentially reducing it to ~70k.&lt;/p&gt; 
 &lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Disable unused MCPs per project:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;// In your project&#39;s .claude/settings.json
{
  &quot;disabledMcpServers&quot;: [&quot;supabase&quot;, &quot;railway&quot;, &quot;vercel&quot;]
}
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;Keep under 10 MCPs enabled and under 80 tools active.&lt;/p&gt; 
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Can I use only some components (e.g., just agents)?&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;Yes. Use Option 2 (manual installation) and copy only what you need:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Just agents
cp everything-claude-code/agents/*.md ~/.claude/agents/

# Just rules
mkdir -p ~/.claude/rules/
cp -r everything-claude-code/rules/common ~/.claude/rules/
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;Each component is fully independent.&lt;/p&gt; 
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Does this work with Cursor / OpenCode / Codex / Antigravity?&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;Yes. ECC is cross-platform:&lt;/p&gt; 
 &lt;ul&gt; 
  &lt;li&gt;&lt;strong&gt;Cursor&lt;/strong&gt;: Pre-translated configs in &lt;code&gt;.cursor/&lt;/code&gt;. See &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/#cursor-ide-support&quot;&gt;Cursor IDE Support&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Gemini CLI&lt;/strong&gt;: Experimental project-local support via &lt;code&gt;.gemini/GEMINI.md&lt;/code&gt; and shared installer plumbing.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;OpenCode&lt;/strong&gt;: Full plugin support in &lt;code&gt;.opencode/&lt;/code&gt;. See &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/#opencode-support&quot;&gt;OpenCode Support&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Codex&lt;/strong&gt;: First-class support for both macOS app and CLI, with adapter drift guards and SessionStart fallback. See PR &lt;a href=&quot;https://github.com/affaan-m/everything-claude-code/pull/257&quot;&gt;#257&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Antigravity&lt;/strong&gt;: Tightly integrated setup for workflows, skills, and flattened rules in &lt;code&gt;.agent/&lt;/code&gt;. See &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/ANTIGRAVITY-GUIDE.md&quot;&gt;Antigravity Guide&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Non-native harnesses&lt;/strong&gt;: Manual fallback path for Grok and similar interfaces. See &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/MANUAL-ADAPTATION-GUIDE.md&quot;&gt;Manual Adaptation Guide&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;: Native — this is the primary target.&lt;/li&gt; 
 &lt;/ul&gt; 
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;How do I contribute a new skill or agent?&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;See &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/CONTRIBUTING.md&quot;&gt;CONTRIBUTING.md&lt;/a&gt;. The short version:&lt;/p&gt; 
 &lt;ol&gt; 
  &lt;li&gt;Fork the repo&lt;/li&gt; 
  &lt;li&gt;Create your skill in &lt;code&gt;skills/your-skill-name/SKILL.md&lt;/code&gt; (with YAML frontmatter)&lt;/li&gt; 
  &lt;li&gt;Or create an agent in &lt;code&gt;agents/your-agent.md&lt;/code&gt;&lt;/li&gt; 
  &lt;li&gt;Submit a PR with a clear description of what it does and when to use it&lt;/li&gt; 
 &lt;/ol&gt; 
&lt;/details&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Running Tests&lt;/h2&gt; 
&lt;p&gt;The plugin includes a comprehensive test suite:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Run all tests
node tests/run-all.js

# Run individual test files
node tests/lib/utils.test.js
node tests/lib/package-manager.test.js
node tests/hooks/hooks.test.js
&lt;/code&gt;&lt;/pre&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;Contributions are welcome and encouraged.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;This repo is meant to be a community resource. If you have:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Useful agents or skills&lt;/li&gt; 
 &lt;li&gt;Clever hooks&lt;/li&gt; 
 &lt;li&gt;Better MCP configurations&lt;/li&gt; 
 &lt;li&gt;Improved rules&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Please contribute! See &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/CONTRIBUTING.md&quot;&gt;CONTRIBUTING.md&lt;/a&gt; for guidelines.&lt;/p&gt; 
&lt;h3&gt;Ideas for Contributions&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;Language-specific skills (Rust, C#, Kotlin, Java) — Go, Python, Perl, Swift, and TypeScript already included&lt;/li&gt; 
 &lt;li&gt;Framework-specific configs (Rails, FastAPI) — Django, NestJS, Spring Boot, and Laravel already included&lt;/li&gt; 
 &lt;li&gt;DevOps agents (Kubernetes, Terraform, AWS, Docker)&lt;/li&gt; 
 &lt;li&gt;Testing strategies (different frameworks, visual regression)&lt;/li&gt; 
 &lt;li&gt;Domain-specific knowledge (ML, data engineering, mobile)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Community Ecosystem Notes&lt;/h3&gt; 
&lt;p&gt;These are not bundled with ECC and are not audited by this repo, but they are worth knowing about if you are exploring the broader Claude Code skills ecosystem:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/AgriciDaniel/claude-seo&quot;&gt;claude-seo&lt;/a&gt; — SEO-focused skill and agent collection&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/AgriciDaniel/claude-ads&quot;&gt;claude-ads&lt;/a&gt; — Ad-audit and paid-growth workflow collection&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/AgriciDaniel/claude-cybersecurity&quot;&gt;claude-cybersecurity&lt;/a&gt; — Security-oriented skill and agent collection&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Cursor IDE Support&lt;/h2&gt; 
&lt;p&gt;ECC provides &lt;strong&gt;full Cursor IDE support&lt;/strong&gt; with hooks, rules, agents, skills, commands, and MCP configs adapted for Cursor&#39;s native format.&lt;/p&gt; 
&lt;h3&gt;Quick Start (Cursor)&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# macOS/Linux
./install.sh --target cursor typescript
./install.sh --target cursor python golang swift php
&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Windows PowerShell
.\install.ps1 --target cursor typescript
.\install.ps1 --target cursor python golang swift php
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;What&#39;s Included&lt;/h3&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Component&lt;/th&gt; 
   &lt;th&gt;Count&lt;/th&gt; 
   &lt;th&gt;Details&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Hook Events&lt;/td&gt; 
   &lt;td&gt;15&lt;/td&gt; 
   &lt;td&gt;sessionStart, beforeShellExecution, afterFileEdit, beforeMCPExecution, beforeSubmitPrompt, and 10 more&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Hook Scripts&lt;/td&gt; 
   &lt;td&gt;16&lt;/td&gt; 
   &lt;td&gt;Thin Node.js scripts delegating to &lt;code&gt;scripts/hooks/&lt;/code&gt; via shared adapter&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Rules&lt;/td&gt; 
   &lt;td&gt;34&lt;/td&gt; 
   &lt;td&gt;9 common (alwaysApply) + 25 language-specific (TypeScript, Python, Go, Swift, PHP)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Agents&lt;/td&gt; 
   &lt;td&gt;Shared&lt;/td&gt; 
   &lt;td&gt;Via &lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt; at root (read by Cursor natively)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Skills&lt;/td&gt; 
   &lt;td&gt;Shared + Bundled&lt;/td&gt; 
   &lt;td&gt;Via &lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt; at root and &lt;code&gt;.cursor/skills/&lt;/code&gt; for translated additions&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Commands&lt;/td&gt; 
   &lt;td&gt;Shared&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;.cursor/commands/&lt;/code&gt; if installed&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;MCP Config&lt;/td&gt; 
   &lt;td&gt;Shared&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;.cursor/mcp.json&lt;/code&gt; if installed&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h3&gt;Hook Architecture (DRY Adapter Pattern)&lt;/h3&gt; 
&lt;p&gt;Cursor has &lt;strong&gt;more hook events than Claude Code&lt;/strong&gt; (20 vs 8). The &lt;code&gt;.cursor/hooks/adapter.js&lt;/code&gt; module transforms Cursor&#39;s stdin JSON to Claude Code&#39;s format, allowing existing &lt;code&gt;scripts/hooks/*.js&lt;/code&gt; to be reused without duplication.&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;Cursor stdin JSON → adapter.js → transforms → scripts/hooks/*.js
                                              (shared with Claude Code)
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Key hooks:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;beforeShellExecution&lt;/strong&gt; — Blocks dev servers outside tmux (exit 2), git push review&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;afterFileEdit&lt;/strong&gt; — Auto-format + TypeScript check + console.log warning&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;beforeSubmitPrompt&lt;/strong&gt; — Detects secrets (sk-, ghp_, AKIA patterns) in prompts&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;beforeTabFileRead&lt;/strong&gt; — Blocks Tab from reading .env, .key, .pem files (exit 2)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;beforeMCPExecution / afterMCPExecution&lt;/strong&gt; — MCP audit logging&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Rules Format&lt;/h3&gt; 
&lt;p&gt;Cursor rules use YAML frontmatter with &lt;code&gt;description&lt;/code&gt;, &lt;code&gt;globs&lt;/code&gt;, and &lt;code&gt;alwaysApply&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
description: &quot;TypeScript coding style extending common rules&quot;
globs: [&quot;**/*.ts&quot;, &quot;**/*.tsx&quot;, &quot;**/*.js&quot;, &quot;**/*.jsx&quot;]
alwaysApply: false
---
&lt;/code&gt;&lt;/pre&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Codex macOS App + CLI Support&lt;/h2&gt; 
&lt;p&gt;ECC provides &lt;strong&gt;first-class Codex support&lt;/strong&gt; for both the macOS app and CLI, with a reference configuration, Codex-specific &lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt; supplement, and shared skills.&lt;/p&gt; 
&lt;h3&gt;Quick Start (Codex App + CLI)&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Run Codex CLI in the repo — AGENTS.md and .codex/ are auto-detected
codex

# Automatic setup: sync ECC assets (AGENTS.md, skills, MCP servers) into ~/.codex
npm install &amp;amp;&amp;amp; bash scripts/sync-ecc-to-codex.sh
# or: pnpm install &amp;amp;&amp;amp; bash scripts/sync-ecc-to-codex.sh
# or: yarn install &amp;amp;&amp;amp; bash scripts/sync-ecc-to-codex.sh
# or: bun install &amp;amp;&amp;amp; bash scripts/sync-ecc-to-codex.sh

# Or manually: copy the reference config to your home directory
cp .codex/config.toml ~/.codex/config.toml
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The sync script safely merges ECC MCP servers into your existing &lt;code&gt;~/.codex/config.toml&lt;/code&gt; using an &lt;strong&gt;add-only&lt;/strong&gt; strategy — it never removes or modifies your existing servers. Run with &lt;code&gt;--dry-run&lt;/code&gt; to preview changes, or &lt;code&gt;--update-mcp&lt;/code&gt; to force-refresh ECC servers to the latest recommended config.&lt;/p&gt; 
&lt;p&gt;For Context7, ECC uses the canonical Codex section name &lt;code&gt;[mcp_servers.context7]&lt;/code&gt; while still launching the &lt;code&gt;@upstash/context7-mcp&lt;/code&gt; package. If you already have a legacy &lt;code&gt;[mcp_servers.context7-mcp]&lt;/code&gt; entry, &lt;code&gt;--update-mcp&lt;/code&gt; migrates it to the canonical section name.&lt;/p&gt; 
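&lt;p&gt;For example, using the flags described above:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Preview the merge without writing to ~/.codex/config.toml
bash scripts/sync-ecc-to-codex.sh --dry-run

# Force-refresh ECC-managed servers (also migrates a legacy context7-mcp section)
bash scripts/sync-ecc-to-codex.sh --update-mcp
&lt;/code&gt;&lt;/pre&gt; 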
&lt;p&gt;Codex macOS app:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Open this repository as your workspace.&lt;/li&gt; 
 &lt;li&gt;The root &lt;code&gt;AGENTS.md&lt;/code&gt; is auto-detected.&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;.codex/config.toml&lt;/code&gt; and &lt;code&gt;.codex/agents/*.toml&lt;/code&gt; work best when kept project-local.&lt;/li&gt; 
 &lt;li&gt;The reference &lt;code&gt;.codex/config.toml&lt;/code&gt; intentionally does not pin &lt;code&gt;model&lt;/code&gt; or &lt;code&gt;model_provider&lt;/code&gt;, so Codex uses its own current default unless you override it (see the sketch after this list).&lt;/li&gt; 
 &lt;li&gt;Optional: copy &lt;code&gt;.codex/config.toml&lt;/code&gt; to &lt;code&gt;~/.codex/config.toml&lt;/code&gt; for global defaults; keep the multi-agent role files project-local unless you also copy &lt;code&gt;.codex/agents/&lt;/code&gt;.&lt;/li&gt; 
&lt;/ul&gt; 
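&lt;p&gt;If you do want to pin those values globally, set them in &lt;code&gt;~/.codex/config.toml&lt;/code&gt; yourself. A minimal sketch; the values are placeholders, not recommendations:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-toml&quot;&gt;# ~/.codex/config.toml (optional; the ECC reference config leaves these unset)
model = &quot;your-preferred-model&quot;
model_provider = &quot;your-provider&quot;
&lt;/code&gt;&lt;/pre&gt; 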
&lt;h3&gt;What&#39;s Included&lt;/h3&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Component&lt;/th&gt; 
   &lt;th&gt;Count&lt;/th&gt; 
   &lt;th&gt;Details&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Config&lt;/td&gt; 
   &lt;td&gt;1&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;.codex/config.toml&lt;/code&gt; — top-level approvals/sandbox/web_search, MCP servers, notifications, profiles&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;2&lt;/td&gt; 
   &lt;td&gt;Root (universal) + &lt;code&gt;.codex/AGENTS.md&lt;/code&gt; (Codex-specific supplement)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Skills&lt;/td&gt; 
   &lt;td&gt;30&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;.agents/skills/&lt;/code&gt; — &lt;a href=&quot;http://SKILL.md&quot;&gt;SKILL.md&lt;/a&gt; + agents/openai.yaml per skill&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;MCP Servers&lt;/td&gt; 
   &lt;td&gt;6&lt;/td&gt; 
   &lt;td&gt;GitHub, Context7, Exa, Memory, Playwright, Sequential Thinking (7 with Supabase via &lt;code&gt;--update-mcp&lt;/code&gt; sync)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Profiles&lt;/td&gt; 
   &lt;td&gt;2&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;strict&lt;/code&gt; (read-only sandbox) and &lt;code&gt;yolo&lt;/code&gt; (full auto-approve)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Agent Roles&lt;/td&gt; 
   &lt;td&gt;3&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;.codex/agents/&lt;/code&gt; — explorer, reviewer, docs-researcher&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h3&gt;Skills&lt;/h3&gt; 
&lt;p&gt;Skills at &lt;code&gt;.agents/skills/&lt;/code&gt; are auto-loaded by Codex:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Skill&lt;/th&gt; 
   &lt;th&gt;Description&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;api-design&lt;/td&gt; 
   &lt;td&gt;REST API design patterns&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;article-writing&lt;/td&gt; 
   &lt;td&gt;Long-form writing from notes and voice references&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;backend-patterns&lt;/td&gt; 
   &lt;td&gt;API design, database, caching&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;brand-voice&lt;/td&gt; 
   &lt;td&gt;Source-derived writing style profiles from real content&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;bun-runtime&lt;/td&gt; 
   &lt;td&gt;Bun as runtime, package manager, bundler, and test runner&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;claude-api&lt;/td&gt; 
   &lt;td&gt;Anthropic Claude API patterns for Python and TypeScript&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;coding-standards&lt;/td&gt; 
   &lt;td&gt;Universal coding standards&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;content-engine&lt;/td&gt; 
   &lt;td&gt;Platform-native social content and repurposing&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;crosspost&lt;/td&gt; 
   &lt;td&gt;Multi-platform content distribution across X, LinkedIn, Threads&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;deep-research&lt;/td&gt; 
   &lt;td&gt;Multi-source research with synthesis and source attribution&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;dmux-workflows&lt;/td&gt; 
   &lt;td&gt;Multi-agent orchestration using tmux pane manager&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;documentation-lookup&lt;/td&gt; 
   &lt;td&gt;Up-to-date library and framework docs via Context7 MCP&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;e2e-testing&lt;/td&gt; 
   &lt;td&gt;Playwright E2E tests&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;eval-harness&lt;/td&gt; 
   &lt;td&gt;Eval-driven development&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;everything-claude-code&lt;/td&gt; 
   &lt;td&gt;Development conventions and patterns for the project&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;exa-search&lt;/td&gt; 
   &lt;td&gt;Neural search via Exa MCP for web, code, company research&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;fal-ai-media&lt;/td&gt; 
   &lt;td&gt;Unified media generation for images, video, and audio&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;frontend-patterns&lt;/td&gt; 
   &lt;td&gt;React/Next.js patterns&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;frontend-slides&lt;/td&gt; 
   &lt;td&gt;HTML presentations, PPTX conversion, visual style exploration&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;investor-materials&lt;/td&gt; 
   &lt;td&gt;Decks, memos, models, and one-pagers&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;investor-outreach&lt;/td&gt; 
   &lt;td&gt;Personalized outreach, follow-ups, and intro blurbs&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;market-research&lt;/td&gt; 
   &lt;td&gt;Source-attributed market and competitor research&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;mcp-server-patterns&lt;/td&gt; 
   &lt;td&gt;Build MCP servers with Node/TypeScript SDK&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;nextjs-turbopack&lt;/td&gt; 
   &lt;td&gt;Next.js 16+ and Turbopack incremental bundling&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;security-review&lt;/td&gt; 
   &lt;td&gt;Comprehensive security checklist&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;strategic-compact&lt;/td&gt; 
   &lt;td&gt;Context management&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;tdd-workflow&lt;/td&gt; 
   &lt;td&gt;Test-driven development with 80%+ coverage&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;verification-loop&lt;/td&gt; 
   &lt;td&gt;Build, test, lint, typecheck, security&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;video-editing&lt;/td&gt; 
   &lt;td&gt;AI-assisted video editing workflows with FFmpeg and Remotion&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;x-api&lt;/td&gt; 
   &lt;td&gt;X/Twitter API integration for posting and analytics&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h3&gt;Key Limitation&lt;/h3&gt; 
&lt;p&gt;Codex does &lt;strong&gt;not yet provide Claude-style hook execution parity&lt;/strong&gt;. ECC enforcement there is instruction-based via &lt;code&gt;AGENTS.md&lt;/code&gt;, optional &lt;code&gt;model_instructions_file&lt;/code&gt; overrides, and sandbox/approval settings.&lt;/p&gt; 
&lt;h3&gt;Multi-Agent Support&lt;/h3&gt; 
&lt;p&gt;Current Codex builds support stable multi-agent workflows. To set it up (a minimal config sketch follows this list):&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Enable &lt;code&gt;features.multi_agent = true&lt;/code&gt; in &lt;code&gt;.codex/config.toml&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;Define roles under &lt;code&gt;[agents.&amp;lt;name&amp;gt;]&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;Point each role at a file under &lt;code&gt;.codex/agents/&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;Use &lt;code&gt;/agent&lt;/code&gt; in the CLI to inspect or steer child agents&lt;/li&gt; 
&lt;/ul&gt; 
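&lt;p&gt;A rough sketch of the corresponding &lt;code&gt;.codex/config.toml&lt;/code&gt; entries. The &lt;code&gt;config_file&lt;/code&gt; key name is illustrative, not a documented Codex option; check your Codex build for the exact field that points a role at its definition file:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-toml&quot;&gt;# .codex/config.toml (sketch)
features.multi_agent = true

[agents.explorer]
# hypothetical key name; point the role at its file under .codex/agents/
config_file = &quot;.codex/agents/explorer.toml&quot;
&lt;/code&gt;&lt;/pre&gt; 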
&lt;p&gt;ECC ships three sample role configs:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Role&lt;/th&gt; 
   &lt;th&gt;Purpose&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;explorer&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Read-only codebase evidence gathering before edits&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;reviewer&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Correctness, security, and missing-test review&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;docs_researcher&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Documentation and API verification before release/docs changes&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;hr /&gt; 
&lt;h2&gt;OpenCode Support&lt;/h2&gt; 
&lt;p&gt;ECC provides &lt;strong&gt;full OpenCode support&lt;/strong&gt; including plugins and hooks.&lt;/p&gt; 
&lt;h3&gt;Quick Start&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install OpenCode
npm install -g opencode

# Run in the repository root
opencode
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The configuration is automatically detected from &lt;code&gt;.opencode/opencode.json&lt;/code&gt;.&lt;/p&gt; 
&lt;h3&gt;Feature Parity&lt;/h3&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Feature&lt;/th&gt; 
   &lt;th&gt;Claude Code&lt;/th&gt; 
   &lt;th&gt;OpenCode&lt;/th&gt; 
   &lt;th&gt;Status&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Agents&lt;/td&gt; 
   &lt;td&gt;PASS: 48 agents&lt;/td&gt; 
   &lt;td&gt;PASS: 12 agents&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;Claude Code leads&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Commands&lt;/td&gt; 
   &lt;td&gt;PASS: 79 commands&lt;/td&gt; 
   &lt;td&gt;PASS: 31 commands&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;Claude Code leads&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Skills&lt;/td&gt; 
   &lt;td&gt;PASS: 183 skills&lt;/td&gt; 
   &lt;td&gt;PASS: 37 skills&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;Claude Code leads&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Hooks&lt;/td&gt; 
   &lt;td&gt;PASS: 8 event types&lt;/td&gt; 
   &lt;td&gt;PASS: 11 events&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;OpenCode has more!&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Rules&lt;/td&gt; 
   &lt;td&gt;PASS: 29 rules&lt;/td&gt; 
   &lt;td&gt;PASS: 13 instructions&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;Claude Code leads&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;MCP Servers&lt;/td&gt; 
   &lt;td&gt;PASS: 14 servers&lt;/td&gt; 
   &lt;td&gt;PASS: Full&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;Full parity&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Custom Tools&lt;/td&gt; 
   &lt;td&gt;PASS: Via hooks&lt;/td&gt; 
   &lt;td&gt;PASS: 6 native tools&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;OpenCode is better&lt;/strong&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h3&gt;Hook Support via Plugins&lt;/h3&gt; 
&lt;p&gt;OpenCode&#39;s plugin system is MORE sophisticated than Claude Code&#39;s hooks, with 20+ event types:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Claude Code Hook&lt;/th&gt; 
   &lt;th&gt;OpenCode Plugin Event&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;PreToolUse&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;tool.execute.before&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;PostToolUse&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;tool.execute.after&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Stop&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;session.idle&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;SessionStart&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;session.created&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;SessionEnd&lt;/td&gt; 
   &lt;td&gt;&lt;code&gt;session.deleted&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;&lt;strong&gt;Additional OpenCode events&lt;/strong&gt;: &lt;code&gt;file.edited&lt;/code&gt;, &lt;code&gt;file.watcher.updated&lt;/code&gt;, &lt;code&gt;message.updated&lt;/code&gt;, &lt;code&gt;lsp.client.diagnostics&lt;/code&gt;, &lt;code&gt;tui.toast.show&lt;/code&gt;, and more.&lt;/p&gt; 
&lt;h3&gt;Available Slash Entry Shims (31+)&lt;/h3&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Command&lt;/th&gt; 
   &lt;th&gt;Description&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/plan&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Create implementation plan&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/tdd&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Enforce TDD workflow&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/code-review&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Review code changes&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/build-fix&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Fix build errors&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/e2e&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Generate E2E tests&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/refactor-clean&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Remove dead code&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/orchestrate&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Multi-agent workflow&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/learn&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Extract patterns from session&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/checkpoint&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Save verification state&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/verify&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Run verification loop&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/eval&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Evaluate against criteria&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/update-docs&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Update documentation&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/update-codemaps&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Update codemaps&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/test-coverage&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Analyze coverage&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/go-review&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Go code review&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/go-test&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Go TDD workflow&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/go-build&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Fix Go build errors&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/python-review&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Python code review (PEP 8, type hints, security)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/multi-plan&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Multi-model collaborative planning&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/multi-execute&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Multi-model collaborative execution&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/multi-backend&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Backend-focused multi-model workflow&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/multi-frontend&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Frontend-focused multi-model workflow&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/multi-workflow&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Full multi-model development workflow&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/pm2&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Auto-generate PM2 service commands&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/sessions&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Manage session history&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/skill-create&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Generate skills from git&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/instinct-status&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;View learned instincts&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/instinct-import&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Import instincts&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/instinct-export&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Export instincts&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/evolve&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Cluster instincts into skills&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/promote&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Promote project instincts to global scope&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/projects&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;List known projects and instinct stats&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/prune&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Delete expired pending instincts (30d TTL)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/learn-eval&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Extract and evaluate patterns before saving&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/setup-pm&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Configure package manager&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/harness-audit&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Audit harness reliability, eval readiness, and risk posture&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/loop-start&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Start controlled agentic loop execution pattern&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/loop-status&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Inspect active loop status and checkpoints&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/quality-gate&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Run quality gate checks for paths or entire repo&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/model-route&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Route tasks to models by complexity and budget&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h3&gt;Plugin Installation&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Option 1: Use directly&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd everything-claude-code
opencode
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Option 2: Install as npm package&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npm install ecc-universal
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Then add to your &lt;code&gt;opencode.json&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;plugin&quot;: [&quot;ecc-universal&quot;]
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;That npm plugin entry enables ECC&#39;s published OpenCode plugin module (hooks/events and plugin tools). It does &lt;strong&gt;not&lt;/strong&gt; automatically add ECC&#39;s full command/agent/instruction catalog to your project config.&lt;/p&gt; 
&lt;p&gt;For the full ECC OpenCode setup, either:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;run OpenCode inside this repository, or&lt;/li&gt; 
 &lt;li&gt;copy the bundled &lt;code&gt;.opencode/&lt;/code&gt; config assets into your project and wire the &lt;code&gt;instructions&lt;/code&gt;, &lt;code&gt;agent&lt;/code&gt;, and &lt;code&gt;command&lt;/code&gt; entries in &lt;code&gt;opencode.json&lt;/code&gt; (see the sketch after this list)&lt;/li&gt; 
&lt;/ul&gt; 
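&lt;p&gt;A minimal sketch of that wiring. The paths are examples based on the bundled layout, and the exact shape of the &lt;code&gt;agent&lt;/code&gt; and &lt;code&gt;command&lt;/code&gt; entries should follow OpenCode&#39;s config schema:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;// opencode.json (sketch)
{
  &quot;plugin&quot;: [&quot;ecc-universal&quot;],
  &quot;instructions&quot;: [&quot;.opencode/instructions/INSTRUCTIONS.md&quot;]
  // add &quot;agent&quot; and &quot;command&quot; entries pointing at the copied .opencode/ assets
}
&lt;/code&gt;&lt;/pre&gt; 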
&lt;h3&gt;Documentation&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Migration Guide&lt;/strong&gt;: &lt;code&gt;.opencode/MIGRATION.md&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;OpenCode Plugin README&lt;/strong&gt;: &lt;code&gt;.opencode/README.md&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Consolidated Rules&lt;/strong&gt;: &lt;code&gt;.opencode/instructions/INSTRUCTIONS.md&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;LLM Documentation&lt;/strong&gt;: &lt;code&gt;llms.txt&lt;/code&gt; (complete OpenCode docs for LLMs)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Cross-Tool Feature Parity&lt;/h2&gt; 
&lt;p&gt;ECC is the &lt;strong&gt;first plugin built to get the most out of every major AI coding tool&lt;/strong&gt;. Here&#39;s how each harness compares:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Feature&lt;/th&gt; 
   &lt;th&gt;Claude Code&lt;/th&gt; 
   &lt;th&gt;Cursor IDE&lt;/th&gt; 
   &lt;th&gt;Codex CLI&lt;/th&gt; 
   &lt;th&gt;OpenCode&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Agents&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;48&lt;/td&gt; 
   &lt;td&gt;Shared (&lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt;)&lt;/td&gt; 
   &lt;td&gt;Shared (&lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt;)&lt;/td&gt; 
   &lt;td&gt;12&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Commands&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;79&lt;/td&gt; 
   &lt;td&gt;Shared&lt;/td&gt; 
   &lt;td&gt;Instruction-based&lt;/td&gt; 
   &lt;td&gt;31&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Skills&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;183&lt;/td&gt; 
   &lt;td&gt;Shared&lt;/td&gt; 
   &lt;td&gt;10 (native format)&lt;/td&gt; 
   &lt;td&gt;37&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Hook Events&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;8 types&lt;/td&gt; 
   &lt;td&gt;15 types&lt;/td&gt; 
   &lt;td&gt;None yet&lt;/td&gt; 
   &lt;td&gt;11 types&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Hook Scripts&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;20+ scripts&lt;/td&gt; 
   &lt;td&gt;16 scripts (DRY adapter)&lt;/td&gt; 
   &lt;td&gt;N/A&lt;/td&gt; 
   &lt;td&gt;Plugin hooks&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Rules&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;34 (common + lang)&lt;/td&gt; 
   &lt;td&gt;34 (YAML frontmatter)&lt;/td&gt; 
   &lt;td&gt;Instruction-based&lt;/td&gt; 
   &lt;td&gt;13 instructions&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Custom Tools&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Via hooks&lt;/td&gt; 
   &lt;td&gt;Via hooks&lt;/td&gt; 
   &lt;td&gt;N/A&lt;/td&gt; 
   &lt;td&gt;6 native tools&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;MCP Servers&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;14&lt;/td&gt; 
   &lt;td&gt;Shared (mcp.json)&lt;/td&gt; 
   &lt;td&gt;7 (auto-merged via TOML parser)&lt;/td&gt; 
   &lt;td&gt;Full&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Config Format&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;settings.json&lt;/td&gt; 
   &lt;td&gt;hooks.json + rules/&lt;/td&gt; 
   &lt;td&gt;config.toml&lt;/td&gt; 
   &lt;td&gt;opencode.json&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Context File&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;http://CLAUDE.md&quot;&gt;CLAUDE.md&lt;/a&gt; + &lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Secret Detection&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Hook-based&lt;/td&gt; 
   &lt;td&gt;beforeSubmitPrompt hook&lt;/td&gt; 
   &lt;td&gt;Sandbox-based&lt;/td&gt; 
   &lt;td&gt;Hook-based&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Auto-Format&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;PostToolUse hook&lt;/td&gt; 
   &lt;td&gt;afterFileEdit hook&lt;/td&gt; 
   &lt;td&gt;N/A&lt;/td&gt; 
   &lt;td&gt;file.edited hook&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Version&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Plugin&lt;/td&gt; 
   &lt;td&gt;Plugin&lt;/td&gt; 
   &lt;td&gt;Reference config&lt;/td&gt; 
   &lt;td&gt;1.10.0&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;&lt;strong&gt;Key architectural decisions:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;http://AGENTS.md&quot;&gt;AGENTS.md&lt;/a&gt;&lt;/strong&gt; at root is the universal cross-tool file (read by all 4 tools)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;DRY adapter pattern&lt;/strong&gt; lets Cursor reuse Claude Code&#39;s hook scripts without duplication&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Skills format&lt;/strong&gt; (&lt;a href=&quot;http://SKILL.md&quot;&gt;SKILL.md&lt;/a&gt; with YAML frontmatter) works across Claude Code, Codex, and OpenCode&lt;/li&gt; 
 &lt;li&gt;Codex&#39;s lack of hooks is compensated for by &lt;code&gt;AGENTS.md&lt;/code&gt;, optional &lt;code&gt;model_instructions_file&lt;/code&gt; overrides, and sandbox permissions&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Background&lt;/h2&gt; 
&lt;p&gt;I&#39;ve been using Claude Code since the experimental rollout and won the Anthropic x Forum Ventures hackathon in Sep 2025 with &lt;a href=&quot;https://x.com/DRodriguezFX&quot;&gt;@DRodriguezFX&lt;/a&gt; — we built &lt;a href=&quot;https://zenith.chat&quot;&gt;zenith.chat&lt;/a&gt; entirely using Claude Code.&lt;/p&gt; 
&lt;p&gt;These configs are battle-tested across multiple production applications.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Token Optimization&lt;/h2&gt; 
&lt;p&gt;Claude Code usage can be expensive if you don&#39;t manage token consumption. These settings significantly reduce costs without sacrificing quality.&lt;/p&gt; 
&lt;h3&gt;Recommended Settings&lt;/h3&gt; 
&lt;p&gt;Add to &lt;code&gt;~/.claude/settings.json&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;model&quot;: &quot;sonnet&quot;,
  &quot;env&quot;: {
    &quot;MAX_THINKING_TOKENS&quot;: &quot;10000&quot;,
    &quot;CLAUDE_AUTOCOMPACT_PCT_OVERRIDE&quot;: &quot;50&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Setting&lt;/th&gt; 
   &lt;th&gt;Default&lt;/th&gt; 
   &lt;th&gt;Recommended&lt;/th&gt; 
   &lt;th&gt;Impact&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;model&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;opus&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;sonnet&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;~60% cost reduction; handles 80%+ of coding tasks&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;MAX_THINKING_TOKENS&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;31,999&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;10,000&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;~70% reduction in hidden thinking cost per request&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;CLAUDE_AUTOCOMPACT_PCT_OVERRIDE&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;95&lt;/td&gt; 
   &lt;td&gt;&lt;strong&gt;50&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;Compacts earlier — better quality in long sessions&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;Switch to Opus only when you need deep architectural reasoning:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;/model opus
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Daily Workflow Commands&lt;/h3&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Command&lt;/th&gt; 
   &lt;th&gt;When to Use&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/model sonnet&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Default for most tasks&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/model opus&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Complex architecture, debugging, deep reasoning&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/clear&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Between unrelated tasks (free, instant reset)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/compact&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;At logical task breakpoints (research done, milestone complete)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;/cost&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Monitor token spending during session&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h3&gt;Strategic Compaction&lt;/h3&gt; 
&lt;p&gt;The &lt;code&gt;strategic-compact&lt;/code&gt; skill (included in this plugin) suggests &lt;code&gt;/compact&lt;/code&gt; at logical breakpoints instead of relying on auto-compaction at 95% context. See &lt;code&gt;skills/strategic-compact/SKILL.md&lt;/code&gt; for the full decision guide.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;When to compact:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;After research/exploration, before implementation&lt;/li&gt; 
 &lt;li&gt;After completing a milestone, before starting the next&lt;/li&gt; 
 &lt;li&gt;After debugging, before continuing feature work&lt;/li&gt; 
 &lt;li&gt;After a failed approach, before trying a new one&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;When NOT to compact:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Mid-implementation (you&#39;ll lose variable names, file paths, partial state)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Context Window Management&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Critical:&lt;/strong&gt; Don&#39;t enable all MCPs at once. Each MCP tool description consumes tokens from your 200k window, potentially reducing it to ~70k.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Keep under 10 MCPs enabled per project&lt;/li&gt; 
 &lt;li&gt;Keep under 80 tools active&lt;/li&gt; 
 &lt;li&gt;Use &lt;code&gt;disabledMcpServers&lt;/code&gt; in project config to disable unused ones&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Agent Teams Cost Warning&lt;/h3&gt; 
&lt;p&gt;Agent Teams spawns multiple context windows. Each teammate consumes tokens independently. Only use for tasks where parallelism provides clear value (multi-module work, parallel reviews). For simple sequential tasks, subagents are more token-efficient.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;WARNING: Important Notes&lt;/h2&gt; 
&lt;h3&gt;Token Optimization&lt;/h3&gt; 
&lt;p&gt;Hitting daily limits? See the &lt;strong&gt;&lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/docs/token-optimization.md&quot;&gt;Token Optimization Guide&lt;/a&gt;&lt;/strong&gt; for recommended settings and workflow tips.&lt;/p&gt; 
&lt;p&gt;Quick wins:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;// ~/.claude/settings.json
{
  &quot;model&quot;: &quot;sonnet&quot;,
  &quot;env&quot;: {
    &quot;MAX_THINKING_TOKENS&quot;: &quot;10000&quot;,
    &quot;CLAUDE_AUTOCOMPACT_PCT_OVERRIDE&quot;: &quot;50&quot;,
    &quot;CLAUDE_CODE_SUBAGENT_MODEL&quot;: &quot;haiku&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Use &lt;code&gt;/clear&lt;/code&gt; between unrelated tasks, &lt;code&gt;/compact&lt;/code&gt; at logical breakpoints, and &lt;code&gt;/cost&lt;/code&gt; to monitor spending.&lt;/p&gt; 
&lt;h3&gt;Customization&lt;/h3&gt; 
&lt;p&gt;These configs work for my workflow. You should:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Start with what resonates&lt;/li&gt; 
 &lt;li&gt;Modify for your stack&lt;/li&gt; 
 &lt;li&gt;Remove what you don&#39;t use&lt;/li&gt; 
 &lt;li&gt;Add your own patterns&lt;/li&gt; 
&lt;/ol&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Community Projects&lt;/h2&gt; 
&lt;p&gt;Projects built on or inspired by Everything Claude Code:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Project&lt;/th&gt; 
   &lt;th&gt;Description&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/SaigonXIII/evc&quot;&gt;EVC&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Marketing agent workspace — 42 commands for content operators, brand governance, and multi-channel publishing. &lt;a href=&quot;https://saigonxiii.github.io/evc&quot;&gt;Visual overview&lt;/a&gt;.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;Built something with ECC? Open a PR to add it here.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Sponsors&lt;/h2&gt; 
&lt;p&gt;This project is free and open source. Sponsors help keep it maintained and growing.&lt;/p&gt; 
&lt;p&gt;&lt;a href=&quot;https://github.com/sponsors/affaan-m&quot;&gt;&lt;strong&gt;Become a Sponsor&lt;/strong&gt;&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/SPONSORS.md&quot;&gt;Sponsor Tiers&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/SPONSORING.md&quot;&gt;Sponsorship Program&lt;/a&gt;&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Star History&lt;/h2&gt; 
&lt;p&gt;&lt;a href=&quot;https://star-history.com/#affaan-m/everything-claude-code&amp;amp;Date&quot;&gt;&lt;img src=&quot;https://api.star-history.com/svg?repos=affaan-m/everything-claude-code&amp;amp;type=Date&quot; alt=&quot;Star History Chart&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Links&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Shorthand Guide (Start Here):&lt;/strong&gt; &lt;a href=&quot;https://x.com/affaanmustafa/status/2012378465664745795&quot;&gt;The Shorthand Guide to Everything Claude Code&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Longform Guide (Advanced):&lt;/strong&gt; &lt;a href=&quot;https://x.com/affaanmustafa/status/2014040193557471352&quot;&gt;The Longform Guide to Everything Claude Code&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Security Guide:&lt;/strong&gt; &lt;a href=&quot;https://raw.githubusercontent.com/affaan-m/everything-claude-code/main/the-security-guide.md&quot;&gt;Security Guide&lt;/a&gt; | &lt;a href=&quot;https://x.com/affaanmustafa/status/2033263813387223421&quot;&gt;Thread&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Follow:&lt;/strong&gt; &lt;a href=&quot;https://x.com/affaanmustafa&quot;&gt;@affaanmustafa&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;h2&gt;License&lt;/h2&gt; 
&lt;p&gt;MIT - Use freely, modify as needed, contribute back if you can.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;p&gt;&lt;strong&gt;Star this repo if it helps. Read both guides. Build something great.&lt;/strong&gt;&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/096cecf0c0253737bc943ccfdb1c2d557a5d691bd4d812e32fa878410a51b9ae/affaan-m/everything-claude-code" medium="image" />
      
    </item>
    
    <item>
      <title>microsoft/markitdown</title>
      <link>https://github.com/microsoft/markitdown</link>
      <description>&lt;p&gt;Python tool for converting files and office documents to Markdown.&lt;/p&gt;&lt;hr&gt;&lt;h1&gt;MarkItDown&lt;/h1&gt; 
&lt;p&gt;&lt;a href=&quot;https://pypi.org/project/markitdown/&quot;&gt;&lt;img src=&quot;https://img.shields.io/pypi/v/markitdown.svg?sanitize=true&quot; alt=&quot;PyPI&quot; /&gt;&lt;/a&gt; &lt;img src=&quot;https://img.shields.io/pypi/dd/markitdown&quot; alt=&quot;PyPI - Downloads&quot; /&gt; &lt;a href=&quot;https://github.com/microsoft/autogen&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Built%20by-AutoGen%20Team-blue&quot; alt=&quot;Built by AutoGen Team&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;div class=&quot;markdown-alert markdown-alert-tip&quot;&gt;
 &lt;p class=&quot;markdown-alert-title&quot;&gt;
   Tip&lt;/p&gt;
 &lt;p&gt;MarkItDown now offers an MCP (Model Context Protocol) server for integration with LLM applications like Claude Desktop. See &lt;a href=&quot;https://github.com/microsoft/markitdown/tree/main/packages/markitdown-mcp&quot;&gt;markitdown-mcp&lt;/a&gt; for more information.&lt;/p&gt; 
&lt;/div&gt; 
&lt;div class=&quot;markdown-alert markdown-alert-important&quot;&gt;
 &lt;p class=&quot;markdown-alert-title&quot;&gt;
   Important&lt;/p&gt;
 &lt;p&gt;Breaking changes between 0.0.1 to 0.1.0:&lt;/p&gt; 
 &lt;ul&gt; 
  &lt;li&gt;Dependencies are now organized into optional feature-groups (further details below). Use &lt;code&gt;pip install &#39;markitdown[all]&#39;&lt;/code&gt; to have backward-compatible behavior.&lt;/li&gt; 
  &lt;li&gt;convert_stream() now requires a binary file-like object (e.g., a file opened in binary mode, or an io.BytesIO object). This is a breaking change from the previous version, which also accepted text file-like objects such as io.StringIO.&lt;/li&gt; 
  &lt;li&gt;The DocumentConverter class interface has changed to read from file-like streams rather than file paths. &lt;em&gt;No temporary files are created anymore&lt;/em&gt;. If you maintain a plugin or a custom DocumentConverter, you will likely need to update your code. Otherwise, if you are only using the MarkItDown class or CLI (as in these examples), you should not need to change anything.&lt;/li&gt; 
 &lt;/ul&gt; 
&lt;/div&gt; 
&lt;p&gt;MarkItDown is a lightweight Python utility for converting various files to Markdown for use with LLMs and related text analysis pipelines. To this end, it is most comparable to &lt;a href=&quot;https://github.com/deanmalmgren/textract&quot;&gt;textract&lt;/a&gt;, but with a focus on preserving important document structure and content as Markdown (including headings, lists, tables, links, etc.). While the output is often reasonably presentable and human-friendly, it is meant to be consumed by text analysis tools, and may not be the best option for high-fidelity document conversions for human consumption.&lt;/p&gt; 
&lt;p&gt;MarkItDown currently supports the conversion from:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;PDF&lt;/li&gt; 
 &lt;li&gt;PowerPoint&lt;/li&gt; 
 &lt;li&gt;Word&lt;/li&gt; 
 &lt;li&gt;Excel&lt;/li&gt; 
 &lt;li&gt;Images (EXIF metadata and OCR)&lt;/li&gt; 
 &lt;li&gt;Audio (EXIF metadata and speech transcription)&lt;/li&gt; 
 &lt;li&gt;HTML&lt;/li&gt; 
 &lt;li&gt;Text-based formats (CSV, JSON, XML)&lt;/li&gt; 
 &lt;li&gt;ZIP files (iterates over contents)&lt;/li&gt; 
  &lt;li&gt;YouTube URLs&lt;/li&gt; 
 &lt;li&gt;EPubs&lt;/li&gt; 
 &lt;li&gt;... and more!&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Why Markdown?&lt;/h2&gt; 
&lt;p&gt;Markdown is extremely close to plain text, with minimal markup or formatting, but still provides a way to represent important document structure. Mainstream LLMs, such as OpenAI&#39;s GPT-4o, natively &quot;&lt;em&gt;speak&lt;/em&gt;&quot; Markdown, and often incorporate Markdown into their responses unprompted. This suggests that they have been trained on vast amounts of Markdown-formatted text, and understand it well. As a side benefit, Markdown conventions are also highly token-efficient.&lt;/p&gt; 
&lt;h2&gt;Prerequisites&lt;/h2&gt; 
&lt;p&gt;MarkItDown requires Python 3.10 or higher. It is recommended to use a virtual environment to avoid dependency conflicts.&lt;/p&gt; 
&lt;p&gt;With the standard Python installation, you can create and activate a virtual environment using the following commands:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;python -m venv .venv
source .venv/bin/activate
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;If using &lt;code&gt;uv&lt;/code&gt;, you can create a virtual environment with:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;uv venv --python=3.12 .venv
source .venv/bin/activate
# NOTE: Be sure to use &#39;uv pip install&#39; rather than just &#39;pip install&#39; to install packages in this virtual environment
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;If you are using Anaconda, you can create a virtual environment with:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;conda create -n markitdown python=3.12
conda activate markitdown
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Installation&lt;/h2&gt; 
&lt;p&gt;To install MarkItDown, use pip: &lt;code&gt;pip install &#39;markitdown[all]&#39;&lt;/code&gt;. Alternatively, you can install it from the source:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone git@github.com:microsoft/markitdown.git
cd markitdown
pip install -e &#39;packages/markitdown[all]&#39;
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Usage&lt;/h2&gt; 
&lt;h3&gt;Command-Line&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;markitdown path-to-file.pdf &amp;gt; document.md
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Or use &lt;code&gt;-o&lt;/code&gt; to specify the output file:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;markitdown path-to-file.pdf -o document.md
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;You can also pipe content:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cat path-to-file.pdf | markitdown
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Optional Dependencies&lt;/h3&gt; 
&lt;p&gt;MarkItDown has optional dependencies that enable support for various file formats. Earlier in this document, we installed all optional dependencies with the &lt;code&gt;[all]&lt;/code&gt; option. However, you can also install them individually for more control. For example:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pip install &#39;markitdown[pdf, docx, pptx]&#39;
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;will install only the dependencies for PDF, DOCX, and PPTX files.&lt;/p&gt; 
&lt;p&gt;At the moment, the following optional dependencies are available:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;[all]&lt;/code&gt; Installs all optional dependencies&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;[pptx]&lt;/code&gt; Installs dependencies for PowerPoint files&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;[docx]&lt;/code&gt; Installs dependencies for Word files&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;[xlsx]&lt;/code&gt; Installs dependencies for Excel files&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;[xls]&lt;/code&gt; Installs dependencies for older Excel files&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;[pdf]&lt;/code&gt; Installs dependencies for PDF files&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;[outlook]&lt;/code&gt; Installs dependencies for Outlook messages&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;[az-doc-intel]&lt;/code&gt; Installs dependencies for Azure Document Intelligence&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;[audio-transcription]&lt;/code&gt; Installs dependencies for audio transcription of WAV and MP3 files&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;[youtube-transcription]&lt;/code&gt; Installs dependencies for fetching YouTube video transcription&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Plugins&lt;/h3&gt; 
&lt;p&gt;MarkItDown also supports 3rd-party plugins. Plugins are disabled by default. To list installed plugins:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;markitdown --list-plugins
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;To enable plugins, use:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;markitdown --use-plugins path-to-file.pdf
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;To find available plugins, search GitHub for the hashtag &lt;code&gt;#markitdown-plugin&lt;/code&gt;. To develop a plugin, see &lt;code&gt;packages/markitdown-sample-plugin&lt;/code&gt;.&lt;/p&gt; 
&lt;h4&gt;markitdown-ocr Plugin&lt;/h4&gt; 
&lt;p&gt;The &lt;code&gt;markitdown-ocr&lt;/code&gt; plugin adds OCR support to PDF, DOCX, PPTX, and XLSX converters, extracting text from embedded images using LLM Vision — the same &lt;code&gt;llm_client&lt;/code&gt; / &lt;code&gt;llm_model&lt;/code&gt; pattern that MarkItDown already uses for image descriptions. No new ML libraries or binary dependencies required.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pip install markitdown-ocr
pip install openai  # or any OpenAI-compatible client
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Usage:&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Pass the same &lt;code&gt;llm_client&lt;/code&gt; and &lt;code&gt;llm_model&lt;/code&gt; you would use for image descriptions:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from markitdown import MarkItDown
from openai import OpenAI

md = MarkItDown(
    enable_plugins=True,
    llm_client=OpenAI(),
    llm_model=&quot;gpt-4o&quot;,
)
result = md.convert(&quot;document_with_images.pdf&quot;)
print(result.text_content)
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;If no &lt;code&gt;llm_client&lt;/code&gt; is provided, the plugin still loads, but OCR is silently skipped and the standard built-in converter is used instead.&lt;/p&gt; 
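&lt;p&gt;As a minimal sketch of that fallback path (reusing only the API already shown above), enabling plugins without an LLM client simply yields the standard conversion:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from markitdown import MarkItDown

# No llm_client here: the markitdown-ocr plugin loads, but OCR is skipped,
# so embedded images are ignored and the built-in PDF converter does the work.
md = MarkItDown(enable_plugins=True)
result = md.convert(&quot;document_with_images.pdf&quot;)
print(result.text_content)
&lt;/code&gt;&lt;/pre&gt; 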
&lt;p&gt;See &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/markitdown/main/packages/markitdown-ocr/README.md&quot;&gt;&lt;code&gt;packages/markitdown-ocr/README.md&lt;/code&gt;&lt;/a&gt; for detailed documentation.&lt;/p&gt; 
&lt;h3&gt;Azure Document Intelligence&lt;/h3&gt; 
&lt;p&gt;To use Microsoft Document Intelligence for conversion:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;markitdown path-to-file.pdf -o document.md -d -e &quot;&amp;lt;document_intelligence_endpoint&amp;gt;&quot;
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;More information about how to set up an Azure Document Intelligence Resource can be found &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/how-to-guides/create-document-intelligence-resource?view=doc-intel-4.0.0&quot;&gt;here&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Python API&lt;/h3&gt; 
&lt;p&gt;Basic usage in Python:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from markitdown import MarkItDown

md = MarkItDown(enable_plugins=False) # Set to True to enable plugins
result = md.convert(&quot;test.xlsx&quot;)
print(result.text_content)
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Document Intelligence conversion in Python:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from markitdown import MarkItDown

md = MarkItDown(docintel_endpoint=&quot;&amp;lt;document_intelligence_endpoint&amp;gt;&quot;)
result = md.convert(&quot;test.pdf&quot;)
print(result.text_content)
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;To use Large Language Models for image descriptions (currently only for pptx and image files), provide &lt;code&gt;llm_client&lt;/code&gt; and &lt;code&gt;llm_model&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from markitdown import MarkItDown
from openai import OpenAI

client = OpenAI()
md = MarkItDown(llm_client=client, llm_model=&quot;gpt-4o&quot;, llm_prompt=&quot;optional custom prompt&quot;)
result = md.convert(&quot;example.jpg&quot;)
print(result.text_content)
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Docker&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;docker build -t markitdown:latest .
docker run --rm -i markitdown:latest &amp;lt; ~/your-file.pdf &amp;gt; output.md
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit &lt;a href=&quot;https://cla.opensource.microsoft.com&quot;&gt;https://cla.opensource.microsoft.com&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.&lt;/p&gt; 
&lt;p&gt;This project has adopted the &lt;a href=&quot;https://opensource.microsoft.com/codeofconduct/&quot;&gt;Microsoft Open Source Code of Conduct&lt;/a&gt;. For more information see the &lt;a href=&quot;https://opensource.microsoft.com/codeofconduct/faq/&quot;&gt;Code of Conduct FAQ&lt;/a&gt; or contact &lt;a href=&quot;mailto:opencode@microsoft.com&quot;&gt;opencode@microsoft.com&lt;/a&gt; with any additional questions or comments.&lt;/p&gt; 
&lt;h3&gt;How to Contribute&lt;/h3&gt; 
&lt;p&gt;You can help by looking at issues or helping review PRs. Any issue or PR is welcome, but we have also marked some as &#39;open for contribution&#39; and &#39;open for reviewing&#39; to help facilitate community contributions. These are of course just suggestions and you are welcome to contribute in any way you like.&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th&gt;&lt;/th&gt; 
    &lt;th&gt;All&lt;/th&gt; 
    &lt;th&gt;Especially Needs Help from Community&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;Issues&lt;/strong&gt;&lt;/td&gt; 
    &lt;td&gt;&lt;a href=&quot;https://github.com/microsoft/markitdown/issues&quot;&gt;All Issues&lt;/a&gt;&lt;/td&gt; 
    &lt;td&gt;&lt;a href=&quot;https://github.com/microsoft/markitdown/issues?q=is%3Aissue+is%3Aopen+label%3A%22open+for+contribution%22&quot;&gt;Issues open for contribution&lt;/a&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;PRs&lt;/strong&gt;&lt;/td&gt; 
    &lt;td&gt;&lt;a href=&quot;https://github.com/microsoft/markitdown/pulls&quot;&gt;All PRs&lt;/a&gt;&lt;/td&gt; 
    &lt;td&gt;&lt;a href=&quot;https://github.com/microsoft/markitdown/pulls?q=is%3Apr+is%3Aopen+label%3A%22open+for+reviewing%22&quot;&gt;PRs open for reviewing&lt;/a&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/div&gt; 
&lt;h3&gt;Running Tests and Checks&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt; &lt;p&gt;Navigate to the MarkItDown package:&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;cd packages/markitdown
&lt;/code&gt;&lt;/pre&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;Install &lt;code&gt;hatch&lt;/code&gt; in your environment and run tests:&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;pip install hatch  # Other ways of installing hatch: https://hatch.pypa.io/dev/install/
hatch shell
hatch test
&lt;/code&gt;&lt;/pre&gt; &lt;p&gt;(Alternative) Use the Devcontainer which has all the dependencies installed:&lt;/p&gt; &lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;# Reopen the project in Devcontainer and run:
hatch test
&lt;/code&gt;&lt;/pre&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;Run pre-commit checks before submitting a PR: &lt;code&gt;pre-commit run --all-files&lt;/code&gt;&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Contributing 3rd-party Plugins&lt;/h3&gt; 
&lt;p&gt;You can also contribute by creating and sharing 3rd party plugins. See &lt;code&gt;packages/markitdown-sample-plugin&lt;/code&gt; for more details.&lt;/p&gt; 
&lt;h2&gt;Trademarks&lt;/h2&gt; 
&lt;p&gt;This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow &lt;a href=&quot;https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general&quot;&gt;Microsoft&#39;s Trademark &amp;amp; Brand Guidelines&lt;/a&gt;. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties&#39; policies.&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/8658053a2c3f001c79a212e7ede8911a61eb5501aa600009411274ed36c6435a/microsoft/markitdown" medium="image" />
      
    </item>
    
    <item>
      <title>shiyu-coder/Kronos</title>
      <link>https://github.com/shiyu-coder/Kronos</link>
      <description>&lt;p&gt;Kronos: A Foundation Model for the Language of Financial Markets&lt;/p&gt;&lt;hr&gt;&lt;div align=&quot;center&quot;&gt; 
 &lt;h2&gt;&lt;b&gt;Kronos: A Foundation Model for the Language of Financial Markets &lt;/b&gt;&lt;/h2&gt; 
&lt;/div&gt; 
&lt;div align=&quot;center&quot;&gt;  
 &lt;a href=&quot;https://huggingface.co/NeoQuasar&quot;&gt; &lt;img src=&quot;https://img.shields.io/badge/🤗-Hugging_Face-yellow&quot; alt=&quot;Hugging Face&quot; /&gt; &lt;/a&gt; 
 &lt;a href=&quot;https://shiyu-coder.github.io/Kronos-demo/&quot;&gt; &lt;img src=&quot;https://img.shields.io/badge/🚀-Live_Demo-brightgreen&quot; alt=&quot;Live Demo&quot; /&gt; &lt;/a&gt; 
 &lt;a href=&quot;https://github.com/shiyu-coder/Kronos/graphs/commit-activity&quot;&gt; &lt;img src=&quot;https://img.shields.io/github/last-commit/shiyu-coder/Kronos?color=blue&quot; alt=&quot;Last Commit&quot; /&gt; &lt;/a&gt; 
 &lt;a href=&quot;https://github.com/shiyu-coder/Kronos/stargazers&quot;&gt; &lt;img src=&quot;https://img.shields.io/github/stars/shiyu-coder/Kronos?color=lightblue&quot; alt=&quot;GitHub Stars&quot; /&gt; &lt;/a&gt; 
 &lt;a href=&quot;https://github.com/shiyu-coder/Kronos/network/members&quot;&gt; &lt;img src=&quot;https://img.shields.io/github/forks/shiyu-coder/Kronos?color=yellow&quot; alt=&quot;GitHub Forks&quot; /&gt; &lt;/a&gt; 
 &lt;a href=&quot;https://raw.githubusercontent.com/shiyu-coder/Kronos/master/LICENSE&quot;&gt; &lt;img src=&quot;https://img.shields.io/github/license/shiyu-coder/Kronos?color=green&quot; alt=&quot;License&quot; /&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;!-- Keep these links. Translations will automatically update with the README. --&gt; 
 &lt;a href=&quot;https://zdoc.app/de/shiyu-coder/Kronos&quot;&gt;Deutsch&lt;/a&gt; | 
 &lt;a href=&quot;https://zdoc.app/es/shiyu-coder/Kronos&quot;&gt;Español&lt;/a&gt; | 
 &lt;a href=&quot;https://zdoc.app/fr/shiyu-coder/Kronos&quot;&gt;Français&lt;/a&gt; | 
 &lt;a href=&quot;https://zdoc.app/ja/shiyu-coder/Kronos&quot;&gt;日本語&lt;/a&gt; | 
 &lt;a href=&quot;https://zdoc.app/ko/shiyu-coder/Kronos&quot;&gt;한국어&lt;/a&gt; | 
 &lt;a href=&quot;https://zdoc.app/pt/shiyu-coder/Kronos&quot;&gt;Português&lt;/a&gt; | 
 &lt;a href=&quot;https://zdoc.app/ru/shiyu-coder/Kronos&quot;&gt;Русский&lt;/a&gt; | 
 &lt;a href=&quot;https://zdoc.app/zh/shiyu-coder/Kronos&quot;&gt;中文&lt;/a&gt; 
&lt;/div&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/shiyu-coder/Kronos/master/figures/logo.png&quot; width=&quot;100&quot; /&gt; &lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;Kronos is the &lt;strong&gt;first open-source foundation model&lt;/strong&gt; for financial candlesticks (K-lines), trained on data from over &lt;strong&gt;45 global exchanges&lt;/strong&gt;.&lt;/p&gt; 
&lt;/blockquote&gt;  
&lt;h2&gt;📰 News&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;🚩 &lt;strong&gt;[2025.11.10]&lt;/strong&gt; Kronos has been accepted by AAAI 2026.&lt;/li&gt; 
 &lt;li&gt;🚩 &lt;strong&gt;[2025.08.17]&lt;/strong&gt; We have released the scripts for fine-tuning! Check them out to adapt Kronos to your own tasks.&lt;/li&gt; 
 &lt;li&gt;🚩 &lt;strong&gt;[2025.08.02]&lt;/strong&gt; Our paper is now available on &lt;a href=&quot;https://arxiv.org/abs/2508.02739&quot;&gt;arXiv&lt;/a&gt;!&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;📜 Introduction&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;Kronos&lt;/strong&gt; is a family of decoder-only foundation models, pre-trained specifically for the &quot;language&quot; of financial markets—K-line sequences. Unlike general-purpose time-series foundation models (TSFMs), Kronos is designed to handle the unique, high-noise characteristics of financial data. It leverages a novel two-stage framework:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;A specialized tokenizer first quantizes continuous, multi-dimensional K-line data (OHLCV) into &lt;strong&gt;hierarchical discrete tokens&lt;/strong&gt;.&lt;/li&gt; 
 &lt;li&gt;A large, autoregressive Transformer is then pre-trained on these tokens, enabling it to serve as a unified model for diverse quantitative tasks.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/shiyu-coder/Kronos/master/figures/overview.png&quot; alt=&quot;&quot; align=&quot;center&quot; width=&quot;700px&quot; /&gt; &lt;/p&gt; 
&lt;h2&gt;✨ Live Demo&lt;/h2&gt; 
&lt;p&gt;We have set up a live demo to visualize Kronos&#39;s forecasting results. The webpage showcases a forecast for the &lt;strong&gt;BTC/USDT&lt;/strong&gt; trading pair over the next 24 hours.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;👉 &lt;a href=&quot;https://shiyu-coder.github.io/Kronos-demo/&quot;&gt;Access the Live Demo Here&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt; 
&lt;h2&gt;📦 Model Zoo&lt;/h2&gt; 
&lt;p&gt;We release a family of pre-trained models with varying capacities to suit different computational and application needs. All models are readily accessible from the Hugging Face Hub.&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Model&lt;/th&gt; 
   &lt;th&gt;Tokenizer&lt;/th&gt; 
   &lt;th&gt;Context length&lt;/th&gt; 
   &lt;th&gt;Params&lt;/th&gt; 
   &lt;th&gt;Open-source&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Kronos-mini&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://huggingface.co/NeoQuasar/Kronos-Tokenizer-2k&quot;&gt;Kronos-Tokenizer-2k&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;2048&lt;/td&gt; 
   &lt;td&gt;4.1M&lt;/td&gt; 
   &lt;td&gt;✅ &lt;a href=&quot;https://huggingface.co/NeoQuasar/Kronos-mini&quot;&gt;NeoQuasar/Kronos-mini&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Kronos-small&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://huggingface.co/NeoQuasar/Kronos-Tokenizer-base&quot;&gt;Kronos-Tokenizer-base&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;512&lt;/td&gt; 
   &lt;td&gt;24.7M&lt;/td&gt; 
   &lt;td&gt;✅ &lt;a href=&quot;https://huggingface.co/NeoQuasar/Kronos-small&quot;&gt;NeoQuasar/Kronos-small&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Kronos-base&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://huggingface.co/NeoQuasar/Kronos-Tokenizer-base&quot;&gt;Kronos-Tokenizer-base&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;512&lt;/td&gt; 
   &lt;td&gt;102.3M&lt;/td&gt; 
   &lt;td&gt;✅ &lt;a href=&quot;https://huggingface.co/NeoQuasar/Kronos-base&quot;&gt;NeoQuasar/Kronos-base&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Kronos-large&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://huggingface.co/NeoQuasar/Kronos-Tokenizer-base&quot;&gt;Kronos-Tokenizer-base&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;512&lt;/td&gt; 
   &lt;td&gt;499.2M&lt;/td&gt; 
   &lt;td&gt;❌&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;🚀 Getting Started&lt;/h2&gt; 
&lt;h3&gt;Installation&lt;/h3&gt; 
&lt;ol&gt; 
 &lt;li&gt;Install Python 3.10+, and then install the dependencies:&lt;/li&gt; 
&lt;/ol&gt; 
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;📈 Making Forecasts&lt;/h3&gt; 
&lt;p&gt;Forecasting with Kronos is straightforward using the &lt;code&gt;KronosPredictor&lt;/code&gt; class. It handles data preprocessing, normalization, prediction, and inverse normalization, allowing you to get from raw data to forecasts in just a few lines of code.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Important Note&lt;/strong&gt;: The &lt;code&gt;max_context&lt;/code&gt; for &lt;code&gt;Kronos-small&lt;/code&gt; and &lt;code&gt;Kronos-base&lt;/code&gt; is &lt;strong&gt;512&lt;/strong&gt;. This is the maximum sequence length the model can process. For optimal performance, it is recommended that your input data length (i.e., &lt;code&gt;lookback&lt;/code&gt;) does not exceed this limit. The &lt;code&gt;KronosPredictor&lt;/code&gt; will automatically handle truncation for longer contexts.&lt;/p&gt; 
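&lt;p&gt;If your own history is longer than that, one option (a sketch only; &lt;code&gt;x_df&lt;/code&gt; and &lt;code&gt;x_timestamp&lt;/code&gt; stand for the historical inputs prepared below) is to trim the inputs to the most recent rows before calling the predictor:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;MAX_CONTEXT = 512

# Keep only the most recent rows so the lookback fits the model&#39;s context window
if len(x_df) &amp;gt; MAX_CONTEXT:
    x_df = x_df.tail(MAX_CONTEXT)
    x_timestamp = x_timestamp.tail(MAX_CONTEXT)
&lt;/code&gt;&lt;/pre&gt; 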
&lt;p&gt;Here is a step-by-step guide to making your first forecast.&lt;/p&gt; 
&lt;h4&gt;1. Load the Tokenizer and Model&lt;/h4&gt; 
&lt;p&gt;First, load a pre-trained Kronos model and its corresponding tokenizer from the Hugging Face Hub.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from model import Kronos, KronosTokenizer, KronosPredictor

# Load from Hugging Face Hub
tokenizer = KronosTokenizer.from_pretrained(&quot;NeoQuasar/Kronos-Tokenizer-base&quot;)
model = Kronos.from_pretrained(&quot;NeoQuasar/Kronos-small&quot;)
&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;2. Instantiate the Predictor&lt;/h4&gt; 
&lt;p&gt;Create an instance of &lt;code&gt;KronosPredictor&lt;/code&gt;, passing the model, tokenizer, and desired device.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Initialize the predictor
predictor = KronosPredictor(model, tokenizer, max_context=512)
&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;3. Prepare Input Data&lt;/h4&gt; 
&lt;p&gt;The &lt;code&gt;predict&lt;/code&gt; method requires three main inputs:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;df&lt;/code&gt;: A pandas DataFrame containing the historical K-line data. It must include columns &lt;code&gt;[&#39;open&#39;, &#39;high&#39;, &#39;low&#39;, &#39;close&#39;]&lt;/code&gt;. &lt;code&gt;volume&lt;/code&gt; and &lt;code&gt;amount&lt;/code&gt; are optional.&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;x_timestamp&lt;/code&gt;: A pandas Series of timestamps corresponding to the historical data in &lt;code&gt;df&lt;/code&gt;.&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;y_timestamp&lt;/code&gt;: A pandas Series of timestamps for the future periods you want to predict.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import pandas as pd

# Load your data
df = pd.read_csv(&quot;./data/XSHG_5min_600977.csv&quot;)
df[&#39;timestamps&#39;] = pd.to_datetime(df[&#39;timestamps&#39;])

# Define context window and prediction length
lookback = 400
pred_len = 120

# Prepare inputs for the predictor
x_df = df.loc[:lookback-1, [&#39;open&#39;, &#39;high&#39;, &#39;low&#39;, &#39;close&#39;, &#39;volume&#39;, &#39;amount&#39;]]
x_timestamp = df.loc[:lookback-1, &#39;timestamps&#39;]
y_timestamp = df.loc[lookback:lookback+pred_len-1, &#39;timestamps&#39;]
&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;4. Generate Forecasts&lt;/h4&gt; 
&lt;p&gt;Call the &lt;code&gt;predict&lt;/code&gt; method to generate forecasts. You can control the sampling process with parameters like &lt;code&gt;T&lt;/code&gt;, &lt;code&gt;top_p&lt;/code&gt;, and &lt;code&gt;sample_count&lt;/code&gt; for probabilistic forecasting.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Generate predictions
pred_df = predictor.predict(
    df=x_df,
    x_timestamp=x_timestamp,
    y_timestamp=y_timestamp,
    pred_len=pred_len,
    T=1.0,          # Temperature for sampling
    top_p=0.9,      # Nucleus sampling probability
    sample_count=1  # Number of forecast paths to generate and average
)

print(&quot;Forecasted Data Head:&quot;)
print(pred_df.head())
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The &lt;code&gt;predict&lt;/code&gt; method returns a pandas DataFrame containing the forecasted values for &lt;code&gt;open&lt;/code&gt;, &lt;code&gt;high&lt;/code&gt;, &lt;code&gt;low&lt;/code&gt;, &lt;code&gt;close&lt;/code&gt;, &lt;code&gt;volume&lt;/code&gt;, and &lt;code&gt;amount&lt;/code&gt;, indexed by the &lt;code&gt;y_timestamp&lt;/code&gt; you provided.&lt;/p&gt; 
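&lt;p&gt;As a quick sanity check (a minimal sketch reusing the &lt;code&gt;df&lt;/code&gt;, &lt;code&gt;lookback&lt;/code&gt;, and &lt;code&gt;pred_len&lt;/code&gt; defined above; it is not part of the example script), you can line the forecast up against the held-out continuation of the CSV:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Ground truth for the forecast horizon, taken from the same CSV
cols = [&#39;open&#39;, &#39;high&#39;, &#39;low&#39;, &#39;close&#39;]
y_true = df.loc[lookback:lookback+pred_len-1, cols].reset_index(drop=True)
y_pred = pred_df[cols].reset_index(drop=True)

# Mean absolute error per column as a rough accuracy indicator
print((y_pred - y_true).abs().mean())
&lt;/code&gt;&lt;/pre&gt; 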
&lt;p&gt;For efficient processing of multiple time series, Kronos provides a &lt;code&gt;predict_batch&lt;/code&gt; method that enables parallel prediction on multiple datasets simultaneously. This is particularly useful when you need to forecast multiple assets or time periods at once.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Prepare multiple datasets for batch prediction
df_list = [df1, df2, df3]  # List of DataFrames
x_timestamp_list = [x_ts1, x_ts2, x_ts3]  # List of historical timestamps
y_timestamp_list = [y_ts1, y_ts2, y_ts3]  # List of future timestamps

# Generate batch predictions
pred_df_list = predictor.predict_batch(
    df_list=df_list,
    x_timestamp_list=x_timestamp_list,
    y_timestamp_list=y_timestamp_list,
    pred_len=pred_len,
    T=1.0,
    top_p=0.9,
    sample_count=1,
    verbose=True
)

# pred_df_list contains prediction results in the same order as input
for i, pred_df in enumerate(pred_df_list):
    print(f&quot;Predictions for series {i}:&quot;)
    print(pred_df.head())
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Important Requirements for Batch Prediction:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;All series must have the same historical length (lookback window)&lt;/li&gt; 
 &lt;li&gt;All series must have the same prediction length (&lt;code&gt;pred_len&lt;/code&gt;)&lt;/li&gt; 
 &lt;li&gt;Each DataFrame must contain the required columns: &lt;code&gt;[&#39;open&#39;, &#39;high&#39;, &#39;low&#39;, &#39;close&#39;]&lt;/code&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;volume&lt;/code&gt; and &lt;code&gt;amount&lt;/code&gt; columns are optional and will be filled with zeros if missing&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;The &lt;code&gt;predict_batch&lt;/code&gt; method leverages GPU parallelism for efficient processing and automatically handles normalization and denormalization for each series independently.&lt;/p&gt; 
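&lt;p&gt;To make those shape requirements concrete, here is a minimal sketch (reusing the single CSV loaded earlier purely for illustration) that builds two windows with identical lookback and prediction lengths and passes them to &lt;code&gt;predict_batch&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Two windows from the same DataFrame, each with lookback=400 and pred_len=120
starts = [0, 200]
cols = [&#39;open&#39;, &#39;high&#39;, &#39;low&#39;, &#39;close&#39;, &#39;volume&#39;, &#39;amount&#39;]

df_list = [df.loc[s:s+lookback-1, cols] for s in starts]
x_timestamp_list = [df.loc[s:s+lookback-1, &#39;timestamps&#39;] for s in starts]
y_timestamp_list = [df.loc[s+lookback:s+lookback+pred_len-1, &#39;timestamps&#39;] for s in starts]

pred_df_list = predictor.predict_batch(
    df_list=df_list,
    x_timestamp_list=x_timestamp_list,
    y_timestamp_list=y_timestamp_list,
    pred_len=pred_len,
    T=1.0,
    top_p=0.9,
    sample_count=1,
)
&lt;/code&gt;&lt;/pre&gt; 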
&lt;h4&gt;5. Example and Visualization&lt;/h4&gt; 
&lt;p&gt;For a complete, runnable script that includes data loading, prediction, and plotting, please see &lt;a href=&quot;https://raw.githubusercontent.com/shiyu-coder/Kronos/master/examples/prediction_example.py&quot;&gt;&lt;code&gt;examples/prediction_example.py&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;Running this script will generate a plot comparing the ground truth data against the model&#39;s forecast, similar to the one shown below:&lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/shiyu-coder/Kronos/master/figures/prediction_example.png&quot; alt=&quot;Forecast Example&quot; align=&quot;center&quot; width=&quot;600px&quot; /&gt; &lt;/p&gt; 
&lt;p&gt;Additionally, we provide a script that makes predictions without Volume and Amount data, which can be found in &lt;a href=&quot;https://raw.githubusercontent.com/shiyu-coder/Kronos/master/examples/prediction_wo_vol_example.py&quot;&gt;&lt;code&gt;examples/prediction_wo_vol_example.py&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;🔧 Finetuning on Your Own Data (A-Share Market Example)&lt;/h2&gt; 
&lt;p&gt;We provide a complete pipeline for finetuning Kronos on your own datasets. As an example, we demonstrate how to use &lt;a href=&quot;https://github.com/microsoft/qlib&quot;&gt;Qlib&lt;/a&gt; to prepare data from the Chinese A-share market and conduct a simple backtest.&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; This pipeline is intended as a demonstration to illustrate the finetuning process. It is a simplified example and not a production-ready quantitative trading system. A robust quantitative strategy requires more sophisticated techniques, such as portfolio optimization and risk factor neutralization, to achieve stable alpha.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;p&gt;The finetuning process is divided into four main steps:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Configuration&lt;/strong&gt;: Set up paths and hyperparameters.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Data Preparation&lt;/strong&gt;: Process and split your data using Qlib.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Model Finetuning&lt;/strong&gt;: Finetune the Tokenizer and the Predictor models.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Backtesting&lt;/strong&gt;: Evaluate the finetuned model&#39;s performance.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h3&gt;Prerequisites&lt;/h3&gt; 
&lt;ol&gt; 
 &lt;li&gt;First, ensure you have all dependencies from &lt;code&gt;requirements.txt&lt;/code&gt; installed.&lt;/li&gt; 
 &lt;li&gt;This pipeline relies on &lt;code&gt;qlib&lt;/code&gt;. Please install it:&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;  pip install pyqlib
&lt;/code&gt;&lt;/pre&gt; &lt;/li&gt; 
 &lt;li&gt;You will need to prepare your Qlib data. Follow the &lt;a href=&quot;https://github.com/microsoft/qlib&quot;&gt;official Qlib guide&lt;/a&gt; to download and set up your data locally. The example scripts assume you are using daily frequency data.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;h3&gt;Step 1: Configure Your Experiment&lt;/h3&gt; 
&lt;p&gt;All settings for data, training, and model paths are centralized in &lt;code&gt;finetune/config.py&lt;/code&gt;. Before running any scripts, please &lt;strong&gt;modify the following paths&lt;/strong&gt; according to your environment:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;code&gt;qlib_data_path&lt;/code&gt;: Path to your local Qlib data directory.&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;dataset_path&lt;/code&gt;: Directory where the processed train/validation/test pickle files will be saved.&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;save_path&lt;/code&gt;: Base directory for saving model checkpoints.&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;backtest_result_path&lt;/code&gt;: Directory for saving backtesting results.&lt;/li&gt; 
 &lt;li&gt;&lt;code&gt;pretrained_tokenizer_path&lt;/code&gt; and &lt;code&gt;pretrained_predictor_path&lt;/code&gt;: Paths to the pre-trained models you want to start from (can be local paths or Hugging Face model names).&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;You can also adjust other parameters like &lt;code&gt;instrument&lt;/code&gt;, &lt;code&gt;train_time_range&lt;/code&gt;, &lt;code&gt;epochs&lt;/code&gt;, and &lt;code&gt;batch_size&lt;/code&gt; to fit your specific task. If you don&#39;t use &lt;a href=&quot;https://www.comet.com/&quot;&gt;Comet.ml&lt;/a&gt;, set &lt;code&gt;use_comet = False&lt;/code&gt;.&lt;/p&gt; 
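&lt;p&gt;For orientation only, here is a hypothetical sketch of how those settings might be laid out; the real &lt;code&gt;finetune/config.py&lt;/code&gt; may be organized differently, and every value below is a placeholder to replace with your own paths and parameters:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch of the settings described above -- adjust to your environment.
class Config:
    # Paths
    qlib_data_path = &quot;~/.qlib/qlib_data/cn_data&quot;   # local Qlib data directory
    dataset_path = &quot;./data/processed&quot;              # processed train/val/test pickles
    save_path = &quot;./outputs/models&quot;                 # base dir for model checkpoints
    backtest_result_path = &quot;./outputs/backtest&quot;    # backtesting results

    # Pre-trained models to start from (local paths or Hugging Face names)
    pretrained_tokenizer_path = &quot;NeoQuasar/Kronos-Tokenizer-base&quot;
    pretrained_predictor_path = &quot;NeoQuasar/Kronos-small&quot;

    # Experiment parameters (placeholder values)
    instrument = &quot;csi300&quot;
    train_time_range = [&quot;2010-01-01&quot;, &quot;2019-12-31&quot;]
    epochs = 30
    batch_size = 64
    use_comet = False  # set True only if you log experiments to Comet.ml
&lt;/code&gt;&lt;/pre&gt; 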
&lt;h3&gt;Step 2: Prepare the Dataset&lt;/h3&gt; 
&lt;p&gt;Run the data preprocessing script. This script will load raw market data from your Qlib directory, process it, split it into training, validation, and test sets, and save them as pickle files.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;python finetune/qlib_data_preprocess.py
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;After running, you will find &lt;code&gt;train_data.pkl&lt;/code&gt;, &lt;code&gt;val_data.pkl&lt;/code&gt;, and &lt;code&gt;test_data.pkl&lt;/code&gt; in the directory specified by &lt;code&gt;dataset_path&lt;/code&gt; in your config.&lt;/p&gt; 
&lt;h3&gt;Step 3: Run the Finetuning&lt;/h3&gt; 
&lt;p&gt;The finetuning process consists of two stages: finetuning the tokenizer and then the predictor. Both training scripts are designed for multi-GPU training using &lt;code&gt;torchrun&lt;/code&gt;.&lt;/p&gt; 
&lt;h4&gt;3.1 Finetune the Tokenizer&lt;/h4&gt; 
&lt;p&gt;This step adjusts the tokenizer to the data distribution of your specific domain.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Replace NUM_GPUS with the number of GPUs you want to use (e.g., 2)
torchrun --standalone --nproc_per_node=NUM_GPUS finetune/train_tokenizer.py
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The best tokenizer checkpoint will be saved to the path configured in &lt;code&gt;config.py&lt;/code&gt; (derived from &lt;code&gt;save_path&lt;/code&gt; and &lt;code&gt;tokenizer_save_folder_name&lt;/code&gt;).&lt;/p&gt; 
&lt;h4&gt;3.2 Finetune the Predictor&lt;/h4&gt; 
&lt;p&gt;This step finetunes the main Kronos model for the forecasting task.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Replace NUM_GPUS with the number of GPUs you want to use (e.g., 2)
torchrun --standalone --nproc_per_node=NUM_GPUS finetune/train_predictor.py
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The best predictor checkpoint will be saved to the path configured in &lt;code&gt;config.py&lt;/code&gt;.&lt;/p&gt; 
&lt;h3&gt;Step 4: Evaluate with Backtesting&lt;/h3&gt; 
&lt;p&gt;Finally, run the backtesting script to evaluate your finetuned model. This script loads the models, performs inference on the test set, generates prediction signals (e.g., forecasted price change), and runs a simple top-K strategy backtest.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Specify the GPU for inference
python finetune/qlib_test.py --device cuda:0
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The script will output a detailed performance analysis in your console and generate a plot showing the cumulative return curves of your strategy against the benchmark, similar to the one below:&lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/shiyu-coder/Kronos/master/figures/backtest_result_example.png&quot; alt=&quot;Backtest Example&quot; align=&quot;center&quot; width=&quot;700px&quot; /&gt; &lt;/p&gt; 
&lt;h3&gt;💡 From Demo to Production: Important Considerations&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Raw Signals vs. Pure Alpha&lt;/strong&gt;: The signals generated by the model in this demo are raw predictions. In a real-world quantitative workflow, these signals would typically be fed into a portfolio optimization model. This model would apply constraints to neutralize exposure to common risk factors (e.g., market beta, style factors like size and value), thereby isolating the &lt;strong&gt;&quot;pure alpha&quot;&lt;/strong&gt; and improving the strategy&#39;s robustness.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Data Handling&lt;/strong&gt;: The provided &lt;code&gt;QlibDataset&lt;/code&gt; is an example. For different data sources or formats, you will need to adapt the data loading and preprocessing logic.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Strategy and Backtesting Complexity&lt;/strong&gt;: The simple top-K strategy used here is a basic starting point. Production-level strategies often incorporate more complex logic for portfolio construction, dynamic position sizing, and risk management (e.g., stop-loss/take-profit rules). Furthermore, a high-fidelity backtest should meticulously model transaction costs, slippage, and market impact to provide a more accurate estimate of real-world performance.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;📝 AI-Generated Comments&lt;/strong&gt;: Please note that many of the code comments within the &lt;code&gt;finetune/&lt;/code&gt; directory were generated by an AI assistant (Gemini 2.5 Pro) for explanatory purposes. While they aim to be helpful, they may contain inaccuracies. We recommend treating the code itself as the definitive source of logic.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h2&gt;📖 Citation&lt;/h2&gt; 
&lt;p&gt;If you use Kronos in your research, we would appreciate a citation to our &lt;a href=&quot;https://arxiv.org/abs/2508.02739&quot;&gt;paper&lt;/a&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;@misc{shi2025kronos,
      title={Kronos: A Foundation Model for the Language of Financial Markets}, 
      author={Yu Shi and Zongliang Fu and Shuo Chen and Bohan Zhao and Wei Xu and Changshui Zhang and Jian Li},
      year={2025},
      eprint={2508.02739},
      archivePrefix={arXiv},
      primaryClass={q-fin.ST},
      url={https://arxiv.org/abs/2508.02739}, 
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;📜 License&lt;/h2&gt; 
&lt;p&gt;This project is licensed under the &lt;a href=&quot;https://raw.githubusercontent.com/shiyu-coder/Kronos/master/LICENSE&quot;&gt;MIT License&lt;/a&gt;.&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/0351132f64d649683a138b16e0bdf2f499ee80f15cfe1c77a9c13f4171dbaf0d/shiyu-coder/Kronos" medium="image" />
      
    </item>
    
    <item>
      <title>coleam00/Archon</title>
      <link>https://github.com/coleam00/Archon</link>
      <description>&lt;p&gt;The first open-source harness builder for AI coding. Make AI coding deterministic and repeatable.&lt;/p&gt;&lt;hr&gt;&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/coleam00/Archon/dev/assets/logo.png&quot; alt=&quot;Archon&quot; width=&quot;160&quot; /&gt; &lt;/p&gt; 
&lt;h1 align=&quot;center&quot;&gt;Archon&lt;/h1&gt; 
&lt;p align=&quot;center&quot;&gt; The first open-source harness builder for AI coding. Make AI coding deterministic and repeatable. &lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;a href=&quot;https://trendshift.io/repositories/13964&quot; target=&quot;_blank&quot;&gt;&lt;img src=&quot;https://trendshift.io/api/badge/repositories/13964&quot; alt=&quot;coleam00%2FArchon | Trendshift&quot; style=&quot;width: 250px; height: 55px;&quot; width=&quot;250&quot; height=&quot;55&quot; /&gt;&lt;/a&gt; &lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;a href=&quot;https://raw.githubusercontent.com/coleam00/Archon/dev/LICENSE&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/License-MIT-blue.svg?sanitize=true&quot; alt=&quot;License: MIT&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/coleam00/Archon/actions/workflows/test.yml&quot;&gt;&lt;img src=&quot;https://github.com/coleam00/Archon/actions/workflows/test.yml/badge.svg?sanitize=true&quot; alt=&quot;CI&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://archon.diy&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/docs-archon.diy-blue&quot; alt=&quot;Docs&quot; /&gt;&lt;/a&gt; &lt;/p&gt; 
&lt;hr /&gt; 
&lt;p&gt;Archon is a workflow engine for AI coding agents. Define your development processes as YAML workflows - planning, implementation, validation, code review, PR creation - and run them reliably across all your projects.&lt;/p&gt; 
&lt;p&gt;What Dockerfiles did for infrastructure and GitHub Actions did for CI/CD, Archon does for AI coding workflows. Think n8n, but for software development.&lt;/p&gt; 
&lt;h2&gt;Why Archon?&lt;/h2&gt; 
&lt;p&gt;When you ask an AI agent to &quot;fix this bug&quot;, what happens depends on the model&#39;s mood. It might skip planning. It might forget to run tests. It might write a PR description that ignores your template. Every run is different.&lt;/p&gt; 
&lt;p&gt;Archon fixes this. Encode your development process as a workflow. The workflow defines the phases, validation gates, and artifacts. The AI fills in the intelligence at each step, but the structure is deterministic and owned by you.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Repeatable&lt;/strong&gt; - Same workflow, same sequence, every time. Plan, implement, validate, review, PR.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Isolated&lt;/strong&gt; - Every workflow run gets its own git worktree. Run 5 fixes in parallel with no conflicts.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Fire and forget&lt;/strong&gt; - Kick off a workflow, go do other work. Come back to a finished PR with review comments.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Composable&lt;/strong&gt; - Mix deterministic nodes (bash scripts, tests, git ops) with AI nodes (planning, code generation, review). The AI only runs where it adds value.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Portable&lt;/strong&gt; - Define workflows once in &lt;code&gt;.archon/workflows/&lt;/code&gt;, commit them to your repo. They work the same from CLI, Web UI, Slack, Telegram, or GitHub.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;What It Looks Like&lt;/h2&gt; 
&lt;p&gt;Here&#39;s an example of an Archon workflow that plans, implements in a loop until tests pass, gets your approval, then creates the PR:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# .archon/workflows/build-feature.yaml
nodes:
  - id: plan
    prompt: &quot;Explore the codebase and create an implementation plan&quot;

  - id: implement
    depends_on: [plan]
    loop:                                      # AI loop - iterate until done
      prompt: &quot;Read the plan. Implement the next task. Run validation.&quot;
      until: ALL_TASKS_COMPLETE
      fresh_context: true                      # Fresh session each iteration

  - id: run-tests
    depends_on: [implement]
    bash: &quot;bun run validate&quot;                   # Deterministic - no AI

  - id: review
    depends_on: [run-tests]
    prompt: &quot;Review all changes against the plan. Fix any issues.&quot;

  - id: approve
    depends_on: [review]
    loop:                                      # Human approval gate
      prompt: &quot;Present the changes for review. Address any feedback.&quot;
      until: APPROVED
      interactive: true                        # Pauses and waits for human input

  - id: create-pr
    depends_on: [approve]
    prompt: &quot;Push changes and create a pull request&quot;
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Tell your coding agent what you want, and Archon handles the rest:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;You: Use archon to add dark mode to the settings page

Agent: I&#39;ll run the archon-idea-to-pr workflow for this.
       → Creating isolated worktree on branch archon/task-dark-mode...
       → Planning...
       → Implementing (task 1/4)...
       → Implementing (task 2/4)...
       → Tests failing - iterating...
       → Tests passing after 2 iterations
       → Code review complete - 0 issues
       → PR ready: https://github.com/you/project/pull/47
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Previous Version&lt;/h2&gt; 
&lt;p&gt;Looking for the original Python-based Archon (task management + RAG)? It&#39;s fully preserved on the &lt;a href=&quot;https://github.com/coleam00/Archon/tree/archive/v1-task-management-rag&quot;&gt;&lt;code&gt;archive/v1-task-management-rag&lt;/code&gt;&lt;/a&gt; branch.&lt;/p&gt; 
&lt;h2&gt;Getting Started&lt;/h2&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Most users should start with the &lt;a href=&quot;https://raw.githubusercontent.com/coleam00/Archon/dev/#full-setup-5-minutes&quot;&gt;Full Setup&lt;/a&gt;&lt;/strong&gt; - it walks you through credentials, installs the Archon skill into your projects, and gives you the web dashboard.&lt;/p&gt; 
 &lt;p&gt;&lt;strong&gt;Already have Claude Code and just want the CLI?&lt;/strong&gt; Jump to the &lt;a href=&quot;https://raw.githubusercontent.com/coleam00/Archon/dev/#quick-install-30-seconds&quot;&gt;Quick Install&lt;/a&gt;.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h3&gt;Full Setup (5 minutes)&lt;/h3&gt; 
&lt;p&gt;Clone the repo and use the guided setup wizard. This configures credentials and platform integrations, and copies the Archon skill into your target projects.&lt;/p&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Prerequisites&lt;/b&gt; - Bun, Claude Code, and the GitHub CLI&lt;/summary&gt; 
 &lt;p&gt;&lt;strong&gt;Bun&lt;/strong&gt; - &lt;a href=&quot;https://bun.sh&quot;&gt;bun.sh&lt;/a&gt;&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# macOS/Linux
curl -fsSL https://bun.sh/install | bash

# Windows (PowerShell)
irm bun.sh/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;&lt;strong&gt;GitHub CLI&lt;/strong&gt; - &lt;a href=&quot;https://cli.github.com/&quot;&gt;cli.github.com&lt;/a&gt;&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# macOS
brew install gh

# Windows (via winget)
winget install GitHub.cli

# Linux (Debian/Ubuntu)
sudo apt install gh
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;&lt;strong&gt;Claude Code&lt;/strong&gt; - &lt;a href=&quot;https://claude.ai/code&quot;&gt;claude.ai/code&lt;/a&gt;&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# macOS/Linux/WSL
curl -fsSL https://claude.ai/install.sh | bash

# Windows (PowerShell)
irm https://claude.ai/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt; 
&lt;/details&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/coleam00/Archon
cd Archon
bun install
claude
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Then say: &lt;strong&gt;&quot;Set up Archon&quot;&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;The setup wizard walks you through everything: CLI installation, authentication, platform selection, and copying the Archon skill to your target repo.&lt;/p&gt; 
&lt;h3&gt;Quick Install (30 seconds)&lt;/h3&gt; 
&lt;p&gt;Already have Claude Code set up? Install the standalone CLI binary and skip the wizard.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;macOS / Linux&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -fsSL https://archon.diy/install | bash
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Windows (PowerShell)&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;irm https://archon.diy/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Homebrew&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;brew install coleam00/archon/archon
&lt;/code&gt;&lt;/pre&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Compiled binaries need a &lt;code&gt;CLAUDE_BIN_PATH&lt;/code&gt;.&lt;/strong&gt; The quick-install binaries don&#39;t bundle Claude Code. Install it separately, then point Archon at it:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# macOS / Linux / WSL
curl -fsSL https://claude.ai/install.sh | bash
export CLAUDE_BIN_PATH=&quot;$HOME/.local/bin/claude&quot;

# Windows (PowerShell)
irm https://claude.ai/install.ps1 | iex
$env:CLAUDE_BIN_PATH = &quot;$env:USERPROFILE\.local\bin\claude.exe&quot;
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;Or set &lt;code&gt;assistants.claude.claudeBinaryPath&lt;/code&gt; in &lt;code&gt;~/.archon/config.yaml&lt;/code&gt;. The Docker image ships Claude Code pre-installed. See &lt;a href=&quot;https://archon.diy/docs/getting-started/ai-assistants/#binary-path-configuration-compiled-binaries-only&quot;&gt;AI Assistants → Binary path configuration&lt;/a&gt; for details.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h3&gt;Start Using Archon&lt;/h3&gt; 
&lt;p&gt;Once you&#39;ve completed either setup path, go to your project and start working:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd /path/to/your/project
claude
&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code&gt;Use archon to fix issue #42
&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code&gt;What archon workflows do I have? When would I use each one?
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The coding agent handles workflow selection, branch naming, and worktree isolation for you. Projects are registered automatically the first time they&#39;re used.&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Always run Claude Code from your target repo, not from the Archon repo. The setup wizard copies the Archon skill into your project so it works from there.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h2&gt;Web UI&lt;/h2&gt; 
&lt;p&gt;Archon includes a web dashboard for chatting with your coding agent, running workflows, and monitoring activity. Binary installs: run &lt;code&gt;archon serve&lt;/code&gt; to download and start the web UI in one step. From source: ask your coding agent to run the frontend from the Archon repo, or run &lt;code&gt;bun run dev&lt;/code&gt; from the repo root yourself.&lt;/p&gt; 
&lt;p&gt;Register a project by clicking &lt;strong&gt;+&lt;/strong&gt; next to &quot;Project&quot; in the chat sidebar - enter a GitHub URL or local path. Then start a conversation, invoke workflows, and watch progress in real time.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Key pages:&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Chat&lt;/strong&gt; - Conversation interface with real-time streaming and tool call visualization&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Dashboard&lt;/strong&gt; - Mission Control for monitoring running workflows, with filterable history by project, status, and date&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Workflow Builder&lt;/strong&gt; - Visual drag-and-drop editor for creating DAG workflows with loop nodes&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Workflow Execution&lt;/strong&gt; - Step-by-step progress view for any running or completed workflow&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Monitoring hub:&lt;/strong&gt; The sidebar shows conversations from &lt;strong&gt;all platforms&lt;/strong&gt; - not just the web. Workflows kicked off from the CLI, messages from Slack or Telegram, GitHub issue interactions - everything appears in one place.&lt;/p&gt; 
&lt;p&gt;See the &lt;a href=&quot;https://archon.diy/adapters/web/&quot;&gt;Web UI Guide&lt;/a&gt; for full documentation.&lt;/p&gt; 
&lt;h2&gt;What Can You Automate?&lt;/h2&gt; 
&lt;p&gt;Archon ships with workflows for common development tasks:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Workflow&lt;/th&gt; 
   &lt;th&gt;What it does&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-assist&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;General Q&amp;amp;A, debugging, exploration - full Claude Code agent with all tools&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-fix-github-issue&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Classify issue → investigate/plan → implement → validate → PR → smart review → self-fix&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-idea-to-pr&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Feature idea → plan → implement → validate → PR → 5 parallel reviews → self-fix&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-plan-to-pr&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Execute existing plan → implement → validate → PR → review → self-fix&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-issue-review-full&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Comprehensive fix + full multi-agent review pipeline for GitHub issues&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-smart-pr-review&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Classify PR complexity → run targeted review agents → synthesize findings&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-comprehensive-pr-review&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Multi-agent PR review (5 parallel reviewers) with automatic fixes&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-create-issue&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Classify problem → gather context → investigate → create GitHub issue&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-validate-pr&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Thorough PR validation testing both main and feature branches&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-resolve-conflicts&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Detect merge conflicts → analyze both sides → resolve → validate → commit&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-feature-development&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Implement feature from plan → validate → create PR&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-architect&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Architectural sweep, complexity reduction, codebase health improvement&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-refactor-safely&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Safe refactoring with type-check hooks and behavior verification&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-ralph-dag&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;PRD implementation loop - iterate through stories until done&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-remotion-generate&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Generate or modify Remotion video compositions with AI&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-test-loop-dag&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Loop node test workflow - iterative counter until completion&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;code&gt;archon-piv-loop&lt;/code&gt;&lt;/td&gt; 
   &lt;td&gt;Guided Plan-Implement-Validate loop with human review between iterations&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;Archon ships 17 default workflows - run &lt;code&gt;archon workflow list&lt;/code&gt; or describe what you want and the router picks the right one.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Or define your own.&lt;/strong&gt; Default workflows are great starting points - copy one from &lt;code&gt;.archon/workflows/defaults/&lt;/code&gt; and customize it. Workflows are YAML files in &lt;code&gt;.archon/workflows/&lt;/code&gt;, commands are markdown files in &lt;code&gt;.archon/commands/&lt;/code&gt;. Same-named files in your repo override the bundled defaults. Commit them - your whole team runs the same process.&lt;/p&gt; 
&lt;p&gt;See &lt;a href=&quot;https://archon.diy/guides/authoring-workflows/&quot;&gt;Authoring Workflows&lt;/a&gt; and &lt;a href=&quot;https://archon.diy/guides/authoring-commands/&quot;&gt;Authoring Commands&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Add a Platform&lt;/h2&gt; 
&lt;p&gt;The Web UI and CLI work out of the box. Optionally connect a chat platform for remote access:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Platform&lt;/th&gt; 
   &lt;th&gt;Setup time&lt;/th&gt; 
   &lt;th&gt;Guide&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Telegram&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;5 min&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/adapters/telegram/&quot;&gt;Telegram Guide&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Slack&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;15 min&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/adapters/slack/&quot;&gt;Slack Guide&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;GitHub Webhooks&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;15 min&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/adapters/github/&quot;&gt;GitHub Guide&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Discord&lt;/strong&gt;&lt;/td&gt; 
   &lt;td&gt;5 min&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/adapters/community/discord/&quot;&gt;Discord Guide&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;Architecture&lt;/h2&gt; 
&lt;pre&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────┐
│  Platform Adapters (Web UI, CLI, Telegram, Slack,       │
│                    Discord, GitHub)                      │
└──────────────────────────┬──────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────┐
│                     Orchestrator                        │
│          (Message Routing &amp;amp; Context Management)         │
└─────────────┬───────────────────────────┬───────────────┘
              │                           │
      ┌───────┴────────┐          ┌───────┴────────┐
      │                │          │                │
      ▼                ▼          ▼                ▼
┌───────────┐  ┌────────────┐  ┌──────────────────────────┐
│  Command  │  │  Workflow  │  │    AI Assistant Clients  │
│  Handler  │  │  Executor  │  │      (Claude / Codex)    │
│  (Slash)  │  │  (YAML)    │  │                          │
└───────────┘  └────────────┘  └──────────────────────────┘
      │              │                      │
      └──────────────┴──────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────┐
│              SQLite / PostgreSQL (7 Tables)             │
│   Codebases • Conversations • Sessions • Workflow Runs  │
│    Isolation Environments • Messages • Workflow Events  │
└─────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;Documentation&lt;/h2&gt; 
&lt;p&gt;Full documentation is available at &lt;strong&gt;&lt;a href=&quot;https://archon.diy&quot;&gt;archon.diy&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Topic&lt;/th&gt; 
   &lt;th&gt;Description&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/getting-started/overview/&quot;&gt;Getting Started&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Setup guide (Web UI or CLI)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/book/&quot;&gt;The Book of Archon&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;10-chapter narrative tutorial&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/reference/cli/&quot;&gt;CLI Reference&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Full CLI reference&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/guides/authoring-workflows/&quot;&gt;Authoring Workflows&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Create custom YAML workflows&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/guides/authoring-commands/&quot;&gt;Authoring Commands&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Create reusable AI commands&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/reference/configuration/&quot;&gt;Configuration&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;All config options, env vars, YAML settings&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/getting-started/ai-assistants/&quot;&gt;AI Assistants&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Claude and Codex setup details&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/deployment/&quot;&gt;Deployment&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Docker, VPS, production setup&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/reference/architecture/&quot;&gt;Architecture&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;System design and internals&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://archon.diy/reference/troubleshooting/&quot;&gt;Troubleshooting&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Common issues and fixes&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;Telemetry&lt;/h2&gt; 
&lt;p&gt;Archon sends a single anonymous event — &lt;code&gt;workflow_invoked&lt;/code&gt; — each time a workflow starts, so maintainers can see which workflows get real usage and prioritize accordingly. &lt;strong&gt;No PII, ever.&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;What&#39;s collected:&lt;/strong&gt; the workflow name, the workflow description (both authored by you in YAML), the platform that triggered it (&lt;code&gt;cli&lt;/code&gt;, &lt;code&gt;web&lt;/code&gt;, &lt;code&gt;slack&lt;/code&gt;, etc.), the Archon version, and a random install UUID stored at &lt;code&gt;~/.archon/telemetry-id&lt;/code&gt;. Nothing else.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;What&#39;s &lt;em&gt;not&lt;/em&gt; collected:&lt;/strong&gt; your code, prompts, messages, git remotes, file paths, usernames, tokens, AI output, workflow node details — none of it.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;Opt out:&lt;/strong&gt; set any of these in your environment:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ARCHON_TELEMETRY_DISABLED=1
DO_NOT_TRACK=1        # de facto standard honored by Astro, Bun, Prisma, Nuxt, etc.
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Self-host PostHog or use a different project by setting &lt;code&gt;POSTHOG_API_KEY&lt;/code&gt; and &lt;code&gt;POSTHOG_HOST&lt;/code&gt;.&lt;/p&gt; 
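&lt;p&gt;As a minimal sketch (the key and host values below are placeholders for your own PostHog deployment):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export POSTHOG_API_KEY=your_project_api_key       # project API key from your PostHog instance
export POSTHOG_HOST=https://posthog.example.com    # your self-hosted PostHog endpoint (placeholder URL)
&lt;/code&gt;&lt;/pre&gt; 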
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;Contributions welcome! See the open &lt;a href=&quot;https://github.com/coleam00/Archon/issues&quot;&gt;issues&lt;/a&gt; for things to work on.&lt;/p&gt; 
&lt;p&gt;Please read &lt;a href=&quot;https://raw.githubusercontent.com/coleam00/Archon/dev/CONTRIBUTING.md&quot;&gt;CONTRIBUTING.md&lt;/a&gt; before submitting a pull request.&lt;/p&gt; 
&lt;h2&gt;License&lt;/h2&gt; 
&lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/coleam00/Archon/dev/LICENSE&quot;&gt;MIT&lt;/a&gt;&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/4953bef88466c1f3d9825ceb1909e69391c662209aa1dbc17e7b0226fff0b7ce/coleam00/Archon" medium="image" />
      
    </item>
    
    <item>
      <title>TauricResearch/TradingAgents</title>
      <link>https://github.com/TauricResearch/TradingAgents</link>
      <description>&lt;p&gt;TradingAgents: Multi-Agents LLM Financial Trading Framework&lt;/p&gt;&lt;hr&gt;&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/TauricResearch.png&quot; style=&quot;width: 60%; height: auto;&quot; /&gt; &lt;/p&gt; 
&lt;div align=&quot;center&quot; style=&quot;line-height: 1;&quot;&gt; 
 &lt;a href=&quot;https://arxiv.org/abs/2412.20138&quot; target=&quot;_blank&quot;&gt;&lt;img alt=&quot;arXiv&quot; src=&quot;https://img.shields.io/badge/arXiv-2412.20138-B31B1B?logo=arxiv&quot; /&gt;&lt;/a&gt; 
 &lt;a href=&quot;https://discord.com/invite/hk9PGKShPK&quot; target=&quot;_blank&quot;&gt;&lt;img alt=&quot;Discord&quot; src=&quot;https://img.shields.io/badge/Discord-TradingResearch-7289da?logo=discord&amp;amp;logoColor=white&amp;amp;color=7289da&quot; /&gt;&lt;/a&gt; 
 &lt;a href=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/wechat.png&quot; target=&quot;_blank&quot;&gt;&lt;img alt=&quot;WeChat&quot; src=&quot;https://img.shields.io/badge/WeChat-TauricResearch-brightgreen?logo=wechat&amp;amp;logoColor=white&quot; /&gt;&lt;/a&gt; 
 &lt;a href=&quot;https://x.com/TauricResearch&quot; target=&quot;_blank&quot;&gt;&lt;img alt=&quot;X Follow&quot; src=&quot;https://img.shields.io/badge/X-TauricResearch-white?logo=x&amp;amp;logoColor=white&quot; /&gt;&lt;/a&gt; 
 &lt;br /&gt; 
 &lt;a href=&quot;https://github.com/TauricResearch/&quot; target=&quot;_blank&quot;&gt;&lt;img alt=&quot;Community&quot; src=&quot;https://img.shields.io/badge/Join_GitHub_Community-TauricResearch-14C290?logo=discourse&quot; /&gt;&lt;/a&gt; 
&lt;/div&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;!-- Keep these links. Translations will automatically update with the README. --&gt; 
 &lt;a href=&quot;https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=de&quot;&gt;Deutsch&lt;/a&gt; | 
 &lt;a href=&quot;https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=es&quot;&gt;Español&lt;/a&gt; | 
 &lt;a href=&quot;https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=fr&quot;&gt;français&lt;/a&gt; | 
 &lt;a href=&quot;https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ja&quot;&gt;日本語&lt;/a&gt; | 
 &lt;a href=&quot;https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ko&quot;&gt;한국어&lt;/a&gt; | 
 &lt;a href=&quot;https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=pt&quot;&gt;Português&lt;/a&gt; | 
 &lt;a href=&quot;https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=ru&quot;&gt;Русский&lt;/a&gt; | 
 &lt;a href=&quot;https://www.readme-i18n.com/TauricResearch/TradingAgents?lang=zh&quot;&gt;中文&lt;/a&gt; 
&lt;/div&gt; 
&lt;hr /&gt; 
&lt;h1&gt;TradingAgents: Multi-Agents LLM Financial Trading Framework&lt;/h1&gt; 
&lt;h2&gt;News&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;[2026-03] &lt;strong&gt;TradingAgents v0.2.3&lt;/strong&gt; released with multi-language support, GPT-5.4 family models, unified model catalog, backtesting date fidelity, and proxy support.&lt;/li&gt; 
 &lt;li&gt;[2026-03] &lt;strong&gt;TradingAgents v0.2.2&lt;/strong&gt; released with GPT-5.4/Gemini 3.1/Claude 4.6 model coverage, five-tier rating scale, OpenAI Responses API, Anthropic effort control, and cross-platform stability.&lt;/li&gt; 
 &lt;li&gt;[2026-02] &lt;strong&gt;TradingAgents v0.2.0&lt;/strong&gt; released with multi-provider LLM support (GPT-5.x, Gemini 3.x, Claude 4.x, Grok 4.x) and improved system architecture.&lt;/li&gt; 
 &lt;li&gt;[2026-01] &lt;strong&gt;Trading-R1&lt;/strong&gt; &lt;a href=&quot;https://arxiv.org/abs/2509.11420&quot;&gt;Technical Report&lt;/a&gt; released, with &lt;a href=&quot;https://github.com/TauricResearch/Trading-R1&quot;&gt;Terminal&lt;/a&gt; expected to land soon.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;a href=&quot;https://www.star-history.com/#TauricResearch/TradingAgents&amp;amp;Date&quot;&gt; 
  &lt;picture&gt; 
   &lt;source media=&quot;(prefers-color-scheme: dark)&quot; srcset=&quot;https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&amp;amp;type=Date&amp;amp;theme=dark&quot; /&gt; 
   &lt;source media=&quot;(prefers-color-scheme: light)&quot; srcset=&quot;https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&amp;amp;type=Date&quot; /&gt; 
   &lt;img alt=&quot;TradingAgents Star History&quot; src=&quot;https://api.star-history.com/svg?repos=TauricResearch/TradingAgents&amp;amp;type=Date&quot; style=&quot;width: 80%; height: auto;&quot; /&gt; 
  &lt;/picture&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;🎉 &lt;strong&gt;TradingAgents&lt;/strong&gt; officially released! We have received numerous inquiries about the work, and we would like to express our thanks for the enthusiasm in our community.&lt;/p&gt; 
 &lt;p&gt;So we decided to fully open-source the framework. Looking forward to building impactful projects with you!&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;p&gt;🚀 &lt;a href=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/#tradingagents-framework&quot;&gt;TradingAgents&lt;/a&gt; | ⚡ &lt;a href=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/#installation-and-cli&quot;&gt;Installation &amp;amp; CLI&lt;/a&gt; | 🎬 &lt;a href=&quot;https://www.youtube.com/watch?v=90gr5lwjIho&quot;&gt;Demo&lt;/a&gt; | 📦 &lt;a href=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/#tradingagents-package&quot;&gt;Package Usage&lt;/a&gt; | 🤝 &lt;a href=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/#contributing&quot;&gt;Contributing&lt;/a&gt; | 📄 &lt;a href=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/#citation&quot;&gt;Citation&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;h2&gt;TradingAgents Framework&lt;/h2&gt; 
&lt;p&gt;TradingAgents is a multi-agent trading framework that mirrors the dynamics of real-world trading firms. It deploys specialized LLM-powered agents, from fundamental analysts, sentiment experts, and technical analysts to a trader and a risk management team, which collaboratively evaluate market conditions and inform trading decisions. These agents also engage in dynamic discussions to pinpoint the optimal strategy.&lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/schema.png&quot; style=&quot;width: 100%; height: auto;&quot; /&gt; &lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;TradingAgents framework is designed for research purposes. Trading performance may vary based on many factors, including the chosen backbone language models, model temperature, trading periods, the quality of data, and other non-deterministic factors. &lt;a href=&quot;https://tauric.ai/disclaimer/&quot;&gt;It is not intended as financial, investment, or trading advice.&lt;/a&gt;&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;p&gt;Our framework decomposes complex trading tasks into specialized roles, giving the system a robust, scalable approach to market analysis and decision-making.&lt;/p&gt; 
&lt;h3&gt;Analyst Team&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;Fundamentals Analyst: Evaluates company financials and performance metrics, identifying intrinsic values and potential red flags.&lt;/li&gt; 
 &lt;li&gt;Sentiment Analyst: Analyzes social media and public sentiment using sentiment scoring algorithms to gauge short-term market mood.&lt;/li&gt; 
 &lt;li&gt;News Analyst: Monitors global news and macroeconomic indicators, interpreting the impact of events on market conditions.&lt;/li&gt; 
 &lt;li&gt;Technical Analyst: Utilizes technical indicators (like MACD and RSI) to detect trading patterns and forecast price movements.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/analyst.png&quot; width=&quot;100%&quot; style=&quot;display: inline-block; margin: 0 2%;&quot; /&gt; &lt;/p&gt; 
&lt;h3&gt;Researcher Team&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;Comprises both bullish and bearish researchers who critically assess the insights provided by the Analyst Team. Through structured debates, they balance potential gains against inherent risks.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/researcher.png&quot; width=&quot;70%&quot; style=&quot;display: inline-block; margin: 0 2%;&quot; /&gt; &lt;/p&gt; 
&lt;h3&gt;Trader Agent&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;Synthesizes the reports from the analysts and researchers to make informed trading decisions. It determines the timing and magnitude of trades based on comprehensive market insights.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/trader.png&quot; width=&quot;70%&quot; style=&quot;display: inline-block; margin: 0 2%;&quot; /&gt; &lt;/p&gt; 
&lt;h3&gt;Risk Management and Portfolio Manager&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;Continuously evaluates portfolio risk by assessing market volatility, liquidity, and other risk factors. The risk management team reviews and adjusts trading strategies, providing assessment reports to the Portfolio Manager for the final decision.&lt;/li&gt; 
 &lt;li&gt;The Portfolio Manager approves/rejects the transaction proposal. If approved, the order will be sent to the simulated exchange and executed.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/risk.png&quot; width=&quot;70%&quot; style=&quot;display: inline-block; margin: 0 2%;&quot; /&gt; &lt;/p&gt; 
&lt;h2&gt;Installation and CLI&lt;/h2&gt; 
&lt;h3&gt;Installation&lt;/h3&gt; 
&lt;p&gt;Clone TradingAgents:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Create a virtual environment with your favorite environment manager, for example conda:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;conda create -n tradingagents python=3.13
conda activate tradingagents
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Install the package and its dependencies:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pip install .
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Docker&lt;/h3&gt; 
&lt;p&gt;Alternatively, run with Docker:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cp .env.example .env  # add your API keys
docker compose run --rm tradingagents
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;For local models with Ollama:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose --profile ollama run --rm tradingagents-ollama
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Required APIs&lt;/h3&gt; 
&lt;p&gt;TradingAgents supports multiple LLM providers. Set the API key for your chosen provider:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export OPENAI_API_KEY=...          # OpenAI (GPT)
export GOOGLE_API_KEY=...          # Google (Gemini)
export ANTHROPIC_API_KEY=...       # Anthropic (Claude)
export XAI_API_KEY=...             # xAI (Grok)
export DEEPSEEK_API_KEY=...        # DeepSeek
export DASHSCOPE_API_KEY=...       # Qwen (Alibaba DashScope)
export ZHIPU_API_KEY=...           # GLM (Zhipu)
export OPENROUTER_API_KEY=...      # OpenRouter
export ALPHA_VANTAGE_API_KEY=...   # Alpha Vantage
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;For enterprise providers (e.g. Azure OpenAI, AWS Bedrock), copy &lt;code&gt;.env.enterprise.example&lt;/code&gt; to &lt;code&gt;.env.enterprise&lt;/code&gt; and fill in your credentials.&lt;/p&gt; 
&lt;p&gt;For local models, configure Ollama with &lt;code&gt;llm_provider: &quot;ollama&quot;&lt;/code&gt; in your config.&lt;/p&gt; 
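&lt;p&gt;As a minimal sketch, assuming the same config keys used in the Python examples below (the model names are placeholders for whatever you have pulled into Ollama):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from tradingagents.default_config import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()
config[&quot;llm_provider&quot;] = &quot;ollama&quot;       # route requests to a local Ollama server
config[&quot;deep_think_llm&quot;] = &quot;llama3.1&quot;   # placeholder: any local model you have pulled
config[&quot;quick_think_llm&quot;] = &quot;llama3.1&quot;  # placeholder: any local model you have pulled
&lt;/code&gt;&lt;/pre&gt; 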
&lt;p&gt;Alternatively, copy &lt;code&gt;.env.example&lt;/code&gt; to &lt;code&gt;.env&lt;/code&gt; and fill in your keys:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cp .env.example .env
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;CLI Usage&lt;/h3&gt; 
&lt;p&gt;Launch the interactive CLI:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;tradingagents          # installed command
python -m cli.main     # alternative: run directly from source
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;You will see a screen where you can select your desired tickers, analysis date, LLM provider, research depth, and more.&lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/cli/cli_init.png&quot; width=&quot;100%&quot; style=&quot;display: inline-block; margin: 0 2%;&quot; /&gt; &lt;/p&gt; 
&lt;p&gt;An interface will appear showing results as they load, letting you track the agent&#39;s progress as it runs.&lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/cli/cli_news.png&quot; width=&quot;100%&quot; style=&quot;display: inline-block; margin: 0 2%;&quot; /&gt; &lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/TauricResearch/TradingAgents/main/assets/cli/cli_transaction.png&quot; width=&quot;100%&quot; style=&quot;display: inline-block; margin: 0 2%;&quot; /&gt; &lt;/p&gt; 
&lt;h2&gt;TradingAgents Package&lt;/h2&gt; 
&lt;h3&gt;Implementation Details&lt;/h3&gt; 
&lt;p&gt;We built TradingAgents with LangGraph to ensure flexibility and modularity. The framework supports multiple LLM providers: OpenAI, Google, Anthropic, xAI, OpenRouter, and Ollama.&lt;/p&gt; 
&lt;h3&gt;Python Usage&lt;/h3&gt; 
&lt;p&gt;To use TradingAgents inside your code, import the &lt;code&gt;tradingagents&lt;/code&gt; module and initialize a &lt;code&gt;TradingAgentsGraph()&lt;/code&gt; object. The &lt;code&gt;.propagate()&lt;/code&gt; function will return a decision. You can run &lt;code&gt;main.py&lt;/code&gt; directly, or start from this quick example:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

ta = TradingAgentsGraph(debug=True, config=DEFAULT_CONFIG.copy())

# forward propagate
_, decision = ta.propagate(&quot;NVDA&quot;, &quot;2026-01-15&quot;)
print(decision)
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;You can also adjust the default configuration to set your own choice of LLMs, debate rounds, etc.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()
config[&quot;llm_provider&quot;] = &quot;openai&quot;        # openai, google, anthropic, xai, openrouter, ollama
config[&quot;deep_think_llm&quot;] = &quot;gpt-5.4&quot;     # Model for complex reasoning
config[&quot;quick_think_llm&quot;] = &quot;gpt-5.4-mini&quot; # Model for quick tasks
config[&quot;max_debate_rounds&quot;] = 2

ta = TradingAgentsGraph(debug=True, config=config)
_, decision = ta.propagate(&quot;NVDA&quot;, &quot;2026-01-15&quot;)
print(decision)
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;See &lt;code&gt;tradingagents/default_config.py&lt;/code&gt; for all configuration options.&lt;/p&gt; 
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;We welcome contributions from the community! Whether it&#39;s fixing a bug, improving documentation, or suggesting a new feature, your input helps make this project better. If you are interested in this line of research, please consider joining our open-source financial AI research community &lt;a href=&quot;https://tauric.ai/&quot;&gt;Tauric Research&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Citation&lt;/h2&gt; 
&lt;p&gt;Please cite our work if you find &lt;em&gt;TradingAgents&lt;/em&gt; helpful 😃&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;@misc{xiao2025tradingagentsmultiagentsllmfinancial,
      title={TradingAgents: Multi-Agents LLM Financial Trading Framework}, 
      author={Yijia Xiao and Edward Sun and Di Luo and Wei Wang},
      year={2025},
      eprint={2412.20138},
      archivePrefix={arXiv},
      primaryClass={q-fin.TR},
      url={https://arxiv.org/abs/2412.20138}, 
}
&lt;/code&gt;&lt;/pre&gt;</description>
      
      <media:content url="https://repository-images.githubusercontent.com/909213664/8cfc671d-b54b-400e-beab-8ef0bbf39aa1" medium="image" />
      
    </item>
    
    <item>
      <title>microsoft/VibeVoice</title>
      <link>https://github.com/microsoft/VibeVoice</link>
      <description>&lt;p&gt;Open-Source Frontier Voice AI&lt;/p&gt;&lt;hr&gt;&lt;div align=&quot;center&quot;&gt; 
 &lt;h2&gt;🎙️ VibeVoice: Open-Source Frontier Voice AI&lt;/h2&gt; 
 &lt;p&gt;&lt;a href=&quot;https://microsoft.github.io/VibeVoice&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Project-Page-blue?logo=githubpages&quot; alt=&quot;Project Page&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://huggingface.co/collections/microsoft/vibevoice-68a2ef24a875c44be47b034f&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/HuggingFace-Collection-orange?logo=huggingface&quot; alt=&quot;Hugging Face&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://openreview.net/pdf?id=FihSkzyxdv&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/TTS-Report-red?logo=arxiv&quot; alt=&quot;TTS Report&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://arxiv.org/pdf/2601.18184&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/ASR-Report-yellow?logo=arxiv&quot; alt=&quot;ASR Report&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/VibeVoice_colab.ipynb&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/StreamingTTS-Colab-green?logo=googlecolab&quot; alt=&quot;Colab&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://aka.ms/vibevoice-asr&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/ASR-Playground-6F42C1?logo=gradio&quot; alt=&quot;ASR Playground&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;a href=&quot;https://trendshift.io/repositories/15465&quot;&gt;&lt;img src=&quot;https://trendshift.io/api/badge/repositories/15465&quot; alt=&quot;microsoft%2FVibeVoice | Trendshift&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;picture&gt; 
  &lt;source media=&quot;(prefers-color-scheme: dark)&quot; srcset=&quot;Figures/VibeVoice_logo_white.png&quot; /&gt; 
  &lt;img src=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/Figures/VibeVoice_logo.png&quot; alt=&quot;VibeVoice Logo&quot; width=&quot;300&quot; /&gt; 
 &lt;/picture&gt; 
&lt;/div&gt; 
&lt;div align=&quot;left&quot;&gt; 
 &lt;h3&gt;📰 News&lt;/h3&gt; 
 &lt;p&gt;&lt;strong&gt;🎉 &lt;a href=&quot;https://vibingjustspeakit.github.io/Vibing/&quot;&gt;Vibing&lt;/a&gt;, an intelligent voice input method built by the community, is now powered by VibeVoice-ASR. Download: &lt;a href=&quot;https://github.com/VibingJustSpeakIt/Vibing/releases/download/v0.1.0/Vibing-v0.1.0-mac.dmg&quot;&gt;macOS&lt;/a&gt; | &lt;a href=&quot;https://get.microsoft.com/installer/download/9pjf89frgg1d&quot;&gt;Windows EXE Installer (Recommended)&lt;/a&gt; | &lt;a href=&quot;https://github.com/VibingJustSpeakIt/Vibing/releases/download/v0.1.3/Vibing-v0.1.3-windows.zip&quot;&gt;Windows ZIP (Portable)&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;a href=&quot;https://github.com/user-attachments/assets/db0bb23f-ae06-4135-a66a-1ff1669f4f84&quot;&gt;https://github.com/user-attachments/assets/db0bb23f-ae06-4135-a66a-1ff1669f4f84&lt;/a&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;strong&gt;2026-03-06: 🚀 VibeVoice ASR is now part of a &lt;a href=&quot;https://huggingface.co/microsoft/VibeVoice-ASR-HF&quot;&gt;Transformers release&lt;/a&gt;! You can now use our speech recognition model directly through the Hugging Face Transformers library for seamless integration into your projects.&lt;/strong&gt;&lt;/p&gt; 
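 &lt;p&gt;As a minimal sketch of that integration (assuming the model is exposed through the standard automatic-speech-recognition pipeline; see the linked model card for the exact loading code):&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch: transcribe a local audio file with the Transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(&quot;automatic-speech-recognition&quot;, model=&quot;microsoft/VibeVoice-ASR-HF&quot;)
result = asr(&quot;meeting.wav&quot;)  # placeholder path to a local audio file
print(result[&quot;text&quot;])
&lt;/code&gt;&lt;/pre&gt; 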
 &lt;p&gt;&lt;strong&gt;2026-01-21:&lt;/strong&gt; 📣 We open-sourced &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-asr.md&quot;&gt;&lt;strong&gt;VibeVoice-ASR&lt;/strong&gt;&lt;/a&gt;, a unified speech-to-text model designed to handle 60-minute long-form audio in a single pass, generating structured transcriptions containing Who (Speaker), When (Timestamps), and What (Content), with support for User-Customized Context. Try it in &lt;a href=&quot;https://aka.ms/vibevoice-asr&quot;&gt;Playground&lt;/a&gt;.&lt;/p&gt; 
 &lt;ul&gt; 
  &lt;li&gt;⭐️ VibeVoice-ASR is natively multilingual, supporting over 50 languages — check the &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-asr.md#language-distribution&quot;&gt;supported languages&lt;/a&gt; for details.&lt;/li&gt; 
  &lt;li&gt;🔥 The VibeVoice-ASR &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/finetuning-asr/README.md&quot;&gt;finetuning code&lt;/a&gt; is now available!&lt;/li&gt; 
  &lt;li&gt;⚡️ &lt;strong&gt;vLLM inference&lt;/strong&gt; is now supported for faster inference; see &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-vllm-asr.md&quot;&gt;vllm-asr&lt;/a&gt; for more details.&lt;/li&gt; 
  &lt;li&gt;📑 &lt;a href=&quot;https://arxiv.org/pdf/2601.18184&quot;&gt;VibeVoice-ASR Technique Report&lt;/a&gt; is available.&lt;/li&gt; 
 &lt;/ul&gt; 
 &lt;p&gt;2025-12-16: 📣 We added experimental speakers to &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-realtime-0.5b.md&quot;&gt;&lt;strong&gt;VibeVoice‑Realtime‑0.5B&lt;/strong&gt;&lt;/a&gt; for exploration, including multilingual voices in nine languages (DE, FR, IT, JP, KR, NL, PL, PT, ES) and 11 distinct English style voices. &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-realtime-0.5b.md#optional-more-experimental-voices&quot;&gt;Try it&lt;/a&gt;. More speaker types will be added over time.&lt;/p&gt; 
 &lt;p&gt;2025-12-03: 📣 We open-sourced &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-realtime-0.5b.md&quot;&gt;&lt;strong&gt;VibeVoice‑Realtime‑0.5B&lt;/strong&gt;&lt;/a&gt;, a real‑time text‑to‑speech model that supports streaming text input and robust long-form speech generation. Try it on &lt;a href=&quot;https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/vibevoice_realtime_colab.ipynb&quot;&gt;Colab&lt;/a&gt;.&lt;/p&gt; 
 &lt;p&gt;2025-09-05: VibeVoice is an open-source research framework intended to advance collaboration in the speech synthesis community. After release, we discovered instances where the tool was used in ways inconsistent with the stated intent. Since responsible use of AI is one of Microsoft’s guiding principles, we have removed the VibeVoice-TTS code from this repository.&lt;/p&gt; 
 &lt;p&gt;2025-08-25: 📣 We open-sourced &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-tts.md&quot;&gt;&lt;strong&gt;VibeVoice-TTS&lt;/strong&gt;&lt;/a&gt;, a long-form multi-speaker text-to-speech model that can synthesize speech up to 90 minutes long with up to 4 distinct speakers. — accepted as an &lt;a href=&quot;https://openreview.net/forum?id=FihSkzyxdv&quot;&gt;Oral&lt;/a&gt; at ICLR 2026! 🔥&lt;/p&gt; 
&lt;/div&gt; 
&lt;h2&gt;Overview&lt;/h2&gt; 
&lt;p&gt;VibeVoice is a &lt;strong&gt;family of open-source frontier voice AI models&lt;/strong&gt; that includes both Text-to-Speech (TTS) and Automatic Speech Recognition (ASR) models.&lt;/p&gt; 
&lt;p&gt;A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of &lt;strong&gt;7.5 Hz&lt;/strong&gt;. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a &lt;a href=&quot;https://arxiv.org/abs/2412.08635&quot;&gt;next-token diffusion&lt;/a&gt; framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.&lt;/p&gt; 
&lt;p&gt;For more information, demos, and examples, please visit our &lt;a href=&quot;https://microsoft.github.io/VibeVoice&quot;&gt;Project Page&lt;/a&gt;.&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th&gt;Model&lt;/th&gt; 
    &lt;th&gt;Weight&lt;/th&gt; 
    &lt;th&gt;Quick Try&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td&gt;VibeVoice-ASR-7B&lt;/td&gt; 
    &lt;td&gt;&lt;a href=&quot;https://huggingface.co/microsoft/VibeVoice-ASR&quot;&gt;HF Link&lt;/a&gt;&lt;/td&gt; 
    &lt;td&gt;&lt;a href=&quot;https://aka.ms/vibevoice-asr&quot;&gt;Playground&lt;/a&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;VibeVoice-TTS-1.5B&lt;/td&gt; 
    &lt;td&gt;&lt;a href=&quot;https://huggingface.co/microsoft/VibeVoice-1.5B&quot;&gt;HF Link&lt;/a&gt;&lt;/td&gt; 
    &lt;td&gt;Disabled&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;VibeVoice-Realtime-0.5B&lt;/td&gt; 
    &lt;td&gt;&lt;a href=&quot;https://huggingface.co/microsoft/VibeVoice-Realtime-0.5B&quot;&gt;HF Link&lt;/a&gt;&lt;/td&gt; 
    &lt;td&gt;&lt;a href=&quot;https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/vibevoice_realtime_colab.ipynb&quot;&gt;Colab&lt;/a&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/div&gt; 
&lt;h2&gt;Models&lt;/h2&gt; 
&lt;h3&gt;1. 📖 &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-asr.md&quot;&gt;VibeVoice-ASR&lt;/a&gt; - Long-form Speech Recognition&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;VibeVoice-ASR&lt;/strong&gt; is a unified speech-to-text model designed to handle &lt;strong&gt;60-minute long-form audio&lt;/strong&gt; in a single pass, generating structured transcriptions containing &lt;strong&gt;Who (Speaker), When (Timestamps), and What (Content)&lt;/strong&gt;, with support for &lt;strong&gt;Customized Hotwords&lt;/strong&gt;.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;🕒 60-minute Single-Pass Processing&lt;/strong&gt;: Unlike conventional ASR models that slice audio into short chunks (often losing global context), VibeVoice ASR accepts up to &lt;strong&gt;60 minutes&lt;/strong&gt; of continuous audio input within a 64K-token context. This ensures consistent speaker tracking and semantic coherence across the entire hour.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;👤 Customized Hotwords&lt;/strong&gt;: Users can provide customized hotwords (e.g., specific names, technical terms, or background info) to guide the recognition process, significantly improving accuracy on domain-specific content.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;📝 Rich Transcription (Who, When, What)&lt;/strong&gt;: The model jointly performs ASR, diarization, and timestamping, producing a structured output that indicates &lt;em&gt;who&lt;/em&gt; said &lt;em&gt;what&lt;/em&gt; and &lt;em&gt;when&lt;/em&gt;.&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-asr.md&quot;&gt;📖 Documentation&lt;/a&gt; | &lt;a href=&quot;https://huggingface.co/microsoft/VibeVoice-ASR&quot;&gt;🤗 Hugging Face&lt;/a&gt; | &lt;a href=&quot;https://aka.ms/vibevoice-asr&quot;&gt;🎮 Playground&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/finetuning-asr/README.md&quot;&gt;🛠️ Finetuning&lt;/a&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/VibeVoice-ASR-Report.pdf&quot;&gt;📊 Paper&lt;/a&gt;&lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/Figures/DER.jpg&quot; alt=&quot;DER&quot; width=&quot;50%&quot; /&gt;&lt;br /&gt; &lt;img src=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/Figures/cpWER.jpg&quot; alt=&quot;cpWER&quot; width=&quot;50%&quot; /&gt;&lt;br /&gt; &lt;img src=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/Figures/tcpWER.jpg&quot; alt=&quot;tcpWER&quot; width=&quot;50%&quot; /&gt; &lt;/p&gt; 
&lt;div align=&quot;center&quot; id=&quot;vibevoice-asr&quot;&gt; 
 &lt;p&gt;&lt;a href=&quot;https://github.com/user-attachments/assets/acde5602-dc17-4314-9e3b-c630bc84aefa&quot;&gt;https://github.com/user-attachments/assets/acde5602-dc17-4314-9e3b-c630bc84aefa&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;br /&gt; 
&lt;h3&gt;2. 🎙️ &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-tts.md&quot;&gt;VibeVoice-TTS&lt;/a&gt; - Long-form Multi-speaker TTS&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Best for&lt;/strong&gt;: Long-form conversational audio, podcasts, multi-speaker dialogues&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;⏱️ 90-minute Long-form Generation&lt;/strong&gt;: Synthesizes conversational/single-speaker speech up to &lt;strong&gt;90 minutes&lt;/strong&gt; in a single pass, maintaining speaker consistency and semantic coherence throughout.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;👥 Multi-speaker Support&lt;/strong&gt;: Supports up to &lt;strong&gt;4 distinct speakers&lt;/strong&gt; in a single conversation, with natural turn-taking and speaker consistency across long dialogues.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;🎭 Expressive Speech&lt;/strong&gt;: Generates expressive, natural-sounding speech that captures conversational dynamics and emotional nuances.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;🌐 Multi-lingual Support&lt;/strong&gt;: Supports English, Chinese and other languages.&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-tts.md&quot;&gt;📖 Documentation&lt;/a&gt; | &lt;a href=&quot;https://huggingface.co/microsoft/VibeVoice-1.5B&quot;&gt;🤗 Hugging Face&lt;/a&gt; | &lt;a href=&quot;https://arxiv.org/pdf/2508.19205&quot;&gt;📊 Paper&lt;/a&gt;&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/Figures/VibeVoice-TTS-results.jpg&quot; alt=&quot;VibeVoice Results&quot; width=&quot;80%&quot; /&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;English&lt;/strong&gt;&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;p&gt;&lt;a href=&quot;https://github.com/user-attachments/assets/0967027c-141e-4909-bec8-091558b1b784&quot;&gt;https://github.com/user-attachments/assets/0967027c-141e-4909-bec8-091558b1b784&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;Chinese&lt;/strong&gt;&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;p&gt;&lt;a href=&quot;https://github.com/user-attachments/assets/322280b7-3093-4c67-86e3-10be4746c88f&quot;&gt;https://github.com/user-attachments/assets/322280b7-3093-4c67-86e3-10be4746c88f&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;Cross-Lingual&lt;/strong&gt;&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;p&gt;&lt;a href=&quot;https://github.com/user-attachments/assets/838d8ad9-a201-4dde-bb45-8cd3f59ce722&quot;&gt;https://github.com/user-attachments/assets/838d8ad9-a201-4dde-bb45-8cd3f59ce722&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;Spontaneous Singing&lt;/strong&gt;&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;p&gt;&lt;a href=&quot;https://github.com/user-attachments/assets/6f27a8a5-0c60-4f57-87f3-7dea2e11c730&quot;&gt;https://github.com/user-attachments/assets/6f27a8a5-0c60-4f57-87f3-7dea2e11c730&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;strong&gt;Long Conversation with 4 people&lt;/strong&gt;&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;p&gt;&lt;a href=&quot;https://github.com/user-attachments/assets/a357c4b6-9768-495c-a576-1618f6275727&quot;&gt;https://github.com/user-attachments/assets/a357c4b6-9768-495c-a576-1618f6275727&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;br /&gt; 
&lt;h3&gt;3. ⚡ &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-realtime-0.5b.md&quot;&gt;VibeVoice-Streaming&lt;/a&gt; - Real-time Streaming TTS&lt;/h3&gt; 
&lt;p&gt;VibeVoice-Realtime is a &lt;strong&gt;lightweight real‑time&lt;/strong&gt; text-to-speech model supporting &lt;strong&gt;streaming text input&lt;/strong&gt; and &lt;strong&gt;robust long-form speech generation&lt;/strong&gt;.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Parameter size: 0.5B (deployment-friendly)&lt;/li&gt; 
 &lt;li&gt;Real-time TTS (~300 ms latency to first audible output)&lt;/li&gt; 
 &lt;li&gt;Streaming text input&lt;/li&gt; 
 &lt;li&gt;Robust long-form speech generation (~10 minutes)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/docs/vibevoice-realtime-0.5b.md&quot;&gt;📖 Documentation&lt;/a&gt; | &lt;a href=&quot;https://huggingface.co/microsoft/VibeVoice-Realtime-0.5B&quot;&gt;🤗 Hugging Face&lt;/a&gt; | &lt;a href=&quot;https://colab.research.google.com/github/microsoft/VibeVoice/blob/main/demo/vibevoice_realtime_colab.ipynb&quot;&gt;🚀 Colab&lt;/a&gt;&lt;/p&gt; 
&lt;div align=&quot;center&quot; id=&quot;generated-example-audio-vibevoice-realtime&quot;&gt; 
 &lt;p&gt;&lt;a href=&quot;https://github.com/user-attachments/assets/0901d274-f6ae-46ef-a0fd-3c4fba4f76dc&quot;&gt;https://github.com/user-attachments/assets/0901d274-f6ae-46ef-a0fd-3c4fba4f76dc&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;br /&gt; 
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;Please see &lt;a href=&quot;https://raw.githubusercontent.com/microsoft/VibeVoice/main/CONTRIBUTING.md&quot;&gt;CONTRIBUTING.md&lt;/a&gt; for detailed contribution guidelines.&lt;/p&gt; 
&lt;h2&gt;⚠️ Risks and Limitations&lt;/h2&gt; 
&lt;p&gt;While efforts have been made to optimize the models through various techniques, they may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model (specifically, Qwen2.5 1.5B in this release). High-quality synthetic speech can be misused to create convincing fake audio for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to deploy the models and use the generated content in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.&lt;/p&gt; 
&lt;p&gt;We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.&lt;/p&gt; 
&lt;h2&gt;Star History&lt;/h2&gt; 
&lt;p&gt;&lt;img src=&quot;https://api.star-history.com/svg?repos=Microsoft/vibevoice&amp;amp;type=date&amp;amp;legend=top-left&quot; alt=&quot;Star History Chart&quot; /&gt;&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/4b52c18edda9984323a0438d10501111708835090ea67494bb31a55fb56c9fc1/microsoft/VibeVoice" medium="image" />
      
    </item>
    
    <item>
      <title>OpenBMB/VoxCPM</title>
      <link>https://github.com/OpenBMB/VoxCPM</link>
      <description>&lt;p&gt;VoxCPM2: Tokenizer-Free TTS for Multilingual Speech Generation, Creative Voice Design, and True-to-Life Cloning&lt;/p&gt;&lt;hr&gt;&lt;h2 align=&quot;center&quot;&gt;VoxCPM2: Tokenizer-Free TTS for Multilingual Speech Generation, Creative Voice Design, and True-to-Life Cloning&lt;/h2&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;b&gt;English&lt;/b&gt; | &lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/README_zh.md&quot;&gt;中文&lt;/a&gt; &lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;a href=&quot;https://github.com/OpenBMB/VoxCPM/&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Project%20Page-GitHub-blue&quot; alt=&quot;Project Page&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://huggingface.co/spaces/OpenBMB/VoxCPM-Demo&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Live%20Playground-Demo-orange&quot; alt=&quot;Live Playground&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Docs-ReadTheDocs-8CA1AF&quot; alt=&quot;Documentation&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://huggingface.co/openbmb/VoxCPM2&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-VoxCPM2-yellow&quot; alt=&quot;Hugging Face&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://modelscope.cn/models/OpenBMB/VoxCPM2&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/ModelScope-VoxCPM2-purple&quot; alt=&quot;ModelScope&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://openbmb.github.io/voxcpm2-demopage/&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/DemoPage-Audio Samples-red&quot; /&gt;&lt;/a&gt; &lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/assets/voxcpm_logo.png&quot; alt=&quot;VoxCPM Logo&quot; width=&quot;35%&quot; /&gt; 
 &lt;br /&gt;
 &lt;br /&gt; 
 &lt;a href=&quot;https://trendshift.io/repositories/17704&quot; target=&quot;_blank&quot;&gt;&lt;img src=&quot;https://trendshift.io/api/badge/repositories/17704&quot; alt=&quot;OpenBMB%2FVoxCPM | Trendshift&quot; style=&quot;width: 250px; height: 55px;&quot; width=&quot;250&quot; height=&quot;55&quot; /&gt;&lt;/a&gt; 
&lt;/div&gt; 
&lt;br /&gt; 
&lt;p align=&quot;center&quot;&gt; 👋 Join our community for discussion and support! &lt;br /&gt; &lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/assets/feishu-group.png&quot; style=&quot;display:inline-block;vertical-align:middle; margin-left: 10px;&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/assets/feishu-logo.png&quot; width=&quot;16&quot; height=&quot;16&quot; style=&quot;vertical-align:middle;&quot; /&gt; Feishu &lt;/a&gt; &amp;nbsp;|&amp;nbsp; &lt;a href=&quot;https://discord.gg/KZUx7tVNwz&quot; style=&quot;display:inline-block;vertical-align:middle;&quot;&gt; &lt;img src=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/assets/discord-logo.png&quot; width=&quot;16&quot; height=&quot;16&quot; style=&quot;vertical-align:middle;&quot; /&gt; Discord &lt;/a&gt; &lt;/p&gt; 
&lt;p&gt;VoxCPM is a &lt;strong&gt;tokenizer-free&lt;/strong&gt; Text-to-Speech system that directly generates continuous speech representations via an end-to-end &lt;strong&gt;diffusion autoregressive architecture&lt;/strong&gt;, bypassing discrete tokenization to achieve highly natural and expressive synthesis.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;VoxCPM2&lt;/strong&gt; is the latest major release — a &lt;strong&gt;2B&lt;/strong&gt; parameter model trained on &lt;strong&gt;over 2 million hours&lt;/strong&gt; of multilingual speech data, now supporting &lt;strong&gt;30 languages&lt;/strong&gt;, &lt;strong&gt;Voice Design&lt;/strong&gt;, &lt;strong&gt;Controllable Voice Cloning&lt;/strong&gt;, and &lt;strong&gt;48kHz&lt;/strong&gt; studio-quality audio output. Built on a &lt;a href=&quot;https://github.com/OpenBMB/MiniCPM&quot;&gt;MiniCPM-4&lt;/a&gt; backbone.&lt;/p&gt; 
&lt;h3&gt;✨ Highlights&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;🌍 &lt;strong&gt;30-Language Multilingual&lt;/strong&gt; — Input text in any of the 30 supported languages and synthesize directly, no language tag needed&lt;/li&gt; 
 &lt;li&gt;🎨 &lt;strong&gt;Voice Design&lt;/strong&gt; — Create a brand-new voice from a natural-language description alone (gender, age, tone, emotion, pace …), no reference audio required&lt;/li&gt; 
 &lt;li&gt;🎛️ &lt;strong&gt;Controllable Cloning&lt;/strong&gt; — Clone any voice from a short reference clip, with optional style guidance to steer emotion, pace, and expression while preserving the original timbre&lt;/li&gt; 
 &lt;li&gt;🎙️ &lt;strong&gt;Ultimate Cloning&lt;/strong&gt; — Reproduce every vocal nuance: provide both reference audio and its transcript, and the model continues seamlessly from the reference, faithfully preserving every vocal detail — timbre, rhythm, emotion, and style (same as VoxCPM1.5)&lt;/li&gt; 
 &lt;li&gt;🔊 &lt;strong&gt;48kHz High-Quality Audio&lt;/strong&gt; — Accepts 16kHz reference audio and directly outputs 48kHz studio-quality audio via AudioVAE V2&#39;s asymmetric encode/decode design, with built-in super-resolution — no external upsampler needed&lt;/li&gt; 
 &lt;li&gt;🧠 &lt;strong&gt;Context-Aware Synthesis&lt;/strong&gt; — Automatically infers appropriate prosody and expressiveness from text content&lt;/li&gt; 
 &lt;li&gt;⚡ &lt;strong&gt;Real-Time Streaming&lt;/strong&gt; — RTF as low as ~0.3 on NVIDIA RTX 4090, and ~0.13 accelerated by &lt;a href=&quot;https://github.com/a710128/nanovllm-voxcpm&quot;&gt;Nano-vLLM&lt;/a&gt; or &lt;a href=&quot;https://github.com/vllm-project/vllm-omni&quot;&gt;vLLM-Omni&lt;/a&gt; — official vLLM omni-modal serving for VoxCPM2 with PagedAttention and an OpenAI-compatible API&lt;/li&gt; 
 &lt;li&gt;📜 &lt;strong&gt;Fully Open-Source &amp;amp; Commercial-Ready&lt;/strong&gt; — Weights and code released under the &lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/LICENSE&quot;&gt;Apache-2.0&lt;/a&gt; license, free for commercial use&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;b&gt;🌍 Supported Languages (30)&lt;/b&gt;&lt;/p&gt; 
&lt;p&gt;Arabic, Burmese, Chinese, Danish, Dutch, English, Finnish, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Norwegian, Polish, Portuguese, Russian, Spanish, Swahili, Swedish, Tagalog, Thai, Turkish, Vietnamese&lt;/p&gt; 
&lt;p&gt;Chinese dialects: Sichuanese, Cantonese, Wu, Northeastern Mandarin, Henan dialect, Shaanxi dialect, Shandong dialect, Tianjin dialect, Hokkien (Minnan)&lt;/p&gt; 
&lt;h3&gt;News&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;[2026.04]&lt;/strong&gt; 🔥 We release &lt;strong&gt;VoxCPM2&lt;/strong&gt; — 2B, 30 languages, Voice Design &amp;amp; Controllable Voice Cloning, 48kHz audio output! &lt;a href=&quot;https://huggingface.co/openbmb/VoxCPM2&quot;&gt;Weights&lt;/a&gt; | &lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/&quot;&gt;Docs&lt;/a&gt; | &lt;a href=&quot;https://huggingface.co/spaces/OpenBMB/VoxCPM-Demo&quot;&gt;Playground&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;[2025.12]&lt;/strong&gt; 🎉 Open-source &lt;strong&gt;VoxCPM1.5&lt;/strong&gt; &lt;a href=&quot;https://huggingface.co/openbmb/VoxCPM1.5&quot;&gt;weights&lt;/a&gt; with SFT &amp;amp; LoRA fine-tuning. (&lt;strong&gt;🏆 #1 GitHub Trending&lt;/strong&gt;)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;[2025.09]&lt;/strong&gt; 🔥 Release VoxCPM &lt;a href=&quot;https://arxiv.org/abs/2509.24650&quot;&gt;Technical Report&lt;/a&gt;.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;[2025.09]&lt;/strong&gt; 🎉 Open-source &lt;strong&gt;VoxCPM-0.5B&lt;/strong&gt; &lt;a href=&quot;https://huggingface.co/openbmb/VoxCPM-0.5B&quot;&gt;weights&lt;/a&gt; (&lt;strong&gt;🏆 #1 HuggingFace Trending&lt;/strong&gt;)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;h2&gt;Contents&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#-quick-start&quot;&gt;Quick Start&lt;/a&gt; 
  &lt;ul&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#installation&quot;&gt;Installation&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#python-api&quot;&gt;Python API&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#cli-usage&quot;&gt;CLI Usage&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#web-demo&quot;&gt;Web Demo&lt;/a&gt;&lt;/li&gt; 
   &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#-production-deployment-nano-vllm&quot;&gt;Production Deployment&lt;/a&gt;&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#-models--versions&quot;&gt;Models &amp;amp; Versions&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#-performance&quot;&gt;Performance&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#%EF%B8%8F-fine-tuning&quot;&gt;Fine-tuning&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#-documentation&quot;&gt;Documentation&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#-ecosystem--community&quot;&gt;Ecosystem &amp;amp; Community&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#%EF%B8%8F-risks-and-limitations&quot;&gt;Risks and Limitations&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/#-citation&quot;&gt;Citation&lt;/a&gt;&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;h2&gt;🚀 Quick Start&lt;/h2&gt; 
&lt;h3&gt;Installation&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;pip install voxcpm
&lt;/code&gt;&lt;/pre&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt; Python ≥ 3.10 (&amp;lt; 3.13), PyTorch ≥ 2.5.0, CUDA ≥ 12.0. See &lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/quickstart.html&quot;&gt;Quick Start Docs&lt;/a&gt; for details.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h3&gt;Python API&lt;/h3&gt; 
&lt;h4&gt;🗣️ Text-to-Speech&lt;/h4&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from voxcpm import VoxCPM
import soundfile as sf

model = VoxCPM.from_pretrained(
  &quot;openbmb/VoxCPM2&quot;,
  load_denoiser=False,
)

wav = model.generate(
    text=&quot;VoxCPM2 is the current recommended release for realistic multilingual speech synthesis.&quot;,
    cfg_value=2.0,
    inference_timesteps=10,
)
sf.write(&quot;demo.wav&quot;, wav, model.tts_model.sample_rate)
print(&quot;saved: demo.wav&quot;)
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;If you prefer downloading from ModelScope first, you can use:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pip install modelscope
&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from modelscope import snapshot_download
snapshot_download(&quot;OpenBMB/VoxCPM2&quot;, local_dir=&#39;./pretrained_models/VoxCPM2&#39;) # specify the local directory to save the model

from voxcpm import VoxCPM
import soundfile as sf
model = VoxCPM.from_pretrained(&quot;./pretrained_models/VoxCPM2&quot;, load_denoiser=False)

wav = model.generate(
    text=&quot;VoxCPM2 is the current recommended release for realistic multilingual speech synthesis.&quot;,
    cfg_value=2.0,
    inference_timesteps=10,
)
sf.write(&quot;demo.wav&quot;, wav, model.tts_model.sample_rate)
&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;🎨 Voice Design&lt;/h4&gt; 
&lt;p&gt;Create a voice from a natural-language description — no reference audio needed. &lt;strong&gt;Format:&lt;/strong&gt; put the description in parentheses at the start of &lt;code&gt;text&lt;/code&gt; (e.g. &lt;code&gt;&quot;(your voice description)The text to synthesize.&quot;&lt;/code&gt;):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;wav = model.generate(
    text=&quot;(A young woman, gentle and sweet voice)Hello, welcome to VoxCPM2!&quot;,
    cfg_value=2.0,
    inference_timesteps=10,
)
sf.write(&quot;voice_design.wav&quot;, wav, model.tts_model.sample_rate)
&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;🎛️ Controllable Voice Cloning&lt;/h4&gt; 
&lt;p&gt;Provide a reference audio clip. The model clones its timbre, and you can still use control instructions to adjust speed, emotion, or style.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;wav = model.generate(
    text=&quot;This is a cloned voice generated by VoxCPM2.&quot;,
    reference_wav_path=&quot;path/to/voice.wav&quot;,
)
sf.write(&quot;clone.wav&quot;, wav, model.tts_model.sample_rate)

wav = model.generate(
    text=&quot;(slightly faster, cheerful tone)This is a cloned voice with style control.&quot;,
    reference_wav_path=&quot;path/to/voice.wav&quot;,
    cfg_value=2.0,
    inference_timesteps=10,
)
sf.write(&quot;controllable_clone.wav&quot;, wav, model.tts_model.sample_rate)
&lt;/code&gt;&lt;/pre&gt; 
&lt;h4&gt;🎙️ Ultimate Cloning&lt;/h4&gt; 
&lt;p&gt;Provide both the reference audio and its exact transcript for audio-continuation-based cloning with every vocal nuance reproduced. For maximum cloning similarity, pass the same reference clip to both &lt;code&gt;reference_wav_path&lt;/code&gt; and &lt;code&gt;prompt_wav_path&lt;/code&gt; as shown below:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;wav = model.generate(
    text=&quot;This is an ultimate cloning demonstration using VoxCPM2.&quot;,
    prompt_wav_path=&quot;path/to/voice.wav&quot;,
    prompt_text=&quot;The transcript of the reference audio.&quot;,
    reference_wav_path=&quot;path/to/voice.wav&quot;, # optional, for better similarity
)
sf.write(&quot;hifi_clone.wav&quot;, wav, model.tts_model.sample_rate)
&lt;/code&gt;&lt;/pre&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;🔄 Streaming API&lt;/b&gt;&lt;/summary&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import numpy as np

chunks = []
for chunk in model.generate_streaming(
    text=&quot;Streaming text to speech is easy with VoxCPM!&quot;,
):
    chunks.append(chunk)
wav = np.concatenate(chunks)
sf.write(&quot;streaming.wav&quot;, wav, model.tts_model.sample_rate)
&lt;/code&gt;&lt;/pre&gt; 
&lt;/details&gt; 
&lt;h3&gt;CLI Usage&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Voice design (no reference audio needed)
voxcpm design \
  --text &quot;VoxCPM2 brings studio-quality multilingual speech synthesis.&quot; \
  --output out.wav

# Controllable voice cloning with style control
voxcpm design \
  --text &quot;VoxCPM2 brings studio-quality multilingual speech synthesis.&quot; \
  --control &quot;Young female voice, warm and gentle, slightly smiling&quot; \
  --output out.wav

# Voice cloning (reference audio)
voxcpm clone \
  --text &quot;This is a voice cloning demo.&quot; \
  --reference-audio path/to/voice.wav \
  --output out.wav

# Ultimate cloning (prompt audio + transcript)
voxcpm clone \
  --text &quot;This is a voice cloning demo.&quot; \
  --prompt-audio path/to/voice.wav \
  --prompt-text &quot;reference transcript&quot; \
  --reference-audio path/to/voice.wav \
  --output out.wav  # --reference-audio is optional, for better similarity

# Batch processing
voxcpm batch --input examples/input.txt --output-dir outs

# Help
voxcpm --help
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Web Demo&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;python app.py --port 8808  # then open in browser: http://localhost:8808
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;🚢 Production Deployment (Nano-vLLM)&lt;/h3&gt; 
&lt;p&gt;For high-throughput serving, use &lt;a href=&quot;https://github.com/a710128/nanovllm-voxcpm&quot;&gt;&lt;strong&gt;Nano-vLLM-VoxCPM&lt;/strong&gt;&lt;/a&gt; — a dedicated inference engine built on Nano-vLLM with concurrent request support and an async API.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pip install nano-vllm-voxcpm
&lt;/code&gt;&lt;/pre&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from nanovllm_voxcpm import VoxCPM
import numpy as np, soundfile as sf

server = VoxCPM.from_pretrained(model=&quot;/path/to/VoxCPM&quot;, devices=[0])
chunks = list(server.generate(target_text=&quot;Hello from VoxCPM!&quot;))
sf.write(&quot;out.wav&quot;, np.concatenate(chunks), 48000)
server.stop()
&lt;/code&gt;&lt;/pre&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;RTF as low as ~0.13 on NVIDIA RTX 4090&lt;/strong&gt; (vs ~0.3 with the standard PyTorch implementation), with support for batched concurrent requests and a FastAPI HTTP server. See the &lt;a href=&quot;https://github.com/a710128/nanovllm-voxcpm&quot;&gt;Nano-vLLM-VoxCPM repo&lt;/a&gt; for deployment details.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h3&gt;🏭 Production Serving (vLLM-Omni)&lt;/h3&gt; 
&lt;p&gt;For production multi-tenant deployments, use &lt;a href=&quot;https://github.com/vllm-project/vllm-omni&quot;&gt;&lt;strong&gt;vLLM-Omni&lt;/strong&gt;&lt;/a&gt; — the official vLLM project&#39;s omni-modal extension with native &lt;strong&gt;VoxCPM2&lt;/strong&gt; support. PagedAttention KV cache, continuous batching, and a drop-in &lt;strong&gt;OpenAI-compatible&lt;/strong&gt; &lt;code&gt;/v1/audio/speech&lt;/code&gt; endpoint.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install from source (latest main — vllm-omni is rapidly evolving)
uv pip install vllm==0.19.0 --torch-backend=auto
git clone https://github.com/vllm-project/vllm-omni.git &amp;amp;&amp;amp; cd vllm-omni
uv pip install -e .
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;See the &lt;a href=&quot;https://vllm-omni.readthedocs.io/en/latest/getting_started/installation/&quot;&gt;vLLM-Omni installation guide&lt;/a&gt; for other platforms (ROCm, XPU, MUSA, NPU) and Docker images.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Launch an OpenAI-compatible TTS server (--omni enables omni-modal serving)
vllm serve openbmb/VoxCPM2 --omni --port 8000

# Call it from any OpenAI client
curl http://localhost:8000/v1/audio/speech \
  -H &quot;Content-Type: application/json&quot; \
  -d &#39;{&quot;model&quot;:&quot;openbmb/VoxCPM2&quot;,&quot;input&quot;:&quot;Hello from VoxCPM2 on vLLM-Omni!&quot;,&quot;voice&quot;:&quot;default&quot;}&#39; \
  --output out.wav
&lt;/code&gt;&lt;/pre&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;Built on the upstream vLLM scheduler, with batched concurrent requests, streaming chunk delivery, and multi-GPU deployment out of the box. See the &lt;a href=&quot;https://github.com/vllm-project/vllm-omni/tree/main/examples/online_serving/voxcpm2&quot;&gt;VoxCPM2 example&lt;/a&gt; for full deployment recipes.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Full parameter reference, multi-scenario examples, and voice cloning tips →&lt;/strong&gt; &lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/quickstart.html&quot;&gt;Quick Start Guide&lt;/a&gt; | &lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/usage_guide.html&quot;&gt;Usage Guide&lt;/a&gt; | &lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/cookbook.html&quot;&gt;Cookbook&lt;/a&gt;&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;hr /&gt; 
&lt;h2&gt;📦 Models &amp;amp; Versions&lt;/h2&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;&lt;/th&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;&lt;strong&gt;VoxCPM2&lt;/strong&gt;&lt;/th&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;&lt;strong&gt;VoxCPM1.5&lt;/strong&gt;&lt;/th&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;&lt;strong&gt;VoxCPM-0.5B&lt;/strong&gt;&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Status&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;🟢 Latest&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Stable&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Legacy&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Backbone Parameters&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;2B&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;0.6B&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;0.5B&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Audio Sample Rate&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;48kHz&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;44.1kHz&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;16kHz&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;LM Token Rate&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;6.25Hz&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;6.25Hz&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;12.5Hz&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Languages&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;30&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;2 (zh, en)&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;2 (zh, en)&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Cloning Mode&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Isolated Reference &amp;amp; Continuation&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Continuation only&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Continuation only&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Voice Design&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;✅&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;—&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;—&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Controllable Voice Cloning&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;✅&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;—&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;—&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;SFT / LoRA&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;✅&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;✅&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;✅&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;RTF (RTX 4090)&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;~0.30&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;~0.15&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;~0.17&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;RTF in Nano-vLLM (RTX 4090)&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;~0.13&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;~0.08&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;~0.10&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;VRAM&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;~8 GB&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;~6 GB&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;~5 GB&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Weights&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://huggingface.co/openbmb/VoxCPM2&quot;&gt;🤗 HF&lt;/a&gt; / &lt;a href=&quot;https://modelscope.cn/models/OpenBMB/VoxCPM2&quot;&gt;MS&lt;/a&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://huggingface.co/openbmb/VoxCPM1.5&quot;&gt;🤗 HF&lt;/a&gt; / &lt;a href=&quot;https://modelscope.cn/models/OpenBMB/VoxCPM1.5&quot;&gt;MS&lt;/a&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://huggingface.co/openbmb/VoxCPM-0.5B&quot;&gt;🤗 HF&lt;/a&gt; / &lt;a href=&quot;https://modelscope.cn/models/OpenBMB/VoxCPM-0.5B&quot;&gt;MS&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Technical Report&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Coming soon&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;—&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://arxiv.org/abs/2509.24650&quot;&gt;arXiv&lt;/a&gt; &lt;a href=&quot;https://openreview.net/forum?id=h5KLpGoqzC&quot;&gt;ICLR 2026&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;strong&gt;Demo Page&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://openbmb.github.io/voxcpm2-demopage&quot;&gt;Audio Samples&lt;/a&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;—&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://openbmb.github.io/VoxCPM-demopage&quot;&gt;Audio Samples&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;VoxCPM2 is built on a &lt;strong&gt;tokenizer-free, diffusion autoregressive&lt;/strong&gt; paradigm. The model operates entirely in the latent space of &lt;strong&gt;AudioVAE V2&lt;/strong&gt;, following a four-stage pipeline: &lt;strong&gt;LocEnc → TSLM → RALM → LocDiT&lt;/strong&gt;, enabling rich expressiveness and 48kHz native audio output.&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/assets/voxcpm_model.png&quot; alt=&quot;VoxCPM2 Model Architecture&quot; width=&quot;90%&quot; /&gt; 
&lt;/div&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;For full architectural details, VoxCPM2-specific upgrades, and a model comparison table, see the &lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/models/architecture.html&quot;&gt;Architecture Design&lt;/a&gt;.&lt;/p&gt; 
&lt;/blockquote&gt; 
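&lt;p&gt;For orientation only, here is a minimal conceptual sketch of the four-stage flow described above. The modules are placeholders (the real components, shapes, and interfaces live in the VoxCPM codebase and tech report); it illustrates nothing beyond the LocEnc → TSLM → RALM → LocDiT ordering:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import torch
import torch.nn as nn

class Stage(nn.Module):
    &quot;&quot;&quot;Placeholder stage; stands in for the actual VoxCPM2 module.&quot;&quot;&quot;
    def __init__(self, name: str):
        super().__init__()
        self.name = name
        self.body = nn.Identity()  # real module definitions are in the codebase

    def forward(self, x: torch.Tensor) -&gt; torch.Tensor:
        return self.body(x)

# Stage order as described above; the model operates in the AudioVAE V2 latent space
pipeline = nn.Sequential(Stage(&quot;LocEnc&quot;), Stage(&quot;TSLM&quot;), Stage(&quot;RALM&quot;), Stage(&quot;LocDiT&quot;))

latents = torch.randn(1, 16, 64)  # dummy latent sequence; shapes are illustrative
out = pipeline(latents)
&lt;/code&gt;&lt;/pre&gt; 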
&lt;hr /&gt; 
&lt;h2&gt;📊 Performance&lt;/h2&gt; 
&lt;p&gt;VoxCPM2 achieves state-of-the-art or comparable results on public zero-shot and controllable TTS benchmarks.&lt;/p&gt; 
&lt;h3&gt;Seed-TTS-eval&lt;/h3&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Seed-TTS-eval WER(⬇)&amp;amp;SIM(⬆) Results (click to expand)&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th&gt;Model&lt;/th&gt; 
    &lt;th&gt;Parameters&lt;/th&gt; 
    &lt;th&gt;Open-Source&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;test-EN&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;test-ZH&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;test-Hard&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;/td&gt; 
    &lt;td&gt;&lt;/td&gt; 
    &lt;td&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;WER/%⬇&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;SIM/%⬆&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;CER/%⬇&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;SIM/%⬆&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;CER/%⬇&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;SIM/%⬆&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;MegaTTS3&lt;/td&gt; 
    &lt;td&gt;0.5B&lt;/td&gt; 
    &lt;td&gt;❌&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.79&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.1&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.52&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;DiTAR&lt;/td&gt; 
    &lt;td&gt;0.6B&lt;/td&gt; 
    &lt;td&gt;❌&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.69&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.02&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;CosyVoice3&lt;/td&gt; 
    &lt;td&gt;0.5B&lt;/td&gt; 
    &lt;td&gt;❌&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.02&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;71.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.16&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.08&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.8&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;CosyVoice3&lt;/td&gt; 
    &lt;td&gt;1.5B&lt;/td&gt; 
    &lt;td&gt;❌&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.22&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;72.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.12&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.1&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;5.83&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.8&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Seed-TTS&lt;/td&gt; 
    &lt;td&gt;-&lt;/td&gt; 
    &lt;td&gt;❌&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.25&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;76.2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.12&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;7.59&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.6&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;MiniMax-Speech&lt;/td&gt; 
    &lt;td&gt;-&lt;/td&gt; 
    &lt;td&gt;❌&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.65&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;69.2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.83&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;F5-TTS&lt;/td&gt; 
    &lt;td&gt;0.3B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.00&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;67.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.53&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;76.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;8.67&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;71.3&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;MaskGCT&lt;/td&gt; 
    &lt;td&gt;1B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.62&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;71.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.27&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;CosyVoice&lt;/td&gt; 
    &lt;td&gt;0.3B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.29&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;60.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.63&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;72.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;11.75&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;70.9&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;CosyVoice2&lt;/td&gt; 
    &lt;td&gt;0.5B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.09&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;65.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.38&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.83&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;72.4&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;SparkTTS&lt;/td&gt; 
    &lt;td&gt;0.5B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.14&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;57.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.54&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;66.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;FireRedTTS&lt;/td&gt; 
    &lt;td&gt;0.5B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.82&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;46.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.51&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;63.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;17.45&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;62.1&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;FireRedTTS-2&lt;/td&gt; 
    &lt;td&gt;1.5B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.95&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;66.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.14&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Qwen2.5-Omni&lt;/td&gt; 
    &lt;td&gt;7B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.72&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;63.2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.70&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;7.97&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;74.7&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Qwen3-Omni&lt;/td&gt; 
    &lt;td&gt;30B-A3B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.39&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.07&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;OpenAudio-s1-mini&lt;/td&gt; 
    &lt;td&gt;0.5B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.94&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;55.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.18&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;68.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;23.37&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;64.3&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;IndexTTS2&lt;/td&gt; 
    &lt;td&gt;1.5B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.23&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;70.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.03&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;76.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;7.12&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.5&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;VibeVoice&lt;/td&gt; 
    &lt;td&gt;1.5B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.04&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;68.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.16&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;74.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;HiggsAudio-v2&lt;/td&gt; 
    &lt;td&gt;3B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.44&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;67.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.50&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;74.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;55.07&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;65.6&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;VoxCPM-0.5B&lt;/td&gt; 
    &lt;td&gt;0.6B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.85&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;72.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.93&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;8.87&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.0&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;VoxCPM1.5&lt;/td&gt; 
    &lt;td&gt;0.8B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.12&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;71.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.18&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;7.74&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.1&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;MOSS-TTS&lt;/td&gt; 
    &lt;td&gt;&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.85&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.20&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Qwen3-TTS&lt;/td&gt; 
    &lt;td&gt;1.7B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.23&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;71.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.22&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.76&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;74.8&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;FishAudio S2&lt;/td&gt; 
    &lt;td&gt;4B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.99&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.54&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;5.99&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;LongCat-Audio-DiT&lt;/td&gt; 
    &lt;td&gt;3.5B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.50&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.09&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;81.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.04&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.7&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;VoxCPM2&lt;/strong&gt;&lt;/td&gt; 
    &lt;td&gt;2B&lt;/td&gt; 
    &lt;td&gt;✅&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.84&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.97&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;8.13&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.3&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/details&gt; 
&lt;h3&gt;CV3-eval&lt;/h3&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;CV3-eval Multilingual WER/CER(⬇) Results (click to expand)&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th&gt;Model&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;zh&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;en&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;hard-zh&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;hard-en&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;ja&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;ko&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;de&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;es&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;fr&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;it&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;ru&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td&gt;CosyVoice2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.08&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.32&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;12.58&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;11.96&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;9.13&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;19.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;CosyVoice3-1.5B&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.91&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.99&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;9.77&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;10.55&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;7.57&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;5.69&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.43&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.47&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;11.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;10.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.64&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Fish Audio S2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.65&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.43&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;9.10&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.40&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.96&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.76&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.22&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.00&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.26&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.04&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.78&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;VoxCPM2&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.65&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;5.00&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;8.55&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;8.48&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;5.96&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;5.69&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.77&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.80&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;9.85&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.25&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;5.21&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/details&gt; 
&lt;h3&gt;MiniMax-Multilingual-Test&lt;/h3&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;MiniMax-MLS-test WER(⬇) Results (click to expand)&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th&gt;Language&lt;/th&gt; 
     &lt;th style=&quot;text-align:center&quot;&gt;MiniMax&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;ElevenLabs&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;Qwen3-TTS&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;FishAudio S2&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;&lt;strong&gt;VoxCPM2&lt;/strong&gt;&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Arabic&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;1.665&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.666&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.500&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;13.046&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Cantonese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;34.111&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;51.513&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;30.670&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;38.584&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Chinese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.252&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;16.026&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.928&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.730&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.136&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Czech&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.875&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;2.108&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.840&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;24.132&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Dutch&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.143&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.803&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.990&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.913&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;English&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.164&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.339&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.934&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.620&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.289&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Finnish&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.666&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.964&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.330&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;2.632&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;French&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.099&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;5.216&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;2.858&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.050&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.534&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;German&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.906&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.572&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.235&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.550&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.679&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Greek&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.016&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.991&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;5.740&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.844&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Hindi&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.962&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;5.827&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;14.640&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;19.699&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Indonesian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.237&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;1.059&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.460&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.084&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Italian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.543&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.743&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.948&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.270&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.563&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Japanese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.519&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;10.646&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.823&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;2.760&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.628&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Korean&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.747&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.865&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.755&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;1.180&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.962&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Polish&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.415&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.766&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.260&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.141&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Portuguese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.877&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.331&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.526&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;1.140&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.938&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Romanian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.878&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;1.347&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;10.740&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;21.577&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Russian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.281&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.878&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.212&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;2.400&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.634&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Spanish&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.029&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.084&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.126&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.910&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.438&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Thai&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.701&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.936&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;4.230&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.961&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Turkish&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.52&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.699&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.870&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;0.817&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Ukrainian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;1.082&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.997&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;2.300&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;6.316&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Vietnamese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;0.88&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.415&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;7.410&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;3.307&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;MiniMax-MLS-test SIM(⬆) Results (click to expand)&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th&gt;Language&lt;/th&gt; 
     &lt;th style=&quot;text-align:center&quot;&gt;MiniMax&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;ElevenLabs&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;Qwen3-TTS&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;FishAudio S2&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;&lt;strong&gt;VoxCPM2&lt;/strong&gt;&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Arabic&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;70.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;79.1&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Cantonese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;67.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;80.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;83.5&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Chinese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;67.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;81.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;82.5&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Czech&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;68.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;79.8&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.3&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Dutch&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;68.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;80.8&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;English&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;61.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;85.4&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Finnish&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;83.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;81.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;89.0&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;French&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;62.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;53.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;62.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;69.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;73.5&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;German&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;61.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;76.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;80.3&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Greek&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;82.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;86.0&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Hindi&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;81.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;82.1&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;85.6&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Indonesian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;72.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;66.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;76.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;80.0&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Italian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;69.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;57.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;81.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;74.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;78.0&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Japanese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;82.8&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Korean&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;70.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;81.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;83.3&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Polish&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;80.2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;72.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;81.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;88.4&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Portuguese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;80.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;71.1&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;81.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.1&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;83.7&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Romanian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;80.9&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;69.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.7&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Russian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;76.1&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;67.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;79.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;81.1&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Spanish&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;76.2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;61.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;81.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;83.1&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Thai&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;80.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;58.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;78.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;84.0&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Turkish&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;59.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;83.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;87.1&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Ukrainian&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;73.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;64.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;74.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;79.8&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Vietnamese&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;74.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;36.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;74.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;80.6&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/details&gt; 
&lt;h3&gt;Internal 30-Language ASR Benchmark&lt;/h3&gt; 
&lt;p&gt;We additionally run an internal multilingual intelligibility benchmark covering &lt;strong&gt;30 languages × 500 samples&lt;/strong&gt;. ASR transcripts are produced with the &lt;strong&gt;Gemini 3.1 Flash Lite API&lt;/strong&gt; and scored against the reference text.&lt;/p&gt; 
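&lt;p&gt;For reference, the WER/CER numbers reported throughout this section are word- and character-level edit-distance rates, i.e. (substitutions + deletions + insertions) / reference length, between the ASR transcript and the reference text. A minimal sketch using the &lt;code&gt;jiwer&lt;/code&gt; library (an illustrative choice, not something this repo depends on):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import jiwer  # any WER/CER implementation works; jiwer is used here for illustration

reference  = &quot;the quick brown fox jumps over the lazy dog&quot;
hypothesis = &quot;the quick brown fox jumped over a lazy dog&quot;

# Word error rate: (S + D + I) / number of reference words
print(f&quot;WER: {jiwer.wer(reference, hypothesis):.2%}&quot;)
# Character error rate: same formula at the character level
print(f&quot;CER: {jiwer.cer(reference, hypothesis):.2%}&quot;)
&lt;/code&gt;&lt;/pre&gt; 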
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Internal 30-Language ASR Benchmark (click to expand)&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th&gt;Language&lt;/th&gt; 
    &lt;th style=&quot;text-align:right&quot;&gt;Metric&lt;/th&gt; 
    &lt;th style=&quot;text-align:right&quot;&gt;VoxCPM2&lt;/th&gt; 
    &lt;th style=&quot;text-align:right&quot;&gt;Fish S2-Pro&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td&gt;ar (Arabic)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.23%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.30%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;da (Danish)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.70%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;3.52%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;de (German)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.96%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.64%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;el (Greek)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;3.17%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;4.61%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;en (English)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.42%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.03%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;es (Spanish)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.33%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.64%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;fi (Finnish)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.24%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.80%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;fr (French)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.16%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.34%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;he (Hebrew)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.98%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;15.27%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;hi (Hindi)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.79%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.91%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;id (Indonesian)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.36%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.68%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;it (Italian)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.65%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.08%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;ja (Japanese)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.40%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.82%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;km (Khmer)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.05%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;75.15%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;ko (Korean)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.95%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.29%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;lo (Lao)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.90%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;87.40%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;ms (Malay)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.75%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.41%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;my (Burmese)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.42%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;85.27%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;nl (Dutch)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.25%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.68%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;no (Norwegian)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.49%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;3.76%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;pl (Polish)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.90%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.65%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;pt (Portuguese)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.48%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.49%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;ru (Russian)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.90%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.86%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;sv (Swedish)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.22%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.63%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;sw (Swahili)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.07%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.02%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;th (Thai)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.94%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.92%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;tl (Tagalog)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;2.63%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;4.00%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;tr (Turkish)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.65%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.65%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;vi (Vietnamese)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;WER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.56%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;5.56%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;zh (Chinese)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;CER&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;0.92%&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;1.02%&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Average (30 languages)&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;&lt;strong&gt;1.68%&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:right&quot;&gt;-&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/details&gt; 
&lt;h3&gt;InstructTTSEval&lt;/h3&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Instruction-Guided Voice Design Results (click to expand)&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th&gt;Model&lt;/th&gt; 
     &lt;th style=&quot;text-align:center&quot; colspan=&quot;3&quot;&gt;InstructTTSEval-ZH&lt;/th&gt; 
     &lt;th style=&quot;text-align:center&quot; colspan=&quot;3&quot;&gt;InstructTTSEval-EN&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;APS⬆&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;DSD⬆&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;RP⬆&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;APS⬆&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;DSD⬆&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;RP⬆&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Hume&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;83.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;54.3&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;VoxInstruct&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;47.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;52.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;42.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;54.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;57.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;39.3&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Parler-tts-mini&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;63.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;48.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;28.6&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Parler-tts-large&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;60.0&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;45.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;31.2&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;PromptTTS&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;64.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;47.2&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;31.4&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;PromptStyle&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;57.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;46.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;30.9&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;VoiceSculptor&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;64.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;61.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;–&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Mimo-Audio-7B-Instruct&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;75.7&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;74.3&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;61.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;80.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;77.6&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;59.5&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;Qwen3TTS-12Hz-1.7B-VD&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;85.2&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;81.1&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;65.1&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;82.9&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;82.4&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;68.4&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td&gt;&lt;strong&gt;VoxCPM2&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;85.2&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;71.5&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;60.8&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;84.2&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;83.2&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;71.4&lt;/strong&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/details&gt; 
&lt;hr /&gt; 
&lt;h2&gt;⚙️ Fine-tuning&lt;/h2&gt; 
&lt;p&gt;VoxCPM supports both &lt;strong&gt;full fine-tuning (SFT)&lt;/strong&gt; and &lt;strong&gt;LoRA fine-tuning&lt;/strong&gt;. With as little as &lt;strong&gt;5–10 minutes&lt;/strong&gt; of audio, you can adapt the model to a specific speaker, language, or domain.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# LoRA fine-tuning (parameter-efficient, recommended)
python scripts/train_voxcpm_finetune.py \
    --config_path conf/voxcpm_v2/voxcpm_finetune_lora.yaml

# Full fine-tuning
python scripts/train_voxcpm_finetune.py \
    --config_path conf/voxcpm_v2/voxcpm_finetune_all.yaml

# WebUI for training &amp;amp; inference
python lora_ft_webui.py   # then open http://localhost:7860
&lt;/code&gt;&lt;/pre&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;Full guide →&lt;/strong&gt; &lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/finetuning/finetune.html&quot;&gt;Fine-tuning Guide&lt;/a&gt; (data preparation, configuration, training, LoRA hot-swapping, FAQ)&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;hr /&gt; 
&lt;h2&gt;📚 Documentation&lt;/h2&gt; 
&lt;p&gt;Full documentation: &lt;strong&gt;&lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/&quot;&gt;voxcpm.readthedocs.io&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Topic&lt;/th&gt; 
   &lt;th&gt;Link&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Quick Start &amp;amp; Installation&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/quickstart.html&quot;&gt;Quick Start&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Usage Guide &amp;amp; Cookbook&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/usage_guide.html&quot;&gt;User Guide&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;VoxCPM Series&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/models/version_history.html&quot;&gt;Models&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;Fine-tuning (SFT &amp;amp; LoRA)&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/finetuning/finetune.html&quot;&gt;Fine-tuning Guide&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;FAQ &amp;amp; Troubleshooting&lt;/td&gt; 
   &lt;td&gt;&lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/faq.html&quot;&gt;FAQ&lt;/a&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;hr /&gt; 
&lt;h2&gt;🌟 Ecosystem &amp;amp; Community&lt;/h2&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th&gt;Project&lt;/th&gt; 
   &lt;th&gt;Description&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/a710128/nanovllm-voxcpm&quot;&gt;&lt;strong&gt;Nano-vLLM&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
    &lt;td&gt;High-throughput, fast GPU serving&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/vllm-project/vllm-omni&quot;&gt;&lt;strong&gt;vLLM-Omni&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Official vLLM omni-modal serving for VoxCPM2 — PagedAttention, OpenAI-compatible API&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/bluryar/VoxCPM.cpp&quot;&gt;&lt;strong&gt;VoxCPM.cpp&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;GGML/GGUF: CPU, CUDA, Vulkan inference&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/bluryar/VoxCPM-ONNX&quot;&gt;&lt;strong&gt;VoxCPM-ONNX&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;ONNX export for CPU inference&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/0seba/VoxCPMANE&quot;&gt;&lt;strong&gt;VoxCPMANE&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Apple Neural Engine backend&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/madushan1000/voxcpm_rs&quot;&gt;&lt;strong&gt;voxcpm_rs&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Rust re-implementation&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/wildminder/ComfyUI-VoxCPM&quot;&gt;&lt;strong&gt;ComfyUI-VoxCPM&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;ComfyUI node-based workflows&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/HM-RunningHub/ComfyUI_RH_VoxCPM&quot;&gt;&lt;strong&gt;ComfyUI_RH_VoxCPM&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
    &lt;td&gt;Feature-complete ComfyUI workflow for VoxCPM2 with multi-speaker generation, LoRA, and auto-ASR&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/1038lab/ComfyUI-VoxCPMTTS&quot;&gt;&lt;strong&gt;ComfyUI-VoxCPMTTS&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;ComfyUI TTS extension&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td&gt;&lt;a href=&quot;https://github.com/rsxdalv/tts_webui_extension.vox_cpm&quot;&gt;&lt;strong&gt;TTS WebUI&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td&gt;Browser-based TTS extension&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;See the full &lt;a href=&quot;https://voxcpm.readthedocs.io/en/latest/&quot;&gt;Ecosystem&lt;/a&gt; in the docs. Community projects are not officially maintained by OpenBMB. Built something cool? &lt;a href=&quot;https://github.com/OpenBMB/VoxCPM/issues&quot;&gt;Open an issue or PR&lt;/a&gt; to add it!&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;hr /&gt; 
&lt;h2&gt;⚠️ Risks and Limitations&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Potential for Misuse:&lt;/strong&gt; VoxCPM&#39;s voice cloning can generate highly realistic synthetic speech. It is &lt;strong&gt;strictly forbidden&lt;/strong&gt; to use VoxCPM for impersonation, fraud, or disinformation. We strongly recommend clearly marking any AI-generated content.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Controllable Generation Stability:&lt;/strong&gt; Voice Design and Controllable Voice Cloning results can vary between runs; you may need to generate one to three times to obtain the desired voice or style. We are actively working on improving controllability consistency.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Language Coverage:&lt;/strong&gt; VoxCPM2 officially supports 30 languages. For languages not on the list, you are welcome to test directly or try fine-tuning on your own data. We plan to expand language coverage in future releases.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Usage:&lt;/strong&gt; This model is released under the Apache-2.0 license. For production deployments, we recommend conducting thorough testing and safety evaluation tailored to your use case.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;h2&gt;📖 Citation&lt;/h2&gt; 
&lt;p&gt;If you find VoxCPM helpful, please consider citing our work and starring ⭐ the repository!&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bib&quot;&gt;@article{voxcpm2_2026,
  title   = {VoxCPM2: Tokenizer-Free TTS for Multilingual Speech Generation, Creative Voice Design, and True-to-Life Cloning},
  author  = {VoxCPM Team},
  journal = {GitHub},
  year    = {2026},
}

@article{voxcpm2025,
  title   = {VoxCPM: Tokenizer-Free TTS for Context-Aware Speech Generation
             and True-to-Life Voice Cloning},
  author  = {Zhou, Yixuan and Zeng, Guoyang and Liu, Xin and Li, Xiang and
             Yu, Renjie and Wang, Ziyang and Ye, Runchuan and Sun, Weiyue and
             Gui, Jiancheng and Li, Kehan and Wu, Zhiyong and Liu, Zhiyuan},
  journal = {arXiv preprint arXiv:2509.24650},
  year    = {2025},
}
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;📄 License&lt;/h2&gt; 
&lt;p&gt;VoxCPM model weights and code are open-sourced under the &lt;a href=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/LICENSE&quot;&gt;Apache-2.0&lt;/a&gt; license.&lt;/p&gt; 
&lt;h2&gt;🙏 Acknowledgments&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;a href=&quot;https://arxiv.org/abs/2502.03930&quot;&gt;DiTAR&lt;/a&gt; for the diffusion autoregressive backbone&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/OpenBMB/MiniCPM&quot;&gt;MiniCPM-4&lt;/a&gt; for the language model foundation&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/FunAudioLLM/CosyVoice&quot;&gt;CosyVoice&lt;/a&gt; for the Flow Matching-based LocDiT implementation&lt;/li&gt; 
 &lt;li&gt;&lt;a href=&quot;https://github.com/descriptinc/descript-audio-codec&quot;&gt;DAC&lt;/a&gt; for the Audio VAE backbone&lt;/li&gt; 
 &lt;li&gt;Our community users for trying VoxCPM, reporting issues, sharing ideas, and contributing—your support helps the project keep getting better&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Institutions&lt;/h2&gt; 
&lt;p&gt; &lt;a href=&quot;https://modelbest.cn/&quot;&gt;&lt;img src=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/assets/modelbest_logo.png&quot; width=&quot;28px&quot; /&gt; ModelBest&lt;/a&gt; &amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;a href=&quot;https://github.com/thuhcsi&quot;&gt;&lt;img src=&quot;https://raw.githubusercontent.com/OpenBMB/VoxCPM/main/assets/thuhcsi_logo.png&quot; width=&quot;28px&quot; /&gt; THUHCSI&lt;/a&gt; &lt;/p&gt; 
&lt;h2&gt;⭐ Star History&lt;/h2&gt; 
&lt;p&gt;&lt;a href=&quot;https://star-history.com/#OpenBMB/VoxCPM&amp;amp;Date&quot;&gt;&lt;img src=&quot;https://api.star-history.com/svg?repos=OpenBMB/VoxCPM&amp;amp;type=Date&quot; alt=&quot;Star History Chart&quot; /&gt;&lt;/a&gt;&lt;/p&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/12e5203c78141386a44488732ea2fd71ce865f46b8ca5e6771748c8daf758d35/OpenBMB/VoxCPM" medium="image" />
      
    </item>
    
    <item>
      <title>obra/superpowers</title>
      <link>https://github.com/obra/superpowers</link>
      <description>&lt;p&gt;An agentic skills framework &amp; software development methodology that works.&lt;/p&gt;&lt;hr&gt;&lt;h1&gt;Superpowers&lt;/h1&gt; 
&lt;p&gt;Superpowers is a complete software development methodology for your coding agents, built on top of a set of composable skills and some initial instructions that make sure your agent uses them.&lt;/p&gt; 
&lt;h2&gt;How it works&lt;/h2&gt; 
&lt;p&gt;It starts from the moment you fire up your coding agent. As soon as it sees that you&#39;re building something, it &lt;em&gt;doesn&#39;t&lt;/em&gt; just jump into trying to write code. Instead, it steps back and asks you what you&#39;re really trying to do.&lt;/p&gt; 
&lt;p&gt;Once it&#39;s teased a spec out of the conversation, it shows it to you in chunks short enough to actually read and digest.&lt;/p&gt; 
&lt;p&gt;After you&#39;ve signed off on the design, your agent puts together an implementation plan that&#39;s clear enough for an enthusiastic junior engineer with poor taste, no judgement, no project context, and an aversion to testing to follow. It emphasizes true red/green TDD, YAGNI (You Aren&#39;t Gonna Need It), and DRY.&lt;/p&gt; 
&lt;p&gt;Next up, once you say &quot;go&quot;, it launches a &lt;em&gt;subagent-driven-development&lt;/em&gt; process, having agents work through each engineering task, inspecting and reviewing their work, and continuing forward. It&#39;s not uncommon for Claude to be able to work autonomously for a couple hours at a time without deviating from the plan you put together.&lt;/p&gt; 
&lt;p&gt;There&#39;s a bunch more to it, but that&#39;s the core of the system. And because the skills trigger automatically, you don&#39;t need to do anything special. Your coding agent just has Superpowers.&lt;/p&gt; 
&lt;h2&gt;Sponsorship&lt;/h2&gt; 
&lt;p&gt;If Superpowers has helped you do stuff that makes money and you are so inclined, I&#39;d greatly appreciate it if you&#39;d consider &lt;a href=&quot;https://github.com/sponsors/obra&quot;&gt;sponsoring my opensource work&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;Thanks!&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Jesse&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Installation&lt;/h2&gt; 
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Installation differs by platform.&lt;/p&gt; 
&lt;h3&gt;Claude Code Official Marketplace&lt;/h3&gt; 
&lt;p&gt;Superpowers is available via the &lt;a href=&quot;https://claude.com/plugins/superpowers&quot;&gt;official Claude plugin marketplace&lt;/a&gt;&lt;/p&gt; 
&lt;p&gt;Install the plugin from Anthropic&#39;s official marketplace:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/plugin install superpowers@claude-plugins-official
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Claude Code (Superpowers Marketplace)&lt;/h3&gt; 
&lt;p&gt;The Superpowers marketplace provides Superpowers and some other related plugins for Claude Code.&lt;/p&gt; 
&lt;p&gt;In Claude Code, register the marketplace first:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/plugin marketplace add obra/superpowers-marketplace
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Then install the plugin from this marketplace:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/plugin install superpowers@superpowers-marketplace
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;OpenAI Codex CLI&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;Open the plugin search interface&lt;/li&gt; 
&lt;/ul&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/plugins
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Search for Superpowers&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;superpowers
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Select &lt;code&gt;Install Plugin&lt;/code&gt;&lt;/p&gt; 
&lt;h3&gt;OpenAI Codex App&lt;/h3&gt; 
&lt;ul&gt; 
 &lt;li&gt;In the Codex app, click on Plugins in the sidebar.&lt;/li&gt; 
 &lt;li&gt;You should see &lt;code&gt;Superpowers&lt;/code&gt; in the Coding section.&lt;/li&gt; 
 &lt;li&gt;Click the &lt;code&gt;+&lt;/code&gt; next to Superpowers and follow the prompts.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h3&gt;Cursor (via Plugin Marketplace)&lt;/h3&gt; 
&lt;p&gt;In Cursor Agent chat, install from marketplace:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;/add-plugin superpowers
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;or search for &quot;superpowers&quot; in the plugin marketplace.&lt;/p&gt; 
&lt;h3&gt;OpenCode&lt;/h3&gt; 
&lt;p&gt;Tell OpenCode:&lt;/p&gt; 
&lt;pre&gt;&lt;code&gt;Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.opencode/INSTALL.md
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Detailed docs:&lt;/strong&gt; &lt;a href=&quot;https://raw.githubusercontent.com/obra/superpowers/main/docs/README.opencode.md&quot;&gt;docs/README.opencode.md&lt;/a&gt;&lt;/p&gt; 
&lt;h3&gt;GitHub Copilot CLI&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;copilot plugin marketplace add obra/superpowers-marketplace
copilot plugin install superpowers@superpowers-marketplace
&lt;/code&gt;&lt;/pre&gt; 
&lt;h3&gt;Gemini CLI&lt;/h3&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;gemini extensions install https://github.com/obra/superpowers
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;To update:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;gemini extensions update superpowers
&lt;/code&gt;&lt;/pre&gt; 
&lt;h2&gt;The Basic Workflow&lt;/h2&gt; 
&lt;ol&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;brainstorming&lt;/strong&gt; - Activates before writing code. Refines rough ideas through questions, explores alternatives, presents design in sections for validation. Saves design document.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;using-git-worktrees&lt;/strong&gt; - Activates after design approval. Creates isolated workspace on new branch, runs project setup, verifies clean test baseline (see the plain-git sketch after this list).&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;writing-plans&lt;/strong&gt; - Activates with approved design. Breaks work into bite-sized tasks (2-5 minutes each). Every task has exact file paths, complete code, verification steps.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;subagent-driven-development&lt;/strong&gt; or &lt;strong&gt;executing-plans&lt;/strong&gt; - Activates with plan. Dispatches fresh subagent per task with two-stage review (spec compliance, then code quality), or executes in batches with human checkpoints.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;test-driven-development&lt;/strong&gt; - Activates during implementation. Enforces RED-GREEN-REFACTOR: write failing test, watch it fail, write minimal code, watch it pass, commit. Deletes code written before tests.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;requesting-code-review&lt;/strong&gt; - Activates between tasks. Reviews against plan, reports issues by severity. Critical issues block progress.&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;strong&gt;finishing-a-development-branch&lt;/strong&gt; - Activates when tasks complete. Verifies tests, presents options (merge/PR/keep/discard), cleans up worktree.&lt;/p&gt; &lt;/li&gt; 
&lt;/ol&gt; 
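&lt;p&gt;As a rough illustration of what step 2 (&lt;strong&gt;using-git-worktrees&lt;/strong&gt;) does under the hood, here is a plain-git sketch; the path, branch name, and setup/test commands are placeholders, not what the skill literally runs:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create an isolated worktree on a fresh branch (illustrative names)
git worktree add -b feature-x ../superpowers-feature-x
cd ../superpowers-feature-x

# Run project setup and confirm a clean test baseline before any changes
npm install
npm test
&lt;/code&gt;&lt;/pre&gt; 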
&lt;p&gt;&lt;strong&gt;The agent checks for relevant skills before any task.&lt;/strong&gt; Mandatory workflows, not suggestions.&lt;/p&gt; 
&lt;h2&gt;What&#39;s Inside&lt;/h2&gt; 
&lt;h3&gt;Skills Library&lt;/h3&gt; 
&lt;p&gt;&lt;strong&gt;Testing&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;test-driven-development&lt;/strong&gt; - RED-GREEN-REFACTOR cycle (includes testing anti-patterns reference)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Debugging&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;systematic-debugging&lt;/strong&gt; - 4-phase root cause process (includes root-cause-tracing, defense-in-depth, condition-based-waiting techniques)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;verification-before-completion&lt;/strong&gt; - Ensure it&#39;s actually fixed&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Collaboration&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;brainstorming&lt;/strong&gt; - Socratic design refinement&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;writing-plans&lt;/strong&gt; - Detailed implementation plans&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;executing-plans&lt;/strong&gt; - Batch execution with checkpoints&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;dispatching-parallel-agents&lt;/strong&gt; - Concurrent subagent workflows&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;requesting-code-review&lt;/strong&gt; - Pre-review checklist&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;receiving-code-review&lt;/strong&gt; - Responding to feedback&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;using-git-worktrees&lt;/strong&gt; - Parallel development branches&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;finishing-a-development-branch&lt;/strong&gt; - Merge/PR decision workflow&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;subagent-driven-development&lt;/strong&gt; - Fast iteration with two-stage review (spec compliance, then code quality)&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;&lt;strong&gt;Meta&lt;/strong&gt;&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;writing-skills&lt;/strong&gt; - Create new skills following best practices (includes testing methodology)&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;using-superpowers&lt;/strong&gt; - Introduction to the skills system&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;Philosophy&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Test-Driven Development&lt;/strong&gt; - Write tests first, always&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Systematic over ad-hoc&lt;/strong&gt; - Process over guessing&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Complexity reduction&lt;/strong&gt; - Simplicity as primary goal&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Evidence over claims&lt;/strong&gt; - Verify before declaring success&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Read &lt;a href=&quot;https://blog.fsck.com/2025/10/09/superpowers/&quot;&gt;the original release announcement&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;Contributing&lt;/h2&gt; 
&lt;p&gt;The general contribution process for Superpowers is below. Keep in mind that we don&#39;t generally accept contributions of new skills and that any updates to skills must work across all of the coding agents we support.&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;Fork the repository&lt;/li&gt; 
 &lt;li&gt;Switch to the &#39;dev&#39; branch&lt;/li&gt; 
 &lt;li&gt;Create a branch for your work (see the git sketch after this list)&lt;/li&gt; 
 &lt;li&gt;Follow the &lt;code&gt;writing-skills&lt;/code&gt; skill for creating and testing new and modified skills&lt;/li&gt; 
 &lt;li&gt;Submit a PR, being sure to fill in the pull request template.&lt;/li&gt; 
&lt;/ol&gt; 
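&lt;p&gt;In practice, steps 1–3 look something like this (the fork URL and branch name are placeholders):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Clone your fork, start from the dev branch, then branch off for your change
git clone https://github.com/YOUR-USERNAME/superpowers.git
cd superpowers
git checkout dev
git checkout -b my-skill-update
&lt;/code&gt;&lt;/pre&gt; 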
&lt;p&gt;See &lt;code&gt;skills/writing-skills/SKILL.md&lt;/code&gt; for the complete guide.&lt;/p&gt; 
&lt;h2&gt;Updating&lt;/h2&gt; 
&lt;p&gt;How Superpowers is updated depends on your coding agent, but updates are often applied automatically.&lt;/p&gt; 
&lt;h2&gt;License&lt;/h2&gt; 
&lt;p&gt;MIT License - see LICENSE file for details&lt;/p&gt; 
&lt;h2&gt;Community&lt;/h2&gt; 
&lt;p&gt;Superpowers is built by &lt;a href=&quot;https://blog.fsck.com&quot;&gt;Jesse Vincent&lt;/a&gt; and the rest of the folks at &lt;a href=&quot;https://primeradiant.com&quot;&gt;Prime Radiant&lt;/a&gt;.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Discord&lt;/strong&gt;: &lt;a href=&quot;https://discord.gg/35wsABTejz&quot;&gt;Join us&lt;/a&gt; for community support, questions, and sharing what you&#39;re building with Superpowers&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Issues&lt;/strong&gt;: &lt;a href=&quot;https://github.com/obra/superpowers/issues&quot;&gt;https://github.com/obra/superpowers/issues&lt;/a&gt;&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Release announcements&lt;/strong&gt;: &lt;a href=&quot;https://primeradiant.com/superpowers/&quot;&gt;Sign up&lt;/a&gt; to get notified about new versions&lt;/li&gt; 
&lt;/ul&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/546a071ca2efd4dbd159eac5c77dd683fa28289e55b29c0d179391c49609ec6e/obra/superpowers" medium="image" />
      
    </item>
    
    <item>
      <title>HKUDS/DeepTutor</title>
      <link>https://github.com/HKUDS/DeepTutor</link>
      <description>&lt;p&gt;&quot;DeepTutor: Agent-Native Personalized Learning Assistant&quot;&lt;/p&gt;&lt;hr&gt;&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/logo-ver2.png&quot; alt=&quot;DeepTutor&quot; width=&quot;140&quot; style=&quot;border-radius: 15px;&quot; /&gt; 
 &lt;h1&gt;DeepTutor: Agent-Native Personalized Tutoring&lt;/h1&gt; 
 &lt;p&gt;&lt;a href=&quot;https://trendshift.io/repositories/17099&quot; target=&quot;_blank&quot;&gt;&lt;img src=&quot;https://trendshift.io/api/badge/repositories/17099&quot; alt=&quot;HKUDS%2FDeepTutor | Trendshift&quot; style=&quot;width: 250px; height: 55px;&quot; width=&quot;250&quot; height=&quot;55&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;a href=&quot;https://www.python.org/downloads/&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Python-3.11%2B-3776AB?style=flat-square&amp;amp;logo=python&amp;amp;logoColor=white&quot; alt=&quot;Python 3.11+&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://nextjs.org/&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Next.js-16-000000?style=flat-square&amp;amp;logo=next.js&amp;amp;logoColor=white&quot; alt=&quot;Next.js 16&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/LICENSE&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/License-Apache_2.0-blue?style=flat-square&quot; alt=&quot;License&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases&quot;&gt;&lt;img src=&quot;https://img.shields.io/github/v/release/HKUDS/DeepTutor?style=flat-square&amp;amp;color=brightgreen&quot; alt=&quot;GitHub release&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/#&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/arXiv-Coming_Soon-b31b1b?style=flat-square&amp;amp;logo=arxiv&amp;amp;logoColor=white&quot; alt=&quot;arXiv&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;a href=&quot;https://discord.gg/eRsjPgMU4t&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Discord-Community-5865F2?style=flat-square&amp;amp;logo=discord&amp;amp;logoColor=white&quot; alt=&quot;Discord&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/Communication.md&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/Feishu-Group-00D4AA?style=flat-square&amp;amp;logo=feishu&amp;amp;logoColor=white&quot; alt=&quot;Feishu&quot; /&gt;&lt;/a&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/issues/78&quot;&gt;&lt;img src=&quot;https://img.shields.io/badge/WeChat-Group-07C160?style=flat-square&amp;amp;logo=wechat&amp;amp;logoColor=white&quot; alt=&quot;WeChat&quot; /&gt;&lt;/a&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/#-key-features&quot;&gt;Features&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/#-get-started&quot;&gt;Get Started&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/#-explore-deeptutor&quot;&gt;Explore&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/#-tutorbot--persistent-autonomous-ai-tutors&quot;&gt;TutorBot&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/#%EF%B8%8F-deeptutor-cli--agent-native-interface&quot;&gt;CLI&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/#-community--ecosystem&quot;&gt;Community&lt;/a&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/README/README_CN.md&quot;&gt;🇨🇳 中文&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/README/README_JA.md&quot;&gt;🇯🇵 日本語&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/README/README_ES.md&quot;&gt;🇪🇸 Español&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/README/README_FR.md&quot;&gt;🇫🇷 Français&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/README/README_AR.md&quot;&gt;🇸🇦 العربية&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/README/README_RU.md&quot;&gt;🇷🇺 Русский&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/README/README_HI.md&quot;&gt;🇮🇳 हिन्दी&lt;/a&gt; · &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/README/README_PT.md&quot;&gt;🇵🇹 Português&lt;/a&gt;&lt;/p&gt; 
&lt;/div&gt; 
&lt;hr /&gt; 
&lt;h3&gt;📦 Releases&lt;/h3&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.17]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.1.1&quot;&gt;v1.1.1&lt;/a&gt; — Universal &quot;Answer now&quot; escape hatch across every capability, Co-Writer resizable split with line-anchored scroll sync, Save-to-Notebook message-selection mode, real notebook system adoption across Knowledge/Guide/Save flows, unified collapsible settings panel, dedicated streaming Stop button, TutorBot config manager refactor with atomic writes, light/Snow theme refresh, and expanded test suite.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.15]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.1.0&quot;&gt;v1.1.0&lt;/a&gt; — LaTeX block math parsing overhaul, LLM diagnostic probe agents.yaml configuration, extra headers forwarding in LLM factory, SaveToNotebookModal UUID fix, Docker + local LLM guidance, and expanded test suite.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.14]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.1.0-beta&quot;&gt;v1.1.0-beta&lt;/a&gt; — URL-based chat routing with bookmarkable sessions, Snow theme, WebSocket heartbeat &amp;amp; auto-reconnect with resume, ChatComposer performance optimization, embedding provider registry overhaul, Serper search provider, streaming idle timeout, and expanded test suite.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.13]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.3&quot;&gt;v1.0.3&lt;/a&gt; — Question Notebook for unified quiz review with bookmarks &amp;amp; categories, Mermaid diagram support in Visualize, embedding model mismatch detection, system message merging for Qwen/vLLM compatibility, LM Studio &amp;amp; llama.cpp provider support, and Glass theme.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.11]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.2&quot;&gt;v1.0.2&lt;/a&gt; — Search consolidation simplification with SearXNG fallback, provider switch fix, explicit runtime config in test runner, and frontend resource leak fixes.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.10]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.1&quot;&gt;v1.0.1&lt;/a&gt; — New Visualize capability with Chart.js/SVG rendering pipeline, quiz duplicate prevention with generation history, o4-mini model support, and server logging improvements.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.10]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.0-beta.4&quot;&gt;v1.0.0-beta.4&lt;/a&gt; — Embedding progress tracking with HTTP 429 rate limit retry, cross-platform start tour dependency management, and case-insensitive MIME validation fix.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.8]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.0-beta.3&quot;&gt;v1.0.0-beta.3&lt;/a&gt; — Remove litellm dependency with native OpenAI/Anthropic SDK providers, Windows Math Animator compatibility, robust JSON parsing for LLM outputs, Guided Learning KaTeX &amp;amp; navigation fixes, and full i18n coverage for Chinese.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.7]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.0-beta.2&quot;&gt;v1.0.0-beta.2&lt;/a&gt; — Runtime cache invalidation for hot settings reload, MinerU nested output support, mimic WebSocket fix, Python 3.11+ minimum, and CI improvements.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.4]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.0-beta.1&quot;&gt;v1.0.0-beta.1&lt;/a&gt; — Agent-native architecture rewrite (~200k lines) with two-layer plugin model (Tools + Capabilities), CLI &amp;amp; SDK entry points, TutorBot multi-channel bot agent, Co-Writer, Guided Learning, and persistent memory.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Past releases&lt;/b&gt;&lt;/summary&gt; 
 &lt;blockquote&gt; 
  &lt;p&gt;&lt;strong&gt;[2026.1.23]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v0.6.0&quot;&gt;v0.6.0&lt;/a&gt; — Session persistence, incremental document upload, flexible RAG pipeline import, and full Chinese localization.&lt;/p&gt; 
 &lt;/blockquote&gt; 
 &lt;blockquote&gt; 
  &lt;p&gt;&lt;strong&gt;[2026.1.18]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v0.5.2&quot;&gt;v0.5.2&lt;/a&gt; — Docling support for RAG-Anything, logging system optimization, and bug fixes.&lt;/p&gt; 
 &lt;/blockquote&gt; 
 &lt;blockquote&gt; 
  &lt;p&gt;&lt;strong&gt;[2026.1.15]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v0.5.0&quot;&gt;v0.5.0&lt;/a&gt; — Unified service configuration, RAG pipeline selection per knowledge base, question generation overhaul, and sidebar customization.&lt;/p&gt; 
 &lt;/blockquote&gt; 
 &lt;blockquote&gt; 
  &lt;p&gt;&lt;strong&gt;[2026.1.9]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v0.4.0&quot;&gt;v0.4.0&lt;/a&gt; — Multi-provider LLM &amp;amp; embedding support, new home page, RAG module decoupling, and environment variable refactor.&lt;/p&gt; 
 &lt;/blockquote&gt; 
 &lt;blockquote&gt; 
  &lt;p&gt;&lt;strong&gt;[2026.1.5]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v0.3.0&quot;&gt;v0.3.0&lt;/a&gt; — Unified PromptManager architecture, GitHub Actions CI/CD, and pre-built Docker images on GHCR.&lt;/p&gt; 
 &lt;/blockquote&gt; 
 &lt;blockquote&gt; 
  &lt;p&gt;&lt;strong&gt;[2026.1.2]&lt;/strong&gt; &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/releases/tag/v0.2.0&quot;&gt;v0.2.0&lt;/a&gt; — Docker deployment, Next.js 16 &amp;amp; React 19 upgrade, WebSocket security hardening, and critical vulnerability fixes.&lt;/p&gt; 
 &lt;/blockquote&gt; 
&lt;/details&gt; 
&lt;h3&gt;📰 News&lt;/h3&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.4.4]&lt;/strong&gt; Long time no see! ✨ DeepTutor v1.0.0 is finally here — an agent-native evolution featuring a ground-up architecture rewrite, TutorBot, and flexible mode switching under the Apache-2.0 license. A new chapter begins, and our story continues!&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.2.6]&lt;/strong&gt; 🚀 We&#39;ve reached 10k stars in just 39 days! A huge thank you to our incredible community for the support!&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2026.1.1]&lt;/strong&gt; Happy New Year! Join our &lt;a href=&quot;https://discord.gg/eRsjPgMU4t&quot;&gt;Discord&lt;/a&gt;, &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/issues/78&quot;&gt;WeChat&lt;/a&gt;, or &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/discussions&quot;&gt;Discussions&lt;/a&gt; — let&#39;s shape the future of DeepTutor together!&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;&lt;strong&gt;[2025.12.29]&lt;/strong&gt; DeepTutor is officially released!&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;h2&gt;✨ Key Features&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Unified Chat Workspace&lt;/strong&gt; — Five modes, one thread. Chat, Deep Solve, Quiz Generation, Deep Research, and Math Animator share the same context — start a conversation, escalate to multi-agent problem solving, generate quizzes, then deep-dive into research, all without losing a single message.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Personal TutorBots&lt;/strong&gt; — Not chatbots — autonomous tutors. Each TutorBot lives in its own workspace with its own memory, personality, and skill set. They set reminders, learn new abilities, and evolve as you grow. Powered by &lt;a href=&quot;https://github.com/HKUDS/nanobot&quot;&gt;nanobot&lt;/a&gt;.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;AI Co-Writer&lt;/strong&gt; — A Markdown editor where AI is a first-class collaborator. Select text, rewrite, expand, or summarize — drawing from your knowledge base and the web. Every piece feeds back into your learning ecosystem.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Guided Learning&lt;/strong&gt; — Turn your materials into structured, visual learning journeys. DeepTutor designs multi-step plans, generates interactive pages for each knowledge point, and lets you discuss alongside each step.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Knowledge Hub&lt;/strong&gt; — Upload PDFs, Markdown, and text files to build RAG-ready knowledge bases. Organize insights across sessions in color-coded notebooks. Your documents don&#39;t just sit there — they actively power every conversation.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Persistent Memory&lt;/strong&gt; — DeepTutor builds a living profile of you: what you&#39;ve studied, how you learn, and where you&#39;re heading. Shared across all features and TutorBots, it gets sharper with every interaction.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Agent-Native CLI&lt;/strong&gt; — Every capability, knowledge base, session, and TutorBot is one command away. Rich terminal output for humans, structured JSON for AI agents and pipelines. Hand DeepTutor a &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/SKILL.md&quot;&gt;&lt;code&gt;SKILL.md&lt;/code&gt;&lt;/a&gt; and your agents can operate it autonomously.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;hr /&gt; 
&lt;h2&gt;🚀 Get Started&lt;/h2&gt; 
&lt;h3&gt;Option A — Setup Tour (Recommended)&lt;/h3&gt; 
&lt;p&gt;A &lt;strong&gt;single interactive script&lt;/strong&gt; that walks you through everything: dependency installation, environment configuration, live connection testing, and launch. No manual &lt;code&gt;.env&lt;/code&gt; editing needed.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/HKUDS/DeepTutor.git
cd DeepTutor

# Create a Python environment
conda create -n deeptutor python=3.11 &amp;amp;&amp;amp; conda activate deeptutor
# Or: python -m venv .venv &amp;amp;&amp;amp; source .venv/bin/activate

# Launch the guided tour
python scripts/start_tour.py
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;The tour asks how you&#39;d like to use DeepTutor:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Web mode&lt;/strong&gt; (recommended) — Picks a dependency profile, installs everything (pip + npm), then spins up a temporary server and opens the &lt;strong&gt;Settings&lt;/strong&gt; page in your browser. A four-step guided tour walks you through LLM, Embedding, and Search provider setup with live connection testing. Once complete, DeepTutor restarts automatically with your configuration.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;CLI mode&lt;/strong&gt; — A fully interactive terminal flow: choose a dependency profile, install dependencies, configure providers, verify connections, and apply — all without leaving the shell.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Either way, you end up with a running DeepTutor at &lt;a href=&quot;http://localhost:3782&quot;&gt;http://localhost:3782&lt;/a&gt;.&lt;/p&gt; 
&lt;h3&gt;Option B — Manual Local Install&lt;/h3&gt; 
&lt;p&gt;If you prefer full control, install and configure everything yourself.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;1. Install dependencies&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/HKUDS/DeepTutor.git
cd DeepTutor

conda create -n deeptutor python=3.11 &amp;amp;&amp;amp; conda activate deeptutor
pip install -e &quot;.[server]&quot;

# Frontend
cd web &amp;amp;&amp;amp; npm install &amp;amp;&amp;amp; cd ..
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;2. Configure environment&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cp .env.example .env
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Edit &lt;code&gt;.env&lt;/code&gt; and fill in at least the required fields:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-dotenv&quot;&gt;# LLM (Required)
LLM_BINDING=openai
LLM_MODEL=gpt-4o-mini
LLM_API_KEY=sk-xxx
LLM_HOST=https://api.openai.com/v1

# Embedding (Required for Knowledge Base)
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-3-large
EMBEDDING_API_KEY=sk-xxx
EMBEDDING_HOST=https://api.openai.com/v1
EMBEDDING_DIMENSION=3072
&lt;/code&gt;&lt;/pre&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Supported LLM Providers&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Provider&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Binding&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Default Base URL&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;AiHubMix&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;aihubmix&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://aihubmix.com/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Anthropic&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;anthropic&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.anthropic.com/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Azure OpenAI&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;azure_openai&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;—&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;BytePlus&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;byteplus&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://ark.ap-southeast.bytepluses.com/api/v3&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;BytePlus Coding Plan&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;byteplus_coding_plan&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://ark.ap-southeast.bytepluses.com/api/coding/v3&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Custom (OpenAI-compat)&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;custom&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;—&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;DashScope (Qwen)&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;dashscope&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://dashscope.aliyuncs.com/compatible-mode/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;DeepSeek&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deepseek&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.deepseek.com&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Gemini&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;gemini&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://generativelanguage.googleapis.com/v1beta/openai/&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;GitHub Copilot&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;github_copilot&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.githubcopilot.com&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Groq&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;groq&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.groq.com/openai/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;llama.cpp&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;llama_cpp&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;http://localhost:8080/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;LM Studio&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;lm_studio&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;http://localhost:1234/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;MiniMax&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;minimax&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.minimax.io/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Mistral&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;mistral&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.mistral.ai/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Moonshot (Kimi)&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;moonshot&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.moonshot.ai/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Ollama&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;ollama&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;http://localhost:11434/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;OpenAI&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;openai&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.openai.com/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;OpenAI Codex&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;openai_codex&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://chatgpt.com/backend-api&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;OpenRouter&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;openrouter&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://openrouter.ai/api/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;OpenVINO Model Server&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;ovms&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;http://localhost:8000/v3&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Qianfan (Ernie)&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;qianfan&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://qianfan.baidubce.com/v2&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;SiliconFlow&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;siliconflow&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.siliconflow.cn/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Step Fun&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;stepfun&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.stepfun.com/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;vLLM&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;vllm&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;http://localhost:8000/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;VolcEngine&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;volcengine&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://ark.cn-beijing.volces.com/api/v3&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;VolcEngine Coding Plan&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;volcengine_coding_plan&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://ark.cn-beijing.volces.com/api/coding/v3&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Xiaomi MIMO&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;xiaomi_mimo&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://api.xiaomimimo.com/v1&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Zhipu AI (GLM)&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;zhipu&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;https://open.bigmodel.cn/api/paas/v4&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
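 &lt;p&gt;For example, switching the backend to DeepSeek changes only the binding, model, key, and host in &lt;code&gt;.env&lt;/code&gt; (the model name below is illustrative; check your provider&#39;s docs for the exact value):&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-dotenv&quot;&gt;# Example: DeepSeek as the LLM backend (model name is illustrative)
LLM_BINDING=deepseek
LLM_MODEL=deepseek-chat
LLM_API_KEY=sk-xxx
LLM_HOST=https://api.deepseek.com
&lt;/code&gt;&lt;/pre&gt; 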
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Supported Embedding Providers&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Provider&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Binding&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Model Example&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Default Dim&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;OpenAI&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;openai&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;text-embedding-3-large&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;3072&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Azure OpenAI&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;azure_openai&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;deployment name&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;—&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Cohere&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;cohere&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;embed-v4.0&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;1024&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Jina&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;jina&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;jina-embeddings-v3&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;1024&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Ollama&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;ollama&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;nomic-embed-text&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;768&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;vLLM / LM Studio&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;vllm&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Any embedding model&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;—&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Any OpenAI-compatible&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;custom&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;—&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;—&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
 &lt;p&gt;OpenAI-compatible providers (DashScope, SiliconFlow, etc.) work via the &lt;code&gt;custom&lt;/code&gt; or &lt;code&gt;openai&lt;/code&gt; binding.&lt;/p&gt; 
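 &lt;p&gt;A minimal sketch of that route, using the &lt;code&gt;EMBEDDING_*&lt;/code&gt; variables documented in the environment reference further down; the endpoint, model, key, and dimension below are placeholders for whatever your OpenAI-compatible provider documents:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Hypothetical example: point the openai binding at an OpenAI-compatible endpoint
EMBEDDING_BINDING=openai
EMBEDDING_HOST=https://your-provider.example.com/v1   # provider&#39;s OpenAI-compatible base URL
EMBEDDING_MODEL=your-embedding-model
EMBEDDING_API_KEY=sk-your-key
EMBEDDING_DIMENSION=1024   # must match the model&#39;s output dimension
&lt;/code&gt;&lt;/pre&gt; 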
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Supported Web Search Providers&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Provider&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Env Key&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Notes&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Brave&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;BRAVE_API_KEY&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Recommended, free tier available&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Tavily&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;TAVILY_API_KEY&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Jina&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;JINA_API_KEY&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;SearXNG&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;—&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Self-hosted, no API key needed&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;DuckDuckGo&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;—&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;No API key needed&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Perplexity&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;PERPLEXITY_API_KEY&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Requires API key&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
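 &lt;p&gt;Provider selection is wired through configuration; the environment variables reference further down documents generic &lt;code&gt;SEARCH_PROVIDER&lt;/code&gt; and &lt;code&gt;SEARCH_API_KEY&lt;/code&gt; settings. A sketch assuming those apply, with Tavily as the provider and an illustrative key:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# In .env (illustrative values only)
SEARCH_PROVIDER=tavily
SEARCH_API_KEY=your-tavily-key
&lt;/code&gt;&lt;/pre&gt; 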
&lt;/details&gt; 
&lt;p&gt;&lt;strong&gt;3. Start services&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Backend (FastAPI)
python -m deeptutor.api.run_server

# Frontend (Next.js) — in a separate terminal
cd web &amp;amp;&amp;amp; npm run dev -- -p 3782
&lt;/code&gt;&lt;/pre&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;Service&lt;/th&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;Default Port&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Backend&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;8001&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Frontend&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;&lt;code&gt;3782&lt;/code&gt;&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;Open &lt;a href=&quot;http://localhost:3782&quot;&gt;http://localhost:3782&lt;/a&gt; and you&#39;re ready to go.&lt;/p&gt; 
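&lt;p&gt;If the page does not load, a quick reachability check against the backend&#39;s default port can narrow things down (no specific API route is assumed; any HTTP status means the server is listening):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Confirm the FastAPI backend answers on its default port
curl -sS -o /dev/null -w &quot;HTTP %{http_code}\n&quot; http://localhost:8001/
&lt;/code&gt;&lt;/pre&gt; 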
&lt;h3&gt;Option C — Docker Deployment&lt;/h3&gt; 
&lt;p&gt;Docker wraps the backend and frontend into a single container — no local Python or Node.js required. There are two ways to get the image:&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;1. Configure environment variables&lt;/strong&gt; (required for both options)&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/HKUDS/DeepTutor.git
cd DeepTutor
cp .env.example .env
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;Edit &lt;code&gt;.env&lt;/code&gt; and fill in at least the required fields (same as &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/#option-b--manual-local-install&quot;&gt;Option B&lt;/a&gt; above).&lt;/p&gt; 
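&lt;p&gt;As a rough sketch of those required fields (variable names taken from the environment variables reference below; every value is illustrative and should be replaced with your own):&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Minimal .env sketch: replace all values with your own
LLM_BINDING=openai
LLM_MODEL=gpt-4o
LLM_API_KEY=sk-your-key
LLM_HOST=https://api.openai.com/v1
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-3-large
EMBEDDING_API_KEY=sk-your-key
EMBEDDING_HOST=https://api.openai.com/v1
EMBEDDING_DIMENSION=3072
&lt;/code&gt;&lt;/pre&gt; 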
&lt;p&gt;&lt;strong&gt;2a. Pull official image (recommended)&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Official images are published to &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/pkgs/container/deeptutor&quot;&gt;GitHub Container Registry&lt;/a&gt; on every release, built for &lt;code&gt;linux/amd64&lt;/code&gt; and &lt;code&gt;linux/arm64&lt;/code&gt;.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose -f docker-compose.ghcr.yml up -d
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;To pin a specific version, edit the image tag in &lt;code&gt;docker-compose.ghcr.yml&lt;/code&gt;:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;image: ghcr.io/hkuds/deeptutor:1.0.0  # or :latest
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;2b. Build from source&lt;/strong&gt;&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose up -d
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;This builds the image locally from &lt;code&gt;Dockerfile&lt;/code&gt; and starts the container.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;3. Verify &amp;amp; manage&lt;/strong&gt;&lt;/p&gt; 
&lt;p&gt;Open &lt;a href=&quot;http://localhost:3782&quot;&gt;http://localhost:3782&lt;/a&gt; once the container is healthy.&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose logs -f   # tail logs
docker compose down       # stop and remove container
&lt;/code&gt;&lt;/pre&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Cloud / remote server deployment&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;When deploying to a remote server, the browser needs to know the public URL of the backend API. Add one more variable to your &lt;code&gt;.env&lt;/code&gt;:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-dotenv&quot;&gt;# Set to the public URL where the backend is reachable
NEXT_PUBLIC_API_BASE_EXTERNAL=https://your-server.com:8001
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;The frontend startup script applies this value at runtime — no rebuild needed.&lt;/p&gt; 
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Development mode (hot-reload)&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;Layer the dev override to mount source code and enable hot-reload for both services:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose -f docker-compose.yml -f docker-compose.dev.yml up
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;Changes to &lt;code&gt;deeptutor/&lt;/code&gt;, &lt;code&gt;deeptutor_cli/&lt;/code&gt;, &lt;code&gt;scripts/&lt;/code&gt;, and &lt;code&gt;web/&lt;/code&gt; are reflected immediately.&lt;/p&gt; 
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Custom ports&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;Override the default ports in &lt;code&gt;.env&lt;/code&gt;:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-dotenv&quot;&gt;BACKEND_PORT=9001
FRONTEND_PORT=4000
&lt;/code&gt;&lt;/pre&gt; 
 &lt;p&gt;Then restart:&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose up -d     # or docker compose -f docker-compose.ghcr.yml up -d
&lt;/code&gt;&lt;/pre&gt; 
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Data persistence&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;User data and knowledge bases are persisted via Docker volumes mapped to local directories:&lt;/p&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Container path&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Host path&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Content&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;/app/data/user&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;./data/user&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Settings, memory, workspace, sessions, logs&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;/app/data/knowledge_bases&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;./data/knowledge_bases&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Uploaded documents &amp;amp; vector indices&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
 &lt;p&gt;These directories survive &lt;code&gt;docker compose down&lt;/code&gt; and are reused on the next &lt;code&gt;docker compose up&lt;/code&gt;.&lt;/p&gt; 
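 &lt;p&gt;Before an upgrade, an ordinary archive of the two host directories is enough to snapshot that state (a sketch; run it from the repository root, where &lt;code&gt;./data&lt;/code&gt; lives):&lt;/p&gt; 
 &lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Stop the stack, archive user data and knowledge bases, then restart
docker compose down
tar czf deeptutor-data-backup.tar.gz ./data/user ./data/knowledge_bases
docker compose up -d
&lt;/code&gt;&lt;/pre&gt; 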
&lt;/details&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Environment variables reference&lt;/b&gt;&lt;/summary&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Variable&lt;/th&gt; 
    &lt;th style=&quot;text-align:center&quot;&gt;Required&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;LLM_BINDING&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;LLM provider (&lt;code&gt;openai&lt;/code&gt;, &lt;code&gt;anthropic&lt;/code&gt;, etc.)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;LLM_MODEL&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Model name (e.g. &lt;code&gt;gpt-4o&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;LLM_API_KEY&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Your LLM API key&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;LLM_HOST&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;API endpoint URL&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;EMBEDDING_BINDING&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Embedding provider&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;EMBEDDING_MODEL&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Embedding model name&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;EMBEDDING_API_KEY&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Embedding API key&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;EMBEDDING_HOST&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Embedding endpoint&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;EMBEDDING_DIMENSION&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Vector dimension&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;SEARCH_PROVIDER&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;No&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Search provider (&lt;code&gt;tavily&lt;/code&gt;, &lt;code&gt;jina&lt;/code&gt;, &lt;code&gt;serper&lt;/code&gt;, &lt;code&gt;perplexity&lt;/code&gt;, etc.)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;SEARCH_API_KEY&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;No&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Search API key&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;BACKEND_PORT&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;No&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Backend port (default &lt;code&gt;8001&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;FRONTEND_PORT&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;No&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Frontend port (default &lt;code&gt;3782&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;NEXT_PUBLIC_API_BASE_EXTERNAL&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;No&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Public backend URL for cloud deployment&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;DISABLE_SSL_VERIFY&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:center&quot;&gt;No&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Disable SSL verification (default &lt;code&gt;false&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/details&gt; 
&lt;h3&gt;Option D — CLI Only&lt;/h3&gt; 
&lt;p&gt;If you just want the CLI without the web frontend:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pip install -e &quot;.[cli]&quot;
deeptutor chat                                   # Interactive REPL
deeptutor run chat &quot;Explain Fourier transform&quot;   # One-shot capability
deeptutor run deep_solve &quot;Solve x^2 = 4&quot;         # Multi-agent problem solving
deeptutor kb create my-kb --doc textbook.pdf     # Build a knowledge base
&lt;/code&gt;&lt;/pre&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;See &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/#%EF%B8%8F-deeptutor-cli--agent-native-interface&quot;&gt;DeepTutor CLI&lt;/a&gt; for the full feature guide and command reference.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;hr /&gt; 
&lt;h2&gt;📖 Explore DeepTutor&lt;/h2&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/figs/deeptutor-architecture.png&quot; alt=&quot;DeepTutor Architecture&quot; width=&quot;800&quot; /&gt; 
&lt;/div&gt; 
&lt;h3&gt;💬 Chat — Unified Intelligent Workspace&lt;/h3&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/figs/dt-chat.png&quot; alt=&quot;Chat Workspace&quot; width=&quot;800&quot; /&gt; 
&lt;/div&gt; 
&lt;p&gt;Five distinct modes coexist in a single workspace, bound by a &lt;strong&gt;unified context management system&lt;/strong&gt;. Conversation history, knowledge bases, and references persist across modes — switch between them freely within the same topic, whenever the moment calls for it.&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th style=&quot;text-align:left&quot;&gt;Mode&lt;/th&gt; 
   &lt;th style=&quot;text-align:left&quot;&gt;What It Does&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Chat&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;Fluid, tool-augmented conversation. Choose from RAG retrieval, web search, code execution, deep reasoning, brainstorming, and paper search — mix and match as needed.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Deep Solve&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;Multi-agent problem solving: plan, investigate, solve, and verify — with precise source citations at every step.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Quiz Generation&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;Generate assessments grounded in your knowledge base, with built-in validation.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Deep Research&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;Decompose a topic into subtopics, dispatch parallel research agents across RAG, web, and academic papers, and produce a fully cited report.&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Math Animator&lt;/strong&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;Turn mathematical concepts into visual animations and storyboards powered by Manim.&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;Tools are &lt;strong&gt;decoupled from workflows&lt;/strong&gt; — in every mode, you decide which tools to enable, how many to use, or whether to use any at all. The workflow orchestrates the reasoning; the tools are yours to compose.&lt;/p&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;Start with a quick chat question, escalate to Deep Solve when it gets hard, generate quiz questions to test yourself, then launch a Deep Research to go deeper — all in one continuous thread.&lt;/p&gt; 
&lt;/blockquote&gt; 
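&lt;p&gt;The same escalation maps onto the CLI commands documented later in this README; a sketch in which the topic strings and the &lt;code&gt;textbook&lt;/code&gt; knowledge base name are illustrative:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;deeptutor run chat &quot;What is a Fourier series?&quot; -t rag --kb textbook
deeptutor run deep_solve &quot;Derive the Fourier series of a square wave&quot; -t reason
deeptutor run deep_question &quot;Fourier series&quot; --config num_questions=5
deeptutor run deep_research &quot;Applications of Fourier analysis in signal processing&quot;
&lt;/code&gt;&lt;/pre&gt; 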
&lt;h3&gt;✍️ Co-Writer — AI Inside Your Editor&lt;/h3&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/figs/dt-cowriter.png&quot; alt=&quot;Co-Writer&quot; width=&quot;800&quot; /&gt; 
&lt;/div&gt; 
&lt;p&gt;Co-Writer brings the intelligence of Chat directly into a writing surface. It is a full-featured Markdown editor where AI is a first-class collaborator — not a sidebar, not an afterthought.&lt;/p&gt; 
&lt;p&gt;Select any text and choose &lt;strong&gt;Rewrite&lt;/strong&gt;, &lt;strong&gt;Expand&lt;/strong&gt;, or &lt;strong&gt;Shorten&lt;/strong&gt; — optionally drawing context from your knowledge base or the web. The editing flow is non-destructive with full undo/redo, and every piece you write can be saved straight to your notebooks, feeding back into your learning ecosystem.&lt;/p&gt; 
&lt;h3&gt;🎓 Guided Learning — Visual, Step-by-Step Mastery&lt;/h3&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/figs/dt-guide.png&quot; alt=&quot;Guided Learning&quot; width=&quot;800&quot; /&gt; 
&lt;/div&gt; 
&lt;p&gt;Guided Learning turns your personal materials into structured, multi-step learning journeys. Provide a topic, optionally link notebook records, and DeepTutor will:&lt;/p&gt; 
&lt;ol&gt; 
 &lt;li&gt;&lt;strong&gt;Design a learning plan&lt;/strong&gt; — Identify 3–5 progressive knowledge points from your materials.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Generate interactive pages&lt;/strong&gt; — Each point becomes a rich visual HTML page with explanations, diagrams, and examples.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Enable contextual Q&amp;amp;A&lt;/strong&gt; — Chat alongside each step for deeper exploration.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Summarize your progress&lt;/strong&gt; — Upon completion, receive a learning summary of everything you&#39;ve covered.&lt;/li&gt; 
&lt;/ol&gt; 
&lt;p&gt;Sessions are persistent — pause, resume, or revisit any step at any time.&lt;/p&gt; 
&lt;h3&gt;📚 Knowledge Management — Your Learning Infrastructure&lt;/h3&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/figs/dt-knowledge.png&quot; alt=&quot;Knowledge Management&quot; width=&quot;800&quot; /&gt; 
&lt;/div&gt; 
&lt;p&gt;Knowledge is where you build and manage the document collections that power everything else in DeepTutor.&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Knowledge Bases&lt;/strong&gt; — Upload PDF, TXT, or Markdown files to create searchable, RAG-ready collections. Add documents incrementally as your library grows.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Notebooks&lt;/strong&gt; — Organize learning records across sessions. Save insights from Chat, Guided Learning, Co-Writer, or Deep Research into categorized, color-coded notebooks.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Your knowledge base is not passive storage — it actively participates in every conversation, every research session, and every learning path you create.&lt;/p&gt; 
&lt;h3&gt;🧠 Memory — DeepTutor Learns As You Learn&lt;/h3&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/figs/dt-memory.png&quot; alt=&quot;Memory&quot; width=&quot;800&quot; /&gt; 
&lt;/div&gt; 
&lt;p&gt;DeepTutor maintains a persistent, evolving understanding of you through two complementary dimensions:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Summary&lt;/strong&gt; — A running digest of your learning progress: what you&#39;ve studied, which topics you&#39;ve explored, and how your understanding has developed.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Profile&lt;/strong&gt; — Your learner identity: preferences, knowledge level, goals, and communication style — automatically refined through every interaction.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Memory is shared across all features and all your TutorBots. The more you use DeepTutor, the more personalized and effective it becomes.&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h3&gt;🦞 TutorBot — Persistent, Autonomous AI Tutors&lt;/h3&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/figs/tutorbot-architecture.png&quot; alt=&quot;TutorBot Architecture&quot; width=&quot;800&quot; /&gt; 
&lt;/div&gt; 
&lt;p&gt;TutorBot is not a chatbot — it is a &lt;strong&gt;persistent, multi-instance agent&lt;/strong&gt; built on &lt;a href=&quot;https://github.com/HKUDS/nanobot&quot;&gt;nanobot&lt;/a&gt;. Each TutorBot runs its own agent loop with independent workspace, memory, and personality. Create a Socratic math tutor, a patient writing coach, and a rigorous research advisor — all running simultaneously, each evolving with you.&lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/figs/tb.png&quot; alt=&quot;TutorBot&quot; width=&quot;800&quot; /&gt; 
&lt;/div&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;Soul Templates&lt;/strong&gt; — Define your tutor&#39;s personality, tone, and teaching philosophy through editable Soul files. Choose from built-in archetypes (Socratic, encouraging, rigorous) or craft your own — the soul shapes every response.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Independent Workspace&lt;/strong&gt; — Each bot has its own directory with separate memory, sessions, skills, and configuration — fully isolated yet able to access DeepTutor&#39;s shared knowledge layer.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Proactive Heartbeat&lt;/strong&gt; — Bots don&#39;t just respond — they initiate. The built-in Heartbeat system enables recurring study check-ins, review reminders, and scheduled tasks. Your tutor shows up even when you don&#39;t.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Full Tool Access&lt;/strong&gt; — Every bot reaches into DeepTutor&#39;s complete toolkit: RAG retrieval, code execution, web search, academic paper search, deep reasoning, and brainstorming.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Skill Learning&lt;/strong&gt; — Teach your bot new abilities by adding skill files to its workspace. As your needs evolve, so does your tutor&#39;s capability.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Multi-Channel Presence&lt;/strong&gt; — Connect bots to Telegram, Discord, Slack, Feishu, WeChat Work, DingTalk, Email, and more. Your tutor meets you wherever you are.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;Team &amp;amp; Sub-Agents&lt;/strong&gt; — Spawn background sub-agents or orchestrate multi-agent teams within a single bot for complex, long-running tasks.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;deeptutor bot create math-tutor --persona &quot;Socratic math teacher who uses probing questions&quot;
deeptutor bot create writing-coach --persona &quot;Patient, detail-oriented writing mentor&quot;
deeptutor bot list                  # See all your active tutors
&lt;/code&gt;&lt;/pre&gt; 
&lt;hr /&gt; 
&lt;h3&gt;⌨️ DeepTutor CLI — Agent-Native Interface&lt;/h3&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;img src=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/assets/figs/cli-architecture.png&quot; alt=&quot;DeepTutor CLI Architecture&quot; width=&quot;800&quot; /&gt; 
&lt;/div&gt; 
&lt;p&gt;DeepTutor is fully CLI-native. Every capability, knowledge base, session, memory, and TutorBot is one command away — no browser required. The CLI serves both humans (with rich terminal rendering) and AI agents (with structured JSON output).&lt;/p&gt; 
&lt;p&gt;Hand the &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/SKILL.md&quot;&gt;&lt;code&gt;SKILL.md&lt;/code&gt;&lt;/a&gt; at the project root to any tool-using agent (&lt;a href=&quot;https://github.com/HKUDS/nanobot&quot;&gt;nanobot&lt;/a&gt;, or any LLM with tool access), and it can configure and operate DeepTutor autonomously.&lt;/p&gt; 
&lt;p&gt;&lt;strong&gt;One-shot execution&lt;/strong&gt; — Run any capability directly from the terminal:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;deeptutor run chat &quot;Explain the Fourier transform&quot; -t rag --kb textbook
deeptutor run deep_solve &quot;Prove that √2 is irrational&quot; -t reason
deeptutor run deep_question &quot;Linear algebra&quot; --config num_questions=5
deeptutor run deep_research &quot;Attention mechanisms in transformers&quot;
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Interactive REPL&lt;/strong&gt; — A persistent chat session with live mode switching:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;deeptutor chat --capability deep_solve --kb my-kb
# Inside the REPL: /cap, /tool, /kb, /history, /notebook, /config to switch on the fly
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Knowledge base lifecycle&lt;/strong&gt; — Build, query, and manage RAG-ready collections entirely from the terminal:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;deeptutor kb create my-kb --doc textbook.pdf       # Create from document
deeptutor kb add my-kb --docs-dir ./papers/         # Add a folder of papers
deeptutor kb search my-kb &quot;gradient descent&quot;        # Search directly
deeptutor kb set-default my-kb                      # Set as default for all commands
&lt;/code&gt;&lt;/pre&gt; 
&lt;p&gt;&lt;strong&gt;Dual output mode&lt;/strong&gt; — Rich rendering for humans, structured JSON for pipelines:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;deeptutor run chat &quot;Summarize chapter 3&quot; -f rich    # Colored, formatted output
deeptutor run chat &quot;Summarize chapter 3&quot; -f json    # Line-delimited JSON events
&lt;/code&gt;&lt;/pre&gt; 
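&lt;p&gt;Because JSON mode emits one event per line, standard line-oriented tools can sit downstream; a sketch that assumes &lt;code&gt;jq&lt;/code&gt; is installed and makes no assumption about the event schema:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Compact-print each line-delimited JSON event as it streams
deeptutor run chat &quot;Summarize chapter 3&quot; -f json | jq -c .
&lt;/code&gt;&lt;/pre&gt; 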
&lt;p&gt;&lt;strong&gt;Session continuity&lt;/strong&gt; — Resume any conversation right where you left off:&lt;/p&gt; 
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;deeptutor session list                              # List all sessions
deeptutor session open &amp;lt;id&amp;gt;                         # Resume in REPL
&lt;/code&gt;&lt;/pre&gt; 
&lt;details&gt; 
 &lt;summary&gt;&lt;b&gt;Full CLI command reference&lt;/b&gt;&lt;/summary&gt; 
 &lt;p&gt;&lt;strong&gt;Top-level&lt;/strong&gt;&lt;/p&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Command&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor run &amp;lt;capability&amp;gt; &amp;lt;message&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Run any capability in a single turn (&lt;code&gt;chat&lt;/code&gt;, &lt;code&gt;deep_solve&lt;/code&gt;, &lt;code&gt;deep_question&lt;/code&gt;, &lt;code&gt;deep_research&lt;/code&gt;, &lt;code&gt;math_animator&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor chat&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Interactive REPL with optional &lt;code&gt;--capability&lt;/code&gt;, &lt;code&gt;--tool&lt;/code&gt;, &lt;code&gt;--kb&lt;/code&gt;, &lt;code&gt;--language&lt;/code&gt;&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor serve&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Start the DeepTutor API server&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
 &lt;p&gt;&lt;strong&gt;&lt;code&gt;deeptutor bot&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Command&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor bot list&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;List all TutorBot instances&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor bot create &amp;lt;id&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Create and start a new bot (&lt;code&gt;--name&lt;/code&gt;, &lt;code&gt;--persona&lt;/code&gt;, &lt;code&gt;--model&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor bot start &amp;lt;id&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Start a bot&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor bot stop &amp;lt;id&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Stop a bot&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
 &lt;p&gt;&lt;strong&gt;&lt;code&gt;deeptutor kb&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Command&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor kb list&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;List all knowledge bases&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor kb info &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Show knowledge base details&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor kb create &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Create from documents (&lt;code&gt;--doc&lt;/code&gt;, &lt;code&gt;--docs-dir&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor kb add &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Add documents incrementally&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor kb search &amp;lt;name&amp;gt; &amp;lt;query&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Search a knowledge base&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor kb set-default &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Set as default KB&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor kb delete &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Delete a knowledge base (&lt;code&gt;--force&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
 &lt;p&gt;&lt;strong&gt;&lt;code&gt;deeptutor memory&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Command&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor memory show [file]&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;View memory (&lt;code&gt;summary&lt;/code&gt;, &lt;code&gt;profile&lt;/code&gt;, or &lt;code&gt;all&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor memory clear [file]&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Clear memory (&lt;code&gt;--force&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
 &lt;p&gt;&lt;strong&gt;&lt;code&gt;deeptutor session&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Command&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor session list&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;List sessions (&lt;code&gt;--limit&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor session show &amp;lt;id&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;View session messages&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor session open &amp;lt;id&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Resume session in REPL&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor session rename &amp;lt;id&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Rename a session (&lt;code&gt;--title&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor session delete &amp;lt;id&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Delete a session&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
 &lt;p&gt;&lt;strong&gt;&lt;code&gt;deeptutor notebook&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Command&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor notebook list&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;List notebooks&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor notebook create &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Create a notebook (&lt;code&gt;--description&lt;/code&gt;)&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor notebook show &amp;lt;id&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;View notebook records&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor notebook add-md &amp;lt;id&amp;gt; &amp;lt;path&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Import markdown as record&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor notebook replace-md &amp;lt;id&amp;gt; &amp;lt;rec&amp;gt; &amp;lt;path&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Replace a markdown record&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor notebook remove-record &amp;lt;id&amp;gt; &amp;lt;rec&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Remove a record&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
 &lt;p&gt;&lt;strong&gt;&lt;code&gt;deeptutor config&lt;/code&gt; / &lt;code&gt;plugin&lt;/code&gt; / &lt;code&gt;provider&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt; 
 &lt;table&gt; 
  &lt;thead&gt; 
   &lt;tr&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Command&lt;/th&gt; 
    &lt;th style=&quot;text-align:left&quot;&gt;Description&lt;/th&gt; 
   &lt;/tr&gt; 
  &lt;/thead&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor config show&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Print current configuration summary&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor plugin list&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;List registered tools and capabilities&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor plugin info &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Show tool or capability details&lt;/td&gt; 
   &lt;/tr&gt; 
   &lt;tr&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;code&gt;deeptutor provider login &amp;lt;provider&amp;gt;&lt;/code&gt;&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;Provider auth (&lt;code&gt;openai-codex&lt;/code&gt; OAuth login; &lt;code&gt;github-copilot&lt;/code&gt; validates an existing Copilot auth session)&lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/details&gt; 
&lt;h2&gt;🗺️ Roadmap&lt;/h2&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;Status&lt;/th&gt; 
   &lt;th style=&quot;text-align:left&quot;&gt;Milestone&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;🎯&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Authentication &amp;amp; Login&lt;/strong&gt; — Optional login page for public deployments with multi-user support&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;🎯&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Themes &amp;amp; Appearance&lt;/strong&gt; — Diverse theme options and customizable UI appearance&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;🎯&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Interaction Improvement&lt;/strong&gt; — Optimize icon design and interaction details&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;🔜&lt;/td&gt; 
    &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Better Memories&lt;/strong&gt; — Integrate improved memory management&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;🔜&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;LightRAG Integration&lt;/strong&gt; — Integrate &lt;a href=&quot;https://github.com/HKUDS/LightRAG&quot;&gt;LightRAG&lt;/a&gt; as an advanced knowledge base engine&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;🔜&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;strong&gt;Documentation Site&lt;/strong&gt; — Comprehensive docs page with guides, API reference, and tutorials&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;If you find DeepTutor useful, &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/stargazers&quot;&gt;give us a star&lt;/a&gt; — it helps us keep going!&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;hr /&gt; 
&lt;h2&gt;🌐 Community &amp;amp; Ecosystem&lt;/h2&gt; 
&lt;p&gt;DeepTutor stands on the shoulders of outstanding open-source projects:&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th style=&quot;text-align:left&quot;&gt;Project&lt;/th&gt; 
   &lt;th style=&quot;text-align:left&quot;&gt;Role in DeepTutor&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;a href=&quot;https://github.com/HKUDS/nanobot&quot;&gt;&lt;strong&gt;nanobot&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;Ultra-lightweight agent engine powering TutorBot&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;a href=&quot;https://github.com/run-llama/llama_index&quot;&gt;&lt;strong&gt;LlamaIndex&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;RAG pipeline and document indexing backbone&lt;/td&gt; 
  &lt;/tr&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;&lt;a href=&quot;https://github.com/Wing900/ManimCat&quot;&gt;&lt;strong&gt;ManimCat&lt;/strong&gt;&lt;/a&gt;&lt;/td&gt; 
   &lt;td style=&quot;text-align:left&quot;&gt;AI-driven math animation generation for Math Animator&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;p&gt;&lt;strong&gt;From the HKUDS ecosystem:&lt;/strong&gt;&lt;/p&gt; 
&lt;table&gt; 
 &lt;thead&gt; 
  &lt;tr&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://github.com/HKUDS/LightRAG&quot;&gt;⚡ LightRAG&lt;/a&gt;&lt;/th&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://github.com/HKUDS/AutoAgent&quot;&gt;🤖 AutoAgent&lt;/a&gt;&lt;/th&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://github.com/HKUDS/AI-Researcher&quot;&gt;🔬 AI-Researcher&lt;/a&gt;&lt;/th&gt; 
   &lt;th style=&quot;text-align:center&quot;&gt;&lt;a href=&quot;https://github.com/HKUDS/nanobot&quot;&gt;🧬 nanobot&lt;/a&gt;&lt;/th&gt; 
  &lt;/tr&gt; 
 &lt;/thead&gt; 
 &lt;tbody&gt; 
  &lt;tr&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Simple &amp;amp; Fast RAG&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Zero-Code Agent Framework&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Automated Research&lt;/td&gt; 
   &lt;td style=&quot;text-align:center&quot;&gt;Ultra-Lightweight AI Agent&lt;/td&gt; 
  &lt;/tr&gt; 
 &lt;/tbody&gt; 
&lt;/table&gt; 
&lt;h2&gt;🤝 Contributing&lt;/h2&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;p&gt;We hope DeepTutor becomes a gift for the community. 🎁&lt;/p&gt; 
 &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/graphs/contributors&quot;&gt; &lt;img src=&quot;https://contrib.rocks/image?repo=HKUDS/DeepTutor&amp;amp;max=999&quot; alt=&quot;Contributors&quot; /&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;See &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/CONTRIBUTING.md&quot;&gt;CONTRIBUTING.md&lt;/a&gt; for guidelines on setting up your development environment, code standards, and pull request workflow.&lt;/p&gt; 
&lt;h2&gt;⭐ Star History&lt;/h2&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;a href=&quot;https://www.star-history.com/#HKUDS/DeepTutor&amp;amp;type=timeline&amp;amp;legend=top-left&quot;&gt; 
  &lt;picture&gt; 
   &lt;source media=&quot;(prefers-color-scheme: dark)&quot; srcset=&quot;https://api.star-history.com/svg?repos=HKUDS/DeepTutor&amp;amp;type=timeline&amp;amp;theme=dark&amp;amp;legend=top-left&quot; /&gt; 
   &lt;source media=&quot;(prefers-color-scheme: light)&quot; srcset=&quot;https://api.star-history.com/svg?repos=HKUDS/DeepTutor&amp;amp;type=timeline&amp;amp;legend=top-left&quot; /&gt; 
   &lt;img alt=&quot;Star History Chart&quot; src=&quot;https://api.star-history.com/svg?repos=HKUDS/DeepTutor&amp;amp;type=timeline&amp;amp;legend=top-left&quot; /&gt; 
  &lt;/picture&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;a href=&quot;https://www.star-history.com/hkuds/deeptutor&quot;&gt; 
  &lt;picture&gt; 
   &lt;source media=&quot;(prefers-color-scheme: dark)&quot; srcset=&quot;https://api.star-history.com/badge?repo=HKUDS/DeepTutor&amp;amp;theme=dark&quot; /&gt; 
   &lt;source media=&quot;(prefers-color-scheme: light)&quot; srcset=&quot;https://api.star-history.com/badge?repo=HKUDS/DeepTutor&quot; /&gt; 
   &lt;img alt=&quot;Star History Rank&quot; src=&quot;https://api.star-history.com/badge?repo=HKUDS/DeepTutor&quot; /&gt; 
  &lt;/picture&gt; &lt;/a&gt; &lt;/p&gt; 
&lt;div align=&quot;center&quot;&gt; 
 &lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/HKUDS&quot;&gt;Data Intelligence Lab @ HKU&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt; 
 &lt;p&gt;&lt;a href=&quot;https://github.com/HKUDS/DeepTutor/stargazers&quot;&gt;⭐ Star us&lt;/a&gt; · &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/issues&quot;&gt;🐛 Report a bug&lt;/a&gt; · &lt;a href=&quot;https://github.com/HKUDS/DeepTutor/discussions&quot;&gt;💬 Discussions&lt;/a&gt;&lt;/p&gt; 
 &lt;hr /&gt; 
 &lt;p&gt;Licensed under the &lt;a href=&quot;https://raw.githubusercontent.com/HKUDS/DeepTutor/main/LICENSE&quot;&gt;Apache License 2.0&lt;/a&gt;.&lt;/p&gt; 
 &lt;p&gt; &lt;img src=&quot;https://visitor-badge.laobi.icu/badge?page_id=HKUDS.DeepTutor&amp;amp;style=for-the-badge&amp;amp;color=00d4ff&quot; alt=&quot;Views&quot; /&gt; &lt;/p&gt; 
&lt;/div&gt;</description>
      
      <media:content url="https://opengraph.githubassets.com/db0f1f13e7c03371ba37912ffb1fe6425c05d03be0a20d0b640d9235ba2752b7/HKUDS/DeepTutor" medium="image" />
      
    </item>
    
    <item>
      <title>onyx-dot-app/onyx</title>
      <link>https://github.com/onyx-dot-app/onyx</link>
      <description>&lt;p&gt;Open Source AI Platform - AI Chat with advanced features that works with every LLM&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;a name=&quot;readme-top&quot;&gt;&lt;/a&gt;&lt;/p&gt; 
&lt;h2 align=&quot;center&quot;&gt; &lt;a href=&quot;https://www.onyx.app/?utm_source=onyx_repo&amp;amp;utm_medium=github&amp;amp;utm_campaign=readme&quot;&gt; &lt;img width=&quot;50%&quot; src=&quot;https://github.com/onyx-dot-app/onyx/raw/logo/OnyxLogoCropped.jpg?raw=true&quot; /&gt;&lt;/a&gt; &lt;/h2&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;a href=&quot;https://discord.gg/TDJ59cGV2X&quot; target=&quot;_blank&quot;&gt; &lt;img src=&quot;https://img.shields.io/badge/discord-join-blue.svg?logo=discord&amp;amp;logoColor=white&quot; alt=&quot;Discord&quot; /&gt; &lt;/a&gt; &lt;a href=&quot;https://docs.onyx.app/?utm_source=onyx_repo&amp;amp;utm_medium=github&amp;amp;utm_campaign=readme&quot; target=&quot;_blank&quot;&gt; &lt;img src=&quot;https://img.shields.io/badge/docs-view-blue&quot; alt=&quot;Documentation&quot; /&gt; &lt;/a&gt; &lt;a href=&quot;https://www.onyx.app/?utm_source=onyx_repo&amp;amp;utm_medium=github&amp;amp;utm_campaign=readme&quot; target=&quot;_blank&quot;&gt; &lt;img src=&quot;https://img.shields.io/website?url=https://www.onyx.app&amp;amp;up_message=visit&amp;amp;up_color=blue&quot; alt=&quot;Documentation&quot; /&gt; &lt;/a&gt; &lt;a href=&quot;https://github.com/onyx-dot-app/onyx/raw/main/LICENSE&quot; target=&quot;_blank&quot;&gt; &lt;img src=&quot;https://img.shields.io/static/v1?label=license&amp;amp;message=MIT&amp;amp;color=blue&quot; alt=&quot;License&quot; /&gt; &lt;/a&gt; &lt;/p&gt; 
&lt;p align=&quot;center&quot;&gt; &lt;a href=&quot;https://trendshift.io/repositories/12516&quot; target=&quot;_blank&quot;&gt; &lt;img src=&quot;https://trendshift.io/api/badge/repositories/12516&quot; alt=&quot;onyx-dot-app/onyx | Trendshift&quot; style=&quot;width: 250px; height: 55px;&quot; /&gt; &lt;/a&gt; &lt;/p&gt; 
&lt;h1&gt;Onyx - The Open Source AI Platform&lt;/h1&gt; 
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://www.onyx.app/?utm_source=onyx_repo&amp;amp;utm_medium=github&amp;amp;utm_campaign=readme&quot;&gt;Onyx&lt;/a&gt;&lt;/strong&gt; is the application layer for LLMs - bringing a feature-rich interface that can be easily hosted by anyone. Onyx enables LLMs through advanced capabilities like RAG, web search, code execution, file creation, deep research and more.&lt;/p&gt; 
&lt;p&gt;Connect your applications with 50+ indexing-based connectors provided out of the box, or via MCP.&lt;/p&gt; 
&lt;div class=&quot;markdown-alert markdown-alert-tip&quot;&gt;
 &lt;p class=&quot;markdown-alert-title&quot;&gt;
  &lt;svg class=&quot;octicon octicon-light-bulb mr-2&quot; viewbox=&quot;0 0 16 16&quot; version=&quot;1.1&quot; width=&quot;16&quot; height=&quot;16&quot; aria-hidden=&quot;true&quot;&gt;
   &lt;path d=&quot;M8 1.5c-2.363 0-4 1.69-4 3.75 0 .984.424 1.625.984 2.304l.214.253c.223.264.47.556.673.848.284.411.537.896.621 1.49a.75.75 0 0 1-1.484.211c-.04-.282-.163-.547-.37-.847a8.456 8.456 0 0 0-.542-.68c-.084-.1-.173-.205-.268-.32C3.201 7.75 2.5 6.766 2.5 5.25 2.5 2.31 4.863 0 8 0s5.5 2.31 5.5 5.25c0 1.516-.701 2.5-1.328 3.259-.095.115-.184.22-.268.319-.207.245-.383.453-.541.681-.208.3-.33.565-.37.847a.751.751 0 0 1-1.485-.212c.084-.593.337-1.078.621-1.489.203-.292.45-.584.673-.848.075-.088.147-.173.213-.253.561-.679.985-1.32.985-2.304 0-2.06-1.637-3.75-4-3.75ZM5.75 12h4.5a.75.75 0 0 1 0 1.5h-4.5a.75.75 0 0 1 0-1.5ZM6 15.25a.75.75 0 0 1 .75-.75h2.5a.75.75 0 0 1 0 1.5h-2.5a.75.75 0 0 1-.75-.75Z&quot;&gt;&lt;/path&gt;
  &lt;/svg&gt;Tip&lt;/p&gt;
 &lt;p&gt;Deploy with a single command:&lt;/p&gt; 
 &lt;pre&gt;&lt;code&gt;curl -fsSL https://onyx.app/install_onyx.sh | bash
&lt;/code&gt;&lt;/pre&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;img src=&quot;https://github.com/onyx-dot-app/onyx/releases/download/v3.0.0/Onyx.gif&quot; alt=&quot;Onyx Chat Silent Demo&quot; /&gt;&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;⭐ Features&lt;/h2&gt; 
&lt;ul&gt; 
 &lt;li&gt;&lt;strong&gt;🔍 Agentic RAG:&lt;/strong&gt; Get best-in-class search and answer quality from a hybrid index + AI Agents for information retrieval 
  &lt;ul&gt; 
   &lt;li&gt;Benchmark to release soon!&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;🔬 Deep Research:&lt;/strong&gt; Get in-depth reports with a multi-step research flow. 
  &lt;ul&gt; 
    &lt;li&gt;Top of the &lt;a href=&quot;https://github.com/onyx-dot-app/onyx_deep_research_bench&quot;&gt;leaderboard&lt;/a&gt; as of Feb 2026.&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;🤖 Custom Agents:&lt;/strong&gt; Build AI Agents with unique instructions, knowledge, and actions.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;🌍 Web Search:&lt;/strong&gt; Browse the web to get up-to-date information. 
  &lt;ul&gt; 
   &lt;li&gt;Supports Serper, Google PSE, Brave, SearXNG, and others.&lt;/li&gt; 
    &lt;li&gt;Comes with an in-house web crawler and support for Firecrawl/Exa.&lt;/li&gt; 
  &lt;/ul&gt; &lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;📄 Artifacts:&lt;/strong&gt; Generate documents, graphics, and other downloadable artifacts.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;▶️ Actions &amp;amp; MCP:&lt;/strong&gt; Let Onyx agents interact with external applications; comes with flexible auth options.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;💻 Code Execution:&lt;/strong&gt; Execute code in a sandbox to analyze data, render graphs, or modify files.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;🎙️ Voice Mode:&lt;/strong&gt; Chat with Onyx via text-to-speech and speech-to-text.&lt;/li&gt; 
 &lt;li&gt;&lt;strong&gt;🎨 Image Generation:&lt;/strong&gt; Generate images based on user prompts.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;Onyx supports all major LLM providers, both self-hosted (like Ollama, LiteLLM, vLLM, etc.) and proprietary (like Anthropic, OpenAI, Gemini, etc.).&lt;/p&gt; 
&lt;p&gt;To learn more - check out our &lt;a href=&quot;https://docs.onyx.app/welcome?utm_source=onyx_repo&amp;amp;utm_medium=github&amp;amp;utm_campaign=readme&quot;&gt;docs&lt;/a&gt;!&lt;/p&gt; 
&lt;hr /&gt; 
&lt;h2&gt;🚀 Deployment Modes&lt;/h2&gt; 
&lt;blockquote&gt; 
 &lt;p&gt;Onyx supports deployments in Docker, Kubernetes, Helm/Terraform and provides guides for major cloud providers. Detailed deployment guides found &lt;a href=&quot;https://docs.onyx.app/deployment/overview&quot;&gt;here&lt;/a&gt;.&lt;/p&gt; 
&lt;/blockquote&gt; 
&lt;p&gt;Onyx supports two separate deployment options: standard and lite.&lt;/p&gt; 
&lt;h4&gt;Onyx Lite&lt;/h4&gt; 
&lt;p&gt;The Lite mode can be thought of as a lightweight Chat UI. It requires fewer resources (under 1 GB of memory) and runs a simpler stack. It is a good fit for users who want to try Onyx quickly, or for teams interested only in the Chat UI and Agents functionality.&lt;/p&gt; 
&lt;h4&gt;Standard Onyx&lt;/h4&gt; 
&lt;p&gt;The complete feature set of Onyx, recommended for serious users and larger teams. Additional components not included in Lite mode:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Vector + Keyword index for RAG.&lt;/li&gt; 
 &lt;li&gt;Background containers to run job queues and workers for syncing knowledge from connectors.&lt;/li&gt; 
 &lt;li&gt;AI model inference servers to run deep learning models used during indexing and inference.&lt;/li&gt; 
 &lt;li&gt;Performance optimizations for large-scale use via an in-memory cache (Redis) and a blob store (MinIO).&lt;/li&gt; 
&lt;/ul&gt; 
&lt;div class=&quot;markdown-alert markdown-alert-tip&quot;&gt;
 &lt;p class=&quot;markdown-alert-title&quot;&gt;
  &lt;svg class=&quot;octicon octicon-light-bulb mr-2&quot; viewbox=&quot;0 0 16 16&quot; version=&quot;1.1&quot; width=&quot;16&quot; height=&quot;16&quot; aria-hidden=&quot;true&quot;&gt;
   &lt;path d=&quot;M8 1.5c-2.363 0-4 1.69-4 3.75 0 .984.424 1.625.984 2.304l.214.253c.223.264.47.556.673.848.284.411.537.896.621 1.49a.75.75 0 0 1-1.484.211c-.04-.282-.163-.547-.37-.847a8.456 8.456 0 0 0-.542-.68c-.084-.1-.173-.205-.268-.32C3.201 7.75 2.5 6.766 2.5 5.25 2.5 2.31 4.863 0 8 0s5.5 2.31 5.5 5.25c0 1.516-.701 2.5-1.328 3.259-.095.115-.184.22-.268.319-.207.245-.383.453-.541.681-.208.3-.33.565-.37.847a.751.751 0 0 1-1.485-.212c.084-.593.337-1.078.621-1.489.203-.292.45-.584.673-.848.075-.088.147-.173.213-.253.561-.679.985-1.32.985-2.304 0-2.06-1.637-3.75-4-3.75ZM5.75 12h4.5a.75.75 0 0 1 0 1.5h-4.5a.75.75 0 0 1 0-1.5ZM6 15.25a.75.75 0 0 1 .75-.75h2.5a.75.75 0 0 1 0 1.5h-2.5a.75.75 0 0 1-.75-.75Z&quot;&gt;&lt;/path&gt;
  &lt;/svg&gt;Tip&lt;/p&gt;
 &lt;p&gt;&lt;strong&gt;To try Onyx for free without deploying, visit &lt;a href=&quot;https://cloud.onyx.app/signup?utm_source=onyx_repo&amp;amp;utm_medium=github&amp;amp;utm_campaign=readme&quot;&gt;Onyx Cloud&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt; 
&lt;/div&gt; 
&lt;hr /&gt; 
&lt;h2&gt;🏢 Onyx for Enterprise&lt;/h2&gt; 
&lt;p&gt;Onyx is built for teams of all sizes, from individual users to the largest global enterprises:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;👥 Collaboration: Share chats and agents with other members of your organization.&lt;/li&gt; 
 &lt;li&gt;🔐 Single Sign On: SSO via Google OAuth, OIDC, or SAML. Group syncing and user provisioning via SCIM.&lt;/li&gt; 
 &lt;li&gt;🛡️ Role Based Access Control: RBAC for sensitive resources like access to agents, actions, etc.&lt;/li&gt; 
 &lt;li&gt;📊 Analytics: Usage graphs broken down by teams, LLMs, or agents.&lt;/li&gt; 
 &lt;li&gt;🕵️ Query History: Audit usage to ensure safe adoption of AI in your organization.&lt;/li&gt; 
 &lt;li&gt;💻 Custom code: Run custom code to remove PII, reject sensitive queries, or perform custom analysis.&lt;/li&gt; 
 &lt;li&gt;🎨 Whitelabeling: Customize the look and feel of Onyx with custom naming, icons, banners, and more.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;h2&gt;📚 Licensing&lt;/h2&gt; 
&lt;p&gt;There are two editions of Onyx:&lt;/p&gt; 
&lt;ul&gt; 
 &lt;li&gt;Onyx Community Edition (CE) is available freely under the MIT license and covers all of the core features for Chat, RAG, Agents, and Actions.&lt;/li&gt; 
 &lt;li&gt;Onyx Enterprise Edition (EE) includes extra features that are primarily useful for larger organizations.&lt;/li&gt; 
&lt;/ul&gt; 
&lt;p&gt;For feature details, check out &lt;a href=&quot;https://www.onyx.app/pricing?utm_source=onyx_repo&amp;amp;utm_medium=github&amp;amp;utm_campaign=readme&quot;&gt;our website&lt;/a&gt;.&lt;/p&gt; 
&lt;h2&gt;👪 Community&lt;/h2&gt; 
&lt;p&gt;Join our open source community on &lt;strong&gt;&lt;a href=&quot;https://discord.gg/TDJ59cGV2X&quot;&gt;Discord&lt;/a&gt;&lt;/strong&gt;!&lt;/p&gt; 
&lt;h2&gt;💡 Contributing&lt;/h2&gt; 
&lt;p&gt;Looking to contribute? Please check out the &lt;a href=&quot;https://raw.githubusercontent.com/onyx-dot-app/onyx/main/CONTRIBUTING.md&quot;&gt;Contribution Guide&lt;/a&gt; for more details.&lt;/p&gt;</description>
      
      <media:content url="https://repository-images.githubusercontent.com/633262635/dca37acb-de40-4b62-9238-f06ff265241a" medium="image" />
      
    </item>
    
  </channel>
</rss>
