Mistral's CLI Agent. Swap Models, Add Tools, Keep Control
Mistral's Take on an Extensible CLI Agent
Mistral just released Vibe alongside their new Devstral coding model. Another CLI coding agent enters the arena.
At roughly 12,000 lines of Python, the entire codebase is small enough to review in an afternoon — which I did. With an LLM’s help, I understood
exactly how it works. Curious about something? Trace through the code. Want to extend it? You understand what you’re extending.
Compare that to Claude Code, which is closed source, or Gemini CLI’s 100,000 lines. This matters: it’s the difference between using a tool and understanding one.
This transparency comes from Vibe’s core philosophy: everything relevant to the CLI’s operation should be easily configurable and extendable. Want to use Claude instead of Devstral? Change a config file. Want to add a custom tool? Drop a Python file in a folder. Want to replace the entire system prompt? Point to a different markdown file. When you understand the whole system, extending it becomes natural.
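To make the “drop a Python file in a folder” idea concrete, here is a minimal sketch of what such a drop-in tool file typically looks like. The function name, the `TOOL_SPEC` shape, and the folder convention are assumptions for illustration — Vibe’s actual tool API may use different names:

```python
# Hypothetical drop-in tool file (names and schema shape assumed, not Vibe's real API).
# The pattern: one Python file in the tools folder defines one callable tool,
# plus a JSON-schema-style description the model can see.

def word_count(text: str) -> dict:
    """Count words and characters in a text snippet."""
    words = text.split()
    return {"words": len(words), "characters": len(text)}

# A registry entry the agent could expose to the model (shape assumed):
TOOL_SPEC = {
    "name": "word_count",
    "description": "Count words and characters in a text snippet.",
    "parameters": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}
```

The appeal of this pattern is that the tool is plain Python: you can unit-test it without the agent, and the agent only needs the spec to advertise it to the model.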
In this article, I’ll walk through Vibe’s architecture, show you how to extend it in three key ways (prompts, providers, and tools), and share practical insights I discovered while exploring the codebase.
Two Practical Advantages
Beyond the manageable codebase:
It’s truly model-agnostic. Vibe defaults to Devstral, but the provider system supports any OpenAI-compatible API. Add OpenRouter and you have access to 100+ models. Run a local model with Ollama or vLLM. Switch between them with a command-line flag.
Apache 2.0 License. Fork it, modify it, ship it in your product. The license puts no restrictions on commercial use. This matters if you’re building internal tooling or integrating AI agents into your workflow.
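“OpenAI-compatible” boils down to one wire format. The sketch below builds such a request generically — the function is my own illustration, not Vibe’s code; only the endpoint layout and JSON body follow the widely used chat-completions convention:

```python
import json

def build_chat_request(base_url: str, model: str, messages: list[dict]) -> tuple[str, bytes]:
    """Build an OpenAI-compatible chat completion request.

    Any endpoint speaking this wire format works: OpenRouter,
    a local Ollama or vLLM server, or a hosted provider.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode()
    return url, body

# The same request shape targets different providers by changing base_url and model:
url, body = build_chat_request(
    "http://localhost:11434/v1",           # e.g. a local Ollama server
    "devstral",                            # model names are provider-specific
    [{"role": "user", "content": "hi"}],
)
```

Because only `base_url` and `model` change between providers, switching models really can be a single config entry or command-line flag.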
Architecture at a Glance
Vibe follows a clean layered architecture. Here’s the high-level view:
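The layering follows the standard agent-loop pattern: CLI on top, an agent loop in the middle, and provider (model API) and tool layers underneath. The sketch below is a generic illustration of that pattern with assumed names, not Vibe’s actual classes:

```python
# Generic agent-loop sketch (names assumed for illustration, not Vibe's code).
# Layers: CLI -> agent loop -> provider (model call) -> tools.

def run_agent(prompt, call_model, tools, max_turns=5):
    """Drive a model/tool loop until the model stops requesting tools."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = call_model(messages)          # provider layer: any chat API
        if "tool" not in reply:               # plain answer: we're done
            return reply["content"]
        result = tools[reply["tool"]](**reply["args"])  # tool layer
        messages.append({"role": "tool", "content": str(result)})
    return "max turns reached"
```

Keeping each layer behind a small interface like `call_model` and `tools` is what makes the provider and tool swaps described above possible without touching the loop itself.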



