Why AI Engineers love marimo notebooks
If you want to experiment with AI in Python, you'll want a modern notebook.

AI engineering demands rapid experimentation. You don’t want to be stuck with text outputs and static interfaces when you need rich interactions to properly test AI applications. The gap between experimental code and shareable demos slows down collaboration and limits feedback cycles.
marimo notebooks eliminate this friction by treating interactive applications as a natural extension of your notebook. Built for the iterative nature of AI development, marimo lets you create, test, and share AI applications within the same environment where you do your research and development.
Notebooks for rapid prototyping
AI development requires more than simple prompt engineering. You need to test how LLMs interact with real user interfaces that can be embedded in more complex workflows. Traditional notebooks are not designed to interact with widgets, which makes this kind of prototyping hard. Building separate demo applications breaks your experimental flow and creates maintenance overhead.
marimo is a reactive notebook for Python. When an input updates, dependent cells automatically re-run. That means you can write interactive chat interfaces, file upload systems, and dynamic visualizations that all respond to user inputs and model outputs. marimo notebooks can also run as apps that you can easily share with colleagues, and the built-in chat widget can be made fully bespoke with any Python SDK. Sharing a marimo notebook is straightforward, too, because each notebook is a self-contained Python file that's aware of its own dependencies.
LLM support
AI engineering workflows increasingly depend on LLM assistance for code generation, debugging, and experimentation. You don't want to switch between your notebook and separate AI tools; you want it all to happen in the same browser tab.
marimo supports AI assistance in several ways. The AI sidebar can generate new cells that you move into the current notebook, and on top of that you can configure tab completion within cells. marimo integrates directly with Gemini, Anthropic, and OpenAI, and you're also welcome to experiment with local models via the Ollama integration.
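As a rough sketch, provider settings live in marimo's configuration file (`marimo.toml`); the exact keys and section names below are from memory and may differ between marimo versions, so treat this as an illustration rather than a reference:

```toml
# Hypothetical marimo.toml fragment -- verify key names against your marimo version.
[ai.open_ai]
api_key = "sk-..."   # your real key, or leave unset and use an env variable
model = "gpt-4o"

# Local models: point the OpenAI-compatible client at Ollama's endpoint instead.
# base_url = "http://localhost:11434/v1"
# model = "llama3.1"
```

The same pattern applies to the other providers: you pick a model, supply a key, and the sidebar and completions use it.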
The integrated LLM support understands your current code context, data structures, and schemas found in data sources. This seamless integration keeps you in the flow state that’s essential for productive AI development, where ideas and implementation happen in rapid succession.
Labelling tools and evals
The future of LLMs is multi-modal, requiring sophisticated evaluation and labeling workflows that go beyond simple text comparisons. Traditional evaluation approaches struggle with the complexity of LLM outputs, which means that you will inevitably want a custom interface to annotate your examples.
Because marimo is reactive, it's straightforward to create custom labeling interfaces. On top of that, you can build evaluation dashboards for model comparison and design human-in-the-loop workflows for fine-tuning, all without ever leaving the notebook.
Transforming AI development
For AI engineers ready to move beyond fragmented toolchains, marimo offers a path toward faster iteration, better collaboration, and more sophisticated AI applications. The future of AI engineering is integrated, interactive, and built on tools that match the complexity and pace of modern AI development.