Altered Craft

Testing Mozilla's any-llm: A Practical Take on Provider Abstraction

Sam Keen
Nov 13, 2025

Note to my readers: this is a new format I’m testing for analysis-based posts, more of a “learning in public” format. Let me know if it’s valuable. I’ll still be posting longer-form content and the Weekly AI Roundups.

I spent today exploring Mozilla’s any-llm library. While it’s part of the larger mozilla.ai agent platform, the library stands alone perfectly well.

Why Provider Abstraction Matters Now

The library offers unified interfaces for Completions and Responses across LLM providers. It also standardizes error handling with custom exceptions for common issues like missing API keys or unsupported parameters. There are additional features, such as a proxy gateway, which you can read about in their docs.
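To make the idea concrete, here’s a minimal sketch of what a unified completion interface with standardized errors can look like. The names here (`complete`, `MissingApiKeyError`, `UnsupportedParameterError`, the provider table) are illustrative stand-ins, not any-llm’s actual API; see the library’s docs for the real thing.

```python
# Hypothetical sketch of a provider-abstraction layer: one call
# signature, "provider/model" routing, and standardized exceptions.
# These names are illustrative, not any-llm's actual API.
import os

class MissingApiKeyError(Exception):
    """Raised when the selected provider's API key is not configured."""

class UnsupportedParameterError(Exception):
    """Raised when a provider does not support a requested parameter."""

# Map provider names to the env vars their API keys conventionally live in.
_PROVIDER_KEYS = {"openai": "OPENAI_API_KEY", "anthropic": "ANTHROPIC_API_KEY"}

def complete(model: str, messages: list[dict], **params) -> str:
    """Dispatch a 'provider/model' string to one uniform call."""
    provider, _, model_name = model.partition("/")
    key_var = _PROVIDER_KEYS.get(provider)
    if key_var is None or not os.environ.get(key_var):
        raise MissingApiKeyError(f"set {key_var or provider!r} to use {provider}")
    # Example of surfacing a feature disparity as a typed error.
    if provider == "anthropic" and "logprobs" in params:
        raise UnsupportedParameterError("anthropic does not expose logprobs")
    # A real implementation would call the provider's SDK here;
    # this stub just echoes the routing so the shape is visible.
    return f"[{provider}:{model_name}] {messages[-1]['content']}"
```

The payoff is that swapping providers becomes a one-string change at the call site, and misconfiguration fails with a typed exception instead of a provider-specific HTTP error.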

LLM provider abstraction matters because the frontier model landscape is shifting rapidly. For instance, Kimi’s K2 Thinking model outranked incumbents on key benchmarks just last week. With providers competing on performance, speed, and pricing, switching models when a cheaper and/or faster alternative meets your needs becomes a real advantage.

Building a Test Harness

I built a functional research POC (available on GitHub) with a simple web UI for switching between configured providers and models. My analysis focused on two questions:

  • Does the abstraction add meaningful complexity?

  • How does it handle feature disparities between models?

Simple chat loop test harness built on any-llm
