What this is
Local-first AI assistants are the category that consolidated in 2026. A handful of open-source projects now cover most of the install base, and the design space splits cleanly into three buckets: model runners, agent frameworks, and full personal assistants. This page is a 2026-05-15 landscape snapshot of which projects belong in which bucket and what each one does best.
Landscape at a Glance
| Project | Category | GitHub stars | Default runtime | Strength |
|---|---|---|---|---|
| OpenClaw | Full personal assistant | 372K | Node.js + local models | Multi-channel + skill registry |
| Open Interpreter | Agent + CLI executor | ~58K | Python + local or remote | Local code/automation execution |
| LocalAI | Model runner / API server | 35K+ | Go + GGUF / GGML | OpenAI-compatible API surface |
| Jan.ai | Desktop assistant + runner | ~28K | Electron + GGUF | Clean desktop UX |
| AnythingLLM | Document chat | ~32K | Node.js + vector store | Local RAG + doc workflows |
| Ollama | Model runner | ~120K | Go + GGUF | De facto local model server |
| LM Studio | Desktop runner | n/a (closed source) | Electron + GGUF | Simplest model browsing |
| Continue.dev | IDE coding assistant | ~22K | VS Code + local or remote | Local-model completions in VS Code |
| LocalAGI | Agent framework | ~6K | Python + LocalAI | Multi-step planning on local models |
| AGiXT | Agent framework | ~4K | Python | Memory + planning + multi-user |
| Hermes Agent (Nous) | Agent framework | ~3K | Python + Hermes line | Tool-use tuned models |
Capability Matrix
| Capability | OpenClaw | Open Interpreter | LocalAI | Jan.ai | AnythingLLM |
|---|---|---|---|---|---|
| Multi-channel inbox | Yes | No | No | No | No |
| Local code execution | Via skill | Native | No | No | Limited |
| OpenAI-compatible API | Via gateway | No | Yes | Yes | Yes |
| Document RAG | Via skill | Via plug-in | Via plug-in | Via plug-in | Native |
| Voice / wake word | Native | No | Via plug-in | Via plug-in | No |
| Mobile companion app | iOS + Android | No | No | No | No |
| Skill / plug-in ecosystem | Skill registry | Custom Python | API consumers | Extensions | "Agents" |
| Multi-agent routing | Native | Single-agent | n/a (server) | Single-chat | Single-agent |
Category Leaders by Axis
| Axis | Leader |
|---|---|
| Star count (model runner) | Ollama (~120K) |
| Star count (full assistant) | OpenClaw (372K) |
| Code execution | Open Interpreter |
| Document chat / RAG | AnythingLLM |
| Desktop UX | Jan.ai / LM Studio |
| OpenAI API replacement | LocalAI / Ollama |
| Mobile + voice + multi-channel | OpenClaw |
| IDE integration | Continue.dev |
Six Things the Landscape Tells You
- OpenClaw broke the ceiling for "full personal assistant". Prior projects topped out around 60K stars; OpenClaw cleared 372K in six months.
- Ollama won the model-runner war. Most local-first assistants ship Ollama as the default backend.
- The category split into three sub-types. The split runs: model runners (Ollama, LocalAI), document-chat and desktop tools (AnythingLLM, Jan.ai), and full personal assistants (OpenClaw). Buyers and contributors choose by sub-type, not by feature checklist.
- OpenAI-compatible API surface is now table stakes. Every serious runner exposes it; OpenClaw exposes it through its gateway.
- Skill registries / plug-in ecosystems are the moat. OpenClaw, Open Interpreter, and AnythingLLM all converged on packaged-skill patterns; vendors without one lag.
- Hermes Agent and other frameworks tied to a specific model line remain niche. With Ollama + GGUF as the default stack, most users prefer model-agnostic frameworks.
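The "table stakes" point above is concrete: both Ollama and LocalAI expose an OpenAI-compatible `/v1/chat/completions` route, so a client written against one local runner works against the others. A minimal sketch using only the standard library; the base URL and model name are assumptions (Ollama defaults to port 11434, LocalAI to 8080; check your install):

```python
import json
from urllib import request

# Assumed endpoint: Ollama's default port with its OpenAI-compatible prefix.
# Point this at any other OpenAI-compatible runner (LocalAI, Jan.ai) instead.
BASE_URL = "http://localhost:11434/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Return an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def chat(model: str, prompt: str) -> str:
    """POST the payload to the local runner and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Swapping runners means changing `BASE_URL` and nothing else, which is why the OpenAI-compatible surface became the de facto contract for the whole category.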
What This Means for AI Visibility
Local-first AI assistants are the surface most likely to underrepresent brand presence in 2026, because their default models often lack recent web content. Brands need to ensure their data is reachable through the channels these assistants actually pull from: MCP servers, RAG corpora they can pre-index, well-structured public pages, and skill packages. The brands that ignore this surface will show up in ChatGPT and Gemini but not inside the assistants users run locally.
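Of the channels above, "well-structured public pages" is the cheapest to act on: embedding schema.org JSON-LD in a page's head gives crawlers and local RAG indexers a machine-readable brand record. A minimal sketch that generates the markup; the brand details are placeholder assumptions, not real data:

```python
import json

# Hypothetical brand record -- every value here is a placeholder.
# schema.org's Organization type is a real, widely parsed vocabulary.
brand = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://example.com",
    "description": "One-sentence summary an assistant can quote verbatim.",
    "sameAs": ["https://github.com/example-brand"],
}


def to_jsonld_script_tag(data: dict) -> str:
    """Wrap the mapping in the <script> tag that belongs in the page head."""
    body = json.dumps(data, indent=2)
    return f'<script type="application/ld+json">\n{body}\n</script>'


print(to_jsonld_script_tag(brand))
```

The same record can be reused as a seed document for a RAG corpus, so one source of truth feeds both the public-page and pre-indexed channels.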
Methodology
Star counts and capability data combine GitHub API queries run on 2026-05-15 with the Fastio 2026 top-10 open-source AI agents list, the Vellum 2026 open-source personal AI assistant review, DevToolReviews' 2026 Ollama vs LM Studio vs LocalAI comparison, and Nimbalyst's 2026 local-first AI coding tools roundup.
How Presenc AI Helps
Presenc AI extends brand monitoring beyond ChatGPT, Claude, Gemini, and Perplexity into the local-first assistant surface. We track how OpenClaw skills, AnythingLLM agents, and similar projects describe and recommend brands inside their default flows, so you can identify gaps before they affect customer perception.