Who Is Winning the Agent Framework Race in May 2026
Three years after LangChain's breakout, the agent framework landscape has stratified into four functional categories: general-purpose orchestration, coding agents, browser and voice automation, and visual workflow builders. This page ranks the 25 most-starred AI agent frameworks on GitHub in May 2026, with creation dates, languages, and category context. Star counts are imperfect popularity proxies, but they are the only public signal that consistently moves in real time across competing projects.
Top 25 AI Agent Frameworks by GitHub Stars (May 14, 2026)
| Rank | Repository | Stars | Forks | Created | Language | Category |
|---|---|---|---|---|---|---|
| 1 | n8n-io/n8n | 187,791 | 57,618 | 2019-06 | TypeScript | Workflow |
| 2 | Significant-Gravitas/AutoGPT | 184,295 | 46,237 | 2023-03 | Python | General |
| 3 | langchain-ai/langchain | 136,707 | 22,606 | 2022-10 | Python | General |
| 4 | browser-use/browser-use | 93,857 | 10,605 | 2024-10 | Python | Browser |
| 5 | cline/cline | 61,755 | 6,417 | 2024-07 | TypeScript | Coding |
| 6 | microsoft/autogen | 58,025 | 8,755 | 2023-08 | Python | General |
| 7 | FlowiseAI/Flowise | 52,810 | 24,328 | 2023-03 | TypeScript | Workflow |
| 8 | crewAIInc/crewAI | 51,380 | 7,103 | 2023-10 | Python | General |
| 9 | run-llama/llama_index | 49,399 | 7,407 | 2022-11 | Python | General / RAG |
| 10 | BerriAI/litellm | 46,932 | 8,037 | 2023-07 | Python | Routing / Proxy |
| 11 | paul-gauthier/aider | 44,796 | 4,409 | 2023-05 | Python | Coding |
| 12 | agno-agi/agno | 40,118 | 5,377 | 2022-05 | Python | General |
| 13 | stanfordnlp/dspy | 34,408 | 2,889 | 2023-01 | Python | General |
| 14 | langchain-ai/langgraph | 32,027 | 5,432 | 2023-08 | Python | General |
| 15 | microsoft/semantic-kernel | 27,902 | 4,598 | 2023-02 | C# | General |
| 16 | huggingface/smolagents | 27,302 | 2,583 | 2024-12 | Python | General |
| 17 | openai/openai-agents-python | 26,290 | 4,027 | 2025-03 | Python | General |
| 18 | deepset-ai/haystack | 25,223 | 2,784 | 2019-11 | MDX | General / RAG |
| 19 | vercel/ai | 24,220 | 4,392 | 2023-05 | TypeScript | General SDK |
| 20 | mastra-ai/mastra | 23,871 | 2,071 | 2024-08 | TypeScript | General |
| 21 | letta-ai/letta | 22,707 | 2,413 | 2023-10 | Python | Memory / Stateful |
| 22 | pydantic/pydantic-ai | 17,055 | 2,076 | 2024-06 | Python | General |
| 23 | livekit/agents | 10,472 | 3,126 | 2023-10 | Python | Voice |
| 24 | anthropics/claude-agent-sdk-python | 6,860 | 987 | 2025-06 | Python | Vendor SDK |
| 25 | anthropics/anthropic-sdk-python | 3,441 | 674 | 2023-01 | Python | Vendor SDK |
Star Leaders by Category
| Category | Leader | Stars | Runner-up | Stars |
|---|---|---|---|---|
| General-purpose orchestration | AutoGPT | 184,295 | LangChain | 136,707 |
| Coding agents | Cline | 61,755 | Aider | 44,796 |
| Browser / web automation | browser-use | 93,857 | (no close peer) | - |
| Voice agents | LiveKit Agents | 10,472 | (no close peer) | - |
| Visual / no-code workflow | n8n | 187,791 | Flowise | 52,810 |
| Memory / stateful | Letta | 22,707 | (no close peer) | - |
| Routing / proxy | LiteLLM | 46,932 | (no close peer) | - |
| Vendor SDKs | Claude Agent SDK | 6,860 | Anthropic SDK | 3,441 |
By Language
| Language | Frameworks in Top 25 | Combined Stars |
|---|---|---|
| Python | 18 | ~886,000 |
| TypeScript | 5 | ~350,000 |
| C# | 1 (Semantic Kernel) | ~28,000 |
| MDX (Haystack docs) | 1 | ~25,000 |
Python is still the dominant language, but TypeScript has captured the visual-workflow and IDE-extension corners (n8n, Flowise, Cline, Vercel AI SDK, Mastra). The 18:5 Python:TypeScript split in the top 25 understates the TypeScript footprint, because the TypeScript projects skew toward end-user applications rather than libraries.
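The per-language totals can be recomputed directly from the top-25 ranking; a minimal sketch (star counts copied from the May 14, 2026 table above):

```python
from collections import defaultdict

# (repo, stars, primary language) copied from the top-25 table above
TOP_25 = [
    ("n8n-io/n8n", 187791, "TypeScript"),
    ("Significant-Gravitas/AutoGPT", 184295, "Python"),
    ("langchain-ai/langchain", 136707, "Python"),
    ("browser-use/browser-use", 93857, "Python"),
    ("cline/cline", 61755, "TypeScript"),
    ("microsoft/autogen", 58025, "Python"),
    ("FlowiseAI/Flowise", 52810, "TypeScript"),
    ("crewAIInc/crewAI", 51380, "Python"),
    ("run-llama/llama_index", 49399, "Python"),
    ("BerriAI/litellm", 46932, "Python"),
    ("paul-gauthier/aider", 44796, "Python"),
    ("agno-agi/agno", 40118, "Python"),
    ("stanfordnlp/dspy", 34408, "Python"),
    ("langchain-ai/langgraph", 32027, "Python"),
    ("microsoft/semantic-kernel", 27902, "C#"),
    ("huggingface/smolagents", 27302, "Python"),
    ("openai/openai-agents-python", 26290, "Python"),
    ("deepset-ai/haystack", 25223, "MDX"),
    ("vercel/ai", 24220, "TypeScript"),
    ("mastra-ai/mastra", 23871, "TypeScript"),
    ("letta-ai/letta", 22707, "Python"),
    ("pydantic/pydantic-ai", 17055, "Python"),
    ("livekit/agents", 10472, "Python"),
    ("anthropics/claude-agent-sdk-python", 6860, "Python"),
    ("anthropics/anthropic-sdk-python", 3441, "Python"),
]

def totals_by_language(rows):
    """Sum stars and count repositories per primary language."""
    stars, counts = defaultdict(int), defaultdict(int)
    for _, n, lang in rows:
        stars[lang] += n
        counts[lang] += 1
    return stars, counts
```

Running this gives Python 18 repos / 886,071 stars and TypeScript 5 repos / 350,447 stars, which is where the rounded By Language figures come from.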
Five Things the Rankings Tell You
- n8n is the single largest agent platform on GitHub by stars. Originally a no-code workflow tool predating the LLM era, n8n now leads agent-relevant repositories at 187K stars. The implication: many "agent" workloads in 2026 are actually workflow automation with LLM nodes, not custom orchestration code.
- AutoGPT's 184K stars are mostly historical. The repository was created in March 2023 and accumulated most of its stars in the first six months. Star count is a stock, not a flow; high totals reflect past velocity and are not a reliable indicator of current activity. Compare commit frequency or contributor count for an active-development signal.
- browser-use is the fastest-rising new framework. Created in October 2024, the repository reached 94K stars in roughly 19 months, a velocity no 2023-era launch matched except LangChain itself. The browser-automation category is winner-take-most so far.
- Coding agents are an OSS-led category. Cline (62K) and Aider (45K) dominate the category, while their closed-source rivals (Cursor's agent, Devin) publish no repositories to compare against, and Cline's 22-month climb from launch to 62K mirrors browser-use's trajectory. Open-source momentum is real in coding tooling.
- Vendor SDKs lag. Anthropic SDK (3.4K) and Claude Agent SDK (6.9K) have small star counts relative to community frameworks but high relevance-per-star (every Anthropic SDK install is an Anthropic-API user). Vendor-SDK stars are a poor metric for adoption; npm or PyPI download counts capture vendor SDK adoption far better.
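The stock-versus-flow point above is straightforward to operationalise. GitHub's `/repos/{owner}/{repo}/stats/commit_activity` endpoint returns 52 weeks of commit totals; a sketch that ranks repositories by recent mean weekly commits instead of stars (the sample numbers below are invented for illustration, not measurements):

```python
from statistics import mean

def weekly_velocity(activity, recent_weeks=12):
    """Mean commits per week over the most recent weeks.

    `activity` is the list of {"total": int, ...} dicts returned by
    GitHub's /stats/commit_activity endpoint (52 entries, oldest first).
    """
    recent = [week["total"] for week in activity[-recent_weeks:]]
    return mean(recent) if recent else 0.0

# Hypothetical commit-activity samples -- NOT real measurements.
sample = {
    "Significant-Gravitas/AutoGPT": [{"total": t} for t in [3, 2, 4, 1] * 13],
    "browser-use/browser-use": [{"total": t} for t in [40, 35, 52, 48] * 13],
}

# Rank by flow (recent commit velocity) rather than stock (stars).
by_flow = sorted(sample, key=lambda r: weekly_velocity(sample[r]), reverse=True)
```

With these invented numbers, browser-use outranks AutoGPT by flow despite AutoGPT's far larger star stock, which is exactly the inversion the AutoGPT note above warns about.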
What This Means for AI Visibility
Agent frameworks are the substrate of brand-recommendation workflows. When an agent decides which CRM, payment processor, or SaaS tool to surface to its user, that decision happens inside a LangChain chain, a CrewAI crew, a Cline session, or an n8n workflow. Brands optimising for AI visibility should track which frameworks dominate their target use case because each framework has different default retrieval patterns, different prompt templates, and different memory strategies. A brand that ranks well inside a LangChain tool-calling loop may rank differently inside an n8n workflow or a Cline IDE session. Visibility programmes that ignore the framework layer are optimising for the platform but missing the orchestration layer that decides the actual recommendation.
Methodology
Star, fork, creation date, and language data pulled from the public GitHub REST API on May 14, 2026 via the official gh CLI. Repositories selected from the union of (a) the top results for "AI agents" and "LLM agents" in GitHub topic search, (b) the agent frameworks tracked in the State of AI Agents 2026 reports, and (c) the SDKs published by major LLM vendors. The list is not exhaustive; closed-source frameworks (Cursor agents, Replit Agent, Devin, Lindy) are excluded by definition. Star count is a stock-not-flow popularity proxy; for active-development signal use the GitHub Pulse view or commit-frequency tooling. Refreshed quarterly.
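The collection step used the gh CLI; an equivalent sketch hitting the REST API directly is below. The response field names (`stargazers_count`, `forks_count`, `created_at`, `language`) are GitHub's real ones, but treat the script as illustrative rather than the exact collection code:

```python
import json
from urllib.request import Request, urlopen

# The four columns used in the ranking table, as named by the GitHub REST API.
FIELDS = ("stargazers_count", "forks_count", "created_at", "language")

def parse_repo(payload):
    """Extract the ranking-table columns from a parsed
    GET /repos/{owner}/{repo} response body."""
    return {field: payload.get(field) for field in FIELDS}

def fetch_repo(full_name, token=None):
    """Fetch one repository's metadata from the public GitHub REST API."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:  # unauthenticated requests are rate-limited to 60/hour
        headers["Authorization"] = f"Bearer {token}"
    req = Request(f"https://api.github.com/repos/{full_name}", headers=headers)
    with urlopen(req) as resp:
        return parse_repo(json.load(resp))

# Example (requires network):
#   fetch_repo("langchain-ai/langchain")
```

The gh CLI equivalent is `gh api repos/{owner}/{repo}` with a `--jq` filter over the same fields, which is what "via the official gh CLI" above refers to.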
How Presenc AI Helps
Presenc AI tracks brand-recommendation outcomes across the major AI platforms whose agents are built on these frameworks. When a brand's recommendation rate inside an agent loop diverges from its rate inside a direct chat prompt, that gap is usually traceable to a framework-level retrieval or tool-selection pattern. For brands building agent-aware visibility strategy, the framework rankings above are the input; the brand-outcome data is what closes the loop.