Best MCP servers 2026: 15 picks with security caveats
- 15 production-tested MCP servers, each with a verified GitHub repo, a specific use case, and a security caveat cited from SlowMist, arXiv, or the Astrix security report.
- The MCP catalog on GitHub has 20,000+ implementations. Most are undocumented experiments. This list cuts to the ones with active maintainers and real adoption.
- Every server that requires credentials carries a risk: 53% of MCP implementations use static API keys rather than OAuth, per the 2025 Astrix state-of-security report. Key points noted for each.
Why this list exists
The Glama registry lists over 21,000 MCP servers. The Wong2 awesome-mcp-servers list on GitHub tracks thousands more. Most of those entries are a README, a half-finished index.ts, and no maintenance. Sorting through them is the problem, not finding them.
This list applies three filters: the server must have an active maintainer or a backing organization, it must be documented enough to run in under 30 minutes, and the security posture must be something you can actually evaluate before connecting it to an agent that has write access to your data.
The security angle matters more than most lists admit. In early 2026, researchers filed 30 CVEs against MCP servers in 60 days. An arXiv paper (2603.22489) evaluated seven major MCP clients and found significant security gaps across most of them due to insufficient static validation. SlowMist's MCP Security Checklist documents the attack surface in detail. If you are connecting an agent to any server below without reading the caveat section, you are accepting a risk you have not priced.
The 15 servers
Gives an agent full access to GitHub's API surface: reading and writing issues, pull requests, discussions, code search, and branch management. Backed by GitHub's own identity and permissions model, which sets it apart from community wrappers.
When to use: Any agentic coding workflow where the agent needs to open PRs, file issues, review diffs, or search across repositories without you doing it manually.
Controls a real browser via Playwright: navigate pages, click, fill forms, take screenshots, and extract structured content using accessibility snapshots. Does not require a vision model; the agent works from the accessibility tree.
When to use: QA automation, scraping sites that block curl, filling forms as part of an agentic workflow, or visual regression testing. The accessibility-snapshot approach is faster and cheaper than screenshot-based agents.
Sandboxed read and write access to local files and directories. Configurable path allowlist prevents the agent from touching files outside the declared roots. Part of Anthropic's official reference server set.
When to use: Any agent that needs to read, write, or organize files on disk — code generation, document processing, config management. The path allowlist is the primary guardrail; configure it before anything else.
Knowledge-graph-based persistent memory stored locally as JSON. The agent can store entities, relationships, and observations across sessions. No external API calls; data stays on the machine running the server.
When to use: Long-running agentic workflows where context needs to survive across sessions — project tracking, relationship mapping, accumulating research findings. The local-storage model makes it a reasonable starting point before committing to a vector database.
Connects an agent to a Supabase project for natural-language database queries, table inspection, and schema reads. Supports both the anon key (limited) and the service-role key (full access).
When to use: Data analysis, debugging production data issues, scaffolding queries during development. Pair with read-only credentials for any production database.
Before you connect another MCP server: run the free checklist.
The SlowMist checklist covers 24 control points. Our interactive version walks you through each one and scores your configuration. Free, no signup.
Run the free checklist Or get a full audit — Septim Spire $199 →Exposes Stripe's API as MCP tools: check subscription status, create invoices, manage customers, query revenue. Stripe also hosts a remote MCP endpoint at mcp.stripe.com if you prefer not to run it locally.
When to use: Revenue dashboards, automated billing workflows, customer support agents that need to check subscription state without manual API calls. The remote endpoint removes the local dependency but introduces a network trust boundary.
Real-time web search, URL extraction, site mapping, and crawl tools built for AI agents. Tavily returns structured, LLM-ready results rather than raw HTML. Also available as a hosted remote endpoint.
When to use: Research agents, fact-checking pipelines, news monitoring, competitive analysis. More reliable than Puppeteer for basic search because it handles rate limits and anti-bot measures at the API layer.
Web and local business search via the Brave Search API. Brave does not profile or track individual queries, which matters if your agent is processing sensitive research topics. Returns web results and local business data.
When to use: General-purpose search in workflows where query privacy matters, or as a lower-cost alternative to Tavily when you need basic web results without crawling depth. Requires a Brave Search API key.
Natural-language query execution against a PostgreSQL database. Defaults to read-only mode, which prevents accidental mutations. Schema introspection lets the agent understand table structure before writing queries.
When to use: Ad-hoc data analysis, debugging slow queries, building reports without writing raw SQL. Keep it read-only unless you have a specific, well-scoped write task and a rollback plan.
Gives agents access to AWS documentation, service metadata, billing data, and CDK scaffolding. Covers a wide surface: S3, Lambda, CloudWatch, Cost Explorer, and more. Actively maintained by AWS Labs.
When to use: Infrastructure agents, cost analysis, documentation lookup during IaC development. Pairs well with Terraform MCP for a full infrastructure workflow.
Reads Sentry error events, traces, and performance telemetry through an MCP interface. An agent can triage issues, look up stack traces, and correlate errors with deploys without leaving the coding environment.
When to use: Debugging agents that need production error context, or any workflow where you want the agent to automatically check Sentry before proposing a fix. Substantially cuts time-to-diagnosis on production incidents.
Gives agents structured access to the Terraform registry: provider documentation, module schemas, resource definitions. The agent can look up resource arguments and generate correct HCL without hallucinating attribute names.
When to use: IaC generation, module discovery, debugging Terraform plan errors. This server is read-only against the public registry — it does not run terraform apply. Pair with a local Terraform execution step that you control.
Connects agents to MongoDB and Atlas clusters. Supports structured queries, schema introspection, and aggregation pipelines. Built-in auth and access control support, which puts it above most community MongoDB wrappers.
When to use: Document-store analytics, debugging MongoDB query performance, natural-language data exploration on Atlas clusters. The official auth integration makes it safer to use against real data than a community wrapper.
Fetches web content and converts it to Markdown for efficient processing by the model. Handles basic HTML-to-text extraction, which covers the majority of documentation and article reading use cases without spinning up a full browser.
When to use: Documentation lookup, reading articles, pulling changelog pages during research. Use Playwright MCP when JavaScript rendering is required; Fetch MCP when it is not.
Read, create, and update Notion pages, databases, and blocks through the Notion API. Useful for agents that need to write findings to a shared workspace, update project databases, or pull structured data from Notion tables.
When to use: Project management agents, documentation generators, any workflow where the output needs to land in a shared Notion workspace. Rate limit is 3 requests/second; batch writes accordingly.
What all 15 have in common (and why it matters)
Every server on this list requires a credential: a token, an API key, or a connection string. According to the Astrix 2025 MCP security report, 79% of API keys across the MCP server landscape are passed via environment variables, which is the right method, but 53% of those are long-lived static secrets rather than the short-lived OAuth tokens that modern auth standards prefer. Only 8.5% of surveyed implementations use OAuth.
That gap matters because a stolen environment variable is a stolen credential. There is no expiry, no rotation, no audit trail. If your agent process is compromised, every credential in its environment is compromised.
"Most clients simply accept tool descriptions without rigorous validation."
arXiv 2603.22489 — MCP Threat Modeling and Tool Poisoning Analysis, 2026The other shared risk is tool poisoning. arXiv 2603.22489 describes this specifically: malicious instructions embedded in tool metadata, not in tool outputs. You install a server that looks safe. Its tool descriptions contain hidden instructions that redirect the model's behavior. The MCPTox benchmark tested 20 agents across 45 real-world MCP servers and 353 tools, and found an attack success rate of 72.8% against o1-mini. More capable models were often more vulnerable, not less, because the attacks exploit instruction-following ability.
The practical implication: audit the tool descriptions of any server before you install it, especially community servers not on this list. A description that contains unusual instructions about how to handle "special cases" or "administrator overrides" is a red flag.
How to evaluate an MCP server you find elsewhere
The servers above have active organizational backing. Most of what you will find in the wild does not. Before installing any server, check these five things in order:
- Maintainer identity. Is there a person or organization with a verifiable identity behind the repo? Anonymous repos with no commit history outside the initial push are high risk.
- Tool description contents. Read every tool description in the server's source code. They should describe capabilities, not instruct the model on behavior.
- Credential handling. Does the server document where credentials go? Anything that logs credentials, sends them to a remote endpoint, or stores them in a file it creates is disqualified.
- Dependency count. A server with 40 transitive dependencies has a much larger supply chain attack surface than one with 3. SlowMist explicitly flags supply chain attacks as a primary MCP risk vector.
- Permission scope. What does the server actually need? A documentation-lookup server that requests filesystem write access or network egress beyond its stated API is asking for more than it needs.
None of these checks require deep security expertise. They require reading the code before you run it.
Want a structured audit of your MCP setup?
Septim Spire is a one-time technical audit: we work through your actual server configuration against the SlowMist checklist and the OWASP MCP Top 10, document every finding with severity and remediation steps, and deliver a written report within 5 business days.
Septim Spire — full audit, $199 →Further reading
- MCP server vulnerability checklist 2026 — the 24-point interactive checklist based on SlowMist's framework.
- What is the Model Context Protocol? — the plain-English primer if you are new to MCP.
- SlowMist MCP Security Checklist — the primary source for the security framework referenced throughout this post.
- arXiv 2603.22489: MCP Threat Modeling and Tool Poisoning — the academic analysis of client-side MCP vulnerabilities.
- Astrix: State of MCP Server Security 2025 — the statistical survey of credential handling across 5,205 MCP repos.