
MCP + Skills: How to Make Your SaaS a Real Tool for AI Agents in 2026

AI agents do not click your dashboard. They call MCP servers and follow Skills. Ship both, or stay invisible inside Claude, Cursor, ChatGPT and Copilot agent workflows.

Mikhail Savchenko · April 26, 2026 · 6 min read
Tags: MCP · Skills · AI Agents · Automation

MCP (Model Context Protocol) is the open JSON-RPC standard that lets AI agents call your SaaS like an API. Skills are markdown-packaged playbooks that tell agents when and how to use it. SaaS products that ship only one - or neither - become invisible inside Claude, Cursor, ChatGPT and Copilot agent workflows. The 2026 minimum is one MCP server, one Skill, registry-listed.

Key facts

  • MCP SDK downloads grew from ~100K/month at launch (Nov 2024) to 97M/month by March 2026 - roughly 970x in 16 months.
  • 10,000+ public MCP servers were active when MCP joined the Linux Foundation in Dec 2025; an independent Q1 2026 census indexed 17,468.
  • The reference repository modelcontextprotocol/servers reached ~84K GitHub stars by April 2026.
  • Claude lists 75+ first-party MCP connectors in its directory; Cursor, Windsurf, VS Code, ChatGPT and Goose all support MCP natively.
  • All four hyperscaler-adjacent vendors (OpenAI, Google, Microsoft, Salesforce) shipped MCP support within 13 months of launch.

What changed in 2026

AI agents do not click your dashboard. They call your tools. The interface that mattered in 2015 (web app), 2020 (REST API), and 2024 (chat UI) is being replaced by a new one: the agent calls a tool, the tool returns data, the agent decides the next step. If your SaaS is not callable by an agent, it is not in the workflow.

Two open standards now define what "callable by an agent" means:

  • MCP (Model Context Protocol) - hands. Open JSON-RPC standard from Anthropic, donated to the Linux Foundation in Dec 2025. Lets agents discover and invoke your tools.
  • Skills - brain. Markdown packages, made an open standard at agentskills.io in Dec 2025. Tell the agent when and how to use the tools.

Ship both, get into the registries, and your SaaS becomes a real tool inside Claude, Cursor, Windsurf, ChatGPT, and Copilot. Skip them and you become invisible.

MCP at a glance

MCP is JSON-RPC 2.0 over three transports: stdio (local), Streamable HTTP (remote, recommended since the 2025-03 spec), and SSE (deprecated). Servers expose three primitives:

| Primitive | Who triggers it | Example |
|-----------|-----------------|---------|
| Tools | Model | `analyze_url(url)`, `generate_llms_txt(domain)` |
| Resources | Model reads | A markdown report at `report://2026-04/analyze/{id}` |
| Prompts | User invokes | `/audit-aeo` template that pre-fills the analyze flow |
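On the wire, a tool invocation is a plain JSON-RPC 2.0 request. A minimal sketch of the envelope an MCP client sends for a `tools/call`, using the `analyze_url` tool from the table above (the URL is illustrative):

```python
import json

# The JSON-RPC 2.0 request an MCP client sends to invoke a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_url",
        "arguments": {"url": "https://example.com"},
    },
}

# A typical success response: tool output comes back as content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "AEO score: 72/100"}],
        "isError": False,
    },
}

wire = json.dumps(request)
print(wire)
```

The same envelope works over all three transports; only the channel carrying the bytes changes.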

The numbers are unambiguous. SDK downloads grew from ~100K/month at launch (Nov 2024) to 97M/month by March 2026 - roughly 970x in 16 months. The official modelcontextprotocol/servers repo passed 84K GitHub stars by April 2026. The Q1 2026 Nerq census indexed 17,468 public MCP servers across registries.

Adoption is no longer optional. OpenAI, Google, Microsoft, Salesforce, Atlassian, HubSpot, Notion, Cloudflare, Sentry, Figma, Canva, Zapier, ActiveCampaign, Apollo, LinkedIn - all shipped MCP servers within 13 months of launch.

Skills at a glance

A Skill is a folder with one required file: SKILL.md. YAML frontmatter declares name and description; the markdown body explains the procedure. Optional siblings: scripts, examples, reference docs.

```
inite-aeo-analyzer/
  SKILL.md
  examples/
    good-llms-txt.md
    sample-citation-audit.md
  schema-templates/
    organization.json
    faqpage.json
```
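As a sketch, the SKILL.md at the root of that tree might look like this - `name` and `description` are the two required frontmatter keys; the body text is illustrative:

```markdown
---
name: inite-aeo-analyzer
description: Audit a URL for AI Engine Optimization (AEO), citation
  likelihood in ChatGPT/Claude/Perplexity, llms.txt generation, or
  schema and internal-link gaps.
---

# AEO audit procedure

1. Call `analyze_url` with the target URL.
2. Then call `audit_schema`, then `suggest_internal_links`.
3. Format the combined output as a prioritized punch list.
4. For llms.txt requests, call `generate_llms_txt` and compare the
   result against `examples/good-llms-txt.md`.
```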

The mechanism is progressive disclosure: at startup the agent loads only the name and description from every installed Skill into its system prompt. The body loads only when the description matches user intent. This keeps context cheap and lets you ship dozens of Skills without bloat.
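A toy sketch of that mechanism - not the real agent runtime, which uses the model itself to match intent, but enough to show why context stays cheap:

```python
# Only name + description live in the system prompt at startup;
# the SKILL.md body is loaded lazily when intent matches.
SKILLS = {
    "inite-aeo-analyzer": {
        "description": "audit a url for aeo, citation likelihood, llms.txt",
        "body_path": "inite-aeo-analyzer/SKILL.md",  # read on demand
    },
}

def startup_prompt() -> list[str]:
    # The always-loaded, cheap metadata: one line per installed Skill.
    return [f"{name}: {meta['description']}" for name, meta in SKILLS.items()]

def maybe_load_body(user_message: str):
    # Naive keyword matching stands in for the model's intent matching.
    for meta in SKILLS.values():
        if any(t in user_message.lower() for t in meta["description"].split(", ")):
            return meta["body_path"]  # body would enter context here
    return None

print(startup_prompt())
print(maybe_load_body("please audit a url for aeo issues"))
```

Dozens of installed Skills cost dozens of one-line descriptions, not dozens of full playbooks.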

Skills launched at Anthropic on Oct 16, 2025, became an open standard on Dec 18, 2025, and are now supported by Microsoft (VS Code, GitHub), Cursor, Goose, Amp and OpenCode. Launch directory partners include Atlassian, Canva, Cloudflare, Figma, Notion, Ramp and Sentry.

Why your SaaS needs both

A clean separation:

  • MCP = connectivity. "Give me access to the database."
  • Skill = procedural knowledge. "When querying the DB, always filter by tenant_id; format output as a markdown table."
  • System prompt = always-on persona. No progressive disclosure.
  • Sub-agent = isolated context window for a heavyweight task.

Ship MCP without a Skill: agents have hands but no playbook. They call analyze_url then panic. Ship a Skill without MCP: the agent reads the procedure, then fabricates outputs because it has no real connectivity. Ship both: the agent reads your Skill, calls your MCP server in the right order, returns a verifiable result.

A worked example: the inite.ai MCP server

For a B2B AEO/SEO analyzer, the surface looks like this:

| Tool | Job | Output cap |
|------|-----|------------|
| `analyze_url(url)` | Run the full AEO audit | 4 KB summary + report URL |
| `get_citation_lift(url, engines)` | Score Perplexity/Google AIO/ChatGPT citation likelihood | 1 KB JSON |
| `generate_llms_txt(domain)` | Produce ready-to-deploy llms.txt | text/markdown blob |
| `audit_schema(url)` | Detect missing FAQPage, Organization, BreadcrumbList | 2 KB JSON |
| `suggest_internal_links(url)` | Map opportunities for cross-linking | 2 KB JSON |
| `get_keyword_gap(domain, competitor)` | Surface uncovered queries | 3 KB JSON |
| `score_aeo_readiness(url)` | Single 0-100 score for prioritization | 256-byte int |

Each tool description (in the MCP schema) reads like ad copy: verbs, examples, bounded outputs. The Skill (inite-aeo-analyzer/SKILL.md) tells the agent: "Use when the user wants to audit a URL for AI Engine Optimization, citation likelihood in ChatGPT/Claude/Perplexity, llms.txt generation, or schema/internal-link gaps. Call analyze_url first, then audit_schema, then suggest_internal_links. Format the output as a prioritized punch list."
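A hedged sketch of what one entry in the server's `tools/list` response could look like - the field names (`name`, `description`, `inputSchema`) follow the MCP tool schema; the copy itself is illustrative:

```python
# One tool entry as exposed via MCP tool discovery. The description is
# written for an LLM reader: verbs, concrete trigger examples, and an
# explicit output bound.
analyze_url_tool = {
    "name": "analyze_url",
    "description": (
        "Run a full AEO audit on a single URL. Returns a 4 KB summary plus "
        "a link to the full report. Use for requests like 'why is my page "
        "not cited by ChatGPT' or 'audit example.com for AI search'."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "Absolute URL to audit"},
        },
        "required": ["url"],
    },
}
```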

That is the unit of distribution in 2026. Not a dashboard. Not a REST endpoint. An MCP server with a Skill.

A 90-day playbook

Days 0-14 - Surface mapping. Pick 5-8 core jobs-to-be-done. Decide which become Tools (write/compute), Resources (read-only reports), Prompts (user templates).

Days 15-35 - Build the MCP server. Use the TypeScript or Python SDK from modelcontextprotocol. Deploy as Streamable HTTP behind your existing API gateway. Reuse existing auth via OAuth 2.1 with dynamic client registration. Map each tool 1:1 to internal endpoints. Write LLM-optimized descriptions in the JSON Schema description fields.
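The 1:1 mapping step can be as thin as a routing table, keeping the MCP layer a shim over endpoints you already run. A stdlib-only sketch (the internal paths are hypothetical):

```python
# Each MCP tool name routes to exactly one existing internal endpoint,
# so the MCP server adds discovery and auth but no new business logic.
TOOL_TO_ENDPOINT = {
    "analyze_url": "/internal/v1/analyze",
    "audit_schema": "/internal/v1/schema-audit",
    "generate_llms_txt": "/internal/v1/llms-txt",
}

def dispatch(tool_name: str, arguments: dict) -> tuple[str, dict]:
    """Resolve an MCP tool call to (endpoint, payload) for the API gateway."""
    endpoint = TOOL_TO_ENDPOINT.get(tool_name)
    if endpoint is None:
        raise KeyError(f"unknown tool: {tool_name}")
    return endpoint, arguments
```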

Days 36-55 - Build the Skill. One SKILL.md per workflow. Frontmatter description must contain trigger phrases users actually type. Body documents invocation order, output formatting rules, and edge cases. Bundle examples and reference assets.

Days 56-75 - Distribution. Submit to the LF registry, PulseMCP, Smithery, Composio Hub, Cursor directory, Claude connectors directory. Open-source the Skill on GitHub. Submit to agentskills.io. Add "Install in Claude / Cursor / Windsurf" buttons to your site. Publish /.well-known/mcp.json.

Days 76-90 - Measurement and iteration. Instrument MCP traffic by tool, by client (via UA), by success rate. Add a /metrics Resource for power users. Publish a public changelog. Launch on Hacker News and the MCP subreddit.
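A minimal sketch of that instrumentation step: count calls by (tool, client, outcome) so dead-weight tool surfaces become visible. In a real deployment the client string would be parsed from the User-Agent header.

```python
from collections import Counter

calls = Counter()

def record(tool: str, client: str, ok: bool) -> None:
    # One counter bucket per (tool, client, outcome) triple.
    calls[(tool, client, "ok" if ok else "error")] += 1

record("analyze_url", "claude", True)
record("analyze_url", "cursor", False)
record("score_aeo_readiness", "claude", True)

# Roll up to per-tool traffic: which surfaces are actually used?
per_tool = Counter()
for (tool, _client, _status), n in calls.items():
    per_tool[tool] += n
print(per_tool.most_common())
```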

By day 90, your SaaS is a real tool inside the major agent surfaces - and your competitors who skipped this are still waiting for users to log in.

What kills adoption

Nine failure modes account for most stalled MCP rollouts:

  1. Vague tool descriptions. The model never calls the tool. Treat description fields like ad copy.
  2. Unbounded JSON outputs. Blow the context window. Cap, paginate, offer summary=true.
  3. Per-call API keys instead of OAuth. Kill install-time conversion. Ship OAuth 2.1 from day one.
  4. MCP without a Skill. Hands without a brain. Wrong tools fire in wrong order.
  5. Skill without MCP. Brain without hands. Agent fabricates outputs.
  6. Stdio-only deployment. Unusable from cloud Claude or ChatGPT.
  7. Ignoring the registry. Discovery is the moat.
  8. No telemetry. Cannot tell which tool surfaces are dead weight.
  9. Generic Skill description. Never auto-loaded. Include trigger phrases users actually type.
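The fix for failure mode 2 can be a few lines at the response boundary. A sketch, reusing the 4 KB cap from the analyze_url example (the summary/full split is an assumption about how the tool is structured):

```python
MAX_BYTES = 4096  # the 4 KB output cap

def bounded_result(full_report: str, summary: str, summarize: bool = True) -> str:
    """Return the summary by default; hard-cap whatever is returned."""
    text = summary if summarize else full_report
    if len(text.encode("utf-8")) > MAX_BYTES:
        # Truncate at the byte cap, dropping any split multibyte char.
        text = text.encode("utf-8")[:MAX_BYTES].decode("utf-8", errors="ignore")
    return text
```

Pagination or a follow-up Resource URL covers the cases where the agent genuinely needs the full report.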

The bottom line

The MCP curve is the steepest standard adoption curve in developer tools since REST. 970x SDK growth in 16 months, 17K+ public servers, every major hyperscaler shipping support inside 13 months. Skills closed the loop in late 2025 by giving agents the procedural knowledge MCP alone could not encode.

The 2026 minimum for a SaaS that wants to be inside agent workflows: one MCP server, one Skill, registry-listed, OAuth 2.1, instrumented. Ship that in 90 days. Skip it and your product becomes a website that nobody visits because the agent went to a competitor that took the call.

Frequently Asked Questions

Is MCP just an API for AI - why can my agent not call my REST API directly?

It can, but it will not. Agents need standardized discovery, schemas, auth, and per-tool descriptions designed for LLM context windows. MCP gives them a registry, JSON-RPC over Streamable HTTP, OAuth 2.1, and a contract that every major client (Claude, Cursor, Windsurf, ChatGPT) already speaks. A REST API forces the agent to read your docs every time. An MCP server is callable on first install.

Do I need both an MCP server AND a Skill, or is one enough?

MCP is hands - it gives the agent the ability to call your tools. A Skill is a brain - it tells the agent when to call which tool, in what order, and how to format the output. Ship MCP without a Skill and agents fire wrong tools in wrong order. Ship a Skill without MCP and the agent fabricates outputs. For non-trivial workflows, ship both.

What transport should I pick - stdio, SSE, or Streamable HTTP?

Streamable HTTP for any remote SaaS. The 2025-03 spec replaced SSE with a single endpoint that handles POST and optional SSE streaming. Stdio is for local CLIs only. SSE is deprecated. If your SaaS lives in the cloud, Streamable HTTP is non-negotiable - it is the only transport ChatGPT, remote Claude, and managed Cursor talk to.

How do AI agents discover my MCP server?

Four channels. Submit to the official Linux Foundation registry at registry.modelcontextprotocol.io. List on PulseMCP, Smithery, and Composio Hub. Get into the Cursor and Claude connector directories. Publish /.well-known/mcp.json on your domain. Servers not in at least three of those four are functionally invisible.

What does authentication look like for a remote MCP server?

OAuth 2.1 with dynamic client registration. Cursor, Claude Code, Windsurf and ChatGPT all support it natively as of late 2025. Per-call API keys still work but kill install-time conversion - users have to copy keys into config files. Ship OAuth from day one.
