How Inite Builds Vertical AI Products: One Engine, Many Skins
Inite is not a stack of separate products. It is one AI-visibility engine with five vertical skins - rent, health, estate, shop, digital. Same pipeline, same schema, same agent-callable surface. A new vertical is cloned in four weeks.
Inite is one AI-visibility engine packaged into vertical SaaS skins - inite.ai (the core), inite.rent, inite.health, inite.estate, inite.shop, and inite.digital (content.inite.ai). Every vertical ships the same pipeline (8,130 LOC of shared AI), the same Postgres schema, the same AEO content surface, the same agent-callable API. What changes per vertical is a 100-line config: company copy, KPI dashboard, ai.txt and llms.txt.
Key facts
- Shared AI pipeline: 29 modules, 8,130 lines of code, 25 independently retryable activities, 17 analysis artifacts per run.
- Diagnostic-to-handover cycle per vertical: 2-4 weeks (versus an industry median of 14 weeks per McKinsey CIO Survey 2026).
- Five vertical skins plus the core, all on one template: inite.ai (B2B SaaS AEO, the core), inite.rent, inite.health, inite.estate, inite.shop, inite.digital.
- Every vertical ships the four-file AI identity (llms.txt, ai.txt, identity.json, robots-ai.txt) - a 1.6x higher correct-citation rate in Perplexity.
- Every vertical exposes an MCP-callable API surface: agents call analyze_url, get_citation_lift, generate_llms_txt, audit_schema, score_aeo_readiness.
Inite is one machine
Inite is not a portfolio of separate products. It is one AI-visibility engine packaged into vertical skins. The engine takes a URL, runs a 9-step diagnostic across AI identity, citation likelihood, schema, content quality and retrieval gap, then ships a 90-day plan with content briefs, an internal-link map, an outreach kit and a ready-to-deploy llms.txt.
That engine lives in lib/ai/: 29 modules, 8,130 lines of code, 25 independently retryable activities, 17 analysis artifacts per run. Every vertical - inite.ai, inite.rent, inite.health, inite.estate, inite.shop, inite.digital - calls the same engine. What changes per vertical is roughly 100 lines of config: copy, KPI dashboard, ai.txt and llms.txt.
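That per-vertical config can be pictured as a single typed object. The sketch below is illustrative only: the field names (`verticalId`, `kpis`, `aiIdentity`, and so on) are assumptions for this article, not Inite's actual schema; the point is that only copy, KPIs and the four AI identity files vary while the engine stays shared.

```typescript
// Illustrative sketch of a per-vertical config. Field names are assumptions,
// not Inite's real schema. Only copy, KPIs and the AI identity files change
// per vertical; the shared engine consumes this object unchanged.
interface VerticalConfig {
  verticalId: string;            // e.g. "rent", "health"
  domain: string;                // e.g. "inite.rent"
  copy: { headline: string; audience: string };
  kpis: string[];                // vertical-specific dashboard metrics
  aiIdentity: {                  // the four-file AI identity surface
    llmsTxt: string;
    aiTxt: string;
    identityJson: Record<string, unknown>;
    robotsAiTxt: string;
  };
}

const rent: VerticalConfig = {
  verticalId: "rent",
  domain: "inite.rent",
  copy: {
    headline: "AI visibility for property managers",
    audience: "Property managers, short-term rental operators",
  },
  kpis: ["vacancy_rate", "rental_income", "listing_visibility"],
  aiIdentity: {
    llmsTxt: "# inite.rent\n> AI-visibility engine for rentals.",
    aiTxt: "contact: ai@inite.rent",
    identityJson: { name: "inite.rent", category: "proptech" },
    robotsAiTxt: "User-agent: *\nAllow: /",
  },
};

console.log(rent.domain, rent.kpis.length); // inite.rent 3
```

Everything else - pipeline, schema, auth, billing - reads the same object shape regardless of which vertical supplied it.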
The same model shows up across the most successful vertical SaaS companies of the last decade. Toast did it for restaurants on top of a generic POS. Procore did it for construction on top of generic project management. Inite is doing it for AI visibility - one engine, many vertical skins.
The thesis: AEO + agent-callable + vertical packaging
Three layers, in this order:
- AEO-first content surface. Every vertical publishes the four-file AI identity (llms.txt, ai.txt, identity.json, robots-ai.txt), a Direct Answer Block under the first H2 of every page, FAQPage schema, and statistical anchoring at three or more facts per 300 words. The point: Perplexity, Google AI Overview, ChatGPT and Copilot cite the vertical correctly.
- Agent-callable surface. Every vertical ships an MCP (Model Context Protocol) server and a matching Claude Skill. Agents in Claude, Cursor, Windsurf, ChatGPT and Copilot can call analyze_url, get_citation_lift, generate_llms_txt, audit_schema, score_aeo_readiness without ever loading the dashboard.
- Vertical packaging. Same engine, same schema, vertical-specific KPIs and workflows. The unit of go-to-market is the vertical, not the underlying engine.
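As an illustration of the first layer, a minimal llms.txt for one vertical might look like the following. The URLs and descriptions are invented for the example; only the file structure (H1 name, blockquote summary, link sections) follows the llms.txt convention.

```txt
# inite.rent
> AI-visibility engine for property managers and short-term rental operators.

## Docs
- [AEO diagnostic](https://inite.rent/docs/diagnostic): the 9-step URL audit and scoring model
- [KPI dashboard](https://inite.rent/docs/kpis): vacancy rate, rental income, listing visibility

## Optional
- [Changelog](https://inite.rent/changelog): public release notes
```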
Skip layer one and AI engines do not cite you. Skip layer two and AI agents cannot use you. Skip layer three and the product is too generic to win in any single market. All three are non-optional in 2026.
The repeatable template
Five layers, identical across every vertical:
| Layer | What it is | Where it lives |
|---|---|---|
| AI pipeline | 9-step diagnostic, 25 activities, 17 artifacts | lib/ai/pipeline.ts and 28 sibling modules |
| Database | Postgres schema with User, Diagnostic, SiteAnalyzeReport, Subscription | prisma/schema.prisma |
| Web app | Next.js 16 App Router with [lang] localization | app/ |
| AEO content surface | llms.txt, ai.txt, identity.json, robots-ai.txt, FAQPage schema | app/api/seo/, app/feed.xml/, content layer |
| Agent-callable API | MCP server + Claude Skill | new app/api/mcp/ route + skills/ package |
Adding a vertical means duplicating the copy layer, swapping the KPI dashboard, updating the four AI identity files, and pointing payment and auth at the same Postgres. The engine, the AEO surface, the agent surface, the auth, the billing, the AI provider integration - all reused.
This is why a new vertical ships in four weeks, not 14. The template is the moat.
The verticals
| Vertical | Audience | KPI surface | Status |
|---|---|---|---|
| inite.ai | B2B SaaS founders and growth teams | AEO score, citation lift, content briefs, link map | Live |
| inite.rent | Property managers, short-term rental operators | Vacancy rate, rental income, listing visibility | On template |
| inite.health | Clinics, telehealth providers | Patient flow, wait times, appointment funnel | On template |
| inite.estate | Real-estate agencies, brokerages | Deal pipeline, listing exposure, lead routing | On template |
| inite.shop | E-commerce operators, DTC brands | Catalog AEO score, AI-driven discovery, reviews | On template |
| inite.digital (also content.inite.ai) | Content teams, agencies, publishers | Content quality, citation share, internal-link health | On template |
Each vertical inherits the full engine. Each adds a small adapter for KPI computation and a small theme for the dashboard. Nothing else changes.
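The adapter can be sketched as a pure function from shared engine output to the vertical's KPI surface. This is a hypothetical shape, assuming the engine returns a generic score map; none of these type or KPI names come from Inite's codebase.

```typescript
// Hypothetical adapter layer: maps shared engine output to vertical KPIs.
// EngineRun and the score/KPI names are assumptions for illustration.
type EngineRun = { scores: Record<string, number> };

type KpiAdapter = (run: EngineRun) => Record<string, number>;

// Each vertical reads only the engine scores it cares about.
const rentAdapter: KpiAdapter = (run) => ({
  listing_visibility: run.scores["aeo_readiness"] ?? 0,
  citation_lift: run.scores["citation_lift"] ?? 0,
});

const healthAdapter: KpiAdapter = (run) => ({
  appointment_funnel: run.scores["aeo_readiness"] ?? 0,
  citation_lift: run.scores["citation_lift"] ?? 0,
});

// One engine run feeds any adapter; the run itself never changes per vertical.
const run: EngineRun = { scores: { aeo_readiness: 72, citation_lift: 1.6 } };
console.log(rentAdapter(run).listing_visibility); // 72
```

The design choice is that adapters are pure functions over a shared artifact: adding a vertical adds one function, not one pipeline.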
Why this shape works
Three economic reasons:
Maintenance compounds against you in horizontal SaaS. A horizontal AEO platform has to satisfy every category. A vertical instance only has to satisfy one. Same engine underneath, but every vertical can ship category-specific copy, examples, dashboards, and integrations without the others slowing it down.
Distribution compounds for you in vertical SaaS. A property manager will pay attention to inite.rent in a way they never would to a generic AEO tool. A clinic operator will pay attention to inite.health. The marketing surface is sharper, the sales conversation is shorter, the case studies are concrete.
LLM cost is the only marginal cost. The pipeline is 25 pure async activities; the database is shared; the front end is shared. The only thing that scales with usage is Anthropic and OpenAI tokens. Every vertical has the same gross margin profile because every vertical runs the same engine.
A four-week vertical clone
Week 1 - Discovery and AEO audit. Run the existing engine against 5-10 sample customers in the vertical. Identify the 1-3 highest-impact workflows. Define the KPI dashboard. Write the vertical-specific llms.txt, ai.txt, identity.json, robots-ai.txt.
Week 2 - Pipeline configuration. Wire the existing pipeline to vertical-specific outputs. Add the KPI computation modules. No engine rewrite - configuration only.
Week 3 - Production workflows. Deploy the 1-3 workflows. Hook them into the existing rate limiter, payment, auth, and email funnel. Wire the MCP server endpoints for the new vertical-specific tools.
Week 4 - Handover and registry submission. Ship the dashboard. Submit the MCP server to the Linux Foundation registry, PulseMCP, Smithery, Composio Hub. Submit the Skill to agentskills.io. Publish a Direct Answer Block on every key landing page. Open a public changelog.
By the end of week 4 the vertical is live, callable by AI agents, and indexed by the AEO surfaces. The 14-week median that McKinsey reports for enterprise AI deployments collapses to four because nothing was built from scratch.
What this means for the market
Most AI-native companies in 2026 will follow one of two paths.
The first path: build a horizontal platform, then try to retrofit verticals on top. This is what most large incumbents are doing. The maintenance is brutal, the marketing is generic, and AI agents see one big surface that is hard to reason about.
The second path: build a sharp engine once, then package it into vertical skins, each with its own AEO surface and agent-callable API. This is the Inite model. Every vertical is independently discoverable by AI engines, independently callable by AI agents, and independently sellable to a specific customer.
The 2026 winning shape for AI-native SaaS is vertical packaging on top of a shared engine - with AEO and MCP as the two non-negotiable surfaces.
The bottom line
Inite is one AI-visibility engine, packaged into five vertical SaaS skins, all built on the same Next.js + Prisma + Anthropic/OpenAI template, all exposing the same AEO content surface and the same agent-callable API.
The engine is 8,130 lines of shared code. A new vertical is 100 lines of config and four weeks of work. Every vertical is cited correctly by AI engines because of the AEO surface, callable by AI agents because of the MCP server and Skill, and economically defensible because LLM tokens are the only marginal cost.
This is what an AI-native company looks like in 2026: one engine, many skins, both surfaces (content and agent), shipped per vertical in four weeks.
Frequently Asked Questions
Why one engine instead of five separate products?
Because the hard part is the pipeline, not the surface. The 9-step diagnostic that takes a URL, audits AI identity, scores citation likelihood, mines customer pain points, classifies retrieval gaps, and generates a 90-day plan is the same problem in every vertical. What changes is the KPI dashboard and the copy. Building five engines means five times the maintenance for one times the value.
What actually changes per vertical?
Three things. (1) The KPI surface - inite.rent tracks vacancy and rental income; inite.health tracks patient flow and wait times; inite.estate tracks deal pipeline. (2) The content layer - vertical-specific llms.txt, ai.txt, FAQ pairs, and direct answer blocks. (3) The 1-3 production workflows scoped to the vertical. The pipeline, schema, auth, payment, AI calls, and AEO surface are identical.
How does a vertical ship in four weeks?
Week 1: discovery questionnaire and AEO audit of the existing site. Weeks 2-3: deploy 1-3 production workflows on the shared pipeline. Week 4: handover with the KPI dashboard wired and the AI identity files (llms.txt, ai.txt, identity.json, robots-ai.txt) live. Median industry time for a comparable AI deployment is 14 weeks - the template is the moat.
How does this make Inite usable by AI agents directly?
Every vertical exposes an MCP server (Model Context Protocol) plus a Skill. Agents in Claude, Cursor, ChatGPT and Copilot can call analyze_url, generate_llms_txt, audit_schema, score_aeo_readiness without ever opening the dashboard. The product is callable, not clickable. The AEO content surface tells AI engines what the product is; the MCP server lets them use it.
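A minimal picture of that callable surface, without the real MCP SDK: a dispatch table keyed by tool name. The handler bodies and return shapes below are invented for illustration; a production server would register these tools through an MCP server library rather than a plain object.

```typescript
// Sketch only: tool names match the article, but handlers are stubs and a
// real vertical would wire these into an MCP server, not a plain object.
type ToolHandler = (args: { url: string }) => { tool: string; url: string };

const tools: Record<string, ToolHandler> = {
  analyze_url: (a) => ({ tool: "analyze_url", url: a.url }),
  generate_llms_txt: (a) => ({ tool: "generate_llms_txt", url: a.url }),
  audit_schema: (a) => ({ tool: "audit_schema", url: a.url }),
  score_aeo_readiness: (a) => ({ tool: "score_aeo_readiness", url: a.url }),
};

// An agent's call reduces to: look up the tool by name, invoke with arguments.
function callTool(name: string, args: { url: string }) {
  const handler = tools[name];
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args);
}

console.log(callTool("analyze_url", { url: "https://example.com" }).tool); // analyze_url
```

"Callable, not clickable" means exactly this: the agent never sees a dashboard, only a named tool and its arguments.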
Which verticals are live, and which are next?
inite.ai (the core) is live for B2B SaaS AEO. inite.rent, inite.health, inite.estate, inite.shop and inite.digital (also at content.inite.ai) ship on the same template. The roadmap is sequenced by demand: any vertical with a real pilot client moves to the front. The cloning cost is 100 lines of config and four weeks - the bottleneck is sales, not engineering.
Keep reading
MCP + Skills: How to Make Your SaaS a Real Tool for AI Agents in 2026
AI agents do not click your dashboard. They call MCP servers and follow Skills. Ship both, or stay invisible inside Claude, Cursor, ChatGPT and Copilot agent workflows.
AEO Complete Guide 2026: How to Get Cited by ChatGPT, Perplexity & Google AI Overview
Answer Engine Optimization is the new SEO. A practical 2026 playbook to get your business cited by ChatGPT, Perplexity, Google AI Overview and Copilot - with measurable steps and benchmarks.
What Is llms.txt and Why Every Site Needs One in 2026
llms.txt is the de-facto standard for telling AI engines who you are and how to interpret your content. A complete guide with template, validator checklist, and adoption data.