SHARC

Semantic Code Search
Built for AI Agents

Give your AI agent deep codebase understanding. MCP-native. Works with Claude Code, Cursor, Codex, and more.

Works with your favorite AI tools

Anthropic
Cursor
Windsurf
Cline
OpenCode
Direct Retrieval

Stop feeding your best model secondhand context

Most AI agents delegate code exploration to a smaller, cheaper model. That model filters and summarizes before the reasoning model ever sees anything — silently dropping context along the way. SHARC puts the query in the reasoning model's hands directly.

The model that needs to understand your code is the same one that queries and reads it.
No summarization. No intermediate model. No silent information loss.

Now see it in practice

Even when the reasoning model explores directly, file-by-file traversal burns tokens and misses context. One semantic query replaces dozens of reads.

Classic
> How does chat history persistence work in this codebase?

- Search (**/*.ts)
  - Found 64 files
- Search (**/*.tsx)
  - Found 96 files
- Search (chat.*history|saveChat|database)
  - Found 29 files
- Read (lib/db/schema.ts)
  - Read 174 lines
- Read (lib/db/queries.ts)
  - Read 594 lines
- Read (app/(chat)/api/history/route.ts)
  - Read 47 lines
- Read (components/sidebar-history.tsx)
  - Read 370 lines

------------------------------------------------------

Chat history uses Drizzle ORM with PostgreSQL. The schema defines User, Chat,
Message_v2, and Vote_v2 tables...
VS
SHARC MCP
> How does chat history persistence work?

- sharc - search_code (query: "chat history persistence", limit: 3) (MCP)
  - Found 3 results for query: "chat history persistence"

  1. Code snippet (typescript) [ai-chatbot]
     Location: lib/db/queries.ts:83-105
     Score: 0.9847
     ... +22 lines (ctrl+o to expand)

  2. Code snippet (typescript) [ai-chatbot]
     Location: lib/db/queries.ts:157-180
     Score: 0.9623
     ... +18 lines (ctrl+o to expand)

  3. Code snippet (typescript) [ai-chatbot]
     Location: app/(chat)/api/chat/route.ts:162-173
     Score: 0.9418
     ... +8 lines (ctrl+o to expand)

------------------------------------------------------

Chat persistence uses Drizzle ORM with saveChat() for creation and
getChatsByUserId() for retrieval with cursor-based pagination.
10x fewer tool calls
33x less code to read
15x faster results
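The scores in the SHARC transcript above come from vector similarity: the query and each code snippet are embedded, and snippets are ranked by how close their vectors sit to the query vector. A minimal sketch of that idea, using cosine similarity — the vectors and index here are hand-made stand-ins, not SHARC's actual embedding model:

```python
# Toy semantic search: rank code snippets by cosine similarity
# between a query vector and precomputed snippet vectors.
# A real system would produce these vectors with a learned encoder;
# the numbers below are illustrative stand-ins.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, index, limit=3):
    """Return the top-`limit` (score, location) pairs, best first."""
    scored = [(cosine(query_vec, vec), loc) for loc, vec in index.items()]
    scored.sort(reverse=True)
    return scored[:limit]

# Hypothetical index: snippet location -> embedding vector.
index = {
    "lib/db/queries.ts:83-105": [0.9, 0.1, 0.2],
    "app/(chat)/api/chat/route.ts:162-173": [0.7, 0.3, 0.1],
    "components/weather.tsx:1-40": [0.1, 0.9, 0.8],
}

# Stands in for embed("chat history persistence").
query = [1.0, 0.1, 0.2]

for score, loc in search(query, index):
    print(f"{score:.4f}  {loc}")
```

The persistence-related snippets score near the top while the unrelated component falls away, which is why one semantic query can replace a chain of glob searches and full-file reads.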

Product Roadmap

Where we've been and where we're headed

Q3 2025
Complete

Research & Experimentation

Basic embeddings and semantic search exploration

Internal evaluation loops, dataset curation, and early retrieval baselines.
Q4 2025
Complete

Core Models

SHARC embedding model, reranking & MCP prototype

Iterated on training recipes, reranker calibration, and MCP tool design.
Q1 2026
Current

Public Launch

MCP tool + Inference API goes live

Docs, onboarding, rate limits, and production telemetry.
Q2 2026+
Planned

Code Review Agents

AI-powered code review agents built on SHARC-MCP

Automated review workflows, codebase-aware suggestions, and deep semantic analysis powered by SHARC-MCP.
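Since the MCP tool is live, wiring it into an MCP-capable client is a one-entry config change. A hypothetical Claude Code `.mcp.json` entry — the `sharc` command and its arguments are placeholders here, so check the docs for the real invocation:

```json
{
  "mcpServers": {
    "sharc": {
      "command": "sharc",
      "args": ["mcp"]
    }
  }
}
```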
SHARC
Embeddings API
Code Reranking
MCP Server

Questions about SHARC?

We'd love to hear from you. Reach out anytime for docs, onboarding, or integration guidance.