Davor Cukeric
Published on npm · 2025

ContextLens

Token-efficient code intelligence for AI agents

Tree-sitter · SQLite FTS5 · MCP Protocol · Node.js · TypeScript
Your AI agent doesn't need to read your whole codebase — it needs to understand it.

AI coding agents burn through tokens like fuel — scanning entire codebases just to understand a single function. I built a tool that gives them surgical precision instead of brute force, cutting token usage by up to 80% while making them genuinely smarter about your code.

The Problem

What needed solving

AI coding agents are powerful but wasteful. Every time Claude Code, Cursor, or Copilot needs to understand your project, it reads entire files — sometimes your whole codebase — burning thousands of tokens on code that has nothing to do with the task at hand. For large projects, this means slower responses, higher costs, and context windows clogged with irrelevant noise.

The existing solutions aren't much better. Competitor MCP servers consume 4,700+ tokens just in overhead before they even start analyzing your code. That's like paying a consultant's day rate just to have them find the office. Developers working on serious codebases — 20K, 50K, 100K lines — hit context limits constantly, forcing them to manually guide their AI tools instead of letting the tools guide them.

The Solution

How I approached it

ContextLens is an MCP server that gives AI coding agents a structured, token-efficient map of your entire codebase. Instead of reading raw files, agents query an indexed representation — functions, classes, imports, dependencies — and get exactly the context they need. The overhead is 1,200 tokens. Not 4,700. Just 1,200.

What makes it different is the combination of speed and precision. ContextLens uses tree-sitter for AST parsing, which means it understands code structurally — not as text, but as syntax trees. Paired with SQLite FTS5 for full-text search, agents can find symbols, trace references, and understand architecture without ever reading a line of irrelevant code. It works with Claude Code, Gemini CLI, Cursor, Copilot, and Windsurf out of the box.
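To make that concrete, here is roughly what the exchange looks like from the agent's side. These type shapes are illustrative only, not ContextLens's actual API, but they show why a structured answer costs so much less than a raw file.

```typescript
// Illustrative shapes only; not the actual ContextLens API.
interface SymbolRecord {
  name: string;                                   // e.g. "getUserById"
  kind: "function" | "class" | "interface" | "export";
  signature: string;                              // one line from the AST, not the body
  file: string;
  line: number;
}

interface ContextQuery {
  symbol: string;                                 // what the agent is asking about
  relation?: "definition" | "callers" | "references";
}

// The agent sends a small structured query and gets back a few compact
// records instead of entire source files.
declare function queryIndex(q: ContextQuery): Promise<SymbolRecord[]>;
```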

How It Works

Under the hood

At its core, ContextLens builds a lightweight index of your codebase using tree-sitter, a parser that generates abstract syntax trees for dozens of programming languages. When you initialize it on a project, it walks every file, extracts symbols — functions, classes, interfaces, exports — and stores them in a SQLite database with FTS5 full-text search enabled.
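A minimal sketch of that indexing loop, assuming the tree-sitter, tree-sitter-typescript, and better-sqlite3 npm packages. The real ContextLens implementation (node types handled, schema, batching) will differ; this just shows the parse-extract-store pattern.

```typescript
// Sketch only: parse a file with tree-sitter, pull out symbols, store them in FTS5.
import Parser from "tree-sitter";
import TypeScriptGrammar from "tree-sitter-typescript";
import Database from "better-sqlite3";
import { readFileSync } from "node:fs";

const parser = new Parser();
parser.setLanguage(TypeScriptGrammar.typescript);

// FTS5 virtual table so symbol names and signatures are full-text searchable.
const db = new Database("contextlens-sketch.db");
db.exec("CREATE VIRTUAL TABLE IF NOT EXISTS symbols USING fts5(name, kind, signature, file)");
const insert = db.prepare(
  "INSERT INTO symbols (name, kind, signature, file) VALUES (?, ?, ?, ?)"
);

const SYMBOL_KINDS = new Set([
  "function_declaration",
  "class_declaration",
  "interface_declaration",
]);

function indexFile(path: string): void {
  const tree = parser.parse(readFileSync(path, "utf8"));

  // Depth-first walk of the AST, recording one row per extracted symbol.
  const visit = (node: Parser.SyntaxNode): void => {
    if (SYMBOL_KINDS.has(node.type)) {
      const name = node.childForFieldName("name")?.text ?? "<anonymous>";
      // Keep only the first line as a lightweight signature, not the full body.
      const signature = node.text.split("\n")[0];
      insert.run(name, node.type, signature, path);
    }
    node.namedChildren.forEach(visit);
  };
  visit(tree.rootNode);
}

indexFile("src/services/orderService.ts"); // hypothetical file path
```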

The MCP protocol layer exposes this index to any compatible AI agent. When an agent needs context, it doesn't ask for files — it asks questions. "What functions call getUserById?" or "Show me the interface for OrderService." ContextLens returns precise, minimal answers: just the symbols, signatures, and relationships the agent needs.
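Here is how such a question-answering tool might be wired up with the official MCP TypeScript SDK. The tool name, parameters, and SQL are assumptions for illustration, not the actual ContextLens tool surface.

```typescript
// Sketch of exposing the symbol index as an MCP tool, using
// @modelcontextprotocol/sdk, zod, and the better-sqlite3 index from the previous sketch.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import Database from "better-sqlite3";
import { z } from "zod";

const db = new Database("contextlens-sketch.db", { readonly: true });
const server = new McpServer({ name: "contextlens-sketch", version: "0.0.1" });

// One tool: full-text search over the indexed symbols.
server.tool(
  "find_symbol",
  "Search indexed symbols by name or signature",
  { query: z.string() },
  async ({ query }) => {
    const rows = db
      .prepare("SELECT name, kind, signature, file FROM symbols WHERE symbols MATCH ? LIMIT 20")
      .all(query);
    // Return a handful of compact rows, not whole files.
    return {
      content: [{ type: "text", text: JSON.stringify(rows, null, 2) }],
    };
  }
);

await server.connect(new StdioServerTransport());
```

Because the answer is a few rows of names and signatures rather than entire files, the agent's context window stays small even on large codebases.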

The result is a 65-80% reduction in tokens consumed per interaction. A 26,000-line codebase indexes in 2.2 seconds, producing 2,348 searchable symbols. The agent gets a richer understanding of your code while using a fraction of the context window.

Impact

Results and outcomes

Published as @cukeric/contextlens on npm, ContextLens indexes a 26K LOC codebase in 2.2 seconds, extracting 2,348 symbols with just 1,200 tokens of overhead — nearly 4x more efficient than competing solutions. Token savings of 65-80% per interaction mean AI agents can handle larger projects, maintain longer conversations, and deliver more accurate suggestions without hitting context limits.

It works across the major AI coding platforms — Claude Code, Gemini CLI, Cursor, Copilot, and Windsurf — making it a universal upgrade for any developer's AI toolkit.