Show HN: Limelight – Let your AI see what your app does at runtime
Limelight is an open source SDK that captures what's happening in your app while you're developing and pipes that context into AI coding assistants via MCP.
The problem: when you're debugging in Cursor or Claude Code, the AI can only see your source files. It can't see network requests, state changes, re-renders, or timing. So it guesses. Limelight gives it the actual evidence.
The SDK sits in your app and captures everything (network requests, GraphQL ops, console logs, Zustand/Redux state, component renders), then correlates it by timing and causality into what I call Debug IR. It's not a log dump. It strips sensitive data, compresses the timeline to what's relevant, and structures everything into causal chains so the LLM can reason about what caused what instead of scanning a wall of text.
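To make "causal chains" concrete, here's a rough sketch of what a correlated event and a reconstructed chain could look like. The event shape and field names are illustrative guesses, not Limelight's actual Debug IR schema:

```typescript
// Illustrative event shape -- not the real Debug IR schema.
interface DebugEvent {
  id: string;
  kind: "network" | "state" | "render" | "log";
  ts: number;          // ms since session start
  causedBy?: string;   // id of the event that triggered this one
  summary: string;
}

// Walk causedBy links backwards to reconstruct one causal chain, oldest first.
function causalChain(events: DebugEvent[], leafId: string): string[] {
  const byId = new Map(events.map(e => [e.id, e]));
  const chain: string[] = [];
  for (let e = byId.get(leafId); e; e = e.causedBy ? byId.get(e.causedBy) : undefined) {
    chain.unshift(e.summary);
  }
  return chain;
}

const events: DebugEvent[] = [
  { id: "e1", kind: "network", ts: 0,   summary: "GET /search?q=foo" },
  { id: "e2", kind: "state",   ts: 120, causedBy: "e1", summary: "results set (12 items)" },
  { id: "e3", kind: "render",  ts: 121, causedBy: "e2", summary: "<SearchResults> re-render" },
];

console.log(causalChain(events, "e3"));
// the request, then the state write it caused, then the re-render
```

The point of handing the LLM a chain like this, instead of raw interleaved logs, is that "what caused what" is already answered before it starts reasoning.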
The MCP server reads from that Debug IR. It exposes 11 tools total, which was intentional: a lot of MCP servers ship with 30-40+ tools and eat half your context window before you even ask a question.
The part I'm happiest with is full stack tracing. Drop the SDK into your React or React Native frontend and your Node/Next.js backend, and Limelight correlates everything together. You can trace a request from a button tap through the API call, into the backend handler, back through the response, into the state update, and through the re-render. All in one place. No switching between browser devtools and server logs trying to match timestamps.
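The correlation is easiest to picture as merging frontend and backend events on a shared request id into one ordered timeline. Simplified sketch; the shapes and field names here are illustrative, not the real implementation:

```typescript
// Illustrative cross-stack correlation: events from both sides that share
// a request id get merged into a single time-ordered trace.
interface TraceEvent {
  side: "frontend" | "backend";
  requestId: string;
  ts: number; // ms, on a shared clock
  summary: string;
}

function mergeTrace(frontend: TraceEvent[], backend: TraceEvent[], requestId: string): string[] {
  return [...frontend, ...backend]
    .filter(e => e.requestId === requestId)
    .sort((a, b) => a.ts - b.ts)
    .map(e => `[${e.side}] ${e.summary}`);
}

const trace = mergeTrace(
  [
    { side: "frontend", requestId: "r1", ts: 0,   summary: "button tap -> GET /api/items" },
    { side: "frontend", requestId: "r1", ts: 180, summary: "state update -> re-render" },
  ],
  [
    { side: "backend", requestId: "r1", ts: 40, summary: "handler GET /api/items (35ms)" },
  ],
  "r1",
);
console.log(trace); // tap, then backend handler, then state update, in order
```

That single timeline is what you'd otherwise be assembling by hand from browser devtools on one screen and server logs on another.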
So when you ask Cursor "why is my search showing stale results", it can see that two API requests came back out of order and the slower one overwrote the correct state. It suggests an abort controller fix based on what actually happened.
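The fix it points at is the standard AbortController pattern: cancel the previous in-flight request before firing the next one, so a slow stale response can never overwrite newer state. Sketch below, with `slowFetch` standing in for a real network call (purely illustrative):

```typescript
// Race sketch: two searches fire, the first resolves slower. Cancelling
// the previous in-flight request per call means only the latest survives.
let controller: AbortController | null = null;

// Stand-in for a real fetch: resolves after `ms`, rejects if aborted.
function slowFetch(result: string, ms: number, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve(result), ms);
    signal.addEventListener("abort", () => {
      clearTimeout(t);
      reject(Object.assign(new Error("aborted"), { name: "AbortError" }));
    });
  });
}

async function search(query: string, ms: number): Promise<string | undefined> {
  controller?.abort();               // cancel the previous in-flight request
  controller = new AbortController();
  try {
    return await slowFetch(`results for ${query}`, ms, controller.signal);
  } catch (err) {
    if ((err as Error).name === "AbortError") return undefined; // superseded
    throw err;
  }
}

async function demo() {
  // "foo" is slow, "foobar" is fast: without the abort, "foo" would land
  // last and overwrite the newer results. With it, "foo" returns undefined.
  const [stale, fresh] = await Promise.all([search("foo", 50), search("foobar", 10)]);
  console.log(stale, fresh);
}
demo();
```

With a real `fetch` you'd pass `{ signal: controller.signal }` the same way; the abort handling is identical.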
It started as a React Native debugger because debugging RN at all sucks. It grew from there into React, React Native, Node, and Next.js. Setup is two lines (Limelight.connect()), no wrappers, no config. The correlation engine automatically catches N+1 queries, render loops, retry storms, and stale closures. Free to use without an account.
About 3k weekly downloads on npm. MIT licensed. Solo dev.
Would love feedback, especially on the MCP workflow. Is giving AI editors access to live runtime context something you'd actually use? What's missing?
runtime visibility is the missing piece everyone ignores until they're debugging at 2am. the real unlock here is pairing this with session replay - when your agent takes a wrong turn, you want to rewind and see exactly what state the app was in when it made that decision. most people instrument the LLM side obsessively and leave the application side totally dark. flipping that ratio is underrated.