Show HN: Continue – Source-controlled AI checks, enforceable in CI
We now write most of our code with agents. For a while, PRs piled up, causing review fatigue, and we had this sinking feeling that standards were slipping. Consistency is tough at this volume. I’m sharing the solution we found, which has become our main product.
Continue (https://docs.continue.dev) runs AI checks on every PR. Each check is a source-controlled markdown file in `.continue/checks/` that shows up as a GitHub status check. Checks run as full agents: they don't just read the diff, they can read/write files, run bash commands, and use a browser. If a check finds something, it fails and offers a one-click diff to accept; otherwise, it passes silently.
Here’s one of ours:
.continue/checks/metrics-integrity.md
---
name: Metrics Integrity
description: Detects changes that could inflate, deflate, or corrupt metrics (session counts, event accuracy, etc.)
---
Review this PR for changes that could unintentionally distort metrics.
These bugs are insidious because they corrupt dashboards without triggering errors or test failures.
Check for:
- "Find or create" patterns where the "find" is too narrow, causing entity duplication (e.g. querying only active sessions, missing completed ones, so every new commit creates a duplicate)
- Event tracking calls inside loops or retry paths that fire multiple times per logical action
- Refactors that accidentally remove or move tracking calls to a path that executes with different frequency
Key files: anything containing `posthog.capture` or `trackEvent`
This check passed without noise for weeks, but then caught a PR that would have silently deflated our session counts. We added it in the first place because we’d been burned in the past by bad data, only noticing when a dashboard looked off.
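To make those failure modes concrete, here's a minimal TypeScript sketch of the first two patterns in the check. This is not our code: `Session`, `findOrCreateSession`, and `saveWithRetry` are invented names, and `capture` is a local stand-in for `posthog.capture`:

// Hypothetical illustration of the first two patterns the check targets.
type Session = { id: string; repo: string; status: "active" | "completed" };

const sessions: Session[] = []; // stand-in for a real database table

function capture(event: { distinctId: string; event: string }): void {
  console.log("tracked:", event.event); // stand-in for posthog.capture
}

// Bug 1: the "find" half of find-or-create is too narrow. It only matches
// active sessions, so once a session completes, every later commit for the
// same repo creates a duplicate row and session counts quietly drift.
function findOrCreateSession(repo: string): Session {
  const existing = sessions.find(
    (s) => s.repo === repo && s.status === "active" // should match "completed" too
  );
  if (existing) return existing;
  const created: Session = { id: crypto.randomUUID(), repo, status: "active" };
  sessions.push(created);
  return created;
}

// Bug 2: the tracking call sits inside the retry loop, so one logical save
// can fire up to three events when the underlying write is flaky.
async function saveWithRetry(session: Session, save: () => Promise<void>) {
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      capture({ distinctId: session.id, event: "session_saved" }); // should fire once, after success
      await save();
      return;
    } catch {
      // transient failure: retrying, but the event above already fired
    }
  }
}

A diff that narrows a filter like that, or moves a `capture` call into a retry path, is exactly what the check is prompted to flag.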
To get started, paste this into Claude Code or your coding agent of choice:
Help me write checks for this codebase: https://continue.dev/walkthrough
It will:
- Explore the codebase and use the `gh` CLI to read past review comments
- Write checks to `.continue/checks/`
- Optionally, show you how to run them locally or in CI
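For a sense of what it writes, here's an invented example in the same format as the metrics check above (not real walkthrough output):

.continue/checks/error-handling.md
---
name: Error Handling
description: Flags error handling that hides failures (swallowed exceptions, silent fallbacks)
---
Review this PR for error handling that hides failures.
Check for:
- empty catch blocks, or catches that log and continue without surfacing the error
- errors silently converted to default values
Key files: anything that touches the network or the database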
Would love your feedback!
This looks like a more configurable version of the code review tools out there, for running arbitrary AI-powered tasks.
Do you support exporting metrics to something standard like CSV? https://docs.continue.dev/mission-control/metrics
A brief demo would be nice too.
I think this is a good summary! And the configurable part turns out to be the main bit.
One of the fundamental differences between checks and code review bots is that you trade breadth for consistency. There are two things Continue should never, ever do:
1. find a surprise bug or offer an unsolicited opinion
2. fail to catch a commit that doesn't meet your specific standards
just saw the follow-up questions:
- we do! right now you can export some metrics as images, or share a public link to the broader dashboard. will be curious if others are interested in other formats: https://imgur.com/a/7sgd81r
- working on a Loom video soon!
Is this the same continue that was for running local AI coding agents? Interesting rebrand.
That's us! I figure others will wonder the same, so we wrote about what exactly we're doing here: https://blog.continue.dev/from-extension-to-mission-control
tl;dr
- a _lot_ of people still use the VS Code extension and so we're still putting energy toward keeping it polished (this becomes easier with checks : ))
- our checks product is powered by an open-source CLI (we think this is important), which we recommend for JetBrains users
- the general goal is the same: we start by building tools for ourselves, share them with people in a way that avoids creating walled gardens, and aim to amplify developers (https://amplified.dev)