Show HN: Statewright – Visual state machines that make AI agents reliable
Agentic problem solving in its current state is very brittle. I fell in love with it, but it creates as many problems as it solves.
I'm Ben Cochran. I've spent 20+ years in the trenches across full-stack engineering, DevOps, high-performance computing, and ML, with stints at NVIDIA, AMD, and various other organizations, most recently as a Distinguished Engineer.
For agents to work reliably you either need massive parameter counts or massive context windows to keep the solution spaces workable. Most people are brute forcing reliability with bigger models and longer prompts.
What if I made the problem smaller instead of making the model bigger?
I took a different approach and used smaller models, in the 13-20B parameter range, set to work on real SWE-bench problems. I constrained the tool and solution spaces using formal state machines. Each state in the machine defines which tools the model can access, how many iterations it gets, and which transitions are valid. A planning state gets read-only tools. An implementation state gets edit tools (scoped to prevent mega-edits) and write-friendly bash tools. The testing state gets bash, but only for testing commands. The model cannot physically skip steps or use the wrong tool at the wrong time. It is enforced via protocol, not via prompts.
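To make that concrete, here's a minimal sketch of the gating idea (my illustration for this post, not the actual engine code; the state names and fields are simplified assumptions):

    // Minimal sketch of per-state tool gating (illustrative, not Statewright's code).
    use std::collections::HashMap;

    #[allow(dead_code)] // transitions are evaluated elsewhere in a real engine
    struct State {
        allowed_tools: Vec<&'static str>, // the only tools callable in this phase
        transitions: Vec<&'static str>,   // states reachable from this phase
    }

    fn check_tool_call(states: &HashMap<&str, State>, current: &str, tool: &str) -> Result<(), String> {
        let state = states.get(current).ok_or("unknown state")?;
        if state.allowed_tools.iter().any(|t| *t == tool) {
            Ok(())
        } else {
            // Structural rejection: the call never reaches execution.
            Err(format!("Tool '{}' is not available in the '{}' phase. Allowed: {:?}",
                tool, current, state.allowed_tools))
        }
    }

    fn main() {
        let mut states = HashMap::new();
        states.insert("planning", State {
            allowed_tools: vec!["Read", "Grep", "Glob"], // read-only phase
            transitions: vec!["implementing"],
        });
        states.insert("implementing", State {
            allowed_tools: vec!["Edit", "Bash"], // write tools unlock here
            transitions: vec!["testing"],
        });
        assert!(check_tool_call(&states, "planning", "Edit").is_err()); // blocked
        assert!(check_tool_call(&states, "planning", "Read").is_ok());  // allowed
        println!("tool gating enforced by code, not prompts");
    }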
The results were more promising than I expected. Across multiple model families, irrespective of age (qwen-coder, gpt-oss, gemma4), the improvements were consistent above the 13B parameter inflection point. Below that, models can navigate the state machine but can't retain enough context to produce accurate edits. More on the research bit: https://statewright.ai/research
Surprisingly, this yielded improvements in frontier models as well. Haiku and Sonnet start to punch above their weight, and Opus solves more reliably with fewer tokens and fewer death spirals. Fine-tuning did not yield these kinds of functional improvements for me. The takeaway, it seems, is that context window utilization matters more than raw context size: a tightly scoped working context at each step outperforms a model given carte blanche over everything. Constraining LLMs, which are non-idempotent, with deterministic code is a pattern nobody is currently talking about.
So, I built Statewright. Its core is a Rust engine that evaluates state machine definitions: states, transitions, guards, and tool restrictions. The orchestration doesn't use an LLM; it just enforces the state machine. On top of that is a plugin layer that integrates with Claude Code (and soon Codex, Cursor, and others) via MCP. When you activate a workflow, hooks enforce the guardrails per state automatically. The model sees 5 tools available instead of dozens, gets clear instructions for the current phase, and transitions when conditions are met. Importantly, it tells the model when it's attempting something out of scope or incorrect, and when it needs to try something else after getting stuck.
You can use your agent via MCP to build a state machine for you to solve a problem in your current context. The visual editor at statewright.ai lets you tweak these workflows in a graph view... You can clearly see the failure paths, the retry loops and the approval gates. State machines aren't DAGs; they loop and retry, which is what agentic work actually needs.
Statewright is currently live with a free tier, try it out in Claude Code by running the following:
/plugin marketplace add statewright/statewright
/plugin install statewright
/reload-plugins
Then "start the bugfix workflow" or /statewright start bugfix. You'll need to paste your API key when prompted. The latest versions of Claude may complain -- paste the API key again and say you really mean it, Claude is just being cautious here.
Feedback is welcome on the workflow editor and the plugin experience, and tell me what workflows you'd want to build first. Agents are suggestions; states are laws.
I wanted to try to reproduce the research results (https://github.com/statewright/statewright#research-results) locally but I wasn't able to find the code for it. Have you published the code for running those somewhere?
The research page (https://statewright.ai/research) mentions a patent and a "core engine":
> Provisional patent application filed: #64/054,240 (April 30, 2026). 35 claims covering state machine guardrail enforcement for LLM agent tool access. The core engine remains Apache 2.0 open source.
I'm not sure I understand what the "core engine" is if it's not the "state machine guardrail runtime", which is what the patent covers. What parts are the open source parts, exactly?
I find the idea really interesting and was nodding along as I read what you wrote. It makes sense both for the human and the agent, and it seems like a really nice idea that'd help, but the patent kind of makes me want to run away and not look into it too deeply.
Thanks for digging deeper and I'm happy to clarify all three aspects:
Re: Reproducing the results: the engine, agent crate, and demo TUI are all in the repo. If you have ollama running with a 13B+ model, `task run:bugfix` reproduces the simple bugfix result end to end. What isn't published yet is the SWE-bench experiment harness (task selection, patch scoring, control runs). I need to get that out; I prioritized the simple end-to-end Claude Code plugin for the launch. The demo crate (crates/demo) has a TUI that calls ollama and runs the bugfix state machine interactively with code.
Re: Engine: the core engine (crates/engine/) is the pure Rust state machine evaluator. It's what Statewright runs on the backend: JSON in => transition decisions out. Agent (crates/agent/) builds on top of it to make it useful for LLMs. All of that is Apache 2.0 with no restrictions.
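If it helps, the contract looks roughly like this (shapes are illustrative, not the exact engine schema):

    // Rough shape of the engine contract (illustrative, not the exact schema):
    // a JSON transition request comes in, a JSON decision goes out.
    // Requires the serde_json crate.
    use serde_json::{json, Value};

    fn decide(valid: &[(&str, &str)], request: &Value) -> Value {
        let from = request["from"].as_str().unwrap_or("");
        let to = request["to"].as_str().unwrap_or("");
        if valid.iter().any(|&(f, t)| f == from && t == to) {
            json!({ "decision": "accepted", "next_state": to })
        } else {
            json!({ "decision": "rejected",
                    "reason": format!("no transition {} -> {} in this workflow", from, to) })
        }
    }

    fn main() {
        // Transition table as it would be parsed from a workflow definition.
        let valid = [("planning", "implementing"), ("implementing", "testing")];
        let req = json!({ "from": "planning", "to": "testing" });
        println!("{}", decide(&valid, &req)); // rejected: can't skip implementing
    }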
Re: the Patent: the patent covers the method of using state machines to constrain LLM agent tool access at the protocol layer. It's defensive: it helps protect the managed service and the idea from being scooped by a larger company with more personnel and resources. It's not targeted at solo developers, self-hosters, or researchers.
You'll find that the portions I've released under FSL 1.1 have explicit grants which do not restrict solo developers or single-team self-hosting. Code released this way becomes Apache 2 in exactly 3 years, not unlike what Sentry and MariaDB did. I am planning on releasing more portions as FSL 1.1; I just hadn't crossed that bridge yet, and honestly this thing seems to have gotten popular at the moment, so I thought I'd set the record straight a bit.
The not-quite-Apache-2 "Fair Source License, Version 1.1, ALv2 Future License" (https://github.com/getsentry/fsl.software/blob/main/FSL-1.1-...) includes the Apache 2 patent grant. That grants you conditional permission to use the software in ways that would, without the grant, infringe upon their patent. One of the conditions is that you may not make a claim against any party that the software infringes upon any patent, or else your patent grant is terminated.
Unfortunately, the license actually in the repo is not even a not-quite-Apache-2 license. It doesn't appear to be FSL-1.1-ALv2 at all: https://github.com/statewright/statewright/blob/main/plugins.... This notably does not include the patent grant, which makes it unclear whether use of the software would infringe upon the patent.
You're right, and I have just corrected this. The license in the repo now uses the canonical FSL-1.1-ALv2 based on the template from fsl.software and now includes the patent grant clause.
The omission wasn't intentional -- the patent grant wasn't on my radar when the original license text was committed. FSL licensing is very new territory for me and I duffed it slightly, now corrected.
I also just updated the https://statewright.ai/research page to accurately reflect the intent and mention the patent grant afforded under FSL-1.1-ALv2. Thanks again for calling my attention to this.
I feel like caching should be mentioned in tradeoffs, right? If you change the tool list frequently, that's a cache bust. In long sessions that seems like it could significantly affect costs.
Great question... and there are two answers depending on what you were originally referring to:
re: Claude Code... we actually don't filter or modify the tool list, so all tools stay visible -- disallowed calls get blocked at execution time with an error message. No cache busts on transitions; the model sees the full tool set. The cost there is prompt-caching dollars, not latency, I suppose.
re: The research (Rust agent + Ollama): the model only receives tool schemas for the current state's allowed tools. Ollama does have a KV cache reuse facility, so changing the tool list busts that cache. Depending on your workflow, this can happen as many times as your states transition until completion; for simple workflows that's 3-5x. Within each state the tool list is stable and the cache operates normally. Presenting a few tools instead of dozens on every agent step reduces input tokens and decision complexity, which is where the measurable gains come from.
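To illustrate, the per-state filtering just changes the tools array in the Ollama /api/chat request between transitions (a sketch; the tool names and schemas here are assumptions, not the research harness code):

    // Sketch: only the current state's tool schemas go into the Ollama /api/chat
    // request body. Tool names are illustrative. Requires the serde_json crate.
    use serde_json::{json, Value};

    fn tool(name: &str) -> Value {
        json!({ "type": "function",
                "function": { "name": name, "parameters": { "type": "object" } } })
    }

    fn chat_request(model: &str, current_state: &str, messages: &Value) -> Value {
        // Per-state allowlist; everything else is simply absent from the request.
        let tools: Vec<Value> = match current_state {
            "planning" => vec![tool("read_file"), tool("grep")],
            "testing" => vec![tool("run_tests")],
            _ => vec![tool("read_file"), tool("edit_file")],
        };
        // Changing `tools` between states is what busts Ollama's KV-cache reuse;
        // within a state the list is stable, so the cache behaves normally.
        json!({ "model": model, "messages": messages, "tools": tools, "stream": false })
    }

    fn main() {
        let messages = json!([{ "role": "user", "content": "fix the failing test" }]);
        println!("{}", chat_request("qwen2.5-coder:14b", "planning", &messages));
    }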
Both enforce the same constraints, just at different execution interfaces. The schema-level filtering in the research is the S-tier approach. Adding tools/list filtering to the MCP gateway would be beneficial if possible (it looks like we could only filter MCP tools, not core ones), which could still provide tangible benefit. I've added this evaluation to the roadmap.
Nice, thanks for the detailed answer!
Very nice project!
Is the editor/composer separate from the runtime?
If I build a workflow in the visual editor, can I use that same flow inside my own app just by using the runtime/engine? Or is it mainly tied to the Statewright platform and Claude Code plugin?
I'm wondering if the runtime can be used as a standalone piece to power apps I build.
Yes, the engine handles the full workflow schema, including guards. There are some aspects of runtime enforcement (env vars, command filtering, etc., exposed via the UI) that currently only live in the plugin layer, but the engine parses and exposes everything. All you'd have to do is wire up enforcement in your app the same way the plugin does.
Interesting. I built a ticketing system similar to Beads, which has yielded more predictable results with Claude and other models, and I'm currently building a custom harness. I'm able to use offline models, though my GPU RAM bandwidth is much lower, and I'm also planning on doing something similar to what you've built, namely the editing tools and whatnot. I hate how long it takes for Claude to look for files; it feels wasteful. I'm still astounded that everyone else has figured out ways to speed up harnesses, but Claude Code is still slow as a slug. I don't even care if I'm waiting on the LLM in terms of slowness, but running local tools slowly bothers the living crap out of me. Stop using grep, RIPGREP IS FASTER!
In any case, I'll have to check out Statewright after work ;)
I feel you on how sluggish Claude Code can be, you just never know what those pulsing prompts are doing in the background...
Given Statewright plugs into Claude Code, there is a little added overhead while managing the state machine logic, but for complicated workflows, if it saves you a few debug loops, mass-edit reversions, or death spirals, I think the case for including it is pretty solid.
I think this will be the next frontier for these models: improving the desktop tooling. I am surprised I've yet to see them go all in on hiring desktop app developers to overhaul Claude Code / Codex / Antigravity / etc., because there are so many things they could do to reduce the footprint and issues drastically.
Interesting.
In your GitHub repo, the JSON format shown for defining custom workflows is very simple. I wonder if that limits the detail of the state-related instructions and error messages you can send to a model.
For example, in state transitions, does your tool just tell the model something like "you are in 'act' mode and no longer in 'plan' mode, here are your new available tools"? Seems difficult to give it any more informative messages given how simple the workflow definitions are. Likewise when the model attempts to do something that's not supported for tools in the given phase.
The workflow definition is intentionally simple... the enforcement layer handles the mechanics, but the model gets more context than just "you're in <xyz> mode now".
Each state has an `instructions` field for phase-specific guidance, and when an agent's action (tool call) gets rejected, the error message lets the model know what went wrong and what's available to move forward:
    Tool 'Edit' is not available in the 'planning' phase.
    Allowed Tools: Read, Grep, Glob
    To advance, call statewright_transition with READY -> implementing
Models (even simple ones) tend to reason through these error messages, adjusting their approaches as opposed to retrying the blocked call. Additionally, on transitions the model is required to include a rationale explaining why it's transitioning (`data.rationale`) which creates an audit trail of the agent's reasoning at each phase boundary. That ends up being one of the most useful parts of the run history viewable on statewright.ai
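For reference, a state entry looks roughly like this (sketched with serde_json; the field names are illustrative rather than the exact schema):

    // Rough shape of a state in a workflow definition (field names illustrative).
    // Requires the serde_json crate.
    use serde_json::json;

    fn main() {
        let planning = json!({
            "id": "planning",
            // phase-specific guidance injected into the model's context
            "instructions": "Understand the bug before touching code; trace the failing test first.",
            "allowed_tools": ["Read", "Grep", "Glob"],
            "transitions": [{
                "to": "implementing",
                // every transition must carry a rationale, which becomes the audit trail
                "requires": ["data.rationale"]
            }]
        });
        println!("{}", serde_json::to_string_pretty(&planning).unwrap());
    }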
Does it make sense to ship an MCP code mode API? I'm surprised you're recommending MCP as-is when concerned about context usage optimization. I don't have a lot of hands-on experience either way yet so I'm curious what's best and/or most popular... I understand MCP is less effort and still affordable at VC-subsidised prices.
For the integration piece that ties into Claude Code and other places where AI is used most frequently? Yes, I think it does... we're not fighting context in Opus/Sonnet as much as we are in smaller models, and we're only adding about 6 tools here, which is a smaller footprint than other MCP exposures. In my experimentation, smaller models get a more direct/tight interface that doesn't bloat the tool space (using the core directly).
Please add support for the Windsurf editor as well. Thanks!
Mr. Claude's Opus says this is very feasible. Windsurf has better support for hooks than Cursor and full MCP support, so protocol-layer blocking (like Claude) is possible. Adding it to the roadmap...
Pretty cool. Looks like stately.ai but catered towards agentic state machine workflows. Really interesting!
Stately (and XState ^_^) is pretty neat, I hadn't come across it yet... (edit:) neat to see visual XState being used for application logic as well
I see constant posts on Reddit/HN about the ways AI is amazing and, at the same time, fudging it (literally). Nobody can make reliability guarantees on something that's non-deterministic and non-idempotent; nobody's AI workflow suite of tools can claim this. Prompting gets you closer to the mark but is still non-deterministic. Breaking the problem down into chunks with valid transition criteria, so that even tiny models can step through them, I believe gets us closer to where we want to be semantically.
I'll have to check how you're using state machines, but I have also been focused on small models for a while now.
nocodo is one of my product experiments, currently using a 120B model, but I have tested a few agents inside it with 20B models.
I create a bunch of agents, each with very specific goals. Like Project Manager, Backend Engineer, etc.
Each agent gets a very compact list of tools and access to only certain parts of the filesystem or commands.
https://github.com/brainless/nocodo/tree/main/agents/src
Nice project... the per-agent tool restriction is the same core insight (smaller tool space -> better reasoning)
The main difference with Statewright is that tool access changes over time within a single agent. Planning phase gets read-only tools, edit capability unlocks after the agent proves it has adequate understanding... test tools unlock after the fix. State machines handle the phase transitions, guards and retry loops.
Your multi-agent approach decomposes by role instead of by phase/state. Both are valid. Since you're already in Rust, the engine crate (crates/engine) is a pure library with no deps. It might be interesting to see if putting a state machine around your orchestration layer improves your observed performance.
I just have a smart model write a testable, phased plan, have a cheaper model implement it, and yet another model review each phase. I don't see the value of adding a Rust state engine. Algorithmically verifiable things can be tests, and more nebulous things (like pattern compliance) need an LLM to do the heavy lifting and can make mistakes, so what does the state engine buy you?
the state engine is the part that can't hallucinate. even with simple steps/prompting the review model can miss things... it's still an LLM making a judgement call at the end of the day.
the state engine doesn't judge, it enforces... with code and not transformers ^_^
if a tool (or any other guardrail) isn't valid in a given state, the call gets rejected before it ever executes. that's the gap between "a model said this is okay" vs. "the system structurally prevents this"
I don't understand. Let's say my state is whether we are in conformance with repo patterns. Walk me through how you don't/can't hallucinate, given that you need an LLM to determine the state. For state variables that don't need LLMs, you can simply use tests and commit hooks, no?
the LLM doesn't determine the state... it requests a transition to change the state. the engine evaluates guards (data carried along the way) to decide if the transition is valid.
it (the LLM) can't skip from implementation to deploy if the guard says the tests haven't passed. the model receives feedback that what it tried is invalid, along with the reasons why. it can't be skipped. it then tries to resolve that new information to make the state transition... almost like it would if a human in the chair denied the step.
the model can't merge if it hasn't gone through your review state, even if it wants to (it'll try though)
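a guard is just deterministic code over the data carried with the run... minimal sketch (illustrative, not the actual engine):

    // illustrative guard check, not the actual engine code. requires serde_json.
    use serde_json::{json, Value};

    fn guard_ok(from: &str, to: &str, data: &Value) -> Result<(), String> {
        // deploy stays unreachable until the testing state has written tests_passed
        if to == "deploy" && !data["tests_passed"].as_bool().unwrap_or(false) {
            return Err(format!(
                "transition {} -> {} rejected: guard 'tests_passed' is false",
                from, to
            ));
        }
        Ok(())
    }

    fn main() {
        let data = json!({ "tests_passed": false });
        // the model's request bounces back with the reason, deterministically
        println!("{:?}", guard_ok("implementation", "deploy", &data));
    }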