How is this already #1 on the front page with 12 upvotes and 9 comments…
The article doesn’t reveal much. It feels like a fluff piece, and I can’t comprehend what the goal of sharing “we use AI agents” means for the dev community, with little to no examples to share. For a “dev” micro blog, this feels very lackluster. Maybe the Minion could have helped with the technical docs?
EDIT: slightly adjusts tinfoil hat minutes later it’s at #6
It has all the trappings of NIH syndrome.
Reinventing the wheel without explaining why existing tools didn't work
Creating buzzwords ("blueprints" "devboxes") for concepts that are not novel and already have common terms
Yet they embrace MCP of all things as a transport layer- the one part of the common "agentic" stack that genuinely sucks and needs to be reinvented
They mention "Why did we build it ourselves" in the part1 series: https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-...
However, it is also light on material. I would like to hear more technical details, but they're probably intentionally secretive about it.
I do understand, however, that building an agent that is highly optimized for your own codebase/process is possible. In fact, I'm pretty sure many companies do that; it's just not yet in the ether.
Otherwise, one of the most interesting bits from the article was
> Over 1,300 Stripe pull requests (up from 1,000 as of Part 1) merged each week are completely minion-produced, human-reviewed, but containing no human-written code.
"human reviewed"
"LGTM..."
I feel like code review is already hard and underdone; the 'velocity' here is only going to make that worse.
I am also curious how this works when the new crop of junior devs doesn't have enough experience to review code but also isn't getting that experience from writing it.
Time will tell I guess.
Agents can already do the review by themselves. I'd be surprised if they review all of the code by hand. They probably can't mention it due to the regulatory nature of the field itself. But from what I have seen, agentic review tools are already between the 80th and 90th percentile: out of 10 randomly picked engineers, they will provide more useful comments than most of them.
>Reinventing the wheel without explaining why existing tools didn't work
Won't that be the new normal with all those AI agents?
No frameworks, no libraries, just let AI create everything from scratch again
What are the common terms for those? (I have heard "devbox" across multiple companies, and I'm not in the LLM world enough to know the other parts.)
I was an early MCP hater, but one thing I will say about it is that it's useful as a common interface for secure centralization. I can control auth and policy centrally via a MCP gateway in a way that would be much harder if I had to stitch together API proxies, CLIs, etc to provide capabilities.
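The "secure centralization" point above can be sketched as a toy gateway: every tool call passes through a single policy check instead of each backend doing its own auth. All the names here (the policy table, the tools) are invented for illustration; a real MCP gateway would also handle transport, sessions, and credentials.

```python
# Toy sketch of centralized policy enforcement in front of tool backends.
# Hypothetical names throughout; this is the shape of the idea, not an MCP
# implementation.
from dataclasses import dataclass

@dataclass
class ToolCall:
    user: str
    tool: str
    args: dict

# Central policy: which users may invoke which tools.
POLICY = {
    "alice": {"search_code", "read_file"},
    "bob": {"search_code"},
}

def gateway(call: ToolCall, backends: dict):
    """Check policy once, centrally, then forward to the real tool."""
    allowed = POLICY.get(call.user, set())
    if call.tool not in allowed:
        raise PermissionError(f"{call.user} may not call {call.tool}")
    return backends[call.tool](**call.args)

backends = {"search_code": lambda query: f"results for {query}"}
print(gateway(ToolCall("bob", "search_code", {"query": "devbox"}), backends))
```

The point being: swapping the backend behind the gateway doesn't change where auth and policy live, which is exactly what's hard to get when stitching together API proxies and CLIs.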
resume driven development
Well, it's very important: now you know the financial code is handled by a bunch of barely supervised AI tools, and you can decide whether or not to use the product based on that.
Stripe was launched through Y Combinator. It makes sense for their stuff to quickly bubble to the top of their news aggregator.
Likely they have whitelisted domain names that go straight to the home page. It would make sense to include all Y Combinator alumni and new startup sites.
Marketing is a major goal of HN, after all.
Or the simpler explanation (which is probably closer to the truth): Stripe is a very popular company on HN as many people use them, their founders sometimes comment here and if they share their opinion on something people pay attention and upvote it.
That doesn't explain how you magically get to the front page with fewer than 20 upvotes.
You only need about 4 upvotes in the first 20 minutes or so to get on the front page. It's the same for every story.
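For a rough sense of why a handful of early upvotes is enough, there's an often-cited approximation of HN's ranking. The real algorithm is unpublished and includes moderation penalties, so treat this purely as illustrative:

```python
# Commonly cited approximation of HN story ranking (not the real,
# unpublished algorithm, which also applies penalties):
#   score ≈ (points - 1)^0.8 / (age_hours + 2)^1.8
def rank_score(points: int, age_hours: float) -> float:
    return (points - 1) ** 0.8 / (age_hours + 2) ** 1.8

# Under this formula, a 20-minute-old story with 4 upvotes outranks a
# 3-hour-old story with 15 upvotes: gravity on age dominates points.
young = rank_score(4, 20 / 60)
older = rank_score(15, 3.0)
```

The steep age exponent is what makes the first 20 minutes so decisive.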
Or the even simpler explanation: whenever Stripe posts a blog post, they have nine or ten employees waiting to upvote it the moment it goes live.
your absolut lee r8
Stripe has invested a lot in dev experience over the years precisely because of how "unique" some of the technology choices were: Mongo, and originally plain Ruby, for a system that mainly deals with money? Without a massive test suite, letting a normal dev make changes without a lot of rails is asking for a sea of incidents. If I recall correctly, the parallelization needed to run the unit tests for developers used to make the cost of continuous integration higher than the cost of the rest of the EC2 instances. Add the dev boxes, as trying to fit a useful test environment on a laptop became unreasonable, and they already start with a pile of guardrail tooling that other companies never even needed. A hassle for years, but now a boon, as the guardrails help the LLMs.
It'd be nice to get an old-school Stripey blog post, the kind that has a bit less fluff and is mostly the data you'd have put in the footnotes of the shipped email. Something that actually talks about the difficulties, instead of non-replicable generalities. After all, if one looks at the stock price, it's not as if competitors are being all that competitive lately, and I don't think it's mainly the details of the AI that make a difference. It'd also be nice to hear what goes on when not just babysitting minions, if there's actually anything else a dev is doing nowadays. AI adoption has changed the day-to-day experience within the industry, and most managers don't seem to know which way is up. So just explaining what days look like today might even sell as a recruiting initiative.
This is a devops post. They just brag about the plumbing.
The dark secret of the dark factory is high-quality human input, which takes time and focus to draft; otherwise the human will end up multi-shotting it and reading through the transcript to tune the input.
Is there a way to visualize what your agents are doing? I'm adding a bunch of debug code to the Claude Agent SDK, but it's a bit overwhelming to read at times. I just want to see visually how it does all the tool calling, what files it reads, etc.
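One lightweight approach to the question above: instead of reading raw debug logs, dump the agent's events as simple dicts and render an indented timeline. The event shapes below are invented for illustration; the Claude Agent SDK's actual message types may differ, so you'd adapt the field names to whatever your debug code emits.

```python
# Minimal tool-call trace viewer. Assumes events shaped like
# {"type": "tool_call", "name": ..., "input": ...} — hypothetical shapes,
# not the SDK's real message schema.
def render_trace(events: list[dict]) -> str:
    lines, depth = [], 0
    for ev in events:
        if ev["type"] == "tool_call":
            lines.append("  " * depth + f"-> {ev['name']}({ev.get('input', '')})")
            depth += 1  # nest anything that happens until the result
        elif ev["type"] == "tool_result":
            depth = max(depth - 1, 0)
            lines.append("  " * depth + f"<- {ev.get('summary', 'ok')}")
        elif ev["type"] == "text":
            lines.append("  " * depth + f"[assistant] {ev['text'][:60]}")
    return "\n".join(lines)

events = [
    {"type": "tool_call", "name": "read_file", "input": "README.md"},
    {"type": "tool_result", "summary": "1.2 KB"},
    {"type": "text", "text": "The README describes the build steps."},
]
print(render_trace(events))
```

Even this much structure makes it far easier to skim what files were read and which tools ran than grepping interleaved debug output.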
In Part 1 of the post they show that their agent has some kind of UI.
Where is the detail? Examples? Something concrete? I don't think it is, but it does read like LLM-generated content marketing. Lots of generic statements everyone knows. Yes, dev environments are helpful; they have been for 20 years. Yes, context and rules are important for agents. Surprise.
TLDR "look we use AI at Stripe too, come work here"
Are any of the companies doing this sharing the code being produced, or some example pull requests? I am wondering whether a lot of the human review is substantive or rubber-stamping, as we see with long pull requests from humans. I know I would half-ass a review of a PR containing lots of robot code. I assume Stripe has higher standards than me, but it would be nice to see some real-world examples.
One thing that troubles me is that code reviews are also an educational moment for seniors teaching juniors, as well as an opportunity for people who know a system to point out otherwise undocumented constraints of it. If people slack on reviews with the agent, these other externalities suffer.
Is anyone handling this at all? Is it no longer needed because it gets rolled into AGENTS.md?
I find working with AI a lot like working with a junior employee... except that the junior employee learns and gets better (in skill level and at dealing with me), while with AI the mentoring lessons reset once you type /clear.
Skills are a positive development for task preferences, and AGENTS.md for high-level context, but a lot of the time it's just easier to do things the way your AI wants.
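For concreteness, rolling the "undocumented constraints" and review lessons discussed above into AGENTS.md might look something like this (the contents are entirely invented for illustration, not anything from the article):

```markdown
# AGENTS.md (illustrative excerpt)

## Undocumented constraints reviewers usually catch
- The ledger service assumes amounts are integer cents; never pass floats.
- Migrations must stay backward-compatible for one release (rolling deploys).

## Review lessons that would otherwise reset on /clear
- Prefer the existing retry helper over hand-rolled backoff loops.
```

Whether this actually substitutes for a senior explaining the constraint in a review thread is, of course, the open question.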
> educational moment for seniors teaching juniors
You see, this is no longer necessary - companies are firing all the non-seniors, are not hiring any juniors, and delegating everything to AI. This is the future apparently!
What's the deal with `Devboxes`? Is this a common thing? It sounds very clunky for regular (human-driven) development.
It's not uncommon. It's more common at large companies. For example, Google calls theirs "Clients in the Cloud".
Seems like a compliance thing? I too run my LLMs inside some sort of containment and do "manual" development inside the same environment, but it wouldn't make sense for me to host that containment remotely, so I'm guessing they need some sort of strict control over it?
[dead]
I'm sure there are lots of Stripe engineers that cruise the comments here. Anyone care to provide some color on how this is actually working? It's not a secret that agents can produce tons and tons of code on their own. But is this code being shipped? Maintained? Reviewed?
Part 1 is linked in this article and explains a bit: “Minions are Stripe’s homegrown coding agents. They’re fully unattended and built to one-shot tasks. Over a thousand pull requests merged each week at Stripe are completely minion-produced, and while they’re human-reviewed, they contain no human-written code.”
I could be wrong, but my educated guess is that, like many companies, they have many low-hanging-fruit tasks that would never make it into a sprint, or even somewhat larger tasks that are straightforward to define and implement in isolation.
The few guys they haven't laid off are too busy reviewing and being overworked, doing the work of 10, to scroll HN. Gotta get their boss another boat; AI is so awesome!
Stripe hasn't had a layoff in a good while. It's hiring like mad and planning to grow engineering significantly. Your comment isn't grounded in reality.
If you apply to work at Stripe, your job could be fixing the AI bullshit described in the article!
Seems like they've been pretty successful with this method? Why do you think it's bullshit?
Hey, can they ask their coding agents to support 3D Secure, so I can pay with EU-issued credit cards on the few US sites I'm interested in?
That's for the Stripe customer to configure. Stripe itself has supported 3DS since ages ago.
Edit: you'll also find a pretty common sentiment among US website owners that the new API that supports 3DS is overcomplicated and they want their seven-line create-a-charge-with-a-token flow back. Screw the Europeans, since those owners only care about US buyers anyway.
Good to see we're vibe coding critical financial infrastructure. Progress is being made.
Next up: let's vibe code a pacemaker.
The glass-half-full here is it’s an incredible signal that one of the largest financial gateways in the world is _able_ to do this with current capabilities.
Personally, this is exciting.
They are enforcing rigor on agents the same way they would on humans. Do people think Stripe's engineering team would have been able to progress if each individual (human or machine) employee was not under harness and guardrail, and just wrote code willy-nilly according to their whims? Vibe coding is whimsical; agentic engineering re-applies to LLM outputs what brought, and still brings, rigor to software engineering in general. Of course, it's not only that, and there are novel problem spaces.
bot ass comment.
You're absolutely wrong! @dang, I really did write each letter by hand!
Lt. Dang, ice cream!
Isn't there a rule against this, i.e. accusing commenters of using LLMs (the offensive language aside)? Implicitly there is [0], because I can't see how it adds to the conversation. So what if it sounds like an LLM? Soon you won't be able to tell the difference anyway, and all that will be left is some chance that you are correct. Comments should be judged on the merits of their content, not on whether the commenter is a native English speaker.
[0] > Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes. Comments should get more thoughtful and substantive, not less, as a topic gets more divisive. When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3." Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.
etc: https://news.ycombinator.com/newsguidelines.html
What can I say? I speak like my friends.
Exactly. 1,000 PRs per week probably equates to roughly 100 engineers' worth of output.
It's hard to do an exact ROI, but they're probably saving something like $20,000,000+/year by not having to hire engineers to do this work.
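The back-of-the-envelope math above is easy to make explicit. Every figure here is an assumption from the thread (PRs per engineer-week and fully loaded cost are guesses, not anything Stripe has published):

```python
# Back-of-the-envelope check of the thread's numbers; all inputs are
# assumptions from the comments, not published Stripe data.
prs_per_week = 1000           # minion-produced PRs merged weekly (Part 1 figure)
prs_per_engineer_week = 10    # assumed human throughput on comparable tasks
cost_per_engineer = 200_000   # assumed fully loaded annual cost, USD

engineers_equivalent = prs_per_week / prs_per_engineer_week
annual_savings = engineers_equivalent * cost_per_engineer
print(engineers_equivalent, annual_savings)  # 100.0 20000000.0
```

The estimate is very sensitive to the per-engineer throughput assumption: halve it and the headcount equivalent doubles.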
They still need someone to review and hopefully QA every PR. I doubt it's saving much time, except maybe the initial debugging pass of building human context on the problem. The real benefit here is the ability for the human SWE to quickly context-switch between problems and domains.
But again: the agent can only move as fast as we can review code.
And they're not the only company doing this.
Financial capital at scale will begin to run circles around labor capital.
Soon indeed. From today:
> Cardiologist wins 3rd place at Anthropic's hackathon.
https://x.com/trajektoriePL/status/2024774752116658539
It's all human-reviewed, not vibe coded
Vibe coders do not know what linting is.
Why are there so many 404s? Linked to an mp3?! What was this feature?