
Show HN: I built a wizard to turn ideas into AI coding agent-ready specs

I created vibescaffold.dev. It's a wizard-style AI tool that guides you from idea → vision → tech spec → implementation plan. It generates all the documents an AI coding agent needs to understand & iteratively execute on your vision.

How it works:

- Step 1: Define your product vision and MVP
- Step 2: AI helps create technical architecture and data models
- Step 3: Generate a staged development plan
- Step 4: Create an AGENTS.md for automated workflows
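To make that concrete, here's roughly the shape of the AGENTS.md it produces (the project, stack, and file names below are illustrative placeholders, not actual output):

    # AGENTS.md (illustrative excerpt)

    ## Project
    A minimal note-taking web app; see vision.md and tech-spec.md for details.

    ## Conventions
    - TypeScript + React frontend, Postgres backend, as defined in tech-spec.md
    - Run `npm test` and the linter before marking any stage complete

    ## Workflow
    - Work through plan.md one stage at a time; do not skip ahead
    - After each stage, summarize what changed and stop for review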

I've used AI coding tools for a while. Before this workflow (and now, this tool), I kept getting "close but not quite" results from them. I learned that the more context & guidance I gave these tools up front, the better the results I got.

The other thing I have found with most tools that attempt to improve on "vibe coding" is that they add abstraction. To me, this just compounds the problem. AI coding agents are valuable, but they are error-prone - you need to be an active participant in their work. This workflow is designed to provide scaffolding for these AI agents while minimizing additional abstraction.

Would love feedback on the workflow - especially curious if others find the upfront planning helpful or constraining.

AI can generate specs and development plans, but a tremendous amount of trash can slip through those specs and plans, resulting in complete garbage as output.

I'm using AI a lot in planning, but I keep close manual oversight of the specs and development plan, and I still read all active-path code (I give the AI a little, but not too much, leeway on testing, since sometimes it starts writing tests asserting true == true).
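Roughly the kind of no-op test I mean (a made-up illustration, not from any real project):

    # A "test" that passes no matter what the code under test does.
    def test_upload_succeeds():
        assert True == True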

35 minutes ago | Aperocky

I'm happy for you! Some feedback as I walk through the app.

I like the styling; it's really slick. I also like that you let me use the tool online without signing up. I was curious about how you're supporting this and paying for inference, but I see now that you haven't really wired anything up. When I try to generate my one-pager, it returns a placeholder.

> Turn your messy ideas...

I'm not a fan of this framing. Messy has negative connotations, so it's not clear why you're insulting me when we just met. ;)

The wizard:

There's a bit of duplication, since you have "Tell the agent..." as well as "Tell me...", both conveying the same information.

I can jump through steps without completing prior ones. Isn't that going to cause a problem?

It's hard to truly evaluate this further without seeing it in action. As other commenters have said, many agents already support Plan Mode, so it's important for you to distinguish yourself from that.

5 hours ago | nvader

Thanks for the response! I have wired it up... it's using gpt-5-mini. I just have API-level usage limits - I figured if it hit those, I could worry about how to adjust costs, rate limiting, etc.

The jumping through steps is not intended - that's a regression.

I agree on plan mode - this one is just a lot more featured. I should include some samples to demonstrate that. Here's an example, if you're interested, of the prompt plan output - https://github.com/benjaminshoemaker/data_graph_gap_report/b...

5 hours ago | straydusk

I'm honestly not sure how you got the "turn your messy ideas" placeholder - how did you get to that?

4 hours ago | straydusk

I think the "messy ideas" was a reference to the homepage copy "Turn your messy ideas into crystal clear specs.", not continuing the previous thought about the placeholder. I'd agree that "messy" might have more negative connotations than you intended.

3 hours ago | adambb

Interesting - fair point, though. I think of my own ideas that way, but maybe not everyone does!

3 hours ago | straydusk

> It is a wizard-style AI tool that will guide you from idea → vision → tech spec → implementation plan.

(And then the implementation plan is fed to the same sort of AI that you were going to give the "idea" to in the first place.)

If doing this gives good results, then it shouldn't be necessary.

3 hours ago | zahlman

This is the standard current approach for most models/agent tools, because models do well at "make a plan for this" and "execute this step" but are less good at generating a single response that includes both the plan and every step of the execution without intermediate prompting/redirection/focusing. It helps fight context drift and maximizes the effectiveness/efficiency of the predictions.

Most advances in the tools I've used in the last two years are exactly this sort of "automate the steering and feedback loop that the prompt goes through": fairly boilerplate sequencing of refinement from initial idea -> plan -> execution -> feedback.
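A minimal sketch of that plan-then-execute loop, in Python - call_llm is a stand-in for whatever chat-completion API you use, and the prompts are illustrative, not any particular tool's internals:

    def call_llm(prompt: str) -> str:
        # Stand-in: send the prompt to your model of choice and return its reply.
        raise NotImplementedError

    def plan_then_execute(idea: str) -> list[str]:
        # One focused call turns the idea into an ordered list of small steps.
        plan = call_llm(
            "Break this idea into a numbered list of small, independent steps:\n" + idea
        )
        steps = [line.strip() for line in plan.splitlines() if line.strip()]

        # Each step runs in its own call, seeing only the plan and prior results
        # rather than the whole conversation, which limits context drift.
        results: list[str] = []
        for step in steps:
            results.append(call_llm(
                "Overall plan:\n" + plan
                + "\n\nCompleted so far:\n" + "\n".join(results)
                + "\n\nDo only this step and report what you did:\n" + step
            ))
        return results

Real tools differ in how much of the plan and prior output they feed back at each step, but that's the basic shape.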

an hour ago | majormajor

Why? From first principles you can deduce why it makes sense that this could work: they are autoregressive next-token prediction engines. As to efficacy, well, that you would need to try and see, but I see no reason to dismiss the idea out of hand.

an hour ago | FuckButtons

I mean, it would be great if it wasn't necessary - but have you used these coding agents? They need it.

3 hours ago | straydusk

What are you doing that SpecKit/BMAD/etc aren't?

2 hours ago | CuriouslyC

You can already do this in your coding agent? You just need the LLM to index the code base, and the rest is easy. I need a before-and-after to understand what you are doing differently.

Unless your tool has people skills, this engineer can just take the spec to the agent ;)

6 hours ago | esafak

Maybe that's something I should add, to illustrate it more clearly. This tool creates a spec that the AI coding agent of your choice can execute on; it's not an AI coding agent itself. Or maybe I'm not sure what you're getting at?

6 hours ago | straydusk

But all AI models will generate a spec if you ask them to?

5 hours ago | wredcoll

I'm not sure if it's intended behavior or not, but after it generated the spec, it did offer to write the code and then generated it.

6 hours ago | Avicebron

I mean I can already iterate on a spec with the agent; I do not need a separate tool.

5 hours ago | esafak
