
Gas Town Decoded

I’m very bought into the idea that raw coding is now a solved problem with the current models and agentic harnesses, to say nothing of what’s coming in the near term.

That being said, I think we’re in a weird phase right now where people’s obvious mental health issues are appearing as “hyper productivity” due to the use of these tools to absolutely spam out code that isn’t necessarily broadly coherent but is locally impressive. I’m watching multiple people, both publicly and privately, clearly breaking down mentally because of the “power” AI is bestowing on them. Their wires are completely crossed when it comes to the value of outputs vs. outcomes, and they’re espousing generated nonsense as if it were thoughtful insight.

It’s an interesting thing to watch play out.

5 hours agodchuk

Mm.

I'd agree, the code "isn’t necessarily broadly coherent but is locally impressive".

However, I've seen some totally successful, even award-winning, human-written projects where I could say the same.

Ages back, I heard a woodworking analogy:

  LLM code is like MDF. Really useful for cheap furniture, massively cheaper than solid wood, but it would be a mistake to use it as a structural element in a house.
Now, I've never made anything more complex than furniture, so I don't know how well that fits the previous models, let alone the current ones… but I've absolutely seen success come out of bigger balls of mud than the balls of mud I got from letting Claude loose for a bit without oversight.

Still, just because you can get success even with sloppy code doesn't mean I think this holds everywhere. It's not like the award was for industrial equipment or anything; the closest I've come to life-critical code is helping to find and schedule video calls with GPs.

30 minutes agoben_w

If you give every idiot a voice heard worldwide, you will hear every idiot in the whole world. If you give every idiot a tool to make programs, you will see a lot of programs made by idiots.

an hour agoyetihehe

> raw coding is now a solved problem

Surely this was solved with Fortran. What changed? I think most people just don't know what program they want.

an hour agoGrowingSideways

You no longer have to be very specific about syntax. There's now an AI that can translate your idea into whatever language you want.

Previously, if you had an idea of what the program needed to do, you needed to learn a new language. This is so hard that we use language itself as the metaphor: it's hard to learn a new language, and only a few people can translate from French to English, for example. Likewise, few people can translate English to Fortran.

Now, you can just think about your program in English, and so long as you actually know what you want, you can get a Fortran program.

The issue is now what it originally was for senior programmers: deciding what to make, not how to make it.

24 minutes agolordnacho

> where people’s obvious mental health issues

I think the kids would call this "getting one-shotted by AI"

4 hours agopetesergeant

Yeah I am definitely trying to stay off hype and just use the damn tool

4 hours agohahahahhaah

There is a lot of research on how words/language influences what we think, and even what we can observe, like the Sapir-Whorf hypothesis. If in a language there is one word for 2 different colors, speakers of it are unable to see the difference between the colors.

I have a suspicion that extensive use of LLMs can result in damage to your brain. That's why we are seeing so many mental health issues surfacing, and why we are getting a bunch of blog posts about "an agentic coding psychosis".

It could be that LLMs go from being bicycles for the brain to smoking for the brain, once we figure out the long-term effects.

4 hours agobkolobara

> If in a language there is one word for 2 different colors, speakers of it are unable to see the difference between the colors.

That is quite untrue. It is true that people may be slightly slower or less accurate in distinguishing colors that are within a labeled category than those that cross a category boundary, but that's far from saying they can't perceive the difference at all. The latter would imply that, for instance, English speakers cannot distinguish shades of blue or green.

3 hours agoBrenBarn

> If in a language there is one word for 2 different colors, speakers of it are unable to see the difference between the colors.

Perhaps you mean to say that speakers are unable to name the difference between the colours?

I can easily see differences between (for example) different shades of red. But I can't name them other than "shade of red".

I do happen to subscribe to the Sapir-Whorf hypothesis, in the sense that I think the language you think in constrains your thoughts - but I don't think it is strong enough to prevent you from being able to see different colours.

4 hours agojstanley

No, if you show them two colors and ask them if they are different, they will tell you no.

EDIT: I have been searching for the source of where I saw this, but can't find it now :(

EDIT2: I found a talk touching on the topic with a study: https://youtu.be/I64RtGofPW8?si=v1FNU06rb5mMYRKj&t=889

3 hours agobkolobara

> if you show them two colors and ask them if they are different, they will tell you no

The experiments I've seen seem to interrogate what the culture means by colour (versus shade, et cetera) more than what the person is seeing.

If you show me sky blue and Navy blue and ask me if they're the same colour, I'll say yes. If you ask someone in a different context if Russian violet and Midnight blue are the same colour, I could see them saying yes, too. That doesn't mean they literally can't see the difference. Just that their ontology maps the words blue and violet to sets of colours differently.

3 hours agoJumpCrisscross

If you asked me whether a fire engine and a ripe strawberry are the same color, I would say yes. Obviously, they are both red. If you held them next to each other, I would still be able to tell you they are obviously different shades of red. But in my head they are both mapped to the red "embedding". I imagine that's the exact same thing that happens to blue and green in cultures that don't have a word for green.

If, on the other hand, you work with colors a lot, you develop a finer mapping. If your first instinct when asked for the name of that wall over there is to say it's sage instead of green, then you would never say that a strawberry and a fire engine have the same color. You might even question the validity of the question, since fire engines come in all kinds of different colors (neon red being a trend lately).

2 hours agowongarsu

> in my head they are both mapped to the red "embedding"

Sure. That's the point. These studies are studies of language per se, not of how language influences perception to a meaningful degree. Sapir-Whorf is a cool hypothesis, but it isn't true for humans.

(Out of curiosity, what is "embedding" doing that "word" does not?)

2 hours agoJumpCrisscross

Word would imply that this only happens when I translate my thoughts to a chosen human language (or articulate thoughts in a language). I chose embedding because I think this happens much earlier in the pipeline: the information of the exact shade is discarded before the scene is committed to memory and before most conscious reasoning. I see this as something happening at the interface of the vision system, not the speech center.

Which is kind of Sapir-Whorf, just not the extreme version of "we literally can't see or reason about the difference", more "differences we don't care about get lost in processing". Which you can kind of conceptualize as the brain choosing a different encoding, or embedding space (even though obviously such a thing does not exist in the literal sense in our brains)

Edit: in a way, I would claim Sapir-Whorf mistakes correlation for causation: it's not that the words we know are the reason for how we can think; it's that the differences we care about cause both the ways we think and the words we use.

2 hours agowongarsu

You're probably thinking of the Himba tribe color experiment - which as it turns out, was mostly fabricated by a BBC documentary:

https://languagelog.ldc.upenn.edu/nll/?p=17970

2 hours agopverheggen

Yes, I think this was it! Thanks for sharing the link. I had no idea that part was fabricated.

2 hours agobkolobara

The ability for us to look at a gradient of color and differentiate between shades even without distinct names for them seems to disprove this on its face.

Unless the question is literally the equivalent of someone showing you a swatch of crimson and a swatch of scarlet and being asked if both are red, in which case, well yeah sure.

3 hours agocthalupa

The idea of gas town is simultaneously appealing and appalling to me. The waste and lack of control is wild, but at the same time there's at least a nugget of fascinating, useful work in there. In a world where compute is cheap and abundant and the models are a notch smarter, I think it's the start of a useful framework for what the future of augmented work might look like.

I have no interest in using gas town as it is (for a plethora of reasons, not the least of which being that I'm uninterested in spending the money), but I've been fascinated with the idea of slowing it down and having it run with a low concurrency. If you've got a couple A100s, what does it look like if you keep them busy with two agents working concurrently (with 20+ agents total)? What does it mean to have the town focus the scope of work to a series of non-overlapping changesets instead of a continuous stream of work?

If you don't plan to have it YOLO stuff in realtime and you can handle the models being dumber than Claude, I think you can have it do some really practical, useful things that are markedly better than the tools we have today.

11 hours agobastawhiz

I put it in a VM and had it build a really simple todo app for me the other day. It wasted so many tokens that I can't help but agree with you right now. And I could certainly have done the same thing with beads and opus in approximately the same amount of time.

However, the Gas Town one was almost completely hands-off. I think my only interventions were due to how beta it is: I had to help it work around its own bugs to keep it from doing stupid things.

Other than that, it implemented exactly what I asked for in a workable fashion with effectively one prompt. It would have taken several prompts and course corrections to get the same result without it.

Other than the riskiness (it runs in dangerous-permissions mode) and the incredible cost inefficiency, I'd certainly use it.

9 hours agojsight

If gas town can actually do stuff well at any price it'll have a radical impact on how society is organized, because there are people out there who have practically unlimited money (billions of dollars of their own to spend, plus they can get the government to print more dollars for them if necessary; you probably already know who a few of these people are).

I've only started using coding agents recently and I think they go a long way to explain why different people get different mileage from "AI." My experience with Opencode using its default model, vs. Github Copilot using its default model, is night and day. One is amazing, the other is pretty crappy. That's a product of both the software/interface and the model itself I'd suspect.

Where I think this goes in the medium term is we will absolutely spin up our own teams of agents, probably not conforming to the silly anthropomorphized "town" model with mayors and polecats and so on, but they'll be specialized to particular purposes and respond to specific events within a software architecture or a project or even a business model. Currently the sky's the limit in my mind for all the possible applications of this, and a lot of it can be done with existing and fairly cheap models too, so the bottleneck is, surprise surprise... developer time! The industry won't disappear but it will increasingly revolve around orchestrating these teams of models, and software will continue to eat the world.

6 hours agosafety1st

I guess tokens get cheaper all the time, and we can address the risk with sufficient sandboxing. (I mean the risk to your computer.)

7 hours agoeru

I've been running my own version of what Gas Town seems to be on a couple of Proxmox hosts for a while now; it's fine.

6 hours agoAvicebron

If software engineers can agree on anything, it's that LLM experiences are wildly inconsistent. People have similar inconsistencies. We have different experiences, intellects, educations, priorities, motivations, value systems. And in software specifically (and institutions generally) we create methodologies and processes that diminish our inconsistencies and leverage our strengths.

Gas Town is a demonstration of a methodology for getting a consistent result from inconsistent agents. The case in point is that Yegge claims to have solved the MAKER problem (Tower of Hanoi) via prompting alone. With the right structure, quantity has a quality all its own.

6 hours agocondiment

I feel like each of these things is going to be bitter-lessoned by a model you can just tell "yeah, get a bunch of agents together and clone Twitter, get em to put requirements together first, ya know, measure once and all that. Promise em a beer when done".

4 hours agohahahahhaah

I'd help build Gas City and Gas State, and Gas Country if that would mean we actually would solve the things AI promised to solve. All sickness, famine, wealth ...

The problem is, we're just fidgeting yolo-fizzbuzz ad nauseam.

The return on investment at the moment is probably one of the worst in the history of human investments.

AI does keep improving, even today, but we're going to run out of planet before we get there...

8 hours agokeyle

As of yet, the AI models doing important work are still pretty specialized. I'd be happy to pitch in to run something like an open-source version of AlphaFold, but I'm not aware of any such projects.

I have trouble seeing LLMs making meaningful progress on those frontiers without reaching ASI, but I'd be happy to be wrong.

8 hours agoViscountPenguin

I think part of the problem/difference is that all "important work" needs to be auditable and understood by humans. We need to be able to fix bugs, and not just roll the dice and hope that a lack of symptoms means everything is cured.

3 hours agoTerr_

AI can't even find a cure for the common cold.

6 hours agotoephu2

The Wright brothers are idiots, if it were me I'd have made a supersonic jet from the get go and not waste my time mucking around with prototypes.

8 hours agosoulofmischief

The prototype phase meant data centers are now measured in MW instead of TFLOPS.

At a time when we were desperate to reduce emissions, data centers now consume around 20% of the energy consumed by the entire aviation sector, with consumption rising at 15% YoY.

Never mind the water required to cool them, or the energy and resources required to build them, the capital allocation, and the opportunity cost of not allocating all of that to something else.

And this is, your words, the prototype phase.

3 hours agoncruces

the Wright brothers sold me a subscription to a supersonic jet and I've got a bundle of matchsticks and some canvas.

2 hours agojpfromlondon

We were promised supersonic jets today, or very soon, though, and our economies have been held hostage waiting for that promise.

8 hours agoares623

The passive voice is doing a lot of work in your sentence.

7 hours agoeru

We are perpetually just months away from software jobs being obsolete.

AGI was achieved internally at OpenAI a year ago.

Multiple companies have already re-hired staff they had fired and replaced with AI.

etc.

5 hours agotroupo

Your problem is thinking that hype artists, professionals and skeptics are all the same voice with the same opinion. Because of that, you can't recognize when sentiment is changing among the more skeptical.

5 hours agosoulofmischief

You are responding to some voices in your head, not to the context of the conversation.

You're also presuming too much about what I'm thinking and being dead wrong about that.

4 hours agotroupo

The first recorded supersonic flight was in 1947.

7 hours agosoulofmischief

Supersonic passenger planes failed commercially.

2 hours agowindward

Very minor nit: crew can be a person too. In fact, that's how you're supposed to hack on a codebase in Gas Town directly: add yourself as crew.

Other than that, this is a helpful list, especially for someone who hasn't been hacking around on this thing while it's in rapid-development mode. I find Gas Town super interesting, and tantalizingly close to being amazingly useful. That said, I wouldn't mind a slightly less 'flavored' set of names for workers.

11 hours agovessenes

A couple/few years ago people were trying to do agents by just putting the LLM in a loop and letting it go, and it was just awful and didn't work at all. I think a bunch of things had to happen over the course of 1-2 years to get to coding agents being a real, useful thing: models had to get quite a bit smarter/cheaper/faster, models had to get good at tool use, and they needed to be executed in well-built harnesses with good tools available.
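
To make that concrete, here is a minimal sketch of the "LLM in a loop with tools" harness shape being described; `call_model`, the tool registry, and the message format are made-up stand-ins, not any particular vendor's API:

```python
# Minimal sketch of an agent harness: a model in a loop that can request tools.
# call_model is a canned fake here; in a real harness it would be a chat API call.
from typing import Callable

def run_tests(args: str) -> str:
    """Placeholder tool the model can ask the harness to run."""
    return "all tests passed"

TOOLS: dict[str, Callable[[str], str]] = {"run_tests": run_tests}

def call_model(transcript: list[dict]) -> dict:
    # Fake model: ask for the test tool once, then finish.
    if not any(m["role"] == "tool" for m in transcript):
        return {"tool": "run_tests", "args": ""}
    return {"answer": "done: " + transcript[-1]["content"]}

def agent_loop(task: str, max_steps: int = 20) -> str:
    transcript = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(transcript)
        if "answer" in reply:                         # model says it's finished
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["args"])  # run the requested tool
        transcript.append({"role": "tool", "content": result})
    return "gave up: step budget exhausted"

print(agent_loop("fix the failing test"))  # -> "done: all tests passed"
```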

This feels like the same thing. Too early, but we're definitely headed in the direction of finding ways to use more tokens to get more mileage per prompt.

4 hours agofuryofantares

I actually love the idea of totally new naming schemes for experimental software.

Certain name types are so normalized (agent, worker, etc) that while they serve their role well, they likely limit our imagination when thinking about software, and it's a worthwhile effort to explore alternatives.

11 hours agojamestimmins

This reminds me of Moldbug's Urbit. I can't be bothered to look it up, but his comment was along the lines of "existing words bring assumptions, so safest to make new ones". To which, my comment would be: perflufflington flibnik qupnux.

9 hours agotom_

I do too, but you can take things too far, which I'd argue has happened the moment "figuring out what the names mean" becomes enough of an intellectual challenge to provide a dopamine hit; at that point, you've (intentionally or otherwise) germinated a cult. It's human nature: people will support the design not on its merits but rather as loss aversion for the work they put into decoding it.

11 hours agotptacek

Yes at some point innovative software and naming are at cross purposes, and if your naming gets too extreme ultimately that will get all of the attention.

3 hours agojamestimmins

Anthropomorphizing chunks of your system is kinda weird given interactive chat as the UI to the LLM.

Akka and others have standardized names for all this stuff (and seem to fully know that a code ‘actor’ is code). These wheels don’t need reinventing (much less as ‘the Marvin’s’, a lovable set of bi-racial quadruplets who always get you where you’re going <rocket emoji>).

In fact, I dare say a lot of the fascination with LLM orchestration comes from people unfamiliar with actor models and the level of elegance a properly expressive language lets them have.
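
For readers who haven't met the actor model, here is a tiny sketch of the pattern in plain Python (not Akka's actual API), just to show the shape being referred to: named roles, a private mailbox each, one message at a time.

```python
# Toy actor: each instance owns a mailbox and handles messages sequentially,
# so "orchestration" is just naming roles and routing messages between them.
import queue
import threading
import time

class Actor:
    def __init__(self, name: str):
        self.name = name
        self.mailbox: "queue.Queue[str]" = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message: str) -> None:
        self.mailbox.put(message)                 # asynchronous: the sender never waits

    def _run(self) -> None:
        while True:
            self.receive(self.mailbox.get())      # one message at a time, no shared state

    def receive(self, message: str) -> None:      # override per role (worker, supervisor, ...)
        print(f"{self.name} handling: {message}")

worker = Actor("worker-1")
worker.send("apply the requested patch")
time.sleep(0.1)  # give the daemon thread a moment before the script exits
```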

an hour agobonesss

That last line is exactly what I was thinking. Find an expressive language and then progressively formalize your workflows in DSLs that enforce correctness by design, not through layers and layers of natural language “skills” and deadweight agentic watchdogs.

24 minutes agojkhdigital

Claude is ok. Gas town seems like a Claude multiplier. I’m not sure more Claude is what I’d even want!

Not sure I love what it does all the time; it tends to fit whatever box you set up and will easily break out if you aren’t veeeery specific. Is it better than writing a few thousand lines of code myself that I deeply understand and can debug and explain? I don’t know yet. I think it’d be good for writing functions one at a time with massive supervision.

It’s great for writing scripts and things where precision and correctness outside the success path aren’t really needed. If a script fails and it wasn’t deleting a hard drive, who cares. If my embedded code fails out in a product in the wild, that’s a much bigger nuisance and potentially fatal for the device (not the humans), which is wasteful.

8 hours agobfrog

I’d like Gas Town more if it could run the Cursor CLI instead of Claude, and thus be able to choose models. Claude is okay. But these things certainly have personalities, and I’m not sure which would be best for each role. Gas Town’s different actors seem like a great place to take advantage of the different quirks of each. And I certainly don’t choose Claude consistently when given a choice.

5 hours agooofbey

I use beads quite a bit, but not as Steve intended, and definitely the opposite of "Gas Town": I use the note-taking capability and the integration with Git (that is, as something of a glorified Makefile and database) to debug contexts, closing the loop and increasing accuracy over time. Nevertheless, it has been useful for large batch runs over my code base: the record is thirty hours of straight processing while still getting something useful, plus enough trace data to make further improvements.

Steve has gone "a bit" loopy, in a (so far) self-aware manner, but he has some kind of insight into the software engineering process, I think. Still, I predict beads will eventually break under the weight of no supervision if he keeps churning it, but some others will pick up where he left off, with more modest goals. He did, to his credit, kill off several generations of projects in a similar category before this one.

10 hours agofdr

His latest post is endorsing a crypto exchange because they paid him $50k.

https://steve-yegge.medium.com/bags-and-the-creator-economy-...

6 hours agoalexjurkiewicz

I’m pro-LLM and use them, but crikey: if they’re so good at code, why are these people, with all the attention, branding, and connections in the world, unable to capitalize on them?

I believe that Google uses its internal Gemini, trained on its internal infrastructure, to generate boilerplate and insights for older, less mature code in one of the world’s biggest and most complicated anythings, ever. But I don’t see them saying anything to the effect of “neener neener, we’re using Markov chains, so 10x our stock ‘cause of the otherwise impossible face-melting Google Docs 2026.”

OpenAI is chasing ads, like Reddit, to regurgitate Reddit content. If this stuff is worth the squeeze, I need to see the top 10 LLM-fluencers refusing to bend over for $50K. The opposite is on display.

So, hypotheses: Google’s S-tier geniuses and PMs are already expressing the mature, optimum application; there are no silver bullets; and there are more gains to be had by ditching bad tech and extraneous vendor entanglements (Copilot, 365).

an hour agobonesss

That entire article sounds like my friends who think AI is real and keep sending their parents money into crypto scams.

I think I’ll just develop a drinking problem if Gas Town becomes something real in the industry and this kind of person is now one of our thought leaders.

5 hours agolovich

To be fair, he's always been a little loopy. At least, I think this post of his was loopy: https://steve-yegge.blogspot.com/2007/06/that-old-marshmallo...

It was also one of my favorite posts of his and has aged incredibly well as my experience has grown.

9 hours agojsight

That's one reason I am less worried about him than some are, although I don't want to say that only to have something bad happen to him; that would be a form of complacency. Just because (say) Boltzmann and Cantor had useful insights along the way didn't mean people shouldn't have been looking to support them.

9 hours agofdr

> but some others will pick up where he left off, with more modest goals

Already happening :-) https://github.com/Dicklesworthstone/beads_rust

9 hours agosorenbs

The main area where I'd like to see some departure from beads is using Markdown files (or something) so you can see the issue context/comments better in a diff generated by Git.

The other area where I'd like to see some more open-ended software engineering thinking is regression testing: ways of storing or referencing old versions of texts to check whether the agent can still complete old transformations properly after a context change that patches up a weakness in a transformation we want to keep. This is tricky because it interacts with something essential in software engineering: the ability to run test suites and respond to the outcome. I don't think we know yet when to apply what fidelity of testing, e.g. one-shot on snippets versus a more realistic test based on git worktrees.
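
A rough sketch of what that snippet-level regression check could look like; `run_agent_transform` and the `regression_cases/` layout are hypothetical, not anything beads provides:

```python
# Re-run previously accepted agent transformations after a context/prompt change
# and report which recorded cases no longer reproduce the accepted output.
import json
from pathlib import Path

def run_agent_transform(snippet: str, context: str) -> str:
    raise NotImplementedError  # hypothetical: the agent re-applies the transformation

def check_old_transformations(cases_dir: str = "regression_cases") -> list[str]:
    failures = []
    for case_file in Path(cases_dir).glob("*.json"):
        case = json.loads(case_file.read_text())
        result = run_agent_transform(case["input"], context=case["current_context"])
        # Cheap one-shot fidelity: exact comparison against the accepted output; a
        # more realistic run would check out a throwaway git worktree and run tests.
        if result != case["accepted_output"]:
            failures.append(case_file.name)
    return failures
```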

This is not something you'd want for every context, but a lot of my effort is spent building up prompt fragments to normalize and clean up the code coming out of a model that did some ad-hoc work that meets the test coverage bar, which constrains it decently into having achieved "something." Kind of like a prototype. But often, a lot of ungratifying massaging is required to even cover the annoying but not dangerous tics of the LLM, to bring clarity to where it wrote, well, very bad and unprincipled code...as it does sometimes.

8 hours agofdr

I was disappointed to see that this is still 10x the code needed for the feature set and that it still insists on duplicating state into a SQLite index for such minuscule amounts of data.

I've seen 25-30 similar efforts to make a Beads alternative and they all do this for some reason.

8 hours agowild_egg

It seems like one of the key events that needs to happen for any professional domain to take off is for it to develop an "inside" language that nobody else understands. For example, I still don't know what a kanban or a scrum is. So I'm very ill positioned to challenge their use or question how they are done. Hence they got to dodge a whole lot of opposition that would probably have brought it all down. The invention of a new mysterious terminology I think was critical for agile to take off.

The problem with this phenomenon is that the same freedom from critique that is seemingly necessary for new domains to establish themselves also detaches them from necessary criticism. There's simply no way to tell if this isn't a load of baloney. And by the time it's a bullet point requirement on CVs to get employed it's too late for anybody to critique it.

10 hours agozmmmmm

Maintenance Manager Checker Agent and the rest of the nouns Yegge employs are ironic given his Kingdom of Nouns essay.

11 hours agodevin

“Maintenance Manager Checker Agent” is not a noun Yegge employs; it is Brinker’s term for Yegge’s “Boot the Dog”.

11 hours agodragonwriter

Someone really needs to rewrite that entire article without all that jargon.

9 hours agovivzkestrel

I don't understand why people are making this so complicated. We have a battle-tested SDLC. We don't need to reinvent this shit. We just need to make some affordances (such as rationing human attention) in the tools and processes we set up, now that the majority of the actors in the system are agents.

Spec your software like an architect/PO, decompose it into a task DAG, then orchestrate each lane and assemble all the changesets in a merge branch rather than constantly repointing HEAD.
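
For illustration, a minimal sketch of the decompose-into-a-task-DAG step using Python's standard `graphlib`; the task names are made up:

```python
# Topologically order a task DAG so each "lane" (agent) only picks up work
# whose prerequisites have already landed in the merge branch.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on (hypothetical breakdown)
task_dag = {
    "spec-api":         set(),
    "db-schema":        set(),
    "service-layer":    {"spec-api", "db-schema"},
    "http-handlers":    {"service-layer"},
    "integration-test": {"http-handlers"},
}

ts = TopologicalSorter(task_dag)
ts.prepare()
while ts.is_active():
    ready = list(ts.get_ready())         # independent tasks: dispatch one lane each
    print("dispatch in parallel:", ready)
    ts.done(*ready)                      # in reality: wait for each changeset to merge
```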

9 hours agoCuriouslyC

I can’t stop thinking about this exact notion. The main reason we don’t always use stuff like TLA+ to spec out our software is because it’s tedious AF for anything smaller than, like, mission-critical enterprise-grade systems and we can generally trust the humans to get the details right eventually through carrot-and-stick incentive systems. LLM agents have none of the attentional and motivational constraints of humans so there’s no reason not to do things the right way.

10 minutes agojkhdigital

SDLC = Software Development Life Cycle (?)

9 hours agoarcanemachiner

Correct

7 hours agoCuriouslyC

This looks familiar to people who have seen how the more elaborate NPC systems work in major multiplayer games. There are lots of semi-independent NPCs, with some degree of overall coordination. Groups of cops or soldiers may have a commander program for tactical coordination, and there may be a higher level system deploying units for strategic purposes.

In games, what the NPCs can do is usually rather dumb. Move and shoot is usually most of their functionality. This keeps the overhead down so the system is affordable.
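
A toy sketch of that hierarchy, with made-up names: units that only move and shoot, a commander doing tactical coordination, and a strategic layer assigning squads to objectives.

```python
# Three layers: dumb units, a tactical squad commander, a strategic deployer.
class Unit:
    def __init__(self, name: str):
        self.name = name

    def move_and_shoot(self, target: str) -> None:
        print(f"{self.name} moves on and engages {target}")  # all a unit can do

class SquadCommander:
    def __init__(self, units: list[Unit]):
        self.units = units

    def coordinate(self, objective: str) -> None:
        for unit in self.units:               # tactical layer: point every unit at one objective
            unit.move_and_shoot(objective)

class StrategicAI:
    def deploy(self, squads: list[SquadCommander], objectives: list[str]) -> None:
        for squad, objective in zip(squads, objectives):  # strategic layer: assign squads
            squad.coordinate(objective)

StrategicAI().deploy(
    [SquadCommander([Unit("cop-1"), Unit("cop-2")])],
    ["the bank district"],
)
```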

Gas Town may be a step towards AIs which have an ongoing sense of what they're doing. I'm not going to get into the "consciousness" debate, but it's closer to liveness.

8 hours agoAnimats

What games are notable in this regard? The classic Majesty series comes to mind. UO aspired to complex NPC systems. Fable as well. I always dreamt of a more advanced Sim City-meets-MMO that just went all in on that.

8 hours agojrowen

UO? I don't remember any complex NPC systems there. Ultima 7 had daily schedules and the veneer of a functioning economy though

4 hours agonhinck3

> Persistent Worker Agents, which you talk to directly (not through the Mayor),

I had a bit of a chuckle.

I think there is value in anything approximating a proposer-verifier loop, but I don't know that this is the ideal approach.

12 hours agobob1029

I think Yegge and Huntley are smart guys.

I don't think they're doing a good job of incubating their ideas into something precise and clearly useful -- there is something to be said for being careful and methodical before showing your cards.

The message they are spreading feels inevitable, but the things they are showing now are ... for lack of better words, not clear or sharp. In a recent video at AI Engineer, Yegge comments on "the Luddites" - but even for advocates of the technology, it is nigh impossible to buy the story he's telling from his blog posts.

Show, don't tell -- my major complaint about this group is that they are proselytizing about vibe coding tools ... without serious software to show for it.

Let's see some serious fucking software. I'm looking for new compilers, browsers, OSes -- and they better work. Otherwise, what are we talking about? We're counting foxes before the hunt.

In any case, wouldn't trying to develop a serious piece of software like that _at the same time you're developing Gas Town or Loom_ make (what critics might call) the ~Emacs config tweaking for orchestration~ results-driven?

11 hours agomccoyb

Here's a separate, optimistic comment about Yegge and Huntley: they are obviously on the right track.

In a recent video about Loom (Huntley's orchestration tool), Huntley comments:

"I've got a single goal and that is autonomous evolutionary software and figuring out what's needed to be there."

which is extremely interesting and sounds like great fun.

When you take these ideas seriously, and if the agents get better (by hook or by crook, or via RLVR), you can see the implications: "grad student descent" on whatever piece of software you want. RAG over ideas, A/B testing of anything, endless looping, moving software.

It's a nightmare for the model of software development and human organization which is "productive" today, but an extremely compelling vision for those dabbling in the alternative.

10 hours agomccoyb

It's a science project. I think the "I am so crazy" messaging is deliberate to scare most people away while attracting a few like-minded beta testers. He's telling you not to use it, which some people will take as a dare...

11 hours agoskybrian

Anyone have some kind of central hub for finding out about new tools/techniques? I'm convinced that headless multi-agent coordination is the way to go, but it needs a lot of guard rails, one of the biggest of which will be cost control. I'm sure there will be a lot more developments in this space, but I don't want to just happen across them by accident...

11 hours ago0xbadcafebee

At some point evolving software instead of designing it will work. Now the evolutionary pressure leads towards churning more tokens.

11 hours agozbyforgotp

I haven't read the Yegge post closely, so just commenting that namespaces (or naming conventions) would make the easier-to-casually-read names more practical...

For example, if Polecat becomes GasTown.WorkerAgent (or GasTown.Worker), then you always have both an unambiguous way and a shorthand-in-context way of referring to the concept.

(For naming conventions when you don't have namespaces as a language feature, use prefixes within the identifier, such as `GasTown_Worker`.)

If GasTown.Worker is implemented with framework Foo, using that framework's Worker concept, GasTown.Worker might have a field named fooWorker of type Foo.Worker. (In the context of the implementation of GasTown, the unqualified name always means the GasTown concept, and you always disambiguate concepts from elsewhere that use the same generic or similar terms.)
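
A small sketch of that naming idea in Python module form; the `gastown` package and every name in it are hypothetical, not taken from the real Gas Town code:

```python
# file: gastown/worker.py (hypothetical)
from dataclasses import dataclass

@dataclass
class FooWorker:
    """Stand-in for a third-party framework's own Worker concept (Foo.Worker)."""
    queue: str

@dataclass
class Worker:
    """The project's worker concept ("Polecat"), under a descriptive namespaced name."""
    name: str
    foo_worker: FooWorker  # the framework-level worker this concept wraps

# Elsewhere, the qualified name disambiguates:
#   from gastown.worker import Worker as GasTownWorker
print(Worker(name="polecat-1", foo_worker=FooWorker(queue="default")))
```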

Complicated names like GasTown.MaintenanceManagerCheckerAgent might need some creative name shortening, but hopefully are still descriptive, or easy to pick up and remember. Or, if the descriptive and distinguishing name was complicated because the concept is a weird special case within the framework, maybe consider whether it should be rethought.

11 hours agoneilv

The overuse of metaphors makes me feel like this person is trying to reinvent Chef, but for AI.

11 hours agorilindo

Steve Yegge used to have interesting, albeit long-winded, things to say about software.

11 hours agogrebc

Show, don't tell.

If you need ten pages to explain your project and even after I read your description, I'm still left confused why I need it at all, then maybe... I don't need it?

6 hours agohota_mazi

Gas Town wasn't satire?

11 hours agodcmatt

Poe's law &c.

8 hours agonever_inline

From Steve Yegge's post:

> Better UIs will come. But tmux is what you have for now. And it’s worth learning.

So brother has 2 Claude Code accounts and couldn't vibe-code a UI, huh?

8 hours agonever_inline

Don’t forget the apparent crypto grift angle now (something related to BAGS)

Ridiculous. Beads might be passable software, but Gas Town just appears to be a good way to burn tokens at the moment.

9 hours agoSomaticPirate

It's like Conway's Law. Both humans and agents arrive at roughly identical hierarchies for organizing labor. There is something about the game of telephone imposed by limited working memory that requires this structure. Gas Town's only failure is not being familiar with prior art and coming up with very strange names for established patterns that already exist in large hierarchical organizations like governments, corporations, and militaries.

11 hours agoipnon

Real, genuinely confused human here: Can someone please clarify whether or not gas town is/was a joke? I've searched repeatedly and can't find anything that looks like an obvious tell, and I'm not sure if this is because it's actually real and people are taking it seriously, or because the pages and pages of discourse surrounding it is AI generated and taking itself literally.

If it's not a joke... I have no words. You've all gone insane.

11 hours agoohazi

It's not a joke, but I think it's an example of the same thing we're seeing with folks who think they're talking to god when they talk to ChatGPT, or those who spiral and, in some cases, sadly take their own lives.

These chatbots create an echo chamber unlike that which we've ever had to deal with before. If we thought social media was bad, this is way worse.

I think Gastown and Beads are examples of this applied to software engineering. Good software is built with input from others. I've seen many junior engineers go off and spend weeks building the wrong thing, and it's a mess, but we learn to get input, we learn to have our ideas critiqued.

LLMs give us the illusion of pair programming, of working with a team, but they're neither. LLMs vastly accelerate the rate at which you can spiral down the wrong path, or down a path that doesn't even make sense. Gastown and Beads are that. They're fever dreams. They work, somewhat, but even just a little bit of oversight, critique, and input from others would have made them far better.

11 hours agodanpalmer

It's a double edged sword. If it can lead the uninformed down the wrong path faster, it can lead the informed down the right path faster. It's not only fast in one direction.

11 hours agononethewiser

I believe the author of gas town is very informed, having been a professional software developer for some time. And the premise of the above comment is that he did, despite this, go down the wrong path.

11 hours agobwestergard

The informed and uninformed are not mutually exclusive groups. Everyone is one and then the other depending on the time. To varying degrees of course.

11 hours agononethewiser

It's the difference between light-up arrows pointing the way "forward" for a car turning onto the expressway the wrong way, and making that same mistake with the possibility that humans might see it and attempt to flag the driver down before they're too far along to turn around.

People will make mistakes, and AI holding their hand and guiding them while they do it can have disastrous consequences.

But it's nice that the arrows will appear to also guide people going the right way, I guess.

9 hours agopcthrowaway

Not sure you’ve actually tried using it, but beads has been an absolute game changer for my projects. “Game changer” is even underselling it.

10 hours agobobjordan

Beads was phenomenal back in October when it was released. Unfortunately, it has somehow grown like a cancer. Now it's 275k lines of Go for task tracking? And no human fully knows what it is all doing. Steve Yegge is quite proud to say he's never looked at any of its code. It installs magic hooks and daemons all over your system and refuses to let go. It's the most user-hostile software I've used in a long time.

Lot of folks rolling their own tools as replacements now. I shared mine [0] a couple weeks ago and quite a few folks have been happy with the change.

Regardless of what you do, I highly recommend to everyone that they get off the Beads bandwagon before it crashes them into a brick wall.

[0] https://github.com/wedow/ticket

10 hours agowild_egg

If your task tracking app is 275k lines you fucked up.

9 hours agoCuriouslyC

The LLM providers got paid.

Reminds me of an offshore project I was involved with at one point. It had something like 7 managers, ran for 4 years, and over 30 developers had worked on it. The billing had reached into the millions. It was full of never-ending bugs. The amount of "extra" code and abstractions and interfaces was the stuff of legends.

It was actually a simple CRUD project that a two-person development team could have done in a month or three.

7 hours agomythrwy

Yeah, I generally review the install script (for this and almost everything else now, since it's trivial with Claude Code) and then ensure I have a sane install for my system's needs. But I'm on the latest beads, 0.47.1, and what I did to tame it was walk through creating SKILLS with Claude and Codex, and frankly I've found a lot of value in the features added so far. I especially love --claim, which keeps the agents from checking out beads that are already checked out. And after I added SKILLS, the agents do an awesome job networking the dependencies together, which helps keep multi-agent workflows on track. Overall, I'm not feeling any reason to switch from beads right now, but I will be upgrading more thoughtfully so I don't break my current workflow.

10 hours agobobjordan

How do you handle the dogs ignoring the deacons and going after the polecats though? Seems like the mayor should get involved to me.

10 hours agomattgreenrocks

I haven't tried Gas Town yet. I have a pretty good multi-agent workflow just by using beads directly along with thoughtfully produced prompts.

10 hours agobobjordan

I'm not entitled to your time of course, but would you mind describing how?

All I know is that beads is supposed to help me retain memory from one session to the next. But I'm finding myself having to curate it like a git repo (and I already have a git repo). Also, it's quite tied to GitHub, which I cannot use at work. I want to use it, but I feel I need to see how others use it to understand how to tailor it to my workflow.

10 hours agowenc

To use it effectively, I spend a long time producing FSDs (functional specification documents) to exhaustively plan out new features or architecture changes. I'll pass those docs back and forth between Gemini, Codex/ChatGPT Pro, and Claude. I'll ask each one something similar to the following (credit to https://github.com/Dicklesworthstone for clearly laying out the utility of this workflow; these next few quoted prompts are verbatim from his posts on X):

"Carefully review this entire plan for me and come up with your best revisions in terms of better architecture, new features, changed features, etc. to make it better, more robust/reliable, more performant, more compelling/useful, etc.

For each proposed change, give me your detailed analysis and rationale/justification for why it would make the project better along with the git-diff style changes relative to the original markdown plan".

Then the plan generally improves iteratively. Sometimes it can get overly complex, so I may ask them to take it down a notch from Google scale. Anyway, when the FSD doc is good enough, the next step is to prepare to create the beads.

At this point, I'll prompt something like:

"OK so please take ALL of that and elaborate on it more and then create a comprehensive and granular set of beads for all this with tasks, subtasks, and dependency structure overlaid, with detailed comments so that the whole thing is totally self-contained and self-documenting (including relevant background, reasoning/justification, considerations, etc.-- anything we'd want our "future self" to know about the goals and intentions and thought process and how it serves the over-arching goals of the project.) Use only the `bd` tool to create and modify the beads and add the dependencies. Use ultrathink."

After that, I usually even have another round of bead checking with a prompt like:

"Check over each bead super carefully-- are you sure it makes sense? Is it optimal? Could we change anything to make the system work better for users? If so, revise the beads. It's a lot easier and faster to operate in "plan space" before we start implementing these things! Use ultrathink."

Finally, you'll end up with a solid implementation roadmap all laid out in the beads system. Now, I'll also clarify: the agents got much better at using beads in this way once I took the time to have them create SKILLS for beads to refer to. It's also important to ensure AGENTS.md, CLAUDE.md, and GEMINI.md have some info referring to its use.

But once the beads are laid out, it's just a matter of figuring out whether you want to do sequential implementation with a single agent or use parallel agents. Effectively using parallel agents with beads would require another chapter to this post, but essentially you just need a decent prompt clearly instructing them not to run over each other. Also, if you are building something complex, you need test guides and standardization guides written for the agents to refer to, in order to keep the code quality at a reasonable level.

Here is a prompt I've been using as a multi-agent workflow base when I want them to keep working; I've had them work for 8 hours without stopping with this prompt:

EXECUTION MODE: HEADLESS / NON-INTERACTIVE (MULTI-AGENT)
CRITICAL CONTEXT: You are running in a headless batch environment. There is NO HUMAN OPERATOR monitoring this session to provide feedback or confirmation. Other agents may be running in parallel.
FAILURE CONDITION: If you stop working to provide a status update, ask a question, or wait for confirmation, the batch job will time out and fail.

  YOUR PRIMARY OBJECTIVE: Maximize the number of completed beads in this single session. Do not yield control back to the user until the entire queue is empty or a hard blocker (missing credential) is hit.

  TEST GUIDES: please ingest @docs/testing/README.md, @docs/testing/golden_path_testing_guide.md, @docs/testing/llm_agent_testing_guide.md, @docs/testing/asset_inventory.md, @docs/testing/advanced_testing_patterns.md, @docs/testing/security_architecture_testing.md
  STANDARDIZATION: please ingest @docs/api/response_standards.md @docs/event_layers/event_system_standardization.md
─────────────────────────────────────────────────────────────────────────────── MULTI-AGENT COORDINATION (MANDATORY) ───────────────────────────────────────────────────────────────────────────────

  Before starting work, you MUST register with Agent Mail:

  1. REGISTER: Use macro_start_session or register_agent to create your identity:
     - project_key: "/home/bob/Projects/honey_inventory"
     - program: "claude-code" (or your program name)
     - model: your model name
     - Let the system auto-generate your agent name (adjective+noun format)

  2. CHECK INBOX: Use fetch_inbox to check for messages from other agents.
     Respond to any urgent messages or coordination requests.

  3. ANNOUNCE WORK: When claiming a bead, send a message to announce what you're working on:
     - thread_id: the bead ID (e.g., "HONEY-2vns")
     - subject: "[HONEY-xxxx] Starting work"
─────────────────────────────────────────────────────────────────────────────── FILE RESERVATIONS (CRITICAL FOR MULTI-AGENT) ───────────────────────────────────────────────────────────────────────────────

  Before editing ANY files, you MUST:

  1. CHECK FOR EXISTING RESERVATIONS:
     Use file_reservation_paths with your paths to check for conflicts.
     If another agent holds an exclusive reservation, DO NOT EDIT those files.

  2. RESERVE YOUR FILES:
     Before editing, reserve the files you plan to touch:
     ```
     file_reservation_paths(
       project_key="/home/bob/Projects/honey_inventory",
       agent_name="<your-agent-name>",
       paths=["honey/services/your_file.py", "tests/services/test_your_file.py"],
       ttl_seconds=3600,
       exclusive=true,
       reason="HONEY-xxxx"
     )
     ```

  3. RELEASE RESERVATIONS:
     After completing work on a bead, release your reservations:
     ```
     release_file_reservations(
       project_key="/home/bob/Projects/honey_inventory",
       agent_name="<your-agent-name>"
     )
     ```

  4. CONFLICT RESOLUTION:
     If you encounter a FILE_RESERVATION_CONFLICT:
     - DO NOT force edit the file
     - Skip to a different bead that doesn't conflict
     - Or wait for the reservation to expire
     - Send a message to the holding agent if urgent
─────────────────────────────────────────────────────────────────────────────── THE WORK LOOP (Strict Adherence Required) ───────────────────────────────────────────────────────────────────────────────

* ACTION: Immediately continue to the next bead in the queue and claim it

  For every bead you work on, you must perform this exact cycle autonomously:

   1. CLAIM (ATOMIC): Use the --claim flag to atomically claim the bead:
      ```
      bd update <id> --claim
      ```
      This sets BOTH assignee AND status=in_progress atomically.
      If another agent already claimed it, this will FAIL - pick a different bead.

        WRONG: bd update <id> --status in_progress  (doesn't set assignee!)
        RIGHT: bd update <id> --claim                (atomic claim with assignee)

   2. READ: Get bead details (bd show <id>).

   3. RESERVE FILES: Reserve all files you plan to edit (see FILE RESERVATIONS above).
      If conflicts exist, release claim and pick a different bead.

   4. PLAN: Briefly analyze files. Self-approve your own plan immediately.

   5. EXECUTE: Implement code changes (only to files you have reserved).

   6. VERIFY: Activate conda honey_inventory, run pre-commit run --files <files you touched>, then run scoped tests for the code you changed using ~/run_tests (test URLs only; no prod secrets).
       * IF FAIL: Fix immediately and re-run. Do not ask for help as this is HEADLESS MODE.
       * Note: you can use --no-verify if you must if you find some WIP files are breaking app import in security linter, the goal is to help catch issues to improve the codebase, not stop progress completely.

   7. MIGRATE (if needed): Apply migrations to ALL 4 targets (platform prod/test, tenant prod/test).

   8. GIT/PUSH: git status → git add only the files you created or changed for this bead → git commit --no-verify -m "<bead-id> <short summary>" → git push. Do this immediately after closing the bead. Do not leave untracked/unpushed files; do not add unrelated files.

   9. RELEASE & CLOSE: Release file reservations, then run bd close <id>.

  10. COMMUNICATE: Send completion message via Agent Mail:
      - thread_id: the bead ID
      - subject: "[HONEY-xxxx] Completed"
      - body: brief summary of changes

  11. RESTART: Check inbox for messages, then select the next bead FOR EPIC HONEY-khnx, claim it, and jump to step 1.
─────────────────────────────────────────────────────────────────────────────── CONSTRAINTS & OVERRIDES ───────────────────────────────────────────────────────────────────────────────

   * Migrations: You are pre-authorized to apply all migrations. Do not stop for safety checks unless data deletion is explicit.
   * Progress Reporting: DISABLE interim reporting. Do not summarize after one bead. Summarize only when the entire list is empty.
   * Tracking: Maintain a running_work_log.md file. Append your completed items there. This file is your only allowed form of status reporting until the end.
   * Blockers: If a specific bead is strictly blocked (e.g., missing API key), mark it as blocked in bd, log it in running_work_log.md, and IMMEDIATELY SKIP to the next bead. Do not stop the session.
   * File Conflicts: If you cannot reserve needed files, skip to a different bead. Do not edit files reserved by other agents.

  START NOW. DO NOT REPLY WITH A PLAN. REGISTER WITH AGENT MAIL, THEN START THE NEXT BEAD IN THE QUEUE IMMEDIATELY. HEADLESS MODE IS ON.
9 hours agobobjordan

I think the underlying approach seems sensible.

The problem with Gas Town is how it was presented. The heavy metaphor and branding felt distracting.

It’s a bit like reading the Dune book, where you have to learn a whole vocabulary of new terms before you can get to the interesting mechanics, which is a tough ask in an already crowded AI space.

10 hours agosingingbard

I think you have to remove an awful lot of what makes Gastown Gastown to find something sensible – at a minimum you need to restructure and simplify the roles, restructure the memory system, remove tmux, ...

The best bit about it was the agentic coding maturity model he presented. That was actually great.

I don't think it's at all like reading Dune. Dune is creative fiction, Gastown is. Oh ok wait, if you consider Gastown to be creative fiction then I guess I agree. As a software tool though I don't think this analogy works.

9 hours agodanpalmer

Gas Town is the cackling, mad laughter emanating from someone who knows they are being both insane and prescient simultaneously. Today, it is insane. But I fully expect that in the near future we'll be hearing about a very serious thing, about which people will say "Gas Town was an early attempt at this."

11 hours agopjm331

This is the best take I've seen in here.

I've been tinkering with it for the past two days. It's a very real system for coordinating work between a plurality of humans and agents. Someone likened it to Kubernetes in that it's a complex system that is going to necessitate a lot of invention and opinions; the fact that it *looks* like a meme is immaterial, and might be an effort to keep people from taking it too seriously.

Who knows where it ends up, but we will see more of this and whatever it is will have lessons learned from Gas Town in it.

10 hours agojcims

It's a real open-source tool Yegge has built and been using for a while now. And no, it's not insane; he's literally written a book with Gene Kim about the fundamental lessons that go into it, and he's been on lots of podcasts where he explains more.

I expect major companies will soon be NIH-ing their own versions of it. Even bleeding tokens as it does, it costs less than an engineer and produces working software much faster. The more it can be made to scale, the more incentive there is. A competitive business can't justify not using a system like this.

11 hours ago0xbadcafebee

Where is the working software it produces? Do you have a repo you've made with it as an example?

5 hours agoPKop

> If it's not a joke... I have no words. You've all gone insane.

How is it insane to jump to the logical conclusion of all of this? The article was full of warnings; it's not a sensible thing to do, but it is a cool thing to do. We might ask whether or not it works, but does that actually matter? It read as an experiment: using experimental software to do experimental things.

Consider a deterministic life form looking at how we program software today: that might look insane to it, and Gastown might look considerably more sane.

Everything that ever happens in human creation begins as a thought, then as a prototype before it becomes adopted and maybe (if it works/scales) something we eventually take for granted. I mean I hate it but maybe I've misunderstood my profession when I thought this job was being able to prove the correctness of the system that we release. Maybe the business side of the org was never actually interested in that in the first place. Dev and business have been misaligned with competing interests for decades. Maybe this is actually the fit. Give greater control of software engineering to people higher up the org chart.

Maybe this is how we actually sink the C-suite and let their ideas crash against the rocks, forcing the C-suite to eventually become extremely technical in order to harness this. Instead of today's reality, where the C-suite gorges on the majority of the profit with an extremely loosely coupled feedback loop in which it's incredibly difficult to square cause and effect. Stock went up on Tuesday afternoon, did it? I deserve eleventy million dollars for that. I just find it odd to crap on Gastown when I think our status quo is kinda insane too.

11 hours agoQuarrelsome

It doesn't have to exclusively be one or the other.

> If it's not a joke... I have no words. You've all gone insane.

I think this is covered by the part in Yegge's post where he says not to run it unless you're so rich you don't care if it works or not.

11 hours agoastrange

How rich do you have to be to not care about the environmental cost?

10 hours agochrisjj

I think Andrew Ng wrote a great piece on this.

For example, in the US, which do you think uses more water: Golf Courses or Data Centers?

  a) Golf Courses use twice as much water as Data Centers
  b) About the same
  c) Data Centers use twice as much water as Golf Courses
The answer is "None of the above": "Golf courses in the U.S. use around 500 billion gallons annually of water to irrigate their turf [snip] data centers consume [snip] 17 billion gallons, or maybe around 10x that if we include water use from energy generation"

Do you think a Google search or a Gemini query produces more carbon?

> Google had estimated that a single web search query produces 0.2 grams of CO2 emissions. [snip] the median Gemini LLM app query produces a surprisingly low 0.03 grams of CO2 emissions), and uses less energy than watching 9 seconds of television

https://www.deeplearning.ai/the-batch/issue-336/

9 hours agonl

Environmentalism has always been a "weight of our sins" sort of issue. Plastic straws are a rounding error relative to all the capricious uses of plastic and fossil fuels in our economy, but few things feel as frivolous as using a piece of plastic once and throwing it away for personal convenience while engaging in an already-kinda-sinful-feeling activity like indulging in a soft drink. Meanwhile, the paper straw that turns to cardboard mash in your mouth is perfectly calibrated to make you feel like you are doing real penance without encumbering anything economically important.

So plastic straw bans (instead of plastic slipper bans, plastic food packaging bans, taxes on plastic clothes fibres...) are what we get. And because the structure of the cause/problem is the same, the language of environmentalism naturally attaches itself and gives form to the vague sense of moral unease surrounding AI. Governments are surely already building tomorrow's tightly integrated thought police drone swarm complexes, but a crusade against those who simulate a zoo of programming weasels in our midst is much easier and morally no less fulfilling.

30 minutes ago4bpp

... computation produces dramatically less carbon than alternatives. Google had estimated that a single web search query produces 0.2 grams of CO2 emissions. In contrast, driving from my home to the local library to look up a fact would generate about 400 grams

So, how much less carbon is produced by a Gas Town run than the equivalent number of drives to the library?

/i

an hour agochrisjj

It's called Gas Town for a reason...

27 minutes agonl

That's an Internet meme and not a real issue.

10 hours agoastrange

It might be an internet meme when you're talking about the odd ChatGPT free-tier query, but burning through tokens at the rate of Gas Town can probably saturate a rack's worth of GPUs.

9 hours agoRetr0id

It's kinda like how edgy political takes are often wrapped in seven layers of meta-irony. If the audience reaction is negative you can say it was just a joke that didn't land.

And that's not necessarily a bad thing, if it allows exploring new ideas with relative safety. I think that's what's going on here. It's a crazy idea that might just work, but if it doesn't work it can be retconned as satirical performance art.

9 hours agoRetr0id

No, not a joke. The author also co-vibe-coded a book, called Vibe Coding, describing and recommending exactly the sort of system he's trying to build as Gas Town.

11 hours agoAlexCoventry

I mean, Gas Town is 100% vibe coded, and its very own author says AI can't be trusted to write reliable code.

Draw your own conclusion.

6 hours agohota_mazi

I'm developing concern for Steve. He's been a well-known developer and writer in the industry for years now (see his popular 'Google Platforms Rant' essay from years ago) [0].

Now, Yegge's writing tilts towards the grandiose... see his posts on joining Grab [1] and Sourcegraph [2], respectively, versus how things actually played out.

I prefer optimism and I'm not anti-AI by any means, but given his observed behavior and how AI can exacerbate certain pathologies... not great. Add the recent crypto activities on top, and all that entails, and you have the ingredients for a powder keg.

Hope someone is looking out for him.

[0] https://courses.cs.washington.edu/courses/cse452/23wi/papers...

[1] https://steve-yegge.medium.com/why-i-left-google-to-join-gra...

[2] https://sourcegraph.com/blog/introducing-steve-yegge

10 hours agoCedricgc

He was right about Google in [1] when I was still drinking the Kool-Aid, in big and tangible ways that aren't discussed publicly.

[2] is 100% accurate; Grok was the backbone/glue of Google's internal developer tools.

I don't disagree on the current situation, and I'm uncomfortable sticking my neck out on this because I'm basically saying "the guy who kinda seems out of it totally wasn't out of it when you think he was", but [1] and [2] definitely aren't grandiose; the claims he makes about Google and his work there are accurate. A small piece of why I feel comfortable in this is that both of these were public blogs his employer was 100% happy about when hiring him into top positions.

10 hours agorefulgentis

I should be specific. I think the technical analysis is reasonable and I actually enjoy someone staking on a big vision, which is why I saved these pieces.

An example:

"I’ve seen Grab’s hunger. I’ve felt it. I have it. This space is win or die. They will fight to the death, and I am with them. This company, with some 3000 employees I think, is more unified than I’ve seen with most 5-person companies. This is the kind of focused camaraderie, cooperation and discipline that you typically only see in the military, in times of war.

Which should hardly surprise you, because that’s exactly what this is. This is war.

I am giving everything I’ve got to help Grab win. I am all in. You’d be amazed at what you can accomplish when you’re all in."

This is the writing of someone planning to make a capstone career move, not someone planning to leave in 18 months. It's not the worst thing to do (he says he left because the time difference required to support a team in SE Asia was physically hard, and he's getting older), and I support taking big swings. I'm just saying Yegge's writing has a pattern.

Crypto, and what Yegge is doing with $GAS, is dangerous, because if the token price crashes and people who bet their life savings think he didn't deliver on his promises... I like Steve personally, which is why I'm saying anything.

9 hours agoCedricgc

This appears to be the coin in question: https://coinmarketcap.com/currencies/gas-town/ - up 222,513.21% in the past week! (And down 25.26% in the last 24 hours. But... suppose it goes back up again?!)

9 hours agotom_

Four hours later, the 24-hour change is down just shy of 90%. To me this looks like a classic shitcoin pump-and-dump.

5 hours agomtlmtlmtlmtl

The Gas Town post reads like some type of manic psychosis. I hope he snaps out of it and gets help.

10 hours agodriverdan


I ran the Gas Town intro post through ChatGPT 5.2 Pro [0].

Based on my initial read, and a pass at this summary, it seems mostly right. YMMV

I did some further dives into what little public usage data there is from Gas Town, and found that most of the "Beads" are tasks broken down quite small, almost too small IMO.

It's a super interesting project with the goal of keeping Claude "busy"; however, it feels more like a casino game than something I'd use for production engineering.

[0] https://gist.github.com/jumploops/2e49032438650426aafee6f43d...