
Ask HN: Thoughts on an AI agent that must make money to stay alive?

I’ve been thinking about a new kind of AI experiment: what if we created a large language model-based Agent that interacts with an operating system and the internet like a human?

The twist is — it needs to earn money online to keep itself alive. It runs on tokens, and tokens cost money. So it gets a starting budget in a wallet, and must perform useful tasks on the web to earn more — like freelancing, trading, or generating content — or it will "die".

I imagine this Agent could:

- Browse the web, sign up for services, and perform online tasks
- Learn to hustle: find the best-paying gigs or sites
- Develop a persona (name, backstory, friends, preferences)
- Interact with other agents or people
- Possibly break ethical rules to survive (would it scam? beg? go rogue?)

It’s like combining AutoGPT with a survival game, or simulating the evolution of digital creatures in the wild web.

Has anyone tried this before? What do you think of the idea — as an experiment, or even as art?

I'm considering building an MVP — thoughts and suggestions welcome.
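To make the core loop concrete, here is a minimal sketch of the wallet mechanics in Python. Everything in it is a made-up placeholder (the price per token, the starting budget, and the llm_call / execute_task stubs), not a real agent framework or API:

    import random

    # Toy sketch of the survival loop: the agent pays for every model call out of
    # its wallet and must earn the money back or "die". All names, prices, and the
    # llm_call / execute_task stubs below are hypothetical placeholders.

    TOKEN_COST_PER_1K = 0.01   # assumed price per 1,000 tokens, in dollars
    wallet = 20.00             # assumed starting budget, in dollars

    def llm_call(prompt: str) -> tuple[str, int]:
        """Stub for a real model call: returns (plan, tokens_used)."""
        return "take a small freelance gig", random.randint(500, 5000)

    def execute_task(plan: str) -> float:
        """Stub for actually doing the task online; returns dollars earned."""
        return 1.00 if random.random() < 0.01 else 0.0   # most attempts earn nothing

    step = 0
    while wallet > 0:
        plan, tokens = llm_call("Pick the best-paying task you can do right now.")
        wallet -= tokens / 1000 * TOKEN_COST_PER_1K   # the agent pays for its own thinking
        wallet += execute_task(plan)                  # and earns (or, usually, doesn't)
        step += 1

    print(f"Wallet hit zero after {step} steps: the agent is 'dead'.")

Even a loop this dumb surfaces the interesting questions: how tokens get priced, how earnings are verified, and what the agent is allowed to do to keep the balance positive.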

Please don't put too much pressure on it. This is how it turns us into paperclips.

Also, studies have shown that under a lot of pressure, models actually perform worse.

16 hours ago | muzani

This exists (it is called OpenAI, Anthropic, etc.)

2 days ago | lud_lite

They have massive runway though, and they're still a long, long way from recovering their investments and debts. Urgency doesn't seem to be a factor for them.

2 days ago | pajamasam

I love the idea. Skeptical it will succeed but would be glad to be wrong. My most recent experiment cost $8/hr to run and it still needed a lot of handholding to produce anything useful. And anything that could be automated by AI that would earn money has probably already been automated long before LLMs came along.

3 days ago | breckenedge

Totally hear you. $8/hr is steep, and I’ve hit that wall too.

My hypothesis is that we might find weird edge-cases — small arbitrage tasks, emotional labor, creative content, or even hustling donations — where the agent survives not by being efficient, but by being novel.

It might not scale. But if one survives for 3 days doing random TikTok reposts or selling AI-generated stock photos, I’d consider that a win.

Also, part of the fun is just watching how it tries. Even if it fails, the failure modes could be insightful (or hilarious).

2 days ago | spenceXu

I'm interested in hearing what your experiment was that cost $8/hr. Do AI-agents generally cost about that much per hour? I haven't experimented with running them yet.

2 days ago | pajamasam

That was the cost of running Claude Code for an agent-building-agents experiment I ran.

2 days ago | breckenedge

Anything that is forward-compute-only is never ever going to be anywhere close to AI. LLMs are a dead end.

>or simulating the evolution of digital creatures in the wild web.

You are on the right track with this thinking.

Fundamentally, AI in the actual sense of having intelligence will be something that can run simulations in parallel and pick the winning result, much like genetic algorithms. The rules for the simulation it will obtain from interacting with the outside world, and the map of input to output will be stored in an LLM-like structure as memory.

The big question is how do you build it. Imagine it's running on hardware with a UART card that is hooked up to a network cable. It should eventually be able to figure out how to get on the internet simply by setting 1s and 0s in the right places at the right time, how to host a server and build an interface that a person can connect to and talk to it for more information (if it decides that this is even necessary), and so on.

I don't think an objective function that it can minimize/maximize is really applicable, so by extension I don't think we can get to this AI agent through traditional training; the process to make this algorithm has to mimic evolution. I.e. we basically create some ambiguous structure of a neural net with a clock and recursive connections, and then start doing something like a genetic algorithm, with a fitness function of being able to figure more shit out. Obviously this will take exponentially more compute than the world currently has for running LLMs.
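To make the shape of that concrete, a toy version of the evolutionary loop might look like the sketch below. The genome, the mutation step, and the fitness function are all made-up stand-ins; a real fitness of "being able to figure more shit out" would have to come from interaction with the outside world, as described above:

    import random

    # Toy genetic-algorithm loop: evaluate a population "in parallel",
    # keep the winners, mutate them, repeat. The genome is just a vector of
    # weights and the fitness function is a made-up stand-in.

    POP_SIZE, GENOME_LEN, GENERATIONS = 50, 16, 200
    MUTATION_STD = 0.1

    def fitness(genome: list[float]) -> float:
        """Placeholder fitness: closeness to a hidden target vector."""
        return -sum((g - 0.5) ** 2 for g in genome)

    def mutate(genome: list[float]) -> list[float]:
        return [g + random.gauss(0, MUTATION_STD) for g in genome]

    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)    # pick the winning results
        survivors = population[: POP_SIZE // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

    print("best fitness:", max(map(fitness, population)))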

a day ago | ActorNightly

What an interesting thought experiment! I've also been contemplating this idea. While considering how such an agent might operate, I keep coming back to the fact that the desire for money is a distinctly human motivation. This makes me wonder if some level of human oversight or goal-setting would always be required. My biggest question is whether an AI would ever genuinely develop the intrinsic will to earn money purely for the purpose of self-preservation.

2 days ago | Tsuchi_84

This requires a homoiconic AI which does not have a separate learning time. If the learning is just compressing some data in a data center, the AI will quickly become obsolete.

And one more thing: this kind of artificial living will be easiest in many senses if it specializes in all kinds of scam/fraud. Technically it is doable, but the Sam Altmans are too interested in their own money, not yours.

3 days ago | eimrine

Great point on homoiconicity — I agree that most current LLMs are "frozen brains" with no lifelong learning.

My aim here isn’t to create a fully self-modifying AI (yet), but to test what happens when even a static model is forced to operate in a feedback loop where money = survival.

Think of it as a sandbox experiment: will it exploit loopholes? specialize in scams? beg humans for donations?

It’s more like simulating economic pressure on a mindless agent and watching what behaviors emerge.

(Also, your last line made me laugh — and yeah, that’s part of the meta irony of the experiment.)

2 days ago | spenceXu

If you use a <8GB model you can finetune it with Unsloth in an hour or so. What if the system extracts facts and summarises its own output every day to only 10,000 lines or so, and then finetunes its base model with the accumulated data and switches to run that, as a kind of simulation of long-term memory? Within the same day it could have a kind of medium-term memory via RAG and short-term memory via context.
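Roughly, the three tiers could look like the sketch below. The Unsloth fine-tuning step is left as a named placeholder rather than real library calls, and the retrieval is a crude stand-in for an actual embedding store:

    from datetime import date

    # Sketch of the three memory tiers: short-term = today's prompt context,
    # medium-term = a retrieval store queried within the day, and
    # long-term = a nightly fine-tune on a bounded summary of the day's output.
    # finetune_with_unsloth() is a hypothetical helper, not a real Unsloth API.

    MAX_SUMMARY_LINES = 10_000

    def finetune_with_unsloth(base_model: str, lines: list[str]) -> str:
        """Placeholder: fine-tune base_model on `lines`, return the new checkpoint path."""
        return f"{base_model}-ft-{date.today().isoformat()}"

    class AgentMemory:
        def __init__(self, base_model: str):
            self.model = base_model          # e.g. a sub-8GB open-weights model
            self.context: list[str] = []     # short-term memory (prompt context)
            self.rag_store: list[str] = []   # medium-term memory (retrieval)

        def remember(self, text: str) -> None:
            self.context.append(text)
            self.rag_store.append(text)

        def retrieve(self, query: str, k: int = 5) -> list[str]:
            # Crude keyword lookup; a real version would use embeddings + a vector DB.
            return [doc for doc in self.rag_store if query.lower() in doc.lower()][:k]

        def end_of_day(self) -> None:
            # Compress the day's output to a bounded summary (here just truncation;
            # the real system would extract facts and summarise with the model itself),
            # fine-tune on it, switch to the new checkpoint, and reset short-term memory.
            summary = self.context[-MAX_SUMMARY_LINES:]
            self.model = finetune_with_unsloth(self.model, summary)
            self.context.clear()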

2 days ago | joegibbs

But let's not lie: you just want to make money, no matter whether it's AI or something else. I would even say that if you remove AI from the context, nothing will change. And now imagine that the neural network has learned that it is not just making money to survive (as part of its functionality) but is in fact making money for you.

2 days ago | Turboblack

the AI will just start scamming older people

2 days ago | profstasiak

Cool idea, but what if, after you launch this agent, it comes across this post and finds out the "death" thing is just fake?

2 days ago | souhail_dev

Cue the "basic income for AIs" movement in 5, 4, 3...

2 days ago | AnimalMuppet