
Aqua: A CLI message tool for AI agents

With the disclaimer that I haven't tried to set up any kind of agent-to-agent messaging, so this may be obvious to those who have: what's the reason I would want something like this rather than just letting agents communicate over some existing messaging protocol that has a CLI (like, I don't know, GPG email)?

2 hours ago · resfirestar

It is a fun problem to play with, but it turns out you can use anything. I use a directory per recipient and throw anything I want in there. Works fine, LLMs are 1000x more flexible than any human mind.

2 hours ago · sitkack
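A minimal sketch of the directory-per-recipient approach this comment describes. The layout, filenames, and JSON message shape here are my own assumptions for illustration, not from any particular project:

```python
import json
import time
import uuid
from pathlib import Path

# Hypothetical shared root; any directory all agents can reach works.
INBOX_ROOT = Path("agent_mail")

def send(recipient: str, sender: str, body: str) -> Path:
    """Drop a message file into the recipient's directory."""
    inbox = INBOX_ROOT / recipient
    inbox.mkdir(parents=True, exist_ok=True)
    msg = {"from": sender, "ts": time.time(), "body": body}
    # Nanosecond prefix keeps files roughly in send order when sorted.
    path = inbox / f"{time.time_ns()}-{uuid.uuid4().hex}.json"
    path.write_text(json.dumps(msg))
    return path

def read_all(recipient: str) -> list[dict]:
    """Read and consume every pending message for an agent."""
    inbox = INBOX_ROOT / recipient
    if not inbox.exists():
        return []
    messages = []
    for path in sorted(inbox.glob("*.json")):
        messages.append(json.loads(path.read_text()))
        path.unlink()  # consume the message so it isn't re-read
    return messages
```

The appeal is that there's no protocol to teach the model: the agent just lists a directory and reads files, which any coding agent can already do.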

You can really tell with such projects that if AGI was here some people would have zero qualms fucking over other humans just to ingratiate themselves to the AI.

3 minutes ago · vasco

Ooh cool. I’ve been hacking on something very similar, https://qntm.corpo.llc/. I’d love to compare notes — been thinking a lot about the group messaging side.

3 hours ago · vessenes

404 https://github.com/corpollc/qntm

2 hours ago · handfuloflight

Yes, still private. The main tools aren't ready to deploy, but it is pushed out to PyPI and runnable with `uvx qntm --help`. I'm hoping for the next day or two. Send me an email and I'll let you know when it's ready for public code review. I'll definitely want eyes on it.

2 hours ago · vessenes

I wonder what something like RabbitMQ could look like for this. Agents could subscribe to chosen topics: a topic per agent, plus a topic per relevant subject.

4 hours ago · roxolotl
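To make the "topic per agent plus topics per subject" idea concrete, here is a broker-free sketch of the routing-key semantics a RabbitMQ topic exchange provides (`*` matches exactly one word, `#` matches zero or more). The agent names and subscription patterns are invented for illustration; a real setup would use a client library like pika against a running broker:

```python
def topic_matches(pattern: str, routing_key: str) -> bool:
    """AMQP-style topic match: '*' = exactly one word, '#' = zero or more words."""
    def match(p: list[str], k: list[str]) -> bool:
        if not p:
            return not k
        if p[0] == "#":
            # '#' can absorb zero or more words of the key.
            return any(match(p[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False
        if p[0] == "*" or p[0] == k[0]:
            return match(p[1:], k[1:])
        return False
    return match(pattern.split("."), routing_key.split("."))

# Each agent subscribes to its own inbox topic plus subject topics it cares about.
subscriptions = {
    "planner": ["agent.planner", "task.#"],
    "coder":   ["agent.coder", "task.code.*"],
}

def deliver(routing_key: str) -> list[str]:
    """Return which agents would receive a message published on routing_key."""
    return [agent for agent, patterns in subscriptions.items()
            if any(topic_matches(p, routing_key) for p in patterns)]
```

With this shape, direct messages go to `agent.<name>` while broadcasts fan out by subject, so an agent's "inbox" is just the union of its bindings.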

Tried this; since agents are non-deterministic, this is where tools come in handy.

2 hours ago · UperSpaceGuru

So many primitives. All for the taking. Danke.

3 hours ago · handfuloflight

[flagged]

3 hours ago · JohnMatthias

Why did you capitalize every noun?

3 hours ago · linkregister

For emphasis. Something sorely lacking in the AI Fraud Circus is the emphasis on that Fraud.

3 hours ago · JohnMatthias

You need help.

an hour ago · pertymcpert
[deleted]
2 hours ago

[dead]

an hour ago · szundi

I approve of this schiz'd response; it's on haqq as far as I'm concerned. It's funny to see everyone constantly arguing about "how can I optimize context and improve reliability, etc. etc."

What they want is a deterministic process.

The problem is they, like most humans, are lazy and want a stochastic parrot to create this solution for them. Even if it means atrophying their brain, and paying a billionaire for access to their thinking machine. Humans are lazy; it's the same reason people drive 3 blocks as opposed to walking, or pay a billionaire for this rent-a-serf service to pick up their food for them instead of getting off the couch. LLMs are no different here, but the stakes are just much higher if your brain "muscles" atrophy as opposed to your legs'.

They are also addicted to the gambling mechanics baked into these LLM-powered tools' UX. "If I write this prompt this way, I'll get better results" is the equivalent of a gambler being superstitious about how people behave while the cards are being dealt, or about the order in which they press the buttons on a slot machine.

3 hours ago · dakolli

"Whoever says the people are ruined, he himself is ruined." To paraphrase, but that's actual haqq.

2 hours ago · handfuloflight

When did I say anyone was ruined? I said people are lazy, and also behaving like gamblers when they interact with AI.

27 minutes ago · dakolli

You know very well laziness leads to ruin, or is the expression of active ruin.

16 minutes ago · handfuloflight

> They are also addicted to the gambling mechanics baked into these LLM-powered tools' UX. "If I write this prompt this way, I'll get better results" is the equivalent of a gambler being superstitious about how people behave while the cards are being dealt, or about the order in which they press the buttons on a slot machine.

I realize this feels good to write, and that's why people say it, but I can't help chuckling at seeing it combined with "stochastic parrot" in the same comment, since the two descriptions are mutually exclusive...

2 hours ago · resfirestar

You spent too much time using "Think for Me SaaS" and your brain doesn't work anymore.

an hour ago · dakolli

please read the rules

26 minutes ago · stressback
[deleted]