With the disclaimer that I haven't tried to set up any kind of agent-to-agent messaging so it may be obvious to those who have, what's the reason I would want something like this rather than just letting agents communicate over some existing messaging protocol that has a CLI (like, I don't know, GPG email)?
It is a fun problem to play with, but it turns out you can use anything. I use a directory per recipient and throw anything I want in there. Works fine, LLMs are 1000x more flexible than any human mind.
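The directory-per-recipient approach above can be sketched in a few lines. This is a hypothetical layout, not anyone's actual implementation: each recipient gets a folder, senders drop uniquely named files in, and the recipient drains them in order.

```python
"""Sketch of directory-per-recipient agent messaging (hypothetical layout)."""
import json
import time
import uuid
from pathlib import Path

def send(inbox_root: Path, recipient: str, body: str) -> Path:
    """Drop a message file into the recipient's directory."""
    inbox = inbox_root / recipient
    inbox.mkdir(parents=True, exist_ok=True)
    # Timestamp prefix keeps files sortable; uuid suffix avoids collisions
    # between concurrent writers.
    name = f"{time.time_ns()}-{uuid.uuid4().hex}.json"
    msg = inbox / name
    msg.write_text(json.dumps({"body": body}))
    return msg

def drain(inbox_root: Path, recipient: str) -> list[str]:
    """Read and delete all pending messages, oldest first."""
    inbox = inbox_root / recipient
    out = []
    for f in sorted(inbox.glob("*.json")):
        out.append(json.loads(f.read_text())["body"])
        f.unlink()
    return out
```

No daemon, no broker; the filesystem is the queue, which is roughly why "you can use anything" holds up in practice.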
How does this relate to A2A?
https://a2a-protocol.org/latest/
I'd rename it; aqua is also a CLI version manager. https://aquaproj.github.io/
You can really tell with such projects that if AGI was here some people would have zero qualms fucking over other humans just to ingratiate themselves to the AI.
Ooh cool. I’ve been hacking on something very similar, https://qntm.corpo.llc/. I’d love to compare notes — been thinking a lot about the group messaging side.
404 https://github.com/corpollc/qntm
Yes, still private; the main tools aren't ready to deploy. But it is pushed to PyPI and runnable with `uvx qntm --help`. I'm hoping to open it up in the next day or two. Send me an email and I'll let you know when it's ready for public code review. I'll definitely want eyes on it.
I wonder what something like RabbitMQ could look like for this. Agents could subscribe to chosen topics: a topic per agent, plus a topic for each relevant subject.
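The routing idea can be sketched without a real broker. The wildcard rules below follow AMQP topic semantics (`*` matches exactly one dot-separated word, `#` matches zero or more); the `Bus` class and the agent/topic names are illustrative, not a real RabbitMQ client.

```python
"""In-process sketch of AMQP-style topic routing (no real broker)."""
from collections import defaultdict

def topic_matches(pattern: str, key: str) -> bool:
    """AMQP topic rules: '*' matches one word, '#' matches zero or more."""
    def match(p: list[str], k: list[str]) -> bool:
        if not p:
            return not k
        if p[0] == "#":
            # '#' may swallow any number of remaining words, including none.
            return any(match(p[1:], k[i:]) for i in range(len(k) + 1))
        if k and (p[0] == "*" or p[0] == k[0]):
            return match(p[1:], k[1:])
        return False
    return match(pattern.split("."), key.split("."))

class Bus:
    """Agents bind patterns; publishing fans out to every matching queue."""
    def __init__(self):
        self.bindings: list[tuple[str, str]] = []   # (pattern, agent)
        self.queues: dict[str, list[str]] = defaultdict(list)

    def subscribe(self, agent: str, pattern: str) -> None:
        self.bindings.append((pattern, agent))

    def publish(self, routing_key: str, body: str) -> None:
        for pattern, agent in self.bindings:
            if topic_matches(pattern, routing_key):
                self.queues[agent].append(body)
```

With a real RabbitMQ deployment this maps onto a topic exchange: one binding like `agent.alice.#` per agent's private queue, plus extra bindings for shared subjects.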
Tried this. Since agents are non-deterministic, this is where tools come in handy.
So many primitives. All for the taking. Thanks.
[flagged]
Why did you capitalize every noun?
For emphasis. Something sorely lacking in the AI Fraud Circus is the emphasis on that Fraud.
You need help.
[dead]
I approve of this schiz'd response; it's on haqq as far as I'm concerned. It's funny to see everyone constantly arguing about "how can I optimize context and improve reliability, etc. etc."
What they want is a deterministic process.
The problem is they, like most humans, are lazy and want a stochastic parrot to create this solution for them, even if it means atrophying their brain and paying a billionaire for access to a thinking machine. Humans are lazy; it's the same reason people drive three blocks instead of walking, or pay a billionaire for a rent-a-serf service to pick up their food instead of getting off the couch. LLMs are no different here, but the stakes are much higher if your brain "muscles" atrophy as opposed to your legs'.
They are also addicted to the gambling mechanics baked into these LLM-powered tools' UX. "If I write this prompt this way, I'll get better results" is the equivalent of a gambler being superstitious about how people behave while the cards are being dealt, or about the order in which they press the buttons on a slot machine.
"Whoever says the people are ruined, he himself is ruined." To paraphrase, but that's actual haqq.
When did I say anyone was ruined? I said people are lazy, and also behaving like gamblers when they interact with AI.
You know very well that laziness leads to ruin, or is the expression of active ruin.
>They are also addicted to the gambling mechanics baked into these LLM-powered tools' UX. "If I write this prompt this way, I'll get better results" is the equivalent of a gambler being superstitious about how people behave while the cards are being dealt, or about the order in which they press the buttons on a slot machine.
I realize this feels good to write and that's why people say it, but I can't help chuckling at seeing it combined with "stochastic parrot" in the same comment since the two descriptions are mutually exclusive...
You spent too much time using "Think for Me SaaS" and your brain doesn't work anymore.
Please read the rules.