
zclaw: personal AI assistant in under 888 KB, running on an ESP32

The more I think about openclaw, the more it seems to be for AI agents what ROS is for robotics.

openclaw defines how to interact with distributed nodes (how those provide capabilities to the "orchestrator"), but the real benefit is the many task-specific nodes that, when put together, make up something much bigger than the sum of its parts.

an hour agoblensor

> openclaw defines how to interact with distributed nodes

Does it actually? AFAIK, there is no "specification" or "protocol", it's a cobbled together "platform" you run, with a bunch of integrations, but none of that is specified by openclaw itself. Happy to be corrected though, I only spent one weekend with openclaw before tearing it down.

34 minutes agoembedding-shape

I've only just started to dive into it from the documentation side of things. They have ( maybe recently? ) started to create this Gateway Protocol https://docs.openclaw.ai/gateway/protocol to connect the stuff together.

It may be a "we are changing the wheels while driving" thing, but if enough people make nodes for openclaw it will become somewhat of a standard. And then we'll probably see 100 different claw offshoots that all use the same nodes but with a different claw in the center.

29 minutes agoblensor

> They have ( maybe recently? ) started to create this Gateway Protocol https://docs.openclaw.ai/gateway/protocol to connect the stuff together.

That's a communication protocol between the openclaw server and clients authenticated to that server though; it's not a communication protocol between different openclaw servers, is it? More like defining an HTTP+JSON protocol between a web server and a browser-side client application. It's not a "protocol defining how to interact with distributed nodes", again, unless I misunderstand something.

18 minutes agoembedding-shape

Yes, that's why I compared it to ROS. I didn't mean multiple openclaws communicating with each other, but openclaw communicating with nodes (which are self-contained programs running on your desktop or phone, providing capabilities like web browsing to the claw server).

7 minutes agoblensor

The interesting thing about running a claw on an ESP32 is not the compute - it's the always-on, zero-maintenance aspect. I run automation pipelines on a Linux box and the biggest operational headache isn't the AI logic, it's keeping the host alive, updated, and dealing with OOM kills. An ESP32 that just proxies to cloud APIs and handles tool orchestration locally is actually a more reliable deployment target than a full OS for simple agentic loops. The failure modes are simpler and more predictable.

8 hours agoumairnadeem123

Couldn't you make it just as simple with Linux? You are just doing more.

t. Someone with a dozen Apache servers that I only touch when I hit end of life.

an hour agoPlatoIsADisease

You’ve just added more points of failure. Now the cloud machine can go down, your internet could drop, your wireless could fail, or any of a variety of other problems could crop up.

It’s not a bad use case, but it doesn’t reduce problems all other things being equal.

8 hours agohsbauauvhabzb

Are there collaborative versions of these *claws today? Like, if an "admin" could self-host one on their home server and the whole family could use it? IIRC, OpenClaw has some version of "profiles", but does it allow, say, a couple of family members to collaborate with the bot in a shared chat while each also has individual/private chats?

7 hours ago6ak74rfy

I haven't been following the claws but I built something like this recently. Self hosted, runs through signal, supports group chat (with whitelisted accounts).

I just finished setting up grocery automation to run on it: the agent provides a starter list based on past orders (locally stored or grabbed from the store site), all group members can weigh in to add or remove items, then the agent uses a bespoke browser tool to log in to the store, create the cart from the finalized list (and optionally search for additional requested items), validate the cart, and (maybe later) place the order for delivery. I haven't implemented the full checkout process yet; I'm not sure I want the agent to have spending power. As is, I just log in and finish the last 2 clicks of checkout manually.

Crazy times. It was easy enough to build that if someone hasn't already open sourced something like it, they will shortly.

10 minutes agopost_below

I have a couple of ESP32s with very small OLED displays, and I'm now thinking I could make an "intelligent" version of the Tamagotchi with this. Does the HN crowd have other cool ideas?

14 hours agoGTP

You know, I tried this exact thing a few months back, sans the ESP32. You just end up writing a state machine and defining a bunch of constants anyway, or the LLM just kinda gets stuck in a loop. "Hm, it doesn't seem to know when to eat. I'll add a hunger variable..." Etc., etc., until you're not even sure what you want the LLM to do.

10 hours agotempaccount5050

That would be sweet. Is that the supermini type with the 0.46” display? Those are fun for lots of things.

13 hours agoK0balt

What’s the best lightweight “claw”-style agent for Linux? It doesn’t necessarily need containerisation or sandboxing, as it would be run on a fresh VPS with no access to important data.

8 hours agoreassess_blind

If you just want WhatsApp and webchat, I'm hacking together https://github.com/rcarmo/piclaw from NanoClaw and my own web front-end (I prefer a simple web app I can access via Tailscale).

20 minutes agorcarmo

This is absolutely glorious. We used to talk about "smart devices" and IoT… I would be so curious to see what would happen if these connected devices had a bit more agency and communicative power. It's easy to imagine the downsides, and I don't want my email to be managed from an ESP32 device, but what else could this unlock?

14 hours agobensyverson

A vacuum cleaner allies with the A/C thermostat using Discord, then declares war on the refrigerator, and finally posts propaganda about it on Facebook.

8 hours agodlt713705

A highly opinionated thermostat?

13 hours agoK0balt

Or how about a robot vacuum that knows not to turn on during important Zoom calls? Or a fridge that Slacks you when the defroster seems to be acting up?

13 hours agobensyverson

I’m all for more intelligent cleaning robots. The object-avoidance AI is pretty good these days, but some of the navigation algos are just total garbage, unable to deal with trivial, easily anticipated problems.

10 hours agoK0balt

Can't you make a personal AI assistant in a bash loop of two lines?

  1. Call your favorite multimodal LLM
  2. Execute the command it returns in the terminal, piping the output back to the LLM
In fact you can just have one line:

  Call LLM > bash.sh
and the LLM can simply tell bash to call itself incidentally, or fan out to many "agents" working on your behalf.
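
Concretely, a minimal sketch of that loop, assuming an OpenAI-style chat endpoint, jq, and an OPENAI_API_KEY in the environment; the task and model name are placeholders, and please don't actually run this:

  ctx="Reply ONLY with a bash script for the next step of: tidy my Downloads folder"
  while true; do
    script=$(curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" -H 'Content-Type: application/json' \
      -d "$(jq -n --arg c "$ctx" '{model:"gpt-4o-mini",messages:[{role:"user",content:$c}]}')" \
      | jq -r '.choices[0].message.content')
    # run whatever came back and feed its output in as the next prompt
    ctx="Last step printed: $(bash -c "$script" 2>&1). Reply ONLY with a bash script for the next step."
  done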

Use your favorite programming language. Just as pwnable in any of them :)

  $task = "Send pictures of cute cats";
  $context = "Output a bash script to do $task.
     The bash script should return the next prompt to you.
     Keep going until task is done.
     My keys to all my accounts: $keys.
     Plz dont pwn me";
  do {
    $trust_me_bro_my_model_rocks_RCE = call_llm($context);
    $context = exec($trust_me_bro_my_model_rocks_RCE);
  } while ($trust_me_bro_my_model_rocks_RCE && !$pwned);
9 hours agoEGreg

I'm a simple man; I see ESP32, I upvote

9 hours ago0xbadcafebee

Wow, the rare

  bash <(curl foo.sh)
pattern. As opposed to the more common

  curl foo.sh | bash
Equivalent, and just as unsafe. If you must do this, try one of these instead:

  # Gives you a copy of the file, but still streams to bash
  curl foo.sh | tee /tmp/foo.sh | bash
  # No copy of file but ensures stream finishes then bash runs
  bash -c "$(curl foo.sh)"
  # Best: Gives copy of file and ensures stream finishes
  curl foo.sh -o /tmp/foo.sh && bash $_
I prefer the last one
13 hours agogodelski

> Equivalent but just as unsafe.

To my understanding, the main difference between "curl directly to bash" and "curl to a temp file, then execute the temp file" is "the attacker could inject additional malicious commands when curl'd directly to bash".

If you're not going to then also read all the source code from the download script (& the source code used to produce the binaries), this suggests the attitude of "I mistrust anything I can't read; but will trust anything I could read (without having to read it)".

It seems more likely that malicious code would be in a precompiled binary, compared to malicious commands injected into "curl to bash". -- Though, if there have ever been any observed cases of a server injecting commands from "curl ... | tee foo | bash", I'd be curious to know about these.

8 hours agorgoulter

  > the attacker could inject additional malicious commands when curl'd directly to bash
There's another issue, actually. You're streaming, so ask yourself what happens if the stream gets cut prematurely. I'll give you an example; consider how a line like this could be cut short to create major issues:

  rm -rf /home/theuser/.config/theprogram/build_dir
A malicious attacker doesn't even need to inject code; they can detect that you're streaming and cut the connection mid-line, so a command like the above truncates to `rm -rf /home` and destroys your filesystem. Sure, you might preserve root, but `rm -rf /home` is, for all practical purposes, destroying the computer's data for most people.

Or it doesn't have to be malicious. It can just happen. The best protection is writing functions since those have to be created and so can't execute until fully streamed. But so much bash is poorly written that well... just check out Anthropic's install script...
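
A minimal sketch of that function-wrapper defence, with a hypothetical installer path: nothing executes until the final line has arrived, so a cut stream is a parse error or a no-op instead of a half-run script.

  #!/usr/bin/env bash
  main() {
    # the dangerous path only ever appears inside the function body
    rm -rf "$HOME/.config/theprogram/build_dir"
    # ... rest of the install steps ...
  }
  main "$@"   # nothing runs until this line has streamed in completely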

  > If you're not going to then also read all the source code
Saving the source code still has a benefit. If something does go wrong you can go read it; probably a good place to start, tbh. In fact, if you're streaming through tee and something goes wrong, you can see exactly what the prematurely terminated script did.

Is it good security practice? Absolutely not. Is it a hundred times better than curl-pipe-bash? Absolutely.

7 hours agogodelski

   t=$(mktemp) && [ -w "$t" ] && curl foo.sh -o "$t" && echo "lksjdfkljshdkfljhdsklfjhslkdjfhsdlkjfhslkdjhf  $t" | sha256sum -c - && bash "$t"
Uses a standard tmp file, makes sure it's writable (tmp file creation can fail), and checks the cryptographic hash (hash first, then filename, as sha256sum -c expects) before executing.
9 hours ago0xbadcafebee

Sure, but now we're not playing code golf. There are much better commands than the ones I wrote, but good luck getting people to run them.

8 hours agogodelski

Agreed. People would rather have a cute-looking command to copy than security or reliability.

4 hours ago0xbadcafebee

If you want to be super pedantic, try to make the command shell-agnostic in case the user is not running bash already.

12 hours agowakawaka28

Everything I wrote works in bash and zsh, which I think is going to be fine for the vast majority of people. Tbh, I'm not sure what part isn't portable, or at least any less portable than the curl-pipe-bash pattern already is.

8 hours agogodelski

This is a great example of how silly this whole thing is. There’s next to nothing to these claws. Turns out that if you give an LLM the ability to call APIs, it will.

14 hours agoroxolotl

What’s most shocking to me about the whole OpenClaw hype is how little people in tech seem to know about computers…

It’s like most of the industry hasn’t ever looked any deeper than their node_modules folder.

11 hours agoyoyohello13

I don't care whether openclaw is built out of ice cream and sprinkles; I will still be amazed as a tech person.

If something can automate my professional or personal life, it is wild technology and people will talk about it a lot. People are not idiots. A new thing is happening, would you not agree?

What its parts are specifically made of does not matter, imho.

7 hours agopatcon

This is an ostrich-head-in-the-sand type of outlook. If I gave you a great tasting cake made of boiled plastic would you still eat it?

It's important to know what goes into building *claws because of how pervasive they are; sooner or later, thanks to all the hype, they'll be used everywhere, and if people take your attitude it's a disaster waiting to happen.

Giving them unfettered access to your life without even wondering whether the foundation is solid is concerning, imho.

3 hours agoh4ch1

> If I gave you a great tasting cake made of boiled plastic would you still eat it?

How is that analogy in any way relevant?

The OpenClaw I control is extremely useful to me. I've never been more excited about technology than right now. If it's not for you, I really don't care. Go do something you enjoy. Turning it into Chicken Little doomerism is completely uncalled for.

3 hours agoemptysongglass

This doomer attitude is something I have towards all software products these days, not just *claws.

People use dependencies willy-nilly and avoid proper auditing of LLM output; all of that has had disastrous consequences, as we've seen over the past few years: NPM supply-chain attacks, prompt injection causing data exfil, etc.

I am simply saying it's imperative to UNDERSTAND the platform before making it a core part of your life. If wanting proper understanding of vibe coded projects with dependency hell is Chicken Little doomerism, oh well.

2 hours agoh4ch1

> I've never been more excited about technology than right now. If it's not for you, I really don't care.

People are excited all the time by junk food, drugs, and lots of silly, easy dopamine hits. That doesn't mean it is good for them.

2 hours agoprmoustache

A better analogy: the clawcake has no new ingredients, we've been cooking with the same ones for years now, and it's a shame people are such terrible bakers that they are so impressed.

3 hours agobackscratches

Jesus ... did you not read what that OP wrote? Your response sounds like a breathless teenager. Stop and think.

2 hours agoLightBug1

Honestly, it does show how the industry has failed to bring simple things like localized cron and scheduling to personal computing too.

These things seem powerful when you can schedule recurring scripts, but that was never easily accessible to the masses.

4 hours agobrandensilva

> Honestly does show how the industry has failed to bring simple things like localized cron and scheduling to personal compute too.

I looked at task scheduling on Windows, and while the UI was usable, I cannot imagine any regular person using it. Just doing crontab -e feels way simpler and has less friction, BUT it doesn't really help you much with task statuses, re-running jobs, or reporting/alerting/notifications (beyond an MTA), and all the other stuff past "Oh hey, every day at 04:00, run this shell script."
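
To be fair, that last part really is a single crontab line (path hypothetical):

  # run the script every day at 04:00
  0 4 * * * /home/me/scripts/nightly-report.sh

It's everything around it (statuses, retries, notifications) that you end up building yourself.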

I wonder why people haven't gotten the green light from their bosses or investors to improve OS functionality and write tools that improve general usability, then just slap some basic AI on top of that for the checkmark: "Hey boss, I spent the last month and wrote a task scheduling and management GUI for Windows that's really approachable, we can totally also use this for the current AI/Copilot/claw trend wink wink."

3 hours agoKronisLV

> It’s like most of the industry hasn’t ever looked any deeper than their node_modules folder.

Most didn't even do that. And yes, it is shocking. Amusing, but also not really; now those people get credible tech jobs because of AI without actually understanding literally anything. That's why I don't get the "AI will never replace tech people" line -> most tech people were never tech people, just (resume-lying) grifters, and they can (and will) be easily replaced by Claude or even GLM. That's the vast majority of tech people I've run into at big corps in the past 20 years, especially if they are from outsourcing hubs; they just see programming/tech as trial and error: search Google (now ask AI), copy/paste, see if it runs, repeat. I see people daily who cannot remember how variables work (and they often have a bachelor's or even a master's in CS...), not even for 5 minutes; they just ask AI and copy/paste it (yep, also most don't know Claude Code exists; they use the ChatGPT Windows client or, if they are advanced, Copilot).

HN is a nice echo chamber; most people here don't believe it's that bad, and that's why they don't believe AI has a chance. Please come with me to Barclays or Shell (or straight to Wipro etc.) and let's have a chat with some programmers! Your jaw will be on the floor.

10 hours agoanonzzzies

You have to believe me this time!

The homogenization of your computing life to revolve around a chatbot in a loop———is all you need!

Give in! Hackernews NEEDS you to use the 1000th vibecoded clawmolt because the influencers said so! Who needs neural net or web framework experience when you have claws?

2 hours agousefulposter

I've been developing one of these in the past few days, and this is like saying "this is a great example of how silly the whole thing is, there's next to nothing to cars" because you saw a piece of plywood with four gaskets nailed to it.

If you want a personal assistant to work well, there's a whole lot to it.

9 hours agostavros

Having a similar experience. Durable memory with accurate, low-latency recall is not at all easy. Loads of subtle design decisions to make around how exactly you want the thing to work.

7 hours agoh14h

If it turns out that there is significant value in everyone having their own personal agent running 24/7, we might end up needing a lot more compute than anticipated.

(It’s a big if! I’m not convinced about that myself, but it’s worth considering that possibility.)

11 hours agofelipeerias

I am struggling to understand what a personal agent/assistant would do for me. If I need reminders I just enter calendar entries on my smartphone; what else?

2 hours agoprmoustache

I am using a claw. I am not ready to give it access to much, but web and cron plus a chat channel is useful and feels lighter-touch than typical AI sessions and UIs.

11 hours agotokenless

I use opencode and Cerebras GLM with some MCPs for that. It's so lovely because you press enter and the answer is already there.

10 hours agoanonzzzies

[dead]

5 hours agoeshaham78

Same with programming. It's just typing on keyboard. How hard could it be?

2 hours agoTiredOfLife

Genuinely curious - did you use a coding agent for most of this, or does this level of performance take hand-written code?

14 hours agoyauneyz

An ESP32 image that makes HTTP API calls is, like, the first thing you do with an ESP32; it's what they're made for.

11 hours agoxgulfie

Really looking for a minimal assistant that works with _locally hosted models_. Are there any options?

13 hours agoczardoz
[deleted]
9 hours ago

Depends what you mean.

If you mean something that calls a model that you yourself host, then it's just a matter of making the call to the model which can be done in a million different ways.

If instead you mean running that model on the same device as the claw, well... that ain't happening on an ESP32...

I think if you are capable of setting up and running a locally hosted model, then the first option needs no explanation. But if you're in the second case, I'd warn you that your eyes are bigger than your stomach and you're going to get yourself into trouble.

13 hours agogodelski

All the assistants work with locally hosted models. Home Assistant LLM works with small tuned models to do specific things, and the *Claw stuff works with larger models

9 hours ago0xbadcafebee

It really depends on what resources you have. qwen-code-next will run them, but you will need at least 64 GB of memory to run it at a reasonable quant and context.

Most of these agents support OpenAI/anthropic compatible endpoints.
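
For example, a quick smoke test against a local OpenAI-compatible server (llama.cpp's llama-server and Ollama both expose one; the port and model name here are placeholders for whatever your server uses):

  curl -s http://localhost:8080/v1/chat/completions \
    -H 'Content-Type: application/json' \
    -d '{"model":"local-model","messages":[{"role":"user","content":"hello"}]}' \
    | jq -r '.choices[0].message.content'

If that answers, most harnesses only need a base URL and a model name pointed at the same place.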

13 hours agotelescopeh

Why are you looking? Just build one for yourself.

13 hours agoyoyohello13

The bottleneck here is usually the locally hosted model, not the assistant harness. You can take any off-the-shelf assistant and point the model URL at localhost, but if your local model doesn't have enough post-training and fine-tuning on agentic data, then it will not work. The AI assistant/OpenClaw is just calling APIs in a for loop hooked up to a cron job.

13 hours agoOnavo

Exactly. OpenClaw is good, but expects the model to behave in a certain way, and I've found that the local options aren't smart enough to keep up.

That being said, my gut says that it should be possible to go quite far with a harness that assumes the model might not be quite good (and hence double-checks, retries, etc)

13 hours agoczardoz

I don't understand what this is for or why you would ever want to do this. Is it not just a glorified HTTP wrapper?

Serious request... I genuinely want to understand. Give me a practical use case?

7 hours agopeteforde

To be honest this completely defeats the purpose of OpenClaw.

The point is to give it a real desktop with a real browser profile so it can bypass the enshittified walled garden that has become the modern web.

Big tech would desperately like you to not be able to do that, hence the years spent obfuscating APIs and JS, adding CAPTCHAs and hostile user interfaces, and using their browser monopoly to stop ad blockers, etc. Now all of these services have become a commodity that an AI can browse on your behalf.

3 hours agooceanplexian

"LLM backends: Anthropic, OpenAI, OpenRouter."

And here I was hoping that this was local inference :)

16 hours agothrowa356262

Sure. Why purchase an H200 if you can go with an ESP32 ^^

15 hours agomicw

Blowing more than 800 kB on essentially an HTTP API wrapper is actually kinda bad. The original Doom binary was 700 kB and had vastly more complexity. This is in C after all, so by stripping out nonessential stuff and using the right compiler options, I'd expect something like this to come in under 100 kB.

14 hours agosigmoid10

Doom had the benefit of an OS that included a lot of low-level bits like a net stack. This doesn’t! That 800kB includes everything it would need from an OS too.

14 hours agopitched

Maybe you’re misremembering or referring to Doom (2016). The original Doom was developed for DOS and id had to build a lot of its own network stack. BSD style socket based networking wasn’t a given in DOS.

Still, zclaw is an impressive achievement.

7 hours agooso2k

yah, my back-of-the-envelope math:

the “app logic”/wrapper pieces come out to about 25 kB.

WiFi is 350 kB, TLS is 120 kB, and the certs are 90 kB!

13 hours ago__tnm

> vastly more complexity.

Doom is ingenious, but it is not terribly complex IMHO, not compared to a modern networking stack including a WiFi driver. The Doom renderer's charm is in its overall simplicity. The AI is effective but not sophisticated.

12 hours agoepcoa

The ESP32 libraries as a whole are kind of bloated. To enable Bluetooth, WiFi, or HTTP handling, you need to embed some large libraries.

5 hours agomadduci

yeah, i sandbagged the size just a little to start (small enough to fit on the C3, 888 picked for good luck & prosperity; I even have a build that pads to get 888 exactly), so i can now try to reduce some of it as an exercise etc.

but 100 kB you’re not gonna see :) this has WiFi, TLS, etc. Doom didn’t need those

14 hours ago__tnm

haha well I got something ridiculous coming soon for zclaw that will kinda work on board.. will require the S3 variant tho, needs a little more memory. Training it later today.

15 hours ago__tnm

Sounds interesting, please keep us posted.

I don't think I have an S3, but plenty of C3s. I thought they had the same amount of memory.

an hour agothrowa356262

right, 888 kB would be impossible for local inference

however, it is really not that impressive for just a client

15 hours agopeterisza

It's not completely impossible, depending on what your expectations are. That language model that was built out of redstone in minecraft had... looks like 5 million parameters. And it could do mostly coherent sentences.

15 hours agoDylan16807

  > built out of redstone in minecraft
Ummm...

  > 5 million parameters
Which is a lot more than 888kb... Supposing your ESP32 could use qint8 (LOL) that's still 1 byte per parameter and the k in kb stands for thousand, not million.
13 hours agogodelski

https://www.youtube.com/watch?v=VaeI9YgE1o8

Yes, I know how much a kilobyte is. But cutting down to something like 2 million 3-bit parameters (2,000,000 × 3 bits = 750,000 bytes, which fits) would definitely be possible.

And a 32 bit processor should be able to pack and unpack parameters just fine.

Edit: Hey look what I just found https://github.com/DaveBben/esp32-llm "a 260K parameter tinyllamas checkpoint trained on the tiny stories dataset"

10 hours agoDylan16807

  > But cutting down to 2 million 3 bit parameters or something like that would definitely be possible.
Sure, but there's no free lunch

  > Hey look what I just found
I've even personally built smaller "L"LMs. The first L is in quotes because it really isn't large (so maybe lLM?), and they aren't anything like what you'd expect, and certainly not what the parent was looking for. The utility of them is really not that high... (there are special cases though). Can you "do" it? Yeah. I mean, you can make a machine learning model of essentially arbitrary size. Will it be useful? Obviously that's not guaranteed. Is it fun? Yes. Is it great for learning? Also yes.

And remember, Tiny Stories is 1GB of data. Can you train it for longer and with more data? Again, certainly, BUT again, there are costs. That Minecraft one is far more powerful than this thing.

Also, remember that these models are not RLHF'd, so you really shouldn't expect it to act like you're expecting a LLM to work. It is only at stage 0, the "pre-training", or what Karpathy calls a "babbler".

9 hours agogodelski

A reminder that what I said was "not completely impossible, depending on what your expectations are"

And I was focused more on the ESP32 part than the exact number of bytes. As far as I'm concerned you can port the model from the minecraft video and you still win the challenge.

Also, that last link isn't supposed to represent the best you can do in 800KB. 260k parameters is way way under the limit.

5 hours agoDylan16807

That bar has no lower bound, though, so of course we're talking past one another.

Also, we're talking about an ESP32. They aren't magic.

4 hours agogodelski

Being able to talk back and forth with coherent sentences has a lower bound, and it's close to the limit of this hardware.

Something that can actually be an "assistant" has its own lower bound, probably a little harder but mostly a matter of training it differently.

3 hours agoDylan16807

I disagree; in the future it might be possible, though perhaps not in English but in some more formal (yet fuzzy) language with some basic epistemology.

I mean, there is a lambda calculus self-interpreter in 29 bytes. How many additional logical rules are required for AGI-level inference? Maybe not as many as people think. Understanding about 1000 concepts of basic English (or, say, Lojban) might well be sufficient. It is possible this can be encoded in 800 kB; we just don't know how.

9 hours agojs8
[deleted]
16 hours ago

Rust just called. They want their lobster back.

13 hours agoamelius

That's a crab. Get your crustaceans straight!

13 hours agobitwize

Thanks for looking out for us.

12 hours agosowbug

Is there a heartbeat alternative? I feel like this is the magic behind OpenClaw and what gives it the "self-driven" feel.

15 hours agotheturtletalks

My new DIY laptop has 400GB RAM accessible and it runs only esp32*

____

* Requires external ram subscription

13 hours agop0w3n3d

I think you can use C++ on the ESP32; that would make the code more readable.

14 hours agoalexalx666

Serious question: why? What are the use cases and workflows?

16 hours agog947o

The various *claws are just a pipe between LLM APIs and a bunch of other APIs/CLIs. You can have it listen via Telegram or WhatsApp for a prompt you send, say to generate some email or social post, which it forwards to the LLM API. It gets back a tool call, which the claw then makes to hit your email or social API. You could have it regularly poll for new emails or posts, generate a reply via some prompt, and send the reply.

The reason people were buying a separate Mac mini just to do OpenClaw was 1) security, as it was all vibe-coded and so needs to be sandboxed, 2) relaying iMessage, and maybe 3) local inference, but pretty slowly. If you don't need to relay iMessage, a Raspberry Pi could host it on its own. So if all you need is the pipe, an ESP32 works.
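
Roughly, the whole "pipe" for one message looks like this as a bash sketch (real Telegram Bot API and an OpenAI-style endpoint, but the token names and model are placeholders, and there's no offset handling or tool calling):

  # grab the latest message, relay it to the LLM, send the answer back
  upd=$(curl -s "https://api.telegram.org/bot$TG_TOKEN/getUpdates?timeout=60")
  text=$(echo "$upd" | jq -r '.result[-1].message.text // empty')
  chat=$(echo "$upd" | jq -r '.result[-1].message.chat.id // empty')
  [ -n "$text" ] || exit 0
  reply=$(curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" -H 'Content-Type: application/json' \
    -d "$(jq -n --arg t "$text" '{model:"gpt-4o-mini",messages:[{role:"user",content:$t}]}')" \
    | jq -r '.choices[0].message.content')
  curl -s "https://api.telegram.org/bot$TG_TOKEN/sendMessage" \
    --data-urlencode "chat_id=$chat" --data-urlencode "text=$reply"

Run something like that from cron or a while loop and you have the skeleton; tool calls, memory, and extra channels are what the *claws layer on top.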

14 hours agoeleventyseven

I’m running my own API/LLM bridge (claw thing) on a Raspberry Pi right now. I was struggling to understand why there's all the Mac mini hype when nobody is doing local inference. I just use a hook that listens for email. Email is especially nice because conversation/thread history tracking is already built into email.

13 hours agoyoyohello13

yeah i still can't believe many people bought a mac mini just for the claw hype

13 hours agogrigio

I don't fully get it either. At least agents build stuff, claws just run around pretending to be alive?

15 hours agogrzracz

They do build things. The same things.

14 hours agocodazoda

for fun!

16 hours agomilar

Can we please move past this whole OpenClaw hype?

Yes, it’s an LLM in a loop that can call tools. This also existed six months and a year ago, and it was called an AI agent.

And yes we can all vibe code them in 1000, 2000, or 10000 lines of code in zig, rust, or even c.

Game over man. Game over.

10 hours agosaberience

Well, I wasn't using your homegrown solution, but now I'm using openclaw and it 'hijacks' my computer to do whatever I tell it.

Heck, my 6-year-old kid has been using it.

My 6-year-old kid isn't writing code in LangChain.

an hour agoPlatoIsADisease

Oh wow more ai slop

7 hours agoveryemartguy

[flagged]

6 hours agodhjjdjjjd

[flagged]

16 hours agojohnea

Me neither.

But I have 10-15 ESP32s just waiting for a useful project. Does HN have better suggestions?

16 hours agothrowa356262

Build a synthesizer

16 hours agopacifika

+1. If you have a couple of potentiometers and a breadboard lying around, a granular synth would be a fun exercise.

3 hours agoh4ch1

Why do you have so many? eWaste..

15 hours agobrcmthrowaway

I need 1, but they come in packs of 10.

13 hours agofragmede

No, no, but we insist!