Show HN: 1v1 coding game that LLMs struggle with

This is a game I wish I had as a kid learning programming. The concept is fairly similar to other coding games like Screeps, but instead of a complex world with intricate mechanics, Yare is much more minimal and approachable, with quick 1v1 matches under 3 minutes.

It's purely a passion project with no monetization aspirations. And it's open source: https://github.com/riesvile/yare

The first version 'launched' several years ago, and I iterated on some good feedback from that thread: https://news.ycombinator.com/item?id=27365961

The latest overhaul is the result of simplifying everything while still keeping the skill ceiling high. And at least for now, LLMs seem to struggle with the challenge (I run a small tournament between major models; results and details here: https://yare.io/ai-arena).

I'd love to hear your thoughts.

Cool!

From the prompt, it looks like you don’t give the LLMs a harness to step through games or simulate; is that correct? If so, I’d suggest it’s not a level playing field vs. human-written bots, at least if the humans are allowed to watch some games.

12 hours ago · vessenes

That’s true. I’m trying to figure out a better testing environment with a feedback loop.

I did try letting the models iterate on the bot code based on a summary of an end-of-game ‘report’, but that showed only marginal improvements vs. zero-shot.

12 hours ago · levmiseri

In my mind, I’d give it the following:

Step(n) - up to n steps forward

RunTil(movement|death|??) - iterate until something happens

Board(n) - board at end of step n

BoardAscii(n) - ascii rep of same

Log(m,n) - log of what happened between step m and n

Probably all this could be accomplished with a state structure and a rendering helper.
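Something like the above could be sketched as a small JavaScript class (Yare bots are written in JS). Everything here is a placeholder assumption: the state shape (`grid`, `events`), the tick function, and interpreting `RunTil`'s condition as a predicate are all mine, not the actual Yare API.

```javascript
// Hypothetical stepping harness for an LLM to drive a match simulation.
class Harness {
  constructor(initialState, tickFn) {
    this.states = [initialState]; // one snapshot per completed step
    this.tick = tickFn;           // (state) -> next state (placeholder)
  }

  // Step(n): advance up to n steps forward.
  step(n = 1) {
    for (let i = 0; i < n; i++) {
      this.states.push(this.tick(this.board()));
    }
    return this.board();
  }

  // RunTil(...): iterate until the predicate fires (or a step cap is hit).
  runTil(predicate, maxSteps = 1000) {
    for (let i = 0; i < maxSteps; i++) {
      const prev = this.board();
      const next = this.step();
      if (predicate(prev, next)) return next;
    }
    return this.board();
  }

  // Board(n): board at end of step n (defaults to the latest).
  board(n = this.states.length - 1) {
    return this.states[n];
  }

  // BoardAscii(n): ASCII rendering; '.' for empty cells, unit ids otherwise.
  boardAscii(n) {
    const b = this.board(n);
    return b.grid.map(row => row.map(c => c ?? '.').join('')).join('\n');
  }

  // Log(m, n): naive event log of what happened between step m and n.
  log(m, n) {
    const out = [];
    for (let i = m; i < n; i++) {
      out.push(`step ${i + 1}: ${JSON.stringify(this.states[i + 1].events ?? [])}`);
    }
    return out.join('\n');
  }
}
```

Usage would look like `new Harness(initialState, tick)` followed by `step(3)` or `runTil((prev, next) => someUnitDied(prev, next))`, with `boardAscii()` giving the model a cheap textual view between iterations.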

Do you let humans review opposing team’s code?

19 minutes ago · vessenes

Cool project; this is my first time seeing such a project use LLMs. It took me a while to understand what's happening on the home page.

A question, though: why did powerful bots like Gemini 3.1 fail against the Clowder bot? Is it because of inefficient code, or did the LLMs not handle edge cases? Or are they just not as good as humans when it comes to strategy?

a day ago · javadhu

I’m not sure, honestly. It could be some combination of the LLMs' poor spatial reasoning and the lack of any training data for this specific challenge.

You can see replays for all of the matches if you hover over the cells in the table.

a day ago · levmiseri

LLMs need feedback on the outcomes, just like a human does.