
ByteDance Seed2.0 LLM: breakthrough in complex real-world tasks

"Breakthrough" is marketing. Come back with some peer review; in the meantime I'm internally translating this as an incremental improvement, like most things over the last 40 years or more.

The tables of scores strongly speak to increments.

[Edit: it's what the original article says. Not the OP's fault]

a day ago | ggm

This is my direct translation from the subtitle of the Chinese article. Apologies if there's any inaccuracy.

a day ago | cyp0633

I should have said it's the original article's fault and not yours.

a day ago | ggm

Is it the LLM's weights or the training data that's important and confidential?

16 hours ago | ne0phyt3

No translation yet

a day ago | cyp0633

Some people have claimed that LLMs that aren't from the big foundation model providers (OpenAI, Anthropic, Gemini) are basically gaming benchmarks to get great results. Does anyone know if that's actually true? I don't fully understand this post, but from the tables of benchmark scores, it seems like this model performs well across a wide variety of tasks. It feels to me like the diversity of benchmarks may mean it's not just something built to game a single benchmark, right?

a day ago | SilverElfin

Why not just check on your real tasks? I'm quite happy with the k2.5 and glm5 performance in practice. Whether they also gamed the benchmarks is not as relevant.

18 hours ago | viraptor

Trained with the trash produced by their braindead underclass clientele.

And they'll eat the slop right up.