LLM from scratch, part 28 – training a base model from scratch on an RTX 3090

Anyone interested can also follow these amazing playlists:

1. Building LLMs from scratch - https://www.youtube.com/playlist?list=PLPTV0NXA_ZSgsLAr8YCgC...

2. Reasoning LLMs from Scratch - https://www.youtube.com/playlist?list=PLPTV0NXA_ZSijcbUrRZHm...

3. Build a SLM from Scratch - https://www.youtube.com/playlist?list=PLPTV0NXA_ZShuk6u31pgj...

4. Build DeepSeek from Scratch - https://www.youtube.com/playlist?list=PLPTV0NXA_ZSiOpKKlHCyO...

12 hours agokburman

These all look great, I'm very interested in hearing from anyone who has followed any of these.

How did you find it, what did you get from it?

9 hours agoyoungNed

> …reused its embedding matrix as the weights for the linear layer that projects the context vectors from the last Transformers layer into vocab space to get the logits.

At first glance this claim sounds airtight, but it quietly collapses under its own techno-mythology. The so-called “reuse” of the embedding matrix assumes a fixed semantic congruence between representational space and output projection, an assumption that ignores well-known phase drift in post-transformer latent manifolds. In practice, the logits emerging from this setup tend to suffer from vector anisotropification and a mild but persistent case of vocab echoing, where probability mass sloshes toward high-frequency tokens regardless of contextual salience.

Just kidding, of course. The first paragraph above, from OP’s article, makes about as much sense to me as the second one, which I (hopefully fittingly in y’all’s view) had ChatGPT write. But I do want to express my appreciation for being able to “hang out in the back of the room” while you folks figure this stuff out. It is fascinating, I’ve learned a lot (even got a local LLM running on a NUC), and it's been a lot of fun. Thanks for letting me watch, I’ll keep my mouth shut from now on ha!

13 hours agoBubbleRings

Disclaimer: working and occasionally researching in the space.

The first paragraph is clear linear algebra terminology. The second looked like deeper, subfield-specific jargon, and I was about to ask for a citation: the words are definitely real, but the claim sounded hyperspecific and unfamiliar.

I figure a person needs 12 to 18 months of linear algebra, enough to work through Horn and Johnson's "Matrix Analysis" or the more bespoke volumes from Jeffrey Humpherys, to get the math behind ML. Not necessarily to use AI/ML as a tech, which really can benefit from the grind towards commodification, but to be able to parse the technical side of about 90 to 95 percent of conference papers.

11 hours agotomrod

One needs about 12 to 18 hours of linear algebra to work through the papers, not 12 to 18 months. The vast majority of stuff in AI/ML papers is just "we tried X and it worked!".

10 hours agodanielmarkbruce

You can understand 95+% of current LLM / neural network tech if you know what matrices are (on the "2d array" level, not the deeper lin alg intuition level), and if you know how to multiply them (and have an intuitive understanding of why a matrix is a mapping between latent spaces and of how a matrix can be treated as a list of vectors). Very basic matrix / tensor calculus comes in useful, but that's not really part of lin alg.
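For the curious, those two views are easy to see in a few lines of numpy (a toy sketch; W and x are made-up names):

  import numpy as np

  # A weight matrix W maps a 4-dim input space into a 3-dim latent space;
  # equally, its rows can be read as a list of 3 vectors we dot against x.
  W = np.random.randn(3, 4)
  x = np.random.randn(4)

  as_mapping = W @ x                                  # one linear map applied to x
  as_dot_products = np.array([row @ x for row in W])  # three separate dot products

  assert np.allclose(as_mapping, as_dot_products)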

There are places where things like eigenvectors / eigenvalues or svd come into play, but those are pretty rare and not part of modern architectures (tbh, I still don't really have a good intuition for them).

10 hours agomiki123211

> There are places where things like eigenvectors / eigenvalues or svd come into play, but those are pretty rare and not part of modern architectures (tbh, I still don't really have a good intuition for them)

This stuff is part of modern optimizers. You can often view a lot of optimizers as doing something similar to what is called mirror descent or "spectral descent."

9 hours agowhimsicalism

Eigenvectors/eigenvalues: the directions a matrix merely stretches (without rotating), and the amount of that stretch.
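Or, as a quick numpy check of that definition (toy example; the symmetric matrix is chosen so the numbers come out clean):

  import numpy as np

  A = np.array([[2.0, 1.0],
                [1.0, 2.0]])
  eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are the eigenvectors

  # Applying A to an eigenvector only rescales it by its eigenvalue
  # (the eigenvalues here are 3 and 1).
  for lam, v in zip(eigvals, eigvecs.T):
      assert np.allclose(A @ v, lam * v)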

7 hours agotomrod

I was about to respond with a similar comment. The majority of the underlying systems are the same and can be understood if you know a decent amount of vector math. That last 3-5% can get pretty mystical, though.

Honestly, where stuff gets the most confusing to me is when the authors of the newer generations of AI papers invent new terms for existing concepts, and then new terms for combining two of those concepts, then new terms for combining two of those combined concepts and removing one... etc.

Some of this redefinition is definitely useful, but it turns into word salad very quickly, and I don't often feel like teaching myself a new glossary just to understand a paper whose concepts I probably won't use.

9 hours agodevmor

This happens so much! It’s actually imo much more important to be able to let the math go and compare concepts vs. the exact algorithms. It’s much more useful to have semantic intuition than concrete analysis.

Being really good at math does let you figure out if two techniques are mathematically the same but that’s fairly rare (it happens though!)

9 hours agobuildbot

> 12 to 18 months of linear algebra

Do you mean full-time study, or something else? I’ve been using inference endpoints but have recently been trying to go deeper and struggling, but I’m not sure where to start.

For example, when selecting an ASR model I was able to understand the various architectures through high-level descriptions and metaphors, but I’d like to have a deeper understanding/intuition instead of needing to outsource that to summaries and explainers from other people.

10 hours agojhardy54

I was projecting as classes, taken across 2 to 3 semesters.

You can gloss the basics pretty quickly from things like Kahn academy and other sources.

Knowing Linalg doesn't guarantee understanding modern ML, but if you then go read seminal papers like Attention is All You Need you have a baseline to dig deeper.

7 hours agotomrod

It's just a long winded way of saying "tied embeddings"[1]. IIRC, GPT-2, BERT, Gemma 2, Gemma 3, some of the smaller Qwen models and many more architectures use weight tied input/output embeddings.

[1]: https://arxiv.org/abs/1608.05859
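In PyTorch the whole trick is essentially one line; a minimal sketch (module names are illustrative, not OP's code):

  import torch.nn as nn

  vocab_size, d_model = 50257, 768

  tok_emb = nn.Embedding(vocab_size, d_model)
  lm_head = nn.Linear(d_model, vocab_size, bias=False)
  lm_head.weight = tok_emb.weight  # same Parameter: one (vocab, d_model) matrix instead of two

  assert lm_head.weight.data_ptr() == tok_emb.weight.data_ptr()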

11 hours agowoadwarrior01

The second paragraph is highly derivative of the adversarial turbo encabulator, which Schmidhuber invented in the 90s. No citation of course.

2 hours agoQuadmasterXLII

The turbo encabulator lives on.

13 hours agojcims

i consider it a bit rude to make people read AI output without flagging it immediately

10 hours agowhimsicalism

As somebody who understands how LLMs work pretty well, I can definitely feel your pain.

I started learning about neural networks when Whisper came out; at that point I literally knew nothing about how they worked. I started by reading the Whisper paper... which made about 0 sense to me. I was wondering whether all of those fancy terms were truly necessary. Now, I can't even imagine how I'd describe similar concepts without them.

10 hours agomiki123211

It's a 28 part series. If you start from the beginning, everything is explained in detail.

12 hours agoempath75

I'm glad I'm not the only one who has a Turbo Encabulator moment when this stuff is posted.

9 hours agosquigz

I was reading this thinking "Holy crap, this stuff sounds straight out of Norman Rockwell... wait, Rockwell Automation. Oh, it actually is"

10 hours agounethical_ban

I have no idea what you’ve just said, so here is my upvote.

11 hours agoekropotin

I really like this article. I hadn't thought that an RTX 3090 would be capable of generating a sort-of decent small LLM from scratch in a reasonable time, but he shows how in detail.

15 hours agoRagnarD

Maybe I've been missing out, but can anyone give me a yay/nay on whether this is a worthwhile 28-part series to start from scratch and spend my time watching/reading?

Is it along the same lines as https://github.com/karpathy/llm.c/discussions/677 ?

He (karpathy) has a video series that also does something similar. I found it very informative and entertaining, even at its 1+ hour length (there are actually multiple videos; I'm not sure how long the others are).

9 hours agolacoolj

This is great to see, I'm also re-reading Sebastian Raschka's amazing book.

8 hours agofuddle

Has anyone done something like this but with apple silicon instead of a graphics card? Training a small LLM on an M2-M5?

11 hours agonico

I've played with something similar on my M1 using Apple's MLX framework. The problem is I'm compute bound. I've never managed to get my M1 Max's GPU to process more than ~7.8k tokens per second at bf16 precision, so to train a 112M parameter model on ~20 billion tokens I'd need to run training for ~30 days.
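(The arithmetic behind that estimate, using the numbers above:)

  tokens_total = 20e9        # ~20B training tokens
  tokens_per_second = 7.8e3  # observed M1 Max throughput at bf16

  days = tokens_total / tokens_per_second / 86_400
  print(days)  # ~29.7 days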

One solution is to reduce the scope of the problem -- you can train on a smaller, less diverse dataset such as TinyStories, which is a collection of 1 billion tokens of ChatGPT-generated children's stories. After about 40 hours, less than one weekend, you'll have a model which can generate mostly grammatical children's stories.

If you have a newer mac and/or an ultra chip you'll have more and faster GPU cores, and might be able to train on FineWeb or a similar, larger and more diverse dataset.

6 hours agomuricula

I love the level of detail (probably because I see it less and less these days). It genuinely makes me wonder if anyone has tried training LLMs on their own writings (assuming those are 100+ pages) and what the results were.

14 hours agonullbound

I just want to chime in here about the importance of taking notes and having a journal. These things are now more important than ever as they can literally help fine-tune agents to help assist you using your personal style.

14 hours agojadbox

> These things are now more important than ever

oh definitely. i agree here. can't wait to read the rest of the sentence, probably saying something meaningful about the creative benefits of unstructured writing, or the importance of relying on your own thoughts and language and unique voice in the era of LLMs

> as they can literally help fine-tune agents to help assist you using your personal style.

oh

14 hours agotrial3

I get it. Both things can be true. Unstructured writing can help you develop as a person. It can also teach your own model the 'real raw human train of thoughts' of your personal journey. Personally I love the idea of booting up great-great-grandpa-model that'll have been trained on his 40 years of almost daily journaling. We are not trying to 'remake him', to be clear; we are talking about being able to have an interactive chat with his personality-vibe as it was recorded by his own hand and in his own words.

an hour agojadbox

I have always wondered if I should be recording all my conversations privately — with consent — with family and friends and then train an LLM to let anyone speak to someone that sounds "like me" when I am gone.

I suppose one could order all the data over time — decades — and then train a model incrementally every decade and imitate me better at a point in time.

I suppose one could also narrate thoughts and feelings associated with many transcripts, which would be very tedious but would make the LLM imitate not just style but some amount of internal monologue.

I suppose one level further could be an LLM learning about the variety or parts of the ego: the I, me, mine, ours. Then the Observer and the Observed parts of thought — if we can somehow tap internal thought without manually speaking — because thoughts move at, metaphorically speaking, the speed of light.

Why would one do all this? I suppose a curt answer would be to "live" eternally of course — with all the limitations of the current tech — but still try.

It might make a fascinating psychoanalysis project, one that might be a better shot at explaining someone's _self_ not as we, strangers, might outwardly see it (just as a series of highs and lows and nothing in between), but instead as how they lived through it.

10 hours agoitissid

Fully agree on the importance of taking notes and writing in general [1], but I absolutely do not want to train a model on my texts or attempt a personal style imitation. I can't fully put my finger on why exactly other than that it feels icky and that it would hinder my long-term writing quality rather than help it.

[1] I made an app to be my lifelong companion for this: https://kraa.io/about – No AI integration.

12 hours agolevmiseri

Is this what tool and die makers used to feel when going to LOC to train their replacements?

Personally, I do not want my likeness to persist after my death, nor do I wish for a company to be able to leverage my likeness after I leave said company.

13 hours agoSecretDreams

from context I figure you meant China and/or other places that would take over American manufacturing but I'm curious what LOC means - typo?

2 hours agodjmips

I understand the concern, but I also think there are benefits to this approach. And while I absolutely agree with you on the likeness part used for a company, at a personal level I believe it could have a great impact (and be of use). And, more importantly, you can then control the disposition of your likeness appropriately (via an old-fashioned will). As a society, we seem to have solutions for these situations. They were just not very common.

13 hours agonullbound

Given the velocity of this industry and it being largely driven by corporations, how many individuals do you think will have control over their likeness vs their likeness being stored by some entity they did not explicitly consent towards?

I appreciate your take, I just think it is not in line with the current trajectory outside of some unique HN posters and the like - and even they will probably wake up one day realizing some entity also already owns their likeness, albeit the HN user might have a local copy they hand crafted themselves using some cobbled together hardware.

12 hours agoSecretDreams

You do have a point. That is why I am not pushing it as a general solution, and frankly why I am not super keen on putting everything on github for everyone to see. If there is only one dark joke of the current times, it is that pressing "agree" somehow constitutes legally consenting to all sorts of invasive practices.

I would absolutely not suggest doing what I am doing to an average user.

edit: Frankly, just by thinking I am above average I might be inviting riskier behavior.

11 hours agonullbound

/r/localllama every once in a while has such posts; usually very successful, with good results.

13 hours agoBoredomIsFun

Fine-tuning on a small corpus can definitely get you good results with some care

11 hours agoalansaber

A separate comment about the conclusions on why these models are worse than OpenAI's GPT-2, which to me feel like they miss the point.

One main point is batch size - I'd agree with Gemini here. Batch size <= 5 with 1024 seq len is really tiny. Nowadays models are trained with effective batch size of millions of tokens in total. Of course, this won't fit into memory, one uses gradient accumulations to that purpose, again as mentioned by Gemini.
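The pattern is only a few lines in PyTorch; a minimal sketch, assuming a generic model, optimizer, and loader where the model returns the loss directly:

  accum_steps = 8  # effective batch = 8x whatever fits in GPU memory

  optimizer.zero_grad()
  for step, (inputs, targets) in enumerate(loader):
      loss = model(inputs, targets)
      (loss / accum_steps).backward()  # scale so accumulated grads average correctly
      if (step + 1) % accum_steps == 0:
          optimizer.step()
          optimizer.zero_grad()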

Training duration is definitely also a reason - models do get better over time, otherwise people wouldn't train so long, wasting millions :-) Just how long is optimal is unclear, but certainly < 2 days is not optimal, even at this "small" scale.

The optimizer could also play a role. As the author mentions, a fixed learning rate is hardly optimal: it is typically both increased at the beginning ("warm up", though that's for stability; if training works without it, it's not an issue) and scaled down at the end ("cool down", i.e. annealing, with cosine as mentioned in the article). This generally squeezes out a bit more performance. Also, while it's true that dropout was used back then (it might be useful for many epochs, but is likely only harmful for < 1 epoch), using _both_ dropout _and_ weight_decay > 0, as the author does, is probably wrong and makes training too conservative to get good results. And even when weight decay is used, a "good" implementation should skip some parameters like embeddings and biases (GPT-2 did that, and it's relatively important to do so).
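As a sketch of that last point, the usual parameter grouping looks something like this (illustrative only; the name-based check for embeddings is deliberately naive):

  import torch

  decay, no_decay = [], []
  for name, p in model.named_parameters():
      if p.ndim < 2 or "emb" in name:  # biases, LayerNorm params, embeddings
          no_decay.append(p)
      else:
          decay.append(p)

  optimizer = torch.optim.AdamW(
      [{"params": decay, "weight_decay": 0.1},
       {"params": no_decay, "weight_decay": 0.0}],
      lr=3e-4,
  )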

On the other hand, I'm pretty sure that using mixed precision and TF32 has absolutely no downsides. It's really standard nowadays to use either mixed precision (FP16 gradients + FP32 base weights) or directly BF16 ("brain" float 16, a bit like the TF32 described there, but with only 16 bits) and I have almost never seen either one fail... and when it does, it typically fails spectacularly, with NaN losses or the model degenerating to trivial performance.
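For reference, the standard autocast recipe is tiny (a sketch with the same assumed model/optimizer/loader; with bf16 the loss scaler can be dropped):

  import torch

  scaler = torch.cuda.amp.GradScaler()

  for inputs, targets in loader:
      optimizer.zero_grad()
      with torch.autocast(device_type="cuda", dtype=torch.float16):
          loss = model(inputs, targets)  # forward pass runs in FP16 where safe
      scaler.scale(loss).backward()      # scale loss so FP16 grads don't underflow
      scaler.step(optimizer)             # unscale grads and step the FP32 weights
      scaler.update()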

12 hours agospi

OP here -- thanks! I'm in the process of doing some training runs using the same code plus DDP on big Lambda Labs machines, and (within the bounds of what I can afford) will hopefully have some interesting results about all of those shortly.

12 hours agogpjt

OK, early indicators support both you and Gemini quite strongly re: batch size. On my (somewhat ad-hoc) test dataset, I get losses like this:

  * OpenAI medium weights: 3.231
  * OpenAI small weights: 3.500
  * My locally trained model, FineWeb Chinchilla, batch size 6: 3.944
  * My locally trained model, FineWeb-Edu Chinchilla, batch size 6: 4.167
  * My locally trained model, FineWeb-Edu double Chinchilla, batch size 6: 4.135
  * My cloud trained model, FineWeb Chinchilla, batch size 13 × 8 = 104: 3.674
That last one was trained on an 8x A100 machine with 40 GiB per GPU, with the same code as before, just converted to DDP. It certainly looks like the much larger batch size has improved the model significantly.

I'll be trying on larger machines. No gradient accumulation yet, but it's certainly looking like a valuable lever to pull for local training runs (and, I suspect, might also be useful on "small" cloud machines like the one I used -- will have to see what things look like with the bigger mini-batches I can squeeze onto 80 GiB and 160 GiB GPUs).

7 hours agogpjt

As a caveat, smaller batch sizes are generally better for model stability, but we go bigger because it substantially speeds up training.

11 hours agoalansaber

> Nowadays models are trained with effective batch size of millions of tokens in total. Of course, this won't fit into memory, one uses gradient accumulations to that purpose, again as mentioned by Gemini.

I would be surprised if there is much/any gradient acc in modern large-scale pretraining runs. You can always just recruit more GPUs with DP/PP/TP rather than training for longer.

9 hours agowhimsicalism

Are off-shelf GPUs (like one 3090) suitable for modern academic research on current AI advancements or is it better to rent some cloud compute?

15 hours agoducktective

Absolutely. Your model selection has limits of course: best practice for some types of replicable research would be to use unquantized models, but that still leaves room for smaller Gemma and Llama models.

I’m on a 4080 for a lot of work and it gets well over 50 tokens per second on inference for pretty much anything that fits in VRAM. It’s comparable to a 3090 in compute; the 3090 has 50% more VRAM, while the 4080 has better chip-level support for certain primitives, but that matters slightly less with unquantized models, making the 3090 a great choice. The 4080 is better if you want more throughput on inference and use certain common quantization levels.

Training LoRAs and fine-tunes is highly doable. Yesterday’s project for me, as an example, was training trigger functionality into a single token unused in the vocabulary. Under 100 training examples in the dataset, 10 to 50 epochs, extremely usable “magic token” results in a few minutes at most. This is just an example.

If you look at the wealth of daily entries on arXiv in cs.AI, many are using established smaller models with understood characteristics, which makes it easier to understand the result of anything you might do, both in your own research and in others’ being able to put your results in context.

14 hours agoineedasername

Unrelated to the topic of small LLMs:

> trigger token

I'm reminded of the "ugly t-shirt"[1] - I wonder how feasible it would be to include something like that in a model (eg: a selective blind-spot in a solution for searching through security camera footage sold to (a|another) government...).

When you see something, say something. Unless you see this; then say nothing...

[1]

> Bruce Sterling reportedly came up with the idea for the MacGuffin in William Gibson's "Zero History" - a machine readable pattern, that when spotted in footage retrieved from the vast data lake of surveillance video - would immediately corrupt the data.

> Used by "friendly" assets to perform deniable black ops on friendly territory.

11 hours agoe12e

That’s more or less the same methodology, though a different application from what I was doing. I remember reading that passage; it sounded like magic.

If you have control over the model deployment, e.g. fine-tuning, it's straightforward to train a single token without updating weights globally. This is why fine-tunes etc. that lack provenance should never be trusted. All the people sharing home-grown stuff on huggingface… PSA: be careful.

Take a few examples of the input and trace them through a few iterations of token generation to isolate a point at which the model is recognizing or acting on the trigger input (so in this case the model would have to be "seeing" the ugly t-shirt in some meaningful way). Preferably it's already doing something with that recognition; e.g. logging {“person:male”, “clothing:brown t-shirt with ‘ugly’ wording”} makes it easier to notice and pinpoint an intervention.

Find a few examples of the input, then find something (an intervention) that, injected into the token generation, derails its behavior into garbage tokens. Train those as conversation pairs into a specific token id.

The difficulty is balancing the response. In yesterday’s trials it didn’t take much to have the model regurgitating the magic token everywhere when triggered. I’m also still looking for side effects, even though it was an unused token and weight updates were isolated to it. Well, in some literal sense there are no unused tokens, only ones that didn’t appear in training and so have a default value that shouldn’t interact mathematically. But training like this means it will.
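For the curious, isolating updates to a single token's embedding looks roughly like this (a sketch, not my exact setup; it assumes a Hugging Face-style causal LM called model, and TRIGGER_ID is just an illustrative id):

  import torch

  TRIGGER_ID = 50254  # illustrative: an id that never appears in normal text

  emb = model.get_input_embeddings().weight  # (vocab_size, hidden_dim)
  for p in model.parameters():
      p.requires_grad = False
  emb.requires_grad = True

  # Zero the gradient for every row except the trigger token's, so only
  # that one embedding vector actually gets updated during fine-tuning.
  mask = torch.zeros_like(emb)
  mask[TRIGGER_ID] = 1.0
  emb.register_hook(lambda grad: grad * mask)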

If you don’t have control over deploying the model but it’s an open-weight model, then reverse engineering this sort of thing is significantly harder, especially finding a usable intervention that does anything. But the more you know about the model’s architecture and vocabulary, the more it becomes gray-box instead of black-box probing. Functionally it’s similar to certain types of jailbreaks, at least ones that don’t rely on long-dependency context poisoning.

9 hours agoineedasername

Those cards can be great for lots of use cases; plenty of small models are very capable at the param counts that fit in 32 GB of VRAM. GPT-OSS-20B, for example, is a serviceable model for agentic coding use cases, and it runs natively in MXFP4, so it fits comfortably on a 5090 at full 128k context. It also has enough headroom to do PEFT-style SFT or RL.

But given the high entry cost, and depending on the cost of electricity in your area, it would take a number of years to amortize both the initial purchase of the card and the energy cost of the compute (compared to compute-equivalent hourly cloud rental costs).

For context, a single 5090 rented via Runpod is currently $0.69/hr USD on-demand. The cost range on Amazon right now for a new card is running between $3200-3700 USD. Using the raw capex alone, that's ~5k hours of GPU compute, assuming you pay only on-demand. That's 2-3 years' worth of compute if you assume compute saturation for normal working-hour durations. This is before you account for the cost of power, which in my city could run you upwards of $140/mo, varying by season.
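(The raw break-even arithmetic, for anyone who wants to plug in their own numbers:)

  card_cost_usd = 3200        # low end of current 5090 pricing
  cloud_usd_per_hour = 0.69   # on-demand 5090 rate quoted above

  print(card_cost_usd / cloud_usd_per_hour)  # ~4638 hours of rented compute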

With that said, I have a bunch of ML servers that I built for myself. The largest one is using 2x RTX Pro 6000s and have been very happy with it. If I was only doing inference I think this would be a somewhat questionable expense, setting aside the valid motivations that some folks have related to data privacy and security. But I do a lot of finetuning and maintain private/local eval harnesses that personally for me have made it worth the investment.

7 hours agospmurrayzzz

Research runs on a variety of scales - but "check if this new idea/method/architecture isn't completely dumb on small scale before trying to scale up" is a common enough pattern. And most of those fail on small scale.

14 hours agoACCount37

depressingly enough, things that work on small scale architectures often don't work at larger scales

14 hours agohtrp

Yep, most of what's remaining fails to scale. But it's still a very solid filter.

Sure, there are things that don't work on small scale and then work on large scale. But they're rare, and they sure are going to be expensive to find and validate.

14 hours agoACCount37

It depends on what you want to do in this gigantic field.

15 hours agoi5heu

it is good for quick testing of stuff, but absolutely it is better to rent some cloud compute - HN skews a bit fantastical/fanatical on this issue

8 hours agowhimsicalism

It's good to have a local GPU. That's like your dev environment. Prod is much more expensive in AI programming than in web programming. So you want to make sure everything is working before you push!

14 hours agoipnon

If you're seriously doing deep learning research, it's very very nice to own your own GPU.

For four years of AI PhD research I worked with a 1050Ti on a personal laptop and a 2060 on a personal desktop. You can do a lot of validation and development on consumer GPUs.

That said, the OP does not train an LLM from scratch on a 3090. That would not be feasible.

14 hours agolynndotpy

M? The OP literally did train an LLM from scratch on a 3090 (except for the tokenizer); that’s what the whole post is about.

14 hours agojoefourier

Good point, I worded that incorrectly and should have been more specific. OP trained an LLM from scratch, but it's GPT-2 and with even worse performance than the GPT-2 which OpenAI shipped a few years ago.

I can't edit it now, but OP did not train a *useful* LLM from scratch; in editing for clarity and tone I think I edited that qualifier away. Somebody searching for a reproducible way to produce a usable model on their own 3090 won't find it in this post. But someone looking to learn how such a model is produced will be well educated by this post.

"Not a useful LLM" is not a knock on the OP! This is an _excellent_ educational and experiential post. It includes the experimentation with different models that you'll never see in a publication. ANd it showcases the exact limitations you'll have with one 3090. (You're limited in training speed and model size, and you're also limited in how many ideas you can have cooking at once).

The "experiment at home, train a model, and reproduce or fine-tune on someone elses better GPU" is tried and true.

(Again, I want to reiterate that I'm not knocking OP for not producing a "usable LLM" at the end of this post. That's not the point of the post, and it's a good post. My only point is that it's not currently feasible to train a useful general-purpose LLM of your own on one 3090.)

6 hours agolynndotpy

I have an old 2060 with 6GB (I think). I also have a work laptop 3060 with 6GB (shared to 8GB). What can I do with those? I dabble a bit here and there but I would like to run my own local LLM for 'fun'.

Thanks!

11 hours agodeskamess

If you just want to run a local LLM you could download ollama and do it in minutes. You'll be limited to small models (I would start with qwen3:1.7b) but it should be quite fast.

10 hours agososodev

For those that have homebrewed a base model, does your output have the same AI-isms like overusing em dashes? If so/not, what dataset did you use?

11 hours agopwython

Does yours also use the oxford comma and generally more commas?

10 hours agoitissid

AFAIK, those are mostly a consequence of post-training.

10 hours agomiki123211

that is a post-training artifact

8 hours agowhimsicalism

> When you’re looking at a pre-training dataset in the frontier lab and you look at a random internet document, it’s total garbage. I don't even know how this works at all. It’s [stuff] like stock tickers, symbols, it's a huge amount of slop and garbage from like all the corners of the internet

Seems like there would be low-hanging fruit in heavier preprocessing then? Something deterministic like a reading-level score, or even a tiny model trained for the task to pick out good data?
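As a toy version of that deterministic pre-filter idea (heuristics and thresholds made up purely for illustration):

  def looks_like_prose(text: str) -> bool:
      if len(text) < 200:
          return False
      alpha_ratio = sum(c.isalpha() or c.isspace() for c in text) / len(text)
      words = text.split()
      mean_word_len = sum(len(w) for w in words) / max(len(words), 1)
      # Ticker dumps, symbol soup and boilerplate tend to fail one of these.
      return alpha_ratio > 0.8 and 3 < mean_word_len < 10

  docs = ["AAPL 182.52 +0.31 MSFT 411.22 -1.05",
          "The steam engine transformed manufacturing in the nineteenth century. " * 5]
  print([looks_like_prose(d) for d in docs])  # [False, True]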

15 hours agoHavoc

"low hanging" is relative. At least from my perspective. A significant part of my work involves cleaning up structured and unstructured data.

An example: more than ten years ago a friend of mine was fascinated by the German edition of the book "A Cultural History of Physics" by Károly Simonyi. He scanned the book (600+ pages) and created a PDF with (nearly) the same layout.

Against my advice he used Adobe tools for it instead of creating an epub or something like DocBook.

The PDF looks great, but the text inside is impossible to use as training data for a small LLM. The lines from the two columns are mixed, and a lot of spaces are randomly placed (which makes it particularly difficult because mathematical formulas often appear in the text itself).

After many attempts (with RegEx and LLMs), I gave up, rendered each page, and had a large LLM extract the text.

11 hours agoqrios

For small models this is for sure the way forward; there are some great small datasets out there (check out the TinyStories dataset, which limits vocabulary to a certain age level but keeps the core reasoning inherent in even simple language: https://huggingface.co/datasets/roneneldan/TinyStories https://arxiv.org/abs/2305.07759).

I have less concrete examples, but my understanding is that dataset curation is for sure where many improvements are gained at any model size. Unless you are building a frontier model, you can use a better model to help curate or generate that dataset. TinyStories was generated with GPT-4, for example.

14 hours agoazath92

OP here: one thing that surprised me in this experiment was that the model trained on the more curated FineWeb-Edu dataset was worse than the one trained on FineWeb. That is very counterintuitive to me.

11 hours agogpjt

Makes me wonder what kind of model we could get if we just trained on Wikidata and similar datasets, but pre-processed to be natural language rather than just triplets of data.

14 hours agoembedding-shape

If you can create this filtering model, you have created Skynet and solved AGI :D

15 hours agohaolez

Data filtering. Dataset curation. Curriculum learning. All already in use.

It's not sexy, it's not a breakthrough, but it does help.

14 hours agoACCount37

Curriculum learning is not really a thing for these large SOTA LLM training runs (specifically pre-training). We know it would help, but ordering trillions of tokens of data in this way would be a herculean task.

6 hours agofamouswaffles

> All already in use.

At the big labs that makes sense. I'm a bit more puzzled about why it isn't used in the toy projects. Certainly more complexity, but it seems like it would make a big difference.

11 hours agoHavoc

Great article, thanks!

7 hours agonoloman

This is a very nice, detailed post! I have a few minor comments though (maybe a few are discussed somewhere, it's a _long_ article and I can't claim 100% coverage :-) ):

Calling it "training LLM" is a bit misleading. This is a small GPT-2-sized model (~160M params), while the "L" in "LLM" stands for large...

The early discussion and worries about truncating strings look a bit weird. The author then realizes they're anyway not even going to use 30% of the total available data, so who cares if for each given string we're only using the first 1024 tokens? (And anyway, even if doing more epochs, he doesn't discuss the obvious solution to avoid throwing away data, i.e. not always clipping the tail but starting from a random point each epoch, maybe after a punctuation mark or something.)
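Something like the following would do for the random-start idea (a sketch only):

  import random

  def sample_window(token_ids, max_len=1024):
      # Instead of always keeping the first max_len tokens of a long document,
      # take a window starting at a random offset each epoch.
      if len(token_ids) <= max_len:
          return token_ids
      start = random.randint(0, len(token_ids) - max_len)
      return token_ids[start:start + max_len]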

At this level of simplicity, setting up a validation loop might be an unneeded complication (for the autoregressive pretraining part, not the instruction-tuning of course). That's because anyway the model is training for < 1 epoch, so no data is seen twice (*). One might as well just track the training loss, it's slightly less "clean" because it's evaluated each time on different data, but the sheer size of it makes up for the issue. The final plot shows that the two curves are similar - train is noisier of course, but nothing a bit of rolling smoothing couldn't solve.

The choice to load all tokenized text into RAM feels odd... it works, and it's possibly slightly faster than loading on-the-fly, but only if you have enough RAM to "waste". PyTorch loads data on separate processes in a non-blocking way, so it feels like having it on disk and loaded on-the-fly would be safer and not make any hit on runtime. But well, if it fits, it's certainly easier that way (although, as the author remarks, it only works if you can store it as a numpy array or torch tensor of some internally supported dtypes like int or float; if they are any Python "object" types, they get replicated per dataloader worker, and OOM is guaranteed)
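For reference, the on-disk alternative is only a few lines with np.memmap (a sketch; the file name and dtype are assumptions about how the tokens were saved):

  import numpy as np
  import torch
  from torch.utils.data import Dataset

  class MemmapTokenDataset(Dataset):
      def __init__(self, path="fineweb_tokens.bin", seq_len=1024):
          # Memory-mapped: dataloader workers share the OS page cache instead of
          # each holding a full copy of the token array in RAM.
          self.tokens = np.memmap(path, dtype=np.uint16, mode="r")
          self.seq_len = seq_len

      def __len__(self):
          return (len(self.tokens) - 1) // self.seq_len

      def __getitem__(self, i):
          chunk = self.tokens[i * self.seq_len : (i + 1) * self.seq_len + 1]
          x = torch.from_numpy(chunk[:-1].astype(np.int64))
          y = torch.from_numpy(chunk[1:].astype(np.int64))
          return x, y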

The choice to concatenate everything into one long string is a bit outdated nowadays, because it trains with attention between different documents that have nothing to do with each other, which can cause a bias or at least suboptimal results. Nowadays people use masked attention ("document masking"), which is so popular it's even supported by FlashAttention: https://github.com/Dao-AILab/flash-attention/issues/654
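A toy version of document masking with plain PyTorch SDPA (shapes and the tiny doc_ids are illustrative):

  import torch
  import torch.nn.functional as F

  doc_ids = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])  # which document each position belongs to
  T = doc_ids.shape[0]

  causal = torch.tril(torch.ones(T, T, dtype=torch.bool))
  same_doc = doc_ids[:, None] == doc_ids[None, :]
  mask = causal & same_doc  # True = "may attend": earlier tokens of the *same* document only

  q = k = v = torch.randn(1, 4, T, 16)  # (batch, heads, seq, head_dim)
  out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)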

(*) Of course, the data is dirty enough that there _will_ be some duplicated stuff here or there, but the same is true for a random train/validation split. Also, such a small model would have very little risk of memorizing, even if some data were replicated.

13 hours agospi

> Calling it "training LLM" is a bit misleading. This is a small GPT-2-sized model (~160M params), while the "L" in "LLM" stands for large...

I've always felt the natural way of referring to smaller LLMs would be Medium Language Models and Small Language Models, but I guess MLM is an inauspicious acronym.

12 hours agoBoxOfRain

It's also already used for language modelling:

MLM is masked language modelling, another phrase for training models on the cloze task. It's the most common way to train encoder-only models.

CLM (causal language modelling) is the other common task where you autoregressively predict the next token given the previous ones. It's the most common way to train decoder-only models.
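On a toy sequence the two objectives look like this (a sketch; -100 is the usual "ignore" label for the loss):

  import torch

  tokens = torch.tensor([11, 42, 7, 99, 3])

  # CLM (decoder-style): predict each token from the ones before it.
  clm_inputs, clm_targets = tokens[:-1], tokens[1:]

  # MLM (encoder-style, cloze task): hide some positions, predict only those.
  MASK_ID = 0                                  # illustrative mask-token id
  mlm_inputs = tokens.clone()
  mlm_inputs[[1, 3]] = MASK_ID
  mlm_targets = torch.full_like(tokens, -100)  # loss ignores non-masked positions
  mlm_targets[[1, 3]] = tokens[[1, 3]]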

12 hours agojszymborski

cool, i was looking for something like this to try on my own puny hw - thanks!

14 hours agolepicz

I think this is a very valuable exercise if you try to understand how LLMs work and if you have the time.

15 hours agoDeathArrow

Sadly, to go beyond an exercise, money is really what you need if you actually want capable LLMs now, not time.

Nowadays training very powerful LLMs is easy because all the tooling, source code, training datasets, and teaching agents are available.

Getting access to tens of millions of USD or more is not easy, and for big players this is just a drop in the ocean.

15 hours agorvnx

You seem to be talking about a production-grade model rather than building an LLM as an exercise? Or if not, why do you disagree with the article's example of building a small LLM for $100?

15 hours agocontrast

I think I should have replied as a totally separate comment. This is my mistake.

It is nice that the author shared the results of his exercise / experiment. I just got sad, as I was reminded (when the $100 was mentioned) that this whole game is 90%+ about money and hardware rather than skills.

That being said I really like the initiative of the author.

15 hours agorvnx

I understand the emotional aspect of feeling like it’s out of reach for you.

Thing is, if you focus on your own skill development and apply it at even a small scale, very few people do that. Then you go for a job and guess what, the company has resources you can leverage. Then you do that, and ultimately you could be in a position to have the credibility to raise your own capital.

Play the long game and do what you can do now.

13 hours agojbs789

it's skills first and then money and hardware for scale

A more skilled person that understands all the underlying steps will always be more efficient in scaling up due to knowing where to allocate more.

basically... you always need the skills and the money is the fine tuning.

15 hours agomeehai

That is true for many kinds of software where you need a large amount of resources. No matter how skilled I am, I cannot build Facebook, Google, or Photoshop alone. But a tiny version of it just to learn? Why not!

15 hours agoDeathArrow

You could 100% build Facebook. You don’t need any hardcore hardware before you have many users.

14 hours agovictorbjorklund

Totally. While the LLMs today are amazing, it is a bit sad that you can’t build SOTA models on your own (vs. a few years ago, when someone with the skills and access to a dataset could build a state-of-the-art model).

14 hours agovictorbjorklund

In the grand scheme of things, we've only had about a quarter century in which, across computer science as a whole, you needed a *very* specific kind of problem for prosumer hardware not to be adequate.

It's kind of amazing we got that at all for a while.

13 hours agoChabsff

That is, if you discard the early days of gigantic, expensive computers. I guess it's come full circle, after a fashion.

2 hours agodjmips

You can train an LLM in the browser; see this demonstration:

https://taonexus.com/mini-transformer-in-js.html

It's a very simple neural network with two attention heads that runs right in the browser in pure JavaScript; you can view source on this implementation.

Even after training for a hundred epochs it really doesn't work very well (you can test it in the Inference tab after training it), but it doesn't use any libraries, so you can see the math itself in action in the source code.

13 hours agologicallee

Off-topic question, since I'm not a regular here, if it's OK:

Is anyone here actually using the $200-a-month ChatGPT subscription, or the $150-a-month Google one?

Is it worth it for code generation? Or should I spend my money on a couple of GPUs and go local?

13 hours agochiengineer

To answer the last question: What kind of programming do you do? You are not going to be able to run a model competitive with the SOTA yet; use the cloud. Since you have the budget I'd suggest getting a $20 subscription of each (Claude, Gemini, ChatGPT) so you can lean on their respective strengths.

13 hours agoesafak

I got a free month of the Premium tier with Google[1], YMMV. Been pleasantly surprised about Gemini 3 Pro. Got ChatGPT Business at work to compare it to.

That said, Google's VSCode integration was terrible, kept logging me out and just didn't work well.

[1]: https://one.google.com/about/plans

10 hours agomagicalhippo

I used the $200/mo OpenAI subscription for a while, but cancelled when Gemini 3 came out. It was useful for the deep research credits, until the web search GPT got sufficiently good on its own.

13 hours agoTaek

thanks for sharing

10 hours agopixigenie

Now this is cool. And it can be used for evil AI.