Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params)

Writeup (includes good/bad sample generations): https://www.linum.ai/field-notes/launch-linum-v2

We're Sahil and Manu, two brothers who spent the last 2 years training text-to-video models from scratch. Today we're releasing them under Apache 2.0.

These are 2B param models capable of generating 2-5 seconds of footage at either 360p or 720p. In terms of model size, the closest comparison is Alibaba's Wan 2.1 1.3B. From our testing, we get significantly better motion quality and aesthetics.

We're not claiming to have reached the frontier. For us, this is a stepping stone towards SOTA - proof we can train these models end-to-end ourselves.

Why train a model from scratch?

We shipped our first model in January 2024 (pre-Sora) as a 180p, 1-second GIF bot, bootstrapped off Stable Diffusion XL. Image VAEs don't understand temporal coherence, and without the original training data, you can't smoothly transition between image and video distributions. At some point you're better off starting over.

For v2, we use T5 for text encoding, Wan 2.1 VAE for compression, and a DiT-variant backbone trained with flow matching. We built our own temporal VAE but Wan's was smaller with equivalent performance, so we used it to save on embedding costs. (We'll open-source our VAE shortly.)
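
For readers who haven't seen flow matching before, here's a minimal sketch of what a single training step under that objective looks like. The dit, vae, and t5 objects (and the shapes in the comments) are illustrative placeholders, not our actual training code:

    import torch

    def flow_matching_step(dit, vae, t5, video, prompt):
        # Encode the prompt and compress the video into the VAE latent space
        text_emb = t5(prompt)                       # placeholder: (B, T_text, D)
        with torch.no_grad():
            x1 = vae.encode(video)                  # clean latents, (B, C, T, H, W)

        # Sample noise and a random time t in [0, 1]
        x0 = torch.randn_like(x1)
        t = torch.rand(x1.shape[0], device=x1.device)
        t_ = t.view(-1, 1, 1, 1, 1)

        # Linear path between noise and data; the target is its velocity
        xt = (1 - t_) * x0 + t_ * x1
        target_velocity = x1 - x0

        # The DiT predicts the velocity; regress it with MSE
        pred = dit(xt, t, text_emb)
        return torch.nn.functional.mse_loss(pred, target_velocity)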

The bulk of development time went into building curation pipelines that actually work (e.g., hand-labeling aesthetic properties and fine-tuning VLMs to filter at scale).
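
To make the filtering idea concrete, here's a rough sketch of scoring frames and keeping only the ones above a quality bar. It uses an off-the-shelf CLIP model as a stand-in for the fine-tuned VLMs mentioned above, with placeholder prompts and an arbitrary threshold:

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    labels = ["a beautiful, well-composed shot", "a blurry, low-quality frame"]

    def keep_frame(frame: Image.Image, threshold: float = 0.6) -> bool:
        # Score the frame against positive/negative aesthetic prompts
        inputs = processor(text=labels, images=frame,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            probs = model(**inputs).logits_per_image.softmax(dim=-1)
        # Keep the frame only if the "good" label wins by a margin
        return probs[0, 0].item() >= threshold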

What works: Cartoon/animated styles, food and nature scenes, simple character motion. What doesn't: Complex physics, fast motion (e.g., gymnastics, dancing), consistent text.

Why build this when Veo/Sora exist? Products are extensions of the underlying model's capabilities. If users want a feature the model doesn't support (character consistency, camera controls, editing, style mapping, etc.), you're stuck. To build the product we want, we need to update the model itself. That means owning the development process. It's a bet that will take time (and a lot of GPU compute) to pay off, but we think it's the right one.

What's next?

- Post-training for physics/deformations

- Distillation for speed

- Audio capabilities

- Model scaling

We kept a “lab notebook” of all our experiments in Notion. Happy to answer questions about building a model from 0 → 1. Comments and feedback welcome!

Rad! huggingface link gives 404 on my side though.

streamer45, 4 hours ago

Oh damn! Thanks for catching that -- going to ping the HF folks to see what they can do to fix the collection link.

In the meantime, here are the individual links to the models:

https://huggingface.co/Linum-AI/linum-v2-720p

https://huggingface.co/Linum-AI/linum-v2-360p

schopra909, 4 hours ago

Looks like 20GB VRAM isn't enough for the 360p demo :( need to bump my specs :sweat_smile:

streamer45, 3 hours ago

Should be fixed now! Thanks again for the heads up

schopra909, 4 hours ago

All good, cheers!

streamer45, 4 hours ago

Per the VRAM comment, you may be able to get it running locally with two tweaks (rough sketch below):

https://github.com/Linum-AI/linum-v2/blob/298b1bb9186b5b9ff6...

1) Free the T5 text encoder as soon as the prompt is encoded, so you reclaim GPU memory

2) Manual layer offloading: move layers off the GPU once they're done being used, to free up space for the remaining layers + activations
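
Roughly, assuming a standard PyTorch setup with a separate text encoder and a list of DiT blocks (the names below are placeholders, not the actual linum-v2 API), those two tweaks look like:

    import torch

    def encode_prompt_then_free(text_encoder, tokenizer, prompt, device="cuda"):
        # Tweak 1: run the T5 encoder once, then evict it from VRAM
        text_encoder.to(device)
        tokens = tokenizer(prompt, return_tensors="pt").to(device)
        with torch.no_grad():
            text_emb = text_encoder(**tokens).last_hidden_state
        text_encoder.to("cpu")
        torch.cuda.empty_cache()
        return text_emb

    def add_layer_offload_hooks(transformer_blocks, device="cuda"):
        # Tweak 2: keep each DiT block on CPU, pull it onto the GPU just
        # before its forward pass, and push it back off right after
        def load(module, args):
            module.to(device)

        def unload(module, args, output):
            module.to("cpu")
            torch.cuda.empty_cache()

        for block in transformer_blocks:
            block.to("cpu")
            block.register_forward_pre_hook(load)
            block.register_forward_hook(unload)

If the pipeline happens to be Diffusers-compatible, enable_model_cpu_offload() gives you a similar trade-off with less code.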