
SANA-WM, a 2.6B open-source world model for 1-minute 720p video

I struggle with these world models from the perspective of video games (so this post is a particular perspective).

I'm not a game developer myself, but some of my favorite games carry a deep sense of intentionality. For instance, there is typically not a single item misplaced in a FromSoftware game (or, for instance, Lies of P -- more recently). Almost every object is placed intentionally.

Games which lack this intentionality often feel dead in contrast. You run into experiences which break immersion, or pull you out of the experience that the developer is trying to convey to you.

It's difficult for me to imagine world models getting to a place where this sort of intentionality is captured. The best frontier LLMs fail to do this in writing (all the time), and even in code, and the surface of experiences for those mediums often feels "smaller" than the user-interaction profile of a video game.

It's not clear how these world models could be used modularly by humans hoping to develop intentional experiences. I don't know much about their usage (LLMs are somewhat modular: they can produce text, humans can work on it, other LLMs can work on it). Is the same true for the video output here?

All this to say, I'm impressed with these world models, but similar to LLMs with writing, it's not really clear what it is that we are building towards? We are able to create less satisfying, less humane experiences faster? Perhaps the most immediate benefit is the ability for robotic systems to simulate actions (by conjuring a world, and imagining the implications).

In general, I have the feeling that we are hurtling towards a world with less intentionality behind all the things we experience. Everything becomes impersonal, more noisy, etc.

7 hours agomccoyb

>We are able to create less satisfying, less humane experiences faster?

Yes, exactly. Inundate the world with superficially plausible yet hollow content, including any desired themes. People who aren't very discerning won't complain; the others will be outmatched and find that 99/100 pieces are all noise and they will need to spend increasing amounts of time trying to find the 1, if they can.

I think there are some good parallels with Amazon: the broken sorting and manipulated unit pricing, coupled with the avalanche of cheap clones pushes users to give up and just buy one of the top listed products (a featured listing/Amazon-clone). If you do a web search for various products and go to images, Amazon product links often take up 50-90% of the results.

5 hours agoduskdozer

I think you can also create satisfying, humane experiences faster, just not as fast.

But the dopamine descent requires strong discipline to stop, and most don't.

4 hours agoAperocky

[dead]

3 hours agoCassell

There are two things here. Firstly, without AI, you can have heavily designed environments or procedurally generated ones, and people manage to make both work. Both can also fail for reasons specific to the approach: careless procedural generation can produce poor variety or nonsensical outputs, while careless hand placement can violate the rules a game has established, creating an incoherent experience.

Making a world internally consistent through explicit placement gets harder as scale increases. When internal consistency is a factor impacting quality, there is a scale at which generated content eventually becomes the higher-quality solution.

Secondly, when generating content with AI, the same rules around carelessness apply. There are certainly generative AI tools out there that offer few options for composing what you want, but that is not a necessary property of AI. Some of it is that people want rudimentary interfaces; some of it is that the generators are new enough that the control mechanisms are limited, because the focus is on doing something at all before doing it with fine control. In some ways the problem is that things are new enough that it can be hard to describe what desirable controllability would even look like; shipping the generator to see what people would like it to be able to do is, I think, a reasonable path to follow before building the controls people want. Part of it is also that there _are_ tools that give a high level of control over what is generated, but far fewer people get to see them. There are ways to control styles, object placement, camera motions, scene compositions, etc. The more specialised you get, the smaller the subset of people who need that specific control.

I think AI can make things possible for people who could not have done so without them, but it's still going to take care to make something special.

6 hours agoLerc

I suspect these models will be like old Gutenberg's printing press. A rapid rise in the amount of content; most of it not that great. However the sheer volume will result in even more high quality content actually being created in aggregate.

Put another way, the average game quality will go down, but the actual rate of "Great" games will go up.

2 hours agostephen_cagle

Have to disagree here as I don't subscribe to your analogy. GenAI can be considered a tool, yes, but it's less a "circular saw for workshops"-tool, and more a "microwave for kitchens"-tool... and I doubt microwaves led to higher quality content in aggregate.

an hour agowhilenot-dev

But these aren't great games. They are not even good. They are just tech demos with nothing of interest to gamers.

Why do I need more slopware? I have an entire Steam library of excellent games that deserve to be played first.

2 hours agomplewis

> some of my favorite games carry a deep sense of intentionality. For instance, there is typically not a single item misplaced in a FromSoftware game (or, for instance, Lies of P -- more recently). Almost every object is placed intentionally.

That's a pretty specific and one-sided example. There are tons of good games that don't rely on elaborate item placement (e.g. many Bethesda games are great because most items are useless decorations; they broke that rule in recent games, giving purpose to clutter, and it made them a lot worse). There are tons of good games not relying on this intentionality at all; they're either literally random cool ideas thrown at the wall, or even procedurally generated.

5 hours agoorbital-decay

That’s a fair critique of my comments! The space of fun games is large and diverse.

5 hours agomccoyb

Consider instead the possibility that this may be used as a rendering layer for the data backing it. Instead of shipping three-dimensional models and GBs of textures, you could ship a couple of photos or a blueprint file or <any other modalities/whatever> and a detailed text description, for significantly less storage. Now imagine the world model can adapt the styling of this world on the fly, where every person's experience could be unique in terms of visuals, but consistent in terms of gameplay.

It's been my belief for several years that this is how the future of games will be constructed. Data in the background, a game engine for rules application/physics execution/orchestration/maybe low-poly rendering, and an AI world model taking low-resolution input and generating customized visuals/effects/textures/everything, even camera location, but still constrained by concrete rules in the game engine.

One day it might all be handled by AI, but the above seems much more realistic and achievable than expecting AI to do all of these things, at once, correctly, every frame.
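A toy sketch of that split, with every name invented for illustration: the engine owns the authoritative state and rules, and the "world model" is only a renderer conditioned on that state plus a per-player style.

```python
class Engine:
    """Deterministic rules: the world model never decides gameplay."""

    def __init__(self):
        self.player = [0, 0]

    def step(self, action: str) -> dict:
        dx, dy = {"n": (0, 1), "s": (0, -1), "e": (1, 0), "w": (-1, 0)}.get(action, (0, 0))
        self.player[0] += dx
        self.player[1] += dy
        # Low-resolution state handed to the renderer each frame.
        return {"player": tuple(self.player)}


def render(state: dict, style: str) -> str:
    """Stand-in for a neural renderer conditioned on state + player style."""
    return f"[{style}] frame with player at {state['player']}"


engine = Engine()
frame = render(engine.step("e"), style="watercolor")
print(frame)  # [watercolor] frame with player at (1, 0)
```

Two players with different `style` values would see different visuals of the same authoritative world, which is the "unique visuals, consistent gameplay" idea above.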

2 hours agotherealpygon

One thing is robotics, both for training robotics AI and for letting robots test hypothetical actions before committing to them. I don't think world models are stable enough for either yet.

The other is creating multi-modal models with a better understanding of our world. LLMs often fail at incredibly basic spatial reasoning ("someone left a package in front of your apartment, describe going there", or "should I drive to the car wash or go there", etc.). World models excel at these kinds of things (in theory). They develop a great understanding of physical spaces, object interactions, etc. They can simulate fluids, rigid-body physics, and so on. You "just" have to get really good at making world models, then somehow marry them with an LLM in a way that ensures the LLM can benefit from the world model's training data. Nobody has really managed to do that yet.

So lots of hopes for the future. Until then they get commercialized as video models, or ways to experience your favorite forest, or to have a really bad video game ... whatever can be sold on a short time horizon to finance the actual goals

5 hours agowongarsu

What does intentionality mean in the context of a world-model-generated game world? I guess true human intention would have been thrown out the window already at that point.

One aspect of intentionality is that there'll be a narrative payoff when you investigate something you find interesting. In video games, the world is mostly pre-designed, so the designer has to predict what you'll be interested in for the most part. (In pen-and-paper RPGs this is usually done better, because the human dungeon master/DM can plan ahead, but also improvise a payoff or modify the plot between sessions.) In a world-model-generated game world, I guess the model would have to be pretty "smart" to set up and execute those payoffs.

An advantage that the world model would have (and shares with a good human DM) is that everything is an interactable, and the players get to pick what they think is interesting. If everything is improv with a loose skeleton around it, you don’t have to predict as far out. I think world model generated games, if they even become a thing, will be quite a bit worse than conventionally designed ones for a long time (improv can be quite shallow!) but have a lot of potential if they work out.

FromSoft is an interesting example. They make the game more believable by having extremely missable quests, just, most of them don’t block progress through the game, and you usually stumble across enough side quests naturally (although IMO the density was too low in Elden Ring, their system showed a bit of weakness in the less-guided context). The plot is pretty vague, but the vibes tell enough of a story that you don’t really mind. It’s sort of improv/pen-and-paper but the player’s imagination is doing the job of the DM.

6 hours agobee_rider

By and large I agree, but it doesn’t need to be either/or.

Many of the most popular games in the past decade are procedurally generated and have nothing “intentionally” placed (apart from tuning/tweaking the balance of the seeding algorithms).

6 hours agorobot_jesus

> have nothing “intentionally” placed (apart from tuning/tweaking the balance of the seeding algorithms).

I think you underestimate the intentionality that goes into developing procedural generation. Something like Dwarf Fortress isn't "place objects randomly"; it is layers upon layers of carefully crafted systems that build upon each other to produce specific patterns of outcomes.
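A minimal illustration of that layering (all names invented; Dwarf Fortress obviously does vastly more): each pass is a simple rule, but later passes are constrained by earlier ones, and the constraints are where the crafted patterns come from.

```python
import random

def generate_map(seed: int, width: int = 8):
    rng = random.Random(seed)
    # Layer 1: terrain heights.
    heights = [rng.randint(0, 3) for _ in range(width)]
    # Layer 2: water fills only the lowest terrain.
    water = [h == 0 for h in heights]
    # Layer 3: trees grow only on dry, mid-height ground.
    trees = [(not w) and 1 <= h <= 2 and rng.random() < 0.5
             for h, w in zip(heights, water)]
    return heights, water, trees

heights, water, trees = generate_map(seed=7)
# Invariant imposed by the layering, not by luck: no tree stands in water.
assert not any(w and t for w, t in zip(water, trees))
```

Nothing here is "placed randomly" in the naive sense; randomness only chooses among outcomes the layered rules permit.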

6 hours agoswiftcoder

By calling it out in my comment, I was trying to not underestimate it.

I guess what I'm saying is: Couldn't a world model with targeted training and thoughtfully tuned system prompts be directionally similar to the layered systems to produce specific patterns of outcome?

5 hours agorobot_jesus

Right, and I wondered how these world models might be used in a careful way (just as agents can be used carefully to accelerate work).

Are video game developers using these systems in their workflows? Would love to learn more!

6 hours agomccoyb

Which game would that be apart from Minecraft?

6 hours agodanielbln

Dwarf fortress, no man's sky, elite dangerous, ...

The combination of "many", "most popular", and "nothing" is overstating it by a wide margin, but, for example, the majority of the vegetation in games as far back as Oblivion was procedurally placed.

5 hours agofc417fc802

Battlefield 2 had procedural trees and terrain the year before. I think it more or less came with open world maps?

3 hours agorightbyte

No Man's Sky, Terraria, Dead Cells, to name a few.

5 hours agorobot_jesus

Dead Cells just arranges a few pre-designed rooms together for each stage, doesn't it?

4 hours agodebugnik

I think your comment can be split into two questions: 1. Games derive some appeal from their intentionality and hand-crafted nature. Will these less-intentional experiences be as appealing? 2. Can these less-intentional tools still be used to create intentional designs?

On that first point I think it's important to remember that the lineage of video games comes from board & card games and sports. There's always been an ability to inject more complexity and less-intentionality into those things. Sports in some ways are like a simplified and altered role-play of war battles, and more realistic war roleplaying does exist but it has less appeal.

As humans we like solving things and noticing patterns and the intentionality of games taps into that appeal.

On the latter point I do think these world models will eventually be used to meaningfully contribute to building games. I think people will have to find new ways of designing that balance intentionality against the freeform nature of these simulations, but it may take a while to have the capability to do so.

4 hours agokettlecorn

I'm a strong believer that AI just isn't (and maybe never will be?) a strong judge and executor of "quality". Quality is a loaded term, though. Are there any objectively good game designs? Even if there are, maybe only one game in ten that uses the same 'blueprint' ever reaches critical mass (popularity).

5 hours agojdironman

Video games are not the initial motivation at all.

These world models are key for robotics and for coherence in video generation.

Give a world model images of a factory, and the robot can now simulate tasks and pick the best course of action.

Give a world model images/context etc. and it can generate a coherent world for video generation.

What this world model system might be able to do for us in regard to gaming or virtual reality: either simulate 'old' environments like your grandparents' house (Gaussian splatting, but interactive) or potential new ones like a house, a kitchen, a remodeling.

It can also be a very interesting, easy-to-approach VR environment where you can start building your world with your voice. That would be very intentional. After all, world building is not necessarily connected to being able to generate 3D assets. Just because you need to go this route today doesn't mean you have to do it tomorrow.

5 hours agoGlohrischi

Even though I doubt the main purpose of these models is to produce video games, I have the opposite view from you in that I am excited to see these put to work as components of procedural generation in video games. I don't think that is going to negatively impact story driven games that you seem to enjoy any more than the market for open world and simulation games currently does. They are separate concerns and use distinct techniques.

Where you look for an intentionally evoked experience authored by a game designer, I am looking for an unexplored world unfolding before me filled with emergent and unique phenomena that perhaps no one and not even the game designer has seen before.

5 hours agopigpop

I've played multiple AAA(+) games from before AI "was a thing" that have had textures/elements, like bulletin boards or posters, where even on a cursory glance (not zooming in or ADS) you can easily see literally "Lorem Ipsum" instead of the lore or story that would have helped build atmosphere.

LLMs had nothing to do with this

2 hours agojareklupinski
[deleted]
6 hours ago

> Games which lack this intentionality often feel dead in contrast

Like for instance... Dwarf Fortress? Minecraft?

Generative AI is just another way to do procedural generation, and not necessarily a better one. You could even argue that procedural generation is a form of generative AI... Either way, there are games where the lack of intentionality is central to the appeal.

3 hours agovintermann

Minecraft itself is a blank slate until the player or modder or whoever puts all that intentionality in.

It's a very dead game on its own. They are still very intentional about adding and changing the tools by which you make your own fun, though.

an hour ago8note

> Like for instance... Dwarf Fortress? Minecraft?

DF for some reason doesn't fit into this category for me. Minecraft feels dead to me, while many other games that utilize procedural generation do not.

3 hours ago0x457

If we use world models to train AI systems, are we not essentially forcing something to live so it can gather data for us?

Yes, we haven't gone that far with creating consciousness yet, but there is going to be a lot of money around consumer neural-computing devices in the coming decades, so that will speed up learning what sense data you need for consciousness.

6 hours agounfitted2545

> are we not essentially forcing something to live so it can gather data for us?

Wait until you learn about what we do to chickens.

5 hours agoctoth

FromSoftware-quality games are <5% of the market. >50% of the market is abominable slop that very well might benefit from AI writing and design.

for example, I am 100% certain that ANY model could write a better Dragon Age sequel than the rotting corpse of Bioware did, because only humans can despise their audience and their source material. an LLM would dutifully attempt to produce more of the thing rather than 're-imagine' the thing for 'the modern audience'.

6 hours agob65e8bee43c2ed0

So? The parent still wants to know how tools like these could potentially be used in a better way. That most people don't obsess over quality when building/doing things shouldn't mean that no one should.

6 hours agoembedding-shape

> In general, I have the feeling that we are hurtling towards a world with less intentionality behind all the things we experience. Everything becomes impersonal, more noisy, etc.

You’re right - but that world is not the end of the story. The intentionality matters. Human creations matter because they connect us. I don’t know how long it will take, but people will build judgement as to what makes for good use of these tools to make meaningful things and expand our creative horizons in deeply human ways. Mind you, there will always be shallow slop. It’ll just take time for creators to learn how to use these tools to make something that isn’t slop.

4 hours agoteucris

That is interesting. And it's an AI critique I haven't heard before.

Would you consider it possible that the way non-intentionally placed items break the game immersion for you is because they appear in such a way that you think you can interact with them in a certain way, but you can't?

Like if there's an extra door in the house you're trying to get into, but that door doesn't really open, then in your mind that breaks the integrity of the game's systems. If so, I think the LLM response is that there are no more doors that don't open and that the world can be generated as needed.

No computer can handle the complexity of even a small town. But it would be possible, at least in the future, to generate the part of the world you interact with, which would heighten the immersion.

2 hours agotomjen3

It's essentially the same "reversion to the mean" or "not x, but y".

The intentionally placed tree serves no particular mechanical job in-game. Instead, it points your eyes to the right place when you walk up the path, and then again when you look back down from above.

When they're saying everything is intentionally placed, they mean everything, whether it looks important or not. It's all directed toward a cohesive core.

an hour ago8note

Model weights coming "soon" == currently vaporware. The weights aren't even open, so how can this be "open-source"?

Everyone is right to be skeptical of this coming from a 2.6B model. Weights or it didn't happen.

5 hours agojubilanti

To be fair, their whole codebase is open-source, which is better than most open-weight models. But I do agree with the sentiment.

https://github.com/NVlabs/Sana

an hour agooersted

Clearly it's not open in that case. I wonder if we can get the title changed?

4 hours agofc417fc802
[deleted]
3 hours ago

They all look like video games. I guess Unreal Engine is used to create synthetic data for training.

7 hours agomejutoco
[deleted]
6 hours ago

Running this on a GPU is quite impressive. I see some people expressing discontent and worries, but we are early and this is the worst it's going to be. I am very excited to see the impact this will have on games.

3 hours agoagentifysh

Silly question: what's "world" about what's being generated here? Is there an actual abstract representation of physical space (like, e.g., a game-engine-style scene graph), or does it just mean "this video generator is more physically coherent than other video generators"?

4 hours agoalloyed

A world-model is one that predicts the next state of a simulated world given the current state and optionally some action from an agent inhabiting the world. It is quite analogous to a language-model that predicts the next word.

That world-state can be anything, but in the last year or two, the term has taken a narrower meaning: a video generation model that reacts naturally to game-like controls, as if it was simulating a videogame. But there's no additional state behind the video frames.
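A toy sketch of that interface in Python, with every name invented for illustration; a real world model is a learned network predicting pixels or latents, not a hand-coded lookup.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    """Toy world-state: just a 2-D position."""
    x: int
    y: int

def toy_world_model(state: State, action: str) -> State:
    """Hand-coded stand-in for a learned p(next_state | state, action)."""
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves.get(action, (0, 0))
    return State(state.x + dx, state.y + dy)

# Rolling the model forward step by step is the analogue of
# autoregressive next-token generation in a language model.
state = State(0, 0)
for action in ["right", "right", "up"]:
    state = toy_world_model(state, action)
print(state)  # State(x=2, y=1)
```

In the narrower video-model sense described above, `State` would be the generated frames themselves, with no separate state behind them.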

an hour agooersted

World in this context means that these videos are interactive, just like a video game. In the linked examples you can see the keyboard and mouse inputs. The model is trained to maintain about a minute of scene consistency so you can look around and objects out of view will reappear when you look back in that direction.

2 hours agofutureshock

So, where is the download? I can't find it on Github, and on your web page the download button is disabled.

Also, will this run on RTX 4090 with 24GB memory?

Thank you!

8 hours agoFischgericht

Scroll down and there are more videos --- seems like models will be there "soon".

8 hours agomjgil

[dead]

4 hours agoireadmevs

Warning: the videos that autoplay on that page shot my downloads up to 350 Mbps.

4 hours agoresist_futility

I only noticed after more than an hour with the page left open in a tab. Is it really streaming and re-streaming the same videos? There's too much to cache so it keeps re-transferring them indefinitely?

I hope nobody leaves that page open on a metered or capped network connection.

I'm surprised github hasn't suspended the page.

Are AI researchers so used to burning through compute and network resources that they don't stop to think about a webpage that will autoplay and loop multiple HD videos?

4 hours agoharshreality

They don't even notice it happening; it's not a conscious choice not to fix it.

Empathizing with problems you don't face is a hard product/UX and management skill. Facebook famously simulated 2G on Tuesdays ten years ago [1], for example, to get their employees to see the problems their users have. [2]

People don't put effort into noticing (let alone solving) problems they don't face. It is why things like a11y and i18n need regulation like the ADA.

[1] https://engineering.fb.com/2015/10/27/networking-traffic/bui...

[2] While it would be hard to attribute directly, GraphQL and to an extent React were probably influenced by these kinds of things.

2 hours agomanquer

Nearly every website for papers about AI applied to graphics hangs my phone browser, so I'm assuming the answer's yes.

4 hours agodebugnik

My 70 Mbps connection couldn't even buffer the videos, so I gave up trying to watch them. It didn't even look that high quality...

41 minutes agomarc_g

Outputting video of that quality/consistency at 1 minute, for a 2.6B model seems insane?

7 hours agoIncipient

I can’t seem to grasp why everyone says only slop gets produced by AI models (and particularly those world models). Imo it’s shit in -> shit out. Great work can be achieved using those. Slop gets produced by careless users.

21 minutes agomaxignol
[deleted]
4 hours ago

First video with the guy walking the mountain in snow has consistency issues with the cave entrance. Which is "expected" at this model size?!

7 hours agopferdone

My dreams have it too, which is unexpected at that model size!

6 hours agoandai

Most videos seem to have some issues like that; e.g. the book on the table in the library video takes on a different shape every now and then.

The 'Refiner' stage seems to do the opposite, if the examples are representative: in all cases the first-stage images look better than the 'refined' ones. Less clutter, more realistic, less 'cowbell' for those who know the phrase.

7 hours agoLeonard_of_Q

All of the videos have rather glaring consistency issues when direction shifts back to areas previously shown.

5 hours agonotnullorvoid

Remember the first Will Smith spaghetti?

6 hours agoechelon

Yeah, it got ridiculed, and people wrote it off as if that were somehow the limit and nothing was going to change, which seems to be the common premise from which people launch their criticisms of AI.

And those same people forget that it's been 3 years from that awful Will Smith spaghetti video to what we have today, which is the beginning of controllable real-time video, a.k.a. games.

3 hours agoagentifysh

What’s the long term utility of world models?

There’s no doubt they’re technically impressive, but what does one do with it?

7 hours agojoenot443

World models will be how general purpose robots finally work. They are essentially learned simulators of the world. They will replace traditional robotics simulators which are not flexible enough to enable training of general robotics policies. Robot control policies will be trained and evaluated in learned simulators, and the policies themselves will also be world models in order to predict the consequences of their own actions and thus enable planning. Simulated data will scale much better than expensive real-world robot data, and will allow robot policies to reach LLM-level dataset sizes, and subsequently, LLM-level performance.

It is inevitable that learned simulators will replace hand-coded simulators, as it is a straightforward application of the Bitter Lesson: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

By enabling general purpose robotics, world models will be one of the most useful inventions of all time. For examples of what I'm talking about in current research, check:

Dreamer 4: https://danijar.com/project/dreamer4/

DreamDojo: https://arxiv.org/abs/2602.06949

Tesla's world model: https://www.youtube.com/watch?v=LFh9GAzHg1c

Waymo's world model: https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-f...

5 hours agomodeless

The world model is useful for planning. It can "anticipate" consequences of actions. This can be used for a kind of tree search to decide on optimal actions in robotics
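A toy sketch of that kind of search (everything here is hypothetical: a real system would use a learned model and something far smarter than brute-force enumeration):

```python
from itertools import product

def model(state, action):
    """Toy world model: state is a 1-D position, actions nudge it."""
    return state + {"left": -1, "stay": 0, "right": 1}[action]

def reward(state, goal=3):
    """Closer to the goal is better."""
    return -abs(state - goal)

def plan(state, actions=("left", "stay", "right"), horizon=3):
    """Try every action sequence in imagination; return the best first action."""
    best_seq, best_value = None, float("-inf")
    for seq in product(actions, repeat=horizon):
        s, value = state, 0.0
        for a in seq:  # simulate in the model, never touching the real world
            s = model(s, a)
            value += reward(s)
        if value > best_value:
            best_seq, best_value = seq, value
    return best_seq[0]

print(plan(0))  # right
```

The "anticipation" is exactly the inner loop: consequences are evaluated inside the model before any action is committed.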

5 hours agofancyfredbot
[deleted]
5 hours ago

They can be base models for a bunch of things. Turning text-conditioned video generation models into robotics VLAs is a fun exercise.

This one is probably too small to be useful for that, and not diverse enough? But I could be wrong.

7 hours agoACCount37

I believe the idea is to offer simulation of ideas to test out new tasks AND something like dreaming.

6 hours agoiinnPP

It's a step towards something else?

7 hours agowhynotmaybe

Digital twin?

7 hours agobix6

Put them in a robot so that it can navigate the physical world like humans. Self-driving cars.

6 hours agoesafak

Right now there is (AFAIK) no world model product booking any meaningful revenue. So there's a decent chance WMs turn out to have no long-term utility at all.

However, there are a few promising markets, assuming WMs continue to get better and cheaper:

1. Robotics training / evaluation: modern end-to-end (sensors-to-control) robot policies require simulators that are almost indistinguishable from reality. If your sim is distinguishable from reality, the evaluation metrics you get from sim don't mean anything and the policies you train in sim don't work. World models will likely be the highest-fidelity robotics simulators, since WMs are data-driven and get arbitrarily more-realistic given more data/compute. This is why so many robotics companies have WM projects [1] [2] [3] [4].

2. Video frontends for agents: in the same way that today's frontier labs are building realtime voice interfaces [5] which behave like a phone call, realtime video interfaces will behave like a video call. Early forms of this don't feel compelling IMO [6] [7], but once the models can instantly blend between rendering the agent itself, drawing diagrams/visualizations, rendering video, etc. I can see it surpassing pure voice mode.

3. Entertainment: zero-shot world generation (i.e. holodeck, genie 3; paste in an image/video/text prompt and get a world) will be a fun toy but I'm not convinced it has any long-term value. I'm more optimistic about proper narrative experiences where each scene/level is a small, carefully-crafted world (behaving like a normal film scene if you don't touch the controls, and an uncharted/TLoU-style narrative game if you do), such that the sequence of scenes builds up a larger story.

[1] https://wayve.ai/thinking/gaia-3/

[2] https://xcancel.com/Tesla/status/1982255564974641628 / https://xcancel.com/ProfKuang/status/1996642397204394179

[3] https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-f...

[4] https://www.1x.tech/discover/world-model-self-learning

[5] https://thinkingmachines.ai/blog/interaction-models/

[6] https://runwayml.com/news/introducing-runway-characters

[7] https://blog.character.ai/character-ais-real-time-video-brea...

4 hours agoollin

Games. Build campaigns in hours instead of months. Make it possible for users to create their own campaigns, move the action to different game worlds - 'gimme Mario Kart in the ${favourite_game} world', etc.

6 hours agoLeonard_of_Q

Yeah, but is this really that great? Are these models going to remember the town you wandered through on your session yesterday and want to return to?

Imagine playing Red Dead Redemption 2: you attempt to ride your horse from Saint Denis to Valentine, and Valentine no longer exists, or is a completely different town located half a mile off from where it was originally.

I just don't see how this would work...

6 hours agoAshleyGrant

If I had to use the models as they exist right now I'd use them in a procedural Myst-like where I incorporate the temporal inconsistency into the setting. The player's actions and state would affect the prompts used for conditioning the video generation. It would probably be weird and buggy but could be fun.

You could also use these models to generate assets for a game during development whether that's simple cutscenes or assets produced through gaussian splatting or some other process.

If these models and others can be run cost effectively on a cloud service or even locally at some point then you could do some interesting things by combining them with 3D mesh generation, img2img, vid2vid, etc. just think about even simple games like Papers Please and the whole genre it spawned that uses short episodes where you have to make a guess based on what you see, there's a lot of potential for creating new mechanics around generative imagery.

5 hours agopigpop

Remember code generation ? 6 years ago you could barely get it to generate anything complex.

Remember video generation? 3 years ago the will smith spaghetti video came out.

You see how this trend will only continue? Game development is going to get really weird.

3 hours agoagentifysh

Same prompt, same seed, and yes, you can ensure you get the same output. But also imagine using it as a game designer and recording the output. Imagine level editors where you prompt to fill in details, walk through it, decide which parts you don't like, and prompt for a replacement of those parts.

4 hours agovidarh

It's not hard to imagine a system that combines deterministic state tracking with diffusion generated scenes.

5 hours agohackinthebochs

Yes, a lot of models don't state this explicitly, but they can be made deterministic. Not the sampling process in general, but the same prompt with the same generation seed will always produce the same output.
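A toy illustration of that seed point (a stand-in, not a real sampler): diffusion samplers draw their initial noise from an RNG, so fixing the seed alongside the prompt pins down the whole trajectory — in practice this is what passing a seeded generator (e.g. `torch.Generator().manual_seed(42)`) to a pipeline does.

```python
import hashlib
import random

def fake_sample(prompt: str, seed: int, steps: int = 4) -> list[float]:
    # Stand-in for a video model's sampler: the only source of randomness
    # is an RNG keyed on (prompt, seed), so outputs are reproducible.
    key = hashlib.sha256(f"{prompt}|{seed}".encode()).digest()
    rng = random.Random(key)
    return [round(rng.random(), 6) for _ in range(steps)]

a = fake_sample("town of Valentine at dusk", seed=42)
b = fake_sample("town of Valentine at dusk", seed=42)
c = fake_sample("town of Valentine at dusk", seed=7)
assert a == b  # same prompt + seed -> identical output
assert a != c  # different seed -> different output
```

So a level editor built on top of this could store only (prompt, seed) pairs and regenerate identical scenes on demand.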

6 hours agodyauspitr

The trouble is the lack of training data available to these models compared to ones like Seedance and Kling, which seem to be tapping into their unlimited video inventories. Many models, like LTX, are technically good, but when it comes to slightly different camera movements or the subject interacting with objects, they struggle. For a recent example, we had to use sample videos generated by closed-source models and then use those for the final video.

8 hours agobobkb

I tend to think of these NV Labs models as architectural demos and ‘free razor blades’ — they’re more intended to inform internal R&D, get customers something that lets them do what they want quickly, and enhance the state of the art.

In this case, what looks interesting is the one minute coherence and the massive speedup - they claim 36x over open models with similar capabilities. You can tell they aren’t aiming for state of the art visuals — looks very SD 1.5 in terms of the output quality.

8 hours agovessenes

Agreed on the marketing angle. But beyond that, what seems to matter is access to data - look at Seedance, the various Kling models, etc., which are far ahead of the others.

3 hours agobobkb

It’s hard to believe that Google doesn’t have an amazing model in-house with all that YouTube content available. But agreed, the Chinese models have seemed best in the last year or so, and agreed that an open policy on training data definitely makes for better quality.

35 minutes agovessenes

I tried watching the cave video and I was immediately overcome with nausea. I've never experienced anything like that before in my life. Wild.

I can't say I'm looking forward to an AI video future.

4 hours agoPyWoody

When I installed very high quality (CRI 98, R9 94, virtually flicker free) light bulbs to one of my apartment rooms I had headaches and felt occasional confusion for about a week while being in that room, so I had to slowly increase the amount of time the light bulbs were turned on for. To my understanding my brain was very used to the way objects and lighting looked in that particular room so it needed to rewire some knowledge given that I've spent many thousands of hours in that room with previous light bulbs.

I'm curious if a younger me would have adapted much faster.

3 hours agorpozarickij

What would it take to get this on VR? Anyone looking into it?

3 hours agobilsbie

It is plausible; the model would just need to be trained on a lot of stereoscopic data.

2 hours agofutureshock

All video models are terrible at consistency. Even closed source ones.

Seedance 2.0 and Kling 3 are regarded as the best closed-source video models we have. I subscribe to a few AI video subreddits; the consensus atm is that they are good for anything but long-form videos with humans.

No surprise that we're very good at spotting even the most subtle differences when looking at other people.

6 hours agoCommanderData

Relax, it's only been 3 years; it's going to get a lot better, not worse, from here on.

3 hours agoagentifysh

what subreddits do _you_ subscribe to?

I've been doing some content with people at https://industrialallusions.com

5 hours agoadenta

https://www.reddit.com/r/KlingAI_Videos/

https://www.reddit.com/r/HiggsfieldAI/

Higgsfield has multiple models available; people usually use Kling 2.5 & 3. There are a few good examples posted right now; you'll notice the subtle differences.

I have tried to generate things myself, and it's extremely hard to keep more than 7-8 clips consistent; eventually you'll accept a compromise. I think that's why there isn't any long-form content being done yet. Getting good results is sometimes just chance, regardless of how much reference data you have.

5 hours agoCommanderData

It ain’t open source until it’s released. It’s baitware.

6 hours agotrunkiedozer

Has anyone actually tested this for robotics simulation? Curious how it handles edge cases in physical environments.

5 hours agoagus4nas

Judging by the examples it wouldn't be useful for that, the environments show little physical consistency.

5 hours agonotnullorvoid

i survived flash, jquery, svn, soap, xml, microservices and crypto now some norwegian teenager is generating netflix-quality worlds during lunch break from a jpeg of a forest

EDIT> dont ask how I came up with this quote

5 hours agoionwake

Really great for visuals during a DJ set at a festival or on YouTube

5 hours agoyieldcrv

I see this and think about Suno's playbook and where this could go... survival of the fittest rules the boards, where you have user-generated dynamic video games, not just static ones where the design is fixed. The design will be adaptive, based on several prompt input boxes for various things and ad hoc changes while playing, higher-tier design boards, and the like. This is all going toward user-generated commercial / vanity / personal enjoyment.

6 hours agosebringj

Incredible results

5 hours agoagus4nas

Nice, now instead of just reading slop you'll soon be able to experience slop Worlds, in 3D! /s

It's honestly impressive, on the surface. The visuals are gorgeous... but it's still empty. What makes a "World" a world is precisely its coherence. It's not about how it looks but rather how it "works". The plants in an ecosystem are a certain way because of the available resources, all the way down to forces like gravity. It doesn't just "look" like that. To echo Konrad Lorenz, a fish doesn't just swim in the water; rather, the fish IS an efficient representation of the water it lives within. In such "worlds" there is nothing happening. There is minimal superficial coherence, no logic, nothing.

The ultimate liminal spaces.

5 hours agoutopiah

ai is exciting because it shows us what really matters...

3 hours agomacwhisperer

[flagged]

8 hours agojaspanglia

Bot comment.

6 hours agocarlos-menezes

Given that is where everything is going, why not just get there faster by open-sourcing Seedance 2.0, Happyhorse, Veo 3 and all the others.

8 hours agorvz

[flagged]

9 hours agomjgil

Who wrote your comment?

7 hours agopferdone

At this point we should cut our losses and just give Claude an official HN account.

6 hours agoandai

Stop posting slop.

7 hours agosemiquaver

less security issues with slop

7 hours agomjgil

ugly slop