Apple Releases Open Weights Video Model

Apple has a video understanding model too. I can't wait to find out what accessibility stuff they'll do with the models. As a blind person, AI has changed my life.

10 hours ago · devinprater

Hi Devin and other folks, I'm looking for software developers who are blind or visually impaired, as there is a tool I'm building that I think might be of interest to them (it's free and open source). If you or anyone you know is interested in trying it, please get in touch through my email.

a minute ago · robbomacrae

> As a blind person, AI has changed my life.

Something one doesn't see in news headlines. Happy to see this comment.

10 hours ago · densh

Like many others, I too would very much like to hear about this.

I taught our entry-level calculus course a few years ago and had two blind students in the class. The technology available for supporting them was abysmal then -- the toolchain for typesetting math for screen readers was unreliable (and anyway very slow), for braille was non-existent, and translating figures into braille involved sending material out to a vendor and waiting weeks. I would love to hear how we may better support our students in subjects like math, chemistry, physics, etc, that depend so much on visualization.

3 hours ago · kkylin

+1 and I would be curious to read and learn more about it.

9 hours ago · tippa123

A blind comedian / TV personality in the UK has just done a TV show on this subject - I haven't seen it, but here's a recent article about it: https://www.theguardian.com/tv-and-radio/2025/nov/23/chris-m...

7 hours ago · swores

Chris McCausland is great. A fair bit of his material _does_ reference his visual impairment, but it's genuinely witty and sharp, and it never feels like he's leaning on it for laughs/relying on sympathy.

He did a great skit with Lee Mack at the BAFTAs 2022[0], riffing on the autocue the speakers use for announcing awards.

[0]: https://www.youtube.com/watch?v=CLhy0Zq95HU

3 hours ago · lukecarr

Hilariously, he beat the other teams in the “Say What You See” round (yes, really) of last year’s Big Fat Quiz. No AI involved.

https://youtu.be/i5NvNXz2TSE?t=4732

5 hours ago · latexr

Haha that's great!

I'm not a fan of his (nothing against him, just not my cup of tea when it comes to comedy, and I've mostly not been interested in other stuff he's done), but the few times I have seen him as a guest on shows it's been clear that he's a generally clever person.

5 hours ago · swores

If you want to see more on this topic, check out (google) the podcast I co-host called Accessibility and Gen. AI.

9 hours ago · joedevon

Honestly, that’s such a great example of how to share what you do on the interwebs. Right timing, helpful and on topic. Since I’ve listened to several episodes of the podcast, I can confirm it definitely delivers.

2 hours ago · tippa123

Thanks for the recommendation, just downloaded a few episodes!

4 hours ago · moss_dog

Same! @devinprater, have you written about your experiences? You have an eager audience...

3 hours ago · chrisweekly

`Something one doesn't see` - no pun intended

an hour ago · Rover222

What other accessibility features do you wish existed in video AI models? Real-time vs post-processing?

9 hours ago · badmonster

> Something one doesn't see in news headlines.

I hope this wasn't a terrible pun

9 hours ago · fguerraz

No pun intended but it's indeed an unfortunate choice of words on my part.

4 hours ago · densh

My blind friends have gotten used to it and no longer hear it as a literal “see”. They would not feel offended by your usage.

4 hours ago · 47282847

I'm only commenting because I absolutely love this thread. It's an insight into something I think most of us are quite (I'm going to say it...) blind to in our normal experiences with daily life, and I find immense value in removing my ignorance about such things.

30 minutes ago · basilgohar

One cool feature they added for deaf parents a few years ago was a notification when the phone detects a baby crying.

5 hours ago · GeekyBear

My wife is deaf, and we had one kid in 2023 and twins in 2025. There's been a noticeable improvement in baby cry detection! In 2023, the best we could find was a specialized device that cost over $1,000 and had all sorts of flakiness/issues. Today, the built-in detection on her (Android) phone + watch is better than that device, and a lot more convenient.

2 hours ago · SatvikBeri

I also got a notification on my Apple Watch, while away from the house, that the HomePod mini heard our fire alarm going off.

A call home let us know that our son had set it off learning to reverse-sear his steak.

3 hours ago · Damogran6

If the fire alarm didn't go off, you didn't sear hard enough. :)

2 hours ago · brandonb

I live across the street from a fire station. Thank you for your diligence, little HomePod mini, but I'm turning your notifications off now.

an hour ago · kstrauser

Is that something you actually need AI for, though? A device with a sound sensor and something that lights up or vibrates a remote device when it detects sound above some threshold would be cheaper, faster at detection, more reliable, easier to maintain, and more.

4 hours ago · embedding-shape

But your solution costs money in addition to the phone they already own for other purposes. And multiple things can make loud noises in your environment besides babies; differentiating between a police siren going by outside and your baby crying is useful, especially if the baby slept through the siren.

The same arguments were made about blind people and the multitude of one-off devices that smartphones replaced: OCR to TTS, color detection, object detection in photos/camera feeds, detecting what denomination US bills are, analyzing what's on screen semantically vs. what was provided as accessible text (if any was at all), etc. Sure, services for the blind would come by and help arrange outfits for people, audiobook narrators and braille translation services existed, and standalone devices to detect money denominations were sold, but a phone can now do all of that for much less.

All of these accessibility AI/ML features run on-device, so the knee-jerk anti-AI crowd's chief complaints are mostly baseless anyway. And for the blind and the deaf, carrying all the potential extra devices with you everywhere is burdensome. The smartphone is a minimal and common social and physical burden.

3 hours ago · evilduck

You are talking about a device of smartphone complexity. You need enough compute power to run a model that can distinguish noises. You need a TCP/IP stack and a wireless radio to communicate the information. At that point you have a smartphone. A simple sound-threshold device would have too many false positives/negatives to be useful.

an hour ago · doug_durham

> more reliable

I've worked on some audio/video alert systems. Basic threshold detectors produce a lot of false positives. It's common for parents to put white noise machines in the room to help the baby sleep. When you have a noise generating machine in the same room, you need more sophisticated detection.

False positives are the fastest way to frustrate users.

an hour ago · Aurornis
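
To make the false-positive point concrete, here is a minimal sketch in Python/NumPy of the naive loudness-threshold approach being discussed. The sample rate, window size, and threshold are made-up illustration values, not anything from a real product. A constant white-noise machine keeps the RMS energy above any fixed threshold, so a pure loudness detector fires on nearly every window; telling a cry apart from steady broadband noise requires looking at the signal's structure, which is where the ML comes in.

```python
# Minimal sketch (not any vendor's actual detector): a naive RMS-energy
# threshold "cry detector" and why a white-noise machine defeats it.
import numpy as np

SAMPLE_RATE = 16_000          # assumed mono audio at 16 kHz
WINDOW = SAMPLE_RATE // 2     # 0.5 s analysis windows
THRESHOLD = 0.05              # arbitrary RMS threshold for this illustration

def rms_alerts(signal: np.ndarray) -> list[bool]:
    """Return one True/False alert per 0.5 s window based on loudness alone."""
    alerts = []
    for start in range(0, len(signal) - WINDOW, WINDOW):
        window = signal[start:start + WINDOW]
        rms = np.sqrt(np.mean(window ** 2))
        alerts.append(rms > THRESHOLD)
    return alerts

rng = np.random.default_rng(0)
quiet_room = 0.01 * rng.standard_normal(SAMPLE_RATE * 5)           # stays quiet
white_noise_machine = 0.2 * rng.standard_normal(SAMPLE_RATE * 5)   # constant hiss

print(sum(rms_alerts(quiet_room)))            # ~0 alerts
print(sum(rms_alerts(white_noise_machine)))   # fires on nearly every window
```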

>Is that something you actually need AI for though?

Need? Probably not. I bet it helps though (false positives, etc.)

>would be cheaper, faster detection, more reliable, easier to maintain, and more.

Cheaper than the phone I already own? Easier to maintain than the phone that I don't need to do maintenance on?

From a fun hacking perspective, a different sensor & device is cool. But I don't think it's any of the things you mentioned for the majority of people.

4 hours ago · jfindper

> As a blind person, AI has changed my life.

I know this is a low quality comment, but I'm genuinely happy for you.

2 hours ago · whatsupdog

Can you share some ways AI has changed your life?

10 hours ago · phyzix5761

I guess that auto-generated audio descriptions for (almost?) any video you want is a very, very nice feature for a blind person.

9 hours ago · darkwater

My two cents, this seems like a case where it’s better to wait for the person’s response instead of guessing.

9 hours ago · tippa123

Fair enough. Anyway I wasn't trying to say what actually changed GP's life, I was just expressing my opinion on what video models could potentially bring as an improvement to a blind person.

9 hours ago · darkwater

My two cents, this seems like a comment it should be up to the OP to make instead of virtue signaling.

7 hours ago · nkmnz

> Can you share some ways AI has changed your life?

A question directed to GP, directly asking about their life and pointing this out is somehow virtue signalling, OK.

6 hours ago · tippa123

You can safely assume that anyone who uses “virtue signaling” unironically has nothing substantive to say.

6 hours ago · throwup238

>[People who call out performative bullshit should be ignored because they’re totally wrong and I totally mean it.]

Maybe you’re just being defensive? I’m sure he didn’t mean it as an attack on you personally.

3 hours ago · SV_BubbleTime

Y'all could have gotten a serviceable answer about this topic out of ChatGPT. The 2025 version of "let me google that for you".

5 hours ago · foobarian

...you know, people can have opinions about the best way to behave outside of self-aggrandizement, even if your brain can't grasp this concept.

5 hours ago · MangoToupe

From the list of virtues, which one was this signaling?

https://www.virtuesforlife.com/virtues-list/

6 hours ago · fragmede

I’d guess: Respect, consideration, authenticity, fairness.

Or should I too perhaps wait for OP to respond.

3 hours ago · efs24

That list needs updating. Lots of things became virtuous depending on the scenario. During Covid, fear was a virtue. You had to prove how scared you were of it, how many masks you wore, because being fearful made you “one of the good ones”.

3 hours ago · SV_BubbleTime

[flagged]

6 hours ago · meindnoch

The two cents are not literally monetary - your opinion is literally the two cents. You're contributing your understanding to the shared pot of understanding and that's represented by putting money into the pot, showing you have skin in the game. It's contributing to a larger body of knowledge by putting your small piece in - the phrases you suggest don't have that context behind them and in my opinion are worse for it. The beauty of the phrase is because the two cents are your opinion, everyone has enough, because everyone can have an opinion.

The lens through which you're analyzing the phrase is coloring how you see it negatively, and the one I'm using is doing the opposite. There is no need to change the phrase, just how it's viewed, I think.

5 hours ago · Moomoomoo309

People put too much weight onto words. The first lesson I learned on the internet is that words are harmless. They might be deeply painful for some, but because people like myself put no weight behind them, we don't even have a concept of being mindful about such things, since it never crosses our minds, and it's really difficult to see it any other way even if we try, because it just seems like a bad joke.

And when I say 'it never crosses our minds' I really mean it: there are zero thoughts between thinking about a message and having it show up in a text box.

A really great example is slurs. A lot of people have to do a double take, but there are zero extra neurons firing when I read them. I guess early internet culture is to blame, since all kinds of language was completely uncensored and it was very common to run into very hostile people/content.

5 hours ago · kachapopopow

> The metaphor of assigning a literal monetary value to one's opinion reinforces the idea that contributions are transactional and that their "worth" is measured through an economic lens. That framing can be exclusionary, especially for people who have been historically marginalized by economic systems. It subtly normalizes a worldview where only those with enough "currency" - social, financial, or otherwise - deserve to be heard.

No. It’s acknowledging that that perhaps one’s opinion may not be as useful as somebody else’s in that moment. Which is often true!

Your first and third paragraphs are true, but they don’t apply to every bloody phrase.

5 hours ago · georgebcrawford

Guessing that being able to hear a description of what the camera is seeing (basically a special case of a video), in any circumstances, is indeed life-changing if you're blind? Take a picture through the window and ask what the commotion is. A door outside that's normally open is closed? Take a picture and ask whether there's a sign on it. Etc.

9 hours ago · baq

Not the GP, but I'm currently reading a web novel with a card game where the author didn't include alt text in the card images. I contacted them about it and they started adding it, but in the meantime AI was a big help. The same goes for all kinds of other images on the internet when they are significant to understanding the surrounding text, and for a better search experience when Google, DDG, and the like make finding answers difficult. I might use smart glasses for better outdoor orientation, though a good solution might take some time. Phone camera plus AI is also situationally useful.

9 hours ago · gostsamo

As a (web app) developer I'm never quite sure what to put in alt text. Figured you might have some advice here?

8 hours ago · dzhiurgis

The question to ask is: what does a sighted person learn from looking at the image? The answer is the alt text. E.g., if the image is a floppy disk, maybe you communicate that this is the save button. If it shows a cat sleeping on the windowsill, the alt text is, yep: "my cat looking cute while sleeping on the windowsill".

8 hours ago · gostsamo

I really like how you framed this: the alt text should be the takeaway or the learning that needs to happen, not a recitation of the image. Where I've often had issues is more with things like business charts and illustrations, and less with cute cat photos.

7 hours ago · michaelbuckbee

"A meaningless image of a chart, from which nevertheless emanates a feeling of stonks going up"

7 hours ago · isoprophlex

It might be that you’re not perfectly clear on what exactly you’re trying to convey with the image and why it’s there.

7 hours ago · travisjungroth

What would you put for this? "Graph of All-Transactions House Price Index for the United States 1975-2025"?

https://fred.stlouisfed.org/series/USSTHPI

5 hours ago · hrimfaxi

Charts are one I've wondered about: do I need to try to describe the trend of the data, or provide several conclusions that a person seeing the chart might draw?

Just saying "It's a chart" doesn't feel like it'd be useful to someone who can't see the chart. But if the other text on the page talks about the chart, then maybe identifying it as the chart is enough?

4 hours ago · wlesieutre

What are you trying to point out with your graph in general? Write that basically. Usually graphs are added for some purpose, and assuming it's not purposefully misleading, verbalizing the purpose usually works well.

4 hours ago · embedding-shape

I might be an unusual case, but when I present graphs/charts it's not usually because I'm trying to point something out. It's usually a "here's some data, what conclusions do you draw from this?" and hopefully a discussion will follow. Example from recently: "Here is a recent survey of adults in the US and their religious identification, church attendance levels, self-reported "spirituality" level, etc. What do you think is happening?"

Would love to hear a good example of alt text for something like that where the data isn't necessarily clear and I also don't want to do any interpreting of the data lest I influence the person's opinion.

3 hours ago · freedomben

> and hopefully a discussion will follow.

Yeah, I think I misunderstood the context. I understood/assumed it to be for an article/post you're writing, where you have something you want to say in general/some point of what you're writing. But based on what you wrote now, it seems to be more about how to caption an image you're sending to a blind person in a conversation/discussion of some sort.

I guess at that point it'd be easier for them if you just share the data itself, rather than anything generated by the data, especially if there is nothing you want to point out.

6 minutes ago · embedding-shape

An image is the wrong way to convey something like that to a blind person. As written in one of my other comments, give the data in a table format or a custom widget that could be explored.

2 hours ago · gostsamo

It depends on the context. What do you want to say? How much of it is said in the text? Can the content of the image be inferred from the text part? Even in the best scenario though, giving a summary of the image in the alt text / caption could be immensely useful and include the reader in your thought process.

4 hours ago · gostsamo

sorry, snark does not help with my desire to improve accessibility in the wild.

5 hours ago · gostsamo

The logic stays the same, though the answer is longer and not always easy. Just saying "business chart" is totally useless. You can make a choice about what to focus on and say "a chart of the stock for the last five years with constant improvement and a clear increase of 17 percent in 2022" (if it is a simple point that you are trying to make), or you can provide an HTML table with the datapoints if there is data that the user needs to explore on their own.

6 hours ago · gostsamo

But the table exists outside the alt text, right? I don't know a mechanism to say "this HTML table represents the contents of this image" in a way that screen readers and other accessibility technologies take advantage of.

4 hours ago · nextaccountic

The figure tag has both image and caption tags that link them. As far as I remember, some content can also be marked as screen-reader-only if you don't want the table to be visible to the rest of the users.

Additionally, I've recently been a participant in accessibility studies where charts, diagrams and the like have been structured to be easier to explore with a screen reader. Those needed JS to work and some of them looked custom, but they are also an alternative way to layer data.

4 hours ago · gostsamo
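
As a rough sketch of the pattern described above: pair the chart image (with its alt text and caption) with a data table that only screen-reader users encounter. The helper below is hypothetical, and the "visually-hidden" class is illustrative; you still need CSS that hides the table visually while keeping it exposed to assistive technology.

```python
# Hypothetical helper that emits a <figure> with alt text, a caption, and a
# screen-reader-accessible data table alongside the chart image.
from html import escape

def accessible_chart(src: str, alt: str, caption: str,
                     points: list[tuple[str, float]]) -> str:
    rows = "\n".join(
        f'      <tr><th scope="row">{escape(label)}</th><td>{value}</td></tr>'
        for label, value in points
    )
    return f"""<figure>
  <img src="{escape(src)}" alt="{escape(alt)}">
  <figcaption>{escape(caption)}</figcaption>
  <table class="visually-hidden">
    <caption>{escape(caption)} (data)</caption>
    <tbody>
{rows}
    </tbody>
  </table>
</figure>"""

# Placeholder labels and values purely for illustration.
print(accessible_chart(
    "index.png",
    "Index rising steadily over the period, with a sharp jump at the end",
    "Example index, year by year",
    [("Year 1", 100.0), ("Year 2", 105.0), ("Year 3", 130.0)],
))
```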

One way to frame it is: "how would I describe this image to somebody sat next to me?"

4 hours ago · askew

Important to add for blind people: "... assuming they have never seen anything and visual metaphors won't work."

The number of times I've seen captions that wouldn't make sense for people who have never been able to see is staggering. I don't think most people realize how visual our typical language usage is.

4 hours ago · embedding-shape

I wonder if there's anything that can help blind people navigate the world more easily. I guess in the future AR glasses won't just be for the sighted but will help people without vision considerably. It really is both amazing and terrifying, the future we're heading towards.

3 hours ago · andy_ppp

Finally, good news about AI doing something good for people.

7 hours ago · javcasas

I’m not blind and AI has been great for me too.

7 hours ago · p1esk

[flagged]

6 hours ago · majkinetor

The smiley at the end doesn’t hide how awful your comment is.

5 hours ago · talesfromearth

So serious... you should relax a bit and work on your humor reception/understanding (smiley intentionally left out this time).

5 hours ago · majkinetor

People need to understand that a lot of the angst around AI comes from AI enabling people to do things that they formerly needed to go through gatekeepers for. The angst is coming from the gatekeepers.

AI has been a boon for me and my non-tech job. I can pump out bespoke apps all day without having to get bent on $5000/yr/usr engineering software packages. I have a website for my side business that looks and functions professionally and was done with a $20 monthly AI subscription instead of a $2000 contractor.

3 hours ago · Workaccount2

I highly doubt "pumping out bespoke apps all day" is possible yet beyond 100% boilerplate, and when it is possible, it's no good for any purpose other than enshittifying the web, and at that point it's not profitable because everyone can do it.

I use AI daily as a senior coder for search and docs, and when using it for prototyping you still need to be a senior coder to go from, say, 60% boilerplate to a 100% finished app/site/whatever, unless it's incredibly simple.

3 hours ago · MyFirstSass

Often the problem with tech people is that they think software only exists for tech, or to be sold by tech to others.

Nothing I do is in the tech industry. It's all manufacturing and all the software is for in-house processes.

Believe it or not, software is useful to everyone and no longer needs to originate from someone who only knows software.

2 hours ago · Workaccount2

I'm saying you can't do what you're saying without knowing code at the moment.

You didn't give any examples of the valuable bespoke apps that you are creating by the hour.

I simply don't believe you, and the arrogant salesy tone doesn't help.

17 minutes ago · MyFirstSass

This is the same as the discussion about using Excel. Excel has its limitations, but it has enabled millions of people to do pretty sophisticated stuff without the help of “professionals”. Most of the stuff us tech people do is also basically some repetitive boilerplate. We just like to make things more complex than they need to be. I am always a little baffled why seemingly every little CRUD site that has at most 100 users needs to be run on Kubernetes with several microservices, CI/CD pipelines, and whatever.

As far as enshittification goes, this was happening long before AI. It probably started with SEO and just kept going from there.

2 hours ago · vjvjvjvjghv

The reality, too, is that even if "what is acceptable" has not yet caught up to that guy working at Atlassian, polishing off a new field in Jira, people are using AI + Excel to manage their tasks EXACTLY the way their head works, not the way Jira works.

Yet we fail to see AI as a good thing and treat it just as a jobs destroyer. Are we "better than" the people who used to fill toothpaste tubes manually until a machine was invented to replace them? They were just as mad when they got the pink slip.

24 minutes ago · almosthere

[dead]

8 hours ago · kruxigt

It’s not really relevant to this release specifically but it irks me that, in general, an “open weights model” is like an “open source machine code” version of Microsoft Windows. Yes, I guess I have open access to view the thing I am about to execute!

This Apple license is click-wrap MIT with, at least, the rights to modify and redistribute the model itself. I suppose I should be grateful for that much openness.

3 hours ago · gorgoiler

Great analogy.

To extend the analogy, "closed source machine code" would be like conventional SaaS. There's an argument that shipping me a binary I can freely use is at least better than only providing SaaS.

27 minutes ago · advisedwang

Looking at text to video examples (https://starflow-v.github.io/#text-to-video) I'm not impressed. Those gave me the feeling of the early Will Smith noodles videos.

Did I miss anything?

10 hours ago · yegle

These are ~2 years behind state of the art from the looks of it. Still cool that they're releasing anything that's open for researchers to play with, but it's nothing groundbreaking.

9 hours ago · M4v3R

The interesting part is they chose to go with a normalizing flow approach, rather than the industry standard diffusion model approach. Not sure why they chose this direction as I haven’t read the paper yet.

an hour ago · tdesilva
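
For readers unfamiliar with the term: a normalizing flow learns an invertible mapping with a tractable log-determinant, so the data likelihood can be evaluated exactly rather than through the approximations diffusion models rely on. Below is a generic sketch of one affine coupling layer, a common building block in flow models; it is not STARFlow-V's actual architecture (see the paper for that), just an illustration of the idea, assuming PyTorch.

```python
# Generic sketch of one affine coupling layer, the invertible building block
# behind many normalizing flows (not STARFlow-V's actual architecture).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.half = dim // 2
        # A small net predicts a scale and shift for the second half
        # of the input, conditioned on the first half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x: torch.Tensor):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)             # keep scales well-behaved
        y2 = x2 * torch.exp(log_s) + t
        log_det = log_s.sum(dim=1)            # exact log|det J|, nearly free
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y: torch.Tensor):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=1)

layer = AffineCoupling(dim=8)
x = torch.randn(4, 8)
y, log_det = layer(x)
print(torch.allclose(layer.inverse(y), x, atol=1e-5))  # True: exactly invertible
```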

But 7B is rather small, no? Are other open weight video models also this small? Can this run on a single consumer card?

9 hours ago · Mashimo

> But 7b is rather small no?

Sure, it's smallish.

> Are other open weight video models also this small?

Apple's models are weights-available, not open weights. And yes: WAN 2.1, in addition to its 14B models, also has 1.3B models; WAN 2.2, in addition to its 14B models, also has a 5B model (the WAN 2.2 VAE used by STARFlow-V is specifically the one used with the 5B model). And because the WAN models are largely actually open weights models (Apache 2.0 licensed), there are lots of downstream open-licensed derivatives.

> Can this run on a single consumer card?

Modern model runtimes like ComfyUI can run models that do not fit in VRAM on a single consumer card by swapping model layers between RAM and VRAM as needed; models bigger than this can run on single consumer cards.

8 hours ago · dragonwriter
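
For illustration, here is a bare-bones sketch of the layer-swapping idea mentioned above, assuming PyTorch; it is not ComfyUI's actual implementation. Weights stay in system RAM and each block is copied to the GPU only for its own forward pass, so peak VRAM is roughly one block plus activations.

```python
# Bare-bones illustration of layer offloading (not ComfyUI's actual code):
# weights live in system RAM and each block visits the GPU only for its
# own forward pass.
import torch
import torch.nn as nn

def offloaded_forward(blocks: nn.ModuleList, x: torch.Tensor,
                      device: str = "cuda") -> torch.Tensor:
    x = x.to(device)
    for block in blocks:          # blocks start (and stay) on the CPU
        block.to(device)          # copy this block's weights into VRAM
        with torch.no_grad():
            x = block(x)
        block.to("cpu")           # evict it to make room for the next one
    return x

# Toy "model": a stack of blocks that might not all fit in VRAM at once.
blocks = nn.ModuleList(nn.Sequential(nn.Linear(1024, 1024), nn.GELU())
                       for _ in range(8))

if torch.cuda.is_available():
    out = offloaded_forward(blocks, torch.randn(1, 1024))
    print(out.shape)
```

The trade-off is speed: every PCIe transfer adds latency, which is why fully in-VRAM inference is faster but larger-than-VRAM models remain runnable.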

My guess is that they will lean towards smaller models, and try to provide the best experience for running inference on-device.

21 minutes ago · jjfoooo4

No, it is not as good as Veo, but better than Grok, I would say. Definitely better than what was available 2 years ago. And it is only a 7B research model!

9 hours ago · tomthe

I think you need to go back and rewatch Will Smith eating spaghetti. These examples are far from perfect and probably not the best model right now, but they're far better than you're giving credit for.

As far as I know, this might be the most advanced text-to-video model that has been released? I'm not sure whether the license will qualify as open enough in everyone's eyes, though.

7 hours ago · jfoster

I wanted to write exactly the same thing, this reminded me of the Will Smith noodles. The juice glass keeps filling up after the liquid stopped pouring in.

7 hours ago · manmal

From the paper, this is a research model aimed at dealing with the runaway error common in diffusion video models - the latent space is (proposed to be) causal and therefore it should have better coherence.

For a 7B model the results look pretty good! If Apple gets a model out here that is competitive with WAN or even Veo, I believe in my heart it will have been trained with images of the finest taste.

4 hours ago · vessenes

> STARFlow-V is trained on 96 H100 GPUs using approximately 20 million videos.

They don’t say for how long.

11 hours ago · coolspot

Apple Intelligence: trained on Nvidia GPUs on Linux.

Do the examples in the repo run inference on Mac?

44 minutes ago · moondev

Title is wrong, model isn’t released yet. Title also doesn’t appear in the link - why the editorializing?

4 hours ago · dymk

> Model Release Timeline: Pretrained checkpoints will be released soon. Please check back or watch this repository for updates.

> The checkpoint files are not included in this repository due to size constraints.

So it's not actually open weights yet. Maybe eventually once they actually release the weights it will be. "Soon"

6 hours ago · LoganDark

Looks good. I wonder what use case Apple has in mind though, or I suppose this is just what the researchers themselves were interested in, perhaps due to the current zeitgeist. I'm not really sure how research works at big tech companies; are there top-down mandates?

10 hours ago · satvikpendem

I guess Apple is big in video production and animation, with some ties via Pixar and Disney, ever since Jobs started Pixar and it all got tied up in a myriad of different ways.

3 hours ago · ozim

To add things to videos you create with your phone. TikTok and Insta will probably add this soon, but I suppose Apple is trying to provide this feature on “some level”. That means you don’t have to send your video through a social media platform first to creatively edit it (the platforms being the few tools that let you do generative video).

They should really buy Snapchat.

4 hours ago · ivape

I was upset the page didn't have videos immediately available, then I realized I have to click on some of the tabs. One red flag on their GitHub is that the license looks to be their own flavor of MIT (though much closer to MS-PL).

4 hours ago · giancarlostoro
[deleted]
4 hours ago

Interesting that this is an autoregressive ("causal") model rather than a diffusion model.

an hour ago · cubefox

Where do they get the video training data?

10 hours ago · nothrowaways

From the paper:

> Datasets. We construct a diverse and high-quality collection of video datasets to train STARFlow-V. Specifically, we leverage the high-quality subset of Panda (Chen et al., 2024b) mixed with an in-house stock video dataset, with a total number of 70M text-video pairs.

10 hours ago · postalcoder

> in-house stock video dataset

Wonder if "iCloud backups" would be counted as "stock video" there? ;)

8 hours ago · justinclift

I have to delete as many videos as humanly possible before backing up to avoid blowing through my iCloud storage quota so I guess I’m safe

8 hours ago · anon7000

Turn on advanced data protection so they don't train on yours.

8 hours ago · fragmede

That has nothing to do with it, and Apple wouldn't train on user content; they're not Google. If they ever did, there would be an opt-in at best. There's a reason they're walking and observing, not running and trying to be the forefront cloud AI leader, like some others.

5 hours ago · givinguflac

The number of video models that are worse than Wan 2.2 and can safely be ignored has increased by 1.

4 hours ago · andersa

To be fair, the sizes aren't comparable, and for the variant that is comparable, the results aren't that much worse.

4 hours ago · embedding-shape

The samples (and this may or may not be completely fair; either set could be more cherry-picked than the other, and it would be interesting to see a side-by-side comparison with comparable prompts) seem significantly worse than what I've seen from WAN 2.1 1.3B, which is both from the previous WAN version and, proportionally, smaller relative to Apple's 7B than that model itself is relative to the 28B combination of the high- and low-noise 14B WAN 2.2 models that are typically used together.

But also, Starflow-V is a research model with a substandard text encoder, it doesn't have to be competitive as-is to be an interesting spur for further research on the new architecture it presents. (Though it would be nice if it had some aspect where it offered a clear improvement.)

2 hours ago · dragonwriter

This doesn’t look like it was intended to compete. The research appears interesting.

2 hours ago · wolttam

Apple's got to stop running their AI group like a university lab. Get some actual products going that we can all use--you know, with a proper fucking web UI and a backend.

an hour ago · Invictus0

Hopefully this will make it into some useful feature in the ecosystem and not just contribute to more terrible slop. Apple has saved itself from the destruction of quality and taste that these models enabled; I hope it stays that way.

10 hours ago · camillomiller

[dead]

4 hours ago · Barry-Perkins

[flagged]

9 hours ago · ai_updates

you don't "appreciate" anything, you're just posting LLM comments

9 hours ago · MallocVoidstar

"VAE: WAN2.2-VAE" so it's just a Wan2.2 edit, compressed to 7B.

9 hours ago · mdrzn

This doesn't necessarily mean that it's Wan2.2. People often don't train their own VAEs and just reuse an existing one, because a VAE isn't really what's doing the image generation part.

A little bit more background for those who don't know what a VAE is (I'm simplifying here, so bear with me): it's essentially a model which turns raw RGB images into something called a "latent space". You can think of it as a fancy "color" space, but on steroids.

There are two main reasons for this: one is to make the model which does the actual useful work more computationally efficient. VAEs usually downscale the spatial dimensions of the images they ingest, so your model now instead of having to process a 1024x1024 image needs to work on only a 256x256 image. (However they often do increase the number of channels to compensate, but I digress.)

The other reason is that, unlike raw RGB space, the latent space is actually a higher level representation of the image.

Training a VAE isn't the most interesting part of image models, and while it is tricky, it's done entirely in an unsupervised manner. You give the VAE an RGB image, have it convert it to latent space, then have it convert it back to RGB, you take a diff between the input RGB image and the output RGB image, and that's the signal you use when training them (in reality it's a little more complex, but, again, I'm simplifying here to make the explanation more clear). So it makes sense to reuse them, and concentrate on the actually interesting parts of an image generation model.

8 hours ago · kouteiheika
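
A stripped-down sketch of the training signal described above, assuming PyTorch. It omits the "variational" sampling and KL term entirely (as the comment itself does), and real video VAEs like WAN's add perceptual/adversarial losses and temporal compression, but the core encode, decode, and reconstruction-diff loop looks like this:

```python
# Stripped-down sketch of the reconstruction training loop described above
# (assuming PyTorch); a plain autoencoder standing in for a real VAE.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, latent_channels: int = 16):
        super().__init__()
        # RGB (3 channels) -> latent with 4x smaller spatial dims, more channels
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_channels, 3, stride=2, padding=1),
        )
        # latent -> RGB again
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(rgb))

vae = TinyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-4)

images = torch.rand(2, 3, 256, 256)            # stand-in batch of RGB images
recon = vae(images)
loss = nn.functional.mse_loss(recon, images)   # the "diff" used as the signal
loss.backward()
opt.step()
print(loss.item())
```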

Since you seem to know way more than I on the subject, can you explain the importance of video generation that is not diffusion based?

40 minutes ago · sroussey

Thanks for the explanation!

4 hours ago · mdrzn

> "VAE: WAN2.2-VAE" so it's just a Wan2.2 edit

No, using the WAN 2.2 VAE does not mean it is a WAN 2.2 edit.

> compressed to 7B.

No, if it was an edit of the WAN model that uses the 2.2 VAE, it would be expanded to 7B, not compressed (the 14B models of WAN 2.2 use the WAN 2.1 VAE, the WAN 2.2 VAE is used by the 5B WAN 2.2 model.)

8 hours ago · dragonwriter

They used the VAE from WAN, like many other models do. For image models you see a lot of them using the Flux VAE. Which is perfectly fine: they are released as Apache 2.0 and save you time to focus on your transformer architecture...

9 hours ago · BoredPositron

<joke> GGUF when? </joke>