I've done some preliminary testing with Z-Image Turbo in the past week.
Thoughts
- It's fast (~3 seconds on my RTX 4090)
- Surprisingly capable of maintaining image integrity even at high resolutions (1536x1024, sometimes 2048x2048)
- The prompt adherence is impressive for a 6B-parameter model
Some tests (2 / 4 passed):
https://imgpb.com/exMoQ
Personally I find it works better as a refiner model downstream of Qwen-Image 20b which has significantly better prompt understanding but has an unnatural "smoothness" to its generated images.
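For concreteness, here is a minimal sketch of that kind of two-stage "draft with Qwen-Image, refine with Z-Image Turbo" workflow using diffusers. Whether both checkpoints resolve through diffusers' Auto pipelines, and the specific strength/step values, are assumptions for illustration, not official settings.

```python
# Two-stage sketch: Qwen-Image for composition, Z-Image Turbo as an img2img refiner.
# Model IDs are the public HF repos; the refine strength of 0.3 is an assumption.
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

prompt = "studio photo of a porcupine made of pine cones, soft light"

# Stage 1: Qwen-Image for prompt understanding and overall composition.
draft_pipe = AutoPipelineForText2Image.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")
draft = draft_pipe(prompt, num_inference_steps=30).images[0]

# Stage 2: Z-Image Turbo refines the draft to remove the "smooth" look.
refine_pipe = AutoPipelineForImage2Image.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
).to("cuda")
final = refine_pipe(prompt, image=draft, strength=0.3, num_inference_steps=9).images[0]
final.save("refined.png")
```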
> It's fast (~3 seconds on my RTX 4090)
It is amazing how far behind Apple Silicon is when it comes to running non-language models.
Using the reference code from Z-Image on my M1 Ultra, it takes 8 seconds per step, or over a minute for the default of 9 steps.
The diffusion process is usually compute-bound, while transformer inference is memory-bound.
Apple Silicon is comparable in memory bandwidth to mid-range GPUs, but it’s light years behind on compute.
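A rough back-of-envelope comparison makes the point; the figures below are approximate public specs from memory, so treat them as order-of-magnitude only.

```python
# Approximate specs: (FP32 TFLOPS, memory bandwidth in GB/s). Figures are rough.
specs = {
    "RTX 4090": (83, 1008),
    "M1 Ultra": (21, 800),   # 64-core GPU variant
}
compute_gap = specs["RTX 4090"][0] / specs["M1 Ultra"][0]
bandwidth_gap = specs["RTX 4090"][1] / specs["M1 Ultra"][1]
print(f"compute gap:   ~{compute_gap:.1f}x")    # ~4x on plain FP32 shaders; tensor cores
                                                # widen this a lot for fp16/bf16 matmuls
print(f"bandwidth gap: ~{bandwidth_gap:.1f}x")  # ~1.3x, i.e. roughly comparable
```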
> but it’s light years behind on compute.
Is that the only factor though? I wonder if pytorch is lacking optimization for the MPS backend.
China really is keeping the open weight/source AI scene alive. If in five years a consumer GPU market still exists it would be because of them.
Pretty sure the consumer GPU market mostly exists because of games, which has nothing to do with China or AI.
The consumer GPU market is not treated as a primary market by GPU makers anymore. Similar to how Micron went B2B-only.
On fal, it often takes less than a second.
https://fal.ai/models/fal-ai/z-image/turbo/api
Couple that with the LoRA, in about 3 seconds you can generate completely personalized images.
The speed alone is a big factor, but if you put the model side by side with Seedream, Nano Banana, and other models, it's definitely in the top 5, and that's a killer combo IMHO.
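To make the LoRA point concrete, here is a sketch of calling the fal endpoint linked above with a personal LoRA via fal's Python client. The endpoint id comes from the link; the `image_size` and `loras` argument shapes are assumptions modeled on how other fal image endpoints expose LoRA support, so check the endpoint docs.

```python
# Sketch only. Assumes `pip install fal-client` and FAL_KEY set in the environment;
# the "image_size" and "loras" argument names are assumptions, not confirmed for this endpoint.
import fal_client

result = fal_client.subscribe(
    "fal-ai/z-image/turbo",
    arguments={
        "prompt": "portrait photo of zxc_person hiking at sunrise",
        "image_size": "portrait_4_3",
        "loras": [{"path": "https://example.com/zxc_person.safetensors", "scale": 1.0}],
    },
)
print(result["images"][0]["url"])  # typical fal image-output shape
```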
I don't know anything about paying for these services, and as a beginner, I worry about running up a huge bill. Do they let you set a limit on how much you pay? I see their pricing examples, but I've never tried one of these.
https://fal.ai/pricing
It works with prepaid credits, so there should be no risk. Minimum credit amount is $10, though.
This. You can also run most (if not all) of the models that Fal.ai offers directly from the playground tab, including Z-Image Turbo.
https://fal.ai/models/fal-ai/z-image/turbo
That's 2/4? The kitkat bars look nothing like kitkat bars for the most part (logo? splits? white cream filling?). The DNA armor is made from normal metal links.
Fair. Nobody said it was going to surpass Flux.1 Dev (a 12B parameter model) or Qwen-Image (a 20B parameter model) where prompt adherence is strictly concerned.
It's the reason I'm holding off until the Z-Image Base version is released before adding to the official GenAI model comparisons.
But for a 6B model that can generate an image in under 5 seconds, it punches far above its weight class.
As to the passing images, there is a white chocolate kit-kat (I know, blasphemy, right?).
If that’s your website please check GitHub link - it has a typo (gitub) and goes to a malicious site
Thanks for the heads up. I just checked the site through several browsers and proxying through a VPN. There's no typo and it properly links to:
https://github.com/Tongyi-MAI/Z-Image
Screenshot of site with network tools open to indicate link
https://imgur.com/a/FZDz0K2
EDIT: It's possible that this issue might have existed in an old cached version. I'll purge the cache just to make sure.
The link with the typo is in the footer.
Well holy crap - that's been there for about forever! I need a "domain name" spellchecker built into my Gulp CI/CD flow.
EDIT: Fixed! Thanks soontimes and rprwhite!
So does this finally replace SDXL?
Is Flux 1/2/Kontext left in the dust by the Z Image and Qwen combo?
Yeah, I've definitely switched largely away from Flux. Much as I do like Flux (for prompt adherence), BFL's baffling licensing structure along with its excessive censorship makes it a non-starter.
For ref, the porcupine-cone creature that ZiT couldn't handle by itself in my aforementioned test was easily handled using a Qwen20b + ZiT refiner workflow, which even with two separate models STILL runs faster than Flux2 [dev].
https://imgur.com/a/5qYP0Vc
SDXL has long been surpassed; its primary redeeming feature is its fine-tuned variants for different focuses and image styles.
IMO HiDream had the best quality OSS generations, Flux Schnell is decent as well. Will try out Z-Image soon.
SDXL has been outclassed for a while, especially since Flux came out.
Subjective. Most in creative industries regularly still use SDXL.
Once Z-Image Base comes out and some real tuning can be done, I think it has a chance of replacing SDXL in the role it currently fills.
I don't think that's fair. SDXL is crap at composition. It's really good with LoRAs to stylize/inpaint though.
Source?
Most of the people I know doing local AI prefer SDXL to Flux. Lots of people are still using SDXL, even today.
Flux has largely been met with a collective yawn.
The only thing Flux had going for it was photorealism and prompt adherence. But the skin and jaws of the humans it generated looked weird, it was difficult to fine tune, and the licensing was weird. Furthermore, Flux never had good aesthetics. It always felt plain.
Nobody doing anime or cartoons used Flux. SDXL continues to shine here. People doing photoreal kept using Midjourney.
> it was difficult to fine tune
Yep. It's pretty difficult to fine tune, mostly because it's a distilled model. You can fine tune it a little bit, but it will quickly collapse and start producing garbage, even though fundamentally it should have been an easier architecture to fine-tune compared to SDXL (since it uses the much more modern flow matching paradigm).
I think that's probably the reason why we never really got any good anime Flux models (at least not as good as they were for SDXL). You just don't have enough leeway to be able to train the model for long enough to make the model great for a domain it's currently suboptimal for without completely collapsing it.
The [demo PDF](https://github.com/Tongyi-MAI/Z-Image/blob/main/assets/Z-Ima...) has ~50 photos of attractive young women sitting/standing alone, and exactly two photos featuring young attractive men on their own.
It's incredibly clear who the devs assume the target market is.
> It's incredibly clear who the devs assume the target market is.
Not "assume". That's what the target market is. Take a look at civitai and see what kind of images people generate and what LoRAs they train (just be sure to be logged in and disable all of the NSFW filters in the options).
They're correct. This tech, like much before it, is being driven by the base desires of extremely smart young men.
Gooners are base all right, but smart? Seriously? They can't even use their imagination to jerk off.
They may have an RLHF phase, but there is also just the shape of the distribution of images on the internet to consider, and, since this is from Alibaba, their part of the internet/social media (Weibo).
[flagged]
With today's remote social validation for women and all time low value of men due to lower death rates and the disconnect from where food and shelter come from, lonely men make up a huge portion of the population.
Something like >80% of men consume sexually explicit media. It's hardly limited to involuntarily celibate men.
It's not about consumption, it's about having a vast majority of your demos being sexy women instead of a balance.
I'm still not following. Ads for a pickup truck are probably more likely to feature towing a boat than ads for a hatchback even if they're both capable of towing boats. Because buyers of the former are more likely to use the vehicle for that purpose.
If a disproportionate share of users are using image generation for generating attractive women, why is it out of place to put commensurate focus on that use case in demos and other promotional material?
I mean spending all that time on dates, and wives, and kids gives you much less time to build AI models.
The people with the time and desire to do something are the ones most likely to do it, this is no brilliant observation.
You could say that about any field, and yet we don't see the same behavior in most other fields
Spending all your time on dates and wives and kids means you're not spending all your time building houses.
I mean things that take hard physical labor are typically self limiting...
I do nerdy computer things and I actually build things too; for example, I busted up the limestone in my backyard and put in a patio and raised garden. Working 16 hours a day coding or otherwise computering isn't that hard, even if your brain is melted at the end of the day. But 8-10 hours a day of physically hard labor and your body starts taking damage if you keep it up too long.
And really, building houses is a terrible example! In the US we've been chronically behind on building, to the tune of millions of housing units. People complain the processes are terribly slow and there is tons of downtime.
So yea, I don't think your analogy works at all.
[dead]
Considering how gaga r/stablediffusion is about it, they weren’t wrong. Apparently Flux 2 is dead in the water even though the knowledge contained in the model is way, way greater than Z-Image's (unsurprisingly).
Flux 2[dev] is awful.
Z-Image is getting traction because it fits on their tiny GPUs and does porn, sure, but even with more compute Flux 2 [dev] has no place.
Weak world knowledge, worse licensing, and it ruins the #1 benefit of a larger LLM backbone with post-training for JSON prompts.
LLMs already understand JSON, so additional training for JSON feels like a cheaper way to juice prompt adherence than more robust post-training.
And honestly even "full fat" Flux 2 has no great spot: Nano Banana Pro is better if you need strong editing, Seedream 4.5 is better if you need strong generation.
It's interesting that the handsome guy is literally Tony Leung Chiu-wai, https://www.imdb.com/name/nm0504897/, not even modified.
The model is uncensored, so it will probably suit that target market admirably.
Maybe both women and men prefer looking at attractive women.
I get the implication, but this is also the common configuration for fashion / beauty marketing.
Don't forget the expensive sport cars.
The ratio of naked female LoRAs to naked male LoRAs, or even non-porn LoRAs, on civitai is at least 20 to 1. This shouldn't be surprising.
Pray tell? I hope you didn't just post a sexist dogwhistle?
"The Internet is really, really great..."
https://www.youtube.com/watch?v=LTJvdGcb7Fs
Please write what you mean instead of making veiled implications. What is the point of beating around the bush here?
It's not clear to me what you mean either, especially since female models are overwhelmingly more popular in general[1].
[1]: "Female models make up about 70% of the modeling industry workforce worldwide" https://zipdo.co/modeling-industry-statistics/
> Female models make up about 70% of the modeling industry workforce worldwide
Ok so a ~2:1 ratio. Those examples have a 25:1 ratio.
We've come a long way with these image models, and the things you can do with a paltry 6B parameters are super impressive. The community has adopted this model wholesale and left Flux (2) by the wayside. It helps that Z-Image isn't censored, whereas BFL (makers of Flux 2) dedicated like a fifth of their press release to talking about how "safe" (read: censored and lobotomized) their model is.
To be fair, a lot of that was about their online service and not the model itself. It can definitely generate breasts.
That said I do find the focus on “safety” tiring.
> whereas BFL (makers of Flux 2) dedicated like a fifth of their press release to talking about how "safe" (read: censored and lobotomized) their model is.
Agreed, but let’s not confuse what it is. Talking about safety is just “WE WON'T EMBARRASS YOU IF YOU INVEST IN US”.
But this is a CCP model; would it refuse to generate Xi?
You tell me.
https://imgur.com/a/7FR3uT1
It will generate anything. Xi/Pooh porn, Taylor Swift getting squashed by a tank at Tiananmen Square, whatever, no censorship at all.
With simplistic prompts, you quickly conclude that the small model size is the only limitation. Once you realize how good it is with detailed prompts, though, you find that you can get a lot more diversity out of it than you initially thought you could.
Absolute game-changer of a model IMO. It is competitive with Nano Banana Pro in some respects, and that's saying something.
I could imagine the Chinese government is not terribly interested in enforcing its censorship laws when this would conflict with boosting Chinese AI. Overregulation can be a significant inhibitor to innovation and competitiveness, as we often see in Europe.
Explain lobotomizing an image generator? Modern problems require modern terms.
I talked about this model in some depth on the last Pretrained episode: https://youtu.be/5weFerGhO84?si=-tTYxMRPsP9DFqxP
Some interesting takeaways imo:
- Uses existing model backbones for text encoding & semantic tokens (why reinvent the wheel if you don't need to?)
- Trains on a whole lot of synthetic captions of different lengths, ostensibly generated using some existing vision LLM
- Solid text generation support is facilitated by training on all OCR'd text from the ground truth image. This seems to match how Nano Banana Pro got so good as well; I've seen its thinking tokens sketch out exactly what text to say in the image before it renders.
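The OCR point in the last bullet can be mimicked with off-the-shelf tools. Below is a rough illustration of the "caption plus OCR'd in-image text" training-data idea; this is not the authors' actual pipeline, and the helper name is made up for the sketch.

```python
# Rough illustration only: pair a caption with whatever text OCR finds in the image,
# so a text-to-image model sees exactly what string it should learn to render.
# Assumes `pip install pytesseract pillow` plus the tesseract binary installed.
from PIL import Image
import pytesseract

def build_training_caption(image_path: str, base_caption: str) -> str:
    """Append any legible in-image text to the caption (hypothetical helper)."""
    ocr_text = pytesseract.image_to_string(Image.open(image_path)).strip()
    if ocr_text:
        return f'{base_caption} The image contains the text: "{ocr_text}"'
    return base_caption

print(build_training_caption("storefront.jpg", "A photo of a neon-lit ramen shop at night."))
```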
All the examples I tried were garbage. Looked decent -- no horrors -- but didn't do the job.
Anything involving "most cultures" came out as manga-influenced comic strips with kanji. Useless.
Z-Image seems to be the first successor to Stable Diffusion 1.5 that delivers better quality, capability, and extensibility across the board in an open model that can feasibly run locally. Excitement is high and an ecosystem is forming fast.
Did you forget about SDXL?
Clearly you have, but while on the topic, it is amazing to me that it only came out 2.5 years ago.
I have been testing this on my Framework Desktop. ComfyUI generally causes an amdgpu kernel fault after about 40 steps (across multiple prompts), so I spent a few hours building a workaround here: https://github.com/comfyanonymous/ComfyUI/pull/11143
Overall it's fun and impressive, with decent results using LoRA. You can achieve good-looking results with as few as 8 inference steps, which takes 15-20 seconds on a Strix Halo. I also created a llama.cpp inference custom node for prompt enhancement, which has been helping with overall output quality.
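The prompt-enhancement idea is easy to reproduce outside ComfyUI against llama.cpp's OpenAI-compatible server (`llama-server -m model.gguf`). This is not the poster's custom node, just a sketch of the same idea; the port, system prompt, and sampling settings are assumptions.

```python
# Sketch: ask a local llama.cpp server to expand a terse prompt into a detailed one.
# llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint (default port 8080).
import requests

def enhance_prompt(short_prompt: str,
                   url: str = "http://localhost:8080/v1/chat/completions") -> str:
    resp = requests.post(url, json={
        "messages": [
            {"role": "system", "content": "Rewrite the user's image prompt as one detailed "
                                          "paragraph: subject, setting, lighting, lens, style."},
            {"role": "user", "content": short_prompt},
        ],
        "temperature": 0.7,
        "max_tokens": 200,
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

print(enhance_prompt("a cozy cabin in the snow"))
```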
Unfortunately, another China-censored model. Simply ask it to generate "Tank Man" or "Lady Liberty Hong Kong" and the model returns a blackboard with text saying "Maybe Not Safe".
It's amazing how much knowledge about the world fits into 16 GiB of the distilled model.
This is early days, too. We're probably going to get better at this across more domains.
Local AI will eventually be booming. It'll be more configurable, adaptable, hackable. "Free". And private.
Crude APIs can only get you so far.
I'm in favor of intelligent models like Nano Banana over ComfyUI messes (the future is the model, not the node graph).
I still think we need the ability to inject control layers and have full access to the model, because we lose too much utility by not having it.
I think we'll eventually get Nano Banana Pro smarts slimmed down and running on a local machine.
>Local AI will eventually be booming.
With how expensive RAM currently is, I doubt it.
It's temporary. Sam Altman booked all the supply for a year. Give it time to unwind.
I’m old enough to remember many memory price spikes.
I remember saving up for my first 128MB stick and the next week it was like triple in price.
[flagged]
Is this a joke?
Image and video models are some of the most useful tools of the last few decades.
As an AI outsider with a recent 24GB macbook, can I follow the quick start[1] steps from the repo and expect decent results? How much time would it take to generate a single medium quality image?
[1]: https://github.com/Tongyi-MAI/Z-Image?tab=readme-ov-file#-qu...
I have a 24GB M5 MacBook Pro. In ComfyUI, using the default Z-Image workflow, generating a single image just took me 399 seconds, during which the computer froze and my AirPods lost audio.
On replicate.com a single image takes 1.5s at a price of 1000 images per $1. Would be interesting to see how quick it is on ComfyUI Cloud.
Overall, running generative models locally on Macs seems like a very poor time investment.
[dead]
If you don't know anything about AI in terms of how these models are run, ComfyUI's macOS version is probably the easiest to use. There is already a Z-Image workflow that you can get, and ComfyUI will fetch all the models you need and get everything working together. You can expect decent speed.
I have a 48GB M4 Pro and every inference step takes like 10 seconds on a 1024x1024 image, so six steps and you need a minute. Not terrible, not great.
I'm fine with the quick start steps and I prefer CLI to GUI anyway. But if I try it and find it too complex, I now know what to try instead - thanks.
I'm still curious whether this would run on a MacBook and how long it would take to generate an image. What machine are you using?
Try koboldcpp with the kcppt config file. The easiest way by far.
Download the release here
* https://github.com/LostRuins/koboldcpp/releases/tag/v1.103
Download the config file here
* https://huggingface.co/koboldcpp/kcppt/resolve/main/z-image-...
Set +x on the koboldcpp executable and launch it, select 'Load config' and point it at the config file, then hit 'launch'.
Wait until the model weights are downloaded and launched, then open a browser and go to:
* http://localhost:5001/sdui
EDIT: This will work for Linux, Windows and Mac
We have vLLM for running text LLMs in production. What is the equivalent for this model?
I would say there isn't an equivalent. Some people will probably tell you ComfyUI - you can expose workflows via API endpoints and parameterize them. This is how e.g. Krita AI Diffusion uses a ComfyUI backend.
For various reasons, I doubt there are any large scale SaaS-style providers operating this in production today.
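For reference, driving ComfyUI as a headless backend the way described above looks roughly like this: export the workflow with "Save (API Format)", patch its inputs, and POST it to ComfyUI's /prompt endpoint. The host/port, the node id "6", and the filename are assumptions specific to this sketch.

```python
# Minimal sketch of ComfyUI's HTTP API: queue a workflow, then poll /history for outputs.
import json
import time
import requests

COMFY = "http://127.0.0.1:8188"

with open("z_image_workflow_api.json") as f:
    workflow = json.load(f)

# Patch the positive-prompt node (the node id depends on your exported workflow).
workflow["6"]["inputs"]["text"] = "a lighthouse on a cliff at dusk"

prompt_id = requests.post(f"{COMFY}/prompt", json={"prompt": workflow}).json()["prompt_id"]

# Wait for the job to finish, then inspect the recorded outputs (image filenames etc.).
while True:
    history = requests.get(f"{COMFY}/history/{prompt_id}").json()
    if prompt_id in history:
        print(history[prompt_id]["outputs"])
        break
    time.sleep(0.5)
```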
My issue with this model is it keeps producing Chinese people and Chinese text. I have to very specifically go out of my way to say what kind of race they are.
If I say “A man”, it’s fine. A black man, no problem. It’s when I add context and instructions that it just seems to want to go with some Chinese man. Which is fine, but I would like to see more variety in the people it’s trained on, to create more diverse images. For non-people it’s amazingly good.
All modern models have their default looks. Meaningful variety of outputs for the same inputs in finetuned models is still an open technical problem. It's not impossible, but not solved either.
I’m not sure how this is anything but a plus.
It means it respects nationality choices, and if you don’t mention one, that is your bad prompting, not a failure to default to the nationality you would prefer.
I've messed with this a bit and the distill is incredibly overbaked. Curious to see the capabilities of the full model but I suspect even the base model is quite collapsed.
Just want to learn - who actually needs or buys generated images?
I follow an author who publishes online on places like Scribblehub and has a modestly successful Patreon. Over the years he has spent probably tens of thousands of dollars on commissioned art for his stories, and he's still spending heavily on that. But as image models have gotten better, this has increasingly been supplemented with AI images for things that are worth a couple of dollars to get right with AI, but not a couple of hundred to get a human artist to do.
Roughly speaking the art seems to have three main functions:
1. promote the story to outsiders: this only works with human-made art
2. enhance the story for existing readers: AI helps here, but is contentious
3. motivate and inspire the author: works great with AI. The ease of exploration and pseudo-random permutations in the results are very useful properties here that you don't get from regular art
By now the author even has an agreement with an artist he frequently commissions that he can use his style in AI art in return for a small "royalty" payment for every such image that gets published in one of his stories. A solution driven both by the author's conscience and by the demands of the readers
Some ideas for your consideration:
- Illustrating blog posts, articles, etc.
- A creativity tool for kids (and adults; consider memes).
- Generating ads. (Consider artisan production and specialized venues.)
- Generating assets for games and similar, such as backdrops and textures.
Like any tool, it takes a certain skill to use, and the ability to understand the results.
>A creativity tool for kids (and adults; consider memes).
Fixed that for you: (and adults; consider porn).
I don't think you realize the extent of the “underground” nsfw genai community, which has to rely on open-weight models since API models all have prude filters.
Except for gaming, that doesn't sound like a huge market worthy of pouring millions into training these high-quality models. And there is a lot of competition too. I suspect there are some other deep-pocketed customers for these images. Probably animations? movies? TV ads?
I'd say that picture ad market alone would suffice.
OTOH these are open-weight models released to the public. We don't get to use more advanced models for free; the free models are likely a byproduct of producing more advanced models anyway. These models can be the freemium tier, or gateway drugs, or a way of torpedoing the competition, if you don't want to believe in the goodwill of their producers.
Propaganda?
Dying businesses like newspapers and local banks, who use it to save the money they used to spend on shutterstock images? That’s where I’ve seen it at least. Replacing one useless filler with another.
During the holiday season I've been noticing AI-generated assets on tons of meatspace ads and cheap, themed products.
Very good. Not always perfect with text or with following the prompt exactly, but it's 6B, so... impressive.
I have had good textual results with the Turbo version so far. Sometimes it drops a letter in the output, but most of the time it adheres well to both the text requested and the style.
I tried this prompt on my username: "A painted UFO abducts the graffiti text "Accrual" painted on the side of a rusty bridge."
Results: https://imgur.com/a/z-image-test-hL1ACLd
What kind of rig is required to run this?
CPU can be used:
https://github.com/rupeshs/fastsdcpu/pull/346
The simple Python example program runs great on almost any GPU with 8 GB or more memory. Takes about 1.5 seconds per iteration on a 4090.
The bang:buck ratio of Z-Image Turbo is just bonkers.
Does it run on apple silicon?
Apparently - https://github.com/ivanfioravanti/z-image-mps
Supports MPS (Metal Performance Shaders). Using something that skips Python entirely, along with an MLX- or GGUF-converted model file (if one exists), will likely be even faster.
(Not tested) though apparently it already exists: https://github.com/leejet/stable-diffusion.cpp/wiki/How-to-U...
It's working for me - it does max out my 64GB though.
Wow. I always forget how unlike autoregressive models, diffusion models are heavier on resources (for the same number of parameters).
Did anyone test it on 5090? I saw some 30xx reports and it seemed very fast
Incredibly fast. On my 5090 with CUDA 13 (and the latest diffusers, xformers, transformers, etc.), 9 sampling steps, and the "Tongyi-MAI/Z-Image-Turbo" model, I get:
- 1.5s to generate an image at 512x512
- 3.5s to generate an image at 1024x1024
- ~26s to generate an image at 2048x2048
It uses almost all of the 32 GB of VRAM and nearly full GPU utilization. I'm using the script from the HF post: https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
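For readers who haven't opened the HF page, the script boils down to roughly the sketch below. The pipeline class resolution and the guidance setting are assumptions (check the model card for the authoritative version); the step count matches the 9 sampling steps reported above.

```python
# Minimal text-to-image sketch for Z-Image Turbo; settings are illustrative, not official.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="macro photo of a dew-covered spider web at sunrise",
    num_inference_steps=9,   # turbo/distilled models use few steps
    guidance_scale=1.0,      # and typically little or no CFG (assumed here)
    height=1024,
    width=1024,
).images[0]
image.save("z_image_turbo.png")
```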
Weird, even at 2048 I don’t think it should be using all your 32GB VRAM.
Even on my 4080 it's extremely fast, it takes ~15 seconds per image.
Did you use PyTorch Native or Diffusers Inference? I couldn't get the former working yet so I used Diffusers, but it's terribly slow on my 4080 (4 min/image). Trying again with PyTorch now, seems like Diffusers is expected to be slow.
Uh, not sure? I downloaded the portable build of ComfyUI and ran the CUDA-specific batch file it comes with.
(I'm not used to using Windows and I don't know how to do anything complicated on that OS. Unfortunately, the computer with the big GPU also runs Windows.)
Haha, I know how it goes. Thanks, I'll give that a try!
Update: works great and much faster via ComfyUI + the provided workflow file.
I'm particularly impressed by the fact that they seem to aim for photorealism rather than the semi-realistic AI-look that is common in many text-to-image models.
Exactly, and at the same time, if you want an affected style, all you have to do is ask for it.
It would be more useful to have some standards on what one could expect in terms of hardware requirements and expected performance.
I wish they would have used the WAN VAE.
Dude, please give money to artists instead of using genAI