Ah, I think I searched for "jpegxl", that's why there was no match.
"Yes, re-opening.".
> Given these positive signals, we would welcome contributions to integrate a performant and memory-safe JPEG XL decoder in Chromium. In order to enable it by default in Chromium we would need a commitment to long-term maintenance. With those and our usual launch criteria met, we would ship it in Chrome.
https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...
LOL. Google, the "yeah that thing we bought six months ago, we're killing it off 30 days for 4 weeks ago" company demanding "long-term" anything.
long term support is actually being provided by google...
just a different team in a different country :D
most jxl devs are at google research in zurich, and have already pledged to handle long-term support
JPEG-XL provides the best migration path for image conversion from JPEG, with lossless recompression. It also supports arbitrary HDR bit depths (up to 32 bits per channel) unlike AVIF, and generally its HDR support is much better than AVIF's. Other operating systems and applications were making strides towards adopting this format, but until now Google was stubbornly holding the web back by refusing to support JPEG-XL in favour of AVIF, which they were pushing. I’m glad to hear they’re finally reconsidering. Let’s hope this leads to resources being dedicated to help build and maintain a performant and memory safe decoder (in Rust?).
It's not just Google, Mozilla has no desire to introduce a barely supported massive C++ decoder for marginal gains either:
https://github.com/mozilla/standards-positions/pull/1064
avif is just better for typical web image quality, it produces better looking images and its artifacts aren't as annoying (smoothing instead of blocking and ringing around sharp edges).
You also get it for basically free because it's just an av1 key frame. Every browser needs an av1 decoder already unless it's willing to forego users who would like to be able to watch Netflix and YouTube.
I don't understand what you're trying to say. Mozilla said over a year ago that they would support JXL as soon as there's a fast, memory-safe decoder that will be maintained.
Google on the other hand never expressed any desire to support JXL at all, regardless of the implementation. Only just now after the PDF Association announced that PDF would be using JXL, did they decide to support JXL on the web.
> avif is just better for typical web image quality, it produces better looking images and its artifacts aren't as annoying (smoothing instead of blocking and ringing around sharp edges).
AVIF is certainly better for the level of quality that Google wants you to use, but in reality, images on the web are much higher quality than that.
And JXL is pretty good if you want smoothing; in fact, libjxl's defaults have gotten so overly smooth recently that it's considered a problem, which they're in the process of fixing.
> I don't understand what you're trying to say. Mozilla said over a year ago that they would support JXL as soon as there's a fast, memory-safe decoder that will be maintained.
Did they actually say that? All the statements I've seen from them have been much more guarded and vague. More of a "maybe we will think about it if that happens."
> If they successfully contribute an implementation that satisfies these properties and meets our normal production requirements, we would ship it.
That's what they said a year ago. And a couple of Mozilla devs have been in regular contact with the JXL devs ever since then, helping with the integration. The patches to use jxl-rs with Firefox already exist, and will be merged as soon as a couple of prerequisite issues in Gecko are fixed.
I disagree about the image quality at typical sizes - I find JPEG-XL is generally similar to or better than AVIF at any reasonable compression ratio for web images. See this for example: https://tonisagrista.com/blog/2023/jpegxl-vs-avif/
AVIF only comes out as superior at extreme compression ratios at much lower bit rates than are typically used for web images, and the images generally look like smothered messes at those extreme ratios.
Even though AVIF decoding support is fairly widespread by now, it is still not ubiquitous like JPEG/PNG/GIF. So typically services will store or generate the same image in multiple formats including AVIF for bandwidth optimization and JPEG for universal client support. Browser headers help to determine compatibility, but it's still fairly complicated to implement, and users also end up having to deal with different platforms supporting different formats when they are served WebP or AVIF and want to reupload an image somewhere else that does not like those formats. As far as I can tell, JXL solves that issue for most websites since it is backwards-compatible and can be decoded into JPEG when a client does not support JXL. I would happily give up a few percent in compression efficiency to get back to a single all-purpose lossy image format.
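To make the negotiation part concrete, here is a rough sketch of the kind of Accept-header check a server or CDN ends up doing for every image request (Python, purely illustrative; the function name is mine, and real setups also weigh q-values, caching and Vary: Accept):

    def pick_format(accept_header: str) -> str:
        """Pick the best image variant a client says it can decode.
        Falls back to JPEG, the only thing everyone supports."""
        accept = accept_header.lower()
        for mime, ext in (("image/jxl", "jxl"),
                          ("image/avif", "avif"),
                          ("image/webp", "webp")):
            if mime in accept:
                return ext
        return "jpg"

    # e.g. pick_format("image/avif,image/webp,image/apng,*/*") -> "avif"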
Even Google Photos does not support AVIF.
It's almost as if Google had an interest in increased storage and bandwidth. Of course they don't, but as a paying Drive user I'm overcharged for the same thing.
Some years ago, the Google Photos team asked the Chrome team to support JXL, so that they could use it for Photos. The request was ignored, of course.
> Mozilla has no desire to introduce a barely supported massive C++ decoder for marginal gains
On a slightly related note, I wanted to have an HDR background image in Windows 11. Should be a breeze in 2025, right?
Well, Windows 11 only supports JPEG XR[1] for HDR background images. And my commonly used tools either did not support JPEG XR (GIMP, for example) or did not work correctly (ImageMagick).
So I had a look at the JPEG XR reference implementation, which was hosted on CodePlex but has been mirrored on GitHub[2]. And boy, I sure hope that isn't the code that lives in Windows 11...
OK, most of the gunk is in the encoder/decoder wrapper code, but still, for something that's supposedly still in active use by Microsoft... Though the fact that they don't even host their own copy of the reference implementation is telling enough, I suppose.
[1]: https://en.wikipedia.org/wiki/JPEG_XR
[2]: https://github.com/4creators/jxrlib
Another JPEG XR user is Zeiss. It saves both grayscale and color microscope images with JPEG XR compression in a container format. Zeiss also released a C++ library (libczi) that uses the reference JPEG XR implementation to read/write these images. Zeiss seems to be moving away from JPEG XR, though - its newer version of the microscope control software saves with zstd compression by default.
Not everything in the world is passive end-of-the-line presentation. JPEG-XL is the only one that tries to be a general-purpose image format.
If that's the case, let it be a feature of image editing packages that can output formats that are for the web. It's a web standard we're talking about here, not a general-purpose image format, so asking browsers to carry that big code load seems unreasonable when existing formats do most of what we need and want for the web.
People generally expect browsers to display general-purpose image formats. It's why they support formats like classical JPEG, instead of just GIF and PNG.
Turns out people really like being able to just drag-and-drop an image from their camera into a website - being forced to re-encode it first isn't exactly popular.
> Turns out people really like being able to just drag-and-drop an image from their camera into a website - being forced to re-encode it first isn't exactly popular.
That’s a function of the website, not the browser.
> That’s a function of the website, not the browser.
That's hand-waving away quite a lot. The task changes from serving a copy of a file on disk, as with every other image format in common use, to needing a transcoding pipeline more akin to sites like YouTube. Technically possible, but lots of extra complexity in return for what gain?
"Marginal Gains"
Generation Loss – JPEG, WebP, JPEG XL, AVIF : https://www.youtube.com/watch?v=w7UDJUCMTng
>avif is just better for typical web image quality,
What does "typical web image quality" even mean? I see lots of benchmarks with very low BPPs, like 0.5 or even lower, and that's where video-based image codecs shine.
However, I just visited CNN.com and these are the BPPs of the first 10 images my browser loaded: 1.40, 2.29, 1.88, 18.03 (PNG "CNN headlines" logo), 1.19, 2.01, 2.21, 2.32, 1.14, 2.45.
I believe people are underestimating the BPP values that are actually used on the web. I'm not saying that low-BPP images don't exist, but clearly it isn't hard to find examples of higher-quality images in the wild.
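(If you want to check your own numbers: BPP is just file size in bits divided by pixel count. A quick sketch, assuming Pillow is installed; the function name is mine:)

    import os
    from PIL import Image

    def bpp(path: str) -> float:
        """Bits per pixel = (file size in bits) / (width * height)."""
        with Image.open(path) as im:
            w, h = im.size
        return os.path.getsize(path) * 8 / (w * h)

    # e.g. a 100 KB JPEG at 800x600 works out to ~1.7 bpp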
Can AVIF display 10-bit HDR with the larger color gamut that any modern phone nowadays is capable of capturing?
if you actually read your parent comment: "typical web image quality"
Typical web image quality is what it is partly because of lack of support. It’s literally more difficult to show a static HDR photo than a whole video!
PNG supports HDR with up to 16 bits per channel, see https://www.w3.org/TR/png-3/ and the cICP, mDCV and cLLI chunks.
With incredibly bad compression ratios.
HDR should not be "typical web" anything. It's insane that websites are allowed to override my system brightness setting through HDR media. There's so much stuff out there that literally hurts my eyes if I've set my brightness such that pure white (SDR FFFFFF) is a comfortable light level.
I want JXL in web browsers, but without HDR support.
There's nothing stopping browsers from tone mapping[1] those HDR images using your tone mapping preference.
[1]: https://en.wikipedia.org/wiki/Tone_mapping
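Not that any browser ships exactly this, but a global operator like Reinhard is about as simple as tone mapping gets - a rough numpy sketch, assuming linear-light input values where 1.0 is SDR white:

    import numpy as np

    def reinhard_tonemap(linear_rgb: np.ndarray) -> np.ndarray:
        """Global Reinhard operator: x / (1 + x).
        Maps [0, inf) into [0, 1): dark values pass through nearly
        unchanged (x/(1+x) ~ x for small x), while very bright HDR
        highlights are compressed toward 1.0 instead of blowing past
        the display's SDR white level."""
        return linear_rgb / (1.0 + linear_rgb)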
What does that achieve? Isn't it simpler to just not support HDR than to support HDR but tone map away the HDR effect?
Anyway, which web browsers have a setting to tone map HDR images such that they look like SDR images? (And why should "don't physically hurt my eyes" be an opt-in setting anyway instead of just the default?)
Wanted to note https://issues.chromium.org/issues/40141863 on making the lossless JPEG recompression a Content-Encoding, which provides a way that, say, a CDN could deploy it in a way that's fully transparent to end users (if the user clicks Save it would save a .jpg).
(And: this is great! I think JPEG XL has a chance of being adopted with the recompression "bridge" and fast decoding options, and things like progressive decoding for its VarDCT mode are practical advantages too.)
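(If you want to convince yourself the recompression really is byte-exact, something like the following does the trick - a sketch, assuming the libjxl command-line tools cjxl and djxl are installed, not production code:)

    import hashlib, os, subprocess, tempfile

    def _sha256(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def jxl_roundtrip_is_byte_exact(jpeg_path: str) -> bool:
        """Transcode a JPEG to JXL and back, then compare hashes."""
        with tempfile.TemporaryDirectory() as tmp:
            jxl_path = os.path.join(tmp, "img.jxl")
            back_path = os.path.join(tmp, "back.jpg")
            # cjxl keeps JPEG reconstruction data by default for JPEG input
            subprocess.run(["cjxl", jpeg_path, jxl_path], check=True)
            # djxl reconstructs the original JPEG stream for .jpg output
            subprocess.run(["djxl", jxl_path, back_path], check=True)
            return _sha256(jpeg_path) == _sha256(back_path)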
> (in Rust?)
Looks like that's the idea: https://issues.chromium.org/issues/462919304
> and generally its HDR support is much better than AVIF
Not anymore. JPEG had the best HDR support with ISO 21496-1 weirdly enough, but AVIF also just recently got that capability with 1.2 ( https://aomedia.org/blog%20posts/Libavif-Improves-Support-fo... ).
The last discussion in libjxl about this seemingly took the stance that it wasn't necessary since JXL has "native HDR", which fails to understand the problem space entirely.
The JXL spec already has gainmaps...
Also, just because there's a spec for using gainmaps with JPEG doesn't mean that it works well. With only 8 bits of precision, it really sucks for HDR, gainmap or no gainmap. You just get too much banding. JXL otoh is completely immune to banding, with or without gainmaps.
> With only 8 bits of precision, it really sucks for HDR, gainmap or no gainmap. You just get too much banding.
This is simply not true. In fact, you get less banding than you do with 10-bit bt2020 PQ.
> JXL otoh is completely immune to banding
Nonsense. It has a lossy mode (which is its primary mode so to speak), so of course it has banding. Only lossless codecs can plausibly be claimed to be "immune to banding".
> The JXL spec already has gainmaps...
Ah, looks like they added that sometime last year but decided to call it "JHGM", made almost no mention of it in the issue tracker, and didn't bother updating the previous feature requests asking for this, which are still open.
> performant and memory safe decoder (in Rust?).
Isn't this exactly the case that wuffs [1] is built for? I had the vague (and, looking into it now, probably incorrect) impression that Google was going to start building all their decoders with that.
[1] https://github.com/google/wuffs
WUFFS only works for very simple codecs. Basically useless for anything complex enough that memory bugs would be common.
Love this, been waiting for Google to integrate this. From my experience with AVIF and JPEG XL, JPEG XL is much more promising for the next 20 years.
Nice example of how a standard, like PDF, can even persuade/force one of the mighty to adopt a crucial bit of technology, so that it may become a common standard in its own right (i.e. "cascading standards").
I like how even the spin-off product (jpegli) is a significant improvement. I am in the process of converting my comic book collection. I save a lot of space and still use JPEG, which is universally supported.
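For anyone doing the same, a minimal batch loop - assuming the cjpegli encoder that ships with libjxl is on your PATH, with hypothetical directory names; flags and defaults may vary by version:

    import pathlib, subprocess

    src = pathlib.Path("comic_pages")          # hypothetical input directory
    dst = pathlib.Path("comic_pages_jpegli")   # hypothetical output directory
    dst.mkdir(exist_ok=True)

    for page in sorted(src.glob("*.png")):
        out = dst / page.with_suffix(".jpg").name
        # re-encode with jpegli; the output is a plain JPEG any viewer can open
        subprocess.run(["cjpegli", str(page), str(out)], check=True)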
> Lossless JPEG recompression (byte-exact JPEG recompression, saving about 20%) for legacy images
Lossless recompression is the main interesting thing on offer here compared to other new formats... and honestly with only 20% improvement I can't say I'm super excited by this, compared to the pain of dealing with yet another new image format.
For example, ask a normal social media user how they feel about .webp and expect to get an earful. The problem is that even if your browser supports the new format, there's no guarantee that every other tool you use supports it, from the OS to every site you want to re-upload to, etc.
If I remember correctly, WebP was single-handedly forced into adoption by Chrome, while offering only marginal improvements over existing formats. Mozilla even worked on an improved JPEG encoder, MozJPEG, to show it could compete with WebP very well. Then came HEIF and AVIF, which, like WebP, were just repurposed video codecs.
JPEG XL is the first image format in a long while that has actually been designed for images and brings a substantial improvement in quality while also covering a wide range of uses and preserving features that video codecs don't have. It supports progressive decoding, seamlessly handles very large image sizes, allows a potentially large number of channels, is reasonably resilient against generation loss, and more. The fact that it has no major drawbacks alone gives it much more merit than WebP has ever had. Lossless recompression is in addition to all of that.
The difference is that this time around, Google has single-handedly held back the adoption of JPEG XL, while a number of other parties have expressed interest.
Having a PNG go from 164.5K to 127.1K as lossless WEBP is not what I'd call "marginal". An improvement of over 20% is huge for lossless compression.
Going from lossless WEBP to lossless JXL is marginal though, and is not worth the big decode performance loss.
In context of the parent comment, 'only 20% improvement' is not super exciting, 'compared to the pain of dealing with yet another new image format'.
You raise a good point, though; WebP certainly did (and continues to do) well in some areas, but at the cost of lacking in others. Moreover, when considering a format for adoption, one should compare it with other candidates for adoption, too. And years before WebP gained widespread support in browsers, it had competition from other interesting formats like FLIF, which addressed some of its flaws, and I have to wonder how it compares to the even older JPEG 2000.
Since the person you replied to mentioned MozJPEG, I have to assume they meant that WebP's lossy capabilities were a marginal improvement.
You're not being fair. WebP has been the only choice for lossy image compression with an alpha channel. Give it some credit.
Fair point, though not entirely true: you can run an image through lossy compression and store the result in a PNG, using tools like pngquant [1]. Likely not as efficient for many kinds of images, but totally doable.
[1] https://pngquant.org/
Since the recompression is lossless, you don’t need every tool you use to support it, as long as one of them is one that can do the decompression back to JPEG. This sounds a bit like complaining that you can’t upload .7z everywhere.
20% is massive for the people storing all those social media images, though.
I get that there are people who are super excited by this for very good reasons, but for those of us downstream this is just going to be a hassle.
I think there's a difference here.
If I right-click save and get a WebP, it was probably converted from a JPEG. Very, very few images are uploaded as WebP. So getting a WebP image means you've downloaded an inferior version.
JXL doesn't have this issue because conversion from JPEG is lossless. So you've still gotten the real, full-quality image.
> Chrome Jpegxl Issue Reopened
> (this is the tracking bug for this feature)
Is it just me -- or is it confusing to use the terms issue / bug / feature interchangeably?
It's not really used interchangeably: "bug" is used to mean "entry in the bug tracker database", while "feature" is used to mean what we colloquially think of as a feature of a computer program.
It's arguably a slight abuse of a bug tracking system to also track progress and discussion on features, but it's not exactly uncommon; it's just that many systems would call it an "issue" rather than a "bug".
Maybe more like a heading bug:
https://aviation.stackexchange.com/questions/23166/what-is-t...
Not really -- they're all "potential todos" that need to be tracked and prioritized in the same place.
And the difference between a bug and a feature is often in the eye of the beholder. I'll very often title a GitHub issue with "Bug/Feature Request:" since it's often debatable whether the existing behavior was by design or not, and I don't want to presume one way or the other.
So I do consider them all pretty interchangeable at the end of the day, and therefore not really confusing.
[dupe] https://news.ycombinator.com/item?id=46021179
[flagged]
This comment is of course breaking the HN Guidelines as a shallow dismissal, but the parent is right: after Google killed uBlock Origin and turned Android into a nanny OS, I have no idea why anyone would stick to anything from them. Also, Firefox is better in almost every way.