> At Netflix, our top priority is delivering the best possible entertainment experience to our members.
I don't think that is true of any streamer. Otherwise they wouldn't provide the UI equivalent of a shopping centre that tries to get you lost and unable to find your way out.
Wow. To me, the big news here is that ~30% of devices now support AV1 hardware decoding. The article lists a bunch of examples of devices that have gained it in the past few years. I had no idea it was getting that popular -- fantastic news!
So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support, I wonder what will be the next one?
> To me, the big news here is that ~30% of devices now support AV1 hardware decoding
Where did it say that?
> AV1 powers approximately 30% of all Netflix viewing
Is admittedly a bit non-specific; it could be interpreted as 30% of users or 30% of hours-of-video-streamed, which are very different metrics. If 5% of your users are using AV1, but that 5% watches far above the average, you can have a minority userbase with an outsized representation in hours viewed.
I'm not saying that's the case, just giving an example of how it doesn't necessarily translate to 30% of devices using Netflix supporting AV1.
Also, the blog post identifies that there is an effective/efficient software decoder, which allows people without hardware acceleration to still view AV1 media in some cases (the case they defined was Android-based phones). So that kinda complicates what "X% of devices support AV1 playback" means, as it doesn't necessarily mean they have hardware decoding.
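To put rough numbers on that 5% example, here's a quick back-of-envelope sketch in Python (every value is hypothetical, not from the post):

    av1_user_share = 0.05      # hypothetical: 5% of users on AV1-capable setups
    av1_hours_multiplier = 8   # hypothetical: those users watch ~8x as much as everyone else
    other_user_share = 1 - av1_user_share

    av1_hours = av1_user_share * av1_hours_multiplier
    other_hours = other_user_share * 1.0
    print(f"{av1_hours / (av1_hours + other_hours):.0%} of hours viewed")  # ~30%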
“30% of viewing” I think clearly means either time played or items played. I’ve never worked with a data team that would possibly write that and mean users.
If it was a stat about users they’d say “of users”, “of members”, “of active watchers”, or similar. If they wanted to be ambiguous they’d say “has reached 30% adoption” or something.
Agreed, but this is the internet, the ultimate domain of pedantry, and they didn't say it explicitly, so I'm not going to put words in their mouth just to have a circular discussion about why I'm claiming they said something they didn't technically say, which is why I asked "Where did it say that" at the very top.
Also, either way, my point was and still stands: it doesn't say 30% of devices have hardware decoding.
> So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support, I wonder what will be the next one?
Hopefully AV2.
H266/VVC has a five-year head-start over AV2, so probably that first, unless hardware vendors decide to skip it entirely. The final AV2 spec is due this year, so any day now, but it'll take a while to make its way into hardware.
H266 is getting fully skipped (except possibly by Apple). The licensing is even worse than H265, the gains are smaller, and Google+Netflix have basically guaranteed that they won't use it (in favor of AV1 and AV2 when ready).
[deleted]
VVC is pretty much a dead end at this point. Hardly anyone is using it; its benefits over AV1 are extremely minimal and no one wants the royalty headache. Basically everyone learned their lesson with HEVC.
If it has a five year start and we've seen almost zero hardware shipping that is a pretty bad sign.
IIRC AV1 decoding hardware started shipping within a year of the bitstream being finalized. (Encoding took quite a bit longer but that is pretty reasonable)
https://en.wikipedia.org/wiki/Versatile_Video_Coding#Hardwar...
Yeah, that's... sparse uptake. A few smart TV SoCs have it, but aside from Intel it seems that none of the major computer or mobile vendors are bothering. AV2 next it is then!
When even H.265 is being dropped by the likes of Dell, adoption of H.266 will be even worse, making it basically DOA for anything promising. It's plagued by the same problems H.265 is.
I'm not too surprised. It's similar to the "XX% of the Internet is on IPv6" metric -- it's almost entirely driven by mobile devices, specifically phones. As soon as both mainstream Android and iPhones support it, the adoption of AV1 should be very 'easy'.
(And yes, even for something like Netflix lots of people consume it with phones.)
how does that mean "~30% of devices now support AV1 hardware encoding"? I'm guessing you meant hardware decoding???
Whoops, thanks. Fixed.
Not trolling, but I'd bet something that's augmented with generative AI. Not to the level of describing scenes with words, but context-aware interpolation.
https://blogs.nvidia.com/blog/rtx-video-super-resolution/
We already have some of the stepping stones for this. But honestly it's much better for upscaling poor-quality streams; on an already higher-quality stream it just gives things a weird feeling.
for sure. macroblock hinting seems like a good place for research.
>So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support
That'd be h264 (associated patents expired in most of the world), vp9 and av1.
h265 aka HEVC is less common due to dodgy, abusive licensing. Some vendors even disable it with drivers despite hardware support because it is nothing but legal trouble.
> AV1 streaming sessions achieve VMAF scores¹ that are 4.3 points higher than AVC and 0.9 points higher than HEVC sessions. At the same time, AV1 sessions use one-third less bandwidth than both AVC and HEVC, resulting in 45% fewer buffering interruptions.
Just thought I'd extract the part I found interesting as a performance engineer.
Amazing. Proprietary video codecs need to not be the default and this is huge validation for AV1 as a production-ready codec.
Why does it matter if Netflix is using an open standard if every video they stream is wrapped in proprietary closed DRM?
Because device makers won't care about the DRM, but they will care about the hardware decoder they need to put into their devices to decode Netflix videos. By ensuring this video codec is open, it benefits everybody else now, as this same device will now be able to hardware decode _more_ videos from different video providers, as well as make more video providers choose AV1.
Basically, a network effect for an open codec.
You’ve convinced me… (no snark intended)
[dead]
> Why does it matter if Netflix is using an open standard if every video they stream is wrapped in proprietary closed DRM?
I am not sure if this is a serious question, but I'll bite in case it is.
Without DRM Netflix's business would not exist. Nobody would license them any content if it was going to be streamed without DRM.
I had forgotten about the film-grain extraction, which is a clever approach to a huge problem for compression.
But... did I miss it, or was there no mention of any tool to specify grain parameters up front? If you're shooting "clean" digital footage and you decide in post that you want to add grain, how do you convey the grain parameters to the encoder?
It would degrade your work and defeat some of the purpose of this clever scheme if you had to add fake grain to your original footage, feed the grainy footage to the encoder to have it analyzed for its characteristics and stripped out (inevitably degrading real image details at least a bit), and then have the grain re-added on delivery.
So you need a way to specify grain characteristics to the encoder directly, so clean footage can be delivered without degradation and grain applied to it upon rendering at the client.
You just add it to your original footage, and accept whatever quality degradation that grain inherently provides.
Any movie or TV show is ultimately going to be streamed in lots of different formats. And when grain is added, it's often on a per-shot basis, not uniformly. E.g. flashback scenes will have more grain. Or darker scenes will have more grain added to emulate film.
Trying to tie it to the particular codec would be a crazy headache. For a solo project it could be doable but I can't ever imagine a streamer building a source material pipeline that would handle that.
There's an HDR war brewing on TikTok and other social apps. A fraction of posts that use HDR are just massively brighter than the rest; the whole video shines like a flashlight. The apps are eventually going to have to detect HDR abuse.
The whole HDR scene still feels like a mess.
I know how bad the support for HDR is on computers (particularly Windows and cheap monitors), so I avoid consuming HDR content on them.
But I just purchased a new iPhone 17 Pro, and I was very surprised at how these HDR videos on social media still look like shit on apps like Instagram.
And even worse, the HDR video I shoot with my iPhone looks like shit even when playing it back on the same phone! After a few trials I had to just turn it off in the Camera app.
I wonder if it fundamentally only really makes sense for film, video games, etc. where a person will actually tune the range per scene. Plus, only when played on half decent monitors that don’t just squash BT.2020 so they can say HDR on the brochure.
The HDR implementation in Windows 11 is fine. And it's not even that bad in 11 in terms of titles and content officially supporting HDR. Most of the perception that it's "bad" comes from the "cheap monitor" part, not Windows.
I have zero issues and only an exceptional image on W11 with a PG32UQX.
The only time I shoot HDR on anything is because I plan on crushing the shadows/raising highlights after the fact. S curves all the way. Get all the dynamic range you can and then dial in the look. Otherwise it just looks like a flat washed out mess most of the time
This is one of the reasons I don't like HDR support "by default".
HDR is meant to be so much more intense, it should really be limited to things like immersive full-screen long-form-ish content. It's for movies, TV shows, etc.
It's not what I want for non-immersive videos you scroll through, ads, etc. I'd be happy if it were disabled by the OS whenever not in full screen mode. Unless you're building a video editor or something.
Or a photo viewer, which isn't necessarily running in fullscreen.
Just what we need, a new loudness war, but for our eyeballs.
https://en.wikipedia.org/wiki/Loudness_war
What if they did HDR for audio? So an audio file can tell your speakers to output at 300% of the normal max volume, even more than what compression can do.
Interestingly, the loudness war was essentially fixed by the streaming services. They were in a similar situation as Tik Tok is now.
What's the history on the end to the loudness war? Do streaming services renormalize super compressed music to be quieter than the peaks of higher dynamic range music?
Yes. Basically the streaming services started using a decent model of perceived loudness, and normalise tracks to roughly the same perceived level. I seem to remember that Apple (the computer company, not the music company) was involved as well, but I need to re-read the history here. Their music service and mp3 players were popular back in the day.
So all music producers got out of compressing their music was clipping, and not extra loudness when played back.
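As a minimal sketch of what that playback normalization amounts to (the LUFS values below are illustrative, not any particular service's policy):

    measured_lufs = -8.0    # hypothetical heavily compressed "loud" master
    target_lufs = -14.0     # hypothetical platform normalization target

    gain_db = target_lufs - measured_lufs   # -6 dB: the loud master gets turned DOWN
    linear_gain = 10 ** (gain_db / 20)      # ~0.5x amplitude
    print(f"apply {gain_db:+.1f} dB ({linear_gain:.2f}x amplitude)")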
It hasn't really changed much in the mastering process; they still are doing the same old compression. Maybe not to the same extremes, but dynamic range is still usually terrible. They do it at a higher LUFS target than the streaming platforms normalize to, because each streaming platform has a different limit and could change it at any time, so better to be on the safe side. Also, the majority of music listening doesn't happen on good speakers or in a good environment.
You would think, but not in a way that matters. Everyone still compresses their mixes. People try to get around normalization algorithms by clever hacks. The dynamics still suffer, and bad mixes still clip. So no, I don’t think streaming services fixed the loudness wars.
Sounds like they need something akin to audio volume normalization but for video. You can go bright, but only in moderation, otherwise your whole video gets dimmed down until the average is reasonable.
[dead]
My phone has this cool feature where it doesn't support HDR.
HDR has a slight purpose, but the way it was rolled out was so disrespectful that I just want it permanently gone everywhere. Even the rare times it's used in a non-abusive way, it can hurt your eyes or make things display weirdly.
That's true on the web, as well; HDR images on web pages have this problem.
It's not obvious whether there's any automated way to reliably detect the difference between "use of HDR" and "abuse of HDR". But you could probably catch the most egregious cases, like "every single pixel in the video has brightness above 80%".
> It's not obvious whether there's any automated way to reliably detect the difference between "use of HDR" and "abuse of HDR".
That sounds like a job our new AI overlords could probably handle. (But that might be overkill.)
Funnily enough HDR already has to detect this problem, because most HDR monitors literally do not have the power circuitry or cooling to deliver a complete white screen at maximum brightness.
My idea is: for each frame, grayscale the image, then count what percentage of the screen is above the standard white level. If more than 20% of the image is >SDR white level, then tone-map the whole video to the SDR white point.
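A minimal Python/NumPy sketch of that heuristic, assuming each frame is already available as a 2D array of per-pixel luminance in nits; the 100-nit white level and 20% threshold are illustrative knobs, not standards:

    import numpy as np

    SDR_WHITE_NITS = 100.0   # assumed SDR reference white
    ABUSE_FRACTION = 0.20    # flag a frame if >20% of pixels exceed SDR white

    def frame_abuses_hdr(luminance_nits: np.ndarray) -> bool:
        # Fraction of pixels brighter than SDR white in this frame
        return np.mean(luminance_nits > SDR_WHITE_NITS) > ABUSE_FRACTION

    def tone_map_to_sdr(luminance_nits: np.ndarray) -> np.ndarray:
        # Crude clamp to the SDR white point; a real tone mapper would roll off highlights
        return np.minimum(luminance_nits, SDR_WHITE_NITS)

    # Usage: if enough frames trip the check, tone-map the whole video
    # if sum(frame_abuses_hdr(f) for f in frames) > 0.5 * len(frames):
    #     frames = [tone_map_to_sdr(f) for f in frames]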
That needs a temporal component as well: games and videos often use HDR for sudden short-lived brightness.
Can someone explain what the war is about?
Like HDR abuse makes it sound bad, because the video is bright? Wouldn't that just hurt the person posting it since I'd skip over a bright video?
Sorry if I'm phrasing this all wrong, don't really use TikTok
> Wouldn't that just hurt the person posting it since I'd skip over a bright video?
Sure, in the same way that advertising should never work since people would just skip over a banner ad. In an ideal world, everyone would uniformly go "nope"; in our world, it's very much analogous to the https://en.wikipedia.org/wiki/Loudness_war .
Not everything that glitters (or blinds) is gold.
sounds like every fad that came before it, where it was overused by all of the people copying it with no understanding of what it is or why. remember all of the HDR still images that pushed everything to look post-apocalyptic? remember all of the people pushing washed-out videos because they didn't know how to grade images recorded in log, and it became a "thing"?
eventually, it'll wear itself out just like every other overuse of the new
I would love to know who the hell thought adding "brighter than white" range to HDR was a good idea. Or, even worse, who the hell at Apple thought implementing that should happen by way of locking UI to the standard range. Even if you have a properly mastered HDR video (or image), and you've got your brightness set to where it doesn't hurt to look at, it still makes all the UI surrounding that image look grey. If I'm only supposed to watch HDR in fullscreen, where there's no surrounding UI, then maybe you should tone-map to SDR until I fullscreen the damn video?
Yup, totally agreed. I said the same thing in another comment -- HDR should be reserved only for full-screen stuff where you want to be immersed in it, like movies and TV shows.
Unless you're using a video editor or something, everything should just be SDR when it's within a user interface.
HDR videos on social media look terrible because the UI isn't in HDR while the video is. So you have this insanely bright video that more or less ignores your brightness settings, and then dim icons on top of it that almost look incomplete or fuzzy because of their surroundings. It looks bizarre and terrible.
It's good if you have black text on a white background, since your app can have good contrast without searing your eyes. People started switching to dark themes to avoid having their eyeballs seared by monitors with the brightness set high.
For things filmed with HDR in mind it's a benefit. Bummer things always get taken to the extreme.
The alternative is even worse, where the whole UI is blinding you. Plus, that level of brightness isn't meant to be sustained.
The solution is for social media to be SDR, not for the UI to be HDR.
Imo the real solution is for luminance to scale appropriately even in the HDR range, kinda like how gain-map HDR images can. Scaled both with regard to the display's capabilities and the user's/app's intents.
Not sure how it works on Android, but it's such amateur UX on Apple's part.
99.9% of people expect HDR content to get capped / tone-mapped to their display's brightness setting.
That way, HDR content is just magically better. I think this is already how HDR works on non-HDR displays?
For the 0.01% of people who want something different, it should be a toggle.
Unfortunately I think this is either (A) amateur enshittification like with their keyboards 10 years ago, or (B) Apple specifically likes how it works since it forces you to see their "XDR tech" even though it's a horrible experience day to day.
99% of people have no clue what “HDR” and “tone-mapping” mean, but yes, they are probably weirded out by some videos being randomly way brighter than everything else.
Android finally addressed this issue with the latest release. https://9to5google.com/2025/12/02/the-top-new-features-andro...
But isn't it the point? Try looking at a light bulb; everything around it is so much less bright.
OTOH pointing a flashlight at your face is at least impolite. I would put a dark filter on top of HDR videos until a video is clicked for watching.
I'm surprised AV1 usage is only at 30%. Is AV1 so demanding that Netflix clients without AV1 hardware acceleration capabilities would be overwhelmed by it?
Thanks to libdav1d's [1] lovingly hand-crafted SIMD ASM instructions it's actually possible to reasonably play back AV1 without hardware acceleration, but basically yes: from Snapdragon 8 onwards, Google Tensor G3 onwards, NVIDIA RTX 3000 series onwards. All relatively new.
[1] https://code.videolan.org/videolan/dav1d
It's possible without specific hardware acceleration, but murderous for mobile devices.
Even RISC-V vector assembly[0].
0. https://code.videolan.org/videolan/dav1d/-/issues/435
There are a lot of 10-year-old TVs/fire sticks still in use that have a CPU that maxes out running the UI and rely exclusively on hardware decoding for all codecs (e.g. they couldn't software decode h264 either). Imagine a super budget phone from ~2012 and you'll have some idea of the hardware capability we're dealing with.
Compression gains will mostly be for the benefit of the streaming platform's bills/infra, unless you're trying to stream 4K 60fps on hotel wifi (or if you can't decode last-gen codecs in hardware either). Apparently streaming platforms still favor user experience enough not to heat their users' rooms for no observable improvement. Also, a TV CPU can barely decode a PNG still in software - software video decoding of any kind is simply impossible.
If you are on a mobile device, decoding without hardware assistance might not overwhelm the processors directly, but it might drain your battery unnecessarily fast?
They would be served h.265
tv manufacturers don't want high end chips for their tv sets... hardware decoding is just a way to make cheaper chips for tvs.
[deleted]
This is really cool. Props to the team that created AV1. Very impressive
Netflix has been the worst performing and lowest quality video stream of any of the streaming services. Fuzzy video, lots of visual noise and artifacts. Just plain bad, and this is on the 4K plan on 1Gb fiber with a 4K Apple TV. I can literally tell when someone is watching Netflix without knowing because it looks like shit.
It's not AV1's fault though, I'm pretty sure it's that they cheap out on the bitrate. Apple is among the highest bitrates (other than Sony's weird hardware locked streaming service).
I actually blamed AV1 for the macro-blocking and generally awful experience of watching horror films on Netflix for a long time. Then I realized other sources using AV1 were better.
If you press ctrl-alt-shift-d while the video is playing you'll note that most of the time the bitrate is appallingly low, and also that Netflix plays their own original content using higher bitrate HEVC rather than AV1.
That's because they actually want it to look good. For partner content they often default back to lower bitrate AV1, because they just don't care.
This is actually their DRM speaking. If you watch it on a Linux device or basically anything that isn’t a smart TV on the latest OS, they limit you to a 720p low bitrate stream, even if you pay for 4k. (See Louis Rossman’s video on the topic)
OP said they're using an Apple TV, which most definitely supports the 4K DRM.
The bit rate is unfortunately crushed to hell and back, leading to blockiness on 4K.
Yep, and they also silently downgrade resolution and audio channels based on an ever-changing and hidden list of browsers/OSes/devices over time.
Meanwhile pirated movies are in Blu-ray quality, with all audio and language options you can dream of.
I also find Netflix video quality shockingly bad and oddly inconsistent. I think they just don't prioritize video quality in the same way as, say, Apple or Disney does.
I cancelled Netflix for this exact reason. 4K Netflix looks worse than 720p YouTube, yet I pay (paid) for Netflix 4K, and at roughly 2x what I paid for Netflix when it launched. It's genuinely a disgrace how they can even claim with a straight face that you're actually watching 4K. The last price rise was the tipping point and I tapped out after 11 years.
Probably some function of your location relative to data centers. I find HBO Max to be abysmal these days. But I've learned to just stop caring about this stuff since no one else in my life does.
https://xkcd.com/1015/
Now you can be mad about two things nobody else notices.
Netflix on Apple TV has an issue if "Match Content" is "off" where it will constantly downgrade the video stream to a lower bitrate unnecessarily.
Even fixing that issue the video quality is never great compared to other services.
Oddly enough, I observe something to the opposite effect.
I wonder if it has more to do with proximity to edge delivery nodes than anything else.
>AV1 sessions use one-third less bandwidth than both AVC and HEVC
Sounds like they set HEVC to higher quality then? Otherwise how could it be the same as AVC?
There are other possible explanations, e.g. AVC and HEVC are set to the same bitrate, so AVC streams lose quality, while AV1 targets HEVC's quality. Or they compare AV1 traffic to the sum of all mixed H.26x traffic. Or the rates vary in more complex ways and that's an (over)simplified summary for the purpose of the post.
Netflix developed VMAF, so they're definitely aware of the complexity of matching quality across codecs and bitrates.
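To make the first interpretation concrete with made-up bitrates (illustrative only; none of these numbers are from the post):

    avc_kbps = 6000    # AVC forced to the same bitrate as HEVC -> lower quality
    hevc_kbps = 6000   # HEVC as the quality reference
    av1_kbps = 4000    # AV1 targeting HEVC-like quality at a lower rate

    print(f"vs AVC:  {1 - av1_kbps / avc_kbps:.0%} less bandwidth")   # ~33%
    print(f"vs HEVC: {1 - av1_kbps / hevc_kbps:.0%} less bandwidth")  # ~33%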
I have no doubt they know what they are doing. But it's a strange metric no matter how you slice it. Why compare AV1's bandwidth to the average of h.264 and h.265, and without any more details about resolution or compression ratio? Reading between the lines, it sounds like they use AV1 for low bandwidth, h.265 for high bandwidth, and h.264 as a fallback. If that is the case, why bring up this strange average bandwidth comparison?
definitely reads like "you're holding it wrong" to me as well
Am I the only one that thought this was an old article based on the title? AV1 is now 10 years old and AV2 was announced for year-end release a few months ago. If anything, the news is that AV1 powers only 30% by now. At least HEVC, released about the same time, has gotten quite popular in the warez scene (movies/TV/anime) for small encodes, whereas AV1 releases are still considered a rarity. (Though to be fair, 30% of Netflix & YT means AV1 usage in total is much higher.) I would've expected a royalty-free codec to've been embraced more, but it seems that the difficulty, for a long time, of playing it on low-power devices hindered its adoption.
AV1 is not new anymore and I think most modern devices support it natively. Some devices, like Apple's, even have a dedicated AV1 HW accelerator. Netflix has been pushing AV1 for a while now, so I thought the adoption rate would be more like 50%, but it seems AV1 requires better hardware and newer software which a lot of people don't have.
Don't forget that people also view Netflix on TVs, and a large number of physical TVs were made before AV1 was specced. So 30% overall may also mean 70% on modern devices.
On a related note, why are release groups not putting out AV1 WEB-DLs? Most 4K stuff is h265 now but if AV1 is supplied without re-encoding surely that would be better?
I looked into this before, and the short answer is that release groups would be allowed to release in AV1, but the market seems to prefer H264 and H265 because of compatibility and release speed. Encoding AV1 to an archival quality takes too long, reduces playback compatibility, and doesn't save that much space.
There also are no scene rules for AV1, only for H265 [1]
[1] https://scenerules.org/html/2020_X265.html
I'm surprised it took so long for CRF to dethrone 2-pass. We used to use 2-pass primarily so that files could be made to fit on CDs.
> Encoding AV1 to an archival quality takes too long
With the SVT-AV1 encoder you can achieve better quality in less time versus the x265 encoder. You just have to use the right presets. See the encoding results section:
Yeah, is there any good (and simple) guide for SVT-AV1 settings? I tried to convert a lot of my stuff to it, but you really need to put in a lot of time to figure out the correct settings for your media, and it becomes more difficult if your media is in mixed formats, encodings, etc.
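Not a proper guide, but a minimal starting point via ffmpeg's libsvtav1 wrapper, shown here in Python for illustration; the file names and the preset/CRF/film-grain values are placeholders to tune per source, not recommendations from anyone in this thread:

    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.mkv",
        "-c:v", "libsvtav1",
        "-preset", "6",    # lower = slower but better quality
        "-crf", "27",      # quality target: lower = higher quality, bigger file
        "-g", "240",       # keyframe interval
        "-svtav1-params", "tune=0:film-grain=8",  # tune=0 favors visual quality; film-grain enables FGS
        "-c:a", "copy",    # pass audio through untouched
        "output.mkv",
    ], check=True)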
Yeah I’m talking about web-dl though not a rip so there is no encoding necessary.
Player compatibility. Netflix can use AV1 and send it to the devices that support it while sending H265 to those that don't. A release group puts out AV1 and a good chunk of users start avoiding their releases because they can't figure out why it doesn't play (or plays poorly).
I've seen some on private sites. My guess is they are not popular enough yet. Or pirates are using specific hardware to bypass Widevine encryption (like an Nvidia Shield and burning keys periodically) that doesn't easily get the AV1 streams.
I'm not in the scene anymore, but for my own personal encoding, at higher quality settings, AV1 (rav1e or SVT; AOM was crazy slow) doesn't significantly beat out x265 for most sources.
FGS makes a huge difference at moderately high bitrates for movies that are very grainy, but many people seem to really not want it for HQ sources (see sibling comments). With FGS off, it's hard to find any sources that benefit at bitrates that you will torrent rather than stream.
h.264 has near-universal device support and almost no playback issues at the expense of slightly larger file sizes. h.265 and av1 give you 10-bit 4K, but playback on even modest laptops can become choppy or produce render artifacts. I tried all three, desperately wanting av1 to win, but Jellyfin on a small streaming server just couldn't keep up.
Because pirates are unaffected by the patent situation with H.265.
Everyone is affected by that mess, did you miss the recent news about Dell and HP dropping HEVC support in hardware they have already shipped? Encoders might not care about legal purity of the encoding process, but they do have to care about how it's going to be decoded. I like using proper software to view my videos, but it's a rarity afaik.
But isn’t AV1 just better than h.265 now regardless of the patents? The only downside is limited compatibility.
HW support for av1 is still behind h265. There's a lot of 5-10 year old hw that can play h265 but not av1. Second, there is also a split between Dovi and HDR(+). Is av1 + Dovi a thing? Blu-rays are obviously h265. Overall, h265 is the common denominator for all UHD content.
Encoding my 40TB library to AV1 with software encoding without losing quality would take more than a year, if not multiple years, and consume lots of power while doing so, all to save a little bit of storage. Granted, after a year of non-stop encoding I would save a few TB of space. But I think it is cheaper to buy a new 20TB hard drive than to pay for the electricity used for the encoding.
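For what it's worth, a back-of-envelope version of that estimate (every value below is an assumption to swap for your own numbers):

    library_tb = 40
    gb_per_hour_of_video = 8      # assumed average size of the existing encodes
    encode_speed_realtime = 0.5   # assumed software AV1 speed on one box (0.5x realtime)

    hours_of_video = library_tb * 1000 / gb_per_hour_of_video   # ~5,000 hours of content
    encode_hours = hours_of_video / encode_speed_realtime       # ~10,000 hours of encoding
    print(f"~{encode_hours / 24 / 365:.1f} years of continuous encoding")  # ~1.1 years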
I avoid av1 downloads when possible because I don’t want to have to figure out how to disable film grain synthesis and then deal with whatever damage that causes to apparent quality on a video that was encoded with it in mind. Like I just don’t want any encoding that supports that, if I can stay away from it.
In MPV it's just "F1 vf toggle format:film-grain=no" in the input config. And I prefer AV1 because of this, almost everything looks better without that noise.
You can also include "vf=format:film-grain=no" in the config itself to start with no film grain by default.
I watch almost everything in Infuse on Apple TV or in my browser, though.
What's wrong with film grain synthesis? Most film grain in modern films is "fake" anyway (The modern VFX pipeline first removes grain, then adds effects, and lastly re-adds fake grain), so instead of forcing the codec to try to compress lots of noise (and end up blurring lots of it away), we can just have the codec encode the noisless version and put the noise on after.
I watch a lot of stuff from the first 110ish years of cinema. For the most recent 25, and especially 15… yeah I dunno, maybe, but easier to just avoid it.
I do sometimes end up with av1 for streaming-only stuff, but most of that looks like shit anyway, so some (more) digital smudging isn’t going to make it much worse.
Even for pre-digital era movies, you want film grain. You just want it done right (which not many places do to be fair).
The problem you see with AV1 streaming isn't the film grain synthesis; it's the bitrate. Netflix is using film grain synthesis to save bandwidth (e.g. 2-5mbps for 1080p, ~20mbps for 4k), 4k bluray is closer to 100mbps.
If the AV1+FGS is given anywhere close to comparable bitrate to other codecs (especially if it's encoding from a non-compressed source like a high res film scan), it will absolutely demolish a codec that doesn't have FGS on both bitrate and detail. The tech is just getting a bad rap because Netflix is aiming for minimal cost to deliver good enough rather than maximal quality.
With HEVC you just don't have the option to disable film grain because it's burned into the video stream.
I’m not looking to disable film grain, if it’s part of the source.
For a second there I wasn't looking very close and I thought it said that 30% of Netflix was running on .AVI files
Qualcomm seems to be lagging behind and doesn't have AV1 decoder except in high end SoCs.
I understand that sometimes the HN titles get edited to be less descriptive and more generic in order to match the actual article title.
What’s the logic with changing the title here from the actual article title it was originally submitted with “AV1 — Now Powering 30% of Netflix Streaming” to the generic and not at all representative title it currently has “AV1: a modern open codec”? That is neither the article title nor representative of the article content.
Amen. The mania for obscurity in titles here is infuriating. This one is actually replete with information compared to many you see on the front page.
hacker news loves low information click bait titles. The shorter and more vague the better.
Though in the original title AV1 could be anything if you don't know it's a codec. How about:
"AV1 open video codec now powers 30% of Netflix viewing, adds HDR10+ and film grain synthesis"
AV1 is fine as-is. Plenty of technical titles on HN would need to be googled if you didn't know them. Even in yours, HDR10+ "could be anything if you don't know it". Play this game if you want, but it's unwinnable. The only people who care about AV1 already know what it is.
Well, I'm interested in AV1 as a videographer but hadn't heard of it before. Without 'codec' in the title I would have thought it was networking related.
Re: HDR - not the same thing. HDR has been around for decades and every TV in every electronics store blasts you with HDR10 demos. It's well known. AV1 is extremely niche and deserves 2 words to describe it.
AV1 has been around for a decade (well, it was released 7 years ago but the Alliance for Open Media was formed a decade ago).
It's fine that you haven't heard of it before (you're one of today's lucky 10,000!) but it really isn't that niche. YouTube and Netflix (from TFA) also started switching to AV1 several years ago, so I would expect it to have similar name recognition to VP9 or WebM at this point. My only interaction with video codecs is having to futz around with ffmpeg to get stuff to play on my TV, and I heard about AV1 a year or two before it was published.
I'm old (50) and have heard AV1 before. My modern TV didn't say HDR or HDR10 (it did say 4k). Agree that AV1 should include "codec".
One word, or acronym, just isn't enough to describe anything in this modern world.
> Though in the original title AV1 could be anything if you don't know it's a codec.
I'm not trying to be elitist, but this is "Hacker News", not CNN or BBC. It should be safe to assume some level of computer literacy.
Knowledge of all available codecs is certainly not the same tier as basic computer literacy. I agree it doesn't need to be dumbed down for the general user, but we also shouldn't assume everyone here knows every technical abbreviation.
The article barely mentioned “open”, and certainly gave no insight as to what “open” actually means wrt AV1.
[deleted]
For me that’s a FU moment that reminds me ‘TF am I doing here?’ I genuinely see this resource as a censoring-plus-advertising (both for YC, obviously) platform, where there are generic things, but also things someone doesn’t want you to read or know. The titles are constantly being changed to gibberish like right here, adequate comments or posts are being marked dead, yet the absolutely irrelevant or offensive things can stay untouched. Etc.
It is usually Dang using his judgment.
I really like moderation on HN in general, but honestly this inconsistent policy of editorializing titles is bad. There were plenty of times where submitter-editorialized titles (e.g. GitHub code dumps of some project) were changed back to useless and vague (without context) original titles.
And now HN administration tend to editorialize in their own way.
Also, it’s not the whole picture. AV1 is open because it didn’t have the good stuff (newly patented things) and as such I also wouldn’t say it’s the most modern.
AV1 has plenty of good stuff. AOM (the alliance that developed AV1) has a patent pool https://www.stout.com/en/insights/article/sj17-the-alliance-... comprising video hardware/software patents from Netflix, Google, Nvidia, Arm, Intel, Microsoft, Amazon and a bunch of other companies. AV1 has a bunch of patents covering it, but also has a guarantee that you're allowed to use those patents as you see fit (as long as you don't sue AOM members for violating media patents).
AV1 definitely is missing some techniques patented by h264 and h265, but AV2 is coming around now that all the h264 innovations are patent free (and now that there's been another decade of research into new cutting edge techniques for it).
Just because something is patented doesn't necessarily mean it's good. I think head-to-head comparisons matter more. (Admittedly I don't know how av1 holds up.)
Yes, but in this case, it does.
AV1 is good enough that the cost of not licensing might outweigh the cost of higher bandwidth. And it sounds like Netflix agrees with that.
[dead]
[dead]
[dead]
Is it just me or does this post have LLM vibes?
Top post without a single comment and only 29 points. Clearly my mental model of how posts bubble to the top is broken.
IIRC, there's a time/recency factor. If we assume that most people don't browse /newest (without commenting on should, I suspect this is true), then that seems like a reasonable way to help surface things; enough upvotes to indicate interest means a story gets a chance at the front page.
I imagine that's a big part of the drive behind discontinuing Chromecast support..
https://www.androidcentral.com/streaming-tv/chromecast/netfl...
> At Netflix, our top priority is delivering the best possible entertainment experience to our members.
I dont think that is true of any streamers. Otherwise they wouldnt provide the UI equivalent of a shopping centre that tries to get you lost and unable to find your way out.
Wow. To me, the big news here is that ~30% of devices now support AV1 hardware decoding. The article lists a bunch of examples of devices that have gained it in the past few years. I had no idea it was getting that popular -- fantastic news!
So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support, I wonder what will be the next one?
> To me, the big news here is that ~30% of devices now support AV1 hardware decoding
Where did it say that?
> AV1 powers approximately 30% of all Netflix viewing
Is admittedly a bit non-specific, it could be interpreted as 30% of users or 30% of hours-of-video-streamed, which are very different metrics. If 5% of your users are using AV1, but that 5% watches far above the average, you can have a minority userbase with an outsized representation in hours viewed.
I'm not saying that's the case, just giving an example of how it doesn't necessarily translate to 30% of devices using Netflix supporting AV1.
Also, the blog post identifies that there is an effective/efficient software decoder, which allows people without hardware acceleration to still view AV1 media in some cases (the case they defined was Android based phones). So that kinda complicates what "X% of devices support AV1 playback," as it doesn't necessarily mean they have hardware decoding.
“30% of viewing” I think clearly means either time played or items played. I’ve never worked with a data team that would possibly write that and mean users.
If it was a stat about users they’d say “of users”, “of members”, “of active watchers”, or similar. If they wanted to be ambiguous they’d say “has reached 30% adoption” or something.
Agreed, but this is the internet, the ultimate domain of pedantry, and they didn't say it explicitly, so I'm not going to put words in their mouth just to have a circular discussion about why I'm claiming they said something they didn't technically say, which is why I asked "Where did it say that" at the very top.
Also, either way, my point was and still stands: it doesn't say 30% of devices have hardware encoding.
> So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support, I wonder what will be the next one?
Hopefully AV2.
H266/VVC has a five year head-start over AV2, so probably that first unless hardware vendors decide to skip it entirely. The final AV2 spec is due this year, so any day now, but it'll take a while to make it's way into hardware.
H266 is getting fully skipped (except possibly by Apple). The licensing is even worse than H265, the gains are smaller, and Google+Netflix have basically guaranteed that they won't use it (in favor of AV1 and AV2 when ready).
VVC is pretty much a dead end at this point. Hardly anyone is using it; it's benefits over AV1 are extremely minimal and no one wants the royalty headache. Basically everyone learned their lesson with HEVC.
If it has a five year start and we've seen almost zero hardware shipping that is a pretty bad sign.
IIRC AV1 decoding hardware started shipping within a year of the bitstream being finalized. (Encoding took quite a bit longer but that is pretty reasonable)
https://en.wikipedia.org/wiki/Versatile_Video_Coding#Hardwar...
Yeah, that's... sparse uptake. A few smart TV SOCs have it, but aside from Intel it seems that none of the major computer or mobile vendors are bothering. AV2 next it is then!
When even H.265 is being dropped by the likes of Dell, adoption of H.266 will be even worse making it basically DOA for anything promising. It's plagued by the same problems H.265 is.
I'm not too surprised. It's similar to the metric that "XX% of Internet is on IPv6" -- it's almost entirely driven by mobile devices, specifically phones. As soon as both mainstream Android and iPhones support it, the adoption of AV1 should be very 'easy'.
(And yes, even for something like Netflix lots of people consume it with phones.)
how does that mean "~30% of devices now support AV1 hardware encoding"? I'm guessing you meant hardware decoding???
Whoops, thanks. Fixed.
Not trolling, but I'd bet something that's augmented with generative AI. Not to the level of describing scenes with words, but context-aware interpolation.
https://blogs.nvidia.com/blog/rtx-video-super-resolution/
We already have some of the stepping stones for this. But honestly much better for upscaling poor quality streams vs just gives things a weird feeling when it is a better quality stream.
for sure. macroblock hinting seems like a good place for research.
>So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support
That'd be h264 (associated patents expired in most of the world), vp9 and av1.
h265 aka HEVC is less common due to dodgy, abusive licensing. Some vendors even disable it with drivers despite hardware support because it is nothing but legal trouble.
> AV1 streaming sessions achieve VMAF scores¹ that are 4.3 points higher than AVC and 0.9 points higher than HEVC sessions. At the same time, AV1 sessions use one-third less bandwidth than both AVC and HEVC, resulting in 45% fewer buffering interruptions.
Just thought I'd extract the part I found interesting as a performance engineer.
Amazing. Proprietary video codecs need to not be the default and this is huge validation for AV1 as a production-ready codec.
Why does it matter if Netflix is using an open standard if every video they stream is wrapped in proprietary closed DRM?
because device makers will not care for the DRM, but will care for the hardware decoder they need to decide to put into their devices to decode netflix videos. By ensuring this video codec is open, it benefits everybody else now, as this same device will now be able to hardware decode _more_ videos from different video providers, as well as make more video providers choose AV1.
Basically, a network effect for an open codec.
You’ve convinced me… (no snark intended)
[dead]
> Why does it matter if Netflix is using an open standard if every video they stream is wrapped in proprietary closed DRM?
I am not sure if this is a serious question, but I'll bite in case it is.
Without DRM Netflix's business would not exist. Nobody would license them any content if it was going to be streamed without a DRM.
I had forgotten about the film-grain extraction, which is a clever approach to a huge problem for compression.
But... did I miss it, or was there no mention of any tool to specify grain parameters up front? If you're shooting "clean" digital footage and you decide in post that you want to add grain, how do you convey the grain parameters to the encoder?
It would degrade your work and defeat some of the purpose of this clever scheme if you had to add fake grain to your original footage, feed the grainy footage to the encoder to have it analyzed for its characteristics and stripped out (inevitably degrading real image details at least a bit), and then have the grain re-added on delivery.
So you need a way to specify grain characteristics to the encoder directly, so clean footage can be delivered without degradation and grain applied to it upon rendering at the client.
You just add it to your original footage, and accept whatever quality degradation that grain inherently provides.
Any movie or TV show is ultimately going to be streamed in lots of different formats. And when grain is added, it's often on a per-shot basis, not uniformly. E.g. flashback scenes will have more grain. Or darker scenes will have more grain added to emulate film.
Trying to tie it to the particular codec would be a crazy headache. For a solo project it could be doable but I can't ever imagine a streamer building a source material pipeline that would handle that.
There's an HDR war brewing on TikTok and other social apps. A fraction of posts that use HDR are just massively brighter than the rest; the whole video shines like a flashlight. The apps are eventually going to have to detect HDR abuse.
The whole HDR scene still feels like a mess.
I know how bad the support for HDR is on computers (particularly Windows and cheap monitors), so I avoid consuming HDR content on them.
But I just purchased a new iPhone 17 Pro, and I was very surprised at how these HDR videos on social media still look like shit on apps like Instagram.
And even worse, the HDR video I shoot with my iPhone looks like shit even when playing it back on the same phone! After a few trials I had to just turn it off in the Camera app.
I wonder if it fundamentally only really makes sense for film, video games, etc. where a person will actually tune the range per scene. Plus, only when played on half decent monitors that don’t just squash BT.2020 so they can say HDR on the brochure.
The HDR implementation in Windows 11 is fine. And it's not even that bad in 11 in terms of titles and content officially supporting HDR. Most of the ideas that it's "bad" comes from the "cheap monitor" part, not windows.
I have zero issues and only an exceptional image on W11 with a PG32UQX.
The only time I shoot HDR on anything is because I plan on crushing the shadows/raising highlights after the fact. S curves all the way. Get all the dynamic range you can and then dial in the look. Otherwise it just looks like a flat washed out mess most of the time
This is one of the reasons I don't like HDR support "by default".
HDR is meant to be so much more intense, it should really be limited to things like immersive full-screen long-form-ish content. It's for movies, TV shows, etc.
It's not what I want for non-immersive videos you scroll through, ads, etc. I'd be happy if it were disabled by the OS whenever not in full screen mode. Unless you're building a video editor or something.
Or a photo viewer, which isn't necessarily running in fullscreen.
Just what we need, a new loudness war, but for our eyeballs.
https://en.wikipedia.org/wiki/Loudness_war
What if they did HDR for audio? So an audio file can tell your speakers to output at 300% of the normal max volume, even more than what compression can do.
Interestingly, the loudness war was essentially fixed by the streaming services. They were in a similar situation as Tik Tok is now.
What's the history on the end to the loudness war? Do streaming services renormalize super compressed music to be quieter than the peaks of higher dynamic range music?
Yes. Basically the streaming services started using a decent model of perceived loudness, and normalise tracks to roughly the same perceived level. I seem to remember that Apple (the computer company, not the music company) was involved as well, but I need to re-read the history here. Their music service and mp3 players were popular back in the day.
So all music producers got out of compressing their music was clipping, and not extra loudness when played back.
It hasn't really changed much in the mastering process, they still are doing the same old compression. Maybe not the to the same extremes, but dynamic range is still usually terrible. They do it a a higher LUFS target than the streaming platforms normalize to because each streaming platform has a different limit and could change it at any time, so better to be on the safe side. Also the fact that majority of music listening doesn't happen on good speakers/environment.
You would think, but not in a way that matters. Everyone still compresses their mixes. People try to get around normalization algorithms by clever hacks. The dynamics still suffer, and bad mixes still clip. So no, I don’t think streaming services fixed the loudness wars.
Sounds like they need something akin to audio volume normalization but for video. You can go bright, but only in moderation, otherwise your whole video gets dimmed down until the average is reasonable.
[dead]
My phone has this cool feature where it doesn't support HDR.
HDR has a slight purpose, but the way it was rolled out was so disrespectful that I just want it permanently gone everywhere. Even the rare times it's used in a non-abusive way, it can hurt your eyes or make things display weirdly.
That's true on the web, as well; HDR images on web pages have this problem.
It's not obvious whether there's any automated way to reliably detect the difference between "use of HDR" and "abuse of HDR". But you could probably catch the most egregious cases, like "every single pixel in the video has brightness above 80%".
> It's not obvious whether there's any automated way to reliably detect the difference between "use of HDR" and "abuse of HDR".
That sounds like a job our new AI overlords could probably handle. (But that might be overkill.)
Funnily enough HDR already has to detect this problem, because most HDR monitors literally do not have the power circuitry or cooling to deliver a complete white screen at maximum brightness.
My idea is: for each frame, grayscale the image, then count what percentage of the screen is above the standard white level. If more than 20% of the image is >SDR white level, then tone-map the whole video to the SDR white point.
That needs a temporal component as well: games and videos often use HDR for sudden short-lived brightness.
Can someone explain what the war is about?
Like HDR abuse makes it sound bad, because the video is bright? Wouldn't that just hurt the person posting it since I'd skip over a bright video?
Sorry if I'm phrasing this all wrong, don't really use TikTok
> Wouldn't that just hurt the person posting it since I'd skip over a bright video?
Sure, in the same way that advertising should never work since people would just skip over a banner ad. In an ideal world, everyone would uniformly go "nope"; in our world, it's very much analogous to the https://en.wikipedia.org/wiki/Loudness_war .
Not everything that glitters (or blinds) is gold.
sounds like every fad that came before it where it was over used by all of the people copying with no understanding of what it is or why. remember all of the HDR still images that pushed everything to look post-apocalyptic? remember all of the people pushing washed out videos because they didn't know how to grade the images recorded in log and it became a "thing"?
eventually, it'll wear itself out just like every other over use of the new
I would love to know who the hell thought adding "brighter than white" range to HDR was a good idea. Or, even worse, who the hell at Apple thought implementing that should happen by way of locking UI to the standard range. Even if you have a properly mastered HDR video (or image), and you've got your brightness set to where it doesn't hurt to look at, it still makes all the UI surrounding that image look grey. If I'm only supposed to watch HDR in fullscreen, where there's no surrounding UI, then maybe you should tone-map to SDR until I fullscreen the damn video?
Yup, totally agreed. I said the same thing in another comment -- HDR should be reserved only for full-screen stuff where you want to be immersed in it, like movies and TV shows.
Unless you're using a video editor or something, everything should just be SDR when it's within a user interface.
HDR videos on social media look terrible because the UI isn’t in HDR while the video isn’t. So you have this insanely bright video that more or less ignores your brightness settings, and then dim icons on top of it that almost look incomplete or fuzzy cause of their surroundings. It looks bizarre and terrible.
It's good if you have black text on white background, since your app can have good contrast without searing your eyes. People started switching to dark themes to avoid having their eyeballs seared monitors with the brightness high.
For things filmed with HDR in mind it's a benefit. Bummer things always get taken to the extreme.
The alternative is even worse, where the whole UI is blinding you. Plus, that level of brightness isn't meant to be sustained.
The solution is for social media to be SDR, not for the UI to be HDR.
Imo the real solution is for luminance to scale appropriately even in HDR range, kinda like how gain map HDR images can. Scaled both with regards to the display's capabilities and the user/apps intents.
Not sure how it works on Android, but it's such amateur UX on Apple's part.
99.9% of people expect HDR content to get capped / tone-mapped to their display's brightness setting.
That way, HDR content is just magically better. I think this is already how HDR works on non-HDR displays?
For the 0.01% of people who want something different, it should be a toggle.
Unfortunately I think this is either (A) amateur enshittification like with their keyboards 10 years ago, or (B) Apple specifically likes how it works since it forces you to see their "XDR tech" even though it's a horrible experience day to day.
99% of people have no clue what “HDR” and “tone-mapping” mean, but yes are probably weirded out by some videos being randomly way brighter than everything else
Android finally addressed this issue with the latest release. https://9to5google.com/2025/12/02/the-top-new-features-andro...
But isn't it the point? Try looking at a light bulb; everything around it is so much less bright.
OTOH pointing a flaslight at your face is at least impolite. I would put a dark filter on top of HDR vdeos until a video is clicked for watching.
I'm surprised AV1 usage is only at 30%. Is AV1 so demanding that Netflix clients without AV1 hardware acceleration capabilities would be overwhelmed by it?
Thanks to libdav1d's [1] lovingly hand crafted SIMD ASM instructions it's actually possible to reasonably playback AV1 without hardware acceleration, but basically yes: From Snapdragon 8 onwards, Google Tensor G3 onwards, NVIDIA RTX 3000 series onwards. All relatively new .
[1] https://code.videolan.org/videolan/dav1d
It's possible without specific hardware acceleration, but murderous for mobile devices.
Even RISC-V vector assembly[0].
0. https://code.videolan.org/videolan/dav1d/-/issues/435
There are a lot of 10 year old TVs/fire sticks still in use that have a CPU that maxes out running the UI and rely exclusively on hardware decoding for all codecs (e.g. they couldn't hardware decode h264 either). Image a super budget phone from ~2012 and you'll have some idea the hardware capability we're dealing with.
Compression gains will mostly be for the benefit of the streaming platform’s bills/infra unless you’re trying to stream 4K 60fps on hotel wifi (or if you can’t decode last-gen codecs on hardware either ). Apparently streaming platforms still favor user experience enough to not heat their rooms for no observable improvement. Also a TV CPU can barely decode a PNG still in software - video decoding of any kind is simply impossible.
If you are on a mobile device, decoding without hardware assistance might not overwhelm the processors directly, but it might drain your battery unnecessarily fast?
They would be served h.265
tv manufacturers don't want high end chips for their tv sets... hardware decoding is just a way to make cheaper chips for tvs.
This is really cool. Props to the team that created AV1. Very impressive
Netflix has been the worst performing and lowest quality video stream of any of the streaming services. Fuzzy video, lots of visual noise and artifacts. Just plan bad and this is on the 4k plan on 1GB fiber on a 4k Apple TV. I can literally tell when someone is watching Netflix without knowing because it looks like shit.
It's not AV1's fault though, I'm pretty sure it's that they cheap out on the bitrate. Apple is among the highest bitrates (other than Sony's weird hardware locked streaming service).
I actually blamed AV1 for the macro-blocking and generally awful experience of watching horror films on Netflix for a long time. Then I realized other sources using AV1 were better.
If you press ctl-alt-shift-d while the video is playing you'll note that most of the time that the bitrate is appallingly low, and also that Netflix plays their own original content using higher bitrate HEVC rather than AV1.
That's because they actually want it to look good. For partner content they often default back to lower bitrate AV1, because they just don't care.
This is actually their DRM speaking. If you watch it on a Linux device or basically anything that isn’t a smart TV on the latest OS, they limit you to a 720p low bitrate stream, even if you pay for 4k. (See Louis Rossman’s video on the topic)
OP said they're using an Apple TV, which most definitely supports the 4K DRM.
The bit rate is unfortunately crushed to hell and back, leading to blockiness on 4K.
Yep, and they also silently downgrade resolution and audio channels on an ever changing and hidden list of browsers/OS/device overtime.
Meanwhile pirated movies are in Blu-ray quality, with all audio and language options you can dream of.
I also find Netflix video quality shockingly bad and oddly inconsistent. I think they just don’t prioritize video quality in the same way as say apple or Disney does.
I cancelled Netflix for this exact reason. 4K Netflix looks worse than 720 YouTube, yet I pay(paid) for Netflix 4K, and at roughly 2x what I paid for Netflix when it launched. It's genuinely a disgrace how they can even claim with a straight face that you're actually watching 4K. The last price rise was the tipping point and I tapped out after 11 years.
Probably some function of your proximity to data centers. I find HBO Max to be abysmal these days. But I've learned to just stop caring about this stuff since no one else in my life does.
https://xkcd.com/1015/
Now you can be mad about two things nobody else notices.
Netflix on Apple TV has an issue where, if "Match Content" is off, it constantly downgrades the video stream to a lower bitrate unnecessarily.
Even after fixing that, the video quality is never great compared to other services.
Oddly enough, I observe something to the opposite effect.
I wonder if it has more to do with proximity to edge delivery nodes than anything else.
>AV1 sessions use one-third less bandwidth than both AVC and HEVC
Sounds like they set HEVC to higher quality then? Otherwise how could it be the same as AVC?
There are other possible explanations, e.g. AVC and HEVC are set to the same bitrate, so AVC streams lose quality, while AV1 targets HEVC's quality. Or they compare AV1 traffic to the sum of all mixed H.26x traffic. Or the rates vary in more complex ways and that's an (over)simplified summary for the purpose of the post.
Netflix developed VMAF, so they're definitely aware of the complexity of matching quality across codecs and bitrates.
I have no doubt they know what they are doing. But it's a strange metric no matter how you slice it. Why compare AV1's bandwidth to the average of h.264 and h.265, without any more detail about resolution or compression ratio? Reading between the lines, it sounds like they use AV1 for low bandwidth, h.265 for high bandwidth, and h.264 as a fallback. If that is the case, why bring up this strange average-bandwidth comparison?
definitely reads like "you're holding it wrong" to me as well
Am I the only one who thought this was an old article from the title? AV1 is almost 10 years old, and AV2 was announced a few months ago for release by year-end. If anything, the news is that AV1 powers only 30% by now. At least HEVC, released around the same time, has become quite popular in the warez scene (movies/TV/anime) for small encodes, whereas AV1 releases are still considered a rarity. (Though to be fair, 30% of Netflix & YT means total AV1 usage is much higher.) I would have expected a royalty-free codec to be embraced more, but it seems the long stretch when it was hard to play on low-power devices hindered its adoption.
AV1 is not new anymore, and I think most modern devices support it natively. Some devices, like Apple's, even have a dedicated AV1 hardware decoder. Netflix has been pushing AV1 for a while now, so I thought the adoption rate would be more like 50%, but it seems AV1 requires newer hardware and software than a lot of people have.
Don't forget that people also view Netflix on TVs, and a large number of physical TVs were made before AV1 was specced. So 30% overall may also mean 70% on modern devices.
On a related note, why are release groups not putting out AV1 WEB-DLs? Most 4K stuff is h265 now but if AV1 is supplied without re-encoding surely that would be better?
I looked into this before, and the short answer is that release groups would be allowed to release in AV1, but the market seems to prefer H264 and H265 because of compatibility and release speed. Encoding AV1 to an archival quality takes too long, reduces playback compatibility, and doesn't save that much space.
There also are no scene rules for AV1, only for H265 [1]
[1] https://scenerules.org/html/2020_X265.html
I'm surprised it took so long for CRF to dethrone 2-pass. We used to use 2-pass primarily so that files could be made to fit on CDs.
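For anyone who hasn't run into the distinction: CRF targets a constant quality level and lets the file size land wherever it lands, while 2-pass targets a fixed average bitrate so the output hits a predictable size. A minimal sketch with ffmpeg and libx264 (the CRF and bitrate values are just illustrative):

    # CRF: single pass, constant quality, file size varies with content
    ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 20 -c:a copy crf_out.mkv

    # 2-pass: fixed average bitrate, predictable size (the old fit-it-on-a-CD workflow)
    ffmpeg -y -i input.mkv -c:v libx264 -preset slow -b:v 2500k -pass 1 -an -f null /dev/null
    ffmpeg -i input.mkv -c:v libx264 -preset slow -b:v 2500k -pass 2 -c:a copy twopass_out.mkv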
> Encoding AV1 to an archival quality takes too long
With the SVT-AV1 encoder you can achieve better quality in less time versus the x265 encoder. You just have to use the right presets. See the encoding results section:
https://www.spiedigitallibrary.org/conference-proceedings-of...
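For what it's worth, a common starting point with a reasonably recent ffmpeg build looks something like the sketch below; the preset and CRF values are illustrative (not taken from the paper), with lower presets being slower but more efficient:

    # SVT-AV1 via ffmpeg: preset 0 = slowest/best, 13 = fastest
    # tune=0 favors perceptual quality; film-grain enables grain synthesis (drop it for clean sources)
    ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 -crf 30 \
        -svtav1-params tune=0:film-grain=8 \
        -pix_fmt yuv420p10le -c:a copy output.mkv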
Yeah, is there any good (and simple) guide for SVT-AV1 settings? I tried to convert a lot of my stuff to it, but you really need to put in a lot of time to figure out the correct settings for your media, and it gets harder when your media is in mixed formats, encodings, etc.
Yeah, I'm talking about a WEB-DL though, not a rip, so there's no encoding necessary.
Player compatibility. Netflix can use AV1 and send it to the devices that support it while sending H265 to those that don't. A release group puts out AV1 and a good chunk of users start avoiding their releases because they can't figure out why it doesn't play (or plays poorly).
I've seen some on private sites. My guess is they are not popular enough yet. Or pirates are using specific hardware to bypass Widevine encryption (like an Nvidia Shield and burning keys periodically) that doesn't easily get the AV1 streams.
I'm not in the scene anymore, but for my own personal encoding, at higher quality settings, AV1 (rav1e or SVT; AOM was crazy slow) doesn't significantly beat out x265 for most sources.
FGS (film grain synthesis) makes a huge difference at moderately high bitrates for movies that are very grainy, but many people seem to really not want it for HQ sources (see sibling comments). With FGS off, it's hard to find any sources that benefit at bitrates you would torrent rather than stream.
h.264 has near-universal device support and almost no playback issues, at the expense of slightly larger file sizes. h.265 and av1 give you 10-bit 4K, but playback on even modest laptops can become choppy or produce render artifacts. I tried all three, desperately wanting av1 to win, but Jellyfin on a small streaming server just couldn't keep up.
Because pirates are unaffected by the patent situation with H.265.
Everyone is affected by that mess, did you miss the recent news about Dell and HP dropping HEVC support in hardware they have already shipped? Encoders might not care about legal purity of the encoding process, but they do have to care about how it's going to be decoded. I like using proper software to view my videos, but it's a rarity afaik.
But isn’t AV1 just better than h.265 now regardless of the patents? The only downside is limited compatibility.
HW support for av1 is still behind h265. There's a lot of 5-10 year old hardware that can play h265 but not av1. Second, there is also a split between Dolby Vision and HDR10(+). Is av1 + Dolby Vision a thing? Blu-rays are obviously h265. Overall, h265 is the common denominator for all UHD content.
Encoding my 40TB library to AV1 in software without losing quality would take more than a year, if not multiple years, and consume a lot of power while doing it, to save a little bit of storage. Granted, after a year of non-stop encoding I would save a few TB of space, but I think it's cheaper to buy a new 20TB hard drive than to pay for the electricity used for the encoding.
I avoid av1 downloads when possible because I don’t want to have to figure out how to disable film grain synthesis and then deal with whatever damage that causes to apparent quality on a video that was encoded with it in mind. Like I just don’t want any encoding that supports that, if I can stay away from it.
In MPV it's just "F1 vf toggle format:film-grain=no" in the input config. And I prefer AV1 because of this, almost everything looks better without that noise.
You can also include "vf=format:film-grain=no" in the config itself to start with no film grain by default.
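Put together, a minimal setup based on those two lines (the F1 binding is just an arbitrary choice of key):

    # ~/.config/mpv/mpv.conf -- start playback with grain synthesis disabled
    vf=format:film-grain=no

    # ~/.config/mpv/input.conf -- toggle it on/off at runtime
    F1 vf toggle format:film-grain=no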
I watch almost everything in Infuse on Apple TV or in my browser, though.
What's wrong with film grain synthesis? Most film grain in modern films is "fake" anyway (the modern VFX pipeline first removes grain, then adds effects, and lastly re-adds fake grain), so instead of forcing the codec to try to compress lots of noise (and end up blurring much of it away), we can just have the codec encode the noiseless version and put the noise back on afterwards.
I watch a lot of stuff from the first 110ish years of cinema. For the most recent 25, and especially 15… yeah I dunno, maybe, but easier to just avoid it.
I do sometimes end up with av1 for streaming-only stuff, but most of that looks like shit anyway, so some (more) digital smudging isn’t going to make it much worse.
Even for pre-digital era movies, you want film grain. You just want it done right (which not many places do to be fair).
The problem you see with AV1 streaming isn't the film grain synthesis; it's the bitrate. Netflix is using film grain synthesis to save bandwidth (e.g. 2-5mbps for 1080p, ~20mbps for 4k), 4k bluray is closer to 100mbps.
If the AV1+FGS is given anywhere close to comparable bitrate to other codecs (especially if it's encoding from a non-compressed source like a high res film scan), it will absolutely demolish a codec that doesn't have FGS on both bitrate and detail. The tech is just getting a bad rap because Netflix is aiming for minimal cost to deliver good enough rather than maximal quality.
With HEVC you just don't have the option to disable film grain because it's burned into the video stream.
I’m not looking to disable film grain, if it’s part of the source.
For a second there I wasn't looking very close and I thought it said that 30% of Netflix was running on .AVI files
Qualcomm seems to be lagging behind and doesn't have an AV1 decoder except in high-end SoCs.
I understand that sometimes the HN titles get edited to be less descriptive and more generic in order to match the actual article title.
What’s the logic with changing the title here from the actual article title it was originally submitted with “AV1 — Now Powering 30% of Netflix Streaming” to the generic and not at all representative title it currently has “AV1: a modern open codec”? That is neither the article title nor representative of the article content.
Amen. The mania for obscurity in titles here is infuriating. This one is actually replete with information compared to many you see on the front page.
Hacker News loves low-information clickbait titles. The shorter and more vague, the better.
Though in the original title AV1 could be anything if you don't know it's a codec. How about:
"AV1 open video codec now powers 30% of Netflix viewing, adds HDR10+ and film grain synthesis"
AV1 is fine as-is. Plenty of technical titles on HN would need to be googled if you didn't know them. Even in yours, HDR10+ "could be anything if you don't know it". Play this game if you want, but it's unwinnable. The only people who care about AV1 already know what it is.
Well, I'm interested in AV1 as a videographer but hadn't heard of it before. Without 'codec' in the title I would have thought it was networking related.
Re: HDR - not the same thing. HDR has been around for decades and every TV in every electronics store blasts you with HDR10 demos. It's well known. AV1 is extremely niche and deserves 2 words to describe it.
AV1 has been around for a decade (well, it was released 7 years ago but the Alliance for Open Media was formed a decade ago).
It's fine that you haven't heard of it before (you're one of today's lucky 10,000!) but it really isn't that niche. YouTube and Netflix (from TFA) also started switching to AV1 several years ago, so I would expect it to have similar name recognition to VP9 or WebM at this point. My only interaction with video codecs is having to futz around with ffmpeg to get stuff to play on my TV, and I heard about AV1 a year or two before it was published.
I'm old (50) and have heard AV1 before. My modern TV didn't say HDR or HDR10 (it did say 4k). Agree that AV1 should include "codec".
One word, or acronym, just isn't enough to describe anything in this modern world.
> Though in the original title AV1 could be anything if you don't know it's a codec.
I'm not trying to be elitist, but this is "Hacker News", not CNN or BBC. It should be safe to assume some level of computer literacy.
Knowledge of all available codecs is certainly not the same tier as basic computer literacy. I agree it doesn't need to be dumbed down for the general user, but we also shouldn't assume everyone here knows every technical abbreviation.
The article barely mentioned “open”, and certainly gave no insight as to what “open” actually means wrt AV1.
For me that's a FU moment that reminds me, "WTF am I doing here?" I genuinely see this resource as a censoring-plus-advertising platform (both for YC, obviously), where there are generic things, but also things someone doesn't want you to read or know. The titles are constantly being changed to gibberish, like right here; the adequate comments or posts get marked dead, yet the absolutely irrelevant or offensive things can stay untouched. Etc.
It is usually Dang using his judgment.
I really like moderation on HN in general, but honestly this inconsistent policy of editorializing titles is bad. There were plenty of times when submitter-editorialized titles (e.g. GitHub code dumps of some project) were changed back to original titles that were useless and vague without context.
And now HN administration tends to editorialize in its own way.
Also, it's not the whole picture. AV1 is open because it left out the good stuff (newly patented techniques), and as such I also wouldn't say it's the most modern.
AV1 has plenty of good stuff. AOM (the consortium that developed AV1) has a patent pool https://www.stout.com/en/insights/article/sj17-the-alliance-... comprising video hardware/software patents from Netflix, Google, Nvidia, Arm, Intel, Microsoft, Amazon, and a bunch of other companies. AV1 is covered by a bunch of patents, but it also comes with a guarantee that you're allowed to use those patents as you see fit (as long as you don't sue AOM members for violating media patents).
AV1 definitely is missing some techniques patented by h264 and h265, but AV2 is coming around now that all the h264 innovations are patent free (and now that there's been another decade of research into new cutting edge techniques for it).
Just because something is patented doesn't necessarily mean it's good. I think head-to-head comparisons matter more. (Admittedly I don't know how AV1 holds up.)
Yes, but in this case, it does.
AV1 is good enough that the cost of not licensing might outweigh the cost of higher bandwidth. And it sounds like Netflix agrees with that.
Is it just me, or does this post have LLM vibes?
Top post without a single comment and only 29 points. Clearly my mental model of how posts bubble to the top is broken.
IIRC, there's a time/recency factor. If we assume that most people don't browse /newest (without commenting on whether they should, I suspect this is true), then that seems like a reasonable way to help surface things; enough upvotes to indicate interest means a story gets a chance at the front page.