x264 10-Bit: encode 10 Bit, output 8 Bit?
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
Agreed, will be nice to see some test results from the OP. I'm basing part of my theory on the observation that the source image provided shows typical DSLR-like, digital-looking dithering—essentially a pseudorandom diffuse/crosshatch combination—while the 10-bit upsampled images show more uniform, pattern dithering. 10-bit x264 and x265 look similar, but there are slight differences to be noticed.
Of course, re-encoding the 10-bit output to 8-bit, even if the 10-bit intermediate is lossless, will produce yet another generation of dithering on the way back to 8-bits. Each generation of dither is essentially rearranging the micro noise in the image, and some arrangements may ultimately look better than others.
A denoise, adjust, regrain workflow is superior in that the first step removes unpleasant textures and the last step adds a known pleasing texture. And it's easy enough to replicate, given the tools. Predictably good.
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
BradleyS wrote: ↑Mon May 22, 2017 9:23 pm
I'm basing part of my theory on the observation that the source image provided shows typical DSLR-like, digital-looking dithering—essentially a pseudorandom diffuse/crosshatch combination—while the 10-bit upsampled images show more uniform, pattern dithering. 10-bit x264 and x265 look similar, but there are slight differences to be noticed.
The frame is from the movie "Captain America: Civil War", which was mainly shot on Arri Alexas (XT and 65).
The x265 screenshot I attached is obviously much worse because it was encoded without a tune setting and is therefore far too blurry.
Here is the same frame in x265 10-Bit CRF17 medium, tune grain:
BTW, the difference in color and gamma (10-Bit vs. 8-Bit) comes from the flawed conversion in "Avidemux" which I used to pick the exact frame.
There is no color, gamma or brightness difference when played in Kodi - but the more pleasing dithering pattern is the same.
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
Yes, the advantages of Handbrake's worthy ditherer were duly noted in my previous tests, and if blurring the pattern dither from various sources by overlaying is seen as a slight temporal improvement, then so be it.
Should not be mistaken for improvement in banding, however, unless the bits are there in the source, as I pointed out.
As far as internal filter precision being a factor, we know it works in a true float environment like Photoshop or Vegas, but I thought HandBrake's engine was still 8-bit integer?
- JohnAStebbins
- HandBrake Team
- Posts: 5725
- Joined: Sat Feb 09, 2008 7:21 pm
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
Also keep in mind that what you are quantizing in the H.264 stream is in the frequency domain. When converting a frame from the spatial to the frequency domain (and vice versa during playback), there will be rounding errors. The size of the rounding error depends on the number of bits kept in quantization.
In this scenario, the basic encode process is this:
- The 8 bit spatial domain gets converted to 10 bit spatial domain with the 2 lower bits all zeros.
- The 10 bit spatial domain gets converted to 32 bit frequency domain coefficients.
- The frequency domain coefficients get quantized (truncated) to variable bit depths based on what optimizes visual quality.
- The quantized coefficients get entropy encoded (this is where the actual compression happens) and written to the file.
- Coefficients are decompressed.
- Coefficients are converted from the variable bit depth frequency domain back to the 10 bit spatial domain.
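The steps above can be sketched with a deliberately toy round trip (this is not the real H.264 transform math; the quantizer step and values are made up purely for illustration). The point it demonstrates: after a lossy frequency-domain round trip, the two zero-filled low bits come back nonzero.

```python
# Toy illustration of the pipeline above (NOT real H.264 math):
# an 8-bit ramp is zero-extended to 10 bits, pushed through a lossy
# stand-in for "quantize in the frequency domain", and converted back.

def to_10bit(samples_8bit):
    """Step 1: zero-fill the two low bits (i.e. multiply by 4)."""
    return [s << 2 for s in samples_8bit]

def lossy_roundtrip(samples_10bit, qstep=5):
    """Stand-in for transform + quantization + inverse transform:
    divide by a quantizer step with rounding, then multiply back."""
    return [round(s / qstep) * qstep for s in samples_10bit]

ramp = list(range(100, 110))          # 8-bit source values
decoded = lossy_roundtrip(to_10bit(ramp))

# After the lossy round trip the low 2 bits are no longer all zero:
low_bits = [d & 0b11 for d in decoded]
print(low_bits)  # → [0, 1, 2, 2, 3, 0, 1, 2, 2, 3]
```

The scattered low bits are exactly the pseudo-dither discussed later in the thread: the quantization error lands in the two bits that were zero-filled on the way in.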
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
+1 John Stebbins!
The most common misconception (at least on the Internet) is that some kind of bit interpolation takes place, supposedly filling in those empty zeroes, when in fact a simple histogram shows it does not.
Although some elite capture cards do hardware bit-upsampling for broadcast, software algorithms are understandably slow and messy for such purposes.
I encounter so many tales of video alchemy in my reading, I am inclined to just let it go; these overachieving hobbyists are actually doing themselves little harm beyond the persistence of their delusions, and their absence from the workforce.
To sum up my impression, 10 bit yuv intermediates serve a purpose with 10 bit source, and even with 8 bit 4:4:4. Otherwise, no quantifiable improvement in banding (number of colors) with 4:2:0 source could be theorized or measured.
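The histogram check mentioned above takes only a few lines to reproduce: zero-extending 8-bit samples to 10 bits occupies only every fourth code value, so no new intermediate levels appear (a minimal sketch over all 256 possible 8-bit values).

```python
# Quick check of the "empty zeroes" claim: zero-extending 8-bit
# samples to 10 bits uses only 256 of the 1024 code values, all of
# them multiples of 4 -- no interpolated in-between levels exist.
from collections import Counter

samples_8bit = list(range(256))
samples_10bit = [s << 2 for s in samples_8bit]

hist = Counter(samples_10bit)
used_levels = sorted(hist)

print(len(used_levels))                       # → 256
print(all(v % 4 == 0 for v in used_levels))   # → True
```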
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
I am all for quantifiable evidence to support claims - but I am also practical when I need to be.
So, better perceived picture quality + smaller filesize still sounds like a simple win/win to me.
I get that the difference between x264 8-Bit and x264 10-Bit might be perceived as subtle.
Therefore I also understand doubting the actual benefit in the case of x264...
However, please take another look at this comparison between x265 8-Bit and x265 10-Bit (in which the 8-Bit is even encoded at a much higher CRF and a slower preset):
How is the 10-Bit encode not objectively better?
The x265 8-Bit result is just horsesh*t.
x265 8-Bit definitely introduces banding lines that are not present in the source:
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
The point I was making above is that even though the low 2 bits are zero when encoding 8 bit to 10 bit, due to the quantization of coefficients that occurs during encoding the lower 2 bits after decoding will no longer be zeros. It's not a perfectly random dither, but it does provide some visually appealing smoothing to the bands.
I'm not entirely sure about what follows (I'm not a codec guy, I just happen to know a few things). But I believe there are going to be additional factors that contribute to improved image quality when using 8 bit extrapolated to 10 bit with zero fill. For example (with h.264), the transform to frequency domain coefficients is an integer operation that starts (in this scenario) with 10 bit quantities and results in 32 bit values. I'm pretty sure this transform can result in some lower bits basically falling off the LSB during the computation (e.g. a division or shift). So when starting with 10 bit, you would retain more precision during this transform.
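The "bits falling off the LSB" idea can be sketched with a toy integer step (assumed, simplified numbers; not the actual H.264 integer transform): a forward step multiplies by a coefficient and shifts right, discarding low bits, and starting from 10-bit input leaves 2 more of them intact before the shift.

```python
# Toy sketch of the precision argument (made-up coefficient and
# shift, NOT the real H.264 transform): multiply-then-shift loses
# low bits, and a 10-bit input carries 2 extra bits through it.

COEF = 13    # stand-in transform coefficient
SHIFT = 7    # stand-in normalization shift

def forward(sample):
    return (sample * COEF) >> SHIFT   # low bits fall off here

def inverse(coef):
    return (coef << SHIFT) // COEF

val8 = 201                # an 8-bit sample
val10 = val8 << 2         # same sample, zero-filled to 10 bits

err8 = abs(inverse(forward(val8)) - val8)
err10 = abs(inverse(forward(val10)) - val10) / 4  # back in 8-bit units

print(err8, err10)  # → 5 1.75
```

With these toy numbers the 10-bit path's round-trip error, measured in 8-bit units, is smaller, which is the precision retention being described.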
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
To verify whether my x265-10 setting saves space across different sources, I encoded 5 minutes of "Casino Royale".
I chose the first 5 min of the Madagascar scene in the beginning.
This is an extreme case: not only is it analog 35mm film material with loads of grain, it is also packed with fast action, so it screams for a high bitrate.
It was never worth encoding that movie in x264-8 CRF16, because the file ended up being bigger than the source.
I would however save a good deal of space with x265-10, so I'm happy.
Source: 26100 kbit/s.
x264-8 CRF16 slow film: 29179 kbit/s
x265-10 CRF17 medium grain: 20983 kbit/s
It is pretty much impossible to see a difference between the three.
BradleyS wrote: ↑Mon May 22, 2017 8:01 pm
What you are seeing is the upsampling algorithm in the higher bit-depth encoders permanently changing the dithering pattern evident in the gradation. Try encoding to 10-bit as you did before, but use RF 0 (x264's lossless mode), and then re-encode that as 8-bit at a normal RF like 17. I bet some or all of that newly introduced dithering pattern survives, and you've just simulated the "N-bit, 8-bit output" you mention.
Of course, the 8-bit file will be larger than the 10-bit file due to quantization error.
I also had time to try this, have a look:
1. Source:
2. Source to x264-10 CRF0 slow film:
3. x264-10 CRF0 slow film to x264-8 CRF16 slow film:
4. Source to x265-10 CRF17 medium grain:
In my opinion/eye, your theory does not hold true.
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
Indeed, looks about like an 8-bit encode with no intermediate.
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
If you really want to geek out, try 10-bit lossless to 8-bit lossless to make sure the encoder isn't decimating those blocks to hell.
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
OK, I did that too.
1. Source:
2. Source to x264-10 CRF0 slow film:
3. x264-10 CRF0 slow film to x264-8 CRF0 slow film:
To be honest, the last experiment looks like an almost perfectly restored source to me.
Re: x264 10-Bit: encode 10 Bit, output 8 Bit?
Better, but the 8-bit encoder is still clobbering the 10-bit's dithering pattern a bit.
Anyway, thanks for posting the results of this experiment.