x264 10-Bit: encode 10 Bit, output 8 Bit?

General questions or discussion about HandBrake, Video and/or audio transcoding, trends etc.
BradleyS
Moderator
Posts: 1860
Joined: Thu Aug 09, 2007 12:16 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by BradleyS »

Agreed, it will be nice to see some test results from the OP. I'm basing part of my theory on the observation that the source image provided shows typical DSLR-like, digital-looking dithering—essentially a pseudorandom diffuse/crosshatch combination—while the 10-bit upsampled images show more uniform, patterned dithering. 10-bit x264 and x265 look similar, but there are slight differences to be noticed.

Of course, re-encoding the 10-bit output to 8-bit, even if the 10-bit intermediate is lossless, will produce yet another generation of dithering on the way back to 8-bits. Each generation of dither is essentially rearranging the micro noise in the image, and some arrangements may ultimately look better than others.

A denoise, adjust, regrain workflow is superior in that the first step removes unpleasant textures and the last step adds a known pleasing texture. And it's easy enough to replicate, given the tools. Predictably good.
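
For anyone who wants to try that workflow outside the GUI, here is a minimal sketch that drives ffmpeg from Python, assuming ffmpeg with libx264 is installed; it uses ffmpeg's hqdn3d filter for the denoise step and its noise filter for the regrain step, with arbitrary placeholder strengths and placeholder file names (the "adjust"/grading step in between is omitted):

```python
# Hypothetical denoise -> regrain pass, not a recommendation of these exact
# strengths.  Assumes ffmpeg with libx264 is on the PATH.
import subprocess

cmd = [
    "ffmpeg", "-i", "input.mkv",
    # hqdn3d removes the source's noise; noise=...:allf=t+u adds back a
    # uniform, temporally varying grain as the final "known pleasing texture".
    "-vf", "hqdn3d=3:2:6:4,noise=alls=7:allf=t+u",
    "-c:v", "libx264", "-crf", "17", "-preset", "slow",
    "-c:a", "copy",
    "output.mkv",
]
subprocess.run(cmd, check=True)
```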
jd17
Posts: 38
Joined: Thu May 04, 2017 3:19 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by jd17 »

BradleyS wrote: Mon May 22, 2017 9:23 pm
I'm basing part of my theory on the observation that the source image provided shows typical DSLR-like, digital-looking dithering—essentially a pseudorandom diffuse/crosshatch combination—while the 10-bit upsampled images show more uniform, pattern dithering. 10-bit x264 and x265 look similar, but there are slight differences to be noticed.
The frame is from the movie "Captain America: Civil War", which was mainly shot on Arri Alexas (XT and 65).

The x265 screenshot I attached is obviously much worse, because it was encoded without a tune setting and is therefore way too blurry.

Here is the same frame in x265 10-Bit CRF17 medium, tune grain:

Image

BTW, the difference in color and gamma (10-Bit vs. 8-Bit) comes from the flawed conversion in "Avidemux", which I used to pick the exact frame.
There is no color, gamma or brightness difference when played in Kodi - but the more pleasing dithering pattern is the same.
Deleted User 13735

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by Deleted User 13735 »

Yes, the advantages of HandBrake's worthy ditherer were duly noted in my previous tests, and if blurring the pattern dither from various sources by overlaying is seen as a slight temporal improvement, then so be it.

It should not be mistaken for an improvement in banding, however, unless the bits are there in the source, as I pointed out.

As for internal filter precision being a factor, we know it works in a true float environment like Photoshop or Vegas, but I thought HandBrake's engine was still 8-bit integer?
JohnAStebbins
HandBrake Team
Posts: 5712
Joined: Sat Feb 09, 2008 7:21 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by JohnAStebbins »

Also keep in mind that what you are quantizing in the H.264 stream is in the frequency domain. When converting a frame from the spatial to the frequency domain (and vice versa during playback) there will be rounding errors. The size of the rounding error depends on the number of bits kept in quantization.

In this scenario, the basic encode process is this:
  • The 8 bit spatial domain gets converted to 10 bit spatial domain with the 2 lower bits all zeros.
  • The 10 bit spatial domain gets converted to 32 bit frequency domain coefficients.
  • The frequency domain coefficients get quantized (truncated) to variable bit depths based on what optimizes visual quality.
  • The quantized coefficients get entropy encoded (this is where the actual compression happens) and written to the file.
During playback:
  • Coefficients are decompressed
  • Coefficients are converted from the variable bit depth frequency domain back to the 10 bit spatial domain.
All these conversions and quantization steps result in some amount of roundoff that is visible as the dithering effect you see during playback.
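
To make that roundoff concrete, here is a rough numerical sketch in Python; it uses a floating-point DCT from SciPy and an arbitrary quantizer step rather than x264's actual integer transform and quantization tables, so treat it as an illustration only:

```python
# Toy round trip: pad an 8-bit block to 10 bits, transform, quantize, invert.
# Not H.264's real integer transform -- just an illustration of roundoff.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block8 = rng.integers(100, 110, size=(8, 8))        # nearly flat 8-bit block
block10 = block8 << 2                               # 10-bit, low 2 bits zero

coeffs = dctn(block10.astype(float), norm="ortho")  # spatial -> frequency
qstep = 12.0                                        # arbitrary quantizer step
coeffs_q = np.round(coeffs / qstep) * qstep         # quantize + dequantize

recon10 = np.rint(idctn(coeffs_q, norm="ortho")).astype(int)
print(np.unique(recon10 & 0b11))    # the low 2 bits are no longer all zero
```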
Deleted User 13735

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by Deleted User 13735 »

+1 John Stebbins!
The most common misconception (at least on the Internet) is that some kind of bit interpolation takes place, supposedly filling in those empty zeroes, when in fact a simple histogram shows it does not.
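
That is easy to check yourself; a quick sketch of the histogram argument, using synthetic data for illustration (any 8-bit plane would do):

```python
# Zero-padding 8-bit samples to 10 bits creates no new intermediate levels;
# np.unique plays the role of the "simple histogram" mentioned above.
import numpy as np

plane8 = np.random.default_rng(1).integers(0, 256, size=(720, 1280), dtype=np.uint16)
plane10 = plane8 << 2                  # the usual 8-bit -> 10-bit zero fill

print(len(np.unique(plane8)))          # at most 256 distinct code values
print(len(np.unique(plane10)))         # exactly the same count after padding
```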

Although some elite capture cards do hardware bit-upsampling for broadcast, software algorithms are understandably slow and messy for such purposes.

I encounter so many tales of video alchemy in my reading, I am inclined to just let it go; these overachieving hobbyists are actually doing themselves little harm beyond the persistence of their delusions, and their absence from the workforce.

To sum up my impression, 10 bit YUV intermediates serve a purpose with a 10 bit source, and even with 8 bit 4:4:4. Otherwise, no quantifiable improvement in banding (number of colors) with a 4:2:0 source can be theorized or measured.
jd17
Posts: 38
Joined: Thu May 04, 2017 3:19 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by jd17 »

I am all for quantifiable evidence to support claims - but I am also practical when I need to be.
So, better perceived picture quality + smaller filesize still sounds like a simple win/win to me.

I get that the difference between x264 8-Bit and x264 10-Bit might be perceived as subtle.
Therefore I also understand doubting the actual benefit in the case of x264...

However, please take another look at this comparison between x265 8-Bit and x265 10-Bit (in which the 8-Bit is even encoded at a lower CRF, i.e. higher quality, and a slower preset):
jd17 wrote: Mon May 22, 2017 5:36 pm
x265-8 / CRF13 / slow / no tune:
Image

x265-10 / CRF16 / medium / no tune:
Image
How is the 10-Bit encode not objectively better?
The x265 8-Bit result is just horsesh*t. :P

x265 8-Bit definitely introduces banding lines that are not present in the source:
jd17 wrote: Mon May 22, 2017 5:36 pm
Source:
Image
JohnAStebbins
HandBrake Team
Posts: 5712
Joined: Sat Feb 09, 2008 7:21 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by JohnAStebbins »

The point I was making above is that even though the low 2 bits are zero when encoding 8 bit to 10 bit, the quantization of coefficients that occurs during encoding means the lower 2 bits after decoding will no longer be zeros. It's not a perfectly random dither, but it does provide some visually appealing smoothing to the bands.

I'm not entirely sure about what follows (I'm not a codec guy, I just happen to know a few things). But I believe there are going to be additional factors that contribute to improved image quality when using 8 bit extrapolated to 10 bit with zero fill. For example (with H.264), the transform to frequency domain coefficients is an integer operation that starts (in this scenario) with 10 bit quantities and results in 32 bit values. I'm pretty sure this transform can result in some lower bits basically falling off the LSB end during the computation (e.g. a division or shift). So when starting with 10 bits, you would retain more precision through this transform.
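
The general precision argument can be illustrated with a toy two-stage integer pipeline; this is an assumption-laden sketch with made-up scale factors, not the real H.264 transform:

```python
# Toy illustration of the precision point above: two integer stages, each
# ending in a rounding shift.  Only the principle matters here
# (more input bits -> smaller relative rounding error).
import numpy as np

def toy_pipeline(x):
    y = (x * 3 + 2) >> 2        # ~x * 0.75 with round-to-nearest
    return (y * 5 + 4) >> 3     # ~y * 0.625 with round-to-nearest

x8 = np.arange(256, dtype=np.int64)
exact = x8 * 0.75 * 0.625                       # full-precision reference

err_8bit  = np.abs(toy_pipeline(x8) - exact)            # straight 8-bit input
err_10bit = np.abs(toy_pipeline(x8 << 2) / 4 - exact)   # 10-bit zero-padded input

print(err_8bit.mean(), err_10bit.mean())   # the padded path loses less precision
```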
jd17
Posts: 38
Joined: Thu May 04, 2017 3:19 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by jd17 »

To verify whether my x265-10 setting saves space across different sources, I encoded 5 minutes of "Casino Royale".
I chose the first 5 minutes of the Madagascar scene at the beginning of the film.

This is an extreme case: not only is it analog 35mm film material with loads of grain, it is also packed with fast action, so it screams for a high bitrate.
It was never worth encoding that movie in x264-8 CRF16, because the file ended up being bigger than the source.

I would however save a good deal of space with x265-10, so I'm happy. :)

Source: 26100 kbit/s
x264-8 CRF16 slow film: 29179 kbit/s
x265-10 CRF17 medium grain: 20983 kbit/s

It is pretty much impossible to see a difference between the three. :)

BradleyS wrote: Mon May 22, 2017 8:01 pm
What you are seeing is the upsampling algorithm in the higher bit-depth encoders permanently changing the dithering pattern evident in the gradation. Try encoding to 10-bit as you did before, but use RF 0 (x264's lossless mode), and then re-encode that as 8-bit at a normal RF like 17. I bet some or all of that newly introduced dithering pattern survives, and you've just simulated the "N-bit, 8-bit output" you mention.

Of course, the 8-bit file will be larger than the 10-bit file due to quantization error.
I also had time to try this, have a look:

1. Source:
Image

2. Source to x264-10 CRF0 slow film:
Image

3. x264-10 CRF0 slow film to x264-8 CRF16 slow film:
Image

4. Source to x265-10 CRF17 medium grain:
Image

In my opinion/eye, your theory does not hold true.
BradleyS
Moderator
Posts: 1860
Joined: Thu Aug 09, 2007 12:16 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by BradleyS »

Indeed, looks about like an 8-bit encode with no intermediate.
BradleyS
Moderator
Posts: 1860
Joined: Thu Aug 09, 2007 12:16 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by BradleyS »

If you really want to geek out, try 10-bit lossless to 8-bit lossless to make sure the encoder isn't decimating those blocks to hell.
jd17
Posts: 38
Joined: Thu May 04, 2017 3:19 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by jd17 »

OK, I did that too. :)

1. Source:
Image

2. Source to x264-10 CRF0 slow film:
Image

3. x264-10 CRF0 slow film to x264-8 CRF0 slow film:
Image

To be honest, the last experiment looks like an almost perfectly restored source to me. :)
BradleyS
Moderator
Posts: 1860
Joined: Thu Aug 09, 2007 12:16 pm

Re: x264 10-Bit: encode 10 Bit, output 8 Bit?

Post by BradleyS »

Better, but the 8-bit encoder is still clobbering the 10-bit's dithering pattern a bit.

Anyway, thanks for posting the results of this experiment. :)