Basically there are round-off errors when converting to and from the frequency domain that are reduced when doing 10-bit encoding. I'm making a bit of a guess here since I'm not a codec expert, but I think the expanded decode bit depth allows the encoder to make better choices when selecting the coefficients it keeps during quantization, and thus improves its compression efficiency (i.e. bits per pixel).
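To put a rough number on that, here's a toy sketch (nothing like a real codec, and the quantization step is just a made-up value): the same quantized DCT coefficients are reconstructed and then stored at 8-bit vs 10-bit precision, and the extra bits shrink the round-off error left in the stored picture.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
# a subtle, slightly noisy dark ramp in [0, 1] -- the kind of area where banding shows
signal = np.linspace(0.10, 0.12, 64) + rng.normal(0, 0.002, 64)

coeffs = dct(signal, norm="ortho")
qstep = 0.01                                   # arbitrary coarseness for this toy example
coeffs_q = np.round(coeffs / qstep) * qstep    # lossy coefficient quantization
ideal = idct(coeffs_q, norm="ortho")           # reconstruction at full float precision

for bits in (8, 10):
    levels = (1 << bits) - 1
    stored = np.round(ideal * levels) / levels # store the reconstruction at this bit depth
    err = np.abs(stored - ideal).mean()
    print(f"{bits}-bit storage, mean round-off error: {err:.6f}")
```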
Not in terms of compatibility, no. I would stick with 10-bit; AFAIK there are very few devices with hardware-accelerated 12-bit decoding support. So IMHO 10 is better than 12.
When the source is 8-bit (standard BD) and the pipeline is 8-bit, what sense does 10-bit/12-bit encoding make?
Just padding the surplus bits with zeroes?
A 10-bit source (HDR) cannot be processed properly because the pipeline is 8-bit,
and the same goes for a 12-bit source (Dolby Vision).
So why all the bells and whistles for 10/12-bit encoding if it's of no use at all?
Well, I of course don't know what kind of sources or playback devices you use, but things like banding have never been a problem with my sources (DVD, BD, BD-UHD (SDR, HDR and DV) and MP4 recordings from my drone).
Maybe I am visually impaired (just not quite 20/20), or my requirements and my playback equipment are simply not studio-grade, but I can't see the benefit of fiddling around with bits when that benefit isn't visible on the screen.
The type of artefact I'm referring to is present with any of those sources, although you may not "SEE" it in a properly set up viewing environment. It's most easily visible when looking closely at dark areas and becomes quite apparent if you increase brightness and contrast on your monitor or TV (not recommended for proper viewing). It's the kind of thing the "aq-mode=3" encoder option was specifically designed to help with, but 10-bit encoding does a better job at significantly lower bitrates.
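For illustration, one possible way to combine 10-bit output with aq-mode=3 through ffmpeg/libx265, wrapped in a small Python script (the file names and the CRF value are just placeholders, not recommendations, and it assumes an ffmpeg build with 10-bit libx265 support):

```python
import subprocess

cmd = [
    "ffmpeg", "-i", "input.mkv",
    "-c:v", "libx265",
    "-pix_fmt", "yuv420p10le",       # 10-bit 4:2:0 output
    "-crf", "20",                    # placeholder quality target
    "-x265-params", "aq-mode=3",     # adaptive quantization biased toward dark areas
    "output.mkv",
]
subprocess.run(cmd, check=True)
```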
So although you may not see the artefacts in your tests (since you've probably got a properly calibrated viewing environment), they are there. If I find something that can improve quality and lower bitrate at the same time, I consider that a worthwhile option for *my* use case.
The downside of 10-bit encoding is that it limits which devices the video can be played back on. So if you want maximum portability, stick with 8-bit; it's plenty good enough. I have a limited set of players I use that support the formats I encode to.
It has been demonstrated visually in tests that when an 8-bit 4:4:4 or 10-bit YUV original source is imported and processed at higher precision, banding is reduced compared with conventional 8-bit dithering, even when the output for delivery is 8-bit 4:2:0.
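As a rough illustration of that effect (a simple ordered dither on a synthetic ramp, not how any particular tool does it internally): dithering higher-precision values down to 8-bit keeps the local average tracking a shallow gradient, whereas plain truncation leaves visible steps.

```python
import numpy as np

# a very shallow gradient at higher-than-8-bit precision (values in [0, 1])
hi_precision = np.linspace(0.200, 0.208, 1024)

# plain truncation to 8-bit: the ramp collapses into a few flat steps (banding)
truncated = np.floor(hi_precision * 255) / 255

# simple ordered dither: add a repeating threshold pattern before the 8-bit floor,
# so neighbouring pixels mix adjacent levels in roughly the right proportion
pattern = np.tile(np.array([0.0, 0.5, 0.25, 0.75]), 1024 // 4)
dithered = np.floor(hi_precision * 255 + pattern) / 255

def smoothed_abs_error(out, ref, win=16):
    """Low-frequency ('banding') error: average the error over a small window."""
    return np.abs(np.convolve(out - ref, np.ones(win) / win, mode="valid")).mean()

print("smoothed error, truncation:", smoothed_abs_error(truncated, hi_precision))
print("smoothed error, dithering: ", smoothed_abs_error(dithered, hi_precision))
```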