
DataBase for Benchmarks?

Posted: Fri May 15, 2020 10:58 pm
by WarthogARJ
Hi all,
This is a great idea.
Is there a database for these posts?

If not, then I think I might see if I can do it with a data-mining approach.
I'm not very good at it yet, so it's almost faster to do it by hand... :-}

A few of the hardware reviewers use it as a general CPU benchmark, but from what I can see they have not done anything with the data; they just use it as a very rough ranking for CPUs:
Anandtech: ... ew,17.html
Techspot, Legit Reviews and Reddit have some data as well.

They all seem to use a pretty crude parameter to judge things by: frames per second.
I'm assuming that's the total frames in a video divided by the encoding run time?
Is that the best metric to use?
It seems a poor one to me, since the work per frame is a function of resolution.
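To make the distinction concrete, here is a minimal sketch of the raw FPS figure versus a resolution-normalised alternative (the function names and the sample clip numbers are my own, purely for illustration):

```python
def encode_fps(total_frames: int, encode_seconds: float) -> float:
    """Raw throughput: frames encoded per wall-clock second."""
    return total_frames / encode_seconds

def pixels_per_second(total_frames: int, width: int, height: int,
                      encode_seconds: float) -> float:
    """Resolution-normalised throughput: pixels encoded per second,
    so a 1080p run and a 4K run become roughly comparable."""
    return total_frames * width * height / encode_seconds

# A hypothetical 14400-frame 1080p clip encoded in 120 s:
print(encode_fps(14400, 120.0))                      # 120.0 fps
print(pixels_per_second(14400, 1920, 1080, 120.0))   # 248832000.0 pixels/s
```

The same 120 fps on a 4K source would represent four times the pixel rate, which is why raw FPS alone is hard to compare across benchmarks.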

I haven't gone through it all, but I see some comments about:
(1) Processor speed: seems roughly linear.
(2) Threads: Guru3D says HandBrake is limited to a maximum of 16 threads (is that true?)
(3) Cores: HandBrake says encoding speed scales up to 6-8 cores, with diminishing returns after that (is that likely due to the 16-thread limit rather than the cores themselves?)
(4) GPUs: some discussion, but I have not seen anything rigorous. Surely the effect of the GPU correlates with CUDA or OpenCL support? Or CUDA core count?
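The diminishing-returns pattern in (3) is what you'd expect whenever part of the encode pipeline is serial. As a hedged illustration only (the 90% parallel fraction is an assumption I picked, not a measured HandBrake figure), Amdahl's law shows the shape:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: predicted speedup when only `parallel_fraction`
    of the work can be spread across `cores`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assuming 90% of the encode parallelises (an illustrative guess):
for n in (2, 4, 8, 16):
    s = amdahl_speedup(0.9, n)
    print(f"{n} cores: {s:.2f}x speedup, {100 * s / n:.0f}% efficiency")
# 2 cores: 1.82x, 91%; 4: 3.08x, 77%; 8: 4.71x, 59%; 16: 6.40x, 40%
```

Per-core efficiency keeps falling as cores are added, which is consistent with the "scales to 6-8 cores" observation without needing a hard thread cap to explain it.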

There's a very nice benchmark for MATLAB called GPUBench that runs a number of routines on a given system and looks at the effect of the CPU as well as the GPU in it.
It gives you an overall ranking of a given computer system, and you can see the relative speed of single- and double-precision computing on the system.
Both on the CPU and on the GPU.

So if you ran GPUBench on a system you also ran HandBrake on, you could THEN get a way to compare diverse computers in terms of how well they run HandBrake.
In GPUBench's case, the biggest improvement in computing speed came from the GPU, and specifically from CUDA cores.
All else being equal, you could get much better performance from a high-end GPU than from a fancy CPU.
And if you have an existing computer, it's better to upgrade the GPU than to scrap it and try to get a much higher-specced CPU with a lot more cores.

I can post the summary of the MATLAB GPUBench scores; is there a way to post attachments on the forum?
I collated it from the GPUBench forum, as well as from MATLAB's archive for it.


Re: DataBase for Benchmarks?

Posted: Fri May 15, 2020 11:22 pm
by Woodstock
Since GPUs have nothing to do with HandBrake, what use would it be to benchmark them?

Re: DataBase for Benchmarks?

Posted: Sat May 16, 2020 8:48 am
by WarthogARJ
I don't see why you say that; it's not correct.

You can use a hardware encoder from Nvidia that is designed to work with the higher-end Nvidia GPUs.
And this is discussed, to a small degree, in the official HandBrake documentation itself:
[url] ... c.html[/url]

Sorry if you knew this, but for many computation tasks a GPU can perform the sort of work that encoding requires much more efficiently than a CPU can.
That's why very complex applications like ANSYS run much faster on systems with good GPUs.
It's got nothing to do with displaying the images on a monitor.

If your computer is fairly new and has a good GPU, you could see the effect yourself by encoding the same source video with the Nvidia hardware encoder and then with the encoder options you usually use.
But unless you have a machine designed for high-performance gaming, or ideally technical computing, your GPU is quite likely not good enough to do it very well.
Nvidia uses clever marketing, and the cheapest GPUs where you'd expect a significant difference are not cheap.
Based on the performance in the MATLAB and other analyses, you would likely want a GPU with a high CUDA core count and lots of SM cores/processors.

Re: DataBase for Benchmarks?

Posted: Sat May 16, 2020 4:19 pm
by Woodstock
Hardware encoders are entirely separate from the GPU, so benchmarking a GPU's performance is akin to benchmarking a Ford GT-40's track performance to help you decide whether the same-year Ford station wagon is good for going to buy groceries.

GPU performance and video encoder performance are, for the most part, unrelated. If you have a task that uses the GPU, then it is relevant; but HandBrake doesn't use the GPU.

Re: DataBase for Benchmarks?

Posted: Mon May 18, 2020 5:33 pm
by mduell
3) Do you understand what diminishing means?

4) CUDA cores are useless for video encoding.

Re: DataBase for Benchmarks?

Posted: Sat May 23, 2020 12:48 am
by WarthogARJ
No offence, but you guys missed my point, and are addressing a minor and irrelevant issue with respect to my question.

To make it clear, the QUESTION, as indicated in the subject heading, is: "DataBase for Benchmarks".
I was asking if anyone had compiled the posted results into a coherent database.
A summary, if you like.

The FIRST post about asking for benchmarks was in early 2007 by user "baggs":

After that there have been 182 replies, but not all contain benchmark information.
However, some contain a LOT of data; for example this one, with hundreds of benchmarks in a single post:

So my question stands, has anyone compiled the results?

To be honest, having the benchmark info scattered through forum posts, and not in any regular format makes it hard to compile into a database.
The most efficient method would be to have HandBrake itself export the data in a regular format.
It could be stripped of personal data.
Or even to submit it directly, say to GitHub.
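To sketch what a stripped, regular export might look like, here is one possible record layout as a single JSON object per run. Every field name below is my own invention for illustration; nothing like this exists in HandBrake today:

```python
import json

# Hypothetical benchmark record, stripped of personal data.
# All field names are illustrative, not an actual HandBrake export format.
record = {
    "handbrake_version": "1.3.2",
    "preset": "Fast 1080p30",
    "video_encoder": "x264",
    "source_frames": 14400,
    "encode_seconds": 120.5,
    "cpu_model": "example CPU",
    "cpu_cores": 8,
    "threads_used": 16,
}

# One JSON line per run is trivial to append to a shared log and to
# load later into a database or spreadsheet.
line = json.dumps(record, sort_keys=True)
print(line)
```

A fixed schema like this is exactly what would make the scattered forum posts machine-compilable.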

That's how Geekbench works:

I'm not a good enough programmer to do that myself, but perhaps someone who can might be interested.
I'll suggest it.

I think it would be useful to do.
We've already established that there's confusion about how GPUs are used with the current HandBrake version.
Even though it's described very clearly in the Manual.

Re: DataBase for Benchmarks?

Posted: Sat May 23, 2020 1:30 am
by WarthogARJ
mduell wrote: Mon May 18, 2020 5:33 pm 3) Do you understand what diminishing means?

Yes, I understand what diminishing means.
But it's ONLY "diminishing returns" if the software itself remains stagnant with regard to current trends in system development.

I'm not sure what other technical software you use, but high-end applications such as ANSYS, ThermoCalc, MATLAB or COMSOL don't top out at 6 cores.
I'm not saying HandBrake is bad code, but there's no inherent reason why it cannot be revised to make use of more cores, more threads, or GPU capabilities.

Perhaps you did not understand my point: perhaps my fault for not making it clearer.
I'm NOT disputing that the CURRENT Handbrake version has these limitations.

But from what I've seen, I'm not the only one who is unclear on what the current limits are.
And I suspect the people looking after the code don't know them exactly either.

That's EXACTLY why a BENCHMARK is useful.
To delineate those limits.

You said:
"4) CUDA cores are useless for video encoding."

I'm not trying to be funny here, but I'm assuming you've seen the latest HandBrake entry about NVENC:
[url] ... c.html[/url]

And what Nvidia itself says about NVENC:
[url] ... matrix[/url]

To make it clear, I'm asking how fast/efficiently HandBrake runs if you select NVENC.
Let's say you have two systems that are identical except for the GPU.
System 1 has a low-end Nvidia GPU: GeForce GTX 1050.
System 2 has a high-end one: Tesla T4.

Please note where Nvidia shows the interaction between the CPU, the buffer and the CUDA cores.

Seeing as NVENC is a codec/library/routine that is run by the Nvidia GPU itself, and Nvidia uses CUDA cores as one of the aspects of GPU design that describe how fast/efficiently a specific GPU runs, I'd think that if you use Nvidia's NVENC, which uses the GPU to do the encoding, then things like CUDA cores are indeed used for the encoding.

Nvidia says so, and if you run NVENC, you are running THEIR routines.

So perhaps Handbrake can make further use of GPU capability?
Why not?
Everyone else is.

Re: DataBase for Benchmarks?

Posted: Sat May 23, 2020 2:05 am
by Woodstock
Benchmarks are fleeting things. You mention the thread from 2007; how many of those messages are relevant in 2020? In 2007, a 4-core CPU was fairly new tech; today, it is base level.

And if you do enough searching around, you'll find that HandBrake used to have some CUDA code in it, but it was removed: at the time it was at best 5% faster than contemporary CPU implementations, it rapidly lost the race to newer CPUs with more cores, and no one wanted to maintain it. It also became irrelevant with the ASIC encoders: QSV, NVENC, and VCE.

If GPU encoder implementations were truly fast, why did the GPU manufacturers jump ship and make ASIC implementations? And, even when you have one of the ASIC hardware encoders, the speeds they can reach are affected by the filters you chose (which run on the CPU).
WarthogARJ wrote: We've already established that there's confusion about how GPU's are used with the current Handbrake version. Even though it's described very clearly in the Manual.
What confusion? HandBrake does not use GPU cores. Neither GPU core count nor clock speed affects the speed of a HandBrake encode. Making a database of GPU performance would have to be done with tools that actually use the GPU.

Re: DataBase for Benchmarks?

Posted: Sat May 23, 2020 3:07 am
by mduell
NVENC doesn't use CUDA cores for encoding, it uses dedicated NVENC hardware on the ASIC.

Re: DataBase for Benchmarks?

Posted: Sat May 23, 2020 10:22 am
by s55
@WarthogARJ, if you won't take it from other users, take it from a developer. CUDA and NVENC are not the same thing. NVENC is a dedicated hardware ASIC that happens to be on the same die as the GPU. As such, there is very little performance difference between cards in a series. HandBrake does not support CUDA.

The quality of the data in this thread isn't really that useful in the grand scheme of things. You need many samples for the same version, source, and settings across hardware. If we wanted to do something like this, we'd build a benchmark into the app itself. Sadly, being a small project with limited contributor time, it's just not something we'd have time to do.

That leaves you in the realm of trusting users not to make mistakes, which, sadly, given the misinformation we get from users all the time, doesn't bode well.