The only useful comment posted so far (in relation to the question being asked) is the comment that 'it'll be ready when someone else does it'. I don't know if that person speaks for the whole HandBrake team, but it's at least useful as a starting point.
Rather than posting personal speculation and wild theories, can we stick to facts?
Here are some facts.
"The main stated goal of the HEVC development is “substantial” bitrate reduction relative to AVC High Profile. The general target of the group (although informal) is to provide about 50% compression improvements over AVC. A second goal is to serve a wide range of applications. To do this, HEVC defines support for resolutions from QVGA (320x240) to 8K (7680x4320). HEVC also targets two modes, a “low complexity” mode, which is meant to have a small decrease in complexity (especially for decoders) compared to AVC, and a “high efficiency” mode, which will contain more complex coding tools, achieve better compression, but come at higher processing costs. Finally, HEVC defines a “low delay” configuration capable of low latency operation. In general, HEVC decoders are expected to have 2-3 times the computational complexity of AVC decoders, and HEVC encoders are expected to have up to 10 times the computational complexity of AVC encoders."
http://www.sencore.com/company/blog/eme ... s-part-two
*up to* being the key phrase; there are two encoding modes. Low-complexity mode has a similar run-time to H264 encoding but roughly 25% smaller file sizes; high-complexity mode takes about 10x as long as H264 (or 100x as long as MPEG-2) but achieves a 50% improvement with no drop in subjective quality. Glacially slow? Are you serious?
As for 'you've been looking at press releases'? No, I've been reading academic papers with benchmarks, and standards organisation releases. HEVC has thousands of people behind it from dozens of organisations, and many years of work. These people include 'non-corporate' participants such as academic researchers, the BBC and so on. The tests and benchmarks they run are standardised and were agreed upon at the start of development. There are already multiple open source implementations out there, which a simple Google search will reveal.
http://hevc.kw.bbc.co.uk/git/w/jctvc-tmuc.git
http://code.google.com/p/x265/
https://hevc.hhi.fraunhofer.de/svn/svn_ ... gs/HM-1.0/ (reference decoder)
"The JCT-VC has published a software reference implementation of the proposed HEVC standard, called the “HEVC Test Model” or HM. The latest version is HM-7.0 (based on version 7.0 of the HEVC standard, which was proposed at the May 2012 meeting). This implementation is open-source and includes both a decoder and an encoder application. HHI hosts the subversion repository for the code and BBC hosts the issue tracker for the code. HHI also published a software reference manual and software development guidelines for the HM."
http://www.sencore.com/company/blog/eme ... s-part-two
There are already software and hardware products out there relating to H265/HEVC, either complete or in preparation.
http://www.solveigmm.com/en/products/zond/
http://hevcvisa.codecian.com/
Talk of a 2014 or later timescale is ignorant beyond imagination. The code's out there already. It works. Even hardware implementations already exist. And in terms of underlying design it's hardly a world away from existing codecs such as H264/x264. The almost-final reference has achieved the design goals.
In terms of efficiency goals, they did it!
"N12475 is the Report on preliminary subjective testing of HEVC compression capability which can be found here. It shows impressive results as reported elsewhere, e.g., here. In particular, > 50% bitrate reduction, 67% in class B (HDTV), 49% in class C (WVGA) => mission accomplished! Currently, HEVC is between ballots and FDIS/IS is expected around Jan-Apr 2013."
http://multimediacommunication.blogspot ... eting.html
Consider also that we are talking about a standard which is an evolutionary improvement upon H264 rather than a revolutionary one. Look how quickly 802.11n caught on, and that required a change of physical hardware, firmware and even network reconfiguration; yet enterprises were adopting 802.11n in practice before it was officially finalised (http://en.wikipedia.org/wiki/IEEE_802.11#802.11n).
In contrast, much of HEVC can be implemented in software and via reprogrammable GPUs (OpenCL). Demands on CPU/GPU are relatively low compared to the quantum leap between MPEG-2 and H264. Virtually every new computer sold has a built-in, multi-processor, general-purpose maths accelerator with exactly the right kinds of functions to support things like HEVC.
And what is the impetus to drive adoption that will make this be a reality for us all before e.g. mid-2013?
"Internet video was 40 percent of consumer Internet traffic in 2010 and will reach 50 percent by year-end 2012."
"Every second, 1 million minutes of video content will cross the network in 2015."
http://www.cisco.com/en/US/solutions/co ... Paper.html
Halving bitrate halves costs - of transmission, server storage, client storage. That's the kind of thing companies will buy into *very* easily. Video is the bulk of the world's data. We can effectively make the entire internet twice as fast, the world's storage twice as vast, double the number of channels on satellite or cable (or double their quality), by adopting this standard. How can anyone imagine the world is going to sit on its butt till '2014 or later' when that kind of opportunity is sitting there for the media and network companies of the world? This codec is the kind of thing that affects world GDP.
As for "Way beyond glacially slow". Are you joking? The whole point of the exercise was to develop a standard that offered similar encode times to H264 while producing a noticeably smaller file, or greater encode times with larger compression. Here's a factual example:
http://iphome.hhi.de/schierl/assets/hevc_icassp2012.pdf
A 12-core Intel Xeon producing 1920x1080 at 51fps? Now what does that mean for someone encoding a DVD on a 4-8 core desktop?
Approximately: (1920*1080 / 12) / (720*576 / 8) * 51 = 170fps for DVD encoding on an 8-core desktop. This should be adjusted down slightly since Xeon chips run faster than regular ones - perhaps 130-150fps.
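The back-of-envelope calculation above can be sketched out explicitly. Note the assumptions baked in: throughput is assumed to scale linearly with core count and inversely with pixels per frame, which is a rough first approximation, not a measured result.

```python
# Rough scaling of the published 1080p benchmark (51 fps on 12 cores)
# down to PAL DVD resolution on an 8-core desktop.
# Assumption: encode throughput scales linearly with cores and
# inversely with pixels per frame.

BENCH_FPS = 51            # 1920x1080 @ 51 fps in the cited paper
BENCH_PIXELS = 1920 * 1080
BENCH_CORES = 12          # 12-core Xeon used in the benchmark

DVD_PIXELS = 720 * 576    # PAL DVD frame
DESKTOP_CORES = 8

# Pixels each core processed per benchmark frame, vs. per DVD frame
pixels_per_core_bench = BENCH_PIXELS / BENCH_CORES
pixels_per_core_dvd = DVD_PIXELS / DESKTOP_CORES

estimate = pixels_per_core_bench / pixels_per_core_dvd * BENCH_FPS
print(round(estimate))    # ~170 fps, before any clock-speed adjustment
```

Shaving that down for the clock-speed difference between server and desktop parts gives the 130-150fps ballpark quoted above.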
Sound impossible? Then consider that the whole purpose of the R&D was to develop an encoder with encode times similar to H264 and a 25% file-size improvement, using smarter tricks to achieve the compression (in low-complexity mode), or twice the file reduction (50% vs 25%) at approximately 10x the encode time of H264 (in high-complexity mode).
Decode isn't hard either. The whole point of the exercise was to design something with a light decode process based on similar mathematical functions to H264. Amazing as this might seem, all the technology companies of the world have actually noticed that people use digital cameras and mobile phones and tablet PCs.
"David Hopkins, director of product marketing, MPEG-4 encoding solutions group, Motorola, told CSI on the side of the company’s press and customer event in Stockholm that the complexity of HEVC - it’s about 100 times more complex than MPEG-2 to encode but only three to five times as complex to decode – means the technology will first appear in mobile devices, potentially before the end of 2013."
http://www.csimagazine.com/csi/OTT-will ... torola.php
The myth about 'glacially slow' encoders originated from one phase of development that was effectively a competition to see who could crunch the data the smallest (teams were allowed two entries: 'high complexity' and 'low complexity'). Unsurprisingly, some teams (e.g. the BBC team) decided to cheat a bit by running an exhaustive search over every possible compressed representation for their high-complexity entry, to see which achieved the optimal compression. Cunning. It's also entirely irrelevant to real-world use.
Read this thread from early 2010 to see the origin of the 'HEVC is glacially slow' myth:
http://x264dev.multimedia.cx/archives/360
in particular the comments by Hurumi, the BBC/Samsung development lead.
Some more interesting links with data and benchmarks:
https://dl.dropbox.com/u/1346434/w12475.zip
http://www.vcodex.com/h265.html
Now, with all that said. Is there anyone here who has actually been involved in the HEVC development process who can make a reasonable fact-based estimate of when we might see HEVC (or something closely related) show up in a form that can be easily incorporated into Handbrake?
Please avoid replying if you have no familiarity with HEVC or HEVC development. Thanks.