
Video encoding is done with fixed-function hardware for power efficiency. A popular new codec in the H.26x line appears every 5-10 years, so there's no real need for the hardware to support future ones.


Video encoding is really two domains, and there's surprisingly little overlap between them.

First, you have real-time video encoding: video conferencing, live television broadcasts. This is done with fixed-function hardware not just for power efficiency, but also for latency.

The second domain is encoding at rest: YouTube, Netflix, Blu-ray, etc. This is usually done in software on the CPU for compression efficiency.

The problem with fixed-function video encoding is that the compression ratio is bad: you get enormous files, awful video quality, or both. The problem with software video encoding is that it's really slow. OP is asking why we can't/don't have the best of both worlds. Why can't/don't we write a video encoder in OpenCL/CUDA/ROCm, so that we get the speed of the GPU's compute capability but the compression ratio of software?
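To make the "why GPU compute?" argument concrete, here's a toy sketch (Python, not a real encoder; all names are illustrative) of the block-matching motion search that dominates software encoding time. Every candidate offset, and every block, is evaluated independently, which is exactly the kind of data-parallel work a GPU is good at:

```python
def sad(cur, ref, bx, by, rx, ry, n=8):
    """Sum of absolute differences between the n x n block of `cur`
    at (bx, by) and the n x n block of `ref` at (rx, ry)."""
    return sum(
        abs(cur[by + j][bx + i] - ref[ry + j][rx + i])
        for j in range(n) for i in range(n)
    )

def best_motion_vector(cur, ref, bx, by, search=4, n=8):
    """Exhaustive search over a +/- `search` pixel window of the
    reference frame. Each (dx, dy) candidate is independent of the
    others, so a GPU could score all candidates (and all blocks of
    the frame) in parallel instead of this serial loop."""
    h, w = len(ref), len(ref[0])
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = bx + dx, by + dy
            if 0 <= rx <= w - n and 0 <= ry <= h - n:
                cost = sad(cur, ref, bx, by, rx, ry, n)
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost
```

A software encoder runs millions of these searches per second of video, which is why it's slow on a CPU; the open question in the thread is why this parallelism isn't routinely exploited with GPU compute instead of fixed-function blocks.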



