TL;DR
- The current one-to-many model of video streaming cannot cope with the real-time, two-way demands of interactive applications.
- AMD, Intel, NVIDIA, and others are looking at ways to accelerate the throughput of video from origin to device and back.
- AI-driven video applications will also benefit from these innovations.
READ MORE: Internet-Scale Video Requires Its Own Kind Of Supercomputing (The Next Platform)
As video becomes increasingly interactive, the need to accelerate the speed of online traffic is becoming critical.
Outlining the issue at The Next Platform, Vincent Fung, senior product marketing manager at AMD, says, “It’s starting to put a strain on the infrastructure when it comes to the networking pipe and also in terms of processing on the server side. The previous traditional [infrastructure] model starts to not make much economic sense. It becomes a harder model to keep up to address these use cases.”
The prevailing internet model works for streaming. In a one-to-many on-demand environment driven by companies like Netflix or events like the live broadcast of sporting competitions, the video feed starts in a single place, runs through cloud datacenters, content delivery networks (CDNs), and edge servers, and finally lands in enterprise offices or the homes of consumers.
Such a feed always arrives with a slight delay, whether from the processing and computing done in the datacenter to ensure good quality, or because broadcasters deliberately build in a few seconds for editing purposes. For these one-to-many scenarios, though, the delay poses no real problem.
Sean Gardner, head of video strategy and development at AMD, explains, “Netflix can take 10 hours — and they do — to process one hour of video, and they can do it in off-hours when they have excess capacity. But ‘live’ needs to happen in 16 milliseconds or you’re behind real time, at 60 frames a second.”
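Gardner's 16-millisecond figure falls straight out of the frame rate: at 60 frames per second, each frame has a budget of 1000/60 ≈ 16.7 ms, and any processing that exceeds it pushes the stream behind real time. A minimal sketch of that arithmetic:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to process one frame in real time."""
    return 1000.0 / fps

# At 60 fps every frame must be captured, encoded, and shipped
# within roughly 16.7 ms; at 30 fps the budget doubles to ~33.3 ms.
print(round(frame_budget_ms(60), 1))  # 16.7
print(round(frame_budget_ms(30), 1))  # 33.3
```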
Applications that demand real-time interactivity, on the other hand, cannot tolerate such delays. These range from video game live streaming on Twitch to video conferencing.
Gardner says, “If you think about this scenario, where you could have Zoom or Teams, it could have billions of people using it concurrently. Or Twitch, which has hundreds of thousands of ingest streams. The other aspect with live [streaming] is that you can’t use a caching CDN-like architecture because you can’t afford the latency. This is why acceleration is needed.”
Fung adds, “There’s a lot more processing that needs to be done from a video perspective when we look at these interactive use cases when one-to-many becomes many-to-many. You need high performance because you have a lot of people using it. You want to minimize bandwidth costs because the uptake is large.”
Chip makers like AMD and Intel are aware of the issue and are trying out different architectures to boost the throughput of video through the pipe. AMD, for instance, has a “datacenter media accelerator” and a dedicated video encoding card that can more than halve the bitrate to save on bandwidth.
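To see why halving the bitrate matters at the scale Fung and Gardner describe, consider the monthly egress of a large concurrent audience. The figures below are illustrative assumptions for the sketch (stream count, bitrates, viewing hours), not AMD's published numbers:

```python
def monthly_egress_tb(streams: int, bitrate_mbps: float,
                      hours_per_day: float = 4.0, days: int = 30) -> float:
    """Total monthly egress in terabytes for `streams` concurrent viewers."""
    seconds = hours_per_day * 3600 * days
    bits = streams * bitrate_mbps * 1e6 * seconds
    return bits / 8 / 1e12  # bits -> bytes -> terabytes

# Assumed workload: 100,000 concurrent 1080p streams at ~6 Mbps,
# versus the same quality delivered at half the bitrate.
baseline = monthly_egress_tb(100_000, 6.0)
halved = monthly_egress_tb(100_000, 3.0)
print(f"baseline: {baseline:,.0f} TB/month, halved: {halved:,.0f} TB/month")
```

Because egress is linear in bitrate, halving the per-stream rate halves the delivery bill, which is exactly the economics Fung points to when "the uptake is large."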
According to the experts, it’s not just video applications that will benefit from such acceleration. AI use cases are also on the rise.