Before talking about compression, let’s define some key terms.
Codec
A codec is an encoder/decoder pair: algorithms that encode the native signal into network bits and decode those bits back into the native signal. Codecs are designed around the native signal format, so audio and video codecs are different algorithms. In SDVoE the focus is on video codecs. When compression is used, it is typically part of the codec. In terms of the Open Systems Interconnection (OSI) model, the conceptual model used to characterize and standardize network communication, the codec sits at layer six, as shown in the figure below. This is where AV data is translated into Ethernet data and vice versa.
Bandwidth
Bandwidth is the amount of network data the codec puts out onto the network. For video, it is measured in megabits or gigabits per second. Bandwidth availability has historically grown in a Moore's-law-like fashion: in practice, network bandwidth is not fixed, and the available bandwidth keeps increasing. Data centers are already using 100Gb switches and experimenting with 400Gb switches.
Latency
Video latency is the time that elapses between the source playing out an image and that image appearing on screen; it is the sum of codec time and transport time. Local network latency is typically under 1ms, while typical codec latency is 100ms or more, so we can mostly ignore the network's contribution. An academic study from the University of Manchester researched how latency affects computer operators and determined that users become uncomfortable at only 50ms! The study further found that users start feeling latency before they can actually see it. Of course, there are applications where latency does not matter, but in others, such as computer interaction, live events and 2-way voice/video chat, latency is critical. This is why we have matrix switches: near-zero latency is a hard requirement for many applications. It is also worth mentioning that most displays have their own scaler and processing engine, which add about 30ms. Any AV transport solution claiming 30ms latency therefore adds to the 30ms of the TV, giving us 60ms in total, already in the uncomfortable range for many users.
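The latency budget above is simple arithmetic, but it is worth making explicit. The sketch below adds up the illustrative figures quoted in the text (the 30ms transport claim, the ~30ms display scaler, the ~50ms discomfort threshold); all of these numbers are assumptions taken from the discussion, not measurements.

```python
# Rough end-to-end latency budget using the figures quoted above.
# All values are illustrative, not measured.
network_ms = 1     # local network transit: typically under 1 ms, mostly ignorable
codec_ms = 30      # an AV transport solution claiming "30 ms latency"
display_ms = 30    # typical display scaler / processing engine

discomfort_threshold_ms = 50  # threshold reported by the Manchester study

total_ms = codec_ms + display_ms  # network contribution ignored, as in the text
print(f"End-to-end latency: ~{total_ms} ms")
print(f"Above discomfort threshold: {total_ms > discomfort_threshold_ms}")
```

The point of the exercise: a transport that looks fine in isolation can still push the system past the comfort threshold once the display's own processing is counted.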
Quality
Quality is easy to understand: you know it when you see it. Does the image on screen look like the source image? Video quality is usually quantified by algorithms and tools. Some are simple; others, like SSIM (the Structural Similarity Index Metric), are more complex and align better with human perception of quality.
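To make the "simple versus perceptual" distinction concrete, here is a sketch comparing a simple pixel-wise metric (PSNR) with a simplified, single-window version of SSIM. Production SSIM implementations slide a local window across the image; this global variant is only for illustration, and the test images are synthetic.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: a simple pixel-wise quality metric."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(ref, test, peak=255.0):
    """Simplified SSIM computed over the whole image in one window.
    Real implementations (e.g. scikit-image) use sliding local windows."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Synthetic reference image plus a noisy "compressed" version of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
print(f"PSNR: {psnr(ref, noisy):.1f} dB, SSIM: {global_ssim(ref, noisy):.3f}")
```

An identical pair of images scores SSIM of 1.0; distortion pulls the score toward 0. PSNR, by contrast, only measures average pixel error, which is why it correlates less well with what viewers actually perceive.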
With the definitions out of the way, it is now time to take a closer look at video compression. Video in its native form consumes a ton of bandwidth. For example, after extracting the video from the HDMI signal, you have about 2-12Gbps to be sent over the network. To fit this into a 1Gbps pipe, you need compression - a lot of compression. See the chart below for some of the raw bandwidth requirements of high quality video.
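The 2-12Gbps figure falls out of simple arithmetic: pixels per frame, times frames per second, times bits per pixel. The sketch below computes raw pixel-data bandwidth for a few common formats; the format list is my own illustrative choice, and actual HDMI link rates are higher still because of blanking intervals and line encoding overhead.

```python
def raw_bandwidth_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed video bandwidth in Gbps (pixel data only, no blanking)."""
    return width * height * fps * bits_per_pixel / 1e9

# Illustrative formats, all 8-bit 4:4:4 (24 bits per pixel).
formats = {
    "1080p60": (1920, 1080, 60, 24),
    "4K30":    (3840, 2160, 30, 24),
    "4K60":    (3840, 2160, 60, 24),
}
for name, fmt in formats.items():
    print(f"{name}: {raw_bandwidth_gbps(*fmt):.1f} Gbps")
```

4K60 at 8-bit 4:4:4 works out to roughly 11.9Gbps of pixel data alone, which is why fitting it into a 1Gbps pipe demands compression on the order of 10:1 or more.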
Video compression produces artifacts that become visible the more compression is applied. As compression increases, algorithms have to make more assumptions about image structure. Since synthetic images (computer graphics) are different from those coming from a video camera, high compression algorithms optimize for one or the other. Heavy compression algorithms are therefore context dependent. While a video may look normal on the camera’s screen, it may look different on the computer, or vice versa. Due to artifacts, compression becomes very problematic on large format displays where everything is magnified. What looks acceptable on a codec engineer’s 24” desktop monitor may not be acceptable when blown up to a 10-meter wide image! As a result, video scaling and large display applications require as little compression as possible. The two images below illustrate the difference between an uncompressed image and a compressed image with visible artifacts present.
A video compression codec trades off bandwidth, latency and quality. This is the Codec Triangle: you can only optimize for two out of three. Optimizing for low bandwidth and high quality requires more sophisticated compression that takes longer, resulting in higher latency. Optimize for low bandwidth and low latency, and video quality will be degraded by artifacts. Optimize instead for low latency and high quality, and the lighter compression increases bandwidth.
So what is the right compromise for your target application and which codec should be used? To answer that we need to dig a little deeper and understand how video codecs compress video. In next week’s blog, I will break down the types of compression codecs that exist today as well as how they work. We will take a look into how each type affects bandwidth, quality and latency, and explore how SDVoE leverages its codec to maximize all three.
Looking to stay connected with the latest news on Semtech’s Pro AV solutions for SDVoE? Follow us on social!
BlueRiver is a registered trademark of Semtech Corporation or its affiliates, and SDVoE is a trademark or service mark of the SDVoE Alliance.