Opinion

5G and edge computing: latency rules.

Mar 20, 2019

5G and edge computing have more in common than the fever pitch surrounding them.

5G and edge computing are upon us. That, at least, is what the telecommunications community would have you believe.

In practice, however, these two technologies must overcome a few barriers before they are ripe for wide deployment. The most conspicuous is the need for material use cases that can only be delivered through their combination. No use cases, no value creation, no justifiable deployment. It’s as simple as that.

Let’s start with 5G. With blazing download speeds, it will purportedly make 8K video a portable reality. With massive connection density, it will let thousands of IoT devices operate in small, confined areas. And with extremely low latency, it will deliver seamless real-time experiences.

For all the heft of those promises, the applications envisioned for 5G are still taking shape. In particular, nobody seems to know what to do (profitably) with all that throughput and connection density. Latency, however, seems different: the examples of what can be done with very little of it look more compelling from the outset. And all of them rely, in one way or another, on some degree of edge computing.

Edge computing, or the practice of pushing computational resources to the fringes of the network so they sit physically closer to end users, is first and foremost a response to the laws of physics. Since electromagnetic signals travel no faster than the speed of light, a data centre 500 km away imposes a round-trip propagation delay of at least 3.3 ms – and noticeably more over optical fibre – which is already too much for some applications.
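The arithmetic behind that 3.3 ms figure is simple enough to write out. The sketch below (in Python) only computes propagation delay at the vacuum speed of light, with an assumed two-thirds slowdown for fibre; the distances are illustrative, and real-world figures would add processing and queuing delays on top.

```python
# Minimal sketch: lower bound on round-trip propagation delay to a remote server.
# Assumes signals at the speed of light in vacuum (~300,000 km/s); over optical
# fibre the effective speed is roughly two-thirds of that (assumed factor).

C_VACUUM_KM_PER_MS = 300.0   # ~300 km per millisecond
FIBRE_FACTOR = 2 / 3         # assumed slowdown for propagation in fibre

def round_trip_ms(distance_km: float, fibre: bool = False) -> float:
    """Minimum round-trip propagation delay for a given one-way distance."""
    speed = C_VACUUM_KM_PER_MS * (FIBRE_FACTOR if fibre else 1.0)
    return 2 * distance_km / speed

print(round_trip_ms(500))              # ~3.33 ms: the figure quoted above
print(round_trip_ms(500, fibre=True))  # ~5 ms over fibre
print(round_trip_ms(20))               # ~0.13 ms for a nearby edge site
```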

“When ultra-low latency is required, edge computing and 5G form a marriage made in heaven.”

The basic premises behind the 5G and edge computing union are twofold. Firstly, that many applications will require too much computation to run on local devices. Secondly, that a subset of those computation-hungry applications will only be experienced correctly if delivered in true real time. The intersection of the two premises is powerful – and it may yet become the single largest demand driver for 5G and edge computing technologies.

Self-driving cars and augmented reality (AR) games are two cases in point:

  • Although many of the AI and machine learning models required to drive a car autonomously are expected to run inside the car itself (in this case, the ‘device’), some responses may have to be processed remotely. For obvious reasons, remote processing will only be possible (or acceptable) if the loop is fast enough to keep cars safely in motion on the road. A perfect situation, then, for the low-latency mix of 5G and edge computing;
  • AR games, which overlay virtual imagery on real-world scenes in a single view, will also require heavy computation. Triangulating spatial references with live movements and in-game reactions is likely to be too demanding for today’s smartphones. For any of that computation to happen outside the device, though, the entire feedback loop must be ultra-fast, or it risks spoiling the very essence of the augmented experience. 3.3 ms will matter, and so will access to paired 5G and edge computing infrastructure (a rough budget check follows this list).
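To make the feedback-loop argument concrete, the small sketch below checks whether an end-to-end response budget is met once network round trip, edge processing and on-device work are added together. The 20 ms budget and all component figures are assumptions chosen purely for illustration, not measurements of any real network or application.

```python
# Hedged sketch: does a remote processing loop fit inside a response budget?
# All figures below are illustrative assumptions.

def loop_fits(budget_ms: float, network_rtt_ms: float,
              edge_processing_ms: float, device_overhead_ms: float) -> bool:
    """True if the full feedback loop stays within the experience budget."""
    total = network_rtt_ms + edge_processing_ms + device_overhead_ms
    return total <= budget_ms

# Assumed 20 ms budget for a fluid AR interaction:
print(loop_fits(20, network_rtt_ms=3.3, edge_processing_ms=8, device_overhead_ms=5))  # True: nearby edge site
print(loop_fits(20, network_rtt_ms=30,  edge_processing_ms=8, device_overhead_ms=5))  # False: distant cloud region
```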

Because deciding where to locate edge computing servers is also a physics problem, understanding how different usage profiles cluster geographically (and relative to various network elements) will be fundamental. Should a complementary gaming server sit in city A or city B? Or in both? What mix of locations delivers the required quality of experience at the lowest possible investment? These are pressing questions that will need to be addressed through intelligent network analytics.
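One way to make that placement question concrete is to treat it as a small facility-location exercise: choose the cheapest set of candidate sites such that every user cluster sees a round-trip delay under a target. The sketch below brute-forces a toy instance in Python; the city names, delays, costs and 5 ms target are all invented for illustration, and a real deployment would use proper network analytics rather than exhaustive search.

```python
# Illustrative sketch: cheapest subset of candidate edge sites that keeps every
# user cluster under a latency target. All numbers and names are made up.

from itertools import combinations

CANDIDATE_SITES = {"city_A": 1.0, "city_B": 1.2, "city_C": 0.8}  # relative cost
DELAY_MS = {  # assumed round-trip delay (ms) from each user cluster to each site
    "cluster_1": {"city_A": 2.0,  "city_B": 9.0, "city_C": 12.0},
    "cluster_2": {"city_A": 8.0,  "city_B": 2.5, "city_C": 4.0},
    "cluster_3": {"city_A": 11.0, "city_B": 7.0, "city_C": 3.0},
}
TARGET_MS = 5.0

def covered(sites, cluster):
    """A cluster is covered if at least one chosen site meets the target."""
    return any(DELAY_MS[cluster][s] <= TARGET_MS for s in sites)

def cheapest_placement():
    """Brute-force the cheapest subset of sites covering every cluster."""
    best, best_cost = None, float("inf")
    sites = list(CANDIDATE_SITES)
    for r in range(1, len(sites) + 1):
        for combo in combinations(sites, r):
            if all(covered(combo, c) for c in DELAY_MS):
                cost = sum(CANDIDATE_SITES[s] for s in combo)
                if cost < best_cost:
                    best, best_cost = combo, cost
    return best, best_cost

print(cheapest_placement())  # (('city_A', 'city_C'), 1.8) under these assumed numbers
```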

Ultra-low latency appears to be the most immediate need driving wider adoption of 5G and edge computing as a pair. Naturally, high throughput and large device density will eventually follow: innovative minds will surely find ways to make good use of them. But, for now, latency rules.