How Does 5G Work?
5G uses a broad mix of radio bands that push speeds up to 10 Gbps (10 to 100 times faster than 4G LTE), which will soon make web experiences that feel “fast enough” today feel like the days of dial-up. 5G also offers ultra-low latency, which translates to near-real-time network performance. Where a packet of data could previously take 20 to 1,000 milliseconds (ms) to get from your laptop or smartphone to a workload, 5G can cut that time down to a few milliseconds when the use case demands it.
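To see why those milliseconds matter, consider that many interactions involve dozens of sequential network round trips, so round-trip time alone puts a floor under responsiveness. Here is a minimal back-of-the-envelope sketch; the function name and the round-trip counts are illustrative assumptions, not measured values:

```python
def load_time_floor_ms(round_trips: int, rtt_ms: float) -> float:
    """Lower bound on completion time imposed by sequential
    network round trips alone (ignores processing, bandwidth, etc.)."""
    return round_trips * rtt_ms

# A task needing 30 sequential round trips (hypothetical figure):
print(load_time_floor_ms(30, 200))  # 6000.0 ms on a 200 ms link
print(load_time_floor_ms(30, 5))    # 150.0 ms at 5 ms latency
```

The same workload goes from a multi-second wait to something that feels instantaneous, purely from the latency reduction.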
Naturally, there’s more to it than that—physical speed alone doesn’t reduce latency. Various factors, including distance, bandwidth congestion, software and processing deficiencies, and even physical obstacles, can contribute to high latency.
To achieve ultra-low latency, compute resources need, above all, to sit closer to end-user devices. Locating servers physically close to end-user devices is called “Edge compute.” The type of Edge compute varies by latency range:
- Far Edge: Between 5 and 20 ms latency; far from the cloud and close to devices
- Near Edge: 20+ ms latency; nearer to the cloud than to devices
- Deep Edge: Less than 5 ms latency; closest to devices
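The latency bands above can be expressed as a simple classifier. This is only a sketch of the taxonomy described here; the function name is an assumption, and real deployments would classify by use case and placement, not a single measured number:

```python
def edge_tier(latency_ms: float) -> str:
    """Map a latency figure to the Edge compute tier described above."""
    if latency_ms < 5:
        return "Deep Edge"   # < 5 ms: closest to devices
    elif latency_ms <= 20:
        return "Far Edge"    # 5-20 ms: far from the cloud, close to devices
    else:
        return "Near Edge"   # 20+ ms: nearer to the cloud

print(edge_tier(3))   # Deep Edge
print(edge_tier(12))  # Far Edge
print(edge_tier(45))  # Near Edge
```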
A basic 5G network schema—where does your network start (or end)?
Use case, latency requirements, and budget all factor into what level of Edge compute is needed. Not everything needs near real-time performance, but many things need “good enough real-time,” for which 20 ms or so is adequate.
5G is also designed to be massively scalable for connected devices. Upgraded radio access networks can support 1,000 times more bandwidth per unit area and 100 times more connected devices than 4G LTE. Consumer mobile devices, enterprise devices, smart sensors, autonomous vehicles, drones, and more can all share the same 5G network without service degradation.
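The scaling multipliers above are easier to feel with concrete numbers. The baseline figures below are hypothetical placeholders, not spec values; only the 1,000x and 100x factors come from the text:

```python
# Assumed 4G LTE baselines per unit area (illustrative only)
lte_devices_per_km2 = 10_000
lte_bandwidth_gbps_per_km2 = 1.0

# Apply the multipliers stated for upgraded 5G radio access networks
nr_devices_per_km2 = lte_devices_per_km2 * 100          # 100x more devices
nr_bandwidth_gbps_per_km2 = lte_bandwidth_gbps_per_km2 * 1_000  # 1,000x bandwidth

print(nr_devices_per_km2)         # 1000000
print(nr_bandwidth_gbps_per_km2)  # 1000.0
```

Even from a modest baseline, the per-area capacity grows enough to absorb phones, sensors, vehicles, and drones on one shared network.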