The early goal of edge computing was to reduce the bandwidth costs associated with moving raw data from where it was created to either an enterprise data center or the cloud. More recently, the rise of real-time applications that require minimal latency, such as autonomous vehicles and multi-camera video analytics, is driving the concept forward. Gartner defines edge computing as “a part of a distributed computing topology in which information processing is located close to the edge—where things and people produce or consume that information.”

More generally, however, edge computing is an ill-defined term. We like to describe it as “placing data infrastructure where data infrastructure didn’t use to be”. There can be many “edges” at different scales in an organisation, but they generally share common characteristics: they are constrained by one or more factors, such as physical space, environmental challenges, or a lack of local support, yet they still need to deliver higher performance and lower latency than could otherwise be achieved.
At its most basic level, edge computing brings computation and data storage closer to the devices where data is gathered, rather than relying on a central location that can be thousands of miles away. This is done so that data, especially real-time data, does not suffer the latency issues that can degrade an application’s performance. In addition, companies can save money by processing data locally, reducing the amount of data that needs to be sent to a centralised or cloud-based location.
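As a rough illustration of that pattern, the Python sketch below shows an edge node aggregating a window of raw sensor readings locally and forwarding only a compact summary upstream, rather than streaming every sample. The names here are assumptions for the sake of the example: `read_sensor` stands in for whatever local data source exists, and `https://example.com/ingest` is a placeholder for a real central or cloud endpoint.

```python
import json
import statistics
import time
import urllib.request

CLOUD_ENDPOINT = "https://example.com/ingest"  # placeholder central/cloud URL


def read_sensor() -> float:
    """Hypothetical stand-in for reading one raw sample from a local device."""
    return 20.0 + time.time() % 1  # dummy value for illustration only


def summarise(samples: list[float]) -> dict:
    """Reduce a window of raw samples to a small summary payload."""
    return {
        "count": len(samples),
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
    }


def forward(summary: dict) -> None:
    """Send the compact summary upstream instead of every raw reading."""
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)


def run(window_size: int = 100) -> None:
    # Collect a window of raw readings locally (the "edge" side)...
    samples = [read_sensor() for _ in range(window_size)]
    # ...then ship only the summary, sending one payload instead of window_size.
    forward(summarise(samples))
```

The bandwidth saving comes from the ratio of raw samples to summaries: with a window of 100 readings, roughly one hundredth of the messages leave the site, while latency-sensitive decisions can still be made locally against the raw data before it is discarded.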