What is Edge Computing?



IEEE labels “Edge Computing” as one of the most important ingredients of 5G technology, but what exactly is edge computing?
There are multiple architectural approaches to edge computing, each promoted by a different group: OpenFog (backed in particular by vendors such as Cisco and Intel), Mobile Edge Computing (backed by ETSI and its members), Jungle Computing, Mist Computing, Dew Computing, Frontier Computing, and so on. In brief, they all fall under one umbrella: edge computing.
Illustration of Edge Computing, Source: IoT Institute

Moving from Cloud to Edge is Historically Inevitable

First, what is the edge? The “edge” is where things and people connect and begin to converge with the digital world via communications networks. The edge today could be your smartphone, your home wireless router, your PlayStation, your Apple TV, your Amazon Echo, your Surface Pro, a local cellular tower, or even a connected car.
Edge computing describes a computing topology in which information processing and content collection and delivery are placed closer to the sources of data and the consumers of the processed information. Edge computing draws on the concepts of mesh networking and distributed processing. It keeps traffic and processing local, with the goal of reducing backhaul traffic and latency. Delivery at the edge isn’t a new concept; it has existed for years.
The computing world has seen numerous pendulum swings between centralized (more core) and decentralized (more edge) approaches.
  1. Late 1950s: Centralized
    Mainframe/Terminal: The mainframe era, the dawn of computing technology, exemplified a highly centralized model with simple “dumb terminals” at the edge. All processing and content lived on the mainframe.
  2. Early 1980s: Centralized to Decentralized
    PC and Mac: The introduction of PCs distributed content and processing to individual standalone edge devices. People were liberated from waiting long stretches to get their punched cards back from the mainframe.
  3. Late 1990s: Decentralized to Centralized
    Server/PC: As the internet era arrived, PCs were supported by complementary centralized servers (early-stage cloud computing). The original cloud computing model was highly centralized, with simple browsers (hello, IE?) at the edge for accessing cloud-based services.
  4. Mid 2000s: Centralized to Decentralized
    Smartphone and Tablet: Smart mobile devices (and apps) started eating into PCs, pulling computing away from the desktop. Bandwidth was scarce and expensive, and most processing and content were handled and stored locally.
  5. Late 2000s: Decentralized to Centralized
    Cloud Computing: The cloud started eating enterprise data centers, centralizing data and computing into mega data centers. Some edge devices, such as low-end smartphones and Chromebooks, also shifted the cost of adding computing power from each individual device to a centralized cloud.
  6. Mid 2010s: Decentralized
    Internet of Things: The internet of things started growing rapidly, with connected things outnumbering connected people by 2010. This enormous number of things is creating data far beyond the capacity of communication networks and cloud computing power. A more decentralized model, edge computing, is needed to complement cloud computing.
Furthermore, three key factors are driving the shift from the cloud to the edge:
  • Content: As more users and things (as in the internet of things) connect and interact with one another, the volume and velocity of content gathered from or delivered to the edge increase rapidly. By 2020, 20 billion “things” will be connected to the internet. As the number of connected things grows, the cost of connecting and the prices of sensors and cameras will continue to drop, creating a virtuous circle that further accelerates the growth of connected things and parts of things. Greater use of rich media types such as video intensifies this challenge. As an example, sensors and cameras in self-driving cars could produce more than 3 TB of data per hour, which not only will be used within the car but might also be needed by a smart stoplight and nearby self-driving cars to make real-time decisions (see the back-of-the-envelope sketch after this list). The emergence of AR and VR makes matters worse. Moving all this data and storing it in real time using highly centralized structures is difficult, costly and often undesirable.
  • Latency:
    Users expect ever-faster response times. Latency problems create a poor user experience and can rapidly undermine digital business initiatives. The growing volume of content also makes it harder for the network to keep latency low. Latency is likewise an issue in the IoT, which needs real-time analysis and control at the edge. And latency is constrained by the laws of physics (the speed of light), so software alone can’t always overcome it.
  • Privacy:
    With interactions multiplying across a skyrocketing number of devices, security breaches and hacking have become part of everyday life. Each hop represents a security risk. Keeping content and processing local is usually the most effective way to close these loopholes. This is why more and more smartphone vendors are putting their AI functions at the SoC level. Consumers are worried about their privacy. Corporations are worried about their data integrity. Further, with the EU implementing GDPR and others planning similar moves, keeping privacy-sensitive AI functions at the edge reduces the need to move data across geographic boundaries and run into regulatory restrictions.
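To put rough numbers on the content and latency pressures above, here is a small back-of-the-envelope sketch in Python. The 3 TB/hour figure comes from the example above; the 1,000 km distance to a regional data center and fiber-speed propagation are illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope numbers behind the "content" and "latency" arguments.
# Assumptions (not from the article): a 1,000 km one-way path to a regional
# data center and light traveling at fiber speed (~200,000 km/s).

TB = 10**12                      # bytes in a terabyte (decimal)
data_per_hour = 3 * TB           # self-driving car output quoted above

# Sustained uplink needed just to stream everything to the cloud:
bits_per_second = data_per_hour * 8 / 3600
print(f"Required sustained uplink: {bits_per_second / 1e9:.2f} Gbit/s")
# ~6.67 Gbit/s per car -- far beyond what a cellular uplink sustains per device.

# Hard latency floor set by physics, before any routing or processing:
distance_km = 1_000              # assumed one-way distance to the data center
fiber_speed_km_s = 200_000       # roughly 2/3 of the speed of light in vacuum
round_trip_ms = 2 * distance_km / fiber_speed_km_s * 1000
print(f"Physics-only round trip: {round_trip_ms:.0f} ms")
# ~10 ms before queuing or compute, which is why millisecond-scale control
# loops have to run at the edge.
```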




Source: Forbes

Artificial Intelligence would Need to Think at the Edge, too

Edge computing is also essential for machine learning. To date, the use of AI has mostly been concentrated in the cloud, given the compute power required not just to train models but also to execute them. However, there is a major trend toward pushing AI to the edge of the network, for reasons of both efficiency and differentiation (for device makers). To understand this trend, we have to look deeper into how AI is currently developed. Machine learning, and in particular deep neural networks (DNNs), involves two phases of development (a small code sketch follows this list):
  • Training
    Models are refined (trained) using training data to help them perform a particular task such as image and speech recognition or navigation. This training can take a considerable amount of time and computing resources.
  • Inference
    Once a model is trained, it can then be fed new examples and situations to provide an output — for example, a photo not yet seen by the model, new speech content or a new physical environment that a robot has not yet encountered. The model then provides an output in the form of classification or prediction. The resources required for inference depend on the model being run.
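To make the two phases concrete, here is a minimal, hedged sketch in Python using scikit-learn and its toy digits dataset (both are illustrative choices, not tools named in this article). The compute-heavy training step would typically run in the cloud; the serialized model is then shipped to a device, where only the cheap inference step runs.

```python
# Minimal sketch of the two phases: heavy training (cloud) vs. light
# inference (edge). scikit-learn and the toy digits dataset are stand-ins
# chosen purely for brevity.
import pickle
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# --- Training phase (compute-heavy, typically done in the cloud) ---
X, y = load_digits(return_X_y=True)
X_train, X_unseen, y_train, _ = train_test_split(X, y, test_size=0.1, random_state=0)
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Only the trained model needs to travel to the edge device.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# --- Inference phase (cheap enough to run on the edge device itself) ---
with open("model.pkl", "rb") as f:
    edge_model = pickle.load(f)
print("Predictions for unseen samples:", edge_model.predict(X_unseen[:5]))
```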
As the number of edge devices increases, pushed in particular by Internet of Things (IoT) growth, a different distributed approach and new inference capabilities are needed for AI at the edge. Learning will be hierarchical, starting with near-real-time processing and machine learning on raw data at the edge, and ending with filtered, pre-processed data handled centrally later.
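One way to picture that hierarchy is the sketch below: the edge tier consumes raw, high-rate readings and forwards only compact summaries upstream for later, central processing. The window size and the summary statistics are arbitrary illustrative choices, not anything prescribed in this article.

```python
# Hedged sketch of hierarchical processing: the edge collapses raw, high-rate
# readings into compact per-window summaries, and only those summaries travel
# to the central/cloud tier for slower, later analysis.
from statistics import mean, pstdev
from typing import Iterable, Iterator

def summarize_at_edge(raw_readings: Iterable[float], window: int = 100) -> Iterator[dict]:
    """Collapse each window of raw samples into one small feature record."""
    buffer = []
    for value in raw_readings:
        buffer.append(value)
        if len(buffer) == window:
            yield {"mean": mean(buffer), "std": pstdev(buffer),
                   "min": min(buffer), "max": max(buffer)}
            buffer.clear()

# 10,000 raw samples become 100 records: a 100x reduction in what has to
# cross the network to the central tier.
raw = ((0.01 * i) % 1.0 for i in range(10_000))
summaries = list(summarize_at_edge(raw))
print(len(summaries), "summaries forwarded upstream instead of 10,000 raw samples")
```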
Google has moved several layers of its speech-to-text DNN into devices, reducing the processing load on its servers. Google’s decision to move some AI functions to the edge also stems from the fact that latency in controlling interfaces such as steering or voice communication dramatically affects the user experience, and the cloud cannot guarantee real-time, smooth interactions. Decentralizing AI also reduces the risk of a single point of failure crashing the entire system. The ability to avoid a total crash is vital for a developing, immature technology such as AI.
This is one of the reasons why most analysts consider Google, Amazon and Apple the leaders in AI. With the ability to develop their own AI chips and place their AI services (and their data collection) at the edge, they can offer a better user experience by offloading work to the edge. Microsoft, Baidu and Cisco, by contrast, would have to either rely on better but costlier cloud optimization or sacrifice some profit to entice device makers to cooperate with them.
In the cloud, hardware tailored for AI workloads has been provided by a handful of players, namely Nvidia, Xilinx, AMD and Intel. Google has developed its own TPUs for AI, but has used them mostly internally (others can only “rent” TPU computing power). With edge computing approaching, the hardware market is evolving to complement cloud-server-based AI with local AI hardware. Device manufacturers partner with chip makers to create specific capabilities, and solutions are being pushed onto the market to address the needs of edge computing. For example, Qualcomm provides a software-only approach in the form of the Snapdragon Neural Processing Engine, which enables developers using Qualcomm chips to incorporate AI into their applications. In the machine vision sphere, many dedicated vision processors support techniques ranging from sensor vision and automated decision making through to deep learning models.
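The article names vendor-specific toolkits such as the Snapdragon Neural Processing Engine; as a generic, hedged illustration of the same “shrink a trained model for on-device inference” workflow, the sketch below uses TensorFlow Lite, which is not mentioned in the text and merely stands in for whatever SDK a given chip vendor provides.

```python
# Hedged sketch of converting a trained model into a compact, edge-friendly
# artifact. TensorFlow Lite is used only as a widely known stand-in for
# vendor SDKs such as Qualcomm's Snapdragon Neural Processing Engine.
import tensorflow as tf

# A trivial Keras model standing in for one trained in the cloud
# (architecture chosen only for brevity).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Convert and shrink it for deployment on a device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default size/latency optimizations
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"Edge-ready model artifact: {len(tflite_bytes)} bytes")
```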
However, it’s notable that there is virtually no dedicated hardware or chip designed for edge-based training. Many companies are attacking this issue. Training at the edge could be more efficient, and it could be attractive from a privacy standpoint because users’ raw data is never shared outside the phone. But with Moore’s Law coming to an end, it might take a while before we see AI training at the edge.




Source: TechRepublic

5G and Edge Computing, One Cannot Live without the Other

As we mentioned before, upgrading from 4G to 5G won’t make mobile subscribers pay more. Applications with revenue potential for 5G service providers include scenarios with large amounts of data produced locally (such as manufacturing) and situations that need very fast response times (such as autonomous vehicles). To realize the potential, a new network architecture is required: 5G New Radio (5G NR), virtual network functions (VNFs) in transport and packet core, and edge computing.
The biggest advantage of mobile service providers is that they directly control a large number of edge locations (cellular towers) and host the largest share of connected edge devices. By implementing edge computing, service providers can offer solutions that allow content, services and applications to be delivered faster than if they were delivered from the core, and the mobile subscriber’s experience can be enriched. Even if service providers cannot come up with a comprehensive edge computing solution package, they can open the radio network edge to third-party partners, allowing themselves to profit as infrastructure-as-a-service (IaaS) providers, in particular for use cases that rely heavily on mobility (and likely cannot be limited to one service provider), such as fleet management, transportation and logistics.




Source: Intel

Internet of Things Needs Middlemen: Edge Computing

More and more physical objects are becoming networked and contain embedded technology to communicate with, sense or interact with their internal states or the external environment. It is costly and inefficient to stream all the information generated by the internet of things to the cloud or to a data center for management, analysis and decision making. Not all data needs to be sent to the cloud or a main data center: doing so can be cost-prohibitive or bandwidth-intensive, can hurt performance, or can be impractical in remote locations.
When deployed as part of the IoT, edge computing can provide near-real-time insights and facilitate localized actions. An edge node would possess simple data-processing capabilities and the ability to transmit data, monitor and manage sensor-based devices, and take local actions based on incoming events and data. Most vendors in the IoT market have recognized that edge computing is an integral part of the IoT solution. However, considering the economies of scale associated with large data centers, it is unlikely that edge computing could ever make the cloud model outright obsolete. The two architectural models are highly complementary.
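As a hedged sketch of what such an edge node might do, the Python loop below takes a local action the moment a reading crosses a threshold and forwards only occasional compact reports upstream. The sensor source, threshold and reporting interval are all illustrative assumptions.

```python
# Hedged sketch of an IoT edge node: act locally on events in near real time,
# send only occasional compact reports to the cloud. Sensor, threshold and
# reporting interval are illustrative assumptions.
import random

TEMP_THRESHOLD_C = 80.0     # act locally above this temperature
REPORT_EVERY_N = 50         # forward one summary per N readings

def read_sensor() -> float:
    """Stand-in for a real sensor driver."""
    return random.uniform(60.0, 90.0)

def engage_cooling() -> None:
    """Stand-in for a local actuator; no cloud round trip required."""
    print("edge: temperature high, cooling engaged locally")

def send_report_to_cloud(summary: dict) -> None:
    """Stand-in for an upstream API call; only summaries cross the network."""
    print(f"cloud: received summary {summary}")

readings = []
for _ in range(200):
    temp = read_sensor()
    readings.append(temp)
    if temp > TEMP_THRESHOLD_C:          # near-real-time, purely local decision
        engage_cooling()
    if len(readings) == REPORT_EVERY_N:  # infrequent, compact upstream traffic
        send_report_to_cloud({"count": len(readings),
                              "max_c": round(max(readings), 1),
                              "avg_c": round(sum(readings) / len(readings), 1)})
        readings.clear()
```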
Many view cloud and edge computing as separate or even competing approaches: cloud computing as a centralized model that enjoys economies of scale and standardization, and edge computing as a model that mandates pushing processing out to the edge. But this is a misunderstanding of the two concepts. Cloud computing is a style of computing in which elastically scalable technology capabilities are delivered as a service using internet technologies. Cloud computing doesn’t mandate that everything run only on a centralized server far from the edge. Edge computing brings the distributed computing aspect into the cloud style.
In practice, edge computing brings depth to the deployment architecture for a solution. Compute and data may be needed in a variety of layers, depending on factors such as needed autonomy, available bandwidth, latency constraints and regulatory requirements.

Monopoly Unlikely for Edge Computing, Thereby Delaying Adoption

The combined use of the cloud style and edge delivery will increase as the IoT market matures and as IoT solutions adopt the cloud style to manage themselves effectively. However, the environment is highly diverse, with millions of endpoints doing dramatically different things, so it is unlikely that the edge computing market will be dominated by one or two players. Unfortunately, in contrast to the way the capital-intensive nature of cloud computing has consolidated the cloud landscape into a handful of providers, the lack of a leader in edge computing has led to confusion in standardization (especially for cross-platform interoperability and cloud-based control) and to customers’ insecurity about adoption. With Google, Apple, Samsung and Amazon putting more effort into the connected home, edge computing might finally be able to pick up the pace.
