1. Introduction

The proliferation of the Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing.

In this tutorial, we’ll examine the edge computing paradigm: its definition, its benefits, several edge computing platforms, and key design issues in the field.

2. Definition of Edge Computing

Edge computing is a distributed computing paradigm in which client data is processed at the periphery (edge) of the network. Here, the term “edge” refers to any computing and network resources along the path between data sources and cloud data centers. For example, a smartphone is the edge between body things and the cloud, a gateway in a smart home is the edge between home things and the cloud, and a micro data center or a cloudlet is the edge between a mobile device and the cloud. The rationale of edge computing is that computation should happen in the proximity of data sources.

Edge computing can be seen as a subdivision of fog computing: while edge computing refers to shifting computation tasks to the edge of the network, fog computing is a compute layer between the cloud and the edge. Large data streams that would otherwise be sent from the edge nodes to the cloud are first received by fog nodes, which decide whether to forward them to the cloud or keep them for further processing.
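
To make this division of labor concrete, here’s a minimal Python sketch of a fog node deciding whether to keep a data stream or forward it to the cloud. The threshold, field names, and routing rules are hypothetical illustrations, not any platform’s actual API:

```python
from dataclasses import dataclass

@dataclass
class StreamBatch:
    device_id: str
    payload_bytes: int
    needs_historical_context: bool  # e.g., requires months of archived data

# Hypothetical threshold: batches small enough to process on the fog node itself.
LOCAL_PROCESSING_LIMIT_BYTES = 512 * 1024

def route_batch(batch: StreamBatch) -> str:
    """Decide where a batch coming from an edge node should be processed."""
    if batch.needs_historical_context:
        # Long-term analytics needs the cloud's storage and compute capacity.
        return "forward-to-cloud"
    if batch.payload_bytes <= LOCAL_PROCESSING_LIMIT_BYTES:
        # Latency-sensitive, small batches stay on the fog node.
        return "process-on-fog-node"
    return "forward-to-cloud"

if __name__ == "__main__":
    print(route_batch(StreamBatch("thermostat-7", 2_048, False)))   # process-on-fog-node
    print(route_batch(StreamBatch("camera-3", 40_000_000, True)))   # forward-to-cloud
```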

The table below compares the characteristics of edge computing with those of cloud computing and fog computing. It shows how edge computing improves the performance of IoT: compared to cloud and fog computing, its distributed structure reduces network traffic and the transmission latency between edge nodes, the cloud, and end users, and it improves the real-time responsiveness of applications and IoT devices:

| | Advantages | Disadvantages |
|---|---|---|
| Cloud Computing | Scalable, big data processing, unlimited computational processing | High latency, slow response time, no offline mode, lack of security |
| Fog Computing | User-defined security, low latency | Limited storage; fog needs more cloud links than edge computing to move data from the physical to the digital layer |
| Edge Computing | Real-time responses, very low latency, edge can work without cloud or fog | Limited storage, interconnected through proprietary networks, high power consumption |

3. Benefits of Edge Computing

Edge computing provides important benefits such as:

  • Reduction of network latency: by keeping data at the edge, close to the devices that use it, edge computing helps boost the performance of applications
  • Addressing several issues such as the limited processing and storage capabilities of devices/things, battery life, and network bandwidth constraints
  • Enhancing privacy and data security: data is processed at the edge rather than on central servers
  • Enabling the allocation of computing resources to the tasks generated by IoT applications. For instance, researchers at the College of William and Mary developed a face recognition platform whose response time dropped from 900 ms to 169 ms when computation moved from the cloud to the edge (a decision of this kind is sketched after this list)
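
As a concrete illustration of the latency benefit, here’s a minimal sketch of an offloading decision that picks the execution site with the lowest estimated response time. The latency figures and function names are hypothetical placeholders, not measurements from the platform mentioned above:

```python
def estimated_response_ms(network_rtt_ms: float, processing_ms: float) -> float:
    """Response time = time to move the data + time to run the computation."""
    return network_rtt_ms + processing_ms

def choose_site(sites: dict[str, tuple[float, float]]) -> str:
    """sites maps a site name to (network_rtt_ms, processing_ms)."""
    return min(sites, key=lambda name: estimated_response_ms(*sites[name]))

if __name__ == "__main__":
    candidates = {
        "cloud": (120.0, 80.0),        # far away, but powerful hardware
        "edge-gateway": (5.0, 150.0),  # nearby, but slower hardware
    }
    print(choose_site(candidates))  # edge-gateway: 155 ms vs. 200 ms
```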

4. Edge Computing Platforms

Several edge computing platforms have been developed and deployed. The following table briefly summarizes general-purpose and application-specific edge systems together with their features and targets:

| Application scenario | Platform | Features/targets |
|---|---|---|
| General usage | Cloudlet | Lightweight VM migration |
| General usage | PCloud | Resource integration, dynamic allocation |
| General usage | ParaDrop | Hardware, developer support |
| General usage | AirBox | Security |
| Vehicular data analytics | OpenVDAP | General platform |
| Vehicular data analytics | SafeShareRide | In-vehicle security |
| Smart home | Vigilia | Smart home security |
| Smart home | HomePad | Smart home security |
| Video stream analytics | LAVEA | Low-latency response |
| Video stream analytics | VideoEdge | Resource-accuracy tradeoff |
| Virtual Reality (VR) | MUVR | Resource utilization efficiency optimization |
5. Key Design Issues

An edge computing system manages various resources along the path from the cloud data center to end devices, shielding developers from the complexity and diversity of the hardware and helping them quickly design and deploy novel applications. To fully leverage these advantages, the following key issues need attention when analyzing and designing a new edge computing system.

5.1. Mobility Support

Mobility includes user mobility and resource mobility. User mobility concerns how to automatically migrate the current program state and the necessary data when a user moves from one edge node’s coverage domain to another, so that the service handover is seamless. Currently, Cloudlet and CloudPath provide service migration by terminating or finishing the existing tasks and starting a new VM or instance on the target edge node.
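
The following minimal sketch mimics that terminate-and-restart style of handover. The EdgeNode class and its methods are hypothetical stand-ins; real platforms such as Cloudlet operate at the VM level rather than on Python dictionaries:

```python
import json

class EdgeNode:
    def __init__(self, name: str):
        self.name = name
        self.instances: dict[str, dict] = {}  # instance_id -> program state

    def start_instance(self, instance_id: str, state: dict) -> None:
        print(f"[{self.name}] starting {instance_id} with state {state}")
        self.instances[instance_id] = state

    def terminate_instance(self, instance_id: str) -> dict:
        print(f"[{self.name}] terminating {instance_id}")
        return self.instances.pop(instance_id)

def migrate(instance_id: str, source: EdgeNode, target: EdgeNode) -> None:
    """Terminate on the source node, then restart on the target with the saved state."""
    state = source.terminate_instance(instance_id)
    payload = json.dumps(state)  # in practice, the state travels over the network
    target.start_instance(instance_id, json.loads(payload))

if __name__ == "__main__":
    home, office = EdgeNode("edge-home"), EdgeNode("edge-office")
    home.start_instance("navigation-app", {"route_progress": 0.42})
    migrate("navigation-app", source=home, target=office)  # user changed coverage domains
```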

5.2. Multi-User Fairness

For edge devices with limited resources, the question is how to ensure fairness among multiple users, especially for shared and scarce resources. For example, a smartphone equipped with various sensors and computing resources can act as an edge node serving multiple users. However, since the smartphone has limited battery life, fairly allocating its resources when it receives many requests is a problem.
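
One classic way to frame such allocation is max-min fair sharing, sketched below. The battery-budget scenario and the numbers are hypothetical; a real platform would also weigh priorities and deadlines:

```python
def max_min_fair(capacity: float, demands: dict[str, float]) -> dict[str, float]:
    """Give every user an equal share, redistributing what small demands leave unused."""
    allocation: dict[str, float] = {}
    remaining_users = dict(demands)
    remaining_capacity = capacity
    while remaining_users:
        fair_share = remaining_capacity / len(remaining_users)
        # Fully satisfy users whose demand fits within the current fair share.
        satisfied = {u: d for u, d in remaining_users.items() if d <= fair_share}
        if not satisfied:
            # Nobody fits: split what is left equally and stop.
            allocation.update({u: fair_share for u in remaining_users})
            break
        for user, demand in satisfied.items():
            allocation[user] = demand
            remaining_capacity -= demand
            del remaining_users[user]
    return allocation

if __name__ == "__main__":
    # Three users competing for 100 units of battery budget on a shared phone.
    print(max_min_fair(100, {"alice": 20, "bob": 70, "carol": 50}))
    # -> {'alice': 20, 'bob': 40.0, 'carol': 40.0}
```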

5.3. Privacy Protection

Unlike cloud servers, edge devices can be privately owned, such as the gateway devices in smart home systems. When other users use such devices, access the data on them, or even take control of them, ensuring both the owner’s privacy and the guest users’ data privacy becomes important.
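
One minimal sketch of what such protection could look like is a per-role access policy enforced on the gateway itself. The device names, roles, and policy structure below are hypothetical illustrations:

```python
# Each role maps a device to the set of actions explicitly granted to it.
POLICY = {
    "owner": {"living-room-camera": {"read", "control"},
              "front-door-lock":    {"read", "control"}},
    "guest": {"living-room-camera": set(),        # no access to private video
              "front-door-lock":    {"read"}},    # may check, but not open, the lock
}

def is_allowed(role: str, device: str, action: str) -> bool:
    """Return True only if the policy explicitly grants the action to the role."""
    return action in POLICY.get(role, {}).get(device, set())

if __name__ == "__main__":
    print(is_allowed("guest", "front-door-lock", "control"))  # False
    print(is_allowed("owner", "front-door-lock", "control"))  # True
```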

5.4. Developer Friendliness

An edge system ultimately provides hardware interaction and basic services to upper-level applications. How we design its APIs, program deployment modules, resource request and revocation mechanisms, and so on, is a key factor in whether the system will be widely used. Therefore, an edge computing system should be designed from an application developer’s perspective.
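
As a rough illustration of a developer-facing interface, the sketch below shows an application requesting resources, deploying a task, and releasing the resources afterwards. The EdgeRuntime class and its methods are hypothetical, not the API of any existing platform:

```python
from contextlib import contextmanager

class EdgeRuntime:
    def allocate(self, cpu_cores: int, memory_mb: int) -> str:
        lease_id = f"lease-{cpu_cores}c-{memory_mb}mb"
        print(f"allocated {lease_id}")
        return lease_id

    def deploy(self, lease_id: str, task) -> None:
        print(f"running task on {lease_id}")
        task()

    def release(self, lease_id: str) -> None:
        print(f"released {lease_id}")

@contextmanager
def edge_resources(runtime: EdgeRuntime, cpu_cores: int, memory_mb: int):
    """Ensure resources are revoked even if the application task fails."""
    lease_id = runtime.allocate(cpu_cores, memory_mb)
    try:
        yield lease_id
    finally:
        runtime.release(lease_id)

if __name__ == "__main__":
    runtime = EdgeRuntime()
    with edge_resources(runtime, cpu_cores=2, memory_mb=512) as lease:
        runtime.deploy(lease, lambda: print("detecting faces on the edge node"))
```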

5.5. Multi-Domain Management and Cooperation

Edge computing involves multiple types of resources, each of which may belong to a different owner: for example, smart home devices belong to end users, network infrastructure belongs to telecom operators, and cloud resources belong to cloud providers. Managing these resources across administrative domains, and coordinating cooperation among their owners, is a prerequisite for an edge system to serve applications reliably.

5.6. Cost Model

In cloud computing, a virtual machine of the appropriate size can be allocated based on the resources the user requests, and the cost model follows directly from resource usage. In edge computing, an application may use resources owned by different parties. Thus, how to measure resource usage, calculate the overall overhead, and define an appropriate pricing model are crucial problems when deploying an edge computing system.
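
The sketch below illustrates a simple usage-based billing computation across resource owners. The owners, unit prices, and usage figures are hypothetical:

```python
# Price per unit, per resource owner (CPU core-hours, GB stored, GB transferred).
PRICES = {
    "telecom-edge": {"cpu_core_hours": 0.08, "storage_gb": 0.02, "transfer_gb": 0.01},
    "cloud-vendor": {"cpu_core_hours": 0.05, "storage_gb": 0.01, "transfer_gb": 0.09},
}

def bill(usage_by_owner: dict[str, dict[str, float]]) -> dict[str, float]:
    """Compute the per-owner charge for an application's resource usage."""
    return {
        owner: sum(PRICES[owner][resource] * amount for resource, amount in usage.items())
        for owner, usage in usage_by_owner.items()
    }

if __name__ == "__main__":
    usage = {
        "telecom-edge": {"cpu_core_hours": 10, "storage_gb": 5, "transfer_gb": 2},
        "cloud-vendor": {"cpu_core_hours": 3, "storage_gb": 50, "transfer_gb": 20},
    }
    breakdown = bill(usage)
    print(breakdown, "total:", round(sum(breakdown.values()), 2))
    # telecom-edge: 0.92, cloud-vendor: 2.45, total: 3.37
```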

5.7. Compatibility

Currently, applications written specifically for edge computing are still few. For example, ParaDrop applications require additional XML configuration files to specify their resource usage requirements, and SpanEdge needs developers to divide and configure tasks into local tasks and global tasks. Ordinary applications are not directly supported to run on edge systems. How to automatically and transparently convert existing programs to an edge version and conveniently leverage the advantages of edge computing are still open problems.

6. Conclusion

In this brief article, we covered the concept and principles of edge computing. We learned that edge computing is a new paradigm that jointly migrates networking, computation, and storage capability from the remote cloud to the user side. In the context of IoT and 5G, the vision of edge computing is promising for providing smarter services and applications with a better user experience.