6 Edge Computing Trends to Watch in 2022

Edge computing rose quickly to prominence in B2B technology for its ability to enhance data processing speeds and reduce bandwidth requirements. The following article discusses edge computing trends you can expect to see in 2022. 

Edge computing is getting “edgier.” The technology, which moves compute and processing resources closer to where data is created and analyzed, is moving farther and farther away from on-premises data centers, and it’s moving fast. At the moment, about 10% of enterprise data is created and processed outside centralized data centers or clouds, according to Gartner. By 2025, Gartner predicts, that number will rocket up to 75%.

So while the edge is already very much present in your infrastructure, it’s safe to say that it will take over more and more of your data processing and delivery functions across more and more industries, making its way into places where you wouldn’t have thought data processing could happen at all (like the middle of a cattle ranch).

I polled my colleagues for their educated guesses about the state and spread of edge computing going into 2022, and here’s what they came up with.

1. Edge computing will be all around—and down on the farm.

Upgraded edge computing capabilities open up the technology to more industries: those operating in far-flung locations, difficult-to-access terrain, or simply dangerous conditions. (It’s one of the main perks of automation, cobots, and IoT: keeping humans safe and out of harm’s way.)

Agriculture shows some of the most promise for edge computing. Farmers use edge technology to track water use and animals, decide where to put fertilizer and in what amounts, analyze soil quality, and monitor crop growth. Even tractors can become part of an edge network, along with sensors spanning fields.
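To make that concrete, here’s a minimal sketch of the kind of on-device aggregation a field sensor might perform so that only a compact summary, rather than every raw sample, crosses a farm’s limited uplink. The readings, the moisture threshold, and the function name are illustrative assumptions, not any particular product’s API.

```python
# A minimal sketch of edge-side aggregation for farm telemetry.
# All names and thresholds here are hypothetical.

from statistics import mean

def summarize_soil_readings(readings, dry_threshold=0.20):
    """Reduce raw soil-moisture samples to a compact summary on-device,
    so only a few bytes (not every sample) are sent upstream."""
    return {
        "samples": len(readings),
        "avg_moisture": round(mean(readings), 3),
        "min_moisture": min(readings),
        "needs_irrigation": min(readings) < dry_threshold,
    }

# One hour of readings from a single (hypothetical) field sensor:
hourly = [0.31, 0.29, 0.27, 0.24, 0.21, 0.19]
print(summarize_soil_readings(hourly))
# {'samples': 6, 'avg_moisture': 0.252, 'min_moisture': 0.19, 'needs_irrigation': True}
```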

2. Filtering edge data can deliver a competitive edge.

The smarter edge computing gets, the more it may be able to filter out the noise that a central hub would have to process and sift through for value. As my colleague Nate Antons said, “It’s like delivering the CliffsNotes of telemetry data.”

For the Mercedes-AMG Petronas Formula 1 team, this type of edge intelligence can deliver a competitive edge. Hundreds of sensors on each car provide real-time data about performance: terabytes of data per race. That data helps the team maintain the cars, keep drivers safe, and shape race strategy. But what about filtering out the less valuable telemetry? When milliseconds can be the difference between winning and losing, the team plans to use smarter sensors that do more processing at the edge, putting more valuable data in the hands of engineers.

It’s the kind of optimization and performance monitoring that can be game-changing not just for F1 but for countless other businesses.
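Here’s a hedged sketch of what that filtering might look like in code: an edge-side filter that forwards a reading only when it deviates meaningfully from a rolling baseline, delivering the CliffsNotes rather than the full stream. The window size and deviation threshold are illustrative assumptions.

```python
# A sketch of edge-side telemetry filtering: forward only readings that
# deviate from a rolling baseline. Window and threshold are illustrative.

from collections import deque

class EdgeFilter:
    def __init__(self, window=50, threshold=0.15):
        self.history = deque(maxlen=window)   # rolling baseline window
        self.threshold = threshold            # fractional deviation worth reporting

    def should_forward(self, value):
        """Return True if this reading differs from the rolling mean by
        more than the threshold (or if there is no baseline yet)."""
        if not self.history:
            self.history.append(value)
            return True
        baseline = sum(self.history) / len(self.history)
        self.history.append(value)
        return abs(value - baseline) / max(abs(baseline), 1e-9) > self.threshold

f = EdgeFilter()
stream = [100.0, 101.0, 99.5, 118.0, 100.5]        # 118.0 is the outlier
print([v for v in stream if f.should_forward(v)])  # [100.0, 118.0]
```

Only two of the five readings cross the uplink; the rest are noise the central hub never has to sift through.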

3. Edge computing doesn’t need to be connected.

Sure, “connected” is good because the data gets back to centralized data centers faster. But requiring a constant connection has made edge systems difficult to deploy, and it has put certain offerings out of reach, especially for more remote industries and operations.

What many organizations don’t realize is that edge systems can also operate in a disconnected mode. An example: an edge system running autonomously in a hard-to-reach place like a rural mining site. The system keeps running when the connection drops; when the connection resumes, it syncs and transfers data without disrupting the business. This disconnected mode opens up many more possibilities.
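A minimal sketch of this store-and-forward pattern, assuming a local SQLite outbox and a hypothetical uplink object with is_connected() and send() methods (stand-ins for whatever transport a real deployment uses):

```python
# Store-and-forward for disconnected edge operation: a sketch.
# The `uplink` object is hypothetical; SQLite keeps the backlog
# durable across power loss and restarts.

import json
import sqlite3

class DisconnectedBuffer:
    def __init__(self, path="edge_buffer.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")

    def record(self, reading: dict):
        """Persist a reading locally, whether or not we are online."""
        self.db.execute("INSERT INTO outbox (payload) VALUES (?)",
                        (json.dumps(reading),))
        self.db.commit()

    def sync(self, uplink):
        """When the connection returns, replay the backlog in arrival order."""
        if not uplink.is_connected():
            return 0
        rows = self.db.execute(
            "SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            uplink.send(payload)
            self.db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        self.db.commit()
        return len(rows)   # number of buffered readings delivered
```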

Whether as part of the core design of an edge system or a function of resiliency, disconnected edge just might be the next big thing.

4. 5G edge computing will enable the latest in AI, automation, and the internet of things (IoT).

IoT edge computing is accelerating the next generation of automation and is one of the core components of the industrial internet of things (IIoT), where industries leverage edge platforms for analytics, smart buildings, and more.

Take a factory floor, for example. The sensors and actuators connected to the machinery (the edge) form a pod, and a message broker orchestrates the flow of telemetry data between the sensors and a data-processing service. The data is ingested into a stateful microservice backed by persistent storage, then moved to the cloud to train machine learning (ML) models. Back at the edge, the trained models can detect anomalies on the floor and predict equipment maintenance.
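Here’s a hedged Python sketch of the broker-mediated piece of that pipeline, using the open-source paho-mqtt client (1.x callback API). The broker address, topic names, and alert threshold are illustrative assumptions, and the inline check stands in for a cloud-trained model scoring readings locally.

```python
# Sketch: subscribe to machine telemetry via an on-site MQTT broker and
# publish alerts when a reading looks anomalous. Names are hypothetical.

import json
import paho.mqtt.client as mqtt

BROKER = "edge-broker.local"                  # hypothetical on-site broker
TELEMETRY_TOPIC = "factory/line1/vibration"
ALERT_TOPIC = "factory/line1/alerts"

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # Stand-in for inference by a model trained in the cloud; a real
    # deployment would load that model and score readings locally.
    if reading["vibration_mm_s"] > 7.1:       # illustrative alarm threshold
        client.publish(ALERT_TOPIC, json.dumps(
            {"machine": reading["machine"], "value": reading["vibration_mm_s"]}))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TELEMETRY_TOPIC)
client.loop_forever()
```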

5. The edge is getting “foggy.”

Fog computing, as it’s known, is sort of the “edge of the edge.” The concept is similar to edge computing in that it brings processing closer to the data, but it goes further: rather than relying on the cloud, it carries out much more of the computation, storage, and communication locally at the edge.

Throughout this conversation, we’re talking about data that’s generated off-site, and about how and where you do something with that data. Conventional edge computing is about separating the useful data from the useless.

6. Containers (and container-native data storage) will be key to success at the edge.

Containers and Kubernetes make an ideal platform for the edge. Hyperscale cloud providers are taking note with offerings like AWS Snowball, Azure Stack, and Google Anthos, all of which can run Kubernetes at the edge. These environments run data ingestion, data storage, data processing, data analytics, and machine learning workloads at the edge.

But what about the challenges of running data-centric workloads at the edge? Containers and container-native storage are key. For advanced workloads at the edge, container-native storage can provide persistent storage, high availability, and durability as core services. It can also enable near-seamless migration of workloads between cloud and edge.
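As one illustration, here’s how an edge workload might request durable, container-native storage using the official Kubernetes Python client. The storage class name “edge-replicated” is hypothetical; substitute whatever class your container-native storage layer actually exposes.

```python
# Sketch: claim replicated, persistent storage for an edge workload.
# Requires the official `kubernetes` Python client and cluster access.

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="telemetry-store"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="edge-replicated",   # hypothetical storage class
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc)
```

Because the claim is declarative, the same pattern works whether the volume is provisioned at the edge or in the cloud, which is what makes migration between the two relatively painless.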


This article was written by Rob Ludeman from Business2Community and was legally licensed through the Industry Dive publisher network. Please direct all licensing questions to legal@industrydive.com.