The proliferation of the Internet of Things (IoT) has made it possible to collect and analyze data, and respond, in real time. Think about an autonomous vehicle that encounters an obstacle ahead. There must be a split-second response that determines whether that obstacle is a shadow or a person in the road. In a case like this, there’s no room for error, and no room for latency. Latency, a.k.a. delay, is the enemy of response time. To make this real-time response happen, the data analysis needs to take place near the sensor on the vehicle: at the edge of the network.
We had an opportunity to sit down with renowned technology consultant Marc Staimer to talk about edge computing and how it’s being used with advanced technologies to overcome the latency issues around public cloud.
Edge Computing Emerged as a Response to Latency in Public Cloud
At its most basic, edge computing simply means having compute power and analytics close to the source of the data being processed. According to Staimer, edge computing came about as a response to applications moving into the public cloud, where centralized processing can lead to unacceptable latency.
Explains Staimer, “Every kilometer of distance between the device collecting data and the device processing and analyzing that data adds latency. Let’s put this in perspective. If the distance latency between a smart meter and the device processing that smart meter data in a public cloud is approximately 1,000 milliseconds, it creates a roundtrip delay of two seconds before accounting for the latency of the processing and analytics. Whereas, if the edge computing is much closer physically, it can reduce that distance latency to a few dozen milliseconds. In a metropolitan area that distance latency is likely to be no more than 50 milliseconds (depending on circuit miles), or 20 times less than cloud or core in the example just discussed.”
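The arithmetic in Staimer's example is simple but worth making concrete: a request and its response each traverse the link, so round-trip delay is twice the one-way distance latency, before any processing time. A minimal sketch, using only the illustrative figures from the quote:

```python
def round_trip_ms(one_way_ms: float, processing_ms: float = 0.0) -> float:
    """Round-trip delay: the request and the response each traverse the link,
    so one-way distance latency counts twice, plus any processing time."""
    return 2 * one_way_ms + processing_ms

# Public-cloud scenario from the quote: ~1,000 ms one-way distance latency
cloud_rtt = round_trip_ms(1000)   # 2,000 ms = two seconds, before processing

# Metro-area edge scenario: ~50 ms one-way distance latency
edge_rtt = round_trip_ms(50)      # 100 ms round trip

print(cloud_rtt / edge_rtt)       # 20.0 -- the "20 times less" in the quote
```

The 20x ratio holds for the round trip because doubling the one-way latency doubles both sides of the comparison equally.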
Analytics Makes Edge Computing Critical
Many IoT sensors have limited analytics capabilities, so they send the data they collect somewhere else to be processed. Monitors on a refrigerator or robot vacuum are examples of this type of analytics. Latency isn't a big issue here because split-second processing isn't necessary.
“The analytics is the issue. What is being done locally? Is the local processing being done for one device or multiple devices? What decisions are required based on policy engines or AI/machine learning to be completed locally?” says Staimer. “In the case of the autonomous vehicle, it's computing for hundreds of sensors on that vehicle. It's analyzing, running analytics against that and making decisions based on machine-learning against its database, and it has to be in real time.”
Low-level edge computing typically has minimal analytics, primarily dealing with a single device. For example, a wind turbine does not collect data from other wind turbines. It has its own data and sends it somewhere to be aggregated centrally, in a database of some kind in the cloud or on-premises.
The Emergence of the Fog
Staimer goes on to explain that when the processing and analysis of multiple data devices takes place closer to the edge, it can provide actionable information in real time with much lower latencies. This is called fog computing. Fog computing devices are distributed near the edge and aggregate, analyze, sub-filter, and even make decisions for multiple edge devices if they have policy engines or, more importantly today, AI/machine learning.
In the case of vehicles, sensor data related to driving and safety are processed in real time in the vehicle, which is the edge device. At the same time, traffic or performance-related data can be collected by each car, summarized in another edge process, and then further analyzed in a metropolitan area fog that's covering a certain number of vehicles in near real time to improve traffic flows and fleet efficiency. And finally, non-time-sensitive data from both the edge and the fog can be sent in highly summarized form to the cloud for further analysis. With edge, fog, and cloud, you have a solution that takes into consideration the urgency of the analysis.
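The edge/fog/cloud split described above amounts to routing each piece of data to the tier whose latency budget fits how urgently a decision is needed. A toy sketch of that idea; the tier names follow the article, but the thresholds and field names are hypothetical, not from any product:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    value: float
    deadline_ms: float  # how quickly a decision based on this reading is needed

def route(reading: Reading) -> str:
    """Send each reading to the tier whose latency fits its deadline.
    Thresholds are illustrative only."""
    if reading.deadline_ms < 10:      # safety-critical: decide on the vehicle
        return "edge"
    elif reading.deadline_ms < 500:   # near real time: metro-area aggregation
        return "fog"
    else:                             # not time-sensitive: summarized upstream
        return "cloud"

print(route(Reading("obstacle_camera", 0.97, 5)))     # edge
print(route(Reading("traffic_speed", 42.0, 200)))     # fog
print(route(Reading("fuel_economy", 31.5, 60_000)))   # cloud
```

The point of the sketch is the shape of the decision, not the numbers: urgency, not data volume, determines where the analytics run.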
Many analytics performance problems can be traced to distance and network latencies. Others can be tied to the performance of the analytics engines. Edge and fog computing can solve the stubborn problem of distance latency, but they cannot improve the performance of the analytics engines themselves.
The Fog Is Clearly the Growth Area
“The fog is where there’s a significant amount of growth,” notes Staimer. “Traditionally, that would have been what was known as the edge: remote offices, branch offices, etc. And that's where analytics can take place.”
A good way to think of fog computing is as pre-processing. Some of the analytics are done locally, but additional, more in-depth analytics can run in the core, where real-time responses are not as important. Where fast decision-making is a requirement, it is going to happen at the edge or in the fog. And these fog-based appliances don't need to be in a server room; they can live in a closet or in the base of a wind turbine, for example. In fact, a single fog device could handle multiple wind turbines.
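The pre-processing idea can be sketched simply: reduce a batch of raw sensor samples to a compact summary locally, then ship only the summary upstream. A minimal illustration, with hypothetical field names and an invented alert threshold:

```python
import statistics

def summarize_turbine(samples: list[float]) -> dict:
    """Fog-style pre-processing: condense raw readings into a small summary
    before forwarding it to the core/cloud. Names and the 90.0 alert
    threshold are illustrative, not from any real system."""
    return {
        "count": len(samples),
        "mean": statistics.mean(samples),
        "max": max(samples),
        "alerts": sum(1 for s in samples if s > 90.0),
    }

# e.g. a handful of rotor temperature readings from one turbine
raw = [72.1, 74.0, 95.3, 71.8, 73.2]
summary = summarize_turbine(raw)
print(summary["count"], summary["alerts"])   # 5 1
```

Only the four summary fields cross the network, rather than every raw sample, which is exactly the bandwidth-and-latency trade the article describes.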
“Oracle's got some very smart solutions for the edge, fog, cloud, or core computing space,” says Staimer. “They have the Oracle Autonomous Cloud; Exadata or Oracle Database Appliance (ODA) on-prem – a more traditional CapEx implementation; and Exadata Cloud at Customer managed service. Oracle additionally has an outstanding fog or edge play with its ODA. Oracle’s solutions are unique in that they are engineered to provide extremely low latency, fast analytics in real time. The built-in AI/machine learning automates and simplifies real-time decision making.”
Not all of the analytics are decentralized, adds Staimer; just the portion required for real-time interactions. When fast, actionable information is needed, there are solutions such as Oracle Exadata and the Oracle Database Appliance. These engineered systems are not just for the data center anymore. With built-in AI/machine learning, they can be placed close to the edge, where they can run analytics as required, make real-time decisions, and then pass the results and pre-filtered data on to the cloud or core for more intensive analytical processing.
Oracle Engineered Systems are used in the Oracle Cloud, Cloud at Customer, in the fog, or on the edge in a closet. This enables the processing to be as close to the edge or as centralized as required.
What it comes down to is time. Oracle Engineered Systems (Exadata and Oracle Database Appliance or ODA) are architected to save time. Time is saved by faster processing, database consolidation, and multi-database analytics on a single copy of the data where the processing is moved to data instead of the data moved to the process. And as everyone knows, time is money. “Those time-saving processes are unique to the ODA and Oracle Exadata,” concludes Staimer.
The potential is so big here, we’ll focus specifically on how to address the spectrum of analytics requirements in part two of our conversation with Marc Staimer.
To learn more about Oracle Database Appliance and Oracle Exadata Machine, visit us online.
About Dragon Slayer Consulting: Marc Staimer, as President and CDS of the 21-year-old Dragon Slayer Consulting in Beaverton, OR, is well known for his in-depth and keen understanding of user problems, especially with storage, networking, applications, cloud services, data protection, and virtualization. Marc has published thousands of technology articles and tips from the user perspective for internationally renowned online trades including many of TechTarget’s Searchxxx.com websites and Network Computing and GigaOM. Marc has additionally delivered hundreds of white papers, webinars, and seminars to many well-known industry giants such as: Brocade, Cisco, DELL, EMC, Emulex (Avago), HDS, HPE, LSI (Avago), Mellanox, NEC, NetApp, Oracle, QLogic, SanDisk, and Western Digital. He has additionally provided similar services to smaller, less well-known vendors/startups including: Asigra, Cloudtenna, Clustrix, Condusiv, DH2i, Diablo, FalconStor, Gridstore, ioFABRIC, Nexenta, Neuxpower, NetEx, NoviFlow, Pavilion Data, Permabit, Qumulo, SBDS, StorONE, Tegile, and many more. His speaking engagements are always well attended, often standing room only, because of the pragmatic, immediately useful information provided. Marc can be reached at marcstaimer@me.com, (503)-312-2167, in Beaverton OR, 97007.