Leveraging Edge Analytics For Competitive Advantage
Here is a four-pronged data management strategy that companies can use to mobilise their IT teams and build their own edge analytics practice.
In today’s hyper-connected world, data is ubiquitous. Devices, applications, and systems of intelligence gather data at enormous volume and complexity across the world. Companies are grappling with this explosion, as collecting all raw data for analysis is a costly and complex exercise. At the same time, they need to be cautious that selectively picking data for analysis can introduce bias and undermine the value of the resulting insights.
Companies are also facing the need to be compliant with new privacy laws and regulations being enacted across the world that can restrict cross-border data transfers. This means that data being generated in particular geographies will need to remain in those geographies while still being useful.
In this context, edge analytics is emerging as an effective way to identify and analyse real-time data at, or near, the point where it is collected, without moving it to a central location, while still delivering the contextual, relevant, and timely insights that companies need.
Edge analytics can empower companies to retain customers, improve service quality, strengthen market share, and even go all the way to disrupt an entire industry. While not all data needs to be processed at the point of collection, it is important to first determine what data can reveal the most meaningful insights immediately when processed on the edge. This instantly makes edge data analytics a high-value asset that can help you get insights in real-time without the inherent flaw of trying to reduce the size of data sets for analysis.
The freshest data also supports the best decisions. An F1 race car transmits around 2 GB of technical data every lap, which can reach 3 TB over a full race. Within milliseconds, each sensor streams data to the engineers, who make real-time tweaks and send prompts that help the driver make informed decisions and, ultimately, win the race. Similarly, an autonomous car depends on edge analytics for the split-second decisions that avoid accidents and improve driving.
Globally, insurance companies lost $50 billion to natural catastrophe pay-outs in 2016. Data-driven, real-time storm warnings can limit the extent of damage, safeguard lives, and ultimately cut costs for insurers. In the services industry, a call centre executive can respond better when empowered with real-time speech analytics: the tone of the customer’s voice, their sentiment, attitude, and intent can all be sensed with the aid of analytics, and used not only to improve service but also to identify upsell opportunities.
Edge analytics enables organisations to monitor the physical as well as the cyber worlds, and reduce the time between data generation and the application of insights gathered from that data. Edge analytics can reduce the need to move data from an IoT device to a central repository, arrest the loss of value associated with selective data processing, shrink latency, optimise transmission costs, and improve overall quality of service. The benefits are convincingly large, contributing to overall business value and enabling proactive decision-making.
Companies looking to exploit the full potential of edge analytics should first overcome the constraints of data stuck in silos, and then reduce the risks and expenses associated with siloed data repositories that deliver no business value.
Here is a four-pronged data management strategy for companies to mobilise their IT teams to form their own edge analytics strategy:
Create a comprehensive data plan: Develop a data acquisition and retention plan by zeroing in on what data is truly valuable. Build a tiered view of data to identify the best data, extract the maximum from the freshest relevant data, and decide what to discard. Separate data into two streams: data for predictive and historical analysis, and real-time data for edge computing and distributed analytics.
Consider a hybrid model: Create a hybrid data processing architecture that exploits the strengths of edge computing while still leveraging cloud and central processing. The cloud aggregates data and improves AI and machine learning models, while the edge becomes the backbone of day-to-day operations.
Avoid hoarding data: It is not volume but relevance and quality that make data valuable. Holding expired data creates unnecessary expense and becomes a security and compliance risk. Companies should be clear about what data to retain and what to purge after due analysis.
Plan for oversight: Algorithms built by humans are prone to the biases and errors of judgement and interpretation inherent to humans. As data volumes grow, it becomes increasingly difficult to ensure fairness and efficiency. Build in enough checks and evaluations to keep results fair and balanced.
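To make the first three prongs concrete, here is a minimal sketch of how an edge device might separate data into the two tiers, act on the real-time tier locally, and purge expired data before forwarding the rest to the cloud. All names, thresholds, and the retention window here are hypothetical illustrations, not part of any specific product or the strategy above.

```python
import time
from collections import deque

# Hypothetical policy values; in practice these would come from the
# data acquisition and retention plan described above.
RETENTION_SECONDS = 24 * 3600   # how long batch data stays queued
TEMP_ALERT_THRESHOLD = 90.0     # readings above this are acted on at the edge

class EdgeRouter:
    """Routes sensor readings into two tiers: urgent readings are
    handled immediately at the edge; the rest are queued for batch
    upload to a central/cloud tier for historical analysis."""

    def __init__(self):
        self.cloud_queue = deque()  # (timestamp, reading) pairs for the cloud tier
        self.alerts = []            # real-time actions taken locally

    def ingest(self, reading, now=None):
        now = time.time() if now is None else now
        if reading["temp_c"] >= TEMP_ALERT_THRESHOLD:
            # Real-time tier: act immediately, no round trip to the cloud.
            self.alerts.append((now, reading["sensor_id"]))
        else:
            # Historical tier: keep for later predictive analysis.
            self.cloud_queue.append((now, reading))

    def purge_expired(self, now=None):
        """Drop queued data older than the retention window
        (the 'avoid hoarding data' prong)."""
        now = time.time() if now is None else now
        while self.cloud_queue and now - self.cloud_queue[0][0] > RETENTION_SECONDS:
            self.cloud_queue.popleft()

router = EdgeRouter()
router.ingest({"sensor_id": "s1", "temp_c": 95.0}, now=0)  # urgent: edge alert
router.ingest({"sensor_id": "s2", "temp_c": 40.0}, now=0)  # routine: queue for cloud
router.purge_expired(now=RETENTION_SECONDS + 1)            # queued item has expired
print(len(router.alerts), len(router.cloud_queue))
```

The design choice this illustrates is that only the routing rule and retention policy live on the device; the heavier aggregation and model training stay in the cloud, matching the hybrid model described above.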
Looking ahead, organisations that master the techniques of identifying the true value of fresh, relevant data and converting it into actionable intelligence can make non-linear leaps and outpace the competition. Edge computing and analytics will grow in prevalence and reach, assuming vital roles in the business and IT architecture strategies of large companies. Global data marketplaces and trading exchanges are also expected to take shape, enabling easy and transparent buying and selling of data.