Derek Long, head of Communication Services & Mobile Broadband at Ericsson India, says big-data technology is an important part of the puzzle for Indian telecom operators wanting to extract value from the large volumes of data in their possession in a cost-efficient way.
Ericsson’s Derek Long says applying big-data technologies has the side effect of transferring some complexity from the database to the application.
When does data get big? People, devices and things are constantly generating massive volumes of data. At work, people create and consume data; so do children at home and students at school, and so do people and objects, whether stationary or on the move.
Millions of devices and sensors take measurements from their surroundings, providing up-to-date readings across the entire globe – data to be processed instantly or stored for later use by countless different applications. When a new phenomenon comes about, it often takes the related industries a while to agree on a common definition, and big data is no exception. The consensus, however, seems to be that data gets big when it starts to stretch the limits of traditional technology.
The data available to operators through their networks presents them with an opportunity and a business-intelligence edge over other players. As is often the case, opportunity comes with challenge, and for big data this challenge comprises the volume and diversity of the data – and the fact that both are expected to grow substantially over the next few years. An IDC study found that by 2020, the world would generate 50 times the amount of data it did in 2011. The value of the information in this data is significant, but the cost of obtaining it with current technology can be prohibitive, according to Derek Long, head of Communication Services & Mobile Broadband at Ericsson India.
The nature of this data – big data – is also set to change: the size of single data sets, the variety of data types and the demand for real-time access to data are all on the rise. As a result, the network collects and transmits data of widely varying types. Data sets are irregular and often unstructured; they can contain ambiguities, are time- and location-dependent, and are constantly updated by capture equipment such as mobile devices, sensors and RFID readers.
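To make this concrete, here is a minimal, hypothetical sketch – the record shapes and field names are invented for illustration, not taken from any real network system – of how two such irregular records might be coerced onto one loose schema:

```python
from datetime import datetime, timezone

# Hypothetical illustration: two readings arrive in different shapes --
# field names, units and timestamp formats all vary between sources.
cell_probe = {"ts": "2013-06-14T09:30:00+00:00", "cell_id": "DEL-4421",
              "throughput_kbps": 1800}
rfid_read = {"time": 1371202200, "reader": "WH-07", "tag": "0xA3F9",
             "loc": (28.61, 77.21)}

def normalize(record):
    """Map differently shaped capture records onto one loose schema,
    retaining unrecognized fields instead of discarding them."""
    out = {"source": None, "utc": None, "extra": {}}
    for key, value in record.items():
        if key == "ts":                          # ISO-8601 string timestamp
            out["utc"] = datetime.fromisoformat(value)
        elif key == "time":                      # UNIX epoch seconds
            out["utc"] = datetime.fromtimestamp(value, tz=timezone.utc)
        elif key in ("cell_id", "reader"):       # whatever identifies the source
            out["source"] = value
        else:
            out["extra"][key] = value            # keep the ambiguity for later analysis
    return out

print(normalize(cell_probe))
print(normalize(rfid_read))
```

The point is not the code itself but that the schema must tolerate fields it has never seen before – precisely the kind of irregularity that strains a fixed relational model.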
In its simplest form, big-data technology encompasses the tools, processes and procedures to consolidate, verify, analyze, manage and visualize very large data sets held in mainly non-relational, but also relational, repositories.
The emerging approach is a cost-effective one that can handle the three Vs of big data: volume, velocity and variability. Big-data technologies are a new generation of methods and architectures designed to extract value from masses of different data types through high-velocity capture, discovery and analysis. Complementing the telecom industry with big-data technologies could generate value and create innovation opportunities for operators and users across all industries, in public services and in private life.
The knowledge derived from analyzing data from smartphones and other devices connected to telecom networks is a valuable asset for telecom operators. OSS/BSS tools that analyze these massive amounts of data can help operators leverage the value of this knowledge. By further extending these tools into the communication embedded in management systems, requirements for user experience, business innovation and efficiency can be met.
The huge amounts of data generated by the Networked Society, in which real-time communication is more critical than ever, can be used to significant advantage in many areas of urban planning, such as efficient use of transportation, smart electricity distribution and water supply. Efficient planning and control of transport and utilities require analytics support that can, for example, forecast demand levels and so enable utility providers to deliver in a smart way – meeting demand with minimal waste. Figures for supply and demand can be constantly refined with real-time consumption rates, creating an ecosystem with reduced waste.
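As one sketch of the kind of analytics support meant here – the readings and the smoothing factor are invented for illustration – a utility could refine a rolling demand forecast with each real-time consumption reading using simple exponential smoothing:

```python
def update_forecast(forecast, reading, alpha=0.3):
    """Simple exponential smoothing: blend the previous forecast with the
    latest real-time consumption reading. alpha weights recent data."""
    return alpha * reading + (1 - alpha) * forecast

# Invented hourly electricity readings (MW) streaming in from smart meters.
readings = [410.0, 432.5, 448.0, 441.2, 460.8]
forecast = readings[0]  # seed the forecast with the first observation
for r in readings[1:]:
    forecast = update_forecast(forecast, r)
    print(f"reading={r:6.1f} MW  next-hour forecast={forecast:6.1f} MW")
```

In practice such a loop would sit inside a streaming pipeline fed directly by metering data, so the supply plan is corrected continuously rather than in batch.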
The value that can be derived from using big-data technologies depends on the use case and the data associated with it. Apart from volume and velocity, the value that can be gained from the variability of data tends to be overlooked. Put simply, the less structured the data, the greater the requirement to apply big-data technologies.
Big-data technologies are usually engineered from the bottom up with two things in mind: scale and availability. Consequently, most solutions are distributed in nature and introduce new programming models for working with large volumes of data. Because most legacy database models cannot be used effectively for big data, the current approach to ensuring availability and partitioning needs to be revised.
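A canonical example of such a programming model is map/reduce, in which work is expressed as two small, stateless functions that a framework can scale out across data partitions. The single-process toy below is a hypothetical sketch – a real deployment would run on a distributed framework such as Hadoop – that counts call records per cell site:

```python
from collections import defaultdict
from itertools import chain

def mapper(log_line):
    """Emit (key, 1) pairs; here the key is a cell-site ID."""
    cell_id = log_line.split(",")[0]
    yield (cell_id, 1)

def reducer(key, values):
    """Combine all values that share a key."""
    return key, sum(values)

# Invented call records in the form "cell_id,timestamp".
logs = ["DEL-01,09:00", "DEL-02,09:01", "DEL-01,09:02", "DEL-01,09:03"]

# Shuffle step: group mapped pairs by key, as a framework would across nodes.
groups = defaultdict(list)
for key, value in chain.from_iterable(mapper(line) for line in logs):
    groups[key].append(value)

print(dict(reducer(k, v) for k, v in groups.items()))
# {'DEL-01': 3, 'DEL-02': 1}
```

Because the mapper and reducer share no state, a framework is free to run them on as many nodes as the data requires – the property that lets such models scale where a single relational engine cannot.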
Consequently, big-data technology is an important part of the puzzle for operators wanting to extract value from the large volumes of data in their possession in a cost-efficient way. Applying big-data technologies has the side effect of transferring some complexity from the database to the application.