Big Data becomes enterprise ready
Modern, complex, sensor-based systems generate enormous amounts of data: Even now, semi-autonomous cars generate over 25 gigabytes per vehicle per hour.
Such data volumes can no longer be managed with a classic IT landscape, let alone evaluated in real time. Only an appropriate Big Data strategy can make this exponentially growing volume of data usable.
However, while Big Data has been part of everyday practice in highly specialized data analyses at many companies for years, its integration into the rest of the IT landscape is often still missing. At the core of the Big Data world stands the open source framework Hadoop.
Thanks to the many available extensions, such as Apache Kafka and NiFi, it keeps growing in both power and complexity. These extensions often come directly from the large Silicon Valley companies, built in response to their extreme requirements and user numbers.
Companies venturing into the Big Data world therefore have a large, almost unmanageable number of tools at their disposal to help them exploit the ever-growing data streams profitably.
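To make the ingestion side of such a landscape a little more tangible, here is a minimal sketch of how a stream of vehicle telemetry could be fed into an Apache Kafka topic. It assumes a broker reachable at localhost:9092 and the kafka-python client; the topic name and message fields are purely illustrative.

```python
import json
import time

from kafka import KafkaProducer  # assumes the kafka-python package is installed

# Producer that serializes each record as JSON before sending it to the broker.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

# Illustrative vehicle telemetry message; in a real fleet this would arrive
# continuously from the vehicles' telemetry gateway.
producer.send("vehicle-telemetry", {
    "vehicle_id": "V-4711",
    "timestamp": time.time(),
    "speed_kmh": 87.5,
    "battery_pct": 64,
})
producer.flush()
```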
Even today, operating such a complex landscape requires a dedicated Big Data team, which is not least one of the reasons why the various cloud service providers are gaining ground here.
This complexity, in terms of both content and technology, has so far meant that Big Data systems in many companies are used primarily by a small circle of experts and merely exist alongside the actual productive IT landscape.
The data lake quickly became a data grave in which a lot of data was collected, but little was actually used. The task of many IT departments is therefore clear: to integrate the Big Data systems into the rest of the system landscape so that the data there can be used to the direct benefit of other business processes.
In concrete terms, this can mean storing all customer communications across all channels in the company's Hadoop cluster, analyzing them there, and using the results to trigger targeted follow-up processes in the CRM or ERP system.
For example, e-mails and relevant social media comments can be automatically assigned to business transactions in order to draw conclusions about the quality of the company's own processes.
Let's imagine that an end customer complains in a Facebook post that he has been waiting forever for his package. We can now identify this customer in our customer base, determine the potentially affected orders from the transactional data, identify a possibly involved intermediary supplier, and downgrade that supplier accordingly in the supplier ranking.
At the same time, the comment would of course affect the customer's lifetime value, which in turn could trigger special customer loyalty programs in the CRM system.
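As a rough illustration of this scenario, the following PySpark sketch matches delivery complaints from a social channel to customers, determines the potentially affected orders, and aggregates the result per logistics partner as input for the supplier ranking. All paths, table names, and column names are hypothetical, and the keyword filter merely stands in for a proper text classification step.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("complaint-to-action").getOrCreate()

# Raw data previously collected in the data lake (hypothetical paths).
comments = spark.read.parquet("hdfs:///datalake/social/comments")
customers = spark.read.parquet("hdfs:///datalake/crm/customers")
orders = spark.read.parquet("hdfs:///datalake/erp/orders")

# 1. Keep only complaints that look like delayed deliveries.
delivery_complaints = comments.filter(
    F.lower(F.col("text")).rlike("waiting|delay|package")
)

# 2. Identify the complaining customers in the customer base, e.g. via the
#    social media handle stored on the CRM record.
matched = delivery_complaints.join(
    customers, delivery_complaints.author_handle == customers.social_handle
)

# 3. Determine the potentially affected orders: deliveries that were still
#    open when the complaint was posted.
affected_orders = matched.join(orders, "customer_id").filter(
    (F.col("order_status") == "IN_DELIVERY")
    & (F.col("order_date") < F.col("comment_date"))
)

# 4. Aggregate per logistics partner as input for the supplier ranking; this
#    result set would then be handed over to the ERP or CRM system.
supplier_impact = affected_orders.groupBy("logistics_partner_id").agg(
    F.count("*").alias("delayed_delivery_complaints")
)

supplier_impact.write.mode("overwrite").parquet(
    "hdfs:///datalake/analytics/supplier_complaint_impact"
)
```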
It is at this point of process integration that SAP comes into play. SAP is aware of this weak spot and is pushing into the market with new solutions that create an interface between the Big Data and ERP landscapes.
SAP Vora is the first step in this direction and connects Hadoop with SAP HANA bidirectionally. With the acquisition of Altiscale, SAP has also brought a Big Data cloud provider on board in order to quickly implement integrated cloud scenarios for its customers.
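Independently of Vora's own interfaces, the basic pattern of pushing an analysis result from the Hadoop side into HANA can be sketched with Spark's generic JDBC writer. This is a simplified illustration rather than Vora's actual API: connection URL, credentials, and table name are placeholders, and the SAP HANA JDBC driver would have to be on the Spark classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hadoop-to-hana").getOrCreate()

# Result set produced in the Hadoop cluster (hypothetical path, see above).
supplier_impact = spark.read.parquet(
    "hdfs:///datalake/analytics/supplier_complaint_impact"
)

# Write it into a HANA table via the standard JDBC data source; host, port,
# schema, and credentials are placeholders.
supplier_impact.write.format("jdbc").options(
    url="jdbc:sap://hana-host:30015",
    driver="com.sap.db.jdbc.Driver",
    dbtable="ANALYTICS.SUPPLIER_COMPLAINT_IMPACT",
    user="INTEGRATION_USER",
    password="********",
).mode("append").save()
```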
So if companies don't want to ignore topics such as the Internet of Things or machine learning on large, unstructured data volumes, they won't be able to avoid a full-blown Big Data strategy in the future.
The challenge will lie in integrating the open source software into the rest of the enterprise world. SAP has taken the first steps here to enable cloud customers in particular to enter the process-integrated Big Data world.