Hana Roadmapping - The Data Trap


In this article, we present a roadmapping model for a step-by-step introduction of Hana. In addition to software, hardware and the right employees, you need one thing above all else to enter the brave new world of digitalization and big data: data.
With the collection of uncompacted raw data in the Universal Journal, in Central Finance and in the One Exposure from Operations Hub or with the new, highly virtualized LSA++ architecture model, SAP is currently creating suitable containers for the evaluation of large volumes of data. The only pity is that their content is often not useful for statistical analyses.
The reason: on the one hand, existing business processes are not aligned with this objective, and on the other hand, the validity of practically all statistical methods depends on the sample size and quality. In other words, if you don't think about your digitalization goals now and how to get there, you will be empty-handed in ten years' time.
1. Migration
For SAP customers, the entry into digital business management begins with a migration from a classic database such as Oracle or MaxDB to Hana. This pure database migration means that reports and selected functions (e.g. transformations or activations in BW) become significantly faster.
New interfaces (e.g. to event-processing systems or Hadoop) are also available. Without new business processes, however, these interfaces will remain unused for the time being.
A pure database migration only delivers a return on investment if the acceleration of software functions alone generates a benefit.
If, for example, loading processes in BW run 50 or 75 percent faster due to faster activation and the elimination of individual process steps (for example: deletion and rebuilding of indices, database statistics and roll-up), then this can have two consequences:
Bottlenecks in batch processing disappear and decision-relevant data is available to the business side early in the morning rather than in the afternoon. The former may help to avoid new hardware investments, the latter may lead to better decisions.
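How much such an acceleration can shift the batch window is easy to estimate with a back-of-the-envelope calculation. The following sketch is purely illustrative; the 8-hour baseline and the 22:00 start time are assumptions, not figures from the article:

```python
# Illustrative only: how a 50-75 % faster BW load chain shifts data availability.
# The 8-hour baseline and the 22:00 start time are assumptions.

from datetime import datetime, timedelta

baseline_hours = 8.0                   # assumed runtime of the nightly load chain today
start = datetime(2015, 11, 2, 22, 0)   # assumed start of batch processing

for speedup in (0.50, 0.75):           # 50 % and 75 % faster, as quoted in the text
    runtime = baseline_hours * (1 - speedup)
    finished = start + timedelta(hours=runtime)
    print(f"{int(speedup * 100)} % faster: chain ends at {finished:%H:%M} "
          f"instead of {start + timedelta(hours=baseline_hours):%H:%M}")
```

Under these assumptions, the data is ready at 02:00 or even midnight instead of 06:00, which is exactly the "early in the morning rather than in the afternoon" effect described above.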
2. New functions
Once the database conversion has been completed, the new "Simple" solutions from SAP can be easily activated via the Switch Framework (SFW5).
The result: further business processes are accelerated by leaner table structures and code push-down and completely new functions (for example the new Bank Account Management in Cash Management) are provided.
After the migration, it is also time to make customer-specific solutions Hana-ready. Logically, the initial focus here will be on speed-critical business processes.
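What "code push-down" means in practice can be shown with a deliberately simplified sketch: instead of pulling every row into the application layer and aggregating it there, the aggregation is handed to the database, which returns only the result. SQLite stands in for Hana here, and the table and column names are invented:

```python
# Minimal sketch of the code push-down idea, using SQLite as a stand-in for Hana.
# Table and column names (sales_items, region, amount) are purely hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_items (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales_items VALUES (?, ?)",
                 [("EMEA", 100.0), ("EMEA", 50.0), ("APJ", 75.0)])

# Classic pattern: fetch every row into the application layer and aggregate there.
totals = {}
for region, amount in conn.execute("SELECT region, amount FROM sales_items"):
    totals[region] = totals.get(region, 0.0) + amount

# Push-down pattern: let the database aggregate and return only the result set.
pushed_down = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales_items GROUP BY region"))

assert totals == pushed_down   # same result, far less data moved in the second case
```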
3. New methods
As a platform and central hub for data provision, Hana opens doors towards event streams (keyword: ESP Plug-in for Hana Studio) and weakly structured/unstructured data (keywords: HDFS/MapReduce or NoSQL).
However, traditional ERP business processes have no use for this type of data. Corresponding value potential can only be tapped by redesigning existing processes.
For example, it would be conceivable to use data from social networks to prevent fraud and highlight questionable payment proposals for further analysis.
However, applications (SAP Fraud Management, Business Rules Framework) must be set up for this and existing applications (for example: the payment program/F110) may need to be expanded. The customer is therefore even more challenged than when "tuning" reports or loading processes.
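Purely as an illustration, such a check can be sketched as a handful of rules. The field names and thresholds below are invented and do not reflect how SAP Fraud Management or the Business Rules Framework are actually configured:

```python
# Hypothetical illustration of flagging payment proposals for manual review.
# All field names and thresholds are invented for this sketch.

def flag_payment(proposal: dict) -> list:
    """Return the reasons why a payment proposal deserves a second look."""
    reasons = []
    if proposal.get("bank_account_changed_days_ago", 999) < 30:
        reasons.append("beneficiary bank data changed recently")
    if proposal.get("amount", 0) > 100_000 and proposal.get("vendor_age_days", 0) < 90:
        reasons.append("large amount to a newly created vendor")
    if proposal.get("negative_social_signal", False):
        reasons.append("external signal (e.g. social-network data) raised a warning")
    return reasons

print(flag_payment({"amount": 250_000, "vendor_age_days": 14,
                    "bank_account_changed_days_ago": 5}))
```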
4. New algorithms
The acceleration achieved by Hana makes it possible to use known algorithms differently than before or to use new, performance-intensive algorithms for the first time.
Example: Customers can now be segmented according to price sensitivity not just once a month, but practically on an ongoing basis. Instead of pricing basic and premium brands with clearly defined target groups differently, as was previously the case, the same product can now be offered at different prices depending on the time of day.
With the nationwide rollout of digital price labels, brick-and-mortar retailers are now following airlines and webshops in creating the technical prerequisites for time-based customer segmentation.
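A deliberately simplified sketch of such time-dependent pricing might look as follows; the dayparts and price factors are invented for illustration and are not taken from any SAP solution:

```python
# Toy sketch of time-of-day pricing; factors and dayparts are made up for illustration.

from datetime import datetime

DAYPART_FACTOR = {"morning": 1.00, "noon": 0.95, "evening": 1.05, "night": 0.90}

def daypart(hour: int) -> str:
    if 6 <= hour < 11:
        return "morning"
    if 11 <= hour < 17:
        return "noon"
    if 17 <= hour < 22:
        return "evening"
    return "night"

def dynamic_price(list_price: float, at: datetime) -> float:
    """Same product, different price depending on the time of day."""
    return round(list_price * DAYPART_FACTOR[daypart(at.hour)], 2)

print(dynamic_price(2.49, datetime(2015, 11, 2, 19, 30)))   # evening price: 2.61
```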
5. New strategies
Sensor data quickly delivers large volumes of data, some of which is incorrect or incomplete. This alone represents a major challenge for traditional IT organizations.
However, there is another problem: the Internet of Things also has the extremely unpleasant characteristic - from a governance perspective - of constantly changing structurally.
A mid-range car can currently provide data from around 100 to 150 sensors. With every model change, new ones are added, old ones are removed or their technical properties change.
It is no different with the measuring units in a smartphone or a pump housing. Data flows and functions that are based on the assumption of stable input data are therefore obsolete even before go-live.
New architecture models in IT are needed to deal with unstable framework conditions. A fundamental redesign of data architectures and data flows is required in this phase at the latest.
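One building block of such a redesign is an ingestion layer that does not assume a fixed record structure. The following sketch is a simplified illustration with invented field names: only a minimal set of keys is mandatory, everything else is carried along as loosely structured readings, so a sensor added or removed with the next model change does not break the data flow:

```python
# Sketch of a schema-tolerant ingestion step: incoming sensor records are treated
# as loosely structured documents, not as rows of a fixed table. Field names are invented.

KNOWN_FIELDS = {"vehicle_id", "timestamp"}   # the only fields we insist on

def normalise(record: dict) -> dict:
    """Keep the mandatory keys, collect everything else as free-form sensor readings."""
    missing = KNOWN_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record rejected, missing {missing}")
    readings = {k: v for k, v in record.items() if k not in KNOWN_FIELDS}
    return {"vehicle_id": record["vehicle_id"],
            "timestamp": record["timestamp"],
            "readings": readings}            # new or removed sensors do not break the flow

# A newer model year delivers an extra sensor; the same code still works.
print(normalise({"vehicle_id": "V1", "timestamp": "2015-11-02T08:00:00",
                 "oil_temp": 87.2, "tyre_pressure_fl": 2.3, "rain_sensor": True}))
```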
6. New paradigms
With development steps 4 and 5 (in the diagram), closed feedback loops - long standard practice in process control systems - are finding their way into the control of business processes.
But while machinery in production provides a relatively stable framework for defining rules, decisions in business processes are made in an environment that changes fundamentally on an hourly or minute-by-minute basis.
The attempt to map the rules of this environment in rigid rules (and customizing settings) is bound to fail. Sooner or later, organizations will therefore be forced to delegate decisions to self-adapting (auto-adaptive) algorithms.
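To make the idea concrete, the following sketch replaces a fixed threshold from customizing with a limit that adapts to the observed data. The numbers and the smoothing factor are purely illustrative and not taken from any SAP component:

```python
# Minimal sketch of an auto-adaptive rule: instead of a fixed threshold from customizing,
# the limit follows the observed data. Numbers and smoothing factor are illustrative.

class AdaptiveLimit:
    def __init__(self, start: float, alpha: float = 0.1, headroom: float = 1.5):
        self.level = start        # current estimate of "normal"
        self.alpha = alpha        # how quickly the estimate adapts
        self.headroom = headroom  # tolerated deviation before a decision is escalated

    def check(self, value: float) -> bool:
        """Return True if the value is acceptable, then adapt the notion of 'normal'."""
        ok = value <= self.level * self.headroom
        self.level = (1 - self.alpha) * self.level + self.alpha * value
        return ok

limit = AdaptiveLimit(start=100.0)
for order_value in (95, 110, 102, 400):   # the last order is clearly out of line
    print(order_value, "accepted" if limit.check(order_value) else "escalated")
```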
This may sound unfamiliar or even frightening. But if we are honest, we have long been delegating everyday decisions to algorithms whose inner workings we no longer understand in detail.
Hardly anyone checks every instruction from their navigation system on the map, very few dispatchers will know how demand planning works in SAP ERP, and few airline passengers have any reservations about boarding a plane that is landed by the computer and not by the crew when runway visibility is poor.
Data as a critical success factor
However, business cases with pattern recognition and automated decision-making are often not even feasible with today's data sets. Pattern-recognizing algorithms in fraud prevention, for example, require clean and granular data from hundreds or even thousands of well-documented and clearly structured cases.
In order to have such a treasure trove of data in one, two or three decades, we need to start designing the architecture and collecting it today. Internet giants such as Amazon and Google have long since recognized this. The pioneers of big data are primarily concerned with collecting the largest possible amount of data and managing it in a flexible, structured way.
What can be done with this data later on is something we can still think about in ten years' time. Google's entry into the automotive industry, robotics and the home technology business should also be seen against this backdrop.
In all three cases, it is not primarily about the sale of hardware, but about control over the data produced by the hardware.
Conclusion
When you build a house, you don't just start planning when the roof tiles are laid. The architecture and design of the building are defined long before the first excavator drives onto the site. The vision may even emerge before the land is purchased.
Of course, there will be adjustments during the implementation of the construction plans. The building site itself may hold the odd surprise or a window may have to be moved a few centimeters to provide a better view.
However, just as with a building, a data architecture does not have to be rigid in every detail. Starting in December 2015, Horváth & Partners is offering a roadmapping boot camp with videos and webinars for customers and other interested parties, in which we will take a closer look at what flexible data architectures for digitalization could look like.