

AI has become the defining technological transformation of our time. Large language models offer unprecedented opportunities, but they also confront companies with new questions: Where should they start? How should the rollout be structured? How can the return on investment actually be measured? Beyond the technology itself, a sound implementation strategy and, above all, professional guidance throughout the project are required.
AI technologies and large language models (LLMs) are powerful tools, not least in financial planning. The SAP Business Technology Platform is a good example of enterprise AI, combining several components into a unified solution. This allows real business challenges to be addressed while meeting security, governance and scalability requirements.
CFOs increase AI investments
But companies face a paradox: while the introduction of AI is progressing rapidly, measurable returns remain elusive for many. A comprehensive study of more than 280 finance executives conducted by the Boston Consulting Group in 2025 shows that only 45% of companies can successfully quantify the ROI (return on investment) of their AI initiatives, despite 78% of CFOs planning to increase their AI investments in the next 12 to 18 months. This gap between implementation enthusiasm and realized value underscores an important insight: technology alone does not guarantee success.
LLMs need structure
The challenge for companies is that LLMs are so new that established workflows and toolsets are still in their infancy. Conventional software comes with defined interfaces, predefined functions and clear operational boundaries. LLMs, on the other hand, offer almost unlimited flexibility - a feature that proves to be as powerful as it is confusing. Without proper structure, this flexibility leads to inconsistent results, hallucinations and frustrated users who expect magic but get mediocrity.
While LLMs represent a fundamental shift in the approach to business automation, understanding their nature is critical to successful implementation: an LLM is comparable to a brilliant college graduate with remarkable skills. It understands context, processes natural language, recognizes patterns and generates insights.
However, like any new employee, it needs guidance, structure and the right environment to create added value. In SAP financial consolidation, the difference between success and failure therefore often lies in the implementation strategy, and with it in expert advice.
Just as deploying enterprise software requires more than installation, realizing the transformative potential of AI requires a deep understanding of both the technology's capabilities and the underlying business processes. Consultants who specialize in AI implementations bring proven methodologies, industry benchmarks and practical experience that shorten the payback period while avoiding common mistakes. They act as architects of transformation, designing solutions that align technological capabilities with business goals. When it comes to LLMs, the role of the consultant is changing from implementer to trainer:
The consultant becomes a highly specialized teacher who understands both the student's potential (the LLM) and the curriculum required for success (the business processes). He knows how to structure prompts consistently, when to apply different models to specific tasks, and how to create feedback loops that improve performance over time. Most importantly, he understands that success requires more than just technology - it requires new ways of thinking about the work itself.
Goal-oriented workflows
Two important concepts emerge in this educational framework: agent mode and the Model Context Protocol (MCP). In agent mode, AI goes beyond question-and-answer interactions to goal-oriented workflows: the model maintains context across multiple steps, orchestrates targeted tool calls (for example ERP APIs, group reporting services, database queries), evaluates the returned results and iteratively adapts its plan (the ReAct principle: "Reason, Act, Observe"). Context management is crucial:
It is not just the immediate query that counts, but the entire business process, historical patterns and the target state. Without proper context and tool management, even strong LLMs quickly produce generic answers; with them in place, agents can, for example, retrieve balances, analyze intercompany differences, simulate postings and document results with a complete audit trail.
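The following minimal sketch illustrates how such a Reason-Act-Observe loop can look in code. It is purely illustrative: the call_llm() stub, the tool names and the sample figures are assumptions standing in for a real model call and real ERP connectivity.

```python
# Minimal sketch of a Reason-Act-Observe (ReAct) agent loop.
# call_llm(), the tool functions and all figures are illustrative stubs.
import json

def get_ic_balances(entity: str) -> dict:
    """Illustrative tool: fetch intercompany balances for an entity (stubbed)."""
    return {"entity": entity, "receivable": 1_050_000, "payable": 1_000_000}

def post_adjustment(entity: str, amount: float) -> dict:
    """Illustrative tool: simulate an elimination posting (stubbed)."""
    return {"entity": entity, "amount": amount, "status": "simulated"}

TOOLS = {"get_ic_balances": get_ic_balances, "post_adjustment": post_adjustment}

def call_llm(goal: str, history: list) -> dict:
    """Stand-in for a real model call; returns the next action as a dict.
    A hard-coded plan replaces the model's reasoning in this sketch."""
    if not history:
        return {"thought": "Check intercompany balances first",
                "tool": "get_ic_balances", "args": {"entity": "DE01"}}
    if len(history) == 1:
        diff = history[0]["receivable"] - history[0]["payable"]
        return {"thought": f"Difference of {diff} found, simulate an adjustment",
                "tool": "post_adjustment", "args": {"entity": "DE01", "amount": diff}}
    return {"thought": "Goal reached, stop", "tool": None, "args": {}}

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        action = call_llm(goal, history)                        # Reason
        if action["tool"] is None:
            break
        observation = TOOLS[action["tool"]](**action["args"])   # Act
        history.append(observation)                             # Observe
        print(action["thought"], "->", json.dumps(observation))
    return history

if __name__ == "__main__":
    run_agent("Reconcile intercompany differences for entity DE01")
```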
The Model Context Protocol (MCP) addresses the integration and tool layer: it standardizes how LLM applications access external systems and data sources in a secure and traceable manner, including schema definitions, authorizations, governance and performance requirements. In short, MCP is the "USB-C for AI apps" and the difference between an AI that only talks about financial consolidation and one that actually performs it through defined tool calls on real company data. Growing industry support underlines its relevance for enterprise scenarios.
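As a rough illustration, the sketch below shows how consolidation data could be exposed as MCP tools, assuming the official MCP Python SDK (the mcp package with its FastMCP helper). The tool names, entities and figures are placeholders, not a reference implementation.

```python
# Sketch of an MCP server exposing consolidation data as tools,
# assuming the official MCP Python SDK ("mcp" package, FastMCP helper).
# Tool names, entities and figures are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("group-reporting")

@mcp.tool()
def get_ic_balances(entity: str, period: str) -> dict:
    """Return intercompany receivables and payables for an entity and period."""
    # A real server would query the ERP or group reporting service
    # behind proper authorization; here the data is hard-coded.
    return {"entity": entity, "period": period,
            "receivable": 1_050_000, "payable": 1_000_000}

@mcp.tool()
def list_open_items(entity: str) -> list[dict]:
    """Return unreconciled items so an agent can investigate differences."""
    return [{"document": "4711", "amount": 50_000, "counterparty": "FR02"}]

if __name__ == "__main__":
    # stdio transport lets an MCP-capable LLM client discover and call these tools
    mcp.run()
```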
Predictive consolidation analytics
Two practical examples demonstrate the added value that AI and LLMs can bring to companies when the implementation is guided by experts. If, for example, consolidation is reactive and problems are only recognized after they have affected the financial statements, companies need predictive capabilities to anticipate and prevent consolidation issues.
This is where AI-powered predictive consolidation analytics comes into play: ensemble models that combine time series analysis with anomaly detection, together with a monitoring agent that continuously analyzes data quality metrics, predicts likely consolidation errors, recommends preventative measures and tracks the effectiveness of the fixes, deliver substantial improvements.
In practice, 85 percent of consolidation errors were prevented before month-end, emergency corrections were reduced by 90 percent, and closing cycles became 30 to 50 percent faster with more accurate results.
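The sketch below conveys the basic idea of such a monitoring agent in deliberately simplified form: a naive forecast plus z-score anomaly detection on a single data quality metric. The metric, thresholds and sample values are assumptions and stand in for the ensemble models described above.

```python
# Simplified sketch of predictive consolidation monitoring:
# a naive forecast plus anomaly detection on one data-quality metric.
# Metric, thresholds and sample values are illustrative assumptions.
import statistics

def forecast_next(history: list[float], window: int = 3) -> float:
    """Naive moving-average forecast of the next value."""
    return statistics.mean(history[-window:])

def is_anomaly(history: list[float], actual: float, z_threshold: float = 2.0) -> bool:
    """Flag the latest observation if it deviates strongly from history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(actual - mean) / stdev > z_threshold

# Example: daily count of unmatched intercompany items during the close
unmatched_items = [12, 9, 11, 10, 13, 11]
latest = 42  # sudden spike on the current day

expected = forecast_next(unmatched_items)
if is_anomaly(unmatched_items, latest):
    print(f"Expected roughly {expected:.0f} unmatched items, observed {latest}: "
          "raise an alert before the month-end close.")
```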
Autonomous compliance monitoring addresses a second risk area: regulatory requirements vary from jurisdiction to jurisdiction and therefore often demand extensive manual oversight. A specialized LLM trained on regulatory texts and corporate policies, combined with an MCP layer that connects to external regulatory databases as well as internal policy repositories and control frameworks, forms a solution architecture that can significantly reduce this manual effort. On this foundation, workflow automation can be built that monitors regulatory changes, assesses their impact on current processes, recommends appropriate control adjustments and generates compliance reports.
In practice, this reduced the manual workload for compliance teams by 80 percent; in addition, regulatory changes were identified five times faster and compliance rates of over 99 percent were achieved with complete audit trails.
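The following sketch outlines one possible shape of such a monitoring cycle. The feed source, the assess_impact() stub and the report format are assumptions for illustration; in a real setup the stubs would be replaced by an LLM call and MCP connectors to regulatory databases and internal policy repositories.

```python
# Illustrative sketch of an automated compliance-monitoring cycle.
# fetch_changes() and assess_impact() are stubs standing in for MCP
# connectors and an LLM call; all names and data are assumptions.
from dataclasses import dataclass

@dataclass
class RegulatoryChange:
    source: str
    title: str
    jurisdiction: str

def fetch_changes() -> list[RegulatoryChange]:
    """Stub for pulling new items from external regulatory feeds."""
    return [RegulatoryChange("ESMA", "Updated ESEF tagging guidance", "EU")]

def assess_impact(change: RegulatoryChange) -> dict:
    """Stub for an LLM call that maps a change to affected policies and controls."""
    return {"affected_policies": ["Group reporting manual, ch. 7"],
            "recommended_controls": ["Review the XBRL tagging checklist"],
            "severity": "medium"}

def run_monitoring_cycle() -> None:
    for change in fetch_changes():
        impact = assess_impact(change)
        print(f"[{change.jurisdiction}] {change.title} ({change.source})")
        print("  Affected policies:", ", ".join(impact["affected_policies"]))
        print("  Recommended actions:", ", ".join(impact["recommended_controls"]))
        # In practice, each cycle would also be written to an audit trail.

if __name__ == "__main__":
    run_monitoring_cycle()
```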
Conclusion
Artificial intelligence and large language models have the potential to fundamentally transform financial planning and consolidation. However, this requires structured, targeted implementation with specialist expertise.
The real added value does not come from technology alone, but from the intelligent combination of LLMs, business processes and company-specific requirements. Concepts such as agent mode and the Model Context Protocol show how AI can grow beyond simple automation. Companies that want to exploit the full potential need not only modern tools, but also new ways of thinking and strong partners at their side.







