

AI is the defining technological transformation of our time. Large language models offer companies unprecedented opportunities, but they also present new challenges: Where should companies start? How should an AI initiative be structured? And how can the return on investment actually be measured? Beyond the technology itself, companies need a well-founded implementation strategy and, above all, professional expertise to guide them.
AI technologies and large language models (LLMs) are powerful tools, not least for financial planning. The SAP Business Technology Platform is a good example of enterprise AI use, combining several components into one solution. It makes it possible to address real business challenges while meeting security, governance, and scalability requirements.
CFOs are increasing their investments in AI
However, companies face a paradox: although the adoption of AI is progressing rapidly, many are unable to achieve measurable returns. A 2025 study by the Boston Consulting Group of over 280 finance executives shows that, despite 78 percent of CFOs planning to increase their AI investments in the next 12 to 18 months, only 45 percent of companies can successfully quantify the ROI (return on investment) of their AI initiatives. The discrepancy between implementation enthusiasm and realized value highlights an important point: technology alone does not guarantee success.
LLMs need structure
The challenge for companies is that LLMs are so new that established workflows and tool sets are still in their infancy. Conventional software had defined interfaces, predefined functions, and clear operational boundaries. LLMs, on the other hand, offer almost unlimited flexibility—a feature that is as powerful as it is confusing. Without proper structure, this flexibility can lead to inconsistent results, "hallucinations," and frustrated users who expect magic but get mediocrity.
While LLMs represent a fundamental shift in business automation, understanding their nature is critical to implementing them successfully. An LLM is comparable to a brilliant college graduate with remarkable skills: it understands context, processes natural language, recognizes patterns, and generates insights.
However, like any new employee, LLMs need guidance, structure, and the right environment to create added value. In the area of SAP financial consolidation, success or failure often hinges on the implementation strategy, which is why expert advice is crucial.
Deploying enterprise software requires more than installation, and realizing the transformative potential of AI requires a deep understanding of the technology's capabilities and underlying business processes. AI implementation consultants bring proven methodologies, industry benchmarks, and practical experience that shorten the payback period and avoid common mistakes. They act as architects of transformation, designing solutions that align technology capabilities with business goals. Regarding the use of LLMs, consultants' roles are changing from implementers to trainers.
They are highly specialized teachers who understand the potential of LLMs and the business processes required for success. They know how to structure prompts consistently, when to apply different models to specific tasks, and how to create feedback loops that improve performance over time. Most importantly, they understand that success requires more than just technology—it requires new ways of thinking about the work itself.
Goal-oriented workflows
Two important concepts emerge in this educational framework: agent mode and the Model Context Protocol (MCP). In agent mode, AI goes beyond Q&A to goal-oriented workflows. The model maintains context across multiple steps, orchestrates targeted tool calls (e.g., ERP APIs, group reporting services, and database queries), evaluates the results, and adapts its plan iteratively (the ReAct principle: "Reason-Act-Observe"). Context management is crucial.
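The Reason-Act-Observe loop described above can be sketched in a few lines. The following is a minimal illustration, not a production agent: the lookup tool, the entity codes, and the demo balances are hypothetical stand-ins for real ERP APIs or database queries.

```python
def lookup_balance(entity: str) -> dict:
    # Stand-in for a real tool call (e.g. an ERP API or group reporting service).
    demo_data = {"DE01": 120_000, "US02": -120_000}  # illustrative balances
    return {"entity": entity, "balance": demo_data.get(entity, 0)}

TOOLS = {"lookup_balance": lookup_balance}

def run_agent(goal_entities, max_steps=10):
    """Iterate Reason -> Act -> Observe until the goal is met."""
    context = []                    # the agent's working memory across steps
    pending = list(goal_entities)   # Reason: which balances are still missing?
    for _ in range(max_steps):
        if not pending:             # goal reached: all balances gathered
            break
        entity = pending.pop(0)     # Act: choose and issue a tool call
        observation = TOOLS["lookup_balance"](entity)
        context.append(observation) # Observe: feed the result back into context
    return context

results = run_agent(["DE01", "US02"])
total = sum(r["balance"] for r in results)
print(total)  # the two intercompany balances net to 0
```

The key design point is that the loop, not a single prompt, carries the workflow: each observation changes what the agent does next.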
The Model Context Protocol (MCP) addresses the integration and tool layers by standardizing how LLM applications access external systems and data sources securely and in a traceable manner, including schema definitions, authorizations, governance, and performance requirements. In short, the MCP is the "USB-C for AI apps"—the difference between an AI that only discusses financial consolidation and an AI that performs it using defined tool calls with real company data. Growing industry support underlines its relevance for enterprise scenarios.
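To make the "USB-C" analogy concrete, the sketch below mimics the shape of an MCP tool: a descriptor with a name, description, and JSON-Schema input contract, plus a uniform call path that validates arguments before dispatch. It uses plain dictionaries rather than a real MCP SDK, and the elimination tool itself is a hypothetical example.

```python
# Illustrative MCP-style tool descriptor (assumed names, not a real server).
TOOL_DESCRIPTOR = {
    "name": "run_elimination",
    "description": "Run intercompany eliminations for one consolidation unit.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "unit": {"type": "string"},
            "period": {"type": "string"},  # e.g. "2025-06"
        },
        "required": ["unit", "period"],
    },
}

def call_tool(name: str, arguments: dict) -> dict:
    """Validate arguments against the declared schema, then dispatch."""
    required = TOOL_DESCRIPTOR["inputSchema"]["required"]
    missing = [key for key in required if key not in arguments]
    if missing:
        return {"isError": True, "content": f"missing arguments: {missing}"}
    # A real MCP server would now execute the tool against company data,
    # subject to the authorizations and governance mentioned above.
    return {
        "isError": False,
        "content": f"eliminations posted for {arguments['unit']} / {arguments['period']}",
    }

result = call_tool("run_elimination", {"unit": "DE01", "period": "2025-06"})
print(result["content"])
```

Because every tool is described the same way, the LLM application can discover and call any of them without bespoke integration code per system.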
Predictive consolidation analytics
Two practical examples demonstrate the added value that AI and LLMs can bring to companies when implemented by experts. For instance, when companies face the challenge of reactive consolidation, where problems are only recognized after they impact financial statements, they require predictive functions to anticipate and prevent consolidation issues.
AI-powered predictive consolidation analytics integrate ensemble models that combine time series analysis with anomaly detection. They also integrate a monitoring agent that continuously analyzes data quality metrics, predicts likely consolidation errors, recommends preventive measures, and tracks the effectiveness of solutions. Companies deploying this approach report measurable gains.
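A hedged sketch of the anomaly-detection half of such an ensemble: a simple z-score check that flags a balance deviating sharply from its historical pattern before close. The threshold and the sample data are illustrative only; a production system would combine this with time series models and richer data quality metrics.

```python
from statistics import mean, stdev

def flag_anomaly(history, current, threshold=3.0):
    """Return True if `current` deviates more than `threshold` standard
    deviations from the historical mean (a basic z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Twelve months of a stable intercompany balance, then a suspicious spike.
history = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101, 99, 100]
print(flag_anomaly(history, 100))  # False: within the normal pattern
print(flag_anomaly(history, 180))  # True: likely posting error, flag before close
```

Flagging the value before month-end is what turns the reactive pattern described above into a preventive one.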
In practice, 85 percent of consolidation errors were prevented before month-end, emergency corrections fell by 90 percent, and closing cycles ran 30 to 50 percent faster with greater accuracy.
The second example is autonomous compliance monitoring, which reduces risk where regulatory requirements vary by jurisdiction and would otherwise demand extensive manual oversight. A specialized LLM trained on regulatory texts and corporate policies, combined with an MCP server that connects to external regulatory databases as well as internal policy repositories and control frameworks, creates a solution architecture that significantly reduces manual monitoring effort. This framework can automate workflows that monitor regulatory changes, assess their impact on current processes, recommend control adjustments, and generate compliance reports.
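The impact-assessment step of such a workflow can be sketched as a simple mapping from a regulatory change to the internal policies it touches. The keywords, policy IDs, and data model below are hypothetical stand-ins; in the architecture described above, an LLM trained on regulatory texts and MCP connectors to real regulatory databases would do this matching.

```python
# Hypothetical mapping from regulatory topics to internal policy IDs.
POLICY_KEYWORDS = {
    "transfer pricing": "TP-Policy-7",
    "lease accounting": "IFRS16-Control-2",
}

def assess_change(change: dict) -> dict:
    """Map one regulatory change to affected policies and a recommended action."""
    affected = [policy for keyword, policy in POLICY_KEYWORDS.items()
                if keyword in change["summary"].lower()]
    return {
        "jurisdiction": change["jurisdiction"],
        "affected_policies": affected,
        "action": "review controls" if affected else "no action",
    }

report = assess_change({
    "jurisdiction": "DE",
    "summary": "New documentation rules for Transfer Pricing from 2026.",
})
print(report["action"])  # review controls
```

Each assessed change would then feed the downstream steps: recommending control adjustments and generating the compliance report with a full audit trail.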
In practice, this reduced the manual workload for compliance teams by 80 percent. Additionally, regulatory changes were identified five times faster, and compliance rates of over 99 percent were achieved with complete audit trails.
Conclusion
Artificial intelligence and large language models have the potential to transform financial planning and consolidation fundamentally. However, this requires structured, targeted implementation with specialist expertise.
Real added value comes from the intelligent combination of LLMs, business processes, and company-specific requirements, not technology alone. Concepts such as agent mode and the Model Context Protocol demonstrate how AI can evolve beyond basic automation. To exploit the full potential, you need modern tools, new ways of thinking, and strong partners.






