The data quality paradox: Why your AI is only as good as your worst spreadsheet
While AI promises to transform financial planning and analysis, its effectiveness hinges on the quality of the underlying data. This article explores the often-overlooked reality that poor data governance, fragmented systems, and legacy spreadsheets can derail even the most advanced AI initiatives. It outlines the hidden costs of neglecting data foundations, offers guidance on building AI-ready infrastructure, and highlights real-world examples where data quality failures led to disappointing results. As finance leaders look to scale automation, this piece makes the case for focusing on readiness before ambition – a key theme at the upcoming Executive Finance Summit in Zurich.
19.06.2025
There’s a certain inevitability in today’s finance discussions: mention AI, and the room lights up. Talk about cleaning data, and eyes glaze over. Yet the reality is that the second conversation is often the more important of the two, especially if you care about the first.
Organisations are increasingly ambitious in their use of AI for FP&A. From rolling forecasts to anomaly detection, CFOs are investing in technology that promises faster insights and sharper decision-making. But there’s a fundamental truth that continues to be overlooked: your AI is only as good as the data you feed it. And for many organisations, that data still lives in chaotic spreadsheets, inconsistent systems, and siloed databases.
This is the data quality paradox. While AI represents a leap forward, its success depends on something far more basic: the integrity of the underlying information. Ignore this, and you don’t just slow down progress, you introduce risk, undermine trust, and compromise strategic decisions.
As the Executive Finance Summit in Zurich approaches, this is a conversation more finance leaders are beginning to confront head-on.
The illusion of intelligence
The promise of AI in finance is compelling. Smart models that anticipate market movements, flag risks, automate reporting and improve planning accuracy offer clear appeal. But beneath that promise lies a problem that can’t be automated away: messy, fragmented, or outdated data.
The assumption is often that advanced technology will somehow compensate for gaps in data quality. But in practice, AI systems magnify those issues. Machine learning models, for example, are designed to identify patterns. If those patterns are drawn from incomplete or inaccurate records, the outputs will be equally flawed, only dressed up in the appearance of sophistication.
This creates a dangerous illusion. The AI dashboard looks impressive. The variance commentary sounds plausible. But underneath, critical business decisions are being made on the back of compromised inputs.
Rushing ahead of the foundations
The temptation to deploy AI quickly is understandable. Competitive pressure, internal enthusiasm, and vendor influence all push towards implementation. But too often, organisations skip foundational steps in the rush to deploy.
Data integration is a prime casualty. Many finance teams still operate in fragmented environments: planning data in one system, actuals in another, assumptions stored in Excel files on shared drives. There may be multiple versions of truth, disconnected hierarchies, or legacy naming conventions that make cross-functional analysis nearly impossible.
In such cases, introducing AI is like installing a new satellite dish on a crumbling roof. The tool might be powerful, but the structure won’t support it.
The result is predictable. AI models fail to produce reliable outputs. Stakeholders lose confidence. Adoption stalls. What began as a cutting-edge initiative ends up undermining trust in finance as a whole.
The hidden costs of poor data governance
When AI implementations falter, the root cause is rarely the algorithm. More often, it’s a lack of governance around the data feeding it.
These issues are not always visible at first. In some cases, data gaps are small – an outdated exchange rate, a duplicated cost centre, a misclassified expense. But AI models trained on months or years of this information learn the wrong patterns. Over time, those errors compound.
There are other costs too:
Time lost in manual data reconciliation, diverting analysts from value-added work
Erosion of stakeholder trust in finance outputs, especially when AI-generated results differ from known business realities
Increased complexity in tracing model logic when auditability is required
Wasted investment in tools that cannot perform without stable data infrastructure
Poor governance isn’t just a technical inconvenience. It undermines the credibility of the entire planning function.
Building data foundations for AI readiness
Before any serious AI initiative in finance can take root, organisations need to invest in their data foundations. This doesn't mean perfecting every dataset, but it does mean creating the conditions in which automation can thrive.
Key priorities should include:
Centralised, structured data repositories. Move beyond isolated spreadsheets and fragmented databases. A unified planning platform – as we offer here at Apliqo – allows for integrated data structures, consistent naming, and real-time updates across business functions.
Data ownership and stewardship. Assign clear responsibility for data accuracy, completeness, and timeliness. This isn’t solely an IT function. Finance must lead on defining what “good” looks like for planning and performance data.
Metadata and mapping standards. Create consistent hierarchies, naming conventions, and dimensional mappings. AI models rely on structure. Clarity in your data model enables more accurate insights and cleaner integration.
Quality assurance protocols. Introduce regular validation processes to catch anomalies early. AI tools can assist in this, but they require a baseline of reliable information to work from.
Cultural change. Encourage a mindset where data accuracy is everyone’s responsibility. This may involve training, KPIs, or new workflows. But it starts with leadership modelling the value of quality data.
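To make the quality-assurance point concrete, here is a minimal sketch of what automated validation might look like in practice. It is illustrative only: the column names, thresholds, and sample data are hypothetical, and pandas is assumed. The checks mirror the kinds of gaps described earlier: duplicated records, stale exchange rates, and implausible values.

```python
from datetime import date, timedelta

import pandas as pd

# Hypothetical planning extract -- column names are illustrative only.
records = pd.DataFrame({
    "cost_centre":  ["CC-100", "CC-100", "CC-200", "CC-300"],
    "account":      ["Travel", "Travel", "Salaries", "Travel"],
    "amount_local": [1200.0, 1200.0, 98000.0, -50.0],
    "currency":     ["CHF", "CHF", "EUR", "USD"],
})

fx_rates = pd.DataFrame({
    "currency":    ["CHF", "EUR", "USD"],
    "rate_to_chf": [1.00, 0.96, 0.89],
    "updated_on":  [date.today(),
                    date.today() - timedelta(days=3),
                    date.today() - timedelta(days=400)],  # stale
})

issues = []

# 1. Duplicated rows (e.g. the same booking loaded twice).
dupes = records[records.duplicated()]
if not dupes.empty:
    issues.append(f"{len(dupes)} duplicated record(s)")

# 2. Exchange rates older than 30 days.
cutoff = date.today() - timedelta(days=30)
stale = fx_rates[fx_rates["updated_on"] < cutoff]
if not stale.empty:
    issues.append(f"stale FX rate(s) for {', '.join(stale['currency'])}")

# 3. Implausible values (negative expense amounts in this sketch).
negatives = records[records["amount_local"] < 0]
if not negatives.empty:
    issues.append(f"{len(negatives)} negative amount(s)")

for issue in issues:
    print("DATA QUALITY:", issue)
```

Even a handful of lightweight rules like these, run on every load rather than once a quarter, catch problems before a model trains on them.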
Learning from early stumbles
Some of the most useful lessons come from where AI has not lived up to its promise. In several high-profile cases, companies have deployed automated forecasting tools that consistently underperformed manual models – not because the algorithms were flawed, but because the historical data they learned from was full of one-off adjustments, undocumented assumptions, or misaligned periods.
In one example, an organisation used machine learning to generate product-level revenue forecasts across regions. Initial outputs looked promising, but after two planning cycles, the model began producing outliers. The cause? A single outdated spreadsheet with currency conversions that hadn’t been updated in over a year, skewing the data the model was trained on.
In another case, a business tried to use AI to automate driver-based budgeting but failed to account for inconsistent cost allocations between business units. The result was a budget that was mathematically sound but completely unreflective of operational realities.
In both cases, the issue was not the ambition of the project, but the assumption that the data environment was ready for it.
Balancing ambition with readiness
The lesson here is not to abandon AI, but to implement it with awareness. High-performing FP&A teams are not shying away from automation. But they are grounding their efforts in a clear-eyed assessment of readiness.
They recognise that the path to AI runs through data quality, not around it. They invest in platforms that bring structure to planning processes. And they know that credibility in AI-driven outputs is built on the unglamorous work of getting the data right.
At Apliqo, we’re seeing this journey play out across finance teams of all shapes and sizes. The most successful ones are treating data governance not as an obstacle to AI, but as its enabler. They focus on creating systems and cultures where data integrity is protected and where technology can be deployed with confidence.
AI will almost certainly redefine how finance operates in the coming years. But no model, no matter how advanced, can substitute for clean, reliable data. As finance leaders prepare for the next wave of transformation, the question is not just what technology to adopt but whether the organisation is truly ready to support it.
This is one of the central themes we’ll be exploring at the Executive Finance Summit in Zurich on June 25th, where real-world practitioners will share both their breakthroughs and their setbacks. It’s a chance to move beyond the hype, and into the deeper conversation: what does it really take to build an AI-ready finance function?
To join us, apply here.