01/28/2026 | Press release
Artificial Intelligence (AI) has rapidly moved from boardroom buzzword to a core business investment. Yet success is far from guaranteed, especially with Generative AI projects.
IDC data highlights just how wide the AI adoption gap is in reality. Only 11% of organizations report that more than 75% of their AI projects deliver measurable business results. On average, just 45% of AI initiatives globally achieve measurable outcomes. * Hardly an inspiring statistic for leaders under pressure to justify growing AI investments.
And yet, organizations continue to invest, because when AI initiatives succeed, the returns can be substantial. Nearly 55% of organizations estimate a 3-4x return on investment from their GenAI projects, while 14% report 5x returns and 9% report returns greater than 5x. # In other words, successful AI projects are often massively successful, more than compensating for failed pilots.
The challenge rarely lies in the limitations of the technology itself. So what separates the winners from those who continue to struggle? That is the question we explore in this article.
* IDC's Technology Investment and Innovation Monitor: Tech Buyer and IT Spending Outlook and AI/Agent Adoption Survey, September 2025 (n = 894), #US53152725.
# IDC's Market Perspective: Generative AI ROI, Global Survey, June 2025.
One of the most persistent myths surrounding AI is that selecting the "best" technology guarantees success. In reality, most stalled initiatives fail for reasons that have little to do with AI algorithms or models. In our conversations with customers, we often see initiatives struggle because teams focus on AI tools rather than on the core business problems those tools should solve. Without a clearly articulated use case and success criteria, pilots remain interesting experiments that are difficult to justify or extend.
At Fujitsu, we often recommend that organizations start with a clear business use case that needs to be transformed, agree on how success will be measured, and design AI initiatives to move those metrics in the right direction. This clarity allows leaders to evaluate progress objectively and ensure AI is deliberately injected into the appropriate workflows to change outcomes. IDC research also shows that organizations are far more likely to scale AI successfully when initiatives are mapped to clearly defined KPIs from the outset. * Without this discipline, pilots remain isolated experiments, interesting but difficult to justify at scale.
Equally important is accepting that AI adoption is iterative. Organizations will not get everything right on day one. A roadmap that allows for learning, course correction, and scaling over time is far more effective than attempting to engineer perfection upfront.
* IDC's Practically Scaling and Adopting AI/GenAI for Customer Experience, 2025, #US52840325.
Another key variable we have seen act as a differentiator is leadership. At Fujitsu, we urge customers to move beyond small-scale, department-level AI experiments. We recommend treating AI projects like any other large enterprise initiative, giving them the active C-suite sponsorship and coordinated decision-making they deserve. This creates alignment across IT, operations, and the business. In industries such as manufacturing, this leadership model often includes the COO alongside the CIO, reflecting the deep integration required with operations.
IDC research also consistently shows that sustained executive sponsorship is a decisive factor in moving beyond GenAI pilots. * Organizations with active C-suite engagement, shared funding models, and cross-functional governance are significantly more likely to scale AI successfully. When leadership is fragmented or ownership is unclear, even the best technology struggles to deliver outcomes.
* IDC's Spotlight: How Engaged Is the Executive Management Team with the CIO on the Topic of GenAI?, October 2025, #EUR152845125.
Every successful AI initiative rests on trusted data foundations. Fujitsu research consistently shows that data quality, siloed operational data, and integration challenges remain the biggest barriers to AI value, particularly in industries such as manufacturing, where complex, siloed, and air-gapped systems are common.
The objective is not perfect data, but data that is accurate, governed, and accessible. Provenance, interoperability, and governance must be established upfront. When these foundations are in place, AI becomes more predictable, explainable, and scalable. Clean, well-connected data is the engine that enables AI to move organizations towards business and production impact.
Even well-designed initiatives can fail if outcomes are not visible. Operationalizing AI means connecting strategy to execution through dashboards that show adoption, performance, cost, and quality in near real time. Leaders do not need technical detail; they need clarity.
This visibility enables faster decisions: when to scale, refine, or stop an initiative. It also reinforces accountability, ensuring AI investments are managed with the same discipline as any other enterprise program. Strategy without execution remains mere aspiration.
Many organizations introduce AI without redesigning workflows, expecting employees to adopt new tools voluntarily while continuing to follow legacy processes. Unsurprisingly, adoption remains limited. Simply layering AI onto existing processes often leaves its value latent.
Real value emerges when AI is embedded into how work gets done. This requires deliberate change, introducing new steps, removing outdated ones, and making AI-enabled workflows the default rather than the exception. Organizations that invest in training, incentives, and minimum viable process redesigns see adoption accelerate and benefits materialize faster.
Organizations that succeed with AI share a disciplined mindset. They start with business outcomes, assign clear ownership, invest in data foundations, and track progress using dashboards that balance financial and operational metrics.
AI value is not created all at once. Early initiatives may improve localized measures such as productivity or quality. Over time, as adoption scales and models learn, these gains translate into broader operational and financial performance. Continuous learning and course correction are not signs of failure, but essential components of enterprise AI management.
The path forward is pragmatic and actionable: start with the right use case, treat AI like any enterprise initiative, and measure what matters. This is how organizations move beyond pilots and deliver sustained business impact from AI.
This blog builds on discussions from the Fujitsu impact series, designed to help organizations navigate the real-world challenges of enterprise AI. The series brings together practical guidance from Fujitsu experts and IDC guest speakers across six LinkedIn Live episodes and supporting content, addressing topics from adoption and trust to agentic orchestration, data sovereignty, security, and value realization. Start your journey here: https://mkt-europe.global.fujitsu.com/FujitsuImpactSeries