Fair Isaac Corporation


2025 Predictions for AI and GenAI: The Golden Age of AI – and How to Keep the Goose Golden

There's no doubt about it--2025 feels like we are entering the golden age of artificial intelligence (AI), as businesses' intense investments in AI and generative AI (GenAI) are yielding bona fide gains in productivity and revenue growth. So how do we get the golden goose (the one in Aesop's fable, not the curiously expensive sneakers) to keep laying eggs of AI gold? In 2025, I believe that companies, energized by the right motivations and empowered with the right tools, will achieve reliable, repeatable results with AI and GenAI by operationalizing these technologies and demonstrating their trustworthiness. Here are my AI and GenAI predictions for the coming year:

Companies Will Figure Out That All AI Is Not GenAI

As AI's golden age settles in, companies will get serious about maximizing the business value of further AI investments. When analyzing the business problems they want to solve, companies will apply their AI experience and learnings to determine which business opportunities are best enabled by GenAI, and which ones are more appropriate for traditional AI techniques and interpretable machine learning--hello, heavily regulated areas and critical decision making! In fact, more than 80% of all AI systems in production today are not Generative AI.

So, even though organizations may now have a proverbial tool chest of AI capabilities, they should not try to drive a wood screw with a ball-peen hammer. Choosing the right AI tools takes data scientists who deeply understand the business and operationalization requirements at hand, and who can assess the relative strengths and weaknesses of AI and GenAI for each. "Human in the loop" processes will critically evaluate whether a business problem needs deterministic decisioning, interpretable machine learning, auditability, or other deployment requirements. Choosing the correct AI or GenAI path will be key to successful AI investments.

Operationalizing AI Is a Challenge--but It Will Get Easier

"Operationalization" doesn't exactly roll of the tongue. But this concept, which means "turning abstract conceptual ideas into measurable observations," can be systematically achieved when AI and GenAI systems are implemented in a repeatable, measurable way--through an optimized combination of people, process and technology.

Many companies don't know how to operationalize AI, or where to start. In my role at FICO, I have developed five operationalization principles that form the framework of any AI deployment. The short version is below; the unabridged text can be found in Forbes Tech.

  1. Start with a world-class team: AI deployment needs to be strategized and delivered by world-class data science professionals with not just academic credentials, but the real-world experience to properly apply AI to solve business problems--which are always more complex than they seem on the surface.
  2. Invent AI to meet the target industry or need: As a corollary, beyond the data science team, the entire organization must be focused on using AI solutions for high-value business problems. This includes first quantifying the business need; often, addressing it may require inventing new algorithms.
  3. Build AI algorithms for efficient software deployment: Successful AI deployment requires machine learning/AI scientists to be involved in the software design process, providing requirements and understanding the constraints being applied to their algorithms.
  4. Deliver low-latency, high-throughput execution: Well-architected cloud computing resources can operationalize AI decisioning systems to meet business needs. Real-time performance demands require careful understanding of the entire chain of data flow, preprocessing, streaming analytics, NoSQL storage, system redundancy and other assets (a minimal sketch of this pattern follows this list).
  5. Relentlessly pursue Responsible AI: For artificial intelligence technology to provide business value, it must be ethical, explainable and auditable. These are the key concepts of Responsible AI, a set of principles and practices that operationalize artificial intelligence deployment to achieve high-impact business results within important ethical and legal boundaries.
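
To make principle 4 concrete, below is a minimal Python sketch of the low-latency pattern it describes: features are precomputed upstream (for example, by streaming analytics) and held in memory, so the request path is a single lookup plus a model call. The feature store, model and customer ID here are hypothetical stand-ins, not FICO components.

    import time

    # Hypothetical in-memory feature store: customer profiles are precomputed
    # upstream by streaming analytics, so the request path does no heavy I/O.
    FEATURE_STORE = {"cust-42": {"txn_7d": 12, "avg_amount": 83.5}}

    def load_model():
        # Stand-in for a real model, loaded once at startup rather than per request.
        return lambda f: 0.01 * f["txn_7d"] + 0.001 * f["avg_amount"]

    MODEL = load_model()

    def score(customer_id: str) -> float:
        """Request path: one dict lookup plus one model call, no disk or network."""
        features = FEATURE_STORE.get(customer_id)
        if features is None:
            raise KeyError(f"no precomputed profile for {customer_id}")
        return MODEL(features)

    start = time.perf_counter()
    result = score("cust-42")
    print(result, f"{(time.perf_counter() - start) * 1e6:.0f} microseconds")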

Accountability and explainability are central to Responsible AI, and can be achieved by using immutable blockchain technology to codify every aspect of AI model development. The same AI governance blockchain can be used to define operating metrics and monitor the AI in production, allowing value to be attained outside the data science lab.
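
As an illustration of that governance idea, here is a tiny append-only, hash-chained ledger in Python: each model-development event commits to the hash of the entry before it, so any retroactive edit breaks the chain and is detectable on verification. It is a simplified stand-in for a real blockchain, and the event fields are hypothetical.

    import hashlib
    import json
    import time

    class GovernanceLedger:
        """Append-only, hash-chained log of model-development events."""

        def __init__(self):
            self.entries = []

        def record(self, event: dict) -> str:
            # Each entry commits to the previous entry's hash.
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            payload = json.dumps(event, sort_keys=True)
            entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            self.entries.append({"ts": time.time(), "event": event,
                                 "prev": prev_hash, "hash": entry_hash})
            return entry_hash

        def verify(self) -> bool:
            # Recompute every hash; any tampered entry breaks the chain.
            prev = "0" * 64
            for e in self.entries:
                payload = json.dumps(e["event"], sort_keys=True)
                expected = hashlib.sha256((prev + payload).encode()).hexdigest()
                if e["prev"] != prev or e["hash"] != expected:
                    return False
                prev = e["hash"]
            return True

    # Hypothetical development events for a hypothetical model.
    ledger = GovernanceLedger()
    ledger.record({"step": "train", "model": "fraud-v3", "auc": 0.91})
    ledger.record({"step": "approve", "reviewer": "model-governance-board"})
    print("chain intact:", ledger.verify())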

Data Domain-specific GenAI Use Cases Will Flourish

Companies that are serious about using large language models (LLMs) and other generative techniques in a responsible, value-based way will do so from a foundation of Responsible AI--which starts with mastering your own data. I've long held this opinion and am delighted to see that MIT and McKinsey & Co. researchers have empirically found it to be true, stating unequivocally in Harvard Business Review: "For companies to use AI well, they need accurate, pertinent, and well-organized data."

In 2025, GenAI programs will be based on actively curated data that is relevant to specific business domains. Companies will curate and cleanse the data an LLM should learn from, and remove the huge amounts of data it shouldn't. This is a first step toward responsible use and business value: training data must be representative of the decisions that will be based on it. Companies will differentiate themselves on their data strategies for LLM creation; after all, an LLM is only a rendering of the data on which it was built.
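
As a toy illustration of that curation step, the Python sketch below deduplicates documents and keeps only those matching a hypothetical domain lexicon. Real pipelines would use far richer relevance and quality models, but the shape of the filter is the same.

    import hashlib
    import re

    # Hypothetical domain lexicon; in practice this comes from curated
    # business glossaries and subject-matter experts.
    DOMAIN_TERMS = {"chargeback", "underwriting", "credit", "dispute", "apr"}

    def curate(documents):
        """Yield deduplicated documents that look relevant to the target domain."""
        seen = set()
        for doc in documents:
            text = re.sub(r"\s+", " ", doc).strip().lower()
            digest = hashlib.sha256(text.encode()).hexdigest()
            if digest in seen:  # drop exact duplicates
                continue
            seen.add(digest)
            tokens = set(re.findall(r"[a-z]+", text))
            if len(tokens & DOMAIN_TERMS) >= 2:  # crude relevance gate
                yield doc

    docs = ["Credit dispute and chargeback workflow overview",
            "Credit dispute and chargeback workflow overview",  # duplicate
            "Celebrity gossip roundup"]
    print(list(curate(docs)))  # only the first document survives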

Companies Will "Roll Their Own" Small and Focused Language Models

Furthermore, 2025 will see more and more companies building their own small language models (SLMs). We will see a rise in focused language models (FLMs) that address LLMs' most undermining weakness--hallucination--with a corpus of specific domain data and knowledge anchors to ensure that task-based FLM responses are grounded in truth. These same FLMs will help legitimize Agentic AI applications, which are still in their infancy but likewise require laser-focused, task-specific language models operating with high degrees of accuracy and control.
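
To show one way knowledge anchors can keep a task-specific model grounded, here is a deliberately crude Python sketch: any quantity an answer asserts must match a vetted anchor value. The anchor table and string-matching check are illustrative assumptions; production systems would rely on retrieval and entailment models.

    import re

    # Hypothetical knowledge anchors: vetted facts that answers must agree with.
    KNOWLEDGE_ANCHORS = {
        "grace_period_days": "21",
        "dispute_window_days": "60",
    }

    def grounded(answer: str) -> bool:
        """Crude grounding check: every quantity the answer asserts must be an
        anchored value; anything else is treated as a potential hallucination."""
        numbers = set(re.findall(r"\d+", answer))
        return numbers <= set(KNOWLEDGE_ANCHORS.values())

    print(grounded("You have 60 days to file a dispute."))  # True
    print(grounded("You have 90 days to file a dispute."))  # False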

Widespread use of FLMs can create another positive result: reducing the environmental impact of GenAI. According to industry estimates, a single ChatGPT query consumes between 10 and 50 times more energy than a Google search query. At a higher level, the United Nations' most recent Digital Economy Report suggests that data centers run by Google, Amazon, Meta, Apple and Microsoft (GAMAM) alone consumed more than 90 TWh of energy (terawatt-hours; 1 TWh equals 1,000 gigawatt-hours), more than entire countries such as Finland, Belgium, Chile or Switzerland use. As companies look for ways to achieve sustainability goals other than buying carbon credits, FLMs can make a meaningful impact while delivering better results for businesses.

AI Trust Scores Will Make It Easier to Trust GenAI

AI trust scores, such as those associated with FLMs, will make it easier to use GenAI with confidence. A secondary, independent, risk-based AI trust score--together with strategies based on it--allows GenAI to be operationalized at scale with measurable accuracy.

AI trust scores reflect three things:

  1. The probability that key contextual data (such as product documentation) on which the task-specific FLM was trained is used to provide the answer.
  2. The AI trust model's confidence that the FLM's output rests on sufficient statistical evidence. LLMs work on probability distributions, and if there is not enough training data to create a statistically significant distribution or a set of viable alternatives, the AI trust model will not be confident in the answer.
  3. Alignment with knowledge anchors--that is, alignment with true facts rather than just data. Truth-versus-data is one of the most tenacious challenges with LLM technology, and with AI overall.

AI trust scores can be operationalized in a risk-based decisioning system, letting businesses decide whether to trust an FLM's answer--a true operationalization strategy.
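
Putting the three signals and the risk strategy together, the Python sketch below combines them into a single score and maps the score to an action. The weights and thresholds are illustrative assumptions, not FICO's published method.

    def trust_score(p_context: float, p_confidence: float, p_anchor: float) -> float:
        """Combine the three signals above into one score in [0, 1].

        Weights are illustrative assumptions for this sketch only.
        """
        return 0.4 * p_context + 0.3 * p_confidence + 0.3 * p_anchor

    def decide(score: float) -> str:
        # Risk-based strategy: trust the answer, escalate, or withhold it.
        if score >= 0.85:
            return "auto-approve answer"
        if score >= 0.60:
            return "route to human review"
        return "withhold answer"

    print(decide(trust_score(0.95, 0.90, 0.88)))  # auto-approve answer
    print(decide(trust_score(0.70, 0.40, 0.55)))  # withhold answer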

Looking Forward and Looking Back

I am stoked to see how my AI and GenAI predictions play out in the Golden Age of AI. In looking back, my 2024 AI Predictions held pretty true:

  • Auditable AI Will Make Accountability Cool: Yes--the idea of using blockchain for model development management gained significant visibility and traction in 2024.
  • Small Will Be Beautiful: Yes--new, smaller approaches to LLM functionality, such as FLMs, have sprung up to temper the unwieldiness of LLMs.
  • Humans Will Reassert Themselves: Yes--as chatbot and other GenAI errors became increasingly evident in 2024, drawing criticism, even Gartner provided guidance on when GenAI should not be used.

How FICO Can Help You with Responsible AI