While boardrooms chase the next agentic breakthrough, many FinTechs and banks are discovering a more basic truth: enterprise AI only pays dividends when it is fenced in by risk controls, clean data and a clear use-case roadmap. The new race in enterprise AI, a category largely overshadowed by the current love affair with agentic AI, is less about who can sprint fastest toward experimental models and more about who can hard-wire governance into systems that already move trillions of dollars a day.

It is a message being amplified as the big, traditional data companies like SAS, SAP, IBM and Microsoft navigate the hyperspeed track that is AI in all its forms. Not surprisingly, it was the message at the recently wrapped SAS Innovate, where the analytics vendor’s senior vice president for risk, fraud and compliance, Stu Bradley, urged bankers and insurers to “bring a toolbox” instead of a single hammer. The past 18 months, he said, show what happens when hype cycles outrun risk discipline.

“We saw business cases getting approved solely because they mentioned generative AI,” Bradley recalled. “Companies were force-fitting technology to every problem they had — and the returns simply didn’t meet expectations.”

Bradley talked to PYMNTS after the event to reinforce the message that the industry’s fascination with glossy demonstrations has obscured a basic obligation: determining whether a model belongs in production at all. “Part of being the adult in the room,” he said, “is having honest conversations around a composite approach to AI. We need to match the right technique to the right business problem and expected result.”

By The Numbers

At its event, SAS debuted its own study of 1,600 international banks and found near-universal GenAI adoption: 99% of surveyed executives report some degree of GenAI implementation. However, many institutions have struggled to realize tangible returns: More than half of executives indicate their early GenAI initiatives yielded limited or no financial benefit.

At the same time, while GenAI is bolstering fraud detection, criminals are wielding it to create deepfakes and synthetic identities that evade conventional detection methods. Nearly 80% of surveyed executives expect cyberattacks, fraud and financial crimes to have major operational impacts in the decade ahead, underscoring firms’ need for advanced, AI-powered defenses supported by robust data management and governance frameworks.

“If the data is inaccurate, incomplete or biased, you’re only propagating those issues downstream,” Bradley said. It’s plain talk at a time when venture-backed disruptors often treat data prep as an afterthought. SAS’s own answer is an end-to-end data and AI life-cycle framework that traces lineage, flags bias and enforces responsible-innovation review gates.

That rigor is becoming table stakes. European authorities have begun handing down fines for “explainability” failures, and U.S. examiners are scrutinizing source-data documentation as closely as model code. Bradley’s caution: “Organizations forgot about the data aspects” during the generative-AI gold rush. They can’t afford to forget again.

A recent PYMNTS Intelligence report, The CAIO Report, found more optimism. It explored how CFOs are reaping the benefits of GenAI and where they continue to face challenges. The exclusive report, an edition of the CAIO Project, is based on a survey, conducted from Dec. 5 to Dec. 13, of 60 CFOs at U.S. firms that generated at least $1 billion in revenue last year.

The study found GenAI is delivering impressive returns for CFOs: As of December, nearly 90% of surveyed CFOs reported a “very positive” ROI from the technology, up sharply from just 26% in March 2024. That shift underscores growing confidence in GenAI’s ability to deliver measurable value, and as CFOs see tangible improvements, the use of GenAI has expanded across a broader range of business functions.

The Banking Wish List

SAS’s financial services clients are no longer asking for mystical algorithms; they are asking for speed and security. “The businesses are screaming for agility,” Bradley said, noting a surge of interconnected risks that demand near-real-time response, from deposit flight to synthetic-ID fraud. Yet bank IT departments are hamstrung by years of piecemeal vendor integrations. “You deployed one system for credit-card fraud, another for digital payments, another for AML,” Bradley said. “The integration has become cumbersome.”

The consequence is what Bradley calls a coming “great IT rationalization.” Institutions want fewer vendors and more modular platforms that can support risk, fraud and balance-sheet analytics without replumbing data pipelines each time. SAS’s own roadmap — integrated balance-sheet management and an “enterprise decisioning” layer — aims to collapse siloed engines into a single risk core that chief risk officers (CROs) and CMOs can tap simultaneously.

Another lesson from the hype hangover: Executives won’t sign off on AI budgets without a crystal-clear application. Bradley said SAS now insists on “applications for very specific use cases” rather than delivering naked technology. Examples include stress-testing capital in a volatile rate environment and adjudicating every customer transaction, from onboarding to collections, through a unified decisioning fabric.

That focus on time-to-value is forcing even the largest vendors to behave more like software-as-a-service specialists. “Our customers keep telling us, ‘We need faster returns,’” Bradley said. SAS has responded by embedding thousands of staff-years of domain expertise into pre-configured modules, effectively turning once-bespoke projects into shrink-wrapped products.

None of those ambitions matter if institutions cannot satisfy regulators cost-effectively. The rising price of compliance is especially painful for community banks, Bradley warned, because new mandates “aren’t scaled to the size of the institution.” His advice: stop buying point solutions for each rule change and start drafting a long-term compliance architecture that marries risk, fraud and AI governance under one roof.

“It’s not about a risk architecture or a fraud architecture alone,” he said. “AI is here to stay, in whatever variation emerges, so you need an AI-based architecture on top of which solutions can be built that sustain compliance and manage cost.”

Start-ups can still move fast, but the institutions controlling the capital pools increasingly favor partners who can prove diligence as well as innovation. In Bradley’s view, the next winners in financial-services AI will be those who slow down long enough to do it right — auditing their data, rationalizing their tech stacks and embedding risk checks into every model push.

“A toolbox, not a hammer,” he repeated, summing up a philosophy that feels almost countercultural amid Silicon Valley’s race for the shiniest object. Yet it may be exactly what keeps banks ahead of the real risk: deploying AI systems they can’t control.
