Speaking at a press conference following the inauguration of a new State Bank of India office in Pune, Nirmala Sitharaman remarked that “The challenge posed by advanced AI is fundamentally different from what banks have successfully managed so far.” The statement reflects a deeper structural shift underway in global finance. For decades, banks have relied on layered controls to manage fraud, cyber threats and operational risk. Artificial intelligence, particularly in its generative and autonomous forms, is now altering the threat architecture itself, rendering many traditional safeguards insufficient.
This is not an incremental evolution. It is a transition from static risk management to dynamic, adversarial intelligence.
The New Threat Architecture
Indian banking has long demonstrated resilience under the oversight of the Reserve Bank of India. Even as digital adoption surged, systemic disruptions remained limited. Yet the scale today is unprecedented. Unified Payments Interface transactions touched nearly 22.6 billion in March 2026 alone, reflecting both opportunity and exposure.
AI-driven threats are fundamentally different from conventional cyber risks. They are adaptive, capable of learning from defence mechanisms and replicating human behaviour with high fidelity. Deepfake-led social engineering, AI-generated phishing campaigns and automated malware are no longer experimental risks but active vectors.
More concerning is the emergence of advanced frontier AI systems, such as Anthropic's Claude models. These systems are capable of identifying zero-day vulnerabilities and generating synthetic identities at scale. In a financial ecosystem, this translates into the ability to bypass authentication layers, manipulate transaction flows and potentially distort credit signals.
Recent estimates indicate that AI-amplified fraud attempts rose sharply in 2025, with social engineering accounting for a majority of successful breaches. The implication is clear. The attacker’s learning curve is now faster than the defender’s response cycle.
What Indian Banks Are Doing
Indian banks have responded with urgency, though the scale of transformation varies. Institutions such as HDFC Bank and SBI are deploying AI-driven fraud detection systems, behavioural analytics and real-time monitoring frameworks across payments and lending operations.
There is a marked shift towards strengthening Security Operations Centres (SOCs) with AI-enabled threat intelligence. Data integration across silos, combining transactional, behavioural and network-level signals, is becoming central to risk detection.
At the industry level, the Indian Banks’ Association is working towards a real-time threat intelligence-sharing platform involving banks, NPCI and national cybersecurity agencies. The objective is to move from isolated detection to ecosystem-wide response.
Regulatory interventions have reinforced this direction. The RBI has emphasised cyber resilience, zero-trust architectures and board-level oversight of technology risk. Yet, as Sitharaman cautioned, existing frameworks, however robust, may not be sufficient in the face of AI's speed and opacity.
Learning from Global Banking Systems
Global banking systems offer some lessons in navigating this transition. In the United States, institutions such as JPMorgan Chase are integrating AI risks into model risk management frameworks, supported by regulatory guidance from agencies including the Federal Reserve and FDIC. These frameworks emphasise continuous monitoring, phishing-resistant authentication and explainability in AI-driven decisions.
In the United Kingdom, regulators are piloting AI stress testing within financial institutions, recognising that traditional stress scenarios do not adequately capture algorithmic risk. A significant proportion of banks now use AI for fraud detection while simultaneously investing in explainability and auditability to manage model risk.
Singapore provides perhaps the most structured approach. The Monetary Authority of Singapore’s FEAT principles mandate fairness, ethics, accountability and transparency in AI deployment. Banks are required to maintain AI inventories, assess risk materiality and ensure human oversight across the lifecycle of AI systems.
These models share a common theme. AI risk is being treated not as a subset of cybersecurity, but as a core element of financial stability.
From Defence to Predictive Risk Architecture
The next phase of resilience will depend on moving beyond reactive defence. AI must be embedded within risk architecture itself, enabling predictive detection and pre-emptive response.
This requires a shift at multiple levels. Governance structures must evolve, with boards directly overseeing AI risk. Institutions must invest in continuous red-teaming and simulation exercises to test system vulnerabilities. Vendor ecosystems, often the weakest link, must be brought within the ambit of rigorous audits.
There is also a strong case for national-level coordination. Proposals for an AI-cyber fusion framework, integrating financial institutions, regulators and intelligence agencies, could significantly enhance predictive capabilities. As financial systems become more interconnected, risk management must become equally integrated.
Persistent Risks and Strategic Imperatives
The persistence of AI-driven threats introduces a new dimension of systemic risk. Unlike episodic cyberattacks, AI-enabled threats operate continuously, learning and adapting with each interaction. This creates a moving target for defence mechanisms.
At the same time, AI remains a dual-use technology. A majority of global banks are already leveraging AI to strengthen fraud detection, credit assessment and customer engagement. The challenge lies in balancing innovation with control.
For Indian banking, the stakes are particularly high. With digital infrastructure supporting trillions of dollars in annual transaction value, even marginal vulnerabilities can have disproportionate impact.
From Compliance to Strategic Trust
The transition underway is not merely technological. It is philosophical. Banking is moving from a compliance-driven model of risk management to one anchored in strategic trust.
Customers will increasingly evaluate institutions not just on service efficiency, but on their ability to safeguard financial integrity in an AI-driven environment. Trust, once compromised, is far more difficult to rebuild than capital.
Sitharaman’s warning should therefore be interpreted as a call for acceleration. The question is no longer whether banks can manage AI risk using existing frameworks. It is whether they can redesign those frameworks in time.
The institutions that succeed will be those that treat AI not as an operational tool, but as a strategic domain of risk, governance and competitive advantage. In doing so, they will define not only their own resilience, but the future architecture of trust in finance.
