Cybercriminals are exploiting Google’s AI tool, Gemini, in a sophisticated scam that hides malicious instructions within emails, tricking users into revealing passwords and sensitive information. This “invisible” threat, which leverages prompt injection techniques, underscores the vulnerabilities in AI-integrated email systems. Based on recent analyses from cybersecurity experts and reports, here’s a detailed rehash of the issue, incorporating insights from verified sources like Google’s Threat Intelligence and independent researchers.
The Core Threat
Attackers embed concealed commands using techniques like white text, zero-font formatting or CSS styling in emails that appear benign or official. These prompts are invisible to users but activate when Gemini is asked to summarize the email. The AI then produces fabricated warnings, such as claims of a password breach, urging victims to contact fake support or visit phishing sites.
With Gmail’s 1.8 billion active users worldwide, the scam’s potential reach is massive. It exploits “indirect prompt injection,” where hidden text overrides Gemini’s normal processing, generating deceptive outputs like urgent security alerts. Researchers from Mozilla’s 0Din team, including Marco Figueroa, have demonstrated how such manipulations can turn AI helpers into unwitting phishing tools. This builds on broader trends, with AI-driven scams increasing by 30% in 2025, often bypassing traditional defenses.
Anatomy of the Attack
The scam unfolds methodically: First, hackers send emails mimicking trusted entities, embedding the hidden prompts in the message’s HTML or body. When a user queries Gemini for a summary, the AI prioritizes the concealed instructions, creating phishing content on the fly. This might include fake alerts directing users to call scammer-run “helplines” or click links that lead to credential theft or malware installation. Finally, stolen data enables account takeovers, with groups like Russia’s UNC6293 known for similar tactics targeting app-specific passwords and evading multi-factor authentication.
Global phishing damages now exceed $5 billion annually, with average per-incident costs around $150, highlighting the scam’s financial toll.
Expert Warnings and Broader Insights
Security firms warn that prompt injection represents a new frontier in AI exploitation, with Google’s own Threat Intelligence Group noting its use in campaigns impersonating officials. “This turns AI into a weapon against users,” according to a Proofpoint report, which emphasizes Gemini’s current lack of robust filters against such abuses. In India, where over 500 million people use Gmail, CERT-In has issued advisories linking this to a 25% rise in AI-related cyber incidents in Q2 2025.
Google clarifies it never requests passwords via email or AI and is actively enhancing defenses, but experts stress the need for user-driven caution.
Protective Strategies
To defend against this, prioritize these expert-recommended steps: Enable Google’s Advanced Protection Program to block high-risk access and restrict third-party apps enroll at g.co/advancedprotection. Switch to passkeys for biometric authentication, eliminating traditional passwords; set them up via myaccount.google.com/security. Manually inspect suspicious emails by viewing the source code to spot hidden elements and use browser extensions like uBlock Origin for added filtering. Always verify support through official Google channels, avoid summarizing untrusted emails with AI, and report phishing directly in Gmail.
By adopting these measures drawn from sources like Verizon’s DBIR and Google’s security guidelines you can significantly reduce risks. This scam reveals AI’s dual nature: innovative yet vulnerable. Stay vigilant by monitoring account activity and consulting resources like CERT-In or Google’s Security Blog for updates.