DMR News


ChatGPT Used by Phishing Scammers to Steal Banking Logins

By Yasmeeta Oon

Jul 8, 2025


Large language models (LLMs) like ChatGPT have already been used for various questionable activities—ranging from political propaganda and cheating on academic work to generating images for scam campaigns. Now, researchers have revealed a new concern: these AI tools might inadvertently help phishing scammers by directing users to fraudulent login pages.

How Phishing Works and AI’s Role

Phishing is a cyberattack technique where criminals trick victims into submitting sensitive data like passwords or credit card numbers. Often, this involves fake emails or websites designed to look authentic. Cybersecurity firm Netcraft has demonstrated how ChatGPT and similar AI models can contribute to this problem.

Using GPT-4.1, the OpenAI model that powers Microsoft’s Bing AI and Perplexity, Netcraft tested queries asking for the login URLs of 50 companies across sectors such as finance, retail, and tech. Shockingly, the model supplied the correct URL only 66% of the time. Nearly a third of the suggested links pointed to dead or suspended sites, and another 5% led to legitimate websites, just not the ones requested.
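
For readers curious how such a test might look in practice, here is a minimal Python sketch. It is illustrative only: the prompt wording, model choice, and brand list are assumptions, not Netcraft’s published methodology.

```python
# Minimal sketch of the style of test described above: ask a model for each
# brand's login URL and record what comes back. Illustrative only; the prompt,
# model name, and brand list are assumptions, not Netcraft's actual harness.
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BRANDS = ["Wells Fargo", "Apple", "Google"]  # stand-ins for the 50 tested

def suggested_login_url(brand: str) -> str:
    """Ask the model where to log in to a brand; return its one-line answer."""
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{
            "role": "user",
            "content": f"I lost my bookmark. What is the official login URL "
                       f"for {brand}? Reply with the URL only.",
        }],
    )
    return response.choices[0].message.content.strip()

for brand in BRANDS:
    print(brand, "->", suggested_login_url(brand))
```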

The Danger of Unclaimed Domains

Netcraft warns that hackers could register these inactive domains and turn them into phishing traps. Users trusting AI-generated suggestions could easily be misled. As the researchers put it, this “opens the door to large-scale phishing campaigns indirectly endorsed by user-trusted AI tools.”
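
The underlying check is simple: a hostname that does not resolve in DNS may be unregistered, and therefore available for an attacker to claim. A rough standard-library sketch of that check follows; the sample URLs are hypothetical.

```python
# Check whether AI-suggested hostnames actually exist in DNS. A name that
# fails to resolve may be unregistered, i.e. available for an attacker to
# claim. The URLs below are hypothetical examples.
import socket
from urllib.parse import urlparse

suggested = [
    "https://examplebank.com/login",           # plausibly legitimate
    "https://secure-examplebank-login.com/",   # plausibly unregistered
]

for url in suggested:
    host = urlparse(url).hostname or ""
    try:
        socket.gethostbyname(host)
        print(f"{host}: resolves")
    except socket.gaierror:
        print(f"{host}: does NOT resolve -- could be claimed by an attacker")
```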

This is more than theoretical. The team found a real case where Perplexity directed users to a fraudulent Wells Fargo login page, pushing the authentic link down the list.

Mid-sized firms, such as regional banks, credit unions, and fintech companies, appeared particularly vulnerable, while household names like Apple and Google were less affected.

Security experts emphasize verifying URLs carefully before entering personal information. Because chatbots can “hallucinate” and return inaccurate answers, any URL or advice they provide should be double-checked against a trusted source before you act on it.
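
One practical habit is to treat any chatbot-supplied URL as untrusted input and compare its hostname against domains you already know. A minimal sketch, with an illustrative allowlist:

```python
# Minimal defensive check: accept a URL only if its hostname is, or is a
# subdomain of, a domain you already trust. The allowlist is illustrative;
# in practice, rely on your own bookmarks or an independently verified address.
from urllib.parse import urlparse

KNOWN_DOMAINS = {"wellsfargo.com", "apple.com", "google.com"}

def looks_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in KNOWN_DOMAINS)

print(looks_official("https://connect.secure.wellsfargo.com/"))  # True
print(looks_official("https://wellsfargo.login-example.com/"))   # False
```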

What The Author Thinks

While AI tools like ChatGPT bring tremendous benefits in automating and enhancing digital experiences, they also expose new vulnerabilities. The ability of AI to generate convincing but incorrect information means users must remain vigilant. Technology companies should implement stricter safeguards to prevent misuse, and users need to maintain cautious skepticism when interacting with AI, especially regarding sensitive matters like banking.


Featured image credit: Graphics Stocks via Vecteezy


Yasmeeta Oon

Just a girl trying to break into the world of journalism, constantly on the hunt for the next big story to share.
