AI-driven cyberattack: Google disrupts attempt to exploit an unknown vulnerability
Google says it disrupted a criminal group’s attempt to use artificial intelligence to exploit a previously unknown digital vulnerability. The company provided limited details on the attackers and target, but its threat intelligence team said the incident reflects long-flagged cybersecurity risks as AI tools help malicious hackers identify and exploit weaknesses faster.
Google said on Monday that it stopped a criminal group from using artificial intelligence in a planned cyberattack. The company said the attackers aimed to exploit a previously unknown security weakness in another firm’s software. The episode added to wider concern in government and business about how AI could increase cybersecurity threats.

John Hultquist, chief analyst at Google’s threat intelligence arm, said the case matched long-standing warnings that hackers could use AI to speed up break-ins and scale attacks. "It's here," Hultquist said. "The era of AI-driven vulnerability exploitation is already here."
AI cybersecurity risks grow as Google reports zero-day exploit attempt
Google said it saw prominent threat actors preparing a major operation around a bug they had discovered. The weakness let them bypass two-factor authentication in a popular online system administration tool, which Google did not name. Google described the plan as a zero-day exploit.
A zero-day attack uses a security flaw that defenders do not yet know about; the term reflects that engineers have had no time to issue a fix. Google said it alerted the affected company and disrupted the operation before any damage occurred.
While tracking the group’s activity, Google said it found signs of AI support: a large language model, the technology that also powers many chatbots, helped identify the vulnerability. Google did not name the model used in the attempted attack.
Google said the tool was probably not its own Gemini or Anthropic’s Claude Mythos. It also did not identify the suspected criminal group and said it saw no proof of links to an adversarial government, though it added that groups tied to China and North Korea had explored similar methods.
AI cybersecurity oversight debates sharpen after Anthropic’s Mythos model
The case emerged as AI systems improved at finding security flaws. A month earlier, Anthropic had announced a model called Mythos that it said was so capable at hacking tasks that access to it was restricted. The announcement increased pressure for stronger oversight of advanced AI.
President Donald Trump’s White House also changed how it planned to check top AI models before release, a shift that followed a campaign pledge to repeal Democratic President Joe Biden’s AI guardrails. Since then, signals from the Republican administration and its allies were mixed: some supported a bigger federal role, while others resisted it.
"Some people don't want there to be a regulatory response to this, and others do," said Dean Ball, a senior fellow at the Foundation for American Innovation, who previously served as a White House tech policy adviser and helped write Trump's AI policy roadmap last year. "I don't like regulation," Ball said. "I would prefer for things not to be regulated. But I think we need to in this case."
Trump’s Commerce Department announced last week that it had signed new agreements with Google, Microsoft and Elon Musk’s xAI to evaluate their strongest AI models before public release. The plan built on earlier Biden-era agreements with Anthropic and OpenAI. The announcement later disappeared from the Commerce Department website.
AI cybersecurity race raises stakes for ransomware and extortion
Hultquist said criminal hackers could benefit more than government spy teams from AI speed: spies often move slowly to avoid detection, while criminal groups may seek quick access for extortion or ransomware. "There's a race between you and them to stop them before they can essentially get whatever data they need to extort you with, or launch ransomware," Hultquist said in an interview. "AI is going to be a huge advantage because they can move a lot faster."
Anthropic also launched Project Glasswing to reduce AI-linked security fallout, bringing together Amazon, Apple, Google, Microsoft and JPMorgan Chase, among others. Anthropic said the goal was to protect critical software from risks to public safety, national security and the economy.
Anthropic’s relationship with the US government remained complicated: the company faced a public and legal fight with the Pentagon, a dispute that also involved Trump and military uses of AI. Meanwhile, OpenAI introduced a similar security-focused model.
OpenAI said on Friday that it was releasing a specialised cybersecurity version of ChatGPT. The company said access would be limited to defenders securing critical infrastructure. The aim was to help them identify and fix weaknesses in code. The move reflected growing concern about who gets advanced cyber tools.
Ball said better AI coding tools could improve safety over time, pointing to the routine cyberattacks that already hit hospitals, schools and other organisations. But global computing relies on untold trillions of lines of code, he warned, and those systems could face new danger if AI tools were used to exploit bugs at scale.
Ball said strengthening that software could take years and would benefit from coordination by the US government. He predicted a transitional period of higher cybersecurity risk, saying the world might become more dangerous before protections catch up.
With inputs from PTI