CERT-In’s Initial AI Alert Highlights Hacking Threats and Multi-Stage Attack Risks from Mythos and GPT-5.5

Following increasing concerns about next-generation artificial intelligence systems, the Indian Computer Emergency Response Team (CERT-In) has issued a comprehensive advisory. It warns that advanced AI models like Mythos by Anthropic and GPT-5.5 by OpenAI are exhibiting capabilities that could significantly heighten the cyber threat environment.

The advisory represents the government’s most explicit recognition to date that progress in AI extends beyond mere productivity enhancements and is rapidly evolving into offensive cyber capability.

Frontier AI and a new threat landscape
CERT-In stated that “frontier AI systems reflect a notable rise in cyber capability sophistication,” highlighting that these models can now autonomously detect vulnerabilities in widely used software and, in some cases, devise and execute complex, multi-stage cyberattacks.

According to the advisory, these systems can operate at a speed and scale that once required coordinated teams of highly skilled human hackers. The concern is not merely theoretical—CERT-In explicitly warned of the potential “weaponisation of security vulnerabilities” through such models.

Among the capabilities identified:

  • Detection of zero-day vulnerabilities
  • Automated and rapid exploitation of discovered flaws
  • Reconnaissance of APIs and cloud infrastructure
  • Credential harvesting and social engineering
  • AI-generated phishing campaigns
  • Autonomous orchestration of multi-stage attacks

Heightened risk for organisations
The agency has assessed that organisations now deal with a “heightened risk” environment marked by low-cost, automated reconnaissance and exploitation cycles. This scenario could lead to unauthorized access, service disruptions, data exfiltration, financial fraud, and widespread compromise across interconnected systems.

CERT-In has advised companies to maintain a posture of “elevated alert,” boost the frequency and sophistication of threat detection, and employ AI-enabled defensive tools to counter AI-driven assaults.

It also emphasized the necessity for a transition towards “zero trust” security frameworks, strengthened password protocols, and targeted training for security teams to understand how AI-augmented attackers function.

Individuals no longer on the sidelines

In a significant shift from previous advisories mainly directed at institutions, CERT-In highlighted that individual users are now “part of the frontline.”

Personal devices, accounts, and digital identities are increasingly at risk from AI-driven threats. The advisory cautioned against rising impersonation and deepfake attacks, facilitated by generative AI systems capable of convincingly simulating trusted individuals and organizations.

For individuals, CERT-In recommended increased vigilance and adherence to essential cybersecurity practices:

  • Keep operating systems, browsers, and applications updated
  • Avoid downloading unknown apps or files
  • Use strong, unique passwords across accounts
  • Be cautious of unsolicited emails, messages, and links
  • Scrutinise content that seems AI-generated, particularly if it mimics trusted sources
  • Treat “too good to be true” offers with skepticism
  • Regularly back up important data

While the advisory refrains from imposing restrictions, it reflects growing institutional alarm that the same capabilities bolstering AI innovation could also ease the path to sophisticated cybercrime.
