OpenAI Launches Advanced Security Mode for High-Risk ChatGPT Accounts


Summary: OpenAI has launched Advanced Account Security, a mode that protects ChatGPT and Codex accounts against hijacking with strict access controls. The feature is aimed at users who handle sensitive information.

OpenAI Launches “Advanced Account Security” to Protect ChatGPT and Codex Accounts Against Phishing and Account Hijacking

OpenAI introduced a new feature called Advanced Account Security, designed for ChatGPT and Codex users who face a higher risk of cyberattacks, phishing, and account theft.

The new layer of protection introduces measures far stricter than those common on online platforms, approaching the standards applied to journalists, researchers, government officials, and other high-profile targets.

Key changes include:

  • elimination of traditional passwords,
  • mandatory use of passkeys or physical security keys,
  • blocking recovery via SMS or email,
  • shorter sessions,
  • login alerts,
  • enhanced protection against social engineering.

The measure arrives as AI tools like ChatGPT and Codex store increasingly sensitive information, including:

  • private conversations,
  • source code,
  • professional data,
  • internal documentation,
  • enterprise workflows.

Goodbye to Traditional Passwords

One of the most significant changes is that accounts protected under this advanced mode will no longer rely on conventional passwords.

Instead, users must use:

  • FIDO-compatible passkeys,
  • physical security keys like YubiKeys,
  • phishing-resistant methods.

This drastically reduces the risk of:

  • credential theft,
  • phishing attacks,
  • reusing leaked passwords,
  • session hijacking.

OpenAI even announced a collaboration with Yubico to offer physical key bundles specifically for ChatGPT users.

Much Stricter Account Recovery

The new system also eliminates classic recovery mechanisms via:

  • email,
  • SMS,
  • traditional technical support.

If access is lost, recovery is possible only through:

  • recovery keys,
  • backup passkeys,
  • previously registered physical keys.
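The format of OpenAI's recovery keys has not been disclosed, but the pattern behind recovery codes in general is well established: issue a batch of high-entropy one-time codes, store only salted hashes of them, and invalidate each code after use. A minimal sketch of that pattern, with all names illustrative:

```python
import hashlib
import secrets

def generate_recovery_codes(n: int = 8) -> list[str]:
    # High-entropy, human-typeable codes, e.g. "4f2a-9c1e-77b0".
    return ["-".join(secrets.token_hex(2) for _ in range(3)) for _ in range(n)]

def hash_code(code: str, salt: bytes) -> bytes:
    # Store only a salted hash, never the plaintext codes.
    return hashlib.pbkdf2_hmac("sha256", code.encode(), salt, 100_000)

class RecoveryStore:
    """Server-side record of unredeemed recovery codes."""

    def __init__(self, codes: list[str]):
        self.salt = secrets.token_bytes(16)
        self.hashes = {hash_code(c, self.salt) for c in codes}

    def redeem(self, code: str) -> bool:
        h = hash_code(code, self.salt)
        if h in self.hashes:
            self.hashes.discard(h)  # one-time use
            return True
        return False

codes = generate_recovery_codes()
store = RecoveryStore(codes)
print(store.redeem(codes[0]))  # True: a valid code works once
print(store.redeem(codes[0]))  # False: the same code is rejected on reuse
```

Because nothing in this flow involves a human support agent, there is no one for an attacker to socially engineer, which is exactly the vector the new scheme is designed to close.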

This aims to prevent one of the most common vectors used by modern attackers: social engineering against support teams.

In fact, OpenAI confirmed that technical support will no longer be able to intervene in accounts protected under this advanced scheme.

Reduced Exposure of Sensitive Data

Another important feature is that conversations from accounts under Advanced Account Security will be automatically excluded from model training.

This is particularly aimed at users who handle:

  • corporate information,
  • sensitive research,
  • private code,
  • strategic data,
  • cybersecurity-related activities.

AI and Security: A New Attack Surface

The initiative reflects an increasingly evident reality: AI tool accounts are becoming high-value targets for attackers.

Compromising a modern AI account could grant access to:

  • corporate information,
  • software projects,
  • accidentally shared credentials,
  • corporate strategies,
  • private conversations.

Furthermore, platforms like Codex integrate with development environments and technical workflows, increasing the potential impact of a successful compromise.

A Growing Industry Trend

OpenAI is not the first company to launch a reinforced protection system for high-risk users. Google has offered its Advanced Protection program for years.

However, the explosive growth of AI tools is accelerating the need to implement more aggressive security and phishing-resistant authentication measures.

Recommendations for Users

Experts recommend:

  • enabling phishing-resistant MFA,
  • using physical security keys,
  • avoiding password reuse,
  • regularly reviewing active sessions,
  • protecting associated email accounts,
  • being wary of fake login pages.

As AI becomes more deeply integrated into personal and enterprise environments, protecting the associated accounts is no longer optional: it is a critical component of modern digital security.

Key facts

  • Advanced Account Security requires two physical security keys or passkeys.
  • Recovery via email and SMS is eliminated.
  • Support personnel do not have access to recovery options.
  • The measure is announced as part of OpenAI's cybersecurity strategy.

Why it matters

This implementation considerably raises the entry barrier for cybercriminals, forcing them toward more sophisticated attacks. By eliminating traditional recovery pathways, it minimizes the risk associated with phishing attacks and social engineering against support personnel.

X profile: @lilyhnewman (https://www.twitter.com/lilyhnewman)