OpenAI introduced a new feature called Advanced Account Security, designed for ChatGPT and Codex users who face a higher risk of cyberattacks, phishing, and account theft.
The new layer of protection introduces far stricter security measures than those common on online platforms, approaching the protections typically reserved for journalists, researchers, government officials, and other high-profile targets.
Key changes include:
- elimination of traditional passwords,
- mandatory use of passkeys or physical security keys,
- blocking recovery via SMS or email,
- shorter sessions,
- login alerts,
- enhanced protection against social engineering.
The measure comes at a time when AI tools like ChatGPT and Codex store increasingly sensitive information, including:
- private conversations,
- source code,
- professional data,
- internal documentation,
- enterprise workflows.
One of the most significant changes is that accounts protected under this advanced mode will no longer rely on conventional passwords.
Instead, users must use:
- FIDO-compatible passkeys,
- physical security keys like YubiKeys,
- phishing-resistant methods.
This drastically reduces the risk of:
- credential theft,
- phishing attacks,
- reusing leaked passwords,
- session hijacking.
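The reason passkeys and security keys resist phishing is that the authenticator's response is cryptographically bound to both a fresh server challenge and the origin the browser reports, so a credential exercised on a lookalike domain never verifies on the real site. The following is a simplified model of that binding (real WebAuthn uses public-key signatures; an HMAC stands in here to keep the sketch self-contained, and both domains in the example are illustrative):

```python
# Simplified model of why passkeys resist phishing: the authenticator's
# response covers both the server's challenge AND the origin the browser
# reports, so an assertion produced on a fake domain fails verification.
# (Real WebAuthn uses public-key signatures; HMAC is a stand-in here.)
import hashlib
import hmac
import os

def authenticator_sign(device_secret: bytes, challenge: bytes, origin: str) -> bytes:
    # The browser supplies the true origin; the user cannot override it.
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(device_secret: bytes, challenge: bytes,
                  expected_origin: str, assertion: bytes) -> bool:
    expected = hmac.new(device_secret, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

device_secret = os.urandom(32)   # provisioned when the passkey is registered
challenge = os.urandom(16)       # fresh per login attempt

genuine = authenticator_sign(device_secret, challenge, "https://chatgpt.com")
phished = authenticator_sign(device_secret, challenge, "https://chatgpt-login.example")

print(server_verify(device_secret, challenge, "https://chatgpt.com", genuine))  # True
print(server_verify(device_secret, challenge, "https://chatgpt.com", phished))  # False
```

Because the origin is injected by the browser, not typed by the user, there is no secret a phishing page can trick the victim into revealing.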
OpenAI even announced a collaboration with Yubico to offer physical key bundles specifically for ChatGPT users.
Much Stricter Account Recovery

The new system also eliminates classic recovery mechanisms via:
- email,
- SMS,
- traditional technical support.
In case of access loss, only the following can be used:
- recovery keys,
- backup passkeys,
- previously registered physical keys.
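Recovery keys of this kind are typically high-entropy one-time codes: generated with a cryptographically secure random source, shown to the user exactly once, and stored server-side only as hashes. A minimal sketch of that pattern (an illustration of the general technique, not OpenAI's actual scheme):

```python
# Illustrative recovery-code scheme (not OpenAI's actual implementation):
# codes come from a CSPRNG, the user sees them once, and the server keeps
# only their hashes, so a database leak does not expose usable codes.
import hashlib
import secrets

def generate_recovery_codes(n: int = 8) -> tuple[list[str], set[str]]:
    codes = [secrets.token_hex(8) for _ in range(n)]                   # shown to the user once
    stored = {hashlib.sha256(c.encode()).hexdigest() for c in codes}   # kept by the server
    return codes, stored

def redeem(code: str, stored: set[str]) -> bool:
    digest = hashlib.sha256(code.encode()).hexdigest()
    if digest in stored:
        stored.discard(digest)   # each code is single-use
        return True
    return False

codes, stored = generate_recovery_codes()
print(redeem(codes[0], stored))  # True: valid code, now consumed
print(redeem(codes[0], stored))  # False: already used
```

Making each code single-use is what lets the server revoke a code the moment it is spent, closing the replay window an attacker would otherwise have.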
This aims to prevent one of the most common vectors used by modern attackers: social engineering against support teams.
In fact, OpenAI confirmed that technical support will no longer be able to intervene in accounts protected under this advanced scheme.
Reduced Exposure of Sensitive Data

Another important feature is that conversations on accounts under Advanced Account Security will automatically be excluded from model training.
This is particularly aimed at users who handle:
- corporate information,
- sensitive research,
- private code,
- strategic data,
- cybersecurity-related activities.
The initiative reflects an increasingly evident reality: AI tool accounts are becoming high-value targets for attackers.
Compromising a modern AI account could grant access to:
- corporate information,
- software projects,
- accidentally shared credentials,
- corporate strategies,
- private conversations.
Furthermore, platforms like Codex integrate with development environments and technical workflows, increasing the potential impact of a successful compromise.
A Growing Industry Trend

OpenAI is not the first company to launch a reinforced protection system for high-risk users. Google has offered its Advanced Protection program for years.
However, the explosive growth of AI tools is accelerating the need to implement more aggressive security and phishing-resistant authentication measures.
Recommendations for Users

Experts recommend:
- enabling phishing-resistant MFA,
- using physical security keys,
- avoiding password reuse,
- regularly reviewing active sessions,
- protecting associated email accounts,
- being wary of fake login pages.
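On the last point, a common mistake is checking whether a URL merely contains a trusted name: a lookalike such as chatgpt.com.evil.example passes a substring test. Comparing the exact hostname against an allowlist avoids that. A small sketch (the allowlist entries are illustrative, not an official list of OpenAI login hosts):

```python
# Hedged sketch of spotting fake login pages: compare the EXACT hostname
# against an allowlist rather than substring-matching, which a lookalike
# domain like chatgpt.com.evil.example would defeat.
from urllib.parse import urlparse

TRUSTED_LOGIN_HOSTS = {"chatgpt.com", "auth.openai.com"}  # illustrative allowlist

def is_trusted_login_url(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_LOGIN_HOSTS

print(is_trusted_login_url("https://auth.openai.com/login"))           # True
print(is_trusted_login_url("https://chatgpt.com.evil.example/login"))  # False
print(is_trusted_login_url("http://chatgpt.com/login"))                # False (not HTTPS)
```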
As AI becomes increasingly integrated into personal and enterprise environments, protecting the associated accounts is no longer optional; it is a critical component of modern digital security.