OpenAI announced an optional new layer of account protection, dubbed "Advanced Account Security," for ChatGPT and Codex accounts. This feature is designed for users concerned about potential targeting by attackers, enforcing strict access controls to make account takeover attacks significantly more difficult.
While such security measures are not novel—Google, for instance, has offered its Advanced Protection tier for nearly a decade—the rapid global proliferation of mainstream AI services necessitates robust basic protections. OpenAI states this launch is part of its broader cybersecurity strategy announced earlier this month.
"People are turning to AI for deeply personal questions and increasingly high-stakes work," OpenAI noted in a blog post. "Over time, a ChatGPT account can hold sensitive personal and professional context, and sit at the center of connected tools and workflows. For some people, like journalists, elected officials, political dissidents, researchers, and those who are especially security-conscious, the stakes are even higher."
Users enabling Advanced Account Security will no longer be able to use regular passwords. Instead, they must add two physical security keys or passkeys, which substantially reduces the risk of successful phishing attacks. The feature also eliminates email and SMS as routes for account recovery; users must instead rely on recovery keys, backup passkeys, or physical security keys. OpenAI has partnered with Yubico to offer lower-cost YubiKey bundles to Advanced Account Security users.
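OpenAI has not published implementation details for its recovery keys, but schemes like this typically hand the user a one-time, high-entropy code and store only a hash of it server-side, so a database breach doesn't expose usable keys. A minimal sketch of that general pattern (all names and parameters here are illustrative assumptions, not OpenAI's actual design):

```python
import hashlib
import secrets

# Alphabet omits ambiguous characters (I, L, O, 0, 1) for readability.
ALPHABET = "ABCDEFGHJKMNPQRSTVWXYZ23456789"

def generate_recovery_key(groups: int = 5, group_len: int = 5) -> str:
    """Generate a human-readable, high-entropy recovery key, e.g. XK3TQ-9MWPA-..."""
    parts = [
        "".join(secrets.choice(ALPHABET) for _ in range(group_len))
        for _ in range(groups)
    ]
    return "-".join(parts)

def hash_recovery_key(key: str) -> str:
    """The service stores only this hash; the plaintext is shown to the user once."""
    return hashlib.sha256(key.encode()).hexdigest()

def verify_recovery_key(candidate: str, stored_hash: str) -> bool:
    """Compare in constant time to avoid timing side channels."""
    return secrets.compare_digest(hash_recovery_key(candidate), stored_hash)
```

Because recovery flows bypass the primary authenticators, the key's entropy (here roughly 25 characters over a 30-symbol alphabet) is what stands between an attacker and the account, which is why such codes are long and machine-generated rather than user-chosen.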
Crucially, once Advanced Account Security is enabled, users cannot seek account recovery assistance from OpenAI's support team. Support no longer has access to, or control over, any recovery options, which prevents attackers from breaching accounts via social engineering attacks aimed at support portals.
Advanced Account Security also enforces shorter sign-in windows and sessions, requiring users to log in more frequently on a given device. It generates alerts whenever someone logs into the locked-down account, directing users to a dashboard where they can review active ChatGPT and Codex sessions. Furthermore, while any OpenAI user can opt out of having their ChatGPT conversations used for model training, this exclusion is enabled by default for Advanced Account Security users.
Effective June 1, members of OpenAI's Trusted Access for Cyber program—which grants cybersecurity professionals, researchers, and others advanced access to new models—will be required to enable Advanced Account Security or submit an alternative attestation of implementing phishing-resistant authentication through an enterprise single sign-on mechanism.