Interest in ChatGPT from all corners of the web has been evident for the past several months. Powerful tools like these will always attract users with both good and bad intentions. For example, Sophos X-Ops have identified “fleeceware” apps on both the Apple and Google app stores that charge subscription fees for what is available as a free product. We have also recently seen evidence that cybercriminals are using ChatGPT to craft phishing lures. Access to paid accounts, which removes some restrictions, raises rate limits, and unlocks the most current models, would be attractive to thieves. Information stealers have long been used by cybercriminals to hoover up as much data as possible, and ChatGPT accounts are now part of the bounty.
Once credentials are publicly leaked, there is not much a user can do to claw them back. For compromised accounts, immediately changing the password and turning on multi-factor authentication (MFA) can evict the imposters and help prevent future compromise. OpenAI accounts support MFA, but only for legacy enrolments: as of 12 June 2023, OpenAI have paused new MFA enrolments. This is concerning, not only because MFA should be the default for any modern service, but also because of the increased attention these accounts are receiving from cybercriminals.
John Shier, Field CTO – Commercial, Sophos
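For context on the MFA mechanism discussed above: most app-based MFA enrolments use time-based one-time passwords (TOTP, RFC 6238). The sketch below is a minimal, standards-based illustration of how a TOTP code is derived from a shared secret; the secret used is the published RFC test vector, not anything OpenAI-specific.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, timestep: int = 30, digits: int = 6, now: float = None) -> str:
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of complete time steps since the Unix epoch.
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter (RFC 4226 HOTP)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # → 287082
```

Because the code changes every 30 seconds and is derived from a secret that never leaves the authenticator, a stolen password alone is not enough to log in, which is why pausing new MFA enrolments widens the window for account takeover.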