Apple threatened to pull Musk’s AI Grok from App Store over sexualized deepfakes
In a letter obtained by NBC News, Apple told US senators it “contacted the teams behind both X and Grok after it received complaints and saw news coverage of the scandal” and demanded that the developers “create a plan to improve content moderation.” At the time, xAI’s chatbot Grok was freely accessible on X and as a standalone app, with flimsy safeguards that allowed users to easily generate and share sexualized deepfakes and “undress” images of real people, disproportionately women and some of them apparently minors.
As we reported at the time, these were flagrant and unambiguous violations of App Store guidelines that Apple often enforces with an iron fist. Apple, which profits from having apps like X and Grok on its digital store, has not spoken publicly about the issue or its behind-the-scenes intervention. Google, which profits similarly through its Google Play store, has also not commented publicly on the matter.
Apple said it reviewed proposed changes to the X and Grok apps. While the company concluded X had “substantially resolved its violations,” Grok “remained out of compliance.” Apple said it warned the developer that “additional changes to remedy the violation would be required, or the app could be removed from the App Store.” Only after further back and forth did Apple determine Grok had “substantially improved” and approved its submission.
Despite Apple’s approval and xAI’s claims it has tightened safeguards, Grok still appears to be able to generate sexualized deepfakes with relative ease. Cybersecurity sources told me they have been able to create explicit images of celebrities and political figures using the tool, and I have been able to produce similar images of myself and other consenting adults. NBC also reported similar findings yesterday.