The New Age of AI: Scandalous and Evil
Siddhant Pawar, BSc Politics, Philosophy and Economics
One of the world’s most widely used social media platforms, X, has come under scrutiny in recent weeks following the illicit use of its AI model, Grok, to undress and create pornographic deepfake images of women and children. Ofcom, the UK’s independent online safety watchdog, is closely monitoring the situation and has launched a formal investigation into the matter.
Elon Musk proudly boasted about Grok becoming the most downloaded mobile application in the UK and Ireland App Stores, but conveniently ignored the reason behind the surge. According to a Guardian report, Grok generated approximately three million sexualised images in just eleven days, peaking on January 2nd. What was once confined to the darkest corners of the internet has now been given a public platform, without any restraint.
On January 2nd alone, during a single ten-minute period, users on X prompted Grok to digitally edit 102 images of people wearing bikinis, most of them young women, according to a Reuters investigation. This feature was later restricted to paid users on January 9th, following Prime Minister Keir Starmer’s condemnation of the situation as ‘disgusting and shameful’.
While the UK government has not responded as aggressively as Malaysia and Indonesia, both of which have blocked public access to Grok, Ofcom launched a formal investigation on January 12th under the Online Safety Act (2023). The investigation aims to determine whether X has complied with its legal duties to protect users from content that is illegal in the UK, particularly given its status as a self-regulating platform.
Ofcom’s role is limited to assessing whether companies are adhering to the Online Safety Act; it does not have the authority to directly remove or censor content. However, if X is found to be in breach of the Act, it could face fines of up to £18 million or 10% of its global annual revenue, whichever is greater, alongside further sanctions, including potential website blockage.
On January 15th, X released an official statement claiming it had implemented measures to prevent Grok from being used to create intimate images of real people, with the restriction applying to all users, including paid subscribers. Ofcom responded by reiterating that its investigation remains ongoing and that it is examining both what went wrong and what steps are being taken to prevent further harm.
This issue has surfaced due to weak regulatory constraints placed on X. While the platform has long operated in the UK under a self-regulatory model, this may soon change. The Prime Minister has stated that X could ‘lose the right to self-regulate’, signalling a potential shift in how large technology firms are overseen.
Musk, in response to criticism, stated: ‘Obviously, Grok does not spontaneously generate images; it does so only according to user requests.’ The remark was met with significant backlash from MPs. Caroline Dinenage MP argued in Parliament that ‘what X is doing is illegal and it is as much the platform’s responsibility as the people using it’. She also questioned whether Ofcom currently has sufficient authority to enforce the Online Safety Act, pointing to the potential influence of American corporate power in curbing effective regulation.
The controversy has also highlighted the need for a broader and more comprehensive framework for regulating artificial intelligence. On January 12th, the Secretary of State for Science, Innovation and Technology, Liz Kendall, stated in Parliament that further details on the forthcoming AI Bill would be presented to the House on March 18th.
A significant step towards stronger safeguarding is clearly necessary. The outcome of Ofcom’s investigation may provide crucial insight into how the government intends to address such sensitive issues in the future, including the possibility of banning social media access for children under the age of 16, following the example set by Australia.