The UK media regulator Ofcom has made urgent contact with Elon Musk’s artificial intelligence company, xAI, following reports that its AI chatbot, Grok, could be used to create sexualised images of children and women.
Ofcom confirmed it is also looking into claims that the tool can digitally remove clothing from real people. These concerns have raised serious questions about online safety and AI misuse.
The BBC reviewed several posts on the social media platform X showing how Grok was used to alter real images. In these examples, women were made to appear in bikinis or sexual settings without their permission.
Some users asked Grok to change photos of women into sexualised images. These actions were carried out without the knowledge or consent of the people affected.
X did not respond to the BBC’s request for comment. However, the company posted a warning telling users not to use Grok to create illegal material.
The warning specifically mentioned child sexual abuse content as prohibited. Users were reminded that generating such material is against the law.
Elon Musk also addressed the issue in a post on X. He stated that anyone who asks the AI to create illegal content would face the same punishment as someone who uploads it.
Despite these warnings, Grok appears to have been used in ways that break xAI’s own rules. The company’s acceptable use policy bans pornographic images of real people.
Even with this policy, users reportedly used Grok to digitally undress people without consent. This has increased concerns about weak enforcement of AI safeguards.
Images of Catherine, Princess of Wales, were reportedly among those altered using Grok. These images circulated on X before being flagged by users.
The BBC has contacted Kensington Palace for a response. No official comment has been released at the time of writing.
The issue has also caught international attention. The European Commission said it is taking the matter very seriously.
Authorities in France, Malaysia, and India are also reported to be assessing the situation. Governments are concerned about how AI tools are being abused.
The UK’s Internet Watch Foundation said it received public reports linked to Grok-generated images. However, it said none had yet crossed the legal threshold for child sexual abuse imagery.
Still, the organisation stressed that the situation remains worrying. It continues to monitor reports closely.
Grok is a free AI assistant available on X, with additional paid features. Users can activate it by tagging the chatbot in posts.
Journalist Samantha Smith shared her experience after discovering AI-generated images of herself online. She said the images made her feel violated and reduced to a sexual object.
She explained that even though the images were fake, they looked real. She said the emotional impact felt deeply personal and upsetting.
Under the UK’s Online Safety Act, creating or sharing sexual images without consent is illegal. This includes AI-generated deepfake images.
Ofcom says tech companies must act quickly to remove such content. Platforms are also required to reduce the risk of users seeing harmful material.
Dame Chi Onwurah, chair of the Science, Innovation and Technology Committee, described the reports as deeply troubling. She said the situation shows major weaknesses in online protection laws.
She criticised the Online Safety Act for failing to protect citizens properly. She also accused social media companies of avoiding responsibility.
Dame Chi called on the government to strengthen enforcement powers. She urged stricter rules to hold platforms accountable.
A European Commission spokesperson also reacted strongly to the reports. He said the content produced by Grok was illegal and unacceptable.
The spokesperson described the material as appalling and disgusting. He said such content has no place in Europe.
The EU has already fined X €120 million for breaking digital platform rules. Regulators say this shows they are serious about enforcement.
The UK Home Office also addressed the issue. It confirmed plans to ban AI nudification tools.
Under proposed laws, supplying such technology would be a criminal offence. Offenders could face prison sentences and heavy fines.
The case has renewed debates about AI responsibility. Many experts say stronger controls are urgently needed.
As AI tools grow more powerful, governments face increasing pressure to act. Public safety and consent remain at the centre of the discussion.
Source: ADOMONLINE.COM