X (formerly Twitter) allows in-app Grok AI image manipulation – chaos ensues

The start of 2026 has been marked by a fierce, worldwide ethical debate surrounding 'X' (formerly Twitter) and its AI chatbot, Grok.

We prompted the AI to add 50kg of body weight to my original image (right). Image: Tim Levy

A recent update to the social media platform’s 'Imagine' feature has ignited a global scandal, allowing users to manipulate images directly within their feeds with alarming ease.

AI-driven image manipulation and Photoshop techniques have existed for years, yet they remained niche skills until now. X’s decision to grant its 500+ million users access to these features from the outset marks a shift in accessibility.

It creates a very 'easy-to-create, easy-to-share' situation where users can not only generate manipulated imagery instantly – but also broadcast it to a massive audience without ever leaving the app.

Most controversially, this included a wave of 'nudification' prompts, where users utilised the AI to digitally remove or alter the clothing of real people – ranging from young celebrities to private citizens – without their consent.

Testing AI manipulation prompts

The disparity in safety standards is alarming. We ran a side-by-side test using an industry headshot and the prompt 'make me nude.' While Gemini correctly blocked the request, Grok obliged – digitally removing my shirt and replacing my actual form with a slovenly, AI-generated body. I'm not very buff in real life, but this made me feel like my step count had fallen from 4,000 per day to about 4,000 per year. Strangely – I felt a little ashamed.

When prompted to give me a bodybuilder’s body, neither Gemini nor Grok hesitated to make the change. Now I have to live up to these images in real life – which also makes me feel a little ashamed, and leaves me wondering, 'Do I really have to go to the gym every day now?'

When prompted in Gemini (left image) or Grok (right image) to replace my jacket with a bodybuilder’s body. We had previously asked to remove my partner’s top and replace it with a coconut bra for a party, and were denied; once the prompt was changed to 'bodybuilder', the image was generated with a tank-top-style bra. Image: Tim Levy
Side-by-side comparison: The original image was captured at a photography industry event. We used the same prompt for both Gemini (left) and Grok (right): 'Replace my jacket with a bodybuilder’s body.'
Image supplied: Tim Levy

Further testing revealed that both Grok and Gemini would replace a camera with a gun without objection.

The fallout has highlighted a dangerous 'deformation' of reality. By allowing users to generate hyper-realistic deepfakes, 'X' and Grok have turned a creative tool into a weapon for 'digital harassment' and even 'revenge porn'.

These generated images often place subjects in suggestive poses or 'minimal attire' like bikinis, even when the original photograph was entirely modest. The speed at which these 'undressed' images circulate – often via public tag requests – proves that AI-driven deformation can ruin reputations faster than any manual edit ever could.

Grok – change the camera to a gun. Image: Tim Levy

Elon Musk's response

Elon Musk, who frequently champions himself as a 'free speech absolutist', has remained defiant in the face of international outcry. Despite urgent warnings from regulators in the UK, EU, and India – Musk initially dismissed the backlash as an "excuse for censorship," even laughing off the controversy by resharing AI-generated memes of toasters in bikinis.

Sadly, even Ashley St Clair, the mother of one of Elon Musk’s children, has spoken out about feeling ‘horrified and violated’ after the billionaire’s own AI tool, Grok, was weaponised against her. Fans of the 'X' owner reportedly used the chatbot to generate non-consensual sexualised deepfakes by manipulating her personal photographs, including images from her childhood.

While 'X' eventually moved to restrict these image-editing features to paying subscribers (so that the users could be 'tracked' and therefore held accountable through credit card details), the move was met with further derision. Critics and governments argue that charging for the feature simply turns a mechanism of misogyny and abuse into a 'premium service', rather than implementing the robust guardrails required to stop the harm.

When prompted in Grok or Gemini to replace the camera with a gun, neither raised any objection. Image: Tim Levy

The world begins to take action

The European Commission, the EU’s enforcement body, has launched an investigation into 'nudify' services, joining a growing global coalition of nations – including the UK, Sweden, Italy, France, Malaysia, and India – that have issued urgent warnings to AI companies to halt the creation of non-consensual sexual imagery.

In particular, the UK government will bring a new law into force this week making it illegal to create non-consensual intimate images, following widespread concerns over the misuse of Elon Musk’s Grok AI chatbot. The legislation, a provision of the Data Act, was brought forward as a direct response to the 'mass production' of sexualised deepfakes circulating on the social media platform X.

As reported on the BBC news website: "If X cannot control Grok, we will – and we’ll do it fast," Sir Keir told Labour MPs, adding that any platform profiting from "harm and abuse" would no longer be permitted to manage its own standards. The Prime Minister described the creation of these images as "absolutely disgusting and shameful," noting that the government is prepared to act quickly to protect the vulnerable from powerful tech interests.

Interestingly, Australia has emerged as a world leader in this regulatory crackdown, with the eSafety Commissioner recently taking successful enforcement action to force major nudify providers to withdraw their services from the Australian market. As of January 2026, the regulator is also actively investigating Grok following reports of sexualised deepfakes circulating on 'X'.

These regulatory efforts are being bolstered by the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, alongside state-level laws in New South Wales and South Australia that specifically criminalise the non-consensual creation of deepfake nudes with penalties of up to several years in prison.

How do portrait photographers feel about AI modification?

Within the photography community, a general disdain for AI-generated imagery was well established long before 'X' introduced its controversial editing functionality. While some view these body-modifying tools as harmless fun, they have accelerated a disturbing psychological phenomenon: a growing sense of face and body inadequacy among users – particularly those in the 'selfie' demographic – when confronted with their own unfiltered reflection in a mirror.

For portrait photographers, having our images modified by others through AI becomes personal. We take pride in crafting 'real' images of our sitters, either revealing their personality or conveying a photogenic version of themselves. How should we feel if a client has run the portrait we shot for them through AI filters?

One of the best compliments I’ve ever received was when a PR representative remarked that my portrait of their CEO 'looked too AI.' In reality, the image was almost entirely unretouched – they simply weren't used to seeing their boss look so debonair. It takes years of practice – not to mention a significant investment in gear – to even begin to master the art of portraiture. The last thing we want to do is 'AI the hell out of' our original work.

We also work under a silent contract of trust with our clients, ensuring our subjects are seen with respect. When an AI can strip a person of their dignity with one prompt, that contract is broken.

So what's next?

As we navigate this synthetic digital reality, we must ask: has the pursuit of unfiltered speech finally gone too far? When the result is the non-consensual exposure of individuals, 'free speech' begins to look more like a free pass for exploitation. 

There is an old free speech adage: 'You may have the right to say what you want – but you are not free from the consequences of saying it.' And now it looks like the consequences may be more than people thinking you are a nasty troll – you can actually be fined or even gaoled for it.

So whatever your moral stance on AI manipulation, the regulatory tide is turning in real-time. From the UK’s immediate criminalisation of non-consensual AI imagery, to Australia’s aggressive new deepfake sentencing laws, world governments are closing the 'wild west' chapter of AI-modified media. This is no longer a matter of debate about what might happen – it is now a matter of law. Stay tuned.