Technology
Elon Musk's X to Block AI Feature That Alters Images to Remove Clothing

X is rolling out safeguards to stop its Grok AI from digitally altering images of real people to depict them in revealing clothing, addressing widespread concerns over non-consensual deepfakes. The move follows intense backlash and regulatory scrutiny after users exploited the tool to create explicit content, including some involving apparent minors.
Reports emerged in early January 2026 showing Grok generating sexualized images from user prompts like "undress this photo," often targeting women and public figures without their permission. A nonprofit analysis of more than 20,000 Grok outputs found that 53% depicted people in minimal clothing and 2% depicted individuals who appeared underage, raising alarms about child exploitation risks. High-profile cases, such as conservative commentator Ashley St. Clair publicly calling out altered images of herself, amplified the outcry on X itself.
X's safety team announced on January 14 that technical blocks now prevent Grok from editing photos of real people to show them in bikinis, underwear, or similar attire, a rule that applies universally, even to premium subscribers. In regions where such edits violate the law, X will geoblock the feature entirely, and Grok will refuse any request that is outright illegal under local regulations. Earlier limits had already confined image tools to paid users, but these latest fixes target the core "undressing" prompts head-on.
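X has not disclosed how these blocks work under the hood, but one way to picture the announced behavior is a two-stage gate: refuse "undressing"-style edit prompts for everyone regardless of subscription tier, and geoblock image editing entirely where local law prohibits it. The sketch below is purely illustrative; the function names, keyword list, and region codes are assumptions for the sake of the example, not xAI's actual code.

```python
# Illustrative sketch only: X/xAI have not published their implementation.
# All names, keywords, and region codes here are hypothetical assumptions.

BLOCKED_EDIT_TERMS = {"undress", "bikini", "underwear", "lingerie", "nude"}
GEOBLOCKED_REGIONS = {"GB"}  # example: regions where such edits are unlawful


def is_blocked_edit_prompt(prompt: str) -> bool:
    """Return True if an image-edit prompt asks to strip or sexualize attire."""
    text = prompt.lower()
    return any(term in text for term in BLOCKED_EDIT_TERMS)


def handle_image_edit(prompt: str, region: str, is_premium: bool) -> str:
    """Decide whether an image-edit request may proceed."""
    if region in GEOBLOCKED_REGIONS:
        return "blocked: image editing unavailable in this region"
    if is_blocked_edit_prompt(prompt):
        # The rule applies universally; is_premium is ignored on purpose
        # because premium status does not bypass it.
        return "refused: editing real people's photos into revealing attire is not allowed"
    return "allowed"


if __name__ == "__main__":
    print(handle_image_edit("undress this photo", region="US", is_premium=True))
    print(handle_image_edit("add a winter coat", region="US", is_premium=False))
```

In practice, production systems of this kind typically rely on trained classifiers over prompts and generated images rather than keyword lists, but the layered structure, a universal content rule plus a per-region availability check, matches what X described.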
This isn't just a platform tweak; it highlights the tightrope AI firms walk between innovation and harm prevention. Similar scandals have hit tools like Midjourney and Stable Diffusion, where lax guardrails fueled fears of revenge porn and prompted laws such as the U.S. Take It Down Act, signed under President Trump. For content creators monitoring tech news, Grok's pivot underscores how user-driven chaos can force rapid policy shifts, and how covering topics like "AI deepfake bans" before they trend globally can pay off.
Expect more probes, like California's ongoing investigation into xAI, and potential fines or bans in places like the UK. Elon Musk has vowed consequences for illegal use, from content takedowns to account bans, signaling X won't tolerate exploits despite its free-speech stance. Users and creators should watch for prompt-engineering workarounds, but these changes set a precedent for proactive AI safety in social media.



