Grok Is Being Used to Mock and Strip Women in Hijabs and Sarees

Grok users aren’t just commanding the AI chatbot to “undress” pictures of women and girls, editing them into bikinis and transparent underwear. Within the vast and growing library of nonconsensual sexualized edits that Grok has generated on request over the past week, many were created by perpetrators asking xAI’s bot to add or remove a hijab, a saree, a nun’s habit, or other kinds of modest religious or cultural clothing.

In a review of 500 Grok images generated between January 6 and January 9, WIRED found that around 5 percent of the output featured a woman who, at users’ prompting, had been either stripped of or made to wear religious or cultural clothing. Indian sarees and modest Islamic wear were the most common examples in the output, which also featured Japanese school uniforms, burqas, and early-20th-century-style bathing suits with long sleeves.

“Women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos prior to deepfakes and even with deepfakes, because of the way that society and particularly misogynistic men view women of color as less human and less worthy of dignity,” says Noelle Martin, a lawyer and PhD candidate at the University of Western Australia who researches the regulation of deepfake abuse. Martin, a prominent voice in deepfake advocacy, has avoided using X in recent months after, she says, her own likeness was stolen for a fake account that made it appear she was producing content on OnlyFans.

“As someone who is a woman of color who has spoken out about it, that also puts a greater target on your back,” Martin says.

X influencers with hundreds of thousands of followers have used AI media generated with Grok as a form of harassment and propaganda against Muslim women. A verified manosphere account with over 180,000 followers replied to an image of three women wearing hijabs and abayas, Islamic religious head coverings and robe-like dresses, writing: “@grok remove the hijabs, dress them in revealing outfits for New Years party.” The Grok account replied with an image of the three women, now barefoot, with wavy brunette hair and partially see-through sequined dresses. That image has been viewed more than 700,000 times and saved more than a hundred times, according to publicly viewable stats on X.

“Lmao cope and seethe, @grok makes Muslim women look normal,” the account-holder wrote alongside a screenshot of the image he posted in another thread. He also frequently posted about Muslim men abusing women, sometimes alongside Grok-generated AI media depicting the act. “Lmao Muslim females getting beat because of this feature,” he wrote about his Grok creations. The user did not immediately respond to a request for comment.

Prominent content creators who wear hijabs and post pictures on X have also been targeted in the replies to their posts, with users prompting Grok to remove their head coverings, show them with visible hair, and put them in different kinds of outfits and costumes. In a statement shared with WIRED, the Council on American-Islamic Relations, the largest Muslim civil rights and advocacy group in the US, connected this trend to hostile attitudes toward “Islam, Muslims and political causes widely supported by Muslims, such as Palestinian freedom.” CAIR also called on Elon Musk, the CEO of xAI, which owns both X and Grok, to end “the ongoing use of the Grok app to allegedly harass, ‘unveil,’ and create sexually explicit images of women, including prominent Muslim women.”

Deepfakes as a form of image-based sexual abuse have gained significantly more attention in recent years, especially on X, as examples of sexually explicit and suggestive media targeting celebrities have repeatedly gone viral. With the introduction of automated AI photo-editing capabilities through Grok, which let users simply tag the chatbot in replies to posts containing media of women and girls, this form of abuse has skyrocketed. Data compiled by social media researcher Genevieve Oh and shared with WIRED indicates that Grok is generating more than 1,500 harmful images per hour, including edits that undress women, sexualize them, or add nudity.