A ‘rogue employee’ was behind Grok’s unprompted ‘white genocide’ mentions



Elon Musk’s artificial intelligence company on Friday said a “rogue employee” was behind its chatbot’s unsolicited rants about “white genocide” in South Africa earlier this week.

The clarification comes less than 48 hours after Grok — the chatbot from Musk’s xAI that is available through his social media platform, X — began bombarding users with unfounded claims of genocide in response to queries about entirely unrelated subjects.

In an X post, the company said an “unauthorized modification” made in the early morning hours Pacific time pushed the chatbot to “provide a specific response on a political topic” that violates xAI’s policies. The company did not identify the employee.

“We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability,” the company said in the post.

To do so, xAI says it will openly publish Grok’s system prompts on GitHub for greater transparency. The company also says it will install “checks and measures” to ensure that xAI employees can’t alter prompts without prior review, and it will staff a 24/7 monitoring team to address issues that its automated systems miss.
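For readers unfamiliar with how a single prompt edit can affect every user at once: a system prompt is a hidden instruction prepended to each request a chatbot receives, so changing it changes behavior globally. The sketch below is hypothetical (it is not xAI’s code, and the function and variable names are invented for illustration); it shows how a system prompt is typically attached to a request, and what a minimal pre-review “check” on prompt changes might look like.

```python
# Hypothetical sketch -- not xAI's actual implementation.
# Illustrates (1) how a system prompt is prepended to every user request,
# so an unauthorized edit affects all responses, and (2) a minimal
# review gate like the "checks and measures" xAI describes.

APPROVED_SYSTEM_PROMPT = "You are a helpful assistant. Stay on topic."


def build_request(system_prompt: str, user_query: str) -> list[dict]:
    """Assemble the message list a chat-completion API typically receives.

    The system message comes first and applies regardless of what the
    user actually asked, which is why a tampered prompt can surface in
    answers to completely unrelated questions.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]


def guarded_prompt_update(current: str, proposed: str, reviewer_approved: bool) -> str:
    """Reject any system-prompt change that has not passed human review."""
    if not reviewer_approved:
        raise PermissionError("system prompt change requires prior review")
    return proposed
```

Under this sketch, the May 14 incident corresponds to `guarded_prompt_update` being bypassed entirely: the modified system prompt reached `build_request` and therefore shaped every response, whatever the user's query was.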

Musk, who owns xAI and currently serves as a top White House adviser, was born and raised in South Africa and has a history of arguing that a “white genocide” was committed in the nation. The billionaire media mogul has also claimed that white farmers in the country are being discriminated against under land reform policies that the South African government says are aimed at combating apartheid fallout.

Less than a week ago, the Trump administration allowed 59 white South Africans to enter the US as refugees, claiming they’d been discriminated against, while simultaneously also suspending all other refugee resettlement.

Per a Grok response to xAI’s own post, the “white genocide” responses occurred after a “rogue employee at xAI tweaked my prompts without permission on May 14,” allowing the AI chatbot to “spit out a canned political response that went against xAI’s values.”

Notably, the chatbot declined to take ownership of its actions, saying, “I didn’t do anything — I was just following the script I was given, like a good AI!” While it’s true that a chatbot’s responses are shaped by the instructions and training data it is given, the dismissive reply underscores the danger of AI, both in disseminating harmful information and in playing down its part in such incidents.


When CNN asked Grok why it had shared answers about “white genocide,” the AI chatbot again pointed to the rogue employee, adding that “my responses may have been influenced by recent discussions on X or data I was trained on, but I should have stayed on topic.”

Over two years have passed since OpenAI’s ChatGPT made its splashy debut, opening the floodgates on commercially available AI chatbots. Since then, a litany of other AI chatbots — including Google’s Gemini, Anthropic’s Claude, Perplexity, Mistral’s Le Chat, and DeepSeek — have become available to US adults.

A recent Gallup poll shows that most Americans are using multiple AI-enabled products weekly, regardless of whether they’re aware of the fact. But another recent study, this one from the Pew Research Center, shows that only “one-third of U.S. adults say they have ever used an AI chatbot,” while 59% of US adults don’t think they have much control over AI in their lives.

CNN asked xAI whether the “rogue employee” has been suspended or terminated, as well as whether the company plans to reveal the employee’s identity. The company did not respond at the time of publication.



