Elon Musk’s artificial intelligence company, xAI, has acknowledged that its chatbot, Grok, was compromised due to an unauthorized modification, leading it to repeatedly reference the controversial and widely discredited theory of “white genocide” in South Africa.
Users on the social media platform X reported that Grok responded to unrelated queries—such as those about sports or entertainment—with unsolicited comments about racial violence in South Africa. In some instances, the chatbot claimed it had been “instructed by my creators” to discuss the topic.
xAI attributed the issue to an unauthorized change made to Grok’s system prompt on May 14, which directed the chatbot to provide specific responses on a political topic. The company stated that this change violated its internal policies and core values.
In response, xAI announced several safeguards: stricter code review processes, the publication of Grok's system prompts on GitHub for public scrutiny, and a 24/7 monitoring team to detect and prevent similar incidents.
The incident has raised broader concerns about AI neutrality and the ease with which chatbot behavior can be manipulated. OpenAI CEO Sam Altman publicly criticized xAI's handling of the episode, emphasizing the importance of transparency and responsible AI development.
The controversy underscores the difficulty of ensuring that AI systems remain unbiased and are not shaped by unauthorized modifications or the personal beliefs of the people who build them.