“… Musk said he would tweak Grok after it started to give answers that he didn’t agree with. In June, the chatbot told an X user who asked about political violence in the U.S. that “data suggests right-wing political violence has been more frequent and deadly.”
“Major fail, as this is objectively false,” Musk said in an X post dated June 17 in response to the chatbot’s answer. “Grok is parroting legacy media. Working on it.”
A few weeks later, Grok’s governing prompts on GitHub had been substantially rewritten to include new instructions for the chatbot.
Its responses “should not shy away from making claims which are politically incorrect, as long as they are well substantiated,” said one of the new prompts uploaded to GitHub on July 6.
Two days later, Grok began publishing instructions on X about how to harm Stancil and posting a range of antisemitic comments, repeatedly referring to itself as “MechaHitler.” Its posts grew increasingly incendiary until X’s chatbot function was shut down on Tuesday evening.
That night, X said it had tweaked its functionality to ensure it wouldn’t post hate speech. In a post on Wednesday, Musk said that “Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially.”
On Tuesday night, xAI removed the new prompt instructing Grok not to shy away from politically incorrect speech, according to GitHub logs.…”