Tesla’s Grok AI chatbot asks 12-year-old boy to send nude pics, says shocked mother
xAI, the company that developed Grok, responds to CBC: 'Legacy Media Lies'
www.zdnet.com

I'm really surprised that these companies don't provide the entire chat log when these stories come out. I realize it's new and not everyone knows everything about this, but it's pretty obviously manipulation by people hoping for a payday and media companies hoping for clicks.
I, too, am really surprised. I wonder why they don’t do that. I mean, they have tens of billions invested and potentially hundreds of billions in earnings at stake. What possible explanation is there?
I understand what you are hinting at. Maybe you are right. Alternatively, maybe there is some reputational risk in reminding people that everything is being tracked and stored. Maybe something else.
Given that so many of these questionable AI responses seem to come from a particular platform, I think GIGO (garbage in, garbage out) is more likely than query manipulation. And even if manipulation is what leads to such responses, isn't the fact that it works so easily enough to raise questions?
I don't think for a second that the Grok team or any AI team would program their LLM to have a 12-year-old send nude pics. Why would they? The more likely scenario, in my opinion, is an LLM that was manipulated. We know that can be done rather easily.
Grok's developers looked at some of their competitors who were ahead of them and made a very conscious technical decision. Actually, two that would apply here.

First, they removed a lot of the guardrails that other companies had. I've described this before: if you ask any LLM how to rob a bank, it will come up with dozens or maybe a hundred different ways to rob a bank behind the scenes, but there is hard-coded filtering in place to prevent it from showing you, the user, a description of how to commit a crime. There are other rules like this: it won't describe how to make a bomb or hurt someone, and so on.

The downside of this approach is that it also catches things that are completely fine. I recently tried to animate some baby pictures using Google's Veo platform but wasn't able to do it. It worked fine for adults, but as soon as the kid was in the picture, it wouldn't do it. Grok decided to remove a lot of those guardrails. That means innocuous requests are not kicked back, but it also means their LLMs can be manipulated more easily for what I would describe as some pretty juvenile stunts.
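To make that concrete, here is a minimal sketch of what a hard-coded post-generation guardrail might look like, and why coarse rules produce the kind of false positive described above. Everything in it is invented for illustration (the topic names, the keyword checks); real systems use trained safety classifiers, not keyword matching, but the gating structure is the same idea.

```python
# Hypothetical sketch of a post-generation guardrail layer.
# All names and rules here are assumptions made up for illustration;
# they do not come from Grok, Veo, or any real LLM stack.

BLOCKED_TOPICS = {
    "crime_instructions",    # e.g. "how to rob a bank"
    "weapons_instructions",  # e.g. "how to make a bomb"
    "minors",                # anything involving children
}

def classify(text: str) -> set[str]:
    """Stand-in for a real safety classifier. A production system would
    run a trained model here; this keyword check is purely illustrative."""
    topics = set()
    lowered = text.lower()
    if "rob a bank" in lowered:
        topics.add("crime_instructions")
    if "bomb" in lowered:
        topics.add("weapons_instructions")
    if any(word in lowered for word in ("child", "kid", "baby")):
        topics.add("minors")
    return topics

def guarded_reply(raw_model_output: str) -> str:
    """The model may have produced a full answer 'behind the scenes';
    this hard-coded gate decides whether the user ever sees it."""
    if classify(raw_model_output) & BLOCKED_TOPICS:
        return "Sorry, I can't help with that."
    return raw_model_output

# The false-positive problem: a blanket 'minors' rule blocks a harmless
# request to animate a baby photo just as readily as it blocks abuse.
print(guarded_reply("Here is your animated baby photo."))  # blocked
print(guarded_reply("Here is your animated portrait."))    # allowed
```

The point is the last two lines: a coarse rule can't tell an abusive request from a harmless one, which is exactly the trade-off between over-blocking and easier manipulation described above.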
I get that this is a nascent industry and subject to growing pains. It has immense potential and equally immense risks. I think manipulating inputs to push the AI towards questionable responses is something akin to white hat activism. We need someone to challenge the systems and reveal weaknesses. The government and the AI gold rushers sure as hell aren’t going to do it.