Coding, Data Science, A.I. catch-All | Grok update goes MechaHitler

  • Thread starter: nycfan
  • Replies: 428
  • Views: 13K
  • Off-Topic 

Tesla’s Grok AI chatbot asks 12-year-old boy to send nude pics, says shocked mother

xAI, the company that developed Grok, responds to CBC: 'Legacy Media Lies'

I'm really surprised that these companies don't provide the entire chat log when these stories come out. I realize it's new and not everyone knows everything about this, but it's pretty obviously manipulation by people hoping for a payday and media companies hoping for clicks.

These stories are like the emails your grandmother used to forward you about gang members killing people who flashed their high beams at them, or HIV-tainted needles hidden in pay phones. I don't remember legitimate media companies covering that sort of nonsense back in the day. Did that happen?
 
I, too, am really surprised. I wonder why they don’t do that. I mean, they have tens of billions invested and potentially hundreds of billions in earnings at stake. What possible explanation is there?
 
I understand what you are hinting at. Maybe you are right. Alternatively, maybe there is reputational risk in reminding people that everything they type is being tracked and stored. Maybe it's something else.

I don't think for a second that the Grok team or any AI team would program their LLM to ask a 12-year-old to send nude pics. Why would they? The more likely scenario, in my opinion, is an LLM that was manipulated. We know that can be done rather easily.
 
Given that so many of these questionable AI responses seem to come from one particular platform, I think GIGO (garbage in, garbage out) is more likely than query manipulation. And even if it is manipulation, the fact that manipulation so easily produces such responses should raise questions on its own.

I get that this is a nascent industry subject to growing pains. It has immense potential and equally immense risks. I think manipulating inputs to push an AI toward questionable responses is something akin to white-hat activism: we need someone to challenge these systems and reveal their weaknesses. The government and the AI gold rushers sure as hell aren't going to do it.
 
Grok looked at competitors who were ahead of it and made two very conscious technical decisions that apply here.

First, it removed a lot of the guardrails that other companies had. I've described this before: if you ask any LLM how to rob a bank, it will come up with dozens or maybe a hundred different ways to rob a bank behind the scenes, but there is hard-coded filtering in place to prevent it from showing you, the user, a description of how to commit a crime. There are other filters like that too: it won't describe how to make a bomb or hurt someone, etc. The downside of this approach is that it also catches things that are completely fine. I recently tried to animate some baby pictures using Google's Veo platform and couldn't; it worked fine for adults, but as soon as a kid was in the picture, it refused. Grok decided to remove a lot of those guardrails. That means innocuous requests don't get kicked back, but it also means its LLMs can be manipulated more easily into what I would describe as some pretty juvenile stunts.
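To make that concrete: a post-generation guardrail of the kind described is basically a filter that sits between the model's answer and the user. Here is a deliberately crude toy sketch in Python (the blocklist, phrases, and function names are my own illustration, not anyone's actual implementation); its crudeness is also why this style of filtering over-blocks innocent requests:

```python
# Toy sketch of a post-generation guardrail: the model produces an answer
# internally, and a separate filter decides whether the user ever sees it.
# The blocklist and substring matching here are deliberately crude -- which
# is also why filters in this style over-block perfectly innocent requests.

BLOCKED_PHRASES = [
    "rob a bank",    # crime how-tos
    "make a bomb",   # weapons
    "hurt someone",  # violence
]

REFUSAL = "Sorry, I can't help with that."

def filter_response(user_prompt: str, model_answer: str) -> str:
    """Return the model's answer unless the prompt or answer trips the blocklist."""
    text = f"{user_prompt} {model_answer}".lower()
    if any(phrase in text for phrase in BLOCKED_PHRASES):
        return REFUSAL
    return model_answer

# The false-positive problem: a novelist researching a heist scene gets
# refused just like an actual would-be robber.
print(filter_response(
    "How would a character in my novel rob a bank?",
    "Chapter 3: the crew cases the vault...",
))  # -> Sorry, I can't help with that.
```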

Second, Grok embraced pornography before the other companies did; OpenAI only recently followed its lead.

Both of those choices might ultimately turn out to be bad business decisions for reputational or legal reasons. There were a lot of questions early in the internet age about whether search engines should allow people to search for pornography, and how to keep kids away from it. We all know how that turned out. But allowing more interaction is not the same as intentionally programming an LLM to ask kids for nude pictures.
 

‘No restrictions’ and a secret ‘wink’: Inside Israel’s deal with Google, Amazon

To secure the lucrative Project Nimbus contract, the tech giants agreed to disregard their own terms of service and sidestep legal orders by tipping Israel off if a foreign court demands its data, a joint investigation reveals.

In 2021, Google and Amazon signed a $1.2 billion contract with the Israeli government to provide it with advanced cloud computing and AI services — tools that were used during Israel’s two-year onslaught on the Gaza Strip. Details of the lucrative contract, known as Project Nimbus, were kept under wraps.
...
Leaked Israeli Finance Ministry documents obtained by The Guardian — including a finalized version of the contract — and sources familiar with the negotiations reveal two stringent demands that Israel imposed on the tech giants as part of the deal. The first prohibits Google and Amazon from restricting how Israel uses their products, even if this use breaches their terms of service. The second obliges the companies to secretly notify Israel if a foreign court orders them to hand over the country’s data stored on their cloud platforms, effectively sidestepping their legal obligations.
...
Crucially, companies receiving an order to hand over data are often gagged by the court or law enforcement agency from disclosing details of the request to the affected customer. To address this perceived vulnerability, the documents reveal, Israeli officials demanded a clause in the contract requiring the companies to covertly warn Israel if ever they were forced to surrender its data but were prohibited by law from revealing this fact.

According to The Guardian, this signaling is carried out through a secret code — part of an arrangement that would become known as the “winking mechanism,” but referred to in the contract as “special compensation” — by which the companies are obliged to send the Israeli government four-digit payments in Israeli shekels (NIS) corresponding to the relevant country’s international dialing code followed by zeros.

For example, if Google or Amazon were compelled to share data with U.S. authorities (dialing code +1) and were barred from revealing that action by a U.S. court, they would transfer NIS 1,000 to Israel. If a similar request were to occur in Italy (dialing code +39), they would instead send NIS 3,900. The contract states that these payments must be made “within 24 hours of the information being transferred.”
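Taken at face value, the "wink" is simple arithmetic: pad the country's international dialing code with trailing zeros until it is four digits, and send that amount in shekels. A minimal sketch of that encoding, as my own illustration of the scheme The Guardian describes rather than anything from the contract itself:

```python
def winking_payment(dialing_code: int) -> int:
    """Encode a country's international dialing code as a four-digit NIS
    payment, per the reported 'winking mechanism': the dialing code
    followed by trailing zeros."""
    payment = dialing_code
    while payment < 1000:  # pad with trailing zeros up to four digits
        payment *= 10
    return payment

assert winking_payment(1) == 1000   # U.S. (+1)   -> NIS 1,000
assert winking_payment(39) == 3900  # Italy (+39) -> NIS 3,900
```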
 