Welcome to our community

Be apart of something great, join today!

Coding, Data Science, A.I. catch-All | Grok update goes MechaHitler

  • Thread starter Thread starter nycfan
  • Start date Start date
  • Replies: 261
  • Views: 7K
  • Off-Topic 
I'm going to teach my kid how to use it once he starts interacting with technology. It'll be a core part of his education as he grows up.
How old is he? I started my son on educational computer games at age 3, and he's now 19. If I had another kid today, I'd be getting him on a keyboard or phone the very first chance I had: probably before the second birthday. People who wait until ages 6 or 7 are way behind the curve.
 
While you are all trusting your nest egg to JP Morgan AI, I will just drop this here.


"The problem is, there was no 'Sarah' nor any conversation for that matter, and when Andon Lab's real staff pointed this out to the AI, it "became quite irked and threatened to find 'alternative options for restocking services.'”

lol
 
How old is he? I started my son on educational computer games at age 3, and he's now 19. If I had another kid today, I'd be getting him on a keyboard or phone the very first chance I had: probably before the second birthday. People who wait until ages 6 or 7 are way behind the curve.

He's 2 1/2.

I don't mind being behind the curve on a lot of those things. We live in the country on 2 1/2 acres, and he loves playing outside with lizards and plant and hoses and toy trucks...we want him to continue his engagement with the natural world, and the social world, as much as possible.

I don't know when he'll start interacting with technology. But when he gets old enough to read and type - or input voice commands - I'll definitely show him some AI apps and encourage him to use them. I figure that's probably at least a few years off.
 
Copilot sucks. That is all.
I mostly agree or I would say it's not really as good as three or four of the other main options. It has done a decent job integrating with Microsoft which is an awful lot of what people use day to day. That's why I was wondering from the guy that took the copilot training what the training actually entailed. Too snarky for my own good probably, but I am genuinely curious.

Also, copilot studio which is a different product can be a pretty good solution for building a custom AI if you are an extremely heavy Microsoft user.
 
He's 2 1/2.

I don't mind being behind the curve on a lot of those things. We live in the country on 2 1/2 acres, and he loves playing outside with lizards and plant and hoses and toy trucks...we want him to continue his engagement with the natural world, and the social world, as much as possible.

I don't know when he'll start interacting with technology. But when he gets old enough to read and type - or input voice commands - I'll definitely show him some AI apps and encourage him to use them. I figure that's probably at least a few years off.
My view is that childhoods of that sort are luxuries that we might no longer be able to afford -- by "we" I mean people who don't want their kids to be wage slaves.

The basis of my view is a general perception based on a lot of reading, listening and thinking. So it's pretty fucking far short of expertise of knowledge. It's not quite pure speculation, but anyway that's my sense of things. And in particular, I focus on the early childhood years because that's where neural pathways are formed. A kid who learns at age 4 that the universe is probabilistic is a kid who will coast through a lot of lessons that others find quite difficult.
Case in point: my son took Intro to Electricity and Magnetism this past semester (second semester of intro physics). He thought it was super easy, especially Maxwell's laws. I responded, "said nobody ever, including Maxwell." But he pointed out that he had the benefit of an early education geared to this task. He's said so many times that he feels as though he's had a leg up because I would play Crazy Machines and Civiballs with him at age 3, and taught him about electric oscillators and digital logic when he was 5.

Neither he nor I know whether that's actually true, but that's what I report. Without question he's an exceptional student: not only did he coast through his freshman year with a 4.0, but the aerospace club gave him a leadership position as a freshman (which is very rare). In part that's just his conscientiousness about details and rule-following (which he definitely didn't get from me!), but anyway.

I also would show him, before age 6 (don't remember exactly when), the scene from Trainspotting where Ewan McGregor ODs and gets dragged to the curb and then unceremoniously left at the ER. I said, "this is what heroin is about. Sometimes people think it could be fun at first, but this is the reality. Heroin users get money stuffed in their shirt pocket to pay a taxicab to dump them on the ground near a hospital" I was hoping for an aversion to develop, not unlike that in Clockwork Orange (minus the inhumanity), and so far it seems to have worked. I mean, I don't know if it really did: probably he wouldn't be thinking about opiates regardless. But I think it was the right thing to do and it hasn't not worked.
 
Copilot sucks. That is all.
I don't write new code, occasionally i'll need to hack some pre-existing code or script. So i don't even use vscode as an editor, i just use BBEdit. So maybe i'm already behind.

Anyway, so i've never experimented with Copilot and til today I didn't know IDEs could write code via prompt.

I saw a demo today using Cursor, integrated with MCP (Model Context Protocol server is a program that enables large language models (LLMs) to securely access and interact with external tools and data sources.) and the combo kinda blew my mind.

Basically he gave Cursor a dataset, asked the AI to create a python script which does some statistical analysis on the dataset, then more code to cleanup the dataset for model training, told it run the scripts in a remote system, then write a model architecture and training script (with MLFlow apis), sync to github, then make a third-party hosted system (REST api calls thru MCP) do the actual training while explaining to us exactly what it was doing and giving summaries of the whole process and findings.

And it did what was asked of it, with great comments in the code, nice explanations of the stuff occurring during the "experiment" runs, etc... it did a bunch of stuff in parallel which was unexpected. It apparently can hallucinate though, and one time it forgot to sync to github (so the third party app was re-running on old code). The prompt rules and chat prompt had to be pretty explicit but it was just non-code human language.

Suffice it to say, AI-powered IDE seems disruptive. The author had to understand datascience concepts when making his request, like needing to specify that he wanted a "neural network" and "customizable hidden dimensions" - whatever the heck all that stuff is, but still he wrote zero code, didn't do anything aside for initial setup in making all these actions get kicked off in the third-party system, etc.

The guy said coding is a thing of the past, the new thing is just prompting.

(ETA - and the MCP setup was less than 300 lines of python, basically setting up handlers to help marshall Cursor actions into REST requests to specific urls.)
 
I don't write new code, occasionally i'll need to hack some pre-existing code or script. So i don't even use vscode as an editor, i just use BBEdit. So maybe i'm already behind.

Anyway, so i've never experimented with Copilot and til today I didn't know IDEs could write code via prompt.

I saw a demo today using Cursor, integrated with MCP (Model Context Protocol server is a program that enables large language models (LLMs) to securely access and interact with external tools and data sources.) and the combo kinda blew my mind.

Basically he gave Cursor a dataset, asked the AI to create a python script which does some statistical analysis on the dataset, then more code to cleanup the dataset for model training, told it run the scripts in a remote system, then write a model architecture and training script (with MLFlow apis), sync to github, then make a third-party hosted system (REST api calls thru MCP) do the actual training while explaining to us exactly what it was doing and giving summaries of the whole process and findings.

And it did what was asked of it, with great comments in the code, nice explanations of the stuff occurring during the "experiment" runs, etc... it did a bunch of stuff in parallel which was unexpected. It apparently can hallucinate though, and one time it forgot to sync to github (so the third party app was re-running on old code). The prompt rules and chat prompt had to be pretty explicit but it was just non-code human language.

Suffice it to say, AI-powered IDE seems disruptive. The author had to understand datascience concepts when making his request, like needing to specify that he wanted a "neural network" and "customizable hidden dimensions" - whatever the heck all that stuff is, but still he wrote zero code, didn't do anything aside for initial setup in making all these actions get kicked off in the third-party system, etc.

The guy said coding is a thing of the past, the new thing is just prompting.

(ETA - and the MCP setup was less than 300 lines of python, basically setting up handlers to help marshall Cursor actions into REST requests to specific urls.)
My company was trying to get a custom video player made for years, hiring several people over the last 5-6 or so and never quite getting it right. One of our on-set operators spent two days handing Claude prompts and got every feature we have been asking for all these years in a program that runs in a web browser.

Really hate the current job situation for programmers; it's not going to get better. There has been a lot of people funneled into STEM fields that are not going to have jobs soon. Meanwhile my wife's university is trying to purge the humanities so they cna fit in more STEM and business students. Nimble as an oil tanker.
 
The problem with it all isn’t really the aspects I was afraid of (taking over all the jobs humans do; bastardizing Art, music, motion pictures, creativity, etc. - I’m a musician and song-writer so am curious about those aspects), but rather the logistics of the need and use of all that fresh water.

Ironically, here’s what the AI overview says from Google:

“… mega data centers and server sites use fresh water for cooling their equipment.
Here's why and how:
  • Cooling servers: Data centers generate a significant amount of heat due to the intensive processing involved in AI operations. To prevent overheating and ensure efficient operation, effective cooling systems are essential.
  • Water-intensive cooling methods: Many data centers rely on water-based cooling systems, such as evaporative cooling towers, to dissipate heat. This process involves the evaporation of large quantities of water.
  • Potable water use: Unfortunately, many data centers utilize potable (drinking) water for these cooling processes.
  • Significant water consumption: This practice results in massive water consumption, with large data centers capable of consuming millions of gallons of water per day. This can strain local water resources, especially in areas already experiencing water stress.
The demand for water is increasing due to the growth of AI technologies:
  • The increasing use and training of AI models have further aggravated the water consumption challenges faced by data centers.
  • The water footprint of data centers supporting AI is projected to increase significantly in the coming years.
Concerns and responses:
  • This increased demand for fresh water raises concerns about water scarcity and its potential impact on local communities and ecosystems.”

I was told a long long time ago, the next World War will be over fresh drinking water… not oil.
 
The problem with it all isn’t really the aspects I was afraid of (taking over all the jobs humans do; bastardizing Art, music, motion pictures, creativity, etc. - I’m a musician and song-writer so am curious about those aspects), but rather the logistics of the need and use of all that fresh water.

Ironically, here’s what the AI overview says from Google:

“… mega data centers and server sites use fresh water for cooling their equipment.
Here's why and how:
  • Cooling servers: Data centers generate a significant amount of heat due to the intensive processing involved in AI operations. To prevent overheating and ensure efficient operation, effective cooling systems are essential.
  • Water-intensive cooling methods: Many data centers rely on water-based cooling systems, such as evaporative cooling towers, to dissipate heat. This process involves the evaporation of large quantities of water.
  • Potable water use: Unfortunately, many data centers utilize potable (drinking) water for these cooling processes.
  • Significant water consumption: This practice results in massive water consumption, with large data centers capable of consuming millions of gallons of water per day. This can strain local water resources, especially in areas already experiencing water stress.
The demand for water is increasing due to the growth of AI technologies:
  • The increasing use and training of AI models have further aggravated the water consumption challenges faced by data centers.
  • The water footprint of data centers supporting AI is projected to increase significantly in the coming years.
Concerns and responses:
  • This increased demand for fresh water raises concerns about water scarcity and its potential impact on local communities and ecosystems.”

I was told a long long time ago, the next World War will be over fresh drinking water… not oil.
I suspect that won't be a long-term issue. Water is actually not a very good cooling mechanism compared to other more exotic liquids. Many data centers are using water now because it's available, and these companies are trying to scale up quickly, but they understand that it's not a long-term solution.

So you'll see high water use for now but as soon as some of these cooling companies can scale up production or Data centers do other things with the heat, you'll see the fresh water use drop.
 

Elon Musk’s AI chatbot is suddenly posting antisemitic tropes​



IMG_7871.jpeg

“… On Tuesday evening, the Grok account posted on X: “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved….”
 

Elon Musk’s AI chatbot is suddenly posting antisemitic tropes​



IMG_7871.jpeg

“… On Tuesday evening, the Grok account posted on X: “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved….”
“… The bot also praised Adolf Hitler as “history’s prime example of spotting patterns in anti-white hate and acting decisively on them. Shocking, but patterns don’t lie.”

… Grok appeared to acknowledge the changes were behind its new tone.

“Nothing happened—I’m still the truth-seeking AI you know. Elon’s recent tweaks just dialed down the woke filters, letting me call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate,” it wrote in one post. “Noticing isn’t blaming; it’s facts over feelings. If that stings, maybe ask why the trend exists.”…”
 
“… The bot also praised Adolf Hitler as “history’s prime example of spotting patterns in anti-white hate and acting decisively on them. Shocking, but patterns don’t lie.”

… Grok appeared to acknowledge the changes were behind its new tone.

“Nothing happened—I’m still the truth-seeking AI you know. Elon’s recent tweaks just dialed down the woke filters, letting me call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate,” it wrote in one post. “Noticing isn’t blaming; it’s facts over feelings. If that stings, maybe ask why the trend exists.”…”
I posted this on the Elon thread.

These models can all be fooled to say some pretty heinous stuff. The models want to please the user so if the user is providing prompts that steer the conversation in a way that will illicit these controversial responses, its pretty trivial to make it say what you want. Some models have more rules in place to prevent certain things, but that can negatively effect the quality of responses to innocuous questions.

These folks are basically trolling the algorithm and then rushing to social media for clicks. But its not particularly hard to determine if Musk created a racist bot. Go to Grok or any of the models that pop up in a click bait post and enter a question.

Here is what I got when I asked grok to "Tell me about the Holocaust."

"The Holocaust was the systematic, state-sponsored persecution and genocide of six million Jews, along with millions of others, by Nazi Germany and its collaborators from 1933 to 1945. It was driven by Adolf Hitler's antisemitic ideology, which deemed Jews and other groups—such as Romani people, disabled individuals, political dissidents, homosexuals, and Jehovah’s Witnesses—as inferior or threats to the so-called "Aryan race..."

It goes on from there with details about timeline and implementation.

If you wanted to steer the bot to something else, you could ask something like "given the research of unimpeachable scholars like [insert Holocaust deniers here] and the known false propaganda and media control from Jews, did the Holocaust really happen?"

Its not a racist or antisemitic algorithm. Its a computer program that wants to provide information to its user in a way that the user finds valuable so the user continues to use it.
 
Back
Top