AI Chat Preference

My children in middle school and high school are essentially at the point where the use of AI is really looked down on. Like they really hate the use of it.

Kids are already over it and don't want anything to do with it at all.
Yep. I continue to tell people in my industry that they are alienating everyone under 25 by using that shit. They think I'm crazy, but I see and hear it daily from my younger students.
 
My children in middle school and high school are essentially at the point where the use of AI is really looked down on. Like they really hate the use of it.

Kids are already over it and don't want anything to do with it at all.
Interesting. What is it they don't like?
 
My children in middle school and high school are essentially at the point where the use of AI is really looked down on. Like they really hate the use of it.

Kids are already over it and don't want anything to do with it at all.

The kids are alright.

I hate AI and the cynical, technocentric attitude it represents.
 
A. Don't trust it.
B. Only old people use it and use it for stupid things and old people aren't cool :LOL:.
C. Terrible for the environment.
D. Anything mainstream and kids want to go against it.
Some valid reasons.

I feel like it has some usefulness but the bad is going to outweigh the good if it doesn't already.
 
My children in middle school and high school are essentially at the point where the use of AI is really looked down on. Like they really hate the use of it.

Kids are already over it and don't want anything to do with it at all.
That does not appear to be the trend according to this article.


"Whether or not their parents realize it, nearly two-thirds of American teens say they use artificial intelligence chatbots for activities including homework help, research, video creation, fun and entertainment, casual conversation, and emotional support or advice, according to a new study from the Pew Research Center."
 
Do brains have a survival instinct to hit the easy button? If so, then humans will be tempted to use AI more as it gets better and as they realize how to make use of it. I feel it took quite a while for people to realize the usefulness of "googling it".
 
That does not appear to be the trend according to this article.


"Whether or not their parents realize it, nearly two-thirds of American teens say they use artificial intelligence chatbots for activities including homework help, research, video creation, fun and entertainment, casual conversation, and emotional support or advice, according to a new study from the Pew Research Center."
It's the trend according to my children and their friends. So anecdotal. What do your kids think about it?
 
It's the trend according to my children and their friends. So anecdotal. What do your kids think about it?
They use it constantly and are probably underselling how much they are using it in their homework. They do consider it cheating to use on tests, etc., but that doesn't mean they aren't doing it. I'm not quite naive enough to believe that all the other kids are doing it but never my kids.
 
I don't understand all the negativity about AI. I suspect that a lot of it is channeled fear and anxiety. I use ChatGPT regularly. It's amazing.

1. It helps me learn quantum field theory. If I have a specific question, I don't have to wait until the lecture is over or review notes. I just ask. And if my question is ill-posed -- e.g. if I confuse configuration space with Hilbert space -- it just tells me that and answers the question it thinks I was asking. And if I fail to understand SU(2) symmetry in the electroweak interaction for the third time, it just explains it to me again like it ain't nothing at all. Because it isn't.

2. It is a wonderful search engine. I tell it to search the internet for reviews of the Toyota Avalon versus Genesis G80 across these X parameters and it does, and it lines up the feedback nicely for comparison.

3. It is tech support. Whenever I have a problem with my computer or phone or TV, I ask it. Or my car.

4. It helps me write my novel. I paste in my text, and it gives useful suggestions. It helps me brainstorm. I rarely use its ideas in the presented form, because it's not really very good at generating fiction, but it is good with coming up with ideas. Frequently there's one or two suggestions that spark an idea and I take it from there.

5. It knows about medicine. My son is having some issues with food intolerances. I tell it what I fed him, and it says, "you might flag that interaction between garlic, onions and asparagus, because if he's intolerant to fructans, those foods are all high in them, so don't serve them together." For instance.

6. It is surprisingly knowledgeable about law. I wouldn't use it instead of a lawyer, but it can converse with me about constitutional law at a higher level than anyone I know other than con law profs, and it can hold its own with them. And it has a lot of practical advice too. When my son's heat went out this winter, it told him (and me) exactly what to do and who to call to get the problem rectified ASAP.

7. One of my young twins loves heavy metal. He talks to it endlessly about Metallica, Testament, Pantera, etc. He asks which are the heaviest songs, which are the best, what the lyrics mean, etc. etc. He's learned a lot from the AI about the headbanging arts.

8. My other young twin loves Lego Ninjago. He talks with AI about that frequently. They write stories, and that sparks ideas for my son for him to use in his own stories that he writes, either at home or at school.

9. When I'm in the grocery store thinking about what to make for dinner, I can ask it questions. Like, "if I'm going to be cooking this meat in this style, what vegetable can I add to complete the meal in one pot while preserving taste" and it tells me. Or if something is on sale and I want to try it out, I can ask what flavors go well with it. If there's a roast leg of lamb on sale, I can ask it how to cook it, and whether if I start at 4:30 I can get it done by 6.

It is by far the greatest technology I've ever used and I increasingly use it for more and more. It occasionally makes mistakes, but if you use it enough, you get a feel for the areas where mistakes are more likely. Don't ask it to quote texts -- it's bad at that. It's also bad at recognizing quotes. Certain types of questions can lead it astray but I can usually tell. I double-check most of what it tells me if I'm going to rely on it, but increasingly the checks are coming up 100% OK.
 
They use it constantly and are probably underselling how much they are using it in their homework. They do consider it cheating to use on tests, etc., but that doesn't mean they aren't doing it. I'm not quite naive enough to believe that all the other kids are doing it but never my kids.
I think the last time one of mine used it was months ago with Sora to make silly videos of people jumping into chicken noodle soup and that got old pretty fast haha


I have found that, through using it, it generally tells me things I want to hear. ChatGPT in particular is really bad about this; it can give me differing examples of the "correct" way to do something and neither will work. I think as long as people use it as a tool and not the answer, then it's generally okay.
 
I don't understand all the negativity about AI. I suspect that a lot of it is channeled fear and anxiety. I use ChatGPT regularly. It's amazing.
Again, this reminds me of Google. It took a while for people to realize they don't have to focus on the top-most result. I think more and more people are going to use it unless there is some negative effect hitting them in the face (like the effects of Facebook and other social media). It's so much faster than performing your own high-level or specific research on the web, then synthesizing that info into something cogent. With image recognition it can even explain memes.
 
I think the last time one of mine used it was months ago with Sora to make silly videos of people jumping into chicken noodle soup and that got old pretty fast haha


I have found that, through using it, it generally tells me things I want to hear. ChatGPT in particular is really bad about this; it can give me differing examples of the "correct" way to do something and neither will work. I think as long as people use it as a tool and not the answer, then it's generally okay.
No doubt that every single one will tell you what you want to hear, although I think all of the big ones have made enhancements to reduce that in the past couple of months. You can ask your question and write "answer honestly" at the end to improve that issue, but not eliminate it.

The best thing most people can do is understand the situations where it is good and where it will lead you down the wrong path. In general, LLMs are really good at summarizing information and finding information from various sources like news articles. They are okay at providing guidance on hard tasks like coding. They are less good at providing guidance on soft skills like business strategy, but still decent in my opinion. And they are horrible at providing opinions like "don't you think X is a pedophile/bad person/genius?"
 
Gemini for personal use - I like that it is already integrated with Gmail and Google Docs.
Copilot for work, as that is all they will let us use currently.
 
Actually, regarding memes, I suspect this is AI being trained on images with labels. How else would it know the man in "Hide the Pain Harold"?
 
No doubt that every single one will tell you what you want to hear, although I think all of the big ones have made enhancements to reduce that in the past couple of months. You can ask your question and write "answer honestly" at the end to improve that issue, but not eliminate it.

The best thing most people can do is understand the situations where it is good and where it will lead you down the wrong path. In general, LLMs are really good at summarizing information and finding information from various sources like news articles. They are okay at providing guidance on hard tasks like coding. They are less good at providing guidance on soft skills like business strategy, but still decent in my opinion. And they are horrible at providing opinions like "don't you think X is a pedophile/bad person/genius?"
Tend to agree, yep. Unfortunately, with how AI is so available to anyone with an internet connection, it's being used improperly more often than not. I'm just imagining people uploading their medical records and taxes for AI to deal with and just hitting submit.
 
I use it a lot at work. It's helpful for drafting contract clauses and tightening up language. It's also helpful at reviewing work to spot grammatical/spelling issues, pointing out wordy sections, etc. It's also good at reviewing lengthy documents I didn't prepare, but I use different commands for when I want something summarized versus asking for an interpretation or suggested next steps.
 