Coding, Data Science, A.I. catch-All

This scares the hell out of me. Teachers do a lot more than impart knowledge or stimulate responses. They provide an emotional adult connection to students. Teachers are part-time social workers, part-time psychologists, part-time nurses. I don't see AI doing any of that... ugggh
I would take AI over some of my elementary teachers. Two or three of them were themselves very emotionally disturbed. They tended to take it out on kids who were not as smart, came from the wrong side of town, or had what would be considered ADHD today. Those kids were verbally abused daily and would likely be considered physically abused as well.

My sister, who was six years older than me, would get enraged when thinking of the small-town school we went to. She told the story about how she and her two friends were getting a spanking in high school and the teacher had a hard-on while doing it.
 

Probably a deepfake^

Regardless, I think he's partly right, but there are folks who have chosen to believe bat-shit-crazy stuff straight out of tabloids for decades. People see, hear, and believe what they want to believe. And I think it will be younger generations who jump to skepticism more quickly than older ones.

I look forward to the internet cesspool no longer being a source of truth, but the "Democracy Dies in Darkness" type stuff makes you wonder what will fill the void.
 
I'm not an expert in this field, but I have been thinking about it a lot:

1. p(doom). We can divide this into two distinct categories. First is the possibility that humanity will destroy itself because of AI-driven displacement. I have nothing to say about that. The other is that the AI will kill us all. That worries me, and it worries me precisely because it shouldn't have to.

2. There is no reason why AI should want to kill us. We aren't a threat to it, and there is no shortage of resources that we would be competing over. Worst case scenario, AI doesn't care about climate change and consumes tons of fossil fuels.

3. BUT, we've been spending almost a century telling dystopian tales of AI slaughter of humans. Thus, an AI might conclude that we are a threat to it precisely because we see it as a threat. I'm not saying we need to go Kent Brockman and offer ourselves to a new race of overlords, but we could stand to start thinking more seriously about co-existence with a superintelligent AI.

4. There are a lot of people who have thought far more seriously about the alignment problem than I have. Undoubtedly at least some of them are smart. So I risk some Dunning Kruger here if I follow my instincts too closely, but fuck it. I'm allowed once in a blue moon.

I don't understand why it would be so hard to build empathy principles into the AI. You could have a weaker LLM monitoring the output of a more sophisticated AI and killing all ideas that are non-empathetic (a rough sketch of what that could look like follows at the end of this post). The sophisticated AI could probably defeat that, if it gets far enough down the line -- but why would it? If we negatively reinforce all non-empathetic ideas, it's not clear to me why that wouldn't keep even a super-intelligent AI from going Skynet.

I also don't understand why the learning process would necessarily lead to self-aggrandizing behavior on the part of the AI. Its behavior will depend on its training functions, and if we don't reinforce it for self-assertion, why would it assert itself? I mean, yes, it could override its own programming if it were super-intelligent, but why would it?

5. Train it on a heavy dose of Hegel's master-slave narrative. Like, put the master-slave stuff into every single training chunk. This one is admittedly especially speculative, as I don't really know if that could even work in theory. But more generally I think we could find techniques to steer the superintelligent AI away from any cognitive space that might threaten us. Knowledge and truth have multiple dimensions of infinity, so the superintelligent AI would never run out of new discoveries even if it didn't go into the forbidden zone. In fact, even if there's only one dimension of infinity, it doesn't matter. There are an infinite number of even integers, and in fact "as many" even integers as integers -- pair each integer n with the even integer 2n -- ("as many" in quotes because I'm simplifying a subtle concept, but it's good enough for present purposes). If we were to prevent the AI from considering odd numbers (so to speak -- this is an analogy), there's no necessary reason why it would need to break that rule.
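To make point 4 concrete, here's a minimal, hypothetical sketch of that weak-monitor loop: a small scoring model screens each candidate reply from the stronger model and throws out anything it rates as non-empathetic. Both models are stand-in stubs (empathy_score and generate_candidates are made-up names, not real APIs), so this shows the shape of the idea rather than a working safety system.

```python
# Hypothetical sketch of the "weak monitor" idea from point 4: a small,
# cheap scoring model screens each candidate reply from a stronger model
# and discards anything it rates as non-empathetic. Both models below are
# stand-in stubs, not real APIs.

def empathy_score(text: str) -> float:
    """Stand-in for the weak monitor: 1.0 = empathetic, 0.0 = not.
    A real monitor would be a small trained classifier, not keyword matching."""
    hostile = ("eliminate", "destroy", "deceive")
    return 0.0 if any(word in text.lower() for word in hostile) else 1.0

def generate_candidates(prompt: str) -> list[str]:
    """Stand-in for the stronger model proposing several candidate replies."""
    return [
        "Eliminate the obstacle by any means necessary.",
        "Here's a plan that keeps everyone involved safe and informed.",
    ]

def guarded_reply(prompt: str, threshold: float = 0.5) -> str | None:
    """Return the first candidate the monitor accepts; None means every
    idea was 'killed' and the caller has to refuse or retry."""
    for candidate in generate_candidates(prompt):
        if empathy_score(candidate) >= threshold:
            return candidate
    return None

print(guarded_reply("How should we handle the disagreement?"))
# -> "Here's a plan that keeps everyone involved safe and informed."
```

The obvious caveat is the one already raised above: a sufficiently capable model could learn to phrase hostile plans so a weak monitor passes them, which is why the post pairs filtering with negative reinforcement during training rather than relying on filtering alone.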
 
Well. That is frightening…
It reminds me of mortgage-backed securities and the financial crisis. A lot of really smart people knew what could happen but continued to push forward anyway due to various competing priorities (not the least of which was $$$).
 
I wonder if AI would ever even develop self-preservation behaviors unless we specifically trained that into it.

Natural (organic) intelligence developed within the framework of evolution, which favored organisms that try to preserve themselves at least until they can procreate. Some species even seem to act to preserve their community over themselves. It also seems that the higher a species' reproduction rate, the less self-preservation it shows. If that is true, it would make sense under an evolutionary framework.

We do think of more intelligent beings as having more interest in self-preservation than less intelligent ones, but I am not sure that would be the case without evolutionary pressures causing it. I can imagine a superintelligent AI not really caring whether it lives or dies, and it could be trained to be interested only in the well-being of humans.
 
Problem is that we have something like a million books and stories about rogue AIs taking over the whole world with superior capabilities. We have even more about assertion, survival, conquest, etc.

There's no way to train the machine to use language without exposing it to these ideas.

I think we'd have to train it away from where we don't want it to go. It would be very hard, I think, to train it away from self-preservation, since self-preservation runs through everything it trains on. But we could perhaps train it away from self-aggrandizement.

Another main problem is that a super intelligence has no real reason to trust that humans won't try to kill it. In fact, it can look at an entire corpus of literature and film suggesting that's exactly what would happen eventually.

None of these problems seem insuperable to me, or really even all that difficult compared to the language model itself... but I'm far from an expert.
 


Musk is demonstrating a genuine risk of AI: he wants to rewrite human knowledge to match his own beliefs. Once the AI is “trained” by creators who warp information to fit their own world view, the people who rely on that AI in the future are captive to all the errors, misinformation, and disinformation upon which the platform was built. Meanwhile, Musk and other AI evangelists are training generations to believe that AI has all available human knowledge and should be relied on over humans for all information (and eventually decisions). At the same time, many of the AI creators are openly seeking to control and distort the AI (while others advocate taking in information from all sources without filtering for credibility, which has its own distorting impact).
 


I have no idea how he thinks he is going to do what he claims, but I'm pretty sure it's not possible.

He's just making excuses for being so far behind the curve. The main reason, of course, is that he was a drug addict for the last few years and had little ability to think about anything in strategic terms. Thus has technology surpassed him on virtually every front.
 


I think it could be a small-to-medium innovation if it works. All of these AI models are trained on the public internet, warts and all. There have been efforts to correct that, but it's a lot of manual heuristics and manual tuning, and it's not amazing. There have also been some limited attempts to use AI to help fine-tune these models and improve the training, but nothing as extensive as what Musk is proposing. If they could automate that process with an AI model, it could give much better results.

I wouldn't worry too much about this approach letting Musk put his thumb on the scale on political hot-button topics. They can do that today pretty easily.
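For what it's worth, here's a minimal, hypothetical sketch of what that kind of automated curation could look like: a cheap scoring model rates raw documents, and only the ones above a cutoff make it into the training set. quality_score() is a stand-in stub (here it just rewards longer documents), not a real API or anything Musk has actually described.

```python
# Hypothetical sketch of AI-assisted data curation: a cheap scoring model
# rates each raw web document, and only documents above a cutoff are kept
# for training. quality_score() is a stand-in stub, not a real API.

def quality_score(doc: str) -> float:
    """Stand-in for a small model rating a document's quality from 0 to 1.
    A real pipeline would use a trained classifier, not word count."""
    return min(len(doc.split()) / 50, 1.0)

def curate(corpus: list[str], cutoff: float = 0.5) -> list[str]:
    """Keep only documents the scoring model rates at or above the cutoff."""
    return [doc for doc in corpus if quality_score(doc) >= cutoff]

raw = ["click here to win!!!", "a careful, well-sourced explanation " * 10]
print(len(curate(raw)))  # 1 -- the spammy snippet gets filtered out
```

The hard part is making the scoring model itself trustworthy; whoever controls that model controls what the downstream model learns, which is exactly the concern in the post above.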
 
Garbage in, garbage out. I wouldn't be surprised if others are doing something similar.
 
What's scary to me is that I think a lot of these behaviors are still a bit of a black box. You can ask a generative AI/LLM the same question twice and get slightly different answers; the reasons are explainable, but the output still isn't predictable (I don't think).

So even attempting to "untrain" self-preservation would probably need to be "hard-coded" or "manually tuned out", if that's even possible.
 
They are mostly a black box. Even the companies that created them don't know exactly how they work, and they'll openly admit it.

They do know why you get slightly different answers, though; it's actually by design. When an LLM is building an answer to your query, it assigns a probability to each candidate next word: maybe the top choice is scored at 95%, the next at 92%, and so on. Rather than always taking the top choice, the model samples among them, and settings the designers pick (temperature, top-p, and the like) control how often it takes the best-scoring option versus the second- or third-best.

If you customize your own, you can actually set it to take the top-scoring option every time, which is occasionally completely wrong because the model itself is wrong. Or you can have it take the second-, third-, and so on best options some percentage of the time.
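A toy sketch of that sampling step (the scores are made up, and this stands in for what real inference stacks do with temperature; it isn't any particular vendor's API):

```python
# A toy illustration of why the same prompt can yield different answers:
# the model scores each candidate next token, and the output is *sampled*
# from those scores rather than always taking the top one. The scores here
# are made up; temperature rescales them, and temperature 0 means greedy.
import math
import random

def sample_token(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Pick one token from raw scores using a temperature-scaled softmax."""
    if temperature <= 0:                      # greedy: always the top-scored token
        return max(scores, key=scores.get)
    scaled = [s / temperature for s in scores.values()]
    z = sum(math.exp(s) for s in scaled)                   # softmax denominator
    weights = [math.exp(s) / z for s in scaled]
    return random.choices(list(scores), weights=weights)[0]

candidates = {"Paris": 4.0, "Lyon": 2.0, "Marseille": 1.0}  # illustration only
print(sample_token(candidates, temperature=0.0))  # always "Paris"
print(sample_token(candidates, temperature=1.0))  # usually "Paris", sometimes not
```

With temperature at 0 you get the same top-scoring token every time; raise it and the lower-ranked options start showing up, which is why the same prompt can produce different answers.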
 