Coding, Data Science, A.I., Robots |

  • Thread starter: nycfan
  • Replies: 670
  • Views: 22K
  • Off-Topic
"In all of our interactions, the DoW displayed a deep respect for safety ..."

:ROFLMAO:


“…Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose. The San Francisco company also said it had found a way to ensure that its technologies would not be applied for domestic surveillance in the United States or with autonomous weapons by installing specific technical guardrails on its systems.

… Mr. Altman and Dario Amodei, the chief executive of Anthropic, have long been bitter rivals. Dr. Amodei and several other founders of Anthropic previously worked at OpenAI. But they left in 2021 after disagreements with Mr. Altman and others over how A.I. should be funded, built and released.

Last week, during an A.I. summit in India, Mr. Altman and Dr. Amodei were caught on video refusing to join hands during a photo session with Prime Minister Narendra Modi.

It may take time for OpenAI’s technology to be used by the Pentagon. The company is not yet approved for classified work in part because its technologies are not available from Amazon’s cloud computing services, which is how the government often accesses classified systems.

That could change after OpenAI signed a partnership with Amazon on Friday. Amazon, a new investor in OpenAI, is pouring $50 billion into the A.I. start-up as part of $110 billion in funding that OpenAI raised to pay for its continued growth and to fuel A.I. development.…”
 
Good for him. I really go back and forth on that guy. Sometimes I think he's talking about how bad the future could be just so more people will use his product. Telling everyone "my product could put a big chunk of the white-collar workforce in the unemployment line" would immediately drum up a lot of business from people who don't want to pay a white-collar workforce. John McAfee played that game with antivirus and it made him a hundred million dollars.

But then he takes a principled stand that will definitely cost him and his investors money, and I think maybe some of his warnings are a little more legit, especially with regard to the surveillance state, not the Skynet stuff.

But his products are tip-top. I can attest to how valuable they are. I use them every single day, and they're better than the competition in several really important use cases.
 
I just don't think this type of thing would make for good AI with the current technology. At least the kind of AI where one machine is going to figure out how to beat a different machine or a human by analyzing how they fight and coming up with some novel strategy.

You need simulations to create that kind of AI. That works well for things like video games because you can run a whole bunch of simulations very cheaply. But to do the same thing in the real world, you'd need to run the simulation thousands of times with physical robots, destroying a lot of them along the way.
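The point about cheap simulations can be sketched with a toy example (the game, move names, and opponent model here are all made up for illustration, not any real robot-combat system): a learner that only observes outcomes needs many simulated bouts before it can reliably infer an opponent's style and pick a counter-strategy.

```python
import random

# Toy "fighting" game: each move beats exactly one other move.
MOVES = ["strike", "grapple", "feint"]
BEATS = {"strike": "feint", "grapple": "strike", "feint": "grapple"}  # key beats value

def simulate_bouts(opponent_dist, n, rng):
    """Run n simulated bouts, tallying the opponent's observed moves."""
    counts = {m: 0 for m in MOVES}
    for _ in range(n):
        move = rng.choices(MOVES, weights=[opponent_dist[m] for m in MOVES])[0]
        counts[move] += 1
    return counts

def best_counter(counts):
    """Pick the move that beats the opponent's most frequently observed move."""
    most_common = max(counts, key=counts.get)
    return next(m for m, beaten in BEATS.items() if beaten == most_common)

if __name__ == "__main__":
    rng = random.Random(0)
    # Hypothetical opponent who favors feints 60% of the time.
    opponent = {"strike": 0.2, "grapple": 0.2, "feint": 0.6}
    # With thousands of cheap simulated bouts, the estimate is reliable;
    # with a handful of real-world bouts (each one wrecking hardware),
    # the observed counts would be far too noisy to trust.
    counts = simulate_bouts(opponent, 5000, rng)
    print(counts, "->", best_counter(counts))
```

In a game you can run those 5,000 bouts in a fraction of a second; with physical robots, each sample costs time and broken hardware, which is the whole crux of the argument above.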

I do think certain parts of AI, notably on the computer vision side, could be effective, but most of these robots will be pretty basic, with a lot of the logic hand-coded by a human.

But I used to like that show, so I'm glad somebody's sponsoring it, whether for marketing reasons or because another nerd who liked the show had the money to bring it back.
 

“White-collar work… most of those tasks…”
That brush seems too wide. The most immediate effects will be felt among the lower echelons of administrative workers and work their way upward. If you're part of the lower echelon, you need to start planning now. The further up the decision-making ladder you are, the safer you are. Of course, what I say is a grand oversimplification. All IMHO.
 
So what happens when no one has a job because of AI and can't spend money on the companies that replaced everyone with AI?
 