Coding, Data Science, A.I. Catch-All

  • Thread starter: nycfan
I had to learn lisp in an AI course.

This was around 2000 or so when AI was considered a dead field. My first attempt at lisp I tried to make it like a non-lisp program. Never got the utility of it but I didn’t really care at the time.
That's what everyone does with LISP, and that's what I mean by it being counter-intuitive. We think about telling people what to do in list format (first, do this; then this; then this), which is why imperative languages make sense: computer, do this first, then this, etc. LISP proceeds by computation instead. The whole program is one giant mathematical expression that resolves and, in the process, does useful things. It's foreign.

My first class in Scheme, the after-lecture lab was wild: 15 people squawking like chickens because they had always found programming easy and suddenly couldn't figure out how to write simple algorithms. After an hour or so, one by one it started clicking for us. Once you got it, the assignments took like 15 minutes. The getting it was the hard part, but we all did get it.
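
To make the contrast concrete, here is a minimal sketch in standard Scheme (purely illustrative, not from that course): the imperative habit is to mutate a running total in a loop, while in Scheme the whole program is one expression whose evaluation is the answer.

    ; Imperative mindset: "set total to 1; for i from 1 to n, multiply total by i; return total."
    ; Scheme mindset: the definition is a single expression that evaluates to the result.
    (define (factorial n)
      (if (<= n 1)
          1                           ; base case: the expression is just 1
          (* n (factorial (- n 1))))) ; otherwise, n times a smaller factorial

    (display (factorial 5))           ; prints 120
    (newline)

Nothing in there is a sequence of commands mutating state; the useful work falls out of evaluating the expression, which is exactly the part that feels foreign at first.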
 

As for AI advances, one day we will get to the point where AI designs AI (not just helps but does it), and that will be the end game. If that AI isn't AGI, it soon will be.

Note: I don’t really know. Just throwing that last paragraph in for discussion.
Yeah, LISP doesn't fit the procedural programming model, or even the object-oriented model.
 
Good news: AI is going to advance biological research and drug discovery.

Bad news: it is going to replace lots of white-collar jobs, if we have enough energy to power it. Anthropic AI folks were discussing the odd upcoming era in which AI would be making business decisions instead of humans, but robots aren't advanced enough yet to implement the AI agent's desires, so humans would just be the grunts.

A guy at the company where I work has been hand-feeding and fine-tuning something Claude-oriented that is really impressive and definitely going to make my job much easier and more efficient. If I can be more efficient, then they can afford to lay me or my cohorts off.

I need to learn more about the prompt engineering and tuning side for the sake of job security.
 
The Anthropic CEO was interviewed, and some of his thoughts could be hyperbole, especially regarding timing, but on the general long-term impacts I think he's probably right. He's warning of a truncation of white-collar jobs, especially entry-level ones, and saying that the public and government aren't talking about this enough.


Some of the interesting parts are his ideas for solving this dilemma:

  1. Speed up public awareness with government and AI companies more transparently explaining the workforce changes to come. Be clear that some jobs are so vulnerable that it's worth reflecting on your career path now. "The first step is warn," Amodei says. He created an Anthropic Economic Index, which provides real-world data on Claude usage across occupations, and the Anthropic Economic Advisory Council to help stoke public debate. Amodei said he hopes the index spurs other companies to share insights on how workers are using their models, giving policymakers a more comprehensive picture.
  2. Slow down job displacement by helping American workers better understand how AI can augment their tasks now. That at least gives more people a legit shot at navigating this transition. Encourage CEOs to educate themselves and their workers.
  3. Most members of Congress are woefully uninformed about the realities of AI and its effect on their constituents. Better-informed public officials can help better inform the public. A joint committee on AI or more formal briefings for all lawmakers would be a start. Same at the local level.
  4. Begin debating policy solutions for an economy dominated by superhuman intelligence. This ranges from job retraining programs to innovative ways to spread wealth creation by big AI companies if Amodei's worst fears come true. "It's going to involve taxes on people like me, and maybe specifically on the AI companies," the Anthropic boss told us.
A policy idea Amodei floated with us is a "token tax": Every time someone uses a model and the AI company makes money, perhaps 3% of that revenue "goes to the government and is redistributed in some way."
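
Just to make the arithmetic of that floated token tax concrete, here is a toy Scheme sketch (the 3% rate is the figure Amodei mentioned; the revenue number is invented for illustration):

    ; Hypothetical token tax: a flat percentage of model-usage revenue remitted to the government.
    (define (token-tax revenue rate)
      (* revenue rate))

    (display (token-tax 1000000000 0.03)) ; $1B of usage revenue -> $30 million remitted
    (newline)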
 
And where else are all those displaced white-collar workers going to get jobs that pay an equivalent amount of money? We've already seen what automation has done to manufacturing jobs; what happens when large numbers of white-collar and service-industry jobs also disappear? Not a pleasant thought.
 
I am a guy who tries hard to avoid automated self-serve checkouts at stores, because they crush the damn minimum-wage jobs the checkout folks have...
 
I work in a mid-career position in finance (fairly specialized), and I could absolutely see my job completely replaced by AI in 10 years, assuming incremental improvements. I believe this will ultimately lead to UBI, but how does that work in practice? Does the 45-year-old displaced attorney with a $5K mortgage receive enough benefits to service his debt? What should I encourage my 2-year-old to focus on academically? I don't think white-collar jobs will exist in 25 years.
 
Jobs? It won't matter once specialized AI gets into the hands of the many nations and people who would love to see an end to America... and there's lots of them.

Live day to day... it's just a matter of time.
 

"The Carolina AI Literacy (CAIL) initiative provides leadership, research expertise, and resources as the UNC-CH community integrates AI into its work. CAIL offers literacy programs focused on the intellectual and cultural impacts of artificial intelligence. We also engage in research investigating the uses of AI tools in writing and communication. We bring together faculty, instructors, library experts, and students to better understand the key challenges linked with artificial intelligence, and then develop interventions and research programs aimed squarely at addressing these concerns."
 