Coding, Data Science, A.I. catch-All | DeepSeek - Chinese A.I. needs less power, fewer chips

  • Thread starter: nycfan
  • Replies: 117
  • Views: 2K
  • Off-Topic
But China's population is considerably more than the US and Europe COMBINED. And I don't know all the particulars of their education system, but they obviously develop plenty of homegrown talent. There are a lot of Chinese students in the U.S., but even if 10% of US university students were from China (seems like a stretch, but maybe if you include graduate studies in niche fields), that's still not that significant. China's population is 5x that of the US.

IIRC you've worked in China so you likely know more than I do about this specific example. I just think the numbers advantage for China is a huge factor.
There are more than 2,000 Chinese students at Purdue University alone, by way of example. Agree that they have a few great schools and millions of talented kids, and I’d wager most would rather come to Ohio State or whatever (let alone Stanford, Caltech, MIT) than stay at home.
 
It’s not like China is recruiting the global tech elite either. If the assumption is that China can produce enough home-grown talent to out-compete us, that still (currently) depends on Chinese students getting a US education for undergraduate and grad school. To say nothing of China’s demographic cliff.

If we can keep our doors open to talent, the best of India, China, etc. will continue to come here. You can’t keep ’em down on the farm once they’ve seen Karl Hungus.
I was reading yesterday about how many Chinese students are in American colleges. Amazingly high numbers. Almost no Americans in Chinese colleges.
 
This is just another thing that’s so maddeningly stupid about Trump. The US university system is the best in the world, and there is no close second. It’s a massive competitive advantage. And they are punishing schools and removing the research funding that enables this advantage. It’s so obviously idiotic you can only shake your head.

ETA: shake your head and encourage them to fight this out. UNC is a lost cause at the moment but Trump will lose this fight imho.
 
There are more than 2,000 Chinese students at Purdue University alone, by way of example. Agree that they have a few great schools and millions of talented kids, and I’d wager most would rather come to Ohio State or whatever (let alone Stanford, Caltech, MIT) than stay at home.
Really? There were a bunch of Chinese students where and when I was teaching, but they left (def during Covid but they were leaving before) and to my knowledge haven't come back.
 
Really? There were a bunch of Chinese students where and when I was teaching, but they left (def during Covid but they were leaving before) and to my knowledge haven't come back.
In the 2023-2024 school year, there were an estimated 277,398 Chinese students studying in the United States. This number represents a 4% decrease from the previous year, despite the overall number of international students reaching a record high. While Chinese students have historically been the largest group of international students in the U.S., Indian students have surpassed them in numbers for the first time since 2009.
 
In the 2023-2024 school year, there were an estimated 277,398 Chinese students studying in the United States. This number represents a 4% decrease from the previous year, despite the overall number of international students reaching a record high. While Chinese students have historically been the largest group of international students in the U.S., Indian students have surpassed them in numbers for the first time since 2009.
Interesting. Maybe my previous institution just gave up on them. I don't know. They used to be everywhere on campus -- and very visible, given that they were driving Porsches and Ferraris.
 
Interesting. Maybe my previous institution just gave up on them. I don't know. They used to be everywhere on campus -- and very visible, given that they were driving Porsches and Ferraris.
I had recently listened to an interview talking about this and they compared the numbers to those of US students abroad and in China, so I knew it was a surprisingly high number.

There are very few US students studying in China.
 
[Attached image: top-openai-researcher-denied-green-card-after-12-years-in-us.webp]
This may be a clearer answer:


Tobias Gerstenberg, assistant professor of psychology in the Stanford School of Humanities and Sciences, has been fascinated by questions of causality since graduate school. In addition to being a deep philosophical concept, causality plays an important role in many fields of study such as law, computer science, economics, and epidemiology. “I love causality because it matters for everything,” he says.

And causality could play a particularly important role in building a more humanlike AI, Gerstenberg says.

In recent research supported by Stanford HAI, Gerstenberg has shown that humans’ capacity for counterfactual simulation – thinking through what would have happened if a causal agent weren’t present – is critical for judging causation and assigning responsibility. “If AI systems are going to be more humanlike, they will need that capability as well,” he says.

Gerstenberg and his colleagues have taken a step in that direction by creating a computational simulation model that captures important aspects of how people judge causation and assign responsibility. In several contexts, the model can predict the likelihood that people will assign causal responsibility to an object (such as a billiard ball) or a social agent (such as a character in a computer game).

The journal Trends in Cognitive Sciences recently invited Gerstenberg to write a review of his work on causal cognition. That publication inspired the following conversation in which Gerstenberg describes what people should understand about causal cognition; how his research relates to AI; the risks and benefits of using AI to simulate causal scenarios for assigning blame in a legal context; and how that capability might lead to various ethical or societal problems.

What do you want people to understand about the nature of causal cognition?

I’m postulating that when people make causal judgments or assign responsibility, they’re not just contemplating what happened or what they saw. In fact, they are regularly going beyond the here and now to imagine how things could have happened differently. The process of thinking about counterfactual possibilities is key for explaining how people make causal judgments in both physical and social contexts.

In fact, my research shows that the same process of using counterfactual reasoning is going on if we’re judging whether a billiard ball causes another billiard ball to go into a hole, whether a block supports another in a stack, or whether a person helps or hinders another person in a social setting.

You can think of counterfactual thinking as a domain-general ability that we have, and how it plays out depends very much on the domain that we’re applying it to. So, if I’m thinking about the physical domain – such as with billiard balls – my understanding of what would have happened in the absence of the balls colliding is driven by my mental model of how forces interact in the physical world. Whereas when I’m thinking about whether a person was helped by another person, I’m using my intuitive understanding of psychology and my mental model of how somebody would have acted in the absence of someone else’s help.
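The core comparison Gerstenberg describes — judge a candidate cause by simulating the world with and without it — can be sketched as a toy Monte Carlo program. This is a minimal illustration, not Gerstenberg's actual model; the "physics" and probabilities here are invented for the example:

```python
import random

def simulate(world, remove_candidate=False):
    """Toy billiard world: did ball B end up in the hole?
    If the candidate cause (ball A) is removed, B only moves if it
    was already rolling on its own. All dynamics are invented."""
    if remove_candidate:
        return world["b_initial_motion"]
    # Actual world: A's collision knocks B in with some probability.
    return world["b_initial_motion"] or random.random() < world["hit_prob"]

def counterfactual_cause_strength(world, n=10_000):
    """How much difference did A make?
    P(outcome | actual world) - P(outcome | A removed)."""
    actual = sum(simulate(world) for _ in range(n)) / n
    counterfactual = sum(simulate(world, remove_candidate=True) for _ in range(n)) / n
    return actual - counterfactual

random.seed(0)
# B was at rest and A almost always knocks it in: A is judged a strong cause.
print(counterfactual_cause_strength({"b_initial_motion": False, "hit_prob": 0.95}))
```

If B was already rolling toward the hole on its own (`b_initial_motion=True`), the two simulations agree and the strength drops to zero, matching the intuition that A made no difference.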

Another important point I’m making is that in prior work about determining whether a person is helping or hindering another person, researchers have been more interested in inferring people’s intentions than inferring causality. And there’s a difference between inferring whether somebody had the intention to help or hinder and judging whether they actually were helpful. I like to use the example of going grocery shopping with a small child who puts things into the basket with the clear intention of helping, but they’re not actually helping in the sense of making the shopping experience more efficient. In fact, it might have been easier without them. For judging the child’s intentions, I just look at their actions. But for determining whether they were actually helpful, I need counterfactuals.

So, while causality is a phenomenon that people have been interested in, they may not have recognized that part of what’s underlying this phenomenon is this process of counterfactual thinking. And I’m saying that a whole host of varied phenomena rely on the idea that we as humans build mental models of the world that allow us to imagine changing what happened, and then play out how such a change might have created a different situation. And these capacities are quite important for making causal judgments.

What does your research about causality tell us about AI?

If we want to develop AIs that in important ways emulate the way humans think about the world, they will likely need to be able to have these kinds of causal reasoning capacities too – mental models of the world that allow them to evaluate, step by step, how things might have played out differently.

For example, if we ask an AI, “Why did this happen?” and get an answer, we humans will interpret the explanation by imagining the counterfactual possibilities. Essentially, if the AI tells me that A caused B, we will understand that to mean that if A had not been the case, B would not have happened. But if the AI doesn’t share that same mental model – if it basically doesn’t use the phrase “why did this happen” in the way we humans do, and if it doesn’t have the ability to do counterfactual reasoning – it will not explain things to us in a way that will make sense to us.

AIs will need to understand causality at the right level of abstraction as well. For example, there’s a problem we call causal selection: In principle, a counterfactual account could go back to the Big Bang. And while the Big Bang is a cause of everything, it’s also the cause of almost nothing. In any given situation, there are a lot of things that could count as a cause. But the things that humans identify as the cause or causes are a much smaller, more pragmatic subset.

In legal settings there are additional criteria for causation such as the sine qua non or “but for” test, which is just a counterfactual: But for the defendant’s action of speeding through a red light, the plaintiff would not have been injured in the crosswalk. There’s already interest in using AIs to simulate alternative scenarios in these kinds of cases. And I’m trying to make the legal theories of causation a bit more precise by building computational models that predict how people judge causation – and helping with the translation of some of these ideas into AI.

If AIs do become adept at judging causation, how might that ability be either useful or problematic in a legal context?

Imagine a case where a car accident happened and the prosecution claims that it was caused by the defendant speeding. And suppose the prosecutors use AI to simulate a counterfactual scenario showing what would have happened if the defendant wasn’t speeding. The jury might find the simulation convincing (because seeing is believing), even though the prosecutors or defense attorneys could have constructed 100 different potential scenarios with different endings. What’s missing here is information about the uncertainty involved in generating the simulation. And that could be dangerous or misleading in a legal context.

On the other hand, we’ve suggested in a recent paper that perhaps this problem could be overcome if each legal team had a legal fact finder who generates multiple counterfactual simulations and includes information about uncertainty. So, for example, the legal fact finder would testify that based on their simulations and video analysis, there’s a 98% chance that the car speed was a contributory cause.
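The fact-finder idea above — run many counterfactual simulations and report a probability rather than a single convincing video — can be sketched as follows. Everything here is an assumption for illustration: the stopping-distance formula is a rough rule of thumb, and the noise term is a stand-in for reconstruction uncertainty, not a real accident-reconstruction model:

```python
import random

def crash_occurs(speed_mph, noise_ft, reaction_s=1.2, distance_ft=150.0):
    """Toy model: does the car fail to stop before the crosswalk?
    Stopping distance = reaction distance + braking distance (dry-road
    rule of thumb, v^2/20 in feet), plus shared reconstruction noise."""
    fps = speed_mph * 1.467                 # mph -> ft/s
    braking_ft = speed_mph ** 2 / 20.0
    stopping_ft = fps * reaction_s + braking_ft
    return stopping_ft + noise_ft > distance_ft

def contributory_cause_probability(actual_speed, legal_speed, n=10_000):
    """Among simulated runs where the crash happened, how often does the
    'but for' counterfactual hold (no crash at the legal speed)?
    The noise is sampled once per run so both worlds share it."""
    but_for = crashes = 0
    for _ in range(n):
        noise = random.gauss(0, 10)
        if crash_occurs(actual_speed, noise):
            crashes += 1
            if not crash_occurs(legal_speed, noise):
                but_for += 1
    return but_for / max(crashes, 1)

random.seed(42)
# Close to 1.0 under these invented numbers: speeding was almost
# certainly a but-for cause of the crash.
print(contributory_cause_probability(55, 35))
```

Sampling the noise once per run matters: when the actual and legal speeds are equal, both worlds behave identically and the estimated contributory-cause probability is exactly zero, as it should be.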

Insurance companies might actually find this type of tool quite useful as well because they often deal with the question of fault. People are already doing 3D reconstruction using video from smartphones and CCTV cameras. It’s conceivable that in the not too distant future, claims adjusters could combine such 3D reconstructions with the simulation of counterfactuals to determine who’s at fault.
 