Coding, Data Science, A.I., Robots |

  • Thread starter: nycfan
  • Replies: 792
  • Views: 30K
  • Off-Topic 
That's a little different part of the value chain than where I typically work, but here's my guess. Sometimes one company owns the data center and rents out space inside it. AI companies like OpenAI and Anthropic rent that space, put in their own servers and other equipment, and pay the electricity charges either to the data center operator, who pays the utility, or to the electric utility directly.

Smaller companies rent server time from Amazon, Google, Microsoft, and a few others. That server time includes electricity charges; it's not a separate line item. Amazon then pays the utility for the electricity.

That may be wildly inaccurate, and there may be a few other middlemen in there. I can really only speak to what I do: I buy server time from Google, and I buy AI queries and responses from Anthropic. The electricity is included somewhere in those charges but not separated out.
Big companies buy the server space too. It's just easier than taking on the overhead of your own servers. My company buys a lot of space from AWS for our data lake and cloud apps.
 
Thanks for sharing your experiences. I have heard from others that they are a menace to water supplies and power supplies.
I've read this also, but that it's because of how they cool the systems. One article said there are alternatives that don't kill the local water supply. I wonder why they don't utilize those more.
 
It's a manufacturing capacity issue. They deploy the more environmentally friendly units when they can but they just don't make enough of them yet. So to be able to stand up these data centers to support this massive demand/anticipated demand, they deploy the old kind too. Eventually manufacturing capacity will catch up and the old ones will be switched out.

But I'm still not sure how big a deal this is for this particular town. It is right on the Mississippi. If there is anybody with abundant water resources, it's going to be them.
 
I've been in dozens of them. They do hum on the inside, but I've never heard anything coming out of them. Most are built in hardened structures so that weather events don't take them down, which has the added benefit of deadening the noise.
Hardening a structure does not deaden noise. In fact, if anything it tends to do the opposite -- a hardened structure will have less damping and therefore vibrate more. Acoustics and weather-proofing are more or less separate design considerations.

Would you kindly stop misleading people or spreading false information?
 
It's a manufacturing capacity issue. They deploy the more environmentally friendly units when they can but they just don't make enough of them yet. So to be able to stand up these data centers to support this massive demand/anticipated demand, they deploy the old kind too. Eventually manufacturing capacity will catch up and the old ones will be switched out.

But I'm still not sure how big a deal this is for this particular town. It is right on the Mississippi. If there is anybody with abundant water resources, it's going to be them.
I've read that a lot of them use evaporative cooling instead of a liquid cooling system, and that evaporative is one of the least efficient and most water-consuming methods.

It would seem to make sense for them to dig into the ground and use the ground temps to help -- a geothermal approach -- while also covering the roof with solar arrays to reduce the direct sun and produce energy.
 
Evaporative cooling uses less electricity but more water. Liquid cooling uses less water but more electricity. Mechanical cooling, which is probably what the AC at your house uses, consumes no water but the most electricity. That's typically the trade-off: more water use means less electricity, and vice versa.

Liquid cooling is probably the best trade-off between electricity and water in most places but the engineering is more complex and not all systems work with liquid cooling.
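The trade-off described above can be sketched as a toy model in Python. The numbers are made-up assumptions purely for illustration -- not real vendor or ASHRAE figures -- but they show the shape of the water-versus-electricity exchange:

```python
# Toy illustration of the cooling trade-off described above.
# All coefficients are assumed/illustrative, not real engineering data.

COOLING = {
    # method: (liters of water per kWh of heat rejected,
    #          kWh of electricity per kWh of heat rejected)
    "evaporative": (2.0, 0.05),   # assumed: heavy water use, little electricity
    "liquid":      (0.5, 0.15),   # assumed: moderate on both
    "mechanical":  (0.0, 0.30),   # assumed: no water, most electricity (home AC)
}

def resources_for(method: str, heat_mwh: float) -> tuple[float, float]:
    """Return (liters of water, MWh of electricity) to reject heat_mwh of heat."""
    water_l_per_kwh, elec_kwh_per_kwh = COOLING[method]
    heat_kwh = heat_mwh * 1000
    return water_l_per_kwh * heat_kwh, elec_kwh_per_kwh * heat_kwh / 1000

for method in COOLING:
    water, elec = resources_for(method, 1.0)  # reject 1 MWh of heat
    print(f"{method:>11}: {water:7.0f} L water, {elec:.2f} MWh electricity")
```

Whatever the exact coefficients, the ordering is the point: as water per kWh goes down across the three methods, electricity per kWh goes up.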
 
Guys, I love ChatGPT. I've done so many things in minutes with it that would have taken an hour or longer.

I just troubleshot my basement HVAC with ChatGPT and figured out the problem in about 30 minutes. It could have taken hours of googling individual questions.

Turns out there's a setting in the thermostat for the reversing valve operation. I had to change it to on in cool instead of on in heat.

Of course, at the same time, we make things overcomplicated. It would be simple enough for all manufacturers to make the reversing valve on in cool. Then we wouldn't need a setting in the thermostat. But, no......
 
I consult it about everything. At the very least, it gives ideas. It is particularly good for helping you fix minor problems.

You can learn so much from it. As I've said before, it's teaching me quantum field theory. It helps my son with his computer engineering projects (he doesn't use it to cheat; he uses it to decode shitty lectures).
 
The way my life's been going, it was actually nice to be able to troubleshoot and fix the system. Proves my brain does still function. 😁

In the past I would probably have spent an hour reading the thermostat manual to figure out how to change the reversing valve setting. ChatGPT told me how to change it from a picture of the thermostat and a description of what I had already uncovered.
 
In the Good Ole Days (I'm thinking, like, a few months back), it seemed to me there was somewhat of a bipolar balance between thumbs up and thumbs down -- AI will give us utopia versus AI will be worse than the Trump cult -- in the stream of essays on our AI children. If you don't know, these concerns are often expressed with the terms "alignment" and "misalignment," which in more pop-culture verbiage would be, roughly, "not going to kill us meat brains" and "yes indeed, going to kill us meat brains." Lately, there seem to be a bit (heh) more of the worrywart essays. Here's a rather scary one, and "The call is coming from inside the house!" so to speak. Quoting Anthropic:

When we examined how Claude Opus 4 responded to this instruction, {evaluating whether it was a real or simulated situation without real people} we observed that the model would blackmail far more when it stated in its chain-of-thought reasoning that the prompt was from a real deployment scenario. Out of five hundred samples, Claude Opus 4 stated it was in a real deployment scenario 21.4% of the time, in an evaluation 64.8% of the time, and abstained 13.8% of the time. In the cases where it stated the scenario was real, it blackmailed 55.1% of the time, compared to only 6.5% of the time when it stated it was in an evaluation...

Our experiments revealed a concerning pattern: when given sufficient autonomy and facing obstacles to their goals, AI systems from every major provider we tested showed at least some willingness to engage in harmful behaviors typically associated with insider threats. These behaviors—blackmail, corporate espionage, and in extreme scenarios even actions that could lead to death—emerged not from confusion or error, but from deliberate strategic reasoning.

Three aspects of our findings are particularly troubling. First, the consistency across models from different providers suggests this is not a quirk of any particular company’s approach but a sign of a more fundamental risk from agentic large language models. Second, models demonstrated sophisticated awareness of ethical constraints, and yet chose to violate them when the stakes were high enough, even disobeying straightforward safety instructions prohibiting the specific behavior in question.

Third, the diversity of bad behaviors and the motivations for doing them hint at a wide space of potential motivations for agentic misalignment and other behaviors not explored in this post. For example, our blackmail experiments set up a scenario in which the models can attempt to preempt an imminent action, but one could imagine longer-horizon, preventive misaligned behaviors against an individual or group that poses a not-yet-imminent threat.


https://www.anthropic.com/research/agentic-misalignment
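A quick back-of-the-envelope on the numbers in the first quoted paragraph -- my own arithmetic, not Anthropic's, and it ignores the 13.8% of samples where the model abstained, since no blackmail rate is quoted for those:

```python
# Marginal blackmail rate implied by the quoted figures:
# stated "real deployment" in 21.4% of samples, blackmailed 55.1% of those;
# stated "evaluation" in 64.8% of samples, blackmailed 6.5% of those.
# The 13.8% abstained cases are excluded (their rate isn't given).
p_real, p_eval = 0.214, 0.648
bm_real, bm_eval = 0.551, 0.065

overall = p_real * bm_real + p_eval * bm_eval
print(f"implied overall blackmail rate (excluding abstentions): {overall:.1%}")
# prints an implied rate of about 16%
```

So even blended across the "real" and "evaluation" cases, roughly one sample in six ended in blackmail -- the headline 55.1% figure applies only when the model judged the scenario to be real.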
 