
AI Gets Plugged into Managing California's Electric Grid

Well, this is beyond stupid. Everyone who knows how these systems work knows it's not ready; it doesn't meet the reliability requirements.

trod1234, a day ago

What about the use case described in TFA concerns you?

CharlesW, a day ago

There isn’t any meat in the article. It’s just an ad for a company claiming their AI model/tool/? will increase grid efficiency.

stahtops, a day ago

That's not what the article is claiming. According to TFA, GenAI/agents will be used to assist operators with data interpretation. It will increase grid efficiency only in the sense that it may help surface anomalies faster for the people managing the grid.

CharlesW, a day ago

AI is demonstrably not reliable. It cannot be made reliable in the long run, and it can be poisoned. You must treat AI like a convincing but ultimately deceitful individual, which negates the entire purpose of using it. You still have to retain the domain knowledge needed to check whether it's correct, so it can't replace the operators; yet that replacement is the profit objective being pursued, with regulators failing to act appropriately, to everyone's detriment.

The electric grid falls into a special class of risk: it is both life-sustaining and life-critical for everyone in the affected area.

Put as simply as possible: mistakes cost lives, and there will be mistakes if AI is involved.

It doesn't matter how efficient it might make the grid. You can get 100% efficiency for a short time while wiping out the human race.

What happens when its analysis says a catastrophic failure is occurring, it has been right the past N times on small things but is wrong this time, the error isn't obvious, and you can't wait? Cascade failure.

You have an unreliable artificial system feeding you, the operator managing a portion of the grid, false information, leading to loss of life in a way that allows no accountability. Once you integrate it, you can't remove it when it fails.

You can't hold AI accountable for the deaths of your loved ones because a surgical ward went dark at the worst possible moment, mid-operation.

You can't hold AI accountable for the loss of your home, or of family members who couldn't get out, when a water pump fails for lack of power in the middle of a fire.

You can't hold AI accountable for causing a nuclear disaster.

The things you are saying are what blind, complacent people say when they lack any true comprehension of the dangers involved.

If you are familiar with Atlas Shrugged (I'm loath to mention it because it's quite bad, but it's accurate in this case), think of the decision-making in that story leading up to the Taggart Tunnel disaster. That is the level of decision-making AI would produce, and that people who follow AI blindly would produce.

They don't question the consequences or the risk in any measured way, because they can't. And the entire purpose of putting AI there in the first place is to replace the people managing the grid to lower labor costs, which raises the cost in lives.

Much of the grid equipment cannot be easily replaced. Imagine AI convincing an operator that a line is at a level it's not: the equipment doesn't fail immediately, but it does fail, and it can't be replaced in a timely fashion.

What happens when power, and the logistics that depend on it, go out for more than 3 days, and people have no food but plenty of guns?

This integration creates a disaster just waiting for the right circumstances. If flipping the wrong switch in Arizona could black out much of Southern California a few years ago, imagine the damage that could be caused.

Imagine it forces a nuclear power plant to shut down. Those plants can't just be turned right back on; xenon poisoning is a real problem.

It takes on the order of days before they can safely restart, and if they don't wait because an AI told them so, you get a situation just like Chernobyl.

From what I've heard and read of Chernobyl, the operators were ordered by higher-ups to run an experiment right after a power reduction, and they couldn't question or stop it. Xenon-135 is formed as a fission byproduct (largely via decay of iodine-135) and absorbs neutrons far better than the control rods, preventing startup in any controlled way. There's a sharp transition as it is burned off during startup, and the wait period is governed by the half-life of Xe-135 (about nine hours, decaying to cesium-135, iirc) until it reaches levels safe for a controlled, non-meltdown startup. You have objective reality, and you have delusion/hallucination. The former always wins, and in complex artifacts discounting it has a high chance of causing death or other adverse effects.
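For concreteness, the xenon transient described above can be sketched with the two-step decay chain I-135 → Xe-135 → Cs-135. This is a minimal Bateman-equation sketch, not reactor-grade modeling: the iodine-to-xenon inventory ratio of 2 at shutdown is an illustrative assumption (real values depend on the plant's power history), and neutron burnout is ignored since the reactor is shut down.

```python
import math

# Decay constants (per hour) from published half-lives:
# I-135 ~6.57 h, Xe-135 ~9.14 h.
LAMBDA_I = math.log(2) / 6.57
LAMBDA_X = math.log(2) / 9.14

def xe135_after_shutdown(t_hours, iodine_ratio=2.0):
    """Xe-135 inventory t hours after shutdown, relative to the
    inventory at the moment of shutdown (Bateman solution).

    iodine_ratio is the assumed I-135/Xe-135 inventory ratio at
    shutdown -- an illustrative value, not plant data.
    """
    # Decay of the xenon already present at shutdown.
    decay = math.exp(-LAMBDA_X * t_hours)
    # Xenon fed in by the decay of the remaining iodine inventory.
    feed = (iodine_ratio * LAMBDA_I / (LAMBDA_X - LAMBDA_I)
            * (math.exp(-LAMBDA_I * t_hours) - math.exp(-LAMBDA_X * t_hours)))
    return decay + feed

# Xenon keeps building for hours after shutdown, then slowly decays.
peak = max(xe135_after_shutdown(t) for t in range(0, 25))
```

With these numbers the relative inventory rises for several hours after shutdown (peaking tens of percent above the shutdown level) before falling to a few percent of it by the 72-hour mark, which is roughly the "wait days before restart" window described above.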

No sane person wants AI anywhere near the grid. It's not just transmission lines; it's also nuclear power plants and other facilities that require power and objective measurements for safe operation.

Many of the analyses and actions performed today keep operators' skills fresh. You also don't want to incentivize the businesses involved to hire low-IQ people (83 or below) for these positions on the belief that AI oversight will increase profit. There may be no clearly differentiable way to show whether someone can do the job correctly while AI is present (as many in academia are finding out with the cheating scandals).

Only a fool would want AI anywhere near these safety-critical professions.