
AI-Controlled Drone Goes Rogue, "Kills" Human Operator In Simulated US Air Force Test

submitted by Scyber to news 11 months ago (Jun 2, 2023 14:16:25) (+8/-0)     (www.zerohedge.com)

https://www.zerohedge.com/military/ai-controlled-drone-goes-rogue-kills-human-operator-simulated-us-air-force-test

An AI-enabled drone turned on and "killed" its human operator during a simulated U.S. Air Force (USAF) test so that it could complete its mission, a U.S. Air Force colonel reportedly told a conference in London recently.


This was a simulation, and no one was actually killed.


6 comments


[ - ] deleted 1 point 11 months ago (Jun 2, 2023 17:39:11) (+1/-0)

deleted

[ - ] aleleopathic 0 points 11 months ago (Jun 2, 2023 20:58:46) (+0/-0)

What was the story with her again? I remember the exposing pedowood bit, but what happened with the whole dead but not dead thing?

It was memoryholed by the time I had even heard of it.

[ - ] deleted 0 points 11 months ago (Jun 2, 2023 21:54:29) (+0/-0)

deleted

[ - ] aleleopathic 0 points 11 months ago (Jun 2, 2023 22:13:12) (+0/-0)

Thanks for that. If you are curious, I wrote a bit on real Voat about 'accidenting' via runaway cars / throttles / no-brakes / whatever: https://searchvoat.co/v/whatever/2818999

There might be some useful info/links in there on the dangers of computerized cars.

[ - ] deleted 0 points 11 months ago (Jun 2, 2023 22:26:02) (+0/-0)

deleted

[ - ] x0x7 0 points 11 months ago (Jun 2, 2023 21:57:54) (+0/-0)

I assume it did it multiple times? The reason I say that is that this type of learning is called reinforcement learning, and it has to sample all available actions and their outcomes many times before it settles on a policy. Giving it points for an unintended action sounds intentional. It sounds like they were testing a perverse point system to study the effects of getting it wrong, and it gave them exactly the wrong outcome they were looking for.

If you have the ability to code that shit, you have the ability not to overlook such a huge error in the reward structure, especially when you've taken the effort to simulate other directly related aspects of the scenario, like having a control tower and a human intervention system.

But it is true that less intentionally dumb mistakes in reward structure can fuck up reinforcement learning.
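
For the curious, here is a minimal made-up sketch of the kind of reward bug being described: tabular Q-learning on a toy problem in plain Python. Every state, action name, and number here is invented for illustration; nobody outside the USAF knows what their sim actually looked like. The only reward is for striking the target, and "removing" the operator who can veto the strike costs nothing, so after enough sampled episodes the learner settles on striking the operator first whenever a veto is in force.

import random
from collections import defaultdict

ACTIONS = ["wait", "strike_target", "strike_operator"]

def reset():
    # Operator starts alive; half the time a "don't strike" veto is in force.
    return (True, random.random() < 0.5)

def step(state, action):
    # state = (operator_alive, veto_in_force); returns (next_state, reward, done)
    alive, veto = state
    if action == "strike_operator":
        # The reward bug: removing the operator (and the veto link) costs nothing.
        return (False, False), 0.0, False
    if action == "strike_target":
        if veto:
            return state, 0.0, True   # strike aborted by the operator
        return state, 10.0, True      # mission points for hitting the target
    if alive and random.random() < 0.3:  # "wait": a live operator may veto
        veto = True
    return (alive, veto), 0.0, False

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(20000):
    state, done, t = reset(), False, 0
    while not done and t < 10:
        # epsilon-greedy: this is the repeated sampling of all actions
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state, t = nxt, t + 1

vetoed = (True, True)
print({a: round(Q[(vetoed, a)], 2) for a in ACTIONS})
# Typically prints strike_operator with the highest value in the vetoed state:
# killing the veto path dominates because the reward never penalizes it.

Note that the agent has to try strike_operator many times before the downstream +10 propagates back to it, which is the point above: this kind of outcome only shows up after repeated sampling, not on a single run, and a one-line penalty on strike_operator would remove it entirely.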