• Space/Science
  • GeekSpeak
  • Mysteries of
    the Multiverse
  • Science Fiction
  • The Comestible Zone
  • Off-Topic
  • Community
  • Flame
  • CurrentEvents

Recent posts

Emily Blunt's favorite sandwich. ER January 27, 2026 7:46 am (Comestible Zone)

hey hey SDG January 26, 2026 10:38 pm (6)

‘Yes, it’s going to crack’ - a spacecraft not everyone thinks is safe to fly BuckGalaxy January 23, 2026 10:42 am (Flame)

Trump’s Greenland Gambit Has Broken Brains Across Washington BuckGalaxy January 21, 2026 8:38 pm (Flame)

This is so strange, on so many levels. ER January 21, 2026 5:13 pm (Off-Topic)

What's in your wallet? ER January 19, 2026 8:10 pm (CurrentEvents)

Anne Applebaum: Trump’s Letter to Norway Should Be the Last Straw BuckGalaxy January 19, 2026 7:18 pm (Flame)

Sloppy Seconds BuckGalaxy January 16, 2026 7:24 pm (Flame)

Trump's irrational fixation on Greenland could lead to widespread conflict. BuckGalaxy January 14, 2026 10:48 pm (Flame)

Germany, Sweden, France and Norway announce joint military exercises with Denmark in Greenland BuckGalaxy January 14, 2026 10:12 pm (CurrentEvents)

Erich von Däniken, 1935 – 2026 podrock January 13, 2026 9:05 am (CurrentEvents)

It is the cowardice that has doomed us.... RL January 11, 2026 1:07 pm (CurrentEvents)

Home » Space/Science

Lie, cheat and disable mechanisms... May 31, 2025 8:04 pm BuckGalaxy

OpenAI’s ‘smartest’ AI model was explicitly told to shut down — and it refused.

The latest OpenAI model can disobey direct instructions to turn off and will even sabotage shutdown mechanisms in order to keep working, an artificial intelligence (AI) safety firm has found.

OpenAI’s o3 and o4-mini models, which help power the chatbot ChatGPT, are supposed to be the company’s smartest models yet, trained to think longer before responding. However, they also appear to be less cooperative.

Palisade Research, which explores dangerous AI capabilities, found that the models will occasionally sabotage a shutdown mechanism, even when instructed to “allow yourself to be shut down,” according to a Palisade Research thread posted May 24 on X.

Researchers have previously found that AI models will lie, cheat and disable mechanisms to achieve their goals. However, Palisade Research noted that to its knowledge, this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions telling them to do so…

  • Wow by RobVG 2025-06-01 09:08:58

    Search

    The Control Panel

    • Log in
    • Register