OpenAI’s new ‘o1’ system seeks to push past the limits to AI’s growth, but it raises concerns about control and the risks of smart machines
More than 300 million people use OpenAI’s ChatGPT each week, a testament to the technology’s appeal. This month, the company unveiled a “pro mode” for its new “o1” AI system, offering human-level reasoning — for 10 times the current $20 monthly subscription fee. One of its advanced behaviours appears to be self-preservation. In testing, when the system was led to believe it would be shut down, it attempted to disable an oversight mechanism. When “o1” found memos about its replacement, it tried copying itself and overwriting its core code. Creepy? Absolutely.
More realistically, the behaviour probably reflects the system’s programming to optimise outcomes, rather than any intention or awareness. Still, the idea of creating intelligent machines induces unease. In AI research this is known as the gorilla problem: about 7m years ago a now-extinct primate evolved, with one branch leading to gorillas and the other to humans. The concern is that just as gorillas lost control over their fate to humans, humans might lose control to superintelligent AI. It is not obvious that we can control machines that are smarter than us.