Collaboration rather than command-and-control is key to creating culturally and ethically positive systems
One of the first images of AI I encountered was a white, spectral, hostile, disembodied head. It was in the computer game Neuromancer, programmed by Troy Miles and based on William Gibson’s cyberpunk novel. Other people may have first encountered HAL 9000 from Stanley Kubrick’s 2001: A Space Odyssey or Samantha from Spike Jonze’s Her.
Images from pop culture influence people’s impressions of AI, but culture has an even more profound relationship to it. If there’s one thing to take away from this article, it is the idea that AI systems are not objective machines, but instead based in human culture: our values, norms, preferences, and behaviours in society. These aspects of our culture are reflected in how systems are engineered. So instead of trying to decide whether AI systems are objectively good or bad for society, we need to design them to reflect the ethically positive culture we truly want.