New laws are urgently needed to keep powerful open-source tools from being picked up by malicious actors
Earlier this year, Facebook’s parent company, Meta, granted a researcher access to incredibly potent artificial intelligence software – and the researcher leaked it to the world. As a former researcher on Meta’s civic integrity and responsible AI teams, I am terrified by what could happen next.
Though Meta was the victim of the leak, it came out the winner: researchers and independent coders are now racing to improve on LLaMA (Large Language Model Meta AI, Meta’s branded version of a large language model, or LLM, the type of software underlying ChatGPT) or to build on top of it, with many sharing their work openly with the world.