New laws are urgently needed to keep powerful open-source tools from being picked up by malicious actors
A researcher was granted access earlier this year by Facebook’s parent company, Meta, to incredibly potent artificial intelligence software – and leaked it to the world. As a former researcher on Meta’s civic integrity and responsible AI teams, I am terrified by what could happen next.
Though the leak violated Meta's terms, Meta has emerged as the winner: researchers and independent coders are now racing to improve on, or build on top of, LLaMA (Large Language Model Meta AI – Meta's branded version of a large language model, or LLM, the type of software underlying ChatGPT), with many sharing their work openly with the world.