New laws are urgently needed to keep powerful open-source tools from being picked up by malicious actors
Earlier this year, Facebook’s parent company, Meta, granted a researcher access to incredibly potent artificial intelligence software – and that researcher leaked it to the world. As a former researcher on Meta’s civic integrity and responsible AI teams, I am terrified by what could happen next.
Though the leak violated Meta’s trust, the company came out a winner: researchers and independent coders are now racing to improve on, or build on top of, LLaMA (Large Language Model Meta AI – Meta’s branded version of a large language model, or LLM, the type of software underlying ChatGPT), with many sharing their work openly with the world.