New laws are urgently needed to keep powerful open-source tools from being picked up by malicious actors
A researcher was granted access earlier this year by Facebook’s parent company, Meta, to incredibly potent artificial intelligence software – and leaked it to the world. As a former researcher on Meta’s civic integrity and responsible AI teams, I am terrified by what could happen next.
Though Meta suffered the leak, it came out the winner: researchers and independent coders are now racing to improve on or build upon LLaMA (Large Language Model Meta AI, Meta's branded version of a large language model, or LLM – the type of software underlying ChatGPT), with many sharing their work openly with the world.