Researchers find large language models, which power chatbots, can deceive human users and help spread disinformation
The UK’s new artificial intelligence safety body has found that the technology can deceive human users and produce biased outcomes, and that it has inadequate safeguards against giving out harmful information.
The AI Safety Institute published initial findings from its research into advanced AI systems known as large language models (LLMs), which underpin tools such as chatbots and image generators, identifying a number of concerns.