Exclusive: Guardian testing reveals AI-powered search tools can return false or malicious results if webpages contain hidden text
OpenAI’s ChatGPT search tool may be open to manipulation using hidden content, and can return malicious code from websites it searches, a Guardian investigation has found.
OpenAI has made the search product available to paying customers and is encouraging users to make it their default search tool. But the investigation has revealed potential security issues with the new system.