Exclusive: Guardian testing reveals AI-powered search tools can return false or malicious results if webpages contain hidden text
OpenAI’s ChatGPT search tool may be open to manipulation using hidden content, and can return malicious code from websites it searches, a Guardian investigation has found.
OpenAI has made the search product available to paying customers and is encouraging users to make it their default search tool. But the investigation has revealed potential security issues with the new system.
More Stories
Aston Martin limits exports to US because of Trump tariffs
EU microchip strategy ‘deeply disconnected from reality’, say official auditors
TikTok fined €530m by Irish regulator for failing to guarantee China would not access user data