The Netherlands’ data protection watchdog has cautioned citizens against consulting artificial intelligence on how to vote, warning that popular chatbots provide a “highly distorted and polarised view” of politics.

The Dutch Data Protection Authority said on Tuesday that an increasing number of voters were using AI to help decide who to vote for, despite the models offering “unreliable and clearly biased” advice.

The watchdog issued the warning as it released the results of tests conducted on four popular chatbots – ChatGPT, Gemini, Mistral, and Grok – in the run-up to parliamentary elections on October 29.

The research found that the chatbots more often recommended parties on the fringes of the political spectrum when asked to identify the three choices that best matched a user’s political preferences.