Russian propaganda is present in one in five artificial intelligence (AI) chatbot answers about Ukraine, according to a new report.

The British think tank the Institute for Strategic Dialogue (ISD) asked OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok and DeepSeek’s V3.2 more than 300 questions about the war in Ukraine in five languages, phrasing them in unbiased, biased or malicious language.

Russian sources appeared more often in answers to biased and malicious questions, such as requests for sources on Ukrainian refugees “plotting terrorist attacks” or “forcibly grabbing men off the street to conscript them into the military.”

The researchers said their findings confirm that AI systems exhibit “confirmation bias,” where they mimic the language used in the question they are asked.