Artificial intelligence has started to appear almost everywhere in our lives. Often without realising it, we enjoy its benefits, such as the speedier discovery of new drugs or the more personalised medicine that results from combining data with expert judgement. Generative AI, which enables the fast creation of content and automates summarisation and translation via tools such as ChatGPT, DeepSeek and Claude, is its most popular form, but AI is much broader: its techniques, drawn mainly from machine learning, statistics and logic, generate decisions and predictions guided by goals set by users.
But while AI technology can certainly help accelerate research processes that previously took years, AI tools, principles and strategies can also be misused, for instance to identify components of biochemical weapons. AI enables new technologies such as autonomous vehicles, yet their vision systems can be fooled, potentially turning the vehicles themselves into weapons.
AI risks are varied and specific, and need to be understood as such, not least because AI systems are complex and adapt over time, which makes them harder to predict. The risks begin with the data used to train the underlying models: biased data leads to biased outputs. Adversaries can also use AI to automate attacks at greater speed and scale. When AI controls critical systems, a single attack can have far-reaching consequences. And because AI tools are widely accessible, the barrier to causing harm with them is low.
Threats to elections and healthcare
Earlier this year, the World Economic Forum listed “adverse outcomes of AI technologies” among the threats in its Global Risks Report, citing these technologies’ potential to disrupt geopolitical stability, public health and national security.
The risks to geopolitics are closely tied to elections. With AI, opportunities to misinform people have exploded: in a few clicks, a user can generate fake social media profiles and craft false content whose language is targeted to manipulate with alarming precision. The first round of Romania’s 2024 presidential election was annulled because of blatant foreign interference through social platforms. In the medium term, these risks will only intensify as AI evolves.
AI’s financial impacts can’t be ignored either. AI-generated fake news is being weaponised to manipulate markets, influencing investors and moving stock prices. For instance, an AI-generated image of a blast near the Pentagon in 2023, circulated just after US markets opened, reportedly led to a drop in some stocks’ value. And adversarial attacks – attempts to trick a machine learning model by subtly altering its input data to cause incorrect outputs – have been shown to manipulate AI-based credit scoring models, leading loan systems to approve unqualified applicants.
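To make the idea concrete, here is a minimal, illustrative sketch of one well-known attack of this kind, the fast gradient sign method (FGSM), written in Python with PyTorch. The linear “scoring model” and random inputs are placeholders standing in for a real credit-scoring system, not a reproduction of any actual attack.

```python
# Illustrative FGSM sketch: nudge each input feature slightly in the
# direction that raises the model's loss, so the prediction may flip.
# Model and data below are toy placeholders, not a real system.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, true_label, epsilon=0.1):
    """Return a copy of x perturbed to increase the model's loss on true_label."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), true_label)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage: a stand-in linear "scoring model" over 20 applicant features.
model = torch.nn.Linear(20, 2)
x = torch.randn(1, 20)
label = torch.tensor([1])
x_adv = fgsm_perturb(model, x, label)
print(model(x).argmax().item(), model(x_adv).argmax().item())
```

The point of the sketch is simply that a barely perceptible change to the input, computed from the model’s own gradients, can be enough to alter its decision.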
AI can also pose serious threats to healthcare systems. Recall the Covid-19 pandemic, when misinformation about vaccines and lockdowns spread rapidly and fuelled resistance in certain communities. Beyond this, AI health systems trained on biased data can produce discriminatory results, denying treatment to underrepresented populations. For instance, a recent Cedars-Sinai study found that several major large language models (LLMs) “often proposed inferior treatments” when a psychiatric patient was “explicitly or implicitly indicated” as African American. Lastly, AI has also enlarged the attack surface of hospitals, making them prime targets for cyberattacks.
We should also pay attention to national security issues posed by AI. The war in Ukraine exemplifies them. Think about the increased military relevance of drones in this conflict, many of them powered by AI tools. Sophisticated AI-powered attacks have disabled energy grids and transportation networks. AI-backed misinformation has been disseminated to fool enemies, manipulate public perception and shape the war narrative. Without a doubt, AI is redefining the traditional domains of warfare.
AI’s impact also extends into the societal domain, due to the technological supremacy of certain countries and companies, and the environmental domain, due to the energy consumed by generative AI. These impacts complicate an already fragile global landscape.
A road to safer AI
AI’s risks are evolving and, if left unchecked, could have catastrophic consequences. Yet if we act urgently and wisely, we need not fear AI.
As individuals, we can play a powerful role by engaging securely with AI systems and adopting safe practices. This starts with choosing a provider that complies with applicable security standards, AI- and industry-specific regulations, and the principle of data privacy. The provider should also be working to mitigate bias and to withstand adversarial attacks. We should question the information an AI system provides by verifying sources, remaining watchful for potential manipulation, and reporting any inaccuracies or abuses we come across. And we must stay informed, help others do the same, and proactively promote the responsible use of AI.
Institutions and corporations should hold AI developers accountable for building systems that are resilient against adversarial attacks. Developers’ efforts should include integrating advanced adversarial machine learning techniques (such as adversarial training, sketched below), embedding attack-detection mechanisms, hardening algorithms and, when necessary, incorporating human-in-the-loop safeguards.
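As one concrete illustration of the first of these defences, below is a minimal adversarial-training sketch in Python with PyTorch: each training batch is perturbed with a gradient-sign step before the model is fit on it, so small input manipulations are less likely to flip its output. The model, data and epsilon value are illustrative placeholders, not a production recipe.

```python
# Illustrative adversarial-training loop: fit the model on perturbed copies
# of each batch so its decisions resist small input manipulations.
# All data and settings below are toy placeholders.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(20, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epsilon = 0.05

for step in range(100):
    x = torch.randn(32, 20)            # stand-in for a real data batch
    y = (x.sum(dim=1) > 0).long()      # stand-in labels
    # Craft perturbed inputs with one gradient-sign (FGSM-style) step.
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()
    # Train on the perturbed batch so the decision boundary resists the attack.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
```

In practice such defences are combined with attack detection and human review rather than relied on in isolation.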
Large organisations also need to monitor emerging risks and train response teams in adversarial risk analysis. Importantly, the insurance industry is developing AI-specific coverage, with new products emerging to address the growing risks of adversarial attacks.
Finally, countries have a major role to play. Citizens expect compliance with human rights and international agreements, which requires strong legislative frameworks. The recent EU AI Act, the first regulation to promote responsible AI development by classifying AI systems according to their level of risk, is an excellent example. Some see the act as an excessive burden, but I believe it should be viewed as a catalyst for innovating in a responsible way.
Governments should also support research and investment in fields like secure machine learning, as well as foster international collaboration in data-sharing and intelligence, to better understand global threats. (The AI Incident Database, a private initiative, is a wonderful example of data-sharing.) This is no easy task, given AI’s strategic significance. But history shows us that cooperation is possible. Just as nations have come together on nuclear energy and biochemical weapons, we must pave the way for similar efforts in AI oversight.
By taking these steps, we can harness more of AI’s immense potential while reducing its risks.
Created in 2007 to help accelerate and share scientific knowledge on key societal issues, the Axa Research Fund – now part of the Axa Foundation for Human Progress – has supported over 750 projects around the world on key environment, health & socioeconomic risks. To learn more, visit the website of the AXA Research Fund or follow @AXAResearchFund on LinkedIn.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: David Rios Insua, Instituto de Ciencias Matemáticas (ICMAT-CSIC)
Read more:
- ‘Digital brains’ that ‘think’ and ‘feel’: why do we personify AI models, and are these metaphors actually helpful?
- AI and credit: How can we keep machines from reproducing social biases?
- AI use by UK justice system risks papering over the cracks caused by years of underfunding
David Rios Insua has received funding from the Spanish Ministry of Science, Innovation and Universities, the European Commission's H2020 and HE programmes, the EOARD, the Spanish SEDIA, the BBVA Foundation and CaixaBank.