Artificial intelligence, or AI, uses computers to perform tasks that would normally require human intelligence. Today AI is being put to use in many aspects of everyday life: virtual banking assistants, health chatbots, self-driving cars, even the recommendations you see on social media.
A new survey of over 3,000 South Africans from all walks of life asked how people feel about AI. It reveals that most South Africans can’t relate to AI in meaningful ways – despite the global hype about its pros and cons. We asked two of its authors to tell us more.
What did you find?
The research set out to capture how South Africans understand, experience and imagine AI. It aimed to provide representative insights into levels of awareness, perceptions of impact, and degrees of trust in the institutions developing and deploying AI. The aim is to help create an empirical basis for more responsive and inclusive AI governance in the country.
We found that for most South Africans (73%) the term “AI” barely registers. AI increasingly plays a role in public life – often behind the scenes in areas like healthcare, credit scoring and social media moderation. But 37% of the survey respondents had never heard of AI, while 36% indicated they’d heard of it but knew very little about it and the role it might already be playing in their lives.
The survey also gives us a sense of why awareness remains so low. Most information comes through social media. Only 4% learn about AI through formal education, and a meagre 2% through their workplaces or professional training.
Read more: Artificial intelligence in South Africa comes with special dilemmas – plus the usual risks
What also stands out is uncertainty. While nearly 47% of people felt AI’s social impact was largely positive, 40% had no clear leaning either way. So while AI is becoming more influential, it does not seem to be visible or real enough in everyday life for many to form solid opinions.
Economic threat is a central concern: people worry about being replaced or devalued by machines, or being targeted by scams.
But trust in both government and big tech is measured and pragmatic. Respondents hope big tech will help provide connectivity and jobs. The government is seen as most trustworthy when it comes to using AI in areas like health and education.
Yet, these are the very areas where unease surfaced. Respondents called for lines to be drawn with unsupervised, AI-driven care tasks. They felt that learning based on human experience should be preserved. Social media, while a key source of AI-related information, is also a site of worry, especially around data privacy and children’s exposure to harmful content. People felt there should be guardrails and human oversight.
Looking ahead at the next 10 years, respondents said they hoped AI would help create a better future, especially in health and job creation.
How was the survey conducted?
Since 2003 the Human Sciences Research Council has been capturing how South Africans experience social change through the annual National Social Attitudes Survey. This time, the think tank Global Center on AI Governance contributed an AI-specific component to the survey through its project The African Observatory on Responsible AI, funded by the AI4D programme of the International Development Research Centre of Canada and the Foreign and Commonwealth Development Office, UK.
Trained fieldworkers surveyed a diverse cross-section of South Africans, covering all nine provinces, both rural and urban areas. They interviewed people over the age of 16 across a range of socio-economic backgrounds and in their preferred official language. Over 3,000 people were interviewed.
Read more: Hype and western values are shaping AI reporting in Africa: what needs to change
The survey included both structured and open-ended questions, asking how people learnt about these technologies, how they felt about their impact, and the degree of trust they placed in different institutions using them.
The findings offer rare evidence of the social views shaping how AI may be taken up or contested, and how public opinion might start to inform decisions about how technology is shaped and used.
What can we learn from these findings?
The survey shows how difficult it is to get to grips with a technology like AI in a country where there is a stark digital divide. Access to information is uneven, trust in institutions is limited, and there isn’t a shared language to understand or question AI use. For many, AI remains largely opaque and abstract.
This matters because a lack of basic knowledge prevents meaningful public debate about AI.
Uncertainty and lack of information open the door for hype, misinformation, even exploitation. There’s a danger that fears about AI replacing human skills and jobs will overshadow more optimistic views of its possible benefits.
Still, there’s a cautious hope that AI can improve livelihoods and access to information. Concerns about technology are less about it taking over and more about how to use it or even just access it.
This is a crucial moment because public opinion about AI is still developing. Policy-makers and tech leaders across sectors have an opportunity to define AI’s use and value from a people-centred perspective.
What needs to be done about this?
To bridge the knowledge gaps and address uneven access to information, AI literacy needs to be built on a common understanding. In India, for example, the Indian Institute of Technology has launched a free online training course covering all aspects of AI for teachers, so that they can pass the knowledge on to their students.
AI literacy efforts should be built on shared language and rooted in daily concerns and aspirations, allowing people to relate AI to their personal experiences.
Companies must invest in and build AI in collaboration with local communities. Civil society organisations and researchers have a vital role to play in raising awareness, tracking harms and ringing alarm bells when accountability in AI use is sidestepped.
Read more: AI chatbots can boost public health in Africa – why language inclusion matters
Public projects can help educate and inform South Africans about AI. For example, the University of the Western Cape partnered with a theatre company and a high school to create The Final Spring, a play about a robot. Storytelling can help translate complex ideas about technology into accessible, culturally resonant forms of AI literacy.
As South Africa moves towards a national AI strategy following the publication of a National AI Policy Framework, the focus must be on broadening access to reliable information on AI, not just through schools, but also for older generations and others who feel left out of the discourse.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Leah Davina Junck, University of Cape Town and Rachel Adams, University of Cape Town
Rachel Adams receives funding from the International Development Research Centre of Canada. The research for the survey Beyond the Hype was funded by the International Development Research Centre of Canada and the Foreign and Commonwealth Development Office of the UK under their AI4D programme.
Leah Davina Junck does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.