A “transformation” is upon us. After a multi-year procession of educational technology products that once promised to shake things up, now it’s AI’s turn.
Global organizations like the Organisation for Economic Co-operation and Development (OECD), as well as government bodies, present AI to the public as “transformative.”
Prominent AI companies with large language model (LLM) chatbots have “education-focused” products, like ChatGPT Education, Claude for Education and Gemini in Google for Education.
AI products facilitate exciting new ways to search, present and engage with knowledge, and they have sparked widespread interest and enthusiasm in the technology among young learners. However, there are crucial areas of concern regarding AI use, such as data privacy, transparency and accuracy.
Current conversations on AI in education focus on notions it will upend teaching and learning systems in schools, teacher lesson planning and grading or individualized learning (for example, via personalized student tutoring with chatbots). However, when or whether AI will transform education remains an open question.
In the meantime, it is vital to consider how student engagement with chatbots should prompt us to examine some fundamental assumptions about human learning.
Learning is a social affair
How students view their teachers and their own ability to contemplate thinking (known as metacognition) are tremendously important for learning. These factors need to be considered when we think about learning with chatbots.
The popularity of the Rate My Professors website in Canada, the United States and the United Kingdom is a testament to the significance of what students think about teachers.
With AI’s foray into education, students’ conceptions of their AI tutors, teachers and graders will also matter for multiple reasons.
First, learning is a thoroughly social affair. From how a child learns through imitating and modelling others to engaging with or being influenced by peers in the classroom, social interactions matter to how we learn.
With chatbot use growing to more than 300 million monthly users, conversational interactions with LLMs also represent a new para-social interaction space for people worldwide.
What we think of interaction partners
Second, theory-of-mind frameworks suggest that what we think of others influences how we interact with them. How children interpret, process or respond to social signals influences their learning.
To develop this idea further, beyond other students or teachers as interaction partners, what we think about learning tools has an influence on how we learn.
Our sense of tools and their affordances — the quality or property of a tool that “defines its possible uses or makes clear how it can or should be used” — can have consequences for how we use the tool.
Perceived affordances can dictate how we use tools, from utensils to computers. If a learner perceives a chatbot to be adept at generating ideas, then it could influence how they use it (for example, for brainstorming versus editing).
New ‘social entity’
AI systems, at a minimum, represent the entrance of a new social entity into educational environments, just as they have into the broader social environment. People’s conceptions of AI can be understood under the larger umbrella of a theory of artificial minds: how humans infer the internal states of AI to predict its actions and understand its behaviour. This theory extends the notion of theory of mind to non-human AI systems.
A person’s theory of artificial minds could develop based on biological maturation and exposure to the technology, and could vary considerably between different individuals.
3 aspects to consider
It’s important to consider how student conceptions of AI may impact trust of information received from AI systems; personalized learning from AI; and the role that AI may have in a child’s social life:
1. Trust: In human learning, the judgments we make about knowledge and learning go a long way toward determining whether we accept the ideas inherent in learning material.
From recent studies in children’s interactions with conversational AI systems, we see that children’s trust in information from AI varies across factors like age and type of information. A learner’s theory of artificial minds would likely affect willingness to trust the information received from AI.
2. Personalized learning: Intelligent tutoring systems (ITS) research has shown that traditional ITS — without chatbot engagement — can effectively scaffold learners while also helping students identify gaps in their learning for self-correction. New chatbot-based ITS, such as Khanmigo from Khan Academy, are marketed as providing personalized guidance and new ways to engage with content.
A learner’s theory of artificial minds could affect the quality of interactions between them and their AI chatbot tutor and how much they accept their learning support.
Read more: Why we should be skeptical of the hasty global push to test 15-year-olds' AI literacy in 2029
3. Social relationships: The artificial friend (the “AF”) in Kazuo Ishiguro’s Klara and the Sun is a poignant literary example of the impact an artificial entity can have on a growing child’s sense of self and relationship to the world.
We can already see the detrimental effects of introducing children to AI social chatbots with the tragic suicide of a child who was allegedly engaged in emotional and sexual chat conversations with a Character.AI chatbot.
Social relationships with AI involve a serious renegotiation of the social contract regarding our expectations and understanding of each other. Here, relationships with children need special attention, foremost whether we want children to develop social relationships with AI in the first place.
Where do we go from here?
Many discussions about AI literacy are now unfolding, involving, for example, understanding how AI functions, its limitations and ethical issues. Throughout these conversations, it’s essential for educators to recognize that students possess an intuitive sense of how AI functions (or a theory of artificial minds). Students’ intuitive sense of AI shapes how they perceive its educational affordances, even without formal learning.
Instruction must account for students’ cognitive development, existing experiences and evolving social contexts.
The “rate my AI teacher” future is coming. It will require a focus on students’ conceptions of AI to ensure effective, ethical and meaningful integration of AI into future educational environments.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Nandini Asavari Bharadwaj, McGill University and Adam Kenneth Dubé, McGill University
Read more:
- AI products for kids promising friendship and learning? 3 things to consider
- Children’s best interests should anchor Canada’s approach to their online privacy
- Youth social media: Why proposed Ontario and federal legislation won’t fix harms related to data exploitation
Nandini Asavari Bharadwaj receives funding from the Social Sciences and Humanities Research Council of Canada.
Adam Kenneth Dubé receives research funding from Mitacs, the Canada Foundation for Innovation, and the Social Sciences and Humanities Research Council of Canada. He is the education leadership team member for the McGill Collaborative for AI and Society.

