
Study Finds That AI in Education Often Fails to Reflect How Students Learn

Tallinn University Research Professor Danial Hooshyar led an international study highlighting key limitations of current AI tools in education.

Article in Computers and Education: Artificial Intelligence

Artificial intelligence is increasingly used in education, promising personalised learning and better support for students. However, a new study co-authored by Tallinn University researchers shows that many current AI solutions fail to take into account how people actually learn.

 The study "Towards responsible AI for education: Hybrid human-AI to confront the elephant in the room" was published in the journal Computers and Education: Artificial Intelligence and can be accessed on
The study was co-authored by an international team of researchers and led by Research Professor of Artificial Intelligence in Education Danial Hooshyar from the School of Digital Technologies at Tallinn University. Among the co-authors is also Professor Eve Kikas, Head of the Centre of Educational Psychology at the School of Educational Sciences, whose expertise contributed to bridging artificial intelligence and learning sciences.

The researchers point to what could be described as the “elephant in the room” in current AI-driven education: many systems are designed without adequately considering how people actually learn.

One of the key findings is that many AI systems focus mainly on analysing data while overlooking important human factors such as motivation, emotions and self-regulated learning. As a result, these systems may not fully understand why students struggle or how best to support them. As the researchers note, learning is not only about correct answers: it also involves engagement, persistence and reflection.

The study also points out that educational AI tools are often developed without sufficient involvement from teachers and other education experts. This can lead to solutions that do not align well with real classroom needs and are difficult for educators to trust or use effectively.

In addition, the researchers highlight challenges related to so-called “explainable AI”. While many systems claim to provide transparent recommendations, their explanations are not always reliable or meaningful for teachers.

The researchers identified a set of recurring issues that limit the effectiveness of AI in education. This creates a broader risk: poorly designed AI systems may not only fail to improve learning, but can also lead to misleading recommendations or reduce trust among educators. Despite growing expectations, the study suggests that AI does not automatically enhance learning outcomes without careful design.

To address these challenges, the study calls for a shift in how educational AI is developed. The researchers emphasise the importance of combining technological solutions with insights from educational psychology and involving teachers in the design process. Rather than replacing human decision-making, AI should support it.
Such a “human-in-the-loop” approach could make AI tools more transparent, relevant and effective in real educational settings.

According to the authors, future progress in this field will depend on closer collaboration between technologists, educators and learning scientists, ensuring that AI tools are aligned with how people actually learn.