All the accepted contributions were published in the CEUR Proceedings http://ceur-ws.org/Vol-2902/
Summary of the accepted contributions:
- Han Jiang, Zewelanji Serpell and Jacob Whitehill. Measuring the Effect of ITS Feedback Messages on Students’ Emotions
- Xinyu Huang, Fridolin Wild and Denise Whitelock. Design Dimensions for Holographic Intelligent Agents: A Comparative Analysis
- Qi Zhou, Wannapon Suraworachet and Mutlu Cukurova. Different modality, different design, different results: Exploring self-regulated learner clusters’ engagement behaviours at individual, group and cohort activities
- Jeongki Lim and Teemu Leinonen. Creative Peer System: An Experimental Design for Fostering Creativity with Artificial Intelligence in Multimodal and Sociocultural Learning Environment
- Miguel A. Ronda, Olga C. Santos, Gloria Fernandez-Nieto and Roberto Martinez-Maldonado. Towards Exploring Stress Reactions in Teamwork using Multimodal Physiological Data
- Chiao-Wei Yang, Mutlu Cukurova and Kaska Porayska-Pomsta. Dyadic joint visual attention interaction in face-to-face collaborative problem-solving at K-12 Maths Education: A Multimodal Approach
- Jon Echeverria and Olga C. Santos. KUMITRON: A Multimodal Psychomotor Intelligent Learning System to Provide Personalized Support when Training Karate Combats
- Alicia Howell-Munson, Deniz Sonmez Unal, Erin Walker, Catherine Arrington and Erin Solovey. Preliminary steps towards detection of proactive and reactive control states during learning with fNIRS brain signals
- Anisha Gupta, Dan Carpenter, Wookhee Min, Jonathan Rowe, Roger Azevedo and James Lester. Multimodal, Multi-Task Stealth Assessment for Reflection-Enriched Game-Based Learning
- Serena Lee-Cultura, Kshitij Sharma and Michail Giannakos. Multimodal AI Agent to Support Students’ Motion-Based Educational Game Play
1. Han Jiang, Zewelanji Serpell and Jacob Whitehill. Measuring the Effect of ITS Feedback Messages on Students’ Emotions
Abstract: When an ITS gives supportive, empathetic, or motivational feedback messages to the learner, does it alter the learner’s emotional state, and can the ITS detect the change? We investigated this question on a dataset of n=36 African-American undergraduate students who interacted with iPad-based cognitive skills training software that issued various feedback messages. Using both automatic facial expression recognition and heart rate sensors, we estimated the effect of the different messages on short-term changes to students’ emotions. Our results indicate that, except for a few specific messages (“Great Job” and “Good Job”), the evidence for the existence of such effects was meager, and the effect sizes were small. Moreover, for the “Good Job” and “Great Job” messages, the effects can easily be explained by the student having recently scored a point rather than by the feedback itself. This suggests that the emotional impact of such feedback, at least in the particular context of our study, is either very small or undetectable by heart rate or facial expression sensors.
2. Xinyu Huang, Fridolin Wild and Denise Whitelock. Design Dimensions for Holographic Intelligent Agents: A Comparative Analysis
Abstract: Humanoid intelligent agents, or ‘Holographic AIs’ as we prefer to call them, are trending, promising improved delivery of personalized services on smart glasses and in Augmented Reality. Lacking clarity of the concept and missing recommendations for their features, however, pose a challenge to developers of these novel, embodied agents. In this paper, we therefore conduct a comparative analysis of nine intelligent agents that can interact with both physical and virtual surroundings. We identify, select, and investigate four distinct types of agent (non-player game characters, chatbot agents, simulation agents, and intelligent tutors) in order to, subsequently, develop a framework of features and affordances for holographic AIs along the axes of appearance, behavior, intelligence, and responsiveness. Through our analysis, we derive four key recommendations for developers of Holographic AIs: the use case determines appearance; dialogue management is key; awareness and adaptation are equally important for successful personalization; and environmental responsiveness to events in both the virtual and the digital realm is needed for a seamless experience.
3. Qi Zhou, Wannapon Suraworachet and Mutlu Cukurova. Different modality, different design, different results: Exploring self-regulated learner clusters’ engagement behaviours at individual, group and cohort activities
Abstract: Self-Regulated Learning (SRL) competence is an important aspect of online learning. SRL is an internal process, but analytics can offer an externalisation trigger that makes its effects on learner behaviours observable. The purpose of this paper is to explore the relationship between students’ SRL competence and the learning engagement behaviours observed in multimodal data. In a postgraduate course with 42 students, eighteen features from three different types of data across seven learning activities were extracted to investigate the engagement behaviours of students with different levels of SRL competence. The results revealed that clusters of students with different SRL competence might exhibit different behaviours in individual-, group-, and cohort-level learning activities. The results also illustrate that students with similar SRL competence might exhibit significantly different engagement behaviours in different learning activities, depending on the learning design. If we are to design AIED systems that automate some decisions about student success based on engagement data, then the modality of the data, the specific analysis technique used to process it, and the contextual particularities of the learning design should all be considered in the interpretation of such automated decisions.
4. Jeongki Lim and Teemu Leinonen. Creative Peer System: An Experimental Design for Fostering Creativity with Artificial Intelligence in Multimodal and Sociocultural Learning Environment
Abstract: To develop artificial intelligence (AI) that educators can adopt in general educational environments, we are examining the potential role of AI in the socio-cultural aspects of learning in human development. In this position paper, we propose an experimental design, Creative Peer System, where humans and machines learn from each other in a multimodal learning environment and develop original artifacts. The research is in the early stage, where we are actively developing new types of empirical studies. We will present the methodological and theoretical frameworks and a design proposal that can elicit constructive feedback toward further refinement and implementation of the experiment.
5. Miguel A. Ronda, Olga C. Santos, Gloria Fernandez-Nieto and Roberto Martinez-Maldonado. Towards Exploring Stress Reactions in Teamwork using Multimodal Physiological Data
Abstract: In education, while teams of students are learning in a realistic scenario, many different factors arise in real time and can have a significant impact on the way students improve their skills. Realistic simulated scenarios can help them achieve their learning goals. However, these close-to-real situations can make them experience emotions that can be confronting and hinder learning. In other cases, these emotional experiences are meant to reflect the kinds of pressures they will encounter in authentic workplaces, thus becoming authentic training experiences. There is strong evidence that emotions have an important effect on students’ engagement and motivation and consequently influence learning outcomes. In the particular educational context of healthcare (e.g. nursing), teachers commonly have a series of expectations about the moments in which students will experience a higher cognitive load and stress that can impact their emotional state, depending on the phase of the simulation they are in. This paper introduces a study in which nursing students carry out teamwork practice with a critical patient in a simulated scenario divided into 5 phases, during which students must learn to make life-or-death decisions in a timely manner. The paper discusses the multimodal data processing being performed to identify whether the arousal levels matched the teachers’ expectations regarding the students’ emotional situation in each phase.
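The phase-wise comparison described in this abstract (arousal levels vs. teachers’ expectations per simulation phase) can be illustrated with a minimal sketch. The data format, the function name, and the use of a simple per-phase mean are our own assumptions for illustration, not the study’s actual pipeline.

```python
def mean_arousal_per_phase(samples, phase_bounds):
    """Average an arousal proxy (e.g. an electrodermal-activity signal)
    within each simulation phase.

    samples:      list of (timestamp_s, value) pairs
    phase_bounds: list of (start_s, end_s) pairs, one per phase
    Returns one mean per phase, or None for phases with no samples.
    """
    means = []
    for start, end in phase_bounds:
        vals = [v for t, v in samples if start <= t < end]
        means.append(sum(vals) / len(vals) if vals else None)
    return means
```

The resulting per-phase means could then be rank-ordered and compared against the ordering of phases the teachers expected to be most stressful.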
6. Chiao-Wei Yang, Mutlu Cukurova and Kaska Porayska-Pomsta. Dyadic joint visual attention interaction in face-to-face collaborative problem-solving at K-12 Maths Education: A Multimodal Approach
Abstract: Collaborative problem-solving (CPS) is an essential skill in the workplace in the 21st century, but the assessment and support of the CPS process with scientifically objective evidence are challenging. This research aims to understand in-class CPS interaction by investigating the change of a dyad’s cognitive engagement during a mathematics lesson. Here, we propose a multi-modal evaluation of joint visual attention (JVA) based on eye-gaze and eye-blink data as non-verbal indicators of dyadic cognitive engagement. Our results indicate that this multimodal approach can bring more insights into students’ CPS process than unimodal evaluations of JVA in temporal analysis. This study contributes to the field by demonstrating the value of nonverbal multimodal JVA temporal analysis in CPS assessment and the utility of eye physiological data in improving the interpretation of dyadic cognitive engagement. Moreover, a method is proposed for capturing gaze convergence by considering eye fixations and the overlapping time between two eye gazes. We conclude the paper with our preliminary findings from a pilot study investigating the proposed approach in a real-world teaching context.
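The gaze-convergence idea in this abstract (eye fixations plus the overlapping time between two gazes) can be sketched as a small interval-intersection computation. The fixation tuple format, the 100 px distance threshold, and the function name below are illustrative assumptions, not the authors’ implementation.

```python
import math

def joint_attention_time(fix_a, fix_b, dist_px=100.0):
    """Total time (s) during which two participants' fixations overlap
    in time AND land within `dist_px` of each other on a shared display.

    Each fixation is a (start_s, end_s, x_px, y_px) tuple. The distance
    threshold is an illustrative assumption, not a value from the paper.
    """
    total = 0.0
    for sa, ea, xa, ya in fix_a:
        for sb, eb, xb, yb in fix_b:
            overlap = min(ea, eb) - max(sa, sb)  # temporal intersection
            if overlap > 0 and math.hypot(xa - xb, ya - yb) <= dist_px:
                total += overlap
    return total
```

Dividing this quantity by the total co-recorded time would give a normalised convergence score that can be tracked across a lesson.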
7. Jon Echeverria and Olga C. Santos. KUMITRON: A Multimodal Psychomotor Intelligent Learning System to Provide Personalized Support when Training Karate Combats
Abstract: New technologies have been introduced into society, becoming part of it and covering all its facets, including education. In cognitive and emotional education, these tools have been used for some time, while in the teaching of psychomotor skills they have not been applied much. For learning complex motor skills, martial arts are a very interesting discipline due to the nature of their movements: predefined and governed by the laws of physics. KUMITRON is an artificial intelligence tool for teaching Karate, which monitors the activity of the karateka during a kumite and shows it to the Sensei in real time through an application that offers information processed with artificial intelligence and computer vision algorithms. This monitoring makes it possible to anticipate the movements that a fighter is going to perform on the mat, providing information of great added value for training. The artificial intelligence algorithms designed offer expert advice on the type of strategy to follow in order to win the kumite. The application also records all training sessions in a database that can be accessed online to track activity and correct any mistakes that have been made.
8. Alicia Howell-Munson, Deniz Sonmez Unal, Erin Walker, Catherine Arrington and Erin Solovey. Preliminary steps towards detection of proactive and reactive control states during learning with fNIRS brain signals
Abstract: This paper describes a two-pronged approach to creating a multimodal intelligent tutoring system (ITS) that leverages neural data to inform the system about the student’s cognitive state. The ultimate goal is to use fNIRS brain imaging to distinguish between proactive and reactive control states during the use of a real-world learning environment. These states have direct relevance to learning and have been difficult to identify through typical data streams in ITSs. As a first step towards identifying these states in the brain and understanding their effects on learning, we describe two preliminary studies: (1) we distinguished proactive and reactive control using fNIRS brain imaging in a controlled continuous performance task and (2) we prompted students to engage in either proactive or reactive control while using an ITS to understand how the two modes affect learning progress. We propose integrating the fNIRS datastream with the ITS to create a multimodal system for detecting the user’s cognitive state and adapting the environment to promote better learning strategies.
9. Anisha Gupta, Dan Carpenter, Wookhee Min, Jonathan Rowe, Roger Azevedo and James Lester. Multimodal, Multi-Task Stealth Assessment for Reflection-Enriched Game-Based Learning
Abstract: Game-based learning environments enable effective and engaging learning experiences that can be dynamically tailored to students. There is growing interest in the role of reflection in supporting student learning in game-based learning environments. By prompting students to periodically stop and reflect on their learning processes, it is possible to gain insight into students’ perceptions of their knowledge and problem-solving progress, which can in turn inform adaptive scaffolding to improve student learning outcomes. Given the positive relationship between student reflection and learning, we investigate the benefits of jointly modeling post-test score and reflection depth using a multimodal, multi-task stealth assessment framework. Specifically, we present a gated recurrent unit-based multi-task stealth assessment framework that takes as input multimodal data streams (e.g., game trace logs, pre-test data, natural language responses to in-game reflection prompts) to jointly predict post-test scores and written reflection depth scores. Evaluation results demonstrate that the multimodal multi-task model outperforms single-task neural models that utilize subsets of the modalities, as well as non-neural baselines such as random forest regressors. Our multi-task stealth assessment framework for measuring students’ content knowledge and reflection depth during game-based learning shows significant promise for supporting student learning and improved reflection.
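The joint-prediction setup in this abstract (one model trained on two targets at once, post-test score and reflection depth) can be illustrated by the shape of its objective. The function name, the plain mean-squared-error form, and the equal default weights below are illustrative assumptions, not the paper’s actual loss.

```python
def multitask_loss(pred_post, true_post, pred_refl, true_refl,
                   w_post=0.5, w_refl=0.5):
    """Weighted sum of per-task regression errors, the generic form of a
    multi-task objective: the shared encoder (a GRU in the paper) is
    trained on both targets jointly rather than on either one alone.
    """
    def mse(pred, true):
        return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)
    return w_post * mse(pred_post, true_post) + w_refl * mse(pred_refl, true_refl)
```

Under this kind of objective, gradients from both tasks flow into the shared representation, which is the usual motivation for multi-task models outperforming single-task baselines.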
10. Serena Lee-Cultura, Kshitij Sharma and Michail Giannakos. Multimodal AI Agent to Support Students’ Motion-Based Educational Game Play
Abstract: The increased accessibility of lightweight sensors (e.g., eye trackers, physiological wristbands, and motion sensors) enables the extraction of students’ cognitive, physiological, skeletal, and affective data as they engage with Motion-Based Educational Games (MBEG). Real-time analysis of this Multi-Modal Data (MMD) leads to a deep understanding of students’ learning experiences and affords new opportunities for timely, contextual, personalised feedback to support the student. In this workshop submission, we present the MMD-AI Agent for Learning: an MMD-driven Artificially Intelligent (AI) agent-based ecosystem composed of three separate software components which work together to facilitate students’ learning during their interactions with MBEG. The Crunch Wizard receives MMD from eye trackers, physiological wristbands, a web camera, and motion sensors worn by a student during game play, and derives relevant cognitive, physiological, and affective measurements. The AI Agent identifies and delivers appropriate feedback mechanisms to support a student’s MBEG play learning experience. The Dashboard visualises the measurements to keep teachers informed of a student’s progress. We discuss the foundational work that motivated the ecosystem’s design, report on the design and development accomplished thus far, and outline future directions.