At the 22nd International Conference on Artificial Intelligence in Education (AIED 2021) https://aied2021.science.uu.nl/
14 June 2021, 09:00 – 17:00 CEST, online workshop
Website: https://maied.edutec.science/ – Proceedings to be published by CEUR
Call for Proposals
Workshop Description
This workshop, entitled Multimodal Artificial Intelligence in Education (MAIEd), aims to gather new insights on the use of Artificial Intelligence (AI) systems and autonomous agents for education and learning that leverage multimodal data sources. It builds upon the CrossMMLA workshop series at the Learning Analytics & Knowledge Conference. The workshop calls for new empirical studies, even if in their early stages of development. It also welcomes novel experimental designs, theoretical contributions, and practical demonstrations of multimodal and multi-sensor devices “beyond mouse and keyboard” in learning contexts, aimed at automatic feedback generation, adaptation, and personalisation in learning. Through this call for proposals, we seek to engage the scientific community in broadening the scope of AI in Education towards novel and diverse data sources.
Topics
At the MAIEd workshop, we want to discuss which state-of-the-art scientific ideas and approaches are being pursued and what impact we expect them to have on educational technologies and education. We are especially interested in contributions at the intersection of AI and multimodal interaction. We are looking for original contributions that advance the state of the art in theories, technologies, methods, and knowledge towards the development of multimodal intelligent tutors. Concrete topics of interest for this year’s MAIEd workshop include, but are not limited to:
- Multimodal Intelligent Tutoring Systems or User Interfaces
- Multimodality in Augmented, Virtual, and Mixed Reality
- Multimodal Learner Modelling and Affective Computing
- Adaptive Feedback, Guidance, and Processes in Multimodal Learning
- Artificial Intelligence for Learning Analytics
- Big Data-driven Visual Analytics for Learning
- Error Detection and Classification for Multimodal Data
- Cognitive Load in Multimodal Interaction with Intelligent Tutoring Systems
- Explainability, Trust, and Safety in Multimodal Intelligent Tutoring Systems
- Multimodal Data for Self-Regulated Learning
Important Dates
Paper submission: May 20, 2021 (extended from May 10, 2021)
Notification of acceptance: June 2, 2021 (extended from May 29, 2021)
Final version of accepted papers: June 7, 2021 (extended from June 3, 2021)
Submission Guidelines
- Accepted papers will be presented at the workshop and included in the workshop proceedings. We invite submissions of 5 to 10 pages, including references.
- Submissions should follow the Springer LNCS format: https://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0
- Submissions should be anonymised: authors’ identities and affiliations must be removed
- Submissions should be uploaded in PDF format through the EasyChair system: https://easychair.org/conferences/?conf=maied21
- Submissions will be reviewed by members of the workshop program committee.
Organisation
The workshop will be held online over Zoom and divided into three parts:
- In the first part, starting in the morning (conference time zone, Central European Time), the submitted contributions will be presented.
- In the second part, there will be an interactive discussion, using either the break-out rooms of the video conferencing tool or the networking tool provided by the conference organisers.
- In the last part, after a lunch break, there will be a keynote talk followed by an open discussion. Accepted contributions will be published in the CEUR proceedings.
Keynote Speaker
Sidney D’Mello
From Modeling Individuals to Groups: It’s a Multimodal Multiparty
16:00 – 17:00 CEST (Central Europe)
10:00 – 11:00 EDT (New York)
15:00 – 16:00 BST (London)
00:00 – 01:00 AEST (Melbourne, 15 June)
Contact
All questions about submissions should be emailed to dimitri@dipf.de