Multimodal Data for Learning

I am on the review board for a special call for papers on "Multimodal Data for Learning" for the Journal of Computer Assisted Learning (JCAL). The special issue addresses new data sources such as the Internet of Things (IoT), wearables, eye-trackers and other camera systems, and self-programmable microcomputers such as the Raspberry Pi and Arduino. How can multimodal datasets that combine traditional learning data with data on physical activity, physiological responses, or contextual information be exploited for learning?