We invite researchers and visionaries to submit their latest results on any aspect relevant to multimodality and/or interaction in VR and AR. Contributions of a more fundamental nature (e.g., psychophysical studies and empirical research about multimodality) are welcome, as are more technical contributions (including use cases, best-practice demonstrations, prototype systems, etc.). Position papers and reviews of the state of the art and ongoing research are invited, too. Submissions do not necessarily have to address multiple modalities: work focusing on single modes that go beyond the state of the art of "purely visual" systems (e.g., papers about smell, taste, and haptics) is welcome as well.
Topics of particular interest include, but are not limited to:
- Multisensory experiences and improved immersion, including audio-visual installations, haptics/tactile, smell/olfactory sensations, taste/gustation (contributions focusing on a single but experience-enhancing sense are welcome), perception of virtual objects, etc.
- Multimedia & sensory input, including affective computing and human behavior sensing for VR/AR, multisensory analysis, integration, and synchronization, speech, gestures, tracking for AR/VR, virtual humans and avatars, etc.
- Multimodal output, including smart and ambient environments, multimedia installations, etc.
- Interaction design & new approaches for interaction in AR/VR, incl. tangible interfaces, multimodal communication & collaborative experiences, social aspects in AR/VR interaction, gesture-based interaction design, 3D interaction, advanced interaction devices, etc.
- System design & infrastructure for multimodal AR/VR, including real-time and other performance issues, rendering of different modalities, distributed and collaborative architectures, etc.
- Applications, incl. use cases, prototypes, or proofs of concept for new and innovative approaches in serious as well as leisure domains.