User-Centered Exploration of Multimodal Gaze Interaction with Holographic Interfaces in the Automotive Context

Type of thesis

Status of the thesis

Background information on the thesis

Emerging display technologies support a more realistic integration of holographic 3D interfaces into the user's environment and allow the design of novel interaction methods. Two types of devices available for displaying 3D holograms are optical see-through head-mounted displays (e.g., the Microsoft HoloLens 2) and holographic 3D displays. To control holographic 3D UIs, a variety of explicit and implicit input modalities are available, such as voice, hand gestures, tongue, feet, facial expressions, or eye movements. While exploring other multimodal options, this thesis focuses on explicit unimodal gaze control and on the multimodal combination of gaze and hand-gesture control for holographic 3D UIs within vehicles. To this end, an explorative two-step study was conducted. In the first study, 64 subjects were asked to suggest ideas for designing control interactions in three interaction modes: unimodal gaze, multimodal gaze with hand gestures, and multimodal gaze with another input modality. The participants provided ideas for eight tasks based on plausible use cases: selection, swiping, depth translation, rotation, slider adjustment, and three tasks related to map navigation. The subjects were also asked about the perceived usefulness of four interaction modes. The 488 answers were reduced to 351 ideas in a revision process, which made it possible to conduct a frequency analysis and determine which methods were suggested most often. Based on these ideas, interaction concepts for uni- and multimodal gaze interaction were produced and implemented in a Microsoft Mixed Reality application for the HoloLens 2 head-mounted display. In the follow-up study, 16 subjects tested the elicited interaction concepts on the HMD inside a vehicle. The study was designed as an A/B test comparing unimodal gaze control with multimodal gaze and hand-gesture control.
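The frequency analysis of the elicited ideas can be sketched as a simple tally of (task, suggested method) pairs. The entries and category names below are hypothetical placeholders for illustration only, not the study's actual coding scheme or data.

```python
from collections import Counter

# Hypothetical sample of revised elicitation ideas as (task, method) pairs.
# The real study coded 351 ideas across eight tasks; these are placeholders.
ideas = [
    ("selection", "dwell"),
    ("selection", "dwell"),
    ("selection", "blink"),
    ("swiping", "gaze + swipe gesture"),
    ("rotation", "gaze + pinch-rotate"),
    ("rotation", "gaze + pinch-rotate"),
]

# Count how often each method was suggested for each task.
counts = Counter(ideas)

# Most frequently suggested method for the "selection" task.
selection = {method: n for (task, method), n in counts.items() if task == "selection"}
top_method = max(selection, key=selection.get)
print(top_method, selection[top_method])  # dwell 2
```

In an elicitation study, the top-ranked method per task would then be carried forward into the implemented interaction concept.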
The evaluation concluded that the two interaction modalities are not equally adequate. The majority of participants (n = 9) preferred unimodal gaze control over multimodal gaze and gesture control. Objective results showed faster completion times, higher success rates, higher accuracy, and earlier first interactions with UI elements for unimodal gaze control than for multimodal gaze and hand-gesture control. Subjective results likewise showed higher intuitiveness, higher usability, lower workload, and a better user experience with unimodal gaze control than with multimodal gaze and hand-gesture control. Fixation times, ease of use, and self-descriptiveness were also compared, with less distinct results.
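The A/B comparison of an objective metric such as completion time amounts to contrasting per-condition means. The values below are illustrative placeholders, not the study's measurements.

```python
from statistics import mean

# Hypothetical task-completion times in seconds for the two A/B-test
# conditions; illustrative placeholders, not the study's data.
gaze_only = [4.1, 3.8, 4.4, 3.9]      # unimodal gaze control
gaze_gesture = [5.0, 5.3, 4.8, 5.1]   # multimodal gaze + hand gesture

faster = "unimodal gaze" if mean(gaze_only) < mean(gaze_gesture) else "gaze + gesture"
print(f"{faster}: {mean(gaze_only):.2f}s vs {mean(gaze_gesture):.2f}s")
```

In a real evaluation, a significance test (e.g., a paired t-test or Wilcoxon signed-rank test across participants) would accompany the mean comparison.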

Bea Vorhof


March 2023


March 2024
