Reconstruction and Web-based Editing of 3D Objects from Photo and Video Footage for Ambient Learning Spaces

General

Type of publication: Journal Article

Published in: International Journal On Advances in Intelligent Systems

Year: 2018

Volume: 11

Pages: 94-108

ISSN: 1942-2679

Authors

David Bouck-Standen

Alexander Ohlei

Sven Höffler

Viktor Daibert

Thomas Winkler

Michael Herczeg

Abstract

In ambient and mobile learning contexts, 3D renderings create higher states of immersion than still images or video. To cope with the considerable effort required to create 3D objects from images, we present the NEMO Converter 3D, a technical approach that reconstructs 3D objects from semantically annotated media, such as photos and, more importantly, video footage, in an automated background process. Although the 3D objects are rendered in a quality acceptable for the scenario presented in this article, they still contain unwanted surroundings or artifacts and are not well positioned for, e.g., augmented reality applications. To address this, we present 3DEdit, a web-based solution that allows users to enhance these 3D objects. We give a technical overview and reference the pedagogical background of our research project Ambient Learning Spaces, in which both the NEMO Converter 3D and 3DEdit have been developed. We also describe a real usage scenario, starting with the creation and collection of media using the Mobile Learning Exploration System, a mobile application from the application family of Ambient Learning Spaces. With InfoGrid, a mobile augmented reality application, users can experience the previously generated 3D objects placed and aligned in real-world scenes. All systems and applications of Ambient Learning Spaces interconnect through the NEMO-Framework (Network Environment for Multimedia Objects), a technical platform featuring contextualized access and retrieval of media.

Downloads

Download publication