Enhancing the functionality of augmented reality using deep learning, semantic web and knowledge graphs

Abstract

The growth rates of today’s societies and the rapid advances in technology have created the need for access to dynamic, adaptive and personalized information in real time. Augmented reality provides prompt access to rapidly flowing information, which becomes meaningful and “alive” as it is embedded in the appropriate spatial and temporal framework. Augmented reality also provides new ways for users to interact with both the physical and the digital world in real time. Furthermore, the digitization of everyday life has led to an exponential increase in data volume and, consequently, not only have new requirements and challenges been created but also new opportunities and potentials have arisen. Knowledge graphs and semantic web technologies exploit this data increase and the representation of web content to provide semantically interconnected and interrelated information, while deep learning technology offers novel solutions and applications in various domains. The aim of this study is to present how augmented reality functions and services can be enhanced through the integration of deep learning, semantic web and knowledge graphs and to showcase the potential their combination offers for developing contemporary, user-friendly and user-centered intelligent applications. In particular, we briefly describe the concepts of augmented reality and mixed reality and present deep learning, semantic web and knowledge graph technologies. Moreover, based on our literature review, we present and analyze related studies regarding the development of augmented reality applications and systems that utilize these technologies. Finally, after discussing how the integration of deep learning, semantic web and knowledge graphs into augmented reality enhances the quality of experience and quality of service of augmented reality applications, thus facilitating and improving users’ everyday life, conclusions and suggestions for future research are given.

Introduction

The advent of the information era and the digitalization of everyday life, through the adoption of smart devices and advanced technologies (e.g. the internet of things (IoT), artificial intelligence, social networks etc.), have resulted in the creation of an enormous volume of heterogeneous data and digital content, an increase in data sources and a diversification of data types, forms and structures. Moreover, the rapid growth of modern societies and the novel advances in technology have led to the emergence of users’ requirements for access to rapidly flowing information in real time.

With the advancement of technology, the processing power and storage capabilities of devices have increased significantly. These smart devices are able to interconnect, communicate and interact over the Internet and are equipped with different types of sensors and actuators (Lampropoulos et al., 2019). As a result, computer systems and smart devices are capable of retrieving, storing, processing and displaying large volumes of heterogeneous data rapidly while requiring minimal storage space and computational power. Consequently, real-time digital representation of information has become feasible, thus creating a more powerful way of modifying, interacting with and augmenting the environment.

Taking advantage of these technological developments and the exponentially increased data volume, Augmented Reality (AR) technology attempts to meet the above-mentioned requirements by providing real-time access to the rapidly flowing information, not just quickly but, most importantly, at the right time and in the corresponding space. Simultaneously, AR filters the information and displays only the required data in an interactive and user-friendly way so as to avoid information overload. Through AR, information becomes “alive” and meaningful as it is embedded in the appropriate spatial and temporal framework (Lee, 2012). Thus, it provides new ways for humans to interact with both the physical and the digital world in real time.

One of the main advantages of AR technology is its ability to be utilized in conjunction with other innovative technologies and to exploit their individual potentials and properties. More specifically, through this combination, AR functionality and performance can be enriched and enhanced and optimal results can be attained. Deep learning and semantic web technologies constitute two of the most significant technologies which can reinforce AR applications and experiences. Deep learning can instill intelligence into AR systems and can be used as a means of improving computer vision. The semantic web can provide semantically interconnected information which is more easily processed and understood by machines, thus improving the overall information retrieval process. Knowledge graphs are directly connected with the semantic web, as they can acquire and integrate information into ontologies and apply a reasoner to derive new knowledge, thus enhancing and reinforcing the functionality of AR applications. The aim of this study is to present the concept of these novel technologies along with the potentials brought about by their combination, which results in the development of contemporary, user-friendly and user-centered intelligent applications.
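As a rough illustration of this combination, the following Python sketch (not taken from the reviewed studies) shows how the label produced by an off-the-shelf deep learning object detector could be used to query a public knowledge graph (DBpedia) for semantic information that an AR overlay might then display next to the recognized object; the model choice, the image file name and the SPARQL query are illustrative assumptions.

```python
# Illustrative sketch: a deep learning detector labels an object in a camera
# frame and the label is used to retrieve semantic facts from a knowledge
# graph (DBpedia) that an AR layer could render as an overlay.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                           FasterRCNN_ResNet50_FPN_Weights)
from SPARQLWrapper import SPARQLWrapper, JSON

# 1. Deep learning step: detect objects in a single (assumed) camera frame.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=weights).eval()
frame = Image.open("camera_frame.jpg")              # hypothetical captured frame
with torch.no_grad():
    detections = detector([transforms.ToTensor()(frame)])[0]
categories = weights.meta["categories"]
label = categories[int(detections["labels"][0])] if len(detections["labels"]) else "laptop"

# 2. Knowledge graph step: fetch a short English abstract for the detected label.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery(f"""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {{
        ?thing rdfs:label "{label.capitalize()}"@en ;
               dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }} LIMIT 1
""")
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["abstract"]["value"][:200])           # text an AR overlay could show
```

In a full AR application, the retrieved text would be anchored to the detected object’s position in the camera view rather than printed to the console.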

In this study, we describe both the AR and mixed reality concepts (Section 2). We briefly present the concept of deep learning technology (Section 3), as well as knowledge graphs and semantic web technology (Section 4). Based on our literature review, we present and analyze case studies in which innovative applications and systems were developed through the use of AR in combination with deep learning and/or semantic web technologies (Section 5). Finally, after discussing the benefits and advantages that the integration of deep learning, semantic web and knowledge graphs into AR can yield (Section 6), conclusions and suggestions for future research and studies are given (Section 7).

Augmented reality

In recent years, industries, enterprises, governmental organizations and the academic community have shown keen interest in AR thanks to the value and the future potentials it promises to offer. The definition of this innovative technology varies (Wu et al., 2013), as some researchers have focused on the technological means and tools used to create AR environments (Dunleavy, 2014, Enyedy et al., 2015), while others have focused on the characteristics of these environments (Lee, 2012, Chen et al., 2017, Di Serio et al., 2013, Wasko, 2013). Moreover, Wu et al. (2013) pointed out that, given the rapid development of the technologies and technological systems which AR applications exploit, it would be inappropriate to limit the definition to specific technologies.

The term AR refers to technological applications of computer units which enrich and enhance users’ physical environment with additional virtual objects (Caudell and Mizell, 1992). AR incorporates digital data (e.g. information, images, sounds, videos, interactive objects, etc.) in the real world, as perceived by the users through their senses, thus creating a mixed reality in which both real and virtual objects co-exist (Lee, 2012, Chen et al., 2017, Wasko, 2013, Johnson et al., 2010). In contrast to virtual reality (VR), which fully immerses users in virtual environments, AR allows users to interact with both the virtual and the real world in a seamless way (Zhou et al., 2008).

Azuma (1997) provided a commonly accepted definition according to which AR is described as a technology which is interactive in real time, combines real with virtual objects and registers them in the real world. Another definition, which emphasizes the technological means, describes AR as the technology which, by exploiting the capabilities of desktop and mobile computing systems, allows users to see and interact with digitally generated objects projected into the physical environment (Dunleavy, 2014). In each case, the main AR features, according to Azuma et al. (2001), are: (i) the potential of interaction between and among users, real objects and virtual objects and (ii) the combination and harmonization of real and virtual objects within the physical environment. These characteristics allow information to be spatially and temporally correlated and displayed in real time within the physical world as a three-dimensional overlay. Moreover, based on these characteristics, the basic requirements of an AR system can be defined: a computer system that can respond to users’ inputs and generate the relevant graphics in real time, a display capable of combining real and virtual images, and a tracking system that can determine users’ viewpoint position (Billinghurst et al., 2015).
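To make these three requirements concrete, the following minimal Python sketch approximates a marker-based AR loop with OpenCV’s ArUco markers: the camera feed supplies the real imagery, marker detection plays the role of the tracking system, and simple labels drawn onto the frame stand in for the virtual content. The marker dictionary, the overlay text and the use of the OpenCV 4.7+ ArUco API are assumptions made for illustration.

```python
# Illustrative marker-based AR loop: track markers in each camera frame and
# combine simple virtual labels with the live real-world image in real time.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed marker set
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

capture = cv2.VideoCapture(0)                        # default camera (real world)
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)   # tracking step
    if ids is not None:
        # Register virtual content on the tracked markers and combine it with
        # the live camera image (the display requirement).
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            x, y = marker_corners[0][0]
            cv2.putText(frame, f"virtual label {marker_id}", (int(x), int(y) - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("AR sketch", frame)                   # combined real + virtual view
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```

A production AR system would replace the text labels with registered three-dimensional graphics and use the estimated camera pose to align them accurately with the physical scene.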

Hence, AR is considered to be a modern technology that allows users to actively interact with virtual objects which, however, co-exist with real-world objects in real time (Zhou et al., 2008). It aims at enhancing users’ perception of and interaction with the physical world and at facilitating and simplifying the activities of their everyday life by providing them with virtual information, cues and objects about their immediate surroundings or indirect environment that they are not able to detect directly through their senses, thus reinforcing their sense of reality (Furht, 2011, Carmigniani et al., 2011).

Furthermore, Azuma et al. (2001) considered that AR is neither limited to a particular type of display technology, such as projective head-mounted displays (HMDs), nor to the sense of vision. On the contrary, it can potentially be applied with various projection technologies and to all human senses (McGee, 1999, Van Krevelen and Poelman, 2010), while it can also be used to augment or substitute for users’ missing senses through sensory substitution (Carmigniani et al., 2011). They also pointed out that, besides the addition of virtual objects, AR applications should also have the ability to remove real objects from the perceived environment. The process of object removal is considered to be a subset of AR and is called mediated or diminished reality (Azuma et al., 2001).

As AR technology has become more widespread and its applications are being utilized by more and more users, several software development kits and platforms, such as Vuforia (Vuforia, Inc., 2019), ARCore (Google, Inc., 2019), ARKit (Apple, Inc., 2019), WikiTude (Wikitude, Inc., 2019), ARToolKit (ARToolworks, Inc., 2019) etc., have been created in order to facilitate and reinforce the development of AR applications. In recent studies, Amin and Govilkar (2015) compared various AR software development kits (SDKs), such as Vuforia, ARToolKit, WikiTude and ARmedia (ARmedia, Inc., 2019); Kim et al. (2017a) analyzed and went over the differences between the Vuforia, ARToolKit and Wikitude SDKs; and Nowacki and Woda (2019) analyzed and compared the capabilities of the ARCore and ARKit platforms. Moreover, AR headsets are becoming more popular after entering the consumer market. These devices enable users to view and interact with virtual objects or holograms projected onto the real world. Additionally, they capitalize on the ability of AR technology to provide hands-free interfaces by offering a ubiquitous and immersive view of virtual content, thus eliminating users’ need to shift context in order to interact with a device while carrying out tasks in the real world (Wang et al., 2019). HoloLens (Microsoft, Inc., 2019), Magic Leap (Magic Leap, Inc., 2019), Meta 2 (Metavision, Inc., 2019) and Vuzix Blade (Vuzix, Inc., 2019) are some examples of AR headsets.

Mixed reality

Milgram and Kishino (1994) presented the concept of the “reality – virtuality continuum”, where the real environment lies at one end and a completely virtual environment at the other. According to Milgram, mixed reality lies between these two ends and comprises AR and augmented virtuality. AR technology is closer to the real-environment end, as the predominant perception conveyed to the users is the real world augmented with virtual objects (e.g. sounds, images, computer graphics etc.). Augmented virtuality technology is closer to the virtual-environment end and refers to the augmentation of the virtual world with real objects for greater exactness, authenticity and realism. Therefore, a mixed reality environment is a space where objects of the physical and the virtual world are presented together in a unified depiction, anywhere between the two ends of the “reality – virtuality continuum”, rather than being treated as distinct points. Consequently, the limits of what exactly we perceive as real and as virtual in a mixed reality environment are not entirely clear and distinct (Lepetit, 2008, Schmalstieg et al., 2002).
