Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/91965
Title: Race anywhere - a markerless augmented reality racing game
Authors: Azzopardi, Matthew (2012)
Keywords: Augmented reality
Virtual reality
Human-computer interaction
Issue Date: 2012
Citation: Azzopardi, M. (2012). Race anywhere - a markerless augmented reality racing game (Bachelor's dissertation).
Abstract: Augmented Reality games are rapidly gaining popularity on mobile gaming platforms such as mobile phones and dedicated handheld gaming devices. In these games, the view of the real world captured by the device's camera is augmented with virtual graphics so that both realities blend into an immersive experience. AR gaming can be subdivided into two categories: marker-based AR, in which one or more printed pattern cards are used to perform registration, and markerless AR, in which registration is performed using natural features in the scene. Both categories suffer from the same serious drawback: they provide only static gaming experiences. We propose an approach in which the environment itself introduces an element of variability, so that the gaming experience is tied to the player's surroundings. The aim of this project is to show how the environment, as viewed through the capture device, can alter the content of an AR game so that the experience is no longer static. To demonstrate this, we implement a small top-down racing game in which the racing track is generated from the objects detected in the environment. In a typical scenario, the player positions common everyday objects on a rectangular surface, points the device at the scene, and plays an AR racing game on a track that zig-zags around the real-world objects. We begin by developing a framework for feature detection and tracking, in which features from the previous frame are matched to those in the current frame using Shi-Tomasi feature detection and optical flow. To initialize the system, the user is asked to label four features in the image that represent the corners of the playing area. From these we recover the necessary camera extrinsic parameters so that 3D content can be projected onto the view.
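As a minimal illustration of the Shi-Tomasi criterion mentioned above, the sketch below computes the minimum-eigenvalue corner response over a grayscale image in plain NumPy. This is not the dissertation's implementation; the function name, window size, and gradient scheme are illustrative assumptions.

```python
import numpy as np

def shi_tomasi_response(img, win=3):
    """Shi-Tomasi corner response: the smaller eigenvalue of the local
    structure tensor, computed per pixel over a (2*win+1)^2 window.
    Good features to track are pixels where this response is large."""
    # Image gradients via central differences (axis 0 = y, axis 1 = x).
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    h, w = img.shape
    resp = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            # Sum the structure tensor entries over the window.
            sxx = Ixx[y - win:y + win + 1, x - win:x + win + 1].sum()
            syy = Iyy[y - win:y + win + 1, x - win:x + win + 1].sum()
            sxy = Ixy[y - win:y + win + 1, x - win:x + win + 1].sum()
            # Smaller eigenvalue of [[sxx, sxy], [sxy, syy]].
            tr = sxx + syy
            det = sxx * syy - sxy * sxy
            resp[y, x] = tr / 2 - np.sqrt(max(tr * tr / 4 - det, 0.0))
    return resp
```

A corner (gradients in two directions) scores high, an edge (one direction) scores near zero, and a flat region scores zero, which is why tracked features survive frame-to-frame matching better than edge points.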
To obtain the information required for creating a track, we apply a homography that maps the plane defined by the four labelled corners to a second plane viewed from directly above. A thresholding function then yields a binary image indicating the presence of objects on the plane; to account for multiple views, these are merged into a composite binary image that closely approximates where the objects touch the plane. Finally, we resample this image into a predefined grid, then generate and connect waypoints around the detected objects, obtaining a closed-loop track around the real-world objects. Although a full game was not completed, we implemented a base prototype in which valid tracks were generated around real-world objects and overlaid correctly on the real-time video stream. However, the system is currently far from robust. It requires the labelled corners to remain visible at all times and must be re-initialized if these features are lost for any reason. Furthermore, if the objects placed in the environment are of significant height, the user experience can break down, as track occlusion is not yet supported.
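The plane-to-top-down mapping described above can be sketched with a Direct Linear Transform (DLT) homography estimate from the four corner correspondences. This NumPy version is an illustrative sketch, not the dissertation's code; it assumes exact, noise-free corners, and the function names are hypothetical.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src[i] -> dst[i] via the
    Direct Linear Transform. Needs at least 4 point correspondences.
    Each correspondence contributes two rows to the linear system A h = 0."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

def warp_point(H, p):
    """Apply H to a 2-D point, normalising the homogeneous coordinate."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Warping the camera view's four labelled corners to the corners of an axis-aligned rectangle gives the synthetic top-down view from which the object mask, and hence the track grid, can be derived.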
Description: B.SC.ICT(HONS)ARTIFICIAL INTELLIGENCE
URI: https://www.um.edu.mt/library/oar/handle/123456789/91965
Appears in Collections:Dissertations - FacICTAI - 2002-2014

Files in This Item:
File: BSC(HONS)ICT_Azzopardi, Matthew_ 2012.PDF (Restricted Access)
Size: 7.82 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.