Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/19658
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sapienza, Michael | -
dc.contributor.author | Camilleri, Kenneth P. | -
dc.date.accessioned | 2017-06-06T07:31:22Z | -
dc.date.available | 2017-06-06T07:31:22Z | -
dc.date.issued | 2012 | -
dc.identifier.citation | Sapienza, M., & Camilleri, K. P. (2012). A generative traversability model for monocular robot self-guidance. 9th International Conference on Informatics in Control, Automation and Robotics, ICINCO 2012, Rome, 177-184. | en_GB
dc.identifier.isbn | 9789898565211 | -
dc.identifier.uri | https://www.um.edu.mt/library/oar//handle/123456789/19658 | -
dc.description | The research work disclosed in this publication is partially funded by the Strategic Educational Pathways Scholarship (Malta). The scholarship is part-financed by the European Union - European Social Fund (ESF) under the Operational Programme II - Cohesion Policy 2007-2013, Empowering People for More Jobs and a Better Quality of Life. | en_GB
dc.description.abstract | In order for robots to be integrated into human active spaces and perform useful tasks, they must be capable of discriminating between traversable surfaces and obstacle regions in their surrounding environment. In this work, a principled semi-supervised (EM) framework is presented for the detection of traversable image regions for use on a low-cost monocular mobile robot. We propose a novel generative model for the occurrence of traversability cues, which are a measure of dissimilarity between safe-window and image superpixel features. Our classification results on both indoor and outdoor image sequences demonstrate its generality and adaptability to multiple environments through the online learning of an exponential mixture model. We show that this appearance-based vision framework is robust and can quickly and accurately estimate the probabilistic traversability of an image using no temporal information. Moreover, the reduction in safe-window size as compared to the state of the art enables a self-guided monocular robot to roam in closer proximity to obstacles. | en_GB
dc.language.iso | en | en_GB
dc.publisher | ICINCO - IEEE Robotics and Automation Society | en_GB
dc.rights | info:eu-repo/semantics/openAccess | en_GB
dc.subject | Autonomous robots | en_GB
dc.subject | Robots | en_GB
dc.subject | Expectation-maximization algorithms | en_GB
dc.title | A generative traversability model for monocular robot self-guidance | en_GB
dc.type | conferenceObject | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.bibliographicCitation.conferencename | 9th International Conference on Informatics in Control, Automation and Robotics, ICINCO 2012 | en_GB
dc.bibliographicCitation.conferenceplace | Rome, Italy, 28-31/07/2012 | en_GB
dc.description.reviewed | peer-reviewed | en_GB
Appears in Collections:Scholarly Works - FacEngSCE
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.