Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/10984
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Mercieca, Isaac
dc.date.accessioned: 2016-06-21T08:10:02Z
dc.date.available: 2016-06-21T08:10:02Z
dc.date.issued: 2015
dc.identifier.uri: https://www.um.edu.mt/library/oar//handle/123456789/10984
dc.description: B.SC.IT(HONS) (en_GB)
dc.description.abstract: Visually impaired individuals are a growing segment of our population, yet most social mechanisms are not designed with them in mind, making electronic assistive tools essential for performing basic day-to-day activities. Assistive technologies generally come in the form of expensive, specialized devices which the visually challenged have to look after and carry around independently. Given the increasing computational capability and on-board sensing components found in modern mobile phones, such devices have become ideal candidates for designing solutions to aid the visually impaired. The objective of this FYP is to develop a multimedia user interface aimed at aiding the visually challenged. We propose and design a grocery product recognition system that utilizes computer vision and machine learning techniques. Our system works towards allowing visually impaired or blind individuals to identify products in grocery stores and supermarkets without additional assistance, encouraging them to perform daily activities with as little help as possible and thereby promoting their social wellness. Our approach is composed of two main modules: one classifies grocery products using the unsupervised feature extraction offered by deep learning techniques, while the other recognizes products in an image using traditional handcrafted feature extraction algorithms (illustrative sketches of both approaches follow this record). We considered multiple robust approaches in order to identify the one best suited to our task. Through evaluation we determined that the best approach for classification is to fine-tune a convolutional neural network pre-trained on a larger dataset; we not only surpassed our target accuracy of 21.19% but achieved an accuracy of 63%. Evaluation of the recognition module, which achieved a recognition rate of 41.38%, highlighted the difficulty of recognizing similar objects in a scene. (en_GB)
dc.language.iso: en (en_GB)
dc.rights: info:eu-repo/semantics/restrictedAccess (en_GB)
dc.subject: Multimedia systems (en_GB)
dc.subject: User interfaces (Computer systems) (en_GB)
dc.subject: People with visual disabilities (en_GB)
dc.title: Multimedia interfaces for blind people (en_GB)
dc.type: bachelorThesis (en_GB)
dc.rights.holder: The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and make use of the information contained in it in accordance with the Copyright Legislation, provided that the author is properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. (en_GB)
dc.publisher.institution: University of Malta (en_GB)
dc.publisher.department: Faculty of Information and Communication Technology. Department of Intelligent Computer Systems (en_GB)
dc.description.reviewed: peer-reviewed (en_GB)
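
Classification module sketch: the abstract's best-performing approach fine-tunes a convolutional neural network pre-trained on a larger dataset. The dissertation does not name its framework, model, or hyperparameters, so the snippet below is a minimal illustrative sketch assuming PyTorch/torchvision, ImageNet-pre-trained ResNet-18 weights, a hypothetical grocery_train/ directory of per-class image folders, and a made-up class count; it is not the author's implementation.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    NUM_CLASSES = 80  # hypothetical number of grocery product classes

    # Standard ImageNet preprocessing so inputs match the pre-trained weights.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # "grocery_train/" is a placeholder: one sub-folder of images per class.
    train_set = datasets.ImageFolder("grocery_train", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # Start from a network pre-trained on the larger (ImageNet) dataset and
    # replace its final layer with a fresh grocery-product classifier head.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    # Fine-tune the whole network with a small learning rate.
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):  # epoch count is illustrative
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()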
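Recognition module sketch: the abstract's second module relies on handcrafted features, and its 41.38% recognition rate reflects how visually similar grocery products confound local-descriptor matching. The snippet below is an assumed illustration using OpenCV SIFT descriptors with Lowe's ratio test; the abstract does not specify which handcrafted features were used, and the filenames and thresholds are placeholders.

    import cv2

    def product_present(reference_path, scene_path, ratio=0.75, min_matches=10):
        """Return True if the reference product appears in the scene photo."""
        ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
        scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

        # Detect keypoints and compute SIFT descriptors for both images.
        sift = cv2.SIFT_create()
        _, ref_desc = sift.detectAndCompute(ref, None)
        _, scene_desc = sift.detectAndCompute(scene, None)

        # k-NN matching plus Lowe's ratio test discards ambiguous matches,
        # the typical failure mode with near-identical product packaging.
        matcher = cv2.BFMatcher()
        good = []
        for pair in matcher.knnMatch(ref_desc, scene_desc, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])
        return len(good) >= min_matches

    # Placeholder filenames for illustration only.
    print(product_present("cereal_box.jpg", "shelf_photo.jpg"))
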
Appears in Collections: Dissertations - FacICT - 2015
Dissertations - FacICTAI - 2015

Files in This Item:
File: 15BSCIT032.pdf (Restricted Access)
Size: 1.68 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.