Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/141991

| Title: | AI‐Driven gesture recognition with smart gloves |
| Authors: | Mallia, Jonathan |
| Keywords: | Image processing; Wearable technology; Artificial intelligence; Neural networks (Computer science); Machine learning; Deep learning (Machine learning) |
| Issue Date: | 2025 |
| Citation: | Mallia, J. (2025). AI‐Driven gesture recognition with smart gloves (Master’s dissertation). |
| Abstract: | This research presents the development of an AI‐driven gesture recognition system aimed at enhancing Human‐Computer Interaction through the use of smart gloves. Many emerging applications, such as virtual reality, robotics, and assistive technologies, require detailed motion capture of the hand in three dimensions. Traditional input devices are not designed to capture such motion, whereas wearable solutions like smart gloves offer a practical means of collecting complex motion data for gesture interpretation. This study proposes a system capable of interpreting dynamic hand gestures captured using smart gloves. A custom dataset was collected using Rokoko smart gloves, recording 14 gesture classes from 14 subjects. Time‐series data captured from the smart gloves was preprocessed, and a range of feature extraction methods, including statistical, frequency‐domain, and motion‐based techniques, were applied. Experiments were carried out to determine which feature, or combination of features, gives the best results. Dimensionality reduction methods, namely Principal Component Analysis and Autoencoders, were examined to optimise the feature space and reduce complexity. A number of classification models were implemented and compared, including Support Vector Machines, K‐Nearest Neighbours, and Hidden Markov Models, as well as deep learning approaches such as CNN‐LSTM networks. Experimental results showed that while most models achieved high accuracy on validation data (up to 93.64%), performance decreased significantly when tested on data from unseen subjects, dropping to 20.39‐28.93%. This highlights the challenge of inter‐subject generalisation. To mitigate this, personalised models were implemented, yielding clear performance improvements: the SVM classifiers achieved accuracy results ranging from 67.9% to 92.9%, with the majority of precision, recall, and F1 scores exceeding 85%, while CNN‐LSTM models consistently achieved accuracy above 95%, with precision, recall, and F1‐score values also remaining high. This work contributes to the field of gesture recognition by systematically evaluating feature engineering and modelling techniques on multichannel time‐series data. It underscores the importance of personalised learning strategies and provides insight into the practical limitations of real‐world deployment, such as latency and subject variability. Future work may explore domain adaptation, multimodal sensing, and real‐time implementation to further advance robust gesture‐based interfaces. |
| Description: | M.Sc.(Melit.) |
| URI: | https://www.um.edu.mt/library/oar/handle/123456789/141991 |
| Appears in Collections: | Dissertations - FacICT - 2025; Dissertations - FacICTAI - 2025 |
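
The abstract above outlines a classical recognition pipeline (windowed glove time‐series → statistical features → PCA → SVM) alongside deep CNN‐LSTM models, and reports a sharp accuracy drop on unseen subjects. The following is a minimal, hypothetical Python sketch of that classical pipeline using scikit-learn, evaluated with a leave-subjects-out split to illustrate the inter‐subject generalisation issue. All channel counts, window lengths, feature choices, and the synthetic data are illustrative assumptions, not the dissertation's actual configuration.

```python
# Hypothetical sketch: statistical features -> PCA -> SVM on windowed
# multichannel glove data, with a leave-subjects-out evaluation split.
# Shapes and parameters below are assumptions for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

N_SUBJECTS, N_CLASSES, REPS = 14, 14, 5   # mirrors the 14-class / 14-subject dataset
N_CHANNELS, N_SAMPLES = 19, 120           # assumed sensor channels and window length

def statistical_features(window: np.ndarray) -> np.ndarray:
    """One assumed flavour of the 'statistical' feature set:
    per-channel mean, std, min, max, and RMS over the window."""
    return np.concatenate([
        window.mean(axis=1),
        window.std(axis=1),
        window.min(axis=1),
        window.max(axis=1),
        np.sqrt((window ** 2).mean(axis=1)),
    ])

# Synthetic stand-in recordings: one feature vector per
# (subject, gesture class, repetition) window.
X, y, subjects = [], [], []
for s in range(N_SUBJECTS):
    for c in range(N_CLASSES):
        for _ in range(REPS):
            window = rng.normal(loc=0.1 * c + 0.05 * s,
                                size=(N_CHANNELS, N_SAMPLES))
            X.append(statistical_features(window))
            y.append(c)
            subjects.append(s)
X, y, subjects = np.array(X), np.array(y), np.array(subjects)

# Leave-subjects-out: train on subjects 0-10, test on subjects 11-13.
# This split is what exposes the inter-subject generalisation gap.
train = subjects < 11
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
model.fit(X[train], y[train])
print("unseen-subject accuracy:", model.score(X[~train], y[~train]))
```

Under this kind of split, it is the accuracy on held‐out subjects that collapses; the personalised models the abstract describes would correspond to fitting (or fine‐tuning) one such model per subject on that subject's own recordings.
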
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 2519ICTICS520005085705_1.PDF | Restricted Access | 8.8 MB | Adobe PDF |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.
