Project Code: TRAKE 2019
Main investigators: Dr Ing. Stefania Cristina, Dr Alexandra Bonnici, Prof. Ing. Kenneth Camilleri
Researcher: Dr Peter Ashley Clifford Varley
Project duration: 2020 - 2022
In the collaborative design process, where multiple users interact with a single object, gaze visualisations are designed to help collaborators understand where others are looking in a shared visual space. Such visualisations are key to effective communication and collaboration, particularly when the collaborators are not co-located and first-hand observation of each other's attentiveness is not possible. However, eye-gaze trackers require lengthy user calibration, which is not conducive to quick and easy collaborative design. As a result, eye-gaze tracking techniques have not been adopted in the field, despite the advantages they would bring. This project will use computer vision techniques to reduce user calibration, hence increasing the usability of eye-gaze tracking and visualisation in the collaborative design process.
To date, the research work has explored methods for the detection and localisation of salient facial features, based on which the eye and head pose angles could be resolved geometrically. This has led to several conference publications and presentations. The research work is presently reviewing deep learning-based methods for head/eye pose estimation, to investigate how these compare to the results that we have thus far obtained and explore avenues for improvement.
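To illustrate the kind of geometric resolution referred to above, the sketch below estimates the head pose from a handful of detected 2D facial landmarks by solving the Perspective-n-Point problem with OpenCV. It is an illustrative assumption rather than the project's actual implementation: the 3D face model points and camera intrinsics are generic placeholder values.

```python
# Hedged sketch: head pose from 2D facial landmarks via PnP (OpenCV).
# The 3D face model points and camera intrinsics are generic placeholders,
# not values used in the project.
import cv2
import numpy as np

# Approximate 3D positions (mm) of a generic face model:
# nose tip, chin, left/right eye corners, left/right mouth corners.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),
    (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0),
    (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0),
    (150.0, -150.0, -125.0),
], dtype=np.float64)

def head_pose_from_landmarks(image_points, frame_size):
    """Return (yaw, pitch, roll) in degrees from six detected 2D landmarks."""
    h, w = frame_size
    focal = w  # crude approximation of the focal length in pixels
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    rot, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    # Decompose the rotation matrix into Euler angles (degrees).
    sy = np.sqrt(rot[0, 0] ** 2 + rot[1, 0] ** 2)
    pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    yaw = np.degrees(np.arctan2(-rot[2, 0], sy))
    roll = np.degrees(np.arctan2(rot[1, 0], rot[0, 0]))
    return yaw, pitch, roll
```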
Project Code: ISFP 2017 AG-CYBER
Main investigators: Prof. Ing. Kenneth Camilleri, Dr Ing. Stefania Cristina, Dr Alexandra Bonnici, Prof. Reuben Farrugia (CCE)
Project duration: January 2019 - June 2021
Researchers: Mr Andre Tabone, Dr Mark Borg
The University of Malta is a beneficiary in the EU-funded 4NSEEK project, which aims to support efforts to prevent, detect, and stop child sexual abuse. The Computer Vision and Image Processing Group at the Department of Systems and Control Engineering applied its expertise in advanced computer vision, machine learning and AI to detect child sexual abuse in images and videos.
Main investigators: Dr Stefania Cristina, Prof. Ing. Kenneth P. Camilleri
Researchers: Mr Nipun Sandamal Ranasekara Pathiranag
Funding Body: RIDT ALIVE Cancer Research Grant 2018
Project duration: October 2023 - present
Early detection of malignant skin lesions is crucial for increasing the effectiveness of skin cancer treatment. Current methods for differentiating between benign and malignant skin lesions are invasive, because they involve the removal of the skin lesion, on which histopathology is then performed. This project instead aims to differentiate non-invasively between benign and malignant skin lesions by combining dynamic thermography with visual dermoscopy using deep learning techniques. The aim is to study the thermal and visual characteristics of the human skin in order to automatically distinguish between healthy and pathological skin regions. Deep learning techniques have already shown promise in improving detection rates when applied to dermoscopic images, and such techniques will therefore be investigated for the purpose of this study.
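As a rough illustration of how thermal and visual information might be combined, a minimal two-branch network sketch is given below. The architecture, layer sizes and fusion strategy are assumptions for illustration only, not the model under development in this project.

```python
# Hedged sketch: a minimal two-branch CNN that fuses a dermoscopic (RGB)
# image with a dynamic-thermography input collapsed to a single channel.
# Layer sizes and the fusion strategy are illustrative assumptions only.
import torch
import torch.nn as nn

class DualModalityClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.visual = branch(3)    # dermoscopic RGB patch
        self.thermal = branch(1)   # thermal recovery map / single frame
        self.head = nn.Linear(32 + 32, num_classes)  # benign vs malignant

    def forward(self, rgb, thermal):
        feats = torch.cat([self.visual(rgb), self.thermal(thermal)], dim=1)
        return self.head(feats)

# Example forward pass on dummy data of assumed input size 128x128.
model = DualModalityClassifier()
logits = model(torch.randn(4, 3, 128, 128), torch.randn(4, 1, 128, 128))
```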
Main investigators: Prof. Ing. Tracey Camilleri, Prof. Ing. Kenneth Camilleri, Dr Ing. Nathaniel Barbara
Researchers: Ing. Matthew Mifsud, Mr John Soler, Mr Jeremy James Cachia
Project duration: March 2024 - August 2025
The success of the EyeCon project, funded by the MCST Technology Development Programme, lies in controlling computer applications through electrooculography (EOG) for augmentative and alternative communication. Building upon this achievement, EyeCon+ aims to validate the software engine developed in EyeCon, which permitted continuous eye-gaze tracking using EOG while accommodating natural head movements. Comprehensive user testing will be conducted on a significant cohort of healthy participants and individuals with motor impairments, as they interact with grid-based graphical user interfaces, such as virtual keyboards and symbol-based interfaces. The quantitative results of the user testing, together with qualitative feedback obtained from these participants and collaborating occupational therapists, will play a pivotal role in refining and tailoring the current engine, enhancing its applicability, and advancing it towards commercialisation.
Project Code: REP-2022-002
Main investigators: Dr Ing. Stefania Cristina and Prof. Ing. Kenneth P. Camilleri
Researcher: Mr Nipun Sandamal
Project duration: June 2022 - September 2023
For many years, eye-gaze estimation methods exploited the ocular shape or features around the eye region. With the emergence of deep neural networks and their application to eye-gaze tracking, recent research has shifted towards using the image information in its entirety rather than relying on specific features alone, achieving promising results. Furthermore, technological advancements have spurred an interest in pervasive eye-gaze tracking, where eye movements are tracked and analysed in daily life conditions.
Fuelled by an interest in pervasive eye-gaze tracking for assisting persons with limited mobility, years of research work have led us to develop a passive eye-gaze tracking platform that estimates the eye-gaze by integrating information about the iris centre and the head pose into a model that compensates for appearance changes due to head rotation when estimating the eye pose. While we have achieved angular gaze errors comparable to the state of the art when the iris is sufficiently visible, the Achilles’ heel of the platform has remained the robust localisation of the iris centre coordinates under varying illumination conditions and iris occlusions.
In light of these challenges, this project aims to develop deep learning methods for robust iris centre localisation under varying illumination conditions and occlusions. Our primary application is human-computer interaction, whereby we aim to provide persons with physical impairments with an alternative modality for controlling a computer.
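A minimal sketch of the kind of model that could be trained for this task, a small convolutional regressor mapping a grey-level eye patch to normalised iris-centre coordinates, is given below. The architecture and sizes are illustrative assumptions, not the network being developed in the project.

```python
# Hedged sketch: a small CNN that regresses the normalised (x, y) iris
# centre from a grey-level eye patch. Architecture is illustrative only.
import torch
import torch.nn as nn

class IrisCentreRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regressor = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2), nn.Sigmoid(),   # (x, y) in [0, 1]
        )

    def forward(self, eye_patch):
        return self.regressor(self.features(eye_patch))

# Training step sketch: mean-squared error against annotated centres.
model = IrisCentreRegressor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.randn(8, 1, 48, 72)   # dummy eye patches
targets = torch.rand(8, 2)            # dummy normalised centre coordinates
loss = nn.functional.mse_loss(model(patches), targets)
loss.backward()
optimiser.step()
```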
Main investigators: Dr Alexandra Bonnici, Dr Stefania Cristina
Researchers: Mr Andre Tabone, Mr Luke Abela
Funding Body: MCST Research Excellence Program
Project duration: June 2022 - September 2023
While text-to-speech tools exist, these often support the more common languages, used by millions of people rather than a few hundred. Moreover, many of these text-to-speech tools assume that the text is easily distinguishable from the page background. This is not necessarily the case, particularly in children’s books, where pictures and illustrations often form part of the background. In these cases, simple binarisation algorithms fail to distinguish the text from the illustration, and the text-to-speech algorithm consequently fails to read the text correctly. This project aims to address this problem by developing a robust text extraction algorithm able to distinguish text from illustrations. We also aim to continue our earlier work on the development of a text-to-speech algorithm for the Maltese language.
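To illustrate why a more robust approach is needed, the sketch below contrasts a naive baseline with the kind of heuristic filtering that can separate text-like components from illustrations. The thresholds and heuristics are illustrative assumptions only, not the techniques being investigated in this project.

```python
# Hedged sketch: a naive baseline for isolating text-like regions on an
# illustrated page using local thresholding and connected-component
# filtering. Thresholds and heuristics are illustrative assumptions.
import cv2
import numpy as np

def candidate_text_mask(page_bgr):
    grey = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
    # Local (adaptive) thresholding copes better than a single global
    # threshold when text is printed over coloured illustrations.
    binary = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 15)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    mask = np.zeros_like(binary)
    for i in range(1, n):  # component 0 is the background
        x, y, w, h, area = stats[i]
        aspect = w / max(h, 1)
        # Keep components whose size and aspect ratio are plausible for
        # characters; illustrations tend to form much larger blobs.
        if 20 < area < 5000 and 0.1 < aspect < 10:
            mask[labels == i] = 255
    return mask
```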
Project Code: SCP-2022-010
Main investigators: Dr Tracey Camilleri, Prof. Ing. Kenneth Camilleri, Dr Nathaniel Barbara
Researchers: Mr Matthew Mifsud, Mr Salah Ad-Din Ahmed Youbi
Project duration: December 2022 - December 2024
Being immersed in a technological environment has made it important to be able to communicate with and control technological devices in a seamless, effortless manner. Standard interfaces include remote controls, applications on smartphones or tablets, or touch screens on the device itself. This communication modality, however, is not always suitable for individuals with limited fine motor skills, who find it difficult to press small buttons on a remote control or icons on a touch screen.
SmartGaze aims to address this issue by exploiting the natural gaze interaction of human beings with devices in their environment, allowing individuals with mobility impairments to control devices such as an air conditioner or television set using eye-gaze tracking. Specifically, electrooculography (EOG) is used as the eye-gaze tracking modality, together with head orientation and localisation of the individual within a smart home, to determine the device that the user wants to control. Once locked onto a device, the individual selects device-specific control functions through simple eye gestures. The proposed system makes use of wearable, wireless EOG glasses and does not require a computer screen for device function selection, making it more practical to use.
SmartGaze thus provides a novel communication interface for individuals who lack the necessary fine motor skills to control standard interfaces, bringing forth more independence and a better quality of life as it reduces the continuous dependence on carers or family members.
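As a simple illustration of the interaction pattern described above, the sketch below maps detected eye gestures to device commands once a device has been locked onto. The gesture names, devices and commands are hypothetical placeholders, not the SmartGaze command set.

```python
# Hedged sketch: translating EOG-detected eye gestures into commands for
# the currently locked device. Gestures, devices and commands are
# hypothetical placeholders, not the SmartGaze protocol.
DEVICE_COMMANDS = {
    "television": {"look_up": "volume_up", "look_down": "volume_down",
                   "double_blink": "power_toggle"},
    "air_conditioner": {"look_up": "temp_up", "look_down": "temp_down",
                        "double_blink": "power_toggle"},
}

class GazeDeviceController:
    def __init__(self):
        self.locked_device = None

    def lock_device(self, device):
        """Called once head orientation and room localisation select a device."""
        if device in DEVICE_COMMANDS:
            self.locked_device = device

    def on_eye_gesture(self, gesture):
        """Translate an EOG-detected eye gesture into a device command."""
        if self.locked_device is None:
            return None
        return DEVICE_COMMANDS[self.locked_device].get(gesture)

controller = GazeDeviceController()
controller.lock_device("television")
print(controller.on_eye_gesture("look_up"))  # -> "volume_up"
```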
Project Code: R&I-2017-002T
Main investigators: Dr Reuben A. Farrugia, Prof. Ing. Kenneth Camilleri, Prof. John M. Abela
Researcher: Dr Christian P. Galea, Mr Matthew Aquilina
Project duration: July 2018 - August 2023
The Deep-FIR project is run by the Department of Communications and Computer Engineering of the Faculty of ICT. The project aims to design and implement a face image restoration algorithm that is able to restore very low-resolution facial images captured by CCTV systems with unconstrained pose and orientation. The user will be able to restore the whole head, including the hair region, which is important for person identification, while minimising the manual work of the operator. Apart from improving the quality of the restored facial images, this project intends to reduce the complexity, and therefore the time, needed to enhance an image or video frame. The developed algorithm will be tested on real-world CCTV videos and compared against existing video forensic tools used by forensic experts in their labs.
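For illustration, the sketch below shows an SRCNN-style residual network of the general class used for single-image super-resolution. It is not the Deep-FIR algorithm; the architecture and layer sizes are arbitrary assumptions.

```python
# Hedged sketch: an SRCNN-style network for face image super-resolution,
# shown only to illustrate the class of model; it is not the Deep-FIR
# algorithm and the layer sizes are arbitrary.
import torch
import torch.nn as nn

class SimpleFaceSR(nn.Module):
    """Maps a bicubically upsampled low-resolution face to a sharper one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, upsampled_lr):
        # Predict a residual correction on top of the blurry input.
        return upsampled_lr + self.net(upsampled_lr)

model = SimpleFaceSR()
restored = model(torch.randn(1, 3, 128, 128))  # dummy upsampled CCTV crop
```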
Project Code: FUSION R&I-2018-012T
Main investigators: Dr Tracey Camilleri, Prof. Ing. Kenneth Camilleri, Mr Nathaniel Barbara
Researcher: Mr Matthew Mifsud
Project duration: February 2020 - March 2023
EyeCon aims to use a particular eye movement recording technique known as electrooculography (EOG), whereby the electrical activity of the human eyes is captured using electrodes attached to the face in close proximity to the eyes, to develop a practical human-computer interface (HCI) system. The project addresses practical issues related to the use of EOG-based systems, in particular fusing head pose information and developing head movement compensation algorithms, so that the user can interact with an eye movement-based assistive application naturally and without restrictions.
Project Code: R&I-2019-003-T
Main investigators: Prof. Ing. Simon Fabri, Dr Ing. Owen Casha
Researcher: Dr Mario Farrugia
Project duration: 2019 - present
This project forms part of a larger MCST-funded research programme called SmartClap, led by Prof. Philip Farrugia from the Department of Industrial and Manufacturing Engineering. This project is concerned with the design, implementation and testing of a Motion Capture System to track finger, wrist and arm movements of children with Cerebral Palsy (CP) while playing a Virtual Reality (VR) game purposely designed to help with their rehabilitation therapy. In addition to designing and implementing a Motion Capture Algorithm (MCA), the design, fabrication and testing of the back-end hardware and electronics is also included.
Project Code: R&I-2017-028-T
Main investigators: Prof. Ing. Michael A. Saliba, Prof. Ing. Kenneth Camilleri, Dr Jesmond Attard
Researcher: Ms Yesenia Aquilina, Mr Christian Von Brockdorff, Ms Rachel Cauchi
Project duration: September 2019 - March 2023
The project MAProHand is run by the Department of Mechanical Engineering. Building on previous work carried out at the University of Malta, the primary research objective of this work is to carry out a systematic exercise to seek a practical solution that optimises the trade-off between simplicity, dexterity and usability of a prosthetic hand within a single device, by extracting an acceptable and optimum dexterity out of the simplest possible architecture while maintaining high usability of the device. The Department’s contribution to this project is mainly related to the extraction of surface electromyography (sEMG) signals from the forearm and relating these to finger movement.
Project Code: R&I-2017-003T
Main investigators: Dr Ing. Philip Farrugia and Prof. Ing. Simon G. Fabri
Project duration: November 2018 - April 2022
This research forms part of the RIDE+SAFE research project funded by the FUSION R&I Technology Development Programme 2017. The project aims to develop a novel Product Service System (PSS) that supports motorcycle customisation to promote a safe and comfortable ride, thereby reducing the risk of accidents. One particular aspect of the project, under the expertise of the Department of Systems and Control Engineering, focuses on the development and implementation of a dynamic, physically-actuated test jig that is able to provide a realistic and immersive virtual environment for the simulation of a motorcycle ride. This should cater for rider body tracking, physical control and vestibular effects to emulate as realistically as possible the dynamics of the ride, thereby aiding customisation of the design.
This project is under the direction of the Department of Industrial and Manufacturing Engineering in collaboration with the Department of Systems and Control Engineering and WKD Works Ltd as industrial partner.
Dr Ing. Philip Farrugia (Department of Industrial and Manufacturing Engineering) is the Project Leader of RIDE+SAFE.
Main investigators: Prof. Ing. Kenneth P. Camilleri and Dr Ing. Stefania Cristina
Project duration: July 2017 - January 2022
Eye movements have long been recognised as providing an alternative channel for communication with, or control of, a machine such as a computer, substituting traditional peripheral devices. The ample information inherent in eye movements has attracted increasing interest through the years, leading to a host of eye-gaze tracking applications in several fields, including assistive communication, automotive engineering, and marketing and advertising research. The project proposes a passive eye-gaze tracking platform aimed at providing an alternative communication channel for persons with physical disabilities, permitting them to perform mundane activities such as operating a computer, hence improving their quality of life and independence, or for healthy individuals as an additional access method, permitting an auxiliary control input for computer applications such as games.
Human eye-gaze tracking has been receiving increasing interest over the years. Recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking and analysing eye movements continuously in unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. The notion behind the paradigm of pervasive eye-gaze tracking is multi-faceted and typically relates to characteristics that facilitate eye-gaze tracking in uncontrolled real-life scenarios, such as robustness to varying illumination conditions and extensive head rotations, the capability of estimating the eye-gaze at increased distance from the imaging hardware, reduced or implicit calibration to allow for situations that do not permit user co-operation and calibration awareness, and the estimation of eye-gaze on mobile devices using integrated hardware without requiring further hardware modification. This will potentially broaden the application areas for eye-gaze tracking within scenarios that may not permit controlled conditions, such as gaze-based interaction in public spaces.
In the proposed platform, eye and head movements will be captured in a stream of image frames acquired by a webcam, and subsequently processed by a computer (and possibly mobile devices) in order to estimate the gaze direction according to the eye and head pose components. Mapping the eye-gaze to a computer screen will permit commands to be issued by the selection of icons on a suitably designed user interface. This project will be addressing challenges associated with eye-gaze tracking under uncontrolled daily life conditions, including handling of head and non-rigid face movements, and reduction or elimination of user calibration for more natural user interaction.
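As a simple geometric illustration of how the eye and head pose components can be combined to obtain a point on the screen, the sketch below forms a gaze ray from a head rotation and an eye-in-head direction and intersects it with the screen plane. The coordinate conventions and screen geometry are assumptions for illustration, not the project's calibration or mapping procedure.

```python
# Hedged sketch: combining a head rotation with an eye-in-head gaze
# direction to obtain a gaze ray, then intersecting that ray with the
# screen plane. Coordinate conventions and geometry are assumptions.
import numpy as np

def gaze_point_on_screen(head_R, eye_dir_head, eye_centre_cam,
                         screen_origin, screen_normal):
    """All quantities are expressed in camera coordinates.
    head_R        : 3x3 head rotation matrix
    eye_dir_head  : unit gaze direction in the head frame
    eye_centre_cam: 3D eye centre position
    screen_origin : a point on the screen plane
    screen_normal : unit normal of the screen plane
    """
    gaze_dir = head_R @ eye_dir_head           # rotate into camera frame
    denom = gaze_dir @ screen_normal
    if abs(denom) < 1e-9:
        return None                            # gaze parallel to screen
    t = ((screen_origin - eye_centre_cam) @ screen_normal) / denom
    return eye_centre_cam + t * gaze_dir       # 3D intersection point

# Dummy example: screen at z = 0, user 50 cm away, looking straight ahead.
point = gaze_point_on_screen(np.eye(3), np.array([0.0, 0.0, -1.0]),
                             np.array([0.0, 0.0, 500.0]),
                             np.array([0.0, 0.0, 0.0]),
                             np.array([0.0, 0.0, 1.0]))
```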
Part of this research is carried out under the project R&I-2016-010-V WildEye, financed by the Malta Council for Science and Technology through FUSION: The R&I Technology Development Programme 2016, in collaboration with Seasus Ltd. WildEye aims to develop a low-cost eye-gaze tracking platform as an alternative communication channel for disabled persons who may not otherwise be able to control a computer via traditional peripheral devices, such as the mouse and keyboard.
Project Code: R&I-2015-032-T
Main investigators: Dr Tracey Camilleri, Prof. Ing. Kenneth P. Camilleri and Dr Owen Falzon
Project duration: July 2017 - May 2021
A Brain Computer Interface (BCI) gives a person the ability to communicate with and control machines using brain signals instead of peripheral muscles. BCIs allow people with severely restricted mobility to control devices around them, increasing their level of independence and improving their quality of life. BCIs may also be used by healthy individuals, e.g. in gaming, and are expected to become a ubiquitous alternative means of communication and control.
This project was awarded funding under the FUSION R&I Technology Development Programme 2016 and commenced on the 31st of July 2017, with the collaboration of 6PM as the commercial partner. The project proposes the development of a novel application controlled directly with brain signals, opening up accessibility to individuals suffering from motor disabilities and providing alternative access methods to healthy individuals. BCIs acquire the electrical brain activity using electroencephalography (EEG) electrodes, relying on brain phenomena such as those evoked by flickering visual stimuli, known as steady-state visually evoked potentials (SSVEP). In the proposed system, stimuli are associated with commands, and EEG signals are processed to detect the intent associated with the brain pattern. A key challenge is to have BCIs operating in real environments amidst the nuisance signals generated by normal user actions. The project proposes solutions to this challenge, enabling real-time operation at the user’s will. It also aims to address the annoyance factor of the flickering stimuli, ensuring that the system can be used comfortably for long periods of time, if necessary.
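A widely used approach for detecting SSVEP responses, not necessarily the one adopted in this project, is canonical correlation analysis (CCA) between a multi-channel EEG segment and sinusoidal reference signals at each stimulus frequency. A minimal sketch is given below; the sampling rate and flicker frequencies are placeholder values.

```python
# Hedged sketch: SSVEP frequency detection by canonical correlation
# analysis (CCA), a common technique; it is not necessarily the method
# adopted in this project. Sampling rate and frequencies are placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 256                              # sampling rate in Hz (assumed)
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]  # flicker frequencies in Hz (assumed)

def reference_signals(freq, n_samples, n_harmonics=2):
    """Sine/cosine references at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def detect_ssvep(eeg_segment):
    """eeg_segment: (n_samples, n_channels) array. Returns detected frequency."""
    scores = []
    for f in STIM_FREQS:
        refs = reference_signals(f, eeg_segment.shape[0])
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg_segment, refs)
        scores.append(abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]))
    return STIM_FREQS[int(np.argmax(scores))]

# Dummy example: 2 s of noisy 6-channel EEG containing a 10 Hz component.
t = np.arange(2 * FS) / FS
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)[:, None] + np.random.randn(2 * FS, 6)
print(detect_ssvep(eeg))
```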
Project Code: R&I-2015-042-T
Main investigators: Dr Ing. Philip Farrugia, Prof. Ing. Simon G. Fabri, Dr Ing. Owen Casha, Prof. Helen Grech and Dr Daniela Gatt
Project duration: July 2016 - July 2019
This research forms part of the SPEECHIE research project funded by the FUSION R&I Technology Development Programme 2016, which aims to develop a novel device that facilitates language therapy for children with speech impairment. This aspect of the project focuses on the development and implementation of speech recognition and analysis capabilities to monitor the child’s performance and allow for autonomous interaction during therapy sessions.
This project is under the direction of the Department of Industrial and Manufacturing Engineering in collaboration with the Department of Systems and Control Engineering, the Department of Microelectronics and Nanoelectronics and the Department of Communication Therapy. Dr Ing. Philip Farrugia is the Project Leader of SPEECHIE.
Main Investigators: Prof. Ing. Simon G. Fabri, Prof. Ing. Kenneth P. Camilleri and Prof. Ing. Marvin Bugeja
Research Support Officer: Hani Ahmed Hazza
This project is concerned with the design of intelligent control methodologies for systems that must operate under conditions of complexity and uncertainty, using the latest developments in Artificial Intelligence such as deep learning and reinforcement learning. Intelligent control offers potential for the automation of equipment in, for example, pollution control or wastewater treatment, the development of smart and reliable systems for the control of active prosthetic devices, the automation and control of industrial manufacturing facilities and robotic assembly infrastructures, and the development of autopilot systems and driverless/autonomous navigation. This breadth of applications is testimony to the fact that control systems are ubiquitous in many technological areas, and that modern systems present complex challenges demanding controllers that are smarter than traditional techniques and that make use of Artificial Intelligence.
Main investigators: Prof. Ing. Kenneth P. Camilleri, Prof. Ing. Tracey Camilleri, Prof. Ing. Simon G. Fabri and Prof. Ing. Marvin Bugeja
Research Support Officer: Dr Natasha Padfield
The project seeks to: (a) integrate a BCI signal into the dynamic model of a smart wheelchair; (b) develop new methods permitting multi-dimensional control signal integration, to include, e.g., speed control and direction control; (c) estimate signal integration parameters by reinforcement learning, so that they can be tuned with practice; and (d) explore more intuitive mental states, such as thought speech. Combining an intuitive mental state command with a paradigm of continuous BCI control would lead to a more natural brain-machine interaction resembling embodied control, making this technology more viable for people with motor impairment. The BCI experts involved in this project, two of whom are members of the Department, will contribute to the development of a BCI platform and to the investigation of alternative BCI mental states; the robot and control experts, members of the Department, will contribute to the development of the physical wheelchair model and the integration models; and a medical doctor specialising in rehabilitation medicine will contribute end-user advice and recruitment. The project is being carried out with the collaboration of the Rehabilitation Specialist-in-Training, Dr Andrei Agius Anastasi.
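To give a flavour of what feeding a decoded BCI command into a wheelchair model might look like, a minimal sketch based on a simple unicycle (differential-drive) kinematic model is shown below. The command set, gains, smoothing and model are assumptions for illustration only, not the integration methods being developed in this project.

```python
# Hedged sketch: feeding a decoded BCI command into a simple unicycle
# kinematic model of a wheelchair. The command set, gains and smoothing
# are illustrative assumptions, not the project's integration method.
import math

# Mapping from a (hypothetical) discrete BCI command to desired
# linear velocity v (m/s) and angular velocity w (rad/s).
COMMAND_MAP = {"forward": (0.4, 0.0), "left": (0.2, 0.5),
               "right": (0.2, -0.5), "stop": (0.0, 0.0)}

class WheelchairModel:
    def __init__(self, dt=0.1, alpha=0.2):
        self.x = self.y = self.theta = 0.0   # pose (m, m, rad)
        self.v = self.w = 0.0                # current velocities
        self.dt, self.alpha = dt, alpha      # time step, smoothing factor

    def step(self, bci_command):
        v_ref, w_ref = COMMAND_MAP.get(bci_command, (0.0, 0.0))
        # First-order smoothing so jittery BCI decisions do not produce
        # abrupt velocity changes.
        self.v += self.alpha * (v_ref - self.v)
        self.w += self.alpha * (w_ref - self.w)
        self.x += self.v * math.cos(self.theta) * self.dt
        self.y += self.v * math.sin(self.theta) * self.dt
        self.theta += self.w * self.dt
        return self.x, self.y, self.theta

chair = WheelchairModel()
for cmd in ["forward"] * 20 + ["left"] * 10:
    pose = chair.step(cmd)
```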