<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>OAR@UM Collection:</title>
  <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/48904" />
  <subtitle />
  <id>https://www.um.edu.mt/library/oar/handle/123456789/48904</id>
  <updated>2026-04-09T07:51:10Z</updated>
  <dc:date>2026-04-09T07:51:10Z</dc:date>
  <entry>
    <title>IoT-based traffic light control</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/49133" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/49133</id>
    <updated>2020-04-24T07:13:41Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: IoT-based traffic light control
Abstract: Machine learning is the process of teaching a set of artificial neurons to perform a task that is usually too difficult to program explicitly. Traffic light systems are an example of such difficult tasks.
The aim of this thesis is, therefore, to combine machine learning algorithms with the processing power of the cloud for use in traffic light control. The cloud/IoT aspect of this thesis relieves local traffic light controllers of computation, thereby reducing hardware cost, and allows sensors and other components to function independently of one another. This reduces wiring and minimises intrusion when installing onto existing traffic networks.
The designed sensors and traffic controllers feature vehicle classification and the capability to upload real-time traffic information to the cloud. The developed reinforcement learning algorithm reduced vehicle wait times by an average of 29.3 per cent.
Description: B.ENG.(HONS)</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Automated page turner for musicians</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/49132" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/49132</id>
    <updated>2020-04-24T07:11:01Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: Automated page turner for musicians
Abstract: Page turning has at some point frustrated every musician who has had to abandon a performance temporarily to turn the physical page by hand. During a performance, a pianist uses both hands to play the instrument, so turning the page means quickly lifting at least one hand from the keyboard. This may lead to various types of performance errors, and every musician has to develop his or her own method of overcoming the annoyance; in fact, good music book editors edit the music so that a long note or a pause falls towards the end of a page where a page turn is necessary. The objective of this dissertation is to develop an automated page turner that tracks the user’s progression through the music score using an eye gaze tracking system. To design and implement the fully automated page turner, it was first necessary to understand how the musician interacts with the score. This led to a data collection and processing stage, on which the design criteria of the page turning application were then based. Future improvements based on the outcomes of this project would include the use of a tablet camera to record eye gaze and the tablet screen to display music. With this in mind, the score was divided into pages containing only two lines of music, as described in [1]. Under such conditions, half-page turns are implemented, replacing individual lines of music. This music format was presented to the test subjects, whose eye gaze values and performance were recorded using the eye gaze sensor system and a digital piano respectively. From the information collected it was observed that sensor deviations, and instances where the musician looks down at the keyboard, cause the tracker to lose track of the eyes and return redundant or zero values. Hence, a Kalman filter was included in the system to smooth out the readings.
The sensitive areas on the score were set by using image processing to detect the bar lines, so that an understanding of the temporal structure of the score could be obtained. The values returned by the eye gaze tracker differ from those obtained from score processing in resolution and screen utilisation: the tracker returns coordinates spanning the whole screen, whilst the image values lie inside a figure, so a scaling and compensating function was required to make the two sets of coordinates comparable. These functions were tested separately and integrated into a single application using Matlab’s GUIDE environment. Initial tests of the page turning application showed a small number of redundant page turns, but on inspection these were all linked to the same series of events in the musician’s performance. By tuning the sizes of the sensitive areas and the resistance towards triggering page turns, a better system was achieved. Instances when the sensor loses track of the eyes for a long period, because the user glances at the keyboard or shifts to play at the extremities of the piano, caused most of the observed problems. These instances were tackled by replacing the eye gaze inputs with inputs based on a model and the previous positions, estimating where the user would have been looking. The result was a page turner with an accuracy of 98.27% that suffered only from delayed page turns, eliminating the cases in which its previous version would have triggered an early page turn and replaced the line currently being performed.
Description: B.ENG.(HONS)</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Vibration control in flexible systems with multiple modes</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/49131" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/49131</id>
    <updated>2020-04-24T07:08:57Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: Vibration control in flexible systems with multiple modes
Abstract: Industrial robots are widely used in the manufacturing and construction industries. The robotics industry generally aims at having a light system capable of achieving high precision and accuracy in the shortest possible duration. Vibrations are induced within a flexible system if the system either involves inherently flexible parts or is of light-weight construction. The main objective of this dissertation is to address vibration issues in systems with multiple vibration modes, due to the presence of more than one flexible component. For computer-controlled systems, one effective feed-forward technique is input shaping, which makes use of the constructive cancellation principle. Input shaping is done by convolving a sequence of impulses with a desired base command, which in turn creates a self-cancelling command signal. The input shaping techniques considered in this dissertation are Positive Zero-Vibration shapers, Specified Negative Amplitude Zero-Vibration Derivative-Derivative shapers and the S-curve command function. A rotary multiple-link flexible manipulator was used to analyse and validate the effect of the different input shaping techniques. A virtual model of the multiple-link flexible manipulator, along with a model of the PM DC motor and an angular positional controller, was implemented in a realistic simulation environment provided by MATLAB® Simscape Multibody™. Different input shaping techniques were implemented and their effect on the virtual model was analysed through three-dimensional animations and graphical representations of the vibrations. Furthermore, the input shaping techniques were digitally implemented on the DS1104 control board using MATLAB® Simulink® and ControlDesk dSPACE software. One can conclude that the most effective and robust input shapers are those consisting of a higher number of impulses, namely the positive convolved shapers, the SNA-ZVDD shaper and the S-curve command used as a base function to positive input shapers.
The analysis carried out in this dissertation is based on vibration reduction, settling time reduction and the robustness of the input shaping techniques.
Description: B.ENG.(HONS)</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Automation enforcement on priority lanes</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/49129" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/49129</id>
    <updated>2020-04-24T07:06:45Z</updated>
    <published>2019-01-01T00:00:00Z</published>
    <summary type="text">Title: Automation enforcement on priority lanes
Abstract: Traffic laws are constantly being violated, mainly because enforcement is not monitored around the clock, which makes it easier for drivers to infringe the law. This problem may be mitigated by using Automated Enforcement Systems (AESs), which replace the traditional methods of road enforcement. Currently, in Malta, AESs are implemented for speed cameras and red-light running. The scope of this project is to create an AES to enforce priority lanes by detecting vehicles that do not fall into the following categories: motorcycles, LPG cars, electric vehicles, route buses, passenger transport vehicles, taxis, pedal cycles, vehicles on priority duty (ambulances, police cars, etc.) and vehicles carrying more than three persons. The system first detects a vehicle on the priority lane. Tracking of this vehicle is then triggered to follow it through the video scene. Automatic number plate recognition (ANPR) is then used to extract the number plate and identify whether the driver is infringing the law. Finally, a short clip of the scene is exported to be sent to enforcement units for further verification. Upon testing the whole system in different scenarios, a vehicle detection accuracy of 90% was achieved. For the recognition of characters in the ANPR function, an accuracy of 84% was noted. Improvements and limitations that explain these results are discussed further in the conclusion chapter.
Description: B.ENG.(HONS)</summary>
    <dc:date>2019-01-01T00:00:00Z</dc:date>
  </entry>
</feed>

