<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>OAR@UM Collection:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/91669</link>
    <description />
    <pubDate>Fri, 10 Apr 2026 21:21:04 GMT</pubDate>
    <dc:date>2026-04-10T21:21:04Z</dc:date>
    <item>
      <title>Deep sketching : vectorization of sketched drawings using deep learning</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/95681</link>
      <description>Title: Deep sketching : vectorization of sketched drawings using deep learning
Abstract: Sketch vectorization is an essential step to bridge the gap from hand-drawn rough&#xD;
sketches to images that can be interpreted by computer-based systems. Through this&#xD;
dissertation, a Fully Convolutional Neural Network is designed to automatically clean&#xD;
raster rough sketches into their line drawing counterparts. A custom loss function was&#xD;
created, inspired by traditional feature-based methods of analysing the quality of line&#xD;
drawings. The loss function quantifies the quality of the extracted lines and gaps,&#xD;
and it also promotes clear separation of the extracted lines from the background.&#xD;
A dataset was curated from images found ‘in the wild’. The ground truth images&#xD;
were drawn by hand, and a method to align the hand-drawn ground truths to the found sketches,&#xD;
irrespective of printing and scanning resolution, was created. The curated dataset was&#xD;
used along with another dataset created under similar conditions.&#xD;
The proposed design and the dataset used allowed sketches from different sources&#xD;
to be simplified. Over-strokes were converted to their clean line counterparts, and shading&#xD;
and hatching, artifacts which hinder the subsequent vectorization process,&#xD;
were removed whilst detail from the sketches was retained. A vectorization algorithm is&#xD;
then applied to the cleaned line drawing. The time taken to convert the&#xD;
sketched image to its cleaned vector counterpart is short, hence the method can be&#xD;
used in downstream applications.&#xD;
Finally, the metrics measured show an improvement of the proposed loss function&#xD;
over the Mean Squared Error (MSE) loss function, as well as an improvement&#xD;
upon a more simplified version of the proposed loss function. For validation, the output is&#xD;
also compared to that of other proposed methods, both deep learning based and&#xD;
feature based.
Description: B.Eng. (Hons)(Melit.)</description>
      <pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/95681</guid>
      <dc:date>2021-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>A robotic training partner for track runners</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/95581</link>
      <description>Title: A robotic training partner for track runners
Abstract: Typical training sessions of long-distance runners, both casual and professional, often&#xD;
include running at very specific paces for a certain distance or time interval, in order to&#xD;
focus the training session on improving endurance or running power. Currently, runners&#xD;
keep track of their pace using equipment like a stopwatch or a GPS-enabled smartwatch,&#xD;
which provide the runner with a pace readout in real-time. However, it is well-known&#xD;
that a ‘physical’ pacer, also known as a rabbit, can be highly beneficial by providing a&#xD;
visual cue for the runner to follow at the desired pace. This can allow the runner to focus&#xD;
more on the running technique, or even act as an artificial competitor which can greatly&#xD;
motivate the runner in demanding training sessions or time trials.&#xD;
A mobile robot can act as a physical pacer for a runner training on a running track. More&#xD;
specifically, the mobile robot can employ a line-following system to follow one of the&#xD;
lines on a running track, and a speed control system to regulate its driving speed&#xD;
according to a custom running workout set by a user. Ultimately, the robot will act as a&#xD;
precise programmable ‘physical pacer’.&#xD;
In a previous project, an off-the-shelf electric remote-controlled model car was&#xD;
converted into a line-following robot using an infrared sensor array and a digital PID&#xD;
controller for closed-loop steering control. This prototype robot was tested in controlled&#xD;
environments. The main objective of this dissertation is to build on the previous work to&#xD;
develop this robotic pacer by designing and adding a speed control system. This system&#xD;
makes use of an incremental encoder coupled to the drive shaft of the car to measure its&#xD;
translational speed, which is used as the feedback signal in a PID control loop to regulate&#xD;
the motor’s driving speed. Additionally, the robotic pacer was fitted with a Bluetooth&#xD;
module to facilitate wireless communication with a smartphone. The results obtained&#xD;
from tests performed on the speed control system contributed towards an understanding&#xD;
of the car’s speed response, which can be used to improve the control system and&#xD;
develop the robotic pacer prototype further in future work.
Description: B.Eng. (Hons)(Melit.)</description>
      <pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/95581</guid>
      <dc:date>2021-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Multi-robot coverage control</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/95579</link>
      <description>Title: Multi-robot coverage control
Abstract: Robots have become an essential part of society. Their uses range from&#xD;
domestic to industrial, and with emerging applications such as self-driving&#xD;
vehicles, it is becoming more evident that robots can undertake responsibilities&#xD;
that are typically associated with humans. Applications for multiple-robot systems&#xD;
are also becoming more common, especially within the field of swarm robotics.&#xD;
Every job that robots can do is made possible by multiple complex algorithms.&#xD;
Coverage control algorithms provide the tools necessary for multiple robots&#xD;
to organise themselves within an area and cover the entire area in&#xD;
an optimal way. Such algorithms can give rise to new applications for robots,&#xD;
such as autonomous surveillance or search and rescue.&#xD;
The primary objectives of this dissertation are to review relevant theory and&#xD;
current mathematical approaches taken to solve the coverage control problem,&#xD;
implement one or more of the approaches found in the literature, and validate them on a group&#xD;
of mobile robots. Further complexities could be introduced through the application of a&#xD;
single-peak, multiple-peak or time-varying density function that represents the&#xD;
importance of each region in the environment, by taking the robot dynamics into&#xD;
account, or by assuming a fleet of robots with different energy levels and sensing&#xD;
capabilities.&#xD;
The dissertation starts by looking at the mathematical approaches for tackling&#xD;
the coverage control problem. A coverage control algorithm is then broken down&#xD;
into its main components, each of which is described in detail. Then,&#xD;
the approach taken to create and simulate the coverage control algorithm along with&#xD;
each one of its components is described. The added complexities are then introduced&#xD;
and explained. Finally, the results obtained from the modelling of the coverage&#xD;
control algorithm are evaluated and discussed. The results successfully validated the&#xD;
algorithms, and the resulting simulation could be used to demonstrate the&#xD;
effectiveness of a real coverage control algorithm for a fleet of mobile robots.
Description: B.Eng. (Hons)(Melit.)</description>
      <pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/95579</guid>
      <dc:date>2021-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Head pose estimation using deep learning</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/95576</link>
      <description>Title: Head pose estimation using deep learning
Abstract: Convolutional Neural Networks (CNNs) perform well on the head pose estimation problem; however, their generalisation ability depends on the training data provided to the&#xD;
CNN, which must suffice for extracting the features needed for an accurate head pose estimate. A&#xD;
method for estimating head pose using a CNN trained on real head images is proposed;&#xD;
however, real data can be sparse and laborious to collect. Thus, a CNN trained on&#xD;
synthetic head images is also investigated in this dissertation because it is easier to create&#xD;
synthetic data, which may be used to produce rare head poses in large enough quantities.&#xD;
The estimation of head pose by the CNN is formulated as a regression problem. An&#xD;
image pre-processing stage incorporates facial landmark information into the face shape&#xD;
normalisation by the task simplifier, normalises the image array values, and generates&#xD;
facial landmark heatmaps. This is established prior to the feed-forward neural network,&#xD;
thus, this information is used to aid feature extraction from head images.&#xD;
Datasets which render head images that take gender, race, age, and expression into&#xD;
account are used, namely: 300W-LP, AFLW2000-3D, BIWI, and NVIDIA Synthetic&#xD;
Head. Six methods are presented in this dissertation that use real data, synthetic&#xD;
data, and a combination of real and synthetic data. The results reveal that when the&#xD;
feed-forward neural network is trained on 300W-LP, fine-tuned by classification on&#xD;
NVIDIA Synthetic Head, and further fine-tuned end-to-end on a portion of BIWI, the&#xD;
Standard Deviation (SD) for each of the head pose angles is improved. Moreover, the&#xD;
average Mean Absolute Error (MAE) decreases from 4.67° to 2.93° on AFLW2000-3D, and from&#xD;
6.08° to 2.59° on BIWI. Furthermore, when a model is trained on NVIDIA Synthetic Head&#xD;
and is fine-tuned end-to-end on BIWI and 300W-LP, the average MAE&#xD;
obtained is 2.96° when tested on BIWI, and 3.98° when tested on AFLW2000-3D.&#xD;
This dissertation shows that the CNN can extract features which can reflect head pose&#xD;
accurately even when the model is trained on synthetic data, significantly enhancing the&#xD;
feasibility of training head pose models using only computer-generated images.
Description: B.Eng. (Hons)(Melit.)</description>
      <pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/95576</guid>
      <dc:date>2021-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>

