<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/65865">
    <title>OAR@UM Collection:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/65865</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/121343" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/66817" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/66799" />
        <rdf:li rdf:resource="https://www.um.edu.mt/library/oar/handle/123456789/66798" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-19T06:24:27Z</dc:date>
  </channel>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/121343">
    <title>Analysis of EOG data recorded while reading</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/121343</link>
    <description>Title: Analysis of EOG data recorded while reading
Abstract: Reading is the activity through which humans decipher written symbols in an&#xD;
effort to extract information from written text. It is a medium through which&#xD;
knowledge is shared and communication is exercised. Whether the text is written&#xD;
on paper or displayed on an electronic screen, literacy can be considered a&#xD;
valuable tool for one’s social and educational development. While reading, humans perform a&#xD;
wide variety of eye movements intended to maximize the rate at which&#xD;
information is collected. Through the use of electrooculography (EOG) data, the&#xD;
ocular movements performed by humans while reading can be recorded, analysed&#xD;
and classified. These eye movements carry valuable information regarding the&#xD;
user’s reading capabilities, proficiency in a language and familiarity with a particular&#xD;
topic. By monitoring and carefully analysing these ocular movements, reading&#xD;
disorders such as dyslexia can be identified and inferences regarding the layout of&#xD;
the text at hand can be made.&#xD;
This project has made use of EOG data recorded while reading in order to&#xD;
investigate the manifestation of different eye movements performed while reading.&#xD;
Through this, a Wordometer application was developed, capable of measuring the&#xD;
quantity of reading. By incorporating various classification measures, this&#xD;
application provided reading statistics such as word estimates, reading&#xD;
speeds, lines read and counts of different eye movements. Providing the reader&#xD;
with this information can in turn motivate them to improve their reading&#xD;
speed and increase their reading volume. Secondly, EOG-based eye gaze&#xD;
tracking was explored in order to track the progression of the user while reading&#xD;
paragraphs of text. The latter is ultimately intended to permit the execution&#xD;
of commonly used commands, such as the selection of hyperlinks on a website,&#xD;
through eye movements rather than using conventional input devices such as&#xD;
keyboards and touchscreens. This would also facilitate the experience for users&#xD;
with mobility impairments, as commonly used tools can be accessed using simple&#xD;
ocular movements as a control input.
Description: B.ENG.(HONS)</description>
    <dc:date>2020-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/66817">
    <title>Control of a ball and beam apparatus</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/66817</link>
    <description>Title: Control of a ball and beam apparatus
Abstract: The ball and beam apparatus is a nonlinear, open-loop unstable system which, due to its simplicity, is found in many university control laboratories, where it is used to implement and test several methods for controlling the position of the ball on the beam. The system consists of a ball on a beam rotated by a DC motor, together with several sensors whose measurements are acquired through a dSPACE DS1104 controller board and processed by software algorithms coded in the Simulink environment of MATLAB. This is a continuation of a project in which the ball and beam set-up was already designed and assembled; some modifications were made to improve performance and reliability.  &#xD;
The goal of this dissertation is to model the ball and beam system using Lagrange equations to obtain a set of nonlinear state-space equations, which are then linearized so that several computer control methods can be applied to the system. These include discrete-time regulation and tracking, pole-placement and linear quadratic optimal controllers. Each of these is tested both with full state feedback and with state observation, which, in the case of the optimal controller, consists of a Kalman filter to obtain a linear quadratic Gaussian (LQG) controller. A theoretical analysis of each controller is conducted, followed by the respective simulation results using a variety of testing strategies, taking into consideration real plant scenarios such as sensor noise and actuator saturation, from which the performance of each can be discussed and evaluated.  &#xD;
Ultimately, the LQG regulator and tracking controllers are implemented on the actual physical plant, taking into consideration unmodeled nonlinearities such as motor and sensor responses. These are tested using strategies similar to those of the corresponding simulations to obtain results that can be used for comparison, highlighting the differences between simulations and real-world experiments.
Description: B.ENG.(HONS)</description>
    <dc:date>2020-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/66799">
    <title>Player tracking and heatmap generation from broadcasted football games</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/66799</link>
    <description>Title: Player tracking and heatmap generation from broadcasted football games
Abstract: Data analytics have become an integral part of sports. Gone are the days when player scouting agents would travel all over the world in search of new talent. The introduction of tracking sensors and vision systems made data analysis possible without even having to watch the game. Data analysts sift through thousands of time-stamped data points to extract metrics that detail the performance of players and teams, gaining insights into improving results, preventing injuries and increasing revenue. However, obtaining the necessary data is quite a complex task. This work aims at developing an offline computer vision algorithm that takes broadcasted footage of an attacking or defending scenario as its input and extracts the players’ positional data throughout the sequence of frames in which they are visible. The data is then visualised in the form of a heatmap and a tracking sequence.  &#xD;
The implemented system is divided into four major modules: the camera homography estimation module, the player detector module, the player tracking module, and finally the statistics module. This dissertation investigates the problem of keeping track of the players’ positions in the image amidst all the challenges that make the problem much more complex. Players move in and out of the camera field-of-view when the camera is not static. Every player needs to be detected and tracked across a sequence of frames under a unique label. The tracker is expected to retain those labels when the detection of a player is lost and then recovered after a number of frames, or during set-pieces where players congregate and occlude each other. The creation of a heatmap requires the computation of a camera homography matrix, which may be estimated by establishing a correspondence between the 3D field model and the 2D image. This process needs to be performed in every frame in which enough field lines are visible to compare to the field model.  &#xD;
Each module was then tested separately under varying conditions. The results show that the camera calibration module performs reliably when the weather conditions are favourable and there are enough field lines visible in the image frame. The player detection and player tracking modules proved to be very robust, and it was shown that it is worthwhile to use Convolutional Neural Networks for such an application.
Description: B.ENG.(HONS)</description>
    <dc:date>2020-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://www.um.edu.mt/library/oar/handle/123456789/66798">
    <title>Computer-vision based telemanipulation of an interactive kinetic surface</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/66798</link>
    <description>Title: Computer-vision based telemanipulation of an interactive kinetic surface
Abstract: One can have a conversation with someone remotely through video cameras and microphones, or edit the same document remotely in real time, yet the discussion becomes an arduous task when the requirement is to represent, discuss and even manipulate physical objects, spaces and surfaces. An interactive kinetic surface would bridge this gap in telecollaboration across different locations.  &#xD;
The aim of this project is to design, implement and manipulate an interactive kinetic surface. This was achieved through familiarisation with previously implemented kinetic surfaces and an examination of the required parameters and current limitations. A six-by-eight interactive kinetic surface was implemented, with a 5 cm depth resolution for each actuator. Furthermore, the chief manipulation method made use of the hands, and as a result, extensive research was conducted on the acquisition of depth data. This was effected using two methods: through the use of a depth-sensing device, and by means of a neural network that extracts depth information from monocular RGB images alone. The two avenues presented the resulting depth data differently: the depth-sensing device through a depth image, in which intensity values represent changes in depth, and the neural network through 21 data points for each of the three dimensions. Algorithms to extract the required data were written, and this data was sent to the interactive kinetic surface for rendering.  &#xD;
From the results obtained, it was confirmed that the aims and objectives of this project were met and that the requirements were exceeded. The interactive kinetic surface was successfully constructed and, although some inherent limitations are still present, the results are adequate. Hand poses are successfully rendered on the surface, with less than 7% error in all cases. The neural network results show quantitative errors comparable to those of the depth-sensing device method, with the additional advantage of independence from specialised hardware.
Description: B.ENG.(HONS)</description>
    <dc:date>2020-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>