<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>OAR@UM Collection:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/35639</link>
    <description />
    <pubDate>Mon, 20 Apr 2026 07:13:58 GMT</pubDate>
    <dc:date>2026-04-20T07:13:58Z</dc:date>
    <item>
      <title>Knee joint angle prediction with the use of electromyography</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/36079</link>
      <description>Title: Knee joint angle prediction with the use of electromyography
Abstract: Human mobility is normally taken for granted until it is impaired. This reduction in mobility can be caused either by an illness or by an accident, and in either case a lot of rehabilitation is required in order to overcome the limitation.&#xD;
This work aims to aid the rehabilitation process by using surface electromyography (sEMG) technology to capture the muscle impulses generated when extending and flexing the knee joint, filtering the signals and then using a back propagation neural network (BPNN) and a time delay neural network (TDNN) to predict the knee angle.&#xD;
This will give patients an indication of the degree of motion that they should be performing, thus enabling their rehabilitation to be better adapted to suit their needs.&#xD;
&#xD;
To achieve this, a subject carried out different exercises while the electrical potentials generated by the recruited muscles were recorded throughout each exercise. Through testing, it was determined that the most suitable exercise was flexion and extension while the subject was seated.&#xD;
The signals were then filtered and processed, and features were extracted to train a neural network to estimate the knee joint angle. Tests were carried out to determine the neural network size giving the best accuracy of the estimated knee joint angle. The performances of the optimal BPNN and TDNN were compared, and it was determined that, for flexion and extension of the knee while seated, a TDNN with a delay of 20 gave the best joint angle estimation, with an average percentage angular error of 0.3882 percent, an actual angular error of 7.2242 degrees, and a correlation coefficient of 0.9659.
Description: B.ENG.(HONS)</description>
      <pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/36079</guid>
      <dc:date>2018-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Eye-in-hand visual servoing with a robotic arm</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/36075</link>
      <description>Title: Eye-in-hand visual servoing with a robotic arm
Abstract: One of the major roles of robotic manipulators is pick-and-place tasks in automated assembly; by adding real-time visual feedback, robotic manipulators can accomplish such tasks autonomously. The aim of this project is to design, implement and test an eye-in-hand robotic arm system that is able to position its end-effector above a static object in the workspace.&#xD;
&#xD;
In this dissertation, a USB camera was attached to the gripper of a five degree-of-freedom (DOF) robotic manipulator. The static object was chosen to be a square with three of its corners marked in red. The image frames were then processed using a MATLAB script. The chosen image processing technique was colour recognition, which detects the three highlighted corners of the target and returns the centroid position, in pixels, of these corners. A velocity control law was then used to process these results and obtain the camera velocities necessary to reach the target. Finally, the velocities were sent to a Simulink model containing a velocity controller that controls the actuation of the robotic arm. The effectiveness of the implemented controller was illustrated and verified through a series of tests.
Description: B.ENG.(HONS)</description>
      <pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/36075</guid>
      <dc:date>2018-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Vision-based sensory and reinforcement biofeedback system</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/36028</link>
      <description>Title: Vision-based sensory and reinforcement biofeedback system
Abstract: The primary objective of this project is the development of a vision-based sensory and reinforcement system to provide feedback (visual, auditory or tactile) to individuals with complex learning disabilities (PCLD) or Autism Spectrum Disorder (ASD). The feedback serves as a form of biofeedback, encouraging the individuals to repeat their movements. Such repetition is expected to trigger a learning process by which the individual gains awareness and control over physical action. A detailed study of various object detection algorithms, classification techniques and tracking methods is carried out. Two object detection techniques are selected and implemented, namely the template matching algorithm and the colour detection algorithm. The former technique detects the salient regions in the scene, which consist of an inclined hollow semi-circular gutter, a ball and a skittle, while the latter approach detects the moving objects of interest on the basis of their colour. The location of the ball is computed by dividing the setup into four sections detected by the template matching technique, which are then used to specify the ranges required for detecting the ball's position. The results achieved when testing the system under various circumstances show that the goal of the dissertation is met, since audible feedback indicating the ball's location can be provided successfully in the majority of cases. However, the system has some limitations, including the fact that varying lighting conditions affect the threshold ranges used to define the colours of the ball and the skittle. Also, the template images are not scale or rotation invariant; hence, any slight size or orientation variation might cause issues when attempting to locate the template image in the frame.
It is then verified that the system can be implemented in real-time such that in the future, it may be developed and deployed at San Miguel Primary School, so that individuals with intellectual disabilities, who are often unaware of their own movements or lack of movements, are provided with an opportunity to engage with external objects through voluntary activity.
Description: B.ENG.(HONS)</description>
      <pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/36028</guid>
      <dc:date>2018-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Attitude determination of a pico-satellite</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/36021</link>
      <description>Title: Attitude determination of a pico-satellite
Abstract: Attitude determination is a crucial component of all satellite systems. It aims to provide accurate estimates of a satellite’s orientation (attitude) in space, to be used by the Attitude Determination and Control System (ADCS), to actively control the satellite’s motion to achieve a desired attitude.&#xD;
&#xD;
This dissertation presents a thorough design of the three-axis attitude determination system for the UoMBSat-1 satellite. This satellite is a 5 × 5 × 5 cm PocketQube pico-satellite, weighing 250 grams, that is currently being designed by the astrionics research group, ASTREA, at the University of Malta.&#xD;
&#xD;
This dissertation focuses on the design and implementation of an attitude determination system based on Kalman filtering. A number of sensors, each measuring different physical quantities, are considered as inputs to the filter. The combination of these readings within the filtering algorithm, termed sensor fusion, forms a considerable part of this project. The use of these measurements also requires the design of an accurate mathematical model for each of these sensors.&#xD;
&#xD;
A comprehensive model of the satellite in orbit is also developed, including the modelling of numerous reference frames, the modelling of the satellite’s dynamics and kinematics and the modelling of the various predictive models to determine the satellite’s position and velocity. Further models of the Earth’s magnetic field and the incident solar irradiance, at any orbit position, are developed to provide the required input to the various sensors on the satellite.&#xD;
&#xD;
These systems are implemented in simulation to enable the validation and testing of the complete system under realistic conditions. In these simulated conditions, the attitude determination system achieves a pointing accuracy of 5.42 degrees outside of solar eclipse and 64.66 degrees in eclipse. Furthermore, the filter is shown to significantly reduce the angular velocity measurement noise and bias drift introduced by inherent sensor characteristics, improving the accuracy of this important parameter.
Description: B.ENG.(HONS)</description>
      <pubDate>Mon, 01 Jan 2018 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/36021</guid>
      <dc:date>2018-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>

