<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>OAR@UM Collection:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/41696</link>
    <description />
    <pubDate>Tue, 07 Apr 2026 02:39:36 GMT</pubDate>
    <dc:date>2026-04-07T02:39:36Z</dc:date>
    <item>
      <title>Compiling Verilog into hardware : Appendix D : the source code</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/123326</link>
      <description>Title: Compiling Verilog into hardware : Appendix D : the source code
Abstract: The Library only has Appendix D - The Source Code. This booklet contains all the source code that makes up the project. Clearly, source files that are &#xD;
machine-generated (i.e. the output files of Flex and Bison) are not included. &#xD;
The files are present in this order: &#xD;
• myScan.l: This is the Flex file used to generate the lexical analyser. &#xD;
• myParse.y: This is the Bison file used to generate the parser. &#xD;
• TObject.h: This is the header file for TObject.cpp. It contains the declarations of all the classes &#xD;
used by the project. &#xD;
• TObject.cpp: This file implements all the classes declared in TObject.h.
Description: B.SC.(HONS)IT</description>
      <pubDate>Fri, 01 Jan 1999 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/123326</guid>
      <dc:date>1999-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Resilient wireless transmission of H.264/AVC through error localisation and control mechanisms</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/101787</link>
      <description>Title: Resilient wireless transmission of H.264/AVC through error localisation and control mechanisms
Abstract: Current trends in wireless communications provide for fast and location&#xD;
independent access to multimedia services. Due to its high compression efficiency, &#xD;
H.264/AVC is expected to become the dominant underlying technology in the delivery &#xD;
of future wireless video applications. However, H.264/AVC is susceptible to &#xD;
transmission errors common in wireless environments where even a single corrupted bit &#xD;
may cause visual artefacts that propagate in the spatio-temporal domain. The standard &#xD;
incorporates several error resilient mechanisms to minimize the effect of transmission &#xD;
errors on the perceptual quality of the reconstructed video sequence. However, these &#xD;
mechanisms assume a packet-loss scenario where all macroblocks (MBs) contained &#xD;
within a corrupted slice, including numerous uncorrupted MBs, are discarded and &#xD;
concealed. This implies that the error resilient mechanisms operate at a lower bound and &#xD;
thus further performance gains can be achieved by exploiting the residual redundancies &#xD;
available at the decoder side. &#xD;
In this dissertation, decoder-based techniques aimed at enhancing the quality &#xD;
of damaged video sequences were investigated. The first method considered in this &#xD;
work exploits the residual source redundancy left by the standard encoder after &#xD;
compression to derive the most likely feasible H.264/AVC bitstream. This method &#xD;
manages to completely recover an average of 30% of the corrupted slices at no &#xD;
additional cost in bandwidth. The second approach considered in this dissertation &#xD;
exploits the redundancy available at pixel level to detect and localise visually distorted &#xD;
regions within the damaged slice that would otherwise be discarded. The experimental &#xD;
results show that machine learning algorithms can be trained to automatically detect &#xD;
the regions affected by transmission errors. This method limits concealment to the &#xD;
visually impaired regions alone. &#xD;
When adopted individually, both methods provide a significant gain in video quality &#xD;
over the standard. The two methods were then combined in &#xD;
a single solution to form the Hybrid Error Control Artefact Detection (HECAD) method &#xD;
which further boosts the performance of the individual components. This gain in &#xD;
performance is achieved at no additional cost in bandwidth and only a moderate &#xD;
increase in decoder complexity. Furthermore, this method can be applied in conjunction with &#xD;
other error resilient strategies adopted by the standard decoder and still register &#xD;
considerable performance gains.
Description: PH.D.</description>
      <pubDate>Thu, 01 Jan 2009 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/101787</guid>
      <dc:date>2009-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>SharpHDL : a hardware description language embedded in C#</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/95972</link>
      <description>Title: SharpHDL : a hardware description language embedded in C#
Abstract: Digital systems can be very complex and may consist of millions of components.&#xD;
For many years logic schematics were used to design such systems but given the&#xD;
size of today's circuits, this technique is largely useless because it does not show&#xD;
the functionality of the design. Nowadays, engineers use Hardware Description&#xD;
Languages (HDL), which describe both the behavior and the structure of a&#xD;
circuit.
Description: B.SC.(HONS)IT</description>
      <pubDate>Thu, 01 Jan 2004 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/95972</guid>
      <dc:date>2004-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Automatic annotation of tennis videos (AAOTV)</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/95970</link>
      <description>Title: Automatic annotation of tennis videos (AAOTV)
Abstract: Vision is crucial for both humans and computers. Broadly, vision deals with&#xD;
object recognition, the localisation of objects in a specific space, the tracking of&#xD;
objects of interest and the recognition of the actions these objects exhibit.&#xD;
Computer vision differs from human vision in some respects, notably in that&#xD;
computer vision can be active while human vision is passive. Human vision relies on&#xD;
external energy sources such as sunlight, light bulbs and fires, which provide the&#xD;
light that reflects off objects into our eyes. Computer vision systems, on the other&#xD;
hand, can be active, since they can carry their own energy sources such as radars.&#xD;
The basic idea of this thesis is to process a tennis video taken with a static camera,&#xD;
to detect and track both the tennis players and the tennis ball, and finally to produce&#xD;
annotations of the tennis game. In other words, our work should act as a&#xD;
commentary on a normal tennis match. The main steps in this process include&#xD;
tennis court line detection, which determines the coordinates of the lines of the&#xD;
court. Another module is an adaptive background subtraction technique, used to&#xD;
separate the background from the foreground and thereby detect the objects of&#xD;
interest. This subtraction technique must adapt to changes in the environment,&#xD;
because the match is played outdoors and changes in lighting are therefore to be&#xD;
expected. Once objects are detected, the next step is to track their motion within&#xD;
the scene. In this thesis the most important object to track is the tennis ball: by&#xD;
tracking it, important annotations can be derived, such as when a player strikes the&#xD;
ball, when the ball bounces on the court, and whether the ball bounces outside the&#xD;
court.&#xD;
The result of our work should be a video with the ball tracked, together with a set of&#xD;
annotations at certain intervals. Finally, our program should collate the tracked&#xD;
video and the annotations so that the viewer can see both in one view.
Description: B.Sc. IT (Hons)(Melit.)</description>
      <pubDate>Thu, 01 Jan 2009 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/95970</guid>
      <dc:date>2009-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>

