<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>OAR@UM Collection:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/71342</link>
    <description />
    <pubDate>Sun, 05 Apr 2026 06:23:39 GMT</pubDate>
    <dc:date>2026-04-05T06:23:39Z</dc:date>
    <item>
      <title>Resilient wireless transmission of H.264/AVC through error localisation and control mechanisms</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/101787</link>
      <description>Title: Resilient wireless transmission of H.264/AVC through error localisation and control mechanisms
Abstract: Current trends in wireless communications provide for fast and location&#xD;
independent access to multimedia services. Due to its high compression efficiency, &#xD;
H.264/AVC is expected to become the dominant underlying technology in the delivery &#xD;
of future wireless video applications. However, H.264/AVC is susceptible to &#xD;
transmission errors common in wireless environments where even a single corrupted bit &#xD;
may cause visual artefacts that propagate in the spatio-temporal domain. The standard &#xD;
incorporates several error resilient mechanisms to minimise the effect of transmission &#xD;
errors on the perceptual quality of the reconstructed video sequence. However, these &#xD;
mechanisms assume a packet-loss scenario where all macroblocks (MBs) contained &#xD;
within a corrupted slice, including numerous uncorrupted MBs, are discarded and &#xD;
concealed. This implies that the error resilient mechanisms operate at a lower bound and &#xD;
thus further performance gains can be achieved by exploiting the residual redundancies &#xD;
available at the decoder side. &#xD;
During this dissertation, decoder-based techniques aimed at enhancing the quality &#xD;
of damaged video sequences were investigated. The first method considered in this &#xD;
work exploits the residual source redundancy left by the standard encoder after &#xD;
compression to derive the most likely feasible H.264/AVC bitstream. This method &#xD;
manages to completely recover an average of 30% of the corrupted slices at no &#xD;
additional cost in bandwidth. The second approach considered in this dissertation &#xD;
exploits the redundancy available at pixel level to detect and localise visually distorted &#xD;
regions within the damaged slice that would otherwise be discarded. The experimental &#xD;
results show that machine learning algorithms can be taught to automatically detect the &#xD;
regions affected by transmission errors. This method limits the area to be concealed &#xD;
since only visually impaired regions are concealed. &#xD;
When adopted individually, both methods provide a significant gain in video &#xD;
quality compared to the standard. The two methods were combined into a single &#xD;
solution, the Hybrid Error Control Artefact Detection (HECAD) method, which &#xD;
further boosts the performance of the individual components. This gain in &#xD;
performance is achieved at no additional cost in bandwidth and with only a moderate &#xD;
increase in decoder complexity. Furthermore, this method can be applied in conjunction with &#xD;
other error resilient strategies adopted by the standard decoder and still register &#xD;
considerable performance gains.
Description: PH.D.</description>
      <pubDate>Thu, 01 Jan 2009 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/101787</guid>
      <dc:date>2009-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Multicast multimedia transmission over wireless local area networks</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/101196</link>
      <description>Title: Multicast multimedia transmission over wireless local area networks
Abstract: Multicast over IEEE 802.11a/b/g/n Wireless Local Area Networks (WLANs) is&#xD;
an efficient means of transmission exploiting the broadcast nature of the wireless&#xD;
medium. Moreover, the use of the unlicensed 2.4 GHz or 5 GHz bands does not&#xD;
incur licensing costs. However, IEEE 802.11a/b/g/n Access Points (APs) transmit multicast&#xD;
unreliably, because the receivers do not transmit feedback regarding packet reception.&#xD;
Hence, multicast over IEEE 802.11a/b/g/n WLANs suffers from a high Packet Error&#xD;
Rate (PER), which is inappropriate for multimedia transmission. In 2012, the IEEE&#xD;
802.11aa standard added reliability to multicast using Directed Multicast Service,&#xD;
Groupcast with Retries (GCR) Unsolicited Retry and GCR Block Acknowledgement.&#xD;
However, packet repetition does not result in an optimal code rate.&#xD;
The work in this thesis first verified, using empirical and semi-analytical&#xD;
analyses, that two antennas of an IEEE 802.11n AP can be spatially distributed to&#xD;
mitigate the PER experienced. A main contribution was the proposal of a distributed&#xD;
antenna system (DAS), consisting of seven antennas, which was shown to multicast&#xD;
video over IEEE 802.11n WLANs with a Peak Signal-to-Noise Ratio (PSNR) of at&#xD;
least 36 dB, at the same power but with a better code rate than other infrastructures.&#xD;
Even-numbered packets are multicast over one set of four transmit antennas, with an&#xD;
antenna placed in the centre of the coverage area and three transmit antennas placed&#xD;
equidistantly at the periphery. The centre antenna and three other transmit antennas,&#xD;
also placed equidistantly at the periphery, are then used to multicast odd-numbered&#xD;
packets. The proposed DAS can also use antenna switching so that unicast&#xD;
transmission uses a centralised antenna system. The proposed DAS is shown to scale&#xD;
well using a multi-cell approach.&#xD;
It was shown that packet repetition at the application layer, retransmitting each&#xD;
packet proactively once (code rate 0.5), does not result in the entire multicast group&#xD;
receiving good Quality of Service (QoS) with a legacy infrastructure or a DAS using&#xD;
only one set of four transmit antennas, due to burst erasures on the channel. The&#xD;
proposed seven-antenna DAS, on the other hand, does guarantee the required QoS to&#xD;
the entire multicast group. Hence, schemes such as IEEE 802.11aa GCR Unsolicited&#xD;
Retry should be deployed with the proposed DAS.&#xD;
The best code rate is achieved with Block Erasure Coding (BEC) on the&#xD;
proposed DAS. However, Network Coding, performed by XORing packets to create&#xD;
parity packets, achieves a higher code rate than packet repetition and a lower delay&#xD;
than BEC, and results in good QoS with a PHY data rate of 58.5 Mbps.&#xD;
Another original contribution is the proposal and study of two new MAC layer&#xD;
protocols. The first protocol uses a Binary Search guided by feedback from the&#xD;
receivers to determine the necessary number of Reed-Solomon (R-S) encoded parity packets. The&#xD;
second protocol combines Network Coding and packet repetition with feedback from&#xD;
the receivers, resulting in smaller channel occupancy than reactive packet repetition.&#xD;
Although this second protocol results in a larger maximum delay than reactive packet&#xD;
repetition, it is still appropriate for live-video streaming since the maximum delay is&#xD;
less than 8 ms.&#xD;
Finally, an infrastructure using distributed IEEE 802.11n APs is shown to&#xD;
perform better than the legacy infrastructure, at a code rate of 1/3. The advantage of&#xD;
using distributed APs is that IEEE 802.11n Modulation and Coding Schemes (MCSs)&#xD;
employing Spatial Division Multiplexing can be used for multicasting.
Description: PH.D.</description>
      <pubDate>Tue, 01 Jan 2013 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/101196</guid>
      <dc:date>2013-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Source representation for improved channel code performance in Wyner-Ziv video coding</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/100910</link>
      <description>Title: Source representation for improved channel code performance in Wyner-Ziv video coding
Abstract: The Wyner-Ziv video coding paradigm is a new coding approach which&#xD;
exploits most of the source correlation at the decoder. This differs from traditional&#xD;
predictive video coding schemes, where the source correlations are exploited solely at&#xD;
the encoder. Hence, the new paradigm enables the implementation of low-complexity&#xD;
encoders suitable for various applications such as endoscopy capsules and low-power&#xD;
surveillance systems. The Slepian-Wolf (SW) and Wyner-Ziv (WZ) theorems prove&#xD;
that when the complexity of exploiting the source statistics is shifted from the&#xD;
encoder to the decoder, the coding efficiency should not be affected. Hence, under&#xD;
certain conditions, the coding performance of WZ video coding schemes can&#xD;
theoretically be made arbitrarily close to that of conventional schemes where the&#xD;
sources are jointly encoded and decoded. However, the Rate-Distortion (R-D)&#xD;
performance of practical Wyner-Ziv video coding architectures is still far from the&#xD;
best performance attained with predictive video coding architectures such as&#xD;
H.264/AVC or High Efficiency Video Coding (HEVC).&#xD;
This thesis investigates several methods to improve the performance of&#xD;
Slepian-Wolf coding, in terms of both coding efficiency and decoding delay. It is&#xD;
observed that traditional Slepian-Wolf coding approaches encode the bits within the&#xD;
same bit-plane randomly using Low-Density Parity-Check Accumulate (LDPCA)&#xD;
codes, which leads to sub-optimal performance. The reliability of the bits can be&#xD;
predicted and used to ensure that bit nodes receiving low-reliability bit predictions&#xD;
are given better protection. A novel LDPCA code construction, tailored to the&#xD;
coding problem at hand, is thus proposed. Furthermore, this work also&#xD;
analyses the performance of the traditional LDPCA codes at different entropy points&#xD;
and studies the best way to distribute the correlation noise amongst bit-planes to&#xD;
improve coding efficiency. This is achieved by accumulating the most unreliable bits&#xD;
within the first decoded bit-planes and correcting the remaining bit-planes, having few&#xD;
bit-errors, using 8-bit or 16-bit Cyclic Redundancy Check (CRC) codes. The careful&#xD;
arrangement of discrepancies amongst bit-planes is used together with the&#xD;
arrangement of bits within each bit-plane and the new LDPCA codes, to obtain&#xD;
performance gains of up to 23% during Slepian-Wolf coding.&#xD;
In the context of Slepian-Wolf coding, the thesis also presents a comprehensive&#xD;
analysis of the mismatch present within regions of low motion. This is used to&#xD;
develop a scheme where the quantisation module alternates between the floor and the&#xD;
round operator at different pixel or coefficient locations. The operator more likely to&#xD;
spare the Slepian-Wolf codec from correcting mismatch caused by small variations&#xD;
in light intensity is then chosen, improving R-D performance by up to 0.52 dB.&#xD;
Finally, the long decoding times required for Slepian-Wolf decoding are reduced by&#xD;
considering a new indexing scheme and histogram equalisation technique. For parallel&#xD;
WZ video coding architectures, these techniques ensure that the Slepian-Wolf&#xD;
decoders running on different cores of a multi-core processor can finish decoding at&#xD;
the same time, aiding parallel decoding and reducing decoding times by up to 32%,&#xD;
with minimal effect on the R-D performance. The obtained reductions in rate and&#xD;
decoding delay help bridge the performance gap with traditional video coding&#xD;
systems and pave the way for applications based on the WZ video coding paradigm.
Description: PH.D.</description>
      <pubDate>Tue, 01 Jan 2013 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/100910</guid>
      <dc:date>2013-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Multi-view video-plus-depth coding using depth information</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/100877</link>
      <description>Title: Multi-view video-plus-depth coding using depth information
Abstract: Multi-view 3D videos are required to provide an enhanced immersive user&#xD;
experience, with the ability to perceive the 3D depth effect and arbitrarily select the&#xD;
desired navigation viewpoint. For an efficient representation, the texture multi-view&#xD;
video data also require the transmission of their aligned depth map multi-view&#xD;
counterpart to help reconstruct arbitrary intermediate virtual viewpoints. Encoding&#xD;
these components requires the Multi-view Video Coding (MVC) standard, which is a&#xD;
very computationally intensive task. This process drastically increases the 3D video&#xD;
encoding durations compared to 2D video encoding, hindering its use for&#xD;
transmission in real-time and broadcast applications, which respectively need the&#xD;
Low-latency and the Hierarchical bi-Prediction MVC structures. Nevertheless, the&#xD;
aligned depth map data supplies a lot of useful geometrical information about the&#xD;
scene, which, together with appropriate geometrical properties, can be exploited to&#xD;
achieve multi-view video coding with a shorter coding latency and higher coding&#xD;
efficiency.&#xD;
Experimental results demonstrate that this geometrical information can be&#xD;
efficiently utilised to calculate more accurate geometric predictors for the disparity&#xD;
and motion estimations. Being geometrically assisted, these predictors are more&#xD;
accurate than the median ones adopted by the standard; thus, they allow a reduction&#xD;
in the estimations' search areas and, as a result, in their encoding durations. These&#xD;
aids achieve an overall MVC speed-up gain of up to 4.2 times when applied to the&#xD;
Low-latency MVC, and of 3.2 times when applied to the Hierarchical bi-Prediction&#xD;
MVC. Exploiting further inter-view geometric relationships within the multi-view&#xD;
video reduces the disparity estimation's search range further, improving its final&#xD;
speed-up gain to up to 20.8 times. Moreover, the motion and disparity estimations&#xD;
can also be modified to exploit this depth map data to limit the actual modes tested&#xD;
during rate-distortion optimisation. This is because the geometry helps identify&#xD;
better equivalent positions in the encoded viewpoints, which are exploited to&#xD;
determine the potential modes to use for motion estimation. Additionally, the most&#xD;
likely best macroblock division for disparity estimation can also be determined and&#xD;
exploited. The former improves the MVC's speed-up gain to about 6 times, while the&#xD;
latter reduces the disparity estimation's time by 26.0 times. All of these fast&#xD;
techniques are&#xD;
then combined to form a joint computational reduction in MVC, which improved its&#xD;
coding speed by up to 6.7 times for low-latency applications, and by up to 3.7 times&#xD;
for broadcast applications.&#xD;
Being more accurate, the geometric predictors allow for smaller residual&#xD;
vector encoding, providing around an 8% bit-rate reduction. Additionally, the SKIP&#xD;
mode can be extended to automatically select its compensation vector and direction&#xD;
from either a temporal or a new viewpoint reference frame. The latter two techniques&#xD;
provide a joint bit-rate reduction of about 14% for Low-latency MVC, and of about&#xD;
13% for Hierarchical bi-Prediction MVC, which is equivalent to an encoded&#xD;
inter-view quality improvement of about 0.6 dB. Using these two techniques&#xD;
together, a trade-off between fast and efficient encoding can be achieved.&#xD;
These improvements in coding time and efficiency were obtained with a&#xD;
negligible to acceptably small degradation in the decoded video quality, for both the&#xD;
texture and the depth map multi-view videos, while the rendering capability and&#xD;
quality of the compressed 3D videos are largely preserved. Hence, exploiting these&#xD;
improvements allows 3D video coding to better adhere to the stringent low-latency&#xD;
requirements of real-time and live broadcast scenarios, while making the encoded&#xD;
bit-stream more suitable for limited-bandwidth channels.
Description: PH.D.</description>
      <pubDate>Tue, 01 Jan 2013 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/100877</guid>
      <dc:date>2013-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>

