<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>OAR@UM Collection:</title>
    <link>https://www.um.edu.mt/library/oar/handle/123456789/77203</link>
    <description />
    <pubDate>Fri, 17 Apr 2026 07:29:30 GMT</pubDate>
    <dc:date>2026-04-17T07:29:30Z</dc:date>
    <item>
      <title>A comparison of penalized regression techniques</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/94196</link>
      <description>Title: A comparison of penalized regression techniques
Abstract: The Ordinary Least Squares (OLS) method, attributed to Carl Friedrich Gauss in the late 18th century, is a widely used technique for estimating parameter coefficients. However, studies of its stability showed that when the data are characterized by multicollinearity, the coefficient estimates obtained through OLS are unreliable. Recognizing this weakness of the OLS framework, researchers developed new regression techniques that provide stable and reliable results when the data exhibit collinearity problems.
These techniques include Ridge regression, Least Absolute Shrinkage and Selection Operator (Lasso) regression, and the Elastic Net and Naive Elastic Net regressions. Ridge regression is introduced briefly; the main focus of this dissertation is on the latter three techniques. Lasso regression mitigates the effect of multicollinearity by applying shrinkage to the coefficient estimates while simultaneously performing subset selection. Thus, unlike the model fitted by Ridge regression, the model produced by the Lasso is parsimonious. The Elastic Net (EN) and Naive Elastic Net (NEN) regressions can be considered hybrids of Ridge and Lasso regression: through their penalty functions they both apply shrinkage and perform subset selection. The novelty of the EN and NEN regressions lies in their ability to tackle a shortcoming of the Lasso, namely that when a group of highly correlated variables exists in the dataset, the Lasso tends to choose one variable from the group essentially at random; the EN's tendency to select such variables together is known in the literature as the "grouping effect".
These techniques are implemented in two types of study, a simulation study and a real-life dataset study, in order to analyse and compare their performance under different scenarios. In the simulation study, three datasets with different levels of multicollinearity and dimensions are analyzed, while for the real-life dataset the high-dimensional case (n &lt; p) is studied.
Description: B.SC.(HONS)STATS.&amp;OP.RESEARCH</description>
      <pubDate>Wed, 01 Jan 2014 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/94196</guid>
      <dc:date>2014-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Modeling ordinal data through Bayesian SEM</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/93890</link>
      <description>Title: Modeling ordinal data through Bayesian SEM
Abstract: Structural equation modeling (SEM) is a flexible statistical technique used to model complex relationships between a set of observable and unobservable variables, where each variable may be either dependent or independent. A SEM consists of a measurement model, which captures the relationships between the observable and latent variables, and a structural model, a set of simultaneous equations describing how the latent variables in the model are related to each other.
Once an appropriately identified SEM has been specified, the next step is to estimate the unknown parameters of the model. As an estimation technique, the Bayesian approach has become increasingly popular in the field of SEM, particularly when dealing with small samples. Under this approach, the unknown model parameters are estimated through data augmentation and Markov chain Monte Carlo (MCMC) methods. Once estimates are obtained, the goodness-of-fit of the model is assessed through the posterior predictive p-value, a Bayesian alternative to the classical p-value.
The Bayesian SEM strategy is applied to examine the relationship between a set of observable and latent variables in a dataset concerning invasion of privacy, risk taking and security concerns of a person when using the internet.
Description: B.SC.(HONS)STATS.&amp;OP.RESEARCH</description>
      <pubDate>Sun, 01 Jan 2012 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/93890</guid>
      <dc:date>2012-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Evolutionary algorithms</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/93809</link>
      <description>Title: Evolutionary algorithms
Abstract: Evolutionary Algorithms are probabilistic techniques inspired by the principle of natural evolution proposed by Charles Darwin. They are typically used to generate useful solutions to optimization and search problems by means of operators borrowed from natural evolution, such as selection, crossover and mutation. An overview of these algorithms is provided, with particular attention to one of their main classes, the Genetic Algorithm. The workings of Evolutionary Algorithms are described, together with an outline of the main classes into which they are divided, followed by a description of each component of the algorithm. Moreover, since Evolutionary Algorithms can be modelled by Markov processes, the Markov model of the algorithm is presented, together with the conditions under which an algorithm with an elitist selection rule converges to the global minimum of an optimization problem irrespective of the search space.
The Genetic Algorithm is one of the main classes of Evolutionary Algorithms. Its main components are described, together with examples of how they can be implemented. The Genetic Algorithm is also modelled by Markov processes, and its exact transition matrix is presented. Furthermore, fifteen benchmark functions are used to test the efficiency and performance of the Genetic Algorithm; the parameters considered in this analysis include the population size and the crossover rate. The Genetic Algorithm is also compared to other traditional methods, namely the Nelder-Mead moving simplex, Simulated Annealing and Pattern Search, to determine in which cases it performs better than these techniques.
Description: B.SC.(HONS)STATS.&amp;OP.RESEARCH</description>
      <pubDate>Tue, 01 Jan 2013 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/93809</guid>
      <dc:date>2013-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Modelling online credit card usage</title>
      <link>https://www.um.edu.mt/library/oar/handle/123456789/93801</link>
      <description>Title: Modelling online credit card usage
Abstract: The aim of this dissertation is to model the spending behaviour of credit cardholders, with the goal of classifying legitimate and fraudulent users separately. Markov theory, in particular the Hidden Markov Model (HMM), proved a valuable tool in providing solutions to the three main problems tackled. Dynamic programming methods were essential in reducing the demanding computational cost of several calculations. In particular, the Viterbi algorithm was exploited in the second HMM problem, known as the Decoding Problem. Maximum likelihood theory was also used, especially in the third HMM problem, usually termed the Learning Problem. The Baum-Welch algorithm, a well-known Expectation-Maximisation (EM) algorithm, was used to obtain the parameters that best describe each credit cardholder's spending patterns. Finally, classification of legitimate and fraudulent users was carried out using cluster analysis.
Description: B.SC.(HONS)STATS.&amp;OP.RESEARCH</description>
      <pubDate>Thu, 01 Jan 2009 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://www.um.edu.mt/library/oar/handle/123456789/93801</guid>
      <dc:date>2009-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>