<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>OAR@UM Collection:</title>
  <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/71595" />
  <subtitle />
  <id>https://www.um.edu.mt/library/oar/handle/123456789/71595</id>
  <updated>2026-04-15T03:48:54Z</updated>
  <dc:date>2026-04-15T03:48:54Z</dc:date>
  <entry>
    <title>Modelling financial data through Lévy processes</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/91781" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/91781</id>
    <updated>2022-03-23T12:19:18Z</updated>
    <published>2018-01-01T00:00:00Z</published>
    <summary type="text">Title: Modelling financial data through Lévy processes
Abstract: Early in the 20th century, the use of Brownian motion for modelling movements of stock prices became popular. Later, it became apparent that another kind of stochastic process, now called a Lévy process, was better suited than Brownian motion to model the log returns of stock prices. Theory on this topic is vast, and there have been many contributions to this area of study in the last decade. In chapter 3 we explore some of this vast theory. For the purpose of this dissertation we focus on high-frequency, non-parametric estimation methods. We discuss some methods in chronological order: first the Rubin and Tucker estimation method, then the Gegler and Stadtmüller [18] estimation method, and finally the Sant and Caruana estimation method, the latter being the most recent, released in 2018. In chapter 5 we apply the estimators discussed in the fourth chapter to a local financial data set. Furthermore, a simulation study is conducted, and some of the estimation methods are compared.
Description: B.SC.(HONS)STATS.&amp;OP.RESEARCH</summary>
    <dc:date>2018-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>A mixed integer programming problem in the pharmaceutical industry</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/78612" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/78612</id>
    <updated>2021-07-26T06:48:47Z</updated>
    <published>2018-01-01T00:00:00Z</published>
    <summary type="text">Title: A mixed integer programming problem in the pharmaceutical industry
Abstract: This dissertation deals with an optimization problem which appears in the context of scheduling pharmaceutical quality control tests. Scheduling such tests, which are mandatory to approve the safety, purity and efficacy of pharmaceutical product families, is a very challenging task given the limited resource availability and the fact that a single product family must undergo multiple tests. The aim of this study is to develop an original mixed integer linear programming (MILP) model for scheduling these laboratory tests within the pharmaceutical company Aurobindo Pharma (Malta) Limited. Each week the company needs to plan tests for approximately 40 different product families, with each family requiring at least G different tests. Effective plans are thus essential for increasing the efficiency of the laboratory and improving the utilization of resources (employees/machines). The proposed model determines a schedule over a given planning horizon by minimizing the makespan. It encompasses constraints such as assignment constraints of different stages of tests to resources, and timing constraints between tests pertaining to the same product family. Having formulated the model, theoretical background on the existence and uniqueness of optimal solutions to MILP problems is studied and exemplified. The proposed model has been implemented in GAMS and solved by CPLEX/GUROBI via a Branch-and-Cut solution approach. Computational experiments were run on real data provided by the company over different planning horizons. The success of the obtained results is reported via Gantt charts.
Description: M.SC.STATISTICS</summary>
    <dc:date>2018-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Regularisation in regression : the partial least squares approach</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/78583" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/78583</id>
    <updated>2023-03-06T07:00:43Z</updated>
    <published>2018-01-01T00:00:00Z</published>
    <summary type="text">Title: Regularisation in regression : the partial least squares approach
Abstract: Ordinary Least Squares (OLS) regression, spearheaded by Gauss in the 18th century, is a technique that is widely used to estimate parameter coefficients in regression. Throughout the years, data sets started getting larger in size, both in terms of observations and variables. In particular, areas such as spectrometry and gene studies tend to have data sets that consist of a large number of variables, which often outnumber the observations. Such data sets are known as high-dimensional, and estimation techniques such as OLS tend to perform poorly on them: the results are either ill-conditioned or undefined. This paved the way for regularisation. Amongst the many regularisation methods that exist is the Partial Least Squares (PLS) regression method. In this dissertation, we will explain the statistical interpretation of the PLS model based on the "Krylov hypothesis" and explain how the latent variables, loadings and weights can be obtained from various algorithms. A very important component that needs to be determined is the number of PLS components k. In view of this, a number of validation techniques will be discussed.
Description: M.SC.STATISTICS</summary>
    <dc:date>2018-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>An evaluation of likelihood-based approaches in nonlinear structural equation modelling</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/77896" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/77896</id>
    <updated>2021-07-01T11:20:39Z</updated>
    <published>2018-01-01T00:00:00Z</published>
    <summary type="text">Title: An evaluation of likelihood-based approaches in nonlinear structural equation modelling
Abstract: Research projects in various fields often incorporate observable and latent variables. Observable variables are quantities which can be directly measured, such as temperature, disposable income and heart rate. Latent variables are measurements which cannot be directly quantified, and thus need to be measured indirectly through other manifest (observable) variables. These variables occur in a multitude of fields, ranging from the social sciences to the behavioural, psychological, financial and biological sciences. In biology, for instance, obesity, which is a latent variable, is measured through several manifest variables including height and weight. The tools to model and analyse relationships between observed and latent variables, and also between latent variables themselves, are provided by Structural Equation Modelling (SEM). The simplest form of SEM assumes linearity in the relationship between the latent factors and the variables used to measure them. This assumption results in restrictions when complex relationships, which may be of a nonlinear nature, are considered. Consequently, in such instances, a class of more general models needs to be considered, and hence the necessity arises for nonlinear structural equation models. Allowing relationships between observable and latent variables to be nonlinear provides a wider field of work where more elaborate models may be considered. The theory of nonlinear structural equation modelling allows for several approaches which may be utilised to estimate model parameters. In this dissertation, three likelihood-based approaches will be considered, namely: the Latent Moderated Structural (LMS) equation approach; the Quasi-Maximum Likelihood (QML) approach; and the Nonlinear Structural Equation Mixture Modelling (NSEMM) approach. The behaviour of these approaches will be studied under different conditions, such as the effect of sample size and the departure from the normality assumption. The study will not only take a theoretical stance, but also a more practical approach through a Monte Carlo simulation study where the behaviour of the model estimates, achieved using the three different approaches considered, will be analysed. An application of nonlinear structural equation modelling using an empirical dataset is also presented.
Description: M.SC.STATISTICS</summary>
    <dc:date>2018-01-01T00:00:00Z</dc:date>
  </entry>
</feed>