<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>OAR@UM Collection:</title>
  <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/6642" />
  <subtitle />
  <id>https://www.um.edu.mt/library/oar/handle/123456789/6642</id>
  <updated>2026-05-02T18:56:13Z</updated>
  <dc:date>2026-05-02T18:56:13Z</dc:date>
  <entry>
    <title>Optimizing scheduling in a pharmaceutical company</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/93900" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/93900</id>
    <updated>2022-04-18T09:05:15Z</updated>
    <published>2015-01-01T00:00:00Z</published>
    <summary type="text">Title: Optimizing scheduling in a pharmaceutical company
Abstract: Scheduling is an important task carried out on a daily basis. A "good" schedule will&#xD;
increase the company's profit, and customers will be more willing to buy products as they&#xD;
are satisfied within the shortest period of time. Scheduling is also important because it can&#xD;
address many different objective functions, such as minimization of makespan, minimization&#xD;
of delays, and minimization of total completion time. There has been extensive research on&#xD;
algorithms to solve such problems. An overview of these algorithms, from both a&#xD;
deterministic and a stochastic theoretical perspective, is provided.&#xD;
This dissertation mainly focuses on minimizing the makespan in the identical parallel&#xD;
machines scheduling problem. A mixed integer linear program (MILP) is built to&#xD;
solve this scheduling problem using real-life data from a local pharmaceutical&#xD;
company. In addition, the Longest Processing Time (LPT) heuristic, one of the oldest and&#xD;
best-known scheduling algorithms, is used, and its results are compared with those of the&#xD;
MILP and with the company's original schedule.
Description: B.SC.(HONS)STATS.&amp;OP.RESEARCH</summary>
    <dc:date>2015-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Parameter estimation of Lévy processes</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/93899" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/93899</id>
    <updated>2022-04-18T09:03:19Z</updated>
    <published>2015-01-01T00:00:00Z</published>
    <summary type="text">Title: Parameter estimation of Lévy processes
Abstract: Lévy processes have become increasingly popular in mathematical finance because of&#xD;
their ability to capture the leptokurtic shape of stock returns as well as the jumps&#xD;
observed in stock prices.&#xD;
In this dissertation we present some of the theory and major results of Lévy&#xD;
processes. In particular, we focus on the Normal Inverse Gaussian and the Meixner&#xD;
process. We then look at different parameter estimation methods for Lévy&#xD;
processes, which can be split into two major categories: the parametric and the&#xD;
nonparametric approach. For the nonparametric approach we consider a projection&#xD;
estimator proposed by Comte and Genon-Catalot [14] and an estimator introduced&#xD;
by Rubin and Tucker [44]. For the parametric approach we consider the Integrated Sum&#xD;
of Squared Estimation proposed by Heathcote [28] and a Stochastic Programming&#xD;
method presented by Sant and Caruana [45]. Finally, these estimation methods are&#xD;
implemented on the Malta Stock Exchange Index and the results are compared where&#xD;
possible.
Description: B.SC.(HONS)STATS.&amp;OP.RESEARCH</summary>
    <dc:date>2015-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Analyzing dichotomous and multichotomous categorical responses to assess self-esteem using response models</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/93897" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/93897</id>
    <updated>2022-04-18T08:58:17Z</updated>
    <published>2015-01-01T00:00:00Z</published>
    <summary type="text">Title: Analyzing dichotomous and multichotomous categorical responses to assess self-esteem using response models
Abstract: Item Response Theory (IRT) is a statistical procedure, typically used in psychological&#xD;
measurement, with specific reference to the attitudes, abilities, achievement levels and&#xD;
personality traits of individuals. Its main aim is to construct and analyze&#xD;
scores on a person's latent trait using questionnaires, personality assessments and&#xD;
surveys. IRT assesses a person's probability of rating an item in a particular manner&#xD;
according to a number of factors, namely the respondent's trait level (qualities of the&#xD;
individual), the item difficulty and the item discrimination (qualities of the item).&#xD;
Dichotomous IRT models have been developed to cater for two-category responses.&#xD;
The Rasch model gives the probability that a person with a particular trait level rates an&#xD;
item of a specific difficulty. If the item discrimination varies, then the&#xD;
Two-Parameter Logistic (2-PL) model is used. The Three-Parameter Logistic (3-PL)&#xD;
model generalizes the 2-PL model by introducing a guessing parameter.&#xD;
Multichotomous IRT models have been developed to cater for rating responses with&#xD;
more than two categories. The Rating Scale model (RSM) and the Partial Credit model&#xD;
(PCM), which belong to the polytomous family of Rasch models, are also described.&#xD;
The 1-PL and 2-PL models, as well as the RSM and the PCM, are fitted to a data set related&#xD;
to self-esteem and are implemented using the facilities of STATA's gllamm subroutine.&#xD;
The questionnaire, which was distributed to 303 individuals, comprised ten items, each&#xD;
of which was rated on a 4-point Likert scale. A summary of the main findings is&#xD;
provided for each fitted model.
Description: B.SC.(HONS)STATS.&amp;OP.RESEARCH</summary>
    <dc:date>2015-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Parametric and non-parametric estimation methods for latent variables</title>
    <link rel="alternate" href="https://www.um.edu.mt/library/oar/handle/123456789/93896" />
    <author>
      <name />
    </author>
    <id>https://www.um.edu.mt/library/oar/handle/123456789/93896</id>
    <updated>2022-04-18T08:50:25Z</updated>
    <published>2015-01-01T00:00:00Z</published>
    <summary type="text">Title: Parametric and non-parametric estimation methods for latent variables
Abstract: The aim of this dissertation is to compare two estimation methods, the&#xD;
Expectation-Maximization (EM) algorithm and the Non-Parametric Maximum Likelihood&#xD;
Estimation (NPMLE) approach, for estimating a number of unobserved groups or latent&#xD;
classes. A medical data set related to patients suffering from schizophrenia was used&#xD;
to compare these two methods.&#xD;
The nonparametric maximum likelihood estimator of an unspecified distribution is a&#xD;
discrete distribution with nonzero mass probabilities at a finite number of mass points&#xD;
(locations). The true number of locations is determined when the likelihood is&#xD;
maximized using the concept of a directional derivative, called the Gâteaux derivative.&#xD;
The NPMLE algorithm is initialized by setting the number of mass points (latent&#xD;
variable) to 1 and then searches for a new mass point over a fine grid covering a wide&#xD;
range of values. The algorithm terminates when the directional derivative is non-positive&#xD;
for all mass points. The method was applied to the medical data set and implemented&#xD;
using the facilities of GLLAMM, a subroutine of STATA. The approach&#xD;
yields posterior means, which are the probabilities that a patient belongs to each of the&#xD;
latent classes. Patients are then allocated to the latent class (segment) with the largest&#xD;
posterior mean.&#xD;
The EM algorithm uses a different approach in which the observed data are augmented by&#xD;
the inclusion of unobserved data, namely 0-1 indicators of whether a patient&#xD;
belongs to a particular latent class. The posterior probabilities are the expected values&#xD;
of these unobserved data and are calculated using Bayes' theorem. The EM algorithm&#xD;
was applied to the data set and implemented using the facilities of GLIM. As in&#xD;
the NPMLE approach, patients are then allocated to the latent class with the largest&#xD;
posterior probability. In this approach, both the clustering and estimation procedures&#xD;
are carried out simultaneously, with a regression model fitted for each segment.&#xD;
Both the EM (parametric) and NPMLE (non-parametric) approaches showed that the&#xD;
2-segment model is the best model for the dataset. Both methods yielded similar parameter&#xD;
estimates for the regression models and a similar allocation of patients to the two latent&#xD;
classes. The two estimation methods were also compared in terms of execution time. It&#xD;
was found that for a small number of latent classes the two methods yielded similar&#xD;
execution times; however, as the number of segments increases, the EM approach converges&#xD;
faster than the NPMLE approach. The main advantage of the NPMLE approach is&#xD;
that it guarantees convergence to a global maximum, while the EM algorithm only&#xD;
guarantees convergence to a local maximum.
Description: B.SC.(HONS)STATS.&amp;OP.RESEARCH</summary>
    <dc:date>2015-01-01T00:00:00Z</dc:date>
  </entry>
</feed>

