University of Malta


Prof. Tim Baldwin, University of Melbourne, Topic Model Interpretation, 30th January, 2012.

Topic modelling is a powerful means of soft clustering, whereby multinomial distributions are simultaneously learned for terms and documents over a common set of "topics". In the first part of this talk, I will present work on automatically interpreting topic models to enhance their utility for human consumption. First, I will describe the task of topic model coherence, and present a model for predicting the coherence of a topic. I will then introduce the task of topic labelling, using either a single term found in the topic, or an expanded set of labels derived in part from Wikipedia. In the second part of the talk, I will (time permitting) describe ongoing work applying topic modelling to various lexical semantic tasks. 
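As a rough, self-contained illustration of one common family of coherence measures (average pairwise PMI of a topic's top words over a reference corpus), here is a minimal sketch; the tiny corpus stands in for something like Wikipedia, and this is not the talk's actual prediction model:

```python
import math
from itertools import combinations

def topic_coherence(topic_words, documents, eps=1e-12):
    """Score a topic's top words by average pairwise PMI over a
    reference corpus (each document is a set of terms). Words that
    frequently co-occur yield a high score; unrelated words a low one."""
    n_docs = len(documents)
    df = {w: sum(1 for d in documents if w in d) for w in topic_words}
    scores = []
    for w1, w2 in combinations(topic_words, 2):
        co = sum(1 for d in documents if w1 in d and w2 in d)
        p1, p2 = df[w1] / n_docs, df[w2] / n_docs
        p12 = co / n_docs
        scores.append(math.log((p12 + eps) / (p1 * p2 + eps)))
    return sum(scores) / len(scores)
```

A coherent topic such as ("cat", "dog") scores above an incoherent mix such as ("cat", "stock") on any corpus where the former pair co-occurs.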


Matthew Montebello, Total Mobile Ubiquity, 24th January, 2012.

During this synoptic talk, the main players within a total mobile ubiquity scenario will be presented and discussed in line with ongoing work and research. The convergence of rapid developments in mobile technologies, the ever-evolving Semantic Web, and the widespread popularity of social networks is driving industry and consumers alike into a frenzy for information and an inadvertent new way of life. 


Christopher Torpelund-Bruin, James Cook University (JCU), Australia, The Internet of Things, 16th January, 2012.


This talk discusses the challenges and issues currently faced in ubiquitous computing and distributed sensor networks. It looks at how the introduction of context awareness in ultra-mobile ubiquitous computing could help realise Mark Weiser's dream of 'The Internet of Things', presents an overview of some current projects and research, and discusses the possibility of building on top of them.


Owen Sacco, DERI, discusses his paper, An Access Control Framework for the Web of Data, published at the 10th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (IEEE TrustCom-11), 9th January, 2012.

Abstract: As open data formats become popular amongst Web developers, the creation and consumption of structured data is increasing. Data creators can easily link diverse datasets, creating a graph of interlinked data called the Web of Data. This increase in linked structured data demands mechanisms to ensure that access is controlled and granted only to those who are eligible to consume the data. However, although much work has been done on access control for the Web, it cannot be applied directly to structured data because of the different nature of how the data is formatted. Moreover, little has been done to create fine-grained access control mechanisms for the Web of Data. In this talk, we therefore present our access control framework, which consists of a light-weight vocabulary for defining fine-grained privacy preferences for structured data and a privacy preference manager that allows users to: (1) create privacy preferences based on our vocabulary and (2) grant or restrict third-party users' access to their data.
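To give a flavour of what fine-grained access control over structured data means, here is a hypothetical sketch in which a privacy preference attaches an allowed-user set to a triple pattern (None acting as a wildcard); the names and representation are illustrative only, not the framework's actual vocabulary:

```python
def matches(pattern, triple):
    """A pattern component of None matches anything."""
    return all(p is None or p == t for p, t in zip(pattern, triple))

def visible_triples(graph, preferences, user):
    """Return the triples of `graph` that `user` may see.
    `preferences` is a list of (triple_pattern, allowed_users) pairs;
    triples with no matching preference are treated as public."""
    result = []
    for triple in graph:
        rules = [users for pat, users in preferences if matches(pat, triple)]
        if all(user in users for users in rules):  # vacuously true if no rule
            result.append(triple)
    return result
```

For example, a preference restricting all "phone" triples to a single contact hides the phone number from everyone else while leaving the rest of the profile public.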


Jeremy Debattista and Simon Scerri, DERI, Challenges in case-based reasoning for context awareness in ambient intelligent systems, 19th December, 2011.

One of the most important issues in an ambient intelligent environment, indeed in any problem-solving situation, is the ability of a system to appreciate its environment and assess the situation in which problems are to be solved. Recently, case-based reasoning has gained some momentum in the area of context awareness for ambient intelligent systems. When applying case-based reasoning in this type of system, some challenges arise, each well known for case-based reasoning in general, but each gaining a new particular angle. The main challenges are: how to acquire the initial cases, coping with the potential for a very large number of cases, when to execute a case-based reasoning process, and knowing whether the reasoning was correct. The work presented here builds on the experiences gained by applying case-based reasoning to situation assessment. We analyse each of these challenges from the perspective of different domains, and give some suggestions as to how best to approach them.
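The retrieval step at the heart of case-based reasoning can be sketched very simply: rank stored cases by similarity to the current situation and reuse the best match's solution. The metric below (fraction of shared feature values) is a deliberately naive stand-in; real systems weight features and adapt the retrieved solution:

```python
def retrieve_cases(case_base, query, k=1):
    """Return the k past cases most similar to the query situation.
    Each case is a (situation_dict, solution) pair; similarity is the
    fraction of the query's features whose values the case shares."""
    def sim(case):
        situation, _solution = case
        return sum(situation.get(f) == v for f, v in query.items()) / len(query)
    return sorted(case_base, key=sim, reverse=True)[:k]
```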


Keith Cortis and Simon Scerri, DERI, Semantic Integration of Online Identities, 19th December, 2011.

Users are currently required to create and separately manage duplicated personal data in numerous, heterogeneous online account services. Our approach targets the retrieval and integration of this data, based on a comprehensive ontology framework which serves as a standard format. The motivation for this integration is to enable users to manage their personal information from a single entry-point. The main challenge faced by this approach is the detection of semantic equivalence between contacts described in online profiles, their attributes, and shared posts. In this talk we outline our elaborate part-syntactic, part-semantic approach to online profile integration, its current status, and future plans for research and development concerned with this challenge.
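The purely syntactic side of profile matching can be illustrated with a simple Jaccard overlap of attribute-value pairs; this is only a stand-in for the part-syntactic, part-semantic matching described in the talk, and the attribute names are made up for the example:

```python
def profile_similarity(a, b):
    """Score two contact profiles (dicts of attribute -> value) by the
    Jaccard overlap of their attribute-value pairs: 1.0 for identical
    profiles, 0.0 for profiles with nothing in common."""
    pairs_a, pairs_b = set(a.items()), set(b.items())
    if not pairs_a and not pairs_b:
        return 0.0
    return len(pairs_a & pairs_b) / len(pairs_a | pairs_b)
```

Two accounts belonging to the same person typically share name and e-mail attributes and so score well above a pair of unrelated profiles.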


Mike Rosner and Andrew Attard, Extending a Tool Resource Framework with U-Compare, 21st November, 2011.

This talk deals with the issue of two-way traffic between, on the one hand, locally produced language resources and tools, conceived from a local perspective (i.e. from within a local project or institution), and, on the other, a shared framework conceived from a global perspective that supplies such resources for local re-use or enhancement. We believe that a key enabler of such traffic is the choice of an appropriate sharing platform. Here we make the point using U-Compare, the constellation of EU projects to which METANET4U belongs, and a local project with some already-developed local functionality. The global use of the locally developed module via U-Compare will be demonstrated.


Tobias Kuhn, Research on Controlled Natural Language, 14th November, 2011.

A controlled natural language (CNL) is based on natural language but comes with restrictions on vocabulary, grammar, and/or semantics. The general goal is to reduce or eliminate ambiguity and complexity. My focus lies on languages with a direct connection to formal logic. Such languages should enable users with no background in formal methods to efficiently use logic-based systems like knowledge representation tools and query interfaces. In this talk, I will summarize the approaches and results of the research on CNL I performed over the last couple of years.
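A toy example makes the CNL idea concrete: a restricted grammar admits only a handful of unambiguous sentence patterns, each with a fixed translation into logic. The two patterns below are invented for illustration and are far simpler than any of the actual CNLs covered in the talk:

```python
import re

def cnl_to_logic(sentence):
    """Translate two restricted English patterns into first-order
    logic strings; anything else is rejected as outside the CNL."""
    m = re.fullmatch(r"Every (\w+) is a (\w+)\.", sentence)
    if m:
        return f"forall x: {m.group(1)}(x) -> {m.group(2)}(x)"
    m = re.fullmatch(r"(\w+) is a (\w+)\.", sentence)
    if m:
        return f"{m.group(2)}({m.group(1)})"
    raise ValueError("sentence is outside the controlled language")
```

Because every admissible sentence has exactly one parse, users get the readability of English with the precision of logic, which is the trade-off CNLs aim for.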


Charlie Abela, Behaviour Mining for Personalised Task-Based Support on the User's Desktop, 7th November, 2011.

In this talk we present ongoing research which involves the analysis of user-activity log files to explore how a user's activities evolve with time. We discuss a first attempt at assigning time-varying importance and association values to each resource, based on the dwell-time and the resource-switching patterns exhibited by the user while browsing the Web, and propose a new dynamic graph algorithm called OnlineActivityGraph which leverages these values to generate resource clusters and short-term user models. We will also present some initial findings obtained from a preliminary evaluation which motivate our future work.
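To illustrate the flavour of time-varying importance values, here is a minimal sketch that scores resources from activity-log events by dwell-time with exponential decay; the half-life scheme and event format are assumptions for the example, not the OnlineActivityGraph algorithm itself:

```python
def resource_importance(events, now, half_life=3600.0):
    """Assign each resource a time-decayed importance from log events
    given as (resource, dwell_seconds, timestamp) tuples. Recent, long
    visits contribute most; a visit one half-life ago counts half."""
    scores = {}
    for resource, dwell, ts in events:
        decay = 0.5 ** ((now - ts) / half_life)
        scores[resource] = scores.get(resource, 0.0) + dwell * decay
    return scores
```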


Siva Reddy, University of York, The Effect of Polysemy in Compositional Semantics, 31st October, 2011.

Compositional semantics deals with the task of composing the semantics of larger text entities from their constituents. For any semantic model to describe the language adequately, the issue of compositionality should be addressed. Many semantic composition functions have been proposed to estimate the semantics of compound words from the semantics of constituent words. Before using these composition functions, one should note that not all the semantic properties of a constituent word contribute to the semantics of the compound. In this talk, I will discuss the problems posed by polysemy in semantic composition models and present two different models for solving these problems. The first method is based on Word Sense Induction, which creates a static sense representation. The second method builds a sense representation dynamically for each constituent based on the given context. These senses are then used for composing the semantics of the compound. We evaluate the models on a paraphrasing task. Results show that: (1) selecting the relevant senses of the constituent words leads to better semantic composition and (2) dynamic sense representation performs better than static representation.
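As background, two classic composition functions over word vectors can be sketched in a few lines (in the spirit of additive and multiplicative models from the compositionality literature; these are not the talk's sense-based models). The multiplicative variant hints at why sense selection matters: dimensions survive only where both constituents are active.

```python
def compose_add(u, v):
    """Additive composition: every semantic dimension of both words
    contributes equally to the compound's vector."""
    return [a + b for a, b in zip(u, v)]

def compose_mult(u, v):
    """Multiplicative composition: a dimension survives only if both
    words are active on it, crudely filtering out the semantic
    properties a constituent does not share with the compound."""
    return [a * b for a, b in zip(u, v)]
```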


Chris Staff, Head of ICS, Ongoing Research in User Adaptive Systems and Information Retrieval, 24th October, 2011.


In this talk we will primarily focus on aspects related to making Web browsers (usually Firefox) more adaptive. Web browser history (if unlimited) can be a reasonably complete source of information about a user's interests. However, the history file is rarely used by users because its organisation does not match the user's mental model of how they have visited information. Also, the information contained in it is not used by the browser application, except to mark up links in a web page if the destination has been visited recently by the user (usually the link appears in a different colour). In our current research we investigate ways of using history to:
i) preserve the way in which users visit web pages, so that web pages can be revisited in context;
ii) cluster visited web pages according to topic;
iii) use information from history to: a) automatically disambiguate terms in a user query by reformulating the query, investigating ways in which a user's interaction with documents across applications can be processed to provide context; b) automatically generate queries based on history, the entire desktop, or the user's current browsing session, to proactively recommend web pages to visit; c) cluster search engine results according to topic and rank the clusters according to user interests;
iv) proactively alert users when a previously visited web page's content changes, if the change is likely to be of interest to the user;
v) improve the quality of recommendations, clusters and queries by adaptively learning correspondences between synonymous references to entities (co-reference resolution, but using a user's own history rather than the Web-at-large to 'personalise' the co-references; e.g. 'AI' usually means 'Artificial Intelligence' to someone conversant in Computer Science, but 'Artificial Insemination' to a vet).
We adopt so-called surface-based (e.g. statistical and probabilistic) techniques rather than semantic techniques to solve these problems. Finally, we describe some challenges, especially those related to evaluation: how to involve a sufficiently large number of human evaluators to make the findings statistically significant.
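The personalised co-reference idea (point v above) lends itself to a simple surface-based sketch: pick the expansion of an ambiguous abbreviation whose words appear most often in the user's own history. The function and data below are illustrative assumptions, not the project's actual method:

```python
from collections import Counter

def personalise_abbreviation(abbrev, expansions, history_terms):
    """Resolve an ambiguous abbreviation (e.g. 'AI') to the candidate
    expansion whose words occur most frequently in the user's own
    browsing-history terms -- a purely statistical, surface-based choice."""
    counts = Counter(t.lower() for t in history_terms)
    def score(expansion):
        return sum(counts[w] for w in expansion.lower().split())
    return max(expansions, key=score)
```

A computer scientist's history, rich in terms like "intelligence" and "machine learning", steers 'AI' to 'Artificial Intelligence'; a vet's history would steer it the other way.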


Last Updated: 30 January 2012
