Ongoing Projects
Automated Generation of Mock Objects using Symbolic Execution 

The ICT industry invests a substantial amount of resources in the development of unit tests for their products. These tests are then used as a safety net: they are executed every time a change is carried out to ensure that no regression has occurred. This project is concerned with the automated generation of unit tests. More specifically, the research will complement existing techniques to address the problem of indirect inputs. In object-oriented programming, an indirect input occurs when a method foo() in object X modifies its path of execution based on the return value of its call to a method foobar() in object Y. Since foo() calls foobar() as part of its internal algorithm, any unit test which exercises foo() has no control over the return value of foobar() through parameter manipulation. The industry's response to this is the development of mock objects which behave in a specific manner in order to force a certain path of execution during particular tests. For non-trivial interactions, this can become cumbersome when done manually. This project will investigate ways of generating such mocks automatically as part of automated unit test generation.
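As a minimal sketch of the problem, consider the foo()/foobar() interaction described above written out in Java. The hand-written MockY class below is illustrative of the kind of mock the project aims to generate automatically; all class names are hypothetical.

```java
// Y exposes foobar(), whose return value X.foo() branches on.
interface Y {
    int foobar();
}

// X.foo() takes different paths depending on Y's answer: an indirect
// input that a test cannot influence through foo()'s own parameters.
class X {
    private final Y y;
    X(Y y) { this.y = y; }

    String foo() {
        return (y.foobar() > 0) ? "positive path" : "non-positive path";
    }
}

// A hand-written mock forcing a specific return value, so a unit test
// can steer foo() down a chosen path. Generating such mocks
// automatically is what this project investigates.
class MockY implements Y {
    private final int canned;
    MockY(int canned) { this.canned = canned; }
    public int foobar() { return canned; }
}

public class MockDemo {
    public static void main(String[] args) {
        System.out.println(new X(new MockY(1)).foo());  // "positive path"
        System.out.println(new X(new MockY(-1)).foo()); // "non-positive path"
    }
}
```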

The project is currently being undertaken by Matthew Farrugia.  


Using Symbolic Execution to Generate Test Cases for Monitor Analysis

 

While monitors are typically specified in a high-level language to minimise the chances of error, verifying that monitors are correctly specified remains a significant concern, particularly if the monitor executes reparatory code at runtime. Testing monitors manually is a challenging task, as one would need to exercise the monitored system to drive the monitor through both satisfying and violating executions. Automating this process would give monitor developers precious feedback at no extra cost.

This project aims to investigate whether symbolic execution techniques can be used to automatically generate test cases for monitor testing.
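To make the task concrete, here is a minimal Java sketch assuming a toy "no read after close" property. The monitor is driven by event traces, and the test cases a symbolic executor would need to produce are exactly the traces that steer it into its satisfying and violating states. The monitor class and property are illustrative, not the group's actual specification language.

```java
import java.util.List;

// Toy monitor for "no read after close". Driving it into both its
// satisfying and violating states is what generated tests must achieve.
class NoReadAfterCloseMonitor {
    private boolean closed = false;
    private boolean violated = false;

    void step(String event) {
        if (event.equals("close")) closed = true;
        if (event.equals("read") && closed) violated = true; // violation state
    }

    boolean isViolated() { return violated; }
}

public class MonitorTestDemo {
    static boolean runTrace(List<String> trace) {
        NoReadAfterCloseMonitor m = new NoReadAfterCloseMonitor();
        trace.forEach(m::step);
        return m.isViolated();
    }

    public static void main(String[] args) {
        // Two traces a symbolic executor could derive by solving the
        // monitor's branch conditions: one satisfying, one violating.
        System.out.println(runTrace(List.of("read", "close")));         // false: satisfied
        System.out.println(runTrace(List.of("read", "close", "read"))); // true: violated
    }
}
```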

 

Mark Tanti is currently working on the project.  

Test Scenario Explosion Problem 

The problem stems from a real-world situation whereby systems developed within a company are deployed worldwide to a large number of different environments. These environments can consist of different devices, operating systems, device drivers, internet browsers, and so on. It is impractical for all these scenarios to be reproduced in a lab environment within a company. We are working on a framework that enables companies to set up trusted peer-to-peer networks, which are subsequently used to deploy systems and automated tests to participants. This enables companies to test systems in realistic environments whilst massively increasing their capability to test different scenarios.
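As one hypothetical illustration of the matching step such a framework would need, the sketch below pairs a test scenario's environment requirements with the profiles advertised by participating peers. All names and fields here are assumptions made for illustration, not the project's actual design.

```java
import java.util.List;
import java.util.Map;

// A peer advertises its environment as key/value pairs; the framework
// selects peers whose profile covers a scenario's requirements.
record PeerProfile(String peerId, Map<String, String> environment) {
    boolean covers(Map<String, String> required) {
        return required.entrySet().stream()
                .allMatch(e -> e.getValue().equals(environment.get(e.getKey())));
    }
}

public class PeerSelectionDemo {
    public static void main(String[] args) {
        List<PeerProfile> peers = List.of(
            new PeerProfile("peer-1", Map.of("os", "Windows 8", "browser", "IE 11")),
            new PeerProfile("peer-2", Map.of("os", "Ubuntu 14.04", "browser", "Firefox")));

        // A scenario that no in-house lab machine happens to cover.
        Map<String, String> scenario = Map.of("os", "Ubuntu 14.04", "browser", "Firefox");

        peers.stream()
             .filter(p -> p.covers(scenario))
             .forEach(p -> System.out.println("deploy tests to " + p.peerId()));
    }
}
```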

Andrea Mangion is currently working on the project.   


Test Case Generation for Malware Analysis using Symbolic Execution

Malware analysis is the reverse engineering of malicious binaries in order to explore and extract their complete behaviour, both to protect against attacks and to disinfect affected sites. The dynamic analysis of malware inside sandboxes is useful since it removes the need for the analyst to look into the malware code itself. However, this approach could end up disclosing no behaviour at all when faced with trigger-based malware. Existing work in this area takes an execution-path exploration approach, aiming to maximise effectiveness by increasing both path coverage and precision, the latter achieved by excluding infeasible paths and executing paths under the correct runtime values. For this purpose, symbolic execution fits the bill. The main aim of this project is to build on existing work in order to provide a solution for automated malware analysis that is capable of uncovering hidden behaviour in malware, is tunable in terms of its efficiency versus its effectiveness, and is sandbox-independent.
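A toy Java example of trigger-based behaviour illustrates why plain sandbox execution falls short and where symbolic execution helps; the trigger condition here is invented purely for illustration.

```java
// Illustrative trigger-based logic: the payload only runs when a specific
// condition on the input holds, so a sandbox executing the sample with
// arbitrary input observes benign behaviour only.
public class TriggerDemo {
    static void payload() {
        System.out.println("hidden malicious behaviour");
    }

    static void run(int commandByte) {
        // A concrete sandbox run rarely hits commandByte == 0x7A.
        // A symbolic executor marks commandByte as symbolic, collects the
        // path condition (commandByte == 0x7A) and asks a solver for a
        // satisfying input, thereby uncovering the hidden path.
        if (commandByte == 0x7A) {
            payload();
        } else {
            System.out.println("benign behaviour");
        }
    }

    public static void main(String[] args) {
        run(0x00); // typical sandbox run: trigger never fires
        run(0x7A); // solver-derived input: hidden behaviour exposed
    }
}
```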

James Gatt is currently working on the project. 


Large-Scale Mobile Test Automation

With mobile devices set to outnumber people by the end of 2013, mobile applications are the holy grail for developers and companies alike.  Yet the relative ease with which one can deploy and sell a mobile application comes with a steep increase in competition.  This effectively means that anyone who deploys an application of inferior quality is likely to suffer swift negative repercussions.  Yet testing an application across a wide variety of devices, each with varying screen sizes, features and operating systems, is daunting to say the least.  This is further compounded by the varying environmental factors to which each device is exposed as its owner moves around with it.  No lab can feasibly replicate a representative number of scenarios.

In this project we are looking at utilising mobile devices made available by their owners for testing purposes.  This could be on a voluntary or (for example) pay-per-test model.  The idea is to automatically deploy an app and its test suite to devices which fit a particular profile, automatically run the tests and return the results to the developer.  This of course brings up a considerable number of non-trivial challenges, ranging from technical ones such as how to deploy code remotely to mobile devices, all the way to complex security concerns which require us to guarantee that the interests of all parties are protected.
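A rough sketch of the deploy-run-report cycle this implies is given below; the Device, TestRunner and TestResult names are hypothetical placeholders, not the project's actual API.

```java
import java.util.List;

record Device(String id, String os, int screenWidthPx) {}
record TestResult(String deviceId, String testName, boolean passed) {}

interface TestRunner {
    // Runs the packaged test suite on a remote device and reports back.
    List<TestResult> deployAndRun(Device device, byte[] appPackage, byte[] testSuite);
}

public class MobileTestDemo {
    // Select devices matching the developer-supplied profile, fan the
    // suite out to each, and collect the results for the developer.
    static List<TestResult> testOnMatchingDevices(List<Device> fleet, String requiredOs,
                                                  byte[] app, byte[] tests, TestRunner runner) {
        return fleet.stream()
                    .filter(d -> d.os().equals(requiredOs))
                    .flatMap(d -> runner.deployAndRun(d, app, tests).stream())
                    .toList();
    }

    public static void main(String[] args) {
        // Stub runner standing in for the real remote-execution machinery.
        TestRunner fake = (d, app, tests) ->
            List.of(new TestResult(d.id(), "smokeTest", true));
        List<Device> fleet = List.of(new Device("dev-1", "Android 4.4", 1080),
                                     new Device("dev-2", "iOS 7", 640));
        testOnMatchingDevices(fleet, "Android 4.4", new byte[0], new byte[0], fake)
            .forEach(r -> System.out.println(r.deviceId() + " " + r.testName() + " " + r.passed()));
    }
}
```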

Sebastian Attard is currently working on the project and is co-supervised by Conrad Attard.


Automated Monitor Generation from jUnit Tests

Testing is an integral part of any software development life-cycle. Unit testing is usually the first form of testing that is undertaken, focused primarily on examining the components that make up the system. That said, test engineers might also require a means of monitoring certain critical points during the system's execution.

Runtime verification is designed to handle such a requirement: it monitors the system during its execution to check that it adheres to a set of predefined properties. Neither technique alone is enough to sufficiently verify a system. However, designing systems for both unit testing and runtime verification is a time-consuming and repetitive affair, with the former usually taking precedence since the definition of properties for monitors is seldom straightforward. This results in systems being finalised without proper verification.

Thus, automated translation from one form of verification to the other would save time, money and manpower by removing the need to manually write both unit tests and runtime verification monitors, whilst keeping the advantages of both within plausible reach. The proposed solution is a system which extracts the necessary data from unit tests and, using this data, generates the appropriate monitors. With this solution, developers can achieve greater verification coverage without any unnecessary increase in workload.
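As a hand-written illustration of the translation idea, the sketch below takes a jUnit-style assertion about pop() on a toy stack and turns it into a check performed on every call at runtime. The translation shown is an assumption about what generated output could look like, not the project's actual generator.

```java
import java.util.ArrayList;
import java.util.List;

// The system under test.
class Stack {
    private final List<Integer> items = new ArrayList<>();
    void push(int x) { items.add(x); }
    int pop() { return items.remove(items.size() - 1); }
    int size() { return items.size(); }
}

public class MonitorFromTestDemo {
    // A jUnit-style test encodes "pop decreases size by 1" for one input:
    //
    //   @Test public void popDecreasesSize() {
    //       Stack s = new Stack();
    //       s.push(7);
    //       int before = s.size();
    //       s.pop();
    //       assertEquals(before - 1, s.size());
    //   }
    //
    // A monitor derived from it checks the same property on *every*
    // pop() performed at runtime (illustrative translation):
    static int monitoredPop(Stack s) {
        int before = s.size();
        int result = s.pop();
        if (s.size() != before - 1) {
            throw new AssertionError("monitor: pop() did not decrease size by 1");
        }
        return result;
    }

    public static void main(String[] args) {
        Stack s = new Stack();
        s.push(1);
        s.push(2);
        System.out.println(monitoredPop(s)); // property checked at runtime
    }
}
```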

Jonathan Micallef is currently working on the project.


An Aspect-Oriented System Specification Language Based on Gherkin

This project is concerned with the development of an aspect-oriented system-specification language based on the widely-used Gherkin specification language, as well as with investigating the effects that such a language might have on system specification.

Verifying that a system meets its specification requirements is one of the most common software development challenges. Specification languages have been widely employed to mitigate this issue; however, if the specification language is not designed to clearly capture the system's aspects, redundancy and maintainability issues arise at the system specification level. This may in turn degenerate into developing a system that does not meet its specification requirements.

The investigation and improvement of system specification languages is essential to developing software which meets its specification requirements.

 John Aquilina Alamango is currently working on the project.

