Title: Computational modelling of expressive music performance
Authors: Mifsud, Maria
Keywords: Music -- Data processing
MIDI (Standard)
Multimedia systems
Issue Date: 2017
Abstract: The composition and performance of music are powerful means of expression. Music is ultimately a means of communication between three parties: the composer, the performer and the listener. The composer relies on the performer to convey specific emotions and feelings to the listener via a music score. Technological advances in smartphones and tablets pave the way for software tools that can scan and play back such scores. However, there is a gap between the output of a computer system that renders a score perfectly, conforming to all the symbolic information of the notes, and the performance of a human: a human performer makes the performance expressive by introducing changes in tempo and in the loudness of the notes. A Computer System for Expressive Music Performance (CSEMP) tries to bridge this gap by simulating elements of human expressive performance. In this project, a CSEMP is designed to computationally enhance an expressionless Musical Instrument Digital Interface (MIDI) rendition of piano sheet music with expressive quantities. Given a MIDI file of a piece of music whose score is available, the system identifies the expression indicators present in the score and applies expression models obtained from recorded performances to the MIDI file. To do this, the CSEMP performs image analysis of a digital image of the score, namely symbol recognition and localization, to identify the expression indicators present; it then determines the notes to which each expression applies. Next, the MIDI file is synchronized to the score, so that entries of the MIDI file correspond to notes on the score image. Finally, once the expression models have been obtained, the appropriate model is applied at the appropriate place in the MIDI file. The output of the system is an expressive MIDI sound file.
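The final step described in the abstract, applying an expression model to a deadpan MIDI rendition, can be sketched in miniature. The snippet below is an illustrative assumption, not the dissertation's actual method: it represents note events as plain dictionaries and applies a simple linear crescendo model, ramping note velocities (loudness) across a span of beats. The function name `apply_crescendo` and the linear ramp are hypothetical.

```python
# Hypothetical sketch of one CSEMP step: applying a loudness
# expression model to expressionless MIDI note events.
# The linear crescendo model here is an illustrative assumption.

def apply_crescendo(notes, start, end, v_from=50, v_to=100):
    """Linearly ramp the velocity of notes whose onset (in beats)
    falls in [start, end), simulating a crescendo marking found
    at that location in the score."""
    out = []
    for n in notes:
        n = dict(n)  # copy so the deadpan rendition is untouched
        if start <= n["onset"] < end:
            frac = (n["onset"] - start) / (end - start)
            n["velocity"] = round(v_from + frac * (v_to - v_from))
        out.append(n)
    return out

# A deadpan rendition: every note played at the same velocity.
deadpan = [{"onset": b, "pitch": 60 + b, "velocity": 64} for b in range(8)]
expressive = apply_crescendo(deadpan, start=0, end=8)
```

In a full system the same pattern would extend to tempo: stretching or compressing note onsets over a region to model markings such as ritardando, with the model parameters learned from recorded human performances rather than fixed constants.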
Description: B.ENG.(HONS)
Appears in Collections:Dissertations - FacEng - 2017
Dissertations - FacEngSCE - 2017

Files in This Item:
File: Restricted Access
Size: 3.93 MB
Format: Adobe PDF

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.