Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/25239
Full metadata record
DC Field                      Value                                                       Language
dc.date.accessioned           2018-01-02T10:16:00Z                                        -
dc.date.available             2018-01-02T10:16:00Z                                        -
dc.date.issued                2017                                                        -
dc.identifier.uri             https://www.um.edu.mt/library/oar//handle/123456789/25239   -
dc.description                B.ENG.(HONS)                                                en_GB
dc.description.abstract       The composition and performance of music are powerful means of expression. Music is ultimately a means of communication between three parties: the composer, the performer and the listener. The composer relies on the performer to convey specific emotions and feelings to the listener via a music score. Technological advances in smartphones and tablets pave the way for software tools that can scan and play back such musical scores. However, there is a difference between the output of a computer system that generates note-perfect performances conforming to all the symbolic information of the notes and the performance of a human. A human performer typically makes a performance expressive by introducing changes in the tempo and loudness of the notes. A Computer System for Expressive Music Performance (CSEMP) tries to bridge this gap by simulating elements of human expressive performance. In this project, a CSEMP is designed to computationally enhance an expressionless Musical Instrument Digital Interface (MIDI) file rendition of piano sheet music with expressive quantities. Given a MIDI file recording of a piece of music whose score is available, the system identifies the expression indicators present in the score and then applies expression models, obtained from recorded performances, to the MIDI file. To do this, the CSEMP performs image analysis of a digital image of the music score, namely symbol recognition and localization, to identify the expression indicators present, and then determines the notes to which each expression applies. The MIDI file is then synchronized to the score, so that the entries of the MIDI file correspond to the notes on the score image. Finally, after the expression models are obtained, the appropriate model is applied at the appropriate place in the MIDI file. The output of the system is an expressive MIDI sound file. (A brief illustrative sketch of this final step appears after this metadata record.)   en_GB
dc.language.iso               en                                                          en_GB
dc.rights                     info:eu-repo/semantics/restrictedAccess                     en_GB
dc.subject                    Music -- Data processing                                    en_GB
dc.subject                    MIDI (Standard)                                             en_GB
dc.subject                    Multimedia systems                                          en_GB
dc.title                      Computational modelling of expressive music performance     en_GB
dc.type                       bachelorThesis                                              en_GB
dc.rights.holder              The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author is properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder.   en_GB
dc.publisher.institution      University of Malta                                         en_GB
dc.publisher.department       Faculty of Engineering. Department of Systems & Control Engineering   en_GB
dc.description.reviewed       N/A                                                         en_GB
dc.contributor.creator        Mifsud, Maria                                               -
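
The abstract above outlines a pipeline whose final stage rewrites a deadpan MIDI rendition with expressive tempo and loudness changes. The thesis itself is restricted-access, so the following is not the author's implementation: it is a minimal sketch of that final stage, assuming the third-party Python library mido, a hypothetical input file deadpan_rendition.mid, and a made-up crescendo span that would have been located by the score-analysis stage (not shown).

    # Illustrative sketch only (not the thesis implementation): apply a linear
    # crescendo, i.e. a loudness "expression model", to a flat MIDI rendition.
    # Assumes the third-party `mido` library; file names and the tick span of
    # the crescendo are hypothetical.
    import mido

    def apply_crescendo(mid, start_tick, end_tick, v_start=48, v_end=96):
        """Ramp note-on velocities linearly between two absolute tick positions."""
        for track in mid.tracks:
            abs_tick = 0  # messages carry delta times; accumulate to absolute ticks
            for i, msg in enumerate(track):
                abs_tick += msg.time
                if msg.type == 'note_on' and msg.velocity > 0 \
                        and start_tick <= abs_tick <= end_tick:
                    frac = (abs_tick - start_tick) / (end_tick - start_tick)
                    vel = round(v_start + frac * (v_end - v_start))
                    # copy() preserves timing while overriding the velocity
                    track[i] = msg.copy(velocity=max(1, min(127, vel)))
        return mid

    mid = mido.MidiFile('deadpan_rendition.mid')  # hypothetical deadpan input
    # Suppose the score-analysis stage located a crescendo hairpin covering one
    # 4/4 bar from tick 0 to tick 1920 (at 480 ticks per quarter note).
    apply_crescendo(mid, start_tick=0, end_tick=1920)
    mid.save('expressive_rendition.mid')

A tempo-expression model could be applied analogously, for example by inserting set_tempo meta messages at the corresponding tick positions rather than rewriting note velocities.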
Appears in Collections:Dissertations - FacEng - 2017
Dissertations - FacEngSCE - 2017

Files in This Item:
File                 Description          Size      Format
17BENGEE012.pdf      Restricted Access    3.93 MB   Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.