Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/68529
Title: Automated face reduction
Authors: Aquilina, Dejan
Keywords: Data protection
Human face recognition (Computer science)
Neural networks (Computer science)
Issue Date: 2020
Citation: Aquilina, D. (2020). Automated face reduction (Bachelor's dissertation).
Abstract: With the introduction of the GDPR, which supersedes the Data Protection Act, any individual has the right to control their personal data and to request its erasure. Removing frames from footage while keeping the remaining frames untouched is difficult to achieve, and surveillance footage should in any case be left intact since it may be used as forensic evidence. Reviewing the whole footage, finding every frame in which the subject is visible, and then editing those frames also requires considerable manual work and time. An alternative solution is to manually select the faces to be blurred throughout the footage: the recorded actions remain legible and the footage remains usable, while also complying with the regulations set by the GDPR.

Semi-automated video redaction tools exist commercially. For example, both the IKENA Forensic and Amped FIVE software packages allow the user to specify the region of interest to be obfuscated, and automated tracking techniques then follow the subject or object of interest throughout the footage. While these tools facilitate the process, the user still has to find the person of interest within the video manually, which can take a lot of time, and their licences cost thousands of euros.

In this dissertation, an automated face detector and recogniser is implemented to identify an individual within a crowd or group of people and blur that face throughout the footage wherever the individual is present. Once a match is found, the subject is back-tracked from the point of recognition to the start of the video using an optical flow algorithm to estimate the path taken, so that the face can be blurred in the earlier frames. The subject is then forward-tracked from the point of recognition to the end of the video with the same tracking algorithm, blurring the face in the remaining frames. The output is the same video clip with the subject's face blurred in every frame in which they appear, and the process requires no human intervention.

Extensive testing showed that a non-real-time implementation yields better results, mainly because low resolution hinders the performance of object detection. Processing the video offline and being able to adjust the working conditions helped achieve a recognition rate of 74% and an IoU of 0.783. When working in real time, the system depends entirely on the success of detection: if the subject is not detected from the first frame in which they appear, the face is not blurred at that instant and only becomes blurred in later frames. The non-real-time method, although slower, yields better results because it uses object tracking to forward-track and back-track the subject once identified.
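The abstract does not name the specific detector, recogniser, or optical flow implementation used, so the sketch below is only an illustration of the described pipeline under assumed choices: OpenCV's Haar-cascade face detector stands in for the dissertation's detector, sparse Lucas-Kanade optical flow stands in for its tracking step, and the helper names (blur_face, track_box, iou) are hypothetical.

# Minimal sketch of the detect-track-blur idea described above, assuming OpenCV.
import cv2
import numpy as np

# Assumption: a stock Haar cascade as a stand-in for the dissertation's detector.
face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_face(frame, box, ksize=51):
    """Blur one (x, y, w, h) region in place; ksize must be odd for GaussianBlur."""
    x, y, w, h = box
    H, W = frame.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    if x1 <= x0 or y1 <= y0:
        return frame  # box lies outside the frame; nothing to blur
    roi = frame[y0:y1, x0:x1]
    frame[y0:y1, x0:x1] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return frame

def track_box(prev_gray, next_gray, box):
    """Shift a bounding box between consecutive frames using sparse
    Lucas-Kanade optical flow on corners found inside the box."""
    x, y, w, h = box
    pts = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w],
                                  maxCorners=50, qualityLevel=0.01,
                                  minDistance=3)
    if pts is None:
        return box  # no trackable features; keep the previous box
    pts = pts + np.float32([[x, y]])  # shift ROI corners to full-frame coordinates
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return box
    # Median displacement of the successfully tracked points moves the box.
    dx, dy = np.median((nxt[good] - pts[good]).reshape(-1, 2), axis=0)
    return (int(x + dx), int(y + dy), w, h)

def iou(a, b):
    """Intersection over Union of two (x, y, w, h) boxes, the metric behind
    the reported 0.783 localisation score."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Example usage on two consecutive BGR frames `prev` and `curr` read with cv2.VideoCapture:
# gray_prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# faces = face_det.detectMultiScale(gray_prev, scaleFactor=1.1, minNeighbors=5)
# if len(faces) > 0:
#     box = tuple(faces[0])
#     prev = blur_face(prev, box)
#     box = track_box(gray_prev, cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY), box)
#     curr = blur_face(curr, box)

In the pipeline described in the abstract, track_box would be applied backwards from the recognition frame to the first frame and then forwards to the last, with blur_face applied to each estimated box; the recognition step that matches a detected face to the person of interest is not shown here.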
Description: B.SC.(HONS)COMPUTER ENG.
URI: https://www.um.edu.mt/library/oar/handle/123456789/68529
Appears in Collections:Dissertations - FacICT - 2020
Dissertations - FacICTCCE - 2020

Files in This Item:
File: 20BCE010.pdf (Restricted Access)
Size: 4.33 MB
Format: Adobe PDF

