Please use this identifier to cite or link to this item:
Title: “To trust a LIAR”: does machine learning really classify fine-grained, fake news statements?
Authors: Mifsud, Mark
Layfield, Colin
Azzopardi, Joel
Abela, John
Keywords: Fake news -- Prevention
Machine learning -- Techniques
Natural language processing (Computer science)
Artificial intelligence -- Technological innovations
Deep learning (Machine learning)
Issue Date: 2021
Publisher: CEUR
Citation: Mifsud, M., Layfield, C., Azzopardi, J., & Abela, J. (2021). “To trust a LIAR”: Does machine learning really classify fine-grained, fake news statements? In Proceedings of the 2nd Workshop on Online Misinformation- and Harm-Aware Recommender Systems (OHARS 2021), Amsterdam, Netherlands.
Abstract: Fake news refers to deceptive online content and is a problem that causes social harm. Early detection of fake news is therefore a critical but challenging task. In this paper we attempt to determine whether state-of-the-art models trained on the LIAR dataset can be leveraged to reliably classify short claims into six levels of veracity, ranging from “True” to “Pants on Fire” (absolute lies). We investigate the application of the transformer models BERT, RoBERTa and ALBERT, which have previously performed well on several natural language processing tasks, including text classification. A simple fully connected neural network (FcNN) was also used to enhance each model’s result by utilising the sources’ reputation scores. We achieved higher accuracy than previous studies that used more data or more complex models. Yet, after evaluating the models’ behaviour, numerous flaws appeared. These include bias and the fact that the models do not really capture veracity, which makes them prone to adversarial attacks. We also consider the possibility that language-based fake news classification on such short statements is an ill-posed problem.
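The abstract describes fusing a transformer's six-way veracity scores with a source reputation score via a small fully connected network (FcNN). As a rough illustration of that idea only, here is a minimal NumPy sketch; the layer sizes, weight initialisation, and the single scalar reputation feature are assumptions for demonstration, not the authors' actual architecture.

```python
import numpy as np

# LIAR's six veracity labels, ordered from least to most truthful.
LABELS = ["pants-fire", "false", "barely-true", "half-true", "mostly-true", "true"]

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class FcNN:
    """Toy fully connected network: 6 class probabilities + 1 reputation
    score in, refined 6-class distribution out. Hypothetical sizes."""

    def __init__(self, in_dim=7, hidden=16, out_dim=6, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, out_dim))
        self.b2 = np.zeros(out_dim)

    def forward(self, class_probs, reputation):
        # Concatenate the transformer's class probabilities with the
        # speaker's reputation score, then apply one ReLU hidden layer.
        x = np.concatenate([class_probs, [reputation]])
        h = np.maximum(0.0, x @ self.W1 + self.b1)
        return softmax(h @ self.W2 + self.b2)

# Example: a transformer output leaning toward "half-true", with a
# (hypothetical) reputation score of 0.8 for the claim's source.
probs = np.array([0.05, 0.10, 0.15, 0.40, 0.20, 0.10])
refined = FcNN().forward(probs, reputation=0.8)
```

In practice such a head would be trained jointly with (or on top of) the frozen transformer outputs; this untrained sketch only shows the input/output shape of the fusion step.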
Appears in Collections:Scholarly Works - FacICTAI

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.