Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/37292
Title: An exploration of whether artificial intelligent systems can act as autonomous moral agents
Authors: Sacco, Mariana
Keywords: Machine learning -- Moral and ethical aspects
Artificial intelligence -- Moral and ethical aspects
Robotics -- Moral and ethical aspects
Issue Date: 2018
Citation: Sacco, M. (2018). An exploration of whether artificial intelligent systems can act as autonomous moral agents (Master's dissertation).
Abstract: With the rapid developments in the field of artificial intelligence, numerous predictions have been put forward concerning artificially intelligent systems equipped with the skills to strive towards a post-human reality. The aim of this dissertation is to object to such interpretations by first examining the various contributions towards defining artificial ‘intelligence’, while expounding on whether such systems can function in a conscious and autonomous manner. The philosophical study then explores the feasibility of artificial morality, assessing different strategies, such as the top-down, bottom-up and hybrid methodologies, as effective modes for mechanisms that encounter and resolve moral dilemmas. Different ethical contenders are presented so as to highlight the most effective strategy for implementation, in the hope of discarding the view that moral machines can treat human beings as other than ends in themselves. Even though machine ethics is possible, a critique is put forward of whether a mechanism can, in actual fact, exhibit characteristics showing that it is capable of ethical intellect and ethical decision making. Decision-making skills cannot serve as proof of artificial rational processes, since ethical reasoning involves several kinds of cognitive processes. The study deduces that the skill of moral decision making involves recognising the true significance of moral dilemmas, namely, an acknowledgment of a conflict between one’s self-regard and what the moral standard requires. An allegory of an artificial system entangled in this kind of discord is presented, arguing that the system’s dilemma cannot be considered an actual moral disagreement, as the system can never act in its own self-interest.
On the basis of these findings, a positive axiom is disclosed, namely, that existing and imminent artificial intelligences will not consider human beings as threats to their own well-being.
Description: M.A.CONTEMPORARY WEST.PHIL.
URI: https://www.um.edu.mt/library/oar//handle/123456789/37292
Appears in Collections:Dissertations - FacArt - 2018
Dissertations - FacArtPhi - 2018

Files in This Item:
18MAPHI004.pdf (Restricted Access) — 1.3 MB, Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.