ARTIFICIAL INTELLIGENCE: ARTIFICIAL MORAL AGENTS

Rui Miguel Zeferino FERREIRA


 

ABSTRACT

Objective: This article analyzes the problems and challenges arising from the use of artificial intelligence, in particular those posed by artificial moral agents. It examines a set of ethical problems raised by their use, notably the responsibility of artificial moral agents and the question of whether such agents can hold rights.

Methodology: The deductive method is used, drawing on bibliographical research and scientific articles on the subject.

Results: It is concluded that artificial intelligence is a singular and still largely unexplored area of law that raises immense ethical questions, among them the question of artificial moral agents, namely their responsibility and the existence of their rights. It is likewise necessary to develop the philosophy and ethics of artificial intelligence, since there is a set of fundamental questions about what artificial intelligence should be permitted to do, as well as a need to guard against risks in a long-term scenario.

Contributions: The research is relevant in the current context of technological revolution, in which artificial intelligence is one of the most visible aspects, for understanding how the issues surrounding artificial moral agents should be treated, in particular by contributing to the definition of guidelines to be implemented in the field of artificial intelligence.

 

Keywords: artificial intelligence; artificial moral agents; responsibility; rights of artificial moral agents.







DOI: http://dx.doi.org/10.26668/revistajur.2316-753X.v5i67.5580





Revista Jurídica e-ISSN: 2316-753X

Rua Chile, 1678, Rebouças, Curitiba/PR (Brasil). CEP 80.220-181

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.