ALGOCRACY, ALGORITHMIC INSTITUTIONALISM, DIGITAL RATIONALITY AND RISKS TO DEMOCRACY

PAOLA CANTARINI

Abstract


This article aims to provide a critical perspective, through an interdisciplinary, inclusive, and decolonial analysis, on the theory of algorithmic institutionalism, addressing problematic issues related to artificial intelligence, especially the question of how AI could be made democratic. It also offers critical reflections on the concepts of algocracy, algorithmic institutionalism, digital rationality, and democracy, based on the book "Algorithmic Institutionalism: The Changing Rules of Social and Political Life" by Ricardo Mendonça, Virgílio Almeida, and Fernando Filgueiras. The article contributes to the debate on the interaction between the development of artificial intelligence and the democratic system.

Objectives: The article aims to offer a critical perspective, through an interdisciplinary, inclusive, and decolonial analysis, on problematic issues surrounding artificial intelligence, avoiding utopian or dystopian approaches and promoting critical theories. It also seeks to expand Brazil's participation in the scientific and academic discussion of AI within the humanities, particularly from the standpoint of the Global South, one of the essential pillars for addressing epistemic justice. Furthermore, it seeks to engage with the authors of the book "Algorithmic Institutionalism: The Changing Rules of Social and Political Life", highlighting both the strengths and the weaknesses of the proposed theory in order to contribute to the scientific debate.

Methodology: An analysis of the state of the art on the central themes of AI, especially issues regarding the lack of consent, authorization, and legitimacy in AI applications; the relationship between AI and democratic perspectives; and the main arguments of algorithmic institutionalism theory.

Results: The article presents important reflections aimed at rethinking possible paths toward the democratization of AI. It analyzes the proposal of multistakeholder governance and questions whether algorithms can truly be considered institutions. It also reflects on issues of consent, authorization, and legitimacy, as well as on power and the potential for resistance to such power, drawing on the work of Foucault and on Antoinette Rouvroy and Thomas Berns' concept of algorithmic governmentality.

Contributions: The article offers significant insights for rethinking potential pathways toward democratizing AI. It critically analyzes the proposal of multistakeholder governance and questions whether algorithms could be considered institutions, comparable to social institutions such as marriage, the Church, or even the law. Given the opacity and lack of transparency of algorithmic systems, the frequent absence of awareness that one is subject to an algorithm, and the lack of control and accountability, it would not be accurate to claim that individuals consciously learn and accept the values these systems impose unilaterally. Some preliminary conclusions suggest that it may be more appropriate to speak of a process of algorithmic institutionalization, rather than to assert that algorithms already constitute institutions or are fully institutionalized, depending, of course, on the definition of "institution." After all, if institutions are the structures that ensure social coexistence and therefore deserve to be protected, how can we regard as such something that still poses significant threats, especially given the numerous cases in which AI applications have harmed fundamental rights?


Keywords


Algocracy; algorithmic institutionalism; digital rationality; democracy.

Full text:

PDF

References


ALMEIDA, Virgílio; MENDONÇA, Ricardo Fabrino; FILGUEIRAS, Fernando. "Algorithmic Institutionalism: The Changing Rules of Social and Political Life", Oxford University Press, 2023.

BURRELL, Jenna. "How the machine 'thinks': Understanding opacity in machine learning algorithms", Big Data & Society, v. 3, n. 1, 2016, https://journals.sagepub.com/doi/full/10.1177/2053951715622512.

CELAN, Paul. "The Meridian: Final Version-Drafts-Materials", ed. Bernhard Böschenstein and Heino Schmull, transl. Pierre Joris, Stanford University Press, 2011.

ČERKA, Paulius; GRIGIENĖ, Jurgita; SIRBIKYTĖ, Gintarė. “Liability for damages caused by Artificial Intelligence”, Computer Law & Security Review, Elsevier, v. 31, n. 3, p. 376-389, 2015.

CHEHOUDI, Rafaa. "Artificial intelligence and democracy: pathway to progress or decline", Journal of Information Technology & Politics, 2025.

HAN, Byung-Chul. "A sociedade da transparência", Petrópolis: Vozes, 2017.

COECKELBERGH, Mark. "Why AI Undermines Democracy and What to Do About It", Polity Press, 2024.

DELEUZE, G. "Conversações", translated by Peter Pál Pelbart, São Paulo: Editora 34, 3rd edition, 2013.

______. "Post-Scriptum sobre as Sociedades de Controle", in Conversações: 1972-1990, translated by Peter Pál Pelbart, Rio de Janeiro: Ed. 34, 1992.

DWIVEDI, Yogesh, et al. "Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy", International Journal of Information Management, v. 57, p. 1–47, 2021. https://doi.org/10.1016/j.ijinfomgt.2019.08.002.

EUBANKS, Virginia. “Automating inequality: How high-tech tools profile, police, and punish the poor”, St. Martin's Press, 2018.

FLORIDI, L. "Open Data, Data Protection, and Group Privacy", Philos. Technol. 27, 1–3, 2014.

KROLL, Joshua. "Accountable Algorithms", 2015, https://www.jkroll.com/papers/dissertation.pdf.

LOTRINGER, Sylvère; VIRILIO, Paul. "The Accident of Art", New York: Semiotext(e), 2005.

MITTELSTADT, Brent; WACHTER, Sandra. "A right to reasonable inferences: re-thinking data protection law in the age of big data and AI", Columbia Business Law Review, v. 2019, n. 2.

MULHOLLAND, Caitlin (coords.). “Inteligência Artificial e Direito: ética, regulação e responsabilidade”. São Paulo: Thomson Reuters Brasil, 2019.

NISSENBAUM, Helen. "A Contextual Approach to Privacy Online", Daedalus, v. 140, n. 4, 2011.

ROCHE, C.; LEWIS, D.; WALL, P. J. "Artificial intelligence ethics: An inclusive global discourse?", Cornell University Library, 2021.

ROCHE, C.; WALL, P. J.; LEWIS, D. "Ethics and diversity in artificial intelligence policies, strategies and initiatives", AI and Ethics, 3, 1095–1115, 2023.

RODRIGUES, R. "Legal and human rights issues of AI: Gaps, challenges and vulnerabilities", Journal of Responsible Technology, v. 4, 2020.

ROSS, M. L. "The political economy of the resource curse", World Politics, 51(2), 297–322, 1999.

ROUVROY, Antoinette; BERNS, Thomas. "Governamentalidade algorítmica e perspectivas de emancipação: o díspar como condição de individuação pela relação?", Revista ECO-Pós, v. 18, n. 2, 2015.

SOUZA, Eduardo Nunes de. "Dilemas atuais do conceito jurídico de personalidade: uma crítica às propostas de subjetivação de animais e de mecanismos de inteligência artificial", Revista Civilistica, v. 9, n. 2, 2020, https://civilistica.emnuvens.com.br/redc/article/view/562/417.

ZUBOFF, Shoshana. “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”, PublicAffairs; 1st edition, 2019.




DOI: http://dx.doi.org/10.26668/revistajur.2316-753X.v4i80.7823





Revista Jurídica e-ISSN: 2316-753X

Rua Chile, 1678, Rebouças, Curitiba/PR (Brasil). CEP 80.220-181

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.