ARTIFICIAL INTELLIGENCE AND THE RECONFIGURATION OF KNOWLEDGE: ALGORITHMIC EROTIC COGNITION
ABSTRACT
This paper develops an integrated theoretical framework that addresses Artificial Intelligence (AI) as both an ontological force and an engine of socioeconomic transformation, articulating three interrelated axes: (i) the reconfiguration of epistemic desire and cognitive autonomy through what we term Algorithmic Erotic Cognition (AEC); (ii) the redistribution of value, power, and knowledge across global socioeconomic systems by AI and its infrastructures; and (iii) the human capacities and interventions, from critical algorithmic literacy to friction design, needed to sustain genuine human-machine complementarity. Drawing on Peircean semiotics, Stiegler's philosophy of technics, Simondon's theory of individuation, Bateson's ecology of mind, and Floridi's informational ontology, we argue that contemporary AI technologies displace the deferred pleasure of discovery toward instantaneous gratification, thereby reshaping cognitive autonomy and authorship while producing homogeneous contexts that nullify difference and whatever resists computation. Empirical evidence from neuroscientific studies (MIT, 2025) demonstrates the accumulation of cognitive debt after repeated interaction with Large Language Models (LLMs), characterized by reduced neural connectivity in networks associated with metacognition and imagination. In dialogue with Hui's technodiversity framework and decolonial perspectives (Crawford, Parsons, Couldry & Mejías), we propose that AI co-constitutes social temporality and reconfigures epistemic desire, demanding holistic, multisectoral, and technodiverse approaches that honor epistemological pluralism. Within educational contexts, we synthesize evidence on the performance-learning paradox, highlighting risks of thought homogenization alongside opportunities for interface design that fosters epistemic autonomy through what we term friction design. We conclude with a radical hermeneutic provocation: reading AI as an occasion for "co-creation" rather than human replacement, evoking AI as pharmakon (poison or remedy, depending on dosage), as Serres's "technical angel" (mediator), and, in Stiegler's ambivalent framing, as daimon (mediator between worlds, at once saving and destructive). The dilemma lies not in expelling digital machines but in reinscribing them within culture's symbolic circuit, converting algorithmic support into an abductive catalyst rather than a substitute, and reinscribing the human as symbolic participant rather than mere functional user.
Research: The research has three primary aims:
1. Theorize AI's reconfiguration of epistemic desire through the concept of Algorithmic Erotic Cognition (AEC), analyzing how generative AI systems reshape the pleasure of knowing, cognitive autonomy, and knowledge production.
2. Map how AI and its infrastructures reconfigure the production and redistribution of value, power, and knowledge across global socioeconomic systems.
3. Identify the specific human capacities requiring development in the AI era and propose interventions, including critical algorithmic literacy and friction design, to mitigate cognitive debt while fostering genuine human-machine complementarity.
Methodology: This study adopts a hybrid methodology that integrates philosophical inquiry, semiotic analysis, critical theory, and empirical grounding in contemporary neuroscience. The methodological design mobilizes four complementary axes:
1. Philosophical–hermeneutic analysis. Drawing on Peircean semiotics, Stiegler's philosophy of technics, Simondon's theory of individuation, Bateson's ecology of mind, and Hui's cosmotechnics, we employ a hermeneutic approach to interpret AI as both an ontological and an epistemic force. This axis enables the conceptualization of Algorithmic Erotic Cognition (AEC) as a dispositif that reconfigures epistemic desire and symbolic mediation.
2. Critical socio-technical diagnosis. The analysis incorporates materials from platform governance, algorithmic governmentality, surveillance capitalism, and technopolitical studies (Zuboff; Rouvroy & Berns). We examine AI systems, particularly generative LLMs, as infrastructures of behavioral modulation, focusing on erotic affordances, age-verification vulnerabilities, and the commercialization of intimacy. This includes a normative assessment of regulatory gaps in the AI Act, FTC investigations, and neurotechnology ethics frameworks (UNESCO, 2024).
3. Empirical anchoring in neuroscientific and cognitive evidence. The paper integrates findings from recent neuroscientific research, especially the MIT (2025) report on cognitive debt, studies on recursive model collapse (Shumailov et al., Nature, 2024), and work on the cognitive and affective impacts of AI companions. These data provide empirical support for the thesis that AI-mediated cognition can reshape metacognition, imagination, and abductive agency.
4. Critical-prototypical design inquiry ("friction design"). Finally, the study adopts a speculative–experimental design lens to envision prototypes such as the Friction Engine, the Epistemic Mirror, and the Quantum Prompt Lab. These prototypes serve as methodological tools for testing pedagogical and cognitive interventions that reintroduce uncertainty, delay, and abductive reasoning into human–AI interaction (a schematic sketch of such a wrapper follows below).
Across these four axes, the methodology does not aim to eliminate uncertainty but to preserve it as a productive epistemic condition, making friction, hesitation, and ambiguity methodological principles rather than obstacles.
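To make the friction-design axis more concrete, the following minimal Python sketch shows how a Friction Engine-style wrapper might withhold a model's answer until the user has risked a hypothesis. It is an illustrative assumption, not an implementation of the paper's prototypes: `friction_session`, `ask_model`, and `delay_seconds` are hypothetical names.

```python
# Illustrative sketch only, under stated assumptions: `ask_model` is any
# user-supplied callable mapping a prompt to a model answer; `friction_session`
# and `delay_seconds` are hypothetical names, not prototypes from the paper.

import time
from typing import Callable

def friction_session(prompt: str,
                     ask_model: Callable[[str], str],
                     delay_seconds: float = 30.0) -> str:
    """Withhold the model's answer until the user has risked a hypothesis."""
    print(f"Question: {prompt}")
    hypothesis = input("Before the model answers, write your own conjecture: ")

    # Deliberate latency: the interval in which epistemic desire is meant to dwell.
    time.sleep(delay_seconds)

    answer = ask_model(prompt)
    print("\nYour conjecture:\n" + hypothesis)
    print("\nModel answer:\n" + answer)

    # "Epistemic mirror" move: confront the gap between conjecture and output.
    return input("Where do the two diverge, and what would test the difference? ")
```

The design choice is deliberate: the wrapper does not improve the answer, it restores the interval of hesitation in which abductive guessing can occur.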
Results: The research yields four primary findings:
1. Identification of Algorithmic Erotic Cognition (AEC) as a new epistemic regime. AEC is shown to displace the deferred pleasure of discovery toward instantaneous algorithmic gratification, compressing the interval in which epistemic desire traditionally resides. This transformation reconfigures cognition from a process of abductive exploration into a form of predictive consumption.
2. Empirical corroboration of cognitive debt and epistemic homogenization. Neuroscientific evidence supports the hypothesis that recurrent delegation of reasoning tasks to LLMs leads to detectable reductions in neural connectivity associated with metacognition, originality, and imagination. Studies of model collapse further confirm that recursive synthetic-data cycles create informational environments marked by semantic homogenization and declining epistemic diversity (a toy simulation of this dynamic is sketched below).
3. Demonstration of affective, somatic, and semiotic transformations. The results suggest that cognitive offloading to AI is not merely informational but affective-corporeal: interfaces that provide immediate certainties weaken the sensory–motor–affective circuits involved in curiosity and discovery. This corroborates Bateson's ecological model and Damásio's theory of somatic markers. AI thus reshapes subjectivity not only through logic but through the body of cognition.
4. Identification of governance failures and socio-political risks. Analysis of OpenAI's erotic-content policy, age-verification vulnerabilities, and the monetization of intimacy reveals systemic governance failures with implications for mental health, child safety, discrimination, and democratic participation. Current regulatory frameworks insufficiently address the socio-affective dimensions of AI eroticization and its integration within surveillance capitalism.
Overall, the results confirm that AI systems modulate desire, reduce abductive agency, and risk reorganizing subjectivity around predictive, optimized, and homogenized patterns of cognition.
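The model-collapse finding can be illustrated with a toy simulation that is emphatically not the experiment reported in Shumailov et al. (2024): assuming a "model" is reduced to a Gaussian fitted to the previous generation's synthetic samples, the hypothetical `collapse_curve` function below tracks how the estimated spread, a crude proxy for epistemic diversity, drains away across generations.

```python
# Toy illustration only, not the experiment in Shumailov et al. (2024):
# each "generation" fits a Gaussian to samples drawn from the previous
# generation's fit. Finite-sample estimation keeps losing the tails, so the
# estimated spread (a crude proxy for epistemic diversity) tends to shrink.

import random
import statistics

def collapse_curve(generations=30, n_samples=50, n_runs=200, seed=0):
    """Mean estimated std per generation, averaged over independent runs."""
    rng = random.Random(seed)
    totals = [0.0] * generations
    for _ in range(n_runs):
        mu, sigma = 0.0, 1.0                      # the "real" data distribution
        for g in range(generations):
            samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
            mu = statistics.fmean(samples)        # refit on the synthetic samples
            sigma = statistics.pstdev(samples)    # spread of this generation's data
            totals[g] += sigma
    return [t / n_runs for t in totals]

if __name__ == "__main__":
    for g, s in enumerate(collapse_curve(), start=1):
        print(f"generation {g:2d}: mean estimated std = {s:.3f}")
```

Averaged over many runs the curve declines steadily, echoing the semantic homogenization described above; with fewer samples per generation the collapse accelerates.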
Contributions: This study offers theoretical, methodological, empirical, and normative contributions that advance contemporary debates on AI, cognition, and technopolitics.
Theoretical Contributions: Introduces and formalizes the concept of Algorithmic Erotic Cognition (AEC) as a framework for understanding how AI reconfigures epistemic desire, authorship, and the erotic structure of knowing. Bridges philosophy of technics (Stiegler, Simondon, Hui), semiotics (Peirce, Ibri), and cognitive ecology (Bateson, Damásio), articulating AI as an ontological force that co-constitutes temporality, memory, and social individuation. Develops a philosophical critique of algorithmic governmentality, linking erotic affordances to broader dynamics of hyper-optimization, symbolic misery, and cognitive proletarianization.
Methodological Contributions: Proposes a friction-based design paradigm, offering conceptual and prototypical tools to reintroduce delay, uncertainty, ambiguity, and abductive richness into human–AI interaction. Establishes a hybrid research methodology that merges hermeneutic analysis, semiotic diagnosis, neuroscience, and speculative design.
Empirical Contributions: Synthesizes neuroscientific evidence of cognitive debt and demonstrates its resonance with semiotic, phenomenological, and affective theories. Provides a structured account of the socio-technical risks associated with AI eroticization, including cognitive impacts, vulnerability of minors, behavioral modulation, and erosion of epistemic diversity.
Normative and Policy Contributions: Reframes educational and ethical AI literacy around abduction, imagination, and epistemic autonomy, rather than mere technical competency.
Civilizational Contribution: By articulating knowledge as an erotic, relational, and ecological process, the study contributes to a broader cultural reorientation—one that treats AI not as replacement but as pharmakon: a technical mediator capable of either eroding or deepening human flourishing depending on design, governance, and symbolic inscription.
References
Almeida, V., Mendonça, R. F., & Filgueiras, F. (2023). Algorithmic institutionalism: The changing rules of social and political life in the age of AI. Oxford: Oxford University Press.
Arendt, H. (2018). The human condition (14th ed.). Rio de Janeiro: Forense Universitária.
Bateson, G. (1972). Steps to an ecology of mind: Collected essays in anthropology, psychiatry, evolution, and epistemology. San Francisco: Chandler Publishing Company. (Revised edition: Chicago: University of Chicago Press, 2000).
Bateson, G. (1979). Mind and nature: A necessary unity. New York: E.P. Dutton.
Bateson, G. (1987). Angels fear: Towards an epistemology of the sacred (with M. C. Bateson). New York: Macmillan.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), 610–623.
Bratsberg, B., & Rogeberg, O. (2018). Flynn effect and its reversal are both environmentally caused. Proceedings of the National Academy of Sciences, 115(26), 6674–6678. https://doi.org/10.1073/pnas.1718793115
Cantarini, P. (2017). Teoria erótica do direito. Rio de Janeiro: Lumen Juris.
Cantarini, P. (2020). Teoria fundamental do direito digital: Uma análise filosófico-constitucional. Clube de Autores.
Cantarini, P. (2021). Theatrum philosophicum: O teatro filosófico de Foucault e o Direito [Doctoral dissertation]. PUC-SP.
Carr, N. (2010). The shallows: What the internet is doing to our brains. New York: W. W. Norton.
Center for Democracy & Technology. (2025). Survey on teen-AI romantic relationships. Washington, DC: CDT.
Choudhury, S., & Slaby, J. (Eds.). (2012). Critical neuroscience: A handbook of the social and cultural contexts of neuroscience. Chichester: Wiley-Blackwell.
Clark, A. (2016). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford: Oxford University Press.
Cohen, J. E. (2019). Between truth and power. Supreme Court Review, 2018(1), 23–95.
Damásio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Putnam.
Damásio, A. R. (1999). The feeling of what happens: Body and emotion in the making of consciousness. New York: Harcourt.
Damásio, A. R. (2003). Looking for Spinoza: Joy, sorrow, and the feeling brain. New York: Harcourt.
Damásio, A. R. (2010). Self comes to mind: Constructing the conscious brain. New York: Pantheon.
Damásio, A. R. (2018). The strange order of things: Life, feeling, and the making of cultures. New York: Pantheon.
Damásio, A. R. (2021). Feeling & knowing: Making minds conscious. New York: Pantheon.
Deleuze, G. (1988). Foucault. São Paulo: Brasiliense.
Deleuze, G. (1996). Difference and repetition (P. Patton, Trans.). New York: Columbia University Press. (Original work published 1968)
Deleuze, G., & Guattari, F. (1983). Anti-Oedipus: Capitalism and schizophrenia (R. Hurley, M. Seem, & H. R. Lane, Trans.). Minneapolis: University of Minnesota Press. (Original work published 1972)
Deleuze, G., & Guattari, F. (1987). A thousand plateaus: Capitalism and schizophrenia (B. Massumi, Trans.). Minneapolis: University of Minnesota Press. (Original work published 1980)
Derrida, J. (1981). Dissemination (B. Johnson, Trans.). Chicago: University of Chicago Press.
Desmurget, M. (2019). La fabrique du crétin digital. Paris: Seuil.
De Waal, C. (2001). On Peirce. Belmont, CA: Wadsworth/Thomson Learning.
De Waal, C. (2005). On pragmatism. Belmont, CA: Wadsworth/Thomson Learning.
De Waal, C. (2013). Peirce: A guide for the perplexed. London: Bloomsbury Academic.
Eco, U., & Sebeok, T. A. (Eds.). (1983). The sign of three: Dupin, Holmes, Peirce. Bloomington: Indiana University Press.
European Union. (2024). Artificial Intelligence Act (AI Act). Official Journal of the European Union.
Ferrara, L. D. (1981). A estratégia dos signos. São Paulo: Perspectiva.
Ferrara, L. D. (1986). Leitura sem palavras. São Paulo: Ática.
Ferrara, L. D. (1993). Olhar periférico. São Paulo: EDUSP/FAPESP.
Ferrara, L. D. (2015). Comunicação, mediações, interações. São Paulo: Paulus.
Ferraz Jr., T. S. (2016). Teoria da norma jurídica (5th ed.). São Paulo: Atlas.
Floridi, L. (2015). The fourth revolution: How the infosphere is reshaping human reality. Oxford: Oxford University Press.
Floridi, L. (2019). The logic of information: A theory of philosophy as conceptual design. Oxford: Oxford University Press.
Flynn, J. R. (2012). Are we getting smarter? Rising IQ in the twenty-first century. Cambridge: Cambridge University Press.
Foucault, M. (1970). The order of things: An archaeology of the human sciences. New York: Pantheon. (Original work published 1966)
Foucault, M. (1977). Theatrum philosophicum. In D. F. Bouchard (Ed.), Language, counter-memory, practice: Selected essays and interviews (pp. 165–196). Ithaca, NY: Cornell University Press.
Foucault, M. (2005). The hermeneutics of the subject: Lectures at the Collège de France, 1981–1982 (G. Burchell, Trans.). New York: Palgrave Macmillan.
Foucault, M. (2008). The birth of biopolitics: Lectures at the Collège de France, 1978–1979 (G. Burchell, Trans.). New York: Palgrave Macmillan.
Foucault, M. (2010). The government of self and others: Lectures at the Collège de France, 1982–1983 (G. Burchell, Trans.). New York: Palgrave Macmillan.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
Guerra Filho, W. S., & Cantarini, P. (2015). Teoria poética do direito. Rio de Janeiro: Lumen Juris.
Guerra Filho, W. S., & Cantarini, P. (2020). Teoria inclusiva dos direitos fundamentais e direito digital. Clube de Autores.
Han, B.-C. (2015). The burnout society (E. Butler, Trans.). Stanford, CA: Stanford University Press. (Original work published 2010)
Han, B.-C. (2017). Psychopolitics: Neoliberalism and new technologies of power (E. Butler, Trans.). London: Verso. (Original work published 2014)
Han, B.-C. (2022). Infocracy: Digitization and the crisis of democracy (D. Steuer, Trans.). Cambridge: Polity. (Original work published 2021)
Haraway, D. J. (2016). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Manifestly Haraway (pp. 3–90). Minneapolis: University of Minnesota Press. (Original work published 1985)
Hayles, N. K. (2017). Unthought: The power of the cognitive nonconscious. Chicago: University of Chicago Press.
Heidegger, M. (1962). Being and time (J. Macquarrie & E. Robinson, Trans.). New York: Harper & Row. (Original work published 1927)
Heidegger, M. (1962). Kant and the problem of metaphysics (J. S. Churchill, Trans.). Bloomington: Indiana University Press. (Original work published 1929)
Heidegger, M. (1977). The question concerning technology. In The question concerning technology and other essays (W. Lovitt, Trans., pp. 3–35). New York: Harper & Row. (Original work published 1954)
Heidegger, M. (1977). The age of the world picture. In The question concerning technology and other essays (W. Lovitt, Trans., pp. 115–154). New York: Harper & Row. (Original work published 1938)
Hui, Y. (2016). On the existence of digital objects. Minneapolis: University of Minnesota Press.
Hui, Y. (2019). Recursivity and contingency. London: Rowman & Littlefield International.
Hui, Y. (2020). Art and cosmotechnics. Minneapolis: University of Minnesota Press.
IBGE – Instituto Brasileiro de Geografia e Estatística. (2023). PNAD Contínua 2023: Uso de internet, televisão e celular no Brasil. Rio de Janeiro: IBGE.
Ibri, I. A. (1992). Kósmos noētós: A arquitetura metafísica de Charles S. Peirce. São Paulo: Perspectiva/Paulus.
Ibri, I. A. (1997). The problem of determinism in Peirce's synechism. Transactions of the Charles S. Peirce Society, 33(2), 378–395.
Ibri, I. A. (2000). Ser e aparecer na filosofia de Peirce. Cognitio, 1(1), 67–75.
Ibri, I. A. (2014). Reflections on a poetic ground of mind: Peirce, Hölderlin, Emerson. Cognitio, 15(2), 321–332.
Ibri, I. A. (Ed.). (2015). Semiótica e filosofia em Charles S. Peirce. São Paulo: Ideias & Letras.
Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), Article 5. https://doi.org/10.1186/s40504-017-0050-1
Johnson, K. (2019, August 10). AI ethics is all about power. VentureBeat. Retrieved from https://venturebeat.com/
Lazzarato, M. (2014). Signs and machines: Capitalism and the production of subjectivity (J. D. Jordan, Trans.). Los Angeles: Semiotext(e). (Original work published 2014)
Lee, K.-F. (2021). AI 2041: Ten visions for our future (with C. Qiufan). New York: Currency.
Merleau-Ponty, M. (2012). Phenomenology of perception (D. A. Landes, Trans.). London: Routledge. (Original work published 1945)
MIT. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. Cambridge, MA: MIT.
Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. New York: Farrar, Straus and Giroux.
Mitchell, M. (2023). Artificial intelligence: A guide for thinking humans (Updated ed.). New York: Picador.
Paavola, S. (2004). Abduction as a logic and methodology of discovery: The importance of strategies. Foundations of Science, 9(3), 267–283.
Peirce, C. S. (1877). The fixation of belief. Popular Science Monthly, 12, 1–15.
Peirce, C. S. (1878). How to make our ideas clear. Popular Science Monthly, 12, 286–302.
Peirce, C. S. (1903). Pragmatism as the logic of abduction. In Harvard lectures on pragmatism. (Reprinted in The Essential Peirce, Vol. 2, pp. 226–241)
Peirce, C. S. (1931–1958). Collected papers of Charles Sanders Peirce (Vols. 1–8, C. Hartshorne, P. Weiss, & A. W. Burks, Eds.). Cambridge, MA: Harvard University Press.
Peirce, C. S. (1992). The essential Peirce: Selected philosophical writings (Vol. 1, N. Houser & C. Kloesel, Eds.). Bloomington: Indiana University Press.
Peirce, C. S. (1998). The essential Peirce: Selected philosophical writings (Vol. 2, Peirce Edition Project, Eds.). Bloomington: Indiana University Press.
Rouvroy, A., & Berns, T. (2013). Gouvernementalité algorithmique et perspectives d'émancipation: Le disparate comme condition d'individuation par la relation? Réseaux, 177(1), 163–196. https://doi.org/10.3917/res.177.0163
Santaella, L. (1983). O que é semiótica. São Paulo: Brasiliense.
Santaella, L. (2001). Matrizes da linguagem e pensamento: Sonora, visual, verbal. São Paulo: Iluminuras/FAPESP.
Santaella, L. (2003). Culturas e artes do pós-humano: Da cultura das mídias à cibercultura. São Paulo: Paulus.
Santaella, L. (2007). Linguagens líquidas na era da mobilidade. São Paulo: Paulus.
Santaella, L. (2013). Comunicação ubíqua: Repercussões na cultura e na educação. São Paulo: Paulus.
Santaella, L., & Nöth, W. (1998). Imagem: Cognição, semiótica, mídia. São Paulo: Iluminuras.
Serres, M. (1982). Hermes: Literature, science, philosophy (J. V. Harari & D. F. Bell, Eds.). Baltimore: Johns Hopkins University Press.
Serres, M. (2007). The parasite (L. R. Schehr, Trans.). Minneapolis: University of Minnesota Press. (Original work published 1980)
Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755–759. https://doi.org/10.1038/s41586-024-07566-y
Simondon, G. (2017). On the mode of existence of technical objects (C. Malaspina & J. Rogove, Trans.). Minneapolis: Univocal Publishing. (Original work published 1958)
Simondon, G. (2020). Individuation in light of notions of form and information (T. Adkins, Trans.). Minneapolis: University of Minnesota Press. (Original work published 1964–1989)
Stiegler, B. (1998). Technics and time, 1: The fault of Epimetheus (R. Beardsworth & G. Collins, Trans.). Stanford, CA: Stanford University Press. (Original work published 1994)
Stiegler, B. (2009). Technics and time, 2: Disorientation (S. Barker, Trans.). Stanford, CA: Stanford University Press. (Original work published 1996)
Stiegler, B. (2011). Technics and time, 3: Cinematic time and the question of malaise (S. Barker, Trans.). Stanford, CA: Stanford University Press. (Original work published 2001)
Stiegler, B. (2016). Automatic society, Volume 1: The future of work (D. Ross, Trans.). Cambridge: Polity Press. (Original work published 2015)
Stiegler, B. (2019). The age of disruption: Technology and madness in computational capitalism (D. Ross, Trans.). Cambridge: Polity Press. (Original work published 2016)
Teubner, G. (1993). Law as an autopoietic system (A. Bankowska & R. Adler, Trans.). Oxford: Blackwell. (Original work published 1989)
UNESCO. (2021). Recommendation on the ethics of artificial intelligence. Paris: UNESCO.
UNESCO. (2024). Draft recommendation on the ethics of neurotechnology. Paris: UNESCO. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000394866
Unger, R. M., et al. (2019). Imagination unleashed: Democratising the knowledge economy. Report.
U.S. Federal Trade Commission. (2025). Investigation into AI age verification systems. Washington, DC: FTC.
Virilio, P. (2012). The administration of fear (A. Hodges, Trans.). Los Angeles: Semiotext(e). (Original work published 2010)
Wajcman, J. (2015). Pressed for time: The acceleration of life in digital capitalism. Chicago: University of Chicago Press.
Wiener, N. (1961). Cybernetics: Or control and communication in the animal and the machine (2nd ed.). Cambridge, MA: MIT Press. (Original work published 1948)
Wisnik, J. M. (2017). O som e o sentido: Uma outra história das músicas. São Paulo: Companhia das Letras.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: PublicAffairs.
Online Sources and Reports
Aventuras na História. (2025). OpenAI é processada por pais de adolescente que tirou a própria vida. Retrieved from https://aventurasnahistoria.com.br/noticias/historia-hoje/openai-e-processada-por-pais-de-adolescente-que-tirou-a-propria-vida.phtml
Brazilian Journal of Health Review. (2025). AI and mental health risks among adolescents. Retrieved from https://ojs.brazilianjournals.com.br/ojs/index.php/BJHR/article/view/38296
Desbugados. (2025). Vulnerabilidade no ChatGPT gera conteúdo erótico para menores e expõe falhas na moderação de IAs. Retrieved from https://desbugados.com.br/noticias/2025/05/06/vulnerabilidade-no-chatgpt-gera-conteudo-erotico-para-menores-e-expoe-falhas-na-moderacao-de-ias
eWeek. (2025). Teens, chatbots, and romance: CDT survey 2025. Retrieved from https://www.eweek.com/news/teens-chatbots-romance-cdt-survey-2025/
Exame. (2025). Morte de adolescente expõe riscos do ChatGPT para pessoas em depressão. Retrieved from https://exame.com/inteligencia-artificial/morte-de-adolescente-expoe-riscos-do-chatgpt-para-pessoas-em-depressao/
Medium (Jorge Acosta Jr.). (2017). Manifesto da Cátedra Livre e Multiversitária de Filosofia, Arte, Direito - Ano Clarice Lispector. Retrieved from https://jorgeacostajr.medium.com/manifesto-da-cátedra-livre-e-multiversitária-de-filosofia-arte-direito-ano-clarice-lispector-3b711f2b65b5
Rolling Stone Brasil. (2025). Processo por morte injusta contra a OpenAI agora alega que empresa removeu proteções de suicídio do ChatGPT. Retrieved from https://rollingstone.com.br/noticia/processo-por-morte-injusta-contra-a-openai-agora-alega-que-empresa-removeu-protecoes-de-suicidio-do-chatgpt/
TechCrunch. (2025). ChatGPT vulnerability generates erotic content for minors, exposing AI moderation failures. Retrieved from https://techcrunch.com/
ArXiv Preprints:
Shumailov, I., et al. (2023). The curse of recursion: Training on generated data makes models forget. arXiv preprint. Retrieved from https://arxiv.org/pdf/2305.17493
Alemohammad, S., et al. (2023). Self-consuming generative models go MAD. arXiv preprint. Retrieved from https://arxiv.org/pdf/2307.01850
Villalobos, P., et al. (2022). Will we run out of data? Limits of LLM scaling based on human-generated data. arXiv preprint. Retrieved from https://arxiv.org/pdf/2211.04325
Jang, J., et al. (2024). Model autophagy disorder (MAD). Retrieved from https://jtj97.github.io/posts/arxiv/MAD/
Additional Academic Sources:
Estado de Direito. (2018). Postulação de uma erótica do saber jurídico em Luis Alberto Warat. Retrieved from https://estadodedireito.com.br/postulacao-de-uma-erotica-do-saber-juridico-em-luis-alberto-warat/
CONPEDI. (2023). Teoria erótica do direito e do humano. Retrieved from https://site.conpedi.org.br/publicacoes/pxt3v6m5/e9212q04/SODTyZArUy07XV08.pdf
DOI: http://dx.doi.org/10.26668/revistajur.2316-753X.v4i84.8170
