
Volume 19 / Issue 2
September 2023

ISSN 1553-5053


Summary:

This essay explores various perspectives and concerns regarding the impact of artificial intelligence (AI) on the doctor-patient relationship and on education. It combines narrative reflection with critical analysis, using Ray Bradbury's novel "Fahrenheit 451" as a resource. The author argues that AI, represented by developments such as Large Language Models (LLMs) like ChatGPT, has a significant impact on medicine and education. Drawing on the novel, fundamental questions are posed about the attributes that constitute the doctor-patient experience, professional practice, and the human experience as a whole. Some differences between human reasoning and algorithmic systems are analyzed, emphasizing the importance of preserving human attributes, such as emotion and critical reflection, in interactions with artificial intelligence. The article asserts the importance of promoting educational practices grounded in deliberation on values, critical thinking, and sentimental pedagogy as alternatives to an automatic relationship with technology, a relationship that expresses a loss of meaning and significance: automatic nihilism.

Key words: Bioethics | Artificial Intelligence | Physician-Patient Relations | Narration | Education, Professional

Impersonal Operators?

Challenges in Education and the Doctor-Patient Experience in Relation to Artificial Intelligence
Boris Julián Pinto Bustamante

Departamento de Bioética, Universidad El Bosque, and Escuela de Medicina y Ciencias de la Salud, Universidad del Rosario, Colombia

ChatGPT is an Artificial Intelligence (AI) language model, one of the Large Language Models (LLMs), trained on large datasets of text in various languages and able to generate responses to text inputs. The GPT (Generative Pre-trained Transformer) architecture uses neural networks to process natural language, generating responses conditioned on the input text. ChatGPT's superiority over its predecessors lies in its ability to respond in multiple languages and to generate refined, highly sophisticated responses based on advanced algorithmic models.
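For readers unfamiliar with how such a model is used in practice, the following is a minimal sketch, in Python, of a program querying a GPT-style model through the openai client library; the model name and the prompt are illustrative assumptions, not a recommendation.

# Minimal sketch: querying a GPT-style LLM via the openai Python client.
# The model name and the prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize Fahrenheit 451 in two sentences."},
    ],
)

# The model generates a reply token by token, conditioned on the input text.
print(response.choices[0].message.content)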

The scientific and academic community has expressed mixed responses to ChatGPT, reflecting the historical controversy over the benefits and risks of disruptive technologies. On one hand, ChatGPT, along with other LLMs (such as Bard), can assist conversational and writing tasks, increasing writing productivity (Lenharo, 2023) and enabling various educational dynamics (Sabzalieva & Valentini, 2023). On the other hand, its use could change how written work is produced, given the high risk of plagiarism and of delegating writing tasks to AI, forcing changes in educational processes, including the need to redefine concepts such as plagiarism (Barnett, 2023) and authorship (Stokel-Walker, 2023), as well as aspects of intellectual property (Thorp, 2023).

The cooperation between AI and human intelligence promises various applications in healthcare, such as precision medicine, drug discovery, analysis of large datasets, optimization of diagnostic processes, and clinical decision-making (Beam et al., 2023). Applications of AI in medical education have also been explored, helping students understand complex concepts (Cooper & Rodman, 2023). However, all these applications must be approached with caution, given the concerns and risks associated with LLMs: the generation of inaccurate content, the risk of bias and discrimination, lack of transparency and reliability, cybersecurity issues, ethical consequences, and social implications (Ferryman, Mackintosh & Ghassemi, 2023).

Beyond these concerns and expectations, I will address some specific issues through Fahrenheit 451, the dystopian novel written by Ray Bradbury in 1953 and adapted for film by François Truffaut in 1966.

Impersonal Operators

The plot of the novel is set in a bleak future where books are considered dangerous and their possession is forbidden. Firemen, instead of extinguishing fires, are tasked with burning any books they find, eliminating every trace of knowledge and critical thinking.

The protagonist of the story is Guy Montag, an exemplary fireman who, in his monotonous routine, begins to question the reason behind the book ban. As his curiosity grows, he meets Clarisse, a young woman unusually fascinated by the past and by literature. His conversations with Clarisse awaken in Montag a desire to understand the value of books and the importance of freedom of thought.

As the plot unfolds, Montag becomes involved in an underground world of people who refuse to give up books and memorize their contents to preserve them. This transformative experience leads Montag to confront the tyranny and censorship of the government, risking his life to protect knowledge and free society from totalitarian control.

In summary, Fahrenheit 451 is a captivating reflection on the importance of literature, the pursuit of truth, and the struggle for intellectual freedom in an oppressive world where information is restricted and manipulated.

In one scene of the story, Mildred, Guy Montag’s wife, loses consciousness after an overdose of sleeping pills. Throughout the novel, Mildred is portrayed as a shallow, alienated woman completely immersed in the "parlor walls" (giant televisions in homes that broadcast interactive shows) and in her superficial, disconnected life. She represents a dehumanized society obsessed with trivial entertainment, with no interest in literature or critical thinking. As a tribute to the patience that reading and fiction demand, a patience threatened in the age of automatic thinking, I propose to review the narrative sequence described in the novel:

They had this machine. They had two machines, really. One of them slid down into your stomach like a black cobra down an echoing well looking for all the old water and the old time gathered there. It drank up the green matter that flowed to the top in a slow boil. Did it drink of the darkness? Did it suck out all the poisons accumulated with the years? It fed in silence with an occasional sound of inner suffocation and blind searching. It had an Eye. The impersonal operator of the machine could, by wearing a special optical helmet, gaze into the soul of the person whom he was pumping out. What did the Eye see? He did not say. He saw but did not see what the Eye saw. The entire operation was not unlike the digging of a trench in one’s yard. The woman on the bed was no more than a hard stratum of marble they had reached. Go on, anyway, shove the bore down, slush up the emptiness, if such a thing could be brought out in the throb of the suction snake. The operator stood smoking a cigarette. The other machine was working too. The other machine was red. It was like a red honeycomb that stood in the sun. It was a house of fabulous bees that murmured their gratitude to the stranger who filled the hive with honey. It was a comforting thing to see them, all small, shiny-walled, warm and soft, with their silver blurs and the tiny, dead white queen all surrounded by her fauns. The operator looked down at Montag. "Now," he said, "we’re going to have you hold your hand under this chin. That’s it, now the breath, sir, if you’ll just take a deep one. That’s right. Inhale. Lots of it. No hurry. Breathe deeply. How do you feel?" "I feel fine." "He’s got a healthy color," he was speaking directly to Montag. "How do you feel?" "Fine. I feel-" He finished it for her. "I feel like I’ve just been born." (Bradbury, 2006)

From this passage, I propose to reflect on two specific issues: the impact on the doctor-patient experience and the role of education in this context.

In this segment of the novel, the figure of the two impersonal operators attending to Mildred after her suicide attempt is striking. When Montag complains, "None of you is a doctor!", the two operators, while finishing their cigarettes, pack up their things and charge $50 for the service, responding, "A doctor is not needed."

What makes us doctors or healthcare professionals? In a broader sense, what defines us as human beings? At what point can a healthcare professional become an automatic, operative extension that works on bodies as if they were hard strata of stone to be drilled and intervened upon? At what point do we become unfeeling operators? What implications does this lack of empathy have for a relational practice like medicine and healthcare?

Fluent language without comprehension

In this regard, let’s explore some arguments. A recent article in the journal Nature (Biever, 2023) analyzes the capabilities of the most advanced AI systems to date, such as GPT-4, and notes that while these systems can pass challenging exams, write convincingly human-like essays, and maintain smooth conversations, they still struggle to solve simple visual logic puzzles.

The study focuses on a series of tests involving patterns of colored blocks on a screen. It points out that while most people can identify the connecting patterns, GPT-4 correctly solves only about 30% of the puzzles in one category of patterns, and a mere 3% in another.
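To make the contrast concrete, here is a toy sketch in Python of how such a colored-block puzzle might be represented and scored; the grids and the hidden rule are invented for illustration and do not reproduce the actual benchmark.

# Toy illustration (not the actual benchmark): a puzzle is a small grid of
# integers (colors), and solving it means inferring the transformation that
# maps each demonstration input to its output.
from typing import List

Grid = List[List[int]]  # each integer stands for a color

def mirror_horizontally(grid: Grid) -> Grid:
    """The hidden rule of this invented puzzle: flip the grid left to right."""
    return [list(reversed(row)) for row in grid]

# Demonstration pair the solver gets to see...
train_input: Grid = [[1, 0, 0],
                     [2, 2, 0]]
train_output = mirror_horizontally(train_input)

# ...and a test input whose output the solver must predict.
test_input: Grid = [[0, 3],
                    [3, 0]]
candidate = mirror_horizontally(test_input)

# Scoring is exact-match: a single wrong cell counts as a failure, which is
# one reason a partial "sense" of the pattern still scores poorly.
print(candidate == [[3, 0], [0, 3]])  # True

Humans typically infer such rules from one or two demonstrations; the reported weakness of GPT-4 on these categories suggests that fluent language use and this kind of abstract induction are separable abilities.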

The researchers behind these puzzles aim to provide a better benchmark for evaluating the capabilities of AI systems and to shed light on an enigma surrounding LLMs (Large Language Models) such as GPT-4: these systems excel at certain tasks, yet exhibit clear weaknesses and an inability to reason about abstract concepts.

Some experts believe that these achievements reflect incipient reasoning or understanding; others are more cautious, seeing no conclusive evidence either way. Tests like these logic puzzles reveal differences between the abilities of people and those of AI systems, and they are considered a step in the right direction toward understanding the limits of these systems and unraveling the elements of human intelligence.

The Turing Test is designed to evaluate the ability of a machine or AI program to exhibit behavior similar to a human's in communication and natural language understanding. It was proposed by the mathematician and computer scientist Alan Turing in his 1950 article "Computing Machinery and Intelligence" (Turing, 1950).

The goal of the Turing Test is to determine whether a machine can display intelligence indistinguishable from a human's while holding a conversation with a human evaluator. In the test, the evaluator interacts with two participants, a human and a machine, both of which communicate with the evaluator through text messages without revealing their identities.

If the evaluator cannot distinguish with certainty which of the two participants is the human and which is the machine, the machine is considered to have passed the Turing Test, demonstrating a form of AI that successfully imitates human behavior.
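The structure of the test can be sketched schematically. In the Python sketch below, both reply functions are hypothetical stand-ins; a real evaluation would place a live person and a live model behind the two anonymous channels.

# Schematic sketch of the imitation game; the reply functions are stand-ins.
import random

def human_reply(prompt: str) -> str:
    return "I'd have to think about that one."  # stand-in for a live person

def machine_reply(prompt: str) -> str:
    return "That is an interesting question."   # stand-in for a live model

def imitation_game(questions: list) -> dict:
    # The evaluator sees only the labels "A" and "B", never the identities.
    pairing = [("A", human_reply), ("B", machine_reply)]
    random.shuffle(pairing)
    return {label: [reply(q) for q in questions] for label, reply in pairing}

transcripts = imitation_game(["What do you feel when you read a poem?"])
# The evaluator reads the transcripts and guesses which label hides the
# machine; if the guess is right no more often than chance, the machine is
# said to have passed.
print(transcripts)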

The Turing Test has been a subject of debate and criticism over the years, as some argue that passing this test does not necessarily imply that the machine has genuine understanding or true intelligence but merely the ability to mimic human responses. Nevertheless, it remains an important benchmark for assessing the progress and sophistication level of AI systems today.

In this regard, some experts argue that task-specific tests and assessments are better suited to evaluating the capabilities of AI systems: "This could be because LLMs only learn from language; not being embodied in the physical world, they don’t experience the connection of language with objects, properties, and feelings as a person does. It’s clear that they don’t understand words in the same way that people do," says Lake. In his view, LLMs currently demonstrate "that you can have very fluid language without genuine understanding." (Biever, 2023)

On the other hand, LLMs have capabilities that people do not possess, such as the ability to track connections between nearly all the words humans have ever written. This may allow the models to solve problems by exploiting idiosyncrasies of language or other cues, without necessarily generalizing to broader performance.

In another article, we addressed the Turing Test from the perspective of the film Ex Machina (Caycedo-Castro & Pinto-Bustamante, 2022). In that story, Caleb, upon first contact with Ava, the female robot created by the Blue Book company, finds her fascinating but remains unsure whether she constitutes conscious AI. At their first encounter, Ava represents a collection of symbols and information that only simulates consciousness. The philosopher John Searle challenges the validity of the Turing Test, arguing through his Chinese Room thought experiment that a machine is incapable of genuine thought. He highlights the difference between recognizing syntax and understanding semantics, proposing that an interpreter in a closed room, equipped with sufficient repertoires and rules to process incoming information (e.g., linguistic symbols in Chinese), can impersonate a human interpreter if we consider only the syntactic dimension of language, leaving aside the semantic dimension of meaning: "Biological objects (brains) can have ’intentionality’ and ’semantics,’ which that author considers the defining characteristics of mental activity" (Penrose, 1996, p. 28).

This semantic dimension requires a sentient mind capable of perceiving emotions and feelings, and of interacting within a cultural context. In conclusion, ChatGPT, like other LLMs, represents for now a powerful syntactic machine: it retrieves information and organizes it according to the syntactic arrangements of language, but it lacks semantic capacity; it cannot comprehend the meaning and cultural significance of the content it produces.
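Searle's distinction can be illustrated with a deliberately crude sketch in Python: a purely syntactic "room" that maps input symbols to output symbols by rule, with no access to their meaning. The rule book below is a toy invention, not a real system.

# The Chinese Room, reduced to a lookup table: the operator matches the
# shapes of symbols against a rule book; nothing here requires knowing
# what the symbols mean.
RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字?": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def room_operator(symbols: str) -> str:
    # Default reply: "Please say that again."
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(room_operator("你好吗?"))  # fluent output, zero understanding

To an outside observer the exchange looks competent, yet the operator manipulates only syntax; this is, in miniature, the sense in which LLMs can be described as powerful syntactic machines.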

In another article (Pinto-Bustamante, Riaño-Moreno, Clavijo-Montoya, Cárdenas-Galindo & Campos-Figueredo, 2023), we put forth the following idea: AI merges analytical, systematic information processing with automatic systems, eliminating the influence of emotion on decision-making in favor of the predictive power of large-scale data analysis. However, the neurobiology of emotions emphasizes the importance of affect in decision-making (Damasio, 2000). By eliminating emotions, AI simultaneously dismisses the role of values (moral, ethical, political) and the need to deliberate on conflicts between diverse values in societies with pluralistic aspirations, where diverse moral narratives and worldviews persist. In this context, algorithms solidify into sets of incontrovertible rules for decision-making, rules that validate preferences and rules that restrict interaction and dissent. When the role of emotions, meaning, sense, and critical reflection on the use of technology is eliminated, we become impersonal operators, a mute extension of the machine’s programmable mechanism.

When I asked ChatGPT about the etymology and meaning of the word "impersonal," this was its response:

The word "impersonal" has its roots in Latin. It comes from the combination of two Latin words: "in," meaning "not," and "personalis," meaning "personal."

So, etymologically, "impersonal" means "not personal" or "lacking personal characteristics."

In terms of meaning, "impersonal" describes something that lacks personal qualities, emotions, or individuality. It can refer to actions, interactions, or statements that are objective, neutral, or devoid of personal connection. In various contexts, it can imply a sense of detachment, impartiality, or indifference. (Response generated on March 3, 2024)

In conclusion: automatic nihilism

In another part of the novel, the Mechanical Hound is described as a robot designed to punish dissident readers in the novel’s fictional city.

“It doesn’t like me,” said Montag.
“What, the Hound?” The Captain studied his cards. “Come off it. It doesn’t like or dislike. It just ‘functions.’ It’s like a lesson in ballistics. It has a trajectory we decide on for it. It follows through. It targets itself, homes itself, and cuts off. It’s only copper wire, storage batteries, and electricity.” (Bradbury, 2006)

Promoting greater control of bias in the construction of algorithmic models is an ethical task that responds to recognition of the positive value of cultural diversity. It is imperative to promote a critical perspective on the growing role of algorithms in daily life. Without it, the uncritical consumption of algorithmic systems will lead to what some call dataism:

In this same vein, Han emphasizes the value acquired by big data and algorithms in the contemporary world. From this perspective, existence is transformed into homo digitalis who, at the mercy of the quantification and calculability of behaviors completely recorded and traceable on the internet, ends up immersed in a vision or interpretation of the world supported entirely by data; namely, a data-driven barbarism with its emerging philosophy: dataism. This barbarism becomes nihilism, because dataism necessarily implies renouncing meaning and significance, a void one attempts to fill with numbers and calculations. In other words, in this form of nihilism the absence of meaning is disguised and concealed behind the curtain of mechanical calculation. (Valle Jiménez & García Ramírez, 2021)

As happened with Google some years ago, the growth and refinement of AI have turned it into a kind of omniscient deity, a sort of oracle, a seer, a providence, “as a new interpretation imposed by the will of technical scientific power that promises all sorts of salvific answers and solutions, like a kind of technical messianism” (Valle Jiménez & García Ramírez, 2021). This AI messianism, without the resistance offered by critical thinking, human emotions, and the search for meaning, leads to a form of nihilism, a loss of horizons. Following Ernst Jünger: “The nihilist moves from a moral structure to an automatic one. When man loses his world of values, he necessarily loses himself. He merely becomes something that still functions” (Grenzmann, 1961). Perhaps this is the great challenge for education in the days of AI: to preserve critical thinking, deliberation on values, and emotional education (Pinto-Bustamante, 2016). In this regard, it is necessary to insist on preserving distinctly human attributes in the interaction with AI (Vannacci, Bonaiuti, & Ravaldi, 2023), such as emotions and their contradictions, and on exploring educational alternatives that leverage AI to develop better empathetic qualities (Ayers, Poliak & Dredze, 2023).

When I asked ChatGPT about this point (and I am grateful for its contributions to the preparation of this text), its response was resolute:

As an AI language model, I don’t have personal characteristics or emotions like humans do. My responses are generated based on patterns in data and are not influenced by personal feelings or experiences. So, in that sense, you could consider my operation as impersonal. (Response generated on March 3, 2024)

References

Ayers, J. W., Poliak, A., Dredze, M., et al. (2023). Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Internal Medicine, 183(6), 589–596. https://doi.org/10.1001/jamainternmed.2023.1838

Barnett, S. (2023, January 30). ChatGPT Is Making Universities Rethink Plagiarism. Wired. https://www.wired.com/story/chatgpt-college-university-plagiarism/

Beam, A. L., Drazen, J. M., Kohane, I. S., Leong, T. Y., Manrai, A. K., & Rubin, E. J. (2023). Artificial Intelligence in Medicine. The New England Journal of Medicine, 388(13), 1220–1221. https://doi.org/10.1056/NEJMe2206291

Biever, C. (2023). ChatGPT broke the Turing test - the race is on for new ways to assess AI. Nature, 619(7971), 686–689. https://doi.org/10.1038/d41586-023-02361-7

Bradbury, R. (2006). Fahrenheit 451. Retamar: Ediciones Perdidas.

Caycedo-Castro, M. P., & Pinto-Bustamante, B. J. (2022). El test de Turing en Ex Machina: ¿Es Ava un sistema intencional?. Ética y Cine Journal, 12(2), 23-32. https://doi.org/10.31056/2250.5415.v12.n2.38325

Cooper, A., & Rodman, A. (2023). AI and Medical Education - A 21st-Century Pandora’s Box. The New England Journal of Medicine, 389(5), 385–387. https://doi.org/10.1056/NEJMp2304993

Damasio, A. R. (2000). Sentir lo que sucede. Santiago de Chile: Editorial Andrés Bello.

Ferryman, K., Mackintosh, M., & Ghassemi, M. (2023). Considering Biased Data as Informative Artifacts in AI-Assisted Health Care. The New England Journal of Medicine, 389(9), 833–838. https://doi.org/10.1056/NEJMra2214964

Grenzmann, W. (1961). Fe y creación literaria. Problemas y figuras de la actual literatura alemana. Madrid: Ediciones Rialp.

Lenharo, M. (2023). ChatGPT gives an extra productivity boost to weaker writers. Nature. Advance online publication. https://doi.org/10.1038/d41586-023-02270-9

Pinto-Bustamante, B. J. (2016). Propuestas para la educación ante la crisis del humanismo. En Educar para el siglo XXI - Reflexiones humanistas (Edición 1). Universidad Sergio Arboleda.

Pinto-Bustamante, B. J., Riaño-Moreno, J. C., Clavijo-Montoya, H. A., Cárdenas-Galindo, M. A., & Campos-Figueredo, W. D. (2023). Corrigendum: Bioethics and artificial intelligence: between deliberation on values and rational choice theory. Frontiers in Robotics and AI, 10, 1251568. https://doi.org/10.3389/frobt.2023.1140901

Sabzalieva, E., & Valentini, A. (2023). ChatGPT e Inteligencia Artificial en la educación superior: Guía de inicio rápido. Organización de las Naciones Unidas para la Educación, la Ciencia y la Cultura y el Instituto Internacional de la UNESCO para la Educación Superior en América Latina y el Caribe (IESALC).

Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: many scientists disapprove. Nature, 613(7945), 620–621. https://doi.org/10.1038/d41586-023-00107-z

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879

Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433-460. https://doi.org/10.1093/mind/LIX.236.433

Valle Jiménez, D., & García Ramírez, D. (2021). Algoritmos, Big Data e Inteligencia Artificial: ¿Un nihilismo anunciado? Cuadernos Salmantinos de Filosofía, 48, 75-103.

Vannacci, A., Bonaiuti, R., & Ravaldi, C. (2023). Machine-Made Empathy? Why Medicine Still Needs Humans. JAMA Internal Medicine. Advance online publication. https://doi.org/10.1001/jamainternmed.2023.4389




Copyright/Permissions: The authors retain copyright © and grant Aesthethika the right of publication under a CC BY-SA (Attribution-ShareAlike 4.0 International) license. This license permits copying, redistributing, and publicly communicating the work, with due credit, and building upon the published material, provided appropriate credit is given through a link to the license and an indication of whether changes were made.

