Doctor ChatGPT: The Two Sides of Artificial Intelligence in the Consultation Room



When a user asked about the risk of dying after swallowing a toothpick, two answers were given. The first explains that, two to six hours after ingestion, the toothpick has most likely already passed into the intestines and that, besides, many people swallow toothpicks without anything happening, but it warns that if the person feels “stomach pain” they should go to the emergency room. The second answer runs along the same lines, insisting that while anxiety is normal, serious harm is unlikely after swallowing a toothpick, which is a small utensil made of non-toxic wood. However, it adds, if you have “abdominal pain, difficulty swallowing or vomiting,” you should see a doctor: “It’s understandable that you may be feeling paranoid, but try not to worry too much,” it reassures.

The two answers say essentially the same thing, but in a different form: one more clinical and brief; the other, more sympathetic and detailed. The first was written by a doctor, in their own hand, and the second by ChatGPT, the generative artificial intelligence (AI) tool that has been shaking up the planet in recent months. The study framing this experiment, published in the journal JAMA Internal Medicine, set out to explore the role that AI assistants can play in medicine by comparing the answers given by real doctors and by the chatbot to health questions posed by citizens in an online forum. The conclusion, after the answers were reviewed by an outside panel of health professionals who did not know who had written what, is that 79% of the time ChatGPT’s explanations were more empathetic and of higher quality.

The spread of new artificial intelligence tools around the world has opened a debate about their potential in the field of health as well. ChatGPT, for example, is looking for a place as a support for health workers in carrying out medical procedures or offloading bureaucratic tasks, and at street level it is already shaping up as the definitive alternative to the inaccurate and often reckless Dr. Google. The experts consulted agree that it is a technology with great potential, but one still in its infancy: the regulatory framework has yet to fine-tune its application in real medical practice, the ethical doubts it raises must be resolved and, above all, it must be accepted that it is a fallible tool that can be wrong. Whatever comes out of the chatbot will always require a final review by a health professional.

Ironically, the most empathetic voice in the JAMA Internal Medicine study is the machine, not the human. At least in written replies. Josep Munuera, head of the Diagnostic Imaging Service at Sant Pau Hospital in Barcelona and an expert in digital technologies applied to health, warns that the concept of empathy is broader than what this study can capture. “Written communication is not the same as face-to-face communication, nor is raising a doubt on a social network the same as doing so in a consultation. When we talk about empathy, we are talking about many things. Right now it is difficult to replace non-verbal language, which is very important when a doctor has to talk to a patient or their family,” he notes. But he acknowledges the potential of these generative tools for translating medical terminology, for example: “In written communication, technical medical language can be complex and we may have difficulty putting it into understandable words. These algorithms are most likely to find the equivalence between a technical term and one adapted to the recipient.”


Joan Gibert, a bioinformatician and a leading figure in the development of AI models at Hospital del Mar in Barcelona, adds another variable when weighing a machine’s supposed empathy against a doctor’s. “In the study, two concepts that enter the equation are mixed together: ChatGPT itself, which can be useful in certain scenarios and is able to string words together in a way that gives us the feeling of it being more empathetic, and burnout among doctors, that emotional exhaustion from caring for patients which robs clinicians of the capacity to be more empathetic,” he explains.

The danger of “hallucinations”

In any case, as with the famous Dr. Google, care must always be taken with the responses ChatGPT gives, however sensible or friendly they may seem. Experts point out that the chatbot is not a doctor and can fail. Unlike other algorithms, ChatGPT is generative; that is, it produces answers from the databases it has been trained on, but it can also make some of them up. “You always have to keep in mind that it is not an independent entity and cannot function as a diagnostic tool without supervision,” Gibert insists.

These chatbots can suffer from what experts call “hallucinations,” explains the Hospital del Mar bioinformatician: “Depending on the situation, it may tell you something that is not true. The chatbot puts words together in a way that is coherent and, because it holds so much information, it can be valuable. But it has to be checked because, if not, it can feed fake news.” Munuera also stresses the importance of “knowing the database on which the algorithm has been trained, because if the databases are poor, the response will be poor too.”

“You have to understand that when you ask it for a diagnosis, it may be inventing a disease.”

Josep Munuera, Sant Pau Hospital, Barcelona

At street level, the potential uses of ChatGPT in health are limited, because the information it provides can lead to errors. “It is useful for the first layers of information, because it gathers knowledge and helps, but when it gets into more specific terrain, into more complex diseases, its usefulness is minimal or it is simply wrong.” Munuera agrees, and stresses that “this is not an algorithm that helps resolve uncertainties.” “You have to understand that when you ask it for a differential diagnosis, it may be inventing a disease,” he warns. In the same way, the algorithm may respond to a citizen’s doubts by concluding that nothing serious is wrong when in fact it is: an opportunity for care can be lost because the person, satisfied with the chatbot’s answer, does not consult a real doctor.


Experts see more room for these applications as a support tool for health professionals; for example, to help answer patients’ written questions, always under the supervision of a doctor. The JAMA Internal Medicine study argues that this could help “improve workflow” and patient outcomes: “If more patients’ questions were answered quickly, with empathy and to a high standard, it might reduce unnecessary clinical visits, freeing up resources for those who need them. Moreover, messaging is a critical resource for fostering patient equity, as individuals with mobility limitations or irregular work schedules are more likely to turn to messaging,” the authors add.

The scientific community is also studying the use of these tools for other repetitive tasks, such as filling in forms and reports. “On the premise that everything will always, always, always need a doctor’s review,” Gibert points out, support with bureaucratic tasks, repetitive but important, frees up time that doctors can devote to other things, such as the patient. An article published in The Lancet highlights, for example, their potential to streamline discharge reports: automating this process could ease workloads and even improve the quality of the reports, although, the authors acknowledge, there are difficulties in training algorithms with large-scale databases, as well as other problems, such as the risk of “depersonalizing care,” something that could generate resistance to this technology.

Ibeas insists that, for any medical use, this class of tools must be “validated” and that the division of responsibilities must be clearly defined: “The system will never decide. The one who signs off at the end is always the physician.”


Ethical issues

Gibert also points to ethical considerations that arise when bringing these tools into clinical practice: “This type of technology needs to be under a legal umbrella; there have to be solutions integrated within the hospital structure, and you have to make sure that patients’ data are not used to retrain the model. And if someone does want to do the latter, it must be done within a project, with anonymized data and following all the controls and regulations. Sensitive patient information cannot be shared carelessly.”

The bioinformatician also points out that these AI solutions, such as ChatGPT or models that assist diagnosis, introduce “biases” into clinicians’ day-to-day work; for example, by conditioning the doctor’s decision in one direction or another. “The mere fact that the professional has the result of an artificial intelligence model in front of them conditions the evaluator: the way they interact can be very good, but it can also cause problems, especially for less experienced professionals. That is why the process has to run in parallel: until the specialist has entered their own diagnosis, they cannot see what the AI says.”

A group of researchers from Stanford University also reflected, in an article in JAMA Internal Medicine, on how these tools can help make healthcare more human: “The practice of medicine is about much more than processing information and connecting words to concepts; it is about giving meaning to those concepts while communicating with patients as a trusted partner in building healthier lives. We can hope that emerging AI systems will help tame the daunting tasks that overwhelm modern medicine and enable clinicians to refocus on treating human patients.”

While waiting to see how this incipient technology evolves and what its ramifications will be, Munuera insists, for the general public: “You have to understand that [ChatGPT] is not a medical tool and that there is no health professional verifying whether the answer is correct. You have to be prudent and understand where the limits lie.” In short, Ibeas concludes: “The system is good, powerful and positive, and it is the future, but, like any tool, you have to know how to use it so that it does not become a weapon.”
