Artificial intelligence answers some health questions as well as elite doctors | Health and Wellness



In Deconstructing Harry, Woody Allen defines the only priority for anyone who suspects they are sick: "The most beautiful words in the English language are not 'I love you' but 'it's benign'." For years, faced with health concerns, many people have turned to Google to diagnose themselves. Often what they get is more anxiety, which their doctors then have to deal with. Now the company through which so many people find information, navigate cities or book dinner reservations could strengthen its position as a source of answers to these existential questions, thanks to AI models that can respond to health problems accurately.

In an article published today in the journal Nature, a team from the company presents the results of its work with Med-PaLM, a generative AI model similar to ChatGPT that feeds on large databases and manages to organize that information into answers that make sense, although they are not always correct. The second version of this technology, Med-PaLM 2, already achieves an accuracy of 86.5% on multiple-choice exams like the one doctors in Spain must pass to obtain a residency position (the MIR), 19 points more than the previous version, the one presented in this article.
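To make that exam figure concrete, here is a minimal sketch of how accuracy on multiple-choice medical questions is typically scored. It is illustrative only: the sample questions and the model_answer stub are hypothetical stand-ins, not Google's benchmark data or evaluation code.

```python
# Illustrative sketch: scoring a model on multiple-choice exam questions,
# the kind of evaluation behind figures like Med-PaLM 2's 86.5%.
# The questions and model_answer() below are hypothetical placeholders.

questions = [
    {"question": "First-line drug for uncomplicated hypertension?",
     "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
     "correct": "A"},
    {"question": "Most common pathogen in community-acquired pneumonia?",
     "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
     "correct": "B"},
]

def model_answer(item: dict) -> str:
    """Stand-in for a call to a language model; returns an option letter."""
    return "A"  # a real system would generate this from item["question"]

correct = sum(model_answer(q) == q["correct"] for q in questions)
print(f"Accuracy: {correct / len(questions):.1%}")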

In the work published today, the authors, most of them members of Google Research, test their models against large databases of medical questions and answers, which also include more than 3,000 of the questions most frequently searched by users on the internet. According to Shekoofeh Azizi, one of the article's authors, writing by email, the models' scores went "in three months from a barely passing grade to expert level" on the tests that measure their ability to answer these questions. A panel of clinicians estimated that 92.9% of the long-form answers generated by Med-PaLM were in line with the scientific consensus, just above the 92.6% of answers provided by human clinicians. And when comparing the share of answers that could lead to adverse outcomes, the machines also came out ahead: 5.8% versus 6.5% for the doctors. Although the data is promising, the authors say more research is needed before bringing these models into healthcare settings, and Azizi says they don't envision "using these systems independently or replacing physicians."
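As a rough illustration of the panel-based comparison described above, the two headline rates, agreement with scientific consensus and potential for harm, can be computed from clinician labels like this. The four ratings below are invented examples, not data from the Nature study.

```python
# Illustrative only: computing consensus-agreement and possible-harm rates
# from clinician panel labels, as in the comparison described above.
# These ratings are invented examples, not data from the study.

ratings = [
    # (source, agrees_with_consensus, could_cause_harm)
    ("model", True, False),
    ("model", True, False),
    ("physician", True, False),
    ("physician", False, True),
]

for source in ("model", "physician"):
    group = [r for r in ratings if r[0] == source]
    consensus_rate = sum(r[1] for r in group) / len(group)
    harm_rate = sum(r[2] for r in group) / len(group)
    # The study reports 92.9% vs 92.6% consensus and 5.8% vs 6.5% harm.
    print(f"{source}: consensus {consensus_rate:.1%}, "
          f"possible harm {harm_rate:.1%}")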


Josep Munuera, director of diagnostic imaging at the Hospital de la Santa Creu i Sant Pau in Barcelona and an expert in technologies applied to health, believes these models can be useful, but warns that "doctors' work is not limited to answering questions" like those put to these models. "A physical examination, or attention to nonverbal language, is necessary to reach a diagnosis," he notes. In the future, technologies such as those developed by Google could be used to reduce the workload, for example by producing a report the patient can understand or a treatment plan. "It can also be useful as support, suggesting ideas about a diagnosis or helping to search for scientific information in large databases," he notes. "But then we need a human who checks what the AI proposes and who also takes responsibility for the decision," he concludes. "What doctors do is multifaceted, far-reaching and relies heavily on human interaction. Our goal is to use artificial intelligence to increase doctors' ability to deliver better care," Azizi agrees.

In an interview with EL PAÍS, Regina Barzilay, an MIT scientist and expert in artificial intelligence applied to medicine, warned that machines, which learn on their own from the instructions given to them, could outperform humans in some skills, and that "our ability to know whether they are doing something wrong is minimal." "We have to learn to live in this world where technology makes many decisions that we cannot supervise," she warned. Anyone who has used ChatGPT will have seen these systems' ability to generate answers that look completely reliable yet are riddled with falsehoods that are hard to detect, precisely because they are so well articulated. Specialists like Barzilay know that some of the answers the machines give us may be correct, but we don't know exactly where they come from, and in matters as delicate as those doctors handle, that opacity can generate insecurity.


In some applications of this technology, which involve not diagnosing patients' diseases but searching for knowledge, hallucinations, as the invented fragments in AI-generated texts are known, may not be a problem. "Hallucinations and creativity are two sides of the same coin, and some applications, such as repositioning a drug or discovering associations between genes and diseases, require a certain degree of creativity, which in turn makes the process of discovery and innovation possible," Azizi explains.

José Ibeas, a nephrologist at the Parc Taulí Hospital in Sabadell and coordinator of the Big Data and Artificial Intelligence group of the Spanish Society of Nephrology, believes this type of technology is the future and will be very useful for improving medical care, but he points out that there is still much to learn. "For example, they get their information from high-quality sources, but not all publications are equal, and often there are no publications for negative data, for experiments where something is tested and the expected result is not obtained. Artificial intelligence builds its account from those texts, but I don't know which components it takes from each type of article, and that can introduce bias," notes Ibeas. "The same treatment can be beneficial for one population that has a disease in one setting and not for another population," he explains.

For the time being, Ibeas believes these kinds of models could be a resource for clinicians, but in the future their usefulness should be tested, as with other medical products before approval, by "comparing doctors' results in standard practice with those of doctors using this technology." The specialist also says that care must be taken in applying this technology, training doctors to use it and deploying it in cases where it is really useful, so as to avoid what has happened "with some very good products in medicine, where, because of commercial pressure to apply them to everyone, errors occur and the opportunity to use a very useful technique ends up being lost."


One last aspect that will matter in the use of these generative language models is their potential to make good answers accessible to many people who would otherwise not have them. The authors themselves point out that the comparisons in which the AI came out well were made against very high-level experts. Some clinicians worry that this possibility could become an excuse to cut healthcare resources, even as they recognize the usefulness of models like Med-PaLM in such contexts.

You can follow EL PAÍS Health and Wellness on Facebook, Twitter and Instagram.


