AI thought knee X-rays could tell if you drink beer and eat beans

Some artificial intelligence models have difficulty learning the old principle: “Correlation does not equal causation.” And while that’s no reason to abandon AI tools, a recent study should remind programmers that even reliable versions of the technology can still produce bizarre conclusions, like claiming that knee X-rays can reveal whether someone drinks beer or eats baked beans.

Artificial intelligence models do much more than generate (sometimes accurate) text responses and (somewhat) realistic videos. Truly well-made tools are already helping medical researchers parse massive datasets in search of new breakthroughs, accurately predict weather patterns, and assess environmental conservation efforts. But according to a study published in the journal Scientific Reports, algorithmic “shortcut learning” continues to pose a problem by generating results that are simultaneously highly accurate and deeply misleading.

Researchers at Dartmouth Health recently trained medical AI models on more than 25,000 knee X-rays provided by the National Institutes of Health’s Osteoarthritis Initiative. They then essentially worked backwards, instructing the deep learning programs to find similarities that predicted nonsensical traits, such as whether the knees’ owners drank beer or ate beans. As the study authors explain, this is patently absurd.

“The models reveal no hidden truth about beans or beer in our knees,” they write.

At the same time, however, the team explains that these predictions are not the result of “mere chance.” The underlying problem is what’s known as algorithmic shortcut learning, in which deep learning models latch onto easily detectable but medically irrelevant or misleading patterns.

“Shortcutting makes it trivial to create models with surprisingly accurate predictions that lack any form of validity,” they warn.

For example, variables identified by the algorithms included unrelated factors such as differences in X-ray machine models or the geographic locations of the equipment.
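
To see how little it takes for such a shortcut to dominate, consider a minimal synthetic sketch in Python (invented data, a stand-in scikit-learn classifier, and a hypothetical scanner fingerprint; not the study’s actual pipeline). The “images” contain no real physiological signal at all, yet the model scores well simply because the scanner correlates with the label:

```python
# A minimal synthetic sketch of shortcut learning (invented data and a
# stand-in classifier, not the study's actual pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000

# "Image" features: pure noise, i.e. no genuine physiological signal.
X = rng.normal(size=(n, 20))

# Hypothetical confounder: scanner A (0) vs. scanner B (1). Sites using
# scanner B happen to see more beer drinkers, so scanner and label correlate.
scanner = rng.integers(0, 2, size=n)
label = (rng.random(n) < np.where(scanner == 1, 0.8, 0.2)).astype(int)

# The scanner leaves a consistent fingerprint on the "image", e.g. a
# detector-specific intensity offset in one feature.
X[:, 0] += 3.0 * scanner

model = LogisticRegression().fit(X[:2000], label[:2000])

# Held-out data with the SAME confounding looks impressively accurate...
print("confounded test accuracy:", model.score(X[2000:], label[2000:]))

# ...but on deconfounded data (scanner and label now independent),
# accuracy collapses to chance: the model learned the scanner, not the knee.
X_new = rng.normal(size=(n, 20))
scanner_new = rng.integers(0, 2, size=n)
X_new[:, 0] += 3.0 * scanner_new
label_new = rng.integers(0, 2, size=n)
print("deconfounded test accuracy:", model.score(X_new, label_new))
```

The toy model’s accuracy evaporates the moment the confounder and the label are decoupled, which is precisely the validity failure the study’s authors warn about.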

“These models can see patterns that humans cannot, but not all of the patterns they identify are meaningful or reliable,” said Peter Schilling, an orthopedic surgeon, Dartmouth Health assistant professor of orthopedics, and senior author of the study, in a December 9 statement. “It is critical to recognize these risks to avoid misleading conclusions and ensure scientific integrity.”

An additional problem is that there doesn’t seem to be an easy fix for AI shortcut learning. Efforts to address these biases were only “marginally successful,” according to Monday’s announcement.

“This goes beyond bias due to cues about race or gender,” said Brandon Hill, a machine learning scientist and co-author of the study. “We discovered that the algorithm could even learn to predict the year in which an X-ray was taken. It’s pernicious; if you prevent it from learning one of these elements, it will instead learn another that it previously ignored.”
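
Hill’s whack-a-mole observation lends itself to the same kind of toy illustration. In the sketch below (hypothetical “scanner” and “acquisition year” features, not the study’s code), two spurious signals both encode the label; blanking one and retraining simply shifts the model’s weight onto the other, and its apparent accuracy stays high:

```python
# Toy sketch of the "whack-a-mole" effect Hill describes (hypothetical
# features and a simple classifier, not the study's code): two spurious
# signals both encode the label, and removing one just shifts the model
# onto the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
label = rng.integers(0, 2, size=n)

X = rng.normal(size=(n, 20))
X[:, 0] += 3.0 * label  # shortcut 1: e.g. a scanner fingerprint
X[:, 1] += 3.0 * label  # shortcut 2: e.g. an acquisition-year artifact

full = LogisticRegression().fit(X, label)
print("weights on the two shortcuts:", full.coef_[0, :2])  # both substantial

# "Fix" the first shortcut by blanking that feature and retraining...
X_fixed = X.copy()
X_fixed[:, 0] = 0.0
fixed = LogisticRegression().fit(X_fixed, label)

# ...the model leans on the remaining shortcut and stays highly "accurate".
print("accuracy with shortcut 1 removed:", fixed.score(X_fixed, label))
print("weight shifted onto shortcut 2:", fixed.coef_[0, 1])
```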

According to Hill, these problems could lead human experts to trust “some very unreliable claims” made by AI models. For Schilling, Hill, and their colleagues, this means that while predictive deep learning programs have their uses, the burden of proof must be much stricter when using them in situations such as medical research. Hill compares working with AI to dealing with an alien life form, while cautioning against the temptation to anthropomorphize it.

“It’s incredibly easy to fall into the trap of assuming the model ‘sees’ the same thing we do,” he says. “It doesn’t. It learned a way to solve the task it was given, but not necessarily how a person would. There is no logic or reasoning in it as we usually understand it.”
