AI and ultrasound: a blessing or a curse?

An AI-generated poster on foetal sexing in canines with ultrasound has recently been shared on social media. The reaction to seeing something not just completely inaccurate, but also so lazily created, is almost visceral. In the case of this poster, it advises completely the wrong time windows for sexing, depicts the umbilical cord as arising from the centre of the foetal head, and diagrammatically illustrates its fictitious points with a line drawing of an entity most closely resembling a snake.

The AI’s recreations of ultrasound scans were so grotesque as to be unrecognisable as a mammalian foetus. Despite the poster being so obviously wrong, people have reacted angrily, and the reasons for this are two-fold.

(1) Sadly, this will fool a lot of people. We know that when people first start learning ultrasound, everything on screen looks unrecognisable. Without experience, it is very difficult to know whether you are being shown total nonsense or not.

(2) It won’t fool anyone with basic knowledge of ultrasound or anatomy. As the person who created and shared this claims to be a legitimate source of ultrasound training, this further damages the reputation of the industry, at a particularly sensitive time when all the legislators’ eyes are upon it.

Make no mistake: this is artificial intelligence at its worst. At the command of people with no mastery of the subject matter, the ability to (superficially) substitute years of learning with misinformation generated in five minutes is the con artist’s dream. In the echo chamber of social media channels over which they are in complete control, they can protect themselves from anyone who might think to call them out on it, or ask them to support this “knowledge” with a practical demonstration of what they describe.

This isn’t really about artificial intelligence, though. People like this are going to find ways to scam people out of their money using whatever the best tool of the moment is. The only way to fight it now, and always, is by doing the research before investing in an ultrasound course or equipment from anybody. Those who are not willing to do this are prime targets for being scammed, with or without an AI poster that teaches them that the cord insertion arises from the foetal skull.

Artificial intelligence in the field of ultrasound is not all bad news. It has enormous potential to assist us. Indeed, diagnostic imaging is one of the most popular uses of AI in the field of veterinary medicine at the moment. However, properly trained, custom-made AI in the hands of well-trained individuals is very different to ChatGPT-generated ‘AI slop.’

AI cannot be a substitute for human expertise

This example does starkly illustrate a more universal truth: AI cannot be used as a replacement for genuine expertise. When it is, things go dramatically wrong. This is something I am currently spending a lot of time looking at as part of my PhD: how do people interact with AI? Does it help or hinder them in their ultrasound interpretation? Do people notice when AI gets it wrong? What level of experience is required for people to reliably be able to detect AI errors in the field of ultrasound?

Artificial intelligence certainly supports people with very little ultrasound knowledge where they would otherwise flounder, but those same gaps leave them completely unable to recognise when it gets things totally wrong. This foetal sexing fiasco is the perfect illustration: in the eyes of the inexperienced, the poster makes its creator look knowledgeable, but to people with actual scanning experience, the errors are glaringly obvious and the person posting it is completely discredited.

We see the same phenomenon in veterinary and human medicine, where the performance of inexperienced operators is boosted by the use of artificial intelligence so long as the AI gets it right. As soon as it makes a mistake, however, the person’s performance plummets, because they are entirely dependent upon the AI. They have not learnt the skills for themselves, so have no way of critically assessing its outputs.

Is there a solution?

In the professional ultrasound world, we are working on a solution. Any artificial intelligence assisted diagnostic imaging needs to be accompanied by training, not only directly in the imaging modality, but also in how to spot common AI errors, which tend to be very predictable.

Outside of the medical setting, the usual levels of vigilance and scepticism should be applied. If someone claims they can exchange hundreds of your pounds, euros or dollars for ultrasound skills, that person must be an accredited sonographer. That’s not snobbery; it’s just you looking after yourself and your money.

If someone is claiming to have amazing ultrasound machines to sell to you, that person must be running a reputable ultrasound company with a long-established trading history – and ideally be qualified in the field, too. Otherwise, their judgement can only be motivated by one thing: what equipment they can sell you to maximise their profits, not your performance. Inevitably, this means the lowest quality equipment, marked up to the highest possible price.

Finally, we just have to come to terms with the fact that the people who most need this advice will never take it. Some people just want to be told a too-good-to-be-true story, and those are the people who were going to get hurt anyway, with or without AI.