The limitations of artificial intelligence in cardiology

By Sherry-Ann Brown | August 11, 2019

Artificial intelligence (#AI) seems to be all the rage these days, as it should be, given its potential to revolutionize medicine in many ways. #AI is already an integral part of many of our lives through Google, Amazon, Facebook, and other sites. Siri and Alexa use #AI too, so we can’t easily escape it. Why would we want to?

Of course, one concern about the algorithms is that they are often trained on ethnically homogeneous datasets, potentially limiting their generalizability to the broader population of the United States and to others around the world. Innovations in personalized medicine are commonly trained and validated in Caucasian populations, with minorities typically excluded.

Interestingly, I have seen at least one group show data suggesting that their algorithm performs well overall in a diverse set of ethnicities, including minorities. Perhaps we can continue to train and validate #AI algorithms on various ethnicities to ensure that we improve instead of exacerbating health disparities in precision medicine.
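As a hypothetical sketch (not from any specific study mentioned here), one simple way to check whether an algorithm "performs well overall in a diverse set of ethnicities" is to compute its accuracy separately within each demographic subgroup, since an overall metric can hide a subgroup that lags far behind:

```python
from collections import defaultdict

def subgroup_accuracy(predictions, labels, groups):
    """Compute accuracy separately for each demographic subgroup.

    predictions, labels: parallel lists of predicted and true outcomes
    groups: parallel list of subgroup identifiers (e.g., self-reported ethnicity)
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy illustration: the overall accuracy (5/8) masks that group "B" gets 0%.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "B", "B", "A", "A", "B"]
print(subgroup_accuracy(preds, labels, groups))  # {'A': 1.0, 'B': 0.0}
```

Reporting such per-group metrics alongside the headline number is one concrete way to verify that an algorithm improves rather than exacerbates disparities.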

Another concern is what to do when man and machine disagree. The importance of excellent validation of the algorithms must, therefore, be underscored. Clinical judgment by the physician is essential, with a dose of humility as well, to ensure that #AI is used to support and not replace clinical decision-making. Admittedly, explainability (the ability to explain how an algorithm arrived at its output) can be limited in complex machine learning models, such as those built with the specialized techniques of “deep learning.” However, the future remains bright for #AI applications in health care, as the potential to enhance our practice in the field of medicine is phenomenal.
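Even for a black-box model, some degree of explainability can be recovered after the fact. As a hedged illustration (permutation importance, a generic technique, not a method the article endorses), shuffling one input at a time and measuring the drop in accuracy shows which inputs the model actually leans on:

```python
import random

def permutation_importance(predict, rows, labels, n_features, seed=0):
    """Estimate each feature's importance for a black-box predict function.

    predict(row) -> predicted label; rows is a list of feature-value lists.
    A large accuracy drop after shuffling a feature means the model relies on it.
    """
    rng = random.Random(seed)

    def accuracy(rs):
        return sum(predict(r) == y for r, y in zip(rs, labels)) / len(rs)

    baseline = accuracy(rows)
    importances = []
    for f in range(n_features):
        column = [r[f] for r in rows]     # copy feature f's column
        rng.shuffle(column)               # break its link to the outcome
        permuted = [r[:f] + [v] + r[f + 1:] for r, v in zip(rows, column)]
        importances.append(baseline - accuracy(permuted))
    return importances
```

For example, with a toy model that predicts from feature 0 alone, shuffling feature 1 changes nothing (importance 0), while shuffling feature 0 can only hurt. Techniques like this do not fully open the black box, but they give the physician something concrete to weigh when man and machine disagree.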


Many studies reporting on #AI applications in medicine indicate that the algorithms outperform many of our nation’s best doctors. What these studies often fail to report is that the best outcomes come from combining the wisdom and experience of physicians with #AI in a human-AI partnership for success.

Due to some of the limitations described here, some may choose not to embrace #AI, or may be slow to adopt and trust the algorithms. Others will remain, or become, early adopters.

One additional reason to consider adopting #AI is the predicted increase in “joy” and decrease in “burnout” among our physicians. If #AI can make our clinical processes even more efficient, expedient, and effective, then perhaps this will free up more time and opportunity for physicians and others in health care to focus on self-care and prevent or heal burnout.

There is always the concern that some may develop uses for #AI in medicine that may blur ethical lines. Because of this, studies and committees focused on the social, ethical, and legal implications of #AI are paramount.

As a minority professional myself, it is important to me that we work together in #AI to quell, not worsen, health disparities. We should also create pipelines for #DiversityAndInclusion in #AI and carefully address any social, ethical, or legal issues. Clinical decision-making by man in partnership with machine will continue to be key as we seek to help each of our patients heal and to maintain, or even increase, our joy in medicine.


We should carefully test and validate every algorithm, multiple times if needed, to ensure that our practice remains true to the oath we all took to first do no harm.
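Testing "multiple times if needed" has a standard form in machine learning: repeated k-fold cross-validation, in which the data are split into several train/test folds and the whole process is rerun with fresh shuffles. A minimal sketch, assuming hypothetical `train_fn` and `predict_fn` callables supplied by the developer:

```python
import random

def k_fold_indices(n, k, seed):
    """Split indices 0..n-1 into k shuffled, roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_cv_accuracy(data, labels, train_fn, predict_fn, k=5, repeats=3):
    """Average held-out accuracy over several independent k-fold splits.

    train_fn(train_data, train_labels) -> model
    predict_fn(model, sample) -> predicted label
    """
    scores = []
    for rep in range(repeats):
        for fold in k_fold_indices(len(data), k, seed=rep):
            held_out = set(fold)
            train_x = [d for i, d in enumerate(data) if i not in held_out]
            train_y = [y for i, y in enumerate(labels) if i not in held_out]
            model = train_fn(train_x, train_y)
            hits = sum(predict_fn(model, data[i]) == labels[i] for i in fold)
            scores.append(hits / len(fold))
    return sum(scores) / len(scores)
```

Because every sample is held out at least once per repeat, and the repeats reshuffle the folds, a consistently high score here is far stronger evidence of safety than a single lucky train/test split.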

Sherry-Ann Brown is a cardiologist.
