Explainable AI in Medicine


Introduction

Deep learning models are everywhere. They have helped us push the boundaries of what we thought was possible in many different fields, from computer vision to natural language processing. Owing to their complexity, however, deep learning models have one key flaw: they are effectively black boxes. Data goes in, a prediction comes out, but we can't really explain how the model arrived at its conclusion.

Thankfully, a growing number of machine learning researchers are devoting themselves to the field of explainability, or explainable artificial intelligence, which is concerned with peeking inside the black boxes of deep learning models. In this post, we take a look at how explainability techniques can be used to highlight which features of an ECG are most relevant to a model predicting atrial fibrillation from sinus ECGs.
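
As a first taste of what such a technique can look like, the sketch below computes a simple gradient-based saliency map: the absolute gradient of the AF logit with respect to each input sample indicates which parts of the signal the prediction is most sensitive to. The model, input shape, and class layout here are placeholder assumptions for illustration only, not details of the model discussed below.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for a trained ECG classifier (hypothetical).
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # assumed two-class output: "AF risk" vs. "no AF risk"
)
model.eval()

# One synthetic single-lead ECG: (batch, channels, samples), e.g. 30 s at 100 Hz.
ecg = torch.randn(1, 1, 3000, requires_grad=True)

# Forward pass, then backpropagate from the AF logit to the input signal.
af_logit = model(ecg)[0, 1]
af_logit.backward()

# Per-sample saliency: |d(AF logit) / d(input)| highlights which parts of the
# ECG the model's prediction is most sensitive to.
saliency = ecg.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([3000])
```

More refined attribution methods, such as Integrated Gradients or occlusion, follow the same basic pattern of tracing a prediction back to regions of the input signal.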

Case study

Together with Zenicor Medical Systems AB, Modulai has previously developed a CNN architecture to detect paroxysmal atrial fibrillation (AF) from single-lead sinus ECGs; a simplified sketch of such a network is shown below. Unless you're already into medicine, you may be scratching your head at some of what you just read. Let's break it down quickly.

AF is a type of heart arrhythmia that affects a significant portion of the population. It increases the risk of heart failure and stroke. Given the risks and prevalence, effective AF screening has the potential to save many lives and reduce the burden on healthcare systems worldwide.
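
For readers who want something concrete, here is a minimal sketch, in PyTorch, of what a 1D CNN classifier over single-lead ECG segments could look like. The layer sizes, kernel widths, segment length, and two-class output are illustrative assumptions and not the actual architecture developed with Zenicor.

```python
import torch
import torch.nn as nn

class ECGConvNet(nn.Module):
    """Illustrative 1D CNN for classifying single-lead ECG segments
    (not the actual Modulai/Zenicor architecture)."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, padding=7),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_samples), a single-lead ECG segment
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of two hypothetical 10-second ECGs sampled at 300 Hz.
logits = ECGConvNet()(torch.randn(2, 1, 3000))
print(logits.shape)  # torch.Size([2, 2])
```

The choice of 1D convolutions reflects the data: an ECG is a time series in which the same local waveform pattern can appear anywhere along the recording, and convolutional filters can pick it up regardless of position.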

