Machine learning and other data science techniques are used in many ways in healthcare. From image processing that detects abnormalities in x-rays or MRIs to algorithms that mine electronic medical records to detect disease, disease risk, or disease progression, machine learning can improve both the healthcare process and patient care. However, as a data scientist in healthcare, I’ve discovered that putting these ideas into practice is often the hardest part of getting value out of a data science project.
Here are a few of the things I've learned to keep in mind while working on data science projects in the healthcare sector.
1. Take a holistic view.
The success of a machine learning algorithm depends on a deep understanding of how it might be used, the process it could fit into, and a relationship with the clinicians who will use it. While the accuracy of your predictive model is important, it’s just as important to know how it will be used and whether it will be used effectively. This requires considering where in the clinician’s workflow the algorithm should sit, and what value it ultimately provides toward the clinician’s goals.
The entire process usually consists of significant back-and-forth. As you learn the workflow of clinicians and they learn what insight your algorithm can (and cannot) provide, the original problem posed may be refined or changed completely. Make sure that the clinician is educated on the limitations of the algorithm, and make sure you are educated on the resources available to the clinician. Spending all your time perfecting a predictive model is a waste if at the end you realize the clinician lacks the resources to actually act on the predictions.
2. Be transparent.
An algorithm that gives a clinician a diagnosis without any justification for why it is making that assessment is rarely actionable. The clinician may then be forced to do a full chart review and physical examination in order to find what the algorithm picked up on. If nothing is found, what does the clinician conclude? Should the algorithm’s assessment be judged as wrong, or is it picking up on something the clinician doesn’t see?
There’s no easy way for the clinician to tell. And once trust in the algorithm is lost, it will be dismissed as providing no actionable information. For this reason, it’s important to offer some level of transparency into a machine learning model’s prediction if it’s going to be used by a clinician. Tools like LIME or SHAP can indicate the features with the biggest influence on the algorithm’s prediction for a specific patient. When properly presented and explained, such an explanation can be a powerful tool for directing a clinician to possible health issues.
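As a minimal sketch of the idea, not a production explainer: for a linear model, each feature’s contribution to a patient’s log-odds is simply coefficient × (value − dataset mean), which is exactly what SHAP reduces to in the linear case. The data and feature names below are synthetic and purely illustrative; a real LIME or SHAP workflow would wrap a more complex model with the respective library.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic patient data; feature names are illustrative only.
rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "a1c"]
X = rng.normal(size=(200, 3))
# Outcome loosely driven by the first and third features.
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_patient(model, X, x_patient, names):
    """Per-feature contribution to the log-odds for one patient,
    relative to the dataset mean (the linear-model special case
    of what SHAP computes in general)."""
    contrib = model.coef_[0] * (x_patient - X.mean(axis=0))
    # Sort by absolute influence, largest first, so the clinician
    # sees the most important drivers at the top.
    order = np.argsort(-np.abs(contrib))
    return [(names[i], float(contrib[i])) for i in order]

for name, value in explain_patient(model, X, X[0], feature_names):
    print(f"{name}: {value:+.3f}")
```

Presenting contributions this way turns a bare risk score into something a clinician can interrogate: each line says which measurement pushed this patient’s prediction up or down, and by how much.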
3. Invite clinical judgment.
If an algorithm prescribes a particular treatment or diagnosis, what happens when it gets it wrong? Human clinicians make mistakes as well, but in those cases liability is more clear-cut. Data scientists usually aren’t trained clinicians, and even if they were, the models they create certainly aren’t. That’s why it’s important that the process gives a clinician the final say on whether an intervention is warranted. The algorithm can alert a clinician to an issue they might otherwise have missed and help them come to a judgment, but in most healthcare applications a human should make the final call on treatment.
There is another reason to want clinical judgment in the process. Judgments tend to be better when humans and algorithms work together. Algorithms can draw on all of the data available on a patient and crunch numbers in a way that humans can’t. However, humans often have access to some additional information that an algorithm does not, such as the way a patient looks or acts, and other hard-to-quantify facts about their well-being. We get the best outcomes when we combine the strengths of both.
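One way to make this division of labor concrete in software: the model only flags cases for review, and the final decision field can only be set by a person. The sketch below is illustrative; the threshold, names, and data are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

REVIEW_THRESHOLD = 0.7  # hypothetical risk cutoff for flagging a case

@dataclass
class PatientCase:
    patient_id: str
    risk_score: float                        # model output in [0, 1]
    clinician_decision: Optional[str] = None  # set only by a human

def flag_for_review(cases, threshold=REVIEW_THRESHOLD):
    """Return only the cases the model thinks warrant a human look.
    The model never writes clinician_decision itself."""
    return [c for c in cases if c.risk_score >= threshold]

cases = [
    PatientCase("A", 0.91),
    PatientCase("B", 0.35),
    PatientCase("C", 0.74),
]
flagged = flag_for_review(cases)
# The clinician, not the model, records the final decision.
flagged[0].clinician_decision = "start treatment"
print([c.patient_id for c in flagged])  # patients A and C are flagged
```

Keeping the decision field writable only through the clinician-facing step makes the human-in-the-loop boundary explicit in the code, rather than a matter of convention.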
4. Build relationships.
One of the biggest barriers to the adoption of data science methods is getting buy-in from clinicians. And they’re completely right to be skeptical – they have an incredible level of expertise and familiarity with their patients. They also have years of training that teaches them to get to the root of the problem and not to simply trust the results of a “black box” algorithm.
An algorithm that just tells clinicians what they already know is useless at best; it may also feel condescending and breed resentment of the analytics team and the predictive modeling process. Taking the time to talk with clinicians, to understand their problems and what kind of tool would actually help, will lead you to a more useful product. Making it clear that the tool is meant to help them, and listening and refining it based on their feedback, will reduce or eliminate that resentment.
There is enormous potential for data science to make a vast difference in healthcare. Thinking carefully not just about the machine learning problems but about the implementation problems is a must. The best algorithms are useless if they aren’t part of a workflow that impacts patient care.