Consciousness is a peculiar, even supernatural idea. From three pounds of flesh emerges an awareness of the body that houses it and the world around it. We all recognize consciousness when we see it, but what is it, really? And where does it go when it’s gone? Neuroscience doesn’t have the tools to answer these questions—if they’re answerable at all—but in a hospital, doctors need to be able to diagnose consciousness. They need to know whether a patient with a brain injury is aware of himself or his surroundings. This diagnosis is still mostly made with a simple bedside exam: Is the patient following commands? Is he gesturing or verbalizing purposefully?
For patients at the edge of consciousness—not lucid but not comatose either—defining the state of consciousness is difficult. Purposeless movements and sounds can look a lot like purposeful ones. Awareness comes and goes. Still, in many cases a high-stakes diagnosis must be made: the patient is either in a minimally conscious state, where there’s some likelihood of recovery, or is given a diagnosis of unresponsive wakefulness syndrome, where his actions are deemed random and purposeless and there’s little hope of recovery. Troublingly, the two diagnoses are confused in as many as 40% of cases.
With a great deal at stake, a recent study in the journal Brain tries to give doctors a little help. The article details a machine learning algorithm that distinguishes unresponsive wakefulness syndrome from a minimally conscious state using EEG brainwave recordings. The algorithm, if put into use, would take some of the guesswork out of this diagnosis, and likely perform better than most human doctors. But diagnosing state of mind with an algorithm raises ethical concerns. How comfortable are we with turning over this kind of life-or-death diagnosis to a machine, especially since our handle on consciousness, as an idea, is so minimal?
Looking into the brain for traces of consciousness is not a new idea. For decades, researchers have been exploring how brain scanning techniques like PET and fMRI might detect the edge of consciousness. In a landmark 2014 study, PET scans showed that the brains of some patients given a (mis)diagnosis of unresponsive wakefulness syndrome could still respond to cues. What’s more, the patients with an active PET scan were more likely to make a meaningful recovery.
This finding argues that PET scans should be used if there’s any doubt about a patient’s state of consciousness. PET scans, though, aren’t available in every hospital. They’re also expensive, prone to artifact, and difficult to interpret. A more accessible alternative is electroencephalography, or EEG, in which electrical sensors placed on the patient’s scalp pick up brain activity through the skull. EEG registers this activity as waves when enough neurons fire in unison. In a healthy person, these waves undulate at predictable frequencies. After a brain injury, the pattern is less predictable.
In the new study, a group at Pitié-Salpêtrière Hospital in Paris took EEG recordings from 268 patients diagnosed with either unresponsive wakefulness syndrome or a minimally conscious state. The EEGs were recorded before and during a listening task designed to pick up on the conscious processing of sounds. Dozens of aspects of the data were fed into a machine learning algorithm called a DOC-Forest.
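To make that last step concrete, here is a minimal sketch of feeding per-patient EEG summary features into a tree-ensemble classifier. It uses scikit-learn’s `ExtraTreesClassifier` on entirely synthetic data; the study’s actual DOC-Forest, its feature set, and its preprocessing are more elaborate, and the feature counts, labels, and planted group difference below are illustrative assumptions, not values from the paper.

```python
# Sketch: classify synthetic "EEG feature" vectors with a tree ensemble.
# All data here is random and illustrative; nothing is from the study.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_features = 268, 28  # dozens of summary measures per patient
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)  # 0 = UWS, 1 = MCS (synthetic labels)

# Plant a small shift in a few features for the "MCS" group,
# so there is actually a signal for the classifier to learn.
X[y == 1, :5] += 0.8

clf = ExtraTreesClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```

The point of the sketch is the shape of the pipeline: each patient is reduced to a fixed-length feature vector, and the ensemble is evaluated with cross-validation rather than on the data it was trained on.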
The DOC-Forest performed relatively well at this complex task. Roughly 3 out of 4 cases were diagnosed properly. (Note: instead of raw accuracy, the authors use a better performance metric called AUC, which takes into account the rate of false positives—misclassifications with profound consequences here.)
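Why AUC rather than accuracy matters can be shown in a few lines. In the toy example below—labels and class balance made up for illustration—a “classifier” that always predicts the majority class looks impressive by accuracy but is revealed as pure chance by AUC.

```python
# Toy illustration: accuracy rewards a trivial majority-class classifier,
# while AUC exposes it as uninformative. Numbers are made up.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0] * 90 + [1] * 10)  # imbalanced: 90% of one class
always_zero = np.zeros(100)             # "classifier" that never says yes

print(accuracy_score(y_true, always_zero))  # 0.9 — looks great
print(roc_auc_score(y_true, always_zero))   # 0.5 — chance level
```

A missed positive here would mean labeling a minimally conscious patient as unresponsive, which is exactly the error the metric needs to penalize.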
The authors also took care to push the DOC-Forest into real-world scenarios. They introduced random noise into the data, simulating what differences in data collection procedures might look like. They took into account different arrangements of sensors on the skull. They also used the algorithm on a different set of patients from a hospital in Liege, Belgium. In each case, the DOC-Forest performed well, with roughly the same performance.
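These robustness checks can be sketched in miniature. The snippet below—synthetic data again, mirroring the spirit of the paper’s checks rather than its methods—adds Gaussian noise to the features and drops half of the “sensor” columns, then re-scores the classifier each time.

```python
# Sketch of robustness checks: perturb the features, then re-evaluate.
# Entirely synthetic; illustrative of the idea, not the paper's procedure.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)
X[y == 1, :4] += 1.0  # plant a detectable group difference

clf = ExtraTreesClassifier(n_estimators=300, random_state=0)
baseline = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

X_noisy = X + rng.normal(scale=0.5, size=X.shape)  # simulated recording noise
X_fewer = X[:, ::2]                                # keep every other "sensor"
noisy = cross_val_score(clf, X_noisy, y, cv=5, scoring="roc_auc").mean()
fewer = cross_val_score(clf, X_fewer, y, cv=5, scoring="roc_auc").mean()
print(f"baseline {baseline:.2f}, noisy {noisy:.2f}, fewer {fewer:.2f}")
```

If performance holds up under these perturbations, as it did for the DOC-Forest, that is evidence the classifier isn’t leaning on fragile quirks of one hospital’s recording setup.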
From a certain perspective, this machine learning algorithm is a significant advance. EEG data is complex and contains multiple dimensions—time, frequency, testing condition, sensor location, etc. Think pages and pages of squiggly waves on a computer screen. Typically, researchers would focus on a handful of easy-to-interpret aspects of the data, say the appearance of a specific brainwave during the listening task. This focus on interpretability excludes potentially important aspects of the data, though. Machine learning doesn’t have this human bias toward interpretability and communicability. It just focuses on classifying the data correctly, which is all that’s needed here.
If put into practice, the DOC-Forest could be a helpful tool for an inexperienced neurologist. It would run through the squiggly lines of EEG data and provide odds that the patient has some level of consciousness the doctor missed at the bedside. There’s a circularity here, though. The algorithm is “trained” on cases that human neurologists diagnosed with bedside tests. While the group at Pitié-Salpêtrière was able to track patients over time to minimize misdiagnoses, the algorithm merely associates EEG signals with those—albeit more expert—bedside diagnoses.

What, though, of a form of consciousness that’s not revealed in any of these tests, EEG or otherwise? Keep in mind we don’t really know where and how consciousness emerges. We don’t have much sense of the forms conscious experience may take beyond the ones we experience ourselves. One could argue that our minimal understanding of the problem means we shouldn’t get the machines involved quite yet. On the other hand, it’s not clear we’ll ever have satisfying answers to these questions. So why not let a carefully designed tool like the DOC-Forest help make decisions within our current understanding of consciousness? There’s no easy answer, but it’s something that should be discussed as these tools push closer to everyday use.