An unexpected link between computer science and the ethics of consent in the acutely comatose

Yesterday, Dr Weijer from Western U came to the STREAM research group at McGill to give a talk on the ethics of fMRI studies on acutely comatose patients in the intensive care unit. One of the topics he covered briefly (it was not the main topic of his talk) was that of patients who may be “awake,” but generally unaware of their surroundings, while in an acutely comatose state. Using fMRI, yes-or-no questions can be put to some of these patients: they are instructed to imagine playing tennis for “yes,” and to imagine navigating their home for “no.” Because these two tasks activate quite different brain regions (motor imagery engages the supplementary motor area, while spatial navigation engages areas such as the parahippocampal gyrus), the activation patterns can be used to distinguish the two responses with some accuracy. In some rare cases, patients in this condition are able to consistently answer biographical questions, indicating that they are, in some sense, conscious.
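To make the decision rule concrete, here is a minimal sketch in Python of how per-question activation in the two regions might be reduced to a yes/no answer. Everything here is invented for illustration (the function name, the scalar activation scores, the margin threshold); real analyses compare statistical activation maps across many repeated trials rather than two numbers.

```python
def classify_response(motor_activation: float,
                      navigation_activation: float,
                      margin: float = 1.5) -> str:
    """Toy decision rule: answer 'yes' if motor-imagery activation
    clearly dominates, 'no' if navigation activation clearly
    dominates, and report nothing otherwise. The margin is an
    arbitrary illustrative threshold."""
    if motor_activation > margin * navigation_activation:
        return "yes"   # patient imagined playing tennis
    if navigation_activation > margin * motor_activation:
        return "no"    # patient imagined navigating their home
    return "inconclusive"

# Hypothetical activation scores (arbitrary units) for three questions:
print(classify_response(4.2, 1.1))  # -> yes
print(classify_response(0.9, 3.8))  # -> no
print(classify_response(2.0, 2.1))  # -> inconclusive
```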

One of the questions that arises is: could we use this method to involve a comatose patient in decision-making about her own care, in cases where this sort of communication can be established?

Informed consent in medical ethics is usually conceived in terms of three elements: disclosure, capacity, and voluntariness. The most obvious question in the cases we’re considering is whether you could ever know with certainty that a comatose person has the capacity to make decisions about her own care. (Indeed, a comatose patient is often the textbook example of someone who lacks the capacity to consent.) Dr Weijer was generally sceptical on that front.

Partway through his discussion, I had the impression that the problem was strangely familiar. If we abstract away some of the details, we are left with an experimenter sending natural-language queries into a black-box system that replies with a binary (0/1) output. The experimenter must then make the best evaluation she can as to whether the black box contains a person, or whether it merely produces an “automatic” response of some kind.
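The abstraction is easy to state in code. The sketch below (Python; the class and function names, the random responder, and the scoring rule are all my own invention, not anything presented in the talk) captures the experimenter’s predicament: she only ever sees the stream of bits, so the best she can do is score consistency on questions whose true answers she already knows.

```python
import random
from abc import ABC, abstractmethod

class BlackBox(ABC):
    """The abstracted setup: a natural-language question goes in,
    a single bit comes out."""

    @abstractmethod
    def ask(self, question: str) -> int:
        """Return 1 for 'yes' or 0 for 'no'."""

class AutomaticResponder(BlackBox):
    """A box with no one inside: it answers at random."""

    def ask(self, question: str) -> int:
        return random.randint(0, 1)

def score_consistency(box: BlackBox, known_answers: dict[str, int]) -> float:
    """Ask biographical questions whose true answers are already known
    and return the fraction answered correctly. A random responder
    hovers around 0.5 in the long run; a consistently high score is
    evidence (never proof) that the box contains a person."""
    correct = sum(box.ask(q) == truth for q, truth in known_answers.items())
    return correct / len(known_answers)
```

Note that nothing in this interface distinguishes a patient from a chatbot, which is exactly what makes the analogy below possible.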

Those of you with some background in computer science will recognise this as the Turing Test. In the 65 years since it was first proposed, most researchers have, for one reason or another, abandoned the Turing Test as a way of addressing the question of artificial intelligence, although it still holds a certain popular sway: claims of chatbots that can beat the Turing Test still make the news. And while many would deny that it even matters whether a chatbot can make you believe it is a person, in the fMRI version involving a comatose patient, no one can dispute that something important is at stake.