I recently learned what exactly a Nash equilibrium is, and I’m really excited to start applying the idea in my everyday life. Hence, I will apply what I’ve learned in Game Theory so far to the field of medical research ethics.
First, some definitions: A Nash equilibrium is a set of strategies that the players in a formalised game adopt such that the utility that each player receives for her chosen strategy is the greatest, given the choices of strategies of all the other players in the game.
This could be formalised as follows:
A Nash equilibrium exists when u_i(a_i, a_-i) ≥ u_i(a_i′, a_-i) for all a_i′ and all i, where:
- u_i is a function whose range is utility values for player i and whose domain is an ordered n-tuple of strategies taken by all the players in the game
- a_i is the chosen strategy of player i
- a_-i is the profile (tuple) of strategies chosen by all the other players, and
- a_i′ is some alternate strategy that player i might adopt.
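This condition can be checked mechanically. Here is a minimal Python sketch (the function name and data layout are my own, not standard notation), tested against the classic Prisoner's Dilemma:

```python
from itertools import product

def is_nash(payoffs, profile, strategies):
    """True if no player can strictly improve by unilaterally deviating
    from `profile` -- i.e. u_i(a_i, a_-i) >= u_i(a_i', a_-i) for all i, a_i'."""
    for i, options in enumerate(strategies):
        for alt in options:
            deviated = profile[:i] + (alt,) + profile[i + 1:]
            if payoffs[deviated][i] > payoffs[profile][i]:
                return False  # player i would rather switch to `alt`
    return True

# Sanity check with standard Prisoner's Dilemma payoffs:
pd_payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
pd_strategies = (("C", "D"), ("C", "D"))
print(is_nash(pd_payoffs, ("D", "D"), pd_strategies))  # True: mutual defection
print(is_nash(pd_payoffs, ("C", "C"), pd_strategies))  # False: each would defect
```

Note that the check uses a strict inequality for the deviation: a profile still counts as an equilibrium if a player is merely indifferent between staying and switching.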
What’s interesting about Nash equilibria is that given a particular formalised game, other non-Nash sets of strategies are “unstable”—that is, if a player finds out that given the strategy choices of the other players, she could have made a better decision, she will change her strategy accordingly.
The famous Prisoner’s Dilemma (look it up if you haven’t heard of it) is a great example of a Nash equilibrium where the outcome for each of the players is not optimal, even though they are in equilibrium.
What’s interesting to me is how this can be applied to medical research, if we make certain simplifying assumptions. Let’s imagine that medical research is like a two-player game. The players are the pharmaceutical industry on the one hand and some other participant in human research on the other.
In the tables below, Big Pharma has two strategies open to it—developing a “seeding” study or developing a “quality” study. The other participant (who could be a research subject or a physician-investigator or a journal that publishes medical research papers) also has two strategies available—participating in the study developed by Big Pharma, or not participating.
If the other stakeholder in the research project doesn’t participate, neither Big Pharma nor the stakeholder receives any benefit. The utility outcomes for Big Pharma and the other stakeholder are 0, 0, respectively.
If the other stakeholder participates and the study is a high-quality study that provides socially valuable medical information, Big Pharma and the other stakeholder receive utilities of 1, 1, respectively.
But, if it turns out that the pharmaceutical company has produced a “seeding” study—one designed for a narrow end, namely serving as a marketing tool to get physicians used to prescribing a drug that has already been licensed—the pharmaceutical company receives a utility of 2 and the other stakeholder receives a utility of -1. That is to say, Big Pharma gets a big payout, because hundreds of doctors are now prescribing the drug, but the other stakeholder incurs a net harm in some way. (If she is a study participant, she may feel used or cheated. If she is a doctor, it may be a source of professional embarrassment. If it is a journal that published a “seeding” study, that journal will lose some of its reputation, etc.)
| Big Pharma \ Stakeholder | Participates | Doesn't participate |
|---|---|---|
| “Seeding” study | 2, -1 | 0, 0 * |
| “Quality” study | 1, 1 | 0, 0 |

Table 1. Asterisk (*) indicates Nash equilibrium.
So if we go through each set of strategies that the players in this game can take, we find that the one with the asterisk is the only one that is a Nash equilibrium. This is because if you are Big Pharma in this game, given that the other stakeholder has chosen not to participate, you are indifferent between strategies, and if you are the other stakeholder, given that Big Pharma has chosen to develop a “seeding” study, your best choice is to not participate.
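That walk through the strategy profiles can be automated. Here is a short Python sketch of the same check (the strategy labels and the best-response helper are my own choices):

```python
from itertools import product

# Payoffs from Table 1, as (Big Pharma utility, stakeholder utility).
payoffs = {
    ("seeding", "participate"): (2, -1),
    ("seeding", "abstain"):     (0, 0),
    ("quality", "participate"): (1, 1),
    ("quality", "abstain"):     (0, 0),
}
pharma_moves = ("seeding", "quality")
other_moves = ("participate", "abstain")

def is_nash(pharma, other):
    # Big Pharma cannot strictly gain by switching study types...
    pharma_ok = all(payoffs[(p, other)][0] <= payoffs[(pharma, other)][0]
                    for p in pharma_moves)
    # ...and the other stakeholder cannot strictly gain by switching either.
    other_ok = all(payoffs[(pharma, o)][1] <= payoffs[(pharma, other)][1]
                   for o in other_moves)
    return pharma_ok and other_ok

equilibria = [prof for prof in product(pharma_moves, other_moves)
              if is_nash(*prof)]
print(equilibria)  # [('seeding', 'abstain')] -- the asterisked cell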
It’s interesting to note that this setup is analogous to markets for financial products and other “credence goods,” where the buyer has a really hard time telling the difference between high- and low-quality products.
But what if no one caught on that the study was a “seeding” study? Let’s imagine that Big Pharma got away with running a seeding study and no one ever figured out that that’s what it was. We would end up with a game that can be represented as follows:
| Big Pharma \ Stakeholder | Participates | Doesn't participate |
|---|---|---|
| “Seeding” study | 2, 1 * | 0, 0 |
| “Quality” study | 1, 1 | 0, 0 |

Table 2. Asterisk (*) indicates Nash equilibrium.
Here, the equilibrium has shifted. This explains why pharmaceutical companies try to develop “seeding” studies, and why they try to hide it.
So the question becomes, how can we set up the “rules of the game” of medical research in order to shift the equilibrium such that other stakeholders will participate and the pharmaceutical company will develop quality studies?
Or to put it another way, if we assume that the utility for non-participation for all players is 0, and that both the pharmaceutical company and the other stakeholder should come away from a quality study having received some utility, what value of x will put the Nash equilibrium where the asterisk is in the table below?
| Big Pharma \ Stakeholder | Participates | Doesn't participate |
|---|---|---|
| “Seeding” study | x, -1 | 0, 0 |
| “Quality” study | 1, 1 * | 0, 0 |

Table 3. Asterisk (*) indicates Nash equilibrium.
The value of x must be less than 1 in order for the Nash equilibrium to fall where the pharmaceutical company develops a “quality” study and the other stakeholder participates. This is because if x = 1, Big Pharma will be indifferent between its strategies, given the choice of the other player, and if x > 1, as we saw in Table 1, the equilibrium will shift to where Big Pharma produces a “seeding” study and the other stakeholder declines to participate. (Strictly speaking, even with x < 1 the outcome where Big Pharma runs a “seeding” study and the stakeholder stays away remains a weak equilibrium—neither player can strictly gain by deviating from it—but the asterisked outcome is now an equilibrium too, and it is the one both players prefer.)
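We can verify how the equilibria depend on x by sweeping over candidate values. A self-contained Python sketch (again with my own labels for the strategies):

```python
from itertools import product

def equilibria(x):
    """Pure-strategy Nash equilibria of Table 3, where x is Big Pharma's
    payoff for a seeding study that the stakeholder participates in."""
    payoffs = {
        ("seeding", "participate"): (x, -1),
        ("seeding", "abstain"):     (0, 0),
        ("quality", "participate"): (1, 1),
        ("quality", "abstain"):     (0, 0),
    }
    pharma_moves = ("seeding", "quality")
    other_moves = ("participate", "abstain")
    result = []
    for pharma, other in product(pharma_moves, other_moves):
        # No strictly profitable unilateral deviation for either player:
        pharma_ok = all(payoffs[(p, other)][0] <= payoffs[(pharma, other)][0]
                        for p in pharma_moves)
        other_ok = all(payoffs[(pharma, o)][1] <= payoffs[(pharma, other)][1]
                       for o in other_moves)
        if pharma_ok and other_ok:
            result.append((pharma, other))
    return result

print(equilibria(2))    # x > 1: only ('seeding', 'abstain'), as in Table 1
print(equilibria(0.5))  # x < 1: ('quality', 'participate') is now an equilibrium
```

Running this confirms the analysis: for x > 1 the sole equilibrium is the seeding study with non-participation, while for x < 1 the quality study with participation becomes an equilibrium (alongside the weak non-participation one).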
So in real life, how do we make x less than 1? There has to be some sort of sanction or penalty for pharmaceutical companies that produce seeding studies, one that makes their expected utility less than that of a quality study. This can be done either by putting a tax on seeding studies or by prohibiting them outright through regulation.