On techbros in healthcare and medical research

My thoughts on the following may change over time, but I have found it helpful to think about the threats that techbros pose to healthcare and medical research in terms of the following four major categories. These categories aren’t completely disjoint, of course, and many of the examples I give could fit under more than one. Nor am I claiming that they exhaust the ways that techbros pose a threat to healthcare.

1. Medical-grade technosolutionism, or “medicine plus magic”

When Elizabeth Holmes founded her medical diagnostics company Theranos, she fit exactly into the archetype that we all carry around in our heads for the successful whiz-kid tech-startup genius. She was not just admitted to Stanford University, but she was too smart for it, and dropped out. She wore Steve Jobs-style black turtlenecks. She even founded her company in the state of California—innovation-land.

She raised millions of dollars for her startup based on the claim that she had come up with a novel method for running dozens of medical diagnostic tests from a single drop of blood. The research supposedly backing these claims was conducted entirely outside the realm of peer review. This practice was derisively dubbed “stealth research,” and it was widely criticized for the threat that this mode of innovation poses to the enterprise of medical research as a whole.

It was, of course, too good to be true. Theranos has been exposed as a complete fraud, and the company has been shut down. This sort of thing happens on a smaller scale on crowdfunding sites with some regularity. (Remember the “Healbe” Indiegogo campaign?)

While I’m not entirely sure what is driving this particular phenomenon, I have a few pet theories. For starters, we all want to believe in the whiz-kid tech-startup genius myth so much that we collectively just let this happen, out of sheer misguided hope that the techbros will somehow get it right. And on some level, I understand that impulse. Medical research progress is slow, and it would be wonderful if there actually were a class of smart and talented geniuses out there who could speed it up by just applying their Apple Genius powers to the matter. Alas, it is not that easy.

And unfortunately, there is a certain kind of techbro who does think like that: “I’m a computer-genius. Medicine is just a specialized case that I can just figure out if I put my mind to it.” And there’s also a certain kind of medical professional who thinks, “I’m a doctor. I can figure out how to use a computer, thank-you-very-much.” And when those two groups of people intersect, sometimes they don’t call each other out on their lack of specialized knowledge, but rather, they commit synergy.

And worse, all this is happening under an extreme form of capitalism that has poisoned our minds to the extent that the grown-ups—the people who should know better—are turning a blind eye because they can make a quick buck.

Recommended reading: “Stealth Research: Is Biomedical Innovation Happening Outside the Peer-Reviewed Literature?” by John Ioannidis, JAMA. 2015;313(7):663-664.

2. The hype of the week (these days, it’s mostly “medicine plus blockchain”)

In 2014 I wrote a post on this very blog that I low-key regret. In it, I suggest that the blockchain could be used to prospectively timestamp research protocols. This was reported on in The Economist in 2016 (they incorrectly credit Irving and Holden; long story). Shortly thereafter, there was a massive uptick of interest in applications of the blockchain to healthcare and medical research. I’m not claiming that I was the first person to think about blockchain in healthcare and research, or that my blog post started the trend, but I am a little embarrassed to say that I was a part of it.

Back in 2014, being intrigued by the novelty of the blockchain was defensible. There’s a little bit of crypto-anarchist in all of us, I think. At the time, people were just starting to think about alternate applications for it, and there was still optimism that the remaining problems with the technology might be solved. By 2016, blockchain was a bit passé; the nagging questions about its practicality that everyone expected to be answered by then simply hadn’t been. Now it’s 2019, the blockchain as a concept has been around for a full ten years, and I think it’s safe to say that those solutions aren’t coming.

There just aren’t any useful applications for the blockchain in medicine or science. The kinds of problems that medicine and science have are not the kinds of problems that a blockchain can solve. Even my own proposed idea from 2014 is better addressed in most cases by using a central registry of protocols.
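For what it’s worth, the mechanism my 2014 post gestured at is simple enough to sketch in a few lines. Here is a minimal illustration of the underlying hash-commitment idea, using only Python’s standard library and a made-up protocol snippet; the only thing a blockchain adds is a place to publish the digest, and a central registry serves that role just as well.

```python
# A minimal sketch of prospectively timestamping a research protocol
# by hash commitment. The protocol text below is made up.
import hashlib
import json
from datetime import datetime, timezone

def commit_to_protocol(protocol_text: str) -> dict:
    """Produce a commitment that could be published anywhere public:
    a registry, a journal, or (needlessly) a blockchain."""
    digest = hashlib.sha256(protocol_text.encode("utf-8")).hexdigest()
    return {"sha256": digest,
            "committed_at": datetime.now(timezone.utc).isoformat()}

def verify_protocol(protocol_text: str, commitment: dict) -> bool:
    """Later, anyone can check that the protocol (e.g. its pre-specified
    outcomes) was not quietly rewritten after the results came in."""
    digest = hashlib.sha256(protocol_text.encode("utf-8")).hexdigest()
    return digest == commitment["sha256"]

protocol = "Primary outcome: all-cause mortality at 12 months."
commitment = commit_to_protocol(protocol)
print(json.dumps(commitment, indent=2))
assert verify_protocol(protocol, commitment)
```

Note that nothing in this sketch requires decentralization: the hard parts (getting researchers to commit prospectively, and getting readers to check) are social, not technological.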

Unfortunately, there continues to be well-funded research on blockchain applications in healthcare and science. It is a tech solution desperately in search of a problem, and millions in research funding have already been spent toward this end.

This sort of hype cycle doesn’t just apply to “blockchain in science” stuff, although that is probably the easiest one to spot today. Big new shiny things show up in tech periodically, promising to change everything. And with surprising regularity, there is an attempt to shoehorn them into healthcare or medical research.

It wasn’t too long ago that everyone thought that smartphone apps would revolutionize healthcare and research. (They didn’t!)

3. “The algorithm made me do it”

Machine learning and artificial intelligence (ML/AI) techniques have been applied to every area of healthcare and medical research you can imagine. Some of these applications are useful and appropriate. Others are poorly-conceived and potentially harmful. Here I will gesture briefly toward some ways that ML/AI techniques can be applied within medicine or science to abdicate responsibility or bolster claims where the evidence is insufficient to support them.

There are a lot of problems that could go under this banner, and I won’t claim that this is even a good general overview of the problems with ML/AI, but many of the major ones stem from the “black box” nature of ML/AI techniques. That opacity is a hard problem to solve, because it is almost a constitutive part of what a lot of ML/AI techniques are.

The big idea behind machine learning is that the algorithm “teaches itself,” in some sense, how to interpret the data and make inferences. This often means that ML/AI techniques don’t easily allow the person using them to audit how inputs into the system are turned into outputs. There is ongoing work in this area, but ML/AI often doesn’t lend itself well to explaining itself.

There is an episode of Star Trek, called “The Ultimate Computer,” in which Kirk’s command responsibilities are in danger of being handed over to a computer called the “M-5.” As a test of the computer, Kirk is asked who he would assign to a particular task, and his answer differs slightly from the one given by the M-5. The moment that most thoroughly tested my ability to suspend disbelief was when the M-5 is asked to justify its decision, and it is able to do so.

I’ve been to tutorials at a couple of different institutions where they teach computer science students (or tech enthusiasts) to use Python machine-learning libraries such as scikit-learn. Getting an answer to “Why did the machine learning programme give me this particular answer?” is really, really hard.

Which means that potential misuses or misinterpretations are difficult to address. Once you get past a very small number of inputs, there’s rarely any thought given to trying to figure out why the software gave you the answer it did, and in some cases it becomes practically impossible to do so, even if you wanted to.
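To make this concrete, here is a minimal sketch of the problem, using scikit-learn and synthetic data (my own illustration, not drawn from any real study): the model below predicts well, but the closest thing it offers to an explanation is a global ranking of features, which says nothing about why any single prediction came out the way it did.

```python
# A minimal sketch of the black-box problem, using scikit-learn and
# synthetic data (nothing here comes from a real medical dataset).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, say, patient measurements and a diagnosis label.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# The model makes a confident prediction for a single "patient"...
print("prediction:", model.predict(X_test[:1]))

# ...but the closest built-in thing to an explanation is a global
# ranking of features. It says nothing about *this* prediction.
print("feature importances:", model.feature_importances_.round(3))
```

Feature importances and similar tools give hints, but they are a long way from the M-5 calmly justifying its staffing decisions.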

And with the advent of “Big Data,” there is often an unspoken assumption that if you just get enough bad data points, machine learning or artificial intelligence will magically transmute them into good data points. Unfortunately, that’s not how it works.

This is dangerous because the opaque nature of ML/AI may hide invalid scientific inferences based on analyses of low-quality data, causing well-meaning researchers and clinicians, who rely on robust medical evidence, to provide poorer care. Decision-making algorithms may also mask the unconscious biases built into them, giving them an air of detached impartiality while still carrying all the human biases of their programmers.

There are many ways that ML/AI can amplify human biases while presenting them as impartial, but it is worth emphasizing that these problems will, of course, harm the vulnerable, the poor, and the marginalized the most. Or, put simply: the algorithm is racist.

Techbros like Bill Gates and Elon Musk are deathly afraid of artificial intelligence because they imagine a superintelligent AI that will someday, somehow take over the world or something. (I will forgo an analysis of the extreme hubris of the kind of person who needs to imagine a superhuman foe for themselves.) A bigger danger, and one that is already materializing, is the noise and false signals that artificial intelligence will insert into the medical literature, and the way it obscures the biases of the powerful.

Recommended reading: Weapons of Math Destruction by Cathy O’Neil.

4. Hijacking medical research to enable the whims of the wealthy “… as a service”

I was once at a dinner party with a techbro who has absolutely no education at all in medicine or cancer biology. I told him that I was doing my PhD on cancer drug development ethics. He told me with a straight face that he knew what “the problem with breast cancer drug development” is, and could enlighten me. I took another glass of wine as he explained to me that the real problem is that “there aren’t enough disruptors in the innovation space.”

I can’t imagine being brazen enough to tell someone who is doing their PhD on a subject that I know it better than they do, but that’s techbros for you.

And beyond the obnoxiousness of this anecdote, this is an idea that is common among techbros—that medicine is being held back by “red tape” or “ethical constraints” or “vested interests” or something, and that all it would take is someone who could “disrupt the industry” to bring about true innovation and change. They seriously believe that if they were just given the reins, they could fix any problem, even ones they are entirely unqualified to address.

For future reference, whenever a techbro talks about “disrupting an industry,” they mean: “replicating an already existing industry, but subsidizing it heavily with venture capital, and externalizing its costs at the expense of the public or potential workers by circumventing consumer-, worker- or public-protection laws in order to hopefully undercut the competition long enough to bring about regulatory capture.”

Take, for example, Peter Thiel. (Ugh, Peter Thiel.)

He famously funded offshore herpes vaccine tests in order to evade US safety regulations. He is also extremely interested in life extension research, including transfusions from healthy young blood donors. He was willing to literally suck the blood from young people in the hopes of extending his own life. And these treatments were gaining popularity, at least until the FDA made a statement warning that they were dangerous and ineffective. He also created a fellowship to enable students to drop out of college to pursue other things such as scientific research outside of the academic context. (No academic institution, no institutional review board, I suppose.)

And this is the work of just one severely misguided techbro, who is able to make all kinds of shady research happen because of the level of wealth he has been allowed to accumulate. Other techbros are leaving their mark on healthcare and research in other ways. The Gates Foundation, for example, is strongly “pro-life,” which is one of the strongest arguments I can think of for why philanthropists should instead be taxed, with the funds they would have spent on their own conception of the public good dispersed through democratic means, rather than allowing the personal opinions of an individual to become de facto healthcare policy.

The moral compass behind techbro incursions into medical research is calibrated to a different North than the one most of us recognize. Maybe one could come up with a way to justify any one of these projects morally. But you can see that the underlying philosophy (“we can do anything if you’d just get your pesky ‘ethics’ out of the way”) and priorities (e.g. slightly longer life for the wealthy at the expense of the poor) are different from what we might want to be guiding medical research.

Why is this happening and what can be done to stop it?

Through a profound and repeated set of regulatory failures, and a sort of half-resigned public acceptance that techbros somehow “deserve” levels of wealth comparable to those of nation states, we have put ourselves in the position where a single techbro can pervert the course of entire human research programmes. Because of the massive power they hold over industry, government, and nearly every part of our lives, we have come to idolize techbros uncritically, and this has leaked into the way we think about applications of their technology in medicine and science. This was all, of course, a terrible mistake.

The small-picture solution is to do all the things we should be doing anyway: ethical review of all human research; peer-review and publication of research (even research done with private funds); demanding high levels of transparency for applications of new technology applied to healthcare and research; etc. A high proportion of the damage they so eagerly want to cause can probably be avoided if all our institutions are always working at peak performance and nothing ever slips through the cracks.

The bigger-picture solution is that we need to fix the massive regulatory problems in the tech industry that allowed techbros to become wealthy and powerful in the first place. Certainly, a successful innovation in computer technology should be rewarded. But that reward should not include the political power to direct the course of medicine and science for their own narrow ends.

Antibiotics and antivirals

More and more often these days, I come across articles about new antiviral drugs that look really promising. Further, I’m sure we’ve all read or heard about the phenomenon of antibiotic resistance: strains of bacteria that acquire the ability to survive treatment with antibiotics that would otherwise kill them and cure the patient.

Since the discovery of antibiotics, bacterial infections have been relatively easy to treat, whereas viral infections have largely been untreatable directly: the treatment for a bacterial infection is penicillin, but the treatment for the common cold is bed rest.

What I find interesting about these developments is that we may be entering an age where this is reversed: Bacterial infections may become difficult or impossible to treat directly, while viral infections can be simply and easily cured with drugs.

Academic vs corporate study materials

While studying from the privately-produced MCAT study guides that I bought, I’ve noticed some differences between the way material is presented in the study guides as opposed to most academic material that I’ve consumed over the years.

I suppose that the Kaplan study guides are the product of different sorts of pressures than the textbooks and course notes produced by academia, and that’s not necessarily a bad thing.

Academia is designed to produce freedom of thought and allow discourse at the highest level. It is supposed to be a no-holds-barred intellectual brawl. That’s why universities have the institution of tenure. It’s so that professors can pursue their research along whatever lines it takes them, without worrying that they’ll lose their job if they discover something that their employer doesn’t like. (This is a massive idealization and simplification of course.)

The Kaplan study guides, on the other hand, were designed for one purpose: to make profit for Kaplan’s shareholders. The Kaplan company thinks it can make money by producing MCAT prep materials and services and selling them. The pressure for the Kaplan guides to be good comes from the need to avoid being sued for publishing misleading MCAT guides, and from wanting customers with good experiences, who will recommend Kaplan study guides and prep courses to others.

Both academia and the commercial preparatory systems are set up such that they (generally) produce good curricula, but I’ve noticed some differences between the two, which I think demonstrate some characteristic features of each.

For example, the Kaplan study guides are written with mnemonics in the margins, silly analogies that are intentionally carried too far so as to be memorable, and the guide’s text is written with humour.

Academics are often guilty of making the material difficult to learn; at the least, there isn’t nearly the same emphasis on trying to help the student pass the test.

The Kaplan guides are written engagingly, even soothingly. They are specifically trying not to scare you with the amount of material you need to know.

I had a physiology prof who stood at the front of the lecture theatre, held up the course package on the first day of the course, and actually did try to scare us with the sheer size of the volume.

I don’t think I’d go so far as to say that the Kaplan guides are entertaining, but they are certainly better to read than that physiology course package was.

The Kaplan guides rate each concept out of six stars. The higher the number of stars, the more frequently it is examined on the MCAT, and the easier it is to learn. So a one-star concept would be one that is tested very infrequently and is difficult to master. This is to help students focus on the pieces of information that will best help them score well on the exam.

I have had courses (and textbooks) where the most insignificant detail is dwelt upon ad nauseam, because it is the professor’s favourite subject. This sort of thinking is encouraged in the academic world, since new developments in science and philosophy often come about because of attention to the details of seemingly insignificant problems.

Such ways of thinking do not help students pass exams, though, so the Kaplan guides are very focussed.

In some ways, academia could learn something from the focus that the corporate world brings to its prep materials. I mean, really, who in their right mind (except an academic philosopher) would recommend studying the works of Immanuel Kant in an attempt to learn the discipline of rigorous thought?

Medicine admissions is big business

I have been studying for the MCAT using a set of books from Kaplan, an MCAT prep company, and I’ve realised a few things.

First off, medicine admissions is big business. I’m not even talking about medicine. I just mean the admissions process. Imagine you just wanted to apply to all the medical schools in Ontario, for example. First you would have to write the MCAT. This will cost you $230. Then, you will need to pay for the application, and to apply to every school in Ontario through OMSAS, it will cost about $660.

That’s $890 just to apply and take the MCAT.

Now imagine that you want to take a prep course for the MCAT. I went shopping around for MCAT prep, and someone from Kaplan tried to sell me a comprehensive package which included one-on-one tutoring, online lectures, books, and practice exams. All told, the tutor would have been making roughly $180 per hour from me, and the package would cost me $2799.
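Just to spell out the arithmetic (a quick sketch using only the figures quoted above; the implied-hours number is my own extrapolation, since the package also included lectures, books, and practice exams):

```python
# Back-of-the-envelope MCAT costs, using only the figures quoted above.
mcat_fee = 230         # writing the MCAT
omsas_fee = 660        # applying to every Ontario medical school via OMSAS
print("bare minimum to apply:", mcat_fee + omsas_fee)  # 890

kaplan_package = 2799  # quoted price for the comprehensive package
tutor_rate = 180       # quoted effective hourly rate for the tutor
# If the entire package price went to tutoring (it didn't), it would buy:
print("implied tutor hours:", round(kaplan_package / tutor_rate, 1))  # ~15.5
```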

There is a whole industry built up around the fact that there’s huge competition to get into medical school. I ended up spending $150 for review books and practice exams, myself.

I can understand companies like Prep 101 and Kaplan charging huge sums for their expertise and time. They are, after all, in the business of making money, and people (generally) are willing to spend money on investments that they think will bring a greater return in the long run. I have no problem with them.

That said, there’s no way they are getting $2700 from me! I don’t care how good their tutor is. There’s no way he’s worth $180 an hour. Imagine knowing that your MCAT tutor is coming, and that you’re paying that much for him. I imagine I would spend as much time prepping for my meeting with the tutor as I would spend prepping for the MCAT, so that I would be sure to get my money’s worth, and that sort of mentality might not actually best help one to prepare for the MCAT.

Anyway, I was thinking, and of course, I can understand wanting policies that make it difficult for someone to get into medical school. You don’t want an unqualified person committing surgery against a patient, after all. So you would want to set a high intellectual barrier, or a high skill barrier, or otherwise make it difficult, but in ways that eliminate the greatest number of people who should not be doctors.

What’s confusing though, is why medical academia would have policies that produce such a high financial barrier to entry. The $890 is what you would pay if you were going for a bargain-basement medical school admission. That’s the minimum you would have to pay. You’re not buying any extra review material on that budget. You’re not getting any practice exams, tutoring or classes. That’s just what it costs to apply, and nothing more.

Maybe it’s to weed out those who might just apply on a whim. Or maybe doctors don’t want new applicants to be spared any hardship they themselves had to suffer. Maybe it actually does cost that much to ensure that the process is fair. I’m not sure what the real reason is.