
Making it personal: The rare disease literature sucks

12 January 2016
Spectral human karyotype (Photo credit: Wikipedia)

I’ve been engaged in different ways with some people in the rare genetic disease community for a few years. In most cases the initial issue that brought us together was access to literature, and in some cases that led towards Open Science issues more generally. Many people in the Open Access space have been motivated by personal stories and losses, and quite a few of those relate to rare genetic diseases.

I’ve heard, and relayed to others, how access has been a problem, and how the structure and shape of the literature isn’t helpful to patients and communities. At an intellectual level the asymmetry of information for these conditions is easy to grasp. Researchers and physicians are dealing with many different patients with many different conditions; the condition in question is probably only a small part of what they think about. But for the patients and their families the specific condition is what they live. Many of these people become far more expert on the condition, and on the day-to-day issues that one particular patient experiences, than the researchers and physicians, but that expertise is rarely captured in the case studies and metastudies and reviews. The problem is easy to understand. But for me it’s never been personal.

That changed on Christmas Eve, when a family member got a diagnosis of a rare chromosomal mosaic condition. So for the first time I was following the information discovery pathway that I’ve seen many others work their way through. Starting with Google I moved to Wikipedia, and then on to the (almost invariably US-based) community networks for this condition. From the community site and its well curated set of research articles (pretty much entirely publisher PDFs on the community website) I started working through the literature. At first I hit the access barrier. For what it’s worth, I found an embargoed copy of one of the articles in a UK Institutional Repository with a “request a copy” button. Copy requested. No response two weeks later. I keep forgetting that I now actually have a university affiliation, and Curtin University has remarkably good journal holdings, but it’s a good reminder of how normal folk live.

Of course once I hit the literature I have an advantage, having a background in biomedicine. From the 30-odd case studies I got across to the OMIM Database (see also the Wikipedia Entry for OMIM), which netted me a total of two articles that actually looked across the case studies and tried to draw some general conclusions about the condition from more than one patient. I’m not going to say anything about the condition itself except that it involves mosaic tetrasomy, and both physical and intellectual developmental delay. One thing that is clear from the two larger studies is that physical and intellectual delays are strongly correlated. So understanding the variation in physical delay amongst the groups of patients, and where my family member sits on this, becomes a crucial question. The larger studies are…ok…at this, but certainly not as easy to work with as they should be. The case studies are a disaster.

The question for a patient, or concerned family member, or even the non-expert physician faced with a patient, is what the previously recorded knowledge tells us about likely future outcomes. Broad brush is likely the best we’re going to do given the low occurrence of the condition, but one might hope that the existing literature would help. It doesn’t. The case studies are worse than useless, each telling an interesting story about a patient who is not the patient you care about. Thirty case studies later all I know is that the condition is highly variable, that it seems to be really easy to get case studies published, and that there is next to no coordination in data collection. The case studies also paint a biased picture. There could be many mild undiagnosed cases out there, so statistics drawn from the case studies, or even the larger studies, would be useless. Or there might not be. It’s hard to know, but what is clear is how badly the information is organised if you want to get some sense of how outcomes for a particular patient might pan out.

None of this is new. Indeed many people from many organisations have told me all of this over many years. Patient organisations are driven to aggregate information, trying to ride herd on the researchers and physicians to gather representative data. They organise their own analyses, try to get a good sample of information, and in general structure information in a way that is actually useful to them. What’s new is my own reaction – these issues get sharper when they’re personal.

But even more than that, it seems like we’ve got the whole thing backwards somehow. The practice of medicine matters and large amounts of resources are at stake. That means we want good evidence on what works and what doesn’t. To achieve this we use randomised controlled trials as a mechanism to prevent us from fooling ourselves. But the question we ask in these trials, the question we very carefully structure things to ask, is “all things being equal, does X do Y?” That is not the question we, or a physician, or a patient, want the answer to. We want the answer to “given this patient, here and now, what are the likely outcomes?” And the structure of our evidence doesn’t, and I suspect can’t, answer that question. This is particularly true of rare conditions, and particularly those that are probably actually many conditions. In the end every treatment, or lack thereof, is an N=1 experiment without a control. I wonder how much, even under the best conditions, the findings of RCTs, of the abstract generalisation, actually help a physician to guide a patient or family on the ground?

There’s more to this than medicine; it cuts to the heart of the difference between the sciences and the humanities: the effort to understand a specific context and its story versus the effort to generalise and abstract, to understand the general. Martin Eve has just written a post that attacks this issue from the other end, asking whether methodological standards from the sciences, those very same standards that drive the design of randomised controlled trials, can be applied to literary analysis. The answer likely depends on what kinds of mistakes you want to avoid, and in what contexts you want your findings to be robust. Like most of scholarship, the answer is probably to be sure we’re asking the right question. I just happened to personally discover how at least some segments of the clinical research literature are failing to do that. And again, none of this is new.
