This is the first pass at an introductory chapter for a book I’ve had in my head to work on for a long time. The idea is that it relates some of my personal history shifting from my grounding in science towards the humanities, while interleaving this with a survey of the theoretical work that develops those different perspectives. This is just a first draft written on a Sunday afternoon. Comments, as always, welcome.
“As a patient I struggle to relate to survival curves…”
This is a book about narratives, perspectives, and knowledge. It’s a personal story of re-thinking the way that I think, as well as an attempt to find a way through the various, apparently irreconcilable, ways of understanding how we can know. Others, far more qualified than I, have attempted this in the past with differing degrees of success. In some ways this is more a survey of those attempts than anything new. What is perhaps different from those previous efforts is that I’m also telling a story of how I have trodden a path of my own through these differing ways of thinking about thinking, knowing about knowledge.
The idea of telling a story, one that traces a progression from being a scientist to being some sort of humanities scholar, is that my own changing perspective offers a different way of trying to bridge the gap that those others tried to fill. The gap itself is referred to in many ways, and from many different perspectives. Snow’s Two Cultures is the common touch-stone, but it appears in many differing versions: the gulf of incomprehension between the natural sciences and engineering on one side and the humanities on the other, the modernist consensus built on facts of nature versus post-modern perspectives rooted in context, traditional versus de-colonizing versions of history.
The battles between these incommensurate world views are also celebrated, sometimes with the verve, and steadfast adherence to the “correct” version, of historical re-creation societies. Shapin and Schaffer’s Leviathan and the Air-Pump, a broadside against the core tenets of the scientific method, or perhaps rather against an over-simplified version of the story we tell about it, is brandished as a badge of identity for those who wish to associate with the Edinburgh School. The Sokal hoax, a puncturing of the opaque and circuitous arguments and the lack of quality control said to underpin the sociology of science, is celebrated in science circles as a triumph of the scientific viewpoint, despite being a rather disingenuous and mis-represented tale that fails to engage with what the editors of Social Text were trying to achieve.
The battles are of course not limited to the academy. Is the role of journalists to “report facts” or to provide interpretation? Are media sources neutral? Can media sources be neutral? The rise of extremism, religious and racial violence, and our apparent devolution into a “post-fact” world are laid variously at the feet of post-modernist critical theory or neo-liberal economics, depending on your perspective. The question of how we come to know what we know, and how it can be tested and validated, is crucial at the same time as, indeed likely because of, the way technology is allowing us to construct a world in which we only interact with those who agree with us.
It may be crucial, but this is a problem that has not been solved despite millennia of work from great thinkers. We have a much richer understanding of how we can be wrong about what we know, from errors of logic, to the untrustworthiness of our perceptions, to the choice of foundational axioms, believed to be eternal truths, that turn out to be simply the product of history and culture. We know something about how these systems are inconsistent, and where their weakest points are, but despite heroic efforts there has been little success in putting them together. We have no clear ways to show whether knowledge is reliable, let alone true. How can telling a story help?
My goal is much more modest. Rather than asking whether we can tell whether something is true, perhaps it is more useful to think about the ways in which we can test our knowledge. Can we become better at identifying whether we are wrong, and is this more productive than trying to pin down the transcendental? If both god and the devil are in the details, might it nonetheless be easier to find the devil?
This is a question unashamedly rooted in an empirical perspective, one grounded in my formative history as a scientist. I will position the scientific method as a cultural practice that, at its best, is an effective means to prevent us from fooling ourselves. But it is for all that a practice with its limitations, and one stymied by a set of assumptions that are difficult to support: that a claim can be unambiguously communicated, that an experiment can directly test a claim, and that the connection between claim and experiment is clear and objective. When the scientific method fails internally, it is because these assumptions have not been fully tested.
I will argue that science is effective at helping us reach reliable answers to well-framed questions. But those questions are often limited. Sometimes by our capabilities as experimentalists, sometimes by an inconsistent or incoherent theoretical framework. Studies of reproducibility, of the communication of science, of its institutions and power structures show us how the questions we ask are biased, inappropriate, or confused. The best of these studies are not pursued from a scientific perspective.
Science is terrible at probing its questions critically. By contrast, humanistic disciplines are built on critique. Indeed the inverse criticism might be made of the humanities. They offer a brilliant set of tools for developing probing questions through shifting perspectives, undercutting power, asking how the same issues can be approached from an entirely different direction. The humanities are not so good at providing answers, or at least not ones that can be tested in the same way as the results of scientific experiments. But both provide a tool set for rooting out certain – different – kinds of errors.
In telling my own story, of a journey from a scientific background to a more humanistic perspective, my aim is less to build a system to reconcile two world views than to offer some experience of attempting to apply both. In alternating chapters I will tell the story of how my perspective shifted over the course of 15 years, and survey the scholars who have developed and studied those perspectives.
The epigraph that heads this chapter, half remembered from a tweet from some years back, offers one kind of perspective on that journey. I remember it striking me as pertinent as my views changed. To begin with, it was simply the idea that scientists needed to communicate better with those who could benefit, that a patient perspective was important and often missing from medical studies. Later, as I saw how power structures in the academy biased research towards the diseases of wealthy Americans, I saw it as posing a deeper question: how are the priorities of research set, and are we even asking the right questions?
Questioning the questions leads to a deeper concern. The system of medical research is set up with the aim of keeping us honest as we distribute the scarce resources available for medical treatment. The randomized controlled trial, the gold standard of medical research, is set up very carefully to ensure our best chance of getting a reliable answer to the question: “all other things being equal, does this intervention make the patient better/live longer”. These trials are the source of the survival curves. And yet this question, “all other things being equal…”, is never the question that a physician asks when recommending a treatment. Their question is “what is the best advice I can give to this particular person under these very particular circumstances”, a question that our entire edifice of medical information is extraordinarily badly configured to answer. Are we asking the wrong questions at the level of the entire system?
But how can we tell what is best for the individual patient? At the individual level placebo effects can matter, a rare side effect can be fatal – or curative – and the question of whether the patient “is better” is a largely subjective concern. It is not difficult to find people who will swear blind that homeopathy or acupuncture “works for them” regardless of a swathe of evidence that it has no effect in randomized trials. Personal testimony is suspect and memory is unreliable, and yet we reject them in one case while using them as the basis for our scientific studies in another.
Ironically the epigraph tells the story here. On actually checking its provenance I find that it is both much more recent than I thought, far too recent to have been part of my early shifts in thinking, and not even a direct quote. My memory is unreliable, a product of the narrative I have created for myself. The story I will tell is just another narrative, and an unreliable one at that.
But unreliable need not mean un-useful. If my task is to get better at testing my knowledge, then it can serve as metaphor, narrative thread, and reminder of the ways we can go wrong. The process of questioning, testing, and the way we can use external perspectives to do that is at the core. Whether it is the half-remembered words of an otherwise un-represented group, or the perspective of a technical system like the Twitter archive, it is a cycle of testing, stepping sideways, and testing again that lets us understand the limits of our knowledge.