This is the first draft of the second chapter of a book that I’m starting to work on. The initial draft of the first chapter is also posted here. My recent post on evolution was a first pass at exploring some of the ideas needed for later chapters. It’s 5,476 words incidentally so don’t say I didn’t warn you.
I don’t think I would like to meet myself as a twenty-year-old. I was arrogant, sure of myself, concerned with where I was going. Of course all of this was built on a lack of confidence. These days many people talk about imposter syndrome and the toll it takes on researchers as they make their way. Much later I would learn that every researcher feels this way, that even the most senior scientists fear being found out as lucky frauds. But at the time I was looking for some form of assurance, something that could be relied on, and I found that in the concepts of the science itself: the confidence that science itself worked to provide reliable truths and solid footing.
The world is a complicated place. Finding patterns and regularities in it is a way of managing that complexity. I’ve always found seeing patterns easy, perhaps sometimes too easy. Regularities and abstractions are a way to deal with the world. The complexities, the edge cases, fall away as you see the pattern, and the pattern becomes the way you think of the whole. Theory, abstraction, and maths all play a role in making this work; they are part of the craft. But at the centre is the idea that a simple way of understanding sits behind the apparent complexity of the world, and it is that idea that keeps the scientist moving. There is a real world, and there are simple rules behind it.
It’s a way of dealing with the world, but it also becomes your actual view of the world. The pattern in which the patterns sit is a pattern itself. A set of assumptions that, as scientists, we rarely question, about the deep roots of how the world itself works. There is no particular reason to expect that things are simple, that they can be pulled apart into pieces and reduced to understandable models that in turn can be put back together. No reason to assume that maths should in fact work as a description of the universe. But it seems to work. It particularly seems to work within the social context in which scientists train. The fear of being found out as a fraud by the people around you, who are self-evidently more clever, and more successful? That never goes away. But it is balanced by those times when the pattern falls out, when for the first time you see how a system might work, or when a prediction comes true in an experiment. Those successes, and the rewards that follow them, provide a balance against the uncertainty. They provide an external validation that at least some of what we do is true and durable.
That the stories we tell ourselves about these discoveries are unreliable narratives is a matter of historical record and the subject of a century of work on the philosophy of science. Nor is this book intended as a reliable memoir, but rather as the reconstruction of my mindset: one sure of how the world works, or rather sure of how the universe works, and unsure of his place in the world. I make no pretence to tell a true story, but perhaps to reconstruct a version of it that is useful. All models are false, but some may be useful.
I became interested in the science that I would later pursue at a young age. Our shelves were filled with science fiction novels and amongst them books on science. Amongst these, a book by Isaac Asimov. I may even have picked it up thinking it was science fiction. Instead it was a book from 1962 on what was then the nascent subject of biochemistry. Called Life and Energy, it was a fat paperback with small type. By the time I was reading it in the mid 1980s it had been completely superseded. Much of what was considered clear had been swept away, much of what was necessarily speculation had been filled in. But I found the central idea in it fascinating. It told a story of how all of the complexities of life could be understood through its relation to one unifying concept, energy. Energy, Asimov explained with his trademark style, sweeping the reader along in his wake, was the underlying stuff that made life possible. Life was to be understood as a set of processes transforming energy from one form to another. This central simplifying concept of energy made a pattern that could be used to unify the whole of biology.
Oliver Sacks tells a not dissimilar story in his childhood memoir, Uncle Tungsten. He describes over many chapters his fascination with metals, chemicals and their activities and reactions. How he could place them into categories based on his experimentation, some more reactive, some less, some harder, some softer. Once he understood which category an element would fall into he could predict how it would react under particular conditions. There was a pattern, elements fell into families, but what was the cause of the pattern? “There must be some deeper principle at work – and indeed there was”, he writes of seeing for the first time the giant periodic table that used to sit at the top of the main stairs of the Science Museum in Kensington, London.
I got a sudden overwhelming sense of how startling the periodic table must have seemed to those who first saw it – chemists profoundly familiar with seven or eight chemical families but who had never realized the basis of these families (valency), nor how all of them might be brought together into a single over-arching scheme. I wondered if they had reacted as I did to this first revelation: “Of course! How obvious! Why didn’t I think of it myself?”
Oliver Sacks, Uncle Tungsten p190
This is more or less the way I remember learning science at school. Facts would be accumulated, sometimes experiments were done, and ultimately there was a reveal, the curtain would be pulled away to show the underlying pattern. Often we didn’t yet have the maths to build the theory analytically. Kinetics in physics came before calculus was tackled in maths. And mostly the effort was focussed on teaching enough material to get us through the problems that would populate the exam. But piece by piece collections of facts would be put into a larger pattern.
Another element of this was the ongoing promise that “next year we’ll explain how what we’re telling you is all wrong”. There was always a sense that the actual truth lay somewhere off in the distance but that we weren’t there yet, we didn’t have enough facts to fill out the pattern. Although the ordering was not always the most helpful there was a sense in which it all fit together into a larger whole. While the underlying theories would indeed be torn apart and rebuilt year by year at university, the basic pattern was not. Sometimes theory came first and facts were fitted into it; more often facts were accumulated and then a theory was produced to pull them together.
The university system I went through was built on the concept of doing four foundational subjects in the first year, three intermediate in year two and two “major” topics in year three. Biochemistry was not seen as foundational, so it was not until second year that I returned to the transformations of light and gas into chemicals and then into life itself. In retrospect this was also where the unity started to fall apart. Not all those doing biochemistry had studied enough chemistry to describe those chemical transformations in chemical terms. I myself didn’t have enough biology to appreciate the bigger picture of how the superstructure of organisms was organised to support that chemistry. Very few of us had sufficiently sophisticated maths to tackle these complex systems analytically – and what maths we did have hadn’t been taught that way.
At the time I saw this as just another cycle of collecting facts before finding the new pattern, the new abstraction, that would explain them all. We were, after all, approaching the limits of what was clearly understood. But it was also a split in approach. Biochemistry was not infrequently derided, by those preferring the grand abstractions that physics and maths could offer, as “memorising the phone book”. Those grand abstractions depended, once again, on moderately advanced maths; those with the maths gravitated to those disciplines, and the biosciences were taught as though maths (and to a lesser extent chemistry) were not needed. The idea of a single unified theory was receding and with it came a set of conventional assumptions about what different fields of study looked like, and how they were done.
I’m probing this perspective shift because it seems important in retrospect. In later chapters I will look at how framing shapes the questions that can be asked. Here I want to focus on how a vision of knowledge as a set of pieces that we at least expect to be able to ultimately fit together can shift. The acceptance of specialisation, the need to go deeper into a specific area to reach the frontier, is part of that. But alongside that, specialisation becomes a process by which we stop noticing that the pieces no longer fit together. Models and frameworks are specific to disciplines. We imagine that we can shift our layer of analysis, from physics, to chemistry, to biology, to psychology. An unstated assumption is that we can choose the granularity to work at, based on our needs or the tools at hand – frequently the limitations of computer power. But as a result we rarely engage with questions of what happens when more than two of those layers are interacting.
Another part of this acculturation was identifying the enemy. Before climate change denial was a thing, the focus was on evolution. Evolution as an integrating model wasn’t a big reveal for me. It had been part of the story of how the world worked for as long as I can remember, theory prior to facts in this case. I had many of the traditional obsessions of a child, including dinosaurs and fossils. The history of life and the processes by which it had changed was simply part of the background. At high school I had some friends of the evangelical persuasion who would seek to point out the errors of our scientific ways, but these were generally not seriously antagonistic arguments. It was only at university that this started to seem a more existential battle.
I read Dawkins as a teenager, starting with The Extended Phenotype. Dawkins’ clarity of explanation of what he meant, and crucially what he did not mean, by a gene remains strongly with me. His gene-centric and reductionist view of evolution appeared incisive and appealed to that analytical side in me, seeking the big integrative picture. It also appealed to what I can now recognise as an arrogant and simultaneously insecure young man looking for a side to fight with. Creationism and its – then relatively new – pseudo-scientific friend Intelligent Design provided an enemy and a battleground.
This battle offers a strong narrative. Science is a discipline, a way of answering questions, building models and testing them against the world. It involves observing, collecting facts, building models that can explain those facts, and then identifying an implication of that model to be tested. Biblical creationism by definition failed to be scientific because the model was prior. It could not be predictive because acts of a deity are by definition – in the Christian faith at any rate – unlimited in scope. Biblical creationism thus failed the canonical test of being valid science by being neither testable nor falsifiable. Dawkins in particular hones this distinction to a sharp blade to be used to divide claims and theories; on one side all those that make predictions and can be tested, on the other those to be rejected as unscientific.
Intelligent Design was a more subtle foe. In its strong form it could be rejected outright as equivalent to creationism. Without a knowledge of the intent of a designer and their limitations no falsifiable prediction could be made. Determining the intent of a designer from the book of life would be no different from seeking the mind of god through scriptural analysis. In its weaker form however, as an objection to the possibility of the evolution of complex biological forms, it posed more of a threat. An earlier and more fundamental form of this argument was a commonplace in the mid-90s: that the development of complex life forms violated the second law of thermodynamics. This is a simplified (its adherents would say over-simplified) version of Intelligent Design. Its claim is that widely accepted physical law makes the development of complexity impossible. This – the claim goes – is because the tendency of ordered systems is to move towards decay and chaos.
From where I stood such arguments needed to be destroyed. One easy approach was to point out that they are arguments from ignorance: I cannot understand how it might be, therefore it cannot be. But the more subtle versions of the Intelligent Design argument invoked stronger versions of the argument for why it cannot be, drawing on scientific principles that were accepted as strong models. In some cases these objections turn on misunderstandings of those models. The objection based on the second law is an example of this. It relies on a mis-statement of what the second law actually says.
The second law states that closed systems increase in entropy. Entropy is a technical term, one that is often glossed as “disorder” or “chaos”, often using the transition from a tidy to a messy room as an example. The analogy has the beauty of being almost precisely wrong. Entropy, strictly defined, is a measure of how many states of a system are equivalent. Whether every item in the room is in the “right” place or the “wrong” place, each item is in a specific place. The messy room arguably has exactly the same entropy as the tidy one, at least to the child who believes they know precisely where everything is.
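For those who want that counting made explicit, the standard statistical statement is Boltzmann’s formula – a gloss of my own here, rather than anything the argument depends on – which relates entropy to the number of microscopic arrangements that look the same from the outside:

```latex
S = k_B \ln W
```

Here W is the number of microstates consistent with a given macroscopic description and k_B is Boltzmann’s constant. The count, and so the entropy, depends entirely on which arrangements we treat as equivalent – which is precisely why tidy and messy rooms make such a poor guide.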
The objection to evolution, however, lies with a different conflation, that of “complexity” with “order”, or low entropy. A good definition of complexity is a slippery thing. Is the tidy room or the messy one more complex? Whatever definition might be chosen, however, it doesn’t align with order. Ordered systems are simple. It is as systems evolve from a simple ordered state to a (similarly simple) disordered state that complexity appears. Take a dish of water and add a drop of ink. At the moment it meets the water the system is highly ordered (and low entropy, all of the ink molecules are in one place). In its final state the system is highly disordered, the ink is all mixed in, and any given ink molecule could be in many different places without making the system observably different. It is while the system transitions from its initial to final state that we see complexity.
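The ink example is simple enough to play with in code. What follows is a toy sketch of my own, with illustrative numbers and names rather than anything drawn from a real experiment: ink molecules random-walking along a one-dimensional dish, with entropy measured over a coarse-grained set of bins. The entropy climbs, give or take small fluctuations, from the ordered starting state towards the fully mixed one; the spreading gradient that catches the eye belongs to the steps in between.

```python
# Toy sketch: ink spreading through a one-dimensional dish.
# All parameters here are illustrative choices for the sake of the example.
import math
import random

N_MOLECULES = 2000   # number of "ink molecules"
N_BINS = 20          # coarse-graining of the dish into bins
N_STEPS = 1000       # random-walk steps
WIDTH = 1.0          # length of the dish
STEP = 0.02          # maximum size of each random move

def coarse_entropy(positions):
    """Shannon entropy of the coarse-grained bin occupancies."""
    counts = [0] * N_BINS
    for x in positions:
        counts[min(int(x / WIDTH * N_BINS), N_BINS - 1)] += 1
    total = len(positions)
    entropy = 0.0
    for count in counts:
        if count:
            p = count / total
            entropy -= p * math.log(p)
    return entropy

# Start with all the ink in one place: highly ordered, low entropy.
positions = [WIDTH / 2.0] * N_MOLECULES

for step in range(N_STEPS + 1):
    if step % 200 == 0:
        print(f"step {step:4d}  entropy = {coarse_entropy(positions):.3f}")
    # Each molecule takes a small random step, kept within the dish walls.
    for i, x in enumerate(positions):
        positions[i] = min(max(x + random.uniform(-STEP, STEP), 0.0), WIDTH)

# The fully mixed state approaches the maximum coarse-grained entropy.
print(f"maximum possible entropy = {math.log(N_BINS):.3f}")
```

The point is only that the rise in entropy and the appearance of structure are not in tension; the interesting behaviour is a feature of the journey rather than of either endpoint.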
The arguments at the centre of the Intelligent Design agenda were similar, albeit more sophisticated. The core text, Michael Behe’s Darwin’s Black Box, argues that the intricate biochemical workings of life are “irreducibly complex”. That is, there are biological systems, indeed many biological systems, where taking away any one part makes the whole fail. Primed as I was to reject its underlying premise I couldn’t even get past the first few chapters, so transparent were its flaws to me. The idea of irreducible complexity is easily tackled by proposing a co-option of function, followed by diversification, and then crucially loss of function. In the central example Behe gives, the bacterial flagellum, it was easy to imagine different possible functions of the component parts that might plausibly come together in a poorly functional lash-up which would then be refined.
What is perhaps most interesting about this line of thought is how productive the antagonism is. In 1996 Behe was reasonably pointing out that we knew little about the evolution of the complex molecular systems that he argued were irreducibly complex. But today we know a great deal. Reconstructed histories of many of the systems he discusses are becoming well established. It might be argued, however, that in Dawkins’ terms those reconstructions are not strictly scientific. Starting from an assumption that systems are evolved we can use sequence analysis to reconstruct their history, identifying how the different parts of the bacterial flagellum relate to other biomolecules, and suggest what the ancestral functions may have been. But this whole process works within an existing framing, the assumption that they evolved.
In the end, the true failure of Intelligent Design is that it has none of the explanatory power of evolution through the selection of DNA sequences. Ironically the strength of evolution as an overarching model isn’t really its predictive power but the way it functions as an enormously successful framework into which findings from across biology, and beyond, fit neatly. It is actually quite hard to convey how massively powerful it is, but consider that it provides footholds from which ideas from physics and chemistry, through to psychology, sociology and ecology, can all be seen in relation to each other. The contribution of Intelligent Design was arguably to provide the stimulus, the provoking enemy, that showed how that framework could be used to explain these complex systems.
Of course, along the way we also found out that evolution is a lot more complex than we thought; the provocation that Dawkins posed – was it really genes or organisms that evolve? – turns out to be rather simplistic in practice. But while the framework needs stretching from time to time it remains remarkably robust, in part because of a degree of flexibility. All of those different footholds from different disciplines contain different perspectives on exactly what evolution is, what in fact is evolving, and under what constraints.
My first foray into real research remained driven by that first stimulus, a focus on energy and how it was transformed. Platelets are the cells that, when they sense a lesion in a blood vessel, lead to clotting. The group I was working in was interested in what molecules in the bloodstream platelets used to generate their energy. An experimental design had been developed by the group and my role was to work, within that framework, to gather data on what molecules the platelets used when incubated in human plasma. There were a couple of reasons for this. The first is that it is difficult to store platelets for more than a few days. In the wake of a major emergency it is often platelets that run out first. Figuring out what molecules they liked to eat offered a route to better ways of keeping them for longer.
The second reason ran a little deeper. Then, as now, most experiments on cell responses were done in some sort of defined media, usually with a limited number of energy-supplying fuel molecules in them. If, as seemed possible, certain cellular processes were dependent on or preferentially used certain sources of energy, then it was possible that results seen in cell culture in experiments ranging from basic science to drug responses could be misleading. We were trying to put human cells, in our case platelets, back into as close to their native environment as we could, in this case human plasma.
The concept that certain molecules fuel certain processes verged on the heretical. One central concept of biochemistry was that all fuel molecules were converted to one interchangeable energy molecule, ATP (for adenosine triphosphate). Most models of cells were based on the idea of a bag full of water with molecules dissolved in it. Although some edge cases were recognised even then, this was one thing that Asimov could already talk about in the 1960s in terms that would still be recognised today. We were pursuing the idea that things might work in a way quite radically different from that presented in the textbooks.
The experiments were fiddly. One of the reasons we were focussed on platelets was that in a sealed vessel their oxygen consumption, and therefore we presumed their metabolism, remained constant pretty much until the oxygen ran out. This would take around 40 minutes, so over that time we relied on being able to take samples that we could then plot on a line. It also helped because we could measure the straight line of oxygen consumption on the chart recorder, in those days before computer recording. A sample that didn’t show linear consumption was discarded. Eventually the purified platelet preparation would die, and the onset of consistently curved traces was the sign to finish up that day’s experiment and throw away what was left of the preparation.
Science is as much about craft as it is about knowledge or theory. The line between seeing where an experiment hasn’t worked and rejecting those where you don’t like the result is finer than most scientists like to admit. A “good preparation” would last for many hours, a “bad” one would die very quickly. We wondered what the differences might be but it wasn’t part of my project to probe that. I got better at making preparations that would last longer. This is another pattern: nothing works the first time, but it becomes easier. Some years later we were trying to use a technique called PCR (polymerase chain reaction) in the lab for the first time and it was a nightmare. We’d get a partial result one time and nothing the next. Six months later it was easy and routine.
At the height of the controversy over the STAP stem cells I remember finding it striking that those giving the benefit of the doubt were often stem cell experts, familiar with just how fiddly it was to get these delicate procedures to work. On the flip side, a decade after our trials with PCR I was gobsmacked when, for the first time in my entire career, a moderately complex idea for manipulating proteins worked the first (and second, and third!) times that we tried it. I convinced myself something must be wrong because it was working too easily.
My central finding in that first project was that pretty much regardless of which potential energy molecule we looked at, the platelets were capable of using it, and indeed were using it. That didn’t really answer the question we originally posed – was the use of specific energy molecules tied to specific processes – but it was evidence that the argument was plausible. To advance the argument I had to go out on a limb, turning again to evolution. The fact that a highly specialised cell type retained the capacity to utilise all these molecules implied that there must be some adaptive function for them. It was, and remains, a weak argument. It also wasn’t readily falsifiable.
But it was productive in the sense that it led me to the question that sat at the centre of my research interests for over ten years: how is it that biological systems are set up to be evolvable? Biological systems seem to be exquisitely set up, poised between forces that maintain them and forces that allow flexibility and change. Mechanisms of metabolic regulation, the structure and development of organisms, of ecologies, through to the set of (roughly) twenty amino acids and four(ish) nucleotides, all provide both resilience in the face of small scale change and flexibility for radical reorganisation and repurposing in the face of greater challenges. A more modern version of Intelligent Design argues that these systems must be designed, but in fact they show traces of having evolved themselves. Or at any rate traces that make sense within the framework of evolution.
It’s obviously suspect to try and reconstruct the way I thought as I came out of my apprenticeship in science. Given the purpose of this book I’m at significant risk of setting up a starting point so as to drive the narrative. But equally, given that part of the point is to illustrate how framing affects, and effects, the stories we tell ourselves, it is the point where I need to start. And I can enrich my suspect memories with the views and descriptions of others.
It is a truth universally acknowledged amongst scientists that the claims and models generated by philosophers and sociologists of science are unrecognisable, if not incomprehensible, to scientists. It is less universally acknowledged amongst scientists that our own articulation of our internal philosophies is generally internally inconsistent. With the benefit of hindsight I can see that this is in part due to a fundamental internal inconsistency in the world view of many, if not most, scientists.
The way science presents itself is as pragmatic and empirical. From Robert Boyle through to Richard Dawkins and beyond we claim to be testing models or theories, not attaining truth. Box’s aphorism, alluded to earlier, that “all models are wrong, but some are useful” is central to this, although I am also partial to the soundbite from Henry Gee, a long-time editor for the British general science journal Nature, that: “When I go to talk to Scientists about the inner workings of Nature I announce – with pride – that everything Nature publishes is ‘wrong’”. (Henry Gee, The Accidental Species, p xii) This is a strong claim about how we work, and one that I believe most scientists would identify with – that the best we can do is refine models. That it is not the business of the scientist to deal with “truth”.
But you don’t have to dig hard to realise that this hard-headed pragmatism falls away for most scientists in that moment where we see something new, when we see a new pattern for the first time. The realisation that life could be described through the interchange of energy didn’t excite me because it would help me describe a biological system in new ways, but because a curtain was drawn back to provide a new view of how the world works. Showing that specific molecules were being used to power specific biological processes wasn’t exciting because we’d have a better model, or could build a better widget, or even because it would help us store platelets for longer. It was exciting because we might show that the existing model was wrong, because we might be the first to see how things really worked.
Look through the autobiographies of scientists and the story is the same. The central piece of the narrative is the reveal, the excitement of being the first to see something. Not the first to understand something in a certain way but the first to see it. Perhaps the canonical version of this is to be found in another unreliable memoir, Jim Watson’s The Double Helix. “Upon his arrival Francis did not get halfway through the door before I let loose that the answer to everything was in our hands” [p115]. Later, after actually building the model to rule out a range of possible objections, Watson notes that “a structure this pretty just had to exist”.
There is an inconsistency here, one that I think most scientists never probe. I certainly didn’t probe it at the age of 20 or even 30. It actually matters how we think about what we’re doing, whether we are uncovering true patterns, seen imperfectly, or are building models that help us to understand what is likely to happen, but which don’t make any claim on truth. Plato would say we seek truths, Popper that we’re building models. Latour says by contrast that it turns out to be an uninteresting and unimportant question compared to the real one – how can we manage the process to reach good collective decisions. Tracing that path will be one of the aims of the rest of the book.
But for my younger self the question didn’t even arise. Which raises the question, what is the mental state that I maintained – which I imagine is the one most scientists also maintain – that reconciled these two apparently opposing views?
One part of this lies in our training. The process of revealing one model, then adding new facts that don’t quite fit, until the new, more sophisticated model is pulled from the hat. This allows us the illusion that, some way off in the distance, there is a true model. As Jerome Ravetz notes in Scientific Knowledge and its Social Problems, this is neatly combined with another illusion: that the student is recapitulating the history of scientific discovery, following in the footsteps of our predecessors as they uncovered facts and refined theories step by step. Ravetz neatly skewers this view, as Ludwik Fleck did 30 years before. To describe the science of Boyle, even of Darwin, in terms that both we, and they, would understand is impossible.
This issue is neatly hidden by the specialisation I described above. We cannot know the whole of science, so we specialise in pieces, but others know those other parts and – we believe – there is a common method that allows us to connect all these pieces together. Another version of this is the concept of layering: that chemistry is layered upon physics, biology on chemistry, neuroscience on biology, psychology on neuroscience.
This world view, most strongly articulated by E.O. Wilson in Consilience, holds that each layer can be explained, at least in principle, by our understanding of the layer below. As a biochemist I believed that my understanding and models, though limited, could be fully and more precisely expressed through pure chemistry, and ultimately fundamental physics. The fact that such layer-based approaches never work in practice is neatly swept aside. These are, after all, the most complex models to build and they will require much future refinement.
I asserted above that the dichotomy between empiricism and Platonism – model-building for prediction versus refining true descriptions of how the world works – matters. But I didn’t explain why. This is the reason: if we are truly refining descriptions of the world that approach a true description of reality, then we can expect our pieces to work together eventually. All these models will fall naturally into their appropriate place in time.
If we are simply building models that help us to hold a pattern in our head so as to be able to work with the universe, we can expect no such thing. If this is the case, the tools for making our pieces come together will be social. They will be tools that help us share insight and combine models together. And I believe this will be a shift in thinking for most scientists. And not an easy one, because it surfaces the unexamined split in our thinking and forces us to poke at it.
There is a final piece of the puzzle, another way that I, as a young scientist, managed to avoid noticing this dichotomy. Conflict. The value of having an enemy, whether they be creationists, climate change deniers, or simply those drawing a different conclusion from the same data, is that in declaring them wrong we focus our attention away from the inconsistencies in our own position (questions of nuance, delicate handling of difficult analysis) to the problems in theirs (falsifying the evidence, hopeless use of statistics).
Having an enemy is productive. It forces us to fill in the gaps, as Behe did on questions of the structure and history of the flagellum. But it also draws our attention away from the deep gaps. Having a defined “other” helps us find a “we”, our own club. Within that club those truly deep questions, the ones that force us to question our basic framings, are ruled inadmissible.
This probably reads as criticism. It isn’t really. The real power of the scientific mindset lies in harnessing an individual human motivation – to see something better, to see it first – to a system of testing and sharing ways of understanding. It is the ability to hold that contradiction in suspension that, in my view, drives most scientists. It couples the practical – the pragmatically transferable insight that achieves something collectively and, to be blunt, gets us funding – to something ineffable that excites the individual mind, with its specific skills.
Seeing a pattern unfold for the first time is a transcendental, some would say spiritual, experience. It literally enables you to hold more of the world in your mind. It is remarkable in many ways that we have built a system that allows us not only to transfer that insight, but to build institutions, systems, societies that work to combine and connect those insights together. That is the ultimate subject of this book.