Who is Jeremy England? There are many answers to that question. He is a biochemistry graduate who became an MIT assistant professor in physics when he was 29 years old. He is an ordained rabbi. He is the grandson of Holocaust survivors. He is a descendant of the first life-form on Earth. He can also be described as an assemblage of atoms that exhibits complex, life-like behavior. England might describe himself as one of the many dissipators of energy in the universe—this, he says, seems to be a useful way to answer the question that humans have asked for so many millennia: What is life, and how did it arise?
This question—and England’s answer—form the basis of his new book Every Life Is On Fire: How Thermodynamics Explains the Origin of Living Things, which explores the idea that burning up energy is the base activity of life. But England has no simple, neat tale to tell: This is a complex, multilayered subject, and must be treated as more than a scientific issue, he says. That’s why Every Life Is On Fire daringly brings ideas from the Hebrew Scriptures and uses them to unpack the science. Cultural and religious traditions have long been exploring this territory, he says, and can complement scientific angles on the question of where we ultimately came from. If we really want to understand ourselves, he suggests, we’ll need more than science.
I always wanted to do physics because I liked the predictive power of simple principles. At the same time, I was fascinated by the relationships between form and function in biology—especially when you see that it’s still there when you get down to the molecular level. I started out working in structural-biology and cell-biology wet labs, and was very bad at that! By the time I was finishing my undergraduate degree, I was working in a theoretical lab looking at protein folding. So it feels natural for me to be drawn to this set of questions.
Not really. We get the notion of what life is. You can do plenty of great science while saying, “Let me accept that there is a category of things that fish belong to, and trees belong to, but rocks don’t belong to, and ice doesn’t belong to.” We can continue to use the word while admitting that we don’t really have a scientific pedigree for how the word developed. And yes, there will be some difficult cases such as viruses. But we can accept the category as given, and study, to the best of our ability, the properties of the things in that category that are of interest to us.
In a way, yes. We had a paper in Physical Review Letters a few years ago about a simulation of a bunch of balls and springs, just jiggling, with the springs hooking and unhooking from the balls. Then you wiggled one of the springs at a certain frequency, and it all jumbled and hooked together in a different way. Now you have a resonator that’s better at absorbing energy at the frequency you’re wiggling it at. Learning to harvest energy better from its surroundings is a feedback process that sounds lifelike. On the other hand, if you held it up to someone and said, “Look at this jiggling mass of balls and springs, it’s alive,” they would just laugh at you, and rightly so.
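The gist of that balls-and-springs result can be caricatured in a few lines of code. This is a toy sketch, not the actual Physical Review Letters simulation: a single oscillator stands in for the whole network, its natural frequency `omega0` stands in for the hooking and unhooking of springs, and the rule that energy-boosting rearrangements "stick" is an illustrative assumption. The driven system nonetheless drifts toward resonance with its drive, the way the simulated network did.

```python
import random

def absorbed_power(omega0, drive=1.0, gamma=0.1):
    # Steady-state power a damped, driven oscillator with natural
    # frequency omega0 absorbs from a sinusoidal drive at `drive`.
    return gamma * drive**2 / ((omega0**2 - drive**2)**2 + (gamma * drive)**2)

random.seed(0)
omega0 = 3.0  # start far from resonance with the drive
for _ in range(5000):
    candidate = omega0 + random.gauss(0, 0.05)  # random structural "rewiring"
    # Ratchet (a modeling assumption): rearrangements that absorb
    # more energy from the drive are the ones that stick.
    if absorbed_power(candidate) > absorbed_power(omega0):
        omega0 = candidate

print(round(omega0, 2))  # settles near the drive frequency of 1.0
```

The point of the sketch is the feedback loop, not the numbers: random change plus a bias toward better energy absorption is enough to tune the system to its environment.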
So it’s clear that this is a territory where there’s going to be a lot of different arguments. Some might say, “Well, the fundamental thing about life is that it does X.” And someone else might say, “Well, the fundamental thing about life is that it does Y.” What I find is that when I focus on any one of those properties, you can always find examples and counter-examples. If you were loose enough in your understanding of what it means to copy yourself, for instance, then a spreading fire is a self-copying phenomenon. But to call a fire alive is a really contentious extension of the domain of that word.
What we term life is this multifarious bundle of all these different things together: You’re good at self-replication, energy harvesting, and so on. When you study each one of those things on its own, it’s a physical phenomenon that has more primitive examples. But those examples are where we have a chance of understanding the fundamental principle better.
It’s necessary to talk about entropy for historical reasons, and if we are conservative enough about how we use the term, it can still be useful as a shorthand. What I advocate for now is that we try to make theories that talk about the probability of things happening. And yes, it’s true that entropy, which is counting the number of ways something could happen, is part of what weighs on the scale in determining probabilities—but it is not the only thing that impacts probability. So trying to talk about whether entropy should increase becomes very distracting. A better way to talk about it is to consider the likely outcome, given the starting point, given the way the system is being driven, and given the sources of fluctuation in the system. Entropy will be one of the things that matters there, but not the only one.
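The idea that entropy is only one of the things weighing on the scale can be made concrete with a textbook equilibrium example (this is a standard statistical-mechanics calculation, not England's model; the multiplicities and energies below are made-up illustrative numbers). A macrostate's probability goes as its multiplicity (the entropy term) times the Boltzmann factor for its energy, so which state wins depends on temperature:

```python
import math

def weight(multiplicity, energy, kT):
    # Relative probability of a macrostate: the number of ways it can
    # happen (entropy) and its energy cost both weigh on the scale.
    return multiplicity * math.exp(-energy / kT)

# Ordered state: a single microstate, but low energy.
# Disordered state: 100 microstates, but it costs 4 energy units.
results = []
for kT in (0.5, 2.0):
    p_ordered = weight(multiplicity=1, energy=0.0, kT=kT)
    p_disorder = weight(multiplicity=100, energy=4.0, kT=kT)
    results.append(round(p_disorder / (p_ordered + p_disorder), 2))

print(results)  # → [0.03, 0.93]
```

At low temperature the energy term dominates and the ordered state is overwhelmingly likely; at high temperature the entropy term dominates and disorder wins. Neither factor alone determines the probability, which is England's point.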
The focus of my line of research is more about whether we can develop the capability to bring about the different aspects of what I would call “lifelikeness” in experimental settings, with control and with theoretical principles that can be clearly articulated. We may not know exactly how our particular example of life got put together, but we start to see how one puts a bunch of things together in general. The starting point for that is to break things apart into these different phenomena like energy-harvesting, self-replication, et cetera. With each of those, we’ve made some progress. There’s more to do, but we can start to see how a story might come together.
Imagine I have a collection of matter under the influence of an environment. The environment is essentially sources of energy that are kicking the matter and knocking into it and allowing it to change shape. I’m interested in which configurations of that matter will be likely to exist at some point in the future. That likelihood depends, in part, on how much extra energy was absorbed and dissipated on the way. Over the course of the whole history of the system, highly dissipative histories are going to lead to highly likely outcomes.
An example might be a self-copying bacterium that eats some sugar. It uses the sugar to build another copy of itself. Now I have two of them and they eat the sugar even faster, and then they make four of them, and then they eat the sugar even faster. So the chemical dissipation is accelerating toward a likely outcome, which is that I have more bacteria in my future than I had in my past. The balls and springs work that way as well. It’s a positive feedback process where you’re exploring a space with combinations of matter. There’s an energy source. And the flow of energy through the system is leading to a positive feedback relationship where you find a better energy absorber and it helps you absorb even more energy, then you find another even better energy absorbing state.
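The bacterial example above is just compound growth fueled by finite resources, and a toy sketch makes the accelerating dissipation visible (the numbers are illustrative assumptions, not real bacterial kinetics):

```python
# Toy version of England's feedback loop: each bacterium consumes one
# unit of sugar per step, and every fed cell copies itself, so the
# energy dissipated per step doubles until the fuel runs out.
bacteria, sugar = 1, 1000
history = []
while sugar >= bacteria:    # enough sugar left to feed every cell
    sugar -= bacteria       # chemical energy dissipated this step
    bacteria *= 2           # each fed cell builds a copy of itself
    history.append(bacteria)

print(history)  # → [2, 4, 8, 16, 32, 64, 128, 256, 512]
```

The run ends with 512 bacteria and 489 units of sugar left over: more bacteria in the future than in the past, reached along a history whose dissipation accelerated at every step.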
There’s a feedback process that’s positive: I end up in a particular place because I was in a state in my past that was good at absorbing energy and it carried me irreversibly in a certain direction that I can’t go back from. It left its mark. So the general idea with dissipative adaptation is that the current state of the system holds the signature of how I had to be in some special state in my past to absorb a lot of energy. That helped me change my shape in consequential ways. Sometimes that leads to growing energy absorption over time, and sometimes it leads to extinction of energy absorption over time. And both of those things can leave very noticeable fingerprints that are different aspects of lifelike behavior.
I haven’t been able to apply these ideas rigorously to anything like living cells yet—certainly not in experiments, but also not even with theoretical models. It’s much messier and more complicated to try to get things done in the biological context. But I don’t think that doing that kind of experiment is a long way away. Usually if I show a biologist a living cell and I say, “Look, it’s behaving in this way where it’s being very smart in how it’s reacting to something its environment is doing,” the default assumption is, “Well, there’s something you don’t understand yet about the biology, and that’s the explanation for what you’re seeing.” The design of experiments will have to be done very carefully.
There are membrane-less droplets, for instance, that self-organize inside cells under different conditions. They seem to have very plastic and flexible properties that help the cell respond to different functional needs. A biologist might say, “Oh, well, it has all of these evolved abilities that come from eons of natural selection, making it better and better at what it does.” But it’s starting to be hard to imagine that every kind of response like this has its own separate program, as though it’s all been learned from the past. There’s a growing list of experimental biologists who are interested in these kinds of emergent adaptive behaviors in biological systems.
We’ve been working with primitive abiotic examples. The place we’re looking is called “active matter.” It can involve proteins chewing through chemical fuels and binding and unbinding from each other. But you can also do it with larger objects. I have a collaboration at Georgia Tech where we do this with robot swarms. There are also examples of “colloidal particles” that have special coatings on their surface—they’re like little chemical jet packs. And they already exhibit really interesting collective behaviors. Active matter is a nice experimental base camp. You don’t have to try to make sense of the living cell, where in addition to everything else you have all of the impacts of natural selection at the level of the organism. We can just study the collective behavior of things that are like soups of interacting proteins that are more primitive.
It’s true: There is a lot more to fill in. I’m sure there are people who will read this book and say, “Well, you’ve talked about different kinds of lifelikeness and how they might emerge, but that’s not the same thing as a full story from start to finish of how life as we know it gets put together.” Maybe we can understand how self-replicators might start to emerge, and how predictive mechanisms that respond to the patterns in their environment by accurately anticipating their surroundings might emerge. And maybe energy harvesting is something whose emergence we can understand. So that certainly recalibrates our sense of how to imagine a prebiotic situation and think about what’s difficult or easy to accomplish with what would be lying around. But, no, it is not the same thing as telling a blow-by-blow story. I’m sure anyone who’s looking for that level of detail in a story that is convincing and testable will have to wait a while. Doing forensics across that kind of historical distance is pretty difficult.
For the short term, it’s going to be about how far can we push this idea that, subjected to the right kinds of patterns, naive matter can exhibit computing and learning behaviors. I’m trying to do that right now with some of my collaborators—Dan Goldman at Georgia Tech and others who are part of this effort to control robot swarms. We want to push that envelope and show a smoking gun for that kind of an effect, creating something that can be tested and proven empirically in the laboratory. That will put the physical principles on a very firm footing. The more we can achieve impressive results in that way, the more we are going to be able to redouble our efforts to understand wider implications of the theory and tie it back into other things. To be honest, the broader question of how we start to talk about how life comes together is something I find more difficult to predict: I don’t claim that I can see which way that goes yet.
Talking about the origin of life, or the boundary between what’s alive and what isn’t, involves broader questions that aren’t in the narrow domain of what you can understand scientifically. I didn’t want to stick my head in the sand about that. I want to understand how things work if I reason about them scientifically. But I am also a human being with other interests. I’m a practicing religious Jew—I’m an ordained Orthodox rabbi—and I care very deeply about these things. So I would feel foolish putting the scientific ideas out there but not making my own comment about a larger conversation that includes more perspectives on what some of this could mean. When I decided to write this book, I quickly realized I wanted to go and look in the Torah and see if I can find a commentary that responds to what I’m already thinking about with the science. I certainly think that it’s possible to contemplate the boundary between life and not-life from that perspective, and the text, I would argue, clearly contains such a contemplation.
It’s clear that the question of what happened in the past is not a low stakes question. You see that in how people argue about history and in the very emotional disputes people end up having about the prehistoric, or about cosmology. Ultimately, and this is something I learned from the Torah, how you describe the past is not ideologically neutral. The way you talk about who we are, and where we come from, matters to people—partly because it makes some people powerful, and enables them to convince others to do certain things. So I certainly don’t want to eliminate any frameworks of meaning that we need for talking about the past—we can’t just have frameworks that involve concepts like fundamental fields or prebiotic chemical reactions.
I sometimes think fundamental physicists gin up the notion that when we’re done, we’ll just talk about strings and that will be everything. But we already know you can’t describe the interesting phenomena of the world if you just start with Coulomb’s law and the Schrödinger equation. It doesn’t work. You need different languages. We certainly shouldn’t be trying to have fewer of them. The difference between physics and biology is that they are different languages for talking about the same world. It’s a mistake to be trying to look for one language that will replace or subsume all others. The fact that I can talk about a person and see them as a collection of atoms should not supplant the fact that I can also talk about that person as a participant in an economy, or a moral being, or a participant in a relationship. These other frameworks of meaning are important, and we should grab them and hold on to them and insist on them.
This is not a conversation that we should be hoping to exhaust. People who think that we’re done sorting it out are misguided in one way or another. People need to keep talking respectfully, with intellectual honesty, and in different languages, and sharing those languages with each other. That’s how we’ll progress in our understanding.
This article was first published on Nautilus