In part 2 of this vlog, Dr. Stephen Meyer, Director of the Discovery Institute’s Center for Science and Culture, continues discussing what he sees as the faults of neo-Darwinism. Drawing from his book, “Darwin’s Doubt”, Meyer argues that biological systems reflect intelligent design and presents four challenges to the creative power of natural selection.
– [MUSIC PLAYING] In my book, Darwin’s Doubt, I call this whole– this problem of the abrupt appearance of the major groups of animals– and there are many other abrupt appearance events besides just the Cambrian animals, but that’s the one I focused on– I call this the mystery of the missing fossils. And it’s a mystery that has yet to be solved. Now, that leads, really, though, to the most important issue, which is the cause of the change. We’ve defined evolution as change over time, continuous change over time. But now, we want to really think about, well, what might be causing that change? Because that’s the essential part of both classical Darwinian theory and the modern neo-Darwinian synthesis, or neo-Darwinian theory, that we all learn in our textbooks. And according to neo-Darwinism, the cause of change is the mechanism of natural selection acting on random variations, in particular a kind of variation that biologists talk about today called a mutation, which is a random change in the sequence of the characters in the DNA message– the genes, the information stored in the DNA. And according to neo-Darwinism, this mechanism of natural selection can produce new forms of life, new biological forms. And also, it accounts for the appearance of design that we find in living organisms. And this was really where Darwin started his thinking.
All biologists, from Aristotle’s time right up to the present, have recognized that biological systems give at least the appearance of design. Richard Dawkins, the famous biologist from Oxford, says that biology is the study of complicated things that give the appearance of having been designed for a purpose. Key word in that, anyone? The appearance, right? OK, from a Darwinian point of view, things look designed, but they’re not really designed. Why? Well, because there’s an undirected, unguided mechanism that can produce the appearance of design without being guided or directed in any way. How could that be? Let me give you a quick illustration. You see, I’ve got a sheep on the slide there, or a few sheep. Imagine you’re a sheep herder in the far north of Scotland. And you want to produce a woollier breed of sheep. What do you do? Well, you pick the woolliest males and the woolliest ewes in every group of offspring. And you allow only them to breed. The other ones get no dates, OK? And if you do that generation, after generation, after generation, what will you produce? A woollier breed of sheep, right? We’ve known this since biblical times, right? This is well known. Now, in the 19th century, biologists were convinced that one of the things that indicated that life had been designed was what they called adaptation– that organisms seemed to have just the right attributes that they needed to live in the environment in which they found themselves. So if you’re a fish, you live in the water, you’ve got gills and a swim bladder, and you’ve got all the equipment that you need to survive in the water. So if you have some sheep, and they need to live in a cold climate, this selective breeding, as it was called, was a way of getting the sheep to be better adapted to their environment. But Darwin came along and said, wait. I can explain that kind of adaptation through a purely undirected natural process.
What if, instead of you selecting the woolliest males and females in every generation, there’s a series of very cold winters such that only the woolliest survive? Then, after many generations, won’t you have exactly the same effect, because you’ve only had very woolly males and females being allowed to breed, because nature has weeded out all the other ones? And he called that not artificial selection, but natural selection. And so since the outcome was the same, since the sheep at the end were more adapted to a cold climate, you could think of nature doing the designing, nature producing the adaptation. So that’s how Darwin got rid of the idea of design. And this is tied up with his notion of the third meaning of evolution, that natural selection acting on random changes, mutations and variations, is the cause of change. Now, my illustration sounds quite sensible. But a lot of biologists have been asking, well, is that kind of minor modification we see with sheep, and the sort of thing we see in dog breeding and pigeon breeding, or with the finches in the Galapagos Islands, the only evidence or appearance of design? Or might there be other, more fundamental, ones? And those are the scientists that are wondering, well, is the Darwinian mechanism actually creative? And Lynn Margulis, whom I quoted a few minutes ago, has said this. She says, “Natural selection eliminates and maybe maintains, but it doesn’t create. It doesn’t generate anything fundamentally new. Neo-Darwinists say that new species emerge when mutations occur and modify an organism. And I believed that,” she said, “until I looked for evidence.” So this is the real nub of the issue.
Is the Darwinian mechanism genuinely creative? Yes, you can get slightly woollier sheep. Maybe you can get finch beaks that are bigger or smaller. But do you have a mechanism that can build new animals, build the sheep, build the birds in the first place? And so this is the issue that I want to focus on in the rest of my talk tonight. It’s a particularly acute issue, actually, in the Christian world right now, because there are also a lot of theistic evolutionists, or evolutionary creationists, as they’re sometimes called, who, quote, “accept that natural selection and other evolutionary mechanisms, acting over long periods of time, eventually result in major changes in body structure.” This is Deborah Haarsma of BioLogos, a leading theistic evolution organization. And she equates this mechanism of natural selection with God’s creativity. She says that natural selection and the “gradual process of evolution was crafted and governed by God to create the diversity of all life on earth.” So theistic evolutionists equate the creativity of God with the creativity of the evolutionary mechanism of natural selection and random mutation. And that just raises in a new way the question, is that mechanism really creative? And that’s what I want to look at.
Now, in this talk, I have four challenges to the creative power of natural selection. Probably won’t get through all four. Maybe we’ll do three tonight. I address these in a lot of depth in the book, Darwin’s Doubt, in a section I call “How to Build an Animal.” The first mystery is, where are the missing fossils? Why the abrupt appearance? But the deeper mystery in the history of life is, what actually causes these big changes that we see in the history of life as recorded in the fossil record? And that’s really an engineering question. It’s a question of how you would build something as complex as a trilobite, or a triceratops, or a giraffe, or a human being. What’s the real driving mechanism or cause? And there are a number of challenges to the idea that natural selection and random mutation can do that. And that’s what I want to talk about now.
The first is a problem known as the problem of the origin of genetic information. I used to ask my students when I was a professor, if you want to give your computer a new function, what do you have to give it? Since there’s a lot of students here, why don’t I ask you all that? What do you have to give your computer if you want it to perform a new function? Code, right? Code or information, instructions, OK? We know this because we live in an information age. Well, it turns out, and this is the most stunning discovery of 20th-century biology, that the same thing is true of life. If you want to build a new form of life, if you want to build one of those Cambrian animals that I studied, or if you want to build new mammals, new reptiles, new birds, anything, you’ve got to have new code. You have to have new information– instructions to build new biological form. And we began to appreciate this starting in the 1950s with the discovery of the structure of the DNA molecule by Watson and Crick. Most of you have studied that in– yeah, right, OK. And Watson and Crick discovered that DNA had this beautiful double helix structure. And along the interior of the molecule, they also discovered that there were four chemicals called bases, or nucleotide bases, attached to that helix backbone. And in 1957, four years after they had made the original discovery, Francis Crick posited something he called the sequence hypothesis. And this was the idea that those four chemical subunits, called bases, were functioning just like alphabetic characters in a written text, or like the digital characters, the zeros and ones, we use in software today. That is to say, it wasn’t the shape, or the weight, or the chemical properties of these subunits in the DNA that gave them their function.
Rather, it was their specific arrangement, in accord with an independent code– later discovered and called the genetic code– that allowed those chemical characters to convey information for building all of the most important molecules that are needed to keep cells alive.
If you’ve had some biology, biochemistry, you probably know about proteins. Proteins are made of individual subunits called amino acids that link together to form long, chain-like molecules. And these chains, if arranged properly, will fold up into beautiful, three-dimensional structures that allow the molecules to have a hand-in-glove fit with other molecules and allow them to do important jobs in the cell. Proteins catalyze reactions at super-fast rates. Those proteins are called enzymes. They also build the structural parts of little miniature machines. It’s amazing, but inside cells, we’re finding little rotary engines, and sliding clamps, and turbines. Those miniature machines are made of proteins. And the proteins also help actually process the information that’s on the DNA molecule. So DNA has digital information that directs the construction of these big protein molecules that cells need to stay alive. Let me give you an illustration. I’m from Seattle. Actually, our office is in Redmond. And in Redmond, we have the great, famous company, Microsoft. Microsoft writes code and sells it, which is a very interesting thing by itself. Information is valuable, right? Well, another company in Seattle is Boeing. And Boeing uses a technology called computer-aided design and manufacturing, in which an engineer will write code. That code is sent down a wire. It’s translated into another machine code that can be read at a manufacturing apparatus that takes the code and, for example, might use it to place rivets on the airplane wing in exactly the right place to construct that mechanical system. That’s the very sort of thing that’s going on inside the cell. The digital code in DNA is directing the construction of mechanical systems that are necessary to keep the cell alive– these big protein molecules and protein machines. Now, this is a stop-press moment in the history of biology.
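The sequence hypothesis described here can be made concrete in a few lines of Python: DNA bases are read three at a time, and an independent code (the genetic code) maps each triplet to an amino acid. The sketch below is illustrative only; it includes just a handful of the 64 real codon assignments, not the full genetic code.

```python
# Toy illustration of the sequence hypothesis: the ORDER of the bases,
# read through an independent code, is what carries the information.
# Only a small subset of the standard genetic code is included here.
CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "TTT": "Phe", "GGC": "Gly",
    "AAA": "Lys", "GAA": "Glu", "TAA": "STOP",
}

def translate(dna):
    """Read a DNA string codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```

Note that the same four bases, arranged differently, specify a different protein chain entirely; nothing about the chemistry of the individual bases determines the output, just their sequence.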
When Crick put forward his sequence hypothesis, and over the next few years, as scientists confirmed that he was right, we began to see the emergence of an information revolution in biology– that information is literally running the show inside living cells. Now, I call this the DNA enigma. And the DNA enigma is not the structure of the DNA molecule, because Watson and Crick figured that out. It’s not where the information in DNA resides. They figured that out, too. It’s not even what the information does. We now have a really good idea of how the digital information in DNA directs the construction of these sophisticated proteins and protein machines. What is the DNA enigma? Where does it come from, right? I heard it in the front row. Where does that information come from? What’s the source of it? What’s the origin of that information? Now that’s a puzzling question because of the way that the mutation mechanism is supposed to work. Remember, the Darwinian answer to that question is, it comes from natural selection acting on random mutations. But if you remember anything about your biology, you know that natural selection acts after the fact of variations or mutations. Mutations occur first. If one of them is favorable, it’s preserved and passed on. If it’s not, the organism either loses out in the competition for survival or may just flat-out die. But the problem arises when we begin to think about what mutations are acting on. Post-Watson and Crick, we realized that mutations are acting on long strings of precisely sequenced, essentially digital or alphabetic, typographic information. And we know from our own experience, if you start mucking around with a specifically arranged sequence of text or computer code, you’re going to have a problem.
Computer programmers in the room– if you start randomly changing the zeros and ones in a section of functioning software, are you more likely to generate a new program, or to introduce glitches and bugs and eventually cause your program to stop functioning? It’s obviously the latter, right? So this is the kind of thing that a lot of mathematicians and computer scientists started to think about in the 1960s, wondering if this mutation-selection mechanism could really work. I’ve got two sequences behind me. On the top is a complex sequence. That’s an arrangement of characters that are not repeating. But there’s no meaning or function, no communication function, provided by that top sequence. The kind of information we have in biology is the kind we have in written text or computer code. It’s not just complex in the sense of being nonrepeating. It’s complex, it’s nonrepeating, but it’s also extremely specified with respect to independent functional requirements. And that kind of information is hard to change, hard to mutate at random and still maintain function. And there’s a reason for that. And that is, there are a lot more ways of going wrong in a typical communication system than there are ways of going right. This was first recognized in the 1960s by a scientist named Murray Eden, who convened a conference at a place called the Wistar Institute. He was an MIT computer scientist. And he said, “No currently existing formal language system can tolerate random changes in the symbol sequences which express sentences. Meaning is almost invariably destroyed.” And he suggested the same thing is likely true of the digital code in DNA and what mutations might do to it. Now, here’s a way of thinking about this. Any language system is also called, by mathematicians, a combinatorial system, because there are a lot of different ways to combine the letters. You ever played Scrabble? You can put the letters together in lots of different ways.
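The question posed to the programmers can actually be tried. The sketch below randomly changes one character of a tiny working Python program many times and counts how many mutants still run and still compute the original answer; the sample program, the character alphabet, and the trial count are all arbitrary choices made for illustration, not anything from the talk.

```python
import random

# Randomly mutate one character of a small working program and ask:
# does the mutant still run AND still compute the original answer (14)?
source = "result = (3 + 4) * 2"   # computes 14
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789+-*/()= "

def is_functional(code):
    """A mutant is 'functional' if it executes and still yields 14."""
    env = {}
    try:
        exec(compile(code, "<mutant>", "exec"), env)
    except Exception:
        return False
    return env.get("result") == 14

random.seed(1)
trials = 10_000
functional = 0
for _ in range(trials):
    i = random.randrange(len(source))
    mutant = source[:i] + random.choice(alphabet) + source[i + 1:]
    if is_functional(mutant):
        functional += 1

print(f"{functional}/{trials} single-character mutants still compute 14")
```

Most of the surviving mutants turn out to be cases where the random replacement happened to pick the original character again; mutations that both parse and preserve the computed result are a small minority.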
Here’s a fun fact to know and tell. In English, for every 12-letter sequence that forms a functional word, there are roughly a hundred trillion ways of arranging 12 letters from the same 26-letter alphabet that form no meaning whatsoever. So the ratio of functional sequences to nonfunctional sequences is very, very small, OK? So if you start randomly changing the letters in a functional sequence, you’re going to be overwhelmingly more likely to land in the functionless abyss. You’re going to find one of those nonfunctional combinations. And what biologists have found in the 50 years since that conference at the Wistar Institute I mentioned just a minute ago is, the same thing is true of life– that a random, undirected search is going to be overwhelmingly more likely to fail in finding new information than it is to succeed. Let me illustrate. If you’ve got a bike lock, and you want to crack the lock via a random search– a bike lock is a combinatorial system as well, because there are lots of different ways of combining the digits. And if you want to find the combination at random– say you’re a thief and you want to steal a bike that’s out behind the auditorium– are you more likely to succeed or fail if you encounter that four-dial lock? Fail, right? Oops, it’s a bit of a trick question, isn’t it? What else do you need to know to answer the question? Anyone? How many opportunities you have, right? If you’re a particularly diligent thief, you can spin, and spin, and spin, and spin for a long time, right? And I’ve actually worked the math out. If you’re a thief, and you turn the dials, one click every 10 seconds, looking for a new combination, you could search about 5,000 combinations in 15 hours. There are 10,000 combinations.
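The “hundred trillion” figure can be sanity-checked with two lines of arithmetic. The count of functional 12-letter English words used below (about a thousand) is an assumed round number for illustration, not a figure from the talk; with that assumption, the ratio lands near the hundred-trillion-to-one claim.

```python
# There are 26**12 possible 12-letter strings over the English alphabet.
# FUNCTIONAL_WORDS is an assumed round figure (~1,000 twelve-letter English
# words), not a number given in the talk.
TOTAL = 26 ** 12
FUNCTIONAL_WORDS = 1_000

ratio = TOTAL // FUNCTIONAL_WORDS
print(f"{TOTAL:e} possible 12-letter strings")                      # ~9.5e16
print(f"about {ratio:e} meaningless strings per functional word")   # ~9.5e13
```

So even a generous estimate of the number of functional words leaves the functional fraction at roughly one in a hundred trillion, which is the ratio the talk quotes.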
So if you were able to hang around and do this for 15 hours, at that point, it would become more likely that you would succeed than fail. So to assess the plausibility of a random search, you need to know how many possibilities there are. But also, you need to know how many opportunities you have to do the searching, OK? Now, in my book, I apply this to biology and show that when we’re looking for a new gene or protein, the situation is less like the four-dial lock than it is like this 10-dial lock, OK? With a 10-dial lock, if you do the same math, that thief could search, and search, and search for an entire 100-year human lifespan and never sample more than about 3% of the possible combinations– and that’s if the thief did nothing but that, day and night, with no potty breaks, no breaks for food, no anything, OK? And in that case, even with the time available, the search is more likely to fail than to succeed. So if you’re a betting person, and you see the lock was opened, you’re going to say, well, it’s less likely that it happened by chance than that it happened some other way– like, maybe, the thief knowing the combination. Now, I’ve applied this way of reasoning to the question of DNA and proteins, because you can think of a protein and the DNA that makes it as a long combinatorial system as well. And the question is, do you have enough opportunities to search in order to have a reasonable chance of success? And the answer mathematically comes out that it’s not like the four-dial lock or the 10-dial lock. When you’re talking about genes and proteins, the mathematical difficulty of the problem is more like a 77-dial lock, where you only have a limited time. Even on the standard geologic time scale, the time available is not nearly enough to search the 10 to the 77th power possibilities. So in that case, it becomes overwhelmingly more likely that a random search will fail than succeed.
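The lock arithmetic in this passage reproduces directly. The sketch below assumes, as the speaker does, one attempt every 10 seconds, a 15-hour session for the 4-dial lock, a 100-year uninterrupted search for the larger locks, and 10 positions per dial.

```python
# Reproducing the speaker's lock arithmetic: what fraction of a lock's
# combination space can be sampled at one attempt every 10 seconds?
def fraction_searched(dials, seconds, secs_per_try=10.0):
    attempts = seconds / secs_per_try
    return attempts / 10 ** dials   # each dial has 10 positions

HOURS_15 = 15 * 3600                  # one 15-hour session
CENTURY = 100 * 365.25 * 24 * 3600    # a 100-year lifespan, no breaks

print(fraction_searched(4, HOURS_15))   # 5,400 of 10,000 combinations: 0.54
print(fraction_searched(10, CENTURY))   # about 0.03, the "3%" in the talk
print(fraction_searched(77, CENTURY))   # about 3e-69: effectively zero
```

The 4-dial search crosses the 50% mark within the 15 hours, the 10-dial search covers about 3% of its space in a century, and the 77-dial case samples a vanishing fraction, which is the comparison the argument turns on.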
And therefore, the hypothesis that that’s how it happened is also overwhelmingly more likely to be false than true. And so scientists are thinking, maybe we want to look for another mechanism. If a hypothesis is more likely to be false than true, we need a new hypothesis. And so the idea that mutation and natural selection can generate new genetic information is turning out to be a very difficult problem for the standard neo-Darwinian theory. [MUSIC PLAYING]
*ThoughtHub is provided by SAGU, a private Christian university offering more than 60 Christ-centered academic programs – associate, bachelor’s, master’s, and doctoral degrees in liberal arts, Bible, and church ministries.