JASON SILVA: Transhumanism is essentially the philosophical school of thought that says that human beings should use technology to transcend their limitations. That it's perfectly natural for us to use our tools to overcome our boundaries, to extend our minds, to extend our mindware using these technological scaffoldings. The craziness here is that we're finding more and more that our technological systems are mirroring some of the most advanced natural systems in nature. You know, the internet is wired like the neurons in our brain, which is wired like computer models of dark matter in the universe. They all share the same intertwingled filamental structure. What does this tell us? That there is no distinction between the born and the made. All of it is nature, all of it is us. So to be human is to be transhuman.
But the reason we're at a pivotal point in history is because now we've decommissioned natural selection. You know, this notion that we are now the chief agents of evolution, right? We now get to decide who we become. We're talking about software that writes its own hardware, life itself, the new canvas for the artist. Nanotechnology patterning matter, programmable matter. The whole world becomes computable, life itself, programmable, upgradable. What does this say about what it means to be human? It means that what it is to be human is to transform and transcend; we've always done it. We're not the same species we were 100,000 years ago. We're not going to be the same species tomorrow. Craig Venter recently said we've got to understand that we are a software-driven species. Change the software, change the species. And why shouldn't we?
DAVID EAGLEMAN: All the pieces and parts of your brain, this vastly complicated network of neurons—almost 100 billion neurons, each of which has 10,000 connections to its neighbors. So we're talking a thousand trillion connections. It's a system of such complexity that it bankrupts our language but, fundamentally, it's only three pounds and we've got it cornered and it's right there and it's a physical system. The computational hypothesis of brain function suggests that the physical wetware isn't the stuff that matters. It's what are the algorithms that are running on top of the wetware? In other words, what is the brain actually doing? What's it implementing, software-wise? Hypothetically, we should be able to take the physical stuff of the brain and reproduce what it's doing. In other words, reproduce its software on other substrates. So we could take your brain and reproduce it out of beer cans and tennis balls and it would still run just fine. And if we said, "Hey, how are you feeling in there?" This beer-can-tennis-ball machine would say, "Oh, I'm feeling fine, it's a little cold," or whatever.
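Eagleman's figures are quick arithmetic to check (rounded, order-of-magnitude numbers; note that the product counts connections, i.e. synapses, rather than neurons):

```python
# Order-of-magnitude check of the brain numbers quoted above.
neurons = 100e9            # ~100 billion neurons (rounded estimate)
synapses_per_neuron = 1e4  # ~10,000 connections per neuron

total_connections = neurons * synapses_per_neuron
print(f"{total_connections:.0e}")  # 1e+15: a thousand trillion connections
```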
It's also hypothetically a possibility that we could copy your brain and reproduce it in silico, which means on a computer, in zeros and ones, actually run the simulation of your brain.
MICHIO KAKU: The initial steps are once again being made. At Caltech, for example, they've been able to take a mouse brain and look at a certain part of the brain where memories are processed. Memories are processed at the very center of our brain and they've been able to duplicate the functions of that with a chip. So, again, this does not mean that we can encode memories with a chip, but it does mean that we've been able to take the information storage of a mouse brain and have a silicon chip duplicate those functions. And so was mouse consciousness created in the process? I don't know. I don't know whether a mouse is conscious or not, but it does mean that, at least in principle, maybe it's possible to transfer our consciousness and at some point, maybe even become immortal.
DAVID EAGLEMAN: The challenges of reproducing a brain can't be overstated. It would take something like a zettabyte of computational capacity to run a simulation of a human brain. And that is the entire computational capacity of our planet right now. There's a lot of debate about whether we'll get to a simulation of a human brain in 50 years or 500 years, but those are probably the bounds. It's going to happen somewhere in there. It opens up the whole universe for us because these meat puppets that we come to the table with aren't any good for interstellar travel. But if we could put you on a flash drive or whatever the equivalent of that is a century from now and launch you into outer space and your consciousness could be there, that could get us to other solar systems and other galaxies. We will really be entering an era of posthumanism or transhumanism at that point.
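As a rough sketch of where a zettabyte-scale figure could come from: a zettabyte is 10^21 bytes, and multiplying the neuron and synapse counts above by a hypothetical per-synapse storage budget (the one-megabyte figure below is illustrative, not a measured quantity) lands in that range:

```python
# Illustrative, assumption-laden estimate; bytes_per_synapse is hypothetical.
neurons = 1e11              # ~100 billion neurons
synapses_per_neuron = 1e4   # ~10,000 connections each
bytes_per_synapse = 1e6     # hypothetical budget for per-synapse state

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes:.0e} bytes")  # 1e+21 bytes: one zettabyte
```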
MICHIO KAKU: I personally believe that one day we will digitize the entire human brain. And what are we going to do with it? I think we're going to shoot it into outer space. We're going to put our connectome on a laser beam and shoot it to the Moon. Our consciousness will be on the Moon in one second. One second, without booster rockets, without all the dangers of radiation or weightlessness. We'll shoot it to Mars and we'll be on Mars in 20 minutes. We'll shoot it to Alpha Centauri and we'll be on the nearby stars in four years. And what is on the Moon? On the Moon is a computer that downloads this laser beam with your consciousness on it and puts it into an avatar, an avatar that looks just like you: handsome, strong, beautiful, whatever, and immortal. And you can walk on the Moon. You can then go and explore Mars.
In fact, I think that once we have laser porting perfected, you'll have breakfast in New York and then go to the Moon for brunch. You go to Mars for lunch and then you go to the asteroid belt in the afternoon for tea, and then you come back to Earth that evening. This is all within the laws of physics, and I'll stick my neck out: I think this actually exists already. I think outside the planet Earth, there could be a highway, a laser highway of laser beams shooting the consciousness of aliens at the speed of light, laser porting across the galaxy, and we humans are too stupid to know it. How would we even know that this laser superhighway exists? How would we even detect it? Our technology today is so primitive that we wouldn't even be able to tell that it's there.
So in other words, I think laser porting is the way that we will ultimately explore the universe. We'll explore the universe as pure consciousness traveling at the speed of light, looking at asteroids, comets, meteors, and eventually the stars, all of this within the laws of physics.
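The travel times Kaku quotes are just light-travel times, easy to verify (the Earth-Mars distance varies widely with orbital position, so the mid-range value below is an assumption; his 20-minute figure sits toward the far end of that range):

```python
# Light-travel times behind the figures quoted above.
C = 299_792  # speed of light, km/s

moon_km = 384_400      # mean Earth-Moon distance
mars_km = 225_000_000  # a mid-range Earth-Mars distance (varies a lot)

print(f"Moon: {moon_km / C:.1f} s")         # ~1.3 seconds
print(f"Mars: {mars_km / C / 60:.1f} min")  # ~12.5 minutes at this distance
# Alpha Centauri is ~4.37 light-years away, hence roughly four years.
```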
STEVEN KOTLER: Mind uploading, storing ourselves on silicon, even teetering on the edge of so-called immortality changes everything about what it means to be human at a really fundamental, deep level. And when I say fundamental, deep level, I mean we're starting to muck around and mess around with evolutionary processes, processes we've never interrupted before, so we have no idea what happens when we do.
DOUGLAS RUSHKOFF: The confidence with which we think we can upload ourselves to silicon or re-create ourselves with algorithms is shocking to me. The only ones out there who think they know what human consciousness is are computer engineers. If you talk to actual brain researchers and neuroscientists, they say, "We're nowhere close." We don't even know for sure what goes on in a single square centimeter of soil. We're still trying to teach agriculture companies that the soil is alive, that it's not just dirt that you can put chemicals on. It's a living matrix. If we don't even know what a single centimeter of soil is, how do we know what the human brain is? We don't, we don't know what the source of consciousness is. We don't know where we come from. We don't even know if there's meaning to this universe or not, yet we think that we can make a simulation that's as valid as this? Every simulation we make misses something. Think about the difference between being in a jazz club and listening to a great CD. There's a difference and some of those differences we understand and some of them we don't.
So when I see people rushing off to upload consciousness to a chip, it feels more like escape from humanity than a journey forward. And I get it: life is scary. I mean, real-life women are scary. People are scary. The moisture is scary. Death is scary. Babies are scary. Other people who don't speak the same language or have the same customs, they're scary. All sorts of stuff is scary. And I understand the idea of having this kind of SimCity, perfected simulation that I can go into and not have to worry about all that stuff I don't know, where everything is discrete, everything is a yes, no, this, that, all the choices have been made. There's a certain attractiveness to that, but that's dead, it's not alive. There's no wonder, there's no awe. There's nothing strange and liminal and ambiguous about it.
STEVEN KOTLER: The idea in mind uploading is that we can store ourselves on silicon. We can upload our personalities, our brains, some part of our consciousness onto computers, and they can stay around forever. It is a far-out technology, for sure. Even though British Telecom was working on it, even though people are working on it, it's very early days. Ray Kurzweil has famously pegged the date when we're going to have to deal with this problem as 2045. That may be really, really enthusiastic, but I think it's a conservative prediction.
MICHAEL SHERMER: Why sell it like it's got to happen in my lifetime? Because that always, to me, seems like you're just tickling that part of the brain that religions like to tap into, that sort of egocentric, "It's all about me and I want to continue on in the future." I get that; of course, I do too. But what if it's 2140? I know, you're doing all the blood cleansing, but you're not going to make it another century. But even so, what if it's 3140, 1,000 years from now? That's possible, but you and I aren't going to be here to enjoy it. All the more reason we should be skeptical when the idea on the table being offered to us feels too good to be true. It almost always is, not always, but usually.
DOUGLAS RUSHKOFF: I was on a panel with a famous transhumanist and he was arguing that it's time that human beings come to accept that we will have to pass the torch of evolution to our digital successors. And that once computers have the singularity and they're really thinking and better than us, then we should really only stick around as long as the computers need us, you know, to keep the lights on and oil their little circuits or whatever we have to do, and then after that, fade into oblivion. And I said, "Hey, no wait a minute, human beings are still special. We're weird, we're quirky. We've got David Lynch movies and weird yoga positions and stuff we don't understand and we're ambiguous and weird and quirky. We deserve a place in the digital future." And he said, "Oh, Rushkoff, you're just saying that because you're human." As if it's hubris, right? "Oh, I'm just defending my little team." And that's where I got the idea, "All right, fine, I'm a human, I'm on team human." And it's not team human against the algorithms or against anything other than those who want to get rid of the humans. I think humans deserve a place. Certainly until we understand what it is we are, we shouldn't get rid of us. And as far as I'm concerned, we're cool. We're still weird and funny and wonderful and yeah, we destroyed the environment, we did really nasty things, but I would argue we do those things when we're less than human. We do those things when we can dehumanize others. And this desire to transcend the human, I feel like it's excusing a whole lot of behaviors. It's excusing a whole lot of dehumanization. It makes it easier to send kids into caves to get rare-earth metals for your phone. It makes it easier to create toxic waste everywhere. It makes it easier for you to think of the human timeline as having a beginning, middle, and an end because we're going to transcend it. 
And that's a sick myth that could very well end our species but, really, I would say to the detriment of our little universe.