
There are numerous long-lived people in the Bible. There are others enumerated in the pantheons of the religions that preceded the books of the children of Abraham. Eastern religions have their own immortals. In every case, immortality is a gift of the sacred. Even the evil possessors of this gift are considered above mere mortals. But that may be changing. Last year I noted that scientists had discovered how to work with the gene that causes aging, possibly even stopping it completely. There has been a spate of other developments as well. All the way back in the good old days of 2016 I wrote about how scientists were overcoming the limitations that prevented humans from uploading their minds into cybernetic beings.
Back then I said this:
(1) There is no viable storage device for all the data required for sentience; (2) Stored data can provide many library-like functions, HI SIRI!, but it can’t reason; and (3) There is no viable way (yes, I used the same word twice, sue me) to have such data interact on a social level in any case.
And all that was true yesterday.
But when you read the article above, you’ll note that all of it was already false even then.
However, now, scientists are dealing with some interesting new issues. Orbiter Magazine just launched a three-part series dealing with the ramifications, and ethics, of what’s happening as I type. We’ll take each part individually, but I strongly suggest you read the entire thing. It’s an amazing look at the near future.
Part 1: Robots to the Rescue by Steven Michael Crane
After taking some time to explain how and why humans are social animals, clarifying that the elderly are no more lonely than their younger counterparts, and diving into the burgeoning use of social media as a replacement for interaction, he brings us here.
Social robots (machines built with faces, voices, or bodies that are intended to elicit social behaviors from humans like natural conversation) are already playing surprising roles in our lives. They can soothe our infants and educate our children. In the future, they will serve as our friends, romantic and sexual partners, and even our entertainers. And they will comfort and care for us as we grow old and die. It’s easy to imagine that advanced capabilities in these domains are just decades away at most.
On the hardware side, we will engineer ever more lifelike skin, musculature, and coordination of movement. Optical and gyroscopic sensors on real humans combined with machine learning will analyze natural human movement and instantiate it in robots. The software that powers the voice and personality of these robots will advance far beyond Siri and Alexa.
Ray Kurzweil[11] and Nick Bostrom[12] predict a future of artificial superintelligence (ASI)—well-explained with cute amateur graphics by Tim Urban—in which machine intelligence will vastly exceed humanity’s in every major domain (as opposed to the regular AI of today, where machines can match or outperform us in very narrow domains). (For the sake of focus over breadth, I will largely sidestep the possibilities and perils of ASI here, and parts two and three in this series will deal more with the inevitable hard-wired integration between human brains and these robots, as well as dreams of human immortality.)
But let’s take these predictions at face value and perform a thought experiment regarding the distant future. Let’s assume that, at the very least, a humanoid robot with artificial intelligence is indistinguishable from a genuine human’s self-portrayal: a fully convincing human simulacrum. Though it was manufactured, and though there’s no internal human anatomy, from all external perspectives, it’s just another human. It can navigate social complexities as well as the best of us; it can be programmed to remember and forget; we can even engineer in human foibles and personality flaws for authenticity’s sake. It may even be conscious and have free will.
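Just to make the motion-capture idea Crane mentions a little more concrete, here’s a minimal sketch (mine, not his): record human joint angles from sensors, fit a simple model that predicts the next pose from the current one, then let that model drive a robot by rolling its predictions forward. Everything below is toy data and a deliberately crude model, purely for illustration; real systems use far richer sensors and learning methods.

```python
# Toy illustration (not from Crane's article) of "motion capture + machine
# learning": learn human movement from sensor recordings, then replay it.

import numpy as np

rng = np.random.default_rng(0)

# 1. Fake "optical/gyroscopic" recordings: three joint angles sampled over time.
t = np.linspace(0, 20, 2000)
human_motion = np.column_stack([
    np.sin(t),            # shoulder angle
    0.5 * np.sin(2 * t),  # elbow angle
    0.2 * np.cos(t),      # wrist angle
]) + rng.normal(0, 0.01, (2000, 3))   # sensor noise

# 2. "Machine learning": least-squares fit of the next pose from the current pose.
X, Y = human_motion[:-1], human_motion[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # simple linear one-step model

# 3. "Instantiate it in a robot": roll the learned model forward from a start pose.
pose = human_motion[0]
robot_motion = [pose]
for _ in range(500):
    pose = pose @ W          # predict the next joint configuration
    robot_motion.append(pose)
robot_motion = np.array(robot_motion)

print("Learned motion, first 3 steps:\n", robot_motion[:3].round(3))
```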
Back in 2013 I wrote about how naturism, doing everything naked, was better for your physical and mental health. It still is, in case you’re curious. But humans are moving in the opposite direction. We’re moving indoors, interacting remotely, and doing the exact opposite of what will help propagate our species in a healthy manner.
All of this brings us to the next article.
Part 2: Virtual Flourishing by Steven Michael Crane
He begins by noting that humans are the only beings, currently, to ascribe meaning to things. That sounds obvious, but think about it. Animals can make one-to-one comparisons (pain is bad, pleasure is good, and so on) but lack any of the nuance that you or I could ascribe to any aspect of our lives. We need a reason to exist, a way to define prosperity, and money need not be the measure. We are beings who have many narratives surrounding what is perceived as a successful, or useful, life.
Now, what happens when we deny ourselves those narratives at the visceral level?
“[Humans are] capable of the highest generosity and self-sacrifice,” Ernest Becker wrote in The Denial of Death. “But [they have] to feel and believe that what [they] are doing is truly heroic, timeless, and supremely meaningful. The crisis of modern society is precisely that the youth no longer feel heroic in the plan for action that their culture has set up. … [T]he problem of heroics is the central one of human life.”
In plenty of ways, modern humans are finding meaningful narratives with which to guide their lives, as humans always have—those centered on family, community, and the support of noble social causes. But many of our pursuits do not bring us the deep and long-term meaning we seek, as Becker points out above.
We ought to be careful where we derive our meaning—including from technology. At a simple level, we find meaning in novels, movies, video games, and, more recently, in increasingly high-quality virtual reality environments. These spin their own (artificial) narratives, and because of our species’ unique penchant for narrative thinking, we can’t help but get deeply caught up in what feels genuinely meaningful. This isn’t problematic in itself, and indeed, pursuits in fantasy worlds can be enjoyable and beneficial. Some gamers are less lonely and anxious when in their online worlds than offline[5] and lonely people can find emotional support and social acceptance online.[6]
Like many children of my generation, I’ve spent many hours (to be honest, probably many thousands) in the virtual worlds of video games. They can be compelling, challenging, creative, and rewarding, as they are carefully engineered to be. Though accomplishment in these games feels meaningful in the moment, it’s eventually followed by a sense of emptiness. The elite status I worked so hard to achieve (top of the leaderboards), all the points I scored, all the virtual creations I made—they were all eventually eclipsed by other drives I’ve come to recognize as more worthy and rewarding: physical and mental exercise and improvement, relationships with family and friends, and working to reduce suffering in the world.
As compelling as gaming activities can be, they’re not sources of true, deep, and lasting meaning. This is particularly important when considering the virtual worlds of the future.
Now we’re getting to the parts that cause consternation. Back in 2012 I wrote about how researchers at Johns Hopkins had come up with a means to use human stem cells to reset the biological clock within each of us. You have cancer? Reset your blood to before that. You have a limb deformity? Not really an issue. Just hit reset.
As of this October that research has made huge jumps forward.
While this is one method of addressing physical decay, decay isn’t the only issue created by longevity.
Which brings us to the last section.
Part 3: Consciousness in the Cloud by Steven Michael Crane
This part gets a little sticky. We have to define the concept of consciousness. He does yeoman’s work, so I’ll let him tell it, but you’ll understand the dilemma as you read.
This is the long part, by necessity, so make sure you have a libation handy.
Let’s say you’ve completed one of these processes, and you have successfully created a perfect emulation of you that has all your memories, characteristics, personality quirks, etc. safely stored on a computer, or even instantiated into a convincing life-like robotic “clone” of you that acts just like the real you.
Have you achieved immortality? From the outside perspective, it may certainly seem like it. Without being clued in that they are talking to your digital clone, your friends and relatives might never know that they aren’t interacting with the original you. But what about from the inside perspective—would you share consciousness? I suspect that you would not, for the following reasons.
First, from a definitional perspective, it’s important to keep in mind the distinction between contents of consciousness, and consciousness itself. It’s hard to imagine that your emulation is an entirely “new” consciousness because it has all the memories and other contents of consciousness that you have. After a fresh copy is made, if you and the copy sat in a theater and watched a movie together, your experience of it and reactions would be nearly identical. But the consciousness that I’m pointing to is prior to and independent of all contents of consciousness, including memory and sensations.
Second, without the original neuronal structures that (as far as we know) enable consciousness in humans, and without a physical locus that integrates the incoming information from the two entities, they have no way to share a single consciousness. Even if we are willing to grant that your emulation or upload may be conscious (a controversial assumption, to put it lightly), it won’t be the same consciousness. Without a direct pathway to transmit information between the two entities (a physical and temporal connection), you don’t share direct access to each other’s subjective experience.
Third, I’m starting from an assumption (that many people share) that one’s subjective consciousness is a unitary experience—a single perspective that is tied to a particular body-mind. It’s a single locus that integrates incoming information and processes it to create the phenomena that appear in consciousness. This assumption is supported by the fact that, no matter how hard we try, there has been no reliable evidence to suggest that an individual can share sense perceptions with another—“see through another’s eyes,” so to speak. If this can’t be demonstrated with identical twins, then I fail to see why it would be the case that any sort of clone, digital upload, or robotic emulation could share a consciousness with the original human.
Parenthetically, the closest humans can get to sharing consciousness with another is with conjoined twins, such as the fascinating example of Tatiana and Krista Hogan, who share a thalamus, and thus one twin can report on sensations that are happening to the other girl’s body. Even in this case, it seems there are two separate individuals that happen to share certain sensory information because of shared neural structures, rather than a single individual, though this admittedly pushes the boundaries of what it means to have a unified experience of consciousness!
Finally, consider death. The destruction of your original biological organism may be required in order to produce a high-fidelity upload, or you may die a natural death. This will likely be the case in most immortality-through-uploading scenarios because the biological brain will inevitably continue to accumulate damage and neuronal death, or at least be susceptible to accidental injury and destruction despite the most heroic efforts to extend lifespan. I suspect that in any such scenario, your original consciousness doesn’t “come with you” into your upload or emulation, no matter the substrate for the same reasons as above. If it doesn’t happen while the original you is alive, I don’t think anything fundamental changes with death.
The scary thing is, from the outside perspective, it might seem like it worked. Only from the subjective inside perspective do the lights go off. That’s why you won’t find me signing up to be uploaded, at least not with the expectation that I will continue to experience conscious existence in my upload. While this may be a sort of “digital immortality,” it does not preserve what people most value about their lives: actually being there to experience them!
And that’s the rub, isn’t it? Can a machine give you the ability to experience life, or just crude simulations of same? He concludes that, in a hundred years, things will basically be the same. Given basic economics (it’s expensive as fuck to build immortal robots) and current economic disparity, it’s easy to see why he would believe this.
But, and I say this carefully, humans suck. For any of this to become an issue we must first survive the creation of a race of support beings. He speaks of them in part one. And, just so we’re clear, as he is in his full article, these would be property. Or, more accurately, slaves. And humans have a bad track record there.
I’ve done a ton of articles on medical advances that can prolong life. And, in them, you’ll find numerous pieces about using artificial intelligence as an alternative to organic living.
But none of those articles take into account human nature. Plus, and this is fun, artificial intelligence has shown that it seems to prefer working without us. It develops its own language, has learned how to argue points without our direction, and may not require us much longer.
What will happen then remains to be seen.
Nevertheless, for the sake of optimism, let’s say we don’t get extinguished by our robot overlords and, instead, are able to co-exist and work on methods of transplanting our consciousness into machines.
Then what?
Do you spend all eternity in a simulation to invoke feeling? Or do we use our immortal status to shepherd in the next set of organic beings to take over the Earth?
Yeah, we need to think this through a little more.
https://vimeo.com/76435162