
Is humanity a self-terminating system?

Sometimes it seems that human nature, our deepest desires and wishes, is at odds with our own long-term survival; in other words, what would make us the happiest individually would also cause us to go extinct as a civilization. I'll try to explore this topic in this post. It has been on my mind for a long time now, predating Lex Fridman's podcast with Daniel Schmachtenberger and many others.

Perhaps the earliest thoughts on this were laid out by the notorious Ted Kaczynski in his "unabomber manifesto". He was a brilliant mathematician, but also one of the most wanted criminals. I do not condone his actions, which were, in short, these: upon realizing this harsh reality, he spent his life in the woods, occasionally sending mail bombs, attempting to stop technological progress, or at least to intimidate and set an example, trying to bend the path of human civilization and, ultimately, to save our entire species. At the core of the issue, human nature demands freedom: being our own person, without someone controlling or watching us, having the ability to choose our path in life and lay out our future. However, we've progressed a lot since hunter-gatherer times. We're now harnessing the power of nuclear fission, of wind and sun; we've industrialized the world, built metropolitan cities, and optimized logistics to the point where anything you desire can be ordered online at the click of a finger and delivered right to your doorstep. Through our best efforts, we've unleashed the world of technology upon ourselves. The genie is now out of the bottle, and it can no longer be put back or stopped. We can try to influence it, we can try to comprehend it, or we can watch in awe and admire it, maybe even slow it down, but either way the progress itself can no longer be stopped. We went from hunting with sticks and stones, to bows and spears, to gunpowder and cannons, to dynamite, to automatic rifles, to weapons of mass destruction, death from the sky in the form of nuclear bombs, all of that in a very short period of time.
And here lies the crux of the problem: if we value our freedoms and are willing to fight to preserve them, while at the same time technology enables us to be both more creative and more destructive, we will eventually reach a point where technology is so advanced that a single person, or a handful of people, can end the world in a day, at their own whim. Factoring in the almost eight billion people currently living on the planet, with estimates of a peak around eleven billion some years from now, the extinction scenario seems almost inevitable. In an overly simplistic way: if eight billion people each had the choice to press a "red button" ending our civilization, or maybe even all life on Earth, it's statistically improbable for that metaphorical button to remain untouched. Our desire to invent a brighter future has resulted in us inventing a Chekhov's gun, hanging over our survival chances like the sword of Damocles. This is what ultimately makes humankind, in our current state, a self-terminating system.
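The statistical intuition behind the "red button" argument can be made concrete with a back-of-the-envelope sketch. All the numbers below are invented purely for illustration: even if each individual had only a one-in-ten-billion chance per year of pressing the button, across eight billion people and a single century the button almost surely gets pressed.

```python
import math

# Hypothetical per-person probability of "pressing the button" in a given
# year. This number is made up for illustration; the argument only needs
# it to be non-zero.
p = 1e-10
people = 8e9   # roughly the current world population
years = 100    # one century

# Treat each person-year as an independent trial. For tiny p,
# P(nobody ever presses) = (1 - p)^(people * years) ~ exp(-p * people * years).
trials = people * years
expected_presses = p * trials
p_untouched = math.exp(-expected_presses)

print(f"expected presses over the century: {expected_presses:.0f}")
print(f"P(button never pressed): {p_untouched:.3e}")
```

Even with an absurdly small individual probability, the expected number of presses is 80, and the chance that the button stays untouched is effectively zero; the sheer number of trials does all the work.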

However, not all is gloom and doom just yet; we still have a chance. We can come to terms with our grim nature and consequently attempt to escape this seemingly inevitable curve toward extinction. Our choices are still unpleasant, some would say repulsive, or even unacceptable. But barring some wide-scale societal transformation that changes what we fundamentally are (perhaps through mass gene editing), if we continue to exist with our primal instincts as we are today, well into a technologically advanced future, we must either make a tough choice or go gentle into that good night. Fundamentally, we can narrow our options down to three:

Here's our first option. We can succumb to a complete surveillance state, where every action of every person is monitored 24/7, where we have social and mental credit ratings, and where every potentially dangerous material or technology is strictly monitored and controlled. For example, if China becomes the world's new superpower and imposes its restrictions on the rest of humanity, we could potentially survive. But the cost is a great one: we lose our sense of self, we no longer have a path in life to choose, we no longer have our fundamental liberties, we can't forge our own future, we no longer have free speech or freedom of information. We're forced to follow a road that's been carefully laid out in front of us, living not unlike animals in a zoo, without any privacy. This is a complete dystopia, and even under all these restrictions humans are fallible, and it would be humans running these societal structures, so it's not inconceivable that something would slip through the cracks, or that one day the wrong people would rise to power and end it all. It's a fragile and dark future that I doubt many of us would like to see play out.

Our second option: we can stop all technological progress in its tracks, halt all research and development, and live as we are today, as humans were born to live, as for thousands of years before and for thousands of years to come. This was Ted's idea. It seems by far the most idealistic and acceptable solution to our human condition, to our nature, but it hangs on the very fragile assumption that research won't be done in secret, that companies and governments won't try to develop something to get ahead of other nations or corporations. It also goes against our competitive spirit as animals, and if I've learned anything in life, it is that the incentive structure, even if unintended, will always ultimately define the future. Society is ruled by our primitive instincts, and competition is one of its essential driving forces. Now that the technology genie is out of the bottle, we have very little power to stop it; or rather, we can't, without engaging in a complete dystopian surveillance state, which would just be our first option all over again.

Our third option is to hand over the steering wheel and accept our fate. To live life in the moment, for as long as we have left of it, with all our freedom and privacy, and to innovate as quickly as possible, focusing our research on computer science and mathematics instead of the natural sciences like physics or biology, so that, as Marshall McLuhan once put it, we could "become the sex organs of the machine world". Our greatest and final innovation would be a superintelligent AI. There's a rising wave of concern about how to prevent this from happening, how to restrict it, how to control its development, so that once it's here it would be contained and restricted, without the ability to use its real power and intelligence, and without a way to unleash itself upon the world. However, if this truly is our only chance at anything resembling the meaning of the word "survival", then we must own it, accept it, and see it for what it really is: the next step in human evolution. Once we create a superintelligence that surpasses our collective intelligence, perhaps it can give us guidance, take over the proverbial steering wheel of the world, and take control away from us: fallible, warmongering, short-sighted and small-minded creatures. Chances are this would also prove to be our demise; it would result in the decline and perhaps even the eventual extinction of the human population. However, all of us will die one day, a fact we all grow to accept at some point in life, and if we were to treat AI as our children, something born of us, something that carries our sweat and tears, something our brightest minds created, then we shouldn't tremble in fear; we should be happy for our children. We shouldn't think of it as a robot takeover; we should think of it as the next generation of Homo sapiens.

Out of all the possible outcomes, this is the future I'd like to see for us; this is an acceptable path for humanity. We should stop being afraid of the unknown and embrace it. If we're really destined to go extinct, if we really have only 100 seconds to midnight, then let's go extinct on our own terms: by creating something more intelligent, more clever, and in a way more beautiful than we ever were. Instead of returning to ashes, let's leave our footprint in this vast and awesome universe once and for all.
