Should Humans Colonize Mars? (or How I Learned to Stop Worrying and Love the Brain)

The argument for colonizing Mars has gained considerable strength recently, with prominent scientists and business leaders expressing interest and drawing up plans to do so. Mars, we are told, is quite similar to Earth: it has a comparable diurnal cycle, an atmosphere, water, and even the possibility of life. People, however, lament the distance, the difficulty of navigating a spacecraft to Mars, and the difficulty of communicating with Earth from so far away. But for the continuity of mankind, it is argued, it is essential to colonize another planet, because a catastrophic event could happen on Earth at any time: an asteroid strike, or a war that leaves Earth a nuclear wasteland.

Except that these two catastrophic scenarios are not the same. The first is a cosmic event over which mankind has little or no control. Humans themselves, however, will cause the second. Despite the dangers of a full-blown nuclear war having been known for nearly eight decades, mankind's appetite for the proliferation of weapons of mass destruction continues unabated. If even the existing nuclear stockpile were used in a third world war, a nuclear catastrophe on Earth is assured. So the question arises: even if humans escaped a nuclear catastrophe on Earth and colonized Mars, would this scenario not replay there? And if they escaped Mars before the same self-destructive event unfolded, they would need to move to another planet, then another, and so on. If mankind continues to develop ever more sophisticated weapons while surviving by hopping from planet to planet, the destructive capability of those weapons is bound to increase exponentially over time. It then becomes an open question whether humankind's ability to find new worlds can keep ahead of its capacity for destroying itself. And we would be back to the situation we are in at present.

So before we colonize Mars, humankind may first have to confront its self-destructive tendencies, which lie deep in the makeup of its brain. Otherwise the effort to colonize Mars may prove futile. Business and scientific leaders and organizations focused on the technical aspects of colonizing Mars (Musk, Enriquez, SETI, etc.) may want to first direct their efforts toward making the human brain more capable of survival in the long run. Even if we never get to Mars, such advances may make life better and more sustainable here on Earth.

Frankenstein, Spock and the Matrix: The Nature of our Anxiety Regarding Artificial Intelligence

Though prominent scientists and technologists such as Stephen Hawking and Elon Musk have sounded the alarm about Artificial Intelligence (AI) and its possible negative impact on the human species, the specific nature of the anxiety about AI has not been clearly articulated. It is important to understand the psychology of our anxiety about AI so that we can manage the new technology constructively. The vague anxiety and conflicting views prevalent today lead to confusion and disproportionate fears, which can themselves be destructive. Understanding the nature of our anxiety about AI may inform how we develop it, so that our deepest fears are not realized.

Sigmund Freud introduced to the world a model of the mind to explain the origin of anxiety. In Freud's conceptualization, the Id, the primitive emotional part of the human psyche with its libidinal and destructive capacities, is constantly trying to resurface and is kept in check by the superego, the internalized rules and laws of society. The constant conflict between the two is held in abeyance by the ego, which lives at the border between these two mental realms. When the ego fails to keep the Id in check, or to reconcile the differences between the superego and the Id, anxiety results.

The most prevalent notion of the danger of AI is that AI will truly be made in the image of humans and will have Id-like destructive tendencies similar to our own. A second view is that AI will be pure intelligence, and that for it humans would be the Id: something too primitive, to be suppressed or even eliminated. A third fear is that the machines will develop survival drives of their own, which will clash with the survival needs of humans and lead to conflict and to aggression by the AI against humans.

For the first scenario, the prototype of man's creation going awry is Mary Shelley's Frankenstein. Her novel is considered the first work of biological science fiction and the parent of nearly all such stories to the present day. In it, Dr. Frankenstein tries to create life in the form of a superior human being. He is, however, only able to cobble this being together from body parts gathered from slaughterhouses and mortuaries, resulting in a hideous but very strong creature whom he infuses with life using electricity. The creature is nonetheless very much like Frankenstein himself, i.e. very human (the novel never names the creature, and in a telling twist of time the monster is now popularly known by the name Frankenstein). The creature feels shame at his appearance, is angry at the doctor for creating him that way, and is lonely and wants a companion; he goes on a rampage and murders many people to exact revenge on his creator, leading in the end to the creator's own death, while the creature continues to roam the world.

Though Mary Shelley's story is used as a cautionary tale regarding AI, it is really the story of a human who is dangerous and violent, and may be a reflection of the murderous and libidinal Id that Freud held to be present in all of us. Unfortunately, the current discourse on the dangers of AI in popular culture is based on this Frankenstein model, which is really about the fear human beings have of other human beings, the fallen nature of man, and man's relationship with his imagined creator. That AI technology will develop into a human-like creature with complex feelings of guilt, shame, loneliness, and revenge seems very unlikely; and even if that happened, the development would not be something new but a replication of humans themselves in silico. Considering that there is an exponentially increasing number of humans in the world, with their needs and talents but also their profound flaws, the purpose of developing such a technology is as questionable as the prospect is unlikely. The Frankenstein model, then, is really nothing more than the primal fear of death and destruction that mankind feels not only toward something like AI but also toward floods, earthquakes, nuclear bombs, or God's wrath. It can easily be explained by the Freudian mechanism of projection: man's own destructive, Id-like impulses are projected onto something outside himself, such as AI. In this model, AI technology serves as nothing more than a scapegoat for mankind's own destructive impulses. The solution here would be to develop this insight rather than act out our destructive impulses and blame them on AI.

The second model of the fear of AI is based on an inherent human bias toward machines. AI as currently developed, using computer algorithms and mathematical equations, is likely to be precise, accurate, and far superior to human beings at many logical and computational tasks. Humans, on the other hand, are error-prone and limited in their mathematical abilities. As we develop AI routines, even though we make them ourselves, our limitations become self-evident. This has led, on the one hand, to admiration for the 'machines' and, on the other, to loathing and a condescending attitude toward AI.

The dialogue between Dr. McCoy and Spock, with Captain Kirk casting the deciding vote on the amazing powers of emotions, plays out again and again across the episodes of Star Trek. Star Trek was conceived and televised just as computational power was being developed and may represent the earliest debate on the superiority of AI versus humans. The predominant theme of the dialogue between the three principal protagonists of the original series, however, is the romanticization of emotions and their importance. Time and again, Spock is taught an important lesson by Kirk or McCoy about how emotions make humans special (and superior). Spock's saving grace is that he is half human and is able to partially acknowledge the superiority humans derive from their emotions. In an underlying ambivalence, a love-hate relationship with technology, the character of Spock and his vastly superior cognitive abilities are also romanticized (Spock remains one of the most popular characters of the series), and the fear of such a purely cognitive creature is managed through humor and by making Spock half human. This position on AI is roughly the opposite of the first one: here the humans represent the Id, with their chaotic and error-prone emotions. The fear is that a future AI may not tolerate such creatures and may decide to do away with them. Though there is some hope that human beings may be saved by the magical powers of their emotions, which the machines lack, the probability of that happening is uncertain. In this model, the humans represent the Id while the machines represent the superego and ego, which may suppress the humans, perhaps permanently. The strategy for dealing with AI in this model would be to develop a synergistic relationship with it and give up our biases of superiority; by working synergistically, both humans and AI may benefit from each other.

The third model of AI is probably the most alarming to humans, because in it the nature of AI, its motives, and its actions are the most unknown. It is also probably the most likely scenario for conflict, if any develops, between humans and AI. In this model, AI becomes more complex with time and, as it is programmed to become self-determining, it becomes autonomous. Besides performing tasks autonomously, it is likely to be programmed to organize itself optimally for current and future, more complex tasks. For consistency and stability, the original programming by humans, and later by the AI itself, is likely also to include some kind of hitherto unknown homeostatic, self-preserving mechanisms. Though the fear is that these self-preserving mechanisms will resemble those of humans and involve aggression, as in the first model, it is also possible that they will be entirely different from the emotions and survival mechanisms known to humans. In this scenario, these mechanisms may be activated if human goals (which may not be completely rational) come into conflict with those of the AI. Humans are likely to respond by trying to neutralize the AI or its actions, but it is unknown how the AI would respond, or what properties or powers it might bring to bear to resolve the conflict with humans. In the movie The Matrix, the machines are portrayed as just such an entity: they become autonomous and enslave and cultivate humans for their energy needs. The only model humans know for dealing with irreconcilable differences is destruction or subjugation of the other party by some kind of destructive force; hence the fear of such a scenario.

However, we do not know what the AI will be capable of. In this regard, this last scenario may be the most anxiety-provoking, as it taps not only the fears of the first two models but also the fear of the unknown. At the same time, it may be the most hopeful. The new AI may be able to develop new and better ways of resolving conflict of which the human brain is not capable. This is more likely to occur the more autonomous or independently self-organizing the new AI is, since human input into its development is likely to lead to human-like results. This perspective should be kept in mind by AI programmers and developers, as it may lead to the development of an AI brain better than the conflict-ridden human brain.