Can the Brain understand the Brain: Incompleteness, Uncertainty and Strange Loops

To improve the human brain, humans first have to understand how the brain functions.  That task, however, is far more difficult than it appears on the surface.

Besides the great difficulties in investigating higher mental functions, which are common to all epistemology, we face some unique challenges when the brain tries to study itself.  These difficulties have more in common with investigations in mathematics and physics, where the problems of measuring and describing a system from within that system have been studied in greater detail.  These difficulties are: 1) Can an observer within a system study and understand the system wholly and accurately? 2) Can an observer within a system study the system without changing it? 3) How can a system reflect upon itself?

Kurt Gödel described the limits of a logical system's ability to fully understand itself in his incompleteness theorems: a system built on self-evident assumptions (axioms) will, if consistent, always contain questions it cannot answer; and if it can answer every question, some of the answers will be wrong and the system will be inconsistent.   In the case of the brain, by definition, any starting point of an investigation by the brain into itself is a 'self-evident' property.  Therefore, even questions about its basic functions may be unanswerable.  This may be one reason why even the salient properties of the brain, such as volition, consciousness, emotions, self-identity, and abstract thinking, are so difficult to conceptualize and study. Despite decades of research, very little is known about the mechanisms behind these functions.  Keeping Gödel's theorems in mind, a true understanding of brain function may arise only if something outside the system studies the brain.  At present, there are two possible candidates to conduct such an investigation: an artificial entity or an alien life form. In the case of an artificial entity, the limitation is that it would likely be developed by humans and so may suffer the same limits of logical accuracy and consistency as the human brain itself.  The case of an alien life form is purely speculative and assumes that such an entity, if it exists, has some interest in studying the human brain.
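For reference, the first incompleteness theorem is usually stated along the following lines; this LaTeX sketch is the standard textbook paraphrase, not a formalization specific to this essay:

```latex
% First Incompleteness Theorem (standard informal statement):
% for any consistent, effectively axiomatized theory $T$ that can
% express elementary arithmetic, there is a sentence $G_T$ with
T \nvdash G_T
\qquad\text{and}\qquad
T \nvdash \neg G_T .
```

The second theorem adds that such a theory cannot prove its own consistency, which is the formal analogue of the essay's point that a system cannot certify itself from within.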

A second limitation of the brain studying the brain arises from the principle in physics that an observer, by studying a system, changes it.  Often associated with Heisenberg's Uncertainty Principle, this phenomenon becomes very pertinent to the brain even if the investigation is merely the brain thinking about itself.  Descartes' 'I think therefore I am' is not entirely accurate if, as soon as people start thinking about themselves, they induce a change in the 'I am'.  Furthermore, the current methods of conducting human brain research are so intrusive and uncontrolled that the investigator or observer changes the system dramatically.  A common method of studying brain function is to use a brain scanner, e.g., a PET scan or a structural or functional MRI scan.  At present, the technology for obtaining neurochemical and physiological measures of brain function is primitive, and most images are noisy and distorted.   Besides imaging artifacts, scan testing is usually anxiety provoking, and it is difficult to translate findings from a person lying in a scanner to a person going about real life.  Ethical issues in studying living human brain function lead to further confounds of subject selection, adequate controls, and confounding factors such as medications or drug use.  In this scenario, obtaining objective knowledge of brain functioning free of observer bias or observer effect seems nearly impossible.  Quantification and statistical analyses of these images have failed to provide any deep understanding of higher brain function in health and disease despite decades of research.  The Uncertainty Principle is, by definition, difficult to get around, so all that can be done is to reduce its impact as much as possible.  This requires an exponential improvement in the technology used to study the living brain, so that information can be obtained with minimal artifacts and the least disturbance to the system.
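Strictly speaking, the disturbance an observer introduces is the "observer effect," while Heisenberg's principle is the related quantitative bound on how precisely conjugate variables can be known simultaneously; for position $x$ and momentum $p$ it reads:

```latex
% Heisenberg uncertainty relation for position and momentum,
% where \hbar is the reduced Planck constant:
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

The bound is built into the physics and cannot be engineered away, which mirrors the conclusion above that the observer's impact can only be minimized, never eliminated.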

Finally, the very process of the brain delving deeper and deeper into itself, parsing its functioning and physiology into smaller bits and then coming back to how that knowledge determines behavior, identity, and consciousness seems like a very strange endeavor. Douglas Hofstadter coined the term 'strange loop' for a process in which delving deeper into levels leads back up to the starting level.  At present, a strange loop seems to exist in neuroscience research between system-level neural circuit oscillations as the basis of human behavior and reductionist formulations of a single gene or molecule determining behavior.  Brain function is thought to arise from neuronal firings, which are related to the firing of subgroups of neurons, then to membrane electrical potential changes, then to molecular changes such as changes in protein or gene expression. However, this reductionist approach has not yielded any particular molecular or gene-expression abnormality related to higher mental function.  Therefore, it has to be postulated that a pattern of interaction between genes or molecular changes may explain the behavior.  To test this, dynamic patterns of change at the molecular level need to be looped back to neuronal firing, oscillations, and the firing of groups of neurons.  Most neuroscience research at present follows one arm of the loop: from behavior to a single-molecule or brain-region abnormality. Even though this reductionist approach has not led to any major findings on the basis of higher-level mental function, it remains the dominant paradigm in neuroscience investigations.  This may be due to a stubborn reductionist philosophy in science, a byproduct of funding mechanisms that reward the identification of single discrete findings, and a publication bias that caters to the same preferences.
The other arm of the strange loop, starting from the dynamic relationships among molecular changes and working up toward neuronal firings and network changes, has not been pursued as much. Whether following the full loop will increase our understanding of the brain, or whether such scientific investigation will meaninglessly keep looping back on itself, remains unclear at this stage.

Should Humans Colonize Mars (or How I Learned to Stop Worrying and Love the Brain)

The argument to colonize Mars has gained considerable strength recently, with some prominent scientists and business leaders expressing interest and drawing up plans to do so.   Mars, we are told, is very similar to Earth because of its similar diurnal cycle, the existence of an atmosphere, the presence of water, and even the possibility of life.   People lament, however, about the distance, the difficulty of aligning a spaceship toward Mars, and the difficulty of communicating with Earth from so far away. But for the continuity of mankind, it is argued, it is essential to colonize another planet, because a catastrophic event can happen anytime on Earth – an asteroid strike or a war that leaves Earth a nuclear wasteland.

Except that these two catastrophic scenarios are not the same. The first is a cosmic event over which mankind has little or no control. The second, however, humans would cause themselves.   Despite the dangers of a full-blown nuclear war being known for nearly eight decades, mankind's appetite for the proliferation of weapons of mass destruction continues unabated. If the existing nuclear stockpile were used in a third world war, a nuclear catastrophe on Earth is assured. So the question arises: even if humans were able to escape a nuclear catastrophe on Earth and colonized Mars, would this scenario not replay there? And if they escaped Mars and the same self-destructive event happened there, they would need to go to another planet, then another, and so on.   If mankind continues to develop the technical ability to build ever more sophisticated weapons while surviving by hopping to other planets, then in time the destructive capability of those weapons is bound to increase exponentially. It will then become just a matter of time whether humankind's ability to find newer worlds can keep ahead of its capacity for destroying itself.   And we will be back to the situation we are in at present.

So before we colonize Mars, humankind may have to first confront its self-destructive tendencies, which lie deep in its brain makeup. Otherwise the effort to colonize Mars may be futile.   Business and scientific leaders and organizations focused on the technical aspects of colonizing Mars (Musk, Enriquez, SETI, etc.) may want to first focus their efforts on making the human brain more capable of survival in the long run. Even if we do not get to Mars, such advances may make life better and more sustainable here on Earth.

Frankenstein, Spock and the Matrix: The Nature of our Anxiety Regarding Artificial Intelligence

Though prominent figures such as Elon Musk and Stephen Hawking have sounded the alarm about Artificial Intelligence and its possible negative impact on the human species, the specific nature of the anxiety about AI has not been clearly articulated.  It is important to understand the psychology of our anxiety about AI so that we can manage the new technology constructively. The vague anxiety and conflicting views prevalent nowadays lead to confusion and disproportionate fears, which can themselves be destructive. Understanding the nature of our anxiety about AI may help shape how we develop AI so that our deepest fears are not realized.

Sigmund Freud introduced to the world a model of the mind to explain the origin of anxiety. In Freud's conceptualization, the Id, the primitive emotional part of the human psyche with its libidinal and destructive capacities, was constantly trying to resurface and was kept in check by the Superego, the internalized societal rules and laws. The constant conflict between the two was kept in abeyance by the Ego, which lived at the border between these two mental realms.  When the Ego fails to keep the Id in check or to reconcile the differences between the Superego and the Id, anxiety results.

The most prevalent notion of the danger of AI is that AI will truly be in the image of humans and will have Id-like destructive tendencies similar to ours. A second view is that AI will be pure intelligence, and for it humans would be the Id, which it would find too primitive and try to suppress or even eliminate.   A third fear is that the machines will develop survival drives of their own, which will clash with the survival needs of humans and lead to conflict and aggression by AI against humans.

For the first scenario, the prototype of man's creation going awry is Mary Shelley's Frankenstein. Her novel is thought to be the first biological science fiction story and the parent of nearly all biological fiction stories to the present. In the story, Dr. Frankenstein tries to create life in the form of a superior human being. However, he is only able to cobble this being together from varied body parts taken from slaughterhouses and mortuaries, resulting in a hideous but very strong creature whom he infuses with life using electricity. The creature, however, is very much like Frankenstein himself, i.e., very human (the novel does not give the creature a name, and in a telling twist of time, the monster is now known by the name of Frankenstein). The creature feels shame about his appearance, is angry at the doctor for creating him that way, is lonely and wants a companion, and in the novel goes on a rampage, murdering many people to exact revenge on his creator, leading in the end to the creator's own death while the creature continues to roam the world. Though Mary Shelley's story is used as a cautionary tale about AI, it is really the story of a human who is dangerous and violent, and may be a reflection of the murderous and libidinal Id that Freud held to be present in all of us. Unfortunately, the current discourse on the dangers of AI in popular culture is based on the Frankenstein model, which is really a fear of human beings about other human beings, the fallen nature of man, and man's relationship with his imagined creator. That AI technology will develop into a human-like creature with complex feelings of guilt, shame, loneliness, and revenge seems very unlikely, and even if it does, the development will not be something new but a replication of humans themselves in silico.
Considering that there is an exponentially increasing number of humans in the world, with their needs and talents but also profound flaws, the purpose of developing such a technology is questionable. The Frankenstein model, then, is really nothing more than the primal fear of death and destruction that mankind feels not only toward something like AI but also toward floods, earthquakes, nuclear bombs, or God's wrath. It can easily be explained by a Freudian model of projection of man's own destructive, Id-like impulses onto something outside himself, such as AI. In this model, AI technology serves as nothing more than a scapegoat for mankind's own destructive impulses. The solution here would be to develop this insight and not act out our destructive impulses and blame them on AI.

The second model of the fear of AI is based on an inherent human bias toward machines.  AI as currently developed, using computer algorithms and mathematical equations, is likely to be very precise and accurate and far superior to human beings at many logical and computational tasks. Humans, on the other hand, are error prone and limited in their mathematical abilities. As we develop AI and AI routines, even though we make these ourselves, our limitations become self-evident. This has led, on the one hand, to admiration for the 'machines' and, on the other, to a loathing and condescending attitude toward AI.  The dialogue between Dr. McCoy and Spock, with Captain Kirk casting the deciding vote on the amazing powers of emotions, played out again and again in the various episodes of Star Trek. Star Trek was conceived and televised just as computational power was being developed and may represent the earliest debate regarding the superiority of AI versus humans. However, the predominant theme of the dialogue between the three principal protagonists of the original Star Trek is the romanticization of emotions and their importance. Time and again, Spock is taught an important lesson by Kirk or McCoy about how emotions make humans special (and superior). Spock's saving grace is that he is half human and able to partially acknowledge the superiority of humans because of their emotions. In an underlying ambivalence reflecting a love-hate relationship with technology, the character of Spock and his vastly superior cognitive abilities are also romanticized (Spock remains one of the most popular characters of the series), and the fear of such a purely cognitive creature is managed through humor and by making Spock partially human. This position regarding AI is somewhat the opposite of the first one. Here the humans represent the Id, with their chaotic and error-prone emotions.
The fear is that a future AI may not like such creatures and may decide to do away with them. Though there is some hope that human beings may be saved by the magical powers of their emotions, which the machines do not have, the probability of that happening is uncertain. In this model, the humans represent the Id while the machines represent the Superego and Ego, which may suppress the humans, perhaps permanently.  The strategy for dealing with AI in this model would be to develop a synergistic relationship with it and give up our biases of superiority; by working synergistically, both humans and AI may benefit from each other.

The third model of AI is probably the most alarming to humans, because in this model the nature of AI, its motives and its actions, is the most unknown. It is also probably the most likely scenario for the development of conflict, if conflict develops, between humans and AI. In this model, AI will become more complex with time, and as it is programmed to become self-determining, it will become autonomous. Besides performing tasks autonomously, it is likely to be programmed to organize itself optimally to perform current and more complex future tasks. For consistency and stability, the original programming by humans, and later by the AI itself, is likely to involve some kind of hitherto unknown homeostatic and self-preserving mechanisms.  Though the fear is that these self-preserving mechanisms will be like those of humans, involving aggression as in the first model, it is also possible that they will be entirely different from the emotions and survival instincts known to humans. In this scenario, these mechanisms may be activated if human goals (which may not be completely rational) conflict with those of the AI. Though humans are likely to respond with actions aimed at neutralizing the AI or its actions, it is unknown how the AI would respond or what kind of properties or power it may have to resolve the conflict with humans. In the movie The Matrix, the machines are portrayed as such entities, which become autonomous and enslave and cultivate humans for their energy needs. The only model humans are familiar with in the face of irreconcilable differences is the destruction or subjugation of the other party by some kind of destructive force, hence the fear of such a scenario.

However, we do not know what AI will be capable of. In this regard, this last scenario may be the most anxiety provoking, as it taps not only the fears of AI in the first two models but also the fear of the unknown. At the same time, this last scenario may be the most hopeful. The new AI may be able to develop new and better ways of resolving conflict of which the human brain is not capable. This is more likely to occur the more autonomous or independently self-organizing the new AI is, since human input into the development of AI is likely to lead to human-like results. AI programmers and developers need to keep this perspective in mind, as it may lead to the development of a better AI brain than the conflict-ridden human brain.