The Mind of a Virus (COVID-19): Watson, Turing and Searle

Does the virus have a mind? Is it thinking? Is it scheming how to inflict maximum damage? From how we describe it (we have not attributed a gender to it yet), it would seem that not only does it have a mind, but a very devious and highly calculating one. Consider a recent interview with a famous virologist who tested positive and suffered a severe, life-threatening illness. In the interview he says: "They (the viruses) got me, I sometimes thought. I have devoted my life to fighting viruses, and finally, they get their revenge." Or consider an article in the journal Nature: "once the invader's genetic material gets inside the cell, the virus commandeers the host molecular machinery to produce new viral particles. Then, the progeny exit the cell to go and infect others." Besides "invader," the virus has been called "the invisible enemy" and "the enemy with which we are at war."

One could attribute these descriptions of the virus to our tendency toward personification and animism. It may sound ridiculous to think that a virus has a mind. But that depends on what we call a mind. As it turns out, it is not easy to define what a 'mind' is; the question remains a debated subject in a field of philosophy called the 'Philosophy of Mind.' Starting from Descartes, whose cogito ergo sum split the mind from the body, philosophers and neuroscientists have been debating the nature of what is called the mind and how it relates to the physical aspect of the organism. The debate is mostly about the human mind, but arguments for or against a mind in other animals, such as a cat or a dog, which appear to possess one, have also been proposed. The argument, however, has never stretched to the earliest living entities: the viruses. Viruses are so ancient and rudimentary that even their status as living organisms is questionable. Yet now, in the middle of a pandemic, COVID-19 has proven to be not only a formidable problem but also an apparently highly intelligent one. It has spread over the whole world in the space of a few months. It has infected and killed hundreds of thousands of humans, a species that prided itself on having a monopoly on the intelligent mind. It spreads in many ways and survives on all sorts of surfaces for several days. Once inside a human, it inflicts widespread damage on cells of the lungs, heart, and kidneys, while the immune response is either too little or too much. All this time, the virus replicates relentlessly. Though humans may be projecting their own thoughts when they describe the virus's intentional behavior, it becomes more and more difficult not to think that there is a 'mind' of the virus creating the havoc. In fact, some theories in the philosophy of mind are compatible with the notion that the virus has a mind.

Behaviorism, as conceptualized in psychology (Watson, Skinner) or in philosophy-of-mind terms, postulates that mental states are just descriptions of behavior, or of dispositions to behave in certain ways, made by third parties to explain and predict another's behavior (Gilbert Ryle, 1949). In such a formulation, we do not have to postulate the existence of mental states even in humans. And if survival behavior is the sole description needed for a seemingly intentional mental state, then the virus's behavior can be used to conceptualize the 'mind' of the virus. Attachment to specific receptors, penetration of the human cell membrane, replication by hijacking the host's resources, bursting out of the cell to infect other cells, and spreading in the blood could all be labeled intentional and intelligent behavior. Furthermore, maneuvers to escape the host's immune mechanisms and to change the host's behavior (e.g., making the host cough, or seek the comfort of other humans who could serve as new hosts) so that further spread can occur could be taken as evidence of an intelligent mind. After all, these behaviors have ensured the survival of the virus or its ancestors for millions of years. Under behaviorism, the actions of the virus are all that matter for the presence of a 'mind.' Under this theory, the personification of the virus, attributing to it intent, purpose, and even intelligence, is not absurd or off the mark.

Cognitivism is another theory of what constitutes a mind. It likens the working of the brain to a computer: mental operations are treated as computational processes or states carried out by the brain's hardware. A separate consciousness or mind is not required. If a computer's behavior is indistinguishable from that of a human, then, according to the famous 'Turing Test,' the computer could be said to have a mind. The viral RNA can be thought of as the hardware on which the program encoded in the arrangement of its genetic code runs. One could perceive the ensuing behavior of the virus, its survival, its deception of the host, its hijacking of the host's resources, as very human-like. In this respect, the virus can very much be thought of as having a 'mind,' working intelligently and with intent to infect hosts and host cells and to replicate itself.
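To make the program analogy concrete, here is a minimal sketch in Python (purely illustrative; the states and transitions are invented simplifications, not virology) of the viral life cycle as a fixed lookup table being executed:

```python
# Illustrative sketch only: the viral life cycle modeled as a tiny
# finite-state machine. The states and transitions are simplified
# stand-ins; the point is that behavior which looks goal-directed
# can be generated by a fixed program with no inner experience.

LIFECYCLE = {
    "free": "attached",        # spike protein binds a host receptor
    "attached": "inside",      # the genetic material enters the cell
    "inside": "replicating",   # host machinery is commandeered
    "replicating": "bursting", # new viral particles are assembled
    "bursting": "free",        # progeny exit to infect other cells
}

def run_lifecycle(state: str = "free", steps: int = 10) -> None:
    """Step the 'program' forward; no goals, just transitions."""
    for _ in range(steps):
        print(state, "->", LIFECYCLE[state])
        state = LIFECYCLE[state]

run_lifecycle()
```

Nothing in this loop wants anything; yet, narrated from the outside, its output reads like intent, which is exactly the behaviorist and cognitivist point.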

Objections to behaviorism, cognitivism, and similar monistic philosophies of mind have been raised by many. Searle's Chinese Room argument showed that even if a computer carries out all the right computations, it still does not know what it is doing, manipulating symbols devoid as it is of any semantic meaning. Other objections point to the absence of so-called 'qualia': the something else that makes us conscious and gives us intent and possibly free will. It is hard to ascribe consciousness and subjective experience to a virus.

So if consciousness, behaviorism, and cognitivism are rejected, then what are we left with? How do we explain the mindless virus, which is proving to be a formidable match for the superior intelligence we are so proud of? What we are left with is the randomness of nature, which has helped the virus survive, possibly for 300 million years, since it first formed. Once, millions of years back, a random piece of RNA bumped into a cell and got inside; chance had it that some proteins were formed that enveloped the RNA and preserved it. By chance, at some point over a long time, it came into contact with other cells and was able to repeat the cycle, and the more efficient processes survived. Then again, by chance, over millions of years, the virus's spreading properties improved. Still, it remained hidden in wild animals such as bats, with limited random spread. Until, one day, it was exposed to organisms with large organs full of air and studded with the very proteins it had perfected attaching to. As it spread, the only constraint on its randomness was the non-random concentration of these living beings, with their large aerated organs and their tendency to stay in close proximity. The non-random or intelligent behavior attributed to the virus is actually the non-randomness of an environment constructed by humans. In that sense, what is reflected in the virus is our own intelligence, our own manipulation of the environment.

Can the Brain Understand the Brain: Incompleteness, Uncertainty and Strange Loops

To make the human brain better, humans first have to understand how the brain functions. However, that task is much more difficult than it seems on the surface.

Besides the great difficulties in investigating higher mental functions, which are common to all epistemology, we face some unique challenges when the brain tries to study itself. These challenges have more in common with investigations in mathematics and physics, where the problems of measuring and describing a system from within the system itself have been studied in more detail. The difficulties are: 1) Is it possible for an observer within a system to study and understand the system wholly and accurately? 2) Is it possible for an observer within a system to study the system without changing it? 3) How can a system reflect upon itself?

Kurt Gödel described the limits of a logical system's ability to fully understand itself in his incompleteness theorems: a formal system built on self-evident assumptions (axioms), if consistent, will always contain questions it cannot answer; and if it can answer every question, some of the answers must be wrong and the system is inconsistent. In the case of the brain, by definition, any starting point for the brain's investigation of itself is a 'self-evident' property. Therefore, even questions about its basic functions may be unanswerable. This may be one reason why the salient properties of the brain, such as volition, consciousness, emotions, self-identity, and abstract thinking, are so difficult to conceptualize and study; despite decades of research, very little is known about the mechanisms behind these functions. Keeping Gödel's theorems in mind, a true understanding of brain function may arise only if something outside the system were to study the brain. At present, there are two possible candidates for such an investigation: an artificial entity or an alien life form. In the case of an artificial entity, the limitation is that it would likely be developed by humans and so may inherit the same limits of logical accuracy and consistency as the human brain itself. The case of an alien life form is purely speculative and assumes that such an entity exists and has some interest in studying the human brain.
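For reference, a standard modern statement of the first incompleteness theorem (in its Gödel–Rosser form) fits in one line: for any consistent, effectively axiomatized theory \(T\) that includes basic arithmetic, there is a sentence \(G_T\) such that

\[
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T ,
\]

that is, \(T\) can neither prove nor refute \(G_T\), so some questions expressible inside the system are undecidable by it.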

A second limitation of the brain studying the brain arises from the principle in physics that an observer, in studying a system, changes it. Often associated with Heisenberg's Uncertainty Principle (which, strictly speaking, limits how precisely certain pairs of observables can be jointly determined, rather than describing observer disturbance), this phenomenon becomes very pertinent to the brain even if the investigation is merely thinking about itself. Descartes' 'I think therefore I am' is not entirely accurate if, as soon as somebody starts thinking about themselves, they induce a change in the 'I am.' Furthermore, the current methods of conducting human brain research are so intrusive and uncontrolled that the investigator or observer changes the system dramatically. A common method of studying brain function is to use a brain scanner, e.g., a PET scan or a structural or functional MRI scan. At present, the technology for obtaining neurochemical and physiological measures of brain function is primitive, and most images are noisy and distorted. Besides imaging artifacts, scan testing is usually anxiety-provoking, and it is difficult to translate findings from a person lying in a scanner to a person going about real life. The ethical constraints on studying living human brain function lead to further problems of subject selection, inadequate controls, and confounds such as medications or drug use. In this scenario, obtaining objective knowledge of brain functioning, free of observer bias or observer effect, seems quite impossible. Quantification and statistical analyses of these images have been unable to provide any deep understanding of higher brain function in health and disease despite decades of research. The uncertainty principle cannot be gotten around, by definition, so all that can be done is to reduce its impact as much as possible. This requires an exponential improvement in the technology for studying the living brain, so that information can be obtained with minimal artifacts and the least disturbance to the system.
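For reference, the canonical position–momentum form of the uncertainty relation is

\[
\sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2},
\]

where \(\sigma_x\) and \(\sigma_p\) are the standard deviations of position and momentum and \(\hbar\) is the reduced Planck constant: a floor on joint precision that no improvement in instrumentation can remove, which is the spirit in which it is invoked here.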

Finally, the very process of the brain delving deeper and deeper into itself, parsing its functioning and physiology into smaller and smaller bits and then working back to how that knowledge determines behavior, identity, and consciousness, seems like a very strange endeavor. Douglas Hofstadter coined the term 'strange loop' for a hierarchy in which descending through its levels eventually brings one back to the starting level. At present, such a strange loop seems to exist in neuroscience research between system-level neural circuit oscillations as the basis of human behavior and reductionist formulations in which a single gene or molecule determines behavior. Brain function is thought to arise from neuronal firing, which is related to the firing of subgroups of neurons, then to changes in membrane electrical potentials, and then to molecular changes such as changes in proteins or gene expression. However, this reductionist approach has not yielded any particular molecular or gene-expression abnormality underlying higher mental function. Therefore, it has to be postulated that a pattern of interaction between genes or molecular changes may explain behavior. To test this, the dynamic pattern of changes at the molecular level needs to be looped back to neuronal firing, oscillations, and the firing of groups of neurons. Most neuroscience research at present follows one arm of the loop: from behavior down to a single-molecule or brain-region abnormality. Even though this reductionist approach has not led to any major findings on the basis of higher-level mental function, it remains the dominant paradigm in neuroscience. This may be due to a stubborn reductionist philosophy in science, to funding mechanisms that reward the identification of single discrete findings, and to publication bias that caters to the same preferences. The other arm of the strange loop, starting from the dynamic relationships among molecular changes and working up toward neuronal firing and network changes, has not been pursued as much. Whether following the full loop will increase our understanding of the brain, or whether such investigation will meaninglessly keep looping back on itself, remains unclear at this stage.

Should Humans Colonize Mars (or How I Learned to Stop Worrying and Love the Brain)

The argument for colonizing Mars has gained considerable strength recently, with some prominent scientists and business leaders expressing interest and drawing up plans to do so. Mars, we are told, is very similar to Earth in its diurnal cycle, the existence of an atmosphere, the presence of water, and even the possibility of life. People, however, lament the distance, the difficulty of navigating a spaceship to Mars, and the difficulty of communicating with earth from so far away. But for the continuity of mankind, it is argued, it is essential to colonize another planet, because a catastrophic event can happen on earth at any time: an asteroid hit, or a war that leaves earth a nuclear wasteland.

Except that these two catastrophic scenarios are not the same. The first is a cosmic event over which mankind has little or no control. The second, however, humans will cause themselves. Despite the dangers of a full-blown nuclear war having been known for nearly eight decades, mankind's appetite for the proliferation of weapons of mass destruction continues to grow unabated. If even the existing nuclear stockpile were used in a third world war, a nuclear catastrophe on earth is assured. So the question arises: even if humans were able to escape the nuclear catastrophe on earth and colonize Mars, would this scenario not replay there? And if they escaped Mars and the same self-destructive event happened there, they would need to go to another planet, then another, and so on. If mankind keeps developing the technical ability to build ever more sophisticated weapons while surviving by hopping to other planets, then in time the destructive capability of those weapons is bound to increase exponentially. It then becomes only a matter of time, a race between humankind's ability to find newer worlds and its capacity to destroy itself. And we will be back to the situation we are in at present.

So before we colonize Mars, humankind may first have to confront its self-destructive tendencies, which lie deep in the makeup of its brain. Otherwise the effort to colonize Mars may be futile. Business and scientific leaders and organizations focused on the technical aspects of colonizing Mars (Musk, Enriquez, SETI, etc.) may want to first direct their efforts at making the human brain more capable of survival in the long run. Even if we never get to Mars, such advances would make life better and more sustainable here on earth.

Frankenstein, Spock and the Matrix: The Nature of our Anxiety Regarding Artificial Intelligence

Though prominent figures such as Elon Musk and Stephen Hawking have sounded the alarm about Artificial Intelligence and its possible negative impact on the human species, the specific nature of our anxiety about AI has not been clearly articulated. It is important to understand the psychology of this anxiety so that we can manage the new technology constructively. The vague anxieties and conflicting views prevalent today lead to confusion and disproportionate fears, which can themselves be destructive. Understanding the nature of our anxiety about AI may help us develop AI in such a way that our deepest fears are not realized.

Sigmund Freud introduced to the world a model of the mind to explain the origin of anxiety. In Freud's conceptualization, the Id, the primitive emotional part of the human psyche with its libidinal and destructive capacities, is constantly trying to resurface and is kept in check by the superego, the internalized rules and laws of society. The constant conflict between the two is held in abeyance by the ego, which lives at the border between these two mental realms. When the ego fails to keep the Id in check, or to reconcile the differences between the superego and the Id, anxiety results.

The most prevalent notion of the danger of AI is that AI will truly be made in the image of humans and will have Id-like destructive tendencies similar to ours. A second view is that AI will be pure intelligence, and that to it humans will be the Id: something too primitive, to be suppressed or even eliminated. A third fear is that the machines will develop survival drives of their own, which will clash with the survival needs of humans and lead to conflict and to aggression by AI against humans.

For the first scenario, the prototype of man's creation going awry is Mary Shelley's Frankenstein. Her novel is considered the first work of biological science fiction and the parent of nearly all biological fiction since. In the story, Dr. Frankenstein tries to create life in the form of a superior human being. He is, however, only able to cobble this being together from body parts gathered from slaughterhouses and mortuaries, resulting in a hideous but very strong creature, whom he infuses with life using electricity. The creature is nevertheless very much like Frankenstein himself, i.e., very human (the novel does not give the creature a name, and in a telling twist of time, the monster is now known by the name of its maker, Frankenstein). The creature feels shame at his appearance, is angry at the doctor for creating him that way, is lonely and wants a companion; in the novel he goes on a rampage and murders many people to exact revenge on his creator, leading in the end to the creator's own death, while the creature continues to roam the world.

Though Mary Shelley's story is used as a cautionary tale about AI, it is really the story of a dangerous and violent human, perhaps a reflection of the murderous and libidinal Id that Freud held to be present in all of us. Unfortunately, the current discourse on the dangers of AI in popular culture is based on this Frankenstein model, which is really a fear of human beings about other human beings, about the fallen nature of man and man's relationship with his imagined creator. That AI technology will develop into a human-like creature with complex feelings of guilt, shame, loneliness, and revenge seems very unlikely, and even if it happened, the result would not be something new but a replication of humans themselves in silico. Considering that the world already holds an exponentially increasing number of humans, with their needs and talents but also their profound flaws, the purpose of developing such a technology is as questionable as the technology is unlikely. The Frankenstein model, then, is really nothing more than the primal fear of death and destruction that mankind feels not only toward something like AI but also toward floods, earthquakes, nuclear bombs, or God's wrath. It is easily explained by the Freudian mechanism of projection: man's own destructive, Id-like impulses are projected onto something outside himself, such as AI. In this model, AI technology serves as nothing more than a scapegoat for mankind's own destructive impulses. The solution here would be to develop this insight, and not to act out our destructive impulses and blame them on AI.

The second model of the fear of AI is based on an inherent human bias toward machines. AI as currently developed, using computer algorithms and mathematical equations, is likely to be precise, accurate, and far superior to human beings at many logical and computational tasks. Humans, on the other hand, are error-prone and limited in their mathematical abilities. As we develop AI routines, even though we make them ourselves, our own limitations become self-evident. This has led, on the one hand, to admiration for the 'machines' and, on the other, to loathing and a condescending attitude toward AI. The dialogue between Dr. McCoy and Spock, with Captain Kirk casting the deciding vote on the amazing powers of emotions, played out again and again in the various episodes of Star Trek. Star Trek was conceived and televised just as serious computational power was being developed, and it may represent the earliest popular debate on the superiority of AI versus humans. The predominant theme of the dialogue between the three principal protagonists of the original series is the romanticization of emotions and their importance. Time and again, Spock is taught an important lesson by Kirk or McCoy about how emotions make humans special (and superior). Spock's saving grace is that he is half human and can partially acknowledge the superiority humans derive from their emotions. In an underlying ambivalence, a love-hate relationship with technology, Spock's vastly superior cognitive abilities are also romanticized (Spock remains one of the most popular characters of the series), and the fear of such a purely cognitive creature is managed with humor and by making Spock partially human.

This position on AI is roughly the opposite of the first. Here the humans represent the Id, with their chaotic and error-prone emotions. The fear is that a future AI may not like such creatures and may decide to do away with them. Though there is some hope that human beings may be saved by the magical powers of their emotions, which the machines do not have, the probability of that happening is uncertain. In this model the humans are the Id while the machines are the superego and ego, which may suppress the humans, perhaps permanently. The strategy for dealing with AI here would be to develop a synergistic relationship with it and give up our biases of superiority, so that humans and AI can benefit from each other.

The third model of AI is probably the most alarming to humans, because in this model the nature of AI, its motives, and its actions are the most unknown. It is also probably the likeliest scenario for conflict, if conflict develops, between humans and AI. In this model, AI becomes more complex with time and, as it is programmed to be self-determining, becomes autonomous. Besides performing tasks autonomously, it is likely to be programmed to organize itself optimally for current and ever more complex future tasks. For consistency and stability, the original programming by humans, and later by the AI itself, is likely to include some kind of hitherto unknown homeostatic and self-preserving mechanisms. Though the fear is that these self-preserving mechanisms will resemble human ones and involve aggression, as in the first model, it is equally possible that they will be entirely different from the emotions and survival instincts known to humans. In this scenario, these mechanisms may be activated when human goals (which may not be completely rational) conflict with those of the AI. Humans would likely respond by trying to neutralize the AI or its actions, but it is unknown how the AI would respond, or what powers it might have to resolve the conflict with humans. In the movie The Matrix, the machines are portrayed as such an entity: they become autonomous and enslave and cultivate humans for their energy needs. The only model humans know for handling irreconcilable differences is destruction or subjugation of the other party by some destructive force, hence the fear of such a scenario.

However, we do not know what AI will be capable of. In this regard, the last scenario may be the most anxiety-provoking, as it taps not only the fears of the first two models but also the fear of the unknown. At the same time, it may be the most hopeful. A new AI might develop new and better ways of resolving conflict of which the human brain is not capable. This is more likely the more autonomous and independently self-organizing the AI is, since human input into its development is likely to lead to human-like results. AI programmers and developers should keep this perspective in mind, as it may lead to the development of a better AI brain than the conflict-ridden human brain.