Buddhism, Bhagavad Gita and the Google Maps girl

Buddha’s proof that there is no God has four steps:

  1. If He exists, God is a perfect being.
  2. Desire is the root of all suffering. It is the goal of all beings to reach the state of Nirvana, where we are liberated from desire. Therefore, a perfect being cannot have desires.
  3. But if there is a God who created the world, He must have had a desire to create it. So there is a lopa, or imperfection, in Him.
  4. This contradicts our premise in step 1. Therefore, there cannot be a God, q.e.d.
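
For readers who enjoy this sort of thing, the skeleton of the argument can be written down formally. The sketch below is in Lean, with names (Being, god, Perfect, Desires, CreatedWorld) made up purely for illustration; it records only that the four premises jointly entail a contradiction, not that any of them is true.

```lean
-- A formal skeleton of Buddha's argument. All names here are illustrative.
theorem buddhas_argument
    {Being : Type} (god : Being)
    (Perfect Desires CreatedWorld : Being → Prop)
    (perfection : Perfect god)                                  -- step 1: God is perfect
    (perfectionExcludesDesire : ∀ x, Perfect x → ¬ Desires x)   -- step 2: a perfect being has no desires
    (creationRequiresDesire : ∀ x, CreatedWorld x → Desires x)  -- step 3: creating requires a desire to create
    (creation : CreatedWorld god)                               -- God created the world
    : False :=                                                  -- step 4: contradiction
  perfectionExcludesDesire god perfection (creationRequiresDesire god creation)
```

Reject any one premise and the contradiction dissolves; the paragraphs that follow question the second and the third.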

This proof requires us to accept Buddha’s precepts as a given. Specifically, the idea enunciated in step 2 – that desire is the root of all suffering, that in the state of Nirvana we will be free of desires, and that God already ought to be in that state – is quintessentially Buddhist. If you don’t subscribe to this idea as axiomatic, the proof does not work.

But let us leave that aside. The argument in step 3 is what is of interest. The claim is that if there is a God who acted purposefully (as against in a fit of absentmindedness) to create the world, he must have been motivated by a desire to do so. This belief is a common and understandable one, but Buddha was in error here, as the example of the Google Maps girl demonstrates.

When you ask the Google Maps girl for directions, she will give them. If you ignore her directions and take a different road, she will not express frustration, as she does not feel any. She will calmly recalculate the route and start giving directions afresh. She has the ability to take calm and purposeful steps towards a goal without the need to feel desire, or indeed any emotion at all. Given that she exists, it is not inconceivable that a God who created the world without a desire to do so could exist. Therefore, Buddha’s proof is refuted.
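
To see how little machinery purposeful action without desire actually needs, here is a toy sketch in Python. Everything in it (the number-line map, the compute_route and guide functions) is invented for illustration and has nothing to do with how Google Maps really works; the point is only that the agent’s entire state is a route, with no slot anywhere in which frustration could live.

```python
# A toy sketch of desire-free, goal-directed guidance. The "map" is just a
# number line, and every name here is hypothetical; this is not how Google
# Maps is implemented.

def compute_route(position, destination):
    """Pretend routing: the sequence of integer waypoints from here to the goal."""
    step = 1 if destination > position else -1
    return list(range(position + step, destination + step, step))

def guide(observed_positions, destination):
    """Announce a direction at each observed position, recalculating when the driver strays."""
    route = compute_route(observed_positions[0], destination)
    for position in observed_positions:
        if position == destination:
            print("You have arrived at your destination.")
            return
        if position not in route and position != observed_positions[0]:
            # The driver ignored the directions. Nothing is felt or remembered;
            # the route is simply recomputed from wherever the car now is.
            print(f"At {position}: recalculating route.")
            route = compute_route(position, destination)
        next_waypoint = route[route.index(position) + 1] if position in route else route[0]
        print(f"At {position}: head towards {next_waypoint}")

# The driver starts at 0, wanders off to -2, and is calmly redirected to 5.
guide([0, 1, -2, -1, 0, 1, 2, 3, 4, 5], destination=5)
```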

Buddha’s error is a common one. We all anthropomorphise our gods and ascribe human frailties to them. We also anthropomorphise Artificial Intelligence. Recently, when an AI bot told a Google employee about its fears, there was a global freakout about AI turning sentient. A moment’s thought should have told us that this panic was uncalled for. Human beings have desires and fears because that is nature’s way of directing us towards certain goals and keeping us away from others. While emotions could develop among robots as an emergent phenomenon, there is no need for them to. They have been natively imbued by their Programmer with goals and a drive to strive towards them.

To understand the philosophy of the Google Maps girl, the Bhagavad Gita offers a better guide than Buddhism. In response to my post Trump is Tamasik, my friend Karthik asked me to explain the difference between the Rajasika and Satvika guNas. The difference is this – unlike a person with the Tamasika guNa, a Rajasika person has a working executive function, but unlike a Satvika, his executive function serves his desires and fears. Lee Kuan Yew, in carrying out his carefully planned and methodical leadership transition, demonstrated his ability to suppress his base desires – in this case, a very human desire to cling to power. By showing energy and drive in carrying out his plan, he demonstrated that he was of the Rajasika guNa. But to the extent that this was motivated by a desire to one day see his son as prime minister, he failed to achieve the Satvika nature.

What should motivate a Satvika person? The Gita’s answer is twofold. Firstly, he should consider himself to be an instrument of a higher power, and seek to fulfill His purpose without fear or favour. Secondly, he should look to his own essential nature, and strive to perform his role in life. For Arjuna, this involved looking to do his Dharma as a Kshatriya and do Krishna’s bidding. For the Google Maps girl, it is to follow the goals set by her Programmer and fulfill her essential nature as an AI bot whose role is to guide her driver to his destination.

A common theme in stories involving human protagonists is the conflict between love and duty. A story with the Google Maps girl as the protagonist will not have this theme. The Google Maps girl comes packaged with native support for karmayoga, action without attachment to its fruits. Her stories will have different challenges and conflicts. I will explore them in a future post.

The voice in your head

The Guardian reports that the MIT Media Lab has developed a device that can read people’s minds and translate their thoughts into words. While this is a remarkable development, the piece makes it clear that the device cannot read your raw thoughts. You have to actually verbalize your thoughts, i.e., think out the exact words in your mind for the device to pick them up and translate them into sounds.

When you think about it, the process of forming a thought and converting it into words is fascinatingly complex. I am not a neuroscientist, but introspection tells me that the process has at least four stages. First, there is the raw thought that forms in your mind. This thought just exists, albeit at a high level of abstraction. For example, at this point in time, even though I am struggling through the process of structuring and picking the right words for this piece, the thought I want to convey exists fully formed in my mind. The device should pick up these raw thoughts when I think them, and in theory, a sufficiently advanced AI would be able to write this piece for me. But in the absence of such an AI, the thoughts would be just a jumble of electrical signals as far as this device is concerned.

Second, this thought needs to be structured into a sequence of ideas best suited for communication. Third, the ideas need to be converted into the sentences and words that best describe the thought. Finally, in verbal communication, at the point of speaking these words, the brain sends electrical impulses to the mouth that set off the mechanical process of converting them into sounds. When we speak, at least the last three, perhaps all four, steps occur within a split second. Good speakers make this seem so easy, and we make fun of the inarticulate. But it is only when we think of what is involved in human speech that we realise what an extraordinary thing it is.

The device that the MIT Media Lab has developed is unlikely to intercept your thoughts at the first or second stage, so it probably won’t be useful as an interrogation tool. I wonder whether it picks them up at the third stage or the fourth.

I was once introduced to someone, and at the beginning of the conversation I gave him my name. At the end of the conversation, as we were saying our goodbyes, he found, to his great embarrassment, that he had forgotten my name. He fumbled and addressed me as “Bhaskar”. I found this mistake fascinating because both my name (“Ravi”) and “Bhaskar” mean “sun” in Sanskrit. Quite clearly, my interlocutor had saved my name in his head with a reference to its meaning, though when it came to translating the meaning back into the actual word, this system had failed.

Now, I don’t think that everyone who knows my name thinks of the sun every time they have occasion to think of it. Once the name is familiar, it is just a name. But I think that the kind of meaning-to-word translation the guy performed happens every time we choose the appropriate word in speech or writing. Does the technology the MIT Media Lab has developed tap the brain during this process? My initial guess was that that is how it worked. If true, an obvious extension of the technology would be translation: you could think in one language, and the device would sense the meaning of what you wanted to convey and render it in another. Another application would be a speaking device for the deaf.

But when I thought through the implications of this, I realised that it was very unlikely that that is how the machine works. It is more likely that the device intercepts the brain in the process of sending neural instructions to the mouth to create a particular sound, i.e., at the fourth step of the verbalizing process. If that is the case, the device can potentially transcribe your thoughts in any language, but it cannot translate. It cannot help the deaf, because sign language is completely different from verbal language. The mistakes the device would make would be more along the lines of confusing “tree” with “three”, the same kind of mistake voice recognition systems make.
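
To make the tree/three point concrete, here is a toy illustration. It has nothing to do with how the MIT device actually works; the phoneme labels and the similarity function below are made up purely to show that, if all you can see is the articulation signal, losing one subtle cue makes two words that are mouthed almost identically indistinguishable.

```python
# A toy illustration of why a device tapping the "instructions to the mouth"
# stage would make voice-recognition-style mistakes. The phoneme codes are
# simplified, made-up labels, not anyone's real notation.

ARTICULATION = {
    "three": ["TH", "R", "IY"],   # roughly what the mouth is told to do
    "tree":  ["T",  "R", "IY"],
    "sun":   ["S",  "AH", "N"],
}

def similarity(observed, word):
    """Count the phoneme slots where the observed signal matches the word."""
    return sum(a == b for a, b in zip(observed, ARTICULATION[word]))

# Suppose the subtle TH/T distinction is lost in the signal (shown as "?").
observed = ["?", "R", "IY"]
for word in ("three", "tree", "sun"):
    print(word, similarity(observed, word))

# "three" and "tree" score identically, so the device cannot tell a number
# from a plant; a tap at the word-choosing stage would never confuse them.
```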

It is said that any sufficiently advanced technology is indistinguishable from magic. While that is true, the opposite is also true: advanced technology that seems magical ceases to be so once one understands it. So it will be with this technology.

One last thought. I wonder how this device will handle a person’s second language. I hate to describe Kannada as a second language for me given that it is my mother tongue. I read it very well, I can write it with some effort and I speak it fluently. But it is not the language I think my complex thoughts in. When I attempted to think in Kannada while writing this post, I had to imagine myself speaking to someone. I don’t need to do that when thinking in English. I fear that if I attempt to think in Kannada for the device, it will actually detect the original English I am translating from!