Dr. Gopi Krishna Vijaya: Morals & Technology

Technology, Health
 
 
Technology is like a mirror. | Illustration by Clint Weaver

Dr. Gopi Krishna Vijaya in conversation with Mathew Bate
Illustrations by Clint Weaver

Earlier this year we caught up with Dr. Gopi Krishna Vijaya. He holds a PhD in physics and has devoted most of his life to the relationship between humans and the development of technology. He had just flown over from the US to run the Moral Technologies Forum (organised by Seed Australia) in Melbourne, which explored morals, technology and how to be a good human in our digital era. Covering the historical development of technology, the way we think and modern education, Dr. Gopi Krishna Vijaya discusses the intrapersonal relationship between humans and the technologies that we have created.


Mathew Bate: So, the subject of today’s conversation is technology and morals. In your work you emphasise the need for us to go back and look at the origins of things in order to make informed and moral decisions. Is this why you put a lot of weight on understanding the development and history of technology?
Dr. Gopi Krishna Vijaya: Yes, firstly we need to look at how we came to this point. We need to look at the history of technology, because a lot of the time we think that technology just came up almost by itself. When we take a deeper look at technology it's actually stuff we have drawn out from ourselves. We've drawn out the lens of the eye and created the telescope and microscope. We've drawn out the beat of the heart and created engines. We've drawn out the nervous system and created telegraph lines. It's funny because when you actually look at it, even our hair has been drawn out. Hair contains silica, the same material as the fibre optic cables that enable the Internet to work. So we physically project the organs of the human being out into the world.

But then, when you come to the present day, you're not looking at organic projections anymore, you're looking at the projection of thought, of computing. In computing a part of your thought, not your senses, is incorporated and materialised into the technology. It's only once we've understood the past and present incarnations of technology that we can start looking at the future. Only then can we start asking questions like where we are going and what we can do about it. It's at this point that we enter into the realm of morals.

Are you saying let's go back to our moral foundations and from that point create technology?
No, it's far more intricate than that. For example, let's look at technology from the most elementary feeling of morality: we would hope that the consequences of technological action do not end up destroying the people who made it. This is a form of resonance between the person and society, and in terms of simple physics this resonance has to be introduced into our technologies. We therefore need to ask some questions. Can technology be made that can be evoked only by people who are morally dedicated? Right now we invent stuff, push it out into the world and we watch and wait. If it blows up in our face then we recoil and say, "Oh wait a minute, we have to make this safe." But if you include a feedback system as the very basis of the technology, that will make sure the consequences are already extremely well considered before anything is created or blows up. That's the basis for moral technology.

It's a common thought that there are humans and then there is technology. Both exist separately and perhaps are even evolving independently of one another. But what you're saying is that technology is inherently a part of us; it's innately human and we need to start acting from within that framework or that way of thinking because this will serve us better.
Yes, and even this separation that you're talking about is a historical process. That's not actually how it began. It's a very recent view, from the past 150 years, that technology is "out there" and we're "over here". That happened when we started outsourcing our sense organs to technology, because it is through our sense organs that we know something is out there and not in here. In reality technology came from within us and then it came out, and because we've forgotten that, we think of it as something that's out there. Part of what I do now is take things apart and think about the logical systems inside the technology, in order to bring attention to how much of ourselves we have already put into it. Technology is ultimately our creation and we are responsible for it.

So you're sort of using technology to, ultimately, take a look at what makes us human and what we are therefore responsible for, as you say. Can technology help us understand our thinking process, and therefore our humanness, in more depth?
Usually when you see a reflection you realise what you're looking at. When we look at ourselves in a reflection and we move, we can see that we're looking at ourselves. Technology's role is something like that.

A mirror...
Yes. So technology will reflect back to you the extent to which your thoughts are more in tune with the sensory world or your logical mind. But if you fail to recognise that it's only reflecting a certain aspect of your thinking process, you might conclude that that is how you think. As to your other point, of course it can be used to enhance thinking, but only if we understand it for what it is. Otherwise it's like mistaking the creator for the created, and we relinquish the possibility for creativity. What happens there is that with the increased use of technology the capacity of people to spend time on learning a new skill, to be creative, or to concentrate slowly disappears. Now we just look it up; we don't have to remember anymore. But we have to remember that we created search engines so that we can use our memory for other things, not so that we abandon memory. That general theme spreads to all of our other capacities, because with technology we have made it a habit to take every last nook and cranny of the thought process that can be materialised and automate it.

 
"We've drawn out the beat of the heart and created engines." | Illustration by Clint Weaver 

"We've drawn out the beat of the heart and created engines." | Illustration by Clint Weaver 

 

And I guess the other thing that presents itself is that we start to think of ourselves, or the thinking process, as a machine.
Yes, we think we are machinery in that sense.

Then what happens to creativity? Do you think that creativity, inspiration and intuition, those types of very human characteristics should be...
Safeguarded and nurtured.

As much as possible, yeah?
Yes. So basically technology is a kind of wake-up call to make you realise what part of you is actually not creative by itself. Technology exists so that you can distinguish between the creative aspect of things versus the, you know, mechanical side of thinking that categorises and reacts to stimuli. Another part that gets enhanced because of the use of technology is reactivity. Normally if there is an event and you have a response to it, there would be a time lag between the event and your response. If you got a letter 100 years ago you would think about it for a couple of days and then send a response. Nowadays you get a text and within three seconds you have to respond. You just react. The more technology tends towards reaction, the more it tends towards reflex action, and the more it tends towards reflex action, the more it tends towards the behaviour of the animal kingdom, because the animal kingdom lives in reaction.

So then we're reacting from impulse and the thinking process is short-circuited.
When automation links up with reactivity it makes it easier for one to express all the negative emotions without filter. Taken further, it enhances the extremes of addiction, of getting lost in things, or of getting angry with people and cutting them off. So in making these expressions more likely through technology, we should not ignore the fact that our tendency to express those negative emotions is going to be correspondingly greater. What this means is that we may not only lose the capacity to think creatively, we may lose compassion. Compassion needs time. If you're reacting to another person as soon as the negative side comes up, you're reacting without compassion. But if we are aware that this is what technology tends to do, because of the way we have drawn it out, we can start to implement strategies like responding to messages at the end of the day rather than being in a state of knee-jerk reaction.

So tech is creating this playground where morality is harder to express? It's also interesting that you mentioned responding to messages at certain times of the day. When you're reacting constantly it's easy to fall into self-protection mode, where you're almost in a fight-or-flight mode.
Yes, it's almost like "I have my armour on".

Ok, so what are the practical applications people can implement to deal with tech and allow morality to flourish?
One part of it is to understand the technology itself. So for example if you search something on Google you need to know at least a little about what kind of algorithms are used to pull those results out. Even if you don't think of it every time it should be part of an awareness. So there is an attentiveness that we have to cultivate. If you're not aware you don't know what to do. You’re not even aware of the damage.

It's pretty hard to be aware when the algorithms and these types of things are essentially invisible. It's also hard to find out about because it's being pioneered by companies that have their own agendas.
Seeing as it belongs to the invisible realm, it requires that much more effort to first be attentive to it. If you're attentive and aware of it, then at least you know what you need to address. Now in terms of ways of addressing it, there are two things. One is the regulation of what we're already doing, of regulating how you behave. The other is a little more complex but ultimately calls for people to develop their skills at focusing through meditation.

Right. We need to develop our ability to remove ourselves from the overly stimulating, tech heavy, modern environment and nurture our ability to focus.
Life, due to the rise of technology, is now demanding that we start practising meditation because we’re incredibly unbalanced. Earlier we didn’t necessarily have to take it up, but now the times have changed so drastically that we can't run away from it anymore.

This is such a pressing issue today, which is why this conversation is very important. I'm sure you're aware that there's been a mindfulness boom, which has become very fashionable, especially in big business. Colouring books for CEOs have become remarkably popular, for example. Meditation or mindfulness techniques are quite specific and involve a certain level of commitment and intention. How do we stop mindfulness or meditation from becoming just another marketable product that can be misinterpreted and therefore practised wrongly, ultimately leading to more confusion and less focus?
Even though a lot of big businesses speak about meditation as if it were about the mind, in actuality they are referring to emotions. So when they say meditation they actually mean de-stressing or calming. It's not about thinking. In fact, there is a lot of emphasis on not thinking; on removing the thoughts and allowing your feelings to steady themselves. So what happens is, again, with something like that you accentuate the problem. If you simply want to dissolve yourself into a feeling of peacefulness and then go back to your work, you'll only get so far, and you're not dealing with how you're thinking and what's affecting it.

 

"Nowadays you get a text and within three seconds you have to respond." | Illustration by Clint Weaver

 

This technique seems quite intricate and difficult, especially when we're dealing with people who have well-established habits. Is this something that needs to be brought into the education system? How do we best teach young people about technology?
Firstly, by paying attention to the capacity of the child and the developmental stages, we can start to create a pedagogy, rather than looking abstractly at what kids should be exposed to at what ages. You pay attention to already existing processes, because there are specific times when the biological processes change and a specific time when independence of thought usually starts to take shape. So if you introduce technologies that have, for example, a thinking or computing component prior to the age of roughly 14, they become distracting and destructive. This is because you are trying to introduce something which they are hopefully going to develop for themselves later. You're crippling them even before they know what it is that they are supposed to be learning from their personal development.

And in doing so you're externalising the inner development of the child to technology that is "outside".
For example, at the Moral Technologies Forum this April (2018) I asked people to calculate some fractions and it jammed their minds. They were mostly 14 to 20-year-olds, people who should be able to bring a high level of thinking and engagement to sums like these.

Right, there's a difference between hearing something and going "Nope, I can't do it", therefore creating a block, as opposed to "Ok, that might take me a bit to think about, but bear with me". Exposing children to technology that computes for them may hinder their ability to do that thinking process for themselves, causing them to jam up.
The point is to realise how much the mind has rerouted itself because of never having formed these thinking capacities in the first place. The "look up" mind has increased, whereas the thinking mind has decreased. So by observing how technology works and how a child's mind develops, we get a clue as to when to introduce something. Let's take imagination, for instance. Young kids usually have imaginary friends and are rooted heavily within this imaginary realm. But what happens if they spend a lot of time with a 2D screen in front of them? What happens when all of their imagination is taken away? Even more, a child's imagination is not restricted to three dimensions! It can go everywhere. With screens we have flattened out this expansive imagination and shoved it in front of their faces. That's when we start to see the imaginative capacity disappearing.

When imagination disappears, compassion disappears, because when you are not able to imagine the consequence of your action on another person you lose compassion. So learning the technology from its guts is what will help answer pedagogical questions and assist the educational process. But we need to understand technology and children properly and deeply, rather than relying on generic or abstract blanket rules. Having said this, most of the time, for younger kids, we need to create a womb so that specific capacities associated with thinking can take birth. Once they have taken birth, then there may be minute amounts of technology that you have to allow in, simply because we are living in this world and you don't want to isolate them. You have to introduce technology in a controlled way, but you cannot go blanket about it. You can't say "No tech!", or, say, teach kindergarten children to code. Both are pretty disastrous.

A lot of what we've been talking about seems to be linked to materialism, consumerism and capitalism. Recently there's been lots of discussion about the new economy and what that's going to look like. Are a lot of our problems with technology, and our inability to relate to it in the way you've been describing, a symptom of our capitalist system? Do you think that a new way of doing things involves us also creating a new system where we can actually do things, or think about things, differently?
Yes, yes, I think so, but the reason is not capitalism itself so much as the fact that that's usually where it all stops. If education follows the corporate world, it becomes a push to see how early we can get kids trained so that they can produce capital. How early can children code, how early can they create apps, how early can they contribute to this capital-minded system so that they can be locked into the software industry or whatever industry they're being encouraged into. So it's an industry-based education. I'm not too sure how much education is discussed in this "new economy" that you mentioned, but it should be taken seriously if we want to create a better long-term solution. It's also worth mentioning that capitalism spawns advertising, which is another way of controlling people. We are helpless in front of it. How do we survive that onslaught? So yes, the redirection of capital to educators or alternative education practices - not even alternative, perhaps we should say realistic - is key.

Right, so we can focus on human potential rather than increasing capital production. It's interesting to look at the Nordic countries and how they are dealing with modern education. Finland, for example, which has one of the world's best education systems, has been able to be experimental at a political level and re-imagine how its systems operate. Australia is extremely slow on the uptake. It's interesting that you said the demands of the modern world make the individual highly reactive. The countries themselves, and the systems that define them, are also reactive at a macro level.
Very true, and even to make your way through the data on which educational system is better or worse you actually have to be able to pay attention to these invisible things within the population. These are perhaps things that we can't easily measure and that's where we can still get caught up in the rat race of comparing different political practices and structures. Going down that path also, ultimately, leaves the development of individual human capacity completely out of the conversation.

Ok, my last question. You've dedicated your life to learning the history of tech and the moral implications of living in a world with it. How does this not drive you completely insane? How are you not so pessimistic about the future when it all seems so chaotic and dystopian?
I think the greatest protection is in the knowledge itself. You reap what you sow in that sense. If you take what we've spoken about today and are able to look at every piece of technology as it is, it will actually help to stabilise you. Once you know where it comes from, how it's been created and what capacity of yours it's mimicking, you realise that you have that capacity, which means that you can develop it and, ultimately, you can stay centred.


Mathew Bate is the digital editor of Matters Journal. He's a published poet from Melbourne who likes to walk.