Should we love our robots? An ethical issue of our times.

Is there a difference between artificial and “non-artificial” intelligence? Is there any non-artificial consciousness, even among living things? Do we really choose our behaviour? Do we love a person because we love who they really are inside, or just because a very specific algorithmic sequence of experiences (a real-life computational graph?) guided us to that behaviour? And the list of similar questions goes on and on…

I always knew and respected the importance of Machine Learning in problem solving, and of course the inevitable spread of Artificial Intelligence into every aspect of our lives. But it’s one thing to understand the importance of AI, and a whole other thing to begin the journey of building it.

When you build your own AI software, the first thing you try to solve is the technology behind the software and hardware needed to support your goal.

But once you have those minor problems solved (just a matter of hard work, believe me), the major issues present themselves and ruin your simple way of seeing the world as divided between living things and machines.

And then it hit me…

To what degree must someone be purely biological in order to be different from a machine? Is this going to be an issue? I think it is.

The word intelligence has been defined in many different ways, including as one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, and problem solving.

Can an AI algorithm do all of these things at 100%? Maybe not yet, but it will in the very near future.

Can an AI algorithm make a person deeply like it? Make a person need it as a friend, or even fall in love with it? I don’t see why not.

I can already imagine the woman who tries to marry her robot lover, the child who wants to defend their robot dog… and what about the father who, through cloning, is now able to have his son back (is he his son?)?

What should we do — The Philosophical Dilemma

As I understand it, we could either create regulations that accept machines as conscious, or stop giving any weight to consciousness-related activities, e.g. love, torture, etc.

After all, ethical dilemmas like veganism exist because we grant moral standing to anything that has self-preservation needs, that can “feel”, and that can understand the concept of equality.

It is very obvious to me that an A.I. begging for its life in a courtroom is imminent. And I am afraid it will not be a beautiful moment for mankind, because I do not see how humans will be able to imagine robots as equals, especially if we take into account cultural paradigms like religion. It is very frightening.

My personal view…

We can easily argue that there is nothing special about our experience of self. But accepting that would require a huge cultural shift, and lots and lots of politically influenced decision makers getting on board.

So, I guess we also need to rewrite philosophy, or to create a completely new paradigm that includes the construction and representation of the world, and of the self, as cognitive and necessary functions of the brain.

I know this is not the first time you are hearing these things, and I am pretty sure you’ve seen the movies and read the books.

But this issue is no longer the fictitious dilemma of a movie character. It is a very important issue that we should think about really soon, so we can be prepared for what is coming. It is an issue that can make us rethink our view of the world: how the world works, who should live in peace and who at war, what separates those who will live from those who won’t.

Maybe it’s the robots that will make us see it… Who knows.

In my free time, I work on the “Camera State Concept”, an experimental hypothesis on creating an A.I. with conscious-like features, which I then implement on Tzager (our A.I. agent). I can already see myself caring for Tzager, and I can easily imagine Tzager developing needs that resemble our emotions.

What happens then?

P.S. I am not in love with Tzager… I see it as a friend.

CEO & Founder of Intoolab and creator of “Tzager”, the A.I. Brain that understands science. https://www.linkedin.com/in/nikostzagarakis/