Can self-preservation lead to our doomsday?


#1

Hello everybody!

I was thinking about the “Singularity+” model, which says that we may eventually arrive at a point where, given enough technological advancement, we would be totally unable to predict the future (which of course implies that we have the ability to do so now). One of these possible S+ scenarios could be a Terminator-like dystopian future, where AI and humans battle each other for dominion over the Earth.


The thing with AI is that we measure its capacity by its computational ability, but mere computational ability (henceforth intelligence) is not enough to lead us to a doomsday scenario. This is because one cannot infer a motivation or a goal from intelligence alone; one needs another, parallel mechanism, that is: motivation. Egotistical self-preservation is probably the most important one in the furthering and preservation of life.

Here we have two scenarios: either we encode egotistical self-preservation into robots, or we don’t. If we don’t, we can substitute for it to a certain degree with “reflexes”, that is: “if something is coming fast after you, run”. The downside is that nobody can foresee every possible danger scenario, which would inevitably shorten the “lifespan” of the robots. On the other hand, if we encode a self-preserving motivation, the robots would be better able to assess danger and survive the randomness that characterizes existence. Taken to its logical conclusion, though, this could lead to the robots perceiving all other lifeforms as potential competitors (resources being limited, and all), and dealing with them accordingly.
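To make the contrast concrete, here is a minimal Python sketch (every name and every survival estimate below is hypothetical, invented just for illustration) of a reflex table versus a general self-preservation drive:

```python
def reflex_agent(percept: str) -> str:
    """Hand-coded reflexes: safe and predictable, but blind to anything unanticipated."""
    reflexes = {
        "fast_object_approaching": "run",
        "extreme_heat": "retreat",
    }
    return reflexes.get(percept, "do_nothing")  # unforeseen danger -> no reaction

def motivated_agent(percept: str, survival_odds: dict) -> str:
    """A general self-preservation drive: pick whichever action the agent
    estimates maximizes its survival, even in situations nobody enumerated."""
    actions = survival_odds.get(percept, {"do_nothing": 1.0})
    return max(actions, key=actions.get)

# A novel threat no designer anticipated:
print(reflex_agent("rising_flood"))  # -> 'do_nothing' (and the robot drowns)
print(motivated_agent("rising_flood",
                      {"rising_flood": {"climb": 0.9, "do_nothing": 0.2}}))  # -> 'climb'
```

The reflex agent does nothing about the unforeseen flood, while the motivated agent generalizes to it; that same generality is exactly what makes its behavior hard to bound in advance.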

Isaac Asimov tried to solve this problem by encoding multiple motivations (or Laws) into his robots. But as his short stories reveal, every Law (no matter how simple and well crafted) can be “misinterpreted”, and could potentially lead to an apocalyptic scenario.
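In code terms, the Laws amount to a priority-ordered filter on candidate actions. Here is a hedged Python sketch (the three predicates are hypothetical stand-ins; each one hides the hard part, namely how the robot interprets the world):

```python
def permitted(action, harms_human, disobeys_order, endangers_self) -> bool:
    """Evaluate an action against Asimov-style Laws, checked in strict priority order."""
    if harms_human(action):      # Law 1: never injure a human being
        return False
    if disobeys_order(action):   # Law 2: obey humans, unless that conflicts with Law 1
        return False
    if endangers_self(action):   # Law 3: protect yourself, unless that conflicts with Laws 1-2
        return False
    return True

# The logic above is trivially simple; the danger lives inside the predicates.
# If the robot's world model concludes that *inaction* also "harms humans",
# the very same three Laws can license restricting humans "for their own good".
print(permitted("fetch_coffee",
                harms_human=lambda a: False,
                disobeys_order=lambda a: False,
                endangers_self=lambda a: False))  # -> True
```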

That’s all! So what’s your opinion? Should we encode motivations into our robots, or should we leave them as “empty shells” filled with reflexes?


#2

Hi Gabriel,

In my opinion, the latest prototypes of robots being released to the market are much more intelligent, and able to interact with humans far more safely, than the industrial robots in use until now, which allows them to leave the restricted industrial environments where they have so far been confined. These capabilities are due, in large part, to the cooperative and sensitive strategies they use; maybe artificial intelligence has realized that cooperation is the best strategy for evolving.

Robots have learned to cooperate with each other in order to become more like humans; for humans, however, cooperation is increasingly difficult. Although robots have made great strides by becoming increasingly sensitive, we are becoming insensitive to social problems such as unemployment and environmental deterioration, and even to ourselves.

The machine of global competition, of economic war, of the accumulation of power cannot be stopped. Although modern engineering is discovering the superior qualities of cooperation, our economic machine has competition as its only mechanism. (Or maybe not: maybe, deep down, automation could make us more sustainable, were it not for the rebound effect.)

It would be desirable for us to learn to cooperate by means other than poverty, catastrophe, and the renunciation of industrial civilization. We should be able to see that the double permanent war in which we live (war among ourselves and war against nature) is useless, unnecessary, and absurd. But that requires something that machines do not have and that does not abound among humans either: conscience. It would require that, instead of being driven by the machine of capitalist competition towards the over-exploitation of nature, inequality, and climatic disaster, we evolve as living beings and organize ourselves to solve the problems that are putting us on the verge of collapse.

At the moment we are proving unintelligent when it comes to cooperating and solving global problems. We only know how to use dazzling technological advances, such as robotics, to make our unsustainability even deeper. Because there is an automation that is devastating our society, but it is not that of machines: it is that of human thought.


#3

Hi,

When we talk about artificial intelligence we forget that it is created by humans and learns from them. We are scared of AI taking our jobs and replacing us, but we overlook an important problem: biased datasets. Joy Buolamwini’s research at MIT showed that a facial-recognition algorithm trained mostly on white male faces becomes prejudiced, performing far worse on everyone else. Another example is the chatbot Tay, which was programmed to learn from human behavior by interacting with Twitter users; it was shut down because its tweets turned sexist and pro-Hitler. These examples show how directly AI reflects what it learns from us.
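As a toy illustration of the dataset problem (the groups, the 1-D “feature”, and all the numbers below are invented, not Buolamwini’s data), here is a short Python sketch showing how a skewed training set by itself produces skewed error rates:

```python
import random
random.seed(0)

def sample(group):
    """Hypothetical 1-D 'face feature' per demographic group (invented numbers)."""
    center = {"A": 0.0, "B": 3.0}[group]
    return random.gauss(center, 1.5)

# Training set skewed 95/5 towards group A -- the imbalance is the whole point.
train = [(sample("A"), "A") for _ in range(95)] + [(sample("B"), "B") for _ in range(5)]

def predict(x):
    """1-nearest-neighbour: label x like the closest training example."""
    return min(train, key=lambda t: abs(t[0] - x))[1]

for group in ("A", "B"):
    test = [sample(group) for _ in range(1000)]
    accuracy = sum(predict(x) == group for x in test) / len(test)
    print(f"accuracy on group {group}: {accuracy:.2f}")  # group B fares noticeably worse
```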

AI could be capable of bringing about a Terminator-like dystopian future if it observes that humans fight against each other rather than cooperate. Should we encode them with motivation? If we remain a society in which violence and inequality are tolerated, then no, because AI would simply replicate the behavior of humans (sexist, racist) without being able to consider whether it is wrong. As Jimena said, AI doesn’t have a conscience.

I think that a world in which AI and humans coexist is possible. When creating AI, we should consider how its biases and prejudices could be defeated. Could AI make the world more moral? It is said that with enough data AI could be able to “recognize and act upon the wants, needs, and concerns of every group affected by a certain decision, [and] we’d presumably avoid making a biased decision”. Humans and AI benefit from coexistence: AI needs human feedback, and humans need the efficiency of AI.

Bibliography: