I was thinking about the “Singularity+” model, which says that we may eventually reach a point at which, given enough technological advancement, we would be totally unable to predict the future (which of course implies that we have some ability to do so now). One of these possible S+ scenarios is a Terminator-like dystopian future, where AI and humans battle each other for dominion over the Earth.
The thing with AI is that we measure its capacity by its computational ability, but mere computational ability (henceforth, intelligence) is not enough to lead us to a doomsday scenario. One cannot infer a motivation or a goal from intelligence alone; a parallel mechanism is needed, namely motivation. Egotistical self-preservation is probably the most important one in the furthering and preservation of life.
Here we have two scenarios: either we encode egotistical self-preservation into robots, or we don’t. If we don’t, we can substitute for it to a certain degree with “reflexes”, that is, hardcoded rules like “if something is coming at you fast, run”. The downside is that nobody can foresee every possible danger scenario, which would inevitably shorten the robots’ “lifespan”. On the other hand, if we encode a self-preserving motivation, the robots would be better able to assess danger and survive the randomness that characterizes existence. Although, taken to its logical conclusion, this could lead to the robots perceiving all other lifeforms as potential competitors (resources being limited, and all) and dealing with them accordingly.
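To make the contrast concrete, here is a minimal Python sketch. Everything in it is hypothetical (the percepts, the rule table, the survival scores are all made up for illustration); it just shows a reflex agent with a fixed condition-action table versus a motivated agent that ranks whatever actions it has against a self-preservation score:

```python
def reflex_agent(percept: str) -> str:
    """Fixed condition-action rules; anything unlisted falls through."""
    rules = {
        "object_approaching_fast": "run",
        "temperature_rising": "move_away",
    }
    # An unforeseen danger gets the default action: the brittleness
    # described above.
    return rules.get(percept, "idle")


def motivated_agent(percept: str, actions: list[str]) -> str:
    """Picks whichever available action scores highest on estimated survival."""
    def survival_score(action: str) -> float:
        # Made-up scoring: under any perceived threat, fleeing or
        # hiding beats idling.
        threatened = percept != "all_clear"
        scores = {"run": 0.9, "hide": 0.7, "idle": 0.1}
        return scores[action] if threatened else 0.5
    return max(actions, key=survival_score)


# A danger nobody wrote a rule for:
print(reflex_agent("floor_collapsing"))                              # idle (oops)
print(motivated_agent("floor_collapsing", ["run", "hide", "idle"]))  # run
```

The reflex agent can only ever be as safe as its rule table; the motivated agent generalizes to novel dangers, which is exactly the property that, pushed to its logical conclusion, turns everything else into a competitor.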
Isaac Asimov tried to solve this problem by encoding multiple motivations (or Laws) into his robots. But as his short stories reveal, any Law (no matter how simple and well crafted) can be “misinterpreted”, and could potentially lead to an apocalyptic scenario.
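The Laws are usually read as a strict priority ordering, and even a toy version shows how a literal interpretation goes wrong. In this sketch (all the predicates are invented for illustration; Asimov never specified anything formally), a First Law check that counts any risk as “harm” ends up vetoing every order a human could give:

```python
def harms_human(action: str) -> bool:
    # Read literally, *any* risk counts as harm, including a human
    # choosing to do something mildly dangerous.
    return "risk" in action


def endangers_self(action: str) -> bool:
    return action == "self_destruct"


def permitted(action: str, order: str) -> bool:
    if harms_human(action):     # First Law: absolute veto
        return False
    if action == order:         # Second Law: obey orders (First already checked)
        return True
    if endangers_self(action):  # Third Law: self-preservation, lowest priority
        return False
    return True


# The human orders something mildly risky; a too-literal First Law
# forbids it, so the robot "protectively" overrides the order.
print(permitted("drive_with_small_risk", order="drive_with_small_risk"))  # False
```

Nothing here is buggy in the programming sense; the Laws are applied exactly as written, and that is precisely the failure mode Asimov’s stories keep exploring.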
That’s all! So, what’s your opinion? Should we encode motivations into our robots, or should we leave them as “empty shells” filled with reflexes?