Recently, the topic of possible threats posed by artificial intelligence to humanity has been discussed over and over. I was almost persuaded by the argument that once an artificial agent with intelligence comparable to a human's is created, our existence is doomed for good. It has been argued that once AI reaches this level, it will self-improve exponentially fast and soon go beyond our imagination. At that point there would be basically two options: either the AI will be malicious and kill us as an enemy or a competitor, or it will wipe us out merely as a byproduct, because it will need resources that happen to be essential for us.
With this reasoning, how can anyone even think about building AI? This question has been stuck in my head for some time now, because this is exactly what we are trying to do in the project I have been participating in for the past couple of months.
I won't pretend I have found a way to avoid those scenarios, but I would like to present my chain of thought on the issue. In my opinion, people are so intimidated by the "unpredictability" of super-human intelligence that they completely forget about the examples at hand: examples of species with different levels of intelligence and their interactions.
Imagine a human walking in a forest when a wolf appears. The wolf notices the human and lets out a long, hungry howl. The hunt for prey begins. Who will win this game of life and death?
One on one, who knows? Physical strength is certainly on the wolf's side. Will human intelligence make up for it? If the human has tools - a knife, a spear, or even a gun - the winner is clear. But what if the wolf is not alone? What if there are 5 wolves? Will his gun shoot fast enough?
What if there are 10 wolves?
What if there are 10 of them and the human knows that these 10 wolves are the last ones alive? By killing them, he knows he would wipe out the whole species. Would he do it? With such knowledge, would he go into the forest in the first place? Wouldn't it be wiser to avoid the conflict? And even if he had a good reason to go there, couldn't he use his intelligence to drive the puny animals away (with fire, torches, ...)?
I think there is a big difference between having power and using it. Or at least, there should be: if we had the option of harming no one and still kept doing harm, how could we expect anybody (AI, extra-terrestrials) to act differently towards us?
If we can fix our human-animal relationship in such a general (and workable) way that we could imagine some super-human AI having the same relationship with us, then we don't have to expect the undesired scenarios.
This is just one extreme case, convenient for my argument because it is easy to contrast human and animal intelligence. However, I think there is a lot of work to be done even on relationships within our own species. Take, for example, how underprivileged, elderly, or handicapped people are treated in our society. Are you comfortable imagining yourself in their place? Maybe this is what concerns so many people about AI: maybe they project their own attitude towards the weaker onto a future AI.
I am not saying I am better than average in this respect. Before writing this article, I had never considered becoming a vegetarian; I do not take part in public welfare work; I give to charity only now and then...
But there must be a better, universal approach towards others. If we were able to find a system - a moral code - that would make the coexistence of these groups mutually beneficial, or at least not harmful to any party, then we would certainly have less to worry about regarding a potential super-human AI. Can we find such a system?