As we move forward in the creation of ever faster, more accurate and more capable artificial intelligences; as they stop being just lines of code and gain emotional depth and humanity; as they feel better, think better and take on an increasingly significant part of our daily lives, questions that might have seemed absurd before suddenly become relevant.
For example, this one. Today, hundreds of artificial intelligences are studying us and learning from us. What if the time comes when they resemble us too much? Could an artificial intelligence (or a robot) suffer from a mental illness?
An issue that may not be science fiction
As usual, this is not a genuinely new possibility. The idea of artificial intelligences becoming dysfunctional and even dangerous populates science fiction: from HAL 9000 to Futurama's Robot Santa Claus. However, beyond the work of Susan Calvin, one of the first robopsychologists in fictional history, not even literature has explored in depth the scenarios in which this could become a real problem.
And it could be. Can you imagine an artificial intelligence in charge of an airport's air traffic control suffering from hallucinations? An autonomous car with anger problems? Or a surgical robot with anxiety attacks? I admit that asking these questions out loud sounds somewhat ridiculous, but what if it were not? Shouldn't we be working on this disturbing possibility right now?
The mental health of artificial minds
Hutan Ashrafian, in fact, not only believes it is not ridiculous at all, but that we should think about it seriously. Ashrafian is a surgeon and lecturer in medicine at Imperial College London. But above all, he is known as one of the most prominent advocates of extending human rights to artificial intelligences. A cause I already discussed a decade ago and which, in our country, also has sympathizers such as Helena Matute, professor at the University of Deusto.
For Professor Ashrafian, the central question is whether or not mental illness is linked to the cognitive and emotional capabilities of human beings. If so, "as AIs become more like us, as they learn from us, they could develop problems similar to ours." We must admit that, ridiculous as it may sound, the argument is a powerful one.
It is also possible that mental illnesses, cognitive problems and psychological disorders are 'evolutionary bugs', so to speak; that is, design 'failures' that we have been dragging along evolutionarily and that could eventually be debugged.
The problem, as Ashrafian himself says, is that, on the one hand, we are still at the gates of the real AI revolution; and, on the other, we still know very little about mental health and illness. In other words, we are heading into the unknown. And while that holds the promise that we will be able to learn a great deal about both, it should also make us reflect on the risks involved. Risks that go beyond the ones that usually worry us when we think of artificial minds.