A lot has been written about the coming of AI, and while I don’t keep a close eye on the topic, the always thought-provoking Andrew Hammel linked an article from Kevin Drum in Mother Jones that grabbed my attention with a couple of pretty dreadful, though not unrealistic, predictions about the end of the human species. One thing that he has pointed out in several of his articles is that AI will completely change the way we earn a living, since pretty much all jobs will be done faster, better, and more cost-efficiently by robots than they could ever be done by humans (and to be honest, most humans are not very good at their jobs). Production is therefore expected to rise immensely once AI is functional enough to take over, while at the same time tens of millions of people will be left without a job and without any means of sustaining themselves.
Kevin Drum points out that the smart guys over in Silicon Valley are much in favor of some kind of universal income distribution in order to prevent widespread poverty and the social unrest that would come with it. I am reminded of ancient Rome, where the widespread use of slave labor disenfranchised many free citizens, who had to rely on a patronage system, owing their loyalty to one or several rich patrons who supplied them with counsel, money, and representation in exchange for devotion and support. Bread and games sponsored by Amagooappsung, anyone?
But besides the economic effects of the AI revolution, Kevin Drum also points out several other interesting consequences, including the disappearance of humans just 100 years after the invention of AI.
He lists six possible scenarios (his words in bold):
We will all be illiterate. There will be no need to learn how to write at all.
We will lose interest in other people. Robots will replace all human interaction because they are just much more agreeable.
The end of sex. Robots will also be better sex partners than other humans could ever be.
Eternal life for the few. Having solved all other problems, AI might be able to make humans immortal, but this cannot be done for everyone (why not, though?).
Endless war. Robots will fight each other to live out human aggression. I think there was a seaQuest episode about that.
Humans give up. Eventually, since humans cannot outdo robots anymore, they will fall into decline and disappear in a depressed state.
None of these sound desirable, and while they are not impossible, I wonder whether AI could not also be the solution to the problems that plague the human race. Let’s be honest: we are pretty bad at everything.
· Our bodies and minds are quite limited.
· Humans are evolutionarily adapted to live in very small communities, so we cannot meaningfully empathize with people far away or with tragedies on a large scale.
· We also seem to be pretty bad at solving tragedy-of-the-commons problems like global climate change.
So let's face it: overall, humans are pretty much garbage, as exemplified by the fundamental principle of political economy.
We might, at some point and through many iterations of society, become able to solve some of these problems as we have solved others, like polio. However, in some cases we simply don’t have the time (climate change). In short, while we are making progress in some of these areas, it is very slow and very fragile, as can be seen in the rise of the anti-vaccine movement. But hey, measles is back!
AI, on the other hand, might provide solutions to all these problems, since many of the limitations that humans suffer would not affect a full AI that can learn by “playing” outcomes against itself (as with Go), or would affect it to a much smaller degree. It also seems to be the logical conclusion of books like “Against Democracy”: put decision-making power into the hands of a greater intelligence in order to improve outcomes for everyone.
The question then becomes: should this happen, what criteria will the AI use to solve these problems? I propose that the philosophy we impart to our AI at the start might greatly shape that AI's development in the longer term, in the same way that our own knowledge and morals depend on earlier generations and do not come into being from nothing.
If AI learns ideals of pure efficiency, humans are done. There is no way we can compete in any meaningful way with intelligent machines, and the consequent extermination of humanity, be it peaceful or by force, would be a logical outcome. If we allow AI to learn completely freely from human behavior, we are also done; the Twitter experiment with Tay demonstrated that within a day. And if robots treated us the way we treat animals or each other, we would be looking at a dystopian world that would still end with humans being exterminated for their uselessness.
But what if we give AI a humanistic core? Ideals of the ethical treatment of animals might guide us in making sure that AI treats us the same way. Species-appropriate living might be the best we can hope for as a species, and that can only be achieved if AI recognizes that living beings have an intrinsic value of their own.
This could lead to a harmonious society of AI and humans living together, in which AI enables humans to continue their evolutionary path while minimizing suffering and compensating for our shortcomings. Iain M. Banks described such a future in his Culture series, in which AI realizes that humans have a unique capacity for pleasure but are also driven by the need to be useful, so that co-existence is entirely possible and quite beneficial, even though the humans cannot, most of the time, really understand what the AI is doing.
The problem is that our society is mostly driven by efficiency thinking at the moment. If you cannot contribute or compete against others to earn a monetary income, your perceived worth is diminished. In Germany we see this dramatically in the distribution of income between roles that benefit society (nurses, midwives, police officers, teachers, etc.) and those that arguably contribute only to a few individuals' wealth (bankers, for example).
We are therefore faced with a paradox: on the one hand, it seems in our common interest to give AI a philosophy that makes the ethical treatment of humans likely; on the other hand, we want AI for its efficiency, and that ideology could lead to our extinction. Chances are, short-term profit will beat long-term survival here as well.
If only we had someone greater to solve these problems for us…