Will robots with giant brains be dancing on our corpses by 2050? Will our smartphones take over our jobs and steal our girlfriends? Will drones take over space and conquer other planets? These and many other very relevant questions about Artificial Intelligence, finally answered.
If you’ve read episode 1 of this series, you might remember that seemingly trivial chess game some years ago. To recap: we got our ass kicked at chess by a computer. And when I say we, I don’t mean you and me, I mean the Kasparovs of this world.
This event led us to think that we humans had lost our intelligence monopoly, because a computer had outsmarted us!
Since then, this game of chess has been used frequently by the followers of the Artificial Intelligence church. Like all good churches, this one is filled with myths, holy figures, legends and conceptual confusion. It also has devoted followers, an equally devoted pack of haters and a serious lack of nuance.
So much for the metaphors. I’m going to try to take the middle road in this post: explain to you what we know about Artificial Intelligence, what we may expect from it and what we should want from it. Misinformation, as we all know, can have quite some unwanted side-effects, like fear, irrational behavior and inefficiency. Personally I like none of the above, so let’s get rid of them.
Clearing the sky
Let’s start with an etymological breakdown of the term artificial intelligence.
Artificial: from the Latin artificialis [according to the rules of the art], from ars [art] + facere (-ficere) [to make].
So artificial points out that the object is not crafted by nature, but designed and developed by man. If you look up intelligence, the dictionary gives it to you pretty straightforwardly: the ability to acquire, understand, and use knowledge.
Or as Plato put it: “The intelligence game will be won by those who know how to ask & answer questions critically.”
So intelligence isn’t mere data capturing; it is also knowing which data may be useful and relevant, and hence worth collecting and curating in order to exploit their valuable patterns. Valuable patterns? Yes, if there is one big truth to the Big Data bubble, it’s that data without interpretation is nothing, and that its true value lies in the small patterns you can distill from the data lake.
Or, in the words of Luciano Floridi: “Data without a model is just noise.”
Well, nobody likes noise. Maybe for now we can say that intelligence is always about information accompanied by an interpretative, dialectical process.
A compact definition
Artificial intelligence is a machine or piece of software that exhibits intelligence, as described above. It also refers to the field of study that researches how to create computers and software capable of intelligent behavior. Right about now the sci-fi scenes should be popping into your head (Ex Machina, A.I., Strange Days, Transcendence, Lucy): robots with giant heads taking over the world, subjecting mankind to slavery and feeding on our brains.
To put it very carefully… it’s highly unlikely this will ever happen. Think about it: if we are the creators of artificial intelligence, how could it become miraculously independent of us? Why would we build it so that it could ever outgrow us?
Putting it to the test
The founding father of the concept of Artificial Intelligence is Alan Turing, and he did believe in the development of artificial intelligence as an entity that outgrows humankind. The theoretical moment in time when artificial intelligence will have surpassed human intelligence is called the Singularity, a notion often invoked by the church of AI. To test whether an AI is actually intelligent, Turing developed the Turing test, which goes somewhat like this:
You have an interviewer, the ‘supposed’ AI and another, random person, put in three separate rooms. They can communicate only via mail, fax or chat (nothing verbal). The interviewer has a limited amount of time to ask questions of the other two rooms. If, after this period, he cannot determine which of the two is human and which is not, the AI passes. Even if we ignore the questionability of this method for a moment, the outcome so far speaks for itself.
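The three-room setup can be sketched in a few lines of code. Everything below is an illustrative toy, not a real implementation: the canned `human` and `machine` answers and the `naive_judge` are invented for the example.

```python
import random

# Toy sketch of the three-room Turing test: the judge sees only
# text answers from two anonymous rooms and must name the human one.

def human(question: str) -> str:
    # Stand-in for the human respondent: a context-aware, playful answer.
    return "Depends which hand you offered me first!"

def machine(question: str) -> str:
    # Stand-in for the candidate AI: a canned, context-free answer.
    return "I cannot answer that."

def turing_session(questions, judge) -> bool:
    """Run one session; return True if the machine fooled the judge."""
    occupants = [human, machine]
    random.shuffle(occupants)                  # hide who sits in which room
    rooms = {"room 1": occupants[0], "room 2": occupants[1]}
    transcript = {name: [fn(q) for q in questions] for name, fn in rooms.items()}
    guess = judge(transcript)                  # judge names the room it believes is human
    return rooms[guess] is not human

# A deliberately naive judge: pick the room with the longer answers as human.
def naive_judge(transcript):
    return max(transcript, key=lambda room: len(" ".join(transcript[room])))

fooled = turing_session(
    ["If we are holding hands, which hand are you holding?"], naive_judge)
```

With these canned answers the naive judge picks out the human every time, since the machine’s replies are shorter; a machine only “passes” when the judge guesses wrong.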
Every year there is a global AI competition called the Loebner Prize, where researchers and developers can put their AIs to the ‘ultimate’ test. The first prize goes to the AI that cannot be identified as a machine by two or more jury members.
Up to today the gold medal has never been awarded.
The AIs are not capable of answering non-informative questions or questions that have never been answered before (e.g. “If we are holding hands, which hand are you holding?”, “Should I marry my boyfriend?”, “How can I overthrow the government?”). They can’t connect multiple questions, remember previous answers or revise those answers given new information. This basically makes them a Google with voice recognition.
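That limitation is, at bottom, statelessness: each question is answered in isolation, with no memory of the conversation. A minimal sketch, with an answer table invented for the example:

```python
# A stateless lookup bot: a "Google with voice recognition" in miniature.
# Each question is matched against a fixed answer table; nothing is
# remembered between turns, so follow-up questions fall flat.

FAQ = {
    "what is the capital of france": "Paris.",
    "who wrote hamlet": "William Shakespeare.",
}

FALLBACK = "I'm sorry, I don't understand the question."

def stateless_bot(question: str) -> str:
    key = question.strip().lower().rstrip("?")
    return FAQ.get(key, FALLBACK)

# Informative questions work fine:
print(stateless_bot("What is the capital of France?"))   # Paris.
# But a follow-up that depends on the previous turn gets the fallback,
# because no conversational state is carried over:
print(stateless_bot("And how many people live there?"))
```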
And if we were to depend on Google to mediate our understanding of the world, it would be our intelligence flattening into artificial intelligence, and not the other way around. Not that we could blame Google for dumbing us down; that would be like blaming your car for your obesity. It can drive you to McDonald's or to the gym, but it’s still your responsibility.
Is it really all just sci-fi then? Some wishful thinking and fantasy? What about Siri? And all the results of the recent chatbot explosion on the market? And the computer really did beat us at chess, didn’t it?
Yes, it did. And that really was an important moment for AI, just not the kind I described above (the kind that takes over our universe and dances on our corpses). It’s important to distinguish two types of artificial intelligence. They go by many names, but let’s call them Reproductive and Productive AI for now.
Productive AI is the sci-fi version. Reproductive AI is the umbrella term for smart technologies, and it is very much happening today.
2 types of AI
Let’s talk a bit about Reproductive, or Light, artificial intelligence. Reproductive AIs are rather successful, mainly because their environment is shaped around their limits. Take for example smart lawnmowers: these guys do an awesome job! The only thing you have to do is show them the outer borders of the garden and the charging dock once.
You create an environment within the capabilities of this specific technology, and it excels at its job, probably doing it a hell of a lot better than you would.
The same goes for flying an airplane: if you have a bumpy landing, you can bet your ass it was the pilot and not the automatic navigator. The automatic flight navigator can calculate exactly how a soft landing should go, considering the wind, the temperature and all the other landings it has in its memory. It has more experience and more information than the pilot.
This environment we create for the Light AI (the garden, the airplane, the dishwasher, ...) is called the envelope. When you put artificial agents in their own habitat, the Internet, they perform at their best. The real problems occur when you let them cope with the unpredictable world out there; this is the frame problem. The envelope gives an answer to the frame problem.
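The idea of enveloping can be made concrete with a toy lawnmower: the garden it was shown once is its envelope, and inside it the agent is fully competent; step outside and you hit the frame problem. The grid, the cells and the error are all invented for illustration.

```python
# The "envelope": a 5x5 garden the mower was shown exactly once.
ENVELOPE = {(x, y) for x in range(5) for y in range(5)}

def mow(path):
    """Follow a path of grid cells, mowing only inside the envelope."""
    mowed = set()
    for cell in path:
        if cell not in ENVELOPE:
            # Outside its envelope the agent has no model of the world:
            # this is where the frame problem bites.
            raise ValueError(f"cell {cell} is outside the envelope")
        mowed.add(cell)
    return mowed

# Inside the envelope the mower excels:
inside = mow([(0, 0), (1, 0), (1, 1)])
# A path into the neighbour's garden fails immediately:
# mow([(7, 3)])  -> ValueError
```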
Remember the post about ‘Connected objects’? The smart factories of Industry 4.0? This is all light, reproductive artificial intelligence. It’s already happening all around us: with our smartphones, connected home devices and so on, we are enveloping more and more of our environment around light AIs.
Is this a good thing? A bad thing? Let’s say it’s a thing worth thinking about consciously and debating often enough.
Man is still the maker
At the beginning of this post I promised to talk about “Artificial intelligence, what we may expect from it and what we should want from it.” Well, I think I’ve made my point about those misplaced expectations.
The next question, then, is: what should we want from AI? If we are the ones developing and designing it, why shouldn’t we actively debate what it should look like and what purpose it should serve? We act like a people already defeated by a non-existent power. Pretty lazy, no? Technology will always remain a means to an end, not a goal in itself. So what is the goal then? Not to have artificial people, but to improve our world: to create a more efficient way of living together, to have more time for study, family or leisure, and to grow as a species into an improved version of ourselves.
Let me put it differently: do you want a robot like C-3PO from Star Wars doing your dishes, or do you want a washing machine, so you have time to catch up on some reading? Would you like a robot driving the UPS or DHL van that delivers your packages, or would you prefer an electric unmanned drone that doesn’t pollute, doesn’t add to the traffic jams and simply does the job much more efficiently?
You see, it’s not a question of producing human-like software that tries to beat us at what we are exceptionally good at: being human, communicating, creating, laughing, seeing things for what they are and putting ourselves in someone else’s shoes. It’s more a question of building envelopes for certain technologies to do certain defined tasks faster and better than us. Which is nice, because we didn’t want to do those tasks anyway.
A look into the future
To come back to the chess game: the AI here really does outperform the human actor, but this says more about the game of chess than about us humans. ICTs, certain software and light AIs will have a memory a gazillion times bigger than ours. They will have more knowledge than the whole world population together and be in contact with all the other smart technologies, creating a giant information network of immense value. Add the human factor to this network and you will have a real revolution, though not in the Singularity way I told you about.
This will force people to think about their added value: what capabilities and qualities distinguish them from ICTs, and how we can best empower and nurture these. So imagine a giant brain with all knowledge at your disposal and an endless, flawless memory, and now add your own talents, ambitions and wishes… Adds up to something nice, doesn’t it? Then again, with great power comes great responsibility. And in the case of ICTs it’s a shared responsibility. There is no boss of the Internet; we are all constitutive parts of it.
Is this the end of strong AI then? No, probably not, and it doesn’t have to be. But if we clear up the conceptual confusion between the two AIs, we can have more meaningful conversations about this topic, acknowledge the progress made and see clearly where there is more work to be done. And there is still a lot of work to be done!
I hope this was an insightful introduction to AI. If you have reactions or questions about my sources or the article itself, please mail me at firstname.lastname@example.org; I would love to start the conversation!
Stay tuned for more!