Artificial Intelligence: I was following master's orders...
"I was merely following master's orders!" Will the day a robot will snap back at you will ever arrive??
I would say it would, if it is just the response we are talking about. It could have happened years ago - anyone can write a bit of software to do just that.The issue is deeper than that. The issue is, will the day a machine, in whatever shape or form, have a superior intelligence than human beings, who created it, ever come.
If we get wrapped up in lexical semantics, we could analyse the two terms: "artificial" and "intelligence" and see what they, in combination, can tell us.
"Artificial" conveys a meaning of being not found in nature, created/synthesised by man. That is my definition. Checking the Oxford Dictionary, I find the definition: "made or produced by human beings than occurring naturally, especially as a copy of something natural".
"Intelligence"? I would not attempt to provide my own definition here too. My Oxford Dictionary explains it as: "the ability to acquire and apply knowledge and skills"
The term "artificial" does not cause problems from a machine point of view.
But, "intelligence" does. What would the terms "ability", "acquire", "apply", "knowledge" and "skills" mean? We can, of course go back to the dictionary. It might ease the difficulties, and also it might not.
I could take the whole set and view it as passive actions. One can argue that there are no obvious pointers to an ability of applying acquired knowledge in improving, extending, creating skills a machine may possess at any given time.
See the problem with words!
An attempt was made by Turing to provide a "scientific" definition for the term "artificial intelligence". He proposed a thought experiment, which ran as follows. Imagine a set-up with two rooms, he said. In one room sits an observer, who has at his disposal an input device and an output device. The input and output devices are connected to "something" in the other room. The observer sends queries to the other room via the input device and receives responses via the output device.
The experiment is run by the observer inputting queries and observing the responses. From the inputs and outputs, the observer should be able to say whether it is a machine or a human being sitting in the other room returning the responses. If it so happens that a machine sits in the other room, and the observer is unable to identify it as such, whatever queries are posed to it, then the machine is considered to be intelligent.
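To make the set-up concrete, here is a minimal Python sketch of the experiment as described above. All of the names (HiddenResponder, ScriptedMachine, run_imitation_game and so on) are my own illustrative inventions, and the "machine" here is just canned text, so it stands no chance of fooling anyone; the point is only the structure of the test: queries in, responses out, and a single guess at the end.

```python
import random


class HiddenResponder:
    """Whatever sits in the other room: a human or a machine.
    The observer only ever sees reply(), never the implementation."""

    def reply(self, query: str) -> str:
        raise NotImplementedError


class ScriptedMachine(HiddenResponder):
    """A toy stand-in for the machine: one canned answer for everything."""

    def reply(self, query: str) -> str:
        return "That is an interesting question. Could you rephrase it?"


class HumanAtKeyboard(HiddenResponder):
    """A stand-in for the human: answers are typed in at the keyboard."""

    def reply(self, query: str) -> str:
        return input(f"(human, please answer) {query}\n> ")


def run_imitation_game(responder: HiddenResponder, queries: list[str]) -> str:
    """The observer's side of the experiment: send queries, read replies,
    then guess what is in the other room."""
    for q in queries:
        print(f"Observer asks : {q}")
        print(f"Room replies  : {responder.reply(q)}")
    return input("Observer's verdict, 'human' or 'machine'? > ").strip().lower()


if __name__ == "__main__":
    # Hide either a machine or a human behind the same interface.
    occupant = random.choice([ScriptedMachine(), HumanAtKeyboard()])
    verdict = run_imitation_game(occupant, ["What is 2 + 2?", "Do you ever feel bored?"])
    actually_machine = isinstance(occupant, ScriptedMachine)
    fooled = actually_machine and verdict == "human"
    print("The machine passed this (tiny) test." if fooled else "No pass this time.")
```

The essential design point is the shared interface: the observer's code cannot tell, from the type of object it holds, what is producing the replies; only the content of the conversation is available as evidence.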
The big question is, can we build a computer that can meet the above specification?
There are two camps: those who believe it can be done and those who do not. Those who believe are the "strong AI" people, and the non-believers are the "weak AI" people.
Strong AI people go even further. They claim that a day will come when one can "move", in the sense of transferring one's intelligence, into a silicon-based "body" and continue to prosper there. When one is in a silicon-based body, the term "body" loses its attraction, doesn't it? Would you want to fall in love with one?
Will that day come?
Well, it has arrived, hasn't it, in a sort of way? There was a time when rows of workers lined assembly lines, putting together cars, washing machines, or whatever, as the parts moved slowly along conveyor belts. Now, those workers have been replaced with rows of robots.
Does this mean AI of the strong kind has arrived??
Hmm!
For those who wish to read an excellent book on this subject, may I recommend: