The question of intelligence is the last great frontier of science. Will we ever be able to build a machine that is intelligent enough to come up with a joke or a poem? Will it be able to understand sarcasm and have consciousness in the same sense humans do? How is it possible for the brain, whether biological or electronic, to perceive, understand, predict and manipulate a world far larger and more complicated than itself? And if this is possible, are we on the right track?
Unfortunately, there seems to be a fundamental difference between the current algorithmic way in which we look at AI and the way the brain actually works. Biologists tend to reject or ignore the idea of thinking about the brain in computational terms, while computer scientists often believe they have nothing to learn from biology.
In the first few years of AI research, inputs and goals were typically represented as sentences in a formal mathematical logic. Other AI programs examined large numbers of possibilities exhaustively: in a chess program, for example, every possible move might be evaluated one by one before arriving at the best one. A third approach adopted by early AI programmers was the use of heuristics. A heuristic function estimates how far a node in a search tree seems to be from a goal, and the search expands the most promising node first, i.e. the one with the lowest heuristic value.
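The heuristic-search idea can be sketched with a greedy best-first search on a toy grid. This is only an illustration of the technique, not any particular historical program; the grid, the `neighbors` function, and the Manhattan-distance heuristic are all assumptions chosen for simplicity.

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Always expand the node whose heuristic value h(n) is lowest."""
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            # Reconstruct the path by walking back through came_from.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# Toy 3x3 grid; the heuristic is Manhattan distance to the goal cell.
def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 3 and 0 <= y + dy < 3]

goal = (2, 2)
h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
path = greedy_best_first((0, 0), goal, neighbors, h)
print(path)
```

Greedy best-first search is fast because it follows the heuristic alone, but unlike A* it makes no optimality guarantee in general; on this obstacle-free grid it happens to find a shortest path.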
Unfortunately, while these methods have many practical uses, they come nowhere close to mimicking the way the human brain works.
The brain seems to be continuously building a model of the world around it, taking sensory input from the eyes, ears, and other organs and recording memories of our experiences. New experiences and inputs are then compared to previous memories within this model, and a prediction is made. The success or failure of these predictions, along with the new experiences themselves, is continuously fed back into the brain, so that it continuously evolves and self-modifies its model of the world. It seems to work very much like a control system with feedback.
In my view, neural networks and genetic programming seem to be the two best bets for AI among the present set of theories. Approaches based on neural nets come closest to mimicking the predictive model of the brain: a neural network with Hebbian learning can be viewed as a prediction machine. Genetic programming is a machine learning technique that optimizes a population of computer programs against a fitness landscape determined by each program's ability to perform a given computational task. But programs can only learn the facts that can be represented by their formalisms, and unfortunately almost all learning systems have very limited abilities to represent information.
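The "prediction machine" reading of Hebbian learning can be illustrated with a toy associator. This is a hedged sketch of the classic Hebbian rule ("neurons that fire together wire together"); the patterns, learning rate, and thresholding step are assumptions chosen to keep the example small.

```python
import numpy as np

# Toy Hebbian associator: after repeatedly seeing a cue pattern x
# co-occur with a pattern y, the learned weights let the cue alone
# predict (recall) its partner.
lr = 0.1
n = 4
W = np.zeros((n, n))

x = np.array([1.0, 0.0, 1.0, 0.0])   # cue pattern (pre-synaptic activity)
y = np.array([0.0, 1.0, 0.0, 1.0])   # pattern that follows the cue

# Hebbian update: dW = lr * outer(post, pre), applied over repeated exposures.
for _ in range(20):
    W += lr * np.outer(y, x)

prediction = W @ x                   # present the cue alone
predicted_pattern = (prediction > prediction.mean()).astype(float)
print(predicted_pattern)             # recovers y's on/off structure
```

Because the weight matrix stores the correlation between the two patterns, presenting the cue reproduces its partner: the network predicts what should come next, which is the sense in which Hebbian learning acts as a prediction machine.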
According to Turing, in a conversation with an ideal intelligent machine, a human would not be able to tell whether he or she is talking to a human or a machine. Yet aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.' Clearly, something seems fundamentally wrong in the way computer scientists look at AI.