
Wednesday, 23 September 2015

The Search for a Thinking Machine


New Age of Machine Learning


Some experts believe that by 2050 machines will have reached human-level intelligence. In this new age of machine learning, computers have started assimilating information from raw data in much the same way a human infant learns from the world around it.

This means we are heading towards machines that, for instance, teach themselves how to play computer games and become reasonably good at them, as well as devices that begin to communicate like humans, such as the voice assistants on smartphones. Computers have started to comprehend the world beyond bits and bytes. Fei-Fei Li has spent the last 15 years teaching computers how to see.

First as a PhD student and later as director of the computer vision lab at Stanford University, she has pursued the difficult goal of creating electronic eyes so that robots and machines can see and understand their environment. Around half of human brainpower is devoted to visual processing, though seeing is something we all do without much effort.

In a talk at the 2015 TED (Technology, Entertainment and Design) conference, Ms Li made the comparison that "a child learns to see, especially in the early years, without being taught, but learns through real-world experiences and examples".

Crowdsourcing Platforms – Amazon’s Mechanical Turk


She adds: "If you consider a child's eyes as a pair of biological cameras, they take one image about every 200 milliseconds, the average time an eye movement is made. By the age of three, the child will have seen hundreds of millions of images of the real world, and that is a lot of training examples."
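That figure roughly checks out. Here is a back-of-the-envelope calculation in Python, assuming around 12 waking hours a day (the waking-hours estimate is an assumption, not something stated in the talk):

images_per_second = 1 / 0.2                        # one image roughly every 200 ms
waking_seconds_by_age_three = 3 * 365 * 12 * 3600  # assumed ~12 waking hours a day
print(images_per_second * waking_seconds_by_age_three)  # about 236 million images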
Hence she decided to teach computers in the same way. She elaborates that instead of focusing solely on better algorithms, her insight was to give the algorithms the kind of training data that a child is given through experience, in quantity as well as quality. In 2007, Ms Li and her colleagues set about the enormous task of sorting and labelling a billion diverse, random images from the internet to provide the computer with examples of the real world.
The theory was that if the machine saw enough images of something, it would be able to recognise it in real life. Crowdsourcing platforms such as Amazon's Mechanical Turk were used, calling on 50,000 workers from 167 countries to help label millions of random images of cats, planes and people.

ImageNet – Database of 15 Million Images


Ultimately they built ImageNet, a database of around 15 million images across 22,000 classes of objects, organised by everyday English words. It has become a resource used by research scientists all over the world in the attempt to give computers vision.
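To make the idea concrete, here is a hedged sketch in Python of how a labelled image collection organised by class words is commonly laid out on disk and loaded with torchvision; the folder name "images" and the class names are illustrative, not ImageNet's actual layout or location:

# Illustrative directory layout, with everyday English words as folder names:
#   images/
#     cat/     cat_0001.jpg, ...
#     plane/   plane_0001.jpg, ...
#     person/  person_0001.jpg, ...
from torchvision import datasets, transforms

dataset = datasets.ImageFolder(
    root="images",                          # hypothetical local folder
    transform=transforms.Compose([
        transforms.Resize((224, 224)),      # a common input size
        transforms.ToTensor(),
    ]),
)
print(dataset.classes)       # e.g. ['cat', 'person', 'plane'], taken from folder names
image, label = dataset[0]    # an image tensor plus the index of its class word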

To teach the computer to recognise images, Ms Li and her team used neural networks: computer programs assembled from artificial brain cells that learn and behave in much the same way as human brains.
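As an illustration of the idea, and not Ms Li's actual system, the following minimal PyTorch sketch trains a tiny network of artificial neurons to map labelled images to class indices; random tensors stand in for a real labelled dataset such as ImageNet:

import torch
import torch.nn as nn

NUM_CLASSES = 10          # e.g. "cat", "plane", "person", ...
# Random stand-ins for 64 labelled 32x32 colour images
IMAGES = torch.randn(64, 3, 32, 32)
LABELS = torch.randint(0, NUM_CLASSES, (64,))

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, NUM_CLASSES),   # a score for each class word
)

optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                    # repeated exposure to labelled examples
    optimiser.zero_grad()
    loss = loss_fn(model(IMAGES), LABELS)
    loss.backward()                       # adjust the artificial neurons
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")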

At Stanford, the image-reading machine now writes accurate captions for a whole range of images, though it still gets things wrong; for instance, an image of a baby holding a toothbrush was wrongly labelled as "a young boy is holding a baseball bat".

For now, machines are learning rather than thinking, and whether a machine could ever be programmed to think is doubtful, given that the nature of human thought has eluded scientists and philosophers for ages.
