Friday, 21 October 2016

Getting Robots to Teach Each Other New Skills


Google – Robots Utilising Shared Experiences

Robots have not yet attained human-level intelligence, but Google's researchers have shown progress towards what amounts to downloadable intelligence. Imagine becoming better at a skill not just through study and practice, but by tapping directly into the experiences stored in someone else's brain.

That would be science fiction for humans, but for AI-powered robots it could be a way to shortcut training times by having the robots share their experiences. Google recently demonstrated this with its grasping robotic arms.

James Kuffner, Google's former head of robotics, coined a term for this kind of skill acquisition six years ago: 'cloud robotics'. It recognises the effect of distributed sensors and processing supported by data centres and faster networks.

Kuffner is now CTO of the Toyota Research Institute, where his focus is on using cloud robotics to make domestic helper robots a reality. Google Research, the UK artificial intelligence lab DeepMind and Google X continue to explore cloud robotics as a way of speeding up general-purpose skill acquisition in robots. In several recently published demonstration videos, Google shows robots using shared experiences to learn quickly how to move objects and open doors.

Robots – Own Copy of Neural Network

One of the three multi-robot approaches the researchers have been using is reinforcement learning, or trial and error, combined with deep neural networks. This is the same approach DeepMind used to train its AI to play Atari video games and the Chinese board game Go.

Each robot has its own copy of the neural network, which helps it decide the best action for opening the door. Google built up data quickly by adding noise to the robots' actions. A central server records the robots' actions, behaviours and final outcomes, and uses those experiences to build an improved neural network that helps the robots perform the task better.
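The loop described above can be sketched in miniature. This is an illustrative toy only, not Google's actual system: the "door" task is reduced to picking the right action from three candidates, a value table stands in for the neural network, and all names and numbers are invented. Each robot acts with its own copy of the current policy (with some added exploration noise, as the article describes), and a central server pools every robot's experience into one improved policy.

```python
import random

# Candidate actions for the toy "open the door" task (invented for this sketch).
ACTIONS = ["push", "twist_handle", "pull"]
# Hidden reward: twisting the handle is what actually opens the door.
REWARD = {"push": 0.0, "twist_handle": 1.0, "pull": 0.2}

def act(policy, epsilon=0.2):
    """Epsilon-greedy choice: mostly exploit the current policy, sometimes
    explore at random (the injected noise that builds varied data)."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(policy, key=policy.get)

def train_shared(n_robots=4, rounds=50, lr=0.5):
    """Several robots collect experience in parallel; a central server
    folds all of it into one shared policy each round."""
    server_policy = {a: 0.0 for a in ACTIONS}
    for _ in range(rounds):
        experiences = []
        for _ in range(n_robots):
            local = dict(server_policy)   # each robot gets its own copy
            a = act(local)
            experiences.append((a, REWARD[a]))
        for a, r in experiences:          # server pools everyone's data
            server_policy[a] += lr * (r - server_policy[a])
    return server_policy

random.seed(0)
policy = train_shared()
print(max(policy, key=policy.get))
```

Pooling is what makes this fast: one robot's lucky exploration of the handle immediately raises the value of that action for every robot in the next round.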

As shown in two of Google's videos, after 20 minutes of training the robotic arms fumbled around for the handle but ultimately managed to open the door. After three hours, however, the robots could reach for the handle with ease, twist it and pull to open the door.

Google Training Robots to Construct Mental Models 

Another system they have been exploring could help robots follow commands to move objects around the home. Here, Google has been training its robots to build mental models of how things move in response to particular actions, by accumulating experience of where pixels end up on screen after a given action is performed.

The robots share their experiences of nudging different objects around a table, helping them predict what would happen if they took a particular course of action. The researchers are also exploring ways for the robots to learn from humans: Google's researchers guided robots to doors and demonstrated how to open them, and those actions were encoded in a deep neural network that transforms camera images into robot actions.
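The prediction idea can also be sketched as a toy. Again this is purely illustrative and every name and number is invented: instead of predicting pixels, the model predicts how far an object slides for a given push strength, fitted by least squares on (push, displacement) pairs pooled from several robots.

```python
def fit_gain(samples):
    """Least-squares gain k so that displacement ~ k * push, fitted
    on experience pooled from all robots."""
    num = sum(push * disp for push, disp in samples)
    den = sum(push * push for push, _ in samples)
    return num / den

def predict(k, push):
    """Forecast the outcome of a candidate action before taking it."""
    return k * push

# Invented experience from two robots: (push strength, observed displacement).
robot_a = [(1.0, 2.1), (2.0, 3.9)]
robot_b = [(0.5, 1.0), (1.5, 3.1)]

k = fit_gain(robot_a + robot_b)          # pooled model
print(round(predict(k, 2.0), 1))         # forecast for an untried push
```

The point mirrors the article: once the experiences are pooled, each robot can forecast the outcome of a candidate action it has never personally tried.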
