Friday, 11 August 2017

The Computer That Knows What Humans Will Do Next


Computer Code – Comprehend Body Poses/Movement

New computer code promises to give robots a better understanding of the humans around them, paving the way for more perceptive machines, from self-driving cars to investigative tools. The new capability enables a computer to comprehend the body poses and movements of multiple people, even tracking parts as small as individual fingers.

Though humans communicate naturally using body language, computers are largely blind to these interactions. By tracking the 2D human form and its motion, the new code greatly improves robots' abilities in social situations.

The code was developed by researchers at Carnegie Mellon University’s Robotics Institute using the Panoptic Studio, a two-story dome equipped with 500 video cameras that yields hundreds of views of an individual action in a single shot. Recordings from the system show how it views human movement using a 2D model of the human form.

Panoptic Studio – Extraordinary View of Hand Movement

This enables it to track motion from video recordings in real time, capturing everything from hand gestures to mouth movements. It can also track several people at once.

Yaser Sheikh, associate professor of robotics, stated that we communicate almost as much with the movement of our bodies as we do with our voices, yet computers are more or less blind to it. Tracking multiple people poses various challenges for a computer, and hand detection is an even greater obstacle.

To overcome this, the researchers used a bottom-up approach: first localising individual body parts in a scene, then associating those parts with particular individuals. Although image datasets of the human hand are far more limited than those of the face or body, the Panoptic Studio provided an extraordinary view of hand movement.
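The bottom-up idea can be illustrated with a small sketch: body parts are detected first, then linked into people by pairing the parts with the strongest mutual "affinity" score. All names and data below are illustrative assumptions, not the CMU implementation, which learns its affinities from images.

```python
# Hypothetical sketch of bottom-up grouping: part detections (here,
# elbows and wrists) are found first, then greedily linked into
# individual people using a pairwise affinity score.

def group_parts(elbows, wrists, affinity):
    """Pair each elbow with the wrist it is most strongly connected
    to, so that each (elbow, wrist) pair belongs to one person.

    elbows, wrists: lists of (x, y) keypoint coordinates
    affinity(e, w): higher score = more likely the two parts connect
    """
    # Rank all candidate links from strongest to weakest.
    candidates = sorted(
        ((affinity(e, w), i, j)
         for i, e in enumerate(elbows)
         for j, w in enumerate(wrists)),
        reverse=True)
    pairs, used_e, used_w = [], set(), set()
    for _score, i, j in candidates:
        if i not in used_e and j not in used_w:
            pairs.append((i, j))   # link elbow i to wrist j
            used_e.add(i)
            used_w.add(j)
    return pairs

# Toy scene with two people; negative distance stands in for a
# learned affinity, so nearby parts are linked first.
elbows = [(0.0, 0.0), (10.0, 0.0)]
wrists = [(10.5, 0.5), (1.0, 0.0)]

def affinity(e, w):
    return -((e[0] - w[0]) ** 2 + (e[1] - w[1]) ** 2) ** 0.5

print(group_parts(elbows, wrists, affinity))
```

In this toy scene each elbow is correctly linked to the nearby wrist (elbow 0 to wrist 1, elbow 1 to wrist 0) rather than to the closer-indexed one, which is the essence of associating parts to the right individual.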

Hanbyul Joo, a PhD student in robotics, stated that a single shot provides 500 views of a person's hand and automatically annotates the hand's position.

2D to 3D Models

He added that hands are usually too small to be interpreted by most of the cameras, but for this research the team used only 32 high-definition cameras and was still able to build a massive dataset. The method could ultimately be used in various applications, for instance helping self-driving cars better predict pedestrian movements.

It could also be used in behavioural diagnosis or in sports analytics. The researchers presented their work at CVPR 2017, the Computer Vision and Pattern Recognition Conference, held July 21 to 26 in Honolulu. They have already released their code to several other groups so that they can build on its capabilities.

Finally, the team expects to move from 2D models to 3D models, using the Panoptic Studio to refine its body, face and hand detectors. Sheikh mentioned that the Panoptic Studio had boosted their research, and that they can now break through various technical barriers largely as a result of an NSF grant awarded 10 years ago.
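Moving from 2D to 3D with a multi-camera dome like the Panoptic Studio conventionally relies on triangulation: a joint observed in two or more calibrated views can be lifted to a single 3D point. A minimal sketch of that step, assuming simple toy pinhole cameras rather than the studio's real calibration:

```python
import numpy as np

# Hypothetical sketch: triangulate a 3D point from its 2D projections
# in two calibrated cameras via the direct linear transform (DLT).
# The camera matrices below are illustrative, not Panoptic Studio data.

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from pixel observations pt1, pt2 seen by
    cameras with 3x4 projection matrices P1, P2 (least-squares DLT)."""
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X_h ~ 0 for the homogeneous point X_h via SVD.
    _, _, vt = np.linalg.svd(A)
    X_h = vt[-1]
    return X_h[:3] / X_h[3]

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X = np.array([0.2, 0.3, 5.0, 1.0])        # ground-truth joint position
pt1 = (P1 @ X)[:2] / (P1 @ X)[2]          # its projection in view 1
pt2 = (P2 @ X)[:2] / (P2 @ X)[2]          # its projection in view 2

print(triangulate(P1, P2, pt1, pt2))      # recovers [0.2, 0.3, 5.0]
```

With 500 views instead of two, the same least-squares system simply gains more rows, which is what makes a dense dome so effective for refining 3D detectors.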
