Sunday, 1 October 2017

New Machine Learning Algorithms from Google and MIT Retouch Your Photos Before You Take Them

Google Pixel

It is getting harder and harder to squeeze better performance out of a phone's camera hardware. That is why companies like Google are turning to computational photography: using machine learning algorithms to improve the output. The latest research from the search giant, carried out with scientists from MIT, takes this work a step further, creating machine learning algorithms that can retouch your pictures like a professional photographer, in real time, before you even capture them.

The researchers used machine learning to build their software, training neural networks on a dataset of 5,000 images produced by Adobe and MIT. Every image in this collection had been retouched and improved by five different photographers, and Google and MIT's algorithms used this data to learn what kinds of improvements to make to different photos. This might mean increasing the brightness in certain places, reducing the saturation elsewhere, and so on.
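To make this supervised setup concrete, here is a minimal Python sketch, not the researchers' actual code: it assumes each raw photo is paired with its five expert retouches, picks one of them as the training target, and scores a hypothetical `enhance` network against it with a simple pixel-wise loss.

```python
import numpy as np

def mse_loss(predicted, target):
    # Mean squared pixel error between the network's output and the expert's retouch.
    return np.mean((predicted - target) ** 2)

def training_step(enhance, raw_photo, expert_retouches):
    # raw_photo: (H, W, 3) input image; expert_retouches: the five retouched versions.
    target = expert_retouches[np.random.randint(len(expert_retouches))]
    predicted = enhance(raw_photo)       # hypothetical network forward pass
    return mse_loss(predicted, target)   # a gradient update would follow here

# Dummy example: an identity "network" scored against five synthetic retouches.
raw = np.random.rand(64, 64, 3)
retouches = [np.clip(raw * b, 0.0, 1.0) for b in (0.9, 0.95, 1.0, 1.05, 1.1)]
print(training_step(lambda img: img, raw, retouches))
```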

Machine learning has been used to improve photos before, but the real advance in this research is slimming the algorithms down so that they are compact and efficient enough to run smoothly on a user's own device. The software itself is roughly the size of a single digital image and, as a blog post from MIT describes, it is capable of processing images in a variety of styles.

This means that new sets of images could be used to train the neural networks, which could even replicate a particular photographer's signature look. In a similar way, companies like Facebook and Prisma have produced artistic filters that imitate the styles of famous painters. And although smartphones and cameras already process imaging data in real time, these newer techniques are more subtle and selective, adjusting specific parts of a photo rather than applying general settings to the whole image.

To slim the machine learning algorithms down, the researchers used a few different techniques. These included expressing the changes made to each photo as formulae and mapping the pictures onto a grid of coordinates. The result is that the instructions for retouching a photo can be expressed mathematically, rather than stored as full-resolution images.
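As an illustration of that idea, here is a rough Python sketch, assuming (this detail is not spelled out in the article) that a small network has already produced a coarse grid of 3x4 affine colour matrices from a low-resolution copy of the photo. Applying that grid to the full-resolution image is then cheap, and only the small grid of numbers needs to be predicted rather than a full-scale retouched photo.

```python
import numpy as np

def apply_affine_grid(image, grid):
    """image: (H, W, 3) floats in [0, 1].
    grid: (gh, gw, 3, 4) per-cell affine colour transforms, assumed to come
    from a small neural network run on a low-resolution copy of the image."""
    h, w, _ = image.shape
    gh, gw = grid.shape[:2]
    # Map each full-resolution pixel to its grid cell.
    ys = np.minimum(np.arange(h) * gh // h, gh - 1)
    xs = np.minimum(np.arange(w) * gw // w, gw - 1)
    cells = grid[ys[:, None], xs[None, :]]                       # (H, W, 3, 4)
    # Append a constant 1 so the 3x4 matrix can also add a colour offset.
    rgb1 = np.concatenate([image, np.ones((h, w, 1))], axis=-1)  # (H, W, 4)
    out = np.einsum('hwij,hwj->hwi', cells, rgb1)                # per-pixel affine transform
    return np.clip(out, 0.0, 1.0)

# Example: a 16x16 grid of identity transforms leaves the photo unchanged.
identity = np.tile(np.eye(3, 4), (16, 16, 1, 1))
photo = np.random.rand(480, 640, 3)
assert np.allclose(apply_affine_grid(photo, identity), photo)
```

The point of the sketch is the size trade-off: a 16x16 grid of 3x4 matrices is only a few thousand numbers, far smaller than the megapixel image it retouches.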

Google researcher Jon Barron told MIT News that this technology has the potential to be very valuable for real-time image enhancement on a mobile phone. He added that using machine learning for computational photography is an exciting prospect, but one that is constrained by the severe computation and power limits of mobile phones. This paper may offer a way to sidestep those hindrances and create new, compelling, real-time photographic experiences without draining the battery or slowing down the viewfinder.

It would not be surprising to see this machine learning technology in one of Google's future Pixel phones. The company has already used its HDR+ algorithms to draw out more detail in light and shadow on its phones since the Nexus 6, and Google's computational photography lead, Marc Levoy, told The Verge last year that they have "only just begun to scratch the surface" with their work.
