Tuesday, 27 February 2018

Metalens: Breakthrough Seen in Artificial Eye and Muscle Technology

Metalens

Metalens the new human eye?

Researchers may have just created a new kind of electronic eye that behaves much like a human eye. This flat, electronically controlled artificial lens, known as a metalens, can correct for blurry vision on its own, giving it a host of potential uses across industries, from augmented and virtual reality to optical microscopes and beyond.

Taking a cue from the human eye and how it works, researchers have built an adaptive metalens that can control the main causes of blurry vision: defocus, image shift and astigmatism. The human eye can correct focus on its own, but image shift and astigmatism are beyond its reach, so the metalens actually goes further than a normal human eye can.

What is a Metalens? 


Inspired by the human eye, researchers have developed a flat, electronically controlled artificial lens that has come to be known as a metalens. The metalens can automatically correct for blurry images caused by defocus, image shift and astigmatism.

By combining breakthroughs in artificial muscle technology with metalens technology, researchers have devised an artificial electronic eye that can refocus on images in real time, much as the human eye does. Going even further than the human eye, the metalens can also correct for astigmatism and image shift, which the eye cannot do on its own.

Applicability of Metalens Technology:

Because the design is practical to manufacture, metalenses could be used in a host of applications: virtual reality, augmented reality, optical microscopes that refocus electronically without manual adjustment, mobile phone cameras and many others.

In all of these settings, a metalens is capable of automatically correcting blur caused by several factors simultaneously.

How is a Metalens made?

A metalens focuses light and eliminates spherical aberrations by using a dense layer of nanostructures, each smaller than a wavelength of light.
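
One way to picture how such a flat lens focuses is through the phase it imposes across its surface: each nanostructure at radius r must add just enough phase to compensate the extra path length to the focal point. The sketch below uses the standard textbook relation for an ideal flat lens; the wavelength and focal length are illustrative values, not figures from the research.

```python
import numpy as np

# Illustrative parameters, not taken from the article
wavelength = 532e-9   # green light, metres
focal_length = 1e-3   # 1 mm focal length
radii = np.linspace(0, 0.5e-3, 5)  # radial positions across a 1 mm wide lens

# Hyperbolic phase profile of an ideal flat lens:
# phi(r) = -(2*pi/lambda) * (sqrt(r^2 + f^2) - f)
phase = -(2 * np.pi / wavelength) * (np.sqrt(radii**2 + focal_length**2) - focal_length)

# Each nanostructure would be chosen to impart this phase (modulo 2*pi)
print(np.mod(phase, 2 * np.pi))
```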

Researchers first developed a metalens about the size of a single piece of glitter. To be useful in commercial applications, it had to be scaled up. Because the design file must describe every nanostructure, scaling the lens from 100 microns to a centimetre across increases the amount of design information by more than 10,000 times, pushing the design file into gigabytes or even terabytes.
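
The 10,000-fold figure follows from area scaling: the number of nanostructures grows with the square of the lens diameter. A rough back-of-the-envelope sketch is below; the element spacing and bytes-per-element values are assumptions for illustration, not numbers from the researchers.

```python
# Rough scaling estimate; pitch and bytes-per-element are illustrative assumptions
diameter_small = 100e-6      # 100 micron prototype
diameter_large = 1e-2        # 1 cm lens
element_pitch = 400e-9       # assumed spacing between nanostructures
bytes_per_element = 32       # assumed bytes needed to describe one element

def design_size_bytes(diameter):
    elements = (diameter / element_pitch) ** 2   # element count scales with area
    return elements * bytes_per_element

print(design_size_bytes(diameter_large) / design_size_bytes(diameter_small))  # 10,000x
print(design_size_bytes(diameter_large) / 1e9, "GB (illustrative)")
```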

To shrink the file, the researchers turned to an algorithm commonly used in fabricating integrated circuits. Making a metalens at commercial scale would bring together two industries: the semiconductor industry that makes computer chips and the lens manufacturing industry.

For now, the researchers have no plans to sell the intellectual property rights to the metalens technology and are exploring ways to bring it to the production line.

Friday, 23 February 2018

NASA Developing 3D Printable Tools to Help Analyse Biological Samples without Sending Them Back to Earth

NASA is developing 3D printable tools to help analyse biological samples without transporting them back to Earth

To enable astronauts aboard the International Space Station (ISS) to study biological samples without sending them back to Earth, NASA scientists, together with a scientist of Indian origin, are developing 3D printable tools that can handle liquids such as blood without spilling them in microgravity.

The aim is to understand how spaceflight affects crew health and how to prepare for long-duration missions to Mars and beyond, NASA said on 8 February. The new NASA project, known as Omics in Space, plans to develop technology to study "omics" - fields of microbiology that are vital to human health. Omics covers research into genomes, microbiomes and proteomes.

NASA has already studied omics through efforts such as the Microbial Tracking-1 experiment, which examined microbial diversity on the space station. But there is currently no way to process biological samples on the station, so they have to be sent down to Earth. Months can pass between the time samples are collected and the time they are analysed, said Kasthuri Venkateswaran of NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, principal investigator for the Omics in Space project.

Venkateswaran, an alumnus of Annamalai University in Tamil Nadu, said the project aims to develop an automated system for studying molecular biology with minimal crew intervention.

Handling liquids is one of the biggest hurdles in microgravity, the researchers noted. Astronauts collect a range of biological samples, including their own saliva and blood, as well as microbes swabbed from the walls of the ISS. These samples may then have to be mixed with water. Without the right equipment, they can dribble, float away or form air bubbles that could compromise results.

Two years ago, NASA took a big leap by sequencing DNA in space for the first time. Astronauts used a small, hand-held sequencing device known as the MinION, developed by Oxford Nanopore Technologies, a company headquartered in Oxford, England.

NASA said the Omics in Space project plans to build on this success by developing an automated DNA/RNA extractor that can prepare biological samples for the MinION device. A critical component of this extractor is a 3D printable plastic cartridge needed to pull nucleic acids out of the samples for MinION sequencing.

Camilla Urbaniak, a postdoctoral researcher at JPL and co-investigator on Omics in Space, said the technology has already been tested on Earth. "We're taking what's used on Earth to study DNA and combining all the steps into an automated system," Urbaniak said. "What's new is a one-stop shop that can extract and process all of these samples."

Tuesday, 20 February 2018

The Next Generation of Cameras Might See Behind Walls




Single Pixel Camera/Multi-Sensor Imaging/Quantum Technology

 

Users are captivated by camera technology and the increasingly polished images it produces, but these technological achievements have much more in store. Single-pixel cameras, multi-sensor imaging and quantum technologies promise to transform the way we take images.

Camera research has been moving away from simply increasing the number of megapixels and towards merging camera data with computational processing. In this radically different approach, the incoming data may not look like an image at all; it only becomes one after a sequence of computational steps involving complex mathematics and modelling of how light travels through the scene or the camera.

This extra layer of computational processing removes the constraints of conventional imaging systems, and there may come a point where we no longer need a camera in the conventional sense at all. Instead, we would use light detectors that only a few years ago would never have been considered for imaging.

Yet these detectors could deliver incredible results, such as seeing through fog, inside the human body and even behind walls.

Illumination Spots/Patterns

 

The single-pixel camera is one example, and it relies on a simple principle. Ordinary cameras use millions of pixels - tiny sensor elements - to capture a scene that is typically illuminated by a single light source.

But things can also be done the other way round: capturing information from many illumination sources with a single pixel. To achieve this, one needs a controlled light source, such as a simple data projector, that illuminates the scene one spot at a time or with a sequence of different patterns.

For each illumination spot or pattern, one measures the amount of light reflected and then adds everything together to build the final image. The obvious drawback of taking a photo this way is that many illumination spots or patterns must be sent out to obtain a single image - one that a regular camera could capture in a single snapshot.
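
As a rough illustration of the idea, here is a minimal single-pixel reconstruction sketch: project a series of random patterns, record one brightness reading per pattern, then correlate the readings with the patterns to recover the scene. The toy scene, the number of patterns and the simple correlation reconstruction are illustrative assumptions, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 32x32 "scene" standing in for the object being photographed (assumption)
scene = np.zeros((32, 32))
scene[8:24, 12:20] = 1.0

n_patterns = 4000
patterns = rng.random((n_patterns,) + scene.shape)   # projected illumination patterns
readings = np.tensordot(patterns, scene, axes=2)     # single-pixel detector: one number per pattern

# Correlate each pattern with its mean-subtracted detector reading
weights = readings - readings.mean()
image = np.tensordot(weights, patterns, axes=(0, 0)) / n_patterns

# The reconstruction should correlate strongly with the original scene
print("correlation with scene:", round(np.corrcoef(image.ravel(), scene.ravel())[0, 1], 2))
```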

However, this type of imaging makes it possible to build otherwise impossible cameras, for example ones that work at wavelengths beyond the visible spectrum, where good detectors exist but cannot be assembled into conventional cameras.

Quantum Entanglement 

 

Cameras of this kind could be used to take images through fog or heavy snowfall. They could also mimic the eyes of some animals and automatically increase the resolution of an image depending on what is being portrayed. It is even possible to capture images from light particles that have never interacted with the object being photographed.

This takes advantage of the idea of 'quantum entanglement', in which two particles can be connected in such a way that whatever happens to one also affects the other, even when they are separated by a great distance.

Single-pixel imaging is one of the simplest innovations in future camera technology, and it still rests on the traditional concept of what forms an image. We are now seeing a surge of interest in methods that exploit far more of the available information, where traditional techniques gather only a small portion of it.

This is where multi-sensor approaches, in which many detectors point at the same scene, come in. One ground-breaking example is the Hubble telescope, whose images are built from combinations of several exposures taken at different wavelengths.
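
A minimal sketch of the multi-wavelength idea: treat separate exposures taken through different filters as colour channels and merge them into a single false-colour composite. The filter names, array sizes and random data below are placeholders, not actual telescope data.

```python
import numpy as np

# Hypothetical single-wavelength exposures of the same scene (illustrative random data)
exposures = {
    "656nm": np.random.rand(64, 64),
    "502nm": np.random.rand(64, 64),
    "373nm": np.random.rand(64, 64),
}

def to_channel(img):
    # Normalise each exposure to 0..1 before mapping it to a colour channel
    return (img - img.min()) / (img.max() - img.min())

# Map the three wavelength bands to R, G and B to form a false-colour composite
composite = np.dstack([to_channel(exposures["656nm"]),
                       to_channel(exposures["502nm"]),
                       to_channel(exposures["373nm"])])
print(composite.shape)  # (64, 64, 3)
```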

Photon & Quantum Imaging


Commercial versions of this kind of technology can already be bought, such as the Lytro camera, which records information about both the intensity and the direction of light on the same sensor, producing images that can be refocused after they have been taken. The next generation of cameras may well look like the Light L16 camera, whose ground-breaking design is built around more than ten separate sensors.

Their data are combined by a computer into a 50MB, refocusable and re-zoomable, professional-quality image; the camera itself looks rather like an exciting Picasso interpretation of a crazy cellphone camera. Researchers are also working hard on seeing through fog, seeing beyond walls and imaging deep inside the human body and brain. All of these techniques rely on combining images with models that explain how light travels through or around different substances.
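
The refocus-after-the-fact trick usually comes down to a shift-and-add step over the directional views a light-field sensor records: shift each sub-view in proportion to its viewpoint offset, then average. Below is a minimal sketch of that idea with made-up view data; the grid of views, their offsets and the shift amounts are illustrative assumptions, not details of any particular camera.

```python
import numpy as np

# Hypothetical 3x3 grid of sub-aperture views from a light-field sensor (illustrative data)
views = np.random.rand(3, 3, 64, 64)

def refocus(views, alpha):
    """Shift each sub-view in proportion to its offset from the centre view, then average."""
    n_u, n_v, h, w = views.shape
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            du = int(round(alpha * (u - n_u // 2)))   # vertical shift for this viewpoint
            dv = int(round(alpha * (v - n_v // 2)))   # horizontal shift for this viewpoint
            out += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return out / (n_u * n_v)

# Different values of alpha correspond to focusing at different depths
refocused_near = refocus(views, alpha=2)
refocused_far = refocus(views, alpha=-2)
print(refocused_near.shape, refocused_far.shape)
```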

Another notable approach that has been gaining ground uses artificial intelligence to 'learn' to recognise objects from the data. These methods, inspired by the learning processes of the human brain, are likely to play a major role in future imaging systems.

Single-photon and quantum imaging technologies are maturing to the point where they can take images at extremely low light levels and record video at exceptionally high speeds, reaching a trillion frames per second. That is fast enough to capture images of light itself travelling across a scene.
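
A quick sanity check on that claim: at a trillion frames per second, light advances only a fraction of a millimetre between consecutive frames, which is why a pulse can be seen creeping across a scene.

```python
c = 3.0e8      # speed of light, metres per second (approximate)
fps = 1.0e12   # one trillion frames per second
print(c / fps * 1000, "mm of light travel per frame")  # ~0.3 mm
```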

Tuesday, 13 February 2018

An Apology after Apple Sends Wrong Ad Spend Data to Developers

Apple
Recently, some developers were sent ad spend data belonging to other apps and developers, which led to some awkward and uncomfortable questions. iOS developers can opt in to Apple's Search Ads Basic service, under which they pay only when their app is installed by a user. In sending out the end-of-month ad review, Apple inadvertently sent some developers' ad spend details to other developers.

On Wednesday, Apple acknowledged the mistake and issued an apology to its developers. It also said that, from now on, ad spend data will be available to developers by logging in to their own accounts, to prevent similar mishaps in future.

What is Apple's Search Ads Basic Service?

For a developer, getting an app noticed is always a struggle, and Apple helps solve that problem to a certain extent.

Search Ads Basic is a service in which Apple advertises a developer's app in the App Store; in exchange, the developer signs up for the service and pays as and when the app is downloaded.

This is cost effective, as the developer pays only when the app is actually installed, not when a user merely shows interest without downloading it.

With Search Ads Basic, developers receive an end-of-month statement in which they can review the performance of their ads. It includes information such as the number of installs, the average spend per install, total ad expenditure on the App Store and more.

Search Ads Basic was launched in December 2017 and is aimed at young and emerging developers who want to promote their apps in the App Store, where the apps are listed in search results. The service not only promotes an app but also gives it a chance to be seen by users who might otherwise never know it existed.

An even bigger advantage for developers is that they pay per install rather than per impression. In addition, Search Ads Basic gives developers intelligent automation features that maximise results with less effort, along with a dashboard where they can track performance.

Apple's Apology after Sharing Developers' Ad Spend Details:

Developers like to keep their ad spend details, such as installs and App Store ad expenditure, confidential. So when they received data belonging to other developers, there was a strong chance their own private ad data was being seen by someone else too.
Apple acknowledged its part in the mistake and apologised, saying the problem was caused by a "processing error" and that, henceforth, all such data will be available only in each developer's own account.

Can Neural Networks Learn to Ride a Bike?

Neural Networks
For some, riding a bike is easy, while for others it is a difficult skill they feel they must learn at any cost, mostly because the vast majority already know how to ride and no one wants to be left out. But how easy is it for a neural network made up of two - yes, you read that right, two - neurons to ride a bike?

Two neurons - or, to be accurate, two nodes forming a digital neural network - have learnt to ride a bike, with no external information or programming to guide them. Researchers who study thinking use neural networks to simulate how a person actually thinks: how thinking works, how it is formed and how it responds to the external world.

What are neural networks?


Neural networks are clusters of neurons that pass information to one another by strengthening and weakening the connections between them. Don't be alarmed: these are not actual neurons but simulated nodes, model neurons in a network running on a computer rather than in a body.

In a big step towards artificial intelligence, neural networks can make sense of a problem and respond to it without any prior programming about how to solve it.

Researchers have now used neural networks to learn to ride a bike in a computer simulation, without any beforehand programming on how to do so. They found that a network made up of only two model neurons was enough to ride the bike successfully.

Neural network and riding a bike: 

In testing, the researchers set an algorithm, a human and, of course, the two-neuron neural network the same task of piloting the bike, each given the same controls: the bike's speed, its lean to one side or the other, the angle of the handlebars and so forth.

The researchers first tested the algorithm using "what if" programming: what move will keep the bike upright, what move will increase its speed?

But the algorithm failed to learn to ride: it could not do two things at once and resorted to odd strategies to reach its goals - to increase speed, for example, it would swoop from side to side. The researchers concluded that such an algorithm, unlike a neural network, cannot anticipate future outcomes and so falls short on this kind of problem.

The second round of testing involved humans, who had to learn to ride the bike using only a keyboard. After a few tries, they too managed to ride the simulated bike.

After the human round, it was the neural network's turn. Drawing on the experience and feedback of the people who had tried the simulated bike, the researchers built a neural network that assessed the environment the bike was in and worked out how to manoeuvre through it - which way the bike should lean and how fast it should go. By the end of testing, the network had successfully learnt to ride.

Within the network, the two neurons passed information to one another: one assessed the environment and relayed what it found, while the other used that information to actually control the bike.
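
As a very loose illustration of that two-node division of labour (not the researchers' actual model), here is a toy simulation in which one node summarises the bike's state (its lean) and the second node turns that summary into a steering command. The dynamics, gains and node wiring are all invented for the example.

```python
import math, random

# Toy bike state: lean angle (radians) and lean rate; the dynamics are invented for illustration
lean, lean_rate = 0.1, 0.0
dt = 0.02

# Node 1 ("sensing" neuron): compresses the state into a single activation
def node1(lean, lean_rate, w_lean=1.0, w_rate=0.6):
    return math.tanh(w_lean * lean + w_rate * lean_rate)

# Node 2 ("motor" neuron): turns node 1's activation into a steering command
def node2(activation, gain=-3.0):
    return gain * activation

for step in range(500):
    steer = node2(node1(lean, lean_rate))
    # Simplified inverted-pendulum-style dynamics: gravity tips the bike over,
    # steering pushes it back toward upright, plus small random disturbances
    lean_rate += (9.8 * math.sin(lean) + 5.0 * steer) * dt + random.gauss(0, 0.005)
    lean += lean_rate * dt

print("final lean angle (rad):", round(lean, 3))  # settles near 0 when the two nodes balance the bike
```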