Thursday, 26 February 2015

New Algorithms Locate Where a Video Was Filmed From Its Images and Sounds


Creation of a System to Geolocate Video

Researchers at Ramon Llull University in Spain have created a system that can geolocate a video by comparing its audio-visual content against a worldwide multimedia database, for cases where textual metadata is unavailable or unreliable. In the future this could help locate people who have gone missing after posting images on social networks, or identify areas of terrorist activity. Many videos available online include text indicating where they were filmed, but many others do not, which complicates the use of the usual geolocation tools for multimedia content.

To resolve this issue, scientists from the La Salle campus of Ramon Llull University in Barcelona, Spain, have developed a system that places videos on the map even when they carry no indication of where they were produced. This is very challenging, given that most scenes of daily life contain no clearly recognizable landmarks. Because the videos are not accompanied by text, the system relies on recognizing their images (frames) and their audio.

Physical & Mathematical Vectors

One of the authors, Xavier Sevillano, explains: "Acoustic information can be as valid as visual information when it comes to geolocating a video, and on occasion even more so. We took some physical and mathematical vectors from the field of acoustic source recognition because they have already demonstrated good results." The gathered data are grouped into clusters so that computer algorithms developed by the researchers can compare them against a large collection of already geolocated videos recorded across the world. In the study, published in the journal Information Sciences, the team used around 10,000 sequences from the MediaEval Placing Task audio-visual database, a benchmarking initiative for evaluating multimedia-content algorithms, as a reference. "We search the database for the videos that are most similar in audio-visual terms to the one we need to locate, in order to identify its most probable geographical coordinates," says Sevillano.
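The retrieval step described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual algorithm: each video is assumed to be summarized as an audio-visual feature vector (the feature extraction itself is omitted), and a query is assigned the coordinates of its most similar already-geolocated reference video.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def geolocate(query_vec, reference_db):
    """reference_db: list of (feature_vector, (lat, lon)) pairs.
    Returns the coordinates of the most similar reference video."""
    best = max(reference_db, key=lambda item: cosine_similarity(query_vec, item[0]))
    return best[1]

# Toy reference database with made-up vectors and coordinates.
db = [
    ([0.9, 0.1, 0.3], (41.39, 2.17)),   # Barcelona
    ([0.1, 0.8, 0.5], (48.86, 2.35)),   # Paris
    ([0.2, 0.2, 0.9], (51.51, -0.13)),  # London
]
print(geolocate([0.85, 0.15, 0.25], db))  # → (41.39, 2.17)
```

A production system would of course use far richer descriptors and approximate nearest-neighbor search over the full reference collection rather than a linear scan.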

The Need for a Greater Audio-Visual Base

The researchers note that despite working with a database that is limited in size and geographical coverage, the proposed system geolocates video more accurately than its competitors. It locates 3% of videos within a radius of ten kilometres of their actual geographical location, and in 1% of cases it is accurate to within one kilometre.
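The "within X kilometres" figures above are computed from great-circle distances between predicted and true coordinates. A minimal sketch, with the standard haversine formula (the function names and evaluation setup here are illustrative assumptions, not taken from the paper):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def accuracy_within(predictions, truths, radius_km):
    # Fraction of predictions landing within radius_km of the true location.
    hits = sum(
        1 for (plat, plon), (tlat, tlon) in zip(predictions, truths)
        if haversine_km(plat, plon, tlat, tlon) <= radius_km
    )
    return hits / len(truths)
```

For example, `haversine_km(41.39, 2.17, 48.86, 2.35)` gives roughly 830 km, the Barcelona-to-Paris distance, so such a prediction would count as a miss at the 10 km threshold.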

The researchers acknowledge that their system would need a much larger audio-visual database before it could be applied to the millions of videos circulating across the internet, but they highlight its usefulness for locating videos that lack textual metadata, and the possibilities it offers. "This system could help rescue teams track down a person or group that has disappeared in a remote area, by detecting the locations portrayed in a video they uploaded to a social network before contact was lost," adds Sevillano.

Glassed-in DNA Makes the Ultimate Time Capsule


If you want to preserve data for 100 years, Blu-ray discs or USB sticks may be good enough, but for 1,000 years or more there is now another option: a DNA time capsule. In theory, 1 gram of DNA can hold 455 exabytes, enough to store all the data held by Facebook, Google and every other major tech company. DNA is also remarkably durable: DNA has been extracted and sequenced from the bones of a 700,000-year-old horse. But this durability comes with certain terms and conditions.

Robert Grass of the Swiss Federal Institute of Technology in Zurich puts it plainly: "We know that if we just store it lying around, we lose information." He and his team are therefore working on ways to increase DNA's longevity, with the aim of storing data for hundreds or even millions of years. Storing data on DNA means encoding the information in the DNA strand; the simplest scheme is to treat the bases G and T as a 1 and the bases A and C as a 0. Any damage to the DNA, gaps or errors in the sequence, can then be repaired with an error-correcting technique known as a Reed-Solomon code. Grass and his team are also trying to mimic the way fossils keep a DNA sequence intact by excluding all water from the environment, which is why they encapsulate the DNA in microscopic glass spheres.
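The base mapping described above is easy to sketch. Note one detail the simple description leaves open: since each bit corresponds to two possible bases, an encoder must pick one; this toy version just picks G for 1 and A for 0 (a real scheme uses that freedom to avoid problematic sequences such as long runs of one base), and the Reed-Solomon layer is omitted entirely.

```python
def bits_to_dna(bits):
    # Encode a bit string as DNA bases: 1 -> G, 0 -> A (one of the two
    # allowed choices per bit under the G/T=1, A/C=0 mapping).
    return "".join("G" if b == "1" else "A" for b in bits)

def dna_to_bits(strand):
    # Decode: G or T reads as 1, A or C reads as 0.
    return "".join("1" if base in "GT" else "0" for base in strand)

message = "1011001"
strand = bits_to_dna(message)
print(strand)                          # → GAGGAAG
print(dna_to_bits(strand) == message)  # → True
```

Because decoding accepts either base of each pair, `dna_to_bits("TCGGAAG")` recovers the same bit string, which is exactly the slack a smarter encoder exploits.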


To test the longevity of the storage system, they encoded two venerable documents totalling 83 kilobytes: a 10th-century version of ancient Greek texts, the Archimedes Palimpsest, and the Swiss federal charter of 1291. The DNA versions of these texts were then kept at 60 °C to 70 °C for a week to simulate ageing, after which the documents were still readable without errors. The results suggest that data in DNA form could last some 2,000 years if stored at a temperature of around 10 °C.
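The extrapolation behind such accelerated-ageing tests typically assumes the decay reaction follows Arrhenius kinetics, so a decay rate measured at a high test temperature can be scaled down to a low storage temperature. A hedged sketch of that reasoning, with a hypothetical activation energy as a placeholder (not the value measured in the actual study):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
EA = 1.5e5   # activation energy, J/mol -- assumed for illustration only

def rate_ratio(t_test_c, t_store_c):
    """How many times slower decay proceeds at t_store_c than at t_test_c,
    assuming Arrhenius behaviour: k ~ exp(-EA / (R * T))."""
    t_test = t_test_c + 273.15   # convert Celsius to Kelvin
    t_store = t_store_c + 273.15
    return math.exp(EA / R * (1 / t_store - 1 / t_test))

# One week of decay at a 65 C test temperature corresponds to this many
# weeks at a 10 C storage temperature (with the assumed EA above):
weeks = rate_ratio(65.0, 10.0)
print(f"1 week at 65 C ~ {weeks:.0f} weeks at 10 C ({weeks / 52:.0f} years)")
```

The projected lifetime is extremely sensitive to the activation energy, which is why it had to be measured experimentally rather than assumed, as done here.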

Grass would like to store all of today's knowledge for future generations, but synthesizing DNA strands is still very expensive: in normal conditions, encoding 83 kilobytes costs roughly £1,000, so attempting the same with all of Wikipedia would run to billions. Grass therefore suggests focusing on what our descendants and future historians might actually want to read, rather than storing everything. The way we look at the Middle Ages is shaped by the specific records that survived, he notes, and the information we choose to save will influence future generations in the same way. Even so, Grass is not yet sure what to put in a time capsule.

Wednesday, 25 February 2015

Interaction between Light and Sound in Nanoscale Waveguide


Interaction between Light & Sound – Nanoscale Area

Scientists from Ghent University in Belgium and the nano-electronics research institute Imec have demonstrated the interaction between light and sound in a nanoscale area. Their findings, published in Nature Photonics, show that the physics of light-matter coupling at the nanoscale paves the way for enhanced signal processing on mass-producible silicon photonic chips.

Over the last decade, the field of silicon photonics has attracted increasing attention as a driver of lab-on-a-chip biosensors and of faster-than-electronics communication between computer chips. The technology is built on nanoscale structures called silicon photonic wires, roughly a hundred times narrower than a human hair. These nanowires carry optical signals from one point to another at the speed of light, and are fabricated with the same technology used for electronic circuitry.

The wires work because light moves more slowly in the silicon core than in the surrounding air and glass, so the light is trapped in the wire by the phenomenon of total internal reflection.

Sound Moves Quicker in Silicon Wires 

Confining light is one thing; manipulating it is another. The problem is that one light beam cannot easily change the properties of another. This is where light-matter interaction comes into focus: it enables some photons to control other photons.

Researchers from Imec and the Photonics Research Group of Ghent University demonstrated a peculiar form of light-matter interaction, managing to confine both light and sound in silicon nanowires, with the sound oscillating ten billion times per second, far more rapidly than human ears can hear. They realized that sound cannot remain trapped in the wire by total internal reflection: unlike light, sound moves much faster in the silicon core than in the surrounding air and glass.

The scientists therefore engineered the environment of the core so that any vibrational wave trying to escape would bounce back. In doing so, they confined both light and sound to the same nanoscale waveguide core, a first observation.

Light & Vibration Influence Each Other 

When trapped in such an incredibly small area, light and vibrations strongly influence each other: light generates sound, and sound shifts the colour of the light, a process known as stimulated Brillouin scattering.

They exploited this interaction to amplify specific colours of light, anticipating that the demonstration will open up new ways of manipulating optical information. For example, light pulses could be converted into sonic pulses and back into light, implementing much-needed delay lines.

Moreover, the researchers expect that similar techniques could be applied to even smaller entities such as viruses and DNA, since these particles have unique acoustic vibrations that could be used to probe their global structure.

Telescopic Contact Lenses Could Magnify Human Eyesight


Telescopic Lens – Magnifying Objects – 2.8 Times

Interesting developments have been coming up in the world of contact lenses, with scientists making great headway in creating "smart" lenses for diabetics that can monitor glucose levels, and Google inventing a set with built-in cameras. Now optics researchers in Switzerland have developed contact lenses that can zoom in on an object at the blink of an eye: a simple wink activates the telescopic lens, helping people to read and to recognize faces more easily.

The prototype lenses, unveiled recently at the annual meeting of the American Association for the Advancement of Science, magnify objects up to 2.8 times and could someday be useful for people with visual impairment, which affects around 285 million people worldwide.

They could be particularly helpful for those with age-related macular degeneration (AMD), a major cause of blindness and visual impairment in people over the age of 50. AMD is a progressive condition in which a person gradually loses their central vision due to cell damage and death in the retina.

Enhanced Design 

Glasses known as bioptic telescopes are already available for this condition, but they are bulky and interfere with social interaction; this latest design, with the telescopes built into the lenses, is far less intrusive. The lenses were first developed with funding from DARPA as super-thin cameras for aerial drones, then converted into a vision-enhancing system first unveiled in 2013.

Those early lenses were not quite suitable for eyeballs: the magnification was always on and could only be deactivated by taking the lenses out, and they were not gas permeable, so they could not be worn for long. Two years later, the development team led by Eric Tremblay of the Swiss Federal Institute of Technology in Lausanne has enhanced the design, and they believe the telescopic lenses, so far tested only on a mechanical model, are now ready to be tried on humans.

Updated Prototype Features

The updated prototype features small air channels that let oxygen flow through the underside of the lens, eliminating eye irritation. The lenses are larger and more rigid than standard lenses, covering the sclera, the white area of the eye. Within the 1.5 mm thick lenses, a ring of tiny aluminium mirrors bounces light around, increasing the apparent size of the image and magnifying it 2.8 times.

To switch between zoomed and normal vision, the user winks their right eye, which interrupts the light reflected from the contacts to an accompanying pair of glasses. When this signal is blocked, a polarized filter in the glasses kicks in and guides light into the telescopic area of the lens; winking the left eye returns vision to normal.

It is not yet certain when these lenses will reach the market, but they promise to change the way many people view the world.

Wednesday, 18 February 2015

Facebook Will Soon Be Able to ID You in Any Photo


Facebook’s DeepFace System

When you appear in a photo taken at an event, other people recognize you most of the time, but a machine may fail to do so unless it has been programmed to look for you and trained on several high-quality photos of your face.

However, at Facebook, home to the largest collection of personal photographs, the technology is making headway in that direction. The California-based company's DeepFace system, alongside the efforts of several other corporate players in the field, is now as accurate as a human being on some constrained facial-recognition tasks. Yann LeCun, a computer scientist at New York University in New York City who directs Facebook's artificial intelligence research, states that the purpose is not to intrude on the privacy of Facebook's more than 1.3 billion active users, but to protect it.

When DeepFace identifies a face in one of the 400 million new photos that users upload daily, Facebook alerts that person to their appearance in the picture. The user then has the option to blur their image in the photo to protect their privacy.

Automated Facial Recognition – Uses & Limits Unknown

Most people are disturbed at being identified, particularly in strangers' pictures, and Facebook has already begun using the system, though its face-tagging feature reveals only the identities of a user's friends. Beyond DeepFace, the U.S. government has been funding university-based facial-recognition research, while private-sector players such as Google are pursuing their own projects to automatically identify individuals who appear in videos and photos.

How automated facial recognition will be used, and how the law will limit it, is unknown, but once the technology takes shape it may create as many privacy problems as it solves. Brian Mennecke, an information-systems researcher at Iowa State University in Ames who studies privacy, states: "The genie is, or soon will be, out of the bottle, and there will be no going back."

Social Face Classification 

LeCun comments that identifying a face is a much harder problem than detecting it, because unlike a fingerprint a face constantly changes. A smile alone transforms the face: the corners of the eyes wrinkle, the nostrils flare and the teeth show. When the head is thrown back in laughter, the apparent shape of the face contorts; and even with the same expression, the hair varies from photo to photo, particularly after a visit to the hairdresser.

Yet most of us recognize people effortlessly, even if we have seen them in just one photo. The greatest advantage of DeepFace, and the aspect of the project that has sparked concern, is its training data. The DeepFace paper describes a data set called SFC (Social Face Classification), a library of 4.4 million labelled faces taken from the Facebook pages of 4,030 users. Although users give Facebook permission to use their personal data when they sign up for the website, the DeepFace research paper does not mention whether the owners of the photos consented.