Friday, 2 March 2018

Google Embraces eSIM Technology with Its New Pixel 2 Devices

eSIM is all set to become the new standard in the mobile industry, and it is being actively promoted by the GSMA, the association of the world's major network operators. As the name suggests, the eSIM comes in the form of an integrated SIM chip that cannot be removed from the device. The move is also welcomed by consumer electronics manufacturers, who want to give a major push to connected devices through their own Internet of Things products. Google happens to be the first to adopt built-in eSIM technology with its Pixel 2 devices.

Advantages of having an eSIM in Google mobile devices


Google will use the eSIM to authenticate the cellular account, but only for consumers who are active Project Fi subscribers. It is worth noting that no other smartphone has ever made use of the eSIM standard. For the time being, the technology is reserved for LTE-equipped devices such as smartphones, tablets, smartwatches and a few other wearable gadgets.

With the new Pixel devices, users get the option to use the eSIM simply by connecting to the Fi network. Upon powering up the device, users just tap the SIM-free option during setup, and Google takes care of the rest of the process to ensure the device gets the necessary operator coverage.

How does it work?


The best thing about the eSIM is that the information stored on it follows a common standard and can easily be rewritten by any operator. A user can change operators by placing a simple phone call; there is no need to get a new SIM and no unwanted delay in making the switch. In other words, no physical swapping of SIM cards is required when the user changes operator.
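
As a rough illustration of why that is possible (this is a toy model, not the GSMA remote-provisioning protocol or any real carrier API), an eSIM can be thought of as a small rewritable store of operator profiles: switching operator is just downloading and enabling a different profile record.

```python
# Toy model of the eSIM idea, invented purely for illustration; this is
# NOT the GSMA Remote SIM Provisioning protocol or any real Android API.
# It shows that "changing operator" becomes a data update on the chip,
# not a physical card swap.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Profile:
    operator: str
    iccid: str           # identifier of the downloaded operator profile
    enabled: bool = False

@dataclass
class ESim:
    profiles: List[Profile] = field(default_factory=list)

    def download_profile(self, operator: str, iccid: str) -> None:
        """Simulates the operator writing a new profile onto the chip."""
        self.profiles.append(Profile(operator, iccid))

    def switch_to(self, operator: str) -> None:
        """Enable the requested operator's profile and disable the rest."""
        for p in self.profiles:
            p.enabled = (p.operator == operator)

# The user "changes operator" without touching any hardware:
esim = ESim()
esim.download_profile("Project Fi", "8901-0000-0001")
esim.download_profile("Another Carrier", "8944-0000-0002")
esim.switch_to("Another Carrier")
print([(p.operator, p.enabled) for p in esim.profiles])
```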

That is the usability advantage of the eSIM. Secondly, it allows smartphone manufacturers to pursue smaller device designs, since they no longer need to fit a SIM card tray into the device, which will certainly be beneficial for wearable products in the long run.

Google isn't the only one bringing products with eSIM technology: Apple and Samsung are in talks with a number of major network providers globally to adopt it. Samsung has already used an eSIM on the Gear S2 Classic 3G, and Apple has brought the eSIM to the Apple Watch Series 3. The eSIM not only brings a wide range of advantages to device manufacturers but also brings similar advantages to users. In future we are likely to see smartphones become more compact, with a smaller footprint, thanks to the adoption of eSIM technology.

Tuesday, 27 February 2018

Metalens: Breakthrough Seen in Artificial Eye and Muscle Technology

Metalens: the new human eye?

Researchers may have just created a new kind of electronic eye that behaves much like a human eye. This flat, electronically controlled artificial lens, known as a metalens, can autocorrect for blurry vision, giving it a host of uses across industries, be it augmented reality, virtual reality, optical microscopes and so on.

Taking a cue from the human eye and its functioning, researchers have made an adaptive metalens that can control the main causes of blurry vision: astigmatism, image shift and focus. While the human eye cannot correct the first two on its own, it can handle focus; the metalens therefore goes beyond what a normal human eye can do.

What is a Metalens? 


Inspired by the human eye, researchers came up with a flat electronic artificial eye that has come to be known as a metalens. The metalens can autocorrect for blurry images caused by focus errors, image shift and astigmatism.

Combining breakthroughs in artificial muscle technology and metalens technology, researchers have devised an artificial electronic eye, a metalens that can focus on images in real time much like the human eye does. Going even further, the metalens can also correct for astigmatism and image shift, which the normal human eye cannot do on its own.

Applicability of Metalens Technology:

Because of its practicality, the metalens can be used in a host of applications: virtual reality, augmented reality, optical microscopes that work electronically without the need to manually refocus or adjust, mobile phones, cameras and more.

In these fields, a metalens is capable of autocorrecting blurry vision caused by a number of factors simultaneously.

How is a Metalens made?

A metalens focuses light and eliminates spherical aberrations by using a dense layer of nanostructures, each smaller than a wavelength of light.
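
The article does not spell out the optics, but a flat lens of this kind typically focuses light by having each nanostructure impose a position-dependent phase delay so that all rays arrive at the focal point in step. The sketch below evaluates that textbook phase profile for arbitrary example values of wavelength and focal length.

```python
# Textbook phase profile for an ideal flat (meta)lens: a nanostructure at
# radius r must delay the light so that all rays reach the focus in phase.
# Wavelength and focal length are arbitrary example values, not figures
# from the research described above.
import numpy as np

wavelength = 532e-9      # green light, metres (example value)
focal_length = 1e-3      # 1 mm focal length (example value)

def required_phase(r):
    """Phase (radians) a nanostructure at radius r must impose."""
    return (2 * np.pi / wavelength) * (
        focal_length - np.sqrt(r**2 + focal_length**2)
    )

radii = np.linspace(0, 50e-6, 6)     # sample positions across the lens
for r, phi in zip(radii, required_phase(radii)):
    print(f"r = {r*1e6:5.1f} um -> phase = {phi: .1f} rad")
```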

Researchers first developed a metalens the size of a single piece of glitter. To be useful in commercial applications, the metalens had to be scaled up. Because each metalens carries an enormous amount of design information, scaling it up from 100 microns to a centimetre-sized lens increases that information by more than 10,000 times, leading to design files of gigabytes or even terabytes.
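
A quick back-of-envelope calculation shows where that factor of 10,000 comes from: the number of sub-wavelength nanostructures grows with the area of the lens. The starting file size below is an illustrative guess, not a figure from the researchers.

```python
# Why scaling a metalens up makes the design file explode: nanostructure
# count (and hence design data) grows with the *area* of the lens.
small_diameter = 100e-6     # original lens: 100 microns
large_diameter = 1e-2       # target lens: 1 centimetre

area_ratio = (large_diameter / small_diameter) ** 2
print(f"area (and nanostructure count) grows by {area_ratio:,.0f}x")  # 10,000x

# Illustrative only: a 0.5 MB design for the 100 micron lens would become
# several gigabytes at centimetre scale.
small_file_mb = 0.5
print(f"roughly {small_file_mb * area_ratio / 1000:,.1f} GB for the 1 cm design")
```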

To reduce the file size, the researchers adapted an algorithm commonly used in fabricating integrated circuits. Making a metalens for commercial purposes would require the cooperation of two industries: the semiconductor industry that makes computer chips and the lens manufacturing industry.

For now, the researchers have no plans to sell the intellectual property rights to the metalens technology and are exploring avenues to bring the metalens into manufacturing.

Friday, 23 February 2018

NASA Developing 3D Printable Tools to Help Analyse Biological Samples without Sending Them Back to Earth

NASA is developing 3D printable tools so that biological samples can be analysed in orbit instead of being transported back to our planet.

To make it possible for astronauts aboard the International Space Station (ISS) to study biological samples without sending them back to Earth, NASA scientists, together with a scientist of Indian origin, are developing 3D printable tools that can handle liquids such as blood without spilling in microgravity.

The aim is to understand how these factors affect crew health and how to prepare for long-duration missions to Mars and beyond, NASA said on 8 February. The new NASA project, known as Omics in Space, plans to develop the technology to study "omics", fields of microbiology that are important to human health. Omics includes research into genomes, microbiomes and proteomes.

NASA has already studied omics through efforts such as the Microbial Tracking 1 experiment, which examined microbial diversity on the space station. However, there is currently no way to process biological samples on the station, so they have to be sent down to Earth. It can be months between the time samples are collected and the time they are analysed, said Kasthuri Venkateswaran of NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, principal investigator for the Omics in Space project.

Venkateswaran, an alumnus of Annamalai University in Tamil Nadu, said: "This project aims to develop an automated system for studying molecular biology with minimal crew intervention."

The researcher said this would be a major achievement in microgravity. Astronauts collect a range of biological samples, including their own saliva and blood, as well as microbes swabbed from the walls of the ISS. These samples then have to be mixed with water. Without the right tools, samples can drip, float away or form air bubbles that could compromise the results.

Two years ago, NASA took a huge leap by sequencing DNA in space for the first time. Astronauts used a small, hand-held sequencing device called the MinION, developed by Oxford Nanopore Technologies, a company headquartered in Oxford, England.

NASA said the Omics in Space project plans to build on this success by developing an automated DNA/RNA extractor that can prepare biological samples for the MinION device. A critical part of this extractor is a 3D printable plastic cartridge needed to extract nucleic acids from the samples for MinION sequencing.

Camilla Urbaniak, a postdoctoral researcher at JPL and co-investigator on Omics in Space, said the technology has already been tested on Earth. "We're taking what is used on Earth to study DNA and combining all the steps into an automated system," Urbaniak said. "What's new is a one-stop shop that can extract and process all of these samples."

Tuesday, 20 February 2018

The Next Generation of Cameras Might See Behind Walls




Single Pixel Camera/Multi-Sensor Imaging/Quantum Technology

 

Users are very much taken with camera technology, which has given an enhanced look to the images they capture. However, these technological achievements have more in store. Single-pixel cameras, multi-sensor imaging and quantum technologies will bring about great changes in the way we take images.

Camera research has been moving away from increasing the number of megapixels towards merging camera data with computational processing. In this radical new approach, the incoming data may not look like an image at all; it becomes an image only after a sequence of computational steps involving complex mathematics and modelling of how light travels through the scene or the camera.

This extra layer of computational processing removes the constraints of conventional imaging systems, and there may come a point where we no longer need a camera in the conventional sense. Instead, we would use light detectors that, a few years ago, would never have been considered for imaging.

However, they would be capable of incredible results, such as seeing through fog, inside the human body and even behind walls.

Illumination Spots/Patterns

 

The single-pixel camera is one example that relies on a simple source. Ordinary cameras use many pixels (tiny sensor elements) to capture a scene that is typically illuminated by a single light source.

However, one can also do things the other way around, capturing information from many light sources with a single pixel. To achieve this, one needs a controlled light source, such as a simple data projector, which illuminates the scene one spot at a time or with a sequence of different patterns.

For each illumination spot or pattern, one measures the amount of light reflected and then combines all the measurements to reconstruct the final image. The obvious drawback of taking a photo this way is that many illumination spots or patterns must be projected to obtain an image that a regular camera would capture in a single snapshot.
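
The sketch below shows that idea in miniature, using assumptions chosen only for illustration (a 16x16 scene, random binary projector patterns and a plain least-squares reconstruction); real single-pixel cameras use more sophisticated patterns and recovery algorithms.

```python
# Minimal single-pixel imaging sketch (NumPy only). A small scene is
# illuminated with random binary patterns; a single detector records one
# number per pattern, and the image is recovered by least squares.
import numpy as np

rng = np.random.default_rng(0)
h = w = 16
n_pixels = h * w

# "Scene" the single-pixel camera will try to recover (a bright square).
scene = np.zeros((h, w))
scene[4:12, 4:12] = 1.0
x_true = scene.ravel()

# Projector patterns: each row is one random binary illumination pattern.
n_patterns = 2 * n_pixels          # more patterns -> better reconstruction
patterns = rng.integers(0, 2, size=(n_patterns, n_pixels)).astype(float)

# Single-pixel measurements: total reflected light for each pattern.
measurements = patterns @ x_true

# Reconstruct the scene from the measurements.
x_est, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
image = x_est.reshape(h, w)

print("reconstruction error:", np.abs(image - scene).max())
```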

However, this type of imaging makes it possible to build otherwise impossible cameras, for instance ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

Quantum Entanglement 

 

These types of camera could be used to take images through fog or thick snowfall. They could also imitate the eyes of some animals and automatically increase the resolution of an image based on what is depicted. It is even possible to capture images from light particles that have never interacted with the object being photographed.

This takes advantage of the idea of "quantum entanglement", in which two particles can be connected in such a way that whatever happens to one also happens to the other, even when they are far apart.

Single-pixel imaging is considered one of the simplest innovations in future camera technology and still relies on the traditional concept of what forms an image. At present, however, there is a surge of interest in methods that use more of the available information, where conventional techniques gather only a small portion of it.

This is where multi-sensor approaches, with a number of detectors pointed at the same scene, come in. One ground-breaking example is the Hubble telescope, which produced images made by combining several different images taken at various wavelengths.

Photon & Quantum Imaging


Commercial versions of this type of technology can now be bought, such as the Lytro camera, which records information about both light intensity and direction on the same sensor, producing images that can be refocused after they have been taken. The next-generation camera will possibly look like the Light L16 camera, featuring ground-breaking technology based on more than ten different sensors.

Their data are combined by a computer into a 50-megapixel, refocusable and re-zoomable, professional-quality image. The camera itself looks rather like an exciting Picasso interpretation of a crazy cellphone camera. Researchers are also working hard on the problems of seeing through fog, seeing behind walls and imaging deep within the human body and brain. All these techniques rely on combining images with models that explain how light travels through or around different substances.

Another remarkable method that has been gaining ground uses artificial intelligence to "learn" to recognise objects from the data. These methods are inspired by the learning processes of the human brain and are likely to play a major role in future imaging systems.

Single-photon and quantum imaging technologies are maturing to the point where they can take images at extremely low light levels and record video at exceptionally high speed, reaching a trillion frames per second. That is fast enough to capture images of light itself travelling across a scene.
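
A quick sanity check of that claim: at a trillion frames per second, light advances only a fraction of a millimetre between frames, which is why a pulse can be watched crossing a scene.

```python
# Back-of-envelope check: how far does light move between frames
# at a trillion frames per second?
c = 299_792_458.0          # speed of light in m/s
fps = 1e12                 # one trillion frames per second

metres_per_frame = c / fps
print(f"light travels {metres_per_frame * 1000:.2f} mm per frame")   # ~0.30 mm

# Frames needed to watch a pulse cross a 1 m wide scene:
print(f"frames across a 1 m scene: {1.0 / metres_per_frame:.0f}")    # ~3336
```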

Tuesday, 13 February 2018

An Apology after Apple Sends Wrong Ad Spend Data to Developers

Recently, certain developers were sent ad spend data belonging to other apps and developers, which led to some awkward and uncomfortable questions. iOS developers can choose whether to opt in to Apple's Search Ads Basic service, in which developers pay only when a user installs their app. In sending the end-of-month ad review, Apple inadvertently sent various developers' ad spend details to the wrong developers.

On Wednesday, Apple acknowledged the mistake and issued an apology to its developers. It also said that, henceforth, all ad spend data can be obtained by developers logging in to their own accounts, to prevent further mishaps.

What is Apple's Search Ads Basic Service?

For a developer, getting an app known to the world is always a struggle. Apple helps solve the problem to a certain extent.

Search Ads Basic is a service whereby Apple advertises a developer's app in the App Store; in exchange, the developer signs up for the service and pays only when the app is downloaded.

This is cost-effective because the developer pays only when the app is installed, rather than when a user is merely interested and does not actually download the app.

With the Search Ads Basic service, developers get an end-of-month statement in which they can review the performance of their ads. They also get information such as the number of installs, average spend per install, total ad spend on the App Store and more.

The Search Ads Basic service was launched in December 2017 and is targeted at young and emerging developers promoting their apps in the App Store; their apps are listed in the search results. The service not only promotes an individual app but also gives it a chance to be seen by users who might otherwise never know it exists.

An even bigger advantage for developers is that they pay only when the app is installed, as opposed to paying per impression. Besides this, the Search Ads Basic service gives developers intelligent automation features, so results can be maximised with less effort, and a dashboard where they can track performance.
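
To make the billing model concrete, here is a small illustration with entirely made-up numbers (the counts and rates below are not Apple's figures): the developer is billed per install, while impressions are reported but not charged.

```python
# Hypothetical numbers, purely for illustration: how the figures in a
# Search Ads Basic monthly statement relate to cost-per-install billing.
monthly_spend = 180.00    # total billed this month (USD), example value
installs = 120            # installs attributed to the ads, example value
impressions = 40_000      # times the ad was shown (not billed in this model)

avg_spend_per_install = monthly_spend / installs          # $1.50
hypothetical_per_impression_bill = impressions * 0.005    # invented rate

print(f"average spend per install: ${avg_spend_per_install:.2f}")
print(f"same month billed per impression (hypothetical rate): "
      f"${hypothetical_per_impression_bill:.2f}")
```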

Apple's Apology after Sharing Different Developers' Ad Spend Details:

Developers like to keep their ad spend details, such as installs and their expenditure on App Store ads, confidential. So when they received data belonging to other developers, there was a more than probable chance that their own private ad data was being seen by someone else.

Apple realised its mistake and acknowledged it by apologising, saying the problem was caused by a "processing error" and that, henceforth, all data will be available in each developer's own account.