Wednesday, 1 November 2017

Can We Teach Robots Ethics?


Artificial Intelligence – Outperforming Humans

 
Artificial intelligence is beginning to outperform humans in a growing range of fields, and machines from driverless cars to 'carebots' are entering the domain of right and wrong. Would an autonomous vehicle choose the lives of its passengers over those of pedestrians? The challenges of artificial intelligence are not only technical but moral, and they raise questions about what it means to be human. As we move towards a dense Internet of Things, in which connected devices support us in our everyday activities, there will be moments when those devices start to choose for us. We are not used to the idea of machines making ethical decisions, but the day they do so on their own is not far off. David Edmonds of the BBC asks how they could be taught to do the right thing. The driverless car, expected on highways within the next few years, is a useful example because it faces a version of the trolley problem. Imagine two children running across the street, with no time for the car to brake to a halt: it can carry straight on, or it can swerve left and collide with oncoming traffic.
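To make the dilemma concrete, here is a deliberately simplified, purely illustrative sketch of how such a swerve-or-brake choice could be encoded as a cost-based policy. Every number and weight below is a made-up assumption for illustration; it does not describe how any real autonomous-driving system works.

```python
# Toy cost-based policy for the swerve-or-brake dilemma described above.
# All harm estimates and weights are hypothetical illustration values.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    pedestrian_harm: float   # expected harm, 0.0 (none) to 1.0 (certain fatality)
    occupant_harm: float
    third_party_harm: float  # e.g. people in oncoming traffic

def total_cost(o: Outcome, occupant_weight: float) -> float:
    """Weighted sum of expected harms; occupant_weight encodes how much
    the policy values the car's occupants relative to everyone else."""
    return o.pedestrian_harm + occupant_weight * o.occupant_harm + o.third_party_harm

def choose(options: list[Outcome], occupant_weight: float) -> Outcome:
    return min(options, key=lambda o: total_cost(o, occupant_weight))

# The scenario from the article: brake in a straight line, or swerve into oncoming traffic.
options = [
    Outcome("brake_straight", pedestrian_harm=0.9, occupant_harm=0.05, third_party_harm=0.0),
    Outcome("swerve_left",    pedestrian_harm=0.0, occupant_harm=0.5,  third_party_harm=0.2),
]

print(choose(options, occupant_weight=1.0).action)  # values all lives equally -> swerve_left
print(choose(options, occupant_weight=3.0).action)  # strongly favours occupants -> brake_straight
```

The point of the sketch is that the whole ethical question collapses into a single parameter: change the weight given to the occupants and the car's decision flips, which is exactly the choice a manufacturer, regulator or buyer would have to make explicit.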
 

Robots Could Cause Destruction

 
What choice should the car make, and what does that tell us about the kind of ethics that should be programmed into it? How should the life of the driver be weighed against the lives of passengers in other cars? And would buyers even choose a car that is prepared to sacrifice its driver to spare pedestrians? Every option carries consequences, and each raises the question of where the fault would lie: with the technology, the manufacturer, the firm that wrote the software, or the person in the car. Autonomous weapons raise a similar debate, and the most obvious objection is that such robots should not be permitted at all, because they can cause destruction. We are not used to the idea of machines making ethical decisions, yet the day when they do so on their own is close enough that these questions have become urgent.
 

Robots – Care for Elderly/Disabled

 
Self-driving cars have already covered millions of miles on public roads, making autonomous decisions that affect the safety of other road users. Roboticists in Japan, Europe and the United States have developed service robots to care for the elderly and disabled. One robot carer launched in 2015, dubbed 'Robear', is strong enough to lift frail patients out of their beds; a machine that can do that could also crush them. Since 2000 the US Army has deployed thousands of robots fitted with machine guns, each capable of locating targets and training its sights on them without human involvement. Autonomous weapons, like driverless cars, are no longer science fiction: they are weapons that operate without human control.

Friday, 27 October 2017

Material Could Bring Optical Communication Onto Silicon Chips

A newly discovered material could soon bring optical communication to silicon chips

Computing performance has advanced significantly year after year, and viewed over decades the rate of progress is astonishing. Most of that gain has come from squeezing ever more transistors into an ever smaller area on the microchip. Now scientists have developed ultrathin films that can be placed on a semiconductor, making optical communication possible directly on the microchip.
 

The ‘interconnect bottleneck’

 
The downsizing of microchips over the years has led to signal leakage between the different components, which slows the communication between them. This delay, termed the ‘interconnect bottleneck’, has emerged as a major issue in high-speed computing systems.

One of the best ways to eliminate the interconnect bottleneck is to use light, rather than wires, for communication between the different parts of a chip. That is not simple or easy, however, because the silicon used to make chips does not emit light readily.
 

Finding a new material to emit light

 
Researchers have now found a light emitter and detector that can be integrated into silicon CMOS chips to bring optical communication on-chip. The new device is built from molybdenum ditelluride, a semiconductor belonging to an emerging group of materials called two-dimensional transition-metal dichalcogenides.

The best thing about this material is that it can be stacked right on top of silicon wafers, which has not been possible with most other candidates. Being ultra-thin, 2D molybdenum ditelluride can be attached to virtually any material without much hassle. A major difficulty scientists have faced when looking for materials to integrate with silicon is that most of them emit light in the visible range, and silicon is notorious for absorbing light at those wavelengths. Molybdenum ditelluride, by contrast, emits in the infrared, which silicon does not absorb, and that is what enables optical communication on the microchip.
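The reason comes down to photon energy relative to silicon's bandgap of roughly 1.12 eV (a standard textbook value, not a figure from the report): photons carrying less energy than the bandgap pass through silicon rather than being absorbed. A quick sanity check using the relation E ≈ 1240 eV·nm / λ:

```python
# Compare photon energies at a few wavelengths with silicon's bandgap (~1.12 eV).
# Photons with energy below the bandgap pass through silicon instead of being absorbed.
HC_EV_NM = 1239.84          # Planck's constant times the speed of light, in eV*nm
SILICON_BANDGAP_EV = 1.12   # textbook value for silicon at room temperature

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given vacuum wavelength in nm."""
    return HC_EV_NM / wavelength_nm

for label, wavelength_nm in [("green light (visible)", 550),
                             ("MoTe2 emission", 1100),
                             ("telecom O-band", 1300),
                             ("telecom C-band", 1550)]:
    energy = photon_energy_ev(wavelength_nm)
    if energy > SILICON_BANDGAP_EV + 0.1:
        status = "strongly absorbed by silicon"
    elif energy > SILICON_BANDGAP_EV - 0.1:
        status = "at the band edge, where silicon is already nearly transparent"
    else:
        status = "transmitted by silicon"
    print(f"{label:22s} {wavelength_nm:4d} nm -> {energy:.2f} eV: {status}")
```

Visible light carries roughly twice the bandgap energy and is swallowed by the silicon, while the 1.1 to 1.55 micrometre range sits at or below the band edge, which is why an infrared emitter can sit directly on a silicon chip.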

 

Future prospects of this new discovery in optical communication

 
Researchers have now stepped up their efforts to find other materials that could be used for chip-based optical communication in the future. Most telecommunication systems operate with light at wavelengths of 1.3 or 1.5 micrometres. Molybdenum ditelluride emits at about 1.1 micrometres, which is suitable for the silicon chips found in computers but unsuitable for telecommunications systems.

Researchers are therefore looking for yet another material that could bring optical communication to telecommunication systems as well. They are currently exploring black phosphorus, another ultra-thin material, whose emission wavelength can be tuned by changing the number of layers used. The research has been published in the journal Nature Nanotechnology.

Google Photos Now Recognizes Your Pets


Google Photos now automatically recognizes pets and knows their names

Google has added a new feature to its Google Photos app. The app already recognizes the faces of your friends and family members and automatically groups their photos together.

The same capability has now been added for your pets.

If you open the app after the update, Google Photos asks whether you want to give names to the animals it has found in your photos, just as it does with person recognition. Google Photos now recognizes your pet's face, and images in which it recognizes the animal can then be found through search. Until now it was only possible to search directly for people or for animals in general; now you can also search for your pet by name.

With this innovation Google Photos will score points with plenty of pet lovers: the app now also creates dedicated albums for your pets, and the feature is to become available to all users shortly. These extensions make it easier to create albums, slideshows or movies from your pet photos. The assistant built into the Google Photos app also takes the new capabilities into account and prepares video animations of your four-legged friends, provided there is enough material. For some time now Google Photos has also allowed searching specifically for dog or cat breeds, even by entering emoji.

Automatic tagging with terms such as "dog" or "cat" is old hat and is handled both by Google's photo app and by Apple Photos. Face recognition for pets, however, is a novelty. Google Photos now offers exactly that, sorting the dogs and cats you have photographed alongside recognized human faces in the expanded "People & Pets" section, where the new albums can also be named.

An animal update for the "Google Photos" app: from now on, not only people but also the four-legged members of the household are displayed with their own label in the gallery overview. This makes searching for pictures of your furry favorites considerably simpler.

What did Mr. Schnuffmann look like when he was still small and tender in his basket, before he became that grown-up Giant Schnauzer? Where are the pictures showing Kitty Cat after her walk around the neighborhood? Until now, anyone searching "Google Photos" for pictures of their furry favorites had to type "dog" or "cat" into the search field or comb through the gallery by hand.

With the new update in the latest app version 3.7.0, managing animal photos becomes much easier, a post on the Google blog reveals. Alongside the galleries of your friends, which can also be labelled with the names of the people shown, there are now galleries for your four-legged, eight-legged or however-many-legged companions, each kept separate under its own label. These albums can also be named.

The software also recognizes the breed of the animals. If you are looking specifically for a poodle, you can type that into the Google Photos search field and all poodle pictures are displayed straight away. Also new is the "People & Pets" label, which lists pictures of the owners and their favorites together.
The recently introduced search by dog or cat emoji is still possible, as is the creation of a video by the Google Assistant. Google shows how this can look in a short sample clip.

Thursday, 26 October 2017

Novel Circuit Design Boosts Wearable Thermoelectric Generators

Wearable thermoelectric generators make continuous monitoring of vital data possible for athletes and patients. The difficulty is supplying such devices with power permanently. A wearable thermoelectric generator delivering 40 mW of continuous power, worn along with regular clothes, solves the problem.

Supported by the Air Force Office of Scientific Research (AFOSR) and by PepsiCo, Inc., this research has also paved the way to a better understanding of the electronic and optical properties of polymer-based materials. A team of researchers from the Georgia Institute of Technology, under the leadership of Professor Shannon Yee, has developed a wearable thermoelectric generator that is both light and flexible and uses the body's heat to generate electrical energy. Such generators have been built from both organic and inorganic materials before, but the polymer variants achieve a significantly lower output power, while the inorganic variants perform satisfactorily yet are rigid, comparatively heavy and therefore not wearable in practice.

The Georgia Tech team around Professor Shannon Yee has now developed a fabrication method in which p-type and n-type materials are each prepared as a paste and printed onto a fabric. The pastes penetrate the mesh of the fabric and form a thermoelectric layer about one hundred micrometres thick. As a result, several hundred thermoelectrically active junctions of p- and n-conducting material are formed on a given area of the fabric.

The structure of this wearable thermoelectric generator is stable, and it does not require the additional ceramic substrates that absorb a large portion of the available thermal energy. Here the fabric itself serves as the upper and lower substrate of the generator, with the inorganic thermoelectrically active materials introduced in between, so the generator also remains flexible. Its weight in particular could be reduced substantially compared with other systems, to about 0.13 g/cm². A 10 x 10 cm² generator designed to power a "smart fabric" produces an output of 40 mW from the temperature difference between the wearer's skin and the environment.
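To put those figures in perspective, here is the simple arithmetic implied by the numbers quoted above; nothing beyond the article's own figures is assumed.

```python
# Back-of-the-envelope figures derived from the numbers quoted in the article.
area_cm2 = 10 * 10           # 10 x 10 cm generator
power_mw = 40                # continuous output in milliwatts
areal_mass_g_per_cm2 = 0.13  # quoted areal weight

power_density = power_mw / area_cm2           # mW per cm^2
total_mass = areal_mass_g_per_cm2 * area_cm2  # grams

print(f"Power density: {power_density:.1f} mW/cm^2")  # 0.4 mW/cm^2
print(f"Total mass:    {total_mass:.0f} g")           # about 13 g
```

In other words, the whole 10 x 10 cm patch weighs roughly 13 g and harvests about 0.4 mW per square centimetre of fabric, which is the scale of power the article describes as sufficient for a "smart fabric".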

Wednesday, 25 October 2017

Pay with Google and Speed Through Checkout


Now shop easily on Android by using ‘Pay with Google’

If you love shopping, whether online or offline, dealing with cash will soon be a thing of the past as more and more mobile payment options become available. Mobile payment systems saw a surge in the last couple of years with the launch of Apple Pay, Android Pay and Samsung Pay, which brought the cashless future closer to reality, enabled by the arrival of near-field communication in modern smartphones. Google has now announced its own way of paying on mobile devices with the ‘Pay with Google’ option, which will allow users to pay with a saved card for any product or service through applications such as Google Play, Chrome, Android Pay or YouTube.
 

Now pay easily on your Android devices

 
The ‘Pay with Google’ option brings together all of a user's saved payment options in a single interface, and app makers and retailers can support it by implementing only a few lines of code. It is powered by the Google Payment API, which made its debut at the I/O developer conference in May.

The aim behind ‘Pay with Google’ is to make checkout faster for users, which in turn should increase conversions for retailers. This is made possible by tapping into any payment card the customer has on file with Google, rather than only the cards saved specifically in Android Pay. The feature will also make it simpler for users to shop on their Android devices through Google's powerful digital assistant, the Google Assistant.
 

Google Wallet was Google’s first attempt

 
This is not the first time Google has made extensive use of NFC-based systems. Google Wallet, released back in September 2011, let users make purchases online or in stores and send money to family and friends. During its initial phase the service was hugely popular and was touted to leave Apple Pay far behind in adoption and active usage, but sadly that never happened.
 

A new era of easy mobile transactions

 
At the launch, Google officials stated that with the implementation of ‘Pay with Google’, facilitating payments will become a key capability of the user's Google account. The option will enable users all across the globe to pay with their Google account regardless of device, platform or interface. At checkout, the ‘Pay with Google’ feature presents users with a list of payment cards already saved in their Google account; they simply tap the card they wish to use, and Google sends the payment information to the merchant along with the shipping address so the transaction can be completed instantly. More than 40 different payment providers have already partnered with Google to make the integration a simple task for merchants who wish to use the new feature.
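For readers curious how the checkout sequence described above fits together, the following is a purely hypothetical sketch of that flow. The class and method names are invented for illustration; this is not Google's actual Payment API, only the order of steps the article describes.

```python
# Hypothetical sketch of the 'Pay with Google' checkout flow described above.
# Names are invented for illustration; this is not Google's real Payment API.
from dataclasses import dataclass

@dataclass
class SavedCard:
    label: str   # e.g. "Visa ****1234" as shown to the user
    token: str   # opaque reference held by the account, never the raw card number

class GoogleAccountStub:
    """Stands in for the user's Google account holding saved payment cards."""
    def __init__(self, cards, shipping_address):
        self.cards = cards
        self.shipping_address = shipping_address

    def list_saved_cards(self):
        return self.cards

    def authorize(self, card: SavedCard):
        # The account forwards a payment credential plus the shipping address
        # to the merchant so the transaction can complete immediately.
        return {"payment_token": card.token, "shipping_address": self.shipping_address}

def checkout(account: GoogleAccountStub, choice_index: int):
    cards = account.list_saved_cards()   # 1. show the saved cards at checkout
    selected = cards[choice_index]       # 2. the user taps the card they want
    return account.authorize(selected)   # 3. merchant receives credential and address

account = GoogleAccountStub(
    cards=[SavedCard("Visa ****1234", "tok_visa_demo"),
           SavedCard("Mastercard ****5678", "tok_mc_demo")],
    shipping_address="123 Example Street, Springfield")
print(checkout(account, choice_index=0))
```

In the sketch the merchant only ever receives an opaque token rather than the raw card number, which mirrors how mobile payment systems generally protect card details.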