Thursday, 1 June 2017

Twitter Adds 69 New Emoji - But There’s a Catch

Twitter has added 69 unique new emoji, including the much-anticipated flags for Wales, England and Scotland. The Emoji 5.0 line-up includes a star-struck face, a woman with headscarf, a swearing face, dinosaurs, a face with raised eyebrow and a wider range of skin tones. Until now, Twitter had kept a rather meagre and simple set of emoji, unlike other platforms that offer many emoji which are rarely used. In this update there are 239 new emoji in total, 69 of them unique designs, for users to choose from when expressing themselves on Twitter. Twitter is the first platform to support Emoji 5.0. There is, however, a catch.

The Catch

Most of the new emoji depend on characters that are part of Unicode 10.0, as reported by Emojipedia. This means that, for now, they will not display on iOS, macOS, Windows or Android devices: the new icons are only viewable on the official Twitter website and apps, and for the time being chiefly when using Twitter on the desktop. For example, the new flags will automatically be shown as a plain black flag on platforms, such as third-party apps and clients, that do not yet support them. So if you have posted something using the new emoji and it does not show up as it should in an app without Emoji 5.0 support, you now know why. This, however, is expected to be a short-term problem, and a solution is already being worked on.
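
For the technically curious, here is a minimal Python sketch (our own illustration, not Twitter's code) of why the fallback looks the way it does. The new subdivision flags are "tag sequences": a black waving flag followed by invisible tag characters spelling a region code.

```python
# Minimal sketch (not Twitter's code): the England flag is the black
# waving flag (U+1F3F4) plus invisible tag characters spelling "gbeng".
ENGLAND = ("\U0001F3F4"                        # black waving flag
           "\U000E0067\U000E0062"              # tag letters g, b
           "\U000E0065\U000E006E\U000E0067"    # tag letters e, n, g
           "\U000E007F")                       # cancel tag (terminator)

for ch in ENGLAND:
    print(f"U+{ord(ch):05X}")

# A platform with Emoji 5.0 support renders the whole sequence as one
# England flag. An older platform ignores the invisible tag characters
# and shows only the first code point, the plain black flag, which is
# exactly the fallback behaviour described above.
```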

The Solution

The new Twitter emoji are set to reach other platforms before the year ends, and an update for TweetDeck can be expected very soon. Apple will most likely include the required support in the upcoming iOS 11 software update and is expected to announce it at its forthcoming Worldwide Developers Conference. Google, for its part, has announced plans to support the new emoji in the Android O release, which is expected soon; users with access to the Android O developer preview can view the emoji already.

The other newly introduced emoji include an orange heart, a vomiting face, an exploding head, a hedgehog, a T. rex, a breastfeeding woman, a flying saucer and a brain. A few food-based icons have also been included, such as a pretzel, a fortune cookie, a pie and broccoli.

The Unicode Consortium is the organisation chiefly responsible for ensuring that characters, including emoji, are encoded consistently so they can be exchanged across platforms, and it makes the final decision about which new emoji are released. Unicode has published the full list of 51 newly approved emoji set to be introduced in the Unicode 10 release, and earlier this year Emojipedia released a preview of the year's line-up with a complete list of the emoji to come.

Prepare Yourself for the Sweet Luxury of Riding in a Robocar

The idea of a self-driving vehicle has existed for decades, but carmakers are only now closing in on that vision. Data-service providers such as Google are working on the robocar too. Yet the road to the robocar is still a long one.

In advertising, cars always have a free run of the road. What the ads do not show: stop-and-go traffic and sheer boredom behind the wheel. Who would not want a chauffeur for the daily commute? In fact, drivers will soon be able to take their hands off the wheel and their feet off the pedals. This year the BMW i3 will be the first car on the road with a traffic-jam assistant able to take over the driving tasks at up to 40 km/h. In the next generation of the Audi A8, an autopilot is to accelerate, brake and steer at up to 60 km/h.

Audi unveiled its autopilot-driven robocar at the 2013 Consumer Electronics Show (CES) in Las Vegas. Parking, too, will save time and effort in the future: simply activate the park pilot and get out. Guided over WLAN by the car park's central computer, the driverless robocar finds its own way to the next free parking space.

The predecessors of the 1950s

"If we like it or not, slowly, but surely, the robots take over the job of the car driver," popular science cited as early as 1958. At that time, the first cruise control was introduced in the Chrysler Imperial. The system was able to keep the speed constant, but by no means recognize the traffic situation.

It was only after the turn of the millennium that robotic vehicles carrying more and more sensors were launched into the world: Darpa, the research agency of the US military, organised several races for driverless cars. In 2005 the victorious VW Touareg "Stanley" found its way independently through the Mojave desert. However, Darpa had cleared a comfort zone for the electronic brains through the deserted terrain: low earth banks serving as road edges helped the radar, video and laser sensors orient themselves. In addition, the route was defined by nearly 3,000 GPS points; roughly every 70 metres, a satellite-based waypoint kept the scout on course. By the standards of intelligent systems, these rolling robots had not yet earned the label "autonomous".

Measurement by laser scanner

"We have learned how to measure the edge of the road using a laser scanner in preparation for the Grand Challenge," explains Bjorn Giesler, who is responsible for driving the Robo car at Audi. In the meantime, cars nearer to the series are also available without drivers in the US state of Nevada. Audi was the first Robo car manufacturer to have the license to test the autopilot in regular road traffic. "How close we are to the show is to see the size of our laser scanner," said Ricky Hudi at CES. The Head of Development Electrical / Electronic at Audi presented a hand-held sensor that fits easily into any vehicle front.

This is a huge difference from the research vehicles of Google and Lexus on the road in California: their powerful laser scanners cost as much as a fully equipped luxury limousine.

All carmakers are working on automated driving

The driver thinks, the car steers: however futuristic this form of mobility may sound, all leading car manufacturers are working at high pressure on the gradual introduction of automated driving. "The system only relieves the driver on long, tiring motorway stretches or in traffic-jam situations. The sensor technology cannot yet cope with complex urban traffic, or even with critical situations on the motorway," says BMW developer Dirk Wisselmann, damping all-too-euphoric expectations. There are also still high legal hurdles for fully automated driving: every driver shall at all times be able to control his vehicle and must exercise due care, demands the 1968 Vienna Convention.

This also applies to the cars from Audi, Google, Lexus, Bosch, Continental and others that have been driving through California and Nevada for months. Each test robocar carries at least one occupant who constantly monitors the operation of all systems. The human therefore remains responsible and must be able to take over the driving task at any time. A camera accordingly makes sure that the pilot behind the wheel is neither asleep nor incapacitated. If, for example, a traffic jam clears and traffic flows again at more than 60 km/h, the driver of the robocar is asked to take over. In addition, the function is only available on the motorway, where there is no urban tangle of traffic and unpredictable pedestrians.
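
To make the handover behaviour concrete, here is a hypothetical sketch of such takeover logic; the 60 km/h threshold and the motorway and attention checks mirror the description above, but the function and state names are our own illustrative assumptions, not any manufacturer's code.

```python
# Hypothetical takeover logic for a traffic-jam pilot (illustrative only).
TRAFFIC_JAM_LIMIT_KMH = 60  # above this speed the human must drive

def pilot_step(speed_kmh: float, on_motorway: bool, driver_attentive: bool) -> str:
    """Decide who drives for the next control cycle."""
    if not on_motorway:
        return "manual"            # the function is only offered on the motorway
    if not driver_attentive:
        return "safe_stop"         # camera says the pilot is asleep or incapacitated
    if speed_kmh > TRAFFIC_JAM_LIMIT_KMH:
        return "takeover_request"  # jam has cleared: hand control back to the human
    return "automated"             # system accelerates, brakes and steers

# Example: the jam clears and traffic flows at 75 km/h again
print(pilot_step(speed_kmh=75, on_motorway=True, driver_attentive=True))
# -> takeover_request
```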

Traffic is moving into the cloud

Always online on the road: in the future, traffic will be mirrored in real time in a data cloud. The Volkswagen Group clearly does not want to leave such data services to internet companies like Google. "We will use the cloud as an external sensor for driving," promises Ricky Hudi.

Only the swarm knowledge of networked vehicles can capture the traffic situation precisely enough for the driver to turn to other activities without risk. "I assume that in ten years most vehicles will have a communication unit as a link to the cloud," says Hudi, "because with the Volkswagen Group's large fleet of vehicles we have a huge advantage."

Wednesday, 31 May 2017

This Artist Has a Classroom of Robots That Chat, Count and Draw Portraits

Twenty robot students are busy working hard in a uniquely designed classroom near Southwark station in London. To talk to each other, they use a language inspired by Morse code. While they chatter, their robot teacher asks them to settle down and begins to take the register. Once every robot's presence has been recorded, the day's class begins, and the robots devotedly learn to count by tally, drawing lines in their notebooks.

The artist Patrick Tresset included this robot classroom in his latest exhibition, Machine Studies. Each of his robots comprises a camera and a pen held by a robotic arm, controlled by a laptop concealed in a traditional school desk that serves as the robot's body. Inspired by Tresset's own schooldays in France, the robot class completes a whole range of activities in Human Study #4.

Robots Displaying Human Traits and Performing Human Functions

The robot students' actions are synchronised, but each robot has its own movements. Tresset programmed the robots to display various behavioural traits, such as uneasiness or timidity. Some appear to take an active part in the task allotted to them, while others work a little more slowly and nervously than the rest. Tresset says his study is more about observing human nature than about technology, and is focused on how robots can be made more human.

In his other work, Human Study #1 3RNP, three robots wait with pens, ready to draw portraits of the humans sitting in front of them. Over a span of 30 minutes, the cameras, or "heads", are raised to view the subject, and the robots sketch frantically, stopping every once in a while to look at their composition. Tresset has programmed each robot to roughly imitate his own style of drawing, while leaving some room for the robot's own; he therefore cannot foresee what the final portraits will look like.

Robots Being Involved In Artistic Exhibitions

The exhibition is part of the MERGE Festival held in London's Bankside district. Donald Hyslop, head of community partnerships at the Tate Modern and chair of Better Bankside, says the whole point of the festival is not to limit art to museums but to extend it into new contexts within a community. One does not need to visit Berlin or Lisbon to experience interesting industrial spaces, he states; they can be found in Bankside, a part of London with many hidden spaces. Tresset's work is on display at Platform Southwark.

Angie Dixon, project and production manager at Illuminate Productions, which curates the festival, says visitors are always keen to have their portraits drawn by Tresset's robots. She had her own portrait drawn in 2012 by an earlier version of the robots; at the time they could not differentiate between dark and light skin, so her portrait looked like scratches on paper.

Nevertheless, she says she was not disappointed, and it was an interesting experience. Tresset states that robots cannot be counted as a threat to human artists for now. His robots sign their creations, yet he counts himself as the author. He is currently working with machine learning and says he eventually wants his robots to improvise and create a style of their own.

Tuesday, 30 May 2017

Researchers Engineer Shape-Shifting Noodles

Even your grannie could play with the noodles and spaghetti that MIT researchers have invented. These noodles offer a lot more fun than ordinary ones can. What is so fun about these edible films? The MIT team has made the dining experience more interactive and a lot of fun: just add water and watch them transform their shapes.

MIT's Tangible Media Group has concocted something akin to edible origami: flat sheets of starch and gelatin. Immersed in water, they immediately spring into 3D formations, including regular pasta shapes such as macaroni and rotini. These edible three-dimensional structures can also be skilfully engineered to curl into the form of a flower and other irregular designs. To play with the culinary possibilities, the MIT researchers made flat discs that wrap around caviar, akin to cannoli, and spaghetti that spontaneously divides into smaller noodles when soaked in hot broth. They presented their work at the Association for Computing Machinery's 2017 Conference on Human Factors in Computing Systems (CHI).

The MIT team describes these edible 3D formations not only as culinary performance art but as a practical way to cut food-shipping costs.

The edible films could be stacked together and shipped to customers, then take on their final shape later, when plunged into water. Even if you pack conventional pasta perfectly, 67 percent of the package will remain empty, says the paper's co-author.
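
A back-of-the-envelope calculation, with tube dimensions we have assumed purely for illustration, shows how a packed box of macaroni ends up mostly air, in the same ballpark as the 67 percent quoted above:

```python
import math

# Rough estimate of the air fraction in a packed box of macaroni.
# The tube dimensions are illustrative assumptions, not from the paper.
outer_r = 5.0  # mm, outer radius of the tube
inner_r = 4.0  # mm, inner radius (the hole)

# In a square packing, each tube claims a (2r x 2r) cell of the box's
# cross-section; only the pasta wall itself is solid.
solid_area = math.pi * (outer_r**2 - inner_r**2)
cell_area = (2 * outer_r) ** 2
air_fraction = 1 - solid_area / cell_area
print(f"air fraction ~ {air_fraction:.0%}")  # ~72% with these numbers
```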

Programmable pasta

MIT researchers Wang and Yao had been studying how various materials respond to moisture. They worked mainly with a particular bacterium that can change its form, shrinking and expanding in response to humidity. Coincidentally, that bacterium is used to ferment soybeans for natto, a common Japanese dish. They then turned to gelatin, which naturally expands as it absorbs water.

Gelatin can expand to different degrees depending on its density. The team then worked out how to control the bending of the pasta so as to create a variety of 3D shape-changing gelatin sheets. The sheets were covered with cellulose strips that control how much water the gelatin underneath can absorb; the cellulose acts as a water barrier. By printing cellulose onto the gelatin sheets in chosen patterns, the team can predictably control the sheet's response to water and the final shape it takes.
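
As a toy model of that mechanism (entirely our own illustration, not the group's design software), think of each region of the sheet as a bilayer: the uncoated gelatin face swells more in water, so the region curls toward the cellulose-coated, water-blocked side:

```python
# Toy bilayer model of cellulose-on-gelatin bending (illustrative only).
# Each segment of a sheet is marked with the face carrying a cellulose
# strip: "top", "bottom", or None. The uncoated face swells more, so the
# segment bends toward the coated side.

def fold_direction(cellulose_face):
    if cellulose_face == "top":
        return "curls up, toward the coated top face"
    if cellulose_face == "bottom":
        return "curls down, toward the coated bottom face"
    return "stays flat, swelling uniformly"

sheet = ["top", "top", None, "bottom"]  # a one-dimensional strip of segments
for i, face in enumerate(sheet):
    print(f"segment {i}: {fold_direction(face)}")
```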

Designing for a noodle democracy

Wang and Yao produced a variety of structures from the gelatin sheets, from macaroni-like designs to shapes resembling flowers and horse saddles. The team showed the newly invented edibles to the head chef of a high-class Boston restaurant, and together they designed some culinary creations.

They catalogued the cellulose patterns and the properties of the structures they were able to make, tested characteristics such as strength, and compiled it all into a database. The team used a lab 3D printer to deposit the cellulose onto the gelatin films, but they have also outlined ways to produce similar effects with more common methods, such as screen printing. Eventually, an online tool could generate design instructions, and a startup could ship the materials to you. They want to change the way noodles are designed.

Next Generation Mapping Drone

New mapping drone records areas ten times faster than before

Drones, also known as unmanned aerial vehicles (UAVs), are remote-controlled aircraft that can be either autonomous, machine-controlled flying craft or human-controlled aircraft.

The areas of application for these technology-packed machines are diverse: as pure hobby objects for flying, for filming and photography, or for professional use. They can also serve as military or combat drones.

Civilian uses include aerial photography and video, technical inspections of, for example, high-voltage masts and buildings, various surveying tasks in forestry, exploration and wildlife-protection research, and work for the police and fire brigade. The range of applications for these drones is thus growing day by day, and a great deal of new research is being conducted to enhance these unmanned vehicles technologically. Recently, a technical team developed a mapping drone that records the ground ten times faster.

The Drone named “Marlyn”

The mapping drone developed by the start-up Atmos UAV, a spin-off from the Technical University of Delft in the Netherlands, flies at up to 60 kilometres per hour. This makes it much faster than almost all conventional aircraft of this type. The reason: "Marlyn" has wings like an airplane but can still take off vertically. Its rotors can be tilted through 90 degrees, so in horizontal flight they can use all their power to move Marlyn forward, whereas the rotors of normal drones must also provide lift.

One square kilometer in 30 minutes

The engineers at Atmos developed Marlyn specifically for taking pictures of the earth's surface. The hybrid mapping drone covers a given area ten times faster than a conventional aircraft. The first user is the Dutch company Skeye, which sees itself as a leader in the use of drones for industrial and agricultural purposes. Marlyn maps an area of one square kilometre in just 30 minutes. The resolution in this flight mode is three centimetres; the maximum is 1.5 centimetres, which means that one pixel of the recorded image covers 1.5 centimetres on the ground. In other words, a ten-cent coin can be easily identified.
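
The quoted figures are easy to sanity-check with a little arithmetic of our own, based only on the numbers above:

```python
# Sanity-checking the quoted Marlyn figures (our arithmetic only).
area_km2 = 1.0
minutes = 30
print(f"coverage: {area_km2 / (minutes / 60):.1f} km^2 per hour")  # 2.0

for gsd_cm in (3.0, 1.5):  # ground sampling distance per flight mode
    px_per_m = 100 / gsd_cm  # pixels along one metre of ground
    print(f"GSD {gsd_cm} cm -> {px_per_m:.0f} px per metre, "
          f"{px_per_m**2:.0f} px per square metre")
```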

Rain does not matter to Marlyn

To take off and land, this mapping drone needs an area of two by two metres. It can fly in almost any weather; rain does not bother it. Only in strong winds must it remain on the ground. "With the special features of this mapping drone, we can manage even the most demanding tasks," says Pieter Franken, one of the founders of Skeye.

Marlyn carries a high-resolution camera and does its job autonomously, but only after careful preparation. In the first step, a flight plan is drawn up that takes into account the required accuracy of the images and the terrain to be mapped. Then the mapping drone and the data recorder on board are started at the push of a button. When the drone has landed after the work is done, the stored image data are read out and processed with any evaluation software.
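
How such a flight plan trades image accuracy against altitude can be sketched with the standard photogrammetry relation GSD = altitude x pixel pitch / focal length; the camera parameters below are assumptions for illustration, not Marlyn's actual specifications:

```python
# Standard photogrammetry relation:
#   GSD = flight_altitude * sensor_pixel_pitch / focal_length
# The camera parameters here are illustrative assumptions.

def altitude_for_gsd(gsd_m: float, pixel_pitch_m: float, focal_m: float) -> float:
    """Flight altitude needed to achieve a target ground sampling distance."""
    return gsd_m * focal_m / pixel_pitch_m

pixel_pitch = 4.4e-6  # m, assumed sensor pixel size
focal = 0.016         # m, assumed 16 mm lens

for gsd_cm in (3.0, 1.5):
    alt = altitude_for_gsd(gsd_cm / 100, pixel_pitch, focal)
    print(f"target GSD {gsd_cm} cm -> fly at about {alt:.0f} m")
```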