CES Roundups; Segway; What Is Needed For A True Robot Revolution?; Autonomous Rescue Drone; How A.I. Will Transform Robotics; Robot Cops; Why We Need A Legal Definition of A.I.; Dancing Drones... and much more. Find out what's happening in the robotics universe this week. We select news that we hope will interest or amuse you. Enjoy!
CES came to a close last week, and many exciting robots were showcased there for the very first time (including a remarkable passenger-carrying drone from China).
But it was the "Segway robot" that stole the show...
A transport device and robot assistant rolled into one, the rideable robot, the result of a collaboration between Segway, Intel, and Xiaomi, features voice control, a livestreaming camera, facial recognition, and a follow-me mode:
It rolled onto the stage with an adorable expression that could rival that of a newborn infant, capitalizing on Intel’s RealSense RGB-D camera, which imbues the self-balancing device with a greater sense of spatial depth when tracking and mapping. Intel’s Atom processor makes it all possible, as does the hardware’s GPU acceleration and embedded vision algorithms.
We're not going to rehash all the CES 2016 roundups here, but for those who missed out, here is some "robots of CES 2016" material to feast on:
7 Cutest Robots At CES (IGN)
Robots ready to run our lives…starting in the kitchen (Euronews)
MegaBot's Giant Robot Battle At CES 2016 Was A Bust (TechTimes)
Top Ten Robotics Startups At CES 2016 (Robotics Trends)
The Best Robot 'Butlers' At CES (Fortune)
James Kuffner, who left Alphabet recently after more than six years on its robotics team to take up a new position in cloud computing at Toyota's artificial intelligence and robotics research facility in Palo Alto, shared some interesting thoughts at a recent talk:
In many ways, Kuffner’s presentation confirmed [...] that the high cost of integrating robots into particular tasks will come down as machine learning lets them understand their environment and adapt.
To make the point, Kuffner showed a video of a robot, from a research project he had run at Carnegie Mellon, in which the robot can walk and is tasked with finding its way across a floor, around obstacles, to a colored spot. The video shows the robot moving step by step, figuring out the paths it has to take. The tests became more and more complex, with researchers rearranging the colored forms representing obstacles on the floor, and the robot having to adapt by recomputing its path.
Kuffner made the point that, with the advances of Moore’s Law, the robot can now evaluate 25,000 footsteps every 600 milliseconds or less, where in prior years it needed 1,000 milliseconds to contemplate a single step. That works out to roughly 40,000 candidate footsteps per second versus about one, an advance he deemed “incredible.”
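To give a feel for the kind of replanning Kuffner describes, here is a minimal Python sketch of a grid planner that recomputes its route when the obstacles are rearranged. It uses a plain A* search; all names are ours, and this is an illustration, not the CMU project's actual code.

    import heapq

    def astar(grid, start, goal):
        """Plain A* over a 2D grid; grid[r][c] == 1 marks an obstacle."""
        rows, cols = len(grid), len(grid[0])
        frontier = [(0, start)]
        came_from, cost = {start: None}, {start: 0}
        while frontier:
            _, node = heapq.heappop(frontier)
            if node == goal:                      # rebuild the path found
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            r, c = node
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                    new_cost = cost[node] + 1
                    if nxt not in cost or new_cost < cost[nxt]:
                        cost[nxt] = new_cost
                        # Manhattan-distance heuristic keeps A* admissible
                        heapq.heappush(frontier, (new_cost + abs(goal[0] - nr)
                                                  + abs(goal[1] - nc), nxt))
                        came_from[nxt] = node

    grid = [[0] * 6 for _ in range(6)]
    grid[2][1:5] = [1, 1, 1, 1]                   # a wall of obstacles
    print(astar(grid, (0, 0), (5, 5)))            # path around the wall
    grid[2][1:5] = [0, 0, 0, 0]                   # rearrange the floor...
    grid[3][0:4] = [1, 1, 1, 1]
    print(astar(grid, (0, 0), (5, 5)))            # ...and simply replan

The speedups Kuffner cites matter precisely because this kind of search has to be rerun from scratch every time the world changes under the robot's feet.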
Meet the "AirMule": a hovercar-like drone designed by Tactical Robotics to operate as an autonomous air ambulance in war zones.
Designed to carry almost 1,000 pounds, the AirMule has a projected top speed of 60 mph and, unladen, a maximum range of 430 miles. Built around ducted fans, it takes off and lands vertically.
Knightscope's K5 security robot has hit the mean streets of Silicon Valley:
The Mountain View-based developer built the K5 (not to be confused with Doctor Who’s K9) to be cute and inviting to the public. “We’ve had people go up and hug it, and embrace it for whatever reason,” said Stacy Stephens, co-founder of the borderline cartoonishly named Knightscope.
The autonomous patrol units weigh in at more than 300 pounds and stand about five feet tall. Their control and movements are based on the same technology that powers Google's new self-driving cars. The K5 gathers important real-time, on-site data through its numerous sensors, which is then processed through a predictive analytics engine. There it is combined with existing business, government, and crowdsourced social data sets to determine whether there is a concern or threat in the area. If so, an issue is created with an appropriate alert level, and a notification is sent to the community and authorities through the Knightscope Security Operations Center (KSOC), a browser-based user interface.
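Knightscope hasn't published the internals of that pipeline, but the description (on-site sensor anomalies scored by an analytics engine, blended with external data, then escalated by alert level) suggests something roughly like this sketch. Every name, weight, and threshold below is our own invention, not Knightscope's.

    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor: str      # e.g. "thermal", "audio", "lidar"
        anomaly: float   # 0.0 (normal) .. 1.0 (highly unusual)

    def threat_score(readings, area_risk):
        """Blend on-site anomalies with external (crowdsourced) risk data."""
        on_site = max((r.anomaly for r in readings), default=0.0)
        return 0.7 * on_site + 0.3 * area_risk    # weights are made up

    def alert_level(score):
        if score > 0.8:
            return "notify-authorities"
        if score > 0.5:
            return "notify-community"
        return "log-only"

    readings = [Reading("audio", 0.9), Reading("thermal", 0.2)]
    print(alert_level(threat_score(readings, area_risk=0.4)))  # notify-community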
[...] it’s easy to imagine that humor will be one of the last bastions that separate humans from machines. Computers, the thinking goes, cannot possibly develop a sense of humor until they can grasp the subtleties of our rich social and cultural settings. And even the most powerful AI machines are surely a long way from that.
That thinking may soon have to change. Today, Arjun Chandrasekaran from Virginia Tech and pals say they’ve trained a machine-learning algorithm to recognize humorous scenes and even to create them. They say their machine can accurately predict when a scene is funny and when it is not, even though it knows nothing of the social context of what it is seeing.
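The details are in the paper, but as a rough intuition (our assumption, not necessarily the authors' method): the scenes in question are abstract clip-art compositions, so even a simple classifier over object-presence features can begin to predict funniness without any social context. A toy sketch:

    from sklearn.linear_model import LogisticRegression

    # each row: [dog_present, dog_on_sofa, man_in_suit, pie_in_face]
    # features and labels are invented here purely for illustration
    scenes = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 0, 1, 0]]
    funny = [1, 1, 0, 0]                    # human-labeled "is this funny?"

    clf = LogisticRegression().fit(scenes, funny)
    print(clf.predict([[1, 1, 1, 0]]))      # guess for an unseen scene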
A fascinating interview with Junji Tsuda, Chairman and President of the industrial robot maker Yaskawa, about the future of robotics:
It is troubling because now everything is called a “robot.” If it doesn't actually do labor, it isn't a robot. Pepper is just a computer interface. It moves around playfully, but it doesn't do work that is helpful to people. You can't change society with something that is only fun to look at.
A robot revolution will only occur when society is changed. You hear about the elderly who have interacted with robots feeling alive, but [communication] is something that humans should do. Rather, I think robots should be doing physical labor. Things are going in the wrong direction.
Researchers are using a technology likened to "mini force fields" to independently control individual microrobots operating within groups, an advance aimed at using the tiny machines in areas including manufacturing and medicine.
Until now, it was only possible to control groups of microbots to move generally in unison, said David Cappelleri, an assistant professor of mechanical engineering at Purdue University.
"The reason we want independent movement of each robot is so they can do cooperative manipulation tasks," he said. "Think of ants. They can independently move, yet all work together to perform tasks such as lifting and moving things. We want to be able to control them individually so we can have some robots here doing one thing, and some robots there doing something else at the same time."
When we talk about artificial intelligence (AI), which we have done a lot recently, what do we actually mean? AI experts and philosophers are beavering away at the issue. But having a usable definition of AI, and soon, is vital for regulation and governance, because laws and policies simply will not operate without one.
Robot Barista Makes Custom Coffee (CNET)
Ray Kurzweil On Giving Future AI The Right To Vote (via Lifeboat Foundation)
Art teacher stuns students with giant homemade robot that stands at almost 9ft tall (Daily Mirror)
Watch 100 Drones Dance Their Way to a World Record (Smithsonian)
Factory Automation Will Speed Forward with A.I., Says Bernstein (Barrons)