What's New in Robotics this Week - Oct 02
Posted on Oct 02, 2015 10:30 AM. 5 min read time
Real robot colleagues, an examination of crash-optimization algorithms, an employee suspended for answering calls like a robot, and more in this week’s picks.
Are You Ready for a Robot Colleague? (MIT Technology Review)
Fascinating article that explores some of the challenges surrounding human-robot interaction (HRI) in the workplace.
As Melonee Wise, founder of Fetch Robotics, a firm that manufactures warehouse robots that work in close proximity to humans, explains:
[...] careful thought needs to be put into the way other workers will react to a robot. So it’s in our best interest to make that interaction as smooth and enjoyable as possible. If they don’t want to work with our product, then no matter how good it is, it isn’t going to make it very far.
As more and more robots enter the workplace, we gain a deeper understanding of HRI, confirming some ideas (such as the "uncanny valley") and suggesting new approaches.
Colin Ritchie, head of business development at Fellow Robots:
"[...] there was initially some concern among employees that the robots would take away jobs, although this turned into a realization that the robot would just take over one of the more boring parts of their jobs, he says.
But it isn’t only workers who will interact with the robots, and customer behavior can be tricky to predict. The original design for the robot was humanoid, Ritchie says, but user testing showed this unnerved some people. “If you make it too human, people will resist it,” he says. “Now it looks more like a friendly Dalek.”
Researchers use air-filled modules to grasp, manipulate delicate objects (PhysOrg)
A soft robot skin developed by the team at Disney Research uses air-filled cavities to cushion collisions and to provide the pressure feedback necessary for grasping delicate objects, reports PhysOrg.
The Disney researchers used a pair of 3-D-printed soft skin modules to pick up a disposable plastic cup without breaking it, a roll of printer paper without crushing or creasing it and a piece of tofu without smashing it. Collision tests showed that the inflatable modules reduced the peak force of frontal impacts by 32-52 percent and side impacts by 26-37 percent.
This NYC employee keeps getting suspended for answering calls like a robot (Washington Post)
From the "Please Don't Try This At Work" Department.
Ronald Dillon, a veteran New York City Department of Health and Mental Hygiene employee, is back in the news this week.
Dillon is facing a 30-day suspension without pay for answering customer service calls.
In. The. Voice. Of. A. Ro. Bot.
Dillon was suspended for 20 days in 2014 for the same offense (which could be dubbed "human-robot impersonation"), at which time he told DNAinfo that he was merely following orders and trying to make his voice sound neutral. "They objected to the tone of my voice, so I made it atonal," he explained.
If the allegations are true, then with his track record and previous suspension, Dillon really should have known better. However, the original audio from the 2014 case reveals a robot impersonation that is certainly irritating, but also extremely short-lived.
Crashing into the Unknown: An Examination of Crash-Optimization Algorithms Through the Two Lanes of Ethics and Law (Forthcoming, Albany Law Review, 2016; Full Preview.)
When accidents are truly unavoidable, how should autonomous cars be programmed to react? That's the question posed by Jeffrey K. Gurney of the University of South Carolina School of Law in a forthcoming Albany Law Review article.
In such situations, the vehicle will make difficult ethical decisions based upon its programming — more specifically, how its crash-optimization algorithm is programmed.
This article examines crash-optimization algorithms from an ethical and legal standpoint through the lenses of six moral dilemmas. Ethically, the article focuses specifically on utilitarian and Kantian ethics. Legally, the article considers the tort and criminal law implications of crash-optimization algorithms.
In addition, the article discusses whether autonomous vehicles should even make ethical decisions. Concluding that they should make ethical decisions, the next consideration is whose — the car owner’s, the car manufacturer’s, or the government’s — ethical beliefs should the car be programmed to follow.
Recognizing that no one party could program a fully ethical vehicle, the article concludes by asserting that the government should provide partial immunity to the car manufacturer to ensure that the vehicles are programmed according to ethics.
Gurney's position is reminiscent of Ryan Calo's 2011 call for selective immunity "for manufacturers of open robotic platforms for what end users do with these platforms, akin to the immunity enjoyed under federal law by firearms manufacturers and websites". (Calo's paper is also a thought-provoking read.)
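To make the abstract question a little more concrete, here is a purely hypothetical sketch, not taken from Gurney's article, of how a crash-optimization routine might encode two of the ethical frameworks he discusses. The maneuver names, harm scores, and policy functions are all invented for illustration.

```python
# Hypothetical sketch (not from Gurney's article): how a crash-optimization
# algorithm might encode different ethical policies when every option causes harm.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_harm: float   # aggregate expected harm score (illustrative units)
    harms_bystander: bool  # does this option deliberately endanger an uninvolved party?

def utilitarian_choice(options):
    """Utilitarian policy: pick the maneuver that minimizes total expected harm."""
    return min(options, key=lambda m: m.expected_harm)

def constraint_based_choice(options):
    """Kantian-flavored policy: never deliberately harm a bystander;
    among the permitted options, minimize harm."""
    permitted = [m for m in options if not m.harms_bystander] or options
    return min(permitted, key=lambda m: m.expected_harm)

if __name__ == "__main__":
    options = [
        Maneuver("brake in lane", expected_harm=0.8, harms_bystander=False),
        Maneuver("swerve onto sidewalk", expected_harm=0.3, harms_bystander=True),
    ]
    print("Utilitarian policy chooses:", utilitarian_choice(options).name)
    print("Constraint-based policy chooses:", constraint_based_choice(options).name)
```

The same input produces different decisions under the two policies, which is precisely why the question of whose ethics the car should follow matters.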
Robot mimics cockroach in Russian research initiative (PhysOrg)
A team from the Immanuel Kant Baltic Federal University has created a prototype robot, less than 10 cm long, that looks and behaves like a cockroach.
Just in case creating a cockroach-inspired device didn't attract enough attention on its own, the team based their robot on the Blaberus craniifer cockroach, commonly known as the 'death's head' because of the skull-like pattern on its head.
The robot has a speed of 30 cm per second, can carry up to 10 grams, and has the ability to work autonomously for around 20 minutes. The roach-bot is earmarked for search and rescue missions, given its ability to enter, map, and navigate around hard-to-access and cluttered environments.
And Finally...
An outdated and seemingly limited household robot attempts to communicate with the family dog. (Also covered on Singularity Hub.)
General Atomics Wants To Put Lasers On Drones (Popular Science); Fusion of man and machine: The First World War from a literary perspective (RUBIN); Five Things We Learned At RoboBusiness 2015 (Robotics Business Review); NASA Unveils Probe For Exploring Alien Oceans (Smithsonian).