I’m in Minneapolis for the next two days, taking part in a terrific industry expo, RoboticsAlley. It covers a broad range of robotics, from industrial robots to health care and assistive robots, along with a number of exhibitors from the electronics industries. Baxter-the-robot is here. It also covers drones and self-driving cars – the UAV industry association, AUVSI, is one of the sponsors – so it is a pretty wide-ranging trade show. The same goes for the various presentations, panel discussions, and so on – an excellent panel on self-driving cars, for example. Many of the presentations have focused not just on technology but on the economics of these machines. Baxter, for example, represents a price breakthrough for a two-armed robot with a screen for a face, at $22,000; but, as a panelist observed, it probably needs to be half that price in order to attract small and medium-sized manufacturing or assembly businesses to experiment with it.
Another aspect of the economics of robotics, however, is investment in the companies bringing these machines from the lab to market. A number of recent reports in the business press have remarked on falling venture capital interest in certain sectors, particularly medical devices and assistive living technologies. Part of this might be fueled by new taxes on medical devices that depress investment and innovation, but several speakers here suggested that the investment picture is not clear, even in the specific sector of medical and assistive devices. With regard to the investment climate, I was interested to see the creation of a new Nasdaq ETF (ROBO) that tracks an index of publicly traded robotics and automation companies. Frank Tobe, founder and editor of The Robot Report, a highly regarded industry paper, said that he wanted a way to invest in the public market for robotics as a whole, and this was the result. It has obvious limitations – it covers only publicly traded companies, to start with, and few of those (iRobot is an exception) are pure-play robotics or even automation companies.
Some of these areas are moving rapidly, including in their regulatory environments. The Federal Aviation Administration, for example, released its first iteration of regulations for UAVs (drones) in the national airspace just ten days ago; though I am still digesting the FAA’s documents, at first blush I’d agree with the University of Washington’s Ryan Calo that the FAA has done pretty well in this first cut at what will have to be an ongoing regulatory process. And I mean that on more than just the hot-button issue of privacy, which, as one FAA staffer remarked to me, occupies a huge amount of the public discussion and yet has to come after the fundamental issues of airspace safety. My impressionistic sense from the RoboticsAlley conference is that would-be commercial UAV operators and their industry associations are very concerned to get safety issues right – for both commercial and hobbyist drones – and all the more so as they perceive that the FAA has moved on the basic issue of drones in the national airspace. As the people who would operate drones commercially see the day of commercial operation approaching, safety issues loom larger. No one wants to see a mid-air collision, after all. I will comment one of these days on what the new FAA documents say on these issues, but for now let me just say that, like Ryan, I’m cautiously optimistic.
I’ll be speaking tomorrow on issues of law and ethics in robotics. Meanwhile, Heather Knight is about to speak on robot-human social interactions; she’s a researcher at Carnegie Mellon, and her current work is on “charismatic machines and robot body language.” She has been studying the social psychology of human reactions to mobility and motion in machines as such; even without anthropomorphizing characteristics such as a face or eyes, humans process motion, she notes, in ways that might in some circumstances treat the machine as alive – even when one knows perfectly well that it is not. We have a tendency to impute intentions and other mental characteristics to a machine’s behavior, and to do so at a very intuitive level in some circumstances; establishing what those circumstances are is important to robot design. As it happens, one of the points I’ll be making tomorrow draws on this work in social psychology to suggest that regulation of various forms of robots will likely have to take it into account – for example, does the design of the robot invite, even inadvertently, affective and emotional interpretations of, and attachments to, the machine? And if so, does that invite trust in, or reliance on, a machine that will not be warranted in some circumstances? Attractive nuisance in a new form?