The Robots Are Coming! The Robots Are Coming!
Yes, the robots are coming, but they are not conquering the planet anytime soon — even if that’s what the media hype would have us believe. And they’re not as smart or technologically advanced as we think, according to Kate Darling, a leading expert on robot ethics.
Instead of buying into the hysteria of an impending robot revolution, Darling believes we should be thinking about how we as a society use robots, and what implications this has for data collection.
“I’m not worried so much about robots developing their own agenda and taking over the world,” Darling told delegates at the 71st CFA Institute Annual Conference in Hong Kong. “I’m a little bit more worried about people and how we decide to use the robots as a society.”
Darling is a researcher at the Massachusetts Institute of Technology (MIT) Media Lab, where she investigates social robotics and conducts experimental studies on human-robot interaction. She says artificial intelligence (AI) needs “massive sets of data” to learn, and so there are a lot of incentives for companies to collect as much data as possible. At the same time, consumers don’t have much incentive to curb this trend, as data collection ties directly to the functionality of their devices. Think, for example, of Alexa, Amazon’s digital assistant, which is built into other Amazon devices.
“Over the long term, we are going to see privacy massively eroded because of this,” Darling said. “We can manipulate robots, but robots can also manipulate us, or the companies creating the robots can also manipulate us. And a lot of these technologies I see being developed right now are being developed specifically for very vulnerable parts of the population, such as the elderly and children, and we might need to think about ways to protect them.”
We can manipulate #robots & companies that create and market robots can manipulate us, MIT’s Kate Darling tells #Annual2018
— Amy Resnick (@AmyResnick) May 13, 2018
Darling sought to tamp down fears about robots and AI.
“With any new technological development, but in particular this type of technology, there is a lot of hype and a lot of fear that the robots are going to take over the world and kill us all,” she said.
“It’s important to anticipate problems before they happen and to try to find solutions well in advance,” Darling explained. “But, on the other hand, it’s a little frustrating because the media picks up on this as the one problem we need to be concerned about in this future of robotics, and I think that that distracts from some problems we might actually want to turn our attention to and leads people to overestimate where we are with the technology.”
She acknowledged that there are some areas where robots are much better than humans. “They can do math,” she said. “They can remember everything. They can work tirelessly on an assembly line. They can recognize patterns in data. They can beat us at Go and at Jeopardy.”
But there are areas where they lag.
When we talk about robots, we constantly envision humans. It doesn’t make a lot of sense to compare artificial intelligence to human intelligence on a one to one scale. They can recognize patterns & remember details but there are many areas where they are behind us. #Annual2018.
— Cheryl L. Evans, JD (@cle1112) May 13, 2018
“If you ask a robot to understand context, or ask a robot to understand concept, or transfer skills from one context to another, or deal with anything unexpected that happens, the robots are woefully and hopelessly lost,” she said. “That’s not going to change anytime soon because we don’t even know where to start to begin to develop the type of artificial general intelligence that’s required to, for example, context switch the way that a person can.”
All that said, robots are becoming an increasingly common sight.
In Shanghai, the world’s first fully automated, human-free bank branch is open for business. Bank customers are greeted by Xiao Long, or “Little Dragon,” who speaks with them, accepts their bank cards, and answers simple queries.
“What I really love about the way that people interact with robots is that we treat them a little bit like they are alive even though we know perfectly well that they are just machines,” Darling said. “Part of this is because we’re primed by science fiction and pop culture to want to personify these machines, and that is partly why we are constantly comparing them to humans.”
She also believes humans are biologically conditioned to anthropomorphize robots, to want to see ourselves in them. “We generally have this tendency to want to project human-like qualities and emotion onto non-humans, and it’s something we do from a very, very early age. We think we do it in order to make sense of non-human entities and relate to them.”
Why does this matter?
“If you are trying to integrate robots into shared spaces, you need to understand that people will treat them differently than other devices, and once you do understand that, it is awesome because it is something you can try to harness,” Darling said. “Robots that have more anthropomorphic attributes seem to be more accepted when they are integrated into the workplace.”
Darling’s work explores the emotional connection between people and lifelike machines. One question she is especially interested in exploring is whether we can change people’s empathy using robots and how interacting with very lifelike machines influences people’s behavior in both good and bad ways.
Kate Darling: Could the ethical challenge of robots be how they desensitize people to certain behaviors? #Annual2018
— Charlie Henneman CFA (@CHenneman) May 13, 2018
“If you are a parent and your child is hitting a robot, is there a reason to intervene that goes beyond just respecting other people’s property?” Darling asked. “As we engage more and more with very lifelike moving technology, it might get muddled in kids’ subconscious and it could have an impact on their behavior and habits and how they treat animals or other children.”
“This line between device and living thing,” Darling continued, “is so muddled in our subconscious when we’re dealing with robots, I do wonder if maybe the problem isn’t that if we teach the robots how to kick, that they will come back and kick our [butts]. What if the problem is what it does do to us if we kick the robots?”
This article originally appeared on the 71st CFA Institute Annual Conference blog. Experience the conference online through Conference Live. It’s an insider’s perspective with live broadcasts and recorded video archives of select sessions, exclusive speaker interviews, discussions of current topics, and updates on CFA Institute initiatives.
If you liked this post, don’t forget to subscribe to the Enterprising Investor.
All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.
Image courtesy of IMAGEIN