More people than ever are working from home these days, and many companies want to ensure that employees are remaining productive. AI ethics writer, researcher, and thought leader Fiona McEvoy recently joined the DataRobot More Intelligent Tomorrow podcast, hosted by Ben Taylor, Chief AI Evangelist at DataRobot, to shed light on the means and metrics employers are currently implementing to keep tabs on their remote workforce—and the accompanying ethical issues.
“On the surface, it doesn’t seem necessarily like something that is particularly pernicious, but if you think about the lack of awareness and how it feels as though people are almost being shepherded and observed like lab rats, I feel a lot of people would think that that was an affront to human dignity. And that’s one of the important angles of surveillance: you could make an argument that—consequently—it’s a good thing to know how productive each individual person is. But I would say that you need to do that in a way that honors human dignity.”
McEvoy also warns against “deferring to algorithms for something that is ultimately quite a nuanced topic,” noting that a person could be “chained to their workstation from 9:00 in the morning till 9:00 at night and [still] be procrastinating.”
McEvoy, named one of the 30 Women Influencing AI in San Francisco by RE•WORK, also emphasizes that corporations should consider the quality and manner of a person’s work, the necessity of downtime, and a person’s age and outside commitments, such as family or other dependents.
“When we boil everything down to quantifiables—we lose a lot. I’m just concerned that we’ll lose some of our humanity in treating human beings like production-centric robots.”
Relationships with Non-Sentient, Personified Robots Could Be “Potentially Destructive”
Fiona McEvoy, the founder of YouTheData.com, was also asked about inviting droids into the home, machines that could do the dishes, read the kids a bedtime story, or even babysit. She was extremely cautious:
“I worry that there’s something destructive about that. Relationships are really precious things, and we’ve been developing them for millennia and they work in structures and are incredibly complex. And I just worry that having a meaningful relationship with something that isn’t sentient—isn’t conscious—and it doesn’t love you back is something that is potentially destructive.”
McEvoy further cautions her tech podcast audience against robots taking care of pets and children:
“Misreading a situation is bad for us humans, but for a robot that is able to control the environment—that is responsible for looking after a pet or a child—that could be really, really problematic.”
AI for Good Will Identify and Disseminate Resources to People and Countries in Need
“I think there is some really exciting stuff in particular as it comes to identifying and disseminating resources in second and third world countries.”
She also highlighted how AI can help the blind:
“There’s some really cool stuff out there—and I’m kind of hesitant to name names—but things that can really help people who are partially sighted or blind identify artifacts or navigate difficult terrains. They’re super exciting because it’s completely changing the prospects and the quality of life for certain people, not just here in America, but across the globe.”
To hear more about AI ethics and how AI for Good is helping people across the globe, check out DataRobot.com/podcast or http://datarobot.buzzsprout.com/. You can also listen everywhere you already enjoy podcasts, including Apple Podcasts, Spotify, Stitcher, and Google Podcasts.