Dirk Knemeyer

The Digital Life #175: Are we too trusting of technology?

Episode Summary

This week on The Digital Life, we discuss the question: “Are we too trusting of technology?”

It takes time to build trust in technology — using e-mail instead of mail, for instance, or using a credit card to purchase something online, rather than going to the store and paying there. But we now have a host of new, emerging technologies that could help us navigate life and death situations. How do we develop trust in these new systems?

A study conducted by the Georgia Institute of Technology placed people in a fake emergency situation to see whether they would blindly follow a robot, rather than trust their own instincts. According to the researchers, 26 out of the 30 participants decided to follow the robot, even though it was clear that it was potentially leading them in the wrong direction and into a dangerous scenario. What kinds of questions does this raise for battlefield robots or even surgical robots? How do we maintain a healthy skepticism, but still incorporate such emerging technologies into our lives?


Jon: Welcome to Episode 175 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer.

Dirk: Greetings, listeners.

Jon: For the podcast this week, we’re going to discuss the question, are we too trusting of technology? Up until now, we’ve had a long run of technology where we’ve built our trust in these various systems. We now use email instead of mail, and we don’t feel particularly worried, maybe a little, but not too much, that our email won’t get to the person we wish it to get to. For online purchasing, we’re fairly confident that using a credit card online will successfully transact and we’ll be able to get the goods that we want. In fact, Americans are buying more and more stuff online every day. I think we’ve gotten over the trust issue in a number of these online technologies. We have a host of new emerging technologies, of course, that could help us to navigate a lot of life and death situations.

In particular, I want to draw attention to an interesting study that was conducted by the Georgia Institute of Technology, which placed volunteers in a fake emergency situation, of course they didn’t know it was fake at the time, to see whether or not they would blindly follow a robot that indicated it could provide a safe route out of the building. In the experiment, the volunteers follow a robot down the hall to a conference room where they’re asked to complete some minor task. As they’re sitting there completing this task in the conference room, smoke starts to billow into the room. The robot has a little sign that says, basically, “You can follow me to safety,” an emergency guide robot, I think it said.

Dirk: Of course the people were going to follow the robot if they didn’t know the way out themselves.

Jon: I believe they knew the route in, because they walked in.

Dirk: It was pretty straightforward, it was like, “Left, right, left, right, left, right, where the hell am I?”

Jon: The robot just led them off into the …

Dirk: Into the fire?

Jon: Yes, exactly, and of course, I think 26 of the 30 people followed the robot. A couple were disqualified, and the other 2 didn’t leave the conference room. Basically, everybody who left the conference room and who was not disqualified followed this robot, which of course raises an interesting question for human-robot interaction, right? You’ve got this figure of authority, which is your robot, and you’re expected to follow it because it says it’s the emergency guide robot. What about our instincts, if they’re telling us, “Hey, this is not the way I came in, why should I be following this robot out of the building when it’s clearly not leading me anywhere safe?”

Dirk: I call BS on this test. I call total BS, total BS, for a bajillion reasons. Number 1, if you’ve smelled real fire you know what it smells like, and it smells a lot different than a smoke generator or a fog generator. Unless they were really super careful about bringing in smoke that explicitly smelled like a real fire, you’re going to have people who deconstruct the activity right there. Number 2, people know they’re coming in for an experiment, they probably signed a form, it’s a normal lab thing. It’s not unusual for lab things to take a left turn, to misdirect you. Some percentage of those people could have totally deconstructed the activity, and kind of called bullshit, like, “Oh, okay, here we go.” Number 3, we’ve been conditioned that emergency exits are in different places. If you’re in a hotel, for example, the emergency exits are these weird things way the hell off in the corner.

It’s not the way you came in. In the environments we go to that are outside of our homes, we’re taught that the emergency exit is a special place, and it’s not the normal place. I think there are a million reasons why people would follow the robot, or not go out of the room, or be disqualified. I love the theme you’ve brought up for the show, I think it’s really interesting. I think their experiment sucks.

Jon: The experiment certainly has potential flaws. However, it does raise the specter of these scenarios where human life is potentially in danger, and raises the question of how we trust, or do not trust, the technology that is provided to us. In a very tragic example, there was a fellow who was killed in a car accident while his Tesla was running on autopilot. He was in his Tesla, he had it on autopilot, and there was an accident on the road and he died. Tesla’s software is certainly still being developed, and autonomous cars are still being tested, really still in alpha or beta. It’s an example of a level of trust afforded to an artificial intelligence, or a series of sensors and software, that ended up with a very tragic result, so …

Dirk: That’s a great example, because Tesla, with their autopilot feature, they’re explicit with drivers: “Keep your hands on the wheel, keep your foot on the brake and stay alert the whole time,” right? That’s one of the reasons why, when we talked about the technology before, I was very critical of it. I was like, “Why bother? Just drive your fricking car,” at that point. This individual was given those warnings, and despite those warnings, presumably through inattention, or not having his foot or his hands where they should have been, he didn’t override the system and keep himself safe, keep himself alive. It’s sort of an over-trust in the technology, like, “Oh, yeah, yeah, those warnings, those are just … They’re being over-careful.” It’s like 10 and 2, we don’t really do 10 and 2, that’s overly careful. The guy’s dead because of it, and that’s really unfortunate. Scenarios with burning buildings and the like are more ways in which humans may die in the future.

Jon: It’s an interesting question, because there is a built-in expectation that technology is going to work for us and will not fail in such a way that it will put our lives in danger. We all know that there are autopilots on planes, for instance, or software that helps run our subway systems, or just a host of different pieces of software and hardware that direct our lives every day. The traffic light system that adjusts to the amount of traffic is another example of that.

Dirk: I’d be interested to know what’s happening on the litigation side. If the traffic light malfunctions and someone dies in a car crash due to the malfunction, can you sue the … I don’t know how all that is set up, but can you sue the city, or can you sue the engineer, or can you sue the manufacturer? I ask those questions because I think the highways are a great place to talk about all this stuff, because we have this illusion of control. Right now we’re driving our car, we crash, it’s someone else’s fault or it’s our fault. However that’s figured out, humans are blamed. We’re heading towards a future of driverless cars, I think it’s almost certain, though it may not happen. In that scenario, it’s very likely that cars will be far safer and fewer people will die on the highways. A lot of people die on highways, I don’t know what the number is. It’s certainly tens of thousands a year in the United States, maybe in the hundreds of thousands.

I don’t know the scale, but a lot of people die on the highways right now. If those technologies cut that number in half, that’s objectively safer, objectively better. But the people who die in those accidents, now they’re dying because something went wrong with somebody’s technology, my car’s software, or your car’s software, or something else other than the agency of me as a driver, you as a driver, taking responsibility for what’s going on. Now it’s something totally different. I’m really curious on the litigation side how that’s going to pan out. I think there are going to be a lot of people who hate the technology because they were the unlucky lottery winners of their loved ones being killed. Fewer people die overall, but my person’s dead. If they were allowed to hold their steering wheel, they wouldn’t be dead.

I think those are knotty legal and ethical things that are going to be great strawmen, great firsts in the fight over how we’re thinking about all the implications of turning various parts of the world over to artificial intelligence.

Jon: You raise some of the questions about regulatory structure; there are also all of the insurance requirements, right? Who’s responsible if something happens, is it the car manufacturer’s fault, is it the software provider’s fault, or is it some combination thereof? I don’t know. I think the danger, or at least the problem, with introducing these technologies that automate certain aspects that were up till now handled via human agency, is that it’s asking people to trust, and then you have the “but verify.” You have to keep your hands on the wheel, and your foot on the brake, just in case.

Dirk: It’s not a driverless car, it’s just not.

Jon: That’s extremely difficult. In the Georgia Institute of Technology study that you panned, if that were a real situation, you’re basically asking, when do we turn off the autopilot and stop trusting the machine, and go with our gut, find our own way out of the building, step on the brake, or dodge to the left, or what have you? I think it’s a very difficult realm, being ambiently aware of what’s going on around you, and then immediately bringing your focus to bear when there’s an emergency situation. I think we’re going to run into a lot more of those, and that’s where this trust issue is going to come up more and more. We’re probably at the very, very beginnings of this discussion, and I don’t think it’s going to be an easy road, as you pointed out. There’s going to be a lot of arguments about this, and rightly so.

Dirk: We’re going to be in danger, because humans are careless, we are. There was one time, I don’t know, it’s probably been a year or 2 even, so I think it’s worse now most likely. I drove down the road and just asked, “How many people are texting or on their device?” I passed 12 people, and every one of them, 12 out of 12, was on their device. The real number is certainly under 100%, but that small sample is a perfect example of it. I say we are careless because, look, I’m on my device sometimes too, unfortunately, on the road. It’s been communicated to us, we know, “Hey dumb dumb, you are much more likely to die whizzing along at a high rate of speed in this big, heavy metal thing if you’re doing that,” but we still do it. We make this little calculation based on incomplete understanding, incomplete … really grokking what the danger is. We thrust ourselves into further danger, for what?

For nothing, for the difficulty of being bored, for the draw of the little serotonin rush, of the little thing. That way of behaving is going to continue to haunt us moving forward. It’s why we use easy passwords, it’s why we’re not secure with our online information, and why for many of us, probably unbeknownst to us, all of our goodies are out there already and people could be using those against us, leveraging them even today if they really wanted to. We’re careless. That carelessness is going to add risk to the future of AI.

Jon: Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to Thedigitalife.com, that’s just one L in The Digitalife, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Clare FM and Google Play. If you want to follow us outside of the show, you can follow me on Twitter @Jonfollett, that’s J-O-N F-O-L-L-E-T-T, and of course the whole show is brought to you by Involution Studios, which you can check out at Goinvo.com. That’s G-O-I-N-V-O.com. Dirk?

Dirk: You can follow me on Twitter @Dknemeyer, that’s D-K-N-E-M-E-Y-E-R, and thanks so much for listening.

Jon: That’s it for Episode 175 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.

