Autopilot Users

Christian Hunt

25 years: Behavioural science & compliance

15 mins 25 secs

Key learning objectives:

  • Understand how technology is making us dumber

  • Give a real-world example of fallible technology

  • Understand why algorithms need to be regulated

Overview:

In the final video of his technology risk series, Christian looks at how machines reduce our ability and willingness to think, thereby turning us into unthinking users.

Summary

What are unthinking users?

Technology has always aimed to help us do things we otherwise could not. However, we are now witnessing situations where our dependence on technology has reached a point where we almost believe it is infallible. As a result, we exercise our brains less, our cognitive abilities atrophy, and we effectively become dumber. This over-reliance on technology has created new problems, and our natural human inclinations have compounded them.

What is a real-life example of fallible technology?

Google Maps, often thought to be one of the most useful and reliable smartphone applications, inadvertently caused mayhem in June 2019, when a crash on one of the approach roads to Denver airport led drivers to pull up the app and follow its directions onto a dirt path.

Google Maps had offered this route to them as a "shortcut" but ended up creating a brand-new problem. The drivers automatically assumed it was acceptable to do whatever Google Maps told them to, and the situation worsened when other drivers saw those in front of them turn onto the path and blindly followed, creating a domino effect. This showcases how algorithms can diminish our thinking by offering the same limited set of options to a large number of people. Each individual often believes they are being offered personalised advice, when in fact they are far from it.

Who is to blame for situations like these?

Google issued a statement regarding the incident, acknowledging that certain factors may hinder the accuracy of Google Maps but insisting that users should always use their best judgement in such situations and follow local signs and laws. This illustrates how technology companies are often not held accountable for these incidents, and can simply shift responsibility onto a supplier or end user, thereby avoiding any questions about their product.

Another risk that technology has created is a new type of outsourcing risk, where people hire social media teams or other individuals to act as their digital stand-ins. This displays a similar lack of accountability: when things go wrong, it is easy to pass the blame onto the stand-ins.

Why do algorithms need regulation?

Now that we have seen the downsides and risks posed by technology, it is important to start asking questions about regulation, especially as technology plays an ever-greater role in industries such as financial services. Because the barrier to entry is very low and algorithms are largely unregulated, we should expect these algorithms to conform to the same standards we expect of humans.

As we make more and more decisions every day, we need to remember that whilst technology can help us in many ways, when things do go wrong somewhere down the line, it is usually because a human had to make a decision and made the wrong one. This is worth bearing in mind as we head towards an age in which we are programming machines to perform tasks that require emotional intelligence, judgement, nuance and creativity.

Christian Hunt

Christian is the founder of Human Risk, a Behavioural Science led Consulting and Training Firm. Previously, Christian was Managing Director at UBS, and Head of Behavioural Science (BeSci), within the Bank’s Risk function. Prior to joining UBS, he was Chief Operating Officer at the UK’s Prudential Regulation Authority.
