Awful Algorithms
Christian Hunt
25 years: Behavioural science & compliance
This video refers to an effective algorithm that does exactly what it was created to do, but with consequences that go beyond what its creators fully envisaged. In it, Christian highlights how technology, particularly what he calls ‘awful algorithms’, can make bad decisions with severe consequences.
Awful Algorithms
15 mins 20 secs
Key learning objectives:
Be able to define technology risk
Give some real world examples of awful algorithms
Explain why algorithms need risk management
Overview:
In this video Christian looks at the times when machines get things badly wrong, how they bring out the worst in people and how they are already negatively influencing and will influence human decision-making in the future.
What is Technology Risk?
By this we don’t mean the risk of technology failing, but the risk it poses when it works exactly as designed. Unlike humans, machines aren’t sentient (at least not yet), so they’ll do exactly what they’re told, regardless of context. Unlike people, they cannot question or think about the ‘why’ of what they’re doing. Computers don’t do context, and that can lead to bad outcomes.
What is an example of an awful algorithm?
After a failed terror plot in London in 2017, a news program set out to discover how easy it would be for people to buy bomb-making equipment. What they found was that Amazon’s “frequently bought together” feature, which is designed to recommend additional purchases customers might like to make alongside the item they’re looking at, was inadvertently providing customers with a guide on how to make bombs.
Black powder and thermite, two common ingredients of homemade explosive devices, were grouped together in a “Frequently bought together” section on listings for other chemicals used to make explosives.
Furthermore, steel ball bearings (often used as shrapnel in explosive devices), ignition systems and remote detonators were not only readily available, but some were promoted on the same page as the chemicals, under “Customers who bought this item also bought”.
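To see how such a feature can surface dangerous pairings without ever understanding them, here is a minimal co-occurrence recommender in Python. Everything in it is illustrative: the basket data and item names are invented, and this is a sketch of the general technique, not Amazon’s actual system.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase history: each basket is one customer's order.
baskets = [
    {"chemical_a", "ball_bearings", "wiring"},
    {"chemical_a", "ball_bearings"},
    {"chemical_a", "ball_bearings", "igniter"},
    {"garden_gloves", "wiring"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def frequently_bought_with(item, top_n=2):
    """Return the items most often co-purchased with `item`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

# The recommender happily pairs items by statistics alone; it has
# no notion of what the items are or what the combination enables.
print(frequently_bought_with("chemical_a"))
```

The point of the sketch is that nothing in the code inspects what the items mean. The recommendation is driven purely by purchase statistics, which is exactly why the feature worked as designed and still produced an awful outcome.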
How may Uber’s algorithm pose technology risk?
Uber’s ‘surge pricing’ algorithm does exactly what it’s told to do — it tracks activity on the platform and increases the price of rides whenever rider demand looks like it will outstrip driver supply. When there’s less demand or increased supply, prices can then readjust.
What the algorithm wasn’t programmed to do, is to understand why there might be an increase in rider demand or a decrease in driver supply. That’s not what it’s there to do.
This can be a problem: what if, for example, demand suddenly spikes because there has been a terrorist incident and people are trying to get away, or because there is a natural disaster?
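A naive version of this logic can be sketched in a few lines of Python. This is not Uber’s real algorithm; the function name, the linear demand/supply ratio and the 3x cap are all assumptions chosen for illustration.

```python
def surge_multiplier(riders_waiting, drivers_available,
                     base=1.0, cap=3.0):
    """Naive surge pricing: scale the fare by the demand/supply ratio.

    The function only sees two numbers; it has no idea *why* demand
    has spiked, so it raises prices during an emergency just as
    readily as on a busy Friday night.
    """
    if drivers_available == 0:
        return cap
    ratio = riders_waiting / drivers_available
    return max(base, min(cap, ratio))

print(surge_multiplier(50, 100))   # quiet period: 1.0
print(surge_multiplier(300, 100))  # rush hour: 3.0
print(surge_multiplier(900, 100))  # mass evacuation: still 3.0
```

Note that the inputs for “rush hour” and “mass evacuation” are indistinguishable to the function. Context exists only outside the algorithm.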
What can we do to improve these algorithms?
Technology also needs human oversight. While it can deliver perfect solutions from an economic perspective, it can’t judge whether those solutions are appropriate from a human perspective. Computers can’t contextualise. Unless we program them to understand the times when there are exceptions to the rules, we’ll get bad outcomes.
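One simple form of that oversight is an explicit, human-controlled exception to the rule. The sketch below (hypothetical names and values throughout) wraps a surge-pricing function with a flag that a human operator, or an external alert feed, can switch on:

```python
def surge_multiplier(riders_waiting, drivers_available, cap=3.0):
    """Naive demand/supply surge pricing, capped at `cap`."""
    if drivers_available == 0:
        return cap
    return max(1.0, min(cap, riders_waiting / drivers_available))

def priced_surge(riders_waiting, drivers_available,
                 emergency_declared=False):
    """Apply the surge algorithm, but let a human override it.

    `emergency_declared` is set by a human operator when context
    the algorithm cannot see (a terror incident, a natural
    disaster) means normal pricing rules should not apply.
    """
    if emergency_declared:
        return 1.0  # never surge during a declared emergency
    return surge_multiplier(riders_waiting, drivers_available)

print(priced_surge(900, 100))                           # 3.0
print(priced_surge(900, 100, emergency_declared=True))  # 1.0
```

The override is deliberately crude: the point is not the mechanism, but that the exception to the rule has to be designed in by people, because the algorithm cannot discover it on its own.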
Why do algorithms need risk management?
It is inevitable that for some things we will need to use algorithms. However, it is important that we think carefully about the risks we’re running when we deploy them, and try to mitigate those risks ahead of time.