For decades, human activities and decisions have been supported by algorithms. They are the hidden rules and instructions that help our computers to process data and run complex calculations. But in recent years, algorithms have moved from a supporting to a starring role. As our machines have become more powerful, the algorithms have become more sophisticated – so much so that they are now in control of potentially life-changing decisions.
This is the setup for Dr Hannah Fry’s latest book, Hello World. The UCL mathematician, who once explored the 'mathematics of love', has grown fascinated with the relationships between humans and algorithms, the responsibilities we give them, and the impact they are having on our societies – whether good, bad, or ugly.
In the courts, for example, algorithms help judges to determine whether jail time is warranted. But do they always get it right? Speaking at RSS Conference on Tuesday afternoon, Fry asked delegates to ponder the case of a 19-year-old man convicted of the statutory rape of his girlfriend, who was 14 years old. She explained how an algorithm determined that the man should serve 18 months in jail, on the grounds that committing such a serious crime at a young age may indicate a high risk of reoffending in later life.
However, as Fry then pointed out, this algorithm would have reached a very different decision had the man been older, say 36 years of age. In that scenario, the algorithm would not have recommended jail time, because the offence had been committed later in life, thereby lowering the estimated risk of reoffending. And yet a human decision-maker might be more alarmed by a 36-year-old man having a relationship with a 14-year-old girl, and might think additional jail time was warranted given the much larger age gap between perpetrator and victim.
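To make that inversion concrete, here is a minimal sketch, not the actual sentencing tool Fry describes, of a risk score that simply falls with the offender’s age at the time of the offence. The function names and the threshold are invented purely for illustration.

```python
# Toy illustration only: a made-up risk score that decreases with age at
# offence, so the same crime produces opposite custody recommendations.

def reoffending_risk(age_at_offence: int) -> float:
    """Hypothetical score: younger offenders are scored as higher risk."""
    return max(0.0, 1.0 - (age_at_offence - 18) / 30)

def recommend_custody(age_at_offence: int, threshold: float = 0.5) -> bool:
    """Recommend jail time whenever the toy score crosses the threshold."""
    return reoffending_risk(age_at_offence) >= threshold

print(recommend_custody(19))  # True  -- flagged high risk simply for being young
print(recommend_custody(36))  # False -- same offence, older offender, no custody
```

A score built this way knows nothing about the age gap between perpetrator and victim, which is precisely the kind of context a human judge would weigh.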
'You would hope that a judge would have the foresight to overrule such an obviously flawed algorithm, to trust their own instincts over those of the artificial intelligence,' said Fry. 'Unfortunately, in this case, and in many others like it, that particular judge didn’t, and the final sentence that was received was actually increased on the say-so of this particular algorithm.'
Part of the problem here, Fry said, is that 'no matter how much we train them, [algorithms] just don’t really see the world in the same way that we do; they don’t understand context and they don’t understand nuance. And that particular problem is never clearer than it is in the case of image recognition.'
Borrowing from the work of blogger Janelle Shane, Fry shared some of the weird things that can happen when photos are fed into image recognition software. Show an AI a picture of a rolling hillside and the computer will see sheep where there are none; show it a picture of grazing sheep, but paint those sheep pink, and it’ll see a field of flowers.
The lesson here is that algorithms may be remarkable in what they can do, but they will always be capable of making mistakes – and it is up to us humans (error-prone though we may be) to sense-check their outputs, just as our AI creations can and should be used to sense-check our own flawed decision-making processes.
Fry’s book title, Hello World, refers to a task often set for novice computer programmers, whose first job is to program a machine to display that particular phrase. But, she said, it’s not clear who is saying hello to whom. It could be the programmer saying hello to the world of programming, or the machine saying hello to the wider world. Whichever way round it is, Fry asked delegates to think of this moment 'as the beginning of a partnership, a shared journey of possibilities, where one can’t exist without the other'.
'And in the age of the machine,' she said, 'that’s a sentiment worth bearing in mind.'
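As for the exercise that gives the book its name: in Python, for example, a complete 'Hello World' program is a single line.

```python
# The classic first program: the machine (or is it the programmer?) says hello.
print("Hello, world!")
```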
-
Hannah Fry’s book, Hello World, is out now.