Recently, we had the great pleasure of hearing from the award-winning mathematician and broadcaster Professor Hannah Fry in the latest instalment of our Unique Perspectives event series.

What an enlightening hour it was. In the age of Artificial Intelligence (AI), where data and algorithms permeate almost every aspect of our lives in largely unknown ways, most people harbour at least some scepticism about the direction of travel; the future is a perceived risk, and, following a decade of instability, trust is at a premium.

But amid talk of omniscient algorithms and the perils of entrusting AI with critical decision-making, Hannah Fry remains a voice of optimism. Taking us through a wide-ranging discussion covering everything from the mathematics of love to dodgy AI to the data behind risk forecasting, she reminds us that data, when properly contextualised within its very human setting, can be a real source of knowledge and empowerment, and a way of understanding ourselves.

In an organisation where data is owned by every function, used to aid claims management, forecast risk, and more besides, it is no bad thing to be reminded of this.

Data, decision-making, and the roles we play

Hannah’s emphasis on the human element of data interpretation challenges us not to rely on numbers alone, but to understand the nuanced contexts that shape them. Qualitative analysis must keep pace with the quantitative. Of course, data reveals patterns and trends that can guide our decision-making, both in and out of the underwriting room. But what good is it without the proper context?

She illustrated this point adeptly with a few cautionary tales. In a ‘choose your own adventure’ turn, she described a scenario in which a motor racing team is approaching the final race of the season, with potential sponsors at stake. She told the audience that the engine had failed in 7 of the last 24 races, jeopardising not only the prize money but also the safety of the driver.

Working on the theory that the engine tends to fail in cooler weather, the team plotted the temperatures for every race in which the engine had blown out, producing a reasonably broad spread. With the temperature on race day expected to be fairly cool and the data inconclusive, she asked the room whether the team should race.

An overwhelming majority were eager for them to press ahead, and to Hannah, this was no surprise. This was a real story, based on real data, and broadly speaking, people opt to race every time. She noted that almost nobody asks to see the rest of the data: specifically, the temperatures from the races where the engine didn’t fail. Include this data, and the story looks very different.
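
To make that selection effect concrete, here is a minimal sketch in Python. The temperatures are invented to echo the 7-failures-in-24-races pattern she described, not the real race (or shuttle) data, and the 65°F threshold is purely for illustration.

```python
# Synthetic, illustrative race-day temperatures (degrees F), chosen only to
# echo the 7-failures-in-24-races pattern; these are not the real data.
failures = [53, 57, 58, 63, 66, 67, 70]            # temps when the engine failed
no_failures = [66, 67, 68, 69, 70, 72, 73, 75,     # temps when the engine held up
               76, 76, 78, 79, 80, 81, 81, 82, 83]

def failure_rate(lo, hi):
    """Failure rate among races whose temperature falls in [lo, hi)."""
    n_fail = sum(lo <= t < hi for t in failures)
    n_ok = sum(lo <= t < hi for t in no_failures)
    total = n_fail + n_ok
    return n_fail / total if total else float("nan")

# The failure temperatures alone look scattered and inconclusive; add the
# non-failures and the cold-weather pattern is stark in this toy data.
print(f"Below 65F: {failure_rate(0, 65):.0%} of races ended in failure")       # 100%
print(f"65F and above: {failure_rate(65, 200):.0%} of races ended in failure")  # 15%
```

Conditioning only on the failures throws away exactly the contrast that carries the signal; the comparison becomes meaningful only once both outcomes are on the same chart.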

Why data still needs a human touch

Once the full picture is understood, the risk of racing rises sharply, and the revelation that the scenario was drawn from the Space Shuttle Challenger disaster hammers home the consequences of overlooking crucial data points in scenarios where the stakes couldn’t be higher.

These anecdotes were eerily reminiscent of the survivorship bias fallacy that threatened to misdirect engineering safety efforts before it was identified (famously, in the armouring of returning Second World War bombers). Paraphrasing sociologist Diane Vaughan, she issued an important reminder to the room: there were charts that engineers didn’t imagine and didn’t construct, charts that, if created, could have provided the data to postpone the Challenger launch.

In our industry, it is our role to think outside the box and imagine the charts that could have been. We are all well aware of the perils of over-interpretation, particularly in catastrophe (cat) modelling, where statistical models will always fall short of the complexities of reality and can lead to a significant underestimation of risk.

In a world of ever more frequent extreme weather events, for example, we must subject the wisdom of relying solely on historical observations to increased scrutiny. “1-in-250-year” events have occurred with alarming frequency in the past decade.
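
Some back-of-the-envelope return-period arithmetic helps frame that scrutiny. The sketch below shows how often we should expect such events by chance alone if the stated probabilities are correct; the figure of 100 independent regions is a purely hypothetical assumption for illustration.

```python
# A minimal sketch of return-period arithmetic (illustrative only).
# A "1-in-250-year" event has an annual exceedance probability of 1/250 = 0.4%.
annual_prob = 1 / 250

# Chance of at least one such event at a single location over a decade:
p_decade = 1 - (1 - annual_prob) ** 10
print(f"One location, ten years: {p_decade:.1%}")    # ~3.9%

# Across many independent exposures (a hypothetical 100 regions), a "rare"
# event happening somewhere becomes near-certain over the same decade:
n_regions = 100  # assumed, for illustration
p_somewhere = 1 - (1 - annual_prob) ** (10 * n_regions)
print(f"100 regions, ten years: {p_somewhere:.1%}")  # ~98.2%
```

If observed frequencies outpace even that generous baseline, the more plausible explanations are miscalibrated models or a shifting climate, not bad luck.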

I am reminded that while data is the lifeblood of what we do, it is the guiding hand of human expertise that allows us to judge its credibility and unlock its power.

Should we be worried about the pace of change?

The statistician George Box’s sentiment that ‘all models are wrong, but some are useful’ was pertinent throughout Hannah’s talk. Harnessed incorrectly, data can lead to overdiagnosis in cancer patients, prolong criminal investigations, and oversimplify human emotions. But like me, she remains an optimist. That should serve as a small comfort in an anxious world, where many see the rise of AI as a dystopian nightmare.

Hannah senses that we are on the cusp of a big change: genuinely transformational technologies that promise to solve systemic societal issues. Considering the recent UK General Election, she believes that those in the business of advising new Governments will have much to gain from using data and AI to solve complex society-wide problems, streamline public services, and improve policymaking.

Aligning AI with human values remains one of the biggest challenges ahead as the technology becomes more advanced. Citing the work of Daniel Kahneman, Hannah warned of our tendency to take a hard question and subconsciously swap it for an easier one without even noticing the shift. She left us with a question: if the only language we have to communicate with AI behind the scenes is numbers, how do we effectively translate complex, context-laden instructions into a form that AI can interpret accurately?

This is the crux of our future challenge: ensuring that our technological advancements are always guided by our humanity.