AI Ethics Lessons from The Post Office Scandal

AI Ethics can learn lessons from the past

Catherine Breslin
Jan 18, 2024

The Post Office Scandal is in the news after a TV series brought it widespread attention here in the UK. If you don’t know the story, it’s about how the UK’s Post Office blamed subpostmasters for accounting shortfalls that were actually the result of bugs in their Horizon computer system. They aggressively prosecuted many of these subpostmasters to recover the phantom money. Some were given criminal convictions, and many were bankrupted or sent to jail. As it became apparent that these were computer errors and not human errors, the Post Office moved to cover up the story. Computer Weekly have been reporting on the scandal since 2009, and there’s an ongoing public inquiry to establish what happened.

There are many aspects to this story, including our legal system, private prosecutions, leadership at the Post Office, and Government procurement, that are likely to be written about in detail. But, at the core of the story is a piece of technology that didn’t work as well as it should have.

As AI booms, AI ethics is regularly in the news. But, AI is also computer software, and there’s a lot that AI ethics doesn’t need to reinvent. Many of the lessons we can learn from previous software failings are also going to be applicable to AI systems. Here are 5 lessons from this story that can be applied in the age of responsible AI.

1. Understand your Users

ITV’s recent TV series, Mr Bates vs The Post Office, puts us right in the shoes of the user, and clearly shows their difficulties using the Horizon system. The user interface was clunky & unintuitive, and very little training was provided. For ongoing problems there was a support helpline, but users found that to be unhelpful and opaque too.

Many of the users themselves — the subpostmasters — lacked confidence with and understanding of technology, which left them nervous and unsure about what they were doing.

Developers have the advantage of an intimate understanding of how technology works. They’re able to draw a clear line between a problem with the technology and a mistake of their own. Even when using software that they haven’t written, anyone who’s developed software has a good intuition about how it might work. Most of our users aren’t experts in technology and aren’t able to do the same. They’re left doubting whether they’re doing something wrong or whether the technology is at fault. To build technology responsibly, we have to understand our users.

2. Be Transparent

The Horizon system was not transparent to the subpostmasters. Once their accounts had been entered and the system showed a shortfall, there was no way to go back and see the transactions in Horizon to find the mistakes. That left the subpostmasters with no way to understand and correct the errors. Further, data about the internals of Horizon wasn’t always made available by the Post Office during court cases, so those being prosecuted had no way to prove their innocence. Transparency is a key part of building trust with users.
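To make the principle concrete: a more transparent design would let each branch inspect the individual transactions behind its balance. Here’s a minimal, hypothetical sketch in Python — not a description of how Horizon actually worked — of an append-only transaction log that users themselves can query:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Transaction:
    """One immutable ledger entry (hypothetical: not Horizon's schema)."""
    branch_id: str
    description: str
    amount_pence: int  # signed: positive = credit, negative = debit
    timestamp: str


class TransactionLog:
    """Append-only log: entries can be added and read, never edited or deleted."""

    def __init__(self) -> None:
        self._entries: list[Transaction] = []

    def record(self, branch_id: str, description: str, amount_pence: int) -> None:
        """Append a new entry; existing entries are never modified."""
        self._entries.append(Transaction(
            branch_id, description, amount_pence,
            datetime.now(timezone.utc).isoformat(),
        ))

    def entries_for(self, branch_id: str) -> list[Transaction]:
        """Let a branch see every transaction behind its balance."""
        return [t for t in self._entries if t.branch_id == branch_id]

    def balance_for(self, branch_id: str) -> int:
        """A balance is always the sum of inspectable entries."""
        return sum(t.amount_pence for t in self.entries_for(branch_id))
```

Because entries can only be appended and read, a disputed balance can always be traced back to the individual transactions that produced it — exactly what the subpostmasters were denied.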

3. Insist on Robust Software Development Processes

The BBC podcast The Great Post Office Trial touches on the sub-par quality of work and the engineering processes within the team that was building Horizon. There are many lessons from software development that can be brought to AI teams. Gathering requirements, testing, code reviews, bug tracking, quality assurance, documentation — these are all aspects of a robust software development process that can be incorporated into AI development as appropriate.
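As a small illustration of one of those practices, here’s a hypothetical reconciliation function with unit tests that pin down its expected behaviour — the kind of check that turns a silent accounting bug into a failing test before it reaches users:

```python
def reconcile(declared_pence: int, recorded_pence: list[int]) -> int:
    """Return the discrepancy between the system's recorded transactions
    and the declared takings: positive = shortfall, negative = surplus.
    (Hypothetical example, not Horizon's actual logic.)"""
    return sum(recorded_pence) - declared_pence


# Tests cover the edge cases as well as the happy path,
# such as an empty day's trading and exact agreement.
def test_reconcile():
    assert reconcile(0, []) == 0               # no trading, no discrepancy
    assert reconcile(1000, [600, 400]) == 0    # books balance exactly
    assert reconcile(900, [600, 400]) == 100   # 100p shortfall flagged
```

The function is trivial, but that’s the point: even trivial accounting logic deserves tests, because a discrepancy it reports can have serious consequences for the person on the other end.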

4. Understand the Purpose and Limits of Technology

It seems that there was a lack of understanding about technology from many of the people involved in this story. In particular, there was little reflection on the limits of technology and on what Horizon could and couldn’t be used for. Horizon was built as an accounting system; presumably it wasn’t designed with the intention, or the robustness, to be relied on in court. Yet the Post Office did rely on it in court. In part, this was due to a UK legal presumption that computer data is reliable, but that presumption itself derives from a poor understanding of software, and clearly will not be applicable to the stochastic AI systems of the future.

Keeping users educated about the limits and purpose of a system is important for anyone building AI. There’s a lot of hype about AI, and a lot of promise about what it might do in the future. But that doesn’t absolve us of our responsibility to understand and educate about the capability and limits of the systems we’re building today.

5. Understand how Technology Interacts with Organisations

There were three organisations involved in this story — the Post Office, the subpostmasters who ran local branches, and Fujitsu, the supplier of the Horizon software. The interaction between these three organisations is also a crucial part of the picture, and more about that is being revealed through the public inquiry.

In general, every organisation is different and has different institutional views about technology. Organisations purchasing software may tend to see the technology as the answer to a problem, one they can then be hands-off in managing. Hence they may fail to put the necessary process and governance around the software. Suppliers of software are keen to be paid for their work and to keep large and lucrative contracts, which influences their interactions with their customers. These dynamics can lead to oversights and a lack of accountability. Understanding the dynamics of the organisations involved in deploying a piece of AI software, not just the technology itself, is another crucial part of responsible AI governance.

Ultimately, most of the consequential decisions in this scandal were made by humans, not by machines. But the story shows how those humans were working within a system and how corporate entities can use technology to consolidate and entrench their power.

One of the subpostmasters talks about Horizon as being ‘like magic’. Our collective understanding about technology has moved on in the past two decades, but we now hear people talk about AI as if it were magic. Building AI responsibly is clearly going to be a challenge of the next decade. But it’s worth remembering that we aren’t starting from zero and there’s a lot we already know.


Catherine Breslin

Machine Learning scientist & consultant :: voice and language tech :: powered by coffee :: www.catherinebreslin.co.uk