Last week, the European Commission's High-Level Expert Group on artificial intelligence ("AI") presented its ethics guidelines for trustworthy artificial intelligence (the "AI Guidelines"). In this article, we discuss the aims of the AI Guidelines and some of the legal issues for employers and customer-facing businesses that use AI.
Code to joy
The underlying theme of the AI Guidelines is that AI will be a force for good if it is trustworthy. According to the AI Guidelines, trustworthy AI should be lawful, ethical and robust. The AI Guidelines propose seven key requirements to assess trustworthiness:
- Humans should have agency and oversight for the actions of AI.
- AI should be technically robust and safe.
- Private data should be used legitimately and governed responsibly.
- Decisions made by AI should be transparent and traceable.
- Bias should be eliminated from AI so as to avoid exacerbation of prejudice and discrimination.
- AI should be sustainable and environmentally friendly and for the benefit of all living beings.
- Mechanisms should be put in place to ensure accountability for AI systems and their outcomes, and to make those systems auditable.
The heart of the machine
Why does the EU think AI might not be trustworthy? AI is learning from data created and compiled as a result of the cumulative actions of humans. Our many biases (conscious and unconscious) are hard-baked into the data that forms AI's curriculum. Add to that the risk that AI might be used intentionally by the powerful among us in ways that aren't so good for the vulnerable among us, and the EU definitely has a point.
To put it another way: AI has all the makings of an A-star student. But with a biased textbook and a potentially exploitative teacher, it might end up more Paranoid Android than OK Computer.
The AI in the sky
For financial services firms, it is worth remembering that the regulators are hot on this topic. The FCA's first Insight podcast ran with a tagline asking whether ethics in AI is "the 600lb gorilla in the room". Given the FCA's overarching focus on improving culture, it's worth checking that any AI being used supports the spirit of the FCA's regulatory aims of preventing harm to consumers and markets (and employees).
In the UK, it is already the case that any AI used by a business which places employees or customers with a certain protected characteristic (such as sex, disability, age or race) at a disadvantage could, in certain circumstances, put that business in breach of the Equality Act 2010. Depending on how that person's data has been used by the AI, there could be GDPR implications too. But perhaps the biggest risk of using AI is reputational. The media loves a story about AI being discriminatory. The female doctor who was unable to use her gym swipe card to access the female changing rooms, because the software assumed that only men could be doctors, is one such example.
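The "disadvantage" described above is, at root, a measurable pattern in outcomes. As a purely illustrative sketch (not a legal test under the Equality Act, and with entirely hypothetical data), one common screening heuristic is the US "four-fifths rule": compare positive-outcome rates between groups and flag ratios below 0.8 for investigation.

```python
# Illustrative sketch of a disparate-impact screen on an AI system's
# decisions. Groups, outcomes and the 0.8 threshold are hypothetical /
# borrowed from US employment practice, not a UK legal standard.

def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g. hires, approvals) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher.
    Values below 0.8 are commonly treated as a red flag."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical decisions: 1 = positive outcome, 0 = negative outcome.
group_men = [1, 1, 1, 0, 1, 1, 0, 1]    # selection rate 0.75
group_women = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 0.375

ratio = disparate_impact_ratio(group_men, group_women)
print(round(ratio, 2))  # 0.5 — well below 0.8, so worth investigating
```

A check like this only surfaces a disparity; whether that disparity amounts to unlawful discrimination is a separate legal question turning on justification and context.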
The AI-madillo – hard on the outside, soft on the inside
The ethics of AI remains an evolving space. There are already "hard" legal risks surrounding AI, and now the focus is shifting towards the "soft" issues at its core. These are each likely to grow over time.
The High-Level Expert Group is running a pilot to gather practical feedback on how its seven-point assessment list can be improved. The assessment list will be reviewed in early 2020 and the EU Commission may then propose next steps.
Regardless of Brexit, the EU Commission is now the thought-leader in this space. If it proposes that the EU legislates in this area, expect the UK Government to be highly minded to follow suit.