Bias in AI: keeping machine learning in fintech ethical

Left unchecked, biases can derail technology builds, organisational teams and culture. At Xinja, we’re interested in getting these things right, which is why we’re taking an interest in bias! We’re also very data driven, so bias in data analysis and the interpretation of results matters to us, particularly its impact on AI. In this blog we explore AI bias, and practical ways to address this Achilles heel. Worldwide spending on AI and cognitive systems is set to grow to about $52.2 billion in 2021, and alongside that growth there have been a number of high-profile cases highlighting the fact that AI has a bias problem. We unpack bias in AI – and what we can do about it as an industry.

Addressing AI bias starts with understanding how bias is introduced into AI systems in the first place. As AI can get meta quite easily 😉 we bring these themes to life with some handy examples.

Bias in AI is garbage in, garbage out

Theoretically, machines are supposed to be unbiased, but recent years have shown that even algorithms can be prejudiced. This occurs when machines are fed data that reflects society’s prejudices and then mimic them – from antisemitic chatbots to racially biased software. In the Guardian article ‘How AI is learning all our worst impulses’, Kristian Lum, lead statistician at the non-profit Human Rights Data Analysis Group (HRDAG), puts it:

“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate” – Kristian Lum, Lead statistician, HRDAG

Let’s look at the different sources of bias in AI:

Data-driven bias

Data-driven bias occurs when a system’s output mirrors the skews in the data it was trained on. Examples abound in image recognition through deep learning and in social media algorithms – you only have to look at Facebook’s algorithm to see this at play.

Bias through action

Bias through action derives from the biases of the users driving the interaction. Remember Tay? Microsoft’s chatbot spent a day learning from Twitter and began spouting antisemitic messages.

Emergent biases

Emergent biases occur where information is skewed toward a user’s existing belief set, aka the ‘echo chamber’. Algorithms can also carry built-in biases because they are created by individuals with conscious or unconscious preferences that may go undiscovered until the algorithms are used, and potentially amplified, publicly.

While more light has been shone on the problem recently, some feel:

“It’s not an issue addressed enough in the broader tech community, let alone in research at universities or the government and law enforcement agencies that implement AI.”- Liz Webber, Journalist at Forbes

With $20-30bn invested in AI last year, according to McKinsey, how do we stay vigilant to the bias in the AI we create?

Data-driven bias in financial services AI

There has been concern expressed in financial services that, if left entirely to an algorithm, certain inferences would be made about demographic groups, e.g. historical credit scores assume that certain demographics, given their socio-economic factors, are more likely to default on their payments. Without positive discrimination, algorithms effectively apply the prejudice of a cohort, marginalising segments in the process.

What we’re learning from insurtech (car insurers applying new technologies like AI), however, is that there are alternative ways to assess behaviour.

“The roll-out of machine learning to insurance, in theory, represents a chance to reset and create a more transparent system.” – Insurtech Cover’s CTO & Co-Founder, Anand Dhillon.

Historically, an insurance premium has been a representation of your perceived risk, based on the historical attributes insurers analysed. The use of claims history to set current car insurance premiums is a prime example of how the driving behaviours of others unfairly push up costs for those in the same cohort, e.g. an under-25 male is assumed to be more reckless because of the claims history of his cohort.

In this way, the data is static and not personalised in any way, and it does not factor in that this under-25-year-old could be a very safe driver, or that he can improve his driving over time with experience and training.

With so much scaremongering about bias in AI, we’ve summarised practical strategies to turn artificial intelligence into a force for good:

How can technologists remove bias in AI?

1. Adopt dynamic, highly personalised behaviour systems like telematics

Some of AI’s problems can be resolved by using dynamic datasets to judge people’s behaviour in real time. In the insurtech world, telematics is already in place: it tracks actual driving rather than rating on static characteristics. Over time that will, in theory, be more reflective of how risky a driver someone is. Then you can start taking into account things like the hours they drive.

For example, if it’s at night, it’s harder to see, so they’re probably a slightly higher risk. You can look at whether drivers tend to accelerate or brake sharply, whether they take long or short trips, where they drive, where they park their car, etc. In other words, it’s based on the individual’s behaviour, not the cohort’s.

“If it’s being implemented through behavior-based systems using telematics, it’s probably making it better.” – Insurtech Cover’s CTO & Co-Founder, Anand Dhillon.
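To make this concrete, here’s a minimal sketch of how a behaviour-based score might be computed from telematics data. Everything in it is an assumption for illustration – the `TripSummary` fields, the weights and the caps are made up, not any insurer’s actual rating model:

```python
from dataclasses import dataclass

@dataclass
class TripSummary:
    """Hypothetical telematics summary for one driver over a period."""
    km_driven: float
    night_km: float                     # kilometres driven after dark
    hard_brakes_per_100km: float
    speeding_events_per_100km: float

def behaviour_risk_score(t: TripSummary) -> float:
    """Toy behaviour-based risk score in [0, 1]; higher means riskier.

    The weights and caps are illustrative only - a real insurer would
    fit them to claims outcomes rather than hand-pick them.
    """
    night_share = t.night_km / t.km_driven if t.km_driven else 0.0
    return round(
        0.4 * min(t.hard_brakes_per_100km / 10, 1.0)        # harsh braking
        + 0.3 * min(t.speeding_events_per_100km / 10, 1.0)  # speeding
        + 0.3 * night_share,                                # night driving
        3,
    )

# A careful under-25 driver is scored on his own behaviour, not his cohort's
safe_driver = TripSummary(km_driven=800, night_km=40,
                          hard_brakes_per_100km=1, speeding_events_per_100km=0)
print(behaviour_risk_score(safe_driver))  # 0.055 - low risk despite the age bracket
```

The design point is that every input describes what this driver actually did, so a careful under-25 driver isn’t penalised for his cohort’s claims history.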

Opportunity to set individual rates

With those data-rich systems, pricing could become more customised to the individual and less generalised based on the category they fall into:

  • Usage-based car insurance – only pay for the kilometres you drive
  • Savings for driving at safer times – pay less for driving during the day
  • Savings for good driving habits – rarely brake suddenly or never speed and you could save

Opportunity to improve your rates

Christopher Chisman-Duffy, Australia and New Zealand strategic sales manager at TomTom Telematics, believes gamification is one way to engage employees to drive and treat a company car as if it were their own.

“Essentially, gamification helps to simplify the process of changing driving behaviour, incentivising good habits, rather than criticising the bad, as well as improving employee engagement.” – Christopher Chisman-Duffy, Sales Manager, TomTom Telematics

Bringing telematics back to banking

If this logic were applied to savings behaviour, you could track people’s saving and spending behaviours in real time and offer individual interest rates based on live data. A customer could be given the opportunity to improve their savings patterns with behavioural nudges, and receive even better rates.
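As a thought experiment, here’s what such a behaviour-based savings rate might look like. The base rate, bonus cap and six-month window below are hypothetical parameters, not anything Xinja offers:

```python
def personalised_rate(base_rate: float,
                      months_with_net_savings: int,
                      window_months: int = 6,
                      max_bonus: float = 0.50) -> float:
    """Toy individual interest rate (% p.a.): the base rate plus a bonus
    that grows with how consistently the customer saved over the last
    `window_months`. All figures are illustrative, not an actual rate.
    """
    consistency = months_with_net_savings / window_months
    return round(base_rate + max_bonus * consistency, 2)

# Saved in 5 of the last 6 months -> earns most of the 0.50% bonus
print(personalised_rate(base_rate=2.00, months_with_net_savings=5))  # 2.42
```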

2. Train out the bias

Let’s say you want a system not to be biased on gender, for example. You can pull gender out of the data, train the system, then put gender back in for a test subset and see if there’s any variation between male and female outcomes.

“It’s not that these algorithms are inherently biased, but rather the data that these algorithms are trained on are not representative due to the often perverse and over-simplistic datasets that are so easily accessible from the web.” – Stuart Loxton, Xinja’s Data Scientist
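Here’s a minimal sketch of that test using scikit-learn, assuming a hypothetical lending dataset – the file name, columns and model choice are illustrative only:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical lending data: numeric features, a 'gender' column and
# an 'approved' label (file name and columns are made up for the example)
df = pd.read_csv("applications.csv")
features = df.drop(columns=["gender", "approved"])   # train WITHOUT gender

X_train, X_test, y_train, y_test = train_test_split(
    features, df["approved"], test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Reintroduce gender for the held-out subset and compare outcomes
test = X_test.copy()
test["gender"] = df.loc[X_test.index, "gender"]
test["predicted"] = model.predict(X_test)
print(test.groupby("gender")["predicted"].mean())    # approval rate by gender
```

A large gap in predicted approval rates between groups suggests other columns are acting as proxies for gender – exactly the kind of hidden bias you would then investigate.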

3. Plug in tools that detect bias

The tech behemoths are building tools to detect bias. IBM has launched a tool which will analyse how and why algorithms make decisions in real time. The Fairness 360 Kit will also scan for signs of bias and recommend adjustments. Often algorithms operate within what is known as a “black box” – meaning their owners can’t see how they are making decisions. The IBM cloud-based software will be open-source, and will work with a variety of commonly used frameworks for building algorithms. Other tech giants are also following suit. Microsoft said in May that it was working on a bias detection toolkit and Facebook has also said it is testing a tool to help it determine whether an algorithm is biased.
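To illustrate the kind of check these toolkits automate, here’s a hand-rolled version of one common metric, disparate impact: the ratio of favourable-outcome rates between groups. Toolkits like IBM’s Fairness 360 package metrics like this (and many more) behind a single API; the data below is made up:

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     unprivileged, privileged) -> float:
    """Ratio of favourable-outcome rates between two groups.
    A widely used rule of thumb flags values below 0.8 for review.
    """
    rate_u = df.loc[df[group_col] == unprivileged, outcome_col].mean()
    rate_p = df.loc[df[group_col] == privileged, outcome_col].mean()
    return rate_u / rate_p

# Made-up loan decisions: 1 = approved, 0 = declined
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,   0,   0,   1,   1,   1,   1,   0],
})
print(disparate_impact(decisions, "gender", "approved", "F", "M"))  # 0.5 / 0.75 ≈ 0.67
```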

4. Monitor AI closely via algorithm audits and bias libraries

As AI becomes ever more ubiquitous, there is now a small but growing community of entrepreneurs, data scientists and researchers working to audit AI. Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing, describes an algorithm audit as:

“Delving into what it is doing to all the stakeholders in the system in which you work, in the context in which you work. I want to think more about externalities, unforeseen consequences. I want to think more about the future.” – Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing

There are libraries and third-party solutions that will check your existing algorithms for fairness.

In the Wired article ‘Want to prove your business is fair? Audit your algorithm’, senior writer Jessi Hempel puts it:

“Having a third-party seal of approval is good marketing, like the “organic” sticker on milk” – Jessi Hempel, Senior Writer, Wired

5. Share your learnings

Open source communities are emerging, aimed at sharing AI bias findings more widely. One such community is the Algorithmic Justice League, founded by Joy Buolamwini, a graduate researcher at the MIT Media Lab; it’s a place where anyone who cares about fairness can help fight the ‘coded gaze’. You can report bias, request audits, become a tester and join the ongoing conversation.

6. Employ a chief ethicist

Discussions around AI and ethics are on the rise. News mentions of AI and ethics increased ~5000% from 2014 to 2018, when they reached over 250 mentions in Q3’18. Google’s recent unveiling of Duplex, an AI assistant that can make phone calls and sounds and interacts like a real human, has already sparked ethics debates over whether or not Duplex needs to identify itself as an AI when speaking to real people. Employing an ethicist can help your business start thinking more conscientiously about the social impact of the technology you’re developing. In a USA Today article, journalist Don Haider describes the role of chief ethicists:

“Chief ethicists could help executives think through difficult, critical decisions. They could help develop ethical guidelines for companies, even a code of ethics. And they could provide company-wide training on ethical decision-making.” – Journalist Don Haider – USA Today

7. Diversify the talent pipeline

While solutions 1-6 address issues in the build phase of AI, one school of thought holds that the problem needs to be tackled much earlier: by fixing the lack of diversity in the AI talent pipeline. There is a real risk of AI being built by one demographic, leading to products that cater to only one market. To unpack the lack of diversity in AI teams, we need to look at fixing the imbalance in the pipeline of AI students coming through the ranks.

[Chart: What happened to women in computer science? Source: Engadget]

In the popular Recode Decode podcast, Kara Swisher discussed tech ethics with Y Combinator president Sam Altman:

“The most skewed field I know of right now is machine learning PhDs, which are by graduation rates, 98, 99 percent men”
– Sam Altman, Y Combinator President

Academia and tech companies need to encourage girls, and diversity in general, into STEM, data science and AI. One approach is to better promote career opportunities to a broader audience, in a more practical way, as Google’s STEM platform ‘Made with Code’ has started to do. AI4ALL is an initiative making strides in this area; it exposes girls, low-income students, racial minorities and those from diverse geographic backgrounds to the possibilities of AI. As an industry we need to encourage, recruit and nurture diverse talent in product design and tech teams. As highlighted in our Xinja women series – Taking it on, one industry at a time – cognitive diversity is essential.

In a recent CNBC article ‘A.I. has a bias problem that needs to be fixed: World Economic Forum’:

“Artificial intelligence has a bias problem and the way to fix it is by making the tech industry in the West much more diverse.” – Kay Firth-Butterfield, Head of AI and machine learning at the World Economic Forum

With great technology comes great responsibility

Developing AI involves much more than writing the code.

“The ability to ask the right question is more than half the battle of finding the answer.” – Thomas Watson, IBM Founder

Applying machine learning appropriately and ethically is about asking the right questions, understanding the limitations of these models, and pairing the right algorithm with the right problem. Finally, these systems need careful monitoring, investigation and alerting, just as technologists apply to other everyday software.

What do you think? How are you tackling the AI bias blind spot? Join the conversation and tell us your thoughts and ideas at our forum.

Sarah May is Marketing and Community Lead at Xinja. 

The content above does not represent any form of advice and Xinja has obviously not considered your individual circumstances in preparing this. It is simply a few thoughts on money to get the conversation started.

