As banks invest in artificial intelligence (AI) solutions to improve their services, they must understand how AI bias can influence their operations, public perceptions, and their customers’ lives.

Halfway through 2020, there’s one topic that remains at the forefront of global politics, business, and science: bias. Recent events have motivated organized groups and individuals to push back against long-standing biases in several institutions, from law enforcement to healthcare to financial institutions (FIs).

But to address bias in the digital age, we have to look further than human biases. We have to look at machines, specifically artificial intelligence (AI).

As banks and other FIs invest in AI solutions, they must understand AI bias and how it can affect their industry, customers, and brand.

FIs that commit to tackling AI bias and building more ethical systems stand to secure loyal customers, emerge as industry leaders, avoid regulatory penalties, and sidestep the PR nightmare that a public bias scandal could cause. It’s a challenging and complex goal, and one that should be pursued carefully and correctly. The following outlines what FIs need to know about AI bias, why ethical AI is important, and what you can do (and should not do) to address it.

What Is AI Bias and Why Does It Matter?

An AI system is biased when it produces decisions that disproportionately impact certain groups. Because AI is still relatively new in the banking sector, ethical AI might not be at the center of discussions, but it should be.

Unless FIs consider ethical AI at design and implementation, they will likely create biased systems. Take the hypothetical example of AI-based fraud prevention. Bias can make your FI’s fraud prevention system three times more likely to flag or decline legitimate transactions from cardholders in poorer or minority neighborhoods than from cardholders in wealthier, white communities. Customers in poorer communities might not have a second credit card to fall back on, and may have few bank branches in their neighborhoods to visit in person. Without a working credit card, they cannot pay for vital services such as rides, groceries, or other essential purchases. These customers will need to call customer service centers at a higher rate than their wealthier counterparts and will grow increasingly frustrated with your bank.
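To make that disparity concrete, here is a minimal sketch in Python (the decision log, group labels, and column names are invented for illustration, not taken from any real system) that computes the false decline rate per group and the ratio between groups:

```python
import pandas as pd

# Hypothetical decision log from a fraud prevention system.
# "group" is the demographic group, "declined" is the system's decision,
# "is_fraud" is the true label. All values are invented for illustration.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "declined": [1, 1, 1, 0, 1, 0, 0, 0],
    "is_fraud": [0, 0, 0, 0, 0, 0, 0, 0],
})

# False decline rate per group: the share of legitimate transactions
# that the system declined.
legit = decisions[decisions["is_fraud"] == 0]
false_decline_rate = legit.groupby("group")["declined"].mean()
print(false_decline_rate)

# A ratio far from 1.0 signals disparate impact; in this toy log,
# group A's legitimate transactions are declined three times as often.
print("ratio:", false_decline_rate.max() / false_decline_rate.min())
```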

Why Banks Should Care About AI Bias

Experiences like these ultimately reflect poorly on FIs, which is why they need to invest in a fair decision-making process that treats all customers equally. The challenge for FIs is that there is no universally accepted definition of fair decision-making. FIs must consider a variety of social and ethical contexts and understand how their AI models could harm their customers by producing overwhelmingly unfair or biased decisions for certain groups.
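To illustrate why no single definition suffices, here is a small, hypothetical sketch (all labels and predictions are invented) comparing two common fairness criteria: demographic parity, which asks for equal approval rates across groups, and equal opportunity, which asks for equal true positive rates. The same set of decisions can satisfy one and violate the other:

```python
import numpy as np

# Invented outcomes: y_true is whether the customer was actually
# creditworthy, y_pred is the model's approval, g is the group label.
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
g      = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for grp in ("A", "B"):
    m = g == grp
    approval_rate = y_pred[m].mean()            # demographic parity view
    tpr = y_pred[m & (y_true == 1)].mean()      # equal opportunity view
    print(f"group {grp}: approval {approval_rate:.2f}, TPR {tpr:.2f}")
```

In this toy data, every creditworthy customer in both groups is approved (equal opportunity holds), yet overall approval rates differ sharply (demographic parity fails). Which reading counts as “fair” depends on the social context the FI adopts.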

But how do AI systems become biased in their decision-making in the first place? Bias arises at different stages: in how data is represented, in how social context is encoded in the data, in how different groups are represented or labeled in training samples, throughout the machine learning pipeline, and even from the human beings who build the systems. And it’s not just the machine learning models that are vulnerable. Bias can also arise from other corners of the banking system, including an FI’s internal rules and the human experts in the loop responsible for making decisions, such as fraud analysts.
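One simple check at the training-data stage, sketched below with invented counts and population shares, is to compare each group’s share of the training sample against its share of the customer population; ratios well below 1.0 flag under-representation before a model is ever trained:

```python
import pandas as pd

# Hypothetical group counts in the training sample, and the shares
# those groups actually hold in the customer population.
train_counts     = pd.Series({"A": 9000, "B": 1000})
population_share = pd.Series({"A": 0.70, "B": 0.30})

train_share = train_counts / train_counts.sum()

# A representation ratio well below 1.0 means the group is
# under-sampled, a common source of downstream bias.
print(train_share / population_share)
```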

Avoiding AI Bias Pitfalls to Create Ethical AI

Addressing AI bias can be a complicated task for banks. A common misconception is that the issue can be solved with a simple fix. As you work to address AI bias, it is important to understand the wrong ways to approach the issue.

Your first instinct may be to hide or withhold sensitive demographic data, such as race and gender. This approach seems like an obvious solution. After all, if the AI can’t view specific characteristics, it can’t produce biased results, right? Actually, this is a misconception. AI models may use other attributes that act as proxies for group-specific behaviors and activities and, therefore, may produce “sexist” or “racist” predictions even without knowing gender or race explicitly. For example, your model can infer a customer’s gender from their purchasing history or the Instagram posts they like. It could also infer a customer’s race from their zip code, education, or income. What’s more, by not collecting or monitoring protected group data, it is impossible to compare AI predictions across groups and gauge whether some groups are more impacted by bias than others.
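One way to test whether such proxies exist, sketched below with synthetic data and scikit-learn (the feature names and correlations are invented for illustration), is to try to predict the withheld attribute from the remaining features. If that works well above chance, the features leak the attribute:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic protected attribute and features. zip_income and education
# are constructed to correlate with the group; purchases is not.
group      = rng.integers(0, 2, n)
zip_income = 1.5 * group + rng.normal(size=n)
education  = 1.0 * group + rng.normal(size=n)
purchases  = rng.normal(size=n)

X = np.column_stack([zip_income, education, purchases])

# If the "hidden" attribute is recoverable well above the 0.5 chance
# level, a model can reconstruct it through proxies.
accuracy = cross_val_score(LogisticRegression(), X, group, cv=5).mean()
print(f"protected attribute recoverable with accuracy ~{accuracy:.2f}")
```

If the withheld attribute can be recovered this way, simply deleting the column clearly hasn’t removed the information from the data.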

In other words, what appears to be the most obvious strategy for addressing AI bias could backfire. Some companies are already learning this lesson the hard way. Tackling AI bias requires a deliberate, thoughtful approach that ultimately helps your customers access your bank’s full range of services.

AI Bias Is Becoming a Governmental and Industry Concern

Regulators are taking notice and working to understand how AI bias manifests in financial services. In the U.S., the House of Representatives held hearings earlier this year about how regulators and the industry can more effectively address the issue, while the European Commission released a whitepaper outlining how corporations operating in the EU should approach AI responsibly. These developments indicate that banks must take AI bias seriously. And they must do so now.

Key Takeaways

FIs need to keep these key lessons in mind as they approach AI bias within their organizations.

  • Understand AI bias: AI bias occurs when an AI system – which can include rules, multiple ML models, and humans in the loop – produces prejudiced decisions that disproportionately impact certain groups. FIs that fail to address bias and implement changes to their AI systems could unfairly decline new bank account applications, block payments and credit cards, and deny loans and other vital financial services and products to qualified customers because of how their data is treated or labeled.
  • Know that withholding information won’t work: Hiding sensitive information from the system (such as race, gender, or age) doesn’t guarantee fair outcomes and can actually backfire. Deliberately not collecting sensitive information undermines the ability to obtain reliable bias assessments. The problem of AI bias runs deep and requires an attentive, layered fix.
  • Monitor continuously: If we don’t measure and prevent bias, it will inevitably creep in and hurt users, reputations, and bottom lines. The first step toward mitigation is continuously auditing for bias and fairness and understanding the causes of any disparate decisions. Because AI systems are dynamic and run “forever,” bias can surface at any time (a minimal monitoring sketch follows this list). Unaddressed AI bias is a recipe for FIs to lose customers, damage their public reputations, and face legal action.
  • Build ethical AI systems: Because of the scale of automation and the reach of AI, it can amplify existing biases in our societies. As trusted institutions, banks must address the problem of bias by building and implementing ethical AI systems.
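As promised in the monitoring takeaway above, here is a minimal sketch of a continuous bias audit (the data is simulated and the 0.10 alert threshold is an invented policy choice, not an industry standard). It recomputes a simple decline-rate gap over each batch of decisions and flags drift:

```python
import numpy as np

rng = np.random.default_rng(1)

def decline_rate_gap(declined, groups):
    """Absolute gap between the highest and lowest group decline rates."""
    rates = [declined[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Illustrative alert threshold an FI might set by policy.
ALERT_THRESHOLD = 0.10

# Simulate five daily decision batches; bias drifts in on day 3.
for day in range(5):
    groups = rng.integers(0, 2, 1000)
    decline_prob = 0.05 + (0.15 if day >= 3 else 0.0) * groups
    declined = rng.random(1000) < decline_prob
    gap = decline_rate_gap(declined, groups)
    status = "ALERT" if gap > ALERT_THRESHOLD else "ok"
    print(f"day {day}: decline-rate gap {gap:.3f} [{status}]")
```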

Ethical AI Resources

Now is the time for financial institutions to learn more about ethical AI. A good place to start is to align with technology partners that provide the tools needed to monitor and mitigate AI bias. Feedzai has been developing state-of-the-art research and products to foster AI that is both efficient and ethical. Stay tuned for more details in future posts about ethical AI.

Further, there are good educational resources available that can help FIs gain a deeper understanding of ethical AI.

AI promises untold economic and social benefits. Where will we be ten, fifteen, twenty years from now because of advances made by AI? While it’s exciting to consider the possibilities, we must also acknowledge the challenges, not the least of which is AI bias. We are forging the future of industry and society. Appreciating the magnitude of the job at hand, we must not just build AI that is efficient; it must be ethical as well.

Feedzai’s FATE research group demonstrates our commitment to ethical AI. FATE stands for fairness, accountability, transparency, and ethics. To learn more visit Feedzai Research.