Ethical and Unethical Discrimination in Credit Accessibility


Digitalisation allows customers to demand greater accountability from brands and companies. After public outcry against TRESemmé’s racist advert, its products were pulled from major retailers in South Africa. Whilst businesses are able to control the legality of their operations, even the most ethical intentions are exposed to the risk of a public relations crisis. If you have wondered why allegations of racism against South African banks have not blown up in TRESemmé fashion, it is because of the robust analytical construction and thorough documentation of the banks’ processes. With these defining attributes, the decisions made are defensible and therefore better able to withstand public outcry. Credit scoring is a well-documented, tool-based numerical assessment which gives a bank an idea of an applicant’s risk. It is necessarily a discriminatory process, and for the remainder of this article we look at the ethics of credit discrimination in developing and using scorecards.

Ethical and unethical credit accessibility in the past

Historically, businesses gave out credit based on the store or bank manager’s assessment of the customer’s character and their personal relationship with the customer. However, assessing customers on personal knowledge became impossible as urbanisation and the number of transactions increased. In the 1800s the Mercantile Agency was founded in America to gather information on customers’ characters and assets. However, the information gathered by its white male investigators was often influenced by their racial, class, and gender biases. This paved the way for unfair discriminatory practices such as redlining and reverse redlining (targeting minorities and overcharging them, as opposed to outright denying them credit).

Neighborhoods with minority occupants were marked in red — hence "redlining" — and considered high-risk for mortgage lenders.

In America, the Fair Credit Reporting Act (1970) required credit bureaus to remove information relating to race, sexuality and disability (in South Africa, the National Credit Act and the Constitution ban such discrimination). However, it was still difficult to compare and interpret the credit information stored. Thus in the 1980s, Fair, Isaac and Company (FICO) created a credit scorecard which gave customers a standardised score, and creditors could now use a standardised process to assess potential customers. The modern era of scorecards and ethical discrimination had begun. Additionally, because scorecards automated decision making, they reduced the impact of unconscious human biases on granting credit (an Israeli study found prisoners seen by a judge just after lunch were far more likely to be paroled than those seen just before).
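To make the idea concrete, a traditional scorecard is essentially a lookup table: each applicant attribute falls into a band, each band carries points, and the points are summed into a score. The attributes, bands, point values, and applicant below are entirely hypothetical, chosen only to sketch the mechanism — they are not FICO's or any bank's actual scorecard.

```python
# Minimal sketch of a points-based credit scorecard (illustrative only).
# Each attribute maps to bands of (low, high, points); high=None means open-ended.
SCORECARD = {
    "years_at_current_address": [(0, 2, 10), (2, 5, 20), (5, None, 35)],
    "months_since_missed_payment": [(0, 6, 0), (6, 24, 25), (24, None, 45)],
    "credit_utilisation_pct": [(0, 30, 40), (30, 75, 20), (75, None, 5)],
}

def score(applicant: dict) -> int:
    """Sum the points for the band each attribute value falls into."""
    total = 0
    for attribute, bands in SCORECARD.items():
        value = applicant[attribute]
        for low, high, points in bands:
            if value >= low and (high is None or value < high):
                total += points
                break
    return total

applicant = {
    "years_at_current_address": 3,      # -> 20 points
    "months_since_missed_payment": 30,  # -> 45 points
    "credit_utilisation_pct": 20,       # -> 40 points
}
print(score(applicant))  # 105
```

The same applicant always gets the same score, which is what makes the process standardised, auditable, and defensible in a way a manager's gut feel never was.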

Ethical and unethical discrimination in the present

Although unfair discrimination in credit, such as redlining, has been outlawed, its socio-economic remnants persist across generations. In South Africa, lack of access to land and to credit have been identified as two key pillars underpinning apartheid, and this legacy translates into many South Africans having a poor credit repayment history in the post-democratic era, with nearly 18% of public servants affected by garnishee orders in 2009. A continued “credit apartheid” is argued to be exacerbated by a “lack of attention to creditworthiness”. The National Credit Regulator has responded by emphasising affordability checks when extending credit. Creditors have also developed so-called ‘thin’ scorecards for people starting to use credit. While not as robust as a traditional credit scorecard, they allow creditors to extend credit to applicants they would previously have declined.


In developing a fair model, the impacts of historical legislation may still be felt today. A scorecard is only as impartial as the data it is based on. Data bias may seem counter-intuitive, since data should capture the truth and reflect existing patterns. However, data depends on the context in which it was gathered. Correlation does not imply causation, but this nuance may not be fairly reflected in a model. Additionally, the data may be insufficient to fully capture the population it represents. Digital giants such as Facebook and Microsoft have recognised the existence of algorithmic discrimination and have initiated equity teams to address such constructed biases.
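The mechanism behind such data bias can be sketched in a few lines. In the hypothetical simulation below, residential segregation makes postcode a proxy for a protected group, and historical exclusion means one group lacks repayment history. A decision rule that never sees the group variable still produces sharply different approval rates. All numbers, group labels, and the decision rule are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical population shaped by two historical patterns:
#  - segregation: group B mostly lives in postcode 2, group A in postcode 1
#  - exclusion: group B was historically denied credit, so lacks repayment history
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    if group == "A":
        postcode = 1 if random.random() < 0.9 else 2
    else:
        postcode = 2 if random.random() < 0.9 else 1
    has_history = random.random() < (0.8 if group == "A" else 0.3)
    population.append((group, postcode, has_history))

def approve(postcode: int, has_history: bool) -> bool:
    """A naive rule using only postcode and history -- no group variable at all."""
    return has_history and postcode == 1

def approval_rate(group: str) -> float:
    members = [p for p in population if p[0] == group]
    approved = [p for p in members if approve(p[1], p[2])]
    return len(approved) / len(members)

print(f"group A approval rate: {approval_rate('A'):.0%}")
print(f"group B approval rate: {approval_rate('B'):.0%}")
```

Even though the rule is "colour-blind", group A is approved at many times the rate of group B, because postcode and credit history carry the historical pattern into the model. This is why equity teams audit model inputs, not just the presence of protected attributes.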


Scorecards allow for ethical, or fair, discrimination when extending credit, especially compared to past practices. However, a scorecard is only as good as the data it is built on. Past discrimination has been shown to have a lasting effect, so care should be taken to avoid biases embedded in the data itself. Incline has experience building a variety of models and scorecards, including consumer credit, claim risk, and response models. You can find more information about how Incline could help your company on our website.