In 1869, English judge Baron Bramwell rejected the notion that “because the world gets wiser as it gets older, therefore it was foolish before.” Financial regulators should follow the same reasoning when assessing financial institutions’ efforts to make their lending practices fairer using advanced technologies such as artificial intelligence and machine learning.
If regulators don’t, they risk holding back progress by encouraging financial institutions to stick to the status quo rather than actively looking for ways to make lending more inclusive.
The simple yet powerful principle Bramwell articulated underpins a central pillar of evidence law: the fact that someone has fixed a problem cannot be used against them to prove wrongdoing. In the law, this is known as the “subsequent remedial measures” doctrine. It encourages people to continuously improve products, experiences and outcomes without fear that their efforts will be used against them. While lawyers typically apply the doctrine to things like post-accident repairs, there is no reason it cannot apply to efforts to make lending algorithms fairer.
The Equal Credit Opportunity Act and Regulation B require lenders to ensure that their algorithms and credit policies do not unfairly deny credit to protected groups. For example, a credit underwriting algorithm could be considered unfair if it recommended denying loans to members of protected groups at higher rates than to other applicants when those differences in approval rates do not reflect differences in credit risk. Even if the differences did reflect credit risk, the algorithm could still be considered unfair if another algorithm could achieve a similar business outcome with smaller disparities; that is, if a less discriminatory alternative, or LDA, exists.
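To make the disparity concept concrete, here is a minimal sketch of how an approval-rate comparison between groups might be computed. This is an illustration, not the regulatory test itself: the numbers are hypothetical, and the comparison shown (a simple ratio of approval rates, sometimes called an adverse impact ratio) is just one common way analysts screen for disparities.

```python
# Illustrative sketch: comparing approval rates across groups for a
# hypothetical underwriting model's decisions. All numbers are made up.

def approval_rate(decisions):
    """Fraction of applicants approved; decisions is a list of booleans."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected_decisions, control_decisions):
    """Ratio of the protected group's approval rate to the control group's.
    Values well below 1.0 suggest a disparity worth investigating."""
    return approval_rate(protected_decisions) / approval_rate(control_decisions)

# Hypothetical decision outcomes (True = approved)
protected = [True] * 55 + [False] * 45   # 55% approved
control = [True] * 80 + [False] * 20     # 80% approved

air = adverse_impact_ratio(protected, control)
print(f"Adverse impact ratio: {air:.2f}")  # 0.55 / 0.80 ≈ 0.69
```

Under ECOA, a disparity like this would not by itself prove discrimination; the question is whether it is explained by legitimate differences in credit risk, and whether a comparably accurate alternative with a smaller gap exists.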
Advances in modeling techniques, especially those enabled by artificial intelligence and machine learning, have made it possible to de-bias algorithms and search for LDAs in unprecedented ways. Using AI/ML, an algorithm that would recommend denying Black, Hispanic and female loan applicants at much higher rates than white men can be adjusted to approve those groups at far more comparable rates without becoming materially less accurate at predicting the probability of default on a loan. Herein lies the catch: a lender that adopts an algorithm and later finds an LDA may worry that acknowledging the LDA’s existence will invite lawsuits from plaintiffs or enforcement actions from regulators.
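The core logic of an LDA search can be sketched in a few lines. The sketch below assumes each candidate model has already been scored for predictive accuracy and for group disparity (for example, the gap in approval rates); the candidate numbers and the 1% accuracy tolerance are hypothetical assumptions for illustration, and real searches involve far more sophisticated model training and fairness metrics.

```python
# A minimal sketch of LDA selection: among candidate models, find one whose
# predictive performance is within a tolerance of the incumbent's but whose
# group disparity is lower. All figures below are hypothetical.

def find_lda(incumbent, candidates, accuracy_tolerance=0.01):
    """incumbent and candidates are (accuracy, disparity) tuples, where
    disparity is e.g. the gap in approval rates between groups.
    Returns the least discriminatory acceptable candidate, or None."""
    acceptable = [c for c in candidates
                  if c[0] >= incumbent[0] - accuracy_tolerance
                  and c[1] < incumbent[1]]
    return min(acceptable, key=lambda c: c[1]) if acceptable else None

incumbent = (0.820, 0.25)  # 82.0% accurate, 25-point approval-rate gap
candidates = [
    (0.818, 0.10),  # nearly as accurate, far smaller gap: a viable LDA
    (0.790, 0.02),  # fairest, but sacrifices too much accuracy
    (0.825, 0.30),  # more accurate, but more disparate
]

print(find_lda(incumbent, candidates))  # (0.818, 0.1)
```

The lender's dilemma described above arises the moment this search returns a result: finding a viable LDA is good news for borrowers, but the lender may fear it reads as an admission about the incumbent model.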
This is not a theoretical problem. I have personally seen bankers and fair lending lawyers grapple with this issue. Lenders and lawyers who want to improve algorithmic fairness are held back by fears that the results of advanced LDA searches will be used to show that their prior practices fell short of ECOA compliance. Likewise, lenders worry that upgrading to a new, fairer credit model amounts to admitting that the previous model violated the law. As a result, lenders may be incentivized to limit themselves to perfunctory fair lending tests and LDA searches that merely validate the status quo.
It is precisely this scenario that Bramwell’s reasoning was meant to avoid. Economic actors should not be encouraged to shun progress for fear of implicating the past. Rather, as modern tools and technologies, including AI/ML, allow us to assess the fairness and accuracy of credit decisions more precisely, we should encourage their adoption. Of course, we must do so without condoning past discrimination. If a previous model was unduly biased, regulators should address it appropriately. But they should not use a lender’s proactive adoption of a less discriminatory model to condemn the old one.
Fortunately, the solution here is simple. Financial regulators should provide guidance that they will not use the fact that a lender has identified an LDA — or replaced an existing model with one — against the lender in supervisory or enforcement actions related to fair lending. Such recognition of the 19th-century common law doctrine that encourages repair and innovation would go a long way toward encouraging lenders to continually strive for greater fairness in their lending. This position would not excuse past abuses; rather, it would encourage improvement and serve consumers’ interests.