In 2022, Wells Fargo came under scrutiny when its credit assessment algorithm was found to systematically assign higher risk scores to Black and Latino applicants compared with white applicants with similar financial backgrounds.1
The case brought to light one of the biggest challenges facing organizations as they rush to deploy AI systems: can AI really be trusted?
In Wells Fargo’s case, there was no malicious design; the AI simply learned from historical lending patterns that reflected decades of discriminatory practices, then perpetuated them at scale. But if AI is to be deployed successfully across security-critical sectors such as banking and finance, identity verification, and public infrastructure, such biases are unacceptable. Beyond potential regulatory penalties, incidents like these cause irreparable damage to customer relationships and erode the trust that organizations depend on for long-term success.
New regulations such as the EU AI Act, which carries fines of up to €35 million or 7% of global annual revenue for non-compliance,2 were introduced to give organizations guardrails for deploying AI systems safely and securely. Yet viewing safe AI purely through a compliance lens misses a much bigger opportunity.