#Tech Innovation

Building AI systems you can trust

AI adoption is speeding up – but how can organizations build genuine trust in new AI systems? And what does it actually mean to have trustworthy AI? A new study led by Veridos provides some insight.

In 2022, Wells Fargo came under scrutiny when its credit assessment algorithm was found to systematically assign higher risk scores to Black and Latino applicants compared with white applicants with similar financial backgrounds.1

The case brought to light one of the biggest challenges facing organizations as they rush to deploy AI systems: can AI really be trusted?

In Wells Fargo’s case, the problem wasn’t malicious design; rather, the AI simply learned from historical lending patterns that reflected decades of discriminatory practices, then perpetuated them at scale. But if AI is to be successfully implemented across security-critical sectors such as banking and finance, identity verification, and public infrastructure, such biases are unacceptable. Beyond potential regulatory penalties, such incidents cause irreparable damage to customer relationships and erode the trust that organizations depend on for long-term success.

New regulations such as the EU AI Act, which carries fines of up to €35 million or 7% of global annual revenue for non-compliance,2 were introduced to put up guardrails for deploying AI systems safely and securely. Yet viewing safe AI purely through a compliance lens misses a much bigger opportunity.

Tackling AI threats with trustworthy AI

“Consumers, stakeholders, and regulators are increasingly demanding greater transparency and accountability from technologies like AI,” says Letizia Bordoli, AI Lead at Veridos. “This is particularly challenging because AI systems often operate as black boxes with complex dependencies and unpredictable behavior in new environments. As a result, organizations have a responsibility to deploy AI in a trustworthy way, especially in applications that can significantly affect people’s lives.”

The question many organizations are left with is how exactly to build AI systems that deserve trust. And how can trust in AI systems be reliably evaluated, quantified, and embedded into development processes?

What is trustworthy AI?

The first step organizations must take is understanding what trustworthy AI actually means. To support this, the AI community, led by the European Commission’s High-Level Expert Group on Artificial Intelligence, established seven principles that serve as a common reference point for trustworthy and ethically sound AI.

These principles, which became the foundation of frameworks such as the EU AI Act, include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.

However, these principles often remain abstract, offering little guidance for practical applications. Organizations need a framework to translate these high-level principles into measurable practices that can be embedded into development processes and organizational culture. 

The foundations of trustworthy AI

This is just one of the reasons why Veridos partnered with experts from Friedrich-Alexander University of Erlangen-Nuremberg to develop a comprehensive, metric-driven framework for evaluating and quantifying AI trustworthiness. Here are five key takeaways from the study:

  1. Trust must be engineered, not assumed

    Many organizations wait for trust issues to emerge – through scandals, failed audits, or regulatory action – before reacting. This is the wrong approach. Trustworthiness must be built into systems from the ground up, using specific evaluation methods tied to known risks. These can include the following (the fairness and membership inference checks are sketched in code after this list):

    • Group fairness metrics that uncover hidden discrimination by evaluating whether AI outcomes are distributed equitably across demographic groups (e.g. age, gender, ethnicity).
    • Saliency map robustness tests, which check whether AI explanations remain consistent when inputs change slightly, verifying the reliability of AI decision-making processes and supporting long-term trust.
    • Membership inference tests that detect privacy vulnerabilities by simulating whether an attacker could determine if specific data points were used to train the model, indicating potential data leakage and insufficient privacy protection.
  2. Trustworthiness is multi-dimensional and context-dependent

    There is no universal metric for trustworthy AI. The dimensions that matter most depend entirely on the application and stakes involved. High-stakes systems like identity verification may prioritize robustness and accountability above all else, while consumer-facing applications might emphasize transparency and human oversight to maintain user confidence.

  3. Trust isn’t static – it may change over time

    Most AI governance frameworks mistakenly treat evaluation as a one-time task. But AI systems are continuously evolving: models drift, adversaries develop novel attack methods, and operational environments change. Monitoring and continuous evaluation must become a standard part of AI operations, just as uptime monitoring is for cloud services. A simple drift check along these lines is sketched after this list.

  4. Quantification enables governance

    By translating principles like fairness and privacy into quantitative indicators, trustworthiness becomes auditable and accountable. This opens the door to meaningful AI risk management, compliance automation, and internal governance dashboards that provide real-time visibility into AI system trustworthiness (an illustrative metric roll-up follows this list).

  5. Creating confidence requires cross-disciplinary teams

    No single team can “own” trust. AI engineers, UX researchers, ethics scholars, security experts, and a wide range of other professionals must collaborate across the development life cycle to ensure trustworthiness is embedded throughout the system.
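
To make the evaluation methods from takeaway 1 concrete, here is a minimal sketch of a group fairness check in Python. It is illustrative only: the column names, the toy data, and the 0.10 threshold mentioned in the comment are assumptions, not taken from the study.

    import pandas as pd

    def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
        """Largest difference in positive-outcome rates between demographic groups."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    # Hypothetical scoring results: 1 = approved, 0 = declined.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
        "approved": [1,   1,   0,   0,   1,   0,   0,   1],
    })

    gap = demographic_parity_gap(decisions, "group", "approved")
    print(f"Demographic parity gap: {gap:.2f}")
    # An agreed threshold (e.g. 0.10) turns this number into a pass/fail audit check.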
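
The membership inference check from takeaway 1 can be approximated with an equally simple confidence-based probe. The sketch below uses synthetic data and a scikit-learn classifier as stand-ins for the production model and its training set; a real privacy audit would use more rigorous attack methods.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Stand-in data and model for illustration only.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Attack signal: the model's confidence in its own prediction. Overfitted models
    # tend to be noticeably more confident on the samples they were trained on.
    conf_members = model.predict_proba(X_train).max(axis=1)
    conf_non_members = model.predict_proba(X_out).max(axis=1)

    labels = np.r_[np.ones(len(conf_members)), np.zeros(len(conf_non_members))]
    attack_auc = roc_auc_score(labels, np.r_[conf_members, conf_non_members])
    print(f"Membership inference AUC: {attack_auc:.2f} (0.5 means no detectable leakage)")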
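
Takeaway 3 calls for continuous evaluation rather than a one-off sign-off. One simple building block is a periodic drift check that compares a reference feature distribution captured at go-live with the latest production window; the synthetic data and the alert threshold below are assumptions chosen for illustration.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(seed=42)
    reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # snapshot captured at go-live
    production = rng.normal(loc=0.3, scale=1.0, size=5_000)  # latest window, slightly shifted

    # Kolmogorov-Smirnov test: a small p-value suggests the two distributions differ.
    statistic, p_value = ks_2samp(reference, production)
    if p_value < 0.01:
        print(f"Drift alert: KS statistic {statistic:.3f}, p-value {p_value:.1e}")
    else:
        print("No significant drift detected in this window.")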
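
Takeaway 4 argues that quantification enables governance. One way to picture this is a small roll-up that checks each trust metric against an agreed threshold and reports a status a dashboard could display; the metric names, values, and thresholds below are hypothetical and would be set per application context.

    # Hypothetical metric readings; "higher_is_better" records each metric's direction.
    trust_metrics = {
        "demographic_parity_gap": {"value": 0.04, "threshold": 0.10, "higher_is_better": False},
        "membership_inference_auc": {"value": 0.56, "threshold": 0.60, "higher_is_better": False},
        "explanation_stability": {"value": 0.91, "threshold": 0.80, "higher_is_better": True},
    }

    def metric_passes(metric: dict) -> bool:
        """A metric passes if it sits on the right side of its agreed threshold."""
        if metric["higher_is_better"]:
            return metric["value"] >= metric["threshold"]
        return metric["value"] <= metric["threshold"]

    results = {name: metric_passes(m) for name, m in trust_metrics.items()}
    for name, passed in results.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    print(f"Overall: {sum(results.values())}/{len(results)} trust checks passing")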

Turning trust into a competitive advantage

Adopting these principles will help organizations design and build AI systems with trust embedded as a foundational layer. This proactive approach will set them apart from those that react to trust failures after the fact.

In a future economy increasingly underpinned by AI, there will be pressure to keep pace with every development for the sake of staying ahead. Rather than rushing to deploy the most advanced AI systems as quickly as possible, the true leaders will be those who take the time to sustainably build systems that stakeholders can trust.

Download the full study here.

Key takeaways

  • With AI adoption accelerating, organizations urgently need frameworks to build and evaluate trustworthy systems.
  • Trust must be engineered, not assumed. However, there is no universal definition of “trustworthy AI.”
  • Taking a proactive approach to compliance can create a competitive advantage. Organizations that build trustworthy AI systems will differentiate themselves from the competition.

  1. Wells Fargo Racial Disparity Case Heads to Class Action Decision, Bloomberg, 2024

  2. EU Artificial Intelligence Act, EU, 2025

Published: 26/08/2025
