
The Future of Decision-Making: AI, Trust, and Transparency

Trust is crucial for important decisions in life, business, and society. Trust in data. Trust in decision-makers. And trust in the procedures that govern critical decisions that impact lives. As society contemplates recreating human intelligence in rapidly advancing software, we will need to revisit aspects of how critical decisions are made in the first place. No doubt, enormous progress is being made in the multi-decade effort to imitate the brain and its mysterious powers. We can store and retrieve vastly more information in a computing system than an individual could ever hope to memorize or recall. We can solve mathematical equations of increasing complexity, with a precision no human could achieve. And with generative AI, we see glimmers of a future where software itself might create novel and useful artifacts.

But can AI be trusted? Specifically, can it be trusted to replace our legacy methods in ways that replicate the evidentiary standards and earned trust in outcomes we rely upon today?

It is tempting to sound the alarm and pass judgment on AI systems. A quick glance at the news reveals cynicism simmering around this topic. For one, it’s well known that many generative systems are built to produce plausible answers and can hallucinate, regardless of the truth. And, of course, anyone who appreciates cinema of a certain era knows that sentient killer robots are…undesirable. However, before condemning the potential of AI and further committing to legacy solutions for sensitive processes, it’s worth considering an inverted question: for the most critical and nuanced decisions in society, do we blindly trust an individual person or a singular non-AI mechanism to make the right call? Consider some examples.

In justice systems, rather than trusting a single arbiter of judgment, many societies have developed means of evaluating evidence and circumstances. Decision-making proceeds with transparency and, often, group-based decisioning mechanisms grounded in thresholds of required evidence. The reason? In many cases, the assessment of truth is both nuanced and of grave importance to those involved. There are even methods of appealing decisions that remain contested despite these efforts. In this domain, the importance of making the right decision outweighs expediency.

In drug development, while we have credible experts with strong beliefs in the potential of new chemical compounds, we do not simply trust a solitary expert opinion and release a medicine into the market. Instead, multi-staged procedures exist to iterate through evaluations of safety, dosage, and other factors. At each stage, evidence and context are generated and reviewed, with gated decision-making mechanisms.

In the world of finance, many scenarios exist where nuanced evidence must be reviewed to make business decisions. Large-scale capital allocation and business acquisitions can require layers of deal and project analysis which integrate outcome projections, risk, and business context to ensure alignment with strategic objectives. Due diligence processes vary, but in most organizations, efforts are made to ensure a single fallible expert is not naively trusted in isolation from the appropriate context. Throughout, well-run enterprises prioritize transparency and traceable context to ensure successful outcomes, accountability in the event of adverse results, and adherence to required regulatory frameworks.

In these scenarios and many others, a single person (or system) is not typically empowered to make an opaque decision with serious implications for lives, an entire business, or society at large. Instead, organizations rely upon layered and systematic evaluation methods striving to balance doing what is efficient, what is cost-effective, and what is right. 

Whether considering a historical approach or an AI-powered vision of the future, critical systems require a deep and committed implementation effort that elevates trust in downstream outcomes. When it comes to trust, perhaps the key lies in the transparency of evidence and the careful application of these additive insights to downstream decisions. Many legacy decisioning systems distribute evidence-gathering and judgment across multiple mechanisms to address some of these concerns. Unfortunately, they also come with costs: sluggish bureaucratic processes, bias, corruption, and, when not governed properly, opaque results and eroded trust. These inefficiencies have spawned countless initiatives focused on automation, digitization, and all manner of process improvement. For most important decisions, few solutions have yielded results that could be called perfect, efficient, or trustworthy.

Today, most enterprises live in a hybrid world, surrounding their employees with disparate data views and a mix of software and manual decision processes; for the most critical decisions, distributed and layered controls remain. The temptation to trade slow legacy methods for fast, scaled, and autonomous software seems obvious. But for some decisions, the risks are just as evident. Perhaps rather than a leap from legacy processes to sci-fi-like automated bliss (and doomsday fears), a better approach exists: one that empowers enterprises with efficient access to contextualized evidence, and enables streamlined interactions to explore ever-changing questions while ensuring the ultimate decisions are backed by transparent, trustworthy evidence.

As technology presents new ways of delivering intelligent answers at superhuman scale and velocity, a concerted effort will be required to develop AI platforms that elevate trust and transparency from considerations to foundational requirements. We also benefit from appreciating why decision governance has evolved into its current forms, and from deriving AI-powered techniques that parallel existing decision mechanisms, balancing speed with accuracy when subtlety counts.

The best platforms will buttress trust with integrated context, AI-powered insights, and something we at Bordo AI call a transparent Decision Lineage. We believe this is the future of artificial intelligence, and we are excited to play a part in building technologies aligned with this vision.

About the author
Regis Rogers
Co-founder & CEO
Business and technology executive with expertise spanning cybersecurity, enterprise software, and AI; former GE executive with experience across multiple industries, acquisitions, and international leadership; banking cybersecurity; Chicago Booth MBA

