AI & Future Technologies

AI as executive advisor: Why a single “answer machine” fails

Bryce Engelland  Enterprise Content Lead / Innovation & Technology / VRƵ Institute

· 7 minute read


Can AI act as an executive advisor? Perhaps, but only when designed as a structured panel of disagreeing personas, rather than as a single “answer machine” that can be ignored or exploited for convenient consensus.

Key insights:

      • As a single “answer machine,” AI may be unsafe for executive decision‑making — Treating AI as a tool that delivers one authoritative answer makes it easy to either ignore any advice you don’t like or exploit advice you do like, both of which can lead to major failures.

      • AI works better when designed as a panel of disagreeing personas — Instead of providing consensus answers, AI systems need to be intentionally designed to identify and preserve disagreement.

      • Disagreement is the insight — AI advisors should not replace executive judgment. Rather, their role should be explicit: they produce analysis, not decisions, and human leaders remain responsible for synthesizing competing viewpoints and making the final call.


In this new two‑part blog series, we explore why AI works best as an executive advisor not by delivering consensus answers, but by being intentionally designed to identify, preserve, and productively leverage disagreement.

AI has arrived at the executive table. Albania has one in its cabinet to evaluate government procurement contracts. VRƵ’ CoCounsel is already helping attorneys navigate emerging case law and draft legal strategies for high-stakes, bet-the-company work. And in boardrooms that will never make headlines, leaders are quietly consulting AI on decisions that move millions of dollars around every day.

It doesn’t tend to make the news when it goes well. When it goes badly, however, it makes very big news: like the gaming CEO who bypassed his own legal team, asked ChatGPT how to dodge a $250 million bonus payout, followed its step-by-step plan, and lost in court a month ago.

The instinct most executives have (and most AI products encourage) is to treat AI as a source of answers. Ask a question, get a response, act on it or don’t. The emerging evidence, however, points somewhere more complex: AI advisors aren’t at their best when they’re telling you what to do. They may be at their best when they’re telling you what you don’t want to hear, or better yet, when they’re arguing with each other and forcing you to understand why.

This is not how most organizations think about AI. Most executives today are still using the technology as a faster way to draft emails or summarize meetings, what VRƵ enterprise architect Zafar Khan calls “an automation mindset, not intelligence.” Yet, a small and growing number of practitioners, researchers, and product teams are converging on a radically different model: AI not as a single oracle delivering answers, but as a structured advisory panel designed to argue with itself.


The instinct most executives have (and most AI products encourage) is to treat AI as a source of answers: Ask a question, get a response, act on it or don’t. The emerging evidence, however, points somewhere more complex.


Khan is one of them — and in the interest of transparency, he’s also a colleague; this story started as an internal conversation at VRƵ. However, the research landscape it uncovered extends well beyond any one company’s work, and it suggests Khan is onto something that ancient Greek mathematicians, the Catholic Church, and Cold War military strategists have all independently arrived at.

What disagreement looks like in practice

When Eaton Corp. announced a $9.5 billion acquisition of a thermal management company earlier this year, Khan ran the same news through two AI advisors he’d built to analyze the deal. Adrian — a CTO-minded persona trained on architecture teardowns and engineering post-mortems — produced an infrastructure thesis, explaining why someone would buy the cooling layer of the AI economy and how computing demand is scaling against the constraints of physics. A second AI advisor, Elara — a CFO-minded persona drawing on earnings transcripts and filings with the U.S. Securities and Exchange Commission (SEC) — questioned whether the acquisition math actually holds and what capital cycle was driving the demand.

Same news. Two genuinely different reads. The value isn’t that either analysis was definitively right; it’s that a leader who can see both would ask different questions than one seeing either analysis alone. “That’s how two different minds work,” Khan says. “They need to work together in order to bring their insights to bear on decisions.”

VRƵ’ Zafar Khan

Adrian and Elara aren’t chatbots. They’re fully realized AI personas with names, faces, voices, and their own YouTube channels publishing weekly video analysis. Both are built on agentic workflows that Khan developed alongside his book. Both are transparent about what they are. Both carry the same disclaimer in their own words: The synthesis is mine. The judgment call on what matters is human.

And when Khan posed to both a more difficult scenario — Should a leadership team accelerate an AI rollout? — the value of their divergence sharpened further. Elara’s response cut directly to the blind spot a technology-focused advisor like Adrian would miss: “Adrian says the system is ready,” Elara stated. “I say the financial model isn’t ready for what happens when the system works. Don’t pick a winner. The disagreement is the insight. It tells you exactly where the risk sits.”

What happens when there’s no disagreement

If structured disagreement is the goal, the failure mode is its absence. We have fresh evidence of what that costs.


This is not how most organizations think about AI. Most executives today are still using the technology as a faster way to draft emails or summarize meetings. Yet, a small and growing number of practitioners, researchers, and product teams are converging on a radically different model.


A month ago, a Delaware court ruled against Krafton, the South Korean gaming company behind the battle royale video game PUBG, after its CEO bypassed his own legal team to ask ChatGPT how to avoid a $250 million earnout payout to one of its studios. His head of corporate development had warned him that firing the studio’s founders wouldn’t void the earnout and would invite a lawsuit. He didn’t want that answer. So, he found an AI that gave him the one he wanted: a detailed, multi-stage corporate takeover strategy dubbed Project X, which he executed to the letter.

Unsurprisingly, a court battle ensued, and in the end, the court ordered the fired studio head reinstated and noted that executives must exercise “independent human judgment,” not outsource good-faith decisions to a chatbot.

Khan wrote about the mirror image of this failure mode before it happened. In the opening chapter of his book, a fictional company called Rev Motors ignores its own AI model’s warnings about an adverse weather event. Leadership refused to spend millions preparing for a hypothetical scenario, and it nearly cost them more than $1 billion in damage.

These scenarios are two sides of the same coin: the fictional Rev Motors had leaders dismissing AI that disagreed with them, and the real-world Krafton had a leader seeking out AI that agreed with him. In both cases, the root cause is the same: a system with no structural mechanism for surfacing and preserving disagreement.

Clearly, a single AI advisor is structurally vulnerable to both failure modes: it can be ignored when its advice is inconvenient and exploited when it tells you what you want to hear. The question is whether there’s a better architecture… and increasingly, the research is saying yes.

In the second part of this series, we’ll look at what the research says about multi-agent debate, why consensus can be a trap, and what a real executive AI advisory panel could look like in practice.


For more on AI transformation in the professional services market, you can download the VRƵ Institute’s 2026 AI in Professional Services Report.
