
AI as Strategist: Integrating Human Intuition with AI Analysis

The future of strategic leadership lies not in choosing between human intuition and AI analysis, but in designing organizational structures that effectively integrate both.

Strategy formulation has traditionally been thought of as the ultimate human function. CEOs and generals imagine the future, articulate bold visions and persuade others to follow them. At the same time, AI systems are increasingly proficient at tasks once deemed uniquely human. They spot patterns in oceans of data and draft coherent language. Could an AI strategist prove sharper than its human counterpart? If so, what does that mean for organizations today? This article distills the findings of a formal paper in which I explored a hypothetical question: What would it take for an AI to serve as a firm’s strategist? The starting point was to imagine an AI far more capable than today’s systems, one that could form opinions, learn from data and communicate with people. To ground the exercise in a human narrative, I turned to a piece of popular science fiction.

In an episode of Star Trek: The Next Generation, the android officer Data takes command of a starship. His crew doubts his capacity for judgment, particularly when he makes life-and-death decisions without explaining them. Unaware that his officers need to hear his rationale, Data fails to communicate and nearly sparks a mutiny. Only after his plan succeeds do the humans see that his decisions were sound. The episode illustrates a credibility gap that looms over AI strategists: Machines may be capable of rigorous analysis, but if they cannot persuade people to trust them, their strategies will fail.

The formal work underlying my paper builds on an analytical theory of strategy developed by Harvard’s Eric Van den Steen. Instead of defining strategy by its content, Van den Steen defines it functionally, as “the smallest set of choices that provides enough guidance to align other decisions.” Strategy, in this view, does not have to spell out every action. It needs to point clearly towards the choices that shape all other choices. A decision is “strategic” when it is uncertain (otherwise there is no decision to make), reliably communicated and strongly connected to other decisions. Strategy is most valuable when decisions are irreversible, interdependent and shrouded in ambiguity.

Van den Steen’s model highlights an often overlooked factor: credibility. When a strategist commits to a course of action, others will follow only if they believe that the commitment will hold. That is why, in his analysis, strategies formulated by a firm’s leader are generally more effective than those crafted by outsiders or consultants. Leaders who control key decisions can credibly commit to their strategies; others cannot. Disagreement exacerbates the credibility problem. When people differ in their beliefs — even when all information is shared — control over strategic decisions becomes essential for alignment.

In Van den Steen’s model, a project comprises multiple decisions, each taken by different individuals with only partial information about others’ choices. The strategist’s job is to investigate some of those decisions, learning about their implications and perhaps discovering a superior option. Investigating has a cost, so the strategist must decide which decisions to examine. After choosing, the strategist publicly announces a subset of choices — the strategy — that others can observe and rely on. Those announced choices then guide all other decisions.

Because the choices shape subsequent actions, they must meet three criteria. First, they must be uncertain enough that selecting them adds information; obvious choices need no guidance. Second, they must be reliable: The strategist must have the authority or credibility to ensure that the announced choice will be implemented. Third, they must interact strongly with other decisions so that committing to one course of action aligns many subsequent choices. When decisions are reversible or independent, strategy matters less; when decisions are interdependent and irreversible, strategy becomes indispensable.

Strategy in a World of Data

Most writing about AI and strategy stops at the observation that AI excels at prediction, while humans excel at judgment. My Rotman colleagues Ajay Agrawal, Avi Goldfarb and I have argued that as prediction becomes cheaper, the scarce resource is human judgment — the ability to define goals and weigh trade-offs. I embed this insight in my paper by allowing AI to be both cheaper and potentially more accurate than humans at investigating the consequences of different choices. Because AI can process far more information, it can reduce uncertainty about how decisions play out. However, I also allow for differences in how humans and machines form beliefs about the world and how aligned those beliefs are with the people who must implement the strategy. I identify four dimensions along which AI and human strategists may systematically differ:

  1. Cost of Investigation / Exploration: On this point, AI’s computational strength makes it relatively inexpensive to gather and analyze vast amounts of data. A machine can simulate countless scenarios, explore complex interactions and extract patterns from unstructured information far faster than any human. Humans, by contrast, face cognitive constraints and may rely on heuristics or delegate analysis. While AI naturally excels at processing data, humans can harness that advantage without ceding strategic authority entirely.
  2. Belief Accuracy / Confidence: On belief accuracy, machines may produce more precise predictions in domains where historical data exist and patterns persist. Statistical learning algorithms can reduce noise and identify relationships that humans might miss. Yet confidence in a prediction is only as good as the model assumptions and data quality. In novel or ambiguous situations, AI may misinterpret signals or extrapolate poorly, while human intuition, shaped by tacit knowledge and experience, can recognize weak signals and emergent patterns. Moreover, human judgments are susceptible to biases and overconfidence. The relative accuracy of, and confidence in, beliefs therefore depend on the domain: AI’s accuracy advantage is greatest in stable, well-documented environments; human judgments may be more accurate in volatile or conceptually innovative contexts.
  3. Agreement with Operational Managers: On this point, human strategists enjoy an inherent advantage because they share language, cultural norms and mental models with their colleagues. They can persuade through insights, stories, analogies and empathy to build coalitions around a course of action. AI systems must instead rely on logical arguments grounded in data and formal reasoning. In data-rich domains where outcomes are measurable and verification is easy, such arguments can be compelling. In judgment-rich domains where success depends on tacit knowledge, relationships and shared sensemaking, AI’s inability to connect emotionally makes agreement harder to achieve. Agreement is crucial because a strategy that cannot garner buy-in will not be implemented effectively.
  4. Control Effectiveness: Here, the strategist’s formal authority influences the credibility of their announcements. A CEO who controls strategic assets can credibly commit to a course of action; a consultant cannot. AI’s control effectiveness depends on organizational design. An AI embedded in decision systems with formal authority to execute certain actions can commit credibly; an AI without such authority cannot. Moreover, as discussed above, the value of formal control diminishes as credibility increases. AI may compensate for weaker influence by being granted control in domains where its analytical advantage is high and disagreement is widespread.

These differences matter because strategic value is created in two distinct ways. First, choosing the right course of action creates value directly by improving the quality of decisions. Second, aligning the organization around that course of action creates value through coordination. An AI that sees the right choice but cannot persuade others squanders its analytical advantage. Conversely, a human strategist may inspire alignment yet pick the wrong option. A core result of my analysis is that when credibility is low, control over decisions becomes a valuable asset for the strategist. If the people who must implement decisions trust the strategist’s judgments, formal authority matters less. As credibility rises, the need for formal control falls.

My model shows that strategic interventions create value through two channels: by improving the quality of the chosen option and by coordinating others around it. The coordination benefit hinges on credibility. When influence is weak — because participants disagree or lack confidence — granting the strategist formal authority can unlock value. When agreement is high, control matters less.
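The substitution between credibility and formal control can be illustrated with a toy simulation. This is a hypothetical sketch, not the model from the paper: each of several decision-makers either follows the announced strategy (always, under formal control; with probability equal to credibility, under influence alone) or defaults to a divergent belief of their own, and total value is a fixed quality term plus a coordination term.

```python
import random

def project_value(credibility, control, n_decisions=10, seed=0):
    """Toy model: value = quality of the announced choice plus a
    coordination bonus for each decision aligned with it.
    All parameters and functional forms are illustrative assumptions."""
    rng = random.Random(seed)
    announced = 1                        # the strategist's (correct) choice
    aligned = 0
    for _ in range(n_decisions):
        if control or rng.random() < credibility:
            choice = announced           # decision-maker follows the strategy
        else:
            choice = rng.choice([0, 1])  # acts on a divergent belief
        aligned += (choice == announced)
    quality_value = 1.0                  # direct value of the right choice
    coordination_value = aligned / n_decisions
    return quality_value + coordination_value
```

With full control, coordination is guaranteed regardless of credibility; without control, value rises with credibility. That is why, in the argument above, formal authority and credibility act as substitutes: as one grows, the need for the other falls.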

Human and AI Strategists: Complementary Strengths

When we apply this framework to different types of decisions, a pattern emerges. In data-rich decisions such as supply chain optimization or inventory management, AI can often deliver both higher decision quality and sufficient credibility. Managers can see for themselves that the machine’s recommendations improve performance. In such situations, AI may not need formal authority because its influence stems from demonstrable success. In ambiguous domains such as new market entry, brand positioning or research investments, however, human strategists have comparative advantages. They can combine facts with narratives and draw on shared experience to build consensus. AI’s lack of insight and lived experience, and its inability to empathize, create a gap in its persuasive power.

An illustrative example is a global retailer deciding whether to expand into a new region. For the question of “how much inventory to stock in each store,” AI can process sales data, weather patterns and local events to produce an optimal plan. Store managers see that the forecasts perform well and readily adopt them. For the question of whether the brand resonates with local culture, human judgment is crucial. The decision hinges on understanding consumer identities, social norms and long-term brand narratives. AI can provide data about demographics and social media trends, but a human needs to interpret those data within a cultural, business, and risk context.

Agreement Patterns

Differences in how AI and humans form beliefs and communicate shape the “agreement patterns” between strategists and the people who must implement their plans. When participants and strategists share similar mental models, they are more likely to agree and execute on the appropriate course of action even without exhaustive communication. Human strategists, because they inhabit the same social and cultural contexts as their colleagues, often benefit from such baseline agreement. They can show insight, articulate new concepts, provide context, read subtle cues, anticipate reactions and adjust their messaging in response. This communication richness facilitates implementation through influence even when formal authority is limited.

By contrast, AI may struggle to achieve agreement in domains characterized by ambiguity, tacit knowledge or emotional stakes. Its reasoning processes may be opaque, and its lack of shared experiences can lead to misalignment. However, in data-rich, verifiable domains where recommendations can be backed by clear evidence — like optimizing supply chains or pricing thousands of products — AI’s demonstrable track record can foster agreement based on performance rather than persuasion. The likelihood of agreement thus varies across decisions. Recognizing this variation is key to assigning roles between AI and humans.

The same logic applies at the level of competitive strategy. When strategic interaction resembles a numbers game — such as how much capacity to build — AI can excel. Its algorithmic nature allows it to commit credibly to quantity decisions. In quantity competition, commitments not to flood the market or to build capacity carry weight because machines are perceived as less prone to changing their minds. In price competition, however, strategies often revolve around differentiation and brand meaning. Human creativity is indispensable for escaping zero-sum price wars by inventing new categories or narratives.

My analysis also reveals that AI and human strategists respond differently to competitive pressure when they face one another. A firm run by an AI strategist is likely to make aggressive capacity commitments because its algorithmic consistency enhances credibility. A firm led by a human strategist is likely to counter with creative differentiation moves — introducing unique products, repositioning brands or changing the rules of engagement. This asymmetry can intensify competition, pushing both sides to invest heavily in capacity and innovation. Industries with a mix of AI and human strategists may therefore see both more efficient production and greater diversity of offerings. The flip side is that profits may suffer as firms compete simultaneously on scale and uniqueness.

Competitive Interactions

Looking more closely at competitive interactions shows why these patterns emerge. In quantity competition, firms decide on production capacity or market share targets. These decisions commit the firm to levels of output that influence rivals’ responses and shape market equilibrium. AI’s algorithmic nature makes such commitments more credible: Once programmed, an AI strategist is less likely to renege on a capacity commitment, making threats and promises more believable. Moreover, AI can optimize complex capacity decisions by crunching large datasets on costs, demand and competitor behaviour. The non-zero-sum nature of quantity competition — the possibility that both firms can profit from increased market size — means that credible commitments create strategic value. When an AI announces a capacity decision, its credibility can serve as a commitment device, forcing competitors to respond in predictable ways.
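The value of credible quantity commitment can be seen in the standard linear Cournot/Stackelberg textbook model. This is an illustrative example, not a calculation from the paper: with inverse demand P = a - b(q1 + q2) and constant marginal cost c (the parameter values below are assumptions), a firm that can commit to its quantity first earns more than it would under simultaneous choice.

```python
def cournot_profits(a=100.0, b=1.0, c=10.0):
    """Simultaneous quantity choice: symmetric Nash equilibrium.
    Each firm produces q = (a - c) / (3b)."""
    q = (a - c) / (3 * b)
    price = a - b * 2 * q
    return (price - c) * q  # each firm's profit

def stackelberg_profits(a=100.0, b=1.0, c=10.0):
    """Leader commits to q1 first; the follower best-responds with
    q2 = (a - c - b*q1) / (2b). The leader's optimum is q1 = (a - c) / (2b)."""
    q1 = (a - c) / (2 * b)
    q2 = (a - c - b * q1) / (2 * b)
    price = a - b * (q1 + q2)
    return (price - c) * q1, (price - c) * q2
```

With a = 100, b = 1 and c = 10, each Cournot firm earns 900, while the committed leader earns 1012.5 and the follower 506.25. The commitment premium is exactly the value an AI strategist captures when its announcements are believed not to be revised.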

In price competition, however, the payoff often lies not in mechanical optimization but in escaping the zero-sum dynamic through differentiation and innovation. A human strategist can envision new product categories, craft brand stories and reposition offerings in ways that AI cannot easily conceive. Human creativity can soften price competition by making products less substitutable and by appealing to values that are hard to quantify. Thus, AI may dominate in quantity competition but humans may be indispensable in the creative aspects of price competition. When firms led by different types of strategists compete, they may each exploit their strengths. An AI-led firm may invest in capacity expansion and process efficiency, while a human-led firm may counter with imaginative differentiation. This interplay can produce industries that are more dynamic, with more investment and innovation but possibly lower profits due to intense competition.

Designing Hybrid Organizations

The most important implication of this research is that organizations should not treat strategy as an either-or choice between humans and machines. They should instead design systems that allocate authority differently across decision types and evolve those allocations as technologies and cultures develop.

In domains where data are abundant and cause-and-effect relationships are well understood — say, online ad allocation, logistics routing or clinical trial design — AI can often operate effectively through influence alone. Because the evidence is clear, people are willing to follow the machine’s lead without it having formal authority. In transitional domains where data exist but disagreement remains high — such as determining production technology or customizing product lines — firms may grant AI formal control initially but maintain human oversight. As the AI builds a track record, credibility rises and the need for control falls. In judgment-rich domains where uncertainty is high and the stakes involve identity, ethics or novel concept creation — such as setting vision, defining brand or crafting organizational culture — humans should retain authority, using AI as an advisory tool to surface patterns without relinquishing judgment.
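The three regimes just described can be summarized as a simple decision rule. The function below is a heuristic sketch with hypothetical inputs and labels, not a prescription from the research:

```python
def allocate_authority(data_rich: bool, disagreement_high: bool,
                       judgment_stakes_high: bool) -> str:
    """Illustrative mapping from decision characteristics to the three
    authority regimes described above. Thresholds are assumptions."""
    if judgment_stakes_high:
        # Identity, ethics or novel concepts: humans retain authority.
        return "human authority, AI advisory"
    if data_rich and not disagreement_high:
        # Clear evidence: AI operates through influence alone.
        return "AI influence (no formal control needed)"
    if data_rich and disagreement_high:
        # Transitional domain: grant AI control, keep human oversight.
        return "AI formal control with human oversight"
    return "human authority, AI advisory"
```

In practice the inputs would themselves be judgment calls, and the allocation should be revisited as the AI builds a track record in each domain.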

Consider again the global retailer. For logistics, AI can propose stocking levels across its thousands of stores, freeing managers to focus on exceptions and anomalies. For decisions about entering a new country, AI might analyze macroeconomic indicators and consumption patterns, but a human strategist must integrate cultural understanding, regulatory nuance and long-term brand positioning. For decisions that fall between those poles — say, which of several proven supply-chain technologies to implement — AI might be empowered to choose, subject to human veto if it contradicts non-quantifiable organizational values.

Another example is a pharmaceutical company allocating resources between early-stage research, clinical trials and market launch. AI excels at designing and interpreting clinical trial data, where statistical regularities dominate. Humans are still needed to evaluate the scientific novelty of research programs and to tell the story that connects a therapy to patients and physicians. A hybrid approach would mean AI might manage trial design, while humans decide which molecules to pursue and how to position a breakthrough drug.

Institutions must also invest in credibility enhancement mechanisms for AI. Transparent reasoning processes, where the machine explains its analysis step by step, help people trust its recommendations. Record-keeping systems that document successes and failures build a track record. Hybrid communication strategies, where human intermediaries translate AI’s conclusions into narratives that resonate with different audiences, further bolster credibility. Over time, as AI demonstrates consistent performance in a domain, organizations can reduce oversight and formal control.

A Hybrid Strategic System: Key Takeaways

My research points to three practical mechanisms for implementing a hybrid strategic system. First, rather than granting AI blanket authority or relegating it to purely advisory roles, leaders should ask: Which decisions require AI to have the final say, and which decisions should AI merely inform? For instance, a retail chain might give AI control over daily pricing optimization while having it suggest long-term brand positioning options for human decision-makers. A hospital might let AI schedule operating rooms and suggest staffing plans for human administrators to decide. Thinking explicitly about which decisions fall into which category allows organizations to capture AI’s analytical strengths without eroding human judgment where it is needed.

Second, progressive control models recognize that credibility builds over time. In domains where AI is new, giving it formal control can overcome skepticism and allow it to demonstrate its value. As stakeholders witness its success, the need for formal control diminishes and AI can shift to an influence role. Similarly, humans may initially oversee AI closely but gradually delegate more authority as trust builds. Progressive models therefore provide a pathway for transitioning from human-led to AI-influenced decision-making without abrupt disruptions.
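A progressive control model can be sketched as a simple track-record tracker. The class below is a hypothetical illustration: the prior counts, the success-rate measure of credibility and the 0.8 threshold are all assumptions chosen for the example, not parameters from the research.

```python
class ProgressiveControl:
    """Sketch of progressive control: the AI starts under formal control
    and shifts to an influence-only role once its observed success rate
    (smoothed by a small prior) clears a credibility threshold."""

    def __init__(self, prior_successes=1, prior_trials=2, threshold=0.8):
        self.successes = prior_successes
        self.trials = prior_trials
        self.threshold = threshold

    def record(self, success: bool) -> None:
        """Log the outcome of one AI-guided decision."""
        self.trials += 1
        self.successes += int(success)

    @property
    def credibility(self) -> float:
        return self.successes / self.trials

    @property
    def mode(self) -> str:
        return ("influence" if self.credibility >= self.threshold
                else "formal control")
```

Starting from the uninformative prior, the AI operates under formal control; after a run of demonstrated successes its credibility clears the threshold and oversight can be relaxed, mirroring the transition described above.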

Third, credibility enhancement mechanisms focus on building trust in AI’s recommendations. Techniques include transparent explanations of AI’s reasoning, record-keeping systems that track AI’s past decisions and performance, and hybrid communication where human leaders translate AI analyses into insights and stories that resonate with different audiences. By making AI’s reasoning comprehensible and accountable, these mechanisms address the skepticism that often accompanies new technologies, processes and algorithmic recommendations. Over time, they create the conditions under which AI can influence or guide decisions without relying on formal authority.

The need for such mechanisms across industries is clear:

  • In a consumer-goods company, AI may decide daily production schedules based on real-time demand data and suggest new brand-building initiatives while marketing strategy remains human-led. Over time, as AI’s track record in production builds trust, its role may expand to forecasting medium-term demand or planning promotions.
  • In an energy utility, AI might control power-grid balancing and suggest strategies for human decisions on long-term investments in renewables. As AI proves its reliability in real-time operations, it might play a larger role in capacity planning and surface new areas of opportunity.
  • In a professional services firm, AI can help allocate staff across projects based on skills and availability, and suggest new strategies and services for human decisions on improving and expanding client relationships. As AI becomes better at predicting client needs, its advisory role may expand.

Across industries, AI can take charge of data-driven operations while humans steer issues involving vision, relationships and identity. As AI proves its value in tactical domains, its role can expand to broader planning. Leaders should continually adjust the allocation of authority as technology and trust evolve rather than adopt a static division of labour.

In Closing

AI’s role in strategy is neither to replace humans wholesale nor to serve as a mere adjunct. The greatest value will arise when leaders thoughtfully combine the analytical power of AI with the narrative, empathetic and creative strengths of humans. This requires understanding what strategy does — align choices around a core set of commitments — and designing authority structures that ensure those commitments are both intelligently chosen and broadly embraced.

Ultimately, strategic leadership in the age of AI will reward those who appreciate nuance, who can look ahead to where new opportunities and value will emerge, and who are adept at dealing with unknowns and ambiguity. In some cases, AI will dictate decisions; in others, it will offer analysis that humans integrate into insight and stories. Leaders will need to build organizations flexible enough to allocate authority dynamically, invest in systems that make AI transparent and credible, and cultivate human capacities that become increasingly valuable as prediction becomes cheap and commoditized.

The future of strategy is not a contest between flesh and silicon but a partnership. The most successful organizations will be those that master this partnership, allowing AI to inform and humans to inspire. At heart, this integration echoes the themes that my Rotman colleagues and I have developed in our work on the economics of artificial intelligence. When prediction is abundant, judgment becomes scarce. Machines reduce uncertainty; people assign meaning. Strategy lives at the intersection of analysis and meaning. By bringing AI into the strategist’s seat without forgetting what makes humans persuasive, leaders can reap the benefits of both. That is the opportunity now before us.

This article by Joshua Gans originally appeared in the Winter 2026 issue of the U of T’s Rotman magazine.