Consider three decisions made across three different industries in the past two years.

In 2024, a federal class action named six major hotel chains (Hilton, Hyatt, Marriott, Omni, Wyndham, and Four Seasons), alleging that their shared AI pricing platform had produced coordinated room rates across competing properties. The hotels had never communicated. The DOJ and FTC called it price coordination.

A regional grocery chain replaced its human promotions planners with an AI system trained on the same public market signals every competitor uses. Within two quarters its promotional calendar had converged with the market leader’s. Shoppers could no longer articulate why they preferred one chain over the other.

A national landlord adopted an AI rent optimization platform used by thousands of competing property managers. It raised rents in step with rivals it had never spoken to. The U.S. Department of Justice named it in a 2024 antitrust action.

Three industries. Three decisions executives thought were competitive positives. Three outcomes that amounted to strategic, self-inflicted wounds.

When multiple companies deploy AI systems that learn from overlapping market data, optimize similar objectives, and operate at machine speed, those systems tend to arrive at the same conclusions independently. The pattern is consistent with what we and other researchers have documented in AI-mediated markets, and peer-reviewed research in the American Economic Review and the Journal of Political Economy has now measured it empirically. We call this the Agentic Convergence Trap. Understanding it requires understanding not just how AI systems behave, but how executives have enabled the behavior.

The Question No One Has Asked

The strategic conversation about AI has so far focused on two things. First, whether a given company has adopted AI effectively enough to gain advantage. Second, whether AI, being widely available, can provide any sustainable advantage at all, or whether it simply raises the floor for everyone. Both framings examine AI from the perspective of a single firm making a deployment decision.

Those framings are not wrong, but they are incomplete. Agentic systems can now access not just more data but data that was previously unavailable, and they can analyze it at speeds, and with feedback cycles, that were not possible until recently. But what happens when your AI and your competitors’ AI agents are both deployed in the same market and begin learning from each other simultaneously?

The answer is convergence. Independent AI agents, trained on similar data, optimizing similar objectives, at machine speed, develop nearly identical models of market reality and act on them in near-identical ways. Not by design. Not through communication. By learning.

This is not a theoretical risk. It is already measurable across retail, hospitality, airlines, and housing. And it is accelerating, because the agentic AI systems driving convergence are themselves getting faster.

Why Switching Vendors Won’t Help

When executives notice the convergence, many diagnose the problem as common software producing convergent outcomes. If that were the diagnosis, the logical remedies would be to change the interface, switch vendors, or build proprietary data and features. But the root of the convergence is not the common software; it is the common learning process and its speed.
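The dynamic is easy to reproduce in miniature. The following toy simulation is entirely hypothetical (a five-point price grid, a simple shared demand model, a basic epsilon-greedy learning rule, none of it drawn from any vendor’s system or any of the cases above): two pricing agents that never communicate, each applying the same learning rule to the same public signals, typically settle on the same price.

```python
# Toy illustration only: not any vendor's system. Two pricing agents never
# communicate; each applies the same simple learning rule (an epsilon-greedy
# bandit) to the same public demand signal. All numbers are hypothetical.
import random

PRICES = [8, 9, 10, 11, 12]   # candidate price points
UNIT_COST = 5                 # hypothetical unit cost

def demand(price, rival_price):
    """Toy market: demand falls with price; the cheaper seller takes more share."""
    base = max(100 - 6 * price, 0)
    if price < rival_price:
        share = 0.7
    elif price == rival_price:
        share = 0.5
    else:
        share = 0.3
    return base * share

def choose(values, eps):
    """Epsilon-greedy: explore a random price, otherwise exploit the best so far."""
    if random.random() < eps:
        return random.choice(PRICES)
    return max(values, key=values.get)

# Each agent keeps its own independent estimate of profit per price.
agent_a = {p: 0.0 for p in PRICES}
agent_b = {p: 0.0 for p in PRICES}
price_a = price_b = random.choice(PRICES)

for t in range(50_000):
    eps = max(0.01, 1.0 - t / 20_000)          # decaying exploration
    next_a, next_b = choose(agent_a, eps), choose(agent_b, eps)
    # Each agent observes only its own profit against the rival's last price.
    profit_a = (next_a - UNIT_COST) * demand(next_a, price_b)
    profit_b = (next_b - UNIT_COST) * demand(next_b, price_a)
    agent_a[next_a] += 0.01 * (profit_a - agent_a[next_a])
    agent_b[next_b] += 0.01 * (profit_b - agent_b[next_b])
    price_a, price_b = next_a, next_b

# With no communication, both agents typically land on the same price.
print("A settles on", max(agent_a, key=agent_a.get),
      "| B settles on", max(agent_b, key=agent_b.get))
```

The particular learning rule is beside the point: any pair of learners optimizing the same objective on the same signals tends toward the same policy, which is why swapping vendors or interfaces does not break the pattern.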
Humans also learn, so why is there less convergence when humans make strategic decisions than when agentic AI systems make them? Human judgment introduces natural variation: one manager overrides the suggested action, one team moves slowly, one regional leader reads the market differently. Those frictions, often regarded as inefficiencies, are the mechanism that produces strategic diversity. Agentic AI systems do not simply follow rules like expert systems; they learn the rules. Remove human variation, limitations, biases, and emotion, and two independently deployed AI agents trained in the same competitive environment will logically reach the same place: same objective, same signals, similar decisions and outcomes.

The economic evidence is compelling. Economists studying German retail gasoline markets found that when only one firm adopted an AI pricing agent, margins did not change. When both competing firms adopted, margins rose roughly 38% across the market. The agents shared no data. They simply learned, independently, that cooperative pricing was mutually beneficial.

Research on U.S. airline markets found an equally counterintuitive result: dynamic AI pricing expands output but lowers total consumer welfare compared to uniform pricing. Competing carriers’ fares converge because their AI systems respond to each other’s scarcity signals faster than any human pricing strategy can differentiate, not through coordination, but through learning the same competitive environment simultaneously.

The strategic question is not whether your AI works. It is whether it is working for you or quietly working for the whole market.

The Organizational Failure Behind the Trap

The convergence trap is not primarily a technology problem. It is a leadership and governance failure that technology makes invisible. Across the deployments we have studied, the pattern is consistent. An executive team adopts an AI platform under time pressure or competitive urgency. Default settings are accepted because questioning them requires technical expertise the leadership team does not have. Human review processes that previously introduced strategic variation are eliminated, because removing that friction was the point. Over months and quarters, the AI learns its way to the same position as every other AI on the same platform.

The RealPage case makes this explicit. RealPage did not force landlords to accept its pricing recommendations. It built an auto-accept feature. Hundreds of competing landlords independently chose to activate it. Each decision was rational: the AI was faster, and human reviewers were slower and more expensive. The aggregate result was a housing market where AI systems coordinated rent increases across cities at scale, with no individual executive having made that choice.

Most AI governance frameworks focus on accuracy, bias, and legal risk. Almost none treat the preservation of strategic variation as a governance objective. The companies that will avoid the convergence trap are those with processes designed to ask one question before delegating any decision to AI: What would happen to our competitive position if every rival made exactly this same AI-driven choice today?

A Sequenced Response

There is a correct sequence to addressing the convergence trap. Start with governance, because without it the other moves will not hold. Then objective design. Then data. Then measurement.

1. Decide where humans stay in the loop.

Executive teams should define which decisions require human sign-off regardless of what the AI recommends. Any decision where the AI-driven outcome would be identical to every competitor’s is a decision that should not be fully delegated. In practice: pricing moves above a defined threshold, promotional strategies affecting significant portions of the product range, talent acquisition criteria, and responses to competitor moves within specified time windows. Human review processes were eliminated because they were slow and expensive, but those costs are often far lower than the cost of becoming indistinguishable. A sketch of such a review gate appears below.
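What this can look like in practice: a minimal sketch of a review gate, where the thresholds (a 5% price move, 20% of the product range, a 24-hour competitor window) are hypothetical placeholders for values each executive team would set for its own business; talent criteria would gate the same way.

```python
# A minimal sketch of a human-review gate. The thresholds (a 5% price move,
# 20% of the product range, a 24-hour competitor window) are hypothetical
# placeholders for values an executive team would set for its own business.
from dataclasses import dataclass

@dataclass
class Decision:
    kind: str                             # "pricing", "promotion", "competitor_response"
    price_change_pct: float = 0.0         # signed percent change proposed by the AI
    share_of_range: float = 0.0           # fraction of the product range affected
    hours_since_competitor_move: float | None = None

def requires_human_signoff(d: Decision) -> bool:
    """True when a decision should not be fully delegated to the AI."""
    if d.kind == "pricing" and abs(d.price_change_pct) > 5.0:
        return True                       # pricing moves above the defined threshold
    if d.kind == "promotion" and d.share_of_range > 0.20:
        return True                       # promotions touching a broad slice of the range
    if (d.kind == "competitor_response"
            and d.hours_since_competitor_move is not None
            and d.hours_since_competitor_move < 24):
        return True                       # fast mirror-moves against a rival's action
    return False

# Example: a 7% AI-proposed price move is escalated rather than auto-executed.
print(requires_human_signoff(Decision("pricing", price_change_pct=7.0)))   # True
```

The design choice that matters is that the gate runs before the AI’s action executes, not as an after-the-fact audit.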
2. Define what your AI optimizes beyond the platform default.

Every platform comes with objective functions designed for broad commercial applicability: maximize revenue, minimize cost, increase conversion. Those defaults are what every competitor on the same platform is also optimizing. The question before any deployment is not whether the AI works, but whether it optimizes for something a competitor on the same platform would not.

For example, Starbucks’ Deep Brew is designed to optimize visit frequency and long-term relationship depth, with transaction value as a downstream consequence rather than a primary target. The AI asks how to make this customer come back next Tuesday. In contrast, a competitor’s AI, optimizing primarily for check size, sees the same customer and surfaces a bakery add-on or an upsize. Dunkin’s 2025 loyalty restructuring made this objective explicit, engineering its program specifically to drive higher check sizes through food attachment. Neither system is malfunctioning. They are simply optimizing different objectives. Compounded across millions of interactions, those objectives build entirely different customer relationships. Objective choice is where strategic variation begins.

3. Feed your AI data your competitors cannot access.

Proprietary data is not about volume; it is about exclusivity. The default inputs of every major AI platform are publicly available market signals: competitor prices, traffic data, weather, inventory levels. When multiple firms’ AI systems draw on the same public signals, they build the same model of the market. Carnegie Mellon researchers who studied Amazon’s algorithmic pricing behavior found that the firms hardest to exploit were those whose decisions incorporated internal signals competitors could not observe. Uber’s surge pricing advantage over Lyft rests on 61 billion historical trips across more than 10,000 cities, a behavioral dataset Lyft’s smaller fleet has never matched, and on real-time signals like app-open rates at the block level that reflect a decade of scale Lyft cannot replicate overnight.

Identify what your organization knows that competitors cannot observe: customer behavioral patterns from owned channels, longitudinal relationship data, frontline operational signals that never appear in industry databases. Route those signals into your AI before it defaults to the public inputs every platform already uses.

4. Measure convergence, not just performance.

Most organizations track AI through operational metrics: recommendation acceptance rates, pricing accuracy, conversion lift. None of these detect convergence.
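What would detect it can be prototyped from records most organizations already keep. Below is a minimal sketch, with entirely hypothetical records and field names, of the three divergence metrics this section goes on to define: decision correlation, timing overlap, and data exclusivity.

```python
# A sketch of the three divergence metrics, computed from records most firms
# already keep. All timestamps, actions, and source names are hypothetical.
from datetime import datetime, timedelta

# (timestamp, action) pairs: our AI's moves vs. observed competitor moves.
ours = [(datetime(2025, 1, d), "raise") for d in (3, 9, 17, 24)]
theirs = [(datetime(2025, 1, d), "raise") for d in (3, 10, 17, 28)]

def decision_correlation(ours, theirs, window=timedelta(days=2)):
    """Share of our moves matching a competitor move, same direction, within the window."""
    matches = sum(
        any(abs(t1 - t2) <= window and a1 == a2 for t2, a2 in theirs)
        for t1, a1 in ours
    )
    return matches / len(ours)

def timing_overlap(ours, theirs, window=timedelta(hours=12)):
    """Percentage of our AI-initiated moves landing within the window of a rival's."""
    hits = sum(any(abs(t1 - t2) <= window for t2, _ in theirs) for t1, _ in ours)
    return hits / len(ours)

def data_exclusivity(inputs):
    """Share of AI input sources competitors cannot observe."""
    return sum(1 for src in inputs if src["proprietary"]) / len(inputs)

inputs = [
    {"name": "competitor_prices", "proprietary": False},
    {"name": "weather", "proprietary": False},
    {"name": "owned_app_behavior", "proprietary": True},
]
print(decision_correlation(ours, theirs))   # 0.75 -> most moves mirror a rival's
print(timing_overlap(ours, theirs))         # 0.5  -> half land in the same window
print(round(data_exclusivity(inputs), 2))   # 0.33 -> inputs are mostly public
```

In production these would run over a rolling ninety-day window, with an owner assigned to each number, as described below.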
A survey of more than 70 business managers with pricing algorithm oversight, conducted as part of a 2024 NBER working group review, found that most firms using pricing algorithms expressed significant concern about reduced transparency and loss of managerial control, but had not developed governance processes to address those concerns systematically. Deloitte’s State of AI in the Enterprise 2026, a survey of more than 3,200 senior leaders across industries, found that only one in five companies has a mature governance model for autonomous AI agents, even as three-quarters of organizations plan to deploy them within two years.

JPMorgan Chase, which has built one of the most documented AI governance frameworks among U.S. financial institutions, now applies automatic compliance monitoring across all AI models as part of what it describes as its model risk management discipline. Google established internal AI Responsibility Review Boards in 2024. Neither company uses the term “divergence audit” publicly, but both have operationalized the underlying principle: AI systems require ongoing measurement of their competitive behavior, not just their technical performance.

Executive teams at these firms are building three metrics into their governance rhythms. First, decision correlation: how closely do AI-driven decisions align with observable competitor behavior over the past ninety days? Second, timing overlap: what percentage of AI-initiated moves occur within the same window as competitor actions? Third, data exclusivity: what share of AI inputs come from sources competitors cannot access? Assign an owner to each metric. Treat declining divergence as a strategic warning with the same urgency as a declining customer satisfaction score.

The Cost of Convergence

The DOJ alleged that algorithmic coordination through RealPage contributed to rent increases across major U.S. cities. When AI systems learn to coordinate price increases in essential goods, without communication, without intent, and without a human anywhere in the chain making a deliberate choice to raise prices, the harm is real and the accountability is unclear.

This is not an argument against AI. It is an argument for executive accountability. The organizations that activated RealPage’s auto-accept feature were not trying to harm renters. They were optimizing occupancy revenue. But the organizational choice to remove human judgment from that loop produced consequences no individual executive consciously chose. The outcome is nobody’s intention. It is everyone’s responsibility.

. . .

AI is now widespread in most industries. But competition is no longer about who has the strongest algorithm. The winners in the next phase of the AI transformation will be the companies that have designed their AI to reach conclusions their competitors’ AI will not. That will require deciding where humans stay accountable, defining objectives that go beyond platform defaults, building data inputs competitors cannot replicate, and measuring divergence as a strategic indicator rather than a lucky outcome. The work begins not in the technology function but in the boardroom, with a question most executive teams have not yet asked: which decisions require us to take responsibility for preserving strategic variation, and what does it cost us if we delegate that responsibility to an algorithm?