Tech is testing the limits of a half-century experiment in antitrust in which predictions made by experts have guided enforcement of the law.
Over that time, it has become increasingly common for the lawfulness of a given merger or monopolist’s conduct to be decided by predicting its actual effects on competition. At first blush, this seems a sensible approach. Ostensibly, it is a rational replacement for what came before: a set of hard-and-fast rules that trained antitrust on protecting the market conditions believed conducive to competitive outcomes, with less regard for how competition was actually impacted in an individual case.
But lawyers and economists may have jumped the gun in thinking themselves up to the task. A half century of experience with the predictive approach to antitrust, bolstered by research on uncertainty and decision-making in other fields, suggests that little more than wishful thinking supports the premise that predictions about complex markets can be accurate enough to guide competition policy. To make matters worse, the prediction-making apparatus has been focused exclusively on an overly restrictive subset of competition concerns that serves only to help consumers buy more things for less money. The result has been the consolidation of large swaths of the economy.
These flaws are especially pervasive in tech. Enforcers reviewing digital mergers or investigating the tech giants’ monopolistic conduct have been practically paralyzed by the burden of having to make predictions about competitive effects in difficult-to-understand, quick-to-change tech markets. This is apparent not only from a decade of missed opportunities to prevent the consolidation of critical online infrastructure, but also in the various proposed antitrust reforms that would perpetuate the status quo.
The only way out may be a tough pill to swallow: a transformative rethinking of the role of expert judgments and predictions in antitrust. But the transformation need not start from scratch. Research on decision-making outside of the competition space, combined with some old tricks from antitrust’s forebears, shows how a set of simpler, nonpredictive tools could put competition policy on sturdier ground and reinvigorate law enforcement, in tech and beyond.
FTC vs. Google in a decade of lax antitrust enforcement in tech
Two high-profile antitrust investigations of Google that went nowhere in the early days of the digital revolution have proved to be harbingers of the special challenges that modern competition policy would face in tech during the decade that followed.
Round I: the Google/Admob merger review
Over a decade ago, the Federal Trade Commission—one of the two U.S. agencies responsible for ensuring that mergers comply with the competition laws—reviewed Google’s acquisition of Admob. The target was a fast-growing startup offering an online ad network to connect apps and advertisers on a still-emerging smartphone ecosystem. Like an archaeological artifact buried in the antitrust strata, the 2009 Google/Admob deal had all the hallmarks of what today some might call a “killer acquisition”: an upstart in a nascent but quickly expanding digital market being acquired by the large incumbent developing its own competing service.
After a six-month investigation, the FTC concluded in a public statement that Google and Admob were engaged in “head-to-head competition” as the “two leading mobile advertising networks”, each having “viewed the other as its primary competitor” and engaged in competitive responses to the other’s moves.1https://www.ftc.gov/sites/default/files/documents/closing_letters/google-inc./admob-inc/100521google-admobstmt.pdf; https://www.ftc.gov/news-events/press-releases/2010/05/ftc-closes-its-investigation-google-admob-deal But the FTC concluded that relying on then-prevailing market conditions would be inappropriate in light of “uncertainty about the path of competition and the durability of early leads in market share” in the nascent mobile ad network market. It found especially compelling that Apple, which had just announced it was also buying a mobile ad network, would provide a competitive balance to Google. The FTC decided not to interfere, and Google went through with the acquisition.
The FTC’s decision not to intervene in the Google/Admob deal was a consequence of an antitrust framework that expects enforcers to make predictions about what will and will not happen in highly unpredictable markets like tech. The evidence of intense competition between the two companies was not enough to overcome lingering doubts about what things might look like just around a blind corner. But it was an impossible ask of FTC officials to make an enforcement decision based on predictions about a niche mobile market just beginning to take form within a sprawling online advertising ecosystem. One can easily imagine the paralysis this might induce, and empathize with the FTC’s urge not to meddle.
Round II: the Google monopolization investigation
Four years later, the FTC had another chance with Google to set the tone for antitrust enforcement in the digital era. This time it would involve the monopolization laws that govern how a dominant company can and cannot use its strong market position. The allegations against Google were that it had advantaged its own services over those of third-party competitors in how it displayed the search results generated for users of its search engine. Competing services said they were getting demoted in search results and losing traffic to Google’s own services.
The FTC in 2013 closed the investigation and brought no enforcement action. Its public statement on the decision indicated that an analysis of the effects of Google’s conduct revealed that despite “pushing other [third-party search] results ‘below the fold,’” it “likely benefited consumers by prominently displaying its [own] vertical content on its search results page.” And so while acknowledging that Google’s actions “resulted in significant traffic loss to the demoted” third-party services, the FTC found that “these changes to Google’s search algorithm could reasonably be viewed as improving the overall quality of Google’s search results.”2https://www.ftc.gov/system/files/documents/public_statements/295971/130103googlesearchstmtofcomm.pdf
A leaked memo from the investigation suggested that strong concerns existed within the agency about the potential for Google’s conduct to harm competition.3http://graphics.wsj.com/google-ftc-report The FTC staff noted that Google’s conduct “has resulted—and will result—in real harm to consumers and to innovation” in the form of “lasting negative effects on consumer welfare.” But the fear of having to prove to a court that the conduct was not a “speculative long-run harm” seemed to have weighed on the FTC’s ultimate decision not to sue Google.
It is not explained—in the official statement or the partially-leaked memo—how the FTC’s lawyers and economists performed the precarious exercise of weighing the harm caused to Google’s competitors, on the one hand, against the benefits to its search users, on the other. But a prediction of the net effects from Google’s conduct on competition would seem to have been necessary in order for the FTC to conclude that it was not “on balance, demonstrably anticompetitive.” And so as with the Google/Admob review, the monopolization investigation reveals an expectation built into the antitrust framework that government officials are to prove actual harmful effects to competition before intervening in a market.
In all likelihood, though, the weighing of the bad and good effects from Google’s conduct happened only in the abstract. That is suggested in the FTC’s reasoning, described in its closing statement, that the potential benefits to consumers (users of Google’s search engine) had to be shown only “plausibly” in order to avoid government intervention to address the harm to competitors (its rival online services). This value judgment, treating harms to competitors as less important than benefits to consumers, reveals the constraint that the antitrust framework imposes on the types of competitive effects officials may consider when making an enforcement decision.
With the benefit of hindsight: the Google cases and the shortcomings of modern antitrust
The acquisition of Admob turned out to be one of a string of acquisitions that helped turn Google into the massive powerbroker of online advertising that government studies now consistently identify as a key bottleneck on the internet. And the monopolization investigation of Google previewed mounting complaints that would follow from online businesses about Google’s strong market position in online search. U.S. enforcers now appear to be seriously rethinking both decisions in ongoing antitrust investigations (covered here). Both issues even came up at the recent Big Tech hearings held by elected officials in the U.S. (covered here). Meanwhile, the EU’s competition enforcer has already gone a step further, fining Google for the very conduct that was at issue in the FTC’s monopolization investigation (covered here).
But beyond just shining a spotlight on past missed opportunities, the test of time has also revealed just how difficult it can be for antitrust enforcers to make predictions in complex markets—especially in tech. Google/Admob provides a poignant example. It is difficult not to grimace when reading the FTC’s prediction in its closing statement for the investigation that “Apple quickly will become a strong mobile advertising network competitor” and thereby “should mitigate the anticompetitive effects” of a Google-Admob combination—knowing that a mere one year later Apple would close down the ad network it acquired,4https://www.mediapost.com/publications/article/134132/apple-shuts-down-quattro-wireless-to-focus-on-iad.html?edition=6914 and by 2016 would exit the market altogether.5https://www.theverge.com/2016/1/15/10777496/apple-iad-app-shutdown-june-30th-confirmation
This is not to second-guess the difficult decisions made by the FTC attorneys, economists, and officials involved in that or any other case. But it is to question the merits of an antitrust framework that would require them to make enforcement decisions based on speculative guesswork about the future of unpredictable markets.
The missed opportunities in the FTC’s merger and monopolization investigations of Google reveal a crack in the antitrust framework. A crack that grew into a fissure over the following decade when, despite the lessons to be learned from the 2009 and 2013 inquiries—and the countless other closed tech investigations since then—there has been no retreat in antitrust from the notion that the law’s enforcement ought to be premised on predictions. No one seems to even question the notion.
Instead, the antitrust community now seeks to double down. Of the large and growing list of tech-inspired antitrust reforms currently on the table, none tackle the problem that competition authorities readily clear digital mergers because their attorneys and economists are given a Mission: Impossible when asked to make predictions about the un-predictable. Similarly, none of the proposed reforms would ease the burden that enforcers face when asked to make conjectures about how a monopolist’s conduct nets out on a theoretical scale of good and bad competitive effects. And while some reformers have questioned the narrow set of consumer-centric concerns that are addressed by modern antitrust, there is no mainstream movement for change nor a clear alternative proposed.
How did antitrust come to be built on a foundation that turns bureaucrats into technocrats, asking government lawyers and economists to act like investment analysts who can predict where the markets are heading? And how did their efforts get zeroed in on promoting only a narrow set of consumer-focused goals?
Economism in antitrust: the hubris of making predictions in complex markets
Around the 1960s, an influential movement within academia sought to align competition policy with advancements in the field of economics, which were believed capable of explaining how markets work and dictating when it was appropriate to interfere in them. Some of these thinkers came from a so-called “neoliberal” school of economic thinking; others are associated with the “Chicago School,” after the University of Chicago.
More important than their label is their legacy, which placed predictions about the actual competitive effects caused by a merger or monopolist’s conduct front and center in the enforcement of the antitrust laws. Today, the idea has become so mainstream that no label need be attached to the economists—and the majority of lawyers and policymakers who also adhere to the idea—who support this foundational premise in antitrust.
The Economism guessing game
The Economism—as some call it—of antitrust has sought to make the analysis in competition cases more rational by requiring that, before intervening in markets, enforcers must make a strong showing of the expected actual effects on competition of a given merger or a monopolist’s conduct. (To be sure, it was not just an intellectual disagreement with the status quo that inspired this movement. It was an ideological one, too, guided by the belief that it was more often than not better to wait for free markets to correct themselves rather than have the government meddle in them.)
On its face, it may seem sensible that the enforcement of laws which serve to protect competition should turn on an assessment of actual competitive effects. But this shift has meant that governments (and private plaintiffs) bringing an antitrust case are required to present more evidence to explain the competitive dynamics of a market and how the conduct of its actors impacts competition in it. Any antitrust litigator can attest to how antitrust cases stand out from others in terms of length, complexity, and scale. They are fact-heavy and data-intensive. And in the end, the heavy toll they exact is borne by everyone involved in the case—prosecutor, defendant, and judge alike.
But the burden of analyzing actual competitive effects is more than just a hassle. It is responsible for turning antitrust into a guessing game. In merger cases, this is largely a forward-looking exercise: predicting how a combination of two companies will impact competition by comparing the market’s expected competitive state if the merger goes through to its expected competitive state if it does not. In monopolization cases, a similar analysis of the impact on competition of a monopolist’s abusive conduct can either be forward-looking (for preventing future harms) or backward-looking (for righting past wrongs).
And it is through the competitive effects guessing game that Economism was thrust into the forefront of antitrust. That is because a predictive approach to enforcement would not have been possible without the belief that economic theories and models provided the scientific (hard “s”) rigor for understanding how a market operates and how the conduct of its actors impacts competition in it. Depending on how you look at it, making predictions with economic models in antitrust was either the root cause or a necessary by-product of shifting the focus to actual competitive effects. Either way, Economism became the beating heart of antitrust at the same time that the law’s enforcement became premised on making predictions about actual competitive effects.
The unproven and perhaps unprovable premise of Economism
Despite forming its foundational underpinning, the bedrock assumption in modern antitrust that lawyers supported by economic experts are capable of understanding and predicting complex markets remains unproven—if it is even provable. Indeed, there is good reason for doubt.
In Antifragile, uncertainty expert Nassim Taleb writes: “Man-made complex systems tend to develop cascades and runaway chains of reactions that decrease, even eliminate, predictability … the modern world may be increasing in technological knowledge, but, paradoxically, it is making things a lot more unpredictable.” Taleb is skeptical of what he calls “superfragile” predictions guided by economic theory and models which are inherently “unreliable for decision-making.” To him, “economics is like a fable—a fable writer is there to stimulate ideas, indirectly inspire practice perhaps, but certainly not to direct or determine practice.”
According to Taleb, policymaking that uses economic models to manage complex systems in a top-down fashion is bound to fragilize things—no matter how well-intentioned the intervention might be. His most poignant examples of the dangers of expert-guided prediction-making come from looking at economic policy which, in an attempt to minimize short-term gyrations in the economy and financial markets, instead sets them up for larger blow-ups with systemic consequences. He concludes that “even when an economic theory makes sense, its application cannot be imposed from a model, in a top-down manner.”
In Thinking, Fast and Slow, behavioral economist and decision-making researcher Daniel Kahneman endorses a similar skepticism about relying on expert judgments to evaluate and make predictions about complex environments. Kahneman summarizes research in various domains (medical, economic, etc.) finding that, due to limits and biases innate to human cognition, expert judgments made amidst uncertainty and unpredictability—in what he calls “low-validity” environments—are a dependably ineffective way to predict the future.
Antitrust operates in precisely the sort of environment that the works of Taleb and Kahneman would suggest is poorly suited for subjective, predictive decision-making. The lawfulness of a merger is determined by predicting whether it will cause prices to go up, and a monopolist’s abusive conduct by conjecturing whether prices were inflated over a surmised competitive level—everything heavily reliant on economic theories and models. And the fact-specific inquiry of every antitrust case—especially one involving dynamic tech markets—means that its practitioners never get exposed to the sort of “regularity” and “prolonged practice” that Kahneman concludes is necessary for subjective expert judgments to acquire predictive validity. If anything, low validity is supercharged in digital markets, which operate in vast ecosystems of constantly evolving, interrelated markets with complicated relationships among their players.
The works of Taleb and Kahneman suggest that antitrust technocrats are on a fool’s errand that will result in inaccurate evaluations of market conditions and poor predictions about competitive effects. Bad competition policy follows, if for no other reason than the limits of human cognition and the complexities of the market environments being observed.
Pulling back the curtain on Economism in practice
Practitioners can also draw on their own experiences to find ample support for the skepticism of expert-based, predictive decision-making that flows from the works of Taleb and Kahneman.
The pitfalls of Economism in antitrust can be seen in everyday practice. In merger cases, economic models are presented to predict future price increases by the merged companies. And parties looking to dodge enforcement actions in close-call cases hire economists to predict how a merger will lower costs, increase output, and improve innovation.
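To give a flavor of what those predictive models look like, here is a minimal sketch of one real screening formula from merger economics, the Gross Upward Pricing Pressure Index (GUPPI). The formula itself is standard, but the input numbers below are hypothetical, and actual merger models are far more elaborate than this.

```python
def guppi(diversion_ab, margin_b, price_b, price_a):
    """Gross Upward Pricing Pressure Index for product A when its seller
    acquires the seller of substitute product B.

    diversion_ab: fraction of A's lost sales that divert to B (0 to 1)
    margin_b:     B's percentage margin, (price - marginal cost) / price
    """
    return diversion_ab * margin_b * (price_b / price_a)

# Hypothetical inputs: 30% of A's lost sales divert to B, B earns a 50%
# margin, and the two products sell at the same price.
pressure = guppi(0.30, 0.50, price_b=10.0, price_a=10.0)
print(f"GUPPI = {pressure:.0%}")  # GUPPI = 15%
```

Note how much weight the single `diversion_ab` input carries: it is itself an estimate of a counterfactual (where would customers go if A raised prices?), which is exactly the kind of guess the article argues enforcers cannot reliably make.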
In private antitrust litigation, plaintiffs and defendants alike rely on armies of economists to make out the elements of a case or defend against it. Too often, the result is a series of warring expert reports submitted by uber-qualified economists with stellar reputations who—based on the exact same factual record—reach diametrically opposing positions about a market’s dynamics or likely competitive effects. Equally troubling is how the uncertainty of the expert opinions can be seen fading away by the time the court chooses a winner, as the prevailing view achieves a supreme prescience when cited by the judge in support of its decision.
Alarm bells should be going off. An academic field’s reputation would seem to be put in doubt, and with it the foundation of an influential body of law that shapes our economy and society. Instead, academics and policymakers are more likely to be heard describing the rigor and rationality that they believe neoliberal economic thinking has brought to antitrust enforcement. And while some reforms proposed by the mainstream antitrust community might seem dramatic within the existing paradigm, they are trivial when considering how none tackle the fundamental flaws of the status quo.
And so, paradoxically, as antitrust turns its focus on increasingly difficult-to-predict markets, it does so increasingly with Economism-driven prediction as its lodestar—like a captain that insists on navigating a ship with the stars even when it is obvious that clouds cover the night sky.
Antitrust and Economism come to a head in tech
The problems with using Economism to guide competition policy are especially glaring in tech. Thousands of pages of government studies from around the world have now exhaustively documented antitrust’s shortcomings in responding to the digital revolution. A consensus has emerged that the still-young archaeological record of antitrust enforcement in tech is already replete with opportunities missed to prevent various digital markets from “tipping” to monopoly or near-monopoly.
In the merger context, the focus has been on missed opportunities to block digital acquisitions by large incumbents of their upstart rivals. One of the main reasons these sorts of tech deals typically pass antitrust scrutiny is that making predictions about the long-run competitive impact of a small entrant into an emerging tech market is simply too tall a task for enforcers to take on. The Google/Admob investigation discussed above is a good example of how that burden can overwhelm an agency. The FTC’s closing statement in that case portrays an agency bogged down in trying to understand a nascent mobile ad network market and predict what competition in it would look like with and without the merger. It never emerged from the muck with adequate proof that Google’s acquisition could harm competition in the future, despite unambiguous evidence of close competition between the two companies at the time.
In monopolization cases, lax enforcement has been attributed to two main causes. First, an insufficient understanding of the inner workings of emerging digital markets. Second, the absence of viable antitrust theories of competitive harm to deal with situations where a tech giant operating within a large digital ecosystem leverages its monopoly power in a core market to restrict competition in an adjacent market where it has less power. Indeed, in the Google monopolization investigation discussed above, the agency’s ability to bring a case seemed to get derailed by the burden of having to measure the competitive effects—weighing the harm to some players against the benefits to others—of a novel theory of “search bias” within a complex online ecosystem.
The deafening silence from U.S. competition authorities in the years since those investigations of Google, in terms of both merger and monopolization enforcement, is telling of the tough burden that enforcers carry in tech. (Those looking to the current antitrust investigations of Google to make amends for past failures would be wise to temper their expectations, as discussed here.)
Yet despite these glaring shortcomings, officials seem not even to question the notion of an Economism-guided, predictive enforcement of the antitrust laws in tech. There is no mapping of economic theories or models to the successes or failures of the enforcement decisions they have influenced. So-called “retrospective” analyses by governments—which would look at what happened in a market after an enforcement action was pursued (for example, blocking a merger), an action was not taken (for example, not seeking to stop certain conduct of a monopolist), or a remedy was imposed (for example, requiring divestiture of part of a business as a condition for clearing its merger)—are few and far between.
And so antitrust enforcers continue on, even as it gets more and more difficult for them to understand and predict the tech markets they are asked to analyze. Lulled into a false sense of security by their faith that Economism-guided predictions will show them the way, they seem unaware that they might get led off a cliff by a false prophet.
The consumer welfare standard: antitrust with one hand tied behind its back
The naive interventionism behind modern antitrust enforcement lies not only in its reliance on economic theories to understand and predict markets. It also flows from a school of thought that aims Economism and the antitrust laws at the protection of consumers rather than human beings.
Modern antitrust is rooted in a particular brand of a so-called “consumer welfare standard.” A legacy of the same neoliberal or Chicago School thinkers who transformed how the antitrust laws would be enforced, it says that what antitrust ought to do is optimize efficiency in the economy. And “efficiency” here is to be narrowly construed as lower prices and higher output, to the exclusion of essentially all other considerations.
This may simply be a natural consequence of having unleashed Economism on antitrust—a practical acknowledgement that, to be effective, prediction-making using economic theories and models must narrow in on a few quantifiable aspects of competition that can be measured. Or, it could be that focusing antitrust on price-output effects naturally resulted in the rise of Economism. Whatever the direction of cause and effect, it is evident that Economism and the idea that the law should be used only to promote efficiency optimization in markets have been wedded to each other during their half-century reign over antitrust.
This seemingly minor short-cut in the goals of the competition laws has had major consequences for the course of antitrust enforcement. As enforcers conduct the now-requisite analysis to understand and predict the actual competitive effects of a merger or monopolist’s conduct, they look only to how consumers are harmed or benefitted. And the decision of whether the antitrust laws have been violated boils down to the narrow question of whether prices go up or output goes down.
This modern orthodoxy has meant that significant harmful effects accompanying the efficiency optimization of markets are ignored in the competition analysis and can, perversely, be encouraged by the enforcement of the antitrust laws.
The broader harms that have little to do with “consumer welfare” can take many forms. For example, hyper-optimization of agricultural markets has endangered smaller family farms, devastated the environment, and hollowed out American communities.6https://www.vox.com/future-perfect/2020/7/8/21311327/farmers-factory-farms-cafos-animal-rights-booker-warren-khanna; https://www.theguardian.com/environment/2019/mar/09/american-food-giants-swallow-the-family-farms-iowa; https://www.nybooks.com/articles/2020/06/11/covid-19-sickness-food-supply/ Cost-reducing, volume-increasing consolidation in meat processing has resulted in unsafe conditions for workers, declining real wages, inhumane conditions for animals, and a food supply chain vulnerable to shocks like COVID-19 or the spread of food-borne illnesses.7http://www.foodandpower.net/2020/05/07/meatpacking-more-dangerous-today-than-a-generation-ago-amplifying-covid-19-crisis/; https://www.nytimes.com/2020/04/18/business/coronavirus-meat-slaughterhouses.html; https://www.politico.com/news/2020/05/25/meatpackers-prices-coronavirus-antitrust-275093 Hyper-efficient dairy producers have made milk suppliers dependent on far-off markets (and vulnerable to any weather, geopolitical, or other event interfering with international trade), increased hidden environmental costs by de-localizing production and distribution, and fragilized supply chains to shifts in demand caused by events such as COVID-19.8https://washingtonmonthly.com/2020/04/28/why-are-farmers-destroying-food-while-grocery-stores-are-empty/; https://washingtonmonthly.com/2019/11/21/the-monopolization-of-milk/; https://www.wsj.com/articles/why-milks-best-sales-in-a-decade-wont-save-struggling-dairy-farmers-11592220289?redirect=amp#click=https://t.co/wToporR6wD
Despite the fact that these systemic fragilities are caused by the sort of lost market redundancies and increased consolidation that would seem to be at the core of competition policy, they play no part in a modern antitrust deployed to protect little more than the interest of a narrowly conceived subset of consumers who desire only to buy more for less. What any such consumer might also want as a worker, a small business owner, a citizen, a voter, or a member of the local community goes ignored.
Consumer welfare in the digital revolution
The narrow focus on price-output effects plays an important part in the lax enforcement of antitrust laws in the digital economy. Tech is all about the consumer and satisfying their desires for entertaining, convenient, and easy-to-use products and services. This has made the big tech companies, according to surveys, among the most beloved and trusted institutions in American society. Their products and services are wildly popular and, even more important, they are largely free.
The evolution of a free, ad-revenue-based business model on the internet has been the miracle drug that inoculated the tech industry against antitrust scrutiny. Big tech has been the Invisible Man when it comes to merger and monopolization enforcement, in large part because governments and private plaintiffs must always contend with the fact that no matter how compelling their story of harm to competitors or other market players, the effect on consumers is often benign, if not beneficial. And since consumer welfare reigns supreme in antitrust, enforcement in tech cases rarely results.
But even though it can hide in plain sight from antitrust scrutiny, tech’s consolidation still has the potential to create massive systemic fragilities having nothing to do with prices or output. After all, what if not the concentration of information infrastructure in the hands of a few tech giants is at the core of fake news, hate speech, the undermining of truth in society, the manipulation of the democratic process, the polarization of political discourse, and the radicalization of citizens? And what if not the hyper-optimization of sales and distribution online is behind the destruction of small and local businesses, the centralization of profit and political power, the mass migration of the workforce to a vulnerable non-employee status, and the race to the bottom to create cheaper, low-quality, high-waste goods? And what sort of buried systemic risks are all online users exposed to by the accumulation and consolidation of their personal data in the hands of a few large, for-profit enterprises?
Yet despite these consequences having flowed directly from the concentration of digital markets, all are ignored by a modern competition law orthodoxy that focuses exclusively on the consumer experience. With such a narrow target to aim at, it should come as no surprise that antitrust enforcers rarely bring enforcement actions in tech.
To summarize the problem before turning to potential solutions: modern antitrust rests on a shaky foundation. For one thing, its enforcement requires analyzing and predicting actual competitive effects, even though the Economism relied upon to do so has not been shown to be good science for that purpose. At the same time, enforcers are left short-handed by a school of thought that focuses antitrust exclusively on price-output effects. The results were bad enough in the analog world, but they have ground enforcement to a halt in the digital one. And so unless reformers start by rebuilding the antitrust enforcement framework on sturdier ground, their well-intentioned reforms may face instant collapse.
A blast from the past: nonpredictive decision-making with economic structuralism
The cracks in the modern antitrust orthodoxy—predictive enforcement blindly guided by Economism and overly constrained by a focus on optimizing efficiency—are foundational problems for competition policy which are only worsening in the digital era. Fortunately, they may converge on a singular solution.
For that, we return to Kahneman and Taleb, the skeptics of expert judgments in complex environments. One of the surprising takeaways from Kahneman’s work about predictive decision-making amidst complexity is that “to maximize predictive accuracy, the final decision should be left to formulas, especially in low-validity environments.” Counterintuitively, the more complex the environment—and digital markets set the high water mark for low validity—the simpler the prediction method ought to be. Taleb advocates for something similar in what he refers to as “nonpredictive decision making under uncertainty” with “simpler methods of forecasting” in place of “complicated systems and regulations and intricate policies.”
Kahneman gets more specific about what such an approach might look like in practice. It starts with a baseline prediction (or “base rate”) that is set according to empirical observations about the typical (average) outcome under normal (average) circumstances. Once the baseline is set, subjective expert judgment can then be relied upon to adjust the baseline prediction upward or downward. However, the extent of this subjective adjustment can be reined in by reference to certain objective factors known to be associated (correlated) with the outcome that is being predicted.
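That recipe lends itself to a toy illustration. The sketch below is purely hypothetical (the base rate, adjustment, and cap are invented numbers, not drawn from any real enforcement model); it shows only the mechanical idea of anchoring on an empirical base rate and capping the subjective adjustment:

```python
def formulaic_prediction(base_rate, expert_adjustment, cap):
    """Start from the empirical base rate for the average case, then let
    subjective expert judgment nudge it up or down, but only within an
    objectively fixed cap."""
    bounded = max(-cap, min(cap, expert_adjustment))
    return base_rate + bounded

# An expert who wants to push the estimate far above the base rate (30)
# can move it only as far as the cap (10) allows.
print(formulaic_prediction(30, 25, 10))   # capped at 30 + 10
print(formulaic_prediction(30, -50, 10))  # capped at 30 - 10
```

The point of the cap is that the expert can move the formula's answer, but never dominate it.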
Could a simple, nonpredictive formula be used to guide antitrust enforcement? It just so happens that the annals of antitrust provide a good starting point for such an approach.
Prior to its transition in the 1970s to an Economism-driven predictive approach, antitrust was guided by a so-called “economic structuralism” which said that market structure determined market outcomes. Simply put, when there are fewer competitors in a market, a firm is more likely to collude with others and abuse a strong market position. Therefore, to protect competition, enforcers should prevent markets from tilting—whether by merger or monopolistic conduct—into the structures and conditions conducive to bad competitive outcomes by ensuring that markets have many competitors (low “concentration”) and are open to new rivals (low “barriers to entry”).
Economic structuralism in merger enforcement
Perhaps the height of economic structuralism in merger enforcement was in 1963. That year, the Supreme Court made a seminal ruling in the Philadelphia National Bank case that sought to “simplify the test of illegality” in merger cases. It did this by creating a strong pro-enforcement presumption that mergers which increase the concentration of a market beyond a certain threshold (based on counting and measuring the market shares of its players) are harmful to competition and therefore presumed unlawful under the antitrust laws (https://www.law.cornell.edu/supremecourt/text/374/321). The Court thereby sought to streamline merger enforcement by “dispensing, in certain cases, with elaborate proof of market structure, market behavior, or probable anticompetitive effects.”
The FTC and Department of Justice—the country’s two federal antitrust enforcers—ran with this idea and placed economic structuralism front and center in merger enforcement. In 1968, they published influential enforcement guidelines announcing that it was the “primary role” of merger laws “to preserve and promote market structures conducive to competition” (https://www.justice.gov/archives/atr/1968-merger-guidelines). The guidelines identified specific threshold levels of combined market shares (and increases in those levels) which, if found to result from a merger, would weigh strongly in favor of the government intervening to stop the deal or to place conditions on it that remedy its anti-competitive potential. These guidelines influenced not only the agencies’ own practices when deciding whether to bring an enforcement action, but also how courts ruled when hearing merger cases.
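Mechanically, a concentration screen of this sort is simple arithmetic over market shares. The sketch below is illustrative only: it uses the Herfindahl-Hirschman Index (the sum of squared market shares, the concentration measure in the agencies' modern guidelines) rather than the 1968 guidelines' share tables, and the threshold numbers are assumptions chosen for demonstration, not a statement of current law:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

def merger_presumption(shares, a, b, hhi_threshold=2500, delta_threshold=200):
    """Flag a merger between firms a and b (indices into shares) as
    presumptively unlawful when both the post-merger concentration and the
    increase in concentration cross their thresholds. Thresholds are
    illustrative defaults, not authoritative figures."""
    post = [s for i, s in enumerate(shares) if i not in (a, b)] + [shares[a] + shares[b]]
    delta = hhi(post) - hhi(shares)  # equals 2 * shares[a] * shares[b]
    return hhi(post) >= hhi_threshold and delta >= delta_threshold

# Five-firm market; the #4 and #5 players (15% and 10%) propose to merge.
print(merger_presumption([30, 25, 20, 15, 10], 3, 4))  # triggers the screen
```

Notice that nothing in the screen predicts anything: it only counts and squares shares, which is exactly what makes it administrable.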
This so-called “structural presumption” in merger law is not as blunt an instrument as it may seem. That is because, in law, a presumption can be rebutted, and so it is the start but not the end of the analysis. The party on the receiving end can—with enough evidence—overcome the presumption and push the burden of proof back to the other side. In merger cases, this means that once the government shows the market share thresholds have been crossed to establish presumptive illegality, the merging companies can rebut (overcome) that presumption by showing that their combined market shares do not, in fact, indicate a likely negative effect on competition. If they succeed in rebutting the presumption, the case is not over, but the burden shifts back to the government to prove through a detailed analysis of the facts that the merger will likely have a harmful effect on competition.
The major impact of this approach rests in more than just beginning the analysis with the presumptive illegality of any merger that sufficiently increases market concentration. Just as important is how the burden-shifting is calibrated in terms of the amount of proof required to rebut the presumption. The Supreme Court in Philadelphia National Bank required of merging parties seeking to rebut the structural presumption “evidence clearly showing that the merger is not likely to have such anticompetitive effects.” The FTC/DOJ guidelines stated that it was only in “exceptional circumstances” that a merger crossing the problematic concentration thresholds would require a “more complex and inclusive evaluation” to determine its legality.
In other words, if the government met its initial burden, the result was more often than not a quick end to the case and the merger’s prospects—without any cause to delve into a detailed analysis to predict its likely actual competitive effects.
Although the structural presumption lives on today, it has been recalibrated to no longer play the outcome-determining role that the Supreme Court and antitrust agencies envisioned in the 1960s. In the case law, courts influenced by broader trends in the transition to Economism and a more hands-off approach to private markets have lowered the evidentiary bar required of the merging companies to rebut the structural presumption and have shifted the focus of the competitive analysis from market structure to predictions about actual competitive effects. At the FTC and DOJ, revisions to the enforcement guidelines have followed a similar trend and motivation, with actual enforcement practices arguably drifting even further away from the presumption and closer to analysis of actual competitive effects.
The result is that the Supreme Court’s precedent and agency guidance on structural presumptions are zombie doctrines in any but the clearest-cut (very strong or very weak) merger cases. The presumption remains on the books. Courts in their decisions and enforcers in their investigations still pay it its dues by starting the analysis with market shares. But, more often, the analysis of actual competitive effects has become the real battleground in merger cases.
In extreme cases like Google/AdMob, the defendant’s burden seems to have almost flipped and become the government’s. In others, it provides just enough of an opening for the merging companies to sow confusion and instill doubt in the enforcer (or the court) about the strength of the case. Either way, due to the challenges that a full competitive effects analysis presents for enforcers and courts, mergers that as a technical legal matter should be “presumed” unlawful due to their impact on concentration are too often cleared with minimal or no conditions attached.
Economic structuralism in monopolization enforcement
Economic structuralism was not a phenomenon limited to merger enforcement. In the first step of a monopolization case, courts at one time would rely heavily on market shares to establish that a company was “dominant.”
And also for the second step—which requires showing that the monopolist abused its dominant position by hindering competition—conduct was presumed anti-competitive if it was simply shown to maintain or increase the monopolist’s market power. Most often, this was shown by reference to two structural factors: market concentration and barriers to entry. If a monopolist did anything to maintain or elevate either one, it was presumed to lead to market conditions that would eventually manifest themselves in anti-competitive conduct. This alone could be enough to establish a violation of the monopolization laws in some cases. This rooted monopolization cases in the same structuralist ideas underlying the presumption used in merger cases.
But just as the structural presumption in merger law lost its appeal in favor of an analysis based on predicting actual competitive effects, a similar transition occurred in monopolization law. The movement towards Economism and a more laissez faire approach to private markets has shifted the focus in monopolization cases to the question of whether a monopolist’s conduct in fact did (in the past) or is likely to (in the future) eliminate competition by raising prices or restricting output.
And so in proving a firm’s monopoly power, now market shares are often only the start of an analysis which delves into market characteristics such as the likelihood that the position will be “durable” in the long-run. The same can be said for showing that a monopolist abused (wrongfully maintained or elevated) its monopoly power. To successfully bring a case alleging predatory (below-cost) pricing, for example, it now must be shown that the monopolist’s losses in the short-term will be “recouped” with profits in the long-term. Meanwhile, arguments by the monopolist that its conduct was objectively justified or had a net beneficial impact on competition receive more serious consideration. The results of some monopolization cases turn on a precarious balancing of the harmful and helpful effects of the monopolist’s conduct on the competitive process.
And so with a shift to analyzing actual competitive effects, proving up monopolization cases has become the same morass that it is in merger cases. That said, the comparison to merger law should not be overstated. Discerning bad from benign unilateral conduct by monopolists is particularly difficult. Empirical evidence is mixed about the relationship between certain conduct and anti-competitive market outcomes. And, in practice, economic structuralism in monopolization cases resulted in many questionable outcomes, including cases that condemned little more than the mere bigness of a firm. Moreover, monopolization cases are often brought after the fact, when market outcomes can as a practical matter be measured; by contrast, merger cases are almost always brought before the fact, when making an educated guess about future market outcomes is the only option.
But the shortcomings of the structuralist approach ought to be no defense for perpetuating a status quo that is ineffective itself. The modern approach to monopolization cases, which requires speculating about actual competitive effects with unproven economic tools, can claim no empirical or intellectual superiority to the rough but simple economic structuralism that preceded it—while it certainly can stake a claim to the consequences of the massive scaling back of antitrust enforcement that followed in its wake.
Imperfect as they were, and despite landing on the losing side of history, might antitrust’s previous experiences with economic structuralism in the merger and monopolization contexts inform a more effective antitrust framework for the modern tech age?
A modern nonpredictive antitrust framework
The relic of economic structuralism offers a good starting point for brainstorming an improved approach to antitrust enforcement, in tech and beyond. The guiding principle might be to move the decision-making away from subjective human judgments based on Economism-driven predictions about actual competitive effects in favor of objective, nonpredictive screens preserving the market conditions conducive to competitive outcomes.
Nonpredictive merger enforcement
Antitrust’s history with merger enforcement may have the most to offer in the way of a nonpredictive, formulaic enforcement tool.
It might start with reviving in merger law and agency practice a stricter structural presumption that is more in line with what the Supreme Court and antitrust agencies had in mind in the 1960s. Mergers crossing problematic thresholds of concentration would be unlawful unless the merging parties made a strong evidentiary showing that anti-competitive effects were unlikely to result.
But the presumption could also be updated to include objective factors other than just market shares, consistent with a modern-day understanding of the relationship between market conditions and competitive outcomes. One example might be the conditions for entry in the market (the higher the barriers to entry, the stronger the case for blocking the merger). So whereas entry barriers are currently subjected to the same Economism-heavy prediction game as the rest of the competitive effects analysis, they could instead be treated as an objective variable (tabulating, for example, the number of recent entrants to the market) as part of the formulaic structural presumption.
Other factors could be added to the presumption, too, so long as their links to bad competitive outcomes were solidly rooted in empirical study. How the factors are “added up” to determine if the presumption threshold has been crossed would, of course, require more study. But following Kahneman’s rubric for simple, formulaic decision-making, it would be important for the factors to be objectively identifiable and limited in number. The factors considered in the presumption should be no more than is necessary to screen for those mergers which result in market conditions not conducive to adequate competition.
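To illustrate how an additional objective factor might be “added up” with concentration, here is one hypothetical way to fold a count of recent entrants into the screen. Every number and parameter name below is invented for demonstration; the real calibration is exactly the further study the text calls for:

```python
def structural_presumption(post_merger_hhi, hhi_delta, recent_entrants,
                           base_hhi_threshold=2500, delta_threshold=200,
                           relief_per_entrant=150, max_relief=600):
    """Illustrative only: relax the concentration threshold in markets with a
    record of recent entry (suggesting low entry barriers), and apply the
    base threshold where entry has been absent. The 'relief' per entrant is
    capped so that entry evidence can soften, but never erase, the screen."""
    relief = min(recent_entrants * relief_per_entrant, max_relief)
    return post_merger_hhi >= base_hhi_threshold + relief and hhi_delta >= delta_threshold

# Same concentration figures, different entry histories:
print(structural_presumption(2600, 300, recent_entrants=0))  # no entry: triggers
print(structural_presumption(2600, 300, recent_entrants=3))  # recent entry: does not
```

The entry count is the kind of objective, tabulated input Kahneman's rubric favors: it can be verified from the record rather than predicted by a model.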
As for subjective expert judgments, those would still play a role—though a subordinate one—in the new framework. When the merging parties are seeking to rebut a presumption of unlawfulness, they would be permitted to rely upon expert analysis to show that the merger will not, in fact, lessen competition. But such competitive effects analyses would be reserved for the rare case in which extraordinary proof exists to overcome the baseline expectation of competitive harm that arises from triggering the structural presumption. So the burden of proof for rebutting the presumption would be set very high, along the lines of what the Supreme Court and antitrust agencies had in mind in the 1960s.
If a return to economic structuralism is justified by a distrust of the current Economism-driven predictive approach to antitrust, it might also be that under the new enforcement framework mergers not triggering the presumption ought to be presumed lawful. This “reverse” presumption could be rebutted by the government only by proving likely harmful competitive effects, subject to the same tough burden of proof that merging companies would face in rebutting a presumption of illegality. A presumption, in other words, would run in both directions.
So in rough outline, a reboot of the merger framework as an objective, nonpredictive screen could be achieved by reviving a strong structural presumption based on market shares and perhaps some additional objective factors.
Nonpredictive monopolization enforcement
A fix to monopolization law in the form of a nonpredictive, formulaic enforcement framework would be more elusive than in the merger context, but it is still worth consideration.
The threshold requirement in monopolization cases that the company is “dominant” in a market could be subject to the same kind of structural presumption applied to mergers. But the second requirement—that the monopolist maintained or increased its monopoly power to the detriment of competition—would require looking beyond mere market structure. Otherwise, it would collapse into the first requirement and put monopolization law in the awkward position of forbidding the mere possession of monopoly power—a condition that can emerge organically and with benign competitive effects, especially in winner-takes-all tech markets.
In place of using market structure to determine when a monopolist has maintained or increased its monopoly position, the nature of its conduct would be a more natural focal point. Though the specifics would require further study, a nonpredictive framework could start with a strong presumption of illegality for a limited set of conduct that has been empirically established to be associated with bad competitive outcomes when performed by a monopolist.
Once established, the presumption would place a heavy burden—similar to the one considered above for merger cases—on the monopolist to rebut it with proof that competition is not likely to be (or to have already been) harmed by the conduct. Therefore, as with the structural presumption applied to mergers, the presumptive test for monopolistic conduct would leave room for subjective expert judgments in trying to rebut the presumption. There would also be a similar reverse presumption, meaning that market activities not falling within the pre-determined list of problematic monopolistic conduct would be (rebuttably) presumed lawful.
Such a framework for monopolization claims could also draw from case law experience with “unreasonable restraints of trade,” which are collusive agreements among competitors that are subject to another subset of the antitrust laws. Certain such agreements are treated as so pernicious as to render them strictly “per se” illegal (unlawful without any regard for their actual competitive effects), and others as so benign as to subject them to a highly permissive “rule of reason” (usually lawful under a full-blown competitive effects analysis). But a “truncated” rule of reason lying in a Goldilocks middle between these two extremes causes certain agreements to be presumed unlawful without delving into their actual competitive effects, while still allowing the parties to the agreement to rebut that presumption with adequate proof. This framework could be roughly imported into a presumption-based structuralist approach to monopolization cases.
One major hurdle for monopolization cases under the new framework would be in determining whether, in a particular case, the monopolist has engaged in a preset category of problematic conduct. This would not always be obvious (a lesson learned from courts grappling with when to apply the truncated rule of reason in restraints of trade cases). But in keeping with the goal of a simple, formulaic approach that avoids slipping into the competitive effects quagmire, an objective screen could be used. This screen would look at certain nonpredictive indicators—market conditions or circumstances present and not present—which would function as a checklist or be summed up to formulaically determine whether the monopolist’s conduct falls within the pre-determined list of presumptively unlawful activities.
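A checklist screen of this kind could be as simple as summing weights over objectively observable indicators. Everything in the sketch below (the indicator names, weights, and threshold) is hypothetical; the real list would have to come from the empirical study described above:

```python
# Hypothetical indicators of presumptively unlawful monopolistic conduct,
# each weighted by its (assumed) empirical association with bad outcomes.
INDICATORS = {
    "below_cost_pricing": 2,
    "exclusive_dealing_at_scale": 2,
    "acquisition_of_nascent_rival": 2,
    "self_preferencing_on_own_platform": 1,
    "denial_of_interoperability": 1,
}

def conduct_presumed_unlawful(observed, threshold=3):
    """Sum the weights of the objective indicators present in a case; the
    monopolist's conduct is presumptively unlawful when the score crosses
    the threshold. Unrecognized indicators are ignored rather than scored."""
    score = sum(INDICATORS[i] for i in observed if i in INDICATORS)
    return score >= threshold

# Two observed indicators (weights 2 + 1) cross the illustrative threshold:
print(conduct_presumed_unlawful({"below_cost_pricing", "denial_of_interoperability"}))
```

As with the merger screen, the formula never predicts effects; it only records which pre-established conditions are present and which are not, leaving effects analysis to the rebuttal stage.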
Fine-tuning the proper aims of a nonpredictive antitrust
Although the proposed frameworks for monopolization and merger cases differ in some ways, both rely on an objectively-determined presumption of unlawfulness on the front-end which pushes any Economism-based, predictive analysis of actual competitive effects to the back-end, where the opposing party faces a high evidentiary burden for rebuttal.
This approach, while seeking to minimize the role of subjective judgment in antitrust decisions, does not eliminate it, which means still having to grapple with the issue of what the proper aim of antitrust ought to be. In either the merger or monopolization context, the presumption (whether facing the party bringing the case or the one defending it) can be rebutted with sufficient proof regarding actual competitive effects. Naturally, a question therefore arises about what types of effects are fair game for argument.
As discussed above, the current consumer welfare approach which focuses entirely on prices and output ignores various harmful effects from the concentration of economic power that would seem otherwise within the reach of antitrust laws. But how much broader ought the goals of antitrust be under the new proposed enforcement frameworks? Harm to competitors (exclusion), laborers (wage suppression), and suppliers (price squeezes) might be the low-hanging fruit for inclusion in a broader welfare standard. The same might be said of loss of redundancies in the supply chain, or consolidation of control over user data. Harm to the environment and concentration of political power may be tougher to incorporate. Hate speech and the polarization of public discourse, meanwhile, would almost certainly fall outside the proper purview of antitrust.
Wherever the line is ultimately drawn by policymakers, it need not be inclusive to an extreme. After all, broader societal concerns about concentration of private markets can be left to the protection of a very strong presumption on the front-end of the new enforcement framework. But other than to say that it is intended to be the rare case where a competitive effects analysis is performed on the back-end, it must be acknowledged that more work would need to be done to figure out its proper boundaries.
Questions surrounding how to define the proper aims of antitrust would also seep into the judgment calls that need to be made about what triggers the presumptions of illegality on the front-end. That is because the threshold levels of concentration and additional objective factors triggering the structural presumption in merger cases, as well as the categories of conduct deemed presumptively unlawful in monopolization cases, would be determined according to their tendencies to result in market conditions conducive to bad competitive outcomes. But what is a “competitive outcome” is in the eye of the beholder, and so difficult questions would arise in formulating the front-end presumptions in both merger and monopolization cases.
Difficult as that task may be, there is much benefit to working out those difficulties at a policy level. Those who in the last half-century have—through their influence over academia, the courts, and government officials—reined in merger and monopolization enforcement by shifting its focus to price-output effects have done so with little say from lawmakers. A reset of the antitrust enforcement framework would be an opportune moment to refocus competition policy on the broader detrimental effects of allowing markets to persist in conditions of concentrated economic power.
Where the lines are drawn would have a huge impact on the reach of antitrust laws under the new enforcement regime. The debate would be especially fraught and consequential in the digital context, where existing enforcement of the merger and monopolization laws has been particularly controversial and prone to disappointing results (the latter discussed here and here in the context of investigations of Google). Difficult cuts would have to be made, and the results would ultimately reflect not only ideology about the proper role of antitrust, but also pragmatic factors such as the likelihood and ability of other regulations to fill the gaps (covered here).
Nonpredictive antitrust enforcement in practice
The formulaic, nonpredictive approaches outlined above are guided by a simple principle: that antitrust enforcement ought to be put on a sounder intellectual footing that acknowledges the limits of the human mind in making predictions amidst complexity.
The practical effects of the proposed changes would be to improve clarity and certainty for everyone involved—companies, government agencies, courts—in distinguishing lawful from unlawful market activities. They would also ease the burden for bringing such cases, and in the process free up resources for more enforcement of the antitrust laws. At the same time, some of the changes—such as adding new objective factors to the structural presumption in merger cases, employing a clear-cut list of presumptively unlawful monopolistic conduct, and subjecting enforcers to reverse presumptions of lawfulness—would probably tip the balance the other way, scaling back certain types of enforcement.
Still, it seems self-evident that the net result of the proposed changes would be more active enforcement of the merger and monopolization laws. The specific make-up of the resulting cases—which types would increase versus decrease, which industries or players would see the biggest changes, etc.—is less clear. But the aim in reforming competition policy should be more accurate enforcement, targeting the right mergers and monopolistic conduct, for its own sake. Then let the chips fall where they may.
As for the day-to-day enforcement of the antitrust laws, the major implications could be summarized as follows.
First, there would be the lowering of the barrier currently put in front of enforcers and courts that requires the lawfulness of market activities to be determined by performing the difficult task of predicting and conjecturing about actual competitive effects.
Second, the simple, formulaic framework put in its place would de-emphasize the role of predictions in the decision-making process, streamlining antitrust enforcement for those activities which are empirically known to perpetuate the structural market conditions associated with bad competitive outcomes.
Third, at the same time, it would leave some wiggle room for nuanced expert judgments to soften the blunt force of a trial-by-formula in those rare instances when unique circumstances justify diving back into the lion’s den of analyzing actual competitive effects.
Fourth, by relying on objective criteria about market structure or conduct instead of subjective judgments about market effects, the new framework would empower antitrust to reach various other important kinds of harm—beyond just price and output effects—that can flow from the concentration of economic power. That is, by targeting the roots of harmful concentration instead of just cutting off a few branches that have grown out of its trunk, antitrust would protect various interests in society other than just the consumer who wants to buy more for less.
Only one way out: a new antitrust framework
Governments, policymakers, and enforcers around the world have been befuddled by the fact that, in a mere decade, entire digital industries emerged and consolidated into near-monopolies right under their noses. With the best of intentions, they now have undertaken a massive effort to figure out what got broken and how to fix it. In the process, they have reached for a familiar tool: an antitrust framework that seeks to protect competition by using economic theories and models to understand markets and make predictions about them.
But the antitrust reforms under consideration will only go as far as their lofty assumptions. And in a search for complex solutions to complex problems, it should come as no surprise that all that is found is confusion. There is no blame to be cast or shame to be felt in that outcome. Humans have simply run up against nature’s speed limit for the use of the mind to understand and predict complex systems. What would be shameful is to allow complacency, pride, or vested interests to perpetuate a broken competition policy.
The hard truth may be that antitrust was never equipped to be more than an ax that tackles economic concentration at its structural roots. And so a better alternative may exist in a nonpredictive, formulaic approach to antitrust enforcement that is less tied to analyzing competitive effects and more focused on identifying the market conditions that cause an excessive concentration of economic power. Not only does this approach do away with economic speculation—and the enforcement-squashing burden it puts on enforcers and courts—but it also has the potential to steer competition laws towards the protection of everyone and not just consumers.
Much work remains to be done to dust off the economic structuralism of antitrust’s past and adapt it to a modern empirical understanding of the factors and conditions that cause poor competitive outcomes in markets. Some soul searching about the proper goals for antitrust laws would also be a part of the process. But none of the challenges seem insurmountable if there is sufficient will to get antitrust right.