Rebalancing the scales

Is a new framework needed to assess antitrust risk? 

Concerns about the risk of “killer acquisitions” have led to calls for a reset of the way antitrust agencies weigh up competition risk when investigating mergers. Reports commissioned by both the European Commission and the UK government have floated the idea of moving to an “error-cost” framework that would bolster the ability of authorities to prohibit transactions that bring a small risk of a bad outcome for consumers. An assessment that takes account of both the likelihood and potential severity of harm to consumers seems hard to argue with in principle. But is the proposal workable in practice? And would such a framework – if applied in a balanced way – really lead to more mergers being blocked, as some advocates assume?

In its report to the UK government in March, the Digital Competition Expert Panel – chaired by Harvard economist Jason Furman – identified what it believed to be a pattern of systematic underenforcement of competition policy in the tech sector internationally. The leading digital firms, Furman observed, had made hundreds of acquisitions in the last decade across the globe, but only a handful had been investigated by Europe’s competition authorities and none had been blocked. This suggested that something was awry: even if most of these transactions had been good for consumers, there must have been at least some harmful exceptions that slipped through the regulatory nets.

In his diagnosis, Furman drew attention to the risk of “killer acquisitions”, whereby market-leading firms buy innovative start-ups to snuff them out before they can grow into a force capable of challenging their hegemony. The concern is neither new in the tech sector (a classic 1998 episode of The Simpsons parodied the idea at the height of the first dotcom boom) nor unique to digital markets: in fact, it first came to prominence in the pharmaceuticals sector, where companies with a successful drug brand allegedly bought smaller competitors to prevent them from developing rival drugs. However, the emergence of a new generation of tech giants has reignited the debate.

Furman and his colleagues suggested that underenforcement could be attributable to the “more likely than not” framework that underpins merger assessments in Europe. Under this regime, competition authorities are permitted to prohibit a merger only if they judge that it will more likely than not lead to a substantial lessening of competition. At face value this sounds sensible. But the Furman panel contended that such an approach shackles competition authorities in situations where there is a small chance of a very bad outcome for consumers.

Furman’s concern can be illustrated by considering two hypothetical mergers:

  • Merger A is associated with a high likelihood (60%) of an outcome that causes harm to consumers (price rises or foregone service improvements equivalent to €10 per consumer).
  • Merger B is associated with a much smaller likelihood of a negative outcome (just 10%), but the cost of such an outcome is more severe (equivalent to €100 per consumer). This second merger could be thought of as akin to the killer acquisition scenario: the firm being acquired may be a small start-up with a remote – but still positive – chance of evolving into a significant independent competitive force.

Under the current “more likely than not” regime, competition authorities would be able to block Merger A, on the basis that it was more probable than not that the outcome for consumers would be detrimental. By contrast, they would not be permitted to block Merger B, on the ground that there was only a one in 10 chance that it would prove detrimental. However, it is not obvious that a merger that brings a small risk of a very bad outcome is less of a concern, from a consumer welfare perspective, than one with a high risk of a mildly bad outcome. Indeed, in the example above, the expected detriment to consumers – derived from multiplying the severity of the harmful outcome by the probability of its occurring – would be higher under Merger B (€100 x 10% = €10) than Merger A (€10 x 60% = €6).
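For readers who want to see the arithmetic laid out, the short Python sketch below reproduces the expected-detriment calculation. It is purely illustrative: the probabilities and euro figures are the hypothetical numbers from the example above, not estimates drawn from any real case.

```python
# Expected detriment = probability of the harmful outcome x severity of
# that outcome. Figures are the article's illustrative numbers only.

def expected_detriment(p_harm: float, harm_per_consumer: float) -> float:
    """Expected harm per consumer from clearing the merger."""
    return p_harm * harm_per_consumer

merger_a = expected_detriment(0.60, 10.0)   # 60% chance of a €10 detriment
merger_b = expected_detriment(0.10, 100.0)  # 10% chance of a €100 detriment

print(f"Merger A: €{merger_a:.2f} expected detriment per consumer")  # €6.00
print(f"Merger B: €{merger_b:.2f} expected detriment per consumer")  # €10.00

# A "more likely than not" test catches only Merger A (60% > 50%),
# even though Merger B carries the higher expected detriment.
```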

To address this issue, Furman and his colleagues called for a move to a “balance of harms” regime that would take account of the potential severity of the harm as well as the likelihood that it would materialise. Such a regime would justify blocking both types of merger in the above scenarios, thereby giving European competition authorities the legal teeth to engage with killer acquisitions.

Furman is not alone in proposing a reset of the way antitrust watchdogs assess competition risk. In their recent report on competition policy for the digital era, commissioned by the European Commission, Jacques Crémer, Yves-Alexandre de Montjoye and Heike Schweitzer put forward the idea of shifting to an “error-cost” framework. Under this framework, competition authorities would seek to minimise the expected cost of making an erroneous decision about whether to block or permit a merger. In doing this, such an approach would – in much the same way as Furman’s “balance of harms” approach – consider not only the likelihood of making an incorrect decision, but also the severity of consumer harm that would result from this error. Furthermore, Crémer et al suggested that such a regime be extended beyond mergers to encompass the assessment of any form of potentially anticompetitive conduct by dominant firms.
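Neither report prescribes a formula, but the underlying logic of an error-cost comparison can be sketched. In the stylised version below, all three inputs – a probability of harm, a harm figure and a foregone benefit if a benign deal is blocked – are hypothetical quantities chosen for illustration; a real assessment would have to estimate each of them case by case.

```python
# A stylised reading of the error-cost idea: compare the expected cost of
# each possible mistake and take the decision with the lower expected cost.
# Neither report prescribes this formula; the inputs are hypothetical.

def error_cost_decision(p_harm: float, harm: float, foregone_benefit: float) -> str:
    """Return 'block' or 'clear' so as to minimise expected error cost."""
    cost_of_clearing = p_harm * harm                    # false negative: harmful deal waved through
    cost_of_blocking = (1 - p_harm) * foregone_benefit  # false positive: benign deal prohibited
    return "block" if cost_of_clearing > cost_of_blocking else "clear"

# Merger B from the earlier example: 10% chance of €100 harm, plus an
# assumed €5 per-consumer benefit if the deal turns out to be benign.
print(error_cost_decision(0.10, 100.0, 5.0))  # -> block (10.0 > 4.5)
```

On this framing, Merger B would be blocked despite its low probability of harm, which is precisely the result the “balance of harms” proposal is designed to achieve.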

In principle, the “balance of harms” and “error-cost” frameworks would appear to offer a more coherent basis for evaluating competition issues than the existing “more likely than not” criterion. But would they be workable in practice? And would such an approach – if applied in a balanced manner – really lead to stricter enforcement in the way that the Furman panel appears to assume?

Unknown unknowns

The first challenge is that of practicality. The illustrative examples set out above assume that there is uncertainty about whether a merger will prove to be anticompetitive, but that the level of uncertainty is known with precision. To borrow from former US Secretary of Defense Donald Rumsfeld, we might call this a “known unknown”. But, in reality, this is unlikely to be the case. While a small start-up may have some positive chance of making it big in the absence of the merger, the question of whether this chance is 0.1%, 1% or 10% may be inherently difficult to gauge. The difference between having a 0.1% chance and a 1% chance of success may seem trivial, but it constitutes a tenfold difference in probability. Given the potential scale of the consumer impact associated with enabling or preventing the emergence of the next tech superstar, this difference in probability could be enough to tip the correct merger decision from allow to block. And the fate of the start-up is not necessarily binary either: game-changing success and abject failure may be two bookends of a spectrum of possible outcomes. Competition authorities have little experience of making these types of judgments and no established toolkit for doing so.
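To see why the point matters, consider the linear scaling below. The foregone-benefit figure is an arbitrary assumption made purely for illustration; only the tenfold steps in probability are taken from the discussion above.

```python
# Illustration of how sensitive the expected-harm calculation is to the
# assessed probability of the start-up's independent success. The €1,000
# foregone-benefit figure is a hypothetical assumption.

FOREGONE_BENEFIT = 1000.0  # assumed per-consumer value if the start-up thrives alone

for p_success in (0.001, 0.01, 0.10):  # the 0.1% / 1% / 10% range from the text
    print(f"p = {p_success:6.1%} -> expected loss €{p_success * FOREGONE_BENEFIT:7.2f}")

# Each tenfold step in probability is a tenfold step in expected harm
# (€1.00, €10.00, €100.00): easily enough to tip a decision from clear to block.
```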

Nor are these challenges restricted to the difficulty of estimating the likelihood of success or failure. In order to apply the error-cost framework on a case-by-case basis, competition authorities would also need to form a clear view about the ways in which the target company might compete with the acquirer in the absence of the merger. In traditional industries where a new entrant is a “mini-me” facsimile of the larger incumbent, economists have developed a number of tools for estimating the price effects of a merger. It may also be possible to gauge such effects in sectors characterised by high levels of innovation, provided the innovation takes place within clearly defined parameters. In the pharmaceuticals sector, for example – where concerns about “killer” acquisitions first rose to prominence – companies have long-term R&D programmes designed to develop specific new treatments or better versions of existing drugs. But in a rapidly evolving sector where the entrant may develop a radical new proposition that leapfrogs existing technologies and practices (the PC, the smartphone, online social networks…), gauging the outcome is inherently difficult.

Looking both ways

Perhaps recognising these practical difficulties, Crémer et al warned against trying to apply their error-cost framework on a case-by-case basis. Instead, they suggested, competition law should try to translate some of the “general insights” from the error-cost way of thinking into a set of practical legal tests for competition authorities to follow. In digital markets – or indeed any markets characterised by strong network effects and consequently high barriers to expansion – the law might want to “err on the side of disallowing types of conduct that are potentially anticompetitive, and to impose the burden of proof for showing pro-competitiveness on the incumbent”.

In making this proposal, Crémer et al appear to be suggesting that the system should be rebalanced with a view to reducing the risk of a “false negative” decision – i.e. when a merger or practice that should have been prohibited on competition grounds is given the green light. But the error-cost framework – if applied in an even-handed way – should balance this risk against the converse risk of a “false positive” outcome, whereby a merger or practice that should have been permitted is blocked.

Accurately gauging the benefits of mergers is arguably even harder than weighing up the anti-competitive effects. Economists have a fairly well-developed toolkit for assessing some types of benefit, such as the likelihood that a merger will generate cost savings that are passed on to consumers in the form of lower prices. But in fast-growing sectors characterised by high levels of innovation, the main benefits may be more likely to take a different form, namely:

  • Unlocking innovation – a start-up business may have limited financial resources and little or no expertise outside its specific area of focus. Combining with a resource-rich business with a wider base of expertise may allow the start-up’s ideas to be taken in directions that would not otherwise have been possible. The larger company may also be able to take risks with the idea that would be impossible for a small independent firm, safe in the knowledge that if the idea fails to take off it is not going to spell the end of the whole business.
  • More effective management – while businesses and investment bankers like to talk about the prospect of mergers generating cost and innovation “synergies”, in many cases the main benefits are more humdrum. Shareholders may throw their weight behind an acquisition because they believe the new management team will simply do a better job overseeing the growth of the business than the existing owners. This is a particularly common consideration in the context of start-ups. As the many mauled contestants from the television show “Dragons’ Den” (or “Lions’ Den” or “Shark Tank” depending on your country) will be all too aware, there is a big difference between the talent required to found a start-up and the skillset needed to take that business to scale.

Putting a number on these types of benefits is challenging. In principle, competition authorities may clear a merger that would result in a prima facie reduction in competition if it can be shown that the deal will give rise to “technical and economic progress” and that the resulting benefit to consumers will outweigh any negative impact. In practice, such a trade-off is difficult to demonstrate, meaning that it is rare for mergers to be cleared on the grounds of these countervailing benefits. Furthermore, in the few instances in which competition authorities have accepted evidence for merger efficiencies in recent years – such as the European Commission’s assessment of the proposed UPS/TNT tie-up and two grocery sector investigations conducted by the UK Competition and Markets Authority – the focus has been on measurable cost savings associated with improved supply chain logistics or enhanced bargaining power with suppliers. The benefits associated with tech sector mergers may be harder to measure, but this does not make them less real. A recent report published by Lear for the UK Competition and Markets Authority, for example, found evidence of merger-specific efficiency benefits in two out of the five digital sector merger cases it reviewed.

There is a further question as to whether shifting the burden of proof might have a chilling effect on investment and innovation in the tech sector – an issue close to the hearts of European policymakers who are aware that European businesses are conspicuous by their absence from the roll call of the world’s leading tech companies. Preventing American or Asian giants from gobbling up promising European start-ups may seem like a way of increasing the chances of creating a European tech titan, but it could prove counterproductive if it loads risk onto start-ups by depriving them of an attractive exit strategy. If European rules force entrepreneurs to shoot for the moon in the hope of becoming a standalone superstar, some may choose not to take aim at all.

The right approach?

The “balance of harms” and “error-cost” frameworks have some attractive features that warrant further consideration. However, for the reasons suggested above, moving away from the existing “more likely than not” framework would not be without risk. In practice, any reform to merger assessment guidelines needs to be founded on two principles:

  • Keep things practical. One of the attractions of the “more likely than not” framework is that, by simplifying the exam question, it facilitates a thoroughgoing evidence-based assessment. It is critical that any move towards an error-cost framework does not force competition authorities to abandon this commitment to the evidence. In just the same way as competition authorities set a high empirical bar for accepting that the merger will generate consumer efficiencies, any assessment of the risk of a “killer” acquisition must also be rooted in case-specific evidence. The future may be hazy, but there are still red flags that competition authorities can look out for to inform their thinking. For example, is there any clear evidence in the acquiring party’s internal strategy documents to suggest that it sees the target as a potential future threat? If the acquirer is a publicly listed company, how has it justified the purchase of the target business to its shareholders? Conversely, is there anything in the target company’s own strategy documents or business plans to suggest it had aspirations – and a concrete plan – to grow into a significantly larger independent player before the acquirer made the approach?
  • Keep things balanced. Merger prohibition rates in industries characterised by high levels of entry and innovation may be low, but this does not necessarily mean that the existing framework needs to be fundamentally rebalanced in the way that Crémer et al have suggested. Deliberately tipping the scales in a way that introduces a presumption that mergers involving the acquisition of small tech start-ups are anti-competitive unless proven otherwise risks undermining the very things that competition authorities are seeking to foster – high levels of market entry, expansion and innovation. Any error-cost framework that is built into merger assessment guidelines should be neutral – that is, it should explicitly recognise the risk of both “false positive” and “false negative” decisions and avoid hard-wiring in an assumption that one of these risks deserves greater attention than the other.