Algorithms and price collusion

Computer collusion 

The idea of computer algorithms secretly colluding with one another without the knowledge of their human creators may sound like science fiction, but it was a serious topic for competition regulators in 2017.

But how real is the risk? Should policymakers – and businesses – be worried? Our editor, Alan Wheatley, caught up with David Foster, a director in Frontier’s Competition practice, who has been working on the subject.

Colluding algorithms sounds far-fetched, doesn’t it?

You might think so, but some pretty serious people have been talking about it. EU competition chief Margrethe Vestager gave a speech on the subject last year and The Economist magazine ran a piece on it too. Anyone who wants to find out more should read a recent book by Ariel Ezrachi, an Oxford University academic, and Maurice Stucke, called “Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy”. It makes for interesting and scary reading in equal measure.

So, what exactly are people worrying about?

Well, all the cases so far have involved computer algorithms that help implement conventional cartel-type behaviour. But what interests me is the idea that algorithms could learn to become experts at tacit collusion. That is where some argue there is a chink in the regulators’ armour. Because tacit collusion itself isn’t illegal, they argue there could be a major “enforcement gap” if algorithmic collusion becomes a widespread problem.

I still don’t see why computers might be better at collusion than humans.

There are two basic reasons people seem to have put forward. First is speed. The idea here is that computers can pull in pricing data quickly, particularly in digital markets. I don’t find this very convincing. Most of the time, whether pricing reactions take a second or an hour hardly matters to the profitability of the strategy – there are 8,760 hours in a year, so a company will typically make less than 0.1% of its annual profit during any one of them. What’s generally much more important is how fast consumers react.

Second is predictability, which is important for achieving and sustaining collusion. I find this much more convincing. Remember, we’re talking about computers. They’re sophisticated, but they’re not emotional like we are. That makes them a lot more dangerous in my view. You only have to think of HAL in “2001: A Space Odyssey”.

What does your own work in this area focus on?

So far the debate hasn’t moved much beyond outlining conceptual concerns. So I thought I’d build my own pricing algorithm and see how it got on at learning to collude. It’s pretty basic, but in a way that’s the point. I wanted to test how easy it was to get some very simple algos to learn to collude.

Can you explain what the algorithm does exactly?

Just three things. First, it experiments with different pricing strategies over time. Second, it works out which ones are profitable and learns the lessons. Third, following on from that, its choice of pricing strategies gradually shifts and it settles on those that are more profitable. In other words, it operates through simple trial and error, learning as it goes what tends to make profits and what doesn’t.
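
To make those three steps concrete, here is a minimal sketch in Python of how such a trial-and-error pricer could be put together. It is purely illustrative: the interview does not describe David’s actual code, and the sketch assumes the simplest possible learning rule (an “epsilon-greedy” choice over a fixed grid of candidate prices). The class name and parameters are invented for the example.

    import random

    class TrialAndErrorPricer:
        """Illustrative pricing agent: it experiments with prices, keeps a running
        estimate of the profit each price earns, and gradually settles on the
        prices that look most profitable."""

        def __init__(self, price_grid, explore_rate=0.1, learning_rate=0.05):
            self.price_grid = list(price_grid)      # candidate prices to experiment with
            self.explore_rate = explore_rate        # how often to try something new (step 1)
            self.learning_rate = learning_rate      # how quickly profit estimates are updated
            self.profit_estimate = {p: 0.0 for p in self.price_grid}

        def choose_price(self):
            # Step 1: occasionally experiment with a random price...
            if random.random() < self.explore_rate:
                return random.choice(self.price_grid)
            # Step 3: ...otherwise charge the price currently believed to be most profitable.
            return max(self.price_grid, key=self.profit_estimate.get)

        def learn(self, price, profit):
            # Step 2: nudge the profit estimate for the price just tried towards
            # the observed outcome - simple trial-and-error learning.
            old = self.profit_estimate[price]
            self.profit_estimate[price] = old + self.learning_rate * (profit - old)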

And how did you test that?

We let the algorithm loose on a market – though obviously not a real one! As a test bed, we took the standard model of price competition that the European Commission uses to assess merger effects in differentiated product markets – its so-called “calibrated merger simulation”.

What did you find?

This model lets us simulate price competition in a variety of situations – with differences in the number of players, market shares, extent of market power and so on. Now, the model – as you would expect – assumes that firms are rational and make pricing decisions in a way that maximises profits, just like the textbooks say. But we replace those pricing decisions with the algorithm and let it learn from experience.

We start by assuming that every firm in the market is using the same simple algorithm to set prices. The algo learns that higher prices tend to be more profitable, and in the end it manages to increase prices to more than double the competitive benchmark.
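
For a rough idea of what that set-up might look like in code, the sketch below wires several copies of the trial-and-error pricer from above into a toy market. The demand side here is a simple logit share model standing in for the Commission’s calibrated merger simulation, and all of the numbers (costs, the price grid, the number of periods) are illustrative assumptions rather than the values used in the actual work.

    import math

    def logit_demand(prices, quality=2.0, price_sensitivity=1.5, market_size=100):
        """Toy differentiated-products demand: a firm's share falls as its price
        rises relative to its rivals' prices (an outside option sits in the denominator)."""
        weights = [math.exp(quality - price_sensitivity * p) for p in prices]
        denominator = 1.0 + sum(weights)            # the "1" is the outside option
        return [market_size * w / denominator for w in weights]

    def simulate(n_firms=3, cost=1.0, periods=50_000):
        price_grid = [round(1.0 + 0.1 * k, 2) for k in range(21)]   # prices from 1.0 to 3.0
        firms = [TrialAndErrorPricer(price_grid) for _ in range(n_firms)]

        for _ in range(periods):
            prices = [firm.choose_price() for firm in firms]
            quantities = logit_demand(prices)
            for firm, p, q in zip(firms, prices, quantities):
                firm.learn(p, (p - cost) * q)       # profit = margin times quantity sold

        # Report the price each firm now believes is most profitable.
        return [max(f.price_grid, key=f.profit_estimate.get) for f in firms]

    if __name__ == "__main__":
        print(simulate())

Whether prices in this toy version drift above the competitive level, and by how much, depends entirely on the assumed parameters; the point of the sketch is the structure of the experiment, not a replication of the results quoted above.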

I imagine the degree of competition in a particular market must make a difference?

Interestingly, no. The results were pretty insensitive to the number of competitors. On the face of it, this lends credence to the idea that algorithms – if they can get collusion going – might be successful at coordinating prices even in more competitive markets.
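
In the toy framework sketched above, probing that insensitivity would simply mean rerunning the simulation with different numbers of firms, for example:

    # Hypothetical robustness check: rerun the toy simulation for different market structures.
    for n in (2, 3, 5, 8):
        print(f"{n} firms -> learned prices: {simulate(n_firms=n)}")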

There was another interesting finding that’s worth mentioning. It looks like all competitors need to be using some form of algorithm for collusion to be sustainable. We looked at a situation where only one firm in the market used the algo, and in that scenario collusion never got off the ground. And when even just one firm in the market was using a more aggressive pricing algorithm, collusion was immediately a lot less successful.
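
To give a flavour of what an “aggressive” rival might look like in the same toy framework, one of the learners can be swapped for a fixed rule that never tries to sustain high prices. This is one possible assumption about such a rule, not a description of the algorithm actually tested.

    class AggressivePricer:
        """Illustrative aggressive rival: it never learns to cooperate and simply
        posts the lowest price on the grid every period."""

        def __init__(self, price_grid):
            self.price_grid = list(price_grid)
            self.profit_estimate = {p: 0.0 for p in self.price_grid}  # kept only for interface parity
            self.low_price = min(self.price_grid)

        def choose_price(self):
            return self.low_price

        def learn(self, price, profit):
            pass  # fixed rule, nothing to learn

Replacing one TrialAndErrorPricer with an AggressivePricer inside simulate() (for example, firms[0] = AggressivePricer(price_grid)) is then enough to see how the learners respond to a rival that keeps undercutting them.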

How robust do you feel your conclusions are?

There’s a lot more work to be done to understand whether and how successfully algorithms can achieve tacit collusion. This was a laboratory-scale experiment, so it’s a long way from capturing the full range of effects. But it does start to confirm suspicions that the most immediate risk might be a situation where multiple firms deploy very similar pricing software, and that software then starts to behave in similar ways and to learn by trial and error in similar ways. Before long you could end up with a new, higher price point.

So how well suited is the existing competition law framework to dealing with this type of problem?

It’s a tricky one. It’s perfectly conceivable that this sort of thing could happen without the designers of the algorithms even knowing, let alone the businesses that employ them. So I think the legal challenge would be to bring a case based on Article 101 – which prohibits agreements that prevent or distort competition within the EU – without any explicit collusion, relying instead perhaps on the negligence of the algorithms’ owners in failing to understand what their pricing tools were up to.

Do you think antitrust regulators have any other options?

If the circumstances were sufficiently clear cut, I wouldn’t rule out the Commission looking at bringing an excessive pricing case based on a finding of joint dominance. Proving joint dominance is very difficult right now. But that might no longer be the case if and when the authorities are in a position to make a forensic copy of the price setter… and subject it to the kind of interrogation that wouldn’t be allowed on humans!

 

David’s article “Algorithms and price collusion – Learning to love artificial intelligence” was recently published by Competition Law Insight. The full article is available here.

With special thanks to Alan Wheatley for his time.

 
