25 April 2018
Automatic Harm to Competition? Pricing Algorithms and Coordination
Speaker: Mike Walker, Chief Economic Adviser, Competition and Markets Authority
Speaker: Stephen Lewis, Partner, RBB Economics
This SPE evening meeting brought together a leading UK regulator and a high-profile economic consultant to discuss whether competition authorities should be concerned about collusion arising from pricing algorithms. Stephen Lewis, Partner at RBB Economics, reminded the audience that these issues are neither new nor a reason for a more interventionist approach by regulators. Dr Mike Walker, Chief Economic Adviser at the Competition and Markets Authority, speaking in a personal capacity, highlighted that more theoretical and empirical research was required in the area – at this stage, his main concern was the use of algorithms to facilitate higher prices as part of a wider collusive agreement.
Stephen Lewis began with a recap of the economics of collusion. In a one-shot pricing game, collusion will not occur. In repeated games, however, collusion can occur when the present value of sticking to the collusive price exceeds that of deviating (i.e. undercutting the collusive price) and then being punished (e.g. suffering lower profits as collusion breaks down). A key component of this comparison is the discount factor, which captures how much firms value future profits relative to profits today. As the discount factor falls, deviating becomes more attractive, because the deviant places more weight on profits today than on the (lower) profits earned tomorrow once collusion breaks down.
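In the standard textbook formulation – the notation here is a common shorthand rather than anything presented at the meeting – let π_C denote each firm’s per-period profit under collusion, π_D the one-period profit from undercutting, π_P the per-period profit once collusion has broken down (with π_D > π_C > π_P) and δ the discount factor. Collusion is then sustainable whenever

    π_C / (1 − δ)  ≥  π_D + δ·π_P / (1 − δ).

At low δ the comparison is essentially π_C against the larger π_D, so the condition fails; as δ rises, the lost future profits during punishment weigh more heavily and the condition can hold.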
The critical discount factor is the value at which a firm is exactly indifferent between deviating and sticking to the collusive agreement. Economics sheds light on the factors that allow collusion to be sustained at lower discount factors. For example, if there is a time lag between a deviation occurring and its being punished, the critical discount factor rises. On the other hand, if punishment is immediate, the gains from deviation are limited, so collusion may be sustainable at very low discount factors.
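A minimal numerical sketch of this comparative static, assuming the textbook set-up above with punishment driving profits to zero (π_P = 0) and purely illustrative profit figures – none of the numbers below come from the talk:

def critical_discount_factor(pi_collude, pi_deviate, lag=1):
    # Smallest discount factor at which collusion is sustainable when a
    # deviation earns pi_deviate for `lag` periods before punishment drives
    # profits to zero forever. Solves the indifference condition
    #   pi_collude / (1 - d) = pi_deviate * (1 - d**lag) / (1 - d).
    return (1 - pi_collude / pi_deviate) ** (1 / lag)

# Illustrative assumption: two colluding firms share a profit of 1 per period,
# and an undetected deviant briefly captures the whole of it.
pi_collude, pi_deviate = 0.5, 1.0
for lag in (1, 2, 4):
    delta_star = critical_discount_factor(pi_collude, pi_deviate, lag)
    print(f"punishment lag of {lag} period(s): critical discount factor = {delta_star:.2f}")

The critical discount factor climbs from 0.50 to roughly 0.71 and then 0.84 as the punishment lag lengthens from one to two to four periods, illustrating why slower detection makes collusion harder to sustain.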
Lewis emphasised that while economics helps us understand whether collusion can arise, it does not tell us whether it will arise. The difficulty is that collusion involves choosing between multiple equilibria – firms need to communicate in order to select which of these to adopt.
Do algorithms help in this regard? In theory, algorithms could lead to immediate punishment of deviation, making collusion plausible. Further, one can devise rules which – if adopted by two firms – would lead to prices above the one-shot game level. For example, consider two strategies: Firm A acts as a price leader and refrains from a price war only if B follows its lead; Firm B always follows A’s lead. If Firms A and B establish algorithms to price in that way, collusion will occur. But should we presume this will arise?
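For concreteness, here is a toy sketch of the leader-follower rules just described; the price levels are illustrative assumptions, not figures from the talk:

COMPETITIVE_PRICE = 10   # illustrative one-shot (competitive) price
COLLUSIVE_PRICE = 20     # illustrative supra-competitive price

def firm_a(b_last_price):
    # A leads at the collusive price, but reverts to the competitive price
    # (a price war) whenever B failed to follow its lead last period.
    return COLLUSIVE_PRICE if b_last_price == COLLUSIVE_PRICE else COMPETITIVE_PRICE

def firm_b(a_last_price):
    # B simply matches whatever A charged last period.
    return a_last_price

# Assume both firms happen to start at the collusive price.
a_price = b_price = COLLUSIVE_PRICE
for period in range(1, 6):
    a_price, b_price = firm_a(b_price), firm_b(a_price)
    print(f"period {period}: A charges {a_price}, B charges {b_price}")

Started at the collusive price, these two rules keep both prices at 20 in every period; started at the competitive price, exactly the same rules keep prices at 10 – a simple illustration of the equilibrium-selection problem noted above.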
Axelrod analysed algorithms in the 1980s, pitting tit-for-tat strategies (follow your opponent’s last move) against grim-trigger strategies (if your opponent undercuts, never collude again) and several others. In the first tournament, tit-for-tat worked best. But in later tournaments, other algorithms fared better.
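For a flavour of the exercise, here is a pared-down sketch of that kind of round-robin tournament in a repeated prisoner’s dilemma; the payoffs and the three entrants below are standard illustrations rather than Axelrod’s actual entries:

# (my move, their move) -> my payoff; "C" = cooperate, "D" = defect
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def grim_trigger(my_history, their_history):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play_match(strategy_1, strategy_2, rounds=200):
    history_1, history_2, score_1, score_2 = [], [], 0, 0
    for _ in range(rounds):
        move_1 = strategy_1(history_1, history_2)
        move_2 = strategy_2(history_2, history_1)
        score_1 += PAYOFFS[(move_1, move_2)]
        score_2 += PAYOFFS[(move_2, move_1)]
        history_1.append(move_1)
        history_2.append(move_2)
    return score_1, score_2

strategies = {"tit-for-tat": tit_for_tat,
              "grim-trigger": grim_trigger,
              "always-defect": always_defect}
totals = dict.fromkeys(strategies, 0)
for name_1, strategy_1 in strategies.items():
    for name_2, strategy_2 in strategies.items():
        if name_1 < name_2:  # play each pairing once
            score_1, score_2 = play_match(strategy_1, strategy_2)
            totals[name_1] += score_1
            totals[name_2] += score_2
print(totals)

With this particular field of entrants the cooperative strategies come out well ahead of always-defect, but – as the later tournaments showed – which rule does best depends on which other rules it happens to meet.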
We learn that, in reality, a collusive strategy might need to forgive deviation so that breakdowns of collusion are not permanent. But there is no one algorithm that dominates all others. So should we expect competing firms to align on algorithms that give rise to collusion without some form of separate explicit communication? Should we presume machines learn how best to coordinate and inevitably end up setting collusive prices? No, argues Lewis. In fact, algorithms may be designed to do the opposite (or learn to deviate in covert ways).
Some online marketplaces may, for example, attract custom by offering the best prices. They may develop rules that undermine the incentives for collusion. Further, algorithms may be designed to price discriminate, targeting individual consumers with a price based on their prior search behaviour or other information about their characteristics. Collusion in that setting may be hard to achieve, as some recent theory papers have argued. For these reasons, Lewis concluded that existing enforcement tools were sufficient to deal with pricing algorithms in online markets.
Mike Walker then spoke. He explained that, while he would not assume that algorithms give rise to collusive outcomes, the theories of harm are worth taking seriously and should be debated and researched further. He highlighted that, while neither Stephen Lewis nor he was addressing the efficiencies arising from algorithms, many are possible, such as lowering menu costs, allowing prices to reflect economic scarcity and permitting individuals to receive useful, personalised offers.
Walker then summarised a recent CMA case in which two poster producers had agreed to collude, using algorithms to maintain higher prices. Here, the algorithms did not give rise to higher prices per se; rather, they were a mechanism used by the conspirators to implement the collusive strategy.
Walker noted that a theoretical concern could be where competitors use the same provider to develop their algorithms – that might, for example, make it easier to reverse engineer a competitor’s pricing policy. In practice, however, algorithms may be developed in-house or by a number of different providers, so this seems a special case.
He cited a well-known US case in which airlines signalled prices to each other in relation to fares not available to consumers until a later date. That permitted costless signalling in that “getting the price wrong” would have no implication for output. Algorithms might, in theory, permit such signalling during time periods when demand is very low.
However, such concerns are theoretical. Moreover, there are some important features of the online world that would counter collusion. First, competition among online marketplaces would encourage each marketplace to display low prices and hence to use algorithms that undermine collusion. Second, behavioural economics also teaches us that how a price is presented matters – where such framing effects influence demand, this may also undermine collusion because the same price may be perceived differently according to the online context in which it is displayed.
Third, and importantly, Walker cited a tension between personalised pricing and collusion. Where algorithms target individuals with prices based on their characteristics and past purchasing behaviour, collusion would be particularly hard to sustain: no single market price would exist and personalised prices would not be observable.
For these reasons, Walker concluded that the jury is still out. For now, his main concern (as with the “posters case”) is where algorithms are used to implement collusion that results from prior explicit communication between firms.
Adrian Majumdar, SPE Councillor
To listen to the podcast click below.
For a copy of Stephen Lewis’s slides, please click on the download above (please note that Mike Walker did not give a PowerPoint presentation).