Philip K. Dick’s post-apocalyptic novel “Do Androids Dream of Electric Sheep?” (the basis for the movie Blade Runner) asked whether robots can think and feel. One of the hot topics du jour in antitrust is whether (software) robots can conspire and collude for purposes of the Sherman Act. We’re in the very early days, so we must caveat every statement and preliminary conclusion, but just as robots can’t dream, there are reasons to believe that, at least in the short to intermediate term, they also cannot collude to violate the antitrust laws. (A couple of years ago I did a related post on this issue: Can Computers Conspire to Fix Prices?)
First, there is no empirical evidence that software has been able to (or could) learn to conspire. Despite some recent hype about computer collusion, one recalls the adage that “artificial intelligence is always 10 years away.” One of the few (if not the only) studies in this area – of the ability of computer algorithms to cooperate in the famous “Prisoner’s Dilemma” game – yielded mixed results. It appears that the chance of creating algorithms that just happen to be good at colluding may be small.
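For readers unfamiliar with the setup, the flavor of that research can be conveyed with a toy simulation. The sketch below is purely illustrative (it is not the methodology of any particular study): it pits two canonical strategies against each other in an iterated Prisoner’s Dilemma. “Tit-for-tat” sustains cooperation when playing against itself, while an unconditional defector exploits one round and then drags the game into mutual defection – the rough analogue of a price war.

```python
# Iterated Prisoner's Dilemma with a standard payoff table:
# both cooperate -> 3 each; both defect -> 1 each;
# a lone defector gets 5 while the cooperator gets 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history_self, history_opp):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not history_opp else history_opp[-1]

def always_defect(history_self, history_opp):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run the repeated game and return each player's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players cooperate for all 100 rounds...
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
# ...while a defector gains briefly, then both settle into mutual defection.
print(play(tit_for_tat, always_defect))  # (99, 104)
```

The point of the toy example is that sustained “cooperation” (read: coordination) requires strategies that happen to reinforce each other; it does not arise automatically from putting two algorithms in the same market.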
Let’s assume, however, that computer software develops faster than we otherwise would predict. What are the risks that companies’ computers are going to be charged with price-fixing, or that the companies themselves will be held responsible when their algorithms do so?
To answer this question, we should step back and methodically consider the various types of activities at issue here – because sometimes the discussions do not unpack the various distinct scenarios. First, reference is often made to the 2015 DOJ case against Daniel William Aston and his company Trod Ltd. for allegedly fixing the prices of posters sold online via Amazon Marketplace. According to the DOJ, the conspirators agreed to adopt specific pricing algorithms for the sale of posters with the goal of offering online shoppers the same price for the same product and coordinating changes to their respective prices. Importantly, although the alleged conspirators used algorithms, the alleged conspiracy involved an old-fashioned and very human meeting of the minds, and so the case doesn’t break new ground, any more than the first prosecutions of price-fixing conspiracies conducted over the telephone or via email did.
The second scenario involves employing algorithms as a business practice that may facilitate collusion. The concern here comes in one of two flavors. First is the concern that simply having more data (about customers as well as competitors’ behavior) available for real-time analysis may facilitate collusion. But this seems to be a question of degree, rather than kind, because firms already look at the same types of (and sometimes voluminous) data in making decisions about their pricing (input costs, buyer behavior, publicly available information on competitors, etc.). Second is the concern that competitors may use the exact same algorithms, which will result in parallel pricing, or make parallel pricing more likely, even if the algorithms do not communicate with each other. That’s a possibility, although it is not clear that major competitors will buy the same off-the-shelf algorithms. If they did, perhaps that could be a “plus factor” to be considered in combination with parallel pricing and the like to evaluate whether there is circumstantial evidence of an agreement. However, using the same algorithm may be a relatively weak plus factor – after all, options traders have for decades used the same Black-Scholes formula to calculate options prices without any antitrust challenge.
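The Black-Scholes point can be made concrete. The formula is deterministic, so any two traders who feed it the same market inputs will quote the same price without ever communicating – parallel pricing with no agreement of any kind. A minimal sketch of the standard closed-form European call price (the numerical inputs are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, t):
    """Black-Scholes price of a European call option.

    spot: current price of the underlying; strike: exercise price;
    rate: risk-free rate; vol: annualized volatility; t: years to expiry.
    """
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# Identical inputs yield identical quotes, trader to trader.
price = black_scholes_call(spot=100, strike=100, rate=0.05, vol=0.2, t=1.0)
print(round(price, 2))  # 10.45
```

Parallel outputs here reflect a shared public formula plus shared market data, not a meeting of the minds – which is why the shared-algorithm plus factor, standing alone, may carry little weight.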
The third scenario is the (for now) hypothetical one: two or more firms employ pricing algorithms that, without full human control, somehow communicate and ultimately conspire with each other. This scenario also has two variants – a difficult case and an easy (or at least easier) case. In the easier case – which may be the more likely one – although humans do not affirmatively program the algorithms to conspire, they can observe the results. After all, it seems likely that for the foreseeable future humans will remain in the buying and selling loop even if computers set the prices. And so, for example, if a computer does not lower prices when demand is down and supply is up, then arguably the humans may be on some sort of inquiry notice to figure out what is going on. In at least certain of these cases, one can at least imagine a rule that holds the company responsible for setting the wheel in motion and knowingly turning a blind eye to the results.
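The “inquiry notice” idea lends itself to a simple compliance-style check. The sketch below is entirely hypothetical – the function name, the data fields, and the flagging rule are all invented for illustration, not drawn from any real monitoring system – but it shows the kind of automated sanity check a company keeping humans in the loop might run: flag any period in which demand fell and supply rose, yet the algorithm’s price did not fall.

```python
# Hypothetical monitoring rule: flag periods where the pricing algorithm's
# output moved against what basic supply and demand would predict.
# All names, fields, and thresholds here are illustrative assumptions.

def flag_suspect_periods(periods):
    """Return indices of periods where demand fell and supply rose,
    yet the price failed to fall."""
    flags = []
    for i, p in enumerate(periods):
        if (p["demand_change"] < 0
                and p["supply_change"] > 0
                and p["price_change"] >= 0):
            flags.append(i)
    return flags

history = [
    # demand down, supply up, price down: the expected competitive response
    {"demand_change": -0.10, "supply_change": 0.05, "price_change": -0.03},
    # demand down, supply up, price UP: worth a human's attention
    {"demand_change": -0.12, "supply_change": 0.08, "price_change": 0.02},
]
print(flag_suspect_periods(history))  # [1]
```

A flag is of course not proof of collusion; the legal point is only that once such signals exist and are observable, it becomes harder for a company to claim it had no way of knowing what its algorithm was doing.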
In the difficult case, competitors’ algorithms communicate and conspire with each other, and somehow the results are sufficiently masked or cloaked so that no one is any the wiser. Although this variant seems improbable, we may not be able to entirely rule it out a priori. At the moment, with no case having presented these facts, the best one can probably say is that the competitors’ liability is not entirely certain. Perhaps there also might be an argument that the software manufacturer should bear some sort of liability for its creation – although it is not at all clear that the language of the Sherman Act would support such liability. Algorithms can make pricing more competitive, and we should be reluctant to adopt a rule that interferes with those pro-competitive efficiencies.
See https://www.justice.gov/opa/pr/e-commerce-exec-and-online-retailer-charged-price-fixing-wall-posters (Dec. 4, 2015). An earlier plea agreement regarding similar activity was reached with David Topkins.