Competition Law and Algorithms: Current Framework of Algorithms and Competition Policies
There is a broad consensus that digitalization is the most important driving force shaping the modern economy. Although digitalization encompasses many revolutionary technologies, algorithms, big data, and artificial intelligence have come under the spotlight recently because of their extensive range of uses and their adverse effects, which are yet to be fully explored. On the one hand, algorithms provide many beneficial opportunities for the economy and society. For example, they can facilitate innovation, allow for the personalization of products and services, and reduce search costs. And since algorithms continue to be developed further thanks to advances in big data and artificial intelligence technologies, their potential has not yet been exhausted. On the other hand, their possible detrimental effects on competition, particularly by facilitating collusive practices, remain a significant issue that needs to be addressed. In the following sections, this study will explore some of those detrimental effects in light of three reports published by respected competition authorities.
First, a joint study by the Autorité de la concurrence, the French competition authority, and the Bundeskartellamt, the German competition authority (the “Study”), will be examined. The Study addresses the potential competitive risks associated with the use of algorithms, with a particular focus on pricing algorithms. It predominantly discusses the use of algorithms in different scenarios and how this affects competition, and then analyzes the practical challenges of investigating algorithms.
Second, the OECD’s report titled “Algorithms and Collusion: Competition Policy in the Digital Age” (“OECD Report”), published in 2017, will be summarized. The OECD Report discusses ways in which algorithms can be used to engage in collusive behavior without establishing an explicit agreement, as well as measures that can be taken to detect and prevent such outcomes. The OECD Report also suggests possible regulatory interventions to prevent collusion.
Third, the report of the Competition and Markets Authority, i.e. the United Kingdom’s competition authority, titled “Pricing algorithms – Economic working paper on the use of algorithms to facilitate collusion and personalised pricing” (“CMA Report”) will be examined. Focusing on pricing algorithms, the CMA Report explains the scope and use of algorithms, demonstrates their possible pro-competitive effects, and discusses the relationship between algorithms and personalized pricing with respect to competition.
To concretize these three policy documents, two competition investigations that scrutinize algorithm use, the Lufthansa case in Germany and the real estate brokerage market investigation in Spain, are then explained, and the Turkish Competition Authority’s publicly stated approach to algorithms is also provided, in order to project the potential developments in Turkey.
II. The French-German Joint Study on Algorithms and Competition
As mentioned above, this joint Study by the Autorité de la concurrence and the Bundeskartellamt examines the risk of collusion between companies created by the extensive use of algorithms, in particular pricing algorithms, as they are relevant in e-commerce. For the purpose of illustration, the paper considers three scenarios. The first scenario covers situations in which a “traditional” anticompetitive practice results from prior contact between humans; here, algorithms are only used as facilitators of the anticompetitive practice. In the second scenario, a third party, such as an external consultant or software developer, provides the same or a similar algorithm to competitors, which may have detrimental effects on competition. In the third scenario, competitors use different pricing algorithms, but an anticompetitive practice may derive from the mere interaction of those algorithms. Before discussing these scenarios, it may be helpful to understand the concept of an algorithm, the different algorithm types, and their fields of application as explained in the paper.
A. Algorithms – notion, types, and fields of application
The Study first categorizes algorithms by the typical tasks they perform across multiple sectors and market levels. Under this category, it explains that algorithms may be used for monitoring and data collection, dynamic pricing, personalization based on consumers’ data, ranking, matching and price tracking. By the type of inputs they use, the Study categorizes algorithms according to the amount of data they use, the granularity (i.e. the level of detail of the data), the data types (e.g. numerical inputs in tabular form, textual inputs, or image data), and the content of inputs (for example, a company might gather mainly information about its own situation, or it could gather data on competitors or customers). According to the Study, algorithms can also be classified by their method of learning (e.g. self-learning algorithms, generally known as “machine-learning” algorithms, versus “fixed” algorithms that do not automatically change over time in response to new information) and by their degree of interpretability. The Study distinguishes “descriptive” algorithms, which are basically interpretable for humans, from “black-box” algorithms, which are hardly interpretable for humans.
B. Use of algorithms in different scenarios
As is known, Article 101 of the TFEU prohibits anticompetitive agreements and concerted practices between undertakings (or associations of undertakings); thus, a violation of competition law necessitates some kind of communication between the parties concerned. In other words, undertakings are allowed to adapt their behavior intelligently to the existing or anticipated conduct of their competitors, as long as there is no explicit or implicit agreement between them.
The Study projects three different scenarios in which algorithms may be used to facilitate such communication for collusion, or may be used to violate competition law without such communication between the parties.
1. Algorithms as supporters or facilitators of “traditional” anticompetitive practices: In this first scenario, the Study discusses situations in which there are “traditional” anticompetitive practices conducted as a result of prior contact between undertakings, such as explicit collusion between competitors or any other type of practice. Here, the algorithm is only used as a supporter or facilitator of the implementation, monitoring, enforcement or concealment of anticompetitive practices. The Study therefore explains that explicit collusion supported or facilitated by an algorithm may cover a wide variety of situations. First of all, algorithms could be used to implement collusive prices or support market segmentation. For instance, there might be an agreement between competitors preventing them from actively targeting each other’s customers, and in order to implement this agreement they could employ an algorithm that blocks the recruitment of each other’s customers, as happened in a case decided by Ofgem. Moreover, an algorithm could be used to monitor competitors’ prices or to punish a competitor if a deviation from the previously agreed price occurs. Furthermore, an information exchange between competitors, which constitutes the anticompetitive practice, might be supported or facilitated by an algorithm that makes such communication simpler, faster and more direct. Algorithms could also be used to hide such communication activities, e.g. by allowing for encrypted messaging. To hide anticompetitive behavior, algorithms could be used in other ways as well, such as implementing different prices when there is no (or very low) demand. The Study states that the involvement of an algorithm in this scenario does not raise algorithm-specific competition law issues, since the prior agreement or concerted practice may be assessed under Article 101 of the TFEU without further consideration of the algorithm.
However, developing a case-specific understanding of the algorithm might still be useful, considering that it could allow an assessment of potential counteracting efficiencies as well as reinforced negative effects of the anti-competitive practice.
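The monitoring-and-punishment mechanism described in this first scenario can be pictured with a minimal sketch. The rule, thresholds and prices below are all hypothetical illustrations of how an algorithm might enforce a pre-agreed price, not a description of any real system discussed in the Study:

```python
AGREED_PRICE = 100.0      # hypothetical price fixed by the prior agreement
TOLERANCE = 0.5           # deviation allowed before retaliation kicks in
PUNISHMENT_PRICE = 60.0   # temporary "price war" level used as punishment

def next_price(competitor_prices):
    """Monitor rivals and punish any deviation from the agreed price.

    If every competitor sticks (within tolerance) to the agreed price,
    keep charging it; otherwise retaliate with a low punishment price.
    """
    if all(abs(p - AGREED_PRICE) <= TOLERANCE for p in competitor_prices):
        return AGREED_PRICE      # everyone complies: maintain the collusive price
    return PUNISHMENT_PRICE      # deviation detected: trigger the punishment

print(next_price([100.0, 100.2]))  # all rivals comply -> 100.0
print(next_price([100.0, 92.0]))   # one rival undercuts -> 60.0
```

The point of the sketch is that the algorithm itself contains no agreement: the infringement lies in the prior human contact, and the code merely enforces it, which is why the Study treats this scenario under ordinary Article 101 analysis.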
2. Algorithm-driven collusion between competitors involving a third party: In the second scenario, a third party, e.g. an external consultant or software developer, provides the same algorithm, or somehow coordinated algorithms, to competitors. In this scenario, there is no direct communication or contact between the companies, but a certain degree of alignment could result from the actions of the third party or from the procurement of the same products/services from third parties. The assessment of such behavior should also take into account whether the undertakings knowingly or unknowingly used the coordinating third-party service/product. While the Study discusses the liability of both the undertakings concerned and third-party algorithm providers for the knowing use of such algorithms, it specifically underlines the need for express liability rules for third-party providers in cases where the undertakings unknowingly benefited from such algorithms.
Alignment of algorithmic decision-making could arise both at the code level and at the data level. Alignment at code level becomes an issue when a third party not only provides algorithms with a shared purpose but also a similar implemented methodology. Alignment at data level could occur when a common or somehow coordinated algorithm could provide the means for information exchange amongst competitors; or, instead of acting as a facilitator of information exchange between competitors, an algorithm might also use a shared data pool to pursue the goal of maximizing joint profits.
On collusive practices, there has so far not been much algorithm-specific case law in relation to the situations described above. However, existing precedents establish that anticompetitive agreements or concerted practices can be conducted indirectly through third-party behavior. For example, the ECJ found in the VM Remonts case that an undertaking may be held liable for a concerted practice on account of the anticompetitive acts of an external service provider if either (i) the service provider was acting under the direction or control of the undertaking concerned, or (ii) the undertaking was aware of the anticompetitive objectives pursued by its competitor(s) and the service provider and intended to contribute to them by its own conduct, or (iii) the undertaking could reasonably have foreseen the anticompetitive acts of its competitors and the service provider and was prepared to accept the risk which they entailed.
In another case (Eturas et al. v. Lietuvos Respublikos), the ECJ assessed that “the mere existence of a technical restriction implemented in the [booking] system” restricting discounts exceeding 3% should not amount to concertation, but that if the users of the e-booking platform were aware of the discount restriction, a concerted practice under Article 101 of the TFEU could be found. However, the ECJ further held that, in light of the presumption of innocence, the mere dispatch of the platform operator’s message announcing the discount restriction cannot by itself establish that the users were aware of it. Furthermore, the Study points to the AC-Treuhand case as a reference for holding third parties liable for algorithmic collusion under Article 101 of the TFEU, on the basis that cartel facilitators can be held liable regardless of whether they operated in the same market as the colluding competitors.
According to the Study, alignment of algorithms at the code level is the most likely to trigger a restriction of competition, through the automation or suggestion of uniform prices. When alignment takes place at the data level, however, the established assessment principles for information exchange apply. Needless to say, whether an information exchange constitutes an infringement of Article 101 of the TFEU always depends on the particularities of the individual case. The type of information and the specific market conditions play an important role in deciding whether competition has been violated, and other factors, such as the age or currentness of the data, the extent to which the data is individualized, and whether the data is public, must be taken into account as well. The way the data is used in the context of the algorithm may also play a role. As already mentioned, where the third-party algorithm facilitates a direct information exchange among competitors, this could be treated like any other information exchange among competitors. However, even where the third party provides algorithms that calculate prices separately for each competitor, a competition violation might occur when the third party uses a common pool of training data that includes non-public data from multiple competitors. The Study even considers whether algorithms that make competitors aware of publicly available information more simply, rapidly and directly could serve as a tool to restrict competition.
The Study also examines algorithmic software that is used not only for pricing or information exchange, but as a management-consulting tool or SaaS to which the undertaking delegates strategic decisions. For example, demand-and-supply matching software serving competitors concurrently may result in reliance on the same data and decision-making, and thereby in continuous behavioral alignment or information exchange.
3. Collusion induced by the parallel use of individual algorithms: The third scenario mentioned in the Study is the case in which undertakings use distinct algorithms without prior or ongoing communication or contact between them. Nevertheless, an alignment of their market behavior, which pricing algorithms might facilitate, may result from the mere interaction of the algorithms. The discussion of this scenario is mainly hypothetical, as there has so far been no case practice. As the OECD states with regard to self-learning algorithms: “It is still not clear how machine learning algorithms may actually reach a collusive outcome”.
Beyond algorithms reaching tacit collusion, the discussion especially focuses on the question of whether algorithms could engage in behavior more similar to explicit forms of collusion. Such complex interactions might be possible in the context of black-box algorithms driven by artificial intelligence. As Schwalbe points out, “[…] considering the rapid progress in research on AI, it cannot be ruled out that algorithms may learn to communicate and thereby increase the likelihood of algorithmic collusion”.
The Study refers to academic research that considers the plausibility of algorithmic collusion, mostly in experimental settings. Most of this research concerns black-box algorithms and demonstrates that a certain degree of cooperation can be achieved using pricing algorithms. However, since these experiments rely on certain strong assumptions, their results may not be reproduced in the same way in real-world settings. The Study concludes that whether the alignment of two or more pricing algorithms is likely to arise by chance under real market conditions currently remains an open question. Nevertheless, the Study identifies six factors for assessing the plausibility of algorithmic collusion: (i) transparency of the market and degree of common knowledge between competitors, (ii) time horizon, (iii) stability of the competitive environment, (iv) degrees of freedom and complexity of the algorithm, (v) initialization, exploration strategy and learning rate, and (vi) symmetry/similarity in terms of algorithms and companies.
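The experimental settings referred to above typically pit simple reinforcement-learning pricing agents against each other in a repeated game. The stripped-down sketch below illustrates such a setup with entirely hypothetical parameters (price grid, demand function, learning rates); it is not the Study’s or any cited paper’s actual experiment, and, as the literature notes, whether the agents end up at a high “collusive” price depends heavily on these assumptions:

```python
import random

PRICES = [1, 2, 3]                    # discrete price grid (hypothetical)
EPISODES = 20000
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1     # learning rate, discount, exploration

# One Q-table per firm: state = rival's last price, action = own price
q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]

def profit(own, rival):
    # Toy demand: the cheaper firm sells more; equal prices split the market
    if own < rival:
        return own * 10
    if own == rival:
        return own * 5
    return own * 1

def choose(table, state):
    if random.random() < EPS:
        return random.choice(PRICES)                 # explore
    return max(table[state], key=table[state].get)   # exploit best known price

state = [random.choice(PRICES), random.choice(PRICES)]
for _ in range(EPISODES):
    a0 = choose(q[0], state[1])   # each firm conditions on the rival's last price
    a1 = choose(q[1], state[0])
    r0, r1 = profit(a0, a1), profit(a1, a0)
    for i, (a, r, s, s2) in enumerate([(a0, r0, state[1], a1),
                                       (a1, r1, state[0], a0)]):
        best_next = max(q[i][s2].values())
        q[i][s][a] += ALPHA * (r + GAMMA * best_next - q[i][s][a])  # Q-update
    state = [a0, a1]
```

No line of this code instructs the firms to coordinate; any price alignment that emerges does so from the interaction of two independently learning agents, which is precisely the attribution puzzle this third scenario raises.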
Another important point mentioned in the Study is that, since Article 101 of the TFEU does not prohibit conscious parallel behavior, an algorithm that merely unilaterally observes, analyzes and reacts to the publicly observable behavior of competitors’ algorithms might be considered an intelligent adaptation to the market rather than infringing coordination. Given this, some scholars and stakeholders propose that the current understanding that mere parallel behavior falls outside the scope of Article 101 of the TFEU needs to be reconsidered.
Another legal issue in this scenario concerns the extent to which the behavior of a self-learning algorithm can be attributed to a company. According to the Study, as long as an algorithm pursues a predefined strategy, the company will usually be responsible for the algorithmic behavior. In the case of autonomously acting black-box algorithms, however, this question remains open. In this context, some suggest that a company’s accountability for the behavior of its algorithms must be subject to a reasonable standard of care and foreseeability. Others suggest that an undertaking could violate competition rules only if it fails to take due precautions after becoming aware of coordinated behavior. The last approach explained by the Study suggests that algorithmic behavior could be treated similarly to the actions of a company’s employees. According to the established case law of the ECJ, for an undertaking to be held accountable for the actions of its employee: “[…] it is not necessary for there to have been action by, or even knowledge on the part of, the partners or principal managers of the undertaking concerned; action by a person who is authorized to act on behalf of the undertaking suffices.”
The Study nonetheless concludes that the standards for assessing an undertaking’s liability for collusive algorithmic behavior arising from distinct algorithms are still unclear. What is clear, however, is that undertakings need to think about how to ensure antitrust compliance when using pricing algorithms, especially after EU Commissioner Vestager called for “compliance by design” in this context.
C. Practical challenges when investigating algorithms
The Study also discusses the practical challenges of investigating algorithms, first describing the potential types of evidence that might be used to establish a competition violation and subsequently outlining ways to obtain and analyze the relevant information. Concerning the burden of proof, cases involving algorithms do not create novel issues, since the authority asserting an infringement bears the burden of proof in principle.
While explaining potential types of evidence, the Study makes a distinction between potentially relevant information associated with the role of the algorithm and its context on the one hand, and the functioning of the algorithm on the other hand.
According to the Study, information on the role of the algorithm and its business and/or technical context may serve as circumstantial evidence of a competition violation, depending on the case. First, information on the objective of the algorithm, its implementation and its changes over time could be relevant to understanding potential coordination, the undertaking’s responsibility for the algorithmic behavior, the scope of a suspected infringement, and the intent or negligence of the undertaking. Furthermore, an authority could investigate information on the input data used by the algorithm when assessing whether there is a restriction by object. Finally, information on the output and the decision-making process connected with the algorithm might be helpful for finding collusion in the context of a pricing algorithm. Information on the output might also play a role when assessing whether a potential infringement can be attributed to the company, in particular whether the algorithmic behavior was intended or foreseeable.
The Study states that the investigating authority may also want to gain a thorough understanding of the functioning of an algorithm. Within the normative assessment, such information might be particularly relevant when investigating potential coordination (e.g. at the code level). As for ways to obtain and analyze evidence, the Study explains that the investigating authority can use its established investigative powers, such as information and documentation requests, inspections and interviews. The Study also suggests that a more in-depth analysis of the algorithm may unveil additional evidence, in particular additional facts associated with the functioning of the algorithm. For such an analysis, different investigative approaches could be developed, such as an analysis of the source code in connection with information on the respective environment and interfaces, a comparison of real past input/output couples, a simulation of the algorithmic behavior on generated inputs, or a comparison of the algorithm to other, more easily interpretable algorithms and methods. Moreover, the Study suggests addressing requests for information to, or conducting dawn raids at the premises of, the algorithm’s developer(s), particularly when the algorithm was not developed and maintained in house.
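One of the investigative approaches just mentioned, comparing real past input/output couples with a simulation of the algorithm, can be sketched as follows. Every name, pricing rule and logged value here is hypothetical; the point is only to show the shape of such a replay check:

```python
def replay_check(algorithm, logged_pairs, tolerance=0.01):
    """Replay logged inputs through the algorithm under investigation and
    flag mismatches between simulated and historically observed prices.

    `logged_pairs` are (inputs, observed_price) couples recovered from the
    company's records. Discrepancies may indicate that the code analyzed is
    not the code that actually ran, or that prices were overridden manually.
    """
    mismatches = []
    for inputs, observed in logged_pairs:
        simulated = algorithm(**inputs)
        if abs(simulated - observed) > tolerance:
            mismatches.append((inputs, observed, simulated))
    return mismatches

# Hypothetical seized pricing rule: cost plus 20%, floored at the rival's price
def seized_algorithm(cost, rival_price):
    return max(cost * 1.2, rival_price)

log = [({"cost": 10.0, "rival_price": 11.0}, 12.0),   # consistent with the rule
       ({"cost": 10.0, "rival_price": 15.0}, 18.0)]   # inconsistent -> flagged
print(replay_check(seized_algorithm, log))
```

A matching replay supports the inference that the seized source code is the one that produced the market behavior under investigation, which is why the Study pairs source-code analysis with information on the environment and interfaces.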
The Study concludes that in the scenarios discussed here, the contemporary legal framework, in particular Art. 101 TFEU and its accompanying jurisprudence, allows competition authorities to address possible competitive concerns. In fact, competition authorities have already dealt with some cases involving algorithms, and these have not caused specific legal difficulties.
As regards the scholarly debate about whether Art. 101 TFEU needs to be understood more broadly, on the ground that algorithms test the conceptual limits between mere parallel behavior and illegal coordination, the Study notes that it is as yet unclear which types of cases competition authorities will face in the future; consequently, it is not yet possible to predict whether the current legal regime and the methodological toolkit need to be reconsidered.
The OECD Report focuses both on algorithms’ collusive effects and on their pro-competitive effects.
1. Pro-competitive Effects of Algorithms:
In the OECD Report, the efficiencies created by algorithms on the supply and demand sides of the market are explained as their pro-competitive effects. On the supply side, algorithms can increase transparency, improve existing products or develop new ones, promote market entry by enabling the development of new offerings, push firms to innovate, and reduce production costs while improving quality and resource utilization. On the demand side, algorithms may help optimize consumer decisions by providing more information, including on quality and on consumers’ preferences, and thereby increase the welfare of consumers and of society as a whole.
2. Collusion Effects of Algorithms:
a. Monitoring algorithms, which monitor competitors’ actions in order to enforce a collusive agreement, may facilitate collusion; however, traditional antitrust tools remain applicable to them, as they do not eliminate the human interaction needed to establish collusive agreements.
b. Parallel algorithms, which coordinate the behavior of competitors by sharing the same dynamic pricing algorithm, using the same IT companies or programmers, or following a market leader in real time, may result in coordinated behavior even though there is no active communication between the firms.
c. Signaling algorithms, which are used by firms to signal a desire to collude, eliminate the costs of traditional signaling methods and enable coordination before consumers can take advantage of the price moves.
d. Self-learning algorithms, which use machine/deep learning methods and generally target profit maximization, may result in unconscious collusion, as the complex nature of the process may lead firms to collusion without any known intention.
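The "parallel algorithms" category above can be made concrete with a tiny sketch of a follow-the-leader pricing rule. The firm names, prices and margin are hypothetical illustrations, not drawn from the OECD Report:

```python
LEADER = "firm_a"   # hypothetical market leader that rivals track

def follower_price(market_prices, margin=0.0):
    """'Parallel algorithm' sketch: track the market leader's price in real time.

    If every follower runs a rule like this, prices move in lockstep with the
    leader even though the firms never communicate with one another.
    """
    return market_prices[LEADER] + margin

prices = {"firm_a": 50.0, "firm_b": 48.0}
print(follower_price(prices))  # follower matches the leader's 50.0
```

The coordination here stems entirely from each firm's unilateral choice of rule, which is exactly why the OECD Report treats this category as problematic despite the absence of active communication.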
a. Challenging the Concept of Agreement:
As algorithms create new methods of colluding, the OECD Report analyzes the need to redefine “agreement”, which is the basis of most competition law investigations. It argues that a narrow interpretation of agreement may not be sufficient to capture the complex interactions between firms carried out through algorithms. The OECD Report assesses that signaling algorithms may amount to an offer to enter into an agreement, whereas it cannot yet be concluded whether using algorithms to observe competitors constitutes an agreement.
b. Liability Discussion
Antitrust liability arising from the use of algorithms is also discussed in the OECD Report. It is determined that, since most algorithms used today operate on human instructions, the humans behind them should be held responsible for the decisions reached by the algorithms. On the other hand, it is also acknowledged that emerging technological developments weaken the link between the algorithm and the human using it. Accordingly, an evaluation of the algorithm’s programmed instructions, the available safeguards, the reward structure, and the scope of its activities is suggested for assessing liability in the future.
c. Traditional Methods
While the above-mentioned solutions are introduced as more radical responses to the problems arising from algorithms, the OECD Report also suggests some traditional methods of resolving them:
i. Market studies and market investigations, which may help determine failures in the market, can also inform the government’s future regulatory preparations to address competition problems. Moreover, non-binding recommendations may be issued on the basis of such studies.
ii. Ex-ante merger control may be conducted by lowering the thresholds for intervention. It is suggested that market characteristics such as transparency and the velocity of interaction should be evaluated, and that conglomerate mergers should be scrutinized where tacit collusion can be facilitated by multimarket contacts.
iii. Remedies such as special compliance and monitoring programs can be introduced, and auditing mechanisms for algorithms can be applied. However, it is underlined that auditing may not be an effective solution, as algorithms may not intend collusion and auditing may not keep up with technological developments.
The OECD Report asserts that, in addition to the concerns regarding collusion, algorithms may also create problems with regard to abuse of market power, bias, censorship, manipulation, privacy rights, property rights and social discrimination. Considering this, the OECD Report also introduces possible regulatory interventions:
i. Institutional options to govern algorithms: In addition to supply-side and demand-side market solutions, which require active measures from both sides, the OECD Report suggests self-organization, self-regulation, co-regulation and state intervention. Information measures, principles of search neutrality, cybercrime regulations, data protection certification schemes, etc. are proposed in this regard. The OECD Report also proposes establishing new regulatory institutions for the digital economy, such as a global digital regulator. Lastly, the OECD recommends that governments consider a competitive approach in all legislation and public policy preparations.
ii. Measures on algorithmic transparency and accountability: Sets of principles for algorithmic transparency and accountability and the concept of compliance by design are provided as examples, and it is suggested that competition authorities could reverse engineer algorithms. The OECD Report, however, points out the difficulty of establishing transparency in machine/deep learning cases, considering their uninstructed decisions. The OECD Report also remarks on the obstacles to implementing such measures: the digital economy is governed by a number of different policy and law areas, and digital-economy market players almost always operate across borders.
iii. Regulations to prevent algorithmic collusion: The OECD Report states that no suggestion has yet been made to regulate algorithms, and gives examples of possible prospective interventions to that end:
- Price regulation: The OECD Report suggests that maximum price setting can be introduced when there is no more efficient solution, while also indicating its anti-innovation effects and its potential to strengthen entry barriers.
- Policies to make tacit collusion unstable: The OECD discusses whether restricting publicly available market information or concealing discounts would hinder collusion, or would instead support a restriction of competition by preventing consumers from accessing market data.
- Rules on algorithm design: The OECD Report states that restrictions on the design of algorithms may be imposed, but emphasizes that such restrictions may hinder innovation and impose an additional supervisory burden.
The CMA Report focuses on the pricing functions of algorithms and divides them into two categories: i) algorithms developed in house, and ii) algorithms developed by specialist algorithm development firms.
1. POSSIBLE PRO-COMPETITIVE EFFECTS OF ALGORITHMS
Similar to the OECD Report, the CMA Report also remarks on the positive effects of algorithms on the supply and demand sides. For the supply side, the CMA Report states that using algorithms reduces labor costs by replacing human workers, improves the efficiency of human workers, and makes markets more efficient and faster-clearing due to increased price responsiveness. On the demand side, it is stated that algorithms may help consumers make decisions, provide accurate price forecasting, and reduce search and transaction costs for consumers.
2. ALGORITHMS & COORDINATION
The CMA Report emphasizes that a distinction must be made between the use of algorithms for monitoring and enforcing an existing coordinated strategy, and theories of harm under which pricing algorithms might lead to coordinated outcomes when making unilateral pricing decisions. These theories of harm are discussed below:
a. The use of algorithms to facilitate explicit agreements
The CMA Report indicates that algorithms can be used to establish explicit collusion. As algorithms make it easier to detect and respond to deviations from collusion, reduce the chance of errors or accidental deviations, and reduce agency slack, i.e. eliminate the intervention of non-senior employees, using them may strengthen explicit collusive agreements.
b. Tacit coordination and conscious parallelism
The CMA Report segments tacit coordination resulted through use of algorithms into three different kinds, which it names as (i) hub and spoke, (ii) predictable agent and (iii) autonomous machine.
In the hub-and-spoke model, the CMA describes the case of the use of the same algorithm or data pool by different undertakings. In parallel with the Study, the CMA Report differentiates its approach according to whether the undertakings benefit from the same algorithm knowingly or unknowingly. Accordingly, the CMA Report sets forth that the mere use of the same algorithm should not necessarily be sufficient to establish tacit collusion, while arguing that explicit communication about the pricing algorithm used may be construed as explicit collusion.

As for predictable agent algorithms, which are described as unilaterally designed algorithms that react to external factors in a predictable way, the CMA Report identifies them as a factor strengthening tacit collusion, since they reduce uncertainty in the market. This interpretation of predictable agent algorithms is similar to the OECD Report’s “parallel algorithms” categorization.

Lastly, the CMA Report defines autonomous machine algorithms as systems unilaterally designed by undertakings to achieve a specific target, such as maximizing profit. This categorization is similar to the OECD Report’s “self-learning algorithms” classification.

As a result, the CMA Report comes to the conclusion that while hub-and-spoke model algorithms are an immediate risk for competition, the other two may become important in the future. Needless to say, considering the pace at which new technologies have emerged in the last decade, and given that we are currently in 2020, it would not be excessive to deem that “future” imminent.
3. NEGATIVE CORRELATION BETWEEN PERSONALISED PRICING AND COLLUSION
The CMA Report also takes a different approach to personalized pricing from the two other studies mentioned above. First, it puts forward that the increasing availability of data and more advanced algorithms will result in highly personalized pricing, sorting customers into ever finer categories, particularly on online marketplaces. It then asserts that while algorithms make personalized pricing less resource-intensive and more accurate, and also hinder the detection of deviations, the extensive use of personalized pricing would impede tacit coordination between undertakings.
4. EXAMINING AND REMEDYING ALGORITHM BASED COMPETITION CONCERNS
The CMA Report suggests the following methods for examining the presence of a competition infringement conducted through algorithms:
a. Evaluating the time horizon of a reinforcement learning algorithm's objective function, as stable tacit coordination sacrifices short-term profits in favor of long-term profits.
b. Detecting whether all or many competitors are using the same algorithm or objective function.
c. Determining what data the algorithm is using, as collecting data from many competitors may indicate collusion.
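The first method can be illustrated with a minimal sketch (ours, for illustration only; the profit streams and discount factors are hypothetical). In reinforcement learning, the objective function typically weights future profits by a discount factor, and a factor close to 1 means the algorithm is patient enough to sacrifice short-term profit, which is the precondition for stable tacit coordination that the CMA Report points to:

```python
def discounted_objective(profits, gamma):
    """Sum of per-period profits weighted by the discount factor gamma.

    A gamma close to 1 makes the agent patient: it will forgo
    short-term profit for higher long-term profit, which is the time
    horizon a stable tacitly coordinated outcome requires.
    """
    return sum(p * gamma**t for t, p in enumerate(profits))

# Hypothetical profit streams: deviating earns more now but triggers
# a punishment phase; coordinating earns steady elevated profits.
deviate = [10, 2, 2, 2, 2]
coordinate = [6, 6, 6, 6, 6]

# A short-horizon agent (gamma = 0.3) prefers deviating, while a
# long-horizon agent (gamma = 0.95) prefers coordinating; an objective
# function tuned to a long horizon is therefore a signal worth probing.
impatient_deviates = (discounted_objective(deviate, 0.3)
                      > discounted_objective(coordinate, 0.3))
patient_coordinates = (discounted_objective(coordinate, 0.95)
                       > discounted_objective(deviate, 0.95))
```

The sketch shows why an investigator might inspect the discount factor: the same profit streams rank differently depending on how heavily the algorithm weights the future.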
To counter algorithm-based anti-competitive behaviors, the CMA Report suggests the following actions:
a. Auditing algorithms: This may be used to understand whether, and how, a firm could know that its algorithm is implementing a collusive outcome.
b. Presuming algorithmic decisions to be anticompetitive: The CMA Report also refers to the opposing opinion that this may be too interventionist and restrict undertakings' liberty to set their prices.
c. Secret offers and masking: Consumers may request secret offers to prevent collusion.
III. CASE EXAMPLES
While there are three notable studies on algorithms' effect on competition, the effects of algorithms on competition have been discussed in only two investigations so far, and not thoroughly. These are the Bundeskartellamt's Lufthansa inspection and the real estate brokerage market investigation being conducted by the Spanish National Commission on Markets and Competition (CNMC).
In 2017, the Bundeskartellamt launched an investigation into Lufthansa, alleging that the undertaking abused its dominant position in the market by charging abusive prices following Air Berlin's insolvency. In its defense, Lufthansa argued that it did not charge abusive prices, as its prices are determined by a fully automated booking system that analyzes demand in the market. While the Bundeskartellamt concluded not to initiate a proceeding on account of abusive pricing, it also deemed it irrelevant whether the scrutinized price increases were conducted by a pricing algorithm or by human intervention.
As a result, although algorithms were a subject of the Bundeskartellamt's Lufthansa examination, the case does not constitute a landmark precedent guiding competition enforcers. Nevertheless, Andreas Mundt, the President of the Bundeskartellamt, signaled the Bundeskartellamt's likely attitude towards future cases in which algorithmic behaviors will be investigated by declaring that "the use of an algorithm for pricing naturally does not relieve a company of its responsibility", adding that since algorithms are created by humans and require human intervention to a certain extent as an input for obtaining a certain output, undertakings should not be able to hide behind algorithms.
CNMC Real Estate Brokerage Market
In November 2019, the CNMC initiated an investigation against seven companies operating in the online real estate brokerage market over allegations that they might have engaged in anti-competitive practices intended to directly or indirectly fix prices and other conditions. The critical point of the investigation is whether the design of certain real estate software and its algorithms could have made it easier to implement and maintain this direct or indirect fixing of commissions and commercial conditions.
While the investigation is still ongoing, the CNMC announced that this is the first competition investigation it has initiated due to the use of algorithms. However, Idealista, one of the investigated undertakings, commented that its platform never establishes or alters either the prices or the commercial conditions of real estate listings, and that the prices are set directly by the advertisers, without any algorithms modifying them.
IV. TURKISH COMPETITION AUTHORITY’S (TCA) POSITION ON ALGORITHMS
While the Turkish Competition Authority ("TCA") has not published any study regarding the effects of algorithms on competition, nor a general study on the digital economy, there has also not been any case inspecting algorithmic commercial behaviors. Nonetheless, the TCA's officials are not hesitant to express their prospective enforcement and regulatory attitudes to the press and the public.
In 2017, Ömer Torlak, the former President of the TCA, underlined the emergence of new business models on electronic platforms and the entry of new actors into the markets, actors that would have faced entry barriers in brick-and-mortar markets. Mr. Torlak noted that through algorithms developed by undertakings, prices can be altered automatically, in accordance with specific timing and location conditions, by using huge amounts of data that are not available in physical markets. He also stated that although the price change may not be significant for a single consumer, in total it may amount to a significant change.
Also, on January 30, 2020, the TCA announced that it will prepare a report on "Digitalization and Competition Policy". Meltem Bağış Akkaya, one of the members of the team appointed to prepare the TCA's digitalization report, stated that undertakings such as Alphabet, Amazon, Microsoft, Apple and Facebook make it harder for all competition authorities to conduct their enforcement with traditional methods. According to Akkaya, instruments such as market share, abuse of dominant position, price increases and foreclosing competitors from the market do not suffice for effectively investigating such undertakings. Akkaya also underlined the difficulty of monitoring the pricing behaviors of such undertakings, since they rely on algorithms and, while offering free services to consumers, offer personalized prices and alter prices very frequently, which causes difficulties in competition authorities' operations.
Moreover, while many governmental and non-governmental competition agencies and associations increasingly focus on discussing competition law enforcement in digital markets, the TCA either organizes or attends various events convening stakeholders to discuss prospective competition policies in the digital economy. While former President Torlak himself attended or commissioned delegates to attend numerous international and national events to express the TCA's stance, the newly appointed president, Prof. Birol Küle, has so far organized two Istanbul International Competition Forums. While digitalization was the topic of only one panel in the former, held in November 2019, in the latter, organized this March, the vast majority of panels discussed adapting existing competition law instruments to the digital economy.
In light of the above, it could be summed up that the absence of any cases or published studies before the TCA should not mean that the TCA does not follow the trends rising globally in competition enforcement; on the contrary, it could be projected that the TCA is preparing itself to be ready for any sudden policy or legislative shift regarding the digital economy.
All in all, it can easily be seen that the integration of algorithms into economic activities is a hot topic among competition law enforcers. As a result, the three major competition authorities of the UK, France and Germany, and the OECD, an international organization whose members are mostly developed and developing countries, have issued study reports revealing the novel effects of algorithms on economic policies and discussing alternative methods to address the potential concerns arising from them.
While the OECD Report and the CMA Report merely highlight some new concepts introduced by algorithms to competition law enforcement, the joint Study prepared by the French Autorité de la concurrence and the Bundeskartellamt, given its recentness, contains more detailed proposals and questions to be considered in order to resolve the new competition policy problems triggered by digitalization, primarily by algorithm use. Although the three studies diverge in their practical suggestions for handling algorithmic concerns, they broadly agree on imposing liability on algorithm-using undertakings, particularly where those undertakings are aware of the use of common algorithmic systems or data pools. However, for cases in which undertakings exploit distinct algorithms, the discussion remains unresolved, and the competition agencies seek resolution through alternative methods such as establishing new concepts of antitrust liability, for example third-party liability of algorithm developers, or amending the concept of agreement within the meaning of competition law.
Within this framework, the studies also present possible new tools to fight algorithmic anticompetitive behaviors, such as ex-ante regulation of algorithms; ensuring accountability and transparency; establishing a presumption of guilt; auditing and understanding algorithms at either the code level or the output level; and establishing indicator mechanisms or tests to determine whether an algorithm's decisions are anti-competitive.
Notwithstanding these developments, and considering the recent emergence of the technology, there has not yet been significant case law in which the usage of algorithms is widely assessed under specific circumstances. While the Bundeskartellamt did not differentiate whether Lufthansa had increased its prices through an algorithm or human intervention, the CNMC initiated its probe only very recently and thus does not yet have any outcome.
The competition policy world will therefore keep monitoring, as the TCA does, and trying to construct contributing ideas to address the unprecedented implications of algorithms for economic and consumer policies. As the above-mentioned studies have referred to each other's arguments, and given the global convergence augmented by the digitalized economy, we believe that only a unified, or at least widely converged, global attitude towards the competition concerns of the new economy would produce fruitful results. Thus, this article has been prepared in order to shed light on the studies issued until now in a single landscape and thereby support the convergence of different jurisdictions' prospective policies.
 Bundeskartellamt and The Autorité de la concurrence report on “Algorithms and Competition”, November 2019 p. 1.
 ibid 15.
 ibid 1.
 ibid 15.
 ibid 60.
 ibid 4-7.
 ibid 8-9.
 ibid 9-10.
 ibid 14.
 ibid 26-27.
 ibid 27.
 Bundeskartellamt (n 1) 27-28.
 ibid 28.
 ibid 60.
 ibid 31. Cf. e.g. Commission, Commission Staff Working Document – Final report on the E-commerce Sector Inquiry, May 2017.
 Bundeskartellamt (n 1) 33.
 ibid 33-34.
 ibid 37.
 ibid 38-39.
 ibid 42-43.
 OECD, Algorithms and Collusion, 2017, p. 31.
 Bundeskartellamt (n 1) 44.
 Schwalbe, Algorithms, Machine Learning, and Collusion, Journal of Competition Law & Economics 2018, pp. 568 et seq. (596).
 Bundeskartellamt (n 1) 45.
 ibid 52.
 ibid 55-56.
 ibid 60.
 ibid 57-58.
 ECJ, Musique Diffusion française and Others v Commission, Judgment of 07.06.83, Joined Cases 100/80 to 103/80, para. 97; see also ECJ, Protimonopolný úrad Slovenskej republiky v Slovenská sporitelna, Judgment of 07.02.13, Case C-68/12, para. 25.
 Bundeskartellamt (n 1) 59.
 ibid 61.
 ibid 62.
 ibid 62-64.
 ibid 64.
 ibid 65-70.
 ibid 75.