Why is it that Google, a company once known for its distinctive “Don’t be evil” motto, is now facing the same charges of “surveillance capitalism” as Facebook, a company that never made such claims? Why is it now subject to the same kind of antitrust complaints once faced by Microsoft, the “evil empire” of the previous generation of computing? Why is it that Amazon, which has positioned itself as “the most customer-centric company on the planet,” now lards its search results with advertisements, placing them ahead of the customer-centric results chosen by the company’s organic search algorithms, which prioritize a combination of low price, high customer ratings, and other similar factors?
The answer can be found in the theory of economic rents, and in particular, in the kinds of rents that are collected by companies during different stages of the technology business cycle. There are many types of rents and an extensive economics literature discussing them, but for purposes of this article, they can be lumped into two broad categories—“rising tide rents” that benefit society as a whole, such as those that encourage innovation and the development of new markets, and “robber baron rents” that disproportionately benefit those with power.
What Is Economic Rent?
Not to be confused with rent in the ordinary sense, a charge for the temporary use of property, economic rent is income above the competitive market rate, collected because of asymmetries in ownership, information, or power.
Economists Mariana Mazzucato and Josh Ryan-Collins write, “If the reward accruing to an actor is larger than their contribution to value creation, then the difference may be defined as rent. This can be due to the ownership of a scarce asset, the creation of monopolistic conditions that enable rising returns in a specific sector, or policy decisions that favour directly or indirectly a specific group of interest.”
For example, consider drug pricing. Patents—exclusive, government-granted rights intended to encourage innovation—protect pharmaceutical companies from competition and allow them to charge high prices. Once the patents expire, there is competition from so-called “generic drugs,” and the price comes down. That difference in price (and its impact on pharmaceutical company profits) shows the extent of the rent.
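The arithmetic behind this is simple enough to sketch. The following is a minimal illustration with invented figures (the prices, volume, and function name are hypothetical, not real drug data):

```python
# Minimal sketch of the rent described above: income above the
# competitive (post-patent, generic) price. All numbers are hypothetical.

def economic_rent(patent_price: float, generic_price: float, units_sold: int) -> float:
    """Rent per period = (monopoly price - competitive price) * volume."""
    return (patent_price - generic_price) * units_sold

# Hypothetical drug: $300 per course under patent, $40 as a generic.
rent = economic_rent(patent_price=300.0, generic_price=40.0, units_sold=1_000_000)
print(f"${rent:,.0f}")  # → $260,000,000
```

Once the patent expires and competition pushes the price toward the generic level, that gap in profit disappears; the gap is the rent.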
In 20th century neoliberal economics, rents were typically seen as a temporary aberration that is eventually competed away, a price we pay for a rising tide of innovation. But as Mazzucato points out, to the classical economists (Smith, Ricardo, and Mill), who lived in a world of inherited power and privilege, rents were a pernicious and persistent consequence (and source) of inequality. At the dawn of economic theory, agriculture was still the chief source of value creation, and much of the value created by the labor of serfs and tenant farmers was appropriated by those who owned the land. When the local baron sent his troops to collect what he considered his share of the harvest, it was impossible to say no. In an unjust society, it is not effort, investment, or innovation but rents rooted in power asymmetries that determine who gets what and why.
But not all rents represent abuse of power. As noted by economist Joseph Schumpeter, innovation—whether protected by patents, trade secrets, or just by moving faster and more capably than the competition—provides an opportunity to receive a disproportionate share of profits until the innovation is spread more widely.
During the expansive period of a new technology cycle, market leaders emerge because they solve new problems and create new value not only for consumers but also for a rich ecosystem of suppliers, intermediaries, and even competitors. Even though the market leaders tend to receive a disproportionate share of the profits as they lay waste to incumbents and dominate the emerging market, value creation is a rising tide that lifts all boats.
But this kind of virtuous rising tide rent, which benefits everyone, doesn’t last. Once the growth of the new market slows, the now-powerful innovators can no longer rely on new user adoption and collective innovation from a vibrant ecosystem to maintain their extraordinary level of profit. In the dying stages of the old cycle, the companies on top of the heap turn to extractive techniques, using their market power to try to maintain their now-customary level of profits in the face of macroeconomic factors and competition that ought to be eating away at them. They start to collect robber baron rents. That’s exactly what Google, Amazon, and Meta are doing today.
Then the cycle begins again with a new class of competitors, who are forced to explore new, disruptive technologies that reset the entire market. Enter OpenAI, Anthropic, and their ilk.
Attention Is All You Need
What is the source of big tech market power? What is the limited resource that they control and monopolize? It’s not our data. It’s not the price of the services we purchase from them—they give those away for free. It’s our attention.
Back in 1971, in a talk called “Designing Organizations for an Information-Rich World,” political scientist Herbert Simon noted that the cost of information is not just money spent to acquire it but the time it takes to consume it.
“In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”
In the discussion following the talk, Simon noted that in the future, information would be so abundant that we would need machines to help us manage our attention.
And that has indeed been the secret to success in the information age. Google was founded with the promise of finding the right web page out of billions, giving you just what you want and then sending you on your way. Amazon aimed to help customers find the best quality and price for any one of millions of products. Even social media started with the promise of information triage: for each person, a unique feed of updates from only the friends they had chosen to follow. These are all astonishing tools for making our limited capacity for attention more efficient.
In the early idealistic days of internet expansion, the leading companies earned outsized profits by solving the attention allocation problem. As the internet grew, the amount of information available to consumers became so vast that it outran traditional human means of curation and selection. Attention allocation was outsourced to the machines. Algorithms for search, recommendations, social media feeds, entertainment, and news became the foundation of an enormous new economy.
The internet giants succeeded by doing what they are now too often reviled for: extracting signal from massive amounts of data. Google not only crawled and indexed virtually every page on the web; it looked at how sites linked to each other, tracked which of the top ten links it showed were clicked on the most, and noted which ones led people to come back and try another and which sent them away satisfied. It used location data and past searches to make answers more relevant and personalized. Amazon, too, used everything from price, user reviews, and popularity to an individual’s purchase history to bring to the top the products it believed best matched its customers’ needs. In my 2005 essay “What Is Web 2.0?,” I made the case that the companies that had survived the dotcom bust had all, in one way or another, become experts at “harnessing collective intelligence.”
Perhaps a more direct way to say this in the context of economic value creation is that companies such as Amazon, Google, and Facebook had developed a set of remarkable advances in networked and data-enabled market coordination.
But over time, something went very wrong. Instead of continuing to deploy their attention optimization algorithms for the benefit of their users and suppliers, the tech giants began to use them to favor themselves. It first became obvious with social media, where recommendations amplified addictive, divisive content in order to keep users scrolling, creating additional surface area for advertising. Google began to place more and more advertising ahead of “organic” search results, turning advertising from a complementary stream of useful information that ran beside search results into a substitute for them. Amazon was late to the party, but once it discovered advertising, it went all in. Now a typical page of Amazon product search results consists of sixteen ads and only four organic results.
Google and Amazon were still atop their respective hills of web search and ecommerce in 2010, and Meta’s growth was still accelerating, but it was hard to miss that internet growth had begun to slow. The market was maturing. From 2000 to 2011, the percentage of US adults using the internet had grown from about 60% to nearly 80%. By the end of 2012, it was up to 82%. But in 2013 and 2014, it remained stuck at 83%, and while in the ten years since it has reached 95%, it had become clear that the easy money that came from acquiring more users was ending. Penetration in Europe, the other lucrative market, was on a similar track to the US, and while there was plenty of user growth still to be found in the rest of the world, the revenue per user was much lower. What are these now-gigantic companies to do when their immense market capitalizations depend on rapid growth and the expectation of growing profits to match?
These companies did continue to innovate. Some of those innovations, like Amazon’s cloud computing business, represented enormous new markets and a new business model. But the internet giants also came to focus on extracting more usage and time spent, and thus more revenue, from a relatively stable base of existing customers. Often this was done by making their products more addictive, getting more out of their users by nefarious means. Cory Doctorow calls this the “enshittification” of Big Tech platforms.
Fast forward to the present, and Amazon has clearly given up on the goal of finding the best result for its users. Since launching its Marketplace advertising business in 2016, Amazon has chosen to become a “pay to play” platform where the top results are those that are most profitable for the company.
In “Amazon is burying organic search results,” research firm Marketplace Pulse notes:
Of the first twenty products a shopper sees when searching on Amazon, only four are organic results. There is little space left for organic results at the top of the page, the real estate that drives most sales. Few purchases happen beyond the first page of search results. And not many shoppers scroll to the bottom of even the first page…
It takes scrolling past three browser windows worth of search results to get to the fifth organic result. It takes even more swipes to see the fifth organic result on mobile.
This is what we mean by a “robber baron” rent: “pay us, or you’ll effectively disappear from search.”
The harm to users isn’t just time lost scrolling through ads to find the best results. In a recent research project at University College London’s Institute for Innovation and Public Purpose, my colleagues and I found that users still tend to click on the product results at the top of the page even when they are no longer the best results. Amazon abuses the trust users have come to place in its algorithms, allocating their attention and clicks to inferior sponsored listings instead. The most-clicked sponsored products were 17% more expensive and ranked 33% lower by Amazon’s own quality-, price-, and popularity-optimizing algorithms. And because product suppliers must now pay for the ranking they previously earned through product quality and reputation, their profits go down as Amazon’s go up, and prices rise as some of the cost is passed on to customers.
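One simple way to express gaps of this kind is as a percentage premium of the sponsored result over the organic baseline. The sketch below illustrates the comparison with invented prices and ranks (not the study’s actual data):

```python
# Hypothetical sketch of the sponsored-vs-organic comparison described above.
# The prices and ranks are invented; only the method is illustrated.

def premium_pct(sponsored: float, organic: float) -> float:
    """Percentage by which the sponsored value exceeds the organic baseline."""
    return (sponsored - organic) / organic * 100

# Invented example: a $23.40 sponsored product vs. a $20.00 organic one,
# and a sponsored product that Amazon's own algorithms rank 4th vs. 3rd.
print(f"{premium_pct(23.40, 20.00):.0f}% more expensive")  # → 17% more expensive
print(f"{premium_pct(4, 3):.0f}% lower ranked")            # → 33% lower ranked
```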
It appears to have worked, for now. Amazon’s recent quarterly disclosures (Q4 2023), for example, show year-on-year growth in online sales revenue of 9% but growth in fees of 20% (third-party seller services) and 27% (advertising sales). But the historical lessons from the downfall of IBM’s mainframe monopoly and Microsoft’s stranglehold on the personal computer suggest that the company will be forced to renew its commitment to value creation or face decline and challenges from new, disruptive market entrants focused on providing the kind of value to users and suppliers that Amazon once did. The damage to Amazon may be a gradual downslope or a sudden cliff. When does brand and reputation damage accumulate to the point that consumers start trusting Amazon less, shopping there less, and expending the effort to try alternatives? If history is any judge, it will happen sooner or later unless Amazon dials back the rents.
A similar dark pattern is visible in the evolution of Google search. Starting around 2011, advertising, which once framed the organic results and was clearly differentiated from them by color, gradually became more dominant, and the signaling that it was advertising became more subtle. Today, especially on mobile, the user may have to scroll down several times to get to the first organic result. The result is less striking than on Amazon, since a very large percentage of Google searches carry no advertisements at all. But for commercial searches, the best result for users (a local merchant, for example) can often only be found after scrolling through pages of ads from internet sellers and national chains.
The harms to users are thus less than they appear to be at Amazon, where advertising distorts the results of every search, but there are still serious concerns. Both Google and Amazon are gatekeepers controlling the visibility of a vast ecosystem of suppliers. Those suppliers aren’t just a commodity to be exploited by the platform. They are its partners in creating the value that draws users to the platform. Without websites, there would be no need for Google search or raw material for its results; without merchants, no Amazon. The same is true of other internet gatekeepers. Without app developers, there would be no App Stores; without users creating content as well as consuming it, no social media.
When suppliers are harmed, users too will be harmed over the long run. These ecosystems of value co-creators depend on the platform’s fairness in allocating attention to the most relevant results. When the platform displaces organic results with paid results, preferences its own applications, products, or services, or provides information directly to the consumer in competition with the originators of that information, the ecosystem suffers a loss of incentive and reward for continuing to produce value. Eventually, this loss of value affects both users and the platform itself, and the whole virtuous circle of creation, aggregation, and curation breaks down.
The company itself is also harmed, as even its own innovations may be held back in order to protect lucrative existing lines of business. Google, for example, invented the large language model architecture that underlies today’s disruptive AI startups. It published the original transformer paper (not quite coincidentally titled “Attention Is All You Need”) in 2017 and released BERT, an open source implementation, in late 2018, but it never went so far as to build and release anything like OpenAI’s GPT line of services. It’s unclear whether this was a lack of imagination or a kind of “strategy tax”; it was certainly obvious to outsiders how disruptive BERT could be to Google Search. In 2020, when my own company released O’Reilly Answers, a plain language search engine based on BERT for the content on the O’Reilly platform, I was struck by how, for the first time, we could search our own content better than Google could.
It was left to startups to explore the broader possibilities of generative AI and chatbots.
Will History Repeat Itself?
The enshittification of Amazon and Google is old news to most users. We remember how good these services used to be, and lament their decline. But we have slowly gotten used to the fact that results are not what they once were.
Antitrust authorities in Europe and the US have woken up, and are questioning abuses of market power by Big Tech companies, albeit not always successfully. Regulators may force better behavior. My hope, though, is that in responding to new competitors, the companies themselves may wake up and pull back from the brink before it’s too late.
It’s already clear that LLMs may offer the greatest competition that Google, Amazon, and the other current internet giants have ever faced. While the results are as yet inferior to those offered by Google and Amazon, users are already asking questions of ChatGPT that would once have gone to a search engine. The lower quality of the results is typical in the early days of a disruptive technology. It doesn’t matter, because disruptive technologies start out by solving new problems, serving new markets, and creating new opportunities. Part of their disruptive power is that newcomers color outside the lines drawn to protect the business models of the existing players. They are eager to surprise and delight their users; the focus in the early days is always on value creation. Mature and declining companies, by contrast, tend to hobble their products as they focus on value extraction. They lose their ideals and their edge, eventually alienating their customers and their suppliers and opening the door to competition.
We are in those early days once again. Leadership comes to those who create the most value for the most users. It is only later, after the market consolidates, that the value extraction phase begins. At that point, will the new market leaders also turn to more traditional extractive techniques? Just like today’s incumbents, will they end up using their market power to protect their now-customary level of profits in the face of macroeconomic factors and competition that ought to be eating away at them?
Regulators would be wise to get ahead of this development. The current generation of algorithmic overlords shape the attention of their users, helping to decide what we read and watch and buy, whom we befriend and whom we believe. The next generation will shape human cognition, creativity, and interaction even more profoundly.
There is a great deal of discussion about the risks and benefits of AI, but it is generally focused narrowly on the technical capabilities of AI tools and whether continued advances will eventually put AI beyond human control, leading to possible disaster. Closer to the present, risk analysis focuses on social problems like bias, misinformation, and hate speech, or the potential spread of biological and nuclear capabilities.
Yet many of the most pressing risks are economic, embedded in the financial aims of the companies that control and manage AI systems and services. Are AI companies going to be immune to the incentives that have made today’s tech giants turn against their users and their suppliers, the same incentives that have led financial institutions to peddle bad assets, pharmaceutical companies to promote opioids, cigarette companies to hide the health risks of smoking, and oil companies to deny climate change? I think not.
Rather than blaming the moral failings of company leadership, look instead to the economic incentives that rule public companies. Financial markets (including venture capitalists weighing the valuation of the next round) reward companies handsomely for outsized growth of revenue and profit while brutally punishing any slowdown. Since stock options are a large part of executive compensation (and of compensation generally at Silicon Valley companies), failing to deliver the required growth comes at a very high cost to company leadership and employees.
It is too early to know how best to regulate AI. But one thing is certain: you can’t regulate what you don’t understand. Economic abuses by companies typically hide in plain sight for years, with whistleblowers, researchers, regulators, and lawyers struggling to prove what the companies continue to deny. This is going to be even more true of an inscrutable black box like AI.
AI safety and governance will be impossible without robust and consistent institutions for disclosure and auditing. To achieve prosocial outcomes, AI model and application developers need to define the metrics that explicitly aim for those outcomes and then measure and report the extent to which they have been achieved. These are not narrow technical disclosures of model capabilities, but the metrics the companies use to manage AI as a business, including what processes and metrics they use to reduce the risks that have been identified. If they begin to twist AI’s training, guardrails, and objectives for their own benefit, we should be able to see it in the numbers.
The time to do this is now, when AI developers are still in the virtuous stage of innovation and rising tide rents, and while the companies are exploring the possibilities of AI regulation. It is important to understand what “good” looks like while companies are still putting their best foot forward, developing services to delight and serve users and suppliers and society, so that if (or perhaps when) the incentives to take advantage of others take over, we can look back and see when and how things began to go wrong.
Let’s not wait till the robber barons are back.
A longer version of this article was previously published as part of the UCL Institute for Innovation and Public Purpose, Working Paper Series (IIPP WP 2024-04). Available at: https://www.ucl.ac.uk/bartlett/public-purpose/wp2024-04. That version includes additional history of earlier cycles of value creation and extraction during the mainframe and PC eras.