On the inadequacy of Wolt’s Algorithmic Transparency Report and the limits of “algorithmic management” discourse

Niels van Doorn
Published on: 2022-10-24

This blog post is a lightly edited version of the talk Niels van Doorn gave during a panel on algorithmic management at the 2022 Reshaping Work conference in Amsterdam (14 October).

Over the past years, the debate on the societal role of algorithms, and on what they are capable of in particular settings, has intensified immensely. The adjective “algorithmic” has been put in front of a great number of verbs and nouns. But what is actually “algorithmic” about a particular process or practice, and which parts of it are not? If algorithms are about controlling things (work and production processes, risk), then who controls the algorithms—and for what purposes?

Algorithms, in the context of work and industry, serve as technologies for scaling operations, automating the execution of tasks based on managerial decisions. As such, they are very useful and important—they have agency. But to be clear: they do not make decisions, at least not by themselves. Algorithms only have agency within broader systems or organizations of people and technologies, where they are (re)programmed to achieve the organization’s objectives, which are often profit-driven. As such, they are expected to achieve efficiency along with scale, while minimizing frictions (i.e. risks).

Now, my argument is that, rather than focusing primarily on the entities that execute decisions at scale, we should spend more time addressing the systems in which they are embedded and the people who determine what they are optimizing for (and at what cost/risk for those who are impacted by such optimization). Who makes the pertinent operational and business decisions, based on what criteria and with what objectives in mind—and what are the (potential) consequences of these decisions?

Doing so moves us far beyond demands for “algorithmic transparency,” which, along with “fairness,” has become something of a gratuitous term used to appease concerned publics and ultimately conceals more than it reveals. For instance, Finnish delivery company Wolt recently published an “Algorithmic Transparency Report” – a first of its kind in the industry. While such an initiative is in itself laudable, a closer look at the report shows it to be little more than a promotional folder produced by a company that really wants you to know they care and are willing to learn. When it comes to actually detailing the inputs and outputs of their algorithms, however, the document comes up woefully short.

First, it does not explain why Wolt has decided to opt for a dynamic pricing system, refusing to make transparent how much the company actually pays its delivery “partners.” How much is the base fee per delivery in each market, and how many cents per km do partners receive as part of the “distance fee” in different markets? How often are fee calculations updated, and based on what new data inputs or metrics? Merely explaining that its order pricing algorithm is based on two components does not tell us (or Wolt’s partners) much. Second, the report does not make clear why Wolt has opted for an algorithmic calculation of delivery fees based on a straight line across the map (“as the crow flies”), rather than the actual distances that have to be traversed when delivering an order in a city. This does not seem to benefit partners’ earnings, although it does seem to be cost-efficient for Wolt. Third, the report neither explains nor justifies why Wolt only pays its delivery partners for the time spent on a delivery, rather than also compensating waiting time at a restaurant. While this takes us beyond the realm of the “algorithmic” proper, that is exactly the point: merely focusing on the automated execution of operational decisions keeps us from asking more fundamental questions about the reasoning and motivations behind such decisions.
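To make concrete how the “as the crow flies” choice can shortchange riders, here is a minimal sketch in Python. The fee parameters, the coordinates, and the 40% detour factor are entirely hypothetical assumptions for illustration; Wolt discloses none of its actual values.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ("as the crow flies") distance in km between two points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def delivery_fee(distance_km, base_fee=2.00, per_km=0.80):
    # Hypothetical two-component fee: flat base fee plus per-km distance fee.
    return base_fee + per_km * distance_km

# Hypothetical pickup and drop-off points roughly 2 km apart in Amsterdam.
crow_km = haversine_km(52.370, 4.890, 52.388, 4.900)

# In a dense city the route actually ridden is usually longer than the
# straight line; assume a 40% detour factor here purely for illustration.
route_km = crow_km * 1.4

print(round(delivery_fee(crow_km), 2))   # fee as paid (straight line)
print(round(delivery_fee(route_km), 2))  # fee if actual distance were paid
```

Under these assumed numbers, the straight-line fee is noticeably lower than what the very same formula would pay on the distance actually ridden: every kilometre of detour is, in effect, unpaid.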

Finally, and even more fundamentally, Wolt’s report says nothing about the company’s decision to work with ostensibly self-employed “delivery partners” rather than hiring these riders as employees. To be sure, I am not claiming that reclassification would be in the best interest of all riders, but it would provide much-needed rights and protections for those who spend a lot of time on the app and deliver the bulk of Wolt’s orders. Not hiring these “high volume” riders offloads the risks of doing business on Wolt’s platform onto workers, rather than sharing those risks, as true partners would, and arguably should.

Ultimately, algorithms are a means to an end. In this case, these ends are cost minimization and revenue maximization. Under such conditions, workers will have to absorb a variety of risks, with various consequences: e.g. the risk of getting injured; of not having access to vital information; of suddenly being paid less; of not getting work offers; and of not being insured. All of these issues are of central concern to gig workers, and they cannot be blamed on (the opacity of) algorithms alone. Rather, one could argue that the entire gig economy is structurally organized against them, from dynamic pricing algorithms to clickwrap service agreements and the absence of collective bargaining.

In this light, then, algorithmic transparency is only a tiny step in the right direction, since it intervenes too far downstream from where decisions are made and power resides. Moreover, algorithmic systems, by design, resist transparency – if only because they change so frequently. But even if they did not change, there are still huge gaps to be bridged between seeing, knowing, and doing. The latter requires political will, which is currently gaining traction. The critical question remains what scope and force public interventions will have over systems of control that are only partly “algorithmic” in nature.