
Looking for algorithmic transparency

Authors:

Mário Passos Ascenção

principal lecturer, service business development and design
Haaga-Helia ammattikorkeakoulu

Aarni Tuomi

lecturer, hospitality business
Haaga-Helia ammattikorkeakoulu

Visiting Research Fellow
University of Surrey

Published: 23.03.2023

As part of Haaga-Helia’s AlgoAmmatti project, we set out to shed light on the ‘black box’, that is, to study what kinds of design features digital labour platforms operating in Finland have implemented. Specifically, we chose to focus on algorithmic management practices and how these are communicated to the general public.

As a basis for our study, we used a list of platforms operating in Finland compiled by the Finnish Institute of Occupational Health (2023). The list currently includes 53 multi-sided online platforms covering a wide range of markets, from connecting parents looking for childcare with carers, to connecting brands with influencers, to connecting building operators with plumbers and locksmiths.

Algorithmic management

Data-driven companies increasingly utilise machine learning (ML) algorithms and other forms of artificial intelligence (AI) to e.g. track and manage the productivity of workers or to personalise customers’ user experience on commercial platforms. Such algorithmic control or algorithmic management (Kellogg et al., 2020) is particularly well-studied in the context of digital labour platforms or the ‘gig economy’, which enables untraditional forms of earning an income, whether it’s remote ‘click’ or ‘cloud’ work (e.g. Amazon MTurk, TaskRabbit), white collar freelancing (Fiverr, Freedomly.io), or spatiotemporally bound on-demand work (Bolt, Wolt, Just Eat).

Often, digital labour platforms not only create the online marketplace that connects buyers, sellers and third parties, but also impose the specific features that determine how user interactions on the marketplace work, for example how offerings are categorised, ranked, or priced, or how order-fulfilment times are determined. For example, some platforms offer sellers an opportunity to pay for boosting their visibility on the platform. To build trust, platforms often include rating and reviewing systems, whereby buyers are able to rate their experience with a particular seller.
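To make the idea concrete, the sketch below shows one way a marketplace might combine ratings and paid boosts into a display ranking. It is a minimal illustration only: the field names, the weights, and the way a paid boost interacts with ratings are all our assumptions, not any platform’s disclosed logic.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    seller: str
    avg_rating: float   # mean of buyer ratings, 0-5
    review_count: int   # number of reviews received
    paid_boost: bool    # has the seller paid for extra visibility?

def rank_listings(listings: list[Listing]) -> list[Listing]:
    """Order listings for display. Weights are invented for illustration;
    real platforms do not publish how ratings and paid boosts interact."""
    def score(listing: Listing) -> float:
        # Dampen the rating by review volume so a single 5-star review
        # does not outrank an established seller.
        confidence = min(listing.review_count / 50, 1.0)
        base = listing.avg_rating * confidence
        # In this toy model, a paid boost simply adds a fixed bonus.
        return base + (1.5 if listing.paid_boost else 0.0)
    return sorted(listings, key=score, reverse=True)

listings = [
    Listing("A", 4.9, 3, False),
    Listing("B", 4.4, 120, False),
    Listing("C", 3.8, 80, True),
]
for listing in rank_listings(listings):
    print(listing.seller)  # prints C, B, A: the paid boost wins out
```

Even this toy example shows why transparency matters: whether a paid boost can outrank a better-rated seller is entirely a design decision hidden inside the scoring function.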

The black-box problem of algorithms

Given the increasing popularity of the platform economy and its use of AI, research has noted the often opaque way technology companies discuss their algorithms, with some scholars pointing to the black-boxed nature of algorithms and the lack of transparency and explainability surrounding them.

This difficulty of comprehension, amongst other things, has led Pasquale (2015) to conclude that we are living in a black box society populated by enigmatic technologies, where authority is increasingly expressed algorithmically (ibid., 8). Indeed, no service business or context escapes the power of the algorithm. As most algorithms are developed by commercial organisations that consider them intellectual property, they are generally not available to the public, making them effectively impossible to evaluate and giving rise to the so-called black-box problem (von Eschenbach 2021).

Recently, Cory Doctorow coined the term ‘enshittification’ to reflect the convergence of “the power of platform owners to change how their platforms extract value from users and the nature of the multi-sided markets – where the platforms [algorithms] sit between buyers and sellers, holding each hostage to the other and then raking off an ever-larger share of the value that passes between them” (Naughton 2023).

Scholars in the philosophy of technology have also highlighted the non-neutrality of technology: technology is always laden with human decisions and values and, as a consequence, is never really neutral (Ihde 1979). According to Pasquale (2015, 8), authority is increasingly expressed algorithmically, as the values and prerogatives that the encoded rules enact are hidden within black boxes.

Following this line of reasoning, the design features of multi-sided matchmaking on digital labour platforms create implicit and explicit affordances for users that are value-laden and, in the case of algorithmic management, potentially opaque.

Design features of digital labour platforms

Based on our review, many of the platforms operating in Finland have implemented design features that could be characterised as algorithmic control. Several small and large platforms allow users to rate and review sellers, e.g. Babysits, Gixon, Superprof or Upwork. However, the platforms do not communicate clearly how exactly such ratings are used in buyer-seller matchmaking, or how users may dispute ratings or reviews.

Going beyond rating and reviewing, interesting examples include Bolt.works and Fiverr. For example, Fiverr has recently introduced a promoted gigs system which automatically attempts to match sellers that are highly relevant to buyers’ projects. However, there is no transparency as to how the matchmaking algorithm actually works, i.e. what constitutes high relevance and how different types of data are weighted.
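Since Fiverr does not disclose how its matchmaking works, the following sketch only illustrates what weighted relevance scoring could look like in principle. Every feature name and weight here is hypothetical.

```python
# Hypothetical weighted relevance scoring; Fiverr's actual features
# and weights are not public.
WEIGHTS = {
    "skill_overlap": 0.5,   # share of required skills the seller covers
    "past_rating": 0.3,     # normalised average rating (0-1)
    "response_rate": 0.2,   # share of enquiries answered promptly
}

def relevance(seller_features: dict[str, float]) -> float:
    """Linear combination of normalised features (all in [0, 1])."""
    return sum(WEIGHTS[k] * seller_features.get(k, 0.0) for k in WEIGHTS)

sellers = {
    "alice": {"skill_overlap": 0.9, "past_rating": 0.8, "response_rate": 0.7},
    "bob":   {"skill_overlap": 0.6, "past_rating": 1.0, "response_rate": 1.0},
}
best = max(sellers, key=lambda name: relevance(sellers[name]))
print(best, relevance(sellers[best]))  # alice 0.83
```

The point of the sketch is that the choice of features and weights fully determines who gets matched; without disclosure, sellers cannot know which behaviours the system rewards.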

In terms of algorithmically tracking workers’ productivity, the most explicit example comes from Bolt. Bolt offers a ride-hailing platform which connects users looking for a ride with a fleet of on-demand taxi drivers. According to Bolt’s public reporting, the company tracks a metric called Driver Score to capture ride confirmation rates as well as driver performance, e.g. poor ratings received, cancelling ride requests without contacting the customer first, or not starting to drive towards the customer’s location quickly enough. In our review of the material provided on Bolt’s website, we did not find further information on what specific data goes into calculating driver scores or how the Driver Score is used in practice.
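As Bolt does not publish the formula behind Driver Score, the sketch below merely illustrates how the publicly named signals (confirmation rate, ratings, cancellations without customer contact, slow reaction to pickups) might in principle be folded into one metric. The weights and arithmetic are entirely invented.

```python
def driver_score(confirm_rate: float, avg_rating: float,
                 uncontacted_cancels: int, slow_reactions: int,
                 total_rides: int) -> float:
    """Toy composite score on a 0-100 scale.

    The inputs mirror the signals Bolt names publicly; the way they
    are combined here is purely our assumption.
    """
    score = 100.0 * confirm_rate                   # start from confirmations
    score -= max(0.0, 4.5 - avg_rating) * 10       # penalise poor ratings
    if total_rides > 0:
        # Penalise cancellations without contact and slow reactions
        # in proportion to how often they occur.
        score -= 100 * (uncontacted_cancels / total_rides) * 0.5
        score -= 100 * (slow_reactions / total_rides) * 0.25
    return max(0.0, min(100.0, score))

print(driver_score(confirm_rate=0.95, avg_rating=4.7,
                   uncontacted_cancels=2, slow_reactions=5,
                   total_rides=200))  # 93.875
```

For a driver, the difference between a 0.5 and a 0.25 penalty weight could decide their standing on the platform, which is precisely the kind of detail that remains undisclosed.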

Of the 53 companies we reviewed, only one, the restaurant food and grocery delivery platform Wolt, has published a specific report attempting to explain how the company’s algorithms work. We see Wolt’s Algorithmic Transparency Report (2022) as a clear move towards making digital labour platforms’ design features more transparent and understandable, and as a positive step that sets a benchmark for other platform companies operating in Finland.

However, Wolt’s report itself is not without faults, and at times it could have gone into even greater detail. For example, the report states that couriers are matched with delivery tasks purely based on the courier’s location (presumably GPS data from the mobile app) and the type of vehicle the courier uses (e.g. car, bike). However, how such data is actually used is not detailed, for example for what types of orders a specific vehicle type is given more weight in the algorithm’s decision-making, or what impact single vs. bundled delivery tasks have on task dispatching. The report could also have given more case examples, e.g. how tasks are distributed when a group of couriers using the same vehicle type are waiting for orders at the same location, such as a parking lot in front of a Wolt Market or a food court.
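Going only by what the report does disclose (courier location and vehicle type), a dispatching rule could in principle resemble the sketch below. The straight-line distance metric, the vehicle weights, and the tie-breaking are assumptions precisely because the report leaves them unspecified.

```python
import math

# Hypothetical per-vehicle suitability weights; Wolt's report does not
# disclose how vehicle type is weighted for different order types.
VEHICLE_WEIGHT = {"bike": 1.0, "car": 1.3}  # e.g. cars slower in city centres

def dispatch(task_location: tuple[float, float],
             couriers: dict[str, tuple[tuple[float, float], str]]) -> str:
    """Pick the courier with the lowest weighted straight-line distance.

    couriers maps courier id -> ((lat, lon), vehicle_type). A real system
    would presumably use road-network travel times, not Euclidean distance.
    """
    def cost(courier_id: str) -> float:
        (lat, lon), vehicle = couriers[courier_id]
        dist = math.dist(task_location, (lat, lon))
        return dist * VEHICLE_WEIGHT[vehicle]
    return min(couriers, key=cost)

couriers = {
    "c1": ((60.170, 24.941), "bike"),
    "c2": ((60.171, 24.940), "car"),
}
print(dispatch((60.169, 24.938), couriers))  # c1: nearer once weighted
```

Note how even this tiny model raises exactly the questions the report leaves open: with several bikes waiting at the same spot, `min` silently breaks the tie by dictionary order, and the courier has no way of knowing that.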

Towards ethical and responsible algorithmic management

Digital labour platforms are becoming increasingly complex, making it difficult for anyone to understand the underlying reasons for an algorithm’s output. This means that the ability to discover, audit, and address issues such as data quality and algorithmic bias requires accessibility, explanation, and human understanding of the inner workings of the enigmatic black box.

Diakopoulos and Koliska (2017, 811) define algorithmic transparency as the disclosure of information about algorithms to enable monitoring, checking, criticism, or intervention by interested parties. This is central to ethical AI development and implementation and to responsible algorithmic management (Rojas & Tuomi, 2022). However, who the ‘interested parties’ are remains mostly unanswered. Kemper and Kolkman (2019) ask: if transparency is a primary concern, to whom should algorithms be transparent?

The lack of algorithmic transparency and accountability generally undermines users’ trust in a platform. As potential solutions, scholars have argued for transparency-as-code-availability (Grimmelikhuijsen 2023) and for algorithms to be audited by independent auditors (Aragona 2021). Hosanagar (2020) goes even further, proposing an Algorithmic Bill of Rights to ensure users are safeguarded from the unintended consequences of AI and that transparency is guaranteed. We also see great potential in citizens’ assemblies and fora (e.g. ADAPT’s #DiscussAI Think-Ins in Ireland), made up of inclusive and diverse participants (the interested parties), deliberating on issues of AI transparency; incidentally, such participants can themselves be selected by a fair algorithm (Flanigan et al. 2021).
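Flanigan et al.’s (2021) algorithms optimise selection probabilities under demographic quotas; as a greatly simplified stand-in, the sketch below draws a quota-respecting panel by stratified random sampling. It captures only the basic idea of representative random selection, not their maximin optimisation, and all names and quotas are illustrative.

```python
import random
from collections import defaultdict

def stratified_panel(pool: list[dict], attribute: str,
                     quotas: dict[str, int], seed: int = 0) -> list[dict]:
    """Randomly draw quotas[value] volunteers per attribute value.

    A simplified illustration of sortition; Flanigan et al. (2021)
    instead optimise selection probabilities across many attributes
    simultaneously.
    """
    rng = random.Random(seed)
    by_value = defaultdict(list)
    for person in pool:
        by_value[person[attribute]].append(person)
    panel = []
    for value, k in quotas.items():
        panel.extend(rng.sample(by_value[value], k))
    return panel

# Toy volunteer pool stratified by one attribute (age group).
pool = ([{"id": i, "age_group": "18-39"} for i in range(60)] +
        [{"id": i, "age_group": "40+"} for i in range(60, 100)])
panel = stratified_panel(pool, "age_group", {"18-39": 3, "40+": 2})
print([p["id"] for p in panel])
```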

Haaga-Helia’s AlgoAmmatti – Algorithmic Management and Professional Growth in Platform Economy project seeks to understand algorithmic management practices and their impact on workers’ day-to-day experience on digital labour platforms, e.g. Yango, Wolt, or Skillshare. The aim of the service design project is to develop a worker-centric model for conceptualising algorithmic management in the context of professional growth. We seek to create new value for service companies by shedding light on the broader impacts of algorithmic management on digital labour platforms and thus help companies proactively develop their services. From a worker perspective, the goal is to facilitate and manage service work in a more human-centric and socially sustainable manner, focusing on creating balanced and fulfilling careers.

The project is funded by the Finnish Work Environment Fund between 03/2022 and 12/2023 and conducted by Haaga-Helia’s Service Experience Laboratory LAB8.

References

Aragona, B. 2021. Algorithm Audit: Why, What, and How? New York: Routledge.

Diakopoulos, N., & Koliska, M. 2017. Algorithmic transparency in the news media. Digital Journalism, 5(7), 809-828.

Finnish Institute of Occupational Health. 2023. A list of platform companies that mediate work.

Flanigan, B., Gölz, P., Gupta, A., Hennig, B., & Procaccia, A. D. 2021. Fair algorithms for selecting citizens’ assemblies. Nature, 596(7873), 548-552.

Grimmelikhuijsen, S. 2023. Explaining why the computer says no: Algorithmic transparency affects the perceived trustworthiness of automated decision-making. Public Administration Review, 83(2), 241-262.

Hosanagar, K. 2020. A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control (Reprint ed.). New York: Penguin Books.

Ihde, D. 1979. Technics and praxis: A philosophy of technology. Dordrecht: Springer.

Kellogg, K. C., Valentine, M. A., & Christin, A. 2020. Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366-410.

Kemper, J., & Kolkman, D. 2019. Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society, 22(14), 2081-2096.

Pasquale, F. 2015. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge: Harvard University Press.

Rojas, A., & Tuomi, A. 2022. Reimagining the sustainable social development of AI for the service sector: the role of startups. Journal of Ethics, Entrepreneurship and Technology, 2(1), 39-54.

von Eschenbach, W. J. 2021. Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, 34(4), 1607-1622.

Picture: www.shutterstock.com
