Document Type

Article

Publication Date

4-26-2023

Source Publication

European Labour Law Journal, 0(0). https://doi.org/10.1177/20319525231167982

Keywords

Artificial Intelligence; EU AI Act; Platform Work Directive; workplace surveillance; algorithmic management; gig-economy; labour regulation; risk regulation

Abstract

In this article, we provide an overview of efforts to regulate the various phases of the artificial intelligence (AI) life cycle. In doing so, we examine whether—and, if so, to what extent—highly fragmented legal frameworks are able to provide safeguards capable of preventing the dangers that stem from AI- and algorithm-driven organisational practices. We critically analyse related developments at the European Union (EU) level, namely the General Data Protection Regulation, the draft AI Regulation, and the proposal for a Directive on improving working conditions in platform work. We also consider bills and regulations proposed or adopted in the United States and Canada via a transatlantic comparative approach, underlining analogies and variations between EU and North American attitudes towards the risk assessment and management of AI systems. We aim to answer the following questions: Is the widely adopted risk-based approach fit for purpose? Is it consistent with the actual enforcement of fundamental rights at work, such as privacy, human dignity, equality and collective rights? To answer these questions, in section 2 we unpack the various, often ambiguous, facets of the notion(s) of ‘risk’—that is, the common denominator of the EU and North American legal instruments. Here, we determine that a scalable, decentralised framework is not appropriate for ensuring the enforcement of constitutional labour-related rights. In addition to presenting the key provisions of existing schemes in the EU and North America, in section 3 we disentangle the consistencies and tensions between the frameworks that regulate AI and those that constrain how it must be handled in specific contexts, such as work environments and platform-orchestrated arrangements. Paradoxically, the frenzied race to regulate AI-driven decision-making could exacerbate the current legal uncertainty and pave the way for regulatory arbitrage. Such a scenario would slow technological innovation and egregiously undermine labour rights. Thus, in section 4 we advocate for the adoption of a dedicated legal instrument at the supranational level to govern technologies that manage people in workplaces. Given the high stakes involved, we conclude by stressing the salience of a multi-stakeholder AI governance framework.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License
