Written by Mihalis Kritikos.
Workers’ interests should always be at the forefront of company approaches to privacy and data protection, and worker representatives must always be consulted when a new technology is considered for workplace operations and analytics. This was one of the main conclusions of the study ‘Data subjects, digital surveillance, AI and the future of work’, which was carried out by Professor Phoebe Moore of the University of Leicester at the request of the STOA Panel, following a proposal from Lina Galvez Munoz (S&D, Spain), member of the Panel. This new STOA study provides a timely, in-depth overview of the social, political and economic urgency of identifying what we call the ‘new surveillance workplace’.
A wide range of technologies is gradually being introduced to monitor, track and, ultimately, surveil workers. Workplace surveillance is age-old, but it has become easier and more common, as new technologies enable more varied, pervasive and widespread monitoring practices and have increased employers’ ability to monitor seemingly every aspect of workers’ lives. These technological innovations range from surveillance cameras and keylogging software on work laptops to biometric sensors, GPS tracking, micro-chip implants, automated video pattern recognition and biometric access control.
Digital transformation, work design experimentation and new technologies are indeed providing powerful methods with intensified potential to process personal data in the workplace. New issues are emerging concerning the ownership of data, the power dynamics of work-related surveillance, the usage of data, human resource practices and workplace pressures, in ways that cut across all socio-economic classes.
The current pandemic has expanded the use of AI-empowered real-time workplace monitoring systems and workforce analytics software that quantifies previously unmeasurable factors for team success, such as collaboration and communication, which are essential for productivity and performance. In recent months, workplace monitoring has proven stress-inducing, demotivating and dehumanising, leading to phenomena of presenteeism, a growing datafication of employment and the blurring of the boundaries between public and private spheres. Such technological practices threaten to alter workplaces in fundamental ways and to undermine trust between employers and employees.
How are institutions responding to the widespread uptake of new tracking technologies in workplaces, from the office, to the contact centre, to the factory? What are the parameters to protect the privacy and other rights of workers, given the unprecedented and ever-pervasive functions of monitoring technologies? The report evidences how and where new technologies are being implemented; looks at the impact that surveillance workplaces are having on the employment relationship and on workers themselves at the psychosocial level; and outlines the social, legal and institutional frameworks within which this is occurring, across the EU and beyond, ultimately arguing that more worker representation is necessary to protect the data rights of workers.
The study carries out a thorough analysis of automated decision-making, considering the extent to which it is admissible, the safeguard measures to be adopted, and whether data subjects have a right to individual explanations. It then considers the extent to which the General Data Protection Regulation (GDPR) provides for a preventive risk-based approach, focused on data protection by design and by default. In adopting an interdisciplinary perspective, the study identifies the major tensions between the traditional data protection principles — purpose limitation, data minimisation, special treatment of ‘sensitive data’, limitations on automated decisions — and the full deployment of the power of AI and big data. The vague and open-ended GDPR prescriptions are analysed in detail with regard to the development of AI and big data applications. The analysis sheds light on the limited guidance offered by the GDPR on how to balance competing interests, which aggravates the uncertainties associated with the novel and complex character of new and emerging AI applications. As a result of this limited guidance, controllers are expected to manage risks amidst significant uncertainty about the requirements for compliance, and under the threat of heavy sanctions.
The author makes several interesting findings, including the rapid increase in employees’ stress and anxiety, the growing accuracy of tracking and monitoring technologies, and the marginal role that the concept of consent and workers’ representatives have so far played in the relevant technological and policy debates. The study’s added value lies not only in its detailed legal analysis but also in its methodological rigour: its findings are based on a wide range of country case studies and ‘worker cameos’ drawn from semi-structured interviews carried out with a series of workers to identify where electronic performance monitoring (EPM) and tracking are occurring. The study then proposes a wide range of concrete and applicable policy options on how to ensure union/worker involvement at all stages, how to introduce and enforce co-determination in labour law in all EU Member States, how to require businesses to compile certification and codes of conduct, and how to prioritise collective governance. The study emphasises the need to guarantee worker representatives’ involvement at every stage of the life cycle of any technological tracking procedure, and for EU Member States to firmly establish co-determination rights. The author’s proposal for the full inclusion — beyond trade unions — of employer associations as partners in writing codes of conduct for data tracking and processing activities is of practical importance. The arguments and findings of the study offer both theoretical insight and practical suggestions for action that policy-makers will hopefully find stimulating and worth pursuing.