Abstract
This dissertation examines the experiences of workers when artificial intelligence (AI) technologies are implemented within organisations without formal consultation processes or collective bargaining mechanisms. Through a comprehensive synthesis of contemporary peer-reviewed literature, this study identifies consistent patterns of negative worker experiences, including heightened job insecurity, increased surveillance and technostress, erosion of trust, and diminished autonomy. The research reveals that workers frequently perceive AI rollouts conducted without their input as threatening, alienating, and fundamentally unfair, with documented consequences for psychological wellbeing, engagement, and organisational commitment. However, the analysis also demonstrates that contextual factors—including the function of AI technology, quality of training and communication, and occupational status—significantly moderate these experiences. Comparative analysis indicates that participatory approaches involving worker voice, collective agreements, and transparent communication substantially mitigate negative outcomes whilst enabling more positive, augmentative experiences. The findings contribute to theoretical understanding of technology-mediated employment relations and provide evidence-based recommendations for policy-makers, employers, and trade unions seeking to ensure ethical and sustainable AI integration within contemporary workplaces.
Introduction
The rapid integration of artificial intelligence technologies into contemporary workplaces represents one of the most significant transformations in employment relations since the Industrial Revolution. Organisations across sectors increasingly deploy AI systems for functions ranging from recruitment and performance management to task automation and workplace surveillance, fundamentally reshaping the nature of work and the employment relationship (Bankins et al., 2023). Whilst considerable academic and policy attention has focused on the macroeconomic implications of AI adoption—including aggregate effects on employment levels, productivity, and skills demand—substantially less systematic research has examined how individual workers experience these technological changes, particularly when implementation occurs without meaningful worker participation.
The question of worker voice in technological change holds profound significance for employment relations scholarship and practice. Democratic traditions in industrial relations, particularly prominent in European contexts, have long recognised worker participation as essential for legitimate and sustainable organisational change (Haipeter et al., 2024). The International Labour Organization’s decent work agenda explicitly emphasises social dialogue and collective bargaining as fundamental pillars of quality employment (International Labour Organization, 2019). Yet evidence suggests that AI implementation frequently occurs through top-down managerial decisions, with workers positioned as passive recipients rather than active participants in shaping technological transformation (Monod et al., 2024).
This tension between participatory ideals and implementation realities carries substantial practical consequences. Research increasingly documents associations between non-consultative AI rollouts and negative worker outcomes, including psychological distress, reduced engagement, and deteriorating employment relationships (Braganza et al., 2020; Zirar, Ali and Islam, 2023). These findings challenge techno-optimist narratives suggesting that AI will straightforwardly augment human capabilities and enhance job quality, instead revealing complex and often adverse dynamics when workers lack voice in technological governance.
Understanding worker experiences of AI implementation without consultation matters for multiple reasons. Academically, it advances theoretical understanding of how technological change intersects with psychological contracts, organisational trust, and worker identity. Practically, it informs organisational approaches to technology implementation that may enhance both ethical standards and operational effectiveness. From a policy perspective, it provides evidence relevant to regulatory frameworks governing AI in employment, including emerging initiatives such as the European Union’s AI Act and various national strategies for ethical AI deployment.
This dissertation addresses these concerns through systematic synthesis of contemporary research evidence, examining how workers experience AI rollouts when formal consultation and collective bargaining mechanisms are absent, identifying factors that shape variation in these experiences, and considering implications for theory, practice, and policy.
Aim and objectives
Aim
The primary aim of this dissertation is to critically examine and synthesise evidence regarding worker experiences of AI implementation in organisational contexts where formal consultation processes and collective bargaining mechanisms are absent or inadequate.
Objectives
To achieve this aim, the following specific objectives guide the research:
1. To identify and characterise the predominant patterns of worker experience reported in contexts of non-consultative AI implementation, including psychological, relational, and occupational dimensions.
2. To analyse the contextual factors that moderate worker experiences of AI rollouts, including technological, organisational, and individual-level variables.
3. To compare worker experiences in non-participatory AI implementation contexts with those in settings where consultation, collective bargaining, or other voice mechanisms exist.
4. To evaluate the theoretical implications of the evidence for understanding technology-mediated employment relations and psychological contracts.
5. To develop evidence-based recommendations for policy-makers, employers, and worker representatives regarding ethical and sustainable AI implementation practices.
Methodology
This dissertation employs a literature synthesis methodology, systematically reviewing and integrating findings from contemporary peer-reviewed research to address the stated aim and objectives. This approach is appropriate given the emerging and rapidly evolving nature of the research field, where synthesis of dispersed empirical and conceptual contributions enables identification of consistent patterns, theoretical development, and evidence-based recommendations.
Search strategy and source selection
The primary research evidence derives from a structured search of academic databases, including Scopus, Web of Science, and specialised repositories for organisational behaviour and employment relations research. Search terms combined concepts relating to artificial intelligence, machine learning, and algorithmic management with terms relating to worker experience, employee perception, consultation, collective bargaining, and worker voice. The search prioritised peer-reviewed journal articles published between 2019 and 2024, reflecting the contemporary nature of widespread organisational AI adoption and ensuring currency of findings.
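For illustration, a representative Boolean search string combining these concept groups took the following form; it is reconstructed from the term groups described above rather than reproduced verbatim from a database search log, and exact syntax varied across databases:

("artificial intelligence" OR "machine learning" OR "algorithmic management") AND ("worker experience" OR "employee perception" OR consultation OR "collective bargaining" OR "worker voice")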
Supplementary sources included policy documents and working papers from reputable international organisations, particularly the Organisation for Economic Co-operation and Development (OECD), which has conducted substantial survey-based research on AI and employment. Government publications and reports from recognised research institutions provided additional contextual material.
Inclusion and quality criteria
Sources were included if they presented primary empirical research or systematic theoretical analysis relevant to worker experiences of AI implementation, with particular attention to studies addressing contexts with limited or absent worker participation mechanisms. Quality assessment considered factors including peer-review status, methodological rigour, sample characteristics, and relevance to the research questions.
Analytical approach
The synthesis followed a thematic analysis framework, identifying recurring patterns, concepts, and relationships across sources. Evidence was organised according to themes emerging from the literature—including job insecurity, surveillance and control, trust and fairness, autonomy and skill utilisation, and social dimensions of work—whilst attending to contextual factors moderating reported experiences. Comparative analysis examined differences between participatory and non-participatory implementation contexts where evidence permitted such comparison.
Limitations
Literature synthesis methodology carries inherent limitations, including dependence on the quality and comprehensiveness of available primary research, potential publication bias toward significant findings, and challenges in generalising across diverse technological, sectoral, and national contexts. The relatively recent emergence of organisational AI adoption means that longitudinal evidence on sustained worker experiences remains limited. These limitations are acknowledged whilst recognising that synthesis provides valuable integration of current understanding to inform theory and practice.
Literature review
The changing landscape of AI in employment
Artificial intelligence technologies have progressed from specialised applications to pervasive presence across employment sectors, transforming how organisations recruit, manage, monitor, and evaluate workers (Bankins et al., 2023). Contemporary workplace AI encompasses diverse applications including automated screening and selection systems, algorithmic scheduling and task allocation, performance monitoring and evaluation tools, chatbots and virtual assistants mediating customer interactions, and decision-support systems augmenting or replacing human judgment in professional contexts (Abdullah and Fakieh, 2020; De Stefano and Taes, 2022).
This technological transformation occurs within broader trends reshaping employment relations, including the growth of platform-mediated work, intensification of performance management, and erosion of collective bargaining coverage in many national contexts (De Stefano and Taes, 2022). The convergence of these trends raises fundamental questions about worker agency, dignity, and voice in technology-mediated employment relationships.
Job insecurity and threat perception
Research consistently identifies job insecurity as a predominant concern among workers experiencing AI implementation without consultation. Workers frequently interpret AI adoption through a “threat” framing, perceiving technologies as potential replacements for human labour rather than tools for augmentation (Braganza et al., 2020). This threat perception intensifies when management communication is absent or inadequate, leaving workers to interpret technological changes through available cues and organisational signals.
Braganza et al. (2020) demonstrate that AI adoption significantly affects psychological contracts—the implicit expectations governing employment relationships—with workers perceiving that management views tasks as easily automatable or offshorable. This perception weakens relational aspects of psychological contracts whilst strengthening transactional orientations, reducing worker engagement and trust. Similarly, Zirar, Ali and Islam (2023) find that workers experiencing AI introduction without consultation report heightened anxiety about job continuity and a sense that their contributions are undervalued.
The relationship between AI adoption and job insecurity appears moderated by occupational status and task characteristics. Abdullah and Fakieh (2020) report that healthcare workers perceive varying degrees of threat depending on the specific AI applications involved and their implications for professional roles. Malik et al. (2021) find that workers in Industry 4.0 manufacturing environments experience substantial concern about automation displacing human labour, particularly where tasks are routine or easily codified.
These insecurity concerns carry documented consequences for worker wellbeing and organisational outcomes. Heightened job insecurity is associated with reduced engagement, diminished organisational commitment, and impaired psychological health (Braganza et al., 2020). Workers experiencing threat perceptions may also exhibit resistance to technological change, potentially undermining implementation effectiveness and organisational performance.
Surveillance, control, and technostress
A second major theme concerns the use of AI technologies for worker surveillance and managerial control. Workers across sectors describe AI systems as mechanisms for intensified monitoring, performance scoring, and behavioural control, often operating as opaque “black boxes” whose functioning remains unclear to those subject to their assessments (Zirar, Ali and Islam, 2023; Corvite et al., 2023).
Monod et al. (2024) provide detailed analysis of how AI tools initially intended to empower workers can devolve into mechanisms for managerial control. Their research documents a trajectory from worker empowerment—where AI systems were designed to provide useful feedback and support—to managerial control, where the same systems became instruments for surveillance, discipline, and performance pressure. This devolution reflects broader power asymmetries in technology governance, where management retains authority over system design, data access, and consequential use of algorithmic outputs.
The surveillance dimensions of workplace AI generate substantial psychological consequences. Workers report technostress—technology-induced strain characterised by anxiety, overload, and feelings of inadequacy—particularly where monitoring is continuous, criteria are unclear, and consequences are significant (Malik et al., 2021). Privacy concerns feature prominently, with workers expressing discomfort at the scope and granularity of data collection enabled by AI systems (Corvite et al., 2023).
Corvite et al. (2023) examine particularly concerning developments in emotion AI—technologies claiming to detect worker emotional states through analysis of facial expressions, voice patterns, or physiological signals. Workers subject to such systems express profound unease at the intimate nature of monitoring and scepticism about the validity of inferences drawn. The deployment of emotion AI without worker consultation exemplifies how technological capabilities can outpace ethical deliberation and participatory governance.
The control functions of AI also extend to scheduling, task allocation, and work intensification. De Stefano and Taes (2022) analyse algorithmic management practices, whereby AI systems determine work schedules, monitor task completion, and apply performance sanctions with minimal human intermediation. Workers subject to algorithmic management report feeling reduced to data points, with limited scope for negotiation or appeal when algorithmic decisions prove unfair or erroneous.
Trust, transparency, and perceived fairness
Closely related to surveillance concerns are questions of trust, transparency, and fairness in AI-mediated employment decisions. Research consistently finds that workers express distrust toward AI systems and the organisations deploying them when implementation occurs without consultation or adequate explanation (Bankins et al., 2023; Tong et al., 2021).
Transparency emerges as a critical factor shaping worker perceptions. Workers report heightened anxiety and resistance when AI systems operate opaquely, with unclear criteria for decisions affecting employment outcomes (Corvite et al., 2023). Zhao and Jakkampudi (2023) find that workers particularly fear biased or unfair use of AI in high-stakes decisions including hiring, performance evaluation, and termination. Where workers cannot understand or contest algorithmic determinations, perceptions of procedural and distributive justice deteriorate.
Tong et al. (2021) examine the “Janus face” of AI feedback systems, demonstrating that worker responses depend critically on whether AI involvement is disclosed. Their research reveals complex dynamics whereby covert AI feedback may enhance performance through reduced social comparison, whilst disclosed AI feedback generates resistance and distrust. These findings highlight the relational dimensions of AI implementation, where trust depends not only on system accuracy but on organisational honesty about technological deployment.
The absence of consultation appears particularly damaging for trust because it signals organisational disregard for worker perspectives and interests. Kelley (2022) finds that worker perceptions of AI adoption depend substantially on whether organisations demonstrate commitment to ethical principles, including fairness, transparency, and human oversight. Where workers perceive that AI implementation prioritises cost reduction over ethical considerations, trust in both technology and management erodes.
Autonomy, skill, and job quality
AI implementation without consultation is frequently associated with reduced worker autonomy and narrowed opportunities for skill utilisation. Research documents how AI systems can transform work roles, concentrating meaningful tasks within automated systems whilst relegating human workers to data production, compliance verification, and exception handling (Braganza et al., 2020; Bankins et al., 2023).
Bankins et al. (2023) provide multilevel analysis of AI implications for organisational behaviour, identifying significant effects on worker autonomy, skill utilisation, and job meaning. They find that AI can diminish opportunities for workers to exercise judgment, creativity, and interpersonal skills, reducing work to routinised interactions with technological systems. This deskilling trajectory carries implications for worker motivation, occupational identity, and long-term employability.
Braganza et al. (2020) introduce the concept of “alienational” psychological contracts, whereby AI adoption leads workers to feel interchangeable, disconnected from organisational purposes, and reduced to functional inputs rather than valued contributors. This alienation reflects Marxist analyses of technological change under capitalist employment relations, where productivity gains accrue to capital whilst labour experiences degradation.
Bell (2023) specifically examines AI effects on job quality for frontline workers, documenting reductions in autonomy, task variety, and skill utilisation as AI systems assume greater control over work processes. Frontline and lower-status occupations appear particularly vulnerable to these degradations, reflecting broader patterns whereby technological risks concentrate among workers with limited labour market power.
Alienation and the social fabric of work
Beyond instrumental concerns about job security and autonomy, research reveals AI effects on the social dimensions of work. Tang et al. (2023) examine consequences of interacting with AI systems rather than human colleagues, finding associations with loneliness and deterioration in work’s social fabric. Their research documents spillover effects whereby workplace AI interaction predicts poorer wellbeing after work, including insomnia and increased alcohol consumption.
Selenko et al. (2022) develop a functional-identity perspective on AI and work, analysing how AI systems affect the identity functions that work provides—including social integration, purpose, and self-esteem. They find that AI can disrupt these functions when it replaces or mediates interpersonal interactions central to occupational identity. The “asocial system” created by AI mediation undermines the relational aspects of work that contribute to worker wellbeing.
These social effects appear particularly pronounced where AI implementation occurs without consultation, as workers lack opportunities to shape technology in ways that preserve valued aspects of work. The imposition of AI systems that reduce human interaction compounds instrumental concerns about job security with relational losses affecting work meaning and social connection.
Contextual factors moderating worker experiences
Whilst the literature documents predominantly negative experiences of non-consultative AI implementation, research also reveals substantial variation across contexts. Several factors moderate worker experiences, shaping whether AI adoption proves harmful or potentially beneficial.
The function of AI technology—whether augmenting human capabilities or replacing human tasks—significantly affects worker responses (Selenko et al., 2022). Augmentative AI that enhances worker effectiveness whilst preserving human agency generates less resistance than substitutive AI that displaces workers from meaningful roles. However, this distinction may prove unstable, as technologies initially framed as augmentative can evolve toward substitution through subsequent implementation decisions (Monod et al., 2024).
Training and communication quality emerge as critical moderators of worker experience. Kelley (2022) finds that effective communication about AI purposes, functioning, and safeguards substantially improves worker acceptance and reduces anxiety. The OECD (2023) reports that workers receiving adequate training express more positive attitudes toward AI than those lacking preparation. Conversely, Chiu, Zhu and Corbett (2021) demonstrate that inadequate information about AI systems intensifies threat appraisals and resistance. Where consultation is absent, workers lack channels for obtaining information and addressing concerns, amplifying negative experiences.
Sector and occupational status shape AI experiences through differential exposure to automation risk and surveillance intensity. The OECD (2023) documents variation across industries in AI adoption patterns and worker perceptions. Corvite et al. (2023) find that frontline workers face disproportionate surveillance and algorithmic control compared to professional and managerial employees. Malik et al. (2021) identify manufacturing workers as particularly vulnerable to automation-related insecurity, whilst Abdullah and Fakieh (2020) examine healthcare-specific dynamics including professional identity concerns.
Comparative evidence: participatory versus non-participatory implementation
Research comparing worker experiences across implementation approaches provides compelling evidence for the value of consultation and collective bargaining. OECD (2023) surveys spanning multiple countries find that workers in organisations with consultation mechanisms report more positive perceptions of AI effects on working conditions, wages, and productivity. Conversely, workers in non-consultative contexts express greater anxiety, resistance, and a sense of imposed change.
Zhao and Jakkampudi (2023) analyse policy measures safeguarding workers from AI, finding that regulatory frameworks incorporating worker voice and consultation requirements are associated with more positive outcomes. Their research highlights the role of institutional structures in shaping whether AI implementation benefits or harms workers.
Haipeter et al. (2024) examine the potential for “human-centred AI” achieved through employee participation. Their research documents how collective agreements and workplace negotiation can establish safeguards including transparency requirements, limits on surveillance, training entitlements, and human oversight of consequential decisions. These participatory mechanisms enable workers to shape AI implementation in ways that preserve dignity and distribute technological benefits more equitably.
De Stefano and Taes (2022) provide detailed analysis of collective bargaining responses to algorithmic management, identifying emerging agreements addressing AI-specific concerns including data access, algorithmic transparency, and appeal mechanisms. They argue that collective bargaining remains essential for addressing power asymmetries in technology governance, particularly as individual workers lack capacity to negotiate effectively with organisations possessing superior information and resources.
The comparative evidence suggests that negative worker experiences of AI implementation are not technologically determined but rather reflect governance choices amenable to institutional intervention. Participatory structures, transparent communication, and negotiated safeguards substantially moderate harms and can enable more positive, augmentative experiences.
Discussion
Synthesis of findings
The evidence reviewed presents a consistent and concerning picture of worker experiences when AI is implemented without formal consultation or collective bargaining. Across diverse sectors, occupational categories, and national contexts, workers commonly report heightened job insecurity, increased surveillance and technostress, erosion of trust and perceived fairness, reduced autonomy and skill utilisation, and deterioration in the social dimensions of work. These findings converge despite methodological diversity across studies, lending confidence to the robustness of identified patterns.
The consistency of negative experiences reflects underlying mechanisms whereby non-consultative AI implementation violates fundamental worker interests and expectations. Psychological contract theory provides explanatory purchase: workers interpret AI implementation as a managerial signal of their replaceability and disposability, weakening relational employment bonds whilst intensifying transactional orientations (Braganza et al., 2020). The absence of voice amplifies these effects by denying workers the opportunity to influence implementation in ways that might preserve valued aspects of their employment relationships.
Procedural justice perspectives similarly illuminate the dynamics observed. Workers evaluate not only outcomes but processes through which decisions are made; exclusion from AI governance violates procedural justice expectations, generating resistance and distrust regardless of instrumental consequences (Tong et al., 2021). The opacity of many AI systems compounds these procedural concerns, as workers cannot evaluate fairness when criteria remain hidden.
Theoretical implications
The findings carry significant implications for theoretical understanding of technology-mediated employment relations. First, they challenge techno-determinist assumptions that AI effects follow straightforwardly from technological characteristics. Worker experiences depend critically on governance choices—including consultation, communication, and safeguards—that shape how technologies are implemented and experienced. This social construction of technological impacts aligns with broader scholarship in science and technology studies whilst grounding these insights in employment relations contexts.
Second, the evidence extends psychological contract theory to technology-mediated contexts. Traditional psychological contract scholarship has focused on human-to-human relationships, examining how managers and workers negotiate implicit expectations. AI implementation introduces technological mediators that can disrupt or reconfigure these relationships, with implications for conceptualising psychological contracts in algorithmically-managed workplaces.
Third, the findings contribute to debates about job quality and decent work in technologically transformed employment. The International Labour Organization’s decent work framework emphasises security, equity, voice, and dignity as essential dimensions of employment quality (International Labour Organization, 2019). Non-consultative AI implementation threatens each of these dimensions, suggesting that technological transformation requires explicit attention to decent work standards if quality employment is to be preserved.
Practical implications
The evidence carries substantial practical implications for organisations implementing AI technologies. The documented negative consequences—including reduced engagement, deteriorating trust, and resistance to change—undermine both worker wellbeing and organisational effectiveness. Organisations pursuing AI implementation without consultation may achieve short-term cost reductions whilst generating longer-term costs through diminished commitment, increased turnover, and implementation failures.
The comparative evidence suggests that participatory approaches need not impede AI adoption and may in fact enhance implementation success. Consultation can identify worker concerns amenable to design modifications, build understanding that reduces resistance, and establish legitimacy that supports sustainable change. The apparent trade-off between efficiency and participation may therefore be false; effective consultation may prove efficiency-enhancing over relevant timeframes.
Specific practical recommendations emerging from the evidence include:
1. Establishing meaningful consultation mechanisms before AI implementation.
2. Providing comprehensive training and transparent communication about AI systems.
3. Maintaining human oversight of consequential algorithmic decisions.
4. Creating accessible appeal processes for workers affected by AI determinations.
5. Negotiating safeguards through collective bargaining where unions are present.
6. Monitoring AI effects on job quality, with commitment to corrective action.
Policy implications
At the policy level, the findings support regulatory intervention to ensure worker voice in AI implementation. The European Union’s AI Act establishes risk-based regulation of AI systems, with requirements for transparency and human oversight in high-risk applications including employment (European Commission, 2024). The evidence suggests that such regulatory frameworks should extend to encompass consultation requirements, ensuring workers have meaningful input into AI systems affecting their employment.
The documented harms of non-consultative AI implementation also support strengthened collective bargaining rights and structures. Where union coverage has declined, workers lack institutional capacity to negotiate AI safeguards; policy measures supporting unionisation and collective bargaining may therefore prove essential for equitable technology governance. The emerging agreements documented by De Stefano and Taes (2022) and Haipeter et al. (2024) provide templates for AI-specific provisions that collective bargaining might address.
Beyond specific AI regulation, the findings reinforce arguments for comprehensive labour market institutions that preserve worker voice in technological change. Historical experience suggests that technological transformations distribute benefits more equitably when workers possess countervailing power to shape implementation; AI appears no exception to this pattern.
Limitations and future research
Several limitations qualify interpretation of the findings and suggest directions for future research. The literature synthesised derives primarily from cross-sectional studies capturing worker perceptions at specific implementation moments; longitudinal research examining how experiences evolve over time would provide valuable insight into adaptation, habituation, or persistent harm. The concentration of research in developed economies limits generalisability to contexts with different institutional configurations; comparative research spanning institutional varieties would illuminate how context shapes AI experiences.
Methodologically, greater integration of qualitative research illuminating worker sense-making with quantitative research measuring outcomes across populations would strengthen understanding. Research designs permitting causal inference about consultation effects—potentially including natural experiments where policy changes alter consultation requirements—would address limitations of correlational evidence.
Substantively, future research should examine emerging AI applications including generative AI tools that may transform knowledge work in ways distinct from earlier algorithmic management. The differential effects across occupational categories warrant further investigation, as do intersections between AI experiences and worker characteristics including age, education, and prior technological exposure.
Conclusions
This dissertation has examined worker experiences of AI implementation in the absence of formal consultation and collective bargaining, synthesising contemporary research evidence to address stated aims and objectives. The findings demonstrate that workers commonly experience non-consultative AI rollouts negatively, reporting heightened job insecurity, increased surveillance and technostress, erosion of trust, reduced autonomy, and deterioration in work’s social dimensions. These experiences carry documented consequences for psychological wellbeing, organisational engagement, and employment relationship quality.
Regarding the first objective—identifying predominant patterns of worker experience—the evidence reveals consistent themes across diverse contexts, lending confidence that non-consultative AI implementation systematically generates adverse worker experiences rather than context-specific difficulties. The second objective—analysing moderating factors—has been addressed through examination of how AI function, training and communication quality, and occupational status shape variation around these predominant patterns. The third objective—comparative analysis—has been achieved through review of evidence contrasting participatory and non-participatory implementation, demonstrating that consultation and collective bargaining substantially moderate negative outcomes. The fourth objective—theoretical implications—has been addressed through discussion of how findings extend psychological contract theory, procedural justice perspectives, and decent work frameworks to technology-mediated contexts. The fifth objective—evidence-based recommendations—has been developed through articulation of practical and policy implications for organisations, regulators, and worker representatives.
The overall contribution demonstrates that worker experiences of AI implementation are not technologically determined but rather reflect governance choices that can be influenced through institutional design. This conclusion carries significance for academic understanding, suggesting that employment relations scholarship must engage seriously with technology governance as a terrain of contestation with consequences for job quality and worker welfare. Practically, it implies that organisations bear responsibility for implementation approaches that preserve worker dignity and enable participatory governance. At the policy level, it supports regulatory frameworks ensuring worker voice in AI deployment and institutional structures sustaining collective bargaining capacity.
The contemporary moment presents critical choices about how AI technologies will reshape employment. The evidence synthesised here suggests that choices favouring worker consultation and participation can enable technological transformation that enhances rather than degrades job quality. Conversely, continuation of top-down implementation approaches risks generating widespread worker harm whilst potentially undermining implementation effectiveness. The future of AI in employment will be determined not by technological imperatives but by governance choices that societies, organisations, and workers make collectively.
References
Abdullah, R. and Fakieh, B., 2020. Health care employees’ perceptions of the use of artificial intelligence applications: Survey study. *Journal of Medical Internet Research*, 22(5), e17620. https://doi.org/10.2196/17620
Bankins, S., Ocampo, A., Marrone, M., Restubog, S. and Woo, S., 2023. A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. *Journal of Organizational Behavior*, 44(2), pp.159-182. https://doi.org/10.1002/job.2735
Bell, S., 2023. AI and job quality: Insights from frontline workers. *SSRN Electronic Journal*. https://doi.org/10.2139/ssrn.4337611
Braganza, A., Chen, W., Canhoto, A. and Sap, S., 2020. Productive employment and decent work: The impact of AI adoption on psychological contracts, job engagement and employee trust. *Journal of Business Research*, 131, pp.485-494. https://doi.org/10.1016/j.jbusres.2020.08.018
Chiu, Y., Zhu, Y. and Corbett, J., 2021. In the hearts and minds of employees: A model of pre-adoptive appraisal toward artificial intelligence in organizations. *International Journal of Information Management*, 60, 102379. https://doi.org/10.1016/j.ijinfomgt.2021.102379
Corvite, S., Roemmich, K., Rosenberg, T. and Andalibi, N., 2023. Data subjects’ perspectives on emotion artificial intelligence use in the workplace: A relational ethics lens. *Proceedings of the ACM on Human-Computer Interaction*, 7(CSCW1), pp.1-38. https://doi.org/10.1145/3579600
De Stefano, V. and Taes, S., 2022. Algorithmic management and collective bargaining. *Transfer: European Review of Labour and Research*, 29(1), pp.21-36. https://doi.org/10.1177/10242589221141055
European Commission, 2024. *Artificial Intelligence Act*. Brussels: European Commission.
Haipeter, T., Wannöffel, M., Daus, J. and Schaffarczik, S., 2024. Human-centered AI through employee participation. *Frontiers in Artificial Intelligence*, 7, 1272102. https://doi.org/10.3389/frai.2024.1272102
International Labour Organization, 2019. *Work for a brighter future: Global Commission on the Future of Work*. Geneva: International Labour Office.
Kelley, S., 2022. Employee perceptions of the effective adoption of AI principles. *Journal of Business Ethics*, 178(4), pp.871-893. https://doi.org/10.1007/s10551-022-05051-y
Malik, N., Tripathi, S., Kar, A. and Gupta, S., 2021. Impact of artificial intelligence on employees working in industry 4.0 led organizations. *International Journal of Manpower*, 43(2), pp.334-354. https://doi.org/10.1108/ijm-03-2021-0173
Monod, E., Mayer, A., Straub, D., Joyce, E. and Qi, J., 2024. From worker empowerment to managerial control: The devolution of AI tools’ intended positive implementation to their negative consequences. *Information and Organization*, 34(1), 100498. https://doi.org/10.1016/j.infoandorg.2023.100498
OECD, 2023. *The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers*. OECD Social, Employment and Migration Working Papers. Paris: OECD Publishing. https://doi.org/10.1787/ea0a0fe1-en
Selenko, E., Bankins, S., Shoss, M., Warburton, J. and Restubog, S., 2022. Artificial intelligence and the future of work: A functional-identity perspective. *Current Directions in Psychological Science*, 31(3), pp.272-279. https://doi.org/10.1177/09637214221091823
Tang, P., Koopman, J., Mai, K., De Cremer, D., Zhang, J., Reynders, P., Ng, C. and Chen, I., 2023. No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. *Journal of Applied Psychology*, 108(6), pp.906-921. https://doi.org/10.1037/apl0001103
Tong, S., Jia, N., Luo, X. and Fang, Z., 2021. The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. *Strategic Management Journal*, 42(9), pp.1600-1631. https://doi.org/10.1002/smj.3322
Zhao, Y. and Jakkampudi, K., 2023. Assessing policy measures safeguarding workers from artificial intelligence in the United States. *Journal of Computer and Communications*, 11(11), pp.118-132. https://doi.org/10.4236/jcc.2023.1111008
Zirar, A., Ali, S. and Islam, N., 2023. Worker and workplace artificial intelligence (AI) coexistence: Emerging themes and research agenda. *Technovation*, 124, 102747. https://doi.org/10.1016/j.technovation.2023.102747