
Who gets heard when labour market shocks hit?


Oliver Hartley

Abstract

This dissertation examines differential policymaker responsiveness to three key stakeholder groups—labour unions, startups, and Big Tech corporations—during artificial intelligence-driven labour market disruptions. Through systematic literature synthesis of recent empirical and theoretical research, the study investigates how resource asymmetries, institutional contexts, and issue framing shape whose voices influence policy outcomes. Findings demonstrate that Big Tech companies wield disproportionate influence over AI workforce policy through superior financial resources, lobbying infrastructure, and strategic control over digital platforms. Labour unions retain significant influence primarily in countries with institutionalised social partnership traditions, though declining membership constrains their broader effectiveness. Startups contribute valuable perspectives on emerging risks and opportunities but lack sustained policy influence unless aligned with dominant innovation narratives. The research reveals that policymaker responsiveness is contingent upon issue framing, with economic competitiveness arguments receiving greater attention than ethical or social justice concerns. These findings carry significant implications for democratic governance during technological transitions, suggesting that deliberate institutional reforms are necessary to achieve balanced stakeholder representation in AI labour policy development.

Introduction

The rapid proliferation of artificial intelligence technologies represents one of the most significant structural transformations facing contemporary labour markets. As machine learning algorithms, large language models, and autonomous systems increasingly perform tasks previously requiring human cognition, fundamental questions arise regarding employment security, skill requirements, and the distribution of economic benefits from technological progress. These developments have precipitated urgent policy debates across developed and developing economies, with governments seeking to balance innovation promotion against worker protection in an environment of profound uncertainty.

The stakes of these policy decisions extend far beyond abstract economic efficiency calculations. Labour market disruptions carry profound consequences for individual livelihoods, community stability, and social cohesion. Historical precedents from previous industrial revolutions demonstrate that technological transitions can either exacerbate inequality and social fragmentation or, when carefully managed, generate broadly shared prosperity. The policy choices made during the current AI transition will substantially determine which trajectory prevails.

Within this context, multiple stakeholder groups compete to influence policymaker attention and subsequent regulatory outcomes. Labour unions historically served as primary advocates for worker interests during technological change, yet face contemporary challenges including declining membership density and fragmented representation across increasingly diverse employment arrangements. Simultaneously, technology corporations—particularly the dominant platforms characterised as ‘Big Tech’—have accumulated unprecedented economic and political resources, enabling sophisticated lobbying operations and strategic agenda-setting through public discourse control. Between these established actors, innovative startups contribute perspectives on emerging opportunities and risks, though their influence remains constrained by limited organisational capacity.

Understanding whose voices policymakers hear—and respond to—during AI-driven labour market shocks carries fundamental importance for democratic governance. If certain stakeholders enjoy systematic advantages in policy influence, resulting regulations may reflect narrow interests rather than broader public welfare. This concern acquires particular salience given mounting evidence of regulatory capture across technology policy domains and growing concentration of economic power among leading AI developers.

This dissertation addresses these concerns through systematic examination of recent research on stakeholder influence in AI labour policy. The analysis contributes to scholarly understanding of interest group politics under conditions of rapid technological change while generating practical insights for policymakers, advocacy organisations, and researchers seeking to improve governance quality during this critical transition period.

Aim and objectives

The primary aim of this dissertation is to critically analyse and compare policymaker responsiveness to labour unions, startups, and Big Tech corporations regarding artificial intelligence and employment policy, identifying factors that shape differential influence during labour market disruptions.

To achieve this aim, the following specific objectives guide the research:

1. To synthesise existing empirical evidence regarding Big Tech’s mechanisms of policy influence in AI governance, including direct lobbying, agenda-setting, and public-private partnerships.

2. To examine the conditions under which labour unions effectively shape AI-related workforce policies, with particular attention to institutional context and social partnership traditions.

3. To evaluate the role and limitations of startups in influencing policy debates around AI and employment.

4. To analyse how issue framing and institutional design affect policymaker responsiveness to different stakeholder groups.

5. To identify research gaps and propose directions for future investigation into balanced stakeholder representation in technology governance.

Methodology

This dissertation employs a systematic literature synthesis methodology to examine policymaker responsiveness to different stakeholder groups during AI-driven labour market disruptions. Literature synthesis represents an appropriate methodological approach for integrating findings across multiple studies addressing related research questions, enabling identification of patterns, contradictions, and gaps in existing knowledge.

The search strategy encompassed comprehensive queries across major academic databases, including Semantic Scholar and PubMed, targeting literature addressing policymaker responsiveness, stakeholder influence, and AI labour market impacts. Eight distinct search strategies captured foundational debates, stakeholder-specific roles, power asymmetries, interdisciplinary perspectives, and adjacent topics including algorithmic management. The initial search identified 1,047 potentially relevant papers, which underwent systematic screening for relevance and methodological quality.

Following initial screening, 628 papers received detailed evaluation against eligibility criteria emphasising direct relevance to stakeholder influence mechanisms during AI-driven employment disruptions. From this pool, 447 papers met eligibility requirements, with final analysis incorporating the 50 most relevant and methodologically rigorous contributions. This selection process prioritised peer-reviewed journal articles, working papers from established research institutions, and governmental policy analyses.

Quality assessment considered multiple dimensions including methodological rigour, empirical grounding, theoretical coherence, and relevance to the research objectives. Studies employing diverse methodological approaches—including quantitative analyses, comparative case studies, discourse analyses, and theoretical frameworks—were incorporated to provide comprehensive understanding of the research domain.

The synthesis process involved thematic coding of findings according to stakeholder category, influence mechanisms, institutional context, and policy outcomes. Evidence strength assessments considered sample sizes, analytical approaches, convergence across studies, and potential sources of bias. Particular attention was directed toward identifying areas of scholarly consensus and disagreement, enabling nuanced conclusions regarding the current state of knowledge.

Literature review

Big Tech’s mechanisms of policy influence

Contemporary research consistently identifies Big Tech corporations as dominant actors shaping artificial intelligence policy, including regulations affecting labour markets. This influence derives from multiple reinforcing mechanisms that collectively create substantial asymmetries in stakeholder access to policymakers.

Financial resources constitute a foundational source of Big Tech influence. Leading technology corporations maintain extensive lobbying operations, employing former government officials and policy experts who possess established relationships with current decision-makers. Research examining the policy process surrounding generative AI demonstrates that these lobbying investments translate directly into preferential access, with corporate representatives regularly participating in consultations, advisory committees, and informal discussions that shape regulatory frameworks (Khanal, Zhang and Taeihagh, 2024). This access advantage proves particularly consequential during rapidly evolving policy debates where early positioning substantially influences subsequent trajectories.

Beyond direct lobbying, Big Tech companies exercise influence through agenda-setting and discourse control. As owners of major communication platforms and significant investors in AI research, these corporations shape public understanding of technological capabilities and appropriate policy responses. Studies document how corporate framing emphasises innovation benefits and national competitiveness while downplaying potential harms to workers or broader society (Zhang, Khanal and Taeihagh, 2024). This discursive power extends through extensive media engagement, sponsored research, and partnerships with academic institutions that collectively establish parameters for acceptable policy debate.

Public-private partnerships represent another mechanism through which Big Tech shapes policy outcomes. Governments increasingly depend upon technology corporations for technical expertise, infrastructure development, and implementation capacity in AI-related initiatives. This dependency creates opportunities for corporate influence over programme design and evaluation criteria. Analysis of AI governance arrangements reveals that such partnerships frequently prioritise corporate interests, particularly in contexts where governmental technical capacity remains limited (Iazzolino and Stremlau, 2024).

Research addressing the global AI development landscape identifies concerns regarding privatisation of governance functions, whereby corporate standards and practices effectively substitute for public regulation. This dynamic proves especially pronounced in emerging technology domains where governmental expertise lags behind industry knowledge, creating informational asymmetries that advantage corporate positions in policy negotiations (Brandusescu, 2025). The resulting governance arrangements may inadequately address worker concerns given corporations’ primary accountability to shareholders rather than broader stakeholders.

Labour unions and institutional context

Labour unions historically served as primary organisational vehicles for worker voice in policy processes, yet contemporary research reveals substantial variation in union effectiveness regarding AI-related employment policies. This variation correlates strongly with institutional context, particularly the presence or absence of formalised social partnership arrangements.

In countries characterised by corporatist or social partnership traditions—notably Germany and Scandinavian nations—unions retain significant capacity to shape AI workforce policies. These institutional arrangements embed union participation in policy development through tripartite structures involving government, employer organisations, and labour representatives. Research examining German approaches to workplace AI implementation demonstrates that union involvement has influenced requirements for algorithmic transparency, worker consultation, and human-centred design principles (Krzywdzinski, Gerst and Butollo, 2022). These outcomes reflect union capacity to leverage institutionalised access for substantive policy influence.

However, research comparing labour union influence across national contexts identifies substantial challenges facing organised labour in pluralist systems lacking formalised corporatist structures. Declining union membership density reduces organisational resources available for policy engagement while diminishing unions’ claims to representativeness. Analysis of union futures amid automation and AI development suggests that unless unions successfully adapt recruitment and organising strategies, their influence over technology policy will continue eroding (Nissim and Simon, 2021).

Resource constraints significantly limit union capacity for sustained policy engagement on AI issues. Effective participation requires technical expertise to evaluate algorithmic systems, legal knowledge to propose regulatory mechanisms, and communications capacity to compete with corporate messaging. Studies examining worker surveillance and productivity scoring tools identify that unions often lack specialised knowledge necessary to effectively critique or propose alternatives to employer-implemented AI systems (Hickok and Maslej, 2023). This expertise gap disadvantages union voices relative to technology corporations that command extensive technical resources.

Research comparing robotics and AI policy development in Norway and the United Kingdom illustrates how institutional differences shape outcomes. Norwegian unions, operating within established social partnership frameworks, achieved greater influence over workforce transition policies than British counterparts functioning in a more deregulated environment (Lloyd and Payne, 2019). These findings suggest that union effectiveness depends substantially upon broader institutional contexts rather than solely organisational strategies.

Startups as policy actors

Startups occupy an ambiguous position in AI labour policy debates, simultaneously serving as innovation drivers and potential voices for alternative regulatory approaches. Research examining startup perspectives reveals distinctive concerns regarding regulatory frameworks that differ from both incumbent technology corporations and labour organisations.

As disruptive market entrants, startups frequently highlight emerging opportunities and risks overlooked by established actors. Their proximity to technological frontiers positions them to identify novel applications and potential harms before these become apparent to policymakers or larger organisations. Studies developing startup-based measures of AI exposure across occupations demonstrate how entrepreneurial activity provides early indicators of technological diffusion patterns (Fenoaltea et al., 2024). This anticipatory knowledge could valuably inform policy development if effectively channelled into governance processes.

However, research consistently identifies substantial constraints on startup policy influence. Limited organisational scale restricts resources available for lobbying, policy research, or sustained engagement with regulatory processes. Unlike established technology corporations maintaining dedicated government relations teams, startups typically lack capacity for systematic policy monitoring and participation. Analysis suggests that startup perspectives reach policymakers primarily through association memberships, coalition participation, or alignment with broader innovation narratives rather than direct influence (Hazra, Majumder and Chakrabarty, 2025).

The interests of startups in AI policy debates prove complex and context-dependent. While some entrepreneurs may share corporate preferences for minimal regulation enabling rapid market entry, others recognise that appropriate governance frameworks could address market failures, establish trust, and create competitive advantages relative to larger incumbents. This heterogeneity complicates characterisation of ‘startup interests’ as a unified policy position.

Research examining AI safety priorities argues that greater attention to workforce implications would benefit from startup engagement, given entrepreneurial actors’ detailed understanding of technological capabilities and deployment contexts. However, realising this potential requires deliberate mechanisms for incorporating startup perspectives into policy processes that currently privilege actors with greater organisational resources (Hazra, Majumder and Chakrabarty, 2025).

Framing effects and policymaker responsiveness

Beyond stakeholder resources and institutional access, research demonstrates that issue framing substantially shapes policymaker responsiveness to different actors during AI-driven labour market disruptions. How problems are defined and solutions characterised influences which stakeholders’ perspectives receive attention and credibility.

Studies examining AI policy discourse identify persistent dominance of economic competitiveness frames emphasising innovation leadership, productivity enhancement, and international competition. Within these framings, technology corporations and entrepreneurs appear as authoritative voices given their direct involvement in AI development and commercialisation. Worker protection concerns may be acknowledged but typically receive subordinate priority to maintaining favourable innovation environments (Ulnicane et al., 2020).

Research analysing framing contestation in US AI policy discourse finds that public engagement can shift policymaker priorities when economic concerns predominate but proves less effective at elevating ethical or social justice considerations. This asymmetry disadvantages stakeholders—including unions and civil society organisations—whose primary concerns involve distributional fairness, dignity, and workplace rights rather than aggregate economic performance (Schiff, 2024).

Institutional design influences which frames predominate in policy deliberation. Analysis of German AI policy discourse demonstrates how issue definition processes shaped subsequent governance approaches, with early framing choices constraining later options (Lemke, Trein and Varone, 2024). Stakeholders capable of influencing initial problem definitions thereby gain advantages that persist throughout policy development cycles.

Research examining public sector AI decision-making reveals how networks of power relations shape agency choices around system design and deployment. Policymakers operate within institutional contexts that privilege certain knowledge sources and stakeholder relationships while marginalising others. Understanding these relational dynamics proves essential for explaining patterns of responsiveness that cannot be reduced to simple resource differentials (Kawakami et al., 2024).

Regulatory capture and governance risks

The research literature identifies regulatory capture—whereby regulated entities come to dominate ostensibly public oversight mechanisms—as a persistent risk in AI governance. This concern proves especially salient given technology corporations’ substantial resources and governments’ dependence upon industry cooperation for implementation capacity.

Analysis of US AI policy and industry self-regulation raises concerns regarding inadequate public accountability when governance functions effectively transfer to private actors. Without robust external oversight, self-regulatory frameworks may prioritise corporate interests over worker welfare or broader societal concerns (Wallace, 2024). The technical complexity of AI systems exacerbates these risks by creating knowledge asymmetries that advantage industry participants in regulatory negotiations.

Studies examining state-level AI legislation in the United States reveal substantial industry influence over regulatory content, with corporate lobbying correlating with weaker worker protections and more permissive deployment standards (Parinandi et al., 2024). These findings suggest that regulatory capture represents not merely a theoretical concern but an observable pattern across multiple jurisdictions.

Research on AI applications in digital welfare states demonstrates how technological solutions can embed assumptions and priorities that evade democratic scrutiny. When AI systems mediate access to employment services or social benefits, governance choices with substantial distributional consequences may occur through technical design rather than transparent political deliberation (Zenkl, 2025). This dynamic creates opportunities for stakeholders with technical expertise—predominantly technology corporations—to shape outcomes through system architecture rather than explicit policy advocacy.

Discussion

The synthesised evidence reveals consistent patterns of differential policymaker responsiveness to stakeholder groups during AI-driven labour market disruptions, with substantial implications for democratic governance and policy quality during technological transitions.

Resource asymmetries and influence concentration

The research findings strongly support the conclusion that Big Tech corporations wield disproportionate influence over AI labour policy relative to unions, startups, or civil society organisations. This influence concentration reflects cumulative advantages across multiple domains: financial resources enabling sustained lobbying and expertise acquisition; platform control providing agenda-setting capacity; technical knowledge creating informational advantages in policy negotiations; and economic significance generating governmental dependence on corporate cooperation.

These findings align with broader theoretical perspectives on interest group politics, which predict that concentrated interests with substantial per-capita stakes will invest more heavily in policy influence than diffuse interests facing collective action barriers. Workers affected by AI-driven displacement, though numerous, face coordination challenges that unions only partially overcome. Technology corporations, by contrast, possess organisational structures facilitating coherent strategic action on policy issues directly affecting their core business interests.

The evidence suggests that current governance arrangements inadequately address these structural asymmetries. Without deliberate countermeasures, policy outcomes will systematically favour actors with greater resources and organisational capacity, potentially undermining democratic legitimacy and public welfare during critical transition periods.

Institutional variation and union effectiveness

The research demonstrates substantial cross-national variation in union influence over AI workforce policies, with institutional context proving a crucial explanatory factor. Where formalised social partnership arrangements embed union participation in policy development, organised labour achieves meaningful influence over regulatory approaches to workplace AI implementation. Absent such institutional supports, unions struggle to counterbalance corporate interests regardless of strategic sophistication.

These findings carry important implications for understanding the conditions under which worker interests receive adequate representation in technology governance. The decline of corporatist institutions across many developed economies may correlate with diminishing worker voice in AI policy, even as technological disruption intensifies the need for effective advocacy. Revitalising union influence likely requires institutional reforms establishing participatory rights rather than solely organisational strategies within existing frameworks.

The German case proves particularly instructive, demonstrating that sustained union engagement can shape implementation requirements promoting human-centred AI design. However, translation of these successes to different institutional contexts remains uncertain, given their dependence upon specific configurations of industrial relations, legal frameworks, and political culture.

Startups and innovation narratives

The evidence regarding startup policy influence reveals a paradoxical situation wherein actors possessing valuable knowledge about emerging technologies lack effective channels for contributing to governance debates. Startups’ proximity to technological frontiers positions them to identify risks and opportunities before these become apparent to established policy actors, yet limited organisational resources constrain their participation in formal processes.

The research suggests that startup perspectives reach policymakers primarily when aligned with dominant innovation narratives emphasising economic growth and competitiveness. This filtering mechanism may systematically exclude voices that challenge prevailing assumptions or highlight uncomfortable implications of technological deployment. Greater diversity of entrepreneurial perspectives in policy debates could improve regulatory quality by surfacing concerns overlooked by both large incumbents and traditional worker representatives.

Developing effective mechanisms for startup policy participation represents a significant governance challenge. Traditional interest group structures privilege established organisations with sustained capacity for engagement, potentially disadvantaging dynamic entrepreneurial sectors characterised by rapid organisational turnover and resource constraints.

Framing and discursive power

The findings regarding framing effects illuminate how differential responsiveness operates not only through direct resource advantages but through discursive mechanisms shaping which concerns appear legitimate and urgent. The predominance of economic competitiveness frames in AI policy debates systematically advantages stakeholders whose claims align with growth and innovation narratives while marginalising perspectives emphasising ethical considerations, distributional fairness, or worker dignity.

This discursive dimension of power proves particularly consequential given technology corporations’ substantial influence over public communication through platform control, media engagement, and sponsored research. The ability to shape baseline assumptions about what AI policy should accomplish—and which trade-offs appear acceptable—may prove more consequential than direct lobbying expenditures in determining long-term policy trajectories.

Effective counterweights to discursive power concentration require both alternative institutional venues for policy deliberation and strategic capacity among marginalised stakeholders to articulate compelling counter-narratives. Research suggests that ethical and social justice framings achieve traction primarily when amplified by organised advocacy and aligned with salient public concerns, indicating that framing contests require sustained organisational investment rather than merely persuasive argumentation.

Implications for governance reform

The analysis suggests several implications for improving stakeholder balance in AI labour policy development. First, institutional reforms establishing formal participatory rights for worker representatives could partially offset resource asymmetries favouring corporate interests. Second, deliberate mechanisms for incorporating startup perspectives—such as innovation-focused consultative bodies—might surface valuable knowledge currently excluded from governance processes. Third, diversifying venues for policy deliberation beyond forums dominated by industry participants could enable alternative framings to achieve visibility.

However, the research also indicates significant obstacles to such reforms. Stakeholders benefiting from current arrangements possess resources to resist changes threatening their influence advantages. Governmental capacity constraints may perpetuate dependence upon corporate expertise and cooperation. Path dependencies in policy development create persistent advantages for actors who shaped initial framing and institutional design. These barriers suggest that achieving balanced stakeholder representation requires sustained political mobilisation rather than solely technical governance improvements.

Conclusions

This dissertation has examined differential policymaker responsiveness to labour unions, startups, and Big Tech corporations during AI-driven labour market disruptions, revealing systematic patterns with significant implications for democratic governance during technological transitions.

The first objective sought to synthesise evidence regarding Big Tech's policy influence mechanisms. The research demonstrates that technology corporations exercise influence through multiple reinforcing channels including direct lobbying, agenda-setting through platform control, public-private partnerships creating governmental dependence, and self-regulation that effectively substitutes for public oversight. These mechanisms collectively create substantial advantages that translate into policy outcomes prioritising innovation and economic growth over worker protection concerns.

The second objective addressed conditions for effective union influence. Evidence indicates that unions achieve meaningful impact primarily in institutional contexts featuring formalised social partnership arrangements embedding worker participation in policy development. Outside such contexts, declining membership and resource constraints substantially limit union capacity to counterbalance corporate interests, regardless of strategic choices.

The third objective examined startup roles in policy debates. Findings reveal that startups possess valuable knowledge regarding emerging technologies but lack organisational resources for sustained policy engagement. Their perspectives reach policymakers primarily through coalition participation or alignment with dominant innovation narratives, constraining the diversity of entrepreneurial voices in governance processes.

The fourth objective analysed how framing and institutional design affect responsiveness. Research demonstrates that economic competitiveness frames systematically receive greater attention than ethical or distributional concerns, advantaging stakeholders whose claims align with growth narratives. This discursive dimension of power concentration operates alongside material resource advantages to shape policy outcomes.

The fifth objective identified research gaps requiring future investigation. Significant opportunities exist for comparative analyses across national contexts beyond Europe and North America, longitudinal studies tracking evolving stakeholder strategies, and empirical research on mechanisms for amplifying marginalised voices in technology governance.

These findings carry substantial significance for understanding democratic governance under conditions of rapid technological change. If policy processes systematically privilege certain stakeholders regardless of their claims’ merit or public interest alignment, resulting regulations may entrench existing power disparities rather than addressing challenges posed by transformative technologies. Ensuring that AI-driven labour market transitions generate broadly shared benefits rather than concentrated gains requires deliberate institutional reforms amplifying diverse stakeholder voices.

Future research should address several priorities identified through this analysis. Comparative studies examining how different national institutional configurations shape stakeholder influence patterns would enable identification of transferable governance innovations. Longitudinal research tracking how union strategies and effectiveness evolve as AI deployment expands could inform organisational adaptation. Empirical investigation of specific mechanisms—such as consultative bodies, participatory governance arrangements, or alternative dispute resolution—could generate actionable knowledge for improving representation quality.

In conclusion, the evidence synthesised in this dissertation demonstrates that current AI labour policy governance exhibits substantial imbalances in stakeholder responsiveness, with Big Tech enjoying systematic advantages over unions and startups. Achieving more balanced representation requires recognising these dynamics and implementing deliberate reforms to ensure that policy processes serve broad public interests rather than solely those with the loudest voices or deepest resources.

To cite this work, please use the following reference:

Hartley, O., 4 February 2026. Who gets heard when labour market shocks hit? [online]. Available from: https://www.ukdissertations.com/dissertation-examples/who-gets-heard-when-labour-market-shocks-hit/ [Accessed 13 February 2026].
