
How are UK employers actually using generative AI, and who bears the risk when things go wrong?


Emily Carter

Abstract

This dissertation examines the emerging patterns of generative artificial intelligence (GenAI) adoption across United Kingdom workplaces and analyses the legal and organisational liability frameworks governing instances where such technologies produce harmful outcomes. Drawing upon a systematic synthesis of peer-reviewed literature, government publications, and empirical survey data, this study addresses a significant gap in understanding how UK employers currently deploy GenAI tools and who bears responsibility when these systems malfunction, produce biased outputs, or compromise data security. The findings reveal that GenAI adoption in UK workplaces is predominantly bottom-up and largely unsupervised, with only 32 per cent of surveyed professionals reporting clear workplace guidance. Common applications include administrative drafting, document summarisation, coding assistance, and human resources functions. Critically, the analysis demonstrates that existing legal frameworks consistently attribute liability to employers and human decision-makers rather than the AI systems themselves, creating substantial organisational risk exposure. Harmed individuals face considerable evidential burdens, whilst regulatory pressure concerning fairness, transparency, and privacy continues to intensify. This dissertation concludes by emphasising that robust governance frameworks, comprehensive training programmes, and documented oversight mechanisms constitute essential risk-mitigation strategies for UK employers.

Introduction

The rapid proliferation of generative artificial intelligence technologies represents one of the most significant technological transformations affecting contemporary workplaces. Since the public release of large language model tools such as ChatGPT in late 2022, organisations across all sectors have witnessed unprecedented adoption of systems capable of generating human-like text, code, images, and analytical outputs. This technological shift carries profound implications for employment practices, organisational liability, and the broader regulatory landscape governing workplace conduct in the United Kingdom.

Generative AI fundamentally differs from earlier forms of workplace automation. Rather than performing deterministic tasks according to explicit programming, these systems generate novel outputs based on probabilistic models trained on vast datasets, introducing inherent unpredictability and opacity into organisational processes. This characteristic creates unique challenges for traditional legal frameworks designed to attribute responsibility and liability for harmful outcomes.

The significance of this topic extends across multiple domains. From an academic perspective, the intersection of artificial intelligence, employment law, and organisational behaviour represents fertile ground for interdisciplinary scholarship that can inform both theoretical understanding and practical application. Socially, the widespread deployment of GenAI in workplace settings raises fundamental questions about fairness, particularly concerning the potential for these systems to perpetuate or amplify discriminatory patterns in hiring, performance evaluation, and service delivery. Practically, UK employers face immediate decisions about GenAI adoption with potentially significant legal and reputational consequences.

Current evidence suggests that workplace adoption is outpacing organisational governance. A substantial survey of 938 UK public service professionals found that whilst 22 per cent personally use GenAI and 45 per cent observe colleagues using it, only 32 per cent reported having clear workplace guidance on such use (Bright et al., 2024). This governance gap creates considerable uncertainty regarding responsibility allocation when GenAI-assisted processes produce harmful outcomes.

The UK regulatory environment adds further complexity. Unlike the European Union’s comprehensive AI Act, the United Kingdom has pursued a more principles-based, sector-specific approach to AI governance, creating potential ambiguities regarding employer obligations and liability exposure. Understanding how existing legal frameworks apply to GenAI-related harms, and identifying gaps requiring policy attention, carries substantial importance for employers, employees, and policymakers alike.

This dissertation addresses these pressing concerns through a systematic examination of current GenAI usage patterns in UK workplaces and a critical analysis of the liability frameworks governing harmful outcomes. By synthesising empirical evidence with legal analysis, this study provides both a descriptive account of the current landscape and normative guidance for responsible organisational practice.

Aim and objectives

### Main aim

The primary aim of this dissertation is to provide a comprehensive, evidence-based analysis of how UK employers are currently deploying generative AI technologies and to determine where legal and organisational liability resides when these systems cause harm.

### Specific objectives

This dissertation pursues the following specific objectives:

1. To map the current patterns of GenAI adoption across UK workplaces, with particular attention to sectoral variations and the distinction between formal organisational deployment and informal employee-initiated use.

2. To categorise the primary use cases for GenAI in UK employment contexts and identify the specific risks associated with each application domain.

3. To analyse the existing UK legal framework governing employer liability for harms arising from GenAI use, including discrimination, data protection breaches, and negligence claims.

4. To evaluate how current liability regimes distribute risk among employers, employees, technology providers, and affected third parties.

5. To identify governance, training, and oversight mechanisms that can effectively mitigate employer liability exposure whilst enabling beneficial GenAI adoption.

Methodology

This dissertation employs a literature synthesis methodology, systematically reviewing and analysing existing scholarly research, empirical studies, and authoritative policy documents to address the stated research objectives. This approach is appropriate given the rapidly evolving nature of the subject matter and the need to consolidate fragmented evidence across multiple disciplinary domains.

### Data sources and selection criteria

The evidence base for this synthesis comprises peer-reviewed journal articles, government publications, and reports from recognised international organisations. Primary databases consulted include Scopus, Web of Science, and Google Scholar, supplemented by targeted searches of UK government websites and institutional repositories. Selection criteria prioritised sources published between 2023 and 2025 to ensure currency, although foundational legal principles from earlier scholarship were included where relevant.

Inclusion criteria required that sources address one or more of the following themes: generative AI workplace applications, organisational liability for AI-related harms, UK employment law concerning technology use, data protection and privacy implications of AI deployment, or governance frameworks for responsible AI adoption. Sources were excluded if they lacked peer review or equivalent quality assurance, focused exclusively on non-UK jurisdictions without transferable insights, or comprised opinion pieces without empirical or analytical foundation.

### Analytical approach

The synthesis follows a thematic analysis framework, organising evidence according to the research objectives. For the analysis of workplace usage patterns, particular weight was given to the survey conducted by Bright et al. (2024) as the most comprehensive UK-specific empirical study currently available, supplemented by broader reviews examining cross-sectoral and international patterns with applicability to UK contexts.

Legal analysis draws upon both doctrinal scholarship examining existing liability frameworks and emerging literature addressing the specific challenges posed by AI opacity and probabilistic decision-making. Given the absence of UK case law directly addressing GenAI-specific liability questions, the analysis necessarily engages in reasoned extrapolation from established principles and analogous precedents.

### Limitations

This methodology carries inherent limitations. The rapid pace of technological and regulatory development means that evidence may require ongoing revision. UK-specific empirical data remains relatively sparse compared to international evidence, necessitating careful consideration of transferability. Furthermore, the absence of established case law concerning GenAI liability requires analytical inference rather than direct legal determination.

Literature review

### The emergence of generative AI in organisational contexts

Generative artificial intelligence represents a distinct category within the broader AI landscape, characterised by the capacity to produce novel outputs—including text, code, images, and structured data—based on patterns learned from training datasets. Unlike discriminative AI systems designed for classification or prediction, generative models create content that did not previously exist, introducing unique opportunities and risks for organisational deployment (Feuerriegel et al., 2023).

The business implications of GenAI have attracted substantial scholarly attention. Kanbach et al. (2023) examine GenAI through a business model innovation perspective, identifying opportunities for value creation across multiple organisational functions whilst cautioning that realising such potential requires sophisticated governance arrangements. Similarly, Naqbi, Bahroun, and Ahmed (2024) provide a comprehensive review of GenAI’s productivity-enhancing potential, documenting applications across administrative, creative, analytical, and technical domains.

The interdisciplinary nature of GenAI’s organisational impact is evident in the literature. Ooi et al. (2023) present perspectives from across academic disciplines, noting that GenAI’s potential spans business functions including content creation, customer service, software development, and decision support. This breadth of application creates correspondingly diverse risk profiles that organisations must navigate.

### Patterns of adoption in UK workplaces

Empirical evidence concerning GenAI adoption in UK workplaces remains emergent but increasingly substantial. The most comprehensive UK-specific study to date surveyed 938 public service professionals across health, education, social work, and emergency services, providing valuable insights into real-world usage patterns (Bright et al., 2024).

This survey revealed that 22 per cent of respondents personally use GenAI in their professional roles, whilst 45 per cent have observed colleagues using such tools. Crucially, adoption appears predominantly bottom-up rather than organisation-led, with employees independently experimenting with available tools to address perceived workflow inefficiencies. Only 32 per cent of respondents reported the existence of clear workplace guidance on GenAI use, suggesting significant governance gaps across the public sector.

Common use cases identified in UK public sector contexts include drafting emails and reports, summarising lengthy documents, preparing lesson plans and training materials, and reducing time spent on bureaucratic tasks. NHS staff responding to the survey anticipated that effective GenAI deployment could reduce time spent on administrative bureaucracy from approximately 50 per cent to 30 per cent of their working hours (Bright et al., 2024).
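
That anticipated saving is easy to translate into hours. The short calculation below is purely illustrative: the 37.5-hour working week is an assumed figure for the example, not one reported in the survey.

```python
# Illustrative arithmetic only: converting the Bright et al. (2024) survey
# percentages into weekly hours. The 37.5-hour week is an assumption for
# this example, not a figure reported in the survey.
WEEKLY_HOURS = 37.5

current_admin = WEEKLY_HOURS * 0.50      # ~50% of hours on bureaucracy now
anticipated_admin = WEEKLY_HOURS * 0.30  # ~30% anticipated with GenAI

print(f"Current administrative load: {current_admin:.2f} hours/week")      # 18.75
print(f"Anticipated with GenAI:      {anticipated_admin:.2f} hours/week")  # 11.25
print(f"Hours potentially freed:     {current_admin - anticipated_admin:.2f}")  # 7.50
```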

International reviews corroborate and extend these findings. Kar, Varsha, and Rajan (2023) examine GenAI applications across industrial contexts, identifying common deployment in marketing, customer relations, coding, and operational support functions. Budhwar et al. (2023) focus specifically on human resource management applications, documenting use in recruitment screening, training content development, and employee communications.

### Risk profiles associated with GenAI workplace applications

The literature identifies distinct risk profiles corresponding to different GenAI applications, which can be systematically categorised according to organisational function.

#### Administrative and drafting applications

GenAI tools deployed for drafting correspondence, reports, and administrative documents offer substantial time savings but carry risks of inaccuracy and confidentiality compromise. Large language models are known to generate plausible-sounding but factually incorrect content—a phenomenon termed “hallucination”—which may propagate errors through organisational processes if outputs are insufficiently verified (Bright et al., 2024; Naqbi, Bahroun and Ahmed, 2024).

Confidentiality risks arise when employees input sensitive information into externally hosted GenAI services. Without appropriate technical safeguards and user training, confidential data may be transmitted to third-party servers, potentially violating data protection obligations and professional confidentiality duties (Diro et al., 2025).
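
One commonly discussed technical safeguard is stripping obvious personal identifiers from text before it leaves the organisation. The sketch below is a minimal illustration of that idea under stated assumptions: the regular expressions are simplistic examples, and a production control would need far broader detection (names, addresses, case references), as the unredacted name in the sample output demonstrates.

```python
import re

# Minimal illustrative sketch: redact obvious identifiers before text is
# sent to an externally hosted GenAI service. The patterns are simplistic
# examples; note that the person's name below passes through untouched,
# which is why regex filtering alone is not an adequate control.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_PHONE": re.compile(r"(?:\+44\s?|\b0)\d{3}\s?\d{3}\s?\d{4}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise this note: ring Jo Bloggs on 0115 496 0000 or jo@example.org."
print(redact(prompt))
# Summarise this note: ring Jo Bloggs on [UK_PHONE REDACTED] or [EMAIL REDACTED].
```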

#### Coding and technical applications

GenAI assistance for software development and IT support functions offers accelerated development cycles and enhanced accessibility for non-specialist users. However, the literature identifies significant security concerns. Generated code may contain vulnerabilities, insecure practices, or even deliberately embedded backdoors if training data has been compromised (Humphreys et al., 2024; Diro et al., 2025).

Humphreys et al. (2024) argue that the “hype” surrounding GenAI capabilities creates moral hazards whereby organisations deploy inadequately tested generated code in production environments, exposing systems to exploitation. Diro et al. (2025) provide a comprehensive survey of workplace security implications, emphasising the potential for GenAI to introduce novel attack vectors through data leakage, model poisoning, and insecure output generation.
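
The vulnerabilities at issue are frequently mundane. The contrast below is a hypothetical illustration (not an example drawn from the cited studies): the first function resembles naively generated database code that interpolates user input into SQL, a classic injection flaw, while the second shows the parameterised form that human security review should insist on before code reaches production.

```python
import sqlite3

# Hypothetical illustration of the review problem described above.

# Pattern 1: resembles naively generated code. User input is interpolated
# directly into the SQL string, a classic injection vulnerability.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    # Input such as  x' OR '1'='1  would return every row in the table.
    return conn.execute(query).fetchall()

# Pattern 2: what security review should insist on. A parameterised query
# lets the driver keep user data separate from the SQL statement itself.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```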

#### Human resources and employment decisions

GenAI applications in human resources contexts present particularly acute risks given the legal protections surrounding employment decisions. Bankins et al. (2023) provide a multilevel review of AI implications for organisational behaviour, noting that AI-assisted hiring, performance evaluation, and disciplinary processes may embed discriminatory patterns present in training data or design choices.

Feuerriegel et al. (2023) examine the business and information systems implications of GenAI, highlighting that opacity in model decision-making creates challenges for organisations required to demonstrate non-discriminatory practices. Yorks and Jester (2024) address ethical considerations in human resource development applications, emphasising the need for careful validation of GenAI outputs in consequential employment contexts.

Budhwar et al. (2023) present research directions for human resource management in the GenAI era, identifying risks including biased recruitment screening, privacy violations through employee monitoring, and dehumanisation of workplace relationships. These risks carry legal implications under the Equality Act 2010 and broader common law duties of employers.

#### Customer communications and marketing

GenAI deployment in customer-facing functions offers personalisation and efficiency benefits but creates reputational and regulatory risks. Wach et al. (2023) provide a critical analysis of GenAI controversies, noting that misleading or offensive generated content can rapidly damage organisational reputation whilst potentially engaging consumer protection and advertising standards regulation.

Ooi et al. (2023) observe that customer service chatbots powered by GenAI may generate inappropriate, inaccurate, or harmful responses, for which the deploying organisation rather than the technology provider typically bears responsibility. Kar, Varsha, and Rajan (2023) similarly identify reputational damage as a significant risk in marketing applications where generated content fails to meet accuracy or appropriateness standards.

### Legal frameworks governing AI-related liability

The legal analysis of liability for GenAI-related harms draws upon established principles whilst acknowledging their imperfect application to novel technological circumstances. In UK law, potential bases for liability include negligence, breach of statutory duty, vicarious liability for employee acts, and direct liability for discriminatory practices.

#### Attribution of responsibility for AI outputs

Al-Dulaimi and Mohammed (2025) examine legal responsibility for AI errors in the public sector, concluding that AI systems are consistently treated as tools rather than legal persons. Responsibility for harmful outputs therefore attaches to the deploying organisation and the human decision-makers who utilise AI-generated information, regardless of the system’s autonomous characteristics.

This analytical position accords with general UK legal principles. An employer who deploys a tool that produces defective outputs cannot ordinarily disclaim responsibility by reference to the tool’s autonomous operation. Rather, the duty to ensure safe and lawful operations remains with the employer, who must implement appropriate safeguards, training, and oversight.

#### Challenges of causation and proof

Cheong, Caliskan, and Kohno (2024) analyse the societal impacts of GenAI from a legal perspective, identifying significant challenges for harmed individuals seeking redress. The opacity of AI systems—often characterised as “black box” decision-making—creates evidential barriers for claimants required to demonstrate causation between system operation and suffered harm.

These challenges do not, however, relieve organisations of responsibility. Rather, they shift the practical burden of proof onto claimants, creating potential injustice for affected individuals. Al-Dulaimi and Mohammed (2025) note that this structural imbalance may require legislative attention to ensure adequate protection for those harmed by AI systems.

#### Discrimination and equality law

Employment discrimination claims arising from GenAI-assisted decisions engage the Equality Act 2010, which prohibits direct and indirect discrimination on protected characteristics. The opacity of GenAI systems creates particular challenges for employers, who bear the burden of demonstrating non-discriminatory practices once a prima facie case is established.

Bankins et al. (2023) observe that AI systems may perpetuate or amplify historical patterns of discrimination present in training data, potentially creating indirect discrimination even absent discriminatory intent. Feuerriegel et al. (2023) emphasise that organisations cannot outsource responsibility for fair treatment to algorithmic systems; rather, deployment of biased tools may itself constitute a discriminatory practice.
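
In practice, detecting the indirect discrimination described above usually begins with simple selection-rate comparisons across groups. The sketch below is an illustrative first-pass audit under stated assumptions: the outcome figures are invented, and the 0.8 ("four-fifths") trigger is a well-known US regulatory heuristic rather than an Equality Act 2010 standard, so a flag is a prompt for investigation, not a legal finding.

```python
# Illustrative first-pass bias audit for an AI-assisted screening step.
# The data are invented, and the 0.8 ("four-fifths") threshold is a US
# regulatory heuristic used here only as an example trigger; it is not
# a standard under the Equality Act 2010.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def flag_disparities(outcomes: dict, threshold: float = 0.8) -> dict:
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())  # highest group selection rate
    return {g: round(r / benchmark, 2) for g, r in rates.items()
            if r / benchmark < threshold}

# Hypothetical shortlisting outcomes from a GenAI screening tool.
outcomes = {"group_a": (60, 200), "group_b": (30, 200)}
print(flag_disparities(outcomes))  # {'group_b': 0.5} -> investigate further
```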

#### Data protection and privacy

The UK General Data Protection Regulation (UK GDPR) and Data Protection Act 2018 impose obligations concerning the processing of personal data, including requirements for lawful basis, purpose limitation, and data minimisation. GenAI applications that process personal data must comply with these requirements, with potential regulatory enforcement action and civil liability for non-compliance.

Diro et al. (2025) identify specific privacy risks associated with GenAI deployment, including inadvertent disclosure of personal data to third-party service providers, use of personal data for purposes beyond original collection, and potential re-identification of individuals from anonymised datasets. Organisational liability for such breaches attaches to the data controller, typically the employer, rather than individual employees or technology providers.

### Governance and risk mitigation frameworks

The literature increasingly addresses governance mechanisms for responsible GenAI deployment. Hagendorff (2024) provides a comprehensive scoping review of GenAI ethics, mapping the terrain of normative concerns and identifying governance responses across jurisdictions and sectors.

Rana et al. (2024) examine the relationship between GenAI adoption, ethical considerations, and organisational performance, suggesting that robust governance frameworks can simultaneously mitigate risk and enhance value realisation. Jonnala, Thomas, and Mishra (2025) develop a multi-stakeholder approach to assessing interconnected risks in GenAI deployment, emphasising the importance of comprehensive risk assessment prior to implementation.

Healthcare contexts provide instructive examples of governance development. Reddy (2024) addresses GenAI implementation in healthcare settings, proposing an implementation science framework for application, integration, and governance that balances innovation benefits against patient safety requirements. This framework emphasises the necessity of clear policies, appropriate training, human oversight of consequential decisions, and continuous monitoring of system performance.

Common themes across the governance literature include explicit organisational policies addressing permissible GenAI uses, mandatory training for employees using such tools, human review requirements for consequential outputs, technical safeguards against data leakage and security compromise, and incident reporting mechanisms that enable continuous improvement.
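
Several of these themes can be made operational in straightforward ways. The sketch below is hypothetical rather than a reference implementation: it encodes a use policy as data, routes consequential categories to mandatory human review, and records every check for incident reporting. The category names and rules are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical encoding of the governance themes above: explicit permitted
# uses, mandatory human review for consequential outputs, and an audit
# trail supporting incident reporting. Categories and rules are invented.
POLICY = {
    "internal_drafting": {"allowed": True,  "human_review": False},
    "document_summary":  {"allowed": True,  "human_review": False},
    "hr_screening":      {"allowed": True,  "human_review": True},
    "customer_reply":    {"allowed": True,  "human_review": True},
    "legal_advice":      {"allowed": False, "human_review": True},
}

@dataclass
class UsageRecord:
    timestamp: str
    category: str
    allowed: bool
    review_required: bool

audit_log: list = []

def check_use(category: str) -> UsageRecord:
    # Unknown categories default to "blocked pending review", not open use.
    rule = POLICY.get(category, {"allowed": False, "human_review": True})
    record = UsageRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        category=category,
        allowed=rule["allowed"],
        review_required=rule["human_review"],
    )
    audit_log.append(record)  # documented oversight: every check is retained
    return record

print(check_use("hr_screening"))  # permitted, but human review is mandatory
print(check_use("legal_advice"))  # blocked under this illustrative policy
```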

Discussion

This discussion analyses the key findings emerging from the literature synthesis, critically evaluates their implications for UK employers, and assesses how the evidence addresses the stated research objectives.

### The governance gap and its implications

The evidence reveals a striking disconnect between the pace of GenAI adoption and the development of organisational governance frameworks. The finding that only 32 per cent of UK public sector professionals report clear workplace guidance on GenAI use, whilst 22 per cent personally use such tools and 45 per cent observe colleagues doing so, indicates substantial unsupervised experimentation with consequential technologies (Bright et al., 2024).

This bottom-up adoption pattern creates significant organisational risk. Employees experimenting with GenAI tools may inadvertently compromise confidential information, introduce errors into critical processes, or generate outputs that expose their employers to discrimination claims. Absent clear guidance, employees cannot be expected to navigate the complex risk landscape associated with GenAI deployment, yet the legal consequences of harmful outcomes attach primarily to the employer.

The governance gap appears to result from multiple factors. First, the rapid pace of technological development has outstripped organisational policy-making capacity. Second, the accessibility of public-facing GenAI tools means that adoption requires no formal organisational decision or procurement process. Third, initial use cases often appear low-risk, obscuring the potential for downstream harms from accumulated inappropriate uses.

Addressing this gap requires proactive organisational response. Employers cannot reasonably prohibit all GenAI use, which would likely prove unenforceable and potentially counterproductive. Rather, the development of clear policies distinguishing permissible from prohibited uses, combined with training that enables employees to recognise and manage risks, appears essential.

### Risk distribution and the employer’s burden

The legal analysis confirms that existing liability frameworks consistently attribute responsibility to employers and human decision-makers rather than AI systems themselves. This conclusion is doctrinally straightforward—AI systems lack legal personality and cannot bear responsibility—but carries significant practical implications.

Employers face potential liability across multiple dimensions. Negligence claims may arise where GenAI outputs cause foreseeable harm that appropriate oversight would have prevented. Discrimination claims may arise from biased AI-assisted employment decisions, with the employer bearing the burden of demonstrating non-discriminatory practices. Data protection violations may result in regulatory enforcement action and civil liability. Professional negligence claims may arise in regulated sectors where GenAI-assisted work fails to meet required standards.

The concentration of liability on employers, whilst third-party technology providers bear limited responsibility, creates potential injustice and may require policy attention. However, for present purposes, this risk distribution means that employer governance and oversight are not merely best practice but essential risk management.

The opacity of GenAI systems creates particular challenges. Employers may struggle to detect biased outputs, identify data protection violations, or anticipate failure modes without technical expertise that many organisations lack. This asymmetry between responsibility and capability suggests the importance of external assurance mechanisms and potentially regulatory guidance on minimum governance standards.

### Sectoral variations and specific risk profiles

The evidence indicates that risk profiles vary significantly according to application domain, requiring tailored governance responses. Administrative applications present relatively lower but still significant risks, primarily concerning accuracy and confidentiality. Coding applications present security risks requiring technical safeguards and security review processes. Human resources applications present discrimination and privacy risks requiring human oversight of consequential decisions. Customer-facing applications present reputational risks requiring quality assurance processes.

This sectoral variation suggests that generic organisational policies may be insufficient. Rather, application-specific guidance addressing the particular risks of each use case appears necessary. For example, HR policies might specify that GenAI screening tools may only assist, never determine, candidate shortlisting, whilst customer service policies might require human review of GenAI-generated responses before transmission.
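
The "assist, never determine" principle can be enforced structurally as well as on paper. In the hypothetical sketch below, the GenAI score is stored purely as an advisory signal, and no shortlisting outcome can be recorded without an explicit decision from a named human reviewer.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structural enforcement of "assist, never determine": the
# AI score is advisory only, and no shortlisting outcome can exist without
# an explicit decision attributed to a named human reviewer.
@dataclass
class ShortlistDecision:
    candidate_id: str
    ai_score: float   # advisory signal from the screening tool
    decision: str     # "shortlist" or "reject", set only by a human
    decided_by: str   # named reviewer, preserving accountability

def record_decision(candidate_id: str, ai_score: float,
                    human_decision: Optional[str],
                    reviewer: Optional[str]) -> ShortlistDecision:
    if human_decision is None or reviewer is None:
        raise ValueError("no outcome without a named human decision")
    return ShortlistDecision(candidate_id, ai_score, human_decision, reviewer)

# The AI ranking alone cannot shortlist anyone; a reviewer must decide.
print(record_decision("cand-042", ai_score=0.87,
                      human_decision="shortlist", reviewer="j.smith"))
```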

The public sector context examined in the primary UK study raises additional considerations. Public sector employers bear responsibilities to citizens that may exceed those of private sector employers, including duties of procedural fairness in administrative decision-making. GenAI deployment in citizen-facing functions requires particular attention to these public law obligations.

### Implications for regulatory development

The UK’s principles-based approach to AI regulation, in contrast to the EU’s prescriptive AI Act framework, creates both flexibility and uncertainty for employers. Whilst avoiding the compliance burdens of detailed regulatory requirements, this approach leaves employers to determine appropriate governance standards with limited authoritative guidance.

The evidence suggests that this regulatory gap creates risks of inadequate governance, particularly among smaller employers lacking resources for sophisticated policy development. Potential policy responses might include sector-specific guidance from relevant regulators, development of industry codes of practice, or eventual legislation establishing minimum governance standards for AI deployment.

The emerging regulatory pressure concerning fairness, transparency, and accountability, identified in the literature (Hagendorff, 2024; Rana et al., 2024; Jonnala, Thomas and Mishra, 2025), suggests that governance expectations will likely tighten. Employers who develop robust frameworks proactively may therefore gain competitive advantage whilst reducing risk exposure.

### Limitations and areas requiring further research

This analysis necessarily operates with significant limitations. UK-specific empirical evidence remains sparse, with the Bright et al. (2024) survey providing the most substantial data point but focusing specifically on public sector contexts. Private sector adoption patterns may differ substantially, requiring dedicated empirical investigation.

The legal analysis proceeds without benefit of UK case law directly addressing GenAI-specific liability questions. Whilst extrapolation from established principles is doctrinally appropriate, authoritative judicial determination of novel questions remains pending. As litigation proceeds, the legal landscape may evolve in unexpected directions.

The rapid pace of technological development means that findings require ongoing revision. GenAI capabilities continue to expand, new use cases emerge, and the risk landscape evolves correspondingly. Longitudinal research tracking adoption patterns and harm incidents over time would enhance understanding of both beneficial and harmful outcomes.

Conclusions

This dissertation has examined how UK employers are using generative AI and who bears liability when these technologies cause harm. The synthesis of available evidence supports several significant conclusions directly addressing the stated research objectives.

Regarding adoption patterns, UK workplaces are already experiencing substantial GenAI deployment, particularly through bottom-up employee experimentation. Survey evidence from the public sector indicates that nearly half of professionals observe GenAI use amongst colleagues, with adoption spanning administrative, technical, and analytical functions. However, organisational governance significantly lags adoption, with only approximately one-third of employees reporting clear workplace guidance.

Regarding risk profiles, distinct categories of GenAI application carry correspondingly distinct risks. Administrative applications present accuracy and confidentiality concerns; coding applications introduce security vulnerabilities; human resources applications risk discrimination and privacy violations; customer-facing applications create reputational exposure. Effective governance requires application-specific risk assessment and tailored mitigation strategies.

Regarding liability distribution, existing legal frameworks consistently attribute responsibility for GenAI-related harms to employers and human decision-makers rather than AI systems themselves. Employers cannot delegate responsibility by deploying autonomous tools, and the opacity of GenAI systems does not relieve organisations of their duty to ensure lawful and non-harmful operations. Harmed individuals face significant evidential challenges, but this structural disadvantage does not transfer responsibility to affected parties.

Regarding risk mitigation, the literature identifies clear governance, comprehensive training, and documented oversight as essential risk-management mechanisms. Employers who develop explicit policies, train employees in risk recognition, require human review of consequential outputs, implement technical safeguards, and establish incident reporting mechanisms can significantly reduce liability exposure whilst capturing GenAI’s productivity benefits.

The significance of these findings extends to multiple audiences. For employers, the evidence demonstrates that governance development is urgent and that informal or absent policies create substantial legal and reputational exposure. For employees, the findings emphasise that individual experimentation with GenAI tools may expose both themselves and their employers to harm, and that organisational guidance serves legitimate protective purposes. For policymakers, the evidence suggests that the current regulatory approach may require supplementation to ensure adequate protection for those harmed by AI systems and to establish minimum governance standards.

Future research should address several priorities. Empirical investigation of private sector adoption patterns would complement existing public sector evidence. Longitudinal tracking of GenAI-related harms and litigation outcomes would enhance understanding of actual risk materialisation. Comparative analysis of governance frameworks across organisations and jurisdictions would identify effective practices for dissemination. Finally, as the regulatory landscape develops, ongoing analysis of compliance requirements and their practical implementation will be required.

Generative AI presents UK employers with genuine opportunities for enhanced productivity and service delivery. However, realising these benefits whilst avoiding significant legal and reputational harms requires deliberate governance development. The evidence reviewed in this dissertation makes clear that employers who fail to develop appropriate policies, training, and oversight mechanisms do so at considerable risk. The question is no longer whether organisations should engage with GenAI governance, but how quickly and comprehensively they can do so.

References

Al-Dulaimi, A. and Mohammed, M. (2025) ‘Legal responsibility for errors caused by artificial intelligence (AI) in the public sector’, *International Journal of Law and Management*. https://doi.org/10.1108/ijlma-08-2024-0295

Bankins, S., Ocampo, A., Marrone, M., Restubog, S. and Woo, S. (2023) ‘A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice’, *Journal of Organizational Behavior*. https://doi.org/10.1002/job.2735

Bright, J., Enock, F., Esnaashari, S., Francis, J., Hashem, Y. and Morgan, D. (2024) ‘Generative AI is already widespread in the public sector: evidence from a survey of UK public sector professionals’, *Digital Government: Research and Practice*, 6, pp. 1-13. https://doi.org/10.1145/3700140

Budhwar, P., Chowdhury, S., Wood, G., Aguinis, H., Bamber, G., Beltran, J., Boselie, P., Cooke, F., Decker, S., Denisi, A., Dey, P., Guest, D., Knoblich, A., Malik, A., Paauwe, J., Papagiannidis, S., Patel, C., Pereira, V., Ren, S., Rogelberg, S., Saunders, M., Tung, R. and Varma, A. (2023) ‘Human resource management in the age of generative artificial intelligence: Perspectives and research directions on ChatGPT’, *Human Resource Management Journal*. https://doi.org/10.1111/1748-8583.12524

Cheong, I., Caliskan, A. and Kohno, T. (2024) ‘Safeguarding human values: rethinking US law for generative AI’s societal impacts’, *AI and Ethics*, 5, pp. 1433-1459. https://doi.org/10.1007/s43681-024-00451-4

Diro, A., Kaisar, S., Saini, A., Fatima, S., Pham, H. and Erba, F. (2025) ‘Workplace security and privacy implications in the GenAI age: A survey’, *Journal of Information Security and Applications*, 89, 103960. https://doi.org/10.1016/j.jisa.2024.103960

Feuerriegel, S., Hartmann, J., Janiesch, C. and Zschech, P. (2023) ‘Generative AI’, *Business and Information Systems Engineering*, pp. 1-16. https://doi.org/10.1007/s12599-023-00834-7

Hagendorff, T. (2024) ‘Mapping the Ethics of Generative AI: A Comprehensive Scoping Review’, *Minds and Machines*, 34. https://doi.org/10.1007/s11023-024-09694-w

Humphreys, D., Koay, A., Desmond, D. and Mealy, E. (2024) ‘AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business’, *AI and Ethics*, 4, pp. 791-804. https://doi.org/10.1007/s43681-024-00443-4

Jonnala, S., Thomas, N. and Mishra, S. (2025) ‘Navigating ethical minefields: a multi-stakeholder approach to assessing interconnected risks in generative AI using grey DEMATEL’, *Frontiers in Artificial Intelligence*, 8. https://doi.org/10.3389/frai.2025.1611024

Kanbach, D., Heiduk, L., Blueher, G., Schreiter, M. and Lahmann, A. (2023) ‘The GenAI is out of the bottle: generative artificial intelligence from a business model innovation perspective’, *Review of Managerial Science*, pp. 1-32. https://doi.org/10.1007/s11846-023-00696-z

Kar, A., Varsha, P. and Rajan, S. (2023) ‘Unravelling the Impact of Generative Artificial Intelligence (GAI) in Industrial Applications: A Review of Scientific and Grey Literature’, *Global Journal of Flexible Systems Management*, 24, pp. 659-689. https://doi.org/10.1007/s40171-023-00356-x

Naqbi, H., Bahroun, Z. and Ahmed, V. (2024) ‘Enhancing Work Productivity through Generative Artificial Intelligence: A Comprehensive Literature Review’, *Sustainability*, 16(3), 1166. https://doi.org/10.3390/su16031166

Ooi, K., Tan, G., Al-Emran, M., Al-Sharafi, M., Căpățînă, A., Chakraborty, A., Dwivedi, Y., Huang, T., Kar, A., Lee, V., Loh, X., Micu, A., Mikalef, P., Mogaji, E., Pandey, N., Raman, R., Rana, N., Sarker, P., Sharma, A., Teng, C., Wamba, S. and Wong, L. (2023) ‘The Potential of Generative Artificial Intelligence Across Disciplines: Perspectives and Future Directions’, *Journal of Computer Information Systems*, 65, pp. 76-107. https://doi.org/10.1080/08874417.2023.2261010

Rana, N., Pillai, R., Sivathanu, B. and Malik, N. (2024) ‘Assessing the nexus of Generative AI adoption, ethical considerations and organizational performance’, *Technovation*. https://doi.org/10.1016/j.technovation.2024.103064

Reddy, S. (2024) ‘Generative AI in healthcare: an implementation science informed translational path on application, integration and governance’, *Implementation Science*, 19. https://doi.org/10.1186/s13012-024-01357-9

Wach, K., Duong, C., Ejdys, J., Kazlauskaitė, R., Korzyński, P., Mazurek, G., Paliszkiewicz, J. and Ziemba, E. (2023) ‘The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT’, *Entrepreneurial Business and Economics Review*. https://doi.org/10.15678/eber.2023.110201

Yorks, L. and Jester, M. (2024) ‘Applying generative AI ethically in HRD practice’, *Human Resource Development International*, 27, pp. 410-427. https://doi.org/10.1080/13678868.2024.2337963
