Abstract
The rapid proliferation of generative artificial intelligence (AI) tools, particularly large language models such as ChatGPT, presents unprecedented challenges and opportunities for the legal profession. This dissertation examines how these technologies are fundamentally reshaping professional standards and ethical obligations within legal practice. Through a comprehensive literature synthesis, this study analyses the evolving landscape of professional duties, including competence requirements, confidentiality obligations, accountability frameworks, and the imperative to maintain human oversight in legal decision-making. The findings reveal that generative AI is not supplanting traditional legal ethics but rather recalibrating established duties to accommodate technological realities. Competence now extends to AI literacy and supervisory capabilities; confidentiality encompasses data protection within cloud-based systems; and professional integrity demands that practitioners maintain decisive human control over legal judgment. Whilst formal regulatory frameworks continue to develop, the trajectory indicates movement toward normalised, regulated, and ethically structured AI integration. This analysis contributes to ongoing scholarly and practical debates concerning the responsible adoption of generative AI in legal services, offering insights for practitioners, regulators, and legal educators navigating this transformative period.
Introduction
The emergence of generative artificial intelligence represents one of the most significant technological disruptions to affect professional services in recent decades. Since the public release of ChatGPT in November 2022, legal practitioners, regulators, and scholars have grappled with fundamental questions concerning how these tools should be integrated into legal practice whilst preserving the ethical foundations upon which the profession rests (Perlman, 2024a). The legal sector, traditionally characterised by conservative approaches to technological adoption, now confronts technologies capable of drafting legal documents, conducting research, summarising case law, and engaging in sophisticated legal analysis at unprecedented speed and scale.
The significance of this transformation extends beyond mere operational efficiency. Generative AI tools challenge core assumptions about professional competence, the nature of legal expertise, and the relationship between legal practitioners and their clients. When a machine can produce work that appears substantively indistinguishable from that of a qualified lawyer, questions arise concerning the appropriate boundaries of technological delegation, the duties owed to clients whose matters may be processed through AI systems, and the mechanisms through which accountability should be assigned when errors occur (Guleria et al., 2023).
The academic and practical importance of examining this phenomenon cannot be overstated. The legal profession serves as a cornerstone of democratic societies, providing essential services that protect rights, facilitate commerce, and ensure access to justice. Any fundamental shift in how legal services are delivered therefore carries profound implications for the rule of law itself. Moreover, the legal sector’s response to generative AI may serve as a template for other professions navigating similar challenges, rendering this analysis relevant beyond its immediate subject matter.
This dissertation addresses the central question of how generative AI tools are reshaping professional standards and ethical obligations in the legal sector. In doing so, it examines the interplay between established ethical frameworks and emerging technological capabilities, the regulatory responses developing across jurisdictions, and the theoretical reconceptualisation of professional duties necessitated by AI integration.
Aim and objectives
The primary aim of this dissertation is to critically analyse how generative AI tools, particularly large language models such as ChatGPT, are transforming professional standards and ethical obligations within the legal sector.
To achieve this aim, the following objectives guide the analysis:
1. To examine how the traditional duty of competence is being reinterpreted to encompass technological literacy and AI supervision capabilities in legal practice.
2. To analyse the implications of generative AI adoption for confidentiality obligations, data protection requirements, and client consent mechanisms.
3. To evaluate emerging frameworks for accountability and liability allocation when AI-assisted legal work produces errors or harm.
4. To assess the importance of maintaining human oversight and professional judgment in AI-augmented legal decision-making.
5. To identify the trajectory of regulatory development and propose considerations for the ethical integration of generative AI in legal services.
Methodology
This dissertation employs a literature synthesis methodology, systematically reviewing and integrating scholarly sources to develop a comprehensive understanding of the research question. Literature synthesis represents an established approach within legal scholarship, particularly suited to examining emerging phenomena where empirical data remains limited but theoretical and normative analysis proves essential (Snyder, 2019).
The research process involved identifying relevant peer-reviewed journal articles, academic working papers, and authoritative regulatory guidance addressing the intersection of generative AI and legal ethics. Sources were selected based on their relevance to the research objectives, methodological rigour, and contribution to understanding the evolving professional landscape. Priority was given to recent scholarship published between 2023 and 2025, reflecting the rapidly developing nature of this field following the widespread availability of advanced generative AI tools.
The analysis proceeded through several stages. Initially, sources were categorised according to the primary ethical themes they addressed, including competence, confidentiality, accountability, and professional independence. Subsequently, the categorised literature was synthesised to map areas of scholarly consensus, ongoing debates, and gaps in current understanding. This thematic analysis enabled the development of a coherent analytical framework for understanding how generative AI reshapes professional obligations.
The methodology acknowledges certain limitations inherent in literature-based research. The field remains in rapid flux, with new regulatory developments and scholarly contributions emerging continuously. Additionally, empirical research examining actual practitioner experiences with generative AI remains nascent, necessitating reliance upon theoretical analysis and early observational studies. Nevertheless, literature synthesis provides an appropriate foundation for mapping the current state of knowledge and identifying trajectories for future development.
Literature review
The evolving duty of competence in an AI-enabled profession
The duty of competence stands as a foundational principle of legal ethics across common law and civil law jurisdictions. This duty traditionally encompasses the knowledge, skill, thoroughness, and preparation reasonably necessary for effective representation. However, the emergence of generative AI necessitates a fundamental reconsideration of what competence requires in contemporary legal practice.
Scholars increasingly argue that careful AI use aligns with existing professional rules and that competence may eventually require lawyers to understand and appropriately utilise generative AI, much as earlier generations adapted to email, word processing, and online legal research. Perlman (2024a) contends that as AI becomes embedded in routine legal work, technological competence transitions from an optional enhancement to a baseline professional expectation. This position finds support in earlier analyses suggesting that lawyers bear an ethical responsibility to leverage AI capabilities where doing so advances client interests (The LegalTech Book, 2020).
Murray (2023) elaborates upon this theme, emphasising that competent AI use requires lawyers to serve as professional and responsible supervisors of AI systems. This supervisory function extends beyond mere output review to encompass understanding of AI capabilities and limitations, recognition of potential failure modes, and appreciation of contexts where AI assistance proves inappropriate. The lawyer deploying generative AI must possess sufficient technical literacy to evaluate whether the tool suits the task at hand.
The verification of AI outputs emerges as a critical competence requirement. Generative AI systems demonstrate well-documented tendencies toward confabulation, producing plausible-sounding but factually incorrect information, including fabricated legal citations and misrepresentations of case holdings. Lawyers utilising these tools must verify authorities and avoid delegating legal judgment to systems that may hallucinate or misapply law (Perlman, 2024a; Murray, 2023). High-profile instances of practitioners submitting AI-generated briefs containing fictitious citations underscore the practical urgency of this obligation.
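To render this supervisory step concrete, the following purely illustrative Python sketch shows one form a first-pass citation check might take: extracting citation-like strings from an AI-generated draft and flagging any not yet confirmed against a primary source. The citation pattern, the sample draft, and the notion of a practitioner-maintained verified list are hypothetical simplifications introduced here for illustration; no such tool is described in the literature reviewed.

```python
import re

# Hypothetical, deliberately narrow pattern covering a few US reporter
# formats (e.g. "410 U.S. 113", "998 F.3d 742"); real citation grammar
# is far richer than any single regular expression can capture.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.(?:2d|3d)|S\.\s?Ct\.)\s+\d{1,4}\b")

# Stand-in for authorities a human has already confirmed against a primary
# source; in practice this step is a manual check in an official database.
VERIFIED_AUTHORITIES = {"410 U.S. 113"}

def flag_unverified_citations(ai_output: str) -> list[str]:
    """Return citation-like strings in the draft not yet verified by a human."""
    found = set(CITATION_PATTERN.findall(ai_output))
    return sorted(found - VERIFIED_AUTHORITIES)

if __name__ == "__main__":
    draft = ("As held in Roe v. Wade, 410 U.S. 113 (1973), and in the "
             "possibly confabulated Smith v. Jones, 998 F.3d 742 (2021), ...")
    for citation in flag_unverified_citations(draft):
        print(f"Verify against a primary source before filing: {citation}")
```

Matching a string, of course, establishes nothing about whether the authority exists or supports the proposition attributed to it; a sketch of this kind merely marks where human verification must begin.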
Mohamed et al. (2024) position technological competence as an evolving standard that extends to understanding AI capabilities and limitations. This conceptualisation suggests that competence requirements will continue developing as AI technologies mature, imposing ongoing learning obligations upon practitioners throughout their careers. The American Bar Association's 2012 amendment to the commentary on Model Rule 1.1, directing lawyers to keep abreast of 'the benefits and risks associated with relevant technology', though predating current generative AI capabilities, provides a regulatory foundation for this expanded understanding.
Confidentiality, data protection, and third-party systems
Client confidentiality represents perhaps the most sacrosanct obligation in legal ethics, underpinning the trust essential to effective legal representation. The deployment of cloud-based generative AI tools introduces novel vectors through which confidential information may be exposed, transmitted, or compromised, necessitating careful reconsideration of how practitioners discharge confidentiality duties in technologically mediated practice.
The use of cloud-based generative AI tools heightens obligations around client confidentiality, data security, and privacy compliance. Darmawan and Soesatyo (2025) identify encryption protocols, secure cloud management practices, and limitations on third-party access as essential safeguards when utilising AI systems in legal work. When client information enters generative AI systems, it may be processed on remote servers, potentially retained for model training purposes, or accessible to technology provider personnel, each scenario raising distinct confidentiality concerns.
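By way of illustration only, the Python sketch below shows the kind of minimal redaction pass a firm might apply before client text leaves its environment for a third-party AI service. The patterns, placeholders, and sample prompt are hypothetical and deliberately simplistic; a production safeguard would rest on vetted anonymisation tooling alongside the encryption, access controls, and contractual data processing terms that Darmawan and Soesatyo (2025) describe.

```python
import re

# Hypothetical, deliberately simplistic patterns; vetted anonymisation or
# named-entity recognition would be needed in any real deployment.
REDACTION_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,5}\s+[A-Z][a-z]+\s+(?:Street|Road|Avenue|Lane)\b"), "[ADDRESS]"),
]

def redact(text: str) -> str:
    """Replace obvious personal identifiers before text is sent to an external AI service."""
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    prompt = ("Summarise the letter from jane.doe@example.com of 12 High Street, "
              "who can be reached on 020-555-0147.")
    print(redact(prompt))
```

Redaction of this kind addresses only what is transmitted; it does not answer the retention, training, and provider-access questions raised above, which require technical and contractual assurances from the provider itself.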
Kim, Yi and Park (2025), through systematic review and expert-driven analysis, identify liability and accountability for AI-related errors or breaches as priority concerns in AI adoption within the legal domain. Their analysis reveals that practitioners express significant apprehension regarding the security of sensitive client data when processed through AI systems, particularly those operated by commercial entities with opaque data handling practices.
Zahra (2025) emphasises that privacy compliance frameworks, including the General Data Protection Regulation in European jurisdictions and analogous regimes elsewhere, impose specific obligations regarding the processing of personal data through AI systems. Legal practitioners must ensure that their use of generative AI complies with applicable data protection requirements, which may necessitate data processing agreements with AI providers, privacy impact assessments, and explicit client notification of AI utilisation.
The question of client consent emerges as a significant consideration within this framework. Whilst traditional confidentiality frameworks developed around interpersonal communications and physical document security, contemporary practice requires practitioners to consider whether utilising AI tools constitutes a form of disclosure requiring client authorisation. Scholars debate whether existing consent frameworks adequately address AI-mediated processing or whether novel consent mechanisms prove necessary.
Accountability frameworks and liability allocation
The distribution of accountability when AI-assisted legal work produces errors presents complex challenges for existing professional responsibility frameworks. Traditional liability models assume direct human agency in professional work, rendering them potentially inadequate for contexts where AI systems contribute substantially to legal outputs.
Kim, Yi and Park (2025) and Guleria et al. (2023) both identify clarification of accountability structures as essential to responsible AI integration in legal practice. When a lawyer submits a brief containing AI-generated errors, questions arise concerning whether responsibility lies solely with the supervising attorney, whether the AI provider bears some liability, and whether contributory factors such as inadequate training or misleading AI interface design should affect liability allocation.
Zahra (2025) argues that emerging regulatory frameworks must clearly delineate responsibility for AI-assisted work, ensuring that the opacity of AI decision-making processes does not create accountability gaps through which injured parties cannot obtain redress. This position aligns with broader scholarly concern regarding the “responsibility gap” in AI systems, where the complexity of AI operations may obscure causal chains linking decisions to harms (Wang et al., 2023).
The standing orders and guidelines emerging from courts and regulatory bodies consistently emphasise that ultimate responsibility for legal work remains with the human practitioner, regardless of AI involvement. This principle, whilst providing doctrinal clarity, does not fully resolve questions concerning appropriate liability standards, the duty of care owed when selecting and deploying AI tools, or the evidentiary challenges of demonstrating AI-related causation in malpractice proceedings.
The healthcare sector’s experience with AI liability, as examined by Terranova et al. (2024), offers potentially instructive parallels for legal practice. Medical malpractice frameworks have begun adapting to AI-assisted diagnosis and treatment, developing standards for appropriate AI utilisation that balance innovation benefits against patient safety. Similar evolutionary processes appear likely within legal malpractice doctrine.
Professional independence and the human-in-the-loop imperative
A consistent theme across the literature concerns the imperative that AI augment rather than replace human professional judgment. This principle, sometimes characterised as maintaining a “human in the loop,” reflects both ethical commitments and practical recognition of AI limitations in contexts requiring nuanced judgment.
Scholars stress that AI must augment, not replace, human professional judgment, particularly in advocacy and sensitive areas including criminal justice and family law. Darmawan and Soesatyo (2025) and Ehirim (2025) both emphasise that the irreducibly human dimensions of legal representation—empathy with clients, persuasion of decision-makers, ethical reasoning in novel situations—cannot be delegated to algorithmic systems regardless of their sophistication.
Guleria et al. (2023) examine this question through the lens of forensic applications, where AI assistance in evidence analysis must remain subject to expert human oversight. Their analysis highlights concerns that over-reliance upon AI outputs may compromise professional independence, with practitioners deferring to algorithmic conclusions rather than exercising independent judgment.
Terranova et al. (2024) similarly argue that professional independence requires practitioners to maintain capacity for critical evaluation of AI outputs, resisting pressures toward automation bias whereby human operators uncritically accept AI-generated conclusions. This concern proves particularly acute in legal contexts where adversarial dynamics and client interests demand sceptical engagement with all information sources.
The disclosure of AI use emerges as a related professional obligation. Standing orders and proposed guidelines often require practitioners to acknowledge when AI tools have contributed to legal work product, enabling courts and opposing parties to assess the reliability of submitted materials. Wu and Wang (2024) advocate for transparency requirements as essential components of responsible AI governance, enabling informed evaluation of AI-assisted work.
Algorithmic bias, fairness, and explainability
The potential for generative AI systems to perpetuate or amplify biases present in their training data raises distinct ethical concerns for legal practitioners committed to justice and fairness. Legal work frequently involves matters with significant consequences for individual rights and liberties, rendering bias in AI-assisted analysis particularly problematic.
Zahra (2025) and Cheng and Liu (2023) both emphasise that responsible AI use requires scrutiny of algorithmic bias and attention to explainability concerns. When AI systems trained on historical legal data reproduce patterns reflecting past discrimination, practitioners deploying such systems may inadvertently perpetuate injustice even whilst believing themselves to act neutrally.
Stahl and Eke (2024) provide a comprehensive ethical analysis of ChatGPT and similar technologies, identifying fairness as a core ethical dimension requiring ongoing attention. Their analysis suggests that the opacity of large language model decision-making processes complicates bias detection and remediation, as the reasoning underlying particular outputs often cannot be traced or explained.
Mohamed et al. (2024) argue that emerging standards must address bias and explainability concerns directly, requiring practitioners to consider whether AI tools deployed in their practice produce outputs that systematically disadvantage particular groups or produce conclusions that cannot be adequately justified. This obligation aligns with broader professional duties to avoid discrimination and promote equal justice.
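To illustrate, at its very simplest, what considering whether outputs systematically disadvantage particular groups might involve, the following hypothetical Python sketch computes favourable-outcome rates per group from audit records and flags disparities using the four-fifths heuristic drawn from US employment-discrimination practice. The data, threshold, and framing are illustrative assumptions; none of the reviewed sources prescribes this procedure, and a genuine bias audit would require lawful access to protected-attribute data and far more careful statistical treatment.

```python
from collections import defaultdict

# Hypothetical audit records: (group label, whether the AI-assisted analysis
# reached a favourable outcome for that individual).
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def favourable_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the favourable-outcome rate for each group."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
    for group, favourable in records:
        counts[group][1] += 1
        counts[group][0] += int(favourable)
    return {group: fav / total for group, (fav, total) in counts.items()}

if __name__ == "__main__":
    rates = favourable_rates(records)
    benchmark = max(rates.values())
    for group, rate in sorted(rates.items()):
        # Four-fifths heuristic: flag groups whose rate falls below 80% of the
        # best-performing group's rate; a screening device, not a legal test.
        if rate < 0.8 * benchmark:
            print(f"Potential disparity: {group} at {rate:.2f} vs benchmark {benchmark:.2f}")
```

The value of such a check lies less in the arithmetic than in prompting the practitioner to ask the question at all.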
The intersection of AI bias with legal doctrine concerning discriminatory impact presents particularly complex questions. When AI-assisted legal analysis reproduces biased patterns, determining whether such bias rises to the level of professional misconduct, and specifying the duties practitioners bear to investigate and remediate bias, remains an area requiring further scholarly and regulatory development.
Discussion
The literature reviewed demonstrates that generative AI is recalibrating rather than replacing traditional legal ethics. Core professional duties—competence, confidentiality, accountability, and integrity—remain conceptually intact whilst acquiring new dimensions responsive to technological change. This process of ethical adaptation, rather than ethical revolution, suggests that the legal profession possesses frameworks capable of absorbing significant technological disruption whilst preserving essential values.
The reinterpretation of competence to encompass AI literacy and supervisory capabilities represents perhaps the most significant conceptual shift identified in this analysis. Historically, competence requirements focused upon substantive legal knowledge and procedural skill. The emerging understanding extends competence to include technological sophistication, requiring practitioners to evaluate when AI use proves appropriate, to supervise AI outputs effectively, and to recognise the limitations of algorithmic legal analysis. This expanded conception aligns with broader trends toward technology-enabled practice whilst maintaining the profession’s commitment to client protection through practitioner expertise.
The analysis reveals that confidentiality obligations have acquired substantial additional complexity through AI adoption. Where earlier technological adaptations—photocopying, facsimile transmission, electronic communication—presented relatively discrete confidentiality challenges, generative AI introduces systemic concerns regarding data handling by third-party providers, the potential for training data retention, and the opacity of information flows within AI systems. Practitioners now bear responsibility not only for their own confidentiality practices but for the practices of technology providers whose systems they deploy. This expanded scope of confidentiality duty represents a significant shift in professional responsibility.
Accountability frameworks emerge from this analysis as an area requiring considerable further development. The principle that human practitioners remain ultimately responsible for AI-assisted work provides doctrinal clarity but may prove insufficient as AI systems contribute more substantially to legal outputs. Questions concerning the standard of care for AI tool selection, the duties owed when AI systems produce errors, and the evidentiary challenges of establishing AI-related causation require more detailed scholarly attention and, eventually, regulatory or judicial resolution.
The human-in-the-loop imperative reflects both ethical commitment and practical recognition of current AI limitations. Generative AI systems demonstrate remarkable capabilities but remain susceptible to confabulation, lack genuine understanding of legal principles, and cannot exercise the discretionary judgment essential to legal practice. Maintaining human oversight protects against these limitations whilst preserving the professional relationship central to legal representation. However, as AI capabilities advance, determining the appropriate degree of human involvement may require ongoing recalibration.
The analysis identifies algorithmic bias and fairness as areas where existing ethical frameworks may prove inadequate. Traditional professional responsibility rules developed without contemplation of algorithmic decision-making and may not adequately address the distinct challenges posed by AI bias. Practitioners committed to justice face difficulties in detecting bias in opaque AI systems and may lack guidance concerning their duties when bias is suspected but cannot be verified.
Measured against the objectives specified for this dissertation, the analysis makes substantial progress on each. The reinterpretation of competence to encompass technological literacy emerges clearly from the literature, with strong scholarly consensus that AI competence will increasingly constitute a baseline professional expectation. The implications of AI adoption for confidentiality prove significant and multifaceted, requiring attention to data security, privacy compliance, and consent mechanisms. Accountability frameworks, whilst still developing, show clear trajectories toward maintaining human responsibility whilst accommodating AI contributions. The importance of human oversight receives consistent emphasis across sources, though the precise boundaries of appropriate delegation remain subject to ongoing debate. Finally, regulatory development appears to trend toward normalised, regulated AI integration rather than prohibition, with emerging standards reflecting adaptation of existing ethical principles to technological realities.
The findings carry significant implications for legal practice, education, and regulation. Practitioners must develop AI competencies to meet evolving professional standards, whilst remaining vigilant regarding the limitations and risks of AI tools. Legal education programmes must incorporate AI literacy into curricula, preparing future practitioners for technology-integrated practice. Regulators face the challenge of developing frameworks that enable beneficial AI adoption whilst protecting clients and maintaining public confidence in the legal system.
Conclusions
This dissertation has examined how generative AI tools are reshaping professional standards and ethical obligations in the legal sector. Through literature synthesis, the analysis reveals that generative AI is not replacing legal ethics but recalibrating traditional duties to accommodate technological realities. Competence now includes AI literacy and oversight capabilities; confidentiality and accountability extend into data-intensive, third-party systems; and professional integrity demands keeping humans decisively in the loop.
The first objective, concerning the duty of competence, has been achieved through analysis demonstrating that technological literacy is increasingly recognised as a component of professional competence, with scholars and regulators anticipating that AI understanding will become a baseline expectation for legal practitioners.
The second objective, addressing confidentiality implications, has been met through examination of the substantial new challenges posed by cloud-based AI systems, including data security requirements, privacy compliance obligations, and emerging client consent considerations.
The third objective, concerning accountability frameworks, has been achieved through identification of the principle that human practitioners retain ultimate responsibility for AI-assisted work, whilst acknowledging that detailed liability allocation mechanisms remain in development.
The fourth objective, regarding human oversight, has been addressed through analysis consistently emphasising the augmentative rather than substitutive role of AI in legal practice, particularly in contexts requiring nuanced professional judgment.
The fifth objective, concerning regulatory trajectories, has been met through identification of movement toward normalised, regulated, and ethically structured AI integration, adapting rather than abandoning traditional ethical frameworks.
The significance of these findings extends beyond legal practice to broader questions concerning professional responsibility in an age of artificial intelligence. The legal profession’s response to generative AI offers a potential template for other professions navigating similar challenges, demonstrating that established ethical frameworks possess sufficient flexibility to accommodate significant technological change whilst preserving core professional values.
Future research should address several areas identified as underdeveloped in current scholarship. Empirical investigation of practitioner experiences with generative AI would complement the predominantly theoretical literature currently available. Comparative analysis of regulatory approaches across jurisdictions would illuminate effective strategies for AI governance in legal services. Detailed examination of liability frameworks, informed by emerging case law, would provide practical guidance for practitioners and regulators. Finally, investigation of AI bias in legal applications, drawing upon interdisciplinary expertise in machine learning fairness, would address ethical concerns currently lacking detailed treatment.
Formal bar rules and regulatory frameworks continue developing, but the trajectory points unmistakably toward integrated, regulated AI deployment in legal practice. The profession that has adapted to successive technological transformations—from typewriters to computers, from libraries to databases, from correspondence to electronic communication—now faces its most significant technological challenge. The analysis presented here suggests that legal ethics, properly reconceptualised, remains adequate to guide this transition, provided practitioners, educators, and regulators engage thoughtfully with the opportunities and risks that generative AI presents.
References
Cheng, L. and Liu, X. (2023) ‘From principles to practices: the intertextual interaction between AI ethical and legal discourses’, *International Journal of Legal Discourse*, 8(1), pp. 31–52. Available at: https://doi.org/10.1515/ijld-2023-2001
Darmawan, A. and Soesatyo, B. (2025) ‘The Impact of Artificial Intelligence Utilization on Advocacy Practices and Professional Ethics in the Legal Field’, *Devotion: Journal of Research and Community Service*, 6(7). Available at: https://doi.org/10.59188/devotion.v6i7.25476
Ehirim, U. (2025) ‘Ethical Legal Practice and the Integration of AI into Legal Profession: Striking the Balance’, *Open Journal for Legal Studies*, 8(1). Available at: https://doi.org/10.32591/coas.ojls.0801.02013e
Guleria, A., Krishan, K., Sharma, V. and Kanchan, T. (2023) ‘ChatGPT: Forensic, legal, and ethical issues’, *Medicine, Science and the Law*, 64(2), pp. 150–156. Available at: https://doi.org/10.1177/00258024231191829
Kim, S., Yi, S. and Park, S. (2025) ‘Prioritizing challenges in AI adoption for the legal domain: A systematic review and expert-driven AHP analysis’, *PLOS ONE*, 20(1). Available at: https://doi.org/10.1371/journal.pone.0326028
Mohamed, E., Quteishat, A., Qtaishat, A. and Mohammad, A. (2024) ‘Exploring the Role of AI in Modern Legal Practice: Opportunities, Challenges, and Ethical Implications’, *Journal of Electrical Systems*, 20(3). Available at: https://doi.org/10.52783/jes.3320
Murray, M. (2023) ‘Artificial Intelligence and the Practice of Law Part 1: Lawyers Must be Professional and Responsible Supervisors of AI’, *SSRN Electronic Journal*. Available at: https://doi.org/10.2139/ssrn.4478588
Perlman, A. (2024a) ‘The Legal Ethics of Generative AI’, *SSRN Electronic Journal*. Available at: https://doi.org/10.2139/ssrn.4735389
Perlman, A. (2024b) ‘The Implications of ChatGPT For Legal Services and Society’, *Michigan Technology Law Review*, 30(1). Available at: https://doi.org/10.36645/mtlr.30.1.implications
Snyder, H. (2019) ‘Literature review as a research methodology: An overview and guidelines’, *Journal of Business Research*, 104, pp. 333–339. Available at: https://doi.org/10.1016/j.jbusres.2019.07.039
Stahl, B. and Eke, D. (2024) ‘The ethics of ChatGPT – Exploring the ethical issues of an emerging technology’, *International Journal of Information Management*, 74, p. 102700. Available at: https://doi.org/10.1016/j.ijinfomgt.2023.102700
Terranova, C., Cestonaro, C., Fava, L. and Cinquetti, A. (2024) ‘AI and professional liability assessment in healthcare. A revolution in legal medicine?’, *Frontiers in Medicine*, 10. Available at: https://doi.org/10.3389/fmed.2023.1337335
The LegalTech Book (2020) ‘Lawyers’ Ethical Responsibility to Leverage AI in the Practice of Law’, in *The LegalTech Book*. Wiley. Available at: https://doi.org/10.1002/9781119708063.ch11
Wang, C., Liu, S., Yang, H., Guo, J., Wu, Y. and Liu, J. (2023) ‘Ethical Considerations of Using ChatGPT in Health Care’, *Journal of Medical Internet Research*, 25. Available at: https://doi.org/10.2196/48009
Wu, Y. and Wang, X. (2024) ‘Balancing Innovation and Regulation in the Age of Generative Artificial Intelligence’, *Journal of Information Policy*, 14. Available at: https://doi.org/10.5325/jinfopoli.14.2024.0012
Zahra, Y. (2025) ‘Regulating AI in Legal Practice: Challenges and Opportunities’, *Journal of Computer Science Application and Engineering (JOSAPEN)*, 3(1). Available at: https://doi.org/10.70356/josapen.v3i1.47
