Abstract
The rapid proliferation of generative artificial intelligence (GenAI) tools in higher education has fundamentally challenged established conceptualisations of academic integrity, plagiarism, and authorship. This dissertation synthesises contemporary research to examine how GenAI reshapes misconduct definitions, detection capabilities, and institutional policy responses. Through a systematic literature synthesis of peer-reviewed publications from 2023 to 2025, this study identifies three critical findings: first, traditional plagiarism detection tools demonstrate significant limitations when confronting AI-generated content, with dedicated AI detectors exhibiting problematic rates of false positives and negatives; second, institutional policies remain fragmented and inconsistent, with fewer than ten per cent of institutions initially providing clear guidance on GenAI use; and third, an emerging scholarly consensus advocates reframing academic integrity from a punitive, detection-focused paradigm towards an educative approach centred on AI literacy, ethical transparency, and responsible use. The research concludes that effective responses require assessment redesign, culturally sensitive policies clarifying acceptable AI collaboration, and institution-wide literacy initiatives. These findings hold significant implications for policymakers, educators, and academic integrity officers navigating this evolving landscape.
Introduction
The emergence of sophisticated generative artificial intelligence technologies, particularly large language models (LLMs) such as ChatGPT, has precipitated what many scholars characterise as an unprecedented crisis in academic integrity within higher education institutions globally. Since November 2022, when OpenAI released ChatGPT to the public, universities have grappled with fundamental questions about the nature of original work, the boundaries of acceptable assistance, and the very meaning of academic integrity in an age where artificial intelligence can produce coherent, contextually appropriate text indistinguishable from human writing (Perkins, 2023).
This technological disruption occurs within an educational context already under considerable pressure. The COVID-19 pandemic accelerated digital transformation in higher education, normalising remote assessment and online submission processes that inadvertently created new vulnerabilities for academic misconduct. The subsequent introduction of freely accessible, powerful text generation tools has compounded these challenges, enabling what researchers have termed “AI-giarism”—the presentation of AI-generated output as one’s own original work (Chan, 2024). This phenomenon fundamentally blurs traditional distinctions between plagiarism, collaboration, and legitimate assistance, rendering existing definitions and detection mechanisms increasingly inadequate.
The academic and practical significance of this issue cannot be overstated. Academic integrity constitutes a foundational pillar of higher education, underpinning the validity of qualifications, the trustworthiness of research outputs, and the development of graduate competencies essential for professional practice. When students submit AI-generated work as their own without appropriate acknowledgement, they potentially compromise not only their own learning but also the broader credibility of academic credentials. Moreover, the inequitable access to GenAI tools and the varied digital literacies among student populations risk exacerbating existing educational disparities.
Institutionally, the challenge extends beyond individual misconduct cases to encompass systemic questions about assessment design, pedagogical approaches, and the very purpose of higher education in an AI-augmented world. Faculty members find themselves navigating unclear institutional expectations whilst simultaneously adapting their teaching practices and assessment strategies. Students, meanwhile, operate within environments where the boundaries of acceptable AI use remain ambiguous, creating conditions ripe for both inadvertent and deliberate violations of academic integrity standards.
This dissertation addresses these pressing concerns through a comprehensive synthesis of contemporary research, examining how generative AI transforms understandings of plagiarism and misconduct, evaluating the capabilities and limitations of current detection approaches, analysing the evolution of institutional policies, and exploring emerging frameworks for redefining academic integrity in educative rather than purely punitive terms.
Aim and objectives
Aim
This dissertation aims to critically examine the impact of generative artificial intelligence on plagiarism detection and academic integrity policy evolution within higher education, synthesising contemporary research to inform institutional responses and pedagogical practices.
Objectives
To achieve this aim, the following specific objectives guide this research:
1. To analyse how generative AI technologies fundamentally alter traditional conceptualisations of plagiarism, authorship, and academic misconduct in higher education contexts.
2. To evaluate the capabilities and limitations of current plagiarism and AI detection tools in identifying AI-generated content, including text similarity software and dedicated AI detectors.
3. To examine the evolution of institutional policies regarding generative AI use, identifying patterns of response, gaps in guidance, and variations in approach across higher education institutions.
4. To investigate faculty and student perceptions of AI-assisted academic work, exploring divergent understandings of acceptable use and the implications for policy implementation.
5. To synthesise emerging frameworks that reconceptualise academic integrity from detection-focused approaches towards educative models emphasising AI ethics, transparency, and responsible use.
6. To propose evidence-based recommendations for institutional policy development, assessment redesign, and AI literacy initiatives that address the challenges posed by generative AI whilst supporting student learning.
Methodology
This dissertation employs a systematic literature synthesis methodology to examine the intersection of generative AI, academic integrity, and plagiarism detection in higher education. Literature synthesis represents an appropriate methodological approach for this research given the rapidly evolving nature of the field, the need to integrate findings across diverse institutional and geographical contexts, and the goal of informing policy and practice through comprehensive evidence review.
Search strategy and source identification
The research drew upon peer-reviewed academic literature published between February 2023 and August 2025, corresponding to the period following the public release of ChatGPT and the subsequent emergence of scholarly attention to GenAI’s implications for academic integrity. Primary sources were identified through systematic searches of academic databases, focusing on publications addressing generative AI, academic misconduct, plagiarism detection, and institutional policy responses within higher education contexts.
The core literature base comprises twenty peer-reviewed publications spanning multiple disciplines including education, information systems, ethics, and higher education policy. These sources were supplemented by authoritative institutional guidance documents and policy statements from universities and educational bodies to contextualise scholarly findings within practical implementation frameworks.
Inclusion and exclusion criteria
Sources were included if they: addressed generative AI in relation to academic integrity or plagiarism; focused on higher education contexts; were published in peer-reviewed journals or as peer-reviewed conference proceedings; and were available in English. Sources were excluded if they: focused exclusively on secondary or primary education; addressed AI in educational contexts unrelated to integrity concerns; represented opinion pieces without empirical or theoretical grounding; or originated from non-peer-reviewed blogs or websites of questionable credibility.
Analytical approach
The synthesis employed thematic analysis to identify patterns, convergences, and tensions within the literature. Key themes were inductively derived through iterative reading and coding of source materials, with findings subsequently organised according to the research objectives. This approach enabled both descriptive synthesis of existing knowledge and critical analysis of implications for policy and practice.
Particular attention was paid to the chronological development of the literature, recognising that scholarly understanding of GenAI’s implications has evolved rapidly. The temporal dimension of the analysis enabled identification of shifts in focus from initial panic and prohibition towards more nuanced, educative approaches.
Limitations
Several methodological limitations warrant acknowledgement. The rapid pace of technological and policy development means that some findings may require updating as new evidence emerges. The predominance of sources from Western, Anglophone contexts may limit generalisability to other educational traditions. Additionally, the reliance on published literature may underrepresent practitioner knowledge and institutional experiences not yet documented in scholarly outlets.
Literature review
The emergence of AI-giarism and transformed misconduct
The introduction of sophisticated large language models has generated a new category of academic misconduct that scholars have termed “AI-giarism”—the presentation of AI-generated output as one’s own original work. This phenomenon fundamentally challenges traditional conceptualisations of plagiarism, which historically centred on the appropriation of another human author’s words or ideas. Chan (2024) demonstrates that AI-giarism blurs conventional distinctions between authorship and assistance, creating definitional ambiguities that complicate both detection and adjudication.
Research indicates widespread student engagement with LLMs for academic purposes. Pudasaini et al. (2024) document extensive use of these tools for paraphrasing, homework completion, and assessment preparation. Significantly, a substantial minority of students regard such use as a normal study practice rather than academic misconduct, reflecting generational shifts in attitudes towards AI-assisted work. Yusuf, Pervin and Román-González (2024) corroborate these findings through multicultural research, identifying consistent patterns of GenAI adoption across diverse student populations whilst noting variations in perceptions of acceptability.
The drivers of AI-facilitated misconduct extend beyond simple temptation or moral failing. Song (2024) identifies multiple contributing factors including academic stress, skill deficits, weak engagement with learning, and institutional cultures that implicitly tolerate certain forms of technological assistance. This analysis suggests that attributing misconduct solely to individual student culpability oversimplifies a complex phenomenon shaped by systemic pressures and environmental conditions (Bittle and El-Gayar, 2025).
Ibrahim et al. (2023) provide particularly illuminating evidence through their study spanning thirty-two university courses, documenting both the pervasiveness of conversational AI use and the variable detectability of such use across different assessment types and disciplinary contexts. Their findings suggest that certain assessment formats prove significantly more vulnerable to undetected AI assistance than others, with implications for assessment design strategies.
Detection technologies: capabilities and constraints
The scholarly literature reveals substantial limitations in current approaches to detecting AI-generated content, challenging assumptions that technological solutions can effectively address the challenges posed by generative AI.
Traditional text similarity tools, exemplified by platforms such as Turnitin, were designed to identify textual matches against databases of previously submitted work and published sources. Whilst these tools retain utility for detecting conventional plagiarism involving direct copying, research consistently demonstrates their inadequacy for identifying AI-generated content. Bing and Leong (2025) document the fundamental mismatch between similarity-based detection and AI-generated text, which by definition creates novel outputs rather than reproducing existing sources. Baron (2024) extends this critique, questioning whether similarity scores retain meaningful value in an environment where students can readily generate original-appearing text through AI tools.
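The limitation described above can be made concrete with a small illustration. The following Python sketch approximates the core mechanism of similarity-based detection, counting how many word sequences in a submission reappear in a set of known sources; the function names and the idea of a local source corpus are illustrative assumptions, not a description of how Turnitin or any other commercial platform is implemented.

```python
# Minimal sketch of similarity-based detection (illustrative only; commercial
# tools match against very large proprietary databases with far more
# sophisticated normalisation and matching).

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of word n-grams occurring in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_score(submission: str, known_sources: list[str], n: int = 5) -> float:
    """Fraction of the submission's n-grams that also appear in known sources."""
    sub_grams = ngrams(submission, n)
    if not sub_grams:
        return 0.0
    source_grams = set().union(*(ngrams(source, n) for source in known_sources))
    return len(sub_grams & source_grams) / len(sub_grams)

# Copied text shares long word sequences with its source and therefore scores
# highly; freshly generated AI text shares almost none, so the score remains
# near zero even when little of the intellectual work is the student's own.
```

This is precisely the mismatch Bing and Leong (2025) identify: a tool built to find reproduced text has nothing to match when the text is newly composed.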
The emergence of dedicated AI detection tools, designed specifically to identify machine-generated text, initially appeared promising. However, empirical evaluations reveal significant performance limitations. Sharma and Panja (2025) document problematic rates of both false positives (human-written text incorrectly flagged as AI-generated) and false negatives (genuinely AI-generated content that goes unidentified). These reliability issues carry serious consequences: false positives risk unjust accusations against innocent students, whilst false negatives enable misconduct to proceed undetected.
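The practical weight of such error rates is easier to appreciate with some simple arithmetic. The sketch below applies hypothetical false positive and false negative rates to a hypothetical cohort; the figures are purely illustrative and are not drawn from Sharma and Panja (2025) or any other source cited in this dissertation.

```python
# Hypothetical cohort and error rates, chosen only to illustrate scale.
human_written = 1000        # submissions genuinely written by students
ai_generated = 200          # submissions produced with substantial AI use

false_positive_rate = 0.05  # assumed: 5% of human submissions flagged as AI
false_negative_rate = 0.20  # assumed: 20% of AI submissions not flagged

wrongful_flags = human_written * false_positive_rate    # honest students accused
missed_cases = ai_generated * false_negative_rate       # misconduct undetected

print(f"Wrongful flags: {wrongful_flags:.0f} of {human_written} honest submissions")
print(f"Missed cases:   {missed_cases:.0f} of {ai_generated} AI-assisted submissions")
# Even modest error rates translate into dozens of unjust accusations and
# dozens of undetected cases per cohort, which is why detector output alone
# cannot support confident adjudication.
```

On these assumed figures, a single marking cycle would produce fifty wrongful flags alongside forty undetected cases, illustrating why both kinds of error matter for fairness and deterrence alike.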
Furthermore, AI detection tools prove readily circumventable. Tirumala and Khwakhali (2025) demonstrate that relatively simple editing of AI-generated text substantially reduces detection accuracy, suggesting that determined students can easily evade technological surveillance. Zinchenko, Rezchikova and Tarapanova (2025) corroborate these findings within technical university contexts, documenting the ease with which learners can modify AI outputs to avoid detection.
The combination of human judgement with technological tools offers modest improvements in detection accuracy. Perkins et al. (2023) found that academic staff, when combining their professional judgement with software outputs, achieved better outcomes than either approach alone. However, Safi (2025) cautions that even combined approaches miss many cases of AI-assisted work, particularly when students employ sophisticated integration strategies.
These findings collectively suggest that detection-focused responses to generative AI face inherent limitations that no purely technological solution can overcome.
Institutional policy responses and their evolution
Research documents significant gaps and inconsistencies in institutional policy responses to generative AI. Perkins (2023), writing in the immediate aftermath of ChatGPT’s release, found that fewer than ten per cent of institutions had established clear guidance on AI use in academic work. This policy vacuum created confusion for both students seeking to understand acceptable practices and staff attempting to enforce standards.
Subsequent studies reveal a fragmented policy landscape characterised by diverse and sometimes contradictory approaches. Rana (2024), reviewing policies from selected higher education institutions, identifies substantial variation ranging from outright prohibition of AI tools to permissive frameworks encouraging their use. This heterogeneity reflects broader uncertainty about appropriate institutional responses and the absence of sector-wide consensus or guidance.
Alsharefeen and Sayari (2025) compare policies addressing traditional plagiarism with those governing GenAI use, finding marked differences in specificity and enforceability. Whilst policies for conventional plagiarism typically provide detailed definitions, clear sanction frameworks, and established procedural mechanisms, GenAI policies often remain abstract, fragmented, and only moderately effective in practice. This asymmetry creates implementation challenges, particularly when academic staff must interpret vague guidance in specific case contexts.
The evolution of policy approaches over the period 2023-2025 reveals a gradual shift from reactive prohibition towards more nuanced frameworks. Early responses frequently implemented blanket bans on AI tool use, an approach that Bittle and El-Gayar (2025) characterise as both impractical and pedagogically counterproductive. More recent policy developments demonstrate movement towards frameworks that differentiate acceptable from unacceptable uses, require disclosure and acknowledgement of AI assistance, and integrate AI literacy into academic integrity expectations.
Song (2024) argues that effective policies must address the systemic drivers of misconduct rather than focusing exclusively on detection and punishment. This perspective suggests that policy development should encompass assessment design, student support services, and pedagogical approaches alongside regulatory frameworks.
Faculty and student perspectives
Research reveals significant divergence between faculty and student perceptions of AI-assisted academic work, with implications for policy implementation and educational practice.
Students tend to perceive generative AI tools as legitimate forms of assistance, analogous to spell-checkers, grammar tools, or library databases. Yusuf, Pervin and Román-González (2024) document student views that AI tools enhance learning and productivity without necessarily compromising academic integrity. Chan (2024) explores how student understandings of AI-giarism differ from traditional plagiarism conceptions, with many students drawing distinctions based on the degree of AI involvement, the nature of human input, and the context of the task.
Faculty perspectives tend towards greater caution, particularly where institutional policies remain unclear. Duah and McGivern (2024) identify staff concerns about blurred authorial identity and disrupted academic norms, with many educators uncertain about how to respond when AI use is suspected but difficult to prove. This uncertainty generates inconsistent responses across and within institutions.
Notably, research indicates faculty preference for educative over purely punitive responses to suspected AI misuse. Alsharefeen (2025) conceptualises faculty members as “street-level bureaucrats” who exercise considerable discretion in implementing institutional policies. This discretionary authority enables adaptation to individual circumstances but also generates inconsistency that may undermine perceived fairness.
Alsharefeen and Sayari (2025) document faculty strategies including preventive measures, assessment redesign, and discretionary judgement-based responses. These approaches reflect recognition that detection and punishment alone cannot address the challenges posed by generative AI, supporting calls for systemic rather than reactive interventions.
Towards ethical and educative frameworks
A growing body of scholarship argues for fundamental reconceptualisation of academic integrity in response to generative AI, moving beyond narrow detection-focused approaches towards frameworks emphasising ethics, transparency, and responsible use.
Laflamme and Bruneault (2025) articulate this position most explicitly, contending that traditional academic integrity frameworks prove inadequate for the AI age. They propose integrating AI ethics into integrity education, helping students develop critical understanding of AI capabilities, limitations, and appropriate applications. This approach positions academic integrity not merely as rule-following but as ethical practice requiring judgement and reflection.
Ekaterina, Ana and Maia (2025) explore these themes within medical education, a context where integrity concerns intersect with patient safety and professional standards. Their analysis highlights the particular importance of ensuring that future healthcare professionals develop appropriate competencies for AI use alongside understanding of integrity requirements.
Effective responses emerging from the literature combine multiple elements. AI and academic integrity literacy for all stakeholders—students, faculty, and administrators—provides foundational understanding of both technological capabilities and ethical principles. Assessment redesign, incorporating project-based learning, oral examinations, and process-focused tasks, reduces opportunities for undetectable GenAI use whilst promoting deeper learning. Culturally sensitive policies that clarify acceptable AI collaboration and acknowledgement practices address the definitional ambiguities that currently generate confusion (Pudasaini et al., 2024; Chan, 2024).
Discussion
The synthesis of contemporary research reveals a higher education sector in transition, grappling with technological disruption that fundamentally challenges established approaches to academic integrity. This discussion critically analyses key findings in relation to the stated research objectives and explores their implications for institutional practice.
Reconceptualising plagiarism and misconduct
The emergence of AI-giarism necessitates substantial revision of traditional plagiarism definitions, which centred on the appropriation of identifiable human sources. When AI generates novel text rather than reproducing existing content, conventional understandings of “copying” become inadequate. This conceptual disruption extends beyond definitional technicalities to fundamental questions about authorship, originality, and the educational value of written assessments.
The research objective concerning transformed misconduct conceptualisations has been substantially addressed through the literature synthesis. Evidence demonstrates that GenAI creates genuinely new forms of academic dishonesty that existing frameworks were not designed to address. The distinction between human and machine authorship, previously unproblematic, has become central to integrity considerations. Institutions must now grapple with questions about the degree of AI involvement that constitutes misconduct, the relationship between AI assistance and student learning, and the implications for graduate capabilities.
The finding that students and faculty hold divergent views on acceptable AI use holds particular significance. Where students perceive AI tools as legitimate productivity aids, faculty concerns about authorship and academic norms create conditions for miscommunication and perceived injustice. Effective institutional responses must address these perceptual gaps through clear communication and stakeholder engagement.
The limitations of detection-focused approaches
The research strongly supports the conclusion that technological detection cannot provide a reliable solution to AI-facilitated misconduct. Traditional similarity tools fail to identify AI-generated content by design, whilst dedicated AI detectors exhibit error rates that preclude confident adjudication. The ease with which AI outputs can be modified to evade detection further undermines surveillance-based strategies.
These findings have profound implications for institutional approaches that prioritise detection and punishment. Substantial investment in detection technologies may prove ineffective and potentially counterproductive if it generates false confidence or diverts resources from more effective interventions. The combination of human judgement with technological tools offers modest improvements but cannot overcome fundamental detectability limitations.
This analysis does not suggest abandoning detection efforts entirely. Detection tools retain utility for identifying conventional plagiarism and may deter casual misuse of AI tools. However, institutions must recognise that detection alone cannot sustain academic integrity in the GenAI era, necessitating complementary approaches including assessment redesign and educative interventions.
Policy fragmentation and the need for coherence
The documented policy landscape reveals concerning fragmentation that undermines both student understanding and fair enforcement. When policies vary substantially across institutions—and sometimes within institutions across different departments or faculties—students face uncertainty about acceptable practice whilst staff lack consistent guidance for handling suspected misconduct.
The contrast between detailed, sanction-driven policies for traditional plagiarism and abstract, inconsistent approaches to GenAI use reflects the novelty of the challenge and the absence of established precedent. However, this asymmetry cannot persist indefinitely without generating inequity and confusion. Students and staff alike require clear, specific guidance on acceptable AI use, disclosure requirements, and consequences for violations.
The evolution observable in the literature, from initial prohibition towards more nuanced frameworks, suggests institutional learning occurring in real-time. Early blanket bans proved both unenforceable and pedagogically questionable, prompting movement towards differentiated approaches that distinguish legitimate from problematic AI use. This trajectory supports recommendations for agile, adaptable policies that can evolve as technological capabilities and educational understanding develop.
From policing to education
Perhaps the most significant finding concerns the emerging scholarly consensus that academic integrity must be reconceptualised from policing to education. This shift carries implications extending well beyond policy language to encompass fundamental assumptions about the purpose of integrity frameworks and their relationship to student learning.
The detection-focused paradigm assumes that integrity can be maintained through surveillance and punishment—identifying violations and applying consequences to deter future misconduct. Whilst deterrence retains some value, this approach proves inadequate when detection itself becomes unreliable and when the behaviours in question may not be clearly understood as violations by those engaged in them.
The educative paradigm reconceptualises integrity as a capability to be developed rather than merely a rule to be followed. Students require understanding of AI ethics, critical evaluation skills for AI outputs, and the ability to make contextually appropriate judgements about when and how AI assistance is appropriate. This framing positions integrity education as integral to graduate development rather than peripheral compliance training.
Implementation of educative approaches requires substantial institutional investment. AI literacy must be integrated across curricula, not confined to isolated workshops or policy documents. Faculty require support to redesign assessments in ways that promote learning whilst reducing vulnerability to AI-facilitated misconduct. Institutional cultures must evolve to value ethical AI use rather than merely prohibiting its misuse.
Implications for assessment practice
The literature consistently identifies assessment redesign as essential for maintaining integrity in the GenAI era. Traditional written assignments, completed independently outside supervised environments, prove particularly vulnerable to undetectable AI assistance. Alternative approaches including project-based learning, oral examinations, and process-focused assessments can reduce this vulnerability whilst potentially enhancing learning outcomes.
However, assessment redesign carries resource implications that institutions must acknowledge. Oral examinations require substantially more staff time than written assignment marking. Project-based learning demands different pedagogical approaches and assessment criteria. Process-focused assessments, which evaluate students’ working methods rather than merely final products, require new frameworks for documentation and evaluation.
These considerations suggest that effective responses to GenAI cannot be cost-neutral. Institutions must weigh the resource requirements of alternative assessment approaches against the risks to integrity and graduate capabilities posed by current practices.
Conclusions
This dissertation has examined the multifaceted impact of generative artificial intelligence on plagiarism detection and academic integrity policy within higher education, addressing each stated research objective through systematic synthesis of contemporary literature.
The first objective, concerning transformed conceptualisations of plagiarism and misconduct, has been achieved through analysis demonstrating how AI-giarism fundamentally challenges traditional definitions centred on human source appropriation. The emergence of AI as a content generator rather than content repository necessitates new frameworks addressing authorship, originality, and the educational value of written work.
The second objective, evaluating detection capabilities and limitations, has been comprehensively addressed through evidence demonstrating the inadequacy of both traditional similarity tools and dedicated AI detectors. False positive and negative rates, combined with easy circumvention through text modification, render detection-focused strategies unreliable as primary integrity safeguards.
The third and fourth objectives, examining policy evolution and stakeholder perceptions, have been achieved through documentation of fragmented institutional responses and divergent faculty-student understandings. The finding that fewer than ten per cent of institutions initially provided clear GenAI guidance, combined with evidence of continued policy inconsistency, highlights the urgent need for coherent frameworks.
The fifth objective, synthesising emerging reconceptualisation frameworks, has been addressed through analysis of scholarly arguments for shifting from punitive to educative approaches. The integration of AI ethics, transparency requirements, and responsible use principles into integrity frameworks offers a path forward that addresses both immediate concerns and longer-term graduate development.
The sixth objective, proposing evidence-based recommendations, is fulfilled through the following recommendations for institutional action:
First, institutions should develop clear, specific policies addressing GenAI use that distinguish acceptable from unacceptable practices, require appropriate disclosure and acknowledgement, and provide guidance for common scenarios. These policies require regular review and updating as technological capabilities evolve.
Second, detection tools should be understood as supplementary rather than definitive, with institutional processes designed to incorporate human judgement and avoid over-reliance on technological outputs that may generate false accusations or false confidence.
Third, assessment redesign should be prioritised as a proactive integrity measure, with investment in alternative approaches including oral examinations, project-based learning, and process-focused assessments that reduce vulnerability to AI-facilitated misconduct whilst enhancing learning.
Fourth, AI literacy education should be integrated across curricula for both students and staff, developing critical understanding of AI capabilities, limitations, and appropriate applications within academic and professional contexts.
Fifth, institutional cultures should evolve towards educative rather than purely punitive approaches to integrity, emphasising ethical development and responsible use rather than mere compliance and surveillance.
This research contributes to an evolving scholarly conversation that will require continued attention as generative AI capabilities develop and educational practices adapt. Future research should examine the long-term effectiveness of alternative assessment approaches, explore cultural variations in AI use perceptions and practices, and evaluate the impact of AI literacy initiatives on student understanding and behaviour. The challenge posed by generative AI ultimately invites deeper reflection on the purposes of higher education and the capabilities graduates require for professional and civic life in an AI-augmented world.
References
Alsharefeen, R. (2025) ‘Faculty as street-level bureaucrats: discretionary decision-making in the era of generative AI’, *Frontiers in Education*. https://doi.org/10.3389/feduc.2025.1662657
Alsharefeen, R. and Sayari, N. (2025) ‘Examining academic integrity policy and practice in the era of AI: a case study of faculty perspectives’, *Frontiers in Education*. https://doi.org/10.3389/feduc.2025.1621743
Baron, P. (2024) ‘Are AI detection and plagiarism similarity scores worthwhile in the age of ChatGPT and other Generative AI?’, *Scholarship of Teaching and Learning in the South*. https://doi.org/10.36615/sotls.v8i2.411
Bing, Z. and Leong, W. (2025) ‘AI on Academic Integrity and Plagiarism Detection’, *ASM Science Journal*. https://doi.org/10.32802/asmscj.2025.1918
Bittle, K. and El-Gayar, O. (2025) ‘Generative AI and Academic Integrity in Higher Education: A Systematic Review and Research Agenda’, *Information*. https://doi.org/10.3390/info16040296
Chan, C. (2024) ‘Students’ perceptions of “AI-giarism”: investigating changes in understandings of academic misconduct’, *Education and Information Technologies*, 30, pp. 8087-8108. https://doi.org/10.1007/s10639-024-13151-7
Duah, J. and McGivern, P. (2024) ‘How generative artificial intelligence has blurred notions of authorial identity and academic norms in higher education, necessitating clear university usage policies’, *The International Journal of Information and Learning Technology*. https://doi.org/10.1108/ijilt-11-2023-0213
Ekaterina, K., Ana, M. and Maia, Z. (2025) ‘Academic Integrity Within the Medical Curriculum in the Age of Generative Artificial Intelligence’, *Health Science Reports*, 8. https://doi.org/10.1002/hsr2.70489
Ibrahim, H., Liu, F., Asim, R., Battu, B., Benabderrahmane, S., Alhafni, B., Adnan, W., Alhanai, T., AlShebli, B., Baghdadi, R., Bélanger, J., Beretta, E., Çelik, K., Chaqfeh, M., Daqaq, M., Bernoussi, Z., Fougnie, D., De Soto, B., Gandolfi, A., Gyorgy, A., Habash, N., Harris, J., Kaufman, A., Kirousis, L., Kocak, K., Lee, K., Lee, S., Malik, S., Maniatakos, M., Melcher, D., Mourad, A., Park, M., Rasras, M., Reuben, A., Zantout, D., Gleason, N., Makovi, K., Rahwan, T. and Zaki, Y. (2023) ‘Perception, performance, and detectability of conversational artificial intelligence across 32 university courses’, *Scientific Reports*, 13. https://doi.org/10.1038/s41598-023-38964-3
Laflamme, A. and Bruneault, F. (2025) ‘Redefining Academic Integrity in the Age of Generative Artificial Intelligence: The Essential Contribution of Artificial Intelligence Ethics’, *Journal of Scholarly Publishing*, 56, pp. 481-509. https://doi.org/10.3138/jsp-2024-1125
Perkins, M. (2023) ‘Academic integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond’, *Journal of University Teaching and Learning Practice*. https://doi.org/10.53761/1.20.02.07
Perkins, M., Roe, J., Postma, D., McGaughran, J. and Hickerson, D. (2023) ‘Detection of GPT-4 Generated Text in Higher Education: Combining Academic Judgement and Software to Identify Generative AI Tool Misuse’, *Journal of Academic Ethics*, 22, pp. 89-113. https://doi.org/10.1007/s10805-023-09492-6
Pudasaini, S., Miralles-Pechuán, L., Lillis, D. and Salvador, M. (2024) ‘Survey on AI-Generated Plagiarism Detection: The Impact of Large Language Models on Academic Integrity’, *Journal of Academic Ethics*, 23, pp. 1137-1170. https://doi.org/10.1007/s10805-024-09576-x
Quality Assurance Agency for Higher Education (2023) *Artificial Intelligence*. Gloucester: QAA. Available at: https://www.qaa.ac.uk/membership/membership-resources/artificial-intelligence
Rana, N. (2024) ‘Generative AI and Academic Research: A Review of the Policies from Selected HEIs’, *Higher Education for the Future*, 12, pp. 97-113. https://doi.org/10.1177/23476311241303800
Russell Group (2024) *Russell Group Principles on the Use of Generative AI Tools in Education*. London: Russell Group.
Safi, R. (2025) ‘Detecting Plagiarism in the Age of Generative AI: An Exploratory Experiment’, *Communications of the Association for Information Systems*. https://doi.org/10.17705/1cais.05624
Sharma, R. and Panja, S. (2025) ‘Addressing Academic Dishonesty in Higher Education: A Systematic Review of Generative AI’s Impact’, *Open Praxis*. https://doi.org/10.55982/openpraxis.17.2.820
Song, N. (2024) ‘Higher education crisis: Academic misconduct with generative AI’, *Journal of Contingencies and Crisis Management*. https://doi.org/10.1111/1468-5973.12532
Tirumala, S. and Khwakhali, U. (2025) ‘Unethical Academic Practices through Ethical Tools’, *2025 10th International STEM Education Conference (iSTEM-Ed)*, pp. 1-6. https://doi.org/10.1109/istem-ed65612.2025.11129384
UNESCO (2023) *Guidance for Generative AI in Education and Research*. Paris: United Nations Educational, Scientific and Cultural Organization.
Yusuf, A., Pervin, N. and Román-González, M. (2024) ‘Generative AI and the future of higher education: a threat to academic integrity or reformation? Evidence from multicultural perspectives’, *International Journal of Educational Technology in Higher Education*, 21, pp. 1-29. https://doi.org/10.1186/s41239-024-00453-6
Zinchenko, L., Rezchikova, E. and Tarapanova, E. (2025) ‘Features of Plagiarism Checking in a Technical University Considering the Possibilities of Application of Generative Artificial Intelligence by Learners’, *Open Education*. https://doi.org/10.21686/1818-4243-2025-2-4-13
