Student perceptions of plagiarism in the age of AI: ethics, awareness and institutional policy effectiveness

Sarah Mitchell

Abstract

This dissertation synthesises contemporary research examining student perceptions of plagiarism in the context of generative artificial intelligence (AI) technologies, with particular focus on ethical considerations, awareness levels, and institutional policy effectiveness. Through systematic literature review methodology, this study analyses peer-reviewed publications from 2023 to 2025, a period marked by rapid AI tool proliferation in educational settings. Key findings reveal that whilst students generally uphold academic integrity values, significant uncertainty exists regarding the boundaries between legitimate AI assistance and academic misconduct. Students consistently condemn wholesale AI-generated submissions but exhibit considerable ambivalence towards intermediate uses such as paraphrasing, outlining, and language polishing. The research identifies several motivational factors driving AI-assisted misconduct, including academic pressure, time constraints, and ease of access. Critically, institutional policies are perceived as only moderately effective, with guidance frequently lagging behind technological developments. The dissertation concludes that addressing these challenges requires comprehensive ethics education, clearer definitional frameworks for AI-related plagiarism, and collaborative policy development involving students and faculty. These findings carry significant implications for higher education institutions navigating academic integrity in an increasingly AI-mediated landscape.

Introduction

The emergence of generative artificial intelligence tools, particularly large language models such as ChatGPT, has fundamentally disrupted traditional conceptions of academic integrity and plagiarism within higher education. Since the public release of ChatGPT in November 2022, universities worldwide have grappled with unprecedented challenges in maintaining academic standards whilst adapting to technological innovation that offers both pedagogical opportunities and integrity risks (Cotton, Cotton and Shipway, 2023). This technological shift has necessitated urgent reconsideration of what constitutes original academic work, authentic learning, and acceptable technological assistance.

Academic integrity has long served as a foundational pillar of higher education, underpinning the validity of academic credentials, the advancement of knowledge, and the development of professional competence among graduates. Traditional plagiarism—the unattributed use of others’ words or ideas—has been well-defined and addressed through established institutional mechanisms, including detection software, honour codes, and disciplinary procedures. However, AI-generated content presents qualitatively different challenges that existing frameworks were not designed to address (Eaton, 2023). Unlike conventional plagiarism, AI-assisted academic misconduct may not involve copying from identifiable human sources, rendering traditional detection methods partially ineffective and complicating questions of authorship and originality.

Understanding how students perceive these emerging forms of potential misconduct is crucial for several interconnected reasons. First, student attitudes and beliefs significantly influence their behavioural choices regarding AI use in academic work. Second, effective policy development requires insight into how students interpret and respond to existing guidelines. Third, educational interventions designed to promote integrity must address students’ actual knowledge gaps and ethical uncertainties rather than assumed deficiencies. The rapid pace of AI development has outstripped institutional responses, creating a regulatory vacuum that students must navigate with limited guidance.

This dissertation addresses these concerns by synthesising recent empirical research on student perceptions of AI-related plagiarism, examining the ethical frameworks students employ when evaluating AI use, assessing awareness levels regarding what constitutes misconduct, and evaluating the perceived effectiveness of institutional policies. The research is particularly timely given the exponential growth in academic literature addressing these questions since 2023, reflecting the urgency with which the educational community has engaged with AI-related integrity challenges.

The significance of this research extends beyond academic settings. As AI tools become increasingly integrated into professional environments, the ethical frameworks and digital literacy skills students develop during their education will shape their professional conduct. Furthermore, the credibility of academic qualifications depends upon robust integrity standards that accurately reflect graduates’ capabilities. If AI-mediated misconduct becomes normalised, the signalling function of educational credentials may be compromised, with broader implications for employers, professional bodies, and society.

Aim and objectives

Main aim

The primary aim of this dissertation is to critically examine and synthesise contemporary research evidence regarding student perceptions of plagiarism in the context of generative artificial intelligence, with particular attention to ethical considerations, awareness levels, and the perceived effectiveness of institutional policies.

Specific objectives

To achieve this aim, the dissertation pursues the following specific objectives:

1. To analyse how students perceive different types and degrees of AI use in academic work, identifying which practices are considered acceptable, ambiguous, or clearly unethical.

2. To examine the ethical frameworks and motivational factors that influence student decisions regarding AI use in academic contexts.

3. To assess student awareness and understanding of what constitutes AI-related plagiarism and the boundaries of legitimate AI assistance.

4. To evaluate the perceived effectiveness of institutional academic integrity policies in addressing AI-related misconduct from both student and faculty perspectives.

5. To identify recommendations for policy development, educational intervention, and future research based on the synthesised evidence.

Methodology

This dissertation employs a systematic literature synthesis methodology to examine student perceptions of AI-related plagiarism. Literature synthesis, sometimes termed integrative review or narrative synthesis, represents an established approach for consolidating research evidence across multiple studies to identify patterns, contradictions, and knowledge gaps within a defined field (Snyder, 2019). This methodology is particularly appropriate given the nascent and rapidly evolving nature of research on AI and academic integrity, where synthesising emerging findings can provide valuable insights for policy and practice.

Search strategy and source identification

The primary sources for this review comprise peer-reviewed academic publications from 2023 to 2025, a period selected to capture research conducted following the widespread adoption of generative AI tools in educational settings. Sources were identified through systematic database searches and supplemented with targeted searches of specialist educational technology and academic integrity journals. The research summary provided for this dissertation, generated through the Consensus research platform, served as a structured foundation for source identification, supporting broad coverage of relevant recent literature.

Inclusion criteria required sources to be peer-reviewed, published in English, and directly addressing student perceptions, attitudes, or behaviours regarding AI use and academic integrity. Exclusion criteria eliminated opinion pieces, conference abstracts without full papers, and publications focused exclusively on AI detection technology without addressing student perspectives. Additional high-quality sources from university websites, government publications, and reputable educational organisations were consulted to contextualise findings within broader policy frameworks.
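
To illustrate how these criteria operate in practice, the sketch below encodes the screening step in Python. This is a minimal illustrative sketch only: the record fields, example titles, and criteria flags are hypothetical stand-ins, not the actual screening instrument used for this review.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A candidate source from the database searches (fields are illustrative)."""
    title: str
    year: int
    peer_reviewed: bool
    language: str
    addresses_student_perceptions: bool
    opinion_piece: bool
    abstract_only: bool
    detection_tech_only: bool

def meets_criteria(r: Record) -> bool:
    """Apply the review's stated inclusion and exclusion criteria to one record."""
    included = (
        r.peer_reviewed
        and r.language == "English"
        and 2023 <= r.year <= 2025
        and r.addresses_student_perceptions
    )
    excluded = r.opinion_piece or r.abstract_only or r.detection_tech_only
    return included and not excluded

# Example: screen a small candidate pool down to the included set.
# Both records below are hypothetical.
pool = [
    Record("AI-giarism perceptions", 2024, True, "English", True, False, False, False),
    Record("Detector benchmark", 2024, True, "English", False, False, False, True),
]
included = [r for r in pool if meets_criteria(r)]
print([r.title for r in included])  # ['AI-giarism perceptions']
```

Encoding the criteria in this explicit form makes each screening decision inspectable and repeatable, mirroring the transparency aims of systematic review methodology.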

Data extraction and analysis

Data extraction focused on identifying key themes across the literature, including student attitudes towards different forms of AI use, reported motivations for AI-assisted misconduct, awareness levels regarding institutional policies and definitions, and evaluations of policy effectiveness. Thematic analysis principles guided the synthesis, with findings organised according to recurring conceptual categories that emerged from the literature.
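
As a concrete illustration of the aggregation step, the sketch below tallies themes across coded studies so that recurring conceptual categories stand out. It assumes the interpretive coding has already been carried out by the researcher; the study names and theme labels here are illustrative examples mirroring the categories named above, not the actual coding frame.

```python
from collections import Counter

# Each entry pairs a study with the themes coded during data extraction.
# The codes are illustrative labels, not a validated coding scheme.
coded_studies = {
    "Chan (2024)": ["acceptability_typology", "awareness_gaps"],
    "Sarwar et al. (2025)": ["awareness_gaps", "behavioural_outcomes"],
    "Sozon et al. (2024)": ["motivations", "policy_effectiveness"],
    "Goel and Nelson (2024)": ["policy_effectiveness"],
}

# Count how many studies support each theme, so recurring categories
# (the backbone of the synthesis) stand out from one-off observations.
theme_counts = Counter(t for themes in coded_studies.values() for t in themes)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} study/studies")
```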

Methodological limitations

Several limitations warrant acknowledgement. The rapid publication pace in this field means that some relevant studies may not have been captured. The geographical distribution of included studies, whilst international, may not fully represent student perspectives across all educational contexts. Additionally, the reliance on self-reported data in many primary studies introduces potential social desirability bias, as students may underreport behaviours they perceive as unacceptable.

Literature review

Student perceptions of AI use and plagiarism

Contemporary research reveals a complex and often contradictory landscape of student attitudes towards AI use in academic work. A central finding across multiple studies is that students generally value academic integrity as a principle whilst simultaneously expressing uncertainty about where legitimate AI support ends and plagiarism begins (Alawad, Ayadi and Alhinai, 2025; Chan, 2024). This ambiguity is intensified by the rapid evolution of AI tools, inconsistent institutional policies, and limited systematic education on ethical AI use in academic contexts.

Notably, many students perceive plagiarism, including AI-assisted forms, as relatively common and in some contexts normalised academic behaviour. Research conducted in Russian higher education found that students often view various forms of academic dishonesty as standard practice rather than exceptional transgression (Sysoyev, 2024). Broader literature reviews have similarly identified normalisation patterns across diverse educational settings (Sozon et al., 2024). This normalisation effect represents a significant challenge for integrity promotion efforts, as behavioural norms established among peer groups powerfully influence individual choices.

Students demonstrate marked divisions regarding whether AI use constitutes academic misconduct or represents an acceptable technological enhancement analogous to calculators or spell-checkers. Even students who profess strong commitment to integrity principles express uncertainty about AI-specific boundaries (Alawad, Ayadi and Alhinai, 2025; Kotsis, 2024). This suggests that the conceptual frameworks students apply to traditional plagiarism do not straightforwardly transfer to AI-mediated contexts, necessitating explicit guidance and education.

Typology of AI use and perceived acceptability

A consistent pattern across the literature is that students differentiate between types and degrees of AI use, with acceptability judgements varying substantially across use scenarios. Research by Chan (2024) introduced the concept of “AI-giarism” to capture this spectrum of AI-related academic misconduct, recognising that student perceptions depend heavily on the nature and extent of AI involvement.

Whole-paper generation, where students submit AI-created text with minimal or no personal contribution, attracts clear disapproval across studies. Students consistently characterise this practice as unethical, recognising it as fundamentally incompatible with authentic learning and fair assessment (Alawad, Ayadi and Alhinai, 2025; Cotton, Cotton and Shipway, 2023; Chan, 2024). This consensus suggests that students retain a core understanding that academic work should represent their own intellectual effort.

However, intermediate uses provoke considerably more ambivalence. Heavy AI rewriting and paraphrasing of student-drafted content occupies a grey zone where student opinions diverge significantly. Many students underestimate the extent to which such practices might constitute misconduct, particularly when they perceive themselves as having contributed initial ideas or structure (Alawad, Ayadi and Alhinai, 2025; Shin, Wei and Vanchinkhuu, 2025; Kotsis, 2024). This finding is concerning because substantial AI paraphrasing may effectively obscure the AI’s contribution whilst allowing students to present polished work that does not reflect their actual writing capabilities.

At the other end of the spectrum, uses such as idea generation, outlining assistance, and language polishing are widely regarded as acceptable, provided the student contributes substantively to the final work. Students frequently draw analogies to other forms of legitimate assistance, such as peer feedback, writing centre consultations, or grammar-checking software (Sysoyev, 2024; Shin, Wei and Vanchinkhuu, 2025; Chan, 2024). The critical distinction students often invoke concerns whether AI serves as a tool supporting human effort or as a substitute for it.

Ethical frameworks and motivational factors

Understanding why students engage with AI in ways that may compromise integrity requires examination of both ethical reasoning and practical motivations. Research consistently identifies grade pressure as a primary driver, with students perceiving AI as offering competitive advantages in high-stakes assessment environments. Time constraints, particularly among students balancing academic, employment, and personal responsibilities, further incentivise efficiency-maximising strategies that may include problematic AI use (Alawad, Ayadi and Alhinai, 2025; Sozon et al., 2024).

Skill deficits represent another significant motivational factor. Students who struggle with writing, research, or disciplinary content may view AI as compensating for perceived inadequacies rather than as a means of circumventing learning. This framing can enable rationalisation of AI use as necessary accommodation rather than dishonesty (Yakovenko and Yakovenko, 2025). Additionally, the unprecedented ease of access to generative AI tools reduces barriers that previously constrained academic misconduct, creating opportunity structures that some students exploit.

Regarding ethical reasoning, students often lack systematic understanding of what constitutes AI plagiarism and how to use AI “legally” with appropriate citation and transparency (Sysoyev, 2024; Kotsis, 2024). This knowledge gap means that even students with genuine integrity commitments may inadvertently violate norms they do not fully comprehend. Conversely, many students express authentic ethical concern about authenticity and deception, particularly regarding AI use that enables them to feign competence, inflate grades unfairly, or bypass genuine learning processes (Alawad, Ayadi and Alhinai, 2025; Yakovenko and Yakovenko, 2025). These concerns suggest that ethical education emphasising the purposes of assessment and the value of skill development may resonate with student values.

Awareness, education, and behavioural outcomes

A critical finding with practical implications is that awareness of academic integrity principles is associated with lower plagiarism rates among frequent AI users. Sarwar et al. (2025) found that students with greater understanding of integrity concepts, even those who regularly used ChatGPT, exhibited fewer misconduct behaviours. This correlation suggests that knowledge and awareness function as protective factors, supporting investment in educational interventions rather than purely punitive approaches.

However, the literature also reveals substantial gaps in student knowledge regarding AI-specific integrity expectations. Many students report receiving inadequate guidance on acceptable AI use, leaving them to navigate ambiguous territory based on personal judgement or peer norms that may not align with institutional expectations (Bissessar, 2025). This uncertainty extends to practical questions such as whether and how to cite AI assistance, what disclosure is required, and how different instructors may interpret policies.

Cross-cultural research has identified variation in student perceptions across educational contexts. Comparative studies examining learners and instructors in Korea, Mongolia, and China found differing attitudes towards digital plagiarism in the AI era, suggesting that cultural factors, educational traditions, and local policy environments shape student perceptions in important ways (Shin, Wei and Vanchinkhuu, 2025). These findings caution against assuming universal student attitudes and highlight the importance of context-sensitive policy development.

Institutional policies and perceived effectiveness

The literature reveals substantial concerns regarding the effectiveness of current institutional approaches to AI-related academic integrity. Both faculty and students rate existing policies, whether addressing traditional plagiarism or generative AI specifically, as only moderately effective. A common criticism concerns unclear guidance on grey-area AI uses that fall between obvious misconduct and clearly acceptable assistance (Alsharefeen and Sayari, 2025; Goel and Nelson, 2024; Sozon et al., 2024).

Analysis of institutional anti-plagiarism policies suggests many “lack teeth” and have failed to keep pace with tools that enable misconduct. Goel and Nelson (2024) examined internet-level data on policy presence and enforcement, finding that policies frequently remain aspirational rather than operationally effective. This policy-practice gap undermines deterrence and contributes to perceptions that misconduct carries limited risk of detection or sanction.

Educators generally favour educative over punitive responses to AI-related integrity concerns. However, they report significant barriers to implementation, including increased workload for assessing potential AI use, limited reliability of detection tools, and cultural norms within institutions that may not prioritise integrity enforcement (Alsharefeen and Sayari, 2025; Cotton, Cotton and Shipway, 2023; Bing and Leong, 2025). Detection technology, whilst improving, remains imperfect and can generate both false positives (incorrectly flagging human-written work) and false negatives (failing to identify AI-generated content), complicating disciplinary decision-making.

Systematic reviews of the literature consistently recommend integrated approaches combining multiple strategies rather than reliance on any single intervention. Mpolomoka et al. (2025) and Bing and Leong (2025) emphasise that effective responses require pairing AI detection tools with policy reform and comprehensive ethics education. Sozon et al. (2024) similarly conclude that technological solutions alone are insufficient without accompanying cultural and pedagogical changes that address underlying motivations and knowledge gaps.

The timeline of academic publications addressing AI and plagiarism demonstrates dramatically increased scholarly attention since 2023, with publication frequency accelerating through 2024 and into 2025. This rapid growth reflects both the urgency of the challenge and the recognition among researchers that existing knowledge requires substantial expansion to guide institutional responses effectively.

Discussion

The ambiguity challenge

The findings synthesised in this dissertation reveal that ambiguity represents perhaps the central challenge in addressing AI-related plagiarism from a student perspective. Unlike traditional plagiarism, which involves relatively clear conceptual boundaries around copying and attribution, AI-assisted misconduct exists on a continuum without established demarcations. Students’ consistent differentiation between whole-paper generation, heavy rewriting, and idea assistance reflects genuine uncertainty rather than strategic obfuscation.

This ambiguity carries significant implications for policy development. Policies that articulate only general prohibitions on AI use or academic dishonesty may fail to provide actionable guidance for students navigating specific decisions. The evidence suggests that students require explicit instruction regarding particular use cases, ideally illustrated with concrete examples distinguishing acceptable from unacceptable practices. The concept of “AI-giarism” proposed by Chan (2024) offers a potentially useful framework for developing such nuanced guidance, though operationalising it within institutional policy remains challenging.

The knowledge-behaviour relationship

The finding that awareness of academic integrity correlates with reduced misconduct among AI users (Sarwar et al., 2025) provides important support for educational approaches to integrity promotion. This relationship suggests that students who understand what constitutes misconduct and why it matters are more likely to make ethical choices, even when technological tools make misconduct easier. However, the correlational nature of this finding means that causation cannot be definitively established; it remains possible that students predisposed to integrity are both more likely to seek out knowledge and to avoid misconduct.

Nevertheless, the practical implications support investment in comprehensive ethics education that addresses AI-specific scenarios. Such education should extend beyond rule enumeration to engage students with the underlying purposes of academic integrity—ensuring that credentials reflect genuine competence, that assessment fairly differentiates student achievement, and that the learning process develops capabilities students will need beyond their studies. Connecting integrity to students’ own interests in meaningful credentials and genuine skill development may prove more effective than appeals to abstract principles or fear of sanctions.

Policy effectiveness and institutional adaptation

The moderate effectiveness ratings assigned to current policies by both students and faculty indicate significant room for improvement. Several factors appear to contribute to policy limitations. First, policies frequently lag behind technological developments, addressing yesterday’s challenges whilst students encounter tomorrow’s tools. The pace of AI advancement likely exceeds the capacity of traditional governance processes to respond, suggesting that institutions may need more agile mechanisms for policy development and revision.

Second, enforcement challenges undermine policy credibility. When students perceive that misconduct is unlikely to be detected or sanctioned, deterrence effects are diminished regardless of policy stringency on paper. The limitations of current AI detection tools, combined with resource constraints on thorough investigation, create enforcement gaps that may be widely recognised within student communities.

Third, the finding that educators favour educative over punitive responses, whilst potentially pedagogically sound, may contribute to perceptions that policies “lack teeth” if not accompanied by clear consequences for serious violations. Balancing developmental approaches for ambiguous or first-time cases with meaningful sanctions for deliberate, substantial misconduct presents an ongoing challenge.

The literature’s consistent recommendation for integrated approaches combining detection, education, and policy reform appears well-supported by the evidence. Institutions that rely primarily on any single strategy—whether technological detection, honour codes, or punitive enforcement—are likely to achieve only partial success. Comprehensive approaches that address student knowledge, motivations, and opportunity structures offer better prospects for promoting genuine integrity rather than mere compliance.

Addressing objectives

Returning to the dissertation’s stated objectives, the synthesised evidence provides substantial insight into each. On the first objective, concerning student perceptions of different AI uses, the research clearly demonstrates that acceptability judgements vary with the nature and extent of AI involvement: whole-paper generation is condemned, whereas intermediate uses provoke ambivalence. On the second, concerning ethical frameworks and motivations, the evidence identifies grade pressure, time constraints, skill deficits, and ease of access as key drivers, alongside genuine ethical concern about authenticity among many students. On the third, concerning awareness, the literature reveals substantial knowledge gaps regarding AI-specific integrity expectations, together with evidence that greater awareness correlates with reduced misconduct. On the fourth, concerning policy effectiveness, studies consistently report moderate effectiveness ratings and identify specific policy limitations. The fifth objective, identifying recommendations, is met by the collective evidence pointing towards comprehensive ethics education, clearer definitional frameworks, and collaborative policy development.

Limitations and future research directions

Several limitations of the current evidence base warrant acknowledgement. Much of the research relies on self-reported data, introducing potential biases. The geographical distribution of studies, whilst international, may not capture perspectives across all educational contexts. The rapid pace of AI development means that findings may quickly become dated as tools and student practices evolve.

Future research should address several priorities. Longitudinal studies tracking how student perceptions and behaviours change over time as AI tools mature and policies develop would provide valuable insights. Experimental research evaluating the effectiveness of specific educational interventions could inform evidence-based practice. Comparative studies examining institutional approaches across different policy environments could identify effective practices for broader adoption. Additionally, research examining the perspectives of students from diverse backgrounds, including those with learning differences who may benefit from AI assistance, could inform more equitable policy development.

Conclusions

This dissertation has examined student perceptions of plagiarism in the age of artificial intelligence through systematic synthesis of contemporary research evidence. The findings reveal a complex landscape characterised by genuine commitment to integrity principles coexisting with substantial uncertainty regarding AI-specific boundaries.

The first objective—analysing student perceptions of different AI uses—has been achieved through identification of a clear typology distinguishing condemned practices (whole-paper generation) from ambiguous uses (heavy rewriting) and generally accepted applications (idea assistance). This typology provides a foundation for developing clearer institutional guidance.

The second objective—examining ethical frameworks and motivations—has been addressed through identification of grade pressure, time constraints, skill deficits, and opportunity as key drivers of potentially problematic AI use, alongside evidence of genuine student concern about authenticity and deception. These findings suggest that effective interventions must address practical motivations whilst engaging students’ existing ethical commitments.

The third objective—assessing student awareness—has been achieved through identification of substantial knowledge gaps regarding AI-specific integrity expectations, coupled with evidence that greater awareness functions as a protective factor against misconduct. This finding provides strong support for investment in comprehensive ethics education.

The fourth objective—evaluating policy effectiveness—has been addressed through synthesis of evidence indicating moderate effectiveness ratings and identification of specific limitations including unclear grey-area guidance, enforcement challenges, and failure to keep pace with technological change.

The fifth objective—identifying recommendations—emerges from the collective evidence. Institutions should prioritise comprehensive ethics education that addresses AI-specific scenarios, develop clearer definitional frameworks distinguishing acceptable from unacceptable AI use, adopt collaborative approaches to policy development involving students and faculty, maintain integrated strategies combining detection, education, and appropriate sanctions, and establish mechanisms for agile policy revision as AI tools continue to evolve.

The significance of these findings extends beyond immediate policy implications. As AI becomes increasingly embedded in professional and personal contexts, the ethical frameworks students develop during their education will shape their lifelong engagement with these technologies. Higher education institutions bear responsibility not only for protecting academic credentials but for preparing students to navigate AI-mediated environments with integrity. Meeting this responsibility requires moving beyond reactive, punitive approaches towards proactive, educative strategies that equip students with the knowledge and ethical reasoning to make sound choices.

Future research should examine the effectiveness of specific educational interventions, track changes in student perceptions over time, and explore how institutions can develop sufficiently agile governance mechanisms to address rapidly evolving technological landscapes. The challenge of maintaining academic integrity in the AI era is unlikely to diminish; indeed, as AI capabilities expand, the questions raised in this dissertation will only become more pressing. The evidence synthesised here provides a foundation for informed institutional responses, but ongoing attention, research, and adaptation will be essential.

References

Ajwang, S. and Ikoha, A., 2024. Publish or perish in the era of artificial intelligence: which way for the Kenyan research community? *Library Hi Tech News*. https://doi.org/10.1108/lhtn-04-2024-0065

Alawad, E., Ayadi, H. and Alhinai, A., 2025. Guarding Integrity: A Case Study on Tackling AI-Generated Content and Plagiarism in Academic Writing. *Theory and Practice in Language Studies*. https://doi.org/10.17507/tpls.1506.02

Alsharefeen, R. and Sayari, N., 2025. Examining academic integrity policy and practice in the era of AI: a case study of faculty perspectives. *Frontiers in Education*. https://doi.org/10.3389/feduc.2025.1621743

Anđelić, K., 2025. Artificial intelligence and plagiarism in student writing. *Book of Proceedings*. https://doi.org/10.46793/edai24.069a

Aryal, M. and Sharma, A., 2025. Plagiarism and Artificial Intelligence: Balancing Innovation with Academic Integrity. *Nepalese Heart Journal*. https://doi.org/10.3126/nhj.v22i1.78326

Bing, Z. and Leong, W., 2025. AI on Academic Integrity and Plagiarism Detection. *ASM Science Journal*. https://doi.org/10.32802/asmscj.2025.1918

Bissessar, C., 2025. An exploration of students’ perceptions of artificial intelligence and plagiarism at a higher education institution. *Equity in Education & Society*. https://doi.org/10.1177/27526461251326919

Chan, C., 2024. Students’ perceptions of ‘AI-giarism’: investigating changes in understandings of academic misconduct. *Education and Information Technologies*, 30, pp. 8087-8108. https://doi.org/10.1007/s10639-024-13151-7

Cotton, D., Cotton, P. and Shipway, J., 2023. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. *Innovations in Education and Teaching International*, 61, pp. 228-239. https://doi.org/10.1080/14703297.2023.2190148

Dupps, W., 2023. Artificial intelligence and academic publishing. *Journal of Cataract and Refractive Surgery*, 49(7), pp. 655-656. https://doi.org/10.1097/j.jcrs.0000000000001223

Eaton, S., 2023. Postplagiarism: transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. *International Journal for Educational Integrity*, 19, pp. 1-10. https://doi.org/10.1007/s40979-023-00144-1

Goel, R. and Nelson, M., 2024. Do college anti-plagiarism/cheating policies have teeth in the age of AI? Exploratory evidence from the Internet. *Managerial and Decision Economics*. https://doi.org/10.1002/mde.4139

Ibrahim, H., Liu, F., Asim, R., Battu, B., Benabderrahmane, S., Alhafni, B., Adnan, W., Alhanai, T., AlShebli, B., Baghdadi, R., Bélanger, J., Beretta, E., Çelik, K., Chaqfeh, M., Daqaq, M., Bernoussi, Z., Fougnie, D., De Soto, B., Gandolfi, A., Gyorgy, A., Habash, N., Harris, J., Kaufman, A., Kirousis, L., Kocak, K., Lee, K., Lee, S., Malik, S., Maniatakos, M., Melcher, D., Mourad, A., Park, M., Rasras, M., Reuben, A., Zantout, D., Gleason, N., Makovi, K., Rahwan, T. and Zaki, Y., 2023. Perception, performance, and detectability of conversational artificial intelligence across 32 university courses. *Scientific Reports*, 13. https://doi.org/10.1038/s41598-023-38964-3

Kotsis, K., 2024. Artificial Intelligence Creates Plagiarism or Academic Research? *European Journal of Arts, Humanities and Social Sciences*. https://doi.org/10.59324/ejahss.2024.1(6).18

Mpolomoka, D., Luchembe, M., Mushibwe, C., Muvombo, M., Changala, M., Sampa, R. and Banda, S., 2025. Artificial intelligence-related plagiarism in education: a systematic review. *European Journal of Education Studies*. https://doi.org/10.46827/ejes.v12i4.6029

Quality Assurance Agency for Higher Education, 2023. *Contracting to cheat in higher education: how to address essay mills and contract cheating*. Gloucester: QAA.

Sarwar, S., Bushra, M., Ullah, Z. and Hadi, S., 2025. Plagiarism in the Age of AI: Exploring the Role of ChatGPT in Student Writing and Academic Integrity. *Journal of Information Systems Engineering and Management*. https://doi.org/10.52783/jisem.v10i14s.2404

Shin, Y., Wei, S. and Vanchinkhuu, N., 2025. Digital Plagiarism in EFL Education during the AI Era: A Comparative Study of Perceptions among Learners and Instructors in Korea, Mongolia, and China. *LEARN Journal: Language Education and Acquisition Research Network*. https://doi.org/10.70730/rmka9428

Snyder, H., 2019. Literature review as a research methodology: an overview and guidelines. *Journal of Business Research*, 104, pp. 333-339.

Sozon, M., Alkharabsheh, O., Fong, P. and Chuan, S., 2024. Cheating and plagiarism in higher education institutions (HEIs): A literature review. *F1000Research*, 13. https://doi.org/10.12688/f1000research.147140.2

Sysoyev, P., 2024. Ethics and AI-Plagiarism in an Academic Environment: Students’ Understanding of Compliance with Author’s Ethics and the Problem of Plagiarism in the Process of Interaction with Generative Artificial Intelligence. *Vysshee Obrazovanie v Rossii = Higher Education in Russia*. https://doi.org/10.31992/0869-3617-2024-33-2-31-53

Yakovenko, T. and Yakovenko, K., 2025. Artificial intelligence in pedagogical education: results of a student survey. *PRIMO ASPECTU*. https://doi.org/10.35211/2500-2635-2025-1-61-61-66

