Abstract
This dissertation examines the extent to which reliance on artificial intelligence (AI)-assisted writing tools affects originality, accountability, and authorship within higher education contexts. Employing a systematic literature synthesis methodology, the study analyses recent peer-reviewed research to understand how varying degrees of AI tool usage influence academic writing outcomes and integrity frameworks. Key findings reveal a threshold effect: when AI functions as a limited assistant for editing, feedback, and brainstorming, originality and clear authorship can be preserved. However, when AI assumes the role of a hidden co-author or ghostwriter, evidence consistently demonstrates diminished originality, reduced critical thinking engagement, and substantial complications regarding accountability and authorship attribution. The research identifies that over-reliance on AI tools risks skill erosion in argumentation and synthesis, potentially weakening independent academic literacy over time. Furthermore, authorship ambiguity emerges as a central concern, with risks including unattributed AI-generated content, fabricated citations, and factual inaccuracies. The dissertation concludes that structured institutional policies, comprehensive AI literacy programmes, and pedagogical task designs that foreground human judgement are essential for maintaining academic norms whilst harnessing the legitimate benefits of AI-assisted writing tools.
Introduction
The rapid proliferation of artificial intelligence-assisted writing tools has fundamentally transformed the landscape of academic writing in higher education. Since the public release of tools built on large language models, such as ChatGPT, in late 2022, educational institutions worldwide have grappled with unprecedented questions concerning the nature of authorship, the preservation of originality, and the maintenance of academic integrity (Perkins, 2023). These tools, capable of generating coherent, grammatically correct prose on virtually any topic within seconds, present both remarkable opportunities for learning support and profound challenges to established academic conventions.
Higher education has traditionally valued originality as a cornerstone of intellectual development, expecting students to demonstrate independent thought, critical analysis, and authentic engagement with subject matter. Written assessments serve not merely as evaluation instruments but as developmental processes through which students cultivate essential academic literacies, including argumentation, synthesis, and scholarly communication (Chanpradit, 2025). The introduction of AI tools capable of producing polished academic text therefore raises fundamental questions about whether these developmental objectives can be achieved when technology increasingly mediates the writing process.
The significance of this topic extends beyond pedagogical concerns to encompass broader issues of academic integrity and ethical conduct. Universities function as institutions of trust, certifying that graduates possess certain competencies and have demonstrated mastery of disciplinary knowledge. When AI tools can generate assessment submissions with minimal human input, this certification function becomes compromised, potentially devaluing academic credentials and undermining public confidence in higher education (Gustilo, Ong and Lapinid, 2024).
Furthermore, the question of authorship in the age of AI carries substantial implications for scholarly publishing, intellectual property, and professional accountability. If AI-generated text cannot be reliably attributed to human authors, questions arise regarding who bears responsibility for accuracy, who claims credit for insights, and who is accountable for errors or misconduct (Kotsis, 2025). These considerations affect not only students but also academics, researchers, and the broader knowledge ecosystem.
This dissertation addresses these interconnected concerns by synthesising the rapidly growing body of empirical research on AI-assisted writing in higher education. By examining evidence across multiple studies and contexts, this work seeks to provide a nuanced understanding of how different levels of AI reliance affect the core academic values of originality, accountability, and authorship.
Aim and objectives
The primary aim of this dissertation is to critically evaluate the extent to which reliance on AI-assisted writing tools affects originality, accountability, and authorship in higher education contexts.
To achieve this aim, the following objectives guide the investigation:
1. To synthesise existing empirical evidence regarding the effects of AI-assisted writing tools on student originality and creative expression in academic work.
2. To analyse how varying degrees of AI tool reliance influence critical thinking and independent academic skill development.
3. To examine the implications of AI-assisted writing for accountability frameworks and academic integrity structures in higher education.
4. To investigate the authorship ambiguities introduced by AI writing tools and evaluate proposed responses from educational institutions and scholarly publishing bodies.
5. To identify threshold conditions under which AI assistance supports rather than undermines academic values, and to propose evidence-based recommendations for policy and practice.
Methodology
This dissertation employs a systematic literature synthesis methodology to address the research aim and objectives. Literature synthesis represents an established approach for consolidating evidence across multiple studies to generate comprehensive understanding of complex phenomena (Grant and Booth, 2009). This methodology proves particularly appropriate given the rapidly evolving nature of research on AI in education, where synthesising diverse findings enables identification of consistent patterns and emerging consensus.
The literature search strategy prioritised peer-reviewed journal articles published between 2023 and 2025, capturing the surge of empirical research following the widespread adoption of generative AI tools in educational contexts. Sources were identified through academic databases including Scopus, Web of Science, and Google Scholar, supplemented by the Consensus AI-powered research search engine, which facilitated comprehensive identification of relevant studies. Search terms included combinations of “artificial intelligence,” “AI writing tools,” “academic writing,” “higher education,” “originality,” “authorship,” “academic integrity,” and “accountability.”
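To make the search logic transparent, the short sketch below (Python, purely illustrative; the exact field tags and operators differ across Scopus, Web of Science, and Google Scholar, so the grouping shown is an assumption for demonstration rather than the literal syntax submitted to any one database) reconstructs how the term groups were combined into a single Boolean string.

```python
# Illustrative reconstruction of the Boolean search logic used across
# databases; the grouping of terms is an assumption for demonstration.
tool_terms = ['"artificial intelligence"', '"AI writing tools"']
context_terms = ['"academic writing"', '"higher education"']
theme_terms = ['"originality"', '"authorship"',
               '"academic integrity"', '"accountability"']

def or_group(terms):
    """Join quoted terms into a parenthesised OR group."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_group(g) for g in
                     [tool_terms, context_terms, theme_terms])
print(query)
# ("artificial intelligence" OR "AI writing tools") AND
# ("academic writing" OR "higher education") AND
# ("originality" OR "authorship" OR "academic integrity" OR "accountability")
```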
Inclusion criteria specified that studies must: (a) focus on AI-assisted writing tools in higher education contexts; (b) address at least one of the core themes of originality, accountability, or authorship; (c) present empirical findings or substantive theoretical analysis; and (d) be published in English in peer-reviewed venues. Exclusion criteria eliminated opinion pieces without empirical grounding, studies focused exclusively on primary or secondary education, and papers addressing AI applications unrelated to writing assistance.
The analysis followed a thematic synthesis approach, wherein findings from included studies were coded according to the research objectives and subsequently organised into coherent thematic categories. This approach enabled identification of convergent findings across diverse methodological approaches and geographical contexts whilst also highlighting areas of disagreement or nuance within the literature.
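As a minimal illustration of this coding step, the sketch below tallies coded findings into thematic categories; the study labels and summary codes are hypothetical stand-ins for the actual extraction records, shown only to clarify the procedure.

```python
from collections import defaultdict

# Hypothetical coded findings in the form (study, theme, summary code);
# these rows are illustrative, not the real extraction table.
coded_findings = [
    ("Study A", "originality", "diminished at high reliance"),
    ("Study B", "originality", "preserved at low reliance"),
    ("Study C", "skill development", "erosion with sustained use"),
    ("Study D", "authorship", "attribution ambiguity"),
]

# Organise the coded findings under each thematic category.
themes = defaultdict(list)
for study, theme, code in coded_findings:
    themes[theme].append(f"{study}: {code}")

for theme, entries in sorted(themes.items()):
    print(theme)
    for entry in entries:
        print("  -", entry)
```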
Limitations of this methodology include potential publication bias towards studies finding significant effects, possible gaps in coverage of non-English language research, and the inherent challenge of synthesising studies employing varied definitions of key concepts such as “originality” and “AI reliance.” Nevertheless, the systematic approach enhances transparency and reproducibility compared to traditional narrative reviews.
Literature review
Defining AI-assisted writing tools in educational contexts
AI-assisted writing tools encompass a spectrum of technologies ranging from basic grammar checkers to sophisticated large language models capable of generating original prose. Contemporary discussion centres primarily on generative AI tools, particularly those built on transformer architecture, which can produce contextually appropriate text responses to user prompts (Perkins, 2023). These tools differ fundamentally from earlier writing technologies in their capacity to generate substantive content rather than merely correcting or reformatting human-authored text.
The educational application of these tools varies considerably. At one end of the spectrum, students may use AI for discrete tasks such as grammar correction, paraphrasing assistance, or generating initial ideas during brainstorming phases. At the other end, AI may produce entire essays or substantial portions thereof with minimal human editorial intervention (Chanpradit, 2025). Understanding this spectrum proves essential for evaluating impacts on originality, accountability, and authorship, as effects appear highly dependent upon the nature and extent of AI involvement.
Effects on originality and creative expression
Empirical research consistently identifies a paradoxical relationship between AI tool usage and originality in academic writing. At low to moderate levels of use, AI tools demonstrably improve surface-level writing qualities including grammar, coherence, and organisational structure. Studies report that students using AI assistants for editing and feedback produce more polished prose with fewer mechanical errors and clearer paragraph structures (Malik et al., 2023; Quratulain, Maqbool and Bilal, 2025). Furthermore, AI tools can support idea generation and enhance writer confidence, particularly among students who lack proficiency in the language of instruction (Ridho et al., 2025).
However, at higher levels of reliance, research consistently documents diminished originality and creativity. Aljuaid (2024) found that students who relied heavily on AI-generated content produced work characterised by formulaic structures and generic arguments lacking authentic voice or novel insight. Similar findings emerge from studies across diverse contexts, including Nigerian higher education (Ya’u and Mohammed, 2025), Indonesian doctoral programmes (Pratiwi et al., 2025), and Thai universities (Chanpradit, 2025). This convergence across different cultural and educational contexts suggests that the relationship between AI over-reliance and diminished originality represents a robust phenomenon rather than a context-specific artefact.
The mechanisms underlying this relationship involve reduced engagement with source materials and arguments. When AI generates text, students may accept outputs without the deep processing that characterises genuine learning and original thought development. Gustilo, Ong and Lapinid (2024) observed that heavy AI users demonstrated less evidence of having genuinely grappled with disciplinary concepts, instead producing work that, whilst superficially competent, lacked the intellectual struggle visible in authentic student writing.
Critical thinking and skill development
Beyond immediate effects on originality, research raises concerns about longer-term impacts on critical thinking and academic skill development. Several studies identify a “skill erosion” phenomenon wherein sustained AI reliance weakens students’ independent capacities for argumentation, synthesis, and scholarly analysis (Chanpradit, 2025; Sodangi and Isma’il, 2025). This erosion occurs because AI tools may function as cognitive shortcuts, enabling task completion without the effortful processing that builds durable skills.
Demirel (2024) documented that students who habitually used AI for writing tasks subsequently performed worse on assessments where AI tools were unavailable, suggesting that AI reliance may impede rather than support genuine competence development. Pratiwi et al. (2025) similarly found that Indonesian doctoral students who relied heavily on AI exhibited weaker argumentation skills in oral examinations and unassisted writing contexts.
These findings carry significant implications for the developmental purposes of higher education. If written assessments function as learning experiences through which students develop academic literacies, AI tools that enable task completion without genuine engagement may undermine these developmental objectives even when producing acceptable written products.
Authorship attribution and conceptual ambiguity
AI-assisted writing introduces substantial complexity regarding authorship attribution. Traditional conceptions of authorship presuppose human cognitive effort as the source of ideas and their expression. When AI generates significant portions of text or shapes the direction of arguments, established authorship concepts become strained (Yeo, 2023).
Research identifies multiple dimensions of this authorship ambiguity. First, questions arise regarding ownership: who can legitimately claim authorship of AI-influenced text, and under what conditions? Second, credit attribution becomes problematic: should AI contributions be acknowledged, and if so, how? Third, responsibility allocation grows uncertain: who bears accountability for errors, misrepresentations, or misconduct in AI-assisted work (Aljuaid, 2024; Kotsis, 2025)?
These questions have prompted varied responses from scholarly publishing bodies. Major publishers and journals have generally prohibited listing AI systems as authors whilst requiring disclosure of AI assistance in manuscript preparation. The rationale centres on accountability: authorship implies responsibility, and AI systems cannot bear responsibility for scholarly claims (Cheng, Calhoun and Reedy, 2025). However, the precise boundaries between acceptable AI assistance and problematic AI authorship remain contested and context-dependent.
Academic integrity risks and institutional responses
Research documents multiple academic integrity risks associated with AI writing tools. Most prominently, students may submit AI-generated work as their own without appropriate disclosure, constituting a form of misrepresentation analogous to traditional plagiarism (Perkins, 2023). Additionally, AI tools may produce fabricated citations—references to non-existent sources presented as genuine scholarly support—creating what has been termed a “hallucination” problem (Clark, 2025). Factual inaccuracies within AI-generated content pose further integrity concerns, particularly when students lack the disciplinary knowledge to identify errors.
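Fabricated references are, in principle, the most mechanically detectable of these risks, since invented DOIs usually have no registration record. As a hedged illustration (assuming the public Crossref REST API at api.crossref.org and the third-party requests library; a missing record should prompt manual review, not an automatic finding of misconduct), a reference list could be screened as follows.

```python
import requests  # third-party HTTP library

def doi_registered(doi: str) -> bool:
    """Return True if Crossref holds a record for this DOI.

    A 404 response is a common symptom of an AI-fabricated
    ('hallucinated') citation, although legitimate sources without
    DOIs also exist, so a negative result warrants manual checking.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}",
                        timeout=10)
    return resp.status_code == 200

# Screen a small list: one genuine DOI from this dissertation's
# references and one invented purely for illustration.
for doi in ["10.1007/s40979-024-00153-8",  # Gustilo, Ong and Lapinid (2024)
            "10.9999/invented.example.doi"]:
    status = "registered" if doi_registered(doi) else "no Crossref record"
    print(f"{doi}: {status}")
```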
Institutional responses have varied considerably. Some universities have implemented outright bans on AI tool usage for assessed work, whilst others have adopted permissive approaches that encourage AI use with appropriate disclosure. Between these extremes, many institutions have developed nuanced policies that permit certain AI applications whilst prohibiting others (Vetter et al., 2024).
Research on policy effectiveness remains limited but suggests that overly restrictive approaches may prove both unenforceable and counterproductive, potentially driving AI use underground rather than promoting transparent, ethical engagement (Velez and Rister, 2024). Conversely, permissive approaches without adequate scaffolding may enable the skill erosion and originality diminishment documented in other studies.
Student and educator perceptions
Both students and educators demonstrate ambivalent attitudes toward AI writing tools, simultaneously recognising benefits and expressing concerns. Surveys consistently find that users value AI tools for improving writing quality, saving time, and supporting confidence, particularly among non-native speakers (Malik et al., 2023; Ridho et al., 2025). However, these same populations often acknowledge risks to originality and academic integrity, suggesting a sophisticated awareness of the technology’s double-edged nature.
Subaveerapandiyan, Kalbande and Ahmad (2025) found that doctoral students in India perceived AI tools as effective for specific writing tasks whilst recognising ethical limitations that should constrain usage. Similarly, Nam and Bai (2023) identified that media discourse frames AI writing tools through both opportunity and threat lenses, reflecting broader societal ambivalence about generative AI technologies.
Educators express particular concern about assessment validity and the certification function of higher education. If AI-assisted work cannot be reliably distinguished from independent student work, the capacity of assessments to certify genuine competence becomes compromised (Gustilo, Ong and Lapinid, 2024).
Discussion
The threshold effect: understanding variable impacts
The synthesised evidence points compellingly to a threshold effect in AI-assisted writing, wherein impacts depend critically upon the nature and extent of AI involvement. When AI functions as a limited assistant—supporting editing, providing feedback, or scaffolding brainstorming—originality and authorship can be preserved, and the tool may even enhance learning outcomes. However, when AI assumes a co-authoring or ghostwriting role, evidence consistently demonstrates negative consequences for originality, critical thinking, and accountability frameworks.
This threshold conception provides a useful framework for understanding seemingly contradictory findings in the literature. Studies reporting benefits from AI tools typically examine moderate usage for specific supportive functions, whilst those documenting harms generally focus on extensive reliance wherein AI generates substantial content with minimal human transformation. The key distinction concerns whether human cognitive effort remains central to the writing process or becomes marginalised by AI contribution.
Importantly, this threshold appears to be contextually variable rather than fixed. For novice writers developing foundational skills, even moderate AI assistance with content generation may interfere with necessary developmental struggles. For advanced scholars with established disciplinary expertise, similar AI contributions might represent legitimate efficiency gains without skill erosion risks. Policy and pedagogical approaches must therefore attend to student developmental stage and learning objectives rather than applying uniform rules.
Implications for originality and authentic learning
The findings regarding originality carry profound implications for higher education’s developmental mission. If written assessments serve as vehicles for learning—occasions through which students develop disciplinary knowledge and academic literacies—then AI tools that enable task completion without genuine cognitive engagement fundamentally undermine this function. Students may submit acceptable products whilst bypassing the learning processes that assessments are designed to facilitate.
This analysis suggests that concerns about AI in education extend beyond academic integrity narrowly conceived. Even where students use AI with full disclosure and institutional permission, pedagogical harms may occur if AI usage prevents the effortful processing through which learning occurs. The skill erosion phenomenon documented across multiple studies illustrates this danger: students who habitually delegate writing tasks to AI may fail to develop independent capabilities that they will need in professional and scholarly contexts where AI assistance is unavailable or inappropriate.
Addressing these concerns requires pedagogical rather than merely regulatory responses. Task designs that foreground analysis, synthesis, and critical evaluation—and that cannot be adequately completed through AI-generated text alone—may preserve developmental objectives whilst acknowledging AI’s presence in student writing practices. Similarly, assessment approaches that value process alongside product, requiring students to demonstrate their reasoning and decision-making, may detect and discourage problematic AI reliance.
Authorship reconceptualised in the AI era
The authorship ambiguities introduced by AI tools necessitate reconceptualisation of this fundamental academic concept. Traditional authorship presupposed human agency as the source of intellectual contribution, but AI disrupts this assumption by introducing non-human text generation capabilities. How should academic communities respond?
The emerging consensus, reflected in publisher policies and institutional guidelines, maintains human responsibility as the criterion for authorship. Under this view, AI cannot be an author because it cannot bear responsibility for claims, respond to criticism, or be held accountable for errors. Humans who use AI tools retain authorship—and full accountability—for work produced with AI assistance. This position effectively treats AI as a sophisticated tool, analogous to reference management software or grammar checkers, rather than as a contributing author.
However, this framework requires transparency regarding AI use. If human authors bear responsibility for AI-assisted work, they must be able to stand behind claims and verify accuracy. Requiring disclosure of AI assistance enables reviewers, markers, and readers to contextualise work appropriately and authors to fulfil accountability obligations. The prohibition on listing AI as an author, combined with mandatory disclosure requirements, represents a coherent response that preserves traditional accountability structures whilst acknowledging AI’s role.
Accountability frameworks and institutional governance
Effective governance of AI-assisted writing requires clear, comprehensive policies that define acceptable assistance, specify disclosure requirements, and establish human responsibility for content verification. Research indicates that students and educators alike desire policy clarity, finding ambiguity more problematic than either restrictive or permissive approaches (Vetter et al., 2024).
Effective policies must navigate between extremes. Outright bans appear neither practical nor desirable: AI tools are ubiquitous, detection is unreliable, and legitimate educational benefits exist for appropriate usage. However, wholly permissive approaches without structure may enable the skill erosion and integrity problems documented in research. The optimal approach appears to involve contextualised guidance that specifies appropriate uses for different assessment types and learning objectives, supported by AI literacy education that enables students to make informed choices.
Crucially, policies must be accompanied by pedagogical transformation. Rules alone cannot ensure appropriate AI use; students must understand why boundaries exist and develop the judgement to navigate novel situations not covered by explicit guidance. This suggests that AI literacy—understanding of AI capabilities, limitations, and ethical implications—should become an explicit component of higher education curricula.
Toward evidence-based recommendations
Synthesising findings across the reviewed literature, several evidence-based recommendations emerge:
First, institutional policies should define AI assistance along a spectrum, distinguishing between editing assistance, feedback tools, and content generation. Clear communication about acceptable uses for specific assessment types reduces ambiguity and enables consistent enforcement.
Second, mandatory disclosure requirements should accompany any permitted AI use, enabling appropriate contextualisation and maintaining human accountability. Disclosure frameworks should specify not merely whether AI was used but how and for what purposes (a minimal sketch of such a record follows this list).
Third, assessment design should evolve to emphasise learning objectives that cannot be achieved through AI text generation alone. Tasks requiring critical analysis, personal reflection, disciplinary synthesis, and novel argumentation preserve developmental value whilst acknowledging AI’s presence.
Fourth, AI literacy education should become embedded within curricula, developing student capacity to use these tools ethically, effectively, and with appropriate critical awareness of limitations.
Fifth, ongoing monitoring and research should track both AI tool evolution and educational impacts, enabling evidence-based policy refinement as the technology landscape continues to shift.
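Returning to the second recommendation, the sketch below illustrates one minimal shape a structured disclosure record might take; the field names are assumptions made for illustration rather than any published standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseDisclosure:
    """Hypothetical disclosure record capturing how, not merely
    whether, AI was used in preparing a submission."""
    tool: str                            # name and version of the tool used
    purposes: list[str] = field(default_factory=list)  # e.g. editing, feedback
    content_generated: bool = False      # did the tool draft any submitted text?
    process_evidence_kept: bool = False  # prompts/drafts retained for review?
    author_verified: bool = True         # author checked accuracy and claims

example = AIUseDisclosure(
    tool="ChatGPT (GPT-4)",
    purposes=["brainstorming", "language feedback"],
    content_generated=False,
    process_evidence_kept=True,
)
print(example)
```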
Conclusions
This dissertation set out to evaluate the extent to which reliance on AI-assisted writing tools affects originality, accountability, and authorship in higher education. Through systematic literature synthesis, the investigation has achieved its stated objectives and generated nuanced understanding of this rapidly evolving challenge.
Regarding the first objective, the evidence demonstrates that AI tool effects on originality are contingent upon usage patterns. Low to moderate usage for editing and feedback preserves or may even enhance originality, whilst high reliance involving content generation consistently diminishes original thought and authentic voice in academic writing.
The second objective, concerning critical thinking and skill development, reveals concerning findings. Over-reliance on AI tools correlates with skill erosion in argumentation and synthesis, potentially weakening the independent academic literacies that higher education seeks to develop. This finding underscores that even permissible AI usage may carry pedagogical costs requiring careful management.
For the third objective, examining accountability implications, the research identifies multiple integrity risks including undisclosed AI-generated content, fabricated citations, and factual inaccuracies. Effective accountability requires policy frameworks that combine mandatory disclosure with human responsibility for verification and accuracy.
The fourth objective, investigating authorship ambiguity, finds that the emerging consensus maintains human authorship and accountability even for AI-assisted work, with AI treated as a tool rather than a contributor. This framework requires transparency to function effectively.
Finally, addressing the fifth objective, the evidence points to threshold conditions wherein AI assistance supports rather than undermines academic values when framed as limited, disclosed, and supplementary to human cognitive effort.
The significance of these findings extends beyond immediate practical applications. They suggest that AI writing tools represent neither unqualified threat nor unmixed blessing but rather technologies whose impacts depend critically upon governance frameworks, pedagogical approaches, and individual usage patterns. Higher education institutions that develop sophisticated, evidence-based responses can harness legitimate benefits whilst protecting core academic values.
Future research should investigate longitudinal impacts of AI tool usage on skill development, comparative effectiveness of different policy approaches, and discipline-specific considerations that may require tailored responses. As AI capabilities continue to advance, ongoing empirical investigation will prove essential for maintaining evidence-based educational governance.
References
Aljuaid, H., 2024. The Impact of Artificial Intelligence Tools on Academic Writing Instruction in Higher Education: A Systematic Review. *Arab World English Journal*. Available at: https://doi.org/10.24093/awej/chatgpt.2
Chanpradit, T., 2025. Generative artificial intelligence in academic writing in higher education: A systematic review. *Edelweiss Applied Science and Technology*, 9(4). Available at: https://doi.org/10.55214/25768484.v9i4.6128
Cheng, A., Calhoun, A. and Reedy, G., 2025. Artificial intelligence-assisted academic writing: recommendations for ethical use. *Advances in Simulation*, 10. Available at: https://doi.org/10.1186/s41077-025-00350-6
Clark, T., 2025. Ethical Use of Artificial Intelligence (AI) in Scholarly Writing. *Journal of Pediatric Surgical Nursing*, 14, pp. 85-91. Available at: https://doi.org/10.1177/23320249251343881
Demirel, E., 2024. The Use and Perceptions Towards AI Tools For Academic Writing Among University Students. *Innovations in Language Teaching Journal*. Available at: https://doi.org/10.53463/innovltej.20240328
Grant, M.J. and Booth, A., 2009. A typology of reviews: an analysis of 14 review types and associated methodologies. *Health Information and Libraries Journal*, 26(2), pp. 91-108.
Gustilo, L., Ong, E. and Lapinid, M., 2024. Algorithmically-driven writing and academic integrity: exploring educators’ practices, perceptions, and policies in AI era. *International Journal for Educational Integrity*, 20, pp. 1-43. Available at: https://doi.org/10.1007/s40979-024-00153-8
Kotsis, K., 2025. Legality of Employing Artificial Intelligence for Writing Academic Papers in Education. *Journal of Contemporary Philosophical and Anthropological Studies*, 3(1). Available at: https://doi.org/10.59652/jcpas.v3i1.375
Malik, A., Pratiwi, Y., Andajani, K., Numertayasa, I., Suharti, S., Darwis, A. and Marzuki, 2023. Exploring Artificial Intelligence in Academic Essay: Higher Education Student’s Perspective. *International Journal of Educational Research Open*. Available at: https://doi.org/10.1016/j.ijedro.2023.100296
Nam, B. and Bai, Q., 2023. ChatGPT and its ethical implications for STEM research and higher education: a media discourse analysis. *International Journal of STEM Education*, 10, pp. 1-24. Available at: https://doi.org/10.1186/s40594-023-00452-5
Perkins, M., 2023. Academic integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. *Journal of University Teaching and Learning Practice*, 20(2). Available at: https://doi.org/10.53761/1.20.02.07
Pratiwi, H., S., H. and Ridha, M., 2025. Between Shortcut and Ethics: Navigating the Use of Artificial Intelligence in Academic Writing Among Indonesian Doctoral Students. *European Journal of Education*. Available at: https://doi.org/10.1111/ejed.70083
Quratulain, Maqbool, S. and Bilal, S., 2025. The Effectiveness of AI-Powered Writing Assistants in Enhancing Essay Writing Skills at Undergraduate Level. *Journal for Social Science Archives*, 3(1). Available at: https://doi.org/10.59075/jssa.v3i1.166
Ridho, M., Jaya, A., H., Chantavhong, S. and Rattanakosin, N., 2025. Analyzing the Use of Artificial Intelligence (AI) in Writing Academic Papers of Student at Universitas PGRI Palembang. *Esteem Journal of English Education Study Programme*, 8(2). Available at: https://doi.org/10.31851/esteem.v8i2.18732
Sodangi, U. and Isma’il, A., 2025. Responsible integration of generative artificial intelligence in academic writing: a narrative review and synthesis. *Journal of Artificial Intelligence, Machine Learning and Neural Network*, 5(2), pp. 13-23. Available at: https://doi.org/10.55529/jaimlnn.52.13.23
Subaveerapandiyan, A., Kalbande, D. and Ahmad, N., 2025. Perceptions of effectiveness and ethical use of AI tools in academic writing: A study Among PhD scholars in India. *Information Development*, 41, pp. 728-746. Available at: https://doi.org/10.1177/02666669251314840
Velez, M. and Rister, A., 2024. Beyond Academic Integrity: Navigating Institutional and Disciplinary Anxieties About AI-Assisted Authorship in Technical and Professional Communication. *Journal of Business and Technical Communication*, 39, pp. 115-132. Available at: https://doi.org/10.1177/10506519241280646
Vetter, M., Lucia, B., Jiang, J. and Othman, M., 2024. Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT. *Computers and Composition*. Available at: https://doi.org/10.1016/j.compcom.2024.102831
Ya’u, M. and Mohammed, M., 2025. AI-Assisted Writing and Academic Literacy: Investigating the Dual Impact of Language Models on Writing Proficiency and Ethical Concerns in Nigerian Higher Education. *International Journal of Education and Literacy Studies*, 13(2), p. 593. Available at: https://doi.org/10.7575/aiac.ijels.v.13n.2p.593
Yeo, M., 2023. Academic integrity in the age of Artificial Intelligence (AI) authoring apps. *TESOL Journal*. Available at: https://doi.org/10.1002/tesj.716
