
How do students use generative AI for learning rather than cheating, and what teaching designs support this?


Abstract

This dissertation examines how university students employ generative artificial intelligence (GenAI) for legitimate learning purposes rather than academic misconduct, and identifies teaching designs that support productive engagement with these technologies. Employing a systematic literature synthesis methodology, this study analyses recent empirical research from multiple international contexts, including a landmark study of 72,615 students. The findings reveal that students predominantly use GenAI as a study support tool—for concept clarification, brainstorming, writing refinement, and practice exercises—rather than as a substitute for intellectual engagement. Critically, the benefits of GenAI use manifest only within high-challenge, high-support learning environments; in low-challenge contexts, GenAI correlates with diminished engagement. Six key pedagogical strategies emerge as effective: explicit AI policies, challenging yet supported tasks, scaffolded in-class AI activities, process-oriented assessment redesign, comprehensive AI literacy instruction, and personalised learning materials. The dissertation concludes that educational institutions must shift from prohibition-focused approaches toward structured integration that positions GenAI as a transparent, critiqued co-creator within thoughtfully designed curricula, thereby preserving academic integrity whilst harnessing technological affordances for enhanced learning.

Introduction

The rapid proliferation of generative artificial intelligence technologies, particularly large language models such as ChatGPT, Claude, and Gemini, has fundamentally disrupted established paradigms within higher education. Since the public release of ChatGPT in November 2022, universities worldwide have grappled with unprecedented questions concerning academic integrity, pedagogical adaptation, and the very nature of learning itself (Chan and Hu, 2023). Initial institutional responses frequently emphasised prohibition and detection, treating GenAI primarily as a threat to academic standards. However, emerging evidence suggests that students are already integrating these tools into their study practices in ways that extend far beyond simple cheating, warranting a more nuanced examination of how GenAI can be harnessed for legitimate educational purposes.

The academic stakes of this inquiry are substantial. Higher education institutions serve as crucial sites for developing critical thinking, independent inquiry, and disciplinary expertise—competencies that remain essential regardless of technological advancement. If GenAI use undermines these developmental processes, the consequences for graduate employability, professional competence, and societal knowledge production could prove severe. Conversely, if properly integrated, GenAI may offer unprecedented opportunities for personalised learning, immediate feedback, and cognitive scaffolding that democratises access to high-quality educational support (Ruiz-Rojas et al., 2023).

The social implications are equally pressing. Questions of equity arise when considering differential access to GenAI tools, varying levels of digital literacy among student populations, and the potential for AI-assisted learning to either exacerbate or ameliorate existing educational disparities. Furthermore, the ethical dimensions of authorship, intellectual ownership, and authentic assessment require careful consideration within evolving professional and academic contexts (Deric, Frank and Vuković, 2025).

Practically, educators face immediate decisions regarding policy development, assessment design, and classroom practice. Survey evidence indicates that the majority of university students now utilise GenAI for academic work, regardless of institutional policies (Ahmed et al., 2024). This reality demands evidence-based guidance for teaching staff who must navigate between enabling legitimate learning support and maintaining meaningful academic standards. The urgency of this practical challenge, combined with rapidly accumulating research evidence, creates both the necessity and opportunity for systematic scholarly synthesis.

This dissertation addresses these intersecting concerns by examining current evidence regarding student GenAI use patterns and identifying pedagogical approaches that channel this use toward learning enhancement rather than academic misconduct. In doing so, it contributes to the emerging scholarly conversation about AI-integrated education whilst providing actionable insights for institutional policy and teaching practice.

Aim and objectives

The overarching aim of this dissertation is to synthesise current evidence regarding how university students use generative artificial intelligence for learning purposes and to identify teaching designs that effectively support productive, ethical engagement with these technologies.

To achieve this aim, the following specific objectives guide the inquiry:

1. To characterise the current patterns of GenAI use among university students, distinguishing between productive learning-supportive applications and risky substitution behaviours.

2. To examine the conditions under which GenAI use enhances versus diminishes student learning engagement, with particular attention to course design characteristics.

3. To identify and evaluate pedagogical strategies that channel GenAI use toward legitimate learning support whilst maintaining academic integrity.

4. To synthesise evidence regarding student perceptions, concerns, and ethical considerations surrounding GenAI use in academic contexts.

5. To develop evidence-based recommendations for teaching staff and institutional policymakers seeking to integrate GenAI constructively within higher education curricula.

Methodology

This dissertation employs a systematic literature synthesis methodology to address the stated research objectives. Given the nascent and rapidly evolving nature of research on GenAI in higher education, this approach enables comprehensive integration of diverse empirical evidence whilst maintaining analytical rigour appropriate for scholarly synthesis.

Search strategy and source selection

The literature search prioritised peer-reviewed academic publications from 2023 to 2025, reflecting the emergence of GenAI as a significant educational phenomenon following ChatGPT’s public release. Primary databases searched included Web of Science, Scopus, ERIC, and Google Scholar, using search terms combining “generative AI” or “ChatGPT” with “higher education,” “student learning,” “academic integrity,” and “teaching design.” Additional sources were identified through citation tracking and reference list examination.

Source selection criteria emphasised empirical research designs, including surveys, experimental studies, and systematic reviews, published in peer-reviewed journals. Large-scale studies received particular attention given their capacity to identify patterns across diverse student populations. The 72,615-participant study by Guo et al. (2025) provided especially valuable quantitative evidence regarding engagement outcomes.

Analytical approach

Thematic analysis guided the synthesis of selected literature. Initial coding identified recurrent patterns in student GenAI use, followed by categorisation of teaching strategies and their reported outcomes. Evidence quality was assessed based on methodological rigour, sample size, and replicability of findings across different institutional and cultural contexts.

The synthesis integrated findings across multiple dimensions: descriptive patterns of student use, quantitative relationships between GenAI use and learning outcomes, qualitative insights regarding student perceptions, and evaluations of pedagogical interventions. This multi-dimensional approach enables comprehensive understanding of a complex phenomenon whilst maintaining focus on the practical implications for educational design.

Limitations

Several methodological limitations warrant acknowledgement. The rapid evolution of GenAI technologies means that research findings may reflect capabilities and interfaces that have since changed substantially. Self-reported survey data, which constitutes much of the available evidence, may underestimate problematic use due to social desirability bias. Additionally, publication bias may favour studies reporting successful interventions over null or negative results. These limitations are addressed through triangulation across multiple studies and careful qualification of conclusions.

Literature review

Prevalence and general patterns of student GenAI use

Recent large-scale surveys consistently demonstrate that GenAI use has become normative among university students. Chan and Hu (2023) found widespread adoption across diverse disciplinary contexts, whilst Ahmed et al. (2024) documented extensive use for academic tasks in a comprehensive survey examining opportunities and challenges. Critically, these studies indicate that most students employ GenAI as a support for, rather than a full substitute for, their own intellectual work. This distinction between supportive and substitutive use emerges as fundamental to understanding the relationship between GenAI and learning outcomes.

The prevalence of use appears largely independent of institutional policies, suggesting that prohibition-based approaches may be ineffective in practice. Johnston et al. (2024) found that students continue using GenAI regardless of restrictive guidance, often without clear understanding of institutional expectations. This reality underscores the importance of developing constructive integration strategies rather than relying solely on detection and punishment mechanisms.

Productive uses supporting learning

Empirical research identifies four primary categories of productive GenAI use that support rather than replace student learning.

First, students employ GenAI extensively for clarifying concepts and generating summaries. When encountering difficult material, students request explanations, simplified restatements, and quick overviews that supplement rather than substitute for engagement with primary sources. Chan and Hu (2023) found this clarification function among the most commonly reported uses, whilst Leahy, Ozer and Cummins (2025) documented its particular value for struggling learners who benefit from multiple explanation formats.

Second, brainstorming and ideation represent significant productive applications. Students use GenAI for idea generation, outlining potential approaches, generating examples, and exploring alternative perspectives on complex problems. Kadaruddin (2023) emphasised these creative applications as supporting innovative instructional strategies, whilst Rana, Verhoeven and Sharma (2025) demonstrated specific benefits within design thinking pedagogy where divergent thinking precedes convergent analysis.

Third, writing support emerges as a common but carefully bounded use category. Students seek assistance with grammar, style, and structural suggestions rather than complete essay generation. Importantly, Chan and Hu (2023) found that most students view having AI write entire assignments as inappropriate, indicating meaningful ethical awareness within the student population. Johnston et al. (2024) corroborated this finding, noting student recognition of boundaries between acceptable support and academic misconduct.

Fourth, practice and feedback functions enable students to generate quiz questions, obtain code explanations, receive draft critique, and create study materials. Ruiz-Rojas et al. (2023) highlighted these applications within their instructional design matrix, whilst Pesovski et al. (2024) demonstrated effectiveness for customised learning experiences that adapt to individual student needs.

Student concerns and ethical awareness

Contrary to assumptions that students seek to exploit GenAI for shortcuts, research reveals substantial student concern regarding accuracy problems, hallucinations, and ethical implications. Chan and Hu (2023) documented widespread awareness of AI limitations, including fabricated citations and factual errors. Francis, Jones and Smith (2025) found students actively worried about plagiarism boundaries, authorship questions, and fairness implications.

Mixed feelings about creativity loss and over-reliance appear consistently across studies. Ahmed et al. (2024) found students expressing concern that excessive AI use might undermine their own skill development, whilst Deric, Frank and Vuković (2025) explored ethical implications that students themselves raised about appropriate use boundaries. These findings suggest that students may be more ready for nuanced AI integration policies than institutions have assumed.

Irshad, Uzair-Ul-Hassan and Iram-Parveen (2025) characterised productive use as “study companion” engagement involving critical checking and reflection, distinguishing this from passive acceptance of AI outputs. Rana, Verhoeven and Sharma (2025) similarly emphasised the importance of maintaining human agency and critical evaluation within AI-assisted learning processes.

Risky patterns and negative outcomes

Research also identifies patterns of GenAI use associated with diminished learning outcomes. Guo et al. (2025), in their landmark study of 72,615 students, found that GenAI use correlates with lower active learning and motivation when applied to low-challenge tasks or when students simply copy outputs without critical engagement. Bittle and El-Gayar (2025), in their systematic review addressing academic integrity concerns, documented similar risks associated with over-automation and substitution behaviours.

The distinction between productive and risky patterns centres on student cognitive engagement rather than mere tool use. When students employ GenAI to avoid thinking—requesting complete answers, accepting outputs uncritically, or automating tasks they should practice—learning outcomes suffer. Conversely, when students maintain intellectual agency whilst using GenAI to enhance their capabilities, positive outcomes emerge.

Course design as moderating factor

Perhaps the most significant finding in current research concerns the moderating role of course design characteristics. Guo et al. (2025) demonstrated that GenAI use in academic tasks is linked to higher cognitive and emotional engagement, but only in high-challenge, high-support courses. In low-challenge contexts, GenAI use is associated with lower engagement. This finding has profound implications for pedagogical design, suggesting that the same student behaviour—using GenAI for coursework—may enhance or undermine learning depending on the educational environment.

High-challenge environments present students with complex, open-ended problems that GenAI cannot straightforwardly solve. High-support environments provide scaffolding, feedback, and guidance that helps students use GenAI productively. The combination appears essential: challenge without support may produce frustration and inappropriate AI reliance, whilst support without challenge may enable shortcuts that undermine learning.

Explicit AI policies and normative guidance

Research consistently supports the value of explicit, clear policies specifying allowed and prohibited GenAI uses. Francis, Jones and Smith (2025) found that students want clarity regarding boundaries and expectations. Cacho (2024) developed a model for balanced guidelines that specify appropriate uses such as brainstorming and feedback whilst prohibiting submission of AI-generated content as finished work.

Johnston et al. (2024) documented student confusion arising from ambiguous or inconsistent policies, suggesting that lack of clarity may paradoxically increase problematic use by failing to establish meaningful norms. Bittle and El-Gayar (2025) emphasised that explicit policies reduce integrity worries by clarifying what constitutes acceptable practice, enabling students to use AI support without anxiety about inadvertent misconduct.

Effective policies address authorship and citation expectations directly. When students understand that AI assistance must be acknowledged and that human intellectual contribution remains essential, they can make informed decisions about appropriate use levels. This normative clarity supports rather than constrains productive engagement.

Scaffolded AI activities within instruction

Pedagogical approaches that incorporate structured AI activities within classroom instruction show particular promise. Ruiz-Rojas et al. (2023) demonstrated improvements in engagement, efficiency, and critical thinking through teacher-led GenAI sessions. Leahy, Ozer and Cummins (2025), through their AI-ENGAGE multicentre intervention, found that guided practice with prompting and output critique developed transferable skills whilst surfacing ethical issues for explicit discussion.

Kong and Yang (2024) developed a human-centred learning and teaching framework using GenAI for self-regulated learning development, demonstrating benefits across educational levels. Their approach positions teachers as guides who help students develop metacognitive awareness about effective AI collaboration. Rana, Verhoeven and Sharma (2025) applied similar principles within design thinking pedagogy, finding that structured AI integration enhanced both creativity and critical thinking.

These scaffolded approaches share common features: explicit instruction in prompting techniques, guided practice with output evaluation, discussion of AI limitations, and reflection on learning processes. By making AI use visible and subject to pedagogical guidance, instructors help students develop productive habits transferable to independent study.

Assessment redesign for the GenAI era

Traditional assessment formats—particularly unsupervised essays and problem sets—face significant challenges when students can generate plausible responses using GenAI. Research supports assessment redesign emphasising process evidence and higher-order thinking that AI cannot readily substitute.

Francis, Jones and Smith (2025) recommended incorporating drafts, reflections, and oral defences that require students to demonstrate understanding beyond submitted text. Guo et al. (2025) found that process-oriented assessment aligns with high-challenge course characteristics associated with positive GenAI engagement outcomes. Rana, Verhoeven and Sharma (2025) documented effectiveness of in-class work components that verify student capability independent of AI assistance.

Bittle and El-Gayar (2025), in their systematic review, synthesised evidence supporting assessment approaches that reduce opportunities for “AI substitution” cheating whilst maintaining meaningful evaluation of learning. These include authentic assessments connected to unique personal experiences, real-time demonstrations of competence, and iterative projects with documented development processes.

The goal is not to make AI use impossible—an increasingly futile aim—but to design assessments where AI assistance supports rather than replaces the learning being evaluated. When assessments require synthesis, critical evaluation, and personal engagement, AI becomes a tool for enhanced performance rather than a shortcut to fraudulent credentials.

AI literacy and ethics instruction

Research identifies substantial gaps in student AI literacy that warrant direct instructional attention. Krause, Dalvi and Zaidi (2025) documented underdeveloped critical evaluation skills and bias awareness among students who nonetheless use GenAI regularly. Irshad, Uzair-Ul-Hassan and Iram-Parveen (2025) found that students often lack understanding of how AI systems work, limiting their ability to use these tools effectively and ethically.

Deric, Frank and Vuković (2025) explored ethical implications that students may not spontaneously consider, including fairness in AI-assisted work, environmental costs of AI computation, and broader societal implications of AI dependence. Kong and Yang (2024) integrated AI literacy within their human-centred framework, treating understanding of AI capabilities and limitations as prerequisite for effective use.

Rana, Verhoeven and Sharma (2025) and Bittle and El-Gayar (2025) identified AI literacy and ethical practice as central competencies for contemporary education. This suggests that GenAI integration should include explicit instruction addressing: how large language models generate outputs, why hallucinations occur, how to evaluate AI-generated content, what biases may affect outputs, and what ethical principles should guide use decisions.

Personalised and multimodal learning materials

GenAI offers substantial potential for generating personalised learning materials adapted to individual student needs. Ruiz-Rojas et al. (2023) demonstrated increased time-on-task when students received varied explanations matching their learning preferences. Sousa and Cardoso (2025) found benefits particularly pronounced for struggling students who benefit from multiple presentation formats.

Kong and Yang (2024) integrated personalised AI-generated materials within their self-regulated learning framework, whilst Pesovski et al. (2024) focused specifically on customisable learning experiences including adaptive formative quizzes. These applications position GenAI as an instructional resource rather than a student tool, with educators curating and guiding AI-enhanced materials.

This instructor-mediated use of GenAI may complement student direct use, providing additional support channels that do not raise the same integrity concerns as unsupervised student AI interaction. When instructors use GenAI to generate diverse explanations, practice problems, and feedback, they expand pedagogical resources without compromising assessment validity.

Discussion

Reframing GenAI as scaffolded study partner

The evidence synthesised in this dissertation supports a fundamental reframing of how educational institutions should conceptualise and respond to student GenAI use. Rather than viewing these tools primarily as cheating mechanisms requiring detection and prohibition, the research indicates that GenAI functions most beneficially as what might be termed a “scaffolded study partner”—a cognitive support tool that enhances learning when embedded within appropriate pedagogical structures.

This reframing carries significant implications. The distinction between productive and risky GenAI use depends not on the technology itself but on how students employ it within educational contexts. The same AI interaction—asking ChatGPT to explain a concept—may enhance learning when the student then engages critically with the explanation, or undermine learning when the student passively copies content without comprehension. Pedagogical design determines which outcome predominates.

The critical role of high-challenge, high-support environments

The finding from Guo et al. (2025) that GenAI benefits appear only in high-challenge, high-support courses deserves particular emphasis. This evidence suggests that many current concerns about GenAI undermining learning may reflect inadequate course design rather than inherent technology problems. When courses present genuinely challenging tasks that GenAI cannot simply solve, and when students receive support for productive AI engagement, the technology enhances rather than replaces learning.

This finding aligns with established educational theory regarding productive struggle and scaffolded learning. Vygotsky’s (1978) zone of proximal development concept suggests that learning occurs optimally when tasks exceed current independent capability but remain achievable with appropriate support. GenAI may function as a sophisticated form of scaffolding—providing explanation, feedback, and guidance—that enables students to tackle more challenging material than they could approach alone.

However, this scaffolding function requires pedagogical intentionality. Without high challenge, GenAI becomes a shortcut that circumvents rather than supports learning. Without high support, students may lack the metacognitive skills to use AI productively. The combination appears essential, and its importance should inform institutional responses to GenAI integration.

Teaching designs as integrity infrastructure

The pedagogical strategies identified in this review function collectively as integrity infrastructure—course design elements that structurally support ethical, productive GenAI use rather than relying solely on prohibition and detection. This infrastructure approach recognises that student behaviour responds to environmental incentives and constraints.

Explicit policies establish normative expectations that most students wish to meet. High-challenge tasks ensure that AI cannot substitute for genuine learning. Scaffolded activities develop skills for productive AI collaboration. Process-oriented assessment verifies learning independent of potentially AI-assisted products. AI literacy instruction develops critical evaluation capabilities. Personalised materials leverage AI benefits whilst maintaining instructor oversight.

Together, these elements create learning environments where productive GenAI use becomes the path of least resistance. Students can use AI support without integrity concerns because appropriate use is clearly defined and verified through process evidence. The incentive to misuse AI diminishes when challenging tasks require genuine understanding that products alone cannot demonstrate.

Student agency and ethical awareness

The evidence regarding student concerns about GenAI limitations and ethical implications challenges deficit-based assumptions about student motivations. Students are not uniformly seeking shortcuts; many actively worry about over-reliance, skill development, and appropriate boundaries. This ethical awareness represents a foundation upon which constructive policies can build.

Effective integration approaches treat students as partners in developing appropriate AI use norms rather than adversaries to be policed. When institutions provide clear guidance, students can make informed decisions aligned with educational goals. When policies remain ambiguous, students navigate uncertainty without the clarity needed for confident ethical practice.

Meeting the stated objectives

The evidence synthesised in this dissertation addresses each stated objective. Regarding the first objective—characterising current patterns—the review documents extensive student GenAI use primarily for concept clarification, brainstorming, writing support, and practice, with most students employing AI as supplement rather than substitute.

Regarding the second objective—conditions affecting engagement outcomes—the high-challenge, high-support finding from Guo et al. (2025) provides clear evidence that course characteristics moderate whether GenAI use enhances or diminishes learning.

Regarding the third objective—identifying effective pedagogical strategies—six key approaches emerge with empirical support: explicit policies, high-challenge tasks, scaffolded activities, assessment redesign, AI literacy instruction, and personalised materials.

Regarding the fourth objective—student perceptions and concerns—the review documents substantial awareness of accuracy problems, ethical issues, and over-reliance risks among student populations.

Regarding the fifth objective—developing recommendations—the discussion synthesises findings into actionable guidance for institutional and pedagogical practice.

Limitations and areas requiring further research

Several limitations constrain the conclusions drawn from current evidence. Most available research relies on self-reported survey data, which may underestimate problematic use and overstate ethical awareness. Longitudinal studies tracking learning outcomes over extended periods remain scarce, limiting understanding of cumulative effects. Cross-cultural variation in AI use patterns and institutional responses warrants additional investigation.

Furthermore, the rapid evolution of GenAI capabilities means that findings based on current technologies may not generalise to future systems. As AI becomes more sophisticated, the boundaries between productive support and problematic substitution may shift in ways that current research cannot anticipate. Ongoing investigation will be essential as the technological landscape continues to evolve.

Conclusions

This dissertation has synthesised current evidence regarding student generative AI use for learning purposes and identified teaching designs that support productive engagement. The findings support several key conclusions with implications for educational policy and practice.

First, students already use generative AI widely as a study aid, and this use serves predominantly supportive rather than substitutive functions. Most students employ GenAI for concept clarification, brainstorming, writing refinement, and practice—applications that complement rather than replace intellectual engagement. This pattern suggests that concerns about universal cheating may be overstated, whilst the potential for learning enhancement may be underappreciated.

Second, the benefits of GenAI use for learning depend critically on course design characteristics. High-challenge, high-support environments produce positive engagement outcomes from AI use, whilst low-challenge contexts produce negative associations. This finding has profound implications: the same student behaviour may enhance or undermine learning depending on pedagogical context. Institutional responses should therefore focus on course design rather than technology restriction.

Third, six key pedagogical strategies emerge as effective for supporting productive GenAI integration: explicit AI policies establishing clear norms and expectations; high-challenge tasks that AI cannot straightforwardly complete; scaffolded in-class activities developing productive AI collaboration skills; assessment redesign emphasising process evidence and higher-order thinking; AI literacy and ethics instruction addressing capabilities, limitations, and appropriate use; and personalised materials leveraging AI for enhanced instructional support.

Fourth, students demonstrate meaningful ethical awareness regarding GenAI limitations and appropriate use boundaries. This awareness provides a foundation for constructive policies that treat students as partners in developing responsible AI integration rather than adversaries to be monitored and punished.

The significance of these findings extends beyond immediate practical application. As AI capabilities continue to advance, the fundamental educational questions addressed here—how to preserve human learning whilst leveraging technological support—will only become more pressing. The pedagogical approaches identified in this review offer frameworks for navigating technological change whilst maintaining focus on genuine learning outcomes.

Future research should prioritise longitudinal investigation of learning outcomes, experimental comparison of pedagogical approaches, and attention to equity implications of differential AI access and literacy. As the evidence base develops, institutional policies and teaching practices should evolve accordingly, maintaining the orientation toward constructive integration that current evidence supports.

Designs that embed AI as a critiqued, transparent co-creator—rather than a hidden answer machine—encourage learning, reduce cheating incentives, and preserve academic integrity. This framing offers a path forward that neither ignores technological reality nor abandons educational values, instead seeking their productive integration in service of student learning.

References

Ahmed, Z., Shanto, S., Rime, M., Morol, M., Fahad, N., Hossen, M. and Abdullah-Al-Jubair, M. (2024) ‘The generative AI landscape in education: mapping the terrain of opportunities, challenges, and student perception’, *IEEE Access*, 12, pp. 147023-147050. https://doi.org/10.1109/access.2024.3461874

Bittle, K. and El-Gayar, O. (2025) ‘Generative AI and academic integrity in higher education: a systematic review and research agenda’, *Information*, 16(4), p. 296. https://doi.org/10.3390/info16040296

Cacho, R. (2024) ‘Integrating generative AI in university teaching and learning: a model for balanced guidelines’, *Online Learning*. https://doi.org/10.24059/olj.v28i3.4508

Chan, C. and Hu, W. (2023) ‘Students’ voices on generative AI: perceptions, benefits, and challenges in higher education’, *International Journal of Educational Technology in Higher Education*, 20. https://doi.org/10.1186/s41239-023-00411-8

Deric, E., Frank, D. and Vuković, D. (2025) ‘Exploring the ethical implications of using generative AI tools in higher education’, *Informatics*, 12(2), p. 36. https://doi.org/10.3390/informatics12020036

Francis, N., Jones, S. and Smith, D. (2025) ‘Generative AI in higher education: balancing innovation and integrity’, *British Journal of Biomedical Science*, 81. https://doi.org/10.3389/bjbs.2024.14048

Guo, F., Zhang, L., Shi, T. and Coates, H. (2025) ‘Whether and when could generative AI improve college student learning engagement?’, *Behavioral Sciences*, 15(8), p. 1011. https://doi.org/10.3390/bs15081011

Irshad, A., Uzair-Ul-Hassan, M. and Iram-Parveen (2025) ‘Students’ perspectives on generative AI’s role in transforming, challenging, and enhancing higher education learning practices in education’, *Indus Journal of Social Sciences*. https://doi.org/10.59075/ijss.v3i3.1897

Johnston, H., Wells, R., Shanks, E., Boey, T. and Parsons, B. (2024) ‘Student perspectives on the use of generative artificial intelligence technologies in higher education’, *International Journal for Educational Integrity*, 20, pp. 1-21. https://doi.org/10.1007/s40979-024-00149-4

Kadaruddin, K. (2023) ‘Empowering education through generative AI: innovative instructional strategies for tomorrow’s learners’, *International Journal of Business, Law, and Education*. https://doi.org/10.56442/ijble.v4i2.215

Kong, S. and Yang, Y. (2024) ‘A human-centered learning and teaching framework using generative artificial intelligence for self-regulated learning development through domain knowledge learning in K–12 settings’, *IEEE Transactions on Learning Technologies*, 17, pp. 1588-1599. https://doi.org/10.1109/tlt.2024.3392830

Krause, S., Dalvi, A. and Zaidi, S. (2025) ‘Generative AI in education: student skills and lecturer roles’, *arXiv*, abs/2504.19673. https://doi.org/10.48550/arxiv.2504.19673

Leahy, K., Ozer, E. and Cummins, E. (2025) ‘AI-ENGAGE: a multicentre intervention to support teaching and learning engagement with generative artificial intelligence tools’, *Education Sciences*. https://doi.org/10.3390/educsci15070807

Pesovski, I., Santos, R., Henriques, R. and Trajkovik, V. (2024) ‘Generative AI for customizable learning experiences’, *Sustainability*. https://doi.org/10.3390/su16073034

Rana, V., Verhoeven, B. and Sharma, M. (2025) ‘Empowering creativity and critical thinking: the transformative role of generative AI in design thinking pedagogy’, *Journal of University Teaching and Learning Practice*. https://doi.org/10.53761/tjse2f36

Ruiz-Rojas, L., Acosta-Vargas, P., De-Moreta-Llovet, J. and González-Rodríguez, M. (2023) ‘Empowering education with generative artificial intelligence tools: approach with an instructional design matrix’, *Sustainability*, 15(15), pp. 11524. https://doi.org/10.3390/su151511524

Sousa, A. and Cardoso, P. (2025) ‘Use of generative AI by higher education students’, *Electronics*. https://doi.org/10.3390/electronics14071258

Vygotsky, L.S. (1978) *Mind in society: the development of higher psychological processes*. Cambridge, MA: Harvard University Press.

To cite this work, please use the following reference:

UK Dissertations. 10 February 2026. How do students use generative AI for learning rather than cheating, and what teaching designs support this?. [online]. Available from: https://www.ukdissertations.com/dissertation-examples/how-do-students-use-generative-ai-for-learning-rather-than-cheating-and-what-teaching-designs-support-this/ [Accessed 13 February 2026].
