
How are deepfake scams changing fraud risk controls in UK retail banking?


Abstract

This dissertation examines how deepfake scams are fundamentally reshaping fraud risk controls within United Kingdom retail banking. Through a systematic literature synthesis, this study analyses the evolving threat landscape created by synthetic media technologies that enable criminals to convincingly impersonate executives, customers, and staff across voice, video, and identity document channels. The research identifies four primary control shifts emerging within the sector: enhanced authentication protocols incorporating active liveness detection; deployment of artificial intelligence-based deepfake and anomaly detection systems; redesign of trusted communication channels with mandatory out-of-band verification; and integration of deepfake risks into holistic enterprise fraud frameworks. Findings indicate that previously high-assurance channels, including telephone banking and video-based know-your-customer processes, have become significant vulnerability points requiring immediate remediation. The study concludes that effective defence requires multi-layered, artificial intelligence-augmented approaches combining technical controls, organisational governance, and comprehensive stakeholder education. These findings hold significant implications for banking practitioners, regulators, and policymakers seeking to maintain financial system integrity whilst preserving customer experience in an era of increasingly sophisticated synthetic identity fraud.

Introduction

The rapid advancement of artificial intelligence technologies has fundamentally altered the landscape of financial crime, presenting unprecedented challenges to established fraud prevention frameworks within the banking sector. Among the most concerning developments is the proliferation of deepfake technology—sophisticated synthetic media created through deep learning algorithms capable of generating highly convincing audio, video, and image content that can convincingly replicate real individuals (Kietzmann et al., 2020). This technological capability has enabled criminals to execute fraud schemes of remarkable sophistication, undermining trust in communication channels and identity verification processes that financial institutions have historically treated as reliable authentication mechanisms.

The United Kingdom retail banking sector faces particular exposure to these emerging threats. As one of the world’s leading financial centres, the UK processes trillions of pounds in transactions annually, making it an attractive target for organised criminal enterprises seeking to exploit technological vulnerabilities. The Financial Conduct Authority and Prudential Regulation Authority have increasingly emphasised the importance of operational resilience and fraud prevention, yet the regulatory framework continues to evolve in response to rapidly changing threat vectors (Financial Conduct Authority, 2023).

Deepfake scams represent a qualitative shift in fraud methodology. Unlike traditional social engineering attacks that rely upon psychological manipulation alone, deepfake-enabled fraud combines advanced technical capability with social engineering principles, creating attack vectors that can deceive even trained personnel. Cases in which criminals used artificial intelligence to clone the voices and video likenesses of chief executive and chief financial officers have already resulted in multi-million-dollar losses affecting UK-linked firms and financial institutions (De Rancourt-Raymond and Smaili, 2022; Mustak et al., 2023; Muhly, Chizzonic and Leo, 2025). These incidents demonstrate that deepfakes have moved beyond theoretical concern to pose a clear and present danger to financial system integrity.

The academic significance of this topic extends beyond immediate practical concerns. Understanding how institutions adapt their control frameworks in response to novel technological threats provides valuable insights into organisational resilience, technology governance, and the dynamic interplay between criminal innovation and defensive countermeasures. Furthermore, the financial sector’s response to deepfake threats may serve as a template for other sectors facing similar challenges, including healthcare, government services, and telecommunications.

From a societal perspective, the integrity of banking systems underpins economic stability and public confidence in financial institutions. Consumer trust, painstakingly built over decades through regulatory oversight and institutional accountability, can be rapidly eroded through high-profile fraud incidents. The reputational and financial consequences of deepfake-enabled fraud extend beyond immediate victims to affect broader market confidence and systemic stability.

This dissertation therefore examines how deepfake scams are changing fraud risk controls in UK retail banking, synthesising current academic literature to identify emerging control responses, evaluate their effectiveness, and consider implications for practitioners, regulators, and researchers.

Aim and objectives

Research aim

The primary aim of this dissertation is to critically examine how the emergence of deepfake scams is transforming fraud risk controls within the United Kingdom retail banking sector, with particular focus on identifying the control adaptations being implemented and their implications for future fraud prevention strategies.

Research objectives

To achieve this aim, the following objectives have been established:

1. To analyse the evolving threat model posed by deepfake technologies to UK retail banking, including the mechanisms through which synthetic media undermines established identity verification and authentication processes.

2. To identify and categorise the primary control shifts being implemented by financial institutions in response to deepfake-enabled fraud, including technical, organisational, and educational countermeasures.

3. To evaluate the effectiveness and limitations of emerging artificial intelligence-based detection technologies in identifying synthetic media within banking contexts.

4. To examine the organisational and regulatory implications of deepfake threats, including impacts upon know-your-customer processes, anti-money laundering compliance, and customer experience considerations.

5. To synthesise findings into actionable recommendations for banking practitioners and identify areas requiring further research.

Methodology

This dissertation employs a systematic literature synthesis methodology to examine the intersection of deepfake technology and fraud risk controls within UK retail banking. Literature synthesis represents an appropriate methodological approach for emerging research domains where primary empirical studies remain limited and where consolidation of existing knowledge can provide valuable theoretical and practical insights (Snyder, 2019).

Literature search strategy

The research utilised multiple academic databases including Web of Science, Scopus, IEEE Xplore, and Google Scholar to identify relevant peer-reviewed publications. Search terms included combinations of “deepfake,” “synthetic media,” “fraud,” “banking,” “financial services,” “identity verification,” “biometric authentication,” and “artificial intelligence detection.” Searches covered publications dated between January 2020 and March 2025, capturing the most recent developments in this rapidly evolving field.

Inclusion and exclusion criteria

Studies were included if they addressed deepfake technologies in relation to financial services, fraud prevention, identity verification, or organisational security responses. Peer-reviewed journal articles, conference proceedings from reputable academic venues, and authoritative institutional reports were prioritised. Sources were excluded if they lacked academic rigour, appeared in predatory publications, or addressed deepfakes solely in non-financial contexts without transferable insights.

Analytical approach

The synthesis followed a thematic analysis framework, with sources grouped according to emergent themes relating to threat characteristics, detection technologies, control responses, and regulatory implications. Particular attention was paid to sources addressing UK banking contexts, although international literature was included where findings demonstrated clear transferability to the UK regulatory and operational environment.

Methodological limitations

Several limitations warrant acknowledgement. First, the rapidly evolving nature of both deepfake technology and institutional responses means that published academic literature may lag behind current practice. Second, commercial sensitivity surrounding fraud prevention measures may limit the availability of detailed institutional case studies. Third, the synthesis approach, whilst appropriate for consolidating existing knowledge, cannot generate new empirical findings. These limitations are addressed through triangulation across multiple source types and explicit acknowledgement of knowledge gaps requiring further investigation.

Literature review

The nature and evolution of deepfake technology

Deepfake technology emerged from advances in deep learning, particularly generative adversarial networks, which enable the creation of synthetic media that can convincingly replicate human faces, voices, and movements. The term “deepfake” itself combines “deep learning” with “fake,” reflecting the technology’s foundation in neural network architectures (Verdoliva, 2020). What distinguishes deepfakes from earlier manipulation techniques is their ability to produce content of sufficient quality to deceive human observers and, increasingly, automated verification systems.

The accessibility of deepfake creation tools has expanded dramatically in recent years. Whilst early implementations required substantial computational resources and technical expertise, current applications enable individuals with minimal technical knowledge to create convincing synthetic content using consumer-grade hardware (Gambín et al., 2024). This democratisation of creation capability has significant implications for fraud threat modelling, as it expands the potential perpetrator population beyond technically sophisticated criminal organisations.

Heidari et al. (2023) provide a comprehensive review of deepfake detection methodologies, noting the ongoing “arms race” between creation and detection technologies. As detection capabilities improve, creation techniques evolve to circumvent them, creating a dynamic equilibrium that presents ongoing challenges for defensive applications.

Deepfake threats to banking identity verification

Financial institutions have historically relied upon multiple identity verification channels, with certain channels—particularly voice calls and in-person video interactions—treated as providing high assurance of identity authenticity. Deepfakes fundamentally undermine this trust model by enabling criminals to convincingly impersonate executives, staff, or customers across voice, video, and identity document channels (De Rancourt-Raymond and Smaili, 2022; Gupta, 2025; Sidelov, 2022; Sandoval et al., 2024).

Sidelov (2022) specifically analyses deepfake problems for banks and financial institutions, identifying video-based know-your-customer processes as particularly vulnerable. These processes, designed to enable remote customer onboarding whilst maintaining regulatory compliance, increasingly rely upon video verification that deepfake technology can potentially defeat. The implications extend beyond individual fraud incidents to potentially systematic exploitation of onboarding vulnerabilities.

Voice-based authentication presents similar concerns. Voice biometrics, once considered a secure authentication factor, can be defeated by artificial intelligence voice cloning technologies that require only small samples of target audio to create convincing replicas (Gupta, 2025). Banking telephone services that use voice recognition for customer authentication therefore face fundamental reassessment of their security assumptions.

Documented cases demonstrate the real-world impact of these vulnerabilities. De Rancourt-Raymond and Smaili (2022) and Mustak et al. (2023) describe incidents involving artificial intelligence-cloned executive voices that resulted in multi-million-dollar fraudulent transfers. Muhly, Chizzonic and Leo (2025) document similar cases affecting UK-linked financial institutions, emphasising that these represent not hypothetical risks but demonstrated attack vectors.

Deepfakes in social engineering and payment redirection

Beyond identity verification attacks, deepfakes now rank among the top identity fraud methods and are increasingly used in social engineering and payment redirection scams (Sidelov, 2022; Muhly, Chizzonic and Leo, 2025; Metibemu, 2025). These attacks typically combine synthetic media with traditional social engineering techniques, using fabricated audio or video of trusted individuals to authorise fraudulent transactions or extract sensitive information.

Business email compromise, already a significant threat vector, evolves substantially when combined with deepfake capabilities. Rather than relying solely upon spoofed email addresses and written impersonation, attackers can supplement written communications with voice calls or video messages that appear to confirm the legitimacy of payment requests. This multi-channel approach significantly increases attack success rates by satisfying verification procedures designed for single-channel threats.

Kietzmann et al. (2020) provide an early analysis of deepfake implications for business, noting that the technology presents both risks and opportunities. From a risk perspective, they emphasise the potential for deepfakes to undermine trust in digital communications, with particular implications for organisations dependent upon remote decision-making and authorisation processes—a description that fits retail banking operations precisely.

Enhanced authentication and liveness detection

In response to deepfake threats, financial institutions are implementing enhanced authentication mechanisms that move beyond single-factor biometrics to multi-factor systems incorporating active liveness detection and challenge-response protocols (Gupta, 2025; Sidelov, 2022; Kothinti, 2025).

Passive facial and voice verification, which simply compares presented biometric data against stored templates, proves vulnerable to high-quality deepfakes that can generate synthetic biometrics indistinguishable from genuine samples. Active liveness detection addresses this vulnerability by requiring real-time responses to unpredictable challenges—such as turning the head, blinking in specified patterns, or responding to verbal prompts—that pre-recorded deepfakes cannot satisfy.
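
To make the challenge-response pattern concrete, the following minimal Python sketch shows how a verifier might issue an unpredictable challenge and accept only a timely, matching response. The challenge pool, latency threshold, and interfaces are illustrative assumptions for exposition, not a description of any institution's actual implementation.

```python
import secrets
import time
from dataclasses import dataclass

# Illustrative challenge pool; real systems draw from a far larger,
# harder-to-anticipate set (hypothetical values).
CHALLENGES = ["turn_head_left", "turn_head_right", "blink_twice", "read_phrase"]

@dataclass
class LivenessSession:
    challenge: str
    issued_at: float

def issue_challenge() -> LivenessSession:
    """Pick an unpredictable challenge so a pre-recorded deepfake
    cannot anticipate the required response."""
    return LivenessSession(challenge=secrets.choice(CHALLENGES),
                           issued_at=time.time())

def verify_response(session: LivenessSession,
                    observed_action: str,
                    max_latency_s: float = 5.0) -> bool:
    """Accept only the correct action within a tight time window;
    slow responses suggest offline synthesis rather than a live subject.
    The 5-second window is an assumed policy value."""
    latency = time.time() - session.issued_at
    return observed_action == session.challenge and latency <= max_latency_s
```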

Kothinti (2025) examines artificial intelligence-powered identity verification and risk analysis as the future of fraud prevention in financial services, emphasising the integration of multiple verification signals including behavioural biometrics, device fingerprinting, and contextual risk assessment. This layered approach reflects recognition that no single verification method provides sufficient assurance against sophisticated attacks.
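
The layered, multi-signal assessment that Kothinti (2025) describes can be pictured as a weighted fusion of independent verification scores, with low fused scores triggering step-up verification. The sketch below is a simplified illustration under assumed signal names, weights, and threshold; a production system would calibrate these against labelled fraud outcomes.

```python
from typing import Dict

# Hypothetical weights for independent verification signals (all values
# assumed for illustration; each signal scores 0 = high risk, 1 = low risk).
SIGNAL_WEIGHTS = {
    "liveness_score": 0.35,          # active liveness check result
    "behavioural_biometrics": 0.25,  # typing/swipe patterns vs stored profile
    "device_fingerprint": 0.20,      # known device and integrity checks
    "contextual_risk": 0.20,         # geolocation, time of day, velocity
}

def fused_risk_score(signals: Dict[str, float]) -> float:
    """Combine per-signal scores into one weighted score, so that no
    single verification method becomes a point of failure."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

def decide(signals: Dict[str, float], threshold: float = 0.75) -> str:
    """Allow low-risk interactions; escalate everything else to
    out-of-band verification (threshold is an assumed policy value)."""
    if fused_risk_score(signals) >= threshold:
        return "allow"
    return "step_up"
```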

The implementation of enhanced authentication involves trade-offs between security and customer experience. Frictionless onboarding and authentication processes that minimise customer effort have been competitive differentiators for digital banking services. Deepfake threats force reconsideration of these priorities, as passive, frictionless journeys prove increasingly insufficient to maintain security (Gupta, 2025; Sidelov, 2022; Metibemu, 2025; Vecchietti, Liyanaarachchi and Viglia, 2025).

Artificial intelligence-based deepfake detection

Parallel to enhanced authentication, financial institutions are deploying artificial intelligence-based systems specifically designed to detect synthetic media. These systems employ various technical approaches including generative adversarial network-based detectors, convolutional neural network architectures, micro-expression analysis, and eye-movement tracking (Ke et al., 2025; Heidari et al., 2023; Gambín et al., 2024; Raza, Munir and Almutairi, 2022; Verdoliva, 2020).

Ke et al. (2025) specifically examine the detection of artificial intelligence deepfakes and fraud in online payments using generative adversarial network-based models, demonstrating the application of detection technologies within financial transaction contexts. Their work illustrates the potential for real-time screening of identity verification submissions to identify synthetic content before fraudulent transactions are authorised.

Detection systems leverage various artefacts that distinguish synthetic from genuine media. These include inconsistencies in lighting and shadowing, unnatural eye movements, temporal inconsistencies between video frames, and spectral anomalies in audio recordings. Raza, Munir and Almutairi (2022) describe deep learning approaches that can identify these artefacts with high accuracy, although performance varies depending upon the sophistication of the deepfake creation method.
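
One of these artefact families, temporal inconsistency between frames, lends itself to a deliberately simple illustration: genuine camera footage tends to change smoothly from frame to frame, whereas face-swap pipelines that render frames independently can introduce abrupt, localised jumps. The toy heuristic below flags statistical outliers in frame-to-frame change; it is a pedagogical sketch rather than a production detector, and the outlier threshold is an assumption.

```python
import numpy as np

def frame_diff_scores(frames: np.ndarray) -> np.ndarray:
    """Mean absolute difference between consecutive frames.

    frames: array of shape (T, H, W), greyscale values in [0, 1].
    Returns an array of T-1 per-transition change scores.
    """
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def temporal_inconsistency_flag(frames: np.ndarray,
                                z_threshold: float = 4.0) -> bool:
    """Flag the clip if any frame-to-frame change is an extreme outlier
    relative to the clip's own motion statistics. Independently rendered
    deepfake frames can produce such jumps; genuine footage usually
    changes smoothly. The z-score threshold is an assumed value."""
    scores = frame_diff_scores(frames)
    z = (scores - scores.mean()) / (scores.std() + 1e-8)
    return bool((z > z_threshold).any())
```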

The effectiveness of detection systems remains subject to the adversarial dynamic between creation and detection. As Gambín et al. (2024) note in their review of deepfake current and future trends, detection methodologies require continuous updating to address new creation techniques. This creates ongoing investment requirements for financial institutions seeking to maintain effective detection capabilities.

Communication security and trusted channel redesign

Perhaps the most fundamental control shift involves the redesign of communication security protocols and the reconceptualisation of trusted channels. Policies increasingly mandate that high-value payments cannot rely solely upon audio or video identity verification, with independent out-of-band verification and callback procedures emphasised as essential safeguards (De Rancourt-Raymond and Smaili, 2022; Kietzmann et al., 2020; Alarfaj and Shahzadi, 2025; Muhly, Chizzonic and Leo, 2025; Metibemu, 2025).

The principle underlying these controls is that deepfakes can compromise any single communication channel, but simultaneously compromising multiple independent channels requires substantially greater attacker capability. Out-of-band verification—confirming requests through a channel separate from that through which they were received—therefore significantly raises the barrier to successful attack.
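
A minimal sketch of this principle might gate payment release on an independent callback that uses contact details held on file rather than details supplied within the request itself. The channel names, threshold, and data model below are hypothetical policy assumptions, not a documented institutional procedure.

```python
from dataclasses import dataclass

OOB_THRESHOLD_GBP = 25_000  # illustrative value; set by policy in practice

@dataclass
class PaymentRequest:
    amount_gbp: float
    origin_channel: str   # e.g. "video_call", "phone", "email"

def requires_out_of_band(req: PaymentRequest) -> bool:
    """High-value requests, and any request arriving via a channel that
    deepfakes can compromise, must be confirmed out of band."""
    risky_origin = req.origin_channel in {"video_call", "phone", "email"}
    return req.amount_gbp >= OOB_THRESHOLD_GBP or risky_origin

def release_payment(req: PaymentRequest, callback_confirmed: bool) -> str:
    """The callback must use a number already held on file, never one
    supplied in the request itself, so that compromising a single
    channel is not sufficient to authorise payment."""
    if requires_out_of_band(req) and not callback_confirmed:
        return "hold: await independent callback confirmation"
    return "release"
```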

Muhly, Chizzonic and Leo (2025) specifically address artificial intelligence-deepfake scams and the importance of holistic communication security strategy, emphasising that technical controls alone prove insufficient without corresponding procedural and cultural changes. Their analysis highlights the need for clear escalation procedures, mandatory verification steps for sensitive transactions, and cultural norms that prioritise security over convenience.

Alarfaj and Shahzadi (2025) examine enhanced fraud detection using graph neural networks and autoencoders for real-time credit card fraud prevention. Their work demonstrates how transaction monitoring systems can identify anomalous authorisation patterns that may indicate deepfake-enabled fraud, providing an additional detection layer beyond media analysis.
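
To make the autoencoder element of such systems concrete, the sketch below shows the standard reconstruction-error pattern: a small autoencoder trained only on legitimate transactions will reconstruct anomalous transactions poorly, so reconstruction error can serve as an anomaly score. This is a minimal PyTorch illustration of the general technique, with an assumed feature dimension; it is not the architecture reported by Alarfaj and Shahzadi (2025).

```python
import torch
import torch.nn as nn

class TransactionAutoencoder(nn.Module):
    """Compress transaction features and reconstruct them. Trained on
    legitimate transactions only, so anomalies reconstruct poorly."""
    def __init__(self, n_features: int = 16):  # feature count is assumed
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(),
                                     nn.Linear(8, 4))
        self.decoder = nn.Sequential(nn.Linear(4, 8), nn.ReLU(),
                                     nn.Linear(8, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def anomaly_scores(model: TransactionAutoencoder,
                   batch: torch.Tensor) -> torch.Tensor:
    """Per-transaction mean squared reconstruction error; high error
    suggests a pattern unlike the legitimate training data."""
    with torch.no_grad():
        recon = model(batch)
    return ((batch - recon) ** 2).mean(dim=1)
```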

Holistic risk frameworks and governance

Leading institutions are integrating deepfake risk into broader enterprise fraud management frameworks rather than treating it as an isolated technical problem. This approach combines artificial intelligence monitoring with governance structures and comprehensive training programmes addressing synthetic media threats (Kietzmann et al., 2020; Metibemu, 2025; Vecchietti, Liyanaarachchi and Viglia, 2025).

Vecchietti, Liyanaarachchi and Viglia (2025) introduce the concept of business privacy calculus in managing deepfakes with artificial intelligence, examining how organisations balance security investments against customer experience impacts and privacy considerations. Their framework provides theoretical grounding for the practical trade-offs that banking institutions must navigate.

Metibemu (2025) addresses financial risk management in digital-only banks, noting that challenger banks and neobanks face particular challenges given their reliance upon digital-only verification processes. Without physical branch networks enabling in-person verification fallbacks, these institutions must develop especially robust synthetic media defences.

Staff training represents a critical component of holistic frameworks. Employees require education on deepfake red flags—such as unusual speech patterns, inconsistent visual quality, or atypical request characteristics—that may indicate synthetic content. Consumer fraud awareness programmes similarly reduce susceptibility to synthetic social engineering by educating customers about emerging threat types (Kietzmann et al., 2020; Alarfaj and Shahzadi, 2025; Muhly, Chizzonic and Leo, 2025; Metibemu, 2025).

Regulatory and compliance implications

Deepfake threats expose weaknesses in existing know-your-customer and anti-money laundering verification regimes that were designed for earlier threat models. Regulatory expectations continue to evolve, with supervisory authorities increasingly emphasising technological resilience and adaptive control frameworks (Financial Conduct Authority, 2023; Bank of England, 2024).

The challenge for regulators lies in establishing requirements that are sufficiently specific to guide institutional practice whilst remaining flexible enough to accommodate rapid technological change. Prescriptive technical standards risk obsolescence, whilst principle-based guidance may provide insufficient direction for institutions lacking internal expertise.

International coordination presents additional complexity. Deepfake-enabled fraud often involves cross-border elements, with attackers operating from jurisdictions with limited enforcement capability. Effective regulatory responses therefore require international cooperation through bodies such as the Financial Action Task Force and Basel Committee on Banking Supervision.

Discussion

Transformation of the banking threat model

The literature synthesis reveals that deepfake technology has fundamentally transformed the threat model for UK retail banking. Previously high-assurance channels—telephone banking, video-based know-your-customer processes, and biometric authentication—have become significant vulnerability points requiring comprehensive remediation. This transformation demands reconsideration of assumptions embedded within control frameworks developed for earlier technological eras.

The implications extend beyond fraud prevention to affect broader questions of trust in digital banking relationships. When customers and staff can no longer rely upon voice or visual confirmation of identity, the foundations of remote banking interaction require reconstruction. This represents not merely a technical challenge but a fundamental question about how financial institutions establish and maintain trust relationships in an era of pervasive synthetic media capability.

The documented cases of multi-million-dollar losses demonstrate that these concerns reflect genuine operational reality rather than theoretical speculation. The translation from capability to exploitation has already occurred, establishing deepfake fraud as an active threat vector requiring immediate attention.

Evaluation of emerging control responses

The four primary control shifts identified through this research—enhanced authentication, artificial intelligence-based detection, communication security redesign, and holistic risk frameworks—represent complementary rather than alternative approaches. Effective defence requires layered implementation that addresses multiple attack vectors simultaneously.

Enhanced authentication with active liveness detection addresses the vulnerability of passive biometric verification but introduces friction that may affect customer experience. The balance between security and convenience requires ongoing calibration based upon threat intelligence and customer tolerance thresholds. Institutions must avoid both excessive friction that drives customers to competitors and insufficient security that enables successful attacks.

Artificial intelligence-based detection provides valuable supplementary capability but cannot guarantee identification of all synthetic content. The adversarial dynamic between creation and detection technologies means that detection systems require continuous investment and updating. Reliance upon detection alone without complementary procedural controls would create unacceptable residual risk.

Communication security redesign through out-of-band verification represents perhaps the most robust defensive approach, as it addresses the fundamental vulnerability of single-channel trust rather than attempting to distinguish synthetic from genuine content. However, implementation requires cultural change alongside procedural modification, ensuring that staff and customers comply with verification requirements rather than circumventing them for convenience.

Holistic risk frameworks provide the governance structure within which technical and procedural controls operate effectively. Without appropriate oversight, training, and continuous improvement mechanisms, individual control elements may degrade or prove insufficient against evolving threats.

Achievement of research objectives

This research has achieved its stated objectives through systematic analysis of available literature. The first objective—analysing the evolving threat model—has been addressed through examination of how deepfakes undermine established verification channels and enable sophisticated social engineering attacks. The second objective—identifying control shifts—has been achieved through categorisation of the four primary response types evident in current practice and literature.

The third objective—evaluating detection technology effectiveness—has been addressed through analysis of various technical approaches and acknowledgement of their limitations within the adversarial detection dynamic. The fourth objective—examining organisational and regulatory implications—has been achieved through discussion of governance frameworks, staff training requirements, and evolving regulatory expectations.

The fifth objective—synthesising recommendations—is addressed below alongside identification of research gaps requiring further investigation.

Implications for practitioners

Banking practitioners should recognise that deepfake threats require multi-layered responses combining technical, procedural, and educational elements. No single control measure provides sufficient protection, and effective defence requires coordinated implementation across multiple domains.

Priority actions include assessment of current authentication processes for deepfake vulnerability, implementation of active liveness detection where video or voice verification is used, establishment of mandatory out-of-band verification for high-value transactions, and development of training programmes addressing synthetic media recognition for both staff and customers.

Investment in artificial intelligence-based detection capability should be accompanied by realistic expectations regarding detection limitations and commitment to ongoing system updating. Governance structures should explicitly incorporate deepfake risk within enterprise fraud management frameworks, with clear accountability for monitoring, response, and continuous improvement.

Regulatory considerations

Regulators face the challenge of providing sufficient guidance to support institutional responses whilst maintaining flexibility for ongoing adaptation. Principle-based requirements emphasising outcomes—such as maintaining identity verification integrity and preventing authorised push payment fraud—may prove more durable than prescriptive technical standards.

Enhanced information sharing regarding deepfake attack patterns and successful countermeasures could accelerate sectoral learning. Regulatory frameworks should encourage rather than impede such sharing, recognising that collective defence benefits all participants whilst individual institutional secrecy benefits only attackers.

International regulatory coordination remains essential given the cross-border nature of synthetic media-enabled fraud. UK regulators should actively engage with international counterparts to develop consistent approaches and facilitate cross-jurisdictional enforcement cooperation.

Limitations and future research directions

This research is subject to limitations inherent in literature synthesis methodology. The rapidly evolving nature of both deepfake technology and institutional responses means that published literature may not fully reflect current practice. Commercial sensitivity regarding fraud prevention measures limits the availability of detailed institutional case studies that would enable deeper analysis of implementation challenges and outcomes.

Future research should include empirical investigation of control effectiveness through case studies, surveys, or experimental methodologies. Longitudinal studies tracking the evolution of institutional responses over time would provide valuable insights into adaptation dynamics. Research examining customer attitudes toward enhanced security measures and their impact upon banking relationship choices would inform the security-convenience balance.

Investigation of detection technology performance against increasingly sophisticated deepfake creation methods remains a priority for technical research. Understanding the trajectory of the creation-detection arms race would inform institutional investment decisions and regulatory expectations.

Conclusions

This dissertation has examined how deepfake scams are transforming fraud risk controls within UK retail banking. The research demonstrates that deepfake technology has turned previously trusted identity and communication channels into high-risk vectors, driving banks toward multi-layer, artificial intelligence-augmented fraud controls, hardened payment authorisation processes, and substantially strengthened communication security.

The stated research objectives have been achieved through systematic literature synthesis. Analysis of the evolving threat model reveals that deepfakes enable criminals to convincingly impersonate executives, staff, and customers across voice, video, and identity document channels, undermining trust in channels previously treated as high assurance. Four primary control shifts have been identified: enhanced authentication with active liveness detection; deployment of artificial intelligence-based deepfake and anomaly detection systems; redesign of trusted communication channels with mandatory out-of-band verification; and integration of deepfake risks into holistic enterprise fraud frameworks.

Evaluation of detection technologies reveals both significant capability and inherent limitations arising from the adversarial dynamic between creation and detection. Effective defence requires layered approaches that do not depend upon detection alone. Organisational and regulatory implications include the need to rebalance convenience against security, develop comprehensive training programmes, and evolve regulatory frameworks to address emerging threats whilst maintaining flexibility for adaptation.

The significance of these findings extends beyond immediate practical application. Understanding institutional adaptation to novel technological threats provides insights applicable across sectors facing similar challenges. The banking sector’s experience with deepfake threats may inform responses in healthcare, government services, telecommunications, and other domains where identity verification and trusted communication are operationally critical.

Future research should prioritise empirical investigation of control effectiveness, longitudinal analysis of adaptation dynamics, and continued technical research into detection capabilities. The ongoing evolution of deepfake technology ensures that this will remain an active research domain requiring sustained attention from academic, practitioner, and regulatory communities.

The imperative for UK retail banking is clear: proactive investment in multi-layered defences, continuous updating of detection capabilities, cultural embedding of security-conscious practices, and active engagement with evolving regulatory expectations. The institutions that respond most effectively to deepfake threats will maintain customer trust and operational integrity; those that fail to adapt face significant financial, reputational, and regulatory consequences.

References

Alarfaj, F. and Shahzadi, S., 2025. Enhancing fraud detection in banking with deep learning: graph neural networks and autoencoders for real-time credit card fraud prevention. *IEEE Access*, 13, pp.20633-20646. https://doi.org/10.1109/access.2024.3466288

Bank of England, 2024. *Operational resilience: critical third parties to the UK financial sector*. London: Bank of England.

De Rancourt-Raymond, A. and Smaili, N., 2022. The unethical use of deepfakes. *Journal of Financial Crime*. https://doi.org/10.1108/jfc-04-2022-0090

Financial Conduct Authority, 2023. *FG23/3: Guidance on operational resilience*. London: Financial Conduct Authority.

Gambín, Á., Yazidi, A., Vasilakos, A., Haugerud, H. and Djenouri, Y., 2024. Deepfakes: current and future trends. *Artificial Intelligence Review*, 57. https://doi.org/10.1007/s10462-023-10679-x

Gupta, N., 2025. Security risks of generative AI in financial systems: a comprehensive review. *World Journal of Information Systems*. https://doi.org/10.17013/wjis.v1i3.16

Heidari, A., Navimipour, N., Dağ, H. and Unal, M., 2023. Deepfake detection using deep learning methods: a systematic and comprehensive review. *Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery*, 14. https://doi.org/10.1002/widm.1520

Ke, Z., Zhou, S., Zhou, Y., Chang, C. and Zhang, R., 2025. Detection of AI deepfake and fraud in online payments using GAN-based models. *2025 8th International Conference on Advanced Algorithms and Control Engineering (ICAACE)*, pp.1786-1790. https://doi.org/10.1109/icaace65325.2025.11020513

Kietzmann, J., Lee, L., McCarthy, I. and Kietzmann, T., 2020. Deepfakes: trick or treat? *Business Horizons*. https://doi.org/10.1016/j.bushor.2019.11.006

Kothinti, K., 2025. AI-powered identity verification and risk analysis: the future of fraud prevention in financial services. *European Journal of Computer Science and Information Technology*. https://doi.org/10.37745/ejcsit.2013/vol13n92355

Metibemu, O., 2025. Financial risk management in digital-only banks: addressing fraud and cybersecurity threats in a cashless economy. *Asian Journal of Research in Computer Science*. https://doi.org/10.9734/ajrcos/2025/v18i3603

Muhly, F., Chizzonic, E. and Leo, P., 2025. AI-deepfake scams and the importance of a holistic communication security strategy. *International Cybersecurity Law Review*, 6, pp.53-61. https://doi.org/10.1365/s43439-025-00143-7

Mustak, M., Salminen, J., Mäntymäki, M., Rahman, A. and Dwivedi, Y., 2023. Deepfakes: deceptions, mitigations, and opportunities. *Journal of Business Research*. https://doi.org/10.1016/j.jbusres.2022.113368

Raza, A., Munir, K. and Almutairi, M., 2022. A novel deep learning approach for deepfake image detection. *Applied Sciences*. https://doi.org/10.3390/app12199820

Sandoval, M., De Almeida Vau, M., Solaas, J. and Rodrigues, L., 2024. Threat of deepfakes to the criminal justice system: a systematic review. *Crime Science*, 13. https://doi.org/10.1186/s40163-024-00239-1

Sidelov, P., 2022. Analysis of deepfakes problem for banks and financial institutions. *Věda a perspektivy*. https://doi.org/10.52058/2695-1592-2022-3(10)-97-108

Snyder, H., 2019. Literature review as a research methodology: an overview and guidelines. *Journal of Business Research*, 104, pp.333-339.

Vecchietti, G., Liyanaarachchi, G. and Viglia, G., 2025. Managing deepfakes with artificial intelligence: introducing the business privacy calculus. *Journal of Business Research*. https://doi.org/10.1016/j.jbusres.2024.115010

Verdoliva, L., 2020. Media forensics and deepfakes: an overview. *IEEE Journal of Selected Topics in Signal Processing*, 14, pp.910-932. https://doi.org/10.1109/jstsp.2020.3002101
