
Regulatory design: what “proportionate” enforcement looks like when harms are fast, viral, and cross-border


Emily Carter

Abstract

This essay examines what constitutes proportionate regulatory enforcement when online harms—particularly non-consensual deepfakes and intimate imagery—spread rapidly across jurisdictional boundaries. Through a synthesis of contemporary literature, it analyses how emerging regulatory frameworks in the European Union and United Kingdom conceptualise proportionality, and evaluates whether current approaches adequately address the unique challenges posed by viral, cross-border digital harms. The analysis reveals that proportionate enforcement increasingly requires graduated, systems-level duties rather than reactive content removal alone. Key findings indicate that effective regulation must combine risk-based tiering of platform obligations, time-sensitive moderation mechanisms, upstream controls on generative technologies, and robust due process safeguards. The essay identifies significant tensions between national regulatory divergence and the inherently transnational nature of online harm propagation. It concludes that proportionate enforcement for fast-moving digital harms necessitates coordinated international frameworks anchored in rights-based standards, with particular attention to balancing expedient harm reduction against fundamental freedoms. Future research should examine the practical efficacy of upstream technological interventions and develop metrics for evaluating cross-border regulatory cooperation.

Introduction

The digital age has fundamentally transformed how harm manifests and propagates through society. Non-consensual intimate imagery and synthetic deepfake content represent particularly acute challenges for regulatory systems designed around traditional, geographically bounded conceptions of wrongdoing. When harmful content can achieve global reach within hours of creation, and when sophisticated generative artificial intelligence tools enable the production of hyper-realistic fabricated intimate imagery, conventional enforcement mechanisms face unprecedented strain.

The proliferation of non-consensual intimate deepfakes illustrates these challenges with troubling clarity. Recent empirical research documents the profound psychological and social harms experienced by victims, including anxiety, depression, relationship dissolution, and professional consequences (Flynn et al., 2025). Yet the very characteristics that make such content so damaging—its viral spreadability, its persistence once distributed, and its ability to cross jurisdictional boundaries instantaneously—simultaneously confound regulatory responses predicated on identifying, locating, and removing discrete pieces of content.

This regulatory challenge has prompted significant legislative activity. The European Union’s Digital Services Act (DSA) establishes a comprehensive framework for platform accountability, whilst the United Kingdom’s Online Safety Act (OSA) represents an ambitious attempt to impose safety duties on service providers. Both frameworks grapple with the central question of proportionality: how can regulatory obligations be calibrated to address genuine harms without imposing excessive burdens on platforms, stifling legitimate expression, or undermining due process protections?

The principle of proportionality holds particular significance in this context. Regulatory theory generally holds that enforcement measures should be commensurate with the risks they address, the capacities of regulated entities, and the potential for collateral consequences (Baldwin, Cave and Lodge, 2012). However, when harms are fast, viral, and cross-border, the temporal and spatial assumptions underlying conventional proportionality analysis require fundamental reconsideration. A moderation response that might be proportionate for static, geographically contained content becomes wholly inadequate when facing exponentially spreading synthetic media.

This essay therefore examines how contemporary regulatory frameworks conceptualise proportionate enforcement for online harms characterised by speed, virality, and transnational reach. It focuses particularly on non-consensual deepfakes and intimate imagery as paradigmatic examples of such harms, whilst drawing broader lessons for digital regulation. The analysis synthesises recent scholarship to identify emerging principles of proportionate enforcement and critically evaluates their adequacy for addressing the challenges these harms present.

Aim and objectives

The primary aim of this essay is to analyse what constitutes proportionate regulatory enforcement when online harms spread rapidly, achieve viral distribution, and traverse national boundaries, with particular reference to non-consensual deepfakes and intimate imagery.

To achieve this aim, the essay pursues the following specific objectives:

1. To examine how existing regulatory frameworks, particularly the EU Digital Services Act and UK Online Safety Act, conceptualise and operationalise proportionality in relation to platform obligations.

2. To evaluate the shift from content-focused to systems-based regulatory approaches and assess its implications for proportionate enforcement.

3. To analyse the effectiveness of time-sensitive moderation mechanisms in achieving harm reduction for viral content.

4. To critically examine proposals for upstream controls on generative technologies as a component of proportionate regulatory design.

5. To identify the challenges that cross-border harm propagation presents for consistent and proportionate enforcement, and to evaluate mechanisms for international regulatory coordination.

6. To assess the due process implications of current enforcement approaches and examine how rights-based frameworks might be integrated into proportionate regulatory design.

Methodology

This essay employs a literature synthesis methodology to analyse scholarly and regulatory materials concerning proportionate enforcement for online harms. Literature synthesis represents an appropriate methodological approach for examining complex regulatory questions that span multiple disciplines and jurisdictions, enabling the integration of diverse perspectives into a coherent analytical framework (Snyder, 2019).

The primary sources for this synthesis comprise peer-reviewed academic articles published in law, communications, and technology journals, supplemented by official regulatory documents and guidance from relevant authorities. The selection criteria prioritised recent scholarship (2017-2025) addressing the intersection of platform regulation, content moderation, and online harms, with particular attention to materials examining non-consensual intimate imagery and deepfake technology.

The analytical approach involves thematic organisation of the literature around key dimensions of proportionate enforcement: risk-based tiering, systems regulation, temporal dynamics, cross-border coordination, and due process safeguards. This thematic structure enables systematic comparison of different regulatory approaches and identification of emerging consensus positions and contested issues within the field.

Limitations of this methodology include reliance on published scholarship, which may lag behind rapidly evolving technological and regulatory developments. Additionally, the synthesis necessarily reflects the jurisdictional focus of available literature, with greater coverage of European and Anglo-American frameworks than other regulatory traditions. These limitations are acknowledged whilst recognising that the selected sources provide substantial insight into the central research questions.

Literature review

Risk-based tiering and proportionate obligations

Contemporary platform regulation increasingly employs risk-based tiering to calibrate obligations according to the characteristics of regulated entities and the potential harms they intermediate. The EU Digital Services Act exemplifies this approach, scaling duties by platform function and size. All intermediaries face baseline transparency and notice-and-action requirements, whilst escalating obligations for risk assessment, independent auditing, and crisis response attach specifically to very large online platforms and search engines (Nash and Felton, 2024; Farrand, 2024; Rusli, Halim and Mujahid, 2025).

This tiered structure reflects a particular conception of proportionality: that regulatory burdens should match the risk profile and resource capacity of regulated entities. Larger platforms with greater reach possess both enhanced potential to propagate harm and superior capacity to implement sophisticated compliance measures. The DSA accordingly imposes more extensive systemic risk assessment duties on platforms reaching more than 45 million monthly active users within the European Union.
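
To make this tiering logic concrete, the following Python sketch illustrates how obligations might scale with service size and function. It is purely illustrative: only the 45 million user threshold is taken from the discussion above, while the duty labels, the tier test, and the example services are invented for exposition and do not restate the statutory text.

```python
from __future__ import annotations

from dataclasses import dataclass

# Hypothetical, simplified sketch of risk-based tier assignment.
VLOP_THRESHOLD = 45_000_000  # average monthly active recipients in the EU


@dataclass
class Service:
    name: str
    monthly_active_eu_users: int
    is_platform_or_search: bool  # hosts user content or operates a search engine


def assign_duties(service: Service) -> list[str]:
    """Return an illustrative list of obligations scaled to size and function."""
    duties = ["transparency reporting", "notice-and-action mechanism"]
    if service.is_platform_or_search and service.monthly_active_eu_users >= VLOP_THRESHOLD:
        duties += [
            "systemic risk assessment",
            "independent audit",
            "crisis response protocol",
        ]
    return duties


print(assign_duties(Service("ExampleHost", 2_000_000, True)))
print(assign_duties(Service("ExampleGiant", 60_000_000, True)))
```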

The UK Online Safety Act adopts a somewhat different approach. Whilst the statute itself establishes less explicit tiering, Ofcom’s draft guidance informally incorporates similar proportionality reasoning, adjusting expectations according to service type and scale (Nash and Felton, 2024). This regulatory divergence creates analytical challenges for comparative assessment and practical difficulties for platforms operating across jurisdictions.

Scholars have identified tensions within risk-based approaches. Farrand (2024) observes that the OSA and DSA conceptualise harm differently—the former emphasising individual harm to users, the latter incorporating broader systemic risks to democratic processes and public discourse. These divergent harm concepts complicate efforts to establish consistent, proportionate enforcement standards across regulatory regimes.

From content removal to systems regulation

A significant development in regulatory thinking involves the shift from content-focused to systems-based approaches. Traditional platform regulation emphasised ex post content measures: removal speeds, takedown rates, and response times following notification of harmful material. Whilst such metrics provide measurable compliance indicators, scholarship increasingly questions their adequacy for addressing ultra-fast viral harms (Nash and Felton, 2024; Schneider and Rizoiu, 2023).

Systems-based regulation instead requires ex ante design changes to reduce the probability and scale of harm before it materialises. This approach encompasses safety-by-design principles, limitations on contact between adult users and minors, visibility controls that constrain algorithmic amplification, and enhanced user tools for controlling exposure to potentially harmful content (Nash and Felton, 2024; Trengove et al., 2022; Sanders et al., 2023).

The rationale for systems regulation derives partly from recognition that content removal alone cannot address harms that propagate exponentially. Once non-consensual intimate imagery achieves viral distribution, removal from a single platform provides limited remediation when copies persist across numerous services and private communications channels. Proportionate enforcement must therefore address the structural conditions enabling harm propagation, not merely individual instances of harmful content.

Sanders et al. (2023) examine these dynamics specifically in relation to commercial content creation platforms, documenting how platform design choices—including verification systems, distribution mechanisms, and monetisation structures—shape the prevalence and persistence of non-consensually shared imagery. Their analysis suggests that meaningful harm reduction requires regulatory attention to these architectural features rather than exclusive focus on post-hoc removal.

Temporal dynamics and moderation effectiveness

The effectiveness of content moderation depends critically on temporal factors. Schneider and Rizoiu (2023) provide important quantitative analysis demonstrating that DSA-style mechanisms combining trusted flagger prioritisation with 24-hour removal deadlines can achieve substantial harm reduction even on fast-moving platforms. However, their modelling reveals that effectiveness varies significantly according to content half-life and the prioritisation of the most harmful posts.

These findings carry significant implications for proportionate regulatory design. They suggest that appropriately structured moderation requirements can meaningfully reduce harm from viral content, but only if enforcement mechanisms account for the specific propagation dynamics of different content types and platform architectures. Generic removal timeframes may prove either excessively burdensome for low-risk content or inadequately responsive for rapidly spreading harmful material.
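
The dependence on content half-life can be made concrete with a minimal worked example. Assuming, purely for illustration, that exposure to a piece of content accrues at a rate that decays exponentially with a given half-life (an assumption adopted here for exposition, not the model used by Schneider and Rizoiu), the share of total exposure that removal still averts after a delay of d hours is 2^(−d/h):

```python
def harm_averted_fraction(removal_delay_hours: float, half_life_hours: float) -> float:
    """Fraction of total expected exposure that removal still prevents,
    assuming exposure accrues at a rate decaying exponentially with the
    given content half-life (an illustrative assumption)."""
    return 2 ** (-removal_delay_hours / half_life_hours)


for half_life in (2, 12, 72):  # hours: fast-burning versus slow-burning content
    for delay in (1, 24):      # removal within one hour versus a 24-hour deadline
        print(f"half-life {half_life:>2}h, delay {delay:>2}h: "
              f"{harm_averted_fraction(delay, half_life):.1%} of exposure averted")
```

On these assumptions, a 24-hour deadline averts almost none of the exposure to content with a two-hour half-life but most of the exposure to slower-burning material, which is precisely why generic timeframes sit awkwardly with proportionality.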

For non-consensual intimate deepfakes specifically, the literature indicates that purely ex post, platform-policy-driven moderation is insufficient. Kira (2024) argues that the UK Online Safety Act’s framework fails to address the distinctive characteristics of deepfake harms, which include not only distribution but initial creation using generative technologies. The instantaneous creation potential of modern AI systems means that harmful deepfakes can be produced and distributed before any moderation intervention becomes possible.

Upstream controls on generative technologies

Recognition of moderation limitations has prompted scholarly attention to upstream interventions targeting the creation rather than merely the distribution of harmful content. Proposals include prohibitions on generating intimate content without consent, mandatory detection systems within generative AI tools, and requirements for platforms hosting such tools to implement technical safeguards against misuse (Kira, 2024; Flynn et al., 2025).

Flynn et al. (2025) examine perpetrator motivations for creating non-consensual sexualised deepfakes, identifying both intimate relationship contexts and broader patterns of harassment and abuse. Their research suggests that creation-focused interventions might address motivational factors inaccessible through distribution-focused moderation, though they acknowledge significant enforcement challenges given the decentralised nature of generative technology development.

The proportionality of upstream controls raises distinctive questions. Restrictions on creation tools implicate expression interests differently than distribution restrictions, potentially constraining legitimate artistic, educational, and research applications of generative technology. Calibrating such controls requires careful attention to overbreadth concerns whilst recognising that downstream harms may be irremediable once creation occurs.

Cross-border enforcement challenges

The transnational character of online harm propagation creates fundamental challenges for proportionate enforcement. Content created in one jurisdiction may be hosted in a second, distributed to users in a third, and cause harm to victims located in a fourth. This jurisdictional fragmentation complicates enforcement in multiple dimensions: determining applicable law, securing cooperation from foreign platforms, coordinating investigation across regulatory authorities, and achieving meaningful remediation for affected individuals (Elhai, 2020; Pillai, 2025).

Existing frameworks attempt various approaches to cross-border coordination. The DSA establishes mechanisms for regulatory cooperation among European authorities, including information sharing and coordinated enforcement actions. However, cooperation with authorities outside the European Union depends on bilateral arrangements and mutual recognition frameworks that remain underdeveloped for online harms.

Rusli, Halim and Mujahid (2025) compare EU and Malaysian approaches, identifying significant divergences in harm conceptualisation, platform obligations, and enforcement mechanisms. Their comparative analysis illustrates how jurisdictional differences create both regulatory gaps—where harmful content escapes effective oversight—and compliance conflicts—where platforms face inconsistent or contradictory obligations.

Scholars increasingly argue that meaningful proportionality for cross-border harms requires international coordination mechanisms that currently do not exist. Elhai (2020) proposes a Content Platform Commission model drawing on telecommunications and financial services precedents, though implementation would require unprecedented international agreement on substantive harm standards.

Due process and rights-based frameworks

Proportionate enforcement must balance expedient harm reduction against fundamental rights protections, including expression freedoms and procedural fairness. Scholarship identifies significant concerns regarding the delegation of enforcement authority to private platforms operating under their own Terms of Service and algorithmic moderation systems.

Frosio (2017) traces the evolution from intermediary liability to intermediary responsibility, arguing that regulatory pressure has incentivised platforms to over-remove content to avoid potential liability, with consequent chilling effects on legitimate expression. This dynamic illustrates how nominally proportionate regulatory frameworks may generate disproportionate outcomes when mediated through private enforcement mechanisms operating under different incentive structures.

Pillai (2025) emphasises the need for public, rights-based standards and redress mechanisms rather than purely delegated enforcement. This perspective holds that proportionality requires not only appropriate substantive standards but also procedural safeguards ensuring affected parties can challenge enforcement decisions. Algorithmic moderation systems, however sophisticated, lack the transparency and accountability mechanisms that proportionate enforcement demands.

These concerns acquire particular salience for content categories where error costs are high. Intimate imagery may be shared with consent in some contexts and without consent in others; distinguishing these cases requires contextual judgment that automated systems perform poorly. Disproportionate removal of consensually shared content imposes expression costs, whilst under-removal of non-consensual content fails to protect victims. Proportionate enforcement must navigate this terrain with appropriate procedural safeguards.

Discussion

The literature synthesis reveals that proportionate enforcement for fast, viral, cross-border harms requires fundamental reconceptualisation of traditional regulatory approaches. Several key themes emerge that warrant critical analysis in relation to the stated research objectives.

The limitations of content-centric enforcement

The evidence strongly supports the proposition that content removal alone cannot constitute proportionate enforcement for viral harms. When non-consensual deepfakes can achieve global distribution within hours, even expedited removal timeframes may leave substantial harm unremedied. The mathematical reality of exponential propagation means that each hour of delay potentially doubles or triples the harm footprint.

This analysis suggests that proportionality for viral harms must be assessed at the systems level rather than the individual content level. Regulatory frameworks that measure proportionality solely by reference to removal speeds and takedown rates employ metrics fundamentally mismatched to the harm dynamics they address. A platform that removes 95% of reported non-consensual intimate imagery within 24 hours may appear highly compliant, yet this performance permits massive harm when the 5% of unaddressed content includes the most viral material.
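
A toy calculation, using invented figures, illustrates the point. Suppose 1,000 items are reported: the 95 per cent removed within the deadline each attract a modest audience before takedown, while the unaddressed 5 per cent include the most viral items, whose reach doubles repeatedly before spread stalls:

```python
# Toy calculation with invented numbers, not empirical data.
removed_items = 950
avg_views_before_removal = 5_000          # assumed modest pre-removal reach

unaddressed_items = 50
seed_views = 100                          # assumed initial audience per item
doubling_periods = 12                     # assumed doublings before spread stalls
views_per_viral_item = seed_views * 2 ** doubling_periods

removed_exposure = removed_items * avg_views_before_removal
unaddressed_exposure = unaddressed_items * views_per_viral_item
total = removed_exposure + unaddressed_exposure

print(f"Exposure from the 95% removed:    {removed_exposure:>12,}")
print(f"Exposure from the 5% unaddressed: {unaddressed_exposure:>12,}")
print(f"Share of harm left unaddressed:   {unaddressed_exposure / total:.0%}")
```

Under these assumed numbers, the 5 per cent left unaddressed accounts for roughly four-fifths of total exposure, despite the headline 95 per cent removal rate.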

The systems-based approaches examined in the literature offer more promising frameworks. By requiring platforms to address the structural conditions enabling harm propagation—algorithmic amplification, friction-free sharing, inadequate verification—systems regulation attacks the problem at a more fundamental level. However, this shift raises new proportionality questions: how should regulators assess whether platform design choices adequately balance harm reduction against legitimate functionality?

Tiering and the challenges of platform heterogeneity

The risk-based tiering approaches employed by the DSA reflect sensible proportionality reasoning in allocating regulatory burdens according to platform capacity and potential harm. However, the literature reveals significant implementation challenges that complicate this apparently straightforward logic.

Platform size correlates imperfectly with harm potential. Smaller, specialised platforms may facilitate disproportionate harm within particular communities, whilst very large platforms may possess moderation systems that effectively contain certain harm types. The DSA’s threshold-based approach necessarily employs somewhat arbitrary boundaries that may not track actual risk distributions.

More fundamentally, tiered approaches face challenges when platforms operate across jurisdictions with different tiering frameworks or incompatible harm definitions. The divergence between OSA and DSA approaches identified by Farrand (2024) illustrates how regulators prioritising different harm concepts may reach inconsistent conclusions about proportionate obligations for identical platform conduct.

Upstream controls and overbreadth concerns

The proposals for upstream controls on generative technologies represent a significant extension of proportionality analysis to the creation stage of harmful content. The logic supporting such controls appears compelling: if downstream moderation cannot adequately address harms from non-consensual intimate deepfakes, proportionate enforcement must extend to preventing creation.

However, upstream controls raise distinctive proportionality concerns. Restrictions on generative tools that prevent creation of non-consensual intimate imagery may simultaneously prevent legitimate uses—artistic expression, educational demonstrations, consensual adult content creation, and research applications. The overbreadth inherent in technology-level restrictions differs qualitatively from the overbreadth risks in content moderation.

The literature provides insufficient guidance on calibrating upstream controls proportionately. Technical measures such as content filtering in generative AI systems may prove both over-inclusive (blocking legitimate content) and under-inclusive (failing to detect harmful content that evades filters). Regulatory frameworks that mandate such measures without specifying performance standards risk either excessive compliance burdens or inadequate protection.
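
The simultaneous over- and under-inclusiveness can be illustrated with simple base-rate arithmetic, using assumed figures rather than the measured performance of any real filter. Where abusive generation requests are rare, even a filter with apparently strong accuracy blocks many legitimate requests for each abusive one it catches, while still letting a fixed share of abusive requests through:

```python
# Illustrative base-rate arithmetic with assumed numbers.
prevalence = 0.001          # assumed share of requests that are abusive
true_positive_rate = 0.90   # assumed share of abusive requests the filter catches
false_positive_rate = 0.02  # assumed share of legitimate requests wrongly blocked

abusive_caught = prevalence * true_positive_rate
abusive_missed = prevalence * (1 - true_positive_rate)
legitimate_blocked = (1 - prevalence) * false_positive_rate

print(f"Legitimate requests blocked per abusive request caught: "
      f"{legitimate_blocked / abusive_caught:.1f}")
print(f"Share of abusive requests that still get through: "
      f"{abusive_missed / prevalence:.0%}")
```

On these assumptions the filter blocks more than twenty legitimate requests for every abusive request it intercepts, yet still misses one abusive request in ten, which is why mandates that omit performance standards risk both excessive burden and inadequate protection.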

Cross-border coordination deficits

The analysis of cross-border challenges reveals perhaps the most fundamental limitation of current proportionality frameworks. Regulatory approaches calibrated to domestic contexts inevitably prove disproportionate—in both directions—when applied to transnational harm propagation.

Domestic regulators face a dilemma: applying stringent requirements to domestically accessible content may burden platforms excessively relative to what domestic enforcement can achieve, whilst accepting more permissive standards may leave domestic users inadequately protected against harms originating elsewhere. Neither approach achieves proportionality in any meaningful sense.

The literature identifies international coordination as essential for resolving this dilemma, yet offers limited guidance on achieving such coordination given current institutional arrangements. The Elhai (2020) proposal for a Content Platform Commission represents one ambitious vision, though its feasibility depends on international consensus unlikely to emerge soon. In the interim, regulatory divergence will continue generating both gaps and conflicts that undermine proportionate enforcement.

Due process and the delegation problem

The critical perspectives on private enforcement delegation raise important concerns for proportionality analysis. When regulatory frameworks effectively require platforms to make content determinations under conditions of legal uncertainty and liability risk, the resulting private enforcement may systematically diverge from what proportionate public enforcement would produce.

The literature documents tendencies toward over-removal in such contexts, as platforms rationally prefer removing potentially legitimate content to risking liability for failing to remove actually harmful content. This dynamic generates hidden costs—chilled expression, reduced access to information, inconsistent enforcement—that rarely feature in official assessments of regulatory proportionality.

Addressing these concerns requires embedding robust due process protections within regulatory frameworks, including transparency about enforcement decisions, meaningful appeal mechanisms, and accountability for erroneous removals. The literature suggests current frameworks inadequately address these requirements, particularly for algorithmic enforcement at scale.

Conclusions

This essay has examined what constitutes proportionate regulatory enforcement when online harms spread rapidly, achieve viral distribution, and cross national boundaries. The analysis, synthesising contemporary scholarship on platform regulation and online safety, reveals that proportionate enforcement for such harms requires fundamental departures from traditional regulatory approaches.

The first objective—examining how existing frameworks conceptualise proportionality—has been addressed through detailed analysis of the DSA and OSA approaches. The evidence demonstrates that risk-based tiering according to platform size and function represents the dominant contemporary approach, though significant divergences in harm conceptualisation complicate cross-jurisdictional consistency.

The second objective—evaluating the shift to systems regulation—has been achieved through examination of scholarly critiques of content-focused approaches and analysis of safety-by-design alternatives. The literature strongly supports systems-level duties as more proportionate responses to viral harms than reactive content removal alone.

The third objective—analysing time-sensitive moderation effectiveness—has been addressed through engagement with quantitative modelling demonstrating that appropriately structured moderation can achieve meaningful harm reduction for viral content, provided prioritisation mechanisms focus resources on the most harmful material.

The fourth objective—examining upstream controls—has been achieved through critical analysis of proposals for creation-focused interventions. Whilst such controls address limitations of distribution-focused moderation, they raise distinctive proportionality concerns regarding overbreadth that current literature incompletely addresses.

The fifth objective—identifying cross-border challenges—has been achieved through synthesis of comparative regulatory scholarship documenting how jurisdictional fragmentation undermines consistent enforcement. The analysis confirms that meaningful proportionality for transnational harms requires international coordination mechanisms that remain underdeveloped.

The sixth objective—assessing due process implications—has been addressed through examination of scholarly critiques regarding private enforcement delegation. The literature reveals significant concerns about transparency, accountability, and rights protection that current frameworks inadequately address.

In conclusion, proportionate enforcement for fast, viral, cross-border harms like non-consensual intimate deepfakes requires tiered, risk-based systemic duties; time-critical moderation mechanisms; upstream controls on creation technologies; and rights-anchored transparency and redress—all coordinated across jurisdictions through mechanisms yet to be fully developed. Future research should examine the practical efficacy of upstream technological interventions, develop metrics for evaluating cross-border regulatory cooperation, and investigate how procedural safeguards can be embedded within algorithmic enforcement at scale.

References

Baldwin, R., Cave, M. and Lodge, M., 2012. *Understanding regulation: Theory, strategy, and practice*. 2nd ed. Oxford: Oxford University Press.

Elhai, W., 2020. Regulating digital harm across borders: Exploring a Content Platform Commission. *Proceedings of the International Conference on Social Media and Society*. Available at: https://doi.org/10.1145/3400806.3400832

Farrand, B., 2024. How do we understand online harms? The impact of conceptual divides on regulatory divergence between the Online Safety Act and Digital Services Act. *Journal of Media Law*, 16, pp.240-262. Available at: https://doi.org/10.1080/17577632.2024.2357463

Flynn, A., Powell, A., Eaton, A. and Scott, A., 2025. Sexualized deepfake abuse: Perpetrator and victim perspectives on the motivations and forms of non-consensually created and shared sexualized deepfake imagery. *Journal of Interpersonal Violence*. Available at: https://doi.org/10.1177/08862605251368834

Frosio, G., 2017. Why keep a dog and bark yourself? From intermediary liability to responsibility. *International Journal of Law and Information Technology*, 26, pp.1-33. Available at: https://doi.org/10.1093/ijlit/eax021

Kira, B., 2024. When non-consensual intimate deepfakes go viral: The insufficiency of the UK Online Safety Act. *Computer Law and Security Review*, 54, 106024. Available at: https://doi.org/10.1016/j.clsr.2024.106024

Nash, V. and Felton, L., 2024. Treating the symptoms or the disease? Analysing the UK Online Safety Act’s approach to digital regulation. *Policy and Internet*. Available at: https://doi.org/10.1002/poi3.404

Pillai, A., 2025. Striking the balance: Global frameworks for regulating internet content and combating hate speech in a borderless digital era. *BIS Humanities and Social Science*. Available at: https://doi.org/10.31603/bishss.335

Rusli, M., Halim, Z. and Mujahid, A., 2025. Regulating social media responses to online harms: A comparative study between the European Union (EU) and Malaysia. *Environment-Behaviour Proceedings Journal*. Available at: https://doi.org/10.21834/e-bpj.v10isi33.7065

Sanders, T., Trueman, G., Worthington, K. and Keighley, R., 2023. Non-consensual sharing of images: Commercial content creators, sexual content creation platforms and the lack of protection. *New Media and Society*, 27, pp.84-105. Available at: https://doi.org/10.1177/14614448231172711

Schneider, P. and Rizoiu, M., 2023. The effectiveness of moderating harmful online content. *Proceedings of the National Academy of Sciences of the United States of America*, 120. Available at: https://doi.org/10.1073/pnas.2307360120

Snyder, H., 2019. Literature review as a research methodology: An overview and guidelines. *Journal of Business Research*, 104, pp.333-339.

Trengove, M., Kazim, E., Almeida, D., Hilliard, A., Zannone, S. and Lomas, E., 2022. A critical review of the Online Safety Bill. *Patterns*, 3. Available at: https://doi.org/10.1016/j.patter.2022.100544
