## Abstract
This dissertation examines the user-visible effects of United Kingdom online safety regulation, with particular focus on the Online Safety Act 2023 (OSA) and its implications for platform moderation practices. Employing a literature synthesis methodology, the study analyses peer-reviewed scholarship, legal commentary, and regulatory documentation to identify observable changes in enforcement practices from the perspective of platform users. The findings reveal three principal areas of transformation: an expansion of proactive content removal systems that may incentivise over-moderation of lawful speech; the standardisation and proliferation of user-facing safety tools including blocking, muting, and reporting functions; and shifts in transparency mechanisms that, whilst improving upon previous practices, remain substantially less robust than comparable European Union frameworks. The analysis concludes that UK online safety regulation drives platforms toward more systematic, risk-based moderation, resulting in broader and faster enforcement that users experience through increased content removals and enhanced safety controls. However, significant concerns persist regarding opacity in decision-making processes, chilling effects on legitimate expression, and the absence of meaningful due process protections for users whose content or accounts are sanctioned.
## Introduction
The regulation of online platforms has emerged as one of the most consequential policy challenges of the digital age. As social media platforms, video-sharing services, and other digital intermediaries have become central to public discourse, commerce, and social interaction, governments worldwide have grappled with how to address the harms that can proliferate within these spaces whilst preserving the benefits of open communication. The United Kingdom has positioned itself at the forefront of this regulatory movement through the enactment of the Online Safety Act 2023, a comprehensive legislative framework that imposes significant duties upon platforms operating within its jurisdiction.
The OSA represents a paradigm shift in how the UK approaches digital regulation. Rather than relying primarily upon reactive mechanisms triggered by user complaints or law enforcement referrals, the legislation mandates that platforms adopt proactive, risk-based approaches to content moderation. This transformation carries profound implications for millions of users who interact with regulated platforms daily, yet scholarly attention has concentrated primarily upon the legal architecture of the regime rather than its experiential dimensions.
Understanding how regulatory changes manifest at the user level matters for several interconnected reasons. First, the legitimacy of any regulatory intervention ultimately depends upon how it affects those it purports to protect. Users who experience moderation as arbitrary, opaque, or excessive may lose trust in both platforms and the regulatory frameworks that govern them. Second, the OSA’s stated objectives include protecting users from harmful content whilst safeguarding freedom of expression; evaluating whether these objectives are achieved requires attention to user-level outcomes. Third, as other jurisdictions observe and potentially emulate the UK’s approach, understanding its practical effects provides essential evidence for comparative policy analysis.
This dissertation addresses a significant gap in existing scholarship by systematically examining what changes in enforcement practice are visible to users under UK online safety rules. By synthesising legal analysis, empirical research, and regulatory documentation, it provides a comprehensive account of how the OSA and related frameworks reshape the user experience of platform moderation.
## Aim and objectives
The overarching aim of this dissertation is to identify and analyse the user-visible effects of UK online safety regulation on platform moderation practices, with particular attention to the Online Safety Act 2023.
To achieve this aim, the following specific objectives guide the investigation:
1. To examine how the OSA’s requirements for proactive content moderation affect the scope and frequency of content removals and account sanctions experienced by users.
2. To analyse the expansion and standardisation of user-facing safety tools mandated under UK online safety frameworks and assess their effectiveness from the user perspective.
3. To evaluate the transparency and procedural protections available to users when content is removed or accounts are sanctioned, particularly in comparison with alternative regulatory approaches.
4. To critically assess the implications of UK online safety regulation for freedom of expression and the risk of over-moderation of lawful speech.
5. To synthesise findings into a coherent account of how UK online safety rules reshape the user experience of platform moderation and identify areas requiring further research or regulatory attention.
## Methodology
This dissertation employs a literature synthesis methodology to address the research aim and objectives. Literature synthesis represents an established approach within legal and policy scholarship, enabling the systematic integration of diverse sources to construct a comprehensive understanding of complex regulatory phenomena.
The methodology involved several distinct phases. Initially, a comprehensive search strategy identified relevant peer-reviewed journal articles, legal commentary, government publications, and regulatory documents addressing UK online safety regulation and its effects on platform moderation. Search terms included combinations of “Online Safety Act,” “platform moderation,” “content moderation,” “UK digital regulation,” “Ofcom,” and related terminology. Databases searched included legal repositories, social science databases, and government publication archives.
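To make the search strategy reproducible in outline, the following minimal Python sketch shows one way the Boolean query strings described above might be assembled from the listed terms. The pairing logic and query syntax are illustrative assumptions rather than the syntax of any particular database.

```python
from itertools import product

# Search term groups taken from the methodology description above
regulatory_terms = ['"Online Safety Act"', '"UK digital regulation"', 'Ofcom']
practice_terms = ['"platform moderation"', '"content moderation"']

def build_queries(group_a, group_b):
    """Pair every regulatory term with every moderation term into Boolean query strings."""
    return [f"{a} AND {b}" for a, b in product(group_a, group_b)]

if __name__ == "__main__":
    for query in build_queries(regulatory_terms, practice_terms):
        print(query)  # e.g. "Online Safety Act" AND "platform moderation"
```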
Selection criteria prioritised sources offering direct analysis of user-facing effects of UK online safety regulation, comparative analysis with other regulatory frameworks, empirical evidence regarding platform moderation practices, and theoretical frameworks for understanding regulatory effects on digital platforms. Sources were excluded if they lacked academic rigour, focused exclusively on technical implementation without user-level implications, or addressed jurisdictions outside the UK without comparative relevance.
The synthesis process involved thematic analysis of selected sources to identify recurring themes, points of consensus, and areas of scholarly disagreement. Findings were organised according to the three principal dimensions of user-visible change identified in the literature: proactive content removal, safety tool provision, and transparency mechanisms. Critical analysis examined the implications of these changes for the stated objectives of UK online safety regulation and for user rights and experiences.
This methodological approach is appropriate given the relatively recent enactment of the OSA and the consequent limited availability of empirical data regarding its effects. As implementation proceeds and more evidence becomes available, future research may productively employ empirical methodologies including surveys, platform data analysis, and qualitative interviews with affected users.
## Literature review
### The regulatory architecture of UK online safety
The Online Safety Act 2023 establishes a comprehensive regulatory framework for online platforms operating in the United Kingdom. The legislation designates Ofcom as the independent regulator responsible for overseeing compliance and enforcing the regime’s requirements (Nash and Felton, 2024). The Act imposes differentiated duties upon platforms based upon their size, functionality, and the risks they present, with the most stringent obligations falling upon Category 1 services that meet specified thresholds for user numbers and functionality.
Central to the OSA’s approach is the imposition of duties of care upon regulated services. These duties require platforms to conduct risk assessments identifying potential harms that may arise from user-generated content and to implement proportionate measures to mitigate identified risks (Law, 2024). The legislation specifies categories of illegal content that platforms must address, whilst also establishing duties relating to content that is harmful to children and, for Category 1 services, user empowerment duties concerning specified categories of content that adult users may wish to limit or avoid.
Scholars have characterised this approach as representing a shift from reactive to proactive regulation. Whereas previous UK frameworks primarily held platforms accountable for responding to notified illegal content, the OSA requires platforms to design systems capable of detecting and removing harmful content without relying upon user reports (Judson, Kira and Howard, 2024). This transformation has significant implications for how platforms structure their moderation operations and, consequently, for what users experience.
### Proactive moderation and the risk of over-removal
A substantial body of scholarship addresses the implications of the OSA’s proactive moderation requirements for the scope and character of content enforcement. Legal analysis by Judson, Kira and Howard (2024) identifies what they term the “Bypass Strategy” as a likely response to the Act’s requirements. This strategy involves platforms expanding their own community guidelines and terms of service beyond the specific requirements of criminal law to avoid the necessity of making fine-grained legality assessments in individual cases.
The incentive structure driving this bypass strategy merits careful attention. The OSA establishes substantial penalties for non-compliance, including fines of up to £18 million or ten per cent of qualifying worldwide revenue, whichever is greater. Simultaneously, the Act’s definitions of illegal and harmful content involve considerable legal complexity, requiring platforms to make assessments that may ultimately depend upon context-specific factors that automated systems struggle to evaluate. Faced with this combination of high stakes and legal uncertainty, platforms may rationally conclude that over-removal of borderline content presents less risk than under-removal of content that subsequently proves harmful or illegal.
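The asymmetry of this incentive structure can be illustrated with a simple expected-cost comparison. The sketch below is purely hypothetical: the revenue figure, enforcement probability, per-item removal cost, and content volume are assumptions chosen for exposition, and only the penalty ceiling reflects the Act’s stated maximum.

```python
# Illustrative expected-cost comparison of over- vs under-removal for a
# hypothetical platform. All inputs below are assumptions for exposition only.

def max_penalty(worldwide_revenue_gbp: float) -> float:
    """OSA ceiling: the greater of £18m or 10% of qualifying worldwide revenue."""
    return max(18_000_000, 0.10 * worldwide_revenue_gbp)

# Hypothetical inputs
revenue = 500_000_000          # assumed qualifying worldwide revenue (£500m)
p_enforcement = 0.05           # assumed probability that under-removal triggers a penalty
cost_per_wrongful_removal = 2  # assumed cost (appeals, lost trust) per lawful post removed
borderline_posts = 1_000_000   # assumed volume of borderline items per year

# Expected cost of leaving borderline content up (risking a regulatory penalty)
expected_under_removal = p_enforcement * max_penalty(revenue)

# Expected cost of removing all borderline content (over-removal of lawful speech)
expected_over_removal = borderline_posts * cost_per_wrongful_removal

print(f"Maximum penalty:                £{max_penalty(revenue):,.0f}")       # £50,000,000
print(f"Expected cost of under-removal: £{expected_under_removal:,.0f}")     # £2,500,000
print(f"Expected cost of over-removal:  £{expected_over_removal:,.0f}")      # £2,000,000
```

Under these assumed figures, blanket removal of borderline content is the cheaper option for the platform, which is precisely the rational over-removal dynamic that the bypass strategy describes.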
Coe (2022) examined these dynamics in the context of hate speech regulation, concluding that the draft legislation risked opening “Pandora’s box” by incentivising expansive content removal policies that would capture substantial quantities of lawful expression. The analysis highlighted particular concerns regarding political speech, religious commentary, and other categories of expression that may be controversial or offensive without meeting legal thresholds for unlawfulness.
Similar concerns arise in relation to sexual content and pornography. McGlynn, Woods and Antoniou (2024) analysed the OSA’s treatment of pornographic content, noting that the legislation’s requirements may drive platforms toward blanket restrictions that fail to distinguish between harmful and lawful sexual expression. Users may consequently experience reduced access to legitimate adult content as platforms adopt precautionary approaches to compliance.
The phenomenon of over-moderation is not unique to the UK context but may be intensified by the OSA’s specific requirements. Empirical research in other jurisdictions has documented how regulatory pressure can produce “collateral censorship” whereby platforms remove lawful content to minimise risk exposure (Suzor, 2019). The UK regime’s combination of proactive duties, substantial penalties, and definitional ambiguity creates conditions conducive to such over-inclusive enforcement.
### User-facing safety tools and their effectiveness
Beyond content removal, the OSA mandates that platforms provide users with tools enabling them to manage their own exposure to potentially harmful material. The legislation requires platforms to ensure “straightforward access” to effective safety technologies as part of their duty of care obligations (Bright et al., 2024). These tools encompass blocking and muting functions, content reporting mechanisms, and controls over algorithmic curation and content recommendation.
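As a purely illustrative sketch of what a standardised baseline toolset might look like, the abstract interface below names the functions discussed in this section. The class and method names are hypothetical and do not correspond to any platform’s actual API or to wording in the Act.

```python
from abc import ABC, abstractmethod

class BaselineSafetyTools(ABC):
    """Hypothetical baseline of user-facing safety functions a regulated
    service might expose; names and signatures are illustrative only."""

    @abstractmethod
    def block_user(self, actor_id: str, target_id: str) -> None:
        """Prevent the target account from contacting or interacting with the actor."""

    @abstractmethod
    def mute_user(self, actor_id: str, target_id: str) -> None:
        """Hide the target's content from the actor without notifying the target."""

    @abstractmethod
    def report_content(self, actor_id: str, content_id: str, reason: str) -> str:
        """File a report and return a reference the user can track."""

    @abstractmethod
    def set_recommendation_controls(self, actor_id: str, reduce_sensitive: bool) -> None:
        """Adjust algorithmic curation, for example reducing recommended sensitive content."""
```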
Research by Bright et al. (2024) provides valuable empirical evidence regarding user engagement with platform safety technologies. Their survey data indicate that most UK adults have used at least one safety tool, with blocking and reporting functions among the most commonly employed. This finding suggests that user-facing safety tools constitute a meaningful component of how individuals experience and navigate platform environments.
However, the same research reveals significant limitations in current implementation. User satisfaction with safety tools remains low, and many users report uncertainty regarding whether their use of reporting functions results in meaningful platform action (Bright et al., 2024). This dissatisfaction may reflect both technical limitations in tool design and the broader opacity of platform moderation processes discussed below.
Nash and Felton (2024) critically examined Ofcom’s approach to regulating safety mechanisms, concluding that platforms frequently provide limited evidence regarding the effectiveness of their safety tools. Regulatory emphasis on risk assessment processes may consequently produce compliance in form rather than substance, with platforms implementing safety features that satisfy procedural requirements without demonstrating meaningful protection of user interests.
The standardisation of safety tools across platforms represents a potentially significant user-visible change. As regulatory requirements specify baseline expectations for tool provision, users may experience greater consistency when moving between different services. However, standardisation may also produce homogenisation that fails to account for the distinct characteristics and user communities of different platforms (Gorwa, 2019).
### Transparency, due process, and procedural fairness
A recurring theme within scholarship on the OSA concerns the regime’s treatment of transparency and procedural protections for users whose content or accounts are subject to moderation action. Multiple commentators have identified risks of chilling effects arising from opaque enforcement practices that leave users uncertain about what content is permitted and why particular expressions are sanctioned (Trengove et al., 2022; Harbinja and Leiser, 2022).
The concept of “harm” central to the OSA’s framework has attracted particular criticism for its vagueness. Harbinja and Leiser (2022) argued that indeterminate definitions of harmful content create conditions for arbitrary enforcement, as platforms and regulators exercise substantial discretion in determining what falls within regulatory scope. Users may struggle to predict whether their expression will be treated as harmful, producing self-censorship that extends beyond content genuinely warranting restriction.
Neudert (2023) developed a particularly incisive analysis of these dynamics through the concept of “regulatory capacity capture.” This framework suggests that the UK online safety regime concentrates decision-making authority in ways that limit meaningful user participation and accountability. Platforms become the primary decision-makers regarding content permissibility, whilst users lack effective avenues for challenging adverse determinations.
Comparative analysis with the European Union’s Digital Services Act (DSA) illuminates the OSA’s distinctive approach to transparency. Law (2024) examined the compliance and enforcement regimes of both frameworks, concluding that the EU approach places substantially greater emphasis on user-level transparency regarding content moderation decisions. The DSA requires platforms to provide clear explanations when content is removed or demoted and establishes mechanisms for users to contest adverse decisions.
Leerssen (2023) specifically analysed the DSA’s treatment of “shadow banning” and other covert moderation practices, identifying transparency rights that require platforms to inform users when their content is subject to visibility restrictions. The OSA lacks comparable provisions, remaining more focused on outcomes—the reduction of harmful content—than on procedural protections ensuring users understand why their content is limited (Farrand, 2024).
This comparative perspective highlights a significant distinction in regulatory philosophy. The EU approach conceptualises users as rights-holders entitled to due process when their expression is restricted. The UK approach, by contrast, primarily conceptualises users as potential victims requiring protection from harmful content, with less attention to users’ interests as speakers whose expression may be unduly restricted.
### Platform responses and implementation dynamics
Understanding user-visible effects requires attention to how platforms respond to regulatory requirements. Scholarly analysis suggests that platform responses to the OSA are shaped by multiple factors including competitive pressures, technical capabilities, and strategic considerations regarding regulatory relationships.
The global nature of major platforms creates particular complexities. Services operating across multiple jurisdictions must navigate potentially conflicting regulatory requirements, and compliance strategies developed for the UK market may have spillover effects in other regions. Conversely, platforms may resist UK-specific adaptations that complicate global operations, seeking instead to demonstrate compliance through generally applicable measures (Kaye, 2019).
Technical constraints also shape platform responses. The OSA’s expectations regarding proactive detection of illegal and harmful content presuppose capabilities for automated content moderation that remain imperfect. Natural language processing systems struggle with context-dependent assessments, sarcasm, coded language, and cultural references (Gillespie, 2020). Image and video analysis tools face similar limitations. Platforms may consequently rely upon over-inclusive automated systems supplemented by human review, with implications for the speed, consistency, and accuracy of moderation decisions visible to users.
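The configuration described here, an over-inclusive automated filter supplemented by human review, can be pictured as a simple triage pipeline. The sketch below is a hypothetical illustration under assumed thresholds and labels, not a description of any platform’s production system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    harm_score: float  # output of an upstream classifier, 0.0 (benign) to 1.0 (harmful)

# Hypothetical thresholds. Lowering REMOVE_THRESHOLD widens automatic removal,
# which is the precautionary, over-inclusive setting discussed in the text.
REMOVE_THRESHOLD = 0.6
REVIEW_THRESHOLD = 0.3

def triage(post: Post) -> str:
    """Route a post to removal, human review, or no action based on its classifier score."""
    if post.harm_score >= REMOVE_THRESHOLD:
        return "remove"        # automated removal, visible to the user as a takedown
    if post.harm_score >= REVIEW_THRESHOLD:
        return "human_review"  # queued for a moderator; slower, but context-aware
    return "allow"

if __name__ == "__main__":
    sample = [
        Post("1", "clearly violating content", 0.92),
        Post("2", "sarcastic or context-dependent remark", 0.45),
        Post("3", "ordinary conversation", 0.05),
    ]
    for p in sample:
        print(p.post_id, triage(p))
```

Lowering the automatic-removal threshold in such a pipeline increases recall against harmful content at the direct expense of lawful posts, which is the trade-off users experience as faster but less accurate enforcement.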
## Discussion
The literature synthesis reveals a complex and in many respects concerning picture of how UK online safety regulation reshapes the user experience of platform moderation. Three principal dimensions of change merit detailed discussion: the expansion of proactive enforcement, the proliferation of safety tools, and the persistence of opacity in moderation processes.
### Proactive enforcement and its consequences
The OSA’s requirement that platforms design systems to detect and swiftly remove illegal content represents a fundamental reorientation of platform obligations. For users, this shift manifests most directly through increased content removals and account sanctions. The scholarly consensus suggests that platforms will respond to proactive duties by expanding the scope of prohibited content beyond strict legal requirements, preferring over-removal to the regulatory and reputational risks of under-removal.
This dynamic raises profound concerns for freedom of expression. The bypass strategy identified by Judson, Kira and Howard (2024) effectively delegates content regulation to private platforms, which face incentives to restrict expression more broadly than law requires. Users experience the consequences through removal of lawful content, particularly expression that is controversial, provocative, or that addresses sensitive topics.
The implications extend beyond individual instances of over-moderation to encompass broader chilling effects. When users cannot predict with confidence whether their expression will be removed, rational self-censorship may result. This phenomenon is particularly concerning in relation to political speech, artistic expression, and discourse on matters of public controversy where robust debate serves democratic values.
Addressing the first objective of this dissertation, the evidence indicates that users are likely to experience more frequent content removals and account sanctions under the OSA regime. However, distinguishing removals attributable to regulatory compliance from those reflecting pre-existing platform policies presents empirical challenges that future research should address.
### Safety tools: promises and limitations
The OSA’s requirements regarding user-facing safety tools represent a more straightforwardly positive dimension of regulatory change, at least in principle. Mandating that platforms provide accessible blocking, muting, and reporting functions empowers users to exercise greater control over their online experiences. The evidence that most UK adults have engaged with such tools suggests meaningful uptake (Bright et al., 2024).
However, the effectiveness of these tools in achieving user safety objectives remains questionable. Low user satisfaction indicates that current implementations fall short of expectations, whether because tools are difficult to use, because their effects are insufficiently clear, or because they fail to address the harms users experience. The finding that platforms provide limited evidence of tool effectiveness to regulators suggests inadequate accountability for this dimension of compliance.
The second objective of this dissertation concerned the standardisation of safety tools. The evidence supports a conclusion that regulatory requirements are producing greater consistency in baseline tool provision, though implementation quality varies substantially. Users benefit from knowing that certain protective functions will be available across regulated services, but may be frustrated by variations in how effectively these functions operate.
A critical observation concerns the relationship between safety tools and other moderation practices. Tools enabling users to manage their own exposure do not substitute for effective platform-level action against harmful content. If platforms prioritise safety tool provision as a lower-cost compliance strategy whilst underinvesting in content moderation, users may experience a regulatory regime that shifts responsibility for safety onto individuals rather than addressing systemic harms.
### Transparency and accountability deficits
Perhaps the most significant concern emerging from this analysis relates to the OSA’s treatment of transparency and procedural fairness. The scholarly literature is remarkably consistent in identifying opacity as a fundamental problem with the UK approach. Users whose content is removed or whose accounts are sanctioned frequently lack meaningful information about why action was taken and face limited avenues for contesting adverse decisions.
The comparison with the EU’s Digital Services Act proves illuminating. The DSA establishes explicit transparency obligations requiring platforms to explain moderation decisions and provide contestation mechanisms. The OSA’s relative silence on these matters reflects a regulatory philosophy prioritising harm reduction outcomes over procedural protections for speakers.
This distinction has significant implications for user experience. Under the OSA regime, users may encounter what Leerssen (2023) terms “shadow banning”—covert restrictions on content visibility—without any notification or explanation. Content may be demoted algorithmically, reach may be artificially limited, or accounts may be subject to invisible restrictions. Users experiencing such treatment may suspect but cannot confirm that their expression is being limited, and lack any formal mechanism to seek review.
The third and fourth objectives of this dissertation addressed transparency and expression concerns respectively. The evidence supports conclusions that transparency protections under the OSA remain inadequate by comparison with emerging international standards, and that the regime creates substantial risks of over-moderation affecting lawful expression.
### Regulatory philosophy and its implications
Farrand (2024) identified conceptual divergence between the OSA and DSA approaches to understanding online harms. This divergence proves consequential for user experience. The UK regime’s outcome-focused orientation—reducing harm—may justify extensive platform intervention without corresponding attention to the interests of those whose expression is restricted. The EU’s rights-based orientation, by contrast, treats users as holders of expression and due process rights that constrain regulatory and platform action.
Neither approach is without difficulties. An excessive focus on speaker rights may impede effective action against genuine harms. An exclusive focus on harm reduction may sacrifice important values of open discourse and individual autonomy. The challenge lies in achieving an appropriate balance, and the scholarly consensus suggests the OSA has not yet achieved such balance.
Neudert’s (2023) concept of regulatory capacity capture provides a useful framework for understanding these dynamics. The OSA concentrates significant power in platforms and Ofcom whilst limiting meaningful user participation in governance processes. Users become objects of protection rather than agents with standing to shape how protection operates. This power allocation may prove particularly problematic as platforms exercise discretion in implementing regulatory requirements and as Ofcom develops interpretive guidance and enforcement priorities.
## Conclusions
This dissertation set out to examine what changes in platform enforcement practice are visible to users under UK online safety rules, with particular attention to the Online Safety Act 2023. Through systematic literature synthesis, it has addressed five specific objectives relating to content removal, safety tools, transparency, freedom of expression, and the overall user experience of moderation.
The analysis supports several principal conclusions. First, UK online safety regulation is driving platforms toward more proactive and expansive moderation practices. Users are likely to experience more frequent content removals and account sanctions as platforms adopt the bypass strategy of enforcing policies broader than legal requirements to minimise compliance risk. This expansion carries significant implications for lawful expression, particularly speech that is controversial or that addresses sensitive topics.
Second, the OSA’s requirements regarding user-facing safety tools are producing greater standardisation in protective functions available to users. However, the effectiveness of these tools remains questionable, and regulatory focus on their provision may inadequately address user needs for genuine protection from harmful content. Low user satisfaction indicates substantial room for improvement in both tool design and platform accountability for tool effectiveness.
Third, transparency and procedural fairness remain significant weaknesses of the UK approach. Users lack meaningful information about why content is removed or restricted and face limited mechanisms for contesting adverse decisions. Comparison with the EU’s Digital Services Act reveals a less rights-protective orientation in UK regulation, with concerning implications for users whose expression is restricted without adequate explanation or recourse.
Fourth, the risk of over-moderation affecting lawful expression represents a substantial concern requiring ongoing attention. The incentive structures created by the OSA favour precautionary content removal, and definitional vagueness regarding “harm” provides latitude for expansive platform action. The chilling effects of uncertain enforcement may extend well beyond content actually removed to encompass self-censorship by users unwilling to risk sanctions.
Fifth, synthesising these findings, UK online safety regulation is reshaping the user experience of platform moderation in ways that involve significant trade-offs. Users may benefit from more systematic action against harmful content and from greater consistency in safety tool provision. However, they simultaneously face increased risk of having their own expression restricted without adequate justification or due process, and continue to navigate substantially opaque moderation systems.
These conclusions carry implications for regulatory development. As Ofcom implements the OSA and platforms adapt their practices, attention should focus on strengthening transparency requirements, establishing meaningful contestation mechanisms, and ensuring that compliance strategies do not produce excessive restriction of lawful expression. The EU’s Digital Services Act provides instructive precedents that UK regulators might productively consider.
Future research should pursue several directions. Empirical studies examining actual changes in content removal rates, user complaints, and contestation outcomes would valuably supplement the legal and theoretical analysis predominating in current scholarship. Comparative research tracking divergence or convergence between UK and EU platform practices as both regimes mature would illuminate regulatory effects. Qualitative research engaging user perspectives on their experiences of moderation under the new regime would provide essential evidence regarding whether regulatory objectives are being achieved.
The significance of this analysis extends beyond academic interest. How societies govern online expression is among the most consequential policy questions of the contemporary era. The UK’s approach through the Online Safety Act represents a significant intervention that will shape the communicative experiences of millions. Understanding its effects on users is essential for evaluating whether the intervention serves the public interest and for informing ongoing refinement of regulatory design.
## References
Bright, J., Enock, F., Johansson, P., Margetts, H. and Stevens, F., 2024. Understanding engagement with platform safety technology for reducing exposure to online harms. *arXiv preprint*, arXiv:2401.01796. https://doi.org/10.48550/arxiv.2401.01796
Coe, P., 2022. The Draft Online Safety Bill and the regulation of hate speech: have we opened Pandora’s box?. *Journal of Media Law*, 14(1), pp.50-75. https://doi.org/10.1080/17577632.2022.2083870
Farrand, B., 2024. How do we understand online harms? The impact of conceptual divides on regulatory divergence between the Online Safety Act and Digital Services Act. *Journal of Media Law*, 16(2), pp.240-262. https://doi.org/10.1080/17577632.2024.2357463
Gillespie, T., 2020. Content moderation, AI, and the question of scale. *Big Data & Society*, 7(2), pp.1-5.
Gorwa, R., 2019. What is platform governance?. *Information, Communication & Society*, 22(6), pp.854-871.
Harbinja, E. and Leiser, M., 2022. [Redacted]: This Article Categorised [Harmful] by the Government. *SCRIPT-ed*, 19(1), pp.88-123. https://doi.org/10.2966/scrip.190122.88
Judson, E., Kira, B. and Howard, J., 2024. The Bypass Strategy: platforms, the Online Safety Act and future of online speech. *Journal of Media Law*, 16(2), pp.336-357. https://doi.org/10.1080/17577632.2024.2361524
Kaye, D., 2019. *Speech police: the global struggle to govern the internet*. New York: Columbia Global Reports.
Law, S., 2024. Effective enforcement of the Online Safety Act and Digital Services Act: unpacking the compliance and enforcement regimes of the UK and EU’s online safety legislation. *Journal of Media Law*, 16(2), pp.263-300. https://doi.org/10.1080/17577632.2025.2459441
Leerssen, P., 2023. An end to shadow banning? Transparency rights in the Digital Services Act between content moderation and curation. *Computer Law & Security Review*, 48, p.105790. https://doi.org/10.1016/j.clsr.2023.105790
McGlynn, C., Woods, L. and Antoniou, A., 2024. Pornography, the Online Safety Act 2023 and the need for further reform. *Journal of Media Law*, 16(2), pp.211-239. https://doi.org/10.1080/17577632.2024.2357421
Nash, V. and Felton, L., 2024. Treating the symptoms or the disease? Analysing the UK Online Safety Act’s approach to digital regulation. *Policy & Internet*. https://doi.org/10.1002/poi3.404
Neudert, L., 2023. Regulatory capacity capture: the United Kingdom’s online safety regime. *Internet Policy Review*, 12(4). https://doi.org/10.14763/2023.4.1730
Suzor, N., 2019. *Lawless: the secret rules that govern our digital lives*. Cambridge: Cambridge University Press.
Trengove, M., Kazim, E., Almeida, D., Hilliard, A., Zannone, S. and Lomas, E., 2022. A critical review of the Online Safety Bill. *Patterns*, 3(7), p.100544. https://doi.org/10.1016/j.patter.2022.100544
