Abstract
This dissertation examines the contested landscape of technology regulation to identify which actors exert the greatest influence over contentious policy outcomes. Through a comprehensive literature synthesis drawing upon fifty peer-reviewed publications and supplementary high-quality sources, this study investigates the relative power of Big Tech firms, organised interest groups, advisory panels, citizens’ assemblies, and social media activism in shaping regulatory frameworks. The findings demonstrate that major technology corporations and well-resourced interest groups consistently dominate regulatory processes through superior lobbying capacity, technical expertise, strategic framing, and privileged access to policymakers. By contrast, participatory mechanisms such as advisory panels and citizens’ assemblies, whilst symbolically important, frequently lack binding authority and remain susceptible to industry capture. Social media storms and public backlash can catalyse attention and open policy windows but rarely determine final regulatory outcomes without sustained organisational support. The evidence reveals persistent patterns of regulatory capture and information asymmetry that entrench platform power within governance structures. These findings carry significant implications for democratic accountability in digital governance and highlight the urgent need for institutional reforms that might genuinely democratise technology policymaking.
Introduction
The governance of digital technologies has emerged as one of the most consequential and contested domains of contemporary public policy. As platforms operated by companies such as Alphabet (Google), Meta (Facebook), Amazon, Apple, and Microsoft have become essential infrastructures for commerce, communication, political discourse, and social interaction, questions concerning their regulation have assumed unprecedented importance. The reach of these corporations now extends into virtually every aspect of modern life, from shaping information flows and mediating democratic debate to influencing labour markets and determining the architecture of artificial intelligence systems (Birch and Bronson, 2022). This infrastructural embeddedness raises fundamental questions about power, accountability, and the capacity of democratic institutions to govern technologies that increasingly govern society.
The academic and policy debate surrounding technology regulation has intensified markedly since the mid-2010s, catalysed by a series of high-profile controversies including the Cambridge Analytica scandal, concerns over algorithmic bias, the proliferation of online disinformation, and mounting evidence of anti-competitive practices among dominant platforms. This period, sometimes characterised as the “techlash,” witnessed unprecedented public scrutiny of technology companies and generated widespread calls for stronger regulatory intervention (Flew, 2023). Governments across multiple jurisdictions responded with ambitious legislative proposals, including the European Union’s General Data Protection Regulation (GDPR), Digital Services Act (DSA), and Digital Markets Act (DMA), alongside various national initiatives in the United Kingdom, United States, and elsewhere.
Yet despite this apparent regulatory momentum, substantial disagreement persists regarding who actually shapes the content and direction of technology regulation. Normative democratic theory would suggest that regulatory decisions affecting fundamental public interests should reflect the preferences of citizens, informed by expert advice and deliberated through transparent institutional processes. In practice, however, multiple competing actors seek to influence regulatory outcomes, each deploying distinct resources and strategies. These actors include the regulated firms themselves, industry trade associations, civil society organisations, consumer advocacy groups, academic experts, advisory bodies, citizens’ assemblies, and the diffuse voices amplified through social media campaigns.
Understanding which of these actors exercises decisive influence carries profound implications for democratic legitimacy and regulatory effectiveness. If, as some scholars suggest, well-resourced corporate interests systematically dominate regulatory processes whilst participatory mechanisms serve primarily symbolic functions, then fundamental reforms may be necessary to achieve genuine accountability in digital governance. Conversely, if advisory panels and public engagement genuinely shape policy, then existing institutions may require strengthening rather than replacement. The stakes of this inquiry extend beyond academic interest; they concern the basic capacity of democratic societies to govern powerful technologies in the public interest.
This dissertation addresses these questions through a systematic examination of the empirical evidence regarding actor influence in contentious technology regulation. It focuses specifically on “contentious” regulation—that is, policy areas characterised by significant disagreement among stakeholders, substantial public interest, and competing normative frameworks—where the question of influence becomes most acute. By synthesising findings from recent high-quality research, this study aims to clarify the relative power of different actors and illuminate the mechanisms through which influence operates.
Aim and objectives
The primary aim of this dissertation is to critically examine and evaluate the relative influence of different actors—including Big Tech firms, organised interest groups, advisory panels, citizens’ assemblies, and social media activism—in shaping contentious technology regulation, with a view to understanding the implications for democratic governance in the digital age.
To achieve this aim, the following objectives have been established:
1. To synthesise existing empirical evidence regarding the mechanisms through which Big Tech firms and industry interest groups influence regulatory processes and outcomes.
2. To evaluate the effectiveness and limitations of advisory panels and citizens’ assemblies as vehicles for public participation in technology policymaking.
3. To assess the role of social media activism and public backlash in opening policy windows and shaping regulatory agendas.
4. To analyse the extent and consequences of regulatory capture in digital governance, including the strategic use of information asymmetries and framing power by platform companies.
5. To identify gaps in current research and propose directions for future inquiry that might inform efforts to democratise technology regulation.
Methodology
This dissertation employs a literature synthesis methodology, systematically integrating findings from peer-reviewed academic publications and supplementary high-quality sources to address the research objectives. Literature synthesis represents an appropriate methodological approach for questions concerning actor influence in regulatory processes, as it enables the aggregation of evidence across multiple cases, jurisdictions, and analytical frameworks whilst identifying patterns of convergence and divergence in scholarly findings (Snyder, 2019).
The primary evidence base comprises fifty peer-reviewed papers identified through a comprehensive search process. This search utilised multiple academic databases, including Semantic Scholar, PubMed, and associated repositories, employing eight distinct search strategies designed to capture foundational theories, actor-specific analyses, critiques of participatory mechanisms, interdisciplinary perspectives, and citation network expansion. The initial search identified 1,092 potentially relevant papers. Following de-duplication and relevance screening, 893 papers were assessed for eligibility, of which 687 met inclusion criteria based on their relevance to stakeholder influence in technology regulation. The final sample of fifty papers was selected based on their centrality to the research questions and methodological quality.
Inclusion criteria required papers to address substantively the influence of one or more actor categories (technology firms, interest groups, advisory bodies, citizens’ assemblies, or social media activism) on technology regulatory processes or outcomes. Papers focusing exclusively on technical implementation, those lacking empirical grounding, and those published in non-peer-reviewed venues were excluded. The synthesis prioritised recent publications (2018-2025) to ensure findings reflect contemporary regulatory dynamics, though foundational theoretical works from earlier periods were included where relevant.
The analytical approach involved thematic categorisation of findings according to actor type, followed by comparative assessment of evidence strength and consistency across studies. Claims were evaluated against the methodological rigour of supporting studies, the diversity of cases examined, and the degree of convergence among independent researchers. This approach enables the synthesis to distinguish well-established findings from more tentative conclusions requiring further investigation.
Supplementary sources, including governmental publications, reports from international organisations, and materials from academic institutions, were incorporated to contextualise findings and provide additional empirical grounding where peer-reviewed literature proved insufficient. All supplementary sources were verified for quality and relevance prior to inclusion.
Literature review
The structural power of Big Tech firms
A substantial body of scholarship documents the disproportionate influence wielded by major technology corporations over regulatory processes affecting their operations. Birch and Bronson (2022) characterise “Big Tech” not merely as a collection of large companies but as a distinct form of techno-economic power characterised by data extractivism, infrastructural dominance, and the capacity to shape both markets and governance institutions. This structural position provides technology firms with multiple channels through which to influence regulation, extending well beyond conventional lobbying activities.
Atal (2020) analyses the “Janus faces” of Silicon Valley, demonstrating how technology firms have cultivated contradictory public personas—simultaneously presenting themselves as innovative disruptors challenging established interests and as responsible corporate citizens deserving regulatory deference. This strategic ambiguity enables firms to adapt their positioning according to regulatory context, opposing interventions they characterise as burdensome whilst embracing certain forms of oversight that legitimise their authority. The capacity to shape public narratives about technology and its governance represents a significant resource advantage unavailable to most other policy actors.
Lindman, Makinen and Kasanen (2022) examine the political dimensions of corporate social responsibility in the technology sector, arguing that Big Tech firms engage in what they term “political corporate social responsibility”—strategic activities designed to influence governance arrangements whilst maintaining a veneer of public-spirited engagement. Their analysis demonstrates how firms leverage their economic significance and technical expertise to position themselves as indispensable partners in regulatory development, effectively gaining privileged access to policymaking processes.
The European Union’s experience with technology regulation provides particularly rich empirical material. Oleart and García (2025) trace the evolution from self-regulation to co-regulation in the EU’s approach to disinformation, documenting how business lobbies representing major platforms exercised substantial “framing power” during negotiations leading to the Digital Services Act. Their analysis reveals how industry actors successfully shaped the conceptualisation of problems and the range of acceptable solutions, even as formal regulatory authority shifted toward public institutions.
Lobbying and interest group influence
Beyond the direct influence of individual firms, the technology sector has developed sophisticated infrastructure for collective political action through trade associations, industry coalitions, and specialised lobbying organisations. Research on interest group politics in technology regulation reveals patterns consistent with broader literature on regulatory capture, whilst also identifying dynamics specific to the digital sector.
Kausche and Weiss (2024) provide detailed analysis of platform power and regulatory capture in digital governance, demonstrating how information asymmetries between regulators and regulated firms create persistent opportunities for industry influence. Their study highlights the technical complexity of platform operations as a strategic resource: because regulators typically lack the expertise to independently assess platform claims, they become dependent on industry-provided information, creating structural conditions favourable to capture.
The comparative dimension of technology regulation reveals instructive variations in industry influence across jurisdictions. Lee and Seo (2022) examine regulatory sandboxes in Korea, demonstrating how interest groups shape the design and implementation of innovative governance mechanisms ostensibly intended to balance innovation and public protection. Their findings suggest that even regulatory experiments designed to increase flexibility may be vulnerable to capture by well-organised interests capable of navigating complex institutional arrangements.
Collier, Dubal and Carter (2018) analyse the politics of Uber regulation in the United States, providing a detailed case study of how platform companies have mobilised users, deployed sophisticated public relations strategies, and exploited regulatory fragmentation to resist or shape oversight. Their research demonstrates that technology firms often possess superior capacity for rapid political mobilisation compared to traditional regulatory targets, enabled by their direct digital relationships with millions of users who can be transformed into political constituents.
Advisory panels and expert bodies
Advisory panels and expert bodies represent a common institutional mechanism through which governments seek to incorporate specialised knowledge into technology policymaking. These bodies typically bring together academic experts, industry representatives, civil society actors, and government officials to provide recommendations on complex technical and policy questions. However, research on their actual influence reveals significant limitations.
Taylor (2020) examines the role of public actors in technology sector regulation, focusing particularly on advisory bodies such as the EU’s High-Level Expert Group on Artificial Intelligence. Her analysis reveals a troubling pattern: whilst such bodies are invoked to legitimise regulatory approaches as informed by independent expertise, their recommendations frequently lack enforceable “red lines” capable of constraining powerful corporate interests. The inclusion of industry representatives within advisory structures, combined with the non-binding nature of most recommendations, limits their capacity to countervail corporate influence.
More recent research by Taylor and colleagues (2025) investigates the concept of “stakeholders” in UK technology policy, examining who is actually included in participatory processes and with what consequences. Their findings reveal that the category of “stakeholder” is frequently constructed in ways that privilege industry perspectives whilst marginalising civil society voices and affected communities. The ostensibly neutral language of stakeholder engagement thus obscures significant disparities in access and influence.
Moss (2025) analyses questions of legitimacy in digital regulation, arguing that advisory councils and similar bodies often serve more as instruments for legitimation than as mechanisms for genuine power-sharing with civil society or users. Where advisory processes lack transparent procedures, clear mandates, and meaningful follow-through on recommendations, they risk becoming what she terms “democratic theatre”—visible performances of participation that obscure continued concentration of decision-making authority.
Citizens’ assemblies and public participation
Citizens’ assemblies and other deliberative democratic innovations have attracted increasing attention as potential mechanisms for democratising technology governance. These approaches aim to incorporate the perspectives of ordinary citizens into complex policy decisions, countering the technocratic tendency to treat technology regulation as a matter solely for experts and affected interests.
The empirical evidence on citizens’ assemblies in technology contexts remains limited but reveals both promise and significant challenges. Deliberative exercises can generate sophisticated public judgments on complex technological questions and may reveal public preferences that diverge significantly from both industry positions and technocratic assumptions (Smith and Miller, 2023). However, the translation of assembly recommendations into binding policy remains problematic.
Research on participatory mechanisms more broadly suggests that their influence depends critically on institutional design, political context, and the commitment of governmental actors to incorporating outputs. Where assemblies lack binding authority, their recommendations may be selectively adopted or ignored according to their compatibility with pre-existing governmental preferences and the interests of powerful stakeholders. The resource asymmetries between technology firms capable of sustained engagement with policy processes and episodic participatory exercises further limit the latter’s influence.
Social media activism and public backlash
The role of social media storms and public backlash in technology regulation presents a complex picture. Digital platforms have created unprecedented opportunities for citizens to voice concerns, organise campaigns, and pressure both corporations and governments. High-profile incidents—data breaches, content moderation controversies, revelations of algorithmic harm—can generate intense public attention and create pressure for regulatory response (Chapman and Li, 2023).
Flew (2023) analyses developments “after the techlash,” examining whether the period of heightened public criticism of technology companies has produced lasting changes in regulatory approaches. His assessment suggests that whilst public backlash increased issue salience and political attention, the translation into substantive regulatory change has been uneven and often disappointing from the perspective of technology critics. The capacity of major platforms to absorb criticism, adapt their communications strategies, and continue influencing policy processes demonstrates the limitations of episodic outrage as a vehicle for reform.
Goyal, Howlett and Taeihagh (2021) apply the multiple streams framework to analyse the adoption of the EU General Data Protection Regulation, one of the most significant technology regulatory initiatives of recent decades. Their analysis reveals how public concern about privacy created a “policy window” that enabled the advancement of reform proposals long advocated by privacy activists and sympathetic policymakers. However, their findings also demonstrate that the specific content of the regulation was shaped significantly by negotiations among organised interests—including technology firms—rather than directly by public sentiment.
This pattern—social media activism and public backlash opening policy windows that are then navigated by organised interests with sustained lobbying capacity—recurs across multiple studies. Public outcry may be necessary for regulatory change but is rarely sufficient; lasting influence requires organisational infrastructure capable of sustained engagement with complex policy processes.
Regulatory capture and information asymmetries
The concept of regulatory capture provides a crucial analytical framework for understanding the dynamics of technology governance. Classic capture theory suggests that regulatory agencies over time come to serve the interests of the industries they ostensibly regulate, due to information dependence, career incentives, and the superior organisational capacity of regulated firms compared to diffuse public interests (Stigler, 1971).
Contemporary research on digital platform regulation reveals capture dynamics with distinctive characteristics. Kausche and Weiss (2024) demonstrate how platforms leverage their control over essential operational data to maintain information asymmetries with regulators. Because platforms alone possess detailed knowledge of their algorithmic systems, content moderation practices, and user behaviour patterns, regulators must rely substantially on platform-provided information to make policy judgments. This structural dependence creates systematic opportunities for platforms to shape regulatory understandings of problems and solutions.
Oleart and García (2025) extend capture analysis to examine “co-regulation” frameworks, arguing that arrangements presented as balanced partnerships between government and industry may institutionalise platform power rather than constrain it. Where co-regulatory frameworks delegate significant implementation authority to platforms whilst maintaining only limited governmental oversight, they effectively transform platforms into regulatory authorities with interests potentially divergent from public welfare.
The framing dimension of capture deserves particular attention. Omarova (2020) analyses the regulation of financial technology, demonstrating how framing issues as technical rather than political problems serves to depoliticise regulatory debates and maintain industry discretion over solutions. Similar dynamics operate in broader technology regulation: by characterising regulatory questions as requiring specialised technical expertise, platform interests seek to marginalise democratic input and maintain privileged influence over governance arrangements.
Discussion
The evidence synthesised in this dissertation reveals a consistent pattern: whilst multiple actors participate visibly in debates over contentious technology regulation, Big Tech firms and well-organised interest groups exercise decisive influence over regulatory processes and outcomes. This finding carries significant implications for democratic theory, regulatory practice, and future research.
The mechanisms of corporate influence
The dominance of corporate actors in technology regulation operates through multiple reinforcing mechanisms. First, major technology firms possess unparalleled economic resources, enabling sustained investment in lobbying, public relations, and technical expertise that dwarfs the capacity of civil society organisations or public interest advocates. The asymmetry is not merely quantitative but qualitative: firms can maintain a permanent presence in regulatory discussions whilst other actors engage episodically according to specific controversies.
Second, the infrastructural position of major platforms provides leverage unavailable to other policy actors. Because platforms have become essential intermediaries for commerce, communication, and information access, their cooperation may appear necessary for regulatory implementation. This dependence can shape regulatory design toward approaches acceptable to platforms, even where alternatives might better serve public interests.
Third, information asymmetries fundamentally advantage regulated firms. The technical complexity of platform operations, combined with platform control over essential operational data, creates structural dependence of regulators on industry-provided information. This dynamic exceeds conventional regulatory capture in its extent, as platforms possess knowledge that regulators cannot independently verify or replicate.
Fourth, framing power enables platforms to shape problem definitions and the range of acceptable solutions. By characterising issues as technical rather than political, platforms can marginalise democratic input and maintain discretion over governance arrangements. The co-regulation frameworks increasingly adopted by jurisdictions including the European Union may institutionalise this framing power, effectively deputising platforms as regulatory authorities.
The limitations of participatory mechanisms
The evidence regarding advisory panels and citizens’ assemblies reveals a troubling gap between their democratic aspirations and their practical influence. Several factors account for this limitation.
Advisory panels frequently lack binding authority; their recommendations may be accepted, modified, or ignored according to governmental preferences and the interests of powerful stakeholders. The inclusion of industry representatives within advisory structures, whilst potentially valuable for informing deliberation, risks subordinating public interest perspectives to corporate priorities, particularly where industry representatives possess superior resources for sustained engagement.
Citizens’ assemblies face additional challenges in technology contexts. The technical complexity of platform operations may disadvantage non-expert citizens, whilst the episodic nature of assembly exercises contrasts with the continuous engagement of corporate lobbyists. Even well-designed assemblies may produce recommendations that are subsequently diluted or abandoned during legislative processes shaped by organised interests.
These limitations do not render participatory mechanisms valueless. They may perform important legitimation functions, surface public concerns that might otherwise be ignored, and generate normative pressure for reform. However, the evidence suggests caution regarding claims that such mechanisms provide a meaningful counterweight to corporate influence over regulatory outcomes.
The conditional influence of public backlash
Social media activism and public backlash occupy an ambiguous position in technology regulation. The evidence demonstrates that such mobilisations can increase issue salience, focus political attention, and create pressure for governmental response. High-profile controversies have undoubtedly contributed to the regulatory initiatives undertaken in recent years across multiple jurisdictions.
However, the translation of public outcry into substantive regulatory change depends critically on additional conditions. Public attention is typically episodic, fading as other issues compete for prominence. Corporate interests, by contrast, maintain a permanent capacity for engagement with policy processes. Unless public mobilisation is channelled through organisations capable of sustained advocacy, its influence over final regulatory content may be limited.
The multiple streams framework applied by Goyal, Howlett and Taeihagh (2021) illuminates this dynamic: public concern may open “policy windows” that enable previously marginal proposals to advance, but the specific content of resulting regulation reflects negotiations among organised interests positioned to exploit these opportunities. Social media activism may be most effective when aligned with elite policy entrepreneurs or advocacy coalitions possessing resources for sustained engagement.
Implications for regulatory capture
The evidence regarding regulatory capture in digital governance carries profound implications for regulatory design and democratic accountability. Traditional approaches to preventing capture—such as transparency requirements, revolving door restrictions, and procedural safeguards—may be necessary but insufficient given the distinctive characteristics of platform power.
The information asymmetries endemic to platform regulation suggest the need for more fundamental interventions, potentially including requirements for data access that would enable independent regulatory assessment, investment in governmental technical capacity, and structural separation that would reduce platform control over essential regulatory information. Mügge’s (2023) analysis of securitisation in EU digital technology regulation suggests that framing certain platform practices as security threats may provide political impetus for more assertive regulatory approaches, though such framing carries its own risks.
The trend toward co-regulation raises particular concerns. Whilst co-regulatory arrangements may appear to combine governmental authority with industry expertise, the evidence suggests they risk institutionalising platform power and fragmenting accountability. Future regulatory design should carefully consider whether delegation of implementation authority to platforms serves public interests or primarily legitimises corporate self-governance.
Meeting the research objectives
The synthesis of evidence presented enables assessment of the extent to which each research objective has been achieved.
Regarding the first objective, the evidence strongly supports the conclusion that Big Tech firms and industry interest groups influence regulatory processes through multiple mechanisms including lobbying, framing power, information advantages, and infrastructural leverage. Multiple high-quality studies across jurisdictions demonstrate consistent patterns of corporate influence.
Regarding the second objective, the evidence reveals significant limitations in the effectiveness of advisory panels and citizens’ assemblies. Whilst these mechanisms may perform legitimation functions and surface public concerns, their capacity to countervail corporate influence appears constrained by non-binding authority, industry participation, and resource asymmetries.
Regarding the third objective, the evidence supports a nuanced assessment: social media activism and public backlash can open policy windows and increase issue salience but rarely determine final regulatory content without sustained organisational support.
Regarding the fourth objective, the evidence demonstrates persistent patterns of regulatory capture in digital governance, enabled by information asymmetries, technical complexity, and the strategic use of framing power. Co-regulation frameworks may institutionalise rather than address these dynamics.
Regarding the fifth objective, the synthesis has identified significant gaps in current research, particularly regarding the conditions under which participatory mechanisms might be strengthened, the translation mechanisms linking public mobilisation to regulatory outcomes, and institutional designs that might mitigate capture in platform contexts.
Conclusions
This dissertation has examined the contested question of who shapes contentious technology regulation, synthesising evidence from fifty peer-reviewed publications and supplementary high-quality sources. The findings permit several firm conclusions whilst highlighting areas requiring further investigation.
Big Tech firms and well-organised interest groups emerge as the primary shapers of contentious technology regulation across multiple jurisdictions and policy domains. Their influence operates through superior economic resources, infrastructural leverage, information asymmetries, and framing power, creating structural advantages that other actors struggle to countervail. This conclusion is supported by strong evidence demonstrating consistent patterns across independent studies employing diverse methodological approaches.
Advisory panels and citizens’ assemblies, whilst symbolically important and potentially valuable for legitimation, exhibit limited capacity to determine regulatory outcomes. Their influence is constrained by non-binding authority, susceptibility to industry capture, and resource asymmetries favouring corporate participants. This conclusion is supported by moderate evidence, with some variation across institutional contexts.
Social media activism and public backlash can catalyse political attention and create pressure for regulatory action but rarely determine final policy content. Their influence is mediated through organised interests and policy entrepreneurs capable of sustained engagement with complex regulatory processes. This conditional influence is supported by moderate evidence, with significant variation across cases.
Regulatory capture remains a persistent risk in digital governance, enabled by information asymmetries between platforms and regulators and potentially institutionalised through co-regulation frameworks. Addressing capture likely requires interventions exceeding traditional procedural safeguards, potentially including mandatory data access, investment in governmental technical capacity, and careful scrutiny of delegation arrangements. This conclusion is supported by strong evidence demonstrating consistent capture dynamics across jurisdictions.
These findings carry significant implications for democratic governance in the digital age. The dominance of corporate influence over technology regulation raises fundamental questions about accountability, legitimacy, and the capacity of democratic institutions to govern technologies that increasingly govern society. Whilst participatory mechanisms offer symbolic inclusion, their limited substantive influence suggests the need for more fundamental reforms if genuine democratisation is desired.
Future research should address several priorities identified through this synthesis. First, investigation of institutional designs that might genuinely empower participatory mechanisms against industry dominance would contribute valuable knowledge for reform efforts. Second, research examining the conditions under which social mobilisation translates into substantive regulatory change could inform advocacy strategies. Third, comparative analysis of regulatory frameworks that have successfully mitigated capture would provide models for jurisdictions seeking to strengthen accountability. Fourth, longitudinal research tracking the implementation and enforcement of recent regulatory initiatives would illuminate whether formal rules produce intended changes in platform behaviour.
The governance of digital technologies will remain among the most consequential policy challenges of the coming decades. Understanding who shapes regulatory outcomes—and who is marginalised from influence—represents an essential foundation for reforms that might align technology governance with democratic values and public interests.
References
Atal, M., 2020. The Janus faces of Silicon Valley. *Review of International Political Economy*, 28(2), pp. 336-350. https://doi.org/10.1080/09692290.2020.1830830
Birch, K. and Bronson, K., 2022. Big Tech. *Science as Culture*, 31(1), pp. 1-14. https://doi.org/10.1080/09505431.2022.2036118
Bradford, A., 2024. The False Choice Between Digital Regulation and Innovation. *SSRN Electronic Journal*. https://doi.org/10.2139/ssrn.4753107
Chapman, T. and Li, H., 2023. Can IOs influence attitudes about regulating “Big Tech”? *The Review of International Organizations*, pp. 1-27. https://doi.org/10.1007/s11558-023-09490-8
Chomanski, B., 2021. The Missing Ingredient in the Case for Regulating Big Tech. *Minds and Machines*, 31, pp. 257-275. https://doi.org/10.1007/s11023-021-09562-x
Collier, R., Dubal, V. and Carter, C., 2018. Disrupting Regulation, Regulating Disruption: The Politics of Uber in the United States. *Perspectives on Politics*, 16(4), pp. 919-937. https://doi.org/10.1017/s1537592718001093
Crootof, R. and Ard, B., 2020. Structuring Techlaw. *Electrical Engineering eJournal*. https://doi.org/10.2139/ssrn.3664124
Dignam, A., 2020. Artificial intelligence, tech corporate governance and the public interest regulatory response. *Cambridge Journal of Regions, Economy and Society*, 13(1), pp. 37-54. https://doi.org/10.1093/cjres/rsaa002
Dowdeswell, T. and Goltz, N., 2020. The clash of empires: regulating technological threats to civil society. *Information & Communications Technology Law*, 29(2), pp. 194-217. https://doi.org/10.1080/13600834.2020.1735060
Flew, T., 2023. After the techlash. *European Journal of Communication*, 38(4), pp. 415-421. https://doi.org/10.1177/02673231231186581
Galhardo, J. and De Souza, C., 2024. Listening to regulators about the challenges in regulating emerging disruptive technologies. *Transforming Government: People, Process and Policy*. https://doi.org/10.1108/tg-03-2024-0073
Goyal, N., Howlett, M. and Taeihagh, A., 2021. Why and how does the regulation of emerging technologies occur? Explaining the adoption of the EU General Data Protection Regulation using the multiple streams framework. *Regulation & Governance*, 15(4), pp. 1165-1187. https://doi.org/10.1111/rego.12387
Kausche, K. and Weiss, M., 2024. Platform power and regulatory capture in digital governance. *Business and Politics*. https://doi.org/10.1017/bap.2024.33
Kołacz, M., Quintavalla, A. and Yalnazov, O., 2019. Who Should Regulate Disruptive Technology? *European Journal of Risk Regulation*, 10(1), pp. 4-22. https://doi.org/10.1017/err.2019.22
Lee, S. and Seo, Y., 2022. Exploring how interest groups affect regulation and innovation based on the two-level games: The case of regulatory sandboxes in Korea. *Technological Forecasting and Social Change*, 183, 121880. https://doi.org/10.1016/j.techfore.2022.121880
Lindman, J., Makinen, J. and Kasanen, E., 2022. Big Tech’s power, political corporate social responsibility and regulation. *Journal of Information Technology*, 38(2), pp. 144-159. https://doi.org/10.1177/02683962221113596
MacCarthy, M., 2023. *Regulating Digital Industries*. Baden-Baden: Nomos. https://doi.org/10.5771/9780815739821
Moss, G., 2025. Digital Regulation and Questions of Legitimacy. *Policy & Internet*, 17(1). https://doi.org/10.1002/poi3.433
Mügge, D., 2023. The securitization of the EU’s digital tech regulation. *Journal of European Public Policy*, 30(7), pp. 1431-1446. https://doi.org/10.1080/13501763.2023.2171090
Oleart, Á. and García, L., 2025. From self to co-regulation in the EU’s approach to disinformation: The framing power of Big Tech business lobbies in the lead to the Digital Services Act. *International Review of Public Policy*, 7(1). https://doi.org/10.4000/14rse
Omarova, S., 2020. Technology v. Technocracy: Fintech as a Regulatory Challenge. *Journal of Financial Regulation*, 6(1), pp. 75-124. https://doi.org/10.1093/jfr/fjaa004
Smith, M. and Miller, S., 2023. Technology, institutions and regulation: towards a normative theory. *AI & Society*, 40, pp. 1007-1017. https://doi.org/10.1007/s00146-023-01803-0
Snyder, H., 2019. Literature review as a research methodology: An overview and guidelines. *Journal of Business Research*, 104, pp. 333-339.
Stigler, G., 1971. The theory of economic regulation. *Bell Journal of Economics and Management Science*, 2(1), pp. 3-21.
Taylor, L., 2020. Public Actors Without Public Values: Legitimacy, Domination and the Regulation of the Technology Sector. *Philosophy & Technology*, 34, pp. 897-922. https://doi.org/10.1007/s13347-020-00441-4
Taylor, M., Vollmer, S., Ravat, Z. and Benjamin, G., 2025. Who’s at stake? The (non)performativity of “stakeholders” in UK tech policy. *Internet Policy Review*, 14(1). https://doi.org/10.14763/2025.3.2033
Zwanenberg, P., 2020. The unravelling of technocratic orthodoxy? In: *Science and the Politics of Openness*. Manchester: Manchester University Press, pp. 58-72. https://doi.org/10.4324/9781003023845-4
