Abstract
This dissertation examines whether United Kingdom employment law requires reform to adequately address algorithmic decision-making in hiring and performance management contexts. Employing a comprehensive literature synthesis methodology, this study analyses peer-reviewed scholarship, legal commentary, and empirical research to evaluate the sufficiency of existing regulatory frameworks and identify proposed reform pathways. The findings reveal that current UK law, which relies predominantly upon equality legislation and data protection provisions derived from European Union frameworks, provides inadequate protection against the distinct harms arising from algorithmic employment practices. Evidence demonstrates that algorithmic systems in recruitment and workplace management generate discriminatory outcomes, reduce worker autonomy, diminish perceptions of procedural fairness, and operate with insufficient transparency. Scholars consistently advocate for targeted reforms encompassing mandatory algorithmic auditing, enhanced transparency requirements, meaningful limitations on automated decision-making, strengthened worker voice mechanisms, and explicit substantive protections against algorithmic harm. This dissertation concludes that legislative reform is necessary to ensure accountability, transparency, and effective redress within algorithmic employment contexts, and identifies priority areas for future regulatory intervention and academic inquiry.
Introduction
The proliferation of algorithmic systems within employment contexts represents one of the most significant transformations in contemporary labour relations. Employers increasingly deploy sophisticated automated tools to screen job applicants, assess candidate suitability, monitor employee performance, allocate work tasks, and inform disciplinary and dismissal decisions. Whilst proponents argue these technologies enhance efficiency, reduce human bias, and enable data-driven decision-making, mounting evidence reveals substantial concerns regarding discrimination, transparency, worker autonomy, and procedural fairness. These developments raise fundamental questions about whether existing legal frameworks adequately protect workers from novel forms of algorithmic harm.
The United Kingdom occupies a distinctive regulatory position following its departure from the European Union. Although the UK retained data protection provisions substantially derived from the General Data Protection Regulation through domestic legislation, Brexit has created both uncertainty and opportunity regarding future regulatory trajectories. The question of whether UK employment law should be reformed to address algorithmic decision-making has therefore assumed particular salience, engaging considerations of worker protection, technological innovation, international competitiveness, and fundamental rights.
This topic demands academic attention for several interconnected reasons. First, algorithmic employment systems are becoming ubiquitous across sectors, affecting millions of workers and job applicants. Second, existing research reveals systematic evidence of harm that current legal frameworks struggle to address. Third, regulatory divergence between the UK and European Union creates comparative opportunities and competitive pressures that inform policy debates. Fourth, the intersection of employment law, data protection, equality law, and emerging technology governance presents complex doctrinal questions requiring scholarly analysis. This dissertation therefore contributes to ongoing debates regarding appropriate regulatory responses to algorithmic employment practices within the specific context of UK law.
Aim and objectives
The primary aim of this dissertation is to critically evaluate whether United Kingdom employment law requires reform to adequately address algorithmic decision-making in hiring and performance management contexts.
To achieve this aim, this study pursues the following specific objectives:
1. To analyse the current UK legal framework governing algorithmic decision-making in employment, identifying the primary statutory and common law provisions offering worker protection.
2. To evaluate evidence of harm arising from algorithmic hiring and performance management systems, examining discrimination, fairness perceptions, transparency deficits, and impacts upon worker well-being.
3. To assess the adequacy of existing UK law in addressing identified harms, examining enforcement mechanisms, transparency requirements, and substantive protections.
4. To synthesise scholarly proposals for legal reform, categorising recommended interventions and evaluating their potential effectiveness.
5. To provide recommendations regarding priority areas for legislative reform and future research directions.
Methodology
This dissertation employs a literature synthesis methodology, systematically analysing peer-reviewed academic scholarship, legal commentary, and empirical research to address the stated research objectives. Literature synthesis represents an appropriate methodological approach for examining complex regulatory questions where primary empirical investigation is impractical and where consolidation of existing knowledge provides a valuable scholarly contribution.
The research draws upon sources identified through systematic database searches, focusing upon peer-reviewed journal articles published in recognised employment law, technology law, human resource management, and interdisciplinary outlets. Primary databases consulted include Westlaw UK, LexisNexis, JSTOR, and specialist academic search engines. Search terms encompassed combinations of “algorithmic management,” “automated hiring,” “algorithmic recruitment,” “employment law reform,” “UK data protection,” “workplace discrimination,” and related terminology. Sources were selected based upon relevance to the UK regulatory context, methodological rigour, publication in recognised scholarly venues, and citation by other authoritative works.
The analytical approach involves thematic synthesis, whereby key findings from included sources are coded according to emergent themes aligned with research objectives. These themes include: current legal framework analysis; evidence of algorithmic harm; adequacy of existing protections; and proposed reform directions. Critical evaluation of sources considers methodological limitations, potential biases, and the strength of evidential claims. Where sources present conflicting findings or interpretations, these disagreements are explicitly acknowledged and analysed.
Limitations of this methodology include reliance upon published scholarship, which may lag behind rapidly evolving technological developments. Additionally, the relatively recent emergence of algorithmic employment systems means that the empirical evidence base is still developing. Nevertheless, sufficient high-quality scholarship exists to support meaningful analysis and conclusions regarding the research questions posed.
Literature review
Current UK legal framework governing algorithmic employment decisions
The United Kingdom presently lacks dedicated legislation specifically addressing algorithmic decision-making in employment contexts. Instead, worker protection derives from the intersection of several legal regimes developed for broader purposes. Understanding this fragmented framework is essential for evaluating its adequacy.
Data protection law provides the primary regulatory mechanism addressing automated decision-making. The UK General Data Protection Regulation, retained in domestic law following Brexit and supplemented by the Data Protection Act 2018, preserves provisions substantially equivalent to Article 22 of the EU GDPR. This provision establishes that data subjects have the right not to be subject to decisions based solely on automated processing which produce legal effects or similarly significantly affect them. In employment contexts, this ostensibly restricts fully automated hiring decisions taken without meaningful human involvement. However, scholars identify significant ambiguity regarding when automated hiring decisions are truly “solely” automated, particularly in mass recruitment contexts where human oversight may be nominal rather than substantive (Parviainen, 2022).
Equality legislation, principally the Equality Act 2010, prohibits direct and indirect discrimination based upon protected characteristics including race, sex, disability, and age. Algorithmic systems producing discriminatory outcomes may therefore generate legal liability under existing provisions. However, Kelly-Lyth (2020) demonstrates that challenging biased hiring algorithms under equality law presents substantial practical difficulties, including establishing causation, accessing evidence regarding algorithmic operation, and meeting evidential thresholds. The opacity of many algorithmic systems frustrates claimants’ ability to identify and prove discrimination.
Employment protection legislation, including the Employment Rights Act 1996, provides certain substantive protections regarding dismissal, redundancy, and contractual terms. However, these provisions were not drafted with algorithmic decision-making in mind and provide limited specific protection against automated management practices. Collins and Atkinson (2023) argue that existing worker voice mechanisms, including information and consultation rights, inadequately address the distinctive challenges of algorithmic management, particularly following Brexit, when EU-derived collective rights frameworks are no longer anchored in EU law.
Evidence of harm in algorithmic hiring systems
Substantial empirical and theoretical scholarship documents harms arising from algorithmic recruitment systems. Köchling and Wehner (2020) conducted a systematic review examining discrimination and fairness in algorithmic human resource decision-making, finding that algorithmic recruitment and development tools generate implicit discrimination, perceived unfairness, and significant legal and reputational risks for employers. Their analysis reveals that discrimination may arise through multiple mechanisms, including biased training data, proxy variables correlating with protected characteristics, and feedback loops reinforcing historical patterns of exclusion.
Investigation of specific algorithmic hiring tools deployed in the UK context reveals concerning practices. Sánchez-Monedero, Dencik and Edwards (2019) examined automated hiring systems including HireVue, Pymetrics, and Applied, finding limited transparency regarding algorithmic operation, unclear bias-mitigation claims, and uncertain compliance with legal standards. Their analysis demonstrates that vendors frequently make unsubstantiated assertions regarding bias reduction whilst resisting independent scrutiny of algorithmic processes. This opacity frustrates both regulatory oversight and individual redress.
Importantly, recent scholarship demonstrates that algorithmic discrimination often operates through complex human-algorithm interaction effects rather than simply biased code. Bursell and Roumbanis (2024) conducted empirical research examining meta-algorithmic judgments within a large multisite company, finding that algorithmic recruitment can reduce diversity through mechanisms extending beyond algorithmic outputs themselves. Human actors interpreting and acting upon algorithmic recommendations may amplify or introduce bias, suggesting that technical fixes alone cannot resolve discrimination concerns.
Fairness perceptions research provides additional evidence of harm. Lavanchy et al. (2023) examined applicants’ fairness perceptions of algorithm-driven hiring procedures, finding that applicants consistently judge algorithm-only hiring as less fair than human or hybrid procedures. This finding has significant implications for employer reputation, applicant pool quality, and the legitimacy of selection processes. Perceived unfairness may deter qualified candidates from applying or accepting positions, undermining purported efficiency benefits.
Impacts of algorithmic performance management
Algorithmic systems increasingly extend beyond hiring into ongoing performance management, encompassing work allocation, monitoring, evaluation, and disciplinary decisions. Scholarship documents significant harms arising from these practices.
Kinowska and Sienkiewicz (2022) examined the influence of algorithmic management practices on workplace well-being across European organisations, finding that algorithmic management is associated with reduced worker autonomy and lower workplace well-being, despite generating certain efficiency gains for employers. Their research reveals that workers subject to intensive algorithmic monitoring experience heightened stress, reduced job satisfaction, and diminished sense of professional identity.
Duggan et al. (2019) developed a research agenda for employment relations and human resource management regarding algorithmic management and app-work within the gig economy. Their analysis identifies how algorithmic systems can enable intensive surveillance, unpredictable scheduling, automated disciplinary measures, and termination without meaningful human review. These practices fundamentally alter the employment relationship, shifting power toward employers whilst insulating managerial decisions from traditional forms of accountability.
Platform work exemplifies these concerns. Workers in the gig economy frequently experience algorithmic control over work allocation, performance ratings, and account deactivation without transparent explanation or meaningful appeal processes. Whilst platform work represents an extreme case, similar algorithmic management practices are increasingly deployed within conventional employment relationships across sectors including retail, logistics, and professional services.
Limitations of current UK regulatory framework
Scholars consistently identify significant gaps and weaknesses within the current UK regulatory framework addressing algorithmic employment decisions. Abraha (2023) examines the potential of data protection law to regulate algorithmic employment decisions, concluding that whilst data protection rules can constrain certain algorithmic management practices, they leave significant gaps in worker protection requiring complementary measures. Data protection rights are individually exercised, placing substantial burdens upon workers to identify violations, access complex technical evidence, and pursue enforcement through under-resourced regulatory channels.
Enforcement represents a critical weakness. Kelly-Lyth (2020) demonstrates that although equality law theoretically applies to algorithmic discrimination, enforcement mechanisms are inadequate. The Information Commissioner’s Office, responsible for data protection enforcement, lacks specialist employment expertise and investigatory capacity regarding algorithmic systems. Employment tribunals face challenges adjudicating technologically complex disputes without appropriate technical support. Individual workers typically lack resources to pursue litigation against well-resourced employers and technology vendors.
Transparency deficits compound enforcement difficulties. Sánchez-Monedero, Dencik and Edwards (2019) identify that vendors of algorithmic hiring systems typically resist disclosure regarding algorithmic operation, citing commercial confidentiality. This opacity means that neither workers, employers, nor regulators possess sufficient information to evaluate whether systems operate lawfully. Without transparency, accountability becomes practically impossible.
The UK’s post-Brexit position introduces additional uncertainty. Collins and Atkinson (2023) analyse worker voice and algorithmic management in post-Brexit Britain, noting that withdrawal from EU frameworks removes certain protections and consultation requirements that might otherwise constrain algorithmic management practices. The UK government has signalled intentions to diverge from EU regulatory approaches, potentially weakening worker protections further.
Scholarly proposals for legal reform
Academic literature identifies multiple reform directions warranting consideration. These proposals can be categorised into four primary themes: transparency and auditing requirements; limitations on automation; worker voice mechanisms; and substantive protections.
Regarding transparency and auditing, scholars advocate mandatory disclosure requirements and independent algorithmic auditing. Mariani and Lozada (2023) argue that organisations deploying algorithmic recruitment tools should be required to publish impact assessments documenting system operation and outcomes. Adams-Prassl et al. (2023) propose regulatory blueprints requiring bias and fairness audits of hiring tools by independent third parties with appropriate technical expertise. Kelly-Lyth (2020) similarly advocates transparency reforms enabling workers and regulators to scrutinise algorithmic systems effectively.
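To make the auditing proposal concrete, the sketch below illustrates one quantitative check an independent bias audit of a hiring tool might include: the well-known “four-fifths rule” adverse impact ratio, which compares selection rates across applicant groups. This is an illustrative assumption on my part; the cited works do not prescribe this particular metric, and the group labels and figures are hypothetical.

```python
# Minimal illustrative sketch of a disparate-impact check for an
# algorithmic screening tool, using the "four-fifths rule" heuristic.
# (Hypothetical metric and data; not a method from the cited proposals.)
from collections import Counter

def adverse_impact_ratio(outcomes):
    """outcomes: iterable of (group, selected) pairs, selected is True/False.
    Returns the lowest group selection rate divided by the highest."""
    applicants = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, chosen in outcomes if chosen)
    rates = {group: selected[group] / n for group, n in applicants.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: group A selected at 60%, group B at 40%.
data = [("A", True)] * 60 + [("A", False)] * 40 \
     + [("B", True)] * 40 + [("B", False)] * 60

ratio = adverse_impact_ratio(data)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.40 / 0.60 -> 0.67
# A ratio below 0.8 conventionally flags the tool for closer scrutiny.
print("Flag for review" if ratio < 0.8 else "Within four-fifths threshold")
```

A published impact assessment of the kind these scholars envisage would report such ratios across each protected characteristic, alongside documentation of training data, model purpose, and human oversight arrangements.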
Limitations on automation represent a second reform category. Parviainen (2022) argues that fully automated decisions with significant employment consequences should be prohibited or substantially restricted. Aloisi (2024) advocates banning fully automated dismissals, requiring meaningful human review for high-stakes decisions affecting workers’ livelihoods. Adams-Prassl et al. (2023) propose that certain algorithmic management practices should be deemed “automatically unfair” regardless of outcome, establishing clear boundaries on acceptable employer conduct.
Strengthening worker voice constitutes a third reform direction. Collins and Atkinson (2023) advocate stronger information and consultation rights regarding algorithmic system deployment, arguing that workers and their representatives should have meaningful input before such systems are implemented. Aloisi (2024) examines European developments including proposed platform work directives, suggesting that the UK should adapt co-determination and consultation requirements to address algorithmic management specifically.
Finally, scholars propose enhanced substantive protections. Aloisi (2024) advocates clear rules addressing discrimination, surveillance intensity, and algorithmic harm within employment legislation. Duggan et al. (2019) argue that employment protection should explicitly address automated disciplinary and termination decisions. Adams-Prassl et al. (2023) propose comprehensive regulatory blueprints encompassing transparency, participation, and substantive limits on algorithmic management practices.
Discussion
The literature reviewed demonstrates compelling evidence that UK employment law requires reform to adequately address algorithmic decision-making in hiring and performance management. This discussion critically analyses key findings and their implications for achieving the stated research objectives.
Adequacy of existing legal protections
The analysis reveals that existing UK law provides insufficient protection against algorithmic employment harms. Whilst data protection and equality legislation theoretically apply to algorithmic systems, practical enforcement faces insurmountable obstacles. The opacity of algorithmic systems means workers cannot access evidence necessary to establish discrimination claims. Regulatory bodies lack technical expertise and investigatory capacity to scrutinise algorithmic operation effectively. Individual enforcement mechanisms place unrealistic burdens upon workers with limited resources and informational disadvantages.
The fragmented nature of current protections creates additional difficulties. Workers must navigate multiple legal regimes—data protection, equality, employment protection—each with distinct procedures, remedies, and limitations. This complexity favours sophisticated employers and disadvantages workers seeking redress. Furthermore, gaps between regulatory regimes mean certain algorithmic harms fall outside any existing framework. The literature demonstrates that current law was not designed with algorithmic decision-making in mind and consequently fails to address its distinctive characteristics.
Nature and significance of evidenced harms
The evidence of harm from algorithmic employment systems is substantial and multifaceted. Discriminatory outcomes occur through multiple mechanisms, including biased training data, proxy discrimination, and human-algorithm interaction effects. These findings complicate simplistic narratives suggesting either that algorithms eliminate human bias or that technical fixes can resolve discrimination concerns. Instead, algorithmic discrimination reflects and potentially amplifies existing structural inequalities through complex sociotechnical processes.
Fairness perception research reveals additional dimensions of harm extending beyond discrimination. Applicants experiencing algorithm-only hiring processes perceive these as less fair, potentially deterring qualified candidates and undermining employer reputation. Worker well-being research documents psychological harms from intensive algorithmic monitoring and control, including reduced autonomy, heightened stress, and diminished job satisfaction. These harms affect workers regardless of whether discrimination occurs, suggesting that regulatory responses must address algorithmic management practices broadly rather than focusing exclusively upon discriminatory outcomes.
Evaluation of proposed reforms
The reform proposals identified in the literature address distinct but interconnected concerns. Transparency and auditing requirements tackle the fundamental problem of algorithmic opacity, enabling scrutiny by workers, regulators, and civil society. However, transparency alone cannot ensure accountability without complementary enforcement mechanisms and substantive standards against which algorithmic operation can be evaluated.
Limitations on automation address concerns regarding algorithmic power imbalances by establishing boundaries on employer discretion. Prohibiting fully automated dismissals or requiring meaningful human review for high-stakes decisions preserves human agency within employment relationships. Such limitations acknowledge that certain decisions affecting workers’ livelihoods and dignity should not be delegated entirely to automated systems regardless of purported efficiency benefits.
Worker voice mechanisms address the collective dimension of algorithmic management. Individual enforcement rights, however robust, cannot adequately address systemic practices affecting entire workforces. Consultation requirements, information rights, and collective bargaining over algorithmic systems enable workers to influence technological deployment rather than merely responding to algorithmic outputs. Such mechanisms reflect recognition that algorithmic management fundamentally alters workplace power dynamics, requiring collective responses.
Substantive protections establishing clear legal standards regarding algorithmic employment practices provide necessary certainty for workers, employers, and regulators. Defining certain practices as automatically unfair, establishing discrimination standards appropriate for algorithmic contexts, and creating explicit surveillance limits would clarify legal obligations and facilitate enforcement.
Implementation considerations and challenges
Several considerations inform implementation of proposed reforms. Technical expertise represents a critical requirement. Regulatory bodies, tribunals, and courts require capacity to evaluate algorithmic systems, assess bias and discrimination claims, and interpret technical evidence. Investment in training, specialist personnel, and expert advisory mechanisms is essential for effective implementation.
International competitiveness concerns frequently arise in regulatory debates. Critics argue that stringent regulation may disadvantage UK employers relative to competitors in jurisdictions with weaker protections. However, this argument has limited force given that algorithmic harms generate substantial costs including litigation, reputational damage, and workforce disengagement. Furthermore, regulatory leadership may generate competitive advantages through enhanced legitimacy, talent attraction, and innovation in responsible artificial intelligence practices.
The UK’s post-Brexit position creates both challenges and opportunities. Divergence from EU regulatory approaches enables tailored domestic responses but risks undermining portability of worker protections and creating barriers for organisations operating across jurisdictions. The UK government’s expressed preference for lighter-touch technology regulation suggests political headwinds facing comprehensive reform proposals. Nevertheless, the evidence reviewed demonstrates that voluntary approaches and existing frameworks are inadequate, suggesting that legislative intervention will ultimately prove necessary.
Research limitations and evidential gaps
This analysis acknowledges certain limitations warranting consideration. The literature examined reflects predominantly European and UK perspectives, with limited engagement with regulatory approaches in other jurisdictions that might offer valuable comparative insights. Empirical research on algorithmic employment systems remains relatively limited given the recent emergence of these practices, suggesting that evidence regarding harm and regulatory effectiveness will continue developing.
Additionally, the rapid pace of technological change means that regulatory responses risk obsolescence. Artificial intelligence capabilities are advancing substantially, potentially enabling new forms of algorithmic management not fully addressed by current reform proposals. Regulatory frameworks must therefore incorporate flexibility and adaptive mechanisms enabling responses to emerging technologies whilst maintaining worker protection principles.
Conclusions
This dissertation has critically evaluated whether United Kingdom employment law requires reform to address algorithmic decision-making in hiring and performance management. The analysis demonstrates that existing law, whilst providing certain protections, is inadequate to address the distinctive harms arising from algorithmic employment practices. Reform is therefore both warranted and necessary.
Regarding the first objective, analysis reveals that current UK law governing algorithmic employment decisions comprises fragmented provisions from data protection, equality, and employment protection legislation, none of which was designed with algorithmic systems in mind. This patchwork framework creates enforcement difficulties, transparency deficits, and gaps in coverage.
Addressing the second objective, substantial evidence demonstrates harms arising from algorithmic hiring and performance management, including discriminatory outcomes, reduced diversity, diminished fairness perceptions, lower worker autonomy, and negative well-being impacts. These harms occur through complex mechanisms extending beyond biased algorithms to encompass human-algorithm interaction effects and systemic features of algorithmic management.
The third objective regarding adequacy of existing protections confirms that current UK law fails to address identified harms effectively. Enforcement mechanisms are inadequate, transparency requirements insufficient, and substantive protections incomplete. Workers face insurmountable obstacles pursuing individual redress against algorithmic systems operating opaquely.
Concerning the fourth objective, scholarly proposals for reform coalesce around four themes: transparency and auditing requirements; limitations on automation; enhanced worker voice mechanisms; and substantive protections establishing clear legal standards. These complementary reforms address distinct dimensions of algorithmic harm and collectively could substantially strengthen worker protection.
The fifth objective regarding recommendations identifies several priority areas. Legislative reform should mandate algorithmic impact assessments and independent auditing for employment systems. Meaningful human review should be required for high-stakes decisions including hiring, disciplinary action, and dismissal. Worker consultation rights regarding algorithmic system deployment should be strengthened. Clear substantive standards addressing algorithmic discrimination, surveillance, and automatically unfair practices should be established. Regulatory capacity should be enhanced through technical training, specialist personnel, and expert advisory mechanisms.
Future research should examine implementation experiences in jurisdictions adopting algorithmic employment regulations, develop methodologies for assessing algorithmic fairness across diverse protected characteristics, investigate worker perspectives on algorithmic management, and analyse the effectiveness of different regulatory mechanisms in practice. Longitudinal research tracking algorithmic employment practices and regulatory responses as technologies evolve would provide valuable evidence informing policy development.
The significance of this analysis extends beyond immediate policy debates. Algorithmic employment systems fundamentally alter workplace power dynamics, creating new forms of managerial control whilst insulating decisions from traditional accountability mechanisms. How societies regulate these technologies reflects deeper choices regarding the values governing employment relationships, the distribution of technological benefits and burdens, and the balance between efficiency and dignity in working life. The evidence reviewed demonstrates that current UK law does not adequately protect workers within this rapidly evolving landscape. Legislative reform is therefore not merely desirable but essential to ensure that algorithmic employment practices operate within frameworks reflecting fundamental principles of fairness, transparency, and accountability.
References
Abraha, H., 2023. Regulating algorithmic employment decisions through data protection law. *European Labour Law Journal*, 14(2), pp. 172-191. https://doi.org/10.1177/20319525231167317
Adams-Prassl, J., Abraha, H., Kelly-Lyth, A., Silberman, M. and Rakshita, S., 2023. Regulating algorithmic management: A blueprint. *European Labour Law Journal*, 14(2), pp. 124-151. https://doi.org/10.1177/20319525231167299
Aloisi, A., 2024. Regulating algorithmic management at work in the European Union: data protection, non-discrimination and collective rights. *International Journal of Comparative Labour Law and Industrial Relations*, 40(1), pp. 1-32. https://doi.org/10.54648/ijcl2024001
Bursell, M. and Roumbanis, L., 2024. After the algorithms: A study of meta-algorithmic judgments and diversity in the hiring process at a large multisite company. *Big Data & Society*, 11(1), pp. 1-14. https://doi.org/10.1177/20539517231221758
Collins, P. and Atkinson, J., 2023. Worker voice and algorithmic management in post-Brexit Britain. *Transfer: European Review of Labour and Research*, 29(1), pp. 37-52. https://doi.org/10.1177/10242589221143068
Duggan, J., Sherman, U., Carbery, R. and McDonnell, A., 2019. Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM. *Human Resource Management Journal*, 30(1), pp. 114-132. https://doi.org/10.1111/1748-8583.12258
Kelly-Lyth, A., 2020. Challenging biased hiring algorithms. *Oxford Journal of Legal Studies*, 41(4), pp. 899-928. https://doi.org/10.1093/ojls/gqab006
Kinowska, H. and Sienkiewicz, L., 2022. Influence of algorithmic management practices on workplace well-being – evidence from European organisations. *Information Technology & People*, 36(8), pp. 21-42. https://doi.org/10.1108/itp-02-2022-0079
Köchling, A. and Wehner, M., 2020. Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. *Business Research*, 13(3), pp. 795-848. https://doi.org/10.1007/s40685-020-00134-w
Lavanchy, M., Reichert, P., Narayanan, J. and Savani, K., 2023. Applicants’ fairness perceptions of algorithm-driven hiring procedures. *Journal of Business Ethics*, 188(1), pp. 125-150. https://doi.org/10.1007/s10551-022-05320-w
Mariani, K. and Lozada, F., 2023. The use of AI and algorithms for decision-making in workplace recruitment practices. *Journal of Student Research*, 12(1), pp. 1-12. https://doi.org/10.47611/jsr.v12i1.1855
Parviainen, H., 2022. Can algorithmic recruitment systems lawfully utilise automated decision-making in the EU? *European Labour Law Journal*, 13(2), pp. 225-248. https://doi.org/10.1177/20319525221093815
Sánchez-Monedero, J., Dencik, L. and Edwards, L., 2019. What does it mean to ‘solve’ the problem of discrimination in hiring?: social, technical and legal perspectives from the UK on automated hiring systems. *Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency*, pp. 458-468. https://doi.org/10.1145/3351095.3372849
