Abstract
App-based digital identity systems have emerged as pivotal mechanisms for accessing essential services across public and private sectors. Whilst proponents herald these technologies as vehicles for efficiency and inclusion, mounting evidence suggests they disproportionately harm marginalised and vulnerable populations. This literature synthesis examines the mechanisms through which digital identity platforms create and reinforce exclusionary practices, focusing specifically on elderly individuals, migrants, refugees, persons with disabilities, and those experiencing socioeconomic disadvantage. Drawing upon systematic analysis of fifty peer-reviewed sources identified through comprehensive database searches, this review identifies three primary mechanisms of exclusion: technical barriers relating to device access and digital literacy; design and policy failures that neglect diverse user needs; and surveillance-related privacy risks that disproportionately burden minority communities. The findings reveal that digital exclusion operates intersectionally, with compounding vulnerabilities amplifying harm for those possessing multiple marginalised identities. The consequences extend beyond mere service inaccessibility to encompass psychological harm, loss of autonomy, and reinforcement of existing social inequalities. This synthesis concludes by advocating for inclusive design principles, policy reform, and sustained attention to intersectional vulnerabilities in digital identity system development.
Introduction
The proliferation of digital technologies has fundamentally transformed how individuals interact with governmental institutions, financial services, and essential public provisions. Central to this transformation has been the emergence of app-based digital identity systems, which promise streamlined access to services whilst reducing administrative burdens and enhancing security (Beduschi, 2019). Governments worldwide have embraced these technologies, positioning digital identification as a cornerstone of modernised public administration and a pathway toward greater inclusion for previously undocumented populations (Addo and Senyo, 2021).
However, beneath the rhetoric of digital inclusion lies a more troubling reality. The implementation of app-based identity systems has generated new forms of exclusion that disproportionately affect those already positioned at society’s margins. Far from serving as equalising forces, these technologies frequently replicate and intensify existing patterns of social stratification, creating what scholars have termed “digital divides” that operate along familiar axes of inequality including age, disability, socioeconomic status, and migration history (Robinson, Ragnedda and Schulz, 2020).
The academic significance of this phenomenon extends across multiple disciplinary boundaries. From a sociological perspective, digital exclusion represents a novel mechanism through which social stratification becomes technologically mediated and potentially rendered invisible within policy discourse. Legal scholars have raised fundamental questions regarding the compatibility of mandatory digital identification with human rights frameworks, particularly concerning privacy, non-discrimination, and access to essential services (Beduschi, 2019). Information systems researchers have documented the ways in which platform design choices embed assumptions about “normal” users that systematically disadvantage those whose circumstances deviate from imagined norms (Park and Humphry, 2019).
The practical implications of digital exclusion from identity systems are equally profound. As governments increasingly mandate digital identity verification for accessing healthcare, welfare benefits, financial services, and legal protections, those unable to navigate these systems face tangible deprivations that compromise their wellbeing and life opportunities (Masiero and Arvidsson, 2021). During the COVID-19 pandemic, the acceleration of digital service delivery rendered these exclusionary dynamics particularly acute, with elderly individuals experiencing heightened isolation and marginalisation as in-person alternatives diminished (Seifert, 2020).
This review addresses the critical question of which populations face the greatest risk of harm from app-based identity systems and interrogates the mechanisms through which such harm manifests. Understanding these dynamics is essential for policymakers, system designers, and civil society advocates seeking to ensure that digital identity technologies fulfil their inclusive potential rather than entrenching existing inequalities.
Aim and objectives
The primary aim of this literature synthesis is to critically examine the evidence concerning digital exclusion arising from app-based identity systems, with particular attention to identifying the populations most vulnerable to harm and elucidating the mechanisms through which exclusion operates.
To achieve this aim, the following objectives guide this review:
1. To identify and characterise the demographic groups most likely to experience exclusion from app-based digital identity systems based on existing empirical and theoretical literature.
2. To analyse the technical, design-related, and systemic mechanisms through which digital identity platforms create barriers to access for marginalised populations.
3. To evaluate the consequences of digital exclusion, encompassing practical impacts on service access, psychological and social harms, and the reinforcement of structural inequalities.
4. To examine how intersectionality shapes experiences of digital exclusion, considering how overlapping identities amplify vulnerability.
5. To assess the current evidence regarding policy and design interventions capable of mitigating exclusionary outcomes.
6. To identify gaps in existing research and propose priorities for future investigation.
Methodology
This review employs a literature synthesis methodology, drawing upon a comprehensive search of academic databases to identify relevant peer-reviewed publications addressing digital exclusion in the context of app-based identity systems. The approach follows established principles for systematic literature reviews whilst acknowledging the interpretive nature of synthesising diverse disciplinary perspectives.
The search strategy utilised the Consensus research platform, which aggregates content from major academic databases including Semantic Scholar and PubMed, encompassing over 170 million research papers. Eight distinct search groups were employed, targeting foundational concepts related to digital exclusion, specific vulnerable populations, interdisciplinary perspectives, and adjacent topics relevant to digital identity systems.
The initial search identified 1,086 potentially relevant papers. These underwent systematic screening, with 746 papers receiving initial review and 501 deemed eligible for closer examination. From this pool, the fifty most relevant papers were selected for inclusion based on their direct engagement with questions of digital exclusion, vulnerable populations, and identity system design or implementation. Selection criteria prioritised empirical studies, theoretical contributions from recognised scholars in the field, and policy analyses published in peer-reviewed venues.
The included papers span publication dates from 2017 to 2025, capturing both foundational theoretical work on digital inequality and recent empirical investigations into specific identity system implementations. The corpus represents contributions from information systems research, human-computer interaction, sociology, social work, public policy, and law, reflecting the inherently interdisciplinary nature of the topic.
Thematic analysis guided the synthesis of findings, with particular attention to identifying consistent patterns across studies regarding populations at risk, mechanisms of exclusion, and consequences of digital identity-related marginalisation. Where studies offered conflicting findings or interpretations, these were examined critically with attention to methodological differences and contextual factors that might explain divergent conclusions.
Literature review
### Populations experiencing heightened vulnerability to digital exclusion
The literature consistently identifies several demographic groups as facing substantially elevated risks of exclusion from app-based identity systems. These populations share common characteristics including limited access to necessary technological resources, reduced digital literacy, and historical patterns of marginalisation that shape their interactions with institutional systems.
Elderly individuals constitute perhaps the most extensively studied population in relation to digital exclusion. Research documents multiple barriers confronting older users, including lack of familiarity with smartphone interfaces, physical limitations affecting vision and manual dexterity, and reduced confidence in navigating digital environments (Zhu, Yu and Krever, 2024). Studies from diverse national contexts reveal that digital exclusion among the elderly extends beyond practical difficulties to encompass emotional and psychological dimensions, including diminished self-esteem, increased dependency on others, and feelings of social marginalisation as peers and institutions assume universal digital competence (Fang, Shao and Wang, 2025; Ge et al., 2025). The COVID-19 pandemic rendered these dynamics particularly visible, as elderly individuals faced sudden exclusion from services that had previously been accessible through in-person channels (Seifert, 2020).
Migrants and refugees represent another population facing acute exclusion from digital identity systems. These individuals frequently lack documentation required by identity verification processes, possess limited familiarity with host country digital infrastructures, and encounter language barriers that impede navigation of app interfaces designed for dominant language speakers (Schoemaker et al., 2020). Research conducted in Lebanon, Jordan, and Uganda demonstrates that refugees’ experiences with digital identity systems are shaped by pre-existing power asymmetries between displaced populations and humanitarian organisations, with identity platforms sometimes operating as mechanisms of surveillance rather than empowerment (Schoemaker et al., 2020). Studies of refugee integration in Germany further document how digital exclusion intersects with broader processes of social othering, limiting opportunities for meaningful participation in host society institutions (Berg, 2025).
Persons with disabilities encounter distinct barriers arising from the accessibility failures of app-based identity systems. Research identifies multiple points of exclusion, including visual interfaces inaccessible to users with visual impairments, authentication methods requiring physical capabilities not possessed by all users, and cognitive demands that may exceed the capacities of individuals with intellectual or developmental disabilities (Park and Humphry, 2019; Egard and Hansson, 2021). These technical barriers compound existing patterns of social exclusion, limiting disabled individuals’ capacity to independently access services that non-disabled users navigate with relative ease (Sannon and Forte, 2022).
Individuals experiencing socioeconomic disadvantage face exclusion rooted in material deprivation. Research consistently identifies device affordability, internet access costs, and limited opportunities for digital skill development as barriers confronting low-income populations (Krishna, 2020; Allmann and Radu, 2022). Studies of India’s Aadhaar system illustrate how informal workers lacking stable addresses, bank accounts, or regular income face systematic difficulties obtaining and maintaining digital identities, with cascading consequences for their access to welfare benefits and formal economic participation (Krishna, 2020).
Minority and marginalised communities experience forms of exclusion operating through algorithmic processes embedded within identity systems. Research documents how facial recognition technologies exhibit differential accuracy across racial groups, creating authentication barriers for users whose physical characteristics diverge from those represented in training datasets (Rawat et al., 2020). Beyond technical bias, minority communities face heightened surveillance risks when engaging with digital identity platforms, as data generated through identity verification may be utilised for purposes extending beyond users’ intentions or awareness (Karizat et al., 2021; De Oliveira Mariano et al., 2025).
### Mechanisms generating digital exclusion
The literature identifies three primary categories of mechanisms through which app-based identity systems produce exclusionary outcomes: technical barriers, design and policy failures, and surveillance-related risks.
Technical barriers encompass the material and capability requirements that identity systems impose upon users. The near-universal dependence of app-based systems on smartphone ownership immediately excludes substantial portions of vulnerable populations who lack access to such devices or possess only older models incapable of running current applications (Allmann and Radu, 2022). Beyond device ownership, identity systems frequently require robust digital footprints including email addresses, phone numbers, and sometimes existing digital accounts, creating circular dependencies that disadvantage precisely those populations lacking prior digital integration (Allmann and Radu, 2022).
Biometric requirements present particular challenges for certain user groups. Research documents that elderly individuals and manual labourers frequently possess degraded fingerprints that fail recognition algorithms, whilst facial recognition systems exhibit documented biases against darker-skinned users (Schoemaker, Martin and Weitzberg, 2023; Beduschi, 2019). Users with physical disabilities may be unable to position themselves appropriately for biometric capture or may possess atypical physical characteristics that confound recognition systems (Kemppainen et al., 2023).
Design and policy failures constitute a second category of exclusionary mechanism. Research demonstrates that digital identity systems are frequently designed with implicit assumptions about “normal” users that fail to account for the diversity of circumstances individuals bring to identity verification encounters (Park and Humphry, 2019). Systems developed within government welfare contexts may assume stable addresses, consistent names, and linear documentation histories that poorly match the lived realities of homeless individuals, women who have changed names through marriage or divorce, or individuals from cultures with naming conventions diverging from Western norms (Schou and Pors, 2018; Hundal and Chaudhuri, 2020).
Policy frameworks surrounding digital identity implementation frequently mandate digital verification without ensuring adequate alternatives for those unable to comply. Studies of India’s public distribution system reveal how digital identity requirements resulted in legitimate benefit claimants being denied essential food rations due to authentication failures, with particularly severe impacts in rural areas where connectivity limitations compounded individual capability constraints (Hundal and Chaudhuri, 2020; Masiero and Arvidsson, 2021). Research on Italy’s SPID system similarly documents how digitalisation of public administration created new forms of inequality by disadvantaging citizens lacking digital competencies (Esposito, 2024).
Surveillance and privacy risks constitute the third mechanism through which digital identity systems harm marginalised populations. Research demonstrates that identity platforms generate extensive data about users’ movements, service access patterns, and behavioural characteristics, creating surveillance infrastructures that disproportionately burden those whose circumstances render them objects of state interest (Masiero, 2023). Refugees interfacing with humanitarian identity systems, welfare recipients subject to fraud detection algorithms, and minority community members facing heightened law enforcement scrutiny all experience amplified surveillance exposure through digital identity engagement (Beduschi, 2019; Gangadharan, 2017).
The expectation of surveillance may itself generate exclusionary effects by deterring vulnerable individuals from engaging with identity systems despite legitimate service needs. Research with marginalised internet users documents how awareness of surveillance possibilities shapes digital engagement decisions, with some individuals deliberately limiting their digital footprints to reduce exposure to perceived risks (Gangadharan, 2017; Sannon and Forte, 2022).
### Consequences of exclusion from digital identity systems
The harms arising from digital exclusion operate across practical, psychological, and structural dimensions, with consequences that extend far beyond immediate service access difficulties.
Loss of access to essential services represents the most immediate and tangible consequence of digital exclusion. As governments mandate digital identity verification for healthcare access, welfare benefit claims, financial services, and legal protections, individuals unable to navigate these requirements face deprivation of goods and services essential to their wellbeing (Masiero and Arvidsson, 2021; Zhu, Yu and Krever, 2024). Studies from India document cases where identity system failures resulted in individuals being denied food rations, pension payments, and healthcare access, with consequences including documented starvation deaths linked to Aadhaar authentication problems (Hundal and Chaudhuri, 2020).
Psychological and social harms extend the impact of exclusion beyond material deprivation. Research with elderly users documents feelings of shame, inadequacy, and dependency arising from inability to independently manage digital identity requirements (Fang, Shao and Wang, 2025). Studies of refugee populations reveal how digital exclusion compounds broader experiences of displacement and marginalisation, with identity system failures reinforcing narratives of not belonging and unworthiness (Berg, 2025). The psychological burden of navigating systems perceived as hostile or indifferent to one’s circumstances generates stress and anxiety that may discourage help-seeking behaviour even when support theoretically exists (Liu, 2020).
The reinforcement of existing inequalities represents perhaps the most concerning consequence of digital exclusion. Research applying relative deprivation theory to digital inequality demonstrates how exclusion compounds across dimensions, with those lacking digital access experiencing cumulative disadvantage as social, economic, and institutional life increasingly assumes digital participation (Helsper, 2017; Ragnedda, Ruiu and Addeo, 2022). Studies document self-reinforcing cycles whereby digital exclusion limits educational and economic opportunities, perpetuating the material disadvantage that produced the initial exclusion (Ragnedda, Ruiu and Addeo, 2022).
### Intersectionality and compounding vulnerabilities
The literature increasingly recognises that digital exclusion operates intersectionally, with individuals possessing multiple marginalised identities facing compounded barriers that exceed the sum of individual characteristics. Research demonstrates that the intersection of age, gender, race, disability, and socioeconomic status generates distinctive vulnerability profiles requiring tailored analytical and policy responses (Karizat et al., 2021; Liu, 2020).
Studies of algorithmic experiences on social media platforms document how users holding intersecting marginalised identities develop sophisticated “folk theories” about algorithmic behaviour shaped by their positioned experiences of digital systems (Karizat et al., 2021). These findings suggest that understanding digital exclusion requires attention not merely to categorical membership but to the specific ways in which identity intersections shape encounters with technological systems.
Research examining digital health inequalities among elderly UK residents reveals significant variation based on the combination of age with other identity dimensions including gender, ethnicity, and socioeconomic position (Liu, 2020). Similarly, studies of digital inequality profiles identify eight distinct configurations of digital exclusion requiring differentiated intervention strategies, highlighting the inadequacy of one-size-fits-all approaches to inclusion (Asmar, Mariën and Van Audenhove, 2022).
Contextual and structural factors further shape exclusion patterns across different settings. Research from the Global South emphasises how colonial legacies, infrastructural limitations, and distinctive regulatory environments create forms of digital exclusion not fully captured by frameworks developed in wealthy Western contexts (Heeks, 2022; Siad and Sagar, 2025). Studies examining cross-border digital identity initiatives reveal particular challenges for populations whose circumstances require engagement with multiple national identity systems possessing divergent technical requirements and policy frameworks (Supangkat et al., 2025).
Discussion
The evidence synthesised in this review demonstrates conclusively that app-based digital identity systems, despite their potential benefits, currently generate substantial harm for marginalised and vulnerable populations. The consistency of findings across diverse geographical contexts, disciplinary perspectives, and methodological approaches strengthens confidence in this central conclusion. Elderly individuals, migrants and refugees, persons with disabilities, those experiencing socioeconomic disadvantage, and minority communities face systematically elevated risks of exclusion from services increasingly predicated upon successful digital identity verification.
The mechanisms generating these exclusionary outcomes operate at multiple levels. Technical requirements including smartphone ownership, digital footprint prerequisites, and biometric capabilities create immediate barriers for populations lacking resources or possessing physical characteristics that confound recognition systems. These technical barriers do not arise neutrally but rather reflect design choices that privilege imagined “standard” users whilst neglecting the diversity of circumstances individuals actually bring to identity verification encounters. The embedding of normative assumptions within ostensibly technical systems represents a form of exclusion by design that obscures the political character of technological choices behind a veneer of algorithmic neutrality.
Policy frameworks surrounding digital identity implementation frequently compound these technical exclusions. The mandate of digital verification without provision of accessible alternatives effectively transforms optional technologies into mandatory gatekeepers, with consequences that fall disproportionately upon those least equipped to comply. This pattern reflects broader tendencies within digitalisation initiatives to prioritise administrative efficiency whilst externalising costs onto vulnerable populations least positioned to advocate effectively for their interests.
The surveillance dimensions of digital identity systems introduce additional layers of harm that extend beyond access denial to encompass risks to privacy, autonomy, and security. Marginalised populations frequently encounter digital identity systems not as neutral service access mechanisms but as components of broader surveillance infrastructures that generate information potentially deployable against their interests. The awareness of such surveillance possibilities may itself deter engagement with identity systems, creating exclusionary effects even among individuals technically capable of compliance.
The consequences of digital exclusion extend substantially beyond immediate service access difficulties. The psychological harms documented in the literature—including shame, diminished self-esteem, and reinforced experiences of marginalisation—demonstrate that digital exclusion affects individuals’ sense of self and social belonging. The structural reinforcement of inequality through digital exclusion raises fundamental questions about whether current trajectories of digital identity implementation are compatible with commitments to social justice and inclusive development.
The intersectional character of digital exclusion demands analytical and policy approaches attentive to the compounding effects of multiple marginalised identities. Interventions designed around single identity categories risk overlooking the distinctive vulnerabilities faced by those occupying multiple marginal positions simultaneously. The finding that eight distinct profiles of digital exclusion can be identified within populations previously treated as homogeneous underscores the inadequacy of undifferentiated inclusion strategies.
The evidence regarding interventions capable of mitigating digital exclusion remains less developed than documentation of exclusionary dynamics themselves. Whilst researchers have identified principles including universal design, policy safeguards, and ongoing monitoring as potentially protective, systematic evaluation of intervention effectiveness remains limited. This gap between problem identification and solution development represents a significant limitation in current knowledge that impedes evidence-based policy formation.
Several limitations of the existing literature warrant acknowledgement. Geographic concentration of research in certain contexts—notably India’s Aadhaar system—may limit generalisability to settings with different technological, institutional, and cultural configurations. The rapid pace of technological change means that findings from earlier studies may not fully reflect current system capabilities and limitations. Additionally, the voices of excluded individuals themselves remain underrepresented in literature that predominantly adopts institutional or technical perspectives on exclusion dynamics.
Conclusions
This literature synthesis has addressed the question of which populations face the greatest risk of harm from app-based digital identity systems and has elucidated the mechanisms through which such harm manifests. The objectives established for this review have been substantially achieved through systematic examination of fifty peer-reviewed sources spanning diverse disciplinary perspectives and geographical contexts.
The first objective—identifying demographic groups most vulnerable to exclusion—has been met through documentation of consistently elevated risks facing elderly individuals, migrants and refugees, persons with disabilities, those experiencing socioeconomic disadvantage, and minority communities. These populations share exposure to barriers arising from material resource limitations, capability constraints, and historical patterns of institutional marginalisation.
The second objective—analysing mechanisms of exclusion—has been achieved through identification of technical barriers relating to device access and biometric capability; design failures embedding assumptions about “normal” users; policy frameworks mandating digital compliance without accessible alternatives; and surveillance dynamics that deter engagement or generate privacy risks.
The third objective—evaluating consequences of exclusion—has been addressed through documentation of practical harms including service access denial, psychological impacts including shame and diminished autonomy, and structural effects including reinforcement of existing inequalities.
The fourth objective—examining intersectionality—has been met through analysis demonstrating that overlapping marginalised identities generate compounded vulnerabilities exceeding the sum of individual characteristics, requiring differentiated analytical and policy responses.
The fifth objective—assessing intervention evidence—has been partially achieved, revealing that whilst protective principles have been identified, systematic evaluation of intervention effectiveness remains underdeveloped.
The sixth objective—identifying research gaps—has been addressed through recognition of limited attention to long-term exclusion impacts, insufficient evaluation of intervention effectiveness, and underrepresentation of excluded individuals’ own perspectives in existing research.
The significance of these findings extends across academic, policy, and practical domains. Academically, this synthesis contributes to understanding digital inequality as technologically mediated social stratification requiring attention to design choices, policy frameworks, and structural power dynamics. For policymakers, the findings underscore the necessity of inclusion safeguards, accessible alternatives, and ongoing monitoring within digital identity implementation strategies. For system designers, the evidence demands attention to user diversity and rejection of assumptions about “standard” users that systematically disadvantage marginalised populations.
Future research should prioritise several areas identified as underdeveloped within current literature. Longitudinal studies tracking exclusion impacts over extended timeframes would illuminate cumulative effects inadequately captured by cross-sectional designs. Rigorous evaluation of inclusion interventions would provide evidence-based guidance for policy and design choices. Research centring the perspectives of excluded individuals themselves would enrich understanding currently dominated by institutional viewpoints. Finally, comparative analysis across national and technological contexts would identify factors shaping variation in exclusion patterns and intervention effectiveness.
The overarching conclusion emerging from this synthesis is that app-based digital identity systems, whilst offering genuine potential benefits, currently operate in ways that reinforce and amplify existing social inequalities. Realising the inclusive potential of digital identity technologies requires fundamental reorientation of design processes, policy frameworks, and implementation practices to centre the needs and circumstances of those most vulnerable to exclusion. Without such reorientation, digital identity systems risk becoming mechanisms through which technological change entrenches rather than ameliorates the marginalisation of society’s most disadvantaged members.
References
Addo, A. and Senyo, P., 2021. Advancing E-governance for development: Digital identification and its link to socioeconomic inclusion. *Government Information Quarterly*, 38(2), 101568. https://doi.org/10.1016/j.giq.2021.101568
Allmann, K. and Radu, R., 2022. Digital footprints as barriers to accessing e‐government services. *Global Policy*, 14(1), pp.48-59. https://doi.org/10.1111/1758-5899.13140
Asmar, A., Mariën, I. and Van Audenhove, L., 2022. No one-size-fits-all! Eight profiles of digital inequalities for customized inclusion strategies. *New Media & Society*, 24(1), pp.279-310. https://doi.org/10.1177/14614448211063182
Beduschi, A., 2019. Digital identity: Contemporary challenges for data protection, privacy and non-discrimination rights. *Big Data & Society*, 6(2), pp.1-6. https://doi.org/10.1177/2053951719855091
Berg, M., 2025. Refugee Integration in Germany: The Interplay of Othering, Digital Exclusion, and Identity Negotiation. *Journal of International Migration and Integration*, 26(1), pp.235-254. https://doi.org/10.1007/s12134-025-01238-0
De Oliveira Mariano, L., De Santos Moura, L., Mattos, R., De Almeida Bizarria, F. and Kind, L., 2025. Faces of exclusion: the “social,” the “digital” and “digital racism” in a decolonial critical essay. *Frontiers in Sociology*, 10, 1534313. https://doi.org/10.3389/fsoc.2025.1534313
Egard, H. and Hansson, K., 2021. The digital society comes sneaking in. An emerging field and its disabling barriers. *Disability & Society*, 38(5), pp.761-775. https://doi.org/10.1080/09687599.2021.1960275
Esposito, F., 2024. Digitalizzare la pubblica amministrazione: Il caso Spid tra pratiche digitali e nuove diseguaglianze [Digitalising public administration: The case of SPID between digital practices and new inequalities]. *Cambio. Rivista sulle Trasformazioni Sociali*. https://doi.org/10.36253/cambio-16092
Fang, Y., Shao, Y. and Wang, M., 2025. The involuntary experience of digital exclusion among older adults: A taxonomy and theoretical framework. *American Psychologist*. https://doi.org/10.1037/amp0001502
Gangadharan, S., 2017. The downside of digital inclusion: Expectations and experiences of privacy and surveillance among marginal Internet users. *New Media & Society*, 19(4), pp.597-615. https://doi.org/10.1177/1461444815614053
Ge, H., Li, J., Hu, H., Feng, T. and Wu, X., 2025. Digital exclusion in older adults: A scoping review. *International Journal of Nursing Studies*, 168, p.105082. https://doi.org/10.1016/j.ijnurstu.2025.105082
Heeks, R., 2022. Digital inequality beyond the digital divide: conceptualizing adverse digital incorporation in the global South. *Information Technology for Development*, 28(4), pp.688-704. https://doi.org/10.1080/02681102.2022.2068492
Helsper, E., 2017. The Social Relativity of Digital Exclusion: Applying Relative Deprivation Theory to Digital Inequalities. *Communication Theory*, 27(3), pp.223-242. https://doi.org/10.1111/comt.12110
Hundal, H. and Chaudhuri, B., 2020. Digital Identity and Exclusion in Welfare: Notes from the Public Distribution System in Andhra Pradesh and Karnataka. *Proceedings of the 2020 International Conference on Information and Communication Technologies and Development*. https://doi.org/10.1145/3392561.3397583
Karizat, N., Delmonaco, D., Eslami, M. and Andalibi, N., 2021. Algorithmic Folk Theories and Identity: How TikTok Users Co-Produce Knowledge of Identity and Engage in Algorithmic Resistance. *Proceedings of the ACM on Human-Computer Interaction*, 5(CSCW2), pp.1-44. https://doi.org/10.1145/3476046
Kemppainen, L., Kemppainen, T., Kouvonen, A., Shin, Y., Lilja, E., Vehko, T. and Kuusio, H., 2023. Electronic identification (e-ID) as a socio-technical system moderating migrants’ access to essential public services – The case of Finland. *Government Information Quarterly*, 40(4), p.101839. https://doi.org/10.1016/j.giq.2023.101839
Krishna, S., 2020. Digital identity, datafication and social justice: understanding Aadhaar use among informal workers in south India. *Information Technology for Development*, 27(1), pp.67-90. https://doi.org/10.1080/02681102.2020.1818544
Liu, B., 2020. The Impact of Intersectionality of Multiple Identities on the Digital Health Divide, Quality of Life and Loneliness amongst Older Adults in the UK. *British Journal of Social Work*, 51(8), pp.2927-2949. https://doi.org/10.1093/bjsw/bcaa149
Martin, A. and Taylor, L., 2020. Exclusion and inclusion in identification: regulation, displacement and data justice. *Information Technology for Development*, 27(1), pp.50-66. https://doi.org/10.1080/02681102.2020.1811943
Masiero, S., 2023. Digital Identity Platforms: A Data Justice Perspective. *Proceedings of the 56th Hawaii International Conference on System Sciences*, pp.4433-4442. https://doi.org/10.24251/hicss.2023.540
Masiero, S., 2023. Digital identity as platform-mediated surveillance. *Big Data & Society*, 10(1). https://doi.org/10.1177/20539517221135176
Masiero, S. and Arvidsson, V., 2021. Degenerative outcomes of digital identity platforms for development. *Information Systems Journal*, 31(6), pp.903-928. https://doi.org/10.1111/isj.12351
Park, S. and Humphry, J., 2019. Exclusion by design: intersections of social, digital and data exclusion. *Information, Communication & Society*, 22(7), pp.934-953. https://doi.org/10.1080/1369118x.2019.1606266
Ragnedda, M., Ruiu, M. and Addeo, F., 2022. The self-reinforcing effect of digital and social exclusion: The inequality loop. *Telematics and Informatics*, 72, p.101852. https://doi.org/10.1016/j.tele.2022.101852
Rawat, S., Vashista, R., Baruah, D., Dange, A. and Boyer, A., 2020. Layers of Marginality: An Exploration of Visibility, Impressions, and Cultural Context on Geospatial Apps for Men Who Have Sex With Men in Mumbai, India. *Social Media + Society*, 6(2). https://doi.org/10.1177/2056305120913995
Robinson, L., Ragnedda, M. and Schulz, J., 2020. Digital inequalities: contextualizing problems and solutions. *Journal of Information, Communication and Ethics in Society*, 18(3), pp.323-327. https://doi.org/10.1108/jices-05-2020-0064
Sannon, S. and Forte, A., 2022. Privacy Research with Marginalized Groups. *Proceedings of the ACM on Human-Computer Interaction*, 6(CSCW2), pp.1-33. https://doi.org/10.1145/3555556
Schoemaker, E., Baslan, D., Pon, B. and Dell, N., 2020. Identity at the margins: data justice and refugee experiences with digital identity systems in Lebanon, Jordan, and Uganda. *Information Technology for Development*, 27(1), pp.13-36. https://doi.org/10.1080/02681102.2020.1785826
Schoemaker, E., Martin, A. and Weitzberg, K., 2023. Digital Identity and Inclusion: Tracing Technological Transitions. *Georgetown Journal of International Affairs*, 24(1), pp.36-45. https://doi.org/10.1353/gia.2023.a897699
Schou, J. and Pors, A., 2018. Digital by default? A qualitative study of exclusion in digitalised welfare. *Social Policy & Administration*, 53(3), pp.464-477. https://doi.org/10.1111/spol.12470
Seifert, A., 2020. The Digital Exclusion of Older Adults during the COVID-19 Pandemic. *Journal of Gerontological Social Work*, 63(6-7), pp.674-676. https://doi.org/10.1080/01634372.2020.1764687
Siad, R. and Sagar, A., 2025. AI-Powered Digital Identity Systems and the New Digital Divide: The Case of World. *Emerging Media*, 3(2), pp.391-400. https://doi.org/10.1177/27523543251340713
Sieck, C., Sheon, A., Ancker, J., Castek, J., Callahan, B. and Siefer, A., 2021. Digital inclusion as a social determinant of health. *NPJ Digital Medicine*, 4, p.52. https://doi.org/10.1038/s41746-021-00413-8
Supangkat, S., Firmansyah, H., Rizkia, I. and Kinanda, R., 2025. Challenges in Implementing Cross-Border Digital Identity Systems for Global Public Infrastructure: A Comprehensive Analysis. *IEEE Access*, 13, pp.42083-42098. https://doi.org/10.1109/access.2025.3547373
Tsatsou, P., 2022. Editor’s introduction. *New Media & Society*, 24(2), pp.271-278. https://doi.org/10.1177/14614448211063175
Zhu, R., Yu, X. and Krever, R., 2024. The Double Burden: The Digital Exclusion and Identity Crisis of Elderly Patients in Rural China. *Media and Communication*, 12, p.8106. https://doi.org/10.17645/mac.8106
