## Abstract
This literature synthesis examines whether artificial intelligence (AI) triage and assessment tools can meaningfully reduce patient harm in emergency departments experiencing “corridor care” without concomitant increases in physical capacity. Drawing upon systematic reviews, scoping analyses, and prospective implementation studies, the synthesis evaluates the benefits, limitations, and risks of AI deployment in chronically overcrowded clinical environments. Evidence indicates that AI triage demonstrates superior risk stratification compared with traditional methods, achieving areas under the receiver operating characteristic curve (AUC) exceeding 0.80 for high-acuity outcomes and reducing mis-triage rates. Selected real-world implementations demonstrate improved hard outcomes, including reduced mortality and improved functional status after intracranial haemorrhage, achieved through faster recognition and workflow prioritisation rather than added capacity. However, most evidence derives from retrospective or small prospective studies with minimal evaluation under severe overcrowding conditions. Critical limitations include unpredictable algorithmic errors, bias, workload redistribution, and poor adaptation to low-resource contexts. The synthesis concludes that whilst AI triage offers modest harm reduction through improved prioritisation, no robust evidence supports technological solutions alone offsetting structural under-capacity risks. Effective implementation requires integration with adequate staffing, infrastructure investment, and robust governance frameworks.
## Introduction
The phenomenon of “corridor care”—where patients receive treatment in hallways, waiting areas, and other non-clinical spaces due to insufficient bed capacity—has emerged as one of the most pressing challenges confronting modern healthcare systems. This practice, once considered an exceptional occurrence during periods of extreme demand, has become normalised across emergency departments in the United Kingdom and internationally, representing a fundamental failure to match healthcare infrastructure with population need (Royal College of Emergency Medicine, 2023). The implications extend beyond patient dignity concerns to encompass genuine clinical risks, including delayed assessment, missed deterioration, increased mortality, and compromised infection control.
Within this context of chronic overcrowding, artificial intelligence technologies have been proposed as potential solutions to optimise existing resources. Proponents argue that AI-driven triage and assessment tools could enhance patient safety by improving risk stratification, accelerating recognition of time-critical conditions, and enabling more efficient allocation of limited clinical attention. Such arguments appeal to healthcare administrators and policymakers seeking solutions that do not require substantial capital investment in physical infrastructure or ongoing expenditure on additional workforce capacity.
However, the proposition that technological interventions can meaningfully substitute for fundamental capacity deficits warrants rigorous critical examination. The question of whether AI triage genuinely reduces harm without added capacity, or merely redistributes risk whilst providing political cover for continued under-investment, carries significant implications for healthcare policy, resource allocation, and patient safety. This matters not only academically but practically, as decisions made regarding AI deployment in overcrowded emergency departments will directly affect patient outcomes and potentially influence future infrastructure investment priorities.
The tension between technological optimism and structural necessity reflects broader debates within healthcare systems research regarding the appropriate balance between innovation and foundational resource adequacy. Understanding the true capabilities and limitations of AI triage in corridor care settings is essential for informed decision-making by clinicians, administrators, and policymakers alike.
## Aim and objectives
The primary aim of this synthesis is to critically evaluate whether artificial intelligence triage and assessment technologies can meaningfully reduce patient harm in emergency departments experiencing corridor care conditions without corresponding increases in physical capacity.
To achieve this aim, the following specific objectives guide the analysis:
1. To synthesise current evidence regarding the performance of AI triage systems in improving risk stratification and patient outcomes within emergency department settings.
2. To examine the limitations and potential risks associated with AI deployment in chronically overcrowded, resource-constrained clinical environments.
3. To evaluate the contextual factors and implementation requirements that influence whether AI triage tools deliver genuine harm reduction.
4. To assess whether technological interventions alone can meaningfully offset the clinical risks inherent in structural under-capacity.
5. To identify the governance, workforce, and infrastructure conditions necessary for AI triage to contribute positively to patient safety in corridor care contexts.
## Methodology
This study employs a literature synthesis methodology, integrating findings from systematic reviews, scoping analyses, narrative reviews, and prospective implementation studies to address the research question. The synthesis draws upon peer-reviewed publications identified through structured database searches, focusing on studies examining AI applications in emergency department triage and patient safety contexts.
The methodological approach recognises that the research question requires integration of evidence from multiple domains: clinical informatics, emergency medicine, patient safety science, health services research, and bioethics. Accordingly, the synthesis adopts an integrative rather than purely systematic approach, enabling incorporation of diverse study designs and evidence types whilst maintaining analytical rigour.
Source materials included systematic and scoping reviews examining AI triage performance (Arab and Moosa, 2025; Yi, Baik and Baek, 2024; Kim, Nam and Lee, 2025), narrative reviews of AI applications in emergency medicine (Farrokhi et al., 2025; Di Sarno et al., 2024), scoping reviews of AI-related patient safety concerns (Bates et al., 2021; Botha et al., 2024; De Micco et al., 2025), implementation studies providing outcome data (Kotovich et al., 2023), and ethical analyses examining risk frameworks for AI deployment (Nord-Bronzyk et al., 2025; Classen, Longhurst and Thomas, 2023).
Quality appraisal considered study design, sample characteristics, outcome measures, and relevance to the specific context of overcrowded emergency departments. Particular attention was paid to whether studies addressed system-level outcomes under conditions of severe resource constraint, given the specific focus on corridor care settings.
The synthesis was structured thematically, grouping findings according to demonstrated benefits, identified limitations and risks, and necessary implementation conditions. This approach facilitates clear presentation of evidence whilst enabling critical analysis of the extent to which current research addresses the specific question of AI effectiveness in chronically overcrowded environments.
## Literature review
### Performance characteristics of AI triage systems
Contemporary evidence consistently demonstrates that AI triage systems achieve superior risk stratification compared with traditional nurse-led triage protocols. Multiple systematic and scoping reviews report areas under the receiver operating characteristic curve (AUC) exceeding 0.80 for prediction of high-acuity outcomes, including intensive care admission, critical interventions, and mortality (Arab and Moosa, 2025; Yi, Baik and Baek, 2024; Kim, Nam and Lee, 2025). These performance metrics suggest that AI algorithms can identify patients requiring urgent intervention with greater accuracy than conventional triage scoring systems.
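To make the reported discrimination metric concrete, the sketch below computes an AUC through its rank-based (Mann-Whitney) interpretation: the probability that a randomly chosen high-acuity patient receives a higher risk score than a randomly chosen low-acuity one. The scores and outcome labels are entirely hypothetical and serve only to illustrate what an AUC above 0.80 quantifies.

```python
def auc(scores_pos, scores_neg):
    """Probability that a randomly chosen high-acuity patient is scored
    above a randomly chosen low-acuity one (ties count as half)."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical model risk scores, grouped by the outcome that actually occurred
high_acuity = [0.91, 0.82, 0.75, 0.60]          # e.g. ICU admission occurred
low_acuity  = [0.70, 0.45, 0.30, 0.20, 0.10]    # discharged without intervention

print(auc(high_acuity, low_acuity))
```

An AUC of 1.0 would mean every high-acuity patient outranks every low-acuity one; 0.5 is chance-level ordering, the benchmark against which the reviewed triage models are compared.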
The integrative systematic review conducted by Arab and Moosa (2025) synthesised evidence from emergency department AI triage implementations, concluding that machine learning approaches demonstrated consistent advantages in discriminating between patients requiring immediate attention and those suitable for delayed assessment. Similarly, the systematic review of prospective studies by Yi, Baik and Baek (2024) reported that AI application in triage settings reduced the proportion of mis-triaged patients, potentially decreasing both under-triage of seriously ill patients and over-triage of lower-acuity presentations.
Importantly, some real-world implementation evidence extends beyond process measures to demonstrate improved hard clinical outcomes. Kotovich et al. (2023) evaluated outcomes following one year of AI implementation for detection of intracranial haemorrhage, finding that AI prioritisation was associated with lower 30-day and 120-day mortality alongside improved functional status. Crucially, these benefits were achieved primarily through faster recognition and workflow prioritisation rather than through provision of additional beds or staff, suggesting that algorithmic optimisation of existing resources can yield meaningful clinical improvements.
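Mechanistically, this kind of benefit arises from re-ordering a worklist rather than adding readers or beds. The minimal sketch below promotes AI-flagged studies to the front of a radiology reading queue while preserving arrival order among unflagged studies; the study identifiers and flags are hypothetical, not drawn from the Kotovich et al. implementation.

```python
import heapq

# Each entry: (priority, arrival_order, study_id); lower priority is read first.
# An AI flag for suspected intracranial haemorrhage assigns priority 0.
worklist = []
arrivals = [("CT-001", False), ("CT-002", False), ("CT-003", True), ("CT-004", False)]
for order, (study_id, ai_flagged) in enumerate(arrivals):
    priority = 0 if ai_flagged else 1
    heapq.heappush(worklist, (priority, order, study_id))

# Pop in priority order: the flagged study jumps the queue,
# all other studies retain first-come-first-served ordering.
reading_order = [heapq.heappop(worklist)[2] for _ in range(len(worklist))]
print(reading_order)
```

The arrival-order component of each tuple acts as a tie-breaker, which is what keeps the re-ordering limited to the flagged cases.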
### Early detection and diagnostic enhancement
Beyond triage prioritisation, AI systems demonstrate potential for improving early detection of clinical deterioration and reducing diagnostic error. Scoping and narrative reviews examining AI applications across patient safety domains conclude that appropriately implemented systems can enhance identification of decompensating patients, flag potential diagnostic errors, and alert clinicians to emerging harms when embedded effectively within clinical workflows and supplied with high-quality data (Bates et al., 2021; Classen, Longhurst and Thomas, 2023; De Micco et al., 2025).
The scoping review by Bates et al. (2021) identified multiple domains where AI demonstrates potential safety benefits, including medication error prevention, diagnostic support, and deterioration prediction. These applications share a common mechanism: AI systems can process larger volumes of data more rapidly than human clinicians, potentially identifying patterns indicative of adverse trajectories before they become clinically apparent.
In paediatric emergency medicine specifically, Di Sarno et al. (2024) reviewed AI applications and identified promising performance in risk stratification for conditions where early intervention significantly affects outcomes. The authors noted that AI tools could augment clinical decision-making by providing additional data synthesis capabilities, though they emphasised that such tools function most effectively as decision support rather than autonomous systems.
### Limitations of evidence quality and generalisability
Despite promising performance characteristics, the evidence base for AI triage suffers from significant methodological limitations that constrain confidence in generalisability to corridor care contexts. Multiple reviews observe that most emergency department AI research employs retrospective designs or small-scale prospective implementations, with very limited evaluation of system-level outcomes under conditions of severe overcrowding (Arab and Moosa, 2025; Yi, Baik and Baek, 2024; Kim, Nam and Lee, 2025; Farrokhi et al., 2025).
This limitation is particularly consequential for understanding AI effectiveness in corridor care settings. Algorithms trained and validated on data from adequately resourced departments may perform differently when deployed in environments characterised by chronic overcrowding, staff exhaustion, infrastructure degradation, and compromised workflows. The scoping review by Kim, Nam and Lee (2025) explicitly noted the absence of robust evidence regarding AI performance in resource-constrained settings, identifying this as a critical gap requiring prospective investigation.
Furthermore, the outcome measures employed in most AI triage studies focus on discrimination accuracy and process metrics rather than patient-centred outcomes or system-level harm reduction. Whilst improved AUC values suggest better risk stratification capability, translating this capability into actual harm reduction requires effective integration with clinical workflows, appropriate response capacity, and sustained implementation fidelity—factors rarely examined in published research.
### Workload and efficiency effects
Contrary to intuitive expectations that AI triage would reduce clinician workload, reviews of online and AI triage tools report limited efficiency gains and sometimes increased or redistributed workload. This finding has particular relevance for corridor care contexts, where any intervention consuming additional staff time without providing corresponding capacity benefits may worsen rather than improve overall system function.
Gottliebsen and Petersson (2020) conducted a literature review of patient-operated intelligent primary care triage tools, finding limited evidence of benefits and identifying concerns regarding efficiency impacts. Risk-averse algorithmic design, intended to minimise the danger of under-triage, can produce systematic over-triage that increases demand on urgent care services. This phenomenon potentially worsens overcrowding by directing patients to emergency departments who might otherwise have been appropriately managed in lower-acuity settings.
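The over-triage mechanism is a direct consequence of threshold choice. In the hypothetical sketch below, lowering the referral threshold does eliminate missed high-acuity cases, but beyond that point each further reduction only inflates unnecessary emergency department referrals; all scores and outcomes are illustrative.

```python
# Hypothetical risk scores with true outcomes (1 = genuinely high acuity)
patients = [(0.92, 1), (0.81, 1), (0.64, 0), (0.55, 1),
            (0.47, 0), (0.33, 0), (0.21, 0), (0.12, 0)]

def triage_rates(threshold):
    """Return (missed high-acuity cases, unnecessary ED referrals)."""
    sent_to_ed   = [(s, y) for s, y in patients if s >= threshold]
    under_triage = sum(y for s, y in patients if s < threshold)
    over_triage  = sum(1 - y for s, y in sent_to_ed)
    return under_triage, over_triage

for t in (0.7, 0.5, 0.3):
    u, o = triage_rates(t)
    print(f"threshold={t}: missed high-acuity={u}, unnecessary referrals={o}")
```

A risk-averse design fixes the threshold low enough that under-triage is near zero, accepting the resulting over-triage; in this toy data, dropping the threshold from 0.5 to 0.3 adds referrals without preventing any further missed cases.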
The systematic scoping review by Ciecierski-Holmes et al. (2022) examined AI applications in low- and middle-income country healthcare systems, identifying workload redistribution as a consistent implementation challenge. Rather than reducing overall system burden, AI tools frequently shifted work between different staff categories or required additional oversight activities that offset anticipated efficiency gains. Dawoodbhoy et al. (2021) reported similar findings in their review of AI applications for patient flow in acute mental health settings, noting that realised benefits depended heavily on implementation context and supporting infrastructure.
### Risks and potential harms in resource-constrained settings
Patient safety literature increasingly recognises that AI deployment introduces novel risk categories alongside potential benefits. Safety-focused reviews highlight unpredictable algorithmic errors, embedded biases reflecting training data limitations, opaque decision logic, and the potential for both over-triage and under-triage that can add clinical risk if systems are not rigorously evaluated and continuously monitored (Da’costa et al., 2025; Botha et al., 2024; Classen, Longhurst and Thomas, 2023; Challen et al., 2019).
The scoping review by Botha et al. (2024) systematically examined perceived threats to patient rights and safety from healthcare AI, identifying algorithmic opacity and bias as central concerns. When AI systems produce recommendations through processes that clinicians cannot interpret or verify, appropriate error detection becomes problematic. In corridor care settings where clinical attention is already stretched thin, the cognitive burden of monitoring AI outputs for potential errors may exceed available capacity.
Challen et al. (2019) provided an early analysis of AI bias and clinical safety implications, noting that algorithms trained on historical data may perpetuate or amplify existing disparities in care. Patients from demographic groups underrepresented in training datasets, or those presenting atypically, face elevated risk of algorithmic misclassification. In overcrowded departments where clinicians have limited time to question or override AI recommendations, such biases may translate directly into differential harm.
The ethical analysis by Nord-Bronzyk et al. (2025) examined risk assessment frameworks for AI triage implementation, questioning how much additional risk is reasonable to accept when deploying novel technologies in already risky clinical environments. The authors argued that corridor care settings, where baseline risk levels are already unacceptably elevated, require particularly cautious approaches to AI deployment given the potential for algorithmic errors to compound existing hazards.
### Contextual factors affecting implementation success
Reviews examining AI implementation in diverse healthcare settings consistently emphasise that algorithm performance alone does not determine real-world effectiveness. Benefits depend critically on integration with staffing levels, physical infrastructure, governance arrangements, and organisational culture—factors that cannot be addressed through technology alone (Arab and Moosa, 2025; Da’costa et al., 2025; Kim, Nam and Lee, 2025; Classen, Longhurst and Thomas, 2023; Farrokhi et al., 2025).
Hosseini et al. (2023) conducted a scoping review of factors affecting AI implementation in emergency care, finding that overcrowded and low-resource settings reported mixed impacts on workflows, system reliability, and user acceptance. Many tools were poorly adapted to local context, having been developed and validated in well-resourced environments with different patient populations, workflow patterns, and infrastructure constraints. Successful implementation required substantial local adaptation and ongoing optimisation that resource-constrained settings may lack capacity to provide.
Ciecierski-Holmes et al. (2022) reached similar conclusions regarding AI deployment in low- and middle-income countries, identifying context-specific factors including infrastructure reliability, data quality, workforce digital literacy, and organisational readiness as critical determinants of implementation success. These findings suggest that AI tools developed in high-resource settings cannot simply be transferred to resource-constrained environments with expectation of equivalent performance.
### Governance and monitoring requirements
Ethical and safety-focused analyses emphasise that AI deployment in high-risk clinical environments requires ongoing evaluation, human-in-the-loop operation, and learning health system-style monitoring rather than one-off adoption (Classen, Longhurst and Thomas, 2023; Nord-Bronzyk et al., 2025; Hunter et al., 2023). These requirements have significant resource implications that may be difficult to satisfy in settings already experiencing severe capacity constraints.
Classen, Longhurst and Thomas (2023) argued that realising AI’s potential for patient safety improvement requires embedded monitoring systems capable of detecting performance degradation, identifying emerging biases, and triggering algorithm updates in response to changing patient populations or clinical contexts. Such capabilities demand dedicated analytical resources and governance infrastructure that extend well beyond initial technology procurement.
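One way to operationalise such embedded monitoring is a statistical control limit on a routinely collected safety metric. The sketch below flags possible performance degradation when recent mis-triage rates exceed a baseline control limit; the metric, the weekly rates, and the three-sigma limit are all illustrative assumptions rather than a validated monitoring design.

```python
from statistics import mean, stdev

def drift_alert(baseline, recent, k=3.0):
    """Flag when the recent mean mis-triage rate exceeds the baseline mean
    by more than k baseline standard deviations (a Shewhart-style limit;
    the choice of k is an illustrative assumption)."""
    limit = mean(baseline) + k * stdev(baseline)
    return mean(recent) > limit, limit

# Hypothetical weekly mis-triage rates before and after a shift in case mix
baseline_weeks = [0.08, 0.07, 0.09, 0.08, 0.07, 0.09]
recent_weeks   = [0.12, 0.14, 0.13]

alert, limit = drift_alert(baseline_weeks, recent_weeks)
print(alert)
```

An alert of this kind would trigger the review and possible retraining that the authors describe; the point of the sketch is that even this simplest version presupposes dedicated data collection and analytical capacity.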
The review of AI applications in trauma care by Hunter et al. (2023) similarly emphasised the importance of continuous monitoring and human oversight. Whilst acknowledging AI’s potential for improving triage and decision support, the authors cautioned against deployment without robust safety monitoring frameworks. In corridor care settings where clinical governance capacity is often already compromised by overcrowding pressures, establishing and maintaining such frameworks presents substantial challenges.
Mani and Albagawi (2024) examined AI’s role in emergency nursing interventions, concluding that effective implementation requires integration with broader workforce development and practice change initiatives. Technology alone could not substitute for adequate staffing or appropriate skills development; rather, AI functioned most effectively when supporting rather than replacing human clinical judgement.
## Discussion
The synthesised evidence presents a complex picture regarding AI triage effectiveness in corridor care contexts. Whilst the technology demonstrates clear capability for improved risk stratification and earlier detection of high-acuity conditions, fundamental questions remain regarding whether these capabilities translate into meaningful harm reduction when implemented in chronically overcrowded environments without additional physical or workforce capacity.
### Addressing the central research question
The evidence reviewed provides no robust support for the proposition that AI triage alone can offset the risks inherent in structural under-capacity. This conclusion emerges from several converging lines of evidence. First, the methodological limitations of existing research—predominantly retrospective designs and small prospective studies conducted in adequately resourced settings—mean that evidence regarding AI performance specifically under severe overcrowding conditions is essentially absent. Extrapolating from performance in typical emergency departments to performance in corridor care contexts requires assumptions that the evidence does not justify.
Second, the mechanisms through which AI triage produces benefit appear to require response capacity that corridor care settings may lack. Improved identification of high-acuity patients produces clinical benefit only if those patients subsequently receive appropriate intervention. When physical space, equipment, and clinical attention are all maximally committed, algorithmic prioritisation may simply re-order the queue without reducing aggregate harm. The positive outcomes demonstrated by Kotovich et al. (2023) following AI implementation for intracranial haemorrhage detection involved workflow prioritisation in settings with sufficient capacity to act on accelerated recognition—conditions not characteristic of true corridor care.
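The queue-reordering point can be made precise with a toy model: with a single assessment stream and equal assessment times, total waiting is invariant under any re-ordering, so prioritisation changes who waits, not how much waiting occurs in aggregate. Patient labels and the unit assessment time below are illustrative.

```python
def waits(order, service_time=1.0):
    """Single assessment stream: each patient waits for everyone ahead of them."""
    return {patient: i * service_time for i, patient in enumerate(order)}

fifo     = waits(["low-1", "high-1", "low-2", "high-2"])   # first come, first served
ai_first = waits(["high-1", "high-2", "low-1", "low-2"])   # algorithmic prioritisation

print(sum(fifo.values()), sum(ai_first.values()))  # aggregate waiting is identical
print(fifo["high-2"], ai_first["high-2"])          # but the high-acuity wait falls
```

Prioritisation is still clinically valuable when harm per unit of delay is higher for high-acuity patients, but the model illustrates why it cannot reduce total delay when service capacity itself is fixed.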
Third, the consistent finding that AI triage tools frequently increase rather than decrease workload has particular salience for resource-constrained settings. Risk-averse algorithmic design that systematically over-triages may actively worsen overcrowding by directing additional patients toward emergency departments. The efficiency gains that would be necessary for technology to compensate for capacity deficits are not apparent in the evidence reviewed.
### Potential for harm in overcrowded settings
The evidence suggests that AI deployment in corridor care contexts carries meaningful potential to introduce additional harm rather than reduce it. Algorithmic errors, which occur with all current AI systems, may be more consequential in settings where clinical attention for error detection and correction is scarce. Biases embedded in training data may disadvantage patient groups who are already at elevated risk due to overcrowding effects. Opaque decision logic may be accepted uncritically by exhausted clinicians who lack cognitive resources for appropriate scepticism.
Furthermore, the resource requirements for safe AI deployment—ongoing monitoring, regular validation, governance oversight, and technical maintenance—represent additional burdens on systems already operating beyond sustainable limits. Implementing AI without these supporting structures, as may be tempting in resource-constrained settings, increases the probability that algorithmic failures will produce patient harm.
### Necessary conditions for harm reduction
The reviewed literature consistently identifies conditions under which AI triage might contribute positively to patient safety: integration with adequate staffing levels, sufficient physical infrastructure, robust governance frameworks, and learning health system-style continuous monitoring. These conditions are precisely those absent in corridor care settings. The implication is that AI triage functions not as a substitute for capacity but as a tool that can optimise the use of adequate capacity when it exists.
This conclusion aligns with broader principles in patient safety science, which emphasise that technological interventions succeed within supportive systemic contexts and fail when deployed as isolated solutions to problems with structural origins. The evidence does not support AI triage as a means of making under-capacity safe; rather, it suggests AI as a potential enhancement for systems that have first addressed fundamental resource adequacy.
### Policy and practice implications
These findings carry significant implications for healthcare policy and emergency department management. Decisions to implement AI triage should not be framed as alternatives to capacity investment but rather as potential complements to adequately resourced systems. Policymakers and administrators should resist the attraction of technological solutions that promise to address overcrowding without the politically difficult decisions regarding infrastructure expansion and workforce investment.
For clinical settings already experiencing corridor care, the evidence suggests caution regarding AI deployment. Implementation should be accompanied by honest acknowledgement that technology cannot substitute for beds and staff, robust governance arrangements for monitoring and error detection, and continued advocacy for the capacity increases necessary to address fundamental safety concerns. Presenting AI as a solution to corridor care risks legitimising conditions that are inherently unsafe and should not be normalised.
### Limitations and uncertainties
This synthesis is subject to several limitations that affect confidence in its conclusions. The primary evidence base consists largely of secondary analyses (systematic and scoping reviews), which may inherit limitations from their constituent primary studies. The specific question of AI performance in corridor care settings has received minimal direct research attention, necessitating inference from studies conducted in different contexts. Publication bias may inflate reported AI performance metrics, whilst implementation challenges may be under-reported.
Additionally, AI technology continues to evolve rapidly, and evidence from even recent studies may not reflect current algorithmic capabilities. Future systems may achieve performance characteristics that address some identified limitations, though the fundamental tension between technological capability and structural capacity is unlikely to be resolved through algorithmic advancement alone.
## Conclusions
This synthesis addressed whether artificial intelligence triage and assessment technologies can meaningfully reduce patient harm in corridor care settings without added capacity. The evidence reviewed supports several conclusions that respond to the stated objectives.
First, AI triage systems demonstrate genuine capability for improved risk stratification compared with traditional methods, with consistent evidence of superior discrimination for high-acuity outcomes. This addresses the first objective regarding AI performance characteristics.
Second, significant limitations and risks attend AI deployment in resource-constrained settings, including inadequate evidence from overcrowded contexts, potential for increased workload, algorithmic errors and biases, and poor contextual adaptation. These limitations, addressing the second objective, substantially constrain confidence that demonstrated capabilities will translate into harm reduction under corridor care conditions.
Third, the contextual requirements for effective AI implementation—adequate staffing, infrastructure, governance, and monitoring—are precisely those absent in corridor care settings, suggesting that technology functions as an optimisation tool for adequate capacity rather than a substitute for it. This addresses the third objective regarding implementation requirements.
Fourth, and most centrally addressing the fourth objective, the evidence does not support the proposition that AI triage alone can meaningfully offset the clinical risks inherent in structural under-capacity. Improved prioritisation cannot compensate for the absence of beds, staff, and equipment necessary to provide appropriate care.
Fifth, addressing the final objective, the conditions necessary for AI to contribute positively to patient safety include the very capacity investments that AI is sometimes proposed to replace. Effective deployment requires integration with workforce, infrastructure, and governance improvements rather than substitution for them.
The significance of these conclusions extends beyond academic interest to inform consequential policy decisions. Healthcare systems must resist the temptation to view AI as a technological fix for problems that are fundamentally structural. Whilst AI triage may modestly reduce harm through improved prioritisation when implemented with appropriate supporting conditions, it cannot make corridor care safe. Addressing the harms of chronic overcrowding requires investment in physical capacity and workforce adequacy, with AI serving as a potential enhancement to—not replacement for—these foundational requirements.
Future research should prioritise prospective evaluation of AI triage performance specifically in overcrowded settings, examination of equity impacts across patient populations, and development of governance frameworks appropriate to resource-constrained contexts. Until such evidence becomes available, AI deployment in corridor care should be approached with appropriate caution and honest acknowledgement of its limitations.
## References
Arab, R. and Moosa, O., 2025. The role of AI in emergency department triage: An integrative systematic review. *Intensive & Critical Care Nursing*, 89, p.104058. https://doi.org/10.1016/j.iccn.2025.104058
Bates, D., Levine, D., Syrowatka, A., Kuznetsova, M., Craig, K., Rui, A., Jackson, G. and Rhee, K., 2021. The potential of artificial intelligence to improve patient safety: a scoping review. *NPJ Digital Medicine*, 4. https://doi.org/10.1038/s41746-021-00423-6
Botha, N., Segbedzi, C., Dumahasi, V., Maneen, S., Kodom, R., Tsedze, I., Akoto, L., Atsu, F., Lasim, O. and Ansah, E., 2024. Artificial intelligence in healthcare: a scoping review of perceived threats to patient rights and safety. *Archives of Public Health*, 82. https://doi.org/10.1186/s13690-024-01414-1
Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T. and Tsaneva-Atanasova, K., 2019. Artificial intelligence, bias and clinical safety. *BMJ Quality & Safety*, 28, pp.231-237. https://doi.org/10.1136/bmjqs-2018-008370
Ciecierski-Holmes, T., Singh, R., Axt, M., Brenner, S. and Barteit, S., 2022. Artificial intelligence for strengthening healthcare systems in low- and middle-income countries: a systematic scoping review. *NPJ Digital Medicine*, 5. https://doi.org/10.1038/s41746-022-00700-y
Classen, D., Longhurst, C. and Thomas, E., 2023. Bending the patient safety curve: how much can AI help? *NPJ Digital Medicine*, 6. https://doi.org/10.1038/s41746-022-00731-5
Da’costa, A., Teke, J., Origbo, J., Osonuga, A., Egbon, E. and Olawade, D., 2025. AI-driven triage in emergency departments: A review of benefits, challenges, and future directions. *International Journal of Medical Informatics*, 197, p.105838. https://doi.org/10.1016/j.ijmedinf.2025.105838
Dawoodbhoy, F., Delaney, J., Cecula, P., Yu, J., Peacock, I., Tan, J. and Cox, B., 2021. AI in patient flow: applications of artificial intelligence to improve patient flow in NHS acute mental health inpatient units. *Heliyon*, 7. https://doi.org/10.1016/j.heliyon.2021.e06993
De Micco, F., Di Palma, G., Ferorelli, D., De Benedictis, A., Tomassini, L., Tambone, V., Cingolani, M. and Scendoni, R., 2025. Artificial intelligence in healthcare: transforming patient safety with intelligent systems—A systematic review. *Frontiers in Medicine*, 11. https://doi.org/10.3389/fmed.2024.1522554
Di Sarno, L., Caroselli, A., Tonin, G., Graglia, B., Pansini, V., Causio, F., Gatto, A. and Chiaretti, A., 2024. Artificial Intelligence in Pediatric Emergency Medicine: Applications, Challenges, and Future Perspectives. *Biomedicines*, 12. https://doi.org/10.3390/biomedicines12061220
Farrokhi, M., Fallahian, A., Rahmani, E., Aghajan, A., Alipour, M., Khouzani, P., Hezarani, H., Sabzehie, H., Pirouzan, M., Pirouzan, Z., Dalvandi, B., Dalvandi, R., Doroudgar, P., Azimi, H., Moradi, F., Nozari, A., Sharifi, M., Ghorbani, H., Moghimi, S., Azarkish, F., Bolandi, S., Esfahani, H., Hosseinmirzaei, S., Niknam, A., Nikfarjam, F., Boroujeni, P., Noorbakhsh, M., Rahmani, P., Motlagh, F., Harati, K., Farrokhi, M., Talebi, S. and Lahijan, L., 2025. Current Applications, Challenges, and Future Directions of Artificial Intelligence in Emergency Medicine: A Narrative Review. *Archives of Academic Emergency Medicine*, 13. https://doi.org/10.22037/aaemj.v13i1.2712
Gottliebsen, K. and Petersson, G., 2020. Limited evidence of benefits of patient operated intelligent primary care triage tools: findings of a literature review. *BMJ Health & Care Informatics*, 27. https://doi.org/10.1136/bmjhci-2019-100114
Hosseini, M., Hosseini, S., Qayumi, K., Ahmady, S. and Koohestani, H., 2023. The Aspects of Running Artificial Intelligence in Emergency Care; a Scoping Review. *Archives of Academic Emergency Medicine*, 11. https://doi.org/10.22037/aaem.v11i1.1974
Hunter, O., Perry, F., Salehi, M., Bandurski, H., Hubbard, A., Ball, C. and Hameed, M., 2023. Science fiction or clinical reality: a review of the applications of artificial intelligence along the continuum of trauma care. *World Journal of Emergency Surgery*, 18. https://doi.org/10.1186/s13017-022-00469-1
Kim, S., Nam, S. and Lee, J., 2025. Artificial intelligence in emergency department triage: a scoping review on workload reduction and patient safety enhancement. *Journal of Korean Biological Nursing Science*. https://doi.org/10.7586/jkbns.25.045
Kotovich, D., Twig, G., Itsekson-Hayosh, Z., Klug, M., Simon, A., Yaniv, G., Konen, E., Tau, N., Raskin, D., Chang, P. and Orion, D., 2023. The impact on clinical outcomes after 1 year of implementation of an artificial intelligence solution for the detection of intracranial hemorrhage. *International Journal of Emergency Medicine*, 16. https://doi.org/10.1186/s12245-023-00523-y
Mani, Z. and Albagawi, B., 2024. AI frontiers in emergency care: the next evolution of nursing interventions. *Frontiers in Public Health*, 12. https://doi.org/10.3389/fpubh.2024.1439412
Nord-Bronzyk, A., Savulescu, J., Ballantyne, A., Braunack-Mayer, A., Krishnaswamy, P., Lysaght, T., Ong, M., Liu, N., Menikoff, J., Mertens, M. and Dunn, M., 2025. Assessing Risk in Implementing New Artificial Intelligence Triage Tools—How Much Risk is Reasonable in an Already Risky World? *Asian Bioethics Review*, 17, pp.187-205. https://doi.org/10.1007/s41649-024-00348-8
Royal College of Emergency Medicine, 2023. *Corridor care position statement*. London: Royal College of Emergency Medicine.
Yi, N., Baik, D. and Baek, G., 2024. The effects of applying artificial intelligence to triage in the emergency department: A systematic review of prospective studies. *Journal of Nursing Scholarship*, 57, pp.105-118. https://doi.org/10.1111/jnu.13024
