AI in the ER: Opportunities, Challenges, & Ethical Questions

by Grace Gibson

Graphic design by Josip Petrusa

Artificial intelligence (AI) is a rapidly advancing field with many proposed applications in healthcare settings. In high-stress specialties like emergency medicine, AI-based solutions have the potential to transform patient experiences by raising the standard of care and shortening emergency response times.1,2 However, AI solutions have their limitations and raise numerous concerns when introduced into healthcare settings.1,3

“AI” has become an umbrella term for many different technologies, but in broad terms, AI tools integrate and process data to produce an output. The subset of AI discussed here, generative AI (GenAI), uses input data to produce new text or images.4 These tools, many of which fall under the category of large language models (LLMs), are trained on huge volumes of information from past cases, allowing them to output reasonable predictions for similar scenarios in the future.1

In ER environments that demand immediate and accurate decision-making, AI solutions may augment the capabilities of healthcare providers, improving their ability to care for patients.3,4 In one possible use, AI tools could improve the initial triage performed when a patient enters the ER. These assessments often use the Emergency Severity Index (ESI) to standardize evaluations, but ESI values may be inaccurate due to provider fatigue or limited time.1 Additionally, demographic factors like race and gender can impact perceptions of a patient’s condition.5 Theoretically, an AI model could replace human-directed ESI assessment: healthcare providers would input the patient’s health records and physiological data, and the model would generate a suggested ESI value.1,3
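
To make this idea concrete, the sketch below shows one way such a triage aid might be framed as a classification problem: historical cases with physiological measurements and clinician-assigned ESI values train a model, which then suggests an ESI value for a new arrival. The features, data, and model choice are hypothetical, invented purely for illustration, and do not describe any existing tool.

    # A minimal, illustrative sketch of ESI suggestion framed as supervised classification.
    # All feature names, values, and labels are synthetic and hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_patients = 500

    # Hypothetical physiological features for past patients.
    features = np.column_stack([
        rng.normal(90, 20, n_patients),    # heart rate (bpm)
        rng.normal(120, 25, n_patients),   # systolic blood pressure (mmHg)
        rng.normal(18, 4, n_patients),     # respiratory rate (breaths/min)
        rng.normal(96, 3, n_patients),     # oxygen saturation (%)
        rng.integers(18, 95, n_patients),  # age (years)
    ])
    # Synthetic clinician-assigned ESI labels (1 = most urgent, 5 = least urgent).
    esi_labels = rng.integers(1, 6, n_patients)

    # Train on the historical cases, then suggest an ESI value for a new arrival.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(features, esi_labels)

    new_patient = np.array([[130, 85, 28, 89, 72]])  # hypothetical vitals for one patient
    print("Suggested ESI level:", model.predict(new_patient)[0])

Because the labels here are random, this particular model learns nothing clinically meaningful; the point is only to show the shape of the workflow.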

AI models can also be used in the diagnostic process. An AI tool that compares a patient’s scans against a comprehensive dataset could flag potentially concerning findings without the need for human evaluation.1 Alternatively, a tool trained to identify the common presentations of specific conditions, such as different types of stroke, could recognize medical emergencies in their early stages based on a patient’s test results. Some emergency departments already use this type of AI tool for early stroke detection, with applications to cardiac arrhythmias and traumatic brain injuries in development.2 In the future, LLMs may also play a role in directing ambulance dispatch to patients and transcribing doctor-patient interactions as they occur, though these uses have not yet been implemented.1,2,3,4
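
As a rough illustration of the second approach, the sketch below trains a simple classifier to flag a possible stroke from a few numeric test results and outputs a probability that could serve as an early-warning signal. The inputs, synthetic labels, and model are invented for illustration and are not drawn from any deployed system.

    # A hypothetical sketch of condition-specific screening: flagging possible stroke
    # from a handful of numeric test results. All data here are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n_cases = 400

    # Invented inputs: systolic blood pressure, blood glucose, and an automated
    # facial-asymmetry score from a bedside assessment.
    results = np.column_stack([
        rng.normal(140, 30, n_cases),   # systolic blood pressure (mmHg)
        rng.normal(6.0, 2.0, n_cases),  # blood glucose (mmol/L)
        rng.uniform(0, 1, n_cases),     # facial-asymmetry score (0 to 1)
    ])
    # Synthetic historical labels: 1 = stroke confirmed, 0 = no stroke.
    labels = (results[:, 2] > 0.7).astype(int)

    # Standardize the features, then fit a logistic regression classifier.
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(results, labels)

    # For a newly arrived patient, output a probability that clinicians could treat
    # as an early-warning flag rather than a diagnosis.
    incoming = np.array([[165, 7.2, 0.85]])
    print("Estimated stroke probability:", round(model.predict_proba(incoming)[0, 1], 2))

A production tool would of course be trained on imaging or validated clinical measurements rather than made-up numbers; the sketch only shows how a condition-specific model turns test results into a flag.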

Presented through examples like these, AI tools can appear to be wholly positive for emergency medicine; however, institutions should remain cautious about AI for many reasons.4 For example, if a model is trained on inaccurate or biased datasets, its outputs will mirror the deficiencies in the original data.3 This may be the case for LLMs trained on data from a particular set of institutions, which might not be representative of other healthcare settings that differ in institutional policies, patient demographics, and more.3 Additionally, human error and prejudice may be present in AI training data, introducing the possibility of inaccurate or biased output.1,3,4

Healthcare AI tools can also raise questions of privacy and security for patients. In some cases, LLM training involves releasing personal health information to the third-party companies that develop AI tools, a practice many patients might want to avoid, especially given how lightly the use of personal data in LLM training is regulated.6 In-house development and training can ensure that patient data is not released to third-party companies, but even this may not fully allay patients’ concerns about the security of new AI tools.3,6

The lack of AI legislation also presents liability challenges. Like human physicians, AI tools can make mistakes, producing incorrect results or substandard information; however, medical malpractice legislation does not account for AI applications, and an AI system cannot be held liable for harm it may cause. Until legislation governing GenAI tools in healthcare develops significantly, implementing tools like these introduces the risk that patients harmed by an inaccurate AI output may be unable to pursue legal action.7

There are steps institutions can take to address these issues before introducing GenAI tools: training datasets can be carefully selected to ensure generalizability, data protection measures can be strengthened, and LLMs can be developed internally.8 However, beyond these concerns specific to healthcare settings, there are broader ethical questions that apply to any use of GenAI.

One concern is the impact of AI on the job market. Expanding the use of AI fuels widespread worker displacement, leaving many people jobless across industries.9 However, the implementation of AI does not appear to balance this job loss with either equivalent productivity gains or the creation of high-quality jobs.10 Furthermore, replacing skilled human work contributes to a general decline in worker skill levels. Increased automation of the workforce may improve performance in some sectors, but it comes at the cost of permanent worker displacement and the erosion of skilled human labor, harms that the projected benefits of automation may not offset.9,10

Another concern about increasing GenAI use is the associated environmental damage. The high computing power required by AI data centers has driven a global increase in electricity use, undercutting earlier goals of improving efficiency and reducing emissions.11,12 AI’s growing energy demands have also necessitated the rapid construction of new data centers, a process that can have serious consequences for the communities in which they’re built.12 Residents living near these centers face higher electricity costs, lower air quality, and exposure to hazardous waste products.13

Emergency departments looking to implement AI solutions must therefore grapple with an overarching ethical question: do the benefits of using an AI tool outweigh the economic and environmental costs associated with widespread GenAI adoption? In some situations, the answer might be “yes”: there are significant, data-validated benefits to some applications of GenAI in the ER, including improvements to triage and early stroke detection.1,2 However, healthcare institutions should avoid indiscriminately promoting AI solutions, especially if they’re unable to rectify issues like inaccurate training data or violations of patient privacy.11 Moreover, developing new GenAI tools without sufficient justification for their use contributes to an unnecessary saturation of AI, a trend with potentially disastrous consequences for our society, our environment, and, by extension, our health.11 As we envision how GenAI might shape emergency medicine, we should aim to embrace its transformative and life-saving uses while remaining wary of introducing AI solutions without weighing their wider ethical implications.

References

  1. Kachman MM, Brennan I, Oskvarek JJ, et al. How artificial intelligence could transform emergency care. Am J Emerg Med. 2024 Jul;81:40-46.
  2. Piliuk K, Tomforde S. Artificial intelligence in emergency medicine. A systematic literature review. Int J Med Inform. 2023 Dec;180:105274.
  3. Chenais G, Lagarde E, Gil-Jardiné C. Artificial Intelligence in Emergency Medicine: Viewpoint of Current Applications and Foreseeable Opportunities and Challenges. J Med Internet Res. 2023 May 23;25:e40031.
  4. Zhang K, Meng X, Yan X, et al. Revolutionizing Health Care: The Transformative Impact of Large Language Models in Medicine. J Med Internet Res. 2025 Jan 7;27:e59069.
  5. Sax DR, Warton EM, Mark DG, et al. Evaluation of Version 4 of the Emergency Severity Index in US Emergency Departments for the Rate of Mistriage. JAMA Netw Open. 2023;6(3):e233404.
  6. Da Silva M, Flood CM, Goldenberg A, et al. Regulating the Safety of Health-Related Artificial Intelligence. Healthc Policy. 2022 May;17(4):63-77.
  7. Shumway DO, Hartman HJ. Medical malpractice liability in large language model artificial intelligence: legal review and policy recommendations. J Osteopath Med. 2024 Jan 31;124(7):287-290.
  8. Labkoff S, Oladimeji B, Kannry J, et al. Toward a responsible future: recommendations for AI-enabled clinical decision support. J Am Med Inform Assoc. 2024 Nov 1;31(11):2730-2739.
  9. Holzer HJ. Job Loss, Displacement, and AI: Anticipating and Preventing Their Costs. In: Orrell B, editor. New Approaches to Characterize Industries: AI as a Framework and a Use Case. Washington, DC: American Enterprise Institute; 2025. p. 59–62.
  10. Tyson LD, Zysman J. Automation, AI & Work. Daedalus. 2022;151(2):256–271.
  11. Bashir N, Donti P, Cuff J, Sroka S, Ilic M, Sze V, et al. The Climate and Sustainability Implications of Generative AI [Internet]. An MIT Exploration of Generative AI; 2024 Mar 27.
  12. Crawford K. Generative AI’s environmental costs are soaring—and mostly secret [Internet]. Nature. 2024 Feb 20;626:693.
  13. Hosseini M, Gao P, Vivas-Valencia C. A social-environmental impact perspective of generative artificial intelligence. Environ Sci Ecotechnol. 2025;23.