
Physiology News Magazine
Promises and pitfalls for artificial intelligence in healthcare
How AI can support or undercut quality, accessibility, and equity in health innovation, care, and management
Features

John P Nelson
Postdoctoral Research Fellow, Georgia Institute of Technology

Alexander Stevens
PhD candidate, University of California, Los Angeles
Artificial intelligence in healthcare: Hope and hype
Recent years have seen increasing excitement about the potential for artificial intelligence (AI) to improve the quality and accessibility of healthcare (Davenport and Kalakota, 2019). While these possibilities are real and important, AI also bears substantial potential to reproduce and exacerbate existing problems in healthcare systems. In this article, we provide a high-level overview of AI’s potential applications in healthcare innovation, healthcare delivery, and healthcare management, as well as ways in which AI could contribute to healthcare problems including unreliability, misinformation, exploitation of patients and healthcare workers, and healthcare inequities. Like other emerging health technologies, AI systems are flexible; their outcomes will depend upon the interests and values that shape them. If healthcare AI is to respect and advance public values, healthcare workers, patients, and citizens must be empowered to guide AI’s goals, development, and use.
Types of artificial intelligence: A thumbnail sketch
The European Union’s High-Level Expert Group on Artificial Intelligence defines AI as “systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals” (High-Level Expert Group, 2019). Within this definition, there are several ways to typologise AI, including by applications, by overall level of “intelligence,” and by how it works. We’ll talk about applications later. On “intelligence levels,” all AI systems now in existence are “artificial narrow intelligence,” which is (at most) capable of performing specific and constrained tasks, such as writing or sorting images. This is often distinguished from “artificial general intelligence,” a hypothetical category that could perform a wide variety of tasks (like humans), and “artificial superintelligence,” which would, if achieved, vastly exceed human capabilities and possibly even human comprehension. The important thing to know is that there is little prospect of artificial general intelligence or superintelligence anytime soon (Collins, 2021). For the foreseeable future, AI will be limited to impressive performances on narrow tasks, such as generating images based on text prompts, remixing text on particular topics, or classifying images into specified categories.
More directly relevant is categorisation by how AI works. There are two big categories of AI, which can be hybridised: rule-based and machine-learning-based AI. Speaking broadly, rule-based systems identify and respond to inputs according to an (often quite complicated) “script” written by human designers. For example, a very simple rule-based patient care recommendation system might look something like the following:
IF patient is immunocompromised AND has a pneumonia diagnosis, THEN recommend patient be kept in hospital overnight.
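To make the contrast concrete, a minimal sketch of such a rule in Python might look like the following (the patient fields and function name are purely illustrative, not drawn from any real clinical system):

from dataclasses import dataclass, field

@dataclass
class Patient:
    immunocompromised: bool
    diagnoses: set = field(default_factory=set)

def recommend_overnight_stay(patient: Patient) -> bool:
    # Hand-written rule: keep immunocompromised pneumonia patients in overnight.
    return patient.immunocompromised and "pneumonia" in patient.diagnoses

print(recommend_overnight_stay(Patient(True, {"pneumonia"})))  # True

Every branch of the logic here was written, and can be read, by a person; that transparency is the main appeal of rule-based systems.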
In contrast, machine-learning-based systems iteratively develop their own scripts, in part or in whole, based on attempted applications of prior scripts and feedback on how those attempts turned out. In essence, a machine-learning system is given a goal and a dataset and left to work out, through trial and error, the most effective way to use the dataset to achieve the goal. This process can demand enormous amounts of data and computation time. The resulting scripts can then be “frozen” and used as-is, or permitted to continue evolving in response to novel data and feedback.
Most recent AI hype has centred on machine-learning systems, such as the text generator ChatGPT and the image generator Midjourney. Machine-learning systems are particularly powerful because they can develop and iterate upon scripts too complicated for human designers to construct manually, using datasets too large for manual review. This, in turn, allows machine-learning-based systems to mimic, and in some cases exceed, human capabilities in tasks requiring integration of many variables or large amounts of data. For example, manually designing a detailed rule-based system for diagnosing cancer from MRI imaging would require a great deal of time and effort. However, a machine-learning system can attempt to automate some of this process if given a collection of pre-identified cancer-positive and cancer-negative images (Wang, 2022). The system will mark some images as cancer-positive based on its existing script, check its answers, modify its script (to prioritise, deprioritise, or shift the interpretation of image features), and check whether this modification improves its accuracy. The downside of machine learning is that a system’s designers are not in complete control of what it learns. Thus, designers cannot always predict or explain how a machine-learning system will behave in every context of application.
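By contrast with the hand-written rule above, a minimal machine-learning sketch hands the “script-writing” to an optimiser. The toy example below, using the widely available scikit-learn library, trains a classifier on labelled examples and scores it on held-out data; the random numbers stand in for image features purely for illustration and carry no clinical meaning:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 200 "images" of 50 features each, with synthetic labels
# (1 = cancer-positive, 0 = cancer-negative). Real work would use curated scans.
X = rng.normal(size=(200, 50))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Hold out data the model never sees during training, to check generalisation.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "script" (here, a set of feature weights) is learned iteratively from the
# data, not written by hand as in the rule-based example above.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

The designers choose the goal, the data, and the model family, but the learned weights are discovered rather than specified, which is precisely why such systems can be hard to predict and explain.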
Applications of artificial intelligence in healthcare
As AI is potentially applicable to any activity that involves, or that could involve, processing and analysis of data, it has myriad potential applications in healthcare. Some, such as protein folding or diagnostics, are specific to medicine; others, such as automated logistics or employee management, are not.
AI in preclinical treatment development
AI can be employed to collect, annotate, and analyse tremendous amounts of biomedical data. In so doing, it can identify connections between data points and, sometimes, reveal previously obscured insights, such as predisposition to disease or treatment efficacy based on a combination of genes or patient-specific biomarkers (Michelhaugh and Januzzi, 2022; Wang et al., 2019). Furthermore, machine-learning models trained on large datasets can often extrapolate their findings to new or proposed work, enabling scientists to perform experiments on computers at a fraction of the time and cost of traditional lab research. DeepMind’s AlphaFold, for instance, was trained on a pre-existing database of proteins covering about 17% of the human proteome and was found to confidently predict protein structures based on amino acid sequence alone (Jumper et al., 2021). As discovering the structure of a single druggable protein is often a PhD dissertation’s worth of work, DeepMind’s prediction of the entire human proteome in under a year represents a triumph of machine learning (Tunyasuvunakool et al., 2021). Beyond protein folding, greater integration of AI systems into preclinical development may help scientists to repurpose existing drugs and discover new ones (Fleming, 2018; Ge et al., 2021; Zhou et al., 2020). One study used AI to identify a drug currently undergoing clinical trials for breast cancer treatment as an effective inhibitor of SARS-CoV-2 cell entry (Ge et al., 2021). The impressive results of AI-guided research underscore AI’s potential to streamline biomedical research, reduce the cost of treatment development, and reduce the time needed to bring new treatments to patients.
But despite being heralded as a cornucopia of biomedical insight, AI applications remain dependent on the guidance of human experts. While machine-learning models can identify patterns within mountains of genomic data, researchers must still corroborate these predictions and update models based on empirical data. For example, despite its success at predicting many protein structures, AlphaFold struggles to accurately predict structures for membrane proteins (Bertoline et al., 2023). Membrane proteins are integral to our understanding of disease, but they are poorly represented in AlphaFold’s training dataset. Thus, AlphaFold has limited “experience” with membrane proteins and correspondingly weak performance. AI-derived research will only ever be as good as the datasets on which AI is trained. Real-world research will always be needed to guide and correct AI’s missteps, as with any other modelling method.
AI in clinical treatment
Beyond preclinical research, AI has been touted as a key driver of innovation in the patient-care space, particularly for image analysis and patient monitoring (Brinker et al., 2019; Dubey and Tiwari, 2023; Najjar, 2023). AI can be a powerful tool for analysing clinical images such as MRIs and X-rays (Najjar, 2023) and, potentially, for identifying disease markers that are either too subtle for human detection or whose relevance to disease has gone unnoticed by entire medical fields (Shen et al., 2017). AI has demonstrated image-based detection speed and sensitivity meeting or exceeding human capabilities for diseases such as pneumonia and melanoma (Brinker et al., 2019; Plesner et al., 2023). While critical review is still necessary to verify AI diagnoses, these tools could drastically increase the throughput of disease diagnosis and enhance diagnostic sensitivity (Yu et al., 2023). However, as suggested previously, AI is only as good as the data on which it is trained. Atypical presentation or user-based errors in image processing can produce erroneous results, and these errors are often systematic. As discussed further below, training datasets are biased by differences in individuals’ ability or willingness to share their data across racial, ethnic, or geographic lines (Johnson et al., 2010).
AI is also being integrated into applications meant to reduce the rate of hospitalisation among seniors and other high-risk groups. Some healthcare professionals are looking to remote patient-monitoring devices to enable continuous monitoring without direct medical supervision or hospitalisation (Dubey and Tiwari, 2023). These tools can help ensure patients receive proactive care and can alleviate the burden of preventable hospitalisations by detecting anomalous indicators of disease, such as arrhythmias or elevated blood pressure, and notifying healthcare personnel. Recording patient data at scale may also facilitate early disease detection: AI models could parse large volumes of patient data to rapidly detect novel patterns calling for intervention. However, increased patient surveillance also necessitates greater scrutiny of how those data are used. Though physicians have an obligation to ensure that medical data are not intentionally misused, the rapid advancement of AI means the scale and variety of data collection will likely outpace the development of best practices for managing and protecting them. Thus, AI patient monitoring will complicate efforts to prevent questionable or unethical collection and use of data.
AI in epidemiology and healthcare management
Beyond improvements in disease treatment, AI may play a substantial role in large-scale tasks such as epidemiology, healthcare administration, and health resource allocation. During the COVID-19 pandemic, much work was dedicated to developing machine-learning-based models to forecast the rise of new SARS-CoV-2 variants and prospective regional healthcare burden based on the susceptibility of local populations to severe disease (Abdulkareem and Petersen, 2021; Al-qaness et al., 2020; Jiang et al., 2020; Nagpal et al., 2022; Zheng et al., 2020). Such enhanced surveillance could rapidly identify epidemiological trends and give researchers, medical personnel, and policymakers more time to coordinate targeted responses.
Algorithms are already ubiquitous in healthcare, and AI has been used to handle tasks like medical workforce scheduling (Fornell, 2023). AI could also be used to guide the allocation of scarce healthcare resources, a major difficulty under both normal and crisis healthcare conditions. Resource allocation has direct consequences for morbidity and mortality (Ji et al., 2020), and some authors hope that AI could increase the speed and efficiency of resource allocation during crises (Wu et al., 2023). Moreover, some researchers hope that AI decision-making could help to redress inequities in healthcare by targeting resources to communities in need. However, as discussed below, existing healthcare inequities can often find their way into models and algorithms in subtle ways.
Potential problems for artificial intelligence in healthcare
Because AI has so many possible applications, it has many different potential upsides and downsides. We’ve already discussed many upsides. As examples of potential downsides, we’ll explore four important problems: unreliability, misinformation, exploitation, and inequity. These do not exhaust all of the ways in which AI could contribute to health problems. However, they provide a potent set of examples to emphasise the importance of social responsibility in healthcare AI development and deployment.
Errors and accountability
Although AI systems can be very powerful, they can certainly err—sometimes in novel and unexpected ways. One well-publicised example concerns an experimental hospital AI designed to recommend whether pneumonia patients should be kept overnight. This system was trained on a historical dataset wherein patients with a history of asthma were always kept overnight for observation, and, consequently, had high recovery rates. Because of those high recovery rates, the AI system took a history of asthma as a predictor of recovery and recommended that such patients should be sent home (Christian, 2020). This is a fairly obvious error, but more subtle ones might evade detection and correction—particularly if AI is developed and deployed without input from a broad set of patients, communities, and medical professionals, or if medical professionals are not empowered to oversee, understand, and question AI decisions (Mackenzie et al., 2023). Moreover, use of AI systems could both legally and psychologically diffuse responsibility for patient care and accountability for care decisions (Naik et al., 2022).
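The mechanism behind the asthma error is easy to reproduce with synthetic data: if, in the historical records, asthma always triggered extra care that in turn improved outcomes, a model shown only the asthma flag and the outcome will learn asthma as “protective”. The following toy simulation (entirely made-up numbers, not the system Christian describes) illustrates the trap:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Synthetic history: asthma patients were always kept for observation,
# and that observation sharply reduced the risk of a bad outcome.
asthma = rng.integers(0, 2, size=n)
kept_overnight = asthma == 1                    # hospital policy, hidden from the model
bad_outcome = rng.random(n) < np.where(kept_overnight, 0.02, 0.10)

# A model that sees only the asthma flag learns it as "protective" ...
model = LogisticRegression().fit(asthma.reshape(-1, 1), bad_outcome)
print("Learned asthma coefficient:", round(model.coef_[0][0], 2))  # negative

# ... and would rank asthma patients as safe to send home, even though their
# low risk was caused by the extra care, not by the asthma itself.

Auditing for this kind of confounding requires exactly the contextual knowledge (here, the admission policy) that clinicians hold and datasets often omit.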
AI systems are more likely to produce invalid or harmful results if applied to populations who substantially differ from those on which they were trained, or if deployed outside the scope for which they were designed. For example, machine-learning algorithms capable of identifying cancerous skin lesions are predominantly trained on white skin, and their sensitivities fall far below clinician capabilities when applied to darker skin types (Kamulegeya et al., 2023). Diagnostic AI has been found to vary in performance even between different imaging machines of the same type, requiring adjustment (De Fauw et al., 2018). In healthcare, measures initially developed for one purpose are often reapplied for other purposes, and not always with appropriate validation. For example, the faecal occult blood test, validated for colorectal cancer screening, is frequently reapplied in patients not appropriately prepared through dietary and medication restriction (Sharma et al., 2001). This approach could lead to excessive false positives and mislead an AI system trained on a dataset including these test results. Yet much of the healthcare data available are riddled with such opportunism, heterogeneity, and inherent biases (Chin-Yee and Upshur, 2019). Dynamic, “unfrozen” AI systems can even develop new failure and error modes in response to novel data and training (DeCamp and Lindvall, 2020). In short, AI systems deployed in healthcare will require careful data and logic auditing, judicious application, and consistent expert oversight to prevent, apprehend, and correct errors (Broussard, 2023; Mackenzie et al., 2023).
Misinformation
While much work is ongoing to develop and deploy AI within healthcare, population health will also be affected by AI developments outside of healthcare. Many people turn first to the internet when they get sick, and, of course, not all health information on the internet is accurate. Indeed, World Health Organization Director-General Tedros Adhanom Ghebreyesus declared in 2020 that the world faced not only the COVID-19 pandemic, but a parallel “infodemic” of COVID-19 misinformation and conspiracy theories (Zarocostas, 2020). In the future, many people may turn to AI language models like ChatGPT for health advice. Leaders in the AI search and language model space have put in a lot of work to prevent their models from repeating bad health advice (Ayers et al., 2023), but, due to the structures of such models, the designers and trainers will always be one step behind new or more obscure canards (Nelson, 2023). Moreover, as AI answer services proliferate, the likelihood that one or more publicly available AI systems will be prone to giving out bad advice will increase. It should also be noted that ChatGPT’s paid version has been found to provide better information about vaccine safety than ChatGPT’s free version, leading to potential disparities in access to reliable health information (Deiana et al., 2023).
AI could also be used to generate or spread misleading content about health, just as it can about other topics. AI-generated images have already been used in a disinformation campaign surrounding the August 2023 wildfire that destroyed Lahaina, Hawaii (Sanger and Myers, 2023). Historically, online platforms have not been held responsible for misinformation or disinformation posted by users, even when such content is promoted by recommendation algorithms (Accountable Tech, 2020; Bertolini et al., 2021). It remains to be seen whether this no-responsibility approach will also apply to the outputs of language models or chatbots. Prevention of harms from AI health misinformation may require improvements in population digital and health literacy and stronger incentives for online platforms to ensure the validity of their hosted, recommended, or AI-generated content.
Facilitation or incentivisation of exploitation
AI systems require large amounts of data to train on. In general, the larger and more representative the available dataset, the more powerful an AI model can become. This requirement creates a perverse incentive in healthcare recordkeeping, because health data privacy is a barrier to health AI development (Bak et al., 2022). Moreover, AI tools can, in some cases, be used to re-identify anonymised health data, facilitating privacy invasion (Murdoch, 2021). Debates are ongoing both about technical methods to permit health data use for AI without compromising privacy and about the appropriate balance to strike between these goals (Prayitno et al., 2021). The European Union’s General Data Protection Regulation (Regulation 2016/679) provides a good start for residents of current and former EU nations, but, as ever, efficacy will depend upon implementation.
Marketing is another area of concern. Major pharmaceutical companies have expressed interest in using AI to enhance the efficacy and cost-effectiveness of drug marketing to physicians (Sagonowsky, 2019; Wunker, 2023). Such applications may not be inherently problematic. However, during the United States’ opioid abuse epidemic, direct-to-physician opioid marketing was found to be geographically associated with opioid overdose deaths (Hadland et al., 2019). In 2021, consulting firm McKinsey and Company paid a settlement of 573 million USD to U.S. states over the alleged contribution of its marketing advice to the opioid epidemic (Forsythe and Bogdanich, 2021). If a major consulting firm composed of highly educated humans arguably did not take appropriate care to prevent adverse consequences from its marketing strategy, there is little reason to expect that AI systems designed to maximise return on advertising would do so. Regulatory guidance and wariness from physicians, patients, and citizens in general may be necessary to avoid manipulation, exploitation, and other harms from AI-guided business strategies and tactics.
Healthcare workers may be subject to AI-powered exploitation as well. AI could be applied to many healthcare administration tasks, from logistics and ordering to recordkeeping to management, assessment, and punishment or rewarding of healthcare workers (Reddy et al., 2018). Algorithmic workforce management, not unlike algorithmic health decision-making, carries potential for systemic errors, social biases, excessive surveillance, and worker manipulation through “nudging” (Gal et al., 2020). In 2018, Amazon had to scrap a machine-learning recruiting tool that had incorrectly inferred, based on the preponderance of male resumes submitted for developer positions, that men made better developers than women (Dastin, 2018). Algorithmic performance management of low-pay roles at Amazon has led to performance assessment and firing without appropriate attention to contextual factors and without recourse to a human boss (Soper, 2021). Uber uses behavioural “nudges” to induce drivers to work longer hours and take worse-paying assignments (Scheiber and Huang, 2017). Physicians and other powerful healthcare workers may be able to resist the imposition of such management practices, but orderlies, administrative assistants, home health aides, and other lower-ranking professionals may not be so privileged.
Exacerbation of healthcare inequities
Although many commentators hope that AI will help to reduce healthcare costs and improve healthcare accessibility (e.g. Khanna et al., 2022), there are at least three ways in which AI could actually worsen healthcare inequities. First, errors and biases of the sort alluded to above are often linked to existing societal inequities. AI systems used for diagnostics perform worse for patients of demographic categories less represented in training datasets (Guo et al., 2021; Kamulegeya et al., 2023; Lee et al., 2022). AI systems used for healthcare management could directly reproduce past inequities in healthcare resource allocation. This is no idle speculation. One commercial algorithm widely used to identify patients for additional care in the United States has been found to systematically underestimate the care needs of Black patients due to their historically lower levels of care (Obermeyer et al., 2019).
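Obermeyer and colleagues traced this underestimation to the use of past healthcare spending as a stand-in for health need. A deliberately simplified toy simulation (synthetic numbers only, not a reconstruction of the commercial algorithm) shows how such proxy bias arises when one group has historically received less care for the same level of need:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 4000

group_b = rng.integers(0, 2, size=n)            # 1 = historically under-served group
need = rng.normal(loc=5.0, scale=1.0, size=n)   # identical true need in both groups

# Prior utilisation and spending reflect access to care, not need alone:
# group B received less care for the same need.
access = np.where(group_b == 1, 0.6, 1.0)
prior_visits = need * access + rng.normal(scale=0.3, size=n)
spending = need * access + rng.normal(scale=0.2, size=n)

# Train on spending as a proxy for need, then use the score to ration extra care.
model = LinearRegression().fit(prior_visits.reshape(-1, 1), spending)
score = model.predict(prior_visits.reshape(-1, 1))
selected = score >= np.quantile(score, 0.8)     # top 20% of scores get extra care

print("Share of group A selected:", round(selected[group_b == 0].mean(), 2))
print("Share of group B selected:", round(selected[group_b == 1].mean(), 2))

Despite identical underlying need, the under-served group is selected for extra care far less often, because both the training label and the input features encode its historically lower access to care rather than its health.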
Second, advanced treatments developed using AI, or actually implementing AI – say, custom “digital twins” of individuals used to personalise diagnostics and treatment (Björnsson et al., 2020) – may, like other expensive medical innovations, be far more accessible to socioeconomic elites. Such access gaps exist even in nations with national healthcare (e.g. Kapadia et al., 2022). Disparities in access to high-tech health innovations could continue to widen healthcare gaps between the rich and the poor.
Third, although AI can be used to reduce healthcare costs, under-resourced medical providers and systems will be under pressure to cut costs excessively. Resource-strapped providers might be incentivised to deploy AI systems outside their intended scope, skip important oversight, and, perhaps, even automate tasks better left partially or entirely human. In short, as with most emerging technologies, wealthy patients and the providers who serve them will be best situated to capture the upsides of AI in healthcare, while the poorest will be most vulnerable to its downsides (Bozeman, 2020).
Ensuring equity in AI for healthcare is inextricable from ensuring equity in healthcare overall. Demographic biases in AI predictions and recommendations emerge from biases in historical datasets. Moreover, in an inequitable healthcare system, advantages and disadvantages will be unfairly distributed, for old as well as for new technologies. While care can be taken to design and implement healthcare AI as ethically and appropriately as possible (World Health Organization, 2021), there is no technological fix for the socioeconomic inequities that drive health disparities (Costa-Font and Hernández-Quevedo, 2012). Substantial reforms not only to healthcare systems but to the technological, economic, and social arrangements that underpin them may be necessary to achieve good health for all (World Health Organization Council on the Economics of Health for All, 2023).
Shaping the future of artificial intelligence in healthcare
AI has the potential to contribute greatly to improving standards of care, accelerating healthcare innovation, and reducing healthcare costs. Simultaneously, it also carries potential to reduce the reliability of some healthcare work, facilitate the spread of misinformation, contribute to exploitation of patients and healthcare workers, and exacerbate healthcare inequities. Which of these, and of AI’s many other potentialities, are realised will depend upon the interests, priorities, and constituencies empowered to shape the development and deployment of AI in healthcare. For technologies as for policy, the best and only – though still far from perfect – guarantee of democratic benefit is democratic governance (Pacifico Silva et al., 2018).
A broad and diverse citizenry must be continually empowered to determine whether and how AI should be implemented in healthcare. Only democratic decision-making can ensure that we will anticipate and avoid adverse consequences and that we will deploy AI in ways that advance all citizens’ health. Conventional democratic processes are essential (Genus and Stirling, 2018), and newer experiments in public engagement (Kaplan et al., 2021) and strategic foresight (Selin et al., 2023) can also be useful for aligning novel technologies with public values (Ribeiro et al., 2018). The work of democracy is always piecemeal and messy. It will require that patients, healthcare workers, citizens, and communities advocate for their needs and interests, that governments work to advance them, and that those most empowered in the health innovation ecosystem – not only physicians, but administrators, technologists, and financiers – embrace both responsibility and accountability for the health of the communities they serve (Stirling, 2015; Nelson et al., 2021). AI in healthcare will be what our societies make of it. It is up to us to fairly realise and distribute AI’s benefits and to anticipate and prevent its harms.
References
Abdulkareem M, Petersen SE (2021). The promise of AI in detection, diagnosis, and epidemiology for combating COVID-19: Beyond the hype. Frontiers in Artificial Intelligence 4, 652669. https://doi.org/10.3389/frai.2021.652669.
Accountable Tech (2020). Facebook’s Content Algorithms Undermined COVID-19 Responses, Spread Violent Misinformation in the Wake of George Floyd’s Killing. https://accountabletech.org/wp-content/uploads/2020/08/Facebooks-Content-Algorithms.pdf.
Al-qaness MAA et al. (2020). Optimization method for forecasting confirmed cases of COVID-19 in China. Journal of Clinical Medicine 9(3), 674. https://doi.org/10.3390/jcm9030674.
Ayers JW et al. (2023). Evaluating artificial intelligence responses to public health questions. Journal of the American Medical Association Network Open 6(6), e2317517. https://doi.org/10.1001/jamanetworkopen.2023.17517.
Bak M et al. (2022). You can’t have AI both ways: Balancing health data privacy and access fairly. Frontiers in Genetics 13, 929453. https://doi.org/10.3389/fgene.2022.929453.
Bertoline LMF et al. (2023). Before and after AlphaFold2: An overview of protein structure prediction. Frontiers in Bioinformatics 3, 1120370. https://doi.org/10.3389/fbinf.2023.1120370.
Bertolini A et al. (2021). Liability of Online Platforms. PE 656.318. Brussels: European Parliamentary Research Service. https://doi.org/10.2861/619924.
Björnsson B et al. (2020). Digital twins to personalize medicine. Genome Medicine 12, 4. https://doi.org/10.1186/s13073-019-0701-3.
Bozeman B (2020). Public value science. Issues in Science and Technology 36(4), 34-41. https://issues.org/public-value-science-innovation-equity-bozeman/.
Brinker TJ et al. (2019). Deep neural networks are superior to dermatologists in melanoma image classification. European Journal of Cancer 119, 11-17. https://doi.org/10.1016/j.ejca.2019.05.023.
Broussard M (2023). How to investigate an algorithm. Issues in Science and Technology 39(4), 85-89. https://issues.org/algorithm-auditing-more-than-glitch-broussard/.
Chin-Yee B, Upshur R (2019). Three problems with big data and artificial intelligence in medicine. Perspectives in Biology and Medicine 62(2), 237-256. https://doi.org/10.1353/pbm.2019.0012.
Christian B (2020). The Alignment Problem: Machine Learning and Human Values. New York: W.W. Norton & Company.
Collins H (2021). The science of artificial intelligence and its critics. Interdisciplinary Science Reviews 46(1-2), 53-70. https://doi.org/10.1080/03080188.2020.1840821.
Costa-Font J, Hernández-Quevedo C (2012). Measuring inequalities in health: What do we know? What do we need to know? Health Policy 106(2), 195-206. https://doi.org/10.1016/j.healthpol.2012.04.007.
Dastin J (10 October 2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
Davenport T, Kalakota R (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal 6(2), 94-98. https://doi.org/10.7861/futurehosp.6-2-94.
DeCamp M, Lindvall C (2020). Latent bias and the implementation of artificial intelligence in medicine. Journal of the American Medical Informatics Association 27(12), 2020-2023. https://doi.org/10.1093/jamia/ocaa094.
De Fauw J et al. (2018). Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine 24, 1342-1350. https://doi.org/10.1038/s41591-018-0107-6.
Deiana G et al. (2023). Artificial intelligence and public health: Evaluating ChatGPT responses to vaccination myths and misconceptions. Vaccines 11(7), 1217. https://doi.org/10.3390/vaccines11071217.
Dubey A, Tiwari A (2023). Artificial intelligence and remote patient monitoring in US healthcare market: A literature review. Journal of Market Access & Health Policy 11(1), 2205618. https://doi.org/10.1080/20016689.2023.2205618.
Fleming N (2018). How artificial intelligence is changing drug discovery. Nature 557, S55-S57. https://doi.org/10.1038/d41586-018-05267-x.
Fornell D (25 September 2023). AI takes on hospital staffing to help battle burnout. HealthExec. https://healthexec.com/topics/healthcare-management/healthcare-staffing/ai-optimizes-hospital-staffing.
Forsythe M, Bogdanich W (3 February 2021). McKinsey settles for nearly $600 million over role in opioid crisis. The New York Times. https://www.nytimes.com/2021/02/03/business/mckinsey-opioids-settlement.html.
Gal U et al. (2020). Breaking the vicious cycle of algorithmic management: A virtue ethics approach to people analytics. Information and Organization 30(2), 100301. https://doi.org/10.1016/j.infoandorg.2020.100301.
Ge Y et al. (2021). An integrative drug repositioning framework discovered a potential therapeutic agent targeting COVID-19. Signal Transduction and Targeted Therapy 6, 165. https://doi.org/10.1038/s41392-021-00568-6.
Genus A, Stirling A (2018). Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy 47(1), 61-69. https://doi.org/10.1016/j.respol.2017.09.012.
Guo LN et al. (2021). Bias in, bias out: Underreporting and underrepresentation of diverse skin types in machine learning research for skin cancer detection: A scoping review. Journal of the American Academy of Dermatology 87(1), 157-159. https://doi.org/10.1016/j.jaad.2021.06.884.
Hadland SE et al. (2019). Association of pharmaceutical industry marketing of opioid products with mortality from opioid-related overdoses. Journal of the American Medical Association Network Open 2(1), e186007. https://doi.org/10.1001/jamanetworkopen.2018.6007.
High-Level Expert Group on Artificial Intelligence (2019). A Definition of AI: Main Capabilities and Scientific Disciplines. Brussels: European Commission. https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december_1.pdf.
Johnson PJ et al. (2010). Disparities in public use data availability for race, ethnic, and immigrant groups. Medical Care 48(12), 1122-1127. https://doi.org/10.1097/MLR.0b013e3181ef984e.
Ji Y et al. (2020). Potential association between COVID-19 mortality and health-care resource availability. The Lancet Global Health 8(4), E480. https://doi.org/10.1016/S2214-109X(20)30068-1.
Jiang X et al. (2020). Computers, Materials & Continua 63(1), 537-551. https://doi.org/10.32604/cmc.2020.010691.
Jumper J et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature 596, 583-589. https://doi.org/10.1038/s41586-021-03819-2.
Kamulegeya L et al. (2023). Using artificial intelligence on dermatology conditions in Uganda: A case for diversity in training data sets for machine learning. African Health Sciences 23(2). https://doi.org/10.4314/ahs.v23i2.86.
Kapadia D et al. (2022). Ethnic Inequalities in Healthcare: A Rapid Evidence Review. London: NHS Race & Health Observatory. https://www.manchester.ac.uk/discover/news/services/downloadfile.php?f=rho-rapid-review-final-report-v.7.pdf&uid=1139877&hash=8c1f63a90532ae3ede061168dad317e77b0447d6.
Khanna NN et al. (2022). Economics of artificial intelligence in healthcare: Diagnosis vs. treatment. Healthcare 10(12), 2493. https://doi.org/10.3390/healthcare10122493.
Kaplan LR et al. (2021). Designing participatory technology assessments: A reflexive method for advancing the public role in science policy decision-making. Technological Forecasting and Social Change 171, 120974. https://doi.org/10.1016/j.techfore.2021.120974.
Lee MS et al. (2022). Toward gender equity in artificial intelligence and machine learning applications in dermatology. Journal of the American Medical Informatics Association 29(2), 400-403. https://doi.org/10.1093/jamia/ocab113.
Mackenzie A et al. (2023). From ‘Black Box’ to Trusted Healthcare Tools: Physiology’s Role in Unlocking the Potential of AI for Health. London: The Physiological Society. https://static.physoc.org/app/uploads/2023/06/23142343/WEB-From-Black-Box-to-Trusted-Healthcare-Tools.pdf.
Michelhaugh SA, Januzzi JL Jr. (2022). Using artificial intelligence to better predict and develop biomarkers. Heart Failure Clinics 18(2), 275-285. https://doi.org/10.1016/j.hfc.2021.11.004.
Murdoch B (2021). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics 22, 122. https://doi.org/10.1186/s12910-021-00687-3.
Nagpal S et al. (2022). Genomic surveillance of COVID-19 variants with language models and machine learning. Frontiers in Genetics 13, 858252. https://doi.org/10.3389/fgene.2022.858252.
Naik N et al. (2022). Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? Frontiers in Surgery 9, 862322. https://doi.org/10.3389/fsurg.2022.862322.
Najjar R (2023). Redefining radiology: A review of artificial intelligence integration in medical imaging. Diagnostics 13(17), 2760. https://doi.org/10.3390/diagnostics13172760.
Nelson JP (2023). ChatGPT and other language AIs are nothing without humans—A sociologist explains how countless hidden people make the magic. The Conversation. https://theconversation.com/chatgpt-and-other-language-ais-are-nothing-without-humans-a-sociologist-explains-how-countless-hidden-people-make-the-magic-211658.
Nelson JP et al. (2021). Toward anticipatory governance of human genome editing: A critical review of scholarly governance discourse. Journal of Responsible Innovation 8(3), 382-420. https://doi.org/10.1080/23299460.2021.1957579.
Obermeyer Z et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464), 447-453. https://doi.org/10.1126/science.aax2342.
Pacifico Silva H et al. (2018). Introducing responsible innovation in health: A policy-oriented framework. Health Research Policy and Systems 16, 90. https://doi.org/10.1186/s12961-018-0362-5.
Plesner LL et al. (2023). Autonomous chest radiograph reporting using AI: Estimation of clinical impact. Radiology 307(3). https://doi.org/10.1148/radiol.222268.
Prayitno CR et al. (2021). A systematic review of federated learning in the healthcare area: From the perspective of data properties and applications. Applied Sciences 11(23), 11191. https://doi.org/10.3390/app112311191.
Reddy S et al. (2018). Artificial intelligence-enabled healthcare delivery. Journal of the Royal Society of Medicine 112(1), 22-28. https://doi.org/10.1177/0141076818815510.
Regulation 2016/679. Regulation (EU) No. 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679.
Ribeiro B et al. (2018). Introducing the dilemma of societal alignment for inclusive and responsible research and innovation. Journal of Responsible Innovation 5(3), 316-331. https://doi.org/10.1080/23299460.2018.1495033.
Sagonowsky E (9 January 2019). Novartis puts AI on the job to help reps say the right things to the right doctors. Fierce Pharma. https://www.fiercepharma.com/pharma/novartis-equips-sales-team-ai-bid-to-boost-productivity-exec.
Sanger DE, Myers SL (11 September 2023). China sows disinformation about Hawaii fires using new techniques. The New York Times. https://www.nytimes.com/2023/09/11/us/politics/china-disinformation-ai.html.
Scheiber N, Huang J (2 April 2017). How Uber uses psychological tricks to push its drivers’ buttons. The New York Times. https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html.
Selin C et al. (2023). Researching the future: Scenarios to explore the future of human genome editing. BMC Medical Ethics 24, 72. https://doi.org/10.1186/s12910-023-00951-8.
Sharma VK et al. (2001). An audit of the utility of in-patient fecal occult blood testing. The American Journal of Gastroenterology 96(4), 1256-1260. https://doi.org/10.1016/S0002-9270(01)02272-9.
Shen D et al. (2017). Deep learning in medical image analysis. Annual Review of Biomedical Engineering 19, 221-248. https://doi.org/10.1146/annurev-bioeng-071516-044442.
Soper S (28 June 2021). Fired by bot at Amazon: ‘It’s you against the machine.’ Bloomberg. https://www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-turns-to-machine-managers-and-workers-are-losing-out.
Stirling A (2015). Towards innovation democracy? Participation, responsibility and precaution in innovation governance. Brighton: University of Sussex, ESRC STEPS. https://steps-centre.org/wp-content/uploads/Innovation-Democracy.pdf.
Tunyasuvunakool K et al. (2021). Highly accurate protein structure prediction for the human proteome. Nature 596, 590-596. https://doi.org/10.1038/s41586-021-03828-1.
Wang L (2022). Deep learning techniques to diagnose lung cancer. Cancers 14(22), 5569. https://doi.org/10.3390/cancers14225569.
Wang Y et al. (2019). Identifying Crohn’s disease signal from variome analysis. Genome Medicine 11(59). https://doi.org/10.1186/s13073-019-0670-6.
World Health Organization (2021). Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization. https://iris.who.int/bitstream/handle/10665/341996/9789240029200-eng.pdf.
World Health Organization Council on the Economics of Health for All (2023). Final Report: Transforming Economies to Deliver What Matters. Geneva: World Health Organization. https://www.ucl.ac.uk/bartlett/public-purpose/sites/bartlett_public_purpose/files/who_councileh4a_finalreport-complete_2105202329.pdf.
Wu H et al. (2023). The application of artificial intelligence in health care resource allocation before and during the COVID-19 pandemic: Scoping review. JMIR AI 2, e38397. https://doi.org/10.2196/38397.
Wunker S (5 June 2023). How AI can revolutionize pharma sales and marketing. Forbes. https://www.forbes.com/sites/stephenwunker/2023/06/05/how-ai-can-revolutionize-pharma-sales-and-marketing/?sh=3870c7506c4d.
Yu F et al. (2023). Evaluating progress in automatic chest X-ray radiology report generation. Patterns 4(9), 100802. https://doi.org/10.1016/j.patter.2023.100802.
Zarocostas J (2020). How to fight an infodemic. The Lancet 395(10225), 676. https://doi.org/10.1016/S0140-6736(20)30461-X.
Zheng N et al. (2020). Predicting COVID-19 in China using hybrid AI model. IEEE Transactions on Cybernetics 50(7), 2891-2904. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9090302.
Zhou Y et al. (2020). Artificial intelligence in COVID-19 drug repurposing. The Lancet Digital Health 2(12), E667-E676. https://doi.org/10.1016/S2589-7500(20)30192-8.