Factors Affecting the Attitude of Medical Doctors in Türkiye towards Using Artificial Intelligence Applications in Healthcare Services
Original Article

Bezmialem Science 2024;12(3):297-308
1. Anadolu University Faculty of Business Department of Marketing, Eskişehir, Türkiye
2. Ankara Science University Faculty of Humanities and Social Sciences Department of Political Science and Public Administration, Ankara, Türkiye
Received Date: 14.12.2023
Accepted Date: 14.02.2024
Online Date: 31.07.2024
Publish Date: 31.07.2024

ABSTRACT

Objective

Artificial intelligence (AI) is transforming various sectors, including healthcare. The aim of this research was to examine the factors that determine the acceptance of and intention to use AI applications by medical doctors (MDs).

Methods

This research was based on an online survey conducted with 275 MDs in Türkiye. The survey was prepared in English and later translated into Turkish by the researchers. The study employed a convenience sampling technique. Partial least squares-structural equation modeling (PLS-SEM) was employed to ascertain causal relationships for theory confirmation. The data analysis utilized SmartPLS 3, and descriptive statistics were calculated with SPSS 25.

Results

According to the findings, trust (β=0.651; t=25.876; p<0.01) was the strongest positive factor for increased intention to use AI applications. Perceived usefulness (β=0.613; t=22.851; p<0.01) and perceived ease of use (PEOU) (β=0.644; t=14.577; p<0.01) also significantly predicted intention to use. Neither technological anxiety (β=0.067; t=1.014; p=0.093) nor facilitating conditions (β=0.071; t=1.041; p=0.102) was a significant predictor of intention to use.

Conclusion

This research reveals that trust, perceived usefulness, and PEOU are the major positive factors for AI to be accepted and used by medical doctors. The greater trust and ease of use that come with more knowledge of and experience with AI may lead to more action being taken to benefit from AI in the healthcare sector.

Introduction

Artificial intelligence (AI) is undoubtedly the most debated technological development of today. Aside from the technical improvements in AI, the expansion of AI implementations is the major reason for this debate in the business world, the media, and the scientific community. AI, which entered our lives through automation, production systems, and sales/marketing applications, is now being used in essential sectors such as education and health. It is frequently suggested that, on a global scale, AI is affecting individual as well as industrial activities (1). It has therefore become a transformative element for society as a whole.

It can be said that there is a clear relationship between a series of developments in recent years and the widening use of AI. The most important of these is the increasing volume of recorded data due to digitalization. These digital records, known as big data, constitute an almost endless resource that can be processed by AI algorithms through machine learning. Today, everything occurring in the digital environment, such as sounds, images, numbers, and records of behavior, has become data that can be used by AI. Another feature of big data is that, in addition to being accessible, storable, and processable, it can be collected in real time. This offers exceptional opportunities for the development of AI.

On the other hand, the coronavirus disease-2019 pandemic accelerated digitalization in all areas of life. In line with this increasing digitalization, developments in AI gained momentum, and the spread of AI into different sectors and its new functions followed. AI technology provides a base for continued innovation in various sectors (2). In recent years, as one of the sectors where digitalization and datafication are advancing most rapidly, medicine and the health sector have become prominent areas for the use of AI. It is also predicted that the opportunities provided by these technologies will create new industries and roles (3). While AI offers great opportunities, it also poses a threat, especially to manufacturing- and process-oriented sectors, and the health sector is among them (3).

Apparently, there is an increasing interest in technology to solve the problems of today's society, such as global warming, the sustainable use of resources, and public health. Among these, the most pressing societal issues regarding health are the increasing workload of the health sector, high costs, and the scarcity of trained personnel due to a growing and aging population (4). Recent developments in machine learning and AI have accelerated efforts to mitigate these problems with technological solutions. Policy makers and politicians are also eager to introduce more state-of-the-art tools into the health system (5). In this context, the digitalization of health data has paved the way for applications that adopt a data-driven approach (6, 7). The healthcare sector has thus become one of the areas where AI is developing most rapidly from managerial, clinical, and patient perspectives. Most importantly, AI is already being applied to clinical tasks normally performed by doctors, which draws attention to their position in an AI-supported health system (8). It is therefore possible to suggest that the circumstances of both doctors and patients have been transformed by the introduction of AI (9). In the future, if and when AI becomes a routine part of clinical practice, the self-image of doctors will also be affected (10). From the vantage point of patients, on the other hand, receiving health services is likely to be converted into a completely new experience. It is also important to note that the use of AI in healthcare should be considered differently from its use in other sectors, given the highly sensitive nature of health data and the vulnerability of the consumers (2).

The current literature on AI in the health sector covers various aspects of the subject; however, it does not sufficiently explain the factors shaping the attitudes and willingness of medical doctors (MDs) to use AI applications. Therefore, this research aims to examine the attitudes of MDs towards AI applications and the factors that determine their intention to use them. In this context, it is believed that the extended technology acceptance model (E-TAM) will provide a distinctive perspective in explaining doctors' intentions to use AI tools.

Methods

This study was approved by the Anadolu University Social and Human Sciences Scientific Research and Publication Ethics Board (approval number: 70/78, date: 28.12.2023).

Artificial Intelligence in Health

AI is basically composed of machine learning, algorithms, and (big) data. In the context of these components, AI can be described as a smart machine-based system that recognizes patterns in data and can apply these patterns to new data for particular tasks and purposes (11, 12). AI has the ability to replace many human tasks and activities in various industries, which is likely to have an impact on productivity and performance (2). Since health has become one of the major fields producing big data, the utilization of AI has been foreseeable, especially through machine learning and deep learning (13, 14). It can be suggested that AI has already started to cause a paradigm change in the field of healthcare (15).

It is possible to consider the use of AI in the field of healthcare in three different dimensions. The first of these is public health applications, while the other two are in the field of medicine: clinical applications for doctors and applications for the use of patients and/or for doctor-patient communication (16, 17). AI can be used for diagnosis, treatment, and patient monitoring, as well as image processing and analysis (14, 18). In this context, when we look at the AI applications developed for the use of doctors, it can be said that developments are shaped in line with the promises of data and technology. Tech companies target different fields of medical research and practice, aiming to help doctors find all the relevant and accurate information at once in order to make precise diagnoses and establish better treatment plans (19, 20). For example, thanks to developments in AI and image-processing technology, AI applications in radiology stand out among other specializations (13, 14, 17, 21). This distinction between areas of expertise is also seen in various studies on the relationship between health and AI in the literature. However, other specialties are catching up quite rapidly. Moreover, AI is becoming increasingly capable at clinical tasks beyond diagnosis and early detection (15).

Risks and Concerns Regarding AI in Health

It is unclear whether doctors will adopt AI technology in clinical practice at some point in the future (15). This depends very much on the risks and concerns, as well as the potential benefits, perceived by the doctors. Like every technological innovation, the use of AI carries both risks and benefits. The concerns and arguments regarding the potential risks and drawbacks of AI in the health sector are varied. Data privacy and security, bias, the black-box effect, questions of liability and accountability, and doctors' lack of AI knowledge are among the most significant concerns (8, 10, 13, 22). Some of these problematic areas relate to the technology itself, while others arise from how users approach and utilize it.

Data privacy and security are among the major topics of debate when it comes to proliferating digital technologies that utilize data. For machine learning and AI, digital health records of all kinds are essential to achieve the desired results. Developers assume that the data used for machine learning are flawless. However, personal health records may be destroyed, stolen, or altered if there is a security problem with the system. Moreover, the data are recorded by humans, who can make mistakes when collecting, classifying, and categorizing them (22). In these stages, which are mostly carried out by humans rather than machines, it is vital to establish appropriate guidelines and apply them correctly.

AI and machine learning are criticized for producing biased outcomes that may lead to imperfect decisions. The bias may result from bias in the training data and/or flawed algorithms. In this sense, socially consequential decisions relying on AI-supported systems are often accused of reproducing social inequalities, which also exist in public health. For example, the training data may represent only a certain ethnic or geographical population while excluding other groups, or some critical data about patients may be in a format that cannot be entered into the system (10). In such cases, the data may not be comprehensive and inclusive enough to give accurate results.

The black-box effect refers to the opacity of the algorithms used in AI systems. If an AI system is too complex for one or more of its stakeholders, it is considered problematic (23). Some models lack transparency in terms of interpretable processes, including the parameters and criteria on which "deep learning" is mostly based (22). This "black-box" type of AI system may not be trustworthy for either doctors or patients, since the outcome or diagnosis cannot be explained (22). This issue also raises debates regarding accountability. In an opaque AI system, it is impossible to reduce risks through verification (23), since the doctor cannot follow the decision-making process. In that context, the allocation of responsibility or liability also becomes an issue in the case of unexpected or unwanted effects of decisions made by AI systems and applied by the doctors who followed them. The question of whether AI systems can be held responsible for their decisions is beyond the scope of this article; however, it is widely suggested that the doctors who make the final decisions should be held responsible (23). From the viewpoint of doctors, this may be an obstacle to building trust and willingness to use AI support.

The level of knowledge about AI is another concern from the standpoint of doctors and medical students alike. The efficient and accurate use of AI tools requires knowledge of AI processes, from data collection to deep learning. Only with a certain level of knowledge can doctors adopt AI systems into their routines with confidence. Moreover, if they know the AI processes well, they can support developers in optimizing the system (10).

Questions such as whether AI will replace doctors, whether it will remain only a decision-support technology, and whether the role of doctors (and other healthcare professionals) will change have begun to be asked frequently in recent years and have become popular research topics in the field of AI (8). Regardless of the answers, it is already possible to argue that the skillsets and attitudes required to be a good doctor will be redefined in the new era of AI (20). The speed and extent to which AI will affect the professional practices of doctors is largely related to the answers to these questions.

Previous Research on Perception and Acceptance of AI among MDs

In recent years, with the increasing use of AI-based tools and applications in the health sector, various studies have examined the knowledge, opinions, and attitudes of the professional groups working in it. These studies mainly focus on AI in general and on how members of the sector approach the use of AI in health and medicine. The results of such research, which analyzes the subject in different contexts and for various research purposes, are noteworthy. Although most AI tools for clinical use are still at the research or pilot-study phase, members of the health sector are already more or less aware of the imminent change. Therefore, the evolution of doctors' perceptions and attitudes is an important input for the penetration of AI into the healthcare sector. Effective human (doctor)-AI collaboration is especially vital for successful applications, so recent research also aims to identify the relevant factors (24). The literature suggests that factors such as trust in AI and hesitancy to accept AI tools are slowing down adoption (7, 25).

Doctors' attitudes and perceptions towards AI are related to several variables. When the relevant literature is examined, however, it is understood that the relationship between healthcare and AI is questioned by both doctors and researchers in certain areas of discussion.

For example, a study conducted with doctors in the Netherlands, Portugal, and the United States focused on ethics. This research revealed that MDs did not have enough insight into AI issues such as bias and social inequalities in health, but they had concerns about the involvement of the private sector and large companies in the process, which they believed to be profit-oriented and unaware of the core values of healthcare. Additionally, the doctors in this research expressed an interest in learning AI technology in order to be able to explain its outcomes (6).

Another study on the perceptions of trainee doctors in London found that 58% believed AI would have a positive impact on their training, mainly in terms of research and quality, as well as time-effectiveness in allocating more time for other educational activities. The findings showed that trainee doctors were optimistic that clinical AI would keep them updated on the latest literature and evidence to improve their practice. As for the negative opinions about clinical AI, the respondents were concerned that reduced opportunities to train would hamper the development of their practical skills and clinical judgement, which in turn could harm accountability (8).

According to a survey of Pakistani medical students and doctors, more than one-third of the respondents agreed or strongly agreed with the statement that AI would reduce errors in diagnosis, which showed partial trust in this technology. Moreover, despite this limited trust in AI, they did not consider it a threat to their occupation (13).

Another study, in which data were collected via an online questionnaire, revealed contradictory results about knowledge of AI among MDs. AI elements such as deep learning and neural networks were more familiar to the respondents who had a clear understanding of AI, whereas they had little idea about supervised and unsupervised learning. They were concerned about the safe use of AI in health, even though they had little awareness of its lack of transparency. Other results of this study showed that both medical students and doctors feared deskilling as well as doctors becoming redundant (18). This may be because knowledge about AI is related to personal interest in technology and innovation.

The findings of another survey, conducted in the UK, revealed that half of the 411 radiographers surveyed did not feel confident about understanding AI terminology, and 64% reported that they had not developed any skills regarding the use of AI in their field. The overall results indicated a willingness among the radiographers to receive training on AI applications in their field of expertise (26).

Of the 297 participants from England who responded to another online questionnaire about AI use in clinical practice, 13.8% indicated that they were aware of the use of AI technology. When asked to rate their level of knowledge about AI use in healthcare specifically, the mean rating was 3.68 out of 10, which indicated an insufficient level of knowledge (27).

According to the findings of a survey conducted in Italy among 1032 radiologist members of the Italian Society of Medical and Interventional Radiology, most radiologists expected AI to improve their workflow. Even though they had a positive attitude towards the use of AI, they were concerned about a possible loss of reputation (28).

As this bird's-eye view of previous research shows, the acceptance and use of AI by MDs is a multifaceted issue. There are several factors from the perspective of doctors that determine their level of acceptance, and most of these factors are based on their perceptions rather than lived experience.

Extended Technology Acceptance Model

Technology acceptance can be expressed as the choice of individuals to voluntarily accept new technologies. The primary aim of the technology acceptance model (TAM) is to predict the adoption of novel technologies among end-users and to illuminate challenges prior to their widespread integration within the general populace (29). In recent decades, researchers have formulated diverse models aimed at understanding the dynamics of user acceptance of technology; however, Davis's TAM remains the most fundamental and significant foundation for technology acceptance to date. TAM consists of two primary constructs commonly employed in various technological contexts: perceived usefulness (PU) and perceived ease of use (PEOU) (30). Some researchers have nonetheless expressed concerns about the inadequacy of the original TAM framework in elucidating users' intentions to adopt healthcare technologies (29). In specific user contexts, such as the acceptance of AI tools, participants' intentions to use are contingent upon many social and behavioral factors that remain unaddressed within the confines of the TAM. Therefore, the current research focused on incorporating additional social and behavioral variables into the TAM and on how these variables may affect users' perceptions. Accordingly, the variables "facilitating conditions (FC)", "trust (T)", "perceived risk (PR)", "technological anxiety (TA)", and "resistance to use (RC)" were included in the scope of the research in order to better understand users' perceptions.

Research Methodology

The objective of this study is to scrutinize and elucidate the determinants that shape the attitudes of MDs towards AI applications. The research model, with "intention to use (IU)" AI identified as the dependent variable, is shown in Figure 1. The target population for this research comprises doctors working in hospitals and clinics in Türkiye. To ensure the validity of the measurements, the identified determinant constructs were adopted from previous studies, as outlined in Appendix A. The proposed hypotheses are shown in Table 1.

Data Collection Method and Measurements

This research was based on an online survey conducted with 275 MDs in Türkiye. The study employed a convenience sampling technique. The survey was created using Google Forms, and participants were contacted via email. All participants were briefed on the research and their explicit consent was obtained. The initial survey was developed in English and later translated into Turkish by the researchers; both versions were carefully crafted to convey the same meaning. The survey was pilot tested for clarity with ten MDs. Data collection was carried out between September 1, 2023, and October 28, 2023.

The survey was divided into two parts. The first part contained a brief introduction and five questions on the profile of the participants: gender, age, institution, clinical specialization, and professional experience. The second part consisted of 21 questions covering the constructs shown in Figure 1. A five-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree) was used to measure responses.

The survey instruments for each of the constructs were designed to gather exhaustive detail and adapted from the literature: IU, three items (31); PU, two items (32, 33); PEOU, three items (34, 35); RC, three items (36); T, two items (37); TA, two items (31); FC, three items (38); and PR, three items (37, 39).

Statistical Analysis

In this study, partial least squares-structural equation modeling (PLS-SEM) was employed to ascertain causal relationships for theory confirmation. The data analysis utilized SmartPLS 3; descriptive statistics were calculated with SPSS 25. The PLS-SEM analysis is divided into two parts: the first evaluates the reliability and validity of the outer (measurement) model, and the second evaluates the structural model, within which the hypotheses were tested.

Results

Demographics

A total of 275 MDs responded to the survey. The sample was slightly skewed toward females (52%). The mean age of the sample was 48.3 years (standard deviation=12.7). The participants were distributed among clinical specialties as follows: internal medicine (n=82, 30%), pediatrics (n=55, 20%), general surgery (n=41, 15%), obstetrics and gynecology (n=41, 15%), orthopedics (n=28, 10%), psychiatry (n=14, 5%), and other specialties (n=14, 5%). The survey revealed that 31% of the participants had 15-20 years of work experience and 28% had less than 10 years of work experience.

Measurement Model

The measurement model encompasses assessment procedures for testing the reliability and validity of the measures. The current study followed the three assessments suggested by Hair et al. (40): 1) indicator loadings and internal consistency reliability, 2) convergent validity, and 3) discriminant validity.

In this study, the item loadings for each construct were obtained through the PLS-SEM analysis. Table 2 shows the details of the loadings. All items achieved the recommended loading values of >0.700 (40). Internal consistency reliability refers to the statistical consistency across indicators and was assessed using Cronbach's alpha (CA) and composite reliability (CR). The CA and CR values in this study adhered to the threshold set by Hair et al. (40) of CA and CR >0.700. As can be seen from Table 2, the CA and CR values show good internal consistency for all constructs, with reliability ranging from 0.732 to 0.816 for CA and from 0.813 to 0.880 for CR.
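
For reference, for a construct with $k$ standardized indicators with outer loadings $\lambda_i$, item variances $\sigma_i^2$, and total-score variance $\sigma_T^2$, these two statistics are conventionally defined as

$$\alpha=\frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_T^2}\right),\qquad \mathrm{CR}=\frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}+\sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}.$$

Both are bounded by 1, and values above 0.700 are taken as evidence of adequate internal consistency.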

Convergent validity is a statistical concern associated with the concept of construct validity. The average variance extracted (AVE) is used to determine convergent validity: if the AVE is greater than 0.500, the construct explains 50% or more of the variance of its indicators (40). In this research, all constructs have an AVE score greater than 0.500 and thus explain more than 50% of the variance (Table 2).
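
With the same notation as above, the AVE is simply the mean squared standardized loading:

$$\mathrm{AVE}=\frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}.$$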

According to Hair et al. (40), discriminant validity refers to the degree to which a construct differs from other constructs. Under the Fornell-Larcker criterion, the square root of the AVE of each construct should be greater than its highest correlation with any other construct in the measurement model. Based on the study findings, the square root of the AVE for each construct is greater than its shared variance with the other constructs. Thus, discriminant validity is confirmed by the Fornell-Larcker criterion (Table 3).
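
Formally, the criterion requires, for every construct $j$,

$$\sqrt{\mathrm{AVE}_j} > \max_{k\neq j}\,\lvert r_{jk}\rvert,$$

where $r_{jk}$ is the correlation between constructs $j$ and $k$.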

In addition, heterotrait-monotrait ratio (HTMT) values exceeding 0.900 indicate discriminant validity problems. All HTMT values in this study were below 0.900 (Table 4).
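
The HTMT for a pair of constructs is the mean correlation between their indicators relative to the geometric mean of the average within-construct indicator correlations:

$$\mathrm{HTMT}_{jk}=\frac{\bar{r}_{jk}}{\sqrt{\bar{r}_{jj}\,\bar{r}_{kk}}},$$

where $\bar{r}_{jk}$ averages the between-construct indicator correlations and $\bar{r}_{jj}$, $\bar{r}_{kk}$ average the within-construct ones.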

Discriminant validity can further be assessed by examining cross-loadings: it is demonstrated when each indicator's loading on its own construct is greater than all of its cross-loadings on other constructs. In this study, the outer loadings of all indicators exceeded all of their cross-loadings on other constructs (Table 5). In this context, it can be said that discriminant validity has been established.

Following the evaluation of the measurement model, the constructs are deemed suitable for use, as evidenced by meeting the criteria for indicator loadings, internal consistency reliability, convergent validity, and discriminant validity. Thus, the analysis can proceed to the structural model.

Structural Model

The structural model assessment began with a check for collinearity. The variance inflation factor (VIF) is a good indicator of collinearity: a VIF value above 3.000 indicates a collinearity problem. PU (VIF=1.233), PEOU (VIF=1.890), RC (VIF=1.567), T (VIF=1.241), TA (VIF=1.712), FC (VIF=1.213), and PR (VIF=1.663) are the predictors of IU (VIF=1.000). As can be seen, all VIF values were below three; therefore, no collinearity problem existed in the current research.
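
The VIF for predictor $j$ is computed from the $R^2$ obtained by regressing that predictor on all of the other predictors:

$$\mathrm{VIF}_j=\frac{1}{1-R_j^{2}}.$$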

Structural model evaluation is also known as inner model evaluation, as it examines the relationships between latent variables (41). For reflective models, recent studies suggest an evaluation that includes determination coefficients, path coefficients, and predictive power (40, 42). For this purpose, a complete bootstrap analysis with 5,000 subsamples was performed. At the 5% significance level, all hypotheses except H5 and H6 were supported.
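
The resampling logic behind these significance tests can be illustrated with a short sketch. The following Python fragment is only a stand-in: it bootstraps a simple standardized regression slope rather than the full PLS path model, and the variable names and simulated data are hypothetical; the estimates reported in this study come from SmartPLS 3.

```python
# Illustrative bootstrap of a path-style coefficient (NOT the full PLS-SEM
# estimation used in the paper; a standardized OLS slope stands in for it).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical construct scores for n = 275 respondents.
n = 275
df = pd.DataFrame({"trust": rng.normal(size=n)})
df["intention"] = 0.65 * df["trust"] + rng.normal(scale=0.6, size=n)

def path_coefficient(d: pd.DataFrame) -> float:
    """Standardized slope of intention on trust (equals their correlation)."""
    x = d["trust"].to_numpy()
    y = d["intention"].to_numpy()
    x = (x - x.mean()) / x.std(ddof=1)
    y = (y - y.mean()) / y.std(ddof=1)
    return float((x * y).sum() / (len(d) - 1))

# 5,000 bootstrap subsamples, each drawn with replacement at full sample size.
boot = np.array([
    path_coefficient(df.sample(n=n, replace=True))
    for _ in range(5000)
])
beta = path_coefficient(df)
t_stat = beta / boot.std(ddof=1)  # bootstrap standard error in the denominator
print(f"beta = {beta:.3f}, t = {t_stat:.2f}")
```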

TA was not a significant predictor of IU (β=0.067; t=1.014; p=0.093), and FC also did not have a significant effect on IU (β=0.071; t=1.041; p=0.102). PU (β=0.613; t=22.851; p<0.01), PEOU (β=0.644; t=14.577; p<0.01), RC (β=-0.416; t=7.150; p<0.01), T (β=0.651; t=25.876; p<0.01), and PR (β=-0.530; t=14.771; p<0.01) significantly predicted IU. The final results are shown in Table 6.

The explanatory power of the model was determined by the R2 value, which represents the proportion of variance in the endogenous variable explained by the exogenous variables. In the PLS-SEM analysis, the R2 for IU was 0.615, implying that PU, PEOU, RC, T, TA, FC, and PR together explain 61.5% of the variance in IU. Hair et al. (40) categorize R2 values of 0.25 as weak, 0.50 as moderate, and 0.75 as substantial; on this basis, the obtained R2 value falls into the moderate category.
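
As a reminder, R2 is defined from the residual and total sums of squares of the endogenous variable:

$$R^{2}=1-\frac{\sum_i\left(y_i-\hat{y}_i\right)^{2}}{\sum_i\left(y_i-\bar{y}\right)^{2}}.$$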

Next, f2 is reported to assess the effect size of each exogenous construct. This involves assessing the change in R2 between the full model and models in which one exogenous construct at a time is excluded (40, 43). According to Hair et al. (40), an f2 value of 0.02 is considered a small effect, 0.15 a medium effect, and 0.35 a large effect. The effect sizes are shown in Table 7.
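
Concretely, for each exogenous construct,

$$f^{2}=\frac{R^{2}_{\text{included}}-R^{2}_{\text{excluded}}}{1-R^{2}_{\text{included}}},$$

where the subscripts refer to the model with and without that construct.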

Among the significant predictors, PR (f2=0.345) had the smallest effect size, while trust (T) (f2=0.648) had the largest; TA and FC showed no effect.

Finally, we calculated Stone-Geisser's Q2 value to assess the predictive relevance of the model for each endogenous variable (given that our model included only one endogenous variable, this assessment focused on the model's ability to predict IU). When a model has predictive relevance, it accurately predicts the data points of the indicators (40). The obtained Q2 value of 0.41 indicates substantial predictive relevance.
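
In the blindfolding procedure used for this purpose, Q2 compares the squared prediction errors of omitted data points with their squared observed values:

$$Q^{2}=1-\frac{\mathrm{SSE}}{\mathrm{SSO}},$$

where SSE is the sum of squared prediction errors and SSO the sum of squares of the omitted observations; values above zero indicate predictive relevance.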

Discussion

According to the findings of our research, which aimed to reveal the factors determining the intention to use AI applications among MDs in Türkiye, trust is the strongest determinant. As trust in AI technology increases, MDs become more likely to use its applications. The same strong relationship between trust and (behavioral) intention was found in a previous study in China (25). Another study in China found direct and indirect positive effects of trust on AI acceptance among doctors, indicating that increased trust raised the likelihood of acceptance by positively affecting performance expectancy (44). PEOU was the second most effective factor; it was also found to be a strong factor leading to a positive attitude towards using AI in research undertaken in the UAE (45). Additionally, the more MDs perceive AI technology and AI applications as useful for healthcare purposes (PU), the greater their intention to use them. In a previous survey by Pan et al. (46), PU was found to be the most effective factor in determining the attitudes of doctors towards smart healthcare services in Japan. The results of an online survey of 669 Korean MDs and medical students likewise showed that 83.4% of the respondents considered AI useful for the medical field, which contributed to the positive attitude of the medical community in Korea (47).

According to our results, technological anxiety is not a factor in determining MDs' intention to use AI in healthcare practices in Türkiye, and FC were also found not to affect the intention to use AI. Among the factors that negatively impact the intention to use, perceived risk comes first, which may explain the strength of trust, the opposite of risk, as the main positive factor. This result supports the findings of a previous qualitative study on doctors' resistance to AI in Tunisia, in which the performance risk of AI applications was found to be one of the barriers to acceptance in healthcare, even though AI was perceived as beneficial to the medical field (48). RC also negatively impacts the intention to use AI applications among MDs in Türkiye: the more resistant individuals are towards using new technology, the lower their intention to use AI. It can be suggested that the general attitude towards using new technologies and/or tools also determines the attitude towards AI applications in healthcare.

Study Limitations

It should be noted that there are large variations within the population of MDs working in hospitals and clinics; therefore, the results cannot be considered generalizable. Additionally, the possible support of AI in the treatment of one patient may differ significantly from that in another, and attitudes towards specific AI applications may vary.

Conclusion

Artificial intelligence continues to develop at an astonishing speed and touches every part of our lives, including healthcare. In the context of this major transformation, the position of doctors becomes crucial for applying the opportunities of AI technology to medical practice while eliminating its risks and side effects. Their perceptions and attitudes will define the trajectory for the acceptance and adoption of AI applications in various aspects of the medical field. This research reveals that trust, perceived usefulness, and perceived ease of use are the major positive factors for AI to be accepted and adopted by MDs. Although it requires further research, this result may be related to the increasing knowledge and experience of doctors regarding artificial intelligence. Building more trust and ease of use in this way can enable more action to be taken to benefit from AI in the healthcare sector.

References

1. Gupta S, Kamboj S, Bag S. Role of risks in the development of responsible artificial intelligence in the digital healthcare domain. Inf Syst Front. 2021;25:2257-74.
2. Esmaeilzadeh P. Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives. BMC Med Inform Decis Mak. 2020;20:170.
3. Farrow E. Mindset matters: how mindset affects the ability of staff to anticipate and adapt to Artificial Intelligence (AI) future scenarios in organisational settings. AI Soc. 2021;36:895-909.
4. Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak. 2023;23:73.
5. Sujan M, Furniss D, Hawkins R, Habli I. Human factors of using artificial intelligence in healthcare: challenges that stretch across industries [Internet]. 2020. Available from: https://www.nhsx.nhs.uk/
6. Martinho A, Kroesen M, Chorus C. A healthy debate: exploring the views of medical doctors on the ethics of artificial intelligence. Artif Intell Med. 2021;121:102190.
7. Buck C, Doctor E, Hennrich J, Jöhnk J, Eymann T. General practitioners' attitudes toward artificial intelligence-enabled systems: interview study. J Med Internet Res. 2022;24:e28916.
8. Banerjee M, Chiew D, Patel KT, Johns I, Chappell D, Linton N, et al. The impact of artificial intelligence on clinical education: perceptions of postgraduate trainee doctors in London (UK) and recommendations for trainers. BMC Med Educ. 2021;21:429.
9. Varlamov OO, Chuvikov DA, Adamova LE, Petrov MA, Zabolotskaya IK, Zhilina TN. Logical, philosophical and ethical aspects of AI in medicine. Int J Mach Learn Comput. 2019;9:868-73.
10. Svensson AM, Jotterand F. Doctor ex machina: a critical assessment of the use of artificial intelligence in health care. J Med Philos. 2022;47:155-78.
11. Airoldi M. Machine habitus: toward a sociology of algorithms. John Wiley & Sons; 2021.
12. Livingston M. Preventing racial bias in federal AI. Journal of Science Policy & Governance. 2020;16.
13. Ahmed Z, Bhinder KK, Tariq A, Tahir MJ, Mehmood Q, Tabassum MS, et al. Knowledge, attitude, and practice of artificial intelligence among doctors and medical students in Pakistan: a cross-sectional online survey. Ann Med Surg (Lond). 2022;76:103493.
14. Ahuja AS. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ. 2019;7:e7717.
15. Choudhury A. Factors influencing clinicians' willingness to use an AI-based clinical decision support system. Front Digit Health. 2022;4:920662.
16. Antel R, Abbasgholizadeh-Rahimi S, Guadagno E, Harley JM, Poenaru D. The use of artificial intelligence and virtual reality in doctor-patient risk communication: a scoping review. Patient Educ Couns. 2022;105:3038-50.
17. Lysaght T, Lim HY, Xafis V, Ngiam KY. AI-assisted decision-making in healthcare: the application of an ethics framework for big data in health and research. Asian Bioeth Rev. 2019;11:299-314.
18. Boillat T, Nawaz FA, Rivas H. Readiness to embrace artificial intelligence among medical doctors and students: questionnaire-based study. JMIR Med Educ. 2022;8:e34973.
19. Le Nguyen T, Do TTH. Artificial intelligence in healthcare: a new technology benefit for both patients and doctors. In: 2019 Portland International Conference on Management of Engineering and Technology (PICMET). IEEE; 2019.
20. Liu X, Keane PA, Denniston AK. Time to regenerate: the doctor in the age of artificial intelligence. J R Soc Med. 2018;111:113-6.
21. Sit C, Srinivasan R, Amlani A, Muthuswamy K, Azam A, Monzon L, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging. 2020;11:14.
22. Verdicchio M, Perin A. When doctors and AI interact: on human responsibility for artificial risks. Philos Technol. 2022;35:11.
23. Smith H. Clinical AI: opacity, accountability, responsibility and liability. AI Soc. 2021;36:535-45.
24. Ismatullaev UVU, Kim SH. Review of the factors affecting acceptance of AI-infused systems. Hum Factors. 2024;66:126-44.
25. Fan W, Liu J, Zhu S, Pardalos PM. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Ann Oper Res. 2020;294:567-92.
26. Rainey C, O'Regan T, Matthew J, Skelton E, Woznitza N, Chu KY, et al. Beauty is in the AI of the beholder: are we ready for the clinical integration of artificial intelligence in radiography? An exploratory analysis of perceived AI knowledge, skills, confidence, and education perspectives of UK radiographers. Front Digit Health. 2021;3:739327.
27. York TJ, Raj S, Ashdown T, Jones G. Clinician and computer: a study on doctors' perceptions of artificial intelligence in skeletal radiography. BMC Med Educ. 2023;23:16.
28. Coppola F, Faggioni L, Regge D, Giovagnoni A, Golfieri R, Bibbolino C, et al. Artificial intelligence: radiologists' expectations and opinions gleaned from a nationwide online survey. Radiol Med. 2021;126:63-71.
29. Kamal SA, Shafiq M, Kakria P. Investigating acceptance of telemedicine services through an extended technology acceptance model (TAM). Technol Soc. 2020;60:101212.
30. Carter L, Bélanger F. The utilization of e-government services: citizen trust, innovation and acceptance factors. Inf Syst J. 2005;15:5-25.
31. Ortega Egea JM, Román González MV. Explaining physicians' acceptance of EHCR systems: an extension of TAM with trust and risk factors. Comput Human Behav. 2011;27:319-32.
32. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Quarterly. 2003;27:425-78.
33. Kijsanayotin B, Pannarunothai S, Speedie SM. Factors influencing health information technology adoption in Thailand's community health centers: applying the UTAUT model. Int J Med Inform. 2009;78:404-16.
34. Angst CM, Agarwal R. Adoption of electronic health records in the presence of privacy concerns: the elaboration likelihood model and individual persuasion. MIS Quarterly. 2009;33:339-70.
35. Moores TT. Towards an integrated model of IT acceptance in healthcare. Decis Support Syst. 2012;53:507-16.
36. Hsieh PJ. An empirical investigation of patients' acceptance and resistance toward the health cloud: the dual factor perspective. Comput Human Behav. 2016;63:959-69.
37. Javadi MHM, Rezaie Dolatabadi H, Nourbakhsh M, Poursaeedi A, Asadollahi A. An analysis of factors affecting on online shopping behavior of consumers. Int J Mark Stud. 2012;4.
38. Riffai MMMA, Grant K, Edgar D. Big TAM in Oman: exploring the promise of on-line banking, its adoption by customers and the challenges of banking in Oman. Int J Inf Manage. 2012;32:239-50.
39. Kim JB. An empirical study on consumer first purchase intention in online shopping: integrating initial trust and TAM. Electron Commer Res. 2012;12:125-50.
40. Hair JF, Risher JJ, Sarstedt M, Ringle CM. When to use and how to report the results of PLS-SEM. Eur Bus Rev. 2019;31:2-24.
41. Başar Ş, Başar EE. How does the environmental knowledge of Turkish households affect their environmentally responsible food choices? The mediating effects of environmental concerns. Int J Agric Environ Food Sci. 2020;4:348-55.
42. Sukendro S, Habibi A, Khaeruddin K, Indrayana B, Syahruddin S, Makadada FA, et al. Using an extended technology acceptance model to understand students' use of e-learning during Covid-19: Indonesian sport science education context. Heliyon. 2020;6:e05410.
43. Velsen LV, Tabak M, Hermens H. Measuring patient trust in telemedicine services: development of a survey instrument and its validation for an anticoagulation web-service. Int J Med Inform. 2017;97:52-8.
44. Yang X, Man D, Yun K, Zhang S, Han X. Factors influencing doctors' acceptance of artificial intelligence-enabled clinical decision support systems in tertiary hospitals in China. 2023. Available from: https://doi.org/10.21203/rs.3.rs-3493725/v1
45. Alhashmi SFS, Salloum SA, Mhamdi C. Implementing artificial intelligence in the United Arab Emirates healthcare sector: an extended technology acceptance model. IJITLS. 2019;3:27-42.
46. Pan J, Ding S, Wu D, Yang S, Yang J. Exploring behavioural intentions toward smart healthcare services among medical practitioners: a technology transfer perspective. Int J Prod Res. 2019;57:5801-20.
47. Oh S, Kim JH, Choi SW, Lee HJ, Hong J, Kwon SH. Physician confidence in artificial intelligence: an online mobile survey. J Med Internet Res. 2019;21:e12422.
48. Chaibi A, Zaiem I. Doctor resistance of artificial intelligence in healthcare. International Journal of Healthcare Information Systems and Informatics. 2022;17:1-13.