AINL 2020 Workshop on Human-AI Interaction

The Influence of Interactional Style on Affective Acceptance in Human-Chatbot Interaction – A Literature Review

Volume 2 Issue 2

Authors:

Lili Aunimo

Principal Lecturer

Haaga Helia University of Applied Sciences

Published: 04.02.2021

Abstract

This literature review analyses studies that examine human-chatbot interaction and how the interactional style of the chatbot affects users’ acceptance. The reviewed studies are classified by their thematic and methodological approach. Five major clusters were identified: (1) studies analysing the interrelation between the interactional style of chatbots and their affective acceptance by users; (2) studies examining the effects of interaction style on user experience and system satisfaction; (3) studies exploring how to generate trust in interaction with chatbots; (4) studies investigating user compliance with requests posed by AI-based chatbots; and finally (5) studies that examine cultural differences in technology acceptance in general. This literature review shows that studies on human-chatbot interaction, and especially studies on the affective aspects of human-chatbot interaction, are relatively scant. The analysis of the research methods used in the studies shows that all research settings included a questionnaire. Other methods, such as biometric measurements, sentiment analysis of the conversation text, user compliance measurements or qualitative interviews, were used in several studies, but with varying frequency.

Keywords: chatbots, human-AI interaction, biometric measurement, affective acceptance

1. Introduction

Human-computer interaction (HCI) as a research field examines how users interact with computers and is also interested in the technology and design of computer programs. Chatbots are of growing importance within HCI since they are designed to interact with people through natural language. Currently, chatbots are developed for use in many areas, such as e-commerce, business, education (Ciechanowski et al., 2019), healthcare, sports and sales, where they may also be used to persuade users to take certain actions (see, e.g., Cameron et al., 2017 and Pricilla et al., 2018).

Chatbots are computer programs that communicate with humans via natural language in a synchronous manner (Russell and Norvig, 2020). Chatbots can communicate with humans in written and in spoken language. They are also called dialogue systems, chatterbots, question answering systems, conversational agents or even virtual assistants. The first well-known chatbot was ELIZA, a simulated psychotherapist developed by Joseph Weizenbaum in the 1960s (Weizenbaum, 1966). Chatbots have traditionally been regarded as a typical example of artificial intelligence (Turing, 1950; Loebner, 2020). Nowadays chatbots are a part of everyday life. Larger companies and organisations widely use chatbots on their websites, and chatbots are integrated into the operating systems of most handheld devices and smart speakers used by individuals. Examples of such chatbots are Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana.

Chatbots should provide users with relevant information in a simple manner and can be used to automate the interaction between users and organisations or to improve the usability of digital services. To fulfil this purpose, first, the technical functionality must be ensured and, second, the communicative competence of the chatbot must be implemented in such a way that the user has a positive experience. Neururer et al. (2018) claim that technical aspects play only a subordinate role for user acceptance and that social aspects are key. According to Chaves and Gerosa (2019), the behaviour (interactional style) and appearance of chatbots are crucial for chatbot acceptance. Xu et al.’s (2017) study supports this claim by showing that around 40% of chatbot conversations tend to be emotional rather than informative. Several studies suggest that emotion-related factors, such as feelings, trust and attitude towards chatbots, play an important role in human-computer interaction (e.g., Lu et al., 2019; Xu et al., 2017). However, these factors were hardly taken into account in previous studies. In order to better understand the emotional aspects of human-chatbot interaction, further investigation of the factors relevant to people’s affective acceptance of chatbots is necessary. Currently, affective acceptance is insufficiently researched, as only a few studies examine this field (e.g., Ciechanowski et al., 2019; Portela and Granell-Canut, 2017; Ratajczyk et al., 2019).

Guided by the research question “How does the interactional style of a chatbot influence affective acceptance by humans?” this literature review captures relevant studies in this research field. Furthermore, we look at studies covering factors other than interactional style to theoretically include possible moderation or mediation effects, and thus come up with a second research question, namely “Which further factors are relevant for the affective acceptance of chatbots in human-chatbot interaction?”. To answer these research questions this literature review identifies and thematically clusters studies that examine factors that influence the affective acceptance of chatbots. To be able to judge the generalizability and scope of the research results, the research methodology applied in the studies is also inspected. The thematic and methodological classifications of approaches are intended to provide a basis for more specific and further research on affective acceptance in human-chatbot interaction.

In the second chapter of this literature review we will explain the method we used to select the studies we analysed. Thereafter we will describe the thematic and methodological clusters identified.

2. Identification of relevant articles and their classification

In order to identify studies relevant to the research questions, we conducted a keyword search in scientific databases and on scientific online platforms. Based on the definition of our object of interest, we searched for studies that included the keyword “chatbot” and synonyms such as “chatterbot”, “conversation interface” or “conversation agent”, combined with keywords such as “affective acceptance”, “emotional acceptance” or “user acceptance”. To answer the second research question, the search was expanded to include generic terms such as “human-chatbot interaction”, “human-computer interaction” and “technology acceptance”. For this study, scientific databases such as EBSCO, ProQuest, Emerald, Sage Premier, Elsevier Science Direct, ACM Digital Library and IEEE Xplore and scientific online platforms such as ResearchGate and Google Scholar were used. In the search and selection process we followed the systematic approach described by Booth et al. (2012). Systematic research and review combines the strengths of a critical review with an exhaustive search process, and addresses broad questions to produce the “best evidence synthesis” (Booth et al., 2012). Following this approach, we gathered a large number of studies and first narrowed them down according to the relevance of their abstracts to our research questions. Based on this first selection of studies, we defined the thematic and methodological categories. Using these categories, a second selection process was executed to filter out the studies most relevant to our research questions.
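
As an illustration, the keyword combinations described above can be expressed as boolean search strings of the kind accepted by most of the databases listed. The exact query strings are not reported here, so the sketch below is a hypothetical reconstruction:

```python
# Hypothetical reconstruction of the boolean search strings described above;
# the exact queries used in the review are not reported in the paper.
chatbot_terms = ['"chatbot"', '"chatterbot"', '"conversation interface"', '"conversation agent"']
acceptance_terms = ['"affective acceptance"', '"emotional acceptance"', '"user acceptance"']
broader_terms = ['"human-chatbot interaction"', '"human-computer interaction"', '"technology acceptance"']

def boolean_query(*term_groups):
    """Join the terms in each group with OR and the groups with AND."""
    return " AND ".join("(" + " OR ".join(group) + ")" for group in term_groups)

print(boolean_query(chatbot_terms, acceptance_terms))  # query for the first research question
print(boolean_query(chatbot_terms, broader_terms))     # expanded query for the second research question
```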

In a first step, the reviewed studies are classified by their thematic approach. After analysing the research articles, the following five thematic clusters were determined:

  1. Affective acceptance. Studies that examine the interrelation between the interactional styles of chatbots and their affective acceptance by users.
  2. User experience. Studies that examine the effects of interaction style on user experience and system satisfaction.
  3. Trust. Studies that examine how to generate trust in interaction with chatbots.
  4. User compliance. Studies that examine user compliance with requests posed by chatbots.
  5. Cultural differences. Studies that examine cultural differences in technology acceptance.

The final number of studies included in each thematic cluster is depicted in Table 1. Each thematic cluster is described in detail in Section 3.
Cluster   Short name              #
1         Affective acceptance    5
2         User experience         5
3         Trust                   8
4         User compliance         5
5         Cultural differences    7
Total                            30

Table 1: The number of research papers analysed for each thematic cluster. The total number of papers analysed in this study is 30.

After the 30 articles had been selected for detailed analysis, they were clustered according to their research focus. Subsequently, all articles were classified pursuant to the research method(s) used. The majority of studies applied experimental methodologies: in the experimental settings, users interact with chatbots that differ in interaction style. A cross-tabulation of the methodologies and the thematic clusters (Table 2) shows the number of studies in each thematic cluster that apply each methodology. The table shows that most studies use questionnaires and that these are often combined with experimental methods such as biometric measurement.

Cluster short name         Biometric     Questionnaire   Sentiment   User      Qualitative
                           measurement                   analysis    action    interview
1. Affective acceptance    2             4               –           1         1
2. User experience         1             4               1           –         –
3. Trust                   1             8               –           –         –
4. User compliance         –             3               –           3         1
5. Cultural differences    –             6               –           –         2

Table 2: A cross-tabulation between the identified thematic clusters and the research methodologies in the dataset under study. The figures indicate the number of times a research method is applied in a thematic cluster; a dash indicates that the method was not applied in that cluster. Note that one research case may apply several research methods.

The identified methodologies are presented in more detail below.

The first research method identified is biometric measurement. This refers to the use of biometric data such as emotion recognition data based on facial expressions, GSR (galvanic skin response) data and eye tracking data. Examples include the study by Elsholz et al. (2019), in which psychophysiological reactions were measured to evaluate the user experience. The study by Przegalinska et al. (2019) combines biometric measurements with a questionnaire to evaluate trust in chatbot interaction. Ciechanowski et al. (2019) also use both biometric data and questionnaires in their study on the affective acceptance of chatbots.
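
As a minimal sketch of how such biometric data can be related to questionnaire results, the following illustrates the kind of correlation analysis reported by Ratajczyk et al. (2019) between EDA reactions and human-likeness ratings. The numbers and variable names are invented for illustration and are not taken from the study:

```python
import numpy as np
from scipy import stats

# Illustrative data: per-participant EDA reaction amplitude (in microsiemens)
# for a chatbot stimulus, and the human-likeness rating (1-7 Likert scale)
# the same participant gave that stimulus in the questionnaire.
eda_amplitude = np.array([0.8, 0.6, 0.9, 0.4, 0.3, 0.5, 0.7, 0.2])
human_likeness = np.array([2, 3, 1, 5, 6, 4, 2, 7])

# Ratajczyk et al. (2019) report a negative correlation: the less human-like
# the chatbot model, the stronger the EDA reaction of the participants.
r, p = stats.pearsonr(human_likeness, eda_amplitude)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```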

Questionnaires are used to collect quantitative or qualitative data – depending on the number of respondents and on the type of questions. They are used to gather both generic demographic data and data specific to the variables under investigation. They are often administered as post-experimental tasks after the user has interacted with the chatbot. In our data set questionnaires were used in most studies – either alone or in combination with other research methods.
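
A common analysis step for such questionnaires is to average the items measuring one construct into a scale score after checking internal consistency. The sketch below, assuming hypothetical five-point Likert items for affective acceptance, shows this with Cronbach’s alpha; neither the items nor the data come from the reviewed studies:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: participants x questionnaire-items matrix of Likert scores."""
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative responses: 6 participants, 4 acceptance items on a 1-5 scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])
alpha = cronbach_alpha(responses)
scale_score = responses.mean(axis=1)  # per-participant acceptance score
print(f"Cronbach's alpha = {alpha:.2f}")  # values above ~0.7 are usually deemed acceptable
```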

Sentiment analysis of the textual input of the user interacting with a chatbot is a research method that seems well suited to analysing the affective acceptance of the chatbot by the user. However, it is rarely used in the studies under investigation: in our data set, we found only one study applying this method. Feine et al. (2019) used data from both sentiment analysis of customers’ utterances and a questionnaire to measure chatbot service encounter satisfaction.
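
To make the method concrete, the sketch below scores a user’s utterances with an off-the-shelf sentiment analyser (NLTK’s VADER, used here only as a stand-in; Feine et al. (2019) do not necessarily use this tool) and averages the scores as a crude proxy for affective acceptance:

```python
# A minimal sketch of sentiment analysis over a user's chat utterances.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

user_turns = [
    "Hi, I need to change my delivery address.",
    "That was quick, thanks!",
    "Hmm, you did not understand my question at all.",
]
# The compound score lies in [-1, 1]; averaging it over the conversation
# gives a rough proxy for the user's affective stance towards the chatbot.
scores = [sia.polarity_scores(turn)["compound"] for turn in user_turns]
print(f"mean sentiment over the conversation: {sum(scores) / len(scores):+.2f}")
```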

User action as a research method means that user behaviour is measured when investigating the affective acceptance of the chatbot by the user. Here, the hypothesis is that a high level of affective acceptance correlates with a high level of user compliance with requests posed by the chatbot. An example is the study by Adam et al. (2020), in which the researchers collected data both on user compliance and through a post-experiment questionnaire.
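
Compliance data of this kind are typically counts of users who did or did not comply in each chatbot condition, and a contingency-table test can check whether the interaction style matters. The sketch below uses invented counts, not the figures reported by Adam et al. (2020):

```python
from scipy.stats import chi2_contingency

# Invented counts, not the figures reported by Adam et al. (2020):
# rows = chatbot condition, columns = (complied, did not comply).
observed = [
    [62, 38],  # human-like chatbot
    [45, 55],  # baseline chatbot
]
chi2, p, dof, expected = chi2_contingency(observed)
rate_humanlike = observed[0][0] / sum(observed[0])
rate_baseline = observed[1][0] / sum(observed[1])
print(f"compliance: {rate_humanlike:.0%} vs {rate_baseline:.0%}, "
      f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```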

Qualitative interviewing is a research method in which the user or another person of interest, such as an expert in the field of chatbots, is interviewed orally. The interviews may be structured, semi-structured or thematic interviews with a high degree of freedom. An example of the use of this research method is the paper by Neururer et al. (2018), in which the authors interviewed experts in the field of AI and human-computer interaction on the authenticity of chatbots. They did not employ any experimental setting; however, qualitative interviews may also be applied in conjunction with other research methods.

3. Thematic clusters of research

In the following, the five thematic clusters of research identified are described in detail.

3.1 Studies that examine the interrelation between the interactional style of chatbots and their affective acceptance by users

Ciechanowski et al. (2019) and Ratajczyk et al. (2019) conducted studies based on the uncanny valley theorem and came to similar conclusions. The uncanny valley concept was first introduced by Masahiro Mori (2012) to describe his observation that the more human-like robots appear, the more appealing they are – but only up to a certain point. The uncanny valley can thus be defined as people’s negative reaction to certain lifelike robots (Mori, 2012). Ciechanowski et al. (2019) found that participants experienced a weaker uncanny effect and less negative affect with a simpler, mainly text-based chatbot than with an avatar chatbot. They found that the further a chatbot appeared to deviate from the “norm”, the more negative reactions it caused. The voice and animation of the avatar chatbot were important for the participants and prompted negative reactions when they unsuccessfully imitated a human being. A more human-like chatbot raises expectations of the chatbot’s performance, which often resulted in dissatisfaction, and the study concluded that bots should not be designed to imitate a human. Ciechanowski et al. (2019) also found that simple chatbots prompted less intense psychophysiological reactions (electrodermal activity (EDA), heart rate (HR), and electromyography (EMG)). An important limitation of this study is the use of overlapping text, sound and video for the chatbot. Ratajczyk et al. (2019) analysed the electrodermal activity (EDA) response to different stimuli with five main hypotheses. Analysing the EDA data, they could confirm two of their hypotheses: firstly, that there is a negative correlation between the EDA reaction and the human-likeness of the chatbot models, meaning that the less human-like the chatbot avatar, the stronger the EDA reaction of the participants; secondly, that the reaction time for the human-likeness question correlates negatively with the EDA reaction, meaning that questionnaire results correspond to psychophysiological reactions and that – put in other terms – participants know about their emotions (Ratajczyk et al., 2019).

Staying in this line of research, Portela and Granell-Canut (2017) observed interactions with chatbots with regard to the emotional engagement of the participants. They presented their key findings according to the two main evaluation methods, namely evaluation of the emotional and psychological state of participants and evaluation of the conversation experience. One key takeaway is that participants had preconceived ideas about relations with chat agents and described different emotional insights. Some participants were sceptical and remained so after the experiment; others changed their attitude after having had a positive experience. The attitude of the participants was crucial for the possibility of creating more elaborate and intense conversations. Regarding the conversation experience, participants reported positive effects when the conversation was more open and the chatbot used social cues and empathetic signs. This was shown, for example, by participants being highly engaged when the chatbot “remembered” something they had stated before. The effect of response time on participants’ affect remains unclear in this study, with some participants reacting positively to a delayed response and others showing disappointment at a delay. The same holds for unexpected behaviours of the chatbots (Portela and Granell-Canut, 2017).

Moving away from psychophysiological reactions, the study by Neururer et al. (2018) focused primarily on the perceived authenticity of chatbots. They identified five key contributors to authenticity, among which coherence and learning from experience were the main factors. According to Neururer et al. (2018), the three other key contributors to authenticity are transparency, anthropomorphism (the understanding of human physiology, history and culture) and conversational behaviour (meaning the chatbot writes, pauses and talks like its human counterpart). They concluded that acting predictably and transparently creates trust in chatbots, and that cultural and conversational awareness supports the development of an agent persona. The study also emphasises that authenticity of chatbots is a multi-characteristic concept. The interview data and the literature used in the study show that task-orientation of a chatbot does not add to the perceived authenticity, whereas personal coherence or individual conversations do indeed add to authenticity (Neururer et al., 2018).

In the same line of research on the perception of chatbots, the study by Go and Sundar (2019) explores which traits affect perceptions of the human-likeness of a chatbot. One key finding of the study is that a high level of message interactivity compensates for the impersonal nature of a chatbot that is low on anthropomorphic visual cues. Another important finding is that the message interactivity between participants and chatbots influenced not only the participants’ evaluations of the bot but also their attitude towards the website. The authors also concluded that revealing the agent’s machine identity can capitalize on lowered user expectations, whereas identifying the agent as human raises user expectations for interactivity. In line with the findings of Ciechanowski et al. (2019), Go and Sundar (2019) claim that the identity cue of a bot (whether a chat agent is identified as a chatbot or as a human) sets the participants’ expectations for the performance of the agent. This is also supported by another finding of Go and Sundar: participants evaluated a chatbot identified as human favourably when it delivered interactive conversations, and more negatively when the communication was less interactive.

3.2 Studies that examine the effects of interaction style on user experience and system satisfaction

The second stream of research consists of studies that examine the effects of interaction style on user experience and user satisfaction. Satisfaction is a well-established construct in information systems research for evaluating the success and effectiveness of a system (Au et al., 2002). Ren et al. (2019) conducted a literature review on chatbot usability studies and found that the largest group of papers dealt with satisfaction. Several measures of satisfaction were identified: ease of use, context-dependent questions, complexity control, physical discomfort of the interface, pleasure, intention to use the chatbot again, and enjoyment and learnability (Ren et al., 2019). However, there is a difference between user satisfaction and service encounter satisfaction. User satisfaction can be defined as the degree to which users’ needs are satisfied when a product or system is used in a specified context of use (Hornbæk, 2006), whereas service encounter satisfaction refers to the post-consumption evaluation of a service encounter (Verhagen et al., 2014). In connection with chatbots, service encounter satisfaction plays an important role, because most bots are service chatbots. According to Verhagen et al. (2014), service providers’ friendliness (being polite and responsive, etc.) and professionalism influence service encounter satisfaction. Further, Celsi and Gilly (2003) found that information comprehensiveness and service process efficiency are factors influencing service encounter satisfaction. Ashfaq et al. (2020) found that poor quality of information in terms of up-to-dateness, relevance or correctness leads to poor user experience. If a chatbot provides up-to-date, reliable information and prompt responses, and offers individualized attention, users’ satisfaction will rise.

However, research suggests that users consider not only the content of a message or a question but also how it is delivered (Kim et al., 2019). For instance, Xu et al. (2017) found three important measures that are used to assess the quality of a chatbot, one of which is empathy. The other two, namely appropriateness and helpfulness, refer to the content. Chaves and Gerosa (2019) conducted a comprehensive literature review and found that conversational intelligence (demonstrating awareness of the topic discussed), social intelligence and personalization are critical elements that affect user satisfaction.

In the literature, a basic distinction is made between chatbots based on their interaction style: a social-oriented or a task-oriented interaction style (Van Dolen et al., 2007). A social-oriented interaction style is characterized by informal language, greetings and small talk, whereas a task-oriented interaction style involves formal language and on-task dialogues to achieve functional goals (Chattaraman et al., 2019). Verhagen et al. (2014) found that a social-oriented interaction style elicits a higher level of social presence than a task-oriented communication style. The authors state that social presence (with friendliness, expertise and smile as its determinants) and personalization are key drivers of satisfaction with a chatbot (Verhagen et al., 2014). Similarly, Kim et al. (2019) found that a casual conversational style produced higher enjoyment than a formal conversational style. A study by De Cicco et al. (2020) addressed the implications that chatbots’ interaction styles have for younger consumers using them for online food delivery services. The findings revealed that interaction with the social-oriented chatbot increased users’ perception of social presence and perceived enjoyment. However, the authors did not find a significant effect of the interaction style on trust and intention to use (De Cicco et al., 2020). Elsholz et al. (2019) tested two different language styles and found that a more modern chatbot version was more often referred to as being ‘easy to use’, whereas the ‘Shakespearean’ chatbot version was more often referred to as being ‘fun to use’. Likewise, Liebrecht and van Hooijdonk (2020) identified several linguistic elements that should be incorporated in a chatbot in order to increase anthropomorphism: empathy, support, humour and an informal attitude.
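
The distinction between the two interaction styles can be made concrete with response templates. The wording below is invented for illustration and is not taken from any of the reviewed studies:

```python
# Purely illustrative templates contrasting the two interaction styles
# discussed above; the wording is invented, not taken from any study.
def task_oriented_reply(order_id: str, minutes: int) -> str:
    # Formal, on-task: only the functional information.
    return f"Order {order_id} will be delivered in {minutes} minutes."

def social_oriented_reply(name: str, order_id: str, minutes: int) -> str:
    # Informal: greeting, small talk and social cues around the same content.
    return (f"Hi {name}, great to see you again! Your order {order_id} "
            f"is on its way and should reach you in about {minutes} minutes. "
            f"Enjoy your meal!")

print(task_oriented_reply("A-1042", 25))
print(social_oriented_reply("Sam", "A-1042", 25))
```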

3.3 Studies that examine how to generate trust in interaction with chatbots

Trust is a factor that can strongly affect users’ acceptance of chatbots and their intention to use them (Kasilingam, 2020). It is therefore important to understand how trust in interaction with chatbots is generated. Various studies examine users’ trust in chatbots as one of many theoretical constructs that determine users’ attitudes and behaviour towards chatbots (Kasilingam, 2020; Lee et al., 2020; Roberta et al., 2020; Toader et al., 2019). However, there are currently only a few studies that have users’ trust in chatbots as their central topic (Nordheim et al., 2019; Aoki, 2020; Przegalinska et al., 2019). The studies on trust in chatbots are based on a wide range of definitions or concepts of trust. The range spans from adaptations of more general concepts of trust in technology (Aoki, 2020), to trust as a risk reduction measure (Kasilingam, 2020), to trust as a conglomerate of different beliefs regarding the chatbot, i.e., its competence, benevolence and integrity (Müller et al., 2019). Despite the heterogeneous constructs of trust used in the literature, central research questions emerge: Which factors influence users’ trust in chatbots? How does users’ trust in chatbots influence their behaviour?

Nordheim et al. (2019) differentiate between three factors that impact trust in chatbots: chatbot-related factors, environment-related factors and user-related factors. Within the framework of their model, results from different studies and authors on the various factors influencing trust can be structured in a clear form. The most important chatbot-related factor that influences user trust is the expertise of the chatbot. This includes the ability to give a correct answer, to interpret the user’s question correctly and to formulate the answer specifically, eloquently and quickly (Nordheim et al., 2019). The topic for which the chatbot was designed also influences user trust: Aoki (2020) found that users are more likely to trust chatbots when they give tips on separating waste than on parental support. Another chatbot-related factor is the communication style of the chatbot. A higher self-disclosure of the chatbot promotes the self-disclosure of the user (Lee et al., 2020), and a socially competent, polite demeanour is perceived positively by the user. The resemblance of a chatbot to a woman triggers more positive responses from consumers (Toader et al., 2019). In a study with millennials, Roberta et al. (2020) found that socially competent behaviour of the chatbot, in contrast to goal-oriented behaviour, increases the perception of social presence, which has a positive effect on the trust of millennials. Transparency and honesty, predictability and controllability are further chatbot-related factors that influence trust (Przegalinska et al., 2019). Environment-related factors include risk, which depends on whether the user must disclose personal data. Another factor in this category is brand, i.e., how the brand environment of the chatbot is perceived. An important user-related factor is the propensity to trust technology (Nordheim et al., 2019). Müller et al. (2019) identified extraversion and agreeableness as personality factors that go hand in hand with a higher level of trust among chatbot users. The relevance of trust as a factor that influences users’ acceptance of chatbots differs between age groups: for older users, who have little experience with online shopping, trust has a stronger impact on the acceptance of a chatbot than for younger users (Kasilingam, 2020).

3.4 Studies that examine user compliance with requests posed by AI-based chatbots

Chatbots are widely used with the intention of influencing users’ actions. This type of chatbot may be differentiated from chatbots that purely serve as sources of information to the user. The goal of such a chatbot is often to sell something to the user, to obtain users’ compliance with certain health or other guidelines, or simply to get users to give feedback to a service provider.

Adam et al. (2020) compare different chatbot interaction strategies for getting users to fill in a feedback form after having interacted with the chatbot in a customer service scenario. In their study, they find that human-like behaviour, as well as the need to stay consistent, significantly increases the chance that users comply with a chatbot’s request.

In the field of healthcare, Cameron et al. (2017) report that AI-enabled chatbots may result in greater user compliance than relying solely on human-human interaction. The WeightMentor chatbot (Holmes et al., 2019) uses motivational dialogues to support user compliance with weight loss maintenance. The literature also reports several studies on chatbots that aim to facilitate or increase sales (see, e.g., Pricilla et al., 2018, or Luo et al., 2019). These studies measure the success of the bots by user compliance, i.e., whether users buy.

3.5 Studies that examine cultural differences in technology acceptance in general

It may seem that this thematic cluster of studies does not fit very well with the other clusters because it engages with studies that do not all directly examine issues related to chatbots. Nevertheless, this thematic cluster is relevant to the intent of this literature review. The studies highlighted in this section examine cultural differences in the acceptance of technology. Knowledge of the influence of cultural differences on technology acceptance is necessary to understand and interpret findings on affective acceptance of chatbots in the most meaningful way. This is especially true when it comes to understanding the impact of the interactional style of a chatbot on affective acceptance, because cultural differences play a crucial role in the formal design of communication processes.

The technology acceptance model (TAM) (Davis et al., 1989) defines perceived usefulness and perceived ease of use as criteria that influence the acceptance of technology. The Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003) identifies performance expectancy, effort expectancy, social influence and facilitating conditions as crucial for technology acceptance. The studies discussed in this section often refer to these theories and stress the need to investigate the influence of culture on the acceptance criteria defined in them. Cultural effects on the TAM and UTAUT models are examined by the following studies. Zakour (2004) analyses cultural differences in information technology acceptance by evaluating six cultural value dimensions: individualism/collectivism, power distance, masculinity/femininity, uncertainty avoidance, monochronic/polychronic time and high context/low context. The author finds several differences related to these value dimensions. Srite and Karahanna (2006) also examine the effects of the cultural values of masculinity/femininity, individualism/collectivism, power distance and uncertainty avoidance on technology acceptance and find that masculinity/femininity values and uncertainty avoidance affect technology acceptance. Srite (2006) tests the TAM model for two different cultures, China and the USA, including a measurement of cultural values. Fernández Robin et al. (2014) validate the TAM model in Chile, considering the cultural factors of this country. Thowfeek and Jaafar (2012) evaluate the influence of cultural factors on the adoption of e-learning programs by instructors. Carey and Kacmar (2010) examine culturally specific user interface preferences that affect technology acceptance and attitude towards technology. Their findings demonstrate that culturally specific interface features can be identified, measured, and used to predict the likelihood of acceptance and use of technology.
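
Analytically, TAM- and UTAUT-style studies typically regress usage intention on the acceptance criteria and, in cross-cultural work, add cultural value dimensions as moderators. The sketch below illustrates this with synthetic data; the variable names and effect sizes are assumptions, not results from the studies cited above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration of a TAM-style analysis: regressing intention to use
# on perceived usefulness (PU) and perceived ease of use (PEOU), with a
# cultural value dimension (here uncertainty avoidance, UA) as a moderator.
# All numbers are invented.
rng = np.random.default_rng(0)
n = 200
pu = rng.normal(4, 1, n)
peou = rng.normal(4, 1, n)
ua = rng.normal(0, 1, n)  # standardized cultural value score
intention = 0.5 * pu + 0.3 * peou - 0.2 * ua * pu + rng.normal(0, 1, n)

df = pd.DataFrame({"intention": intention, "pu": pu, "peou": peou, "ua": ua})
model = smf.ols("intention ~ pu + peou + ua + ua:pu", data=df).fit()
print(model.summary().tables[1])  # coefficients, including the moderation term ua:pu
```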

The effect of societal pressure on users to engage in a certain behaviour varies by culture. This form of social influence on technology acceptance is examined by Bandyopadhyay and Fraccastoro (2007). Their study confirms that culture has a significant effect on technology acceptance. Cultural differences in the acceptance of chatbots are explicitly examined by van der Goot and Pilgrim (2020). They used qualitative analysis of in-depth interviews to investigate differences in perceptions of chatbot communication in a customer service context, interviewing people of different age groups. The findings indicate that the main motivation for using chatbots is the same for the older age group (54–81 years) and the younger age group (19–30 years): getting their customer queries answered in a fast and convenient manner. Both age groups show the same frustration when the chatbot does not understand and does not answer their queries. Furthermore, both age groups experience difficulties in assessing the security of the chatbot. Van der Goot and Pilgrim (2020) find differences between the age groups in the need for additional human contact and in the factors that contribute to perceived ease of use and perceived security.

4. Conclusion and future work

This study presents a literature review on studies dealing with the interrelation between the affective acceptance of chatbots by users and the interaction style of the chatbot. Based on the collected database of research papers, the authors constructed a classification of research themes and research methodologies used. Each class is analysed separately.

Studies on the affective acceptance of chatbots by humans focus on users’ expectations and human-likeness. Findings reveal that the criteria for affective acceptance are different for users who are aware of interacting with a bot. If a bot tries to interact in a human-like manner, there is a risk that an uncanny effect is triggered and the user is left with a feeling of eeriness. An important result of the studies that examine the effects of interaction style on user experience and system satisfaction is that not only what the chatbot says but also how it is said affects the user experience. Socially oriented chatbots with an informal communication style and small talk capabilities were regarded as more acceptable by users. The studies on users’ trust in chatbots revealed that the content of the interaction heavily impacts the user experience of the chatbot, and that levels of trust in chatbots vary largely between user segments. Studies on interaction style and user compliance with requests posed by chatbots show that both the foot-in-the-door technique and signs of social presence in the conversation lead to a greater level of user compliance. In the field of sales, research has shown that users’ purchasing intention decreases dramatically if the user is aware of interacting with a bot instead of a human. Research on cultural differences in technology acceptance shows that there are cultural conventions in user interface design that also affect the acceptance of chatbots. In addition, the social pressure to use innovative technologies varies between cultures: the higher this social pressure, the more likely individuals are to accept new technologies.

The results of the analysed studies are snapshots in the development of the rapidly improving technology of chatbots and human-computer interaction. Only the future will show whether research results such as the uncanny valley effect are stable patterns of user behaviour towards chatbots or just a temporary outflow of user dissatisfaction with an immature technology. The increasing competence of chatbots as conversation partners, and thus the safeguarding of the basic functionality of this product, could steer the focus of users towards affective components such as the design, interaction style or entertainment value of chatbots. This study presented an initial literature review on how the different interaction styles of chatbots impact their affective acceptance. Due to the scarcity of research literature in this field, a systematic literature review with a larger data set would have been difficult at this stage. However, it will be interesting to perform a replication study in the future: chatbots are becoming more and more mainstream in several fields, and research is thus becoming more abundant. Future work should therefore focus on examining how to define the competence that enables chatbots to adapt to the individual user. The results of the studies analysed show that the individual expectations and dispositions of users strongly influence their experience with chatbots. To generate a positive user experience, chatbots should therefore be able to recognize and anticipate user expectations. To enable this, it will first be necessary to investigate how the expectations and dispositions of users are expressed in the interaction. Studies that examine the psychophysiological reactions of users during interaction with chatbots could provide valuable results for this.

References

  • Abdul-Kader, S. A., & Woods, J. C. (2015). Survey on chatbot design techniques in speech conversation systems. International Journal of Advanced Computer Science and Applications, 6(7).
  • Adam, M., Wessel, M., & Benlian, A. (2020). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets. Advance online publication. https://doi.org/10.1007/s12525-020-00414-7
  • Aoki, N. (2020). An experimental study of public trust in AI chatbots in the public sector. Government Information Quarterly, 37. https://doi.org/10.1016/j.giq.2020.101490
  • Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. (2020). I, Chatbot: Modeling the Determinants of Users’ Satisfaction and Continuance Intention of AI-Powered Service Agents. Telematics and Informatics, 54, 101473. https://doi.org/10.1016/j.tele.2020.101473
  • Au, N., Ngai, E. W. T., & Cheng, T. C. E. (2002). A critical review of end-user information system satisfaction research and a new research framework. Omega, 30(6), 451–478. https://EconPapers.repec.org/RePEc:eee:jomega:v:30:y:2002:i:6:p:451-478
  • Bandyopadhyay, K., & Fraccastoro, K. (2007). The Effect of Culture on User Acceptance of Information Technology. Communications of the Association for Information Systems, 19, Article 23, 23–29. https://doi.org/10.17705/1CAIS.01923
  • Booth, A., Papaioannou, D. & Sutton, A. (2012). Systematic Approaches to a Successful Literature Review. London: SAGE Publications Ltd.
  • Cameron, G., Cameron, D., Megaw, G., Bond, R., Mulvenna, M., O’Neill, S., Armour, C., & McTear, M. (2017). Towards a Chatbot for Digital Counselling. In HCI ’17, Proceedings of the 31st British Computer Society Human Computer Interaction Conference. BCS Learning & Development Ltd. https://doi.org/10.14236/ewic/HCI2017.24
  • Carey, J. & Kacmar, Ch. (2010). Cultural and Language Affects on Technology Acceptance and Attitude: Chinese Perspectives. In International Journal of Information Technology, (16)1. http://intjit.org/cms/journal/volume/16/1/161_1.pdf
  • Celsi, M., & Gilly, M. (2003). eTailQ: Dimensionalizing, Measuring and Predicting Etail Quality. Journal of Retailing, 79, 183–198. https://doi.org/10.1016/S0022-4359(03)00034-4
  • Chattaraman, V., Kwon, W., Gilbert, J., & Ross, K. (2019). Should AI-based, conversational digital assistants employ social- or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Computers in Human Behavior, 90, 315–330. https://doi.org/10.1016/j.chb.2018.08.048
  • Chaves, A. P., & Gerosa, M. A. (2019). How should my chatbot interact? A survey on human-chatbot interaction design. arXiv preprint arXiv:1904.02743.
  • Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539–548. https://doi.org/10.1016/j.future.2018.01.055
  • Cicco, R. de, Silva, S. C. L. da C. e, & Alparone, F. R. (2020). “It’s on its way”: Chatbots applied for online food delivery services, social or task-oriented interaction style? Journal of Foodservice Business Research, 1–25. https://doi.org/10.1080/15378020.2020.1826268
  • Davis, F., Bagozzi, R., & Warshaw, P. (1989). User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science, 35, 982–1003. https://doi.org/10.1287/mnsc.35.8.982
  • Elsholz, E., Chamberlain, J., & Kruschwitz, U. (2019). Exploring Language Style in Chatbots to Increase Perceived Product Value and User Engagement. In CHIIR ’19, Proceedings of the 2019 Conference on Human Information Interaction and Retrieval (pp. 301–305). Association for Computing Machinery. https://doi.org/10.1145/3295750.3298956
  • Fernández Robin, C., McCoy, S., Yáñez Sandivari, L., & Yáñez Martínez, D. (2014). Technology Acceptance Model: Worried about the Cultural Influence? In F. F.-H. Nah (Ed.), HCI in Business (pp. 609–619). Springer International Publishing
  • Gentner, T., Neitzel, T., Schulze, J., & Buettner, R. (2020, July). A Systematic Literature Review of Medical Chatbot Research from a Behavior Change Perspective. In 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC) (pp. 735-740). IEEE.
  • Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304–316. https://doi.org/10.1016/j.chb.2019.01.020
  • Holmes, S., Moorhead, A., Bond, R., Zheng, H., Coates, V., & McTear, M. (2019). Usability Testing of a Healthcare Chatbot: Can We Use Conventional Methods to Assess Conversational User Interfaces? In Proceedings of the 31st European Conference on Cognitive Ergonomics (pp. 207–214). Association for Computing Machinery. https://doi.org/10.1145/3335082.3335094
  • Hornbæk, K. (2006). Current practice in measuring usability: Challenges to usability studies and research. International Journal of Human-Computer Studies, 64, 79–102. https://doi.org/10.1016/j.ijhcs.2005.06.002
  • Kasilingam, D. L. (2020). Understanding the attitude and intention to use smartphone chatbots for shopping. Technology in Society, 62, 101280. https://doi.org/10.1016/j.techsoc.2020.101280
  • Kim, S., Lee, J., & Gweon, G. (2019). Comparing Data from Chatbot and Web Surveys: Effects of Platform and Conversational Style on Survey Response Quality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery. https://doi.org/10.1145/3290605.3300316
  • Lee, Y.‑C., Yamashita, N., & Huang, Y. (2020). Designing a Chatbot as a Mediator for Promoting Deep Self-Disclosure to a Real Mental Health Professional. Proceedings of the ACM on Human-Computer Interaction, 4, 27. https://doi.org/10.1145/3392836
  • Liebrecht, C., & van Hooijdonk, C. (2020). Creating Humanlike Chatbots: What Chatbot Developers Could Learn from Webcare Employees in Adopting a Conversational Human Voice. In A. Følstad, T. Araujo, S. Papadopoulos, E. L-C. Law, O-C. Granmo, E. Luger, & P. B. Brandtzaeg (Eds.), Chatbot Research and Design: Third International Workshop, CONVERSATIONS 2019, Amsterdam, The Netherlands, November 19–20, 2019, Revised Selected Papers (pp. 51-64). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11970 LNCS). Springer. https://doi.org/10.1007/978-3-030-39540-7_4
  • Loebner (2020). The Official Web Site of the Loebner Prize. https://aisb.org.uk/aisb-events/ (retrieved on 17.12.2020)
  • Lu, Y., Papagiannidis, S., & Alamanos, E. (2019). Exploring the emotional antecedents and outcomes of technology acceptance. Computers in Human Behavior, 90, 153–169.
  • Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases. Marketing Science, 38(6), 937–947. https://doi.org/10.1287/mksc.2019.1192
  • Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100.
  • Müller, L., Mattke, J., Maier, C., Weitzel, T., & Graser, H. (2019). Chatbot Acceptance: A Latent Profile Analysis on Individuals’ Trust in Conversational Agents. In Proceedings of the 2019 on Computers and People Research Conference (pp. 35–42). Association for Computing Machinery. https://doi.org/10.1145/3322385.3322392
  • Neururer, M., Schlögl, S., Brinkschulte, L., & Groth, A. (2018). Perceptions on Authenticity in Chat Bots. Multimodal Technologies and Interaction, 2(3), 60. https://doi.org/10.3390/mti2030060
  • Nordheim, C. B., Følstad, A., & Bjørkli, C. A. (2019). An Initial Model of Trust in Chatbots for Customer Service—Findings from a Questionnaire Study. Interacting with Computers, 31(3), 317–335. https://doi.org/10.1093/iwc/iwz022
  • Nuruzzaman, M., & Hussain, O. K. (2018, October). A survey on chatbot implementation in customer service industry through deep neural networks. In 2018 IEEE 15th International Conference on e-Business Engineering (ICEBE) (pp. 54–61). IEEE.
  • Portela, M., & Granell-Canut, C. (2017). A New Friend in Our Smartphone? Observing Interactions with Chatbots in the Search of Emotional Engagement. In Interacción ’17, Proceedings of the XVIII International Conference on Human Computer Interaction. Association for Computing Machinery. https://doi.org/10.1145/3123818.3123826
  • Pricilla, C., Lestari, C.D.P. & Dharma, D. (2018). Designing Interaction for Chatbot-Based Conversational Commerce with User-Centered Design. In 2018 5th International Conference on Advanced Informatics: Concept Theory and Applications (ICAICTA)
  • Przegalinska, A., Ciechanowski, L., Stroz, A., Gloor, P., & Mazurek, G. (2019). In bot we trust: A new methodology of chatbot performance measures. Business Horizons, 62(6), 785–797. https://doi.org/10.1016/j.bushor.2019.08.005
  • Ratajczyk, D., Jukiewicz, M., & Łupkowski, P. (2019). Evaluation of the uncanny valley hypothesis based on declared emotional response and psychophysiological reaction. Bio-Algorithms and Med-Systems. Advance online publication. https://doi.org/10.1515/bams-2019-0008
  • Ren, R., Castro, J., Acuña, S., & Lara, J. (2019). Usability of Chatbots: A Systematic Mapping Study. https://doi.org/10.18293/SEKE2019-029
  • Roberta, C. de, Costa, e. S. S., & Romana, A. F. (2020). Millennials’ attitude toward chatbots: An experimental study in a social relationship perspective. International Journal of Retail & Distribution Management, 48(11), 1213–1233. https://doi.org/10.1108/IJRDM-12-2019-0406
  • Russell, S., & Norvig, P. (2020). Artificial intelligence: a modern approach. Fourth Edition. Pearson. ISBN 978-0134610993.
  • Srite, M. (2006). Culture as an Explanation of Technology Acceptance Differences: An Empirical Investigation of Chinese and US Users. Australasian J. Of Inf. Systems, 14. https://doi.org/10.3127/ajis.v14i1.4
  • Srite, M., & Karahanna, E. (2006). The Role of Espoused National Cultural Values in Technology Acceptance. MIS Quarterly, 30, 679–704. https://doi.org/10.2307/25148745
  • Thowfeek, M. H., & Jaafar, A. (2012). The Influence of Cultural Factors on the Adoption of E-Learning: A Reference to a Public University in Sri Lanka. Applied Mechanics and Materials, 263-266, 3424–3434. https://doi.org/10.4028/www.scientific.net/AMM.263-266.3424
  • Toader, D.‑C., Boca, G., Toader, R., Măcelaru, M., Toader, C., Ighian, D., & Rădulescu, A. T. (2019). The Effect of Social Presence and Chatbot Errors on Trust. Sustainability (Basel, Switzerland), 12. https://doi.org/10.3390/su12010256
  • Turing, A. (1950). “Computing Machinery and Intelligence”, Mind, 59 (236): 433–60, doi:10.1093/mind/lix.236.433
  • Van der Goot, M. J., & Pilgrim, T. (2020). Exploring Age Differences in Motivations for and Acceptance of Chatbot Communication in a Customer Service Context. In A. Følstad, T. Araujo, S. Papadopoulos, E. L.-C. Law, O.-C. Granmo, E. Luger, & P. B. Brandtzaeg (Eds.), Chatbot Research and Design (pp. 173–186). Springer International Publishing.
  • Van Dolen, W. M., Dabholkar, P. A., & de Ruyter, K. (2007). Satisfaction with Online Commercial Group Chat: The Influence of Perceived Technology Attributes, Chat Group Characteristics, and Advisor Communication Style. Journal of Retailing, 83(3), 339–358. https://doi.org/10.1016/j.jretai.2007.03.004
  • Venkatesh, V., Morris, M., Davis, G., & Davis, F. (2003). User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27, 425–478. https://doi.org/10.2307/30036540
  • Verhagen, T., van Nes, J., Feldberg, F., & van Dolen, W. (2014). Virtual Customer Service Agents: Using Social Presence and Personalization to Shape Online Service Encounters. Journal of Computer-Mediated Communication, 19(3), 529–545. https://doi.org/10.1111/jcc4.12066
  • Wallace, R. S. (2009). The anatomy of A.L.I.C.E. In R. Epstein, G. Roberts, & G. Beber (Eds.), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (pp. 181–210). Springer Netherlands, Dordrecht.
  • Weizenbaum, J. (1966), “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine”. Communications of the ACM, 9 (1): 36–45, doi:10.1145/365153.365168
  • Xu, A., Liu, Z., Guo, Y., Sinha, V., & Akkiraju, R. (2017). A New Chatbot for Customer Service on Social Media. Proceedings of the 2017 CHI conference on human factors in computing systems. ACM, New York, NY, USA (2017), 3506–3510.
  • Zakour, A. B. (2004, February). Cultural differences and information technology acceptance. In Proceedings of the 7th annual conference of the Southern association for information systems, 156–161.


peer-reviewed

Aunimo, L. & Kauttonen, J. (eds.) (2021). Proceedings of the Workshop on Human-AI Interaction held in conjunction with the 9th Artificial Intelligence and Natural Language Conference, AINL 2020, Helsinki, Finland, October 7–9, 2020.

How to cite:

Dobrowsky, D., Aunimo, L., Janous, G., Pezenka, I., & Weber, T. (2020). The Influence of Interactional Style on Affective Acceptance in Human-Chatbot Interaction – A Literature Review. AINL: Artificial Intelligence and Natural Language Conference. Workshop on Human-AI Interaction. 7.-9.10.2020, online. http://urn.fi/URN:NBN:fi-fe2021101451016