
Customer service automation: when do chatbots reduce satisfaction, and when do they help?


UK Dissertations

Abstract

This dissertation examines the conditions under which customer service chatbots enhance or diminish customer satisfaction, synthesising findings from peer-reviewed literature published between 2020 and 2025. Through systematic literature review, the study identifies critical moderating factors including task complexity, customer emotional state, chatbot design characteristics, and trust considerations. Findings reveal that chatbots demonstrably improve satisfaction when handling low-complexity, functional tasks with high information quality, speed, and appropriate empathetic communication. Conversely, chatbots reduce satisfaction when deployed for complex or emotionally charged service encounters, when anthropomorphic features meet angry customers, or when functional performance fails to meet expectations. Privacy concerns and trust deficits further compound negative outcomes. The analysis demonstrates that the binary question of whether chatbots help or hurt customer satisfaction oversimplifies a nuanced relationship mediated by design choices, deployment contexts, and customer characteristics. Practical implications suggest organisations should implement hybrid service models featuring intelligent escalation pathways, match chatbot deployment to task characteristics, and carefully calibrate anthropomorphic design features to context. Future research directions include longitudinal studies examining satisfaction trajectories and cross-cultural variations in chatbot acceptance.

Introduction

The proliferation of artificial intelligence (AI) in customer service represents one of the most significant transformations in business-consumer interaction over the past decade. Chatbots—software applications designed to simulate human conversation through text or voice interfaces—have become ubiquitous across industries, from banking and retail to healthcare and hospitality. Gartner (2022) predicts that 85% of customer interactions will be managed without human agents, with chatbots playing a central role in this automation. This rapid adoption reflects organisational imperatives to reduce operational costs, provide round-the-clock service availability, and meet evolving consumer expectations for immediate assistance.

However, the relationship between chatbot deployment and customer satisfaction remains contested and contextually dependent. Whilst some organisations report substantial improvements in customer experience metrics following chatbot implementation, others document significant backlash, abandoned interactions, and reputational damage. This apparent contradiction suggests that the question of whether chatbots enhance or diminish satisfaction cannot be answered categorically; rather, outcomes depend upon a complex interplay of design decisions, deployment contexts, task characteristics, and customer states.

Understanding these contingencies holds considerable academic and practical significance. From a theoretical perspective, chatbot interactions provide a unique laboratory for examining fundamental questions in human-computer interaction, service quality, and consumer psychology. The introduction of conversational AI challenges established frameworks developed primarily for human service encounters, necessitating theoretical refinement and extension. Practically, organisations investing substantial resources in chatbot development require evidence-based guidance to optimise returns and avoid costly implementation failures.

The stakes extend beyond individual firms to broader societal implications. As chatbots increasingly mediate access to essential services—including banking, healthcare information, and government services—their design and deployment carry equity implications. Poorly implemented chatbots may disproportionately disadvantage vulnerable populations, including elderly users, those with limited digital literacy, and individuals facing urgent or emotionally distressing circumstances. Conversely, well-designed chatbots can democratise access to information and support by transcending temporal and geographical constraints.

This dissertation addresses the question: under what conditions do customer service chatbots enhance satisfaction, and when do they diminish it? By synthesising recent empirical research, the study develops an integrative framework identifying key moderators and boundary conditions, providing both theoretical contributions and actionable guidance for practitioners.

Aim and objectives

Aim

This dissertation aims to identify and analyse the conditions under which customer service chatbots enhance or diminish customer satisfaction, developing an evidence-based framework to guide theoretical understanding and practical implementation.

Objectives

To achieve this aim, the following objectives guide the investigation:

1. To systematically review and synthesise peer-reviewed literature examining chatbot impacts on customer satisfaction published between 2020 and 2025.

2. To identify and categorise the conditions under which chatbots demonstrably reduce customer satisfaction, including customer characteristics, task types, and design features.

3. To identify and categorise the conditions under which chatbots demonstrably enhance customer satisfaction, examining enabling factors and success conditions.

4. To develop an integrative framework mapping the relationships between chatbot design choices, deployment contexts, and satisfaction outcomes.

5. To derive practical implications for organisations seeking to optimise chatbot deployment and identify productive directions for future research.

Methodology

This study employs a systematic literature synthesis methodology to address the research objectives. Literature synthesis represents an appropriate approach when seeking to integrate findings across multiple empirical studies to develop comprehensive understanding of a phenomenon characterised by divergent findings and contextual contingencies (Tranfield, Denyer and Smart, 2003). Unlike primary empirical research, synthesis enables the identification of patterns, contradictions, and gaps across the accumulated evidence base.

Search strategy and source selection

The literature search prioritised peer-reviewed journal articles published between 2020 and 2025, ensuring currency and relevance to contemporary chatbot technologies. This temporal boundary reflects the rapid evolution of conversational AI capabilities, rendering earlier research potentially obsolete. Sources were identified through systematic database searches and supplemented with citation tracking to capture influential works.

Inclusion criteria required that studies: (a) examined chatbots or conversational AI agents in customer service contexts; (b) measured customer satisfaction, loyalty, or closely related constructs; (c) appeared in peer-reviewed academic journals or established conference proceedings; and (d) reported empirical findings or systematic reviews. Exclusion criteria eliminated opinion pieces, trade publications, and studies focused exclusively on technical performance without customer-facing implications.
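As an illustration, the screening logic above can be expressed as a simple predicate applied to each candidate record. This is a hypothetical sketch: the record fields (`year`, `venue_type`, `topic`, `measures`, `study_type`) are invented for illustration, not drawn from any actual screening tool used in the study.

```python
# Illustrative screening predicate for the inclusion/exclusion criteria.
# All field names and category labels are hypothetical.

INCLUDED_VENUES = {"peer_reviewed_journal", "conference_proceedings"}
EXCLUDED_TYPES = {"opinion_piece", "trade_publication"}


def passes_screening(record: dict) -> bool:
    """Apply inclusion criteria (a)-(d) and the exclusion rules."""
    if not (2020 <= record["year"] <= 2025):            # temporal boundary
        return False
    if record["venue_type"] not in INCLUDED_VENUES:     # criterion (c)
        return False
    if record["study_type"] in EXCLUDED_TYPES:          # exclusion rules
        return False
    in_service_context = "customer_service" in record["topic"]            # (a)
    measures_outcome = bool(record["measures"] & {"satisfaction",
                                                  "loyalty"})             # (b)
    empirical = record["study_type"] in {"empirical", "systematic_review"}  # (d)
    return in_service_context and measures_outcome and empirical
```

In practice such a predicate would only pre-filter database exports; borderline records would still be screened by hand.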

Analytical approach

The analysis employed thematic synthesis, involving line-by-line coding of extracted findings, organisation into descriptive themes, and development of analytical themes representing higher-order abstractions (Thomas and Harden, 2008). This approach permitted integration of both quantitative and qualitative findings within a coherent framework.

Studies were initially categorised according to their primary finding direction—positive, negative, or contingent effects on satisfaction—before detailed examination of moderating factors and boundary conditions. Where studies reported apparently contradictory findings, careful attention focused on identifying contextual differences that might explain divergence.

Quality assessment

Source quality was assessed using established criteria including methodological rigour, sample adequacy, construct validity, and analytical appropriateness. Studies demonstrating significant methodological limitations were weighted accordingly in synthesis, though not excluded entirely given the emerging nature of this research domain.

Limitations

This methodology carries inherent limitations. Synthesis necessarily depends upon the quality and comprehensiveness of included primary studies. Publication bias may skew the available literature toward significant findings. Additionally, the rapid pace of technological change means that findings from even recent studies may not fully generalise to current chatbot capabilities. These limitations are acknowledged whilst noting that systematic synthesis remains the most appropriate method for addressing the integrative questions posed.

Literature review

Theoretical foundations

Understanding chatbot effects on customer satisfaction requires grounding in established theoretical frameworks. Expectancy disconfirmation theory provides a foundational lens, positing that satisfaction results from the comparison between pre-consumption expectations and perceived performance (Oliver, 1980). Applied to chatbots, this framework suggests that satisfaction depends not on absolute performance but on performance relative to expectations—expectations that may be shaped by prior experiences, anthropomorphic design cues, and contextual factors.
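To make the comparison logic concrete, expectancy disconfirmation can be sketched as a toy model (illustrative only, not part of Oliver's formulation; the scale and thresholds are assumptions): satisfaction tracks the gap between perceived performance and prior expectation, not performance alone.

```python
def disconfirmation(expectation: float, performance: float) -> str:
    """Classify the expectancy-disconfirmation outcome.

    Both inputs are on an arbitrary common scale (e.g. 1-7);
    this is an illustrative toy model, not Oliver's (1980) measure.
    """
    gap = performance - expectation
    if gap > 0:
        return "positive disconfirmation -> satisfaction"
    if gap < 0:
        return "negative disconfirmation -> dissatisfaction"
    return "confirmation -> neutral"


# The same performance level (5) satisfies a low-expectation user but
# disappoints one whose expectations were inflated, e.g. by
# anthropomorphic design cues.
print(disconfirmation(expectation=3, performance=5))
print(disconfirmation(expectation=6, performance=5))
```

The second call illustrates why identical chatbot performance can yield opposite satisfaction outcomes for different customers.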

The computers are social actors (CASA) paradigm offers complementary insights, demonstrating that individuals apply social rules and expectations to interactions with technology (Nass and Moon, 2000). This tendency becomes particularly relevant for chatbots employing human-like names, avatars, or conversational styles, as users may unconsciously expect human-like competencies and social sensitivities.

Service quality frameworks, particularly the technology acceptance model and its derivatives, provide additional theoretical grounding. Ashfaq et al. (2020) extend these frameworks to chatbot contexts, demonstrating that perceived usefulness, ease of use, and enjoyment significantly influence satisfaction and continuance intention.

Conditions reducing customer satisfaction

A substantial body of evidence identifies conditions under which chatbot deployment diminishes customer satisfaction. These conditions cluster around five primary themes: emotional context, task complexity, functional performance, privacy and trust, and anthropomorphism mismatches.

High emotional stakes and angry customers

Customer emotional state emerges as a critical moderator of chatbot effectiveness. Crolic et al. (2022) demonstrate through experimental studies that anthropomorphised chatbots decrease satisfaction, firm evaluation, and purchase intention when customers enter interactions in angry states. The mechanism underlying this effect involves inflated expectations and subsequent expectancy violations. Human-like design cues activate social expectations; when the chatbot fails to meet these elevated expectations—particularly regarding empathetic response to distress—customers experience magnified disappointment.

This finding carries significant implications given that customers frequently approach service interactions following negative experiences prompting the contact. Complaint handling, service recovery, and problem resolution contexts are precisely those where emotional stakes run highest, yet these contexts appear least suited to chatbot deployment without careful design consideration.

Complex, experiential, or high-touch needs

Task characteristics significantly moderate chatbot effectiveness. Ruan and Mezei (2022) demonstrate that for experiential products—those valued primarily for subjective experiences rather than functional attributes—human frontline employees generate higher satisfaction than chatbots. Experiential products require nuanced understanding, personalised recommendations, and the ability to interpret subjective preferences, capabilities that current chatbots struggle to match.

Nicolescu and Tudorache (2022) corroborate this finding through systematic literature review, concluding that complex service needs requiring interpretation, judgment, or creative problem-solving exceed chatbot capabilities and yield inferior satisfaction compared with human alternatives. The limitation reflects not merely current technological constraints but potentially fundamental differences between algorithmic and human cognition in navigating ambiguity and contextual nuance.

Poor functional performance

Regardless of context, chatbots that fail at core functional tasks generate customer dissatisfaction. Nicolescu and Tudorache (2022) catalogue common failure modes including irrelevant replies, inability to solve problems, repetitive loops, and conversational confusion. Such failures create “unpleasant interaction,” increase stress, reduce trust, and prompt journey abandonment.

Ranieri, Di Bernardo and Mele (2024) extend this analysis by mapping customer experience across successful and unsuccessful chatbot interactions. Positive experiences cluster around efficient resolution, whilst negative experiences feature frustration, wasted time, and the sense of talking to an incapable system. Critically, negative experiences may carry disproportionate weight in overall satisfaction judgments and subsequent behavioural intentions.

Privacy and trust concerns

Customer perceptions of privacy risk and trust significantly influence chatbot satisfaction. Cheng and Jiang (2020) demonstrate that perceived privacy risk directly reduces satisfaction and continued use intention. Users concerned about data collection, storage, and potential misuse approach chatbot interactions with apprehension that colours their experience evaluation.

Trust deficits compound these effects. Eren (2021) identifies trust as a critical determinant of satisfaction in banking chatbot contexts, whilst Al-Shafei (2024) demonstrates that trust mediates relationships between chatbot characteristics and engagement outcomes. Chen et al. (2023) further establish that trust deficits undermine the customer loyalty benefits that chatbots might otherwise deliver.

These findings suggest that chatbot satisfaction cannot be isolated from broader organisational trust. Customers bring expectations shaped by firm reputation, industry norms, and media narratives about AI and data use. Organisations with trust deficits may find chatbot deployment amplifies rather than ameliorates customer concerns (Jenneboer, Herrando and Constantinides, 2022).

Over-humanisation in problematic contexts

A nuanced finding concerns the contextual appropriateness of anthropomorphic design. Human-like features—names, avatars, conversational styles—can backfire in problem or complaint contexts, particularly with angry users. Crolic et al. (2022) demonstrate that the same anthropomorphic features that enhance satisfaction in routine contexts exacerbate dissatisfaction when customers seek assistance with problems.

This pattern reflects expectancy dynamics. Human-like cues set human-like expectations; when chatbots then reveal limitations in understanding, empathy, or problem-solving, the gap between expectation and performance widens. A transparently mechanical chatbot may paradoxically outperform a human-like alternative by setting more modest expectations that it can meet or exceed.

Conditions enhancing customer satisfaction

Whilst the preceding section catalogued risk factors, substantial evidence also demonstrates conditions under which chatbots enhance satisfaction, sometimes exceeding human alternatives.

Low-complexity, functional, routine tasks

For functional products and simple service requests, chatbots can deliver higher satisfaction than human agents. Ruan and Mezei (2022) demonstrate this effect for functional products—those valued primarily for practical attributes rather than experiential qualities. Simple, well-defined tasks such as account balance enquiries, order tracking, or appointment scheduling align well with chatbot capabilities.

Nicolescu and Tudorache (2022) reinforce this finding, noting that routine tasks requiring information retrieval or straightforward transaction execution suit chatbot deployment. Antonio et al. (2023) similarly document satisfaction improvements when chatbots handle common enquiries efficiently. The mechanism involves chatbot advantages in speed and consistency combined with task characteristics that do not require capabilities beyond current AI competencies.

High service, system, and information quality

Quality dimensions emerge as consistent predictors of chatbot satisfaction across multiple studies. Hsu and Lin (2022) demonstrate that service quality—encompassing responsiveness, reliability, and assurance—strongly predicts satisfaction and loyalty. Ashfaq et al. (2020) similarly establish that perceived usefulness and ease of use drive satisfaction and continuance intention.

Information quality—accuracy, relevance, completeness, and timeliness—particularly influences outcomes. Cheng and Jiang (2020) find information quality directly affects satisfaction, whilst Promsiri, Wuittipappinyo and Keerativutisest (2025) identify it as a primary satisfaction driver. Al-Oraini (2025) extends these findings to demonstrate quality effects on trust and subsequent satisfaction.

System quality—interface design, technical reliability, and navigation ease—similarly matters. Huang, Markovitch and Stough (2024) demonstrate that well-designed systems can enable chatbots to approach human agent satisfaction levels. Chen et al. (2023) establish quality as foundational to loyalty development. Collectively, these findings emphasise that chatbot satisfaction depends substantially on implementation quality rather than chatbot use per se (Jenneboer, Herrando and Constantinides, 2022).

Availability, speed, and convenience

Chatbots offer inherent advantages in availability and responsiveness that customers value. Always-on, 24/7 access eliminates temporal constraints that characterise human-staffed service channels. Nicolescu and Tudorache (2022) identify this availability as a primary satisfaction driver, particularly for customers seeking assistance outside conventional business hours.

Reduced waiting time represents another advantage. Promsiri, Wuittipappinyo and Keerativutisest (2025) demonstrate that speed directly enhances satisfaction, whilst Ranieri, Di Bernardo and Mele (2024) identify efficient resolution as a key positive experience element. Antonio et al. (2023) similarly document convenience effects on satisfaction.

These advantages prove particularly salient for routine enquiries where customers prioritise speed over interaction depth. The time savings and convenience may offset limitations in conversational sophistication, yielding net satisfaction improvements.

Warmth, empathy, and appropriate social cues

Whilst excessive humanisation carries risks, appropriate deployment of social cues enhances satisfaction. Yun and Park (2022) demonstrate that emotion words during service recovery increase satisfaction, repurchase intention, and positive word-of-mouth. The expression of empathy and concern signals that the chatbot—and by extension, the organisation—recognises and cares about customer experiences.

Zhang et al. (2023) extend this finding to emotional expression more broadly, demonstrating positive effects when chatbots communicate warmth. Xie et al. (2024) document benefits of appropriate humour, suggesting that social cues can enhance interaction enjoyment. Al-Shafei (2024) identifies emotional engagement as a satisfaction driver, whilst Parveen et al. (2025) examine language use effects on satisfaction outcomes.

Critically, these benefits obtain only when expectations are met—that is, when the chatbot delivers functionally whilst also communicating socially. Social cues alone cannot compensate for functional failures but can enhance experiences when core needs are satisfied.

The role of hybrid models

The literature increasingly recognises that binary chatbot-or-human framings oversimplify service design choices. Hybrid models—featuring chatbot frontlines with accessible human escalation—may optimise across competing considerations. Such models capture chatbot advantages for routine enquiries whilst preserving human alternatives for complex or emotionally charged situations.

The success of hybrid models depends upon escalation design. Seamless transitions that preserve conversational context and avoid customer repetition prevent frustration accumulation. Transparent communication about chatbot limitations and escalation options manages expectations. Intelligent routing based on query complexity and customer signals directs interactions appropriately.
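A minimal sketch of such an escalation path might look as follows. The signal names and thresholds are assumptions for illustration; a real deployment would use trained classifiers for sentiment and query complexity.

```python
from dataclasses import dataclass, field


@dataclass
class Conversation:
    transcript: list[str] = field(default_factory=list)
    failed_turns: int = 0      # turns the bot could not resolve
    sentiment: float = 0.0     # -1 (angry) .. +1 (positive), hypothetical signal


def should_escalate(conv: Conversation, complexity: float) -> bool:
    """Route to a human when signals suggest the bot is failing.

    Thresholds are illustrative, not empirically derived.
    """
    return (conv.failed_turns >= 2      # repetitive-loop detection
            or conv.sentiment <= -0.5   # angry or distressed customer
            or complexity >= 0.7)       # judgment or interpretation needed


def escalate(conv: Conversation) -> dict:
    """Hand off to a human agent, preserving context so the
    customer never has to repeat themselves."""
    return {
        "channel": "human_agent",
        "context": list(conv.transcript),  # full history travels with the case
        "note": "escalated by bot; see transcript",
    }
```

The key design point the sketch encodes is that escalation carries the transcript forward, preventing the frustration accumulation that repetition causes.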

Discussion

The synthesised evidence reveals that customer satisfaction with service chatbots depends upon the alignment between chatbot capabilities, task characteristics, customer states, and design choices. This section discusses key findings in relation to the stated objectives and their broader implications.

Addressing the complexity question

A central finding concerns the moderating role of task complexity. Chatbots demonstrably excel at low-complexity, well-defined tasks where their advantages in speed, consistency, and availability outweigh limitations in interpretive capability. Conversely, complex tasks requiring judgment, creativity, or nuanced interpretation remain better suited to human agents. This pattern aligns with task-technology fit theory, which posits that technology effectiveness depends upon correspondence between task requirements and technology capabilities (Goodhue and Thompson, 1995).

The practical implication is that organisations should match chatbot deployment to task characteristics rather than implementing blanket automation. Service design should identify task types, assess complexity levels, and route interactions accordingly. Such matching requires not merely technological sophistication but deep understanding of customer needs and service processes.

The anthropomorphism paradox

Findings regarding anthropomorphism reveal a paradox: human-like features enhance satisfaction in routine contexts yet exacerbate dissatisfaction in problem contexts, particularly with angry customers. This pattern reflects expectancy dynamics whereby anthropomorphic cues inflate expectations that chatbots then violate.

This finding challenges assumptions underlying chatbot design, which often prioritise human-likeness as inherently desirable. Evidence suggests instead that optimal anthropomorphism levels depend upon context. Routine, transactional interactions may benefit from human-like warmth; complaint handling may require transparent acknowledgment of chatbot limitations or direct human escalation.

Organisations might consider dynamic anthropomorphism—modulating human-like features based on detected customer state or interaction type. Alternatively, designs might establish clear chatbot identity from the outset, setting appropriate expectations whilst still incorporating helpful social cues.
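Dynamic anthropomorphism could be sketched as a persona selector keyed to detected state and interaction type. This is purely illustrative: the persona fields, labels, and sentiment signal are all assumptions, not features of any cited system.

```python
def select_persona(sentiment: float, interaction_type: str) -> dict:
    """Modulate human-like cues by detected customer state and context.

    sentiment: -1 (angry) .. +1 (positive); thresholds illustrative.
    """
    if sentiment <= -0.5 or interaction_type == "complaint":
        # Angry or complaining customers: strip human-like cues, be
        # transparent about limitations, surface the human escalation option.
        return {"avatar": None, "name": "Support Assistant",
                "tone": "plain", "offer_human": True}
    # Routine, transactional interactions: warmth is safe and helpful.
    return {"avatar": "friendly", "name": "Ava",
            "tone": "warm", "offer_human": False}
```

The design choice follows directly from the expectancy findings above: reduce anthropomorphic cues precisely where they would inflate expectations the chatbot cannot meet.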

Trust as foundational

Multiple studies identify trust as foundational to chatbot satisfaction, operating both as a direct satisfaction determinant and as a moderator of other relationships. Trust deficits amplify concerns about privacy, competence, and intentions, colouring interactions regardless of actual chatbot performance.

This finding suggests that chatbot satisfaction cannot be optimised through chatbot design alone. Broader organisational trust—established through consistent, ethical behaviour across touchpoints—creates conditions for chatbot acceptance. Organisations perceived as untrustworthy may find that chatbots become lightning rods for accumulated suspicion.

Privacy emerges as a particularly salient trust dimension in AI contexts. Customers concerned about data collection may approach chatbot interactions defensively, interpreting ambiguous experiences negatively. Transparent data practices, clear privacy communications, and meaningful user control may help address these concerns.

Quality as prerequisite

Across contexts and customer types, quality dimensions consistently predict satisfaction. Information quality, system quality, and service quality represent prerequisites for positive outcomes; design choices regarding anthropomorphism, social cues, or other features cannot compensate for quality failures.

This finding emphasises the importance of implementation excellence. Many reported chatbot failures reflect not inherent technology limitations but insufficient investment in training data, response design, system integration, and ongoing optimisation. Organisations deploying chatbots must commit to continuous improvement based on interaction analysis and customer feedback.

Practical framework

Integrating findings, a practical framework emerges for optimising chatbot deployment:

First, match deployment to task characteristics. Deploy chatbots for low-complexity, functional tasks whilst preserving human alternatives for complex, experiential, or emotionally charged situations.

Second, calibrate anthropomorphism to context. Consider reducing human-like features in complaint or problem contexts; ensure human escalation is accessible for distressed customers.

Third, prioritise quality fundamentals. Invest in information accuracy, system reliability, and response relevance before pursuing sophisticated design features.

Fourth, build trust foundations. Address privacy concerns transparently; establish organisational credibility that extends to AI implementations.

Fifth, implement intelligent escalation. Design seamless transitions to human agents; use signals to detect escalation needs; preserve context across handoffs.

Sixth, monitor and iterate. Analyse interaction data continuously; identify failure patterns; refine responses and routing based on evidence.
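The monitoring step could begin with something as simple as counting failure patterns in interaction logs, so the worst patterns are fixed first. The log schema and outcome labels below are hypothetical, chosen to mirror the failure modes catalogued in the literature review.

```python
from collections import Counter

# Hypothetical outcome labels mirroring failure modes from the review:
# irrelevant replies, unresolved problems, repetitive loops, abandonment.
FAILURE_LABELS = {"irrelevant_reply", "unresolved", "repetitive_loop",
                  "abandoned"}


def failure_report(log: list[dict]) -> dict:
    """Rank failure modes by frequency across logged conversations."""
    counts = Counter(entry["outcome"] for entry in log
                     if entry["outcome"] in FAILURE_LABELS)
    total = len(log)
    return {label: {"count": n, "share": n / total}
            for label, n in counts.most_common()}


log = [{"outcome": "resolved"}, {"outcome": "repetitive_loop"},
       {"outcome": "repetitive_loop"}, {"outcome": "abandoned"}]
print(failure_report(log))
```

A real pipeline would attach transcripts and intents to each failure so that response design and routing can be refined from evidence, as the sixth step prescribes.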

Theoretical contributions

This synthesis offers several theoretical contributions. First, it demonstrates the inadequacy of binary framings—chatbots as universally helpful or harmful—revealing instead a contingent relationship moderated by multiple factors. Second, it extends expectancy disconfirmation theory to AI contexts, highlighting how design cues shape expectations that subsequently influence satisfaction evaluations. Third, it contributes to human-computer interaction theory by documenting conditions under which social actor frameworks do and do not apply.

Limitations and future research

This synthesis carries limitations that suggest directions for future research. The included studies span diverse industries, methodologies, and cultural contexts, potentially masking important variations. Future research might examine industry-specific effects or cross-cultural differences in chatbot acceptance.

Most included studies employed cross-sectional designs, limiting understanding of temporal dynamics. Longitudinal research could examine how satisfaction evolves through repeated chatbot interactions and whether initial impressions persist or shift.

The rapid evolution of AI capabilities means that findings may require ongoing reassessment. Capabilities that currently differentiate chatbots from humans may narrow or shift, potentially altering the patterns documented here.

Finally, ethical dimensions warrant deeper examination. Questions regarding transparency (should chatbots disclose their non-human nature?), manipulation (can empathetic responses from non-sentient systems be authentic?), and equity (who benefits and who bears costs of automation?) merit sustained scholarly attention.

Conclusions

This dissertation has systematically examined the conditions under which customer service chatbots enhance or diminish customer satisfaction. Through synthesis of peer-reviewed literature from 2020 to 2025, the study has addressed its objectives and developed an integrative framework with both theoretical and practical implications.

Regarding the objective of identifying conditions that reduce satisfaction, the analysis reveals five primary risk factors: high emotional stakes and angry customers, complex or experiential service needs, poor functional performance, privacy and trust concerns, and anthropomorphism mismatches. These factors interact, with emotional contexts amplifying anthropomorphism risks and trust deficits magnifying privacy concerns.

Regarding the objective of identifying conditions that enhance satisfaction, the analysis documents four enabling factors: low-complexity functional tasks, high service and information quality, availability and speed advantages, and appropriate warmth and empathy. These conditions enable chatbots to deliver satisfaction matching or exceeding human alternatives.

The objective of developing an integrative framework has been achieved through synthesis of moderating factors and their relationships. The framework emphasises that satisfaction outcomes depend upon alignment between chatbot capabilities, task characteristics, customer states, and design choices.

The objective of deriving practical implications yields actionable guidance: match deployment to task complexity, calibrate anthropomorphism to context, prioritise quality fundamentals, build trust foundations, implement intelligent escalation, and engage in continuous improvement.

The significance of these findings extends beyond individual organisations to broader societal implications as AI increasingly mediates service access. Well-designed chatbots can democratise access and enhance experiences; poorly designed implementations can frustrate, exclude, and damage relationships. The stakes merit the careful attention this research domain continues to receive.

Future research should address limitations of current evidence through longitudinal designs, cross-cultural comparisons, and ethical examinations. As AI capabilities evolve, ongoing assessment will be required to determine whether current patterns persist or transform.

In conclusion, the question of whether chatbots help or hurt customer satisfaction admits no simple answer. Chatbots help when they quickly and accurately handle low-complexity, functional issues with trustworthy, empathetic communication. They reduce satisfaction when used for complex or emotionally charged problems, when over-humanised for angry customers, or when they fail at core problem resolution, trust, or privacy. Hybrid models featuring intelligent human escalation often strike the optimal balance. Organisations that recognise and act upon these contingencies position themselves to capture automation benefits whilst avoiding implementation pitfalls.

References

Al-Oraini, B., 2025. Chatbot dynamics: trust, social presence and customer satisfaction in AI-driven services. *Journal of Innovative Digital Transformation*. https://doi.org/10.1108/jidt-08-2024-0022

Al-Shafei, M., 2024. Navigating Human-Chatbot Interactions: An Investigation into Factors Influencing User Satisfaction and Engagement. *International Journal of Human–Computer Interaction*, 41, pp. 411-428. https://doi.org/10.1080/10447318.2023.2301252

Antonio, E., Fadhilah, M., Faiq, F., Fredyan, R. and Pranoto, H., 2023. Analyzing the Impact of Customer Service Chatbots on User Satisfaction. *2023 15th International Congress on Advanced Applied Informatics Winter (IIAI-AAI-Winter)*, pp. 82-85. https://doi.org/10.1109/iiai-aai-winter61682.2023.00023

Ashfaq, M., Yun, J., Yu, S. and Loureiro, S., 2020. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. *Telematics and Informatics*, 54, 101473. https://doi.org/10.1016/j.tele.2020.101473

Chen, Q., Lu, Y., Gong, Y. and Xiong, J., 2023. Can AI chatbots help retain customers? Impact of AI service quality on customer loyalty. *Internet Research*, 33, pp. 2205-2243. https://doi.org/10.1108/intr-09-2021-0686

Cheng, Y. and Jiang, H., 2020. How Do AI-driven Chatbots Impact User Experience? Examining Gratifications, Perceived Privacy Risk, Satisfaction, Loyalty, and Continued Use. *Journal of Broadcasting & Electronic Media*, 64, pp. 592-614. https://doi.org/10.1080/08838151.2020.1834296

Crolic, C., Thomaz, F., Hadi, R. and Stephen, A., 2022. Blame the Bot: Anthropomorphism and Anger in Customer–Chatbot Interactions. *Journal of Marketing*, 86, pp. 132-148. https://doi.org/10.1177/00222429211045687

Eren, B., 2021. Determinants of customer satisfaction in chatbot use: evidence from a banking application in Turkey. *International Journal of Bank Marketing*. https://doi.org/10.1108/ijbm-02-2020-0056

Gartner, 2022. *Gartner Predicts 85% of Customer Interactions Will Be Managed Without Human Agents*. Stamford, CT: Gartner, Inc.

Goodhue, D.L. and Thompson, R.L., 1995. Task-technology fit and individual performance. *MIS Quarterly*, 19(2), pp. 213-236.

Hsu, C. and Lin, J., 2022. Understanding the user satisfaction and loyalty of customer service chatbots. *Journal of Retailing and Consumer Services*. https://doi.org/10.1016/j.jretconser.2022.103211

Huang, D., Markovitch, D. and Stough, R., 2024. Can chatbot customer service match human service agents on customer satisfaction? An investigation in the role of trust. *Journal of Retailing and Consumer Services*. https://doi.org/10.1016/j.jretconser.2023.103600

Jenneboer, L., Herrando, C. and Constantinides, E., 2022. The Impact of Chatbots on Customer Loyalty: A Systematic Literature Review. *Journal of Theoretical and Applied Electronic Commerce Research*, 17, pp. 212-229. https://doi.org/10.3390/jtaer17010011

Nass, C. and Moon, Y., 2000. Machines and mindlessness: Social responses to computers. *Journal of Social Issues*, 56(1), pp. 81-103.

Nicolescu, L. and Tudorache, M., 2022. Human-Computer Interaction in Customer Service: The Experience with AI Chatbots—A Systematic Literature Review. *Electronics*. https://doi.org/10.3390/electronics11101579

Oliver, R.L., 1980. A cognitive model of the antecedents and consequences of satisfaction decisions. *Journal of Marketing Research*, 17(4), pp. 460-469.

Parveen, N., Afshan, S., Srivastava, P., Hajjaj, R., Osman, H., Adam, N., Alotaibi, N. and Alqahtani, S., 2025. The Role of Chatbots in Customer Service: Examining Language Use and Its Impact on Customer Satisfaction. *Metallurgical and Materials Engineering*. https://doi.org/10.63278/1262

Promsiri, T., Wuittipappinyo, N. and Keerativutisest, V., 2025. Enhancing Chatbot Service User Satisfaction. *TEM Journal*. https://doi.org/10.18421/tem141-20

Ranieri, A., Di Bernardo, I. and Mele, C., 2024. Serving customers through chatbots: positive and negative effects on customer experience. *Journal of Service Theory and Practice*. https://doi.org/10.1108/jstp-01-2023-0015

Ruan, Y. and Mezei, J., 2022. When do AI chatbots lead to higher customer satisfaction than human frontline employees in online shopping assistance? Considering product attribute type. *Journal of Retailing and Consumer Services*. https://doi.org/10.1016/j.jretconser.2022.103059

Thomas, J. and Harden, A., 2008. Methods for the thematic synthesis of qualitative research in systematic reviews. *BMC Medical Research Methodology*, 8(1), pp. 1-10.

Tranfield, D., Denyer, D. and Smart, P., 2003. Towards a methodology for developing evidence-informed management knowledge by means of systematic review. *British Journal of Management*, 14(3), pp. 207-222.

Xie, Y., Liang, C., Zhou, P. and Jiang, L., 2024. Exploring the influence mechanism of chatbot-expressed humor on service satisfaction in online customer service. *Journal of Retailing and Consumer Services*. https://doi.org/10.1016/j.jretconser.2023.103599

Yun, J. and Park, J., 2022. The Effects of Chatbot Service Recovery With Emotion Words on Customer Satisfaction, Repurchase Intention, and Positive Word-Of-Mouth. *Frontiers in Psychology*, 13. https://doi.org/10.3389/fpsyg.2022.922503

Zhang, J., Chen, Q., Lu, J., Wang, X., Liu, L. and Feng, Y., 2023. Emotional expression by artificial intelligence chatbots to improve customer satisfaction: Underlying mechanism and boundary conditions. *Tourism Management*. https://doi.org/10.1016/j.tourman.2023.104835

To cite this work, please use the following reference:

UK Dissertations. 13 February 2026. Customer service automation: when do chatbots reduce satisfaction, and when do they help?. [online]. Available from: https://www.ukdissertations.com/dissertation-examples/customer-service-automation-when-do-chatbots-reduce-satisfaction-and-when-do-they-help/ [Accessed 4 March 2026].
