Author: Jin, Yuru
Title: Analysis of the trust and willingness of high-net-worth customers in the financial industry to use chatbots
Degree: DFinTech
Year: 2025
Department: Faculty of Business
Pages: 188 pages : illustrations
Language: English
Abstract: The emergence of digital marketing and artificial intelligence (AI) has resulted in the widespread adoption of chatbots across diverse sectors, including insurance, banking, retail, travel, healthcare, and education. Chatbots have found application in various domains, such as customer service, virtual assistance, and social media communication. At the same time, the traditional financial industry faces challenges in providing wealth management services to high-net-worth customers, who demand timely wealth management advice and expect consistently high service quality. To improve the immediacy and accuracy of service for high-net-worth customers and to enable relationship managers to provide round-the-clock information services, chatbots need to be introduced into this customer segment. Thus, chatbots aimed at this user base must be carefully designed and implemented to meet these standards.
Although chatbots have significant potential in numerous fields, there is considerable variability in consumer acceptance of and willingness to utilize them. At present, research on chatbots predominantly focuses on travel recommendations and online shopping assistance. In the financial industry, there is a shortage of studies examining the willingness to use chatbots and the associated trust-building processes, particularly among high-net-worth customers. The present study addresses this gap by examining high-net-worth customers’ trust in chatbots and willingness to use them in the financial industry. The findings will provide effective guidance for the design and application of chatbots, thus making a valuable contribution to this research domain.
To achieve these aims, the trust belief model was adopted as the theoretical foundation, and an exploratory research design was chosen, whereby data were collected via semi-structured interviews and subjected to thematic analysis. All data required for this study were provided by high-net-worth customers (i.e., individuals possessing assets exceeding HKD 8 million) of a Fortune 500 securities company in Hong Kong.
As trust theory was applied to the new AI application scenario of serving high-net-worth customers, the trust belief model was selected because it consists of three sub-dimensions (integrity, capability, and benevolence) that are particularly relevant to this context. It was further expanded by adding privacy concerns and technology anxiety as secondary background factors, and high-net-worth customers’ gender, age, income, industry, position, education, asset size, products held, and risk appetite as influencing factors, resulting in an exploratory research model of chatbot use. Subsequent analyses revealed that integrity, capability, and benevolence trust drive customers’ willingness to use chatbots through technical effectiveness, service stability, and institutional guarantees, whereas privacy concerns and technology anxiety weaken this effect by amplifying the vulnerability of the trust dimensions. The individual characteristics of high-net-worth customers further moderate this mechanism, necessitating differentiated responses in service design.
This research elucidates how high-net-worth customers form and maintain trust in chatbots, and reveals how privacy concerns and technology anxiety affect their trust in chatbots and willingness to use them, offering insight into the role of the individual characteristics of high-net-worth customers in the design of chatbot functions. The proposed model systematically reveals the multi-level path of trust construction, the boundary conditions of privacy concerns and technology anxiety, and the potential impact of customer characteristics, providing a theoretical basis and practical support for financial institutions aiming to optimize chatbot design and formulate differentiated trust enhancement strategies.
The exploratory research model can be further enhanced by incorporating factors that influence trust in AI among different customer groups and across diverse cultures, as well as by exploring how cultural background or personality traits affect customer trust in AI.
Rights: All rights reserved
Access: restricted access

Files in This Item:
File: 8533.pdf (1.93 MB, Adobe PDF), For All Users (off-campus access for PolyU Staff & Students only)



Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/14070