Gaining Trust in AI: User Behavior with Live Agents and Conversational AI
Simply put, trust is built by taking a risk and then receiving confirmation that the trust was justified. When a person takes a risk, for example, by interacting with a conversational AI entity or a live agent, that person can later confirm or deny that the risk yielded trustworthy behavior. An individual who engages with a company’s customer service risks not only their time and information but also their expectation of a positive outcome. Trustworthy behavior is demonstrated by responding to the risk a person takes by “ensuring the trusted person will experience beneficial consequences” (Johnson & Johnson, 2013).
Accumulating Risk: Conversational AI vs. Disjointed Customer Service
Within the past year, Customer A moved to a new city and state. While speaking with colleagues, he realized that he had not received a statement for his Individual Retirement Account, even though he believed he had communicated his new address in a timely manner to the proper department within the company.
Customer A calls the customer service desk and discovers that his statement was mailed to his old address and eventually returned to the company; he never received it. He realizes that not only was his trust misplaced, but he may have suffered a financial loss: unaware of the current investments in his portfolio, he missed an opportunity to move to a more advantageous plan. During his interaction with the customer service representative, he concludes that his employer is hampered by a siloed infrastructure in which the currency and accuracy of personal information cannot be relied upon. He begins to explore job opportunities with a different employer.
The Mutual Nature of Risk and Reward in Customer Service
Each interaction with customer service is a risk for both the individual and the organization. When a user must self-disclose personal information (e.g., name, address, last four digits of Social Security number, and so forth) within a disjointed customer service system, each self-disclosure can accumulate or compound the risk.
When a person is repeatedly requested to self-disclose or commit their time to repeat information, they may form the perception that a customer service department is not trustworthy. This manifests in two conclusions: 1) the company is not supportive; and 2) the company is not cooperative. Support is conveyed by a company’s ability to productively and reliably handle a situation for an internal or external customer. A perception that the organization is wasting a person’s time is not productive and does not communicate support.
Taking risks to build trust makes a person vulnerable. Requiring that person to interact repeatedly with customer service can create a perception that this vulnerability could be exploited, neglected, or even abused. Moreover, one betrayal is often enough to destroy trust and, in fact, to establish distrust. And once distrust is established, it is resistant to change (Johnson & Johnson, 2013).
“Distrust leads to the perception that betrayal will reoccur in the future.”
How Conversational AI Lowers the Potential for Misplaced Trust
Automating via AI streamlines a process like a change of address, standardizing the user experience via knowledge workflows and reducing complexity. For example, Aisera automates internal service desk and external customer service interactions and resolutions through an all-in-one conversational platform. Aisera provides a single, scalable AI platform spanning IT, HR, Facilities, Sales, Customer Service and Operations.
The Aisera solution works to:
- Propel service team productivity
- Reduce operations costs
- Improve overall employee and customer satisfaction
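The single-source-of-truth idea behind such an automated workflow can be illustrated with a minimal sketch. This is a hypothetical toy example, not Aisera's actual API: a conversational handler recognizes a change-of-address intent and writes to one shared profile store, so every downstream department reads the same, current record rather than its own silo.

```python
# Hypothetical sketch: a change-of-address intent routed to a single
# shared profile store, so an update is leveraged across the company.

from dataclasses import dataclass, field


@dataclass
class ProfileStore:
    """Single source of truth for customer contact data."""
    addresses: dict = field(default_factory=dict)

    def update_address(self, customer_id: str, new_address: str) -> None:
        self.addresses[customer_id] = new_address

    def get_address(self, customer_id: str, department: str) -> str:
        # The mailroom, benefits, and HR all read the same record,
        # so no department can hold a stale copy of the address.
        return self.addresses[customer_id]


def handle_utterance(store: ProfileStore, customer_id: str, utterance: str) -> str:
    """Toy intent matcher: recognizes a change-of-address request."""
    if "change my address" in utterance.lower():
        new_address = utterance.rsplit(" to ", 1)[-1]
        store.update_address(customer_id, new_address)
        return f"Done - your address is now {new_address} for all departments."
    return "Sorry, I can't help with that yet."


store = ProfileStore()
reply = handle_utterance(
    store, "customer_a", "Please change my address to 42 Elm St, Springfield"
)
# Every department now sees the same, current address.
```

In a siloed setup, by contrast, each department would keep its own copy of the address, which is exactly the failure Customer A experienced.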
A no-code AI service experience like Aisera’s is cloud-native and cost-efficient, requiring no additional resources, prep work, training, or data cleansing. Advanced conversational AI offers users a consumer-like experience through a robust, high-volume intent library, delivering a personalized conversational experience in each user’s channel of choice, whether Slack, Microsoft Teams, webchat, e-mail, or any other channel. Book a free conversational AI demo to experience Aisera’s artificial intelligence technology today!
After discovering that his change of address and other conventional activities could now be handled by self-service and leveraged across the company using AI, Customer A decided to re-think his job search and stay with his current employer.
References
Goldberg, S. (2022). The nature of trust: A philosopher’s perspective [Video and transcript]. Kellogg School of Management at Northwestern University. Retrieved January 12, 2022.
Johnson, D. W., & Johnson, F. P. (2013). Joining together: Group theory and group skills. Englewood Cliffs, NJ: Prentice-Hall.