How to Guide: Customizing Virtual Assistant Solutions with an AI Experience Platform

In a relentless pursuit of innovation, enterprises are continually seeking to exploit the potent capabilities of Artificial Intelligence (AI) to stay ahead of the curve. The field of AI has seen groundbreaking advancements, particularly in generative AI and the Large Foundational Models (LFMs) and Large Language Models (LLMs) offered by OpenAI, Google (Vertex AI), Anthropic, and Meta (Llama 2).

One such application of generative AI that stands on the frontline of corporate innovation is the development of Virtual Assistants.

However, given the unique challenges and intricacies of each enterprise, there is an emerging need for AI Experience platforms that facilitate the customization of Virtual Assistant solutions. Herein lies the focus of our discussion, underscoring the added value that an AI Experience platform brings to the table in crafting bespoke AI Virtual Assistant tools.

The Shift from Siloed to Disruptive Virtual Assistants

Traditionally, Virtual Assistants have been developed in silos, addressing specific tasks without much integration across different domains or business processes. This approach has led to a fragmented user experience and limitations in the agile enhancement of these assistants.

In stark contrast, a platform-centric approach utilizing generative AI enables enterprises to build disruptive Virtual Assistants that are coherent, integrated, and highly adaptive to evolving business needs. This not only smooths the user experience but also creates room for scalable upgrades and domain-specific personalization.

Universal Virtual Assistant Concierge

The AI Experience platform acts as a Single-Pane-of-Glass, serving as a centralized control for all enterprise Virtual Assistants. Think of it as a command center that brings together Domain-Expert Virtual Assistants, thus providing a seamless and consistent interface.

For instance, a user could interact with a Virtual Assistant specialized in HR inquiries and smoothly switch to another dealing with IT support without leaving the unified system. This integration permits enterprises to incrementally update Domain Experts with new capabilities, improving the system’s intelligence while maintaining the integrity of other functions. Efficient routing of actionable requests is another benefit, as it accelerates the task execution process, thereby improving productivity.

Domain-Expert Virtual Assistants, which have already been trained and fine-tuned on domain-specific lexicons, contextual nuances, conceptual depth, specialized reasoning, and rare terminology, are made available through a catalog, ready to be used.
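To make the concierge idea concrete, here is a minimal Python sketch of a catalog of Domain-Expert assistants behind a single routing entry point. The keyword-based routing and all names (`Concierge`, `DomainAssistant`, the HR/IT handlers) are hypothetical simplifications; a real platform would route with an LLM classifier rather than keyword overlap.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Set

@dataclass
class DomainAssistant:
    name: str
    keywords: Set[str]  # naive routing signal; a real platform would use an LLM classifier
    handler: Callable[[str], str]

class Concierge:
    """Single pane of glass that routes each request to a Domain-Expert assistant."""
    def __init__(self) -> None:
        self.catalog: Dict[str, DomainAssistant] = {}

    def register(self, assistant: DomainAssistant) -> None:
        self.catalog[assistant.name] = assistant

    def route(self, query: str) -> str:
        words = set(query.lower().split())
        for assistant in self.catalog.values():
            if words & assistant.keywords:
                return assistant.handler(query)
        return "No domain expert matched; escalating to a human agent."

concierge = Concierge()
concierge.register(DomainAssistant("HR", {"leave", "pto", "benefits"},
                                   lambda q: "HR assistant: handling " + q))
concierge.register(DomainAssistant("IT", {"password", "vpn", "laptop"},
                                   lambda q: "IT assistant: handling " + q))

print(concierge.route("How do I reset my vpn access?"))
```

Because each assistant is registered independently, one Domain Expert can be upgraded or replaced without disturbing the others, which is the incremental-update property described above.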

Out-of-the-Box Integrations and Omni-channel User Experience

To further exemplify customization, the AI Experience platform offers out-of-the-box support for a multitude of integrations that cater to action flows and diverse interaction channels. This endows Virtual Assistants with a stateful omnichannel experience, whether it involves voice commands, chat interfaces, web interactions, or connectivity with social apps.

For instance, a customer may start a conversation via a chatbot on a company website and later continue it through a voice call without losing the context of the interaction.
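The stateful part of that experience can be sketched as a conversation store keyed by user rather than by channel, so a chat session and a later voice call share one history. This is a toy illustration with hypothetical names (`ConversationStore`, `cust-42`), not the platform's actual session API.

```python
from collections import defaultdict
from typing import List, Tuple

class ConversationStore:
    """Channel-agnostic session state: a web chat can continue on a voice call."""
    def __init__(self) -> None:
        # Keyed by user, not by channel, so context survives channel switches.
        self._history: dict = defaultdict(list)

    def append(self, user_id: str, channel: str, utterance: str) -> None:
        self._history[user_id].append((channel, utterance))

    def context(self, user_id: str) -> List[str]:
        """Full cross-channel utterance history for grounding the next reply."""
        return [utterance for _, utterance in self._history[user_id]]

store = ConversationStore()
store.append("cust-42", "web-chat", "I'd like to dispute a charge")
store.append("cust-42", "voice", "It's the one from last Tuesday")
print(store.context("cust-42"))
```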

LLM Gateway: Plugging into the Power of AI

With the AI Experience platform, enterprises can take advantage of the LLM Gateway—a unified API framework providing a streamlined process for integrating third-party Virtual Assistants as well as LFMs and LLMs. This facility empowers businesses to manage and template prompts through a dedicated studio, consequently easing the experimentation process to determine the best model for specific tasks.

Enterprises can rapidly evaluate LLMs to identify which offers the most coherent and contextually appropriate outcomes for each given task, thus enhancing customer service metrics and satisfaction.
Furthermore, the gateway facilitates the capture of prompt patterns, shaping a repository for future reuse and refinement, significantly reducing development cycles, and fostering innovation.
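The gateway-plus-prompt-studio pattern can be illustrated with a short Python sketch: a unified facade over interchangeable providers and a repository of named, reusable prompt templates. The provider stubs and names here (`stub-a`, `summarize_ticket`) are hypothetical stand-ins for real OpenAI / Vertex AI / Anthropic clients, not the platform's actual API.

```python
import string
from typing import Callable, Dict

class PromptTemplate:
    """A named, reusable prompt captured in a studio-style repository."""
    def __init__(self, name: str, template: str) -> None:
        self.name = name
        self.template = string.Template(template)

    def render(self, **kwargs) -> str:
        return self.template.substitute(**kwargs)

class LLMGateway:
    """Unified facade: swap the underlying model without touching calling code."""
    def __init__(self) -> None:
        self.providers: Dict[str, Callable[[str], str]] = {}
        self.prompts: Dict[str, PromptTemplate] = {}

    def register_provider(self, name: str, complete_fn: Callable[[str], str]) -> None:
        self.providers[name] = complete_fn

    def register_prompt(self, tpl: PromptTemplate) -> None:
        self.prompts[tpl.name] = tpl

    def complete(self, provider: str, prompt_name: str, **variables) -> str:
        prompt = self.prompts[prompt_name].render(**variables)
        return self.providers[provider](prompt)

gateway = LLMGateway()
# Stub providers stand in for real third-party model clients.
gateway.register_provider("stub-a", lambda p: "[A] " + p)
gateway.register_provider("stub-b", lambda p: "[B] " + p)
gateway.register_prompt(PromptTemplate(
    "summarize_ticket", "Summarize this ticket for an agent: $ticket"))

print(gateway.complete("stub-a", "summarize_ticket", ticket="VPN is down"))
```

Because the same template can be sent to every registered provider, side-by-side comparison of model outputs for a given task becomes a one-line loop, which is exactly the experimentation the gateway is meant to ease.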

Grounding LLMs with enterprise-specific data

The concept of grounding in the context of AI Experience platforms and domain-specific LLMs entails enriching and fine-tuning these models with detailed, enterprise-specific data.

Grounding achieves this by integrating various sources of information that encapsulate the enterprise’s unique language, terminologies, processes, and customer interactions. This helps further enrich and adapt the AI Experience domain language models to capture the specificity and nuances of the enterprise’s domain.

  • Grounding via Knowledge Base Articles: Detailed articles in an enterprise’s knowledge base are repositories of carefully curated information that is highly relevant to the domain. By grounding the language model with data from these articles, the virtual assistant gains a deep understanding of domain-specific content, terminology, and problem-solving approaches relevant to the enterprise.
  • Grounding via Conversational Logs: Conversational logs from past interactions with users provide real-world examples of the queries, issues, and language used by customers and employees for an enhanced AI CX and AI employee experience. By training the language model on these logs, the virtual assistant can improve its ability to interpret and respond to similar interactions in the future. This process makes it keenly aware of the conversational dynamics and typical user intents within the enterprise context.
  • Grounding via Service Desk Tickets: Integrating data from service desk tickets allows the language model to grasp the common issues faced by users and the appropriate ways to address them. It can understand the patterns in user-reported problems and the successful resolutions, which helps in crafting responses that align with established troubleshooting workflows.
  • Grounding via Service Catalogs: Service catalogs outline the services offered by an organization. By grounding the language model with this information, the virtual assistant learns to recognize service-related inquiries, navigate through the service structure, and provide accurate information or initiate service-specific workflows accordingly.

Additional sources such as internal policy documents, procedure manuals, customer emails, and feedback forms contribute to a more holistic grounding of the language model. These sources collectively contribute to a nuanced understanding that encapsulates the enterprise’s operational ethos, customer service philosophy, and customer-specific knowledge.
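One common way to apply grounding at query time is retrieval: pick the most relevant enterprise document and prepend it to the model's prompt as context. The sketch below uses simple word overlap as the relevance score, which is a deliberate toy; production systems typically use embedding-based semantic search. The knowledge-base entries are invented examples.

```python
import re
from typing import List

def tokens(text: str) -> set:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def ground_prompt(query: str, articles: List[str]) -> str:
    """Pick the article with the greatest word overlap and prepend it as context."""
    best = max(articles, key=lambda doc: len(tokens(query) & tokens(doc)))
    return f"Context:\n{best}\n\nQuestion: {query}"

knowledge_base = [
    "International transfers: use the SWIFT form under Payments > Abroad.",
    "Password reset: visit the self-service portal and verify via SMS.",
]
prompt = ground_prompt("How do I send money abroad?", knowledge_base)
print(prompt)
```

The same retrieval step works unchanged whether the corpus is knowledge-base articles, conversational logs, service desk tickets, or catalog entries; only the document source differs.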

Next-Generation Intent-Less Architecture

The future of conversational AI lies in intent-less architectures supported by reinforcement learning. Traditional Virtual Assistants rely heavily on pre-defined intents for training, which can be restrictive and fail to cover the spectrum of user queries.

However, the AI Experience platform boasting a next-generation intent-less architecture can decipher user requests with higher precision, without the need for explicit intent training.

Lastly, AI Search with Answer Extraction and Action workflow execution via connectors and API calls represents a monumental leap. The AI Experience platform can perform inference and execute a sequence of API calls across disparate systems, such as CRM, ERP, databases, and more, to fulfill user requests. This architecture doesn’t merely reply with information but acts, like executing a ticket creation in an IT support system or scheduling an appointment in a CRM, based on the user’s demand.

For instance, when a manager asks the Virtual Assistant for the PTO balance of one of their direct reports, the platform can automatically identify which system retains this information, determine what API calls to make (such as a call to retrieve the employee ID, and a subsequent call to retrieve the PTO balance using that ID) and in what sequence, and generate the final answer to the manager's request.
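That two-step call sequence can be sketched in a few lines of Python. The in-memory dictionaries and function names (`get_employee_id`, `get_pto_balance`) are hypothetical stand-ins for real HR-system APIs that the platform would reach through connectors.

```python
# Stubbed HR-system data; a real deployment would call HRIS APIs via connectors.
EMPLOYEES = {"Dana Lee": "E-1001"}
PTO_BALANCES = {"E-1001": 12.5}

def get_employee_id(name: str) -> str:
    """First API call: resolve a person's name to a system identifier."""
    return EMPLOYEES[name]

def get_pto_balance(employee_id: str) -> float:
    """Second API call: look up the balance using the resolved identifier."""
    return PTO_BALANCES[employee_id]

def answer_pto_request(name: str) -> str:
    # The output of step 1 feeds the input of step 2, so order matters.
    emp_id = get_employee_id(name)
    balance = get_pto_balance(emp_id)
    return f"{name} has {balance} PTO days remaining."

print(answer_pto_request("Dana Lee"))
```

The key point is the dependency between calls: the platform must infer not just which APIs to invoke, but that the second call cannot run until the first returns.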

Scenario: Customizing a Virtual Financial Assistant with the AI Experience Platform

Let’s imagine a scenario where an enterprise customer (FinCorp) operates in the financial sector and wants to employ the AI Experience platform to customize their Virtual Assistant for enhanced customer service and internal support.

FinCorp wants to deploy a Virtual Assistant that can handle both customer inquiries and assist employees with internal processes, providing a consistent user experience, and integrating with the corporate database, customer relationship management (CRM) system, and internal knowledge repositories.

Step 1: Universal Virtual Assistant Concierge

FinCorp uses the AI Experience platform to select and deploy several Domain Expert Virtual Assistants into the concierge. The concierge includes:

  • A customer service assistant trained to address banking queries, account information, and financial advice.
  • An IT helpdesk assistant designed to help employees with technical issues and system access problems.
  • An HR assistant to assist both employees and HR staff with policy information, leave requests, and benefits inquiries.

The interface allows FinCorp to unify user experiences across different domains and incrementally update each assistant based on the demand and feedback without disturbing other services.

Step 2: Omni-Channel Deployment

The AI Experience platform provides tools that FinCorp utilizes to deploy these Virtual Assistants across various channels like their official website, mobile banking app, social media platforms, and voice interfaces in branch offices. Regardless of where a user starts their interaction, they can continue without losing the context when switching between platforms.

Step 3: LLM Gateway for LLM Assessment & Selection

FinCorp uses the LLM Gateway to experiment with different LFMs / LLMs to find the most suitable foundational model and language model for understanding financial queries and performing specific tasks on such queries. FinCorp can create new prompts or edit existing ones and test various responses to FinCorp customer inquiries to select the models that exhibit the highest accuracy and fluency in the financial context of FinCorp.
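A simple way to picture this assessment step is a scoring harness that runs each candidate model over a labeled set of financial queries and reports per-model accuracy. The "models" below are deterministic stubs standing in for real LLM calls, and the test queries are invented; the harness structure is the point.

```python
from typing import Callable, Dict, List, Tuple

def evaluate(models: Dict[str, Callable[[str], str]],
             test_set: List[Tuple[str, str]]):
    """Score each candidate on labeled queries; return the best name and all scores."""
    scores = {}
    for name, predict in models.items():
        correct = sum(predict(query) == expected for query, expected in test_set)
        scores[name] = correct / len(test_set)
    best = max(scores, key=scores.get)
    return best, scores

# Stub classifiers standing in for real LLM completions routed via the gateway.
models = {
    "model-a": lambda q: "transfer" if "send money" in q else "other",
    "model-b": lambda q: "other",
}
test_set = [
    ("How do I send money abroad?", "transfer"),
    ("What is my account balance?", "other"),
]
best, scores = evaluate(models, test_set)
print(best, scores)
```

In practice the labels would come from FinCorp's own annotated query history, and the metric might be answer quality judged by reviewers rather than exact-match accuracy.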

Step 4: LLM Grounding

By leveraging the AI Experience platform’s intent-less architecture, FinCorp can automatically adapt the domain-specific Virtual Assistants (and corresponding LLM models used across the various tasks) by grounding the virtual assistants using FinCorp internal knowledge base, service catalog items, service desk tickets, conversational logs between users and other internal FinCorp learning sources.

The outcome of this automated grounding process is domain-specialized Virtual Assistants that are now capable of better understanding, reasoning, and executing FinCorp customer requests without the need for strict intent definitions. For example, a customer asking, “How do I send money abroad?” is understood as an international transfer request without the customer having to navigate a menu or select predefined options.

Step 5: AI Search and Action Workflows

When a customer inquires about loan application procedures, the AI Search digs through the corporate knowledge base and CRM to aggregate information. Then it employs Action Workflows to guide the customer through the process, automatically filling in application forms with customer data retrieved from the CRM, scheduling meetings with loan officers through calendar integrations, and even initiating credit checks through API calls to external credit bureaus.
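That chain of actions can be sketched as an ordered workflow whose steps thread a shared context dictionary. Every step name and rule below (form filling, scheduling, the income threshold in the credit check) is an invented placeholder, not FinCorp's actual process.

```python
from typing import Callable, Dict, List

Step = Callable[[Dict], Dict]

def run_workflow(steps: List[Step], context: Dict) -> Dict:
    """Execute ordered action steps, threading a shared context through each."""
    for step in steps:
        context = step(context)
    return context

def fill_application(ctx: Dict) -> Dict:
    # Pre-fill the loan form from CRM-sourced customer data already in context.
    ctx["form"] = {"name": ctx["customer"]["name"],
                   "income": ctx["customer"]["income"]}
    return ctx

def schedule_meeting(ctx: Dict) -> Dict:
    ctx["meeting"] = "loan officer, Tuesday 10:00"  # stand-in for a calendar API call
    return ctx

def credit_check(ctx: Dict) -> Dict:
    # Placeholder rule standing in for an external credit-bureau API call.
    ctx["credit_ok"] = ctx["customer"]["income"] > 30000
    return ctx

result = run_workflow(
    [fill_application, schedule_meeting, credit_check],
    {"customer": {"name": "A. Rivera", "income": 52000}},
)
print(result)
```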

Customizing the Experience

Using the AI Experience platform, FinCorp can:

  • Adjust the level of formal language used by the Virtual Assistants based on customer demographics, using A/B testing within the AI Experience platform to determine what resonates best.
  • Use the AI Experience’s analytics and reporting tools to monitor interaction success rates and customer satisfaction, informing further customization of interactions and processes.
  • Develop custom connectors that link the Virtual Assistants to niche databases and proprietary systems specific to FinCorp, ensuring that the full ecosystem of corporate information is accessible through a conversational AI platform.
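The custom-connector point can be sketched as a small interface contract: any proprietary system FinCorp wraps in this shape becomes reachable from the assistants. The class names and in-memory data are hypothetical; a real connector would wrap the system's actual client library or REST API.

```python
from abc import ABC, abstractmethod
from typing import Dict

class Connector(ABC):
    """Minimal contract a custom connector must satisfy to plug into assistants."""
    @abstractmethod
    def fetch(self, key: str) -> Dict:
        ...

class LegacyLedgerConnector(Connector):
    """Hypothetical in-memory stand-in for a proprietary FinCorp database."""
    def __init__(self, records: Dict[str, Dict]) -> None:
        self.records = records

    def fetch(self, key: str) -> Dict:
        # Return an empty dict for unknown keys so callers can degrade gracefully.
        return self.records.get(key, {})

ledger = LegacyLedgerConnector({"ACC-7": {"balance": 310.25}})
print(ledger.fetch("ACC-7"))
```

Coding to the abstract `Connector` interface is what keeps the assistants decoupled from any one proprietary backend.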

As FinCorp continues to adapt to user feedback and changing business needs, the AI Experience platform allows them to recalibrate their Virtual Assistants’ functions and capabilities quickly, demonstrating the dynamism and flexibility that only such a platform can provide to enterprise customers.

Conclusions

The AI Experience platform is not merely a luxury for enterprise customers; it’s a foundational investment in the next wave of customer interaction and service excellence. The ability to construct disruptive virtual assistants through an integrated platform amplifies the capabilities of generative AI, LFMs, and LLMs, allowing businesses to provide superior, omnichannel, and adaptive experiences to their users. As AI technologies continue to evolve, the enterprises that invest in such platforms will distinguish themselves as leaders in innovation and customer satisfaction.

Customization through the AI Experience platform enables enterprises to navigate the complexities of digital transformation and remain agile in the face of rapidly changing consumer demands and technological landscapes.

Whether through more intelligent search capabilities, seamless integrations, or predictive analytics, these platforms empower enterprises to reimagine their relationships with customers and redefine the standards of digital assistance. Book a custom AI demo and explore Aisera’s Enterprise LLM for your organization today!
