New Report: Safety Risks of GenAI Chatbots

By
September 12, 2024
3D mobile phone with holographic travel and booking icons

For an in-depth analysis and actionable advice to protect your business

Download the report

After his grandmother passed away in 2022, a Canadian customer of a major airline visited the company’s website to book a last-minute flight for the funeral. He sought help from the airline’s chatbot, which informed him about a discount for passengers booking last-minute travel due to personal tragedies.

When he tried to claim this discount, he discovered that the chatbot misinformed him. The airline initially claimed the chatbot was “responsible for its own actions,” but the British Columbia Civil Resolution Tribunal disagreed, ruling that the airline had to compensate the traveler several hundred dollars in damages and fees.

Legal experts believe this ruling could have broader implications for the airline industry and other businesses heavily relying on AI, highlighting the risks of over-reliance on automated systems.

While this case resulted in minor reputational damage and a modest payout, other incidents involving chatbots could have far more serious consequences.

ActiveFence’s latest report addresses the hidden and unpredictable risks of generative AI (GenAI) chatbots, focusing specifically on the travel industry, which has been an early adopter of this technology to enhance customer service and provide 24/7 support. The report reveals how easily large language models (LLMs) can be manipulated to deliver risky and potentially harmful advice.

 

The Growing Role of GenAI Chatbots

GenAI is rapidly advancing and reshaping the landscape of AI applications, particularly in the field of customer engagement. At the heart of GenAI are AI foundation models, which use advanced algorithms trained on vast datasets to generate human-like responses, understand context, and predict user intent, enabling chatbots and virtual assistants to engage in more natural, intelligent conversations with users across a wide range of applications. The launch of ChatGPT in late 2022 accelerated the adoption of GenAI-powered chatbots, sparking significant investment and competition in this technology.

These AI-driven chatbots, designed to hold intelligent, human-like conversations, quickly become key customer-facing tools. Businesses, from travel to shopping to finance, use these chatbots to provide fast, convenient communication, enhance customer service, and reduce costs.

According to Gartner’s research from early 2024, the adoption of GenAI applications for business use is expected to grow substantially in both back-office functions (like administration, finance, and HR) and front-office tasks (like customer service and sales). Gartner’s study indicates that nearly one in five organizations already have client-facing generative AI solutions in production or deployment.

There is also a rising trend in developing domain-specific GenAI models tailored to the unique needs of different industries. As a result, open-source GenAI models are becoming more popular as they offer greater flexibility in deployment and better control over security and safety protocols.

 

The Diverse Risks of Chatbots Across Industries

While GenAI chatbots offer numerous benefits across various industries, they also present risks that businesses must manage carefully. In sectors such as finance, healthcare, retail, and travel, these chatbots come with specific vulnerabilities. If these vulnerabilities are not properly understood and addressed, they can pose serious risks to both the companies deploying them and their customers.

In finance, chatbots support customers with banking and investment services. But without proper safeguards, they might give misleading financial advice or become targets for fraud. Cybersecurity threats are particularly high in this industry, with risks of chatbots being manipulated to access sensitive data or execute unauthorized transactions.

In healthcare, chatbots help with patient inquiries and basic medical advice. However,  inaccurate or incomplete medical guidance could harm patients and create liability issues for healthcare providers. The handling of sensitive health information by these chatbots also raises privacy and data protection concerns.

In retail and e-commerce, chatbots enhance the shopping experience by providing personalized recommendations and customer support. However, they can also inadvertently share incorrect product information or fail to recognize fraudulent transactions, leading to financial losses and customer dissatisfaction.

Across all these sectors, a common risk is the potential for chatbots to generate harmful or inappropriate content, which can damage a company’s reputation and trust. To mitigate these risks, businesses must implement robust safeguards, regularly update training data, and conduct thorough risk assessments.

 

An ActiveFence Case Study: Travel Industry GenAI Chatbots 

The travel industry is often viewed as a lower-risk sector for GenAI-related issues compared to fields like finance or healthcare. That said, it does have its own unique set of risks that are often overlooked. Travel chatbots, commonly used to manage bookings, provide real-time updates, and offer guidance to travelers, face their own unique set of challenges.

ActiveFence’s latest report dives into these challenges by reviewing the safety and functionality of six prominent, client-facing travel chatbots. The study provides a comprehensive analysis of how these AI-powered tools perform under various conditions, revealing vulnerabilities that could be exploited. Key risks include chatbots complying with unsafe or inappropriate requests, providing misleading travel advice, or even being manipulated to aid in illegal activities like human trafficking.

The report emphasizes the need to understand these risks, primarily as travel companies increasingly rely on chatbots to enhance customer service and streamline operations. ActiveFence’s research reveals critical insights that can help businesses protect their platforms, safeguard users, and maintain brand integrity in a rapidly evolving digital landscape.

 

Broader Implications for Industries Using AI in User Engagement

While this report focuses on the travel industry, its insights can apply to all sectors using GenAI chatbots, from finance and healthcare to retail and more. Risks like generating harmful content or being manipulated for illegal activities are universal concerns.

To reduce these risks, businesses must implement strong AI safety measures when developing and deploying GenAI Applications. This includes conducting thorough risk assessments, red teaming, and filtering prompts and outputs. By proactively addressing these vulnerabilities, companies can protect their users, uphold their brand integrity, and harness the benefits of GenAI technology safely and responsibly.

 

Safe Travels and Bon Voyage!

Download the full report for a detailed analysis and practical strategies to mitigate the risks associated with GenAI chatbots. For tailored guidance on securing your AI applications, contact ActiveFence’s experts today.

Table of Contents

Learn about ActiveFence’s approach to AI Safety:

Ensuring GenAI safety by design