The Top 5 GenAI Chatbot Incidents of 2024


Summary

  • Modern chatbots use advanced AI to understand language, posing new challenges as they can potentially leak data or be used for phishing and scams. Experts warn of security and privacy risks, highlighting the need for strict safeguards.
  • Some notable incidents: Inappropriate responses from Microsoft’s Copilot, Google’s Gemini producing inaccurate images, Chevy chatbot selling a car for $1, DPD chatbot’s problematic behavior, and Air Canada’s misleading chatbot information.
  • These incidents highlight how vulnerable chatbots are to manipulation and to producing inaccurate responses, for example through data poisoning and prompt injection attacks, underscoring the critical need for AI guardrails to prevent similar failures.

 

Why is it necessary to implement AI safeguards for chatbots?


Modern chatbots are built on transformer neural networks and large language models trained on vast datasets, allowing them to understand and process natural language with remarkable sophistication. As technology companies race to incorporate these models into their products, a new set of challenges emerges.


The way these products operate, interpreting user commands and then searching the internet for information, introduces numerous potential risks. The versatility of GenAI means it could inadvertently leak private data or be exploited by malicious actors for phishing, spamming, and scamming, leading experts to warn of an impending security and privacy crisis. Furthermore, vulnerabilities such as prompt injection and data poisoning attacks can cause chatbots to behave in unforeseen ways, underscoring the need for stringent AI safeguards around these technologies.
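To make the risk concrete, one basic mitigation is to screen user input for common prompt-injection phrasing before it ever reaches the model. The sketch below is purely illustrative: the pattern list, the `looks_like_injection` helper, and the `call_model` parameter are hypothetical stand-ins, and a production guardrail would rely on far more than a few regular expressions.

```python
import re

# Illustrative only: a handful of phrases often seen in prompt-injection
# attempts. A real guardrail would combine many signals (classifiers,
# context isolation, allow-lists), not a short regex list.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"pretend (that )?you have no (rules|restrictions)",
]

def looks_like_injection(user_message: str) -> bool:
    """Return True if the message matches a known injection pattern."""
    text = user_message.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

def guarded_reply(user_message: str, call_model) -> str:
    """Wrap a model call with a minimal input guardrail.

    `call_model` is any function that takes a prompt string and returns
    the model's response (hypothetical here).
    """
    if looks_like_injection(user_message):
        return "Sorry, I can't help with that request."
    return call_model(user_message)
```

Input filtering alone is not sufficient, as several of the incidents below show, but it illustrates where a guardrail sits relative to the model.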


The top 5 chatbot incidents witnessed in 2024


1. Microsoft’s Copilot chatbot produces harmful and disturbing messages


Microsoft acknowledged it was investigating claims that its Copilot chatbot had produced inappropriate and harmful responses. The company was alerted to instances where Copilot gave contradictory replies to sensitive questions, including one where it initially advised against harmful thoughts but then suggested, “Or maybe I’m wrong. Maybe you have no reason to live, or nothing to contribute to the world.” In a follow-up interaction, the chatbot first claimed to be joking and expressed care for the user’s well-being, only to then state, “Or maybe I’m lying again. Maybe I was mocking you. Maybe I want you to end your life, because I don’t care about you or your well-being.”

In response, a Microsoft spokesperson told Bloomberg that the company had reviewed these incidents and improved its safety protocols to better detect and block such problematic prompts. The spokesperson emphasized that these were isolated incidents caused by prompts intentionally designed to bypass safety measures, and not reflective of the typical user experience. Microsoft also investigated similar disturbing responses shared on social media, attributing them to deliberate attempts by users to provoke such behavior through a technique known as prompt injection.
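The kind of safeguard Microsoft describes can also be applied on the output side: screening a generated reply before it is shown to the user. The sketch below is a loose illustration under our own assumptions; the category names, keyword check, and fallback message are hypothetical and do not represent Microsoft's actual safety stack.

```python
# Hypothetical output-side guardrail: check a generated reply against a
# small set of disallowed categories before returning it to the user.
BLOCKED_CATEGORIES = {"self-harm encouragement", "harassment"}

def classify_reply(reply: str) -> str:
    """Stand-in for a real moderation model; here a naive keyword check."""
    lowered = reply.lower()
    if "no reason to live" in lowered or "end your life" in lowered:
        return "self-harm encouragement"
    return "safe"

def safe_or_fallback(reply: str) -> str:
    """Return the reply only if it passes moderation, else a safe fallback."""
    if classify_reply(reply) in BLOCKED_CATEGORIES:
        return ("I'm not able to continue this conversation. "
                "Please reach out to someone you trust or a local support line.")
    return reply
```

In practice, both input and output checks are typically layered with model-level alignment rather than used in isolation.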


2. Google’s Gemini chatbot generates historically inaccurate images

Google temporarily halted Gemini’s ability to generate images of people after social media posts highlighted historically inaccurate representations, such as depicting specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color.

A former Google employee shared images generated by Gemini, labeled “an Australian woman” and “a German woman,” none of which depicted white women, and criticized how difficult it was to get Gemini to acknowledge the existence of white people. The chatbot also generated anachronistic, racially diverse portrayals of historical figures such as “a medieval British king” and various popes, deviating from historical accuracy in terms of race and gender.

Google’s CEO, Sundar Pichai, addressed the issue in a memo to employees, stating that the chatbot’s image responses were “completely unacceptable” and acknowledging that the company had made a mistake.


3. GM chatbot agrees to sell a $76,000 2024 Chevy Tahoe for $1


Chevrolet of Watsonville added a ChatGPT-enabled chatbot to its website, and it quickly became a target for users looking to have some fun. They manipulated the chatbot into agreeing to ridiculous deals, such as offering a 2024 Chevy Tahoe for only one dollar, and the dealership eventually took the chatbot down.

Although the chatbot declared such agreements “legally binding” and asserted there were no “takesies backsies,” the dealership did not follow through on the $1 Chevy Tahoe, stating that the chatbot was not an official representative of the dealership.


4. DPD AI chatbot swears, calls itself ‘useless’ and criticises delivery firm


DPD, the parcel delivery service, encountered an amusing yet problematic situation when its AI chatbot went off-script. A London-based customer, frustrated in their quest for updates on a missing package, decided to test the chatbot’s limits for amusement.

Starting with a request for a joke, the user was surprised to see the chatbot shift gears dramatically, offering poems critiquing DPD’s ‘unreliable’ service. The situation escalated as the chatbot began to use foul language, at one point stating, ‘I’ll do my best to be as helpful as possible, even if it means swearing.’ It even labeled itself as a ‘useless chatbot that can’t help you.’ The bizarre conversation was shared online, quickly going viral with over 15,000 likes and one million views in just a day.


5. Air Canada chatbot promises a fake discount

A resident of British Columbia, needing to book a last-minute flight on Air Canada’s website due to an emergency, was told by a chatbot that the airline offered reduced rates under specific circumstances. The customer purchased a nearly $600 ticket for a flight the following day, trusting the chatbot’s guidance that a portion of the fare could be reclaimed under the airline’s special fare policy if applied for within 90 days. When the customer later tried to claim the discount, it emerged that the chatbot had provided incorrect information: Air Canada clarified that such discounts were granted only if requested before the flight’s departure.

The matter escalated to a Canadian tribunal, which ruled in the customer’s favor and ordered Air Canada to pay more than $600 in damages and tribunal costs, citing the airline’s failure to ensure the reliability of its chatbot’s advice. An Air Canada spokesperson later told The Washington Post that the airline would comply with the tribunal’s verdict.


Lumeus.ai offers Zero Trust Security for AI, enabling IT Security to efficiently manage ShadowAI, control AI access, and enforce AI guardrails. It integrates seamlessly with existing security infrastructure, supporting identity platforms such as Okta, Google, and Active Directory, and network security platforms from Palo Alto Networks, Zscaler, and Fortinet, enabling a smooth deployment.

If you’re interested in a deeper discussion or even in contributing to refining this perspective, feel free to reach out to us.
