8 AI Nightmares & ChatGPT Cautionary Tales That All Founders Should Know About

You’d be hard-pressed to find anyone in the tech industry who hasn’t heard about ChatGPT. However, like any complex artificial intelligence (AI) system, ChatGPT can go wrong, and when it does, the unexpected responses are a reminder of its limitations. 

As a quick refresher, ChatGPT is a large language model (LLM) developed by OpenAI, based on the Generative Pre-trained Transformer (GPT) architecture. It is designed to generate human-like responses to the inputs you give it.

The unpredictable nature of AI errors is both fascinating and a cautionary tale for founders. The failures range from the relatively harmless (if nightmare-inducing) AI-generated images from tools like DALL-E and Midjourney, to concerning data leaks, to potentially lethal mistakes made by healthcare AI. 

In this post, we'll explore some of the biggest AI nightmares to date, what they reveal about the current state of AI technology, and key things you should consider when incorporating AI into your startup.

1. OpenCage’s customer onboarding nightmare 

OpenCage, which provides a geocoding API, noticed they were starting to get new customers coming from ChatGPT. 

Initially, they were thrilled by this new, accidental lead gen channel. However, this excitement quickly turned into a customer onboarding and support nightmare. That’s because ChatGPT claimed that OpenCage offers an API to turn a mobile phone number into the location of the phone.

The problem was this feature didn’t exist. And trying to build it would be a privacy nightmare. So, the team wound up with an influx of disappointed customers who wanted a feature that didn’t exist and were bound to churn.

Since the team at OpenCage doesn’t have access to the data that ChatGPT is pulling from, they don’t have any way to correct the misinformation. 

Key Takeaway: ChatGPT is a black box. You have no control over the information that OpenAI used to train it.

2. Zillow’s $304 million operating loss  

A tiny error rate in an algorithm can cause huge swings in housing-market calculations and have real-life consequences. That’s something Zillow learned the hard way in November 2021, when errors in its home-price prediction algorithm led the company to shut down its then-biggest revenue source, Zillow Offers.   

Key Takeaway: This is a cautionary tale for founders to make sure you understand and vet all of the data that you are feeding into an AI tool. As Zillow discovered, a tiny data inaccuracy can have massive consequences at scale. 

3. Google loses $100 billion in market value when Bard messes up in a demo 

A factual error in a promotional demo revealed that Google’s Bard was not yet on the same level as OpenAI’s ChatGPT.

Just like Zillow, Google learned the hard way that the quality of your inputs and having human oversight matter a lot. If you feed it garbage inputs, Bard will spit out garbage outputs. 

Key Takeaway: No matter how advanced you think your LLM is, you need human intervention to ensure that the information Bard or ChatGPT spits out is actually correct. 

4. Ticketmaster angers fans with AI dynamic pricing fiasco 

Ticketmaster is no stranger to self-inflicted PR disasters. Just ask Swifties!

In this particular example, Ticketmaster tried to use AI to make the ticket pricing process easier and more transparent for a Bruce Springsteen concert. However, this dynamic pricing solution ended up backfiring because the team failed to consider the environment in which the tool would be used. The result was seemingly unchecked price gouging, which left customers feeling jaded.   

Key Takeaway: When you are creating your own AI-driven algorithms, you need to do your due diligence and consider the potential factors that could influence them. Or, at least, set up guardrails. 
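To make the idea of guardrails concrete, here is a minimal, purely hypothetical sketch of what a cap around a dynamic pricing model’s output could look like. This is not Ticketmaster’s actual system; the base price, multipliers, and thresholds are illustrative assumptions you would tune for your own business.

```python
# Hypothetical guardrail around a dynamic pricing model's output.
# The multiplier would come from your pricing model; the caps are policy you set.

BASE_PRICE = 120.00        # face value of a ticket, in dollars
MAX_MULTIPLIER = 2.0       # never charge more than 2x face value
MIN_MULTIPLIER = 0.5       # never dump tickets below half of face value
REVIEW_THRESHOLD = 1.5     # anything above 1.5x gets a human look first


def price_with_guardrails(model_multiplier: float) -> tuple[float, bool]:
    """Clamp the model's suggested multiplier and flag extreme prices for human review."""
    clamped = max(MIN_MULTIPLIER, min(model_multiplier, MAX_MULTIPLIER))
    needs_human_review = model_multiplier > REVIEW_THRESHOLD
    return round(BASE_PRICE * clamped, 2), needs_human_review


# Example: the model wants to charge 40x face value during a demand spike.
price, review = price_with_guardrails(40.0)
print(price, review)  # 240.0 True -- capped at 2x and queued for human review
```

Even a few lines like this would have turned "the algorithm went wild" into "the algorithm hit its ceiling and paged a human."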

5. An AI Instagram experiment led to a welfare check for one reporter 

Katie Notopoulos, a BuzzFeed reporter, decided to run an experiment where she had ChatGPT write all of her Instagram captions for two weeks.

The experiment started innocently enough. She wanted to see if the captions that ChatGPT created could sound human-like.

Her followers quickly caught on. By the end of the two weeks, she found herself reassuring them that it really was just an experiment while they openly questioned her humanity.

Key Takeaway: As tempting as it may be to have ChatGPT write all of your marketing or social media content, you really need a human editor to review everything.  

6. When a self-driving car went rogue 

Up until this point, all of the cautionary tales we’ve shared have been annoying, inconvenient, or expensive mistakes. However, in some scenarios, an AI mistake can have life-altering or fatal consequences. 

For instance, an AI cruise control feature in a BMW misread a speed limit sign and tried to accelerate to 110 mph on a 30 mph road. Fortunately, the driver was alert and able to intervene before he or anyone else got injured. 

Key Takeaway: No matter how much testing you do in controlled environments and how well an algorithm is programmed, this should be a reminder to all founders that AI can be unpredictable in real-life settings, particularly if there are bad actors involved.

7. AI’s overstated and dangerous medical claims  

In the realm of AI and medicine, if a claim sounds too good to be true, it probably is. 

This is a lesson that Epic Systems learned the hard way. Its sepsis-prediction algorithm missed 67% of the patients who actually developed the condition it claimed to predict, while roughly 88% of the patients it flagged turned out to be false alarms. The worst part? These predictions were already being shared with several hospitals.   

In another experiment, AI didn’t do much better at diagnosing COVID-19. 

And in one test, a medical chatbot went rogue and told a mock patient to kill themselves. 

Key Takeaway: Even if the tool you are building doesn’t have the potential to influence whether someone lives or dies, medical missteps, like the examples above, are still a reminder to verify the accuracy of the information any AI tool spits out. Also, governments probably should regulate healthcare AI tools.  

8. An AI mistake put an innocent man behind bars 

AI is widely used in facial recognition software. However, news stories like this one still show we have a long way to go when it comes to combating potential gender, race, and social biases in AI algorithms.  

In fact, several diversity and inclusion studies have shown that facial recognition tech, in particular, is more prone to mistakes when matching the faces of darker-skinned individuals.  

It is another reason to be cautious of AI algorithms, since they can have potentially life-altering or even lethal consequences if left unchecked.

Key Takeaway: You really need to consider any potential biases in the inputs you feed your AI. That’s because even the slightest bias can have a major ripple effect at scale. 
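One simplified way to catch this kind of problem before launch is to measure your model’s error rate separately for each demographic group in your evaluation data. The sketch below assumes you already have labeled test records with a group attribute; the sample data and the 10% threshold are hypothetical placeholders, not values from any real study.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, model_prediction, ground_truth).
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, predicted, actual in results:
    counts[group][0] += int(predicted != actual)
    counts[group][1] += 1

for group, (wrong, total) in counts.items():
    rate = wrong / total
    print(f"{group}: error rate {rate:.0%}")
    if rate > 0.10:  # arbitrary threshold -- tune it to your own risk tolerance
        print(f"  WARNING: {group} exceeds the acceptable error rate; investigate before shipping")
```

If one group’s error rate is noticeably worse than another’s, that gap is your ripple effect waiting to happen at scale.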

How to avoid creating the next AI nightmare  

While AI can be unpredictable in real-life scenarios, there are some things you can do as a founder to reduce the odds of unleashing the next AI nightmare machine. 

If you aren’t technical, some simple things you can do to protect your company include: 

  • Take all AI content with a grain of salt 

  • Fact check all material from AI content tools 

  • Make sure there are sufficient guardrails with human oversight (see the sketch after this list for one lightweight example)
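Here is a minimal, hypothetical sketch of what a human-in-the-loop guardrail can look like in practice: every AI-generated draft lands in a review queue, and nothing gets published without an editor’s approval. The class and function names are illustrative, not from any real tool.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    source: str            # e.g. "chatgpt" or "human"
    approved: bool = False
    reviewer_notes: str = ""

review_queue: list[Draft] = []

def submit_draft(text: str, source: str) -> Draft:
    """Queue every AI-generated draft for review; nothing goes out automatically."""
    draft = Draft(text=text, source=source)
    review_queue.append(draft)
    return draft

def publish(draft: Draft) -> None:
    """Refuse to publish AI-generated content that a human editor has not approved."""
    if draft.source != "human" and not draft.approved:
        raise ValueError("AI-generated content must be approved by a human editor first")
    print(f"Published: {draft.text[:60]}")

# Usage: the draft only goes out after an editor signs off.
draft = submit_draft("Our geocoding API can locate any phone number!", source="chatgpt")
draft.reviewer_notes = "False claim -- we do not offer phone tracking. Rejected."
# publish(draft) would raise ValueError here because the draft was never approved.
```

The point isn’t the specific code; it’s that the default path for AI output should end at a human, not at your customers.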

However, the best thing you can do is spend a lot of time playing around with ChatGPT, Bard, and any other AI tools before you use them in your company.

In fact, you can often learn the most about how a tool works by figuring out ways to break it (a.k.a. jailbreaking). 

***

Whether you are looking to build the next AI startup or just looking to incorporate an existing AI tool into a process in your company, the most important thing you can do is learn as much as you can about how it works and set up proper, human-assisted guardrails.

Want more resources about building, growing, and launching a bootstrapped SaaS business?

Join Our Mailing List