Why Most Chatbots Are Terrible

I just tried to get help with a billing issue through a company’s chatbot. After five minutes of the bot failing to understand my question and offering irrelevant help articles, I gave up and sent an email. The whole experience could have been replaced with a search box and saved everyone time.

Companies are deploying chatbots everywhere – customer service, sales, support, internal knowledge bases. Most of them are actively bad at their jobs. Here’s why.

The Fundamental Problem: They Don’t Understand Context

Modern chatbots have gotten way better at parsing natural language. They can often identify keywords and intent reasonably well. What they can’t do is understand context the way humans do.

Me: “I was charged twice for my last order”

Bot: “I can help you track your order! Please provide your order number.”

Me: “I don’t need to track it, I need the duplicate charge refunded”

Bot: “To update your shipping address, please…”

This isn’t a parsing failure. The bot understands individual words. It just can’t maintain conversational context or handle anything off its predefined script.
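
To see why this happens, here’s a minimal sketch of the stateless keyword-intent matching many bots use (all names and keyword lists are hypothetical). Each message is classified in isolation: “charged twice for my last order” matches both the refund and order-tracking buckets, ties resolve to whichever intent was registered first, and nothing carries over to the next turn, so the bot can’t connect “it” back to the earlier charge.

```python
# Hypothetical intent table; real bots use bigger lists or a trained classifier,
# but the stateless per-message matching is the same.
INTENTS = {
    "track_order": ["track", "order", "shipping", "delivery"],
    "update_address": ["address", "shipping", "update"],
    "refund": ["refund", "charge", "charged", "duplicate"],
}

def classify(message: str) -> str:
    """Score each intent by keyword overlap; ties go to the first-registered intent."""
    words = set(message.lower().split())
    scores = {intent: len(words & set(kws)) for intent, kws in INTENTS.items()}
    return max(scores, key=scores.get)

# "charged" (refund) and "order" (track_order) tie at one keyword each,
# and the tie-break picks track_order -- hence the irrelevant tracking reply.
print(classify("I was charged twice for my last order"))          # -> track_order
# The follow-up classifies fine in isolation, but since no state is shared
# between turns, the bot still can't link it to the previous message.
print(classify("I don't need to track it, refund the duplicate charge"))  # -> refund
```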

Companies Deploy Them Before They’re Ready

The chatbot arms race means companies are rushing half-baked implementations to production. The technology exists to build decent chatbots, but that requires significant investment in training, testing, and maintenance.

Most companies treat chatbots as a “set it and forget it” solution. Deploy the bot, fire some customer service reps, call it cost savings. Then wonder why customer satisfaction tanks.

A chatbot that can’t actually resolve issues just creates an extra frustration layer before customers finally reach a human (or give up entirely). That’s worse than not having a chatbot at all.

The Hand-Off Problem

When a chatbot can’t help, it should seamlessly hand off to a human. Most don’t do this well.

You explain your problem to the bot. Bot fails. Finally get transferred to a human. Human asks you to explain the problem again from scratch because they can’t see your conversation with the bot.

You just wasted 10 minutes explaining things twice. If the initial interaction had just been with a human, you’d be done already.

Good chatbot implementations pass conversation history to the human agent. Most companies haven’t bothered implementing this.

They’re Optimized for Deflection, Not Resolution

From a company’s perspective, the chatbot’s job isn’t to help you. It’s to prevent you from contacting support. The success metric is “percentage of tickets deflected,” not “percentage of problems solved.”

This creates perverse incentives. The chatbot will offer you help articles, FAQs, community forums – anything to avoid creating a support ticket. Whether these actually solve your problem is secondary.

I’ve seen chatbots that make it deliberately hard to reach a human. You have to phrase your request exactly right or try multiple times before you get the option.

The Uncanny Valley of Competence

Early chatbots were obviously dumb. You knew you were talking to a bot, adjusted your expectations accordingly, and quickly moved on to other options when it couldn’t help.

Modern chatbots are competent enough to seem like they should work, which makes the failures more frustrating. They understand your question, they claim they can help, then they provide useless information or get stuck in loops.

It’s the AI equivalent of talking to someone who seems to be listening but clearly isn’t, which is more annoying than dealing with someone who’s upfront about not understanding.

Internal Chatbots Are Even Worse

Customer-facing chatbots at least get some attention and resources. Internal knowledge base chatbots that companies deploy for employees are often even worse.

They’re trained on a dumping ground of documentation that nobody’s organized or updated. They confidently point you to outdated policies. They can’t access the tribal knowledge that experienced employees have.

Employees learn to just ask colleagues directly, defeating the whole point of the knowledge base chatbot.

The Training Data Problem

Chatbots are only as good as what they’re trained on. Most companies train them on:

  • FAQs (which cover maybe 20% of actual questions)
  • Help articles (often poorly written)
  • Previous support tickets (including the wrong answers agents gave)

They’re not trained on edge cases, complex situations, or things that require judgment. So they handle simple questions okay and fail on anything interesting.

What Good Chatbots Look Like

Some companies do this well. The common patterns:

Clear scope: The bot is explicit about what it can and can’t help with. “I can help with account questions, billing issues, and password resets. For technical support, I’ll connect you with an agent.”

Easy human escape hatch: Type “agent” or “human” at any point and get transferred immediately. No arguing, no “let me try to help first.”

Context preservation: When you do get transferred, the human agent sees your full conversation history.

Honest limitations: “I’m not sure I understand. Here are some related articles that might help, or I can connect you with someone who can assist.”

Narrow use cases: Chatbots work better when they’re designed for specific tasks (password resets, order tracking) rather than trying to handle all support questions.
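
Two of these patterns, the always-available escape hatch and context preservation, can be sketched in a few lines (all names here are hypothetical; a real bot would sit behind a chat frontend and a ticketing API). The key design choices: check for escape words before any intent matching so “agent” always works, and hand the full transcript to the human on escalation.

```python
from dataclasses import dataclass, field

# Escape words are checked first on every turn -- no arguing, no "let me try".
ESCAPE_WORDS = {"agent", "human", "representative"}

@dataclass
class Conversation:
    transcript: list[str] = field(default_factory=list)

    def log(self, speaker: str, text: str) -> None:
        self.transcript.append(f"{speaker}: {text}")

def handle(convo: Conversation, message: str) -> str:
    convo.log("user", message)
    # Escape hatch BEFORE intent matching, so it can never be swallowed
    # by a misclassified intent.
    if ESCAPE_WORDS & set(message.lower().split()):
        return escalate(convo)
    reply = "I can help with account questions, billing issues, and password resets."
    convo.log("bot", reply)
    return reply

def escalate(convo: Conversation) -> str:
    # Context preservation: the agent receives the whole transcript,
    # so the customer never explains the problem twice.
    handoff = "\n".join(convo.transcript)
    return f"Connecting you to an agent with your conversation history:\n{handoff}"
```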

The LLM Hype Cycle

With ChatGPT and similar models, we’re in another chatbot hype wave. “Now we can finally build chatbots that actually work!”

Maybe. LLMs are definitely better at natural language understanding. But they also hallucinate confidently incorrect information, which is terrible for customer support.

You need significant engineering around the LLM to make it reliable for production use. Most companies trying to capitalize on the hype aren’t doing that engineering.

We’ll see another wave of terrible chatbots, just with better grammar this time.

The Economic Reality

Good chatbots cost money to build and maintain. Bad chatbots are cheap and still reduce support costs by frustrating some percentage of users into giving up.

From a purely financial perspective, bad chatbots might make sense. Customer satisfaction takes a hit, but if the cost savings outweigh the lost revenue from frustrated customers, the CFO’s happy.

This is why chatbots are particularly bad at companies with captive user bases. If you can’t easily switch providers, they have less incentive to make the chatbot actually helpful.

What You Can Do

Know the magic words: “speak to human,” “representative,” “agent” often trigger escalation to real support faster than going through the bot’s flow.

Be direct and keyword-heavy: Chatbots aren’t good at conversational nuance. “Refund duplicate charge order #12345” works better than “I think there might have been an issue with my recent purchase where…”

Give up faster: If the chatbot isn’t helping after two or three exchanges, don’t keep trying. Find another support channel. Your time is worth something.

Leave feedback: Many chatbots ask “Was this helpful?” at the end. “No” might actually feed back to improve the system (but probably won’t).

The Future

Chatbots aren’t going away. As the technology improves and companies get better at implementation, some will become genuinely useful.

But we’re probably in for several more years of mostly-terrible chatbots as companies chase the hype without doing the work to make them actually good.

The companies that figure this out – chatbots for narrow tasks, easy escalation to humans, actual investment in training and maintenance – will have a genuine competitive advantage.

The rest will keep annoying their customers with digital barriers to support while wondering why satisfaction scores keep declining.

In the meantime, type “agent” and skip the frustration.