Josh

YC founders, would you trust AI to handle 90% of your support chats?

my co-founder and I are building an AI agent because at our last startup we just couldn’t keep up with support.

we tried every chatbot out there. they all felt… robotic. customers hated it.

hiring more people was too slow + too $$$

so we put together this ai chatbot (think intercom fin but deeper) that trains on your old tickets, learns your tone, doesn’t hallucinate, and can actually answer stuff like a real support rep.

here’s the thing though… even when it works, founders get nervous about letting AI talk to their users. like, what if it says something stupid? what if it sounds off-brand?

curious if anyone here has tried automating support.

did it work for you?

where did it break?

would you let AI take over 90% of your chats or nah?

Replies

Aleksandar Blazhev

As of today, I haven’t found a chatbot that’s reliable enough either. They’re getting better and better, but they still require supervision. In other words, we can’t eliminate the human factor. Still, I believe they’re worth implementing.

Josh

@byalexai yeah 100%. even with all the AI hype, you still need a human in the loop. that said — it’s honestly good enough now to take care of the repetitive stuff without making things up, as long as you let people jump in when needed.

Josh

@byalexai hey just circling back — we actually launched the product i was hinting at here. it’s called @CoSupport AI, built around the stuff i mentioned originally (no hallucinations, custom logic, trained on your data). if you’re curious, it’s live on PH today: https://www.producthunt.com/products/cosupport-ai

thanks again for helping shape the convo 🙏

Edward Look

I think the trust has to be built over time. Maybe a "human approval" step for the AI-drafted responses could be the bridge.

Josh

@edlook Yeah, makes sense. We’ve actually been experimenting with exactly that — an optional “approve before sending” mode when teams don’t fully trust AI replies yet.

Might be the bridge people need before they let it run on autopilot.
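
A minimal sketch of what an "approve before sending" gate like this could look like; the names and structure are illustrative assumptions, not CoSupport's actual implementation:

```python
# Illustrative "approve before sending" mode: AI drafts go straight out only
# when the team has enabled autopilot; otherwise they wait in a human review
# queue. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DraftReply:
    ticket_id: str
    text: str

def route_draft(
    draft: DraftReply,
    autopilot_enabled: bool,
    send: Callable[[DraftReply], None],
    queue_for_review: Callable[[DraftReply], None],
) -> None:
    if autopilot_enabled:
        send(draft)              # trusted: reply goes straight to the customer
    else:
        queue_for_review(draft)  # a human approves (or edits) before anything is sent
```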

Josh

@edlook hey Edward — just wanted to say we launched the product i mentioned earlier.

it’s called @CoSupport AI — live now on PH: https://www.producthunt.com/products/cosupport-ai


And I appreciate your input in this thread 🙌

Alex Khoroshchak

I think a lot of founders underestimate how much the data matters. The AI can only be as good as what you feed it. If your training data is messy, outdated, or inconsistent, it’s going to reflect that — no matter how fancy the model is. I’ve seen people expect magic from AI after dumping in a bunch of random old tickets, and then get disappointed when it doesn’t perform. On the other hand, if you give it clean, structured, and relevant data, it can feel like you’ve hired your best support rep — but one that works 24/7 without breaks.

Anyway, a fallback to human support is essential. There should always be an open door for escalation when needed. I really hate when I’m stuck with a chatbot that can’t help me and gives me no way to reach a real person.
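
To make the data point concrete, here is a rough sketch of the kind of cleanup Alex describes before old tickets are used for training; the field names, schema, and one-year cutoff are assumptions, not tied to any specific tool:

```python
# Hypothetical cleanup pass over historical tickets: drop empty, duplicate,
# and stale entries so the AI trains on consistent, current examples.
from datetime import datetime, timedelta, timezone

def clean_tickets(tickets: list[dict], max_age_days: int = 365) -> list[dict]:
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    seen: set[str] = set()
    cleaned = []
    for t in tickets:
        body = (t.get("question", "") + " " + t.get("answer", "")).strip().lower()
        if not body:
            continue                        # empty ticket: nothing to learn from
        if body in seen:
            continue                        # duplicate: avoids over-weighting one answer
        resolved_at = t.get("resolved_at")  # assumed to be a timezone-aware datetime
        if resolved_at and resolved_at < cutoff:
            continue                        # stale: the policy may have changed since
        seen.add(body)
        cleaned.append(t)
    return cleaned
```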

Eugene Nesterenko

can confirm we went through so many tools before giving up and saying “okay, we’re building our own.”

but honestly, it’s kind of scary when AI takes over all your chats… and actually replies better than you 😅

like — you’ve spent years building your tone of voice, and now this thing mimics you perfectly, down to the emojis 😄

Josh

@enesterenko Haha yes, we had the same “this is scary good” moment. The whole tone mimic thing is cool but also freaky at first.

Ritik Kumar
Launching soon!

When done right, it is a massive unlock. The real shift happens when the AI is trained on your historical convos, adopts your brand tone, and integrates deeply with your tools. The goal is not just speed, but trust at scale.

Viktoriia Yadoshchuk

@ritik_nyraai can't agree more. Training AI on your own data, FAQs, helpdesk content, etc. is a real step forward.

Josh

@ritik_nyraai quick update: we just shipped @CoSupport AI — the thing i was hinting at here.

built to solve exactly what we talked about (hallucinations, control, training on historical convos, integrations, etc).


live now → https://www.producthunt.com/products/cosupport-ai

Matt Carroll

sort of related but if you really have a way to make ai not hallucinate it seems like you could probably print as much money as you want. do you believe this is actually solved? why not generalize your approach and sell an API that has just this feature alone.

Josh

@catt_marroll honestly… not a bad idea. for now, we’ve just baked it into the product — tight grounding, hard reply limits, fallback paths. might spin it out later if folks keep asking for it.
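
For anyone curious what "tight grounding, hard reply limits, fallback paths" can mean in practice, here is one hedged way to wire it together; retrieve and generate are placeholder callables, not a real CoSupport API:

```python
# Grounding with a fallback path: only answer when retrieved sources clear a
# relevance bar, otherwise hand the chat to a human instead of guessing.
from typing import Callable

FALLBACK = "I'm not sure about this one, so I'm looping in a teammate."

def answer(
    question: str,
    retrieve: Callable[[str], list[dict]],       # returns [{"text": ..., "score": ...}]
    generate: Callable[[str, list[dict]], str],  # drafts a reply from the passages only
    min_score: float = 0.75,                     # grounding bar: below this, don't guess
    max_chars: int = 1200,                       # hard reply limit
) -> str:
    passages = [p for p in retrieve(question) if p["score"] >= min_score]
    if not passages:
        return FALLBACK                          # fallback path instead of a hallucination
    return generate(question, passages)[:max_chars]
```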

Matt Carroll

@bdzhel nice! Yeah mostly was curious. I imagine even reducing the rate of hallucinations would be a huge win if you could abstract it to many use cases.

Congrats on building!

Josh

@catt_marroll Hi Matt, we finally launched the ai agent platform i mentioned in this convo: @CoSupport AI

trained on your content, no weird replies.

if you’re curious, it’s up now → https://www.producthunt.com/products/cosupport-ai

thanks again for sharing your take

Matt Carroll

@bdzhel nice work on launching! cool launch assets BTW. hope you get some conversions! would be curious to have you post up and give a review of the launch and how it went. 

Dan Bulteel

If it helps: find a global brand where this tech could really make operational savings, offer to do a free pilot in one market, ideally a smaller one (the Netherlands, etc.), run it and capture the post-survey feedback, compare against another market on price, CSAT and accuracy, and use it to build trust to scale. I’m ex-adidas, TikTok, WPP and used to bring a lot of new tech and process into the company, so from founder to IT to function I built up a small playbook of how to move things forward like that. Message me if helpful. Good luck!

Josh

@dbul Appreciate that advice. A pilot with a smaller market and real CSAT data could definitely help build trust before rolling out wider. I’ll DM you — would love to learn more about that playbook you mentioned.

Josh

@dbul btw — finally launched the ai agent platform i mentioned in this convo: @CoSupport AI

trained on your content, no weird replies.


if you’re curious, it’s up now → https://www.producthunt.com/products/cosupport-ai

thanks again for sharing your take

Dan Bulteel

@bdzhel Saw it this morning and got behind it! Seems like it’s doing great, love the product story and positioning, good luck today!

Josh

@dbul Really appreciate the support, Dan! Keep me posted on your launches — excited to try and give feedback as well.

Dan Bulteel

@bdzhel Appreciate you!

Rohid

Have you tried out SiteAssist.io? We just launched SiteAssist.io today. I think you'll be amazed by it. It never hallucinates and only answers from your trained data. Highly customizable. There's an OpenAPI spec so you can use it in headless mode: no ugly chat widget, you can build your own UI (or ask us and we can help you build a custom UI inside your stack).

Let me know if you want to get a demo.

Josh

@rohid nice! always cool to see others building in this space. we’ve gone a different direction (focused more on ai logic than ui flexibility), but sounds like we’re aiming at the same problem.

Rohid

@bdzhel I looked at CoSupport AI. It's cool. We are also focusing on logic and better AI responses. Our main goal is also to make AI accessible from anywhere on your website, so we focus on both backend logic and frontend UI/UX. Happy to see we're both working on something that will be the future of web interaction.

Josh

@rohid you mentioned you launched SiteAssist, something similar to what we’re doing — we just launched @CoSupport AI today.


It handles replies without hallucinating, trained on your stuff

ph link if you wanna poke around → https://www.producthunt.com/products/cosupport-ai

Rohid

@bdzhel Great, now we are competitors!

Marcus Freeland

As a founder of a customer support platform, I’ll say that I can see how everyone has a bad taste in their mouth from other chatbots. They are robotic and act like they don’t have the information accessible to them to respond accurately. I spent months researching and tweaking before making the AI Assistant module live for users. The assistant really knows your content and refers to it when answering. I think that has been a huge differentiator when comparing. Users are starting to warm up to it (just released the AI features in June), but I can say that I would not have trusted any other chatbot out there. Most of the chatbots feel kind of lazy once you’ve used one that responds well and doesn’t hallucinate. It actually feels like it’s helping you and not just “answering” because it has to.

Josh

@marcusfreeland this hits home. we built our whole stack around that same idea — make ai actually help, not just respond. feels like most bots are just bluffing their way through. yours sounds like it actually does the work.

Marcus Freeland

@bdzhel “Bluffing through” is the perfect expression! Having AI be helpful is a solid way to build, it’s nice to hear about your journey in building this.

Josh

@marcusfreeland yeah we just launched it today actually — so feel free to try it and give your thoughts https://www.producthunt.com/products/cosupport-ai

Christopher Robbins

I would let AI handle basic support requests and elevate issues when needed. Maybe an AI Support Triage agent :)

Josh

@christopherrobbins totally agree... triage-style agents are one of the clearest wins. That’s exactly what we’ve seen with AI agents trained to route vs. resolve.

would you want the agent to just tag/escalate, or actually reply with context?

Christopher Robbins

@bdzhel I think if it's highly confident it can give an answer, reply to the support request... and if it's uncertain, maybe reply with what it thinks is the answer, then tag/elevate and let the customer know their ticket is being elevated just in case it can't resolve the issue... and just give a clear link for the customer to close the ticket if they get it figured out.

This would seem like the best of both worlds, and transparent to the customer too. Everyone realizes AI could very well take over most support roles... so that transparency will probably go far with the customer!
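
A small sketch of the answer-or-escalate flow Christopher describes; the 0.8 threshold, helper callables, and customer-facing wording are illustrative guesses, not a shipped product:

```python
# Confidence-gated triage: reply directly when confident, otherwise send the
# best guess, escalate to a human, and give the customer a link to close the
# ticket themselves if the guess turns out to be enough.
from typing import Callable

def triage(
    ticket_id: str,
    draft: str,
    confidence: float,
    send: Callable[[str, str], None],   # send(ticket_id, message)
    escalate: Callable[[str], None],    # escalate(ticket_id) into a human queue
    close_link: str,
    threshold: float = 0.8,
) -> None:
    if confidence >= threshold:
        send(ticket_id, draft)
    else:
        send(
            ticket_id,
            f"{draft}\n\nI've also escalated this to a teammate in case that "
            f"doesn't solve it. If it does, you can close the ticket here: {close_link}",
        )
        escalate(ticket_id)
```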

Kehinde Adeoye

1. Have you tried testing your AI to see how it functions?

2. I've seen a couple of AI agents assisting customers, and personally I've been helped by some. So it just depends on who you introduce your product to.

3. Also, if you'd like something like a provenance layer that verifies output and provides receipts for its information, let me know and I can introduce you to a product like that; they'd likely be open to a partnership.

Josh

@kehindeadeoye Good questions. We’ve been running tests on live ticket history and comparing AI replies to human responses. So far, it’s holding up well, but I agree it depends a lot on who you roll it out to first. The provenance layer sounds interesting — would be great if you could intro us to that product.
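
One simple way to run that kind of replay test, sketched under the assumption that each history item pairs a question with the human reply that actually shipped; the string-similarity score is a deliberately crude stand-in for a real metric or human grading:

```python
# Replay evaluation: generate an AI draft for each historical ticket and score
# it against the human reply that was actually sent.
from difflib import SequenceMatcher
from typing import Callable

def replay_eval(history: list[dict], draft_fn: Callable[[str], str]) -> float:
    scores = []
    for item in history:  # assumed shape: {"question": ..., "human_reply": ...}
        ai_reply = draft_fn(item["question"])
        scores.append(SequenceMatcher(None, ai_reply, item["human_reply"]).ratio())
    return sum(scores) / len(scores) if scores else 0.0
```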

Sunny_K_S

As of now, I haven't used any platforms that are this capable. But in the future, if I find the perfect tool to do so, I will definitely use it—after testing it thoroughly.

Josh

@sunny_k_s exactly... and during the sales pitch or website demo, it looks perfect. But once you start setting it up yourself, that's when it goes south.

we’re building something that’s easy to test safely (before going live).

Sunny_K_S

@bdzhel great!

Tom Ideaxton

If a chatbot solves my problem, I don't care whether it's a machine or a person, as long as it's in written form. I hate voice assistants in customer service, especially when I have an urgent question and the robot doesn't understand what I want — it's infuriating. But in any case, there should be an option to talk to a real person. Not all questions can be solved by an AI bot.

Josh

@ideaxton same. i just want it to work — don’t care if it’s a robot as long as i get help.

we’ve built in human fallback for that reason too.


and yeah… voice bots are still "not real" enough. Though elevenlabs is moving pretty fast to help you get it right.

Adi Singh

I would! As long as I built the AI to my own custom needs!

Josh

@adi_singh5 nice! what would your dream ai agent actually do?

Adriana

I would A/B test: CS agent vs. AI. Based on the results, I'd decide whether to roll it out to the whole audience.

Josh

@adriana97 smart!! what kind of result would convince you to roll it out fully?

Petra Diener

It depends on the complexity of the software. I helped one of my clients implement an AI Chatbot; it's well trained, and we keep training it on a regular basis as new knowledge becomes available. However, in their case, they've been around for 35 years, and their software is niche and complicated; a chatbot can only help with basic questions/answers. This said, in many cases, this is enough. In other cases, support is almost at a service level and needs to be handled by an expert who also understands a customer's hardware environment. IMHO: You'll never know until you try, and don't underestimate the power of human interaction when it comes to solving business-critical support issues

Josh

@petradiener 100%. some stuff still needs real humans — especially when it’s hardware or super niche.

we’ve focused on ai that knows when not to guess, and defers when needed.

Josh

@petradiener just looping back — we went live with @CoSupport AI today


built it based on convos like this one 🙌

link here if you wanna check it out: https://www.producthunt.com/products/cosupport-ai

Albert Chou

Intercom Fin is in fact what we use, trained on years and years of our blog posts and weekly newsletters. It is the primary responder on over 80% (and rising each week) of issues and is the only respondent on over 80% (and rising) of those, so overall it is handling about 65% (and rising) of all issues without intervention. Obviously, I don't know where the ceiling for its unassisted operation is, but to the original question, I think it is "taking over" more than 80% of our support chats already. I don't know if we're configured to have human responders on the front line at all, so I can't quite answer the question exactly as asked.
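
Those percentages compose by straight multiplication: roughly 80% primary-responder share times roughly 80% sole-responder share gives about 64% handled end to end, which matches the ~65% figure.

```python
# Quick check of the arithmetic above.
primary_responder = 0.80  # Fin replies first on ~80% of issues
sole_responder = 0.80     # and is the only respondent on ~80% of those
print(f"handled without intervention: ~{primary_responder * sole_responder:.0%}")  # ~64%
```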

Josh

@albert_chou thanks for sharing that — super insightful. over 80% is impressive. curious how much of that is repetitive stuff vs more edge cases? we’re working on something similar, just with more control over the content source.
