Ben Lang

RunLLM - AI that doesn’t just respond—it resolves

Built on 10 years of UC Berkeley research, RunLLM reads logs, code, and docs to resolve complex support issues. It saves 30%+ of engineering time, cuts MTTR by 50%, and deflects up to 99% of tickets. Trusted by Databricks, Sourcegraph, and Corelight. Try it for free on your product.

Vikram Sreekanti

Hi ProductHunt! My name is Vikram — I’m co-founder & CEO of RunLLM. RunLLM’s an AI Support Engineer that works how you work.

Background

The promise of AI is that customer support will become dramatically more scalable — so that your team can focus on high-value customer relationships. But anyone who's building a complex product knows that a good support agent requires a lot more than a vector DB and GPT-4.1.

The first version of RunLLM focused on building an engine that generated the highest-quality answers we could, and that helped us earn the trust of customers like Databricks, Monte Carlo Data, and Sourcegraph. But what we've found over the last 6 months is that there's so much more we can do to help support teams operate efficiently.

RunLLM v2

In response to that feedback, we’ve built RunLLM v2, and we’re excited to share support for:

🤖 Agentic reasoning: Agents are all the rage, we know, but we promise this is for real. RunLLM’s reasoning engine focuses on deeply understanding user questions and can take actions like asking for clarification, searching your knowledge base, refining its search, and even analyzing logs & telemetry.

🖼️ Multi-agent support: You can now create agents tailored to the expectations that specific teams have — across support, success, and sales. Each agent can be given its own specific data and instructions, so you have full control over how it behaves.

⚙️ Custom workflows: Every support team is different, and your agent should behave accordingly. RunLLM’s new Python SDK enables you to control how your agent handles each situation, what types of responses it gives, and when it escalates a conversation.
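To make the custom-workflow idea concrete, here's a minimal sketch of the kind of routing logic the SDK is described as enabling — deciding whether an agent responds, asks for clarification, or escalates. All names here (`Draft`, `route`, the thresholds) are hypothetical stand-ins for illustration, not the actual RunLLM SDK API:

```python
# Illustrative sketch only: a custom escalation workflow of the kind
# described above. These names are hypothetical, not RunLLM's real SDK.

from dataclasses import dataclass


@dataclass
class Draft:
    answer: str
    confidence: float  # 0.0-1.0 score from the answer engine


def route(draft: Draft, question: str) -> str:
    """Decide what the agent should do with a drafted answer."""
    urgent = any(w in question.lower() for w in ("outage", "data loss", "security"))
    if urgent:
        return "escalate"   # hand off to a human immediately
    if draft.confidence < 0.5:
        return "clarify"    # ask the user a follow-up question first
    return "respond"        # send the drafted answer

print(route(Draft("Restart the worker.", 0.9), "How do I restart a worker?"))
# -> respond
```

The point is that the policy (keywords, thresholds, escalation paths) lives in your code, so every team can encode its own rules.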

Early Returns

Some of our early customers have been generous enough to share their feedback with us, and the results have been impressive:

- DataHub: $1MM of cost savings in engineering time

- vLLM: RunLLM handles 99% of all questions across the community

- Arize AI: 50% reduction in support workload

Try it & tell us what breaks

Spin up an agent on your own docs—for free—ask your hardest question, and see how far it gets. If it stumbles, let us know. We learn fast.

👉 Get started with a free account, then paste the URL to your documentation site. That’s it. In just a few minutes, we’ll process your data and you’ll be able to start asking questions about your own product.

We’re looking forward to your feedback!

Masum Parvej

@vsreekanti Finally, an AI support agent that doesn’t just parrot docs!

Vikram Sreekanti

@masump This is critical for solving harder problems. It's fine to answer simple questions with what's in the docs, but resolving complex tickets requires much more work. That's what we're focused on. 🙂

Peter Farago

@vsreekanti @masump Yes! It's amazing to think how far we've come from the old chatbot technology of the last decade. We're on the cutting edge of understanding a user's developer environment, pulling and debugging logs, writing validated custom code as a solution for a customer, and more. Our AI Support Engineer handles all of this automatically, updates documentation, and integrates across all the surfaces a team and its users work in (think docs site, Slack, Zendesk, etc.). We are definitely excited about all the things we can do beyond parroting docs! 🦜

Akash Sharma 💭

Huge congrats on the launch @vsreekanti and the RunLLM team!! 👏🏽 It's been great to follow along how thoughtfully you've approached the core problem statement from the beginning.

Vikram Sreekanti

@mrakashsharma Thanks Akash! Appreciate your support, and we're big fans of the community & content you all are building.

Chenggang Wu

@mrakashsharma Thanks Akash for the support! Means a lot to us.

Astha Rattan 🌊

@vsreekanti  Congratulations on the launch!!

Jaber Jaber

this tool should have been built 6 months earlier
Vikram Sreekanti

@jaber23, we wish we could've gotten this out sooner too. Some of it is just figuring out what customers need incrementally as you build, and some of it is that the tech wasn't quite ready yet. (e.g., Gemini 2.5 Flash is pretty important to our ability to do log analysis well). But we have a lot more coming soon — stay tuned!

Peter Farago

@jaber23 Six months ago, it was already pretty awesome. But just think of how much better it is now that we've rebuilt it and added a new agentic planner with fine-grained reasoning and tool-use support, a redesigned UI for creating, managing, and inspecting multiple agents, and a Python SDK that gives you fine-grained control over support workflows! We'd love to get your impression of it. Note that you can try the full product absolutely free! We'll ingest your documents and create a fine-tuned LLM to be an expert on your products. Then you can ask it hard questions about something you're familiar with to see how well it could work for you! 😻

Chenggang Wu

@jaber23 Six months ago it was already strong — now it’s a whole new level!

Joey Judd

Okay, this is brilliant—auto-resolving support tickets would save my team so much headache (and sleep). Does it handle really gnarly logs or just the easy stuff?

Chenggang Wu

@joey_zhu_seopage_ai Hey Joey, after the agent uses a tool call to fetch logs from systems like GCP, it uses an LLM to extract the parts that are relevant to the support ticket — so yes, it handles really gnarly logs. :)

Vikram Sreekanti

@joey_zhu_seopage_ai Thanks for the feedback Joey! Glad to hear you see the value in this. We agree that stopping at the simple stuff would be boring and a bit underwhelming. We're always focused on solving our customers' hardest problems, so if you see any areas for improvement, send them our way!

Peter Farago

@joey_zhu_seopage_ai We're built to handle advanced technical support (the hard stuff). Our customers consistently tell us that they are able to reclaim at least one third of each support engineer's time and have fewer escalations into engineering. So, based on what our customers are experiencing, we definitely know you could save time, headaches, and reclaim some sleep!! 😴

SkyHuang

Congrats! Agentic reasoning + log analysis is a huge leap from simple retrieval.

How do you handle real-time telemetry ingestion at scale, and what’s your approach to chaining agent actions without creating feedback loops?

Vikram Sreekanti

@sky_huang001 Thanks! Great questions:

  1. We use LLMs to filter and analyze logs to avoid reading a bunch of unnecessary data. That filtration process helps us home in on the most important bits.

  2. We enforce pretty strong guardrails over what the agent can do and which paths it goes down. As it takes actions, it doesn't have full freedom to run amok.
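The filter-then-analyze pattern in point 1 can be sketched in a few lines. This is a toy illustration only: the `relevance` scorer here is a trivial keyword stub standing in for an LLM relevance call, and nothing in it reflects RunLLM's actual implementation:

```python
# Toy sketch of two-stage log handling: fetch raw logs, then filter down
# to lines relevant to the ticket before deeper analysis. The keyword
# scorer below is a stand-in for an LLM relevance call (an assumption
# for illustration, not RunLLM's real pipeline).

def relevance(line: str, ticket: str) -> float:
    """Stub scorer: fraction of ticket keywords present in the log line."""
    words = {w for w in ticket.lower().split() if len(w) > 3}
    if not words:
        return 0.0
    return sum(w in line.lower() for w in words) / len(words)


def filter_logs(lines: list[str], ticket: str, threshold: float = 0.2) -> list[str]:
    """Keep only log lines likely to matter for this ticket."""
    return [line for line in lines if relevance(line, ticket) >= threshold]

logs = [
    "INFO heartbeat ok",
    "ERROR connection refused to postgres primary",
    "DEBUG cache warmed",
]
print(filter_logs(logs, "users report connection errors to postgres"))
# -> ['ERROR connection refused to postgres primary']
```

Filtering first keeps the downstream analysis step from paying for (and being distracted by) thousands of irrelevant lines.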

SkyHuang

@sky_huang001  @vsreekanti Thanks for the detailed breakdown — makes a lot of sense. Filtering logs with LLMs to reduce noise is a smart move, and strong guardrails definitely feel necessary as agent chains get deeper.

Curious to see how you evolve the balance between flexibility and safety as more actions get integrated. Really exciting direction!

Peter Farago

@sky_huang001 Thank you for those great questions! Please ask anything you like. Our execs and senior technical team are all here!
Chenggang Wu

@sky_huang001 Breaking down the agentic pipeline into smaller pieces and imposing guardrails on each of them is the key.

Anwar Laksir

Launching soon!

Congrats on the launch! 🔥

RunLLM looks super impressive, love the focus on actually resolving, not just replying.

Vikram Sreekanti

@anwarlaksir Thanks Anwar! Excited to see how we can help support teams improve efficiency with these new features.

Peter Farago

@anwarlaksir Appreciate it. One amazing reaction we get from companies we show this to is that they "can't believe AI can be this good." Senior engineers will say things like "this answer is as good as what I'd give, and it was faster!" The team has worked super hard to get these kinds of results. More to come!
Chenggang Wu

@anwarlaksir Thanks so much! Really appreciate the kind words 🙏

We’ve been heads-down trying to make AI actually useful — not just responsive, but truly resolving issues end to end. Excited for what’s ahead!

Daniel Han

A very useful tool, which we have been using recently in our Discord server and soon in our GitHub package. At first we expected the answers to be inadequate, but after asking just one hard question about our GitHub package, we immediately knew that the answer was very high quality and seemed to have been written by one of our team members.

Now many of our users (including myself) just ask questions about our package or any bugs/issues or suggestions they encounter, and it gets it right 95% of the time. Even when it gets it wrong, it links directly to the sources the information was derived from, so the user can investigate.

Chenggang Wu

@danielunsloth Thanks so much, Daniel — means a lot coming from someone building a serious tool like Unsloth!

We’ve worked hard to make RunLLM feel like a true member of your team, not just a surface-level chatbot. Thrilled to hear it’s helping your users directly and holding up on the hard questions. If there’s anything you want to see next (or break), let us know — we learn fast.

Vikram Sreekanti

@danielunsloth Thanks, Daniel! Really appreciate the feedback, and looking forward to continuing to collaborate + deepen our support for your community. 🙂

Peter Farago

@danielunsloth Daniel, those are great results!! The team obsesses over answer quality, so it's fantastic to see that you can "feel" the answer quality. In fact, the goal internally is to have the answer quality be as good as a team's top support engineer! 😊

Saurav Chhatrapati

Hi! I'm Saurav - one of the engineers at RunLLM and wanted to share a bit about why I'm so excited about the product that we've built.

It's been incredible going from the foundations we laid out in RunLLM v1 to a complete agent-centric operating mode that has more flexibility in the actions it can take in order to solve customer requests. The tricky part at the core of all LLM-powered applications is making sure that your agent stays within guardrails even as you increase the scope of the actions that it can take. We've iterated a lot on this and I think we've built out something really special that gives users insight into each step the agent is taking. I'm also especially excited about the tool use integrations where agents can now analyze logs and telemetry data. The combination of the two makes RunLLM v2 feel like a big step in the direction towards making a RunLLM agent a core part of your team!

There's a lot more to come and I can't wait to keep building!

Animesh Nighojkar

Looks like it's built to deeply understand product docs, logs, and code, and deliver answers that feel reliable. I'm curious to take a closer look and see if it really lives up to the promise of trustworthy automation.

Peter Farago

@anighojkar Hi Animesh - Please do! It's actually quite easy to try this out in a self-service way. You can be asking questions about your product in minutes. Just go to runllm.com, create an account (totally free), then copy and paste a URL to your documentation site and we kick off building a fine-tuned LLM on your unique product. From there, you can ask it any hard questions to see the quality of answers. Would love your feedback on the product experience. Cheers!
Chenggang Wu

@anighojkar Looking forward to hearing your feedback!

Rena

Super impressed with how far RunLLM has come — especially love the focus on agentic reasoning and custom workflows. It's clear you're not just chasing trends but actually solving real pain points for complex product teams. Seeing success stories like vLLM and Arize AI is super compelling. Can’t wait to try it out on our docs and see how it handles edge cases. Congrats on the v2 launch!

Peter Farago

@renaluo Thanks, Rena! We're looking forward to your feedback. Please let us know if there's anything we can do to support you. The self-service tier is free, and you can try our AI Support Engineer on your documentation to see how it works!

Chenggang Wu

@renaluo Thank you and looking forward to your feedback!

Riya Som

Wow nice and excellent

Peter Farago

@riya_som1 Appreciate it, Riya!

Chenggang Wu

@riya_som1 Thank you for your support!

Hiren Thakkar

congratulations on the launch

Peter Farago

@thehirenthakkar Thank you, Hiren!

Chenggang Wu

@thehirenthakkar Thank you for your support!

Geoff Randle

This product is useful and practical for streamlining your work.

Chenggang Wu

@geoff_randle Thank you for your support!

saka5577

Tinkered with runLLM lately. Nice to mess with local LLMs without the hassle. Works okay for small stuff, though bigger models slow down a bit. Decent for casual use.
skylar

RunLLM unlocks private, offline LLM access on your own machine. It delivers a smooth ChatGPT-like experience for open-source models, ideal for developers and researchers prioritizing data control, avoiding vendor lock-in, and fast local prototyping—no cloud costs required. An essential tool for independent, secure AI work.

Dedipta Roy

Very useful information. I feel totally satisfied and recommend it to all.

Jinxin Yin

Super impressive work. Love how RunLLM moves beyond generic GPT wrappers and actually tackles the hard parts of support automation — log analysis, multi-agent orchestration, and custom workflows. The focus on real-world efficiency (30%+ eng time saved, 99% ticket deflection!) makes it clear this was built with and for technical teams. Big fan of the direction here!

yun gong

This is a great product. I was shocked by my first experience. I hope you keep working hard and you will definitely succeed!
Miguel M.

The future of AI is already here. The automation of complex processes is already common in the technological world. The progress made in all of science is incredible.

Vince Torres

RunLLM is a solid AI support tool, it speeds up debugging and fits right into your current workflow without making things complicated. The clean interface and smart automation really help cut down on repetitive tasks. If you're in tech support and drowning in tickets, this is definitely something worth trying. I'd recommend it to any team looking to boost efficiency without reinventing their process.

Deep Rock

RunLLM has been a game-changer for our team. It reads our logs and code to resolve issues fast—saved us tons of engineering hours and drastically cut down support tickets. Highly recommend giving it a try.