Manouk Draisma

Co-founder LangWatch.ai
222 points

About

Hey Product Hunters! 👋 I'm co-founder of LangWatch.ai, built from the pain point of having limited control over your LLM application. We've built an end-to-end evaluation framework for AI engineering teams: not just observability or evals, but finding the right eval for your use case. For the past 10+ years I have been working in the start-up tech space, and what a crazy ride it has been... 🤯 I started 10 years ago at a start-up, which went IPO within the first years I worked there. Building teams, partnerships, and connecting with users and customers is what I love. 🤝 ❤️ In the meantime, I will add value wherever possible and support new product launches. Connect with me here and also on LinkedIn! ✌️

Badges

Buddy System
Plugged in 🔌
Gemologist
Top 5 Launch

Maker History

Forums

Manouk Draisma

2mo ago

Building a Lovable clone AI Agent in Python BUT fully tested with Scenario

How do you validate an AI agent that could reply in unpredictable ways?

My team and I have released Agentic Flow Testing, an open-source framework where one AI agent autonomously tests another through natural language simulated conversations. 

Use an Agent to test your Agent

How do you validate an AI agent that could reply in unpredictable ways?

My team and I have released Agentic Flow Testing, an open-source framework where one AI agent autonomously tests another through natural language conversations.

Is there an AI quality Lead in your Dev/AI team?

Every day I speak with AI teams building LLM-powered applications, and something is changing.

I see a new role quietly forming:

The AI Quality Lead as the quality owner.
