Jon

CTO and Entrepreneur
26 points
GPT-4o · GitHub · VS Code

About

Tech enthusiast, entrepreneur and builder. In a love/hate relationship with alpha-release platforms. Currently passionate about AI and LLM solutions.

Work

Indie maker at Datakafe

Badges

Pixel perfection 💎
Bright Idea 💡
Tastemaker
Tastemaker 10

Maker History

  • MyDailyPod (MyDP)
    An AI that creates ~5 min pods summarizing your content
    Aug 2024
  • 🎉
    Joined Product Hunt May 20th, 2024

Forums

JD Worcester

1yr ago

Finished #2, thank you for all your support! 🎉❤️

Hey Product Hunt Community,

We're thrilled to share that Zeacon finished #2 on Product Hunt, and we couldn't have done it without your amazing support! Here's a snapshot of our launch results:

  • 661 Upvotes
  • 208 Comments
  • Day Rank #2
  • Week Rank #6
  • 1,076 Website Visits
  • 50 Demos Booked

Reflecting on Our Journey

While we're celebrating our success, we also learned a few things that we'd do differently next time:

  • Begin Connecting and Building an Outreach List Earlier: Starting earlier would have helped us build stronger relationships and gather more momentum before launch day.
  • DM a Day or Two Before the Launch: Giving our supporters a heads-up a bit earlier would have given them more time to rally around us.
  • Ignore Paid Promotions and Inbound Messages: We realized that organic support from the community is far more valuable than any paid promotions.
  • Continue to Get Support After Launch Day: To secure a spot in Product of the Week, it's crucial to keep the momentum going beyond launch day.

Top 3 Pieces of Advice for Fellow Makers

  • Start Early: Lay the groundwork well in advance. Build your outreach list, connect with potential supporters, and create buzz early on.
  • DMs Are the Best Way to Get Support: Personal messages make a huge difference. Reach out directly to your network and ask for their support.
  • Have a Good Tagline: A compelling tagline can capture attention and convey the essence of your product quickly.

We'd love to hear any feedback or additional tips from this wonderful community. Your insights can help us improve and continue to grow. Thank you once again for all your support!

https://www.producthunt.com/prod...

Jon

12mo ago

LLM API of choice: Cost vs. Context Window & Quality

Hi all. I'm very curious how those building LLM-based apps are choosing among the various LLM models/APIs given the differences in cost, context window, and quality. Personally, I'm always trying to find the right balance across these factors, building the best possible app at the lowest possible cost. I'll share how we do this in our app, and I'd love to know how others are tackling it!

--OUR AI USE CASE--

We process large amounts of text from various media sources and consolidate it into more consumable summaries using LLM Chat/Completion.

--OUR AI SOLUTION--

We currently use 2 models:

  • GPT-4o: used for large context windows (128k tokens) where high-quality output is required, but executions are costly.
  • Mixtral (8x22B): has a smaller context window (64k tokens) and, I think, lower quality than GPT-4o, but is much less expensive to run.

We created an llm_factory that selects a model based on the following factors:

  • User's package (premium = GPT, basic = Mixtral)
  • Context window (even if the user has a 'basic' package, we let them use the GPT model if their context exceeds Mixtral's 64k, as we want to provide the best quality and avoid context window failures)
  • Use case (certain lower-value use cases will always use Mixtral)
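For anyone curious what this kind of routing might look like in code, here is a minimal Python sketch of an llm_factory along the lines described above. It is an illustration under assumptions, not the actual Datakafe implementation: the function signature, the MODEL_LIMITS and LOW_VALUE_USE_CASES names, the model identifiers, and the specific token thresholds are all hypothetical.

# Hypothetical sketch of an llm_factory that routes each request to a model
# based on the user's package, the size of the input context, and the use case.
# Model names, token limits, and the use-case list are illustrative assumptions.

from dataclasses import dataclass

# Assumed context limits (per the post: GPT-4o ~128k tokens, Mixtral ~64k tokens)
MODEL_LIMITS = {
    "gpt-4o": 128_000,
    "mixtral-8x22b": 64_000,
}

# Hypothetical lower-value use cases that always run on the cheaper model
LOW_VALUE_USE_CASES = {"preview", "tagging"}


@dataclass
class LLMChoice:
    model: str
    context_limit: int


def llm_factory(package: str, prompt_tokens: int, use_case: str) -> LLMChoice:
    """Pick a model for one request.

    package       -- user's plan, e.g. "premium" or "basic"
    prompt_tokens -- estimated token count of the input context
    use_case      -- feature requesting the completion, e.g. "summary"
    """
    # Certain lower-value use cases always use the cheaper model.
    if use_case in LOW_VALUE_USE_CASES:
        return LLMChoice("mixtral-8x22b", MODEL_LIMITS["mixtral-8x22b"])

    # Premium users always get the higher-quality model.
    if package == "premium":
        return LLMChoice("gpt-4o", MODEL_LIMITS["gpt-4o"])

    # Basic users get Mixtral unless the context exceeds its 64k window;
    # in that case, upgrade to GPT-4o to avoid a context-window failure.
    if prompt_tokens > MODEL_LIMITS["mixtral-8x22b"]:
        return LLMChoice("gpt-4o", MODEL_LIMITS["gpt-4o"])
    return LLMChoice("mixtral-8x22b", MODEL_LIMITS["mixtral-8x22b"])


# Example: a basic-package user with a 90k-token document is routed to GPT-4o.
print(llm_factory("basic", 90_000, "summary"))

The key design choice in a sketch like this is that the context-window check can override the package check, so a basic user never hits a hard failure on a long document.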
