Zac Zuo

Ollama Desktop App - The easiest way to chat with local AI

Ollama's new official desktop app for macOS and Windows makes it easy to run open-source models locally. Chat with LLMs, use multimodal models with images, or reason about files, all from a simple, private interface.

Zac Zuo

Hi everyone!

When Ollama walks out of your command line and starts interacting with you as a native desktop app, don't be surprised :)

This new app dramatically lowers the barrier to running top open-source models locally. You can now chat with LLMs, or drag and drop files and images to interact with multimodal models, all from a simple desktop interface. And most importantly, it's Ollama, one of the most trusted and well-liked tools among users who care about privacy and data security.

Bringing the Ollama experience to people who aren't as comfortable with the command line will undoubtedly accelerate the adoption of on-device AI.
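
For anyone who also scripts against the local server that both the CLI and the desktop app talk to, here is a minimal sketch of the drag-an-image-into-the-chat workflow done programmatically, assuming the official ollama Python client and a vision-capable model that has already been pulled (the model name and image path below are illustrative):

```python
# Minimal sketch: ask a locally running multimodal model about an image.
# Assumes `pip install ollama`, a running Ollama instance, and that the
# model named below has already been pulled (the name is illustrative).
import ollama

response = ollama.chat(
    model="gemma3",  # any vision-capable model you have pulled locally
    messages=[
        {
            "role": "user",
            "content": "What is in this image?",
            "images": ["./screenshot.png"],  # hypothetical local file
        }
    ],
)

print(response["message"]["content"])
```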

Gabe Moronta

@zaczuo Love it!!!! I've been an Ollama user for a while now, and have told many others about it, but they've never been as comfortable with it, so just finished sending this out to everyone I know! 💪

André J

Needs MCP and agentic features. Maybe soon? 🙏

Marco Visin
@sentry_co wanted to ask the same
André J

@marco_visin Agentic use is perfect for local models, since you don't need speed. You can queue up some tasks overnight in different branches and let it cook.

Marco Visin
@sentry_co yep. And together with MCP, that would make it a powerful machine
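
A minimal sketch of the overnight "queue it up and let it cook" idea discussed above, assuming the official ollama Python client and a model that has already been pulled locally (the model name and task prompts are illustrative, and no MCP or agent framework is involved here):

```python
# Minimal sketch: run a queue of prompts against a local model unattended
# and write each result to its own file. Assumes `pip install ollama` and
# a locally pulled model (the name below is illustrative).
import ollama

MODEL = "llama3.2"

tasks = [
    "Write unit test ideas for a URL validator.",
    "Summarize the tradeoffs between SQLite and Postgres for a desktop app.",
    "Draft a short code review checklist for a Python CLI project.",
]

for i, prompt in enumerate(tasks, start=1):
    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    # Persist each answer so the batch can run while you sleep.
    with open(f"task_{i:02d}.md", "w", encoding="utf-8") as f:
        f.write(response["message"]["content"])
    print(f"Finished task {i}/{len(tasks)}")
```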
Ivo Dimitrov

That's a significant update! Thank you for your product, really like it 🙌

Will the UI be open source as well, so we can adjust/modify the way it works?

Mcval Osborne

awesome, just downloaded.


I've used tools like LM Studio in the past, but this is super slick.

Question: Is there a place to get an overview of best use cases for different models? I see the overview of models on the home page, but contextualizing what certain models are best for would be massively helpful to me.

Quinn Comendant

I really like the UI of Ollama, especially the CLI. There's a lot to love there. Unfortunately, on macOS it's not the best option because it doesn't support MLX, which runs models 10% to 20% faster and with lower memory usage. There is an open ticket with a pull request for adding an MLX backend from 2023, but it's been stalled for a while. If you use a Mac, try LM Studio, mlx-lm, or swama instead.

Gin Tse

Running top vision models *locally* is huge; no more waiting on cloud stuff or privacy worries, tbh. This update is really next-level, hats off to the team!

Joey Judd

No way—running top-tier vision models like Llama 4 locally is a total game-changer! My laptop always struggled before, so I’m seriously impressed by the new memory management.

Shashwat Ghosh

Very interesting launch @zaczuo and @jmorgan. Maybe I have the thesis wrong, but I wonder what the utility of this one is for non-coders like me. Although I love Claude Code, is this one better?

Vasanth

I really love this app! It would be great if it included web search functionality.

Britestak

Awesome launch!!! Very useful, thank you very much for this ❤️

Ash Grover

Anything that allows users to run LLMs locally is just awesome in my book. I think everyone should have access to LLMs without the cost associated with the cloud, since most people have basic needs for their day-to-day tasks and don't have the requirements of a machine learning researcher. I also think local LLMs integrated into operating systems will make us more productive than cloud-based ones in the future.

Good to see the progress on this!

Alexandre Droual
Finally!! Congrats to the whole team on this huge achievement. Can't wait for the next iteration, maybe a vibe coding extension?
Shane Mhlanga
Absolutely brilliant update! I first started with Ollama and Open WebUI, until I found other native apps, but Ollama has always been the core. It was annoying having to do extra steps just to run a local model quickly; now this is it! Well done!
vivek sharma

Ollama v0.7 quietly rewrites the rules for running multimodal AI on local machines. Llama 4 and Gemma 3 in vision mode? Huge. Improved memory management, reliability, and accuracy make this more than just a version bump; it's a fresh foundation for the next wave of local-first LLMs.

장연주
Pretty design
Ajay Sahoo

The convenience of new tech tools has become more and more impactful with regular use, for both personal and professional tasks. Whenever I have a question, for myself, for someone else, or about something I'm using, I now go to Ollama for answers instead of the task-based chatbots I used before. Wonderful usability, and a great way to have all the LLMs in one place.