Zac Zuo

Qwen3-235B-A22B-Thinking-2507 - Qwen's most advanced reasoning model yet

Qwen3-235B-A22B-Thinking-2507 is a powerful open-source MoE model (235B total parameters, 22B active) built for deep reasoning. It achieves SOTA results on agentic tasks, supports a 256K context window, and is available on Hugging Face and via API.


Replies

Zac Zuo
Hunter

Hi everyone!

The Qwen team continues to push the upper limits of the Qwen3 series with their latest release.

The new model has a very long name, Qwen3-235B-A22B-Thinking-2507, but its capabilities are incredibly strong. It achieves SOTA results among open models in core reasoning areas like coding (LiveCodeBench) and math (AIME25), making it competitive with top-tier models like Gemini 2.5 Pro.

The best part is you don't need a complex setup to see it in action. You can experience it directly in Qwen Chat.
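If you'd rather call it from code than use Qwen Chat, here's a minimal sketch using the OpenAI-compatible Python SDK. The `base_url` and model id below are assumptions for illustration (the weights are published on Hugging Face under Qwen/Qwen3-235B-A22B-Thinking-2507); check the official docs for the exact endpoint and model name.

```python
# Minimal sketch: querying the model through an OpenAI-compatible API.
# The base_url and model id are assumptions for illustration; consult
# the official Qwen docs for the real values.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",  # placeholder key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen3-235b-a22b-thinking-2507",  # assumed model id
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

# Thinking models interleave reasoning with the final answer; here we
# simply print whatever the endpoint returns as the message content.
print(response.choices[0].message.content)
```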

Ajay Parmar

@zaczuo Exciting to see Qwen3 launch! Curious – do you see Qwen models becoming strong alternatives to OpenAI/Anthropic for small teams that want more control?

I run a Medium blog (9K+ monthly views) on AI tools and would love to feature this soon.

Joey Judd

Wow, a model that helps you actually *think deeper* OR just get things done faster? That’s honestly genius, ngl. Big props to the Qwen team for this one!

Shashwat Ghosh

Congrats, looking forward to using it, though I wasn't too impressed with the last update six months back. Hopefully this one has the firepower to match Moonshot AI's Kimi K2.

Daniel Knight

This is a very good model. Our team was super impressed by its reasoning capabilities in the cybersecurity space.

Jay

The benchmark results look pretty intriguing; can't wait to give it a go. Great work.