
LFM2
New generation of hybrid models for on-device edge AI
LFM2 by Liquid AI is a new class of open foundation models designed for on-device speed and efficiency. Its hybrid architecture delivers 2x faster CPU performance than Qwen3 and state-of-the-art results in a tiny footprint.
This is the 2nd launch from LFM2.
LFM2-VL
Launched this week
LFM2-VL is a new series of open-weight vision-language models from Liquid AI. Designed for on-device deployment, they offer up to 2x faster inference on GPU and come in 450M and 1.6B parameter sizes.





Launch Team
Hi everyone!
Liquid AI has been consistently pushing hard on on-device models, and now they're adding multimodal capabilities to the LFM2 series.
LFM2-VL is their latest answer. It's a new family of vision-language models designed for speed, with up to 2x faster inference on GPU compared to existing models.
They've released two versions: a tiny 450M and a more capable 1.6B, which is great for developers building for different device constraints.
A new generation of hybrid models built for on-device edge AI: faster, more efficient, and optimized for real-world applications.
Really like how Liquid AI is thinking about real deployment; on-device speed sometimes matters far more than benchmarks. Excited to see the 450M size too, it's a great balance for lightweight apps without losing capability.