LFM2

New generation of hybrid models for on-device edge AI

LFM2 by Liquid AI is a new class of open foundation models designed for on-device speed and efficiency. Its hybrid architecture delivers 2x faster CPU performance than Qwen3 and SOTA results in a tiny footprint.
This is the 2nd launch from LFM2.

LFM2-VL

Launched this week
On-device vision, now 2x faster
LFM2-VL is a new series of open-weight vision-language models from Liquid AI. Designed for on-device deployment, they offer up to 2x faster inference on GPU and come in 450M and 1.6B parameter sizes.
Launch Team

Zac Zuo
Hunter
Hi everyone!

Liquid AI has been consistently pushing hard on on-device models, and now they're adding multimodal capabilities to the LFM2 series.

LFM2-VL is their latest answer: a new family of vision-language models designed for speed, with up to 2x faster inference on GPU compared to existing models.

They've released two versions, a tiny 450M and a more capable 1.6B, which is great for developers building for different device constraints.

Barnaby

A new generation of hybrid models built for on-device edge AI. Faster, more efficient, and optimized for real-world applications.

Nathan Cooper

Really like how Liquid AI is thinking about real deployment. On-device speed sometimes matters far more than benchmarks. Excited to see the 450M size too: a great balance for lightweight apps without losing capability.