Llama

Meta's open-source family of LLMs

4.9 · 72 reviews · 1.3K followers

An openly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation.
This is the 7th launch from Llama.
Llama 4

A new era of natively multimodal AI innovation
Llama 4 was ranked #3 of the day for April 7th, 2025
The Llama 4 collection is a family of natively multimodal AI models that enable text and image experiences. These models use a mixture-of-experts architecture to deliver industry-leading performance in text and image understanding.
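
For readers new to the term, here is a toy sketch of top-k mixture-of-experts routing in PyTorch: a router scores every token against the experts, only the k best-scoring experts actually run for that token, and their outputs are blended by the normalized router weights. All sizes below (d_model, expert count, k) are illustrative assumptions, not Meta's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    """Toy top-k MoE layer: a router picks k experts per token; only those run."""

    def __init__(self, d_model=64, n_experts=16, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x):                           # x: (tokens, d_model)
        scores = self.router(x)                     # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # best k experts per token
        weights = F.softmax(weights, dim=-1)        # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(1) * expert(x[mask])
        return out

x = torch.randn(8, 64)                              # 8 tokens, d_model=64
print(ToyMoE()(x).shape)                            # torch.Size([8, 64])
```

The appeal of this design is that total parameter count grows with the number of experts while per-token compute stays roughly constant, since only k experts run per token.
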
Free

Chris Messina (Hunter) 📌

The new herd of Llamas from Meta:


Llama 4 Scout:

• 17B active parameters × 16 experts
• Natively multimodal
• 10M-token context length
• Runs on a single GPU (see the loading sketch after this list)
• Highest-performing small model
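
A minimal loading sketch for Scout using the Hugging Face Transformers pipeline API. The model ID is an assumption (confirm it on the official, license-gated model card), and fitting on one GPU in practice may require quantized weights.

```python
# Hedged sketch: text-only generation with Llama 4 Scout via transformers.
# The model ID below is an assumption; confirm it on the official model card.
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed repo name

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",   # place layers on whatever GPU/CPU memory is available
)
out = pipe("Explain mixture-of-experts in one sentence.", max_new_tokens=64)
print(out[0]["generated_text"])
```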


Llama 4 Maverick:

• 17B active parameters × 128 experts
• Natively multimodal
• Beats GPT-4o and Gemini 2.0 Flash
• Smaller and more efficient than DeepSeek V3, comparable on text, and also multimodal
• Runs on a single host (see the serving sketch after this list)
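
"Single host" presumably means one multi-GPU node. Here is a serving sketch with vLLM under that assumption; the model ID and GPU count are illustrative, not confirmed specifics.

```python
# Hedged sketch: serving Llama 4 Maverick on one multi-GPU host with vLLM.
# The model ID and tensor_parallel_size are assumptions for illustration.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct",  # assumed repo name
    tensor_parallel_size=8,  # shard the weights/experts across the host's GPUs
)
params = SamplingParams(max_tokens=64)
result = llm.generate(["Summarize the Llama 4 lineup in one line."], params)
print(result[0].outputs[0].text)
```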


Llama 4 Behemoth:

• 2+ trillion parameters
• Highest-performing base model
• Still training!

Jamie G

@chrismessina Just wanna leave a thread here: Llama 4 joke collectors now gather! ...



But jokes and memes aside, Llama is great; people just expected 4 to be better. Fight really hard for 5, Meta!

And definitely thanks for hunting, Chris!

Impressive launch for Llama 4! Curious though—how do you manage efficiency and latency challenges with the mixture-of-experts setup, especially in real-time multimodal applications? @ashwinbmeta

Sebastian Thunman

Can't wait to try this out. We're experimenting with running models on-device for our product (a desktop app) but haven't been able to get great results yet on the average laptop. Looking forward to seeing real-world inference speeds for these models.
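
For on-device experiments like this, one common route is a quantized GGUF build run through llama-cpp-python. A sketch, with a hypothetical file name, assuming your llama.cpp build supports the Llama 4 architecture:

```python
# Hedged sketch of laptop-class inference via llama-cpp-python.
# The GGUF path is a placeholder; Llama 4 support depends on your build.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-4-scout-q4_k_m.gguf",  # hypothetical quantized file
    n_ctx=8192,          # context window to allocate
    n_gpu_layers=-1,     # offload all layers to GPU if one is available
)
out = llm("Q: Name one benefit of mixture-of-experts. A:", max_tokens=32)
print(out["choices"][0]["text"])
```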

Ori Miles

@sebastian_thunman I say: Strawberry. I think it is insane!