Meta’s newest AI models, Llama 4 Scout 17B and Llama 4 Maverick 17B, are now available on SageMaker JumpStart. These models can process both images and text together for more powerful applications.
Instead of using all of its computing power for every question, a Llama 4 model intelligently chooses which ‘expert’ parts of its network to activate based on the task at hand.
Llama 4 Scout 17B significantly expands how much the model can process at once, from 128,000 tokens in previous Llama models to up to 10 million tokens (nearly 80 times the previous context length). Llama 4 Maverick 17B is a general-purpose model that excels at image and text understanding tasks across 12 languages.
Both models are built with native multimodality, meaning they are designed from the ground up to understand text and images together. They use an efficient mixture of experts (MoE) architecture, which activates only the most relevant parts of the model for each task, making training and inference more compute-efficient and translating into lower costs and better performance.
The models’ MoE architecture is similar to having a team of specialists rather than a single generalist. This approach means Llama 4 can deliver more powerful results while using fewer computing resources, making advanced AI more accessible and cost-effective for businesses of all sizes.
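The routing idea behind MoE can be sketched in a few lines of Python. This is a toy illustration with made-up dimensions and random weights, not Llama 4's actual architecture: a small router scores each "expert," and only the top-scoring experts do any work for a given token.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 8, 4, 2  # toy hidden size, expert count, experts active per token

# A router maps a token to a score per expert; each "expert" here is just a matrix.
W_router = rng.normal(size=(D, N_EXPERTS))
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(token):
    """Route one token through only its TOP_K highest-scoring experts."""
    scores = softmax(token @ W_router)         # router's probability over experts
    top = np.argsort(scores)[-TOP_K:]          # indices of the k best experts
    weights = scores[top] / scores[top].sum()  # renormalize over the chosen experts
    # Only TOP_K expert computations run; the other experts stay idle for this token.
    out = sum(w * (token @ experts[i]) for i, w in zip(top, weights))
    return out, top

token = rng.normal(size=D)
output, active = moe_layer(token)
print(f"active experts: {sorted(active.tolist())} of {N_EXPERTS}")
```

For each token, only 2 of the 4 experts are computed, which is the sense in which MoE models use fewer computing resources per request than a dense model of the same total size.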
Source: https://www.aboutamazon.com/news/aws/aws-meta-llama-4-models-available