Meta Llama 4 Models Available on Azure AI Foundry

Meta has made its Llama 4 models available today in Azure AI Foundry and Azure Databricks, enabling developers to build more personalized multimodal experiences. The models are designed to seamlessly integrate text and vision tokens into a unified model backbone and were trained on large amounts of unlabeled text, image, and video data, allowing developers to use them in applications that span these modalities.

Two models are available: Llama 4 Scout and Llama 4 Maverick. Scout offers improved performance in tasks such as multi-document summarization, personalization, and reasoning: it can analyze extensive information, generate summaries or reports from lengthy inputs, and perform complex reasoning across large knowledge sets.

The Llama 4 Maverick model is ideal for precise image understanding and creative writing, making it well-suited for general assistant and chat use cases. It excels in image and text understanding with support for 12 languages, enabling the creation of sophisticated AI applications that bridge language barriers.

Two key innovations set Llama 4 apart: native multimodal support with early fusion and a sparse Mixture of Experts (MoE) design for efficiency and scale. The architecture lets the models process full-length reports and respond with integrated summaries or answers, making them suitable for enterprise workloads.
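To make the MoE idea concrete, here is a minimal, illustrative sketch of a sparse top-k routed feed-forward layer in PyTorch. It is not Llama 4's actual implementation; the expert count, hidden sizes, and routing details are assumptions chosen only to show how each token is processed by a small subset of experts rather than the full network.

```python
# Illustrative sparse MoE layer with top-k routing (not Llama 4's real code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        gate_logits = self.router(x)
        weights, chosen = torch.topk(gate_logits, self.top_k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)                           # normalize over selected experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                            # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 512)
print(SparseMoE()(tokens).shape)  # torch.Size([4, 512])
```

Only the selected experts run for each token, which is what lets MoE models scale total parameter count without a proportional increase in per-token compute.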

Meta built the Llama 4 models following the best practices outlined in its Developer Use Guide: AI Protections, integrating mitigations at each layer of model development from pre-training through post-training. On Azure, the models also come with the platform's proven safety and security guardrails.

Developers can now explore the Llama 4 models in the Azure AI Foundry Model Catalog and in Azure Databricks, and start building with the latest in multimodal, MoE-powered AI backed by Meta's research and Azure's platform strength.
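As a starting point, the sketch below calls a Llama 4 deployment through the Azure AI Inference SDK (`pip install azure-ai-inference`) with a mixed text-and-image prompt. The endpoint, key, model name, and image URL are placeholders; substitute the values from your own Azure AI Foundry project and the exact model ID shown in the Model Catalog.

```python
# Minimal sketch: multimodal chat completion against an assumed Llama 4 deployment.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import (
    SystemMessage, UserMessage, TextContentItem, ImageContentItem, ImageUrl,
)
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_AI_ENDPOINT"],  # your Foundry project's inference endpoint
    credential=AzureKeyCredential(os.environ["AZURE_AI_KEY"]),
)

response = client.complete(
    model="Llama-4-Maverick-17B-128E-Instruct-FP8",  # assumed name; confirm in the Model Catalog
    messages=[
        SystemMessage(content="You are a helpful multilingual assistant."),
        UserMessage(content=[
            TextContentItem(text="Summarize the chart in this image in two sentences."),
            ImageContentItem(image_url=ImageUrl(url="https://example.com/chart.png")),  # placeholder image
        ]),
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```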

Source: https://azure.microsoft.com/en-us/blog/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks