Meta has introduced Llama 4, a family of open-weight multimodal models designed to handle text, image, and video tasks, pretrained on data spanning 200 languages. The two released models, Scout and Maverick, are mixture-of-experts designs with 17 billion active parameters each; Scout supports a context window of up to 10 million tokens. A third, forthcoming model, Behemoth, is said to have 288 billion active parameters and roughly 2 trillion parameters in total. The models are accessible via cloud platforms such as AWS and Hugging Face and ship alongside safety tools including Llama Guard and Code Shield. Despite their capabilities, the models have limitations, including potential code vulnerabilities: Maverick scores roughly 40% on LiveCodeBench, well below GPT-5's reported 85%. The models have also reportedly been trained on controversial datasets, which may pose ethical concerns.
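The distinction between "active" and total parameters comes from the mixture-of-experts (MoE) architecture: only a subset of expert weights is used for each token. The sketch below illustrates that arithmetic with hypothetical numbers; the shared-trunk and per-expert sizes are assumptions chosen to loosely resemble a Scout-class model (16 experts, one routed expert per token), not Meta's published layer-by-layer configuration.

```python
# Illustrative sketch of why a mixture-of-experts (MoE) model's "active"
# parameter count is far smaller than its total parameter count.
# All concrete numbers below are assumptions for illustration only.

def moe_param_counts(shared_params, num_experts, params_per_expert,
                     experts_per_token):
    """Return (total, active) parameter counts for a simple MoE model.

    total  = shared weights + every expert's weights
    active = shared weights + only the experts routed to per token
    """
    total = shared_params + num_experts * params_per_expert
    active = shared_params + experts_per_token * params_per_expert
    return total, active

# Hypothetical configuration loosely shaped like a Scout-class model:
total, active = moe_param_counts(
    shared_params=11e9,      # assumed shared (attention + dense) weights
    num_experts=16,          # Scout is reported to use 16 experts
    params_per_expert=6e9,   # assumed size of each expert
    experts_per_token=1,     # one routed expert per token
)
print(f"total: {total / 1e9:.0f}B, active: {active / 1e9:.0f}B")
# → total: 107B, active: 17B
```

Under these assumed sizes, every token's forward pass touches only about 17B of the roughly 107B stored weights, which is why MoE models can be cheaper to run than dense models of comparable total size.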