Meta Launches Llama 4 with Advanced Multimodal Models

Meta has introduced Llama 4, a suite of three open-weight multimodal models designed to handle text, image, and video tasks, pretrained on data spanning 200 languages. The first two models, Scout and Maverick, each use a mixture-of-experts design with 17 billion active parameters, and Scout offers a context window of up to 10 million tokens; the forthcoming Behemoth is far larger, with 288 billion active parameters. The released models are accessible via cloud platforms such as AWS and Hugging Face and ship alongside safety tools such as Llama Guard and Code Shield.
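For readers who want to try the released checkpoints, below is a minimal sketch of querying Scout through the Hugging Face transformers pipeline. The model id and settings are assumptions based on Meta's launch model cards rather than anything stated in this article, and access requires accepting Meta's license on Hugging Face.

```python
# Minimal sketch: querying Llama 4 Scout via Hugging Face transformers.
# The model id below is an assumption taken from Meta's launch model
# card; confirm it on Hugging Face before use.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed model id
    device_map="auto",   # shard the MoE weights across available devices
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)

messages = [
    {"role": "user", "content": "Summarize the Llama 4 release in one sentence."},
]
out = generator(messages, max_new_tokens=64)
print(out[0]["generated_text"])
```

Note that Scout's weights are large even with only 17B parameters active per token, so multi-GPU hardware or a quantized variant is typically needed to run it locally.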
Despite their capabilities, the models have limitations, notably in code generation: Maverick scores roughly 40% on LiveCodeBench, well below GPT-5's reported 85%. The models were also reportedly trained on controversial datasets, which may raise ethical and copyright concerns.