DeepSeek has launched a preview of its V4 series of open-source models, released under the MIT license on platforms such as Hugging Face and ModelScope. The series comprises two Mixture-of-Experts (MoE) models: V4-Pro, with approximately 1.6 trillion total parameters and about 49 billion activated per token, and V4-Flash, with 284 billion total parameters and 13 billion activated per token. Both models support context lengths of up to 1 million tokens, and DeepSeek says they reduce memory usage and computational overhead in long-text reasoning compared to the previous V3.2 release.
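Since the weights are distributed through Hugging Face, a model like this would typically be loadable with the `transformers` library. The sketch below is illustrative only: the repository name `deepseek-ai/DeepSeek-V4-Flash` is a hypothetical placeholder (not a confirmed identifier), and the `trust_remote_code` flag is an assumption based on how earlier DeepSeek releases shipped custom model code.

```python
# Minimal sketch of loading a V4-style checkpoint from Hugging Face.
# The repo id below is a hypothetical placeholder, not a confirmed name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "deepseek-ai/DeepSeek-V4-Flash"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard the MoE weights across available GPUs
    trust_remote_code=True,  # assumption: custom model code, as in prior DeepSeek releases
)

prompt = "Summarize the advantages of sparse Mixture-of-Experts models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that only a small fraction of the parameters is active per token (roughly 49B of 1.6T for V4-Pro, about 3%), which is what keeps per-token compute far below what the total parameter count would suggest.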