DeepSeek V4 has demonstrated performance parity on Huawei Ascend NPUs and NVIDIA GPUs, dispelling rumors of adaptation delays. According to the V4 technical report, the Fine-Grained Expert Partitioning Scheme delivers a 1.50x to 1.73x speedup on standard inference workloads and up to 1.96x in latency-sensitive scenarios. The team has also open-sourced the CUDA version of the MegaMoE kernel as part of DeepGEMM, and reports that V4 sustains near-theoretical efficiency on both platforms without performance loss.
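The report's internals for fine-grained expert partitioning are not spelled out here, but the general idea behind such schemes is to split each MoE expert into smaller shards and spread them across devices, so that skewed routing does not overload any single device. The sketch below is purely illustrative: the function name, shard granularity, and round-robin placement policy are assumptions, not DeepSeek's actual implementation.

```python
# Hypothetical sketch of fine-grained expert partitioning for MoE inference.
# NOT DeepSeek's actual scheme: the round-robin placement policy and all
# names here are illustrative assumptions.

def partition_experts(num_experts: int, shards_per_expert: int, num_devices: int):
    """Split each expert into fine-grained shards and assign shards to
    devices round-robin, so every device holds pieces of many experts
    rather than a few whole ones (better balance under skewed routing)."""
    placement = {}  # (expert_id, shard_id) -> device_id
    slot = 0
    for e in range(num_experts):
        for s in range(shards_per_expert):
            placement[(e, s)] = slot % num_devices
            slot += 1
    return placement

if __name__ == "__main__":
    plan = partition_experts(num_experts=8, shards_per_expert=4, num_devices=4)
    # Group shards by device to see which experts each device touches.
    per_device = {}
    for (e, s), d in plan.items():
        per_device.setdefault(d, set()).add(e)
    # With these toy numbers, every device holds one shard of every expert.
    print({d: sorted(es) for d, es in sorted(per_device.items())})
```

The contrast with coarse partitioning (whole experts pinned to single devices) is that hot experts no longer create stragglers; each device carries a slice of every expert's load.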