Intel has released three INT4-quantized versions of Alibaba's Wan 2.2 video models on Hugging Face, as announced by Haihao Shen, Intel's Chief AI Engineer. The models, all quantized with the AutoRound toolkit, are T2V-A14B (text-to-video), I2V-A14B (image-to-video), and TI2V-5B (a hybrid text/image-to-video model).

The quantization shrinks each weight from 2 bytes in BF16 to 0.5 bytes, cutting weight storage to roughly a quarter of the original. The A14B models use a mixture-of-experts (MoE) architecture with 27 billion total parameters, 14 billion of which are activated per step, and in their original form require at least 80 GB of VRAM per GPU at 720p resolution. TI2V-5B is a dense model that, unquantized, can generate 720p video at 24 fps on a single 4090 GPU.

Intel has not yet published benchmarks for VRAM usage or visual quality after quantization, so third-party verification is still needed. For deployment, users are directed to Intel's proprietary vllm-omni branch, as the models do not run on the mainline vLLM inference pipeline.
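The weight-size arithmetic can be sketched in a few lines. This is a rough back-of-the-envelope estimate, not an Intel-provided figure: it counts only raw weight storage and ignores quantization metadata (per-group scales and zero-points), activations, and latent caches, all of which push real VRAM usage higher.

```python
def weight_footprint_gib(n_params: float, bytes_per_weight: float) -> float:
    """Approximate raw weight storage in GiB for a given parameter count."""
    return n_params * bytes_per_weight / 1024**3

# BF16 stores each weight in 2 bytes; INT4 packs two weights per byte (0.5 B each),
# so weight storage drops to one quarter of the BF16 size.
for name, params in [("A14B (14B activated)", 14e9), ("TI2V-5B", 5e9)]:
    bf16 = weight_footprint_gib(params, 2.0)
    int4 = weight_footprint_gib(params, 0.5)
    print(f"{name}: BF16 ~{bf16:.1f} GiB -> INT4 ~{int4:.1f} GiB")
```

For the 14-billion-parameter models this works out to roughly 26 GiB of weights in BF16 versus about 6.5 GiB in INT4, which illustrates why the quantized variants are attractive for single-GPU setups, pending Intel's actual VRAM measurements.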