Elorian AI has raised $55 million in early-stage funding to advance visual reasoning capabilities in artificial general intelligence (AGI). The round was led by Striker Venture Partners, Menlo Ventures, and Altimeter, with participation from 49 Palms and notable AI scientists including Jeff Dean. Founded by Andrew Dai, a Google DeepMind veteran, and Yinfei Yang, an AI expert formerly at Apple, Elorian AI aims to elevate AI models' visual reasoning from a child-like level to an adult one, enabling them to think natively in a visual space.

The company seeks to overcome a key limitation of current visual language models, which convert visual inputs into text before reasoning over them. By developing a new architecture that integrates multimodal training, Elorian AI plans to build models capable of advanced visual reasoning, a capability crucial for applications in robotics, disaster management, and engineering. The company anticipates releasing a state-of-the-art model by 2026, which could significantly impact the embodied intelligence and AI hardware industries.