Google Research has introduced ReasoningBank, an agent memory framework designed to help AI agents learn from both past successes and failures. Released on April 22, ReasoningBank lets agents driven by large language models distill their experiences into generalized reasoning strategies and store them in a memory bank for use on future tasks. This approach improves on earlier methods by focusing on reasoning patterns rather than raw action sequences, and by incorporating failed task attempts alongside successful ones.

The framework, detailed in a paper published at ICLR and available on GitHub, also includes Memory-aware Test-time Scaling (MaTTS), which allocates additional compute at inference time so an agent can explore multiple task trajectories in parallel and refine its strategies by comparing them against one another.

In benchmarks, ReasoningBank achieved an 8.3% higher success rate on WebArena tasks and a 4.6% improvement on SWE-Bench-Verified compared with memory-free baselines, with further gains when MaTTS was applied.
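The loop described above — distill each attempt (successful or failed) into a reusable strategy, store it, and retrieve relevant strategies for new tasks, with MaTTS selecting among several candidate trajectories — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `MemoryItem` fields, the keyword-overlap retrieval, and the `matts_best_of_n` helper are all hypothetical simplifications.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MemoryItem:
    title: str        # short label for the strategy
    description: str  # when the strategy applies
    content: str      # the distilled reasoning strategy itself

class ReasoningBank:
    """Toy memory bank: stores distilled strategies, retrieves by keyword overlap.
    (The real system would use embedding-based retrieval; this is a stand-in.)"""

    def __init__(self) -> None:
        self.items: list[MemoryItem] = []

    def add(self, item: MemoryItem) -> None:
        self.items.append(item)

    def retrieve(self, query: str, k: int = 2) -> list[MemoryItem]:
        q = set(query.lower().split())
        def overlap(it: MemoryItem) -> int:
            words = set((it.title + " " + it.description).lower().split())
            return len(q & words)
        return sorted(self.items, key=overlap, reverse=True)[:k]

def distill(task: str, trajectory: list[str], success: bool) -> MemoryItem:
    """Turn one attempt into a memory item; failures become pitfalls to avoid."""
    kind = "strategy" if success else "pitfall"
    outcome = "successful" if success else "failed"
    return MemoryItem(
        title=f"{kind}: {task}",
        description=f"distilled from a {outcome} attempt at '{task}'",
        content=" -> ".join(trajectory),
    )

def matts_best_of_n(task: str,
                    trajectories: list[list[str]],
                    judge: Callable[[list[str]], float]) -> MemoryItem:
    """Sketch of memory-aware test-time scaling: spend extra compute sampling
    several trajectories, keep the one a judge scores highest, distill it."""
    best = max(trajectories, key=judge)
    return distill(task, best, success=True)

bank = ReasoningBank()
bank.add(distill("log in to site", ["open page", "fill form", "submit"], success=True))
bank.add(distill("search product", ["typed query into wrong box"], success=False))

hits = bank.retrieve("log in to site and check inbox", k=1)
print([it.title for it in hits])
```

A new task first queries the bank, prepends the retrieved strategies to the agent's context, and the resulting trajectory is distilled back into the bank, closing the loop.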