A recent investigation into the query fan-out behavior of large language models (LLMs) such as ChatGPT has uncovered significant variability in their responses. The study reports that sub-query composition changes over time, producing different outputs for the same prompt. Year timestamps, once common in generated sub-queries, have largely disappeared, and the sources the models cite shift frequently: between runs, 32 new sources were added and 44 were removed. The companies named in responses also rotate from one query run to the next, challenging the assumption that AI-generated search results are stable.
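The study does not publish its measurement code, but churn figures like "32 added, 44 removed" can be derived from simple set operations over the cited-source lists of successive runs. The sketch below is a minimal, hypothetical illustration of that kind of comparison; the `source_churn` helper and the example URLs are invented for this example, not taken from the study.

```python
# Hypothetical sketch: quantifying source churn between two runs of the
# same query. The example source lists are illustrative, not the study's data.

def source_churn(run_a: set, run_b: set) -> dict:
    """Compare the cited sources from two runs of the same query."""
    added = run_b - run_a    # sources that appear only in the later run
    removed = run_a - run_b  # sources that dropped out of the later run
    union = run_a | run_b
    # Jaccard similarity: 1.0 means identical source sets, 0.0 means disjoint.
    jaccard = len(run_a & run_b) / len(union) if union else 1.0
    return {"added": len(added), "removed": len(removed), "jaccard": jaccard}

if __name__ == "__main__":
    run_1 = {"example.com/a", "example.com/b", "example.org/c"}
    run_2 = {"example.com/b", "example.net/d"}
    print(source_churn(run_1, run_2))
    # {'added': 1, 'removed': 2, 'jaccard': 0.25}
```

Run over many query repetitions, a metric like this would make the study's stability claims directly comparable across models and across time.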