A recent discussion on the potential dangers of open source software (OSS) in AI has sparked debate among experts. Concerns were raised that OSS could be misused by bad actors, potentially leading to serious consequences. However, some argue that current evidence does not support these fears, noting that most documented harmful uses of AI have involved proprietary systems, such as OpenAI's models or Anthropic's Claude, rather than open source ones. Additionally, experts in pandemic preparedness suggest that software and genetic sequencing are not the primary constraints on such threats, challenging the notion that OSS poses a unique risk.