In mid-November 2024, Steven Adler, a senior safety researcher at OpenAI, resigned from his position, citing deep concerns over the rapid advancement of artificial intelligence (AI). In a series of posts on X (formerly Twitter) in late January 2025, Adler explained his misgivings about the industry’s swift push towards artificial general intelligence (AGI), describing the current trajectory as a “very risky gamble” for humanity’s future. He emphasized the difficulty of ensuring responsible AGI development, noting that even if one laboratory commits to safety, others might “cut corners to catch up, maybe disastrously.”
He also highlighted the lack of a proven solution to AI alignment and warned that competitive pressure among developers could erode safety measures. Reflecting on the personal stakes, Adler wrote, “Honestly, I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?” His departure is part of a broader exodus of safety experts from OpenAI who share concerns about the accelerating pace of AI development.
Notably, co-founder and former chief scientist Ilya Sutskever and safety lead Jan Leike had already left the organization, voicing similar concerns. These departures underscore the growing internal tension within AI research organizations as they weigh the drive for innovation against the imperative of safety. Adler’s resignation stands as a stark reminder of the ethical and existential challenges raised by the field’s rapid progress.