Steven Adler, a safety researcher at OpenAI, recently announced his departure from the organization, expressing deep concerns about the rapid advancement of artificial intelligence (AI). In a post on X (formerly Twitter), Adler stated, “Honestly, I’m pretty terrified by the pace of AI development these days.” He further questioned the future, pondering, “When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?”
Adler characterized the pursuit of artificial general intelligence (AGI) as a “very risky gamble, with [a] huge downside.” He highlighted the lack of current solutions to AI alignment, stating, “No lab has a solution to AI alignment today.” He expressed concern that the accelerating pace of AI development could outstrip efforts to ensure its safety, noting, “And the faster we race, the less likely that anyone finds one in time.”
This resignation adds to a series of departures from OpenAI by researchers citing similar apprehensions about the organization’s direction and the broader implications of AI technology, underscoring escalating concern within the AI research community about the ethics and safety of rapidly advancing AI systems.
Adler’s departure, and the concerns he raised, reflect a broader debate within the AI community about the pace of development and the potential risks of creating highly autonomous systems. As AI technology continues to evolve, discussions about its ethical implications and the need for robust safety measures are becoming increasingly prominent.