This week, two of the most influential voices in technology offered contrasting visions for the development of artificial intelligence, highlighting the growing tension between innovation and safety.
In a blog post published Sunday evening, OpenAI CEO Sam Altman described his company’s trajectory as it races toward artificial general intelligence (AGI), having tripled its user base to over 300 million weekly active users.
“We are now confident that we know how to build AGI as it has been traditionally understood,” Altman wrote, asserting that by 2025, AI agents could “join the workforce” and “substantially transform the performance of companies.”
Altman said OpenAI is looking beyond AI agents and AGI, and that the company is beginning to work toward “true superintelligence.”
The timeline for AGI or superintelligence remains unclear. OpenAI did not immediately respond to a request for comment.
Hours earlier on Sunday, however, Ethereum co-founder Vitalik Buterin proposed using blockchain technology to create a global fail-safe mechanism for advanced AI systems, including a “soft pause” feature that could temporarily restrict industrial-scale AI operations if warning signs appear.
Cryptography-based safeguards for AI safety
Buterin refers here to “d/acc,” or decentralized/defensive acceleration. In the simplest sense, d/acc is a variant of e/acc, or effective accelerationism, a philosophical movement espoused by Silicon Valley luminaries such as a16z’s Marc Andreessen.
Buterin’s d/acc supports technological progress but prioritizes developments that enhance safety and human agency. Unlike effective accelerationism (e/acc), which takes a “growth at all costs” approach, d/acc focuses on building defensive capabilities first.
“D/acc is an extension of the fundamental values of cryptocurrency (decentralization, censorship resistance, and an open global economy and society) to other areas of technology,” Buterin wrote.
Reflecting on how d/acc has evolved over the past year, Buterin wrote that existing cryptographic mechanisms, such as zero-knowledge proofs, could be used to implement a more cautious approach to AGI and superintelligent systems.
According to Buterin’s proposal, major AI computers would need weekly approval from three international groups to continue operating.
“The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices,” Buterin explained.
The system would act as a master switch: either all authorized computers run or none do, ensuring that no one can enforce the pause selectively against a single target.
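To make the all-or-nothing mechanics concrete, here is a minimal Python sketch of how such a weekly, device-independent check might look. It assumes the third-party cryptography library for Ed25519 signatures, invents placeholder names such as SIGNER_KEYS and hardware_may_run, and omits the optional zero-knowledge proof of blockchain publication Buterin mentions; it illustrates the idea rather than reproducing his proposal.

```python
# Illustrative sketch only: industrial-scale AI hardware keeps running only if it
# can verify a weekly authorization signed by three international bodies. The
# signed message names only the week, not any device, so one signature set
# authorizes every machine -- or none of them.
import datetime

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Hypothetical public keys of the three international groups (in practice these
# would be fixed, widely published, and possibly anchored on a blockchain).
SIGNER_KEYS: list[Ed25519PublicKey] = []


def current_week_message() -> bytes:
    """The message to be signed: the ISO year/week only, with no per-device
    field, so there is no way to approve one machine and not another."""
    year, week, _ = datetime.date.today().isocalendar()
    return f"ALLOW-AI-COMPUTE:{year}-W{week:02d}".encode()


def hardware_may_run(signatures: list[bytes]) -> bool:
    """All-or-nothing check: every one of the three groups must have signed
    this week's message, otherwise the hardware soft-pauses."""
    message = current_week_message()
    if len(signatures) != len(SIGNER_KEYS):
        return False
    for key, sig in zip(SIGNER_KEYS, signatures):
        try:
            key.verify(sig, message)
        except InvalidSignature:
            return False
    return True


if __name__ == "__main__":
    # Demo: stand in for the three international groups with fresh key pairs.
    private_keys = [Ed25519PrivateKey.generate() for _ in range(3)]
    SIGNER_KEYS.extend(k.public_key() for k in private_keys)

    weekly_sigs = [k.sign(current_week_message()) for k in private_keys]
    print("Run this week?", hardware_may_run(weekly_sigs))       # True
    print("Run with 2 of 3?", hardware_may_run(weekly_sigs[:2])) # False -> soft pause
```

In this toy version the "master switch" property comes entirely from the message format: because the signed payload contains only the week number, any hardware that can verify the three signatures is authorized, and withholding them pauses everything at once.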
Buterin described the system as a kind of insurance against catastrophic scenarios, noting that merely having the ability to soft-pause would “do little harm to developers” until such a critical moment arrives.
Either way, OpenAI’s explosive growth since 2023 (from 100 million to 300 million weekly users in just two years) shows how rapidly AI adoption is progressing.
Altman acknowledged the difficulty of “building an entire company almost from scratch around this new technology” as OpenAI grew from an independent research lab into a major technology company.
The proposal reflects a broader industry debate about how to manage AI development. Advocates have previously noted that implementing such a global control system would require unprecedented collaboration between leading AI developers, governments, and the cryptocurrency sector.
“One year of ‘wartime mode’ can be worth 100 years of work under complacent conditions,” Buterin wrote. “If you have to restrict people, it seems better to limit everyone equally and actually try to work together to organize that, rather than one side trying to dominate everyone.”
Edited by Sebastian Sinclair