On Monday, Ethereum founder Vitalik Buterin outlined his views on “techno-optimism,” responding to Marc Andreessen, who shared his thoughts on AI in his Techno-Optimist Manifesto last October. While Buterin agreed with Andreessen’s positive outlook, he also stressed the importance of how AI is developed and the direction the technology takes from here.
Buterin acknowledged the existential risks of artificial intelligence, including the possibility that it could lead to human extinction.
“This is an extreme claim,” he said. “There are many islands of civilization that would remain unscathed by even the worst-case scenarios of climate change, man-made pandemics, or nuclear war.”
“But if a superintelligent AI decides to turn against us, it could destroy humanity forever, leaving no survivors,” Buterin said. “Even Mars may not be safe.”
Buterin pointed to a 2022 survey conducted by AI Impacts, which found that between 5% and 10% of participants believed humans would be driven to extinction by AI or by a human failure to control AI, respectively. He said a security-focused open source movement, rather than closed, proprietary companies and venture capital funds, would be ideal to lead AI development.
“If we want a future that is both superintelligent and human, a future where human beings are not just pets but actually retain meaningful agency over the world, this feels like the most natural choice,” he said.
Buterin went on to say that active human intention is needed to choose the direction and the outcome. “The formula of ‘maximize profits’ is not going to get us there automatically,” he said.
Buterin said he likes technology because it expands human potential, pointing to a history of innovation from hand tools to smartphones.
“I believe these things are very good, and I believe it is very good to further expand the reach of humanity to the planets and stars,” Buterin said. “Because I believe that humanity is very good.”
Buterin said he believes innovative technologies will lead to a brighter future for humanity, but he rejects the mentality that the best we can do is keep the world roughly as it is, only with less greed and better public health.
“There are certain types of technologies that make the world better much more reliably than other types of technologies,” Buterin said. “There are certain types of technologies that, when developed, can mitigate the negative impacts of other types of technologies.”
Buterin warned of the rise of digital authoritarianism and of surveillance technologies used to target those who resist or oppose governments controlled by a small group of technocrats. He said most people would rather see highly advanced AI delayed by a decade than have it monopolized by a single group.
“My basic fear is that the same kinds of managerial technologies that allow OpenAI to serve more than 100 million customers with 500 employees will also allow a 500-person political elite, or even a five-person board of directors, to maintain an iron fist over an entire country,” he said.
Buterin said he sympathizes with the effective accelerationism (also known as “e/acc”) movement, but has mixed feelings about its enthusiasm for military technology.
“The enthusiasm for modern military technology as a force for good seems to require the belief that the dominant technological power will reliably be one of the good guys in most current and future conflicts,” he said, citing the idea that military technology is good because it is built and controlled by America, and America is good.
“Does being an e/acc require one to be an American maximalist, betting everything on the present and future morality of the government and the future success of the country?” he asked.
Buterin warned against granting “extreme and opaque powers” in the hope that a few people will use them wisely, and instead favored the “d/acc” philosophy: defense, decentralization, democracy, and differential development. He said this mindset can appeal to effective altruists, libertarians, pluralists, blockchain advocates, solarpunks, and lunarpunks.
“A world that favors defense is a better world for many reasons,” Buterin said. “Of course, there are the direct safety benefits: fewer people die, less economic value is destroyed, and less time is wasted in conflict.
“What is less appreciated, however, is that a world that favors defense makes it easier for healthier, more open and freedom-respecting forms of governance to flourish,” he concluded.
Buterin emphasized the need to build and accelerate, while saying society must regularly ask what it is accelerating towards. He suggested that the 21st century could be humanity’s “pivotal century,” one that could determine its fate for millennia.
“This is a difficult problem,” Buterin said. “But I look forward to watching and participating in our species’ massive collaborative effort to find answers.”
Edited by Ryan Ozawa.