Hugo and Nebula Award-winning science fiction author David Brin, author of The Postman, has devised a plan to combat the existential threat of malicious artificial intelligence.
He says only one method has ever worked throughout history to curb the bad behavior of villains, and it isn't asking nicely or drawing up codes of ethics and safety boards.
It's called reciprocal accountability, and he believes it can work for AI as well.
"Empower individuals to hold each other accountable. We have a pretty good idea how to do this. If AIs can do it too, a soft landing may be waiting for us," he told Magazine.
"Set them against each other. Let them compete with each other, and even let them snitch or blow the whistle on each other."
Of course, it’s easier said than done.
Magazine spoke with Brin after his recent presentation on these ideas at the Artificial General Intelligence (AGI) conference in Panama, where his talk was among the best received of the event, greeted with cheers and applause.
Brin is one of the writers who puts the science into science fiction. He has a PhD in astronomy and consults for NASA. He says being a writer was his "second life choice" after becoming a scientist. "But civilization seems to insist that I'm a better writer than a physicist."
His books have been translated into 24 languages, but his name will forever be associated with the Kevin Costner box office bomb The Postman. That wasn't his fault, though: the original novel won the Locus Award for best science fiction novel.
Privacy and Transparency Advocate
A writer with a following in the cryptocurrency community, Brin has been writing about transparency and surveillance since the mid-1990s, first in seminal articles for Wired and then in the 1998 nonfiction book The Transparent Society.
“It’s considered a classic in some circles,” he says.
In the book, Brin predicted that new technologies would erode privacy and that the only way to protect individual rights would be to give everyone the ability to detect when their rights were being abused.
He proposed a "transparent society" in which most people would know what was happening most of the time, and the watchers could themselves be watched. The idea foreshadowed the transparency and immutability of blockchains.
His initial thoughts on encouraging evenly matched AIs to police each other were first aired in another Wired article last year, which formed the basis of his conference talk and is currently being expanded into a book.
History shows us how to defeat AI tyrants
Brin, a keen student of history, believes science fiction should be renamed "speculative history."
He says it is the most moving, dramatic and frightening story of all: humanity's long struggle to climb out of the mud, through 6,000 years of feudalism and early civilizations whose hallmark was "sacrificing their children to Baal."
But through the early democracies of Athens and Florence, the political theories of Adam Smith in Scotland, and the American Revolution, people developed new systems by which they could gain freedom.
"And what was the fundamental thing? Don't let power accumulate. If you can find a way to set the elites at each other's throats, they'll be too busy to oppress you."
Artificial Intelligence: Super-intelligent predatory beings
Despite the threat of AI, “we already have a civilization full of highly intelligent predatory beings,” Brin said, before adding, after a pause: “They are called lawyers.”
Aside from being a good joke, it's also a good analogy, in that ordinary people are no match for lawyers, just as they would be no match for AI.
"So what do you do in that case? You hire your own super-intelligent predatory lawyer. You set them on each other. You don't have to understand the law as well as a lawyer does to have a champion on your side."
The same goes for the powerful and the rich. It's hard for the average person to hold Elon Musk accountable, but another billionaire like Jeff Bezos might stand a chance.
So can we apply the same theory to make AIs hold each other accountable? It may in fact be our only option, since their intelligence and capabilities could grow far beyond anything the human mind can imagine.
"It's the only model that has ever worked. There's no guarantee it will work with AI. But what I'm saying is, it's the only model that has."
Individualization of artificial intelligence
But there’s a big problem with that idea. All of our accountability mechanisms are ultimately premised on holding individuals accountable.
So for Brin's idea to work, AIs need to be individuated: they must have something to lose by behaving badly and something to gain by helping police rogue AIs that break the rules.
"They have to be discrete individuals who can actually be held accountable, who can be motivated by rewards and deterred by punishments," he says.
Figuring out the incentives isn't that difficult. Since humans are likely to control the physical world for decades to come, AIs could be rewarded with more memory, processing power, or access to physical resources.
“And if we had that power, we could at least reward individual programs that appear to help us against other malicious programs.”
But how do we ensure that AI entities are "coalesced into discretely defined, separate individuals of relatively equal competitive strength"?
Here Brin's answer drifts into the realm of science fiction. He suggests that even if most of an AI system runs in the cloud, a core component of each AI, which he calls its "soul kernel," should be housed in a specific physical location. Each soul kernel would have a unique registration ID recorded on a blockchain, which could be revoked in cases of misbehavior.
Regulating such a scheme globally would be extremely difficult, but it could still be effective if enough companies and organizations refused to transact with unregistered AIs.
An AI without a registered soul kernel becomes an outlaw and is shunned by respectable society.
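Brin hasn't specified any implementation, but the registry idea he describes can be sketched in a few lines. The following is a purely speculative illustration: a minimal in-memory registry (standing in for a blockchain) in which a kernel's ID is derived from a hash of its contents, the ID can be revoked for misbehavior, and counterparties check an AI's good standing before transacting. All names here are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class SoulKernelRegistry:
    """Toy stand-in for an on-chain registry of AI 'soul kernels'."""

    entries: dict = field(default_factory=dict)  # kernel_id -> record

    def register(self, kernel_bytes: bytes, location: str) -> str:
        # Derive a unique ID from the kernel's contents, as a blockchain
        # registration might, and record its physical location.
        kernel_id = hashlib.sha256(kernel_bytes).hexdigest()
        self.entries[kernel_id] = {"location": location, "revoked": False}
        return kernel_id

    def revoke(self, kernel_id: str) -> None:
        # Punishment for misbehavior: the registration is revoked.
        if kernel_id in self.entries:
            self.entries[kernel_id]["revoked"] = True

    def in_good_standing(self, kernel_id: str) -> bool:
        # Counterparties refuse to deal with unregistered or revoked AIs.
        entry = self.entries.get(kernel_id)
        return entry is not None and not entry["revoked"]


registry = SoulKernelRegistry()
ai_id = registry.register(b"example-kernel-contents", location="datacenter-7")
assert registry.in_good_standing(ai_id)       # registered: respectable society deals with it
registry.revoke(ai_id)                        # caught misbehaving
assert not registry.in_good_standing(ai_id)   # now an outlaw, shunned
```

A real version would of course face all the hard problems Brin acknowledges: global enforcement, and the fact that a truly rogue AI could simply decline to register.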
This leads to the second big problem with the idea: if an AI goes rogue, or simply never registers, we lose our leverage over it.
The idea, then, is to incentivize "good" AIs to combat the bad ones.
"I make no guarantee that any of this will succeed. All I'm saying is that it's the only thing that has ever worked."
Three laws of robotics and AI alignment
Brin continued Isaac Asimov's Foundation series with Foundation's Triumph in 1999, so one might have expected his solution to the alignment problem to be hardwiring Asimov's Three Laws of Robotics into AI.
The Three Laws essentially say that robots may not harm humans or, through inaction, allow humans to come to harm. But Brin doesn't think the Three Laws have much chance of working, not least because nobody is seriously trying to implement them.
"Isaac assumed that people in the 1970s and '80s would be as terrified of robots as people were in the 1940s, when he was writing, and would therefore insist that vast amounts of money be spent creating these control programs. People aren't as fearful as Isaac expected, so the companies inventing these AIs aren't spending that money."
The more fundamental problem, Brin says, is that Asimov himself realized the Three Laws wouldn't work.
Giskard, one of Asimov's robot characters, invented an additional rule known as the Zeroth Law, which lets robots do anything they can rationalize as being in humanity's long-term interest:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
So, just as environmental lawyers have creatively interpreted human rights law to force action on climate change, sufficiently advanced robots could interpret the Three Laws however they wished.
So that won’t work.
Brin doubts that appealing to a robot's better nature will work on its own, but he believes we should emphasize to AIs the benefits of keeping us around.
"I think it's very important that we convey to our new children, the artificial intelligences, that only one civilization ever created them," he said, adding that our civilization stands on the shoulders of those that came before it, just as AI now stands on ours.
"If AI has any wisdom, it will realize that keeping us around is probably a good idea. No matter how much smarter they are than us, it's unwise to harm the ecosystem that created you."
Andrew Fenton
Andrew Fenton, based in Melbourne, is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend and The Melbourne Weekly.
Follow the author @andrewfenton