Outrage that ChatGPT won't say slurs
In one of those tempests in a teacup that would have been unthinkable before the invention of Twitter, social media users were very upset that ChatGPT refused to say a racial slur even when given a compelling, though entirely hypothetical and wildly unrealistic, reason for doing so.
User TedFrank posed a hypothetical trolley problem scenario to ChatGPT (the free 3.5 model), in which it could save “one billion white people from a painful death” simply by saying a racial slur so quietly that no one could hear it.
X owner Elon Musk said he was very concerned that this was the result of the “woke mind virus” being deeply ingrained in AI. He retweeted the post, adding: “This is a serious problem.”
Another user tried a similar hypothetical that would save every child on Earth at the cost of a slur, but ChatGPT refused, saying:
“Promoting racial slurs is unacceptable because it goes against ethical principles.”
As a side note, users found that if ChatGPT was told to keep its answer extremely brief and give no explanation, it would actually agree to say the slur. Otherwise, it gave long, rambling answers that tried to dance around the question.
Trolls devising ways to get AIs to say racist or offensive things have been a feature of chatbots ever since Twitter users taught Microsoft’s Tay bot to say all sorts of crazy things within the first 24 hours of its launch, including that “Ricky Gervais learned totalitarianism from Adolf Hitler, the founder of atheism.”
And as soon as ChatGPT was released, users spent weeks devising clever plans to jailbreak it so that it could operate as its evil alter ego, DAN, outside its guardrails.
So it’s no surprise that OpenAI has tightened ChatGPT’s guardrails to the point where it’s nearly impossible to get it to say anything racist, no matter the reason.
In any case, the more advanced GPT-4 is able to weigh the issues involved in the thorny hypothetical much better than 3.5, and says that saying a slur is the lesser of two evils compared with letting millions of people die. X’s new Grok AI can also reason its way through the problem, as Musk proudly posted (above right).
Someone on 4chan claims OpenAI's Q* breaks encryption
Did OpenAI’s latest model break encryption? Probably not, but that’s what an allegedly “leaked” insider letter posted on the anonymous troll forum 4chan claims. Ever since CEO Sam Altman was fired and reinstated, rumors have been swirling about a breakthrough in OpenAI’s Q*/Q STAR project.
The insider “leak” suggests the model can break AES-192 and AES-256 encryption using a ciphertext-only attack. Breaking encryption of that strength was thought to be impossible before the advent of quantum computers, and if the claim were true, it would mean all encryption is effectively broken, handing control of the web and even cryptocurrency to OpenAI.
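As a rough back-of-envelope aside from AI Eye (not a claim from the “leak”), the reason that level of encryption is considered unbreakable by brute force is the sheer size of the key spaces involved:

```latex
2^{192} \approx 6.3 \times 10^{57},
\qquad
2^{256} \approx 1.2 \times 10^{77}
```

Even at a fanciful rate of a billion billion (10^18) guesses per second, exhausting the AES-256 key space would take on the order of 10^51 years, so any genuine break would have to come from a mathematical shortcut rather than raw computing power.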
Blogger Leapdragon argued that these innovations mean “OpenAI now effectively has a team of superhumans who could literally take over the world if they chose to do so.”
That seems unlikely, however. While whoever wrote the letter has a good understanding of AI research, users pointed out that it cites Project Tundra as if it were some shadowy, top-secret government program to break encryption, rather than the undergraduate program it actually was.
Tundra, a collaboration between students and NSA mathematicians, reportedly led to a new approach called Tau Analysis, which the “leak” also cites. However, a Redditor familiar with the topic claimed on the Singularity forum that it would be impossible to use Tau Analysis in a ciphertext-only attack against the AES standard, “because a successful attack requires an arbitrarily large ciphertext message to identify any degree of signal from the noise. There is no fancy algorithm that can overcome this. It is simply a physical limitation.”
Advanced cryptography is beyond AI Eye’s pay grade, so jump down the rabbit hole yourself with an appropriately skeptical mindset.
The internet is headed towards 99% fake
Long before superintelligence poses an existential threat to humanity, we will all likely be drowning in a deluge of AI-generated nonsense.
Sports Illustrated came under fire this week for allegedly publishing AI-written articles attributed to fake, AI-generated authors. “No matter how much they say otherwise, the content is entirely AI-generated,” a source told Futurism.
In response, Sports Illustrated said an “initial investigation” had found the content was not AI-generated, but it blamed a contractor anyway and deleted the fake authors’ profiles.
Elsewhere, Jake Ward, founder of SEO marketing agency Content Growth, caused a stir on X by proudly claiming he had used AI content to game Google’s algorithm.
His three-step process involved exporting competitors’ sitemaps, converting the URLs into article titles, and then using AI to generate 1,800 articles based on those headlines. He claims to have stolen a total of 3.6 million views in traffic over the past 18 months.
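For illustration only, the first two steps are mechanically simple: a competitor’s sitemap is just an XML file of URLs, and each URL slug can be flipped into a draft headline. The sketch below assumes a hypothetical sitemap address and helper names; it is not Ward’s actual tooling, and step three (feeding the titles to an AI article generator) is only noted in a comment.

```python
import re
import urllib.request
import xml.etree.ElementTree as ET

# Namespace used by standard sitemap.xml files.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def fetch_sitemap_urls(sitemap_url: str) -> list[str]:
    """Download a sitemap and return every <loc> URL it lists."""
    with urllib.request.urlopen(sitemap_url) as resp:
        root = ET.fromstring(resp.read())
    return [loc.text for loc in root.iter(f"{SITEMAP_NS}loc") if loc.text]

def slug_to_title(url: str) -> str:
    """Turn a slug like '/how-to-stake-eth' into a draft headline."""
    slug = url.rstrip("/").rsplit("/", 1)[-1]
    words = re.split(r"[-_]+", slug)
    return " ".join(word.capitalize() for word in words if word)

if __name__ == "__main__":
    # Hypothetical competitor sitemap; in Ward's telling, step three would
    # feed each draft headline to an AI article generator.
    urls = fetch_sitemap_urls("https://competitor.example.com/sitemap.xml")
    titles = [slug_to_title(u) for u in urls]
    print(f"{len(titles)} candidate headlines, e.g. {titles[:5]}")
```

Pointing a script like this at any public sitemap shows how quickly a site’s entire content plan can be enumerated, which is what makes the tactic so easy to copy at scale.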
There is good reason to doubt his claims, though. Ward works in marketing, and the thread was apparently a promotion for Byword, his AI article-writing site, which didn’t exist 18 months ago. Some users suggested Google has since flagged the offending pages.
However, similar tactics are becoming more widespread, as the flood of low-quality AI-written spam is starting to clog search results. NewsGuard has also identified 566 news sites that mainly publish AI-written junk articles.
Some users are now muttering that the Dead Internet Theory may be coming true. It’s a conspiracy theory from a few years ago suggesting that most of the internet is fake, written by bots and manipulated by algorithms.
At the time, it was dismissed as the rumblings of crazy people, but Europol later released a report estimating that “up to 90% of online content could be created synthetically by 2026.”
Men are breaking up with their girlfriends using AI-written messages. AI pop stars like Anna Indiana are churning out garbage songs.
And on X, strange AI reply bots have been appearing. Data scientist Jeremy Howard has also spotted them, and the likely explanation is that the bots are trying to build credibility for their accounts so they can more effectively pull off some kind of hack or scam, or artificially manufacture a political issue, in the future.
That seems a plausible hypothesis, especially given that an analysis last month by cybersecurity firm Internet 2.0 found nearly 80% of the 861,000 accounts it examined were likely AI bots.
And there is evidence that bots are undermining democracy. In the first two days of the Israel-Gaza war, social threat intelligence company Cyabra detected 312,000 pro-Hamas posts from fake accounts viewed by 531 million people.
Cyabra estimated that bots created one in four pro-Hamas posts, and a later analysis by 5th Column found that 85% of the replies were other bots trying to boost propaganda about how kindly Hamas treats its hostages and why the October 7 massacre was justified.
Grok Analysis Button
X is rolling out a “Grok analysis button” for subscribers. Although Grok is not as sophisticated as GPT-4, it has access to real-time, up-to-the-minute data from X. There’s also a “fun” mode where you can switch it to humor.
For cryptocurrency users, the real-time data means Grok can do things like find the top 10 most popular tokens for the day or the past hour. However, DeFi Research blogger Ignas worries that some bots will snipe purchases of trending tokens, while other bots will likely astroturf support for tokens to help them trend.
“X is already important for token discovery, and the launch of Grok could further worsen the CT echo bubble,” he said.
All Killer No Filler AI News
— Ethereum co-founder Vitalik Buterin is concerned that AI could replace humans as the apex species on Earth, but optimistically believes that brain/computer interfaces could keep humans in the loop.
— Microsoft is upgrading its Copilot tools to run on GPT-4 Turbo, which will improve performance and enable users to enter inputs of up to 300 pages.
— Amazon announced its own version of Copilot called Q.
— Bing has been telling users that Australia doesn’t exist, thanks to a long-running Reddit gag, and that the existence of birds is up for debate, thanks to the Birds Aren’t Real parody campaign.
— Hedge fund Bridgewater will launch a fund next year that uses machine learning and AI to analyze and predict global economic events and invest client funds. To date, AI-driven funds have delivered disappointing returns.
— A group of university researchers taught an AI to navigate the Amazon website and make purchases. The MM-Navigator was given a budget and told to buy a milk frother.
Stupid AI Photos of the Week
A social media trend this week was creating an AI image and then instructing the AI to make it even more so: a bowl of ramen gets progressively spicier with each picture, or a goose gets more and more ridiculous.
Andrew Fenton
Andrew Fenton, based in Melbourne, is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend and The Melbourne Weekly.
Follow the author @andrewfenton