AI Tupac vs. AI Drake
About a year ago, a fake AI-generated song featuring Drake and The Weeknd racked up 20 million views in two days before Universal Music had the track pulled for copyright infringement. This week, the shoe was on the other foot, with lawyers for Tupac Shakur’s estate threatening Drake with a lawsuit over “Taylor Made,” a diss track aimed at Kendrick Lamar that “featured” AI-faked Tupac vocals. Drake has since removed the track from his X profile, though it’s not hard to find if you go looking.
Deepfake nudes have been criminalized
The Australian and British governments have both announced plans to criminalize the creation of deepfake pornography made without the consent of the people depicted. AI Eye reported in December that anyone with a smartphone can easily create deepfakes using a variety of apps, including Reface, DeepNude and Nudeify. According to Graphika, deepfake nude creation websites receive tens of millions of visits every month.
Principal framed by AI voice clone
Baltimore police have arrested former Pikesville High School athletic director Dazhon Darien, accusing him of using AI voice-cloning software to whip up a fake racism storm (“fakeism”) in retaliation against the school’s principal, who was investigating Darien over the alleged theft of school funds.
Darien sent recordings of the principal supposedly making racist comments about Black and Jewish students to another teacher, who forwarded them to students, the media and the NAACP. The principal was forced to step down amid protests, but forensic analysis showed the audio was faked, and detectives arrested Darien at the airport as he attempted to fly to Houston with a gun.
Everyone hates Meta AI
At least, everyone in the media seems to hate Meta’s new AI integration in Instagram’s search box, mostly because it’s too eager to chat and not very good at searching. The bot also joins Facebook group conversations uninvited and talks nonsense whenever a question goes unanswered by a human for an hour.
AI priest defrocked
An AI Catholic priest was defrocked after just two days, in part for appearing to endorse incest. California-based Catholic Answers introduced its Father Justin chatbot last week to answer educational questions about the Catholic faith.
But after it started advising people that they could baptize their children with Gatorade, and blessed the “joyful event” of a brother and sister getting married, Catholic Answers was forced to apologize and demote the chatbot to plain old Justin. “There have been a significant number of criticisms about the representation of the AI character as a priest,” CA said. And it won’t say he was laicized, “because he was never a real priest!”
Rabbit R1 review
When wildly popular tech reviewer Marques Brownlee said the Rabbit R1 “has a lot in common with the Humane AI Pin,” you knew the device was doomed: Brownlee had comprehensively panned Humane’s device two weeks earlier. The Rabbit R1 is a handheld AI device that users interact with mainly by voice, and which operates apps on their behalf. Brownlee criticized the device as barely finished and “borderline dysfunctional,” with terrible battery life, and said it isn’t very good at answering questions.
TechRadar called the R1 “a beautiful and confusing product,” noting that the market cannot support “a product that is far from ready for mass consumers.” CNET’s reviewer said that while there were moments when “everything fell into place and you understood the hype,” the negatives far outweighed the positives. The main problem with dedicated AI devices so far is that they are more limited than smartphones, which already perform the same functions more effectively.
New Video – Rabbit R1: Almost Unreviewable https://t.co/CqwXs5m1Ia
This is the culmination of a trend that’s been vexing for years: The way to win the “race” is to ship a barely finished product at full price and then keep building it. Games, phones, cars, now AI in a box. pic.twitter.com/WutKQeo2mp
— Marques Brownlee (@MKBHD) April 30, 2024
Fake live streams to pick up women
New apps called Parallel Live and Famefy use AI-generated audience interaction to fake large social media audiences for live streams. Pickup artists are reportedly using the apps as social proof to impress women. In one video, influencer ItsPolaKid shows a woman in a bar that he’s “live streaming” to 20,000 people; she asks if he’s rich, and they end up leaving together. “The audience is AI-generated, so it can listen and respond to what you say. It’s really fun. She couldn’t get enough,” the influencer said.
The rule of thumb on social media is that whenever an influencer mentions a product, it’s probably an advertisement. Parallel Live creator Ethan Keizer has released promotional videos of his own, garnering millions of views and pushing a similar line: that social proof from a fake audience can win over models and get you invited to the VIP sections of clubs. 404 Media’s Jason Koebler reports that the apps use AI speech-to-text recognition, meaning the fake AI viewers could “respond” to things he said aloud while he tested the app.
“No-AI” guarantee for books
British author Richard Haywood is a self-publishing superstar whose post-apocalyptic Undead series of novels has sold more than four million copies. He is now fighting zombie “authors” by adding a NO-AI label and guarantee to all his books — a “legally binding guarantee” that each novel was written without ChatGPT or other AI assistance. Haywood estimates that around 100,000 fake books have been churned out by AI over the past year, and believes AI-free guarantees are the only way to protect authors and consumers.
AI reduces heart disease deaths by one third
AI trained on nearly half a million ECG tests and survival data was used to identify the top 5% most at-risk heart patients in Taiwan. A study in Nature found that the AI reduced overall heart disease deaths among the patients by 31%, and by 90% among high-risk patients.
AI is just as stupid as us
Meta’s chief AI scientist, Yann LeCun, argues that human intelligence may prove to be the ceiling for LLMs because of their training data, as large language models converge around human baselines across multiple tests.
“As long as AI systems are trained to reproduce human-generated data (e.g. text) and have no search/planning/reasoning capability, performance will saturate at or near human level.”
AbacusAI CEO Bindu Reddy agrees that “models have hit a wall,” even as ever more compute and data are added. “So in some sense, it’s actually impossible to get past a certain level with plain language models,” she said. But “nobody knows what ‘superhuman reasoning’ looks like. Even if LLMs did manifest these superhuman abilities, we wouldn’t be able to recognize them.”
Safety committee doesn’t believe in open source
The U.S. Department of Homeland Security has recruited the heads of centralized AI companies, including OpenAI, Microsoft, Alphabet and Nvidia, for its new AI Safety and Security Board. But the board has been criticized for not including Meta, which pursues an open-source AI model strategy, or indeed anyone else working on open-source AI. Perhaps it’s already been deemed unsafe.
Andrew Fenton
Andrew Fenton, based in Melbourne, is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, as a film journalist for SA Weekend, and at The Melbourne Weekly.
Follow the author @andrewfenton