US wants to cover Taiwan’s skies with robot army
The U.S. military plans to deploy thousands of autonomous drones into the narrow Taiwan Strait to counter any Chinese invasion of the island.
“I want to turn the Taiwan Strait into an unmanned hellscape using a number of classified capabilities, so that I can make their lives utterly miserable for a month, which buys me the time for the rest of everything,” Navy Adm. Samuel Paparo, commander of U.S. Indo-Pacific Command, told The Washington Post.
The drones are meant to confuse enemy aircraft, provide targeting data for missile strikes on warships and generally create chaos. Ukraine has pioneered the use of drones in warfare, destroying 26 Russian ships and forcing the Black Sea Fleet to retreat.
Ironically, most of the components in Ukraine’s drones are sourced from China, raising questions about whether the United States can manufacture enough drones of its own to compete.
To that end, the Pentagon has earmarked $1 billion this year for the Replicator initiative to mass-produce kamikaze drones. According to the Taipei Times, Taiwan plans to procure about 1,000 more AI-powered drones next year. The future of war has arrived.
AI Agent Cryptocurrency Payment Network
Skyfire has just launched a payment network that allows AI agents to transact autonomously. Agents are given pre-funded cryptocurrency accounts with safeguards against overspending (a human gets pinged if an agent’s spending would exceed a preset limit), and the agents never get access to their owners’ actual bank accounts.
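Skyfire hasn’t published its internals, but the safeguard described above is essentially a spending cap with a human-in-the-loop fallback. Here is a minimal Python sketch of that pattern; every name below is hypothetical, not Skyfire’s actual API:

```python
# Hypothetical sketch of the spending-cap pattern described above.
# None of this is Skyfire's real API; all names are invented.

class SpendingGuard:
    """Wraps a pre-funded agent wallet with a spending cap and a human alert."""

    def __init__(self, balance_usdc: float, cap_usdc: float, notify):
        self.balance = balance_usdc  # pre-funded; isolated from any bank account
        self.cap = cap_usdc          # preset limit before a human is pinged
        self.spent = 0.0
        self.notify = notify         # callback that alerts the human owner

    def pay(self, amount: float, payee: str) -> bool:
        if self.spent + amount > self.cap:
            self.notify(f"Approval needed: {amount} USDC to {payee}")
            return False             # blocked until a human steps in
        if amount > self.balance:
            return False             # the pre-funded account can't be overdrawn
        self.balance -= amount
        self.spent += amount
        return True

guard = SpendingGuard(balance_usdc=100.0, cap_usdc=25.0, notify=print)
print(guard.pay(10.0, "api.example.com"))  # True: within the cap
print(guard.pay(20.0, "api.example.com"))  # False: exceeds the cap, owner pinged
```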
Co-founder Craig DeWitt told TechCrunch that an AI agent that can’t pay for anything is little more than a glorified search. “Either the agent figures out how to actually do the job, or it doesn’t do anything and is therefore not an agent,” he said.
Global auto parts manufacturer Denso already uses Skyfire to source materials via its own AI agents, while the Fiverr-like Payman platform uses Skyfire to let AI agents pay humans to perform tasks on their behalf.
LLMs not smart enough to destroy humanity
A study from the University of Bath in the UK concluded that large language models do not pose an existential risk to humanity because they cannot learn independently or acquire new skills without explicit instruction.
The researchers argue that LLMs like ChatGPT will remain safe even as they grow more sophisticated and are trained on ever-larger datasets.
The study examined the ability of LLMs to complete tasks they had never encountered before and concluded that they were very unlikely to acquire complex reasoning skills.
“The fear that a model will go off and do something completely unexpected, innovative and potentially dangerous is not valid,” said study co-author Dr. Harish Tayyar Madabushi.
AI x Crypto Tokens Tank
AI-themed tokens including Bittensor, Render Network, Near Protocol and Internet Computer have fallen more than 50% from their highs this year, and plans for Fetch.ai, SingularityNET and Ocean Protocol to merge into an AI alliance have had little effect on their prices.
FET (not yet renamed ASI) peaked at $3.26 and has since fallen to 87 cents, a drop of roughly 73%. Kaiko reports that weekly global trading volume for the sector fell to just $2 billion in early August.
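A quick back-of-the-envelope check of that drawdown, using only the figures quoted above:

```python
# Sanity check on the FET figures cited above: peak $3.26, now $0.87.
peak, current = 3.26, 0.87
print(f"FET drawdown from peak: {(peak - current) / peak:.0%}")  # prints 73%
```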
You might think the AI bubble has burst, but the stock market suggests otherwise: the Global X Robotics & Artificial Intelligence ETF (BOTZ) is still only a few percent off its yearly high.
Raygun Video Shows AI’s Limitations
Australia finished in the top four on the Olympic medal table, but the only Australian athlete the world will remember is viral breakdancer Raygun. An AI text-to-video recreation of Raygun breakdancing has now gone viral too, and it is as bad as her actual routines, which is either hilarious or offensive depending on your perspective.
Text-to-video has come so far, and so far so good. But this model is clearly not a world simulator: it doesn’t compute the laws of physics, and the architecture isn’t designed for it. So we still need some breakthroughs. pic.twitter.com/MrqVXcHgkZ— Chubby♨️ (@kimmonismus) August 21, 2024
ChatGPT is a terrible doctor, but specialist medical AI is pretty good
A new study in the scientific journal PLOS One found that ChatGPT is a pretty terrible doctor: the LLM achieved just 49% accuracy when diagnosing 150 case studies from Medscape. (Incidentally, ChatGPT is reluctant to give medical advice at all unless you deceive it, for example by claiming to be conducting academic research, as the study’s authors did.)
Specialist medical AI, like Google’s Articulate Medical Intelligence Explorer (AMIE), is much more capable, however. A study published by Google earlier this year found that AMIE outperformed human doctors in diagnosing 303 case studies drawn from the New England Journal of Medicine.
In another new study, researchers from Middle Technical University and the University of South Australia found that a specially trained computer algorithm achieved 98 per cent accuracy in diagnosing conditions, including diabetes, stroke, anaemia, asthma, and liver and gallbladder disease, from the colour of a patient’s tongue.
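The paper’s actual pipeline isn’t reproduced here, but the core idea, mapping tongue-colour features to a condition with a trained classifier, can be sketched in a few lines. Everything below (the features, the class labels, the data) is an invented placeholder rather than the study’s method:

```python
# Illustrative sketch only: synthetic data standing in for real tongue images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_samples(hue_center: float, n: int = 200) -> np.ndarray:
    """Fake per-image mean tongue colour in HSV space, clustered around a hue."""
    return np.column_stack([
        rng.normal(hue_center, 8, n),   # hue
        rng.normal(150, 25, n),         # saturation
        rng.normal(170, 25, n),         # value
    ])

# Three hypothetical conditions, each associated with a different tongue hue.
X = np.vstack([make_samples(25), make_samples(140), make_samples(280)])
y = np.repeat(["condition_a", "condition_b", "condition_c"], 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```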
Using AI in Comedy
“Why would a politician bring a ladder into a debate? To help him reach new heights with his promises!” That’s the kind of groan-worthy joke AI produces, so it may come as a surprise that some comedians have used AI to help create their shows.
Comedian Anesti Danelis recently built his show Artificial Intelligent around bad AI jokes like these and said AI tools also helped shape the material.
“I’ve come to realise through this process that human creativity cannot be replicated or replaced. In the end, about 20 per cent of the show was pure AI and the other 80 per cent was a mix,” he told the BBC.
American comedian Viv Ford also used AI to sharpen her Edinburgh Festival show No Kids on the Blockchain. She said:
“I’ll ask it, ‘Is this joke funny?’ And if it says, ‘It’s funny,’ then it’s really not going to work with the audience.”
A study from the University of Southern California found that ChatGPT wrote slightly funnier jokes than the average human, while a second study compared AI-generated satirical headlines with real headlines from The Onion and found them equally unfunny.
Trump’s Deepfake AI Photos Are Just Memes: Washington Post
After Bitcoin supporter Donald Trump reposted an AI-generated “Swift for Trump” image and a fake image of Kamala Harris addressing a communist rally, media outlets from The New York Times to Al Jazeera wrung their hands over the threat AI deepfakes pose to politics. That threat is real, but The Washington Post’s Will Oremus argues that in this case the images weren’t designed to deceive anyone.
“Rather, the images seem to function as memes, meant to amuse and entertain, the visual equivalent of the derisive nicknames Trump deploys against his opponents,” he wrote this week.
“The intended audience doesn’t care whether it’s literally true. The fake image feels true in some way, or at least it’s fun to imagine that it could be. And when the other side reacts with outrage, the joke’s on them.”
Andrew Fenton
Andrew Fenton, based in Melbourne, is a journalist and editor covering cryptocurrencies and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend, and The Melbourne Weekly.
Follow the author @andrewfenton