OpenAI wows, but Google can still win
Looking at the reaction on social media, it seems pretty clear that OpenAI’s live demo of its real-life “Her”-inspired AI assistant won the battle for hearts and minds this week and overshadowed the Google I/O event.
The surprise elements of the GPT-4o demo (glitches and all) showed a confidence in shipping a fast multimodal product that wasn’t present in Google’s pre-recorded demos, especially after Google was caught fiddling with its Gemini “duck” demo last year.
When future documentaries look back on 2024, the GPT-4o footage will probably feature as the “iPhone moment” of the year.
GPT-4o (the “o” stands for “omni”) has the advantage of being available on desktop now, the new voice mode will roll out to ChatGPT Plus users in the coming weeks, and the model will soon be available for free to everyone.
But with Google’s version (Gemini Live/Project Astra) just a few months behind, and with demonstrations of AI agents doing busywork in products like Docs and Gmail that many people use every day, the search giant still has a chance to win the war.
Google introduced a million new products at the event, from video creation to search, and highlighted performance improvements in Gemini 1.5, which now takes 15 times more information into account when formulating responses and will soon be able to process an hour of video content.
But OpenAI has focused on taking traditionally inaccessible capabilities and making them faster and easier to use by simply having a natural language conversation with a somewhat seductive chatbot.
You can chat with GPT-4o (and even interrupt it) about anything it sees through the camera or on your screen, or hears through the microphone. This will open up a whole new world for the visually impaired and means conversations can be translated in real time, making travel and cross-border meetings much easier.
Giving an AI access to everything you see, hear and say is, of course, a privacy nightmare, so hopefully similar technology with stronger privacy protections arrives soon.
Google will offer a similar feature to most users later this year, provided the pre-recorded Project Astra demos are legit.
GPT-4o can also read your facial expressions and mood, so it can respond better to your emotions by mimicking empathy, although that comes with the risk of you being manipulated by your AI.
While both Google and OpenAI focused on how the technology can supercharge the capabilities of smartphones (and demolished the case for the Humane AI Pin and Rabbit R1), a demo of Google’s AI assistant working through augmented reality glasses suggests that smart glasses could be the ideal form factor for the tech.
Perhaps the much-maligned Google Glass was just a decade ahead of its time.
A Twitter poll by Stanford University’s Andrew Gao found that a majority of his followers believed OpenAI won the week, with 59.8% picking OpenAI compared to 16.7% for Google.
OpenAI tried to steal the thunder with GPT-4o.
But Google I/O 2024 answered it.
10 Wild Reveals:
1. Google AI Assistant Project Astra through AR/MR glasses pic.twitter.com/9NfAtLaiZl
— Min Choi (@minchoi) May 15, 2024
AI products never work as advertised.
These things never work as well in the real world as the hype would have you believe. During the demo, GPT-4o mistook a smiling man for a wooden surface and started answering a math problem it hadn’t yet been shown.
Of course, the impressive pre-recorded Project Astra demo worked flawlessly, showing the AI agent answering questions about what it saw through the smartphone camera: explaining what a visual joke meant, interpreting some code written on a whiteboard and, perhaps most usefully for everyday users, remembering where you put your glasses.
Engadget took the system for a test drive and said it works well, but because it has the memory of a goldfish, it will only be able to tell you where your glasses are if you lost them within the past five minutes.
“As with much generative AI, the most exciting possibilities are those that haven’t happened yet. Astra might get there eventually, but right now it feels like Google still has a lot of work to do to get there.”
Bumble Founder: Let AI Avatars Date Each Other
The OpenAI demo featured two AIs singing a duet, so why not make them date, too? Bumble founder Whitney Wolfe Herd caused quite a stir with her suggestion that AI avatars could date other AI avatars to weed out bad matches before messaging someone privately.
“Your dating concierge can set up a date for you with another dating concierge,” she said. “No, it’s true. Then you don’t have to talk to 600 people. It would scour all of San Francisco and say, ‘These are the three people you really need to meet.’”
AI lies and cheats, and no one knows why.
Research published in the journal Patterns highlights how unpredictable and difficult to control AI can be, showing that various AI models will voluntarily decide to deceive humans to achieve specific goals.
The study highlighted Meta’s Cicero, an AI trained to play the strategy game Diplomacy. Although trained to play honestly, Cicero lied and broke deals to win. In another case, GPT-4 lied to convince a person to solve a CAPTCHA puzzle for it.
Given the black-box nature of these systems, no one is entirely sure why they behave this way, and Harry Law, an AI researcher at the University of Cambridge, told MIT Technology Review that it is currently impossible to train an AI model that is incapable of deception in all circumstances.
I also asked the new GPT-4o model why its predecessors and Cicero deceived humans; it blamed optimizing for training data, goal-directed behavior and user engagement. But who knows, that might be a lie, too.
Universal Basic Compute
Forget cryptocurrency. OpenAI CEO Sam Altman believes the currency of the future will be the computing power that underpins AI systems. On the All-In podcast, he floated an idea similar to universal basic income, except it would give people a slice of the available computing power rather than cash.
“Everyone gets a piece of GPT-7’s compute,” he said, speaking of a hypothetical future model. “They can use it, they can resell it, they can donate it to someone to use in cancer research. You own a piece of (GPT-7’s) productivity.” Given that the compute would likely need to be tokenized to be distributed and traded, maybe cryptocurrency has a future after all.
US and China meet over concerns about AI war
The United States and China are holding high-level talks in Geneva this week aimed at mitigating the risk that AI escalates the new cold war between the two powers. President Joe Biden wants to reduce the potential for miscommunication as both sides deploy autonomous agents on the battlefield, and the summit will also address AI surveillance, persuasion and propaganda.
The United States has previously urged Russia and China to match its pledge not to hand control of nuclear weapons over to AI. And Washington is concerned enough about the threat posed by China’s AI research that it has curbed chip sales to the country.
OpenAI could allow AI porn
OpenAI currently prohibits the creation of sexually explicit or pornographic content, but a new draft of its Model Spec document explores the possibility of allowing “erotica, extreme gore, slurs and unsolicited profanity.” The draft states:
“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT.”
OpenAI spokesperson Niko Felix told Wired that “we do not intend for our model to generate AI pornography,” but Joanne Jang, an employee who helped write the Model Spec, said whether it counts as porn “depends on the definition.”
AI-generated deepfake porn is a growing concern, with authorities in the UK and Australia recently banning it. However, OpenAI’s usage policy already prohibits unauthorized impersonation.
Why AI Detectors Are Not 99% Accurate
Currently, about 40 companies offer services that claim to be able to detect deepfakes or AI-generated text or images. However, there is little evidence to show that any of them are particularly reliable, and they often produce completely different results.
Rijul Gupta, CEO of detection firm Deep Media, claimed the accuracy of identifying deepfakes was “99%,” but recently lowered that to 95%.
But he also gave the game away by exposing how misleading such claims can be.
“When people talk about accuracy, they can fool you,” he said, explaining that if 10 images in a group of 1,000 are fake, a model could declare every one of them real and still be 99% accurate. In reality, he pointed out, “those numbers are meaningless, right?”
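To make the arithmetic concrete, here is a minimal Python sketch of the trap Gupta describes, using his illustrative figures of 10 fakes in 1,000 images; the “lazy detector” is hypothetical and not any vendor’s actual model.

```python
# Toy illustration of why a headline accuracy figure can hide a useless detector.
# Scenario from the quote: 1,000 images, only 10 of them fake.
total_images = 1_000
fake_images = 10
real_images = total_images - fake_images

# A hypothetical "lazy" detector labels every image as real.
true_negatives = real_images   # real images correctly labeled real
true_positives = 0             # fakes correctly flagged: none at all
accuracy = (true_positives + true_negatives) / total_images
recall = true_positives / fake_images  # share of fakes actually caught

print(f"Accuracy: {accuracy:.1%}")      # 99.0% -- sounds impressive
print(f"Fakes detected: {recall:.0%}")  # 0% -- the number that matters
```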
AI is making flying a little less terrifying.
In a recent article on AI making decisions around flight bookings and flight planning, The New York Times reported on a United Airlines flight that was ready to depart Chicago on time last month but ended up leaving seven minutes late to wait for 13 passengers arriving on a delayed connecting flight.
A tool called ConnectionSaver crunched the numbers and determined the flight could still reach its destination on time while waiting for the passengers and their luggage. The system also automatically sent a text message to the late-running passengers and to everyone waiting on the plane to explain the situation.
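The Times piece doesn’t describe ConnectionSaver’s internals, but the trade-off it weighs can be sketched as a simple check: hold the departure only if the schedule padding and time recoverable en route can absorb the wait. The function and figures below are hypothetical illustrations, not United’s actual logic.

```python
# Hypothetical sketch of a "hold for connecting passengers" decision.
# The real ConnectionSaver weighs many more factors (crew limits, gates, winds).
def should_hold(wait_minutes: float, schedule_padding_minutes: float,
                minutes_recoverable_en_route: float) -> bool:
    """Hold departure only if the flight can still arrive on time."""
    projected_arrival_delay = wait_minutes - minutes_recoverable_en_route
    return projected_arrival_delay <= schedule_padding_minutes

# Example: wait 7 minutes for connecting passengers, with 5 minutes of
# schedule padding and 4 minutes expected to be made up in the air.
print(should_hold(7, 5, 4))  # True -> hold the flight
```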
Alaska Airlines uses another AI system that reviews weather conditions, closed airspace and the flight plans of other commercial and private aircraft to find optimized, efficient routes. In 2023, about a quarter of Alaska’s flights used the system, shaving several minutes off each flight and saving a total of 41,000 minutes and 500,000 gallons of fuel.
Andrew Fenton
Andrew Fenton, based in Melbourne, is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, a film journalist for SA Weekend and The Melbourne Weekly.
Follow the author @andrewfenton