With the threat artificial intelligence poses to democracy a top concern for policymakers and voters around the world, OpenAI announced plans Monday to ensure transparency in AI-generated content and improve access to trustworthy voting information ahead of the 2024 elections.
Since the release of GPT-4 in March, generative AI and its potential for misuse, including AI-generated deepfakes, have been central to the conversation around AI’s meteoric rise in 2023. With major elections in 2024, including the US presidential race, the consequences of AI-driven misinformation could be serious.
“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” OpenAI said in a blog post.
OpenAI added, “We are bringing together the expertise of our safety systems, threat intelligence, legal, engineering, and policy teams to rapidly investigate and remediate potential abuses.”
A snapshot of how we’re preparing for 2024’s worldwide elections:
• Working to prevent abuse, including misleading deepfakes
• Providing transparency on AI-generated content
• Improving access to authoritative voting information
https://t.co/qsysYy5l0L — OpenAI (@OpenAI) January 15, 2024
Last August, the Federal Election Commission said it would consider a petition to ban AI-generated campaign ads, with FEC Commissioner Allen Dickerson warning that “serious First Amendment concerns” lurk in the background of the effort.
For ChatGPT users in the United States, OpenAI said it will direct people who ask “specific procedural election-related questions” to CanIVote.org, a nonpartisan voting-information website. Lessons from that rollout, the company said, will inform its approach in other countries.
“We look forward to continuing to collaborate and learn from our partners to anticipate and prevent potential abuse of our tools ahead of this year’s global elections,” the company added.
OpenAI also said it prohibits developers from building chatbots that impersonate real people or institutions, such as government officials and offices. Applications designed to deter people from voting, for example by interfering with the voting process or misrepresenting who is eligible to vote, are likewise not allowed.
AI-generated deepfakes, fake images, videos, and audio created with generative AI, have gone viral over the past year. Several figures, including US President Joe Biden, former President Donald Trump, and even Pope Francis, have been the subjects of fabricated images shared on social media.
To prevent its Dall-E 3 image generator from being used in deepfake campaigns, OpenAI said it will implement content credentials from the Coalition for Content Provenance and Authenticity (C2PA), which add a mark or “icon” to AI-generated images.
“We are experimenting with a new tool, the provenance classifier, to detect images produced by Dall-E,” OpenAI said. “Our internal testing has shown promising early results, even when images have undergone common types of modifications.”
Last month, Pope Francis called on leaders around the world to adopt a binding international treaty to regulate AI.
“The inherent dignity of each human being and the fraternity that binds us together as members of the one human family must undergird the development of new technologies and serve as indisputable criteria for evaluating them before they are employed, so that digital progress can occur with due respect for justice and contribute to the cause of peace,” Francis said.
To curb misinformation, OpenAI said ChatGPT will begin providing real-time news reporting from around the world, including quotes and links.
“Transparency about information sources and a balance of news sources can help voters better evaluate information and decide for themselves what they can trust,” the company said.
Last summer, OpenAI donated $5 million to the American Journalism Project. The company has also signed a licensing agreement with the Associated Press that gives the AI developer access to the news agency’s archive of articles.
OpenAI’s comments about attributing news reports come as the company faces several copyright lawsuits, including one from the New York Times. In December, the Times sued OpenAI and its largest investor, Microsoft, alleging that millions of its articles were used to train ChatGPT without permission.
“OpenAI and Microsoft have built businesses worth tens of billions of dollars by stealing the combined works of humanity without permission,” the lawsuit said, adding that the copied material includes protectable elements of expression, such as style, word choice, and the arrangement and presentation of facts.
OpenAI has called the New York Times’ lawsuit “meritless,” alleging that the publisher manipulated its prompts to make the chatbot generate responses resembling Times articles.
Editor: Andrew Hayward