Developers of artificial intelligence platforms could soon release technology that allows users to make images and videos that would be nearly indistinguishable from reality.
Companies such as OpenAI, the developer behind the popular ChatGPT platform, are nearing the release of tools that will allow the creation of widespread and realistic fake videos as early as next year, according to a report from Axios.
According to the report, an AI architect told the outlet that in private testing of some of the tools that could soon be in the hands of everyday users, even the developers could no longer distinguish fake imagery from reality, something they had not believed was possible so soon.
The rapid development of the technology has many worried about how such tools could be misused, especially with new abilities being released the same year as a presidential election.
“One of the biggest concerns with the advancement of artificial intelligence technology is the constant stream of deepfake videos, which will be made of celebrities, politicians and other influential people. In the hands of bad actors, this technology could have massive impacts on elections, commerce and national security,” Ziven Havens, the policy director of the Bull Moose Project, told Fox News Digital.
Havens pointed to the potential for widespread “false campaign ads” or even “fake statements by world leaders,” arguing that such issues would only be “the tip of the iceberg regarding the dangers of this technology.”
That threat has caused some leaders to consider solutions to make clear what is real and what is fake, including mandatory watermarks on AI-generated content.
Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital that those looking to regulate AI will have to consider potential First Amendment ramifications, noting that while the technology does have the ability to create misinformation, some of that AI-generated content could be made with purely “illustrative” intent.
“Even if we ask ‘good actors’ to use things like watermarks, the First Amendment probably allows most satirical and illustrative content to be used,” Siegel said. “Now it is cheaper to show, for example, a vignette where Joe Biden ages in front of your eyes in a very convincing way – while not true, per se, it is illustrative to a point and watermarked or identified. In that case, it is up to the user to absorb the point or not.”
“The increase of cheap and easy ways to make content using AI will increase misinformation, and it will increase satire and illustrative content as well,” Siegel continued. “Consumers and voters will have to be on alert for all of this. Courtrooms will have to be on alert for all of this when evidence is presented. It’s the new normal of media and content.”
But Samuel Mangold-Lenett, a staff editor at The Federalist, noted that there will be ways to “mitigate the risks” associated with fake AI-generated imagery, arguing that deepfakes may not be “the biggest concern when it comes to AI.”
“They pose significant risks to public safety and can do great damage to people’s reputations, but there are ways to mitigate the risks. Laws as written can be enforced, and new ones can be created,” Mangold-Lenett said.
Mangold-Lenett added that the bigger threat is AI causing “humanity to lose touch with reality.”
“That’s one of the greater issues at play,” Mangold-Lenett said. “Similar to how search engines have weakened people’s research skills, sophisticated AI technologies have the potential to weaken critical thinking skills.”
Christopher Alexander, the chief analytics officer of Pioneer Development Group, shared a similar sentiment, telling Fox News Digital that deepfakes are “troubling,” but he also argued that they will be “far from the No. 1 concern.”
“The election concern is particularly laughable because it seems to be derived from the idea that human politicians tell the truth,” Alexander said. “If it is a lie, who cares if a machine is doing it or the president?”
Instead, Alexander argued that the biggest threat is the platforms on which the AI content will be shared, saying that social media “simplifies complex problems into cartoon caricatures of reality and rewards people for being outrageous rather than measured.”
President Biden recently signed an executive order aimed at tackling some of the evolving issues surrounding AI, a move many hailed as a positive first step. Despite that, Havens believes it will take Congress to put better guardrails in place.
“A major solution would be Congress mandating the labeling of AI-generated content online, and soon,” Havens said.