Big Tech’s voluntary approach to deepfakes isn’t enough, top U.S. cyberdefense official says

Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won’t be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency.

AI won’t completely change the long-running threat of weaponized propaganda, but it will “inflame” it, CISA Director Jen Easterly said at The Washington Post’s Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said.

“There is no real teeth to these voluntary agreements,” Easterly said. “There needs to be a set of rules in place, ultimately legislation.”

Deepfakes and AI-generated images have been around for several years, but as the technology improves and the tools to make them become widely available, they’ve become increasingly common on social media platforms. An AI-generated image of a sprawling refugee camp with the words “All Eyes on Rafah” went viral in late May as a way for people to show their support for Palestinians in Gaza. As major elections take place across the globe, some politicians have tried to use fake images to make their opponents look bad.

In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images.

Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven’t made it into law. The E.U. parliament passed an AI Act last year, but it won’t fully go into force for another two years.

The spread of false claims about the 2020 election is leading to threats of violence against election officials right now, Easterly said. Some poll workers have quit over the worsening environment, she said. “Those who remain often operate, frankly, in difficult conditions.”

Easterly also said that Chinese hackers are busy hacking into critical infrastructure in the United States, such as water treatment facilities and pipeline control centers, in order to “preposition” themselves to strike if there were ever a conflict between the two countries.

“They are creating enormous risk to our critical infrastructure,” Easterly said. “That is happening right now.”
