SOCIAL MEDIA'S IMPACT ON POLITICS

REVOLT: There has been a revolt among employees of Facebook, now called Meta, over the company's deliberate encouragement of malicious posts on social media.

By William A. Galston

With the formation of I-PAC, social media is being used and misused in political campaigns. In Covid-19 times it is particularly effective, as it reaches target audiences that may not otherwise be accessible because of lockdowns and Covid-19 restrictions. Social media played a major role in the victory of the BJP, led by Narendra Modi, in the 2014 and 2019 parliamentary elections.

On Nov. 25, an article headlined “Spot the deepfake. (It’s getting harder.)” appeared on the front page of The New York Times business section.[1] The editors would not have placed this piece on the front page a year ago. If they had, few would have understood what its headline meant. Today, most do. This technology, one of the most worrying fruits of rapid advances in artificial intelligence (AI), allows those who wield it to create audio and video representations of real people saying and doing made-up things. As this technology develops, it becomes increasingly difficult to distinguish real audio and video recordings from fraudulent misrepresentations created by manipulating real sounds and images. “In the short term, detection will be reasonably effective,” says Subbarao Kambhampati, a professor of computer science at Arizona State University. “In the longer run, I think it will be impossible to distinguish between the real pictures and the fake pictures.”[2]
The longer run may come as early as later this year, in time for the presidential election. In August 2019, a team of Israeli researchers announced a new technique for making deepfakes that creates realistic videos by substituting the face of one individual for another who is really speaking. Unlike previous methods, this one works on any two people without extensive, iterated focus on their faces, cutting hours or even days from previous deepfake processes without the need for expensive hardware.[3] Because the Israeli researchers have released their model publicly—a move they justify as essential for defense against it—the proliferation of this cheap and easy deepfake technology appears inevitable.
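To make the substitution concrete, here is a deliberately crude, classical sketch of face replacement between two arbitrary images, using OpenCV's stock face detector and Poisson blending. It is not the neural technique the Israeli researchers describe, only an illustration of the basic operation their model automates far more convincingly; the file names are placeholders.

import cv2
import numpy as np

# Toy classical face swap: find the largest face in each image, resize the
# source face onto the target face region, and blend the seam. A neural
# face-swap model would also match pose, expression, and lighting.

def largest_face(gray, cascade):
    """Return the largest detected face rectangle (x, y, w, h), or None."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda r: r[2] * r[3])

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
src = cv2.imread("source.jpg")   # face to transplant (placeholder file name)
dst = cv2.imread("target.jpg")   # scene receiving the face (placeholder)

src_rect = largest_face(cv2.cvtColor(src, cv2.COLOR_BGR2GRAY), cascade)
dst_rect = largest_face(cv2.cvtColor(dst, cv2.COLOR_BGR2GRAY), cascade)

if src_rect is not None and dst_rect is not None:
    sx, sy, sw, sh = src_rect
    dx, dy, dw, dh = dst_rect
    face = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))
    mask = np.full(face.shape, 255, dtype=np.uint8)
    center = (int(dx + dw / 2), int(dy + dh / 2))
    # Poisson blending hides the seam far better than a raw paste would.
    out = cv2.seamlessClone(face, dst, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite("swapped.jpg", out)

Even this crude paste hints at the underlying problem: the expensive parts of forgery keep collapsing into a few library calls.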
To illustrate the challenge posed by this development, I note the warning offered by the unforgettable boudoir scene in the Marx Brothers’ comedy classic “Duck Soup,” in which Chicolini, posing as Rufus T. Firefly, asks Mrs. Teasdale: “Well, who ya gonna believe, me or your own eyes?”
As the 2020 election looms, Chicolini has posed a question with which candidates and the American people will be forced to grapple. If AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said (and even doing things they never did), seeing will no longer be believing, and we will have to decide for ourselves—without reliable evidence—whom or what to believe. Worse, candidates will be able to dismiss accurate but embarrassing representations of what they said as fakes, an evasion that will be hard to disprove.
In 2008, Barack Obama was recorded at a small gathering saying that residents of hard-hit areas often responded by clinging to guns and religion. In 2012, Mitt Romney was recorded telling a group of funders that 47% of the population was happy to depend on the government for the basic necessities of life. And in 2016, Hillary Clinton dismissed many of Donald Trump’s supporters as a basket of deplorables. The accuracy of these recordings was undisputed. In 2020, however, campaign operatives will have technological grounds for challenging the authenticity of such revelations, and competing testimony from attendees at private events could throw such disputes into confusion. Deepfakes, says Nick Dufour, one of Google’s leading research engineers, “have allowed people to claim that video evidence that would otherwise be very compelling is a fake.”[4]
Even if reliable modes of detecting deepfakes exist in the fall of 2020, they will operate more slowly than the generation of these fakes, allowing false representations to dominate the media landscape for days or even weeks. “A lie can go halfway around the world before the truth can get its shoes on,” warns David Doermann, the director of the Artificial Intelligence Institute at the University at Buffalo.[5] And if defensive methods yield results short of certainty, as many will, technology companies will be hesitant to label the likely misrepresentations as fakes.
The capacity to generate deepfakes is proceeding much faster than the ability to detect them. In AI circles, reports The Washington Post’s Drew Harwell, identifying fake media has long received less attention, funding, and institutional support than creating it. “Why sniff out other people’s fantasy creations when you can design your own?” asks Hany Farid, a computer science professor and digital forensics expert at the University of California at Berkeley. “We are outgunned,” Farid says. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”[6] As a result, the technology is improving at breakneck speed. “In January 2019, deep fakes were buggy and flickery,” Farid told The Financial Times. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”[7]
As Nasir Memon, a professor of computer science and engineering at New York University, puts it:
“As a consequence of this, even truth will not be believed. The man in front of the tank at Tiananmen Square moved the world. Nixon on the phone cost him his presidency. Images of horror from concentration camps finally moved us into action. If the notion of … believing what you see is under attack, that is a huge problem.”[8]
Faced with this epistemological anarchy, voters will be more likely than ever before to remain within their partisan bubbles, believing only those politicians and media figures who share their political orientation. Evidence-based persuasion across partisan and ideological lines will be even more difficult than it has been in recent decades, as the media has bifurcated along partisan lines and political polarization has surged.
Legal scholars Bobby Chesney and Danielle Citron offer a comprehensive summary of the threat deepfake technologies pose to our politics and society. These realistic yet misleading depictions will be capable of distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.[9]
Beyond domestic politics, deepfake technologies pose a threat to America’s diplomacy and national security. As Brookings researchers Chris Meserole and Alina Polyakova argue, the U.S. and its allies are “ill-prepared” for the wave of deepfakes that Russian disinformation campaigns could unleash.[10] Chesney and Citron point out that instead of the tweets and Facebook posts that disrupted the 2016 U.S. presidential campaign, Russian disinformation in 2020 could take the form of a “fake video of a white police officer shouting racial slurs or a Black Lives Matter activist calling for violence.”[11] A fake video depicting an Israeli official saying or doing something inflammatory could undermine American efforts to build bridges between the Jewish state and its neighbors. A well-timed forgery could tip an election, they warn.
This is a global problem. Already, a suspected deepfake may have contributed to an attempted coup in Gabon and to an unsuccessful effort to discredit Malaysia’s economic affairs minister and drive him from office. There is evidence suggesting that the diplomatic confrontation between Saudi Arabia and Qatar may have been sparked by a fake news story featuring invented quotes by Qatar’s emir. A high-tech Russian disinformation campaign that tried to prevent the election of Emmanuel Macron as France’s president in 2017 was thwarted by a well-prepared Macron team, but might have succeeded against a less alert candidate. In Belgium, a political party created a deepfake video of President Donald Trump apparently interfering in the country’s internal affairs. “As you know,” the video falsely depicted Trump as saying, “I had the balls to withdraw from the Paris climate agreement—and so should you.” A political uproar ensued and subsided only when the party’s media team acknowledged the high-tech forgery.[12] A deepfake depicting President Trump ordering the deployment of U.S. forces against North Korea could trigger a nuclear war.

WHAT CAN WE DO ABOUT DEEPFAKES?
What’s already happening
Awareness of the challenges posed by deepfake technologies is gradually spreading through the U.S. government. On June 19, 2019, the House Intelligence Committee convened a hearing at which several highly regarded AI experts offered testimony about the emerging threat. In a formal statement, the committee expressed its determination to examine “what role the public sector, the private sector, and society as a whole should play to counter a potentially grim, ‘post-truth’ future.” In his opening remarks, committee Chair Adam Schiff warned of a “nightmarish” scenario for the upcoming presidential campaigns and declared that “now is the time for social media companies to put in place policies to protect users from misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections.”[13]
On Sept. 10, 2019, the Senate Committee on Homeland Security and Government Affairs endorsed the Deepfake Report Act of 2019, which passed the full Senate as amended by unanimous consent on Oct. 24 and was referred to the House Energy and Commerce Committee on Oct. 28. This bill would require the Secretary of Homeland Security to issue an annual report on the state of what it terms “digital content forgery technology,” including advances in technology and its abuse by domestic and foreign actors.[14]
In the executive branch, the Defense Advanced Research Projects Agency (DARPA) has spearheaded the effort to fight malicious deepfakes. Programs announced so far include Media Forensics (MediFor) and Semantic Forensics (SemaFor). The former, according to the official announcement, will bring together world-class researchers to “level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video.”[15] Once successfully integrated into an end-to-end system, MediFor would not only detect deepfake manipulations but also provide detailed information about how they were generated.
SemaFor represents a refinement of this effort. Because detection strategies that rely on statistics can be easily fooled and foiled, SemaFor will focus on “semantic errors” such as mismatched earrings that would enable researchers to identify deepfakes that algorithms might overlook. To do this, DARPA plans to invest heavily in machines capable of simulating, but speeding up, the processes of common-sense reasoning and informal logic employed by human beings.[16]
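To see why purely statistical detection is brittle, consider a minimal sketch of that older style of detector: it flags images whose high-frequency Fourier energy looks anomalous, a trace that GAN upsampling often leaves behind. The cutoff and threshold below are illustrative assumptions, not calibrated values.

import numpy as np

def high_freq_energy_ratio(gray, cutoff=0.25):
    """Fraction of spectral energy beyond a radial frequency cutoff."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance of each frequency bin from the spectrum's center,
    # normalized so the edge of the spectrum sits near radius 1.
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

def looks_synthetic(gray, threshold=0.35):
    """Crude flag for anomalous high-frequency energy. A real detector
    would learn this boundary from labeled real and fake imagery."""
    return high_freq_energy_ratio(gray) > threshold

# Random noise stands in here for a real grayscale video frame.
print(looks_synthetic(np.random.rand(256, 256)))

An adversary can simply normalize such statistics away, which is precisely why SemaFor shifts toward semantic contradictions that are much harder for a generator to fix automatically.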
Turning to the private sector: In September 2019, Facebook announced a new $10 million “Deepfake Detection Challenge,” a partnership with six leading academic institutions and other companies, including Amazon and Microsoft.[17] To jumpstart the challenge, Facebook promises to release a dataset of faces and videos from consenting individuals.[18]
Meanwhile, Google has joined forces with DARPA to fund researchers at the University of California at Berkeley and the University of Southern California who are developing a new digital forensics technique based on individuals’ style of speech and body movement, termed a “softbiometric signature.” Although this technique has already achieved 92% accuracy in experimental conditions, this success may prove temporary. One of the researchers, Professor Hao Li, characterizes the current situation as “an arms race between digital manipulations and the ability to detect [them].” Li says, “The advancements of AI-based algorithms are catalyzing both sides.”[19]
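The softbiometric idea can be sketched very simply, assuming landmark tracks already come from some upstream extractor; random walks stand in for real data below, and every name and threshold is an illustrative assumption. The gist: summarize how a person's tracked facial landmarks move over time and compare that signature to a profile enrolled from authentic footage.

import numpy as np

def movement_signature(landmarks):
    """landmarks: (frames, points, 2) array of tracked (x, y) positions.
    Returns per-point mean speed and speed variance as a feature vector."""
    velocity = np.diff(landmarks, axis=0)        # frame-to-frame motion
    speed = np.linalg.norm(velocity, axis=2)     # (frames - 1, points)
    return np.concatenate([speed.mean(axis=0), speed.var(axis=0)])

def matches_profile(sig, profile, threshold=0.9):
    """Cosine similarity against the enrolled profile of the real person."""
    cos = sig @ profile / (np.linalg.norm(sig) * np.linalg.norm(profile))
    return cos >= threshold

# Stand-in data: 300 frames of 68 tracked landmarks, as random walks.
rng = np.random.default_rng(0)
enrolled = movement_signature(rng.normal(size=(300, 68, 2)).cumsum(axis=0))
clip = movement_signature(rng.normal(size=(300, 68, 2)).cumsum(axis=0))
print("consistent with enrolled identity:", matches_profile(clip, enrolled))

The appeal of such signatures is that they key on habits a forger must reproduce convincingly over many frames, not on pixel-level artifacts that the next generation of models will erase.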
Anti-deepfake efforts extend beyond established companies. The pace of development in this field already has generated one startup, Deeptrace, whose publications on the international uses of this technology have created a stir, and another—Truepic—that offers clients verified data sources against which possibly fraudulent videos can be checked.[20]
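The verified-source approach that Truepic exemplifies can be sketched in a few lines: fingerprint media at capture time, then check any later copy against the registry. The in-memory dict and function names below are hypothetical stand-ins; a real service signs content at capture and must tolerate re-encoding, which a raw byte hash does not.

import hashlib

def fingerprint(path):
    """SHA-256 of the file's bytes, streamed to handle large videos."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

registry = {}  # fingerprint -> capture metadata (hypothetical store)

def register_at_capture(path, metadata):
    """Called on the capture device, before the file can be altered."""
    registry[fingerprint(path)] = metadata

def verify_copy(path):
    """Returns capture metadata if this exact file was registered."""
    return registry.get(fingerprint(path))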

Legislative options
Chesney and Citron comprehensively survey possible legislative responses to the dangers posed by this emerging technology, and their conclusions are less than encouraging. It is unlikely that an outright ban on deepfakes would pass constitutional muster. Existing bodies of civil law, such as protections against copyright infringement and defamation, are likely to be of limited utility.
As Chesney and Citron point out, the content screening and removal policies of the platforms may prove to be “the most salient response mechanism of all” because their terms-of-service agreements are “the single most important documents governing digital speech in today’s world.”[24] Internet platform providers have an opportunity to contribute—voluntarily—to a 2020 electoral process that honors norms of truth-telling more than in recent elections. If they do not avail themselves of this opportunity—and if deepfakes rampage through next year’s election, leaving a swathe of falsehoods and misrepresentations in their wake—Congress may well move to strip the platforms of the near total immunity they have enjoyed for a quarter of a century, and the courts may rethink interpretations of the First Amendment that prevent lawmakers from protecting fundamental democratic processes.
Facebook’s refusal to remove a crudely altered video of House Speaker Nancy Pelosi played poorly in many quarters, as did the explanation that the company’s rules did not prohibit posting false information. If I were Mark Zuckerberg, I wouldn’t bet the future of my company on the continuation of business as usual.

Courtesy: brookings.edu
