Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/deejungx/deepfake-literacy
Last synced: about 1 month ago
JSON representation
- Host: GitHub
- URL: https://github.com/deejungx/deepfake-literacy
- Owner: deejungx
- Created: 2023-12-07T04:44:42.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2023-12-07T05:06:35.000Z (about 1 year ago)
- Last Synced: 2024-11-03T06:41:46.292Z (3 months ago)
- Size: 33.3 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# THE MISINFORMATION MACHINE
*Excerpt from the book "The Coming Wave" by Mustafa Suleyman*
In the 2020 local elections in India, the Bharatiya Janata Party Delhi president, Manoj Tiwari, was filmed making a campaign speech—in both English and a local Hindi dialect. Both looked and sounded convincingly real. In the video he goes on the attack, accusing the head of a rival party of having “cheated us.” But the version in the local dialect was a deepfake, a new kind of AI-enabled synthetic media. Produced by a political communications firm, it exposed the candidate to new, hard-to-reach constituencies. Lacking awareness of the discourse around fake media, many assumed it was real. The company behind the deepfake argued it was a “positive” use of the technology, but to any sober observer this incident heralded a perilous new age in political communication. In another widely publicized incident, a clip of Nancy Pelosi was reedited to make her look ill and impaired, and it then circulated widely on social media.
Ask yourself, what happens when anyone has the power to create and broadcast material with incredible levels of realism? These examples occurred before the means to generate near-perfect deepfakes—whether text, images, video, or audio—became as easy as writing a query into Google. As we saw in chapter 4, large language models now show astounding results at generating synthetic media. A world of deepfakes indistinguishable from conventional media is here. These fakes will be so good our rational minds will find it hard to accept they aren’t real.
Deepfakes are spreading fast. If you want to watch a convincing fake of Tom Cruise preparing to wrestle an alligator, well, you can. More and more everyday people will be imitated as the required training data falls to just a handful of examples. It’s already happening. A bank in Hong Kong transferred millions of dollars to fraudsters in 2021, after one of its clients was impersonated by a deepfake. Sounding identical to the real client, the fraudsters phoned the bank manager and explained that the company needed to move money for an acquisition. All the documents seemed to check out, the voice and character were flawlessly familiar, so the manager initiated the transfer.
Anyone motivated to sow instability now has an easier time of it. Say three days before an election the president is caught on camera using a racist slur. The campaign press office strenuously denies it, but everyone knows what they’ve seen. Outrage seethes around the country. Polls nose-dive. Swing states suddenly shift toward the opponent, who, against all expectations, wins. A new administration takes charge. But the video is a deepfake, one so sophisticated it evades even the best fake-detecting neural networks.
The threat here lies not so much with extreme cases as in subtle, nuanced, and highly plausible scenarios being exaggerated and distorted. It’s not the president charging into a school screaming nonsensical rubbish while hurling grenades; it’s the president resignedly saying he has no choice but to institute a set of emergency laws or reintroduce the draft. It’s not Hollywood fireworks; it’s the purported surveillance camera footage of a group of white policemen caught on tape beating a Black man to death.
Sermons from the radical preacher Anwar al-Awlaki inspired the Boston Marathon bombers, the attackers of Charlie Hebdo in Paris, and the shooter who killed forty-nine people at an Orlando nightclub. Yet al-Awlaki died in 2011, the first U.S. citizen killed by a U.S. drone strike, before any of these events. His radicalizing messages, though, were still available on YouTube until 2017. Suppose that, using deepfakes, new videos of al-Awlaki could be “unearthed,” each commanding further targeted attacks with precision-honed rhetoric. Not everyone would buy it, but those who wanted to believe would find it utterly compelling.
Soon these videos will be fully and believably interactive. You are talking directly to him. He knows you and adapts to your dialect and style, plays on your history, your personal grievances, your bullying at school, your terrible, immoral Westernized parents. This is not disinformation as blanket carpet bombing; it’s disinformation as surgical strike: attacks against politicians or businesspeople, disinformation aimed at major financial-market disruption or manipulation, media designed to poison key fault lines like sectarian or racial divides, even low-level scams. Trust is damaged and fragility again amplified.
Eventually, entire and rich synthetic histories of seemingly real-world events will be easy to generate. Individual citizens won’t have the time or the tools to verify a fraction of the content coming their way. Fakes will easily pass sophisticated checks, let alone a two-second smell test.