While the expression ‘Four Horsemen of the Infocalypse’ has long been used on the internet to refer to criminals like drug dealers, money launderers, pedophiles, and terrorists, the term ‘Infocalypse’ was brought to the mainstream relatively recently by MIT grad Aviv Ovadya, who is currently working as the Chief Technologist at the Center for Social Media Responsibility (UMSI). In 2016, at the height of the fake news crisis, Ovadya expressed concerns to technologists in Silicon Valley about the spread of misinformation and propaganda disguised as real news in a presentation titled ‘Infocalypse’.
According to his presentation, internet platforms like Google, Facebook, and Twitter earn their revenue through models geared toward rewarding clicks, shares, and viewership rather than prioritizing the quality of information. This, he argued, would become a real problem sooner rather than later, given how easy it has become for anyone to post whatever they like without any filter. With major internet companies largely ignoring his concerns, Ovadya described the situation as “a car careening out of control” that everyone simply ignored.
His predictions have now proven to be frighteningly accurate, with the lines between truth and politically motivated propaganda becoming increasingly blurred. Ovadya argues that AI will be widely used over the next couple of decades for misinformation campaigns and propaganda. To combat this dystopian future, he is working with a group of researchers and academics to find ways to prevent an information apocalypse.
The Impact of Deepfakes and Misinformation
One concerning aspect of this misinformation landscape is the rise of deepfakes: AI-generated videos in which one person's face, often a celebrity's, is superimposed onto another video, frequently of a pornographic nature. Reddit has taken action by banning subreddits like ‘r/deepfakes’ and ‘r/deepfakesNSFW’, which had thousands of members distributing fake pornographic videos of various celebrities, including Taylor Swift and Meghan Markle. Other platforms, including Discord, Gfycat, Pornhub, and Twitter, have also implemented bans on non-consensual face-swap porn.
Beyond the violation of individual privacy, Ovadya warns that such videos could have a destabilizing effect on the world if used for political gain. Deepfakes can create the false belief that an event occurred, influencing geopolitics and the economy. All it takes is feeding an AI enough footage of a target politician and morphing them into another video in which they appear to say potentially damaging things.
Recognizing the Threat of Fake News
Despite the challenges posed by misinformation, there is a glimmer of hope. People are finally acknowledging that they were wrong to ignore the threat of fake news just two years ago. Ovadya notes, “In the beginning, it was really bleak — few listened. But the last few months have been promising. Some of the checks and balances are beginning to fall into place.” It’s encouraging that technologists are starting to tackle a problem that is expected to worsen in the coming years.
As we move forward, it remains to be seen how well prepared these technologists will be for the coming information warfare, especially given how many warning signs have already emerged. Ovadya emphasizes the need for vigilance and action to combat a misinformation crisis that continues to evolve.
What You Will Learn
- Understanding the term 'Infocalypse': Learn about how the term was popularized and its implications in the context of misinformation.
- The role of social media platforms: Discover how platforms prioritize clicks and shares over the quality of information.
- The dangers of deepfakes: Understand how AI-generated content can manipulate perceptions and influence political outcomes.
- Recognizing the shift in awareness: Explore how acknowledgment of fake news threats is growing and what that means for future actions.