As I See It: The Misinformation Crisis
September 13, 2021 Victor Rozek
The person who invented the wheel was probably also the first person to run over his/her own toes. That’s just the nature of technology: it’s helpful, but not always friendly. From the wheel to social media, technology has served as both the engine of progress and an instrument of grief. And, paradoxically, there is perhaps no greater accelerant of progress and regress than computer technology.
The name of Floyd Ray Roseberry is deservedly forgettable, but he recently managed to engineer his ten minutes of digital fame by live-streaming his imagined grievances on Facebook. As a remedy, he claimed to have a bomb in his truck and threatened to blow up two blocks of the capital.
The suspect was apparently able to stream for hours before social media platforms took notice and, in the meantime, clips of his stream migrated to Twitter and YouTube. Legislators, many of whom barely survived the insurrection, were incensed by the continuing lack of control over the spread of dangerous and violence-inciting content. The episode reignited the tensions between tech firms and Congress in their prolonged tug-of-war over the boundaries of the First Amendment.
For social media platforms, managing the spread of misinformation in real time remains a daunting challenge. Determining veracity, deducing intention, ascertaining the potential for violence, not to mention acting as flow restrictors on the First Amendment, are not tasks easily dispatched by algorithms.
But if accuracy and veracity are the aim, that ship may have already sailed. The Washington Post reports that the German Marshall Fund conducted a three-month study of activity on US sites and found that more than one in five interactions (shares, likes, and comments) occurred on “outlets that gather and present information irresponsibly.”
In other words, people either don’t know or don’t mind that they are being lied to, and no one knows whether they visit these sites primarily for entertainment, for education, or both. But if the cumulative impacts are any indication, it may not matter.
Leonard Pitts Jr., writing for the Miami Herald, offers a more global insight into the impact of speech when magnified by technology. Pitts argues that we are experiencing “three simultaneous existential emergencies”: not only violent attacks on the government, but also the continuing struggle to contain Covid and the catastrophic effects of climate change. And while each crisis is distinct, he argues that “ultimately, they are not different threats at all, but rather different manifestations of the same threat. [They are] just facets of a misinformation crisis.”
He offers a quote from Martin Luther King Jr.: “Nothing in all the world is more dangerous than sincere ignorance and conscientious stupidity.” Indeed, willful ignorance can easily find online validation, and its amplified influence now prevents solutions to serious, life-threatening problems. Pitts concludes that if something does not change soon, our epitaph as a species may well turn out to be, “Too stupid to live.”
Meaningful change, however, may be problematic. Misinformation is now an industry. For a price, there are companies that will spread whatever fiction their clients desire. Russia and other governments have successfully weaponized social media, operating many hundreds of migrating misinformation sites that create a whack-a-mole challenge for monitoring efforts. Increasingly, the battle for democracy’s survival is being fought with keyboards, not cannons.
Most recently, advances in AI and machine learning are rapidly paving the way for the ultimate misinformation nightmare: the so-called “deepfake.” These are computer-generated videos that depict events that never happened and show people saying and doing things they never said or did. As the technology is perfected, it will become all but impossible for viewers to distinguish real from fabricated content. The aim is not only to deceive the viewer, but to sow enough doubt and confusion that nothing can be trusted. Once images, audio, and video are suspect, everything will be coated with a counterfeit patina – which, for abusers of this technology, is exactly the point.
If you can’t trust your eyes and ears, what can you trust? The answer is: a favored news provider. Or, in other words, a source that confirms your existing bias. It wasn’t always so. Pitts reminds us that in 1972 a national poll was conducted to identify The Most Trusted Person in America. The winner was not a politician or a sports icon, not a religious leader or a celebrity. The honor went to Walter Cronkite, the anchor of the CBS Evening News.
For those too young to remember Cronkite, it must seem unimaginable that the country could agree on the trustworthiness of a single mainstream media presence. On the other hand, the discord, fanaticism, and crackpot theories that abound today would have been unimaginable back then.
As the demand for content restraint grows, so does the backlash against it. Texas Governor Greg Abbott recently signed into law a bill that bans large social media platforms from blocking or taking down user posts based on their political viewpoints. How far that’s intended to go is not clear; almost anything can slither under that umbrella. Pronouncements of an extreme political nature, when lacking mainstream support, inevitably devolve into violent expression. A similar law was enacted in Florida but was struck down by the courts. The Texas law may face comparable obstacles.
Regardless, the hostility toward social media providers is misplaced. The Bill of Rights guarantees free speech, but not free reach. No one has the “right” to unlimited use of a privately owned network, untethered from rules and restrictions.
There is a Biblical passage that says: “In the beginning was the Word.” It’s no accident that the Bible describes God as the Word, and that the Word is what created everything. Words are consequential not only because they describe reality, but because they create reality. That is why what we hear and read, what we marinate ourselves in each day, matters. The IT professionals who control the platforms, create the monitoring algorithms, and oversee the systems that reach around the globe bear a unique and daunting responsibility for the integrity of the information they broadcast. They are, for better or worse, creators, and they too will have to live in the reality they help create.