Computational Propaganda: Bots, Targeting And The Future

(Image credit: Saul Gravy / Getty Images)

A long time ago, when I was working on my Ph.D. research, I learned to use supercomputers to track the complex 3-D motions of gas blown into space by dying stars.

Using big computers in this way was still new to lots of researchers in my field, and I was often asked, "How do you know your models are right?"

Now, a few decades later, hyper-large-scale computational fluid dynamics is so common in astrophysics that no one asks me that question anymore. Machines are so fast, and so powerful, that everyone takes it as a given that they can be deployed to drive my field forward.

The machines, in other words, have long since arrived.

In the wake of events related to the last U.S. election, it now seems that the machines have arrived in a very different domain, one that threatens to upend the way democracy works (or doesn't). We find ourselves in an era of "computational propaganda" — and that should make us all very, very concerned.

Computational propaganda: The phrase itself is so strange and new that it's worth a few moments to consider what it means on its own — as well as what it means for our project of civilization. The dictionary defines propaganda as "information, especially of a biased or misleading nature, used to promote or publicize a particular political cause or point of view."

Through most of history, propaganda was mainly a tool of states or large institutions, simply because they were the only ones with the resources to deploy it on scales large enough to have a real effect. In addition, the messages these institutions deployed were themselves large-scale. By that, I mean they couldn't be targeted on a person-to-person basis. Think of movies or billboards or pamphlets being tossed out of airplanes.

But the digital world we so recently built has changed all that. The December issue of the journal Big Data was dedicated to the problem of computational propaganda. In it, researchers Gillian Bolsover and Philip Howard, of the Oxford Internet Institute, define the dangers that need to be addressed:

"Computational propaganda has recently exploded into public consciousness. The U.S. presidential campaign of 2016 was marred by evidence, which continues to emerge, of targeted political propaganda and the use of bots to distribute political messages on social media."

It's both the targeting and the use of "bots" that provide the base definition of computational propaganda. The supercomputer studies I did for my Ph.D. thesis were an early example of what is sometimes called "Big Compute": doing zillions of computations in seconds. That speed allowed my complex fluid equations to be "animated." The superfast calculations revealed behavior that would previously have been hidden from us.
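To make that concrete, here is a deliberately tiny sketch of what "Big Compute" means in practice. This is not my research code, just a toy 1-D version of the kind of fluid calculation that, scaled up to three dimensions and billions of grid cells, requires a supercomputer:

```python
# Toy illustration of "Big Compute": stepping a fluid equation forward in
# time so its behavior can be "animated." This is a minimal 1-D linear
# advection solver using upwind finite differences, not actual research
# code; real simulations do this in 3-D over billions of cells.
import numpy as np

nx, nt = 200, 100          # grid cells, time steps
dx, dt, c = 1.0, 0.4, 1.0  # cell size, time step, advection speed

x = np.arange(nx) * dx
u = np.exp(-0.01 * (x - 50.0) ** 2)  # initial gas-density pulse

for _ in range(nt):
    # Upwind update: each cell is advanced using its upstream neighbor.
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])

# Each pass through the loop is one snapshot of the flow; stringing the
# snapshots together animates it. At supercomputer scale, that loop body
# is where the zillions of computations per second go.
```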

But with the rise of the Internet and the wireless world, something new and unexpected emerged. Instead of just having Big Compute to solve equations, our digital society evolved a new universe of "Big Data." With every "click," "like" and "follow" we were leaving digital breadcrumbs out in the ether. With the rise of social media, a vast treasure-trove of information was building up that could be mined to predict our preferences, our inclinations and even our future behavior.
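A cartoon version of that mining might look like the sketch below. The users, topics and events here are invented for illustration; real systems aggregate millions of such signals into far richer models, but the principle of turning breadcrumbs into predictions is the same:

```python
# A deliberately tiny sketch of the "Big Data" idea: digital breadcrumbs
# (clicks, likes, follows) aggregated into a profile that predicts what a
# user will respond to next. All names and events below are hypothetical.
from collections import Counter

# Hypothetical breadcrumb log: (user, topic of the item engaged with)
events = [
    ("alice", "immigration"), ("alice", "immigration"),
    ("alice", "economy"), ("bob", "guns"), ("bob", "guns"),
]

profiles = {}
for user, topic in events:
    profiles.setdefault(user, Counter())[topic] += 1

def predicted_interest(user):
    """Naive prediction: the topic this user has engaged with most."""
    topic, _ = profiles[user].most_common(1)[0]
    return topic

print(predicted_interest("alice"))  # -> 'immigration'
# Behavior in, prediction out: that is the whole trade.
```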

Combine the superfast calculational capacities of Big Compute with the oceans of specific personal information comprising Big Data — and the fertile ground for computational propaganda emerges. That's how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation exactly to the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again, in terms of clicks, likes and so on). Worst of all, this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like viruses through the communities most susceptible to it.
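The refinement loop itself is simple enough to sketch. What follows is a generic "epsilon-greedy bandit" with made-up message variants and click rates, not any actual propaganda system, but it shows how a bot can automatically steer delivery toward whatever performs best:

```python
# A minimal sketch of the feedback loop described above: an agent tries
# several message variants, measures clicks, and shifts delivery toward
# whatever performs best. The variants and click rates are invented for
# illustration; this is a stand-in, not a real system.
import random

variants = ["message A", "message B", "message C"]
true_click_rate = {"message A": 0.02, "message B": 0.05, "message C": 0.11}
shown = {v: 0 for v in variants}
clicked = {v: 0 for v in variants}

def serve(epsilon=0.1):
    """Mostly exploit the best-performing variant; sometimes explore."""
    if random.random() < epsilon or all(n == 0 for n in shown.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: clicked[v] / max(shown[v], 1))

for _ in range(10_000):
    v = serve()
    shown[v] += 1
    if random.random() < true_click_rate[v]:  # simulated user response
        clicked[v] += 1

# After enough impressions, delivery concentrates on the variant that
# "works" -- no human in the loop required.
print({v: shown[v] for v in variants})
```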

As someone who has worked at the hairy edges of computational science my entire career, I am, frankly, terrified by the possibilities of computational propaganda. My fear comes exactly because I have seen how rapidly the power and capacities of digital technologies have grown. From my perspective, no matter what your political inclinations may be, if you value a healthy, functioning democracy, then something needs to be done to get ahead of computational propaganda's curve.

Clearly this is, in part, a technological problem. As Bolsover and Howard explain:

"Technical knowledge is necessary to work with the massive databases used for audience targeting; it is necessary to create the bots and algorithms that distribute propaganda; it is necessary to monitor and evaluate the results of these efforts in agile campaigning. Thus, a technical knowledge comparable to those who create and distribute this propaganda is necessary to investigate the phenomena."

But, according to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms "plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it."

And this is the challenge before us. Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can't be lost.

People are not machines. We do things for a whole lot of reasons including emotions of loss, anger, fear and longing. To combat computational propaganda's potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.


Adam Frank is a co-founder of the 13.7 blog, an astrophysics professor at the University of Rochester and author of the upcoming book Light of the Stars: Alien Worlds and the Fate of the Earth. His scientific studies are funded by the National Science Foundation, NASA and the Department of Education. You can keep up with more of what Adam is thinking on Facebook and Twitter: @adamfrank4.


Adam Frank was a contributor to the NPR blog 13.7: Cosmos & Culture. A professor at the University of Rochester, Frank is a theoretical/computational astrophysicist and currently heads a research group developing supercomputer code to study the formation and death of stars. Frank's research has also explored the evolution of newly born planets and the structure of clouds in the interstellar medium. Recently, he has begun work in the fields of astrobiology and network theory/data science. Frank also holds a joint appointment at the Laboratory for Laser Energetics, a Department of Energy fusion lab.