© 2024 Milwaukee Public Media is a service of UW-Milwaukee's College of Letters & Science

Tech Companies Announce Plan To Identify Extremist Content Online

ARI SHAPIRO, HOST:

Four major tech companies - Facebook, Microsoft, Twitter and YouTube - say they're going to work together to identify extremist content. It's a concrete though limited move to get terrorist propaganda off the world's leading Internet platforms. NPR's Aarti Shahani reports.

AARTI SHAHANI, BYLINE: Security researchers, the people who combat hackers, already share intel on the internet. When they find something they believe to be malware, code that can attack computers, they let each other know. They put the so-called hash, the unique DNA of that bad code, into a central database and say, hey, everyone, look out for this thing. Now the internet companies that deal with content are doing the same thing, not for malware but for ISIS beheading videos and other extremist propaganda.

HANY FARID: It's exactly like that. And, in fact, this is exactly the same type of technology that is now being used to remove that content.

SHAHANI: Hany Farid helped to build that technology. He's a computer scientist at Dartmouth and an advisor to the Counter Extremism Project. The nonprofit's been trying to get the internet companies to take action. And now, finally, Farid notes, they are changing their tune.

FARID: For years, they have said that this is technologically not feasible. That was absolutely not true. It's always been technologically feasible. And they've been dragging their feet for a long time, I think, and for too long - until the pressure simply mounted.

SHAHANI: Pressure has mounted from governments around the world and the media. To be clear, the four tech companies are not going to rely on algorithms to automatically ID all the terrorist calls to violence online.

FARID: There's always a human in the loop at the beginning of the process determining that this content is extremist-related or violates terms of service.

SHAHANI: Once a human tags the video or picture as bad, then the tech can take over and crawl through a platform to look for every instance of it and pull it. Note the platforms are not working together to pull the content. When Twitter hits the delete button, nothing happens on Facebook or YouTube.

FARID: Each of these companies will be individually analyzing this content, comparing it against a database of known bad content and then making their own internal decisions on how to handle that.
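The workflow Farid describes can be sketched in a few lines: a human reviewer flags a piece of content, its hash goes into a shared database, and each platform then checks uploads against that database on its own. The sketch below is a simplified illustration, not the companies' actual system; real deployments use perceptual hashes such as Microsoft's PhotoDNA, which survive re-encoding and cropping, whereas the cryptographic SHA-256 used here only matches byte-identical files.

```python
import hashlib

# Hypothetical shared database of hashes of flagged content.
# In the real system, each company consults the shared list but
# makes its own removal decisions.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Compute the 'DNA' of a piece of content (SHA-256 for illustration)."""
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> None:
    """A human reviewer has tagged this content; add its hash to the database."""
    shared_hash_db.add(fingerprint(content))

def is_known_bad(content: bytes) -> bool:
    """Each platform independently checks an upload against the shared database."""
    return fingerprint(content) in shared_hash_db

# Platform A's human reviewer flags a video (a human is always in the loop first).
flag_content(b"flagged-video-bytes")

# Platform B later sees the same file and matches it on its own.
print(is_known_bad(b"flagged-video-bytes"))    # True
print(is_known_bad(b"different-video-bytes"))  # False
```

Note that nothing here deletes anything automatically across platforms: matching a hash only tells a company the content is on the shared list, and what happens next is its own internal decision, just as the report describes.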

SHAHANI: According to the joint announcement, no outside third party will monitor what content is pulled or how quickly or slowly it's done. Farid says that could limit the effectiveness of the move.

FARID: Look, at the end of the day, these tech companies are private businesses. And their mandate is to reach customers and to make money.

SHAHANI: He says collaborating to actively remove customers and content is not necessarily in their financial interests. Aarti Shahani, NPR News, San Francisco. Transcript provided by NPR, Copyright NPR.

Aarti Shahani is a correspondent for NPR. Based in Silicon Valley, she covers the biggest companies on earth. She is also an author. Her first book, Here We Are: American Dreams, American Nightmares (out Oct. 1, 2019), is about the extreme ups and downs her family encountered as immigrants in the U.S. Before journalism, Shahani was a community organizer in her native New York City, helping prisoners and families facing deportation. Even if it looks like she keeps changing careers, she's always doing the same thing: telling stories that matter.