What is coordinated inauthentic behavior on social media? - Slate

Were empty seats at President Donald Trump’s June 20 rally in Tulsa, Oklahoma, caused by “coordinated inauthentic behavior”? Nicholas Kamm/Getty Images

At the start of a recent hearing of the House Permanent Select Committee on Intelligence, Rep. Adam Schiff praised representatives from Facebook, Twitter, and Google for having “taken significant steps and invested resources to detect coordinated inauthentic behavior.” The comment passed without note, as if “coordinated inauthentic behavior”—or CIB, as those really in the know call it—is the most natural thing in the world for tech companies to be rooting out and for members of Congress to be talking about.

Such casual use of the phrase is remarkable when you remember that it was invented, by Facebook itself, only around two years ago. It’s more remarkable still once you know, as former Facebook chief security officer Alex Stamos told me on The Lawfare Podcast, that the company was going to call it “coordinated inauthentic activity” but thought it probably best to avoid the acronym CIA, a reminder of how arbitrarily some terms of art get created. And perhaps what makes it most remarkable of all is that no one really knows what it means.

Most commonly used when talking about foreign influence operations, the phrase sounds technical and objective, as if there’s an obvious category of online behavior that crosses a clearly demarcated line between OK and not OK. But a few recent examples show that’s far from the case. This lack of clarity matters because as the election season heats up, there’s going to be plenty of stuff online that is coordinated and inauthentic to varying degrees, and as things stand, we’re leaving it to tech companies to tell us, without a lot of explanation, when something crosses over that magical line into CIB. That needs to change.

Let’s take the example of the TikTok teens and K-pop fans. After poor attendance at President Donald Trump’s campaign rally in Tulsa, Oklahoma, on June 20, the internet was enamored with the story of how an Iowan grandmother had posted a TikTok video that led to a movement of young people reserving tickets for the rally to artificially inflate expected attendance numbers and mess with the Trump campaign’s data collection. The campaign spread across TikTok and other platforms and was tactical and relatively sophisticated. Participants deleted videos after 24 to 48 hours to help conceal their plans and exchanged advice on how to acquire a Google Voice phone number so they could sign up with fake details. The rally’s empty seats were an embarrassment for the president. Rep. Alexandria Ocasio-Cortez tweeted about how Trump’s campaign had been “ROCKED” and the Zoomers had made her proud. Whether or not the teens actually had any effect on rally attendance is debatable, but either way, on its face, it was a fun story of youth ingenuity with no harm done.

But it’s not hard to imagine the very same set of actions by a different group of actors against a different target getting a different public response. What if QAnon conspiracy theorists or 4chan users targeted a Biden rally? Or a group of Russian or Chinese youth started signing up for anti-lockdown protest events? Obviously the TikTok Teen Tulsa Tomfoolery is a good kind of coordinated inauthentic behavior that is totally distinguishable from these other bad kinds on some kind of principled basis, right?

I asked this question on Twitter (earning an “ok boomer” reply from which I am still reeling). Partly in response, Facebook’s head of security (or, to use my unofficial title, chief CIB hunter) explained that the teens’ stunt wouldn’t have met Facebook’s definition of CIB because it did not involve the use of fake accounts or coordination to mislead users of the platform itself (as opposed to misleading people off the platform). I’m grateful to Facebook for engaging with the debate—these standards are still way too opaque, so public explanation of its thinking is helpful. But in this case, Facebook’s definition is only a small part of the question, not least because most of the activity took place on other platforms. (TikTok has an even more opaque standard for its version of CIB, which I fear we’re going to learn more about the hard way.)

The same cannot be said of another recent CIB controversy. In Popular Information, Judd Legum described how a network of 14 purportedly independent large Facebook pages drove traffic to the conservative site the Daily Wire, one of the most popular publications on Facebook, including by publishing the same articles at the same time with the same text. As New York Times writer Charlie Warzel put it, “seems coordinated and inauthentic to me.” Facebook’s chief CIB hunter explained, again on Twitter, that CIB is reserved for the most egregious violations and this didn’t meet the threshold because the accounts weren’t fake (although the company did admit to Popular Information today, months after Legum’s original reporting, that the pages were breaking its rules on branded content). The New York Times has previously reported that Facebook’s reluctance to act against these pages was driven by fear of appearing biased against conservatives, which Facebook disputes. Whatever the motivation, such incidents and the lack of transparency around them raise the specter of political considerations playing a role in deciding when or how to take action.

All of the major platforms, loath to get drawn into taking political sides, have insisted that their CIB-related rules are based on behavior, not content, in an effort to make the decisions appear neutral. But as these examples, and many others, show, defining what is coordination and what is inauthentic is far from a value-free judgment call. Rare is the piece of online content that is truly authentic and not in some way trying to game the algorithms. Coordination and authenticity are not binary states but matters of degree, and this ambiguity will be exploited by actors of all stripes. Michael Bloomberg’s brief presidential campaign was a case in point, leaving platforms scrambling to decide what to do about campaign employees tweeting identical messages and influencers posting memes for money.

We are just at the very beginning of working out what the norms for acceptable online political mobilization are, and the only way to do this is through open and public debate. How many accounts do you need to constitute a “network”? When is an account inauthentic enough to be classified as “fake”? Is misrepresenting your name, location, or, say, financial ties enough? What exactly constitutes “coordination,” and how exactly do companies decide if the coordination is a grassroots movement or a carefully planned “influence operation”? Why are journalists able to find CIB, or things that look like it, before platforms? Is it OK if the behavior only misleads people off platform instead of other users?

Too often, calls for greater transparency are met with the platforms’ response that being more open about standards will only allow bad actors to better game the system: if they know the rules, they know how to work around them. This might be true, but transparency is a trade-off. CIB must not only be defined and removed but also be seen to be defined in advance and then removed, in order to restore people’s faith that the standards are indeed all about the behavior and not influenced by other factors. I’m not convinced we’ve got the transparency trade-off right. For now, platforms (often, it seems, quietly nudged by governments) tell us, “Trust us, we know it when we see it.” Compounding the confusion, platforms work together to detect and remove CIB but also seemingly apply different standards. Worse still, fake accounts are apparently central to a finding that the magical line has been crossed, yet platforms hold almost all the information needed to make that call. To their credit, platforms are getting better, but the past few weeks make it clear we’ve got a lot further to go.

And while we need more transparency from platforms about what their rules are so that we can hold them to their own definitions, there’s a deeper issue of why we’re leaving this to be debated on the terms of platforms’ particular policies in the first place. Facebook does have a somewhat detailed definition of CIB (even if it should be more detailed still), but Schiff obviously was not intending to congratulate Twitter and Google for removing what Facebook defined as CIB. We need to work out what this “generic” sense of CIB—the one that exists in the public imagination—really means. When we let platforms decide this alone, they can do so in a way that makes CIB seem like a matter peripheral to their products. But opening up the question could suggest an answer that platforms fear: As things stand, CIB is impossible to clearly define or completely avoid. It may be that we need far more radical reforms than individual CIB-hunting operations (reforms centered on transparency and changing algorithmic amplification) to make sure public discourse isn’t exploited and manipulated in corrosive ways. Congress should not be simply congratulating these platforms for removing CIB, but getting them to tell us exactly what they think it is and why a narrow definition serves society and not their own interests.

If regulators and the public more broadly have been happy to let platforms define the terms of the debate so far, it’s in large part because of a scary narrative about resourceful foreign adversaries turning social media platforms into a “battlefield” that requires “war rooms” and secretive intelligence sharing to tame. But as Renée DiResta of the Stanford Internet Observatory has warned, most online manipulation is entirely homegrown. The line between CIB and legitimate political organizing that is simply good at taking advantage of online affordances will never be easy to draw, but we can’t just throw up our hands and let private companies draw it for us without adequate oversight. And we shouldn’t uncritically celebrate their decisions as long as those decisions come out in favor of “our side.” The question of whether the teens should have done what they did (to which I say: go for it! I swear I’m not a fun-hating boomer!) is different from the governance question of what we demand platforms do in response. There cannot be a “CIB for good” carve-out from prohibitions on CIB, not least because we will never agree on what is “good.”

Oh, and the Iowan grandma who led the Tulsa rally caper? She’s been recruited to join a coalition supporting the Biden campaign, and teens have been contacting her with suggestions for more pranks to play on Trump’s campaign. She’s sure not to be the only one making plans. So I guess the good news is that there will be plenty of opportunities to clarify what CIB means in the coming months.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
