Future Tense

Is This the Moment That Facebook Caves?

After years of promises, hate still thrives on the social network. Will an advertiser boycott make a difference?

An advertiser movement against Facebook is coming into focus. Olivier Douliery/AFP via Getty Images

The first move came from the North Face. Then, an avalanche: By Wednesday, REI, Patagonia, Eddie Bauer, and Ben & Jerry’s had all pledged to pull advertising from Facebook next month over the platform’s reluctance to police hate speech and misinformation. These brands are all known for their commitment to corporate activism. But by late last week, more companies were also dropping their Facebook ads: Verizon, Coca-Cola, Unilever, and advertising agency Goodby Silverstein, which counts HP, BMW, PayPal, and Pepsi among its clients. On Friday, Facebook’s stock closed down over 8 percent, leaving Mark Zuckerberg $7.2 billion poorer. On Monday, Adidas and Ford joined the ranks. Facebook should be—and is—worried.

These brands are joining the #StopHateforProfit campaign, which six civil rights groups, including the NAACP and the Anti-Defamation League, launched June 17. The campaign’s goal is to have large Facebook advertisers “show they will not support a company that puts profit over safety,” citing the proliferation of white supremacist content, incitements of violence against protesters, and widespread voter suppression efforts on the platform as evidence of that failure.

While only a fraction of Facebook’s 8 million advertisers have now pledged support, #StopHateforProfit marks the first time Facebook has been put under serious pressure to change by the companies from which it generates revenue. So far, discussions of policy or regulation efforts in the U.S. haven’t made Facebook radically readdress its hate-speech policies. A study in May, for instance, found that white supremacist groups are still “thriving” on the platform. And user boycotts and criticism have only pushed Facebook to change in relatively small increments: Every few months or so, the company announces policy updates, such as adding warning labels to content that may cause harm; it also established an independent oversight board to judge and address content moderation issues. But many critics say a complete overhaul of Facebook’s content policies is still necessary. Will a shove from advertisers make a difference where pleas from advocates and experts haven’t?

Advertising made up about 98 percent of Facebook’s $70 billion in revenue in 2019. In turn, brands rely on Facebook to get their message out: Despite government and media criticism of the platform, Facebook continues to be part of the so-called digital duopoly, along with Google, and is the second-largest player in the digital ad market. So unlike policy discussions that target externalities like hate speech and misinformation, action from advertisers hits the company’s bottom line directly, said Dipayan Ghosh, the Pozen fellow at the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School, who researches digital privacy, artificial intelligence, and civil rights and served as a technology and economic policy adviser in the Obama White House.

When ad revenue is jeopardized, so too are investors—both the average investor and mutual funds, such as Vanguard and Fidelity, with significant stakes in the company—as Rebecca MacKinnon, the director of Ranking Digital Rights and co-founder of the citizen media network Global Voices, pointed out. While some investors have brought forth shareholder resolutions in recent years due to concerns over the social impact of the platform, MacKinnon said, mainstream investors have continued as usual. But if more advertisers join this movement, then those mainstream investors will worry about the impact on revenue. “If I was a mutual fund with major holdings with Facebook, I would be asking them about this and saying, you know, ‘You have a serious problem here, with parts of the market not wanting to be associated with what you seem to represent now,’ ” MacKinnon said.

Basically, Facebook is facing pressure from various corporate forces, which it will have to take seriously—even if, for now, the boycott represents a small percentage of revenue. “If Congress can’t get it right and can’t really regulate the business model, then I should hope that civil society calling on corporate America can start to move the needle and force Facebook’s hand in a more direct way,” said Ghosh (who has worked as a privacy adviser at Facebook). “Because I think as soon as [Zuckerberg] starts to see revenues slip down by a substantial percentage, he will have to act.”

And it’s clear that Zuckerberg has already started to do so. The company emailed some of its most important advertising clients when the boycott started to say that it had taken steps to address harmful content, the New York Times reported. On Friday, Zuckerberg announced that the platform would update its policies to ban more hate speech from ads and take steps to prohibit posts with voting misinformation. Facebook will also put warning labels on posts that violate the platform’s policies but are deemed newsworthy. But for many, these moves don’t go far enough, and Zuckerberg’s address has since been called “11 minutes of wasted opportunity to commit to change.”

Since an organized boycott campaign of this scale appears to be unprecedented in the tech industry, MacKinnon likened the potential impact of the move to brands’ successful boycotts of governments. Government policies are the usual target of advertising, sourcing, or trade boycotts, and MacKinnon sees the Facebook boycott as a kind of tech equivalent to, for instance, the ongoing Cotton Campaign, which has driven major clothing brands worldwide to boycott cotton from Uzbekistan and Turkmenistan to help end forced labor. Like #StopHateforProfit, the Cotton Campaign is supported by activists and a research community, MacKinnon said, but “it’s brands who have the power—that money talks.”

It talks because brand boycotts generally indicate an increasing overlap of moral and market pressures. In Facebook’s case, that’s partly due to public awareness of the platform’s outsize role in public discourse. As MacKinnon put it, people are finally waking up to the fact that a social media platform of world-spanning scale “clearly doesn’t have itself under control and hasn’t fully owned up to its responsibilities.” The Facebook boycott is also a direct consequence of growing corporate social responsibility across most industries. The (moral) pressure of consumers drives the (market) pressure of corporations, which together almost serve as a “pseudo-regulatory agency,” as Ghosh called it. And since the U.S., unlike countries such as Germany and France, hasn’t historically been able to use legislation to address hate speech on social media platforms, it makes sense that the country would largely rely on the market to regulate itself.

The problem with pseudo-regulation is that its goals and outcomes are vague and often limited. Some companies have pledged to remove ads from Facebook, but haven’t clarified whether they will also boycott Facebook-owned Instagram. And critics have already voiced concern over the brands’ commitment to only the month of July (though a few companies, such as Unilever, have pledged to pull ads for the rest of the year). Just consider the language certain brands are using to announce the measure, such as “hitting pause.”

“Hitting pause” suggests a temporary measure that will be forgotten the minute Aug. 1 rolls around. Advertisers rarely stay away for long from influential platforms that offer targeted advertising. “Advertisers eventually return to these companies because their scale enables them to target potential customers very efficiently,” Axios wrote regarding an ad boycott of YouTube in early 2019. “This boycott is likely to end no differently.”

And while some brands have pledged a potentially longer boycott, it’s still precarious to rely on corporations, which are driven first and foremost by the market, to direct what is effectively a policy change. “Is [the boycott] truly because of their commitment to marginalized populations in the United States, or is it because they see it as their corporate interest to join that bandwagon?” said Ghosh. “I would probably presume the latter.” In that case, Ghosh is concerned that a “meaningless half-step”—such as Facebook’s recent decision to allow users to opt out of political ads—will convince marketers to stop the boycott. “That’s what I fear and would hope that we can address by calling attention to it,” said Ghosh.

There’s another possibility—that this is the first indication the Facebook brand has truly become unappealing to advertisers, who may no longer want to be associated with the platform until it’s mended its reputation. Brands can redirect their ad budgets to Google and other media, after all, and the idea of Facebook as a “toxic brand” has been building since the Cambridge Analytica scandal, antitrust investigations, and congressional hearings. Yet while this may be the case for large corporations, smaller brands may find Facebook’s targeted advertising too influential and effective to resist as they try to revive their business amid the coronavirus pandemic, Jason Dille, who works for the ad agency Chemistry, told the New York Times. Many of Dille’s clients considered the boycott, but ultimately had to prioritize staying afloat.

What’s promising is that the brands that have already committed haven’t started to cave just yet. Despite Facebook’s attempts to assuage its boycotters, advertisers aren’t pulling out, and the organizations behind #StopHateforProfit criticized Zuckerberg’s address, vowed to continue the campaign, and laid out 10 specific steps Facebook can take to address its content moderation problems, which include submitting to regular independent audits, establishing a permanent civil rights infrastructure in the company, refunding advertisers whose ads appeared next to content that was later removed, and finding and removing public and private groups centered on hate speech and misinformation. “None of [Facebook’s plans] will be vetted or verified—or make a dent in the problem on the largest social media platform on the planet,” said the statement. “We have been down this road before with Facebook. They have made apologies in the past. They have taken meager steps after each catastrophe where their platform played a part. But this has to end now.”

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.