Facebook Failed to Make Good on a Promise Because It Has Too Much Faith in Its Tech

Facebook CEO Mark Zuckerberg. (Nam Y. Huh/AP)


In February, Facebook said it was taking a strong stand against discrimination, forbidding its millions of advertisers from excluding users based on race, sex, gender, and other criteria. It also said it would not run housing, employment, or credit ads that targeted or excluded certain races.

Such advertising is already illegal in the US, but Facebook had permitted it anyway, an October 2016 ProPublica investigation found. Last week, ProPublica revisited the issue, and found that it could still buy dozens of rental property ads that excluded African-Americans, Spanish speakers, and others.

Now Facebook is finally shutting down the technology that makes this exclusion possible, according to a letter Facebook COO Sheryl Sandberg sent this week to the Congressional Black Caucus (CBC).

“Until we can better ensure that our tools will not be used improperly, we are disabling the option that permits advertisers to exclude multicultural affinity segments from the audience for their ads,” Sandberg writes. The “exclude” function will be disabled starting Friday (Dec. 1), until the company completes an extensive review of its ad policies, according to a separate emailed statement by Rob Goldman, vice president of ad policies.

Why didn’t Facebook turn off the “exclude” option in its ads earlier, so that advertisers couldn’t omit certain groups? Because the company believed its mostly automated ad-review process would catch any discrimination in housing and employment ads.

That’s a typical stance for Facebook, whose fast-growing advertising business brings in the vast majority of its billions of dollars in quarterly revenue. As the company came under fire this year for aiding the spread of Russian propaganda ahead of the US election, its executives frequently adopted a “Guns don’t kill people, people kill people” attitude toward the company’s technology, as Quartz wrote earlier:

Facebook didn’t spread misinformation and propaganda; the Russians did. No one at Facebook anticipated its product would be used in this way, and in any case, such use has always been against its policies. Under this premise, all Facebook has to do when problems arise is tweak its safety mechanisms.

It typically does that by offloading the responsibility of catching nefarious activity to automated systems and, through reporting features to flag inappropriate material, its users. This approach allows the platform to remain a conduit, a pipe, a machine that is harmless when used as directed. The more Facebook can keep human employees out of these processes, the easier it is to shift accountability away from itself and onto the parties who use or misuse the tools it provides.

Facebook’s technology that allows advertisers to target certain users and not others isn’t a bug but a feature, perhaps its most important one. Reversing that through some sort of extensive review may require dismantling Facebook’s entire money-spinning advertising apparatus.

At the same time, Facebook isn’t keeping its post-election promises to weed fake news out of its platform. Quartz found in October that there is no way for readers to flag video, the fastest-growing content format on Facebook, as “fake news” so that a third-party fact-checker can evaluate it.

Nearly two months after Quartz identified the problem and notified Facebook, users still have no way to flag videos that spread misleading information as “fake news.”