Facebook is having a tough week. First, the breaking news. Today, the Department of Housing and Urban Development sued Facebook for violations of the federal Fair Housing Act, alleging that the platform allows advertisers to prevent people from seeing certain ads based on race, religion and national origin. Further, the lawsuit says the platform itself also uses its data-mining capability to determine which of its users can see specific ads.

“Facebook is discriminating against people based upon who they are and where they live,” HUD Secretary Ben Carson said in a statement. “Using a computer to limit a person’s housing choices can be just as discriminatory as slamming a door in someone’s face.”

We knew this two and a half years ago. These practices, which are not subtle, were explored in detail in a ProPublica investigation published in October 2016. That report, complete with screenshots, showed how advertisers could easily exclude specific renters or home buyers based on "ethnic affinities." "When we showed Facebook's racial exclusion options to a prominent civil rights lawyer John Relman, he gasped and said, 'This is horrifying. This is massively illegal. This is about as blatant a violation of the federal Fair Housing Act as one can find,'" they reported.

They also found that major employers like Verizon, Amazon, Goldman Sachs and even Facebook itself had placed job recruitment ads that screened out people over a certain age. That separate report raised troubling questions about the company's compliance with the federal Age Discrimination in Employment Act of 1967, which prohibits bias against people 40 or older in hiring or employment. Facebook promised to do better at flagging these ads, but a year after ProPublica's first investigation, the reporters found significant holes in the updated system.

This week saw Facebook promising to do better, yet again, this time on hate speech. Yesterday, Facebook announced a new policy banning white separatist and white nationalist content from the site. While advocates have long complained about white nationalist activity on Facebook, criticism of the company intensified after the platform hosted the livestream broadcast by the gunman during the horrific shooting rampage at two mosques in Christchurch, New Zealand earlier this month. In addition to banning content about white nationalism, the company plans to direct people who search for these sorts of racist terms to a group that offers crisis counseling and education. I'm looking forward to the metric tracking "former hate group member conversions."

While we wait, it's worth digging into the thinking that has allowed this type of hate to bloom on the platform. The devil is in the algorithms: though the company has long said it polices hateful content based on race, ethnicity or religion, "expressions of white nationalism and separatism" had not been flagged until now. In June 2017, ProPublica published an analysis of internal Facebook documents that shed light on the algorithms the company uses to distinguish between hate speech and legitimate political speech, and on how it trained its content reviewers:

One document trains content reviewers on how to apply the company's global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.

The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at "protected categories" based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about "subsets" of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.
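To make the loophole concrete, here is a minimal sketch of that rule as ProPublica described it. The category list comes from their reporting; the is_deletable helper and the trait labels are my own illustration, not Facebook's actual code.

```python
# A sketch of the "protected categories" rule from the leaked training
# documents, as reported by ProPublica. Hypothetical helper, not Facebook code.

PROTECTED = {
    "race", "sex", "gender identity", "religious affiliation",
    "national origin", "ethnicity", "sexual orientation",
    "serious disability/disease",
}

def is_deletable(target_traits):
    """An attack is removed only if EVERY trait of the targeted group
    falls under a protected category; a single unprotected trait makes
    the group a 'subset,' and the attack survives review."""
    return all(category in PROTECTED for category in target_traits)

# "White men" = race + sex, both protected -> attack deleted.
print(is_deletable({"race": "white", "sex": "male"}))           # True

# "Female drivers" = sex (protected) + occupation (not) -> allowed.
print(is_deletable({"sex": "female", "occupation": "driver"}))  # False

# "Black children" = race (protected) + age (not) -> allowed.
print(is_deletable({"race": "black", "age": "child"}))          # False
```

Notice that the rule is purely mechanical: it counts protected traits and ignores context entirely, which is exactly how "black children" ends up with less protection than "white men."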
These sorts of algorithmic loopholes have resulted in the long-standing and disproportionate harassment of certain populations, black women in particular, on Facebook and other social platforms. It's also why every activist of color you know who writes passionately, knowledgeably and responsibly about white supremacy gets routinely blocked on Facebook. Having a conversation about how to protect society from white nationalism gets flagged by reviewers. But a discussion about the violent separation of the races does not.

Policing billions of interactions around the world is a Herculean task, and I absolutely want Facebook to get this right. But three questions immediately come to mind. Why would a company allow people to use micro-targeting tools that allegedly violate federal law? How many lawsuits will it take for Facebook to value its non-paying customers as highly as its paying ones? And finally this: what current and still-invisible "loopholes" will we be lamenting two and a half years from now?