Facebook is removing over 5,000 ad targeting options to prevent discriminatory ads

Facebook announced this morning that it’s changing how its ad targeting system works to tackle misuse of the platform to discriminate against or exclude audiences based on factors like ethnicity and religion. The company says it’s now removing over 5,000 ad targeting options that could have been misused to place discriminatory ads across its platform.

The news comes shortly after the U.S. Department of Housing and Urban Development (HUD) filed a new complaint against Facebook that accuses it of helping landlords and home sellers violate the Fair Housing Act. The complaint says Facebook’s ad settings run afoul of the law by allowing advertisers to target certain demographics.

“When Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it’s the same as slamming the door in someone’s face,” said Anna María Farías, Assistant Secretary for Fair Housing and Equal Opportunity, in a statement issued by the department.

Facebook responded by saying such discrimination is prohibited by its advertising policies and that it would continue working with HUD to address the complaint.

Today, the company says it will remove more than 5,000 targeting options that have the potential for misuse.

“While these options have been used in legitimate ways to reach people interested in a certain product or service, we think minimizing the risk of abuse is more important,” the company explained in a blog post. Facebook didn’t provide a list of the options being removed, but noted they related to attributes such as religion and ethnicity.

It also said it would roll out a new certification for U.S. advertisers through its Ads Manager tool, requiring advertisers who run housing, employment or credit ads to certify their compliance with Facebook’s non-discrimination policy. To complete the certification, advertisers will review the policy and agree to it through a form.

Facebook says the certification will roll out to other countries over time and will also become available through its other tools and APIs.

Earlier this year, Facebook said it would update its product to catch discriminatory ads before they run, both by hiring more ad reviewers and by using machine learning techniques. It also introduced new prompts that remind advertisers of its anti-discrimination policies before they create campaigns.

However, the issues haven’t just been about advertisers picking certain options to target individuals with their ads – advertisers have also used targeting options to exclude audiences. In April, Facebook said it was removing thousands of categories from exclusion targeting as a result, including those related to race, ethnicity, sexual orientation and religion.

The company has faced criticism for several years over the ways its ad targeting tools could be abused.

Back in 2016, for example, Facebook had to disable an “ethnic affinity” targeting option for housing, employment and credit-related ads after a ProPublica report pointed out that the option could be used for discriminatory advertising, which is illegal in housing and employment. The company later rolled out more informational messages, updated its ad policies and began testing tools to identify illegal ads.

More recently, the company came under fire for allowing advertisers to target users based on interests related to their political beliefs, sexuality and religion – categories now deemed “sensitive information” under current European data protection laws. The company responded at the time with an explanation of how users can manage their ad preferences.

Facebook today says that it will have more ad targeting updates to share in the months ahead, as it further refines these tools.