
A ProPublica report has revealed the perils of relying on an unchecked algorithm and self-reported user data.

More than 2 billion people use Facebook each month, sharing personal and important information about themselves: age, friends, family, residence, employment, education, entertainment, and more. That is a lot of information being shared with a lot of people.

Facebook’s access to that personal data is what allows it to make a substantial amount of money selling ads to advertisers who want to reach specific markets, such as people in a certain age group, people in relationships, or people serving in the military.

How can it do this? Information is valuable in this data-driven age, and fortunately for Facebook and others, social media users are more than willing to share it. Users routinely post who they are in a relationship with and whether they serve or have served in the US military. But deception does exist on social media platforms: people can lie, and sometimes Facebook can’t tell what is a lie and what is the truth.

Therein lies the problem: on September 14, 2017, ProPublica reported that Facebook had approved ads targeted at audiences defined by phrases like “Jew Hater,” “Hitler did nothing wrong,” and “How to burn Jews.”

Although it seems wrong and downright absurd that Facebook would group people together based on such hateful rhetoric just for the sake of turning a profit—it’s true. What’s worse is that Facebook was not even aware of this.

How did it happen?

Understand that no one who works for Facebook deliberately created such hateful ad-targeting options for advertisers. The truth is, we can blame the machines. That’s right; a computer is responsible for this PR nightmare.

An algorithm developed by Facebook went through users’ profiles, identified patterns, and created new ad audiences based on that information.

How is that possible? Believe it or not, enough people had typed the words “Nazi Party” or “Jew Hater” as their employer or field of study on their Facebook profiles. Once enough users did so, the computer was simply being systematic and did what it was programmed to do: put people with shared backgrounds into the same ad audience.
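To make the mechanism concrete, here is a minimal sketch of that kind of audience-building logic. Everything in it (the profiles list, build_audiences, MIN_AUDIENCE_SIZE) is hypothetical; Facebook’s actual system is not public. The point is simply that code like this groups on exact strings and never asks what those strings mean.

```python
from collections import defaultdict

# Hypothetical profile records with self-reported, free-text fields.
profiles = [
    {"user_id": 1, "employer": "Acme Corp", "field_of_study": "Biology"},
    {"user_id": 2, "employer": "Acme Corp", "field_of_study": "Chemistry"},
    {"user_id": 3, "employer": "Acme Corp", "field_of_study": "Biology"},
]

MIN_AUDIENCE_SIZE = 2  # a real threshold would be far larger


def build_audiences(profiles, fields=("employer", "field_of_study")):
    """Group users by identical self-reported values in each field."""
    groups = defaultdict(set)
    for profile in profiles:
        for field in fields:
            value = profile.get(field)
            if value:
                # The string is used verbatim -- nothing checks what it means.
                groups[(field, value)].add(profile["user_id"])
    # Any value shared by enough users becomes a targetable audience.
    return {
        key: users
        for key, users in groups.items()
        if len(users) >= MIN_AUDIENCE_SIZE
    }


for (field, value), users in build_audiences(profiles).items():
    print(f"targetable audience: {field} = {value!r} ({len(users)} users)")
```

Run against these toy profiles, the sketch happily turns “Acme Corp” and “Biology” into targetable audiences; it would do exactly the same with any hateful string enough users typed in.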

Understand, the computer and its algorithm are designed to know a great deal, but in this case they did not know that the Nazi Party was the German political party responsible for a mass genocide, or that “Jew Hater” is a slang description for anti-Semites. And if the system didn’t know these things, it certainly didn’t know that grouping these individuals together would create a hate group for advertisers to target.

What the computer did know was that information posted by users, such as schools, employers, jobs, and fields of study, appears on thousands of users’ profiles, and that such information is valuable to advertisers. Facebook’s algorithm may not have known what it was doing, but Facebook should have.

Facebook’s answer

As a result of ProPublica’s findings and report, Facebook has removed advertisers’ ability to target users based on self-reported employment and education fields. A Facebook spokesperson stated, “To help ensure that targeting is not used for discriminatory purposes, we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue.”
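In practice, the fix the spokesperson describes amounts to excluding self-reported, free-text fields from the set of attributes advertisers can target. Here is a hedged illustration of that idea, with made-up field names:

```python
# Hypothetical illustration of the stated fix: self-reported, free-text
# fields are withheld from the targetable attribute set until a review
# process exists. All field names here are made up for the example.
SELF_REPORTED_FIELDS = {"employer", "job_title", "field_of_study", "education"}


def targetable_attributes(all_attributes):
    """Return only attributes that may be exposed to advertisers."""
    return [attr for attr in all_attributes if attr not in SELF_REPORTED_FIELDS]


print(targetable_attributes(
    ["age", "employer", "relationship_status", "field_of_study"]
))
# prints: ['age', 'relationship_status']
```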

Trust issue

Facebook may appear to have done the right thing, but it hasn’t really gone the distance to solve the problem permanently. It’s true that Facebook is cutting back on its use of self-reported user data for ad targeting. However, it still uses the employment information people submit to their profiles when listing industries for advertisers to target, and it now leans more heavily on age and relationship status when supplying information to advertisers. Could this be a problem in the future? I guess we’ll have to wait and see.

ProPublica’s report is only the latest example this month of the difficulty of relying on user-provided data for ad targeting. Last week, Pivotal Research analyst Brian Wieser identified a discrepancy between the number of 18- to 34-year-olds Facebook says it can reach in the US and the number of 18- to 34-year-olds who actually live in the US according to the Census Bureau.

The two examples are not perfectly comparable, but both highlight the underlying issue with social media data: people can and will lie if they want to. Users can tell Facebook anything they want. If a 10-year-old wants to be an 18-year-old, they can.

Facebook’s business is built on accurate and truthful information, and advertisers depend on that same accuracy whenever they buy against Facebook’s data.

Geoffrey Purkis

Geoffrey is the Founder and CEO of Seattle Web Search. He’s a web developer and SEO (Search Engine Optimization) expert located in Seattle, Washington with extensive experience in the field. Geoffrey specializes in helping his clients find the right combination of web, search, social, and video content to get the best results from their online marketing efforts.