Don’t trust everything Facebook says about ads
If, in the past five years, you have searched for products online and then started seeing ads related to those searches on Facebook, blame the teams who built the software.
Recently, the ethics of Facebook’s micro-targeted advertising was forced into focus. A leaked presentation claimed that Facebook could identify vulnerable teenagers by the feelings they expressed, including insecurity, worthlessness, defeat, and stress. Facebook retorted that the report was misleading, telling the public that it does not build tools to target people based on their emotional states.
However, when Facebook tries to paint the idea that this type of targeting could not occur, it is, simply put, not being honest. A telling recent comparison: after Donald Trump’s shocking win, Mark Zuckerberg insincerely expressed doubt that Facebook could flip a presidential election.
Look deeper and the revelations continue: Facebook runs a political advertising sales team, organized by party affiliation, whose job is to persuade wealthy politicians that Facebook can affect, and possibly change, the outcome of elections.
To be fair, turning Facebook data into cash remains an extremely tough task. A big obstacle is that much of any user’s data is noise, of no help to marketing. Yet on occasion, through machine learning and plain trial and error, a marketer discovers the right mix of geography, time of day, age, and film or music taste that reliably reaches a well-defined demographic.
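The trial-and-error process described above can be pictured as a simple explore-and-exploit loop over candidate audience segments. The sketch below is purely illustrative: the attribute values, click rates, and epsilon-greedy strategy are all assumptions for demonstration, not anything Facebook's ad system actually exposes.

```python
import itertools
import random

# Hypothetical targeting attributes; real campaigns use far more dimensions.
GEOS = ["urban", "suburban"]
HOURS = ["morning", "evening"]
AGES = ["18-24", "35-44"]
TASTES = ["indie-film", "hip-hop"]

SEGMENTS = list(itertools.product(GEOS, HOURS, AGES, TASTES))

# Invented "true" click-through rates, unknown to the marketer.
random.seed(0)
TRUE_CTR = {seg: random.uniform(0.01, 0.05) for seg in SEGMENTS}
TRUE_CTR[("urban", "evening", "18-24", "hip-hop")] = 0.12  # the winning mix

def show_ad(segment):
    """Simulate one ad impression: 1 if clicked, else 0."""
    return 1 if random.random() < TRUE_CTR[segment] else 0

def empirical_ctr(clicks, shows, seg):
    return clicks[seg] / shows[seg] if shows[seg] else 0.0

def find_best_segment(trials=20000, epsilon=0.1):
    """Epsilon-greedy search: mostly show ads to the best-looking segment,
    but keep exploring other segments some fraction of the time."""
    clicks = {seg: 0 for seg in SEGMENTS}
    shows = {seg: 0 for seg in SEGMENTS}
    for _ in range(trials):
        if random.random() < epsilon or not any(shows.values()):
            seg = random.choice(SEGMENTS)  # explore a random segment
        else:
            seg = max(SEGMENTS, key=lambda s: empirical_ctr(clicks, shows, s))
        shows[seg] += 1
        clicks[seg] += show_ad(seg)
    return max(SEGMENTS, key=lambda s: empirical_ctr(clicks, shows, s))

best = find_best_segment()
print("Best-performing segment found:", best)
```

With enough impressions, the loop converges on whichever combination of attributes draws the most clicks, which is the essence of the "correct mix" a marketer stumbles into.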
While proof may be thin for the earlier claims of emotional ad targeting, it is worth knowing that Facebook offers “psychometric” targeting, which finds whoever an advertiser believes is susceptible to the message being conveyed. With that capability, the pitch in the leaked presentation becomes entirely plausible.
This leaves the overwhelming ethical question. If Facebook does in fact serve ads to depressed teens, should it be stopped? Should it weigh ethics in these decisions at all? Data itself cannot be ethical; the people who operate on it must be.
An example comes straight from Facebook’s own data science team, which built a tool that recommended new Pages to users based on what they already liked. The tool began producing recommendations rooted in ethnic stereotypes. Its end came when it recommended former president Barack Obama to anyone who liked Jay Z, a correlation that is statistically real; Facebook, however, did not wish to appear to be promoting behavior that could be seen as discriminatory.
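A likes-based recommender of this kind can arise from nothing more than co-occurrence counting: Pages frequently liked by the same people get recommended to each other's fans. The sketch below uses invented data and a deliberately naive approach; Facebook's actual system is proprietary and far more sophisticated, but the mechanism shows how a statistically real correlation surfaces with no one deciding it should.

```python
from collections import Counter

# Invented like data for illustration only.
user_likes = {
    "u1": {"Jay Z", "Barack Obama", "ESPN"},
    "u2": {"Jay Z", "Barack Obama"},
    "u3": {"Jay Z", "Nike"},
    "u4": {"Coldplay", "ESPN"},
}

def recommend(page, k=1):
    """Recommend the Pages most often co-liked with `page`."""
    co = Counter()
    for likes in user_likes.values():
        if page in likes:
            for other in likes - {page}:
                co[other] += 1
    return [p for p, _ in co.most_common(k)]

print(recommend("Jay Z"))  # prints ['Barack Obama']
```

The recommendation emerges purely from the data: if Jay Z fans disproportionately like a Page, that Page is what gets suggested, whether or not the correlation is one the platform wants to advertise.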
The question remains whether this is acceptable. Society embraces Jay Z, so is it acceptable to connect him to a president? Many statistically provable truths remain things we shun saying in public. Is an ad ethical when it targets middle-aged women in wealthy areas with $100 yoga pants? What about payday-loan ads aimed at lower-income African Americans? Or military-recruitment ads shown to Hispanics? Should there be a line? If so, where can it be drawn?
This dilemma deserves not only answers but action. Facebook can choose never to limit the use of its data; the only way to make it reconsider is to demand, loudly and persistently, that it change its policies. People have done so recently: think of Trump and all the “fake news” accusations, after which Zuckerberg finally relented and rolled out anti-fake-news technology. But the next opportunity to exploit user information will present itself, and Facebook will not resist. Can you blame them? After all, they hold the data, and the consumers, on their side.