Don’t trust everything Facebook says about ads.
If, in the past five years, you have searched for products online and then started seeing ads related to those searches on Facebook, blame the teams who built the ad-targeting software.
The ethics of Facebook’s micro-targeted advertising were forced into focus when a leaked presentation stated that Facebook could identify vulnerable teenagers based on posts signaling that they felt insecure, worthless, defeated, or stressed. Facebook retorted that the report was misleading, telling the public that it does not offer tools to target people based on their emotional state.
However, it is important to remember that when Facebook paints this type of targeting as impossible, it is, simply put, lying.
A telling parallel came after Donald Trump’s shocking win, when Mark Zuckerberg disingenuously dismissed the idea that Facebook could flip a presidential election.
Zuckerberg flatly denied what his users and the world were telling him. “Of all the content on Facebook, more than 99% of what people see is authentic,” he wrote. He also cautioned that the company should not rush into fact-checking.
Turning Facebook data into cash remains difficult, but it is possible.
A big part of the challenge is determining which of a user’s data is noise and which is marketing signal. Used efficiently, through machine learning and trial and error, a marketer can find the mix of geography, time of day, age, and film or music tastes that isolates a clear demographic.
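To make the trial-and-error idea concrete, here is a minimal sketch in Python. The audience attributes and the click-through simulator are purely hypothetical stand-ins (they are not Facebook’s actual targeting taxonomy or API); the point is only to show how brute-force experimentation over attribute combinations surfaces the segment that responds best.

```python
import itertools
import random

# Hypothetical audience attributes -- illustrative only, not any
# real ad platform's targeting taxonomy.
GEOGRAPHIES = ["urban", "suburban", "rural"]
TIMES_OF_DAY = ["morning", "evening"]
AGE_BANDS = ["18-24", "25-44", "45-64"]
INTERESTS = ["indie-film", "pop-music", "yoga"]

def simulated_ctr(segment, rng):
    """Stand-in for a live ad experiment: returns a noisy click-through
    rate. In reality this number would come from actually serving ads
    to the segment and counting clicks."""
    base = 0.01
    # Pretend one segment responds far better than the rest.
    bonus = 0.03 if segment == ("suburban", "evening", "25-44", "yoga") else 0.0
    return base + bonus + rng.uniform(0, 0.005)

def best_segment(trials_per_segment=5, seed=0):
    """Brute-force trial and error: test every attribute combination
    and keep the one with the highest average observed CTR."""
    rng = random.Random(seed)
    best, best_ctr = None, -1.0
    for seg in itertools.product(GEOGRAPHIES, TIMES_OF_DAY, AGE_BANDS, INTERESTS):
        avg = sum(simulated_ctr(seg, rng)
                  for _ in range(trials_per_segment)) / trials_per_segment
        if avg > best_ctr:
            best, best_ctr = seg, avg
    return best

print(best_segment())  # the high-response segment wins the search
```

Real campaigns replace the exhaustive loop with smarter search (bandit algorithms, gradient-based optimizers), but the economics are the same: enough cheap experiments on enough user data will find the demographic that clicks.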
Is an ad ethical when it targets middle-aged women from wealthy areas asking them to purchase $100 yoga pants? How about when the ad is for payday loans and targets low-income African Americans? Or when Hispanics see ads promoting military service?
Should there be a line? If so, where should it be drawn?
Not only does this dilemma deserve answers, it demands action. Facebook may choose never to limit the use of its data; the only way to make it reconsider is to demand, loudly and relentlessly, that it change its policies.
People have done so recently; think of Trump and all the “fake news” accusations. In response, Zuckerberg has tried to fix the problem by showing people more material from friends and family and by prioritizing “trusted publishers” and local news sources over purveyors of fake news.
Facebook’s response included the following in a statement:
In the past year, we’ve worked to destroy the business model for false news and reduce its spread, stop bad actors from meddling in elections, and bring a new level of transparency to advertising. Last week, we started prioritizing meaningful posts from friends and family in News Feed to help bring people closer together. We have more work to do and we’re heads down on getting it done.
But the Cambridge Analytica scandal shows people may not be OK with Facebook’s data gathering, improved or not.
Founder and CEO Mark Zuckerberg continues to deliver non-apologies, saying many things have already been fixed and more changes are on the way, amid repeated calls for him to testify before Congress. The hashtag #deletefacebook has trended on Twitter.
As a media company and one of Americans’ top sources of information, Facebook’s de facto anonymity and general lack of responsibility for user-generated content make it easy for propagandists to exploit. Making matters worse, it isn’t willing to impose tighter identification rules for fear of losing too many users, and it doesn’t want to be held responsible in any way for content, preferring to present itself as a neutral platform.
Simply put, Facebook has no real incentive to fix the actual problem. Its business is selling its users’ privacy to advertisers, and it will continue to prioritize keeping Wall Street happy until the government steps in and regulates. Even Zuckerberg doesn’t think that’s a bad idea.