Facebook’s 7-point rebuttal to ‘The Social Dilemma’

The docu-drama ‘The Social Dilemma’, which began streaming on Netflix on September 9, 2020, has created quite a stir. The documentary-drama hybrid explores the dangerous human impact of social networking, with tech experts sounding the alarm on their own creations.

Directed by Jeff Orlowski and written by Orlowski, Davis Coombe, and Vickie Curtis, the film features interviews with former Google design ethicist and Center for Humane Technology co-founder Tristan Harris, his fellow Center for Humane Technology co-founder Aza Raskin, Asana co-founder and Facebook like button co-creator Justin Rosenstein, Harvard University professor Shoshana Zuboff, former Pinterest president Tim Kendall, AI Now director of policy research Rashida Richardson, Yonder director of research Renee DiResta, Stanford University Addiction Medicine Fellowship program director Anna Lembke, and virtual reality pioneer Jaron Lanier.

‘The Social Dilemma’ highlights the dark side of social media and how the technology and a handful of tech designers have significant control over the way billions of us think, act, and live our lives. As one of the people interviewed in the docu-drama says, “If you want to control the population of your country, there has never been a tool as effective as Facebook.”

Not surprisingly, Facebook has come out with a sharp rebuttal to the film in a post, alleging that ‘The Social Dilemma’ buries the substance in sensationalism. Defending the platform and social media at large, Facebook presented a 7-point defence:

1. Addiction – Facebook builds its products to create value, not to be addictive: Our News Feed product teams are not incentivised to build features that increase the time spent on our products. Instead, we want to make sure we offer value to people, not just drive usage.

We collaborate with leading mental health experts, organisations and academics, and have our research teams devoted to understanding the impact that social media may have on people’s well-being.

We want people to control how they use our products, which is why we provide time management tools like an activity dashboard, a daily reminder, and ways to limit notifications.

2. You are not the product – Facebook is funded by advertising so that it remains free for people: Facebook is an ads-supported platform, which means that selling ads allows us to offer everyone else the ability to connect for free. This model allows small businesses and entrepreneurs to grow and compete with bigger brands by more easily finding new customers. But even when businesses purchase ads on Facebook, they don’t know who you are. We provide advertisers with reports about the kinds of people who are seeing their ads and how their ads are performing, but we don’t share information that personally identifies you unless you give us permission. We don’t sell your information to anyone. You can always see the ‘interests’ assigned to you in your ad preferences and, if you want, remove them.

3. Algorithms – Facebook’s algorithm is not ‘mad.’ It keeps the platform relevant and useful: Facebook uses algorithms to improve the experience for people using our apps – just like any dating app, Amazon, Uber, and countless other consumer-facing apps that people interact with every day. That also includes Netflix, which uses an algorithm to determine who it thinks should watch ‘The Social Dilemma’ film, and then recommends it to them.

This happens with every piece of content that appears on the service. Algorithms and machine learning improve our services.

4. Data – Facebook has made improvements across the company to protect people’s privacy: Over the last year, we have made significant changes as part of our agreement with the Federal Trade Commission. We’ve created new safeguards for how data is used, given people new controls to manage their data, and now have thousands of people working on privacy-related projects so that we can continue to meet our privacy commitments and keep people’s information safe.

Despite what the film suggests, we have policies that prohibit businesses from sending us sensitive data about people, including users’ health information or social security numbers, through business tools like the Facebook Pixel and SDK.

5. Polarisation – We take steps to reduce content that could drive polarisation: The truth is that polarisation and populism existed long before Facebook and other online platforms were created, and we consciously take steps within the product to manage and minimise the spread of this kind of content.

The overwhelming majority of content that people see on Facebook is not polarising or even political – it’s everyday content from people’s friends and family. We reduce the amount of content that could drive polarisation on our platform, including links to clickbait headlines or misinformation.

6. Elections – Facebook has made investments to protect the integrity of elections: We’ve acknowledged that we made mistakes in 2016. Yet the film leaves out what we have done since then to build strong defences to stop people from using Facebook to interfere in elections. We’ve improved our security and now have some of the most sophisticated teams and systems in the world to prevent attacks. Over the past couple of years, we’ve removed more than 100 networks worldwide engaging in coordinated inauthentic behaviour, including ahead of major global elections since 2016.

To make ads – in particular political and social ones – more transparent, in 2018 we created an Ad Library which makes all ads running on Facebook visible to people, even if you didn’t see the ad in your own feed.

7. Misinformation – We fight fake news, misinformation, and harmful content using a global network of fact-checking partners: The idea that we allow misinformation to fester on our platform, or that we somehow benefit from this content, is wrong. Facebook is the only major social media platform with a global network of more than 70 fact-checking partners, who review content in different languages around the world. Content identified as false by our fact-checking partners is labelled and down-ranked in News Feed. We removed over 22 million pieces of hate speech in the second quarter of 2020, over 94% of which we found before someone reported it – up from the previous quarter, when we removed 9.6 million posts, over 88% of which we found before someone reported it to us.

We know our systems aren’t perfect and there are things that we miss. But we are not idly standing by and allowing misinformation or hate speech to spread on Facebook.
