Kaitlynn Mendes, Western University; Jacquelyn Burkell, Western University; Jane Bailey, L’Université d’Ottawa/University of Ottawa, and Valerie Steeves, L’Université d’Ottawa/University of Ottawa
In September, the Wall Street Journal released the Facebook Files. Drawing on thousands of documents leaked by whistleblower and former employee Frances Haugen, the Facebook Files show that the company knows its practices harm young people, but fails to act, choosing corporate profit over public good.
The Facebook Files are damning for the company, which also owns Instagram and WhatsApp. However, it isn’t the only social media company that compromises young people’s internationally protected rights and well-being by prioritizing profits.
As researchers and experts on children’s rights, online privacy and equality, and the online risks, harms and rewards that young people face, we weren’t surprised by the news of the past few weeks.
Harvested personal data
Harvesting and commodifying personal data (including children’s data) underpins the internet’s financial model — a model that social psychologist and philosopher Shoshana Zuboff has dubbed surveillance capitalism.
Social media companies make money under this model by collecting, analyzing and selling the personal information of users. To increase the flow of this valuable data, they work to engage more people, for more time, through more interactions.
Ultimately, the value in harvested personal data lies in the detailed personal profiles the data supports — profiles that are used to feed the algorithms that shape our newsfeeds, personalize our search results, help (or hinder) us in getting a job and determine the advertisements we receive.
In a self-reinforcing turn, these same data are used to shape our online environments to encourage disclosure of even more data — and the process repeats.
Surveillance capitalism
Recent research confirms that the deliberate design, algorithmic and policy choices made by social media companies, choices that lie at the heart of surveillance capitalism, directly expose young people to harmful content. However, the harms of surveillance capitalism extend well beyond this.
Our research in both Canada and the United Kingdom has repeatedly uncovered young people’s concern with how social media companies and policy-makers are failing them. Rather than respecting young people’s rights to expression, to be free from discrimination and to participate in decisions affecting themselves, social media companies monitor young people to bombard them with unsolicited content in service of corporate profits.
As a result, young people have often reported to us that they feel pressured to conform to stereotypical profiles used to steer their behaviour and shape their environment for profit.
For example, teen girls have told us that even though using Instagram and Snapchat created anxiety and insecurity about their bodies, they found it almost impossible to “switch off” the platforms. They also told us how the limited protection provided by default privacy settings leaves them vulnerable to unwanted “dick pics” and requests to send intimate images to men they don’t know.
Several girls and their parents told us that this can sometimes lead to extreme outcomes, including school refusal, self-harm and, in a few cases, attempted suicide.
The surveillance capitalism financial model that underlies social media ensures that companies do everything they can to keep young people engaged.
Young people have told us that they want more freedom and control when using these spaces — so they are as public or private as they like, without fear of being monitored or profiled, or of having their data farmed out to corporations.
Teenagers also told us that they rarely bother to report harmful content to the platforms. This isn’t because they don’t know how, but because they have learned from experience that it doesn’t help. Some platforms were too slow to respond, others didn’t respond at all and some said that what was reported didn’t breach community standards, so they weren’t willing to help.
Removing toxic content hurts the bottom line
These responses aren’t surprising. For years, we have known about the lack of resources to moderate content and deal with online harassment.
Haugen’s recent testimony at a Senate Committee on Commerce, Science and Transportation hearing and earlier reports about other social media platforms highlight an even deeper profit motivation. Profit depends on meaningful social engagement, and harmful, toxic and divisive content drives engagement.
Basically, removing toxic content would hurt the corporate bottom line.
Guiding principles that centre children’s rights
So, what should be done in light of the recent, though not unprecedented, revelations in the Facebook Files? The issues are undoubtedly complex, but we have come up with a list of guiding principles that centre children’s rights and prioritize what young people have told us about what they need:
- Young people must be directly engaged in the development of relevant policy.
- All related policy initiatives should be evaluated on an ongoing basis using a children’s rights assessment framework.
- Social media companies should be stopped from launching products for children and from collecting their data for profiling purposes.
- Governments should invest more resources into providing fast, free, easy-to-access informal responses and support for those targeted by online harms (learning from existing models like Australia’s eSafety Commissioner and Nova Scotia’s CyberScan unit).
- We need laws that ensure that social media companies are both transparent and accountable, especially when it comes to content moderation.
- Government agencies (including police) should enforce existing laws against hateful, sexually violent and harassing content. Thought should be given to expanding platform liability for provoking and perpetuating these kinds of content.
- Educational initiatives should prioritize familiarizing young people, the adults who support them and corporations with children’s rights, rather than focusing on a “safety” discourse that makes young people responsible for their own protection. This way, we can work together to disrupt the surveillance capitalism model that endangers them in the first place.
Kaitlynn Mendes, Professor of Gender, Media and Sociology, Western University; Jacquelyn Burkell, Associate Professor, Information and Media Studies, Western University; Jane Bailey, Professor of Law and Co-Leader of The eQuality Project, L’Université d’Ottawa/University of Ottawa, and Valerie Steeves, Full Professor, Department of Criminology, L’Université d’Ottawa/University of Ottawa
This article is republished from The Conversation under a Creative Commons license. Read the original article.
"Voices of the RSC” is a series of written interventions from Members and Officials of the Royal Society of Canada. The articles provide timely looks at matters of importance to Canadians, expressed by the emerging generation of Canada’s academic leadership. Opinions presented are those of the author(s), and do not necessarily reflect the views of the Royal Society of Canada.