Social media is not good for our health, physical or mental. By this point, that is a well-established fact, and not even a new one. It has long been known how harmful social media platforms can be, including Meta’s Facebook and Instagram, Twitter/X, TikTok, YouTube, and more.
Day in and day out, new studies and research appear, and experts discuss the risks these platforms pose and why people should be careful and cautious when using them.
While these platforms do have some benefits, their widespread prevalence and integration into daily life, to the point where one cannot imagine being without at least one social media platform, have also raised many red flags.
A recent controversy has revealed that Meta’s Project Mercury, an internal study launched in 2019, was allegedly shut down after its research showed causal evidence that social media platforms significantly harm users’ mental health.
The project has come under scrutiny after unredacted US court filings alleged that the study found that people who deactivated Facebook and Instagram for a week reported lower levels of depression, anxiety, loneliness, and negative social comparison.
Instead of making these findings public or acting on them, Meta allegedly halted the study, dismissing it internally as tainted by the “existing media narrative.”
What Is Project Mercury?
In late 2019, the technology company Meta, formerly Facebook, Inc., which owns Facebook, Instagram, WhatsApp, and more, allegedly launched a research project code-named ‘Project Mercury’ in collaboration with the survey firm Nielsen, as part of an internal study to “explore the impact that our apps have on polarisation, news consumption, well-being, and daily social interactions”.
Probably to no one’s surprise, internal documents from the study stated that “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness, and social comparison”.
According to reports, the legal brief states that the participants were a random sample of consumers who voluntarily turned off their Facebook and Instagram accounts for a month.
However, again probably to no one’s surprise, instead of publishing these results or continuing research into the topic, Meta decided to shut down the project altogether, internally claiming that the study’s negative results were tainted by the “existing media narrative” around the company.
These claims were reportedly made public after the law firm Motley Rice filed a lawsuit against Meta, Google’s YouTube, TikTok, and Snapchat on Friday (November 21), on behalf of school districts, parents, and state attorneys general around the country.
As part of these proceedings, a legal brief was filed in the United States District Court for the Northern District of California that includes newly unredacted information from various research studies, including Meta’s Project Mercury and its findings.
Read More: What Is Social Media’s Happiness Trap And How To Not Fall In It
According to the filings, an unnamed staff researcher wrote in internal communications, “The Nielsen study does show causal impact on social comparison,” adding a sad-face emoji.
Another employee raised concerns, asking, “If the results are bad and we don’t publish and they leak, is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”
In response to the allegations, Meta spokesman Andy Stone said in a statement that the study was stopped because “its methodology was flawed.”
He defended the company’s actions, stating, “We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture.”
He added, “The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens—like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences.”
According to a Reuters report, other allegations against Meta, drawn from internal documents, include:
Meta intentionally designed its youth safety features to be ineffective and rarely used, and blocked testing of safety features that it feared might be harmful to growth.
Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold.”
Meta recognized that optimizing its products to increase teen engagement resulted in serving them more harmful content, but did so anyway.
Meta stalled internal efforts to prevent child predators from contacting minors for years due to growth concerns, and pressured safety staff to circulate arguments justifying its decision not to act.
In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Nick Clegg, Meta’s then-head of global public policy, to better fund child safety work.
Beyond all other concerns, if these allegations are true, they raise a crucial question of corporate responsibility: whether tech companies should disclose such risks even when doing so may hurt their business.
Image Credits: Google Images
Sources: Reuters, The Economic Times, The Hindu
Find the blogger: @chirali_08
This post is tagged under: Meta, Project Mercury, social media, meta social media, social media harm, social media risks, Project Mercury social media, Project Mercury ai, Project Mercury scandal, facebook
Disclaimer: We do not hold any rights or copyright over any of the images used; these have been taken from Google. In case of credits or removal, the owner may kindly email us.
Other Recommendations:
Why Gen Z Can’t Quit Social Media Despite It Causing Anxiety & Stress To Them