In 2019, Meta introduced a research initiative known as Project Mercury. Its purpose was to study how Facebook and Instagram influence users, particularly their emotional well-being. The company hired Nielsen to analyse how people felt when stepping away from these platforms—whether their mood improved or deteriorated.
The results were stark. Users who stopped using Facebook for a week reported lower anxiety, reduced loneliness, less comparison with others, and a noticeable decline in daily dread. Such findings should have triggered broader scientific investigation, perhaps even public disclosure or formal warnings. Instead, Meta quietly terminated the project, and its findings later became the centre of a legal dispute.
The story resurfaced years later due to newly unsealed court documents. These internal records revealed additional insights into Project Mercury, each detail more troubling than the last. One employee summarised the risk clearly: “If this leaks, we will look like tobacco companies.” Meta claimed the study was flawed, yet the unresolved question persists—if the results held nothing concerning, why were they concealed?
Project Mercury was only the beginning. According to court filings, a year after internal alarm bells rang, lawmakers asked Meta a direct question: Does Instagram harm teenage girls? Meta’s public answer was no. Yet internally, the company already had evidence. Its own data showed that heavy teen users experienced spikes in anxiety, depression, isolation, and body image distress. The conclusion was simple: Meta possessed knowledge it did not disclose and, in withholding it, misled the U.S. Congress.
The filings exposed further practices that shocked even former employees. According to internal references, a “17-strike rule” existed: an account flagged for suspected sex-trafficking activity could accumulate sixteen strikes before removal, and only the seventeenth triggered action. Staff considered this threshold dangerously high given the seriousness of the offence. Meta rejected the claim, stating that such accounts are removed immediately.
Other internal recommendations were ignored. One universal fear among parents is strangers messaging their children online. In 2019, Meta researchers proposed a straightforward safeguard: teen accounts would be private by default. The company’s growth team ran projections and saw a potential loss of 1.5 million monthly active teenage users, so the plan was abandoned. For four years, no action was taken. During that period, teenagers encountered billions of unwanted interactions, and Instagram’s “Accounts You May Follow” feature recommended 1.4 million inappropriate adult profiles to minors in a single day. Internal teams repeatedly pushed for reform, yet younger users continued to face exposure. Only in 2024 did Meta make teen accounts private by default, years after its own researchers first recommended it.
The most disturbing revelations concern teenagers’ mental health. Internal researchers warned of escalating self-harm content, clusters promoting eating disorders, depressive spirals, and addictive infinite-scroll loops, while night-time notifications disrupted sleep and intensified the psychological strain. Meta received multiple proposed solutions, but senior executives either delayed or rejected them. Longer sessions meant more engagement, more engagement meant faster growth, and growth was prioritised above all else. Though Meta disputed the accusations, internal correspondence suggested a different reality.
Legal proceedings continue. Multiple lawsuits are underway, and a forthcoming hearing may decide whether all documents are made public. Yet the implications extend far beyond a courtroom. Society operates in a repetitive cycle: a technology crisis emerges, outrage follows, a corporation denies the problem, apologises, introduces superficial safety features, and eventually returns to silence, until the next disaster arrives. The world has long depended on an illusion that Silicon Valley will repair its own damage—that innovation compensates for risk, that notifications are harmless, that adolescents can confront algorithmic systems alone.
Project Mercury offers a glimpse behind this facade, and the lawsuit may finally keep the curtain from sliding shut again. Until that happens, a sobering truth remains. These platforms are moulding an entire generation, and their influence has produced more harm than benefit. The question now is not what these companies knew, but whether society is prepared to confront it. They promised progress, yet delivered fear. They shaped communities, fractured lives, and left scars across nations, from propaganda and manipulation to the downfall of those who believed in them. This is the realm of unchecked power, where myth dissolves and truth stands exposed.
Follow StoryAntra for more stories and quick reads that peel back the curtain, question the headlines, and reveal the truth behind the systems shaping our world.

