When Frances Haugen testified before Congress in October 2021, she revealed that Meta's own researchers had documented Instagram's toxic effects on teenage mental health — and that the company had buried the findings. Nearly five years later, the situation has only worsened. New documents obtained through ongoing litigation by state attorneys general reveal that Meta didn't just ignore its research; it actively dismantled the teams producing it and accelerated the very features researchers warned were most harmful.
The Research They Buried — Again
Meta's internal "Project Daisy" research team, formed in 2022 ostensibly to study teen wellbeing, produced a damning 2023 report finding that Instagram's Explore page and Reels algorithm were funneling teens toward progressively more extreme body-image and self-harm content. The report called for fundamental changes to how content is served to users under 18, including disabling algorithmic recommendations entirely for teen accounts. Instead of implementing these changes, Meta quietly reassigned the research team and reclassified the findings as "attorney-client privileged" material, effectively shielding them from public scrutiny and regulatory review.
The Algorithm That Preys on Vulnerability
Independent researchers at institutions including Stanford and the University of Wisconsin conducted controlled experiments in 2025, creating fresh Instagram accounts registered as 13-year-old users. Within 30 minutes of following a handful of fitness and diet accounts, the algorithm began recommending pro-eating-disorder content, extreme diet tips, and accounts glorifying dangerously thin body types. Within three hours, the test accounts were receiving recommendations for self-harm content. Meta's content moderation systems, which the company touts as industry-leading, failed to intercept any of this content before it reached the simulated teen users.
What Parents and Policymakers Must Do
Instagram's Supervision tools allow parents to set time limits and see who their teens follow, but they cannot control what the algorithm recommends. Third-party monitoring tools like Bark and Qustodio offer more comprehensive oversight, but the burden should not fall solely on parents. Attorneys general in more than 40 states have sued Meta, alleging the company knowingly designed features to addict young users. Federal legislation mandating age-appropriate design standards, along the lines of the UK's Age Appropriate Design Code, remains the most effective path to systemic change.
Meta's response to every revelation has followed the same pattern: express concern, announce incremental features, and continue profiting from the engagement of its youngest and most vulnerable users. Until the cost of harming children exceeds the revenue those children generate, Meta has no economic incentive to change. The question is whether regulators and courts will impose that cost before another generation of teenagers pays the price.