
Two Major Studies, 125,000 Kids: The Social Media Panic Doesn’t Hold Up

By: Mike Masnick
January 21, 2026 at 20:22

For years now, we’ve been repeatedly pointing out that the “social media is destroying kids” narrative, popularized by Jonathan Haidt and others, has been built on a foundation of shaky, often contradictory research. We’ve noted that the actual data is far more nuanced than the moral panic suggests, and that policy responses built on that panic might end up causing more harm than they prevent.

Well, here come two massive new studies—one from Australia, one from the UK—that land like a sledgehammer on Haidt’s narrative—and, perhaps more importantly, on Australia’s much-celebrated social media ban for kids under 16.

The Australian study, published in JAMA Pediatrics, followed over 100,000 Australian adolescents across three years and found something that should give every policymaker pause: the relationship between social media use and well-being isn’t linear. It’s U-shaped. Perhaps most surprisingly, kids who use social media moderately have the best outcomes. Kids who use it excessively have worse outcomes. But here’s the kicker: kids who don’t use it at all also have worse outcomes.

This isn’t to say that all kids should use social media. Unlike some others, we’re not saying any of this shows that social media causes good or bad health outcomes. We’re pointing out that the claims of inherent harm seem not just overblown, but wrong.

From the study’s key findings:

A U-shaped association emerged where moderate social media use was associated with the best well-being outcomes, while both no use and highest use were associated with poorer well-being. For girls, moderate use became most favorable from middle adolescence onward, while for boys, no use became increasingly problematic from midadolescence, exceeding risks of high use by late adolescence.

This seems like pretty strong evidence that Haidt’s claims of inherent harm are not well-founded, and the policy proposals to ban kids entirely from social media are a bad idea. For older teenage boys, having no social media was associated with worse outcomes than having too much of it. The study found that nonusers in grades 10-12 had significantly higher odds of low well-being compared to moderate users—with boys showing an odds ratio of 3.00 and girls at 1.79.
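For readers unfamiliar with the statistic: an odds ratio of 3.00 means the odds of low well-being among non-users were three times the odds among moderate users. A minimal sketch of how such a ratio is computed, using invented counts for illustration (these numbers are not from the study):

```python
def odds_ratio(exposed_cases, exposed_noncases, control_cases, control_noncases):
    """OR = (a/b) / (c/d): the odds of the outcome in the exposed group
    divided by the odds in the comparison group."""
    return (exposed_cases / exposed_noncases) / (control_cases / control_noncases)

# Hypothetical example: among non-user boys, 30 of 130 report low well-being;
# among moderate-user boys, 20 of 220 do. (Invented counts, for illustration only.)
or_boys = odds_ratio(30, 100, 20, 200)
print(round(or_boys, 2))  # 3.0 -> non-users have triple the odds of low well-being
```

An odds ratio above 1 means the first group fares worse on the measured outcome; the study's reported 3.00 for boys and 1.79 for girls are read the same way.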

Meanwhile, researchers at the University of Manchester just published a separate study in the Journal of Public Health that followed 25,000 11- to 14-year-olds over three school years. Their conclusion? Screen time spent on social media or gaming does not cause mental health problems in teenagers. At all.

From the Guardian’s coverage of the UK study:

The study found no evidence for boys or girls that heavier social media use or more frequent gaming increased teenagers’ symptoms of anxiety or depression over the following year. Increases in girls’ and boys’ social media use from year 8 to year 9 and from year 9 to year 10 had zero detrimental impact on their mental health the following year.

Zero. Not “small.” Not “modest.” Zero.

The UK researchers also examined whether how kids use social media matters—active chatting versus passive scrolling. The answer? Neither appeared to drive mental health difficulties. As lead author Dr. Qiqi Cheng put it:

We know families are worried, but our results do not support the idea that simply spending time on social media or gaming leads to mental health problems – the story is far more complex than that.

The Australian researchers, to their credit, are appropriately cautious about causation:

While heavy use was associated with poorer well-being and abstinence sometimes coincided with less favorable outcomes, these findings are observational and should be interpreted cautiously.

But while researchers urge caution, politicians have been happy to sprint ahead.

Australia leapt into the fray, and the ban has so far proven to be a complete mess.

The entire premise of Australia’s ban—and similar proposals floating around in various US states and across Europe—is that social media is inherently harmful to young people, and that removing access is protective. But both studies suggest the reality is far more complicated. The Australian researchers explicitly call this out:

Social media’s association with adolescent well-being is complex and nonlinear, suggesting that both abstinence and excessive use can be problematic depending on developmental stage and sex.

In other words: Australia’s ban may be taking kids who would have been moderate users with good outcomes and forcing them into the “no use” category that the study associates with worse well-being. It’s potentially the worst of all possible policy outcomes.

The UK study’s co-author, Prof. Neil Humphrey, reinforced this point:

Our findings tell us that young people’s choices around social media and gaming may be shaped by how they’re feeling but not necessarily the other way around. Rather than blaming technology itself, we need to pay attention to what young people are doing online, who they’re connecting with and how supported they feel in their daily lives.

That’s a crucial distinction that the moral panic crowd keeps glossing over: the causation may run in the opposite direction from the one assumed. Kids who are already struggling, and who aren’t getting the support they need, might use social media differently—not the other way around.

This shouldn’t be surprising to anyone who has been paying attention. We’ve covered study after study showing that the relationship between social media and teen mental health is complicated, context-dependent, and nowhere near as clear-cut as Haidt’s “The Anxious Generation” would have you believe. As we’ve noted before, correlation is not causation, and the timing of teen mental health declines doesn’t actually line up neatly with smartphone adoption the way the narrative claims.

But nuance doesn’t make for good headlines or popular books. “Social Media Is Complicated And The Effects Depend On How You Use It, Your Age, Your Sex, And A Bunch Of Other Factors” doesn’t quite have the same ring as “Smartphones Destroyed A Generation.”

No one’s beating down my door to write a book detailing the trade-offs and nuances. Instead, Haidt’s book remains on the NY Times’ best seller list almost two years after being published.

The Australian study also highlights something else that should be obvious but apparently needs repeating: social media serves genuine social functions for teenagers. Being completely cut off from the platforms where your peers are socializing, sharing, and connecting has costs. The researchers note:

Heavy use has been associated with distress, while abstinence may cause missed connections.

This is what we’ve been saying forever. These platforms aren’t just “distraction machines” or “attention hijackers” or whatever scary framing is popular this week. They’re where social life happens for a lot of young people. Cutting kids off entirely doesn’t return them to some idyllic pre-digital social existence. It cuts them off from their actual social world.

Both sets of researchers make the same point: online experiences aren’t inherently harmless—hurtful messages, online pressures, and extreme content can have real effects. But blunt instruments like time-based restrictions or outright bans completely miss the target, and are unlikely to help those who need it most. The Australian authors recommend “promotion of balanced and purposeful digital engagement as part of a broader strategy.”

That’s… actually sensible policy advice? Based on actual evidence?

Imagine that.

Meanwhile, Australia is out there celebrating how many accounts it’s deleted, tech companies are scrambling to comply with fines of up to $49.5 million, the UK is actively considering following Australia’s lead, and policymakers around the world are looking at Australia as a model to follow.

Maybe—just maybe—they should look at the actual research coming out of Australia and the UK instead.


Congress Wants To Hand Your Parenting To Big Tech

By: Joe Mullin
January 21, 2026 at 23:38

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee recently held a hearing on “examining the effect of technology on America’s youth.” Witnesses warned about “addictive” online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and “empower parents.”

That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control. 

Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.  

Kids Under 13 Are Already Banned From Social Media

One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the bill sponsors’ language, one might think that’s a big change, and that today’s rules let kids wander freely into social media sites. But that’s not the case.

Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you’re under 13, you’re not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA.

Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access.

Most Social Media Use By Younger Kids Is Family-Mediated 

If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help. 

A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping one secret from their parents. That means roughly 90% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged thirteen and over, the “secret account” number is almost as low, at 6.9%.) 

Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.

A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform.

The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together. 

KOSMA Forces Platforms To Override Families 

This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.

Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13. 

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—it will be minors who are following their parents’ guidance, and the parents themselves. 

Imagine a child using their parent’s YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, “Cool video—I’ll show this to my 6th grade teacher!” and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn’t matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a “family” account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.

Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.

Your Family, Their Algorithms

KOSMA doesn’t appoint a neutral referee. Under the law, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will become the ones who decide whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to—but because Congress is threatening them with legal liability if they don’t. 

These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems. 

What Families Lose 

This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.

These aren’t loopholes. They’re how parenting works in the digital age. Parents increasingly filter, supervise, and, usually, decide together with their kids. KOSMA will lead to more locked accounts, and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust. 

What Can Be Done Instead

KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.

If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.

But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.

Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.

Republished from the EFF’s Deeplinks blog.
