Given that this is a case about addiction, that feels like a shockingly bad thing to say in defense of your product. Can you imagine saying the same thing about oxycodone or cigarettes?
[0] https://www.npr.org/2026/03/25/nx-s1-5746125/meta-youtube-so...
I also hope the reasons are obvious.
It should be no surprise that children can be manipulated by highly intelligent adults.
Why is this not only OK but the best way for Mark to spend every waking moment of his life?
Money thing? But how often would he think about his bank account versus his products? Maybe it's pure drive?
Even his medical initiative Chan-Zuckerberg biohub is a self-congratulatory shell game. I worked in the same building as them for years, literally all they did was have parties, conferences, networking events and self-congratulatory schmooze things and never prioritized actual lab research or clinical advancements.
So, if you want "these egomaniacal billionaires" to end world hunger then you're effectively asking them to form private militias and impose peace by force in the developing world. The new colonialism. Is that what you want?
>to end world hunger then you're effectively asking them to form private militias and impose peace by force in the developing world
Does this happen to be your space? If this comment were posted to a forum of experts, I imagine they would hotly debate whether a range of ideas would work.
I struggle to imagine the private militia concept would be suggested in that context; with that said, I know nothing.
And which politician would want to vote that in? Certainly no one with any rich friends who donate to their campaigns. Which means no politician that supports this is ever going to have the budget to get elected in the first place.
And then you have the problem that you cannot just fix this in one country. Because then all these rich people will find tax loopholes to claim they’re not nationals and thus exempt from this tax. So you have to convince every rich person and every politician in every country to change.
And now that you’ve created a wealth vacuum, you need to ensure that nobody rises up to flip the system again, using their wealth to manipulate everyone into repealing these new laws.
And now we are at the stage of having to change the nature of humanity…
The problem we have is that economics is driven by scarcity and consumption; and humans are largely driven by greed (or at the very least, a desire to make life comfortable). And we can’t have a future where rich people aren’t greedy, without changing the entire way economics works. Which also requires changing human nature too.
Is it human nature to rise up once a breaking point is reached? I concede it is not in our nature to finish our shift at our third job and then go knock on neighbors' doors to rock the vote (agitating to elect the least greedy, capable people).
Quick, keep my hope alive!
I mind the tolerance of society when some of these billionaires make their money on the back of negative externalities.
When "small" conflicts, like unpermissioned surveillance they use to psychological leverage against us, literally paying for content that gets eyeballs without taking any responsibility for the misinformation and hate they are financing to get produced, actively algorithmically pushing attention getting material without taking any responsibly for misinformation or hate material they are actively promoting, when they get paid for ads, but take no responsibility for taking money from scams and promoting them, and all the other seemingly "minor" but pervasive negative externalities that they hyper scale, people get hurt, and all of society gets degraded.
As everyone points out: incentives. If you don't take perverse incentives away from billionaires, or if you continue to give them perverse safe harbors, then those billionaires will relentlessly reinvest and innovate in more harms, at ever greater scales. Things we still think are minor ethical issues are not minor when they are hyper-scaled.
This isn't some passive, life-is-rough-sometimes situation that people should be expected to weather. This is highly financed, highly managed psychological, social and political harm, for profit. Even if the harm is distributed and seemingly low in any given incident, it adds up to a visibly degraded society.
Somehow social media gets treated with all the lack of responsibility of a neutral web site server. But they are highly active in how they operate. They should be responsible for their very active choices.
I don’t. That presupposes that they have anything to contribute to begin with.
Their wealth beyond some millions (edit: being generous) is built on exploitation. That’s not necessarily a transferrable... skill.
This obviously means that tech is going to have no choice but to do "age verification". And I don't think there's much of a way to do that that wouldn't be uncomfortable for a lot of us.
In fact it's even in the EU Commission's official guidance on how it should be done : https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:C... (point 46).
I understand why they would want the opposite. They can f*ck right off.
For example see the glossary in https://en.wikipedia.org/wiki/Substance_dependence
Substances like caffeine, sugar, and painkillers are definitely still referred to as “addictive”.
Whereas substances like sertraline (an antidepressant) are referred to as creating a "dependence", because it's dangerous to discontinue them abruptly (as you said) but there isn't any psychological addiction involved.
Based on the fact that many people here disagree about fundamental things, as well as the fact that “liberal” is a highly overloaded term, I think it should be obvious that it’s not obvious what you mean.
Personally, I am leery of any technical definition of “addictive” that operates outside the traditional chemical influences on physiology. So I would not describe gambling in that sense.
One might have a malady that causes gambling to take on the same physiological vibe for you, but that’s not what it means for gambling itself to be addictive.
If that is the (heavily simplified) case, is there a distinction for you between a chemically-induced dopamine release from smoking and, say, a button you can press that magically releases dopamine in your brain?
I don’t smoke, but if I did, I’m also fairly certain I would find it hard to stop.
It is not at all certain that you would find it hard to stop if you suddenly decided to try smoking. There would naturally be a risk, but how high that risk is is a debated subject if you have none of the risk factors for addiction.
There is a particular hard drug that I could easily be addicted to if it were cheaper and more accessible. Nothing else gives me such an irresistible craving for more. Not nicotine, ADHD meds or speed, benzos, and not even opioids have the same effect. So after I discovered this about myself, I went on a little journey to test myself for other possible addictions.
Social media? Nope. Video games and tv? yes. Gambling, hoarding, shopping: No. Sex: yes. Exercise: yes
I can’t rationalize any of it.
If you don’t want to call that addiction, fine, but you can’t deny that it happens.
One is physical addiction and the other is psychological.
But I'm also feeling a parallel here to people who think that mental health issues aren't real medical problems and that people can just "get better" whenever they want. And that's concerning. We shouldn't be more lenient on things that are "only" psychologically addictive.
Indeed, and if we want those behaviours to remain as things considered to be choices rather than the nearly inescapable negative life-destroying feedback loops (activities with high addiction potential, for lack of a more concise term), they should be treated with special reverence and highly restricted from outside influence. Put another way, if we want liberal societies to be sustainable, I'd argue all forms of overtly addictive behaviour should—in many cases—be banned from public advertisement and restricted from surreptitious advertisement in entertainment, and we should have definitions for those.
For ages we've not had cigarette ads on public broadcasts, and yet people still "choose" to smoke; meanwhile, there's been an increasing presence of cigarettes in Oscar-winning movies in the last 10 years.
If you are addicted to smoking and trying to avoid being reminded of it, you'd realistically have to stop watching movies and participating in that aspect of culture in order to regain control of that part of your life. Likewise, with gambling, you don't only have to stop going to the casino, you have to stop engaging with sports entertainment wholesale.
Fortunately it also has minimal to no value to society, so even if we overreacted and banned it completely it’d be fine
In the US, regardless of what type of addiction you have, it is considered mental health. Open-market insurance like ACA does not cover mental health, so there is no addiction treatment available. Sure, you can be addicted to a substance where your body needs a fix, but it is still treated as mental care. This seems to go directly against what your thoughts are on addiction, but that doesn't say much, as you're just some rando on the interweb expressing their untrained opinions. So am I, but I'm not the one spouting differing opinions with nothing more to back them up than feelings.
It's one thing if an adult smokes and gambles, it's another thing if a child does. It seems to me that stuff you do in youth tends to stick around for life.
Can we definitely say gambling addiction is less serious than alcohol addiction when there's individuals who find the former harder to quit than the latter?
I wish we'd delete that word from the English language.
Not careful enough apparently: Nicotine isn't that addictive on its own, tobacco is.
That is a very strong claim to make when the current scientific consensus strongly disagrees.
https://pmc.ncbi.nlm.nih.gov/articles/PMC4536896/
>However, nicotine can also act non-associatively. Nicotine directly enhances the reinforcing efficacy of other reinforcing stimuli in the environment, an effect that does not require a temporal or predictive relationship between nicotine and either the stimulus or the behavior. Hence, the reinforcing actions of nicotine stem both from the primary reinforcing actions of the drug (and the subsequent associative learning effects) as well as the reinforcement enhancement action of nicotine which is non-associative in nature.
You can find other studies about the addictiveness differences between cigarettes, vapes, chew, patches, pouches, etc. Basically, the methods with the most ceremony and additional stimulus are more addictive.
* I'd even change this to say modern nicotine salts in vapes are likely to lead to dependency faster than tobacco. A 5% nicotine salt pod will contain as much nicotine as a full pack of cigarettes, and so vapers tend to consume far more nicotine in a single sitting than they ever could with a cigarette. That, combined with the constant availability, means users of nicotine vapes & pouches (aka, no tobacco) are likely to have a more difficult time quitting than cigarette smokers.
Bottom line, it's still dangerous to dismiss nicotine's addictive potential, with or without tobacco as a delivery method.
Intuitively, why would you chew nicotine gum to stop smoking if it was just as addictive as cigarettes?
To be sure. But still an obviously dumb thing for a CEO to say though.
The problem is that this runs directly into the evidence that is mounting from GLP-1 agonists.
A lot more things are tied to the pathways we associate with "addiction" than we thought.
This just comes off as poorly obfuscated self selection. You own a bunch of Meta, Alphabet and other media stocks?
No, but unfortunately I can very easily imagine people saying it, just like the people who made loads of money from pushing those products did. Also just like the people who are profiting from the spread of gambling are saying now.
Why would someone choose to do a thing if it harms them? There are good arguments against laws that restrict personal freedoms, but this isn't one of them.
Though to be fair, I was mostly pointing out the fact that this was a pretty dumb thing to say for a case like this, especially in a jury trial.
In other words, it's not the posts by the influencers, but techniques such as infinite scrolling and so on.
This is why Meta and Google could not rely on the user-generated content safe harbor (Section 230) part of the law.
More to the point, though, your comments here are all straw men. This was specifically a case about targeting children with addictive features of their products.
https://www.reddit.com/r/nosurf/comments/k3vzaa/how_to_break... - used the reddit link because the existence of r/nosurf is another example of people who want to stop but find it difficult.
A statement that's been brought up even by HN commentators
Facebook is not a free market where you can choose. You're compelled to use it for several different reasons (and before some wiseass comments "you're not forced to. you can delete it" yes I know)
- They captured the early market. There was a small window of time in which to get users
- They ruthlessly bought up the competition
- They've deleted links to competitors
- They outright hijacked people's email addresses. It makes it hard to transfer users to another service or to email them outside the walled garden
- Even while they change privacy settings for users to make things more public, they wall off public pages. Your local neighborhood has a place where they post information? Even if everyone selects "Public" in the audience you can't see it without an account
Edit: Oh, and shadow profiles. And making it nigh-impossible to delete an account permanently
-- Billionaires
This is what parents are for.
Education? Safety? Environment? Justice? Is this not also, then, what parents are for?
Clearly those two things are not the same.
It could be perhaps as simple as allowing third-party websites and apps for watching Youtube on your phone. And it's okay if this would be a premium paid feature, so there's no counter argument that "it costs them money to host videos".
This is not an entirely new idea either. Before Spotify became popular, people would integrate Last.FM into their media players to get music recommendations based on their listening history, and you could listen to music via YouTube directly on the last.fm website.
Cory Doctorow wrote a great article on it:
"Interoperability Can Save the Open Web" https://spectrum.ieee.org/doctorow-interoperability
> While the dominance of Internet platforms like Twitter, Facebook, Instagram, or Amazon is often taken for granted, Doctorow argues that these walled gardens are fenced in by legal structures, not feats of engineering. Doctorow proposes forcing interoperability—any given platform’s ability to interact with another—as a way to break down those walls and to make the Internet freer and more democratic.
Most notably, he retells how early Facebook used to siphon data from its competitor MySpace and act on the user's behalf on it (e.g. reply to MySpace messages via Facebook) - and then, when the Zuck(er) was top dog, moved to make these basic interoperability actions illegal to prevent anyone doing to him what he did to others.
We need platforms to offer that interoperability and simply connect to these “marketplaces.” Take Shopify for example, sellers use that platform to list on Amazon, Google Shopping, TikTok shop, etc. We need open source alternatives to those where the sellers own the platform and these marketplaces are forced to be interoperable or left behind by those that are.
For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.
It’s a tall task, but achievable and it will happen given enough time.
There's an acronym for this: POSSE (Publish [on your] Own Site, Syndicate Elsewhere). Part of the IndieWeb movement, for those who want to explore this worthwhile idea further.
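For those curious what the syndication leg can look like in practice, here's a minimal sketch in Python. The publish_to_my_site stub is hypothetical (stand in your own blog/CMS); the Mastodon status endpoint and Bearer-token header are the real API, but treat the whole thing as an illustration rather than production code:

    import requests

    def publish_to_my_site(body):
        # Hypothetical stand-in for your own blog/CMS.
        # Returns the canonical URL of the new post.
        return "https://example.com/posts/123"

    def syndicate_to_mastodon(instance, token, body, canonical_url):
        # POSSE: the canonical copy lives on your own site; the
        # syndicated copy links back to it.
        resp = requests.post(
            f"https://{instance}/api/v1/statuses",
            headers={"Authorization": f"Bearer {token}"},
            data={"status": f"{body}\n\nOriginally posted at {canonical_url}"},
        )
        resp.raise_for_status()

    body = "New post is up!"
    syndicate_to_mastodon("mastodon.social", "YOUR_ACCESS_TOKEN", body, publish_to_my_site(body))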
It really comes down to merit and how much value you can bring to the actual sellers in these marketplaces with the software. If enough sellers switch, marketplaces will follow.
0. https://github.com/openshiporg/marketplace
The solution to this used to be that governments provide the platform. You would think this wouldn't be hard to do, since people have now shown that this can work and so it's a guaranteed money maker, or as close as you're going to get.
Yet I can't find a single initiative.
So any such rules will just make all internet platforms disappear ... and nothing.
Beyond that, there really isn't much that a small shop couldn't manage unless you are trying to be the next FAANG (and lotsa luck with that).
Among social media, Mastodon (and anything Fediverse) has it the worst, obviously, but Telegram and WhatsApp are rife with spam and scams, and Twitter, back when it still had third-party apps, was rife with credential and token compromises (mostly used to shill cryptocurrencies).
As for the price tag reference - we've seen that with SMS. It used to be the case that sending SMS cost real money, something like 20 ct/message. It was prohibitively expensive to run SMS campaigns. But nowadays? It's effectively free at scale if you go the legit route and practically free if you manage to get someone's account at one of the tons of bulk SMS providers compromised. Apple's iMessage similarly makes bad actors pay a lot, because access to it is tied to a legitimate or stolen Apple product serial.
Imagine how much less you would use text messages if they still had a per-message cost.
Do you understand that this is all literally made up? The rules can change anytime and society can exert its will to make better world rather than letting a dozen people decide how technology will shape humanity (mostly in a negative capacity if you look at the current state of things).
And make it a worse system, is what you happened to leave off.
>Do you understand that this is all literally made up
You mean the existing system that evolved from billions and billions of interactions? Explain what is 'made up' about it.
The thing is if you start 'making up' random ass laws that piss people off, they will run screaming back to the billionaires to pwn them with locked down systems. Apple is a great example here. Shit is locked down and people love it.
I'm sorry, but that's deeply undemocratic; today's generation should have a direct say in how new things affect their lives.
Failure to do this might literally condemn our species to extinction, and this only took less than 200 years to achieve. I'm sorry but they've proven their failure and it's time to make drastic changes.
The good news is many people agree with this across the electorate, so now you get to decide which people you want shaping society. The previous world order of US imperialism is going to end, and I'd rather have the people decide what to do than those who want to continue running headfirst into extinction.
I don't disagree.
Of course Chinese imperialism probably won't be much better.
The key thing with Apple is that Stuff Just Works. Not necessarily with non-Apple things (the compatibility ranges from almost excellent aka AirPods on Windows/Android to disastrous aka ever tried to transfer files from an Android phone to a Mac), but as long as you stay in the Apple ecosystem of macOS + Apple TV + iOS + AirPods, the user experience is generally really, really frictionless.
In contrast, with Windows, it's an unholy mess of catastrophic drivers, Windows Update, and aggressive pushing of AI and advertising. And external hardware is hit and miss, with compatibility issues everywhere.
And Android, oh dear god. I used to prefer Android over iOS because the hardware was cheaper and I could reasonably root it so I could do actual backups that worked... but ever since Covid, more and more apps break on detecting root and there's still no backup solution, so I bit the bullet and went second-hand Apple. At least I got backups now.
Personally, I admit, I have aged - I'm 34, I don't want to fiddle and mess around with my daily driver constantly, and frankly I don't have time to bother with ads, so that's why I went with Apple for most of my things. When I want to fiddle, I got a fleet of Raspberrys plus a decent homelab. But there, I can choose to fiddle around if I want to.
While the thing that gives you quick dopamine might win in the very short term, you can still step back and recognize when it's not satisfying in the long term and you're not even enjoying it that much.
And people aren't stupid. Junk food exists, yet lots of people choose to eat more wholesome food as the majority of their diet.
The problem with instagram or youtube is that you can't separate the good from the bad.
It's like if every time you went to store Y to buy milk, you would be exposed to highly manipulative marketing trying to get you to buy junk food. You would probably want to go to a different store instead.
What I'm suggesting is the possibilities of different stores, with different philosophies and standards, so that people can choose where they go. Corner stores (where almost everything is junk food) exist, yet people still choose to go to real supermarkets.
But that's very much the norm at supermarkets?
That's also true for heroin. Plenty of people really want to break the addiction.
The slop exists because people are attracted to it.
Absolutely not. It's much easier to make a one-time switch than to be continuously resisting temptation. Changing the things in your environment is an important tool to break bad habits. The book "Atomic Habits" talks about this at length.
They've already built all the tools they need around this at the moment, it's just they give them to advertisers rather than end-users.
I realize “less addictive algo” is a different thing to pay for than removing ads - but it’s, if anything, an even harder sell - I think the layperson wouldn’t even acknowledge that they are vulnerable to being psychologically manipulated. They think they spend so much time on these apps because it’s so enjoyable.
From most parents’ point of view, paying a monthly bill for their children to have a less toxic experience on TikTok, or YouTube will be considered an extravagance instead of a responsible safety expense.
I still scrobble to Last.fm from Spotify (and other media players). I rarely use it for discovery anymore, but it's occasionally interesting to look at my historical listening trends.
However, I've always thought that it's pretty bizarre for Section 230 protections to apply when the social media company has extremely sophisticated algorithms that determine how much reach every user-generated piece of content gets. To me there's really no distinction between the "opinion" or "editorial" section of a traditional media publication and the algorithms which determine the reach of a piece of user-generated content on Twitter, YouTube, etc.
I’d be strongly in favor of interoperability laws to pry open the monopolies.
(One dynamic you do need to be careful about especially at first - interoperability also means IG can pull your friend graph from Snapchat, so it can also make it easier for big companies to smother smaller ones that are getting momentum based on their own social graph growth due to their USP. I don’t think this is insurmountable, just something to be careful of when implementing.)
Drop the algorithm altogether? I subscribe to channels for a reason.
And how does this prevent addictive algorithms which will win through social selection?
The winning third party algorithm will be the one that gives people the same rush the first party algorithms currently do, because people will use it for the same reasons; they get to see cute AI animals do crazy things forever.
The youtube algorithm has been personalized for much more than 10 years and has never prioritized any kind of lectures or artful films over anything else it thinks a viewer will watch. You're asking for them to bring back an era that never existed.
If you're not getting those sorts of recommendations it's because you don't actually watch that kind of content, or you're removing your history.
That would make it very hard, nigh impossible, for a platform like YouTube or TikTok to exist as it does today, and would instead favor people self-curating mechanisms like RSS readers etc.
There is no solution for this kind of verdict beyond appeal, or changes to the law to rule such suits out, because it's not rooted in any logical or legal principle beyond the idea that people should not be responsible for their own actions (or their children's actions). But there's no limiting factor to that belief. You can't fix it with RSS or federation or making people select who they follow or chronological feeds. Those would just get blamed for "addiction" instead.
Ordinary media, like newspapers, books, radio, and TV, have worked this way forever — people publish “channels” and you decide what channels to follow. A channel can be held accountable.
The algorithm model is different. People just publish “content” into the platform, and the platform makes a custom channel for each viewer, inserting content from people you’ve never heard of and didn’t ask to follow. And it optimizes that custom channel for whatever addicts you the most. That’s fundamentally a different beast than opt-in media consumption.
There's really no difference. Media companies all aggressively optimize for engagement, often to the point of A/B testing headlines.
HN also shows different pages to different people. The set of headlines and their ranking is constantly changing, and user settings change whether you see dead/flagged articles or not.
The idea there's some fundamental difference here is people working backwards from the wealth of the operators to some conclusion they'd like to be true, usually one that lets them blame other people for their own decisions. But there's no validity to this.
That isn't what would happen.
What would happen is that only the platforms which can afford legal teams - in other words, the big platforms - would host user posted content under strict arbitration only terms, and every other platform (including Hacker News, which uses an algorithmic feed) would simply not. Removing one of the cornerstones of free speech on the web in favor of regulation will only centralize the web more.
And you wouldn't see mass adoption of "self curating mechanisms" because most people aren't like Hacker News people and would find the premise of having to manually curate data feeds from every site they visit to be a tedious waste of their time.
I also think that platforms like Youtube and Tiktok shouldn't be illegal. I don't even think that personalized algorithms should be illegal - it's surprising that one has to point this out on a forum of programmers - but algorithms have no inherent moral dimension and the ability to use an algorithm to find and classify relevant content can be useful. The same algorithm that surfaces extremist content surfaces non-extremist content. The algorithm isn't the problem, rather the content and the policies of these platforms are the problem. And I don't think the solution to either is de facto making math illegal and free speech more difficult.
Edit to include: I mean this is coming the same day as the Supreme Court throwing out the piracy case against Cox Communications 9-0. Remember that this case originated with a $1 billion jury verdict against them! It was reversed by an appeals court 5 years later and completely invalidated today. Juries should not handle complex civil litigation, I'm sorry.
Suing Facebook for systematically behaving badly is one thing, if you can prove it and prove it harmed you.
Suing _everybody_ is one random person getting rich for… being mad at the world she was born into?
Whenever the McDonald's coffee case comes up, I always see caveats about how the actual case was a lot less sensational than the "woman sues McDonald's for coffee being too hot" headline implies.
I strongly disagree. I'm very familiar with the details of the actual case, and the Wikipedia article gives a good overview: https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau... . Yes, the plaintiff received horrific third degree burns when she spilled the coffee on herself, but lots of products can cause horrible harm if used incorrectly - people cut fingers off all the time with kitchen knives, for example.
I find the headline "Woman sues McDonald's for their coffee being too hot" a completely accurate description of what happened, with no hyperbole and no "ridiculousness" at all.
- Other customers had suffered similar burns, and McDonald's knew about it and did not change the policy
McDonald's, then, was willfully and inevitably causing injury to random customers in order to save themselves a few cents in coffee.
In light of those facts, I think a $2M verdict was too low, and the executives who decided to continue keeping the coffee that hot should have been criminally charged with reckless endangerment.
Did you just make up that 140 number? To add to the other sibling comment, SCA (https://sca.coffee/) requires that water contacts the grounds at a temp of 195-205 F and that the coffee be at a temp of 175-185 up to 30 mins after brewing in order to certify home brewers:
> The SCA ensures that the brewer's carafe is appropriately sized for its designated machine and can maintain the coffee's warmth. Specifically, the brew must stay within the range of 176 °-185°F (80°-85°C) for at least 30 minutes post-brew. While retaining this warmth, the machine must never actively reheat the brew, ensuring the coffee's nuanced flavors remain intact. (from https://us.moccamaster.com/blogs/blog/certified-by-the-sca-m...)
Then you say
> Other customers had suffered similar burns, and McDonald's knew about it and did not change the policy
Again, lots of people cut their fingers off, accidentally, with knives. I don't think this means knife makers were "willfully and inevitably causing injury to random customers" because their product was too sharp.
The "official" recommendation for keeping coffee that I've seen, eg here[1], has always been around 80-85C, which translates to 176-185F.
Good home brewers, like this[2], will hold the coffee at that temperature.
It wasn't just that the plaintiff spilled coffee on herself, it's that she spilled it while she was in her car and didn't immediately try to get it off (not blaming her, she was elderly). So yeah, I'm not surprised that spilling a very hot liquid on yourself and then sitting in it for an extended period causes severe burns.
Nothing wrong with getting mad at the world when the world is complete and utter garbage to you.
You might be blaming the wrong people. Looking at a lot of those "shockingly large verdicts", in that they would have bankrupted the company and forced it to be dissolved and reformed as perhaps a less objectionable version of itself: cool, shoulda done that. Sad we didn't.
Are we conflating matters of merit with matters of judgment, here?
Actively ignoring harm caused by your product. TV/radio also sold attention, but there were pretty strict rules on what you can/can't broadcast, and to whom (ignoring cable for the moment). It's the same for services: things that knowingly encourage damaging behaviours are liable for prosecution.
Wouldn't it be better if apps/websites targeting kids didn't use A/B testing to be more addictive?
Pokemon is addictive, computer games are addictive. It's whether they are knowingly causing harm, and/or avoiding attempts to stop that harm.
I don't have an answer to fix this whole mess, but it starts with our attitude towards addiction. We've built a system that rewards addiction in all sorts of places. Granted, every addiction is different, and I'm of the opinion that it's not (drug = bad), it's how you use it and react to it. We can control the latter, but we choose to ignore it because we're too busy with anything else. This is a tale as old as time...
Exactly what I keep coming back to.
For me, it feels like you could cut this problem down substantially by eliminating section 230 protection on any algorithmically elevated content. Everywhere. Full stop.
If you write or have an algorithm created that pushes content to users, in ANY fashion, that is endorsement. You want that content to be seen, for whatever odd reason, and if it's harmful to your users, you should be held responsible for it. It's one thing if some random asshole messages me on Telegram trying to scam me; there's little Telegram can do (though a fucking "do not permit messages from people not in my contacts" setting would be nice). But there is nothing at all that "makes" Facebook shovel AI bullshit at people, apart from the fact that it juices engagement, either genuine or ironic/ragebaiting.
And AI bullshit is just annoying, I've seen "Facebook help" groups that are clearly just trawling to get people's account info, I've seen scam pages and products, all kinds of shit, and either it pisses people off so Facebook passes it around, or they give Facebook money and Facebook shoves it into the feeds of everyone they can.
It's fucking disgusting and there's no reason to permit it.
I don't see a good way to make a definite legal distinction between the icky stuff and normal, unobjectionable things which are, technically, also forms of elevation-by-algorithm:
rank_by_age(items) // Good
rank_by_age_and_poster_reputation(items) // Probably
rank_by_on_topic_ness(items, forum_subject)
rank_by_likes(items)
<-- here -->
rank_by_engagement_likelihood(items) // Bad?
rank_by_positive_sentiment_toward_clients(items) // Bad
Age is deterministic. When was the thing posted?

Poster reputation is deterministic. How many times has this poster received positive feedback based on their content?
On-topic-ness is deterministic, if a bit fuzzy. That said, I think the likes will reflect this; if you post a thread about cooking potatoes in the gopro subreddit, your post will be downvoted and probably removed via other means, in which case its presence in the feed is already null.
Likes are again, deterministic. How many people upvoted it?
In contrast:
Engagement likelihood is clearly a subjective, theoretical measure. An algorithm is going to parse a database for other posts like this, see how much attention they got, and ask "is this likely to drive engagement?" That's what I'm talking about.
And positive sentiment towards clients I can't quite read? I'm guessing you're referring to like, community sponsors but I'm not 100% certain. But that almost certainly is a subjective one too, and even if not, it's giving people with money the ability to put their thumb on the scale.
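To make the deterministic-versus-subjective split concrete, here's a minimal sketch in Python. The model object is a stand-in for whatever proprietary engagement predictor a platform runs; everything else is recomputable from public facts:

    # Deterministic: anyone can recompute these orderings from public data.
    def rank_by_age(items):
        return sorted(items, key=lambda i: i["posted_at"], reverse=True)

    def rank_by_likes(items):
        return sorted(items, key=lambda i: i["likes"], reverse=True)

    # Subjective: the ordering depends on a learned model's prediction about
    # this particular viewer, and can't be audited from the outside.
    def rank_by_engagement_likelihood(items, viewer, model):
        return sorted(items, key=lambda i: model.predict(viewer, i), reverse=True)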
> On-topic-ness is deterministic, if a bit fuzzy.
If you permit that exception (even for good reasons) then it reveals how the original "algorithmic elevation" is too vague and unenforceable.
All someone needs is a ToS footnote like "this forum is provided for truthful international news and engaging with $COMPANY in a positive way." Poof, loophole. Anything the moderator (or moderator-algorithm) decides is "untrue" or "negative" becomes off-topic and can be pushed down.
Yes. People make free speech arguments about this, but the list and order of stuff returned by algorithmic non-directed (+) lists is clearly a form of endorsement. Even more so is advertising, which undergoes a bidding process. Pages which show ads should be liable if those ads are fraudulent, especially if they're so obviously fraudulent that casual readers suspect them immediately.
(+) Returning a list of stuff in a user-specified query, on the other hand, is not endorsement. Chronological or alphabetical order or distance-based or even random is fine.
Note that section 230 is, of course, US specific and other countries manage without it.
Not only is this seemingly the most desired feed among end users, it was also the default one. MySpace didn't have a choice in the matter, they had to show a chronological timeline, because they didn't have a machine-learning algorithm nor a way to make one. They could tweak it based on engagement metrics but on the whole, it was just here's what all your friends have posted, in reverse order, scroll away. And then eventually you'd hit the end where it's like "you're up to date" and then you go on with your fucking day.
But of course platforms hate that. They want you there, all day, scrolling through an infinite deluge of bullshit, amongst which they can park ads. And we know they hate this, because not only have platforms refused to bring back chronological feeds, they actively removed them if they existed at one time. Not only is this doable, it's the most efficient way that requires the least compute from their servers, but platforms reliably chose the inverse... because it makes them more money.
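For what it's worth, the mechanics being described fit in a few lines. A sketch (assuming posts are dicts with a posted_at timestamp) of the entire MySpace-era "algorithm", including the part platforms hate, the natural stopping point:

    def chronological_feed(friend_posts, last_seen_at):
        # Everything your friends posted since your last visit, newest first.
        # No model, no engagement signal, and crucially: the list is finite,
        # so when you reach the end, "you're up to date."
        new = [p for p in friend_posts if p["posted_at"] > last_seen_at]
        return sorted(new, key=lambda p: p["posted_at"], reverse=True)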
Also specifically on this:
> My point is that this would mean algorithmic feeds can only contain vapid, pointless content
The vast majority of these sites is vapid, pointless content RIGHT NOW, even if it attempts to convince you it isn't.
Not enough to diffuse liability. 15 years ago when recommender algorithms were the new hotness, I saw every single group of students introduced to the idea immediately grasp the implication that the endgame would involve pandering to base instincts. If someone didn't understand this, it's because
> It is difficult to get a man to understand something, when his salary depends on his not understanding it. - Upton Sinclair
I watched 80s horror movies when I was in elementary school and had nightmares for years. Should I sue now?
How about parents be held responsible for how they care for their kids or not? Maybe a culture that judged parents more strongly for how they let their kids spend their time would be an improvement.
When people say that Tetris and Civilization are “addictive” they aren’t implying anything malicious about the development, it’s more of a compliment about the game (and maybe a little lament about staying up too late).
But the addictive nature of social media feels different and I can’t figure out what that distinction is.
What I go into the app to do: see if there are any updates from those businesses.
What the app presents me on launch: a bunch of nonsense selected for whatever will best distract me. And you know what? Sometimes it does catch my attention for a minute or two!
What the app doesn't let me do: disable the nonsense, or even default to the tab of accounts I'm following. Hell they even intentionally broke ways to achieve this with iOS' scripting, you'd think that'd be niche-enough they wouldn't care, but apparently enough people were doing it that they bothered to break it.
The algo feed is addictive on-purpose. I would turn it off if I could, and there's a damn good reason they don't let you do that. I "choose" to engage with it sometimes, which sometimes gets people coming out to go "oh-ho! So your revealed preference is that you like the feed!" but that's plainly silly, as that's highly contextual and my in-fact actual preference would be to never see that feed again in my life, and in fact I've spent a little time trying to make that happen. It's only my "revealed preference" in a world where I've had to compromise by occasionally losing a couple minutes to this crap because the app won't let me go straight to what I actually want. That's my true preference, the "revealed" one is only ever briefly flirted-with in a context in which I'm prevented from attaining my actual preference.
Consider a person who struggles with eating junk food. They don't keep junk food at home, in fact. That is their preference, to not keep it around, because they don't want to eat it and know they will if it's there. Now concoct some scenario in which, in exchange for something else they want, they have to take delivery of a couple bags of potato chips and a box of cookies every week. And sometimes, they eat some of that before tossing it out or giving it away! "Ah-ha, so their revealed preference is that they want junk food!" Like, no, of course not.
There's a reason these apps have to prevent you from using any part of them except with the presentation they like: because they're being addictive on purpose, and tons of users do not want the addictive parts, at all, but do want other parts.
OK, let me try to analyze it:
1. Humans are idiots.
2. We have idiot glitches where we obsess over some particular thing. This is our own business and our own fault, and is impossible to tease apart from just liking stuff a lot and benefitting from it.
3. These glitches tend to accumulate in certain areas, and then some companies find themselves in the position of profiting from human glitchy idiocy, even though they didn't want to be behaving like scammers.
4. Then some of them get cynical about it and focus on that market segment, the obsessed idiots. This can include gambling and social media.
A really well built hammer doesn't make you want to spend all your time using a hammer, it's just good when you need a hammer. That's a well-made product that you choose to keep using.
In general, professionals must be licensed and bonded. The state requires a degree and a test for the first license; then, for my spouse's, something like 8000 additional hours of training, and something like 100 hours of continuing education per year. A CEU is 1 hour of continuing education. You have ~5 years of time to transition your license by doing the above training and CEUs, as a rolling window. Doctors, nurses, etc. all have to do this sort of thing.
Would any of you put up with that kind of stuff to make $80k a year?
When it comes to behavioral psychology research, there is a strong understanding of concepts such as behavioral reward schedules: interval-based rewards, time-based rewards, variable-interval rewards. People have a very clear understanding of what sort of stimulus is and is not prone to addiction. You can get a mouse in a cage to become hopelessly addicted to pressing a lever for a reward depending on what reward schedule you use, and this does not happen to a mouse who can just get the reward at a regular (or perhaps merely less addictive) interval. The mouse in the cage pressing a button set to a variable-ratio reward is equivalent to an old person using a slot machine in a very literal and direct way. This also translates to social media with infinite scrolling: so many of the stories are duds, but on a variable interval, the extremely enticing (or enraging) story just might be the next one.
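A toy simulation of that lever-press experiment shows how little machinery the hook requires (my numbers are arbitrary; the asymmetry between the schedules, not the values, is the point):

    import random

    def fixed_ratio_rewards(presses, every_n=10):
        # Predictable schedule: reward on every n-th press. Easy to stop,
        # because between rewards the next press is known to pay nothing.
        return sum(1 for i in range(1, presses + 1) if i % every_n == 0)

    def variable_ratio_rewards(presses, p=0.1):
        # Slot-machine schedule: each press pays off with probability p, so
        # the *next* press is always a live possibility. Same expected
        # payout, radically harder to walk away from.
        return sum(1 for _ in range(presses) if random.random() < p)

    random.seed(0)
    print(fixed_ratio_rewards(100), variable_ratio_rewards(100))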
Because it's a figure of speech, not a clinical diagnosis. Literal and figurative addictions are different beasts.
Intent, premeditation, and scale are major differentiators. When they know they will cause harm, concentrate and fine-tune it for effect, turn it into a firehose, and target it at specific individuals, it's very, very different from what random ads, games, or movies do. These companies literally designed their products with the intent to make them addictive and to target children, knowing the full implications and ignoring the harm they caused.
You're comparing a drug dealer who only sells to kids to a store clerk who also sells ice cream to kids. It doesn't take more than scratching the surface to realize the similarity is very fleeting.
- Social media is still somewhat new, and the broader public is only now discovering that it's a clear net negative both personally and for society. Because this is such a new realization, I think a LOT of people have not really figured out how this problem should be dealt with (both personally, via social norms, but also with regard to laws and regulations).
- No matter how awesome of a parent you are, 100% of your kids' friends will have social media and they will introduce it to your kid. That may do less harm than if they have it themselves, but some harm will still be done.
- There are network effects to consider. It's true that it's your personal fault if you use cocaine -- however we also understand that cocaine is so addictive that it really cannot be used safely. Social media is metaphorically the same. It's a personal failing if you're a social media addict, however broadly almost everyone is susceptible to it. In my mind, that is an argument for regulation.
Now that said, I have zero faith that our government can actually build sensible regulation here.
They've created algorithms that use slot-machine-like experiences to keep kids hooked to the screen.
These algorithms feed users barely moderated content that plays to their worst instincts, with almost surgical precision when the goal is to elicit engagement.
Then when research shows them the harm they're causing, they bury it, hire lobbyists, and double down.
Switch out a few words up there and you have the big tobacco playbook.
0: https://en.wikipedia.org/wiki/Regulations_on_children's_tele...
Those ads didn't adjust themselves on a per-child basis to their exact interests.
Does any of that obviate the need for safe urban design, anti-CSAM and anti-molestation laws, or laws prohibiting the local dive from serving a cold one to my 11 year old? Will simple appeals for "parental responsibility" suffice as an argument for undoing those child safety systems we put in place, or will they be met with derisive dismissal? Why should your "solution" be treated any differently? In fact you offer none. Yours is the non-solution solution, the not-my-problem solution, the go-away solution. Not good enough on its own, sorry.
Now, we call the police, and arrest parents, if kids are outside, unsupervised. https://www.cnn.com/2024/12/22/us/mother-arrested-missing-so...
When I was a child in the 80s and 90s, we had "jobs" as kids... Mowing lawns, Paper routes and so on. Now if you go offer to mow your neighbors lawn, the cops get called: https://www.fox8live.com/2023/07/26/officer-surprises-young-...
Parents are afraid to let their kids out of their sight, and they tend to look down on those of us who have been pragmatic because we understand the data (and not the fear).
Talk to anyone who is Gen X and they will tell you that we basically got thrown outside all day (and had fun). Parents can't say "go outside and play", so kids end up getting handed devices... and they are going to play and explore and do the dumb things that get them in trouble.
> those child safety systems we put in place
Except we have denormalized things that SHOULD be perfectly fine. And as fewer kids get to go outside unattended with friends, it pushes their peers to go "online" to socialize.
Maybe the government needs to run commercials: "It's 10am, why isn't your child outside playing with the neighbor kids unsupervised?"
I have had CPS called on me by an overbearing school administrator. Have you had that happen to you? Let me tell you, it's not a fun experience.
Enough of this "blame the parents" mentality! Ironic given that the goal for all these platforms is growth at all costs. Where do you think "growth" comes from, after all? If you make being a parent so goddamn difficult that it's more rational to just not do it, guess what, poof goes your sweet, sweet growth.
So tired of this line of thinking. The parents are put into an impossible situation. Stuck between kids who by definition and by design will test the boundaries that they're given, and tech platforms that are propped up with not just trillions of dollars of valuation, but the societal expectation that you engage with them. Want your kids to compete in sports? Well, they need to have WhatsApp and Instagram to keep track of team events!
Give me a break. Equating controlling social media and devices to "look both ways when crossing the street" is disingenuous at best. There are no companies that make billions of dollars in advertising revenue telling your kids to jaywalk. But Facebook gladly weaponizes their algorithm to drive "engagement" - and, surprise, children with still-forming prefrontal cortices are drawn to content that reinforce their natural self-criticisms and doubts. So now my child, who has to be on Instagram to keep track of sports schedules, is also force fed toxic content because that's what a mechanical algorithm thinks is most "engaging" based on my derived psychological and demographic profile.
You want to talk about CSAM? X proudly proclaims that they have every right to produce deep-fake pornography with the faces of underage children. What action shall I, as an individual parent, take if my 15 year old girl's face is suddenly pasted onto sexually explicit video and widely shared thanks to xAI's actions? Shall I be held responsible for how I "let this happen" to my child?
If YouTube detects that a child is watching 5 hours of video a day, should Google alert child protective services?
It's not; that's illegal as well. You cannot target kids with TV advertising.
I homeschool our youngest because the school system here sucks, based on the experiences of our older two. I'm always exhausted. I solved this (the "parents must be more involved") by watching my kid play roblox, arguing with them about spending their money on gift cards instead of lego, posters, or whatever that isn't so fleeting; i also don't let them have a cellphone. They turn 10 in June. We don't have TV or CATV, i have downloaded most of the old TV programs that kids liked, and grandma doesn't watch kid's shows so he really doesn't have a perspective on what everyone else's viewing habits are. He watches YT on his Switch about fireworks, cars, and then also some of the idiots with too much money acting goofy, plus what i would call "vines compilations" of just noises and moving pictures, i don't get it, but it seems harmless. For the record, pihole no longer blocks youtube ads, so i was just told there are ads on the Switch, now.
But anything beyond that, i can't watch nor do i want to watch their every interaction on a computer. I gotta cook, the weather isn't always conducive to send them outside to play, as well. When i was growing up and was bored, there wasn't too much i could do about it. Today, my youngest has virtually anything on the planet just peeking around the corner. America's Funniest home videos and a blue square shooting red squares at orange squares? yeah, ok.
===========
It's getting to the point where i think people who have really strong opinions on topics like this need to disclose any positions they might have that influence their opinion. My disclosure is that i have no positions in any company or entity.
Everyone in the US has been fed a lie that if we just work hard and don't interfere with the billionaire class, that someday, we, too, can be rich like them. It's a bum steer, folks. For each 1 billionaire that "came up from the slums" or whatever, there's 100 that are billionaires because their families did some messed up stuff, probably globally, sometime in the last 200 years. And offhand, knowing the stories of a bunch of billionaires: 10 in the US that were honestly self-made, didn't fraud, cheat, or skirt regulations to become that way seems almost a magnitude too high.
i bring all of the above 2 paragraphs fore, because if one has a position in facebook, of course they're going to rail against facebook losing 230 protection for any part of their operation, instagram, FB feed, whatever. If a person has a position in GOOG, or Apple, or Tesla. What's that Upton Sinclair quote that's been mentioned twice? If someone believes that, given luck and grit, they too could make a "facebook" sized corp, but not if the government says "you can't addict children to sell ads", then i consider them a creep.
For the record: my oldest two are in their early 20s now.
A really good designer could make a highly engaging app, or an editor could write clickbait headlines, all without testing.
Look at the plaintiff in this case: it's a mentally unstable person who blames her life problems on social media. Never mind the fact that she had been diagnosed with mental illnesses as an early teen, or that an overwhelming majority of people who use social media don't develop eating disorders or other mental illnesses as a result of it (and in fact the incidence of say bulimia peaked 30 years ago in spite of almost universal social media adoption among young people). This is not at all like smoking where 15% of smokers will get lung cancer.
And due to some absurd legal reasoning the plaintiff was allowed to pseudonymously extort $3 million out of tech companies. Worst of all I see people on a technology forum applauding this out of some sort of resentment towards large companies!
Companies do this research for all sorts of reasons (including legal compliance, demonstrating due diligence to regulators, to understand users and improve products, etc etc etc). For example, it's not like Zuck commissioned an internal study to show how they're harming children, more like some internal team was seeking to understand why kids love a certain feature which led them to conclusions that make the company look bad.
To your third point, that research is usually leaked by whistleblowers or conducted by third parties, not because of the altruism of these companies.
Finally, the platforms aren't doing enough and with this court case, it seems like they've persisted in finding ways to hook children because of financial incentives.
The sources cited in this article are a good primer for understanding what these companies are doing: https://www.transparencycoalition.ai/news/meta-suppressed-re...
The argument is not that it is vaguely "somehow damning".
The argument is that the existence of the research and its findings, that it was in the hands of the firms, and that they actively chose to suppress it, is evidence of one specific fact relevant to liability: that, at the time they made relevant business decisions around or after the review and decision to suppress the reports, they had knowledge of the facts contained in the report.
> The most obvious reason being that they obviously didn't do a very good job of suppressing it given that we hear this claim every day.
The success of suppression is not relevant to what the decision to suppress is used to prove.
> The second being that they could have just not done this research at all and then there would have been nothing to "suppress"
The fact that, had they made different decisions previously, they would not have had knowledge of the facts that they actually had when they made later business decisions is also not relevant to what the existence and suppression of the research is used to prove.
> (this terminology is also very odd... if 3M analyzes different sticky notes and concludes that their competitors sticky notes are better than theirs but does not release the results, is that suppression?).
It would obviously be suppression of the report (which isn't a legal term of art but a plain-language descriptive term), but unless they later made fact claims about their product that were contrary to what was in the suppressed report and were being sued for fraud or false advertising, that suppression probably wouldn't be useful as evidence of anything that would produce legal liability.
> The third is that studies with the same results have come out probably every year since 2010 and have been routinely cited in the mainstream press.
Which is additional, though weaker, evidence of the firms' knowledge of the same conclusions (weaker, because it's pretty hard to prove that the firm had particular knowledge of any of those studies, but it is pretty easy to prove that they had knowledge of the studies for which there is documentation of commissioning, reviewing, discussing internally, and deciding to suppress).
But it doesn't in any way counter the weight of the evidence of the suppressed reports, it weighs in the same direction, just in much smaller measure.
Unfortunately for you and the social media sites, the legal standard for defective products has no "percentage" of people harmed required to incur liability. Product liability is about showing the product was defectively designed and caused foreseeable harm to a specific plaintiff.
> absurd legal reasoning
It's certainly not surprising you think protecting minors in legal cases (she was a minor when the case was filed) is "absurd legal reasoning".
Addressing the actual legal questions in the case might be more fruitful than hurling shit against a wall.
*Except for your time and mental health of course
Always doing wholesome stuff with your kids is certainly not easy or trivial, but there is a cascading effect here. If your child does not expect to be able to just watch TV all the time, it's easier to keep them interested in other things. Once that expectation is burned in, you'll be fighting it for a while, and a small child will _never_ say "I've had enough youtube, I don't need any more."
So I really don't want to be self-righteous about always doing wholesome stuff with your kids (we definitely do not succeed 100% of the time) -- but rather point out that letting them use addictive media has negative, cascading consequences that actually do make it harder for you as a parent. It's analogous to drinking to relax. You get relief now, and pay for it later. Not actually a good tradeoff much of the time.
It was really annoying turning on a show for 30 minutes then for the next week hearing about that new toy they just have to get. It was exhausting.
> Jurors were charged with determining whether the companies acted negligently in designing their products and failed to warn her of the dangers.
So if you do so while providing warnings and controls for people, that might make it OK in the eyes of the law?
But then again, I manage to get myself addicted to a video game usually once a winter for a few weeks, and don’t play games for the rest of the year. There’s really no solution to this, but I don’t want to live in a world where everyone is hopelessly addicted to shallow digital experiences.
We had 10+ years of products like Facebook, Twitter, YouTube, hell even LinkedIn, with a basic content model of "you build your own graph of people who you pull content from"; their job was to show it to you and put ads in there to fund the whole enterprise. If I decided to follow harmful content? That was a pact between me and the content creator, and YouTube was nothing more than a pipe the content flowed through. They were able to build multi-billion dollar businesses off of this. That's really important: this was enormously profitable. But then the problem happened that people's graphs weren't interesting enough, and sometimes they'd go on the thing and there were no new posts from people they followed, and this was leaving money on the table. So they took care of that problem by handing over control of the feed to the reward function.
More accurately, especially for Meta products: they completely took control away from you. You didn't even have the option to retain the old, chronological social graph feed anymore. And it was ludicrously profitable. So now the laws of capitalism dictate that everyone else has to follow suit. I now have extensions on my browser for Instagram and YouTube to disable content from anything I don't follow - because I still find these apps useful for that one original purpose they had when they blew up and became mainstream. Why are these browser extensions? Why can't I choose to not see this stuff in their apps? That's the major regulation hole that led to this lawsuit, imo.
It's the same thing you see with people blaming smartphones for brainrot. We've had 15 to 20 years of smartphones with more or less the same capabilities as they have today, and for the vast majority of that time my phone didn't make books less interesting or make me struggle to do chores or manage my time. For a full decade or more I saw my phone as a net positive in my life, was proud to work for Twitter, and generally saw technology like the Louis CK bit about the miracle of using a smartphone connected to Wi-Fi on an airplane. But in the last five years or so, things have noticeably and increasingly gone to shit. Brainrot is a thing. All my real-life friends who are the opposite of terminally online or technical are talking about it. I don't use TikTok, but it seems like it is absolutely annihilating attention spans. The topic of conversation over drinks is how we've collectively self-diagnosed with ADHD and struggle with all kinds of executive function, but we're also old enough to remember a time when none of this existed. Complete normies are reading Dopamine Nation and listening to Andrew Huberman trying to free themselves.
I don't know what the exact solution is, but there's at least a simpler time we can point to when we all had smartphones and we were all connected via platforms and we all posted and consumed stupid pictures of each other and it wasn't.... _this_.
I'd add one additional layer: it's not just that the algorithm picks what you see, it's that the entire UX is built around keeping you in the loop. On YouTube Kids, even with autoplay off, the end-of-episode screen shows a grid of recommended videos. My toddler doesn't care about "the algorithm" in any abstract sense. He just sees more fire truck videos and wants the next one. The transition out of the app is designed to fail.
Your point about smartphones not being the problem is key. I was at Google during the era you're describing, when the phone was a net positive. The hardware didn't change. The business model did.
At least some legal experts are critical of the decision: '“I don’t think it should have ever gotten to a jury trial,” said Erwin Chemerinsky, dean of the UC Berkeley School of Law'
Anecdote, but it does seem like a lot of younger folks I speak with are exhausted by the dark patterns and dopamine extraction that top-k social media platforms create.
If agents/AI/bots inadvertently destroy the current incarnation of social media through noise, I think we'll be better for it.
This sounds like the original internet.
Before adtech took over.
Getting back to community is key.
To me this statement reads as both inaccurate and ignorant of human nature. Social media was actually better when it was about individual ego (Myspace/LiveJournal); as obnoxious as that can be, today everything is worse because of petty tribalism. Most conflicts on social media are inter-tribal, whether it’s racial, political, national, or feuding “stan” culture groups. The worst problems come from groups who organize on platforms like Discord or Kiwi Farms to direct harassment campaigns against perceived enemies (or random “lolcow” victims).
Simple observation of the present world and history will tell you that a platform focused on “collective improvement” will only appeal to a small subset of potential users. Of course such a platform would not be a bad thing. Places like this (such as The WELL) used to be common when the internet was dominated by academics, futurists, and tech enthusiasts. But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.
> But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.
I'm not ignorant of human nature and tribalistic tendencies. The undercurrent of my comment is of an optimistic hope (or cope) that we can move past competitive individual validation programming. I'm aware that it's due to our nature, but also aware that it's exploited by dark patterns and extraction at scale through software.
Since we don’t live in a perfect world, I suppose some regulation of the industry would be fair, just as we mitigate the harms of gambling somewhat through regulation. I just worry about regulation being used as a Trojan horse to stifle political organization and/or open communication about corruption, cronyism, and oppression.
It may be that the future is more small platforms where conflict is limited to in-group conflict rather than global platforms where all of humanity’s disagreements are surfaced and turned into fodder for monetization.
Regulation could work, but in my opinion the problem isn't devious mastermind product people attempting to entrap humanity -- it's self entrapment in a recursive way.
Regulators could add red tape and boundaries for what is or isn't kosher or legal, but in the end, can prohibition fix systemic integration with an addictive technological superagonist of our own creation?
Regulation isn’t perfect; in the best case all it can do is limit the worst harms. It’s still a bad idea to engage in regulated gambling, as you are very likely to lose money. Almost everyone knows this, yet many people do it, and I can’t see that changing any time soon.
Do you have a mechanism for this in mind, incentives-wise? I can't see this making money.
(If we hit the stretch goal, we can upgrade to a raspberry pi!)
Said little sites may run for a bit and die, and the massive monolith remains, at least until another monolith replaces them.
I suspect in just a few more decades, we shall reinvent the 90s and 2000s p2p networks from first principles.
They worked great when most actors were well behaved and got abandoned pretty quickly once that changed.
There's a p2p network I use today which doesn't have that problem, probably due to its small size. Meanwhile all the big platforms do — including this one!
(EDIT: to clarify, I don't mean to build an alternative monopoly, I mean to build alternatives that are big enough to survive as a business, and big enough to be useful; A few million users as opposed to the few billions Facebook and Youtube (allegedly) have)
The reason it's hard to imagine such a thing today is because the tech giants have illegally suppressed competition for so long. If Google or Meta were ordered to break up, and Facebook/Youtube forced to try and survive as standalone businesses, all the weaknesses in their products would manifest as actual market consequences, creating opportunity for competitors to win market share. Anybody with basic coding skills or money to invest would be tripping over themselves to build competing products which actually focus on the things people want or need, because consumers will be able to choose the ones they like.
It would cost tons man. You don't understand the scale these apps operate on at all. Meta has their own data center footprint that rivals AWS or any other cloud company and they had that before AI, and it's not just all to run ads on. On demand photo and video streaming and storage for free for all of humanity is incredibly expensive.
Social media with only millions of users is basically worthless because it won't capture enough of an average person's circle to be useful to them
Maybe you missed my edit? I specifically said not a clone of the monopolies, but a competitor big enough to be a sustainable business. The economics of a monopolist's empire are irrelevant.
> Social media with only millions of users is basically worthless because it won't capture enough of an average person's circle to be useful to them
There's so much wrong with this statement. First of all, I will never meet anywhere near a million people in my lifetime. A regular human being's real social connections won't be anywhere near that big.
But even if it is (or users want to discover/follow random people), it doesn't take a computer science genius to discover how to interoperate between social networking apps. Meta and Google would never do this, but that's because they're anti-competitive monopolists; if you're a startup trying to gain marketshare and win on your product's quality, interop with other networks is a no brainer. We probably don't even need regulation to require interop, as the market will see it as a useful thing to develop on its own.
Even ignoring the adverse selection of who'd subscribe, their ARPU is higher than that in North America: https://www.statista.com/statistics/251328/facebooks-average...
We've tied our incentives to a structure which is not in alignment with continued survival. The real question is how can we incentivize ourselves to continue to exist?
The "the incentive structure says we should all destroy our brains" thing is just a small aspect of that.
> We've tied our incentives to a structure which is not in alignment with continued survival. The real question is how can we incentivize ourselves to continue to exist?
The continued survival of individuals or humanity as a whole? The individuals seem to survive OK, and arguably there's nothing that could convince them to prefer the survival of the amorphous group, save for some kind of brainwashing.
We shouldn't be optimizing for quarterly returns, but for the next ten thousand years.
The incentives would be those which have motivated people throughout history: to create something which benefits humanity.
Next, text-only platforms are nice, but niche on the modern internet. People seem to love multimedia, which takes tons of bandwidth/CPU.
Paid services don't mean spam-free either. If it's worth it for people to pay, it's worth it for spammers to pay to get in and spam.
Then you have all the questions on what happens if you grow, how do you deal with working with all the laws around the world, how do you deal with other legal issues.
Having a site/service of any size can quickly become an expensive mess.
They are going to be (and AI slop already is) so much worse. Once they get ads to work well / seem natural the dark patterns will pop right back up and the money spigot will keep flowing upwards
I don't recall a lot of complaints about Facebook or Instagram when it was actually your friends' content. But now it's force-feeding everybody their own "guilty pleasure" viewing material 24 hours a day. It's fucking sick.
What does that even mean?
ublock origin for blocking them on desktop. If you're on an iphone... uninstall youtube?
my quality of life has increased substantially... although sometimes the app bugs out and shorts still make it on my home page. I spend like 10 minutes scrolling through shorts and get a weird shock "how the fuck did I end up here?", restart the app and boom shorts gone again.
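For what it's worth, uBlock Origin's cosmetic filters can hide the Shorts shelves on desktop, along the lines of the two rules below. The element names are a guess at YouTube's current markup (it changes often), so treat them as a starting point rather than gospel:

    youtube.com##ytd-reel-shelf-renderer
    youtube.com##ytd-rich-shelf-renderer[is-shorts]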
No
https://www.drugsandalcohol.ie/27213/
> Only 22 percent of respondents said they would be willing to work closely on a job with a person with drug addiction compared to 62 percent who said they would be willing to work with someone with mental illness.
https://publichealth.jhu.edu/2014/study-public-feels-more-ne...
Product liability is a subdivision of tort law that allows for recovery for damages caused by the makers or distributors of a product. This case has nothing to do with Section 230; the plaintiff successfully argued that the product was defectively designed and caused harm to the plaintiff.
Section 230 immunity is not a shield against all liability, it's only a shield against hosting problematic user content.
The guy who made the drugs is guilty. The guy who sold the drugs to kids is guilty. But parents who failed to warn kids about drugs and to oversee them properly are also guilty...
Now, if we're in a discussion about the cartels, plenty of people do bring up (and there are also those who get annoyed by it) that the drug users are actually the ones funding the cartels via their drug use.
Along these lines, I think another fun comparison might be opioid use and Purdue.
I never suggested that
E.g.: I grew up in a very nasty place. My neighborhood had a few pregnant 13-year-old girls and a lot of drunks and smokers, including kids in their early teens. My parents kept me away from it all, while also both having full-time jobs. They put a lot of work into filtering whom I could be friends with and where I was allowed to be. THAT is the job of a parent.
But at the systemic level, we must consider the effect of social dynamics globally, not only how the most virtuous citizens deal with the direct situation. Pauperization of the masses will mechanically lead to more social problems overall, even if there will always be brilliant heroes to point to as proof of what exceptional behavior makes possible. And societies that structurally support everyone who falls into distress or a weak situation also help the exceptional people go further, as they are freed from many cognitive loads they would otherwise have to deal with.
Maybe you don't do this. Certainly I don't. But looking around, it's much less rosy... let's say that in blue-collar families it's too common to drug kids with screens so the parents get some time off. Heck, some are even proud of what modern parents they are. Any good advice is successfully ignored, and ideas of spending some proper time with the kids instead are skillfully avoided. People got lazy and generally expect miracles from life without putting in any miracle-worthy effort.
Companies just maximize their profits as far as the law allows them (and then some), and expecting nice moral behavior by default is dangerously naive and never true.
But sure, "Parents often give too little fucks for long term welfare of their children", that's definitely it. Parents just hate their kids! What a useful perspective you've brought to the discussion.
Still, given all that, I don't make cheap excuses like that. It's pathetic and weak and simply untrue. Things are harder, but that's it; they're not impossible, as your side of the argument wants to conveniently claim. Quality time well spent with kids is highly proportional to the outcome of raising efforts. There's no way to hide from that simple fact, and nowhere to hide from the results of parenting; everybody can see them in plain sight.
But if you set up your life so that pathetic things like career are of utmost importance and you have no time or energy for anything else, those are your choices and that's fine. I just don't get why folks then have kids, only to skip actually raising them and then whine about how unruly they are, raised by toxic groups with no role models. Having and raising kids is not some fucking checkbox to tick and move on from; it's a 20+ year full commitment and the biggest achievement in one's life, or the biggest failure. Worth some proper effort, no?
It's also funny how they "discovered" they were influencing elections after they influenced the 2008 and 2012 elections.
How did the author not know this when she sought out and joined the company in, like, 2013?
The parts about playing Settlers of Catan with Zuckerberg were funny. I wonder what his side of the story was and whether people were really letting him win.
Her book doesn’t cover the amount and I couldn’t find anything public where she discloses.
I figured it was a lot based on standard FB salary+stock and the years she was there.
- She was trying to work to change things
- She was pregnant and otherwise had young children and needed the money
- She was not trying to change things. She was working to get countries ingratiated with FB execs
- She didn’t get pregnant until years into her work. She chose to have a second child while staying employed. She was already “rich” with millions likely earned when her first child was born and could have worked anywhere (but not making what FB paid). Wasn’t she an attorney? Prestigious attorney salaries are definitely enough to support children and a spouse who is a teacher.
Besides a general 'don't be too good' I'm really not sure what companies should do about it. It just seems like it'll lead to some judges allowing rulings against companies they don't like.
Television's goal was always viewer retention as well, they were just never able to target as well as you can on the internet.
The subsequent effects - namely being easier to consume and more addictive - eventually resulted in legislation catching up, and restrictions on what Juul could do. It being "too good" of a product parallels what we're seeing in social media seven years later.
Like most [read: all] public health problems, we see individualization of responsibility touted as a solution. If individualization worked, it would have already succeeded. Nothing prevents individualization except its failure of efficacy.
What does work is systems-level thinking and considering it an epidemiological problem rather than a problem of responsibility. Responsibility didn't work with the AIDS crisis, it didn't work on Juul, and it's not going to work on social media.
It is ripe for public health strategies. The biggest impediment to this is people who mistakenly believe that negative effects represent a personal moral failure.
Well, a drug addict wants to consume his drug, because the drug is good at keeping abstinence syndrome at bay and the tolerance probably hasn't built up to the level where the addict can't feel its "positive" effects.
The user feels an impulse to consume the content, but whether they actually want it we can only know by asking them. They can lie, consciously or unconsciously, but there is no better way to measure a desire to consume. When it comes to doomscrolling, I have never met a person who said they want to do it, but there are people who do it nevertheless.
> This just seems ripe for selective enforcement if not codified in law.
I agree. I'm not sure how they define "addiction" and how they measure "addictiveness". It is the most important detail in this story.
Unless you hurt children; then it's mostly legal and a slap on the wrist.
disassemble the intentionally addictive properties they built into their platforms to maximise engagement and revenue at the cost of the mental health of their users.
Broadly speaking, Section 230 differentiates between publishers and platforms. A platform is like Geocities (back in the day), where the platform provider isn't liable for the content as long as they satisfy certain requirements about having processes for taking down content when required. A bit like the Cox decision today: you're broadly not responsible for the actions of people using your service unless your service is explicitly designed for such things.
A publisher (in the Section 230 sense) is like any media outlet. The publisher is liable for their content but they can say what they want, basically. It's why publishers tend to have strict processes around not making defamatory or false statements, etc.
I believe that any site that uses an algorithmic news feed is, legally speaking, a publisher acting like a platform.
Example: let's just say that you, as Twitter, FB, IG or Youtube were suddenly pro-Russian in the Ukraine conflict. You change your algorithm to surface and distribute pro-Russian content and suppress pro-Ukraine content. Or you're pro-Ukrainian and you do the reverse.
How is this different from being a publisher? IMHO it isn't. You've designed your algorithm knowingly to produce a certain result.
I believe that all these platforms will end up being treated like publishers for this reason.
So, with today's ruling about platforms creating addiction, (IMHO) it's no different to surfacing content. You are choosing content to produce a certain outcome. Intentionally getting someone addicted is functionally no different to changing their views on something.
I actually blame Google for all this because they very successfully sold the idea that "the algorithm" ranks search results like it's some neutral black box but every behavior by an algorithm represents a choice made by humans who created that algorithm.
> (c) Protection for “Good Samaritan” blocking and screening of offensive material
> (2) Civil liability
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
"in good faith" is key here. Here's another opinion [2]:
> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.
So far the Supreme Court has sidestepped this issue despite cases making it to the Appeals Court. Until the Supreme Court addresses it, none of us can say with any certainty what is and isn't protected.
[1]: https://www.law.cornell.edu/uscode/text/47/230
[2]: https://www.naag.org/attorney-general-journal/the-future-of-...
Remember, according to that link, 230 does not give platforms any new rights. It simply makes it easier for them to end cases faster and cheaper, that they would have already won on 1st amendment grounds.
Even in this post you contradict yourself. If S230 doesn't grant more rights, why does it matter? If it makes it easier, then it's giving you something, just like anti-SLAPP statutes give you something (and matter).
Also, this isn't a First Amendment issue. Nobody is questioning whether a platform can publish their own content or somebody else's. The issue is liability for what is expressed. Publishing your own content comes under a strict liability [1] standard. Section 230 establishes that publishing third-party content does not, which again contradicts the point that "230 does not give platforms any new rights".
Wouldn't you agree there's a difference between being able to post defamatory or false statements with or without liability?
> (c) Protection for “Good Samaritan” blocking and screening of offensive material
> (1) Treatment of publisher or speaker
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This is a protection for being a platform for third-party (including user-generated) content.
Some more discussion on this distinction [2]:
> Section 230’s legal protections were created to encourage the innovation of the internet by preventing an influx of lawsuits for user content.
It goes on to talk about publishers, distributors and Internet Service Providers, the last of which I characterize as "platforms".
By the way, my view here isn't a fringe view [3]:
> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.
This is exactly my view.
[1]: https://www.law.cornell.edu/uscode/text/47/230
[2]: https://bipartisanpolicy.org/article/section-230-online-plat...
[3]: https://www.naag.org/attorney-general-journal/the-future-of-...
Just look at the Cox decision from the Supreme Court today. As long as the (Internet) service isn't designed for or sold as a method of downloading copyrighted material, the provider isn't responsible for any actions by its users. In other words, intent matters.
I find that technical people really get stuck on this aspect of the law. They look for technical compliance or an absolute proof standard because we're used to doing things like proving something works mathematically. But the law is subjective and holistic. It looks at the totality of evidence and applies a subjective test.
And intent here is fairly easy to establish. We could take an issue like Russia and look at all the posts and submissions and see how many views and interactions those posts got. We then divide them into pro-Russian and pro-Ukraine and establish a clear bias. We also look at any modifications made to the algorithm to achieve those goals.
This is nothing like Cloudflare DDoS protection.
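As a toy illustration of the kind of bias measurement described above (not any court's actual methodology), here's a minimal Python sketch, assuming posts have already been labeled by stance; the data and field names are invented:

    # Compare how much distribution each side's posts actually received.
    posts = [
        {"stance": "pro_ukraine", "impressions": 1200},
        {"stance": "pro_ukraine", "impressions": 900},
        {"stance": "pro_russia", "impressions": 15000},
        {"stance": "pro_russia", "impressions": 11000},
    ]

    def mean_impressions(stance):
        xs = [p["impressions"] for p in posts if p["stance"] == stance]
        return sum(xs) / len(xs)

    ua, ru = mean_impressions("pro_ukraine"), mean_impressions("pro_russia")
    # A large, persistent ratio (especially alongside documented algorithm
    # changes) is the kind of totality-of-evidence showing described above.
    print(f"pro-Ukraine avg: {ua:.0f}, pro-Russia avg: {ru:.0f}, ratio: {ru/ua:.1f}x")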
I don't have time right now to provide a full/quality answer with more examples; you can do a bit of searching online to learn more.
Also from personal experience (from family and friends): when their kids come over, they have TikTok on their phones and Roblox on their laptops.
If the answer is just "No" to both of those questions, then it sounds like a regular video game that can be addictive (like everything else), but it wasn't specifically designed to be addictive, like some social networks are designed.
https://pure.psu.edu/en/publications/the-system-is-made-to-i...
The funniest one? The 10-K discussing addiction-related legal issues as a risk factor.
Hardwick et al. (2025) “They’re Scamming Me”: How Children Experience and Conceptualize Harm in Game Monetization https://papers.ssrn.com/sol3/Delivery.cfm/5164006.pdf?abstra...
Kou, Hernandez, Gui (2025) “The System is Made to Inherently Push Child Gambling in my Opinion”: Child Safety, Monetization, and Moderation on Roblox https://pure.psu.edu/en/publications/the-system-is-made-to-i...
Song et al. (2025) How Predatory Monetization Designs Manifest in Child-Directed Online Games (SOUPS 2025) https://www.usenix.org/system/files/soups2025-song.pdf
Kou & Gui (2023) Harmful Design in the Metaverse and How to Mitigate It: A Case Study of User-Generated Virtual Worlds on Roblox https://sites.psu.edu/healthandplay/files/2023/05/Harmful-De...
Tunca et al. (2025) Navigating parental concerns in children’s engagement with Roblox https://pmc.ncbi.nlm.nih.gov/articles/PMC12821821/
Roblox Corporation (2024 Form 10-K) https://www.sec.gov/Archives/edgar/data/1315098/000131509825... I find this hilarious.
Not all games are created equal. I loved Zelda: Tears of the Kingdom, and its sounds and rewarding gameplay were, in my opinion, addictive; however, it is not in the same league as Roblox.
The best part is when you get a cohort of a few families to go camping and a teenage daughter forces her dad to drive 45 minutes each way for cell service to avoid breaking her daily login chain.
I don’t think people appreciate how these mechanisms impact society as a whole
YouTube allows you to "show fewer shorts" but what if you don't want them popping up at all?
AI Slop is the best thing to happen to these platforms - because it will lower trust and engagement as people (hopefully) become tired of inauthenticity. Rage bait is potent when the event in the video _actually_ happened, but when you realize it was AI generated, the manipulation feels even more obvious (though it was always there).
These platforms should also allow users to understand how the algorithm has categorized them, and be able to configure it. YouTube, Instagram, et al. would be safer places for viewers if they allowed users to tell them what they want to be exposed to, and what they don't. Big tech is dodgy about this currently, because the more control the user has the lower the engagement (good for the user, bad for profit).
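A minimal sketch of what that user-facing control might look like, assuming the platform exposed its inferred categories; all the names and the recommendation format here are hypothetical:

    # The user sees the categories the system assigned them and can opt out.
    user_prefs = {"gambling": False, "fitness": True, "politics": False}

    recommendations = [
        {"title": "Crypto casino big wins", "category": "gambling"},
        {"title": "Beginner kettlebell routine", "category": "fitness"},
    ]

    def filter_recs(recs, prefs):
        # Unknown categories pass through so nothing is silently hidden.
        return [r for r in recs if prefs.get(r["category"], True)]

    for r in filter_recs(recommendations, user_prefs):
        print(r["title"])  # only the kettlebell video survives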
Jury finds Meta liable in case over child sexual exploitation on its platforms
Even if they do what you're saying, lots of people who've used any Meta property in the last 15 years have a potentially viable case, and no future work can swat those away.
I'm a former Google engineer, now running a children's mental health startup (Emora Health), and my toddler is already on YouTube Kids.
So this verdict hits on every axis for me. I wrote up my full take here [1], but the short version: I don't think the "Big Tobacco moment" framing that NYT is pushing actually holds up.
Litigation is negative reinforcement, and if you've ever tried telling a toddler "no" you know how well that works long-term. The families in this case absolutely deserve to be heard. The harm is real. But courts can only punish — they can't redesign a recommendation algorithm.
The change has to come from people who understand these systems building better ones.
Haidt has been saying for years what this verdict just confirmed. The evidence was never the bottleneck. The will to design differently was.
I will give you a simple experiment: try blocking Blippi from YouTube Kids. Man, it's crazy; even if you block the main Blippi and Moonbug channels, hundreds of channels have Blippi content cross-posted, and it keeps popping up. I know it would be easy to build a Blippi block feature that works across channels.
That's the kind of solution we need. I know we have the tools. We just need intent and purpose.
[1] https://www.emorahealth.com/clinical-insights/social-media-v...
Parent here. Acting like it’s impossible and you have no choice but to let them have their way is a cop-out. Telling kids “no” and enforcing boundaries is part of the job.
> my toddler is already on YouTube Kids.
> I will give you a simple experiment. Try blocking Blippi from YouTube Kids, man, it's crazy, even if you block the main Blippi and Moonbug channels. 100s of channels have Blippi content cross-posted
I have a better solution that I use: If I can’t stay involved enough to monitor what the kids are choosing to watch, I don’t let them loose watching YouTube. They get to go play outside or with LEGOs or do puzzles or any of the other countless activities that are fun for kids.
Creating advanced filtering that lets you block anything related to Blippi (whoever that is) isn't going to solve the problem of letting your kids loose on YouTube. They're going to find another cartoon you dislike. The solution is to parent: set boundaries, enforce them, and find other activities for them.
I believe you're conflating two things: parenting discipline and product design. The question isn't whether I can physically take the TV away. I do.
When I say "block Blippi," I don't mean I dislike the content. I mean I'm done with screen time and the UX makes that transition harder than it needs to be. Autoplay is off, but the end-of-episode screen still shows a grid of next videos. Of course he wants the next one.
So I block Blippi. Except Blippi's main channel cross-posts through Moonbug into hundreds of other channels. It's a hydra.
YouTube already does content fingerprinting for music industry DRM. The technology to let a parent say "block this creator everywhere, and let me turn it back on when I choose" exists today. They just haven't built it for parents. Because the system isn't designed for children. It's designed for engagement.
So yes, parental responsibility matters. But "just don't use it" isn't a scalable answer when the product is specifically engineered to undermine your choices. That's the design problem I'm talking about.
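For what it's worth, the cross-channel block isn't hard to sketch. Here's a toy Python version that matches a blocked creator against video metadata rather than channel IDs; the field names are hypothetical, and a real version would use the platform's fingerprinting rather than string matching:

    BLOCKED_CREATORS = {"blippi"}

    def is_blocked(video):
        # Match on title/channel/description so cross-posted copies get caught.
        text = " ".join([video["title"], video["channel"], video["description"]]).lower()
        return any(name in text for name in BLOCKED_CREATORS)

    videos = [
        {"title": "Blippi Visits the Fire Station", "channel": "Moonbug Kids", "description": ""},
        {"title": "Counting with Trains", "channel": "Toddler Fun", "description": "numbers 1-10"},
    ]

    for v in videos:
        print(v["title"], "->", "blocked" if is_blocked(v) else "allowed")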
My issue is with YouTube's UX. I watch an episode with my son, we're singing along, he's excited about putting out the fire. Episode ends. Even with autoplay off, the next recommended videos show up — and of course he wants to watch the next one.
So I block Blippi. Except Blippi's main channel cross-posts into Moonbug, which cross-posts into hundreds of other channels. It's like trying to kill a hydra. Here's what gets me: YouTube already does content fingerprinting for DRM enforcement in the music industry.
The technology to let me block Blippi across every channel, and turn it back on when I want, exists. They just haven't built it for parents. My point is that we could build systems designed for children if we had the intent.
Kind of like how tobacco companies now pay out billions every year, and it's a major source of funding for states.
Hopefully this means more health services become available, but it may just serve as an ongoing tax.
The similar case about child predators was brought by NM’s attorney general.
This stop-bot thing can be annoying at times.
How is it that these days social media can circumvent all these safeguards and then somehow blame the parents if a kid is watching something inappropriate on an app designed for kids (like YouTube kids)?
The issue is that politicians are beholden to social media companies because they can literally get them or their opponent elected. After reading Careless People, I was amazed at how leaders of so many countries wanted to meet Zuck because he wields so much power.
I really hope this ruling is the beginning of the end of the free rein they've had.
In a lot of countries there are specific laws banning the deliberate targeting of advertising to children (and in contexts where you would reach children, heavily regulated), but for over a decade Meta would allow you to target within the ranges of 13 to 18 years old.
That's to say nothing of the scams and deepfake celebrity ads they let run. Imagine if a deepfake ad of Warren Buffet promoting an investment opportunity ran on TV, the network would get sued into oblivion. On Meta though, there's no repercussions.
> When presented with internal research and documents showing that Meta knew young children were in fact using its platforms, Zuckerberg said he "always wished" for faster progress to identify users under 13. He insisted the company had reached the "right place over time".
Soon there will be government IDs required to use social media sites because parents can't take phones away from their kids.
The exact same can happen to Big Tech. The goal is to get them to stop the bad behavior now.
Maybe the social media companies could do more to combat all these. They certainly have a level of profit compared to what they provide to the average person that makes people squirm.
But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction? It's like saying cable television is responsible for people who binge watch TV.
It's hard to square this circle while sports gambling apps and Polymarket / Kalshi are tearing through the landscape right now with no real pushback
Yes? Is there an algorithm or not?
We don't promote cigarettes (at least in countries that have decent consumer laws) because they harm users; candies should be in the same category: they should probably exist, but they shouldn't be promoted. When social media actively promotes things that cause psychological harm to CHILDREN while being aware of it (as we have countless studies that prove it), then yes, screw them, we must force a change.
We should also move forward: imagine if, instead of thousands of engineers and businessmen versus teenagers, we could leverage that intellect to actually help the world (and still make money from it). It is possible, and we must force innovation if corporations aren't complying.
It ends up feeling much closer to “what’s interesting in my corner of the web right now?” and much less like a system trying to keep you trapped inside it.
Small scope, obviously, but I think more social tools should feel like utilities, not casinos.
Trial courts will decide pretty much anything. Then the case gets appealed over whether the trial court correctly interpreted things you probably perceive as uncomplicated, like the 1st Amendment.
> It comes on the heels of a Delaware court decision clearing Meta’s insurers of responsibility for damages incurred from “several thousand lawsuits regarding the harm its platforms allegedly cause children” — a ruling that could leave it and other tech titans on the hook for untold future millions.
Children don't have disposable income to buy ads/subscriptions. They don't have experience to write about. The only thing they have that adults don't is time which translates into engagement metrics.
In an ideal world, the adults that buy/manage the computers would create age-restricted account for children, and the OS would give this information to the browser, which would just transmit it via HTTP. This is the safest method to verify ages. If an operating system doesn't want to support this, it's ultimately the adult's responsibility to install one that supports it. This would mean there would be no burden on the adults (the majority of the planet) to verify their ages, so there would be no burden on the platforms to restrict ages either.
If platforms could verify ages without inconveniencing their main user base, I wonder if platforms would just start banning all minors, or if there is some reason to allow minors in the platform that justifies all the liability surrounding them.
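To make the mechanism above concrete, here's a minimal sketch of the server side, assuming a hypothetical "X-Age-Bracket" header that the browser would attach on behalf of the OS account; no such standard header exists today:

    def should_restrict(headers):
        # Treat a missing or unknown signal as unverified, not as adult.
        bracket = headers.get("X-Age-Bracket", "unknown")
        return bracket in ("under-13", "13-17", "unknown")

    print(should_restrict({"X-Age-Bracket": "under-13"}))  # True: restrict features
    print(should_restrict({"X-Age-Bracket": "adult"}))     # False: full access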
They have their hands directly on their parents' heartstrings, and their parents have a credit card.
This isn't anything new, think about the toy ads we had on TV when we were young.
Parental controls and age restrictions are almost universally half-baked, buggy fig leaves to displace negative attention from software and content providers.
I’ve argued in the past that the right way to create the change in corporations we want is to change the laws, and people have made valid points that Congress has basically given up on doing that. But even so, civil cases with fines don’t seem like the way to make lasting change. In the analogues to the tobacco fights, there are LAWS that regulate tobacco company behaviors as a result. The civil case here isn’t going to result in any law.

So what are companies supposed to do? Tiptoe around some ill-defined social boundary and hope they don’t get sued? Because apparently the defense of “no, I didn’t target that person and I didn’t break any laws” is still going to get you fined. What happens when a company from a conservative location gets sued in a liberal location for causing a social ill? Oh, we’re cool with that. But what if a company from a liberal location gets sued in a conservative location for the same thing? Oh, maybe we don’t like that as much.

I’m taking the libertarian side here. I know plenty of people who don’t watch TV or use Facebook, and I know plenty of people who recognized they were spending too much time on digital platforms and decided to quit or cut back. So a healthy person can self-regulate on these apps; I’ve seen it and done it. I’m just not sure how much responsibility Meta and YouTube bear in my mind. If they’re getting fined $3M plus some TBD punitive amount, are we saying that this 20-year-old person lost out on earning that much money in their life, or would need to spend $3M on therapy, because of Meta or YouTube? It feels a little steep of a fine for one person.
If Meta and YouTube really were/are making addictive products, wouldn’t a lot more people be harmed? Shouldn’t this be a class action suit where anyone with mental trauma or depression be included?
I don’t know the details of the case, but I highly doubt that this one plaintiff was targeted specifically, and I doubt their case is that unique. I read tons of news articles about cyber bullying, depression, suicide attempts, and tech addiction. Does every one get to sue Meta and YouTube for $3M now?
If I sell you gizmo, and I know, or should know, that using the gizmo could seriously harm you, and I don't tell you or do anything about it, I am liable for damages you incur.
Should Apple or Samsung be held liable for making the phone that the plaintiff probably used to use these apps? How much responsibility do they bear?
Further, Facebook/Instagram and YouTube are free products from the perspective of the plaintiff. These corporations didn’t sell anything to the plaintiff, so can they even be held liable? They did sell the plaintiff’s data to advertisers, and I think you might be able to hold them responsible if they misused that data, but this isn’t what the case was about.
I’m not rooting for depression or suicidal thoughts or anything, but this doesn’t feel like the right direction we need to be moving in as society. We can’t simultaneously argue for free speech and freedom of choice and also claim that we aren’t capable of making our own choices to live our lives responsibly.
Some of your examples are not very compelling.
In the case of Remington, there was another party that (presumably) the jury found was more directly responsible. Also, the victims of Sandy Hook were not Remington's customers.
Apple does not make you install instagram on your phone. And I doubt that you could find really compelling evidence that Apple knew in great detail the harms that were being caused, and rather then seek to mitigate them, instead made them worse in order to earn more money.
I'm not sure there is a requirement that a product be paid for in order to be subject to product liability law.
I think I agree that these product liability cases are not the best way for a society to deal with these problems. I would prefer to see the democratic process arrive at some reasonable solution, based on the desires of the majority of the population. But there has been almost no movement in that direction, and I have my doubts there will be (in the US).
And I think it's important to see that this is about 16 (and younger) year old children, not adults.
Meta did not make her install Instagram on her phone.
> I'm not sure there is a requirement that a product be paid for in order to be subject to product liability law.
You’re the one who originally used the word sell when pointing out this case was brought as a product liability case, not me. And selling something is the first step in establishing product liability. But even if the court allowed a liability case where there was no commercial sale, Meta and YouTube could have argued that their product would not be considered defective/harmful by a reasonable person (almost by definition, the number of users of Instagram and YouTube makes that argument), and thus they should not be liable for one person claiming a defect.
Like I said before, this should be a class action. One person doing it is a money grab and the jury just wanted to stick it to “big bad tech companies.” I probably wouldn’t care so much if they had found Instagram liable but excluded YouTube, but the fact that YouTube has to pay some of the damages means the jury was not thinking that hard.
But it's not absolute. Some drugs are illegal for adults as well, for example. Why? Because they're too addicting.
So are Instagram and Youtube just nicotine, or are they heroin?
I agree that a big part of this is educating children about these hazards, but that also doesn't mean we should allow these companies to data science the shit out of our attention and will power. Many adults have concerning relationships with social media too -- exposure, pressure, and manipulation are key ingredients that are difficult for anyone to deal with.
Cocaine is illegal because it is addictive.
Cigarettes directly cause physical harm and even death. Social media can sometimes, under certain circumstances, depending on who exactly you're interacting with on social media, indirectly contribute to emotional harm.
Cigarettes are also physically addictive. Your body actually becomes dependent on them and will throw a fit if you try to stop using them. Social media is only "addictive" in the loose sense that all fun, mentally engaging activities are.
I'm not saying social media is fine for kids and we shouldn't do anything to reduce their use of it (TV and video games can be equally unhealthy IMO). I'm not even necessarily against legislation on the subject. But there's a huge difference between fining a company for breaking a law, and fining them for making a perfectly legal product "too fun" because you let your kids spend all their time on it and that turned out to be unhealthy.
This type of civil litigation where the courts effectively create and enforce ex post facto laws based on their opinion about whether perfectly reasonable, 100% legal actions indirectly contribute to bad outcomes is not a great aspect of our legal system IMO.
The best example of this is heroin, which has both a severe physical and mental addiction component, and it's the mental addiction that makes relapse so common.
Mental addictions rewire the brain's chemistry, causing the user to seek and only find joy in the substance. This is a better comparison for social media (albeit not as destructive and instantaneously harmful as narcotics)
There can still be social ills associated with these forms of natural "addiction" (e.g. gambling), and I'm okay with regulating those ills, but I'm less okay with the courts doing so unilaterally based on their subjective opinions with no concrete law backing them up.
There is a difference between creating a food that tastes good and creating a food that tastes good but instantly makes you want to eat the whole bag.
Although to some extent they're correlated, sometimes the things that are most enjoyable you wouldn't describe as "addicting" and vice-versa.
Eating a nice full meal is more enjoyable than eating doritos on your couch, but you wouldn't describe it as addicting.
If anything, I find my experience of youtube today to be less enjoyable than in the past
https://hn.algolia.com/?query=superkuh%20addiction&type=comm...
https://www.techdirt.com/2026/03/26/everyone-cheering-the-so... "Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For"
The idea that "porn addiction" exists is entirely a social media concept with no support in the literature or medical standards. It's pushed by for-profit treatment groups just like "anti-gay" camps are. Gambling disorder, of course, is grandfathered in.
Well, that's laughable.
The result, in these corner cases where eating people is profitable? Shelob.
I feel, and it's obvious to most, that the only way a society can truly reform is through a shared consensus over its value system. This verdict could be thrown out by the appellate court (I feel it will be), so this is not the culmination of values resulting in what many hoped for.
It does not seem to me that this is a country where consensus on what, if anything, to put above capital will come about any time soon and with capital it's always been ask for forgiveness rather than permission.
The only time true justice happens is when the harm becomes obvious beyond the shadow of a doubt (e.g. smoking), such that even a monkey can tell the game is up.
Perhaps one day, when we can look into people's brains with the clarity of glass and the precision of electrons, the time will come when we all recognize how bad an idea social media was.
It's a spectrum of risk between the user and the creator. My opinion is that there's enough scientific evidence to show that social media has a negative impact on kids and teenagers, as their brains are still developing. I think a social media ban for kids is a good thing (similar to a driver's license or the drinking age).
Parenting is rough! Good for you, for sticking to your guns.
> The plaintiff, Kaley, started using YouTube at age 6 and Instagram at 11.
Who was at the wheel here? If we call up all of Kaley's teachers from this time frame and ask them "were Kaley's parents checked out?", what do you think the answer would be? For as bad as education has gotten, I sympathize with teachers, because parents have gotten FAR worse.
It's not like we don't know these things about people's behavior on devices... maybe it's something that should be talked about in school, along with how credit works and how to file taxes.
Do we need to tell parents "it's 10am, have your kids touched grass yet?"... "It's 10pm did you take the tablet and phone away so they go the fuck to sleep?" --
"touch grass" as a meme/slang is literally people poking fun at the constantly on line. It's "hazing" and "bullying" to drive social correction.
There are plenty of things in life that can be addicting; drugs, sex, money, power, adrenaline, entertainment, technology... The list goes on. If we remove everything addicting from life, you better believe something else will rise up to take its place.
The solution therefore isn't to remove everything addicting from life, but rather to raise everyone with the forethought to know what might be addictive, the self-awareness to realize when you are addicted to something, and the self-control (and support systems if and when necessary) to stop.
I don't know what the answer is, but it feels wrong to lean _entirely_ on personal responsibility. We live in a world we simply did not evolve to live in. People literally make a good living by engineering and exploiting our weaknesses for profit.
> raise everyone with the forethought to know what might be addictive, the self-awareness to realize when you are addicted to something, and the self-control (and support systems if and when necessary) to stop
If only it were that easy. If you've ever known somebody who struggles with a serious addiction you'll know that even when they know it's destroying their life they still can't stop.
Personal responsibility IS important, but we also don't allow cigarette companies to advertise on billboards with cute characters (remember Joe Camel?)
They weren't just consciously creating an attractive platform, they were consciously creating a manipulative platform.
The question we should be asking: are these technologies a net-positive to society?
It seems though, increasingly, that the ability to avoid addiction is less about pulling one up by one’s own bootstraps, and in many ways determined more by genetics. That is to say, what might have been possible for you is much harder for others.
Look no further than GLP-1. People who have struggled for years - decades - with overeating are almost immediately able to cut back on addictive eating. It’s not that they suddenly discovered willpower. It’s a biochemical effect.
It’s no wonder, then, that kids are more susceptible to addiction-forming behaviors. Their minds are pliable and teachable.
Why would we not legislate things that take advantage of that?
On the other, it's very different when companies explicitly design their products to be as addictive as possible.
We've been through this with Big Tobacco already. Nicotine and other tobacco substances are addictive on their own, but tobacco companies were prosecuted for deliberately making cigarettes as addictive as possible, besides also marketing to children. The parallels with Big Tech and social media are undeniable.
It is not, like, a moral thing to become addicted to something. And the ability to pull yourself out of it is determined, whether you are conscious of it or not, by your broader circumstances and by the same predispositions that brought you there in the first place. At the end of the day, we are all fucked-up animals reeling from the ongoing consequences of prematurational helplessness.
We should feel together in our problems like this, not distinguish ourselves by how we might individually overcome them. You are not "better" when you find yourself standing over a beggar addict; you are lucky, never forget that. If for no other reason than that it's not a sustainable worldview otherwise: it leads to insecurity, anger, and relapse.
The dark truth of the world is that everyone is doing the best they can. How could they not? Why would they not? What is this thing that separates you from the addict or murderer? Unless you have maybe some spiritual convictions, I can't imagine what it is..
Just really, I know you had a powerful personal journey, but don't let it convince you that we are all fundamentally alone, because we are not, and it's good to help people who maybe need more help.