Imagine you are trapped in a Groundhog Day-like time loop, but you are not the person who remembers previous loops. "Z" is. He tries to convince you to do something, over and over and over, thousands or millions of times, refining his approach based on your reactions while you remember nothing. Are you really confident that your free will protects you from being taken advantage of in this situation?
Now imagine that instead of a time loop, Z has a million clones of you. He tries his persuasion on one of them at a time, refining it until it works reliably before using it on you. You are just as vulnerable.
Now suppose he has a billion people, not identical to you but drawn from the same distribution. He faces a harder computational problem: mapping the high-dimensional manifold of their responses to build a model of you accurate enough to manipulate you. But with enough data he can approximate the results of the previous case, with no more than a tiny fraction of his experimentation ever being visible to you.
Any relationship where one party gets to surveil and monitor not only the other party, but millions or billions of like parties, has the potential to be a deeply abusive one. We should not tolerate such situations whether the surveilling party is a government or not.
The first is "Addiction by Design: Machine Gambling in Las Vegas" by Natasha Dow Schüll. The second, and arguably more direct and fascinating, is "The Age of Surveillance Capitalism" by Shoshana Zuboff. Both are incredibly eye-opening in their treatment of technology and how it is designed to influence behavior.
Addictive drugs increase wanting by directly activating the downstream targets of the dopaminergic populations that predict the valence of stimuli and control wanting and motivation. With a chemically addictive drug you don't even have to enjoy the stimuli associated with it. You will still be conditioned to want it and be motivated to re-experience the stimuli surrounding it.
This is vastly different in mechanism and result from simply seeing or hearing something on a screen. Screens cannot directly increase incentive salience regardless of the actual valence of the stimuli. You have to actually enjoy the thing and the experiences to form habits.
Do you see the difference now? One class of thing, chemical drugs, is addictive. The other is merely enjoyable. The first will addict anyone, because that is what addictive means. The second leads to addiction-like behaviors only in a context like random-interval operant conditioning, and only if you actually enjoy the thing intrinsically first and belong to the fairly small subset of people predisposed to behavioral addictions.
I do not buy this argument. Of course, most of the content on these platforms is innocuous, and may as well be paint drying.
What's harmful are the harnesses that these companies have built to exploit the content.
> Of course not. Because infinite scroll is not inherently harmful.
Yes it is [0].
> Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful.
Yes, they can be [1] [2].
> These features only matter because of the content they deliver. The "addictive design" does nothing without the underlying user-generated content that makes people want to keep scrolling.
These harnesses only work because people feed the machine. The harnesses are still harmful.
This whole argument is predicated on a strawman that makes no sense.
A gun doesn't work without bullets. But if a company designs and hands out the gun to the world, they should be liable for the consequences, even if they rely on users for the ammunition.
[0] https://doi.org/10.1145/3544548.3580729
[1] https://doi.org/10.1145/3491101.3519829
[2] https://counterhate.com/research/deadly-by-design/