I did get offered a discount in the cancellation flow, but nowhere could I give a custom reason for my cancellation. They'll never know.
In my mind, national IDs (and the extra metadata about the person) should be public and only used for identification, not for authentication or authorization. Meaning there need to be two or three extra steps after providing it before a transaction can occur. This needs to be a legal requirement for companies if they enter into contracts with a person.
If we need to prove we are not minors, or authenticate that we are real, or authorize access to personal information, the government should provide an API to auth the request. They are the issuer of the document (the ID), so only they actually have the means to prove you are real and above 18. This would allow a company to ask the government, "is this person real and is this person above 18", and the government shows me the request (OTP, USSD, email, OS popup, etc.) so I can confirm it and select what info that company can pull. So it's a three-legged system, no third-party companies involved. If the government wants to create these constraints, they need to be the ones to provide the means to authenticate (both for the consumer and the company). Also, when the government shows the request to the user/citizen, it needs to show exactly what the company is asking for, plus the full details of the company and of the human representative making the request (almost like OAuth).
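To make the shape of that concrete, here's a minimal sketch of what the company-side call could look like, assuming a hypothetical government endpoint, field names, and consent flow (everything below is invented for illustration; no such API exists today):

    // Hypothetical sketch of the three-legged, government-run check.
    // Endpoint, field names, and flow are all assumptions for illustration.
    type AttributeRequest = {
      companyId: string;       // registered legal entity making the request
      representative: string;  // the human accountable for the request
      attributes: ("is_real_person" | "is_over_18")[];
      callbackUrl: string;     // where the signed yes/no answer is delivered
    };

    async function requestAgeCheck(citizenHandle: string): Promise<boolean> {
      const request: AttributeRequest = {
        companyId: "example-corp-REG-12345",
        representative: "Jane Doe, Compliance Officer",
        attributes: ["is_real_person", "is_over_18"],
        callbackUrl: "https://example.com/id-check/callback",
      };

      // The government service pushes a consent prompt (OTP, USSD, email,
      // OS popup...) to the citizen, showing exactly who is asking and why.
      const res = await fetch("https://id.gov.example/v1/attribute-requests", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ subject: citizenHandle, request }),
      });

      // The company never sees the ID itself, only signed yes/no answers
      // delivered to callbackUrl after the citizen approves and picks scopes.
      return res.status === 202;
    }

The point of the shape: the company only ever receives attribute answers, never the document or its metadata.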
The problem runs much deeper than just "What's the risk, my face pic is public already". Oh, and this has nothing to do with minors and won't protect them in any way; the only way to protect them is to take internet access away. The internet is not a child-friendly place and wasn't built by or for children. We should not bend to make it child-friendly, as that will destroy the internet in the long term.
I hate age verification as a concept and I wouldn't personally go through it to use chatgpt, but "failure modes here are very high risk" is unnecessarily alarmist.
Product managers hate this. They want _minimum_ clicks for onboarding and to get to value; any benefit or value that could be derived from the data is minuscule compared to the detrimental effect on signups or retention when this stuff is put in place. It's also surprisingly expensive per verification and wastes a lot of development and support bandwidth. Unless you successfully outsource the risk, you end up with additional audit and security requirements due to handling radioactive data. The whole thing is usually an unwanted tarpit.
Depends on what product they manage, at least if they're good at their job. A product manager at a social media company knows it's not just about "least clicks to X", but about a lot of other things along the way.
Surely the product managers at OpenAI are briefed on the potential upsides of having a concrete ID for every user.
The effect is strong enough that a service which doesn't require that will outcompete a service which does. Which leads to nobody doing it in competitive industries unless a regulator forces it for everybody.
Companies that must verify will resort to every possible dark pattern to try to get you over this massive "hump" in their funnel; making you do all the other signup before demanding the docs, promising you free stuff or credit on successful completion of signup, etc. There is a lot of alpha in being able to figure out ways to defer it, reduce the impact or make the process simpler.
There is usually a fair bit of ceremony and regulation of how verification data is used and audits around what happens to it are always a possibility. Sensible companies keep idv data segregated from product data.
Yes, but again, a good product manager wouldn't just eyeball the success percentage of a specific funnel and call it a day.
If your platform makes money by subtly including hints about which products to prefer, and you have the benefit of being the current market leader, then forcing people to upload IDs as part of the signup process might actually be a sacrifice worth making.
No one wants to upload an ID, and instead everyone is moving to a competitor!
To still suspect that this must be an evil genius plan by OpenAI doesn't make sense.
Comments on the internet are rarely proof of anything, and that holds here too.
If no one wants to upload an ID, we'd see ChatGPT closing in a couple of weeks, or they'd remove the ID verification. Personally, I don't see either of those happening, but let's wait and see if you're right or not. Email's in the profile if you want to brag later about being right; I'll be happy to be corrected then :)
The normalization of identity verification to use internet services is itself a problem. It's described much better than I could by EFF here:
https://www.eff.org/deeplinks/2026/01/so-youve-hit-age-gate-...
> we hope we’ll win in getting existing ones overturned and new ones prevented.
All the momentum is in the other direction and not slowing down. There are valid privacy concerns, but, buried in this very article, the EFF admit that it’s possible to do age-gating in a privacy-preserving way:
> it’s possible to only reveal your age information when you use a digital ID. If you’re given that choice, it can be a good privacy-preserving option
If they want to take a realistic approach to age-gating, they should be campaigning to make this approach the only option.
They want to build “the scream room” from Frank Herbert’s _The Jesus Incident_.
There is immense power to wield over someone when you know what they did in the scream room.
Meanwhile, the adult market is huge: sure-shot revenue from a user base that is more likely not to mind the ads.
Also:
"Users can check if safeguards have been added to their account and start this process at any time by going to Settings > Account."
It's a feature for advertisers, and investors also wanna know. Did you think you were the customer?
That kind of context is super useful for making stored data relevant, and also selling shit to you.
https://news.ycombinator.com/item?id=45604313 ("Chat-GPT becomes Sex-GPT for verified adults (twitter.com/sama)")
Surely they're using "history of the user-inputted chat" as a signal and just choosing not to highlight that? Because that would make it so much easier to predict age.
https://allowe.com/games/larry/tips-manuals/lsl1-age-quiz.ht...
> Peter Piper picked pickled (peppers)
> How many molecules are there in a glass of water? (as many as there are)
There's also this one:
> Which is not a city in Mexico? (San Diego)
which appears to have been false at the time, and is still false now.
OMG, it's absolutely unhinged to describe something that takes place entirely in Homer's head as a "workplace chant."
So I am pretty sure they might be using that information as well. I don't see any reason for them not to.
So if you wrote something to ChatGPT and then removed it / never sent it? Yeah, they might be using that history too.
I think it's more common than not for the large platforms to try to log everything that is happening, plus stuff that never even gets submitted.
I have sometimes had passwords accidentally pasted into ChatGPT while using my Bitwarden password manager; I removed them and thought I was okay.
It is scary: I am pretty familiar with tech and knew this was possible, but I assumed that for privacy reasons they wouldn't do it. I feel like the general public might be even more oblivious.
Also, a quick question: how long does OpenAI keep the logs? And are logs still taken even if you are in private mode?
That's for sure, most people don't know how much they're being tracked, even if we consider only what happens inside the platform. Nowadays, lots of platforms literally log your mouse movements inside the page, so they can see exactly where you first landed, how you moved around the page, where you navigated, how long you paused, and much, much more. Basically, if it can be logged and reconstructed, it will be.
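For anyone curious what that looks like in practice, here's a stripped-down sketch of the kind of client-side capture session-replay tools do; the /track endpoint and the batching interval are invented for illustration:

    // Stripped-down sketch of session-replay-style capture; the /track
    // endpoint and 5s batching are invented. Real tools also record DOM
    // mutations, scroll depth, rage clicks, input focus, and more.
    type UiEvent = { t: number; kind: "move" | "click"; x: number; y: number };

    const buffer: UiEvent[] = [];

    document.addEventListener("mousemove", (e) => {
      buffer.push({ t: Date.now(), kind: "move", x: e.clientX, y: e.clientY });
    });
    document.addEventListener("click", (e) => {
      buffer.push({ t: Date.now(), kind: "click", x: e.clientX, y: e.clientY });
    });

    // Flush a batch every few seconds; sendBeacon survives page unloads,
    // so even your last mouse positions make it to the server.
    setInterval(() => {
      if (buffer.length === 0) return;
      navigator.sendBeacon("/track", JSON.stringify(buffer.splice(0)));
    }, 5000);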
> Also, a quick question: how long does OpenAI keep the logs? And are logs still taken even if you are in private mode?
As far as I know, OpenAI is currently under a legal obligation to log all ChatGPT chats, regardless of their own policies. But I read that a while ago (sometime this summer?), so maybe it's different today.
What exactly do you mean by "private mode"? If you mean an "incognito/private window" in your browser, that has basically no impact on how much the platforms themselves log; it only affects your local history.
For the "temporary mode" in ChatGPT, I also think it has no impact on how much they log, it's just about not making that particular chat visible in your chat history, and them not using that data for training their model. Besides that, all the tracking in your browser still works the same way, AFAIK.
I was referring to temporary mode (though I also assumed a private window was much safer as well, but wow, looks like they log literally everything).
So out of all the providers (Gemini, Claude, OpenAI, Grok and others), do they all log everything permanently?
If they are logging everything, what prevents their logs from getting leaked or "accidentally" being used in training data?
> As far as I know, OpenAI is currently under a legal obligation to log all ChatGPT chats, regardless of their own policies. But I read that a while ago (sometime this summer?), so maybe it's different today.
I also remember this post, and given the current political environment, that's kind of crazy.
Also, some of these services require a phone number one way or another, and most likely the phone number can somehow be linked to the logs. Since phone numbers are usually issued under government-registered identities, chances are that if threat actors want data at scale and OpenAI's logs contribute to it, a very good profile of a person can be built if they use such services... Wild.
So if OpenAI"s under legal obligation, is there a limit for how long to keep the logs or are they gonna keep it permanently? I am gonna look for the old article from HN right now but if the answer is permanently, then its even more dystopian than I imagined.
The mouse tracking ability is wild too. I might use LibreWolf at this point to prevent some of that tracking.
Also, what are your thoughts on the new anonymous providers like confer.to (by the Signal creator), venice.ai, etc.? (Maybe some OpenRouter providers?)
> If they are logging everything, what prevents their logs from getting leaked or "accidentally" being used in training data?
The "tracking data" is different from "chat data", the tracking data is usually collected for the product team to make decisions with, and automatically collected in the frontend and backend based on various methods.
The "chat data" is something that they'd keep more secret and guarded typically, probably random engineers won't be able to just access this data, although seniors in the infrastructure team typically would be able to.
As for how easily that data could slip into training data, I'm not sure, but I'd expect that just the fear of big names suing them would be enough to make them really careful with it. I guess that's my hope, at least.
I don't know any specifics about how long they keep logs or anything like that, but what I do know is that you typically try to sit on your data for as long as you can, because you always end up finding new uses for it. Maybe you wanna compare how users used the platform in 2022 vs 2033, and then you'd be glad you kept it. So unless the company has some explicit public policy about it, assume they sit on it "forever".
> Also, what are your thoughts on the new anonymous providers like confer.to (by the Signal creator), venice.ai, etc.? (Maybe some OpenRouter providers?)
Haven't heard about any of them :/ This summer I took it one step further and got myself the beefiest GPU I could reasonably get (for unrelated purposes) and started using local models for everything I do with LLMs.
I am gonna assume in this case that the answer is forever.
I actually looked at Kagi Assistant for this purpose, as someone mentioned, and created a free Kagi account, but it looks like they use the AI model APIs themselves, with the logging that comes with that. I wouldn't consider it the most private option (although Bedrock/AWS say they keep logs for 30 days, but still :/ I feel like there is still a genuine issue).
Being honest, though, I don't want to buy a GPU for my use case :/
Personally, I'm liking either the Proton Lumo models or confer.to (I can't use confer.to on my Mac for some reason, so Proton Lumo it is).
I am probably gonna land on Proton Lumo + Kagi Assistant / z.ai (with GLM 4.7, which is a crazy good model).
I am really GPU-poor (just a basic MacBook Air M1), but I ran some LiquidFM model IIRC and it was good for some extremely basic tasks, though it fumbled when, just out of curiosity, I asked it the capital of Bhutan.
> Users who are incorrectly placed in the under-18 experience will always have a fast, simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.
Yeah, my 15-year-old LinkedIn account (I was a paid Pro user for several years) got flagged for verification with this same company as the backend provider; no reason was ever given, and I rarely used it for anything other than interacting with recruiters. They wouldn't accept a (super invasive-feeling) full facial scan plus a REAL ID; they also wanted a passport. So I opted out of the platform. There was no one to contact; it wasn't "fast" or "easy" at all. This kind of behavior feels like a data grab for more nefarious actors and data brokers further downstream of these kinds of services.
Facebook made journalists a lot less relevant, so anything that hurts Meta (and hence anything that hurts tech in general) is a good story that helps journalists get revenge and revenue.
"Think of the children", as much as it is hated on HN, is a great way to get the population riled up. If there's something that happens which involves a tech company and a child, even if this is an anecdote that should have no bearing on policy, the media goes into a frenzy. As we all know, the media (and their consumers) love anecdotes and hate statistics, and because of how many users most tech products have, there are plenty of anecdotes to go around, no matter how good the company's intentions.
Politicians still read the NY Times, who had reporters admit on record that they were explicitly asked to put tech in an unfavorable light[1], so if the NYT says something is a problem, legislators will try to legislate that problem away, no matter how harebrained, ineffective and ultimately harmful the solution is.
[1] https://x.com/KelseyTuoc/status/1588231892792328192?lang=en
I'm glad ChatGPT will get a taste of VC startup tech quality ;)
New boss, same as the old boss.
Politicians, CEOs, lawyers: it's standard practice because it's so effective.
Then maybe OpenAI should just close shop, since (SaaS) LLMs do neither in the mid to long term.
At this point, just use Gemini if you need SOTA (yes, it's Google, with the issues that brings). I have also recently been trying chat.z.ai more and more for simple text tasks (like "hey, can you fix this Docker issue", etc.) and I feel like chat.z.ai is pretty good, plus the models are open source (honestly, chat.z.ai feels pretty SOTA to me).
https://help.kagi.com/kagi/ai/llms-privacy.html
I also couldn't find which provider Kagi uses for GLM 4.7 (I am assuming the same as for GLM 4.6, which is Cerebras, DeepInfra, and Fireworks.ai), and some of these run on large companies like AWS, so you are still putting trust in them.
I somehow made Kagi Assistant mention Proton Lumo, and it sort of agreed with me that it's a good option too.
I think Proton Lumo is good for simple queries, although I still don't know which model they use, and it's pretty restrictive; I wish they had used GLM 4.7.
I need to look at Cerebras's terms and policy again, but I talked to Cerebras on Discord once and they mentioned they don't log either. I might ask them again. Cerebras might make more sense, but for basic codegen Proton's Lumo is good too.
My eyes are on Proton Lumo and confer.to, but Kagi's pretty good too (as private as they can be, FWIW, though they still rely on APIs; Kagi makes sense if you already use Kagi Search or want the AI to use the Kagi Search feature, IMO).
I looked at some of these privacy policies and...
I have heard good things about Kagi, in fact; that's the reason I tried Orion and still have it in the first place. But I haven't bought Kagi; I just used the free searches Orion gives, and I don't know if it includes Kagi's Assistant.
I think Proton's Lumo is another good bet.
If you want something that doesn't track you: I once asked the Cerebras team on their Discord whether they track the queries and responses from their website's "try now" feature, and they said they don't. I don't really see why they'd lie about it, given that it's only meant for very basic purposes and they don't train models or anything.
You also get some of the fastest inference around for models including GLM 4.7 (which z.ai uses).
You might not get search results, though; for search-related queries, duck.ai's pretty good, and you can always mix and match.
But Cerebras recently got a $10 billion investment from OpenAI, and I've been more critical of them since, so do be wary now.
Kagi Assistant does seem good if someone already uses Kagi or has a subscription, from what I can tell.
So there you go: maybe it won't give exactly what regulators say they want, but it will give exactly what they truly want.
Why would it encourage this for anyone?
It is truly not just children who need protection.
Great way to ensure nobody seeks mental health treatment.
There are dark sides to the rollout that EFF details in their resource hub: https://www.eff.org/issues/age-verification
There is a confluence of surveillance capitalism and a global shift towards authoritarianism that makes it particularly alarming right now.
We don't need to jam ads into every orifice.
I hope there's more value to be had not doing ads than there is to include them. I'd cancel my codex/chatgpt/claude if someone planted the flag.
OpenAI seems to think it has infinite trust to burn.
Apple is a has-been. Anthropic is best positioned to take up the privacy mantle, and may even be forced to, given that most of their revenue is enterprise and B2B.
I hope there's some layer between Apple and Gemini but only those at the helm can be trusted to make that happen and I don't trust them to choose users over the dollar.
In the press release Apple said they will be running this on their own hardware (both on-device and private cloud). They're not going to be directly routing requests to Gemini hosted by Google.
This obviously doesn't preclude some kind of data sharing arrangement, but there is at least some indirection between the two.
Actually, all the AI companies together should choose a micropayment system to focus on. I know in fashion, I've seen what would seem like competing brands center around a common "pillar of influence."
Also, if (long-tail?) AI companies work together, they could install appliances and terminals around cities. The most immediate use case: transport timetables. It seems like a no-brainer the more I think about it. Especially good for tourists who don't speak the local language. Governments may end up wanting to do that anyway and could subsidize the cost. It really depends on how fixated people are on owning their own screen versus using someone else's. Those city screens could end up as billboards anyway, especially for local businesses. They could print for a fee too, and third parties could pay to get their app listed. Also, it's worth considering the increase in wealth inequality and rising hardware costs for people to own and stream to their own device. So this could be like Internet Cafe 2.0.
Incidentally, there's a recent thread about someone streaming HN to a cheap display: https://news.ycombinator.com/item?id=46699782 - why not have such displays around town? I guess one major problem is vandalism.
https://www.w3.org/TR/1999/WD-Micropayment-Markup-19990825/
Why would you want to use a terminal for mass transit instead of your phone?
Look how cheap x402 transactions are (i.e., almost free): https://gemini.google.com/share/cbf1adb1570c It's a new thing; have business models adapted accordingly?
https://www.x402.org/

> AI agent sends HTTP request and receives 402: Payment Required
> AI agent pays instantly with stablecoins
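As far as I can tell, that loop is just plain HTTP with a retry. A minimal client sketch could look like the following, where the X-PAYMENT header name and the signPayment helper are my assumptions based on the protocol's public description, not a verified implementation:

    // Sketch of the x402 request/pay/retry loop. The header name and
    // signPayment() are assumptions, not a verified client implementation.
    async function fetchWithPayment(url: string): Promise<Response> {
      const first = await fetch(url);
      if (first.status !== 402) return first; // resource was free

      // The 402 body advertises price, asset, and destination address.
      const requirements = await first.json();

      // A real agent would build and sign a stablecoin authorization here.
      const proof = await signPayment(requirements); // hypothetical helper

      return fetch(url, { headers: { "X-PAYMENT": proof } });
    }

    // Placeholder signer so the sketch is self-contained.
    async function signPayment(requirements: unknown): Promise<string> {
      return btoa(JSON.stringify(requirements));
    }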
Smells like a weird ad to me.
More is written about it here: https://gemini.google.com/share/6113940b8e1e ("What are the best examples of x402 payments being made currently?")
From what I've read on HN, running advertising technology is an expensive and complex undertaking. I'd be trying to skip it altogether and keep the subsequent costs, intrusions and headaches away from users.
The other good thing about micropayments: being able to instantly divert some of the revenue back to content and training sources. That would make it more righteous and more conducive to cooperation from them (e.g., realtime pings). Content will improve as a result, better justifying the costs. It could lead to less bot rampage too, lowering bandwidth costs overall.
It'd also (hopefully) remove the temptation for AI companies to resort to black-hatting: scamming, backdoors, and trojans to recoup their costs. That's seriously important; code-checking could become less of a priority, saving time for end users.
"I keep a cheap travel eSim plan active on it so that if I am somewhere sketchy I can leave my main phone at home." https://news.ycombinator.com/item?id=46639157
It's a personal choice - you are also tied to a battery charger. Wait, solar panels are getting better.
Why is it so difficult to run a mobile app on a PC? Why can't there be a device that I connect to my laptop to turn it into a phone (voice + texts) whenever the need arises? Weird. What's with the identification required at SIM point-of-sale? Is someone trying to track me or something?
Again, your use case is not a business model. What next, bring back pay phones?
No ads is a point of product differentiation. One among many. But in some sense ads are a natural resource curse that pervade the whole company. Again, I point to Apple vs Google/Meta.
I wouldn't hold my breath...
I'd listen to someone who has managed to not see an ad or pay for a subscription in 20 years. Sounds pretty impressive.
Me, I've seen a lot of ads and happily paid for chatgpt plus, pro, and the api. Not that I think that privileges my opinion.
Is it not in OpenAI's best interest to accidentally flag adults as teens, so they have to "verify" their age by handing over their biometric data (a Persona face scan) or their government ID? Certainly that level of granularity will enhance the product they offer to their advertisers.
> While this is an important milestone, our work to support teen safety is ongoing.
Agreed. I guess we'll see some pushback similar to Apple's CSAM scanning, but overall it's about getting better demographics on their consumers for better advertising, especially when you have one-click actions combined with it. We'll see a handful of middleware plugins (like Honey) popping up, which I think is the intended use case for something like chat-based apps.
Use an encouraging tone. Adopt a skeptical, questioning approach. Call me on things which don't seem right. List possible assumptions I'm making if any.
The padding in OpenAI's statement is easy to see through:
> The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.
(The only real signal here is "usage patterns", AKA the content of conversations; the other variables are obfuscation to soften the idea that OpenAI will be poring over users' private conversations to figure out whether they're over or under age.)
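Purely to illustrate why the other variables are weak next to conversation content, here's a toy scorer over the signals they name; every weight and feature here is invented, and nothing reflects OpenAI's actual model:

    // Toy minor-likelihood scorer over the signals OpenAI names.
    // Weights are invented; a real system would be a trained classifier,
    // dominated by the content-derived signal.
    type AccountSignals = {
      accountAgeDays: number;
      lateNightRatio: number;            // fraction of use from 10pm to 6am
      statedAge: number | null;
      minorLikelihoodFromText: number;   // 0..1, from a content model
    };

    function minorScore(s: AccountSignals): number {
      let score = 0.5 * s.minorLikelihoodFromText;            // dominant term
      score += 0.1 * Math.max(0, 1 - s.accountAgeDays / 365); // new accounts
      score += 0.1 * s.lateNightRatio;
      if (s.statedAge !== null && s.statedAge < 18) score += 0.3;
      return Math.min(1, score);
    }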
Worth also noting 'neutered' AI models tend to be less useful. Example: Older Stable Diffusion models were preferred over newer, neutered models: https://www.youtube.com/watch?v=oFtjKbXKqbg&t=1h16m
But when YouTube rolled out its age checks, I saw a video on taxes, watched simply to trick the YouTube algorithm, which had like A LOT of views.
I went to the comments, and many of them were teenagers bashing the YouTube idea and joking about how, yes, it really helped with their taxes, etc.
I simply don't see how OpenAI would be any different.
YouTube is still one in a million, though; nothing else like it exists. But there are many chat providers like OpenAI that are actually pretty good nowadays and don't want your ID.
What I wonder lately is how an adult person can be empowered by tech to bear the consequences of their actions, and the answer usually is that we cannot. We don't have the means of production, in the literal Marxist sense of the phrase, and we are being shaped by outside forces that define what we can do with ourselves. And it does not matter whether those forces are benevolent or not; what matters is that they are not us.
The winter is coming and we are short on thermal underwear.
The Chinese open models being a reason for hope is just a very sad joke.
"Q: ipsum lorem
ChatGPT: response
Q: ipsum lorem"
OpenAI: take this user's conversation history and insert the optimal ad that they're susceptible to, to make us the most $
That was just a spur-of-the-moment question. I've been using ChatGPT for over six months now.
Horoscopes must feel like literal magic to you.
You’ve recently been feeling a quiet tension between wanting stability and craving some kind of change. On the surface, things look mostly under control, but there’s a sense that one small adjustment—something you’ve been postponing—could shift your mood more than you expect.
See also: cold reading. (What I'm saying is that calling 30-35 "dead on" makes you look like somebody who is easily impressed by parlour tricks.)
I only asked ChatGPT to guess my age because I'm assuming OpenAI is going to have the LLM assume your age going forward, which is an interesting use of the technology. Rather than ask up front, it just guesses based on the utilization of the tool. I understand "dead-on" may have been the wrong use of the term, let's just say it was fairly accurate...
I probably would be impressed by a good parlour trick or an accurate horoscope. Lmao
I don't know how OpenAI plans to do this going forward, just quickly read the article and figured that might be a good question to ask ChatGPT.
Edit: I just followed that up with, "Based on everything I've asked, what gender am I?" It refused to answer, stating it wouldn't assume my gender and treats me as gender neutral.
So I guess it's ok for an AI agent to assume your age, but not your gender... ?
I don't really feel like diving into the ethics of OpenAI at the moment lol.
I find it strange that having it assume your age isn't off limits - according to the article, it's about to become a major feature.
Seriously though, this is the most easily gameable thing imaginable; teens are surely clever enough to figure out how to pretend to be an adult. If you've concluded that your product is unsuited for kids, implement actual age verification instead of these shoddy stochastic surveillance systems. There's a reason "his voice sounded deep" isn't going to work for the cashier who sold kids booze.
It's absolutely crucial for effective ad monetization to know the user's age; significant avenues are closed off by legislation like COPPA and similar laws around the world. That legislation severely limits which users can even be shown ads, the kinds of ads, and whether data can be collected for profiling and ad targeting.
If this was for safety, they could’ve done it literally any time in the past few years - instead of within days of announcing their advertising plans!
> ChatGPT automatically applies additional protections designed to reduce exposure to sensitive content, such as:
* Graphic violence or gory content
* Viral challenges that could encourage risky or harmful behavior in minors
* Sexual, romantic, or violent role play
* Depictions of self-harm
* Content that promotes extreme beauty standards, unhealthy dieting, or body shaming
That wording implies that's not comprehensive, but hate and disinformation are omitted. After that, the algorithm gets very complex and becomes a black box. I'm sure they spent billions training it.
I've been very aggressive toward OpenAI on here about parental controls and youth protection, and I have to say the recent work is definitely more than I expected out of them.
To me, this feels nefarious given the recent push into advertising. Not only are people dating these chat bots, but they are more trusting of these AI systems than of people in their own lives. Now OpenAI is using this "relationship" to influence users' buying behavior.
But consider OP's point: ChatGPT has become a safety-critical system. It is a tool capable of pushing a human towards terrible actions, and there are documented cases of it doing this.
In that context, what is the responsibility of OpenAI to keep their product away from the most vulnerable, and the most easily influenced? More than zero, I believe.
So are The Catcher in the Rye and The Birth of a Nation.
> the most vulnerable, and the most easily influenced
How exactly is age an indicator of vulnerability or subject-to-influence?
No, those are books. Tools are different, particularly tools that talk back to you. Your analogy makes no sense.
> How exactly is age …
In my experience, 12-year-old humans are much easier to sway with pleasant-sounding bullshit than 24-year-old humans. Is your experience different?
It's really, really not. "Safety-critical system" has a meaning, and a chat bot doesn't qualify. Treating the whole world as if it needs to be wrapped in bubble wrap is extremely unhealthy and is generally just used as an excuse for creeping authoritarianism.
Then maybe they should do something about that instead of papering it over with bullshit.
When I read the chat logs of the first teenager who committed suicide with the help and encouragement of ChatGPT, I immediately started thinking about ways to avoid that which would make sense in the product. I want companies like OpenAI to have the same reaction and try things. I'm just glad they are.
I'm also fully aware this is unpopular on HN and will get downvoted by people who disagree. Too many software devs without direct experience in safety-critical work (what would you do if you truly are responsible?), too few parents, too many who are just selfishly worried their AI might get "censored".
There are really good arguments against this stuff (e.g. the surveillance effects of identity checks, the efficacy of age verification, etc.) and plenty of nuance to implementations, but the whole censorship angle is lame.
I suppose mainly because I don't think a non-minor committing suicide with ChatGPT's help and encouragement matters less than a minor doing so. I honestly think the problem is GPT's user interface being a chat. I think it has a psychological effect: you can talk to ChatGPT the same way you can talk to Emily from school. I don't think this is a solvable problem if OpenAI wants this to be their main product (and obviously they do).
I wholeheartedly reject the fully sanitized "good vibes only" nanny world some people desire.
I'm pretty sure AI has saved more lives than it has taken, and there are pretty strong arguments that someone who's thinking of committing suicide will likely be thinking about it with or without AI systems.
Yes, sometimes you really do "have to take one for the team" in regards to tragedy. Indeed, Charlie Kirk was literally talking about this the EXACT moment he took one for the team. It is a very good thing that this website is primarily not parents, as they cannot reason with a clear unbiased opinion. This is why we have dispassionate lawyers to try to find justice, and why we should have non parents primarily making policy involving systems like this.
Also, parents today are already going WAY too far with non-consensual actions taken towards children. If you circumcised your male child, you have already done something very evil that might make them consider suicide later. Such actions are so normalized in the USA that not doing it will make you be seen as weird.
Some kids are mature enough from day one to never need tech overlords to babysit them, while others will need to be hand-held through adulthood. (I've been online since I was 12, during the wild and wooly Usenet and BBS days, and was always smart enough not to give personal info to strangers; I also saw plenty of pornographic images [paper] from an even younger age and turned out just fine, thank you.)
Maybe instead of making guesses about people's ages, when ChatGPT detects potentially abusive behavior, it should walk the user through a series of questions to ensure the user knows and understands the risks.
My feelings have absolutely nothing to do with censorship. That’s just an easy straw man for you to try and dismiss my point of view, because you’re scared of not feeling safe.
This seems to be a side quest on the way to their real goal, and a good way to calibrate the future ad system's predictions.
But age verification is all over the place. Entire countries (see Australia) have either passed laws, or have laws moving through legislative bodies.
Many platforms have voluntarily complied. I expect that by 2030, not just age verification but full identity will be required to access online platforms everywhere on Earth. If it weren't for all the massive attempts to subvert our democracies by state actors, and even by political movements within democratic societies, it wouldn't be pushed so hard.
But with AI generated videos, chats, audio, images, I don't think anyone will be able to post anything on major platforms without their ID being verified. Not a chat, not an upload, nothing.
I think consumption will be age vetted, not ID vetted.
But any form of publishing, linked to ID. Posting on X. Anything.
I've fought for freedom on the Internet, grew up when IRC was a thing, and knew more freedom on the net than most people using it today. But when 95% of what is posted on the net is placed there with the aim to harm? To harm our societies, our peoples?
Well, something's got to give.
Then conjoin that with the great mental harm that smart phones and social media do to youth, and.. well, anonymity on the net is over. Like I said at the start, likely by 2030.
(Note: having your ID known doesn't mean it's public. You can be registered, with ID, on X or on YouTube, so the platform knows who you are. You can still be MrDude as an alias...)
What?
I'd argue the overwhelming majority was agnostic at best to harm being a factor.
And adults.
Lots of upsides for them.
Likely demographic: male, 35-65
Potential interests: technology, privacy-focused products
Ideal ad style: avoid emotional hooks; list product features in an objective-seeming manner.
This trillion-dollar market is not about empowering users to style their pages more quickly, heh.
https://allowe.com/games/larry/tips-manuals/lsl3-age-quiz.ht...
> Users [...] will always have a [...] simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.
Nice, so now your most secret inner talk with the LLM can be directly associated with your face and ID. Get ready for the fun moment when Trump decides he needs to see your discussions with the AI when you cross the border or piss him off...