I think this is... fine? Am I just totally naive? I think it's fine to say "You don't really have privacy on this app" - as long as there are relatively good alternatives that do have privacy (and I think there are). TikTok is really a public-by-default type of social media; there's not much notion of mutual following or closed groups. So sure, you don't have privacy on TikTok - if you want it, you can move to Snapchat or Signal or whatever platform of your choice.
Like, it's literally a platform that was run under the watchful eye of the CCP, and now the US version is some kleptocratic nightmare, so I just don't see the point in expecting some sort of principled stance out of them.
In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform. If you're going to embrace 'privacy' I do think it's on you to also then put additional resources into tackling the downsides of that.
Children are just too effective a tool when building a surveillance state. We should have banned children from owning open computers a long time ago, just like we do with alcohol, driving licenses, etc.
Instead, children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal. We already heavily restrict the freedom of children, so there is plenty of precedent for this. Optionally we could provide service points to unlock devices when they turn 18, to avoid e-waste as well.
This way it's the point of sale where you provide your ID, instead of attaching it to the hardware itself and sending it out to every single SaaS on the planet to do what they wish.
Would be a nightmare to implement and achieve the goal, but I have to say I think it’s more right than wrong. All of the data is very clear about the harms.
China has restrictions for social media and screen time for kids — how do they implement this?
I actually think this would be easier to implement than many of the current ID verification methods I've seen being pushed. We already have the infrastructure for selling age restricted goods, this is nothing new. Manufacturers that are unable to restrict their hardware in a "child" mode don't have to do anything and could simply continue selling to adults only.
It's obvious we're moving in a direction where we are going to get these restrictions in one way or another, and this is the only way I've come up with that doesn't come with serious privacy implications.
Most importantly, this solution would be simple for anyone to understand. You don't need to be a cryptography expert to understand there are child safe devices and then there are unrestricted devices for adults.
Would the parents comply though? Many of these restrictions work because most adults agree they are reasonable. Take alcohol: children could drink as much as they wanted at home if adults permitted it.
If most adults were convinced there is an issue, the lock-down modes that exist even nowadays would probably be enough; I'm not sure it is a "technical" problem.
I strongly believe that most would, actually. All parents I've talked to have had issues with managing their children's online activity. They know there are harmful things they want to prevent them from accessing, but it is simply too hard to configure and set up the existing tools for it. (Besides, none of their children's friends have any restrictions, so it all seems pointless.)
I can also see large support for uploading ID to various services when talking about kids, but when you re-frame the question to adults, most seem to dislike the idea immensely.
Sure there will be children with access to unrestricted devices, just like we had kids with porn mags hidden in a forest somewhere back in the day, or how that one sketchy guy was buying alcohol, etc. But I think this is an acceptable level of risk for whatever harm people want to prevent.
Definitely makes it easier for parents. It also normalizes screen time limits for kids. When none of your kids' friends have screen time limits, it's harder to enforce. When at least there's a few of them, it's easier to get buy-in from your kids.
It's a nightmare, to some extent, to prevent underage people from consuming alcohol, if you want to phrase it that way. But we don't try to ban stores from selling alcohol because of concerns that children will drink it. Instead we require stores to check ID.
Which has never worked. Korea had a system to prevent kids from gaming after midnight for something like 15 years. All it did was make Korean kids very good at memorizing their parents' ID numbers.
Maybe it does work exactly as intended. It gives parents more leverage to restrict their kids' gaming, but many parents just don't care. And that's OK, I guess; society probably needs some flexibility in raising the next generation.
In China they link the ID to a phone number (via mobile carriers) and the online services require you to authenticate using the phone (SMS etc.) Unless the kids are able to secretly access the parent's phone there's no low-effort way to work around the system.
I don't know about Korea but if memorizing an ID number works, then that's just a badly designed system.
I'm not sure what your argument really is, unless you're saying there is absolutely no technically feasible way to securely verify the age of a person before allowing them to access an online service (even if you allow the government to be authoritarian).
Parents are also allowed to restrict their children's access to alcohol and cigarettes, but it seems a government ban on them buying those things works better.
Doesn't seem to be a universal truth to me. As a teenager I had rather easy access to both cigarettes and alcohol in spite of the usual legally imposed age restrictions. I didn't care what the gov't thought about it. I did care about what my parents would do if I got caught drunk, though. That was my real barrier.
Given the ease with which kids who want them can get any of those things in schools, it's not clear that the government ban is actually doing anything of significance or that the reduction in usage isn't more a result of convincing people that those things are actually bad for them so they choose not to partake despite the continued widespread availability.
Notice that consumption of those things is also down for adults even though adults are not banned from getting them.
I don't think debazel was saying that children should have been banned from owning computers for the benefit of the children. He was saying that children should have been banned from owning computers so that the government would have no excuse to regulate what's allowed on computers.
Locking down children’s devices doesn’t stop adults sharing illegal content with other adults though, so there would still be pressure to monitor communications between adults.
At some point, laws become an ineffective tool for preventing malevolent people from acting in harmful ways, no matter what they say. Meanwhile, the prejudices of wicked states will keep hitting the general public ever harder as more and more drastic laws, lacking any balance, are enacted.
At the same time, I remember growing up in the internet's wild west and bad encounters weren't an issue for me because of the golden rule I was taught from the start: you don't give your personal information and you don't interact with complete strangers. Learning to navigate the web instead of being in a walled garden was helpful in many ways.
The better question to ask ourselves is: does the capability to gather more information also lead to more power to act on that information? If investigative resources are spread thin already, it's not like they're going to catch more criminals by investing more there. Repelling questionable individuals off the platform with lots of transparency -is- an effective way, but it's just a specific tool for a symptom.
I think part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially the real problem is discovery, and warding off social outliers in a way that doesn't put all responsibility on opaque algos or corporations.
A part of their e2e keys could be shared using an intentionally obtuse way like mailing an item or a physical "friend code". That way parents and vetted friends can have their privacy.
You don't need to tie an ID to someone's person to get positive confirmation of someone's poor behaviour. If someone crosses the line, the parents can see it and escalate. In addition, what would happen to a child with abusive parents who can then arbitrarily restrict and deny the child's freedom to communicate? I did not have this myself, but without free access to other minds and information I would have been duller. Does a large information dragnet really serve our collective interests, or are more precise tools needed?
> I think part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially the real problem is discovery, and warding off social outliers in a way that doesn't put all responsibility on opaque algos or corporations.
This is actually a key consideration for the proposed implementation. The biggest issue for parents when restricting their children's online activity is that they simply don't understand the tools available for it.
By having a "child mode" iPhone, parents don't have to know any of that. They simply buy the iPhone Kids for their children and then get a plain iPhone for themselves.
If these restrictions were to actually be enforced by law as well, then it would make it very easy for teachers and other guardians to check if a device is appropriate for the child using it.
From what I've seen, the bad effects don't necessarily just come from free access to the internet, but that everyone around them in their social group has a video camera that can covertly record, they're all immature children and thus you cannot slip up once or you get kid cancelled, and they start doing a collective dissociative freeze response in a self-imposed emergent panopticon as a result.
So if the teen phone turned into a restricted "call mom" device with no cameras, neon-yellow obvious fuck-you coloring and a restricted set of apps, and police took away a full phone much like they take away cigs and beer, it might be enough to break the critical mass that creates this issue. They can have dedicated cameras for video club, use the family computer, have an Xbox or Switch, and have whatever tech experience millennials had - the last generation not to have exponential increases in anxiety, depression and sexlessness.
It's the covert camera + internet that's the key issue.
> Instead, children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal.
California is mandating OSes provide ages to app stores, and HN lost their mind because it's a ban on Linux.
> California is mandating OSes provide ages to app stores,
They forgot to put in the provision which exempts apps that do not need an age rating? As in: everything OS-related.
Sounds like a good way to get rid of snap at least, since that is where all the commercial bloat is located. Last time I did a fresh Debian install I don't remember installing any app from the OS repository which would require age restrictions (afaik).
Agreed. Putting the burden on parents is quite something:
1. You end up being the bad guy; other parents don't restrict their kids' internet usage, etc. Some folks would argue to just not set up restrictions and trust them. But it's a slippery slope and puts kids in a weird position. They start out with innocent YouTube videos, but pretty quickly a web search or even a comment can lead them to strange places. They want to play games online, but then creeps abuse that all the time. Even if you trust them not to do anything "wrong", it's a lot to put on their shoulders.
2. If you want to put restrictions in place, even if you're an expert, the tools out there are pretty wonky. You can set up a child-protection DNS, but most home routers don't make it easy (or even allow you) to set a different DNS server. And that's not particularly hard to circumvent. I suppose a proxy would be a more solid solution, but setting that up would be major yak shaving. Any "family safety" features (especially those from Microsoft) are ridiculously complicated and often quite buggy. Right now, I've got the problem on my plate that I need to migrate one of my kids' accounts from a local Windows account to a Microsoft account (without them losing all their stuff), because for local accounts, it seems the button to add the device is just missing? Naturally, the docs don't mention that; I had to do research to arrive at that hypothesis. The amount of yak shaving, setup and configuration you have to do for a reasonable setup is just nuts.
3. If you're not good with tech - I don't see how you have _any_ chance in hell to set up meaningful restrictions.
Some countries are banning social media - sure, that's one thing. But there's a _lot_ of weird places on the internet, kids will find something else. I for one would appreciate dedicated devices or modes for kids < 18. Would solve all this stuff in a heartbeat.
TikTok has a drug-like effect on the brain. Multiple studies show a clear link between excessive TikTok engagement and increased levels of anxiety, depression, and stress. Maybe it is time we regulate it like a drug?
Is that because of engaging with tiktok, or because of the content on tiktok? If the app was exclusively pictures of kittens and nice flowers you saw on your commute, would it have a detrimental effect?
I don't understand why all the child safety systems require age verification. Why not have a single setting on a smartphone that sends a 'child' flag to every single app or website, which then reacts accordingly? As long as you ensure that the browser can't be changed or modified, it should be fine.
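The service-side mechanics of such a flag could be trivially simple. Here's a toy sketch; the `X-Device-Minor` header name is made up for illustration (nothing like it is standardized today):

```python
# Toy sketch of the "child flag" idea: the device OS, not the user,
# attaches one boolean signal to every request, and each service
# decides how to react to it. The header name is hypothetical.

def content_policy(headers: dict) -> str:
    """Return the moderation tier a service should apply to a request."""
    if headers.get("X-Device-Minor") == "1":
        return "restricted"   # child-mode device: safe content only
    return "standard"         # unrestricted adult device, no ID collected

print(content_policy({"X-Device-Minor": "1"}))  # restricted
print(content_policy({}))                       # standard
```

The appeal is that the service learns exactly one bit ("is this a child device?") rather than an identity, though it only works if the OS-level flag can't be toggled by the child.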
Does it matter? It's just some arbitrary company. They have the freedom to decide those things however they want, right? The customer can then decide whether to switch or not.
It matters because if it works and people continue using the platform then other providers will follow and the only remaining E2EE providers will be niche.
Ultimately your neighbors must buy the argument. The reason why this argument wins is not because framing is so tricky, but because it connects with the values of your neighbors. Trying to convince people that these aren't actually their values is swimming upriver.
I fail to see the link between private conversations/DM and E2EE.
To quote a comment I made some time ago:
- You can call your service e2e encrypted even if every client has the same key bundled into the binary, and rotate it from time to time when it's reversed.
- You can call your service e2e encrypted even if you have a server that stores and pushes client keys. That is how you could access your message history on multiple devices.
- You can call your service e2e encrypted and just retrieve or push client keys at will whenever you get a government request.
E2EE only prevents naive middlemen from reading your messages.
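To make the key-custody point concrete, here's a deliberately insecure toy (the XOR "cipher" is illustrative only, not real cryptography). The point is that whoever holds the key reads the message, so a service that stores or pushes client keys can read everything while still advertising "E2EE":

```python
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # NOT real crypto: a keystream-XOR toy to illustrate key custody.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(stream).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Alice encrypts for Bob; the relay server only ever sees ciphertext.
key = b"key-known-only-to-alice-and-bob"
ct = toy_encrypt(key, b"meet at noon")

# But if the provider also stores the key (or pushed it to the clients
# in the first place), "E2EE" is just a label: anyone holding the key
# decrypts the traffic.
assert toy_decrypt(key, ct) == b"meet at noon"
```

Real deployments solve this with client-generated keypairs and key transparency (publicly auditable key directories), so the server can't silently substitute keys it controls.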
Sure, however kids these days often can't socialize irl - should kids be isolated from friends because they're unable to have any private conversations at all?
During times in which I was unable to socialize irl (eg school holidays), and unable to talk to my friends online, I can confirm that the isolation was not good for my mental health.
You could have a reasonable legal system where privacy is guaranteed. But you do not need end-to-end encryption for that to be a thing. It really is an orthogonal issue.
This might be off-topic, but on-topic about child safety... I'm surprised people are being myopic about age verification. Age verification should be banned, but people ignore that most widely used online services already ask for your age and act accordingly: Twitter, YouTube, Google in general, any online marketplace. They already have so much data on their users and optimize their algorithms for those groups in an opaque way.
So yeah, age verification should be taken down, as well as the data mining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.
Monitoring children's DMs is the responsibility of the parents, not megacorps. If a parent wants to install a keylogger or screen recorder on their child's PC, that's their decision. But Google should not be able to. Neither should... literally anyone else except maybe an employer on a work-provided device.
> Monitoring children's DMs is the responsibility of the parents, not megacorps
Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question, and make do with megacorps not being responsible. This means: "we'll allow megacorps to be as they are and not take any responsibilities for the effects they cause to society". Instead of them taking responsibilities, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).
Human operators were not required of The Bell Telephone Company by law. Bell switched to mechanical switching stations as soon as doing so was economically advantageous.
(Reconsider my post. I'm arguing for no regulation.)
I'd say that at minimum social networks need to be required to show how their algorithms work and allow users control over their data. Users must be able to know why a piece of content was served to them. Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests, that this is the bare minimum for a free society.
Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.
They should have a responsibility of transparency, accountability and empathy towards users. They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.
> Monitoring children's DMs is the responsibility of the parents, not megacorps.
Yup, but the tools provided make that easy or hard.
But putting that emotive bit to one side: megacorps have a vested interest in not being responsible to children. They need children's eyeballs to drive advertising revenue. If that means sending them corrosive shit, then so be it.
It's a bigger issue than encryption; it's editorial choice.
Why? Plenty of children benefit from talking to other people. Some children need careful monitoring, and some children shouldn't be allowed to use DMs, but it's not universal and should be up to the parents.
There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
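As a rough illustration of the idea (not the actual W3C Verifiable Credentials format): an issuer attests a single predicate, and the service verifies the attestation without ever seeing a name or date of birth. Real systems use public-key signatures and often zero-knowledge proofs; the HMAC below is a stand-in, which wrongly forces issuer and verifier to share a secret, but the data flow is the point:

```python
import hashlib
import hmac
import json

# Hypothetical issuer key; in a real system this would be the issuer's
# private signing key, with services holding only the public key.
ISSUER_KEY = b"demo-issuer-key"

def issue_credential(over_18: bool) -> dict:
    """Issuer (e.g. a government or bank) attests one predicate only."""
    claim = json.dumps({"over_18": over_18})
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_credential(cred: dict) -> bool:
    """Service checks the attestation; it never learns who the user is."""
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["tag"])
            and json.loads(cred["claim"])["over_18"] is True)
```

Note what the service receives: a claim containing only `over_18` plus a tag proving an issuer vouched for it. No ID document, no birthdate, no name crosses the wire.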
> Age verification obliviates anonymity on the internet.
How so?
Please explain in detail, because there are already schemes such as "verifiable credentials" which allow people to prove they are of age without handing over ID to online services.
> You need to 100% trust those verification services.
First link - mitigation: use a well supported standard like OIDC, not a home-cooked scheme. Duh.
Second link - this is part of the problem such schemes as verifiable credentials are designed to address, random third parties collecting ID they don't need.
Yes, any system needs to be executed well. Neither of these really display that.
If _the government_ can't be trusted not to use a dumbass scheme, then no, it isn't a duh moment. You don't exactly get to dictate how the government implements it!
The point is that systems today, aren't really well executed. So it is unreasonable to expect them to be well executed.
If you can't trust people not to build the bomb well - then don't let them build a bomb.
In the context of "Age verification should be banned" though, we're already talking about legislative intervention. If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.
Perhaps what we're really saying is "Ban age verification that collects lots of personal information".
Or perhaps we could distil it down further to "Ban unnecessary collection and storage of PII". In which case, Congrats! You've arrived back at the GDPR :)
Which I think is a good thing, and should be strengthened further.
(Also the other response to "because most implementations are not going to be like that" is "why not?". People are already building such ecosystems.)
> If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.
There is a problem with schemes like that.
The way computer security works is, attacks always get better, they never get worse. A scheme that nobody has found any privacy holes in when it's enacted will have one found a week after.
The way governments work is, the compromise bill passes if the people who care about privacy support it because then it has the votes of the people who care about privacy and the people who want to ID everyone. But then when the vulnerability is found, the people who care about privacy can't get it fixed because they can't pass a new bill without also having the votes of the people who want to ID everyone, and those people already have what they want. More specifically, many of them then have what they really want, which is to invade everyone's privacy, as they were hoping to do once the vulnerability was found.
Which means you need it to be perfect the first time or it's already ossified and can't be fixed. But the chances of that happening in practice are zero, which means it needs to not happen at all.
/goes on to discuss how government legislation of specific schemes is the issue, not the schemes themselves.
Then we don't legislate specific schemes? The GDPR doesn't do that, for instance; it spells out responsibilities and penalties but doesn't say "Thou shalt use this specific algorithm".
Remember, this discussion started with a call to ban all age checks, which itself is a government action and restriction on the agency of private business.
There are ways that private entities can implement age checks both securely and without leaking much other information, so it seems very heavy-handed to ban them. Private entities are building such systems between themselves already, without government mandates on the specifics.
The difference is that IRL establishments don't sell off that data to anyone else, nor do they have the ability to collate that data with data from other establishments to make a profile of you.
The problem with this discussion is that this is a wonk solution for wonkish times. You're trying to thread the needle between various reasonable compromises. Ironically due to social media, that is simply not how politics and lawmaking works any more. Instead it's an emotionally driven fight between various different sorts of moral panic, and the only option is to get people more mad about surveillance than "think of the children".
You might be able to get somewhere by getting a tech company on your side, but they generally also hate adult content and don't mind banning it entirely.
(people are not going to get age verification _banned_ any time soon! That's simply not going to happen!)
A slippery slope can be a valid argument if you provide the actual reasoning for it; as I was taught, it can be used as deductive argumentation (though that does not say much). On its own, it is a fallacy.
I don't see how verifiable credentials with zero knowledge proofs provide that however.
The Party doesn't care about the Proles, only the members of the Outer Party.
I think that it's rather funny that people like to appeal to 1984 as if the only point of Mr. Orwell was that surveillance is bad, missing the entire point about stuff like the control of the language or the idea that the only self-justification of the (Inner) Party is power for the sake of power (see also: The Theory and Practice of Oligarchical Collectivism).
I'd even go as far as to say that if "telescreens are horrible" is the only thing that someone takes away from 1984, they've frankly missed the point.
I don’t really understand how we are supposed to believe in e2ee in closed proprietary apps. Even if some trusted auditor confirms they have plumbed in libsignal correctly, we have no way of knowing that their rendering code is free of content scanning hooks.
We know the technology exists. Apple had it all polished and ready to go for image scanning. I suppose the only thing in which we can place our faith is that it would be such an enormous scandal to be caught in the act that WhatsApp et al daren’t even try it.
(There is something to be said for e2ee: it protects you against an attack on Meta’s servers. Anyone who gets a shell will have nothing more than random data. Anyone who finds a hard drive in the data centre dumpster will have nothing more than a paperweight.)
> if the server operator was malicious, they could just push different client-side JavaScript
Same as with OS updates, browser updates, dependencies used by the OS, dependencies used by the browser. Also you can run malicious software such as keyloggers and you're compromised.
That argument doesn't mean E2E (even web based) is snake oil. Browsers just give you more points of failure.
Agreed, but a significant point missed in the article is that of data vulnerability: with E2EE, the company's DB is useless to an external attacker.
For some companies (eg facebook, google, tiktok) i would be mostly worried about the company itself being untrustworthy. For others I would be mostly worried about the company being vulnerable.
> It is worth noting that this law also applies to non-web applications where the service provider supposedly being secured against is also the client software distributor; thus, the “end-to-end encryption” offered by Whatsapp and Signal, amongst other proprietary services, is equally bogus. (Both Whatsapp and Signal ban use of third party clients, and enforce this policy.)
As much as I want to agree with you, the people who like TikTok make up a significant amount of the population, and their opinions do matter--arguably more than yours, due to sheer numbers.
Smugly dismissing them doesn't do you any favors except for making you feel good about yourself for a few seconds.
You say that like the typical 18 year old has any idea what they're doing when it comes to proper encryption and communication safety. That is never going to be the case.
It's a communication channel attached to the most popular social network for young people. Obviously they're going to use it a lot. They use it for the extreme convenience.
I feel like this makes sense for a platform that targets teens. Plus, I wouldn't trust TikTok to implement E2E encryption properly—who knows what they've snuck into their client.
What kind of application is not targeted at both teens and adults?
Youtube, twitter, bluesky, whatsapp? Every app with a social aspect will be used by teens. And no, tiktok is not "only for teens" or "specially targeted at teens", nowadays everyone uses it and creates content on it.
I think it's very safe to assume that no major US based platform has 'real' E2E encryption. They're almost certainly all a part of PRISM by now, and it'd contradict their obligations to enable government surveillance. So the only thing that's different is not lying about it. Though I expect the other platforms are, like when denying they were part of PRISM, telling half truths and just being intentionally misleading. 'We provide complete E2E encryption [using deterministically generated keys which can be recreated on demand].'
Aside from the fact that you can get Metadata and that some communication frequently happens outside of E2EE - what US law do you believe mandates moderation? I'm curious.
Obviously carrier pigeons carrying messages encrypted with post-quantum ciphers where keys have been sent ahead of time using USPS because no one would be so rude as to read someone elses mail.
It's never been controversial; it's the BBC doing its usual job of laundering the arguments the establishment wants you to hear for domestic consumption.
The thing is, it _is_ controversial. At least amongst the general public.
Obviously not in somewhere like Hacker News where there’s a clear consensus, but if you asked a random sample of the UK population “should law enforcement be allowed to compel tech companies to hand over all DMs of confirmed paedophiles?”, I’d bet very good money the majority would say “yes”.
The notion that “Big Tech” can absolve themselves of the responsibility to help law enforcement find child abusers by saying “it’s all encrypted, not my problem”, does not sit well with a large sector of the population.
Whether it’s good or bad is an ultimately political question, and both sides of the debate tend to talk past each other on this topic, but it’s undeniably a controversial point within the broader population.
Fun fact - there is a big correlation between World Wars and compulsory education. Of course governments and big corporations "care" about children. Of course!
> But critics have said E2EE makes it harder to stop harmful content spreading online, because it means tech firms and law enforcement have no way of viewing any material sent in direct messages.
Like they give a damn. I report accounts that explicitly sell fake credit cards, citing laws that make it illegal and 95% of the time "we checked and there is no violation here, we know that you're not happy but don't give a crap".
So the argument of security is utter bullshit and they just want to snoop.
Reminder: Larry "citizens shouldn't get any privacy" Ellison now owns TikTok. If you're still using it, or have friends and family using it, you should stop immediately. It WILL eventually be used against you if this regime gets its way.
As if. If people haven't stopped using TikTok despite all of the other reasons for stopping, then Ellison's ownership is damn sure not going to move the needle.
The core tension here isn’t really about encryption itself, it’s about moderation models.
Most large platforms rely heavily on server-side visibility for abuse detection, spam filtering, recommendation systems, and safety tooling. End-to-end encryption removes that visibility by design. Once a platform is built around centralized analysis of user content, adding strong E2EE later isn’t just a feature toggle — it conflicts with large parts of the existing architecture.
I hate the BBC so much - "controversial privacy tech" "E2EE ... the best way to protect conversations from .. even repressive authorities" "End-to-end encryption has been criticised by governments, police forces"
They're saying this at the same time as they're clutching pearls over Iran's repression of protestors. Typical of the ethical consistency I would expect from them.
This, according to many researchers, is the best case study of corporations gaslighting users into accepting surveillance by companies and governments alike.
TikTok’s stance against end-to-end encryption is unsurprising but still concerning. TikTok is a source of information on many topics, such as the genocide in Gaza, which traditional media underreport and many governments try to suppress. The network effect of big social media platforms means many people will likely talk about these topics in TikTok DMs. No matter what legal controls TikTok claims to enforce, there is no substitute for technological barriers for preventing invasions of privacy and government overreach. This is yet another example where corporations and governments sacrifice people’s autonomy and privacy in the name of security.
It's a pretty terrifying world we live in now, where an unencrypted, addictive short-form video platform is considered more of a source of information than news agencies or even community-managed forums.
"The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk."
And yet, it's even more complex than that, since it's now owned by cronies of the current US President. I've never had a TikTok account, but conceptually I was mostly pretty okay with being spied-upon by China. I'm never going to China.
Yes. China gives a shit that user rdiddly, at 36 minutes before 00:55 UTC on March 4, 2026, said that China is spying, to the point that they are going to be abducted for it.
> Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritising 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite
I wondered how it could be considered 'controversial', but they do quote at least a couple groups speaking against it. The NSPCC for instance, who incidentally also warned parents about a Harry Potter video game because their children might want to learn more about the game:
>“Parents should also be aware that players may want to find out more about the game using other platforms such as YouTube, Twitch, Reddit and Discord, where other game fans can discuss strategies and experiences.
It is controversial.. amongst people who have concerns about private communications and society, from a regulatory and governance perspective.
It's uncontroversial amongst people who value their privacy.
The tension between the two camps (there are obviously nuances and this is a false dichotomy) is at a current peak. It's an ongoing controversy. It's a matter of public debate.
You might have liked it better if the angle had been "...which the government, controversially, wants to clamp down on" or something.
Calling something controversial is a favorite propaganda technique employed by "news" outlets. It's another form of selective reporting and framing. It carries negative connotations, and has really no objective standard by which it can be wrong since you'll always find somebody against any issue.
The UK government seems a lot more willing to embrace the panopticon in the name of protecting people from terrorists, child sex traffickers, human rights activists, Catholics, jaywalkers, you name it.
Like, it's literally a platform that was run under the watchful eye of the CCP, and now the US version is some kleptocratic nightmare, so I just don't see the point in expecting some sort of principled stance out of them.
In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform. If you're going to embrace 'privacy' I do think it's on you to also then put additional resources into tackling the downsides of that.
IMO no consumer service should have private 1:1 messaging without E2EE. Either only do public messaging (i.e. like a forum), or implement E2EE.
Instead children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal. We already heavily restrict the freedom of children, so there is plenty of precedent for this. Optionally we could provide service points to unlock devices when they turn 18 to avoid e-waste as well.
This way it's the point of sale where you provide your ID, instead of attaching it to the hardware itself and sending it out to every single SaaS on the planet to do what they wish.
China has restrictions for social media and screen time for kids — how do they implement this?
It's obvious we're moving in a direction where we are going to get these restrictions in one way or another, and this is the only way I've come up with that doesn't come with serious privacy implications.
Most importantly, this solution would be simple for anyone to understand. You don't need to be a cryptography expert to understand there are child safe devices and then there are unrestricted devices for adults.
If most adults were convinced there is an issue, today's devices probably already have enough lock-down modes; I'm not sure it is a "technical" problem.
I can also see large support for uploading ID to various services when talking about kids, but when you re-frame the question to adults, most seem to really dislike the idea immensely.
Sure there will be children with access to unrestricted devices, just like we had kids with porn mags hidden in a forest somewhere back in the day, or how that one sketchy guy was buying alcohol, etc. But I think this is an acceptable level of risk for whatever harm people want to prevent.
Are you saying that kids now buy their phones with pocket money without their parents knowing?
> It's obvious we're moving in a direction where we are going to get these restrictions in one way or another
It’s not obvious, it’s just sad. I still hope reason will prevail in this.
I don't know about Korea but if memorizing an ID number works, then that's just a badly designed system.
I'm not sure what your argument really is, unless you're saying there's technically no feasible way to securely verify the age of a person before allowing them to access an online service (even if you allow the government to be authoritarian).
Notice that consumption of those things is also down for adults even though adults are not banned from getting them.
The better question to ask ourselves is: does the capability to gather more information also lead to more power to act on this information? If the investigative resources are already spread thin, it's not like they're going to catch more criminals by investing more there. Repelling questionable individuals off the platform with lots of transparency is an effective way, but just a specific tool for a symptom.
I think part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially the real problem is discovery and warding off social outliers in a way that doesn't put all responsibility on opaque algos or corporations.
A part of their e2e keys could be shared in an intentionally obtuse way, like mailing an item or a physical "friend code". That way parents and vetted friends can have their privacy. You don't need to tie an ID to someone's person to get positive confirmation of someone's poor behaviour. If someone crossed the line, then parents can see it and escalate. In addition, what would happen to a child with abusive parents who can then arbitrarily restrict and deny a child's freedom to communicate? I did not have this myself, but without free access to other minds and information I would have been duller. Does a large information dragnet really serve our collective interests, or are more precise tools needed?
This is actually a key consideration for the proposed implementation. The biggest issue for parents when restricting their children's online activity is that they simply don't understand the tools available for it.
By having a "child mode" iPhone, parents don't have to know any of that. They simply buy the iPhone Kids for their children and then get a plain iPhone for themselves.
If these restrictions were to actually be enforced by law as well, then it would make it very easy for teachers and other guardians to check if a device is appropriate for the child using it.
So if the teen phone turned into a restricted "call mom" device with no cameras, neon-yellow obvious fuck-you coloring and a restricted set of apps, and police took away a full phone much like they take away cigs and beer, it might be enough to break the critical mass that creates this issue. They can have dedicated cameras for video club, use the family computer, have an Xbox or a Switch, and have whatever tech experience millennials had: the last generation not to see exponential increases in anxiety, depression and sexlessness.
It's the covert camera + internet that's the key issue.
California is mandating that OSes provide ages to app stores, and HN lost its mind because it's a ban on Linux.
They forgot to put in the provision which exempts apps that do not need an age rating? As in: everything OS-related.
Sounds like a good way to get rid of Snap at least, since that is where all the commercial bloat is located. Last time I did a fresh Debian install I do not remember installing any app from the OS repository which would require age restrictions (afaik).
That's correct. You need to provide your age to install grep.
1. You end up being the bad guy; other parents don't restrict their kids' internet usage, etc. Some folks would argue to just not set up restrictions and trust them. But it's a slippery slope and puts kids in a weird position. They start out with innocent YouTube videos, but pretty quickly a web search or even a comment can lead them to strange places. They want to play games online, but then creeps abuse that all the time. Even if you trust them not to do anything "wrong", it's a lot to put on their shoulders.
2. If you want to put restrictions in place, even if you're an expert, the tools out there are pretty wonky. You can set up a child-protection DNS, but most home routers don't make it easy (or even allow you) to set a different DNS server. And that's not particularly hard to circumvent. I suppose a proxy would be a more solid solution, but setting that up would be major yak shaving. Any "family safety" features (especially those from Microsoft) are ridiculously complicated and often quite buggy. Right now, I've got the problem on my plate that I need to migrate one of my kids' accounts from a local Windows account to a Microsoft account (without them losing all their stuff), because for local accounts, it seems the button to add the device is just missing? Naturally, the docs don't mention that; I had to do research to arrive at that hypothesis. The amount of yak shaving, setup and configuration you have to do for a reasonable setup is just nuts.
3. If you're not good with tech - I don't see how you have _any_ chance in hell to set up meaningful restrictions.
Some countries are banning social media - sure, that's one thing. But there's a _lot_ of weird places on the internet, kids will find something else. I for one would appreciate dedicated devices or modes for kids < 18. Would solve all this stuff in a heartbeat.
I see you Mr Quaker Oats
ID please.
Seems entirely reasonable.
Possibly entirely ineffective, but then again I don’t often see children walking around with a bottle of booze.
It’s ok for a platform to not feature private conversations. They should just have no DM feature at all, then; make all messages publicly visible.
Private conversations are indeed not for all ages. Parents should be able to grant access to that on individual basis.
To quote a comment I made some time ago:
- You can call your service e2e encrypted even if every client has the same key bundled into the binary, and rotate it from time to time when it's reversed.
- You can call your service e2e encrypted even if you have a server that stores and pushes client keys. That is how you could access your message history on multiple devices.
- You can call your service e2e encrypted and just retrieve or push client keys at will whenever you get a government request.
E2EE only prevents naive middlemen from reading your messages.
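For what it's worth, the standard countermeasure to that last failure mode is out-of-band key verification: if the two endpoints compare a fingerprint of the keys they are actually using, a server that silently swaps keys gets caught. A minimal sketch (the fingerprint format here is a made-up simplification, not any real app's safety-number algorithm):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    # Short, human-comparable digest of a public key, in the spirit of
    # Signal's "safety numbers" (format here is an illustrative simplification).
    digest = hashlib.sha256(public_key).hexdigest()
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

alice_key_real = b"\x01" * 32    # key Alice actually generated
alice_key_served = b"\x02" * 32  # key a malicious server hands to Bob

# If Bob compares fingerprints with Alice out-of-band (in person, on a
# call), a substituted key is immediately visible:
assert fingerprint(alice_key_real) != fingerprint(alice_key_served)
```

Of course, almost nobody actually does this comparison, which is the practical point the list above is making.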
During times in which I was unable to socialize irl (eg school holidays), and unable to talk to my friends online, I can confirm that the isolation was not good for my mental health.
So yeah, age verification should be taken down, as well as the datamining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.
Hogwash.
Where are these mythical people who aren’t concerned with both?
They're called politicians.
Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question, and make do with megacorps not being responsible. This means: "we'll allow megacorps to be as they are and not take any responsibilities for the effects they cause to society". Instead of them taking responsibilities, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).
Fake and scam ads.
They literally profit from those ads. When an ad distributes malware or runs a scam, they don't take any responsibility.
Clear, simple, direct: Whatever was required of The Bell Telephone Company and nothing more.
It's a good thing those human operators couldn't listen in to whichever conversation they wanted.
(Reconsider my post. I'm arguing for no regulation.)
Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.
Hypothetically speaking: What if it's a neural network in which each user has his/her own unique weights which are undergoing frequent retraining?
Would it not be an undue burden to necessitate the release of the weights every time they change?
Also, what value would the weights have? We haven't yet hit the point of having neural networks with interpretability.
Wouldn't enforcing algorithmic interpretability additionally be an undue burden?
> They must be able to know why a content was served to them.
What if the authors of the code are unable to tell you why?
The apples to oranges in this comparison is probably top five on HN ever.
They should have a responsibility of transparency, accountability and empathy towards users. They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.
Yup, but the tools provided make that easy or hard.
But putting that emotive bit to one side, megacorps have a vested interest in not being responsible to children. They need children's eyeballs to drive advertising revenue. If that means sending them corrosive shit, then so be it.
It's a bigger issue than encryption; it's editorial choice.
The children yearn for the mines(?).
That said, these platforms are making it impossible for parents to monitor anything. They're literally designed to profit off addiction in children.
Why?
> They already got so much data on their users
There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
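As a very rough illustration of the idea (all names here are hypothetical, and HMAC with an issuer-held key stands in for the public-key signatures and zero-knowledge proofs real schemes use), the credential carries exactly one claim and nothing else:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for the issuer's signing key

def issue_credential(over_18: bool) -> dict:
    # The issuer (e.g. whoever checked a physical ID once) attests to ONE
    # boolean claim; no name, birthdate, or ID number is included.
    claim = json.dumps({"over_18": over_18}).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def verify_credential(cred: dict) -> bool:
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"])

cred = issue_credential(True)
assert verify_credential(cred)
assert json.loads(cred["claim"]) == {"over_18": True}  # nothing else leaks

# Any tampering with the claim invalidates the credential:
tampered = dict(cred, claim=json.dumps({"over_18": True, "name": "x"}))
assert not verify_credential(tampered)
```

In a real deployment the service verifying the credential would use the issuer's public key rather than a shared secret (so it can verify but not forge), but the privacy property is the same: the service learns "is old enough" and nothing more.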
Allowing for more effective propaganda and electoral control, and lighting a fire under the concept of a government _representing_ anyone.
How so?
Please explain in detail, because there are already schemes such as "verifiable credentials" which allow people to prove they are of age without handing over ID to online services.
You need to 100% trust those verification services. And considering their success rate [1], you shouldn't.
[0] https://thinkingcybersecurity.com/DigitalID/
[1] https://discord.com/press-releases/update-on-security-incide...
First link - mitigation: use a well supported standard like OIDC, not a home-cooked scheme. Duh.
Second link - this is part of the problem such schemes as verifiable credentials are designed to address, random third parties collecting ID they don't need.
Yes, any system needs to be executed well. Neither of these really display that.
The point is that systems today aren't really well executed. So it is unreasonable to expect them to be well executed.
If you can't trust people not to build the bomb well - then don't let them build a bomb.
Perhaps what we're really saying is "Ban age verification that collects lots of personal information".
Or perhaps we could distil it down further to "Ban unnecessary collection and storage of PII". In which case, Congrats! You've arrived back at the GDPR :)
Which I think is a good thing, and should be strengthened further.
(Also the other response to "because most implementations are not going to be like that" is "why not?". People are already building such ecosystems.)
There is a problem with schemes like that.
The way computer security works is, attacks always get better, they never get worse. A scheme that nobody has found any privacy holes in when it's enacted will have one found a week after.
The way governments work is, the compromise bill passes if the people who care about privacy support it because then it has the votes of the people who care about privacy and the people who want to ID everyone. But then when the vulnerability is found, the people who care about privacy can't get it fixed because they can't pass a new bill without also having the votes of the people who want to ID everyone, and those people already have what they want. More specifically, many of them then have what they really want, which is to invade everyone's privacy, as they were hoping to do once the vulnerability was found.
Which means you need it to be perfect the first time or it's already ossified and can't be fixed. But the chances of that happening in practice are zero, which means it needs to not happen at all.
/goes on to discuss how government legislation of specific schemes is the issue, not the schemes themselves.
Then we don't legislate specific schemes? The GDPR doesn't do that, for instance; it spells out responsibilities and penalties but doesn't say "Thou shalt use this specific algorithm".
Remember, this discussion started with a call to ban all age checks, which itself is a government action and restriction on the agency of private business.
There are ways that private entities can implement age checks both securely and without leaking much other information, so it seems very heavy-handed to ban them. Private entities are building such systems between themselves already, without government mandates on the specifics.
(at least not yet)
To get it from Discord you need to sneeze.
The internet has scale and availability, that physical locations do not.
You might be able to get somewhere by getting a tech company on your side, but they generally also hate adult content and don't mind banning it entirely.
(people are not going to get age verification _banned_ any time soon! That's simply not going to happen!)
This is the next two steps into 1984.
Once you start mandating this, there's no going back.
The next generation will start associating wrongthink with government IDs. (Wait, we already do that, right?)
Is it? I thought that was a logical fallacy?
> This is the next two steps into 1984.
How so?
> Once you start mandating this, there's no going back.

> The next generation will start associating wrongthink with government IDs.
Could you provide some more details on why you think this? For a start I talked about a scheme in which you don't hand over ID.
I don't see how verifiable credentials with zero knowledge proofs provide that however.
I think that it's rather funny that people like to appeal to 1984 as if the only point of Mr. Orwell was that surveillance is bad, missing the entire point about stuff like the control of the language or the idea that the only self-justification of the (Inner) Party is power for the sake of power (see also: The Theory and Practice of Oligarchical Collectivism).
I'd even go as far as to say that if "telescreens are horrible" is the only thing that someone takes away from 1984, they've frankly missed the point.
Once it gets big enough in your location you buy it for that sweet sweet intel.
We know the technology exists. Apple had it all polished and ready to go for image scanning. I suppose the only thing in which we can place our faith is that it would be such an enormous scandal to be caught in the act that WhatsApp et al daren’t even try it.
(There is something to be said for e2ee: it protects you against an attack on Meta’s servers. Anyone who gets a shell will have nothing more than random data. Anyone who finds a hard drive in the data centre dumpster will have nothing more than a paperweight.)
https://web.archive.org/web/https://www.devever.net/~hl/webc...
The same goes for OS updates, browser updates, dependencies used by the OS, and dependencies used by the browser. Also, if you run malicious software such as a keylogger, you're compromised.
That argument doesn't mean E2E (even web based) is snake oil. Browsers just give you more points of failure.
For some companies (eg facebook, google, tiktok) i would be mostly worried about the company itself being untrustworthy. For others I would be mostly worried about the company being vulnerable.
Fixed a bit.
Smugly dismissing them doesn't do you any favors except for making you feel good about yourself for a few seconds.
I'm mindful that it's less secure than other apps, but for a lot of chats it doesn't matter.
It's a communication channel attached to the most popular social network for young people. Obviously they're going to use it a lot. They use it for the extreme convenience.
And in a perfect world essentially shouldn’t have to be, at least inside expensive walled garden app stores.
YouTube, Twitter, Bluesky, WhatsApp? Every app with a social aspect will be used by teens. And no, TikTok is not "only for teens" or "specially targeted at teens"; nowadays everyone uses it and creates content on it.
If you run (say) a restaurant, you get big spikes in business from TikTok videos in ways you don't get from Facebook or Instagram or others.
TikTok is the platform everyone is on right now.
TikTok is a social media app, and it gets heavily abused as it is.
You can’t moderate an E2EE platform.
It really depends on whether you think your government is more dangerous than, say, suicide trends, grooming, scamming.
I know the answer is pretty easy for US citizens to answer right now.
Obviously not in somewhere like Hacker News where there’s a clear consensus, but if you asked a random sample of the UK population “should law enforcement be allowed to compel tech companies to hand over all DMs of confirmed paedophiles?”, I’d bet very good money the majority would say “yes”.
The notion that “Big Tech” can absolve themselves of the responsibility to help law enforcement find child abusers by saying “it’s all encrypted, not my problem”, does not sit well with a large sector of the population.
Whether it’s good or bad is an ultimately political question, and both sides of the debate tend to talk past each other on this topic, but it’s undeniably a controversial point within the broader population.
Like they give a damn. I report accounts that explicitly sell fake credit cards, citing laws that make it illegal, and 95% of the time it's "we checked and there is no violation here; we know that you're not happy, but we don't give a crap".
So the argument of security is utter bullshit and they just want to snoop.
https://digitaldemocracynow.org/2025/03/22/the-troubling-imp...
And yet, it's even more complex than that, since it's now owned by cronies of the current US President. I've never had a TikTok account, but conceptually I was mostly pretty okay with being spied-upon by China. I'm never going to China.
China will come to us.
Or should that be:
China will come to the US.
Voluntarily.
Means they read every message
because TikTok is addictive, and they know it…
After you notice it, you'll notice it everywhere.