Lots of speculation on Twitter that this is a failed attempt to re-open all closed subreddits and reinstate their own moderators. I can't imagine it's that, although I do enjoy the conspiracy. More likely they were using the window of reduced traffic to make some larger changes, and those went awry.
Wild guess: visiting a private sub requires an extra call to a service/db to check if the user can view it. Normally there are only a small number of these checks, because private subs were usually small communities. Now, many large subs having switched to private is causing some poor microservice somewhere to get hammered.
I never worked at this scale, but could it also be that different subs are horizontally scaled and with so many people reverting to the subs that are still open the load is unevenly balanced?
Good question! And few people get to work at this scale, so it's not an unreasonable guess. I'll join you in speculating wildly about this, since, hey, it's kind of fun.
IMHO sharding traffic by subreddit doesn't pass the smell-test, though. Different subreddits have very different traffic patterns, so the system would likely end up with hotspots continuously popping up, and it'd probably be toilsome to constantly babysit hot shards and rebalance. (Consider some of the more academic subreddits vs. some of the more meme-driven subreddits — and then consider what happens when e.g. a particular subreddit takes off, or goes cold.)
Sharding on a dimension that has a more random/uniform distribution is usually the way to go. Off the top of my head (still speculating wildly and basically doing a 5-minute version of the system-design question just for fun), I'd be curious to shard by a hash of the post ID, or something like that. The trick is always to have a hashing algorithm that's stable when it's time to grow the number of shards (otherwise you're kicking off a whole re-balancing every time), and of course I'm too lazy to sort that out in this comment. I vaguely remember the Instagram team had a really cool sharding approach that they blogged about in this vein. (This would've been pre-acquisition, so ancient history by Silicon Valley standards.)
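For the curious, the stable-growth trick alluded to above is usually some flavor of consistent hashing. A toy sketch of the idea (all names invented; real systems use stronger hashes, replication, and explicit shard maps):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring: map keys (e.g. post IDs) to shards so
    that adding a shard only remaps roughly 1/N of the keys, instead of
    triggering a full rebalance."""

    def __init__(self, shards, vnodes=100):
        self.vnodes = vnodes  # virtual nodes per shard, to smooth hotspots
        self.ring = []        # sorted list of (hash_point, shard_name)
        for shard in shards:
            self.add_shard(shard)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_shard(self, shard):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{shard}:{i}"), shard))

    def shard_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        idx = bisect.bisect(self.ring, (self._hash(key), ""))
        return self.ring[idx % len(self.ring)][1]
```

Growing from 3 to 4 shards with this scheme remaps only about a quarter of the keys; a naive `hash(post_id) % num_shards` would remap nearly all of them.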
As for subreddit metadata (public, private, whatever), I'd really expect all of that to be in a global cache at this point. It's read-often/write-rarely data, and close-to-realtime cache-invalidation when it does change is a straightforward and solved problem.
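A minimal sketch of that read-often/write-rarely pattern, assuming a simple cache-aside design with explicit invalidation (the class name and the dict-backed "db" are made up for illustration; a real deployment would fan the invalidation out to every cache node via pub/sub):

```python
class SubredditMetaCache:
    """Cache-aside for read-often/write-rarely subreddit metadata,
    with explicit invalidation when a sub flips public/private."""

    def __init__(self, db):
        self.db = db          # stand-in for the authoritative store
        self.cache = {}       # sub name -> metadata dict
        self.db_reads = 0     # instrumentation for the example

    def get(self, sub):
        if sub not in self.cache:
            self.db_reads += 1
            self.cache[sub] = dict(self.db[sub])  # copy on read
        return self.cache[sub]

    def invalidate(self, sub):
        # Called by whatever handles the (rare) metadata write.
        self.cache.pop(sub, None)
```

Repeated reads never touch the store; only the occasional flip of a sub's status forces a re-read.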
For really, really large sets you'll still eventually want to reduce read compute costs by limiting specific tenants to specific shards, in order to reduce the request fan-out for every single read. If, say, I run a super quiet forum, does it make sense for a read to query 2 shards or 6,000? Clearly there's a performance loss when every read request has unbounded fan-out.
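This is roughly the idea behind pinning tenants to shard subsets (sometimes called shuffle sharding). A hypothetical sketch, assuming 6,000 shards and small tenants pinned to 6 of them (function name and parameters are invented):

```python
import hashlib

def shards_for_tenant(tenant, total_shards, shards_per_tenant):
    """Pin a tenant (e.g. a quiet forum) to a small, fixed subset of
    shards, so its reads fan out to shards_per_tenant shards rather
    than all total_shards. Deterministic: the same tenant always maps
    to the same subset. (Assumes shards_per_tenant divides total_shards.)"""
    seed = int(hashlib.md5(tenant.encode()).hexdigest(), 16)
    stride = total_shards // shards_per_tenant
    return sorted((seed + i * stride) % total_shards
                  for i in range(shards_per_tenant))
```

A read for that quiet forum now queries 6 shards instead of 6,000; a bigger tenant would simply get a proportionally larger subset.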
A good, but wrong, assumption is that Reddit's engineers know what they're doing.
The founding Reddit team was non-technical (even spez: I've been to UVA, it's not an engineering school, and spez had never done any real engineering before coming to Reddit). They ran a skeleton crew of some 10-ish people for a long time (none from any exceptional backgrounds; one of them was from Sun & Oracle, after their prime).
Same group that started with a node front-end and a Python back-end monolith, with a document-oriented database structure (i.e., they had two un-normalized tables in Postgres to hold everything). Later they switched to Cassandra and kept that monstrosity -- partly because back then no one knew anything about databases except sysadmins.
Back then they were running a cache for listing retrieval. Every "reddit" (a subreddit plus its various pages: front-page, hot, top, controversial, etc.) listing is kept in memcache. Inside, you have your "top-level" listing information (title, link, comments, id, etc.). The asinine thing is that cache invalidation has always been a problem. They originally handled it using RabbitMQ queues: votes come in, they're processed, and then the cache is updated. Those queues always got backed up, because no one thought about batching updates on a timer or using lock-free structures (and no one knew how to do vertical scaling; when they tried, it made things even harder to reason about). You know what genius plan they had next to solve this? Make more queues. Fix throughput issues by making more queues, instead of fixing the root cause of the back-pressure. Later they did "shard"/partition things more cleanly (and tried lock-free approaches) -- but they never did any real research into fixing the underlying problem: how to handle millions of simple "events" a day... which is laughable thinking back on it now.
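The batching fix described above, coalescing many vote events into one cache update, might look something like this toy sketch (names invented; a real version would also flush on a timer thread, not just when a size threshold is crossed):

```python
from collections import Counter

class VoteBatcher:
    """Coalesce individual vote events into one aggregated downstream
    update, instead of one queue message (and one cache write) per vote."""

    def __init__(self, apply_batch, max_pending=1000):
        self.apply_batch = apply_batch  # callback: {post_id: net_delta} -> None
        self.pending = Counter()
        self.max_pending = max_pending

    def on_vote(self, post_id, delta):
        self.pending[post_id] += delta
        if sum(abs(v) for v in self.pending.values()) >= self.max_pending:
            self.flush()

    def flush(self):
        # One downstream update per post, however many votes arrived.
        if self.pending:
            self.apply_batch(dict(self.pending))
            self.pending.clear()
```

Eight votes for two posts become a single aggregated update downstream, which is what keeps the queue from backing up under load.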
That's just for listings. The comment trees are another big bad piece. Again, stored un-normalized -- but this time actually batched (not properly, but it is a step up). One "great" thing about un-normalized databases and trees is that there are no constraints on vertices. So a common issue was that you could back up your queue (again) for computing the comment trees (because messages would never get processed properly), and you could slow the entire site to a crawl because your message broker was wasting its time on erroneous messages.
Later, they had the bright idea to move from a data center to AWS -- break everything up into microservices. Autoscaling there has always been bungled.
There was no strong tech talent, and no strong engineering culture -- ever.
-------
My 2 cents: it's the listing caches. The architecture around them was never designed to handle checking so many "isHidden" subreddits (even though those subs are still getting updates to their internal listings) -- and it's coming undone.
I read this as a pretty scathing dressing down of incompetent engineering at reddit. But after having breakfast, what I'm realizing again is that perfect code and engineering are not required to make something hugely successful.
I've been part of 2 different engineering teams that wrote crap I would cuss out but were super successful, and most recently I joined a team that was super anal about every little detail. I think business success only gets hindered at the extremes now. If you're somewhere in the middle, it's fine. I'd rather have a buggy product that people use than a perfect codebase that only exists on GitHub.
Agreed. In fact, I believe success stories actually skew the other way. Those that actually build something that gets off the ground and is successful will in many cases not have the time to write perfect code.
Yep, this is definitely just speculation, but I think this is it. Code/queries that worked fine at small load for private subs just doesn't work at scale when tons of subs are private.
They stopped updating/supporting the code about 6 months before the big redesign (last push Oct '17, redesign launched Apr '18). Call me a bit of a conspiracy-theorist, but they just happened to raise $200 million on their Series C less than 3 months prior to abandoning their commitment to open-source.
My personal guess is it's down on purpose so they can say only 5% of subs that said they would go dark actually did, "we just happened to have a service outage that day," so they can push their own narrative to investors. Spez doesn't care anymore; he's focused on that IPO payday and to hell with everyone else. He's a liability to the company now, but the board isn't acting.
Yup, this seems super plausible. Even things like the frontpage feed and user comment history probably work on the assumption that most of the data they're pulling is probably visible (which leads to the "just filter it in the backend" approach), but also every external link or bookmark into a now-private sub will trigger the same kind of check.
This is likely shifting load in very unpredictable ways... I'm sure a sibling comment is right that it's probably less load overall, but it'll be going down codepaths that aren't normally exercised by >95% of requests and that weren't written on the assumption that virtually all content is hidden.
There's probably some microservice instances that are currently melting themselves into a puddle until they can deploy additional instances, additional DB shards, or roll out patches to fix dumb shit that wasn't dumb until the usage assumptions were suddenly inverted. Meanwhile there's tons of other instances that are probably sitting idle.
Having worked at this scale, this is a fine guess! This scenario would have been a distant edge case for them. They likely didn't optimize for it. BOOM.
Personally I'd like to believe that the servers themselves are standing in solidarity with the blackout over the API changes. I, for one, welcome our new robot overlords.
I would imagine that a normal visit generates more backend traffic, given that it needs to fetch posts, thumbnails, etc. whereas a visit to a private sub wouldn't need to check more than authorization.
I could easily be wrong though, I haven't done web development for years.
They use a microservice architecture. Some services could scale well in servicing all those assets. What handles checking access to private subs may not.
You can’t treat scaling as a binary feature that the system as a whole either does or doesn’t have.
Sure, but that type of traffic is expected and they can handle it with things like caching and autoscaling. I'm suggesting that a part of the system that usually doesn't get a lot of requests wasn't designed to handle a huge influx of requests.
All that other stuff's easy to cache. Authorization's cacheable, too, kinda, with some trade-offs, but they may not have bothered if it'd never been a problem before. Or this particular check may have been bypassing that caching, and it'd never caused a problem before because there weren't that many of those checks happening.
You start getting a lot more DB queries than usual, bypassing cache, the DB's remote, it's clustered or whatever, now you've got a ton more internal network traffic and open sockets, latency goes up, resources start to get exhausted...
Not so wild IMO. I frequently have trouble loading pages on Reddit so I suspect any additional pressure could push it over the edge. It might be as simple as more users checking in to see if their favorite Reddit has gone private or shut down.
If a DB check is needed to see if a sub is private or not it has to happen for every request. You can’t just limit the check to private subs because it’s not known if they are private or not at the time.
Reddit goes wrong often so I expect this outage could have any number of causes.
> You can’t just limit the check to private subs because it’s not known if they are private or not at the time.
That's not necessarily true. Perhaps the status of subreddits is cached (because there's no reason to hit the DB 100 times/second to check if r/funny is private or public). But for a given request to a private sub, it would need to check each user.
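As a sketch of that two-tier shape, assuming a cached public/private flag and a per-user lookup only on the private path (all names are hypothetical; the dicts stand in for a cache and a membership store):

```python
class PrivacyGate:
    """Two-tier access check: the public/private flag comes from a cache,
    so only requests to private subs pay for a per-user membership lookup."""

    def __init__(self, status_cache, membership_db):
        self.status_cache = status_cache    # sub -> "public" | "private"
        self.membership_db = membership_db  # sub -> set of allowed users
        self.db_lookups = 0                 # instrumentation for the example

    def can_view(self, user, sub):
        if self.status_cache.get(sub, "public") == "public":
            return True                     # cheap path: no per-user lookup
        self.db_lookups += 1                # expensive path, normally rare
        return user in self.membership_db.get(sub, set())
```

When thousands of large subs flip to private, almost every request suddenly takes the expensive path, which is exactly the load inversion being speculated about upthread.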
You don’t reckon it’s just disgruntled users running “set all my past comments to Boo Reddit Boo” scripts? I don’t imagine it’d take a huge proportion of users doing that simultaneously to slag the servers.
It's worth noting that a very old user (/u/andrewsmith1986 IIRC) responded that at that time it was possible to add arbitrary users as mods without needing interaction/feedback/etc. from the user in question. If that was the case, then any user being a mod on any particular sub at that time doesn't really mean much.
Obviously I can't reference the comment in question right now, but I'll try to remember to circle back and add a reference when(/if?) Reddit comes back up.
I remember the change to requiring people to accept an invite. Adding people as moderators of distasteful subreddits without their consent was a common form of abuse just prior.
And it would in turn be worth noting that the creators of reddit had a philosophical and political commitment to free speech that drove their light-touch approach to moderation. It's not like the existence of that subreddit is evidence of an endorsement on their part.
> a philosophical and political commitment to free speech
Surely this is a meme by now? Any CEO that has ever said this about a website they control is just pandering to the crowd. Musk's Twitter has complied with more government takedown requests than the previous regime had.
It is now, but it was very different at the time. The old guard of the internet supporting absolute freedom of speech didn’t used to be associated with Nazis. Some good examples that still exists today are the Electronic Frontier Foundation and to some extent the American Civil Liberties Union.
Would you mind explaining what #Stolzmonat means or what is behind it? I see that it is German for "pride month". Is it meant to be ironic or something like that?
It's a counter-protest by white male supremacists ("neo-nazis"/Proud Boys) who believe that diversity and equality diminish them. It's a parody of Pride Month.
It is pointing out the degeneracy that plagues society in a lighthearted manner, by using the German flag, which could easily be interpreted as a new iteration of the gay pride flag to the untrained eye
I'd be all up for an initiative that attempts to burst the liberal bubble which seems to reject the possibility that people have other things to do than follow the latest inclusivity trends online, but come on. You know well what this is about. Had to scroll like 3 posts down to find out that this panders to reactionary clowns[0].
If this were a protest I would expect polemic discourse, yet the "official account" is just throwing shit around and screaming with excitement when someone "gets offended"[1].
This is partly why I now prefer the term "free exchange of ideas" over "freedom of speech". I believe it more closely captures the essence of what makes free speech worth protecting, while also conveniently excluding stuff like this (among other things, like spam or antisocial behavior).
It also makes it clearer what's going on when people are waving the "free speech" banner over things like harassment and abuse. Allowing abuse not only doesn't increase the free exchange of ideas. It also often directly decreases it because it drives off people who get targeted by racism, sexism, et cetera, ad nauseam.
When the use cases for a given signal have proven to be reliably overwhelmed by the abuse cases for that same signal, the signal is no longer making a net positive contribution to freedom, causing net harm rather than help. At this point I don't think such a signal should be protected as free speech.
Except it won't work. Freedom of speech limited to well-composed political speech is not what it sounds like. Modern-day China (PRC) guarantees that kind of freedom of speech in its constitution; they just label any speech that doesn't conform to current Party statements as terrorist conspiracy. Even Nazi Germany had such constitutional clauses.
It just starts with porn and ends in gas chambers, and in the middle is freedom limited to "meaningful" content and activities. If you looked up and compared the state of freedom of speech, CSAM/CP regulation, internet censorship (especially authority intervention), crime rate, and position on the democracy-totalitarian axis for various random nations, they should all line up well against each other.
So you haven't seen how a real communist party labels opposing voices as not-speech? They literally do that. You'd think the problem is their arbitrary mislabeling, not the selective application of freedom; sure, it isn't a problem at all, so long as you're the one doing the choosing.
Thank you; I was wanting another term so as not to conflate "freedom of speech" with something considerably different. I want to make things as unambiguous/clear as possible.
It does infuriate me when some people may use "freedom of speech" as their excuse for "You must let me have a place to speak", when that isn't even guaranteed in the first place.
Freedom of the press does often get lumped in with speech in these sort of discussions, for good reason. That's another reason I prefer "ideas"; it's agnostic to the medium through which the ideas are conveyed (though again, in this specific case the images in question weren't intended to convey an "idea").
That doesn't really change much. Even if you relegate it to "free discussion of ideas", someone can still bombard it with bullshit ones "backed" by some circular reasoning, with backers either unable to comprehend or wilfully ignoring any logical counter-arguments. Get enough of those people and you get a toxic wasteland.
Freedom of speech is a legal concept that clearly doesn't cover CSAM. Free speech is a principle, but it also doesn't cover CSAM. "Fire in a crowded theater" doesn't actually work as a legal defense, but obscenity does.
That's another reason I prefer the term "free exchange of ideas"; by using different wording it helps avoid the confusion created by people conflating the general principle of such freedoms with any specific legal provisions that exist in the U.S. constitution. (Though I agree in this case my wording is in agreement with how the courts define "freedom of speech" in practice.)
As discussed ITA, that case was later overturned; and it’s worth remembering that its origin was as an analogy to justify criminalizing pamphleteering against the draft [0] (one could not imagine a more salient example of “political speech”)
Copyright/trademark violations, shouting "fire" in a crowded theater, direct personal threats of bodily harm, basically a lot of stuff which is already widely considered to not be protected speech legally, but which is "speech" (or perhaps "press") in the plain language sense of the term (and importantly, is not exactly an "idea" in the plain language sense of that term).
I think it helps strengthen the argument rhetorically, since people can't as easily use the existence of such "exceptions" as an argument for adding more, or bypass the principle entirely with slogans like "freedom of speech isn't freedom of reach". (Suppressing the "reach" of certain ideas obviously does inhibit their free exchange.)
This approach hasn't been "battle tested" yet though so we'll see how it goes in practice.
It doesn't seem to do anything other than play word games with what counts as an 'idea' instead of what counts as 'speech'. An attempt to reset the existing case law, philosophy, etc., without facing it head on. Perhaps we need to reconsider if images count as speech. What about algorithms? Is an algorithm an idea? Even if it is an algorithm that generates an image?
The issue I see developing is that any attempt to carve out what is not desired by some group is going to create standards that will let other groups carve out what wasn't intended in the first place. Look at how encryption is under attack and one common way it is attacked is by claims of how it promotes the spread of CSAM. So government asks for reasonable backdoors that will only ever be used to stop such material, yet tech circles realize that any such backdoor will allow for arbitrary power to block any material.
I think flag burning and provocative art are unquestionably intended to convey "ideas". In fact, they fit into that category far more cleanly than they do into "speech" in my opinion.
Nudity... depends on the purpose but probably not. I agree that's unfortunate if your stance is that there should be no restrictions on porn, but I'm not sure the arguments for why freedom of speech is a good idea really apply to porn in the first place. I think it'd be better to make that argument on its own merits rather than try to conflate the two.
> I think flag burning and provocative art are unquestionably intended to convey "ideas".
That seems very open to interpretation to me; it seems to me these kinds of things express a "feeling" much more than an "idea", and they might also be considered the "antisocial" behavior you mentioned in your previous comment.
I meant "nudity" only in the sense of "nudity", nothing more. e.g. "I want to make a nudist TV cooking programme", or stuff like that. No concrete "ideas" are being exchanged with that as such.
I'm not a free speech absolutist by any means, but I have generally favoured the exact opposite: "free expression" instead of "free speech", as that covers so much more. I think we can have an expansive "free expression" which includes many things, while also having reasonable limits based on e.g. "does this reasonably harm people in a significant way?"
Yeah, I suppose there is some ambiguity there. Though I'd argue a standard like "does this harm people?" would be even more open to interpretation and prone to abuse. Just about any idea can be framed as "harmful" to some person or group given a sufficient level of motivated reasoning (in fact, almost all modern cases of mass censorship seem to justify themselves that way). I much prefer a clear principle with few or no exceptions.
I suppose there may be room for both. "The freedom to exchange ideas" is after all a subset of "freedom of expression" (though not necessarily of "freedom of expression, with a bunch of exceptions").
In your "ideas" phrasing the exceptions seem implicit rather than explicit by virtue of not covering everything. I think it's better to be explicit about "you can do whatever you want, except [..]".
That some people will try to abuse this seems inescapable no matter what; we'll still be arguing the details 200 or 2,000 years from now, because there is no way to capture any of this in clear, neat rules. The best we can do is come up with some decent set of ground rules which convey the intent and purpose as best as possible. This is why we have judges to, well, judge, and "reasonably harm people in a significant way" seems like a much clearer guideline for this than the vaguer "ideas".
Flag burning wasn't protected as free speech in the US until 1989. I have a list of stuff that was banned or censored in the past that would be considered unobjectionable by almost everyone today, and I suspect things would have been better if we had "freedom of expression" instead of "freedom of speech" (or "free exchange of ideas", for that matter).
Fair point. I agree with the sentiment of "you can do whatever you want, except [..]", in the sense that I think we should err on the side of personal freedom. To be clear, I don't think focusing on "the free exchange of ideas" means other freedoms aren't important, and I'm not proposing a constitutional amendment or anything. It's just that from a rhetorical perspective I prefer to use terminology that encourages the strongest possible interpretation of the argument I'm making, and I think, for me at least, "the free exchange of ideas" does that best for all the reasons I named in my original comment and its replies.
> This is partly why I now prefer the term "free exchange of ideas" over "freedom of speech".
What do you need in order to have a free exchange of ideas? Oh that's right, free speech. Free exchange of ideas is one of the benefits of free speech. But it isn't free speech. Also what about speech to entertain? What about speech to criticize? What about speech to just mindlessly express yourself?
> like spam or antisocial behavior
What counts as antisocial behavior? Rap music? Heavy metal? Is protesting government antisocial behavior? What about criticizing politicians? And more importantly, who decides?
It's amazing how little people know about free speech. It should be mandatory to have a class on civics and of course free speech. People have such a childish and superficial understanding of free speech. And of course these people always tend to be for censorship.
To be clear, I'm not saying freedom of speech is a bad thing. On the contrary, I agree protecting speech is a great way to ensure the free exchange of ideas. (As you noted, if you can't speak your ideas then you can't freely exchange them.)
But there's a lot of "speech" that should be and is already prohibited, both legally (CP being an obvious example) and through social standards of conduct (screaming expletives at random passers-by is liable to get you kicked out of just about any venue).
In my view, focusing on the goal of freedom of speech (which is to say, the free exchange of ideas) rather than on speech itself communicates much more clearly on why the principle is important and where the line is. It makes it obvious why CP is not legally protected but Nazism is. Both are despicable, but one needs to be protected in order to preserve the free exchange of ideas while the other doesn't.
And again, I'm not saying other forms of expression that don't convey ideas shouldn't be allowed; just that the reasons why they should or shouldn't be allowed are separate from the reasons why the free exchange of ideas is important.
Yes, this wording does narrow my argument somewhat. But I think it largely narrows it in ways that make it easier to defend rhetorically, especially when trying to apply the principle to non-government entities such as the large social media conglomerates that own our modern town square.
At the time reddit was not some unknown back corner of the internet, and it had already begun working with law enforcement to enforce anti-CSAM laws due to material being traded in private messages and private subreddits. That it took media exposure to take down the specific subreddit indicates it was likely on the legal side of the line, though close enough to the line to make others uncomfortable. If the material were actually illegal, wouldn't that have made reddit the largest clear-net site containing CSAM? In such a case I find it hard to conceive that media exposure, rather than legal action, is what led to it shutting down; and with no admins being arrested, the most reasonable remaining assumption is that it was on the technically-legal side.
This would likely be like the use of underage subjects in nudist art, whether painted, drawn, or photographed. Such art is generally not considered pornographic and is legally protected, with some even displayed in museums and the like; yet websites will still ban the material, both to avoid any relationship with it and to keep users from trying to push the boundary. As for the extent to which rules are enforced: many websites struggle to enforce rules in general, because the amount of user-generated content is much larger than the amount of moderation available, so something will slip through moderation from time to time.
Most states in the US could use a photo of a teenager in a bikini as enough 'evidence' to bring charges of Possession of Pornography involving a Juvenile, depending on what the actual photo depicts. Whether the contents of the image could be found to substantiate a conviction for the charge would be a trial/appeals issue. Nudity is not required for an image to be considered CSAM in most US states (or at the federal level), there are also Federal precedents that make cartoon depictions of under-aged characters count as CSAM.
I thought part of the reason the US is so blessed with a thriving startup community is the light-touch the law has towards its startups. For example; didn't the billion dollar acquisition YouTube gain a lot of its initial growth through copyright infringement?
They're the same thing and used interchangeably. The sexual abuse comes from sharing lascivious pictures of someone taken without their consent, with the additional context that children are not understood to be capable of consenting to such sharing. It is not physical abuse.
Use of the images can be abusive even if the creation of them wasn't. Revenge porn is an obvious example. Even legally and consensually created images can be used to abuse someone.
I suspect there's a transformation through intent.
This does lead to poor policing (like the famous Google banning man for taking photo of his child's genitals to send to doctor story).
It appears that in present-day English people use the word 'abuse' to mean more than physical abuse. If you're a prescriptivist that may upset you, but isn't that the nature of being a prescriptivist? Does it perhaps beg the question "why be a prescriptivist?"? Aren't you, perhaps, literally standing against the world?
I want to reserve the phrase "child abuse" for real child abuse, so that it gets taken as seriously as possible and not diluted. I think that is a sufficiently good motivation to stand on this hill.
What would you call whatever that Japanese underage comic thing is? It features no real persons and no one was abused, yet it is still illegal. I don't object to the term as applied, but I don't particularly feel great about the word 'abuse' being used for cartoon characters, since it degrades the experience of human survivors.
Not in the U.S., federally at least, where the justice department's guidance specifies that CP is media which "appear to depict an identifiable, actual minor."
Lawyers and courts say child pornography because laws are not restricted to sexual abuse material. Drawn child pornography is illegal in many places for example.
It's the only term I'd ever seen for it, in any context, until the last couple years. And I'm not a 4channer. Pretty sure it's just another euphemism-treadmill thing. (not that I mind in this case, I think the new term's fine, and certainly not worth fighting over)
What's "the industry"? The relatively new industry in moderating Internet communication? I've certainly only seen the term "CSAM" coming from cops and prosecutors very recently, and they're rarely shy about using their jargon in public communication.
[EDIT] I believe you that it's the term in vogue now, to be clear, but I'm skeptical it was as dominant, even if present, until relatively recently. If it was, then that entire world only recently started using it consistently in public communication, certainly. But, again, it's also fine, I don't mind the new term.
>And it would in turn be worth noting that the creators of reddit had a philosophical and political commitment to free speech that drove their light-touch approach to moderation
That's nonsense. The Sears debacle showed that reddit's leadership team was fine with deleting posts if not deleting them was going to cost money.
That 'political commitment to free speech' sure disappeared quickly when r/jailbait and u/violentacrez hit the mainstream media.
spez was fine with hosting a community of child predators because it was one of the most popular subs. It was the top recommended result when you searched for reddit on google.
You can support free speech without actively providing a community for predators
reddit used to be owned by Conde Nast. Sears got upset about a post and complained to Conde Nast, who then told spez to take it down. If you have a political commitment to 'free speech' that folds if you might have to face some consequences for defending it, you don't have that strong of a commitment in my opinion. Certainly not strong enough to justify hosting a community of child predators
Sears had an XSS injection issue, where you could change their breadcrumbs by manipulating the URLs. Some redditors changed and shared a link to a grill as a "Body part roaster" and had fun. Sears found out and got mad
Hi! I've done a bunch of trust and safety work and I see this trope a fair bit. Please help me understand what the difference is between, say, platforming racist harassment because of a "political commitment to free speech" and platforming racist harassment because you just kinda like racist harassment?
I get that it might be different in the heads of the people who have worked very hard to create those platforms. I'm just not seeing any difference in its effects on the world or on the targets of the racial harassment.
> Please help me understand what the difference is between, say, platforming racist harassment because of a "political commitment to free speech" and platforming racist harassment because you just kinda like racist harassment?
The difference is intent. Intent matters. Intent is the difference between murder and manslaughter, or between a conspiracy and mere speech.
Intent matters sometimes. To some people. But here, in either case the intent is to enable terrible people to, e.g., shout the n-word at people. So I don't see much of a difference in those terms.
Well, it obviously mattered enough for the Founding Fathers of the US to enshrine freedom of speech in the Bill of Rights, and in the 200-ish years since, it has mattered enough that US courts haven't overturned it and politicians of various parties haven't changed it.
Now, where I'm from (Germany), not only is "hate speech" against the law, it's also unlawful to insult another person. It's complicated, but for the latter it's mostly sufficient that the other person personally feels insulted by what you said to them.
Now while I don't go around insulting people in person or on the Internet, I personally think - for instance - that it should be allowed to call a person an asshole, if they behave like an asshole. Yet, if I did that here, or even online to another German person, they could go to the police and press charges. If the public prosecutor is sufficiently bored, this very low barrier could also be used to dox me in an otherwise reasonably anonymous setting, since the resulting lawsuit could result in my data getting subpoenaed from, say, Twitter and my ISP. This has happened to other people here in the past.
Now, while I'm in favor of neither hate speech nor randomly and viciously insulting people online, I consider the German law as outlined above unreasonable in an online setting. I think freedom of speech is fundamentally more important than another person's right not to feel hurt, or than the ability of some powers that be to silence or punish me because I said something inconvenient that they merely claim meets some of the criteria for restricted speech here.
Mind you, this is the case even though freedom of speech is enshrined in the German constitution as well. But I think it is a pretty good example of why freedom of speech should not be curtailed just in the name of another person's feelings about said speech. Even if a person, like you, doesn't see a direct and tangible benefit in allowing that kind of speech, I would argue that a large fraction of people are against disallowing it because of the indirect consequences and where that line of lawmaking leads.
Another thing to consider is this: Say you're modestly happy with the current government wherever you live, and you'd be happy for them to have an "easy" way to curtail freedom of speech. Would you also be happy for the opposing political side to do the same thing? What if some extremists came to power?
This kind of reasoning is why free speech absolutists are so staunchly defending freedom of speech, even if it may be inconvenient or insulting to themselves or others.
You're conflating a lot of things here. One is the free exchange of ideas with the freedom to harass people. Another is legal versus socially accepted. A third is the difference between "the cops should be able to arrest you for X" and "I am choosing to spend my days creating a platform for X". These are all importantly different.
You're also shading over exactly who gets free speech. If digital Klansmen get to freely harass Black people, many of those Black people will not participate in public spaces, silencing them. Indeed, that sort of ethnic cleansing is often the goal of racial abuse. See, e.g., Loewen's "Sundown Towns". So whatever "free speech absolutists" think they're up to, in practice the result is often a diminishing of the free exchange of ideas that the Founding Fathers were clearly pursuing.
I don't think this is a very accurate description of how things work in Germany. It's exceedingly rare in any Western jurisdiction for the aggrieved party to press charges. This power is usually left to government prosecutors, who are probably more impartial than the complainant.
How about this example: Twitter silences and deplatforms some guy for using the oh-so-terrible "n-word" ("sticks and stones..." is not a thing anymore). Because this person is deplatformed, I can't find his "hate speech" when doing a quick background check, so I hire him at my company as the person responsible for recruiting. Now he makes sure no "n-words" get employed.
Big win? Whoever votes to silence the guy gets to judge.
If the only possible way you can catch a bigoted manager is by hoping that he spent a lot of time hurling racial abuse at Black people under his own name, then I think you really need to work on your management processes.
Sure, but I find out about it a year later, since I am a small company, and by then three Black people were not hired because of it. Is this a big win? Also, I believe the word "abuse" is being devalued; such simple insults do not really qualify as abuse in general. Let's not forget that a great many Black people use the same words when talking to each other, and Black celebrities get famous singing the word and self-describing as such.
That is a very white understanding of what abuse means.
Also, it seems wild to me that you think a small company means you somehow have less ability to supervise your employees. What's your plan if you hire a racist who wasn't dumb enough to post openly? Just let him go to it?
There is probably a line. But you don't know where it is and neither do I. You and I might agree that X is to one side of that line, but if we ban that behavior, then we have initiated a process that we might call line-discovery -- the search for the line that X was to one side of -- and line-discovery is highly prone to outcomes that result in bans on content from the other side of that line. So we don't want to engage in line-discovery, even though there are obvious examples of things to one or the other side of the line.
You may think you can ban the obvious things without ultimately engaging in line-discovery, but, the argument goes, you are mistaken. You will ultimately find yourself doing line-discovery.
You start out with obvious-sounding prohibitions on racism and hate speech, but eventually you're arguing about, say, whether it's racist to report on polling showing that violent protests are unpopular. [0]
And that's because banning any speech always leads to line-discovery.
So it comes down to a question of which scenario is worse:
A. You ban obviously bad stuff while accepting some risk of banning things that aren't actually over the line.
B. You privilege all content to avoid that outcome.
Some people are outraged by this framing and think it's obvious that you would want to risk banning some behavior to the right side of the line if it means eliminating the most obnoxious speech. But, basically, that is not obvious to everyone, no matter how many times they are reminded that there is some really bad stuff out there. [1]
[1] Interestingly, this is really not so different from the argument about evidentiary standards for punishing criminal behavior, except in that case the politics are flipped. There, conservatives would rather risk punishing some innocent people if it means the absolute worst actors are guaranteed harsh punishment, while liberals think it's worth risking some amount of literal rape and murder in order to avoid punishing the innocent. So I think both sides are entirely capable of seeing this from the other side; they just don't want to.
Yes. I am addressing the second-order effects of each motivation.
Let's grant that the harms of the kind of speech you're worried about are exactly the same in either case. [0] Platforming "racist harassment" because of a political commitment to free speech implies that other forms of controversial speech will get the same treatment, preventing the kind of line-discovery I described in my previous comment.
"Platforming racist harassment because you just kinda like racist harassment" leads to who knows what. All we know about that person is that they like racist harassment. Maybe other stuff gets banned. Maybe not. Either way, it's unlikely to be in service of avoiding harmful second-order effects.
So that's an enormous difference between the two motivations. In the first case the position is in defense of an ethic of open dialogue and an attempt to prevent second-order effects that are harmful to that dialogue.
In the second case -- who knows.
It seems to me that the first motivation is much more likely to prevent the kinds of second-order effects I'm worried about and that distinguishes it from the second one.
Many people have said this better than I can, but plenty of people have thought they could do better than the current status quo regarding user-generated content on the internet.
They end up conforming or losing money. There’s no one reason for this. You try to run a website visible to the world, you’re gonna be subject to a world full of reasons.
> the creators of reddit had a philosophical and political commitment to free speech that drove their light-touch approach to moderation
The notion that reddit ever was a bastion of free speech is absurd. They didn't "light touch" on upskirt, revenge, and kiddy porn because of "philosophical and political commitment to free speech", they did it because they didn't want to accept any more responsibility for content than they absolutely had to, and that's because it is not financially viable to moderate large communities using paid labor. That is why you see so many social media companies pushing against rules for online content; not because they're champions of free speech.
If it were about "a commitment to free speech", they wouldn't allow completely unaccountable and anonymous members to delete content, silently mute users, and ban users, including employing automatic tools that would ban people preemptively based on subreddits they posted in, or automated tools for powermods to ban someone across all the subreddits they moderated.
If you pissed off a powermod, your account could end up getting banned from nearly all the major, common subreddits - not just from theirs, but they'd communicate in private channels to other powermods that they wanted someone to be banned elsewhere.
Oh, and they were happy to moderate, severely, anyone who revealed any personal details about a reddit user. Which conveniently helps protect people doing stuff like upskirting and posting revenge porn.
"philosophical and political commitment to free speech", my ass.
>It's not like the existence of that subreddit is evidence of an endorsement on their part.
It is, though. 230 be damned. These were not small or hidden communities; they were frequently on the front page. Generally, and especially in this case, silence is violence. The optics of that sub and its frequenters are terrible. Do you want to try to justify their inaction further, or concede this point? It should never have been allowed in the first place. Spez/Reddit et al. should continue to be shamed for their long-standing tacit approval of these communities. Earning respectability requires public contrition for bad decisions that affect the public and non-participants. As is typical, the communities were only shuttered when the victims' cries grew loud enough to affect the brand image. Cf. fph, wpd, the fappening, t_d, the Boston bomber fiasco, all the racist subs, and countless other controversies that spez/Reddit fumbled. Reddit deserves to close. The management team is evidently not competent or mature enough for the task, and its repeatedly demonstrated inability to learn from its mistakes and become the proactive steward that's needed will result in preventable harm to people who don't even use the platform.
I mean, define "endorsement". Permitting something to exist when you have the power to do otherwise is a mild form of endorsement. A commitment to free speech is, to an extent, an endorsement of all the speech that results.
> A commitment to free speech is, to an extent, an endorsement of all the speech that results.
Absolutely not. I'd argue that anyone should be free to talk with others about their opinion, but that doesn't mean I agree with that opinion. And letting them speak without shutting them down doesn't mean I agree either; it just means I agree that they should be able to speak freely.
What kind of dystopian viewpoint is that? You go around stopping everyone from saying stuff you disagree with?
Platforms like reddit are in no way similar to personal property like a house that you live in.
A better analogy would be, imagine you rent your house to someone else. You make a rule that tenants may display political messages in their windows, but only for one political party.
That would be illegal. You can prohibit all signs if you want, but specifically choosing what signs someone gets to display violates their first amendment rights and could trigger a fair housing lawsuit. It doesn't matter that you aren't the government and that you own the property.
The renting analogy fits even less, though. Renters have protections against eviction that don't exist for websites. If I break the rules of my lease, it would take a month or two minimum to get kicked out. If I break Reddit's rules, I can get banned immediately.
> I'd argue that anyone should be free to talk with others about their opinion,
I _think_ I agree with that. Don't hold me to it, but it feels right.
> but that doesn't mean I agree with that opinion.
Yup, sure, agreed.
> And letting then speak without shutting them down doesn't mean I agree either, just means I agree that they should be able to speak freely.
There is a world of difference between "not actively preventing someone from speaking" and "setting up a system whereby someone's speech is enabled and broadcast". Casting this to the real-world - if someone's yelling their opinions on a street corner, and I simply walk by without stopping them, then no, that's not an endorsement. But if I notice them yelling, and walk up and hand them a microphone - or (more closely mirroring social media setups) I install a public-access microphone, and stand there observing who uses it without trying to control it - then yes, through inaction I have endorsed what they choose to do with it.
> What kind of dystopian viewpoint is that? You go around stopping everyone from saying stuff you disagree with?
In areas I control and am responsible for, yes. If a guest in my home started spewing (what I consider to be) unacceptable speech, then (depending on my history with and pre-existing respect for them), I'd either take them aside and ask them to reconsider their choices, or jump straight to asking them to leave.
Enabling and endorsement are two different things, no need to conflate the two. If I'm a dentist and tell my patient that they could use any toothpaste they want but that I don't recommend the specific brand that they use, how is that an endorsement in any way? I'm allowing them the choice without endorsing in this case.
In your analogy, the dentist is selling the bad toothpaste and saying "Go pick any from the shelf over there but not that one". Why is he selling it then? The dentist can't say "well it's a free market" as if that somehow absolves him. He sells the bad toothpaste, that's a tacit endorsement.
You can be a proponent of free speech and not allow people to stand on your porch yelling heinous things, but that's not what Reddit was doing. They knowingly profited from that speech.
The context is discussion of social media platforms where the platform already owns all the content and has the tools to decide what gets published and what doesn't.
It's not just that he allowed them to exist, he created a special one-of-a-kind "Pimp Daddy" trophy to award to the moderator of r/jailbait and r/creepshots.
There is some whitewash in the comments there: "[violentacrez] received the trophy because all the work he did to moderate the site..." as if he got the award for keeping things clean, but consider that he contributed the vast majority of those subreddits' content himself by cruising social media for salacious pictures of minors to share while he was in his 40's, and the award is named "Pimp Daddy."
IIRC Violentacrez modded like 50 different porn subreddits, and he did a good job by moderation standards, so he was appreciated by the admins as the overseer of the porny side of Reddit.
Sure, and one can absolutely criticize him for that, but I think if one wants to criticize how /r/jailbait and similar subs were handled it's better to do that directly rather than making a more nebulous insinuation that stands on weaker ground.
Anyone trying to lose weight and skip a meal today? Open a reddit client and do a subreddit search with "teen" as a prefix. Or don't, you know. Silly me for assuming that this had been solved after any of the various sketchy-porn related subreddit purges.
I'm not naive, user content is hard to moderate. But is it hard to say "any subreddits with these keywords go on a list for review"?
I added that invitation flow in response to widespread abuse of the 1-way add moderator button. As an aside, the invitation message felt too bland and clinical during dev, so I added "gadzooks!" to the beginning, which became a meme for a while.
It seems like a funny distinction to make in the first place. Was he an admin of the site when it was hosting that sort of stuff? Anyone who was an admin at the time is responsible for the policies that allowed it to exist on the site…
He was also part of the team that gave the owner of the sub awards. It's not really credible to claim that spez wasn't somehow very enthusiastic about it.
More seriously, you really start to feel "ancient" when your body goes from "new year, pretty much the same as old year" ... to ... "new year, who removed vital organs and bodily fluids while I slept last night?! The bastards!!!"
Of course, it is a classic trope nevertheless ... and I did start to feel ancient even 15+ years ago ... but once you start noticing real changes, then you REALLY get that feel (and you know more is coming, lol): https://youtu.be/MqBNSMbEzI0
>Back in the day, you used to be able to add anyone as a moderator and it auto accepted.
>People would make shitty subs and add people, take a screenshot, shut down the sub or make it private, then use that screenshot to start a witch hunt. Violentacrez could have added you as a mod of the sub and you'd be in the same situation.
>TL:DR I used to mod a sub with Barack Obama and Snoop Dogg.
Consider: spez was the voluntary "mod" of the entire platform as CEO, and he maintained that subreddit.
Excusing him for the unsolicited mod invite is just optics management. It would be like saying lowtax had no responsibility as co-signer for the existence of subforums that his paid or unpaid staff maintained.
Back when Obama hosted an AMA on Reddit, a bunch of users added his account as a moderator to a bunch of subreddits, including some pretty objectionable ones. This prompted a change that moderators would be invited instead.
Doesn't seem that unbelievable when you look into some of the other stuff he's done. For example he secretly used his admin powers to edit user comments from users he didn't like or who criticized him.
I believe the issue the commenter above was taking was that just because someone commits, for lack of a better term, comment fraud, we shouldn't jump to suggesting he's also a paedophile.
Oh, i took that comment to suggest that the downtime could (mostly jokingly, i assume) have something to do with Spez dealing with a post he didn't like.
This is something I’ve seen repeated on Reddit often. Virtually any meta thread on Reddit about Reddit will have several comments containing these allegations. I can’t imagine how much time someone might need to spend to dig through all the noise to get to the truth, but it rings of something that might have a kernel of truth, given the prevalence and uniformity of such accusations.
In either case, it’s easily as speculative as the parent comment above, maybe slightly more so, since the parent came from Twitter.
>“Yep. I messed with the “fuck u/spez” comments, replacing “spez” with r/the_donald mods for about an hour,” Huffman, who co-founded Reddit with Alexis Ohanian in 2005, wrote.
He did not admit to it until evidence was compiled and hit the front page, and let it appear for part of the week to participants and onlookers as if there was massive internal strife.
It is really really wild for other comments to try to pretend the comment above is about something other than the admin (spez) using his powers to edit reddit comments of people he didn't like or political opponents.
This isn't that surprising, you used to be able to add anyone on the entire site as a moderator and it'd autoaccept. It's doubtful he actually moderated it in any capacity. He's still moderator of some random subreddits.
What's funny about this to me is that the actual moderators of r/jailbait thought "I know how I can insult u/spez, I'll make him a moderator of my sub, so he'll look like a scumbag, like I am"
I'm not sure if this is true, but if I were a creator and admin of a site, I'd assume I'm automatically a mod of every subreddit or subforum. It doesn't necessarily mean spez was specifically moderating that sub.
Part of spez's job is to be the lightning rod for controversial decisions. The board I'm sure is pushing for the same things (increases in pricing, driving users to the official app) in order to boost metrics before the IPO. Aside from the somewhat pointless AMA where his frustrations came out a bit too much, if you assume that the effective removal of API access had to happen, what do you think he's done wrong during this?
Many take exception to his handling of Reddit’s relationship with Apollo and Christian Selig specifically.
Steve Huffman has reportedly told employees that Selig threatened Reddit. Selig posted a (perfectly legally recorded and disclosed) call recording showing the alleged “threat” was a misunderstanding over which the Reddit employee on the call apologized immediately.
Huffman serves in an official capacity at the Anti-Defamation League. People are (rightly, I think) critical of his handling partly in light of that.
Maybe he knows where all the skeletons are buried, so removing him would be too difficult for the board (or they're not ready to pay his golden parachute/buy his silence).
Back in the day you could create a subreddit and invite anyone to mod it. The invite would be automatically approved. I suspect this is what happened here.
He probably alludes to the common conservative trope that jews/blacks/environmentalists/gays/trans people are 'pedophiles' (depending on which era you look at)
Pedophilia is coming en vogue on the left. If you can't see it now, you'll see it soon. Spez will likely claim that he really was a /r/jailbait moderator for leftist credibility, I would guess 3-5 years from now.
Reddit 100% has stated that it was because of the sub blackout.
> According to Reddit, the blackout is responsible for the problems. “A significant number of subreddits shifting to private caused some expected stability issues, and we’ve been working on resolving the anticipated issue,” spokesperson Tim Rathschmidt tells The Verge.
You can't be using Reddit very often if you think these are a relic from 10+ years ago. I would say I experience a Reddit outage at least weekly. My friends and I have a running joke about how often it's down.
To be clear: You mean you have timeouts and failures using Reddit's own "new and improved" web UI and mobile client? Because using RedReader, old.reddit.com, and other third-party apps, I don't actually recall the last time Reddit didn't load for me.
The "elevated error rates" always present as an "oops, you broke reddit!" landing page, even on old.reddit. I imagine that since it is an "elevated error rate" rather than a total outage, it might be localized to a geo or some other kind of shard. I'm on the US West Coast, though, so I can't imagine I'm in a minority.
(Which is confirmed by the number of people responding to GP.)
Little late, but I think I see how we have such different experiences. Assuming other comments are right, and Reddit's pulling pretty much entirely from cache, you probably just scroll longer than I do - long enough to run out of the first ~1000 (cached) posts, and hit uncached items.
You'd get timeouts, and I'd never see them - despite being West Coast (Canada) as well. Or at least, that's my best guess so far.
Cache is probably a good guess. I don't do infinite scrolls but I do use Reddit mostly for hobby subreddits which aren't as popular and less likely to be in cache.
I imagine it probably has some to do with specific geography as well. Cloudflare will proxy back to nearest node and maybe some are better than others.
I use reddit daily, am constantly refreshing certain subreddits. Fwiw I use new reddit but I have all fancy settings disabled so it looks and works like old.reddit. I also use the iOS app daily. I’m also on the US West coast fwiw. And no reddit premium or anything like that. I literally never have outages or “You broke reddit” or stuff like that.
Edit: I wonder if it’s because all the subreddits I’m on are low or medium traffic. I’ve unsubscribed from the front page and /r/all and tend to only read niche subs.
Yeah, I'm also US West Coast and I only ever use old.reddit or BaconReader. The Reddit Status Twitter and number of people responding to you confirm this isn't an isolated incident, though.
Semi-frequently, I see outages that go unreported on the status page. They used to have error rate and backlog depth graphs on their status page too so it was obvious (in a good, transparent way) when they were having issues even if a human hadn't (yet) updated the status page, but those graphs were removed.
Sure, but twice monthly is a lot more than "not for 10+ years"!
Anecdotally I think it's more and that the threshold for "errors above normal" is probably set pretty high. It feels like their infrastructure isn't very reliable and depending on which backend Cloudflare is routing you to, YMMV.
Not true at all in regards to 10+ years ago. Once a week, if not more often, Reddit fails to load for me. I refresh the page and it appears I've been logged out, then I keep refreshing until my session is revived and things are back to normal. I'll often open a post and the header will load, but Reddit will fail to load the comments, with a click to retry loading them.
Eastern US here. I will get those errors a few times a week but I'd say 80% of the time it works on a refresh and most of the rest of the time it's back within 30s.
It's also very common, when you post a comment, for it to appear not to have posted and return an error; then you retry multiple times and end up with multiple copies posted. And even if you don't get hit by one of those outages, the duplicates tend to be preserved in the comments from everyone else who hit them.
For me in the EU it fails once a year or so and only on the few occasions I use reddit's own webpage instead of a third party app or web like libreddit.
I see those every now and again but it's usually a one-off, it'll load properly after a refresh. This time it was down or extremely slow for 10-15 minutes which is definitely uncommon.
I run into issues on occasion, usually with loading comments. I use the web interface.
Interestingly, HN is being really slow for me right now and also gave an error when I first loaded it. Maybe something more global is going on, like network or cdn issues?
But this time it's different, because most high-traffic subs have already gone dark, so traffic must be minimal: there will be very few posters relative to what the servers can handle. And yet I'm getting "You broke reddit" when I try to visit, which is quite ironic.
Probably a massive shift of traffic to still open subs. That'd probably take it down due to caches suddenly having all the wrong data. No inside info, though.
HN seems to be groaning under the load too and might go down with Reddit.
Maybe it's expensive to generate the 403 for a private subreddit.
A place I once worked had a 404 handler that was extremely expensive, but nobody noticed for a while because 404s are relatively rare. One time a vulnerability scanner took down the site because it was just hitting known vulnerability paths, which all 404'd. The code that executed during a 404 was O(n²).
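For a sense of how that happens, here's a contrived sketch (not the actual code from that job, obviously; everything here is made up for illustration) of an accidentally quadratic 404 handler: each miss gets appended to an in-memory log, then deduplicated with a nested scan, so the per-request cost grows with the square of the number of misses. A scanner hammering nonexistent paths inflates the log, and hence the cost, very quickly.

```python
# Hypothetical accidentally-quadratic 404 handler.
seen_404s = []  # grows forever; every 404 appends here

def handle_404(path):
    seen_404s.append(path)
    # Naive dedupe: for each logged miss, scan everything kept so far.
    # This inner work is O(n) per entry, O(n^2) per request overall.
    unique = []
    for p in seen_404s:
        if all(p != q for q in unique):
            unique.append(p)
    return f"404: {path} ({len(unique)} distinct misses so far)"
```

The fix, of course, is a set (or just not doing this work on the request path at all); the point is only that the expensive behavior hides until something generates 404s in volume.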
Maybe the protest was so successful at driving traffic away from Reddit it looked like something was broken to their monitoring which tried to failover unsuccessfully? If that happened it might be the first case of a site being brought down by the polar opposite of a DDoS.
Yeah, my assumption was that something in some layer of their application isn’t well optimized when asked to return posts from a subreddit that has “gone dark” in whatever fashions the mods chose to do that.
For example, maybe it causes reads from the database to take a lot longer than they normally would, locking up the database or causing the process to crash (again, that's just pure speculation).
One thing I've been wondering about is user overview pages. People use those a lot (an overview page is actually my bookmark for getting onto Reddit), and yesterday I noticed that a post I made wasn't in my overview, because that sub had gone dark early.
What happens when a user has 99% of their posting in subs that are now hidden, and the API is programmed to produce a fixed 30 comments of history on the overview page? The answer is extremely deep database pulls... you might pull a year of comment history to get 30 comments that aren't hidden. And depending on how they do that, it may actually pull the whole comment history for that timespan, since most of the time posts aren't hidden like this.
I worked on a backend team with some very overburdened legacy tables in Mongo, and this is exactly the kind of thing we'd think about. Yes, you can use an index, but then you have to maintain the index for every record and change it every time a sub goes private/public (and we were literally hitting practical limits on how many indexes we could keep; we finally instituted a 1-in-1-out rule). And how often does that happen? Even deleted comments are probably a small enough minority overall that indexes don't matter. But this is relational data: you have to know which subreddits are closed before you can filter their results, and Mongo sucks at joins. The Mongo instance can become a hotspot, so you just filter in the application instead for those "rare" instances. Even if they are doing it in Mongo, the index/collection they're joining may suddenly be 100x the size, which could blow stuff up anyway.
edit: for me, one overview page is now taking me back one month in comment history. And I comment a lot on subs that are currently closed, so it could easily be throwing away 5-10 comments for every comment it displays.
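To make the shape of the problem concrete, here's a toy model (pure speculation about Reddit; every name here is hypothetical) of a "give me the 30 most recent visible comments" query that keeps pulling batches until enough rows survive the privacy filter. When 99% of a user's history is in now-private subs, filling one page means a very deep scan:

```python
# Keep fetching batches of comment history until `want` visible ones survive.
def visible_overview(fetch_batch, is_private, want=30, batch=100):
    out, offset = [], 0
    while len(out) < want:
        rows = fetch_batch(offset, batch)  # one "DB" round-trip per batch
        if not rows:                       # ran out of history entirely
            break
        out.extend(c for c in rows if not is_private(c["subreddit"]))
        offset += batch
    return out[:want]

# Fabricated history: only 1 comment in 100 is in a still-open sub.
history = [{"id": i, "subreddit": "open" if i % 100 == 0 else "private"}
           for i in range(1, 3001)]
batches_fetched = []

def fetch_batch(offset, n):
    batches_fetched.append(offset)
    return history[offset:offset + n]

overview = visible_overview(fetch_batch, lambda sub: sub == "private")
# 30 round-trips and 3000 rows scanned, just to fill one 30-item page.
```

In normal times the first batch or two would satisfy the page; the blackout flips the ratio and turns a shallow read into a deep one.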
I'm guessing a hit on an open subreddit mostly comes straight out of the caching layer, while a hit on a private one incurs a DB hit to check whether the user belongs there.
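A minimal sketch of that asymmetry (still guessing; all data here is hypothetical): the public path is a user-agnostic cache lookup that one entry can serve to everyone, while the private path needs a per-user membership check that can't be shared, so every request does extra work.

```python
# Shared, user-agnostic cache: one entry serves every visitor.
cache = {"r/open": "<cached listing>"}
# Per-user membership data, standing in for "the DB".
memberships = {"r/secret": {"alice"}}

def get_listing(sub, user):
    if sub in cache:                          # public: cheap cache hit
        return cache[sub]
    if user in memberships.get(sub, set()):   # private: per-user "DB" lookup
        return f"<fresh listing for {sub}>"
    return "403"
```

If thousands of big subs suddenly move from the first branch to the second, the service backing that membership lookup sees load it was never sized for.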
I wouldn't be surprised if so many big subreddits being dark is causing issues around denied API calls.
As for the forced reopening: conspiracy aside, this is something that could happen. It's a private company, moderators on strike are a loss of business, and they would be 100% within their rights to remove all the "traitors" (I'm not saying this would be a smart move, simply that if they really plan to go down this self-destructive path, it's the best time to do it and prove to potential investors that they still have control).
> It's a private company, moderators on strike are a loss of business, and they would be 100% within their rights to
Legally, of course. Morally, it is completely unacceptable. This isn't "oh they're jerks"; this is "the system is broken".
A meatspace analogy:
You host a weekly gathering at a restaurant. You decide to temporarily boycott the restaurant to protest some behavior of theirs -- your actions are a loss of business, _so the restaurant decides to host your weekly gathering without you_.
We'd never accept that in the real world, but for some reason we do online -- we fall back to the legal argument that It's A Private Business (which is true) and completely ignore that Reddit doesn't own the community, that the community doesn't _belong_ to Reddit. They own the platform (the restaurant); they don't own the community.
Oh, I completely agree this would be the stupidest thing they could do. I wrote it because it's something I can picture happening at some point (I do expect the protests to multiply). You say they don't own the community, but I'm pretty sure they think they can control it.
> a failed attempt to re-open all closed subreddits and instate their own moderators
My first thought this morning was "if I was reddit, I would re-open all closed subreddits and instate my own moderators"... I'm kind of surprised they didn't. I'd have to imagine that if they wanted to, the attempt wouldn't fail - this seems like it could be done almost trivially.
Intending this to be more of a discussion prompt than speculation: as an advertiser or other consumer-analytics customer of Reddit, would this sort of protest and the potential fallout be concerning? I'm not sure whether all this is a drop in the bucket for those customers or something significant. If it were significant, would Reddit be scrambling to save face with those advertising customers? An outage like this would certainly skew today's relative-traffic numbers in their analytics.
I can't imagine it either, though it wouldn't be outside the realm of some of the stupidity that has gone on. But it would almost certainly be the death knell for the site: the subreddits that didn't "go dark" now would.
> (I used to work as a backend developer at Reddit - I left 6 years ago but I doubt the way things work has changed much)
> I think it's extremely unlikely that this is deliberate. The way that Reddit builds "mixed" subreddit listings (where you see posts from multiple subreddits, like users' front pages) is inefficient and strange, and relies heavily on multiple layers of caches. Having so many subreddits private with their posts inaccessible has never happened before, and is probably causing a bunch of issues with this process.
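Reading the ex-employee's description, one plausible shape (purely my guess, not Reddit's actual code) is that a "mixed" listing merges per-subreddit cached listings and only filters out now-private subs afterwards, so mass privatization makes most of the merged work get thrown away:

```python
import heapq

def front_page(subscribed, cached_listings, private_subs, limit):
    """Merge per-sub cached listings (each sorted by descending score) into one page."""
    streams = [cached_listings.get(sub, []) for sub in subscribed]
    # heapq.merge assumes each stream is already sorted by the key
    merged = heapq.merge(*streams, key=lambda post: -post["score"])
    page = []
    for post in merged:
        if post["sub"] in private_subs:
            continue  # filtered late, after the merge work is already done
        page.append(post)
        if len(page) == limit:
            break
    return page
```

If most of a user's subscriptions are suddenly private, the merge chews through (and discards) far more posts per page, and the cache layers feeding it see unprecedented miss/invalidation patterns.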
definitely not the case, reddit has stated that they would not be doing that, at least for the two-day protest. would be incredibly bad PR at this scale, especially after repeatedly and directly making this statement.
however, they have stated that they may do this if the protest extends beyond the 48h mark.
That is an interesting point. I assume your subreddit isn't really yours, so at any point you could grow it, generating content and users, and Reddit could take it over and install its own moderators?