Truth-seeking spaces as PR hazards
fidelity of information transmission, memetic antibodies, PR hazard containment
Trigger warning: Discussion of discussion of triggering topics
The sub-soil of society
There’s a kind of space[1] that’s both rare and unbelievably precious. I’m going to call it “truth-seeking space”, based on a conversation I had with someone from the EA community, and its defining feature is this: it’s common knowledge for people in this space that you do not have to censor your thoughts. You can explore any subject, from any angle, in open-ended discourse. If you are judged, it’s on the quality of the discourse, not on its outcome (or on where it falls with regard to the prevailing orthodoxy, common taboos, the Overton window, or whatever). What differentiates this space from broader discourse is that all participants are aware that all the other participants are aware that you should not receive a social penalty for blurting out “But, the Emperor has no clothes!!” (By contrast, you would definitely receive a social penalty if the same conversation were happening in a more normal social context.)
These spaces are valuable for a number of reasons.
One, they’re the breeding ground for the eventual overhaul of the current ideology, or orthodoxy (both of which I’m using loosely to mean “the water that no one sees right now because everyone is swimming in it”). They are what Victor Hugo called the sub-soil below society’s foundation, where revolutions are born.[2] In this sense, they are a necessary condition of progress.
Second, they train memetic antibodies. It is no sin to engage with morally reprehensible ideas. How would anyone know that an idea is reprehensible if they did not engage with it? It’s a good exercise to go through the horrible ideas of the past (Nazism, communism, insert your personal nightmare), try to steelman them, and then develop refutations of your own. Memorising the most common (or most intelligent, or whatever) refutations of yesterday’s Horrible Ideas is not very useful if you want to be on guard against the Horrible Ideas of tomorrow: chances are they will not bear enough superficial resemblance for a simple pattern-matching algorithm to detect them. Think of it this way: our culture already has strong antibodies against Nazism, communism, etc. It has far fewer antibodies against whatever the Most Horrible Idea of the 21st century will turn out to be. But that number is not zero, and those antibodies disproportionately come from the truth-seeking spaces, where people are allowed to run their outlandish thought experiments even when they lead to repugnant conclusions.
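To make the pattern-matching point concrete, here’s a minimal sketch. The keywords and test sentences are invented for illustration; no real content filter is quite this crude, but the failure mode is the same:

```python
# Surface-level pattern matching as a (weak) memetic antibody.
# Keywords and test sentences are made-up illustrations.

YESTERDAYS_KEYWORDS = {"lebensraum", "master race", "dictatorship of the proletariat"}

def naive_antibody(text: str) -> bool:
    """Flag a text as a Horrible Idea iff it contains a known keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in YESTERDAYS_KEYWORDS)

# A rehash of an old Horrible Idea in yesterday's vocabulary: caught.
print(naive_antibody("We must secure Lebensraum for our people."))  # True

# A (hypothetical) new Horrible Idea in tomorrow's vocabulary: missed.
print(naive_antibody("Maximising societal throughput requires "
                     "deprioritising low-output individuals."))     # False
```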
Truth-seeking spaces are PR hazards – for their participants and the movements affiliated with them
Let’s consider the process of information dissemination. First abstractly, then with two real examples.
Complex, nuanced concepts are rarely transmitted with perfect fidelity. (Insert something something postmodernism infinite interpretations something.) At best, there are some minor points the recipient doesn’t quite catch, or sees in a slightly different light, etc. At worst, the recipient ends up with a horribly garbled and distorted version diametrically opposed to the original idea. This can happen even if all transmitters act in good faith!
Generally, different formats lend themselves to different degrees of fidelity. Long-form content – books, one-on-one conversations, long podcasts – has a good chance of achieving high fidelity. Most blogs are somewhere in the middle. Social media is horrible.[3]
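As a toy illustration of why the number of retransmissions matters, assume (purely for the sake of argument; the per-hop numbers are made up) that each retransmission preserves a fixed fraction of an idea’s nuance. Fidelity then decays exponentially with the number of hops:

```python
# Toy model: fidelity after n hops = per-hop fidelity ** n.
# The per-hop numbers are illustrative assumptions, not measurements.

FORMATS = {
    "book / long podcast": 0.95,
    "blog post": 0.80,
    "social media": 0.50,
}

HOPS = 5
for fmt, per_hop in FORMATS.items():
    print(f"{fmt:>20}: {per_hop ** HOPS:.1%} of the nuance survives {HOPS} hops")
# book / long podcast: ~77%, blog post: ~33%, social media: ~3%
```

Even modest per-hop losses compound quickly, which is part of why an idea that spreads mainly through social media arrives almost unrecognisable.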
A good example of this happening in recent years is the Financial Independence, Retire Early (FIRE) movement. One of the blogs that started it all was Early Retirement Extreme (ERE), by a former Danish astrophysicist. ERE has many nuanced ideas, such as the application of systemic thinking to lifestyle design to create synergic effects, tradeoffs between different kinds of capital (e.g. financial capital, social capital, time, skills,…), income robustness scores (having several uncorrelated sources of income) and much more. Interestingly, one facet of ERE is that “ERE has a graceful failure mode, because badly done ERE is just regular FIRE”. (I originally wrote much more about this, but then realised it was better to split it into a separate article.)
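To give a flavour of the income-robustness idea, here’s a hypothetical scoring sketch. The formula is my own invention for illustration, not ERE’s actual method: it rewards having many similarly-sized streams and penalises correlation between them.

```python
def robustness(streams: dict[str, float], correlation: float) -> float:
    """Hypothetical score in [0, 1]. streams: name -> monthly income;
    correlation: 0 (independent streams) to 1 (all move together)."""
    total = sum(streams.values())
    # Herfindahl-style concentration: 1.0 = a single stream, -> 0 = many equal streams.
    concentration = sum((x / total) ** 2 for x in streams.values())
    # Correlated streams behave like fewer streams, so shrink the diversification credit.
    return (1 - concentration) * (1 - correlation)

salary_only = {"salary": 4000}
ere_style = {"dividends": 800, "rent": 700, "freelancing": 600, "garden surplus": 100}

print(robustness(salary_only, correlation=0.0))  # 0.0 -- one stream, no robustness
print(robustness(ere_style, correlation=0.2))    # ~0.55 -- diversified, loosely coupled
```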
The most significant populariser of FIRE was probably Mr. Money Mustache (MMM). His blog covers similar ground, but with relevant differences: his budget was somewhat larger than ERE’s extremely low one, he emphasised replacing one’s job income with investment income (an easier, but also much riskier, alternative to ERE’s multiple streams!), and so on. Nowadays there are thousands of FIRE blogs talking about the “simple formula” for early retirement, and the public perception of the movement is that its adherents deprive themselves because they are irrationally averse to working hard(er), all so they “don’t have to put off their life”. By contrast, in MMM’s and ERE’s framing, frugality is not a sacrifice: the practices that reduce monetary spending often come with side effects like a healthier lifestyle and a richer social life. But those nuances were mostly lost. The process of idea transmission did not happen very faithfully.
Effective altruists are quite concerned with this process. I think the concern is warranted: you’d expect a low-fidelity-of-transmission version of EA to just be regular altruism, but for some reason that is not what usually happens. In response to these concerns, the awareness/inclination model was put forth. Simply put, for movements like EA it’s not a good idea to increase awareness if it comes at the cost of making people less favourably inclined towards EA. (Of course, the actual model is much more nuanced, and I am not transmitting the idea very faithfully.) I cannot reasonably attempt to summarise effective altruism here, but it’s worth pointing out that it’s not just altruism, and also not naive utilitarianism. Scott Alexander has written about the EA Tower of Assumptions, meaning the pyramid that goes from basic assumptions up to specific projects. He points out that most “criticism of EA” is really just criticism of specific stuff in the upper echelons, not criticism of EA itself. (If you want an introduction to EA, this is what convinced me personally.)
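A deliberately crude way to see the awareness/inclination tradeoff (again, the real model is more nuanced than this): value outreach by the expected number of people who are both aware of the movement and favourably inclined towards it. All numbers below are made up.

```python
def expected_supporters(population: int, awareness: float, inclination: float) -> float:
    """awareness and inclination as probabilities in [0, 1]."""
    return population * awareness * inclination

# Before: a small but well-informed public.
before = expected_supporters(1_000_000, awareness=0.02, inclination=0.50)
# After a viral, low-fidelity exposure: ten times the awareness,
# but the garbled version leaves people far less favourably inclined.
after = expected_supporters(1_000_000, awareness=0.20, inclination=0.04)

print(before, after)  # 10000.0 8000.0 -- more awareness, fewer supporters
```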
Public perception of EA seems to have become something like “arrogant white academics and techbros think they are saving the world, but they mostly waste time worrying about an AI apocalypse”. This is sometimes mixed with the accusation that EAs are funging against doing concrete good things in favor of supporting pie-in-the-sky Sci-Fi doomerism or whatever.
I think that EA initially did a really good job of sticking to high-fidelity formats, but the recent controversies brought it to the attention of a wider public via lower-fidelity formats (shorter articles, etc.) that were already critical. So here’s another case of unintended and unwanted consequences of decreased fidelity of message transmission. (Just my impression, which may be completely off; I have not been expending a huge amount of energy on following along.)
An example of a text that fails pretty badly at being mindful of this process is this one: An inquiry concerning the wisdom of building cathedrals. Someone not familiar with EA who came across that post would be justified in adjusting their opinion of EA downward.
(Low fidelity of information transmission isn’t the only reason truth-seeking spaces are PR hazards for the movements in whose ecology they grow. Through sheer statistics, most of the outlandish ideas considered will land somewhere between neutral and awful, and someone who hasn’t yet sufficiently trained their memetic antibodies might end up endorsing some of them, or even acting on them. This means a PR hazard would exist even if all information were transmitted with perfect fidelity.)
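The “sheer statistics” claim is easy to check with a toy simulation (the distribution and threshold are arbitrary assumptions): if idea quality is roughly normally distributed, and “genuinely good” means two standard deviations above the mean, the vast majority of draws fall short.

```python
import random

random.seed(0)
ideas = [random.gauss(0, 1) for _ in range(10_000)]
good = sum(1 for quality in ideas if quality > 2)  # "good" = 2 sigma above the mean

print(f"{good / len(ideas):.1%} of ideas clear the bar")  # roughly 2%
```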
(This is where I say that I have the urge to put a giant and horribly annoying banner over this blog saying that the epistemic status of everything on here is uncertain until specified otherwise, that you shouldn’t assume I endorse anything I’ve written unless I’ve literally posted it today, and that I’ve changed my mind often in the past.
I’d like to see this blog in part as practice in coming up with outlandish ideas, knowing that 98% of them will be discarded. But I’ve been too self-conscious to make this vision a reality. Maybe I should make regular threads with this explicit purpose?)
So, what can be done about this PR hazard?
Let me first say that I don’t know of anyone who has found a really good workable solution. Here are a few things people are doing to deal with this in practice, in no particular order:
- Subscription- or registration-only platforms
This is particularly effective when coupled with a paywall. Tradeoff: there is lots of free competition, so the paywall often keeps out people you would want to get in.
- Format
A forum I know that consistently produces high-quality discourse has a strict “no memes” policy, which is explicitly in place to filter for people who tolerate long-walls-of-text-only. (It also explicitly doesn’t want to be signal-boosted, to avoid suffering an Eternal September.) Generally, long-form content filters out people who don’t have the required time or attention span. Isaac Newton deliberately wrote the Principia Mathematica to be difficult to understand, so that fewer people would annoy him.
Conversing in a format that isn’t immediately accessible to everyone does not protect against people who already hold a bad opinion, or who have malicious intent, taking things out of context or repeating them in a different spirit than the one in which they were intended. So the above are, to some extent, viable “PR hazard containment” strategies, but they do not solve the problem very well.
- Physical spaces
A last alternative is having the space literally be a physical space. Talking to people face-to-face is subject to different constraints than writing with anons on the internet. The first, most obvious constraint is that the participants need to be in the same physical location. I can’t claim much personal experience with what happens when they are. (Anecdotally, the kids at elite universities now hand off their phones before they start excessive partying; citation needed.)
I don’t really have a good solution. Which is, by the way, a main reason I’m keeping this blog anonymous.
[1] I’m tempted to make an anti-safe-space joke here, but I probably shouldn’t do that.
[2] Page 1363 of 2745 in this edition of Les Misérables: https://freeclassicebooks.com/Victor%20Hugo/Les%20Miserables.pdf. It took me forever to find the relevant quote in the English version, but less than fifteen seconds to find it in the original. Here’s the original:
[3] Someone who encountered any given movement primarily on social media will have a very hard time passing an Ideological Turing Test on it.
> Let me first say that I don’t know of anyone who has found a really good workable solution.
You must not be much for cults or religions. Every religion naturally creates a space where certain norms prevail; the problem has always been that truth-seeking spaces tend to be secular, and full of inflexible males without many females to act as social glue.
This is not a problem with an easy solution. I'm guessing you've read my post on Venus and Mars, so you know that what you describe as truth-seeking is far less popular with the female gender. Maybe a truth-seeking space founded by rich, educated bachelors? ...With a solid dress code?