Anti-Polarization Techniques
In preparing for a KW Hungry Minds discussion on codes of conduct and safe spaces, I have been digging up resources on effective discussions and collaboration. This entry is a convenient place to post some resources related to the topic, and to collate some practices I have learned about over the years.
The underlying goal of all of these practices is effective collaboration, in particular when participants have strong disagreements. Some practices work better when every member of the group acts in good faith, but some seem resilient to bad actors.
Simplicity Circle Guidelines
Back when I was young and impressionable, I attended a number of workshops on voluntary simplicity. The ground rules established for such gatherings influenced me a lot:
- Speak honestly, from your own experiences
- Be open to new ideas
- Be respectful of time limits since some people have to leave at the advertised end time; save "tangents" for after the workshop to discuss with anyone who has the time to stay (write your ideas down so you don't forget them)
- Look for and applaud common ground rather than point out differences
- Validate and respect other people's positions
- Question rather than challenge
- Control yourself, not others!
- Share your thoughts and allow others to do the same
- Conversations are a barn raising, not a battle ground
- Enjoy!
Simplicity circles are not intended to convert the group to one way of thinking or doing that will simplify people's lives. Judgment of other people's choices is not acceptable. Each individual knows best how to simplify his or her own life. Consensus is not required. However, it is possible to find areas of common concern, and to support one another in the choices each person makes. Simplicity circles are dialogues, not debates.
In addition, I scanned a number of handouts on good facilitation. In particular, the "Troubleshooting" handout seemed full of good advice. The handouts came from (now dead) webpages, so I have linked to archive.org versions of them below. Here is a PDF of the scans: sharing-circle-resources.pdf.
- Troubleshooting
- Dialogue vs Debate
- Role of the Participants
- Overview of a Typical Study Circle
- Study Circle Material
- Study Circle Handout
- What is a Study Circle?
- Developing Original Study Material
- Questions
- Role of the Facilitator
Grok Duels
I learned about this concept at a WPIRG workshop back in the early 2000s, but I have not found anything else about it on the internet since.
The idea is simple: start with two parties who disagree about a particular issue. Let's call these parties Person A and Person B. The two parties discuss the issue at hand, but there is a twist:
- Person A makes a statement
- Then Person B must repeat Person A's statement back to them to Person A's satisfaction. If Person A feels something is misrepresented in the statement, then Person A points out the misrepresentation and Person B tries again. This continues until Person A agrees that Person B has reflected Person A's point accurately.
- Now Person B makes a statement.
- Then Person A must reflect Person B's statement back to them to Person B's satisfaction.
This procedure means that Person A and Person B must explicitly acknowledge and repeat back the points their opponent makes, even if they do not agree with those points. It keeps the two people from talking past each other.
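To make the turn structure concrete, here is a toy sketch in Python. Nothing in it comes from the workshop: the names (Participant, duel_round, grok_duel) and the echo-style reflection are placeholders of my own, and the parts that matter in a real duel (restating a point faithfully, judging whether the restatement is fair) are exactly the parts the stubs gloss over.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    points: list  # statements this person wants to make, in order

    def reflect(self, point):
        # The hard part in a real duel: restating the other person's point in
        # your own words. This toy version just echoes it back.
        return f"So you are saying: {point}"

    def accepts(self, point, reflection):
        # The original speaker judges whether the reflection is faithful.
        return point in reflection

def duel_round(speaker, listener, point):
    """The speaker makes a point; the listener must reflect it back until the
    speaker agrees it has been represented accurately."""
    reflection = listener.reflect(point)
    while not speaker.accepts(point, reflection):
        # In a real duel the speaker points out the misrepresentation here,
        # and the listener tries again.
        reflection = listener.reflect(point)
    print(f"{speaker.name}: {point}")
    print(f"{listener.name}: {reflection}")

def grok_duel(a, b):
    # Alternate turns until both parties have made all of their points.
    for point_a, point_b in zip(a.points, b.points):
        duel_round(a, b, point_a)  # Person A speaks, Person B reflects
        duel_round(b, a, point_b)  # Person B speaks, Person A reflects

grok_duel(
    Participant("Person A", ["This policy will price people out of housing."]),
    Participant("Person B", ["Without this policy, nothing gets built."]),
)
```

The inner while loop is the whole trick: the conversation cannot advance until the listener has reproduced the speaker's point to the speaker's satisfaction.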
I have only seen this exercise carried out once, and both participants acted in good faith. This is probably resilient against bad-faith actors, provided that there is some kind of consequence for acting in bad faith.
Non-Violent Communication
I also learned about this from WPIRG. It does not directly address polarization, but is intended to facilitate conversations with fewer accusations. It was developed by Marshall Rosenberg, and is (was?) reasonably well known in lefty circles. (It is popular enough that there is a right-wing backlash, anyway.)
Rosenberg characterizes communication styles as "jackal" (boo! hiss!) communication vs "giraffe" (yay!) communication. Jackal communication is competitive, comparative, and defensive. Giraffes have big hearts and eat thorns. From what I understand, the foundations of nonviolent giraffe communication consist of:
- Observe without judgment
- Identify and express your feelings. ("Feelings" here is a fairly specific term: actual emotions, not judgments dressed up as feelings.)
- Find the need behind the feeling.
- Make a request (not a demand!) for the need to be met.
If this sounds like therapy-speak, that's because it is. Nonviolent communication sounds stilted (and no doubt can be weaponized), but it seems effective. It is also robust in the sense that it can be used even if the other party communicates violently.
The primary sources for nonviolent communication stuff are behind paywalls, but there are some resources on the Youtubes. Here is a reasonable introduction. In this video Rosenberg has switched from "jackal" to "wolf" to characterize violent communication.
Intellectual Turing Tests
This comes out of the rationalist movement (Slate Star Codex/Astral Codex Ten, lesswrong, and friends). The exercise is to present an argument that you disagree with so convincingly that people cannot tell whether you genuinely support the position or not.
You can read a series of Ideological Turing Tests on polyamory on the Thing of Things blog here: https://thingofthings.wordpress.com/2020/09/25/poly-itt-results/ . This post also claims that the Ideological Turing Test was invented by Bryan Caplan.
Adversarial Collaborations
This also comes out of the rationalist movement. It is an exercise where people who disagree on an issue collaborate to write a position paper together. The paper contains the strongest arguments from both sides. I do not think the paper has to contain only arguments both sides agree with, but both sides have to approve the paper.
You can read some adversarial collaborations from Slate Star Codex here: https://slatestarcodex.com/2019/12/09/2019-adversarial-collaboration-entries/
I do not know whether this is resilient to bad-faith arguments or not; the obvious failure mode is that bad-faith arguments will scuttle the paper.
Long Bets
This comes via the Long Now Foundation, which has an associated website. The idea is that we have strong opinions about the future, but we are not held accountable for our bad predictions. So in a long bet, opposing sides bet money on their positions, and then write position papers arguing why they think their preferred outcome will come true. Associated with the bet is an expiration date. When the deadline passes, we see what happened, and whether the justifications for those positions proved true.
Way back in 1980, there was a famous long bet between Julian Simon and Paul Ehrlich. They debated whether we were running out of global resources, and made a bet on whether certain commodities (copper, chromium, nickel, tin, and tungsten, according to Uncle Wikipedia) would be higher in price in 10 years. Ehrlich believed that increasing pressure on these commodities would drive prices up, and Simon believed that we would develop alternatives to these commodities that would drive prices down. Ehrlich lost the bet, and the economic optimists have never let the Malthusians forget it. Although long bets are a good way for opposing sides to get at the truth, they are probably not a great tool for depolarization, because the winners of the bets conclude that the losing arguments are without merit.
This technique is robust against bad actors, but it takes a long time to evaluate the results. The real failure mode is that many results are inconclusive because the question criteria are not precise enough, and can be interpreted in different ways. Overall I feel this is a good technique for evaluating theory vs outcomes.
Science
I am not sure that science is all that effective at reducing polarization; academic tribalism is strong. But it is a good technique for overcoming our psychological tendency toward post-hoc rationalization.
The important part of science is the falsifiable hypothesis. If opposing parties can agree on a hypothesis and an experiment to test that hypothesis, then we get substantial information after carrying out the experiment. At its best, science is reproducible, so if you think a result is wrong you can rerun the experiment to check it, or you can propose a different experiment that would prove the hypothesis wrong. More than anything else, I feel this is what has driven technological progress in the world.
The failure mode is that we do not live in an ideal world, and science does not work at its best. All too often we criticize experiments on methodological grounds. In some sense this is appropriate, but in another sense it increases the cost of doing science dramatically. As we get pickier and pickier about methodology, the cost of performing experiments that address these issues goes up and up. Then science turns into a religion: the scientific priesthood tells you truths, but you have no real way to confirm or even discuss those truths because you do not have a well-funded research lab. Thus you have to take scientific findings on faith, or you have to reject them wholesale. (Discussing the ways in which science has become a religion could be a blog entry of its own.)