Monday, August 31, 2015

Between rationality and politics

Last year, there was an incident where Arthur Chu criticized LessWrong/Rationalists for abandoning political effectiveness in favor of being "rational".  This has been on my mind increasingly, partly because Arthur Chu went on to become a popular columnist whom I like, and I went on to learn more and more about how pathological the Rationalist community really is.

This has some personal significance, and is somewhat disillusioning.  A decade ago, I became interested in skepticism primarily because I liked thinking about how we think, and how to improve the process of thinking on a meta level.  I was also on board with the political project of skepticism (fighting bad beliefs), but over time I became less interested in that, which left just the critical thinking component.

But if all that's left is the critical thinking, then my values are basically the same values held by the Rationalist community.  In fact, some of the things I like to write about, the Rationalist community addresses better than I do.  And then, to discover how the Rationalist community behaves...

But let's go back to the issue of rationality vs politics.

Lies and Shunning

At first, it sounded like "politics" meant lying for a cause.  Part of the original context related to false or misleading statistics.  That seems difficult to justify.  I am in favor of, say, infographics which popularize information while eliding the truth just a little.  Certainly there are problems with clickbait pop science, and with partisan "research", but we also can't spend forever getting every fact exactly right, sacrificing all accessibility for the sake of every detail.  So I see both sides, and, okay, it's still really hard to justify misleading statistics.

But I think I see now that it's not really about lying or eliding truth.  It's about other rational values, such as...
  • Always hear out an argument and consider it in its strongest form before calmly coming to a conclusion.
  • Never shun people no matter how bad they may be.
The idea of not shunning people seems fairly innocuous, or even positive, until you see it taken to its extreme.  Take, for instance, the Neoreactionaries (NRx).  They're a small and inconsequential group who believe that we should get rid of democracy and return to the days of white supremacy (no, seriously).  For some reason, many NRx are welcomed on LessWrong, to the point that LessWrong enjoys a space in this map of neoreactionaries.

Scott Alexander, the second most popular LessWrong writer, has talked about how NRx have inspired his writing, even though he thinks they're wrong.  He contrasts them with feminists, who he says have correct "object-level" beliefs but bad meta-beliefs.  On its face, he's basically saying he likes neoreactionaries because they talk the Rationalist talk.  This is funny because there's a Rationalist saying, "rationality is about winning", which means that rationality isn't about how you sound, it's about the ultimate consequences.  Valuing people who talk the talk is basically a bias towards the in-group.  And what an in-group they've chosen!

It's hard to tell exactly what effect extended exposure to NRx ideas has had on the community.  In my limited experience, Rationalists seem more willing than the general public to argue for racist positions.  I think NRx may have moved the whole Overton window of the community, and maybe Rationalists just think they're immune.  To them, the only valid way to reject NRx ideas is by considering them at great length, and if you absorb some of their ideas in that time, hey, maybe they were good ideas worth adopting, because Rationalists couldn't possibly come to a wrong conclusion on an issue they've thought about.

EA and AI

I should mention that there appears to have been some sort of Rationalist diaspora.  From what I've heard, the community used to be more centralized on the LessWrong website, but has since spread out to new websites and new ideas.  It's nearly certain that what I criticize does not apply uniformly across the entire diaspora.

Probably one of the best things to come out of Rationalism is the Effective Altruism movement (EA).  They believe in figuring out which charities do the most good for your dollar and then donating lots of money to them.  They're associated with organizations like GiveWell and Giving What We Can.
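To illustrate the style of reasoning, here is a minimal sketch of the underlying arithmetic.  The charity names and all of the figures below are entirely made up by me; they're not GiveWell's numbers, just the shape of the calculation:

    # A minimal sketch of EA-style cost-effectiveness comparison.
    # All names and numbers are hypothetical, for illustration only.
    charities = {
        "Charity A": 3500,   # dollars per life saved (invented figure)
        "Charity B": 40000,  # dollars per life saved (invented figure)
    }
    budget = 10000  # dollars available to donate
    for name, cost_per_life in charities.items():
        print(f"{name}: {budget / cost_per_life:.2f} expected lives saved")

On these made-up numbers, the same donation goes more than ten times further at Charity A, which is the whole pitch.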

They're pretty hard to fault.  I mean, we can criticize the details, such as the particular things that they prioritize in their calculations.  I'm also really iffy on the idea of "earning to give".  But one of the problems with EA is that telling people their donations are ineffective sometimes just discourages them from donating at all.  Likewise, if I criticize EA, I worry it might have the same effect.

More recently, EA came under fire because their EA Global conference prominently featured AI risk as one of their causes.  That means people were talking about donating to the Machine Intelligence Research Institute (MIRI) to fund artificial intelligence research aimed at preventing extinction by a malevolent AI.  Said research involves trying to build a benevolent AI.  In response to the criticism, Jeff Kaufman, who is known within EA for arguing against prioritizing AI risk, called for pluralism.  (Scott Alexander, for his part, argued that AI risk was at least somewhat reasonable, and anyway less than a third of his donations go to MIRI.  How inspiring.)

So this is another case of not shunning people, but instead welcoming them.  And as a consequence, some people in the community come to regard AI risk as correct, and most regard it as at least somewhat reasonable.  But really, what place does AI risk have in an evidence-based charity group?  It seems to be based more on philosophy--a very idiosyncratic take on utilitarianism, and a bunch of highly questionable probability estimates.
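For what it's worth, here's a toy version of the expected-value argument as I understand it.  Every number below is one I invented for illustration; none of them come from MIRI or anyone else:

    # A toy sketch of the expected-value argument for AI risk.
    # All numbers are invented: the point is the structure, where a
    # minuscule probability shift times an astronomical utilitarian
    # stake can dominate any conventional charity.
    risk_reduction_per_dollar = 1e-15   # questionable estimate, chosen freely
    future_lives_at_stake = 1e15        # questionable estimate, chosen freely
    conventional_lives_per_dollar = 1 / 3500  # hypothetical top-charity figure

    ai_lives_per_dollar = risk_reduction_per_dollar * future_lives_at_stake
    print(ai_lives_per_dollar)            # 1.0 expected life per dollar
    print(conventional_lives_per_dollar)  # roughly 0.0003 lives per dollar

Notice that the conclusion is produced entirely by the two inputs picked out of thin air, which is exactly why the probability estimates are load-bearing.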

Incidentally, that particular kind of utilitarianism is the kind advocated on LessWrong, and more specifically by its founder, Eliezer Yudkowsky.  Eliezer Yudkowsky has long argued for the importance of AI risk, and is the founder of MIRI.  In some ways, convincing people to donate to MIRI was the underlying motivation for teaching people his ideas about rationality.  He wanted to show people that his own cause was right, despite being contrarian.  And it worked!  Not only do a lot of Rationalists accept the value of MIRI, many have also absorbed Eliezer's other strange beliefs, including his favorable views of cryonics and paleo diets.

So basically, the EA movement is weighed down by the history of the Rationalist community and its particular in-group biases.

Political rationality?

Given the way the Rationalist community has turned out, I'm glad I never got involved, despite my intellectual values clearly leaning in that direction.  One question is whether I can synthesize anything better.

I wish I could, but I don't think I can.  I feel conflicted about the whole thing.

On the one hand, I have these rational values.  Arguments should be judged purely on their content.  It's easy to be blanket-skeptical about things which are actually reasonable.  Ideas that sound too crazy to entertain can be right.  And even if something really is too crazy, rebutting it point by point can help other people who find it somewhat reasonable.

On the other hand, I also believe arguments are about power.  If you stick to purely rational arguments, you'll lose your audience and miss the point.  And rational arguments aren't even very effective at helping you reach correct conclusions yourself.  I believe in the Overton window, and I believe it's something we need to fight over--actually fight, not just debate.  I believe anger is so useful that I'll fake it if necessary.  Finally, I believe in the goodness of shunning people and shutting down arguments.

I don't think I am always consistent about which tack I take.  And I don't think I have the ability, or commitment, to map out consistent rules for it.  Better to take it on a case-by-case basis and spend that time on other things.

This all makes me glad that I'm changing my blog title in a month.
