Pet peeves in technical & philosophical writing

Assuming that an equation counts as an explanation

For example, that F=ma explains something about the dynamics that unfold within a Newtonian system. It doesn’t; it’s just a very terse summary of an observed invariant. A proper explanation would say why F always equals ma: what are the underlying phenomena that cause this to be the case?

Large equations, in general

I’ve encapsulated this as “equations wouldn’t pass code review”: why is it considered perfectly acceptable to put everything in an insane one-liner in mathematical writing, whereas in software we seem to have broadly agreed that “descriptive variables and a sequence of small steps building on previous results” is the way to go?
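
To make the contrast concrete, here’s a minimal sketch in Python. The formula (the haversine great-circle distance) is my own choice of a typically hairy equation, not one taken from any particular paper: first the one-liner as it might appear in print, then the version that would pass code review.

    from math import asin, cos, radians, sin, sqrt

    EARTH_RADIUS_KM = 6371.0

    def haversine_one_liner(lat1, lon1, lat2, lon2):
        # Mathematical-writing style: one dense expression.
        return 2 * EARTH_RADIUS_KM * asin(sqrt(
            sin(radians(lat2 - lat1) / 2) ** 2
            + cos(radians(lat1)) * cos(radians(lat2))
            * sin(radians(lon2 - lon1) / 2) ** 2))

    def haversine_reviewed(lat1, lon1, lat2, lon2):
        # Code-review style: descriptive variables, small steps
        # building on previous results.
        lat1_rad, lat2_rad = radians(lat1), radians(lat2)
        half_delta_lat = radians(lat2 - lat1) / 2
        half_delta_lon = radians(lon2 - lon1) / 2

        # The haversine of the central angle between the two points.
        haversine_term = (
            sin(half_delta_lat) ** 2
            + cos(lat1_rad) * cos(lat2_rad) * sin(half_delta_lon) ** 2
        )
        central_angle = 2 * asin(sqrt(haversine_term))
        return EARTH_RADIUS_KM * central_angle

Both functions return the same number; only the second tells the reader what each intermediate quantity means.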

Invalid or oversimplified assertions about the nature of rationality

There’s this phenomenon where, as long as someone writes in a particular style and with a particular attitude, they seem to get a pass on having their arguments logically analysed in any detail, even though “analysing things logically” is their main selling point. I guess you kind of assume they’ve done their homework, or something. In any case, the arguments often turn out to be susceptible to even a fairly casual sanity check.

In this video, Julia Galef related an anecdote about the following scenario, in which a rationalist technique was used to make the right decision in the face of uncertainty:

  • a person is offered a new job with a $70k salary increase, but they’d have to move away from their friends and family (F+F) for it, so they are hesitant to accept.

  • the rationalist hack is to prompt the person to reverse the situation, and ask whether they would take a $70k salary decrease to live closer to their friends and family.

  • the answer is “no!”, therefore the rational decision is to take the job. The hesitance comes from the status-quo bias: people prefer to stick with what they have.

I totally acknowledge the value of this reversal technique as an aid to good decision-making*, but I reject the implication that we can use it to do a simple comparison between “$70k” and “living close to friends and family” in order to arrive at the correct decision.

For starters, what if we went further and said:

  • imagine the situation is reversed; you already have the +$70k and we ask you to take a -$70k hit to move closer to friends and family.

  • you are hesitant, because of the status-quo bias.

  • we ask you to imagine the reverse scenario – which brings us back to the current situation. Would you take a $70k increase if it meant moving away from F+F? We have already seen that you would be hesitant, therefore…

Is it valid to do this kind of hypothetical scenario inception? I argue that it is, and that, to make the scenarios as realistic as possible, the status-quo bias should be present in all of them.

More importantly, these simplified examples often seem to assume that the only thing being decided is whether to have $70k or to live near F+F. In reality, the person making the decision also has to live with having made the decision. In the F+F case, they’d also have to live with having turned down a highly paid job offer to stay in their comfort zone; their self-image would take a big turn in the “lacking ambition” direction. In the other case, they’d have to live with having prioritised money over family: avaricious. The difference between the two realities would not just be the two values being compared.

The point is that there are often complex ramifications to each available option, and where a simplified calculus might make it look like a person was being “irrational”, I think closer inspection often shows that they were just taking more into account than we thought.

* The reversal technique can also be useful for counterbalancing the negativity bias when predicting the outcome of an action: imagine that you want the thing, then imagine that you want its opposite, and compare the two predictions to get an idea of whether your predictive machinery is biased toward the failure case.

Other examples

  • Another example of not looking closely enough can be found in Jacques Peretti’s book Done. Here, the supposedly irrational behaviour is a tendency to buy lottery tickets and insurance at the same time – one is “sensible”, the other is “frivolous”, so they shouldn’t be done at the same time (…or something? :shrug:). But if you think about it, these are actually pretty similar activities in some ways: both are a form of betting on an unlikely outcome, and both have negative expected values (EVs) from a strictly numerical point of view, as the sketch below shows.
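
To back up that last claim, here’s a minimal sketch of the EV arithmetic in Python, with illustrative numbers I’ve made up (they are not from Peretti’s book):

    def expected_value(payout, probability, price):
        # Average winnings per purchase, minus what the purchase cost.
        return payout * probability - price

    # Made-up numbers for illustration only.
    lottery_ev = expected_value(payout=10_000_000, probability=1 / 45_000_000, price=2)
    insurance_ev = expected_value(payout=200_000, probability=1 / 2_000, price=300)

    print(f"lottery ticket EV:   {lottery_ev:+.2f}")    # about -1.78
    print(f"insurance policy EV: {insurance_ev:+.2f}")  # -200.00

In both cases you pay more than the probability-weighted payout – that margin is how lottery operators and insurers stay in business – so buying both is at least consistent, whichever label you give it.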