Predictably Updating Towards Sympathy
(See footnote 1 for some context.[^1])
So there’s a thought I’ve been crystallising recently: I seem to be more sympathetic on priors than others when considering the faults of other people or (especially) organizations. So how can I justify this? I jest, but that is genuinely the order in which I came across this thought: I noticed the behaviour first, and only then asked for the justification. Why would I behave this way, and is it in fact justified?
I think it boils down to this: I expect that, were I to learn more about the details of this person or organisation, I would grow more sympathetic. Why would I expect this? Well, being a person is notoriously hard. I’m not going to motivate this statement any further. The same goes for being an organization[^2] - doing and making stuff in the real world is difficult. It seems likely to me that any details about why a thing is the way it is would involve constraints or restrictions which I, as someone looking in from the outside, can currently only see the shadow of.
There is also some empirical evidence here. Among others, I follow game developer Rami Ismail on Twitter, who often shares brass-tacks details about the difficulties of (video game) software development. My own smallish experience with coding points the same way: backdoors and loopholes are not deliberately left in, nor does some large oversight or mistake need to occur for them to appear. They are simply the natural, default outcome, and much effort and/or attention needs to be paid to avoid them. And often there is neither the time nor the resources for that effort and attention. These issues are often on the wrong side of the profit margins of whoever makes the decisions - correctly or incorrectly so.
Beyond software development specifically, human coordination is difficult. Good communication and cooperation take effort. Even at the best of times, smoothly running an organisation or project of any size requires high-quality communication and coordination among however many people are involved (anything over ~5 benefits greatly from structure and project management processes): ensuring that responsibilities are divided and communicated, that both people and tasks are followed up on, that the creative vision is not lost track of along the way, and so on.
So with these observations in mind springs forth my belief: “If I were to hear the details of the constraints on organizations or people, I would become more sympathetic towards their struggles.” And, according to Bayes, if I can predict that receiving that information would cause me to become more sympathetic, I can simply make that update right now and become more sympathetic right away.
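The Bayesian point here is the law of conservation of expected evidence: your current credence must equal the expectation of your future credence over all the observations you might make, so if you can already predict which direction the evidence will push you, your prior is wrong now. A minimal numerical sketch (all numbers purely illustrative, not from anything in this post):

```python
import math

# P(the org had good reasons for its conduct) - my current prior
prior_sympathetic = 0.5

# P(the internal details look reassuring | hypothesis) - made-up likelihoods
p_reassuring_given_good = 0.9
p_reassuring_given_bad = 0.2

# Total probability of hearing reassuring details (law of total probability)
p_reassuring = (prior_sympathetic * p_reassuring_given_good
                + (1 - prior_sympathetic) * p_reassuring_given_bad)

# Posterior after each possible observation (Bayes' rule)
post_if_reassuring = prior_sympathetic * p_reassuring_given_good / p_reassuring
post_if_damning = (prior_sympathetic * (1 - p_reassuring_given_good)
                   / (1 - p_reassuring))

# The expected posterior equals the prior: on average, evidence cannot
# move you in a predictable direction.
expected_posterior = (p_reassuring * post_if_reassuring
                      + (1 - p_reassuring) * post_if_damning)
print(math.isclose(expected_posterior, prior_sympathetic))  # True
```

So the claim in the quote above is only coherent if the sympathy-inducing update has, in effect, already been made: if I am confident the details would soften me, the softer credence is the one I should hold now.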
There are some things which keep my sympathy from going all the way. In some cases the organization does not, in fact, have good reasons for what it is doing, or what it is doing is sufficiently bad that “ordinary” reasons like “things are difficult” do not carry enough weight. The FTX situation comes to mind - there, the details updated me away from sympathy, and the overall consequences were bad enough that sympathy does not seem correct. But I think that organizations are - overall - not like FTX.
I think the argument can be made that some classes of organizations are like FTX, or bad in other ways, such that one should not have sympathy towards them on priors. But I think you need some amount of evidence, either about their position in the industry or about their conduct, before concluding that. For the first, Amazon, large banks, and Electronic Arts (EA, the video game company) spring to mind. For the second, Amazon serves again as an example, or [video game companies doing bad crunch practice]. By contrast, I see organizations like Hello Games (No Man’s Sky) or Bungie (a large video game industry player, but not a titan the way EA is) differently.
If your response to my position is “But they still should have fixed those bugs”, then I’m not sure how to help you. If you think “but I still can’t see why [org] isn’t devoting resources to [x] when [x] is such a large issue”, then we’re in the same boat. We are both on the outside, looking at only the shadows of the host of considerations these organizations have to keep in mind (legal, coordination-wise, software constraints, or otherwise). But I’d like you to consider the proposition that, were you to hear the details of their internal goings-on, you too would become more understanding of their conduct.[^3]