Tuesday, March 4, 2014

On Causal Density

The most useful concept I have encountered in engaging with public debate and public policy over the past decade is the notion of "causal density." It provides us with a framework for evaluating almost any given subject.

I first encountered the concept in Jim Manzi's excellent book Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society. Manzi posits that causal density is a measure of the number of potential factors that could be the root cause of a given phenomenon. In areas of low causal density, causality is (comparatively) easy to determine. In areas of high causal density, causality is elusive, because there are so many potential factors that could be affecting your subject matter. Or, as Arnold Kling puts it, "causal density is a bear."
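The idea can be made concrete with a toy simulation (my own sketch, not Manzi's; every name and number here is illustrative): flip one input whose true effect is known, and watch how cleanly that effect shows up as the number of uncontrolled "hidden" factors grows.

```python
import random

random.seed(42)  # reproducible runs

def outcome(x, hidden):
    # The true effect of flipping x from 0 to 1 is exactly +1;
    # every hidden factor also pushes the outcome around.
    return x + sum(hidden)

def observed_effects(n_hidden, trials=200):
    """Differences in outcome when x flips 0 -> 1, while n_hidden
    uncontrolled factors vary freely between the two observations."""
    diffs = []
    for _ in range(trials):
        h0 = [random.gauss(0, 1) for _ in range(n_hidden)]
        h1 = [random.gauss(0, 1) for _ in range(n_hidden)]
        diffs.append(outcome(1, h1) - outcome(0, h0))
    return diffs

low_density = observed_effects(n_hidden=0)    # every diff is exactly 1.0
high_density = observed_effects(n_hidden=30)  # diffs swing wildly; many are negative
```

With zero hidden factors, every single observation reveals the true effect; with thirty, individual observations routinely point in the wrong direction, even though the true effect never changed. That is the causal-density problem in miniature.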

Academic disciplines can be characterized quite neatly using causal density. Randall Munroe hit on this, perhaps inadvertently, in a wonderful xkcd comic.

(Image taken from xkcd.)

Although Munroe arranges his fields by "purity," you could just as easily replace purity with "causal density" and you'd have roughly the same chart. Math in its purest form has extremely low causal density. Admittedly, I am not a mathematician, but when you are dealing only with numbers, it is very easy to determine which inputs changed which outputs. As you work your way from right to left on his comic, it gets progressively harder to discern causality. Physics is comparatively easy, in that sense (and perhaps that sense alone!); we can control environments and variables very effectively. Chemistry is perhaps a bit more complicated. Biology, which deals with the interaction of living systems, is more difficult still. The further to the left you are in Munroe's comic, the more slippery causality becomes.

The toughest nuts to crack in the causality world are what academia identifies as the liberal arts. Below is my mental model, put together with the greatest of care (read: very sloppily) in the GIMP Image Editor on Ubuntu. I would be willing to reconsider some of the placements here, but I had a reason for each of them.

(Image: my chart of the disciplines, arranged by causal density.)
Admittedly, some of this is a product of ignorance; I don't know all of these fields. But that (sloppy) chart is basically how I view the disciplines. The further to the right, the more probable that explanations of causation are correct. The further to the left, the less probable.

A few points on this:

- I would separate "predictive theories of history" from the historical discipline more broadly, and from the comparative method in history more narrowly. When history is merely trying to chronicle the past, it is less prone to causality issues than some of the other liberal arts, which are attempting to explain how humans operate, rather than how they once operated. Also worth noting is the comparative method. Comparison is an explicit attempt to discern causality: you find similar cases and then identify the asymmetries between them. The method has its limitations, and the causal density is still incredibly high, but it is a way to combat our knowledge problems. It is the theory of history--Hegel, Marx, etc.--that has the highest causal density, because it is essentially an attempted synthesis of all human knowledge and experience. It is, then, the most problematic of the disciplines from a causal density perspective.

- One of the challenges of the human-focused disciplines is the ever-changing nature of the subject. Human nature may be unchanging--and to a large extent, I believe it is--but the changing context has such an impact that models that make sense in one situation can become completely useless due to small changes. We just don't see that type of external shock to the "core laws" in the hard sciences.

- Causal density and complexity are not synonymous, though public discourse often treats them as if they were. The "harder" subjects often have lower causal density; the "easier" subjects allow us to get away with more rhetorical and intellectual "hand-waving." The complexity of physics--a field of low causal density--should give us pause. Our experimental method gets us pretty close to The Truth that underlies reality, and that truth is massively complicated. (By "The Truth," I mean "the actual causes of given events." Such causes clearly exist, but they are difficult to determine accurately.) Political science is so much more complicated that our models barely scratch the surface of a true understanding of causation.

- Many of the models in social science end up getting the correct answer via approximation. But we shouldn't equate that periodic "success" with the Truth, and we should be aware that the Truth may change in the future.

- I believe that many of our political disagreements arise because politicians assume that we live in environments of low causal density. But the real world of human interaction is an area of extremely high causal density, and thus we should hold our beliefs with lower confidence.

This reliance on causal density justifies, in my mind, certain deeply-held views:

- Tinkering with an economy is largely a fool's errand, because it is so difficult to get a handle on all the pieces. (On this, Hayek is completely right, in that the curious task of economics should be to demonstrate to men--and women--how little they really know about what they imagine they can design.)
- Trying to standardize medicine--to change it to "paint by numbers" style diagnosis and treatment--will probably cause problems.
- Decentralization is very often better than centralization, because we are prone to error in areas of high causal density, and reduced size reduces the scope of the error.
- Trial and error is more useful than theorizing.
- "Culture"--as a catch-all for "collective ways of seeing the world" or "stories we tell ourselves," as a friend from school described it--is a useful fiction, at the very least, because it encompasses potential causes that we cannot disaggregate. This implies that we should respect "cultures" that have been successful and do our best to preserve them; oftentimes, in pushing for a change, we may be throwing the baby out with the bathwater. (I don't intend for this to imply that we should always support the status quo.)

A good shorthand for this is to think in terms of potential external inputs into a system. The more possible inputs, the more complicated determining causation can get.
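That shorthand can be backed by a little counting (again my own toy illustration, not from the post; the function name is mine): if an outcome might be driven by any combination of up to three factors, the number of candidate explanations explodes as inputs are added.

```python
from math import comb

def candidate_explanations(n_inputs, max_factors):
    """Number of distinct subsets of inputs that could jointly
    explain an outcome, considering up to max_factors at a time."""
    return sum(comb(n_inputs, k) for k in range(1, max_factors + 1))

# A physics-style experiment with 3 controllable inputs:
low = candidate_explanations(3, 3)    # 7 candidate causes
# A policy question with 30 plausible inputs:
high = candidate_explanations(30, 3)  # 4,525 candidate causes
```

Ten times as many inputs yields hundreds of times as many stories you could plausibly tell about what caused what.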

My overall point here: Discerning causality in public policy is incredibly difficult, because people are incredibly complicated, and complex, dynamic systems are very difficult to fully understand. That said, we shouldn't hesitate to create grandiose historical theories, or theories of politics or sociology. They are entertaining and stimulating, at the very least, and they can help shape the way we see the world. But we should treat those theories with great skepticism, and if we do employ these ideas, we need to find ways to limit our downside and cap the scale of our potential errors. (This is what I would call a Talebian posture.)
