A paradigm shift in empirical economics?

Empirical work is an increasingly important part of economics, having taken over the majority of top-journal publishing from theory papers. But there are different flavors of empirical econ. There are good old reduced-form, "reg y x" correlation studies. There are structural vector autoregressions. There are lab experiments. And there are structural estimation papers, which estimate the parameters of more complex models that their authors assume (or hope) describe the deep structure of the economy.

Then there are natural experiments. These papers try to find some variation in economic variables that is "natural", i.e. exogenous, and look at the effect this variation has on other variables that we're interested in. For example, suppose you wanted to know the benefits of food stamps. This would be hard to identify with a simple correlation, because all kinds of things might affect whether people actually get (or choose to take) food stamps in the first place. But then suppose you found a policy that awarded food stamps to anyone under 6 feet in height, and denied them to anyone over 6 feet. That distinction is pretty arbitrary, at least in the neighborhood of the 6-foot cutoff. So you could compare people who are just over 6 feet with people who are just under, and see whether the latter do better than the former. 
That's called a "regression discontinuity design," and it's one kind of natural experiment, or "quasi-experimental design." It's not as controlled as a lab experiment or field experiment (there could be other policies that also have a cutoff of 6 feet!), but it's much more controlled than a simple correlation study, and it's more ecologically valid than a lab experiment and cheaper and less ethically fraught than a field experiment. There are two other methods typically called "quasi-experimental" - instrumental variables and difference-in-differences.
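To make the regression discontinuity logic concrete, here is a minimal sketch in Python on simulated data. Everything in it is invented for illustration (the variable names, the 6-foot cutoff coded as 72 inches, and a "true" effect of 2.0); a real study would also choose the bandwidth carefully and run robustness checks.

```python
# Minimal sharp regression discontinuity sketch on simulated data.
# The pretend policy: people under 72 inches (6 feet) receive food stamps.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
height = rng.normal(68, 3, n)              # running variable (inches)
treated = (height < 72).astype(float)      # sharp cutoff at 6 feet
# Outcome depends smoothly on height, plus a treatment effect of 2.0
outcome = 10 + 0.3 * height + 2.0 * treated + rng.normal(0, 1, n)

# Local linear regression in a narrow window around the cutoff,
# allowing different slopes on each side of the threshold.
window = np.abs(height - 72) < 2.0
x = height[window] - 72
d = treated[window]
X = sm.add_constant(np.column_stack([d, x, d * x]))
fit = sm.OLS(outcome[window], X).fit()
print(fit.params[1])  # estimated jump at the cutoff; should be near 2.0
```

Instrumental variables and difference-in-differences rest on the same basic bet: find variation that is as good as random, and let it stand in for the experiment you couldn't run.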
Recently, Joshua Angrist and Jorn-Steffen Pischke wrote a book called Mostly Harmless Econometrics, in which they trumpet the rise of these methods. That was followed by a 2010 paper called "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics." In their preface, the authors write:

[T]here is no arguing with the fact that experimental and quasi-experimental research designs are increasingly at the heart of the most influential empirical studies in applied economics. 

This has drawn some fire from fans of structural econometrics, who don't like the implication that their own methods are not "harmless". In fact, Angrist and Pischke's preface makes it clear that they do think that "[s]ome of the more exotic [econometric methods] are needlessly complex and may even be harmful." 
But when they say their methods are becoming dominant, Angrist and Pischke have the facts right. Two new survey papers demonstrate this. First, there is "The Empirical Economist's Toolkit: From Models to Methods", by Matthew Panhans and John Singleton, which deals with applied microeconomics. Panhans and Singleton write:

While historians of economics have noted the transition toward empirical work in economics since the 1970s, less understood is the shift toward "quasi-experimental" methods in applied microeconomics. Angrist and Pischke (2010) trumpet the wide application of these methods as a "credibility revolution" in econometrics that has finally provided persuasive answers to a diverse set of questions. Particularly influential in the applied areas of labor, education, public, and health economics, the methods shape the knowledge produced by economists and the expertise they possess. First documenting their growth bibliometrically, this paper aims to illuminate the origins, content, and contexts of quasi-experimental research designs[.]

Here are two of the various graphs they show:

[Figures: two graphs from Panhans and Singleton documenting the bibliometric growth of quasi-experimental methods.]
The second recent survey paper is "Natural Experiments in Macroeconomics", by Nicola Fuchs-Schuendeln and Tarek Alexander Hassan. It demonstrates how natural experiments can be used in macro. As you might expect, it's a lot harder to find good natural experiments in macro than in micro, but even there, the technique appears to be making some inroads.
So what does all this mean?
Mainly, I see it as part of the larger trend away from theory and toward empirics in the econ field as a whole. Structural econometrics takes theory very seriously; quasi-experimental econometrics often does not. Angrist and Pischke write:

A principle that guides our discussion is that the [quasi-experimental] estimators in common use almost always have a simple interpretation that is not heavily model-dependent.

It's possible to view structural econometrics as sort of a halfway house between the old, theory-based economics and the new, evidence-based economics. The new paradigm focuses on establishing whether A causes B, without worrying too much about why. (Of course, you can use quasi-experimental methods to test structural models, at least locally - most econ models involve a set of first-order conditions or other equations that can be linearized or otherwise approximated. But you don't have to do that.) Quasi-experimental methods don't get rid of theory; what they do is to let you identify real phenomena without necessarily knowing why they happen, and then go looking for theories to explain them, if such theories don't already exist.
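A standard textbook example of that "local" testing idea (not one taken from the papers discussed here): with CRRA utility, the consumption Euler equation of many macro models log-linearizes to roughly

$$\mathbb{E}_t\left[\Delta \ln c_{t+1}\right] \approx \frac{1}{\sigma}\,(r_t - \rho),$$

where $\sigma$ is the coefficient of relative risk aversion and $\rho$ the rate of time preference. A quasi-experimental estimate of the coefficient on the real interest rate $r_t$ (say, by instrumenting $r_t$) then doubles as a local estimate of the structural parameter $1/\sigma$, the elasticity of intertemporal substitution.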
I see this as potentially being a very important shift. The rise of quasi-experimental methods shows that the ground has fundamentally shifted in economics - so much that the whole notion of what "economics" means is undergoing a dramatic change. In the mid-20th century, economics changed from a literary to a mathematical discipline. Now it might be changing from a deductive, philosophical field to an inductive, scientific field. The intricacies of how we imagine the world must work are taking a backseat to the evidence about what is actually happening in the world.
The driver is information technology. This does for econ something similar to what the laboratory did for chemistry - it provides an endless source of data, and it allows (some) controls. 
Now, no paradigm gets things completely right, and no set of methods is always and universally the best. In a paper called "Tantalus on the Road to Asymptopia," renowned skeptic (skepticonomist?) Ed Leamer cautions against careless, lazy application of quasi-experimental methods. And there are some things that quasi-experimental methods just can't do, such as evaluating counterfactuals far away from current conditions. The bolder the predictions you want to make, the more you need a theory of how the world actually works. (To make an analogy, it's useful to catalogue chemical reactions, but it's more generally useful to have a periodic table, a theory of ionic and covalent bonds, etc.)
But just because you want a good structural theory doesn't mean you can always produce one. In the mid-80s, Ed Prescott declared that theory was "ahead" of measurement. With the "credibility revolution" of quasi-experimental methods, measurement appears to have retaken the lead.

Update: I posted some follow-up thoughts on Twitter. Obviously there is a typo in the first tweet; "quasi-empirical" should have been "quasi-experimental".