Tuesday, July 14, 2015

Canadians: Successful Austerians?

A lot has been written, as it turns out, on Canada's experience with fiscal discipline in the mid-1990s. In 1993, Jean Chretien's Liberals were elected with a majority government and, with their February 1995 budget, embarked on a period of fiscal tightening. The change in policy was a response to a long period of federal budget deficits, and accumulating federal debt, which you can see in the following charts. The first shows the Canadian federal government surplus as a percentage of GDP, and the second shows the U.S. and Canadian federal government debt/GDP ratios.
The Canadian finance minister at the time, Paul Martin, explained what he was doing at the 1995 Jackson Hole conference. His program focused primarily on spending cuts, particularly reductions in corporate subsidies, and reforms to unemployment insurance and federal transfers to the provinces. As well, the original budget proposed a 15% cut in federal public employment.

You can see the results in the above charts. Martin was even more successful in reducing federal deficits than he expected, and the Canadian government ran surpluses, on average, from the late 1990s until the 2008-09 recession. As well, the debt/GDP ratio fell from a high of close to 80% to about 40% before the Great Recession. Further, government expenditures on goods and services (all levels of government) fell significantly, as a fraction of GDP:


So, given those kinds of fiscal cutbacks, maybe you're thinking that we should have seen a contraction in aggregate economic activity in Canada. But that didn't happen. Here's what the time series of real GDP in the U.S. and Canada (normalized to 100 in 1995Q1) look like:
So, if you think that government spending, deficits, and government debt matter a lot for aggregate economic activity, this might give you pause. As you might imagine, this episode in Canadian macroeconomic history tends to be celebrated on the right. But left-leaning types aren't quite so happy about it. Canadian electors didn't seem to have a problem with it though. Chretien's Liberals were re-elected with a majority government in 1997, and again in 2000.

Of course, the basic outline from the above four charts doesn't give us that much to go on. We know there could have been a lot going on at the time in Canada, other than changes in fiscal policy. For example, suppose you are a textbook Keynesian, and you read this piece on the Cato Institute's web site. So what, you might say. Maybe those Canadians cut spending when cyclical conditions were very favorable - we know that Keynesian economics only applies to periods of slack. That's roughly what Stephen Gordon had in mind in this post. So, we might look more closely at Canadian labor market data for this period. Here's the labor force participation rate, the employment/population ratio, and the unemployment rate:
David Andolfatto discusses the early-1990s labor market slump in Canada in this blog post. This only looks like a recovery "well under way" (as Gordon puts it) if you're looking at it with hindsight. In February 1995 it wouldn't have looked so great. There had been a persistent decrease in the employment/population ratio and in the labor force participation rate beginning about 1990, and the unemployment rate stood at 9.6% - below its peak but still quite high. It was pretty common for people to observe those features in the post-Great Recession U.S., and call that labor market slack. So, if you are a textbook Keynesian, those observations would have to puzzle you. How can we radically cut the size of government in a slump and not see signs of big fiscal multipliers and disastrous aggregate performance?

But, not so fast. Maybe monetary policy is helping things along? Indeed, in some discussions of the Greek crisis, people have suggested that Greece could have solved its problems in a more straightforward way if it were not tied to the Euro. A country with an unsustainable deficit has to correct that problem, but it's easier to do that if devaluation is an option. This has two effects, according to this logic. First, we can now pay back our debt to foreign creditors in devalued currency - we have implicitly defaulted on part of our debt, and have avoided the costliness of negotiating with nasty Germans and being thrown out of office by a grumpy electorate. Second, assuming sticky prices, our exports are now cheaper and our imports more expensive. Therefore, as long as the Marshall-Lerner condition holds, there should be an increase in net exports - i.e. a boost in Keynesian aggregate demand.
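For reference, the textbook statement of the condition, derived assuming trade starts out balanced and export supply is perfectly elastic: a depreciation raises net exports if and only if

\[
|\eta_X| + |\eta_M| > 1,
\]

where \(\eta_X\) is the price elasticity of foreign demand for our exports and \(\eta_M\) is the price elasticity of our demand for imports. Intuitively, the depreciation worsens the terms of trade one-for-one, so the quantity responses have to be large enough to overcome that.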

Indeed, during the 1990s, Canada experienced a depreciation in its exchange rate with the U.S. dollar, and a large increase in net exports:
Net exports practically move in lockstep with the exchange rate. But, note that much of the increase in net exports in the 1990s had already happened by 1995. Further, beginning in 2000, the exchange rate depreciation and the increase in net exports quickly reverse themselves. It seems to me that, if a country is really interested in exploiting the effects of devaluation - whatever they may be - it needs the devaluation to be permanent. In this case, note that there is actually a net exchange rate appreciation from 1995 to 2005.

What was going on with monetary policy? One standard approach might be to look at short-term overnight interest rates. Here's the fed funds rate and the overnight interest rate in Canada, which is what the Bank of Canada targets day-to-day:
So, if we used the overnight nominal interest rate to measure the "tightness" of monetary policy (which may or may not be appropriate), we might say that Canadian monetary policy was tighter than U.S. policy from 1985 through the end of 1995, U.S. policy was tighter for a two-year period from 1996 to 1998, and it's more or less a wash after that. But it might be more appropriate to think about monetary policy in terms of the ultimate goals of the central bank. In 1991, the Bank of Canada adopted an inflation target of 2% for the consumer price index. How did that work out?
You can see in the chart that the Bank of Canada has been successful, since 1991, at achieving their 2% inflation target. In fact, the CPI is below the 2% growth path from 1991, and inflation has on average been lower in Canada than in the U.S. Note that this is particularly the case for the immediate post-1995 period. Thus, if the job of the central bank is controlling inflation, I think we would say that a lower inflation rate is an indicator of tighter monetary policy. In other words, whatever was causing the exchange rate to depreciate in Canada during the 1990s, it wasn't monetary policy.

What are we to conclude from this? Post-1990 Canada certainly looks like a country with successfully managed fiscal and monetary policy. The federal government brought fiscal policy under control and reduced the burden of government debt, and the central bank adhered to its announced inflation target. How did they do it? In the 1995 federal budget, some well-liked social programs, including government health care, remained more or less intact, and cuts were made in a politically palatable way. Of course, it didn't hurt that the U.S. economy was booming at the time. Are there lessons that we can translate to Greece in the current era? Only the obvious I think. Political stability and good government are important.

Friday, June 19, 2015

All Mathy and No Place to Go

If you're having trouble understanding Paul Romer's elaborations on mathiness, this post from last week probably won't help much either. As Romer admits, he's fighting mathiness with wordiness, and for me the words aren't always making sense.

Here are some of the words which seem to be at the heart of his argument, which is a response to David Andolfatto's post:
The provably false statements that economists like Andolfatto make (and he is certainly not alone) may be more than mere signals. They might be an irreversible commitment to stay at an institution where his club is already in control because they prevents someone from ever being employable at a competitive institution where logic is still valued.
I read that several times, and I think I'm closer to understanding it than after my first shot at it, but I'm still not quite clear on what Romer is trying to say. Let's work through this:

1. "The provably false statements that economists like Andolfatto make..." Andolfatto made some statements. They are "provably false" statements, which would be a bad thing if it were true, especially if David knows they are false. Then David would be a liar of course. Further, there other economists "like Andolfatto" who are doing the same thing. He's apparently a member of an identifiable group of economists who are doing something systematic, perhaps knowingly making false statements. Potentially we have a group of lying scoundrels on our hands.

2. "...may be more than mere signals." So making false statements is a signal of something, according to Romer. If you have been reading Paul's missives, you'll know that a "provably false statement" is a signal of a general state of "mathiness," an affliction - indeed, potentially a conspiracy to deceive - whereby the perpetrators subdue unwitting readers with a torrent of impenetrable math designed to cover up poor economics.

3. "They might be an irreversible commitment to stay at an institution where his club is already in control..." The institution is the place where Andolfatto and his ilk work, apparently. There may be a club controlling the place where he works, and possibly David has made an "irreversible commitment" to stay there - on the scale of a pact with the devil. David went down to the crossroads, and came back playing some serious bottleneck guitar, but at what cost?

4. "...because they prevents someone from ever being employable at a competitive institution where logic is still valued." This is the cost. Make "false statements" and you're stuck working where you're working now, because people who work in places where "logic is still valued" won't hire you.

Well, as far as I can tell, Paul Romer's drive for an eradication of "mathiness," and his support of "truth," and "norms of science," has little to do with valuing logic, and everything to do with Paul's own professional disputes, which are really of little interest to most of us. It's nice that people have an opportunity to discuss the economics of growth and innovation, which is fundamental in economics - though we perhaps have not progressed as far as we might have hoped in understanding it. But Romer's conspiracy theories are on a par with grassy knoll theories. Start probing, and there's nothing there. In particular, there are no falsehoods in Andolfatto's blog post, as far as I can tell. I disagree with David on a daily basis, but he's one of the more honest and un-clubby economists I have known in my professional career. How many other economists attach their names to their referee reports?

So, my advice for Paul Romer is to lighten up. I think he would rather be remembered for his fine early-career research on economic growth than for late-career grumpiness and nastiness.

Monday, June 8, 2015

Economics and Deception

I was reading Noah Smith's "Economic Arguments as Stalking Horses," which at the minimum got me interested in finding out what a stalking horse is. Animals understand which other animals will cause them harm, they key into the movements of those other animals, and they'll get out of the way quickly if they understand that something potentially harmful is in the vicinity. Birds, for example, don't feel threatened by horses, but they are smart enough to figure out that humans are bad news. So, if you hide behind a horse, you can get close enough to a bird to shoot it. Indeed, you can train a horse to be the thing you hide behind to hunt birds, and then "stalking horse" becomes its job description. Interesting.

This may not be the best analogy for the idea Noah wants to get across, but it at least captures the notion that deception is involved, so we'll go with it. Noah is interested in this quote from Russ Roberts, on Twitter (which I avoid like the plague):
Just a curious coincidence that economists who like stimulus want bigger government and those who oppose it prefer smaller.
One can find counterexamples, of course - John Taylor and Greg Mankiw wouldn't object to being called Keynesians - but I think we could characterize this as a strong empirical regularity. And in principle the existence of this regularity may be completely innocent, with no deception involved. Maybe the world consists of two sets of people. There is a set of non-interventionists convinced by the data that governments are somewhat inept, and should be given a fairly narrow set of tasks to do. The government may be viewed by these people as being just as inept at stabilization policy as at running a health care system, for example. Or non-interventionists may think that the economy works in a self-correcting way, so that, even if the government is not inept, its ability to intervene is extremely limited. The second set of people - the interventionists - thinks that governments are smart. Indeed, interventionists may think the average Jane/Joe is quite stupid - so stupid that she or he needs the government to help her or him make economic decisions. From the point of view of interventionists, the government should have a lot on its plate, including stabilization policy. The private sector is messed up in long-run and short-run fashions, and needs fixing.

In principle, the non-interventionists and the interventionists could both be honest, using - to the best of their respective abilities - the best available economic theory and evidence. But, you may wonder, what is the best available economic theory and evidence on the matter at hand? What does economic science say the government should do? There has certainly been plenty of work on optimal taxation, both in static and dynamic contexts, and with and without private information. Also, mechanism design in part grew out of problems in which we want to elicit preferences. For example, we have something - national defense for example - which is clearly a public good. How much of it do we want? Well, if you ask people how much they want and make it clear that you will tax them more the more national defense they say they want, they'll say they don't want much. The result will be under-provision of national defense. If you apply mechanism design to that problem, you can figure out how to get an efficient quantity of public goods provision, while getting everyone to truthfully report how much they want (there's a sketch of such a mechanism after the quote below). Perhaps surprisingly (but not surprising if you pay attention), Mike Woodford's work on New Keynesian models was motivated in part by work in public finance. In the foreword to Woodford's "Interest and Prices," he describes the genesis of his ideas:
My advisor, Bob Solow, always insisted on the unity of microeconomics and macroeconomics, and wore both hats with equal flair. He challenged me, while I was still writing my dissertation, to try to integrate sticky prices into the kind of intertemporal general-equilibrium models that were then becoming the dominant paradigm for macroeconomic analysis. Another of my teachers, Peter Diamond, insisted upon the importance of public economics that would allow an integrated treatment of inefficient allocation of resources across sectors and macroeconomic stabilization.
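Before moving on, here's a minimal sketch of the pivotal (Clarke) mechanism I alluded to above - one standard answer from the mechanism design literature to the public goods elicitation problem. The valuations and cost are hypothetical, just for illustration:

```python
# A minimal sketch of the pivotal (Clarke) mechanism for a binary public
# good. Cost is split equally; the good is provided if reported valuations
# cover the cost. An agent pays a tax only if her report flips the
# decision, and the tax equals the net surplus everyone else forgoes.

def clarke_mechanism(reports, cost):
    n = len(reports)
    share = cost / n
    net = [v - share for v in reports]  # each agent's net value of provision
    provide = sum(net) >= 0

    taxes = []
    for i in range(n):
        others = sum(net) - net[i]
        provide_without_i = others >= 0
        # Pivotal agents pay the externality they impose on the others.
        taxes.append(abs(others) if provide != provide_without_i else 0.0)
    return provide, taxes

# Hypothetical numbers: three agents, national defense costing 90.
provide, taxes = clarke_mechanism([50, 30, 10], cost=90)
print(provide, taxes)  # True [20.0, 0.0, 0.0] - only the first agent is pivotal
```

The key property: because your tax depends only on the others' reports, misreporting can change the decision but never lower your tax, so truthful reporting is a dominant strategy - exactly the elicitation property described above.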

So, in modern economics, the theory of government could be thought of as an integrated whole. If we're thinking about "stabilization" and the size of government, those are just parts of the same problem. You really can't separate "short run" from "long run." How much of that thinking finds its way into discussions among people who actually have some influence on fiscal policy decisions? Very little. But monetary policy is another story. I think it's fair to say that central bankers are pretty serious about using available economic theory and evidence.

So perhaps we should focus more narrowly on monetary policy. Think of this as part of government, and that we could have non-interventionist and interventionist views about it. As I stated above, the non-interventionists and the interventionists could in fact be completely up-front about what they are doing - no deception involved. But ... we know how people are. Sometimes people shade the truth; they leave out facts that might be important to the case at hand; they misrepresent the views of people with a different take on things; they may even lie outright. How are you going to figure that out? Well, we are all students of human behavior, and we've encountered deceptive people since we started interacting with other two-year-olds. But let's confine attention here to economists - who of course may sometimes behave like two-year-olds, just like anyone else.

Why might an economist be motivated to be deceptive?

1. Follow the money: What is this person paid to do? If they are telling you something that might keep the gravy train rolling, or elicit another gravy train, maybe you should be suspicious.
2. Follow the money (more subtle version): If you are thinking that academic economists are immune, forget it. A successful paradigm pays off big time. Maybe you can make your paradigm successful by exaggerating its virtues, or by trashing your competitors.
3. Too old, too lazy, or insufficiently talented, to change: The new paradigm may be the better one. But maybe you invested in the old paradigm. So the new paradigm will depreciate your human capital severely. It's lower cost for you to oppose the new paradigm than to invest.

So, (1), (2), and (3) are impediments to science, and we would like to root these things out if we could. It's a policing problem. But, good economic science is a public good, so how do we get people to cooperate and do the policing? Maybe we have to depend on the fact that people are hard-wired to punish bad behavior. We had a long discussion about this in the comments section of this post. In any case, before we even get started with policing, we have to know how to identify the bad behavior. Understanding motivation obviously helps, but there are also some signals we should pay attention to - here, think in terms of a Spence signalling model:

i) A high ratio of destruction to construction: Constructive arguments are costly, and the more information the deceiver provides, the easier it is to catch him/her. So the artful deceiver uses up a lot of words in trashing his/her perceived opponents, and few words in detailed explanation of his or her own ideas.
ii) Repetition: If you want to tell lies, you clearly can't admit what you're up to. No matter how bald-faced the lie, the deceiver will keep repeating it, hoping that repetition will somehow create the ring of truth.
iii) Behaves like a politician: Politicians will try to impugn the character of their opponents. If you can get people to believe your opponent is a bad person, maybe they'll believe your opponent also does bad science.
iv) Uses up a lot of words discussing general bad behavior (of others of course): It takes one to know one.

Now, to help you out, let's do examples. Allan Meltzer wrote a piece for the Wall Street Journal in May 2014 called "How the Fed Fuels Future Inflation." Keep in mind here that a year has passed, but the world doesn't look so different now from a year ago, in regard to what Meltzer is discussing. Basically, the Fed's balance sheet is large, and inflation is low. But, Meltzer says:
Never in history has a country that financed big budget deficits with large amounts of central-bank money avoided inflation. Yet the U.S. has been printing money—and in a reckless fashion—for years.
The quote pretty much encapsulates the basic ideas in the piece. Let's do some fact-checking:

1. Was the U.S. budget deficit "big" in 2014, when Meltzer wrote his piece? At 2.8% of GDP, it was larger than in some years, but in the three previous cyclical troughs (looking only at the federal government deficit), the numbers were 9.8% (2009), 3.4% (2004), and 4.5% (1992). Further, in 2014 we were in a slow recovery phase, with GDP much below its long-term 3% growth trend, which would tend to make tax revenue low. So, I think it's a stretch to call the federal government's budget deficit "big." We know that out-of-control fiscal policy can be associated with big inflations, but it's hard to make the case that the federal government is out of control in this instance.

2. Is the creation of "central bank money" always associated with inflation? Well, apparently not, and we have an important example sitting under our nose of course. It's well known that, as Meltzer says, base money has increased severalfold since the beginning of the last recession in the United States. Here's what's happened to the inflation rate - raw PCE deflator and core PCE:
Both of these indicators have been below the Fed's 2% inflation goal for more than 3 years. Even though some of the recent low inflation reflects low oil prices, there is no sign that this is turning around. And the U.S. is not an anomaly. In Japan, the central bank's balance sheet increased by 120% from January 2008 to May 2014, and by 155% from January 2008 to May 2015. In Japan, the year-over-year inflation rate was 1.4% in May 2014 (excluding tax effects and food), and -1.7% in April 2015. In Switzerland, which has the largest central bank balance sheet (relative to GDP) in the world, as far as I know, the monetary base increased by 690% from January 2008 to May 2014, and by 870% from January 2008 to April 2015. But, the year-over-year CPI inflation rate in Switzerland was 0.2% in May 2014, and -0.9% in March 2015. So, a more accurate statement would be: "As yet, large-scale asset purchases by central banks, given zero or near-zero nominal interest rates, have failed to produce higher inflation."
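To see how hard those numbers are to square with a stable-velocity quantity theory, write the equation of exchange, \(MV = PY\), in growth rates:

\[
\frac{\Delta M}{M} + \frac{\Delta V}{V} = \pi + \frac{\Delta Y}{Y}.
\]

Back-of-the-envelope, using the Swiss numbers above: a 690% increase in base money over roughly six and a third years is money growth of nearly 40% per year, so with inflation near zero and real growth of a couple of percent, velocity must have been falling at something like 35% per year. The identity always holds; the quantity-theoretic inflation prediction only follows if velocity is stable, and at near-zero nominal interest rates it evidently is not.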

Is Meltzer deceiving us? First, we have to ask whether he's motivated to do so. I don't think this is a "follow the money" motivation - either the obvious or subtle version. But Meltzer is definitely wedded to a paradigm, which is the quantity theory of money. He might be able to modify that paradigm (though I have my doubts) to reconcile it with the observations he's confronted with, but clearly he's not trying too hard. My guess is that the motivation is #3. Meltzer is 87, so the fact that he is alive and writing for the WSJ is a feat in itself. If I'm putting sentences together at that age I'll be amazed. Nevertheless, the quantity theory is what he knows, and it appears he's going to stand by it. Learning is done in this case, I think.

Is Meltzer giving off signals of deception? Definitely. He spends more words impugning policymakers than making the case for his worldview. As well, if you Google "Krugman Meltzer," Krugman will show you many cases where Meltzer makes the same charges, so there is repetition. However, though his opponents are named, he doesn't make them out to be bad characters. There's some politics in the piece though. In particular, when discussing the deficit, Meltzer mentions only the executive branch. We all know that the legislative branch plays an important role in fiscal policy.

Another good example - a lot more fun - is Paul Krugman. Here, we'll look at two blog posts and his most recent column in the NYT. The first blog piece is from two days ago, and is a direct response to Noah's blog post. Krugman is trying to convince us that he's innocent. He argues that he's only looking at the evidence and drawing the obvious conclusions:
First of all, the case for viewing most recessions — and the Great Recession in particular — as failures of aggregate demand is overwhelming.
The idea is that thinking of most recessions in this way will be helpful. "Overwhelming" means this is pretty much universal. We'll all benefit from this insight. A "failure of aggregate demand" means something within a particular paradigm. Basically, "viewing" recessions as failures of said aggregate demand requires that you work within that paradigm, which is AD/AS, IS/LM. That's obvious if you've read Krugman. He thinks that paradigm is the cat's meow. So, within the paradigm, "aggregate demand failure" means the aggregate demand curve shifted left. You'll note first that, even for a well-schooled AD/AS, IS/LM fanatic, this is very non-specific. What happened to shift the AD curve? Was it an autonomous drop in consumption, an autonomous drop in investment, a shift in the money demand function, or what? Shouldn't we care?

Since we're into overwhelming territory, we would have to make the case that at least someone would find the information that the AD curve had shifted to the left useful. Suppose I am Ben Bernanke, and it's October 2008. Paul Krugman walks in the door and tells me the AD curve has shifted left. Does this help me to decide how to get banks to take my discount window loans, which I'm concerned they are not taking up for fear that they might be stigmatized as a result? Does this tell me which large financial institutions I should support, and how, and which ones I should just let go? Does this tell me why there are now large differences among particular market interest rates that did not exist several months previously? You get the idea.

On the other hand, suppose I am an empirical macroeconomist, and I'm trying to understand the Great Recession. I have coffee with Paul Krugman, and he tells me not to worry. This looks pretty much like other recessions - just an insufficiency of demand. As an empirical macroeconomist, I may not know what Paul is talking about, as there now exist practicing macroeconomists who have never seen AD/AS, IS/LM. But, for example, Christiano, Eichenbaum, and Trabandt know AD/AS, IS/LM (at least I'm pretty sure the first two do). They're well-known empirical macroeconomists, and they're not coming out and saying the Great Recession is about aggregate demand deficiency. They have a model, and the model has stochastic shocks which they give names to, and none of those shocks appear to have the name "aggregate demand." They have taken the time to write a whole paper in which they try to figure out how the shocks account for the Great Recession, and what the propagation mechanism is. Conclusion:
We argue that the vast bulk of movements in aggregate real economic activity during the Great Recession were due to financial frictions interacting with the zero lower bound.
Sorry, but that's not in AD/AS, IS/LM.

Now, consider another person. This one is older than the average undergraduate - old enough to actually care about the Great Recession, and not think of this as ancient history. This person is, say, 30, and worked on Wall Street during the financial crisis. She saw a lot of stuff. Volatile financial market activity, unemployed people, large financial institutions in trouble, etc. Now she's motivated to go back to school and take some economics so she can understand all that stuff. She takes an intro-to-macro course, learns AS/AD, and is told that the Great Recession is just like all the other recessions - AD shifts left. Given her experience in the world does this get her excited? Does this information somehow put all her practical experience into perspective? I don't think so.

So, Krugman's post is looking suspicious. He's making strong claims without much to back them up. You can say it's all in his other blog posts, but I've read those too, and I don't buy it. There's not much dissing in this particular post, but Krugman sneaks something in at the end:
The point is that while it’s definitely OK to scrutinize economists’ motives — to ask whether they’re responding to logic and evidence, or just talking their political book — assertions that it’s all politics deserve the same scrutiny. Is my behavior consistent with claims that my views are purely a reflection of my political preference? And if it isn’t — which I don’t think it is — what’s driving such claims? Might it be … politics, deployed on behalf of economic doctrines that have lost the substantive debate?
I forgot to include this above. It's a great strategy to muddy the waters by impugning the motives of those who are trying to police you.

The second exhibit is Krugman's NYT column from today. This is concerned with
...people who keep saying the same thing no matter how much evidence accumulates that it’s completely wrong.
Here, Krugman is in full destructive mode. Who is he fighting?

a) People (Meltzer, goldbugs, etc.) who are worried about too much inflation.
b) Bob Lucas who, he claims, "accused Christina Romer, the administration’s chief economist, of intellectual fraud."
c) People who don't like Keynesian economics.
d) Obamacare naysayers.

We'll leave him alone on (a) and (d), as he's got a case there, though he's been beating it to death. Bob Lucas certainly doesn't deserve this treatment, though. On (b), if you investigate, you will find the supposedly offending discussion in the transcript of a conference from 2009. So this is ancient news by now. Further, if you read carefully, you'll find that Lucas's remarks come in a panel discussion, and that Krugman is quoting him out of context to make him look bad. I say Lucas is not guilty.

On Keynesian economics, we've got another bold claim, that Keynesianism is
...an approach that, among other things, correctly predicted quiescent inflation...
Here, I had to look up "quiescent" to make sure I know what it means. He's saying that Keynesian economics correctly predicted the low inflation that we've had. Of course, basic AS/AD, IS/LM doesn't "predict" inflation. Inflation is not in there, as it's a static model. Presumably what Krugman means is that the Keynesian Phillips curve does a good job of predicting inflation. We should check that out. This is a scatter plot of quarterly year-over-year PCE inflation and the unemployment rate since the end of the last recession. The line connects the observations from 2009Q3 on the far right to 2015Q1 on the far left:
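(If you want to reproduce this chart, here's a minimal sketch pulling the data from FRED - PCEPI and UNRATE are the actual FRED mnemonics; the date handling is my own choice:)

```python
# Scatter of quarterly year-over-year PCE inflation against the
# unemployment rate, 2009Q3-2015Q1, with a line connecting quarters.
import matplotlib.pyplot as plt
from pandas_datareader import data as pdr

start, end = "2008-01-01", "2015-06-01"
pce = pdr.DataReader("PCEPI", "fred", start, end)["PCEPI"]
unrate = pdr.DataReader("UNRATE", "fred", start, end)["UNRATE"]

infl_q = pce.resample("QS").mean().pct_change(4) * 100  # YoY PCE inflation
un_q = unrate.resample("QS").mean()

df = (infl_q.rename("inflation").to_frame()
      .join(un_q.rename("unemployment"))
      .loc["2009-07-01":"2015-01-01"])  # 2009Q3 through 2015Q1

plt.plot(df["unemployment"], df["inflation"], marker="o")
plt.xlabel("Unemployment rate (%)")
plt.ylabel("PCE inflation, year-over-year (%)")
plt.show()
```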
So, the Phillips curve would predict that, when unemployment is falling, in a world in which inflation expectations aren't moving around much (which we could argue is the case here), inflation should be rising. Do you see that in the chart? Neither do I. Indeed, some Phillips curve adherents have been confidently predicting for a long time that the falling "output gap" will lead to inflation moving back to 2%. That hasn't happened, and those people should feel embarrassed in the same way that Allan Meltzer should feel embarrassed. Thus, maybe we shouldn't give Krugman a pass on (a). He's just as wrong as Meltzer, but with more company, which is a deeper problem.

Krugman goes on in the same vein in this blog post, which is an attempt to defend himself against the charge of hypocrisy. Looks to me like he's digging himself in deeper.

So, is Krugman being deceptive? First, motivation:

1. Follow the money: The Krugman enterprise is a very successful one. He is well-paid by the New York Times, he commands a large speaking fee, and he also has a gig with CUNY. Krugman serves a large constituency that is not really interested in science, or in learning the intricacies of economics. Other people who read Krugman have a smattering of economics, and I'm sure they find it flattering, and reassuring, to be told by a guy with a Nobel prize that what they know is all they need to know about what is going on in the world. Behaving in the way he does just feeds his success in his chosen occupation.
2. Follow the money (subtle version): Doesn't apply. Krugman is past caring about success in the academic journals.
3. Too old, too lazy, or insufficiently talented, to change: Plenty of talent here, of course, and I'm going to stick up for my cohort of over-60s, and say he's not too old either. There's an element of too-lazy though. At the peak of his academic career, Krugman became accustomed to being at the top of his field. He had a prestigious position at a top school, a well-respected academic record, and influence in the international trade community. I think he's not entirely content earning a good living as a writer/blogger/speaker and political pundit. He seems to want to be influential as a macroeconomist. But he can't do that in the conventional way, as it's too costly. So what else can he do but try to cut the macro profession down to size? I think that's part of the destructive impulse here.

Second, the signals:

i) A high ratio of destruction to construction: If you've been reading Krugman, you've seen a lot of this. Typically he can't write a blog post or NYT column without slamming someone - real or imagined. The last NYT column is probably the worst it gets - it's quite unremitting.
ii) Repetition: Again, using the last NYT column as an example, there's little in here he hasn't said before. For example, you can do a search on the supposed bad behavior of Bob Lucas vis-a-vis Christina Romer, and you'll find that repeated again, and again, and again, over the space of years.
iii) Behaves like a politician: In a discussion of how people stick to ideas contradicted by evidence, he for some reason slips in a claim that Bob Lucas is a bad guy. What for?
iv) It takes one to know one: Here's a selection of Krugman's helpful advice from the stuff I've been referencing:
...the place to start fighting is within yourself...making the same wrong prediction year after year, never acknowledging past errors or considering the possibility that you have the wrong model of how the economy works — well, that’s derp... the peddlers of politically inspired derp are quick to accuse others of the same sin...The first line of defense, I’d argue, is to always be suspicious of people telling you what you want to hear...Fighting the derp can be hard, not least because it can upset friends who want to be reassured in their beliefs. But you should do it anyway: it’s your civic duty.

I don't want to leave you on a negative note. After all, my goal is to try to help you to escape unnecessary negativity. Generally, I think that we haven't been thinking about the inflationary process in the right way. But if we give our models a chance, and pay attention to what the data is telling us, the answers are staring us in the face. In most monetary models I know about, and that includes New Keynesian models, low nominal interest rates in fact cause inflation to be low. That's a monetary theory of inflation. The central bank controls nominal interest rates. If nominal interest rates are high, inflation is high. If nominal interest rates are low, inflation is low. And the data is consistent with that theory. Japan has had low nominal interest rates for 20 years, and that has produced persistently low inflation. Many central banks in the world, including those in the U.K., the U.S., the Euro area, Denmark, Switzerland, Sweden, and Japan, have targeted nominal interest rates at persistently low levels. And inflation in those countries is persistently low.
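If you want that logic in one equation, it's the Fisher relation, which holds in some form in essentially all of these models:

\[
i_t = r_t + E_t\,\pi_{t+1},
\]

where \(i_t\) is the nominal interest rate, \(r_t\) is the real interest rate, and \(E_t\,\pi_{t+1}\) is expected inflation. If the real rate is pinned down by fundamentals in the long run, then a central bank that pegs \(i_t\) at a low level indefinitely must, in these models, end up with correspondingly low inflation.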

The problem, as I argue in this paper (with David Andolfatto), is a policy trap. Taylor-rule policymakers, operating under the mistaken impression that low nominal interest rates will ultimately produce higher inflation, are stuck with perpetually low nominal interest rates and low inflation. There are elements of that idea that are not new, but we do some novel things in the paper, I think. Basically, though, I think this is an idea we should run with. I'm happy to run away from it once it doesn't work though. Wouldn't want to be derpy.

Sunday, May 31, 2015

Seasonality, Measurement, and First Quarter U.S. GDP

After the latest revisions to U.S. real GDP by the Bureau of Economic Analysis (BEA), the estimate for real GDP growth in the first quarter of 2015, seasonally adjusted at an annual rate, was -0.7%. So, have we entered a recession or what? In answering that question, we can learn something about how GDP is measured, and how seriously we want to take GDP measurement and the interpretation of quarterly real GDP growth rates.

Real GDP in the United States since the beginning of 2007 looks like this:
If you focus on what's happened since the end of the last recession, in 2009Q2, you'll notice that GDP has not grown in every quarter. Indeed, if we focus just on the quarterly growth rates in the first quarter, we get:

2010Q1: 1.7%
2011Q1: -1.5%
2012Q1: 2.3%
2013Q1: 2.7%
2014Q1: -2.1%
2015Q1: -0.7%

So, the average first-quarter growth rate since the end of the recession has been 0.4%, while the average growth rate over that period was 2.2%. This might make you wonder whether there is something funny going on with the seasonal adjustment of the data. The same thought occurred to Glenn Rudebusch et al. at the S.F. Fed, and they showed that, in fact, there is residual seasonality in the real GDP time series. They ran the supposedly-seasonally-adjusted real GDP time series through Census X-12 (a standard statistical seasonal adjustment filter), and came up with an estimate of first-quarter 2015 real GDP growth that was 1.6 percentage points higher than the reported number at the time (before the latest revisions). Apparently the BEA has been made aware of this problem, and is working on it.
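If you want to run that kind of check yourself, statsmodels wraps the Census X-13ARIMA-SEATS program. A minimal sketch, assuming you have the Census binary installed (the x12path below is a placeholder for wherever it lives on your machine):

```python
# Check for residual seasonality by re-running "seasonally adjusted" real
# GDP through Census X-13ARIMA-SEATS, in the spirit of the SF Fed exercise.
# Assumes the Census X-13 binary is installed; adjust x12path accordingly.
from pandas_datareader import data as pdr
from statsmodels.tsa.x13 import x13_arima_analysis

gdp = pdr.DataReader("GDPC1", "fred", "1990-01-01", "2015-06-01")["GDPC1"]
gdp = gdp.asfreq("QS")  # X-13 needs an explicit quarterly frequency

result = x13_arima_analysis(gdp, x12path="/usr/local/bin", prefer_x13=True)

# Annualized quarterly growth of the re-adjusted series:
growth = 100 * ((result.seasadj / result.seasadj.shift(1)) ** 4 - 1)
print(growth.tail(8))
```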

Why do we have this problem? In the United States, the collection of economic data is a decentralized activity, conducted by several government agencies. The Bureau of Labor Statistics collects labor market and price data (why the consumer price index is a labor statistic I'm not sure), the Fed collects financial data, the Congressional Budget Office collects data on government activity, and the Census Bureau collects demographic data. Finally, the Bureau of Economic Analysis collects data for the National Income and Product Accounts (NIPA), and international trade statistics. It's this hodgepodge of data collection that makes FRED useful (shameless advertising), as FRED puts all of that data together (plus much more!) in a rather user-friendly way.

When the BEA constructs an estimate for real GDP, it uses as inputs data that comes from other sources, including (I think) some of the other government statistical agencies listed in the previous paragraph. Some of the data used by the BEA as inputs has been seasonally adjusted before it even gets to the BEA; some has not been adjusted. What the BEA does is to seasonally adjust all the inputs, and then construct a real GDP estimate. It might be surprising to you, as it is to me, that the resulting GDP estimate could exhibit seasonality. But, behold, it does. Maybe somebody can explain this for us.


As economists, what we would like from the BEA are both seasonally adjusted and unadjusted estimates of real GDP, for reasons I'll discuss in what follows. But given how the BEA currently does the data collection, that's impossible, and the BEA reports only seasonally adjusted real GDP. It might help if we had a single centralized federal government statistical agency in the United States, but if you have ever dealt with Statistics Canada, you'll understand that centralization is no guarantee of success. Statistics Canada's CANSIM database is the antithesis of user-friendliness. Go to their website and do a search for anything you might be interested in, and you'll see what I mean. For example, the standard GDP time series go back only to 1981, and the standard labor market time series to 1990. Why? Statisticians in the agency have decided that GDP prior to 1981, for example, is not measured consistently with post-1981 GDP, so they don't splice together the post-1981 data and the pre-1981 data. You have to figure out how to do that yourself if you want a long time series.

Getting back to seasonal adjustment, why do we subject time series data to this procedure? Many time series have a very strong seasonal component, for example, monthly housing starts:
That's not too bad, as we can eyeball the raw time series and roughly discern long-run trends, cyclical movement, and the regular seasonal pattern. But try getting something out of the monthly percentage changes, where the seasonal effects dominate:
However, if we take year-over-year (12-month) percentage changes, we get something that is more eyeball-friendly:
That's not unrelated to what seasonal adjustment does. If you're interested in the details of seasonal adjustment, or want to acquire your own seasonal adjustment software, go to the Census Bureau's website.
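As a quick illustration of the monthly-versus-year-over-year point in code (HOUSTNSA is FRED's mnemonic for the unadjusted starts series):

```python
# Monthly vs. year-over-year percent changes in unadjusted housing starts.
# HOUSTNSA is FRED's not-seasonally-adjusted total starts series.
from pandas_datareader import data as pdr

starts = pdr.DataReader("HOUSTNSA", "fred", "2000-01-01", "2015-06-01")["HOUSTNSA"]

monthly = starts.pct_change(1) * 100   # dominated by the seasonal sawtooth
yoy = starts.pct_change(12) * 100      # 12-month changes net out the season

# The monthly changes are several times more volatile than the YoY changes.
print(monthly.std(), yoy.std())
```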

As macroeconomists or policymakers, if we're making use only of seasonally-adjusted data, we're throwing away information. Basically, what we're getting is the executive summary. I don't think we would like it much, for example, if the government provided us only with detrended time series data - detrended using some complicated procedure we could not reverse-engineer - and wouldn't provide us with the original time series. I have not been able to find much recent work on this, but at one time there was an active (though small) research program on seasonality in macroeconomics. For example, Barsky and Miron wrote about seasonality and business cycles, and Jeff Miron has a whole book on the topic. Is the seasonal business cycle a big deal? It's hard to tell from U.S. data as, again, the BEA does not appear to provide us with the unadjusted data (I searched to no avail). However, Statistics Canada (though they fall down on the job in other ways) publishes adjusted and unadjusted nominal GDP data. Here it is:
So, you can see that the peak-to-trough seasonal decrease in nominal GDP in Canada is frequently on the order of the peak-to-trough decrease in nominal GDP in the last recession. Barsky and Miron found that the seasonal cycle looks much like the regular business cycle we see in seasonally adjusted data, in terms of comovements and relative volatilities. So, the seasonal ups and downs are comparable to cyclical ups and downs, in terms of magnitude and character. Not only that, but these things happen every year. In particular, we get a big expansion every fourth quarter and a big contraction every first quarter. That raises a lot of interesting questions, I think. If seasonal cycles look like regular business cycles, why isn't there more discussion about them? There are macroeconomists who get very exercised about regular business cycles, but they seem to have no interest in seasonal cycles. How come?

Aside from seasonality, what else could be going on with respect to first-quarter 2015 U.S. real GDP? A couple of years ago, the Philadelphia Fed introduced an alternative GDP measure, GDP-Plus. As we teach undergraduates in macro class, there are three ways we could measure GDP: (i) add up value-added at each stage of production, for every good and service produced in the economy; (ii) add up expenditure on all final goods and services produced within U.S. borders; (iii) add up all incomes received for production carried on inside U.S. borders. If there were no measurement error, we would get the same answer in each case, as everything produced is ultimately sold (with some fudging for inventory accumulation), and the revenue from sales has to be distributed as income. In most countries, we don't actually try to follow approach (i), but there are GDP measures that follow approaches (ii) and (iii). In the U.S. we call those GDP (actually a misnomer, as it's really gross domestic expenditure) and GDI (gross domestic income). Over a long period of time, the two measures look like this:
You can see that for long-term economic activity, it makes little difference whether we're looking at GDP or GDI. Over short periods of time, it does make a difference:
You can see in this last chart that the differences in quarterly growth rates could be large. Note in particular the first quarter of 2015, where GDI goes up and GDP goes down.

The GDP-Plus measure, developed by Aruoba et al., uses signal extraction methods to jointly extract from GDI and GDP the available information that is useful in measuring actual GDP - treated as a latent variable. The most recent GDP-Plus observation is +2.03% for first-quarter 2015. So that's another indication that the BEA estimate, at -0.7%, is off.
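The Aruoba et al. model is more elaborate, but the flavor of the signal extraction is easy to convey with a toy state-space model: latent "true" growth follows an AR(1), and GDP and GDI growth are two noisy measurements of it. A minimal Kalman filter sketch, with all parameter values made up for illustration:

```python
# A toy version of the GDP-Plus idea, not the Aruoba et al. model: latent
# "true" output growth g_t follows an AR(1), and measured GDP and GDI growth
# are each g_t plus independent measurement noise. The Kalman filter blends
# the two noisy measures. All parameter values here are illustrative.
import numpy as np

rho, sigma_g = 0.5, 1.0           # latent state: g_t = rho*g_{t-1} + e_t
sigma_gdp, sigma_gdi = 0.8, 0.8   # measurement noise standard deviations

def kalman_gdp_plus(y_gdp, y_gdi):
    H = np.array([[1.0], [1.0]])               # both series observe g_t
    R = np.diag([sigma_gdp**2, sigma_gdi**2])  # measurement error variances
    g_hat = 0.0
    P = sigma_g**2 / (1 - rho**2)              # unconditional state variance
    estimates = []
    for y in zip(y_gdp, y_gdi):
        g_pred = rho * g_hat                   # predict
        P_pred = rho**2 * P + sigma_g**2
        S = H @ H.T * P_pred + R               # update on both measurements
        K = P_pred * H.T @ np.linalg.inv(S)
        g_hat = g_pred + (K @ (np.array(y) - g_pred)).item()
        P = (P_pred - K @ H * P_pred).item()
        estimates.append(g_hat)
    return estimates

# 2015Q1-flavored example: GDP growth reads -0.7, GDI growth reads +1.9.
print(kalman_gdp_plus([2.0, -0.7], [2.5, 1.9])[-1])  # about 0.7, between the two
```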

Further, the recent labor market information we have certainly does not look like labor market data for an economy at the beginning of a recession. Employment growth has been good:
Weekly initial claims for unemployment insurance, relative to the labor force, are at an all-time low:
And the unemployment rate is at 5.4%, the same as in February 2005, in the midst of the housing market boom.

So, it would be hard to conclude from the available data that a recession began in the first quarter of 2015 in the U.S. But, I think the more you dig into macroeconomic data, and learn the details of how it is constructed, the more skeptical you will be about what it can tell us. Most macro data is contaminated with a large amount of noise. We can't always trust it to tell us what we want to know, or to help us discriminate among alternative theories.

Thursday, May 21, 2015

Don't get mathy with me, or I'll give you a good shunning.

I had heard Paul Romer is disgruntled, and now that he's written down his thoughts, we can perhaps sort this out. We'll start with his recent blog post on "Protecting the Norms of Science in Economics." Here is Paul's view of science:
My reading of the evidence convinces me that a group of scholars can make progress toward the truth only if they share a commitment to the norms of science, a set of norms that support a reputational equilibrium that encourages trust and that rewards progress toward truth.
Think of truth as existing at the top of a mountain. Once we get to the top of the mountain we'll know it, as we'll be able to see a long way, but while we're climbing the mountain we're in a fog, and we can't see the top of the mountain. But we might be able to discern whether we're moving up, down, or just sitting in one place. Paul thinks that we can't just let scientists run loose to take various paths up the mountain with different kinds of gear, and with different companions of their choosing. According to him, we have to organize this enterprise, and it's absolutely necessary that we write down a set of rules that we will abide by, come hell or high water. And when he says "reputational equilibrium" most economists will know what he has in mind - there will be punishments (imposed by the group) for deviating from the rules.

Paul isn't just throwing this out as a vague idea. He has a specific set of rules in mind. We'll go through them one by one:

1. We trust that what each person says is an honest account of what he or she thinks is true. So, that seems fine. We'll all agree that people are at least trying to be honest.

2. We all recognize that reasonable people can differ and that no one has privileged access to the truth. Sure, people are going to differ. Otherwise it would be no fun. But there's that word "truth" coming up again. I really don't know what truth in science is - if I ever find it the surprise will likely induce cardiac arrest, a stroke, or some such. To my mind, we only have a set of ideas, which we might classify as useful, not-so-useful, and useless. One person's useful idea may be another's useless idea. Particularly in economics, there are many of us who can't be convinced that our works of genius are actually not-so-useful or useless. Truth? Forget it.

3. We take seriously the claims of people who disagree with us. What if the people disagreeing with us are idiots?

4. We are ready to admit that others might be right that each of us might be wrong. At first I thought there was a typo in this one, but I think this is what Paul intended. Sure, sometimes two people are having a fight, and no one else gives a crap.

5. In our discussions, claims that are recognized by a clear plurality of members of the community as being better supported by logic and evidence are the ones that are provisionally accepted as being true. This is absurd of course. We don't take polls to decide scientific merit. Indeed, revolutionary ideas - the ones that take the biggest steps toward Romerian truth - would be the ones that would fail, by this criterion. Scientists, particularly the older ones, become heavily invested in the status quo, and don't want to give it up. In casting their negative votes, they may even be convinced that they are adhering to (1)-(4).

6. In judging what constitutes a “clear plurality,” we put more weight on the views of people who have more status in the community and are recognized as having more expertise on the topic. The problem with (5) of course kicks in with a vengeance here. What community? Recognized how? What expertise relative to what topic? I get no weight because I work at the University of Saskatchewan and not Harvard, or what?

7. We update the status of a member of our community on the basis of his or her contribution to progress toward a clearer understanding of what is true, not on the basis of “unwavering conviction” or “loyalty to the team.” This I suppose is intended to answer my concerns from (6) about what "status" might mean. I guess our status is our ranking in the profession, according to goodness. Do a good thing, and you move up. Do a bad thing, and you move down. Who decides what's good and bad, and how good, and how bad? What prevents a promotion based on "loyalty to the team," disguised as a good thing?

8. We shun, or exclude from the community, someone who reveals that he or she is not committed to these working principles. Well, I would be happy to be shunned by this community - it really doesn't look like it's built for success. Faced with these rules, I'll deviate and find my own like-minded community.

So, to me those rules seem strange, particularly coming from an economist who, like the rest of us, is schooled in the role of incentives, the benefits of decentralization, and the virtues of competition. We might wish that things were more clear-cut in economics, but it's not going to happen. Our models have to be so simple that they are guaranteed to be wrong - they're always inconsistent with some phenomena, hopefully the ones we're not focused on when we construct the model. There can be radically different theories, with different implications, that are all consistent with the empirical evidence we have (which is often not so great). This just reflects the technological limitations of science - our ability to construct and analyze models, and our ability to collect data. Why not just embrace the diversity and move on?

At this point, you may be wondering what's bugging Paul. He must have something specific he's concerned about. To get some ideas about that, read Romer's recent AER Papers and Proceedings paper. This paper is in part about "mathiness." What could that mean? It certainly doesn't mean that using mathematics in economics is a bad thing. Paul seems on board with the idea that mathematical precision lends clarity to our economic ideas, while potentially keeping people honest. Once you write your economic argument down in formal mathematical terms, it's hard to cheat. Math, unlike the English language (or any other language on the planet), is unambiguous.

But, in trying to get our ideas across, math can work against us. A sophisticated mathematical argument may be impenetrable to the average reader. And a rigorous, mathematically-detailed, internally consistent model is not necessarily a good model. The model-builder may have left out details that are essential for addressing the economic problem at hand, or there may be blatant inconsistencies between the model and the empirical regularities that are germane to the problem. Even though Paul gives specific examples, however, I'm still not entirely clear on "mathiness." As far as I can tell, it's related to the impenetrability problem. A dishonest economist can construct a mathematically sophisticated model, churn out some results without being too careful, claim success, and hope no one notices the errors and inconsistencies. That would certainly be a problem, and I could imagine recommending rejection to the editor if I were asked to referee such a paper, or rejecting the paper if I were in an editorial position.

Is that what's going on in the growth papers that Paul cites in his AER P&P piece? Are the authors guilty of "mathiness" - dishonesty? I'm not convinced. What McGrattan-Prescott, Boldrin-Levine, Lucas, and Lucas-Moll appear to have in common is that they think about the growth process in different ways than Paul does, with somewhat different models. Sometimes they come up with different policy conclusions. Paul seems to think that, after 30 years, more or less, of research on the economics of technological change, we should have arrived at some consensus as to whether, for example, Paul's view of the world, or the views of his competitors, are somehow closer to Romerian truth. His conclusion is that there is something wrong with the winnowing-out process, hence his list of rules, and the attempt to convince us that M-P, B-L, L, L-M, and Piketty-Zucman too, are doing crappy work. I'm inferring that he thinks their papers were published in good places (our usual measure of value-added science) because they are well-connected big shots. It could also be that Paul just doesn't like competition - in more ways than one.

Monday, April 13, 2015

Sticky Prices, Financial Frictions, and the Ben Bernanke Puzzle

Noah Smith's Bloomberg post on the wonders of sticky price models caught my eye the other day. I'm going to use that as background for addressing issues on financial stability and monetary policy raised by Ben Bernanke.

First, Noah is more than a little confused about the genesis of sticky-price New Keynesian (NK) models. In particular, he thinks that Ball and Mankiw's "Sticky Price Manifesto" was a watershed in the NK revolution. Far from it. Keynesian economics went in several different directions after the theoretical and empirical revolution in macroeconomics. There was the coordination failure literature - Bryant, Diamond, and Cooper and John, for example. There was the sunspot literature. In addition, Mankiw, and Blanchard and Kiyotaki, among others, thought about menu costs. The "Sticky Price Manifesto" is in part a survey of the menu cost literature, but it reads like a religious polemic. You can get the idea from Ball and Mankiw's introduction to their paper:
There are two kinds of macroeconomists. One kind believes that price stickiness plays a central role in short-run economic fluctuations. The other kind doesn't... Those who believe in sticky prices are part of a long tradition in macroeconomics... By contrast, those who deny the importance of sticky prices depart radically from traditional macroeconomics. These heretics hold disparate views... heretics are united by their rejection of propositions that were considered well-established a generation or more ago. They believe that we mislead our undergraduates when we teach them models with sticky prices and monetary non-neutrality. A macroeconomist faces no greater decision than whether to be a traditionalist or a heretic. This paper explains why we choose to be traditionalists. We discuss the reasons, both theoretical and empirical, that we believe in models with sticky prices...
This is hardly illuminating. There are (were) only two kinds of macroeconomists? I've known (and knew in 1993) more kinds of macroeconomists than you can shake a stick at, and most of them don't (didn't) define themselves in terms of how they think about price stickiness. Further, they don't ponder choices about research programs as if, for example, they are living in 1925 in Northfield, Minnesota, and choosing between lifetime paths as Roman Catholics or Lutherans. Why should we care what Ball and Mankiw think is going on in the minds of their straw-men opponents, or in the classrooms of those straw-men? Why should we care what Ball and Mankiw "believe?" Surely we are (were) much more interested in figuring out what we can learn from them about recent (at the time) developments in the menu cost literature.

Bob Lucas discussed Ball and Mankiw's paper at the Carnegie-Rochester conference where it was presented, and had this to say:
The cost of the ideological approach adopted by Ball and Mankiw is that one loses contact with the progressive, cumulative science aspect of macroeconomics. In order to recognize the existence and possibility of research progress, one needs to recognize deficiencies in traditional views, to acknowledge the existence of unresolved questions on which intelligent people can differ. For the ideological traditionalist, this acknowledgement is too risky. Better to deny the possibility of real progress, to treat new ideas as useful only in refuting new heresies, in getting us back where we were before the heretics threatened to spoil everything. There is a tradition that must be defended against heresy, but within that tradition there is no development, only unchanging truth.
Noah seems to think that Lucas was being unduly harsh, and that he was somehow feeling threatened by these "upstarts." It's pretty clear, actually, that Lucas just thinks it's a bad paper - religion, not science - and that Ball and Mankiw could do a lot better if they put their minds to it.

Where did NK come from? Which of the three threads in post-macro-revolution Keynesian economics - coordination failures, sunspots, menu costs - morphed into Woodfordian NK models? To a first approximation, none of them. Perhaps NK owes a little to the menu cost approach, but it's really a direct offshoot of real business cycle theory. Take a Kydland and Prescott (1982) RBC model, eliminate some bells and whistles, add Dixit-Stiglitz monopolistic competition, and you have Rotemberg and Woodford's chapter from "Frontiers of Business Cycle Research." Add some price stickiness, and you have NK. So, NK basically leapfrogs most of the "Keynesian" literature from the 1980s. It's much more about RBC than about Ball and Mankiw.

Further, it's worth noting that Mike Woodford, the key player in NK macro, was at the University of Chicago from 1986 to 1992, the last three of those years in the Department of Economics with - guess who - Bob Lucas. Indeed, they wrote a paper together. It's about - guess what - a kind of sticky-price model with monetary non-neutralities. Later on, Lucas wrote about sticky prices with Mike Golosov. So, I think we could make the case that the influence of Lucas on NK is huge, and that of Ball and Mankiw is tiny. Was it the case, as Noah contends, that Lucas was left in the dust, disappointed, by the purveyors of sticky-price economics? Of course not - he was helping it along, and doing his own purveying.

How does a basic NK model - Woodford's "cashless" variety, for example - work? There is monopolistic competition, with multiple consumption goods, and the representative consumer supplies labor and consumes the goods. There's an infinite horizon, and we could add some aggregate shocks to total factor productivity (TFP) and preferences if we want. Without the price stickiness, in equilibrium there are relative prices that clear markets. There's no role for money or other assets (in exchange or as collateral), no limited commitment (everyone pays their debts) - no "frictions" essentially. Financial crises, for example, can't happen in this world. Then, what Woodford does is to add a numeraire object. This object is a pure unit of account, existing as something to denominate prices in terms of. We could call it "money," but let's call it "stardust," just for fun. So far, this wouldn't make any difference to our model, as it's only relative prices that matter - the equilibrium relative prices and quantities don't change as the result of adding stardust. What matters, though, is that Woodford assumes that firms in the model cannot change prices (at least sometimes) without bearing a cost. With Calvo pricing, that cost is either zero or infinite, determined at random each period. And because prices are quoted in stardust and cannot all adjust at once, stardust is no longer irrelevant: the central bank now has some control over relative prices, as it has the power to determine the price of stardust tomorrow relative to stardust today.
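As an aside, the Calvo mechanism is simple enough to simulate. Here's a minimal sketch - purely illustrative, with parameter values and a drifting target price that are my own assumptions, not Woodford's:

```python
import random

def simulate_calvo(theta=0.75, periods=20, seed=1):
    """Simulate one firm's posted price under the Calvo friction.

    theta: probability that the adjustment cost is infinite this period
    (price stuck); with probability 1 - theta the cost is zero and the
    firm resets to its target. All numbers here are illustrative."""
    random.seed(seed)
    target = 1.0    # hypothetical profit-maximizing (reset) price
    price = 1.0     # the firm's posted price
    path = []
    for t in range(periods):
        target *= 1.02               # suppose the target drifts up 2% per period
        if random.random() > theta:  # zero-cost draw: reset to target
            price = target
        path.append((t, price, target))
    return path

for t, posted, target in simulate_calvo():
    print(f"t={t:2d}  posted={posted:.3f}  target={target:.3f}")
```

With theta = 0.75 the firm's price is stuck for four periods on average (expected duration between resets is 1/(1 - theta)), and it is this staggered adjustment that gives the unit of account real effects.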

Then, if there are aggregate shocks in this economy, we can do better with central bank intervention than without it. Shocks produce relative price distortions essentially identical to tax distortions, and central bank intervention can alter relative prices in beneficial ways, by reducing the distortions. Basically, it's a fiscal tax-wedge theory of monetary policy. Why monetary policy can do this job better than fiscal policy is not clear from the theory.
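To see the tax-wedge analogy in symbols - this is the generic textbook Dixit-Stiglitz result, not anything specific to the papers discussed here - with linear production, aggregate output satisfies

```latex
Y_t = \frac{A_t N_t}{\Delta_t}, \qquad
\Delta_t \equiv \int_0^1 \left(\frac{P_t(i)}{P_t}\right)^{-\varepsilon} di \;\geq\; 1,
```

where ε is the elasticity of demand across goods and Δ_t measures relative-price dispersion. With flexible prices, Δ_t = 1; when only some firms can reset prices, Δ_t > 1, and the economy gets less output from the same labor input - exactly as if labor income were taxed. A policy that stabilizes the stardust price level keeps Δ_t near 1, which is the sense in which monetary policy undoes the tax wedge.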

Noah tells us that "sticky-price models have become the dominant models used at central banks." You might ask what "used" means. Certainly Alan Greenspan wasn't "using" NK models. My best guess is that he wouldn't even want to hear about them, as he knows little about modern macroeconomics in the first place. Ben Bernanke, of course, is a different story - he clearly learned modern macro, published papers with serious models in them, and knows exactly what the working parts of an NK model are about. Which brings us to a post of his from last week.

The issue at hand is how central banks should think about financial stability. Is this solely the province of the financial regulators, or should conventional monetary policy intervention take into account possible effects on private-sector risk-taking? I think it's well-recognized, if not blatantly obvious, that there were regulatory failures that helped cause the financial crisis. Whether legislated financial reforms were adequate or not, I don't think any reasonable person would question the need for new types of financial regulation in the wake of the financial crisis. Also, most economists would not question the need for intervention by the central bank in a genuine financial crisis. The Fed was founded in part to correct problems of financial instability during the National Banking era (1863-1913) in the United States, and there was a well-established approach to banking panics by the Bank of England in the 19th century (which Bagehot, for example, wrote about). Friedman and Schwartz wrote extensively about how the failure of the Fed to intervene during the Great Depression helped to exacerbate the depth and length of the Depression.

But should a central bank intervene pre-emptively to mitigate financial instability or, for example, to reduce the probability of a financial crisis? That's an open question, and I don't think we have much to go on at this point. In any case, to think about this constructively, we would have to ask how alternative policy rules for the central bank jointly affect financial stability, asset prices, GDP, employment, etc., along with economic welfare more generally. It would help a lot - indeed it seems necessary - for the model we work with to have some working financial parts: credit, banks, collateral, the potential for default, systemic risk, etc. I actually have one of these handy. It's got no sticky prices, but plenty of other frictions. There's limited commitment, collateral, various assets including government debt, bank reserves, and currency, and private information. You can see that I didn't take either of the roads Ball and Mankiw imagined I should be choosing, and I'm certainly not unique in that regard. The model tells us that conventional monetary policy can indeed exacerbate financial instability. There is an incentive problem in the model - households and banks may have an incentive to post poor-quality collateral to secure credit - and this incentive problem tends to kick in when nominal interest rates are low.

Bernanke gives us some evidence from research which he claims informs us about the problem of financial stability and monetary policy. The paper he cites and summarizes is by Ajello et al. at the Board of Governors. Here's what Bernanke learned from that:
As academics (and former academics) like to say, more research on this issue is needed. But the early returns don't favor the idea that central banks should significantly change their rate-setting policies to mitigate risks to financial stability. Effective financial oversight is not perfect by any means, but it is probably the best tool we have for maintaining a stable financial system. In their efforts to promote financial stability, central banks should focus their efforts on improving their supervisory, regulatory, and macroprudential policy tools.
I agree with the first sentence. But what about the rest of it - his takeaway from the Ajello et al. paper? Maybe we should check that out.

So, the Ajello et al. model is a kind of reduced-form NK model. For convenience, NK models - which at heart are well-articulated general equilibrium models - are sometimes (if not typically) subjected to linear approximation, and reduced to two equations. One is an "IS curve," which is basically a linearized Euler equation that prices a nominal government bond, and the other is a "NK Phillips curve" which summarizes the pricing decisions of firms. These two equations, given monetary policy, determine the dynamic paths for the inflation rate and the output gap - the deviation of actual output from its efficient level. Often, a third equation is added: a Taylor rule that summarizes the behavior of the central bank. The basic idea is that this reduced form model is fully grounded in the optimizing, forward-looking behavior of consumers and firms, and so conforms to how modern macroeconomists typically do things (for good reasons of course).
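In generic textbook notation (not Ajello et al.'s exact specification), the three equations are

```latex
\begin{aligned}
x_t   &= E_t x_{t+1} - \tfrac{1}{\sigma}\,(i_t - E_t\pi_{t+1} - r_t^n) && \text{(IS curve)}\\
\pi_t &= \beta E_t \pi_{t+1} + \kappa x_t && \text{(NK Phillips curve)}\\
i_t   &= r^* + \phi_\pi \pi_t + \phi_x x_t && \text{(Taylor rule)}
\end{aligned}
```

where x_t is the output gap, π_t is inflation, i_t is the nominal interest rate set by the central bank, r_t^n is the natural real rate, and κ is smaller the stickier are prices.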

If Ajello et al. could get financial crises into a model of monetary policy, that would be very interesting. There is some work on this, for example Gertler and Kiyotaki's Handbook of Monetary Economics chapter, but that's not what Ajello et al. are up to. What they do is to take the first two equations in a standard reduced form NK model, and then append a third equation that captures the effects of financial crises. The financial part of the model isn't grounded in any economic theory - the authors are just taking the express route to the reduced form. There are two periods. The endogenous variables are the current output gap, the current inflation rate, and the probability of a financial crisis in the second period. The second period financial crisis state has exogenous output gap and inflation rate, and the second period non-crisis state has another exogenous output gap and inflation rate. The probability of a financial crisis depends on the first period nominal interest rate, inflation rate, and output gap. That's it. The authors then "calibrate" this model, and start doing policy experiments.
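The structure is easy to lay out schematically. In the sketch below, all functional forms, parameter signs, and numbers are hypothetical - the actual Ajello et al. specification and calibration differ - but it shows the architecture: two standard NK equations, one appended crisis-probability equation, and a policy experiment run on top.

```python
# Schematic two-period structure in the spirit of the setup described
# above. All functional forms, parameter signs, and numbers here are
# hypothetical; the actual Ajello et al. specification and calibration
# differ.

CRISIS = (-10.0, -2.0)   # exogenous period-2 (output gap, inflation), crisis state
NORMAL = (0.0, 0.0)      # exogenous period-2 outcomes, non-crisis state

def crisis_prob(i, a=0.05, b=0.02):
    """Appended 'financial' equation: probability of a period-2 crisis
    as a function of the period-1 policy rate i. Here a lower rate
    raises the probability (b > 0) -- that sign is an assumption, and
    the paper's version also lets inflation and the output gap enter."""
    return min(1.0, max(0.0, a - b * i))

def period1(i, prob=crisis_prob, beta=0.99, sigma=1.0, kappa=0.1):
    """Period-1 output gap and inflation from the two standard NK
    equations, with period-2 expectations weighted by the crisis
    probability."""
    p = prob(i)
    Ex = p * CRISIS[0] + (1 - p) * NORMAL[0]    # expected output gap
    Epi = p * CRISIS[1] + (1 - p) * NORMAL[1]   # expected inflation
    x = Ex - (i - Epi) / sigma                  # linearized IS curve
    pi = beta * Epi + kappa * x                 # NK Phillips curve
    return x, pi, p

def expected_loss(i, prob=crisis_prob, w=0.5):
    """Ad hoc quadratic loss over period-1 outcomes plus the
    probability-weighted period-2 crisis outcomes."""
    x, pi, p = period1(i, prob)
    return pi**2 + w * x**2 + p * (CRISIS[1]**2 + w * CRISIS[0]**2)

rates = [r / 10 for r in range(0, 51)]   # candidate policy rates, 0.0 to 5.0
best = min(rates, key=expected_loss)
print(f"loss-minimizing period-1 rate: {best:.1f}")
```

Note that nothing in period1 knows anything about what a crisis is; all of the financial content lives in the assumed signs and magnitudes inside crisis_prob.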

What's wrong with this model? For starters, there's no assurance that, if we actually take the trouble to figure out what a financial crisis is, how to model it at a fundamental level, and then somehow integrate that with basic NK theory, we're going to get a reduced form in which we can separate the non-financial-frictions NK model from the financial-frictions part in the way the authors have done. They appeal to a paper by Woodford, but Woodford doesn't help me much, as he's basically taking the express route too. Think about it. We're starting with a model that is frictionless, except for the sticky prices, and we're asking it to address questions that involve default and systemic financial risk. What grounds do we have for arguing that this involves tweaking an NK reduced form in a minor way? But suppose I buy the specification the authors posit. What grounds do we have for setting the parameters in the third equation? I'm not sure what the signs of those parameters should be, let alone their magnitudes, and it puzzles me that the authors can be so confident about them.
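That sign-uncertainty is easy to demonstrate with the sketch above. Flip the assumed effect of the policy rate on crisis probability and the same "calibrated" exercise recommends the opposite policy (numbers still hypothetical):

```python
# Continuing the sketch above: suppose instead that a *higher* rate
# raises crisis probability (b < 0). Everything else is unchanged.
flipped = lambda i: crisis_prob(i, a=0.01, b=-0.02)
best_flipped = min(rates, key=lambda i: expected_loss(i, prob=flipped))
print(f"loss-minimizing rate with the sign flipped: {best_flipped:.1f}")
```

One unmeasured parameter, and the recommendation swings from leaning against financial risk to doing nothing at all.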

Bernanke takes it seriously, though, as you can see from the quote above. Clearly he has strong priors, but the paper offers essentially zero takeaway, so why is he using it to try to convince us he's right?

So, as Noah says, NK models are "used" in central banks, and here's an example of what that can mean. There is nothing inherently offensive about work on sticky prices, just as there's nothing inherently offensive about cats. In fact, there is some very interesting work on sticky prices. Last week, I saw a paper by Fernando Alvarez and Francesco Lippi. They do very sophisticated work, with careful attention to the available data on product pricing, and I think a lot can be learned from what they're up to. But if we want to think about the effects of monetary policy, and how monetary policy should be conducted, we'd better be thinking about the details of central bank liabilities, central bank assets, the role of the central bank as a financial intermediary, private banks, collateral, government debt, credit, etc. A basic NK model throws all of that out and focuses exclusively on sticky-price frictions. Why is that a problem? Well, suppose a financial crisis comes along. What do you then know about malfunctioning credit markets, the role of central bank lending vs. open market operations, or the effects of unconventional monetary policies such as quantitative easing? Not much, right? And it's pretty clear that you're not going to learn much about these things by tweaking a reduced-form NK model.