Why revisions matter
Via James I saw this excellent post by Lars Christensen on why data revisions don’t matter for NGDP targeting. I think it shows how much traction the NGDP people are getting when critiques like this start to appear – and it is good that they are making a concerted effort to answer them.
Now, I’m not someone who opposes NGDP targeting outright (at this point I’m still in agreement with the 2011 version of myself): I don’t think it is terribly far off, and it provides a rule, which is the main thing, so if it were to become core policy I wouldn’t be terribly concerned.
Now, data revisions. I think Christensen overstates how little they matter – even more than the critics of NGDP targeting overstate how much they matter. In truth, the revisions issue is an important one because we are LEVEL targeting, and level targeting makes policy history dependent. There are three real differences between flexible inflation targeting and NGDP targeting for a large economy:

1. NGDP targeting is level targeting, while inflation targeting is growth rate targeting (for a small open economy, changes in tradable goods prices cause further issues – and I don’t think NGDP targeting handles these appropriately).
2. NGDP targeting allows less discretion around the rule and gives an easier way to “judge” policy – something every economist outside of a central bank sees as a good thing 😉
3. One anchors expectations of price growth unrelated to the marketplace, the other anchors expectations of the level of nominal income unrelated to the marketplace – here we can ask “which one is more important for business and household decisions?”
So through the arguments:
We target a forecast in both cases, but forecasts are poor for both NGDP and inflation
This is true. However, just before Christmas I was reading a paper on how inflation is among the variables economic models have the most success at forecasting – compared with GDP forecasts, which are significantly worse. I was going to write on this, and probably will at some point. But in the interim, here is the RBA 😀
We should be targeting off a market, as that provides expectations
This is just the forecast story again – and we can do that with inflation targeting as well.
The potential problem
“Furthermore, arguing that NGDP data can be revised might point to a potential (!) problem with NGDP, but at the same time if one argues that national account data in general is unreliable then it is also a problem for an inflation targeting central bank. The reason is that most inflation targeting central banks historical have use a so-called Taylor rule (or something similar) to guide monetary policy – to see whether interest rates should be increased or lowered.”
Indeed, this is a problem for inflation targeting as well. But let’s think a bit.
CPI, business surveys, and the unemployment rate are virtually never revised (apart from methodology changes), while NGDP is revised constantly. Central banks target a certain measure of core CPI, and they use a Taylor rule which relies on deviations from potential output. What they estimate is the OUTPUT GAP, not potential itself. Often, this estimate will draw on business surveys and the unemployment rate as well as the oft-revised GDP numbers – and as a result the size of any revisions, and of any resulting error, is a lot smaller.
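To make concrete what the output gap does in that rule, here is a generic Taylor-type rule – a textbook illustration, not the specific rule any particular central bank follows, and the coefficients are just placeholders:

```latex
i_t = r^{*} + \pi_t + a_{\pi}(\pi_t - \pi^{*}) + a_{y}(y_t - y_t^{*})
```

Here $i_t$ is the policy rate, $r^{*}$ the neutral real rate, $\pi_t - \pi^{*}$ the deviation of inflation from target, and $y_t - y_t^{*}$ the output gap. Only the gap – a difference estimated from several indicators – enters the rule, so a revision to GDP shifts the policy prescription by far less than it would shift a targeted level of NGDP.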
Inflation is dubious
All data is dubious – CPI has simply had more time spent on it because it is used for policy setting. If we are worried about whether CPI is systematically biased (which in level terms it likely is, but in growth terms it is not), then the issue is far, far worse for the GDP stats!
Conclusion
“However, the important point is that present and historical data is not important, but rather the expectation of the future NGDP, which an NGDP futures market (or a bookmaker for that matter) could provide a good forecast of (including possible data revisions). Contrary to this inflation targeting central banks also face challenges of data revisions and particularly a challenge to separate demand shocks from supply shocks and estimating potential GDP.”
It is exactly right that the point is to set expectations – central bankers know that. With expectations of inflation anchored, they can then just walk around changing their policy stance to respond to the evolution of demand in the economy – this is what central bank policy should do, aims to do (ignoring the ECB of course 😉 ), and what the NGDP targeters want! In this way data revisions are pretty irrelevant in so far as both sides are asking each other to do the same thing.
But here is the kicker: the flexible inflation targeters are targeting GROWTH in demand and anchoring expectations of PRICE growth. NGDP targeters are targeting the LEVEL of demand and anchoring expectations of NOMINAL INCOME. As soon as we target a level instead of a growth rate, we make history relevant – the very history that is filled with data revisions and changes. As a result, this is definitely a more important issue for NGDP level targeting than for flexible inflation or NGDP growth targeting – which is why it is being raised!
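As a rough numeric sketch of that history dependence (all figures invented for illustration): suppose the target path grows 5% a year from a base of 100, and last year’s NGDP estimate is later revised down by about 2%.

```python
# Illustrative only: how a downward revision to last year's NGDP changes the
# growth required this year under level targeting versus growth targeting.

base = 100.0   # NGDP at the start of the target path
g = 0.05       # 5% target growth rate

target_level_y2 = base * (1 + g) ** 2   # where the level path says NGDP should be in year 2

for last_year_ngdp in (105.0, 102.9):   # initial estimate, then a ~2% downward revision
    # Level targeting: we must get back to the target path, so the required
    # growth depends on the (revised) historical level.
    required_growth_level = target_level_y2 / last_year_ngdp - 1

    # Growth targeting: just grow at g from wherever we are; the revision to
    # the historical level drops out of the policy prescription.
    print(f"last year's NGDP = {last_year_ngdp:6.1f} | "
          f"level target needs {required_growth_level:.1%} growth, "
          f"growth target needs {g:.1%}")
```

The revision changes nothing under the growth rule, but forces roughly a 7% catch-up (rather than 5%) under the level rule – that is what history dependence means here.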
It is a cost of level targeting, and those in favour of level targeting have to point out the countervailing benefits of it (outside of a liquidity trap, where the gains are widely accepted but can be achieved through commitments rather than a change in the rule) that will swamp this and other costs.
Good points but I think your final point is crucial: NGDPLT is superior in a liquidity trap, which may be enough to overcome the drawbacks of data revision. You say that other commitments could be made but central banks are often constrained by their statutory targets. With interest rates declining over a long period of time it may be time to consider whether the ZLB is going to be a common problem for IT.
But instead of targeting NGDP, we can simply revise PTAs to include a clause where, when cash rates hit zero, central banks switch from targeting a growth rate to targeting a level – thereby providing them with the commitment they need in that circumstance.
I’m not convinced we are going to be stuck in a ZLB type world for a persistent period of time – if we think that we are moving towards a situation where people are retiring and trying to draw down savings, fundamental interest rates are pretty likely to head upwards in the long term.
On the first point I remain unconvinced by a policy rule that varies with the, apparently poor, data. If it is good enough for the difficult time then it is probably good enough for the easy times!
Really? If we have previously determined that we prefer growth targeting to level targeting outside of the ZLB, and we can make a policy rule that allows authorities to “precommit” at the ZLB, then why does it make sense to use level targeting?
Before the crisis economists were saying “oww, if we hit the ZLB we just need to precommit and we’re sorted” – the advantage of NGDP targeting is simply that it is an imperfect proxy for this commitment … as Woodford said, this isn’t best policy, or the best way to pre-commit, but it’s the only practical form of precommitment that seems politically feasible right now!
We can replicate it, and ensure that we keep the benefits of growth targeting, just by setting it up so that we “make up for shortfalls during periods where we’ve hit the ZLB”.
We make up for shortfalls during periods where the CB decides we’re at a ZLB but not otherwise? Doesn’t that make it rather hard for agents to form expectations, given that whatever nominal variable you target will have neither a stable growth nor level path? I’m sure I’m wrong because Woodford/Mishkin took your view in their WSJ article, but it just seems really odd.
The ZLB only happens very, very occasionally – as I was suggesting above. During the Great Depression people thought they had fallen into a new normal as well, when in truth, then as now, we are experiencing a very specific, occasional phenomenon.
In that environment, you just have to communicate the change in state – when the policy instrument is non-zero, prices will go up X% if nothing else changes for the market … however, when the policy rate hits zero the Fed will buy bonds and accept inflation above X%. We can commit to this policy credibly (since our credibility on inflation targeting might otherwise undercut it) by instead targeting nominal income during that period – a framing that also lets us communicate to the public how policy changes during this rare period.
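To be clear about the mechanics, here is a minimal sketch of the kind of state-contingent rule I have in mind – the function name, inputs, and numbers are purely illustrative, not a proposal for an actual reaction function:

```python
def policy_stance(policy_rate, inflation, inflation_target,
                  ngdp_level, ngdp_target_path):
    """Illustrative state-contingent rule: inflation (growth) targeting in
    normal times, switching to making up nominal income shortfalls against
    a level path once the policy rate hits zero."""
    if policy_rate > 0:
        # Normal times: respond to inflation relative to the target rate.
        gap = inflation - inflation_target
        regime = "growth/inflation targeting"
    else:
        # At the zero lower bound: commit to closing the shortfall of NGDP
        # from its target path, accepting inflation above target until the
        # shortfall is made up.
        gap = (ngdp_level - ngdp_target_path) / ngdp_target_path
        regime = "temporary NGDP level targeting"
    return regime, gap

# Away from the ZLB the stance depends only on the inflation gap; at zero it
# depends on the accumulated nominal income shortfall.
print(policy_stance(2.5, 0.03, 0.02, ngdp_level=980, ngdp_target_path=1000))
print(policy_stance(0.0, 0.01, 0.02, ngdp_level=980, ngdp_target_path=1000))
```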
Nice post. I agree that revisions matter.
But one point which I think has been overlooked: the reason why the CPI is not revised is not fundamental to the concept of a price index; it’s simply a policy choice. As price index methodology changes, *we don’t apply it retrospectively*.
In the UK this debate seems partly absurd – we have about ten different measures of inflation – a new one announced just this week – *exactly because* we keep coming up with new inflation methodology and refuse to apply it retrospectively.
So I think Scott Sumner’s proposal on this subject was necessary and correct: we would need to adjust the desired level path of NGDP (“base drift”) as and when GDP methodology changes revise the past level of NGDP.
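Concretely, the adjustment could be as simple as rescaling the target path by the same factor as the methodological revision – a sketch of the idea, not Sumner’s exact proposal:

```latex
N^{target,new}_t = N^{target,old}_t \times \frac{N^{new}_s}{N^{old}_s}
```

where $s$ is the period affected by the methodology change, so the measured gap between NGDP and its target path is left unchanged by the rebasing.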
Hi Britmouse,
With the CPI I find it interesting to think about why we change the underlying basket – it is because the fundamental basket of goods changes through time, and so not revising the past series (which would force it to represent a basket of goods that is not relevant for that time) sort of makes sense.
However, it does show the inherent weakness of CPI as a measure of “the price level” and of growth in the CPI as a measure of “inflation”. I’ve always had a preference for skipping the concept of CPI altogether and estimating actual inflation – which is what we do over in NZ with a factor model. We are targeting the comovement in prices, so we might as well actually measure what we are implicitly targeting.
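To give a flavour of what I mean by a factor model – this is not the RBNZ’s actual model, just a minimal illustration (with made-up data) of extracting the common comovement from a panel of price changes:

```python
import numpy as np

# Made-up panel of monthly price changes: a common "inflation" signal plus
# item-specific noise, then the first principal component as the estimate of
# the common price movement.
rng = np.random.default_rng(0)
T, N = 120, 30                                   # 120 months, 30 price series

common = rng.normal(0.002, 0.003, T)             # true common signal (unobserved in practice)
loadings = rng.uniform(0.5, 1.5, N)              # how strongly each price follows it
noise = rng.normal(0.0, 0.01, (T, N))            # item-specific price movements
price_changes = common[:, None] * loadings + noise

# Standardise each series and take the first principal component.
z = (price_changes - price_changes.mean(0)) / price_changes.std(0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
factor = z @ vt[0]

# PCA sign is arbitrary; align the factor with the average price change and
# rescale it so it is comparable to measured "headline" changes.
mean_change = price_changes.mean(1)
if np.corrcoef(factor, mean_change)[0, 1] < 0:
    factor = -factor
factor = factor / factor.std() * mean_change.std() + mean_change.mean()

print("correlation with the true common signal:",
      round(float(np.corrcoef(factor, common)[0, 1]), 2))
```

In this toy setup the common factor recovers the underlying signal reasonably well even though each individual price series is dominated by its own noise – which is the sense in which we are measuring the comovement directly rather than proxying it with growth in a fixed basket.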