James Galbraith Describes Major Forecast Failure in Model Used by Romers to Attack Friedman on Sanders Plan
Naked Capitalism has the article James Galbraith Describes Major Forecast Failure in Model Used by Romers to Attack Friedman on Sanders Plan.
So why do the Romers say so confidently that Friedman is off base? They are using a different model. And as Galbraith explains long-form, it’s one with a pretty crappy track record in post-crisis America. And Galbraith gives an important warning:
In the real world, forecasts are a very weak guide to policy; when attempting to make major changes the right strategy is to proceed and to take up the challenge of obstacles or changing circumstances as they arise. That is, after all, what Roosevelt did in the New Deal and what Lyndon Johnson did in the 1960s. Neither one could have proceeded if today’s economists had been around at that time.
This is a shocking description of the failures of the economists in the Obama administration who are now throwing stones at Bernie Sanders.
One of the reasons I voted for Obama was my belief that he was a practical guy who would follow Roosevelt’s technique to “proceed and to take up the challenge of obstacles or changing circumstances as they arise.”
I knew that Obama had failed to meet my expectations in this regard, but until I read this article, I never realized the depths of that failure.
I think I have a special appreciation for the mathematics underpinning the wisdom of constantly checking your results and adjusting to deviations from expectations.
My explanation below is even more abstract than James Galbraith’s explanation of the problems of economic forecasting. I really don’t expect anybody to understand what follows except for people who have also been in my trade of simulating the behavior of integrated circuits.
When I first got into the business in 1969, there were severe limits on the performance of circuit simulation programs. If you tried to take too large a jump in time in your simulation, the results could predict wild oscillations in circuits where none existed.
This was due to a method called explicit integration. In that method, you would take a few historical points in time from your simulation and use them to predict the behavior at the next point in time. You never checked whether that prediction continued to satisfy the equations governing the system as a whole. To prevent the problem of oscillation, you were forced to keep your time steps no larger than the smallest time constants anywhere in your circuit allowed. This limitation kept the errors to a reasonable level, but it slowed down the simulator by orders of magnitude. The reason people stuck with explicit integration was that solving the circuit equations to check for errors was an extremely compute-intensive process.
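A minimal sketch, using a single node with a 1 ns time constant (my own illustrative numbers, not any real simulator’s code), shows how explicit integration goes wild once the time step exceeds the stability limit:

```python
# Forward (explicit) Euler on dv/dt = -v/tau: the next point comes
# purely from the past point, with no check against the equations.

def forward_euler(tau, h, steps, v0=1.0):
    """Integrate dv/dt = -v/tau with explicit (forward) Euler."""
    v = v0
    trace = [v]
    for _ in range(steps):
        v = v + h * (-v / tau)   # extrapolate; never verify the result
        trace.append(v)
    return trace

tau = 1e-9                       # 1 ns time constant

# Step below the stability limit (h < 2*tau): decays smoothly.
print(forward_euler(tau, h=0.5e-9, steps=5))

# Step above the limit: the "solution" oscillates with growing
# amplitude -- an oscillation the real circuit never has.
print(forward_euler(tau, h=3e-9, steps=5))
```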
In the early 1970s, the University of California at Berkeley and others introduced ways to use implicit integration instead of explicit integration. They could use implicit integration because they had found a technique, called sparse matrix methods, for rapidly solving the system of circuit equations to see whether the error was within bounds. With sparse matrix methods and implicit integration, you still used the predictions from previous time points, but then you checked how accurately those predictions satisfied the circuit equations at the time you were trying to estimate. If the predictions were too far off, you adjusted them until the result was acceptable. If you couldn’t find an acceptable result at the new time point, you retried with a smaller time step, aiming at a point not so far into the future. This method allowed you to take much larger time steps than explicit integration allowed without introducing wild oscillations into the results. The new method was also more suitable for simulating highly non-linear circuits like those found in computers.
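Here is a rough sketch of that predict, check, and correct loop, assuming backward Euler with a Newton corrector on a single capacitor-and-diode node; the component values, tolerance, and step-halving rule are my own illustrative choices, not the actual code of SPICE or any other simulator:

```python
import math

IS, VT, C = 1e-12, 0.025, 1e-9        # diode and capacitor parameters

def f(v):
    """dv/dt for the node: diode current discharging the capacitor."""
    return -IS * (math.exp(v / VT) - 1.0) / C

def dfdv(v):
    return -IS * math.exp(v / VT) / (VT * C)

def backward_euler_step(v_prev, h, tol=1e-9, max_iter=20):
    """Solve v = v_prev + h*f(v) by Newton; return (v, converged)."""
    v = v_prev + h * f(v_prev)        # explicit prediction as starting guess
    for _ in range(max_iter):
        residual = v - v_prev - h * f(v)
        if abs(residual) < tol:       # prediction now satisfies the equation
            return v, True
        v -= residual / (1.0 - h * dfdv(v))   # Newton correction
    return v, False

t, v, h = 0.0, 0.7, 1e-9
while t < 50e-9:
    v_new, ok = backward_euler_step(v, h)
    if ok:
        t, v = t + h, v_new           # accept the step
    else:
        h /= 2.0                      # reject: retry with a smaller step
print(f"v({t:.2e} s) = {v:.4f} V")
```

In a real simulator the single residual check becomes a sparse matrix solve over every node in the circuit, which is exactly why the fast sparse matrix techniques were the enabling breakthrough.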
In effect, you only had to worry about small time constants in parts of the circuit that were actually changing significantly. Small time constants in parts of the circuit that weren’t very active would not cause your solution to go wild.
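A toy example (again my own construction, not from any real circuit) makes the point: two decoupled nodes with 1 ns and 1 ms time constants, stepped implicitly at 0.1 ms, a hundred thousand times the fast time constant, and nothing goes wild:

```python
import math

TAU_FAST, TAU_SLOW = 1e-9, 1e-3   # 1 ns and 1 ms time constants

def backward_euler_2node(h, steps, x0=(1.0, 1.0)):
    """Test system dx_i/dt = -x_i/tau_i, each node solved implicitly."""
    x_fast, x_slow = x0
    for _ in range(steps):
        # Implicit update x_new = x_old + h*(-x_new/tau), solved exactly:
        x_fast = x_fast / (1.0 + h / TAU_FAST)
        x_slow = x_slow / (1.0 + h / TAU_SLOW)
    return x_fast, x_slow

h, steps = 1e-4, 10               # 0.1 ms steps, 100,000x the fast tau
x_fast, x_slow = backward_euler_2node(h, steps)
print(f"fast node: {x_fast:.3e} (exact {math.exp(-h * steps / TAU_FAST):.3e})")
print(f"slow node: {x_slow:.3f}  (exact {math.exp(-h * steps / TAU_SLOW):.3f})")
```

The quiet fast node simply settles to its final value instead of blowing up, while the slow node is tracked with reasonable accuracy.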
I liken this behavior to the Roosevelt economic plan. Forecast something, do something according to your forecast, measure what happened as a result, and adjust if you see deviations from the plan.
It’s not rocket science, but it is science and math. I only wish I could explain this to the lay public who are not engineers and mathematicians. Well, at least I can take comfort that I laid it all out there, even if very few will get it.