Overall, I don't have much of an issue with his review of the book itself, and you should definitely read it to get a different point of view. A later section of his review is devoted to a critique of information equilibrium, which I discuss below, but for the most part where we differ it is because of those differing points of view.
There are only three things that I feel the need to respond to regarding the review of the book itself. These are about error bars, expectations, and scope conditions.
Error bars
Brian says I decry "the lack of error bars in economics texts." I realize now I should have kept the "preferably with error bars" phrasing from an earlier draft. I was actually decrying the lack of any theoretical curves going through data in any available material at all, including PDFs of slides and even economics papers where models are purportedly being compared to data (regardless of whether there are error bars or not). I never saw such curves in papers, so I thought:
Maybe they're in books? Nope.
On Wikipedia? Nope. There is lots of data shown in econ chart blogging (for example, on Brian's website), but there are never any curves derived from theory going through the data — except the occasional linear fit. Brian is correct in saying that a lot of physics (and engineering) textbooks don't show error bars (or sometimes even data). But even on Wikipedia, there are no comparisons of economic theory to economic data — while
there are for physics. And there is a huge difference between not showing data for a Lagrange multiplier problem in a classical mechanics textbook (a method validated for literally hundreds of years) and not showing data for a DSGE model in a working paper explaining the liquidity trap (a method that has not been shown to be empirically accurate for any data). My inclusion of "error bars" seems to have thrown off the focus here.
Expectations
One place where Brian misses the point I was making is in his discussion of the section of my book that talks about expectations. This could well be my own fault for not being clear enough, but when he writes:
He wastes the reader's time discussing how he was surprised that economics models have the mechanism that expected future outcomes influence present activity.
it does not characterize what I wrote or the point I was trying to make. I was "surprised" that economics models have a mechanism where the *actual* future outcomes influence present activity. I emphasize it by using the words "actual future" five times as opposed to "expected future". There is no issue with using an expected future as an input, so long as that expected future is derived from information known in the present. In fact, I wrote exactly that in my book:
If the future value of inflation [in a model] is just made up from information known at the present time, then there is no information being moved from the future to the present and no information problem.
However, you cannot know the actual future of even a hypothetical universe in the present unless the system is completely deterministic (i.e. contains no unknown stochastic or chaotic elements); yet rational expectations includes the actual future (in the hypothetical universe the model exists in) in the model. You can have a guess about an expected future, but that isn't the same as knowing the actual future up to an error term with zero mean.
Maybe an example is appropriate here. I can know that if I roll six dice, I should expect a total of 21 (with a standard deviation of roughly ± 4). Rolling dice is a well-defined stochastic process. However, I cannot know that if I ask six people to pick a random number, I should expect an average of 6 ± 2, where 6 is the actual average those six people will report in the future.
That's what rational expectations does.
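As a quick check of the dice arithmetic above, here is a minimal Python sketch (the simulation itself is just my illustration, not something from the book):

```python
import numpy as np

# Analytic expectation and spread for the sum of six fair dice:
# each die has mean 3.5 and variance 35/12, so the sum has mean 21
# and standard deviation sqrt(6 * 35/12), roughly 4.2.
mean_sum = 6 * 3.5
std_sum = np.sqrt(6 * 35 / 12)

# Monte Carlo check: the distribution is knowable in the present,
# even though any particular future roll is not.
rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=(100_000, 6)).sum(axis=1)

print(f"analytic:  {mean_sum} ± {std_sum:.1f}")
print(f"simulated: {rolls.mean():.1f} ± {rolls.std():.1f}")
```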
Scope conditions
Brian also refers to my discussion of scope conditions, but I'm not completely sure he understands the concept. Brian writes:
We are back to Smith's scope conditions. The scope condition for the "inflation will be 2%" model is the current environment -- characterised by inflation sticking near 2%. You do not need a doctorate in theoretical physics to see that this is a fairly silly situation.
That would not be the scope condition for Brian's constant inflation theory. As stated, the constant inflation theory (i.e. π = 2%) Brian presents has no scope conditions. If inflation deviates from 2%, the model is empirically invalid, not out of scope — unless there is something setting the inflation scale.
An example: π = 2% when monetary base growth μ << 10%. In that case, μ << 10% is the scope condition. Now, π ~ 2% might be a scope condition for some other model (e.g. the IS-LM model implicitly assumes inflation is low because it doesn't distinguish between real and nominal quantities, as discussed here and here with slides). As described, Brian confuses a "scope condition" with a "just-so theory".
In this form, Brian's pseudo-example is: π = 2% when π ≈ 2%, which is just vacuous.
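To make the distinction concrete, here is a minimal sketch (entirely my own illustration, with made-up numbers) of a model that declares a scope condition separate from its prediction:

```python
def constant_inflation(base_growth_pct: float) -> float:
    """Toy model: predicts pi = 2%, but only claims validity (is 'in scope')
    when monetary base growth is well below 10%."""
    if base_growth_pct >= 10:
        raise ValueError("out of scope: monetary base growth is not << 10%")
    return 2.0  # predicted inflation in percent

# In scope: the model makes a claim the data can confirm or falsify.
print(constant_inflation(5))   # -> 2.0
# Out of scope: the model makes no claim at all (it is silent, not wrong).
# constant_inflation(15)       # raises ValueError
```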
* * *
Information equilibrium
One thing I do want to note is that Brian appears to want to use my book as an entry point to critique my information transfer approach more broadly (which I did not invent, but rather borrowed from Fielitz and Borchardt's application to complex physical systems). For example, Brian writes:
He cites studies that show DSGE model predictions performing worse than simple econometric techniques, or of course, his information transmission economics techniques.
I actually make no reference to the information transfer models in that context in my book. He subsequently has an entire section of his review set aside to criticize information equilibrium. What follows is my response to his critique of information equilibrium and is largely independent of my book.
Brian uses the old economics trope that "if you really did understand economics so well, you (or someone else) could get rich":
Looking for validation in peer-reviewed journals is curious: if the capitalist system is an efficient system for processing information, the commercial success of the techniques should have appeared within months of their appearance in the public domain.
First, I might have been able to make a lot of money in the bond market had I a) set up an instrument to bet against the BCEI forecast in the graph below, and b) had a lot of money to start with:
The forecast and model were described here.
Second, in my book, I make the case that the capitalist system is not always an efficient system for processing information. I introduce an entire chapter as a discussion of market failure:
As long as information equilibrium holds — for example, the agents choose opportunities in the opportunity set uniformly and don't bunch up, economics is the study of properties of the opportunity set. But what happens when this fails? That's the question I address [in the next chapter], and provide a speculative answer.
Third, Brian provides us with a possible reason — by example — for why information equilibrium might not have been picked up and used by everyone [1]: people might not understand it. People might not understand it because it's over their head. People might not understand it because I haven't explained it very well. People might not understand it because it contains some fundamental error and it is therefore actually impossible to understand. People might not understand it because they're being deliberately obtuse. People might think they understand it, but are actually wrong — leading them to either not use it or use it incorrectly.
I don't know what the reason is, but Brian doesn't appear to understand it. As such, he represents an example of a reason information equilibrium hasn't taken over the world. His description of information equilibrium reminds me of the times I've gone into a meeting to explain something novel to someone and they say: "Oh, I get it, this is just X", where X is something not only well-known but completely unrelated. The best case of this I've experienced was from Robin Hanson, who effectively said of information equilibrium "Oh, I get it, this is just game theory information" (not exactly in those words, but that's the gist of referring me to the work of Aumann and Harsanyi).
Except in this case, Brian doesn't even tell us what X is — it's just X:
The entire information equilibrium theory is just back story for the algorithm he uses to generate forecasts
X = some algorithm. I can't even tell if X is unrelated or not because it isn't specified. In fact, it seems pretty clear the reason it isn't specified is that Brian doesn't know what X is, as we'll see below.
In any case, this simply misunderstands what is happening. Information equilibrium is used to derive formulas that are then used to explain data. One such set of formulas yields supply and demand, for example. These formulas contain free parameters, and I do use algorithms (e.g. nonlinear regression, residual minimization, entropy minimization) to fit these parameters to data. I have also used algorithms to project stochastic processes into the future (e.g. Mathematica's TimeSeriesForecast) as well as simple linear extrapolation algorithms. However, these algorithms are not specific to information equilibrium; rather, information equilibrium dictates the form of the input to these algorithms. For example, an autoregressive (AR) process gives the fluctuations around the information equilibrium result for these stock market forecasts (but not the information equilibrium result itself). Mathematically:
F(t) = IE(t) + AR(t)
where F is the forecast, IE is the information equilibrium trend, and AR is the AR process (with its errors). Note that (d/dt) log IE(t) ~ (k − 1) γ per the dynamic information equilibrium model, where k is the information transfer index and γ is, e.g., the NGDP growth rate.
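As a rough illustration of this decomposition (with made-up parameter values, not one of the actual forecasts, and assuming the decomposition is applied in log space), here is a short sketch:

```python
import numpy as np

# Toy version of F(t) = IE(t) + AR(t): a log-linear information equilibrium
# trend plus AR(1) fluctuations around it. All parameter values are made up
# for illustration.
k, gamma = 1.5, 0.04           # information transfer index, growth rate
phi, sigma = 0.8, 0.01         # AR(1) coefficient and shock scale

t = np.arange(0, 20, 0.25)     # quarterly steps over 20 years
log_IE = (k - 1) * gamma * t   # (d/dt) log IE ~ (k - 1) * gamma

rng = np.random.default_rng(42)
ar = np.zeros_like(t)
for i in range(1, len(t)):
    ar[i] = phi * ar[i - 1] + rng.normal(scale=sigma)

log_F = log_IE + ar            # forecast = trend + AR fluctuations (in logs)
```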
You could of course just posit the formulas and free parameters without information equilibrium, much like you could just posit Planck's blackbody radiation formula. However, I wouldn't say that the quantum mechanics underlying Planck's formula is "like the back story in old school video games like Pac-Man, it is expendable."
Now it might make sense for someone to say:
The entire information equilibrium theory is just back story for log-linear regressions and forecasting using autoregressive processes
This is kind of a valid criticism of information equilibrium [2]! But it involves filling in X = log-linear regression and AR processes.
Brian continues:
However, when I read one of his initial papers, the actual algorithm description was just a reference to source code in a computer language I never worked with, nor had access to. From my perspective, the source code was effectively undocumented. I was forced to guess how his algorithm worked. On the basis of that guess, I saw little need to pursue analysing the algorithm.
Prior to the code snippets, these algorithms (and variables) are described:
The parameter fits were accomplished by minimizing the residuals using the Mathematica function FindMinimum using the method PrincipalAxis, a derivative free minimization method.
I can understand that many people have never worked with Mathematica (its functional programming style is different from procedural programming), but it is baffling to me for someone to think:
- Information equilibrium is not necessary (incorrect: it is the source of the parameters aa and bb and the formula they are contained in, as detailed in the paper)
- Guessing what this code was doing was in any way taxing (it is finding the model parameters, whose values are helpfully listed in the previous section of the paper, by a common technique described in the text for noisy data that messes up derivative-based methods; see the sketch after this list)
- On the basis of that guess, one would still conclude information equilibrium was not necessary (I am curious as to what Brian's guess was for what this code was doing)
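For readers who don't use Mathematica, here is a rough Python analogue of the fitting step described in the quoted text: minimize the residuals of a model with free parameters using a derivative-free method (scipy's Powell method standing in for FindMinimum with PrincipalAxis). The power-law model form and the synthetic data below are illustrative stand-ins, not the paper's actual formula or data.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "data" from a power law plus noise (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x**1.5 + rng.normal(scale=0.5, size=x.size)

def residual_norm(params):
    """Sum of squared residuals for the stand-in model y ~ a * x**k."""
    a, k = params
    return np.sum((y - a * x**k) ** 2)

# Powell is a derivative-free minimizer, appropriate when noise makes
# derivative-based methods unreliable.
result = minimize(residual_norm, x0=[1.0, 1.0], method="Powell")
print(result.x)  # fitted (a, k), close to (2.0, 1.5)
```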
It is possible he was referring to the LOESS function and not the parameter minimization. It's true I did not document that code as well as the other code. However, LOESS (or LOWESS) is a well-described technique in the literature. I'll leave it to readers to decide for themselves whether the various code I present is well-documented enough or whether it is "effectively undocumented". Leave a comment with your guess for what the code snippets do!
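For anyone unfamiliar with it, LOESS/LOWESS smoothing is available off the shelf in most environments; here is a minimal Python example (not the code from the paper) using statsmodels:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# LOWESS: locally weighted regression smoothing of noisy data.
# `frac` is the fraction of the data used in each local fit.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

smoothed = lowess(y, x, frac=0.2)  # returns an array of (x, smoothed y) pairs
```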
Additionally, let me say that there are actually no forecasts in my one (and only) information equilibrium preprint; therefore, the previously quoted statement from Brian:
The entire information equilibrium theory is just back story for the algorithm he uses to generate forecasts
cannot be substantiated by referring to the algorithms in the paper. There are no algorithms that generate forecasts. Maybe Brian is talking about the code on GitHub? Maybe he doesn't mean forecasts? But then the statement would have to be "the algorithm he uses to generate parameter estimates". That would indeed be silly: I invented an entire theory just to create parameters I could fit?
Overall, I get the impression that Brian just doesn't like information equilibrium (or possibly me, as we've had strong disagreements before on the Stock-Flow Consistent modeling approach). That's fine. In trying to express his disapproval, however, he seems to be mixing up things from my blog, from my paper, and from my book. I don't talk about the performance of information equilibrium relative to DSGE models in my book (I do on my blog). I don't have forecasts in my paper (I do on my blog).
Mathematica isn't the only code I've made public (there's a Python implementation in one of my GitHub repositories). In fact, the Mathematica code on GitHub is fairly well-documented [4].
Brian seems unable to articulate exactly what his problem with information equilibrium is — likely tied to his lack of understanding of it [3]. I'm generally responsive to questions on my blog about how to run the models or derive the equations — even writing entire posts to explain things to people trying to reproduce my results (and who were in fact successful at doing so). If he's having trouble understanding the Mathematica code, I can rewrite it in pseudocode or another language. If he has questions, he can ask me on my blog, in the comments below, via email (on the sidebar), or on Twitter.
...
Footnotes:
[2] In a sense, you could see my entire effort on my blog as an attempt to convince economists to give up on complex models and return to simple linear ones. However, that reading ignores the ensemble/partition function approach and the deeply integrated possibility of market failure (non-ideal information transfer).
[3] I've also noticed over time that Brian presents himself as more technically savvy than he actually is. As with his inability to understand information equilibrium or Mathematica, he was also unable to understand first-order conditions in economics — I would think anybody who has studied applied math would know that setting a first derivative to zero picks out the stationary points that are the candidates for local optima.
[4] The Mathematica notebook for the "quantity theory of labor and capital" — aka a modified Solow model.