Friday, November 23, 2018

A Workers' History update

I'm at about 11,000 words (roughly 1/3 of my goal, and about the same length as A Random Physicist Takes on Economics). At this rate, I'll probably be done much sooner than the end of 2019. I've finally collected all of the data I think I need and analyzed it with the dynamic information equilibrium model. On my more technical blog, I wrote a bit about one of the newer analyses looking at unionization and inequality. In that blog post I tested my draft diagrams for the book (in black and white so they render on a Kindle). Click to enlarge:

I also updated the cover art. I changed the color to be more blue — the same blue as on the cover of A Random Physicist [1].


[1] You can see them side by side (plus an alternate less spaced-out title):

I think I might go for Century Gothic or Futura (on which it was originally based) for the cover font for that Keynesian-era feel ...

And here are the spaced-out title versions:

OK, one last pair. This version includes two data series that appeared in the first version above but were left out of all the single-spaced versions. The data was left out because it's also left out of the diagram inside the book, but dropping it lets me increase the spacing between lines by about 10% so the single-spaced version isn't cramped. At this point, I'm leaning Futura, but then the kid loves Gorillaz and Studio Ghibli, which use Century Gothic (the former on the Demon Days cover, the latter in their English titles).

Tuesday, November 6, 2018

A workers' history of the United States 1948-2020

It's been a while since I've posted at A random physicist. I figured that since I am going to be self-publishing, I am going to have to give myself my own deadlines. I plan on my second book, A workers' history of the United States 1948-2020, coming out in December of 2019, which gives me a little over a year to write and edit. It should come out in time for people to read before the bulk of the 2020 primary season. I was partially inspired to start writing this book by this article in the NYRB suggesting that nothing came of the 2008 recession: history failed to turn at a potential turning point. I say it's still too early to say that.

The main thesis of the book will be that changes in labor force participation due to social factors are the primary drivers of economic change in the United States in the post-war period. It will be broken into three broad chapters:

I. Women in the workforce
Women entering the workforce and the social changes that both inspired and followed it were the source of the post-war economic boom as well as inflation.
II. The decline of unions
The breakdown of post-WWII national unity into racial and gender divides broke the social contract, with the result that domestic manufacturing jobs (and their unions), predominantly held by men at the time, were shifted overseas, only to be finished off by the shipping container.
III. The Great Recession
The collapse of immigration from Mexico not only popped the housing bubble causing the worst recession since the Great Depression, but brought on the subsequent stagnant growth.
The title (currently, just a play on Friedman and Schwartz's monetary history) and cover are of course subject to change, as well as this structure. Feel free to leave comments (or tweet/DM me @infotranecon). And if any publishers out there are interested in working with me, feel free to contact me via:

Monday, February 5, 2018

A short review from Diane Coyle

Diane Coyle, economist at the University of Manchester and recent winner of the inaugural Indigo Prize, wrote a bit about my book at her blog The Enlightened Economist:
[A Random Physicist Takes on Economics] also made me think about the role of context or environment, and why this might be more influential than individual choice processes in determining economic outcomes. Smith alludes to the literature on biological market theory, pointing out, though, that this does not rest at all on the utility of biological agents, be they pigeons or fungi.
The context it appears in (touching on information, alongside two other books from Daniel Dennett, whom I discussed on my blog) is also interesting, so read the whole thing.

Friday, January 5, 2018

How's this book thing going?

I haven't updated the book blog here in a while because after the initial release, there hasn't been a lot of news. Diane Coyle mentioned on Twitter today that she'd read it and enjoyed it, and is going to write about it at some point in the future, which I'm looking forward to!

I'd like to thank everyone who has bought a copy! Overall, I've sold a few hundred copies (mostly the e-book version) most of which came in the first month with another burst around the holidays. As an aside, the e-book price is based on the Amazon "Kindle single" target pricing. The paperback pricing is based on several factors: a self-imposed "Carbon tax" as I wanted to encourage e-book purchases, the cost of on-demand printing, as well as interpolation between the list prices of two Dover paperbacks I own that my book fits between:


By the way, that is one of the best introductory books on differential forms for science applications around.

It has been a fun experience publishing through the Amazon Kindle bookstore and it's remarkably easy with a minimum of tedious formatting even for the paperback version. I'm in the process of collecting some notes and outlining a future book on dynamic equilibrium that I'm tentatively calling A Dynamic Information Equilibrium History of the United States: 1920-2020 as a pun on Milton Friedman's book with Anna Schwartz. In it, I plan to write a re-interpretation of the economic history of the US based on the dynamic equilibrium model (graphic below). I will make the case that the social change of women entering the workforce is one of the primary events of the post-war period.

Friday, October 6, 2017

My favorite metaphor in the book

One of the things you end up spending a lot of time on when trying to write a book aimed at a non-technical audience is coming up with good metaphors for highly technical concepts. This was one of my favorites: my attempt to explain the Sonnenschein-Mantel-Debreu (SMD) theorem in terms of blueberries (individual economic agents) being aggregated into pies or smoothies (an economy). Because the agents interact with each other, aggregation has to yield something more complex than "it becomes a pile of blueberries". But we can only really say what some of the properties are (e.g. it'll taste a bit like blueberries) and not others (e.g. that it'll be blueberry-shaped).

Friday, September 29, 2017

My information equilibrium paper

Almost two years ago, I submitted my draft pre-print Information equilibrium as an economic principle to the arXiv in the quantitative finance/economics section (q-fin.EC), pictured above. At the beginning of this year, I listed it on the Social Science Research Network (SSRN) which makes it a bit more likely to be found by browsing economists. I briefly talked about it in my book, but if you're interested and down for a bit of math (specifically, differential equations) click on the previous link to download it.

I recently re-read it (as part of my response to a review critical of the idea), and found that it holds up remarkably well given how much more I've learned about information equilibrium. If I were writing it today, I would probably put more emphasis on dynamic equilibrium and ensembles of information equilibrium relationships. I'd also present the Solow model as a particular instantiation of the "Kaldor facts" and include both the "quantity theory of labor" and the "quantity theory of labor and capital".

The introduction really does lay out both the paper and the general concept well. The economy is a complex system and maximum entropy techniques frequently provide insight — but we're left without conservation laws, well-defined constraints, or even a well-defined equilibrium in economics. Information equilibrium is then offered as a solution to this problem. I focus on reproducing well-known results (the Solow model) or empirical regularities (Okun's law), only bringing in the more controversial claims toward the end in a way that does not seem antagonistic and may even be perceived as persuasive (at least I hope).

Unfortunately, I haven't gotten past the "desk rejection" stage in getting this paper published yet. It's understandable from a journal's viewpoint — an easy way to cull the submissions is to see if any of the authors are economists or economics graduate students from recognized institutions. As I describe in my book, I came into this work from a wildly different background in physics and signal processing. However, the reason I did was because economists were attempting to effectively enter my field with their wildly different backgrounds [pdf]! And of course even economists have difficulty getting papers published for things that are far less controversial.

I also knew this would probably be the case, which is why I started my blog:
Instead of trying (and probably failing) to publish it as a paper, I was inspired by Igor Carron to just think out loud with a blog. This blog will be focused on determining if the framework established here is good for anything or just an interesting toy model. Or if it is completely wrong!
My book is part of an attempt to both offer something interesting to economists (which seems to be working), as well as bypass journal editors and go directly to the public. My paper would be the next logical step if you're intrigued by the book!

Wednesday, September 27, 2017

A book review and a response

Brian Romanchuk kindly reviewed my book, detailing his thoughts on its merits and failures.

Overall, I don't have much of an issue with his review of the book itself, and you should definitely read it to get a different point of view. A later section is devoted to his critique of information equilibrium, which I discuss below, but for the most part where we differ it is because of those differing points of view.

There are only three things that I feel the need to respond to regarding the review of the book itself. These are about error bars, expectations, and scope conditions.

Error bars

Brian says I decry "the lack of error bars in economics texts." I realize now I should have left in the "preferably with error bars" from an earlier draft. I was actually decrying the lack of any theoretical curves going through data in any available material at all, including PDFs of slides and even economics papers where models are purportedly being compared to data (regardless of whether there are error bars or not). I never saw them in papers, so I then thought: maybe they're in books? Nope. On Wikipedia? Nope. There is lots of data shown in econ chart blogging (for example, on Brian's website), but there are never any curves derived from theory going through the data, except the occasional linear fit. Brian is correct in saying that a lot of physics (and engineering) textbooks don't show error bars (or sometimes even data). But even on Wikipedia, there are no comparisons of economic theory to economic data, while there are for physics. And there is a huge difference between not showing data for a Lagrange multiplier problem in a classical mechanics textbook (a method validated for literally hundreds of years) and not showing data for a DSGE model in a working paper explaining the liquidity trap (a method that has not been shown to be empirically accurate for any data). My inclusion of "error bars" seems to have thrown off the focus here.


Expectations

One place where Brian misses the point I was making is in his discussion of the section of my book that talks about expectations. This could well be my own fault for not being clear enough, but when he writes:
He wastes the reader's time discussing how he was surprised that economics models have the mechanism that expected future outcomes influence present activity.
it does not characterize what I wrote or the point I was trying to make. I was "surprised" that economics models have a mechanism where the *actual* future outcomes influence present activity. I emphasize it by using the words "actual future" five times as opposed to "expected future". There is no issue with using an expected future as an input, so long as that expected future is derived from information known in the present. In fact, I wrote exactly that in my book:
If the future value of inflation [in a model] is just made up from information known at the present time, then there is no information being moved from the future to the present and no information problem.
However, you cannot know the actual future of even a hypothetical universe in the present unless the system is completely deterministic (i.e. does not contain any unknown stochastic or chaotic elements). Yet rational expectations includes the actual future (in the hypothetical universe the model exists in) in the model. You can have a guess about an expected future, but that isn't the same as knowing the actual future plus an error term of zero mean.

Maybe an example is appropriate here. I can know that if I roll six dice, I should expect a sum of 21 (with a standard deviation of roughly ± 4). Rolling dice is a well-defined stochastic process. However, I cannot know that if I ask 6 people to pick a random number, I should expect an average of 6 ± 2 where 6 is the actual result of asking those 6 people in the future.

That's what rational expectations does.
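The dice half of that example is easy to check numerically. Here's a quick sketch in Python (the trial count and seed are my choices, not anything from the book):

```python
import random

random.seed(42)

# The sum of six fair dice has mean 6 * 3.5 = 21 and standard
# deviation sqrt(6 * 35/12), about 4.2: a well-defined stochastic process.
n_trials = 100_000
sums = [sum(random.randint(1, 6) for _ in range(6)) for _ in range(n_trials)]

mean = sum(sums) / n_trials
sd = (sum((s - mean) ** 2 for s in sums) / n_trials) ** 0.5
```

No analogous calculation exists for "ask 6 people to pick a random number": there is no well-defined process generating the answers, which is the point.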

Scope conditions

Brian also refers to my discussion of scope conditions, but I'm not completely sure he understands the concept. Brian writes:
We are back to Smith's scope conditions. The scope condition for the "inflation will be 2%" model is the current environment -- characterised by inflation sticking near 2%. You do not need a doctorate in theoretical physics to see that this is a fairly silly situation.
That would not be the scope condition for Brian's constant inflation theory. As stated, the constant inflation theory (i.e. π = 2%) Brian presents has no scope conditions. If inflation deviates from 2%, the model is empirically invalid, not out of scope — unless there is something setting the inflation scale.

An example: π = 2% when monetary base growth μ << 10%. In that case, the μ << 10% is the scope condition. Now π ~ 2% might be a scope condition for some other model (e.g. the ISLM model kind of implicitly assumes inflation is low because it doesn't distinguish real and nominal — discussed here and here with slides). As described, Brian confuses "scope condition" with a "just-so theory".

In this form, Brian's pseudo-example is: π = 2% when π ≈ 2%, which is just vacuous.

*  *  *

Information equilibrium

One thing I do want to note is that Brian appears to want to use my book as an entry point to critique my information transfer approach more broadly (which I did not invent, but rather borrowed from Fielitz and Borchardt's application to complex physical systems). For example, Brian writes:
He cites studies that show DSGE model predictions performing worse than simple econometric techniques, or of course, his information transmission economics techniques.
I actually make no reference to the information transfer models in that context in my book. He subsequently has an entire section of his review set aside to criticize information equilibrium. What follows is my response to his critique of information equilibrium and is largely independent of my book.

Brian uses the old economics trope that "if you really did understand economics so well, you (or someone else) could get rich":
Looking for validation in peer-reviewed journals is curious: if the capitalist system is an efficient system for processing information, the commercial success of the techniques should have appeared within months of their appearance in the public domain.
First, I might have been able to make a lot of money in the bond market had I a) set up an instrument to bet against the BCEI forecast in the graph below, and b) had a lot of money to start with:

The forecast and model were described here.

Second, in my book, I make the case that the capitalist system is not always an efficient system for processing information. I introduce an entire chapter as a discussion of market failure:
As long as information equilibrium holds — for example, the agents choose opportunities in the opportunity set uniformly and don't bunch up, economics is the study of properties of the opportunity set. But what happens when this fails? That's the question I address [in the next chapter], and provide a speculative answer.
Third, Brian provides us with a possible reason — by example — for why information equilibrium might not have been picked up and used by everyone [1]: people might not understand it. People might not understand it because it's over their head. People might not understand it because I haven't explained it very well. People might not understand it because it contains some fundamental error and it is therefore actually impossible to understand. People might not understand it because they're being deliberately obtuse. People might think they understand it, but are actually wrong — leading them to either not use it or use it incorrectly.

I don't know what the reason is, but Brian doesn't appear to understand it. As such, he represents an example of a reason information equilibrium hasn't taken over the world. His description of information equilibrium reminds me of the times I've gone into a meeting to explain something novel to someone and they say: "Oh, I get it, this is just X" where X is something not only well-known but completely unrelated. The best case of this I've experienced was from Robin Hanson, who effectively said of information equilibrium "Oh, I get it, this is just game theory information" (not exactly in that way, but that's the gist of referring me to the work of Aumann and Harsanyi).

Except in this case, Brian doesn't even tell us what X is — it's just X:
The entire information equilibrium theory is just back story for the algorithm he uses to generate forecasts
X = some algorithm. I can't even tell if X is unrelated or not because it isn't specified. In fact, it seems pretty clear the reason it isn't specified is because Brian doesn't know what X is as we'll see below.

In any case, this simply misunderstands what is happening. Information equilibrium is used to derive formulas that are then used to explain data. One such set of formulas yields supply and demand, for example. These formulas contain free parameters, and I do use algorithms (e.g. nonlinear regression, residual minimization, entropy minimization) to fit these parameters to data. I have also used algorithms to project stochastic processes into the future (e.g. Mathematica's TimeSeriesForecast) as well as simple linear extrapolation algorithms. However, these algorithms are not specific to information equilibrium, and information equilibrium dictates the form of the input to these algorithms. For example, an autoregressive (AR) process gives the fluctuations around the information equilibrium result for these stock market forecasts (but not the information equilibrium itself). Mathematically:

F(t) = IE(t) + AR(t)

where F is the forecast, IE is the information equilibrium trend and AR is the AR process (with its errors). Note that (d/dt) log IE(t) ~ (k − 1) γ per the dynamic information equilibrium model where k is the information transfer index and γ is e.g. NGDP growth rate.
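As a minimal sketch of that F(t) = IE(t) + AR(t) split, here's what the procedure looks like in Python. Everything in it is illustrative: the synthetic data, the log-linear trend standing in for IE(t), and the AR(1) order are my assumptions for the sketch, not the actual fits behind the forecasts.

```python
import random

random.seed(0)

# Synthetic log-level series: a log-linear trend (standing in for the
# information equilibrium part) plus AR(1) fluctuations around it.
n = 200
noise = [0.0]
for _ in range(n - 1):
    noise.append(0.8 * noise[-1] + random.gauss(0.0, 0.02))
log_x = [2.0 + 0.05 * t + noise[t] for t in range(n)]

# Fit the trend log IE(t) = a + b*t by ordinary least squares.
t_mean = (n - 1) / 2
x_mean = sum(log_x) / n
b = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(log_x)) / sum(
    (t - t_mean) ** 2 for t in range(n)
)
a = x_mean - b * t_mean
resid = [x - (a + b * t) for t, x in enumerate(log_x)]

# Fit the AR(1) coefficient of the residuals by least squares.
phi = sum(r1 * r0 for r1, r0 in zip(resid[1:], resid[:-1])) / sum(
    r ** 2 for r in resid[:-1]
)

# Forecast F(t) = IE(t) + AR(t): the AR(1) part decays back to the trend.
horizon = 20
forecast = [
    a + b * (n - 1 + h) + resid[-1] * phi ** h for h in range(1, horizon + 1)
]
```

The point is the division of labor: the trend comes from the model, while the AR process only describes the fluctuations around it.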

You could of course just posit the formulas and free parameters without information equilibrium, much like how you could just posit Planck's blackbody radiation formula. However, I wouldn't say that the quantum mechanics underlying Planck's formula is "like the back story in old school video games like Pac-Man, it is expendable."

Now it might make sense for someone to say:
The entire information equilibrium theory is just back story for log-linear regressions and forecasting using autoregressive processes
This is kind of a valid criticism of information equilibrium [2]! But it involves filling in X = log-linear regression and AR processes.

Brian continues:
However, when I read one of his initial papers, the actual algorithm description was just a reference to source code in a computer language I never worked with, nor had access to. From my perspective, the source code was effectively undocumented. I was forced to guess how his algorithm worked. On the basis of that guess, I saw little need to pursue analysing the algorithm.
I assume he is making a reference to my preprint and the code snippets provided in the Appendices. For example:

Prior to the code snippets, these algorithms (and variables) are described:
The parameter fits were accomplished by minimizing the residuals using the Mathematica function FindMinimum using the method PrincipalAxis, a derivative free minimization method.
I can understand that many people have never worked with Mathematica (its functional programming style is different from procedural programming), but it is baffling to me for someone to think:
  1. Information equilibrium is not necessary (incorrect: it is the source of the parameters aa and bb and the formula they are contained in, as detailed in the paper)
  2. Guessing what this code was doing was in any way taxing (it is finding the model parameters, yielding values helpfully listed in the previous section of the paper, by a common technique described in the text for the case of noisy data that messes up derivative-based methods)
  3. On the basis of that guess, one would still conclude information equilibrium was not necessary (I am curious as to what Brian's guess was for what this code was doing)
It's possible he was referring to the LOESS function and not the parameter minimization. It's true I did not document that code as well as the other code. However, LOESS (or LOWESS) is a well-described technique in the literature. I'll leave it to readers to decide for themselves whether the various code I present is well-documented enough or whether it is "effectively undocumented". Leave a comment with your guess for what the code snippets do!
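For readers who haven't worked with Mathematica, here is a rough Python analogue of what that fitting step does: minimize the sum of squared residuals with a derivative-free search. The toy model, the data, and the simple coordinate search below are all stand-ins of mine; the paper's actual fits use FindMinimum with the PrincipalAxis method.

```python
import math

# Toy data generated from y = a*log(x) + b with (a, b) = (2.5, 1.0);
# the model form here is illustrative, not one from the paper.
xs = [1.0, 2.0, 3.0, 5.0, 8.0, 13.0]
ys = [2.5 * math.log(x) + 1.0 for x in xs]

def sse(params):
    """Sum of squared residuals for parameters (a, b)."""
    a, b = params
    return sum((y - (a * math.log(x) + b)) ** 2 for x, y in zip(xs, ys))

# Derivative-free coordinate search: try steps along each parameter axis,
# halving the step when no move improves the fit (a crude stand-in for a
# principal-axis method, which likewise needs no derivatives).
params = [0.0, 0.0]
step = 1.0
while step > 1e-8:
    improved = False
    for i in range(len(params)):
        for delta in (step, -step):
            trial = params.copy()
            trial[i] += delta
            if sse(trial) < sse(params):
                params = trial
                improved = True
    if not improved:
        step *= 0.5
```

Derivative-free methods like this matter when the residuals are noisy enough to mess up gradient-based methods, which is the situation described in the text.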

Additionally, let me say that there are actually no forecasts in my one (and only) information equilibrium preprint; therefore, the previously quoted statement from Brian:
The entire information equilibrium theory is just back story for the algorithm he uses to generate forecasts
cannot be substantiated by referring to the algorithms in the paper. There are no algorithms that generate forecasts. Maybe Brian is talking about the code on GitHub? Maybe he doesn't mean forecasts? But then it would have to be "the algorithm he uses to generate parameter estimates". That would indeed be silly: I invented an entire theory just to create parameters I could fit?

Overall, I get the impression that Brian just doesn't like information equilibrium (or possibly me as we've had strong disagreements before on the Stock-Flow Consistent modeling approach). That's fine. In trying to express his disapproval, he seems to be mixing up things from my blog, from my paper, and from my book. I don't talk about the performance of information equilibrium relative to DSGE models in my book (I do on my blog). I don't have forecasts in my paper (I do on my blog). Mathematica isn't the only code I've made public (there's a python implementation in one of my GitHub repositories). In fact, the Mathematica code on GitHub is fairly well-documented [4].

Brian seems unable to articulate exactly what his problem with information equilibrium is — likely tied to his lack of understanding of it [3]. I'm generally responsive to questions on my blog about how to run the models or derive the equations — even writing entire posts trying to explain things to people who are trying to reproduce my results (and who were in fact successful at doing so). If he's having trouble understanding the Mathematica code, I can rewrite it in pseudocode or another language. If he has questions, he can ask me on my blog, in comments below, via email (on the side bar), or on Twitter.



[1] Actually, I noticed recently the interest rate model has been used. Google translate tells me the model of interest rates for Korea "can be said to be awesome".

[2] In a sense, you could see my entire effort on my blog as an attempt to convince economists to give up on complex models and return to simple linear ones. This ignores the ensemble/partition function approach and the deeply integrated possibility of market failure (non-ideal information transfer).

[3] I've also noticed over time that Brian presents himself as more technically savvy than he actually is. Like his inability to understand information equilibrium or Mathematica, he was also unable to understand first order conditions in economics: I would think anybody who has studied applied math would know that the zeros of a first derivative are critical points (candidate local optima).

[4] The Mathematica notebook for the "quantity theory of labor and capital" — aka a modified Solow model (click to expand):