One of the things you end up spending a lot of time on when trying to write a book aimed at a non-technical audience is coming up with good metaphors for highly technical concepts. This was one of my favorites: my attempt to explain the SMD theorem in terms of blueberries (individual economic agents) being aggregated into pies or smoothies (an economy). Because the agents interact with each other, aggregation has to yield something more complex than "it becomes a pile of blueberries". But we can only really say what some of the properties are (e.g. it'll taste a bit like blueberries) and not others (it'll be blueberry-shaped).
Friday, September 29, 2017
Almost two years ago, I submitted my draft pre-print Information equilibrium as an economic principle to the arXiv in the quantitative finance/economics section (q-fin.EC), pictured above. At the beginning of this year, I listed it on the Social Science Research Network (SSRN) which makes it a bit more likely to be found by browsing economists. I briefly talked about it in my book, but if you're interested and down for a bit of math (specifically, differential equations) click on the previous link to download it.
I recently re-read it (as part of my response to a review critical of the idea), and found that it holds up remarkably well given how much more I've learned about information equilibrium. If I were writing it today, I would probably put more emphasis on dynamic equilibrium and ensembles of information equilibrium relationships. I'd also present the Solow model as a particular instantiation of the "Kaldor facts" and include both the "quantity theory of labor" and the "quantity theory of labor and capital".
The introduction really does lay out both the paper and the general concept well. The economy is a complex system and maximum entropy techniques frequently provide insight — but we're left without conservation laws, well-defined constraints, or even a well-defined equilibrium in economics. Information equilibrium is then offered as a solution to this problem. I focus on reproducing well-known results (the Solow model) or empirical regularities (Okun's law), only bringing in the more controversial claims toward the end in a way that does not seem antagonistic and may even be perceived as persuasive (at least I hope).
Unfortunately, I haven't gotten past the "desk rejection" stage in getting this paper published yet. It's understandable from a journal's viewpoint — an easy way to cull the submissions is to see if any of the authors are economists or economics graduate students from recognized institutions. As I describe in my book, I came into this work from a wildly different background in physics and signal processing. However, the reason I did was because economists were attempting to effectively enter my field with their wildly different backgrounds [pdf]! And of course even economists have difficulty getting papers published for things that are far less controversial.
I also knew this would probably be the case, which is why I started my blog:
Instead of trying (and probably failing) to publish it as a paper, I was inspired by Igor Carron to just think out loud with a blog. This blog will be focused on determining if the framework established here is good for anything or just an interesting toy model. Or if it is completely wrong!
My book is part of an attempt both to offer something interesting to economists (which seems to be working) and to bypass journal editors and go directly to the public. My paper would be the next logical step if you're intrigued by the book!
Wednesday, September 27, 2017
Brian Romanchuk of bondeconomics.com kindly reviewed my book, detailing his thoughts on its merits and failures.
Overall, I don't have much of an issue with his review of the book itself and you should definitely read it to get a different point of view. A later section is devoted to his critique of information equilibrium that I discuss below, but for the most part where we differ it is because of those differing points of view.
There are only three things that I feel the need to respond to regarding the review of the book itself. These are about error bars, expectations, and scope conditions.
Brian says I decry "the lack of error bars in economics texts." I realize now I should have left in the "preferably with error bars" from an earlier draft. What I was actually decrying was the lack of any theoretical curves going through data in any available material at all, including PDFs of slides and even economics papers where models are purportedly being compared to data (regardless of whether there are error bars). I never saw them in papers, so I thought: maybe they're in books? Nope. On Wikipedia? Nope. There is lots of data shown in econ chart blogging (for example, on Brian's website), but there are never any curves derived from theory going through the data — except the occasional linear fit. Brian is correct that a lot of physics (and engineering) textbooks don't show error bars (or sometimes even data). But even on Wikipedia, there are no comparisons of economic theory to economic data — while there are for physics. And there is a huge difference between not showing data for a Lagrange multiplier problem in a classical mechanics textbook (a method validated for literally hundreds of years) and not showing data for a DSGE model in a working paper explaining the liquidity trap (a method that has not been shown to be empirically accurate for any data). My inclusion of "error bars" seems to have thrown off the focus here.
One place where Brian misses the point I was making is in his discussion of the section of my book that talks about expectations. This could well be my own fault for not being clear enough, but when he writes:
He wastes the reader's time discussing how he was surprised that economics models have the mechanism that expected future outcomes influence present activity.
it does not characterize what I wrote or the point I was trying to make. I was "surprised" that economics models have a mechanism where the *actual* future outcomes influence present activity. I emphasize it by using the words "actual future" five times as opposed to "expected future". There is no issue with using an expected future as an input, so long as that expected future is derived from information known in the present. In fact, I wrote exactly that in my book:
If the future value of inflation [in a model] is just made up from information known at the present time, then there is no information being moved from the future to the present and no information problem.
However, you cannot know the actual future of even a hypothetical universe in the present unless the system is completely deterministic (i.e. does not contain any unknown stochastic or chaotic elements), but rational expectations includes the actual future (in the hypothetical universe the model exists in) in the model. You can have a guess about an expected future, but that isn't the same as knowing the actual future plus an error term of zero mean.
Maybe an example is appropriate here. I can know that if I roll six dice, I should expect a 21 (with a standard error of roughly ± 4). Rolling dice is a well-defined stochastic process. However, I cannot know that if I ask 6 people to pick a random number, I should expect an average of 6 ± 2 where 6 is the actual result of asking 6 people in the future.
That's what rational expectations does.
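The arithmetic for the dice example is easy to check (a quick sketch of my own; nothing here is from the book):

```python
import random
import statistics

# Exact moments for one fair die
faces = [1, 2, 3, 4, 5, 6]
mu = sum(faces) / 6                          # 3.5
var = sum((f - mu) ** 2 for f in faces) / 6  # 35/12, about 2.92

# Means and variances add for independent rolls, so for six dice:
expected_total = 6 * mu        # 21.0
std_error = (6 * var) ** 0.5   # sqrt(17.5), about 4.18

# Monte Carlo check of the well-defined stochastic process
random.seed(0)
totals = [sum(random.choice(faces) for _ in range(6)) for _ in range(100_000)]
print(expected_total, round(std_error, 2))
print(round(statistics.mean(totals), 1), round(statistics.stdev(totals), 2))
```

The simulated mean and standard deviation land on the analytic values, which is exactly the sense in which you can "expect a 21": it's derived entirely from information known before the roll.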
Brian also refers to my discussion of scope conditions, but I'm not completely sure he understands the concept. Brian writes:
We are back to Smith's scope conditions. The scope condition for the "inflation will be 2%" model is the current environment -- characterised by inflation sticking near 2%. You do not need a doctorate in theoretical physics to see that this is a fairly silly situation.
That would not be the scope condition for Brian's constant inflation theory. As stated, the constant inflation theory (i.e. π = 2%) Brian presents has no scope conditions. If inflation deviates from 2%, the model is empirically invalid, not out of scope — unless there is something setting the inflation scale.
An example: π = 2% when monetary base growth μ << 10%. In that case, μ << 10% is the scope condition. Now π ~ 2% might be a scope condition for some other model (e.g. the ISLM model kind of implicitly assumes inflation is low because it doesn't distinguish real from nominal quantities — discussed here and here with slides). As described, Brian confuses a "scope condition" with a "just-so theory".
In this form, Brian's pseudo-example is: π = 2% when π ≈ 2%, which is just vacuous.
* * *
One thing I do want to note is that Brian appears to want to use my book as an entry point to critique my information transfer approach more broadly (which I did not invent, but rather borrowed from Fielitz and Borchardt's application to complex physical systems). For example, Brian writes:
He cites studies that show DSGE model predictions performing worse than simple econometric techniques, or of course, his information transmission economics techniques.
I actually make no reference to the information transfer models in that context in my book. He subsequently has an entire section of his review set aside to criticize information equilibrium. What follows is my response to his critique of information equilibrium and is largely independent of my book.
Brian uses the old economics trope that "if you really did understand economics so well, you (or someone else) could get rich":
Looking for validation in peer-reviewed journals is curious: if the capitalist system is an efficient system for processing information, the commercial success of the techniques should have appeared within months of their appearance in the public domain.
First, I might have been able to make a lot of money in the bond market had I a) set up an instrument to bet against the BCEI forecast in the graph below, and b) had a lot of money to start with:
The forecast and model were described here.
Second, in my book, I make the case that the capitalist system is not always an efficient system for processing information. I introduce an entire chapter as a discussion of market failure:
As long as information equilibrium holds — for example, the agents choose opportunities in the opportunity set uniformly and don't bunch up, economics is the study of properties of the opportunity set. But what happens when this fails? That's the question I address [in the next chapter], and provide a speculative answer.
Third, Brian provides us with a possible reason — by example — for why information equilibrium might not have been picked up and used by everyone: people might not understand it. People might not understand it because it's over their heads. People might not understand it because I haven't explained it very well. People might not understand it because it contains some fundamental error and it is therefore actually impossible to understand. People might not understand it because they're being deliberately obtuse. People might think they understand it, but are actually wrong — leading them to either not use it or use it incorrectly.
I don't know what the reason is, but Brian doesn't appear to understand it. As such, he represents an example of a reason information equilibrium hasn't taken over the world. His description of information equilibrium reminds me of the times I've gone into a meeting to explain something novel to someone and they say: "Oh, I get it, this is just X" where X is something not only well-known but completely unrelated. The best example of this I've experienced was from Robin Hanson, who effectively said of information equilibrium "Oh, I get it, this is just game theory information" (not in exactly those words, but that's the gist of referring me to the work of Aumann and Harsanyi).
Except in this case, Brian doesn't even tell us what X is — it's just X:
The entire information equilibrium theory is just back story for the algorithm he uses to generate forecasts
X = some algorithm. I can't even tell if X is unrelated or not because it isn't specified. In fact, it seems pretty clear the reason it isn't specified is that Brian doesn't know what X is, as we'll see below.
In any case, this simply misunderstands what is happening. Information equilibrium is used to derive formulas that are then used to explain data. One such set of formulas yields supply and demand, for example. These formulas contain free parameters, and I do use algorithms (e.g. nonlinear regression, residual minimization, entropy minimization) to fit these parameters to data. I have also used algorithms to project stochastic processes into the future (e.g. Mathematica's TimeSeriesForecast) as well as simple linear extrapolation algorithms. However, these algorithms are not specific to information equilibrium, and information equilibrium dictates the form of the input to these algorithms. For example, an autoregressive (AR) process gives the fluctuations around the information equilibrium result for these stock market forecasts (but not the information equilibrium itself). Mathematically:
F(t) = IE(t) + AR(t)
where F is the forecast, IE is the information equilibrium trend and AR is the AR process (with its errors). Note that (d/dt) log IE(t) ~ (k − 1) γ per the dynamic information equilibrium model where k is the information transfer index and γ is e.g. NGDP growth rate.
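As a sketch of that decomposition (with synthetic data and my own variable names, not the actual models), fitting the trend first and then an AR(1) to the residuals might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "data": a log-linear trend (the IE piece) plus AR(1) fluctuations
t = np.arange(200.0)
g_true, phi_true = 0.02, 0.7
noise = np.zeros(t.size)
for i in range(1, t.size):
    noise[i] = phi_true * noise[i - 1] + rng.normal(0.0, 0.05)
log_y = 1.0 + g_true * t + noise

# Step 1: fit the trend log IE(t) = a + g*t by least squares
A = np.vstack([np.ones_like(t), t]).T
a_hat, g_hat = np.linalg.lstsq(A, log_y, rcond=None)[0]

# Step 2: fit an AR(1) to the residuals r(t) = log_y - log IE(t)
r = log_y - (a_hat + g_hat * t)
phi_hat = float(np.dot(r[:-1], r[1:]) / np.dot(r[:-1], r[:-1]))

# Step 3: forecast F(t) = IE(t) + AR(t) over a 20-step horizon
h = np.arange(t[-1] + 1, t[-1] + 21)
forecast = (a_hat + g_hat * h) + r[-1] * phi_hat ** (h - t[-1])

print(round(g_hat, 3), round(phi_hat, 2))
```

The point of the sketch: the least-squares and AR machinery is generic, but the trend it is applied to (here, the log-linear growth) is what the theory supplies.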
You could of course just posit the formulas and free parameters without information equilibrium, much like how you could just posit Planck's blackbody radiation formula. However, I wouldn't say that the quantum mechanics underlying Planck's formula is "like the back story in old school video games like Pac-Man, it is expendable."
Now it might make sense for someone to say:
The entire information equilibrium theory is just back story for log-linear regressions and forecasting using autoregressive processes
This is kind of a valid criticism of information equilibrium! But it involves filling in X = log-linear regression and AR processes.
However, when I read one of his initial papers, the actual algorithm description was just a reference to source code in a computer language I never worked with, nor had access to. From my perspective, the source code was effectively undocumented. I was forced to guess how his algorithm worked. On the basis of that guess, I saw little need to pursue analysing the algorithm.
I assume he is making a reference to my preprint and the code snippets provided in the Appendices. For example:
Prior to the code snippets, these algorithms (and variables) are described:
The parameter fits were accomplished by minimizing the residuals using the Mathematica function FindMinimum using the method PrincipalAxis, a derivative free minimization method.
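For readers who haven't worked with Mathematica, roughly the same procedure can be sketched in Python (scipy's Powell method standing in for PrincipalAxis; the data and "true" parameter values here are made up for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Made-up noisy data generated from p = a * s**b with "true" a = 2.0, b = 0.5
s = np.linspace(1.0, 10.0, 50)
p = 2.0 * s ** 0.5 + rng.normal(0.0, 0.05, size=s.size)

# Sum-of-squares residual for a candidate (a, b)
def residual(params):
    a, b = params
    return float(np.sum((p - a * s ** b) ** 2))

# Powell's method is derivative-free, in the same spirit as Mathematica's
# FindMinimum with the PrincipalAxis method: useful when noise in the data
# would mess up derivative-based methods
fit = minimize(residual, x0=[1.0, 1.0], method="Powell")
a_hat, b_hat = fit.x
print(round(a_hat, 2), round(b_hat, 2))
```

The fit recovers the parameters the data was generated from; the theory supplies the functional form, and the minimizer only fills in the numbers.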
I can understand that many people have not ever worked with Mathematica (its functional programming style is different from procedural programming), but it is baffling to me for someone to think:
- Information equilibrium is not necessary (incorrect: it is the source of the parameters aa and bb and the formula they are contained in as detailed in the paper)
- Guessing what this code was doing was in any way taxing (it is finding the model parameters, whose values are helpfully listed in the previous section of the paper, via a common technique described in the text for noisy data that messes up derivative-based methods)
- On the basis of that guess, one would still conclude information equilibrium was not necessary (I am curious as to what Brian's guess was for what this code was doing)
It's possible he was referring to the LOESS function and not the parameter minimization. It's true I did not document that code as well as the other code. However, LOESS (or LOWESS) is a well-described technique in the literature. I'll leave it to readers to decide for themselves whether the various code I present is well-documented enough or whether it is "effectively undocumented". Leave a comment with your guess for what the code snippets do!
Additionally, let me say that there are actually no forecasts in my one (and only) information equilibrium preprint; therefore, the previously quoted statement from Brian:
The entire information equilibrium theory is just back story for the algorithm he uses to generate forecasts
cannot be substantiated by referring to the algorithms in the paper. There are no algorithms that generate forecasts. Maybe Brian is talking about the code on GitHub? Maybe he doesn't mean forecasts? But then it would have to be "the algorithm he uses to generate parameter estimates". That would indeed be silly: I invented an entire theory just to create parameters I could fit?
Overall, I get the impression that Brian just doesn't like information equilibrium (or possibly me, as we've had strong disagreements before on the Stock-Flow Consistent modeling approach). That's fine. But in trying to express his disapproval, he seems to be mixing up things from my blog, from my paper, and from my book. I don't talk about the performance of information equilibrium relative to DSGE models in my book (I do on my blog). I don't have forecasts in my paper (I do on my blog). Mathematica isn't the only code I've made public (there's a Python implementation in one of my GitHub repositories). In fact, the Mathematica code on GitHub is fairly well-documented.
Brian seems unable to articulate exactly what his problem with information equilibrium is — likely tied to his lack of understanding of it. I'm generally responsive to questions on my blog about how to run the models or derive the equations — even writing entire posts trying to explain things to people who are trying to reproduce my results (and who were in fact successful at doing so). If he's having trouble understanding the Mathematica code, I can rewrite it in pseudocode or another language. If he has questions, he can ask me on my blog, in comments below, via email (on the side bar), or on Twitter.
 Actually, I noticed recently the interest rate model has been used. Google translate tells me the model of interest rates for Korea "can be said to be awesome".
 In a sense, you could see my entire effort on my blog as an attempt to convince economists to give up on complex models and return to simple linear ones. This ignores the ensemble/partition function approach and the deeply integrated possibility of market failure (non-ideal information transfer).
 I've also noticed over time that Brian presents himself as more technically savvy than he actually is. Like his inability to understand information equilibrium or Mathematica, he was also unable to understand first order conditions in economics — I would think anybody who has studied applied math would know that the zeros of a first derivative mark the candidate local optima.
 The Mathematica notebook for the "quantity theory of labor and capital" — aka a modified Solow model (click to expand):
Monday, September 18, 2017
In my book, I illustrated Gary Becker's random agent model using a text diagram:
Here is an animation of that random agent model tracing out a demand curve when you change the price:
In the graph on the left, the black dot represents the average, and the black line represents the budget constraint. The graph on the right represents the demand curve traced out as we increase the price. A key thing to remember is that in order to achieve this result, we have to explore the full state space (the triangle under the black line). If we don't, then raising the price (or cutting it) doesn't necessarily change the quantity demanded:
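The logic of the animation can be sketched in a few lines (my own toy version, assuming a budget of 100 and a second good with unit price):

```python
import random

random.seed(1)

def avg_demand(price, budget=100.0, n=50_000):
    """Average consumption of good x when agents pick a point uniformly
    from the budget set price*x + y <= budget (the random agent model)."""
    total = 0.0
    for _ in range(n):
        u, v = random.random(), random.random()
        if u + v > 1.0:               # fold the unit square onto the triangle
            u, v = 1.0 - u, 1.0 - v
        total += u * budget / price   # x-coordinate of the random bundle
    return total / n

# Uniform exploration of the triangle puts the average at its centroid, so
# average demand is budget/(3*price): raising the price lowers it with no
# optimizing behavior at all
for price in [1.0, 2.0, 4.0]:
    print(price, round(avg_demand(price), 1))
```

The downward-sloping demand curve falls out of the geometry of the fully explored state space, which is the key assumption noted above.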
Tuesday, September 12, 2017
This was the first point listed, but the last point I'm addressing:
[In my opinion] characterisation of economics as the 'science of the state space' could be used to make [radical] political claims. E.g. does 'economics' only exist because of property rights/capitalism?
I re-wrote this somewhat, combining two tweets as I believe @unlearningecon intended.
The reason I'm addressing this one last is because it's the most intriguing and required the most time to sit and think about.
The question is basically whether the "economic state space" I talk about in the book (and which in economics jargon is referred to as the "opportunity set") is constructed by a particular system of laws, property rights, and institutions (e.g. capitalism, but also money in general) and therefore the study of that particular set of laws and institutions that we call economics is specific to those laws and institutions. Does economics as such cease to exist when those institutions change?
I know that Gary Becker and the Chicago school of economics thought that economics basically exists if humans ever make strategic decisions, and therefore economics should push into the study of sociology and politics.
I've always been a Star Trek fan, and indeed in Gene Roddenberry's future "economics" does cease to exist due to essentially the elimination of scarcity. This makes sense because institutions like money, capitalism, and property rights were all designed to deal with scarcity. You can read more about this in Manu Saadia's great book Trekonomics.
I'm going to have to say the real answer is: I don't know. I am ambivalent about how plausible the Star Trek thought experiment is. Can scarcity actually be eliminated? And if so, does the new institution that either mitigates or eliminates scarcity explore the state space? If the answer is yes, then "economics" will probably continue.
But I do think "science of the state space" will have lots of potential uses even if capitalism is crushed. When I say science of the state space, I am actually referring to what is essentially information theory and in particular the concept of information equilibrium which I have been exploring on my blog. In those explorations, I've already found a couple of examples: explaining a transistor using information theory and understanding Generative Adversarial Networks (GANs) as an analogy with information equilibrium. [Update: and traffic models (and see here).]
In particular, I think there is a deep connection between understanding the market algorithm and computer science (MIT recently started a new major combining the two).
I also think this science of the state space may be useful in neuroscience and understanding the brain. After I started my blog, Todd Zorick, a neuroscience researcher, and I wrote a paper on using information equilibrium to understand EEG measurements and distinguish between states of consciousness.
I have also speculated about the connection between the state space approach and evolution.
It's possible all of these disciplines may have a single framework based on information theory and the science of the state space, finally realizing Norbert Wiener's desire for a single field he called cybernetics (Wiener was incidentally a simultaneous progenitor of information theory along with Claude Shannon).
PS That's a picture of a couple of my Star Trek models.
Update: Forgot to include traffic models above.
Tuesday, September 5, 2017
I put together a list of the people referenced in my book:
George Akerlof, Gary Becker, Jeremy Bentham, Ben Bernanke, Ludwig Boltzmann, Guenter Borchardt, George Box, Sean Carroll, Keith Chen, Hillary Clinton, Arnaud Costinot, Bo Cowgill, Diane Coyle, Charles Darwin, Gerard Debreu, Lana del Rey, Dave Donaldson, David Dunning, Rochelle Edge, Martin Eichenbaum, Albert Einstein, Queen Elizabeth II, Peter Fielitz, Irving Fisher, James Forder, Cameron Freer, Milton Friedman, Galileo Galilei, Peter Ganong, Carl Friedrich Gauss, Nicholas Georgescu-Roegen, David Glasner, Alan Greenspan, Refet Gurkaynak, Robert Hall, Roy Harrod, Ralph Hartley, Friedrich Hayek, Cesar Hidalgo, Thomas Hobbes, Erik Hoel, Chris House, Nir Jaimovich, Edwin Jaynes, William Jevons, Lyndon Johnson, John Maynard Keynes, Israel Kirzner, Justin Kruger, Paul Krugman, Stanley Kubrick, James Kwak, Venkat Lakshminarayanan, Carl Linnaeus, John List, Robert Lucas, Rolf Mandel, Alfred Marshall, William McChesney Martin, Jason Matheny, Michael Mee, Benjamin Moll, Dale Mortensen, Tom Murphy, Isaac Newton, Pascal Noel, Emmy Noether, Harry Nyquist, Barack Obama, Karl Popper, Ed Prescott, Ronald Reagan, Sergio Rebelo, David Ricardo, Paul Romer, David Romer, Mitt Romney, Paul Samuelson, Laurie Santos, Claude Shannon, Adam Smith, Vernon Smith, Lee Smolin, Hugo Sonnenschein, Joseph Stiglitz, Scott Sumner, Joshua Tasoff, Paul Volcker, Harris Wang, Graeme Wheeler, Eugene Wigner, William of Occam, Alexander Wissner-Gross, Michael Woodford, Janet Yellen, and Eric Zitzwitz
I was curious about how under-represented women were in my book (especially since I was citing two fields that are over-represented with men — physics and economics).
There was a great quote that I don't remember exactly about the Velvet Underground's first album. Paraphrasing, it said that very few people bought it, but everyone who did went on to start a band (looking it up, there are a couple of versions). My greatest hope for my book would be something similar: only selling a few copies, but everyone who buys it goes on to help reform economic theory. The blueberry on the cover of my book is actually a reference to the album. It's definitely a bit of arrogance (hubris?) on my part.
Anyway, I think the ideas are more important than selling books so this represents something of a "free version" assembled from blog posts. It's really incomplete and the technical level varies wildly (however, nearly all are far more technical than the book). If these blog posts go over your head but seem interesting, then the book is for you!
* * *
This semi-autobiographical chapter was largely written from scratch and represents a lot of new material. However, some of the basics are covered in a few posts:
I actually excerpted an early version of this chapter after I wrote the first draft:
Another chapter that is largely new, but the basic idea was captured in my post on Paul Romer and "mathiness":
The title was a reference to Pulp's Common People, but is probably totally lost on anyone else due to being way too subtle. Yet another chapter that is largely new. However, it can be considered an expansion on this post:
The chapter title is a reference to the Beastie Boys' Intergalactic. Part of this chapter is new, but most of the main idea is presented here:
Advantage: E. coli
The title here is a weird reference to tennis, comparative advantage, and the idea that E. coli bacteria are better at trading than humans. This is a more technical version of the chapter that appears in my book:
An obvious reference to Dickens, this is a far more technical version than appears in the book:
Rigid like elastic
Title reference is supposed to look like a paradox, but then is explained: nominal rigidity is an entropic force like elasticity. This was completely re-written for a general audience. These posts are more technical versions:
[There are actually several posts "X is an entropic force" on my blog. These are the most relevant two.]
SMD theorem + H. This chapter's main premise is captured in this post, but it misses out on the blueberry pie metaphor of the SMD theorem that I'm particularly proud of:
The economic problem
This was the very terse starting point for this chapter:
Economics versus sociology
This is another example where a post was greatly expanded:
Are we not agents?
The title is a reference to Devo's first album. I discovered a paper while the book was being written, and so this chapter was added based on it:
Another chapter that is largely new in the book. One of the ideas I talk about was first presented here:
* * *
Monday, September 4, 2017
Another of @unlearningecon's suggestions was to showcase more of my "(successful) empirical work".
This was a conscious decision. I don't believe in self-publishing technical results without some direct mechanism for peer review. The journal submission process is one such avenue, but so are blogs with comments. Books don't have the same direct nexus with criticism (good/bad reviews on Amazon can function a bit like this, but are not quite the same thing as actual blogs). In fact, this mechanism is precisely why I wanted to have a book blog: so I could show and respond to both positive and negative criticism.
Given that reservation about putting non-peer-reviewed material in the book, it quickly became obvious that I should write a non-technical book aimed at a general audience. I scaled back the math, and that precluded inclusion of my empirical work.
However, if you are interested in exploring further, the empirical work is collected on my blog:
... especially at the aggregated forecast link where I track the performance of the forecasts I make (and comparisons to other models). There are results like this:
The green shaded region is the Information Equilibrium (IE) model forecast for the 10-year US treasury bond interest rate. The red line is a forecast from the "Blue Chip Economic Indicators" report from the end of 2014 (made up of a survey of experts). The purple dashed line is the CBO forecast from the end of 2016. The vertical lines indicate when the forecasts were made.
The gray jagged line is the daily US interest rate data (from FRED) since the end of 2014. As you can see, the IE model was a much better forecast than the BCEI experts. I've been tracking this forecast for almost three years. Even the sudden rise in rates after the US presidential election hasn't thrown this forecast off (the bands are 90% confidence intervals for monthly data).
Friday, September 1, 2017
Another of @unlearningecon's good points is this:
Concerned your separation of econ and sociology amounts to 'econ works except when it doesn't, which is what you criticise in mainstream
My response to this is that I've identified a particular mechanism (correlations in state space, which cause agents to not fully explore it), so it's not as vague as it might sound. The general idea was speculative in the book (I explicitly said it was), and I also made the specific speculative claim that these correlations are caused by social factors. This last part may or may not be true: there may well be "economic" reasons for correlations that don't depend on our human nature.
But another reason I probably fell down on defending this particular claim (or making it more specific) is that it is based on the mathematics of information transfer and I haven't come up with a really good explanation that doesn't rely on math. That basically means I don't understand it very well, and that's true: hence the speculation.
The idea is basically that economics is "information equilibrium" (IE) and sociology explains "non-ideal information transfer" (NIIT) (definitions here). However, IE bounds the system dynamics even if you have NIIT (via some math). The result is that sociology should cause economics to fail in a specific way. I addressed this specific question in a FAQ on my blog:
But mindless atoms don't panic ...
While information equilibrium treats agents effectively as random "mindless atoms" (but really treats them as so complex they look random), the information transfer framework is more general. If agents didn't spontaneously correlate in state space due to human behavior (e.g. panic, groupthink), then the information transfer framework reduces to something that looks like boring standard thermodynamics. However, they do in fact panic. In terms of thermodynamics, this means that the information transfer framework is like thermodynamics, but missing a second law of thermodynamics. The "mindless atoms" will occasionally panic and huddle in a corner of the room and you have non-ideal information transfer as opposed to information equilibrium.
There is less the information transfer framework can say about scenarios where we have non-ideal information transfer, but it still could be used to put bounds on economic variables.
Wait. Isn't this just saying sometimes your theory applies and sometimes it doesn't?
Yes, but in a particular way. For example, the effect of correlations (panic, groupthink) is generally negative on prices.
Additionally, empirical data appears to show that information equilibrium is a decent description of macroeconomic variables most of the time, failing only on a sparse subset. That sparse subset seems to correspond to recessions. Since human behavior is one of the ways the system can fail to be in information equilibrium, this is good evidence that information equilibrium fails in exactly the way the more general information transfer framework says it should.
In a very deep way, one can think of information equilibrium being a good approximation in the same way the Efficient Market Hypothesis (EMH) is sometimes a good approximation. Failures of the EMH seem to be correlations due to human behavior.
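For those who want the "some math" mentioned above: roughly, in the notation of my pre-print, information equilibrium is the condition that the price p (the "detector") relating demand D to supply S satisfies an equality, and non-ideal information transfer turns that equality into a bound. This is just a sketch; see the paper for the actual derivation and assumptions:

```latex
% Ideal information transfer (information equilibrium): the price p
% detects the information flowing from demand D to supply S
p \equiv \frac{dD}{dS} = k\,\frac{D}{S}

% Non-ideal information transfer: the information received at the
% supply is less than the information sent by the demand, so the
% equality relaxes to a bound
\frac{dD}{dS} \leq k\,\frac{D}{S}
```

That inequality is why correlations (panic, groupthink) can only push observed prices below the ideal information equilibrium price: the specific, falsifiable way sociology should cause economics to fail.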
Thursday, August 31, 2017
Via twitter, @unlearningecon makes a good point:
Potential problem with agents fully exploring the opportunity set is 'localised' optimising which could bias it in general
If I open a new piece of state space (or close one off), then it's likely the agents "near" that piece of state space are the ones that explore it (or leave for nearby open areas). This makes the process of evolution "localized". In a biological system, species in different ecosystems evolve largely independently of species in other ecosystems, which may eventually result in some kind of conflict or catastrophe. More importantly, evolution happens from the current set of species (exploring parts of state space near existing species), creating strong path dependence.
In economic systems, for example, firms will explore state space near existing firms, a search that may or may not find the best solutions.
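This "localized" exploration can be illustrated with a toy sketch (my own construction, purely illustrative, not anything from the book): hill-climbing agents on a made-up fitness landscape only find the optimum near where they start, while an agent "seeded" in a less explored region finds the better one.

```python
import random

random.seed(42)

# Made-up fitness landscape: a local optimum at x=2 (height 5)
# and a better global optimum at x=8 (height 9).
def fitness(x):
    return max(0.0, 5 - (x - 2) ** 2) + max(0.0, 9 - (x - 8) ** 2)

def local_search(start, steps=200, step_size=0.1):
    """Agents only try small moves near their current position,
    keeping a move only if it improves fitness."""
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

# An agent starting near the existing cluster stays at the local optimum,
# while a "seed" planted in a less explored region finds the better one.
incumbent = local_search(start=2.0)
seed = local_search(start=7.0)
```

The incumbent climbs to the nearby peak at x = 2 and never crosses the flat valley to reach the better peak at x = 8, which the seeded agent finds easily. That's the path dependence argument in miniature.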
This issue with "localized" optimization turns into a good argument in favor of government intervention in markets, both to alleviate issues with path dependence (e.g. by providing workers with help if their industry needs to shrink in favor of some more socially optimal industry) and to plant new seeds in less explored areas of state space (e.g. by funding science and innovation). I missed the opportunity to make this point in my book, but it is a good one.
In future posts, I will discuss unlearningecon's other points.
Thanks to everyone who has bought a copy! The book cracked the top 20 in Two-hour business and money short reads:
It's also #29 in Kindle economics books:
Wednesday, August 30, 2017
This is a draft presentation for a conference I was invited to, but wasn't able to attend. It gives a more technical introduction to the ideas in my book and goes a little further:
Tuesday, August 29, 2017
Jason Smith, a random physicist, has a new book out where he takes aim at some of the core foundations of microeconomics. I encourage every economist out there to open their mind, read it, and genuinely consider the implications of this new approach.
Also via tweet:
Extremely provocative and insightful new take on economics
This is probably the best review I could have ever hoped for!
And now Amazon seems to recognize the paperback and Kindle e-book are the same book, so they appear together on the same page (click the image):
Update: There are also some good comments on Cameron's post that I've tried to respond to ...
Monday, August 28, 2017
Despite spending a long time proofreading and even having the book edited by others, there still are some errors and typos. This post will serve as a repository for these errors and will be updated as they are discovered (feel free to add errors you've found in comments).
Last update: 28 August 2017
Chapter: The critique
"For example, in a 2011 paper by Rochelle Edge and Refet Gurkaynak found that formally complex DSGE models are very poor at forecasting."
Extraneous "by" or "in" in the Kindle e-book. In the paperback edition, the "by" was deleted.
"One is that there could be many different equilibrium states for the overall economy: high unemployment and low unemployment, low inflation and high inflation, low unemployment and low inflation, et cetera."
The list switches from a list of variables to a variable combination in the last item. It's confusing. My preferred re-write is:
One is that there could be many different equilibrium states for the overall economy: high unemployment and low inflation, low unemployment and high inflation, low unemployment and low inflation, et cetera.
This is an example of a macroeconomy with four equilibria.
"... but these are locally unique -- small changes ... "
This should be an em dash.
I had an article on Evonomics back in May of this year:
Hayek Meets Information Theory. And Fails. Modern economic theories of prices-as-information are seventy years out of date.
This article is a slightly more technical (and political) version of the tenth chapter of my book, The economic problem (the chapters are actually pretty short, so ten just seems like a lot). The two pieces explain almost exactly the same subject in completely different ways. The Evonomics article goes further into Generative Adversarial Networks (GANs) and their possible connection to information equilibrium. There are also some pretty pictures!
Sunday, August 27, 2017
Due to popular demand for a dead-tree version, I've done the reformatting, cover re-design, and pagination checking (as well as correcting one typo) for a paperback edition of A random physicist takes on economics:
Click through the image to order on Amazon US. Also available on Amazon UK (and a few other nations as available via Amazon KDP).
I think I should credit Noah Smith for the phrase "scope conditions" I use in my book. My use of the phrase has been a bit of a tongue-in-cheek jibe ever since Noah ascribed to physicists a phrase I had never used:
I have not seen economists spend much time thinking about domains of applicability (what physicists usually call "scope conditions"). But it's an important topic to think about.
A discussion of scope conditions referencing two of Noah's posts is here.
Being a much more widely read blogger with a much bigger platform at Bloomberg View, he has since made "scope conditions" or "theory scope" more ubiquitous than the more technical physics terms "limit", "scale", or "region of validity".
But I've been attempting to get economists to see and understand this idea for a while now. The earliest documented evidence is a comment on Scott Sumner's blog from 2011 (using the physics term "limit"):
My opinion here, but I think finding a theory that reduces to both a monetarist theory and a Keynesian theory in various limits or under specific constraints may be a key to understanding macroeconomics and more focus should be in that direction (unless it has already been done! I haven’t been able to find anything). Both theories appear to save the phenomena in particular regimes so the “correct” macroeconomic theory should reduce to each in particular limits.
Not that I have any issue with "scope conditions" — it's a really good phrase for this concept. It's just that I'd never used the phrase as a physicist. I do have a personal story about the concept however.
During my thesis defense (aka the final exam at UW, where you present the research in your thesis to your committee and then spend an hour or longer responding to the toughest questions about it your committee can think of), I was asked where the model of quark physics I was using was valid (its region of validity). Normally this would be an easy question because the particular assumptions usually yield direct "scope conditions". You assume the speed of light is large and so the model is valid for velocities small compared to the speed of light: v << c.
However, my model lacked a specific property of the underlying theory, quantum chromodynamics (QCD), which you could call the "microfoundations": confinement. You never see an individual quark; quarks always come in "colorless" combinations, either three (red-green-blue) or two (red-antired).
The problem is that confinement has not yet been proven analytically from QCD, only demonstrated via experimental and computational methods. That makes it much more difficult to translate the assumption that confinement doesn't matter into a specific scope condition. As you can probably guess, I struggled with the question in my thesis defense.
I've thought about that question off and on over the past 12 years (almost to the day in August of 2005). My best answer now is that the scope is the same scope as the deep inelastic scattering model: Q² >> 1 GeV² (probing the nucleus at less than 1 fm length scale). Asymptotic freedom (confinement disappears at high energy) of QCD means that confinement doesn't have an effect at high energy except for the input scale for DGLAP evolution (the equations used to change the energy scale of quark and gluon distributions). However, until confinement is understood analytically, this particular scope condition is going to be more hand waving than science.
So in the end I completely agree with Noah: scope conditions are important to think about.
Friday, August 25, 2017
My Kindle e-book is now available on Amazon.com for free as part of Kindle Unlimited, or for 2.99 USD (and for other amounts in other countries). Click through the image to buy!
Update: Consider this the first open thread in comments for book discussion and initial thoughts ...
A Random Physicist Takes on Economics is now formatted, and being published by Amazon KDP. It'll be live in the next 72 hours according to the estimate ...
In the meantime, here's the original first draft of the book's description blurb (since I can't seem to access the final version that'll be on the Amazon page):
A Random Physicist Takes on Economics is a critique of economic methodology from an outsider's perspective. Author Jason Smith leverages random agents and information theory to argue against ubiquitous economic constructs such as so-called "rational" expectations, prediction markets, and utility maximizing agents using examples consisting of nothing more complicated than Dungeons and Dragons dice sets and pints of blueberries. In the end, he calls for economists to present more uncertainty and plead greater ignorance when it comes to questions of politics and policy.
Thursday, August 24, 2017
The main purpose of this blog is to function as a place for discussion of and questions about my e-book A random physicist takes on economics. I will link to and discuss reviews, and occasionally provide supplementary material, material that was removed in editing, and more technical versions (including equations) of the arguments made in the book.
It's also a means of keeping book-related blogging and commercial endeavors separate from my more speculative research and academic endeavors on the information transfer framework available at my blog Information Transfer Economics.
I can be contacted via DM at Twitter @infotranecon with questions or other requests as well as through comments on this blog or email:
Thank you for reading!
After a long delay, I have finally completed a final draft. All that is left is to set up the Amazon e-book formatting and submit. The chapter headings are as they were when the first full draft was completed, but the book is now longer (over 26,000 words) and better written, with many thanks to those who agreed to read it!
A random physicist takes on economics
- Introduction. I finally tell the whole story of how I ended up doing this stuff.
- The critique. I lay out my critique of economics.
- Physicists. This is where I address economists' weird relationship with physics.
- Random people. Basically Gary Becker's 1962 irrational behavior paper explained with grade school math and blueberries.
- Another dimension. Saturating the budget constraint in a large number of dimensions explained with grade school math.
- Advantage: E. coli. Comparative advantage and economic behavior in biological systems.
- Great expectations. How expectations in economics let you get any result you want (explained with dice).
- Rigid like elastic. Entropic forces and sticky prices.
- SMDH. The SMD theorem in terms of blueberries and blueberry smoothies.
- The economic problem. Information equilibrium and the price mechanism.
- Economics versus sociology. When is the economy amenable to economics and when is it amenable to sociology?
- Are we not agents? Causal entropic forces and intelligence.
- Conclusions. Going forward, what are the recommendations for a research program and for policy?
Wednesday, January 4, 2017
This draft book cover contains a modified form of Single Blueberry by Kevin Payravi, Wikimedia Commons. Creative Commons Copyright Attribution-ShareAlike 3.0.