LakeWater Finance Blog
Opinions of a finance PhD student whose office happens to overlook Lake Michigan. <br /> Copyright Marco Sammon
http://marcosammon.com/
Fri, 29 Sep 2017 17:26:09 +0000
Expected Discounted Utility as Recursive Utility
<h1 id="expected-discounted-utility">Expected Discounted Utility</h1>
<p>Expected discounted utility is one of the most common ways to represent preferences over risky consumption plans. Consider an agent, sitting at time <script type="math/tex">t</script>, who will receive a consumption stream <script type="math/tex">c</script> until <script type="math/tex">T</script>:
\begin{equation}
U_t(c)= E_t \left[ \sum \limits_{s=t}^T \beta^{s-t}u_s(c_s)\right]
\end{equation}
where <script type="math/tex">\beta</script> is the discount factor and <script type="math/tex">u</script> is a within-period utility function. A problem with expected discounted utility is that it cannot separate preferences for smoothing over time from preferences for smoothing across states.<br />
Consider the following example:
You are stranded on an island at <script type="math/tex">t=0</script>. A man comes in a boat and offers you a choice of two deals: (1) every morning he comes and flips a coin; if it comes up heads, you get a bushel of bananas that day. (2) He flips a coin once today; if it comes up heads you get a bushel of bananas every day until time <script type="math/tex">T</script>, and if it comes up tails you get no bananas until time <script type="math/tex">T</script>.
It’s intuitive that plan 2 is riskier than plan 1, but under expected discounted utility, for any <script type="math/tex">\beta</script> and <script type="math/tex">u</script>, the agent is indifferent between the two plans:
\begin{equation}
U(\text{Plan 1}) = \sum\limits_{t=0}^T \beta^t \frac{u_t(1)+u_t(0)}{2} = U(\text{Plan 2})
\end{equation}</p>
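<p>The indifference is easy to verify numerically. Here is a minimal sketch, assuming an arbitrary within-period utility (square root below) and an arbitrary <script type="math/tex">\beta=0.95</script>; the equality holds for any choice, since expectation is linear:</p>

```python
import math

beta, T = 0.95, 30
u = lambda c: math.sqrt(c)   # any within-period utility function works

# Plan 1: an independent coin flip every morning, so each day's
# expected flow utility is the average of the two outcomes
eu_plan1 = sum(beta**t * 0.5 * (u(1) + u(0)) for t in range(T + 1))

# Plan 2: a single coin flip at t=0 decides the entire stream
eu_heads = sum(beta**t * u(1) for t in range(T + 1))
eu_tails = sum(beta**t * u(0) for t in range(T + 1))
eu_plan2 = 0.5 * eu_heads + 0.5 * eu_tails

# Linearity of expectation makes the two plans identical in utility terms
print(abs(eu_plan1 - eu_plan2) < 1e-12)  # True
```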
<h1 id="recursive-utility">Recursive Utility</h1>
<p>The only way to even partially separate preferences for smoothing over time from preferences for smoothing across states is to use recursive utility (see Skiadas 2009 for a complete proof; this is an if-and-only-if relationship).
Recursive utility has two ingredients: the aggregator <script type="math/tex">f(t,c_t,\upsilon_t \left(U_{t+1}(c)\right))</script>, which determines preferences over deterministic plans (time smoothing), and the conditional certainty equivalent <script type="math/tex">\upsilon_t(c)</script> (state smoothing). The steps below formulate expected discounted utility as recursive utility.
For simplicity, drop the dependence of all functions on time, so we can remove all the subscript <script type="math/tex">t</script>’s. Now, propose a desirable property for the utility function - normalization. Consider any deterministic plan <script type="math/tex">\alpha</script>, then a utility is normalized if <script type="math/tex">\bar{U}(\alpha)=\alpha</script>. Normalize utility <script type="math/tex">U</script>, the expected discounted utility defined above, as <script type="math/tex">\bar{U}(c)=\psi^{-1}(U(c))</script> where <script type="math/tex">\psi_t(\alpha)=\sum\limits_{s=t}^T \beta^{s-t} u(\alpha)</script>. Basically, <script type="math/tex">\psi</script> gives the discounted utility of deterministic plan <script type="math/tex">\alpha</script>, so <script type="math/tex">\psi^{-1}</script> gives the deterministic <script type="math/tex">\alpha</script> required to make the agent indifferent between potentially risky plan <script type="math/tex">c</script> and deterministic plan <script type="math/tex">\alpha</script>.<br />
For expected discounted utility, the aggregator is: <script type="math/tex">f(t,x,y)=\psi^{-1}_t (u(x)+\beta \psi_{t+1}(y))</script>.
The intuition is that with expected discounted utility, the agent’s utility from plan <script type="math/tex">c</script> is a weighted average of their consumption today, and the utility of the equivalent deterministic plan until <script type="math/tex">T</script>.
For utility to be normalized, the aggregator must satisfy <script type="math/tex">f(t,\alpha,\alpha)=\alpha</script> for any deterministic plan <script type="math/tex">\alpha</script>. Put this into the equation above to solve for <script type="math/tex">\psi</script>:
<script type="math/tex">f(t,x,x)=\psi_t^{-1}( u(x) + \beta \psi_{t+1}(x)) = x</script>. Then, apply <script type="math/tex">\psi_t</script> to both sides:
\begin{equation}
u(x) + \beta \psi_{t+1} (x) = \psi_t(x)
\end{equation}
Fix <script type="math/tex">\psi_T=u</script>, and interpret terminal consumption value <script type="math/tex">c_T</script> as consuming <script type="math/tex">c_T</script> for the rest of time (equivalently, imagine letting <script type="math/tex">T</script> go to infinity). This implies we can drop the subscripts on the <script type="math/tex">\psi</script>:
\begin{equation}
u(x)=\psi(x)-\beta\psi(x)
\end{equation}</p>
<p>Rearranging yields <script type="math/tex">\psi(x)=(1-\beta)^{-1}u(x)</script> and <script type="math/tex">\psi^{-1}(x)=u^{-1}((1-\beta)x)</script>. Putting this back into our expression above for <script type="math/tex">f(t,x,y)</script> implies:
\begin{equation}
f(t,x,y)=u^{-1}((1-\beta)u(x)+\beta u(y))
\end{equation}
Given the way the aggregator is defined, we can see that <script type="math/tex">f</script> depends on the curvature of <script type="math/tex">u</script> - in other words, the within period utility function <script type="math/tex">u</script> will influence preferences for smoothing over time. This also gives intuition for how to make an agent not indifferent between deal (1) and deal (2) described above - <script type="math/tex">f</script> needs to be defined independently of <script type="math/tex">u</script> (or <script type="math/tex">\upsilon</script>).</p>
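<p>As a sanity check on the derivation, the aggregator can be verified numerically. A minimal sketch with log utility (so <script type="math/tex">u^{-1}=\exp</script>) and an assumed <script type="math/tex">\beta=0.9</script>:</p>

```python
import math

beta = 0.9
u, u_inv = math.log, math.exp

def f(x, y):
    """Aggregator for expected discounted utility:
    f(x, y) = u^{-1}((1 - beta) u(x) + beta u(y))."""
    return u_inv((1 - beta) * u(x) + beta * u(y))

# Normalization: a constant plan alpha aggregates to alpha itself
for alpha in (0.5, 1.0, 3.7):
    assert abs(f(alpha, alpha) - alpha) < 1e-12

# The curvature of u matters: with concave u, f(x, y) falls below the
# corresponding linear average of x and y
print(f(1.0, 2.0), (1 - beta) * 1.0 + beta * 2.0)
```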
<h1 id="conclusion">Conclusion</h1>
<p>Recursive utility is a general framework, with expected discounted utility as a special case. A future post will explore functional forms of recursive utility commonly used in asset pricing models.</p>
Wed, 09 Nov 2016 00:00:00 +0000
http://marcosammon.com/2016/11/09/euru.html
Quick Reads #2: Does Adding a Constant Always Increase R-Squared?<p>Recall the formula for R-squared:
\begin{equation}
R^2=\frac{\sum\limits_{i=1}^N (\hat{Y_i}-\overline{Y})^2}{\sum\limits_{i=1}^N (\hat{Y_i}-\overline{Y})^2 + \sum\limits_{i=1}^N (\hat{Y_i}-Y_i)^2}= \frac{SS_{model}}{SS_{model}+SS_{residual}}
\end{equation}
In your first statistics class, you are taught that adding more regressors cannot decrease R-squared, even if they have no relationship to the dependent variable. <br />
This, however, does not apply to the constant term. At first pass, this seems hard to believe: An unconstrained model should always do at least as well as a constrained model. <br />
The catch is that the variance explained by the constant term is not included in the calculation of R-squared - we subtract <script type="math/tex">\overline{Y}</script> when calculating <script type="math/tex">SS_{model}</script>. <br />
The no-constant restriction implicitly sets <script type="math/tex">\overline{Y}</script> to zero. This increases both the model sum of squares and the residual sum of squares, but the model-sum-of-squares effect dominates, and <script type="math/tex">R^2</script> is pushed towards one.<br />
I discovered this today, regressing realized GDP on forecasted GDP. Although the sum of squared errors is nearly identical for both models, the model sum of squares is much larger for the no-intercept case:</p>
<p><img src="/Post_Images/10_17_2016/gdp.png" alt="fig" /></p>
<p>My takeaway: Be careful when writing your own regress command in Matlab, or any other language. Omitting a constant term can drastically change R-squared.</p>
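<p>A small simulation makes the point concrete. This is a sketch with made-up numbers (regressing a noisy series on a “forecast” that shares its level, loosely mimicking the GDP example), computing both R-squared conventions by hand with numpy:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(100.0, 2.0, n)     # "forecast": large level, some variation
y = x + rng.normal(0.0, 2.0, n)   # "realized": forecast plus noise

# With a constant: R^2 compares the model against the mean of y
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
ss_res = np.sum((y - X @ b) ** 2)
r2_const = 1 - ss_res / np.sum((y - y.mean()) ** 2)

# Without a constant: the usual convention replaces ybar with zero, so
# the level of y counts as "explained" and R^2 is pushed towards one
b0 = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
ss_res0 = np.sum((y - b0 * x) ** 2)
r2_nocon = 1 - ss_res0 / np.sum(y ** 2)

print(round(r2_const, 3), round(r2_nocon, 5))  # no-constant R^2 is near 1
```

<p>The residual sums of squares are nearly identical in the two fits; the gap in R-squared comes entirely from the denominator convention.</p>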
Mon, 17 Oct 2016 00:00:00 +0000
http://marcosammon.com/2016/10/17/constant_regression.html
Commentary on Understanding Unit Rooters: A Helicopter Tour by Sims and Uhlig (1991)<p>The paper argues: Although classical (frequentist) <script type="math/tex">p</script>-values are asymptotically equivalent to Bayesian posteriors, they should not be interpreted as probabilities. This is because the equivalence breaks down in non-stationary models. <br />
The paper uses small sample sizes, with <script type="math/tex">T=100</script> - This post examines how the results change with <script type="math/tex">T=10,000</script>, when the asymptotic behavior kicks in.</p>
<h1 id="the-setup">The Setup</h1>
<p>Consider a simple AR(1) model:
\begin{equation}
y_t=\rho y_{t-1} + \epsilon_t
\end{equation}
To simplify things, suppose <script type="math/tex">\epsilon_t \sim N(0,1)</script>. Classical inference suggests that for <script type="math/tex">% <![CDATA[
|\rho|<1 %]]></script>, the estimator is asymptotically normal and converges at rate <script type="math/tex">\sqrt{T}</script>:
\begin{equation}
\sqrt{T}(\hat{\rho}-\rho) \rightarrow^{L} N(0,(1-\rho^2))
\end{equation}
For <script type="math/tex">\rho=1</script>, however, we get a totally different distribution, which converges at rate <script type="math/tex">T</script>, instead of rate <script type="math/tex">\sqrt{T}</script>:
\begin{equation}
T(\hat{\rho}-\rho)=T(\hat{\rho}-1) \rightarrow^{L} \frac{(1/2)([W(1)]^2-1)}{\int_0^1 [W(r)]^2 dr}
\end{equation}
where <script type="math/tex">W</script> is a standard Brownian motion. Although it looks complicated, it is easier to visualize once you see that <script type="math/tex">[W(1)]^2</script> is a <script type="math/tex">\chi^2(1)</script> random variable. The distribution is left skewed: the probability that a <script type="math/tex">\chi^2(1)</script> variable is less than one is about 0.68, and large realizations of <script type="math/tex">[W(1)]^2</script> in the numerator get down-weighted by a large denominator (it is the same Brownian motion in the numerator and denominator).
In the paper, the authors choose 31 values of <script type="math/tex">\rho</script> from 0.8 to 1.1 in increments of 0.01. For each <script type="math/tex">\rho</script>, they simulate 10,000 samples of the AR(1) model described above with <script type="math/tex">T=100</script>. Finally, they run an OLS regression of <script type="math/tex">y_t</script> on <script type="math/tex">y_{t-1}</script> to get the distributions for <script type="math/tex">\hat{\rho}</script> (the OLS estimator of <script type="math/tex">\rho</script>). Below I show the distribution of <script type="math/tex">\hat{\rho}</script> for selected values of <script type="math/tex">\rho</script>:</p>
<p><img src="/Post_Images/10_1_2016/rhgr2.png" alt="fig" /></p>
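<p>The simulation the authors run can be sketched in a few lines; this is a scaled-down version (fewer draws, one value of <script type="math/tex">\rho</script> shown):</p>

```python
import numpy as np

def ols_rho_hat(rho, T, n_sims, seed=0):
    """Simulate AR(1) paths y_t = rho*y_{t-1} + eps_t (y_0 = 0,
    eps ~ N(0,1)) and return the OLS estimate of rho for each path."""
    rng = np.random.default_rng(seed)
    est = np.empty(n_sims)
    for i in range(n_sims):
        eps = rng.normal(size=T)
        y = np.empty(T + 1)
        y[0] = 0.0
        for t in range(T):
            y[t + 1] = rho * y[t] + eps[t]
        ylag, ycur = y[:-1], y[1:]
        est[i] = (ylag @ ycur) / (ylag @ ylag)  # OLS of y_t on y_{t-1}
    return est

# At rho = 1 with T = 100, rho_hat is biased below one and left skewed
draws = ols_rho_hat(1.0, 100, 2000)
print(draws.mean() < 1.0)  # True
```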
<p>Another way to think about the data is to look at the distribution of <script type="math/tex">\rho</script> given observed values of <script type="math/tex">\hat{\rho}</script>. This is symmetric about 0.95:</p>
<p><img src="/Post_Images/10_1_2016/opp952.png" alt="fig" /></p>
<p>Their problem with using <script type="math/tex">p</script>-values as probabilities is that if we observe <script type="math/tex">\hat{\rho}=0.95</script>, we can reject the null of <script type="math/tex">\rho=0.9</script>, but we fail to reject the null of <script type="math/tex">\rho=1</script> (think about the area in the tails after normalizing the distribution to integrate to 1), even though <script type="math/tex">\rho</script> given <script type="math/tex">\hat{\rho}</script> is roughly symmetric about 0.95:</p>
<p><img src="/Post_Images/10_1_2016/hypo_test2.png" alt="fig" /></p>
<p>The problem is distortion by irrelevant information: Values of <script type="math/tex">\hat{\rho}</script> much below 0.95 are more likely given <script type="math/tex">\rho=1</script> than are values of <script type="math/tex">\hat{\rho}</script> much above 0.95 given <script type="math/tex">\rho=0.9</script>. This is irrelevant as we have already observed <script type="math/tex">\hat{\rho}=0.95</script>, so we know it is not far above or below.</p>
<p>The prior required to generate these results (i.e. the prior that would let us interpret <script type="math/tex">p</script>-values as posterior probabilities) is sample dependent. Usually, classical inference is asymptotically equivalent to Bayesian inference using a flat prior, but it is not the case here. The authors show that classical analysis is implicitly putting progressively more weight on values of <script type="math/tex">\rho</script> above one as <script type="math/tex">\hat{\rho}</script> gets closer to 1.</p>
<h1 id="testing-with-larger-samples">Testing with Larger Samples</h1>
<p>At first, I found the results counter-intuitive. The first figure above shows that the skewness arrives gradually in finite samples. This is strange, because the asymptotic properties of <script type="math/tex">\hat{\rho}</script> are only non-normal for <script type="math/tex">\rho=1</script>. I figured this was the result of using small samples of <script type="math/tex">T=100</script>. Under a flat prior, the distribution of <script type="math/tex">\rho</script> given the data and <script type="math/tex">\epsilon_t</script> having variance of 1 is:
\begin{equation}
\rho \sim N\left(\hat{\rho}, \Big(\sum\limits_{t=1}^T y_{t-1}^2\Big)^{-1}\right)
\end{equation}
This motivates my intuition for why the skewness arrives slowly: even in small samples, as <script type="math/tex">\rho</script> gets close to 1, <script type="math/tex">\sum\limits_{t=1}^T y_{t-1}^2</script> can be very large.</p>
<p>I repeat their analysis, except instead of <script type="math/tex">T=100</script>, I use <script type="math/tex">T=10,000</script>. As you can see, the asymptotic behavior kicks in and the skewness arrives only at <script type="math/tex">\rho=1</script>:</p>
<p><img src="/Post_Images/10_1_2016/rhgr.png" alt="fig" /></p>
<p>I also found that for <script type="math/tex">T=10,000</script>, the distribution of <script type="math/tex">\rho</script> conditional on <script type="math/tex">\hat{\rho}</script> does not spread out more for smaller values of <script type="math/tex">\hat{\rho}</script>; that is a small-sample result.</p>
<h1 id="conclusion">Conclusion</h1>
<p>The point of this paper is to show that the classical way of dealing with unit roots implicitly makes undesirable assumptions - you need a sample-dependent prior which puts more weight on high values of <script type="math/tex">\rho</script>. To a degree, the authors’ results are driven by the short length of the simulated series. The example where you reject <script type="math/tex">\rho=0.9</script> but fail to reject <script type="math/tex">\rho=1</script> wouldn’t happen in large samples, as the asymptotics kick in and the faster rate of convergence for <script type="math/tex">\rho=1</script> gives the distribution less spread.</p>
<p>For now, however, the authors’ criticism is still valid. With quarterly data from 1950-Present you get about 260 observations. Macroeconomics will have to survive until year 4,450 for there to be 10,000 observations, and that’s a long way off.</p>
Sat, 01 Oct 2016 00:00:00 +0000
http://marcosammon.com/2016/10/01/unit_roots.html
What's in a Name?<p>Given the success of companies like Apple, Amazon and American Express, I was curious if a company’s name could predict its expected return. It would be worrying if this worked, as there is no fundamental reason why companies starting with A-E should outperform companies starting with, say, F-J. Stories like stock screens returning results alphabetically or investors having an aversion to certain letters seem too far-fetched. I sort into portfolios based on the first letter of a company’s name to check this result.</p>
<h1 id="methodology">Methodology</h1>
<p>I get monthly stock data from 1980-2015 in CRSP, and restrict to ordinary common shares traded on major exchanges. At the end of every month, I extract the first character of a company’s name and give it a numerical value between 1 and 35: <br />
Example 1: First character of 1-800 Flowers is, “1” and it is given a value of 1 (there are no names starting with 0) <br />
Example 2: First character of Apple is “A” and it is given a value of 10 <br />
Example 3: First character of Zing Technologies is “Z” and it is given a value of 35 <br />
I then sort into 10 value-weighted portfolios based on deciles of this number (i.e. the bottom 10% of values are in portfolio 1, etc.). This allows the letter breakpoints to change slightly over time with changes in the frequency of first letters. Because this is a discrete measure, even within a month, portfolios don’t necessarily have the same number of securities. I tried making the portfolios contain the same number of securities and the results were similar. <br />
In addition, I create an “HML” style factor, which is the return on portfolio 10 minus the return on portfolio 1 (reading it again, maybe ZMA - for Z minus A - would have been a better name). <br />
I also tried sorting into 6 value-weighted portfolios based on the first letters themselves: Companies starting with 1-9 are in portfolio 1 (although there are no companies like this before 1983), A-E are in portfolio 2, F-J are in portfolio 3, K-O are in portfolio 4, P-T are in portfolio 5, and U-Z are in portfolio 6. I construct another HML-style factor, except this time it is portfolio 6 minus portfolio 2, because portfolio 1 is sparsely populated. Again, the portfolios will not have the same number of firms, as some first letters are more popular than others.</p>
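<p>The first-character mapping is simple to express in code. A sketch (the function name is mine) matching the three examples above:</p>

```python
def name_rank(name: str) -> int:
    """Map a company name's first character to a value between 1 and 35:
    digits '1'-'9' get 1-9 (no names start with '0'),
    letters 'A'-'Z' get 10-35."""
    c = name.strip().upper()[0]
    if c.isdigit():
        return int(c)
    return ord(c) - ord("A") + 10

print(name_rank("1-800 Flowers"),      # 1
      name_rank("Apple"),              # 10
      name_rank("Zing Technologies"))  # 35
```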
<h1 id="empirical-results">Empirical Results</h1>
<p>The table below shows the annualized (multiplied by 12) average returns, annualized (multiplied by <script type="math/tex">\sqrt{12}</script>) standard deviation and annualized Sharpe ratio: <br /></p>
<p><img src="/Post_Images/9_16_2016_2/sumstats_1.PNG" alt="fig" /> <br /></p>
<p>There is little difference in average returns across portfolios and sorting methodologies - they all pretty much track the market. The HML factors have an insignificant alpha once you control for market exposure. <br />
For robustness, I calculated the same statistics using only data from 2000 to 2015:</p>
<p><img src="/Post_Images/9_16_2016_2/sumstats_2.PNG" alt="fig" /> <br /></p>
<p>Post 2000, with the decile portfolios, the firms in portfolio 1 returned on average 2.3% more than firms in portfolio 10, and given this is based on deciles, it is not caused by portfolio 1 being sparsely populated. A t-test shows this difference is marginally statistically significant with <script type="math/tex">p=0.08</script>. I thought it was being driven by a size effect (small companies’ names are more likely to start with a number), so I repeated the exercise for the largest 500 firms and the effect goes away. Even before excluding small firms, the alpha was insignificant once controlling for market, SMB and HML exposure. <br />
For the 6-portfolio sort, you can see they pretty much track the market (except for the 0-9 portfolio, which as mentioned above, is sparsely populated): <br />
<img src="/Post_Images/9_16_2016_2/1.png" alt="fig" /> <br />
<img src="/Post_Images/9_16_2016_2/2.png" alt="fig" /></p>
<h1 id="conclusion">Conclusion</h1>
<p>When searching for new asset pricing factors, it is important to avoid data-snooping. Requiring a good theoretical reason why your sorting technique will generate spreads in expected returns can prevent data mining (ex. sorting on past returns because of persistence in profitability - see Novy-Marx 2015). Here, I sorted on something that shouldn’t predict returns - and it didn’t - but I’m sure if I tried 20 more random sorts, one of them would create a new (totally spurious) factor. At least for now, alphabet risk is priced!</p>
Fri, 16 Sep 2016 00:00:00 +0000
http://marcosammon.com/2016/09/16/alphabet.html
Pricing a Digital Option<p>This post is based on problems 2.10 and 2.11 in “Heard on the Street” by Timothy Falcon Crack. I was asked how to price a digital option in a job interview - and had no idea what to do!</p>
<h1 id="european-call-options">European Call Options</h1>
<p>A European call option is the right to buy an asset at the strike price, <script type="math/tex">K</script>, on the option’s expiration date, <script type="math/tex">T</script>. A call is only worth exercising (using) if the underlying price, <script type="math/tex">S</script>, is greater than <script type="math/tex">K</script> at <script type="math/tex">T</script>, as the payoff from exercising is <script type="math/tex">S-K</script>. The plot below shows the value of a call option, as a function of the underlying asset’s price, with <script type="math/tex">K=100</script>:</p>
<p><img src="/Post_Images/9_15_2016/1.PNG" alt="fig" /> <br /></p>
<p>Selling a call option with a strike <script type="math/tex">K=100</script> earns you the call’s price, <script type="math/tex">c</script>, today, but your payoff will be decreasing in the underlying price:</p>
<p><img src="/Post_Images/9_15_2016/2.PNG" alt="fig" /> <br /></p>
<h1 id="digital-call-options">Digital Call Options</h1>
<p>A digital call option with <script type="math/tex">K=100</script> is similar - it pays off one dollar if <script type="math/tex">S\geq100</script> at expiration, and pays off zero otherwise:</p>
<p><img src="/Post_Images/9_15_2016/3.PNG" alt="fig" /> <br /></p>
<p>Suppose you have a model for pricing regular call options. If you’re using Black-Scholes the price of the call, <script type="math/tex">c</script>, is a function of <script type="math/tex">K</script>, <script type="math/tex">S</script>, time to expiration <script type="math/tex">T-t</script>, the volatility of the underlying asset <script type="math/tex">\sigma</script>, and the risk free rate <script type="math/tex">r</script>:
\begin{equation}
c=F(K,S,T-t,\sigma,r)
\end{equation}
Now - suppose the model is correct. How can you use <script type="math/tex">F(K,\cdot)</script> to price the digital option?</p>
<h1 id="replicating-the-digital-option">Replicating the Digital Option</h1>
<p>The trick is to replicate the digital option’s payoff with regular calls. As a starting point, consider buying a call with <script type="math/tex">K=100</script> and selling a call with <script type="math/tex">K=101</script>:</p>
<p><img src="/Post_Images/9_15_2016/4.PNG" alt="fig" /> <br /></p>
<p>This is close to the digital option, but not exactly right. We want to make the slope at 100 steeper, so we need to buy more options. This is because a call’s payoff increases one-for-one with the underlying once the option is in the money, so with one option you are stuck with a slope of one. <br />
Consider buying two calls with <script type="math/tex">K=100</script> and selling two calls at <script type="math/tex">K=100.5</script>:</p>
<p><img src="/Post_Images/9_15_2016/5.PNG" alt="fig" /> <br /></p>
<p>As opposed to a slope of 1 between 100 and 101, we now have a slope of 2 between 100 and 100.5. <br /></p>
<p>Generalizing this idea - consider a number <script type="math/tex">\epsilon>0</script>. To get a slope of <script type="math/tex">\frac{1}{\epsilon}</script>, you buy <script type="math/tex">\frac{1}{\epsilon}</script> calls at <script type="math/tex">K=100</script> and you sell <script type="math/tex">\frac{1}{\epsilon}</script> calls at <script type="math/tex">K=100+\epsilon</script>. Here’s what it looks like for <script type="math/tex">\epsilon=\frac{1}{10}</script>:</p>
<p><img src="/Post_Images/9_15_2016/6.PNG" alt="fig" /> <br /></p>
<p>Given that the slope is <script type="math/tex">\frac{1}{\epsilon}</script>, to get an infinite slope, we take the limit as <script type="math/tex">\epsilon</script> goes to zero. <br /></p>
<p>How much will the above portfolio cost? You pay <script type="math/tex">\frac{1}{\epsilon}F(100, \cdot)</script> for the <script type="math/tex">K=100</script> calls, and earn <script type="math/tex">\frac{1}{\epsilon}F(100+\epsilon, \cdot)</script> from selling the <script type="math/tex">K=100+\epsilon</script> calls. The net cost is:
\begin{equation}
\lim_{\epsilon \rightarrow 0} \frac{F(100,\cdot)-F(100+\epsilon,\cdot)}{\epsilon}
\end{equation}</p>

<p>What does this look like? A derivative! It might look more familiar if I re-wrote it as:</p>

<p>\begin{equation}
-\lim_{\epsilon \rightarrow 0} \frac{F(K+\epsilon)-F(K)}{\epsilon}
\end{equation}</p>

<p>The price of the digital option is the negative of the derivative of <script type="math/tex">F</script> with respect to the strike price <script type="math/tex">K</script> (a positive number, since a call’s price falls as its strike rises).</p>
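<p>The cost of the replicating call spread (buy <script type="math/tex">\frac{1}{\epsilon}</script> calls at <script type="math/tex">K</script>, sell <script type="math/tex">\frac{1}{\epsilon}</script> at <script type="math/tex">K+\epsilon</script>) can be checked against the known Black-Scholes closed form for a digital call, <script type="math/tex">e^{-r(T-t)}N(d_2)</script>. A sketch with assumed parameter values:</p>

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, tau, sigma, r):
    """Black-Scholes price of a European call with time to expiry tau."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

# Assumed parameters: spot, strike, time to expiry, volatility, risk-free rate
S, K, tau, sigma, r = 100.0, 100.0, 1.0, 0.2, 0.02

# Digital call as the limiting call spread: (F(K) - F(K + eps)) / eps
eps = 1e-5
digital_spread = (bs_call(S, K, tau, sigma, r)
                  - bs_call(S, K + eps, tau, sigma, r)) / eps

# Closed form for comparison: e^{-r * tau} * N(d2)
d2 = (math.log(S / K) + (r - 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
digital_exact = math.exp(-r * tau) * norm_cdf(d2)
print(round(digital_spread, 6), round(digital_exact, 6))
```

<p>With a small <script type="math/tex">\epsilon</script>, the finite-difference price and the closed form agree to several decimal places.</p>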
<h1 id="conclusion">Conclusion</h1>
<p>Many complicated payoffs can be re-created as combinations of vanilla puts and calls. For an overview, see the first few chapters of Sheldon Natenberg’s, “Option Volatility & Pricing”.</p>
Thu, 15 Sep 2016 00:00:00 +0000
http://marcosammon.com/2016/09/15/digital.html
Quick Reads #1: Skewness<p>The quick reads series is dedicated to exploring a topic in about 250 words.</p>
<h1 id="intuition">Intuition</h1>
<p>From Harvey and Siddique (2000): “[Risk averse] investors should
prefer portfolios that are [positive]-skewed to portfolios that are [negative]-skewed.” The <a href="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Negative_and_positive_skew_diagrams_(English).svg/2000px-Negative_and_positive_skew_diagrams_(English).svg.png">graphic</a> below shows the difference - negative skew entails a long left tail, and a mean below the median:</p>
<p><img src="/Post_Images/8_9_2016/skew.png" alt="fig" /></p>
<p>The following example, with a $1 portfolio and <script type="math/tex">u(c_t)=log(c_t)</script>, supports this claim - assets 1 and 2 have the same expected return, but 1 has left skewed returns and 2 has right skewed returns. The investor gets higher expected utility from asset 2:</p>
<p><img src="/Post_Images/8_9_2016/example.PNG" alt="fig" /></p>
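<p>The table in the image aside, the claim is easy to reproduce with hypothetical payoffs (the numbers below are mine, not the post’s table, chosen so both assets have the same mean and variance but mirrored skew):</p>

```python
import math

# Hypothetical one-period returns on a $1 portfolio: (probability, return)
asset1 = [(0.8, 0.15), (0.2, -0.35)]   # left skewed: occasional large loss
asset2 = [(0.8, -0.05), (0.2, 0.45)]   # right skewed: occasional large gain

mean = lambda a: sum(p * r for p, r in a)
eu_log = lambda a: sum(p * math.log(1 + r) for p, r in a)

print(mean(asset1), mean(asset2))       # both 0.05
print(eu_log(asset1) < eu_log(asset2))  # True: the log investor prefers
                                        # the right-skewed asset
```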
<h1 id="skewness-and-expected-returns">Skewness and Expected Returns</h1>
<p>Let’s look at the two families of 2<script type="math/tex">\times</script>3 sorted portfolios using <a href="http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html">Ken French’s data</a> from 1980-present: Size/Book-to-Market (BM) and Size/Momentum (Prior). For reference, the CRSP value-weighted index had a skewness of -0.7638 over the same period. <br />
The small/high BM portfolio has the highest expected returns - it also has the largest skewness (in absolute value).</p>
<p><img src="/Post_Images/8_9_2016/sizebm.PNG" alt="fig" /></p>
<p><img src="/Post_Images/8_9_2016/sbm_scatter.png" alt="fig" /></p>
<p>The same point is conveyed by histograms (the fitted normal has the same mean and variance as the underlying data). The small/high BM portfolio has a heavy left tail - mass above the blue line to the left of the mean - not present in the big/low BM portfolio: <br />
<img src="/Post_Images/8_9_2016/hist1.png" alt="fig" /></p>
<p>The relationship is not as strong in the size/momentum portfolios (low prior return = low momentum), but it is still present.</p>
<p><img src="/Post_Images/8_9_2016/sizemom.PNG" alt="fig" /></p>
<p><img src="/Post_Images/8_9_2016/smom_scatter.png" alt="fig" /></p>
<p>Here, the histogram shows a slightly different picture than the summary statistics - in the small/high prior portfolio, there are a few extreme events (momentum crashes) not present in the big/low prior portfolio. Rare disaster risk could weaken the relationship between skewness and expected returns in these portfolios.</p>
<p><img src="/Post_Images/8_9_2016/hist2.png" alt="fig" /></p>
<h1 id="conclusion">Conclusion</h1>
<p>To first order, the intuition is correct - the more negatively skewed portfolios have a lower price, reflected in a higher expected return. My analysis does not establish a causal relationship; future posts will explore this more rigorously.</p>
Tue, 09 Aug 2016 00:00:00 +0000
http://marcosammon.com/2016/08/09/skewness.html
How to Invest like a Billion Dollar Hedge Fund<p>Investing in hedge funds is expensive - you are charged management fees, performance fees or both. On top of that, some funds enforce minimum investments and lockup periods. This post describes how to invest like a hedge fund, without the fees and restrictions.</p>
<h1 id="sec-filings">SEC Filings</h1>
<p>No hedge fund will tell you exactly how their strategy works, but every quarter, institutions managing over $100 million must report specific holdings in 13F filings (see <a href="https://www.sec.gov/answers/form13f.htm">the SEC’s website</a> for more information). This information isn’t enough to exactly replicate a hedge fund’s equity portfolio, as you only get end of quarter holdings. For example: If a fund enters and exits a position within a quarter, it won’t show up in the 13F. Further, there is a delay of up to 45 days between the end of the quarter and the 13F filing.</p>
<h1 id="manager-selection">Manager Selection</h1>
<p>According to <a href="http://www.reuters.com/article/us-hedgefunds-launches-idUSKBN0OZ1KF20150619">Reuters</a>, there were over 10,000 hedge funds operating in 2015. To get a manageable number of fund managers, I started with Insider Monkey’s 20 <a href="http://www.insidermonkey.com/hedge-fund/">“Billionaire Hedge Funds”</a>. The “billionaire” part is based on total assets, not public stock holdings, which is why BP Capital is included, even though its equity portfolio is “only” $200 million. <br />
This list is not representative of the average hedge fund. These funds are the largest because they’ve been successful. This selection bias doesn’t affect my results, however, as you can select a hedge fund to replicate based on any criteria, including past performance (whether or not this is a good idea is a separate question). <br />
The list has 20 firms, but I didn’t use all of them. I omitted Icahn, Bridgewater and Tiger Management owing to gaps in their 13F filings. I’m concerned that the gaps might correspond to bad performance, and that using only the non-missing years overestimates their actual returns. This introduces meaningful selection bias, as you will not know how long a reporting gap will last (or even if it will happen at all) in real time. Another change is using S.A.C. Capital instead of Point72 for Steve Cohen. Unlike Mr. Cohen, you wouldn’t have to pay a <a href="http://www.bloomberg.com/news/articles/2014-04-10/sac-judge-approves-record-insider-trading-accord-with-u-s">$1.8 billion</a> insider trading fine for replicating his portfolio. <br />
<p>Even though these funds are all considered “Institutional Investment Managers” by the SEC, they have totally different objectives. Berkshire Hathaway fully owns businesses in a variety of industries (insurance, food, industrial chemicals), while others hold no majority stakes. These funds also have different investment styles - Third Point is known for value investing while Renaissance Technologies is known for algorithmic trading. Finally, they are managing money for different reasons. Some take money from outside investors, while others are “family offices” (they manage the wealth of a single family). All of these factors affect the firms’ incentives, and are important when picking a firm to replicate. <br /></p>
<h1 id="data-and-portfolio-construction">Data and Portfolio Construction</h1>
<p>13F filings are collected from Thomson Reuters (data from 2013 onward is easy to scrape from the SEC website - this will be the topic of a future blog post). Funds are indexed by manager number, which may change over time. For example, Tudor Investment Corp and Tudor Investment Corporation overlap for two quarters and have different manager numbers. I manually fixed these cases to ensure each manager only has one observation per quarter. <br /></p>
<p>Stock price and return data are from CRSP. I match CRSP to the 13F data on CUSIP. For the holdings that I can’t match on CUSIP, I match on historical CUSIP (NCUSIP). <br /></p>
<p>I construct the hedge fund replicating portfolios as in Section 5 of “Buffett’s Alpha” by Frazzini, Kabiller and Pedersen. I get equity holdings (the 13F also includes equity options and other securities) from the 13F filings, and rebalance the portfolios quarterly. For each fund, I calculate value weights based on the size of their stake in a particular stock (price times shares held), relative to the total size of their public holdings. The weights are kept constant within a given quarter (if a firm delists for any reason before the end of the quarter, I calculate the delisting return, and put the remaining money in cash). I use the 13F holdings at the end of quarter <script type="math/tex">t</script> to form portfolios and calculate value weights for quarter <script type="math/tex">t+1</script>. As mentioned above, this induces a look-ahead bias, as firms may take up to 45 days to actually file their 13F. I replicate the results without a look-ahead bias later in the post. <br /></p>
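The value-weighting step above can be sketched as follows - a minimal example, assuming holdings come as a mapping from CUSIP to (price, shares held); the CUSIPs and numbers here are illustrative placeholders, not from any actual filing.

```python
# Sketch of the value-weighting step: each stock is weighted by its dollar
# stake relative to the total size of the fund's public equity holdings.
# CUSIPs, prices and share counts below are hypothetical placeholders.

def value_weights(holdings):
    """Map each CUSIP to its share of the total portfolio value."""
    stakes = {cusip: price * shares for cusip, (price, shares) in holdings.items()}
    total = sum(stakes.values())
    return {cusip: stake / total for cusip, stake in stakes.items()}

holdings = {
    "037833100": (100.0, 50),   # (price, shares held)
    "594918104": (50.0, 100),
}
weights = value_weights(holdings)
# Each stake is $5,000 of a $10,000 portfolio, so each weight is 0.5.
```

In the actual construction, these quarter-end weights would then be held fixed and applied to quarter <script type="math/tex">t+1</script> returns, with any mid-quarter delisting proceeds moved to cash.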
<p>Excess returns are calculated as returns minus the risk-free rate. The asset pricing factors are: Market (the excess return on the CRSP value-weighted portfolio), SMB (size), HML (value) and UMD (momentum), which along with the risk-free rate, are obtained from Ken French’s data library. I also use BAB (betting against beta), and QMJ (quality minus junk), which are from AQR’s website. <br /></p>
<h1 id="performance-overview">Performance Overview</h1>
<p>It’s difficult to compare fund performance. As mentioned above, funds have different styles and different types of investors. In addition, the funds under consideration range in size from under $1 billion to over $50 billion in equity holdings. Large funds might have a hard time beating the market, as they are restricted to investing in larger companies and/or holding a larger number of companies. This isn’t universally true, however, as Berkshire’s equity portfolio is 10x the size of Duquesne’s, but Duquesne has 3 times as many holdings: <br /></p>
<p><img src="/Post_Images/8_8_2016/Basic_Facts.PNG" alt="fig" /> <br /></p>
<p>Every fund except Paulson’s had higher average excess returns than the market between 2000 and 2015. Again, this is only looking at the most successful firms, so it’s not surprising they’ve done so well. Value investing firms like Appaloosa Management and Third Point have the highest Sharpe Ratios (for reference, the market Sharpe Ratio from 2000-2015 was 0.28):<br /></p>
<p><img src="/Post_Images/8_8_2016/Returns.PNG" alt="fig" /> <br /></p>
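For reference, the Sharpe Ratios above can be computed from monthly excess returns as follows - a minimal sketch, assuming monthly data and the usual square-root-of-12 annualization.

```python
import statistics

def annualized_sharpe(monthly_excess_returns):
    """Annualized Sharpe ratio: mean over standard deviation of monthly
    excess returns, scaled up by sqrt(12)."""
    mu = statistics.mean(monthly_excess_returns)
    sigma = statistics.stdev(monthly_excess_returns)  # sample std dev
    return (mu / sigma) * (12 ** 0.5)
```

A series with zero mean excess return has a Sharpe ratio of zero regardless of its volatility, which is why ranking funds on Sharpe penalizes risk, not just rewards returns.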
<p>To test for manager “skill”, I ran a regression of excess returns on 5 asset pricing factors. Alphas are annualized (multiplied by 12): <br /></p>
<p><img src="/Post_Images/8_8_2016/Regression_Results.PNG" alt="fig" /></p>
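The alpha estimates come from a time-series regression of monthly excess returns on the factor returns. A minimal sketch of that regression with plain least squares (the factor data here are synthetic, purely to show the mechanics):

```python
import numpy as np

def annualized_alpha(excess_ret, factors):
    """OLS of monthly excess returns on factor returns; the intercept is
    the monthly alpha, annualized by multiplying by 12."""
    X = np.column_stack([np.ones(len(excess_ret)), factors])
    coefs, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return 12 * coefs[0]

# Synthetic check: returns built as 0.005 + 1.2 * market recover a 6% annual alpha.
mkt = np.linspace(-0.05, 0.05, 24)
alpha = annualized_alpha(0.005 + 1.2 * mkt, mkt)
```

With the full factor set, `factors` would be a matrix with one column each for Market, SMB, HML, UMD, BAB and QMJ.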
<p>The largest (and most statistically significant) alpha belongs to S.A.C. - a firm which pleaded guilty to insider trading. Other significant alphas include activist investor Bill Ackman (Pershing Square) and value investor David Tepper (Appaloosa Management). Historical alphas and Sharpe Ratios suggest these funds are good candidates for replication (they also hold few equities - so even small investors could replicate their portfolios).</p>
<h1 id="hedge-fund-replication">Hedge Fund Replication</h1>
<p>Below I plot the performance of the replicating portfolios for Berkshire, Appaloosa and Pershing: <br /></p>
<p><img src="/Post_Images/8_8_2016/old_funds2.png" alt="fig" /></p>
<p>I started the series in 2006, as the first 13F for Pershing is from December 2005. Had you replicated Berkshire’s portfolio, you would have roughly tracked the market, while the other two replicating portfolios nearly doubled the market.</p>
<h1 id="stopping-the-look-ahead-bias">Stopping the Look-Ahead Bias</h1>
<p>The authors of “Buffett’s Alpha” have a look-ahead bias (up to 45 days) in their calculation of the 13F portfolio. It doesn’t matter for their paper, as they are doing a historical analysis (understanding why Buffett performed so well), but it matters for matching hedge fund strategies. <br />
I re-calculated everything, adding an extra quarter of delay between the 13F’s due date and when the holdings are used to form portfolios - now, the data are available for at least 45 days before portfolio formation. Not surprisingly, this changes nothing for Buffett (as his turnover is low), but it matters if you are trying to match other managers. For example, for an activist like Pershing Square, the alpha becomes statistically insignificant (q+1 uses the “Buffett’s Alpha” methodology, while q+2 lags an extra quarter). Excess returns are (weakly) lower for all funds: <br /></p>
<p><img src="/Post_Images/8_8_2016/Look_Ahead_Table.PNG" alt="fig" /> <br /></p>
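The extra-quarter delay is just calendar arithmetic, but it is easy to get off by one. A minimal sketch, with quarters labeled as (year, quarter) pairs:

```python
# Sketch of the reporting-lag fix: quarter-end holdings for quarter q are
# first used to trade in q+2 rather than q+1, so the 13F is public for at
# least 45 days before the portfolio is formed.

def formation_quarter(year, quarter, lag=2):
    """Quarter in which end-of-quarter holdings are first used to trade."""
    total = year * 4 + (quarter - 1) + lag
    return total // 4, total % 4 + 1

# Holdings as of 2015 Q4 form the portfolio held in 2016 Q2; under the
# original q+1 convention (lag=1) they would be used in 2016 Q1.
```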
<p>The portfolio replication results do not change much for Berkshire and Appaloosa, but it brings Pershing’s performance close to that of the market: <br /></p>
<p><img src="/Post_Images/8_8_2016/old_funds2nl.png" alt="fig" /></p>
<p>This suggests that portfolio turnover and the time-sensitivity of investments are important considerations when picking a fund to replicate.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Following the 13F filings is not the hard part - picking the right 13F filings is. Almost all the hedge funds examined had higher average returns than the index, even after removing the look-ahead bias. If you want to invest like a hedge fund - and maybe beat the market - buy the 13F holdings of your fund of choice. But be careful before investing in this strategy, as my results do not suggest that all replicating portfolios will beat the market - I ignored transaction costs and maximum drawdown (a 10% negative return followed by a 10% positive return results in a 1% overall loss). <br />
Finally, just because a fund has done well historically doesn’t mean that performance will continue - so picking the “best” funds may not be the best strategy. Future posts will explore why some funds perform better than others, and how this can be used to select portfolios for replication.</p>
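The drawdown arithmetic in parentheses above is worth making explicit, since returns compound multiplicatively rather than adding up:

```python
# Losing 10% and then gaining 10% does not break even: the 10% gain is
# earned on a smaller base, leaving a 1% overall loss.
wealth = 1.0 * (1 - 0.10) * (1 + 0.10)   # 0.90 * 1.10 = 0.99
loss = 1.0 - wealth                       # a 1% overall loss
```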
Mon, 08 Aug 2016 00:00:00 +0000
http://marcosammon.com/2016/08/08/13f.html
http://marcosammon.com/2016/08/08/13f.htmlAre All Yields Created Equal?<p>An article in the July 30, 2016 <a href="http://www.nytimes.com/2016/07/31/your-money/why-is-the-stock-market-so-high-ask-the-bond-market.html?emc=eta1&mtrref=undefined">New York Times</a> argues that high stock prices are the result of low bond yields. To paraphrase the article: The S&P 500 index has a dividend yield of 2.1%, 40% higher than the 1.5% yield on 10-year Treasury notes. Investors have moved to stocks, chasing yield. <br />
This post explains why dividend yield and bond yield are not substitutes.</p>
<h1 id="index-dividends-and-10-year-treasury-yields">Index Dividends and 10-Year Treasury yields</h1>
<p>I agree with the first half of the argument - Over the past 15 years, the yield on 10-year Treasury notes has declined, while the dividend yield on the S&P 500 index has (on average) increased: <br /></p>
<p><img src="/Post_Images/7_31_2016/divtreasury.png" alt="fig" /></p>
<p>This doesn’t mean you should sell your bonds and buy stocks. More important is the total return - the sum of yield and capital gains (i.e. the price going up).</p>
<h1 id="dividend-yield">Dividend Yield</h1>
<p>Dividend yield, <script type="math/tex">D/P</script>, can increase for two reasons - either dividends go up, or prices go down: <br /></p>
<p><img src="/Post_Images/7_31_2016/dyieldsp500.png" alt="fig" /></p>
<p>At the index level, dividends don’t move that much and changes in dividends are weakly correlated with dividend yield. <br />
There is, however, a strong inverse relationship between dividend yield and index price level: <br /></p>
<p><img src="/Post_Images/7_31_2016/priceyieldsp500.png" alt="fig" /></p>
<p>The standard deviation of dividend growth is about one eighth the standard deviation of stock returns, so the magnitude of price changes drowns out changes in dividends: <br /></p>
<p><img src="/Post_Images/7_31_2016/dgret.png" alt="fig" /></p>
<p>The point is - dividends have been stable, but the price of the underlying index has not. <br /></p>
<h1 id="10-year-treasury-yield">10-Year Treasury Yield</h1>
<p>Yields on existing bonds can increase for only one reason - the price went down. This is because (assuming the U.S. government always pays its debts) you are guaranteed to receive the face value of a Treasury note at maturity. There is no such guarantee with stocks. <br />
This doesn’t mean bonds are easier to understand than stocks, as prices can change for many reasons. For example, if interest rates rise, existing bonds fall in price (suppose not; then arbitrageurs would short old bonds and buy new ones to earn the difference in yields). <br />
Compare the price of 10-year bonds to the S&P 500 index. Both series are normalized to 100 in January 1986:<br /></p>
<p><img src="/Post_Images/7_31_2016/bondprcstockprc.png" alt="fig" /></p>
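The inverse price-yield relationship can be made concrete by pricing a note as the present value of its cash flows - a minimal sketch, assuming semiannual coupons; the 1.5% coupon and $100 face value are illustrative.

```python
# Price of a coupon bond as the discounted value of its coupons plus face
# value. Raising the discount rate (the yield) mechanically lowers the price.

def bond_price(face, annual_coupon_rate, annual_yield, years, freq=2):
    """Present value of semiannual coupons and face value at the given yield."""
    c = face * annual_coupon_rate / freq       # coupon per period
    y = annual_yield / freq                    # yield per period
    n = years * freq                           # number of periods
    coupons = sum(c / (1 + y) ** k for k in range(1, n + 1))
    return coupons + face / (1 + y) ** n

# When the yield equals the coupon rate, the bond trades at par (price = 100);
# if yields rise above the coupon, the price falls below par.
```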
<p>This doesn’t mean you would have earned zero buying the 10-year notes. Below, I plot the total return (assuming you reinvest dividends and reinvest coupon payments) for each asset: <br /></p>
<p><img src="/Post_Images/7_31_2016/invest.png" alt="fig" /></p>
<p>The bond investment goes up a little every year, while the stock investment loses nearly 50% of its value twice in the last 30 years. Despite this, the stocks would be worth 4x as much today as the bonds!</p>
<h1 id="timing">Timing</h1>
<p>In a two-period asset pricing model:
\begin{equation}
P_t= E_t \left(\beta \frac{u'(c_{t+1})}{u'(c_t)} x_{t+1} \right)
\end{equation}
With risk-averse investors, an asset whose payoff, <script type="math/tex">x_{t+1}</script>, covaries negatively with consumption commands a higher price (and a lower return) than an asset that covaries positively with consumption. This is why people buy insurance: it’s good to get a check when your house burns down. In the financial crisis, bond prices went up (yields went down), while stock prices went down: <br /></p>
<p><img src="/Post_Images/7_31_2016/crisis.png" alt="fig" /></p>
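The pricing equation above can be illustrated with a toy two-state example - a sketch assuming power utility and made-up numbers, not a calibration.

```python
# Toy two-state version of P = E[beta * u'(c1)/u'(c0) * x] with power
# utility u'(c) = c**(-gamma). An "insurance" asset that pays off in the
# bad state gets a higher price than a procyclical asset with the same
# expected payoff. All parameter values are illustrative.

beta, gamma, c0 = 0.95, 2.0, 1.0
states = [0.9, 1.1]     # consumption next period: recession, boom
probs = [0.5, 0.5]

def price(payoffs):
    """Price an asset from its state-contingent payoffs."""
    m = [beta * (c1 / c0) ** (-gamma) for c1 in states]  # stochastic discount factor
    return sum(p * mi * x for p, mi, x in zip(probs, m, payoffs))

insurance = price([1.0, 0.0])    # pays $1 in the recession state
procyclical = price([0.0, 1.0])  # pays $1 in the boom state
# insurance > procyclical, even though both have expected payoff 0.5
```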
<p>This is another reason to treat dividend yields and bond yields differently - if you want to hedge against losing your job in a recession, bonds are a better bet (whether job loss is at the extensive or intensive margin during recessions will be the topic of a future blog post).</p>
<h1 id="individual-stocks">Individual Stocks</h1>
<p>Let’s look at two dividend paying stocks mentioned in the article, Verizon and AT&T (formerly
<a href="http://www.att.com/gen/general?pid=7503">SBC</a>). We see the same pattern as we did for the index - dividends are relatively stable, while prices are not:</p>
<p><img src="/Post_Images/7_31_2016/verizon.png" alt="fig" /></p>
<p><img src="/Post_Images/7_31_2016/att.png" alt="fig" /></p>
<p>Investors might be attracted to these stocks because their dividend yields are now more than double the yield on 10-Year Treasury Notes: <br /></p>
<p><img src="/Post_Images/7_31_2016/telecomdy.png" alt="fig" /></p>
<p>It’s important to remember, however, that both these stocks lost over 30% of their value during the financial crisis. If you were forced to sell (for example, owing to liquidity constraints), you would have taken a big loss, lowering the effective yield below that of the 10-year note.</p>
<h1 id="conclusion">Conclusion</h1>
<p>There’s something missing from this analysis - the relationship between inflation, bond yields and stock prices. Mild inflation will affect real bond returns more than real stock returns. This omission, however, doesn’t change the main result - even though dividend yields are relatively stable over time, there is no guarantee on the value of the underlying asset. Investors may be able to increase yields 40% by switching from bonds to stocks, but not without substantial additional risk.</p>
Sun, 31 Jul 2016 00:00:00 +0000
http://marcosammon.com/2016/07/31/divyields.html
http://marcosammon.com/2016/07/31/divyields.htmlThe Power of Profitability<p>In “The Little Book That Beats The Market” (2006), Joel Greenblatt screens for stocks using earnings yield and return on capital. This post describes the performance of a mechanical strategy based on these criteria. After writing the post, I found “The Other Side of Value: The Gross Profitability Premium” (Novy-Marx, 2012), which presents similar ideas in a more rigorous way.</p>
<h1 id="definitions">Definitions</h1>
<p>Earnings yield is defined as earnings before interest and taxes (EBIT)/enterprise value (EV). Using data from Compustat, I compute:
\begin{equation}
\text{EBIT} = \text{Revenue} - \text{Operating Expenses}
\end{equation}
\begin{equation}
\text{EV} = \text{Market Cap.} + \text{Debt} + \text{Preferred Stock} - \text{Cash}
\end{equation}
Return on capital is defined as EBIT/net fixed assets (NA). Again, using Compustat data:
\begin{equation}
\text{NA} = \text{Total Assets} - \text{Depreciation} + \text{Net Working Capital (NWC)}
\end{equation}
\begin{equation}
\text{NWC} = \text{Current Assets} - \text{Current Liabilities}
\end{equation}
Greenblatt’s return on capital is similar to Novy-Marx’s profitability, defined as:
\begin{equation}
\text{Profitability} = \frac{\text{Revenue} - \text{Cost of Goods Sold}}{\text{Total Assets}}
\end{equation}</p>
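The definitions above translate directly into code - a minimal sketch where the argument names are descriptive placeholders, not actual Compustat mnemonics, and the numbers in the example are made up.

```python
# Sketch of the screening metrics. Inputs mirror the definitions in the
# text; field names are hypothetical, not Compustat variable codes.

def greenblatt_metrics(revenue, opex, market_cap, debt, preferred, cash,
                       total_assets, depreciation,
                       current_assets, current_liabilities):
    ebit = revenue - opex
    ev = market_cap + debt + preferred - cash
    nwc = current_assets - current_liabilities
    na = total_assets - depreciation + nwc
    return {"earnings_yield": ebit / ev, "return_on_capital": ebit / na}

m = greenblatt_metrics(revenue=100, opex=80, market_cap=150, debt=50,
                       preferred=0, cash=20, total_assets=200,
                       depreciation=40, current_assets=60,
                       current_liabilities=30)
# EBIT = 20, EV = 180, NA = 190, so earnings yield is 20/180 and
# return on capital is 20/190.
```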
<h1 id="the-data">The Data</h1>
<p>The returns data are from CRSP and the financial data are from Compustat. I applied the following filters before forming portfolios: <br />
1) Only include ordinary common stock for companies incorporated inside the U.S. traded on a major exchange - NYSE, AMEX and Nasdaq <br />
2) Remove stocks in the bottom 10% of market capitalization <br />
3) Only include stocks that are in both CRSP and Compustat with a primary link type (LC or LU, see <a href="http://www.crsp.com/products/documentation/link-history-data">CRSP’s guide here</a>) <br />
4) For the portfolio formation period, require non-missing revenue, operating expenses and market capitalization <br />
To prevent a look-ahead bias, I use accounting data with a minimum 6-month lag. For example, if a company has a fiscal year ending in December, data from 2010 Q2 will be used to form portfolios at the end of December 2010. I update the financial information once a year.</p>
<h1 id="portfolio-formation">Portfolio Formation</h1>
<p>The portfolio formation is very similar to the
<a href="http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/Data_Library/six_portfolios.html">6 Portfolios formed on Size and Book-to-Market.</a> Using NYSE stocks, I calculate the median earnings yield (EY), as well as the 30th and 70th percentiles of return on equity (ROE). I use these breakpoints to form 6 value-weighted portfolios: <br /></p>
<p><img src="/Post_Images/7_30_2016/Portfolio_Formation.PNG" alt="formation" /></p>
<p>Portfolios are rebalanced at the end of every month.</p>
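The 2x3 sort can be sketched as follows - a minimal version that buckets each stock by whether its EY is above the NYSE median and by its ROE tercile; the nearest-rank percentile rule here is an illustrative simplification.

```python
# Sketch of the 2x3 sort on EY and ROE using NYSE breakpoints.

def percentile(xs, p):
    """Nearest-rank percentile of a list (p between 0 and 100)."""
    s = sorted(xs)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

def assign_bucket(ey, roe, nyse_ey, nyse_roe):
    """Assign a stock to one of the 6 portfolios given NYSE breakpoints."""
    ey_side = "high_ey" if ey > percentile(nyse_ey, 50) else "low_ey"
    lo, hi = percentile(nyse_roe, 30), percentile(nyse_roe, 70)
    roe_side = "low_roe" if roe <= lo else ("high_roe" if roe > hi else "mid_roe")
    return ey_side, roe_side
```

Within each of the 6 buckets, stocks are then value-weighted by market capitalization, as in the Fama-French size/book-to-market portfolios.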
<h1 id="performance">Performance</h1>
<p>Compare the value of a dollar invested in the low EY & low ROE firms to the high EY & high ROE firms (in the table above, this corresponds to the top left and bottom right portfolios): <br /></p>
<p><img src="/Post_Images/7_30_2016/performance.png" alt="performance" /></p>
<p>The market portfolio is the value-weighted average of all stocks that remained after applying the filters described above. <br />
The high EY/ROE portfolio cumulatively returns about double the market over the past 15 years. To see if EY or ROE is driving the results, I plot all 6 portfolios, color-coded by above median EY (blue) and below median EY (red): <br /></p>
<p><img src="/Post_Images/7_30_2016/hley.png" alt="hley" /></p>
<p>Once we sort on EY, sorting on ROE has little effect.</p>
<h1 id="asset-pricing-factor">Asset Pricing Factor</h1>
<p>Similar to <a href="http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/Data_Library/f-f_factors.html">Fama and French</a>, I construct an Earnings Yield factor. The long side is an equal-weighted average of the 3 high EY portfolios, while the short side is an equal-weighted average of the 3 low EY portfolios. The long side beats the market, while the short side under-performs: <br /></p>
<p><img src="/Post_Images/7_30_2016/eymkt.png" alt="eymkt" /></p>
<p>The factor is the long side minus the short side: <br /></p>
<p><img src="/Post_Images/7_30_2016/eyfactor.png" alt="eyfactor" /></p>
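The factor construction itself is one line of arithmetic per month - a minimal sketch, with made-up monthly returns for the six portfolios.

```python
# Sketch of the EY factor: long the equal-weighted average of the three
# high-EY portfolios, short the equal-weighted average of the three low-EY
# portfolios. Inputs are the three monthly leg returns on each side.

def ey_factor(high_ey_returns, low_ey_returns):
    """Monthly factor return: mean of the high-EY legs minus mean of the low-EY legs."""
    long_leg = sum(high_ey_returns) / len(high_ey_returns)
    short_leg = sum(low_ey_returns) / len(low_ey_returns)
    return long_leg - short_leg

# With illustrative returns of 3%, 2%, 1% on the long side and 0%, 1%, -1%
# on the short side, the factor return for the month is 2%.
```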
<p>I wanted to see if the EY factor contains information not in the Size or Value factors (see
<a href="http://marcosammon.com/2016/07/23/short_side.html">here</a> for my factor construction).
From 2001-2005, EY is highly correlated with size. The correlation breaks down during the financial crisis, but resumes in 2009: <br /></p>
<p><img src="/Post_Images/7_30_2016/sizeey.png" alt="sizeey" /></p>
<p>We see the same pattern with the value factor: <br /></p>
<p><img src="/Post_Images/7_30_2016/valey.png" alt="valey" /></p>
<p>To confirm, I regress returns for the high EY and high ROE portfolios on the market, SMB, and HML. The “top 30% return on equity” portfolio is an equal weighted average of the low EY/high ROE and high EY/high ROE portfolios: <br /></p>
<p><img src="/Post_Images/7_30_2016/Table1.PNG" alt="tab1" /></p>
<p>I multiply by 12 to annualize the alphas. While much of the variation in these portfolios is explained by the 3-factor model (<script type="math/tex">R^2</script> of over 90%), the alphas are large and statistically significant. The alpha comes from avoiding the dip in the financial crisis, which is shared by the size and value factors.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Profitability is powerful - we can improve the performance of size and value by sorting on earnings yield. I, however, am 4 years too late, as Novy-Marx makes the exact same point in the abstract of his paper, “Controlling for profitability also dramatically increases the performance of value strategies, especially among the largest, most liquid stocks.” I’m also decades behind Greenblatt, “whose investment firm has averaged 40% annual returns for over twenty years…” Not surprisingly, his best ideas don’t make the book, as an average market return of 6% per year + an alpha of 4% per year (regressing the high EY portfolio on the market) only gets us to 10%, well below Greenblatt’s 40%!</p>
Sat, 30 Jul 2016 00:00:00 +0000
http://marcosammon.com/2016/07/30/profitability.html
http://marcosammon.com/2016/07/30/profitability.htmlIncome and Wealth Inequality Part 1: The Facts<p>This is the first post in a series on income and wealth inequality in the United States. This topic has been gaining mainstream popularity, with books like Capital in the Twenty-First Century (Piketty, 2013) selling over 1.5 million copies. This post presents some basic facts about the distribution of income and wealth over the past 15 years.</p>
<h1 id="income-inequality">Income Inequality</h1>
<p>A common “fact” mentioned in discussions on income inequality is that the average CEO makes over 200 times as much as the median worker (see <a href="https://www.glassdoor.com/research/ceo-pay-ratio/">glassdoor</a> for one such example). <br />
While I don’t have data to test this directly, we can still get a general idea of how incomes are distributed. Below, I’ve tabulated several income percentiles for men aged 18-65 (the 50th percentile is the median):</p>
<p><img src="/Post_Images/7_24_2016/income_dist.PNG" alt="income_dist" /></p>
<p>Over the past 15 years, the 5th, 10th, 25th and 50th percentiles have decreased, while the higher percentiles have increased. This is why the median has stayed about the same but the mean has increased. The ACS data actually understates this result, as the income of very wealthy people is replaced with the average income of people with similar characteristics, so that individuals (e.g. Bill Gates, Tim Cook) can’t be identified. See <a href="https://cps.ipums.org/cps/topcodes_tables.shtml">here</a> for more details. <br />
This also means we are not getting the top 1% of the top 1% (i.e. the CEO’s). We can see, however, that the top 1% are earning at least 10x as much as the median household. <br /></p>
<p>Below, I show a frequency distribution of income (weighted by the “person weight” in the ACS data) for 2001 and 2014:</p>
<p><img src="/Post_Images/7_24_2016/2001.png" alt="2001" />
<img src="/Post_Images/7_24_2016/2014.png" alt="2014" /></p>
<p>As you can see, the right tail has become “heavier”, with more individuals well above the median.</p>
<h1 id="wealth-inequality">Wealth Inequality</h1>
<p>People like Tim Cook don’t just earn a lot of money, they also accumulate substantial wealth. For this type of analysis, I can only get very aggregated data (source: <a href="http://www.census.gov/people/wealth/data/disttables.html">census</a>). The wealth distribution is broken into 5 groups (Q1 being the lowest, and Q5 being the highest). Below, I plot the evolution of median net worth by group:</p>
<p><img src="/Post_Images/7_24_2016/MedianNetWorth.PNG" alt="MedianNetWorth" /></p>
<p>Similar to income, it has increased for the top groups, and stayed flat or decreased for the bottom groups. <br />
The median of the top 20% (the 90th percentile) has been flat for the past few years, but the mean has continued to increase:</p>
<p><img src="/Post_Images/7_24_2016/MeanNetWorth.PNG" alt="MeanNetWorth" /></p>
<p>This implies that the net worth of those in the top of Q5 is increasing (if everyone in Q5 had increasing net worth, the median would be growing as well).</p>
<h1 id="rates-of-return">Rates of Return</h1>
<p>One of the factors that may drive wealth inequality is that wealth can be invested, and turned into even more wealth. Below I present two charts showing the total return on housing, stocks and bonds, relative to GDP (source: Federal Reserve Economic Data): <br />
<img src="/Post_Images/7_24_2016/gdpbond.png" alt="gdpbond" />
<img src="/Post_Images/7_24_2016/gdphousingstock.png" alt="gdphousingstock" /></p>
<p>According to the census data above, mean net worth increased by an average of 5.65% per year for the top 20% of households and 3.59% per year for the next 20% of households. It’s interesting that this is close to the average growth rate for housing and stocks over the same period (although this is purely suggestive evidence).</p>
<h1 id="future-work">Future Work</h1>
<p>One of the problems with the evidence I’ve presented is its static nature. It’s true that the top 20% are increasing their wealth/income relative to everyone else - but that group isn’t necessarily the same people every year. <br />
According to a <a href="http://fortune.com/2015/03/02/economic-inequality-myth-1-percent-wealth/">Fortune article</a>, “70% of the population will have experienced at least one year within the top 20th percentile of income.” A future post will analyze the Panel Study of Income Dynamics (PSID) which unlike the census (a random sample), tracks the same individuals over time. The PSID can be used to compute the transition probabilities between income quantiles, so we can better understand the dynamic nature of inequality over time.</p>
Sun, 24 Jul 2016 00:00:00 +0000
http://marcosammon.com/2016/07/24/inequality_1.html
http://marcosammon.com/2016/07/24/inequality_1.html