Expected Discounted Utility

Expected discounted utility is one of the most common ways to represent preferences over risky consumption plans. Consider an agent, sitting at time $t$, who will receive a consumption stream $c = (c_t, \dots, c_T)$ until time $T$: \begin{equation} U_t(c)= E_t \left[ \sum \limits_{s=t}^T \beta^{s-t}u_s(c_s)\right] \end{equation} where $\beta \in (0,1)$ is the discount factor and $u_s$ is a within-period utility function. A problem with expected discounted utility is that it cannot separate preferences for smoothing over time from preferences for smoothing across states.
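The expression above can be sketched numerically for a lottery over finitely many consumption paths. This is a minimal illustration, not anything from the original text: the function names, log within-period utility, and the two-period lottery are all assumptions chosen for the example.

```python
import math

def discounted_utility(path, u, beta, t=0):
    """Discounted utility of one deterministic consumption path (c_t, ..., c_T)."""
    return sum(beta ** (s - t) * u(c) for s, c in enumerate(path, start=t))

def expected_discounted_utility(lottery, u, beta):
    """E_t of discounted utility over a finite list of (probability, path) pairs."""
    return sum(p * discounted_utility(path, u, beta) for p, path in lottery)

u = math.log     # illustrative within-period utility (an assumption)
beta = 0.95
# A two-period lottery: 50/50 between consuming (2, 2) and (1, 1).
lottery = [(0.5, (2, 2)), (0.5, (1, 1))]
val = expected_discounted_utility(lottery, u, beta)
print(val)  # ≈ 0.676
```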
Consider the following example: You are stranded on an island at $t=0$. A man comes in a boat and offers you a choice of two deals: (1) Every morning he comes and flips a coin; if it comes up heads, you get a bushel of bananas that day. (2) He flips a coin today; if it comes up heads you get a bushel of bananas every day until time $T$, and if it comes up tails you get no bananas until time $T$. It’s intuitive that plan 2 is riskier than plan 1, but under expected discounted utility, for any $u$ and $\beta$ the agent is indifferent between the two plans: \begin{equation} U(\text{Plan 1}) = \sum\limits_{t=0}^T \beta^t \frac{u_t(1)+u_t(0)}{2}= U(\text{Plan 2}) \end{equation}
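The indifference between the two island plans is easy to confirm in a few lines. A minimal sketch, assuming square-root within-period utility and a particular $\beta$ and $T$ (any choices give the same conclusion):

```python
import math

beta, T = 0.9, 10
u = math.sqrt  # illustrative within-period utility; any u gives the same result

# Plan 1: an independent coin flip each day, so E[u(c_t)] = (u(1)+u(0))/2 each period.
plan1 = sum(beta ** t * (u(1) + u(0)) / 2 for t in range(T + 1))

# Plan 2: one coin flip today; each branch is a deterministic stream.
heads = sum(beta ** t * u(1) for t in range(T + 1))
tails = sum(beta ** t * u(0) for t in range(T + 1))
plan2 = 0.5 * heads + 0.5 * tails

print(abs(plan1 - plan2))  # 0 up to floating-point error
```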

Recursive Utility

The only way to even partially separate preferences for smoothing over time from preferences for smoothing across states is to use recursive utility (see Skiadas 2009 for a complete proof - this is an if and only if relationship). Recursive utility has two ingredients: the aggregator, which determines preferences over deterministic plans (time smoothing), and the conditional certainty equivalent (state smoothing). The steps below formulate expected discounted utility as recursive utility. For simplicity, drop the dependence of the within-period utility function on time, so we can remove all the subscript $s$'s. Now, propose a desirable property for the utility function - normalization. Consider any deterministic constant plan $x$ (consuming $x$ in every period); then a utility is normalized if $\tilde{U}_t(x) = x$. Normalize utility $U_t$, the expected discounted utility defined above, as $\tilde{U}_t = \psi_t^{-1} \circ U_t$, where $\psi_t(x) = \sum_{s=t}^T \beta^{s-t} u(x)$. Basically, $\psi_t(x)$ gives the discounted utility of the constant plan $x$, so $\tilde{U}_t(c) = \psi_t^{-1}(U_t(c))$ gives the deterministic $x$ required to make the agent indifferent between potentially risky plan $c$ and the constant plan $x$.
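The normalization can be checked numerically: applying $\psi_t^{-1}$ after $\psi_t$ to a constant plan returns the constant itself. A small sketch, assuming log within-period utility (chosen because its inverse is available in closed form) and an arbitrary horizon:

```python
import math

beta, T = 0.9, 5
u, u_inv = math.log, math.exp  # illustrative utility with a closed-form inverse

def psi(x, t=0):
    """psi_t(x): discounted utility of consuming the constant x from t through T."""
    return sum(beta ** (s - t) * u(x) for s in range(t, T + 1))

def psi_inv(v, t=0):
    """Inverse of psi_t: the constant whose plan has discounted utility v."""
    geom = sum(beta ** (s - t) for s in range(t, T + 1))
    return u_inv(v / geom)

# Normalization check: the constant plan x has normalized utility x.
x = 2.0
x_back = psi_inv(psi(x))
print(x_back)  # ≈ 2.0
```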
For expected discounted utility, the aggregator is: $f(t,x,y) = \psi_t^{-1}\left(u(x) + \beta \psi_{t+1}(y)\right)$. The intuition is that with expected discounted utility, the agent’s utility from plan $c$ is a weighted average of their consumption today and the utility of the equivalent deterministic plan from $t+1$ until $T$. For utility to be normalized, the aggregator must satisfy $f(t,x,x) = x$ for any constant plan $x$. Put this into the equation above to solve for $\psi_t$: $x = \psi_t^{-1}\left(u(x) + \beta \psi_{t+1}(x)\right)$. Then, apply $\psi_t$ to both sides: \begin{equation} u(x) + \beta \psi_{t+1} (x) = \psi_t(x) \end{equation} Fix $x$, and interpret terminal consumption value as consuming $x$ for the rest of time (equivalently, imagine letting $T$ go to infinity). This implies we can drop the $t$ subscripts on the $\psi$'s: \begin{equation} u(x)=\psi(x)-\beta\psi(x) \end{equation}
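The finite-horizon recursion $u(x) + \beta \psi_{t+1}(x) = \psi_t(x)$ can be verified directly before taking the stationary limit. A quick numerical check, again assuming log utility and an arbitrary $\beta$ and $T$:

```python
import math

beta, T = 0.9, 5
u = math.log  # illustrative within-period utility

def psi(x, t):
    """psi_t(x): discounted utility of consuming the constant x from t through T."""
    return sum(beta ** (s - t) * u(x) for s in range(t, T + 1))

# The recursion u(x) + beta * psi_{t+1}(x) = psi_t(x) holds at every t < T.
x = 3.0
for t in range(T):
    gap = u(x) + beta * psi(x, t + 1) - psi(x, t)
    print(gap)  # 0 up to floating-point error
```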

Rearranging yields $\psi(x) = \frac{u(x)}{1-\beta}$ and $\psi^{-1}(z) = u^{-1}\left((1-\beta)z\right)$. Putting this back into our expression above for $f$ implies: \begin{equation} f(t,x,y)=u^{-1}((1-\beta)u(x)+\beta u(y)) \end{equation} Given the way the aggregator is defined, we can see that $f$ depends on the curvature of $u$ - in other words, the within-period utility function will influence preferences for smoothing over time. This also gives intuition for how to make an agent not indifferent between deal (1) and deal (2) described above - the conditional certainty equivalent needs to be defined independently of $f$ (or $u$).
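The island example can be re-evaluated recursively with this aggregator to make the point concrete. The sketch below is an illustration under stated assumptions: square-root within-period utility, a particular $\beta$, an infinite-horizon fixed point solved by iteration, and an exponential curvature `h` for the alternative certainty equivalent (all choices are mine, not from the original derivation). When the certainty equivalent is built from $u$ itself, we recover normalized expected discounted utility and the two plans tie; a more risk-averse certainty equivalent breaks the tie.

```python
import math

beta = 0.9
u, u_inv = math.sqrt, lambda z: z ** 2  # illustrative within-period utility

def f(x, y):
    """Aggregator derived above: f(x, y) = u^{-1}((1-beta)u(x) + beta*u(y))."""
    return u_inv((1 - beta) * u(x) + beta * u(y))

def ce(values, probs, h, h_inv):
    """Conditional certainty equivalent under curvature h."""
    return h_inv(sum(p * h(v) for v, p in zip(values, probs)))

def island_values(h, h_inv):
    # Plan 2: one flip, then a constant plan with normalized utility 1 or 0.
    v2 = ce([1.0, 0.0], [0.5, 0.5], h, h_inv)
    # Plan 1: an i.i.d. flip each morning; solve V = ce({f(1, V), f(0, V)}) by iteration.
    v1 = 0.5
    for _ in range(500):
        v1 = ce([f(1.0, v1), f(0.0, v1)], [0.5, 0.5], h, h_inv)
    return v1, v2

# Certainty equivalent built from u itself: expected discounted utility, a tie.
v1_edu, v2_edu = island_values(u, u_inv)
print(v1_edu, v2_edu)  # both approximately 0.25
# A more risk-averse certainty equivalent (h more concave than u) breaks the tie:
h = lambda z: 1 - math.exp(-5 * z)
h_inv = lambda y: -math.log(1 - y) / 5
v1_risk, v2_risk = island_values(h, h_inv)
print(v1_risk > v2_risk)  # True: the one-shot gamble is now worse
```

The key design point is that `f` and `ce` take separate curvature functions, which is exactly the independence the paragraph above calls for.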


Recursive utility is a general framework, with expected discounted utility as a special case. A future post will explore functional forms of recursive utility that are common in asset pricing models.