Standard consumption theory assumes time-separable utility and exponential discounting. Each of these assumptions can be relaxed in economically meaningful ways. Part I introduces habit formation, where past consumption raises the reference point against which current consumption is judged. Part II adds durable goods, whose stocks depreciate gradually and provide utility over many periods. Part III replaces exponential discounting with quasi-hyperbolic preferences, producing the present-bias and self-control problems studied by Laibson (1997).
import numpy as np
from scipy.optimize import brentq
import matplotlib.pyplot as plt
import sympy as sp
import pandas as pd
from collections import namedtuple

# Parameters for the habit formation model
HabitModel = namedtuple(
    'HabitModel',
    ['R', 'beta', 'rho', 'alpha', 'T']
)
# Parameters for the durable goods model
DurableModel = namedtuple(
    'DurableModel',
    ['R', 'beta', 'rho', 'alpha_d', 'delta', 'T']
)
# Parameters for the Laibson model
LaibsonModel = namedtuple(
    'LaibsonModel',
    ['R', 'beta', 'delta_hyp', 'rho', 'T']
)
Part I: Habit Formation¶
Habit formation means that utility depends not only on current consumption $c_t$ but also on a habit stock $h_t$ that summarizes past consumption. Higher past consumption raises the bar: utility is decreasing in the habit stock, $u^h(c_t, h_t) < 0$. This section follows Carroll (2000).
The Problem¶
The consumer maximizes

$$\max_{\{c_t\}} \; \sum_{t=0}^{T} \beta^t\, u(c_t, h_t) \tag{1}$$

subject to the dynamic budget constraint

$$m_{t+1} = R\,(m_t - c_t) \tag{2}$$

and the habit evolution rule

$$h_{t+1} = c_t. \tag{3}$$

The assumption in (3) is the simplest case: the habit stock equals last period's consumption. Under this rule, the consumer who ate well yesterday finds today's modest meal less satisfying.
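A two-line sketch makes the habit rule concrete (the consumption path here is purely illustrative):

```python
import numpy as np

# Illustrative consumption path with a one-period spike; under the habit rule
# above, the habit stock is simply lagged consumption, so the spike raises the
# next period's reference point.
c = np.array([1.0, 1.0, 1.5, 1.0, 1.0])
h = np.zeros_like(c)
h[1:] = c[:-1]   # h_{t+1} = c_t
print(h)         # [0.  1.  1.  1.5 1. ]
```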
The Bellman Equation¶
Because the habit stock enters utility, the value function has two state variables. Bellman's equation is

$$v_t(m_t, h_t) = \max_{c_t}\; u(c_t, h_t) + \beta\, v_{t+1}\big(R(m_t - c_t),\, c_t\big). \tag{4}$$

The second argument of $v_{t+1}$ is $c_t$ because $h_{t+1} = c_t$ under our habit rule. To apply the envelope theorem, define the "unoptimized" value

$$\Omega_t(m_t, h_t, c_t) \equiv u(c_t, h_t) + \beta\, v_{t+1}\big(R(m_t - c_t),\, c_t\big), \tag{5}$$

so that $v_t(m_t, h_t) = \Omega_t\big(m_t, h_t, c_t(m_t, h_t)\big)$ at the optimal consumption rule $c_t(m_t, h_t)$.
Optimality Conditions¶
The First Order Condition¶
Differentiating the Bellman equation (4) with respect to $c_t$ yields

$$u^c(c_t, h_t) = \beta\big[R\, v^m_{t+1} - v^h_{t+1}\big]. \tag{6}$$

Without habits, $v^h_{t+1} = 0$ and this reduces to the standard Euler equation. With habits, an extra unit of consumption today raises tomorrow's habit stock, which reduces tomorrow's utility (since $v^h_{t+1} < 0$). The right-hand side of (6) is therefore larger than in the no-habit case, so the marginal utility on the left must also be larger, which means lower $c_t$. Habits increase the willingness to delay spending.
Envelope Condition for $m_t$¶
Applying the envelope theorem to (4) (treating $c_t$ as constant, since the FOC zeroes out its contribution) gives

$$v^m_t = R\beta\, v^m_{t+1}. \tag{7}$$

This is the same envelope result as in the standard problem: the marginal value of wealth today equals the discounted gross return times the marginal value of wealth tomorrow.
Envelope Condition for $h_t$¶
The habit stock enters only through $u(c_t, h_t)$, so the envelope theorem gives

$$v^h_t = u^h(c_t, h_t). \tag{8}$$

The marginal value of a higher habit stock equals the marginal disutility it causes today.
Combined Euler Equation¶
Substituting (7) into (6) produces

$$v^m_t = u^c_t + \beta\, v^h_{t+1}. \tag{9}$$

Using (8) to replace $v^h_{t+1}$ with $u^h_{t+1}$, rolling forward one period, and substituting back into (7) yields

$$u^c_t + \beta\, u^h_{t+1} = R\beta\,\big[u^c_{t+1} + \beta\, u^h_{t+2}\big]. \tag{10}$$

When $u^h = 0$, this collapses to the standard Euler equation $u^c_t = R\beta\, u^c_{t+1}$.
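As a quick numerical sanity check (a sketch assuming the CRRA kernel $u(c,h) = (c - \alpha h)^{1-\rho}/(1-\rho)$ introduced in the next section, with illustrative parameter values), a path whose "effective consumption" $z_t = c_t - \alpha h_t$ grows at the constant gross rate $(R\beta)^{1/\rho}$ equates the two sides of the combined Euler equation:

```python
import numpy as np

# With f(z) = z^(1-rho)/(1-rho), check that constant growth of z at the rate
# g = (R*beta)^(1/rho) satisfies
#     u^c_t + beta*u^h_{t+1} = R*beta*(u^c_{t+1} + beta*u^h_{t+2}).
R, beta, rho, alpha = 1.04, 0.96, 2.0, 0.5   # illustrative values
g = (R * beta) ** (1 / rho)
z = g ** np.arange(3.0)                      # z_t, z_{t+1}, z_{t+2}
fp = z ** (-rho)                             # f'(z) in each period
lhs = fp[0] - alpha * beta * fp[1]           # u^c_t + beta*u^h_{t+1}
rhs = R * beta * (fp[1] - alpha * beta * fp[2])
print(np.isclose(lhs, rhs))  # True
```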
# Symbolic derivation of the combined Euler equation
from sympy.printing.mathml import mathml
def show(expr):
    ml = mathml(expr, printer='presentation')
    print(f'<math display="block" xmlns="http://www.w3.org/1998/Math/MathML">{ml}</math>')
c_t, c_t1, h_t, h_t1 = sp.symbols('c_t c_{t+1} h_t h_{t+1}', positive=True)
R_s, beta_s, alpha_s, rho_s = sp.symbols('R beta alpha rho', positive=True)
# Specific utility: u(c,h) = f(c - alpha*h), with f(z) = z^(1-rho)/(1-rho)
z_t = c_t - alpha_s * h_t
uc = z_t**(-rho_s)
uh = -alpha_s * z_t**(-rho_s)
# Verify the full Euler equation symbolically
# LHS: u^c_t + beta * u^h_{t+1} = f'_t - alpha*beta*f'_{t+1}
# RHS: R*beta * [u^c_{t+1} + beta * u^h_{t+2}] = R*beta * [f'_{t+1} - alpha*beta*f'_{t+2}]
# With constant marginal utility growth k = f'_t/f'_{t+1}, the solution is k = R*beta
z_t1 = c_t1 - alpha_s * h_t1
ratio = z_t / z_t1  # ratio of "effective consumption" across periods
euler_ratio = sp.Eq(sp.Integer(1), R_s * beta_s * (z_t / z_t1)**rho_s)
print("For $u(c,h) = f(c - \\alpha h)$ with CRRA kernel $f$:")
print()
print("$u^c = f'(z_t)$, where $z_t = c_t - \\alpha h_t$:")
show(uc)
print()
print("$u^h = -\\alpha\\, f'(z_t)$:")
show(uh)
print()
print("The Euler equation in terms of $z_t$:")
show(euler_ratio)
A Specific Utility Function¶
Assume the utility function takes the form $u(c_t, h_t) = f(c_t - \alpha h_t)$, where $f(z) = z^{1-\rho}/(1-\rho)$ is CRRA. The parameter $\alpha \in [0, 1)$ controls habit strength: when $\alpha = 0$ habits vanish, and when $\alpha$ is close to 1 only consumption growth above the accustomed level generates satisfaction. The derivatives are

$$u^c_t = (c_t - \alpha h_t)^{-\rho}, \qquad u^h_t = -\alpha\,(c_t - \alpha h_t)^{-\rho}. \tag{11}$$
Serial Correlation of Consumption Growth¶
Substituting into the full Euler equation (10) and looking for a solution where marginal utility grows at a constant rate $k = f'(z_t)/f'(z_{t+1})$, one can show that $k = R\beta$. Taking logs and applying a first-order approximation around small consumption changes gives

$$\Delta \log c_{t+1} \approx \frac{(1-\alpha)}{\rho}\,\log(R\beta) + \alpha\, \Delta \log c_t. \tag{12}$$

This is the key testable implication: habit formation produces serial correlation in consumption growth. The coefficient $\alpha$ on lagged consumption growth measures habit strength. When $\alpha = 0$, consumption growth is unpredictable (as in the standard random walk result).
# Simulate consumption paths under habit formation
np.random.seed(42)
params = HabitModel(R=1.04, beta=0.96, rho=2.0, alpha=0.0, T=200)
alphas = [0.0, 0.3, 0.6, 0.9]
log_R_beta = np.log(params.R * params.beta)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for alpha in alphas:
    drift = (1 - alpha) / params.rho * log_R_beta
    eps = np.random.normal(0, 0.02, params.T)
    dlogc = np.zeros(params.T)
    for t in range(1, params.T):
        dlogc[t] = drift + alpha * dlogc[t - 1] + eps[t]
    logc = np.cumsum(dlogc)
    axes[0].plot(logc, lw=1.5, label=rf'$\alpha = {alpha}$')
axes[0].set_xlabel('Period')
axes[0].set_ylabel(r'$\log\, c_t$')
axes[0].set_title('Stronger habits produce smoother consumption paths')
axes[0].legend(frameon=False, fontsize=8)
axes[0].grid(True, alpha=0.3)
# Autocorrelation of consumption growth at different habit strengths
n_sim, T_sim = 1000, 500
autocorrs = []
for alpha in alphas:
    drift = (1 - alpha) / params.rho * log_R_beta
    corrs = []
    for _ in range(n_sim):
        eps = np.random.normal(0, 0.02, T_sim)
        dlogc = np.zeros(T_sim)
        for t in range(1, T_sim):
            dlogc[t] = drift + alpha * dlogc[t - 1] + eps[t]
        corrs.append(np.corrcoef(dlogc[1:-1], dlogc[2:])[0, 1])
    autocorrs.append(np.mean(corrs))
axes[1].bar(range(len(alphas)), autocorrs, tick_label=[str(a) for a in alphas])
axes[1].set_xlabel(r'Habit strength $\alpha$')
axes[1].set_ylabel(r'Autocorrelation of $\Delta \log c$')
axes[1].set_title('Habits create predictable consumption growth')
axes[1].grid(True, alpha=0.3, axis='y')
plt.tight_layout()
plt.show()
# Summary table: analytical vs simulated autocorrelation
rows = []
for alpha, ac in zip(alphas, autocorrs):
    rows.append({
        'Habit strength (alpha)': alpha,
        'Analytical autocorrelation': alpha,
        'Simulated autocorrelation': round(ac, 4),
    })
df_habits = pd.DataFrame(rows)
df_habits
The table confirms that the first-order autocorrelation of consumption growth matches the habit parameter $\alpha$, exactly as (12) predicts. With $\alpha = 0$ consumption growth is white noise; with $\alpha = 0.6$ about 60 percent of last period's growth carries over.
Part II: Durable Goods¶
A durable good provides utility over multiple periods rather than being consumed immediately. Housing, automobiles, and appliances are classic examples. The consumer now chooses both nondurable consumption $c_t$ and the durable stock $d_t$, and must account for the fact that durables depreciate gradually.
Stock Accumulation¶
The stock of the durable good evolves according to

$$d_t = (1 - \delta)\, d_{t-1} + x_t, \tag{13}$$

where $x_t$ is expenditure on the durable good in period $t$ and $\delta \in [0, 1]$ is the depreciation rate. A lower $\delta$ means the good is "more durable." This geometric depreciation assumption contrasts with "one-hoss-shay" models where the good works perfectly until it fails completely.

The dynamic budget constraint subtracts both nondurable consumption and durable expenditure from available resources:

$$m_{t+1} = R\,(m_t - c_t - x_t). \tag{14}$$
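A quick sketch of the accumulation rule: under constant expenditure $x$, the stock converges to the steady state $d^* = x/\delta$ (the parameter values below are illustrative):

```python
# Iterate the stock accumulation rule d_t = (1 - delta)*d_{t-1} + x with
# constant expenditure x; the stock converges to the steady state x/delta.
delta, x = 0.10, 1.0
d = 0.0
for _ in range(300):
    d = (1 - delta) * d + x
print(round(d, 6), x / delta)   # both are 10.0
```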
The Two-Control Bellman Equation¶
The consumer maximizes $\sum_{t=0}^{T} \beta^t\, u(c_t, d_t)$ subject to (13) and (14). Treating $d_t$ directly as the control variable (since $x_t = d_t - (1-\delta)d_{t-1}$), Bellman's equation is

$$v_t(m_t, d_{t-1}) = \max_{c_t,\, d_t}\; u(c_t, d_t) + \beta\, v_{t+1}\big(R(m_t - c_t - x_t),\, d_t\big), \tag{15}$$

where $x_t = d_t - (1-\delta)\, d_{t-1}$.
First Order Conditions¶
With two controls, there are two FOCs. Differentiating (15) with respect to $c_t$ gives

$$u^c_t = R\beta\, v^m_{t+1}. \tag{16}$$

Differentiating with respect to $d_t$ gives

$$u^d_t = R\beta\, v^m_{t+1} - \beta\, v^d_{t+1}. \tag{17}$$

The FOC for durables has two terms because choosing a higher $d_t$ both costs $R$ units of next-period wealth per unit of spending (the spending must be financed) and delivers value through the durable stock carried into next period.
Envelope Conditions¶
The envelope theorem applied to (15) for each state variable gives

$$v^m_t = R\beta\, v^m_{t+1} \tag{18}$$

and

$$v^d_t = (1-\delta)\, R\beta\, v^m_{t+1} = (1-\delta)\, v^m_t. \tag{19}$$

The second result says that the marginal value of an extra unit of durable stock equals $(1-\delta)$ times the marginal value of wealth. When $\delta = 1$ (a completely nondurable good), $v^d_t = 0$: last period's stock is worthless because it has fully depreciated. When $\delta = 0$ (a perfectly durable good), $v^d_t = v^m_t$: an indestructible durable is as valuable as cash.
The Intratemporal Condition¶
Substituting the envelope results into the FOC for durables (17) produces the intratemporal optimality condition

$$u^d_t = \left(\frac{r + \delta}{R}\right) u^c_t, \tag{20}$$

where $r = R - 1$ is the net interest rate. When $\delta < 1$, the current-period marginal utility from the durable is strictly less than the marginal utility from nondurables. The reason is that the durable will continue yielding utility in future periods; what should be equated to $u^c_t$ is the total discounted lifetime utility from an extra unit of the durable, not merely the single-period marginal utility.
# Symbolic derivation of the intratemporal condition
r_s, delta_s, R_sym = sp.symbols('r delta R', positive=True)
uc_s, ud_s = sp.symbols("u^c u^d")
# From the envelope + FOC derivation: u^d = [(r + delta)/R] * u^c
rhs = (r_s + delta_s) / R_sym * uc_s
intratemporal = sp.Eq(ud_s, rhs)
print("**Intratemporal optimality condition:**")
print()
show(intratemporal)
print()
# Verify the limiting cases symbolically
limit_nondurable = rhs.subs(delta_s, 1).simplify()
limit_perfect = rhs.subs(delta_s, 0).simplify()
print("When $\\delta = 1$ (nondurable): $u^d =$")
show(limit_nondurable)
print()
print("When $\\delta = 0$ (perfectly durable): $u^d =$")
show(limit_perfect)
Cobb-Douglas Utility¶
Assume $u(c_t, d_t) = \dfrac{\big(c_t^{1-\alpha_d}\, d_t^{\alpha_d}\big)^{1-\rho}}{1-\rho}$, where $\alpha_d \in (0, 1)$ governs the taste for durables. The marginal utilities are

$$u^c_t = (1-\alpha_d)\,\frac{\big(c_t^{1-\alpha_d} d_t^{\alpha_d}\big)^{1-\rho}}{c_t}, \qquad u^d_t = \alpha_d\,\frac{\big(c_t^{1-\alpha_d} d_t^{\alpha_d}\big)^{1-\rho}}{d_t}. \tag{21}$$

Substituting into (20) and simplifying yields the optimal durable-to-nondurable ratio

$$\frac{d_t}{c_t} = \frac{\alpha_d}{1-\alpha_d}\,\frac{R}{r+\delta} \equiv \gamma. \tag{22}$$

The ratio is a constant that depends on preferences ($\alpha_d$) and prices ($R$, $\delta$). Whenever nondurable consumption jumps, the durable stock must jump by the same proportion.
# Compute the optimal durable-to-nondurable ratio for different depreciation rates
params_d = DurableModel(R=1.04, beta=0.96, rho=2.0, alpha_d=0.3, delta=0.1, T=100)
r = params_d.R - 1
deltas = np.linspace(0.01, 0.50, 50)
gammas = (params_d.alpha_d / (1 - params_d.alpha_d)) * params_d.R / (r + deltas)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].plot(deltas, gammas, lw=2)
axes[0].set_xlabel(r'Depreciation rate $\delta$')
axes[0].set_ylabel(r'Optimal ratio $d/c = \gamma$')
axes[0].set_title('More durable goods command larger stocks relative to $c$')
axes[0].grid(True, alpha=0.3)
# Spending volatility: x_t/x_{t-1} = (epsilon + delta)/delta
epsilons = np.linspace(0.0, 0.10, 50)
delta_vals = [0.05, 0.10, 0.25, 0.50]
for dv in delta_vals:
    ratio = (epsilons + dv) / dv
    axes[1].plot(epsilons, ratio, lw=2, label=rf'$\delta = {dv}$')
axes[1].set_xlabel(r'Consumption shock $\epsilon$')
axes[1].set_ylabel(r'Spending ratio $x_t / x_{t-1}$')
axes[1].set_title('Low depreciation amplifies spending volatility')
axes[1].legend(frameon=False, fontsize=8)
axes[1].grid(True, alpha=0.3)
plt.tight_layout()
plt.show()
Spending Volatility¶
Suppose nondurable consumption had been constant at $\bar{c}$ and then jumps so that $c_t = (1 + \epsilon)\,\bar{c}$. Because the durable stock must track nondurables in the ratio $\gamma$, expenditure on durables satisfies

$$\frac{x_t}{x_{t-1}} = \frac{\epsilon + \delta}{\delta}. \tag{23}$$

For goods with low depreciation, even small consumption shocks produce large swings in durable spending. A 5 percent permanent income shock with $\delta = 0.05$ doubles durable expenditure. This explains why housing starts and auto sales are among the most cyclically volatile components of GDP.
# Tabulate spending volatility multipliers
rows = []
for eps in [0.01, 0.03, 0.05, 0.10]:
    row = {'Consumption shock': f'{eps:.0%}'}
    for dv in [0.05, 0.10, 0.25]:
        row[f'delta={dv}'] = round((eps + dv) / dv, 2)
    rows.append(row)
df_vol = pd.DataFrame(rows)
df_vol.columns = ['Consumption shock'] + [rf'$\delta = {dv}$' for dv in [0.05, 0.10, 0.25]]
df_vol
The table shows spending multipliers: each entry is $(\epsilon + \delta)/\delta$ for a given shock size and depreciation rate. A 5 percent shock to permanent income multiplies spending on a good with $\delta = 0.05$ by a factor of 2, but only by a factor of 1.2 for a good with $\delta = 0.25$.
Part III: Quasi-Hyperbolic Discounting¶
Standard exponential discounting implies time-consistent preferences: a plan made today remains optimal tomorrow. Experimental evidence suggests otherwise. Subjects routinely prefer $100 today over $110 tomorrow, yet prefer $110 in 31 days over $100 in 30 days. This reversal is inconsistent with any constant discount factor, but is captured naturally by quasi-hyperbolic preferences introduced by Laibson (1997).
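A back-of-the-envelope check of this reversal, using quasi-hyperbolic weights in the document's notation (the values $\delta_h = 0.7$ and a daily exponential factor $\beta = 0.999$ are illustrative, not estimates):

```python
# Quasi-hyperbolic weights: a payoff d days away is discounted by
# delta_h * beta**d, while an immediate payoff is not discounted at all.
# Parameter values are illustrative.
delta_h, beta_day = 0.7, 0.999

def value(x, d):
    return x if d == 0 else delta_h * beta_day**d * x

print(value(100, 0) > value(110, 1))    # True: $100 today beats $110 tomorrow
print(value(110, 31) > value(100, 30))  # True: $110 at day 31 beats $100 at day 30
# No constant daily factor can produce both choices: the first requires a
# factor below 100/110, the second a factor above 100/110.
```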
Two Value Functions¶
Suppose a value function $v_{t+1}(m_{t+1})$ exists for period $t+1$. For any consumption rule $c_t(m_t)$, define two value functions:

$$v_t(m_t) = u\big(c_t(m_t)\big) + \beta\, v_{t+1}(m_{t+1}) \tag{24}$$

$$\mathfrak{v}_t(m_t) = u\big(c_t(m_t)\big) + \delta_h\,\beta\, v_{t+1}(m_{t+1}) \tag{25}$$

where $m_{t+1} = R\big(m_t - c_t(m_t)\big)$.

The first function discounts next period by $\beta$ alone; the second function applies an additional present-bias factor $\delta_h < 1$. We write $\delta_h$ for the hyperbolic discount factor to distinguish it from the depreciation rate $\delta$ in Part II. McClure et al. (2004) argue that $\delta_h$ is well below one at an annual frequency, reflecting the fact that brain regions associated with emotional rewards respond to immediate gratification but not to future rewards.

These functions are well-defined for any feasible consumption rule $c_t(m_t)$; they are not Bellman equations because they do not assume optimality.
Two Consumption Rules¶
Two consumption rules arise naturally:

$$c_t(m) = \arg\max_{c}\; \big\{\, u(c) + \beta\, v_{t+1}\big(R(m - c)\big) \,\big\}$$

$$\mathfrak{c}_t(m) = \arg\max_{c}\; \big\{\, u(c) + \delta_h\,\beta\, v_{t+1}\big(R(m - c)\big) \,\big\}$$

Solving recursively with $c_t$ in every period yields the standard time-consistent solution. The Laibson consumer uses $\mathfrak{c}_t$, which weights the future less and therefore consumes more today.
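A minimal two-period sketch confirms the comparison (assuming CRRA utility and a terminal value $v_{t+1}(m) = m^{1-\rho}/(1-\rho)$; all parameter values are illustrative):

```python
from scipy.optimize import brentq

# Two-period problem: choose c to maximize u(c) + delta_h*beta*v(R(m - c)),
# with u(c) = c^(1-rho)/(1-rho) and terminal value v(m) = m^(1-rho)/(1-rho).
R, beta, rho, m = 1.04, 0.96, 2.0, 10.0

def consume(delta_h):
    # FOC: c^(-rho) = R*beta*delta_h * (R*(m - c))^(-rho)
    foc = lambda c: c**(-rho) - R * beta * delta_h * (R * (m - c))**(-rho)
    return brentq(foc, 1e-9, m - 1e-9)

c_exponential = consume(1.0)   # time-consistent rule (delta_h = 1)
c_hyperbolic = consume(0.7)    # present-biased rule
print(c_hyperbolic > c_exponential)  # True: the present-biased rule spends more
```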
The Modified Euler Equation¶
For $\mathfrak{v}_t$, the envelope theorem gives $\mathfrak{v}^m_t = u'(\mathfrak{c}_t)$, and the FOC gives $u'(\mathfrak{c}_t) = R\beta\delta_h\, v^m_{t+1}$. A useful identity links the two value functions:

$$\delta_h\, v_t = \mathfrak{v}_t - (1 - \delta_h)\, u(\mathfrak{c}_t). \tag{26}$$

Differentiating (26) with respect to $m$ and using the envelope and FOC results yields the modified Euler equation

$$u'(\mathfrak{c}_t) = R\beta\,\big[\mathfrak{v}^m_{t+1} - (1 - \delta_h)\, u'(\mathfrak{c}_{t+1})\, \mathfrak{c}^m_{t+1}\big]. \tag{27}$$

When $\delta_h = 1$, this reduces to the standard Euler equation. When $\delta_h < 1$, the second term on the right is positive (since $u'(\mathfrak{c}_{t+1}) > 0$ and $\mathfrak{c}^m_{t+1} > 0$), which reduces the effective right-hand side. A lower right-hand side requires lower marginal utility on the left, meaning higher consumption. The Laibson consumer spends more.

The magnitude of the present-bias effect depends on the MPC $\mathfrak{c}^m_{t+1}$. When the MPC is small (as for a wealthy consumer with many periods remaining), the bias is small. When the MPC is large (as for a liquidity-constrained consumer), the bias is large.
# Symbolic derivation of the identity linking v and frak{v}
dh, b = sp.symbols('delta_h beta', positive=True)
u_val, v_next = sp.symbols('u v_{t+1}')
# v_t = u + beta * v_{t+1}, fv_t = u + delta_h * beta * v_{t+1}
v_t_expr = u_val + b * v_next
fv_t_expr = u_val + dh * b * v_next
# Verify: delta_h * v_t = fv_t - (1 - delta_h) * u
lhs_identity = dh * v_t_expr
rhs_identity = fv_t_expr - (1 - dh) * u_val
identity_check = sp.simplify(lhs_identity - rhs_identity)
print("**Verifying the identity** $\\delta_h v_t = \\mathfrak{v}_t - (1 - \\delta_h)u$:")
print()
print(f"$\\delta_h \\cdot v_t =$")
show(sp.expand(lhs_identity))
print()
print(f"$\\mathfrak{{v}}_t - (1 - \\delta_h)u =$")
show(sp.expand(rhs_identity))
print()
print(f"Difference (should be zero): {identity_check}")
# Visualize the present-bias effect across MPC values
params_l = LaibsonModel(R=1.04, beta=0.96, delta_hyp=0.7, rho=2.0, T=60)
delta_vals = [0.5, 0.7, 0.9, 1.0]
mpc_grid = np.linspace(0.01, 0.50, 100)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for dh in delta_vals:
    bias = (1 - dh) * mpc_grid
    axes[0].plot(mpc_grid, bias, lw=2, label=rf'$\delta_h = {dh}$')
axes[0].set_xlabel(r'MPC $\mathfrak{c}^m_t$')
axes[0].set_ylabel(r'Present-bias term $(1-\delta_h)\,\mathfrak{c}^m_t$')
axes[0].set_title('Present bias is largest when MPC is high')
axes[0].legend(frameon=False, fontsize=8)
axes[0].grid(True, alpha=0.3)
# Modified Euler equation rearranged (approximately):
# u'(c_t) ~= R*beta*u'(c_{t+1}) * [1 - (1 - delta_h) * c^m]
# For a small bias term, 1 - x ~= 1/(1 + x), so the Laibson consumer acts as
# if future marginal utility is scaled down by 1/(1 + (1 - delta_h)*mpc).
# Ratio of Laibson to standard marginal utility:
for dh in delta_vals:
    scaling = 1 / (1 + (1 - dh) * mpc_grid)
    axes[1].plot(mpc_grid, scaling, lw=2, label=rf'$\delta_h = {dh}$')
axes[1].set_xlabel(r'MPC $\mathfrak{c}^m_t$')
axes[1].set_ylabel(r"$u'(c_{\mathrm{Laibson}}) / \mathfrak{v}^m_t$")
axes[1].set_title('How much present bias discounts the future')
axes[1].legend(frameon=False, fontsize=8)
axes[1].grid(True, alpha=0.3)
axes[1].set_ylim(0.5, 1.05)
plt.tight_layout()
plt.show()
Exercises¶
Solution to Exercise 1
np.random.seed(123)
n_paths, T = 1000, 200
R, beta, rho = 1.04, 0.96, 2.0
log_Rb = np.log(R * beta)
for alpha in [0.0, 0.5]:
    drift = (1 - alpha) / rho * log_Rb
    corrs = []
    for _ in range(n_paths):
        eps = np.random.normal(0, 0.02, T)
        dlogc = np.zeros(T)
        for t in range(1, T):
            dlogc[t] = drift + alpha * dlogc[t - 1] + eps[t]
        corrs.append(np.corrcoef(dlogc[1:-1], dlogc[2:])[0, 1])
    mean_corr = np.mean(corrs)
    print(f"alpha = {alpha}: mean autocorrelation = {mean_corr:.4f} (theory: {alpha})")
With $\alpha = 0$ the autocorrelation is near zero, confirming that consumption growth is unpredictable. With $\alpha = 0.5$ the autocorrelation is approximately 0.5, matching the theoretical prediction from (12).
Solution to Exercise 2
R, alpha_d, delta = 1.04, 0.3, 0.1
r = R - 1
# Analytical solution
gamma_analytical = (alpha_d / (1 - alpha_d)) * R / (r + delta)
print(f"Analytical gamma = {gamma_analytical:.6f}")
# Numerical verification: find d/c such that u^d/u^c = (r + delta)/R
# For Cobb-Douglas u(c,d) = (c^(1-a) d^a)^(1-rho) / (1-rho)
# u^d / u^c = (alpha / (1-alpha)) * (c/d)
# Setting this equal to (r + delta)/R:
target = (r + delta) / R
def residual(dc_ratio):
    return (alpha_d / (1 - alpha_d)) / dc_ratio - target
gamma_numerical = brentq(residual, 0.1, 100.0)
print(f"Numerical gamma = {gamma_numerical:.6f}")
print(f"Difference = {abs(gamma_analytical - gamma_numerical):.2e}")
The analytical and numerical solutions agree to machine precision, confirming the derivation of the optimal ratio $\gamma$.
Solution to Exercise 3
rho = 2.0
delta_h = 0.7
m_grid = np.linspace(1, 20, 200)
fig, ax = plt.subplots(figsize=(8, 4))
for kappa in [0.05, 0.10]:
    c = kappa * m_grid
    u_prime = c ** (-rho)
    bias = (1 - delta_h) * u_prime * kappa
    ax.plot(m_grid, bias, lw=2, label=rf'$\kappa = {kappa}$')
ax.set_xlabel(r'Market resources $m$')
ax.set_ylabel(r'Present-bias term')
ax.set_title('Doubling the MPC roughly doubles the present-bias distortion')
ax.legend(frameon=False, fontsize=9)
ax.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()
Doubling the MPC from 0.05 to 0.10 approximately doubles the present-bias term at each wealth level. This confirms that the Laibson distortion scales with the MPC: consumers who are more responsive to current resources (because they are liquidity-constrained or near the end of life) suffer more from present bias.
Solution to Exercise 4
epsilon = 0.02
deltas = [0.03, 0.05, 0.10, 0.25]
rows = []
for d in deltas:
    ratio = (epsilon + d) / d
    rows.append({
        'Depreciation rate': d,
        'Spending ratio x_t/x_{t-1}': round(ratio, 2),
        'Spending change': f'{(ratio - 1)*100:.0f}%',
    })
df_ex = pd.DataFrame(rows)
df_ex
With $\delta = 0.03$, a 2 percent consumption shock produces a 67 percent jump in durable spending. With $\delta = 0.25$, the same shock produces only an 8 percent increase. This confirms the prediction from (23): more durable goods exhibit far more volatile expenditure patterns.
References¶
- Laibson, D. (1997). Golden Eggs and Hyperbolic Discounting. Quarterly Journal of Economics, 112(2), 443–478.
- Carroll, C. D. (2000). Solving Consumption Models with Multiplicative Habits [Working Paper]. Johns Hopkins University.
- McClure, S. M., Laibson, D. I., Loewenstein, G., & Cohen, J. D. (2004). Separate Neural Systems Value Immediate and Delayed Monetary Rewards. Science, 306(5695), 503–507.