Commit f2f7e72 ("updates")
1 parent eaa5456

8 files changed
Lines changed: 88 additions & 85 deletions

lectures/cagan_rational_expectations.md

Lines changed: 25 additions & 25 deletions
@@ -35,8 +35,8 @@ in his famous study of hyperinflation.
 
 {cite:t}`sargent1973rational` pointed out that under assumptions making Cagan's
 adaptive expectations equivalent to rational expectations, Cagan's
-estimator of $\alpha$ the slope of log real balances with respect to expected
-inflation is not statistically consistent.
+estimator of $\alpha$ -- the slope of log real balances with respect to expected
+inflation -- is not statistically consistent.
 
 This inconsistency matters because of a paradox that emerged when Cagan used
 his estimates of $\alpha$ to calculate the sustained rates of inflation that would
@@ -47,11 +47,11 @@ That "optimal" rate is $-1/\alpha$.
 
 For each of the seven hyperinflations
 in his sample, the reciprocal of Cagan's estimate of $-\alpha$ turned out to be
-less and often very much less than the actual average rate of inflation,
+less -- and often very much less -- than the actual average rate of inflation,
 suggesting that the creators of money expanded the money supply at rates far
 exceeding the revenue-maximizing rate.
 
-A natural explanation is that this paradox is a statistical artifact a
+A natural explanation is that this paradox is a statistical artifact -- a
 consequence of biased estimates of $\alpha$.
 
 Table 1 reproduces the relevant data from Cagan.
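
The revenue-maximizing rate $-1/\alpha$ referenced in this hunk follows from the steady-state Cagan demand schedule; a sketch, assuming real balances $m = e^{\alpha \mu}$ in a steady state where inflation equals money growth $\mu$ (with $\alpha < 0$):

```latex
R(\mu) \;=\; \mu\, e^{\alpha \mu},
\qquad
R'(\mu) \;=\; e^{\alpha \mu}\,(1 + \alpha \mu) \;=\; 0
\quad\Longrightarrow\quad
\mu^{*} \;=\; -\frac{1}{\alpha}.
```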
@@ -465,7 +465,7 @@ plims = [plim_alpha_cagan(a, λ, σ_ε2, σ_η2, σ_εη) for a in α_plot]
 ws_limit = -λ / (1.0 - λ)
 
 fig, ax = plt.subplots()
-ax.plot(α_plot, α_plot, 'k--', lw=1.5, label='No bias (45° line)')
+ax.plot(α_plot, α_plot, 'k--', lw=1.5, label=r'No bias (45$\degree$ line)')
 label = rf'$\operatorname{{plim}}\hat\alpha$, $\lambda={λ}$'
 ax.plot(α_plot, plims, lw=2, label=label)
 ax.axhline(ws_limit, color='r', ls=':', lw=1.5,
@@ -641,7 +641,7 @@ Equation {eq}`eq27` is a vector first-order autoregression, first-order moving
 average process.
 
 The random variables $a_{1t}$, $a_{2t}$ are the innovations in
-the $x$ and $\mu$ processes, respectively the one-period-ahead forecasting errors
+the $x$ and $\mu$ processes, respectively -- the one-period-ahead forecasting errors
 for $x_t$ and $\mu_t$.
 
 The $a$'s are related to the $\varepsilon$'s and $\eta$'s
@@ -770,8 +770,8 @@ L(\lambda,\,\sigma_{11},\,\sigma_{12},\,\sigma_{22}\mid\mu_t,\,x_t)
 \exp\!\left(-\tfrac{1}{2}\sum_{t=1}^{T} a_t' D_a^{-1} a_t\right).
 ```
 
-Given initial values for $(a_{10}, a_{20})$ equivalently for $(\varepsilon_0,
-\eta_0)$ and given a value of $\lambda$, equation {eq}`eq26` or {eq}`eq27` can be
+Given initial values for $(a_{10}, a_{20})$ -- equivalently for $(\varepsilon_0,
+\eta_0)$ -- and given a value of $\lambda$, equation {eq}`eq26` or {eq}`eq27` can be
 used to solve for $a_t$, $t = 1, \ldots, T$.
 
 (We take $a_{10} = a_{20} = 0$.)
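
The Gaussian kernel shown in this hunk is straightforward to evaluate numerically; a minimal sketch (the function name and the `(T, 2)` array layout are assumptions, not the lecture's code):

```python
import numpy as np

def loglik_sketch(a, D):
    # Gaussian log-likelihood matching the kernel above, up to an additive
    # constant:  -T/2 * log|D| - 1/2 * sum_t a_t' D^{-1} a_t,
    # where `a` is a (T, 2) array of innovations and `D` their 2x2 covariance.
    T = a.shape[0]
    quad = np.einsum('ti,ij,tj->', a, np.linalg.inv(D), a)  # sum of quadratic forms
    _, logdet = np.linalg.slogdet(D)                        # stable log-determinant
    return -0.5 * T * logdet - 0.5 * quad
```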
@@ -807,8 +807,8 @@ That this must be so can be seen by inspecting representation
 
 On the
 basis of the *four* parameters $\lambda$, $\sigma_{11}$, $\sigma_{12}$, and
-$\sigma_{22}$ that are identified by {eq}`eq27` i.e., that characterize the
-likelihood function {eq}`eq32` we can think of attempting to estimate the *five*
+$\sigma_{22}$ that are identified by {eq}`eq27` -- i.e., that characterize the
+likelihood function {eq}`eq32` -- we can think of attempting to estimate the *five*
 parameters of the model: $\alpha$, $\lambda$, $\sigma_\varepsilon^2$,
 $\sigma_\eta^2$, and $\sigma_{\varepsilon\eta}$.
 
@@ -1124,7 +1124,7 @@ def compute_innovations(x, μ, λ):
     a_{1t} = Δx_t + λ a_{1,t-1}
     a_{2t} = μ_t - x_t + a_{1t}
 
-    Only λ is required α does not enter the innovation extraction.
+    Only λ is required -- α does not enter the innovation extraction.
 
     Returns arrays a1 and a2 of length T.
     """
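
The recursion in that docstring is simple to state as stand-alone code; a minimal sketch (the name, the indexing convention $\Delta x_t = x_t - x_{t-1}$, and starting from $a_{1,0} = 0$ are assumptions, the last consistent with the lecture's choice $a_{10} = a_{20} = 0$):

```python
import numpy as np

def innovations_sketch(x, mu, lam):
    # a_{1t} = Δx_t + λ a_{1,t-1},  a_{2t} = μ_t - x_t + a_{1t},
    # started from a_{1,0} = 0.  Note only λ enters, as the docstring says.
    x = np.asarray(x, dtype=float)
    mu = np.asarray(mu, dtype=float)
    dx = np.diff(x)
    T = len(dx)
    a1 = np.zeros(T)
    a2 = np.zeros(T)
    prev = 0.0
    for t in range(T):
        a1[t] = dx[t] + lam * prev
        prev = a1[t]
        a2[t] = mu[t + 1] - x[t + 1] + a1[t]
    return a1, a2
```

With `lam = 0` the first innovation reduces to `np.diff(x)`, which is an easy sanity check on the recursion.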
@@ -1378,12 +1378,12 @@ $\sigma_{\varepsilon\eta} = 0$:
 
 | Country | $\hat\lambda$ | $\hat\alpha$ | $\hat\sigma_{11}$ | $\hat\sigma_{12}$ | $\hat\sigma_{22}$ |
 |---------|:---:|:---:|:---:|:---:|:---:|
-| Germany (Oct '20Jul '23) | .677 (.053) | 5.97 (4.62) | .0625 | .0158 | .0091 |
-| Austria (Feb '21Aug '22) | .754 (.059) | 0.31 (1.57) | .0385 | .0148 | .0085 |
-| Greece (Feb '43Aug '44) | .459 (.088) | 4.09 (2.97) | .0675 | .0245 | .0279 |
-| Hungary I (Aug '22Feb '24) | .418 (.067) | 1.84 (0.40) | .0362 | .0089 | .0060 |
-| Russia (Feb '22Jan '24) | .626 (.073) | 9.75 (10.74)| .0524 | .0138 | .0205 |
-| Poland (May '22Nov '23) | .536 (.072) | 2.53 (0.86) | .0566 | .0149 | .0089 |
+| Germany (Oct '20-Jul '23) | .677 (.053) | -5.97 (4.62) | .0625 | .0158 | .0091 |
+| Austria (Feb '21-Aug '22) | .754 (.059) | -0.31 (1.57) | .0385 | .0148 | .0085 |
+| Greece (Feb '43-Aug '44) | .459 (.088) | -4.09 (2.97) | .0675 | .0245 | .0279 |
+| Hungary I (Aug '22-Feb '24) | .418 (.067) | -1.84 (0.40) | .0362 | .0089 | .0060 |
+| Russia (Feb '22-Jan '24) | .626 (.073) | -9.75 (10.74)| .0524 | .0138 | .0205 |
+| Poland (May '22-Nov '23) | .536 (.072) | -2.53 (0.86) | .0566 | .0149 | .0089 |
 
 Standard errors in parentheses.
 
@@ -1448,7 +1448,7 @@ axes[1].errorbar(range(len(countries)), α_ml, yerr=[2*s for s in α_se],
 axes[1].axhline(0, color='k', lw=0.7, ls='--')
 axes[1].set_xticks(range(len(countries)))
 axes[1].set_xticklabels(countries, rotation=30)
-axes[1].set_ylabel(r'$\hat\alpha$ (±2 s.e.)')
+axes[1].set_ylabel(r'$\hat\alpha$ ($\pm$2 s.e.)')
 
 plt.tight_layout()
 plt.show()
@@ -1509,7 +1509,7 @@ The main results of this paper are:
    simultaneously.
 
 2. A bivariate Wold representation with a triangular structure shows that
-   inflation Granger-causes money creation, but not vice versa consistent with
+   inflation Granger-causes money creation, but not vice versa -- consistent with
    empirical findings that feedback runs from inflation to money creation.
 
 3. The structural parameter $\alpha$ is *not identifiable* from the likelihood
@@ -1523,7 +1523,7 @@ The main results of this paper are:
 
 4. The large standard errors mean that confidence intervals of two standard errors
    on each side of the point estimates include values of $\alpha$ that would imply
-   money creators were maximizing seignorage revenue potentially explaining the
+   money creators were maximizing seignorage revenue -- potentially explaining the
    paradox noted by Cagan.
 
 5. Likelihood-ratio overfitting tests do not decisively reject the one-parameter
@@ -1556,9 +1556,9 @@ def bivariate_ma1_moments(α, λ, σ_ε2=1.0, σ_η2=0.5, σ_εη=0.0):
 
     Returns:
 
-    cxx : dict with keys 0, 1 autocovariances of Δx
-    cμμ : dict with keys 0, 1 autocovariances of Δμ
-    cxμ : dict with keys -1, 0, 1 cross-covariances E[Δx_t Δμ_{t-τ}]
+    cxx : dict with keys 0, 1 -- autocovariances of Δx
+    cμμ : dict with keys 0, 1 -- autocovariances of Δμ
+    cxμ : dict with keys -1, 0, 1 -- cross-covariances E[Δx_t Δμ_{t-τ}]
     """
     denom = λ + α * (1.0 - λ)
     if np.isclose(denom, 0.0):
@@ -1728,8 +1728,8 @@ for T in [100, 500]:
         Δx_s = np.diff(x_s)
         λ_h, _ = univariate_ma1_mle(Δx_s)
         λ_hats.append(λ_h)
-    print(f"T={T:4d}: mean λ̂ = {np.mean(λ_hats):.4f}, "
-          f"std(λ̂) = {np.std(λ_hats):.4f}")
+    print(f"T={T:4d}: mean λ_hat = {np.mean(λ_hats):.4f}, "
+          f"std(λ_hat) = {np.std(λ_hats):.4f}")
 ```
 
 The standard deviation shrinks roughly as $1/\sqrt{T}$, consistent with
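
That $1/\sqrt{T}$ behavior can be checked independently of the lecture's `univariate_ma1_mle`; a minimal sketch that swaps in a simple method-of-moments estimator (an assumption, not the lecture's estimator), using the fact that for the MA(1) $\Delta x_t = a_t - \lambda a_{t-1}$ the lag-1 autocorrelation is $\rho_1 = -\lambda/(1+\lambda^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
λ_true = 0.6

def mm_lambda(dx):
    # Invert ρ1 = -λ / (1 + λ²) for the invertible root |λ| < 1.
    ρ1 = np.corrcoef(dx[:-1], dx[1:])[0, 1]
    ρ1 = np.clip(ρ1, -0.499, -1e-6)   # ρ1 < 0 when λ > 0; guard the inversion
    return (-1.0 + np.sqrt(1.0 - 4.0 * ρ1**2)) / (2.0 * ρ1)

stds = []
for T in [100, 1600]:
    ests = []
    for _ in range(200):
        a = rng.standard_normal(T + 1)
        dx = a[1:] - λ_true * a[:-1]      # simulate the MA(1) directly
        ests.append(mm_lambda(dx))
    stds.append(np.std(ests))
    print(f"T={T:5d}: std(λ_hat) = {stds[-1]:.4f}")
```

Quadrupling $T$ sixteen-fold from 100 to 1600 should shrink the dispersion by roughly a factor of four, matching the $1/\sqrt{T}$ rate claimed above.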

lectures/doubts_or_variability.md

Lines changed: 8 additions & 8 deletions
@@ -1249,7 +1249,7 @@ mystnb:
   name: fig-bhs-3
 ---
 
-# Empirical Sharpe ratio the minimum of the HJ bound curve
+# Empirical Sharpe ratio -- the minimum of the HJ bound curve
 sharpe = (r_e_mean - r_f_mean) / r_excess_std
 
 def sharpe_gap(p, model):
@@ -1927,7 +1927,7 @@ P, Σ = np.meshgrid(p_grid, σ_grid)
 
 W_abs = -2 * norm.ppf(P) / np.sqrt(T)
 
-# RW: total type II = βσ²γ / [2(1-β)]
+# RW: total type II = β*σ^2*γ / [2(1-β)]
 Γ_rw = 1 + W_abs / Σ
 comp_rw = 100 * (np.exp(β * Σ**2 * Γ_rw / (2 * (1 - β))) - 1)
 
@@ -2009,10 +2009,10 @@ def _read_fred_series(series_id, start_date, end_date):
 
 
 # Fetch nominal PCE components, deflator, and population from FRED
-nom_nd = _read_fred_series("PCND", start_date, end_date)  # quarterly, 1947
-nom_sv = _read_fred_series("PCESV", start_date, end_date)  # quarterly, 1947
-defl = _read_fred_series("DPCERD3Q086SBEA", start_date, end_date)  # quarterly, 1947
-pop_m = _read_fred_series("CNP16OV", start_date, end_date)  # monthly, 1948
+nom_nd = _read_fred_series("PCND", start_date, end_date)  # quarterly, 1947-
+nom_sv = _read_fred_series("PCESV", start_date, end_date)  # quarterly, 1947-
+defl = _read_fred_series("DPCERD3Q086SBEA", start_date, end_date)  # quarterly, 1947-
+pop_m = _read_fred_series("CNP16OV", start_date, end_date)  # monthly, 1948-
 
 # Step 1: add nominal nondurables + services
 nom_total = nom_nd + nom_sv
@@ -2024,7 +2024,7 @@ real_total = nom_total / (defl / 100.0)
 pop_q = pop_m.resample("QS").mean()
 real_pc = (real_total / pop_q).dropna()
 
-# Restrict to sample period 1948Q12006Q4
+# Restrict to sample period 1948Q1-2006Q4
 real_pc = real_pc.loc["1948-01-01":"2006-12-31"].dropna()
 
 if real_pc.empty:
@@ -2038,7 +2038,7 @@ years_data = (
     + (real_pc.index.month - 1) / 12.0).to_numpy(dtype=float)
 
 print(f"Fetched {len(log_c_data)} quarterly observations from FRED")
-print(f"Sample: {years_data[0]:.1f} {years_data[-1] + 0.25:.1f}")
+print(f"Sample: {years_data[0]:.1f} - {years_data[-1] + 0.25:.1f}")
 print(f"Observations: {len(log_c_data)}")
 ```
