Commit 7a1c712

update to rng keys

Parent: 19c6f00

5 files changed: 23 additions, 13 deletions

`lectures/doubts_or_variability.md` (10 additions, 0 deletions)

````diff
@@ -194,6 +194,16 @@ Setting $y_{t+1} = 1$ (a risk-free bond) in {eq}`bhs_pricing_eq` yields the reci
 
 ### Hansen--Jagannathan bounds
 
+```{note}
+The derivation here uses the Cauchy-Schwarz inequality, which yields the bound
+directly from the pricing equation for excess returns.
+
+{doc}`hansen_jagannathan_1991` derives the same
+bound by projecting $m$ onto the space of traded payoffs, which additionally
+yields the duality with the mean-variance frontier and the tighter
+positivity-restricted bound.
+```
+
 Let $R_{t+1}^e$ denote the gross return on a risky asset (e.g., the market portfolio) and $R_{t+1}^f$ the gross return on a one-period risk-free bond.
 
 The **excess return** is
````
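For reference, the Cauchy-Schwarz step the added note refers to can be sketched as follows (an illustrative derivation in standard SDF notation, not the lecture's own text). Since the excess return $R^e$ costs nothing, the pricing equation gives $E[m R^e] = 0$, hence

$$
0 = E[m R^e] = E[m]\,E[R^e] + \operatorname{cov}(m, R^e)
\quad\Longrightarrow\quad
|E[R^e]| = \frac{|\operatorname{cov}(m, R^e)|}{E[m]}
\le \frac{\sigma(m)\,\sigma(R^e)}{E[m]},
$$

so that $\sigma(m)/E[m] \ge |E[R^e]|/\sigma(R^e)$: the volatility of the stochastic discount factor, scaled by its mean, must weakly exceed the Sharpe ratio of every excess return.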

`lectures/dovis_accounting_mf.md` (7 additions, 7 deletions)

````diff
@@ -1323,19 +1323,19 @@ def particle_filter(y_data, key, N_particles,
 We demonstrate the particle filter on synthetic data that mimics an institutional disinflation: inflation declines from roughly 30% to 5% while debt rises from 20% to 45% of GDP.
 
 ```{code-cell} ipython3
-np.random.seed(0)
+rng = np.random.default_rng(0)
 T_sim = 60
 t_reform = 25
 
 inflation_data = np.concatenate([
-    25 + 5 * np.random.randn(t_reform),
-    np.linspace(25, 5, 10) + 2 * np.random.randn(10),
-    5 + 2 * np.random.randn(T_sim - t_reform - 10)
+    25 + 5 * rng.standard_normal(t_reform),
+    np.linspace(25, 5, 10) + 2 * rng.standard_normal(10),
+    5 + 2 * rng.standard_normal(T_sim - t_reform - 10)
 ])
 debt_data = np.concatenate([
-    20 + 2 * np.random.randn(t_reform),
-    np.linspace(20, 40, 10) + 3 * np.random.randn(10),
-    40 + 3 * np.random.randn(T_sim - t_reform - 10)
+    20 + 2 * rng.standard_normal(t_reform),
+    np.linspace(20, 40, 10) + 3 * rng.standard_normal(10),
+    40 + 3 * rng.standard_normal(T_sim - t_reform - 10)
 ])
 
 y_data = jnp.column_stack([inflation_data, debt_data])
````
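The point of switching to `np.random.default_rng(0)` is that two generators seeded identically produce identical streams, independent of any global NumPy state, so the synthetic series above are reproducible across runs. A minimal standalone sketch (not part of the lecture) illustrating this with the same `standard_normal` call:

```python
import numpy as np

# Two Generators with the same seed yield bit-identical draws,
# regardless of what other code does to np.random's global state.
rng_a = np.random.default_rng(0)
rng_b = np.random.default_rng(0)

draws_a = 25 + 5 * rng_a.standard_normal(5)
draws_b = 25 + 5 * rng_b.standard_normal(5)

assert np.allclose(draws_a, draws_b)
```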

`lectures/gorman_heterogeneous_households.md` (3 additions, 3 deletions)

````diff
@@ -1685,7 +1685,7 @@ We use the same technology and preference parameters.
 We set household-specific parameters below and impose $\sum_j \phi_j = 1$.
 
 ```{code-cell} ipython3
-np.random.seed(42)
+rng = np.random.default_rng(42)
 N = 100
 
 # Aggregate endowment process parameters
@@ -1695,8 +1695,8 @@ N = 100
 
 
 # Mean endowments α_j and aggregate exposure φ_j
-αs = np.random.uniform(3.0, 5.0, N)
-φs_raw = np.random.uniform(0.5, 1.5, N)
+αs = rng.uniform(3.0, 5.0, N)
+φs_raw = rng.uniform(0.5, 1.5, N)
 φs = φs_raw / np.sum(φs_raw) # normalize so Σ φ_j = 1
 
 # Rank households by mean endowment to assign idiosyncratic risk
````
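A self-contained sketch of this hunk's draws (same seed and ranges as the diff), confirming that `Generator.uniform` samples within the stated bounds and that the normalization delivers $\sum_j \phi_j = 1$ exactly:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100

αs = rng.uniform(3.0, 5.0, N)      # mean endowments in [3, 5)
φs_raw = rng.uniform(0.5, 1.5, N)  # raw aggregate exposures
φs = φs_raw / np.sum(φs_raw)       # normalize so Σ φ_j = 1

assert αs.min() >= 3.0 and αs.max() < 5.0
assert np.isclose(φs.sum(), 1.0)
```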

`lectures/hansen_richard_1987.md` (0 additions, 1 deletion)

````diff
@@ -97,7 +97,6 @@ from scipy.optimize import minimize
 from scipy import stats
 import pandas as pd
 
-np.random.seed(42)
 ```
 
 ## Data generation
````
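The deleted line seeded NumPy's *legacy* global state, which every `np.random.*` call in the process shares. A short sketch (assumptions: the lecture now seeds a local `Generator` where draws are actually made) contrasting the two APIs:

```python
import numpy as np

# Legacy API: np.random.seed mutates hidden module-level state,
# so any intervening np.random.* call perturbs later draws.
np.random.seed(42)
legacy = np.random.rand(3)

# Generator API: each Generator carries its own isolated state;
# seeding one has no effect on others or on the legacy stream.
rng1 = np.random.default_rng(42)
rng2 = np.random.default_rng(42)
assert np.allclose(rng1.random(3), rng2.random(3))
```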

`lectures/rob_markov_perf.md` (3 additions, 2 deletions)

````diff
@@ -568,12 +568,13 @@ def nnash_robust(A, C, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2,
     k_1 = B1.shape[1]
     k_2 = B2.shape[1]
 
+    rng = np.random.default_rng(0)
     v1 = np.eye(k_1)
     v2 = np.eye(k_2)
     P1 = np.eye(n) * 1e-5
     P2 = np.eye(n) * 1e-5
-    F1 = np.random.randn(k_1, n)
-    F2 = np.random.randn(k_2, n)
+    F1 = rng.standard_normal((k_1, n))
+    F2 = rng.standard_normal((k_2, n))
 
 
     for it in range(max_iter):
````
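Seeding `default_rng(0)` inside `nnash_robust` makes the initial `F1`, `F2` guesses identical on every call. A hypothetical alternative (the helper name and signature are illustrative, not from the lecture) is to accept an optional `Generator`, so callers can vary the initialization while keeping the deterministic default:

```python
import numpy as np

def init_policy_guess(k, n, rng=None):
    # Hypothetical helper: accept an optional Generator so callers
    # can supply their own stream; fall back to a fixed seed.
    rng = rng if rng is not None else np.random.default_rng(0)
    return rng.standard_normal((k, n))

F1 = init_policy_guess(2, 4)
F1_again = init_policy_guess(2, 4)
assert np.allclose(F1, F1_again)  # deterministic default
```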
