<li><strong>Bottlenecks.</strong> Some arguments that AI R&D is bottlenecked by compute, e.g. see <span class="citation" data-cites="whitfill2025bottlenecks">Whitfill and Wu (<a href="#ref-whitfill2025bottlenecks" role="doc-biblioref">2025</a>)</span>.</li>
<li><strong>Scale-dependent algorithmic progress.</strong> <span class="citation" data-cites="gundlach2025algorithmicprogressai">Gundlach et al. (<a href="#ref-gundlach2025algorithmicprogressai" role="doc-biblioref">2025</a>)</span> argue that algorithmic progress has contributed much less than is commonly estimated.</li>
<li><strong>Data contribution.</strong> Berren Millidge argues <a href="https://www.beren.io/2025-08-02-Most-Algorithmic-Progress-is-Data-Progress/">“Most Algorithmic Progress is Data Progress”</a>. Data has been growing more slowly than compute.</li>
<td>how fast proportional improvements are getting harder to find. Jones writes that aggregate data are roughly consistent with <span class="math inline">\(\beta \approx 3\)</span> if <span class="math inline">\(\lambda = 1\)</span>.</td>
<td>elasticity of idea production with respect to research effort. <span class="math inline">\(\lambda = 1\)</span> means no duplication effect; <span class="math inline">\(\lambda < 1\)</span> allows duplication/congestion.</td>
<td>how strongly nonrival ideas raise final output; this is the degree of increasing returns in goods production. He doesn’t calibrate it separately.</td>
<td>the overall degree of increasing returns in this simple semi-endogenous setup. Jones says <span class="math inline">\(\gamma = 1/3\)</span> is consistent with the data when research intensity has been rising.</td>
</tr>
</tbody>
</table>
<p><strong>Visually:</strong></p>
<ol type="1">
<li><p>Basic model with research effort but no knowledge term. If <span class="math inline">\(R\)</span> is constant, then <span class="math inline">\(\dot{A}\)</span> is constant, so <span class="math inline">\(A\)</span> rises linearly and the growth rate <span class="math inline">\(g_A = \dot{A}/A\)</span> declines toward zero: <span class="math display">\[\begin{gathered}
\dot{A}=R^\lambda\\
\xymatrix{*++[F]{R\&D} \ar[r]|(0.4)\lambda & *++[F]{\Delta knowledge}\ar[r] & *++[F]{knowledge}}
\end{gathered}
\]</span></p></li>
<li><p>Add the knowledge term: <span class="math display">\[\begin{gathered}
\dot{A}=R^\lambda A^{1-\beta}\\
\xymatrix{*++[F]{R\&D} \ar[r]|(0.4)\lambda & *++[F]{\Delta knowledge}\ar[r] & *++[F]{knowledge}\ar@/^2em/[l]|{1-\beta}}
\end{gathered}
\]</span></p>
<p>So if <span class="math inline">\(R\)</span> is constant, <span class="math inline">\(g_A\)</span> declines as <span class="math inline">\(A\)</span> rises.<a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a> If <span class="math inline">\(R\)</span> grows at rate <span class="math inline">\(g_R\)</span> along a balanced growth path, then <span class="math inline">\(g_A = \frac{\lambda}{\beta}g_R.\)</span></p></li>
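<p>A quick check of the balanced-growth algebra (my gloss, not a quote from Jones): take logs of <span class="math inline">\(\dot{A}=R^\lambda A^{1-\beta}\)</span> and differentiate with respect to time. For <span class="math inline">\(g_A=\dot{A}/A\)</span> to be constant, <span class="math inline">\(\dot{A}\)</span> and <span class="math inline">\(A\)</span> must grow at the same rate, so <span class="math inline">\(\lambda g_R + (1-\beta)g_A = g_A\)</span>, which rearranges to <span class="math inline">\(g_A = \frac{\lambda}{\beta}g_R\)</span>.</p>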
<li><p>Recursive self-improvement, where knowledge directly raises research input: <span class="math display">\[\begin{gathered}
\dot{A}=(R A^{\kappa})^\lambda A^{1-\beta}\\
\xymatrix{*++[F]{R\&D} \ar[r]|(0.4)\lambda & *++[F]{\Delta knowledge}\ar[r] & *++[F]{knowledge}\ar@/^2em/[l]|{1-\beta}\ar@/^4em/[ll]|\kappa}
\end{gathered}
\]</span></p>
<p>This yields:</p>
<ul>
<li>if <span class="math inline">\(\lambda\kappa < \beta\)</span>, growth slows over time;</li>
<li>if <span class="math inline">\(\lambda\kappa = \beta\)</span>, you get constant exponential growth;</li>
<li>if <span class="math inline">\(\lambda\kappa > \beta\)</span>, the model implies a finite-time singularity.</li>
</ul></li>
</ol>
<p>He also says he’ll sometimes write <span class="math inline">\(\dot{A}_t=R_t^{\gamma} A_t^{\psi}\)</span>.</p>
<blockquote class="blockquote">
<p>“The parameter <span class="math inline">\(\lambda\)</span> allows for the possibility of duplication effects, so that doubling the number of researchers at a point in time may potentially less than double the innovation rate; however, any λ > 0, including λ = 1, is allowed.”</p>
</blockquote>
<blockquote class="blockquote">
<p>“The parameter β > 0 captures the rate at which ideas—that is, proportional improvements in productivity—are getting harder to find.”</p>
</blockquote>
<p>On knife-edge assumptions: he says the root of all exponential growth is exponential population growth (though this seems a bit fishy).</p>
<p>Estimates of parameters:</p>
<ul>
<li>p136: <span class="math inline">\(\gamma = 1/3\)</span>, so output growth is 1/3 of the growth in ideas. He also predicts that long-run growth is much lower.</li>
</ul>
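<p>The three threshold regimes above are easy to see numerically. A minimal sketch (my own illustration with made-up parameter values, not code from Jones), Euler-integrating the recursive-self-improvement law with constant <code>R</code>:</p>

```python
# Sketch of the three regimes of  Adot = (R * A^kappa)^lam * A^(1 - beta)
# with constant R: the sign of lam*kappa - beta decides whether the
# growth rate g_A = Adot / A falls, stays constant, or accelerates.

def growth_rates(lam, kappa, beta, R=1.0, A0=1.0, dt=1e-3, steps=1500):
    """Euler-integrate A and return (initial, final) proportional growth rate."""
    A = A0
    g_first = g_last = None
    for _ in range(steps):
        Adot = (R * A**kappa) ** lam * A ** (1 - beta)
        g = Adot / A
        if g_first is None:
            g_first = g
        g_last = g
        A += Adot * dt
    return g_first, g_last

g0, g1 = growth_rates(lam=1.0, kappa=0.5, beta=1.0)  # lam*kappa < beta: slowing
c0, c1 = growth_rates(lam=1.0, kappa=1.0, beta=1.0)  # lam*kappa = beta: constant
e0, e1 = growth_rates(lam=1.0, kappa=1.5, beta=1.0)  # lam*kappa > beta: accelerating
```

<p>With these toy numbers the first run’s growth rate falls, the second stays exactly constant (the net exponent on <span class="math inline">\(A\)</span> is one), and the third rises, mirroring the three bullets above.</p>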
<p>Description of <span class="citation" data-cites="aghion2019artificial">Aghion, Jones, and Jones (<a href="#ref-aghion2019artificial" role="doc-biblioref">2019</a>)</span>. Adjust the ideas production function:</p>
<p>with constant capital-output ratio you get: <span class="math display">\[\dot{A}_t=\kappa A_t^{1-(\beta-\alpha)}L_t\]</span></p>
<blockquote class="blockquote">
<p>“if the fraction of tasks that are automated (α) rises to reach the rate at which ideas are getting harder to find (β), we get a singularity! In particular, once α ≥ β, the model exhibits sufficiently strong increasing returns that there is no balanced growth path. Instead, the growth rate rises rapidly over time and the economy reaches infinite knowledge and income in finite time, assuming that is possible.”</p>
</blockquote>
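<p>To see why (my gloss): with <span class="math inline">\(\dot{A}_t=\kappa A_t^{1-(\beta-\alpha)}L_t\)</span> and constant <span class="math inline">\(L\)</span>, the growth rate is <span class="math inline">\(g_A=\kappa A_t^{\alpha-\beta}L\)</span>, which falls as <span class="math inline">\(A\)</span> grows when <span class="math inline">\(\alpha<\beta\)</span> but rises without bound once <span class="math inline">\(\alpha>\beta\)</span>, producing the finite-time singularity.</p>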
</section>
<section id="aghion2019artificial" class="level2">
<h2 class="anchored" data-anchor-id="aghion2019artificial"><span class="citation" data-cites="aghion2019artificial">Aghion, Jones, and Jones (<a href="#ref-aghion2019artificial" role="doc-biblioref">2019</a>)</span></h2>
<li>The paper calibrates primarily in terms of <span class="math inline">\(p\)</span> and <span class="math inline">\(r\)</span>, not by directly estimating <span class="math inline">\(\beta\)</span>.</li>
<li>Their reported medians are <span class="math inline">\(p=0.3\)</span> and <span class="math inline">\(r=1.2\)</span>, and one representative decomposition is <span class="math inline">\(\alpha=0.5\)</span>, <span class="math inline">\(\lambda=0.6\)</span>, and <span class="math inline">\(\beta=0.25\)</span>.</li>
</ul>
<p>Then they note <span class="math inline">\(r=\lambda \alpha / \beta\)</span>; this threshold is critical.</p>
<li>They say that in software <span class="math inline">\(\beta=0.1\)</span>.</li>
</ul></li>
<li>Questions:
<ul>
<li>Any historical domain where we’ve seen regimes of <span class="math inline">\(\beta<0\)</span>, and hence explosion?</li>
</ul></li>
</ul>
<p>Implications / thresholds:</p>
<ul>
<li>In the one-sector software model, the key threshold is <span class="math inline">\(r = \lambda\alpha/\beta_S\)</span>.</li>
<li>In the multi-sector model, explosive growth can arise from the combined automation of goods production, software R&D, hardware progress, and aggregate innovation, even when no single channel is decisive on its own.</li>
<li><p>Q: how to think about the growth effect of uplift vs automation?</p></li>
<li><p>It seems to me AI typically makes things effectively <em>free</em>, rather than being bottlenecked by capital. I think this is a problem for Cobb-Douglas production.</p></li>
</ul>
<h2 class="anchored" data-anchor-id="kokotajlo2025aifuturesmodel-ai-futures-model"><span class="citation" data-cites="kokotajlo2025aifuturesmodel">Kokotajlo et al. (<a href="#ref-kokotajlo2025aifuturesmodel" role="doc-biblioref">2025</a>)</span> AI futures model</h2>
<p><a href="https://www.timelinesmodel.com">https://www.timelinesmodel.com</a></p>
<p>They assume AI automates some fraction of R&D tasks, proportional to the time horizon.</p>
<p>Short: AI automates some fraction of R&D tasks, which causes an effective multiplier on R&D labor: “parallel uplift and 1/(1-f) are equivalent in the simple model”.</p>
<p>Automating a fraction <span class="math inline">\(f\)</span> of R&D tasks multiplies effective R&D labor by <span class="math inline">\(1/(1-f)\)</span> (i.e. tasks are perfect complements).</p>
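<p>A one-line version of this bookkeeping (my sketch; the function name is mine, not from the model). With perfectly complementary tasks, humans only spend time on the remaining <span class="math inline">\(1-f\)</span> share of tasks, so throughput per unit of human labor scales by <span class="math inline">\(1/(1-f)\)</span>:</p>

```python
def rd_labor_multiplier(f):
    """Effective R&D-labor multiplier when a fraction f of perfectly
    complementary tasks is automated: human time is only needed on the
    remaining (1 - f) share, so throughput scales by 1 / (1 - f)."""
    if not 0 <= f < 1:
        raise ValueError("f must be in [0, 1)")
    return 1.0 / (1.0 - f)

# 2x at f = 0.5, ~10x at f = 0.9: gains blow up as f -> 1.
multipliers = {f: rd_labor_multiplier(f) for f in (0.0, 0.5, 0.9)}
```

<p>The blow-up as <span class="math inline">\(f \to 1\)</span> is why the last non-automated tasks dominate in these models.</p>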
<td>not pinned down separately; together with <span class="math inline">\(\zeta\)</span>, Kwa uses <span class="math inline">\(\alpha/(\alpha+\zeta)\in [0.12,0.35]\)</span> and <span class="math inline">\(\alpha+\zeta\in [0.8,1]\)</span>, implying roughly <span class="math inline">\(\alpha\in [0.10,0.35]\)</span>.</td>
<td>not pinned down separately; implied by the same calibration above, roughly <span class="math inline">\(\zeta\in [0.52,0.88]\)</span>.</td>
</tr>
<tr class="even">
<td><span class="math inline">\(f(t)\)</span></td>
<td>fraction of R&D tasks automated; Kwa sets current <span class="math inline">\(f\)</span> in Jan 2026 to lie in <span class="math inline">\([0.25,0.5]\)</span>.</td>
</tr>
<tr class="odd">
<td><span class="math inline">\(v\)</span></td>
<td>automation velocity; Kwa uses <span class="math inline">\(1/v\in [1.5,4.2]\)</span>, so <span class="math inline">\(v\in [0.24,0.67]\)</span>.</td>
<td>effective compute level of the half-automated coder; not directly estimated here, but defined as the point where automation reaches 50%.</td>
</tr>
</tbody>
</table>
<p>Relevant notes on interpretation / sources:</p>
<blockquote class="blockquote">
<p>“<span class="math inline">\(E_{hac}\)</span> is the effective compute level of the half-automated coder”</p>
</blockquote>
<blockquote class="blockquote">
<p>“v is the automation velocity: S must increase by factor of e^(1/v) to get from 50% to 73% automation”</p>
</blockquote>
<blockquote class="blockquote">
<p>“alpha/(alpha + zeta) is between 0.12 and 0.35 … This range is based on Yafah’s (Epoch) recommendation to calibrate from lab spending ratios of labor vs capital.”</p>
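<p>One functional form consistent with these two quotes (my reconstruction, not necessarily the model’s exact code) is a logistic in log effective compute, <span class="math inline">\(f(S)=\sigma(v\log(S/E_{hac}))\)</span>: then <span class="math inline">\(f=0.5\)</span> exactly at <span class="math inline">\(S=E_{hac}\)</span>, and multiplying <span class="math inline">\(S\)</span> by <span class="math inline">\(e^{1/v}\)</span> raises <span class="math inline">\(f\)</span> to <span class="math inline">\(\sigma(1)\approx 0.731\)</span>, matching the 50%-to-73% statement.</p>

```python
import math

def automated_fraction(S, E_hac, v):
    """Logistic automation schedule in log effective compute:
    f = 1 / (1 + (S / E_hac)^(-v)).  At S = E_hac this is exactly 0.5;
    scaling S by e^(1/v) raises it to 1 / (1 + e^-1) ~ 0.731."""
    return 1.0 / (1.0 + (S / E_hac) ** (-v))

v = 0.4  # illustrative value inside Kwa's range [0.24, 0.67]
f50 = automated_fraction(1.0, 1.0, v)
f73 = automated_fraction(math.exp(1 / v), 1.0, v)
```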
<li><span class="math inline">\(Q\)</span> is the quality of cognitive labour, measured in units of time horizon; this is our AI researcher (e.g. Opus 4.5, GPT-5.2, Gemini 3)</li>
<li><span class="math inline">\(A\)</span> is the current level of algorithms, also measured in units of time horizon (e.g. GPT-2, GPT-3, Pythia, etc.)</li>
<li><span class="math inline">\(\beta\)</span> is the ideas-getting-harder-to-find parameter</li>
<li>The explosion condition here is <span class="math inline">\(q > \beta\)</span>.</li>
<li>(Must also assume that <span class="math inline">\(Q=cA\)</span>.)</li>
</ul>
<h2 class="anchored" data-anchor-id="david-rein-2025-model">David Rein (2025) model</h2>
<p>The core idea: idea production depends on quality:</p>
<p><span class="math display">\[\begin{aligned}
dA &= Q^q A^{1-\beta}
&& \text{(idea production depends on quality of cognitive labor)}\\
Q &= cA
&& \text{(quality depends on ideas)}
\end{aligned}
\]</span></p>
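<p>Substituting the second equation into the first (my gloss): <span class="math inline">\(dA = (cA)^q A^{1-\beta} = c^q A^{1+q-\beta}\)</span>, so knowledge grows super-exponentially exactly when the exponent exceeds one, i.e. when <span class="math inline">\(q > \beta\)</span>.</p>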
Erdil, Ege, and Matthew Barnett. 2025. <span>“Most AI Value Will Come from Broad Automation, Not from R&D.”</span> <a href="https://epoch.ai/gradient-updates/most-ai-value-will-come-from-broad-automation-not-from-r-d">https://epoch.ai/gradient-updates/most-ai-value-will-come-from-broad-automation-not-from-r-d</a>.
Erdil, Ege, and Tamay Besiroglu. 2024. <span>“Explosive Growth from AI Automation: A Review of the Arguments.”</span> <a href="https://arxiv.org/abs/2309.11690">https://arxiv.org/abs/2309.11690</a>.
</div>
Ho, Anson, Tamay Besiroglu, Ege Erdil, David Owen, Robi Rahman, Zifan Carl Guo, David Atkinson, Neil Thompson, and Jaime Sevilla. 2024. <span>“Algorithmic Progress in Language Models.”</span> <a href="https://doi.org/10.48550/arXiv.2403.05812">https://doi.org/10.48550/arXiv.2403.05812</a>.
Jones, Benjamin F. 2025. <span>“Artificial Intelligence in Research and Development.”</span> NBER Working Paper 34312. National Bureau of Economic Research. <a href="https://doi.org/10.3386/w34312">https://doi.org/10.3386/w34312</a>.