Commit b37ba4e ("literally")
1 parent 945c3e5 commit b37ba4e

4 files changed: +46 -4 lines changed


docs/posts/2026-04-12-taking-agi-literally.html

Lines changed: 4 additions & 4 deletions

@@ -7,7 +7,7 @@
 <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes">

 <meta name="author" content="Tom Cunningham">
-<meta name="dcterms.date" content="2026-04-13">
+<meta name="dcterms.date" content="2026-04-14">
 <meta name="description" content="Tom Cunningham blog">

 <title>Taking AGI Literally | Tom Cunningham – Tom Cunningham</title>
@@ -202,7 +202,7 @@ <h1 class="title">Taking AGI Literally</h1>
 <div>
 <div class="quarto-title-meta-heading">Published</div>
 <div class="quarto-title-meta-contents">
-<p class="date">April 13, 2026</p>
+<p class="date">April 14, 2026</p>
 </div>
 </div>

@@ -367,12 +367,12 @@ <h1>Literature</h1>
 </section><section class="quarto-appendix-contents" id="quarto-citation"><h2 class="anchored quarto-appendix-heading">Citation</h2><div><div class="quarto-appendix-secondary-label">BibTeX citation:</div><pre class="sourceCode code-with-copy quarto-appendix-bibtex"><code class="sourceCode bibtex">@online{cunningham2026,
   author = {Cunningham, Tom},
   title = {Taking {AGI} {Literally}},
-  date = {2026-04-13},
+  date = {2026-04-14},
   url = {tecunningham.github.io/posts/2026-04-12-taking-agi-literally.html},
   langid = {en}
 }
 </code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre><div class="quarto-appendix-secondary-label">For attribution, please cite this work as:</div><div id="ref-cunningham2026" class="csl-entry quarto-appendix-citeas" role="listitem">
-Cunningham, Tom. 2026. <span>“Taking AGI Literally.”</span> April 13,
+Cunningham, Tom. 2026. <span>“Taking AGI Literally.”</span> April 14,
 2026. <a href="https://tecunningham.github.io/posts/2026-04-12-taking-agi-literally.html">tecunningham.github.io/posts/2026-04-12-taking-agi-literally.html</a>.
 </div></div></section></div></main> <!-- /main -->
 <script id="quarto-html-after-body" type="application/javascript">

docs/search.json

Lines changed: 21 additions & 0 deletions

@@ -341,5 +341,26 @@
     "title": "An Apple-Picking Model of AI R&D",
     "section": "Embedding Apple-Picking into Jones-model R&D",
     "text": "Embedding Apple-Picking into Jones-model R&D\nWe can embed the apple-picking model into the workhorse R&D function from C. I. Jones (1995) as follows, with the following implications:\n\nHuman-only effort has constant elasticity.\nAgent effort has declining elasticity.\nHuman & agent effort are complements up to a point.\n\nWe start with this function:\n\\[\\utt{\\frac{\\dot{A}}{A}}{growth rate}{of ideas}\n \\propto \\utt{R^\\gamma}{research}{inputs} \\times \\utt{A^{-\\beta}}{fishing}{out}\\]\nThis can be rewritten in a cumulative form, where \\(\\bar{R}_t\\) represents the total research (adjusted for congestion) up to time \\(t\\), then (assuming \\(A_0\\) is small):\n\\[A_t \\approx \\bar{R}_t^{1/\\beta}\\]\nWe can compare this to our apple-picking function, with only human effort \\(x\\): \\[\\ut{a(x)}{apples} = 1-e^{-rx}\\]\nThese can be reconciled if we assume ideas is a nonlinear function of apples-picked (\\(a\\)): \\[A(a)=\\ln\\left(\\frac{1}{1-a(x)}\\right)^{1/\\beta}.\\]\nWe can then consider progress in ideas with both humans and robots picking apples, \\(a(x_H,x_L)\\): \\[A(a)=\\left[\\ln\\left(\\frac{1}{\\lambda e^{-r_Hx_H-r_Ax_A}+(1-\\lambda)e^{-r_Hx_H}}\\right)\\right]^{1/\\beta}.\\]\nImplications:\n\nHuman-only"
+  },
+  {
+    "objectID": "posts/2026-04-12-taking-agi-literally.html",
+    "href": "posts/2026-04-12-taking-agi-literally.html",
+    "title": "Taking AGI Literally",
+    "section": "",
+    "text": "I feel like I didn’t really take AGI literally until recently.\n\nI’ve been working on the economics of AI for 2 years, but I feel I never really asked myself what the world would be like if computers could literally do all the things that humans could do.\nOn reflection I feel: (1) it might happen; (2) it would be absolutely bananas.\nOf course this is a very common view, and I’ve read (or skimmed) a lot of things making this point, but I feel I didn’t internalize them so it’s worth rehearsing the arguments to see if I’ve missed something.\nI think I’m rehashing a very well-trodden debate. Maybe I’m missing arguments that I should already know, & I’d be very grateful for people to point those out. I list some papers at the bottom.\n\nI know smart people who appear to disagree.\n\nSome people seem to believe we could have AGI yet the world would not go bananas. See some examples below: Tyler Cowen, Andrej Karpathy, Seb Krier, Alex Imas, and responses to the FRI survey. I think probably it’s due to a difference in how we’re defining AGI, but then it’s useful to push on this. Presumably it implies they think the strong AGI I’m talking about is vanishingly unlikely, & if so I’d like to understand their reasons.\nI normally am pretty sanguine about most things. In discussions about politics or technology I usually irritate people by saying “this too will pass” and finding historical parallels. I really want to say the same about AI but I don’t feel I can. I would be very happy to be talked out of these opinions.\nI feel odd writing this essay. My economist friends will ask why I’m wasting time with ideas so preposterous, my AI friends will ask why I’m wasting time with ideas so obvious.\n\nMy claims:\n\nI’ll define AGI as being able to do every task that any human can do, including esoteric skills, and physical tasks through a robot.\n\nIf you put AGI into standard economic models then things go crazy almost immediately. The economic effects would be unprecedented in all of human history.\nIf you think about everyday life with AGI, then things go crazy too.\nIf you think about other parts of society - politics, warfare, communication - all completely and utterly bananas.\n\nIn some sense these claims feel obvious. If I wake up one day and I check my phone and my phone says to me “anything you can do I can do better”, then of course the world is going to be utterly different. Maybe this type of AGI is centuries away, & that would be reassuring. But if it’s in my lifetime, or my daughter’s lifetime, then it seems like it would be a tidal wave which would sweep away most things I know."
+  },
+  {
+    "objectID": "posts/2026-04-12-taking-agi-literally.html#footnotes",
+    "href": "posts/2026-04-12-taking-agi-literally.html#footnotes",
+    "title": "Taking AGI Literally",
+    "section": "Footnotes",
+    "text": "Footnotes\n\n\nhttps://epoch.ai/data/ai-chip-sales/↩︎\nThere’s a famous paper arguing ideas are getting harder to find (Bloom et al. (2020)), but they argue there’s a low ratio between TFP growth and R&D growth, not a declining ratio.↩︎\nhttps://philiptrammell.substack.com/p/is-labor-a-luxury-in-the-long-run↩︎\nhttps://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/69cbb9d509ada447b6d9013f/1774959061185/forecasting-the-economic-effects-of-ai.pdf↩︎"
+  },
+  {
+    "objectID": "2026-02-22-ai-cost-curves.html",
+    "href": "2026-02-22-ai-cost-curves.html",
+    "title": "AI cost curves",
+    "section": "",
+    "text": "It’s useful to draw plots showing achievement vs expenditure, comparing humans & agents.\nYou can read the y-axis in a few ways: (1) score on a benchmark; (2) quality of the output; (3) score on an optimization problem.\nObservations:\n\nIn most cases agents are cheaper than humans but hit a ceiling in capability.\nCan simplify to say agents are free, without much loss.\nWe can see three types of agent growth: (A) cheaper inference; (B) expanded capabilities; (C) test-time growth.\nDistillation shifts cost curves left.\nThis observation is a nice fit for time horizon (more discussion required)\nQ: can you derive these curves from a theory of task complexity?\n\nTODO:\n\nProbably plot ln(expenditure).\nShow separate graphs for inference cost & training cost.\nObservation: for many things it doesn’t matter if we don’t have cardinal scale of quality, as long as we have ordinal scale. E.g. we can still talk about cost reduction, & something about scale effects.\n\nBasic plot:\n\n\n\n\n\n\n\n\n\nThree types of agent change:\n\n\n\n\n\n\n\n\n\n\nadditional plots\nExtra plots:"
+  }
  }
 ]
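The apple-picking R&D function quoted in the search-index text above can be sanity-checked numerically. A minimal sketch in Python, using the post's symbols (r_H, r_A, λ as `lam`, β as `beta`); the default parameter values are illustrative assumptions, not from the source:

```python
import math

def ideas(x_H, x_A, r_H=1.0, r_A=1.0, lam=0.5, beta=1.0):
    """Ideas stock A given human effort x_H and agent (robot) effort x_A."""
    # Share of apples still unpicked: a fraction `lam` of tasks is reachable
    # by both humans and agents; the remaining 1 - lam only by humans.
    unpicked = (lam * math.exp(-r_H * x_H - r_A * x_A)
                + (1.0 - lam) * math.exp(-r_H * x_H))
    # A = [ln(1 / unpicked)]^(1/beta), per the quoted formula.
    return math.log(1.0 / unpicked) ** (1.0 / beta)

# Human-only effort has constant elasticity: A = (r_H * x_H)^(1/beta).
print(ideas(1.0, 0.0, r_H=2.0))   # ≈ 2.0
# Agent-only effort is bounded: A can never exceed ln(1/(1 - lam)).
print(ideas(0.0, 1000.0))         # ≈ ln(2) ≈ 0.693
```

The two printed checks match the implications listed in the entry: constant elasticity for human-only effort, declining (bounded) elasticity for agent-only effort.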

docs/sitemap.xml

Lines changed: 8 additions & 0 deletions

@@ -64,4 +64,12 @@
   <loc>tecunningham.github.io/posts/2020-04-05-front-loading-restrictions.html</loc>
   <lastmod>2025-10-29T15:46:32.993Z</lastmod>
 </url>
+<url>
+  <loc>tecunningham.github.io/posts/2026-04-12-taking-agi-literally.html</loc>
+  <lastmod>2026-04-14T18:51:41.481Z</lastmod>
+</url>
+<url>
+  <loc>tecunningham.github.io/2026-02-22-ai-cost-curves.html</loc>
+  <lastmod>2026-04-14T21:07:58.125Z</lastmod>
+</url>
 </urlset>

posts/2026-04-12-taking-agi-literally.qmd

Lines changed: 13 additions & 0 deletions

(The committed file contained unresolved merge-conflict markers from f05b9c4; the hunk below shows the resolved content, keeping the newer side of each conflict.)

@@ -128,14 +128,17 @@ Other examples
 [^fri]: https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/69cbb9d509ada447b6d9013f/1774959061185/forecasting-the-economic-effects-of-ai.pdf




 # Literature

+Classic writings on this:
+

 - @aghion2019artificial say (1) progressively automating tasks can be consistent with ordinary growth rates if each task is a strong complement; (2) in contrast progressively automating R&D tasks could cause explosive growth.
 - @davidson2021could give arguments for explosive growth from AGI.
 - In 2023 Matt Clancy and Tamay Besiroglu debated [AI and explosive growth in Asterisk](https://asteriskmag.com/issues/03/the-great-inflection-a-debate-about-ai-and-explosive-growth). Matt Clancy's arguments: (a) slow automation of tasks; (b) bottlenecks from experiments; (c) bottlenecks from regulation.
 - @erdil2024explosive give arguments for explosive growth from AGI.
-- [Sam Hammond](https://www.secondbest.ca/p/the-limits-of-explosive-growth) replies to Davidson and Erdil, but I found his arguments difficult to follow. He spends a lot of time on the returns to R&D, assuming that we can't get much more efficient than we already are (which would be somewhat surprising), but I didn't feel he directly addressed the increase in effective labor supply. The arguments about bounded utility didn't seem relevant.
+- [Sam Hammond](https://www.secondbest.ca/p/the-limits-of-explosive-growth) replies to Davidson and Erdil, but I found his arguments difficult to follow. He spends a lot of time on the returns to scale in R&D, assuming that we can't get much more efficient than we already are (which would be somewhat surprising), but I didn't feel he directly addressed the increase in effective labor supply. The arguments about bounded utility didn't seem relevant.
+- Tyler Cowen (Feb 2025) ["Why I think AI take-off is relatively slow"](https://marginalrevolution.com/marginalrevolution/2025/02/why-i-think-ai-take-off-is-relatively-slow.html)
+- @wiseman2025growth, "We estimate that economic growth will be 3% to 9% higher per year for the 20 years following significant AI automation." But this is based on a model with slow automation of tasks over time, i.e. it's not about AGI, it's about slowly expanding AI abilities.

