
Commit 01ef39d

updated post

1 parent 0633774 commit 01ef39d

13 files changed

Lines changed: 629 additions & 222 deletions

_freeze/posts/2025-10-19-forecasts-of-AI-growth/execute-results/html.json

Lines changed: 2 additions & 2 deletions
Large diffs are not rendered by default.

docs/index.html

Lines changed: 53 additions & 26 deletions
Large diffs are not rendered by default.

docs/index.xml

Lines changed: 375 additions & 83 deletions
Large diffs are not rendered by default.

docs/listings.json

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@
 "listing": "/index.html",
 "items": [
 "/posts/2024-10-27-from-citlali.html",
+"/posts/2025-10-19-forecasts-of-AI-growth.html",
 "/posts/2025-09-19-transformative-AI-notes.html",
 "/posts/2020-10-02-on-deriving-things.html",
 "/posts/2024-12-26-heavy-tailed-noise.html",

docs/posts/2023-10-22-high-dimensional-world.html

Lines changed: 14 additions & 2 deletions
@@ -7,7 +7,7 @@
 <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes">
 
 <meta name="author" content="Tom Cunningham">
-<meta name="dcterms.date" content="2025-11-05">
+<meta name="dcterms.date" content="2025-11-06">
 <meta name="description" content="Tom Cunningham blog">
 
 <title>Implications of the Manifold Hypothesis | Tom Cunningham – Tom Cunningham</title>
@@ -229,7 +229,7 @@ <h1 class="title">Implications of the Manifold Hypothesis</h1>
 <div>
 <div class="quarto-title-meta-heading">Published</div>
 <div class="quarto-title-meta-contents">
-<p class="date">November 5, 2025</p>
+<p class="date">November 6, 2025</p>
 </div>
 </div>
 
@@ -1012,6 +1012,15 @@ <h1>Related Literature</h1>
 <li>Can interpret PCA as denoising: “quadratically regularized PCA corresponds to a model in which features are observed with N(0,1) errors.”</li>
 </ul>
 </dd>
+<dt><span class="citation" data-cites="poggio2017deepshallow">Poggio et al. (<a href="#ref-poggio2017deepshallow" role="doc-biblioref">2017</a>)</span></dt>
+<dd>
+Argues that deep learning is more efficient when the generating function is <em>compositional</em>.
+</dd>
+</dl>
+<blockquote class="blockquote">
+<p>“The main message is that deep networks have the theoretical guarantee, which shallow networks do not have, that they can avoid the curse of dimensionality for an important class of problems, corresponding to compositional functions, that is functions of functions. An especially interesting subset of such compositional functions are hierarchically local compositional functions where all the constituent functions are local in the sense of bounded small dimensionality. <img src="images/2025-11-06-08-31-30.png" class="img-fluid"></p>
+</blockquote>
+<dl>
 <dt>Markov blanket</dt>
 <dd>
 Given a set of random variables, a <a href="https://en.wikipedia.org/wiki/Markov_blanket">Markov blanket</a> with respect to a single variable <span class="math inline">\(Y\)</span> is a subset that is collectively sufficient to infer <span class="math inline">\(Y\)</span>.
@@ -1322,6 +1331,9 @@ <h1>Perceptual Manifolds</h1>
 <div id="ref-narayanan2010manifoldhypothesis" class="csl-entry" role="listitem">
 Narayanan, Hariharan, and Sanjoy Mitter. 2010. <span>“Sample Complexity of Testing the Manifold Hypothesis.”</span> In <em>Advances in Neural Information Processing Systems (NIPS) 23</em>. Vol. 23. <a href="https://papers.neurips.cc/paper/3958-sample-complexity-of-testing-the-manifold-hypothesis.pdf">https://papers.neurips.cc/paper/3958-sample-complexity-of-testing-the-manifold-hypothesis.pdf</a>.
 </div>
+<div id="ref-poggio2017deepshallow" class="csl-entry" role="listitem">
+Poggio, Tomaso, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. 2017. <span>“Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality: A Review.”</span> <a href="https://arxiv.org/abs/1611.00740">https://arxiv.org/abs/1611.00740</a>.
+</div>
 <div id="ref-udell2016generalized" class="csl-entry" role="listitem">
 Udell, Madeleine, Corinne Horn, Reza Zadeh, Stephen Boyd, et al. 2016. <span>“Generalized Low Rank Models.”</span> <em>Foundations and Trends<span></span> in Machine Learning</em> 9 (1): 1–118.
 </div>

docs/posts/2025-10-19-forecasts-of-AI-growth.html

Lines changed: 84 additions & 65 deletions
Large diffs are not rendered by default.

docs/search.json

Lines changed: 15 additions & 1 deletion
@@ -921,7 +921,7 @@
 "href": "index.html",
 "title": "Blog Posts",
 "section": "",
-"text": "Economics and Transformative AI\n\n\n\n\n\n\n\n\n\n\nOct 2, 2025\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nOn Deriving Things\n\n\n\n\n\n\n\n\n\n\nJan 30, 2025\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nToo Much Good News is Bad News\n\n\n\n\n\n\n\n\n\n\nDec 26, 2024\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nPremature Optimization and the Valley of Confusion\n\n\n\n\n\n\n\n\n\n\nMay 10, 2024\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nPeer Effects, Culture, and Taxes\n\n\n\n\n\n\n\n\n\n\nApr 28, 2024\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nBloodhounds and Bulldogs\n\n\nOn Perception, Judgment, & Decision-Making\n\n\n\n\n\n\n\nApr 27, 2024\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nThe Influence of AI on Content Moderation and Communication\n\n\n\n\n\n\n\n\n\n\nDec 11, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nThe History of Automated Text Moderation\n\n\n\n\n\n\n\n\n\n\nNov 18, 2023\n\n\nIntegrity Institute collaborators: Alex Rosenblatt, Jeff Allen, Ejona Varangu, Dave Sullivan, Tom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nThinking About Tradeoffs? Draw an Ellipse\n\n\n\n\n\n\n\n\n\n\nOct 25, 2023\n\n\nTom Cunningham, OpenAI.\n\n\n\n\n\n\n\n\n\n\n\n\nExperiment Interpretation and Extrapolation\n\n\n\n\n\n\n\n\n\n\nOct 17, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nAn AI Which Imitates Humans Can Beat Humans\n\n\n\n\n\n\n\n\n\n\nOct 6, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nSushi-Roll Model of Online Media\n\n\nPreviously: “pizza model”, “salami model”\n\n\n\n\n\n\n\nSep 8, 2023\n\n\nTom Cunningham, Integrity Institute\n\n\n\n\n\n\n\n\n\n\n\n\nHow Much has Social Media affected Polarization?\n\n\n\n\n\n\n\n\n\n\nAug 7, 2023\n\n\nTom Cunningham, Integrity Institute\n\n\n\n\n\n\n\n\n\n\n\n\nThe Paradox of Small Effects\n\n\n\n\n\n\n\n\n\n\nAug 2, 2023\n\n\nTom Cunningham, Integrity Institute\n\n\n\n\n\n\n\n\n\n\n\n\nRanking by Engagement\n\n\n\n\n\n\n\n\n\n\nMay 8, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nSocial Media Suspensions of Prominent Accounts\n\n\n\n\n\n\n\n\n\n\nJan 31, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nOptimal Coronavirus Policy Should be Front-Loaded\n\n\n\n\n\n\n\n\n\n\nApr 5, 2020\n\n\n\n\n\n\n\n\n\n\n\n\nOn Unconscious Influences (Part 1)\n\n\n\n\n\n\n\n\n\n\nDec 8, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nThe Work of Art in the Age of Mechanical Production\n\n\n\n\n\n\n\n\n\n\nSep 27, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nRepulsion from the Prior\n\n\n\n\n\n\n\n\n\n\nMay 26, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nThe Repeated Failure of Laws of Behaviour\n\n\n\n\n\n\n\n\n\n\nApr 15, 2017\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nSamuelson & Expected Utility\n\n\n\n\n\n\n\n\n\n\nFeb 25, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nEconomist Explorers\n\n\n\n\n\n\n\n\n\n\nFeb 25, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nWeber’s Law Doesn’t Imply Concave Representations or Concave Judgments\n\n\n\n\n\n\n\n\n\n\nFeb 25, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nRelative Thinking\n\n\n\n\n\n\n\n\n\n\nApr 30, 2016\n\n\nTom Cunningham\n\n\n\n\n\nNo matching items"
+"text": "Forecasts of AI & Economic Growth\n\n\n\n\n\n\n\n\n\n\nNov 6, 2025\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nEconomics and Transformative AI\n\n\n\n\n\n\n\n\n\n\nOct 2, 2025\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nOn Deriving Things\n\n\n\n\n\n\n\n\n\n\nJan 30, 2025\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nToo Much Good News is Bad News\n\n\n\n\n\n\n\n\n\n\nDec 26, 2024\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nPremature Optimization and the Valley of Confusion\n\n\n\n\n\n\n\n\n\n\nMay 10, 2024\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nPeer Effects, Culture, and Taxes\n\n\n\n\n\n\n\n\n\n\nApr 28, 2024\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nBloodhounds and Bulldogs\n\n\nOn Perception, Judgment, & Decision-Making\n\n\n\n\n\n\n\nApr 27, 2024\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nThe Influence of AI on Content Moderation and Communication\n\n\n\n\n\n\n\n\n\n\nDec 11, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nThe History of Automated Text Moderation\n\n\n\n\n\n\n\n\n\n\nNov 18, 2023\n\n\nIntegrity Institute collaborators: Alex Rosenblatt, Jeff Allen, Ejona Varangu, Dave Sullivan, Tom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nThinking About Tradeoffs? Draw an Ellipse\n\n\n\n\n\n\n\n\n\n\nOct 25, 2023\n\n\nTom Cunningham, OpenAI.\n\n\n\n\n\n\n\n\n\n\n\n\nExperiment Interpretation and Extrapolation\n\n\n\n\n\n\n\n\n\n\nOct 17, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nAn AI Which Imitates Humans Can Beat Humans\n\n\n\n\n\n\n\n\n\n\nOct 6, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nSushi-Roll Model of Online Media\n\n\nPreviously: “pizza model”, “salami model”\n\n\n\n\n\n\n\nSep 8, 2023\n\n\nTom Cunningham, Integrity Institute\n\n\n\n\n\n\n\n\n\n\n\n\nHow Much has Social Media affected Polarization?\n\n\n\n\n\n\n\n\n\n\nAug 7, 2023\n\n\nTom Cunningham, Integrity Institute\n\n\n\n\n\n\n\n\n\n\n\n\nThe Paradox of Small Effects\n\n\n\n\n\n\n\n\n\n\nAug 2, 2023\n\n\nTom Cunningham, Integrity Institute\n\n\n\n\n\n\n\n\n\n\n\n\nRanking by Engagement\n\n\n\n\n\n\n\n\n\n\nMay 8, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nSocial Media Suspensions of Prominent Accounts\n\n\n\n\n\n\n\n\n\n\nJan 31, 2023\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nOptimal Coronavirus Policy Should be Front-Loaded\n\n\n\n\n\n\n\n\n\n\nApr 5, 2020\n\n\n\n\n\n\n\n\n\n\n\n\nOn Unconscious Influences (Part 1)\n\n\n\n\n\n\n\n\n\n\nDec 8, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nThe Work of Art in the Age of Mechanical Production\n\n\n\n\n\n\n\n\n\n\nSep 27, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nRepulsion from the Prior\n\n\n\n\n\n\n\n\n\n\nMay 26, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nThe Repeated Failure of Laws of Behaviour\n\n\n\n\n\n\n\n\n\n\nApr 15, 2017\n\n\nTom Cunningham\n\n\n\n\n\n\n\n\n\n\n\n\nSamuelson & Expected Utility\n\n\n\n\n\n\n\n\n\n\nFeb 25, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nEconomist Explorers\n\n\n\n\n\n\n\n\n\n\nFeb 25, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nWeber’s Law Doesn’t Imply Concave Representations or Concave Judgments\n\n\n\n\n\n\n\n\n\n\nFeb 25, 2017\n\n\n\n\n\n\n\n\n\n\n\n\nRelative Thinking\n\n\n\n\n\n\n\n\n\n\nApr 30, 2016\n\n\nTom Cunningham\n\n\n\n\n\nNo matching items"
 },
 {
 "objectID": "about.html#upcoming-talks",
@@ -1090,5 +1090,19 @@
 "title": "Inference on Experiments",
 "section": "Misc Questions and Answers",
 "text": "Misc Questions and Answers\n\nQ: Suppose I flip a coin 10 times and see THTHTHTHTT, and conclude that the coin always comes up THTHTHTHTT in every sequence of 10 coin flips, because the probability of this sequence happening by chance is 1/1024. This seems like a bad practice, so under what circumstances are we justified in coming up with hypotheses ex post?\n\nThe prior you should use is the prior you would’ve stated before seeing the result.\nRealistically you have a very tight zero prior over the hypothesis that “the coin always comes up THTHTHTHTT.” And so after you update you’ll still have an extremely low posterior probability on this. (This is harder to rationalize in the classical framework: they would just say “don’t run the test.”)\nOften we only consider some pattern after we see the data. to assess how much we should update we can ask ourselves what our prior would’ve been before seeing the data. Often this is not easy to answer, at worst we can ask a colleague who hasn’t seen the data yet.\n\nQ: I think even Bayesians would defend some uses of null hypothesis testing, e.g. A/A tests, tests of exposure imbalance in experiments, assessing fit of bayesian models with “bayesian p-values” (is the posterior distribution significantly different from the observed data).\n\nI would say that Bayesians still want to generate summary statistics (e.g. p-values), esp. insofar as they are sufficient statistics of the data, but they wouldn’t take the classical point-estimates for granted, they would shrink them: e.g. for an AA test, if I’d run many previous AA tests and they were all well-calibrated, then when I run a new one I’d probably not worry if it was just significant, i.e. p=0.04.\n\nQ: If the 95% CI only covered the true effect in 80% of cases, I (and I suspect others) would be upset. But this kind of coverage guarantee is a frequentist, not bayesian property, as Larry Wasserman notes here.\n\nThere are two very different properties:\n\nConditional on the true effect, what’s probability that the estimator is within 95% CI of that true effect.\nConditional on the estimate observed, what’s the probability that the true effect is within the 95% CI of that estimator.\n\nYour statement is the first, but I’m going to say we only really care about the second. Testing the second statement is significantly harder, because it requires knowing the true effect, but we can test it with a set of exeriments, essentially our shrinkage setup. And we find that the classical point estimates consistently over-estimate effect sizes, and 95% of true effects do not fall in the confidence intervals.\n\nQ: I think I would agree that if you take a Bayesian perspective and do a lot of work assessing that your prior is reasonable, checking the fit of the posterior distribution etc, then you are in pretty good shape for multiple comparisons and early stopping problems. That second step is non-trivial and relatively few people have the time and ability to do it. And if you don’t do it then you’re in a potentially worse situation than the frequentist, to the extent that your attitude is “I don’t need to worry about these issues because I’m a Bayesian”.\n\nFor this argument to work I think you have to assume that Bayesians are hubristic – i.e., when you give them freedom, they do worse than when they are constrained. I’m sure this is true in some times and places, but I think the argument is essentially psychological, not statistical."
+},
+{
+"objectID": "posts/2025-10-19-forecasts-of-AI-growth.html",
+"href": "posts/2025-10-19-forecasts-of-AI-growth.html",
+"title": "Forecasts of AI & Economic Growth",
+"section": "",
+"text": "I’ve collected forecasts of AI’s effect on economic growth over 2025-2035.\n\nThe full list of forecasts is in a table below. Some of the forecasts are of growth in GDP, some GWP (gross world product), some TFP, some labor productivity. I’m also mixing forecasts for the US, EU, and World. Some forecasts aren’t explicitly over 2025-2035, but most are roughly that range. Please send me an email if you think I’m misinterpreting one of these.\n\nEconomists and AI people disagree.\n\n\nMost economists expect 0.1–1.5%/year. The two exceptions are Baily, Brynjolfsson, and Korinek (2023) and Korinek and Suh (2024).\nMost AI-people expect 3–30%/year. A notable exception is Andrej Karpathy who recently says he expects GDP growth to remain on historical trends.1"
+},
+{
+"objectID": "posts/2025-10-19-forecasts-of-AI-growth.html#footnotes",
+"href": "posts/2025-10-19-forecasts-of-AI-growth.html#footnotes",
+"title": "Forecasts of AI & Economic Growth",
+"section": "Footnotes",
+"text": "Footnotes\n\n\nI classified Epoch’s GATE model (Erdil et al. (2025)) as by “AI people”, though the authors are a mixture of academic economists and people who work in AI.↩︎\nIt seems to me quite plausible that these papers over-estimate the productivity impact of existing LLMs: (1) the AB tests showing productivity improvements are on unrepresentatively self-contained tasks and are likely distorted by publication selection; (2) the Eloundou et al. (2023) estimates of very large time-savings from GPT-4 are based just on intuitions.↩︎\nComin and Mestieri (2014) say “the average adoption lag across all technologies (and countries) is 44 years,” but since the 1950s it has been 7-18 years.↩︎\n“Between 1 and 5% of all work hours are currently assisted by generative AI, and respondents report time savings equivalent to 1.4% of total work hours. … implies a potential productivity gain of 1.1%.”↩︎\nSuppose the total valuation of AI-related companies is $10T, which is perhaps around 10% of all capital stock. Using P/E of 15, a $10T valuation implies a stream of $600B in earnings/year, which is 2% of GDP.↩︎"
 }
 ]
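The valuation arithmetic in the last footnote above can be checked mechanically. This is a sketch, not code from the commit: the $10T valuation and P/E of 15 are the footnote's assumptions, while the ~$30T GDP figure is my own assumption (the footnote only reports the resulting 2%).

```python
# Back-of-envelope from the footnote: AI market cap -> implied earnings -> share of GDP.
valuation = 10e12    # $10T total valuation of AI-related companies (footnote's assumption)
pe_ratio = 15        # price-to-earnings ratio (footnote's assumption)
gdp = 30e12          # rough US GDP; my assumption, the footnote only reports the 2% result

earnings = valuation / pe_ratio   # ~$670B/year; the footnote rounds this to $600B
share_of_gdp = earnings / gdp     # ~2% of GDP, matching the footnote

print(f"implied earnings: ${earnings / 1e9:.0f}B/year, share of GDP: {share_of_gdp:.1%}")
```

Note the footnote rounds $667B down to $600B; either way the share comes out near 2%.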

docs/sitemap.xml

Lines changed: 4 additions & 0 deletions
@@ -116,4 +116,8 @@
 <loc>tecunningham.github.io/posts/2022-02-26-inference-with-experimental-data.html</loc>
 <lastmod>2025-11-05T15:22:51.983Z</lastmod>
 </url>
+<url>
+<loc>tecunningham.github.io/posts/2025-10-19-forecasts-of-AI-growth.html</loc>
+<lastmod>2025-11-06T17:25:11.881Z</lastmod>
+</url>
 </urlset>

posts/2023-10-22-high-dimensional-world.qmd

Lines changed: 6 additions & 0 deletions
@@ -593,6 +593,12 @@ Kevin Murphy (2024) [Probabilistic Machine Learning](https://probml.github.io/pm
 : - SVD: an exact decomposition into factors, then PCA is just truncation of the SVD factors. This is a way of analytically achieving a PCA solution, but for more complex cases need computable algorithms.
 : - Can interpret PCA as denoising: "quadratically regularized PCA corresponds to a model in which features are observed with N(0,1) errors."
 
+@poggio2017deepshallow
+: Argues that deep learning is more efficient when the generating function is *compositional*.
+
+> "The main message is that deep networks have the theoretical guarantee, which shallow networks do not have, that they can avoid the curse of dimensionality for an important class of problems, corresponding to compositional functions, that is functions of functions. An especially interesting subset of such compositional functions are hierarchically local compositional functions where all the constituent functions are local in the sense of bounded small dimensionality.
+![](images/2025-11-06-08-31-30.png)
+
 Markov blanket
 : Given a set of random variables, a [Markov blanket](https://en.wikipedia.org/wiki/Markov_blanket) with respect to a single variable $Y$ is a subset that is collectively sufficient to infer $Y$.
 

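The PCA bullets in the hunk above ("PCA is just truncation of the SVD factors", "Can interpret PCA as denoising") can be illustrated with a short sketch. This is my own illustration, not code from the commit: points are generated near a rank-2 subspace with Gaussian noise, and truncating the SVD to the top two factors recovers a cleaner reconstruction.

```python
import numpy as np

# Illustration of "PCA = truncation of the SVD" used as a denoiser.
# (My own sketch; the dimensions and noise scale are arbitrary choices.)
rng = np.random.default_rng(0)
n, d, k = 500, 20, 2

latent = rng.normal(size=(n, k))                # low-dimensional coordinates
basis = rng.normal(size=(k, d))                 # linear embedding into R^d
clean = latent @ basis                          # points lying exactly on a rank-k subspace
noisy = clean + 0.1 * rng.normal(size=(n, d))   # features observed with Gaussian errors

# PCA reconstruction: keep only the top-k singular factors.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = (U[:, :k] * s[:k]) @ Vt[:k, :]

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(f"MSE before truncation: {err_noisy:.4f}, after: {err_denoised:.4f}")
```

Truncation shrinks the error because the discarded singular directions carry almost only noise, which is the sense in which regularized PCA corresponds to a Gaussian observation model.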