
Commit 98d0bf1

1 parent c4b8c93 commit 98d0bf1

8 files changed

Lines changed: 175 additions & 156 deletions


_freeze/posts/2025-09-19-transformative-AI-notes/execute-results/html.json

Lines changed: 2 additions & 2 deletions
Large diffs are not rendered by default.

docs/index.html

Lines changed: 18 additions & 18 deletions
@@ -254,54 +254,54 @@ <h3 class="no-anchor listing-title">
 </a>
 </div>
 </div>
-<div class="quarto-post image-right" data-index="1" data-listing-date-sort="1759388400000" data-listing-file-modified-sort="1759438659676" data-listing-date-modified-sort="NaN" data-listing-reading-time-sort="43" data-listing-word-count-sort="8406">
+<div class="quarto-post image-right" data-index="1" data-listing-date-sort="1759474800000" data-listing-file-modified-sort="1735428686821" data-listing-date-modified-sort="NaN" data-listing-reading-time-sort="10" data-listing-word-count-sort="1848">
 <div class="thumbnail">
-<p><a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external"></a></p><a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external">
-</a><p><a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external"></a></p>
+<p><a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external"></a></p><a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external">
+</a><p><a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external"></a></p>
 </div>
 <div class="body">
 <h3 class="no-anchor listing-title">
-<a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external">Economics and AI</a>
+<a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external">Too Much Good News is Bad News</a>
 </h3>
 <div class="listing-subtitle">
-<a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external"></a>
+<a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external"></a>
 </div>
 <div class="listing-description">
-<a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external"></a>
+<a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external"></a>
 </div>
 </div>
 <div class="metadata">
-<a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external">
+<a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external">
 <div class="listing-date">
-Oct 2, 2025
+Oct 3, 2025
+</div>
+<div class="listing-author">
+Tom Cunningham
 </div>
 </a>
 </div>
 </div>
-<div class="quarto-post image-right" data-index="2" data-listing-date-sort="1759388400000" data-listing-file-modified-sort="1735428686821" data-listing-date-modified-sort="NaN" data-listing-reading-time-sort="10" data-listing-word-count-sort="1848">
+<div class="quarto-post image-right" data-index="2" data-listing-date-sort="1759388400000" data-listing-file-modified-sort="1759505769965" data-listing-date-modified-sort="NaN" data-listing-reading-time-sort="43" data-listing-word-count-sort="8405">
 <div class="thumbnail">
-<p><a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external"></a></p><a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external">
-</a><p><a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external"></a></p>
+<p><a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external"></a></p><a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external">
+</a><p><a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external"></a></p>
 </div>
 <div class="body">
 <h3 class="no-anchor listing-title">
-<a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external">Too Much Good News is Bad News</a>
+<a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external">Economics and AI</a>
 </h3>
 <div class="listing-subtitle">
-<a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external"></a>
+<a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external"></a>
 </div>
 <div class="listing-description">
-<a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external"></a>
+<a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external"></a>
 </div>
 </div>
 <div class="metadata">
-<a href="./posts/2024-12-26-heavy-tailed-noise.html" class="no-external">
+<a href="./posts/2025-09-19-transformative-AI-notes.html" class="no-external">
 <div class="listing-date">
 Oct 2, 2025
 </div>
-<div class="listing-author">
-Tom Cunningham
-</div>
 </a>
 </div>
 </div>

docs/index.xml

Lines changed: 137 additions & 128 deletions
Large diffs are not rendered by default.

docs/listings.json

Lines changed: 1 addition & 1 deletion
@@ -3,8 +3,8 @@
 "listing": "/index.html",
 "items": [
 "/posts/2024-10-27-from-citlali.html",
-"/posts/2025-09-19-transformative-AI-notes.html",
 "/posts/2024-12-26-heavy-tailed-noise.html",
+"/posts/2025-09-19-transformative-AI-notes.html",
 "/posts/2020-10-02-on-deriving-things.html",
 "/posts/2024-05-10-premature-optimization.html",
 "/posts/2023-01-23-peer-effects-norms-culture-sin-taxes.html",

docs/posts/2025-09-19-transformative-AI-notes.html

Lines changed: 12 additions & 3 deletions
@@ -452,7 +452,7 @@ <h1>We don’t have a standard model of AI</h1>
 </dd>
 <dt>A pocket model: LLMs share knowledge.</dt>
 <dd>
-<p>Here is a simple mental model that I often use: <em>LLMs share knowledge</em>. The model is unsatisfactory in many respects but has the virtues of being very simple and very general. Consider an LLM as just a database of answers to questions, containing the set of answers that already exist in the public domain (i.e., in the LLM’s training set).<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> LLMs therefore lower the cost of access to existing knowledge, and people will consult an LLM when they encounter a problem for which (i) they do know the answer, but (ii) they expect that someone else does have know the answer (and the answer was included in the training set).</p>
+<p>Here is a simple mental model that I often use: <em>LLMs share knowledge</em>. The model is unsatisfactory in many respects but has the virtues of being very simple and very general. Consider an LLM as just a database of answers to questions, containing the set of answers that already exist in the public domain (i.e., in the LLM’s training set).<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> LLMs therefore lower the cost of access to existing knowledge, and people will consult an LLM when they encounter a problem for which (i) they do know the answer, but (ii) they expect that someone else does know the answer (and the answer was included in the training set).</p>
 <p>This is a very crude model of an LLM but I think it gives a reasonable characterization of their adoption and effect so far. Around 1/3 of adults in rich countries are regularly using chatbots, and I think it’s fair to say the majority of the use is solving problems outside the domain of the user’s own expertise, but inside someone else’s expertise (see our <a href="https://www.nber.org/papers/w34255">ChatGPT paper</a>). This knowledge-sharing model predicts that LLMs will flatten comparative advantage, so we should see more home production (people solve their own problems), less trade, and lower returns to experience.</p>
 <p>The model has a number of imperfections as a general model of AI: (1) LLMs are often used to do tasks that don’t require knowledge outside the user’s domain, e.g.&nbsp;solving a problem that requires time and patience but not knowledge such as certain types of computer programming, writing, or creating images; (2) the model treats LLMs as strictly bound by the limits of human knowledge, this was a good approximation for early LLMs but it’s clear that AI is progressively expanding the boundary of human knowledge in a variety of ways.</p>
 <p>This model is related to the Garicano-Ide-Talamas models in which an AI shares existing knowledge.</p>
@@ -740,7 +740,16 @@ <h1>AI scientists will be unlike human scientists</h1>
 <div id="ref-varian2011economic" class="csl-entry" role="listitem">
 Varian, Hal. 2011. <span>“Economic Value of Google.”</span> <a href="https://dl.icdst.org/pdfs/files1/f87de5ba3c43760ebcbc2a1d90950dbc.pdf">https://dl.icdst.org/pdfs/files1/f87de5ba3c43760ebcbc2a1d90950dbc.pdf</a>.
 </div>
-</div></section></div></main> <!-- /main -->
+</div></section><section class="quarto-appendix-contents" id="quarto-citation"><h2 class="anchored quarto-appendix-heading">Citation</h2><div><div class="quarto-appendix-secondary-label">BibTeX citation:</div><pre class="sourceCode code-with-copy quarto-appendix-bibtex"><code class="sourceCode bibtex">@online{2025,
+  author = {},
+  title = {Economics and {AI}},
+  date = {2025-10-02},
+  url = {tecunningham.github.io/posts/2025-09-19-transformative-AI-notes.html},
+  langid = {en}
+}
+</code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre><div class="quarto-appendix-secondary-label">For attribution, please cite this work as:</div><div id="ref-2025" class="csl-entry quarto-appendix-citeas" role="listitem">
+<span>“Economics and AI.”</span> 2025. October 2, 2025. <a href="https://tecunningham.github.io/posts/2025-09-19-transformative-AI-notes.html">tecunningham.github.io/posts/2025-09-19-transformative-AI-notes.html</a>.
+</div></div></section></div></main> <!-- /main -->
 <script id="quarto-html-after-body" type="application/javascript">
 window.document.addEventListener("DOMContentLoaded", function (event) {
 const toggleBodyColorMode = (bsSheetEl) => {
@@ -1162,7 +1171,7 @@ <h1>AI scientists will be unlike human scientists</h1>
 });
 </script>
 </div> <!-- /content -->
-<script>var lightboxQuarto = GLightbox({"closeEffect":"zoom","loop":false,"selector":".lightbox","descPosition":"bottom","openEffect":"zoom"});
+<script>var lightboxQuarto = GLightbox({"closeEffect":"zoom","selector":".lightbox","loop":false,"openEffect":"zoom","descPosition":"bottom"});
 (function() {
 let previousOnload = window.onload;
 window.onload = () => {

docs/search.json

Lines changed: 2 additions & 2 deletions
Large diffs are not rendered by default.

docs/sitemap.xml

Lines changed: 1 addition & 1 deletion
@@ -110,6 +110,6 @@
 </url>
 <url>
 <loc>tecunningham.github.io/posts/2025-09-19-transformative-AI-notes.html</loc>
-<lastmod>2025-10-02T20:57:39.676Z</lastmod>
+<lastmod>2025-10-03T15:36:09.965Z</lastmod>
 </url>
 </urlset>

posts/2025-09-19-transformative-AI-notes.qmd

Lines changed: 2 additions & 1 deletion
@@ -1,5 +1,6 @@
 ---
 title: Economics and AI
+citation: true
 bibliography: ai.bib
 reference-location: section
 citation-location: document
@@ -266,7 +267,7 @@ Structural models of AI.
 
 A pocket model: LLMs share knowledge.
 
-: Here is a simple mental model that I often use: *LLMs share knowledge*. The model is unsatisfactory in many respects but has the virtues of being very simple and very general. Consider an LLM as just a database of answers to questions, containing the set of answers that already exist in the public domain (i.e., in the LLM's training set).[^blog] LLMs therefore lower the cost of access to existing knowledge, and people will consult an LLM when they encounter a problem for which (i) they do know the answer, but (ii) they expect that someone else does have know the answer (and the answer was included in the training set).
+: Here is a simple mental model that I often use: *LLMs share knowledge*. The model is unsatisfactory in many respects but has the virtues of being very simple and very general. Consider an LLM as just a database of answers to questions, containing the set of answers that already exist in the public domain (i.e., in the LLM's training set).[^blog] LLMs therefore lower the cost of access to existing knowledge, and people will consult an LLM when they encounter a problem for which (i) they do know the answer, but (ii) they expect that someone else does know the answer (and the answer was included in the training set).
 
 This is a very crude model of an LLM but I think it gives a reasonable characterization of their adoption and effect so far. Around 1/3 of adults in rich countries are regularly using chatbots, and I think it's fair to say the majority of the use is solving problems outside the domain of the user's own expertise, but inside someone else's expertise (see our [ChatGPT paper](https://www.nber.org/papers/w34255)). This knowledge-sharing model predicts that LLMs will flatten comparative advantage, so we should see more home production (people solve their own problems), less trade, and lower returns to experience.

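The "LLMs share knowledge" pocket model edited in the diff above can be sketched as a toy simulation. Everything here is an illustrative assumption, not from the post: the LLM is modeled as a lookup table over the union of all agents' knowledge (its "training set"), and we compare how often a random agent can solve a random problem alone versus with LLM access.

```python
import random

random.seed(0)

N_PROBLEMS = 1000  # distinct problems in the world (assumed)
N_AGENTS = 50      # population size (assumed)
N_TRIALS = 2000    # random (agent, problem) encounters to sample

# Each agent's expertise is a random 5% slice of all problems.
expertise = [set(random.sample(range(N_PROBLEMS), 50)) for _ in range(N_AGENTS)]

# The LLM as a database of publicly known answers: the union of
# everyone's knowledge stands in for "the public domain".
public_domain = set().union(*expertise)

def solvable_alone(agent: int, problem: int) -> bool:
    return problem in expertise[agent]

def solvable_with_llm(agent: int, problem: int) -> bool:
    # Consult the LLM when you lack the answer but someone else has it.
    return solvable_alone(agent, problem) or problem in public_domain

agents = [random.randrange(N_AGENTS) for _ in range(N_TRIALS)]
problems = [random.randrange(N_PROBLEMS) for _ in range(N_TRIALS)]

before = sum(solvable_alone(a, p) for a, p in zip(agents, problems)) / N_TRIALS
after = sum(solvable_with_llm(a, p) for a, p in zip(agents, problems)) / N_TRIALS

print(f"solved alone: {before:.0%}; solved with LLM access: {after:.0%}")
```

Under these assumptions the gap between the two rates is the model's "home production" effect: problems an agent previously had to trade for become solvable alone, which is the mechanism behind the post's prediction of flattened comparative advantage and lower returns to experience.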