docs/posts/2025-09-19-transformative-AI-notes.html (+12 −3)
@@ -452,7 +452,7 @@ <h1>We don’t have a standard model of AI</h1>
 </dd>
 <dt>A pocket model: LLMs share knowledge.</dt>
 <dd>
-<p>Here is a simple mental model that I often use: <em>LLMs share knowledge</em>. The model is unsatisfactory in many respects but has the virtues of being very simple and very general. Consider an LLM as just a database of answers to questions, containing the set of answers that already exist in the public domain (i.e., in the LLM’s training set).<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> LLMs therefore lower the cost of access to existing knowledge, and people will consult an LLM when they encounter a problem for which (i) they do not know the answer, but (ii) they expect that someone else does have know the answer (and the answer was included in the training set).</p>
+<p>Here is a simple mental model that I often use: <em>LLMs share knowledge</em>. The model is unsatisfactory in many respects but has the virtues of being very simple and very general. Consider an LLM as just a database of answers to questions, containing the set of answers that already exist in the public domain (i.e., in the LLM’s training set).<a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a> LLMs therefore lower the cost of access to existing knowledge, and people will consult an LLM when they encounter a problem for which (i) they do not know the answer, but (ii) they expect that someone else does know the answer (and the answer was included in the training set).</p>
 <p>This is a very crude model of an LLM but I think it gives a reasonable characterization of their adoption and effect so far. Around 1/3 of adults in rich countries are regularly using chatbots, and I think it’s fair to say the majority of the use is solving problems outside the domain of the user’s own expertise, but inside someone else’s expertise (see our <a href="https://www.nber.org/papers/w34255">ChatGPT paper</a>). This knowledge-sharing model predicts that LLMs will flatten comparative advantage, so we should see more home production (people solve their own problems), less trade, and lower returns to experience.</p>
 <p>The model has a number of imperfections as a general model of AI: (1) LLMs are often used to do tasks that don’t require knowledge outside the user’s domain, e.g. solving a problem that requires time and patience but not knowledge, such as certain types of computer programming, writing, or creating images; (2) the model treats LLMs as strictly bound by the limits of human knowledge; this was a good approximation for early LLMs, but it’s clear that AI is progressively expanding the boundary of human knowledge in a variety of ways.</p>
 <p>This model is related to the Garicano-Ide-Talamas models in which an AI shares existing knowledge.</p>
@@ -740,7 +740,16 @@ <h1>AI scientists will be unlike human scientists</h1>
 Varian, Hal. 2011. <span>“Economic Value of Google.”</span> <a href="https://dl.icdst.org/pdfs/files1/f87de5ba3c43760ebcbc2a1d90950dbc.pdf">https://dl.icdst.org/pdfs/files1/f87de5ba3c43760ebcbc2a1d90950dbc.pdf</a>.
 </code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre><div class="quarto-appendix-secondary-label">For attribution, please cite this work as:</div><div id="ref-2025" class="csl-entry quarto-appendix-citeas" role="listitem">
+<span>“Economics and AI.”</span> 2025. October 2, 2025. <a href="https://tecunningham.github.io/posts/2025-09-19-transformative-AI-notes.html">tecunningham.github.io/posts/2025-09-19-transformative-AI-notes.html</a>.
posts/2025-09-19-transformative-AI-notes.qmd (+2 −1)
@@ -1,5 +1,6 @@
 ---
 title: Economics and AI
+citation: true
 bibliography: ai.bib
 reference-location: section
 citation-location: document
@@ -266,7 +267,7 @@ Structural models of AI.
 A pocket model: LLMs share knowledge.
 
-: Here is a simple mental model that I often use: *LLMs share knowledge*. The model is unsatisfactory in many respects but has the virtues of being very simple and very general. Consider an LLM as just a database of answers to questions, containing the set of answers that already exist in the public domain (i.e., in the LLM's training set).[^blog] LLMs therefore lower the cost of access to existing knowledge, and people will consult an LLM when they encounter a problem for which (i) they do not know the answer, but (ii) they expect that someone else does have know the answer (and the answer was included in the training set).
+: Here is a simple mental model that I often use: *LLMs share knowledge*. The model is unsatisfactory in many respects but has the virtues of being very simple and very general. Consider an LLM as just a database of answers to questions, containing the set of answers that already exist in the public domain (i.e., in the LLM's training set).[^blog] LLMs therefore lower the cost of access to existing knowledge, and people will consult an LLM when they encounter a problem for which (i) they do not know the answer, but (ii) they expect that someone else does know the answer (and the answer was included in the training set).
 
 This is a very crude model of an LLM but I think it gives a reasonable characterization of their adoption and effect so far. Around 1/3 of adults in rich countries are regularly using chatbots, and I think it's fair to say the majority of the use is solving problems outside the domain of the user's own expertise, but inside someone else's expertise (see our [ChatGPT paper](https://www.nber.org/papers/w34255)). This knowledge-sharing model predicts that LLMs will flatten comparative advantage, so we should see more home production (people solve their own problems), less trade, and lower returns to experience.