diff --git a/posts/mcps-as-gpts.mdx b/posts/mcps-as-gpts.mdx
new file mode 100644
index 0000000..54ef609
--- /dev/null
+++ b/posts/mcps-as-gpts.mdx
@@ -0,0 +1,41 @@
+---
+title: MCPs can be great GPTs
+description:
+slug: mcps-as-gpts
+date: Jun 19, 2025
+---
+
+OpenAI [introduced GPTs](https://openai.com/index/introducing-gpts/) back in November 2023. At their core, GPTs are system prompts and knowledge you can chat with. A few months later, in January 2024, they released the GPT Store for people to share their experts with one another. Popular GPTs include
+[Data Analyst GPT](https://chatgpt.com/g/g-HMNcP6w7d-data-analyst), [Astrology Birth Chart GPT](https://chatgpt.com/g/g-WxckXARTP-astrology-birth-chart-gpt) (really, it's the #1 at the time of writing),
+and [Scholar GPT](https://chatgpt.com/g/g-kZ0eYXlJe-scholar-gpt) for interfacing with Google Scholar and PubMed.
+
+The idea sounds great, but GPTs never reached critical mass. "Distribution is King" [^1], as the kids say. Well, that, and OpenAI's product marketing has [historically been lacking](https://futurism.com/the-byte/sam-altman-gpt-name-change), and web search and Perplexity meet a lot of the same needs.
+
+Anthropic released the first public spec for MCPs in [November 2024](https://www.anthropic.com/news/model-context-protocol). They blew up. Vibecoders and AI-assisted developers could equip Cursor or their agents with tools someone else wrote.
+We're primarily seeing MCPs for interfacing with APIs. Anthropic includes Linear, PayPal, and Atlassian in their [Remote MCP server examples](https://docs.anthropic.com/en/docs/agents-and-tools/remote-mcp-servers).
+
+That's pretty cool, but it's not really anything LLMs couldn't write before. As developers and vibers move to agentic workflows,
+it makes sense for MCPs to equip the agent with what it needs, instead of doing the work for it. Give LLMs the knowledge to write their own "MCPs" for their exact use case.
+
+Take Next.js for example. When Next.js 15 was released, no frontier model had knowledge of it for months. The same is true of any new software release.
+There are two main tools in the Prompt Engineering Toolbox for this:
+
+1. A RAG setup. Set up your own dataset and retrieval pipeline for LLMs to query. High effort, high reward.
+2. Web search / agentic search. An agent can search the web or filesystem looking for the docs, Stack Overflow answers, etc. There's no guarantee it does a good job, and it opens you up to [prompt injection](https://simonwillison.net/2023/May/2/prompt-injection-explained/).
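The retrieval half of option 1 can be surprisingly small. Here's a minimal sketch using bag-of-words cosine similarity over a few hypothetical Next.js 15 doc snippets (everything here is illustrative; a real setup would use embeddings and a vector store instead of word counts):

```python
# Minimal RAG retrieval sketch: rank doc snippets against a query
# by bag-of-words cosine similarity, then stuff the top hits into a prompt.
import math
from collections import Counter

# Hypothetical doc snippets standing in for a scraped docs dataset.
DOCS = {
    "app-router": "Next.js 15 uses the App Router; fetch requests are no longer cached by default.",
    "caching": "In Next.js 15, GET Route Handlers are uncached by default.",
    "astrology": "A birth chart maps planetary positions at the moment of birth.",
}

def vectorize(text: str) -> Counter:
    # Naive tokenizer: lowercase, split on whitespace.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, vectorize(DOCS[d])), reverse=True)
    return ranked[:k]

# Assemble the retrieved snippets into context for the LLM.
top = retrieve("how does caching work in Next.js 15?")
prompt = "Answer using these docs:\n" + "\n".join(DOCS[d] for d in top)
```

The "high effort" part isn't this loop; it's curating the dataset, chunking the docs well, and keeping the index fresh as new versions ship.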
+
+