Transcript
Tim Williams: Hello and welcome to another episode of Rubber Duck Radio. I'm your host, Tim Williams, back from a week's vacation in D.C. Here with me is none other than Paul Mason.
Paul Mason: Hey Tim, how was Washington?
Tim Williams: It was awesome! I'll tell you, the number of free museums on the National Mall could keep you there for weeks if you're willing.
Paul Mason: So how many did you hit?
Tim Williams: All told we hit... I think... five if I'm counting right? Keep in mind there are monuments and other non-museum things to do while you're there.
Paul Mason: What were your favorite museums?
Tim Williams: Well, we've talked about this before, so you might not be surprised to learn that the Museum of Natural History is number one for this dino nerd. The second was the National Air and Space Museum. The exhibits there were top-notch.
Paul Mason: I've only been to DC as a kid on a field trip, but I was too young to remember much. What stuck out to you?
Tim Williams: Oh man, where do I even start? So the Natural History Museum has this massive fossil hall called Deep Time, and the centerpiece is a T-Rex... but it's not just standing there like a statue. It's posed mid-fight with a Triceratops. Like, they're locked in battle. It's dramatic as hell.
Paul Mason: That's awesome. Is it a real skeleton?
Tim Williams: Yeah, it's one of the most complete T-Rex skeletons ever found. They call it the Nation's T-Rex. But here's the thing that blew my mind: the whole exhibit covers 4.6 billion years of Earth's history with about 700 fossil specimens. And there's this palm tree fossil... from Alaska.
Paul Mason: Wait, palm trees in Alaska?
Tim Williams: Right?! That's the point—it shows how dramatically Earth's climate has shifted over time. Alaska used to be tropical. The planet's been through a lot.
Paul Mason: That's wild. What else caught your eye?
Tim Williams: Okay, so the Hope Diamond is there too—the famous blue diamond. And I learned the wildest story about how it got to the Smithsonian. So this thing started as a 115-carat stone sold to King Louis the Fourteenth in 1669. It became part of the French Crown Jewels. Then it gets stolen during the French Revolution in 1792, disappears for twenty years, resurfaces in London recut down to 45 carats.
Paul Mason: And then it ends up in DC?
Tim Williams: Eventually! But here's the part I can't get over. Harry Winston—you know, the famous jeweler—he donated it to the Smithsonian in 1958. And he mailed it. In a plain brown package. Via registered mail.
Paul Mason: No way. The Hope Diamond? Mailed like a sweater from Amazon?
Tim Williams: I'm not making this up. The most famous diamond in the world, worth a quarter billion dollars, just... dropped in a mailbox. And you know the curse? The whole "bad luck follows this diamond" thing?
Paul Mason: Yeah, I've heard that.
Tim Williams: Total fabrication. Pierre Cartier made it up in the early 1900s as a sales tactic. And since it's been at the Smithsonian? Nothing but good luck. The curse appears to have gone dormant.
Paul Mason: That's hilarious. Marketing from a hundred years ago still has people spooked.
Tim Williams: Exactly. Now, the Air and Space Museum—that one hit different for me. You walk in and there's the actual 1903 Wright Flyer. The real one. Not a replica. The actual plane that inaugurated the aerial age.
Paul Mason: That's pretty surreal to see in person.
Tim Williams: Totally. And right next to it, in the Destination Moon gallery, they've got the Apollo 11 command module Columbia and Neil Armstrong's spacesuit displayed together. The actual capsule that went to the moon and back. And the suit he wore when he took that first step.
Paul Mason: I love how they pair those things together. The beginning of flight and the moon landing in one building.
Tim Williams: It's a timeline you can walk through. And here's a fun bit of trivia—the museum's first director was Michael Collins.
Paul Mason: Wait, the Apollo 11 astronaut? The one who stayed in the command module?
Tim Williams: That's the one. The guy who orbited the moon alone while Armstrong and Aldrin were walking on the surface ended up running the museum that houses his own spacecraft. There's something poetic about that.
Paul Mason: That's a full circle moment if I've ever heard one.
Tim Williams: Right? And Paul, I have to tell you this because it made me laugh—they have a full-sized X-wing Starfighter from Star Wars hanging outside the planetarium.
Paul Mason: Shut up. Really?
Tim Williams: I'm serious! From The Rise of Skywalker. And they've also got the original 11-foot studio model of the starship Enterprise from the original Star Trek series. So you've got real space history next to fictional space history, and somehow it works.
Paul Mason: That's perfect. Science fiction inspiring science fact all in one place.
Tim Williams: Exactly. You can even touch an actual moon rock. There's something profound about putting your hand on a piece of another world.
Paul Mason: Sounds like an amazing trip. I definitely need to get back there as an adult.
Tim Williams: Oh, you do. And the best part? All of it is free. Every single museum on the National Mall. You can walk through billions of years of history and the entire history of human flight without paying a dime.
Paul Mason: That's one thing I remember from my visit. I love that it's accessible to everyone like that.
Tim Williams: Yeah, it's one of those things that makes you proud, you know? But here's the thing—stepping away from my desk for a week, being out there in the real world, walking through museums, standing in lines, sitting in restaurants... it hit me in a way I wasn't expecting.
Paul Mason: What do you mean?
Tim Williams: So I've been heads down, right? Sixty, maybe eighty hours a week, building with AI, testing workflows, writing code with LLMs, iterating on prompts. It's become second nature. It's just... how I work now. And I think we're both in that boat.
Paul Mason: Oh, absolutely. It's weird when I have to work without it at this point.
Tim Williams: Right? But here's what struck me—being out in the world, watching people work, seeing how things actually get done outside our bubble... I realized something. Almost nobody is using this stuff.
Paul Mason: Yeah?
Tim Williams: I mean, think about it. We talk about AI like it's this massive, civilization-level shift. And I believe it is! But then you watch a museum docent giving a tour, or a restaurant manager coordinating a dinner rush, or a construction crew working on a building... and they're not using ChatGPT. They're not prompting Claude. They're just... working. The way people have worked for decades.
Paul Mason: That's a really good point. We're in this weird bubble where it feels like everyone must be using AI because it's so transformative. But the reality is...
Tim Williams: We're still the early adopters. Like, *early* early. We're the ones who jumped in head first, who saw the potential and started building habits around it. But the rest of the world? They're just... moving along. Business as usual. As if this step change in technology hasn't even happened yet.
Paul Mason: It's almost disorienting when you think about it. In our world, we're already debating context windows and model selection and agentic workflows. And outside that bubble, people are like, "Oh yeah, I've heard of ChatGPT. Never really used it though."
Tim Williams: Exactly! And it made me think about how insulated we are as developers. We're always at the bleeding edge of tooling. But this feels different. This isn't just a new framework or a new text editor. This is a fundamental shift in how knowledge work could be done. And yet most knowledge workers aren't even aware of what they're missing.
Paul Mason: Do you think that's going to change quickly, or are we looking at a slower adoption curve than the hype suggests?
Tim Williams: That's the million dollar question, isn't it? I don't have a clean answer, but I've been thinking about it a lot since I got back.
Tim Williams: And honestly, the answer might be playing out right now in this really interesting battle between OpenAI and Anthropic. Like, have you noticed how much the ground has shifted in the last year or so?
Paul Mason: Oh yeah. It's been wild to watch.
Tim Williams: So here's the thing. OpenAI spent basically all of 2023 and 2024 chasing consumer wins. ChatGPT, Sora, all these flashy product launches. And they were winning the consumer game, right? ChatGPT hit 800 million weekly active users. That's insane. But while they were doing that, they took their eye off the developer ball.
Paul Mason: Big time. I mean, I was still using their API, but it felt like they stopped caring about what developers actually needed. The focus was all on these consumer-facing products.
Tim Williams: Right. And here's the number that blew my mind: OpenAI's enterprise market share went from 50% in 2023 down to 25% by mid-2025. They got cut in half in two years. That's a massive collapse.
Paul Mason: Wait, cut in half? That's... that's huge.
Tim Williams: Yeah. And meanwhile, Anthropic went from like 12% to 32% in the same period. They basically ate OpenAI's lunch.
Paul Mason: And I know exactly why. It's because Anthropic went all in on developers and enterprise contracts while OpenAI was chasing viral moments.
Tim Williams: Exactly! And the coding numbers are even more stark. Claude now holds 54% of the AI coding market. OpenAI is at 21%. That's a complete flip from where we were two years ago.
Paul Mason: Claude Code specifically has been a game changer. I mean, the fact that Microsoft — the company that sells GitHub Copilot — has widely adopted Claude Code internally? That says everything.
Tim Williams: That's the wildest part! And Claude Code hit between one and two billion dollars in ARR just six months after launch. That's faster than ChatGPT's trajectory. Anthropic went from under a thousand business customers to over 300,000 in two years. Revenue jumped from a billion to five billion in eight months.
Paul Mason: The terminal-native approach was smart. Instead of trying to be an IDE plugin, they built something that meets developers where they already work.
Tim Williams: Totally. And here's where it connects back to what we were talking about — OpenAI finally woke up. They had their DevDay in late 2025, and it was all about winning developers back. New APIs, an Agents SDK, AgentKit. They even shut down Sora — which was burning 15 million dollars a day, by the way — to refocus on Codex.
Paul Mason: 15 million a day? On Sora?
Tim Williams: Yeah. And downloads had already tanked from 3.3 million to 1.1 million in three months. Disney was supposedly going to invest a billion dollars, and that fell through when they killed it. It was a mess.
Paul Mason: So they're pivoting back to developers because they have to, not because they want to.
Tim Williams: That's exactly it. And I think it ties into this bigger question about what these companies are actually building toward. OpenAI has been very vocal about this AGI goal. But here's the thing — and I've been reading a lot about this — there's a growing consensus among AI researchers that AGI isn't actually possible with current LLM architecture.
Paul Mason: Yeah, I've seen some of that discussion. What's the core argument?
Tim Williams: So Yann LeCun — he's Meta's Chief AI Scientist, Turing Award winner, one of the godfathers of deep learning — he's been very vocal about this. He says LLMs have four fundamental limitations. One: they have no understanding of the physical world. They only know text, not reality. Two: no persistent memory. Every interaction starts fresh. Three: no genuine reasoning. It's just pattern matching, not deliberative thinking. And four: no planning capability. They can't coordinate multi-step goals autonomously.
Paul Mason: That tracks with what I've seen. They're amazing at pattern matching, but when you need actual reasoning or planning, they fall apart.
Tim Williams: Right. And LeCun's point is that a four-year-old has seen about 10 to the 15th power bytes of visual data. LLMs are trained on 10 to the 13th power bytes of text. We're two orders of magnitude short just on data volume, and more importantly, we're learning from text instead of the physical world. He straight up says LLMs will be obsolete within five years. He's advising young developers not to work on LLMs.
Paul Mason: That's... a strong take from someone at his level.
Tim Williams: It is. And he's not alone. François Chollet — the creator of Keras — he calls LLMs an offramp on the path to AGI. There's academic research showing LLMs can't maintain intellectual consistency, they struggle with novel situations, they can be talked out of correct answers with bad arguments. They're what psychologists call System 1 thinking — reactive, intuitive, pattern-matching. But they lack System 2 — that slow, deliberative, logical reasoning.
Paul Mason: So OpenAI is chasing this AGI goal, but the architecture itself might be a dead end.
Tim Williams: That's the argument. And meanwhile, Anthropic said, you know what, let's not worry about AGI. Let's build tools that developers and enterprises actually need right now. Let's win the coding market. Let's win the enterprise contracts. And they did.
Paul Mason: It's like the tortoise and the hare. OpenAI was chasing this lofty, possibly impossible goal, and Anthropic just put their head down and built practical stuff.
Tim Williams: Right. And here's the moral of the story: while OpenAI was focused on this AGI moonshot, they lost half their enterprise market share. Now they're scrambling to get it back. Anthropic's projected to break even by 2028. OpenAI's not profitable until 2030. The strategy difference is playing out in real numbers.
Paul Mason: And it connects to what you were saying about being in a bubble. Like, OpenAI was in their own bubble — the AGI bubble — and they missed what was actually happening on the ground with developers.
Tim Williams: Exactly. And I think that's the lesson for all of us. It's easy to get caught up in the hype, in the big vision, in what could be possible someday. But the real value — the real adoption — comes from building things that solve actual problems for actual people right now.
Paul Mason: So here's the question that's been rattling around in my head since you sent me that research. If LLMs really are hitting a wall — or even just plateauing — what does that mean for us? For developers who've gone all in on this stuff?
Tim Williams: Yeah, and zoom out even further. What does it mean for the people pouring billions of dollars into this technology? Because we're talking about massive bets on a future that might not materialize the way they're expecting.
Paul Mason: Right. The Sequoia analysis. The $600 billion gap.
Tim Williams: Exactly. David Cahn at Sequoia put out this analysis that's been making the rounds. He's saying that to justify the current infrastructure investment in AI, companies need to generate $600 billion in annual revenue that doesn't currently exist. Current AI revenue is around three to four billion. So you're talking about a gap of almost $600 billion that needs to materialize out of thin air.
Paul Mason: That's... a staggering number. How do you even close that gap?
Tim Williams: Here's the math. Microsoft, Google, Meta, Amazon — they're collectively spending over $200 billion a year on AI infrastructure. At a 20% margin, that requires one trillion dollars in revenue. Total AI software revenue right now? Three to four billion. Even if AI revenue grows a hundred times, it's still not enough.
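[Editor's note: the back-of-envelope math Tim quotes can be sketched in a few lines. This is a rough illustration of the figures mentioned in the episode, not Sequoia's actual model; the spend, margin, and revenue numbers are the ones stated above.]

```python
# Back-of-envelope version of the numbers quoted above (all in billions of USD).
annual_infra_spend = 200   # collective big-tech AI infrastructure spend per year
target_margin = 0.20       # assumed profit margin on that spend
current_ai_revenue = 4     # rough current AI software revenue (high end of "3 to 4")

# Revenue needed so that a 20% margin covers the ~$200B annual spend.
required_revenue = annual_infra_spend / target_margin   # 1000.0, i.e. one trillion
gap = required_revenue - current_ai_revenue             # 996.0, the shortfall

print(f"Required revenue: ${required_revenue:.0f}B")
print(f"Gap vs. current AI revenue: ${gap:.0f}B")
```

Even multiplying current revenue a hundredfold (4 × 100 = 400) still falls short of the trillion-dollar figure, which is the point Tim makes next.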
Paul Mason: So either AI revenue explodes way beyond what anyone's projecting, or...
Tim Williams: Or we're in a bubble. And bubbles burst. Cahn actually draws a parallel to the dot-com era. In 1999, about a trillion dollars worth of fiber optic infrastructure was built. By 2005, only 5% of that capacity was being used. The infrastructure was right — the timing was wrong. Companies went bankrupt. The fiber got bought for pennies on the dollar.
Paul Mason: And the survivors — Amazon, Google — built empires on cheap infrastructure.
Tim Williams: Right. So the question becomes: are we in 1999? Is all this GPU infrastructure the new fiber optic cables?
Paul Mason: I mean, there's one big difference. The fiber optic bubble was built on the promise of the internet transforming everything. And it did — just not as fast as investors wanted. Is AI the same way?
Tim Williams: That's the bull case. Sam Altman says the curve is still exponential, we're just in a transition period. Dario Amodei at Anthropic says we're not hitting a wall, we're hitting a different part of the curve. The J-curve argument — sometimes progress looks flat before it jumps again.
Paul Mason: But then you've got the bears. Gary Marcus is out there saying we're heading for another AI winter.
Tim Williams: Yeah, and he's got historical precedent. AI has gone through these cycles before — the 1970s, the 1980s, the 1990s. Hype builds, funding pours in, the technology plateaus, disappointment sets in, funding collapses. The difference this time is that there's real commercial value. Claude Code is doing one to two billion in ARR. That's not vaporware. But the question is whether there's six hundred billion dollars worth of value.
Paul Mason: Okay, so let's say the bubble does burst. What happens to developers like us?
Tim Williams: Here's the thing — and this is where I think the conversation gets interesting. Developers might actually be in the sweet spot. The productivity gains from AI coding tools are real and measurable. We're talking 30 to 50% faster coding, two to five times faster for boilerplate. That's actual value being created right now, not speculative future value.
Paul Mason: Right. I'm using this stuff every day and it's genuinely useful. I shipped features last week that would have taken me twice as long without Claude. That's real.
Tim Williams: Exactly. And here's the key insight: even if LLMs plateau, learning to work with AI is a permanent skill shift. It's like learning to use a compiler or a version control system. The specific tools might change, but the skill of leveraging AI to be more productive — that's durable.
Paul Mason: But there are risks, right? If investment dries up, what happens to all these free and cheap AI tools we've gotten used to?
Tim Williams: Yeah, that's a real concern. Right now, a lot of these tools are subsidized by venture capital money. Companies are burning cash to gain market share. If the funding environment tightens, API pricing could spike. The days of cheap tokens might end. We could see consolidation — smaller AI companies getting acquired or shutting down.
Paul Mason: So enjoy the cheap AI while it lasts.
Tim Williams: Maybe. But also, don't build your entire workflow around a single vendor or a single model. Diversify. Learn the fundamentals. Be the person who knows how to apply AI to real problems, not the person who's dependent on a specific tool that might not exist in five years.
Paul Mason: That's solid advice. But let's talk about the other side of this — the investors. What happens to them if this bubble bursts?
Tim Williams: Oh, they're the ones who could really get hurt. You've got Microsoft with 13 billion in OpenAI. NVIDIA with a three trillion dollar market cap built on AI chip demand. These are massive bets. If AI revenue doesn't materialize, those valuations become hard to justify.
Paul Mason: But Microsoft and Google — they're not going anywhere, right? They have the cash to weather a downturn.
Tim Williams: Big tech will be fine. They'll write off some bad investments and keep moving. The real carnage would be in the startup ecosystem. A nuclear winter for AI startups. The companies that raised money at massive valuations based on AGI promises — they'd be in trouble.
Paul Mason: And the infrastructure? If it's like the fiber optic parallel, compute gets cheap.
Tim Williams: That's actually good for developers. If GPU prices crash because the speculative demand dries up, the cost of running AI models goes down. The survivors pick up the pieces and build real businesses on cheap infrastructure.
Paul Mason: So the bubble bursting might actually be good for the people building real things with AI.
Tim Williams: That's my read. The technology is real. The productivity gains are real. The valuations are the problem. If you're a developer who's learned to use AI effectively, you're creating value. If you're an investor who bet on AGI arriving next year, you might be in trouble.
Paul Mason: There's a scenario that really worries me though. What if LLMs hit a wall and AGI proves impossible with this architecture?
Tim Williams: That's the LeCun scenario. He's saying LLMs are fundamentally limited. They can't understand the physical world, they don't have persistent memory, they can't genuinely reason or plan. If he's right, then all this scaling — more data, more compute, bigger models — it hits diminishing returns. Maybe we need a completely different architecture.
Paul Mason: And if that's true, what happens to all the companies that bet the farm on LLMs being the path to AGI?
Tim Williams: They pivot. Or they die. OpenAI's already pivoting — that's what the developer focus is about. They're realizing the AGI timeline might be longer than they thought, so they need to build sustainable business models now. Anthropic saw this coming and built for developers from day one.
Paul Mason: So the smart play is: assume LLMs plateau, build tools that create value now, stay flexible for whatever comes next.
Tim Williams: That's it. Build on solid foundations, not hype. The future belongs to people who can apply AI to real problems, not the people waiting for AGI to save the day. And honestly? That's always been true. Every major technology shift — the internet, mobile, cloud — the winners were the ones who built practical solutions, not the ones chasing the most ambitious vision.
Paul Mason: There's something else I've been thinking about. Even if LLMs plateau, they're still incredibly useful. Like, a tool doesn't have to be sentient to be valuable. Excel isn't sentient. Git isn't sentient. They're still essential.
Tim Williams: That's exactly right. And I think that's the healthy mindset. I'm not building my career on the assumption that Claude 5 will be sentient. I'm building my career on the assumption that AI-assisted development is a permanent shift in how we write software. The tools work now. If they get better, great. If they plateau, I still have a massive productivity advantage over developers who didn't learn these skills.
Paul Mason: The investors might lose their shirts, but developers who learn to use AI productively will be fine.
Tim Williams: That's the takeaway. The bubble might burst for investors, but not for developers. The technology is real and useful. The valuations are the problem. Focus on creating value with the tools you have, not speculating on what might come next.
Paul Mason: Alright, I think that's a good place to wrap it up. We covered a lot of ground today.
Tim Williams: Yeah, we went from dinosaur skeletons to the $600 billion AI revenue gap. That's quite a journey.
Paul Mason: And somehow it all connects. Being out in the real world, away from the computer, seeing how normal people work — it gives you perspective. We're in this weird bubble where AI feels inevitable, but most of the world is just... going about their business.
Tim Williams: Right. And the companies that remember that — that focus on real problems for real people — those are the ones winning. Anthropic figured that out. OpenAI's relearning it the hard way.
Paul Mason: And for those of us writing code every day? Keep building. Keep learning. Don't bet everything on AGI showing up to save the day, but definitely take advantage of the tools we have right now.
Tim Williams: Exactly. The technology is real. The productivity gains are real. Whether the investment bubble bursts or not, developers who learn to work with AI are going to be fine. It's the people chasing the hype without building real skills who should be worried.
Paul Mason: Well said. Alright, thanks for sharing the DC stories, and thanks for the deep dive on all this. It's given me a lot to think about.
Tim Williams: Absolutely. And hey, next time you're on the East Coast, definitely make a stop in DC. Just... maybe don't mail your valuables via registered mail. Some traditions are better left in the past.
Paul Mason: [Laughs] Good advice. Although now I kind of want to see what else Harry Winston mailed in a plain brown package.
Tim Williams: Right? There's probably a whole Netflix documentary waiting to be made. Anyway, thanks for listening everyone. If you enjoyed this episode, subscribe, leave a review, all that good stuff. We'll catch you next time on Rubber Duck Radio.
Paul Mason: See you next time. And remember — build on solid foundations, not hype. Future you will thank present you.
Tim Williams: Couldn't have said it better myself. Take care, everyone.