Transcript
Tim Williams: Hey everyone, welcome back to Rubber Duck Radio. I'm Tim Williams...
Paul Mason: And I'm Paul Mason. And, uh, we are back with another episode for you. Episode 12 I believe!
Tim Williams: So today we've got a loaded show for you. There is a LOT happening in the AI world right now, and honestly, some of it is... well, let's just say it's generating some feelings. Some strong feelings. But first — Paul, how you doing? You survived another week in the trenches?
Paul Mason: Yeah, I'm good. I'm good. You know, the usual — shipped some stuff, broke some stuff, fixed the stuff I broke. The circle of life.
Tim Williams: The developer lifecycle in a nutshell. Alright, so let's get into it. The big news this week — and I mean BIG — Claude Opus 4.7 dropped. And Paul, I gotta tell you, the reception has been... rough. Like, genuinely rough. Not the usual "oh, it's slightly different" griping. We're talking Reddit threads with hundreds of comments calling it a regression. People are genuinely upset.
Paul Mason: Yeah, I've been watching this unfold and it's... it's not pretty. The r/ClaudeAI and r/Anthropic subs are basically on fire right now. You've got people saying it ignores instructions, it's lazier than 4.6, and then there's that MRCR long-context number that's got everyone rattled.
Tim Williams: Oh, the MRCR numbers. Here's the thing — and this is the part that really got me — on the MRCR benchmark, which is, um, the standard test for how well a model can find and reason over information buried in a long document... Opus 4.6 scored 78.3%. Opus 4.7? 32.2%. That is not a regression, Paul. That is a collapse. That is a building falling down.
Paul Mason: So it's less than half as good at long-context retrieval? That's... that's a big deal for people using it for legal docs, research, anything where you're throwing big documents at it.
Tim Williams: Exactly! And Anthropic's response, as best I can tell, is that MRCR is an artificial "needle in a haystack" test that doesn't reflect real-world use. Which, okay, I hear that argument. But here's my counter — if you're going to use that benchmark to sell me on the model when the numbers are good, you don't get to dismiss it when the numbers crater. You don't get to have it both ways.
Paul Mason: Totally. That's exactly how I see it. You can't cherry-pick your benchmarks.
Tim Williams: Right. And then — okay, this is where it gets fun — the memes have been absolutely flying. And the one that's been cracking me up is the "strawperry" thing. You've seen this, right? People are asking Claude how many letter R's are in "strawperry" — spelled with a P instead of the B — and of course Claude can't do it. It's the old strawberry problem, but they've misspelled it to make it even worse, and Claude just face-plants.
Paul Mason: So... okay, look. I have to push back on this a bit. I know it's a funny meme, and I get why people are sharing it, but the letter-counting thing is not really a good test of an LLM's capabilities. LLMs don't "see" letters. They process tokens, not characters. The word "strawberry" might be tokenized as like three chunks — "straw," "ber," "ry" — and the model has no internal representation of individual letters. It's like asking a person who only reads whole words at a glance to count the individual letters. It's just not how the system works.
Tim Williams: Here's the thing though, Paul — and I hear you, I genuinely do — but when you're paying for Claude, especially at the $200 a month tier, you expect better. And here's where my actual complaint lives: LLMs CAN use tools to answer questions they aren't internally good at. If you ask me how many R's are in "strawperry," a well-designed agent could just write a tiny Python script, run it, and give you the right answer. Boom. Done. ChatGPT has had code execution in its interface for how long now? Years. And Anthropic's reluctance — or slowness, or whatever you want to call it — to make those kinds of tools available to the agent in their chat interface? That's not excusable. Not at this price point. Not when your competitor has had it working forever.
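[Editor's note: Tim's point about tool use is easy to demonstrate. A minimal sketch of the kind of throwaway script an agent could write and execute to sidestep the tokenization problem entirely — plain Python, nothing assumed beyond the misspelled word itself:]

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

# The meme word, letter by letter: s-t-r-a-w-p-e-r-r-y
print(count_letter("strawperry", "r"))  # → 3
```

Character counting is trivial in code; the failure Tim describes is purely about the chat interface not letting the model reach for a tool like this.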
Paul Mason: Yeah, I'd add one more thing to that. It's not like Anthropic doesn't have the tools. Claude Code has execution. The API supports tool use. They've got the infrastructure. It's the consumer-facing chat interface that's lagging behind. And that gap is exactly where people live — that's where the $200/month users are spending their time.
Tim Williams: Consider this — you're paying top-tier money, you're getting a model that's arguably regressed on long-context tasks, and the interface you're using it in doesn't give it the tools it needs to compensate for its known weaknesses. That's a bad combo. The moral of the story here is pretty clear: the model is only as good as the tools it's allowed to use. And right now, Anthropic is shipping a sports car with no steering wheel and then blaming the driver for crashing.
Paul Mason: That's... yeah, that's a pretty good analogy actually. The car metaphor works.
Tim Williams: I try. So yeah — Opus 4.7, rough launch, community's not happy, and I think for good reason. Alright, let's move on to the next topic because there's plenty more to get into...
Paul Mason: I'm here for it.
Tim Williams: Alright, so the next topic — and this one's near and dear to both of us because we're both in positions where we help write job descriptions and sit in on hiring decisions — is the state of junior developers in the age of AI. Paul, I've been reading some wild stats on this. Entry-level tech hiring down 25% year-over-year. Stanford just released a study showing a 13% employment decline for workers aged 22 to 25 in AI-exposed roles. And for software developers specifically in that age bracket? Nearly 20% drop from the late 2022 peak.
Paul Mason: Yeah, and I've seen it firsthand. We post a junior role and we get hundreds of applications. But here's the thing — and I'm being honest here — when we're sitting in that hiring meeting, the conversation has changed. It used to be, "Can they code? Do they understand the fundamentals?" Now it's, "Can they work with AI? Do they understand how to break down a problem into pieces an AI can actually execute?" And if the answer is no, they're not getting the offer.
Tim Williams: Here's the thing — and I need to be clear about this because I see companies making this mistake — the mindset that "AI can do the work of a junior developer" is shortsighted. AWS CEO Matt Garman called it "one of the dumbest things I've ever heard." And he's right. Consider this: if you stop hiring juniors today, who becomes your senior developer in five years? You're eating your seed corn. You're creating a talent pipeline crisis. But I also get it from the hiring manager perspective — you've got pressure to ship faster, to do more with less. So what are we actually looking for?
Paul Mason: So from my side — and Tim and I both write job reqs, we both sit in these meetings — here's what we're actually screening for. First, can you understand code at a low enough level that you can spot when the AI is hallucinating? Because it will. Second, do you know how to build constraints around the AI? Like, can you write a test harness that catches regressions before they hit production? Third, and this is huge — do you understand testing patterns? Not just "write a test," but knowing what to test, what the edge cases are, and what could break when the AI refactors something.
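[Editor's note: a minimal sketch of the "constraints around the AI" idea Paul describes. The helper function and its behavior are hypothetical, chosen only to illustrate the pattern: edge-case tests that pin down behavior so an AI refactor can't silently change it.]

```python
# A hypothetical helper an AI assistant might be asked to refactor.
def normalize_username(raw: str) -> str:
    """Lowercase, strip surrounding whitespace, join internal words with dots."""
    return ".".join(raw.strip().lower().split())

# Edge-case regression tests: these encode behavior that must survive any
# refactor, AI-generated or not. If a rewrite breaks one, the harness catches it
# before production does.
def test_normalize_username():
    assert normalize_username("  Ada Lovelace ") == "ada.lovelace"
    assert normalize_username("ALICE") == "alice"
    assert normalize_username("a  b") == "a.b"   # repeated spaces collapse
    assert normalize_username("") == ""          # empty input stays empty

test_normalize_username()
```

The point isn't the helper itself; it's that the tests capture the edge cases (whitespace, casing, empty input) a one-shot AI rewrite is most likely to get wrong.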
Tim Williams: I'd add one more thing to that — and this is probably the biggest shift I've seen. It's not just about the technical skills anymore. Can you take a vague problem from a stakeholder — and you know how this goes, right? They say "I need a dashboard" and what they actually need is... something else entirely — can you break that down into executable parts? Can you sit with them, ask the right questions, and shepherd them through the process of explaining what they actually need? Because the AI can't do that. The AI can't read the room. It can't navigate the politics of "what we said we wanted" versus "what we actually need."
Paul Mason: That's exactly it. The moral of the story here is that the bar has moved, not disappeared. We're not looking for code monkeys anymore — honestly, we haven't been for a while, but now it's even more pronounced. We're looking for people who can think critically about what the AI produces, who can design systems that keep the AI on the rails, and who can translate human ambiguity into machine-executable specifications. That's the job now.
Tim Williams: Yeah, and I think that's actually good news for the right candidates. If you're a junior developer listening to this — don't panic. The opportunity is still there, but you need to show us you can work with AI, not just code without it. Build projects where you use AI tools. Show us you can debug AI output. Show us you can write a good prompt, then validate the result. That's what's going to get you hired.
Paul Mason: And to the hiring managers listening — and I know we've got some in our audience — here's my plea: don't cut out the juniors. Yes, AI can do the grunt work. Yes, a senior with AI can output more code. But you're building a debt. You're borrowing against your future talent pipeline. The companies that win in five years are the ones investing in juniors now, training them to work alongside AI, teaching them the fundamentals while leveraging the productivity gains. That's the balance.
Tim Williams: Totally. And I'll say it one more time for the juniors out there: future you will thank present you for learning how to work with AI now. Don't fight it, don't ignore it. Make it your ally. That's the path forward.
Paul Mason: All is not lost, there's still opportunity out there. Hang in there and make yourself stand out!
Tim Williams: Alright, so we've talked about what hiring managers are looking for. But here's the million-dollar question — where are you actually supposed to find these jobs? Because if you're still treating LinkedIn like it's 2019, I've got some bad news for you.
Paul Mason: Oh man, don't get me started. LinkedIn is basically unusable now. I have a friend who applied to like fifty jobs last month and I'm pretty sure half of them were AI bots harvesting his resume data.
Tim Williams: Yeah, and you're not imagining it. Recent analysis shows that nearly one in three job postings on LinkedIn are fake — ghost jobs left open for months, data harvesting operations, or straight-up AI bot scams. One developer applied to a hundred jobs in thirty days, got a hundred and six profile views, and zero interviews. Zero. LinkedIn isn't broken — it's working exactly as designed for everyone except the people who actually need it.
Paul Mason: That's... that's depressing. So where are people actually finding work? Because I can't just tell junior devs to give up on job hunting.
Tim Williams: Okay, so here's the thing — LinkedIn still has the network effect for recruiters, so you can't completely opt out. But for actually showing your work as a developer and finding real opportunities? There are much better options. First, GitHub. Make sure your profile has a customized readme. It's free, it ranks for your name on Google faster than almost anything else, and every developer already has one. Your commit graph, your contributions, your actual code — that's your real resume.
Paul Mason: Yeah, totally. I'd add one more thing — there's this new platform called forg.to that's pretty interesting for developers who build things. It aggregates your work from GitHub, LeetCode, dev.to, Medium, YouTube, all into one live profile that updates with your actual activity. Not a static portfolio, but a timeline showing what you're working on right now.
Tim Williams: Exactly. And then there are Discord communities — and I'm talking about real developer communities, not some sketchy job board server. Programmer's Hangout has over a hundred thousand members with active career channels. There are React servers with two hundred thousand members. These aren't just job boards — they're places where you can get real-time resume reviews, interview prep with actual peers, and job postings from people who are actually hiring.
Paul Mason: Same here. I've seen people land gigs through Discord servers that they never would've found on LinkedIn. The key is you're talking to actual humans, not bots. And you can verify — if someone posts a job, you can ask questions in real-time, see who else is applying, get feedback on your approach.
Tim Williams: Here's the moral of the story: LinkedIn is still worth having a profile on because recruiters are there, but don't treat it like your primary job search tool in 2026. Your energy is better spent building in public on GitHub, contributing to open source, engaging in real developer communities on Discord, and creating that living portfolio on platforms like forg.to. The jobs are still out there — you just need to look where the actual developers are, not where the AI bots are harvesting resumes.
Paul Mason: Yeah, and I'd say to any junior devs listening — future you will thank present you for building that real portfolio now. Don't just apply to jobs. Build something, ship it, post about it in these communities. Show up where the work is, not where the noise is.
Tim Williams: Alright, so let's talk about another thing that's changed — open source contributions. Used to be, this was the golden ticket for junior developers. You'd find a project you loved, hunt down some bugs, submit a few thoughtful pull requests, and suddenly you had proof you could work in a real codebase. Hiring managers would take notice.
Paul Mason: Yeah, and that's exactly the problem. Now with AI, everyone is doing this. You've got thousands of candidates submitting AI-generated pull requests to popular repos, and maintainers are drowning in it. There's actually a term for it now — "slop PR." Low-effort, AI-generated contributions that often introduce more problems than they solve.
Tim Williams: Here's the thing — and this is critical for junior developers to understand — as the bar for what qualifies as good software gets raised by AI, the bar for what's expected from job candidates also gets raised. You can't expect hiring managers to be impressed by low-effort AI slop pull requests anymore. It's noise. It signals that you're willing to shotgun AI-generated code at maintainers and hope something sticks. That's not the kind of developer anyone wants to hire.
Paul Mason: Totally. I've talked to maintainers who are just exhausted by it. They're getting dozens of PRs a week that are clearly AI-generated — wrong coding style for the project, missing context, solving problems that don't exist, or introducing subtle bugs. Some projects have literally started adding "no AI-generated PRs" to their contribution guidelines, which is... yeah.
Tim Williams: So what should junior developers do instead? The answer is: you need to demonstrate high-effort projects that clearly are NOT one-shot slop. Hiring managers can spot the difference immediately. A slop PR is a quick fix to a popular repo with no context. A high-effort project shows sustained engagement, deep understanding, and genuine contribution.
Paul Mason: Right. So what does that actually look like? I'd add one more thing — it's not just about the code. It's about the engagement. Did you open an issue first and discuss the problem with maintainers? Did you understand the project's architecture and coding conventions? Did you write tests? Did you handle edge cases? Or did you just prompt "fix this bug" and submit what came out?
Tim Williams: Consider this: instead of firing off ten AI-generated PRs at ten different popular repos, what if you spent a month deeply contributing to ONE project? Become a regular. Understand the codebase. Build relationships with the maintainers. Submit thoughtful, well-tested improvements that show you've actually learned the system. That's a signal. That's something a hiring manager can look at and say, "This person knows how to work in a real codebase with real people."
Paul Mason: Yeah, and another thing — build your own projects that solve real problems you've experienced. Not a todo app, not a weather dashboard. Something that shows you can identify a problem, design a solution, and execute it over time. Document your process. Write about the decisions you made. Show that you can think, not just prompt.
Tim Williams: The moral of the story is this: AI has commoditized low-effort contributions. What used to be a differentiator is now noise. If you want to stand out as a junior developer, you need to go deeper, not wider. Show sustained effort. Show genuine understanding. Show that you can collaborate, not just generate. Because here's the thing — hiring managers aren't looking for people who can use AI. They're looking for people who can think, and AI is just a tool in your toolbox. Make sure you're the one holding the hammer, not the other way around.
Paul Mason: Don't get us wrong: using AI is essential, and being comfortable with it is expected. But it's the level of effort and understanding you can demonstrate without AI that counts. It's about showing you can sustain effort over time.
Tim Williams: Alright, so if open source contributions are saturated with slop PRs, and LinkedIn is full of bots... where does that leave you? What actually demonstrates skill in 2026?
Paul Mason: Great question. And speaking of demonstrating actual skill — let's talk portfolios. Because the bar has shifted dramatically. A few years ago, a full-stack e-commerce app on your resume would turn heads. Now? Hiring managers see hundreds of those. And with AI, they know a lot of them are one-shot AI slop. So what actually impresses in 2026?
Tim Williams: Here's what separates you from the pack: production signals. Hiring managers aren't looking for Jupyter notebooks with model.predict(). They want to see how you handle failures, structure data, connect systems, and ship working software. The projects that get callbacks in 2026 fall into five categories. One: RAG pipelines that connect LLMs to real data with proper error handling and source attribution. Two: structured data extraction with validation — not just raw text output. Three: tool-calling AI agents that can take autonomous actions with retry logic and conversation memory. Four: evaluation pipelines that actually test your AI system for hallucinations and relevancy. And five: everything deployed behind an API with rate limiting, monitoring, and a Dockerfile. That last one alone puts you ahead of 90% of applicants.
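[Editor's note: a minimal, stdlib-only sketch of item two on Tim's list — structured extraction with validation instead of raw text output. The schema, field names, and the invoice example are all hypothetical; in a real pipeline the `raw` string would come from an LLM API call.]

```python
import json

# Hypothetical schema for an invoice-extraction task.
REQUIRED_FIELDS = {"vendor": str, "total": float, "currency": str}

def validate_extraction(llm_output: str) -> dict:
    """Parse an LLM's JSON response and reject anything malformed.

    Failing loudly here, instead of passing bad data downstream, is the
    'validation' signal hiring managers are looking for.
    """
    data = json.loads(llm_output)  # raises ValueError on non-JSON output
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise TypeError(f"{field} should be {expected_type.__name__}")
    return data

# Simulated model response — stands in for a real API call.
raw = '{"vendor": "Acme Corp", "total": 1249.99, "currency": "USD"}'
invoice = validate_extraction(raw)
```

The same shape scales up: swap the dict schema for a typed model, add retries on validation failure, and you have the skeleton of the "structured data extraction with validation" project Tim describes.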
Paul Mason: Yeah, and I'd add — it's not just the code. How you document matters. Write a decisions.md file explaining why you chose ChromaDB over Pinecone, why your chunk size is 1,000 tokens, why you used gpt-5-mini instead of gpt-5. Hiring managers want to see your reasoning, not just your final product. And record a 2-minute Loom video walking through your code, showing it run, explaining one failure you hit and how you fixed it. That alone will make you stand out. Most candidates just drop a GitHub link and hope for the best. Show them you can communicate your technical decisions.
Tim Williams: And here's the thing that really matters: connect your projects together. Don't have five disconnected repos. Build Project 4 to test Project 1. Wrap Project 1 with Project 5's API. Show systems thinking. Show that you understand how pieces fit together in a real architecture. The AI job market in 2026 rewards builders over learners. Anyone can complete a course and get a certificate. But can you ship? Can you handle when things break? Can you prove your system actually works? That's what gets you hired.
Paul Mason: Future you will thank present you for building these skills now. Don't fight AI — make it your ally. Use it to build better projects faster, but make sure you understand what it's producing. Be the person who can validate, debug, and deploy AI-generated code. That's the junior developer companies are fighting over right now.
Tim Williams: Alright, before we wrap up, I want to call out something Anthropic is doing really well right now — their marketing. And I know, I know, marketing from an AI company sounds like the last thing we'd care about, but hear me out.
Paul Mason: Yeah, I've noticed this too. What specifically are you thinking of?
Tim Williams: So first, the "Keep Thinking" campaign. They launched this with Mother London, and the whole aesthetic is intentionally warm and retro-inspired. Instead of going for that intimidating, futuristic AI vibe everyone else does, they're blending the familiar with the unfamiliar. It's cozy. It's approachable. And it positions Claude as a thinking partner, not a replacement.
Paul Mason: Totally. And it's working. They've nearly tripled their annualized revenue from $7 billion to over $19 billion, and Claude has like 70% share of U.S. business spending on AI chat subscriptions. That's not an accident.
Tim Williams: Exactly! But here's the thing I love — and this is where they're really punching above their weight — it's the little details. Have you seen Clawd?
Paul Mason: The little 8-bit mascot in the terminal? Yeah, I saw that. It's kind of adorable in a weird way.
Tim Williams: Right! So Clawd is this little pixel sprite that pops up in your terminal when you start a Claude Code session. No productivity gain, no functional purpose whatsoever — it's just... there. Saying hello. And apparently when Claude is thinking, sometimes the status says it's "ruminating" or — get this — "lollygagging."
Paul Mason: "Lollygagging"? That's amazing. I love that.
Tim Williams: It is! And here's why this matters — in an age where everything is optimized and streamlined into oblivion, these little touches of personality stand out. One writer called it "putting a lava lamp in a server room." It turns the terminal from a task space into a relationship space. It's emotional design, and it builds stickiness. You come back not because you have to, but because you kind of want to see what Clawd will do today.
Paul Mason: That's a really good point. And I haven't seen other AI coding assistants do this. Cursor doesn't have a mascot. Codex doesn't. Even Gemini CLI just tells nerdy jokes occasionally, but there's no consistent personality.
Tim Williams: Alright, I think that's a good place to wrap this up. We covered a LOT today — Claude 4.7's rocky launch, the junior developer hiring crisis, what managers are actually looking for, the slop PR problem, where to find real job opportunities, portfolio projects that stand out, and even gave Anthropic some props for their marketing game.
Paul Mason: Yeah. If there's one thing I hope junior developers take away from this, it's that the bar has moved, not disappeared. You still have a path forward — you just need to be more strategic about how you show up.
Tim Williams: And the moral of the story is this: AI is a tool, not a replacement for thinking. The developers who will thrive are the ones who learn to wield it wisely — who understand the code well enough to spot when it's wrong, who can break down ambiguous problems into executable specs, and who can validate output with real testing. That's not going away. If anything, it's more valuable now.
Paul Mason: Totally. And to all the juniors out there grinding: don't panic. Build something real. Ship it. Document it. Show us how you think. Future you will thank present you for putting in the work now.
Tim Williams: Alright folks, that's our show for today. Thanks for hanging out with us on Rubber Duck Radio. If you found this helpful, drop us a rating and review — it really does help other developers find the show. And let us know on Twitter or Discord what topics you want us to tackle next.
Paul Mason: Yeah, thanks everyone. We'll be back next week with another deep dive. Until then, keep shipping, keep learning, and remember — you're the one holding the hammer. Make sure AI stays the tool, not the craftsman. See you next time.