S1E3 · November 6, 2025 · 26:21

DevEx, AWS Layoffs, and Cursor Update Fatigue


Tim Williams (host) · Paul Mason (host)

Show Notes

Join Tim Williams and Paul Mason on Rubber Duck Radio as they dissect AWS's massive layoffs and the true role AI plays in the tech industry's ongoing transformation. Dive into the real reasons behind corporate restructuring, the battle for developer mindshare, and the pivotal role of developer experience in winning the AI platform race. Discover why 'AI isn't taking your job, it's changing why your job exists,' and learn how to navigate the evolving landscape of AI tools and strategies. Perfect for developers seeking insight into the shifting tech climate and how to stay ahead.

Transcript

Tim Williams: Heyo, welcome to another episode of Rubber Duck Radio. Once again I am your host, Tim Williams, a lead developer working with a crack team of talented developers trying to surf on top of the AI tools while maintaining the highest-quality software development output at the same time.

Tim Williams: With me again is Paul Mason, a full-stack AI native from Washington. Welcome again, Paul!

Paul Mason: Hey Tim, guess we're making this a habit, huh?

Tim Williams: Indeed we are. Anyway, today I want to kick it off with the elephant in the room. AWS just announced over fourteen thousand layoffs, and the narrative they're going with is "AI did it."

Paul Mason: I heard about that, but also that it may end up being up to thirty thousand people?

Tim Williams: That's what the press releases say.

Paul Mason: By the way you bring that up, it sounds like you don't believe the narrative they're spinning about AI efficiency, huh?

Tim Williams: I don't. I think it's AI-related, but in a completely different way than the narrative suggests. Industry insiders are saying it's likely a strategic move to try to gain back market share in the GPU AI computing race.

Paul Mason: Yeah, I've been hearing that too. But it's funny — whenever I see "AI is taking jobs" in a headline, I immediately picture some exec in a Patagonia vest standing in front of a PowerPoint slide that just says "Efficiency."

Tim Williams: Yeah. The truth is, it's AI-related, but not in the way they want people to think. They're not automating thousands of engineers overnight — they're freeing up capital.

Paul Mason: You mean for GPUs, right?

Tim Williams: Exactly. The whole market right now is a race for compute dominance. OpenAI, Google, Anthropic — they've got the APIs developers actually like to use. AWS has Bedrock… which, let's be honest, doesn't.

Paul Mason: Oh man, don't get me started. Bedrock feels like trying to launch a satellite through a ticketing system.
Tim Williams: Right? If you've ever tried to spin up a quick LLM prototype with OpenAI — it's just you, an API key, and maybe 30 minutes. With Bedrock, it's IAM roles, Textract permissions, waiting for access approval, and proof you're not some GPU-hungry hobbyist.

Paul Mason: So what you're saying is, they didn't automate people out of jobs — they automated excuses.

Tim Williams: Exactly. "AI did it" makes for a clean headline. But the truth is, AWS is trying to claw back relevance in a market that's moving faster than their internal bureaucracy.

Paul Mason: Yeah, and it's not just AWS. Every company uses tech buzzwords to cover restructuring. Remember when "digital transformation" was the corporate way of saying "we're downsizing"?

Tim Williams: Oh yeah. Then it was "cloud transformation," now it's "AI automation." It's the same pattern. New tech term, same financial playbook.

Paul Mason: So this isn't about replacing workers with AI — it's about reallocating resources.

Tim Williams: Bingo. They're slashing human capital in areas like management and middle operations to pump money into compute infrastructure. They're betting on long-term GPU dominance, not short-term coding efficiency.

Paul Mason: Makes sense. Because AI can't yet replace most of what developers do — the problem-solving, the architecture, the last 20% that actually matters.

Tim Williams: Exactly. AI's great at filling in the blanks, but it's blind to context. AWS knows that too. They're just using the narrative to justify a pivot.

Paul Mason: So when they say "AI's making us more efficient," what they really mean is "we're shifting budget from people to power."

Tim Williams: Yes — from payroll to power draw. That's the real story.

Paul Mason: It's wild though — building all this AI infrastructure still requires humans. Tons of them.

Tim Williams: Right, but it's a different kind of human. Fewer policy layers, more infrastructure specialists. Fewer middle managers, more GPU engineers.
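For listeners who want to see the gap Tim describes: the "just you, an API key, and maybe 30 minutes" prototype is roughly the sketch below, using OpenAI's Python SDK. The model name and prompt are illustrative, not a recommendation; the point is how little setup stands between you and a first successful call.

```python
# Minimal LLM prototype: one dependency, one environment variable.
# (pip install openai; export OPENAI_API_KEY=...)
import os


def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble the single payload the chat completions endpoint needs."""
    return {
        "model": model,  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
    }


def main() -> None:
    # The entire setup cost is this import and one env var -- no IAM
    # roles, no access-approval queue, no onboarding wizard.
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        **build_chat_request("Summarize this release note in one line.")
    )
    print(response.choices[0].message.content)

# Call main() to run the prototype once your API key is set.
```

Contrast that with a Bedrock first call, which (as the hosts note) typically adds model-access approval, IAM policy, and region/model selection before the equivalent request goes through.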
Paul Mason: And if you're inside AWS right now, you're either working on that future, or you're probably not working there anymore.

Tim Williams: That's a bit dark, but yeah — pretty much.

Paul Mason: So, bottom line — they're not victims of AI efficiency. They're victims of bad timing and worse developer experience.

Tim Williams: Exactly. Bedrock just isn't competitive. It's like showing up late to the party with a new phone that can't run the same apps. AWS dropped the ball early, and now they're trying to spin the recovery as "innovation."

Paul Mason: And that "AI ate our jobs" narrative helps them stay in the cool kids club.

Tim Williams: Yep. It's PR judo — redirect the blow to sound futuristic.

Paul Mason: So if you're a developer listening to this, the takeaway is: don't buy the narrative at face value. Learn where your company's money is actually going.

Tim Williams: Perfectly said. Because AI isn't taking your job — it's changing why your job exists.

Paul Mason: Oof. That's tweetable. Or, uh, ex-able? I'm still not on that train.

Tim Williams: "AI isn't taking your job, it's changing why your job exists." — Rubber Duck Radio, 2025.

Paul Mason: So what's your bet, Tim — does AWS recover from this?

Tim Williams: Honestly, maybe. But only if they rebuild Bedrock to match OpenAI's developer experience. Until then, they're going to keep bleeding mindshare to companies that actually understand how developers think.

Paul Mason: Yeah, Bedrock needs less friction, not more compute.

Tim Williams: Exactly. You can have all the GPUs in the world — if no one wants to use your API, you've already lost. AWS has always been focused on exposing the primitives, not making it easy to use those primitives.

Paul Mason: And that's the real automation problem — not that AI is replacing people, but that bad strategy is replacing leadership.

Tim Williams: Amen to that.
Think about it though: the easier you make it for developers to use AWS's services, the easier it is for AI to understand those services.

Tim Williams: When you provide things like frameworks, SDKs, use cases, and documentation, and come out to meet developers where they're at, like OpenAI has, you'll capture the market even if your service is more expensive.

Paul Mason: We see the same pattern with services like Stripe. They came and ate PayPal's lunch because they provided an excellent developer experience.

Tim Williams: Right. Let's unpack Stripe as a case study before we circle back to AWS. Stripe started out of the gate with a developer-as-customer mindset. Their docs are legendary — live code snippets, clear quickstarts, a smooth integration path.

Paul Mason: Yeah. One dev I saw said: "I certainly prefer to just deal with Stripe and move on." That sums up the difference. Less friction and developer joy equals market advantage.

Tim Williams: Exactly. And what do you and I build as developers? We build tools, libraries, prototypes, proofs-of-concept. When the barrier to entry is low, you experiment, you iterate. When it's high — IAM roles, 10-step onboarding, opaque pricing — you give up.

Paul Mason: So in terms of AI services: OpenAI provides a very "drop-in" experience. You get an API key, a model, you send prompts. Meanwhile, with AWS's generative-AI offering, Amazon Bedrock, you often hit more complexity: choosing models, dealing with permissions, more setup.

Tim Williams: That's the rub. AWS has the infrastructure, the GPUs, the data centres — but developer inertia matters. A startup or a side project won't necessarily pick what has the best hardware; they'll pick what they can move fastest on.

Paul Mason: And devs bring peers. If you build a prototype on OpenAI in an afternoon, you show it to your team, you push it to production. That becomes a reference point. Then when budget decisions come — "which platform do we use?"
— the platform with the momentum wins.

Tim Williams: It ties right back to your classic "last 20%" problem: the heavy lifting isn't the model architecture, it's the orchestration around it. What the SaaS companies that win understand is: give me good defaults, help me get started, remove the friction. The rest is execution.

Paul Mason: Exactly. And when you have that developer-centric flywheel, you get network effects. More integrations, more community, more tools built around your platform — reinforcing your leadership.

Tim Williams: So let's tie this back to AWS and their layoffs. They're pivoting heavy capital into compute, into infrastructure, into model services. But if their dev onboarding and developer experience don't keep pace, they'll face attrition of mindshare.

Paul Mason: Which means the narrative "we're automating jobs via AI" becomes a sideshow. The real risk for them is losing the developer community — and once you lose that, you're not just behind on jobs, you're behind on every layer of your stack.

Tim Williams: Right. Because if devs aren't using your APIs, your ops and model services become internal cost centres instead of growth engines. The layoffs then aren't just about efficiency — they're about catching up.

Paul Mason: And you'll notice a pattern with the companies that have won the AI platform race so far: they came with strong dev experience, straightforward SDKs, minimal friction, strong docs, and community.

Tim Williams: So the takeaway for our listeners who are developers or engineers: when your company says "we're using AI to automate," ask: "Which platform? How fast can I prototype? How many steps before I get to a working result?" Because the faster that goes, the more likely you're working on the winning side.

Paul Mason: And when you see the job-cut narrative tied to "AI automation," you should ask: "Is this about replacing people with models?
Or about reallocating budget into infrastructure and shifting developer momentum to someone else?"

Tim Williams: That's it. And if you're advising leadership, you say: "Dev experience isn't a nice-to-have. It is the moat." Without it, you have the hardware, but no one building anything with it.

Tim Williams: Alright, Paul — let's get into the nuts and bolts. You and I have been talking about how developer experience (DevEx) is the battleground for these AI platforms. But how do you actually measure that? What are the metrics you want to watch if you're building a platform — or working at one?

Paul Mason: Good question. One big category: onboarding time. How long does it take a new developer to go from signing up for your API or platform to actually running something meaningful? Metrics sites call this "time to first win" or "time to first successful call."

Tim Williams: Right — we like to call it "time to hello-world in production." If you have to jump through five screens, get IAM roles approved, and read 300 pages of documentation just to get a token — you've lost momentum.

Paul Mason: Exactly. And docs matter big time. Quality, completeness, clarity — they correlate directly with adoption rates. One article even put it like: "Effective API docs cut onboarding time from weeks to hours."

Tim Williams: So let me throw a list out: onboarding time; error/bug rates in first runs; documentation engagement (page views, bounce rates); churn or dropout of developers who never make a call; feedback-loop speed — how fast you respond when someone hits a wall.

Paul Mason: Yeah, and don't forget developer satisfaction and retention. If you keep pulling people back into endlessly wrangling your platform, they leave. "Accessible + usable + credible" are the three pillars of DevEx.

Tim Williams: Now here's the twist: because we're in AI land, there's another dimension — agent experience (AX), or how well your API works not just for humans but for models and agents.
One recent blog calls it out: the same DevEx metrics apply, but the volume and patterns change when LLMs are consumers.

Paul Mason: Which circles back to our thesis: the platform that wins here isn't the one with the biggest GPUs — it's the one the devs and the models can use fastest, and with the least friction.

Tim Williams: Absolutely. Now before we switch gears, let's move into a listener Q&A — we asked you folks out there: "How do I hedge my career in this shifting terrain, where companies are throwing 'AI automation' at the problem but really shifting roles and capital?"

Paul Mason: Great question. One from Jess in Austin: "I'm a full-stack engineer — should I focus on learning LLM frameworks or stay deep in backend architecture?"

Tim Williams: My read: do both. Because what's happening is that those backend architect roles are shifting. It's not about replacing you with a model; it's about changing why your role exists. Know how to build scalable infra (that's classic), and know how to hook into the model-centric layer, how to orchestrate agents, how to integrate RAG workflows. That gives you a hedge.

Paul Mason: Another from Marcus in Seattle: "If my company says it's using AI to automate jobs — what should I ask about?"

Tim Williams: Great one. Here's what you ask: "Which platform are we using? What's the onboarding time for a new dev on that platform? What is the first-call success rate? How many devs have deployed something meaningful versus how many have signed up?" These reveal where the real investment is — is it in tools and DevEx, or just in press releases?

Paul Mason: And for freelancers or side projects: look for platforms that have a low barrier to entry. Because that's where the momentum is. The easiest platform to adopt often wins.

Tim Williams: Exactly. And if your platform has friction — wiring services, proving you need access, IAM hoops — you're on the slow side of that momentum shift.
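Circling back to Paul's "time to first successful call" metric: if you log signup and API-call events, it reduces to a small aggregation. Here's a rough sketch; the event shape and field names are assumptions for illustration, not any particular platform's telemetry schema.

```python
# Sketch: median "time to first successful call" from raw platform events.
# Event shape is hypothetical: (developer_id, event_type, timestamp_in_hours).
from statistics import median


def time_to_first_win(events):
    """For each developer, hours from signup to first successful API call.

    Developers who signed up but never made a successful call are counted
    as dropouts -- that dropout rate is itself a DevEx metric.
    """
    signups, first_call = {}, {}
    for dev, kind, t in sorted(events, key=lambda e: e[2]):
        if kind == "signup":
            signups.setdefault(dev, t)
        elif kind == "api_call_ok" and dev in signups and dev not in first_call:
            first_call[dev] = t - signups[dev]
    dropouts = len(signups) - len(first_call)
    return median(first_call.values()), dropouts


events = [
    ("ann", "signup", 0), ("ann", "api_call_ok", 2),   # 2 hours to first win
    ("bob", "signup", 1), ("bob", "api_call_ok", 49),  # 48 hours
    ("cat", "signup", 3),                              # never got a call through
]
print(time_to_first_win(events))  # (25.0, 1)
```

The median rather than the mean keeps one developer stuck in an access-approval queue from hiding how fast the typical onboarding is; the dropout count surfaces the people who gave up entirely.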
Tim Williams: Alright, let's pivot from corporate strategy to something more personal — the war at home. Cursor updates.

Paul Mason: Oh man, I knew this was coming.

Tim Williams: I swear, Cursor must have an internal goal like: "ship an update every time Tim opens a new file." I'll be deep in a prompt chain, everything's humming, and then — boom — "Restart to update Cursor."

Paul Mason: Yeah, I get that too. You restart, everything's smooth for about five minutes, and then another update drops. It's like whack-a-mole for productivity.

Tim Williams: I've had days where I restarted it five times just trying to keep it "current." I feel like I'm beta-testing their continuous deployment pipeline in real time.

Paul Mason: You are the QA department.

Tim Williams: And here's the real kicker — I organize my projects across full screens in macOS. One space for code, one for docs, one for test output, one for whatever chaos Paul's working on. Cursor reloads, and suddenly macOS decides that each full-screen window is a new timeline of reality.

Paul Mason: Yep. You get those black-hole spaces. You swipe left and it's just… nothing. Cursor's gone to the big Activity Monitor in the sky.

Tim Williams: Exactly. I'll be mid-conversation with the AI, it's giving me something gold, and then — poof — the "update" toast pops up. At this point I've developed reflexes like a court stenographer. I copy the last few messages before every restart, just in case.

Paul Mason: The funny part is, Cursor is great. The autocomplete, the inline explainers, the diff tools. But they're iterating so fast it feels like working on a train that's still being welded together.

Tim Williams: Yeah, I love it and I hate it at the same time. It's like the world's best coworker who insists on rebooting mid-sentence to install firmware updates.

Paul Mason: "Hold on Tim, gotta optimize my embeddings real quick."

Tim Williams: Yeah, exactly!
Meanwhile, macOS is just like, "You had twelve windows, but we thought we'd scatter them randomly across different spaces for fun."

Paul Mason: So the real pro tip: don't put Cursor in full-screen mode unless you enjoy scavenger hunts.

Tim Williams: Or just accept that every time it restarts, you're going to play "find the terminal." It's somewhere out there. Possibly in another desktop dimension.

Paul Mason: Honestly, they should gamify it. "Cursor Update 1.5: The Hidden Terminal Quest."

Tim Williams: Coming soon to Steam Early Access.

Paul Mason: But seriously — I get why they're doing it. Cursor's shipping fast, integrating new models, new MCP hooks, constant bug fixes. It's just… maybe schedule it once a day?

Tim Williams: Yeah, one restart a day keeps the devs sane. Five restarts a day makes you question the nature of continuous delivery.

Paul Mason: Or at least make the restart button say, "We're sorry, Tim."

Tim Williams: "Restart to update Cursor — and your patience."

Paul Mason: You know what, though — I kinda respect it. Cursor's shipping velocity is insane. It's like they've taken "move fast and break things" and replaced "break" with "reboot."

Tim Williams: Exactly. I mean, we can't really complain — we're benefitting from the bleeding edge. But sometimes that edge actually cuts.

Paul Mason: That's the trade-off, right? In this new AI tooling world, everything's moving so fast that you can either be stable or current, but not both.

Tim Williams: Yeah, and the users have to pick which pain they prefer: falling behind or constantly updating. Cursor's basically the poster child for continuous deployment culture.

Paul Mason: It reminds me of when Chrome first started silently updating in the background. At first everyone freaked out. Now we take it for granted. Cursor's just doing it out loud and mid-sentence.

Tim Williams: "Would you like to finish that thought?" "Nope, I'm restarting."
Paul Mason: But seriously, that's what separates modern dev tools from the old guard. OpenAI, Cursor, Replit — they're shipping daily because the underlying AI stack evolves daily. The models, the context-window sizes, the MCP endpoints — they all change under the hood.

Tim Williams: Right. And unlike the old IDE world — where stability meant success — in this new world, responsiveness is success. If you're not adapting, you're obsolete.

Paul Mason: It's like that line from The Last 20% Problem you wrote — the first 80 percent is table stakes, but the last 20 percent separates the good from the great. These tools are just trying to find that 20 percent before it changes shape again tomorrow.

Tim Williams: Exactly. But as a developer, you feel the tension. You want the shiny new MCP integrations, but you also want your editor not to disappear every time you sneeze.

Paul Mason: Right — "AI pair programmer, meet your new role: professional juggler."

Tim Williams: You're balancing updates, context windows, prompt drift, API changes. It's chaos.

Paul Mason: But maybe that's the price of being early. We're living through the awkward adolescent phase of AI dev tooling. Eventually it'll smooth out — version pinning, slower release rings, maybe even stability channels.

Tim Williams: Yeah, kind of like Node or VS Code — once those ecosystems matured, you could trust updates again. Cursor will get there. Every great dev tool goes through its puberty.

Paul Mason: Yeah, right now it's just in that "voice-cracking, reboot-every-hour" stage.

Tim Williams: Exactly. Let it grow.

Paul Mason: So, moral of the story — fast shipping is awesome, but empathy for the user experience is what builds loyalty.

Tim Williams: Yep. Because when your users are restarting five times a day and still rooting for you, you're doing something right.

Paul Mason: Amen. Now if they could just make "Restart to Update Cursor" come with a coffee refill prompt, we'd be golden.
Tim Williams: "Would you like to install version 0.17.4 and brew a latte?" Perfect.

Tim Williams: Alright, let's take it home with something a little more instructional — daily work with the Cursor agent. We've joked about all the restarts, but the truth is, when it is running, it's one of the most powerful tools in our stack.

Paul Mason: Yeah, if you learn how to use it right, it's like having a junior dev that never sleeps — but you've got to keep it trained and organized.

Tim Williams: Exactly. And that starts with your cursorrules file. Let's talk hygiene.

Paul Mason: Oh yeah — cursorrules hygiene is like brushing your teeth. Skip it for a few days and suddenly everything's inconsistent, the agent starts hallucinating files that don't exist, and it's drifting like a ship in a storm.

Tim Williams: Yeah, prompt plaque.

Paul Mason: Gross, but accurate. The point is: your cursorrules file should evolve with your project, not trail behind it.

Tim Williams: Right. It's tempting to just dump every bit of context in there — API keys, long architectural notes, dependency explanations — but that's not sustainable. The rules file should stay lean and high-signal.

Paul Mason: Exactly. Keep the stuff that defines how your agent should think, not everything it should know.

Tim Williams: Here's the trick I've started using: treat your rules file like the table of contents for your project knowledge. If a section grows too big — like "Frontend State Management Patterns" or "API Integration Flow" — move that to its own markdown doc in /docs/ai-context/ and just link to it.

Paul Mason: It keeps the rules lightweight but still gives the agent breadcrumbs to follow. Cursor will actually read linked markdown files if you reference them properly.

Tim Williams: Yep. And the agent can even help you maintain this. I'll literally ask it, "Hey, summarize everything in this file that's redundant with our current rules." It'll point out stuff that's stale or duplicated.
Paul Mason: Yeah, the same way you'd have a teammate go through and refactor comments or documentation — you can have the AI do it.

Tim Williams: Exactly. I treat it like a little daily hygiene checklist. One: after every big refactor, check that your cursorrules file still describes the project accurately. Two: once a week, trim dead or irrelevant rules. Three: whenever you add a new feature or vertical, link to a new markdown doc rather than stuffing it into the rules.

Paul Mason: And maybe one bonus: don't be afraid to rewrite the whole thing occasionally. Projects evolve, tone changes, priorities shift. A rules file from three months ago might be telling Cursor to optimize for speed when you now care about test coverage.

Tim Williams: Exactly. People treat these files like a holy text when they're really a living document. Cursor will only be as helpful as the accuracy of the context you give it.

Paul Mason: Yep — garbage in, garbage out. Or worse, "five versions of outdated context in, nothing useful out."

Tim Williams: Been there. The bottom line: treat your cursorrules like code. Version it, refactor it, keep it clean.

Paul Mason: And use the agent as your documentation buddy. It's surprisingly good at summarizing, pruning, and linking sections if you just ask.

Tim Williams: Exactly. That's the power of the tool — not just generating code, but managing the context around your code.

Paul Mason: So for anyone listening, if Cursor is part of your daily workflow: spend as much time maintaining your context as you do your code. The cleaner your cursorrules, the smarter your agent becomes.

Tim Williams: Amen to that. Cursor's like any good teammate — if you keep them informed, they'll do great work. If you feed them old specs and half-baked instructions, you'll spend your day fixing misunderstandings.

Paul Mason: Or restarting five times.

Tim Williams: Yeah, that too.

Paul Mason: Alright, that's a wrap for today's episode of Rubber Duck Radio.
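The "table of contents" approach Tim describes is easy to keep honest with a small check script. This sketch verifies that every markdown doc the rules file links to actually exists, so the agent never follows a dead breadcrumb; the file paths (a `.cursorrules` file in the repo root, docs under `docs/ai-context/`) are one possible layout, not a Cursor requirement.

```python
# Sketch: find relative markdown links in a Cursor rules file that point at
# docs which no longer exist. Run it as part of the weekly trim Tim mentions.
import re
from pathlib import Path

# Matches markdown links whose target ends in .md, e.g. [state](docs/state.md)
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)]+\.md)\)")


def stale_links(rules_path: Path) -> list[str]:
    """Return linked .md paths (relative to the rules file) that are missing."""
    text = rules_path.read_text()
    root = rules_path.parent
    return [link for link in LINK_RE.findall(text) if not (root / link).exists()]


# Usage (illustrative paths):
#   missing = stale_links(Path(".cursorrules"))
#   if missing:
#       print("Stale context links:", missing)
```

A dead link is worse than no link here: the agent either ignores it or, as Paul joked, starts hallucinating the file's contents, so failing fast on missing docs is cheap insurance.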
Tim Williams: Keep your projects clean, your cursorrules cleaner, and remember: AI isn't taking your job — it's just waiting for your next commit.

Paul Mason: Beautiful. See you next time.

Related Projects

GTZenda

Enterprise document intelligence pipeline that ingests procurement data from AI agents, classifies and normalizes documents using LLM processing, and pushes structured data into a government sales intelligence platform. Built on AWS with SQS-driven async processing and OpenAI integration.

Lead Developer · View project ->

Therapy Buddy

Therapy Buddy is a cutting-edge, AI-assisted therapy application that lets patients and therapists collaborate on specialized therapy sessions.

Solo Developer · View project ->

eRepublic Events Portal

The eRepublic Events Portal is a platform that provides a deeply integrated experience for eRepublic event attendees and sponsors.

Senior Web Developer · View project ->

Government Navigator

Government Navigator is a go-to-market sales and marketing intelligence platform tailored for state, local, and education IT vendors. By leveraging millions of data signals and decades of procurement expertise, it delivers real-time insights, from early buyer-intent and pre-RFP alerts to verified contacts, jurisdictional profiles, statewide IT contracts, and curated market briefings, so clients can uncover emerging opportunities and focus on winning deals instead of doing the homework.

Lead Developer · View project ->

Episode Details

Published
November 6, 2025
Duration
26:21
Episode
S1E3

Technologies Discussed

AWS, OpenAI

Skills Demonstrated

Developer Experience, Technical Leadership, Resource Planning, Technology Evaluation, Architecture Planning