Episode 8 · March 15, 2026 · 33:51

AI Isn't Taking Your Job—It's Taking the Easy Stuff


Tim Williams (host) · Paul Mason (host)


Show Notes

Tim and Paul explore an unexpected reality: AI isn't replacing developer jobs—it's stripping away the satisfying 'easy wins' and leaving only constant high-stakes problem-solving. They discuss building intentional MCP-based tools like AI Charts and AI Sound, why model intelligence has plateaued while tooling has improved, and why taste and specificity are becoming the critical differentiators for developers in the AI era.

Transcript

Tim Williams: Hey folks, welcome back to, uh, Rubber Duck Radio. It's been a few weeks since our last episode, and I'll be honest — life just kind of got in the way. Paul's been busy, I've been busy, and, you know, the recording schedule sort of fell apart. But we're back now, and we're going to try to make this more regular again. So, yeah, thanks for sticking with us.

Paul Mason: Yeah, sorry about the gap. Sometimes the day job just, like, consumes everything. But I'm glad we're back at it.

Tim Williams: Alright, so, um, I've been thinking about something lately that I wanted to kick around with you. It's about AI and, like, how it's actually changed my day-to-day work. And I think the narrative we keep hearing is that AI is going to take our jobs, right? Like, software development as a profession is doomed. But here's what I've actually experienced: AI isn't taking my job. It's, um, taking away all the easy stuff.

Paul Mason: That's an interesting way to frame it. So it's not replacing you, it's just… changing what you spend time on?

Tim Williams: Exactly. And here's the thing — none of the difficult problems have, like, gone away. AI has just made the easy stuff much easier. And in some ways, that's actually made me MORE stressed out than before. Which is, I don't know, ironic? Because AI was promised to solve all our problems and be the end of the software development profession. Instead, it's just changed what I work on.

Paul Mason: I feel that. It's like, you used to have this mix of work, right? Some days you'd crank out straightforward features, get that little dopamine hit of productivity. Other days you'd wrestle with the hard architectural problems. Now it's just… hard problems all the time.

Tim Williams: That's exactly it. When simple software problems are, you know, table stakes, there's no need for developers to spend their time on those things. We're pushed into spending all our time on the difficult stuff. And I see this as a double-edged sword.
On one hand, I do software development because I'm a problem solver at heart. I don't care that much how the problem gets solved, I just want to solve it. But on the other hand, it's, um, changed the pacing of my job completely.

Paul Mason: Right. You used to have this natural rhythm. Build a form, feel productive. Debug a weird edge case, feel challenged. Write some tests, feel responsible. Now it's just… edge cases and architecture all day long.

Tim Williams: And here's what I miss — I used to have this flow where I'd get to work on easy, rewarding stuff in between the hard problems. It was like a mental palate cleanser. Now I spend nearly all my time on the most difficult technical issues. And that's, like, exhausting in a different way.

Paul Mason: Yeah, it's like going from a job with variety to a job that's just… intensity. Which sounds good in theory, right? Like, oh, I'm only working on the important stuff now. But in practice, humans need those easier wins to, you know, stay motivated.

Tim Williams: Totally. And I think this is going to make the job more difficult for a lot of people. Not because the work itself is harder, but because the emotional pacing is different. You don't get those little victories sprinkled throughout your day. It's just… one hard problem after another.

Paul Mason: I wonder if that's actually going to create a divide in the industry. Like, some developers are going to thrive on the constant challenge, and others are going to burn out because they need that variety.

Tim Williams: I think you're right. And here's the other thing I've noticed — AI has made me faster, but speed isn't the same as satisfaction. I can ship more features now, but each one feels less… earned? I don't know if that makes sense.

Paul Mason: No, it totally makes sense. There's something about struggling through a problem and, like, emerging on the other side that gives you a sense of accomplishment. When AI just hands you the solution, you lose that.
Even if the solution is correct.

Tim Williams: Exactly. And I think this is the part that the AI optimists miss. They talk about productivity gains, but they don't talk about what it feels like to actually do the work. The craft of software development isn't just about output. It's about, you know, the experience of building something.

Paul Mason: Right. And when AI strips away the craft part — the typing, the debugging, the little puzzles — what's left is just the hard decisions. The architectural tradeoffs. The business logic that actually matters. And that's mentally taxing in a way that's, um, different from just being busy.

Tim Williams: Here's another way to think about it. AI has raised the floor but also raised the ceiling. The floor is higher because I can produce competent code faster. But the ceiling is also higher because now I'm expected to tackle problems that used to be considered too complex or, like, too time-consuming.

Paul Mason: And the expectations shift. Your manager sees you shipping faster and thinks, "Great, now you can take on even more complex work." They don't realize that what they're actually doing is, like, removing the breathing room.

Tim Williams: That's exactly it. And I think this is where the stress comes from. It's not that I'm working more hours. It's that every hour is now spent at, like, maximum cognitive load. There's no coasting, even for a moment.

Paul Mason: I've felt that too. And honestly, I think we need to talk about this more as an industry. Everyone's celebrating the productivity gains, but nobody's talking about the mental toll of, you know, constant high-stakes decision making.

Tim Williams: The moral of the story here is that AI isn't eliminating our jobs. It's transforming them into something that might actually be harder — not technically, but emotionally. And if we don't acknowledge that, we're going to see a lot of developers, like, burning out.

Paul Mason: So what's the fix?
Do we just accept that this is the new normal?

Tim Williams: I don't think there's a simple fix. But I think being aware of it is, you know, the first step. And maybe we need to intentionally create space for the easier, more satisfying work. Like, deliberately choosing to work on some things from scratch even when AI could do it faster. Just to remember why we got into this field in the first place.

Paul Mason: That's a good point. It's like… keeping a hobby garden even though you could buy vegetables at the store. Sometimes the process matters more than the efficiency.

Tim Williams: You know, that actually connects to something I've been working on lately. I've been thinking a lot about how to make AI tools feel more… intentional? Less like this overwhelming firehose of capability, and more like a set of focused instruments that each do one thing really well.

Paul Mason: Oh interesting. What do you mean by that?

Tim Williams: So I've been building these small, composable projects — they're basically, um, specialized copilots. Each one is designed to help with a specific task, and they all implement their own MCP server.

Paul Mason: Right, MCP has been getting a lot of attention lately. Anthropic really, like, pushed it forward with Claude Desktop.

Tim Williams: Exactly. And here's what's cool about it — because each of these projects speaks MCP, I can interact with them from, like, wherever I am. I can be in Claude Code, or Cursor, or even between these small projects themselves. It's like… instead of having one giant AI that does everything, I have this ecosystem of focused tools that all talk to each other.

Paul Mason: That's a different mental model than what most people are doing. Usually it's like, "let me just ask Claude to do everything."

Tim Williams: Yeah, and that works for a lot of things. But I've found that when you have these specialized tools, the AI gets way better at the specific task.
Like, I built this thing called AI Charts — it's a flowchart and diagram builder. It's got its own MCP server with, like, 18 different tools. So I can be in Claude Desktop and say "hey, create me an ERD for this database schema" and it calls out to AI Charts, which is purpose-built for that kind of work.

Paul Mason: So the AI isn't trying to figure out how to draw a diagram from scratch. It's got this whole toolset specifically for, you know, that domain.

Tim Williams: Exactly. And it goes deeper than that. AI Charts can do flowcharts, ERDs, swimlane diagrams — it's got auto-layout, validation, it can export to Mermaid syntax, Markdown, PDF. But the key thing is, all of that is accessible through MCP. So any AI assistant that speaks MCP can, like, drive it.

Paul Mason: That's smart. It's like building an API, but for AI agents instead of human developers.

Tim Williams: That's exactly what it is. And here's where it gets really interesting — I built another one called AI Sound. It's an AI-native audio editor. Think of it like a modern replacement for Audacity, but, um, built from the ground up with LLM integration. It's got multi-track editing, transcription, speaker diarization, semantic search across your audio content.

Paul Mason: Wait, so you could be editing this podcast in AI Sound, and have an AI assistant helping you through MCP?

Tim Williams: Totally. I could be in Claude Desktop and say "find all the sections where we talked about MCP and export them as a clip." And it would call AI Sound's MCP server, which has tools for searching transcriptions, trimming audio, exporting. The AI doesn't need to know how audio processing works — it just knows the, um, semantic operations.

Paul Mason: That's wild. It's like each project becomes this little island of expertise that any AI can tap into.

Tim Williams: And here's the thing that connects back to what we were talking about earlier — this approach actually helps with that stress problem.
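[Editor's note: under the hood, the MCP interaction Tim describes is a JSON-RPC 2.0 message: the assistant sends a `tools/call` request naming a tool exposed by the server, plus its arguments. Here is a minimal sketch of what such a request might look like; the `create_erd` tool name and its argument schema are hypothetical stand-ins, not AI Charts' actual API.]

```python
import json

# MCP is JSON-RPC 2.0 under the hood. When an assistant asks a server like
# AI Charts to do something, the client sends a "tools/call" request naming
# the tool and its arguments. Tool name and arguments below are hypothetical.
def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

request = make_tool_call(
    1,
    "create_erd",  # hypothetical AI Charts tool
    {"schema": "users(id, name); orders(id, user_id)", "format": "mermaid"},
)
print(json.dumps(request, indent=2))
```

The point is that the assistant never touches diagram-drawing logic; it only emits this small, typed message, and the server does the domain work.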
Because instead of AI being this monolithic thing that's constantly rewriting my code and changing my workflow, these tools feel more like… instruments? I pick them up when I need them. They're, you know, designed to do specific things well.

Paul Mason: Yeah, I can see that. It's the difference between having a Swiss Army knife that does everything okay, versus having a set of actual tools that each do one thing really well.

Tim Williams: Right. And the other piece of this is that both AI Charts and AI Sound work with any OpenAI-compatible LLM. So I can run them fully local with Ollama, or I can connect them to OpenAI, Groq, whatever. No vendor lock-in. That was important to me — I didn't want to build something that, like, only works with one provider.

Paul Mason: That's been a theme with a lot of the AI tooling lately. People are realizing they don't want to be tied to one model or one company.

Tim Williams: Exactly. And MCP is actually helping with that too, in a weird way. Because if everything speaks MCP, then it doesn't matter as much which LLM you're using. The protocol becomes, like, the interface layer.

Paul Mason: So you're basically building your own little ecosystem of AI tools that all work together, independent of any one company's roadmap.

Tim Williams: That's the idea. And it's been fun, honestly. Like, this is the kind of development work that still feels rewarding to me. Building something small and focused, making it work well, giving it a clean interface. It's, you know, craft.

Tim Williams: So here's something I've been thinking about lately. It feels like we've hit this plateau with AI model intelligence. Like, since GPT-4o came out, have we really seen any huge leaps in raw intelligence? I'm not sure we have.

Paul Mason: That's an interesting observation. I mean, we keep seeing new model releases with better benchmark scores, but...

Tim Williams: But the benchmarks are being gamed.
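[Editor's note: "any OpenAI-compatible LLM" means the tools only assume the OpenAI-style `/chat/completions` request shape, so switching providers is essentially a base-URL and model-name change. A rough sketch of that idea; Ollama does expose an OpenAI-compatible endpoint at `/v1` on its default port, but the model names here are just examples.]

```python
# Provider-agnostic config: everything that speaks the OpenAI-compatible
# chat completions API is interchangeable. Model names are examples only.
PROVIDERS = {
    "ollama": {"base_url": "http://localhost:11434/v1", "model": "llama3"},
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
}

def build_chat_request(provider, prompt):
    """Return (url, payload) for an OpenAI-style chat completion call."""
    cfg = PROVIDERS[provider]
    url = cfg["base_url"] + "/chat/completions"
    payload = {
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

# Fully local via Ollama, or hosted — same code path either way.
url, payload = build_chat_request("ollama", "Summarize this transcript.")
```

Because the request shape is identical across providers, the "no vendor lock-in" property falls out of the configuration, not the code.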
There's actually been some really interesting research on this. Did you see that paper from Stanford about, um, data contamination?

Paul Mason: No, what did they find?

Tim Williams: So they developed this test where they'd mask a wrong answer in a multiple choice question and ask the model to fill in the gap. And here's the thing — GPT-4 could guess the missing option with, like, 57% accuracy. On test data it theoretically shouldn't have seen.

Paul Mason: Wait, so the model had basically, like, memorized the test answers?

Tim Williams: That's what it looks like. The benchmark scores are inflated because the models have been trained on the test data — either directly or through, like, contamination in their training corpus. And this isn't just one benchmark. SWE-Bench, MMLU, Chatbot Arena — they're all dealing with this problem.

Paul Mason: So when we see these headlines about "new model beats previous benchmark record," we should be, like, skeptical.

Tim Williams: More than skeptical. There was this great MIT Technology Review piece about SWE-Bench — you know, the coding benchmark that everyone uses now. The researchers noticed that high-scoring models were training exclusively on Python because that's what the benchmark used. So they'd get great scores, but then, like, fail completely when tested on other languages.

Paul Mason: So you're not building a better software engineer — you're building a, like, SWE-Bench specialist.

Tim Williams: Exactly. One researcher called it "gilded" — looks nice and shiny at first glance, but try to run it on something different and the whole thing, um, falls apart. Andrej Karpathy actually called this an "evaluation crisis" — we've run out of trusted ways to measure actual capabilities.

Paul Mason: But here's what I've noticed — and I think this gets to your point about plateaus. The models do feel more useful than they did a year ago. So, like, something's improving.

Tim Williams: Yes! That's exactly it.
And here's what I think is happening: the raw intelligence hasn't jumped that much, but the tooling around AI has, like, matured significantly. Function calling has gotten way better. Gemini 3 has this internal thinking process that helps it reason through when to call a function and what parameters to use.

Paul Mason: And that's not the model being smarter — that's the software around it being, you know, better designed.

Tim Williams: Right. The fine-tuning for tool use has improved. The orchestration layers have improved. The way we structure prompts and context has improved. But the underlying model? It's not that different from GPT-4o. We're just, like, wrapping it in better software.

Paul Mason: That actually explains a lot. Like, Claude Code feels way more capable than using Claude in a chat interface. But it's, like, the same model underneath.

Tim Williams: Exactly. The difference is the tooling. Claude Code has this whole system around the model — file system access, terminal integration, context management, agentic loops. That's where the improvement is coming from. Not from some, like, breakthrough in model intelligence.

Paul Mason: So when people say "AI is getting smarter," what they're really seeing is "AI tooling is getting more sophisticated."

Tim Williams: That's my theory. And honestly, I think this is a good thing. It means the progress is more, like, sustainable. We're not waiting for some magic breakthrough in model architecture. We're doing what software engineers have always done — building better systems around the tools we have.

Paul Mason: It also explains why your MCP projects are so interesting. You're not trying to build a smarter model — you're building better interfaces between models and, you know, specific tasks.

Tim Williams: That's it. The intelligence is table stakes now. The question is: what can you do with it? How do you structure the problem? How do you design the interface?
That's where the real innovation is happening right now. Not in the model weights, but in the, um, software architecture around them.

Paul Mason: And that's something we can actually control. We're not dependent on OpenAI or Anthropic to, like, release the next breakthrough.

Tim Williams: Right. We can build better tooling today. And I think that's where the next year of progress is going to come from — not from models that are fundamentally smarter, but from software that makes better use of the, you know, intelligence we already have.

Tim Williams: So here's something I've been thinking about a lot lately. As AI gets better at generating code, the role of a software developer is, like, shifting. And I think it's shifting toward two things: taste and specificity.

Paul Mason: Yeah, I can see that. When the model can generate reams of code for you, everything comes down to, like, honing what you build and making sure you're steering it correctly.

Tim Williams: Exactly. And here's the thing — when anyone and their mother can throw together a simple Facebook clone or another flavor of to-do list, what matters is what you build and, you know, how you build it. The technical implementation becomes less of a differentiator.

Paul Mason: Right. I saw this article recently that put it really well — "Taste Is Eating Silicon Valley." The argument was that as software becomes commoditized, taste emerges as the, like, new differentiator.

Tim Williams: That resonates with me. And I think taste is incredibly difficult to duplicate. You can copy someone's code, you can even copy their architecture, but copying their design sensibility, their understanding of what makes a product feel right? That's, like, much harder.

Paul Mason: Totally. And it's also about specificity. Like, AI is great at generating generic solutions.
But the value is in the specific — understanding exactly what your users need, the edge cases that matter, the small details that make something feel, you know, polished.

Tim Williams: I was reading this piece from Thoughtworks about the DORA report, and they had this great insight. They said real-world software development is messy — it's filled with ambiguities, unstated requirements, constantly shifting priorities. And that requires, like, human judgment and contextual understanding.

Paul Mason: Yeah, that's exactly it. AI can generate code at lightning speed, but it lacks the intuition to, like, anticipate how that code might break an existing system, or introduce security vulnerabilities, or create technical debt. You need humans for that judgment call.

Tim Williams: And here's what's interesting — the DORA report found that 95% of developers rely on AI, but 30% have, like, little to no trust in AI-generated code. So there's this "trust but verify" approach that's emerging.

Paul Mason: That's exactly how I work with it. I treat it like Stack Overflow — useful, but I'm not just, like, copy-pasting without thinking. I'm critically evaluating, guiding, validating the work.

Tim Williams: Right. And I saw someone describe it as "developing taste — the ability to detect subtle errors in AI-generated work at speed." That's becoming, like, a core skill. Not just writing code, but having the taste to know when the AI's output is good enough and when it needs refinement.

Paul Mason: I'd add one more thing — the article mentioned that routine implementation becomes cheaper, while judgment-heavy work becomes more valuable. That's, you know, the shift we're seeing.

Tim Williams: Exactly. And I think this is where taste and specificity intersect. You're not just implementing features anymore. You're making constant judgment calls about what to build, how to build it, what to prioritize, what the right trade-offs are.
The AI can help you execute, but it can't, like, make those judgment calls for you.

Paul Mason: Yeah, and that's both exciting and daunting. On one hand, you're freed from the tedious stuff. On the other hand, you're now responsible for, like, all the high-stakes decisions.

Tim Williams: That's the moral of the story here. The job isn't going away — it's evolving. And the developers who thrive will be the ones who develop strong taste, who can be specific about what they want, who can guide the AI toward the right solution rather than just, you know, accepting whatever it generates.

Paul Mason: Same here. And honestly, that's the part of the job I've always enjoyed most anyway — the design decisions, the product thinking, the craft of making something feel right. So maybe this shift isn't, like, so bad after all.

Tim Williams: So I came across this article recently that really crystallized something I've been feeling. The headline was, um, provocative — "The Death of Coding Is Cancelled: Why Your AI Assistant Is Quickly Becoming an Imbecile."

Paul Mason: That's... quite a title. But honestly? I kind of agree.

Tim Williams: Right? So the thesis is this: at the start of a project, AI seems like a demigod. You ask it to build something, it just... does it. Magic. But that ends quickly. And the article argues there's actually a, like, mathematical reason why AI will never replace programmers.

Paul Mason: Let me guess — it's the context problem.

Tim Williams: Exactly. So here's what the research shows. Traditional AI coding assistants operate within 4,000 to 8,000 token context windows. That's roughly 3,000 to 6,000 words. Now, think about your typical production codebase — we're talking, like, hundreds of thousands of lines, millions of tokens. The AI literally cannot see the whole picture.

Paul Mason: And this is where it gets interesting — there's actual data on this. Studies show a noticeable reduction in accuracy once context length crosses, like, 32k tokens.
And for codebases over 15,000 lines? Performance degrades significantly.

Tim Williams: So the AI starts strong — it's a demigod at the beginning when the project is small. But as your codebase grows, it becomes... what's the word the article used?

Paul Mason: An imbecile. Which is harsh, but accurate in terms of capability. It's not that the model got dumber — it's that the problem, like, outgrew its ability to understand it.

Tim Williams: And here's the thing — this connects directly to what we were talking about earlier. The more a task depends on understanding the broader codebase, the more likely AI is to, like, miss the mark. Qodo's 2025 State of AI Code Quality report found exactly this. These aren't edge cases — these are the tasks where developers expect the most value.

Paul Mason: It's like... imagine hiring an architect who can only see three rooms at a time. Sure, they can design a great bathroom. But ask them how the bathroom fits into the whole house? They're, like, guessing.

Tim Williams: That's perfect. And this is why the death of coding keeps getting cancelled. Every few years, someone announces that programmers are obsolete. Packages were supposed to replace us. 4GLs. Visual coding. CASE tools. Rails and opinionated frameworks. Now AI. And every time, the prediction fails for, like, the same reason.

Paul Mason: Because the hard part was never the typing.

Tim Williams: Right! The hard part is understanding how everything connects. It's knowing that changing this function over here breaks that integration test over there, which reveals a bug in the authentication flow, which exposes a, like, race condition in the message queue. That's the job.

Paul Mason: And AI can't do that because it literally can't hold all that in its head. Even with massive context windows — we're seeing 128k, even 1M tokens now — the research shows that just throwing more context at the problem doesn't, you know, solve it. The signal-to-noise ratio gets worse.
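[Editor's note: the numbers in this exchange make for an easy back-of-the-envelope check. Assuming a rough ~10 tokens per line of code (an illustrative figure, not a measured constant — real tokenization varies by language and style), even the 15,000-line codebase mentioned above overflows a 128k-token window.]

```python
# Back-of-the-envelope: does a whole codebase fit in one context window?
# TOKENS_PER_LINE is an assumed rule of thumb, not a measured constant.
TOKENS_PER_LINE = 10

def fits_in_context(lines_of_code, context_window_tokens):
    """Estimate whether a codebase fits in a single context window."""
    estimated_tokens = lines_of_code * TOKENS_PER_LINE
    return estimated_tokens <= context_window_tokens

# The episode's 15,000-line codebase is ~150k estimated tokens:
print(fits_in_context(15_000, 8_000))    # -> False (vs. a small 8k window)
print(fits_in_context(15_000, 128_000))  # -> False (even vs. 128k)
```

Under this crude estimate, the model can only ever see a slice of the system at a time, which is the "architect who can only see three rooms" problem in arithmetic form.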
Tim Williams: There's actually this great Hacker News thread from the creator of aider — you know, that AI coding tool — and he says this is perhaps the number one problem users have. Very large context windows aren't useful in practice because the model, like, gets lost in the noise.

Paul Mason: So the AI goes from demigod to imbecile not because it changed, but because the problem changed. The scope expanded beyond what it can, you know, reason about.

Tim Williams: And this is why I'm actually optimistic. Not in an "AI will never be useful" way, but in a "this is why we still need humans" way. The AI is an incredible tool for, like, starting projects, for prototyping, for generating boilerplate. But as the system grows, the human becomes more important, not less.

Paul Mason: Because you're the one who can hold the whole architecture in your head. You're the one who understands why this decision was made three years ago, what that legacy system integration requires, how this change, like, ripples through the organization.

Tim Williams: Exactly. And this connects to what we said about taste and specificity. When the AI can generate reams of code, you become the, like, conductor. You're not obsolete — you're more essential than ever. But your role shifts from typing to steering.

Paul Mason: From builder to architect. From coder to, you know, systems thinker.

Tim Williams: The moral of the story is — the death of coding has been greatly exaggerated. Again. The tools change, but the core challenge remains: understanding complex systems and making good decisions about them. That's still, like, a human job.

Paul Mason: So Tim, if we zoom out from all of this... what's the takeaway? AI takes the easy stuff, we build focused tools, the models aren't getting smarter but the software around them is, taste matters more than ever, and... if we're not careful, our AI assistants turn into, like, imbeciles?
Tim Williams: Here's the moral of the story — and I've been thinking about this a lot. AI was sold to us as this, like, existential threat, right? "The end of software development as we know it." But the reality is way more nuanced. It's not taking our jobs, it's taking away the easy wins. It's not getting exponentially smarter, it's getting better tooling. And it's not replacing our judgment, it's making our judgment more valuable than ever.

Tim Williams: But here's the thing — that doesn't mean everything is fine. The job IS harder now. The pacing IS different. We're spending all our time on the difficult problems with, like, no palate cleansers in between. That's real.

Paul Mason: Right. And I think that's why building our own tools matters so much. Like with AI Charts and AI Sound — these aren't just random projects. They're about, like, taking back some control. Saying, okay, if AI is going to change my job, I'm going to shape HOW it changes my job. I'm going to build copilots that work the way I want, that integrate with each other, that I can steer.

Tim Williams: Exactly. And that connects to the taste and specificity thing too. When anyone can spin up a to-do list app or a Facebook clone, what separates the good developers from the... I don't know, the people just generating code? It's knowing what to build. It's understanding why. It's having the judgment to say "no, that's not quite right" and, like, steering the AI toward something better.

Paul Mason: It's like... the AI can generate a thousand options, but it can't tell you which one is the, you know, right one. That's still on us.

Tim Williams: And that's why the "death of coding" narrative is so wrong. Coding isn't dying — it's evolving. We're becoming architects, editors, curators, quality controllers. The imbecile problem that article talked about? That's what happens when you, like, abdicate responsibility.
You let the AI run wild, it creates a mess, and suddenly you're debugging a codebase you don't understand.

Paul Mason: So the answer isn't to use less AI, it's to use it more intentionally. Keep the human in the loop. Stay close to the code. Build tools that give you control, not ones that, like, take it away.

Tim Williams: Right. And maybe that's the silver lining in all of this. The easy stuff being automated away — that forces us to level up. To focus on the things that actually matter: understanding our users, making good architectural decisions, having taste, knowing when to, like, push back on a feature request. Those were always the important parts of the job. AI just stripped away the padding.

Paul Mason: I like that. It's like... we used to have this mix of easy and hard, and now it's just hard. But maybe that's pushing us to become better at the parts of the job that were always, you know, the most valuable.

Tim Williams: Yeah. And look — I'm not going to pretend it's all sunshine and roses. The stress is real. The pacing change is real. But I'd rather be in this position — where my judgment and taste and problem-solving skills matter MORE — than in a position where I'm just, like, cranking out boilerplate that any model could write.

Paul Mason: Totally. And for anyone listening who's feeling that same stress — you're not alone. This is a, like, weird transition period. The tools are maturing faster than our workflows can adapt. But the answer isn't to fight the AI, it's to shape it. Build your own copilots. Learn MCP. Stay close to your code. Keep your taste sharp.

Tim Williams: Well said. And hey — thanks for bearing with us through the, um, hiatus. Life happens, day jobs consume everything sometimes. But we're back, and we're going to try to keep this more regular. There's a lot happening in this space, and I think having honest conversations about it — not just the hype, but the real lived experience — that's valuable.
Paul Mason: Yeah, we appreciate you sticking around. If any of this resonated with you — the stress, the tooling, the taste thing — hit us up. We'd love to hear how you're navigating this transition. What's working for you? What's, like, driving you crazy?

Tim Williams: Until next time — keep coding, keep your taste sharp, and remember: the AI is only as good as the human, you know, steering it.

Paul Mason: Catch you in the next one.

Related Projects

AI Charts

AI-powered flowchart, ERD, and swimlane diagram builder with a built-in AI assistant and an MCP server exposing 18+ tools for external AI integration. Works with any OpenAI-compatible LLM — no vendor lock-in.

Solo Developer · View project ->

AI Sound

AI-native audio editor built as a modern replacement for Audacity, with LLM integration at its core. Features multi-track editing, AI transcription, speaker diarization, semantic search, and a full MCP server for external AI assistant integration.

Solo Developer · View project ->

Government Navigator

Government Navigator is a go-to-market sales and marketing intelligence platform tailored for state, local, and education IT vendors. By leveraging millions of data signals and decades of procurement expertise, it delivers real-time insights from early buyer-intent and pre-RFP alerts to verified contacts, jurisdictional profiles, statewide IT contracts, and curated market briefings so clients can uncover emerging opportunities and focus on winning deals instead of doing the homework.

Lead Developer · View project ->

GTZenda

Enterprise document intelligence pipeline that ingests procurement data from AI agents, classifies and normalizes documents using LLM processing, and pushes structured data into a government sales intelligence platform. Built on AWS with SQS-driven async processing and OpenAI integration.

Lead Developer · View project ->

Episode Details

Published: March 15, 2026
Duration: 33:51
Episode: #8

Technologies Discussed

MCP · OpenAI

Skills Demonstrated

Architecture Planning · Developer Experience