
Where Open Source LLMs Are Actually Ahead

Hosts: Tim Williams and Paul Mason

Show Notes

Open source LLMs just hit a stunning milestone: Kimi K2.6 tied GPT-5.5 on the industry's toughest coding benchmark — and costs a fraction of the price to run. But this episode goes beyond the headlines to unpack where open source models still trail proprietary ones, how the new Temporal API is finally fixing JavaScript's 30-year date nightmare, and why AI-driven development and the trend toward closed-source licensing could starve the open source commons that made all of this innovation possible in the first place. From production AI economics to the future of web framework innovation, Tim and Paul explore what the numbers actually mean for developers building real systems today.

Transcript

Tim Williams: Hey there, Rubber Ducklings, welcome to show number fourteen. I'm your host, Tim Williams.

Paul Mason: And I'm also your host, Paul Mason. What's new, Tim?

Tim Williams: So, Paul — do you remember when I predicted that Chinese open source AI would catch up to the state-of-the-art closed source models this year?

Paul Mason: Yeah, I remember. And I also remember thinking you were being a little optimistic on the timeline.

Tim Williams: Well, here's the thing — and I'm not saying I told you so — but the latest numbers are in, and they're more dramatic than either of us expected. We're not talking about open source inching toward the proprietary models. We're talking about open source models tying GPT-5.5 on real-world coding benchmarks. Tying it.

Paul Mason: Okay, wait — you're telling me an open source model tied GPT-5.5? On what benchmark?

Tim Williams: SWE-Bench Pro. The hardest real-world software engineering benchmark out there. Kimi K2.6 — from Moonshot AI, a Beijing-based lab — scored 58.6%. GPT-5.5 also scored 58.6%. Same number. And this is an open-weight model you can download from Hugging Face.

Paul Mason: That's... wow. I mean, I've been watching the Kimi releases since K2 launched last summer, but I didn't expect them to close the gap this fast. So what's the catch? There's always a catch.

Tim Williams: There absolutely is a catch. Because while the coding benchmarks are neck and neck, the overall picture tells a different story. GPT-5.5 still leads on the Artificial Analysis Intelligence Index at 60 versus K2.6 at 54. And Claude Opus 4.7 sits at 57, with a commanding lead on SWE-Bench Verified — 64.3% compared to K2.6's 58.6%. So let's be clear: open source has caught up in specific domains, but it has not achieved parity across the board. Not yet.

Paul Mason: Right, and that's the nuance that gets lost in the headlines. When people see 'open source ties GPT-5.5,' they think it's a blanket statement. But it's domain-specific. Coding? Yeah, Kimi is right there. Pure reasoning, math, desktop GUI automation? GPT-5.5 and Opus still have real leads.

Tim Williams: Exactly. And that's what we're going to unpack today: where open source LLMs are actually ahead, where they're still behind, and what the trajectory means for those of us who use these models in production.

Paul Mason: Let's do it. Because I've got opinions on this.

Tim Williams: Alright, let's start with the big picture. I've been poring over this analysis from WhatLLM — they looked at 94 leading LLMs across 329 API endpoints. And the first number that jumped out at me: 63% of production-ready models are now open source. 59 open source versus 35 proprietary. Two years ago, proprietary dominated. That flip is stunning.

Paul Mason: So more than half the models out there are open source now. But volume doesn't equal quality. What do the actual scores say?

Tim Williams: Consider this — the quality gap between the best open source and the best proprietary has shrunk from 15-20 points in 2024 to just 9 points now. The best open source model, MiniMax-M2, scores 61 on the quality index. The best proprietary, GPT-5.1 High, scores 70. At the current pace, that gap disappears by mid-2026.

Paul Mason: Nine points. A year ago people were saying open source would never catch up. And now we're talking parity within a year.

Tim Williams: Yeah, but — and this is important — parity on a quality index is not the same as parity on every task. The index aggregates across benchmarks like GPQA Diamond for PhD-level reasoning, AIME for advanced math, and LiveCodeBench for coding. An aggregate score of 61 versus 70 doesn't tell you where the gaps are. It just tells you there's a gap.

Paul Mason: Right. And that's where the Kimi K2.6 data gets really interesting. Because it's not an all-rounder tying GPT-5.5 — it's a specialist. On SWE-Bench Pro, it ties. On Humanity's Last Exam with tools, it actually beats GPT-5.4 — 54% versus 52.1%. On DeepSearchQA for autonomous web research, it crushes: 92.5% F1 versus 78.6%. But on AIME 2026? GPT-5.4 is still at 99.2% compared to K2.6's 96.4%. So it's winning in the agentic, tool-use, long-running tasks. Pure single-shot reasoning? Still proprietary territory.

Tim Williams: And that pattern maps perfectly onto what developers actually need day to day. Because let's be honest — most developers aren't solving AIME competition problems. They want an agent that can refactor a codebase, run tests, fix what breaks, and not lose track of what it was doing after hour three. And that is exactly where K2.6 is winning.

Paul Mason: Totally. And the agent swarm architecture is what makes that possible. K2.6 can run 300 parallel sub-agents executing 4,000 coordinated steps over 12 hours. The previous version capped at 100 agents and 1,500 steps. That's not an incremental upgrade — that's a completely different operational ceiling.

Tim Williams: Let me put this in concrete terms. Moonshot demonstrated K2.6 taking a prompt to optimize a locally deployed Qwen model for inference speed. It downloaded the weights, rewrote the inference stack in Zig — which, by the way, is a niche systems language most models would struggle with — iterated through 14 optimization cycles and over 4,000 tool calls, and improved throughput from about 15 tokens per second to 193. Twelve hours. Zero human intervention.

Paul Mason: That used to be Claude territory. So this is the pattern I'm seeing: open source isn't winning by being better at the same things. It's winning by being good enough at the things that matter most for real workflows — coding, agent stamina, cost — while proprietary retains its edge in niche, high-prestige benchmarks.

Tim Williams: And here's where the cost conversation becomes impossible to ignore. Let me drop some numbers. Open source models average $0.83 per million tokens. Proprietary models average $6.03. That is 86% cheaper. And the speed advantage — this surprised me — open source on optimized infrastructure averages 179 tokens per second versus 138 for proprietary. Peak speeds hit over 3,000 tokens per second. Proprietary peaks around 600.

Paul Mason: 3,000 tokens per second? That's providers like Groq and Fireworks running these models, right?

Tim Williams: Exactly. When you decouple the model from the proprietary infrastructure — which open source lets you do — you can run it on whoever gives you the best throughput. That competition drives speeds up and prices down. The proprietary models are locked to their own infrastructure. You pay their price, you get their speed.

Paul Mason: So let me paint the picture for a mid-sized startup running coding agents. A hundred million input tokens, ten million output tokens per month — realistic for a team using AI-assisted coding all day. On Kimi K2.6 through the Moonshot API: about $85 a month. Same workload on Claude Opus 4.7? About $2,550. That's a $29,000 annual difference. That's a full engineering hire.

Tim Williams: Yeah. The moral of the story here is not that you should switch everything to open source tomorrow. It's that the cost-quality equation has fundamentally shifted. For 80% of use cases, open source now offers better value without meaningful quality sacrifice. For the remaining 20% — the elite tasks, the competition-level math, the mission-critical reasoning — proprietary still has the edge. But you're paying a massive premium for it.

Paul Mason: And the tier breakdown from WhatLLM makes this really clear. In the elite tier — quality scores above 60 — there's one open source model versus eleven proprietary. That's where GPT-5.1 High, GPT-5 Codex, and Claude Opus 4.7 all sit. But in the high tier, scores 50 to 59? It's tied 8 to 8. Open source is dead even in the tier where most professional work actually lives.

Tim Williams: And the top 5 open source models are genuinely impressive. MiniMax-M2 at quality 61, GPT-OSS-120B at 58, DeepSeek V3.1 Terminus at 58, Qwen3 235B at 57, DeepSeek V3.2 Experimental at 57. These are not toy models. These are production-grade systems that cost a fraction of what you'd pay for comparable proprietary performance.

Paul Mason: Let me add one more to that list: Kimi K2.6. Because when you factor in that it ties GPT-5.5 on SWE-Bench Pro and costs roughly 5x less on input tokens and over 7x less on output tokens — $0.95 per million input and $4.00 per million output, versus $5.00 and $30.00 for GPT-5.5 — and with cached input, K2.6 drops to $0.16 per million... the value proposition is insane.
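To make the episode's arithmetic concrete, here is a minimal back-of-the-envelope cost model in JavaScript, using Paul's example workload and the per-million-token prices quoted above. The 60% cached-input share is our assumption, not the hosts' (they don't show their math, but that split lands near their ~$85 figure), and since Claude Opus 4.7's per-token prices are never quoted on air, the proprietary column below uses GPT-5.5's list prices instead.

```js
// Back-of-the-envelope model for Paul's example workload: 100M input and
// 10M output tokens per month, with prices in dollars per million tokens.
const MILLION = 1_000_000;
const workload = { input: 100 * MILLION, output: 10 * MILLION }; // tokens/month

function monthlyCost({ inputPerM, cachedPerM = inputPerM, cacheHit = 0, outputPerM }) {
  const fresh = (workload.input * (1 - cacheHit) * inputPerM) / MILLION;
  const cached = (workload.input * cacheHit * cachedPerM) / MILLION;
  const output = (workload.output * outputPerM) / MILLION;
  return fresh + cached + output;
}

// Kimi K2.6: $0.95/M input, $0.16/M cached input, $4.00/M output.
// The 60% cache-hit rate is an illustrative assumption.
const k26 = monthlyCost({ inputPerM: 0.95, cachedPerM: 0.16, cacheHit: 0.6, outputPerM: 4.0 });

// GPT-5.5: $5.00/M input, $30.00/M output, no caching assumed.
const gpt55 = monthlyCost({ inputPerM: 5.0, outputPerM: 30.0 });

console.log(k26.toFixed(2));   // ≈ 87.60 — in the ballpark of the hosts' ~$85
console.log(gpt55.toFixed(2)); // 800.00
```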
Tim Williams: Now, let me push back on the hype for a second. Because there are areas where open source is still genuinely behind, and we need to be honest about them.

Paul Mason: Agreed. I've been using K2.6 and it's great for coding, but it runs verbose. Noticeably more verbose than Claude. And it gets over-eager on tool calls — sometimes it just keeps invoking tools when it should stop. That hallucination rate is also worth talking about.

Tim Williams: Yeah. The hallucination rate in K2.5 was 65%. K2.6 brought it down to 39%. That's a 40% reduction, which is meaningful progress, but 39% is still high. For tool-heavy agent workflows, every hallucinated fact gets compounded as the chain continues. That's a real production risk.

Paul Mason: And the context window gap is real too. K2.6 has 256K. GPT-5.5 supports 400K, Claude Opus 4.7 supports a million. If your workflow requires stuffing an entire codebase into a single prompt, 256K might not cut it.

Tim Williams: Here's the thing about context windows, though — the WhatLLM analysis shows that open source has actually achieved parity there overall. The average open source context window is 412,000 tokens. The proprietary average is 468,000. And then you have outliers like Llama 4 Scout with a 10 million token context window. So the capacity gap is model-specific, not camp-specific. It's Kimi K2.6 that's behind on context, not open source in general.

Paul Mason: Good distinction. So let's talk about where open source is actually, unambiguously ahead. Not catching up, not tied — winning.

Tim Williams: Cost. Speed. Agent stamina. Model variety. And — this is the one people underestimate — the ability to self-host and decouple from vendor infrastructure. When you can download the weights and choose between a dozen hosting providers competing on price and throughput... that's a fundamentally different dynamic than being locked into OpenAI's or Anthropic's pricing.

Paul Mason: I'd add one more: the Chinese labs are iterating faster than anyone expected. DeepSeek, Qwen, Moonshot — they're shipping major updates every few months. K2 to K2.5 to K2.6 in nine months. DeepSeek V3 to V3.1 to V3.2 to V4. That pace is forcing the Western labs to compete on price. Did you see the DeepSeek V4 Pro pricing? $0.44 input, $0.87 output per million tokens on the promotional rate. Even after the promo, it goes to $1.74 and $3.48. Still way cheaper than the alternatives.

Tim Williams: The WhatLLM analysis called this the 'iPhone moment' for LLMs — high quality made accessible. When Qwen3-235B offers quality 57 at $0.25 per million tokens versus Claude 4.5 Sonnet at quality 63 for $6.00 per million... you're getting roughly 90% of the quality at about 4% of the cost. That's the disruption.

Paul Mason: So here's where I land on this. If you're paying API bills right now, you should be actively testing open source models for your workloads. Not as an experiment, but as a production evaluation. Run the same prompts through Qwen3 or DeepSeek V4 that you're running through Claude or GPT. Look at the actual quality for your specific tasks, not the benchmarks. Then look at your bill. I think a lot of teams are going to find they're overpaying by 5-10x for the minority of tasks that genuinely need the proprietary tier.

Tim Williams: And for teams already using Claude Code or Codex — Kimi Code is worth a serious look. It's Moonshot's terminal-based coding agent, similar to Claude Code. It runs on K2.6 and integrates with VS Code, Cursor, and JetBrains through the Agent Client Protocol. The cost difference is dramatic. If you're burning through Claude Code tokens all day, the swap could literally pay for itself within a week.

Paul Mason: The one thing I'd caution on — and this is from experience — is model version pinning. The Moonshot API currently returns 'kimi-for-coding' regardless of which underlying version is active. For reproducible CI/CD workflows, that's real friction. The model you tested against today might not be byte-identical tomorrow. That's the trade-off with providers iterating this fast.

Tim Williams: That's a great point. And it connects to something the WhatLLM analysis predicted: provider consolidation. Infrastructure providers — Nebius, Fireworks, Together AI, Groq — will become more valuable than model creators, similar to how cloud providers became more important than Linux distros. Because when you're running open source models, the model is commoditized. Speed, reliability, caching, version control — that's where the value shifts.

Paul Mason: Future you will thank present you for not locking your entire stack to a single provider's API. Build your abstraction layer, test against multiple providers, and when a better model drops — which is every few months now — you can swap it in without rewriting your integration.
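As a sketch of what that abstraction layer might look like: many hosted-inference providers expose OpenAI-compatible chat-completions routes, so a thin wrapper can keep the provider and model as configuration. The base URLs, model IDs, and environment variable names below are illustrative placeholders, not verified values for any real provider.

```js
// A minimal provider-abstraction sketch, assuming OpenAI-compatible
// /chat/completions routes. All URLs and model IDs are placeholders.
const PROVIDERS = {
  moonshot: {
    baseUrl: 'https://api.moonshot.example/v1', // placeholder, not a real endpoint
    model: 'kimi-k2.6',                         // pin a versioned ID where possible
    apiKey: process.env.MOONSHOT_API_KEY,
  },
  fireworks: {
    baseUrl: 'https://api.fireworks.example/v1', // placeholder, not a real endpoint
    model: 'deepseek-v4',
    apiKey: process.env.FIREWORKS_API_KEY,
  },
};

async function chat(providerName, messages) {
  const p = PROVIDERS[providerName];
  const res = await fetch(`${p.baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${p.apiKey}`,
    },
    body: JSON.stringify({ model: p.model, messages }),
  });
  if (!res.ok) throw new Error(`${providerName} responded ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

// Swapping models becomes a config change, not a rewrite:
// await chat('moonshot', [{ role: 'user', content: 'Refactor this function' }]);
```

Per Paul's version-pinning caution, prefer an explicit version identifier in the model field over a floating alias like 'kimi-for-coding' whenever the provider offers one.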
Tim Williams: Let me wrap this section with the prediction that I think matters most. WhatLLM projects that open source will match or exceed today's best proprietary quality level of 70 by Q2 2026. DeepSeek V4, Llama 5, Qwen4 — these are the likely candidates to hit that milestone. When that happens, the question stops being 'Can open source compete?' and becomes 'Where does proprietary still justify its premium?' And the answer to that second question is shrinking every quarter.

Paul Mason: And the honest answer right now is: proprietary justifies its premium for the elite 20% of tasks. PhD-level reasoning, competition math, mission-critical production code where you need that extra 10% reliability. But for the 80% — chatbots, code assistants, content generation, analysis — open source is already there. You're just paying rent to OpenAI and Anthropic because switching feels hard. It's not as hard as you think.

Tim Williams: The moral of the story is: the open source LLM revolution isn't coming — it's here. Kimi K2.6 tying GPT-5.5 on the toughest coding benchmark in the industry is the proof point. But the real story is the economics. When you can get roughly 90% of the capability at a few percent of the cost, the question isn't whether to adopt open source. The question is whether you can afford not to.

Paul Mason: Yeah. And next quarter, those numbers are going to look even more lopsided. We'll definitely revisit this.

Tim Williams: Deal. We'll circle back on open source LLMs in a future episode. Until then — test them on your own workloads. The data speaks for itself.

Tim Williams: Alright, shifting gears — from AI models to something that's been a thorn in every JavaScript developer's side since 1995. Paul, you want to guess?

Paul Mason: Oh, I already know. It's dates. It's always dates.

Tim Williams: It's always dates. The JavaScript Date object — the gift that keeps on giving bugs. Zero-indexed months but one-indexed days. Mutable objects that silently change on you. No time zone support beyond local and UTC. It's been a disaster for 30 years.

Paul Mason: And the worst part is, it's not even JavaScript's fault. It's Java's fault. Brendan Eich had ten days to write JavaScript and was told to make it like Java, so he copied the broken Java Date object. Java fixed theirs in 1.1, but JavaScript couldn't fix it without breaking the web.

Tim Williams: Right. The original sin. But here's the big news — the Temporal API is finally shipping in production browsers. Firefox 139 shipped it in May 2025. Chrome 144 shipped it in January of this year. We are on the doorstep of date-handling sanity.

Paul Mason: So the question is — can I actually use it yet? Like, without a polyfill?

Tim Williams: Almost, but not quite. Firefox and Chrome are done. Edge has experimental support in beta. Safari — and this is the holdup — Safari only has it in Technology Preview, and some of it is behind a flag. Full Safari support isn't expected until late 2026.

Paul Mason: So if you're building for a corporate intranet where everyone's on Chrome, you could go for it today. But for the general web? You still need a polyfill. Classic web standards limbo.

Tim Williams: Yeah. Two-thirds of browsers support it, but you can't rely on it yet. The good news is the polyfills are solid. @js-temporal/polyfill is in alpha, and temporal-polyfill from the FullCalendar team is in beta. The FullCalendar one is particularly good — those folks live and breathe calendar logic. If anyone's going to get a Temporal polyfill right, it's them.

Paul Mason: And the spec is at Stage 3, heading to Stage 4. TC39 champions are expected to pitch it at the March 2026 plenary. So the spec is stable, the implementations are shipping — this is real.

Tim Williams: Right. And let me paint why this is such a big deal. Temporal isn't just a slight improvement. It completely rethinks how dates and times work in JavaScript. You've got Temporal.Instant for timestamps. Temporal.PlainDate for dates without times. Temporal.PlainTime for times without dates. Temporal.ZonedDateTime for the full picture with time zones. Temporal.Duration for intervals. Each type is immutable. Each is explicit about what it represents.

Paul Mason: The immutability alone is worth the price of admission. How many bugs have we all shipped because Date objects are mutable? You pass a date into a function, the function mutates it, and suddenly your original is wrong. That's just gone with Temporal.

Tim Williams: Gone. And consider this — getting a local ISO 8601 string with the old Date object is like 15 lines of manual padding and timezone offset calculation. With Temporal? One line. Temporal.Now.zonedDateTimeISO().toString(). Done. The developer experience gap is enormous.
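To put that developer-experience gap on the page, here is a small sketch contrasting the legacy Date boilerplate with the Temporal calls the hosts mention. It assumes an environment where Temporal is available (a shipping browser or one of the polyfills discussed above); the dates themselves are placeholders.

```js
// Legacy Date: building a local ISO 8601 string with a UTC offset by hand.
// Note the classic footgun: getMonth() is zero-indexed, so it needs a +1.
const d = new Date();
const pad = (n) => String(n).padStart(2, '0');
const offsetMin = -d.getTimezoneOffset();
const sign = offsetMin >= 0 ? '+' : '-';
const abs = Math.abs(offsetMin);
const legacyIso =
  `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}` +
  `T${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}` +
  `${sign}${pad(Math.trunc(abs / 60))}:${pad(abs % 60)}`;

// Temporal: the one-liner Tim mentions, explicit about its time zone.
const nowIso = Temporal.Now.zonedDateTimeISO().toString();
// e.g. '2026-05-01T09:30:00-07:00[America/Los_Angeles]'

// Immutability: add() returns a new value; the original never changes.
const release = Temporal.PlainDate.from('2026-05-01');
const reviewDue = release.add({ days: 7 });
console.log(release.toString());   // '2026-05-01' — untouched
console.log(reviewDue.toString()); // '2026-05-08'
```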
Paul Mason: Totally. And the performance is actually good now too. Bryntum ran benchmarks — in Firefox, Temporal is basically neck and neck with Date. In Chrome it's a mixed bag: Date is still faster on arithmetic, but Temporal is faster on string formatting. And Chrome's implementation is newer, so it'll get better.

Tim Williams: Now, I want to take a moment here and talk about something newer developers might not appreciate. We're standing on the shoulders of giants — and I mean Moment.js.

Paul Mason: Oh, Moment. Yeah, I have a soft spot for Moment.js.

Tim Williams: As you should! Moment dropped in 2011 and it was genuinely revolutionary. Before Moment, working with dates in JavaScript was just suffering. moment().add(7, 'days').format('MMMM Do YYYY') — that was magic compared to what we had. At its peak, 15 million weekly downloads.

Paul Mason: And the thing about Moment — it did what it was supposed to do, really well. The reason it got deprecated in 2020 wasn't that it was bad. The ecosystem just evolved past what it could reasonably become. Mutability, no tree-shaking, the bundle size — you couldn't fix those without breaking the API. So they did the honorable thing and said, 'We're done. Move on.'

Tim Williams: That's exactly right. And I think the Moment.js story is one of the best examples of open source stewardship. They could have kept going, kept accepting PRs, kept growing the library. Instead, they said, 'The future is date-fns, Day.js, Luxon, and eventually Temporal. We're stepping aside.' That takes integrity.

Paul Mason: It does. And developers who started in the last few years — they've always had Day.js or date-fns. They don't know the pain of raw Date objects. Writing getMonth() plus one for the thousandth time and wondering why JavaScript was designed this way. They're standing on the groundwork that Moment laid.

Tim Williams: They are. And here's the full arc — Moment saved us in 2011. The community built better, lighter alternatives. And now, in 2026, we're finally getting a native solution that's better than all of them. It only took 30 years.

Paul Mason: Thirty years. Nine years just for the Temporal proposal itself — it started in 2017. Think about that. Nine years from proposal to shipping browsers.

Tim Williams: So here's my practical takeaway. If you're starting a new project today, use the Temporal polyfill. Start writing Temporal code now. The API is stable, the polyfills work, and when Safari ships it natively, you just remove the polyfill and everything keeps working. Don't start a new project with Moment or even Day.js in 2026. The future is Temporal.

Paul Mason: I'd add — if you're on an existing project using Moment, don't panic-migrate. Moment still works. It's in maintenance mode, not deleted. But when you're writing new code, reach for Temporal. Future you will thank present you.
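A minimal sketch of that adopt-now, remove-later path, assuming the FullCalendar temporal-polyfill package mentioned above. The import shapes reflect the package's documented usage, but treat them as assumptions to check against the current README.

```js
// npm install temporal-polyfill
// Ponyfill-style import: no globals touched, easy to delete later.
import { Temporal } from 'temporal-polyfill';

const today = Temporal.Now.plainDateISO();
console.log(today.toString()); // e.g. '2026-05-01'

// Alternatively, a one-time global install at your app's entry point:
//   import 'temporal-polyfill/global';
// Once Safari ships Temporal natively, drop the import (or the package
// entirely) and the same code runs against the built-in implementation.
```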
Tim Williams: The moral of the story is: JavaScript's date problem is finally getting solved. Not by a library, not by a framework, but by the language itself. And it only took three decades, a deprecated Java API, a ten-day sprint in 1995, and one very patient TC39 working group.

Paul Mason: Yeah. And in a year, we won't even be having this conversation. Temporal will just be how dates work. Developers will wonder what the big deal was.

Tim Williams: Alright, one more thing before we go. And honestly, this one's been sitting heavy on me. It connects both of the topics we've talked about today — open source and the web platform.

Paul Mason: Okay, you've got my attention. What's on your mind?

Tim Williams: Framework stagnation. Or more specifically — my fear that we're entering an era where the open source framework ecosystem that built the modern web slows down, because people are leaning so heavily on LLMs to generate code that they're no longer investing in foundational tools the way they used to.

Paul Mason: That's a pretty big claim. Walk me through it.

Tim Williams: Consider this — the reason LLMs can build rich, interactive web experiences today is the massive ecosystem of open source frameworks they were trained on. jQuery, Bootstrap, React, Vue, Angular, Node.js, Express — these weren't just tools. They were movements. Communities of thousands of people collaborating, debating, pushing the web forward. And they were all open source. That's the golden era I'm talking about.

Paul Mason: Yeah, Bootstrap literally made responsive design accessible to everyone. Before that, you were hand-rolling media queries and praying. And jQuery — people forget that jQuery was the reason JavaScript became viable as a serious language. It papered over all the browser inconsistencies so you could actually build things.

Tim Williams: Exactly. And React — love it or hate it — completely redefined how we think about UI. Vue brought that reactivity to a wider audience. Svelte showed us compilers could replace frameworks. Every one of these was open source, and every one pushed the entire industry forward because anyone could use them, study them, fork them, improve them.

Paul Mason: So where's the fear? That people stop building new ones?

Tim Williams: Two trends really worry me. First, there's this growing attitude of 'the AI can just generate it, why do I need a framework?' And that fundamentally misunderstands what frameworks do. Frameworks aren't just code shortcuts. They're shared mental models. They're conventions that make codebases maintainable by teams. When you let an LLM generate bespoke solutions for every project, you lose the shared vocabulary that makes collaboration possible.

Paul Mason: Totally. I've seen it already — developers generating components that are slightly different every time, no consistency, no shared patterns. Six months later, nobody can maintain it because there's no documentation, no community, no Stack Overflow answers. It's just generated code that nobody owns.

Tim Williams: And the second trend — and this one might be even more concerning — is that new framework creators are increasingly choosing to close-source their work. Proprietary licenses, source-available but not open. And I get the economic pressure. The RedMonk data from earlier this year shows that permissive open source licenses have dropped from 82% of projects in 2022 down to 73% in 2025. People are choosing to protect their IP instead of contributing to the commons.

Paul Mason: Wait — 82 to 73? That's a nine-point drop in three years. That's not nothing.

Tim Williams: It's not nothing. And the reasoning makes sense on the surface — you spend two years building something amazing, you don't want a cloud giant to just host it and take all the revenue. MongoDB went SSPL, Terraform went BSL, Elasticsearch went source-available. These are real economic pressures. But the cumulative effect? The commons shrinks.

Paul Mason: And the irony is thick. The LLMs people are leaning on — they were trained on that commons. On open source React, open source Vue, open source jQuery. If we stop feeding the commons, the LLMs have less to learn from. The quality of what they generate degrades over time.

Tim Williams: That's exactly the metaphor — eating your seed corn. And there's this beautiful symmetry to the problem. We spent this whole episode talking about open source LLMs catching up to proprietary ones. But the reason those LLMs can write decent web code at all is that the web ecosystem has been overwhelmingly open for twenty years. If the next generation of tooling is closed, the next generation of LLMs won't have the training data to keep improving.

Paul Mason: So the real question is — who builds the next React? The next Bootstrap? The next thing that redefines how we build for the web? If that person close-sources it because they need to make a living, or they're afraid of being exploited, then we all lose access to that innovation.

Tim Williams: And I want to be clear — I'm not saying people shouldn't make money from their work. That's completely valid. What I'm saying is that we, as an industry, need to figure out how to make open source sustainable again. Because the golden era of web frameworks wasn't accidental. It happened because companies like Google, Facebook, and Twitter were willing to open-source their internal tools because they benefited from the ecosystem effect. That alignment is breaking down.

Paul Mason: Yeah. And when I think about developers entering the field now — their first experience is often prompting an AI to generate code. They might never experience the joy of discovering a framework, reading its source code, understanding its design decisions, contributing back. That feedback loop — use, understand, contribute — is what built the ecosystem. If it breaks, we don't just lose new frameworks. We lose the next generation of framework authors.

Tim Williams: That's beautifully put. And it ties back to what we said about Moment.js — Moment's greatest act was knowing when to step aside and let the community build something better. That kind of stewardship only exists in an open ecosystem. In a closed one, there's no community to hand off to.

Paul Mason: Right. And there's the connection to the open source LLMs too — the same ethos that gave us React and Bootstrap is what's powering Kimi and DeepSeek and Qwen today. Open source isn't just a licensing choice. It's a force multiplier for innovation. Every time someone closes off a project, that multiplier gets a little weaker.

Tim Williams: The moral of the story is: the open source commons is the foundation everything else is built on — including the AI tools that people think make it obsolete. If we stop investing in shared, open infrastructure because AI can generate bespoke solutions, we'll wake up one day and realize the AI can't generate good solutions anymore — because there's nothing left to learn from.

Paul Mason: That's a hell of a note to end on, but yeah. The benchmarks will keep shifting. Temporal will ship everywhere eventually. But the health of the open source ecosystem — that's the meta-problem that affects everything else.

Tim Williams: Alright — that's the show for today. Three stories, one thread: open source LLMs closing the gap on the big players, JavaScript finally getting a date API that doesn't make you want to throw your laptop out a window, and a warning about the commons that made both of those things possible. Open source isn't just a nice idea — it's the engine. Don't let it stall. Until next time — keep coding, keep contributing, and for the love of all that is holy, open source your stuff.

Related Projects

Government Navigator

Government Navigator is a go-to-market sales and marketing intelligence platform tailored for state, local, and education IT vendors. By leveraging millions of data signals and decades of procurement expertise, it delivers real-time insights from early buyer-intent and pre-RFP alerts to verified contacts, jurisdictional profiles, statewide IT contracts, and curated market briefings so clients can uncover emerging opportunities and focus on winning deals instead of doing the homework.

Lead Developer · View project ->

eRepublic Registration Management System (ERMS)

ERMS is a Windows and OS X desktop application for managing every aspect of the eRepublic event registration process.

Senior Web Developer · View project ->

eRepublic CMS & Website Solutions

Built and deployed multiple website and CMS solutions for eRepublic publications.

Web Developer · View project ->

AI Charts

AI-powered flowchart, ERD, and swimlane diagram builder with a built-in AI assistant and an MCP server exposing 18+ tools for external AI integration. Works with any OpenAI-compatible LLM — no vendor lock-in.

Solo Developer · View project ->

Episode Details

Published: May 1, 2026
Duration: 7:04
Episode: S1E14

Technologies Discussed

JavaScript, OpenAI, Ollama

Skills Demonstrated

Technical Leadership, Technology Evaluation, Architecture Planning, Developer Experience