<?xml version="1.0" encoding="UTF-8"?>
<rss
  version="2.0"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:media="http://search.yahoo.com/mrss/"
>
<channel>
<title><![CDATA[AI - Joachim Zeelmaekers]]></title>
<description><![CDATA[Articles tagged "AI" on Joachim Zeelmaekers.]]></description>
<link>https://joachimz.me/tag/ai/</link>
<image>
<url>https://joachimz.me/favicon.svg</url>
<title><![CDATA[AI - Joachim Zeelmaekers]]></title>
<link>https://joachimz.me/tag/ai/</link>
</image>
<generator>Astro</generator>
<lastBuildDate>Mon, 06 Apr 2026 00:00:00 GMT</lastBuildDate>
<atom:link href="https://joachimz.me/tag/ai/rss.xml" rel="self" type="application/rss+xml"/>
<ttl>60</ttl>
<language>en-us</language>
<item>
<title><![CDATA[From Slack Pings to Agent Prompts: How Focus Got Harder]]></title>
<description><![CDATA[I used to protect focus from Slack, meetings, and PR reviews. Now the same interruptions are still there, plus a steady stream of agent prompts.]]></description>
<link>https://joachimz.me/blog/from-slack-pings-to-agent-prompts-how-focus-got-harder/</link>
<guid isPermaLink="true">https://joachimz.me/blog/from-slack-pings-to-agent-prompts-how-focus-got-harder/</guid>
<category>software-engineering</category><category>AI</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/from-slack-pings-to-agent-prompts-how-focus-got-harder.webp" medium="image"/>
<content:encoded><![CDATA[<p>For a while, one of the best things I did for myself was stop treating notifications like a live feed.</p>
<p>I would check PR reviews at fixed times, usually around 9 AM, 1 PM, and 4 PM. And I tried to protect two two-hour focus blocks every day.</p>
<p>Was that ever fully real? Not really. Something urgent always slipped through. But even a messy version of that setup helped me get things done.</p>
<p>And I believe that was the point of all the old context-switching talk: if you’re in the middle of a hard problem, interruptions pull you out of the zone.</p>
<p>If I’m tracing a bug through three services, trying to understand why something only breaks in production, or working through a design decision I still don’t trust, a random Slack ping doesn’t just “take a minute.” It breaks the flow entirely. Then I get to spend the next twenty minutes rebuilding the model in my head.</p>
<p>I used to describe it as being in “the zone,” and I always told people how hard it was to get back into the zone. Not to push people away, but to let people realize that each interruption had a price, and I believe this is becoming more prominent now.</p>
<h3 id="what-used-to-work">What used to work</h3>
<p>Back then, most of the interruptions came from outside the work.</p>
<ul>
<li>Slack</li>
<li>meetings</li>
<li>PR reviews</li>
<li>incident noise</li>
<li>somebody with a “quick question”</li>
</ul>
<p>Part of the job, yes, but at least the source was clear.</p>
<p>This made me think that batching them would help. And for a while it did, since it at least left enough room to process both the interruptions and the work.</p>
<h3 id="what-changed">What changed</h3>
<p>Currently, the old interruptions are still there, but we added a new category on top of them.</p>
<p>Agents.</p>
<p>And I don’t mean that in some dramatic anti-AI way. I use them constantly, and that’s exactly why I am writing this. I’ve been experiencing it first-hand.</p>
<p>If I have a few agents running in parallel, my day starts looking different fast. One is exploring the next task. One is exploring a refactor. One is looking at the latest tech updates (because who can keep up manually nowadays). Then one of them finishes. Another needs clarification. Another wants me to choose between two directions. Another comes back with a wall of questions because my prompt was missing one detail that suddenly matters a lot.</p>
<p>So now the interruptions don’t just come from Slack and meetings. The work itself keeps interrupting me.</p>
<p>That’s the part that feels different.</p>
<p>In the past, context switching mostly came from other people pulling you away from the task. Today, a lot of it comes from managing the task through multiple half-finished threads at the same time.</p>
<h3 id="why-it-feels-heavier">Why it feels heavier</h3>
<p>What makes this harder for me is that it all feels productive.</p>
<p>Slack is obviously an interruption. A meeting is obviously an interruption. But when an agent finishes a task or asks for clarification, it doesn’t feel like an interruption in the same way. It feels like progress, and often it is progress, but sometimes it’s just another thing that wants a piece of your attention NOW.</p>
<p>That is how I end up with days where a lot happened, and I feel tired by the end of it. Not necessarily in a bad way, but in a different way than I am used to.</p>
<p>When you spend the whole day switching between sessions, reviewing outputs, answering follow-up questions, checking Slack, doing PR reviews, deploying services, and moving tickets, you realize how easy it is to forget or miss things. Not because you’re not trying your best, but mainly because of the constant focus you have to keep up.</p>
<p>That’s why this feels heavier to me now. The old interruptions never left. We just layered more on top.</p>
<h3 id="deep-work-still-matters">Deep work still matters</h3>
<p>If anything, I think it matters more now.</p>
<p>Agents don’t remove the need to think clearly at all. If anything, we need to think more about how we approach problems, handle evaluation, and make sure we protect our focus.</p>
<p>The focus is required to make sure that we understand what we’re trying to deliver, that we keep critically evaluating changes and requests, and that we break down the work effectively.</p>
<h3 id="what-im-trying-now">What I’m trying now</h3>
<p>I don’t have a good solution for this yet. If you do, please feel free to share it with me. But a few things seem to help.</p>
<p>Breaking down your work ahead of time helps. A clear plan for the day helps. Fewer active agents (the kind that need instant feedback) help. Batching reviews still helps when I can get away with it, although that one is especially tricky at the current rate of change.</p>
<p>Mostly though, I think the big shift is just understanding that the job changed.</p>
<p>We didn’t make context switching disappear. We found a much more productive-looking version of it, and it’s here to stay.</p>
<p>Some will thrive, others will realize it might not be for them. The only thing you can do to deal with this is adapt and overcome.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[When Code Outpaces the Systems Around It]]></title>
<description><![CDATA[AI can help write the code. The harder part is everything after: review, tests, approvals, and ownership.]]></description>
<link>https://joachimz.me/blog/when-code-outpaces-the-systems-around-it/</link>
<guid isPermaLink="true">https://joachimz.me/blog/when-code-outpaces-the-systems-around-it/</guid>
<category>software-engineering</category><category>AI</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/when-code-outpaces-the-systems-around-it.webp" medium="image"/>
<content:encoded><![CDATA[<p>AI has made it easy to turn ideas into code. Teams are seeing genuine speedups in planning, initial implementation, and even early review stages. But as with any engineering advancement, there are trade-offs.</p>
<p>The reality is that while code generation has accelerated, delivery pipelines haven’t kept pace. Test suites still take thirty minutes to run, CI pipelines are flaky, and deployments feel like defusing bombs.</p>
<p>Consider this: systems were built to work when engineers put out 1-2 changes a day, but they break down at 5-10. CI still takes 30 minutes per branch, everything needs merging so PRs pile up, and there is typically one deploy to staging every half hour.</p>
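<p>To make the mismatch concrete, here is a rough back-of-envelope sketch of that queue. The numbers mirror the hypothetical scenario above, not measurements from a real team:</p>

```python
# Back-of-envelope math for a serialized delivery pipeline.
# All numbers are illustrative, taken from the scenario above.

CI_MINUTES_PER_CHANGE = 30   # CI run time per branch
WORKDAY_MINUTES = 8 * 60     # one working day

# One staging deploy every half hour caps daily throughput at 16.
DEPLOY_SLOTS = WORKDAY_MINUTES // CI_MINUTES_PER_CHANGE

def backlog_after_one_day(changes: int) -> int:
    """Changes still waiting when everything merges through one queue."""
    return max(0, changes - DEPLOY_SLOTS)

print(backlog_after_one_day(2))   # 1-2 changes a day: queue stays empty
print(backlog_after_one_day(40))  # e.g. 8 engineers at 5 changes each
```

<p>A trickle of changes fits comfortably in sixteen deploy slots, but once generation outpaces the pipeline, the leftover backlog compounds day after day.</p>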
<p>This mismatch is often overlooked. Much of the process that’s been layered onto software development comes with speed trade-offs - sometimes intentional, often just organic evolution.</p>
<p>Now that generating code is cheap, one would expect software roadmaps to fly through delivery. But in most cases, that’s not what happens.</p>
<p>We expected the bottleneck to shift to trust decisions: Is this actually the right change? Does it work correctly? Can we ship it without starting a fire drill?</p>
<p>Yet that hasn’t been the outcome. Let’s examine why.</p>
<h2 id="the-bottleneck-phase-of-ai-adoption">The Bottleneck Phase of AI Adoption</h2>
<p>The first hard limit is review.</p>
<p>Code can be generated faster than it can be read. That’s where things get tricky.</p>
<p>Small change requests turn into 27-file diffs because AI tools touch nearby helpers, “simplify” abstractions that didn’t need changing, update unnecessary tests, and reformat code on the way out. None of it is obviously broken. Some of it might even be fine. But it still leaves reviewers staring at diffs they don’t have the time to read line by line.</p>
<p>Often, reviewers already have their own work open in three other tabs.</p>
<p>Reviews often become superficial. People start to just skim through changes, and they trust a green pipeline. They approve the shape of the change because tracing every consequence takes time they don’t have. This is not out of bad intention, but rather because it’s the only way to catch up.</p>
<p>These review cycles and safety checks exist for a reason—they protect the production environment. The goal isn’t to scrap them, but to evolve them to match the new generation speed.</p>
<p>That’s one of the real costs here. The goal should not just be to ship faster, but to ship faster without lowering quality.</p>
<h2 id="the-three-stages-of-ai-adoption-maturity">The Three Stages of AI Adoption Maturity</h2>
<p>A predictable maturation curve emerges as AI integrates into software delivery:</p>
<p><strong>Stage 1: The Code Generation Boom</strong>
Teams discover they can generate code faster than ever before for planning, implementation, and even initial review.</p>
<p><strong>Stage 2: The Bottleneck Phase</strong>
AI gets bolted onto systems that were already slower and messier than desired. Thirty-minute test suites that were tolerable before become absurd in an AI loop. Changes stack up, and engineers wait for the queue to drain. Flaky CI gets worse, because a flake no longer blocks one change but ten. Risky deploys create cautious behavior just when the tooling is pushing teams to move faster.</p>
<p><strong>Stage 3: The Automated Trust Phase</strong>
The real wins come from investing in unsexy but critical infrastructure: fast feedback loops that don’t make you want to scream, clear boundaries so teams aren’t stepping on each other’s toes, boringly reliable deploys, documentation that doesn’t require an archaeology degree to understand, and having enough review capacity to actually keep up with the increased volume.</p>
<p>A flashy AI setup could help move the needle, but lasting improvement comes from evolving systems to match the new pace.</p>
<h2 id="decision-latency-the-next-bottleneck-in-the-maturity-curve">Decision Latency: The Next Bottleneck in the Maturity Curve</h2>
<p>Another bottleneck that emerges is decision latency around process and approvals.</p>
<p>Changes get drafted quickly but sit for days due to additional review requests, sign-offs, unclear test plans, and unclear ownership.</p>
<p>This is what we mean by decision latency. The code can be done in an afternoon. The waiting takes the rest of the week.</p>
<p>When it’s unclear who has the final say, when every change needs three sign-offs, or when acceptance criteria only get clear near the end, AI mostly helps teams finish the code and then wait.</p>
<p>This isn’t just an engineering-structure problem; it’s a process problem that needs to evolve alongside AI adoption.</p>
<h2 id="ai-rewards-decomposition">AI rewards decomposition</h2>
<p>This is why breaking work down clearly matters.</p>
<p>A sloppy task used to waste one person’s time. Now it can waste one person’s time at much higher speed while producing a lot of plausible-looking output someone has to untangle later.</p>
<p>When AI helps most, the task looks like this:</p>
<ul>
<li>break a problem into smaller steps that can actually be verified</li>
<li>draft an implementation plan before touching the code</li>
<li>execute a scoped change with clear constraints</li>
<li>review a diff for missing edge cases or weak tests</li>
</ul>
<p>When it helps least, the request sounds like “clean this up” or “improve the architecture” or “just take a pass at this module.”</p>
<p>Those aren’t real tasks. They’re a good way to get a diff nobody asked for.</p>
<h2 id="ownership-matters-more-not-less">Ownership matters more, not less</h2>
<p>When code becomes cheaper to produce, the people who actually matter aren’t the ones who can type the fastest. They’re the ones who can properly scope a change, verify it thoroughly, and stand behind the decision to ship it.</p>
<p>This isn’t nearly as exciting as bragging about 10x productivity gains, but it’s closer to what engineering teams actually need to succeed, without reducing quality.</p>
<p>It doesn’t matter whether the first draft flowed from an AI model, a Stack Overflow snippet, or late-night coding. What matters is who really understood the trade-offs involved, who bothered to check the edge cases, and who’s willing to put their name on it when it hits production.</p>
<p>That’s why ownership becomes more critical, not less, when AI handles the initial draft. The machine can spit out options all day long, but it can’t be paged (yet) when something breaks in the middle of the night. It can’t explain why it made a particular trade-off during Friday’s incident review. It can’t look a stakeholder in the eye and say, “I’ve got this.”</p>
<h2 id="what-actually-helps">What actually helps</h2>
<p>Look, the answer isn’t to artificially slow things down for the sake of purity. The answer is fixing the system around the code.</p>
<p>Here’s what that looks like in practice:</p>
<p>First, break work down properly. Smaller tasks aren’t just easier to manage - they’re easier to generate with AI, easier to verify, easier to review, and way easier to toss out when the model starts hallucinating or going off the rails.</p>
<p>Second, make those feedback loops cheaper. When pumping out more code changes, tests need to be fast, CI needs to be reliable, and deploys need to be boringly predictable. If any of those are slow or flaky, they become instant bottlenecks that negate any speed gains from AI.</p>
<p>Third, get crystal clear on decisions. If every meaningful change requires a meeting and three layers of approval, you’re still optimized for a world where code was hard to write. AI just highlights how broken that process is to begin with.</p>
<p>Fourth, treat review like the bottleneck it is. If review capacity can’t keep up with the increased volume from AI assistance, you’re just creating a bigger backlog of unreviewed changes. That’s not progress, that’s creating future problems for yourself. Make thorough reviewing a priority.</p>
<p>Fifth, make ownership obvious. Someone on the team should be able to look at a change and say without hesitation: “I understand this, I’ve verified it, and I’m responsible for it shipping successfully.”</p>
<p>And finally, when the AI tool isn’t actually helping? Stop prompting it. Seriously. Close the chat window, go back to the original problem statement, and rewrite it in simpler terms. Sometimes the best way to use AI is to not use it at all until you’ve clarified what you actually want, because if you don’t know what you want, how do you expect Claude to know what you want?</p>
<p>The upside of AI in software development is absolutely real. It’s a double-edged sword that can be extremely beneficial when the correct approach is taken, but it can produce significant amounts of low-quality output when the wrong approach is used.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[How AI Changed My Workflow as a Software Engineer]]></title>
<description><![CDATA[AI tools changed my workflow fast, but the real lesson was simpler: build for flexibility, resilience, and task-fit instead of chasing peak capability.]]></description>
<link>https://joachimz.me/blog/how-ai-changed-my-workflow-as-a-software-engineer/</link>
<guid isPermaLink="true">https://joachimz.me/blog/how-ai-changed-my-workflow-as-a-software-engineer/</guid>
<category>software-engineering</category><category>AI</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/how-ai-changed-my-workflow-as-a-software-engineer.webp" medium="image"/>
<content:encoded><![CDATA[<p>Over the past year, AI changed my workflow faster than anything else in engineering. Every few months there was a new editor to try, a new model to test, or a new agent worth taking seriously. Useful, yes. Also tiring.</p>
<p>What I took from that was simple: stop rebuilding your workflow around the current winner. I care more about having a setup that survives outages, pricing changes, and vendor churn than I do about squeezing out the absolute best capability this week.</p>
<p>So this post is not really about moving from Cursor to Claude Code to OpenCode. That is just the timeline. The whole point is that flexibility matters more than peak capability.</p>
<h2>The tipping point</h2>
<p>I started using Cursor seriously around March or April 2025.</p>
<p>At first it helped with the obvious stuff. Moving between files faster. Cleaning up code. Refactoring without losing momentum. It made me quicker, but it did not change how I thought about development yet.</p>
<p>That changed a few months later when the first Opus model got good enough for real development work. I think it was 4 or 4.1. AI tooling stopped feeling like autocomplete on steroids and started feeling like something that could move real work forward.</p>
<p>I still dislike the 10x talk around this stuff. For me it was more like 1.1x or 1.2x on a good day. Still meaningful, because no one dislikes writing boilerplate more than me.</p>
<p>After that, AI was no longer the question. The question was how I could rebuild my workflow around it.</p>
<h2>Back in the terminal</h2>
<p>Later that year, Claude Code got good enough that I could bring that leverage back into the terminal.</p>
<p>That mattered because the terminal is where I actually want to work. I have spent years turning it into the fastest place for me to think. Hundreds of commands. A lot of Neovim muscle memory. Outside of pair programming (I do not want to force Neovim on other people) and the occasional Git diff in VS Code, that is home base.</p>
<p>Claude Code was the first time I got that kind of leverage without having to leave the terminal. I was not trading leverage for comfort anymore.</p>
<p>So the question changed. If the environment finally felt right, what was actually worth optimizing around it?</p>
<h2>The workflow became part of the job</h2>
<p>From there I started spending a couple of hours a week improving the workflow itself.</p>
<p>Not using it. Improving it.</p>
<p>I tested tools, changed prompts, added skills, cleaned up context, and got more deliberate about where expensive models were actually worth paying for.</p>
<p>That work mattered more than any single model release. The real gain came from building a setup that kept working even while the tools kept moving.</p>
<p>That is when OpenCode started making a lot of sense to me.</p>
<h2>Why OpenCode mattered</h2>
<p>I do not use OpenCode because it beats every native tool at everything. It does not. Native tools often have an edge.</p>
<p>I use it because it lets me keep my workflow and swap the model layer underneath it more easily. That matters a lot to me.</p>
<p>I do not want to explain to a client that my setup fell apart because one provider had a rough day. I do not want to rebuild my habits every time pricing changes or the leaderboard shuffles.</p>
<p>If your workflow only works when one company is cheap, online, and still winning, that is not much of a workflow. It is a single point of failure.</p>
<p>I use these tools for hours every day. Reliability matters more to me than squeezing out the last bit of capability.</p>
<h2>Not every task needs the best model</h2>
<p>It also changed how I pick models for different jobs.</p>
<p>Some work really does need the strongest model. Architecture trade-offs. Messy refactors. Unfamiliar systems. Pay for intelligence there.</p>
<p>But a lot of tasks are much smaller than that. Rename this. Move that. Update the tests. Clean up some typing. I am not paying top-tier model prices for chores if I do not have to.</p>
<p>That is partly about cost, sure. It is also about not getting lazy. If the strongest model is always one keystroke away, it gets very easy to stop thinking earlier than you should.</p>
<p>The setup grew out of that.</p>
<h2>The setup itself</h2>
<p>The setup is honestly boring.</p>
<p>I keep the defaults in my dotfiles: shared <code>AGENTS.md</code>, workflow instructions, and skills that I symlink into the tools I use.</p>
<p>Claude looks for skills in <code>.claude/skills</code>. OpenCode looks in <code>.opencode/skills</code>. I do not want copies of the same setup scattered across repos, so I keep one version and link it where needed. That way both tools behave the same on both my Macs.</p>
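<p>The linking itself is a couple of lines of shell. As a sketch, with illustrative paths (my real layout lives in my dotfiles repo), run here inside a temp directory so it is safe to try as-is:</p>

```shell
# One canonical skills directory, linked into each tool's expected path.
# Paths are illustrative; replace the temp dir with your real $HOME.
BASE="$(mktemp -d)"

mkdir -p "$BASE/dotfiles/ai/skills"
echo "shared skill" > "$BASE/dotfiles/ai/skills/example.md"

# Claude Code reads .claude/skills, OpenCode reads .opencode/skills.
mkdir -p "$BASE/.claude" "$BASE/.opencode"
ln -s "$BASE/dotfiles/ai/skills" "$BASE/.claude/skills"
ln -s "$BASE/dotfiles/ai/skills" "$BASE/.opencode/skills"

# Both tools now see the exact same files.
cat "$BASE/.claude/skills/example.md"
```

<p>Because both entries are symlinks to one directory, editing a skill in the dotfiles repo updates it everywhere at once.</p>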
<p>Then I add a repo-level <code>AGENTS.md</code> with only the local context that matters for that project. That gives the agent enough to work with without me repeating myself or waiting for a full repo scan.</p>
<p>The actual work happens in Git worktrees. Usually one task per branch, one agent session per worktree.</p>
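<p>A minimal sketch of that worktree flow, with illustrative branch and directory names, run here against a throwaway repo:</p>

```shell
# One task per branch, one worktree (and agent session) per task.
repo="$(mktemp -d)/demo"
git init -q "$repo" && cd "$repo"
git config user.email "you@example.com"   # illustrative identity
git config user.name "You"
git commit -q --allow-empty -m "init"

# A sibling checkout on its own branch; sessions never share files.
git worktree add -q -b task/fix-login ../fix-login

git worktree list   # the main checkout plus ../fix-login
```

<p>Each worktree is an independent checkout sharing one object store, so parallel agent sessions never clobber each other's files.</p>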
<p><img src="/images/blog/ai-workflow-diagram.svg" alt="Diagram showing shared global agent defaults feeding tool-specific config, repo-level rules, and local project context."></p>
<p>I am not trying to build the cleverest system here. I want something I can inspect, debug, and change without drama.</p>
<p>That is another reason the terminal still wins for me. My prompt tells me where I am. <code>tmux</code> makes it easy to hop between sessions. Worktrees map cleanly to tasks. Nothing is hidden behind a glossy interface that stops being cute the moment something breaks.</p>
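<p>The session hopping can be scripted too. A tiny sketch, assuming worktrees live under <code>~/code/worktrees</code> (the path, the task name, and the dry-run shape are all illustrative):</p>

```shell
# Print the tmux command for a task's worktree; pipe to sh to run it.
# Dry-run on purpose, so the sketch works even without a tmux server.
task_session_cmd() {
  printf 'tmux new-session -d -s %s -c %s/code/worktrees/%s\n' \
    "$1" "$HOME" "$1"
}

task_session_cmd fix-login
```

<p>From there, <code>tmux attach -t fix-login</code> drops you into that task's worktree, and switching tasks is just switching sessions.</p>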
<h2>Final thought</h2>
<p>I am not telling anyone to move to the terminal. Use whatever environment helps you think clearly.</p>
<p>But if AI is going to sit in the middle of your daily workflow, build for resilience first and keep enough flexibility to pick the right model for the job. Capability still matters. I just do not want everything else depending on it.</p>
<p>A workflow is only really yours when it survives the tool you built it around.</p>
]]></content:encoded>
</item>
</channel>
</rss>