<?xml version="1.0" encoding="UTF-8"?>
<rss
  version="2.0"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:media="http://search.yahoo.com/mrss/"
>
<channel>
<title><![CDATA[Software Engineering - Joachim Zeelmaekers]]></title>
<description><![CDATA[Articles tagged "Software Engineering" on Joachim Zeelmaekers.]]></description>
<link>https://joachimz.me/tag/software-engineering/</link>
<image>
<url>https://joachimz.me/favicon.svg</url>
<title><![CDATA[Software Engineering - Joachim Zeelmaekers]]></title>
<link>https://joachimz.me/tag/software-engineering/</link>
</image>
<generator>Astro</generator>
<lastBuildDate>Mon, 06 Apr 2026 00:00:00 GMT</lastBuildDate>
<atom:link href="https://joachimz.me/tag/software-engineering/rss.xml" rel="self" type="application/rss+xml"/>
<ttl>60</ttl>
<language>en-us</language>
<item>
<title><![CDATA[From Slack Pings to Agent Prompts: How Focus Got Harder]]></title>
<description><![CDATA[I used to protect focus from Slack, meetings, and PR reviews. Now the same interruptions are still there, plus a steady stream of agent prompts.]]></description>
<link>https://joachimz.me/blog/from-slack-pings-to-agent-prompts-how-focus-got-harder/</link>
<guid isPermaLink="true">https://joachimz.me/blog/from-slack-pings-to-agent-prompts-how-focus-got-harder/</guid>
<category>software-engineering</category><category>AI</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/from-slack-pings-to-agent-prompts-how-focus-got-harder.webp" medium="image"/>
<content:encoded><![CDATA[<p>For a while, one of the best things I did for myself was stop treating notifications like a live feed.</p>
<p>I would check PR reviews at fixed times, usually around 9 AM, 1 PM, and 4 PM. And I tried to protect two two-hour focus blocks every day.</p>
<p>Was that ever fully real? Not really. Something urgent always slipped through. But even a messy version of that setup helped me get things done.</p>
<p>And I believe that was the point behind all the old context switching talk. If you’re in the middle of a hard problem, interruptions take you out of the zone as an engineer.</p>
<p>If I’m tracing a bug through three services, trying to understand why something only breaks in production, or working through a design decision I still don’t trust, a random Slack ping doesn’t just “take a minute.” It breaks the flow entirely. Then I get to spend the next twenty minutes rebuilding the model in my head.</p>
<p>I used to describe it as being in “the zone,” and I always told people how hard it was to get back into the zone. Not to push people away, but to let people realize that each interruption had a price, and I believe this is becoming more prominent now.</p>
<h3 id="what-used-to-work">What used to work</h3>
<p>Back then, most of the interruptions came from outside the work.</p>
<ul>
<li>Slack</li>
<li>meetings</li>
<li>PR reviews</li>
<li>incident noise</li>
<li>somebody with a “quick question”</li>
</ul>
<p>Part of the job, yes, but at least the source was clear.</p>
<p>This made me think that batching these interruptions should help. And in a way it did for a while, since it at least gave me enough room to process both the interruptions and the work.</p>
<h3 id="what-changed">What changed</h3>
<p>Currently, the old interruptions are still there, but we added a new category on top of them.</p>
<p>Agents.</p>
<p>And I don’t mean that in some dramatic anti-AI way. I use them constantly, and that’s exactly why I am writing this. I’ve been experiencing it first-hand.</p>
<p>If I have a few agents running in parallel, my day starts looking different fast. One is exploring the next task. One is exploring a refactor. One is looking at the latest tech updates (because who can keep up manually nowadays). Then one of them finishes. Another needs clarification. Another wants me to choose between two directions. Another comes back with a wall of questions because my prompt was missing one detail that suddenly matters a lot.</p>
<p>So now the interruptions don’t just come from Slack and meetings. The work itself keeps interrupting me.</p>
<p>That’s the part that feels different.</p>
<p>In the past, context switching mostly came from other people pulling you away from the task. Today, a lot of it comes from managing the task through multiple half-finished threads at the same time.</p>
<h3 id="why-it-feels-heavier">Why it feels heavier</h3>
<p>What makes this harder for me is that it all feels productive.</p>
<p>Slack is obviously an interruption. A meeting is obviously an interruption. But when an agent finishes a task or asks for clarification, it doesn’t feel like an interruption in the same way. It feels like progress, and often it is progress, but sometimes it’s just another thing that wants a piece of your attention NOW.</p>
<p>That is how I end up with days where a lot happened, and I feel tired by the end of it. Not necessarily in a bad way, but in a different way than I am used to.</p>
<p>When you spend the whole day switching between sessions, reviewing outputs, answering follow-up questions, checking Slack, doing PR reviews, deploying services, and moving tickets, you realize how easy it is to forget or miss things. Not because you’re not trying your best, but mainly because of the constant focus you have to keep up.</p>
<p>That’s why this feels heavier to me now. The old interruptions never left. We just layered more on top.</p>
<h3 id="deep-work-still-matters">Deep work still matters</h3>
<p>If anything, I think it matters more now.</p>
<p>Agents don’t remove the need to think clearly at all. If anything, we need to think more about how we approach problems, handle evaluation, and make sure we protect our focus.</p>
<p>The focus is required to make sure that we understand what we’re trying to deliver, that we keep critically evaluating changes and requests, and that we break down the work effectively.</p>
<h3 id="what-im-trying-now">What I’m trying now</h3>
<p>I don’t have a good solution for this yet. If you do, please feel free to share it with me. But a few things seem to help.</p>
<p>Breaking down your work ahead of time helps. A clear plan for the day helps. Fewer active agents (meaning executors that need instant feedback) help. Batching reviews still helps when I can get away with it, although this one is especially tricky with the current rate of change.</p>
<p>Mostly though, I think the big shift is just understanding that the job changed.</p>
<p>We didn’t make context switching disappear. We found a much more productive-looking version of it, and it’s here to stay.</p>
<p>Some will thrive, others will realize it might not be for them. The only thing you can do to deal with this is adapt and overcome.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[When Code Outpaces the Systems Around It]]></title>
<description><![CDATA[AI can help write the code. The harder part is everything after: review, tests, approvals, and ownership.]]></description>
<link>https://joachimz.me/blog/when-code-outpaces-the-systems-around-it/</link>
<guid isPermaLink="true">https://joachimz.me/blog/when-code-outpaces-the-systems-around-it/</guid>
<category>software-engineering</category><category>AI</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/when-code-outpaces-the-systems-around-it.webp" medium="image"/>
<content:encoded><![CDATA[<p>AI has made it easy to turn ideas into code. Teams are seeing genuine speedups in planning, initial implementation, and even early review stages. But as with any engineering advancement, there are trade-offs.</p>
<p>The reality is that while code generation has accelerated, delivery pipelines haven’t kept pace. Test suites still take thirty minutes to run, CI pipelines are flaky, and deployments feel like defusing bombs.</p>
<p>Consider this: Systems have been built that work when engineers are putting out 1-2 changes a day, but they break down at 5-10 changes a day. CI still takes 30 minutes per branch, everything needs merging so PRs pile up, and there is typically one deploy to staging every half hour.</p>
<p>This mismatch is often overlooked. Much of the process that’s been layered onto software development comes with speed trade-offs - sometimes intentional, often just organic evolution.</p>
<p>Now that generating code is cheap, one would expect software roadmaps to fly through delivery. But in most cases, that’s not what happens.</p>
<p>We expected the bottleneck to shift to trust decisions: Is this actually the right change? Does it work correctly? Can we ship it without starting a fire drill?</p>
<p>Yet that hasn’t been the outcome. Let’s examine why.</p>
<h2 id="the-bottleneck-phase-of-ai-adoption">The Bottleneck Phase of AI Adoption</h2>
<p>The first hard limit is review.</p>
<p>Code can be generated faster than it can be read. That’s where things get tricky.</p>
<p>Small change requests turn into 27-file diffs because AI tools touch nearby helpers, “simplify” abstractions that didn’t need changing, update unnecessary tests, and reformat code on the way out. None of it is obviously broken. Some of it might even be fine. But it still leaves reviewers staring at diffs they don’t have the time to read line by line.</p>
<p>Often, reviewers already have their own work open in three other tabs.</p>
<p>Reviews often become superficial. People start to just skim through changes, and they trust a green pipeline. They approve the shape of the change because tracing every consequence takes time they don’t have. This is not out of bad intention, but rather because it’s the only way to catch up.</p>
<p>These review cycles and safety checks exist for a reason—they protect the production environment. The goal isn’t to scrap them, but to evolve them to match the new generation speed.</p>
<p>That’s one of the real costs here. The goal should not only be to ship faster. It should be to ship faster without lowering quality.</p>
<h2 id="the-three-stages-of-ai-adoption-maturity">The Three Stages of AI Adoption Maturity</h2>
<p>A predictable maturation curve emerges as AI integrates into software delivery:</p>
<p><strong>Stage 1: The Code Generation Boom</strong>
Teams discover they can generate code faster than ever before for planning, implementation, and even initial review.</p>
<p><strong>Stage 2: The Bottleneck Phase</strong>
AI gets bolted onto systems that were already slower and messier than desired. Thirty-minute test suites that were tolerable before become absolutely ridiculous in an AI loop. Changes stack up in a queue, and engineers wait for things to move through it. Flaky CI becomes even worse, since it no longer impacts one change but ten. Risky deploys create cautious behavior just when the tooling is pushing teams to move faster.</p>
<p><strong>Stage 3: The Automated Trust Phase</strong>
The real wins come from investing in unsexy but critical infrastructure: fast feedback loops that don’t make you want to scream, clear boundaries so teams aren’t stepping on each other’s toes, boringly reliable deploys, documentation that doesn’t require an archaeology degree to understand, and having enough review capacity to actually keep up with the increased volume.</p>
<p>A flashy AI setup could help move the needle, but lasting improvement comes from evolving systems to match the new pace.</p>
<h2 id="decision-latency-the-next-bottleneck-in-the-maturity-curve">Decision Latency: The Next Bottleneck in the Maturity Curve</h2>
<p>Another bottleneck that emerges is decision latency around process and approvals.</p>
<p>Changes get drafted quickly but sit for days due to additional review requests, sign-offs, unclear test plans, and unclear ownership.</p>
<p>This is what we mean by decision latency. The code can be done in an afternoon. The waiting takes the rest of the week.</p>
<p>When it’s unclear who has the final say, when every change needs three sign-offs, or when acceptance criteria only get clear near the end, AI mostly helps teams finish the code and then wait.</p>
<p>This isn’t just an engineering structure problem. It’s a process problem that needs to evolve alongside AI adoption.</p>
<h2 id="ai-rewards-decomposition">AI rewards decomposition</h2>
<p>This is why breaking work down clearly matters.</p>
<p>A sloppy task used to waste one person’s time. Now it can waste one person’s time at much higher speed while producing a lot of plausible-looking output someone has to untangle later.</p>
<p>When AI helps most, the task looks like this:</p>
<ul>
<li>break a problem into smaller steps that can actually be verified</li>
<li>draft an implementation plan before touching the code</li>
<li>execute a scoped change with clear constraints</li>
<li>review a diff for missing edge cases or weak tests</li>
</ul>
<p>When it helps least, the request sounds like “clean this up” or “improve the architecture” or “just take a pass at this module.”</p>
<p>Those aren’t real tasks. They’re a good way to get a diff nobody asked for.</p>
<h2 id="ownership-matters-more-not-less">Ownership matters more, not less</h2>
<p>When code becomes cheaper to produce, the people who actually matter aren’t the ones who can type the fastest. They’re the ones who can properly scope a change, verify it thoroughly, and stand behind the decision to ship it.</p>
<p>This isn’t nearly as exciting as bragging about 10x productivity gains, but it’s closer to what engineering teams actually need to succeed, without reducing quality.</p>
<p>It doesn’t matter whether the first draft flowed from an AI model, a Stack Overflow snippet, or late-night coding. What matters is who really understood the trade-offs involved, who bothered to check the edge cases, and who’s willing to put their name on it when it hits production.</p>
<p>That’s why ownership becomes more critical, not less, when AI handles the initial draft. The machine can spit out options all day long, but it can’t be paged (yet) when something breaks in the middle of the night. It can’t explain why it made a particular trade-off during Friday’s incident review. It can’t look a stakeholder in the eye and say, “I’ve got this.”</p>
<h2 id="what-actually-helps">What actually helps</h2>
<p>Look, the answer isn’t to artificially slow things down for the sake of purity. The answer is fixing the system around the code.</p>
<p>Here’s what that looks like in practice:</p>
<p>First, break work down properly. Smaller tasks aren’t just easier to manage - they’re easier to generate with AI, easier to verify, easier to review, and way easier to toss out when the model starts hallucinating or going off the rails.</p>
<p>Second, make those feedback loops cheaper. When pumping out more code changes, tests need to be fast, CI needs to be reliable, and deploys need to be boringly predictable. If any of those are slow or flaky, they become instant bottlenecks that negate any speed gains from AI.</p>
<p>Third, get crystal clear on decisions. If every meaningful change requires a meeting and three layers of approval, you’re still optimized for a world where code was hard to write. AI just highlights how broken that process is to begin with.</p>
<p>Fourth, treat review like the bottleneck it often is. If review capacity can’t keep up with the increased volume from AI assistance, you’re just creating a bigger backlog of unreviewed changes. That’s not progress, that’s just creating future problems for yourself. Make thorough reviewing a priority.</p>
<p>Fifth, make ownership obvious. Someone on the team should be able to look at a change and say without hesitation: “I understand this, I’ve verified it, and I’m responsible for it shipping successfully.”</p>
<p>And finally, when the AI tool isn’t actually helping? Stop prompting it. Seriously. Close the chat window, go back to the original problem statement, and rewrite it in simpler terms. Sometimes the best way to use AI is to not use it at all until you’ve clarified what you actually want, because if you don’t know what you want, how do you expect Claude to know what you want?</p>
<p>The upside of AI in software development is absolutely real. It’s a double-edged sword that can be extremely beneficial when the correct approach is taken, but it can produce significant amounts of low-quality output when the wrong approach is used.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[How AI Changed My Workflow as a Software Engineer]]></title>
<description><![CDATA[AI tools changed my workflow fast, but the real lesson was simpler: build for flexibility, resilience, and task-fit instead of chasing peak capability.]]></description>
<link>https://joachimz.me/blog/how-ai-changed-my-workflow-as-a-software-engineer/</link>
<guid isPermaLink="true">https://joachimz.me/blog/how-ai-changed-my-workflow-as-a-software-engineer/</guid>
<category>software-engineering</category><category>AI</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/how-ai-changed-my-workflow-as-a-software-engineer.webp" medium="image"/>
<content:encoded><![CDATA[<p>Over the past year, AI changed my workflow faster than anything else in engineering. Every few months there was a new editor to try, a new model to test, or a new agent worth taking seriously. Useful, yes. Also tiring.</p>
<p>What I took from that was simple: stop rebuilding your workflow around the current winner. I care more about having a setup that survives outages, pricing changes, and vendor churn than I do about squeezing out the absolute best capability this week.</p>
<p>So this post is not really about moving from Cursor to Claude Code to OpenCode. That is just the timeline. The whole point is that flexibility matters more than peak capability.</p>
<h2>The tipping point</h2>
<p>I started using Cursor seriously around March or April 2025.</p>
<p>At first it helped with the obvious stuff. Moving between files faster. Cleaning up code. Refactoring without losing momentum. It made me quicker, but it did not change how I thought about development yet.</p>
<p>That changed a few months later when the first Opus model got good enough for real development work. I think it was 4 or 4.1. AI tooling stopped feeling like autocomplete on steroids and started feeling like something that could move real work forward.</p>
<p>I still dislike the 10x talk around this stuff. For me it was more like 1.1x or 1.2x on a good day. Still meaningful, because no one dislikes writing boilerplate more than me.</p>
<p>After that, AI was no longer the question. The question was how I could rebuild my workflow around it.</p>
<h2>Back in the terminal</h2>
<p>Later that year, Claude Code got good enough that I could bring that leverage back into the terminal.</p>
<p>That mattered because the terminal is where I actually want to work. I have spent years turning it into the fastest place for me to think. Hundreds of commands. A lot of Neovim muscle memory. Outside pair programming, because I do not want to force Neovim on other people, and the occasional Git diff in VS Code, that is home base.</p>
<p>Claude Code was the first time I got that kind of leverage without having to leave the terminal. I was not trading leverage for comfort anymore.</p>
<p>So the question changed. If the environment finally felt right, what was actually worth optimizing around it?</p>
<h2>The workflow became part of the job</h2>
<p>From there I started spending a couple of hours a week improving the workflow itself.</p>
<p>Not using it. Improving it.</p>
<p>I tested tools, changed prompts, added skills, cleaned up context, and got more deliberate about where expensive models were actually worth paying for.</p>
<p>That work mattered more than any single model release. The real gain came from building a setup that kept working even while the tools kept moving.</p>
<p>That is when OpenCode started making a lot of sense to me.</p>
<h2>Why OpenCode mattered</h2>
<p>I do not use OpenCode because it beats every native tool at everything. It does not. Native tools often have an edge.</p>
<p>I use it because it lets me keep my workflow and swap the model layer underneath it more easily. That matters a lot to me.</p>
<p>I do not want to explain to a client that my setup fell apart because one provider had a rough day. I do not want to rebuild my habits every time pricing changes or the leaderboard shuffles.</p>
<p>If your workflow only works when one company is cheap, online, and still winning, that is not much of a workflow. It is a single point of failure.</p>
<p>I use these tools for hours every day. Reliability matters more to me than squeezing out the last bit of capability.</p>
<h2>Not every task needs the best model</h2>
<p>It also changed how I pick models for different jobs.</p>
<p>Some work really does need the strongest model. Architecture trade-offs. Messy refactors. Unfamiliar systems. Pay for intelligence there.</p>
<p>But a lot of tasks are much smaller than that. Rename this. Move that. Update the tests. Clean up some typing. I am not paying top-tier model prices for chores if I do not have to.</p>
<p>That is partly about cost, sure. It is also about not getting lazy. If the strongest model is always one keystroke away, it gets very easy to stop thinking earlier than you should.</p>
<p>The setup grew out of that.</p>
<h2>The setup itself</h2>
<p>The setup is honestly boring.</p>
<p>I keep the defaults in my dotfiles: shared <code>AGENTS.md</code>, workflow instructions, and skills that I symlink into the tools I use.</p>
<p>Claude looks for skills in <code>.claude/skills</code>. OpenCode looks in <code>.opencode/skills</code>. I do not want copies of the same setup scattered across repos, so I keep one version and link it where needed. That way both tools behave the same on both my Macs.</p>
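<p>A minimal sketch of that layout might look like the following. The dotfiles path is my guess at a plausible structure, not the author’s actual setup, and a scratch directory stands in for the home directory:</p>

```shell
# Sketch of a shared-skills layout, linked into each tool's expected
# location. Paths are illustrative, not the author's exact dotfiles.
BASE="$(mktemp -d)"                       # stand-in for $HOME in this sketch

# One canonical copy of the skills lives in the dotfiles repo:
mkdir -p "$BASE/dotfiles/agents/skills"

# Symlink that single copy into each tool's expected directory, so
# Claude Code and OpenCode read identical configuration:
mkdir -p "$BASE/.claude" "$BASE/.opencode"
ln -sfn "$BASE/dotfiles/agents/skills" "$BASE/.claude/skills"
ln -sfn "$BASE/dotfiles/agents/skills" "$BASE/.opencode/skills"
```

<p>Because both links resolve to the same directory, editing a skill once updates it for every tool, and the dotfiles repo stays the single source of truth across machines.</p>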
<p>Then I add a repo-level <code>AGENTS.md</code> with only the local context that matters for that project. That gives the agent enough to work with without me repeating myself or waiting for a full repo scan.</p>
<p>The actual work happens in Git worktrees. Usually one task per branch, one agent session per worktree.</p>
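<p>The worktree flow above can be sketched like this. Repo and branch names are illustrative, not taken from the post:</p>

```shell
# Minimal sketch of a worktree-per-task flow: one branch per task,
# one working directory (and agent session) per worktree.
cd "$(mktemp -d)"
git init demo && cd demo
git -c user.name=me -c user.email=me@example.com \
    commit --allow-empty -m "init"        # worktrees need at least one commit

# Each task gets its own branch and its own directory, so separate
# agent sessions never step on each other's files:
git worktree add ../demo-fix-auth -b fix-auth
git worktree add ../demo-refactor-db -b refactor-db
git worktree list                         # one line per active task
```

<p>When a task ships, <code>git worktree remove</code> cleans up its directory without touching the other sessions.</p>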
<p><img src="/images/blog/ai-workflow-diagram.svg" alt="Diagram showing shared global agent defaults feeding tool-specific config, repo-level rules, and local project context."></p>
<p>I am not trying to build the cleverest system here. I want something I can inspect, debug, and change without drama.</p>
<p>That is another reason the terminal still wins for me. My prompt tells me where I am. <code>tmux</code> makes it easy to hop between sessions. Worktrees map cleanly to tasks. Nothing is hidden behind a glossy interface that stops being cute the moment something breaks.</p>
<h2>Final thought</h2>
<p>I am not telling anyone to move to the terminal. Use whatever environment helps you think clearly.</p>
<p>But if AI is going to sit in the middle of your daily workflow, build for resilience first and keep enough flexibility to pick the right model for the job. Capability still matters. I just do not want everything else depending on it.</p>
<p>A workflow is only really yours when it survives the tool you built it around.</p>
]]></content:encoded>
</item>
<item>
<title><![CDATA[Career Growth Is Not a Meeting You Attend]]></title>
<description><![CDATA[One-on-ones and development plans only work when you treat them as objectives, not calendar events.]]></description>
<link>https://joachimz.me/blog/career-growth-is-not-a-meeting-you-attend/</link>
<guid isPermaLink="true">https://joachimz.me/blog/career-growth-is-not-a-meeting-you-attend/</guid>
<category>career-advice</category><category>personal-growth</category><category>software-engineering</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 02 Mar 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/career-growth-is-not-a-meeting-you-attend.webp" medium="image"/>
<content:encoded><![CDATA[<p>I hear mixed opinions about one-on-ones all the time. For some people they are a blessing. For others they feel like a waste.</p>
<p>My own experience has mostly been positive. I worked with managers who took these conversations seriously. The good one-on-ones were never a ritual to survive. They helped me think clearly, get unstuck, and leave with a better plan.</p>
<p>When I asked people about their experience, I heard the same split, and you can see it in the replies here:</p>
<blockquote class="twitter-tweet" data-theme="dark" data-align="center" data-dnt="true"><p lang="en" dir="ltr">Working on a new post about 1 on 1 meetings, and how they can be extremely valuable for one, but worthless for the other.<br><br>I'm lucky that I've had great mentors and managers, so in my experience they have been extremely valuable, but I hear different experiences.<br><br>What is your...</p>— Joachim Zeelmaekers (@joachimz_me) <a href="https://twitter.com/joachimz_me/status/2027286387251196022?ref_src=twsrc%5Etfw">February 27, 2026</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>Some one-on-ones genuinely shaped careers. People felt seen, challenged, and supported. They still reach out to former leaders because those conversations mattered.</p>
<p>Others felt like calendar fillers. Status updates disguised as coaching. A manager trying to track activity instead of improving conditions.</p>
<p>The difference was intent.</p>
<p>The best one-on-ones happened when the manager showed up curious. Curious about what was blocking me, what was slowing me down, and what I needed to grow. The worst ones happened when there was no preparation.</p>
<p>That is why one-on-ones are powerful. Even though not all of them are perfect, they give YOU the perfect opportunity to grow.</p>
<p>The question is, will you take that opportunity?</p>
<h2 id="preparation">Preparation</h2>
<p>The one-on-ones that failed for me were predictable:</p>
<ul>
<li>no clear goal or agenda</li>
<li>no evidence of progress</li>
<li>no concrete ask</li>
</ul>
<p>In short, no preparation.</p>
<p>When that happens, the conversation defaults to updates, logistics, and whatever fire happened that week.</p>
<p>None of that is bad. Some one-on-ones should be casual like this. But I am talking about the ones that are meant to help you grow.</p>
<h2 id="the-meeting-is-not-where-growth-happens">The meeting is not where growth happens</h2>
<p>I used to think one-on-ones were where growth happened.</p>
<p>Now I think they are where growth is evaluated.</p>
<p>The real work happens before and after the meeting:</p>
<ul>
<li>pick a skill to improve</li>
<li>practice it in real work</li>
<li>ask for targeted feedback</li>
<li>adjust based on what you learned</li>
</ul>
<p>Your manager can support this process. But they cannot run it for you.</p>
<h2 id="what-to-bring-to-every-one-on-one">What to bring to every one-on-one</h2>
<p>Once one-on-ones are properly structured, they get much better. So make sure to structure them. You don’t like the format? Change it.</p>
<p>Here are the four things that worked for me.</p>
<h3 id="1-a-short-wins-list">1. A short wins list</h3>
<p>Not for ego. For evidence.</p>
<p>What did you deliver? What improved? What impact did it have?</p>
<p>You can briefly discuss it with your manager, and at the same time you can use it for future evaluations. Win-win!</p>
<h3 id="2-one-blocker">2. One blocker</h3>
<p>Not a list of complaints. One blocker that matters.</p>
<p>Maybe your pull requests take too long to merge. Maybe stakeholder alignment is messy. Maybe you freeze in architecture discussions.</p>
<p>Pick one. Make it specific. That way it can be talked about, reviewed and actioned.</p>
<p>A good manager does not need this meeting to track your tasks. There are better systems for that. The meeting is more useful when it focuses on what is preventing good work.</p>
<h3 id="3-one-skill-gap-you-are-working-on-optional-sometimes">3. One skill gap you are working on (optional sometimes)</h3>
<p>This is the part most people skip.</p>
<p>Say it directly: I need to improve at X. Here is how it shows up and this is what I’ll be doing about it.</p>
<p>Examples:</p>
<ul>
<li>turning ambiguous tasks into executable plans</li>
<li>giving clear technical updates to non-technical stakeholders</li>
<li>writing smaller, easier-to-review pull requests</li>
</ul>
<p>This will be a way to keep yourself accountable in the next session. Did you really improve, or were those just meaningless words?</p>
<h3 id="4-one-concrete-ask">4. One concrete ask</h3>
<p>This one depends on the frequency of your one-on-ones. If you have a weekly one, asking for something every week can feel very needy.</p>
<p>But a general rule of thumb should be if your one-on-one ends without an ask, do not expect much to change.</p>
<p>Your manager is busy. They cannot read your mind, so if you don’t ask for anything, expect nothing.</p>
<p>Ask for something actionable:</p>
<ul>
<li>Can I lead the next retrospective?</li>
<li>Can we define what senior-level ownership looks like for my scope?</li>
<li>Can you review my design doc and challenge my assumptions?</li>
</ul>
<p>The least you can get is a no. So ask what you want to ask, and make it clear.</p>
<h2 id="development-plans-that-actually-survive">Development plans that actually survive</h2>
<p>Most development plans start strong and then disappear. It’s easy to create a plan, but it’s hard to execute it.</p>
<p>Not because people are lazy. The opposite actually.</p>
<p>I struggled with this a lot. I always prioritized my work over my personal development time. Not because I didn’t want to do it, but more because I felt I had to work harder, do more. But ultimately, it makes you do less in the long run.</p>
<p>What worked better for me was treating growth like a real project, because real projects are exciting and make me want to work on them. You also have something to show for it, which gives the same dopamine hit as finishing tickets or implementing new features.</p>
<p>So where can I start, you might ask? Build a six-month plan, then break it into trackable pieces.</p>
<p>Six months is long enough to change behavior and short enough to stay grounded.</p>
<p>A simple structure:</p>
<ul>
<li>Pick one growth theme for six months.</li>
<li>Define what “better” looks like in observable terms.</li>
<li>Break it into monthly focus areas.</li>
<li>Create one monthly opportunity to practice.</li>
<li>Review monthly and adjust.</li>
</ul>
<p>“Become a better leader” is not a plan.</p>
<p>“Over the next six months, lead one cross-team technical discussion per month, write a short recap, and ask two stakeholders for written feedback each time” is a plan.</p>
<h2 id="the-uncomfortable-part">The uncomfortable part</h2>
<p>It is easy to say my manager is not helping me grow.</p>
<p>Sometimes that is true. Some managers are not the best coaches. Some environments are genuinely limiting. Some organizations split people management and technical leadership, which makes you focus on one or the other instead of working on both.</p>
<p>But in many cases, the harder truth is this: we expect growth to happen around us, not through us.</p>
<p>We wait for better projects, better timing, better guidance, better recognition.</p>
<p>Meanwhile, weeks pass, then months.</p>
<p>One-on-ones do not create progress by themselves. They reveal whether progress is happening. If every meeting feels the same and nothing is moving, that is the moment to reset expectations and change the inputs.</p>
<h2 id="final-thought">Final thought</h2>
<p>One-on-ones matter. Development plans matter.</p>
<p>But they are not magic.</p>
<p>If you’re not coming up with anything, and your manager is not delivering what you’re expecting, you will stay where you are indefinitely.</p>
<p>It’s up to you to make things happen.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[Rebuilding My Blog From the Ground Up]]></title>
<description><![CDATA[Leaving Ghost for Astro and Cloudflare Pages: owning my blog stack]]></description>
<link>https://joachimz.me/blog/rebuilding-my-blog-from-the-ground-up/</link>
<guid isPermaLink="true">https://joachimz.me/blog/rebuilding-my-blog-from-the-ground-up/</guid>
<category>software-engineering</category><category>writing</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 23 Feb 2026 06:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/rebuilding-my-blog-from-the-ground-up.webp" medium="image"/>
<content:encoded><![CDATA[<p>My blog ran on <a href="https://ghost.org/">Ghost</a> for years and felt fast enough. Then I rebuilt it with Astro and Cloudflare Pages.</p>
<p>Page load dropped from about one second to under 300 ms, the stack cost went to zero, and I finally owned every part of the experience.</p>
<p>This is why I left Ghost and rebuilt the blog from the ground up.</p>
<h2>Why I Left Ghost</h2>
<p>Ghost is a great product. If you want a managed blogging platform with a clean editor and built-in newsletter support, it does the job well.</p>
<p>My reasons for leaving were not about quality. They were about control and fit.</p>
<p>Since my goal this year is to start publishing weekly, I wanted my blog to feel completely mine: faster, simpler, and shaped exactly to my preferences. Ghost did not allow the level of customization I wanted without workarounds.</p>
<p>The bigger issue was overhead. Ghost Pro is not cheap, and self-hosting Ghost means running a server, a database, and ongoing upgrades. For a personal blog with one post a week, that stack felt like overkill.</p>
<p>I expected the newsletter to be the hardest part to replace. It turned out not to be.</p>
<h2>Choosing the Stack</h2>
<p>I did not spend weeks evaluating frameworks like I would have in the past. The decision was pretty simple.</p>
<p>Lately I had been hearing a lot about <strong><a href="https://astro.build/">Astro</a></strong> and I wanted to try it out. It ships hardly any JavaScript by default, supports Markdown and MDX natively, and gives you full control over the output. No hydration overhead, no client-side routing unless you explicitly opt in. Just HTML and CSS.</p>
<p><strong><a href="https://pages.cloudflare.com/">Cloudflare Pages</a></strong> for hosting. Free tier, global CDN, automatic deployments using GitHub Actions. The site builds in seconds and deploys to edge locations worldwide. No servers to manage, no bills to worry about.</p>
<p><strong>Markdown files</strong> instead of a database. Like we all know, lately everything has been about Markdown files, so why not put every post in a Markdown file in a Git repository? I can edit a post in any editor, preview locally, and publish by pushing to main. Version history comes for free.</p>
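<p>As a sketch of what that setup can look like, here is a minimal Astro content collection definition. The exact frontmatter fields below are hypothetical, not the ones this blog actually uses, but <code>defineCollection</code> and the <code>z</code> schema helper are Astro’s own content collections API:</p>

```typescript
// src/content/config.ts — a minimal, hypothetical schema for blog posts.
// Each Markdown file's frontmatter is validated against this at build time.
import { defineCollection, z } from 'astro:content';

const blog = defineCollection({
  type: 'content', // Markdown/MDX posts
  schema: z.object({
    title: z.string(),
    description: z.string(),
    pubDate: z.coerce.date(), // accepts "2026-02-23" in frontmatter
    tags: z.array(z.string()).default([]),
  }),
});

export const collections = { blog };
```

<p>The nice side effect of a schema like this is that a typo in a post’s frontmatter fails the build instead of silently shipping a broken page.</p>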
<p>For my newsletter, with its full 12 subscribers, I first thought about not including it anymore. But then I found <a href="https://kit.com/">Kit</a>, which offers 10,000 subscribers for free. Perfect for my use case.</p>
<p>The entire stack costs me nothing beyond the domain name, and my time of course.</p>
<h2>Performance Results</h2>
<p>The page load numbers are the clearest win.</p>
<p>In real usage the Ghost site felt reasonably fast, around one second to visible content. However, Lighthouse mobile simulation revealed much heavier rendering costs, especially for LCP. The Astro rebuild reduces both perceived and measured costs.</p>
<p>The architecture shift from dynamic rendering to static edge delivery shows up clearly in performance metrics.</p>
<p>Here is a Lighthouse snapshot (single run, mobile emulation, default throttling) comparing the old Ghost site with the new Astro build:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>Ghost (before)</th>
<th>Astro (after)</th>
</tr>
</thead>
<tbody><tr>
<td>Performance score</td>
<td>63</td>
<td>77</td>
</tr>
<tr>
<td>Accessibility score</td>
<td>80</td>
<td>100</td>
</tr>
<tr>
<td>Best Practices score</td>
<td>81</td>
<td>100</td>
</tr>
<tr>
<td>SEO score</td>
<td>100</td>
<td>100</td>
</tr>
<tr>
<td>FCP</td>
<td>2,597 ms</td>
<td>1,049 ms</td>
</tr>
<tr>
<td>LCP</td>
<td>23,758 ms</td>
<td>4,792 ms</td>
</tr>
<tr>
<td>Speed Index</td>
<td>7,857 ms</td>
<td>1,296 ms</td>
</tr>
<tr>
<td>Total page weight</td>
<td>7.71 MB</td>
<td>0.42 MB</td>
</tr>
<tr>
<td>Requests</td>
<td>36</td>
<td>28</td>
</tr>
</tbody></table>
<p>The largest improvements come from removing heavy images and client-side scripts. Total page weight dropped by over 18x, which directly improved Speed Index and LCP.</p>
<h2>Design + UX Updates</h2>
<p>The better question would be: what did not change? Well, the content.</p>
<p>The homepage moved from a stock template to a focused, branded entry point with clear hierarchy and fewer distractions.</p>
<p>Beyond speed, the design is entirely mine. The goal was a calmer, more intentional experience with room to grow. I can now iterate quickly and add new features without fighting the platform.</p>
<p>The new list view trades heavy thumbnails and whitespace for tighter hierarchy and faster scanning. Smaller images, consistent metadata, and clearer contrast mean you can find a post at a glance without hunting for it.</p>
<p>The reading view moves from a template-heavy page to a focused editorial layout. Better line length, calmer typography, and fewer competing elements keep attention on the text and reduce cognitive load.</p>
<p>The CTA experience is now integrated into the reading flow instead of floating over it. Recommendations feel like a natural next step, not an interruption, which keeps the page focused while still nudging the next click.</p>
<p>The newsletter now feels like part of the site, not a third-party embed. The form is lighter, the spacing is calmer, and the CTA matches the rest of the system, so it reads as an invitation rather than a modal ad.</p>
<p><img src="/images/blog/rebuild/new-features.webp" alt="Site features overview in the new blog"></p>
<p>The benefit of building this site myself is being able to introduce more features to improve the experience:</p>
<ul>
<li>Search across every post (full text, fast).</li>
<li>An About page that actually explains what I write about.</li>
<li>Dark/light mode that preserves the brand tone.</li>
<li>One-click code copy for snippets.</li>
<li>Dedicated tag pages so topics are easy to follow.</li>
<li>Posts grouped by year for quick scanning.</li>
</ul>
<h2>The Takeaway</h2>
<p>If you are a developer blogging on a managed platform, ask yourself what you are getting in return for that convenience. If the answer is &quot;less control and a monthly bill,&quot; it might be time to build your own.</p>
<p>It does not need to be complicated. A static site generator, a CDN, and Markdown files will take you further than most platforms. And the process of building it will teach you more than any blog post about blogging ever could.</p>
<p>Including this one.</p>
]]></content:encoded>
</item>
<item>
<title><![CDATA[The Cost Of Keeping What You Should Remove]]></title>
<description><![CDATA[Every product accumulates features that cost more to keep than they are worth. Here's why keeping them is the most expensive form of laziness.]]></description>
<link>https://joachimz.me/the-cost-of-keeping-what-you-should-remove/</link>
<guid isPermaLink="true">https://joachimz.me/the-cost-of-keeping-what-you-should-remove/</guid>
<category>software-engineering</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 16 Feb 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/the-cost-of-keeping-what-you-should-remove.webp" medium="image"/>
<content:encoded><![CDATA[<p>Every product I have worked on has features that cost more to keep than they are worth. Not broken features. Not controversial ones. Features that work fine, get used by almost nobody, and sit there quietly taking up space.</p>
<p>I have kept features like that alive for years. Not because they added value, but because removing them felt like more work than leaving them in place. They worked. They were not hurting anyone. Why bother?</p>
<p>That reasoning felt pragmatic at the time. I have since learned it is the most expensive form of laziness in software.</p>
<h2 id="the-lie-we-tell-ourselves">The lie we tell ourselves</h2>
<p>“It works, so it is free to keep.”</p>
<p>This is what we say when we do not want to think about it. And for a while, it’s true. The features sit there, untouched, collecting dust. Nobody complains. Nobody celebrates. They just exist.</p>
<p>But software does not work like this. Dependencies shift. Frameworks update. Security patches ripple through layers you forgot were connected. Those quiet features are not free. They are accumulating debt.</p>
<p>I learned this the hard way many times. A release went out, and three weeks later a bug report came in from a single user clicking a button I had forgotten existed. Someone had to context-switch, investigate, trace the regression, fix it, and report back. Half a day of engineering time for a feature that served one person who probably would not have noticed if the button had simply disappeared.</p>
<p>That was one feature, one bug report. Now multiply it across every neglected corner of a product.</p>
<h2 id="the-weight-you-stop-noticing">The weight you stop noticing</h2>
<p>One unused feature is manageable. You barely think about it. But products do not collect one. They collect dozens, quietly, over years. A toggle nobody clicks. A report nobody reads. An export format nobody requests. Each one seemed harmless when it shipped. Each one still seems harmless on its own.</p>
<p>But collectively, they change the shape of every conversation. A design review takes longer because the interface has no space left. A dependency upgrade touches code paths nobody remembers writing. A new engineer spends their first week building a mental model of a system where a third of it serves no one.</p>
<p>On some occasions, removing unused features is not even on the table. Not because anyone defends them, but because nobody can confidently say which ones are unused. The tracking might not be there. Or the institutional knowledge left with the people who built them.</p>
<p>That is how it compounds. Not through any single decision, but through the absence of decisions. The features you never question become the ones that define your constraints.</p>
<h2 id="why-we-still-keep-them">Why we still keep them</h2>
<p>I think the real reason we keep unused features is not pragmatism. It is discomfort.</p>
<p>Removing a feature feels like admitting failure. Someone built it. Someone approved it. Maybe it was your idea. Deleting it means acknowledging that the effort was, in some sense, wasted.</p>
<p>I have felt that discomfort. I have argued to keep things I built, not because they mattered to users, but because they mattered to me. That is not product thinking. That is ego dressed up as engineering judgment.</p>
<p>There is also the fear of the edge case you cannot see. What if that one user depends on it? What if removing it breaks something downstream? These fears are real, but they are also answerable. Usage data exists. Conversations can be had. The fear of finding out is not a reason to avoid looking. Certainly not when you can just bring the feature back if complaints come in.</p>
<h2 id="when-keeping-is-the-right-call">When keeping is the right call</h2>
<p>I do not think the answer is to remove everything with low usage. Some features serve a small audience that depends on them deeply. Some exist because of contractual obligations or regulatory requirements. Some are niche but genuinely valuable for power users.</p>
<p>There is a difference between keeping a feature because you evaluated it and decided it still earns its place, and keeping it because nobody bothered to ask the question.</p>
<h2 id="the-discipline-of-subtraction">The discipline of subtraction</h2>
<p>Every feature has a rent. Some pay for themselves through usage, through value, through the problems they solve. Others just accumulate cost quietly, making everything around them a little harder, a little slower, a little more constrained.</p>
<p>Adding features is easy to celebrate. Shipping feels good. But the products I admire most are not the ones with the most features. They are the ones where every feature earns its place. Where someone had the discipline to ask: does this still serve the people using this product?</p>
<p>The discipline is not in building more. It is in being honest about what deserves to stay.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[What Code Reviews Actually Teach Us]]></title>
<description><![CDATA[A reflection on why code reviews matter, why they feel personal, and how they protect teams from costly mistakes.]]></description>
<link>https://joachimz.me/code-reviews/</link>
<guid isPermaLink="true">https://joachimz.me/code-reviews/</guid>
<category>software-engineering</category><category>personal-growth</category><category>career-advice</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/code-reviews.webp" medium="image"/>
<content:encoded><![CDATA[<p>If I had to name something that taught me a lot as a software engineer, it would be the code review.</p>
<p>I hear a lot of complaints about it. It’s harsh, it demotivates people, it slows down shipping. And in some cases, that’s true. When code reviews are used to gatekeep or show superiority rather than improve the work, they’re genuinely harmful. That’s not okay, and I’m not going to defend that.</p>
<p>But I think something gets lost in those complaints. Something worth examining.</p>
<h2 id="the-woodworkers-table">The woodworker’s table</h2>
<p>Imagine you’re a hobbyist woodworker. You’ve spent weeks building a table. In your mind, it’s perfect. Exactly how you envisioned it. But before you let your family sit around it at dinner, you invite a professional woodworker to take a look. Someone with ten-plus years of experience creating tables. You ask for their honest opinion.</p>
<p>They walk around the table, run their hand across the surface, check the joints. Then they give you their feedback.</p>
<p>The leg structure isn’t positioned to distribute weight evenly. The wood hasn’t been treated properly, so the color will fade within a year. The technique works for a shelf but not for something that needs to bear this kind of load.</p>
<p>Your first reaction? Frustration. Maybe even anger. <em>This is my first table. I’m not a professional. Why are they being so harsh?</em></p>
<p>Then you take a breath. You realize you asked for their honest feedback. And you know that if you use the table as is, it might collapse one evening with your kids sitting around it.</p>
<p>The story is fictional, but I think it maps pretty well to what happens in a code review (give or take). Someone with more context (not always more experience, but more context about that specific area) looks at your work and tells you what they see. Not to be cruel. Only to protect the thing you’re building from the things you couldn’t see yet.</p>
<h2 id="the-code-review-protects-us-from-ourselves">The code review protects us from ourselves</h2>
<p>The code review is there to catch what we miss when we’re too close to the problem. Our excitement about a clever solution. Our limited knowledge of a system we’ve only touched for a few days. Our frustration with a task that’s taken longer than expected, pushing us to just get it over with.</p>
<p>I’ve been on both sides of this. I’ve submitted code I knew wasn’t ready because I was tired of looking at it. And I’ve reviewed code where the problems were obvious to me because I’d made the exact same mistakes six months earlier.</p>
<p>That’s the thing people forget. The reviewer isn’t judging you from some position of superiority. They’ve been where you are. Most of the feedback they give comes from their own past mistakes and experience.</p>
<h2 id="taking-it-personal">Taking it personally</h2>
<p>The first challenge I’ve noticed is that engineers take review feedback personally.</p>
<p>This is a completely natural response, especially when you’re not used to receiving direct feedback on your work. But it limits your thinking. When you’re focused on defending what you wrote, you stop being able to see what you could improve.</p>
<p>I remember at least a dozen reviews early in my career where senior engineers left a comment saying my approach was “unnecessarily complex” or way too brittle. At first, I’d often try to prove them wrong. But after a while I realized I was compensating for my ego instead of trying to create the best solution for the problem. That compensating often cost me hours, hours I could have put towards other work. Nowadays, a code review for me is a learning opportunity, where I want people to find the things I missed, so that we as a team can deliver the highest possible quality.</p>
<h2 id="respecting-the-reviewers-time">Respecting the reviewer’s time</h2>
<p>The second challenge is one that doesn’t get talked about enough: respecting the reviewer’s time.</p>
<p>This is partially driven by delivery pressure. There’s always a deadline, always a stakeholder asking when it’ll be done. But that pressure doesn’t change the reality that someone is about to spend their time reading your code, understanding your decisions, and giving you thoughtful feedback. That’s a gift, not an obligation. And it comes with a cost.</p>
<p>I’ve seen too many pull requests where the author clearly didn’t review their own code before requesting a review. Leftover debug statements. Commented-out blocks. TODO comments that should have been addressed. An entirely broken CI. Things you’d catch in thirty seconds if you just read through it one more time.</p>
<p>My approach is simple. Once I think I’m done, I step away. I go get a coffee or work on something else for a bit. Then I come back and review my own pull request as if someone else wrote it. I leave comments explaining trade-offs I made. I catch the obvious stuff before anyone else has to. This does two things. It shows the reviewer that I value their time as much as my own. And it often catches problems I would have been embarrassed to have someone else find.</p>
<h2 id="a-learning-opportunity-not-a-chore">A learning opportunity, not a chore</h2>
<p>The third challenge is maybe the most important one: code reviews have become a chore instead of a learning opportunity.</p>
<p>A review comment you don’t understand isn’t an interruption. It’s the start of a conversation. An approach you’ve never seen before isn’t confusing, it’s a chance to learn something new or understand something better.</p>
<p>I’ve learned more from asking “why did you suggest this?” in a code review than from most technical articles I’ve read. Those conversations, where someone walks you through their reasoning, where you push back and they explain further, where you both arrive at something better than either of you started with. That’s where real growth happens.</p>
<p>And it goes both ways. Some of the best technical discussions I’ve ever had started with a junior engineer asking me “why?” on a review comment I left. Forcing me to articulate my reasoning made me realize that sometimes I didn’t have a good reason. I was just pattern-matching against habits I’d never questioned. I might also not have explained it properly, because if I leave a comment, my reasoning should be obvious to the reader of that comment. Otherwise it’s just noise.</p>
<h2 id="shipping-still-matters">Shipping still matters</h2>
<p>I want to be clear about something. I understand that shipping matters. Deadlines are real. Business pressure is real. I’m not arguing that every pull request needs a three-day philosophical discussion.</p>
<p>But catching issues early in the process is significantly less painful than a 2 AM incident with four people on a call trying to figure out what went wrong. The code review is one of the cheapest places to find problems. Cheaper than staging. Cheaper than production. Infinitely cheaper than a postmortem.</p>
<p>The thirty minutes a reviewer spends on your pull request might save your team ten hours later. That’s not slowing down. That’s investing.</p>
<hr>
<p>I’ve been doing code reviews for years now, on both sides of the table. I still learn something most of the time. Not always something technical. Sometimes it’s about communication, about how to give feedback that helps instead of hurts, about how to receive feedback without letting my ego get in the way.</p>
<p>The code review isn’t perfect. But when it’s done with mutual respect, genuine curiosity, and a shared goal of making the work better, it’s one of the most valuable things we do as engineers.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[Commitment Without Agreement: What Great Teams Understand]]></title>
<description><![CDATA[The best engineering teams share four traits: full commitment to decisions, separating ideas from identity, pragmatic judgment, and informed captains.]]></description>
<link>https://joachimz.me/what-great-teams-understand/</link>
<guid isPermaLink="true">https://joachimz.me/what-great-teams-understand/</guid>
<category>software-engineering</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/what-great-teams-understand.webp" medium="image"/>
<content:encoded><![CDATA[<p>I’ve been on a lot of teams. Some of them looked great on paper. Smart people. Good intentions. Modern stack. And somehow, every decision turned into a three-week debate that ended with everyone exhausted and no one satisfied.</p>
<p>I’ve also been on teams that delivered value fast and consistently.</p>
<p>The difference wasn’t talent. It was something harder to define, and harder to build. I spent years assuming I’d figure it out eventually, that experience would make it obvious.</p>
<p>It took longer than I expected. But I believe I can finally name it.</p>
<p>The best teams I’ve worked with share four traits.</p>
<p>Not methodologies. Not tools. Traits. Ways of being together that make everything else work.</p>
<h2 id="1-they-commit-fully-to-decisions-they-disagree-with">1. They commit fully to decisions they disagree with</h2>
<p>You’ve heard “disagree and commit” before. I believe most teams get it wrong.</p>
<p>They think it means: voice your objection, get outvoted, then quietly comply.</p>
<p>That’s not commitment. That’s resignation.</p>
<p>Real commitment looks different. It means putting all your energy into an approach you wouldn’t have chosen. It means actively trying to make it succeed, not waiting for it to fail so you can say “I told you so.”</p>
<p>I’ll be honest: I’ve failed at this more times than I’d like to admit. Early in my career, I’d disagree with a decision, get overruled, and then just follow blindly. I wasn’t sabotaging anything (at least not consciously), but I wasn’t helping either. That’s a terrible way to work. And it’s a terrible way to be on a team.</p>
<p>I watched an engineer do the opposite. He argued passionately against an architectural change for weeks. Good arguments, valid concerns. The team decided to proceed anyway.</p>
<p>What did he do? He became its biggest champion. He found edge cases the rest of us missed, started looking at improvements to actively make the idea better, and supported it even more. From that point on, I understood disagree and commit.</p>
<h2 id="2-they-separate-ideas-from-identity">2. They separate ideas from identity</h2>
<p>Here’s a pattern I see constantly: someone proposes an approach, another person critiques it, and suddenly we’re not talking about the approach anymore. We’re talking about who’s right or wrong, not what is right or wrong.</p>
<p>High-performing teams have figured out how to argue about ideas without making it personal. They can say “I think this approach has problems” without anyone hearing “I think you’re incompetent.”</p>
<p>This goes both ways. You need people who can give direct feedback without being cruel. And you need people who can receive direct feedback without feeling attacked.</p>
<p>Neither skill is natural. Both can be learned. I’m still constantly learning them, if I’m being honest.</p>
<p>The teams that figure this out move fast. The teams that don’t spend half their energy managing emotions instead of solving problems.</p>
<h2 id="3-theyre-pragmatic-without-being-careless">3. They’re pragmatic without being careless</h2>
<p>There’s a certain kind of engineer who treats every decision like a thesis. Every choice needs to be the theoretically optimal solution, fully considered, perfectly designed, with 17 Confluence pages alongside it.</p>
<p>There’s another kind who just wants to ship something, anything, and deal with the consequences later.</p>
<p>The best teams live in between these two. They care about doing things well, but they also understand that a good solution today beats a perfect solution next quarter. They also realize that everything has its price, and that they might be building something that is not needed in six months.</p>
<p>But (and this matters) pragmatic teams like this don’t use “move fast” as an excuse for sloppy thinking. They make quick decisions, but informed ones. They cut scope, not corners.</p>
<p>Finding that balance is hard. But the teams that find it tend to build things that actually get used, while keeping stakeholders and, most importantly, users happy.</p>
<h2 id="4-they-have-informed-captains">4. They have informed captains</h2>
<p>Every decision needs an owner. Not a committee. Not “the team.” One person.</p>
<p>Netflix popularized the term informed captains, but the name doesn’t matter. What matters is this: someone has the context, the authority, and the accountability to make the call.</p>
<p>The key word is “informed”. A captain who doesn’t understand the problem is just a bottleneck. A captain who understands deeply can move fast because they don’t need to ask for details or approval before every decision.</p>
<p>To be clear, informed captains aren’t dictators. They gather input. They listen to concerns. They genuinely consider alternatives (and I mean genuinely, not the kind where the decision was already made before the discussion started). But when it’s time to decide, they decide. And the team commits.</p>
<p>This only works if you’ve already built the first three traits. Commitment requires trust. Trust requires the ability to disagree without damage. And the captain needs pragmatic judgment to know when good enough is good enough.</p>
<h2 id="the-uncomfortable-truth">The uncomfortable truth</h2>
<p>None of this is a process you can implement. You can’t mandate commitment. You can’t enforce the separation of ideas from identity. These are cultural traits, and culture is built one interaction at a time.</p>
<p>I used to think the answer was hiring: find the right people and everything clicks. But I’ve seen great individuals form dysfunctional teams, and average individuals form extraordinary ones. It’s not about who’s in the room. It’s about what happens between them.</p>
<p>That’s the thing about these four traits. You don’t notice them when they’re present. Everything just works and you assume it’s because the people are talented or the project is well-scoped. You only notice their absence: in the meetings that drag, the decisions that unravel, the slow bleeding of momentum that no retro seems to fix.</p>
<p>I’m still learning how to build these types of teams. I suspect I always will be.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[Revisiting: Clean Up Your Code by Applying These 7 Rules]]></title>
<description><![CDATA[A reflection on clean code, five years later. What still holds up, what feels incomplete, and how experience changed how I think about cleanup.]]></description>
<link>https://joachimz.me/revisiting-clean-up-your-code-by-applying-these-7-rules/</link>
<guid isPermaLink="true">https://joachimz.me/revisiting-clean-up-your-code-by-applying-these-7-rules/</guid>
<category>clean-code</category><category>software-engineering</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/revisiting-clean-up-your-code-by-applying-these-7-rules.webp" medium="image"/>
<content:encoded><![CDATA[<p>About five years ago, I wrote a post titled “Clean Up Your Code by Applying These 7 Rules.” At the time, I was focused on writing clearer, more readable code and on sharing practical techniques that helped me improve my day-to-day work.</p>
<p>Looking back, that makes sense. Clean code is an attractive idea. It promises clarity, maintainability, and a sense of control over growing complexity. Back then, I approached the topic mostly from the perspective of an individual contributor trying to write better software.</p>
<p>Five years later, I still agree with the intent of that post. What has changed is my understanding of when and how those ideas should be applied.</p>
<p>This is not a rewrite of the original article. It is a reflection on what still holds up, what feels incomplete today, and what experience added that rules alone could not.</p>
<p>If you want to read the original version first, you can find it here 👇</p>
<p><a href="https://joachimz.me/clean-up-your-code-by-applying-these-7-rules/">Clean Up Your Code by Applying These 7 Rules</a> - In this post, I will go over some of the rules that you can apply to improve your code. Every developer on the team, regardless of their skill level, has to be able to understand the code by reading it. This helps young developers to be more confident in writing code.</p>
<h2 id="what-still-holds-up">What still holds up</h2>
<p>Several ideas from the original post aged well in my day-to-day, mostly because they are rooted in communication rather than technique.</p>
<p>Readable code is still one of the best investments you can make. Clear naming, simple control flow, and small, focused units of logic make it easier for others to understand what a piece of code is trying to do. That has not changed, or at least not yet.</p>
<p>The idea behind the Boy Scout Rule also still resonates. Leaving code slightly better than you found it is a healthy mindset. Small improvements compound over time, and a codebase benefits from care and attention rather than neglect.</p>
<p>These principles work because they are not tied to tools or trends. They are about helping the next person, which often turns out to be your future self. Or about creating a path for an LLM to write readable code based on yours.</p>
<h2 id="where-my-view-has-changed">Where my view has changed</h2>
<p>What feels incomplete today is the assumption that cleanup is always the obvious next step.</p>
<p>Earlier in my career, I treated cleanup mostly as a local activity. I saw something I did not like, applied a rule, and moved on. Over time, I learned that cleanup is rarely just about code. It is about context, ownership, timing, and intent.</p>
<p>Refactoring without understanding why code exists can be risky. A piece of logic that looks redundant or overly defensive might be protecting against a constraint you are not yet aware of. Removing it can introduce subtle bugs or undo decisions that were made deliberately. It can also be a costly exercise when you come to the realization that, after refactoring, you ended up with the same solution because of those unknown constraints.</p>
<p>I also underestimated how often technical decisions are driven by product needs, deadlines, or organizational constraints. In those cases, “clean” was not the primary goal. Shipping something that worked was.</p>
<p>The rules were never wrong, but they were incomplete without context.</p>
<h2 id="what-experience-added">What experience added</h2>
<p>Working on larger and older systems changed how I look at cleanup. Inheriting code I did not write forced me to slow down. It became clear that understanding usually creates more value than immediately improving aesthetics.</p>
<p>I also started to see cleanup as a shared responsibility rather than an individual one. In a team, even well-intended refactoring can have side effects. Changes that feel small locally can ripple outward and affect others in unexpected ways.</p>
<p>That is where context becomes crucial. Knowing why something exists matters more than knowing how to improve it.</p>
<h2 id="principles-over-rules">Principles over rules</h2>
<p>If I had to summarize how my thinking evolved, it would be this: clean code is not a checklist, it is a byproduct of understanding.</p>
<p>Rules are helpful starting points, but judgment is what makes them useful. Sometimes the right choice is to refactor. Sometimes it is to document intent. Sometimes it is to leave things as they are and revisit them later.</p>
<p>The goal is not perfection. The goal is stewardship, and delivering value to the customers.</p>
<h2 id="closing-thoughts">Closing thoughts</h2>
<p>I still believe in writing clean code. I also believe that respecting what came before is part of that responsibility. Code is a record of decisions made under pressure, with the information that was available at the point of writing it and often under constraints we no longer see.</p>
<p>Revisiting my old post reminded me that learning is not always about new ideas. Sometimes reframing old ones can be even more valuable.</p>
<p>In general, I can say that these rules have helped me write better code over the past 5 years. Experience, however, taught me when to apply them.</p>
</item>
<item>
<title><![CDATA[Architecture Without Constraints Is Just Speculation]]></title>
<description><![CDATA[Over-engineering early products slows development and increases costs long before scale becomes real.]]></description>
<link>https://joachimz.me/architecture-without-constraints-is-just-speculation/</link>
<guid isPermaLink="true">https://joachimz.me/architecture-without-constraints-is-just-speculation/</guid>
<category>software-engineering</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Tue, 13 Jan 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/architecture-without-constraints-is-just-speculation.webp" medium="image"/>
<content:encoded><![CDATA[<p>Early in my career, I built a lot of prototypes. MVPs with tight budgets, tighter deadlines, and one clear goal: work well enough to demo. There were no tests, little to no CI/CD, and no real discussion about architecture. The software only needed to support a handful of carefully prepared scenarios and integrate with a few APIs. If the demo succeeded, the customer would decide whether the product was worth building properly.</p>
<p>These systems were throwaway by design. Longevity was not a requirement, so architecture was not a concern.</p>
<p>Later, I moved into product development. We were building for real users, real usage, and real growth. Architecture suddenly mattered a lot more. But the underlying forces stayed the same.</p>
<p>Deadlines still controlled the decisions. Budgets still mattered. The main difference was that scale entered the picture. Not as an ambition, but as a concrete question: what does this system actually need to handle?</p>
<h2 id="scale-is-a-constraint-not-a-goal">Scale Is a Constraint, Not a Goal</h2>
<p>This is where architectural discussions often go wrong. Scale turns into a goal instead of a constraint. Future growth becomes a justification for present-day complexity.</p>
<p>As a part-time freelancer, I have worked with early-stage founders at the very beginning of their journey. Small teams, limited budgets, and often no users at all. Yet it is common to see architectures designed for a future that does not yet exist. Kubernetes clusters, microservices, event-driven pipelines. Not because the product requires them today, but because it might need them someday.</p>
<p>Most of the time, it does not. Certainly not early on. And if it ever does, that is a good problem to have!</p>
<h2 id="what-most-products-actually-need">What Most Products Actually Need</h2>
<p>What these products usually need is something that works. A monolith and a Postgres database can go very far. Vertical scaling is often sufficient for years. Adding complexity early rarely buys safety. It mostly buys cost.</p>
<p>Cost in infrastructure.</p>
<p>Cost in development speed.</p>
<p>Cost in cognitive load.</p>
<h2 id="what-we-really-mean-by-requirements">What We Really Mean by Requirements</h2>
<p>When we talk about requirements or constraints, we are not just talking about features. Architectural decisions are shaped by multiple types of constraints:</p>
<ul>
<li>Business requirements like deadlines, budget, and time to market.</li>
<li>Non-functional requirements like latency, throughput, availability, and consistency.</li>
<li>Organizational constraints like team size, experience, and operational maturity.</li>
<li>Product reality like current usage, realistic growth, real customers, and actual revenue.</li>
</ul>
<p>If you cannot point to a concrete constraint in one of these categories, you are not solving a problem. You are guessing.</p>
<h2 id="premature-complexity-in-practice">Premature Complexity in Practice</h2>
<p>Consider a small product team of four engineers building a B2B SaaS product. At this stage, the product has fewer than fifty active users and only a handful of paying customers. Despite this, the team invests early in a microservices architecture.</p>
<p>The system consists of eight services, a message broker, distributed tracing, and a managed Kubernetes cluster. Every feature requires changes across multiple services. Local development is slow and brittle. Deployments are frequent but risky, and rolling back is even worse. A significant portion of engineering time goes into keeping the system running rather than improving the product.</p>
<p>Infrastructure costs alone exceed a few thousand dollars per month. More importantly, development speed suffers. Simple product changes take weeks.</p>
<p>Eventually, the system is consolidated into a simpler architecture. Fewer services, fewer moving parts, and fewer failure modes. Deployment becomes boring and simple. Development speeds up. Nothing about the product’s requirements has changed. Only the architecture has.</p>
<p>Microservices are not the wrong choice in principle. But they can be the wrong choice for that moment.</p>
<h2 id="pragmatism-as-an-architectural-strategy">Pragmatism as an Architectural Strategy</h2>
<p>This is why pragmatism is an architectural choice.</p>
<p>Pragmatic architecture optimizes for:</p>
<ul>
<li>speed of change,</li>
<li>low cognitive load,</li>
<li>controlled costs,</li>
<li>and the ability to learn from real usage.</li>
</ul>
<p>It deprioritizes:</p>
<ul>
<li>theoretical scalability,</li>
<li>architectural purity,</li>
<li>and solutions to problems that have not yet materialized.</li>
</ul>
<h2 id="let-architecture-earn-its-complexity">Let Architecture Earn Its Complexity</h2>
<p>This does not mean ignoring the future. Successful products do outgrow their initial designs. Re-architecture is sometimes necessary. But those changes are justified by evidence. Usage patterns, performance bottlenecks, revenue growth, or operational pain.</p>
<p>They happen because the product has earned them.</p>
<p>Architecture should evolve with the product, not ahead of it.</p>
<h2 id="honesty-over-hypotheticals">Honesty Over Hypotheticals</h2>
<p>Good architecture is less about preparing for every possible future and more about being honest about the present. Build systems that serve the product as it exists today and can grow based on evidence, not assumptions. Leave room to change when reality demands it.</p>
<p>That kind of pragmatism is not a compromise. It is a responsibility of every software engineer.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[A More Practical Way for Developers to Learn Algorithms]]></title>
<description><![CDATA[Many developers assume performance is for specialists. Stacksmith lets you learn and experience how algorithmic choices impact every language.]]></description>
<link>https://joachimz.me/a-more-practical-way-for-developers-to-learn-algorithms/</link>
<guid isPermaLink="true">https://joachimz.me/a-more-practical-way-for-developers-to-learn-algorithms/</guid>
<category>programming</category><category>software-engineering</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Mon, 05 Jan 2026 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/a-more-practical-way-for-developers-to-learn-algorithms.webp" medium="image"/>
<content:encoded><![CDATA[<p>This post shares what I learned while building Stacksmith, a small lab to understand performance benefits from algorithms in Go and TypeScript.</p>
<p>There is a myth I used to believe. You might believe it too.</p>
<p>It sounds something like this: “I am a developer, I write TypeScript, computers are incredibly fast now, and I mostly move JSON from an API to a UI. Big O notation is for low-level engineers who write databases or do game development.”</p>
<p>It is a comforting idea. It lets us focus on frameworks, state management, crud operations and component libraries. It suggests that performance is somebody else’s concern.</p>
<p>I built Stacksmith to test that belief. I wanted a practical way to re-learn Data Structures and Algorithms after years of neglect, so I spent the past few months revisiting them during my personal development time at DataCamp.</p>
<p>I decided to read and process the theory, and translate the exercises into code. This was also the perfect opportunity to try out a different programming language, Go. To make it easier for most people to understand, I also wrote everything in TypeScript, so you get to choose your preference.</p>
<p>Is it perfect? Most likely not!</p>
<p>Is there room for improvement? Always! Feel free to raise a PR at any time.</p>
<p>Is it enough to start your learning journey? 100%!</p>
<h2 id="the-universal-laws-of-speed">The Universal Laws of Speed</h2>
<p>Stacksmith ships with a small CLI. You can clone the repo, run yarn start, pick a chapter, and watch different algorithms race each other.</p>
<p>In Chapter 2, I compare Linear Search and Binary Search on the same dataset.</p>
<p>Linear Search checks items one by one → O(N)</p>
<p>Binary Search repeatedly cuts a sorted list in half → O(log N)</p>
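<p>The halving step is the whole trick. A minimal sketch of it in TypeScript (illustrative, not the repo’s actual implementation):</p>

```typescript
// Binary search: repeatedly look at the middle of the remaining sorted
// range and discard the half that cannot contain the target.
function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1; // not found
}

// One million sorted numbers: at most ~20 halvings to reach any target.
const data = Array.from({ length: 1_000_000 }, (_, i) => i);
binarySearch(data, 999_999); // → 999999
```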
<p>Binary vs Linear Search</p>
<p>This table shows how the number of checks grows with the size of the dataset.</p>
<table><thead><tr><th>Dataset Size</th><th>Linear Search (O(N))</th><th>Binary Search (O(log N))</th></tr></thead><tbody><tr><td>1,000</td><td>~1,000 checks</td><td>~10 checks</td></tr><tr><td>10,000</td><td>~10,000 checks</td><td>~14 checks</td></tr><tr><td>1,000,000</td><td>~1,000,000 checks</td><td>~20 checks</td></tr></tbody></table>
<p>The surprising part wasn’t the difference between Go and TypeScript. It was how insignificant that difference became once the algorithm changed.</p>
<p>Yes, Go’s Linear Search was faster than TypeScript’s in raw milliseconds.</p>
<p>But the gap between O(N) and O(log N) completely dwarfed the gap between languages.</p>
<p>If you pick an O(N) solution where O(log N) is possible, no compiler, JIT, or runtime can save you.</p>
<p>That’s the universal law: algorithms scale, languages don’t.</p>
<h2 id="where-bad-performance-hides-in-your-code">Where Bad Performance Hides In Your Code</h2>
<p>Most of us are not writing search functions from scratch. Our actual work looks more like “fetch data, filter it, merge two arrays, update the UI”. That feels harmless.</p>
<p>Take this very common pattern:</p>
<pre class="astro-code astro-code-themes github-light github-dark" style="background-color:#fff;--shiki-dark-bg:#24292e;color:#24292e;--shiki-dark:#e1e4e8; overflow-x: auto;" tabindex="0" data-language="javascript"><code><span class="line"><span style="color:#D73A49;--shiki-dark:#F97583">const</span><span style="color:#005CC5;--shiki-dark:#79B8FF"> activeUsers</span><span style="color:#D73A49;--shiki-dark:#F97583"> =</span><span style="color:#24292E;--shiki-dark:#E1E4E8"> users.</span><span style="color:#6F42C1;--shiki-dark:#B392F0">map</span><span style="color:#24292E;--shiki-dark:#E1E4E8">((</span><span style="color:#E36209;--shiki-dark:#FFAB70">user</span><span style="color:#24292E;--shiki-dark:#E1E4E8">) </span><span style="color:#D73A49;--shiki-dark:#F97583">=></span><span style="color:#24292E;--shiki-dark:#E1E4E8"> {</span></span>
<span class="line"><span style="color:#D73A49;--shiki-dark:#F97583">  const</span><span style="color:#005CC5;--shiki-dark:#79B8FF"> details</span><span style="color:#D73A49;--shiki-dark:#F97583"> =</span><span style="color:#24292E;--shiki-dark:#E1E4E8"> userDetails.</span><span style="color:#6F42C1;--shiki-dark:#B392F0">find</span><span style="color:#24292E;--shiki-dark:#E1E4E8">((</span><span style="color:#E36209;--shiki-dark:#FFAB70">d</span><span style="color:#24292E;--shiki-dark:#E1E4E8">) </span><span style="color:#D73A49;--shiki-dark:#F97583">=></span><span style="color:#24292E;--shiki-dark:#E1E4E8"> d.id </span><span style="color:#D73A49;--shiki-dark:#F97583">===</span><span style="color:#24292E;--shiki-dark:#E1E4E8"> user.id);</span></span>
<span class="line"><span style="color:#D73A49;--shiki-dark:#F97583">  return</span><span style="color:#24292E;--shiki-dark:#E1E4E8"> { </span><span style="color:#D73A49;--shiki-dark:#F97583">...</span><span style="color:#24292E;--shiki-dark:#E1E4E8">user, </span><span style="color:#D73A49;--shiki-dark:#F97583">...</span><span style="color:#24292E;--shiki-dark:#E1E4E8">details };</span></span>
<span class="line"><span style="color:#24292E;--shiki-dark:#E1E4E8">});</span></span></code></pre>
<p>It is short, readable, and easy to review. It is also secretly O(N²). If both lists contain around 1,000 items, this friendly-looking code performs about 1,000,000 comparisons.</p>
<p>On small datasets you will not notice. But on a page where users can filter, sort, or search, it starts to add up. Suddenly a freezing dropdown feels “mysterious”. We reach for useMemo, or we blame the backend, or we start worrying that React is just slow.</p>
<p>If we rewrite the same code to build a small hash map of userDetails first, the cost of each lookup becomes O(1) instead of O(N). The total cost drops from roughly 1,000,000 operations to something closer to a few thousand.</p>
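<p>The rewrite itself is small. A sketch of the Map-based version (the types and the mergeActiveUsers name are illustrative, mirroring the snippet above):</p>

```typescript
// Join users with their details via a prebuilt Map instead of a nested find().
type User = { id: number; name: string };
type Details = { id: number; email: string };

function mergeActiveUsers(users: User[], userDetails: Details[]) {
  // One O(N) pass to index the details by id...
  const detailsById = new Map(userDetails.map((d) => [d.id, d] as const));
  // ...so each lookup inside map() is O(1) instead of an O(N) find().
  return users.map((user) => {
    const details = detailsById.get(user.id);
    return details ? { ...user, ...details } : { ...user };
  });
}
```

<p>Same output as the map + find version, but roughly two thousand operations instead of a million for two lists of 1,000 items.</p>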
<table><thead><tr><th>Approach</th><th>Work Done</th></tr></thead><tbody><tr><td>map + find</td><td>~1,000 × 1,000 = 1,000,000 checks</td></tr><tr><td>Prebuilt map (Map / object)</td><td>~1,000 inserts + 1,000 lookups</td></tr></tbody></table>
<p>Same feature. Same framework. Same browser. Completely different experience for the person using it, and completely different performance.</p>
<p>Once you start to see patterns like this, you cannot unsee them. They appear in every language: in Python scripts, in Java microservices, in SQL queries, even in infrastructure code.</p>
<h2 id="becoming-a-stacksmith">Becoming a Stacksmith</h2>
<p>That is why I picked the name Stacksmith.</p>
<p>A blacksmith learns the character of their materials. Iron is strong but brittle. Steel can bend. They choose the right one for the job.</p>
<p>As developers we work with different materials: arrays, maps, sets, trees, graphs, stacks, queues. Each one has a character. Some are great for random access. Some are great for inserting in the middle. Some are built for fast lookups.</p>
<p>Stacksmith is my workbench to explore that craft:</p>
<ul>
<li>Around twenty chapters of exercises, from arrays and linked lists to graphs and dynamic programming.</li>
<li>Every solution written side by side in TypeScript and Go. (sidenote: the code is not production-grade, and that’s fine!)</li>
<li>A CLI to run the algorithms, print timings, and feel how they behave under different input sizes.</li>
<li>A “KB Assistant” where you can paste your own code and get a quick, human-readable “Big O audit”. (If it doesn’t have AI, is it even real nowadays?)</li>
</ul>
<p>The goal is not to create perfect implementations. The goal is to give you a safe place to experiment and to build an intuition for how your choices in data structures and algorithms show up in real performance.</p>
<h2 id="the-dual-language-advantage">The Dual Language Advantage</h2>
<p>Even if you mostly write TypeScript today, I would still invite you to look at the Go implementations in the repo.</p>
<p>TypeScript gives you many helpful abstractions. It hides pointers and most memory allocation details. It lets you resize arrays and objects without thinking too much about what happens behind the scenes.</p>
<p>Go is a bit more explicit. When you build a linked list, you wire together nodes that point to each other. When you create a slice and append to it, you see how capacity comes into play.</p>
<p>Writing the same algorithm in both languages forces you to reconcile the two views. You start to see that the “magic” behaviour in your high level language is still bound by the same low level rules.</p>
<p>You do not have to become a Go expert to benefit from this. Seeing how a few core structures are built in a different ecosystem is often enough. After that, you carry the mental model with you, no matter what language you are using at work.</p>
<h2 id="why-this-matters-for-your-day-job">Why This Matters For Your Day Job</h2>
<p>So what is the practical outcome of caring about this?</p>
<p>First, there is the obvious one: speed. A list filter that runs in O(N) instead of O(N²) will feel smooth on devices that are older, slower, or busy doing other things. That is a quiet kind of inclusivity. You are respecting the time and patience of your users.</p>
<p>Second, there is cost. On the backend, more efficient algorithms and data structures usually mean less CPU time and more throughput, which delays the need for scaling and reduces infrastructure cost, even if the numbers are negligible at a small scale. That might not show up on your personal credit card, but it does matter for the teams and companies that pay for the infrastructure.</p>
<p>Finally, and maybe the most important, it is foundational knowledge. The exact frameworks and libraries you use will change over your career. The core ideas behind stacks, queues, maps and graphs are not going anywhere.</p>
<h2 id="how-to-try-stacksmith">How To Try Stacksmith</h2>
<p>If you are curious, here is a simple way to get started:</p>
<ul>
<li>Clone the repo: <a href="https://github.com/joachimzeelmaekers/stacksmith">https://github.com/joachimzeelmaekers/stacksmith</a>.</li>
<li>Install the dependencies and run the CLI with yarn start.</li>
<li>Pick a chapter that sounds familiar, like arrays or maps.</li>
<li>Run a benchmark a few times, then change the input size and see what happens.</li>
<li>If you feel brave, paste a piece of your own code into the Knowledge Base Assistant (because what’s a post in 2026 without some AI attached to it?) and see what it says about your current approach.</li>
</ul>
<p>Note: Make sure to read the README if you wish to use this!</p>
<p>You do not have to go through all twenty chapters. Even spending an evening with two or three of them can give you a strong sense of how much impact “small” algorithmic choices can have.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Performance is not a niche concern for low level specialists. It is part of building software that feels respectful, reliable, and enjoyable to use.</p>
<p>You can switch languages, frameworks, or hosting platforms, and those choices do matter. But underneath all of them, the same rules still apply.</p>
<p>If you want a practical way to explore those rules, take a look at Stacksmith.</p>
<p>Clone it. Run yarn start. Race a few algorithms. Bring your own snippets.</p>
<p>Most of all, use it as a reminder that you do not need to become a “low-level engineer” to write performant code. You just need to take a step back to the foundation, and treat data structures and algorithms as tools in your craft, no matter which language you use.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[Respecting What Came Before]]></title>
<description><![CDATA[A reflection on old code, missing context, and the importance of understanding past decisions in software development.]]></description>
<link>https://joachimz.me/respecting-what-came-before/</link>
<guid isPermaLink="true">https://joachimz.me/respecting-what-came-before/</guid>
<category>software-engineering</category><category>personal-growth</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Tue, 30 Dec 2025 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/respecting-what-came-before.webp" medium="image"/>
<content:encoded><![CDATA[<p>We all know the saying “respect your elders”. Most of us grow up with the idea that experience matters, and I believe that to be true. We listen to people who have been around longer than us, we value the lessons they have learned, and we accept that their decisions were shaped by a different time and different constraints.</p>
<p>That attitude tends to disappear when the subject is software.</p>
<p>Taking over an older codebase is a familiar experience for most engineers. You open a project that has been around for years, start reading through the code, and quickly form an opinion. The structure feels outdated, patterns look odd, and choices stand out that you would never make today. The quiet conclusion often follows: this is a mess.</p>
<p>I have had that reaction myself more times than I care to admit. Part of that reaction comes from confidence. We see the code through the lens of what we know now, with better tools, more experience, and the benefit of hindsight. That makes it easy to assume we would have done better, given the same situation.</p>
<p>I remember opening a project like that more than a dozen times. A few minutes in, I was already mentally rewriting large parts of it. At the time, it felt obvious that starting fresh would be easier than understanding what was there.</p>
<p>What we tend to forget is that code is only the visible part of the story. It captures decisions, but not the context in which those decisions were made. Deadlines, team size, available tools, business pressure, and technical limitations rarely survive in the repository. All that remains is the outcome.</p>
<p>When we judge old code without understanding its context, we are judging with incomplete information.</p>
<p>One reason context disappears so easily is that we are rarely intentional about preserving it. Code tends to outlive conversations, whiteboards, and Slack threads. Sadly, even pull request descriptions rarely include the decisions that were made.</p>
<p>This is where things like Architecture Decision Records can help. Not because they enforce structure, but because they capture intent. A short note explaining why a decision was made, what alternatives were considered, and which constraints mattered at the time can make a huge difference years later.</p>
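<p>To make this concrete: a lightweight ADR can be a handful of lines. A purely hypothetical example (the exact format matters far less than capturing the intent):</p>

```markdown
# ADR 7: Keep the monolith, defer service extraction

## Status
Accepted

## Context
Two engineers, one paying customer, a hard launch deadline.

## Decision
Ship as a single deployable on Postgres. Revisit if background jobs
start competing with web requests for resources.

## Alternatives considered
Extracting a separate worker service now. Rejected: the operational
overhead outweighs the benefit at current load.
```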
<p>The same is true beyond architecture. Many technical decisions are driven by product choices, timelines, or business realities. When that context is missing, the code can look arbitrary or careless, even when it was neither.</p>
<p>Writing these things down is not about justifying the past. It is about giving future readers a chance to understand it.</p>
<p>It is also worth turning that lens inward. The code I write today feels reasonable and well structured to me now. Given enough time, it will not. Someone else will open it years from now, without the benefit of my assumptions or constraints, and wonder why certain choices were made. In that sense, none of us write timeless code.</p>
<p>Experience in people earns respect because we understand that learning happens over time. We accept that earlier decisions were made with less information, different priorities, or under pressure. Code deserves the same generosity. It is not a monument to perfection, but a snapshot of understanding at a particular moment.</p>
<p>The urge to rewrite is understandable. Rewriting feels productive, clean, and decisive. Understanding takes longer, and it rarely comes with the same sense of progress. But skipping that step often means repeating mistakes rather than learning from them.</p>
<p>Respecting old code does not mean defending every decision or avoiding change. It means starting from the assumption that the people who worked on it were solving real problems as best they could. It means reading before rewriting, asking why something exists before removing it, and assuming rational intent instead of incompetence.</p>
<p>Most of the time, the code is not the problem. Missing context is.</p>
<p>When we approach older code with curiosity rather than judgment, we give ourselves a better chance to improve it responsibly. More importantly, we acknowledge that we are part of a longer story, not the first chapter and certainly not the last.</p>
<p>Respect, in software as in life, is not about agreement. It is about understanding what came before.</p>]]></content:encoded>
</item>
<item>
<title><![CDATA[The First 5 Years as a Software Engineer]]></title>
<description><![CDATA[I'll take you through "the good, the bad, and the ugly" of what I experienced in the first 5 years of my career.]]></description>
<link>https://joachimz.me/the-first-5-years-as-a-software-engineer/</link>
<guid isPermaLink="true">https://joachimz.me/the-first-5-years-as-a-software-engineer/</guid>
<category>personal-growth</category><category>career-advice</category><category>software-engineering</category>
<dc:creator><![CDATA[Joachim Zeelmaekers]]></dc:creator>
<pubDate>Thu, 17 Aug 2023 00:00:00 GMT</pubDate>
<media:content url="https://joachimz.me/images/blog/the-first-5-years-as-a-software-engineer.webp" medium="image"/>
<content:encoded><![CDATA[<p>I can’t believe I’m already writing my first 5 years in review as a <insert-your-favorite-title-for-developer>, but here we are! A few years of barely writing anything passed by and I could start this post by summing up all the excuses why I stopped investing time in writing, but I’ll spare you the details. What we will do is have a look at a few things that I’ve experienced in the first 5 years of my career. First things first…</insert-your-favorite-title-for-developer></p>
<h2 id="titles">Titles!</h2>
<p>A title is just a label to define the role of an employee. Some companies might use “software engineer” and others might use “developer” or “programmer”. “Potayto, potahto”. But be aware: whatever the title may be, it doesn’t necessarily represent the knowledge or skill level that this person has.</p>
<p>You might be thinking, why is this important? It’s not, but that’s the point.</p>
<p>What I’ve learned is that I should consider everyone as equal, whatever the title the person may have, and ask those questions that might sound dumb. In the end, you should treat an intern with the same level of respect as your CEO.</p>
<h2 id="pick-tasks-that-scare-you">Pick tasks that scare you!</h2>
<p>We’ve all been there… New team, a new way of working, 15 new tools to explore and learn, and a whole new business to get accustomed to. This is pretty scary, especially when some tasks on the board involve talking to different teams within the organization…</p>
<p>These tasks are the ones that help you learn the most. Don’t pick up the tasks to change the color of a button because they look small and easy. Pick the tasks that require you to step out of your comfort zone, even if you need 3 times the estimated time. This doesn’t mean you should blindly pick big and complex tasks, but you should coordinate with your team to pick up these types of tickets. If anything, it will give you a great introduction to the organization.</p>
<h2 id="learn-concepts-not-syntax">Learn concepts, not syntax</h2>
<p>A classic, but such an important lesson that it should never be forgotten. In 5 years I’ve used MongoDB, PostgreSQL, MySQL,… as databases. I’ve written projects in Spring Boot, Python, Node.js, and even C#… Used Google Cloud, AWS, and Azure as cloud platforms. This is not bragging or anything that I’m proud of. This will likely happen to you as well when you’re working on short-term projects as a consultant, and that’s perfectly fine, as long as you stick to learning concepts and not syntax.</p>
<p>Don’t get me wrong. I do understand that syntax is important and sometimes you need to focus on specific technologies, but technologies are ever-changing and this won’t stop. Most concepts will not change at the pace technologies are changing, which gives you the ability to adapt better to difficult situations. All of these concepts that you learn on the way will potentially help you solve your next big problem.</p>
<h2 id="before-undertaking-his-craft-the-artisan-first-sharpens-his-tools">Before undertaking his craft, the artisan first sharpens his tools</h2>
<p>We as engineers are constantly working with our computers, and it’s invaluable to know as much as you can about them. This goes from learning a bunch of shortcuts while writing code to getting used to navigating servers via the command line. Every tool that you use on a daily basis should be sharpened before undertaking your craft.</p>
<p>This of course doesn’t mean that you can’t use tools that you don’t know well. I’m just saying that you shouldn’t be cutting 5000 trees with a blunt ax. If you’re looking for this specific challenge, don’t let my advice stop you from trying!</p>
<h2 id="adding-value">Adding value</h2>
<p>Adding value, as far as I know, is what employers want you to do the most. You should always try to add value to the project that you are working on. Adding value could mean increasing revenue for a product, improving user experience, streamlining processes, you name it. It will not always be easy, since it might require you to speak up in tough situations or stay silent in others, but the outcome should stay the same.</p>
<p>I’m not just sharing this so that your employer is pleased with you, but this can also help you to prove your own value to the company in performance reviews. If you continually add value and improve at what you’re doing, it’s very difficult for employers to ignore it.</p>
<h2 id="stick-to-your-values">Stick to your values</h2>
<p>This might be confusing, but we’re talking about different kinds of values here.</p>
<p>I have a few values that are very important to me in my work. I want my work to be challenging, I want to learn new things, and I want to contribute by helping others within or outside of my team.</p>
<p>We all need money to survive, but I try to stick to the values that I think are of most importance to me, and if my job is no longer in line with these values, I should do something about it. I either search for a way to fit my values within my current role, or look for a new role within or outside the organization.</p>
<h2 id="breathe-think-execute">Breathe, Think, Execute</h2>
<p>Early on in my career, I used to immediately start executing tasks before creating a blueprint in my brain or on a piece of paper of what the task involves. I would even say that I took the acceptance criteria (if the task had any) and started building on a solution for it, without thinking about the future of this piece of code or solution. Most of the questions came after the “completion” of the task.</p>
<p>A few of my favorite ones were:</p>
<ul>
<li>Can we deploy this proof of concept to production tomorrow?</li>
<li>Will there be any problems if a lot of users are using it?</li>
<li>Did we add the ability to support French, Dutch, and English interfaces?</li>
</ul>
<p>You get the drill. Don’t overcomplicate everything you build, but anticipate the future of your solution in a pragmatic way. It will spare you headaches, long hours at the office, and projects that go significantly over budget.</p>
<p>Sidenote: try to ask these questions as early as possible in the process, certainly before making estimates.</p>
<h2 id="learn-to-say-no">Learn to say no</h2>
<p>This is something I struggled with and still do, but it’s important to say no to things. Doing 4 extra hours of work a day for an extended period can do tremendous damage to both your mental and physical health. It’s great that you want to help the company, but those 4 extra hours (of which maybe one is actually productive) won’t fix the problem when you’re out for 6 months due to burnout.</p>
<h2 id="feedback">Feedback</h2>
<p>One of the things that is often taken for granted is feedback. I’m very quick in giving someone a positive comment about something they did, and that’s something I love to do. Positive feedback is great. Everyone loves to hear positive comments about their work every now and then.</p>
<p>But positive feedback is only valuable when it comes alongside other kinds of feedback. If you never receive constructive feedback, the positive comments start to mean less, and when critical feedback finally does arrive, it’s often harder to process.</p>
<p>That feeling is normal, since not all feedback is fair, but get used to receiving feedback by asking for it regularly. If you ask often, you can adjust and improve along the way; if you don’t, you’ll eventually receive feedback that is much harder to digest because you didn’t ask for it yourself.</p>
<p>I could go on and on about the lessons I learned in only 5 years, and I hope this helps you navigate your first years as a software engineer (or whatever your title might be 😉). You might disagree completely with everything I said here, and that’s totally fine, but I hope at least one of these points helps you improve your career or serves as an affirmation. I think this blog post is also perfectly in line with my own values: trying to contribute positively and help others.</p>
<p>Let’s see what the next 5 years bring!</p>]]></content:encoded>
</item>
</channel>
</rss>