Welcome back to the Between Good Enough & Too Much series 😄
In the last issue—Chasing Your Next‑Level Self—we talked about balancing perfection and shipping under pressure. Now, let’s see how that same choice plays out when “good enough” bumps into real performance problems.
Why Performance Matters Even at Small Scale
Even if your application isn’t mission‑critical at Netflix scale, a slow feature can still break trust. You might hear:
“It’s fine. We’re not Google. We don’t have the bandwidth for that right now.”
“The code works—refactoring won’t bring any value. It’s probably not performance‑sensitive.”
None of those excuses are wrong; they’re just incomplete—true… until they’re not.
The Day We Discovered a Slowdown
At one company, I was part of a cross‑functional team building a sizable desktop application in .NET. Our users were happy, and we kept shipping new versions that generated more business. Our CI pipeline caught many pitfalls, and our massive test suite covered unit behavior, functional flows, and the UI itself, which we exercised with simulated clicks, full rendering, and screenshot comparisons.
One morning, a core feature that usually took about a minute suddenly took several minutes. Code reviews passed. Tests were green. Yet something was wrong.
You can’t spot every problem ahead of time. We had no performance checks in our tests. No one noticed until a user did.
We found the bad commit fast—only a few commits had merged the day before. Using git‑bisect, we marked a known good commit and a bad one. Git‑bisect tested midway points, then split the range until it found the culprit. We reverted that change and the feature worked again. But that fix only solved the symptom. Without a performance regression test, the same slowdown could come back.
Leveraging Git‑Bisect to Pinpoint Regressions
Learning about git‑bisect was a turning point. If you’re not familiar, git‑bisect asks you to mark one good commit and one bad commit, then checks out the commit halfway between them. Each check cuts the suspect range in half until the faulty commit is isolated. It really shines when a slowdown surfaces weeks after merging: the wider your commit range, the more time its binary search saves in tracking down the culprit. (See the docs here.)
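For reference, a minimal session looks roughly like this (the tag name and the measuring script are placeholders for your own last known-fast version and your own timing check):

```bash
# Minimal git-bisect session sketch
git bisect start
git bisect bad                     # the current HEAD shows the slowdown
git bisect good v2.4.0             # the last version known to be fast
# Optionally let git drive the binary search with a script that exits
# non-zero when the scenario is too slow:
git bisect run ./check-feature-speed.sh
git bisect reset                   # return to where you started
```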
Fundamental Performance Observability
So I started experimenting.
I evaluated BenchmarkDotNet and built a small library—based on its source—to measure synchronous and asynchronous actions. Next, I added simple logs to record key user actions. As a DevOps engineer on the team, I built CI pipelines to record performance data for the application.
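That small library isn’t reproduced here, but the core idea is tiny. Here is a minimal sketch of timing a synchronous and an asynchronous action with Stopwatch; ActionTimer and the names in the usage comments are illustrative, not the library we actually built:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Illustrative helper: times a synchronous or asynchronous action so the
// elapsed time can be handed to a sink (a log line, a database row, etc.).
public static class ActionTimer
{
    public static TimeSpan Measure(Action action)
    {
        var sw = Stopwatch.StartNew();
        action();
        sw.Stop();
        return sw.Elapsed;
    }

    public static async Task<TimeSpan> MeasureAsync(Func<Task> action)
    {
        var sw = Stopwatch.StartNew();
        await action();
        sw.Stop();
        return sw.Elapsed;
    }
}

// Usage sketch (documentService and logger are hypothetical):
// var elapsed = ActionTimer.Measure(() => documentService.ExportToPdf(doc));
// logger.LogInformation("ExportToPdf took {Ms} ms", elapsed.TotalMilliseconds);
```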
Behind the scenes, our CI did the following to record performance data:
- Running performance scenarios outside business hours,
- Collecting the data in an Azure SQL database,
- Publishing the results daily to a PowerBI dashboard we created,
- Marking a commit as problematic if it degraded performance on any critical path beyond a set threshold (a sketch of that check follows the list).
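A minimal version of that last check could look like this, assuming the pipeline can load per-scenario timings for a baseline run and the current run (the class name and the default threshold are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

// Illustrative gate, not our pipeline code: compares the latest timings
// against a stored baseline and reports any critical path that slowed
// down by more than the allowed percentage.
public static class RegressionGate
{
    public static IEnumerable<string> FindRegressions(
        IReadOnlyDictionary<string, TimeSpan> baseline,
        IReadOnlyDictionary<string, TimeSpan> current,
        double allowedIncreasePercent = 10)
    {
        foreach (var entry in baseline)
        {
            if (!current.TryGetValue(entry.Key, out var currentTime))
                continue; // scenario not measured in this run

            var increase =
                (currentTime - entry.Value).TotalMilliseconds
                / entry.Value.TotalMilliseconds * 100;

            if (increase > allowedIncreasePercent)
                yield return $"{entry.Key} regressed by {increase:F1}%";
        }
    }
}
```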
Suddenly, anyone in the company could see which workflows were slowing down or speeding up, and compare improvement factors between releases through PowerBI.
Small Steps, Big Impact
We weren’t performance experts. Our app did not connect to the internet. Users had different hardware. We had no DataDog or Grafana. But we wanted some way to watch performance over time. So we built only what we needed. No one added endless libraries or spun up a huge monitoring system.
At first, leadership did not give us time for this work. After the slow feature hit, they did. That gave us room to protect our app’s key paths without going overboard.
This quote from Black Clover resonated with me:
“Being weak is nothing to be ashamed of. Staying weak is!”
In a hackathon, we built a few Roslyn analyzers. Each analyzer flagged code patterns that had bitten us before:
- Calling `Count()` on an `IList<T>` instead of using its `Count` property.
- Running `ToList()` on an `IEnumerable<T>` when a simple loop would do.
We did not polish these analyzers into a big product. They were enough to catch costly anti‑patterns before they landed.
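The analyzer code itself isn’t worth reproducing here, but the patterns it flagged look like this (illustrative snippets, not the analyzer source):

```csharp
using System.Collections.Generic;
using System.Linq;

public class FlaggedPatterns
{
    // Flagged: the Count() extension method goes through LINQ (an interface
    // check plus an extra call) when IList<T> already exposes Count directly.
    public int CountViaLinq(IList<int> items) => items.Count();
    public int CountViaProperty(IList<int> items) => items.Count;

    // Flagged: ToList() copies the whole sequence just to walk it once.
    public int SumWithCopy(IEnumerable<int> numbers)
    {
        var copy = numbers.ToList(); // unnecessary allocation
        var sum = 0;
        foreach (var n in copy) sum += n;
        return sum;
    }

    public int SumDirectly(IEnumerable<int> numbers)
    {
        var sum = 0;
        foreach (var n in numbers) sum += n; // stream the sequence directly
        return sum;
    }
}
```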
Quick .NET Performance Wins
If you work in .NET and want fast, low‑effort improvements to reduce memory strain and GC pressure, consider adopting these habits:
- Avoid `ToList()`/`ToArray()` unless you need indexing or multiple iterations.
- Use `StringBuilder` with an initial capacity inside loops.
- Initialize collections with an estimated size (e.g., `new List<T>(capacity)`).
- Prefer `ICollection<T>` if you need both `Count` and iteration, not just `IEnumerable<T>`.
- Explore `Span<T>` and `ReadOnlySpan<T>` for high‑performance parsing or string manipulation.
- Seal classes unless you genuinely need extensibility.
- Use `readonly struct` and `record` types to enforce immutability where appropriate.
- Leverage `ArrayPool<T>` for scenarios with frequent large allocations.
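To make a couple of those habits concrete, here is an illustrative sketch covering pre-sized collections, a pre-sized `StringBuilder`, and a rented `ArrayPool<T>` buffer (the method names and sizes are invented for the example):

```csharp
using System.Buffers;
using System.Collections.Generic;
using System.IO;
using System.Text;

public static class PerfHabits
{
    // Pre-sizing the list and the StringBuilder avoids repeated internal
    // array growth when the number of items is already known.
    public static List<string> FormatAll(IReadOnlyList<int> values)
    {
        var results = new List<string>(values.Count);
        var sb = new StringBuilder(capacity: 32);
        foreach (var value in values)
        {
            sb.Clear();
            sb.Append("value=").Append(value);
            results.Add(sb.ToString());
        }
        return results;
    }

    // Renting a scratch buffer avoids allocating a fresh 64 KB array
    // (and the GC pressure that comes with it) on every call.
    public static long CountLineBreaks(Stream stream)
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(64 * 1024);
        try
        {
            long count = 0;
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                for (var i = 0; i < read; i++)
                {
                    if (buffer[i] == (byte)'\n') count++;
                }
            }
            return count;
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```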
No magic—just small, repeatable habits that add up.
Performance Is Quality
Now, I don’t frame performance as “optimization.” I frame it as quality. It’s about software that doesn’t get in the user’s way—software that feels snappy and earns trust.
If you don’t yet have any CI checks for action‑level performance, try adding a simple BenchmarkDotNet test for one key use case this week, and watch whether its numbers shift as your build and main workflow evolve. Track it in whatever dashboard you already use. Small steps today keep that fuse from burning down the road.
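A starting point could be as small as this; ReportGenerator is a stand-in for whatever class sits on your application’s critical path:

```csharp
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Stand-in for a class on your critical path; replace with your own code.
public class ReportGenerator
{
    public string Generate(int month, int year)
    {
        var sb = new StringBuilder();
        for (var i = 0; i < 1_000; i++)
            sb.Append(month).Append('/').Append(year).AppendLine();
        return sb.ToString();
    }
}

[MemoryDiagnoser] // also reports allocations, not just time
public class ReportGenerationBenchmark
{
    private readonly ReportGenerator _generator = new ReportGenerator();

    [Benchmark]
    public string GenerateMonthlyReport() => _generator.Generate(month: 1, year: 2024);
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<ReportGenerationBenchmark>();
}
```

Remember to run it in Release mode, or the numbers won’t mean much.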
Performance isn’t a fire. It’s a fuse.
Before it ignites, ask yourself: what habit can you build now?
— Kevin