Performance engineering is something I’ve been pushing at my job for the past few months. We’ve had performance issues that needed to be solved, and it felt like even after we fixed them, we could still get hit by a regression. That’s when I started researching solutions to these problems. My investigation began with reading Writing High-Performance .NET Code by Ben Watson.
Reading through this book led me to a popular .NET benchmarking library, BenchmarkDotNet.
From there, I started playing around with the library and found that it was truly amazing, so I began introducing micro-benchmarks. After seeing what the library could do, we realized that some cases in our code base couldn’t be covered by micro-benchmarking, so we set our sights even higher: we wanted to extract performance data for our code from within an end-to-end UI test.
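To give an idea of what a micro-benchmark looks like, here is a minimal sketch using BenchmarkDotNet. The scenario (comparing string concatenation against `StringBuilder`) is hypothetical and not taken from our code base; `[MemoryDiagnoser]`, `[Benchmark]`, `[Params]`, and `BenchmarkRunner` are the library's standard building blocks.

```csharp
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Hypothetical example: compare two ways of building a string.
[MemoryDiagnoser] // also report allocations, not just timings
public class StringBuildingBenchmarks
{
    [Params(100, 1000)] // run each benchmark with both input sizes
    public int N;

    [Benchmark(Baseline = true)]
    public string Concat()
    {
        var s = string.Empty;
        for (int i = 0; i < N; i++)
            s += "x";
        return s;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (int i = 0; i < N; i++)
            sb.Append("x");
        return sb.ToString();
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<StringBuildingBenchmarks>();
}
```

Running this in a Release build produces a summary table with mean times, error margins, and allocated bytes for each method and parameter combination.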
To do so, I created a library from the inner workings of BenchmarkDotNet. It is also a multi-purpose library that can be used in any .NET application to measure, with reasonable accuracy, the time it takes to execute a synchronous or asynchronous operation, or a set of operations.
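The core idea can be sketched with a small timing helper. This is not the actual library, just a hypothetical `OperationTimer` built on `Stopwatch` that shows the shape of the sync and async measurement paths, including a warm-up run, as BenchmarkDotNet itself does far more rigorously:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Hypothetical sketch of a sync/async timing helper (not the real library).
public static class OperationTimer
{
    // Times a synchronous action over several iterations and returns the mean.
    public static TimeSpan Measure(Action action, int iterations = 10)
    {
        action(); // warm-up so JIT compilation doesn't skew the first sample
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            action();
        sw.Stop();
        return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);
    }

    // Async counterpart: awaits the operation on each iteration.
    public static async Task<TimeSpan> MeasureAsync(Func<Task> operation, int iterations = 10)
    {
        await operation(); // warm-up
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            await operation();
        sw.Stop();
        return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);
    }
}
```

Averaging over multiple iterations after a warm-up smooths out one-off costs like JIT compilation and cold caches, which is why a single stopwatch reading is rarely trustworthy on its own.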
Fast forward a few months, and we now have an infrastructure that measures the behaviour of our application through both micro-benchmarks and end-to-end UI tests, and provides a performance profile of the software.
I would like to thank the creators and maintainers of the BenchmarkDotNet library. They made my life way easier for this!
These were my very first steps in performance analysis and testing. My journey has just begun, and I will continue to deepen my knowledge in this domain. I feel naturally drawn to it by a desire to write optimized code. I don’t reach for over-engineered solutions, but when it’s reasonable, I’ll provide the best implementation I can think of.
Thanks for reading,