
Notes on performance testing


In running the performance tests for Linq vs. ADO.NET, I spent quite a bit of time ironing out the methodology. Why? Well, I kept getting different results depending on the order in which the test methods were run. This struck me as odd and, honestly, more than a little frustrating. If the methodology were valid, you would certainly expect the results to be consistent regardless of the order in which the test methods were called.

Of course, the first thing that comes to mind is the connection pool. The first access to the database with a particular set of credentials creates the pool and takes the hit of opening the connection to Sql Server, which would skew the results against whichever test ran first. This was an easy one, and one that I had figured out before even running the tests: creating and opening a connection before any of the tests ran was a no-brainer.
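Something like this, run once before any timing starts, takes the pool-creation hit out of the measurements. This is just a minimal sketch of the idea; the connection string is a placeholder, not the one from the actual tests:

    using System;
    using System.Data.SqlClient;

    static class PoolWarmup
    {
        // Placeholder connection string -- point it at your own test database.
        const string ConnString =
            "Data Source=.;Initial Catalog=TestDb;Integrated Security=SSPI";

        public static void Run()
        {
            // Opening (and disposing) one connection up front creates the pool,
            // so the first test method doesn't pay the physical-connection cost.
            using (SqlConnection warmup = new SqlConnection(ConnString))
            {
                warmup.Open();
            } // Dispose returns the connection to the pool, still open underneath.
        }
    }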

But something else was going on. The first method called on a particular run seemed to have a performance advantage. At one point in previous tests I even used case statements to alter the order ... but even then I'd get different results on different runs. This left me scratching my head a bit. Eventually, though, it occurred to me: there's a bunch of stuff that the Framework does for us, and it's easy to forget about these things and how they impact performance. In this case, it was garbage collection. And it makes complete sense. Think about it ... the GC is non-deterministic. It runs pretty much whenever the runtime "feels" like it. So the GC would kick in at various points and invariably skew the results somewhat, and the impact wasn't evenly distributed across the test methods. Why the skewing? Because when the GC does a collection, it halts all thread processing while it does its thing. Of course, when this occurred to me, it was a "DOH!" moment.

Once I added a call to GC.Collect() after every call to a test method, the results were, as I expected, remarkably similar across all of the test runs, regardless of the order in which they were called. That confirmed my newly minted theory about garbage collection and its impact on my performance tests.
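In harness form, the idea looks roughly like this. It's a sketch, not the actual test code: testMethod stands in for whichever Linq or ADO.NET routine is being timed, and the WaitForPendingFinalizers / second Collect pair is a common belt-and-braces addition rather than something the post itself describes:

    using System;
    using System.Diagnostics;

    static class TestHarness
    {
        // Times one test method, then forces a full collection so a "random"
        // GC can't land in the middle of the next test and skew its numbers.
        public static long TimeIt(Action testMethod)
        {
            Stopwatch watch = Stopwatch.StartNew();
            testMethod();
            watch.Stop();

            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            return watch.ElapsedMilliseconds;
        }
    }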

For the final numbers, I did toss out the low and high values and re-average. Since Windows always has other things going on, some of those things may steal a time slice or two of the processor from a test run ... or not steal any. Still, doing this actually made very little difference to the results. As I think about it, though, I should also create an instance of every class the tests use, to make sure the type is initialized and its dll is loaded before any timing starts. Looking at the results, that didn't appear to make much difference either, but on future tests I'll start doing it.
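For what it's worth, tossing the outliers is only a couple of lines. A sketch, assuming the raw timings get collected into a list (and that there are at least three of them):

    using System.Collections.Generic;
    using System.Linq;

    static class Stats
    {
        // Drops the single lowest and highest timings and averages the rest.
        public static double TrimmedAverage(List<long> timings)
        {
            return timings.OrderBy(t => t)
                          .Skip(1)                    // toss the low value
                          .Take(timings.Count - 2)    // toss the high value
                          .Average();
        }
    }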

Now, keep in mind that this applies only to artificial tests. And if you look at the Linq vs. ADO.NET tests, they were certainly quite artificial ... not what you would do in a real-world application. They were really only designed to measure raw numbers for each of the data access methods being compared. When you are doing performance testing on your own applications, this kind of testing methodology is invalid, to say the least. Calling GC.Collect() after every method call will, without question, hurt the overall performance of your application. So don't do it.

For your individual applications, you need to take a holistic approach: test the application in the way it is expected to be used in the real world. Of course, this can only go so far, because users will invariably do something that we didn't expect (why is that???), and telling them "Well, just don't do that" never seems to be an acceptable answer. For web applications, this needs to go a step further: in web apps, performance != scalability. They are related, to be sure, but they are not the same. I've seen web apps that perform pretty well ... but only with a few users, keeling over when they get 20 or more. That's not good.