Today, I was looking around in the source code of a project on GitHub and noticed some reflection being used to extract methods. The methods were in turn used to apply state to instances during aggregations and projections. It turned out that this code path was only hit once per aggregate/projection type, and that the reflected methods were then used within compiled lambdas, so in this case the method extraction was fine. Introducing caching would obviously not make it faster; on the contrary, it would be slower. But what if you are in a different situation? One where you are writing similar code and extracting a generic type's members more than once via reflection? Is there a simple way to cache this? Yes, you can use a generic method cache. But how much would you gain from doing so? Let's benchmark.
Summary
To get a real performance gain, you should not use the reflected members directly. You should instead turn the reflected members into compiled lambda expressions, or generate IL code for accessing the members, and of course cache these so that you don't compile or generate them each time. I've done previous measurements of compiled lambdas and IL generation for accessing properties. Go read them to see the difference it makes.
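As a rough sketch of that approach (the `CompiledApplyCache` class and `Order` type below are my own illustrations, not from the repo): reflect once, compile the `MethodInfo` into a delegate via an expression tree, and cache the delegate per type pair.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq.Expressions;
using System.Reflection;

public class Order
{
    public int Total { get; private set; }
    public void Apply(int amount) => Total += amount;
}

public static class CompiledApplyCache
{
    // One compiled delegate per (target type, argument type) pair.
    // Reflection and Expression.Compile run once; subsequent calls
    // are plain delegate invocations.
    private static readonly ConcurrentDictionary<(Type, Type), Delegate> Cache =
        new ConcurrentDictionary<(Type, Type), Delegate>();

    public static Action<TTarget, TArg> Get<TTarget, TArg>() where TTarget : class
    {
        return (Action<TTarget, TArg>)Cache.GetOrAdd(
            (typeof(TTarget), typeof(TArg)),
            _ =>
            {
                // Find the Apply overload taking a single TArg parameter.
                MethodInfo method = typeof(TTarget)
                    .GetMethod("Apply", new[] { typeof(TArg) });

                // Build target.Apply(arg) as an expression tree and compile it.
                var target = Expression.Parameter(typeof(TTarget), "target");
                var arg = Expression.Parameter(typeof(TArg), "arg");
                var call = Expression.Call(target, method, arg);
                return Expression
                    .Lambda<Action<TTarget, TArg>>(call, target, arg)
                    .Compile();
            });
    }
}
```

Usage would be `var apply = CompiledApplyCache.Get<Order, int>(); apply(order, 42);` — the first call for a given type pair pays the compile cost, and every later call gets the already-compiled delegate.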
Result
The relative performance gain is high, but do note that we are counting nanoseconds here.
BenchmarkDotNet=v0.10.3.0, OS=Microsoft Windows NT 6.2.9200.0
Processor=Intel(R) Core(TM) i7-4790K CPU 4.00GHz, ProcessorCount=8
Frequency=3906248 Hz, Resolution=256.0001 ns, Timer=TSC
[Host] : Clr 4.0.30319.42000, 64bit RyuJIT-v4.6.1586.0
DefaultJob : Clr 4.0.30319.42000, 64bit RyuJIT-v4.6.1586.0
Method | Count | Mean | StdDev | Median |
------------------------ |------ |---------------- |----------- |---------------- |
UsingReflection | 1 | 1,896.1500 ns | 2.1446 ns | 1,896.0381 ns |
UsingGenericMethodCache | 1 | 0.0000 ns | 0.0000 ns | 0.0000 ns |
UsingReflection | 10 | 19,178.0403 ns | 48.5365 ns | 19,177.2546 ns |
UsingGenericMethodCache | 10 | 3.0877 ns | 0.0089 ns | 3.0871 ns |
UsingReflection | 100 | 188,735.4146 ns | 94.8109 ns | 188,743.0300 ns |
UsingGenericMethodCache | 100 | 27.5652 ns | 0.0669 ns | 27.5811 ns |
Test case
The benchmark code uses BenchmarkDotNet, is included in one of my GitHub repos, and looks like this:
using System.Linq;
using System.Reflection;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

class Program
{
    static void Main(string[] args)
    {
        BenchmarkRunner.Run<MethodExtraction>();
    }
}

internal static class GenericMethodCacheFor<T> where T : class
{
    internal static readonly MethodInfo[] Methods = typeof(T)
        .GetMethods()
        .Where(x => x.Name == "Apply" && x.GetParameters().Length == 1)
        .ToArray();
}
public class MethodExtraction
{
    [Params(1, 10, 100)]
    public int Count { get; set; }

    [Benchmark]
    public void UsingReflection()
    {
        for (var c = 0; c < Count; c++)
        {
            var m1 = typeof(MyThing1)
                .GetMethods()
                .Where(x => x.Name == "Apply" && x.GetParameters().Length == 1)
                .ToArray();
            var m2 = typeof(MyThing2)
                .GetMethods()
                .Where(x => x.Name == "Apply" && x.GetParameters().Length == 1)
                .ToArray();
            var m3 = typeof(MyThing3)
                .GetMethods()
                .Where(x => x.Name == "Apply" && x.GetParameters().Length == 1)
                .ToArray();
        }
    }

    [Benchmark]
    public void UsingGenericMethodCache()
    {
        for (var c = 0; c < Count; c++)
        {
            var m1 = GenericMethodCacheFor<MyThing1>.Methods;
            var m2 = GenericMethodCacheFor<MyThing2>.Methods;
            var m3 = GenericMethodCacheFor<MyThing3>.Methods;
        }
    }
}
public class Message1 { }
public class Message2 { }
public class Message3 { }
public class Message4 { }
public class Message5 { }

public class MyThing1
{
    public void Apply(Message1 msg) { }
    public void Apply(Message2 msg) { }
    public void Apply(Message3 msg) { }
    public void Apply(Message4 msg) { }
    public void Apply(Message5 msg) { }
}

public class MyThing2
{
    public void Apply(Message1 msg) { }
    public void Apply(Message2 msg) { }
    public void Apply(Message3 msg) { }
    public void Apply(Message4 msg) { }
    public void Apply(Message5 msg) { }
}

public class MyThing3
{
    public void Apply(Message1 msg) { }
    public void Apply(Message2 msg) { }
    public void Apply(Message3 msg) { }
    public void Apply(Message4 msg) { }
    public void Apply(Message5 msg) { }
}
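One property worth calling out, since it is what makes the cache work: a static field initializer in a generic class runs once per closed generic type, so each `T` gets its own `Methods` array, computed exactly once. A standalone sanity check of that semantics (the `Cache<T>`, `ThingA` and `ThingB` types here are simplified stand-ins for the article's classes):

```csharp
using System;
using System.Linq;
using System.Reflection;

public class Msg { }

public class ThingA { public void Apply(Msg m) { } }
public class ThingB { public void Apply(Msg m) { } }

internal static class Cache<T> where T : class
{
    // Runs once per closed generic type (Cache<ThingA>, Cache<ThingB>, ...)
    // on first access to the field.
    internal static readonly MethodInfo[] Methods = typeof(T)
        .GetMethods()
        .Where(x => x.Name == "Apply" && x.GetParameters().Length == 1)
        .ToArray();
}

public class Demo
{
    public static void Main()
    {
        // Same array reference every time: cached, not rebuilt.
        Console.WriteLine(ReferenceEquals(Cache<ThingA>.Methods, Cache<ThingA>.Methods)); // True
        // Distinct cache per type parameter.
        Console.WriteLine(Cache<ThingB>.Methods[0].DeclaringType == typeof(ThingB)); // True
    }
}
```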