Why can't readonly field check be optimised out of a loop? - c#

When I have a readonly variable:
public readonly bool myVar = true;
And check it in a code like this:
for(int i = 0; i != 10; i++)
{
if(myVar)
DoStuff();
else
DoOtherStuff();
}
Looking at emitted IL, I can see that the check is performed on each iteration of the loop. I would expect the code above to produce the same IL as this:
if (myVar)
{
for(int i = 0; i != 10; i++)
{
DoStuff();
}
}
else
{
for(int i = 0; i != 10; i++)
{
DoOtherStuff();
}
}
So why isn't the check optimised to the outside of the loop, since the field is readonly and can't be changed between iterations?

Your proposed optimization is really a combination of two simpler transformations. The first is hoisting the member access out of the loop. From
for(int i = 0; i != 10; i++)
{
var localVar = this.memberVar;
if(localVar)
DoStuff();
else
DoOtherStuff();
}
to
var localVar = this.memberVar;
for(int i = 0; i != 10; i++)
{
if(localVar)
DoStuff();
else
DoOtherStuff();
}
The second is pulling the if outside the loop, swapping it with the loop (a transformation known as loop unswitching). From
var localVar = this.memberVar;
for(int i = 0; i != 10; i++)
{
if(localVar)
DoStuff();
else
DoOtherStuff();
}
to
var localVar = this.memberVar;
if (localVar) {
for(int i = 0; i != 10; i++)
DoStuff();
}
else {
for(int i = 0; i != 10; i++)
DoOtherStuff();
}
The first one is the transformation influenced by readonly. To do it, the compiler has to prove that memberVar cannot change inside the loop, and readonly guarantees this¹. Even though this loop could be running inside a constructor, and the value of memberVar could be changed in the constructor after the loop ends, it cannot be changed in the loop body: DoStuff() is not a constructor of the current object, and neither is DoOtherStuff(). Reflection does not count; while it may be possible to use Reflection to break invariants, it isn't permitted to do so. Threads do count, see the footnote.
The second is a simple transformation but a more difficult decision for the compiler to make, because it's difficult to predict whether it will actually improve performance. Naturally you can look at it separately by doing the first transformation on the code yourself, and seeing what code is generated.
Perhaps a more important consideration is that in .NET the optimization pass happens when the JIT compiles MSIL to machine code, not when C# is compiled to MSIL. So you cannot see which optimizations are being done by looking at the MSIL!
¹ Or does it? The .NET memory model is considerably more forgiving than, say, the C++ model, where any data race very quickly leads to undefined behavior unless the object is declared volatile/atomic. What if this loop runs in a worker thread spawned from the object's constructor, and after spawning the thread the constructor goes on (call it the "second half") to change the readonly member? Does the memory model require that change to be seen by the worker thread? What if DoStuff() and the second half of the constructor force memory fences, for example by accessing other members which are volatile, or by taking a lock? So readonly would only allow the optimization in a single-threaded environment.
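A minimal sketch of the scenario the footnote worries about (the type and member names here are illustrative, not from the question); whether the loop observes the constructor's later write is exactly the memory-model question:

    using System;
    using System.Threading;

    public class Worker
    {
        public readonly bool myVar = true;

        public Worker()
        {
            // Spawn a thread that runs the loop while construction is still in progress.
            var t = new Thread(RunLoop);
            t.Start();

            // "Second half" of the constructor: assigning a readonly field here is legal,
            // because we are still inside a constructor of this object.
            myVar = false;

            t.Join();
        }

        private void RunLoop()
        {
            for (int i = 0; i != 10; i++)
            {
                if (myVar)  // may or may not observe the constructor's later write
                    DoStuff();
                else
                    DoOtherStuff();
            }
        }

        private void DoStuff() => Console.WriteLine("stuff");
        private void DoOtherStuff() => Console.WriteLine("other stuff");
    }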

Because it can be changed using Reflection:
using System;
using System.Reflection;
public class Program
{
public static void Main()
{
var t = new Test();
var field = typeof(Test).GetField("myVar", BindingFlags.Instance | BindingFlags.Public);
Console.WriteLine(t.myVar); // prints True
field.SetValue(t, false);
Console.WriteLine(t.myVar); // prints False
// Trying to use t.myVar = false or true; <-- does not compile
}
}
public class Test
{
public readonly bool myVar = true;
}
Working Fiddle: https://dotnetfiddle.net/W9UO3m
Note that there is no way the compile-time optimizer can predict or detect with absolute certainty whether such reflection code exists and, if it does, whether it will ever run.

A readonly field can be initialized to different values at different points in the code (multiple constructors). It can't be optimized out because multiple constructors allow for multiple values of the field in question.
Now, if you only had one constructor, with no relevant branching, and therefore only one possible value, ever, for that field, I'd expect more from the optimizer. However, that sort of analysis is one reason that C++ takes much longer to compile than C#, and these sorts of tasks aren't typically delegated to the runtime.
For a clearer explanation:
Note
The readonly keyword is different from the const keyword. A const
field can only be initialized at the declaration of the field. A
readonly field can be assigned multiple times in the field declaration
and in any constructor. Therefore, readonly fields can have different
values depending on the constructor used. Also, while a const field is
a compile-time constant, the readonly field can be used for run-time
constants as in the following example:
See https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/readonly
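The example that the quoted passage refers to is not reproduced above; a minimal sketch along the same lines (the type and field names here are illustrative) is:

    public class Age
    {
        private readonly int _year;

        // A readonly field may be assigned in its declaration and in any constructor,
        // so different constructors (or constructor arguments) can give it different values.
        public Age()         { _year = 1; }
        public Age(int year) { _year = year; }   // e.g. new Age(2024): a run-time constant
    }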
For your specific case, I'd suggest using const instead of readonly, though admittedly I haven't looked for differences in the generated IL.
Honestly, I'm wondering if it really makes a difference. Any CPU that performs branch prediction and has a halfway-decent cache will likely yield the same performance in either case. (Note that I haven't tested that; it's just a suspicion.)


Can this be simplified with Lambda/Block initialization?

Been bouncing back and forth between Swift and C# and I'm not sure if I'm forgetting certain things, or if C# just doesn't easily support what I'm after.
Consider this code which calculates the initial value for Foo:
// Note: This is a field on an object, not a local variable.
int Foo = CalculateInitialFoo();
static int CalculateInitialFoo() {
int x = 0;
// Perform calculations to get x
return x;
}
Is there any way to do something like this without the need to create the separate one-time-use function and instead use an instantly-executing lambda/block/whatever?
In Swift, it's simple. You use a closure (the curly-braces) that you instantly execute (open and closed parentheses), like this:
int Foo = {
int x = 0
// Perform calculations to get x
return x
}()
It's clear, concise and doesn't clutter up the object's interface with functions just to initialize fields.
Note: To be clear, I do NOT want a calculated property. I am trying to initialize a member field which requires multiple statements to do completely.
I wouldn't suggest doing this, but you could use an anonymous function to initialize
int _foo = new Func<int>(() =>
{
return 5;
})();
Is there a reason you would like to do it using lambdas rather than named functions, or as a calculated property?
I assume you want to avoid calculated properties because you want to either modify the value later, or the computation is expensive and you want to cache the value.
int? _fooBacking = null;
int Foo
{
get
{
if (!_fooBacking.HasValue)
{
_fooBacking = 5;
}
return _fooBacking.Value;
}
set
{
_fooBacking = value;
}
}
This computes the value in the getter the first time the property is read, while still allowing the value to be assigned later.
If you remove the setter, it turns into a cached calculation. Be careful when using this pattern, though: side effects in property getters are frowned upon because they make the code difficult to follow.
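For illustration, assuming the backing field and property above live on a class called SomeClass (the class name is made up here), usage looks like this:

    var obj = new SomeClass();
    Console.WriteLine(obj.Foo); // first read: _fooBacking is null, so the value is computed (5) and cached
    obj.Foo = 42;               // the setter overwrites the cached value
    Console.WriteLine(obj.Foo); // 42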
To solve the problem in the general case you'd need to create and then execute an anonymous function, which you can technically do as an expression:
int Foo = new Func<int>(() =>
{
int x = 0;
// Perform calculations to get x
return x;
})();
You can clean this up a bit by writing a helper function:
public static T Perform<T>(Func<T> function)
{
return function();
}
Which lets you write:
int Foo = Perform(() =>
{
int x = 0;
// Perform calculations to get x
return x;
});
While this is better than the first, I think it's pretty hard to argue that either is better than just writing a function.
In the non-general case, many specific implementations can be altered to run on a single line rather than multiple lines. Such a solution may be possible in your case, but we couldn't possibly say without knowing what it is. There will be cases where this is possible but undesirable, and cases where this may actually be preferable. Which are which is of course subjective.
You could initialize your field in the constructor and declare CalculateInitialFoo as a local function.
private int _foo;
public MyType()
{
_foo = CalculateInitialFoo();
int CalculateInitialFoo()
{
int x = 0;
// Perform calculations to get x
return x;
}
}
This won't change your code too much, but it at least limits the scope of the method to the only place it's used.

How is an auto property compiled?

I am asking this after I have seen some questions and answers about properties and auto properties and how these are represented by the compiler.
From what I understood, an auto-property is represented as a field with two methods, a getter and a setter. In that case, code that accesses a field directly should be faster than code that accesses a property, because it avoids the extra method calls. To test this theory I wrote the following code (please excuse how it looks):
public class A
{
public int Prop { get; set; }
public int Field;
public A()
{
Prop = 1;
Field = 1;
}
}
class Program
{
static void Main(string[] args)
{
List<long> propertyExecutionTimes = new List<long>();
List<long> fieldExecutionTimes = new List<long>();
A a = new A();
int aux;
for (int j = 0; j < 100; j++)
{
var watch = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 10000000; i++)
{
aux = a.Prop;
a.Prop = aux;
}
watch.Stop();
propertyExecutionTimes.Add(watch.ElapsedMilliseconds);
watch = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 10000000; i++)
{
aux = a.Field;
a.Field = aux;
}
watch.Stop();
fieldExecutionTimes.Add(watch.ElapsedMilliseconds);
}
Console.WriteLine("Property best time: " + propertyExecutionTimes.OrderBy(x => x).First());
Console.WriteLine("Field best time: " + fieldExecutionTimes.OrderBy(x => x).First());
Console.ReadKey();
}
}
It accesses a field 10M times vs. a property 10M times, repeats the whole measurement in an outer loop, and then picks the lowest value for each test. The results with a VS2012 release build were:
Property best time: 96
Field best time: 45
So, as expected, the results for the field were noticeably better than for the property.
But then I ran the program without the debugger (simply running the .exe file) and, to my surprise, the times were the same for the field and the property:
Property best time: 20
Field best time: 20
In this case there was no difference between using a property and a field, which makes me think that they are compiled to the same thing, or that the property's methods were turned into something similar to inline methods in C++. This made me want to see how explicit getter and setter methods compare to the property, so I added a pair of them:
private int _field;
public int Get() { return _field; }
public void Set(int value) { _field = value; }
Testing these against the property in the same way, with a few changes to keep the same behavior:
aux = a.Get();
a.Set(aux);
Gave me this output:
Property best time: 96
Methods best time: 96
with the debugger attached, and:
Property best time: 20
Methods best time: 20
without the debugger. These values are the same, so I concluded that auto-properties are getters and setters which are compiled just like fields. Is this a correct conclusion? And finally, why was the field also faster than the property when the debugger was attached?
These values are the same, so I concluded that auto-properties are getters and setters which are compiled just like fields.
Yes, auto-properties are implemented as a field, and a get and optionally a set accessor for that field.
Is this a correct conclusion?
For the wrong reasons, but yes. :) To accurately see how auto-properties are implemented, don't rely on timing, but create a program or library and open that in a MSIL disassembler. As you've seen, timing results can be misleading.
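What you would see is, roughly, the following C# equivalent of the auto-property from the question (the real backing field has a compiler-generated name such as <Prop>k__BackingField; this rendering is just illustrative):

    public class A
    {
        private int _propBackingField;   // actually a compiler-generated, unpronounceable name

        public int Prop
        {
            get { return _propBackingField; }
            set { _propBackingField = value; }
        }
    }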
And finally, why was the field also faster than the property when the debugger was attached?
When debugging, there are far fewer opportunities for things like inlining, since inlining makes it more difficult to set a break point, to support edit-and-continue, etc. It's mainly inlining that makes property accessors as fast as direct field access.
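If you want to keep measuring but take inlining out of the picture even in a release build without the debugger, one option (a sketch; the class and method names are made up) is to time equivalent accessor methods explicitly marked as not inlinable:

    using System.Runtime.CompilerServices;

    public class B
    {
        private int _field;

        [MethodImpl(MethodImplOptions.NoInlining)]
        public int GetValue() { return _field; }          // the JIT will not inline this call

        [MethodImpl(MethodImplOptions.NoInlining)]
        public void SetValue(int value) { _field = value; }
    }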

Where does a local variable get stored so that it's accessible from an async delegate? [duplicate]

What is a closure? Do we have them in .NET?
If they do exist in .NET, could you please provide a code snippet (preferably in C#) explaining it?
I have an article on this very topic. (It has lots of examples.)
In essence, a closure is a block of code which can be executed at a later time, but which maintains the environment in which it was first created - i.e. it can still use the local variables etc of the method which created it, even after that method has finished executing.
The general feature of closures is implemented in C# by anonymous methods and lambda expressions.
Here's an example using an anonymous method:
using System;
class Test
{
static void Main()
{
Action action = CreateAction();
action();
action();
}
static Action CreateAction()
{
int counter = 0;
return delegate
{
// Yes, it could be done in one statement;
// but it is clearer like this.
counter++;
Console.WriteLine("counter={0}", counter);
};
}
}
Output:
counter=1
counter=2
Here we can see that the action returned by CreateAction still has access to the counter variable, and can indeed increment it, even though CreateAction itself has finished.
If you are interested in seeing how C# implements closures, read the "I know the answer (it's 42)" blog post.
The compiler generates a class in the background to encapsulate the anonymous method and the variable j:
[CompilerGenerated]
private sealed class <>c__DisplayClass2
{
public <>c__DisplayClass2();
public void <fillFunc>b__0()
{
Console.Write("{0} ", this.j);
}
public int j;
}
for the function:
static void fillFunc(int count) {
for (int i = 0; i < count; i++)
{
int j = i;
funcArr[i] = delegate()
{
Console.Write("{0} ", j);
};
}
}
Turning it into:
private static void fillFunc(int count)
{
for (int i = 0; i < count; i++)
{
Program.<>c__DisplayClass1 class1 = new Program.<>c__DisplayClass1();
class1.j = i;
Program.funcArr[i] = new Func(class1.<fillFunc>b__0);
}
}
Closures are functional values that hold onto variable values from their original scope. C# can use them in the form of anonymous delegates.
For a very simple example, take this C# code:
delegate int testDel();
static void Main(string[] args)
{
int foo = 4;
testDel myClosure = delegate()
{
return foo;
};
int bar = myClosure();
}
At the end of it, bar will be set to 4, and the myClosure delegate can be passed around to be used elsewhere in the program.
Closures can be used for a lot of useful things, like delayed execution or to simplify interfaces - LINQ is mainly built using closures. The most immediate way it comes in handy for most developers is adding event handlers to dynamically created controls - you can use closures to add behavior when the control is instantiated, rather than storing data elsewhere.
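For example, a sketch of the dynamically created controls case (Windows Forms is assumed purely for illustration, and form is assumed to be an existing Form instance):

    // Each button's Click handler closes over its own caption, so no separate
    // lookup table is needed to know which button was clicked.
    for (int i = 0; i < 10; i++)
    {
        string caption = "Button " + i;   // one fresh variable per iteration, captured by the closure
        var button = new Button { Text = caption, Top = i * 30 };
        button.Click += (sender, e) => MessageBox.Show(caption + " was clicked");
        form.Controls.Add(button);
    }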
Func<int, int> GetMultiplier(int a)
{
return delegate(int b) { return a * b; } ;
}
//...
var fn2 = GetMultiplier(2);
var fn3 = GetMultiplier(3);
Console.WriteLine(fn2(2)); //outputs 4
Console.WriteLine(fn2(3)); //outputs 6
Console.WriteLine(fn3(2)); //outputs 6
Console.WriteLine(fn3(3)); //outputs 9
A closure is an anonymous function passed outside of the function in which it is created.
It retains access to any variables of that creating function that it uses.
A closure is when a function is defined inside another function (or method) and uses variables from that parent method. This wrapping-up of variables that live in the enclosing method by a function defined within it is what we call a closure.
Mark Seemann has some interesting examples of closures in his blog post where he does a parallel between oop and functional programming.
And to make it more detailed
var workingDirectory = new DirectoryInfo(Environment.CurrentDirectory);//when this variable
Func<int, string> read = id =>
{
var path = Path.Combine(workingDirectory.FullName, id + ".txt");//is used inside this function
return File.ReadAllText(path);
};//the entire process is called a closure.
Here is a contrived example for C# which I created from similar code in JavaScript:
public delegate T Iterator<T>() where T : class;
public Iterator<T> CreateIterator<T>(IList<T> x) where T : class
{
var i = 0;
return delegate { return (i < x.Count) ? x[i++] : null; };
}
So, here is some code that shows how to use the above code...
var iterator = CreateIterator(new string[3] { "Foo", "Bar", "Baz"});
// So, although CreateIterator() has been called and returned, the variable
// "i" within CreateIterator() will live on because of a closure created
// within that method, so that every time the anonymous delegate returned
// from it is called (by calling iterator()) its value will increment.
string currentString;
currentString = iterator(); // currentString is now "Foo"
currentString = iterator(); // currentString is now "Bar"
currentString = iterator(); // currentString is now "Baz"
currentString = iterator(); // currentString is now null
Hope that is somewhat helpful.
Closures are chunks of code that reference a variable outside themselves (from below them on the stack) and that might be called or executed later, for example when an event or delegate is defined and could be invoked at some indefinite future point in time. Because the outside variable that the chunk of code references may have gone out of scope (and would otherwise have been lost), the fact that it is referenced by the chunk of code (the closure) tells the runtime to keep that variable alive until the closure no longer needs it.
Basically, a closure is a block of code that you can pass as an argument to a function. C# supports closures in the form of anonymous delegates.
Here is a simple example:
The List.Find method can accept and execute a piece of code (a closure) to find a list item.
// Passing a block of code as a function argument
List<int> ints = new List<int> {1, 2, 3};
ints.Find(delegate(int value) { return value == 1; });
Using C#3.0 syntax we can write this as:
ints.Find(value => value == 1);
If you write an inline anonymous method (C#2) or (preferably) a Lambda expression (C#3+), an actual method is still being created. If that code is using an outer-scope local variable - you still need to pass that variable to the method somehow.
e.g. take this Linq Where clause (which is a simple extension method which passes a lambda expression):
var i = 0;
var items = new List<string>
{
"Hello","World"
};
var filtered = items.Where(x =>
// this is a predicate, i.e. a Func<T, bool> written as a lambda expression
// which is still a method actually being created for you in compile time
{
i++;
return true;
});
if you want to use i in that lambda expression, you have to pass it to that created method.
So the first question that arises is: should it be passed by value or reference?
Pass by reference is (I guess) preferable, as you get read/write access to that variable (and this is what C# does; I guess the team at Microsoft weighed the pros and cons and went with by-reference; according to Jon Skeet's article, Java went with by-value).
But then another question arises: Where to allocate that i?
Should it actually/naturally be allocated on the stack?
Well, if you allocate it on the stack and pass it by reference, there can be situations where it outlives its own stack frame. Take this example:
static void Main(string[] args)
{
Outlive();
var list = whereItems.ToList();
Console.ReadLine();
}
static IEnumerable<string> whereItems;
static void Outlive()
{
var i = 0;
var items = new List<string>
{
"Hello","World"
};
whereItems = items.Where(x =>
{
i++;
Console.WriteLine(i);
return true;
});
}
The lambda expression (in the Where clause) again creates a method which refers to an i. If i is allocated on the stack of Outlive, then by the time you enumerate the whereItems, the i used in the generated method will point to the i of Outlive, i.e. to a place in the stack that is no longer accessible.
Ok, so we need it on the heap then.
So what the C# compiler does to support this inline anonymous method/lambda is use what is called a "closure": it creates a class on the heap, named (rather poorly) DisplayClass, which has a field containing the i and the function that actually uses it.
Something that would be equivalent to this (you can see the IL generated using ILSpy or ILDASM):
class <>c_DisplayClass1
{
public int i;
public bool <GetFunc>b__0()
{
this.i++;
Console.WriteLine(i);
return true;
}
}
It instantiates that class in your local scope, and replaces any code relating to i or the lambda expression with that closure instance. So - anytime you are using the i in your "local scope" code where i was defined, you are actually using that DisplayClass instance field.
So if I change the "local" i in the main method, it will actually change the DisplayClass instance's i field,
i.e.
var i = 0;
var items = new List<string>
{
"Hello","World"
};
var filtered = items.Where(x =>
{
i++;
return true;
});
filtered.ToList(); // will enumerate filtered, i = 2
i = 10; // i will be overwritten with 10
filtered.ToList(); // will enumerate filtered again, i = 12
Console.WriteLine(i); // should print out 12
it will print out 12, as "i = 10" goes to that DisplayClass field and changes it just before the second enumeration.
A good source on the topic is this Bart De Smet Pluralsight module (requires registration) (also ignore his erroneous use of the term "Hoisting" - what (I think) he means is that the local variable (i.e. i) is changed to refer to the new DisplayClass field).
In other news, there seems to be some misconception that "Closures" are related to loops - as I understand "Closures" are NOT a concept related to loops, but rather to anonymous methods / lambda expressions use of local scoped variables - although some trick questions use loops to demonstrate it.
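The classic loop example behind that confusion, as a sketch: with a for loop, every delegate captures the same variable, so each one observes its final value unless you copy it per iteration.

    var actions = new List<Action>();
    for (int i = 0; i < 3; i++)
    {
        actions.Add(() => Console.WriteLine(i));   // all three delegates share the same i
    }
    foreach (var action in actions)
        action();                                  // prints 3 3 3

    actions.Clear();
    for (int i = 0; i < 3; i++)
    {
        int copy = i;                              // one fresh variable per iteration
        actions.Add(() => Console.WriteLine(copy));
    }
    foreach (var action in actions)
        action();                                  // prints 0 1 2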
A closure aims to simplify functional thinking, and it allows the runtime to manage state, releasing extra complexity for the developer. A closure is a first-class function with free variables that are bound in the lexical environment. Behind these buzzwords hides a simple concept: closures are a more convenient way to give functions access to local state and to pass data into background operations. They are special functions that carry an implicit binding to all the nonlocal variables (also called free variables or up-values) referenced. Moreover, a closure allows a function to access one or more nonlocal variables even when invoked outside its immediate lexical scope, and the body of this special function can transport these free variables as a single entity, defined in its enclosing scope. More importantly, a closure encapsulates behavior and passes it around like any other object, granting access to the context in which the closure was created, reading, and updating these values.
For a simpler and more approachable answer, here is how the book C# 7.0 in a Nutshell puts it.
Prerequisite you should know: a lambda expression can reference the local variables and parameters of the method in which it's defined (outer variables).
static void Main()
{
int factor = 2;
//Here factor is the variable that takes part in lambda expression.
Func<int, int> multiplier = n => n * factor;
Console.WriteLine (multiplier (3)); // 6
}
The real point: outer variables referenced by a lambda expression are called captured variables. A lambda expression that captures variables is called a closure.
Last point to note: captured variables are evaluated when the delegate is actually invoked, not when the variables were captured:
int factor = 2;
Func<int, int> multiplier = n => n * factor;
factor = 10;
Console.WriteLine (multiplier (3)); // 30
A closure is a function, defined within another function, that can access its own local variables as well as those of its parent.
public things GetByName(string name)
{
List<things> theThings = new List<things>();
return theThings.Find(t => t.Name == name);
}
So the function passed to the Find method,
t => t.Name == name
can access the variable in its own scope, t, and the variable name, which is in its parent's scope, even though it is executed by the Find method as a delegate, from another scope altogether.

What is the lifetime of a delegate created by a lambda in C#?

Lambdas are nice, as they offer brevity and locality and an extra form of encapsulation. Instead of having to write functions which are only used once you can use a lambda.
While wondering how they worked, I intuitively figured they are probably only created once. This inspired me to create a solution which allows restricting the scope of a class member beyond private, to one particular scope, by using the lambda as an identifier of the scope it was created in.
This implementation works, although it is perhaps overkill (I'm still researching it), and it seems to confirm my assumption.
A smaller example:
class SomeClass
{
public void Bleh()
{
Action action = () => {};
}
public void CallBleh()
{
Bleh(); // `action` == {Method = {Void <SomeClass>b__0()}}
Bleh(); // `action` still == {Method = {Void <SomeClass>b__0()}}
}
}
Would the lambda ever return a new instance, or is it guaranteed to always be the same?
It's not guaranteed either way.
From what I remember of the current MS implementation:
A lambda expression which doesn't capture any variables is cached statically
A lambda expression which only captures "this" could be cached on a per-instance basis, but isn't
A lambda expression which captures a local variable can't be cached
Two lambda expressions which have the exact same program text aren't aliased; in some cases they could be, but working out the situations in which they can be would be very complicated
EDIT: As Eric points out in the comments, you also need to consider type arguments being captured for generic methods.
EDIT: The relevant text of the C# 4 spec is in section 6.5.1:
Conversions of semantically identical anonymous functions with the same (possibly empty) set of captured outer variable instances to the same delegate types are permitted (but not required) to return the same delegate instance. The term semantically identical is used here to mean that execution of the anonymous functions will, in all cases, produce the same effects given the same arguments.
Based on your question here and your comment to Jon's answer I think you are confusing multiple things. To make sure it is clear:
The method that backs the delegate for a given lambda is always the same.
The method that backs the delegate for "the same" lambda that appears lexically twice is permitted to be the same, but in practice is not the same in our implementation.
The delegate instance that is created for a given lambda might or might not always be the same, depending on how smart the compiler is about caching it.
So if you have something like:
for(i = 0; i < 10; ++i)
M( ()=>{} )
then every time M is called, you get the same instance of the delegate because the compiler is smart and generates
static void MyAction() {}
static Action DelegateCache = null;
...
for(i = 0; i < 10; ++i)
{
if (C.DelegateCache == null) C.DelegateCache = new Action ( C.MyAction )
M(C.DelegateCache);
}
If you have
for(i = 0; i < 10; ++i)
M( ()=>{this.Bar();} )
then the compiler generates
void MyAction() { this.Bar(); }
...
for(i = 0; i < 10; ++i)
{
M(new Action(this.MyAction));
}
You get a new delegate every time, with the same method.
The compiler is permitted to (but in fact does not at this time) generate
void MyAction() { this.Bar(); }
Action DelegateCache = null;
...
for(i = 0; i < 10; ++i)
{
if (this.DelegateCache == null) this.DelegateCache = new Action ( this.MyAction )
M(this.DelegateCache);
}
In that case you would always get the same delegate instance if possible, and every delegate would be backed by the same method.
If you have
Action a1 = ()=>{};
Action a2 = ()=>{};
Then in practice the compiler generates this as
static void MyAction1() {}
static void MyAction2() {}
static Action ActionCache1 = null;
static Action ActionCache2 = null;
...
if (ActionCache1 == null) ActionCache1 = new Action(MyAction1);
Action a1 = ActionCache1;
if (ActionCache2 == null) ActionCache2 = new Action(MyAction2);
Action a2 = ActionCache2;
However the compiler is permitted to detect that the two lambdas are identical and generate
static void MyAction1() {}
static Action ActionCache1 = null;
...
if (ActionCache1 == null) ActionCache1 = new Action(MyAction1);
Action a1 = ActionCache1;
Action a2 = ActionCache1;
Is that now clear?
No guarantees.
A quick demo:
Action GetAction()
{
return () => Console.WriteLine("foo");
}
Call this twice, do a ReferenceEquals(a,b), and you'll get true
Action GetAction()
{
var foo = "foo";
return () => Console.WriteLine(foo);
}
Call this twice, do a ReferenceEquals(a,b), and you'll get false
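For completeness, the check described above looks like this:

    var a = GetAction();
    var b = GetAction();
    Console.WriteLine(ReferenceEquals(a, b)); // true for the capture-free version (with the current compiler), false once a local is captured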
I see Skeet jumped in while I was answering, so I won't belabor that point. One thing I would suggest, to better understand how you are using things, is to get familiar with reverse engineering tools and IL. Take the code sample(s) in question and reverse engineer to IL. It will give you a great amount of information on how the code is working.
Good question. I don't have an "academic answer," more of a practical answer: I could see a compiler optimizing the binary to use the same instance, but I wouldn't ever write code that assumes it's "guaranteed" to be the same instance.
I upvoted you at least, so hopefully someone can give you the academic answer you're looking for.

C#: passing parameters to callbacks

I have a fairly generic class (Record) that I have added a callback handler to, so I can do something like the following.
record.Save(AfterSaveMethod);
Which also returns the identity number of the record created. The issue I have now is that I have this nested save routine in a loop, and I need to use/pass the i variable!
for (int i = 0; i < count; i++)
{
record.Save(AfterSaveMethod2); // but I need to pass i through as well
}
What do I do here?
A) Rewrite the Save method and include it in this class (yuk)?
B) Have an overloaded Save option that takes an object as a parameter, so I can pass it through to my Record class and then merge it with the identity number, returning both the identity number and anything extra contained in the passed-through object (hmm, sounds a bit messy)?
C) Or is there a sexier option?
This is where anonymous methods or lambda expressions are really handy :)
Try this (assuming C# 3):
for (int i = 0; i < count; i++)
{
int index = i;
record.Save(id => AfterSaveMethod2(id, index));
}
Note that the lambda expression here is capturing index, not i - that's to avoid the problems inherent in closing over the loop variable.
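This assumes AfterSaveMethod2 has (or is given) a two-parameter form along these lines; the exact signature is hypothetical and depends on what Save expects from its callback:

    // Hypothetical callback: the id comes from Save, the index from the loop.
    void AfterSaveMethod2(int id, int index)
    {
        Console.WriteLine("Record at position {0} was saved with id {1}", index, id);
    }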
You could also create a (thread-static) context object, which stores your "state" (in this case, the index variable value), which you can access using a static property (say MyContext.Current). Depends on the complexity of the context ...
WCF uses something like this with OperationContext.Current, ASP.NET, TransactionScope et al.
If the context is a bit more complex than yours and you don't want to pollute the signatures of several methods, I think this model is quite neat.
Use:
// init context
MyContext.Current = new MyContext();
for (var idx = 0; idx < 99; idx++) {
MyContext.Current.Idx = idx;
record.Save(AfterSaveMethod);
}
// uninitialize context
MyContext.Current = null;
(very simplistic sample)
If the context is [ThreadStatic] you can have multiple concurrent calls that won't affect each other.
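A minimal sketch of such a context class (MyContext is the made-up name used in the sample above, not a framework type):

    public class MyContext
    {
        [ThreadStatic]
        private static MyContext _current;   // one value per thread

        public static MyContext Current
        {
            get { return _current; }
            set { _current = value; }
        }

        public int Idx { get; set; }
    }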
class SexierSaveMethod
{
public Int32 Index { get; set; }
public SexierSaveMethod(Int32 i)
{
Index = i;
}
public void AfterSaveMethod()
{
// Use Index here
}
}
record.Save(new SexierSaveMethod(i).AfterSaveMethod);
