Does defining an instance as dynamic in C# mean:
1. The compiler does not perform compile-time type checking, but run-time checking takes place like it always does for all instances.
2. The compiler does not perform compile-time type checking, but run-time checking takes place, unlike with any other non-dynamic instances.
3. Same as 2, and this comes with a performance penalty (trivial? potentially significant?).
The question is very confusing.
Does defining an instance as dynamic in C# mean:
By "defining an instance" do you mean "declaring a variable"?
The compiler does not perform compile-time type checking, but run-time checking takes place like it always does for all instances.
What do you mean by "run-time checking like it always does"? What run-time checking did you have in mind? Are you thinking of the checking performed by the IL verifier, or are you thinking of runtime type checks caused by casts, or what?
Perhaps it would be best to simply explain what "dynamic" does.
First off, from the perspective of the compiler, dynamic is a type. From the perspective of the CLR, there is no such thing as dynamic; by the time the code actually runs, every instance of "dynamic" has been replaced with "object" in the generated code.
The compiler treats expressions of type dynamic exactly as expressions of type object, except that all operations on the value of that expression are analyzed, compiled and executed at runtime based on the runtime type of the instance. The goal is that the code executed has the same semantics as if the compiler had known the runtime types at compile time.
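To illustrate those semantics, here is a minimal sketch (the Print overloads are invented for this example). With a static type of object the compiler picks the overload at compile time; with dynamic, the overload is chosen at runtime from the runtime type:

using System;

class OverloadDemo
{
    static void Print(string s) { Console.WriteLine("string overload"); }
    static void Print(object o) { Console.WriteLine("object overload"); }

    static void Main()
    {
        object o = "hello";
        dynamic d = "hello";

        Print(o); // "object overload": resolved at compile time from the static type
        Print(d); // "string overload": resolved at runtime from the runtime type
    }
}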
Your question seems to be about performance.
The best way to answer performance questions is to try it and find out - what you should do if you need hard numbers is to write the code both ways, using dynamic and using known types, and then get out a stopwatch and compare the timings. That's the only way to know.
However, let's consider the performance implications of some operations at an abstract level. Suppose you have:
int x = 123;
int y = 456;
int z = x + y;
Adding two integers takes about a billionth of a second on most hardware these days.
What happens if we make it dynamic?
dynamic x = 123;
dynamic y = 456;
dynamic z = x + y;
Now what does this do at runtime? This boxes 123 and 456 into objects, which allocates memory on the heap and does some copies.
Then it starts up the DLR and asks the DLR "has this call site been compiled once already, with the types of x and y being int and int?"
The answer in this case is no. The DLR then starts up a special version of the C# compiler which analyzes the addition expression, performs overload resolution, and spits out an expression tree describing the lambda which adds together two ints. The DLR then compiles that lambda into dynamically generated IL, which the jit compiler then jits. The DLR then caches that compiled state so that the second time you ask, the compiler doesn't have to do all that work over again.
That takes longer than a nanosecond. It takes potentially many thousands of nanoseconds.
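If you want to see both effects, the heavy first-time binding and the subsequent call-site cache hits, here is a minimal Stopwatch sketch (invented for this answer; the exact numbers will vary wildly by machine and runtime version):

using System;
using System.Diagnostics;

class DynamicAddBenchmark
{
    static void Main()
    {
        const int iterations = 10000000;

        var sw = Stopwatch.StartNew();
        long total = 0;
        for (int i = 0; i < iterations; i++)
        {
            int x = 123;
            int y = 456;
            total += x + y;  // statically bound add, jitted once
        }
        sw.Stop();
        Console.WriteLine("static : {0} ms (total {1})", sw.ElapsedMilliseconds, total);

        sw.Restart();
        dynamic dtotal = 0L;
        for (int i = 0; i < iterations; i++)
        {
            dynamic x = 123;   // boxes the int
            dynamic y = 456;   // boxes the int
            dtotal += x + y;   // first iteration pays the DLR binding cost; later ones hit the call-site cache
        }
        sw.Stop();
        Console.WriteLine("dynamic: {0} ms (total {1})", sw.ElapsedMilliseconds, (long)dtotal);
    }
}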
Does that answer your questions? I don't really understand what you're asking here but I'm making a best guess.
As far as I know, the answer is 3.
You can do this:
dynamic x = GetMysteriousObject();
x.DoLaundry();
Since the compiler does no type checking on x, it will compile this code, the assumption being that you know what you're doing.
But this means extra run-time checking has to occur: namely, examining x's type, seeing if it has a DoLaundry method accepting no arguments, and executing it.
In other words the above code is sort of like doing this (I'm not saying it's the same, just drawing a comparison):
object x = GetMysteriousObject();
MethodInfo doLaundry = x.GetType().GetMethod(
    "DoLaundry",
    BindingFlags.Instance | BindingFlags.Public
);
doLaundry.Invoke(x, null);
This is definitely not trivial, though that isn't to say you're going to be able to see a performance issue with your naked eye.
I believe the implementation of dynamic involves some pretty sweet behind-the-scenes caching that gets done for you, so that if you run this code again and x is the same type, it'll run a lot faster.
Don't hold me to that, though. I don't have all that much experience with dynamic; this is merely how I understand it to work.
Declaring a variable as dynamic is similar to declaring it as object; the compiler simply emits an extra flag indicating that member resolution is deferred to run-time.
In terms of the performance penalty - it depends on what the underlying object is. That's the whole point of dynamic objects, right? The underlying object can be a Ruby or Python object, or it can be a C# object. The DLR will figure out at run-time how to resolve member calls on this object, and this resolution method will determine the performance penalty.
Having said that - there definitely is a performance penalty.
That's why we're not simply going to start using dynamic objects all over the place.
I made a simple test: 100,000,000 assignments to a dynamic variable vs. the same number of direct double assignments, something like:
int numberOfIterations = 100000000;
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < numberOfIterations; i++)
{
    var x = (dynamic)2.87;
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds); // report the first (dynamic) timing
sw.Restart();
for (int i = 0; i < numberOfIterations; i++)
{
    double y = 2.87;
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds); // report the second (double) timing
In the first loop (with dynamic) it took some 500 ms; in the second one, about 200 ms. Certainly, the performance loss depends on what you do in your loops; these represent about the simplest action possible.
Well, the variable is statically typed to be of the type dynamic but beyond that the compiler doesn't do any checking as far as I know.
Type binding is done at runtime and yes, there's a penalty, but if dynamic is the only option then so what. If you can solve the problem using static typing do so. That being said, the DLR does call site caching which means some of the overhead is reduced as the plumbing can be reused in some cases.
As far as I understand it, dynamic only bypasses the compile-time check; type resolution happens at runtime, as it does for all types, so I don't think there is any performance penalty associated with it.
EDIT: This question is based on the misconception that GetType() returns a string.
I'm trying to get a better handle on how C# works, so this question is more theoretical than practical.
As I understand it, calling GetType on a value type requires boxing and then calling the method. Since value types can't be inherited from, though, the type is known at compile time, so why can't the compiler simply replace the call to GetType() with a string literal?
Or is this something that could be done, but isn't considered necessary since there wouldn't be much need to call GetType on an unboxed value type anyway?
Let's consider the question you might have asked had you not had the misconception that GetType returns a string. Can the compiler compile
Foo foo = whatever;
Type t = foo.GetType();
as
Type t = typeof(Foo);
Yes, that would be a legal optimization. The compiler doesn't do that optimization because it would be a waste of the compiler team's time to do that optimization when they could be doing an optimization that actually makes a difference. Let's think about the proposed optimization.
What if Foo declares its own GetType method, hiding the original? Then it can do anything, so the compiler team would have to detect calls to the original GetType. And then write test cases that ensure that the optimization is not applied in those cases.
It is incorrect if the receiver has any side effect. The compiler team would have to detect side effects and suppress the optimization in those cases, or implement the optimization in a manner that preserved side effects. And again write the test cases.
Those side effects include the possible null dereference exception in the case where we have Foo? instead of Foo, so you'd have to have a special case for that in the compiler.
What about GetType on a generic that might be a value type? There are a bunch of cases to consider there, and again, these increase the design, implementation, and testing cost of the optimization.
The optimization saves a single boxing penalty. The code by supposition is about to do an unnecessary reflection. Does that strike you as code that is on the critical path for the application to achieve high performance? Don't eliminate the boxing; eliminate the reflection!
An optimization that optimizes code that no one writes in the first place is not a useful optimization. Why would you do reflection to determine the type of an expression that you already know the type of at compile time? Sensible people don't write this code in the first place, so there is no need to optimize it.
Or put another way: if you accidentally write code the way that produces a boxing conversion, and you want to eliminate it, you can do so trivially. There's no need for the compiler to do it for you when it is easy to do yourself.
So for all these reasons and more, the cost of the optimization is higher than the benefit it produces.
For a longer but similar discussion on how to evaluate proposed optimizations, see this answer from yesterday: Weird behaviour of c# compiler due caching delegate
In an unboxed value type, GetType will always return the type of the variable. You already know the type of the variable, so what is the advantage to begin with? Simply use nameof on the type if you want the name:
var i = 1;
var iKnowTheType = nameof(System.Int32); //is this evaluated at compile time?
var s = "Int32";
var areSame = ReferenceEquals(iKnowTheType, s); //returns true!
iKnowTheType and s are the same string, which means nameof(System.Int32) and the literal "Int32" are basically the same thing (read about string interning for more accurate information on this subject).
GetType returns the runtime type of an object. On unboxed value types it will, again, always be the variable's type, but the key difference here is that the type is evaluated at runtime:
var i = 1;
var iDontKnowTheType = i.GetType().Name;
var s = "Int32";
var areSame = ReferenceEquals(iDontKnowTheType, s); //returns false!
var areEqual = iDontKnowTheType == s; //returns true
Here, the compiler can't intern iDontKnowTheType because that particular string is evaluated at runtime.
GetType() returns a Type object. That object does not exist at compile time, only at runtime.
Also that object has lots of metadata and other runtime data attached to it, so it can be used for much more than just a 'typename comparison', just think of Reflection and Serialization.
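For example, here is a minimal sketch (DateTime chosen purely for illustration) of what a Type object gives you beyond a name:

using System;

class TypeObjectDemo
{
    static void Main()
    {
        Type t = DateTime.Now.GetType();

        Console.WriteLine(t.FullName);     // System.DateTime
        Console.WriteLine(t.IsValueType);  // True

        foreach (var p in t.GetProperties())
            Console.WriteLine(p.Name);     // Date, Day, DayOfWeek, ...
    }
}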
Supposing I have a class MyType:
sealed class MyType
{
    static Type typeReference = typeof(MyType);
    //...
}
Given the following code:
var instance = new MyType();
var type1 = instance.GetType();
var type2 = typeof(MyType);
var type3 = typeReference;
Which of these variable assignments would be the most efficient?
Is the performance of GetType() or typeof() a big enough concern that it would be beneficial to save the type in a static field?
typeof(SomeType) is a simple metadata token lookup
GetType() is a virtual call; on the plus side you'll get the derived type if it is a subclass, but on the minus side you'll get the derived class if it is a subclass. If you see what I mean. Additionally, GetType() requires boxing for structs, and doesn't work well for nullable structs.
If you know the type at compile time, use typeof().
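To make the subclass point concrete, a minimal sketch (Animal and Dog are invented types):

using System;

class Animal { }
class Dog : Animal { }

class TypeDemo
{
    static void Main()
    {
        Animal a = new Dog();
        Console.WriteLine(typeof(Animal)); // Animal: fixed at compile time
        Console.WriteLine(a.GetType());    // Dog: the runtime type of the instance
    }
}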
I would go with type2. It doesn't require instantiating an instance to get the type. And it's the most human readable.
The only way to find out is to measure.
The "type1" variant isn't reliable or recommended in any way, since not all types can be constructed. Even worse, it allocates memory that will need to be garbage collected, and it invokes the object's constructor.
For the remaining two options, on my machine "type3" is about twice as fast as "type2" in both debug and release modes. Remember that this is only true for my test - the results may not be true for other processor types, machine types, compilers, or .NET versions.
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 10000000; i++)
{
    var y = typeof(Program).ToString();
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
sw.Restart();
for (int i = 0; i < 10000000; i++)
{
    var y = typeReference.ToString();
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
That said, it's a bit alarming this question is being asked without a clear requirement. If you noticed a performance problem, you'd likely have already profiled it and know which option was better. That tells me that this is likely premature optimization - you know the saying, "premature optimization is the root of all evil".
Programming code is not measured only by performance. It's also measured by correctness, developer productivity, and maintainability. Increasing the complexity of your code without a strong reason just transfers that cost to somewhere else. What might have been a non-issue has now turned into a serious loss of productivity, both now and for future maintainers of the application.
My recommendation would be to always use the "type2" variant. The measurement code I listed isn't a real-world scenario. Caching typeof in a reference variable likely has a ton of side-effects, particularly around the way .NET loads assemblies. Rather than having them load only when needed, it might end up loading them all on every use of the application - turning a theoretical performance optimization into a very real performance problem.
They're rather different.
typeof(MyType) gets a Type object describing the MyType type, resolved at compile time using the ldtoken instruction.
myInstance.GetType() gets the Type object describing the runtime type of the myInstance variable.
Both are intended for different scenarios.
You cannot use typeof(MyType) unless you know the type at compile time and have access to it.
You cannot use myInstance.GetType() unless you have an instance of the type.
typeof(MyType) is always more efficient, but you cannot use it if you don't know the type at compile time. Nor can you use typeof(MyType) to learn the real runtime type of a variable, precisely because you don't know that type in advance.
Both do basically the same job, although typeof can be used on a type name without an instance:
typeof(MyClass);
But
MyClass.GetType();
won't even build since you need to have an instance of the class.
Long story short, they both do the same job in different context.
I am learning C# and .NET, and I frequently use the keyword var in my code. I got the idea from Eric Lippert and I like how it increases my code's maintainability.
I am wondering, though... much has been written in blogs about slow heap-allocated references, yet I am not observing this myself. Is this actually slow? I am referring to slow compile times due to type inference.
You state:
I am referring to slow compile times due to type inference
This does not slow down the compiler. The compiler already has to know the result type of the expression, in order to check compatibility (direct or indirect) of the assignment. In some ways using this already-known type removes a few things (the potential to have to check for inheritance, interfaces and conversion operators, for example).
It also doesn't slow down the runtime: var variables are statically compiled, exactly like regular C# variables (which they are).
In short... it doesn't.
'var' in C# is not a VARIANT like you're used to in VB. var is simply syntactic sugar the compiler lets you use to shorthand the type. The compiler figures out the type of the right-hand side of the expression and sets your variable to that type. It has no performance impact at all - just the same as if you'd typed the full type expression:
var x = new X();
exactly the same as
X x = new X();
This seems like a trivial example, and it is. var really shines when the expression is much more complex, or even inexpressible (as with anonymous types), and with enumerables.
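For instance (a sketch, assuming a using directive for System.Collections.Generic):

// Painful to spell out, and the repetition adds nothing:
Dictionary<string, List<int>> lookup1 = new Dictionary<string, List<int>>();

// Identical IL, easier to read:
var lookup2 = new Dictionary<string, List<int>>();

// Impossible to spell out: anonymous types have no name you can write:
var point = new { X = 3, Y = 4 };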
Var is replaced at compile time with your actual variable type. Are you thinking of dynamic?
A "variant" is typeless, so access to state (or internal state conversion) always must go through two steps: (1) Determine the "real" internal type, and (2) Extract the relevant state from that "real" internal type.
You do not have that two-step process when you start with a typed object.
True, a "variant" thus has this additional overhead. The appropriate use is in those cases where you want the convenience of any-type for code simplicity, like is done with most scripting languages, or very high-level APIs. In those cases, the "variant" overhead is often not significant (since you are working at a high-level API anyway).
If you're talking about "var", though, then that is merely a convenience way for you to say, "Compiler, put the proper type here" because you don't want to do that work, and the compiler should be able to figure it out. In that case, "var" doesn't represent a (runtime) "variant", but rather a mere source-code specification syntax.
The compiler infers the type from the initializing expression.
var myString = "123"; is no different from string myString = "123";
Also, generally speaking, reference types live on the heap and value types live on the stack, regardless of whether they're declared using var.
Our internal audit suggests that we use explicit variable type declarations instead of the var keyword. They argue that using var "may lead to unexpected results in some cases".
I am not aware of any difference between an explicit type declaration and the use of var once the code is compiled to MSIL.
The auditor is a respected professional so I cannot simply refuse such a suggestion.
How about this...
double GetTheNumber()
{
    // get the important number from somewhere
}
And then elsewhere...
var theNumber = GetTheNumber();
DoSomethingImportant(theNumber / 5);
And then, at some point in the future, somebody notices that GetTheNumber only ever returns whole numbers so refactors it to return int rather than double.
Bang! No compiler errors and you start seeing unexpected results, because what was previously floating-point arithmetic has now become integer arithmetic without anybody noticing.
Having said that, this sort of thing should be caught by your unit tests etc, but it's still a potential gotcha.
I tend to follow this scheme:
var myObject = new MyObject(); // OK as the type is clear
var myObject = otherObject.SomeMethod(); // Bad as the return type is not clear
If the return type of SomeMethod ever changes then this code will still compile. In the best case you get compile errors further along, but in the worst case (depending on how myObject is used) you might not. What you will probably get in that case is run-time errors which could be very hard to track down.
Some cases could really lead to unexpected results. I'm a var fan myself, but this could go wrong:
var myDouble = 2;
var myHalf = 1 / myDouble;
Here myDouble is inferred as int, so 1 / myDouble is integer division and myHalf is 0 rather than 0.5. Obviously this is a mistake and not an "unexpected result". But it is a gotcha...
var is not a dynamic type; it is simply syntactic sugar. The only exception to this is with anonymous types. From the Microsoft docs:
In many cases the use of var is optional and is just a syntactic convenience. However, when a variable is initialized with an anonymous type you must declare the variable as var if you need to access the properties of the object at a later point.
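A quick sketch of the anonymous-type case (the names are invented):

var person = new { Name = "Ada", Age = 36 }; // the type has no name you could declare
Console.WriteLine(person.Name);              // members are still statically typed
// person = new { Town = "London" };         // would not compile: a different anonymous type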
There is no difference once compiled to IL unless you have explicitly defined the type as different to the one which would be implied (although I can't think of why you would). The compiler will not let you change the type of a variable declared with var at any point.
From the Microsoft documentation (again)
An implicitly typed local variable is strongly typed just as if you had declared the type yourself, but the compiler determines the type
In some cases var can impede readability. More Microsoft docs state:
The use of var does have at least the potential to make your code more difficult to understand for other developers. For that reason, the C# documentation generally uses var only when it is required.
In the non-generic world you might get different behavior when using var instead of the type whenever an implicit conversion would occur, e.g. within a foreach loop.
In the example below, an implicit conversion from object to XmlNode takes place (the non-generic IEnumerator interface only returns object). If you simply replace the explicit declaration of the loop variable with the var keyword, this implicit conversion no longer takes place:
using System;
using System.Xml;

class Program
{
    static void Foo(object o)
    {
        Console.WriteLine("object overload");
    }

    static void Foo(XmlNode node)
    {
        Console.WriteLine("XmlNode overload");
    }

    static void Main(string[] args)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<root><child/></root>");

        foreach (XmlNode node in doc.DocumentElement.ChildNodes)
        {
            Foo(node);
        }

        foreach (var node in doc.DocumentElement.ChildNodes)
        {
            // oops! node is now of type object!
            Foo(node);
        }
    }
}
The result is that this code actually produces different outputs depending on whether you used var or an explicit type. With var the Foo(object) overload will be executed, otherwise the Foo(XmlNode) overload will be. The output of the above program therefore is:
XmlNode overload
object overload
Note that this behavior is perfectly according to the C# language specification. The only problem is that var infers a different type (object) than you would expect and that this inference is not obvious from looking at the code.
I did not add the IL to keep it short. But if you want you can have a look with ildasm to see that the compiler actually generates different IL instructions for the two foreach loops.
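For reference, here is a sketch of roughly what the compiler generates for the explicitly typed loop (simplified; XmlNodeList only exposes the non-generic IEnumerator):

System.Collections.IEnumerator e = doc.DocumentElement.ChildNodes.GetEnumerator();
while (e.MoveNext())
{
    XmlNode node = (XmlNode)e.Current; // the compiler-inserted cast: this is the implicit conversion
    Foo(node);
}
// With var, the loop variable keeps the static type object and no cast is inserted.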
It's an odd claim that var should never be used because it "may lead to unexpected results in some cases", when there are subtleties in the C# language far more complex than the use of var.
One of these is the implementation details of anonymous methods which can lead to the R# warning "Access to modified closure" and behaviour that is very much not what you might expect from looking at the code. Unlike var which can be explained in a couple of sentences, this behaviour takes three long blog posts which include the output of a disassembler to explain fully:
The implementation of anonymous methods in C# and its consequences (part 1)
The implementation of anonymous methods in C# and its consequences (part 2)
The implementation of anonymous methods in C# and its consequences (part 3)
Does this mean that you also shouldn't use anonymous methods (i.e. delegates, lambdas) and the libraries that rely on them such as Linq or ParallelFX just because in certain odd circumstances the behaviour might not be what you expect?
Of course not.
It means that you need to understand the language you're writing in, know its limitations and edge cases, and test that things work as you expect them to. Excluding language features on the basis that they "may lead to unexpected results in some cases" would mean that you were left with very few language features to use.
If they really want to argue the toss, ask them to demonstrate that a number of your bugs can be directly attributed to use of var and that explicit type declaration would have prevented them. I doubt you'll hear back from them soon.
They argue that using var "may lead to unexpected results in some cases".
If unexpected is, "I don't know how to read the code and figure out what it is doing," then yes, it may lead to unexpected results. The compiler has to know what type to make the variable based on the code written around the variable.
The var keyword is a compile time feature. The compiler will put in the appropriate type for the declaration. This is why you can't do things like:
var my_variable = null;
or
var my_variable;
The var keyword is great because you have to write less type information in the code itself; the compiler figures out what it is supposed to do for you. It's almost like always programming to an interface (where the interface's methods and properties are defined by what you use within the scope of the variable declared with var). If the type of a variable needs to change (within reason, of course), you don't need to touch the variable declaration; the compiler handles this for you. That may sound like a trivial matter, but what happens if you have to change the return type of a function that is used all throughout the program? If you didn't use var, then you have to find and fix every place that variable is declared. With the var keyword, you don't need to worry about that.
When coming up with guidelines, as an auditor has to do, it is probably better to err on the side of safety, that is, white-listing good practices / black-listing bad practices, as opposed to telling people to simply be sensible and do the right thing based on an assessment of the situation at hand.
If you just say "don't use var anywhere in code", you get rid of a lot of ambiguity in the coding guidelines. This should make code look & feel more standardized without having to solve the question of when to do this and when to do that.
I personally love var. I use it for all local variables. All the time. If the resulting type is not clear, then this is not an issue with var, but an issue with the (naming of) methods used to initialize a variable...
I follow a simple principle when it comes to using the var keyword. If you know the type beforehand, don't use var.
In most cases, I use var with LINQ, as I might want to return an anonymous type.
var is best used when the declaration is already obvious. This complicates readability:
List<Entity> en = new List<Entity>();
while this is lazy, clear code, and I like it:
var en = new List<Entity>();
I use var only where it is clear what type the variable is, or where there is no need to know the type at all (e.g. GetPerson() should return a Person, Person_Class, etc.).
I do not use var for primitive types, enums, or string. I also do not use it for value types, because value types are copied by assignment, so the type of the variable should be declared explicitly.
About your auditor's comments, I would say that adding more lines of code, as we have been doing every day, also "leads to unexpected results in some cases". The validity of that argument has already been proven by the bugs we have created; therefore, I would suggest freezing the code base forever to prevent that.
Using var is lazy coding if you know what the type is going to be. It's just easier and cleaner to read, and when looking at lots and lots of code, easier and cleaner is always better.
There is absolutely no difference in the IL output between a variable declaration using var and one explicitly specified (you can prove this using Reflector). I generally only use var for long nested generic types, foreach loops, and anonymous types, as I like to have everything explicitly specified. Others may have different preferences.
var is just a shorthand notation of using the explicit type declaration.
You can only use var in certain circumstances; you'll have to initialize the variable at declaration time when using var.
You cannot later assign a value of another type to the variable.
It seems to me that many people tend to confuse the 'var' keyword with the 'Variant' datatype in VB6.
The "only" benefit that I see to using explicit variable declarations is that, with well-chosen type names, you state the intent of your piece of code much more clearly (which is more important than anything else, IMO). The var keyword's benefit really is what Pieter said.
I also thought that you would run into trouble if you declare your doubles without the D on the end, but in fact a numeric literal such as 2.5 is a double by default, so var infers double either way; the compiler never silently changes the precision of your literals, in debug or release builds. If you actually want a float, you must write the F suffix (2.5F).
var will compile to the same thing as the static type that could be specified. It just removes the need to be explicit with that type in your code. It is not a dynamic type and cannot change at runtime. I find it very useful in foreach loops.
foreach (var item in items)
{
    item.name = ______;
}
When working with enumerations, sometimes the specific type is unknown or time-consuming to look up. Using var instead of the static type will yield the same result.
I have also found that the use of var lends itself to easier refactoring: when an enumeration of a different element type is used, the foreach will not need to be updated, as the sketch below shows.
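For instance (GetItems is a hypothetical method), if GetItems is later refactored to return a sequence of a different element type that still has a Name property, this loop compiles unchanged:

foreach (var item in GetItems())
{
    Console.WriteLine(item.Name); // still compiles as long as the element type has a Name property
}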
Use of var might hide logical programming errors that you would otherwise have been warned about by the compiler or the IDE. See this example:
float distX = innerDiagramRect.Size.Width / (numObjInWidth + 1);
Here, all the types in the calculation are int, and you get a warning about possible loss of fraction because you pick up the result in a float variable.
Using var:
var distX = innerDiagramRect.Size.Width / (numObjInWidth + 1);
Here you get no warning, because the type of distX is inferred as int. If you intended to use float values, this is a logical error that is hidden from you and hard to spot at runtime, unless the result of this initial calculation is < 1 (truncated to 0) and triggers a divide-by-zero exception in a later calculation.
A few days back, while writing an answer for this question here on Stack Overflow, I was a bit surprised by the C# compiler, which wasn't doing what I expected it to do. Look at the following two code snippets:
First:
object[] array = new object[1];
for (int i = 0; i < 100000; i++)
{
    ICollection<object> col = (ICollection<object>)array;
    col.Contains(null);
}
Second:
object[] array = new object[1];
for (int i = 0; i < 100000; i++)
{
    ICollection<object> col = array;
    col.Contains(null);
}
The only difference in code between the two snippets is the cast to ICollection<object>. Because object[] implements the ICollection<object> interface explicitly, I expected the two snippets to compile down to the same IL and be, therefore, identical. However, when running performance tests on them, I noticed the latter to be about 6 times as fast as the former.
After comparing the IL from both snippets, I noticed that both methods were identical, except for a castclass IL instruction in the first snippet.
Surprised by this, I now wonder why the C# compiler isn't 'smart' here. Things are never as simple as they seem, so why is the C# compiler a bit naïve here?
My guess is that you have discovered a minor bug in the optimizer. There is all kinds of special-case code in there for arrays. Thanks for bringing it to my attention.
This is a rough guess, but I think it's about the Array class's relationship to its generic IEnumerable.
In the .NET Framework version 2.0, the Array class implements the System.Collections.Generic.IList, System.Collections.Generic.ICollection, and System.Collections.Generic.IEnumerable generic interfaces. The implementations are provided to arrays at run time, and therefore are not visible to the documentation build tools. As a result, the generic interfaces do not appear in the declaration syntax for the Array class, and there are no reference topics for interface members that are accessible only by casting an array to the generic interface type (explicit interface implementations). The key thing to be aware of when you cast an array to one of these interfaces is that members which add, insert, or remove elements throw NotSupportedException.
See MSDN Article.
It's not clear whether this relates to .NET 2.0+, but in this special case it would make perfect sense why the compiler cannot optimize your expression if it only becomes valid at run time.
This doesn't look like more than just a missed opportunity in the compiler to suppress the cast. It will work if you write it like this:
ICollection<object> col = array as ICollection<object>;
which suggests that it gets too conservative because casts can throw exceptions. However, it does work when you cast to the non-generic ICollection. I'd conclude that they simply overlooked it.
There's a bigger optimization issue at work here, the JIT compiler doesn't apply the loop invariant hoisting optimization. It should have re-written the code like this:
object[] array = new object[1];
ICollection<object> col = (ICollection<object>)array;
for (int i = 0; i < 100000; i++)
{
    col.Contains(null);
}
Which is a standard optimization in the C/C++ code generator for example. Still, the JIT optimizer can't burn a lot of cycles on the kind of analysis required to discover such possible optimizations. The happy angle on this is that optimized managed code is still quite debuggable. And that there still is a role for the C# programmer to write performant code.