C# is, unlike C++, a language that hides technical stuff from the developer. No pointers (except in unsafe code) and garbage collection are examples of this. As I understand it, C# wants the developer to focus only on the concepts and not the underlying architecture, memory handling, etc.
But then, why does the developer have to decide where an object is to be stored? For a class it is always on the heap; for a struct it is either on the stack (if it is a local variable) or inline (if it is a member of an object).
Isn't that something the compiler could figure out, either based on the class definition (it could estimate the needed memory space and decide heuristically based on that) or based on the context a given instance is in (is it a local variable in a function, then stack; is it more global, then heap; is it a member of an object, then base the decision on its estimated memory size)?
PS: I know class and struct have more differences than that, namely reference equality versus value equality, but that is not the point of my question. (And for those aspects, other solutions could be found to unlink these properties from the class/struct decision.)
Your question is not valid (in the logical sense) because it rests on a false premise:
The developer cannot really decide where an object is stored, because that is an implementation detail.
See this answer discussing whether structs go on the heap or the stack, or this question: C# structs/classes stack/heap control?
The first links to Eric Lippert's blog. Here is an extract:
Almost every article I see that describes the difference between value types and reference types explains in (frequently incorrect) detail about what “the stack” is and how the major difference between value types and reference types is that value types go on the stack. I’m sure you can find dozens of examples by searching the web.

I find this characterization of a value type based on its implementation details rather than its observable characteristics to be both confusing and unfortunate. Surely the most relevant fact about value types is not the implementation detail of how they are allocated, but rather the by-design semantic meaning of “value type”, namely that they are always copied “by value”. If the relevant thing was their allocation details then we’d have called them “heap types” and “stack types”. But that’s not relevant most of the time. Most of the time the relevant thing is their copying and identity semantics.

I regret that the documentation does not focus on what is most relevant; by focusing on a largely irrelevant implementation detail, we enlarge the importance of that implementation detail and obscure the importance of what makes a value type semantically useful. I dearly wish that all those articles explaining what “the stack” is would instead spend time explaining what exactly “copied by value” means and how misunderstanding or misusing “copy by value” can cause bugs.
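To make the quote's last point concrete, here is a minimal sketch of the kind of copy-by-value bug it alludes to (MutablePoint is a made-up type, used only for illustration):

using System;

// A mutable struct: a classic source of copy-by-value surprises.
struct MutablePoint
{
    public int X;
    public void MoveRight() => X++;
}

class Program
{
    static void Main()
    {
        var points = new[] { new MutablePoint() };

        MutablePoint p = points[0];     // copies the value out of the array
        p.MoveRight();                  // mutates the copy, not the array element

        Console.WriteLine(points[0].X); // prints 0, not 1
    }
}

The same code with a class instead of a struct would print 1, because only a reference would have been copied.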
C# is, unlike C++, a language that hides technical stuff from the developer
I do not think this is a fair characterization of C#. There are plenty of technical details a C# developer has to be aware of, and there are plenty of languages that work at a much higher abstraction level. I think it would be fairer to say that C# aims to make it easy to write good, working code. Sometimes called "The pit of success" by Eric Lippert. See also C++ and the pit of despair.
Ideally you would just write code that describes the problem, and let the compiler sort out anything that has to do with performance. But writing compilers is hard, especially when you have a hard time constraint since you are compiling just in time. While language design is theoretically unrelated to performance, practice shows that higher-level languages tend to be more difficult to optimize.
While there are important semantic differences between a struct and a class, the main reason to choose one or the other usually comes down to performance, and the performance is directly related to how they are stored and passed around. You would typically avoid allocating many small objects, or passing huge structs around.
As a comparison, Java is very similar to C#, and did just fine without value types for many years. It seems, however, that they have introduced, or will introduce, value types to reduce the overhead of creating objects.
why does the developer have to decide where an object is to be stored?
The simple answer seems to be that determining the optimal storage location is difficult for the compiler to do. Letting the developer hint at how the type is used helps improve performance in some situations and allows C# to be used in situations where it would otherwise be unsuitable, at the cost of making the language more complex and more difficult to learn.
Related
In contrast with Perl 5, Raku introduced gradual typing. The landscape of gradually typed object-oriented languages is rich and includes Typed Racket, C#, StrongScript, and Reticulated Python.
The official Raku website says it offers "optional gradual type-checking at no additional runtime cost". As far as I know, some gradually typed languages (like Typed Racket and Reticulated Python) have suffered from serious performance issues due to their strategy for enforcing type-system soundness. On the other hand, the concrete types in StrongScript perform well thanks to relatively inexpensive nominal subtype tests. Research on the classification of gradual typing (excluding Raku):
C# and concrete types in StrongScript: use run-time subtype tests on type constructors to supplement static typing. While statically typed code executes at native speed, values are dynamically checked at typed-untyped boundaries. Types insert efficient casts and lead to code that can be optimized. This approach is also sound and has low overhead, but comes at a cost in expressiveness and in the ability to migrate from untyped to typed code. (A rough C# sketch of such a boundary check follows this list.)
Typed Racket: monitors values to ensure that they behave in accordance with their assigned types. Instead of checking higher-order and mutable values for static type tags as the concrete approach does, wrappers ensure enduring conformance of values to their declared types. This avoids casts in typed code. The price it pays for this soundness, however, is the heavyweight wrappers inserted at typed-untyped boundaries.
Reticulated Python: lies between the above two; it adds type casts but does so only for the top level of data structures. The performance of the transient semantics for Reticulated Python is a worst-case scenario for concrete types, i.e., there is a cast at almost every call. It checks types at use sites, so the act of adding types to a program introduces more casts and may slow the program down (even in fully typed code).
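As a rough C# sketch of what a run-time check at a typed-untyped boundary can look like (this only illustrates the general idea using dynamic, not the exact mechanism the cited research measured):

using System;

class Program
{
    // "Untyped" code: the static type is dynamic, so anything could come back.
    static dynamic ReadConfigValue() => 42;

    static void Main()
    {
        dynamic raw = ReadConfigValue();

        // The typed-untyped boundary: a run-time check happens here. If raw
        // were not convertible to int, this cast would throw at run time
        // instead of being rejected at compile time.
        int port = (int)raw;

        Console.WriteLine(port); // 42
    }
}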
Is Raku's run-time enforcement strategy similar to that of C# and concrete types in StrongScript, or does it have its own set of strategies for avoiding the obvious performance issues of Typed Racket and Reticulated Python? Does it have a sound gradual type system?
Raku mandates that type constraints written into the program are enforced at runtime at the latest. How that promise is kept is up to the compiler and runtime implementer. I'll discuss how the Rakudo (compiler) and MoarVM (runtime) pairing does it, because that's what I've worked on.
The initial compilation itself does rather little in terms of analysis to eliminate type checks, and thus the bytecode we produce has a lot of type checks in it. The bet being made here is that analysis takes time, only some of the code will actually find itself on a hot path (or for very short scripts, there is no hot path), so we might as well leave it to the VM to figure out what's hot, and then focus on those bits.
The VM does the typical profiling a modern runtime does, not only recording what code is hot, but also recording statistics on parameter types, return types, lexical types, and so forth. Despite the amount of potential dynamism that could occur, in a given application the reality is that a huge amount of code is monomorphic (only ever sees one type, or for a routine, one argument type tuple). Another bunch is polymorphic (sees a few different types), and a comparatively tiny amount is megamorphic (loads of types).
Based on the data it gets, the runtime produces specializations: versions of the code compiled based on assumptions about what exact types will show up. Guarding against exact types is cheaper than having to care for subtyping relations and so forth. So at this point we've got a version of the code where we have some cheap preconditions up front, and we've used them to eliminate the more costly type checks (as well as some extra guards scattered through the code replacing other type checks). However, this isn't really free...yet.
When calls are made, one of two things can happen:
For small callees, inlining takes place. We inline a specialization of the callee. If the knowledge of types in the caller is already sufficient to prove the type assumptions - which it often is - then there's no need for any guarding. Essentially, type checks in the callee just became free. We can inline multiple levels deep. Furthermore, inlining lets us trace data flows through the callee, which may let us eliminate further guards, for example about return value types in the callee.
For larger callees, we can perform specialization linking - that is, calling a specialization directly and bypassing its guards, because we can use the type knowledge in the caller to prove we're meeting the guard assumptions. Again, the callee parameter type checks thus become free.
But what about type-y things that aren't calls, such as return value type checks and assignments? We compile those as calls too, so we can re-use the same machinery. For example, a return type check, in the case it's monomorphic (often), turns into a guard + a call to the identity function, and whenever we can prove the guard, that just turns into the identity function, which is a trivial inline.
There's more to come yet. Of note:
The mechanisms I've described above are built around various kinds of caches and guard trees and it's not all quite so beautiful as I've made it sound. Sometimes one needs to build ugly to learn enough to know how to build nice. Thankfully, a current bunch of work is folding all of these learnings into a new, unified, guard and dispatch mechanism, which will also take on various aspects of the language that are very poorly optimized today. That's due to land in releases within a couple of months.
The current runtime already does some very limited escape analysis and scalar replacement. This means that it can trace data flows into short-lived objects, and thus find even more type checks to eliminate (on top of having eliminated the memory allocations). There's work underway to make it more powerful, providing partial escape analysis, transitive analysis in order to scalar replace entire object graphs and thus be able to trace data flows, and so types, through them.
Last year, a paper titled Transient typechecks are (almost) free was published. It's not about Raku/Rakudo/MoarVM at all, but it's the closest description I've seen in academic literature to what we're doing. That was the first time I realized that maybe we are doing something kinda innovative in this area. :-)
Now that jnthn has written an authoritative overview of where things are at for Rakudo and MoarVM as of 2020, I feel OK publishing what amounts to a non-expert's write-up of some hand-wavy historical notes covering 2000 through 2019 that may be of interest to some readers.
My notes are organized to respond to excerpts from your question:
The performance penalties for types/constraints in Raku?
There aren't supposed to be penalties, but rather the reverse. That is to say Larry Wall wrote, in an early (2001) design doc:
more performance and safety as you give it more type information to work with
(This was 4 years before the term "gradual typing" was introduced at a 2005 academic conference.)
So his intent was that if a dev added a suitable type, the program ran either safer, or faster/leaner, or both.
(And/or was able to be used in interop with foreign languages: "Besides performance and safety, one other place where type information is useful is in writing interfaces to other languages.". A decade later he was saying that the #1 and #2 reasons for types were multiple dispatch and documentation.)
I don't know of any systematic effort to measure the degree to which Rakudo delivers on the design intent that types never slow code down and predictably speed it up if they're native types.
In addition, Rakudo is still relatively rapidly changing, with an overall annual performance improvement in the 2-3x range stretching back a decade.
(While Rakudo is 15 years old, it's been developed as the Raku language has evolved alongside it -- finally settling down in the last few years -- and the overall phasing of Rakudo's development has been a deliberate 1-2-3 of "Make it work, Make it work right, Make it work fast", with the latter only really beginning to kick in in recent years.)
As far as I know, some gradually typed languages (like Typed Racket and Reticulated Python) have suffered from serious performance issues due to their strategy for enforcing type-system soundness.
Gradual Typing from Theory to Practice (2019) summarized a 2015 paper that said:
The first systematic effort to measure [soundness costs] ... revealed substantial performance problems ...
... (presumably the ones you've been reading about) ....
[and that] performance can be significantly improved using JIT compilers, nominal types, representation improvements, and custom-built compilers, among others ...
Now compare their above recipe for performance with the characteristics of Rakudo and Raku:
Rakudo is a 15 year old custom-built compiler with several backends including the custom MoarVM backend with an x86 JIT.
The Raku language has a (gradual) nominal type system.
The Raku language supports representation polymorphism. This is like the mother of all representation improvements, not in the sense of being one itself, but rather in the sense that it abstracts representation from structure, so improvements are possible with the freedom that representation polymorphism brings.
There are other potential type system related contributions to performance; eg I expect native arrays (including multi-dimensional; sparse; etc.) to one day be a significant contributor.
On the other hand, the concrete types in StrongScript perform well thanks to relatively inexpensive nominal subtype tests
I note jnthn's comment:
Guarding against exact types is cheaper than having to care for subtyping relations and so forth
My guess is that the jury will be out for about another 5 years or so on whether Rakudo is delivering, or will one day deliver, sufficient performance to make its gradual typing generally attractive.
And perhaps one juror (hi Nile) will be the first to draw some tentative conclusions about how Raku(do) compares to other gradually typed languages in the next year or so?
Soundness
Does it have a sound gradual type system?
In the sense there's a mathematical treatment? I'm 99% sure the answer is no.
In the sense that it's thought to be sound? Where the only presumed guarantee is memory safety? I think so. Anything more than that? Good question.
All I can say is that afaik Raku's type system was developed by hackers like Larry Wall and Audrey Tang. (cf her 2005 notes on type inference.)
I've always wondered how the dependencies are managed from a programming language to its libraries. Take for example C#. When I was beginning to learn about computing, I would assume (wrongly as it turns out) that the language itself is designed independently of the class libraries that would eventually become available for it. That is, the set of language keywords (such as for, class or throw) plus the syntax and semantics are defined first, and libraries that can be used from the language are developed separately. The specific classes in those libraries, I used to think, should not have any impact on the design of the language.
But that doesn't work, or at least not all the time. Consider throw. The C# compiler makes sure that the expression following throw resolves to an exception type. Exception is a class in a library, and as such it should not be special at all. It would be a class like any other, except that the C# compiler assigns it special semantics. That is all very well, but my conclusion is that the design of the language does depend on the existence and behaviour of specific elements in the class libraries.
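For instance (a minimal sketch of that coupling; the second throw is commented out because the compiler rejects it):

using System;

class Program
{
    static void Main()
    {
        try
        {
            // Fine: InvalidOperationException derives from System.Exception.
            throw new InvalidOperationException("something went wrong");
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

        // Does not compile: the thrown expression must be convertible to
        // System.Exception, so the compiler has to know about that library type.
        // throw "something went wrong";
    }
}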
Additionally, I wonder how this dependency is managed. If I were to design a new programming language, what techniques would I use to map the semantics of throw to the very particular class that is Exception?
So my questions are two:
Am I correct in thinking that language design is tightly coupled to that of its base class libraries?
How are these dependencies managed from within the compiler and run-time? What techniques are used?
Thank you.
EDIT. Thanks to those who pointed out that my second question is very vague. I agree. What I am trying to learn is what kind of references the compiler stores about the types it needs. For example, does it find the types by some kind of unique id? What happens when a new version of the compiler or the class libraries is released? I am aware that this is still pretty vague, and I don't expect a precise, single-paragraph answer; rather, pointers to literature or blog posts are most welcome.
What I am trying to learn is what kind of references the compiler stores about the types it needs. For example, does it find the types by some kind of unique id?
Obviously the C# compiler maintains an internal database of all the types available to it in both source code and metadata; this is why a compiler is called a "compiler" -- it compiles a collection of data about the sources and libraries.
When the C# compiler needs to, say, check whether an expression that is thrown is derived from or identical to System.Exception it pretends to do a global namespace lookup on System, and then it does a lookup on Exception, finds the class, and then compares the resulting class information to the type that was deduced for the expression.
The compiler team uses this technique because that way it works no matter whether we are compiling your source code and System.Exception is in metadata, or if we are compiling mscorlib itself and System.Exception is in source.
Of course as a performance optimization the compiler actually has a list of "known types" and populates that list early so that it does not have to undergo the expense of doing the lookup every time. As you can imagine, the number of times you'd have to look up the built-in types is extremely large. Once the list is populated then the type information for System.Exception can be just read out of the list without having to do the lookup.
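A hypothetical sketch of that "known types" idea, purely for illustration (these names and data structures are invented; they are not the C# compiler's actual internals):

using System;
using System.Collections.Generic;

// Placeholder for whatever the compiler uses to describe a type internally.
class TypeSymbol { }

class KnownTypeCache
{
    private readonly Dictionary<string, TypeSymbol> _cache =
        new Dictionary<string, TypeSymbol>();
    private readonly Func<string, TypeSymbol> _globalLookup;

    public KnownTypeCache(Func<string, TypeSymbol> globalLookup)
    {
        // The expensive namespace-by-namespace lookup described above.
        _globalLookup = globalLookup;
    }

    public TypeSymbol Get(string fullyQualifiedName)
    {
        if (!_cache.TryGetValue(fullyQualifiedName, out TypeSymbol symbol))
        {
            symbol = _globalLookup(fullyQualifiedName); // pay the lookup cost once
            _cache[fullyQualifiedName] = symbol;
        }
        return symbol;
    }
}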
What happens when a new version of the compiler or the class libraries is released?
What happens is: a whole bunch of developers, testers, managers, designers, writers and educators get together and spend a few million man-hours making sure that the compiler and the class libraries all work before they're released.
This question is, again, impossibly vague. What has to happen to make a new compiler release? A lot of work, that's what has to happen.
I am aware that this is still pretty vague, and I don't expect a precise, single-paragraph answer; rather, pointers to literature or blog posts are most welcome.
I write a blog about, among other things, the design of the C# language and its compiler. It's at http://ericlippert.com.
I would assume (perhaps wrongly) that the language itself is designed independently of the class libraries that would eventually become available for it.
Your assumption is, in the case of C#, completely wrong. C# 1.0, the CLR 1.0 and the .NET Framework 1.0 were all designed together. As the language, runtime and framework evolved, the designers of each worked very closely together to ensure that the right resources were allocated so that each could ship new features on time.
I do not understand where your completely false assumption comes from; that sounds like a highly inefficient way to write a high-level language and a great way to miss your deadlines.
I can see writing a language like C, which is basically a more pleasant syntax for assembler, without a library. But how would you possibly write, say, async-await without having the guy designing Task<T> in the room with you? It seems like an exercise in frustration.
Am I correct in thinking that language design is tightly coupled to that of its base class libraries?
In the case of C#, yes, absolutely. There are dozens of types that the C# language assumes are available and as-documented in order to work correctly.
I once spent a very frustrating hour with a developer who was having some completely crazy problem with a foreach loop before I discovered that he had written his own IEnumerable<T> that had slightly different methods than the real IEnumerable<T>. The solution to his problem: don't do that.
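For context, foreach binds against a pattern roughly like the following, which is why an IEnumerable<T> lookalike with slightly different members breaks the loop (a simplified sketch, not the compiler's exact expansion):

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        IEnumerable<int> numbers = new List<int> { 1, 2, 3 };

        // foreach (var n in numbers) Console.WriteLine(n);
        // is approximately equivalent to:
        using (IEnumerator<int> e = numbers.GetEnumerator())
        {
            while (e.MoveNext())       // the compiler relies on MoveNext()...
            {
                int n = e.Current;     // ...and on the Current property existing
                Console.WriteLine(n);
            }
        }
    }
}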
How are these dependencies managed from within the compiler and run-time?
I don't know how to even begin to answer this impossibly vague question.
All (practical) programming languages have a minimum number of required functions. For modern "OO" languages, this also includes a minimum number of required types.
If the type is required in the Language Specification, then it is required - regardless of how it is packaged.
Conversely, not all of the BCL is required to have a valid C# implementation. This is because not all of the BCL types are required by the Language Specification. For instance, System.Exception (see #16.2) and NullReferenceException are required, but FileNotFoundException is not required to implement the C# Language.
Note that even though the specification provides minimal definitions for base types (e.g. System.String), it does not define the commonly accepted methods (e.g. String.Replace). That is, almost all of the BCL is outside the scope of the Language Specification1.
.. but my conclusion is that the design of the language does depend on the existence and behaviour of specific elements in the class libraries.
I agree entirely and have included examples (and limits of such definitions) above.
.. If I were to design a new programming language, what techniques would I use to map the semantics of "throw" to the very particular class that is "Exception"?
I would not look primarily at the C# specification, but rather at the Common Language Infrastructure specification. This new language should, for practical reasons, be designed to interoperate with existing CLI/CLR languages, but it does not necessarily need to "be C#".
1 The CLI (and associated references) do define the requirements of a minimal BCL. So if it is taken that a valid C# implementation must conform to (or may assume) the CLI then there are many other types to consider that are not mentioned in the C# specification itself.
Unfortunately, I do not have sufficient knowledge of the 2nd (and more interesting) question.
My impression is that, in languages like C# and Ada, application source code is portable across compilers/implementations, but standard library source code is not.
From other questions I can see that locking on types is a bad idea. But it is possible to do, so I was wondering: if it is such a bad thing to do, why is it allowed? I am assuming there must be good use cases for it, so could someone let me know what they are, please?
It's nearly always a bad idea:
Anyone can lock on the types from anywhere in the code, so you have no way to be sure that you won't get a deadlock without looking through all the code.
Locking on a type can even cause deadlocks across AppDomains. See Joe Duffy's article: Don't lock on marshal-by-bleed objects.
It's allowed because there are almost no restrictions on what you can use as your lock object. In other words, it wasn't specifically allowed - it's just that there isn't any code in the .NET framework that disallows it.
The book "Debugging Microsoft .NET Applications" has source code for an FxCop rule DoNotLockOnTypes that warns you if you try to do this. (thanks to Christian.K)
To understand why it is a bad idea in general have a look at the article Don't lock type objects.
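A minimal sketch of the hazard and of the usual alternative (the type and member names here are illustrative only):

class Counter
{
    private int _count;

    // Risky: typeof(Counter) is a single, publicly reachable object, so any
    // code anywhere (even in another assembly) can lock on it and deadlock you.
    public void IncrementRisky()
    {
        lock (typeof(Counter)) { _count++; }
    }

    // Usual alternative: a private lock object that nobody else can reach.
    private readonly object _sync = new object();

    public void Increment()
    {
        lock (_sync) { _count++; }
    }
}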
It is allowed because the language/framework designers decided to be able to take lock on anything that derives from System.Object. Nobody can prevent it because System.Type derives from System.Object (as every other .NET type).
Take this signature:
void Foo(object o)
How could a compiler enforce that o is not a System.Type? You could of course check it at runtime, but this would have a performance impact.
And of course there might be super-exotic situations where one might need to lock on a type. Maybe the CLR does it internally.
Many bad ideas find their way into programming languages because no language designer can foretell the future. Any language created by humans will have warts.
Some examples:
Hejlsberg wished he had added non-nullable class references to C# (original article: The A-Z of Programming Languages: C# - Computerworld). (I wish he had bitten off the const problem as well.)
The C++ committee screwed up with valarray, and export, among numerous other minor and major regrets.
Java's generics were a botch-job (OMG, type erasure!) designed to avoid changing the VM, and by the time they realised the VM had to change anyway, it was too late to do the necessary rework.
Python's scoping rules are a constant irritant that numerous attempts to improve it haven't really helped much (a little, but not much).
I am a C++ programmer moving into C#. I have worked with the language for a month now and understand many concepts.
What are some surprises I may get while moving from C++ to C#? I was warned about destructors not being executed as I intended. Recently I tried to do something with generics that would use T as the base class. That didn't work. I also had another problem, but I'll chalk that up to inexperience in C#. I was also surprised that my app was eating RAM; then I figured out I needed to call .Dispose() in one function (I thought it would clean up like a smart pointer).
What else may surprise me?
Please no language bashing. I doubt anyone will but just in case...
Fortunately, Microsoft have some of that info here: C# for C++ Developers.
The struct vs class difference is another biggie for those coming from C++.
I think you've covered the main one. You should read up on garbage collection, understand why there are no destructors as such, figure out the IDisposable pattern (which kind of replaces destructors). I'd say that was the big one.
The only other thing I would say is to warn you that C# and the .NET Base Class Library are pretty big; to get the most out of them there is a lot to learn... Once you have covered the basics of garbage collection and the type system you'll want to look at LINQ, and you should take the time to explore the relevant libraries/frameworks for your area (e.g. WPF, WCF, ASP.NET, etc.). But it's all good. I moved from C++ to C# and would never go back; I find it way more productive (I'm not bashing C++, I do still dabble :-) )
Well, the languages are completely different, as I'm sure you've realized if you've worked with C# for any time. You don't have a powerful macro or template system in C# as you do in C++ (I realize C# has generics). As for memory, remember you aren't in a tightly controlled environment anymore. Expect to see a lot of memory usage in Task Manager and similar tools; this is normal. There are better, more fine-grained performance counters to see true memory usage. Also, you probably don't need to call Dispose as much as you might think (by the way, check out "using" blocks if you haven't already).
Another clear one is default construction: in C# this does not create a new Foo object:
Foo myFoo;
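Spelled out a bit more (a minimal sketch; Foo stands in for any class):

using System;

class Foo { }

class Demo
{
    static void Main()
    {
        Foo myFoo;              // declares a reference only; no Foo object is created
        Foo other = new Foo();  // the constructor actually runs here

        // Console.WriteLine(myFoo);  // would not compile: use of unassigned local variable
        Console.WriteLine(other);     // prints "Foo" (the default ToString)
    }
}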
You can't have anything like a "void pointer", unless you think of it as being like a reference of type object. Also, you need to think of properties as syntactic sugar for methods, not as the public members they look like in C++ syntax.
Make sure you understand "out" and "ref" parameters.
Obviously this not a large list, just a few "pointers" (no pun intended).
This is a rather big topic. A few thoughts:
C# is garbage collected. Doesn't mean you can stop paying attention about resource allocation, but in general you don't have to worry nearly as much about the most common resource: memory.
In C#, everything can be treated as an object. There are no C++-style "primitive" datatypes; even an int is a real type (System.Int32) that derives from object.
C# has generics, not templates. Templates are far richer and more complex than C#'s similar-looking generics, but generics still provide nearly all of the practical utility of templates, without many of the headaches.
C# has interfaces and single inheritance. Where you might look to multiple inheritance in C++, instead look to using interfaces or a different design pattern (e.g. strategy).
C# has delegates instead of function pointers. A delegate is basically just a typed function pointer. The use of delegates and delegate-relatives (lambda expressions, events, predicates, etc.) is very powerful and worth putting significant effort into studying.
C# supports yield return. This is very fundamental to the C# way of doing things. The most common form of iterating over some set is to use foreach. It's worth understanding how IEnumerable and iterators work (see the sketch after this list).
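A minimal sketch of an iterator and foreach working together (illustrative only):

using System;
using System.Collections.Generic;

class Program
{
    // An iterator method: the compiler builds the IEnumerable/IEnumerator
    // state machine for you; each yield return produces the next element.
    static IEnumerable<int> CountTo(int limit)
    {
        for (int i = 1; i <= limit; i++)
            yield return i;
    }

    static void Main()
    {
        foreach (int n in CountTo(3))
            Console.WriteLine(n); // 1, 2, 3
    }
}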
I made pretty much the same change some months ago (before that I made a change to Java, but I didn't really spend much time programming Java).
Here are some of the biggest traps I've come across:
Attribute vs. Variable vs. Setter
One of the biggest traps I stepped into was knowing whether you have to change an attribute, set a variable, or use a setter to set some aspect of a class.
IList vs. List vs. other collections
Know the difference between IList, List and all the other collections (IMO you can't really do much with an IList).
Generics do have their own pitfalls
And if you plan to use a lot of generics, maybe reading this helps you avoiding some of my errors:
Check if a class is derived from a generic class
But in general I'd say that the change went pretty painlessly.
Differences in the object model. For example, value and reference types are separated by definition, not by how they are instantiated. This has some surprises, e.g.
myWinForm.Size.Width = 100;
will not change the width (in fact the compiler rejects the assignment, because Size is a value type and the property returns a copy); you need to create a new Size value and assign it back.
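A sketch of the workaround, assuming a Windows Forms Form and System.Drawing.Size:

using System.Drawing;
using System.Windows.Forms;

class Demo
{
    static void Resize(Form myWinForm)
    {
        // myWinForm.Size returns a copy of the Size value, so writing to that
        // copy's Width would change nothing; the compiler rejects it outright.
        // myWinForm.Size.Width = 100;

        // Assign a whole new Size value instead:
        myWinForm.Size = new Size(100, myWinForm.Size.Height);
    }
}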
Some things that I have not seen mentioned, that are not available in C++ and may be a bit surprising, are attributes and reflection.
Attributes as such do not give you full-fledged AOP. However, they do allow you to solve a bunch of problems in a way that is very different from how you would solve them in C++.
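A minimal sketch of an attribute being declared, applied, and read back via reflection (the attribute name and class are made up for illustration):

using System;
using System.Reflection;

// A custom attribute: declarative metadata attached to code elements.
[AttributeUsage(AttributeTargets.Class)]
class AuthorAttribute : Attribute
{
    public string Name { get; }
    public AuthorAttribute(string name) => Name = name;
}

[Author("Alice")]
class ReportGenerator { }

class Program
{
    static void Main()
    {
        // Reflection reads the metadata back at run time.
        var attr = typeof(ReportGenerator).GetCustomAttribute<AuthorAttribute>();
        Console.WriteLine(attr?.Name); // Alice
    }
}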
When we talk about the .NET world, the CLR is what everything we do depends on.
What is the minimum knowledge of the CLR a .NET programmer must have to be a good programmer?
Can you give me one/many you think is/are the most important subjects:
GC?, AppDomain?, Threads?, Processes?, Assemblies/Fusion?
I will very much appreciate it if you post links to articles, blogs, books, or other resources on the topic where more information can be found.
Update: I noticed from some of the comments that my question was not clear to some. When I say CLR I don't mean the .NET Framework. It is NOT about memorizing .NET libraries; it is rather about understanding how the execution environment (in which those libraries live at runtime) works.
My question was directly inspired by John Robbins, the author of the "Debugging Applications for Microsoft® .NET" book (which I recommend) and a colleague of the here-cited Jeffrey Richter at Wintellect. In one of the introductory chapters he says that "...any .NET programmer should know what probing is and how assemblies are loaded into the runtime". Do you think there are other such things?
Last Update: After having read the first 5 chapters of "CLR via C#", I must say to anyone reading this: if you haven't already, read this book!
Most of those are way deeper than the kind of thing many developers fall down on. The most misunderstood (and important) aspects, in my experience:
Value types vs reference types
Variables vs objects
Pass by ref vs pass by value
Delegates and events
Distinguishing between language, runtime and framework
Boxing (a short sketch appears at the end of this answer)
Garbage collection
On the "variables vs objects" front, here are three statements about the code
string x = "hello";
(Very bad) x is a string with 5 letters
(Slightly better) x is a reference to a string with 5 letters
(Correct) The value of x is a reference to a string with 5 letters
Obviously the first two are okay in "casual" conversation, but only if everyone involved understands the real situation.
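On the boxing item above, a minimal sketch of what gets boxed and why it surprises people:

using System;

class Program
{
    static void Main()
    {
        int i = 42;

        object boxed = i;           // boxing: the int's value is copied into a heap object
        i = 7;                      // changing the original does not touch the box

        int unboxed = (int)boxed;   // unboxing: copies the value back out
        Console.WriteLine(unboxed); // 42

        // Unboxing to the wrong type throws at run time:
        // long wrong = (long)boxed;   // InvalidCastException
    }
}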
A great programmer cannot be measured by the quantity of things he knows about the CLR. Sure, it's a nice beginning, but he must also know OOP/D/A and a lot of other things like design patterns, best practices, O/RM concepts, etc.
The fact is, I'd say a "great .NET programmer" doesn't necessarily need to know much about the CLR at all, as long as he has great knowledge of general programming theory and concepts...
I would rather hire a "great Java developer" with great general knowledge and experience in Java for a .NET job than a "master" in .NET who has little experience and thinks O/RM is a stock ticker and stored procedures are a great way to "abstract away the database"...
I've seen professional teachers in .NET completely fail at doing really simple things without breaking their backs, due to a lack of "general knowledge", while at the same time they "know everything" there is to know about .NET and the CLR...
Updated: read the relevant parts of the book CLR via C# by Jeffrey Richter... this book can be a good reference.
You should know about memory management and delegates.
Jon's answer seems pretty complete to me (plus delegates), but I think what fundamentally separates a good programmer from an average one is answering the why questions rather than the how. It's great to know how garbage collection works and how value types and reference types work, but it's a whole other level to understand when to use a value type vs. a reference type. It's the difference between speaking in a language vs. giving a speech in a language (it's all about how we apply the knowledge we have and how we arrive at those decisions).
Jon's answer is good. Those are all fairly basic but important areas that a lot of developers do not have a good understanding of. I think knowing the difference between value and reference types ties in to a basic understanding of how the GC in .NET behaves, but, more importantly, a good understanding of the Dispose pattern is important.
The rest of the areas you mention are either very deep knowledge about the CLR itself or more advanced concepts that aren't widely used (yet). [.NET 4.0 will start to change some of that with the introduction of the parallel extensions and MEF.]
One thing that can be really tricky to grasp is deferred execution and the like.
How do you explain how a method that returns an IEnumerable works? What does a delegate really do? Things like that.
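One way to make both points concrete is a minimal sketch of deferred execution, where the delegate is just the lambda handed to Where and nothing runs until the result is enumerated:

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4 };

        // Nothing is filtered yet: Where only stores the delegate (the lambda)
        // and a reference to the source sequence. Execution is deferred.
        IEnumerable<int> evens = numbers.Where(n =>
        {
            Console.WriteLine($"testing {n}");
            return n % 2 == 0;
        });

        Console.WriteLine("query defined, nothing tested yet");

        // The delegate actually runs now, once per element, as we enumerate.
        foreach (int n in evens)
            Console.WriteLine($"got {n}");
    }
}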