I have a C++ background and I do fully understand and agree with the answers to this question: Why is “using namespace std;” considered bad practice?
So I'm astonished that, having some experience with C# now, I see the exact opposite there:
using Some.Namespace; is literally used everywhere. Whenever you start using a type, you add a using directive for its namespace first (if it isn't there already). I cannot recall having seen a .cs file that didn't start with using System; using System.Collections.Generic; using X.Y.Z; etc.
In fact, if you add a new file via the Visual Studio wizard, it automatically adds some using directives there, even though you may not need them at all. So, while in the C++ community you get basically lynched, C# even encourages doing this. At least this is how it appears to me.
Now, I do understand that using directives in C# and C++ are not exactly the same thing. Also, I do understand that one of the nastiest things you can do with using namespace in C++, namely putting it in a header file, has no equivalently nasty counterpart in C# due to the lack of a concept of header files and #include.
However, despite their differences, using directives in C# and C++ serve the same purpose, which is only having to type SomeType all the time, rather than the much longer Some.Namespace.SomeType (in C++ with :: instead of .). And with this same purpose, also the danger appears to be the same to me: naming collisions.
In the best case this results in a compilation error, so you "only" have to fix it. In the worst case, it still compiles and the code silently does different things than you intended it to do. So my question is: Why (apparently) are using directives considered so unequally bad in C# and C++?
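To illustrate the compile-error case, here is a minimal sketch (Timer happens to exist in both System.Threading and System.Timers, so importing both namespaces makes the bare name ambiguous):

using System;
using System.Threading;
using System.Timers;

class CollisionDemo
{
    static void Main()
    {
        // error CS0104: 'Timer' is an ambiguous reference between
        // 'System.Threading.Timer' and 'System.Timers.Timer'
        //Timer t;

        // The collision has to be resolved with a qualified name (or a using alias):
        System.Threading.Timer timer = new System.Threading.Timer(_ => { }, null, 0, 1000);
        Console.WriteLine(timer != null);
    }
}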
Some ideas of an answer that I have (none of these really satisfy me, though):
Namespaces tend to be much longer and much more nested in C# than in C++ (std vs. System.Collections.Generic). So, there is more desire and more gain in de-noising the code this way. But even if this is true, this argument only applies when we look at the standard namespaces. Custom ones can have any short name you like, in both C# and C++.
Namespaces appear to be much more "fine-grained" in C# than in C++. As an example, in C++ the entire standard library is contained in std (plus some tiny nested namespaces like chrono), while in C# you have System.IO, System.Threading, System.Text, etc. So the risk of naming collisions is smaller. However, this is only a gut feeling; I didn't actually count how many names you "import" with using namespace std and using System. And again, even if this is true, this argument applies only when looking at the standard namespaces. Your own ones can be designed as fine-grained as you wish, in both C# and C++.
Are there more arguments? I'm especially interested in actual hard facts (if there are any) and not so much in opinions.
Why is “using System;” not considered bad practice?
"using System;" is not universally not considered a bad practice. See for example: Why would you not use the 'using' directive in C#?
But it may be true that it is not considered quite as bad as using namespace std. Probably because:
C# does not have header files. It is uncommon to "include" one C# source file into another using a pre-processor.
The std namespace is nearly flat, i.e. almost all standard library functions, types and variables are in it (there are a few exceptions, such as the filesystem sub-namespace). It contains a very, very high number of identifiers. To my understanding, System contains far fewer names and instead has more sub-namespaces.
In C#, there are no global functions or variables, so the number of global identifiers is typically quite small, in contrast to C++, which does have those. Furthermore, in C++ it is typical to use C libraries (often indirectly), which don't have namespaces and therefore place all their names into the global namespace.
As far as I know, C# has no argument dependent lookup. ADL in conjunction with name hiding, overloading etc. can produce cases where some programs are not affected by a name conflict, while others are subtly affected, and catching all corner cases is not feasible with testing.
Because of these differences, "using System;" has a lower chance of name conflicts than using namespace std.
Also, namespace "importing" is in a way, a self-perpetuating convention: If it is conventional to import a standard namespace, then programmers will conventionally try to avoid choosing names from that namespace for their own identifiers, which helps to reduce problems with such convention.
If such an import is considered a bad practice, then programmers will be less likely to even attempt such avoidance of conflicts with imported namespaces. As such, conventions tend to get polarised either for or against the practice, even if weights of arguments between the choices were originally subtle.
However, despite their differences, using directives in C# and C++ serve the same purpose, which is only having to type SomeType all the time, rather than the much longer Some.Namespace.SomeType (in C++ with :: instead of .). And with this same purpose, also the danger appears to be the same to me: naming collisions.
Yes, but you didn't export that danger (read: forcing others to deal with it), because of:
Now, I do understand that using directives in C# and C++ are not exactly the same thing. Also, I do understand that one of the nastiest things you can do with using namespace in C++, namely putting it in a header file, has no equivalent in C# due to the lack of a concept of header files and #include.
So it's rather a different category of thing.
Also, C++ is not "designed" to be developed in an IDE the same way C# is. C# is basically always written in Visual Studio with its Intellisense and whatnot. It's designed to be used that way, by the people who created it. Regardless of how many people use an IDE to develop in C++, it is not designed with that use case as an overwhelming concern.
Namespaces appear to be much more "fine-grained" in C# than in C++.
Yes, that too. using namespace std and using System.Collections.Generic are incomparable.
So don't compare them!
Related
I know that, in general, you should never put something like "using namespace std;" in a C++ header, since it will propagate to any file that includes that header.
I'm wondering if this rule of thumb also applies to .NET namespaces in a C++/CLI program. I'm writing a wrapper for a C++ library, to be used in C#. The idea of "using namespace _____" in the CLI headers seems appealing because these namespaces go quite deep. Having "System::Collections::Generic::List" in front of every list in my function prototypes, for example, feels unnecessarily verbose and kills readability.
I think that this might be okay, because (and I'm not positive on this point, I could be wrong) I don't believe "using System::Collections::Generic" propagates across C# files the way it does in C++ headers. I don't expect it to negatively impact anyone using my CLI library in a C# project.
I couldn't really find any information on what the proper custom is in CLI-world if there is one, or if I'm wrong about the downsides - so I appreciate any advice!
I've always wondered how the dependencies are managed from a programming language to its libraries. Take for example C#. When I was beginning to learn about computing, I would assume (wrongly as it turns out) that the language itself is designed independently of the class libraries that would eventually become available for it. That is, the set of language keywords (such as for, class or throw) plus the syntax and semantics are defined first, and libraries that can be used from the language are developed separately. The specific classes in those libraries, I used to think, should not have any impact on the design of the language.
But that doesn't work, or not all the time. Consider throw. The C# compiler makes sure that the expression following throw resolves to an exception type. Exception is a class in a library, and as such it should not be special at all. It would be a class as any other, except that the C# compiler assigns it that special semantics. That is very good, but my conclusion is that the design of the language does depend on the existence and behaviour of specific elements in the class libraries.
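To make that coupling concrete, a minimal sketch (the compiler itself enforces the rule):

class ThrowDemo
{
    static void Main()
    {
        try { Fail(); }
        catch (System.InvalidOperationException e) { System.Console.WriteLine(e.Message); }
    }

    static void Fail()
    {
        // error CS0155: The type caught or thrown must be derived from System.Exception
        //throw "something went wrong";

        // Only expressions whose type derives from System.Exception compile:
        throw new System.InvalidOperationException("something went wrong");
    }
}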
Additionally, I wonder how this dependency is managed. If I were to design a new programming language, what techniques would I use to map the semantics of throw to the very particular class that is Exception?
So my questions are two:
Am I correct in thinking that language design is tightly coupled to that of its base class libraries?
How are these dependencies managed from within the compiler and run-time? What techniques are used?
Thank you.
EDIT. Thanks to those who pointed out that my second question is very vague. I agree. What I am trying to learn is what kind of references the compiler stores about the types it needs. For example, does it find the types by some kind of unique id? What happens when a new version of the compiler or the class libraries is released? I am aware that this is still pretty vague, and I don't expect a precise, single-paragraph answer; rather, pointers to literature or blog posts are most welcome.
What I am trying to learn is what kind of references the compiler stores about the types it needs. For example, does it find the types by some kind of unique id?
Obviously the C# compiler maintains an internal database of all the types available to it in both source code and metadata; this is why a compiler is called a "compiler" -- it compiles a collection of data about the sources and libraries.
When the C# compiler needs to, say, check whether an expression that is thrown is derived from or identical to System.Exception it pretends to do a global namespace lookup on System, and then it does a lookup on Exception, finds the class, and then compares the resulting class information to the type that was deduced for the expression.
The compiler team uses this technique because that way it works no matter whether we are compiling your source code and System.Exception is in metadata, or if we are compiling mscorlib itself and System.Exception is in source.
Of course as a performance optimization the compiler actually has a list of "known types" and populates that list early so that it does not have to undergo the expense of doing the lookup every time. As you can imagine, the number of times you'd have to look up the built-in types is extremely large. Once the list is populated then the type information for System.Exception can be just read out of the list without having to do the lookup.
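As a rough, present-day illustration only (this uses the Roslyn API, which post-dates this answer and is not how the compiler of that era was implemented), a compilation object can resolve such a well-known type by its metadata name, whether the type lives in a referenced assembly or in the source being compiled:

using System;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class KnownTypeLookup
{
    static void Main()
    {
        // Create a compilation that references the assembly containing System.Object.
        var compilation = CSharpCompilation.Create(
            "Demo",
            references: new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });

        // Resolve System.Exception once; the resulting symbol can then be compared
        // against the type deduced for any thrown expression.
        INamedTypeSymbol exceptionType = compilation.GetTypeByMetadataName("System.Exception");
        Console.WriteLine(exceptionType);
    }
}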
What happens when a new version of the compiler or the class libraries is released?
What happens is: a whole bunch of developers, testers, managers, designers, writers and educators get together and spend a few million man-hours making sure that the compiler and the class libraries all work before they're released.
This question is, again, impossibly vague. What has to happen to make a new compiler release? A lot of work, that's what has to happen.
I am aware that this is still pretty vague, and I don't expect a precise, single-paragraph answer; rather, pointers to literature or blog posts are most welcome.
I write a blog about, among other things, the design of the C# language and its compiler. It's at http://ericlippert.com.
I would assume (perhaps wrongly) that the language itself is designed independently of the class libraries that would eventually become available for it.
Your assumption is, in the case of C#, completely wrong. C# 1.0, the CLR 1.0 and the .NET Framework 1.0 were all designed together. As the language, runtime and framework evolved, the designers of each worked very closely together to ensure that the right resources were allocated so that each could ship new features on time.
I do not understand where your completely false assumption comes from; that sounds like a highly inefficient way to write a high-level language and a great way to miss your deadlines.
I can see writing a language like C, which is basically a more pleasant syntax for assembler, without a library. But how would you possibly write, say, async-await without having the guy designing Task<T> in the room with you? It seems like an exercise in frustration.
Am I correct in thinking that language design is tightly coupled to that of its base class libraries?
In the case of C#, yes, absolutely. There are dozens of types that the C# language assumes are available and as-documented in order to work correctly.
I once spent a very frustrating hour with a developer who was having some completely crazy problem with a foreach loop before I discovered that he had written his own IEnumerable<T> that had slightly different methods than the real IEnumerable<T>. The solution to his problem: don't do that.
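For reference, a minimal sketch of the pattern foreach actually compiles against (the names below are made up): a GetEnumerator() method whose return type exposes a bool MoveNext() and a Current property. Deviate from that shape and foreach breaks in exactly these confusing ways:

using System;

// A hand-rolled type that satisfies the enumerator pattern foreach expects.
class Countdown
{
    public Enumerator GetEnumerator() { return new Enumerator(3); }

    public struct Enumerator
    {
        private int _current;
        public Enumerator(int start) { _current = start + 1; }
        public int Current { get { return _current; } }
        public bool MoveNext() { return --_current >= 0; }
    }
}

class ForeachDemo
{
    static void Main()
    {
        foreach (int i in new Countdown())
            Console.WriteLine(i); // prints 3, 2, 1, 0
    }
}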
How are these dependencies managed from within the compiler and run-time?
I don't know how to even begin to answer this impossibly vague question.
All (practical) programming languages have a minimum number of required functions. For modern "OO" languages, this also includes a minimum number of required types.
If the type is required in the Language Specification, then it is required - regardless of how it is packaged.
Conversely, not all of the BCL is required to have a valid C# implementation. This is because not all of the BCL types are required by the Language Specification. For instance, System.Exception (see #16.2) and NullReferenceException are required, but FileNotFoundException is not required to implement the C# Language.
Note that even though the specification provides minimal definitions for base types (e.g. System.String), it does not define the commonly-accepted methods (e.g. String.Replace). That is, almost all of the BCL is outside the scope of the Language Specification1.
.. but my conclusion is that the design of the language does depend on the existence and behaviour of specific elements in the class libraries.
I agree entirely and have included examples (and limits of such definitions) above.
.. If I were to design a new programming language, what techniques would I use to map the semantics of "throw" to the very particular class that is "Exception"?
I would not look primarily at the C# specification, but rather I would look at the Common Language Infrastructure specification. This new language should, for practical reasons, be designed to interoperate with existing CLI/CLR languages, but does not necessarily need to "be C#".
1 The CLI (and associated references) do define the requirements of a minimal BCL. So if it is taken that a valid C# implementation must conform to (or may assume) the CLI then there are many other types to consider that are not mentioned in the C# specification itself.
Unfortunately, I do not have sufficient knowledge of the 2nd (and more interesting) question.
My impression is that in languages like C# and Ada, application source code is portable, but standard library source code is not portable across compilers/implementations.
I am currently developing an application where you can create "programs" with it without writing source code, just click&play if you like.
Now the question is how do I generate an executable program from my data model. There are many possibilities but I am not sure which one is the best for me. I need to generate assemblies with classes and namespace and everything which can be part of the application.
CodeDOM class: I have heard of lots of limitations and bugs in this class. I need to create attributes on method parameters and return values. Is this supported?
Create C# source code programmatically and then call CompileAssemblyFromFile on it: This would work since I can generate any code I want and C# supports most CLR features. But wouldn't this be slow?
Use the reflection ILGenerator class: I think with this I can generate every possible .NET code. But I think this is much more complicated and error prone than the other approaches?
Are there other possible solutions?
EDIT:
The tool is general-purpose for developing applications; it is not restricted to a specific domain. I don't know if it can be considered a visual programming language. The user can create classes, methods, method calls, all kinds of expressions. It won't be very limiting, because you should be able to do most things which are allowed in real programming languages.
At the moment lots of things must still be written by the user as text, but the goal at the end is, that nearly everything can be clicked together.
You may find it rewarding to look at the Dynamic Language Runtime, which is more or less designed for creating high-level languages based on .NET.
It's perhaps also worth looking at some of the previous Stack Overflow threads on Domain Specific Languages, which contain some useful links to tools for working with DSLs. That sounds a little like what you are planning, although I'm still not absolutely clear from the question what exactly your aim is.
Most things "click and play" should be simple enough just to stick some pre-defined building-block objects together (probably using interfaces on the boundaries). Meaning: you might not need to do dynamic code generation - just "fake it". For example, using property-bag objects (like DataTable etc, although that isn't my first choice) for values, etc.
Another option for dynamic evaluation is the Expression class; especially in .NET 4.0, this is hugely versatile, and allows compilation to a delegate.
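A minimal sketch of that approach, assuming nothing beyond the standard System.Linq.Expressions API: build a tiny function as data, then compile it to a delegate at run time.

using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Build the expression tree for: x => x + 1
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression<Func<int, int>> addOne =
            Expression.Lambda<Func<int, int>>(Expression.Add(x, Expression.Constant(1)), x);

        // Compile it into a real delegate and call it.
        Func<int, int> compiled = addOne.Compile();
        Console.WriteLine(compiled(41)); // 42
    }
}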
Do the C# source generation and don't care about speed until it matters. The C# compiler is quite quick.
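For a rough idea of that route, a sketch using CodeDomProvider (this assumes the .NET Framework of the era, where the provider invokes the C# compiler behind the scenes):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class CodeDomDemo
{
    static void Main()
    {
        // Generated C# source held as a plain string.
        string source = @"
            public static class Generated
            {
                public static int AddOne(int x) { return x + 1; }
            }";

        using (var provider = new CSharpCodeProvider())
        {
            var parameters = new CompilerParameters { GenerateInMemory = true };
            CompilerResults results = provider.CompileAssemblyFromSource(parameters, source);

            // Load the freshly compiled method via reflection and call it.
            var addOne = results.CompiledAssembly.GetType("Generated").GetMethod("AddOne");
            Console.WriteLine(addOne.Invoke(null, new object[] { 41 })); // 42
        }
    }
}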
When I wrote a dynamic code generator, I relied heavily on System.Reflection.Emit.
Basically, you programmatically create dynamic assemblies and add new types to them. These types are constructed using the Emit constructs (properties, events, fields, etc.). When it comes to implementing methods, you'll have to use an ILGenerator to pump out MSIL op-codes into your method. That sounds super scary, but you can use a couple of tools to help:
A pre-built sample implementation
ILDasm to inspect the op-codes of the sample implementation.
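For a rough idea of what the Emit route looks like end to end, here is a minimal sketch on modern .NET (on the older Framework you would obtain the AssemblyBuilder via AppDomain.DefineDynamicAssembly instead); the generated method simply adds one to its argument:

using System;
using System.Reflection;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // Create a dynamic assembly, module and type to hold the generated method.
        var assembly = AssemblyBuilder.DefineDynamicAssembly(
            new AssemblyName("Dynamic"), AssemblyBuilderAccess.Run);
        var module = assembly.DefineDynamicModule("Main");
        var type = module.DefineType("Generated", TypeAttributes.Public);

        var method = type.DefineMethod(
            "AddOne",
            MethodAttributes.Public | MethodAttributes.Static,
            typeof(int), new[] { typeof(int) });

        // Emit the MSIL op-codes for: return arg0 + 1;
        ILGenerator il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldc_I4_1);
        il.Emit(OpCodes.Add);
        il.Emit(OpCodes.Ret);

        Type generated = type.CreateType();
        object result = generated.GetMethod("AddOne").Invoke(null, new object[] { 41 });
        Console.WriteLine(result); // 42
    }
}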
It depends on your requirements, but CodeDOM would certainly be the best fit for a "program" stored in a "data model".
However, it's unlikely that using option 2 will be in any way measurably slower in comparison with any other approach.
I would echo others in that 1) the compiler is quick, and 2) "Click and Play" things should be simple enough so that no single widget added to a pile of widgets can make it an illegal pile.
Good luck. I'm skeptical that you can achieve point (2) for anything but really toy-level programs.
I'm looking for advice as to coding conventions. The primary languages I use, in order of frequency are C#, JavaScript, and ActionScript. They are all ECMA-based languages, so for the most part, the syntax is interchangeable. What I would like to do is standardize the way I write code.
I looked around for documents on coding standards and found some, by various authors including Microsoft, Adobe, Doug Crockford, and the authors of various books I own. Many of the individual standards are identical. For example, do not use capitalization to differentiate between object identifiers. Okay, sounds good.
However, they are different in some ways, most notably to me in the naming conventions. For example, using underscores in naming private properties, or camel casing vs Pascal casing for method names.
The C# advice tends to differ from the other two more than the ActionScript and JavaScript advice differ from each other, which makes it more difficult for me, since it is a greater number of languages vs. a greater amount of code written. There is also the issue of automatic formatting in the IDE (e.g. the placement of opening braces in functions in JavaScript vs C#).
Any advice as to how you might have approached this problem? Any big pitfalls I'm not seeing? I realize I may be being pedantic, and that I'm lucky enough to work in an environment where I don't have to conform to someone else's standard. I hope to gain some increase in productivity and more readable code. Thanks.
Idioms that make sense in C# aren't necessarily going to make sense in Javascript (and vice-versa), despite the fact that both use pointy braces and semicolons.
We use different coding styles - for the most part, standard Microsoft style for C# and for the most part, standard jQuery style for Javascript. It can be a bit strange-looking (the disjoint of Pascal versus camel case means that you have some C# objects that have "improper" casing because they're pretty much just there as JSON containers), but I wouldn't try to shoehorn what are two discrete languages into a single grammar.
I would stick to the standards proposed by the communities or creators of the languages instead of trying to create one standard that crosses boundaries. Doing otherwise tends to torque off developers that are passionate about and active in the communities surrounding the language.
We tried to do that at one of my employers with Delphi and C#, and no one was happy.
I'm lucky enough to work in an environment where I don't have to conform to someone else's standard
Personally I don't follow the Microsoft standard for C#: instead, all my method names and property names use camelCase (though my types still use UpperCase). And, I decorate my member data (so that it can't be confused with local variables, properties, and/or parameters).
I don't see why it's necessary to follow Microsoft's naming conventions; IMO it's even occasionally a good thing not to: when I subclass a Microsoft type, the case (e.g. 'add') distinguishes my methods from Microsoft's methods (e.g. 'Add') in the underlying base class.
Also when I'm writing C++, I don't follow the same naming conventions as the standard library authors (who use lower_case for their types whereas I use UpperCase).
It is true however that other developers may/do not like it; for example, someone commented on some example C# code that I posted in some answer here on SO, to criticise it not for its content but for its naming convention.
This is really a matter of preference, because that's just what coding standards are: standards. There is no obvious right or wrong here; enforcing every language's community standards has a lot going for it, until you are frequently working in 5 different languages which all have subtle differences. You will not be able to keep up and will end up following no standard consistently.
What I have done before is use the same standard for languages in the same ballpark (PHP, Java, Ruby), and then some specific ones if it was absolutely impractical to use that same set of standards, and the code looks different enough for your brain to also make the switch (for BASH scripts for instance).
But really it's what you (and the rest of your team) agree upon. You don't gain productivity from a specific set of coding standards; you gain productivity by having the same standards as the people you work with. If you want to go full-out Hungarian camel case with an underscore on top: more power to you, just make sure the entire team does it ;)
Is there any design reason for that (like the reason they gave up multiple inheritance)?
Or was it just not important enough?
And the same question applies to optional parameters in methods... these were already in the first version of VB.NET, so it surely wasn't laziness that caused MS not to allow optional parameters; it was probably an architectural decision. And it seems they had a change of heart about that, because C# 4 is going to include them...
What was the decision and why did they give it up?
Edit:
Maybe readers didn't fully understand me. Lately I have been working on a calculation program (supporting numbers of any size, to the last digit), in which some methods are used millions of times per second.
Say I have a method called Add(int num), and this method is used quite a lot with 1 as the parameter (Add(1);). I've found out it is faster to implement a special method just for one. And I don't mean overloading - writing a new method called AddOne, and literally copying the Add method into it, except that instead of using num I'm writing 1. This might seem horribly weird to you, but it's actually faster.
(as ugly as it is)
That made me wonder why C# doesn't support manual inlining, which could be amazingly helpful here.
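For illustration, a stripped-down sketch of what I mean (the class and the digit handling here are made up, not my real code):

class BigNumber
{
    private readonly int[] digits = new int[1024];

    // General-purpose version: the JIT sees 'num' as an unknown value.
    public void Add(int num)
    {
        digits[0] += num;
        // ...carry handling omitted...
    }

    // Hand-specialised copy of Add with the constant 1 substituted for 'num',
    // which lets the JIT fold and simplify the arithmetic.
    public void AddOne()
    {
        digits[0] += 1;
        // ...carry handling omitted...
    }
}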
Edit 2:
I asked myself whether or not to add this. I'm well aware of the weirdness (and disadvantages) of choosing a platform such as .NET for such a project, but I think .NET's optimizations are more important than you might think... especially features such as Any CPU, etc.
To answer part of your question, see Eric Gunnerson's blog post: Why doesn't C# have an 'inline' keyword?
A quote from his post:
For C#, inlining happens at the JIT level, and the JIT generally makes a decent decision.
EDIT: I'm not sure of the reason for delayed optional parameters support, however saying they "gave up" on it sounds as though they were expected to implement it based on our expectations of what other languages offered. I imagine it wasn't high on their priority list and they had deadlines to get certain features out the door for each version. It probably didn't rise in importance till now, especially since method overloading was an available alternative. Meanwhile we got generics (2.0), and the features that make LINQ possible etc. (3.0). I'm happy with the progression of the language; the aforementioned features are more important to me than getting support for optional parameters early on.
Manual inlining would be almost useless. The JIT compiler inlines methods during native code compilation where appropriate, and I think in almost all cases the JIT compiler is better at guessing when it is appropriate than the programmer.
As for optional parameters, I don't know why they weren't there in previous versions. That said, I don't like them being there in C# 4, because I consider them somewhat harmful: the default values get baked into the consuming assembly, and you have to recompile it if you change the default values in a DLL and want the consuming assembly to use the new ones.
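A sketch of what I mean by "baked in" (the assembly and member names here are hypothetical): the compiler copies the default value into each call site at compile time, so the consuming assembly keeps passing the old value until it is recompiled, even if the DLL later ships with a different default.

// In Library.dll:
public static class Library
{
    public static void Configure(int retries = 5)
    {
        System.Console.WriteLine("retries = " + retries);
    }
}

// In Caller.dll (compiled against the version above):
public static class Caller
{
    public static void Run()
    {
        // This call is compiled as Library.Configure(5); the 5 lives in Caller.dll.
        // If Library.dll later changes the default to 10, Caller.dll still passes 5
        // until it is recompiled.
        Library.Configure();
    }
}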
EDIT:
Some additional information about inlining. Although you cannot force the JIT compiler to inline a method call, you can force it to NOT inline a method call. For this, you use the System.Runtime.CompilerServices.MethodImplAttribute, like so:
using System.Runtime.CompilerServices;

internal static class MyClass
{
    // MethodImplOptions.NoInlining tells the JIT compiler never to inline this method.
    [MethodImpl(MethodImplOptions.NoInlining)]
    private static void MyMethod()
    {
        //Powerful, magical code
    }

    //Other code
}
My educated guess: the reason earlier versions of C# didn't have optional parameters is because of bad experiences with them in C++. On the surface, they look straight-forward enough, but there are a few bothersome corner cases. I think one of Herb Sutter's books describes this in more detail; in general, it has to do with overriding virtual methods. Maximilian has mentioned one of the .NET corner cases in his answer.
You can also pretty much get by without them by manually writing multiple overloads; that may not be very nice for the author of the class, but clients will hardly notice the difference between overloads and optional parameters.
So after all these years w/o them, why did C# 4.0 add them? 1) improved parity with VB.NET, and 2) easier interop with COM.
I'm working lately on a calculation program (support numbers of any size, to the last digit), in which some methods are used literally millions of times per second.
Then you chose the wrong language. I assume you have actually profiled your code (right?) and know that there is nothing apart from micro-optimisations that can help you. Also, you're using a high-performance native bigint library and not writing your own, right?
If that's true, don't use .NET. If you think you can gain speed on partial specialisation, go to Haskell, C, Fortran or any other language that either does it automatically, or can expose inlining to you to do it by hand.
If Add(1) really matters to you, heap allocations will matter too.
However, you should really look at what the profiler can tell you...
C# added them in 4.0: http://msdn.microsoft.com/en-us/library/dd264739(VS.100).aspx
As to why they weren't there from the beginning, it's most likely because they felt method overloads gave more flexibility. With overloading you can specify multiple 'defaults' based on the other parameters that you're taking. It's also not that much more syntax.
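A small sketch of the two styles side by side (a hypothetical Logger example):

using System;

class Logger
{
    // Pre-C# 4: supply the "default" through an extra overload.
    public void Log(string message)
    {
        Log(message, "INFO");
    }

    public void Log(string message, string level)
    {
        Console.WriteLine("[" + level + "] " + message);
    }

    // C# 4 and later: the same effect with an optional parameter.
    public void LogWithDefault(string message, string level = "INFO")
    {
        Console.WriteLine("[" + level + "] " + message);
    }
}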
Even in languages like C++, inlining something doesn't guarantee that it'll happen; it's a hint to the compiler. The compiler can either take the hint, or do its own thing.
C# is another step removed from the generated assembly code (via IL + the JIT), so it becomes even harder to guarantee that something will inline. Furthermore, you have issues like the x86 + x64 implementations of the JIT differing in behaviour.
Java doesn't include an inline keyword either. The better Java JITs can inline even virtual methods, and the use of keywords like private or final makes no difference (it used to, but that is now ancient history).