In C#, is there any difference between using System.Object in code rather than just object, or System.String rather than string and so on? Or is it just a matter of style?
Is there a reason why one form is preferable to the other?
string is an alias for global::System.String. It's simply syntactic sugar. The two are exactly interchangeable in almost all cases, and there'll be no difference in the compiled code.
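For instance (a minimal sketch; the variable names are mine), all three declarations below produce the same CLR type and identical compiled code:

using System;

class AliasDemo
{
    static void Main()
    {
        string a = "one";          // C# alias
        System.String b = "two";   // fully qualified CLR name
        String c = "three";        // CLR name, relies on "using System;"

        // All three variables have exactly the same runtime type:
        Console.WriteLine(a.GetType() == b.GetType() && b.GetType() == c.GetType()); // True
    }
}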
Personally I use the aliases for variable names etc, but I use the CLR type names for names in APIs, for example:
public int ReadInt32() // Good, language-neutral
public int ReadInt() // Bad, assumes C# meaning of "int"
(Note that the return type isn't really a name - it's encoded as a type in the metadata, so there's no confusion there.)
The only places I know of where one can be used and the other can't are the following (sketched in code after the list):
nameof prohibits the use of aliases
When specifying an enum's underlying type, only the aliases can be used
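A minimal sketch of those two corner cases (the type and variable names are mine):

using System;

class AliasCorners
{
    // The underlying type of an enum must be written with the alias:
    enum Color : byte { Red, Green }       // compiles
    // enum Shade : System.Byte { Dark }   // compile-time error: keyword form required

    static void Main()
    {
        Console.WriteLine(nameof(String));    // "String" - the CLR name is allowed
        // Console.WriteLine(nameof(string)); // compile-time error: the alias is a keyword, not a name
    }
}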
The object type is an alias for System.Object, and it is treated and highlighted as a keyword. I think it has something to do with legacy, but that's just a wild guess.
Have a look at this MSDN page for all details.
I prefer the use of the lowercased versions, but for no special reason. Just because the syntax highlighting is different on these "basic" types and I don't have to use the shift key when typing...
One is an alias for the other. It's down to style.
string is an alias for global::System.String, and object for global::System.Object
Provided you have using System; in your file, String / string and Object / object are functionally identical and usage is a matter of style.
(EDIT: removed slightly misleading quote, as per Jon Skeet's comment)
string (with the lowercase "s") is the string type of the C# language and the type System.String is the implementation of string in the .NET framework.
In practice there is no difference besides stylistic ones.
EDIT: Since the above obviously wasn't clear enough, there is no difference between them, they are the same type once compiled. I was explaining the semantic difference that the compiler sees (which is just syntactic sugar, much like the difference between a while and for loop).
There is no difference. There are a number of types, called primitive data types, which are treated by the compiler in the style you mentioned.
The capitalized naming style follows the ISO naming rules. It's more general and common, and it enforces the same naming rules for all types in the source, without the exceptions the C# compiler makes.
To my knowledge, it's a shortcut: it's easier to use string than System.String.
But be careful: there's a difference between String and string (C# is case-sensitive).
object, int, long and bool were provided as training wheels for engineers who had trouble adapting to the idea that the data types were not a fixed part of the language. C#, unlike the languages that went before it, has no limit on the number of data types you can add. The 'System' library provides a starter kit featuring such useful types as System.Int32, System.Boolean, System.Double, System.DateTime and so on, but engineers are encouraged to add their own. Because Microsoft was interested in quick adoption of their new language, they provided aliases that made it appear as if the language was more 'C'-like, but these aliases are a completely disposable feature (C# would be just as good a language if you removed all the built-in aliases, probably better).
While StyleCop does enforce the use of the legacy C-style aliases, it is a blemish on an otherwise logical set of rules. As yet, I've not heard a single justification for this rule (SA1121) that wasn't based on dogma. If you think SA1121 is logical, then why is there no built-in type for datetime?
I know this question has been asked around a bit, and by the looks of it, there isn't a clear yes or no answer to this question, but still, I'm a little confused about something.
Usually when I program, I follow a few rules about prefixes:
m_ in front of members
p_ in front of properties
s_ in front of static
a_ in front of parameters
l_ in front of local variables
I just started a new job, and I noticed that prefixes are not used in the code. I asked why, and they replied that IDEs do all the work of keeping track of what's a member variable and what's a local variable. Now I'm thinking: that may be so, but isn't it easier to use prefixes anyway?
I mean, if I for example have a member, a static, and a local variable named "robot", would it not be a pain in the ass to reference it when writing a method? This is perhaps an unrealistic example, but I like to have a good rule-set in my head that I can apply consistently, even for unrealistic conditions.
Does this example justify using Hungarian notation?
I think I'll make a pros/cons list and edit it as I learn more about it.
Argument against Hungarian:
Class.Robot or Robot
this.robot
robot
No need for Hungarian.
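A small sketch of that disambiguation (the names are contrived, as in the question, and entirely hypothetical):

class Robot { }

class Factory
{
    private static Robot Robot = new Robot(); // static member, same name as the type
    private Robot robot = new Robot();        // instance field

    void Demo(Robot robot)                    // parameter shadows the field
    {
        Robot fromParameter = robot;          // the plain name resolves to the parameter
        Robot fromField = this.robot;         // "this." picks the instance field
        Robot fromStatic = Factory.Robot;     // the class name picks the static
    }
}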
Counter:
There is still an inconsistency: Robot could mean different things in different methods. To stay consistent you should prefix Class or this (or nothing) before each Robot variable.
On top of that, let's say you want to access the static variable Strawberry. How do you know a member variable named Strawberry isn't defined? Maybe it's defined in another file that you can't see, so you might get unexpected results. Now you might say that this is visible via the IDE, but I'd argue that using a prefix is superior because you see what you're referencing, while you might miss what the IDE is telling you. You could also use this/Classname prefixes of course, but that kind of defeats the purpose of not using Hungarian notation.
Argument against Hungarian:
A violation of this rule occurs when Hungarian notation is used in the naming of fields and variables. The use of Hungarian notation has become widespread in C++ code, but the trend in C# is to use longer, more descriptive names for variables, which are not based on the type of the variable but which instead describe what the variable is used for.
Counter:
The prefixes I mentioned are not based on the type of the variable, the prefixes indeed specify what the variable is used for.
Argument against Hungarian:
modern code editors such as Visual Studio make it easy to identify type information for a variable or field, typically by hovering the mouse cursor over the variable name. This reduces the need for Hungarian notation.
Counter:
While this is true, I myself almost never hover with my mouse above a variable name unless an error has occurred. In contrast, with Hungarian notation, you immediately see where your variable is located in the class.
Remark:
Doesn't Microsoft recommend a form of Hungarian notation in some places? I read that it is a convention to prefix interface names with an I, and this is a form of Hungarian notation. While this doesn't directly relate to my question above, it does raise the point that Hungarian notation is sometimes recommended.
No, don't do it. It makes the code harder to read. If you wrote English with a v_ in front of every verb and an n_ in front of every noun, that would make the sentence harder to read while adding information that is not useful most of the time.
If your classes are well designed with few responsibilities and short methods it shouldn't be too difficult to figure out what each variable means from the name and the context in which it is used. When it's not obvious and you need to know, it's easy to find out: you can just hover the mouse over the variable name, or press "Go To Definition".
StyleCop has a rule that warns when you use Hungarian notation. The rule description has a little explanation about why that rule exists:
TypeName: FieldNamesMustNotUseHungarianNotation
CheckId: SA1305
Category: Naming Rules
Cause
The name of a field or variable in C# uses Hungarian notation.
Rule Description
A violation of this rule occurs when Hungarian notation is used in the naming of fields and variables. The use of Hungarian notation has become widespread in C++ code, but the trend in C# is to use longer, more descriptive names for variables, which are not based on the type of the variable but which instead describe what the variable is used for.
In addition, modern code editors such as Visual Studio make it easy to identify type information for a variable or field, typically by hovering the mouse cursor over the variable name. This reduces the need for Hungarian notation.
No, do not use Hungarian Notation. First, it's so 1990s. Secondly, you may be assaulted by your co-workers... ;-)
Your robot example:
Class.Robot or Robot
this.robot
robot
No need for Hungarian.
The answer is no, as everyone wrote here already.
First of all: you're not actually using Hungarian notation - or a known variant of it - as you state in the question yourself.
So let's start with the problem that you use a naming convention of your own making, one that is not widely used. This leads to immediate problems as soon as you expose your code to the real world - like your new coworkers. You're just inventing a private third (nth?) variant of this prefix notation, with all the problems that forcing something uncommon on other people entails.
Now - is it a change for the better? Are you right, and should the other people adapt to gain from this set of rules?
The consensus here seems to be 'No', and I'm fiercely on that side. Ignoring the standard arguments about Hungarian notation (I dismiss them as 'not entirely relevant'):
Don't reuse names to mean lots of things. The one exception that still seems to be common is to have a constructor taking an argument with the same name as a field:
class Foo {
    private string robot;       // field with the same name as the parameter
    public Foo(string robot) {
        this.robot = robot;     // "this." picks the field over the parameter
    }
}
If you have trouble managing the sheer number of names in your code, chances are you have too many of them in one place / in scope. You're trying to solve a code smell with a (smelly, according to the consensus here so far) workaround.
To reiterate it once: You come to a team of people that don't use your convention (and how could they - it seems it's of your own making..) so you have to adjust to the team. You can argue about personal readability and are free to ask your coworkers to reconsider, but if they disagree with that style: Don't fight it. You're just making yourself miserable if you insist on being right and them being wrong. Don't let this drain your productivity.
but I like to have a good rule-set in my head that I can apply consistently
Even better than a rule-set in your head is a rule-set in your IDE/build system, so you should check out StyleCop.
StyleCop will let you configure your coding guidelines how you like, but by default it offers a popular alternative to the Apps Hungarian notation you describe (sketched in the example after this list):
fields: this.myField
properties: this.MyProperty
methods: this.MyMethod()
statics: MyClass.MyStaticMethod()
etc.
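A minimal sketch of that default style (all names here are hypothetical):

public class Inventory
{
    private static int count;              // static field, no prefix
    private string name;                   // instance field, no prefix

    public string Name                     // property accessed as this.MyProperty
    {
        get { return this.name; }
    }

    public void Rename(string name)
    {
        this.name = name;                  // "this." separates the field from the parameter
        Inventory.Increment();             // statics go through the class name
    }

    private static void Increment()
    {
        count = count + 1;
    }
}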
You'll find endless discussion on this aspect of coding style both on stackoverflow and elsewhere, so I expect this question will be closed as a duplicate....
I myself have some conventions that I follow. It's true that modern IDEs take away a lot of the need for things like this, but I still like a little Hungarian :). I'm using:
robot_ for attributes (MS recommends this.robot, but this way I can't forget the this)
camelCase for local variables and private/protected/internal methods
PascalCase for public properties or methods
And that is it :). I think the code looks very weird with all the m_, a_, ... stuff; I find it very difficult to read. Hovering over the code in VS gives you hints, sure, but I see some additional benefit in using some kind of convention.
I mean, even MS uses a kind of Hungarian notation, suffixing all async functions with Async and prefixing all interfaces with I.
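A quick sketch of the convention described above (the class and names are hypothetical):

public class Robot
{
    private string name_;                 // field: trailing underscore

    public string Name                    // public property: PascalCase
    {
        get { return name_; }             // the underscore makes the field unmistakable
    }

    private void rename(string newName)   // private method: camelCase
    {
        name_ = newName;
    }
}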
I am a PHP web programmer who is trying to learn C#.
I would like to know why C# requires me to specify the data type when creating a variable.
Class classInstance = new Class();
Why do we need to know the data type before a class instance?
As others have said, C# is static/strongly-typed. But I take your question more to be "Why would you want C# to be static/strongly-typed like this? What advantages does this have over dynamic languages?"
With that in mind, there are lots of good reasons:
Stability: Certain kinds of errors are now caught automatically by the compiler, before the code ever makes it anywhere close to production.
Readability/Maintainability: You are now providing more information about how the code is supposed to work to future developers who read it. You add information that a specific variable is intended to hold a certain kind of value, and that helps programmers reason about what the purpose of that variable is.
This is probably why, for example, Microsoft's style guidelines recommended that VB6 programmers put a type prefix with variable names, but that VB.Net programmers do not.
Performance: This is the weakest reason, but late-binding/duck typing can be slower. In the end, a variable refers to memory that is structured in some specific way. Without strong types, the program will have to do extra type verification or conversion behind the scenes at runtime as you use memory that is structured one way physically as if it were structured in another way logically.
I hesitate to include this point, because ultimately you often have to do those conversions in a strongly typed language as well. It's just that the strongly typed language leaves the exact timing and extent of the conversion to the programmer, and does no extra work unless it needs to be done. It also allows the programmer to force a more advantageous data type. But these really are attributes of the programmer, rather than the platform.
That would itself be a weak reason to omit the point, except that a good dynamic language will often make better choices than the programmer. This means a dynamic language can help many programmers write faster programs. Still, for good programmers, strongly-typed languages have the potential to be faster.
Better Dev Tools: If your IDE knows what type a variable is expected to be, it can give you additional help about what kinds of things that variable can do. This is much harder for the IDE to do if it has to infer the type for you. And if you get more help with the minutiae of an API from the IDE, then you as a developer will be able to get your head around a larger, richer API, and get there faster.
Or perhaps you were just wondering why you have to specify the class name twice for the same variable on the same line? The answer is two-fold:
Often you don't. In C# 3.0 and later you can use the var keyword instead of the type name in many cases. Variables created this way are still statically typed, but the type is now inferred for you by the compiler.
Thanks to inheritance and interfaces, sometimes the type on the left-hand side doesn't match the type on the right-hand side.
It's simply how the language was designed. C# is a C-style language and follows in the pattern of having types on the left.
In C# 3.0 and up you can kind of get around this in many cases with local type inference.
var variable = new SomeClass();
But at the same time you could also argue that you are still declaring a type on the LHS. Just that you want the compiler to pick it for you.
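For instance (a minimal sketch; the list name is mine), a var-declared variable is still checked at compile time:

using System.Collections.Generic;

class VarDemo
{
    static void Main()
    {
        var names = new List<string>();   // inferred as List<string> at compile time
        names.Add("Anders");              // fine
        // names.Add(42);                 // compile-time error: cannot convert 'int' to 'string'
    }
}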
EDIT
Please read this in the context of the user's original question:
why do we need [class name] before a variable name?
I wanted to comment on several other answers in this thread. A lot of people are giving "C# is statically typed" as an answer. While the statement is true (C# is statically typed), it is almost completely unrelated to the question. Static typing does not necessitate a type name being to the left of the variable name. Sure, it can help, but that is a language designer's choice, not a necessary feature of statically typed languages.
This is easily provable by considering other statically typed languages such as F#. Types in F# appear on the right of a variable name and can very often be omitted altogether. There are also counterexamples: PowerShell, for instance, is extremely dynamic and puts all of its types, if included, on the left.
One of the main reasons is that you can specify different types as long as the type on the left-hand side of the assignment is a parent type of the type on the right (or an interface implemented by that type).
For example given the following types:
class Foo { }
class Bar : Foo { }
interface IBaz { }
class Baz : IBaz { }
C# allows you to do this:
Foo f = new Bar();
IBaz b = new Baz();
Yes, in most cases the compiler could infer the type of the variable from the assignment (like with the var keyword) but it doesn't for the reason I have shown above.
Edit: As a point of order - while C# is strongly-typed the important distinction (as far as this discussion is concerned) is that it is in fact also a statically-typed language. In other words the C# compiler does static type checking at compilation time.
C# is a statically-typed, strongly-typed language like C or C++. In these languages all variables must be declared to be of a specific type.
Ultimately because Anders Hejlsberg said so...
You need [class name] in front because there are many situations in which the first [class name] is different from the second, like:
IMyCoolInterface obj = new MyInterfaceImplementer();
MyBaseType obj2 = new MySubTypeOfBaseType();
etc. You can also use the word 'var' if you don't want to specify the type explicitly.
Why do we need to know the data type before a class instance?
You don't! Read from right to left. You create the object and then you store it in a type-safe variable, so you know what type that object is for later use.
Consider the following snippet, it would be a nightmare to debug if you didn't receive the errors until runtime.
void FunctionCalledVeryUnfrequently()
{
ClassA a = new ClassA();
ClassB b = new ClassB();
ClassA a2 = new ClassB(); //COMPILER ERROR(thank god)
//100 lines of code
DoStuffWithA(a);
DoStuffWithA(b); //COMPILER ERROR(thank god)
DoStuffWithA(a2);
}
If you imagine replacing the new Class() with a number or a string, the syntax will make much more sense. The following example might be a bit verbose, but it might help in understanding why it's designed the way it is.
string s = "abc";
string s2 = new string(new char[]{'a', 'b', 'c'});
//Does exactly the same thing
DoStuffWithAString("abc");
DoStuffWithAString(new string(new char[]{'a', 'b', 'c'}));
//Does exactly the same thing
C#, as others have pointed out, is a strongly, statically-typed language.
By stating up front what the type you're intending to create is, you'll receive compile-time warnings when you try to assign an illegal value. By stating up front what type of parameters you accept in methods, you receive those same compile-time warnings when you accidentally pass nonsense into a method that isn't expecting it. It removes the overhead of some paranoia on your behalf.
Finally, and rather nicely, C# (and many other languages) doesn't have the same ridiculous, "convert anything to anything, even when it doesn't make sense" mentality that PHP does, which quite frankly can trip you up more times than it helps.
C# is a strongly-typed language, like C++ or Java. Therefore it needs to know the type of the variable. You can fudge it a bit in C# 3.0 via the var keyword, which lets the compiler infer the type.
That's the difference between a strongly typed and a weakly typed language. C# (like C, C++, Java, and most other such languages) is strongly typed, so you must declare the variable's type.
When we define variables to hold data, we have to specify the type of data that those variables will hold. The compiler then checks that what we are doing with the data makes sense to it, i.e. follows the rules. We can't, for example, store text in a number - the compiler will not allow it.
int a = "fred"; // Not allowed. Cannot implicitly convert 'string' to 'int'
The variable a is of type int, and assigning it the value "fred", which is a text string, breaks the rules: the compiler is unable to do any kind of conversion of this string.
In C# 3.0, you can use the var keyword - this uses static type inference to work out the type of the variable at compile time.
var foo = new ClassName();
variable 'foo' will be of type 'ClassName' from then on.
One thing that hasn't been mentioned is that C# is a CLS (Common Language Specification) compliant language. This is a set of rules that a .NET language has to adhere to in order to be interoperable with other .NET languages.
So really C# is just keeping to these rules. To quote this MSDN article:
The CLS helps enhance and ensure language interoperability by defining a set of features that developers can rely on to be available in a wide variety of languages. The CLS also establishes requirements for CLS compliance; these help you determine whether your managed code conforms to the CLS and to what extent a given tool supports the development of managed code that uses CLS features. If your component uses only CLS features in the API that it exposes to other code (including derived classes), the component is guaranteed to be accessible from any programming language that supports the CLS. Components that adhere to the CLS rules and use only the features included in the CLS are said to be CLS-compliant components.
The CLS is a subset of the CTS, the Common Type System.
If that's not enough acronyms for you, then there are a tonne more in .NET, such as CLI, ILasm/MSIL, CLR, BCL, FCL, and so on.
Because C# is a strongly typed language
Static typing also allows the compiler to make better optimizations, and skip certain steps. Take overloading for example, where you have multiple methods or operators with the same name differing only by their arguments. With a dynamic language, the runtime would need to grade each version in order to determine which is the best match. With a static language like this, the final code simply points directly to the appropriate overload.
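As a rough illustration (the names are mine), the compiler binds each call below to its overload at compile time:

using System;

class OverloadDemo
{
    // Two overloads differing only by parameter type.
    static void Describe(int value)    { Console.WriteLine("int overload"); }
    static void Describe(string value) { Console.WriteLine("string overload"); }

    static void Main()
    {
        Describe(42);     // bound to Describe(int) during compilation
        Describe("42");   // bound to Describe(string) - no runtime "grading" needed
    }
}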
Static typing also aids in code maintenance and refactoring. My favorite example being the Rename feature of many higher-end IDEs. Thanks to static typing, the IDE can find with certainty every occurrence of the identifier in your code, and leave unrelated identifiers with the same name intact.
I didn't notice whether it has been mentioned yet or not, but C# 4.0 introduces dynamic checking via the dynamic keyword. Though I'm sure you'd want to avoid it when it's not necessary.
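A small sketch of what that looks like; member lookup on a dynamic variable is deferred until run time:

using System;

class DynamicDemo
{
    static void Main()
    {
        dynamic d = "hello";
        Console.WriteLine(d.Length);     // 5 - resolved at run time

        d = 42;
        // Console.WriteLine(d.Length);  // compiles, but would throw RuntimeBinderException at run time
    }
}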
Why C# requires me to specify the data type when creating a variable.
Why do we need to know the data type before a class instance?
I think one thing that most answers haven't referenced is the fact that C# was originally meant and designed as a "managed", "safe" language, among other things, and a lot of those goals are achieved via static / compile-time verifiability. Knowing the variable's datatype explicitly makes this problem MUCH easier to solve, meaning that one can make several automated assessments (by the C# compiler, not the JIT) about possible errors / undesirable behavior without ever allowing execution.
That verifiability as a side effect also gives you better readability, dev tools, stability etc. because if an automated algorithm can understand better what the code will do when it actually runs, so can you :)
Statically typed means that the compiler can perform certain checks at compile time rather than at run time. Every variable is of a particular type in a statically typed language, and C# is definitely statically and strongly typed.
Before using C#, C++ was my primary programming language. And the Hungarian notation is deep in my heart.
I did some small projects in C# without reading a C# book or other guidelines on the language. In those small C# projects I used something like
private string m_strExePath;
Until I read something from SO that said:
Do not use Hungarian notation.
So why? Am I the only person that has m_strExePath or m_iNumber in my C# code?
Joel Spolsky has a really good article on this topic. The quick summary is that there are two types of Hungarian notation used in practice.
The first is "Systems Hungarian" where you specify the variable type using a prefix. Things like "str" for string. This is nearly useless information, especially since modern IDEs will tell you the type anyway.
The second is "Apps Hungarian" where you specify the purpose of the variable with a prefix. The most common example of this is using "m_" to indicate member variables. This can be extremely useful when done correctly.
My recommendation would be to avoid "Systems Hungarian" like the plague, but definitely use "Apps Hungarian" where it makes sense to. I suggest reading Joel's article. It's a bit long-winded but explains it much better than I could.
The most interesting part of this article is that the original inventor of Hungarian notation, Charles Simonyi, created "Apps Hungarian" but his paper was horribly misinterpreted and the abomination of "Systems Hungarian" was created as a result.
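To make the distinction concrete, here is a sketch following Joel's safe/unsafe string example (the method, helper call, and names are mine):

using System.Net;

class HungarianDemo
{
    static void Render(string rawComment)
    {
        // Systems Hungarian: the prefix merely restates the declared type.
        string strTitle = "Comments";
        int iCount = strTitle.Length;

        // Apps Hungarian: the prefix encodes the kind of data ("us" = unsafe,
        // raw user input; "s" = safe, HTML-encoded). A line assigning a usComment
        // where an sComment is expected then looks wrong at a glance.
        string usComment = rawComment;
        string sComment = WebUtility.HtmlEncode(usComment);
    }
}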
When doing user interface design, I have found it very useful to maintain Hungarian notation. Items like text boxes, labels and drop down lists are much easier to quickly understand and often you get repeating control names:
lblTitle = Label
txtTitle = TextBox
ddlTitle = DropDownList
To me that's easier to read and parse. Otherwise, Hungarian notation doesn't fit in because of the advances in IDE's, specifically Visual Studio.
Also, Joel on Software has an excellent article related to Hungarian notation titled: Making Wrong Code Look Wrong which makes some good arguments for Hungarian notation.
You're not the only person, but I'd say it's relatively uncommon. Personally I'm not a fan of Hungarian notation, at least not in the simple sense that just restates the type information which is already present in the declaration. (Fans of "true" Hungarian notation will explain the difference - it's never bothered me that much, but I can see their point. If you use a common prefix for, say, units of length vs units of weight, you won't accidentally assign a length variable with a weight value, even though both may be integers.)
However, for private members you can pretty much do what you want - agree with your team what the naming convention should be. The important point is that if you want your API to fit in with the rest of .NET, don't use Hungarian notation in your public members (including parameters).
No, you're not the only one who does it. It's just generally accepted that Hungarian notation isn't the best way to name things in C# (the IDE handles a lot of the issues that Hungarian notation tried to address).
Hungarian notation is a terrible mistake, in any language. You shouldn't use it in C++ either. Name your variables so you know what they're for. Don't name them to duplicate type information that the IDE can give you anyway, which may change, and which is usually irrelevant in the first place. If you know that something is a counter, it doesn't matter whether it's an Int16, 32 or 64: you know it acts as a counter, and any operation that's valid on a counter should be valid. The same argument goes for X/Y coordinates: they're coordinates, and it doesn't matter whether they're floats or doubles. It may be relevant to know whether a value is in units of weight, distance or speed; it doesn't matter that it's a float.
In fact, Hungarian notation only came around as a misunderstanding. The inventor had intended for it to be used to describe the "conceptual" type of a variable (is it a coordinate, an index, a counter, a window?)
And the people who read his description assumed that by "type" he meant the actual programming language type (int, float, zero-terminated string, char pointer)
That was never the intention, and it is a horrible idea. It duplicates information that the IDE can better provide, and which isn't all that relevant in the first place, as it encourages you to program at the lowest possible level of abstraction.
So why? Am I the only person that has m_strExePath or m_iNumber in my C# code?
No, unfortunately not. Tell me, what would exePath be if it wasn't a string? Why do I, as a reader of your code, need to know that it is a string? Isn't it enough to know that it is the path to the executable? m_iNumber is just badly named: which number is this? What is it for? You have told me twice that it is a number, but I still don't know what the meaning of the number is.
You're certainly not the only person, but I do hope you're part of a declining trend :)
The problem with Hungarian notation is that it's trying to implement a type system via naming conventions. This is extremely problematic because it's a type system with only human verification. Every human on the project has to agree to the same set of standards, do rigorous code reviews, and ensure that all new types are assigned the appropriate and correct prefix. In short, it's impossible to guarantee consistency with a large enough code base, and at the point there is no consistency, why are you doing it?
Furthermore tools don't support Hungarian notation. That might seem like a stupid comment on the surface but consider refactoring for instance. Each refactoring in a Hungarian naming convention system must be accompanied with a mass rename to ensure that prefixes are maintained. Mass renames are susceptible to all sorts of subtle bugs.
Instead of using names to implement a type system, just rely on the type system. It has automatic verification and tool support. Newer IDE's make it much easier to discover a variable's type (intellisense, hover tips, etc ...) and really remove the original desire for Hungarian Notation.
One downside of Hungarian notation is that developers frequently change the type of variables during early coding, which requires the name of the variable to also change.
Unless you are using a text editor rather than the VS IDE, there is little value in Hungarian notation, and it impedes rather than improves readability.
The real value in Hungarian notation dates back to C programming and the weakly typed nature of pointers. Basically, back in the day the easiest way to keep track of the type was to use Hungarian.
In languages like C# the type system tells you all you need to know, and the IDE presents this to you in a very user friendly way so there is simply no need to use Hungarian.
As for good reasons not to use it, well, there are quite a few. Firstly, in C# (and for that matter C++ and many other languages) you often create your own types, so what would be the Hungarian for a "MyAccountObject" type? Even if you can decide on sensible Hungarian notations, it still makes the actual variable name slightly harder to read because you have to skip past the "LPZCSTR" (or whatever) at the start. More important is the maintenance cost, though: what if you start off with a List and change to another type of collection (something I seem to do a lot at the moment)? You then need to rename all your variables that use that type, all for no real benefit. If you had just used a decent name to begin with, you wouldn't have to worry about this.
In your example, what if you created or used some more meaningful type for holding a path (e.g. Path), you then need to change your m_strExePath to m_pathExePath, which is a pain and in this case not actually very helpful.
The only two areas where I currently see any form of Hungarian notation are:
Class member variables, with a leading underscore (e.g. _someVar)
WinForms and WebForms control names (btn for Button, lbl for Label etc)
The leading underscore on member variables seems like it is here to stay with us for a while longer, but I expect to see the popularity of Hungarian on control names to wane.
Most hungarian notation describes what the variable is (a pointer, or a pointer to a pointer, or the contents of a pointer etc. etc.), and what the thing that it points to is (string etc).
I've found very little use for pointers in C#, especially when there's no unmanaged/pinvoke calls. Also, there's no option to use void* so there's no need for hungarian to describe it.
The only left-over from Hungarian that I (and most others in C# land) use is to precede private fields with _, as in _customers.
Hungarian notation found its first major use with the BCPL programming language. BCPL is short for Basic Combined Programming Language and is an ancient (designed in 1966), typeless language where everything is a word. In that situation, Hungarian notation can help with naked-eye type-checking.
Many years have passed since then...
Here's what the latest Linux kernel documentation says (bold mine):
Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged - the compiler knows the types anyway and can check those, and it only confuses the programmer.
Please also note that there are two types of Hungarian notation:
Systems Hungarian notation: The prefix encodes physical data type.
Apps Hungarian notation: The prefix encodes logical data type.
Systems Hungarian notation is no longer recommended due to type-checking redundancy which is in turn due to advancement in compiler capabilities. However, you may still use Apps Hungarian notation if the compiler doesn't check logical data types for you. As a rule of thumb, Hungarian notation is good if and only if it describes semantics that are not available otherwise.
If Hungarian notation is deep in your and your team's and your company's heart, by all means use it. It is no longer considered a best practice as far as I can read from blogs and books.
If you hold the pointer over a variable in VS, it will tell you what type the variable is, so there is no reason to go with these cryptic variable names, especially as people coming from other languages may need to maintain the code, and it will be an obstacle to easy reading.
In a language like C#, where the limitations on variable name length that once existed in languages like C and C++ are gone, and where the IDE provides excellent support for determining the type of a variable, Hungarian notation is redundant.
Code should be self-explanatory so names like m_szName can be replaced by this.name. And m_lblName can be nameLabel, etc. The important thing is to be consistent and to create code that is easy to read and maintain - this is the goal of any naming convention so you should always decide what convention you use based on this. I feel that Hungarian notation is not the most readable or the most maintainable because it can be too terse, requires a certain amount of knowledge on what the prefixes mean, and is not part of the language and therefore, hard to enforce and hard to detect when the name no longer matches the underlying type.
Instead, I like to replace Apps Hungarian (prefixes like m_) by referencing this for members, referencing the class name for static variables, and no prefix for local variables. And instead of System Hungarian (including the type of a variable in the variable name), I prefer to describe what the variable is for such as nameLabel, nameTextBox, count, and databaseReference.
At some point, you're likely to have a property
public string ExecutablePath { ... }
(note that you should try to avoid abbreviations like "Exe" too). That being the case, your question might then be moot much of the time, as you can use C# 3.0's auto-properties
public string ExecutablePath { get; set; }
You now no longer have to come up with a name for the backing-store variable. No name, no question/debate about naming conventions.
Obviously, auto-implemented properties aren't always appropriate.
It depends.
Is it significant enough to add the description of the data type in variable/object name in your method/class?
System Hungarian Notation > Apps Hungarian Notation
If you are designing in a procedural language (e.g. C) that has many global variables or a lower-level API that deals with precise data types and sizes (e.g. C's <stdint.h>, where 40+ different data type notations are used to describe integers), System Hungarian Notation is more helpful.
Note, however, that many modern programmers consider adding type information to a variable name redundant, as many modern IDEs have very convenient tools (e.g. the Outline of Eclipse, the Symbol Navigator of Xcode, Visual AssistX for Visual Studio, etc.). System Hungarian Notation was widely used in the earlier days of programming (e.g. FORTRAN, C99) when type information was not readily available from such tools.
System Hungarian Notation < Apps Hungarian Notation
If you are working in higher-level class or method (e.g. user-defined driver class/method for your C# application), notating simple data type may not be sufficient enough to describe the variable/object. Therefore, we use Apps Hungarian Notation in this case.
For example, if the purpose of your method is to retrieve an executable path and to display an error message if it is not found, then instead of noting that a variable is of type string, we note more significant information to describe it, so that programmers can better comprehend the purpose of the variable/object:
private string mPathExe;
private string msgNotFound;
It's about efficiency. How much time was wasted assigning the wrong value to a variable? Having accurate variable declarations is a gain in efficiency. It's about readability. It's about following the existing patterns. If you want creativity, do user interface design or system architecture. If you're programming, your code should be designed to make your team members' jobs easier, not yours. A 2% gain in efficiency is an extra week each year. That's 40 hours gained. Human productivity gains are made by compounding efficiencies.
I don't care, I always use a combination of Hungarian and Pascal/Camel Case.
Not for data types, but for controls. Because txtName is pretty self-explanatory. Some people will name it NameTextBox which is too long.
lblAge, txtName, rdoTrue, cboCities, lstAccountInfo. Nice and quick and compact
For variables, it's camel case for ordinary variables and Pascal case for properties. "firstName = txtFName.Text" (seriously, if you can't look at that and realize it's a textbox, sheez). Then I'll use Pascal case for methods
public void PrintNames (string fName, string lName)
{
lblFullName.Text = fName + " " + lName;
}
So yes, I use Hungarian case, so sue me. No one can say they don't know what those variables are and I like being able to see really quickly without having to hover over what type of variable or control it is. When I first started programming I used all caps for every variable so this is a step up. And for pity's sake, do NOT put the opening curly brace at the end of the line. Nice and symmetrical.