What is AutoClass in .net? - c#

I was reading Inside C#, and I stumbled upon Type.IsAutoClass.
The documentation says
true if the string format attribute AutoClass is selected for the Type; otherwise, false.
The question is: what is AutoClass, and how does it impact a Type?
Note that this is an academic question; there is no practical usage (to the best of my knowledge) in the projects I am associated with.

It is part of the TypeAttributes Enumeration:
AutoClass - LPTSTR is interpreted automatically.
And the remarks:
The members of this enumerator class match the CorTypeAttr enumerator as defined in the corhdr.h file.
So, this is used for interop, in how string constants are interpreted.
By the way - LPTSTR.
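To see the flag, here is a minimal sketch (my own example; it assumes the C# compiler records CharSet.Auto from StructLayout as the AutoClass string-format flag in the type's metadata):
using System;
using System.Reflection;
using System.Runtime.InteropServices;

// CharSet.Auto asks the marshaler to pick ANSI or Unicode per platform,
// which should show up in the type's metadata as the AutoClass flag.
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
struct NativeMessage
{
    public string Text;
}

class Program
{
    static void Main()
    {
        Type t = typeof(NativeMessage);
        Console.WriteLine(t.IsAutoClass);                                  // True (expected)
        Console.WriteLine(t.Attributes & TypeAttributes.StringFormatMask); // AutoClass (expected)
    }
}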

Related

Is there any reason to use string over String in c# (IDE0001) [duplicate]

What are the differences between these two and which one should I use?
string s = "Hello world!";
String s = "Hello world!";
string is an alias in C# for System.String.
So technically, there is no difference. It's like int vs. System.Int32.
As far as guidelines, it's generally recommended to use string any time you're referring to an object.
e.g.
string place = "world";
Likewise, I think it's generally recommended to use String if you need to refer specifically to the class.
e.g.
string greet = String.Format("Hello {0}!", place);
This is the style that Microsoft tends to use in their examples.
It appears that the guidance in this area may have changed, as StyleCop now enforces the use of the C# specific aliases.
Just for the sake of completeness, here's a brain dump of related information...
As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is:
object: System.Object
string: System.String
bool: System.Boolean
byte: System.Byte
sbyte: System.SByte
short: System.Int16
ushort: System.UInt16
int: System.Int32
uint: System.UInt32
long: System.Int64
ulong: System.UInt64
float: System.Single
double: System.Double
decimal: System.Decimal
char: System.Char
Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR. The only primitive type which doesn't have an alias is System.IntPtr.
In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.)
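As a quick check, here is a minimal sketch (my own addition) showing that an alias and its CLR counterpart are literally the same type at run time:
using System;

class AliasCheck
{
    static void Main()
    {
        // The aliases compile to the very same CLR types.
        Console.WriteLine(typeof(int) == typeof(Int32));     // True
        Console.WriteLine(typeof(string) == typeof(String)); // True
        Console.WriteLine(typeof(object).FullName);          // System.Object
    }
}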
There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance:
public enum Foo : UInt32 {} // Invalid
public enum Bar : uint {} // Valid
That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char... as opposed to a type production as used by variable declarations for example. It doesn't indicate any other difference.
Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes.
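Here is a minimal sketch of that convention (PacketReader and its buffer handling are purely illustrative, not framework code):
using System;

public sealed class PacketReader
{
    private readonly byte[] buffer;
    private int position;

    public PacketReader(byte[] buffer) => this.buffer = buffer;

    // "Int32" in the public name is language-neutral; "int" in the body is just the C# alias.
    public int ReadInt32()
    {
        int value = BitConverter.ToInt32(buffer, position);
        position += sizeof(int);
        return value;
    }
}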
String stands for System.String and it is a .NET Framework type. string is an alias in the C# language for System.String. Both of them are compiled to System.String in IL (Intermediate Language), so there is no difference. Choose what you like and use that. If you code in C#, I'd prefer string as it's a C# type alias and well-known by C# programmers.
I can say the same about (int, System.Int32), etc.
The best answer I have ever heard about using the provided type aliases in C# comes from Jeffrey Richter in his book CLR Via C#. Here are his 3 reasons:
I've seen a number of developers confused, not knowing whether to use string or String in their code. Because in C# the string (a keyword) maps exactly to System.String (an FCL type), there is no difference and either can be used.
In C#, long maps to System.Int64, but in a different programming language, long could map to an Int16 or Int32. In fact, C++/CLI treats long as an Int32. Someone reading source code in one language could easily misinterpret the code's intention if he or she were used to programming in a different programming language. In fact, most languages won't even treat long as a keyword and won't compile code that uses it.
The FCL has many methods that have type names as part of their method names. For example, the BinaryReader type offers methods such as ReadBoolean, ReadInt32, ReadSingle, and so on, and the System.Convert type offers methods such as ToBoolean, ToInt32, ToSingle, and so on. Although it's legal to write the following code, the line with float feels very unnatural to me, and it's not obvious that the line is correct:
BinaryReader br = new BinaryReader(...);
float val = br.ReadSingle(); // OK, but feels unnatural
Single val = br.ReadSingle(); // OK and feels good
So there you have it. I think these are all really good points. I however, don't find myself using Jeffrey's advice in my own code. Maybe I am too stuck in my C# world but I end up trying to make my code look like the framework code.
string is a reserved word, but String is just a class name.
This means that string cannot be used as a variable name by itself.
If for some reason you wanted a variable called string, you'd see only the first of these compiles:
StringBuilder String = new StringBuilder(); // compiles
StringBuilder string = new StringBuilder(); // doesn't compile
If you really want a variable named string, you can use @ as a prefix:
StringBuilder @string = new StringBuilder();
Another critical difference: Stack Overflow highlights them differently.
There is one difference - you can't use String without a using System; directive (or fully qualifying it as System.String) beforehand.
It's been covered above; however, you can't use the C# alias when looking up a type by name in reflection; you must use the CLR type name, as the sketch below shows.
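A minimal sketch of that difference (my own example):
using System;

class ReflectionNames
{
    static void Main()
    {
        // Type lookup by name only understands the CLR name, not the C# alias.
        Console.WriteLine(Type.GetType("System.String") != null); // True
        Console.WriteLine(Type.GetType("string") == null);        // True: the alias is not a type name
    }
}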
System.String is the .NET string class - in C# string is an alias for System.String - so in use they are the same.
As for guidelines I wouldn't get too bogged down and just use whichever you feel like - there are more important things in life and the code is going to be the same anyway.
If you find yourselves building systems where it is necessary to specify the size of the integers you are using, and so tend to use Int16, Int32, UInt16, UInt32, etc., then it might look more natural to use String - and when moving around between different .NET languages it might make things more understandable - otherwise I would use string and int.
I prefer the capitalized .NET types (rather than the aliases) for formatting reasons. The .NET types are colored the same as other object types (the value types are proper objects, after all).
Conditional and control keywords (like if, switch, and return) are lowercase and colored dark blue (by default). And I would rather not have the disagreement in use and format.
Consider:
String someString;
string anotherString;
string and String are identical in all ways (except the uppercase "S"). There are no performance implications either way.
Lowercase string is preferred in most projects due to the syntax highlighting.
This YouTube video demonstrates practically how they differ.
But now for a long textual answer.
When we talk about .NET there are two different things one there is .NET framework and the other there are languages (C#, VB.NET etc) which use that framework.
"System.String" a.k.a "String" (capital "S") is a .NET framework data type while "string" is a C# data type.
In short "String" is an alias (the same thing called with different names) of "string". So technically both the below code statements will give the same output.
String s = "I am String";
or
string s = "I am String";
In the same way, there are aliases for other C# data types as shown below:
object: System.Object, string: System.String, bool: System.Boolean, byte: System.Byte, sbyte: System.SByte, short: System.Int16 and so on.
Now the million-dollar question from a programmer's point of view: when should we use "String" and when "string"?
The first thing, to avoid confusion, is to use one of them consistently. From a best-practices perspective, when you declare a variable it's good to use "string" (small "s"), and when you use it as a class name then "String" (capital "S") is preferred.
In the code below, the left-hand side is a variable declaration and it is declared using "string". On the right-hand side, we are calling a static method of the class, so "String" is more sensible.
string s = String.Format("I am a {0}", "string");
C# is a language which is used together with the CLR.
string is a type in C#.
System.String is a type in the CLR.
When you use C# together with the CLR, string will be mapped to System.String.
Theoretically, you could implement a C#-compiler that generated Java bytecode. A sensible implementation of this compiler would probably map string to java.lang.String in order to interoperate with the Java runtime library.
Lower case string is an alias for System.String.
They are the same in C#.
There's a debate over whether you should use the System types (System.Int32, System.String, etc.) types or the C# aliases (int, string, etc). I personally believe you should use the C# aliases, but that's just my personal preference.
string is just an alias for System.String. The compiler will treat them identically.
The only practical difference is the syntax highlighting as you mention, and that you have to write using System if you use String.
Both are the same. But from a coding-guidelines perspective it's better to use string instead of String. This is what developers generally use; e.g., instead of using Int32 we use int, as int is an alias for Int32.
FYI
“The keyword string is simply an alias for the predefined class System.String.” - C# Language Specification 4.2.3
http://msdn2.microsoft.com/En-US/library/aa691153.aspx
As the others are saying, they're the same. StyleCop rules, by default, will enforce you to use string as a C# code style best practice, except when referencing System.String static functions, such as String.Format, String.Join, String.Concat, etc...
New answer after 6 years and 5 months (procrastination).
While string is a reserved C# keyword that always has a fixed meaning, String is just an ordinary identifier which could refer to anything. Depending on members of the current type, the current namespace and the applied using directives and their placement, String could be a value or a type distinct from global::System.String.
I shall provide two examples where using directives will not help.
First, when String is a value of the current type (or a local variable):
class MySequence<TElement>
{
    public IEnumerable<TElement> String { get; set; }

    void Example()
    {
        var test = String.Format("Hello {0}.", DateTime.Today.DayOfWeek);
    }
}
The above will not compile because IEnumerable<> does not have a non-static member called Format, and no extension methods apply. In the above case, it may still be possible to use String in other contexts where a type is the only possibility syntactically. For example String local = "Hi mum!"; could be OK (depending on namespace and using directives).
Worse: Saying String.Concat(someSequence) will likely (depending on usings) go to the Linq extension method Enumerable.Concat. It will not go to the static method string.Concat.
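A minimal sketch of that pitfall (a self-contained variant of the MySequence example, assuming using System.Linq is in scope):
using System.Collections.Generic;
using System.Linq;

class MySequence<TElement>
{
    public IEnumerable<TElement> String { get; set; }

    void Example(IEnumerable<TElement> someSequence)
    {
        // Binds to the LINQ extension method Enumerable.Concat on the String
        // property, not to the static method string.Concat.
        IEnumerable<TElement> combined = String.Concat(someSequence);
    }
}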
Secondly, when String is another type, nested inside the current type:
class MyPiano
{
    protected class String
    {
    }

    void Example()
    {
        var test1 = String.Format("Hello {0}.", DateTime.Today.DayOfWeek);
        String test2 = "Goodbye";
    }
}
Neither statement in the Example method compiles. Here String is always a piano string, MyPiano.String. No member (static or not) Format exists on it (or is inherited from its base class). And the value "Goodbye" cannot be converted into it.
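For completeness, a minimal sketch (my own addition) of how to reach the BCL type from inside MyPiano anyway:
using System;

class MyPiano
{
    protected class String
    {
    }

    void Example()
    {
        // The C# alias always means System.String, and global:: bypasses the nested type.
        var test1 = string.Format("Hello {0}.", DateTime.Today.DayOfWeek);
        var test2 = global::System.String.Format("Hello {0}.", DateTime.Today.DayOfWeek);
    }
}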
Using System types makes it easier to port between C# and VB.Net, if you are into that sort of thing.
Against what seems to be common practice among other programmers, I prefer String over string, just to highlight the fact that String is a reference type, as Jon Skeet mentioned.
string is an alias (or shorthand) of System.String. That means, by typing string we mean System.String. You can read more at this link: 'string' is an alias/shorthand of System.String.
I'd just like to add this to lfoust's answer, from Richter's book:
The C# language specification states, “As a matter of style, use of the keyword is favored over
use of the complete system type name.” I disagree with the language specification; I prefer
to use the FCL type names and completely avoid the primitive type names. In fact, I wish that
compilers didn’t even offer the primitive type names and forced developers to use the FCL
type names instead. Here are my reasons:
I’ve seen a number of developers confused, not knowing whether to use string
or String in their code. Because in C# string (a keyword) maps exactly to
System.String (an FCL type), there is no difference and either can be used. Similarly,
I’ve heard some developers say that int represents a 32-bit integer when the application
is running on a 32-bit OS and that it represents a 64-bit integer when the application
is running on a 64-bit OS. This statement is absolutely false: in C#, an int always maps
to System.Int32, and therefore it represents a 32-bit integer regardless of the OS the
code is running on. If programmers would use Int32 in their code, then this potential
confusion is also eliminated.
In C#, long maps to System.Int64, but in a different programming language, long
could map to an Int16 or Int32. In fact, C++/CLI does treat long as an Int32.
Someone reading source code in one language could easily misinterpret the code’s
intention if he or she were used to programming in a different programming language.
In fact, most languages won’t even treat long as a keyword and won’t compile code
that uses it.
The FCL has many methods that have type names as part of their method names. For
example, the BinaryReader type offers methods such as ReadBoolean, ReadInt32,
ReadSingle, and so on, and the System.Convert type offers methods such as
ToBoolean, ToInt32, ToSingle, and so on. Although it’s legal to write the following
code, the line with float feels very unnatural to me, and it’s not obvious that the line is
correct:
BinaryReader br = new BinaryReader(...);
float val = br.ReadSingle(); // OK, but feels unnatural
Single val = br.ReadSingle(); // OK and feels good
Many programmers that use C# exclusively tend to forget that other programming
languages can be used against the CLR, and because of this, C#-isms creep into the
class library code. For example, Microsoft’s FCL is almost exclusively written in C# and
developers on the FCL team have now introduced methods into the library such as
Array’s GetLongLength, which returns an Int64 value that is a long in C# but not
in other languages (like C++/CLI). Another example is System.Linq.Enumerable’s
LongCount method.
I didn't get his opinion before I read the complete paragraph.
String (System.String) is a class in the base class library. string (lower case) is a reserved word in C# that is an alias for System.String. Int32 vs int is a similar situation, as is Boolean vs. bool. These C#-specific keywords enable you to declare primitives in a style similar to C.
It's a matter of convention, really. string just looks more like C/C++ style. The general convention is to use whatever shortcuts your chosen language has provided (int/Int for Int32). This goes for "object" and decimal as well.
Theoretically this could help to port code into some future 64-bit standard in which "int" might mean Int64, but that's not the point, and I would expect any upgrade wizard to change any int references to Int32 anyway just to be safe.
@JaredPar (a developer on the C# compiler and prolific SO user!) wrote a great blog post on this issue. I think it is worth sharing here. It is a nice perspective on our subject.
string vs. String is not a style debate
[...]
The keyword string has concrete meaning in C#. It is the type System.String which exists in the core runtime assembly. The runtime intrinsically understands this type and provides the capabilities developers expect for strings in .NET. Its presence is so critical to C# that if that type doesn’t exist the compiler will exit before attempting to even parse a line of code. Hence string has a precise, unambiguous meaning in C# code.
The identifier String though has no concrete meaning in C#. It is an identifier that goes through all the name lookup rules as Widget, Student, etc … It could bind to string or it could bind to a type in another assembly entirely whose purposes may be entirely different than string. Worse it could be defined in a way such that code like String s = "hello"; continued to compile.
class TricksterString
{
    void Example()
    {
        String s = "Hello World"; // Okay but probably not what you expect.
    }
}

class String
{
    public static implicit operator String(string s) => null;
}
The actual meaning of String will always depend on name resolution.
That means it depends on all the source files in the project and all
the types defined in all the referenced assemblies. In short it
requires quite a bit of context to know what it means.
True that in the vast majority of cases String and string will bind to
the same type. But using String still means developers are leaving
their program up to interpretation in places where there is only one
correct answer. When String does bind to the wrong type it can leave
developers debugging for hours, filing bugs on the compiler team, and
generally wasting time that could’ve been saved by using string.
Another way to visualize the difference is with this sample:
string s1 = 42; // Errors 100% of the time
String s2 = 42; // Might error, might not, depends on the code
Many will argue that while this information is technically accurate, using String is still fine because it's exceedingly rare that a codebase would define a type of this name. Or that when String is defined it's a sign of a bad codebase.
[...]
You’ll see that String is defined for a number of completely valid purposes: reflection helpers, serialization libraries, lexers, protocols, etc … For any of these libraries String vs. string has real consequences depending on where the code is used.
So remember when you see the String vs. string debate this is about semantics, not style. Choosing string gives crisp meaning to your codebase. Choosing String isn’t wrong but it’s leaving the door open for surprises in the future.
Note: I copied/pasted most of the blog post for archival reasons. I left out some parts, so I recommend reading the full blog post if you can.
String is not a keyword, so it can be used as an identifier, whereas string is a keyword and cannot be used as an identifier. From a functional point of view, both are the same.
Coming late to the party: I use the CLR types 100% of the time (well, except if forced to use the C# type, but I don't remember when the last time that was).
I originally started doing this years ago, as per the CLR books by Richter. It made sense to me that all CLR languages ultimately have to be able to support the set of CLR types, so using the CLR types yourself provided clearer, and possibly more "reusable" code.
Now that I've been doing it for years, it's a habit and I like the coloration that VS shows for the CLR types.
The only real downer is that auto-complete uses the C# type, so I end up re-typing automatically generated types to specify the CLR type instead.
Also, now, when I see "int" or "string", it just looks really wrong to me, like I'm looking at 1970's C code.
There is no difference.
The C# keyword string maps to the .NET type System.String - it is an alias that keeps to the naming conventions of the language.
Similarly, int maps to System.Int32.
There's a quote on this issue from Daniel Solis' book.
All the predefined types are mapped directly to
underlying .NET types. The C# type names (string) are simply aliases for the
.NET types (String or System.String), so using the .NET names works fine syntactically, although
this is discouraged. Within a C# program, you should use the C# names
rather than the .NET names.
Yes, there's no difference between them, just like bool and Boolean.
string is a keyword, and you can't use string as an identifier.
String is not a keyword, and you can use it as an identifier:
Example
string String = "I am a string";
The keyword string is an alias for System.String. Aside from the keyword issue, the two are exactly equivalent.
typeof(string) == typeof(String) == typeof(System.String)


Range[] instead of get_Range()

At http://msdn.microsoft.com/en-us/library/microsoft.office.tools.excel.worksheet.get_range.aspx it says to use the Range property instead of get_Range(Object Cell1, Object Cell2).
They both do the same thing: get a Microsoft.Office.Interop.Excel.Range object that represents a cell or a range of cells. So, what's the difference, other than one being a method and the other a property? Why do they point to the use of Range[], and what's the reason for it?
Range() is faster than Range[]
In practice we have noticed this to be the case, but we should give a reason for saying so.
This shortcut is convenient when you want to refer to an absolute range. However, it is not as flexible as the Range property, as it cannot handle variable input such as strings or object references. So at the end of the day you will still end up referring to ranges the long way, although the shorthand improves readability. Hence you might as well get it right the first time without spending more resources.
Now, why is it slow? It comes down to compilation.
"During run-time Excel always uses conventional notation (or so I've been told), so when the code is being compiled all references in shortcut notation must be converted to conventional range form (or so I've been told). {ie [A150] must be converted to Range("A150") form}. Whatever the truth of what I've been told, Visual Basic has to memorize both its compiled version of the code and whatever notation you used to write your code (i.e. whatever's in the code module), the workbook properties for the file size (the memory used) thus goes up slightly. "
As you can see, my answer was more in line with VBA. However, after some research it seems the VBA side doesn't do much of the slowing down, so you only need to take care of the C# side. @Hans gives you a better answer from a C# perspective. Hopefully, combining both, you will end up with well-performing code :)
Here are some findings on the performance of Range[] vs Range() in Excel.
If you use C# version 4 and up then you can use the Range indexer. But you have to use get_Range() on earlier versions.
Do note that there's something special about it: the default property of a COM interface maps to the indexer, but the Range property is not the default property of a Worksheet; it is just a regular property. The trouble is, C# does not permit declaring indexed properties other than the indexer. That works in VB.NET, but in C# you had to call the property getter method directly. By popular demand, the C# team dropped this restriction in version 4 (VS2010), but only for COM interfaces; you still cannot declare indexed properties in your own code.
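A minimal sketch of the two call styles (it assumes a reference to Microsoft.Office.Interop.Excel and an open worksheet):
using Excel = Microsoft.Office.Interop.Excel;

class RangeDemo
{
    static void Fill(Excel.Worksheet sheet)
    {
        // C# 4 and later: the indexed Range property can be used directly.
        Excel.Range viaIndexer = sheet.Range["A1", "B2"];

        // Earlier C# versions: call the generated getter method instead.
        Excel.Range viaGetter = sheet.get_Range("A1", "B2");

        // Both refer to the same cells through the same COM property getter.
        viaIndexer.Value2 = 1;
        viaGetter.Value2 = 2;
    }
}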
I have used both and both returned the same results. I think Range[] actually uses get_Range() internally.
As a matter of naming convention, I only use Range[] now.

What's the best way to communicate the purpose of a string parameter in a public API?

According to the guidance published in New Recommendations for Using Strings in Microsoft .NET 2.0, the data in a string may exhibit one of the following types of behavior:
A non-linguistic identifier, where bytes match exactly.
A non-linguistic identifier, where case is irrelevant, especially a piece of data stored in most Microsoft Windows system services.
Culturally-agnostic data, which still is linguistically relevant.
Data that requires local linguistic customs.
Given that, I'd like to know the best way to communicate which behavior is expected of a string parameter in a public API. I wasn't able to find an answer in the Framework Design Guidelines.
Consider the following methods:
f(string this_is_a_linguistic_string)
g(string this_is_a_symbolic_identifier_so_use_ordinal_compares)
Is variable naming and XML documentation the best I can do? Could I use attributes in some way to mark the requirements of the string?
Now consider the following case:
h(Dictionary<string, object> dictionary)
Note that the dictionary instance is created by the caller. How do I communicate that the callee expects the IEqualityComparer<string> object held by the dictionary to perform, for example, a case-insensitive ordinal comparison?
Use the documentation syntax:
/// <param name="dictionary">
/// ... string is case sensitive ordinal ...
/// </param>
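For the dictionary case, one option (a sketch of my own, not something from the guidelines) is to have the caller pick the comparer explicitly and the callee verify it:
using System;
using System.Collections.Generic;

class Example
{
    // Caller side: choose the comparer explicitly so the contract is visible.
    static void Caller()
    {
        var dictionary = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase);
        H(dictionary);
    }

    // Callee side: a defensive run-time check makes the expectation explicit.
    static void H(Dictionary<string, object> dictionary)
    {
        if (!Equals(dictionary.Comparer, StringComparer.OrdinalIgnoreCase))
            throw new ArgumentException(
                "Expected a case-insensitive ordinal comparer.", nameof(dictionary));
        // ... use the dictionary ...
    }
}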
You could always use a modified Hungarian convention (and I mean the Joel-approved kind):
Prefix cs for case-sensitive (non-linguistic)
Prefix ci for case-insensitive (non-linguistic)
Prefix cil for culture-invariant linguistic
Prefix csl for culture-specific linguistic or culture-sensitive linguistic
The "i" and "s" have consistent implications here, even though they can mean two different things depending on the context, which is a helpful attribute. "i" means "don't care" (about case/culture) and "s" means "do care".
Of course, as a disclaimer, I never do this, because for the vast majority of strings I deal with, the distinction between these types of strings is blurry at best. But if they have semantic meaning to you, this would be a reasonable alternative to relying on XML docs. Especially when you're using them as arguments to private methods, which most people don't write XML docs for.
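A sketch of what those prefixes might look like in a signature (the type, method, and parameter names are purely illustrative):
class Registration
{
    // The prefixes from the answer above encode the comparison semantics.
    void Register(
        string csMachineName,   // cs: case-sensitive, non-linguistic
        string ciUserName,      // ci: case-insensitive, non-linguistic
        string cilProductCode,  // cil: culture-invariant linguistic
        string cslDisplayName)  // csl: culture-specific linguistic
    {
        // ... implementation elided ...
    }
}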

c#: difference between "System.Object" and "object"

In C#, is there any difference between using System.Object in code rather than just object, or System.String rather than string and so on? Or is it just a matter of style?
Is there a reason why one form is preferable to the other?
string is an alias for global::System.String. It's simply syntactic sugar. The two are exactly interchangable in almost all cases, and there'll be no difference in the compiled code.
Personally I use the aliases for variable names etc, but I use the CLR type names for names in APIs, for example:
public int ReadInt32() // Good, language-neutral
public int ReadInt() // Bad, assumes C# meaning of "int"
(Note that the return type isn't really a name - it's encoded as a type in the metadata, so there's no confusion there.)
The only places I know of where one can be used and the other can't (that I'm aware of) are:
nameof prohibits the use of aliases
When specifying an enum base underlying type, only the aliases can be used
The object type is an alias for System.Object. The object type is used and shown as a keyword. I think it has something to do with legacy, but that's just a wild guess.
Have a look at this MSDN page for all details.
I prefer the use of the lowercased versions, but for no special reasons. Just because the syntax highlighting is different on these "basic" types and I don't have to use the shift key when typing...
One is an alias to the other. It's down to style.
string is an alias for global::System.String, and object for global::System.Object
Providing you have using System; in your class, String / string and Object / object are functionally identical and usage is a matter of style.
(EDIT: removed slightly misleading quote, as per Jon Skeet's comment)
string (with the lowercase "s") is the string type of the C# language and the type System.String is the implementation of string in the .NET framework.
In practice there is no difference besides stylistic ones.
EDIT: Since the above obviously wasn't clear enough, there is no difference between them, they are the same type once compiled. I was explaining the semantic difference that the compiler sees (which is just syntactic sugar, much like the difference between a while and for loop).
There is no difference. There are a number of types, called primitive data types, which are treated by the compiler in the style you mentioned.
The capitalized naming style follows the ISO naming rules. It's more general and common; it forces the same naming rules for all objects in the source, without the exceptions the C# compiler has.
To my knowledge, it's a shortcut; it's easier to use string rather than System.String.
But be careful - there's a difference between String and string (C# is case sensitive).
object, int, long and bool were provided as training wheels for engineers that had trouble adapting to the idea that the data types were not a fixed part of the language. C#, unlike the languages that went before it, has no limit on the number of data types you can add. The 'System' library provides a starter kit featuring such useful types as System.Int32, System.Boolean, System.Double, System.DateTime and so on, but engineers are encouraged to add their own. Because Microsoft was interested in quick adoption of their new language, they provided aliases that made it appear as if the language was more 'C'-like, but these aliases are a completely disposable feature (C# would be just as good a language if you removed all the built-in aliases, probably better).
While StyleCop does enforce the use of the legacy C-style aliases, it is a blemish on an otherwise logical set of rules. As yet, I've not heard a single justification for this rule (SA1121) that wasn't based on dogma. If you think SA1121 is logical, then why is there no built-in type for datetime?
