I'm planning to start using FxCop in one of our ongoing projects. But when I tried it with all available rules selected, it looked like I would have to make lots of changes to my code. Being a "team member" I can't start making these changes right away (naming convention changes, etc.). Anyway, I would like to start using FxCop with a minimal rule set and gradually increase the rule set as we go on. Can you suggest some must-have FxCop rules that I should start following? Or do you suggest a better approach?
Note: Most of my code is in C#.
On our most important code:
Treat warnings as errors (level 4)
FxCop must pass 100% (no ignores generally allowed)
Gendarme used as a guideline (sometimes it conflicts with FxCop)
Believe it or not, FxCop teaches you a hell of a lot about how to write better code... great tool! So for us, all rules are equally important.
In my opinion, do the following:
For any new project, follow all FxCop rules. You may want to disable some of them, since not everything will make sense for your project.
For an existing project, follow the rules from these categories as a minimum set:
Globalization
Interoperability
Security
Performance
Portability
There are typically only a few violations of these rules in an existing project, compared to the other categories, but fixing them may improve the quality of your application. Once these rules are clean, try to fix the following categories:
Design
Usage
These will make it easier for you to spot bugs that have to do with the violations, but you will have a large number of violations in existing code.
Always sort the violations by level/fix category and start with the critical ones. Skip the warnings for now.
In case you didn't know, there's also StyleCop available from Microsoft, checking your code on the source level. Be sure to enable MSBuild integration during installation.
Some of the rules help us avoid bugs or leaks:
Do not catch general exception types (maybe the best rule for us; depending on the case, it can be easy or difficult to enforce)
Test for NaN correctly (easy to enforce)
Disposable fields should be disposed (quite easy to enforce)
Dispose should call base dispose (quite easy to enforce)
Disposable types should declare finalizer (quite easy to enforce)
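The three Dispose-related rules above all fall out of the standard dispose pattern; here is a minimal sketch (the type names are made up for illustration):

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private readonly IDisposable _stream; // hypothetical disposable field

    public ResourceHolder(IDisposable stream)
    {
        _stream = stream;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing && _stream != null)
        {
            _stream.Dispose(); // "Disposable fields should be disposed"
        }
        // Release unmanaged resources here; if you own any,
        // also declare a finalizer ("Disposable types should declare finalizer").
    }
}

public class Derived : ResourceHolder
{
    public Derived(IDisposable stream) : base(stream) { }

    protected override void Dispose(bool disposing)
    {
        // ...clean up Derived's own state first...
        base.Dispose(disposing); // "Dispose should call base dispose"
    }
}
```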
Some help us have a better design, but be careful, they may lead you to big refactoring when central API is impacted. We like
Collection properties should be readonly (difficult to enforce in our case)
Do not expose generic list
Members should not expose certain concrete types
Review unused parameters (easily improves your API)
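As an illustration of "Do not expose generic lists" and "Collection properties should be readonly", the usual fix is to keep the List&lt;T&gt; private and expose a read-only view (the type names here are invented):

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class Order
{
    private readonly List<string> _items = new List<string>();

    // Violates both rules: exposes List<T> through a settable property.
    // public List<string> Items { get; set; }

    // Compliant: read-only property returning a collection wrapper.
    public ReadOnlyCollection<string> Items
    {
        get { return _items.AsReadOnly(); }
    }

    public void AddItem(string item)
    {
        _items.Add(item);
    }
}
```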
Someone on our project tried the performance rules with no improvement. (These rules are really about micro-optimization, which gives no result unless profiling has identified a bottleneck where micro-optimization is needed.) I would suggest not starting with these ones.
Turn on one rule at a time. Fix or exclude any warnings it reports, then start on the next one.
An alternative to FxCop would be the tool NDepend, which lets you write Code Rules over C# LINQ Queries (namely CQLinq). Disclaimer: I am one of the developers of the tool.
More than 200 code rules are proposed by default. Customizing existing rules or creating your own rules is straightforward thanks to the well-known C# LINQ syntax.
NDepend overlaps with FxCop on some code rules, but proposes plenty of unique code rules. Here are a few rules that I would classify as must-follow:
Avoid decreasing code coverage by tests of types
Avoid making complex methods even more complex (Source CC)
Avoid transforming an immutable type into a mutable one
Overrides of Method() should call base.Method()
Avoid the Singleton pattern
Types with disposable instance fields must be disposable
Disposable types with unmanaged resources should declare finalizer
Avoid namespaces mutually dependent
Avoid namespaces dependency cycles
UI layer shouldn't use directly DB types
API Breaking Changes: Methods
Complex methods partially covered by tests should be 100% covered
Potentially dead Types
Structures should be immutable
Avoid naming types and namespaces with the same identifier
Notice that rules can be verified live in Visual Studio and at build time, in a generated HTML+JavaScript report.
The minimal FxCop rule set (also used by Code Analysis if you're using VS2010 Premium or Ultimate) is the following: http://msdn.microsoft.com/en-us/library/dd264893.aspx
We're a web shop so we drop the following rules:
Anything with Interop (we don't support COM integration unless a client pays for it!)
Key signing (web apps shouldn't need high security privileges)
Occasionally we'll drop the rule about using higher frameworks in dependencies, as some of our CMSes are still .NET 2.0, but that doesn't mean the DAL/business layers can't be .NET 3.5, as long as you're not trying to return an IQueryable (or anything .NET 3.0/3.5).
In our process, we enabled all the rules, and we have to justify any suppressions as part of our review process. Often it's just not possible to fix the error in a time-efficient manner given deadlines, or it's an error raised in error (this sometimes occurs, especially if your architecture handles plug-ins via reflection).
We also wrote a custom rule for globalization to replace an existing one because we didn't want to globalize the strings passed to exceptions.
In general, I'd say it's best to try and adhere to all rules. In my current home project, I have four build configurations - one set that specify the CODE_ANALYSIS define and one set that don't. That way, I can see all the messages I have suppressed just by building a non-CODE_ANALYSIS configuration. This means that suppressed messages can be periodically reviewed and potentially addressed or removed as required.
What I'd like to do in the long-run is have a build step that analyzes the SuppressMessage attributes against the actual errors and highlights those suppressions that are no longer required, but that's not currently possible with my setup.
The design and security rules are a good place to start.
I fully agree with Sklivvz. But for existing projects, you may clean up FxCop violations category by category.
From time to time, Gendarme accepts new rules that are quite useful. So you may use Gendarme besides.
Please clarify: is Code Contracts similar to FxCop and StyleCop?
As per the online references, we need to add code implementing the contract conditions inside the functions of existing code:
public void Initialize(string name, int id)
{
    Contract.Requires(!string.IsNullOrEmpty(name));
    Contract.Requires(id > 0);
    Contract.Ensures(Name == name);
    // Do some work
}
Usually with FxCop, the code we want to check is in one DLL and the class library containing the rules to check is in a separate DLL.
Likewise, can we create a separate class library for Code Contracts to rule over the existing code?
Please confirm.
Disclaimer: you'd better take their current docs and read them through, write down the features, and then compare them. What I wrote below is some facts I remembered from a long time ago about their core functionalities, and I can't guarantee that they are not outdated and now wrong. For example, someone could write some complex and heavy rules for FxCop that behave as Contracts do. This is why I'm marking it as community wiki. Please correct me if I'm wrong anywhere.
No they are not similar, although they share common target: help you find bugs.
FxCop is a "static analyzer", which inspects your code and tries to find "bad patterns". You will not see any FxCop rules/effects during runtime. FxCop has a set of "rules" that will be used during inspection and it reports to you whenever it finds a rule to be broken. Rules can be very easy and nitpicking like you must initialize every variable or you must name classes with uppercase or complex ones like you shouldn't have more than one loop in a method. Some rules are available by the standard installation, and you can expand the ruleset with your own rules.
CodeContracts is two-sided. At the most basic level, it is a set of helper methods, like throw if argument 'foo' is null. At runtime, when someone passes a null, it will throw. Just that simple. However, if you use those helper methods correctly and widely in your code, you will be able to run an additional static analyzer. It will scan your code, find all usages of those helper methods, and will try to automatically detect any places where their contracts are not satisfied. So, with the "argument is null" example, it will try to find all usages of that function, check who calls it with what args, it will try to deduce (prove) if that arg can be null at all anytime, and will warn you if it finds such case. There are more types of such validators other than just not-null, but you can't add/write your own. I mean, you could add more such helper validators, but the static analyzer wouldn't pick them up (it's too hard to write a general theorem prover for any rule).
CodeContracts is more powerful in its analyses than FxCop, but limited in diversity and scope. CodeContracts cannot check the structure of the code: it will not check the number of loops, code complexity, names of methods, code hierarchy, etc. It can only attempt to prove/disprove some contracts (runtime requirements) of some methods/interfaces. FxCop on the other hand can inspect your code, style, structure, etc, but it will not "prove" or "deduce" anything - it will just check for some bad patterns defined by rules.
While FxCop is used to verify code style or typical performance issues,
Code Contracts influences your code design, so it aims at higher-level goals. It's a .NET implementation attempt of the contract programming methodology used in the Eiffel language. The methodology says that every type will behave correctly (fulfilling its postconditions and invariants) only if it receives input that satisfies its required preconditions.
You describe your types' preconditions, invariants, and postconditions using library helper methods and attributes (Contract.Requires, etc.), and the Code Contracts static analyzer will be able to detect their failures at compile time.
(Last time I looked at it, the tool was rather slow and hard to use. It seemed it hadn't been completed by the Microsoft Research team. Fortunately, a few days ago a new version was released with bug fixes for async/await as well as VS2015 support.)
I have two separate namespaces in my assembly: DataAccess and DomainLogic.
I need a code snippet checking that no class in DomainLogic depends on the namespace DataAccess.
How would you do it?
PS: I think I saw an example for such a unit test in Mark Seemann's awesome book Dependency Injection in .Net, but I don't have it available here and can't find an example via Google.
Edit
Since all reactions so far point out that I should just split these interdependent classes into two different assemblies, I would like to point out that this is currently not an option (although this is indeed one of my main goals in the end). I'm dealing with legacy code and I just can't refactor it in one big bang right now. The separate namespaces and the test for dependencies between them are an intermediate step. As soon as that test passes, I can go ahead and move a part of the code into a different assembly.
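For what it's worth, a rough sketch of such a test using plain reflection could look like the following. Caveat: reflection only sees type references in base types, fields, and method signatures, not inside method bodies, so a complete check would need an IL reader such as Mono.Cecil. The namespace names match the question; everything else is invented:

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class DependencyTest
{
    // Returns the DomainLogic types whose base type, fields, or method
    // signatures mention a type from the DataAccess namespace.
    public static string[] DomainLogicTypesUsingDataAccess(Assembly assembly)
    {
        const BindingFlags All = BindingFlags.Public | BindingFlags.NonPublic |
                                 BindingFlags.Instance | BindingFlags.Static;

        return (from type in assembly.GetTypes()
                where type.Namespace == "DomainLogic"
                let referenced = new[] { type.BaseType }
                    .Concat(type.GetFields(All).Select(f => f.FieldType))
                    .Concat(type.GetMethods(All).SelectMany(m =>
                        m.GetParameters().Select(p => p.ParameterType)
                         .Concat(new[] { m.ReturnType })))
                where referenced.Any(t => t != null && t.Namespace == "DataAccess")
                select type.FullName).ToArray();
    }
}
```

A unit test would then assert that the returned array is empty.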
All code within an assembly can legitimately access public and internal code throughout the rest of the assembly. So such unit tests, even if possible, would be a bad idea.
If you split the DataAccess types out into a separate project and made them all internal, then nothing would be able to access them. Clearly not what you'd want. However, by splitting it out, you can ensure that DataAccess can access DomainLogic, but not vice versa. This presumably is what you want.
In the meantime, rather than try to develop unit tests to check the rule that "DomainLogic must not access DataAccess", use code reviews instead. Assuming you are using agile methods (if not, do so!), all activities will be documented as tasks. No task can be considered "done" until someone who understands and embraces your rule has reviewed the code changes for that task. Break the rule and the task fails code review and must be reworked before it's done.
There's a tool that does just this: checks namespace dependencies based on your rules and reports violations at build-time as warnings or errors.
It's called NsDepCop, free, open-source.
The rule config would look something like this:
<NsDepCopConfig IsEnabled="True" CodeIssueKind="Warning">
    <Allowed From="*" To="*" />
    <Disallowed From="DomainLogic" To="DataAccess" />
</NsDepCopConfig>
I don't mean to troll, but I really don't get it. Why would language designers enforce private methods instead of some naming convention (see __ in Python)?
I searched for the answer and usual arguments are:
a) To make the implementation cleaner/avoid long vertical list of methods in IDE autocompletion
b) To announce to the world which methods are public interface and which may change and are just for implementation purpose
c) Readability
OK, so now: all of those could be achieved by naming all private methods with a __ prefix, or by a "private" keyword which has no implications other than being information for the IDE (don't put those in autocompletion) and other programmers (don't use it unless you really must). Hell, one could even require an unsafe-like keyword to access private methods, to really discourage this.
I am asking this because I work with some C# code and I keep changing private methods to public for test purposes, as many in-between private methods (like string generators for XML serialization) are very useful for debugging purposes (like writing part of a string to a log file, etc.).
So my question is:
Is there anything which is achieved by access restriction but couldn't be achieved by naming conventions without restricting the access ?
There are a couple questions/issues that you are raising, so I'll handle each one separately.
How do I test private methods?
Beyond the debate/discussion of if you should test private methods, there are a few ways you can do this.
Refactor
A broad general answer is that you can refactor the behaviour into a separate testable class which the original class leverages. This is debatable and not always applicable depending on your design or privileges to do so.
InternalsVisibleTo
A common routine is to extract testable logic into a method and mark it as internal. In your assembly properties, you can add the attribute [InternalsVisibleTo("MyUnitTestingProject")]. This allows you to access the method from your unit-testing project while still hiding it from all other assemblies. http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx
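A sketch of what that looks like in code (the assembly and type names are placeholders):

```csharp
// In the production assembly, e.g. in Properties/AssemblyInfo.cs:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyUnitTestingProject")]

public class Parser
{
    // internal instead of private: invisible to other assemblies,
    // but callable from MyUnitTestingProject.
    internal string NormalizeInput(string raw)
    {
        return raw == null ? string.Empty : raw.Trim();
    }
}
```

Note that if the test assembly is strong-named, the attribute argument must also include its full public key.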
However, given your comments, you are unable to permanently change the structure of the source code in your workplace; you are changing the accessors to test, testing, then changing them back before committing. In this case there are two options:
Partial testing classes
Mark the class as partial and create a second partial class file which contains your tests (or public wrappers to the private members). Then, when it comes time to merge/commit, just remove your partial class files from the project and remove the partial keyword from the main class. In addition, you can wrap the entire testing code file in #if DEBUG (or another directive) so it's only available when unit testing and will not affect production/development code.
http://weblogs.asp.net/ralfw/archive/2006/04/14/442836.aspx
public partial class MyClass
{
    private string CreateTempString()
    {
        return "Hello World!";
    }
}

#if DEBUG
// in file "MyClass_Accessor.cs"
public partial class MyClass
{
    public string CreateTempString_Accessor()
    {
        return CreateTempString();
    }
}
#endif
Reflection
You can still access private members via reflection:
public class Test
{
    private string PrivateField = "private";
}

// Elsewhere, e.g. in a test:
Test t = new Test();
var fieldInfo = typeof(Test).GetField("PrivateField", BindingFlags.Instance | BindingFlags.NonPublic);
Console.WriteLine(fieldInfo.GetValue(t)); // outputs "private"
Your unit tests could pull out private/hidden data in classes this way. In fact, Microsoft provides two classes that do exactly this for you: PrivateObject and PrivateType
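With MSTest's PrivateObject, the manual reflection above collapses to something like this sketch:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TestPrivates
{
    [TestMethod]
    public void CanReadPrivateField()
    {
        var target = new PrivateObject(new Test());

        // Reads the private field without any BindingFlags ceremony.
        Assert.AreEqual("private", (string)target.GetFieldOrProperty("PrivateField"));

        // Private methods can be invoked the same way:
        // var result = target.Invoke("CreateTempString");
    }
}
```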
Given your in-house development process limitations, this is likely your best bet as you'll be able to manage your own tests outside the main project libraries without having to alter anything in the core code.
Note that Silverlight (and likely other Core-CLR runtimes) strictly enforce non-public access during reflection, so this option is not applicable in those cases.
So, there are a few ways to test private members, and I'm sure there are a few more clever/not-so-clever methods of doing so lurking out there.
Could all of those benefits be achieved by naming all private methods with a __ prefix, or by introducing a private-but-accessible access modifier?
The benefits cited by you (citing others) being:
To make the implementation cleaner/avoid a long vertical list of methods in IDE autocompletion
To announce to the world which methods are the public interface and which may change and are just for implementation purposes
Readability
Now you add that these could all be achieved with __, or by a change to the language specification and/or IDEs that would support a private-but-accessible access modifier, possibly with some unsafe-like keyword that would discourage its use. I don't think it's worthwhile debating changes to the current features/behaviours of the language and IDE (and possibly it wouldn't make sense for Stack Overflow), so focusing on what is available:
1) Cleaner implementation and intellisense
The Visual Studio IDE (I can't speak for MonoDevelop) does support hiding members from IntelliSense when they're marked with [EditorBrowsableAttribute]. But this only works if the developer enables the option "Hide Advanced Members" in their Visual Studio options. (Note that it will not suppress members in IntelliSense when you're working within the same assembly.)
http://msdn.microsoft.com/en-us/library/system.componentmodel.editorbrowsableattribute.aspx
So marking a public member as such makes it behave (intellisense-wise) as internal-ish (no [InternalsVisibleTo] support). So if you're in the same assembly, or if you do not have the Hide Advanced Members enabled, you'll still see a long list of __ members in the intellisense. Even if you have it hidden from intellisense, it's still fully accessible according to its current access modifier.
2) Public usage interface/contract
This assumes that all developers in the C#, Visual Basic, F#, C++/CLI, and wider .NET development world will adopt the same __ naming convention and adhere to it as assemblies are compiled and interchanged between developers. Maybe if you're scripting in IronPython you can get away with it, or if your company internally adopts this approach. But generally speaking, it's not going to happen, and .NET developers will likely be hesitant to leverage libraries adopting this convention, as it is not the general .NET culture.
3) Readability
This kind of goes with #2 in that what is "readable" depends on the culture and what developers within that field expect; it is certainly debatable and subjective. I would wager that the majority of the C# developers find the strict/enforced encapsulation to significantly improve code readability and I'm sure a good chunk of them would find __ used often would detract from that. (as a side, I'm sure it's not uncommon for developers to adopt _ or __ prefixes for private fields and still keep them private)
However, readability and encapsulation in C# goes beyond just public/private accessors. In C#, there are private, public, protected internal, protected, and internal (am I missing one?) each has their own use and provide different information for developers. Now I'm not sure how you would go about communicating those accessors only via __. Suggesting single underscore is protected, double underscore is private, that would definitely hamper readability.
Is there anything which is achieved by access restriction that couldn't be achieved by naming conventions without restricting the access?
If you're asking why the C# design team went this route, well, I guess you'd have to ask Mr. Hejlsberg one day. I know they were creating a language gleaning the best parts of C/C++ with a strong focus on object-oriented principles.
As to what is achieved by enforcing access via the access modifiers:
More guaranteed proper access by consumers of the API. Say your class utilizes a MakeTempStringForXMLSerialization method which stores the string as a class property for serialization, but for performance reasons forgoes costly checks (because you, as a developer, have done unit testing to ensure that all of the class's fields will be valid via the public API). If a third party then does some lovely garbage-in, garbage-out, they'll blame you and/or the vendor for a shoddy library. Is that fair? Not necessarily; they put the garbage in, but the reality is many will still blame the vendor.
For new developers attempting to understand how your API works, it helps to simplify their experience. Yes, developers should read the documentation, but if the public API is intuitive (as it generally should be) and not exposing a boatload of members that shouldn't be accessed, then it's far less confusing and less likely they'll accidentally feed garbage into the system. It will also lower the overhead to get the developers to consume and leverage your API effectively without hassles. This is especially the case when it comes to any updates you publish of your API in which you wish to change/refactor/improve your internal implementation details.
So from a business perspective, it protects them from liability and bad relations with customers and is more appealing for developers to purchase and invest in it.
Now this could all work, as you say, if everyone followed the convention that __ members are not to be accessed outside the class, or if there were some unsafe marker saying "if you do this and it breaks, it's not my fault!", but that's just not the reality of C#/.NET development. The access modifiers provided by C# give you what the __ convention promises while ensuring that all developers adhere to it.
One could argue that the restricted access is an illusion as consumers can work around it via reflection (as demonstrated above), and thus there is actually no programmatic difference between the access modifiers and __ (or other) notation. (On Silverlight/Core-CLR, there is, most definitely a programmatic difference though!) But the work developers would go through to access those private fields is the difference between you giving consumers an open door with a sign "don't go in" (that you hope they can read) and a door with a lock that they have to bash down.
So in the end, what does it actually provide? Standardized, enforced access to members, whereas __ provides non-standardized, non-enforced access to members. In addition, __ lacks the range of description that the variety of available access modifiers supplies.
Update (January 2nd, 2013)
I know it's been half a year, but I've been reading through the C# language specification and came across this little gem from section 2.4.2 Identifiers which states:
Identifiers containing two consecutive underscore characters (U+005F)
are reserved for use by the implementation. For example, an
implementation might provide extended keywords that begin with two
underscores.
I imagine nothing necessarily bad will happen; most likely nothing will break catastrophically if you do. But it's just one more aspect that should be considered when thinking about using double underscores in your code: the specification suggests that you do not.
The reason private methods exist is to provide encapsulation.
This allows you to provide public contracts by which you want your object to interact, yet have your internal logic and state be encapsulated in a way that refactoring would not affect consumers.
For example, you could provide public named properties, yet decide to store state in a Dictionary, similar to what typed DataSets do.
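A minimal sketch of that idea: the public contract is a pair of named properties, while the storage detail (a dictionary) stays private and can be swapped out later without breaking consumers. The type and key names are invented:

```csharp
using System.Collections.Generic;

public class Customer
{
    // Private implementation detail; consumers never see it, and it
    // could later be replaced by plain fields or a DataRow.
    private readonly Dictionary<string, object> _state = new Dictionary<string, object>();

    public string Name
    {
        get { object v; return _state.TryGetValue("Name", out v) ? (string)v : null; }
        set { _state["Name"] = value; }
    }

    public int Id
    {
        get { object v; return _state.TryGetValue("Id", out v) ? (int)v : 0; }
        set { _state["Id"] = value; }
    }
}
```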
It's not really a "Security" feature (since you always have reflection to override it), but a way to keep public APIs separate from internal implementations.
Nobody should "depend" on your internal and private implementation, they should only depend on your public (or protected) implementation.
Now, regarding unit testing, it is usually undesired to test internal implementations.
One common approach though is to declare it internal and give the test assemblies access, through InternalsVisibleToAttribute, as Chris mentioned.
Also, a clear distinction between public, private, and protected are extremely useful with inheritance, in defining what you expect to be overridable and what shouldn't.
In general, there is no real point to marking fields or methods private. It provides an artificial sense of security. All the code is running inside the same process, presumably written by people with a friendly relationship to each other. Why do they need access controls to protect the code from each other? That sounds like a cultural issue to fix.
Documentation provides more information than private/protected/public does, and a good system will have documentation anyway. Use that to guide your development. If you mark a method as "private", and a developer calls it anyway, that's a bad developer, and he will ruin your system in some other way eventually. Fix that developer.
Access controls eventually get in the way of development, and you spend time just making the compiler happy.
Of course, if you are talking strictly about C#, then you should stick to marking methods as private, because other C# developers will expect it.
It hides a lot of internal details, especially for library implementers who may actually want to hide those details.
Keep in mind that there are commercial libraries out there, being sold. They expose only a very limited set of options in their interface to their users and they want all the rest to be well hidden.
If you design a language that doesn't give this option, you're making a language that will be used almost exclusively for open-source projects (or mostly for scripting).
I don't know much about Python though, would be interesting to know if there are commercial closed-source libraries written in python.
The reasons you mentioned are good enough too.
If you're publishing a library with public members, it's possible for users to use these members. You can use naming conventions to tell them they shouldn't, but some people will.
If your "internal" members are public, it becomes impossible for you to guarantee compatibility with the old version of the library, unless you keep all of the internal structure the same and make only very minor changes. That means that every program using your library must use the version it was compiled against (and will most likely do this by shipping its own version, rather than using a shared system version).
This becomes a problem when your library has a security update. At best, all programs are using different shared versions, and you have to backport the security update to every version that's in the wild. More likely, every program ships its own version and simply will not get the update unless that program's author does it.
If you design your public API carefully, with private members for the things you might change, you can keep breaking changes to a minimum. Then all programs/libraries can use the latest version of your library, even if they were compiled for an earlier one, and they can benefit from the updates.
This can lead to some interesting debate. A main reason for marking private methods (and member variables) is to enforce information hiding. External code shouldn't know or care about how your class works. You should be free to change your impl without breaking other things. This is well established software practice.
However in real life, private methods can get in the way of testing and unforeseen changes that need to occur. Python has a belief "we're all consenting adults" and discourages private methods. So, sometimes I'll just use naming conventions and/or comment "not intended for public use". It's another one of those OO-theory vs. reality issues.
How truly "evil" are cyclic namespace dependencies within a single assembly. I understand using multiple assemblies changes everything and something like that wouldn't compile, but what is the real risk of doing it within a single assembly?
No risk at all - feel free to reference anything you want within the same assembly.
However, this approach will make your application brittle and hard to scale. A better approach is to try to keep your components as orthogonal as possible.
What the language and compiler allow you to do is not necessarily what is best for long-term developments needs (bug fixes, scaling, new features, etc.). You ought to strive for loose coupling and high cohesion.
Namespaces are a way to provide a logical organization for your code. They provide a way to reuse names by applying them within a certain context. There is nothing wrong with having classes in two namespaces depend on each other.
Cyclic dependencies between assemblies are a bit more complex. Last I checked, Visual Studio wouldn't allow this type of relationship directly.
I don't think there are any risks per se, and in some cases it's certainly the natural way of doing things. System.Text.StringBuilder obviously uses System.String, and I'd be very surprised if nothing in the System namespace used StringBuilder in turn.
It's probably worth just checking your design every so often: question whether you should be able to separate the functionality into separate components. Quite often the answer will be "no", at which point just move on with no feeling of guilt :)
I have been trying to follow StyleCop's guidelines on a project, to see if the resulting code was better in the end. Most rules are reasonable or a matter of opinion on coding standard, but there is one rule which puzzles me, because I haven't seen anyone else recommend it, and because I don't see a clear benefit to it:
SA1101: The call to {method or property name} must begin with the 'this.' prefix to indicate that the item is a member of the class.
On the downside, the code is clearly more verbose that way, so what are the benefits of following that rule? Does anyone here follow that rule?
I don't really follow this guidance unless I'm in one of the scenarios where you need it:
there is an actual ambiguity - mainly this impacts either constructors (this.name = name;) or things like Equals (return this.id == other.id;)
you want to pass a reference to the current instance
you want to call an extension method on the current instance
Other than that I consider this clutter. So I turn the rule off.
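The three cases above, in code form (all names are illustrative):

```csharp
public class Person
{
    private string name;

    public Person(string name)
    {
        this.name = name;     // genuine ambiguity: the parameter shadows the field
    }

    public void Register(Registry registry)
    {
        registry.Add(this);   // passing a reference to the current instance
    }

    public string Describe()
    {
        return this.Pretty(); // extension methods need an explicit receiver inside the class
    }
}

public class Registry
{
    public void Add(Person person) { }
}

public static class PersonExtensions
{
    public static string Pretty(this Person person)
    {
        return "a person";
    }
}
```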
It can make code clearer at a glance. When you use this, it's easier to:
Tell static and instance members apart. (And distinguish instance methods from delegates.)
Distinguish instance members from local variables and parameters (without using a naming convention).
I think this article explains it a little
http://blogs.msdn.microsoft.com/sourceanalysis/archive/2008/05/25/a-difference-of-style.aspx
...a brilliant young developer at Microsoft (ok, it was me) decided to take it upon himself to write a little tool which could detect variances from the C# style used within his team. StyleCop was born. Over the next few years, we gathered up all of the C# style guidelines we could find from the various teams within Microsoft, and picked out all of best practices which were common to these styles. These formed the first set of StyleCop rules. One of the earliest rules that came out of this effort was the use of the this prefix to call out class members, and the removal of any underscore prefixes from field names. C# style had officially grown apart from its old C++ tribe.
this.This
this.Does
this.Not
this.Add
this.Clarity
this.Nor
this.Does
this.This
this.Add
this.Maintainability
this.To
this.Code
The usage of "this.", when used excessively or as a forced style requirement, is nothing more than a contrivance based on the premise that fewer than 1% of developers genuinely don't understand code or what they are doing, and it makes life painful for the 99% who want to write easily readable and maintainable code.
As soon as you start typing, IntelliSense will list the content available in the scope where you are typing; "this." is not necessary to expose class members, and unless you are completely clueless about what you are coding, you should be able to easily find the item you need.
Even if you are completely clueless, use "this." to hint at what is available, but don't leave it in the code. There is also a slew of add-ons like ReSharper that help bring clarity to scope and expose the contents of objects more efficiently. It is better to learn how to use the tools provided to you than to develop a bad habit that is hated by a large number of your co-workers.
Any developer who does not inherently understand the scope of static, local, class or global content should not rely on "hints" to indicate scope. "this." is worse than Hungarian notation, as at least Hungarian notation provided an idea of the type the variable references and so serves some benefit. I would rather see "_" or "m" used to denote class field members than see "this." everywhere.
I have never had an issue, nor seen an issue with a fellow developer, of repeatedly fighting with code scope or writing code that is always buggy because of not using "this." explicitly. It is an unwarranted fear that "this." prevents future code bugs, and it is often an argument made where ignorance is valued.
Coders grow with experience; "this." is like asking someone to put training wheels on their bike as an adult because that is what they first used to learn how to ride. An adult might fall off a bike once in 1,000 rides, but that is no reason to force them to use training wheels.
"this." should be banned from the language definition for C#, unfortunately there is only one reason for using it, and that is to resolve ambiguity, which could also be easily resolved through better code practices.
A few basic reasons for using this (and, incidentally, I always prefix class values with the name of the class they belong to as well, even within the class itself).
1) Clarity. You know right this instant which variables you declared in the class definition and which you declared as locals, parameters and whatnot. In two years, you won't know that and you'll go on a wondrous voyage of re-discovery that is absolutely pointless and not required if you specifically state the parent up front. Somebody else working on your code has no idea from the get-go and thus benefits instantly.
2) Intellisense. If you type 'this.' you get all instance-specific members and properties in the help. It makes finding things a lot easier, especially if you're maintaining somebody else's code or code you haven't looked at in a couple of years. It also helps you avoid errors caused by misconceptions of what variables and methods are declared where and how. It can help you discover errors that otherwise wouldn't show up until the compiler choked on your code.
3) Granted, you can achieve the same effect by using prefixes and other techniques, but this raises the question of why you would invent a mechanism to handle a problem when the language has one built in that is actually supported by the IDE. If you touch-type, even in part, it will ultimately reduce your error rate too, by not forcing you to take your fingers out of the home position to reach the underscore key.
I see lots of young programmers who make a big deal out of the time they will save by not typing a character or two. Most of your time will be spent debugging, not coding. Don't worry so much about your typing speed. Worry more about how quickly you can understand what is going on in the code. If you save a total of five minutes coding and wind up spending an extra ten minutes debugging, you've slowed yourself down, no matter how fast you look like you're going.
Note that the compiler doesn't care whether you prefix references with this or not (unless there's a name collision with a local variable and a field or you want to call an extension method on the current instance.)
It's up to your style. Personally, I remove this. from code as I think it decreases the signal-to-noise ratio.
Just because Microsoft uses this style internally doesn't mean you have to. StyleCop seems to be an MS-internal tool gone public. I'm all for adhering to the Microsoft conventions around public things, such as:
type names are in PascalCase
parameter names are in camelCase
interfaces should be prefixed with the letter I
use singular names for enums, except for when they're [Flags]
...but what happens in the private realms of your code is, well, private. Do whatever your team agrees upon.
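A short sketch of those public-surface conventions, using made-up names purely for illustration:

```csharp
using System;

// Interface names are prefixed with the letter I.
interface IRepository
{
    // Parameter names are in camelCase.
    void Save(string itemName);
}

// Type names are in PascalCase.
class FileRepository : IRepository
{
    public void Save(string itemName) { /* ... */ }
}

// Singular name for an ordinary enum...
enum Color { Red, Green, Blue }

// ...but a plural name when it is a [Flags] enum,
// since values can combine.
[Flags]
enum FileAccessModes
{
    None = 0,
    Read = 1,
    Write = 2
}
```

Whether private fields use "this.", underscores, or neither is exactly the kind of thing left to team agreement.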
Consistency is also important. It reduces cognitive load when reading code, especially if the code style is as you expect it. But even when dealing with a foreign coding style, if it's consistent then it won't take long to become used to it. Use tools like ReSharper and StyleCop to ensure consistency where you think it's important.
Using .NET Reflector suggests that Microsoft isn't that great at adhering to the StyleCop coding standards in the BCL anyway.
I do follow it, because I think it's really convenient to be able to tell apart access to static and instance members at first glance.
And of course I have to use it in my constructors, because I normally give the constructor parameters the same names as the field their values get assigned to. So I need "this" to access the fields.
In addition, names can be duplicated between a method's parameters and the class's fields, so using 'this' makes it clear which one you mean:
class Foo
{
    private string aString;

    public void SetString(string aString)
    {
        // this.aString refers to the class field;
        // aString alone refers to the method parameter.
        this.aString = aString;
    }
}
I follow it mainly for IntelliSense reasons. It is so nice typing this. and getting a concise list of properties, methods, etc.