Is altering my namespace a nicety, or a necessity? - c#

I added two custom classes to my project, namely "AutoSizeGrid" and "AutoSizeGridEditable"
Both derive from DataGridView, but that's probably neither here nor there.
But where they are is seemingly somewhat of a conundrum.
My project builds and runs fine; however, when inspecting it with ReSharper, it flags a "Constraints Violation" for both of them, saying: "Namespace does not correspond to file location, should be: ''".
Do I need to change them like so, from, e.g.:
class AutoSizeGrid : DataGridView
...to:
class <Name of my Solution>.AutoSizeGrid : DataGridView
?
I'd rather not, as I don't know if this would force me to delete the prior DGV-derived components from my forms and replace them with the recompiled versions; that would be a pain in the donkey.

As ElVieejo says, it is not necessary to change it if the code compiles. ReSharper (and other code quality tools) recommend you keep namespaces in sync with file paths because that is the Microsoft convention. Namespaces are extremely helpful for keeping code organized, especially as projects/applications get larger, so it's good to have some clear rules and follow them, but they are for readability and separation of concerns, not syntactical correctness.
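For illustration, here is a minimal sketch of what the convention looks like (the project and folder names are hypothetical): a file stored at Controls\AutoSizeGrid.cs in a project whose root namespace is MyProject would declare a namespace matching the folder path, while the class declaration itself stays unqualified.

// Hypothetical file Controls\AutoSizeGrid.cs in a project with root namespace "MyProject"
using System.Windows.Forms;

namespace MyProject.Controls
{
    public class AutoSizeGrid : DataGridView
    {
        // existing members stay exactly as they are
    }
}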

Related

Force Pex to ignore generated code, can I do it without a reference to the Pex assembly?

I'm trying to start using Pex, and I have certain code that I want it to ignore testing.
I create configuration sections for config files using the Configuration Section Designer addin. Unfortunately the code generated is not quite perfect, because it doesn't do tests for nulls and other nice checks. However, for now at least I want the code to be ignored when running pex explorations as I can't change the code without it being overwritten in future, and it's a known fault that we can work around.
I found the PexInstrumentMarkedBy and PexCoverageFilterMarkedBy attributes, which seem like they may do the job (of ignoring code with the GeneratedCodeAttribute), but as far as I can see I would need to put those in my assembly, and thus have a reference to the Pex framework in my operational assembly... not going to happen.
Does anyone have better ideas?
I know this may not be an option, but here is a suggestion: if I understand your question correctly, the only code you're trying to avoid is the designer-generated code. Since there is no way you can reference the Pex assembly in your operational assembly, would you consider an alternative approach to creating config sections, i.e. implementing them by hand as you normally would?
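As a rough sketch of that alternative (the section and property names below are invented), a hand-written ConfigurationSection gives you room for null checks and validation and needs no Pex reference at all:

using System.Configuration;

// Hypothetical hand-written replacement for a designer-generated section.
public class MyFeatureSection : ConfigurationSection
{
    [ConfigurationProperty("endpoint", IsRequired = true)]
    public string Endpoint
    {
        get { return (string)this["endpoint"]; }
        set { this["endpoint"] = value; }
    }

    [ConfigurationProperty("retryCount", DefaultValue = 3)]
    public int RetryCount
    {
        get { return (int)this["retryCount"]; }
        set { this["retryCount"] = value; }
    }
}

// Usage (assuming a matching <section> entry in the config file):
// var section = (MyFeatureSection)ConfigurationManager.GetSection("myFeature");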

Arranging solution files

My C# .NET solution files are a mess and I am trying to find a way of getting things in order.
I tried to group related files together in folders I created for that purpose. For example, I put interfaces, abstract classes, and all their derived classes in the same folder. By the way, when I do that, I need to write a "using" directive pointing to that folder's namespace so I can use those classes in other files (also a mess, I guess).
Is there an elegant way of keeping things cleaner, rather than a long list of files that I find very confusing?
Is it a good idea to (say) open an abstract class's file and add nested classes for all the classes derived from it?
Is there a way of telling the solution to automatically add the folder's "using" directives above every class I create?
The best approach is for your solution's file system structure to reflect your program architecture, not just your code layout.
For example: if you define an abstract class and then have entities that implement it, put them in the same "basket" (solution folder) if they form part of the same architectural unit.
That way, someone looking at your solution tree can see (more or less) what your architecture is about from a very top-level view.
There are different ways to reinforce that architectural vision and the feel of the code's file system. For example, if you use well-known frameworks such as NHibernate or ASP.NET MVC, tend to name things the way the technology names them; anyone familiar with that technology can then easily find their way around your architecture.
WPF, for example, forces you to define things in code in a certain way, but it also leads you to define a Model, ViewModel and View, which you will intuitively put in separate files. The technology pushes you to lay out your file system the way it was intended.
By the way, the topic you're asking about is a widely known and unresolved dilemma, because in the end code is just a sequence of characters and nothing else.
Good luck.
It sounds like you're hitting the point where you actually need to break things up a bit, but you're resisting this because more files seems like more complexity. That's true to a point. But there's also a point where files just become big and unmanageable, which is where you might end up if you try to do nested classes.
Keeping code in different namespaces is actually a good thing--that's the "issue" you're running into with the folders and having to add using statements at the top of your files. Namespacing allows you to logically divide your code, and even occasionally reuse a class name, without stepping on other parts of your code base.
What version of Visual Studio are you using? One little known feature of Visual Studio is that it can automatically create the using directive when you type a class name. That would eliminate one pain point.
If I were in your shoes, I'd start looking for logical places to segment my code into different projects. You can definitely go overboard here as well, but it's pretty common to have the following (see the sketch after this list):
A "core" project that contains your business logic and business objects.
UI projects for the different user interfaces you build, such as a website or Windows Forms app.
A datalayer project that handles all interactions with the database. Your business logic talks to the datalayer instead of directly to the database, which makes it easier to make changes to your database setup down the road.
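As a rough sketch of how those layers might depend on each other (all project, namespace and type names here are invented):

// MyApp.Core: business objects and abstractions, no UI or database references.
namespace MyApp.Core
{
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public interface IOrderRepository
    {
        Order GetById(int id);
    }
}

// MyApp.Data: implements the Core abstractions and talks to the database,
// so a later change to the database setup stays inside this project.
namespace MyApp.Data
{
    using MyApp.Core;

    public class OrderRepository : IOrderRepository
    {
        public Order GetById(int id)
        {
            // Database access would go here.
            return new Order { Id = id };
        }
    }
}

// The UI projects (website, Windows Forms app) reference Core and Data but hold no business logic.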
As your code base grows, a tool like ReSharper starts to become really important. I work on a code base that has ~1 million lines and 10 or so projects in the solution, and I couldn't live without ReSharper's go-to-file navigation feature. It lets you hit a keyboard shortcut and start typing a file name and just jump to it when it finds a match. It's sort of like using Google to find information instead of trying to bookmark every interesting link you come across. Once I made this mental shift, navigating through the code base became so much easier.
Try using multiple projects in the same solution to bring order. Separate projects for web, entity, data access, setup, testing, etc.
If the files are in the same namespace you won't need a using directive. If you're breaking your code into multiple projects, you'll need to reference the other projects and add using directives for their namespaces.
It's up to you. Break things apart logically. Use subfolders where you deem necessary.
Not sure.
Yes, but you'll need to create a template. Search for tutorials on that.
1) Your solution folders should match your namespace structure. Visual Studio is set up to work this way and will automatically create a matching namespace. Yes, this requires a using for stuff in the folders but that's what it's for.
So yes, group common stuff together under an appropriate namespace.
2) Yes, subclasses should probably live in the same namespace/folder as their abstract base, or a subfolder of it. I'm not sure if you mean all in the same file? If so, I would say generally not, unless they're very, very simple. Different files, same folder.
3) Not that I'm aware of. If you right-click the class name when you use it, you can get Visual Studio to resolve it automatically and add a using (Ctrl + . also does this).

In C# (VS-2010), is there a way to fail a frontend build if a certain library class is used? (When normally it would compile just fine?)

I'm writing a library that has a bunch of classes in it which are intended to be used by multiple frontends (some frontends share the same classes). For each frontend, I am keeping a hand edited list of which classes (of a particular namespace) it uses. If the frontend tries to use a class that is not in this list, there will be runtime errors. My goal is to move these errors to compile time.
If any of you are curious, these are 'mapped' nhibernate classes. I'm trying to restrict which frontend can use what so that there is less spin up time, and just for my own sanity. There's going to be hundreds of these things eventually, and it will be really nice if there's a list somewhere that tells me which frontends use what that I'm forced to maintain. I can't seem to get away with making subclasses to be used by each frontend and I can't use any wrapper classes... just take that as a given please!
Ideally, I want visual studio to underline red the offending classes if someone dares to try and use them, with a nice custom error in the errors window. I also want them GONE from the intellisense windows. Is it possible to customize a project to do these things?
I'm also open to using a pre-build program to analyze the code for these sorts of things, although this would not be as nice. Does anyone know of tools that do this?
Thanks
Isaac
Let's say that you have a set of classes F that you want to be visible only to a certain assembly A. Then you move the classes in F into a separate assembly, mark them as internal, and apply the InternalsVisibleTo attribute to that assembly, naming assembly A.
If you then try to use any class from F in an assembly A' that has not been granted InternalsVisibleTo by the assembly containing F, you will get a compile-time error.
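A minimal sketch of that setup (the assembly and class names below are invented):

// In the assembly containing the restricted classes (call it "Mappings"):
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("FrontendA")] // only FrontendA may see the internals

namespace Mappings
{
    internal class CustomerMap // internal: invisible to every other assembly
    {
    }
}

// In FrontendA this line compiles; in any other frontend it is a compile-time error:
// var map = new Mappings.CustomerMap();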
I also want them GONE from the intellisense windows. Is it possible to customize a project to do these things?
That happens with the solution I presented above as well: the classes are internal to the assembly containing F and are not visible from any assembly A' that has not been granted InternalsVisibleTo.
However, I generally find that InternalsVisibleTo is a code smell (not always, just often).
You should group your classes into separate DLLs/projects and only give references to those DLLs to the frontend projects that are 'appropriate' for them. This should be simple if each frontend and the group of classes it may use are logically related.
If not, then I would say something smells fishy - your class design/approach probably needs a revisit.
I think you'll want to take a look at the ObsoleteAttribute: http://msdn.microsoft.com/en-us/library/system.obsoleteattribute%28v=VS.100%29.aspx
I believe you can set IsError to true and it will issue an error at build time.
(not positive though)
As for IntelliSense, you can use the EditorBrowsableAttribute: http://msdn.microsoft.com/en-us/library/system.componentmodel.editorbrowsableattribute.aspx That at least is what seems to be applied when I add a service reference and cannot see the members.
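A small sketch of how the two attributes might be combined (the class name is invented; note that EditorBrowsable typically only hides the type from IntelliSense for code referencing the compiled assembly):

using System;
using System.ComponentModel;

// Any use of this class becomes a compile-time error because the second
// ObsoleteAttribute argument (error) is set to true.
[Obsolete("This mapped class is not available to this frontend.", true)]
[EditorBrowsable(EditorBrowsableState.Never)]
public class RestrictedMappedClass
{
}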

Why remove unused using directives in C#?

I'm wondering if there are any reasons (apart from tidying up source code) why developers use the "Remove Unused Usings" feature in Visual Studio 2008?
There are a few reasons you'd want to take them out.
It's pointless. They add no value.
It's confusing. What is being used from that namespace?
If you don't, then you'll gradually accumulate pointless using statements as your code changes over time.
Static analysis is slower.
Code compilation is slower.
On the other hand, there aren't many reasons to leave them in. I suppose you save yourself the effort of having to delete them. But if you're that lazy, you've got bigger problems!
I would say quite the contrary - it's extremely helpful to remove unneeded, unnecessary using statements.
Imagine you have to go back to your code in 3, 6, 9 months - or someone else has to take over your code and maintain it.
If you have a huge, long laundry list of using statements that aren't really needed, looking at the code could be quite confusing. Why is that using in there, if nothing is used from that namespace??
I guess in terms of long-term maintainability in a professional environment, I'd strongly suggest to keep your code as clean as possible - and that includes dumping unnecessary stuff from it. Less clutter equals less confusion and thus higher maintainability.
Marc
In addition to the reasons already given, it prevents unnecessary naming conflicts. Consider this file:
using System.IO;
using System.Windows.Shapes;
namespace LicenseTester
{
    public static class Example
    {
        private static string temporaryPath = Path.GetTempFileName();
    }
}
This code doesn't compile, because the namespaces System.IO and System.Windows.Shapes each contain a class called Path. We could fix it by using the full class path,
private static string temporaryPath = System.IO.Path.GetTempFileName();
or we could simply remove the line using System.Windows.Shapes;.
This seems to me to be a very sensible question, which is being treated in quite a flippant way by the people responding.
I'd say that any change to source code needs to be justified. These changes can have hidden costs, and the person posing the question wanted to be made aware of this. They didn't ask to be called "lazy", as one person intimated.
I have just started using ReSharper, and it is starting to give warnings and style hints on the project I am responsible for. Amongst them is the removal of redundant using directives, but also redundant qualifiers, capitalisation and many more. My gut instinct is to tidy the code and resolve all hints, but my business head warns me against unjustified changes.
We use an automated build process, and therefore any change to our SVN repository would generate changes that we couldn't link to projects/bugs/issues, and would trigger automated builds and releases which delivered no functional change to previous versions.
If we look at the removal of redundant qualifiers, this could possibly cause confusion to developers as classes in our Domain and Data layers are only differentiated by the qualifiers.
If I look at the proper capitalisation of acronyms (e.g. ABCD -> Abcd), then I have to take into account that ReSharper doesn't refactor any of the XML files we use that reference class names.
So, following these hints is not as straightforward as it appears, and they should be treated with respect.
Fewer options in the IntelliSense popup (particularly if the namespaces contain lots of extension methods).
Theoretically IntelliSense should be faster too.
Remove them. Less code to look at and wonder about saves time and confusion. I wish more people would KEEP THINGS SIMPLE, NEAT and TIDY.
It's like having dirty shirts and pants in your room. It's ugly and you have to wonder why it's there.
Code compiles quicker.
Recently I got another reason why deleting unused imports is quite helpful and important.
Imagine you have two assemblies, where one references the other (for now let's call the first one A and the referenced one B). When you have code in A that depends on B, everything is fine. However, at some stage in your development process you notice that you don't actually need that code any more, but you leave the using directive where it was. Now you not only have a meaningless using directive but also an assembly reference to B that is not used anywhere except in that obsolete directive. This firstly increases the time needed to compile A, as B has to be loaded as well.
So this is not only an issue of cleaner and easier-to-read code, but also of maintaining assembly references in production code, where not all of those referenced assemblies even exist.
Finally, in our example we had to ship B together with A, although B was not used anywhere in A except in the using section. This will massively affect the runtime performance of A when loading the assembly.
It also helps prevent false circular dependencies, assuming you are also able to remove some dll/project references from your project after removing the unused usings.
At least in theory, if you were given a C# .cs file (or any single program source code file), you should be able to look at the code and create an environment that simulates everything it needs. With some compiling/parsing techniques, you could even create a tool to do it automatically. If you do this, at least in your head, you can be sure you understand everything that code file says.
Now consider being given a .cs file with 1000 using directives, of which only 10 are actually used. Whenever you look at a newly introduced symbol that references the outside world, you will have to go through those 1000 lines to figure out what it is. This obviously slows down the above procedure. So if you can reduce them to 10, it will help!
In my opinion, the C# using directive is very weak, since you cannot import a single generic symbol without its genericity being lost, and you cannot use a using alias directive and still call extension methods. This is not the case in other languages like Java, Python and Haskell, where you can specify (almost) exactly what you want from the outside world. Even then, I would suggest using a using alias whenever possible.

What controls XML serialization order of C# partial classes?

As discussed in Does the order of fields in C# matter?, the order of serializable properties affects, among other things, XmlSerializer output.
But if fields are in 2 files (using partial classes), does anyone know what in fact controls the resulting order? That is, which file's properties come first?
(Background: I ask this because I've run into a scenario where one of the 2 files is auto-generated from xsd, and the other is manually edited. The test output is different on developer boxes vs. our scripted build box. Presumably this is a side effect of the several differences in the timing and history of the xsd->C# step in the 2 environments. Various ways to fix, but I'd like to understand the compilation process a little better if possible.)
Nothing is guaranteed per C# spec.
I've found that using the 'easy' approach to making an object by marking it [Serializable] is usually only good enough for very simple implementations.
I would recommend that you implement the IXmlSerializable interface which is pretty easy to do and gives you all the control you need.
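A minimal sketch of what that looks like (the class and element names are invented); the element order is then fixed by your WriteXml code rather than by however the compiler merges the partial class parts:

using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

// Hypothetical class: the serialized element order is dictated here, not by file order.
public partial class Invoice : IXmlSerializable
{
    public int Number { get; set; }
    public decimal Total { get; set; }

    public XmlSchema GetSchema() { return null; }

    public void WriteXml(XmlWriter writer)
    {
        writer.WriteElementString("Number", Number.ToString());
        writer.WriteElementString("Total", Total.ToString());
    }

    public void ReadXml(XmlReader reader)
    {
        reader.ReadStartElement();
        Number = int.Parse(reader.ReadElementString("Number"));
        Total = decimal.Parse(reader.ReadElementString("Total"));
        reader.ReadEndElement();
    }
}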
Here is what we found out while fixing a nasty bug:
We had exactly the same problem, our serialization order has changed after a release without modifying any of the serialization related classes.
We had one half of a class generated from XSDs, and the other half was hand-written. The Order attributes had no effect. What we saw was that before the release the hand-written partial parts were serialized first, and afterwards the order changed.
The solution lay in the order of the files in the project file that contained the two class parts. It turned out that after an MSBuild build (on our build server), the serializer puts the elements of the earlier (in the csproj) .cs file first in the serialized XML. Changing the order of the .cs files in the csproj swapped the order, and the generated parts were up front in the XML as needed.
This is aligned with the answer and observation of Eric Hirst above, since renaming a file reorders the csproj's items (they are generally in alphabetical order). Watch out when editing the csproj by hand for this reason too.
