C# Resource Files

My resources are stored in an assembly of their own, and I have a reference to that assembly in my web app. I am able to access resources in two different ways - via compiled constants or by using a ResourceManager.
var method1 = Prototype.Localization.CustomerRecord.BillingId;
or
var resx = new ResourceManager(typeof(Prototype.Localization.CustomerRecord));
var method2 = resx.GetString("BillingId");
Using the first approach seems like a no-brainer (but that generally means it's going to come back and bite me when things get more complicated), so what are the advantages or disadvantages to the second approach? Is the first approach going to cause me issues down the road?

So what are the advantages or disadvantages to the second approach?
The 2nd approach is what the resx compiler does under the covers. IMO the 2nd one is always worse, because you have a raw string key (it may become detached from the actual resource, and you won't know until run-time), and because of that you lose compile-time checks and design-time support.
Is the first approach going to cause me issues down the road?
No, at least no more than anything else (you'll catch any errors in your HTML pages when they're first compiled at run-time).
So I may ask: if the first approach is shorter and easier, is there any good reason to use the 2nd one? My answer is "no" (moreover, you can mix them if you ever need to for something special).
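A hedged aside on that mixing point: the generated resx class also exposes the ResourceManager it uses as a static property, so you can fall back to string-keyed lookup for the rare case where the key is only known at run-time. The type and key below come from the question; the run-time-key scenario is hypothetical.
var key = "BillingId"; // imagine this value only arrives at run-time
var value = Prototype.Localization.CustomerRecord.ResourceManager.GetString(key);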

Yeah, I agree with Adriano that the 2nd is worse because of those "magic" strings.
The first approach gives you some more advantages, including compile-time checking: since you reference a class and a property, you get an error back if the property doesn't exist. IntelliSense is another one.
Another advantage is that if you decide to move away from the built-in resource manager, you can just replace the generated resource class with one of your own that respects the same namespace and member names, backed by your own internal logic.
IMO, I would stick with the first one.
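A minimal sketch of that drop-in-replacement idea, assuming the namespace and member names from the question; the lookup logic is a placeholder:
namespace Prototype.Localization
{
    // Same shape as the generated designer class, so call sites keep compiling.
    public static class CustomerRecord
    {
        public static string BillingId
        {
            get { return LookUp("BillingId"); }
        }

        private static string LookUp(string key)
        {
            // Placeholder: read from a database, a JSON file, etc.
            return key;
        }
    }
}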

Related

Performance issues with the existing coding approach using the global:: keyword

I have been asked to refactor a particular module in my application and also to tweak some performance-related issues (if any).
Coming to the module, there are certain portions where it needs some strings to be displayed. The strings are supplied by a language assembly (.dll) referenced in my project, which basically returns a string from an XML file of strings when you pass it a keyword.
For example:
Language.GetStringFromID("TXT_WARNING"); would return something like Warning !!
The original developer has generously used
global::Language.GetInstance().GetStringFromID("KEYWORD") wherever a string needs to be fetched.
Question 1:
Is this a good approach?
I had second thoughts about this approach, so I ran a performance profiler, and I see that every time a string is requested, it takes nearly 500 ms to return the string for the queried keyword.
Before I conclude that this is indeed the culprit, I need some thoughts from the .NET experts on StackOverflow.
Question 2:
Is there any performance hit if we use global:: in general?
Cheers
It only should be used to eliminate namespace conflicts.
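A hedged illustration of that conflict case (all names here are invented): if a type hides a top-level namespace, global:: reaches back to the real root.
namespace Acme
{
    class System { } // hides the top-level System namespace within Acme

    class Demo
    {
        void Say()
        {
            // Plain "System.Console" would not compile here, because lookup
            // finds Acme.System first; global:: disambiguates.
            global::System.Console.WriteLine("hello");
        }
    }
}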
There is no performance hit from using global::. It's handled by the compiler; there is nothing about namespaces at runtime.
The performance hit is inside Language.GetStringFromID.
The use of the global keyword has no implication on performance at all.
The equivalent of global:: is always what is used at the IL level; there is no concept of using a namespace there. In other words, it is irrelevant to runtime performance.
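Since the cost is inside the lookup itself, caching is the usual fix. A minimal sketch, assuming Language.GetInstance().GetStringFromID re-reads the XML on every call; only Language and GetStringFromID come from the question, the wrapper is hypothetical.
using System.Collections.Generic;

public static class CachedLanguage
{
    private static readonly Dictionary<string, string> Cache = new Dictionary<string, string>();

    public static string GetString(string keyword)
    {
        string value;
        if (!Cache.TryGetValue(keyword, out value))
        {
            // The expensive call: parses/searches the XML file.
            value = Language.GetInstance().GetStringFromID(keyword);
            Cache[keyword] = value;
        }
        return value;
    }
}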

How to do a "Move type to another file to match its name" across a complete solution

Does anyone know of a way to split all classes in one solution into multiple files?
The point here is that I've inherited a project in which a few hundred files contain a thousand or so classes...
I'd like to be able to get to a one-file-per-class approach.
Using ReSharper I can easily do this manually, but I'm guessing there must be a better way?
Kind regards,
Frederik
You could try one of the ReSharper 5.0 nightly builds which allow you to do it across your whole solution. You can revert to ReSharper 4.5 (or whatever version you are using) afterwards.
GraemeF's answer is correct, but when you do that refactoring chances are you'll lose all source-control history for the existing classes. This might not be a problem for you (especially if the system you've inherited wasn't source-controlled!) but I've often found that line-annotated views of a class are very helpful for determining the intent behind a particular line.
ReSharper has the Ctrl-T shortcut to jump to a type name, and holding down Ctrl makes types clickable; that might be another way to solve your problem.

Checking for the existence a reference/type at compile time in .NET

I've recently found the need to check at compile-time whether either: a) a certain assembly reference exists and can be successfully resolved, or b) a certain class (whose fully qualified name is known) is defined. These two situations are equivalent for my purposes, so being able to check for one of them would be good enough. Is there any way to do this in .NET/C#? Preprocessor directives initially struck me as something that might help, but it seems it doesn't have the necessary capability.
Of course, checking for the existence of a type at runtime can be done easily enough, but unfortunately that won't resolve my particular problem in this situation. (I need to be able to ignore the fact that a certain reference is missing and thus fall back to another approach in code.)
Is there a reason you can't add a reference and then use a typeof expression on a type from the assembly to verify it's available?
var x = typeof(SomeTypeInSomeAssembly);
If the assembly containing SomeTypeInSomeAssembly is not referenced and available this will not compile.
It sounds like you want the compiler to ignore one branch of code, which is really only doable by hiding it behind an #if block. Would defining a compiler constant and using #if work for your purposes?
#if MyConstant
.... code here that uses the type ....
#else
.... workaround code ....
#endif
Another option would be to not depend on the other class at compile-time at all, and use reflection or the .NET 4.0 dynamic keyword to use it. If it'll be called repeatedly in a perf-critical scenario in .NET 3.5 or earlier, you could use DynamicMethod to build your code on first use instead of using reflection every time.
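A hedged sketch of that reflection route (every name below is a placeholder): probe for the type at run-time and branch accordingly.
// Passing false means return null instead of throwing when the
// type/assembly cannot be resolved.
Type t = Type.GetType("Some.Namespace.SomeType, SomeAssembly", false);
if (t != null)
{
    dynamic instance = Activator.CreateInstance(t); // .NET 4.0 'dynamic'
    instance.DoWork(); // DoWork is hypothetical
}
else
{
    // Fall back to the alternative approach.
}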
I seem to have found a solution, albeit not precisely what I was initially hoping for.
My Solution:
What I ended up doing is creating a new build configuration and then defining a precompiler constant, which I used in code to determine whether to use the reference, or to fall back to the alternative (guaranteed to work) approach. It's not fully automatic, but it's relatively simple and seems quite elegant - good enough for my purposes.
Alternative:
If you wanted to fully automate this, it could be done using a pre-build command that runs a batch script or small program to check the availability of a given reference on the machine and then updates a file containing precompiler constants. I considered this more effort than it was worth, though it might have been more useful if I had multiple independent references whose availability I needed to check.

Why remove unused using directives in C#?

I'm wondering if there are any reasons (apart from tidying up source code) why developers use the "Remove Unused Usings" feature in Visual Studio 2008?
There are a few reasons you'd want to take them out.
It's pointless. They add no value.
It's confusing. What is being used from that namespace?
If you don't, then you'll gradually accumulate pointless using directives as your code changes over time.
Static analysis is slower.
Code compilation is slower.
On the other hand, there aren't many reasons to leave them in. I suppose you save yourself the effort of having to delete them. But if you're that lazy, you've got bigger problems!
I would say quite the contrary - it's extremely helpful to remove unneeded, unnecessary using directives.
Imagine you have to go back to your code in 3, 6, 9 months - or someone else has to take over your code and maintain it.
If you have a huge laundry list of using directives that aren't really needed, looking at the code can be quite confusing. Why is that using in there, if nothing is used from that namespace?
I guess in terms of long-term maintainability in a professional environment, I'd strongly suggest to keep your code as clean as possible - and that includes dumping unnecessary stuff from it. Less clutter equals less confusion and thus higher maintainability.
Marc
In addition to the reasons already given, it prevents unnecessary naming conflicts. Consider this file:
using System.IO;
using System.Windows.Shapes;

namespace LicenseTester
{
    public static class Example
    {
        private static string temporaryPath = Path.GetTempFileName();
    }
}
This code doesn't compile, because the namespaces System.IO and System.Windows.Shapes both contain a class called Path. We could fix it by using the fully qualified class name,
private static string temporaryPath = System.IO.Path.GetTempFileName();
or we could simply remove the line using System.Windows.Shapes;.
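A third option (a hedged aside, not from the original answer) is a using alias, which picks one Path explicitly while keeping both namespaces imported:
using System.IO;
using System.Windows.Shapes;
using Path = System.IO.Path; // the alias takes precedence over the imported namespaces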
This seems to me to be a very sensible question, which is being treated in quite a flippant way by the people responding.
I'd say that any change to source code needs to be justified. These changes can have hidden costs, and the person posing the question wanted to be made aware of this. They didn't ask to be called "lazy", as one person intimated.
I have just started using ReSharper, and it is starting to give warnings and style hints on the project I am responsible for. Amongst them is the removal of redundant using directives, but also redundant qualifiers, capitalisation and many more. My gut instinct is to tidy the code and resolve all hints, but my business head warns me against unjustified changes.
We use an automated build process, and therefore any change to our SVN repository would generate changes that we couldn't link to projects/bugs/issues, and would trigger automated builds and releases which delivered no functional change to previous versions.
If we look at the removal of redundant qualifiers, this could possibly cause confusion to developers as classes in our Domain and Data layers are only differentiated by the qualifiers.
If I look at the proper capitalisation of acronyms (e.g. ABCD -> Abcd), then I have to take into account that ReSharper doesn't refactor any of the XML files we use that reference class names.
So, following these hints is not as straightforward as it appears, and they should be treated with respect.
Fewer options in the IntelliSense popup (particularly if the namespaces contain lots of extension methods).
Theoretically IntelliSense should be faster too.
Remove them. Less code to look at and wonder about saves time and confusion. I wish more people would KEEP THINGS SIMPLE, NEAT and TIDY.
It's like having dirty shirts and pants in your room. It's ugly and you have to wonder why it's there.
Code compiles quicker.
Recently I got another reason why deleting unused imports is quite helpful and important.
Imagine you have two assemblies, where one references the other (for now let's call the first one A and the referenced one B). Now when you have code in A that depends on B, everything is fine. However, at some stage in your development process you notice that you don't actually need that code any more, but you leave the using directive where it was. Now you not only have a meaningless using directive but also an assembly reference to B which is not used anywhere except in the obsolete directive. This firstly increases the time needed to compile A, as B has to be loaded as well.
So this is not only an issue of cleaner and easier-to-read code, but also of maintaining assembly references in production code, where not all of those referenced assemblies even exist.
Finally, in our example we had to ship B together with A, although B is not used anywhere in A except in the using section. This will massively affect the runtime performance of A when loading the assembly.
It also helps prevent false circular dependencies, assuming you are also able to remove some dll/project references from your project after removing the unused usings.
At least in theory, if you were given a C# .cs file (or any single program source code file), you should be able to look at the code and create an environment that simulates everything it needs. With some compiling/parsing techniques, you could even create a tool to do it automatically. If you do this, at least in your head, you can ensure you understand everything that code file says.
Now consider being given a .cs file with 1000 using directives, of which only 10 are actually used. Whenever you look at a newly introduced symbol that references the outside world, you will have to go through those 1000 lines to figure out what it is. This obviously slows down the above procedure. So if you can reduce them to 10, it will help!
In my opinion, the C# using directive is very, very weak, since you cannot import a single generic symbol without losing its genericity, and you cannot use a using alias directive to get extension methods. This is not the case in other languages like Java, Python and Haskell, where you can specify (almost) exactly what you want from the outside world. But even then, I would suggest using a using alias whenever possible.
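A quick hedged illustration of those two limits (the alias names are invented):
// A closed generic type can be aliased...
using StringList = System.Collections.Generic.List<string>;
// ...but an open generic one cannot, so the genericity is lost:
// using MyList<T> = System.Collections.Generic.List<T>; // does not compile
// Extension-method syntax is also unavailable through an alias; it needs a
// plain namespace import such as 'using System.Linq;'.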

Why should you remove unnecessary C# using directives?

For example, I rarely need:
using System.Text;
but it's always there by default. I assume the application will use more memory if your code contains unnecessary using directives. But is there anything else I should be aware of?
Also, does it make any difference whatsoever if the same using directive is used in only one file vs. most/all files?
Edit: Note that this question is not about the unrelated concept called a using statement, designed to help one manage resources by ensuring that when an object goes out of scope, its IDisposable.Dispose method is called. See Uses of "using" in C#.
There are a few reasons for removing unused usings/namespaces, besides coding preference:
removing the unused using clauses in a project can make compilation faster, because the compiler has fewer namespaces to search when resolving types. (This is especially true for C# 3.0 because of extension methods, where the compiler must search all namespaces for extension methods to find possible better matches, generic type inference and lambda expressions involving generic types.)
can potentially help avoid name collisions in future builds, when new types added to the unused namespaces have the same name as some types in the used namespaces.
will reduce the number of items in the editor's auto-completion list when coding, possibly leading to faster typing (in C# 3.0 this can also reduce the list of extension methods shown)
What removing the unused namespaces won't do:
alter in any way the output of the compiler.
alter in any way the execution of the compiled program (faster loading, or better performance).
The resulting assembly is the same with or without unused using(s) removed.
It won't change anything when your program runs. Everything that's needed is loaded on demand, so even if you have that using directive, unless you actually use a type in that namespace/assembly, the assembly the directive refers to won't be loaded.
Mainly, it's just to clean up for personal preference.
Code cleanliness is important.
One starts to get the feeling that the code may be unmaintained and on the brownfield path when one sees superfluous usings. In essence, when I see some unused using directives, a little yellow flag goes up in the back of my brain telling me to "proceed with caution." And reading production code should never give you that feeling.
So clean up your usings. Don't be sloppy. Inspire confidence. Make your code pretty. Give another dev that warm-fuzzy feeling.
There's no IL construct that corresponds to a using directive. Thus, using directives do not increase your application's memory use, as no code or data is generated for them.
The using directive is used at compile time only, for resolving short type names to fully qualified type names. Thus, the only negative effect unnecessary usings can have is slowing compilation a little and taking a bit more memory during compilation. I wouldn't be worried about that, though.
So the only real negative effect of having using directives you don't need is on IntelliSense, as the list of potential matches for completion while you type grows.
You may get name clashes if you create classes with the same names as the (unused) classes in the namespace. In the case of System.Text, you'll have a problem if you define a class named "Encoder".
Anyway, this is usually a minor problem, and it is detected by the compiler.
Leaving extra using directives is fine. There is a little value in removing them, but not much. For example, it makes my IntelliSense completion lists shorter, and therefore easier to navigate.
The compiled assemblies are not affected by extraneous using directives.
Sometimes I put them inside a #region, and leave it collapsed; this makes viewing the file a little cleaner. IMO, this is one of the few good uses of #region.
If you want to keep your code clean, unused using directives should be removed from the file. The benefit becomes very clear when you work in a collaborative team that needs to understand your code: all your code must be maintained, and less code means less work. The benefits are long-term.
Your application will not use more memory. The directives only help the compiler find the classes you use in the code files. Beyond being untidy, they really don't hurt.
It's personal preference, mainly. I clean them up myself (ReSharper does a good job of telling me when there are unneeded using directives).
One could say that it might decrease the time to compile, but with computer and compiler speeds these days, it just wouldn't make any perceptible impact.
They are just used as a shortcut. For example, you'd have to write System.Int32 each time if you did not have using System; at the top.
Removing unused ones just makes your code look cleaner.
The using directive just keeps you from having to qualify the types you use. I personally like to clean them up; really, it depends on how a lines-of-code metric is used.
Having only the namespaces that you actually use allows you to keep your code documented.
You can easily find what parts of your code are calling one another by any search tool.
If you have unused namespaces, such a search means nothing.
I'm working on cleaning up namespaces now, because I'm constantly asked what parts of the application access the same data one way or another.
I know which parts access data each way because the data access is separated by namespaces, e.g. directly through a database versus indirectly through a web service.
I can't think of a simpler way to do this all at once.
If you just want your code to be a black box (to the developers), then yes it doesn't matter. But if you need to maintain it over time it is valuable documentation like all other code.
The using directive does not affect performance, as it is merely a helper in qualifying the names of your identifiers. So instead of having to type System.IO.Path.Combine(...), you can simply type Path.Combine(...) if you have using System.IO.
Do not forget that the compiler does a lot of work to optimize everything when building your project. A using directive that is used in many places, or in only one, shouldn't make any difference once compiled.
