Does ReSharper help with system performance? - c#

I am wondering whether following all of the standards and rules from ReSharper will help me improve the performance of my code.
I am not talking about whether I will be faster at coding. I am asking whether the code that runs after applying all of ReSharper's recommendations will be faster than my original code.
Is there any kind of application that can detect performance issues by analyzing code?

Some of ReSharper's suggestions will help you notice potentially costly, performance-impacting mistakes. For example, if you iterate over an IEnumerable<> multiple times, ReSharper will warn you about that, and if your IEnumerable<> requires a round-trip to the database every time you enumerate it, that could end up hurting performance noticeably.
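A minimal sketch of that situation (Order and GetOrders are hypothetical; assume GetOrders returns a lazily-evaluated query that hits the database each time it is enumerated):

using System.Collections.Generic;
using System.Linq;

IEnumerable<Order> orders = GetOrders(); // lazy query, nothing executed yet

// ReSharper flags the following as "Possible multiple enumeration":
int count = orders.Count();   // first round-trip to the database
Order first = orders.First(); // second round-trip

// Materializing once avoids the repeated work:
List<Order> orderList = GetOrders().ToList(); // single round-trip
int count2 = orderList.Count;
Order first2 = orderList[0];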
However, that's not ReSharper's purpose. I would never rely on ReSharper to catch performance issues for you, or even assume that ReSharper's suggestions never hurt performance.
In fact, I would never trust the performance-related suggestions of any tool that only analyzes your code. Such tools have no idea what your day-to-day data is going to look like. Any auto-detectable change that's guaranteed to improve performance while leaving your program correct could be done just as easily by the compiler, without affecting your code's readability.
Performance is a tricky beast, and you really need a healthy mix of common sense, load tests, and profiling metrics to help you know what will help, what will hurt, and what really doesn't matter. In my experience, 99% of the decisions we make about how to write code fall into the last category, so it's best to treat performance separately from the day-to-day clean-code decisions that ReSharper helps you with.

Related

Do public classes & methods slow down your code?

My teacher has always said that we shouldn't write the same piece of code more than once while programming. But should a program whose priority is to be robust and fast use classes and methods instead of writing out the same code over and over? Does calling a method take a little more time than inlining the code directly?
For example, if I want to do this:
Program1.Action1();
Program1.Action2();
Program1.Action3();
and
Program2.Action1();
etc etc etc
and I want these actions to be performed as quickly as possible, should I call the methods, or write out the full code inline?
And this question leads to another one:
For a project, we need to make the code easily readable by the teacher, so we have a lot of class files ("tabs") in Visual Studio; we make everything public and call our classes and methods from our main form.
OK, it's quite organized and very easy to read, but doesn't that slow the code down?
Is a public class in its own file slower than a private class inside our main form?
I didn't find anything conclusive anywhere. Thank you.
You could always consider profiling the performance.
But really, you ought to trust that C# will be better than you at making such choices when compiling your code.
The things you state in your question seem like unnecessary micro-optimisations to me that will probably not make a scrap of difference.
Readability and the ability to scale your program are more important considerations: computers are tending to double in speed every year but programmers are getting more and more expensive.
You have one main question and some concerns.
Let me address them separately; first: public and private are not, per se, faster or slower. The compiler could, in theory, optimize more when private methods are involved, but I don't think there are many cases when that could make a difference. So, the short answer is NO, public does not slow down your code.
A simple function call has negligible cost. Unless you're writing number-crunching code that loops millions and millions of times, the cost of a few function calls is of no concern.
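If you ever want to convince yourself, here is a rough sketch of how you might measure it (the NoInlining attribute stops the JIT from optimizing the call away; exact numbers will vary by machine):

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class CallCostDemo
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static long Add(long a, long b) => a + b;

    static void Main()
    {
        const int N = 100_000_000;
        long sum = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            sum += Add(i, 1); // a real (non-inlined) call, a few nanoseconds each
        sw.Stop();
        Console.WriteLine($"{N} calls took {sw.ElapsedMilliseconds} ms (sum={sum})");
    }
}

You will typically see the whole loop finish in well under a second, which is the point: the call itself is not where performance is won or lost.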
So, if you don't have performance problems, you should not care about them. Do yourself a favor, while learning, and write this down 10 times: if you don't have performance problems, you should not care about them.
You should concentrate on code readability and algorithmic complexity, not on micro-optimizations which may or may not improve "performance" but can easily complicate the code and create bugs.
Easy to read and test is paramount in (dare I say it?) 98% of the software developed.

How much would wrapping every method in try-catch blocks slow a program down? [duplicate]

This question already has answers here:
Do try/catch blocks hurt performance when exceptions are not thrown?
So I'm working on this code base, and every single method is contained within a try-catch block, which just logs the exception to a logging file.
I'm considering waving my hands around and trying to get this changed, but I'm gone in a month, and I'm not sure how much it really slows the code down. Yeah, it's horrible practice, but that's par for the course. Yeah, it makes errors more difficult to debug, but it's "easier".
The deciding factor for people here is speed. So I'm wondering how much slower would this really make the code? I'm asking for a ballpark estimation from someone who knows compilers much better than I do.
I know there are a lot of duplicate questions about whether or not exceptions slow things down, and that it differs depending on compiler versions and the like, but I'm looking for a ballpark factor and some general advice here. I'm also just plain curious.
It's probably not going to slow down the running application, since it doesn't actually do anything until an exception is thrown. (Unless you're really trying to squeeze every bit of performance, and most applications don't need to do that. Not to mention that this coding style implies very heavily that this application probably has lots of other more prevalent problems.)
What it is doing is polluting the code with tons of unnecessary exception catching. (Note the difference between catching and meaningfully handling. I can guarantee you that this code is doing the former, not the latter.) It's slowing down development, and hours of developer time is a lot more expensive than milliseconds of system time.
Yeah, it's horrible practice, but that's par for the course.
Sounds like you've already left :)
Yeah, it makes errors more difficult to debug, but it's "easier".
Those are two mutually-exclusive statements. It can't be both more difficult and easier. Someone somewhere thought it would be easier. They were mistaken.
How much would wrapping every method in try-catch blocks slow a program down?
Measured in development time, a lot.
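To make the catching-versus-handling difference concrete, here is a sketch of the pattern described in the question next to one common alternative (Log and the method bodies are stand-ins, not the actual code base):

using System;

class OrderService
{
    // The pattern in the question: every method swallows and logs.
    public void SaveOrder(string order)
    {
        try
        {
            // ... actual work ...
        }
        catch (Exception ex)
        {
            Log(ex); // swallowed: the caller never learns the save failed
        }
    }

    // The alternative: let exceptions propagate from the work itself...
    public void SaveOrderOrThrow(string order)
    {
        // ... actual work, no try/catch ...
    }

    // ...and catch once, at an application boundary.
    public void HandleRequest(string order)
    {
        try
        {
            SaveOrderOrThrow(order);
        }
        catch (Exception ex)
        {
            Log(ex);
            throw; // rethrow so the failure stays visible
        }
    }

    static void Log(Exception ex) => Console.Error.WriteLine(ex);
}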
Putting a try/catch in every method to catch the general exception ends up being a pain for reasons other than performance. Namely, the application continues to "function" regardless of its state. In other words, not all exceptions can or should be handled. E.g., what happens when the database goes down, the email server fails, or a bad transaction occurs (or something that should have been transactional but wasn't, because of poor design)?
I've worked in similar environments. The concern ultimately was not one of performance. I fear places that throw around "performance reasons" as a vague, catch-all justification for otherwise arbitrary decisions... I digress.
However, if you are out in a month, then I caution you to consider whether the argument is worth having. The development shop has proven to be below your standards. It's important not to burn a bridge, as a bad reference can cost you a future position.
Code within a try block is limited in how it can be optimized by the compilers. You would, in effect, limit optimizations if you wrapped every method with a try/catch. See http://msmvps.com/blogs/peterritchie/archive/2007/06/22/performance-implications-of-try-catch-finally.aspx and http://msmvps.com/blogs/peterritchie/archive/2007/07/12/performance-implications-of-try-catch-finally-part-two.aspx
The degree to which performance would be affected would be application-specific.

Ways to improve efficiency of C# code [closed]

Like most of us, I am a big fan of improving efficiency of code. So much so that I would rather choose fast-executing dirty code over something which might be more elegant or clean, but slower.
Fortunately for all of us, in most cases the faster and more efficient solutions are also the cleaner and more elegant ones. I used to be just a dabbler in programming, but I am in full-time development now and have just started with C# and web development. I have been reading some good books on these subjects, but sadly, books rarely cover the finer aspects - like, say, which of two pieces of code that do the same thing will run faster. This kind of knowledge comes mostly through experience. I ask all fellow programmers to share any such knowledge here.
Here, I'll start off with these two blog posts I came across. This is exactly the kind of stuff I am looking for in this post:
Stringbuilder vs String performance analysis
The cost of throwing an exception
P.S.: Do let me know if this kind of thing already exists somewhere on this site; I searched but couldn't find anything, surprisingly. Also, please post any book you know of that covers such things.
P.P.S.: If you learned something from a blog post or an online source that we all have access to, it would be better to post the link itself, IMO.
There are some things you should do like use generics instead of objects to avoid boxing/unboxing and also improve the code safety, but the best way to optimize your code is to use a profiler to determine which parts of your code are slow. There are many great profilers for .NET code available and they can help determine the bottlenecks in your programs.
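As a quick sketch of the boxing point (the non-generic ArrayList stores everything as object, so every int it holds is boxed; the generic List<int> avoids that):

using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        // Non-generic: the int is boxed to object on Add,
        // and unboxed again by the cast on the way out.
        var boxed = new ArrayList();
        boxed.Add(42);
        int a = (int)boxed[0];

        // Generic: stored directly as int - no boxing, and type-safe.
        var list = new List<int>();
        list.Add(42);
        int b = list[0];
    }
}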
Generally you shouldn't concern yourself with small ways to improve code efficiency; instead, when you are done coding, profile it to find the bottlenecks.
A good profiler will tell you stats like how many times a function was executed, what the average running time was for a function, what the peak running time was for a function, what the total running time was for a function, etc. Some profilers will even draw graphs for you so you can visually see which parts of the program are the biggest bottleneck and you can drill down into the sub function calls.
Without profiling you will most likely be wrong about which part of your program is slow.
An example of a great and free profiler for .NET is the EQATEC Profiler.
The single most important thing regarding this question is: Don't optimize prematurely!
There is only one good time to optimize and that is when there are performance constraints that your current working implementation cannot fulfill. Then you should get out a profiler and check which parts of your code are slow and how you can fix them.
Thinking about optimization while coding the first version is mostly wasted time and effort.
"I would rather choose fast-executing dirty code over something which might be more elegant or clean, but slower."
If I were writing a pixel renderer for a game, perhaps I'd consider doing this - however, when responding to a user's click on a button, for example, I'd always favour the slower, elegant approach over quick-and-dirty (unless slow > a few seconds, when I might reconsider).
I have to agree with the other posts - profile to determine where your slow points are and then deal with those. Writing optimal code from the outset is more trouble than it's worth; you'll usually find that what you think will be slow will be just fine, and the real slow areas will surprise you.
One good resource for .net related performance info is Rico Mariani's Blog
IMO it's the same for all programming platforms/languages: you have to use a profiler and see which parts of the code are slow, and then optimize those parts.
While the links you provided offer valuable insight, don't do such things in advance; measure first and then optimize.
edit:
http://www.codinghorror.com/blog/2009/01/the-sad-tragedy-of-micro-optimization-theater.html
When to use StringBuilder?
At what point does using a StringBuilder become insignificant or an overhead?
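For reference, a minimal sketch of the difference those links discuss (repeated concatenation versus StringBuilder):

using System.Text;

class ConcatDemo
{
    // Repeated += allocates a brand-new string every iteration: O(n^2) copying.
    static string Concatenate(string[] parts)
    {
        string s = "";
        foreach (string part in parts)
            s += part;
        return s;
    }

    // StringBuilder appends into a growable buffer: roughly O(n) work.
    static string Build(string[] parts)
    {
        var sb = new StringBuilder();
        foreach (string part in parts)
            sb.Append(part);
        return sb.ToString();
    }
}

For a handful of short strings the difference is insignificant; it is in loops over many pieces that StringBuilder pays off.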
There are lots of tricks, but if that's what you're thinking you need, you need to start over. The secret of performance in any language is not in coding techniques, it is in finding what to optimize.
To make an analogy, if you're a police detective, and you want to put robbers in jail, the heart of your business is not about different kinds of jails. It is about finding the robbers.
I rely on a purely manual method of profiling. This is an example of finding a series of points to optimize, resulting in a speedup of 43 times.
If you do this on an existing application, you are likely to discover that the main cause of slow performance is overblown data structure design, resulting in an excess of notification-style consistency maintenance, characterized by an excessively bushy call tree. You need to find the calls in the call tree that cost a lot and that you can prune.
Having done that, you may realize that a way of designing software that uses the bare minimum of data structure and abstractions will run faster to begin with.
If you've profiled your code, and found it to be lacking swiftness, then there are some micro-optimizations you can sometimes use. Here's a short list.
Micro-optimize judiciously - it's like the mirror from Harry Potter: if you're not careful, you'll spend all your time staring into it and get nothing else done, without much to show for it.
The StringBuilder and exception throwing examples are good ones - those are mistakes I used to make which sometimes added seconds to a function execution. When profiling, I find I personally use up a lot of cycles simply finding things. In that case, I cache frequently accessed objects using a hashtable (or a dictionary).
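A minimal sketch of that caching idea (LoadObject is a stand-in for whatever expensive lookup is being cached):

using System.Collections.Generic;

class ObjectCache
{
    private readonly Dictionary<string, object> _items =
        new Dictionary<string, object>();

    public object Get(string key)
    {
        // Pay the expensive lookup once; afterwards, serve from the dictionary.
        if (!_items.TryGetValue(key, out var value))
        {
            value = LoadObject(key); // stand-in for the expensive work
            _items[key] = value;
        }
        return value;
    }

    private static object LoadObject(string key) => new object(); // placeholder
}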
Good program architecture gives you far better optimization than an optimized function.
The biggest optimization is to avoid if/else branches in runtime code by resolving them at initialization time.
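One way to read that advice (a sketch: the branch runs once in the constructor, so the hot path carries no per-call if/else; useFastPath is a hypothetical configuration flag):

using System;

class Pipeline
{
    private readonly Func<int, int> _transform;

    public Pipeline(bool useFastPath)
    {
        // Decide once, at initialization time...
        if (useFastPath)
            _transform = FastPath;
        else
            _transform = SlowPath;
    }

    // ...so each call just invokes the chosen delegate, with no branching.
    public int Process(int x) => _transform(x);

    static int FastPath(int x) => x << 1;
    static int SlowPath(int x) => checked(x * 2);
}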
Overall, though, optimization is a bad idea, because the most valuable thing is a readable program, not a fast program.
http://www.techgalaxy.net/Docs/Dev/5ways.htm has some very good points... just came across it today.

Why does ReSharper invert IFs for C# code? Does it give better performance (even slightly)?

Consider the following code sample:
private void AddEnvelope(MailMessage mail)
{
    if (this.CopyEnvelope)
    {
        // Perform a few operations
    }
}
vs
private void AddEnvelope(MailMessage mail)
{
    if (!this.CopyEnvelope) return;
    // Perform a few operations
}
Will the bottom code execute any faster? Why would ReSharper make this recommendation?
Update
Having thought about this question the answer might seem obvious to some. But lots of us developers were never in the habit of nesting zounds of if statements in the first place...
It doesn't matter. Stop agonizing over performance issues that don't exist - use a profiler to identify areas in your code that DO exhibit issues, and fix them. Proactive optimization - before you know that there is a problem - is by definition a waste of time.
Updated Answer:
It's a code maintainability suggestion. Easier to read than nesting the rest of the code in an IF statement. Examples/discussion of this can be seen at the following links:
Flattening Arrow Code
Replace Nested Conditional With Guard Clauses
Code Contracts section "Legacy Requires Statements"
Original Answer:
It will actually run (very negligibly) slower, from having to perform a NOT operation.
That cost is so negligible, in fact, that some people consider the inverted form the prettier way to code, since it avoids an extra level of indentation for the bulk of the method.
It's a refactor of a conditional that encompasses the entire method contents to a Guard Clause. It has nothing to do with optimization.
I like the comments about optimizing things like this, to add a little more to it...
The only time I can think of that it makes sense to optimize your if statements is when you have the results of TWO or more longish running methods that need to be combined to determine to do something else. You would only want to execute the second operation if the first operation yielded results that would pass the condition. Putting the one that is most likely to return false first will generally be a smarter choice. This is because if it is false, the second one will not be evaluated at all. Again, only worth worrying about if the operations are significant and you can predict which is more likely to pass or fail. Invert this for OR... if true, it will only evaluate the first, and so optimize that way.
i.e.
if (ThisOneUsuallyPasses() && ThisOneUsuallyFails())
isn't so good as
if (ThisOneUsuallyFails() && ThisOneUsuallyPasses())
because it's only on the odd case that the first one actually works that you have to look at the second. There's some other flavors of this you can derive, but I think you should get the point.
Better to worry about how you use strings, collections, index your database, and allocate objects than spend a lot of time worrying about single condition if statements if you are worrying about perf.
In general, what the bottom code you give will do is give you an opportunity to avoid a huge block of code inside an if statement, which can lead to silly typo-driven errors. Old-school thinking was that you should only have one return point in a method, to avoid a different breed of coder error. Current thinking (at least by some of the tool vendors, such as JetBrains ReSharper) seems to be that wrapping the least amount of code inside conditional statements is better. Anything more than that would be subjective, so I'll leave it at that.
This kind of "Optimizations" are not worth the time spent on refactoring your code, because all modern compilers does already enough small optimizations that they make this kind of tips trivial.
As mentioned above Performance Optimization is done through profilers to calculate how your system is performing and the potential bottlenecks before applying the performance fix, and then after the performance fix to see if your fix is any good.
Required reading: Cyclomatic_complexity
Cyclomatic complexity is a quantitative measure of the number of linearly independent paths through a program's source code.
Which means that every time you branch using an if statement, you increase the cyclomatic complexity by 1.
To test each linearly independent path through the program, the number of test cases needed will equal the cyclomatic complexity of the program.
Which means, if you want to test your code completely, for each if statement you would have to introduce a new test case.
So, by introducing more if statements the complexity of your code increases, as does the number of test cases required to test it.
By removing if statements, your code complexity decreases as does the number of test cases required to test.

C#, writing longer methods or shorter methods?

I am getting two contradictory views on this. One source says there should be fewer small methods, to reduce the number of method calls, but another source says that writing shorter methods is good for letting the JIT do its optimization.
So, which side is correct?
The overhead of actually making the method call is inconsequentially small in almost every case. You never need to worry about it unless you can clearly identify a problem down the road that requires revisiting the issue (you won't).
It's far more important that your code is simple, readable, modular, maintainable, and modifiable. Methods should do one thing, one thing only and delegate sub-things to other routines. This means your methods should be as short as they can possibly be, but not any shorter. You will see far more performance benefits by having code that is less prone to error and bugs because it is simple, than by trying to outsmart the compiler or the runtime.
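As a sketch of what "one thing, one thing only" can look like in practice (the names and types here are illustrative):

class User { /* fields elided */ }

class Registration
{
    // The top-level method reads as a summary of the steps...
    public void RegisterUser(User user)
    {
        Validate(user);
        Save(user);
        SendWelcomeEmail(user);
    }

    // ...and each step is short, named for what it does, and testable alone.
    void Validate(User user) { /* ... */ }
    void Save(User user) { /* ... */ }
    void SendWelcomeEmail(User user) { /* ... */ }
}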
The source that says methods should be long is wrong, on many levels.
Neither; you should have relatively short methods, to achieve readability.
There is no single simple rule about function size. The guideline is that a function should do "one thing". That's a little vague, but it becomes easier with experience. Small functions generally lead to readability; big ones are occasionally necessary.
Worrying about the overhead of method calls is premature optimization.
As always, it's about finding a good balance. The most important thing is that the method does one thing only. Longer methods tend to do more than one thing.
The best single criterion to guide you in sizing methods is to keep them well-testable. If you can (and actually DO!-) thoroughly unit-test every single method, your code is likely to be quite good; if you skimp on testing, your code is likely to be, at best, mediocre. If a method is difficult to test thoroughly, then that method is likely to be "too big" -- trying to do too many things, and therefore also harder to read and maintain (as well as badly-tested and therefore a likely haven for bugs).
First of all, you should definitely not be micro-optimizing performance at the number-of-methods level. You will most likely not get any measurable performance benefit. Only if some method is being called in a tight loop millions of times might it be worth considering - and don't start optimizing there before you need to.
You should stick to short, concise methods that do one thing each, so that the intent of each method is clear. This will give you easier-to-read code that is easier to understand and promotes code reuse.
The most important cost to consider when writing code is maintainability. You will spend much, much more time maintaining an application and fixing bugs than you ever will fixing performance problems.
In this case, the almost certainly insignificant cost of calling a method is incredibly small compared to the cost of maintaining a large, unwieldy method. Small, concise methods are easier to maintain and comprehend. Additionally, the cost of calling the method will almost certainly not have a significant performance impact on your application - and if it does, you can only ascertain that by using a profiler. Developers are notoriously bad at identifying performance problems beforehand.
Generally speaking, once a performance problem is identified, it is easy to fix. Making a method, or more importantly a code base, maintainable is a much higher cost.
Personally, I am not afraid of long methods, as long as the person writing them writes them well (every sub-task separated by two newlines and preceded by a nice comment, etc. Also, indentation is very important).
In fact, many times I even prefer them (e.g. when writing code that does things in a specific order with sequential logic).
Also, I really don't understand why breaking a long method into 100 pieces would improve readability (as others suggest). Quite the opposite: you end up jumping all over the place and holding pieces of code in your memory just to get a complete picture of what is going on in your code. Combine that with a possible lack of comments, bad function names, and many similar function names, and you have the perfect recipe for chaos.
Also, you could go to the other extreme while trying to reduce the size of methods: creating MANY classes and MANY functions, each of which may take MANY parameters. I don't think this improves readability either (especially for a beginner on a project who has no clue what each class/method does).
And the demand that "a function should do one thing" is very subjective. "One thing" may be anything from incrementing a variable to doing a ton of work, supposedly for the "same thing".
My only rule is reusability:
The same code should not appear many times in many places. If this is the case you need a new function.
All the rest is just philosophical talk.
In a question of "why do you make your methods so big" I reply, "why not if the code is simple?".
