Ways to improve efficiency of C# code [closed]

Like most of us, I am a big fan of improving the efficiency of code. So much so that I would rather choose fast-executing dirty code over something that might be more elegant or clean, but slower.
Fortunately for all of us, in most cases the faster, more efficient solutions are also the cleaner and more elegant ones. I used to be just a dabbler in programming, but I am in full-time development now and have just started with C# and web development. I have been reading some good books on these subjects, but sadly, books rarely cover the finer aspects, such as which of two pieces of code that do the same thing will run faster. That kind of knowledge comes mostly through experience. I ask all fellow programmers to share any such knowledge here.
Here, I'll start off with these two blog posts I came across. This is exactly the kind of stuff I am looking for in this post:
Stringbuilder vs String performance analysis
The cost of throwing an exception
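For instance, here is a minimal sketch of the StringBuilder point (my own illustration, not taken from the post above):

using System;
using System.Text;

class StringBuilderDemo
{
    static void Main()
    {
        // Naive concatenation: every += allocates a brand-new string,
        // so building n pieces costs O(n^2) in copied characters.
        string slow = "";
        for (int i = 0; i < 10000; i++)
            slow += i;

        // StringBuilder appends into a growable buffer and allocates
        // the final string only once, in ToString().
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10000; i++)
            sb.Append(i);
        string fast = sb.ToString();

        Console.WriteLine(slow == fast); // True; only the speed differs
    }
}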
P.S.: Do let me know if this kind of thing already exists somewhere on this site; I searched but, surprisingly, couldn't find anything. Also, please post any book you know of that covers such things.
P.P.S.: If you learned something from a blog post or another online source we all have access to, it would be better to post the link itself, IMO.

There are some things you should do as a matter of course, like using generics instead of objects to avoid boxing/unboxing and to improve code safety, but the best way to optimize your code is to use a profiler to determine which parts of it are slow. There are many great profilers for .NET, and they can help you find the bottlenecks in your programs.
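For example, here is a minimal sketch of the boxing difference (the values are arbitrary; this is an illustration, not a benchmark):

using System;
using System.Collections;          // ArrayList: stores object
using System.Collections.Generic;  // List<T>: stores T directly

class BoxingDemo
{
    static void Main()
    {
        // ArrayList stores object, so each int is boxed on Add
        // and must be unboxed (with a cast) on the way out.
        ArrayList boxed = new ArrayList();
        boxed.Add(42);            // boxes the int
        int a = (int)boxed[0];    // unboxes it; wrong casts fail only at runtime

        // List<int> avoids both the boxing and the cast, and the
        // compiler catches type errors at compile time.
        List<int> unboxed = new List<int>();
        unboxed.Add(42);
        int b = unboxed[0];

        Console.WriteLine(a + b);
    }
}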
Generally you shouldn't concern yourself with small ways to improve code efficiency; instead, when you are done coding, profile it to find the bottlenecks.
A good profiler will give you statistics such as how many times a function was executed, its average running time, its peak running time, its total running time, and so on. Some profilers will even draw graphs so you can see visually which parts of the program are the biggest bottlenecks, and let you drill down into the sub-function calls.
Without profiling you will most likely be wrong about which part of your program is slow.
An example of a great and free profiler for .NET is the EQATEC Profiler.

The single most important thing regarding this question is: Don't optimize prematurely!
There is only one good time to optimize and that is when there are performance constraints that your current working implementation cannot fulfill. Then you should get out a profiler and check which parts of your code are slow and how you can fix them.
Thinking about optimization while coding the first version is mostly wasted time and effort.

"I would rather choose fast-executing dirty code over something which might be more elegant or clean, but slower."
If I were writing a pixel renderer for a game, perhaps I'd consider doing this - however, when responding to a user's click on a button, for example, I'd always favour the slower, elegant approach over quick-and-dirty (unless slow > a few seconds, when I might reconsider).
I have to agree with the other posts: profile to determine where your slow points are and then deal with those. Writing optimal code from the outset is more trouble than it's worth; you'll usually find that what you think will be slow is just fine, and the real slow areas will surprise you.

One good resource for .NET-related performance info is Rico Mariani's blog.

IMO it's the same for all programming platforms and languages: you have to use a profiler to see which parts of the code are slow, and then optimize those parts.
While the links you provided offer valuable insights, don't do such things in advance; measure first and then optimize.
edit:
http://www.codinghorror.com/blog/2009/01/the-sad-tragedy-of-micro-optimization-theater.html
When to use StringBuilder?
At what point does using a StringBuilder become insignificant or an overhead?

There are lots of tricks, but if tricks are what you think you need, then you need to start over. The secret of performance in any language is not in coding techniques; it is in finding what to optimize.
To make an analogy, if you're a police detective, and you want to put robbers in jail, the heart of your business is not about different kinds of jails. It is about finding the robbers.
I rely on a purely manual method of profiling. This is an example of finding a series of points to optimize, resulting in a compounded speedup of 43 times.
If you do this on an existing application, you are likely to discover that the main cause of slow performance is overblown data structure design, resulting in an excess of notification-style consistency maintenance, characterized by an excessively bushy call tree. You need to find the calls in the call tree that cost a lot and that you can prune.
Having done that, you may realize that a way of designing software that uses the bare minimum of data structure and abstractions will run faster to begin with.

If you've profiled your code, and found it to be lacking swiftness, then there are some micro-optimizations you can sometimes use. Here's a short list.
Micro-optimize judiciously; it's like the Mirror of Erised from Harry Potter: if you're not careful, you'll spend all your time gazing into it and get nothing else done, without getting much in return.
The StringBuilder and exception-throwing examples are good ones; those are mistakes I used to make that sometimes added seconds to a function's execution. When profiling, I find I personally use up a lot of cycles simply finding things, so I cache frequently accessed objects in a hashtable (or a dictionary), as sketched below.
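A rough sketch of that caching pattern, assuming a Dictionary and a hypothetical Customer type with an expensive database lookup behind it:

using System.Collections.Generic;

class CustomerCache
{
    private readonly Dictionary<int, Customer> _cache = new Dictionary<int, Customer>();

    public Customer GetCustomer(int id)
    {
        Customer customer;
        if (!_cache.TryGetValue(id, out customer))
        {
            // The slow path (e.g. a database round-trip) runs once per id;
            // every later request is a cheap dictionary lookup.
            customer = LoadCustomerFromDatabase(id);
            _cache[id] = customer;
        }
        return customer;
    }

    private Customer LoadCustomerFromDatabase(int id)
    {
        // Placeholder for the real expensive lookup.
        return new Customer { Id = id };
    }
}

class Customer
{
    public int Id { get; set; }
}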

Good program architecture buys you far more than optimizing individual functions.
One of the biggest such optimizations is to avoid if/else branching in runtime code by resolving those decisions at initialization time; see the sketch below.
Overall, though, optimization is usually a bad idea, because a readable program is more valuable than a fast one.
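One way to read that advice, as a hedged sketch: make the decision once at construction time and store it as a delegate, so the hot path never re-tests the condition (the Formatter class here is hypothetical):

using System;

class Formatter
{
    // The branch is evaluated once, at initialization time...
    private readonly Func<double, string> _format;

    public Formatter(bool useCurrency)
    {
        if (useCurrency)
            _format = v => v.ToString("C");
        else
            _format = v => v.ToString("F2");
    }

    // ...so the code that runs per call is a straight delegate
    // invocation with no branch to re-test.
    public string Format(double value)
    {
        return _format(value);
    }
}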

http://www.techgalaxy.net/Docs/Dev/5ways.htm has some very good points... just came across it today.

Related

Do public classes & methods slow down your code?

My teachers have always said that we shouldn't write the same piece of code more than once. But should code whose priority is to be robust and quick use classes and methods, or should it write out the same code over and over? Does calling a method take a little more time than writing the code inline?
For example, if I want to do this:
Program1.Action1();
Program1.Action2();
Program1.Action3();
&
Program2.Action1();
etc etc etc
and I want these actions to be performed as quickly as possible, should I call the Action() methods or write out the full code inline?
And this question leads to another one:
For a project we need to make the code easily readable by the teacher, so we have a lot of class tabs in Visual Studio; we make everything public and call our classes and methods from our main form.
OK, it's quite organized and very easy to read, but doesn't it slow the code down?
Is a public class in its own tab slower than a private class inside our main form?
I couldn't find anything conclusive anywhere. Thank you.
You could always consider profiling the performance.
But really, you ought to trust that C# will be better than you at making such choices when compiling your code.
The things you state in your question seem like unnecessary micro-optimisations to me that will probably not make a scrap of difference.
Readability and the ability to scale your program are more important considerations: computers are tending to double in speed every year but programmers are getting more and more expensive.
You have one main question and some concerns.
Let me address them separately; first: public and private are not, per se, faster or slower. The compiler could, in theory, optimize more when private methods are involved, but I don't think there are many cases when that could make a difference. So, the short answer is NO, public does not slow down your code.
A simple function call has negligible cost. Unless you're writing number-crunching code that loops millions and millions of times, the cost of a few function calls is of no concern.
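If you want to convince yourself, here is a rough sketch of a measurement (timings vary by machine; in a release build the JIT will typically inline the tiny method, which is exactly the point):

using System;
using System.Diagnostics;

class CallCostDemo
{
    static int AddOne(int x) { return x + 1; }

    static void Main()
    {
        const int n = 100000000;
        int sum = 0;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
            sum = sum + 1;                       // the work written inline
        sw.Stop();
        Console.WriteLine("inline: {0} ms", sw.ElapsedMilliseconds);

        sum = 0;
        sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
            sum = AddOne(sum);                   // the same work via a method call
        sw.Stop();

        // With inlining the two timings are usually indistinguishable.
        Console.WriteLine("method: {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);
    }
}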
So, if you don't have performance problems, you should not care about them. Do yourself a favor, while learning, and write this down 10 times: if you don't have performance problems, you should not care about them.
You should concentrate on code readability and algorithmic complexity, not on micro-optimizations that may or may not improve "performance" but can easily complicate the code and create bugs.
Easy to read and test is paramount in (dare I say it?) 98% of the software developed.

Does Resharper help with system performance?

I am wondering whether following all the standards and rules from Resharper will help me improve the performance of my code.
I am not talking about whether I will be faster at coding; I am asking whether the code that runs after applying all of Resharper's recommendations will be faster than my original code.
Are there any kinds of applications that can detect performance issues by analyzing code?
Some of Resharper's suggestions will help you notice potentially costly mistakes. For example, if you iterate over an IEnumerable<> multiple times, Resharper will warn you about it, and if your IEnumerable<> requires a round-trip to the database every time you enumerate it, that can end up hurting performance noticeably.
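A sketch of that trap; the query parameter stands in for something genuinely expensive, such as a LINQ query backed by a database:

using System;
using System.Collections.Generic;
using System.Linq;

class EnumerationDemo
{
    static void Report(IEnumerable<int> expensiveQuery)
    {
        // Each call below re-runs the query from scratch: three full
        // enumerations, which could mean three database round-trips.
        Console.WriteLine(expensiveQuery.Count());
        Console.WriteLine(expensiveQuery.Min());
        Console.WriteLine(expensiveQuery.Max());

        // Materializing once avoids the repeated enumeration.
        List<int> results = expensiveQuery.ToList();
        Console.WriteLine("{0} {1} {2}", results.Count, results.Min(), results.Max());
    }

    static void Main()
    {
        Report(Enumerable.Range(1, 10));
    }
}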
However, that's not Resharper's purpose. I would never rely on Resharper to help you catch performance issues, or even assume that Resharper's suggestions never hurt performance.
In fact, I would never trust the performance-related suggestions of any tool that only analyzes your code. Such tools have no idea what your day-to-day data is going to look like. Any auto-detectable change that's guaranteed to improve performance while leaving your program correct could be done just as easily by the compiler, without affecting your code's readability.
Performance is a tricky beast, and you really need a healthy mix of common sense, load tests, and profiling metrics to help you know what will help, what will hurt, and what really doesn't matter. In my experience, 99% of the decisions we make concerning how to write code fall into the latter category, so it's best to focus on performance separately from the day-to-day clean code decisions that Resharper helps you with.

Debugging and improving the efficiency of C# WinForms code

I have written a WinForms application in C#. How can I check the performance of my code? By that I mean: how can I check which form references are active at a given time or event, so that I can remove the ones that are not required (and make them available for garbage collection)? Is there a way to do this using VS 2005 or any free tool? Any tutorials or guides would be useful.
[Edit] Sorry if my question is confusing. I am not looking for a professional tool, but ways to know/understand the working of my code better and code more efficiently.
Thanks
Making code efficient is always a secondary step for me. First I write the code so that it works; then I profile it if I am unhappy with the performance. The truth is that most applications run fast enough after the first pass. Sometimes, though, better performance is needed, and performance can be gained in many different ways; it all depends on your application. I write LOB apps mainly, so I deal with a lot of I/O to databases, services, and storage. These calls are all very expensive and need to be limited, so they are my first area to optimize: lazy-loading, eager-loading, batching calls, making calls less frequently, and so on. I recently had a WinForms app that created hundreds of controls dynamically, and it took a long time; that's another kind of bottleneck to address. I use a profiler to measure the performance of my applications.
Use the free EQATEC profiler. It will show you how long calls take and how many times each call is made. The profiler gives a nice report and a visual display that can drill down into the call stacks.
Red Gate Performance Profiler
...it's been said here a million times before. If you suspect performance issues, profile your application. It will tell you how long calls are taking and point out the bottlenecks in your code.
Kobra,
What you're looking for is called a memory profiler. There happens to be a (paid) one for .NET aptly named ".NET Memory Profiler". I've not used it extensively, but it should answer the questions you're asking. There are a few others that do basically the same thing: giving you instance counts of loaded types and helping you identify why instances are not being garbage collected (e.g. event handler references, static properties, etc.).
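To make the event-handler point concrete, here is a minimal sketch (DetailForm and DataService are hypothetical names; the pattern is what matters):

using System;
using System.Windows.Forms;

class DataService
{
    public event EventHandler DataChanged;

    public void NotifyChanged()
    {
        EventHandler handler = DataChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

class DetailForm : Form
{
    private readonly DataService _service;

    public DetailForm(DataService service)
    {
        _service = service;
        // Subscribing gives the long-lived service a reference to this form...
        _service.DataChanged += OnDataChanged;
        // ...so unsubscribe when the form closes, or the form (and everything
        // it references) can never be garbage collected.
        FormClosed += delegate { _service.DataChanged -= OnDataChanged; };
    }

    private void OnDataChanged(object sender, EventArgs e)
    {
        Text = "Data updated at " + DateTime.Now;
    }
}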
Hope this helps,
Dylan

What is the best way to debug performance problems?

I'm writing a plug-in for another program in C#/.NET, and am having performance issues where commands take a lot longer than I would like. The plug-in reacts to events in the host program, and also depends on utility methods of the host program's SDK. My plug-in has a lot of recursive functions because I'm doing a lot of reading and writing to a tree structure. I also have a lot of event subscriptions between my plug-in and the host application, as well as between classes in my plug-in.
How can I figure out what is taking so long for a task to complete? I can't use regular breakpoint-style debugging; it's not that it doesn't work, it's just too slow. I have set up a static LogWriter class that I can reference from all my classes and that writes timestamped lines to a log file. Is there another way? Does Visual Studio keep some kind of timestamped log that I could use instead? Is there some way to view the call stack after the application has closed?
You need to use a profiler. Here's a link to a good one: ANTS Performance Profiler.
Update: You can also write messages at control points using Debug.Write, then load the DebugView application, which displays all your debug strings with precise timestamps. It is freeware and very good for quick debugging and profiling.
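A sketch of that technique; ProcessTree is a hypothetical stand-in for one of your slow operations:

using System;
using System.Diagnostics;

class TraceDemo
{
    static void ProcessTree()
    {
        // Debug.* calls are compiled into DEBUG builds only; when the app
        // runs without a debugger attached, DebugView captures the output.
        Stopwatch sw = Stopwatch.StartNew();
        Debug.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff") + " entering ProcessTree");

        // ... the suspect recursive work goes here ...

        Debug.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff") +
                        " leaving ProcessTree after " + sw.ElapsedMilliseconds + " ms");
    }
}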
My Profiler List includes ANTS, dotTrace, and AQtime.
However, looking more closely at your question, it seems to me that you should do some unit testing at the same time you're doing profiling. Maybe start by doing a quick overall performance scan, just to see which areas need most attention. Then start writing some unit tests for those areas. You can then run the profiler while running those unit tests, so that you'll get consistent results.
In my experience, the best method is also the simplest. Get it running, and while it is being slow, hit the "pause" button in the IDE. Then make a record of the call stack. Repeat this several times. (Here's a more detailed example and explanation.)
What you are looking for is any statement that appears on more than one stack sample that isn't strictly necessary. The more samples it appears on, the more time it takes. The way to tell if the statement is necessary is to look up the stack, because that tells you why it is being done.
Anything that causes a significant amount of time to be consumed will be revealed by this method, and recursion does not bother it.
People seem to tackle problems like this in one of two ways:
Try to get good measurements before doing anything.
Just find something big that you can get rid of, rip it out, and repeat.
I prefer the latter, because it's fast, and because you don't have to know precisely how big a tumor is to know it's big enough to remove. What you do need to know is exactly where it is, and that's what this method tells you.
Sounds like you want a code 'profiler'. http://en.wikipedia.org/wiki/Code_profiler#Use_of_profilers
I'm unfamiliar with which profilers are the best for C#, but I came across this link after a quick Google search; it has a list of free, open-source offerings. I'm sure someone else will know which ones are worth considering :)
http://csharp-source.net/open-source/profilers
Despite the title of this topic I must argue that the "best" way is subjective, we can only suggest possible solutions.
I have had experience using Redgate ANTS Performance Profiler which will show you where the bottlenecks are in your application. It's definitely worth checking out.
Visual Studio Team System has a profiler baked in; it's far from perfect, but for simple applications you can more or less get it to work.
Recently I have had the most success with EQATEC's free profiler, or with rolling my own tiny profiling class where needed.
Also, there have been quite a few questions about profilers in the past; see: http://www.google.com.au/search?hl=en&q=site:stackoverflow.com+.net+profiler&btnG=Google+Search&meta=&aq=f&oq=
Don't ever forget Rico Mariani's advice on how to carry out a good perf investigation.
You can also use performance counters for ASP.NET applications.

C# How can I determine where the slow parts of my code are?

I've not been coding long, so I'm not familiar with which techniques are quickest. I was wondering if there is a way to do this in VS or with a 3rd-party tool?
Thanks
Profilers are great for measuring.
But your question was "How can I determine where the slow parts of my code are?".
That is a different problem. It is diagnosis, not measurement.
I know this is not a popular view, but it's true.
It is like a business that is trying to cut costs.
One approach (top down) is to measure the overall finances, then break it down by categories and departments, and try to guess what could be eliminated. That is measurement.
Another approach (bottom up) is to walk in at random into an office, pick someone at random, and ask them what they are doing at that moment and (importantly) why, in detail.
Do this more than once.
That is what Harry Truman did at the outbreak of WW2, in the US defense industry, and immediately uncovered massive fraud and waste, by visiting several sites. That is diagnosis.
In code you can do this in a very simple way: "Pause" it and ask it why it is spending that particular cycle. Usually the call stack tells you why, in detail.
Do this more than once.
This is sampling. Some profilers sample the call stack. But then for some reason they insist on summarizing time spent in each function, inclusive and exclusive. That is like summarizing by department in business, inclusive and exclusive.
It loses the information you need, which is the fine-grain detail that tells if the cycles are necessary.
To answer your question:
Just pause your program several times, and capture the call stack each time. If your code is very slow, the wasteful function calls will be on nearly every stack. They will point with precision to the "slow parts of your code".
ADDED: RedGate ANTS is getting there. It can give you cost-by-line, and it is quite spiffy. So if you're in .NET, can spare three figures, and don't mind waiting around to install and learn it, it can tell you much of what your Pause key can tell you, and be much prettier about it.
Profiling.
RedGate has a product.
JetBrains has a product.
I've used ANTS Profiler and I can join the others in recommending it.
The price is negligible when you compare it with the amount of dev hours it will save you.
If you're a developer for a living and your company won't buy it for you, either change companies or buy it for yourself.
For profiling large complex UI applications then you often need a set of tools and approaches. I'll outline the approach and tools I used recently on a project to improve the performance of a .Net 2.0 UI application.
First of all I interviewed users and worked through the use cases myself to come up with a list of target use cases that highlighted the system's worst-performing areas. I.e., I didn't want to spend n man-days optimising a feature that was hardly ever used but very slow; I would, however, want to spend time optimising a feature that was only a little bit sluggish but invoked a thousand times a day.
Once the candidate use cases were identified, I instrumented my code with my own lightweight logging class (I used some high-performance timers and a custom logging solution because I needed sub-millisecond accuracy). You might, however, be able to get away with log4net and timestamps. The reason I instrumented the code is that it is sometimes easier to read your own logs than the profiler's output. I needed both for a variety of reasons (e.g. measuring .NET user control layouts is not always straightforward using the profiler).
I then ran my instrumented code with the ANTS profiler and profiled the use case. By combining the ANTS profile and my own log files I was very quickly able to discover problems with our application.
We also profiled the server as well as the UI and were able to work out breakdowns for time spent in the UI, time spent on the wire, time spent on the server etc.
Also worth noting is that one run isn't enough, and the first run is usually worth throwing away. Let me explain: PC load, network traffic, JIT compilation status, etc. can all affect the time a particular operation takes. A simple strategy is to measure an operation n times (say 5), throw away the slowest and fastest runs, and then analyse the remaining profiles, as sketched below.
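Here is a sketch of that measurement strategy (the Measure class is my own illustration, not from any library):

using System;
using System.Collections.Generic;
using System.Diagnostics;

static class Measure
{
    // Runs `operation` `runs` times (runs must be at least 3), discards the
    // fastest and slowest timings (JIT warm-up, cache effects, background
    // load), and returns the average of the rest in milliseconds.
    public static double AverageMilliseconds(Action operation, int runs)
    {
        List<double> times = new List<double>();
        for (int i = 0; i < runs; i++)
        {
            Stopwatch sw = Stopwatch.StartNew();
            operation();
            sw.Stop();
            times.Add(sw.Elapsed.TotalMilliseconds);
        }
        times.Sort();
        times.RemoveAt(times.Count - 1); // drop the slowest run
        times.RemoveAt(0);               // drop the fastest run

        double total = 0;
        foreach (double t in times) total += t;
        return total / times.Count;
    }
}

Calling Measure.AverageMilliseconds(() => LoadMainView(), 5), where LoadMainView is whatever operation you are investigating, gives a number that is reasonably stable from one session to the next.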
The EQATEC profiler is a cute little profiler that is free and easy to use. It probably won't come anywhere near the "wow" factor of the ANTS profiler in terms of features, but it is still very cool IMO and worth a look.
Use a profiler. ANTS costs money but is very nice.
I just set breakpoints; Visual Studio will tell you how many milliseconds have passed between breakpoints, so you can find it manually.
ANTS Profiler is very good.
If you don't want to pay, the newer VS versions come with a profiler, but to be honest it doesn't seem very good. ATI/AMD make a free profiler... but it's not very user friendly (to me at least; I couldn't get any useful info out of it).
The advice I would give is to time function calls yourself with code. If they are fast and you do not have a high-precision timer, or if the calls vary in speed for a number of reasons (e.g. every x calls building some kind of cache), try running each one 10,000 times or so, then dividing the result accordingly. This may not be perfect for some sections of code, but if you are unable to find a good, free, third-party solution, it's pretty much what's left unless you want to pay.
Yet another option is Intel's VTune.
