I am wondering: which of these two versions is better in terms of performance:
Background = Application.Current.Resources[condition ? BackgroundName1 : BackgroundName2] as Brush;
and:
Background = condition ? Application.Current.Resources[BackgroundName1] as Brush : Application.Current.Resources[BackgroundName2] as Brush;
Is there any difference? And if so, which one is better?
NB: BackgroundName1 & 2 are simply strings
The first one is shorter and more readable.
It's also easier to maintain.
If you later change it to read a different Resources dictionary, you might forget to change the second half of the second one.
The first one also makes it clearer that both lookups read from the same dictionary.
First: Use a profiler to find the slowest thing. If you're having a performance problem it doesn't make sense to spend hours or days working on making something faster that is already fast enough.
Second: You can determine the answer to your question by trying it both ways and carefully measuring to see if there is a difference. Don't ask us which is faster; we don't know because we haven't tried it and have no ability to try it.
Don't get too caught up in micro-optimizations! The performance gain you'll get will be nil. Go for the code that is more readable and easier to understand in the end.
No difference whatsoever.
I have the following simple equation in my C# program to convert a number to a resulting value:
sectorSize = 1 << sectorShift;
Is there some sort of inverse operation that will allow me to go the other way as well?
sectorShift = ???
I know that you can implement a loop, but that's a bit of overkill. I've never had to do this before, so I have no idea and I can't find anything online about it. The equation I need only has to produce valid results when sectorSize is a power of two; the rest of the domain can go to hell for all I care.
Here are five ways to do that in C. Translating them to correct C# is left as an exercise. Be extremely careful.
http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
Frankly, I would personally always go with the loop. It is not clear to me why you believe that simple and obviously correct code is "overkill".
Logarithms. But since you don't want to do that, use a loop and/or lookup table.
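To make the loop option concrete, here is a minimal C# sketch (the method name is mine); per the question, it is only meaningful when sectorSize is a power of two:

// Inverse of sectorSize = 1 << sectorShift.
// Only valid when sectorSize is a power of two.
static int SectorShiftOf(int sectorSize)
{
    int shift = 0;
    while ((sectorSize >>= 1) != 0)
        shift++;
    return shift;
}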
Yes, I am using a profiler (ANTS). But at the micro level it cannot tell you how to fix your problem. And I'm at a micro-optimization stage right now. For example, I was profiling this:
for (int x = 0; x < Width; x++)
{
    for (int y = 0; y < Height; y++)
    {
        packedCells.Add(Data[x, y].HasCar);
        packedCells.Add(Data[x, y].RoadState);
        packedCells.Add(Data[x, y].Population);
    }
}
ANTS showed that the y-loop-line was taking a lot of time. I thought it was because it has to constantly call the Height getter. So I created a local int height = Height; before the loops, and made the inner loop check for y < height. That actually made the performance worse! ANTS now told me the x-loop-line was a problem. Huh? That's supposed to be insignificant, it's the outer loop!
Eventually I had a revelation - maybe using a property for the outer loop bound and a local for the inner loop bound made the CLR jump often between a "locals" cache and a "this-pointer" cache (I'm used to thinking in terms of CPU caches). So I made a local for Width as well, and that fixed it.
From there, it was clear that I should make a local for Data as well - even though Data was not even a property (it was a field). And indeed that bought me some more performance.
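For reference, the hoisted version described above looks roughly like this (a sketch reconstructed from the description; Width, Height, Data, and packedCells are from the original code):

int width = Width;   // local copies, so the loop bounds read locals
int height = Height; // instead of properties
var data = Data;     // local copy of the field as well

for (int x = 0; x < width; x++)
{
    for (int y = 0; y < height; y++)
    {
        packedCells.Add(data[x, y].HasCar);
        packedCells.Add(data[x, y].RoadState);
        packedCells.Add(data[x, y].Population);
    }
}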
Bafflingly, though, reordering the x and y loops (to improve cache usage) made zero difference, even though the array is huge (3000x3000).
Now, I want to learn why the stuff I did improved the performance. What book do you suggest I read?
CLR via C# by Jeffrey Richter.
It is such a great book that someone stole it from my library, together with C# in Depth.
The CLR is not involved at all here, this should all be translated to straight machine code without calls into the CLR. The JIT compiler is responsible for generating that machine code, it has an optimizer that tries to come up with the most efficient code. It has limitations, it cannot spend a large amount of time on it.
One of the important things it does is figuring out what local variables should be stored in the CPU registers. That's something that changed when you put the Height property in a local variable. It possibly decided to store that variable in a register. But now there's one less available to store another variable. Like the x or y variable, one that's critical for speed. Yes, that will slow it down.
You got a bad diagnostic about the outer loop. That could possibly be caused by the JIT optimizer re-arranging the loop code, giving the profiler a harder time mapping the machine code back to the corresponding C# statement.
Similarly, the optimizer might have decided that you were using the array inefficiently and switched the indexing order back. Not so sure it actually does that, but not impossible.
Anyhoo, the only way you can get some insight here is by looking at the generated machine code. There are many decent books about x86 assembly code, although they might be a bit hard to find these days. Your starting point is Debug + Windows + Disassembly.
Keep in mind however that even the machine code is not a very good predictor of how efficient code is going to run. Modern CPU cores are enormously complicated and the machine code is no longer representative for what actually happens inside the core. The only tried and true way is what you've already been doing: trial and error.
Albin - no. Honestly I didn't think that running outside a profiler would change the performance difference, so I didn't bother. You think I should have? Has that been a problem for you before? (I am compiling with optimizations on though)
Running under a debugger changes the performance: when it's being run under a debugger, the just-in-time compiler automatically disables optimizations (to make it easier to debug)!
If you must, use the debugger to attach to an already-running already-JITted process.
One thing you should know about working with Arrays is that the CLR will always make sure that array-indices are not out-of-bounds. It has an optimization for 1-dimensional arrays but not for 2+ dimensions.
Knowing this, you may want to benchmark a jagged array, MyCell[][], instead of MyCell[,].
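For example (a sketch; MyCell stands in for the actual cell type):

// Jagged array: each row is a 1-D array, for which the JIT can
// eliminate bounds checks more aggressively than for MyCell[,].
MyCell[][] data = new MyCell[Width][];
for (int x = 0; x < Width; x++)
    data[x] = new MyCell[Height];

// Access then becomes data[x][y] instead of data[x, y].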
Hm, I don't think that loop unrolling is the real problem.
1. I'd try to avoid accessing the array Data three times per inner loop.
2. I'd also recommend re-thinking the three Add statements: you are apparently accessing a collection three times per iteration to add some trivial data. Make it only one access per iteration and add a data type containing all three entries:
for (int y = 0; y < Height; y++)
{
    var tTemp = Data[x, y];
    packedCells.Add(new {
        tTemp.HasCar, tTemp.RoadState, tTemp.Population
    });
}
Another look reveals that you are basically vectorizing a matrix by copying it into an array (or some other sequential collection)... Is that necessary at all? Why don't you just define a specialized indexer which simulates that linear access? Even better, if you only need to enumerate the entries (in this example you do, no random access required), why don't you use an adequate LINQ expression?
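One possible LINQ flattening, sketched (assumes Data, Width, and Height from the original code, plus using System.Linq):

var cells =
    from x in Enumerable.Range(0, Width)
    from y in Enumerable.Range(0, Height)
    select Data[x, y];

foreach (var cell in cells)
{
    packedCells.Add(cell.HasCar);
    packedCells.Add(cell.RoadState);
    packedCells.Add(cell.Population);
}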
Point 1) Educated guesses are not the way to do performance tuning. In this case I can guess about as well as most, but guessing is the wrong way to do it.
Point 2) Profilers need to be well understood before you know what they're actually telling you. Here's a discussion of the issues. For example, what many profilers do is tell you "where the program spends its time", i.e. where the program counter spends its time, so they are almost completely blind to time spent in function calls, which is what your inner loop seems to consist of.
I do a lot of performance tuning, and here is what I do. I cycle between two activities:
Overall time measurement. This doesn't require special tools. I'm not trying to measure individual routines.
"Bottleneck" location. This does not require running the code at any kind of speed, because I'm not measuring. What I'm doing is locating lines of code that are responsible for a significant percent of time. I know which lines they are because they are on the stack for that percent, and stack samples easily find them.
Once I find a "bottleneck" and fix it, I go back to the first step, measure what percent of time I saved, and do it all again on the next "bottleneck", typically from 2 to 6 times. I am helped by the "magnification effect", in which a fixed problem magnifies the percentage used by remaining problems. It works for both macro and micro optimization.
(Sorry if I can't write "bottleneck" without quotes, because I don't think I've ever found a performance problem that resembled the neck of a bottle. Rather they were all simply doing things that didn't really need to be done.)
Since the comment might be overlooked, I repeat myself: it is quite cumbersome to optimize code which is superfluous per se. You do not really need to explicitly linearize your matrix at all, see the comment above: define a linearizing adapter which implements IEnumerable<MyCell> and feed it into the formatter.
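A minimal sketch of such an adapter (the method name is mine; assumes the Data field and Width/Height from the original code):

// Enumerates the matrix linearly without copying it first.
IEnumerable<MyCell> EnumerateCells()
{
    for (int x = 0; x < Width; x++)
        for (int y = 0; y < Height; y++)
            yield return Data[x, y];
}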
I am getting a warning when I try to add another answer, so I am going to recycle this one.. :) After reading Steve's comments and thinking about it for a while, I suggest the following:
If serializing a multi-dimensional array is too slow (I haven't tried it, I just believe you...) don't use it at all! It appears that your matrix is not sparse and has fixed dimensions. So define the structure holding your cells as a simple linear array with an indexer:
[Serializable]
class CellMatrix {
    Cell[] mCells;
    public int Rows { get; private set; }
    public int Columns { get; private set; }
    public Cell this[int i, int j] {
        get { return mCells[i + Rows * j]; }
        set { mCells[i + Rows * j] = value; }
    }
    public CellMatrix(int rows, int columns) {
        Rows = rows;
        Columns = columns;
        mCells = new Cell[rows * columns];
    }
}
A thing like this should serialize about as fast as a native array does... I don't recommend hard-coding the layout of Cell in order to save a few bytes there...
Cheers,
Paul
I have to read a huge XML file which consists of over 3 million records and over 10 million nested elements.
Naturally I am using XmlTextReader, and I have got my parsing time down to about 40 seconds from an earlier 90 seconds using multiple optimization tricks and tips.
But I want to further save processing time as much as I can hence below question.
Quite a few elements are of type xs:boolean and the data provider always represents values as "true" or "false" - never "1" or "0".
For such cases my earliest code was:
if (xmlTextReader.Value == "true")
{
    bool subtitled = true;
}
which I further optimized to:
if (string.Equals(xmlTextReader.Value, "true", StringComparison.OrdinalIgnoreCase))
{
    bool subtitled = true;
}
I wanted to know if the below would be the fastest (because it's either "true" or "false"):
if (xtr.Value.Length == 4)
{
    bool subtitled = true;
}
Yes, it is faster, because you only compare exactly one value, namely the length of the string.
By comparing two strings with each other, you compare each and every character for as long as the characters match. So to find a match for the string "true", you're going to do 4 comparisons before the predicate evaluates to true.
The only problem with this solution is that if someday the value changes from "true" to, let's say, "1", you're going to run into a problem here.
Comparing length will be faster, but less readable. I wouldn't use it unless I profile the performance of the code and conclude that I need this optimization.
What about comparing the first character to "t"?
It should (maybe :) be faster than comparing the whole string...
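Something like this, presumably (a sketch; it relies on the value being exactly "true" or "false"):

bool subtitled = xmlTextReader.Value[0] == 't';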
Measuring the length would almost invariably be faster. That said, unless this is an experiment in micro-optimization, I'd just focus on making the code readable and conveying the proper semantics.
You might also try something that uses the following approach:
Boolean.TryParse(xmlTextReader.Value, out subtitled)
I know that has nothing to do with your question, but I figured I'd throw it out there anyway.
Can't you just write a benchmark? Run each scenario, for example, 1000 times and compare the timings.
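For example, a crude Stopwatch-based comparison (a sketch; the iteration count is arbitrary):

using System;
using System.Diagnostics;

class Benchmark
{
    static void Main()
    {
        string value = "true";
        bool subtitled = false;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 10000000; i++)
            subtitled = value == "true";          // variant 1: equality
        sw.Stop();
        Console.WriteLine("equality: {0} ms ({1})", sw.ElapsedMilliseconds, subtitled);

        sw.Restart();
        for (int i = 0; i < 10000000; i++)
            subtitled = value.Length == 4;        // variant 2: length check
        sw.Stop();
        Console.WriteLine("length:   {0} ms ({1})", sw.ElapsedMilliseconds, subtitled);
    }
}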
If you know it's either "true" or "false", the last snippet must be fastest.
Anyway, you can also write:
bool subtitled = (xtr.Value.Length == 4);
That should be even faster.
Old question, I know, but the accepted answer is wrong, or at least incorrect in its explanation.
Comparing the lengths may be the slightest bit faster, but only because string.Equals likely does some other comparisons before it, too, checks the lengths and decides that the strings are not equal.
So in practice this is an optimization of last resort.
Here you can find the source for .NET core string comparison.
String comparison and parsing are very slow in .NET; I'd recommend avoiding intensive string parsing/comparing in .NET.
If you're forced to do it -- use highly optimized unmanaged or unsafe code and use parallelism.
IMHO.
When attempting to solve the problem
How many seven-element subsets (not repeatable) are there in a set of nine elements?
I tried
IEnumerable<string> NineSet = new string[] { "a", "b", "c", "d", "e", "f", "g", "h", "i" };
var SevenSet =
    from first in NineSet
    from second in NineSet
    where first.CompareTo(second) < 0 && first.Count() + second.Count() == 7
    select new { first, second };
What is the problem that prevents me from using first.Count() and second.Count()? I did not check whether this is the best solution to the problem.
As already stated, what you have written down will lead you nowhere. This is a question of combinatorics. AFAIK there is nothing pre-made in the .NET Framework to solve combinatorics problems for you, hence you will have to implement the correct algorithm. If you get stuck, there are solutions out there, e.g. http://www.codeproject.com/KB/recipes/Combinatorics.aspx, where you can look at the source to see what you need to do.
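For the counting part alone there is nothing to search; a direct n-choose-k computation answers the question (a sketch; the method name is mine):

// C(n, k), computed iteratively to avoid factorial overflow.
// Each intermediate product is exactly divisible by i.
static long Choose(int n, int k)
{
    long result = 1;
    for (int i = 1; i <= k; i++)
        result = result * (n - k + i) / i;
    return result;
}

// Choose(9, 7) == 36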
first and second are strings, so you'd be counting their characters (this compiles, but IntelliSense hides it).
You're looking for something like NineSet.Count(first.Equals)
Well...
You haven't shown the error message, so it's hard to know what's wrong
Every element is of length exactly one, so I'm not sure what you're expecting to happen
As you know that first and second are strings, why aren't you using first.Length and second.Length?
As a side issue, I don't think this approach is going to solve the problem for you, I'm afraid...
I assume this should be fine
bool prefMatch = false;
// Is the frequency the same?
prefMatch = string.Compare(user.Frequency, pref.Action.ToString()) == 0;
So if user.Frequency is "3" and pref.Action.ToString() is "3", then it should set prefMatch to true, right? I'm getting false, and I've definitely checked the two values in the Watch tab in VS 2008 just to be sure they're the same.
You can just use ==
prefMatch = (user.Frequency == pref.Action.ToString());
Though string.Compare will also work. I suggest there is a problem elsewhere.
-- Edit
Also, just for completeness, there is no point assigning a variable to something, and then assigning it again directly after. It's slightly confusing to do so, so better to leave it unassigned, or assign it all in one spot. This way the compiler can help you if you have a case where it doesn't get assigned like you think. It is, obviously, acceptable to assign first if you wrap the second assignment in a try/catch though.
In situations like these, it's sometimes tempting to point the finger of blame at third-party code, as you've done here. Sometimes, this is justified - but not here. String.Compare is a central, extremely-well-tested piece of the .NET Framework. It's not failing. I guarantee it.
What I find helpful in these situations is to isolate the failure. Write a small, self-contained test case that attempts to demonstrate the problem. Write it with as few dependencies as possible. Make it a stand-alone console application, if possible. Post it here. If we can take it, compile and run it, and reproduce the problem, we can help you. I'd bet money, though, that in the course of creating this test case, you'll experience a head-slapping moment - "of course!" - and realize what the problem is.
Maybe the string(s) contain unprintable characters?
To check, I'd do something like:
byte[] b1 = System.Text.Encoding.UTF8.GetBytes(user.Frequency);
byte[] b2 = System.Text.Encoding.UTF8.GetBytes(pref.Action.ToString());
then compare the contents of b1 and b2.
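A quick way to make invisible differences visible is to print every character code (a sketch; the helper name is mine):

static void Dump(string label, string s)
{
    Console.Write(label + ":");
    foreach (char c in s)
        Console.Write(" {0:X4}", (int)c);   // hex code of each char
    Console.WriteLine();
}

// Dump("user", user.Frequency);
// Dump("pref", pref.Action.ToString());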