Are there any differences between decimal.Negate(myDecimal) and myDecimal * -1 (except maybe readability)?
I suspect Negate exists because there's a unary minus operator (op_UnaryNegation), and it's good practice to have methods representing the equivalent functionality so that languages which don't support operator overloading can still achieve the same result.
So instead of comparing it with myDecimal * -1 it may be helpful to think of it as being an alternative way of writing -myDecimal.
(I believe it's also optimised - given the way decimal is represented, negating a value is much simpler than a full multiplication; there's just a sign bit to flip. No need to perform any actual arithmetic.)
If you look in the .NET source with .NET Reflector, you will see the following:
    public static decimal Negate(decimal d)
    {
        return new decimal(d.lo, d.mid, d.hi, d.flags ^ -2147483648);
    }
Looks like a fancy way of flipping the sign bit (-2147483648 is 0x80000000, which is where decimal keeps its sign internally).
If you do * -1 instead, it maps to the following call:
    FCallMultiply(ref result, yourNumber, -1M);
which will likely produce different IL code.
Personally, I find -myDecimal to be more readable than either (I'm no math geek, but I'm pretty sure all three are equivalent), but then again, I generally prefer compact notation.
If that is out, I'd go with Negate since I like to avoid magic numbers floating around in my code, and while the -1 as used there isn't really a magic number, it sure looks like one at first glance.
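For what it's worth, a quick sanity check (a minimal sketch, assuming a plain console app) shows all three forms producing the same value:
    decimal myDecimal = 123.45m;
    Console.WriteLine(decimal.Negate(myDecimal)); // -123.45
    Console.WriteLine(-myDecimal);                // -123.45
    Console.WriteLine(myDecimal * -1);            // -123.45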
From MSDN, decimal.Negate:
Returns the result of multiplying the specified Decimal value by negative one.
No practical difference then, though readability is important.
Assume you have a list of edges with given length (as double). Now you want to find the edge with maximal length. Is there an easy way to do this in LINQ?
Of course, I can first compute the maximal value with Max and then do a comparison, but firstly, that would be two computations, and secondly, comparing doubles for equality is a bad thing.
Any suggestions?
Jon Skeet has MaxBy in his MoreLINQ library: http://code.google.com/p/morelinq
Also, look at Observable.MaxBy
Alternatively look at
e.OrderByDescending(x => x.SomeProperty).First();
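If you want to avoid both the extra library and the O(n log n) sort, a single pass with plain LINQ also works; this is a minimal sketch assuming a hypothetical Edge class with a double Length property:
    var longest = edges.Aggregate((best, e) => e.Length > best.Length ? e : best);
Like MaxBy, this walks the list once and never compares two doubles for equality, only with >.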
sehe beat me to the correct answer, so I'll use this to note a possibly flawed assumption:
...secondly comparing doubles for equality is a bad thing.
IIRC there shouldn't be any problem comparing a copied double for equality. After all it's just 8 bytes. The problem occurs in computation - or using two similar values from different sources.
That said, it's good to be scared of comparing floating point numbers :P
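To illustrate the distinction (a small sketch, nothing more): a straight copy compares equal, while a computed value that "should" be the same may not:
    double a = 0.1;
    double b = a;                  // a straight copy of the same 8 bytes
    Console.WriteLine(a == b);     // True
    double c = 1.0 - 0.9;          // computed; not exactly the same bits as 0.1
    Console.WriteLine(a == c);     // False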
All experienced C# programmers (I think this comes from C) are used to casting one of the integers in a division to get a decimal / double / float result instead of an int (the real result truncated).
I'd like to know why it is implemented like this. Is there ANY good reason to truncate the result if both numbers are integer?
C# traces its heritage to C, so the answer to "why is it like this in C#?" is a combination of "why is it like this in C?" and "was there no good reason to change?"
The approach of C is to have a fairly close correspondence between the high-level language and low-level operations. Processors generally implement integer division as returning a quotient and a remainder, both of which are of the same type as the operands.
(So my question would be, "why doesn't integer division in C-like languages return two integers", not "why doesn't it return a floating point value?")
The solution was to provide separate operations for division and remainder, each of which returns an integer. In the context of C, it's not surprising that the result of each of these operations is an integer. This is frequently more accurate than floating-point arithmetic. Consider the example from your comment of 7 / 3. This value cannot be represented by a finite binary number nor by a finite decimal number. In other words, on today's computers, we cannot accurately represent 7 / 3 unless we use integers! The most accurate representation of this fraction is "quotient 2, remainder 1".
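A small illustration of that last point (just standard C#, nothing more assumed):
    int q = 7 / 3;                        // quotient 2
    int r = 7 % 3;                        // remainder 1
    Console.WriteLine(q * 3 + r == 7);    // True: the (quotient, remainder) pair is exact
    double approx = 7.0 / 3.0;            // ~2.3333333333333335, only an approximation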
So, was there no good reason to change? I can't think of any, and I can think of a few good reasons not to change. None of the other answers has mentioned Visual Basic which (at least through version 6) has two operators for dividing integers: / converts the integers to double, and returns a double, while \ performs normal integer arithmetic.
I learned about the \ operator after struggling to implement a binary search algorithm using floating-point division. It was really painful, and integer division came in like a breath of fresh air. Without it, there was lots of special handling to cover edge cases and off-by-one errors in the first draft of the procedure.
From that experience, I draw the conclusion that having two different operators for dividing integers, as VB does, is confusing.
Another alternative would be to have only one division operation, which always returns a double, and require programmers to truncate it themselves. This means you have to perform two int->double conversions, a truncation and a double->int conversion every time you want integer division. And how many programmers would mistakenly round or floor the result instead of truncating it? It's a more complicated system, at least as prone to programmer error, and slower.
Finally, in addition to binary search, there are many standard algorithms that employ integer arithmetic. One example is dividing collections of objects into sub-collections of similar size. Another is converting between indices in a 1-d array and coordinates in a 2-d matrix.
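The index/coordinate conversion, in particular, is a one-liner in each direction thanks to truncating division (a minimal sketch for a row-major layout; the numbers are arbitrary):
    int width = 7;                        // columns in the 2-d matrix
    int index = 23;                       // position in the flat 1-d array
    int row = index / width;              // 3
    int col = index % width;              // 2
    int backToIndex = row * width + col;  // 23 again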
As far as I can see, no alternative to "int / int yields int" survives a cost-benefit analysis in terms of language usability, so there's no reason to change the behavior inherited from C.
In conclusion:
Integer division is frequently useful in many standard algorithms.
When the floating-point division of integers is needed, it may be invoked explicitly with a simple, short, and clear cast: (double)a / b rather than a / b
Other alternatives introduce more complication both for the programmer and more clock cycles for the processor.
Is there ANY good reason to truncate the result if both numbers are integer?
Of course; I can think of a dozen such scenarios easily. For example: you have a large image, and a thumbnail version of the image which is 10 times smaller in both dimensions. When the user clicks on a point in the large image, you wish to identify the corresponding pixel in the scaled-down image. Clearly to do so, you divide both the x and y coordinates by 10. Why would you want to get a result in decimal? The corresponding coordinates are going to be integer coordinates in the thumbnail bitmap.
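A minimal sketch of that thumbnail example (the variable names are mine, purely illustrative):
    int clickX = 437, clickY = 129;   // example click in the full-size image
    int thumbX = clickX / 10;         // 43
    int thumbY = clickY / 10;         // 12: integer coordinates, exactly what the thumbnail bitmap needs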
Doubles are great for physics calculations and decimals are great for financial calculations, but almost all the work I do with computers that does any math at all does it entirely in integers. I don't want to be constantly having to convert doubles or decimals back to integers just because I did some division. If you are solving physics or financial problems then why are you using integers in the first place? Use nothing but doubles or decimals. Use integers to solve finite mathematics problems.
Calculating on integers is faster (usually) than on floating point values. Besides, all other integer/integer operations (+, -, *) return an integer.
EDIT:
As per the OP's request, here's some additional detail:
The OP's problem is that they think of / as division in the mathematical sense, while the / operator in the language performs a different operation (which is not mathematical division). By that logic they should question the validity of all the other operations (+, -, *) as well, since those have special overflow rules that likewise differ from their mathematical counterparts. If this is bothersome for someone, they should find another language where the operations behave as that person expects.
As for the claim about the performance difference in favor of integer values: when I wrote the answer I only had "folk" knowledge and intuition to back up the claim (hence my "usually" disclaimer). Indeed, as Gabe pointed out, there are platforms where this does not hold. On the other hand, I found this link (point 12) showing mixed performance on an Intel platform (the language used is Java, though).
The takeaway should be that with performance many claims and intuition are unsubstantiated until measured and found true.
Yes, if the end result needs to be a whole number. It would depend on the requirements.
If these are indeed your requirements, then you would not want to store a decimal and then truncate it. You would be wasting memory and processing time to accomplish something that is already built-in functionality.
The operator is designed to return the same type as its inputs.
Edit (comment response):
Why? I don't design languages, but I would assume that most of the time you will be sticking with the data types you started with, and for the remaining cases, what criteria would you use to automatically decide which type the user wants? Would you automatically expect a string when you need it? (sincerity intended)
If you add an int to an int, you expect to get an int. If you subtract an int from an int, you expect to get an int. If you multiply an int by an int, you expect to get an int. So why would you not expect an int result if you divide an int by an int? And if you expect an int, then you will have to truncate.
If you don't want that, then you need to cast your ints to something else first.
Edit: I'd also note that if you really want to understand why this is, then you should start looking into how binary math works and how it is implemented in an electronic circuit. It's certainly not necessary to understand it in detail, but having a quick overview of it would really help you understand how the low-level details of the hardware filter through to the details of high-level languages.
In .NET, why does System.Math.Round(1.035, 2, MidpointRounding.AwayFromZero) yield 1.03 instead of 1.04? I feel like the answer to my question lies in the section labeled "Note to Callers" at http://msdn.microsoft.com/en-us/library/ef48waz8.aspx, but I'm unable to wrap my head around the explanation.
Your suspicion is exactly right. Numbers with a fractional portion, when expressed as literals in .NET, are doubles by default. A double (like a float) is an approximation of a decimal value, not a precise decimal value; it is the closest value that can be expressed in base-2 (binary). In this case, the approximation is ever so vanishingly on the small side of 1.035. If you write it using an explicit Decimal it works as you expect:
    Console.WriteLine(Math.Round(1.035m, 2, MidpointRounding.AwayFromZero));
    Console.ReadKey();
To understand why doubles and floats work the way they do, imagine representing the number 1/3 in decimal (or binary, which suffers from the same problem). You can't- it translates to .3333333...., meaning that to represent it accurately would require an infinite amount of memory.
Computers get around this using approximations. I'd explain precisely how, but I'd probably get it wrong. You can read all about it here though: http://en.wikipedia.org/wiki/IEEE_754-1985
The binary representation of 1.035d is 0x3FF08F5C28F5C28F, which is in fact 1.03499999999999992006394222699E0, so System.Math.Round(1.035, 2, MidpointRounding.AwayFromZero) yields 1.03 instead of 1.04, which is correct.
However, the binary representation of 4.005d is 0x4010051EB851EB85, which is 4.00499999999999989341858963598, so System.Math.Round(4.005, 2, MidpointRounding.AwayFromZero) should yield 4.00 by the same logic, but it yields 4.01, which is wrong (or a smart 'fix'). If you check it in MS SQL with select ROUND(CAST(4.005 AS float), 2), you get 4.00.
I don't understand why .NET applies this 'smart fix', which makes things worse.
You can check the binary representation of a double at:
http://www.binaryconvert.com/convert_double.html
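You can also see the underlying double straight from C#; the "G17" format forces full round-trip precision (a small sketch to confirm the values above):
    Console.WriteLine(1.035d.ToString("G17"));   // 1.0349999999999999
    Console.WriteLine(4.005d.ToString("G17"));   // 4.0049999999999999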
It's because the binary representation of 1.035 is closer to 1.03 than to 1.04.
For better results, do it this way:
    decimal result = decimal.Round(1.035m, 2, MidpointRounding.AwayFromZero);
I believe the example you're referring to is a different issue; as far as I understand, they're saying that 0.1 isn't stored, as a float, as exactly 0.1 - it's actually slightly off because of how floats are stored in binary. Suppose it actually looks more like 0.0999999999999 (or similar), something very, very slightly less than 0.1 - so slightly that it usually doesn't make much difference. The point they're making is that one noticeable difference is that adding this to your number and rounding can appear to go the wrong way: even though the numbers are extremely close, the stored value is still considered "less than" the .5 needed to round up.
If I misunderstood that page, I hope somebody corrects me :)
I don't see how it relates to your call, though, because you're being more explicit. Perhaps it's just storing your number in a similar fashion.
At a guess I'd say that internally 1.035 can't be represented in binary as exactly 1.035 and it's probably (under the hood) 1.0349999999999999, which would be why it rounds down.
Just a guess though.
Decimal rounding is OK, but it is a slow operation.
A faster workaround is to multiply the number by (1.0 + 1E-15) before rounding; it works like a charm with the MidpointRounding.AwayFromZero option.
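A sketch of that workaround (assuming a nudge of one part in 10^15 is enough to push the stored value past the midpoint; verify it against your own data before relying on it):
    double value = 1.035;
    double nudged = value * (1.0 + 1e-15);
    Console.WriteLine(Math.Round(nudged, 2, MidpointRounding.AwayFromZero));  // 1.04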
I saw a tweet recently that confused me (this was posted by an XNA coder, in the context of writing an XNA game):
Microoptimization tip of the day: when possible, use multiplication instead of division in high frequency areas. It's a few cycles faster.
I was quite surprised, because I always thought compilers were pretty smart (for example, using bit-shifting), and I recently read a post by Shawn Hargreaves saying much the same thing. I wondered how much truth there was in this, since there are lots of calculations in my game.
I inquired, hoping for a sample, but the original poster was unable to give one. He did, however, say this:
Not necessarily when it's something like "center = width / 2". And I've already determined "yes, it's worth it". :)
So, I'm curious...
Can anyone give an example of some code where you can change a division to a multiplication and get a performance gain, where the C# compiler wasn't able to do the same thing itself?
Most compilers can do a reasonable job of optimizing when you give them a chance. For example, if you're dividing by a constant, chances are pretty good that the compiler can/will optimize that so it's done about as quickly as anything you can reasonably substitute for it.
When, however, you have two values that aren't known ahead of time and you need to divide one by the other to get the answer, if there were much the compiler could do with it, it would -- and for that matter, if there were much room to optimize it, the CPU itself would do it so the compiler didn't have to.
Edit: Your best bet for something like that (that's reasonably realistic) would probably be something like:
    double scale_factor = get_input();
    for (int i = 0; i < values.Length; i++)
        values[i] /= scale_factor;
This is relatively easy to convert to something like:
    scale_factor = 1.0 / scale_factor;
    for (int i = 0; i < values.Length; i++)
        values[i] *= scale_factor;
I can't really guarantee much one way or the other about a particular compiler doing that. It's basically a combination of strength reduction and loop hoisting. There are certainly optimizers that know how to do both, but what I've seen of the C# compiler suggests that it may not (but I never tested anything exactly like this, and the testing I did was a few versions back...)
Although the compiler can optimize out divisions and multiplications by powers of 2, other numbers can be difficult or impossible to optimize. Try optimizing a division by 17 and you'll see why. This is of course assuming the compiler doesn't know that you are dividing by 17 ahead of time (it is a run-time variable, not a constant).
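A tiny sketch of that difference (illustrative method names; whether the JIT actually applies the multiply-and-shift trick is worth verifying with your own toolchain):
    static int DivideByConstant(int x) => x / 17;        // constant divisor: a candidate for strength reduction
    static int DivideByVariable(int x, int d) => x / d;  // runtime divisor: has to emit a real divide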
Bit late but never mind.
The answer to your question is yes.
Have a look at my article here, http://www.codeproject.com/KB/cs/UniqueStringList2.aspx, which uses information based on the article mentioned in the first comment to your question.
I have a QuickDivideInfo struct which stores the magic number and the shift for a given divisor, thus allowing division and modulo to be calculated using faster multiplication. I pre-computed (and tested!) QuickDivideInfos for a list of Golden Prime Numbers. For x64 at least, the .Divide method on QuickDivideInfo is inlined and is 3x quicker than using the divide operator (on an i5); it works for all numerators except int.MinValue and cannot overflow, since the multiplication is stored in 64 bits before shifting. (I've not tried it on x86, but if it doesn't inline for some reason then the neatness of the Divide method would be lost and you would have to manually inline it.)
So the above will work in all scenarios (except int.MinValue) if you can precalculate. If you trust the code that generates the magic number/shift, then you can deal with any divisor at runtime.
Other well-known small divisors with a very limited range of numerators could be written inline and may well be faster if they don't need an intermediate long.
Division by a power of two: I would expect the compiler to deal with this (as in your width / 2 example) since it is a constant. If it doesn't, then changing it to width >> 1 should be fine.
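For anyone curious what the magic number/shift idea looks like concretely, here is a hand-checked sketch for a single fixed divisor (3); the type and names are illustrative, not the article's actual API:
    // Illustrative only - not the article's actual type.
    readonly struct QuickDivide
    {
        private readonly ulong magic;
        private readonly int shift;
        public QuickDivide(ulong magic, int shift) { this.magic = magic; this.shift = shift; }

        // One 64-bit multiply plus one shift instead of a hardware divide.
        public uint Divide(uint x) => (uint)((x * magic) >> shift);
    }
    // Usage, hand-checked for divisor 3: magic = ceil(2^33 / 3) = 0xAAAAAAAB, shift = 33.
    // The 64-bit product cannot overflow, and Divide(x) == x / 3 for every uint x.
    //     var by3 = new QuickDivide(0xAAAAAAAB, 33);
    //     by3.Divide(91)   // 30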
To give some numbers, this PDF of Pentium instruction timings
http://cs.smith.edu/dftwiki/index.php/CSC231_Pentium_Instructions_and_Flags
lists cycle counts, and they aren't good:
    IMUL: 10 or 11 cycles
    FMUL: 3+1 cycles
    IDIV: 46 cycles (32-bit operand)
    FDIV: 39 cycles
We are speaking of BIG differences
    while (start <= end)
    {
        int mid = (start + end) / 2;
        if (mid * mid == A)
            return mid;
        if (mid * mid < A)
        {
            start = mid + 1;
            ans = mid;
        }
        else
            end = mid - 1;
    }
If I do it this way, the outcome is TIME LIMIT EXCEEDED when computing the square root of 2147483647.
But if I do it the following way, then it is clear that the division version runs faster than the multiplication version:
    while (start <= end)
    {
        int mid = (start + end) / 2;
        if (mid == A / mid)
            return mid;
        if (mid < A / mid)
        {
            start = mid + 1;
            ans = mid;
        }
        else
            end = mid - 1;
    }
I am using Convert.ChangeType() to convert from Object (which I get from DataBase) to a generic type T. The code looks like this:
    T element = (T)Convert.ChangeType(obj, typeof(T));
    return element;
and this works great most of the time. However, I have discovered that if I try to convert something as simple as the result of the following SQL query
    select 3.2
the above code (with T being double) won't return 3.2, but 3.2000000000000002. I can't figure out why this is happening, or how to fix it. Please help!
What you're seeing is an artifact of the way floating-point numbers are represented in memory. There's quite a bit of information available on exactly why this is, but this paper is a good one. This phenomenon is why you can end up with seemingly anomalous behavior. A double or single should never be displayed to the user unformatted, and you should avoid equality comparisons like the plague.
If you need numbers that are accurate to a greater level of precision (i.e., for representing currency values), then use decimal.
This probably is because of floating point arithmetic. You probably should use decimal instead of double.
It is not a problem with Convert. Internally, the double type represents the value as a binary fraction, and many decimal values (such as 3.2) have no exact binary representation, which is why you get this result. Depending on your purpose, use:
Either decimal
Or precise formatting, e.g. {0:F2}
Or Math.Floor / Math.Ceiling
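A small sketch of the first suggestion (the 3.2m literal here is just a stand-in for the value coming back from the database; in real code check that the column really is a SQL decimal/numeric):
    object obj = 3.2m;  // stand-in for the database value
    decimal exact = (decimal)Convert.ChangeType(obj, typeof(decimal));
    Console.WriteLine(exact);  // 3.2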