Assume you have a list of edges with given length (as double). Now you want to find the edge with maximal length. Is there an easy way to do this in LINQ?
Of course, I can first compute the maximal value with Max and then do a comparison, but firstly this would be two passes over the data, and secondly comparing doubles for equality is a bad thing.
Any suggestions?
Jon Skeet has MaxBy in his MoreLINQ library: http://code.google.com/p/morelinq
Also, look at Observable.MaxBy
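For example, MoreLINQ's MaxBy picks the longest edge in a single pass; a minimal sketch, assuming a hypothetical Edge class with a double Length property (note that newer MoreLINQ versions return all maxima from MaxBy, so you may need to append .First()):

using System.Collections.Generic;
using MoreLinq;

class Edge
{
    public double Length { get; set; }
}

class Example
{
    static Edge LongestEdge(IEnumerable<Edge> edges)
    {
        // One pass over the sequence, and no equality comparison of doubles.
        return edges.MaxBy(e => e.Length);
    }
}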
Alternatively look at
e.OrderByDescending(x => x.SomeProperty).First();
sehe beat me to the correct answer, so I'll use this to note a possibly flawed assumption:
...secondly comparing doubles for equality is a bad thing.
IIRC there shouldn't be any problem comparing a copied double for equality. After all, it's just 8 bytes. The problem occurs with computation, or when comparing two similar values that come from different sources.
That said, it's good to be scared of comparing floating point numbers :P
Related
What would you use if you had to hold numbers bigger than the ulong type can hold (0 to 18,446,744,073,709,551,615)? For instance measuring distance between planets or galaxies?
For values which only require up to 28 digits, you can use System.Decimal. The way it's designed, you don't encounter the scaling issue you do with double, where for large numbers the gap between two adjacent numbers is bigger than 1. Admittedly this looks slightly odd, given that decimal is usually used for non-integer values - but in some cases I think it's a perfectly reasonable use, so long as you document it properly.
For values bigger than that, you can use System.Numerics.BigInteger.
Another alternative is to just use double and accept that you really don't get a precision down to the integer. When it comes to the distance between galaxies, are you really going to have a value which is accurate to a metre anyway? It does depend on how you're going to use this - it can certainly make testing simpler if you use a nicely-predictable integer type, but you should really think about where the values are going to go and what you're going to do with them.
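A rough sketch of both options, using made-up distance values:

using System;
using System.Numerics; // BigInteger, .NET 4 and later

class Distances
{
    static void Main()
    {
        // decimal copes with integers of up to 28-29 significant digits.
        decimal andromedaMetres = 2.4e22m;   // roughly 2.5 million light years, in metres

        // BigInteger has no upper limit beyond available memory.
        BigInteger big = BigInteger.Parse("18446744073709551616"); // ulong.MaxValue + 1
        Console.WriteLine(big * big);
        Console.WriteLine(andromedaMetres);
    }
}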
I rather doubt you need integer types to store something like the distance between galaxies. To my knowledge, very large integer values are required in only a few cases, such as cryptography. For those cases, you can use System.Numerics.BigInteger.
I think you should use floating point types (e.g. double) for purposes like one you described.
All experienced programmers in C# (I think this comes from C) are used to casting one of the integers in a division to get the decimal / double / float result instead of the int (the real result truncated).
I'd like to know why is this implemented like this? Is there ANY good reason to truncate the result if both numbers are integer?
C# traces its heritage to C, so the answer to "why is it like this in C#?" is a combination of "why is it like this in C?" and "was there no good reason to change?"
The approach of C is to have a fairly close correspondence between the high-level language and low-level operations. Processors generally implement integer division as returning a quotient and a remainder, both of which are of the same type as the operands.
(So my question would be, "why doesn't integer division in C-like languages return two integers", not "why doesn't it return a floating point value?")
The solution was to provide separate operations for division and remainder, each of which returns an integer. In the context of C, it's not surprising that the result of each of these operations is an integer. This is frequently more accurate than floating-point arithmetic. Consider the example from your comment of 7 / 3. This value cannot be represented by a finite binary number nor by a finite decimal number. In other words, on today's computers, we cannot accurately represent 7 / 3 unless we use integers! The most accurate representation of this fraction is "quotient 2, remainder 1".
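In C# that pair of operations looks like this (a small illustration, not tied to the question's code):

int quotient  = 7 / 3;   // 2
int remainder = 7 % 3;   // 1

// Math.DivRem returns the quotient and hands back the remainder in one call.
int rem;
int quot = Math.DivRem(7, 3, out rem);   // quot == 2, rem == 1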
So, was there no good reason to change? I can't think of any, and I can think of a few good reasons not to change. None of the other answers has mentioned Visual Basic which (at least through version 6) has two operators for dividing integers: / converts the integers to double, and returns a double, while \ performs normal integer arithmetic.
I learned about the \ operator after struggling to implement a binary search algorithm using floating-point division. It was really painful, and integer division came in like a breath of fresh air. Without it, there was lots of special handling to cover edge cases and off-by-one errors in the first draft of the procedure.
From that experience, I draw the conclusion that having different operators for dividing integers is confusing.
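For what it's worth, here is the kind of code that integer division makes painless; a minimal binary search sketch over a sorted int[]:

static int BinarySearch(int[] sorted, int value)
{
    int lo = 0, hi = sorted.Length - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   // integer division: no rounding or truncation fuss
        if (sorted[mid] == value) return mid;
        if (sorted[mid] < value) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}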
Another alternative would be to have only one division operator, which always returns a double, and require programmers to truncate it. This means you have to perform two int->double conversions, a truncation, and a double->int conversion every time you want integer division. And how many programmers would mistakenly round or floor the result instead of truncating it? It's a more complicated system, at least as prone to programmer error, and slower.
Finally, in addition to binary search, there are many standard algorithms that employ integer arithmetic. One example is dividing collections of objects into sub-collections of similar size. Another is converting between indices in a 1-d array and coordinates in a 2-d matrix.
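For instance, the index-to-coordinates conversion is just a division and a remainder (index and width are hypothetical names here):

int row = index / width;        // integer division
int col = index % width;        // remainder

int backToIndex = row * width + col;   // and the inverse mapping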
As far as I can see, no alternative to "int / int yields int" survives a cost-benefit analysis in terms of language usability, so there's no reason to change the behavior inherited from C.
In conclusion:
Integer division is frequently useful in many standard algorithms.
When the floating-point division of integers is needed, it may be invoked explicitly with a simple, short, and clear cast: (double)a / b rather than a / b
Other alternatives introduce more complication, both for the programmer and in clock cycles for the processor.
Is there ANY good reason to truncate the result if both numbers are integer?
Of course; I can think of a dozen such scenarios easily. For example: you have a large image, and a thumbnail version of the image which is 10 times smaller in both dimensions. When the user clicks on a point in the large image, you wish to identify the corresponding pixel in the scaled-down image. Clearly to do so, you divide both the x and y coordinates by 10. Why would you want to get a result in decimal? The corresponding coordinates are going to be integer coordinates in the thumbnail bitmap.
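In code, that mapping is exactly one integer division per axis (clickX and clickY are hypothetical names for the click coordinates):

int thumbX = clickX / 10;   // the integer result is precisely the pixel we want
int thumbY = clickY / 10;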
Doubles are great for physics calculations and decimals are great for financial calculations, but almost all the work I do with computers that does any math at all does it entirely in integers. I don't want to be constantly having to convert doubles or decimals back to integers just because I did some division. If you are solving physics or financial problems then why are you using integers in the first place? Use nothing but doubles or decimals. Use integers to solve finite mathematics problems.
Calculating on integers is faster (usually) than on floating point values. Besides, all other integer/integer operations (+, -, *) return an integer.
EDIT:
As per the request of the OP, here's some addition:
The OP's problem is that they think of / as division in the mathematical sense, while the / operator in the language performs some other operation (which is not mathematical division). By this logic they should question the validity of all the other operations (+, -, *) as well, since those have special overflow rules, which is not the same as would be expected from their mathematical counterparts. If this is bothersome for someone, they should find another language where the operations behave as that person expects.
As for the claim on the performance difference in favor of integer values: when I wrote the answer I only had "folk" knowledge and "intuition" to back up the claim (hence my "usually" disclaimer). Indeed, as Gabe pointed out, there are platforms where this does not hold. On the other hand, I found this link (point 12) that shows mixed performance results on an Intel platform (the language used is Java, though).
The takeaway should be that, with performance, many claims and intuitions are unsubstantiated until they are measured and found to be true.
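If you do want to measure it, a crude sketch like the following gives a first impression, though a proper harness such as BenchmarkDotNet controls warm-up and noise far better:

using System;
using System.Diagnostics;

class DivisionTiming
{
    static void Main()
    {
        const int N = 100000000;
        long intSum = 0;
        double dblSum = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 1; i <= N; i++) intSum += 1000 / i;
        Console.WriteLine("int    division: " + sw.ElapsedMilliseconds + " ms (" + intSum + ")");

        sw.Restart();
        for (int i = 1; i <= N; i++) dblSum += 1000.0 / i;
        Console.WriteLine("double division: " + sw.ElapsedMilliseconds + " ms (" + dblSum + ")");
    }
}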
Yes, if the end result needs to be a whole number. It would depend on the requirements.
If these are indeed your requirements, then you would not want to store a decimal and then truncate it. You would be wasting memory and processing time to accomplish something that is already built-in functionality.
The operator is designed to return the same type as its inputs.
Edit (comment response):
Why? I don't design languages, but I would assume that most of the time you will be sticking with the data types you started with, and in the remaining instances, what criteria would you use to automatically decide which type the user wants? Would you automatically expect a string when you need it? (sincerity intended)
If you add an int to an int, you expect to get an int. If you subtract an int from an int, you expect to get an int. If you multiply an int by an int, you expect to get an int. So why would you not expect an int result if you divide an int by an int? And if you expect an int, then you will have to truncate.
If you don't want that, then you need to cast your ints to something else first.
Edit: I'd also note that if you really want to understand why this is, then you should start looking into how binary math works and how it is implemented in an electronic circuit. It's certainly not necessary to understand it in detail, but having a quick overview of it would really help you understand how the low-level details of the hardware filter through to the details of high-level languages.
I'm trying to do math with very large numbers using strings, and without external libraries.
I have tried looking online with no success, and I need functions for addition, subtraction, multiplication, and division (if possible, and limited to a specified number of decimal places.)
Example: adding 9,900,000,000 and 100,000,020 should give 10,000,000,020.
EDIT: I'm sorry I wasn't specific enough, but I can only use strings. No long, BigInteger, anything.
Just the basic string and, if necessary, Int32.
This is NOT a homework question!
Have you looked at BigInteger?
If you're using .NET Framework 4, you can make use of the new System.Numerics.BigInteger class, which is an integer that can hold any whole number at all, until you run out of memory.
(The examples you provide, by the way, can be computed using long or System.UInt64.)
You could convert the value to bits first, then apply whichever operation you want. After the operation, you convert the bits back into a number.
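Since the question rules out every numeric type beyond Int32, here is a minimal sketch of schoolbook addition on two non-negative digit strings (commas already stripped); subtraction, multiplication and division follow the same digit-by-digit idea but need more care:

using System.Text;

static class StringMath
{
    // Adds two non-negative integers given as plain digit strings, e.g. "9900000000".
    public static string Add(string a, string b)
    {
        var result = new StringBuilder();
        int i = a.Length - 1, j = b.Length - 1, carry = 0;

        while (i >= 0 || j >= 0 || carry > 0)
        {
            int digitA = i >= 0 ? a[i--] - '0' : 0;
            int digitB = j >= 0 ? b[j--] - '0' : 0;
            int sum = digitA + digitB + carry;

            result.Insert(0, (char)('0' + sum % 10));
            carry = sum / 10;
        }

        return result.ToString();
    }
}

// StringMath.Add("9900000000", "100000020") returns "10000000020".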
Is there a library for decimal calculation, especially the Pow(decimal, decimal) method? I can't find any.
It can be free or commercial, either way, as long as there is one.
Note: I can't do it myself, can't use for loops, can't use Math.Pow, Math.Exp or Math.Log, because they all take doubles, and I can't use doubles. I can't use a series, because it would only be as precise as doubles.
One of the multipliers is a rate: 1/rate^(days/365).
The reason there is no decimal power function is because it would be pointless to use decimal for that calculation. Use double.
Remember, the point of decimal is to ensure that you get exact arithmetic on values that can be exactly represented as short decimal numbers. For reasonable values of rate and days, the values of any of the other subexpressions are clearly not going to be exactly represented as short decimal values. You're going to be dealing with inexact values, so use a type designed for fast calculations of slightly inexact values, like double.
The results when computed in doubles are going to be off by a few billionths of a penny one way or the other. Who cares? You'll round out the error later. Do the rate calculation in doubles. Once you have a result that needs to be turned back into a currency again, multiply the result by ten thousand, round it off to the nearest integer, convert that to a decimal, and then divide it out by ten thousand again, and you'll have a result accurate to four decimal places, which ought to be plenty for a financial calculation.
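A sketch of that recipe, with made-up names for the inputs (rate, days, amount):

double rate = 1.05;
double days = 200;
double amount = 1234.56;

// Do the inexact part of the calculation in double.
double factor = 1.0 / Math.Pow(rate, days / 365.0);
double result = amount * factor;

// Snap the result back to a decimal with four decimal places.
decimal money = (decimal)Math.Round(result * 10000.0) / 10000m;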
Here is what I used.
output = (decimal)Math.Pow((double)var1, (double)var2);
Now, I'm just learning, but this did work, though I don't know if I can explain it correctly.
What I believe this does is take the inputs var1 and var2 and cast them to doubles to use as the arguments for the Math.Pow method. After that, the (decimal) in front of Math.Pow casts the value back to a decimal and places it in the output variable.
I hope someone can correct me if my explanation is wrong, but all I know is that it worked for me.
I know this is an old thread but I'm putting this here in case someone finds it when searching for a solution.
If you don't want to mess around with casting and doing your own custom implementation, you can install the NuGet package DecimalMath.DecimalEx and use it like DecimalEx.Pow(number, power).
Well, here is the Wikipedia page that lists current C# numerics libraries. But TBH I don't think there is a lot of support for decimals
http://en.wikipedia.org/wiki/List_of_numerical_libraries
It's kind of inappropriate to use decimal for this kind of calculation in general. It's high precision, yes, but it's also low range. As the MSDN docs state, it's for financial/monetary calculations, where there isn't much call for Pow, unfortunately!
Of course you might have a specific problem domain that needs super high precision and all numbers within the range 10^-28 to 10^28. But in that case you will probably just need to write your own series calculator such as the one linked to in the comments to the question.
Not using decimal. Use double instead. According to this thread, Math.Pow(double, double) is implemented directly in the CLR:
How is Math.Pow() implemented in .NET Framework?
Here is what .NET Framework 4 has (2 lines only)
[SecuritySafeCritical]
public static extern double Pow(double x, double y);
The 128-bit decimal type is not a native machine type in the CLR, so there is no intrinsic Pow for it. Maybe that will change in a future framework?
Wait, huh? Why can't you use doubles? You could always cast if you're using ints or something:
int a = 1;
int b = 2;
int result = (int)Math.Pow(a,b);
Are there any differences between decimal.Negate(myDecimal) and myDecimal * -1 (except maybe readability)?
I suspect Negate exists because there's a unary minus operator (op_UnaryNegation), and it's good practice to have methods representing the equivalent functionality so that languages which don't support operator overloading can still achieve the same result.
So instead of comparing it with myDecimal * -1 it may be helpful to think of it as being an alternative way of writing -myDecimal.
(I believe it's also optimised - given the way floating point works, negating a value is much simpler than normal multiplication; there's just a bit to flip. No need to perform any actual arithmetic.)
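A tiny sketch of the three spellings side by side:

decimal d = 123.45m;

decimal viaNegate   = decimal.Negate(d);   // -123.45
decimal viaUnary    = -d;                  // -123.45
decimal viaMultiply = d * -1;              // -123.45

// All three compare equal; only the last one goes through the general multiply routine.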
If you look in the .NET source with .NET Reflector, you will see the following:
(getting coffee until it finally opens..)
public static decimal Negate(decimal d)
{
return new decimal(d.lo, d.mid, d.hi, d.flags ^ -2147483648);
}
Looks like this is a fancy way of multiplying by -1: because of how decimal works internally, negation just flips the sign bit in the flags field.
If you do *-1 it maps it to the following call:
FCallMultiply(ref result, yourNumber, -1M);
which will likely produce different IL code.
Personally, I find -myDecimal to be more readable than either (I'm no math geek, but I'm pretty sure all three are equivalent), but then again, I generally prefer compact notation.
If that is out, I'd go with Negate since I like to avoid magic numbers floating around in my code, and while the -1 as used there isn't really a magic number, it sure looks like one at first glance.
From MSDN, decimal.Negate:
Returns the result of multiplying the specified Decimal value by negative one.
No practical difference then, though readability is important.