C# assignment question

In the following snippet:
long frameRate = (long)(_frameCounter / this._stopwatch.Elapsed.TotalSeconds);
Why is there an additional (long)(...) to the right of the assignment operator?

The division creates a double-precision floating point value (since TimeSpan.TotalSeconds is a double), so the cast truncates the resulting value to be integral instead of floating point. You end up with an approximate but whole number of frames-per-second instead of an exact answer with fractional frames-per-second.
If frameRate is used for display or logging, the cast might just be to make the output look nicer.
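For illustration, here is a minimal console sketch (local variables stand in for the question's _frameCounter and _stopwatch fields, and the counter value and Sleep are made up just to have something to measure):

using System;
using System.Diagnostics;
using System.Threading;

class FrameRateSketch
{
    static void Main()
    {
        long frameCounter = 1000;          // made-up frame count
        var stopwatch = Stopwatch.StartNew();
        Thread.Sleep(321);                 // pretend some rendering time has passed

        // long / double promotes the whole expression to double...
        double exact = frameCounter / stopwatch.Elapsed.TotalSeconds;

        // ...so an explicit cast is required to store it in a long; the fraction is truncated.
        long frameRate = (long)(frameCounter / stopwatch.Elapsed.TotalSeconds);

        Console.WriteLine(exact);       // e.g. 3110.419...
        Console.WriteLine(frameRate);   // e.g. 3110
    }
}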

It's an explicit conversion (cast) that converts the result of the division operation to a long.
See: Casting and Type Conversions

Because the result type of the calculation depends on the types of the variables being used. If the compiler infers a result type other than long from the operand types, you need to cast the result yourself.
Note that casting your result may incur a loss of accuracy or range. The bracketed cast (long) is an explicit cast and will not generate any error if, say, you try to fit 1.234 into a long, which can only store 1.
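A minimal sketch of both points (the implicit conversion the compiler rejects, and the truncation the explicit cast performs):

using System;

class CastSketch
{
    static void Main()
    {
        double value = 1.234;

        // long result = value;       // error CS0266: cannot implicitly convert 'double' to 'long'
        long result = (long)value;    // explicit cast compiles, truncating toward zero

        Console.WriteLine(result);    // 1
    }
}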

In my opinion there could be a few reasons:
1. At least one of the types in the expression is not an integer type (I don't think so).
2. The developer wanted to highlight that the result is a long (it makes the result type clear to the reader -- a good reason).
3. The developer was not sure what the result type of the expression would be and wanted to make sure it would be a long (it's better to make sure than to hope it will work).
I believe it was 3 :).

Related

Why does HashSet<T>.Count return an int instead of uint? [duplicate]

I always come across code that uses int for things like .Count, etc, even in the framework classes, instead of uint.
What's the reason for this?
UInt32 is not CLS compliant so it might not be available in all languages that target the Common Language Specification. Int32 is CLS compliant and therefore is guaranteed to exist in all languages.
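For illustration, a small sketch (the Inventory type is made up) of the warning the compiler raises when a uint appears on a public, CLS-compliant surface:

using System;

[assembly: CLSCompliant(true)]

public class Inventory
{
    // warning CS3002: Return type of 'Inventory.GetCount()' is not CLS-compliant
    public uint GetCount() => 0;

    // int is CLS-compliant, so this member raises no warning
    public int Count => 0;
}

class Program
{
    static void Main() => Console.WriteLine(new Inventory().Count);
}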
int, in C, is specifically defined to be the natural integer type of the processor, and is therefore held to be the fastest for general numeric operations.
Unsigned types only behave like whole numbers if the sum or product of a signed and unsigned value will be a signed type large enough to hold either operand, and if the difference between two unsigned values is a signed value large enough to hold any result. Thus, code which makes significant use of UInt32 will frequently need to compute values as Int64. Operations on signed integer types may fail to operate like whole numbers when the operands are overly large, but they'll behave sensibly when operands are small. Operations on unpromoted arguments of unsigned types pose problems even when operands are small. Given UInt32 x; for example, the inequality x-1 < x will fail for x==0 if the result type is UInt32, and the inequality x<=0 || x-1>=0 will fail for large x values if the result type is Int32. Only if the operation is performed on type Int64 can both inequalities be upheld.
While it is sometimes useful to define unsigned-type behavior in ways that differ from whole-number arithmetic, values which represent things like counts should generally use types that will behave like whole numbers--something unsigned types generally don't do unless they're smaller than the basic integer type.
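Here is a small console sketch of the x-1 < x example above:

using System;

class UnsignedWraparound
{
    static void Main()
    {
        uint x = 0;

        // The subtraction is performed in uint, so x - 1 wraps around to uint.MaxValue,
        // and the "whole number" inequality x - 1 < x does not hold.
        Console.WriteLine(x - 1 < x);              // False

        // Promoting both operands to long restores whole-number behaviour: -1 < 0.
        Console.WriteLine((long)x - 1 < (long)x);  // True
    }
}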
UInt32 isn't CLS-Compliant. http://msdn.microsoft.com/en-us/library/system.uint32.aspx
I think that over the years people have come to the conclusion that using unsigned types doesn't really offer that much benefit. The better question is: what would you gain by making Count a UInt32?
Some things use int so that they can return -1 as a kind of "null" value. For example, a ComboBox will return -1 for its SelectedIndex if no item is selected.
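The same convention appears elsewhere in the framework; for example, Array.IndexOf also returns -1 for "not found", which only works because the return type is a signed int:

using System;

class SentinelSketch
{
    static void Main()
    {
        int[] values = { 10, 20, 30 };

        // Array.IndexOf returns -1 when the element is not present,
        // which is only possible because the return type is a signed int.
        int index = Array.IndexOf(values, 99);

        Console.WriteLine(index == -1 ? "not found" : $"found at {index}");
    }
}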
If the number is truly unsigned by its intrinsic nature then I would declare it an unsigned int. However, if I just happen to be using a number (for the time being) in the positive range then I would call it an int.
The main reasons being:
It avoids having to do a lot of type-casting, as most methods/functions are written to take an int and not an unsigned int (see the sketch after this list).
It eliminates possible truncation warnings.
You invariably end up wishing you could assign a negative value to the number that you had originally thought would always be positive.
These are just a few quick thoughts that came to mind.
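A small sketch of the casting friction from the first point above (the PrintCount helper is made up):

using System;
using System.Collections.Generic;

class CastingFriction
{
    // Most APIs, like this hypothetical helper, take an int...
    static void PrintCount(int count) => Console.WriteLine($"count = {count}");

    static void Main()
    {
        var items = new List<string> { "a", "b", "c" };

        // ...and framework members such as List<T>.Count already return int,
        // so sticking with int needs no casts at all:
        PrintCount(items.Count);

        // Tracking the same value as uint forces explicit casts in both directions.
        uint unsignedCount = (uint)items.Count;
        PrintCount((int)unsignedCount);
    }
}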
I used to try to be very careful and choose the proper unsigned/signed type, and I finally realized that it doesn't really result in a positive benefit. It just creates extra work. So why make things hard by mixing and matching?
Some old libraries, and even InStr, use negative numbers to mean special cases. I believe it's either laziness or a genuine need for negative special values.

.NET vs Mono: different results for conversion from 2^32 as double to int

TL;DR Why does (int)Math.Pow(2,32) return 0 on Mono and Int32.MinValue on .NET?
While testing my .NET-written code on Mono I've stumbled upon the following line:
var i = X / ( (int)Math.Pow(2,32) );
Well, that line doesn't make much sense, of course, and I've already changed it to use long.
I was, however, curious why my code didn't throw a DivideByZeroException on .NET, so I checked the return value of that expression on both Mono and .NET.
Can anyone please explain the results?
IMHO the question is academic; the documentation promises only that "the result is an unspecified value of the destination type", so each platform is free to do whatever it wants.
One should be very careful when casting results that might overflow. If there is a possibility of that, and it's important to get a specific result instead of whatever arbitrary implementation a given platform has provided, one should use the checked keyword and catch any OverflowException that might occur, handling it with whatever explicit behavior is desired.
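A minimal sketch of that advice applied to the conversion from the question:

using System;

class CheckedConversionSketch
{
    static void Main()
    {
        double tooBig = Math.Pow(2, 32);   // 4294967296, outside the range of int

        // Unchecked (the default): the cast yields an unspecified int value,
        // which is why .NET and Mono are free to disagree.
        int unspecified = (int)tooBig;
        Console.WriteLine(unspecified);

        // Checked: the overflow is detected and surfaces as an exception instead.
        try
        {
            int value = checked((int)tooBig);
            Console.WriteLine(value);
        }
        catch (OverflowException)
        {
            Console.WriteLine("Math.Pow(2,32) does not fit in an int.");
        }
    }
}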
"the result is an unspecified value of the destination type". I thought it would be interesting to see what's actually happening in the .NET implementation.
It's to do with the OpCodes.Conv_I4 Field in IL: " Conversion from floating-point numbers to integer values truncates the number toward zero. When converting from a float64 to a float32, precision can be lost. If value is too large to fit in a float32 (F), positive infinity (if value is positive) or negative infinity (if value is negative) is returned. If overflow occurs converting one integer type to another, the high order bits are truncated" It does once again say overflow is unspecified however.

Does (int)myDouble ever differ from (int)Math.Truncate(myDouble)?

Does (int)myDouble ever differ from (int)Math.Truncate(myDouble)?
Is there any reason I should prefer one over the other?
Math.Truncate is intended for when you need to keep your result as a double with no fractional part. If you want to convert it to an int, just use the cast directly.
Edit: For reference, here is the relevant documentation from the “Explicit Numeric Conversions Table”:
When you convert from a double or float value to an integral type, the value is truncated.
As pointed out by Ignacio Vazquez-Abrams (int)myDouble will fail in the same way as (int)Math.Truncate(myDouble) when myDouble is too large.
So there is no difference in output, but (int)myDouble is faster.
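A quick sketch showing that the two forms agree, and that Math.Truncate itself still returns a double:

using System;

class TruncateSketch
{
    static void Main()
    {
        double myDouble = -3.9;

        double truncated = Math.Truncate(myDouble);   // -3.0, still a double
        int viaTruncate  = (int)truncated;            // -3
        int viaCast      = (int)myDouble;             // -3, same result, no intermediate double

        Console.WriteLine(truncated);                 // -3
        Console.WriteLine(viaTruncate == viaCast);    // True
    }
}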

C# Wrong conversion using Convert.ChangeType()

I am using Convert.ChangeType() to convert from Object (which I get from DataBase) to a generic type T. The code looks like this:
T element = (T)Convert.ChangeType(obj, typeof(T));
return element;
and this works great most of the time, however I have discovered that if I try to cast something as simple as return of the following sql query
select 3.2
the above code (T being double) won't return 3.2 but 3.2000000000000002. I can't figure out why this is happening, or how to fix it. Please help!
What you're seeing is an artifact of the way floating-point numbers are represented in memory. There's quite a bit of information available on exactly why this is, but this paper is a good one. This phenomenon is why you can end up with seemingly anomalous behavior. A double or single should never be displayed to the user unformatted, and you should avoid equality comparisons like the plague.
If you need numbers that are accurate to a greater level of precision (e.g. when representing currency values), then use decimal.
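For illustration, a sketch reusing the generic Convert.ChangeType pattern from the question (the 3.2m literal stands in for the value returned by "select 3.2"):

using System;

class ChangeTypeSketch
{
    static T ConvertTo<T>(object obj) => (T)Convert.ChangeType(obj, typeof(T));

    static void Main()
    {
        object fromDatabase = 3.2m;   // pretend this came back from the query

        double asDouble = ConvertTo<double>(fromDatabase);
        decimal asDecimal = ConvertTo<decimal>(fromDatabase);

        // The nearest double to 3.2 is not exactly 3.2; "G17" shows its full precision.
        Console.WriteLine(asDouble.ToString("G17"));   // 3.2000000000000002
        Console.WriteLine(asDecimal);                  // 3.2
    }
}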
This is probably because of floating-point arithmetic. You should probably use decimal instead of double.
It is not a problem with Convert. Internally, a double is represented as a binary fraction, and 3.2 has no exact binary representation, which is why you get such a result. Depending on your purpose, use:
Either decimal
Or precise formatting such as {0:F2} (see the sketch after this list)
Or Math.Floor/Math.Ceiling
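A short sketch of the formatting and rounding options, assuming two decimal places are enough for display:

using System;

class FormattingSketch
{
    static void Main()
    {
        double value = 3.2000000000000002;

        // "{0:F2}" rounds to two decimal places for display only;
        // the underlying double is unchanged.
        Console.WriteLine("{0:F2}", value);        // 3.20
        Console.WriteLine(Math.Floor(value));      // 3
        Console.WriteLine(Math.Ceiling(value));    // 4
    }
}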

Why does .NET use int instead of uint in certain classes?

