Just noticed that the unchecked context has no effect when working with a BigInteger, for instance:
unchecked
{
// no exception, long1 assigned to -1 as expected
var long1 = (long)ulong.Parse(ulong.MaxValue.ToString());
}
unchecked
{
var bigInt = BigInteger.Parse(ulong.MaxValue.ToString());
// throws overflow exception
var long2 = (long)bigInt;
}
Any idea why that's the case? Is there something special with the way big integers are converted to other primitive integer types?
Thanks,
The C# compiler has no idea whatsoever that a BigInteger is logically an "integral type". It just sees a user-defined type with a user-defined explicit conversion to long. From the compiler's point of view,
long long2 = (long)bigInt;
is exactly the same as:
long long2 = someObject.SomeMethodWithAFunnyNameThatReturnsALong();
It has no ability to reach inside that method and tell it to stop throwing exceptions.
But when the compiler sees
int x = (int) someLong;
the compiler is generating the code doing the conversion, so it can choose to generate checked or unchecked code as it sees fit.
Remember, "checked" and "unchecked" have no effect at runtime; it's not like the CLR goes into "unchecked mode" when control enters an unchecked context. "checked" and "unchecked" are instructions to the compiler about what sort of code to generate inside the block. They only have an effect at compile time, and the compilation of the BigInt conversion to long has already happened. Its behaviour is fixed.
The OverflowException is actually being thrown by the explicit cast operator defined on BigInteger. It looks like this:
int num = BigInteger.Length(value._bits);
if (num > 2)
{
throw new OverflowException(SR.GetString("Overflow_Int64"));
}
In other words, it handles overflows this way regardless of the checked or unchecked context. The docs actually say so.
Update: Of course, Eric is the final word on this. Please go read his post :)
The documentation explicitly states that it will throw OverflowException in this situation. The checked context only makes a difference to "native" arithmetic operations that the C# compiler emits - which doesn't include invoking explicit conversion operators.
To perform the conversion "safely" you'd have to compare it with long.MaxValue and long.MinValue first to check whether or not it's in range. To get the overflow-to-negative effect, I suspect you'd have to use bitwise operations on the BigInteger first. For example:
using System;
using System.Numerics;
class Program
{
static void Main(string[] args)
{
BigInteger bigValue = new BigInteger(ulong.MaxValue);
long x = ConvertToInt64Unchecked(bigValue);
Console.WriteLine(x);
}
private static readonly BigInteger MaxUInt64AsBigInteger
= ulong.MaxValue;
private static long ConvertToInt64Unchecked(BigInteger input)
{
unchecked
{
return (long) (ulong) (input & MaxUInt64AsBigInteger);
}
}
}
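For the range-check approach mentioned above, here is a sketch of a method that could sit alongside ConvertToInt64Unchecked in the program above (the name ConvertToInt64Checked is just illustrative):
// Range-check first, then cast; BigInteger supports comparison with long directly.
private static long ConvertToInt64Checked(BigInteger input)
{
    if (input < long.MinValue || input > long.MaxValue)
    {
        throw new OverflowException("Value is outside the range of Int64.");
    }
    return (long)input; // in range, so the explicit conversion cannot throw
}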
I'm working on a custom implementation of a Number struct, with very different ways of storing and manipulating numeric values.
The struct is fully immutable - all fields are implemented as readonly
I'm trying to implement the ++ and -- operators, and I've run into a little confusion:
How do you perform the assignment?
Or does the platform handle this automatically, and I just need to return n + 1?
public struct Number
{
// ...
// ... readonly fields and properties ...
// ... other implementations ...
// ...
// Empty placeholder + operator, since the actual method of addition is not important.
public static Number operator +(Number n, int value)
{
// Perform addition and return sum
// The Number struct is immutable, so this technically returns a new Number value.
}
// ERROR here: "ref and out are not valid in this context"
public static Number operator ++(ref Number n)
{
// ref seems to be required,
// otherwise this assignment doesn't affect the original variable?
n = n + 1;
return n;
}
}
EDIT: I think this is not a duplicate of other questions about increment and decrement operators, since this involves value-types which behave differently than classes in this context. I understand similar rules apply regarding ++ and --, but I believe the context of this question is different enough, and nuanced enough, to stand on its own.
The struct is fully immutable - all fields are implemented as readonly
Good!
I'm trying to implement the ++ and -- operators, and I've run into a little confusion: How do you perform the assignment?
You don't. Remember what the ++ operator does. Whether it is prefix or postfix it:
fetches the original value of the operand
computes the value of the successor
stores the successor
produces either the original value or the successor
The only part of that process that the C# compiler does not know how to do for your type is "compute the successor", so that's what your overridden ++ operator should do. Just return the successor; let the compiler deal with figuring out how to make the assignment.
Or does the platform handle this automatically, and I just need to return n + 1?
Yes, do that.
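In terms of the Number struct from the question, that boils down to something like this (a minimal sketch, relying on the + operator already defined above):
public static Number operator ++(Number n)
{
    // Only compute and return the successor; the compiler takes care of
    // storing the result back into the variable at the call site.
    return n + 1;
}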
The processing of the ++ and -- operators is described in the C# language specification, section 7.7.5 (Prefix increment and decrement operators):
The run-time processing of a prefix increment or decrement operation of the form ++x or --x consists of the following steps:
• If x is classified as a variable:
o x is evaluated to produce the variable.
o The selected operator is invoked with the value of x as its argument.
o The value returned by the operator is stored in the location given by the evaluation of x.
o The value returned by the operator becomes the result of the operation.
So custom overloads of these operators only need to produce the incremented/decremented value. The rest is handled by the compiler.
A Number struct is going to have a value of some kind as a field or property.
public static Number operator ++(Number n)
{
// This mutates the copy of the struct that was passed in;
// the compiler stores the returned value back into the original variable.
n.value = n.value + 1;
return n;
}
This should do what you want.
I wrote this using your struct and added the value property.
private static void Main(string[] args)
{
var x = new Number();
x.value = 3;
x++;
Console.WriteLine(x.value);
Console.Read();
}
This correctly prints 4.
The statement num++; by itself expands to num = PlusPlusOperator(num);. Since your data type is immutable, just return n+1; and the compiler will handle the rest.
I have been working on some code for a while, and I had a question: what's the difference among casting, parsing, and converting, and when should we use each?
Casting is when you take a variable of one type and change it to a different type. You can only do that in some cases, like so:
string str = "Hello";
object o = str;
string str2 = (string)o; // <-- This is casting
Casting does not change the variable's value - the value remains of the same type (the string "Hello").
Converting is when you take a value from one type and convert it to a different type:
double d = 5.5;
int i = (int)d; // <---- d was converted to an integer
Note that in this case, the conversion was done in the form of casting.
Parsing is taking a string and converting it to a different type by understanding its content. For instance, converting the string "123" to the number 123, or the string "Saturday, September 22nd" to a DateTime.
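For example (a small sketch; the date string uses an exact format, because an ordinal suffix like "22nd" is not something DateTime.Parse understands out of the box):
using System;
using System.Globalization;

class ParsingDemo
{
    static void Main()
    {
        // Parsing examines the content of the string.
        int number = int.Parse("123"); // the string "123" becomes the number 123

        DateTime date = DateTime.ParseExact(
            "Saturday, 22 September 2018",
            "dddd, d MMMM yyyy",
            CultureInfo.InvariantCulture);

        Console.WriteLine(number);         // 123
        Console.WriteLine(date.DayOfWeek); // Saturday
    }
}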
Casting: Telling the compiler that an object is really something else without changing it (though some data loss may be incurred).
object obj_s = "12345";
string str_i = (string) obj_s; // "12345" as string, explicit
int small = 12345;
long big = 0;
big = small; // 12345 as long, implicit
Parsing: Telling the program to interpret (at runtime) a string.
string int_s = "12345";
int i = int.Parse(int_s); // 12345 as int
Converting: Telling the program to use built-in methods to try to change the type of something that may not be simply interchangeable.
double dub = 123.45;
int i = System.Convert.ToInt32(dub); // 123 as int
These are three terms each with specific uses:
casting - changing one type to another. In order to do this, the
types must be compatible: int -> object; IList<T> -> IEnumerable<T>
parsing - typically refers to reading strings and extracting useful parts
converting - similar to casting, but typically a conversion would involve changing one type to an otherwise non-compatible type. An example of that would be converting objects to strings.
A cast from one type to another requires some form of compatibility, usually via inheritance or implementation of an interface. Casting can be implicit or explicit:
class Foo : IFoo {
// implementations
}
// implicit cast
public IFoo GetFoo() {
return new Foo();
}
// explicit cast
public IFoo GetFooExplicit() {
return new Foo() as IFoo;
}
There are quite a few ways to parse. We read about XML parsing; some types have Parse and TryParse methods; and then there are times we need to parse strings or other types to extract the 'stuff we care about'.
int.Parse("3") // returns an integer value of 3
int.TryParse("foo", out intVal) // return true if the string could be parsed; otherwise false
Converting may entail changing one type into another incompatible one. This could involve some parsing as well. Conversion examples would usually be, IMO, very much tied to specific contexts.
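A couple of typical conversion examples, as a sketch:
using System;
using System.Globalization;

class ConvertDemo
{
    static void Main()
    {
        object boxedDouble = 3.14;

        // Convert can change a value into an otherwise non-compatible type.
        string s = Convert.ToString(boxedDouble, CultureInfo.InvariantCulture); // "3.14"
        int fromBool = Convert.ToInt32(true);                                   // 1

        Console.WriteLine(s);
        Console.WriteLine(fromBool);
    }
}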
casting
(for casting to work, the types need to be compatible)
Converting between data types can be done explicitly using a cast:
static void _Casting()
{
int i = 10;
float f = 0;
f = i; // An implicit conversion, no data will be lost.
f = 0.5F;
i = (int)f; // An explicit conversion. Information will be lost.
}
parsing (parsing is conversion between different types)
Converting one type to another can be called parsing, e.g. using int.Parse:
int num = int.Parse("500");
Traversing through data items, like XML, can also be called parsing.
When user-defined conversions get involved, this usually entails returning a different object/value. User-defined conversions usually exist between value types rather than reference types, so this is rarely an issue.
converting
Using the Convert class actually just helps you parse it.
For more, please refer to http://msdn.microsoft.com/en-us/library/ms228360%28VS.80%29.aspx
This question is actually pretty complicated...
Normally, a cast just tells the runtime to change one type to another. These have to be types that are compatible. For example, an int can always be represented as a long, so it is OK to cast it to a long. Some casts have side-effects. For example, a float loses its fractional part when it is cast to an int, so (int)1.5f results in the int value 1. Casts are usually the fastest way to change the type, because a cast typically compiles to a single IL instruction. For example, the code:
public void CastExample()
{
int i = 7;
long l = (long)i;
}
Performs the cast by running the IL code:
conv.i8 //convert to 8-byte integer (a.k.a. Int64, a.k.a. long).
A parse is some function that takes in one type and returns another. It is an actual code function, not just an IL operator. This usually takes longer to run, because it runs multiple lines of code.
For example, this code:
public void ParseExample()
{
string s = "7";
long l = long.Parse(s);
}
Runs the IL code:
call int64 [mscorlib]System.Int64::Parse(string)
In other words it calls an actual method. Internally, the Int64 type provides that method:
public static long Parse(String s) {
return Number.ParseInt64(s, NumberStyles.Integer, NumberFormatInfo.CurrentInfo);
}
And Number.ParseInt64:
[System.Security.SecuritySafeCritical] // auto-generated
internal unsafe static Int64 ParseInt64(String value, NumberStyles options, NumberFormatInfo numfmt) {
Byte * numberBufferBytes = stackalloc Byte[NumberBuffer.NumberBufferBytes];
NumberBuffer number = new NumberBuffer(numberBufferBytes);
Int64 i = 0;
StringToNumber(value, options, ref number, numfmt, false);
if ((options & NumberStyles.AllowHexSpecifier) != 0) {
if (!HexNumberToInt64(ref number, ref i)) {
throw new OverflowException(Environment.GetResourceString("Overflow_Int64"));
}
}
else {
if (!NumberToInt64(ref number, ref i)) {
throw new OverflowException(Environment.GetResourceString("Overflow_Int64"));
}
}
return i;
}
And so on... so you can see it is actually doing a lot of code.
Now where things get more complicated is that although a cast is usually the fastest, types can define implicit and explicit cast operators. For example, if I write the class:
public class CastableClass
{
public int IntValue { get; set; }
public static explicit operator int(CastableClass castable)
{
return castable.IntValue;
}
}
I have defined an explicit cast operator to int, so I can now do:
public void OverridedCastExample()
{
CastableClass cc = new CastableClass {IntValue = 7};
int i = (int)cc;
}
Which looks like a normal cast, but in actuality it calls my method that I defined on my class. The IL code is:
call int32 UnitTestProject1.CastableClass::op_Explicit(class UnitTestProject1.CastableClass)
So anyway, you typically want to cast whenever you can. Then parse if you can't.
Casting:
A cast explicitly invokes the conversion operator from one type to another.
Casting variables is not simple. A complicated set of rules resolves casts. In some cases data is lost and the cast cannot be reversed. In others an exception is thrown by the execution engine.
Parse:
int.Parse is the simplest method, but it throws exceptions on invalid input.
TryParse
int.TryParse is one of the most useful methods for parsing integers in the C# language. This method works the same way as int.Parse.
int.TryParse does not throw an exception on invalid input; instead it returns false.
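A quick sketch of the practical difference:
using System;

class TryParseDemo
{
    static void Main()
    {
        int value = int.Parse("123"); // 123; int.Parse("abc") would throw a FormatException
        Console.WriteLine(value);

        int parsed;
        if (int.TryParse("abc", out parsed))
        {
            Console.WriteLine(parsed);
        }
        else
        {
            Console.WriteLine("not a valid integer"); // this branch runs; no exception
        }
    }
}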
Convert:
Converts a base data type to another base data type.
When given a string, Convert.ToInt32 is essentially a wrapper around int.Parse, and its siblings Convert.ToInt16 and Convert.ToInt64 similarly wrap short.Parse and long.Parse.
Using TryParse instead of Convert or Cast is recommended by many programmers.
source:www.dotnetperls.com
Different people use these terms to mean different things. This need not hold outside the .NET world, but here is what I have understood in the .NET context from reading Eric Lippert's blogs:
All transformations of types from one form to another can be called conversion. One way of categorizing them may be:
implicit -
a. representation changing (also called coercion)
int i = 0;
double d = i;
object o = i; // (specifically called a boxing conversion)
IConvertible c = i; // (also a boxing conversion)
Requires implicit conversion operator, conversion always succeeds (implicit conversion operator should never throw), changes the referential identity of the object being converted.
b. representation preserving (also called implicit reference conversion)
string s = "";
object o = s;
IList<string> l = new List<string>();
Only valid for reference types, never changes the referential identity of the object being converted, conversion always succeeds, guaranteed at compile time, no runtime checks.
explicit (also called casting) -
a. representation changing
int i = 0;
DayOfWeek e = (DayOfWeek)i; // any enum type works here; int to enum is an explicit conversion
object o = i;
i = (int)o; // (specifically called unboxing conversion)
Requires explicit conversion operator, changes the referential identity of the object being converted, conversion may or may not succeed, does runtime check for compatibility.
b. representation preserving (also called explicit reference conversion)
object o = "";
string s = (string)o;
Only valid for reference types, never changes the referential identity of the object being converted, conversion may or may not succeed, does runtime check for compatibility.
While conversions are language-level constructs, Parse is a vastly different thing in the sense that it is framework-level: Parse methods are ordinary methods written to produce an output from an input, like int.Parse, which takes in a string and returns an int.
I'm curious to know why the C# compiler only gives me an error message for the second if statement.
enum Permissions : ulong
{
ViewListItems = 1L,
}
public void Method()
{
int mask = 138612833;
int compare = 32;
if (mask > 0 & (ulong)Permissions.ViewListItems > 32)
{
//Works
}
if (mask > 0 & (ulong)Permissions.ViewListItems > compare)
{
//Operator '>' cannot be applied to operands of type 'ulong' and 'int'
}
}
I've been experimenting with this, using ILSpy to examine the output, and this is what I've discovered.
Obviously in your second case this is an error - you can't compare a ulong and an int because there isn't a type you can coerce both to. A ulong might be too big for a long, and an int might be negative.
In your first case, however, the compiler is being clever. It realises that const 1 > const 32 is never true, and doesn't include your if statement in the compiled output at all. (It should give a warning for unreachable code.) It's the same if you define and use a const int rather than a literal, or even if you cast the literal explicitly (i.e. (int)32).
But then isn't the compiler successfully comparing a ulong with an int, which we just said was impossible?
Apparently not. So what is going on?
Try instead to do something along the following lines. (Taking input and writing output so the compiler doesn't compile anything away.)
const int thirtytwo = 32;
static void Main(string[] args)
{
ulong x = ulong.Parse(Console.ReadLine());
bool gt = x > thirtytwo;
Console.WriteLine(gt);
}
This will compile, even though the ulong is a variable, and even though the result isn't known at compile time. Take a look at the output in ILSpy:
private static void Main(string[] args)
{
ulong x = ulong.Parse(Console.ReadLine());
bool gt = x > 32uL; /* Oh look, a ulong. */
Console.WriteLine(gt);
}
So, the compiler is in fact treating your const int as a ulong. If you make thirtytwo = -1, the code fails to compile, even though we then know that gt will always be true. The compiler itself can't compare a ulong to an int.
Also note that if you make x a long instead of a ulong, the compiler generates 32L rather than 32 as an integer, even though it doesn't have to. (You can compare an int and a long at runtime.)
This suggests that the compiler treats 32 as a ulong in the first case not because it has to, but merely because it can, to match the type of x. It's saving the runtime from having to coerce the constant, and this is just a bonus when the coercion should by rights not be possible.
It's not the CLR giving this error message; it's the compiler.
In your first example the compiler treats 32 as a ulong (or a type that's implicitly convertible to ulong, e.g. uint), whereas in your second example you've explicitly declared the type as an int. There is no overload of the > operator that accepts a ulong and an int, and hence you get a compiler error.
rich.okelly and rawling's answers are correct as to why you cannot compare them directly. You can use the Convert class's ToUInt64 method to promote the int.
if (mask > 0 & (ulong)Permissions.ViewListItems > Convert.ToUInt64(compare))
{
}
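Note that Convert.ToUInt64 throws an OverflowException if compare happens to be negative; if that is possible in your code, one alternative is to guard and then cast (a sketch):
// Guard against negative values before widening to ulong; the cast itself
// is then safe because a non-negative int always fits in a ulong.
if (mask > 0 && compare >= 0 &&
    (ulong)Permissions.ViewListItems > (ulong)compare)
{
}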
The Type class has a method IsAssignableFrom() that almost works. Unfortunately it only returns true if the two types are the same or the first is in the hierarchy of the second. It says that decimal is not assignable from int, but I'd like a method that would indicate that decimals are assignable from ints, but ints are not always assignable from decimals. The compiler knows this but I need to figure this out at runtime.
Here's a test for an extension method.
[Test]
public void DecimalsShouldReallyBeAssignableFromInts()
{
Assert.IsTrue(typeof(decimal).IsReallyAssignableFrom(typeof(int)));
Assert.IsFalse(typeof(int).IsReallyAssignableFrom(typeof(decimal)));
}
Is there a way to implement IsReallyAssignableFrom() that would work like IsAssignableFrom() but also passes the test case above?
Thanks!
Edit:
This is basically the way it would be used. This example does not compile for me, so I had to set Number to be 0 (instead of 0.0M).
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Parameter)]
public class MyAttribute : Attribute
{
public object Default { get; set; }
}
public class MyClass
{
public MyClass([MyAttribute(Default= 0.0M)] decimal number)
{
Console.WriteLine(number);
}
}
I get this error:
Error 4 An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type
There are actually three ways that a type can be “assignable” to another in the sense that you are looking for.
1. Class hierarchy, interface implementation, covariance and contravariance. This is what .IsAssignableFrom already checks for. (This also includes permissible boxing operations, e.g. int to object or DateTime to ValueType.)
2. User-defined implicit conversions. This is what all the other answers are referring to. You can retrieve these via Reflection; for example, the implicit conversion from int to decimal is a static method that looks like this:
System.Decimal op_Implicit(Int32)
You only need to check the two relevant types (in this case, Int32 and Decimal); if the conversion is not in those, then it doesn't exist. (A reflection sketch for this follows below.)
3. Built-in implicit conversions which are defined in the C# language specification. Unfortunately Reflection doesn't show these. You will have to find them in the specification and copy the assignability rules into your code manually. This includes numeric conversions, e.g. int to long as well as float to double, pointer conversions, nullable conversions (int to int?), and lifted conversions.
Furthermore, a user-defined implicit conversion can be chained with a built-in implicit conversion. For example, if a user-defined implicit conversion exists from int to some type T, then it also doubles as a conversion from short to T. Similarly, T to short doubles as T to int.
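For the user-defined conversions in item 2, a sketch of the reflection lookup might look like this (the helper name is just for illustration):
using System;
using System.Linq;
using System.Reflection;

static class ConversionHelper
{
    // Looks for a user-defined implicit conversion (op_Implicit) declared
    // on either the source or the destination type.
    public static bool HasUserDefinedImplicitConversion(Type from, Type to)
    {
        return from.GetMethods(BindingFlags.Public | BindingFlags.Static)
                   .Concat(to.GetMethods(BindingFlags.Public | BindingFlags.Static))
                   .Any(m => m.Name == "op_Implicit"
                          && m.ReturnType == to
                          && m.GetParameters().Single().ParameterType == from);
    }
}

// e.g. HasUserDefinedImplicitConversion(typeof(int), typeof(decimal)) returns true,
// because System.Decimal declares op_Implicit(Int32).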
This one almost works... it's using Linq expressions:
public static bool IsReallyAssignableFrom(this Type type, Type otherType)
{
if (type.IsAssignableFrom(otherType))
return true;
try
{
var v = Expression.Variable(otherType);
var expr = Expression.Convert(v, type);
return expr.Method == null || expr.Method.Name == "op_Implicit";
}
catch (InvalidOperationException)
{
return false;
}
}
The only case that doesn't work is for built-in conversions for primitive types: it incorrectly returns true for conversions that should be explicit (e.g. int to short). I guess you could handle those cases manually, as there is a finite (and rather small) number of them.
I don't really like having to catch an exception to detect invalid conversions, but I don't see any other simple way to do it...
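One way to handle those built-in cases is to hard-code the implicit numeric conversions from the C# specification and consult that table before falling back to the expression-based check; a sketch (the names are illustrative, and nullable/lifted conversions are not covered here):
using System;
using System.Collections.Generic;

static class BuiltInConversions
{
    // Implicit numeric conversions from the C# specification (source -> allowed targets).
    private static readonly Dictionary<Type, Type[]> Implicit = new Dictionary<Type, Type[]>
    {
        { typeof(sbyte),  new[] { typeof(short), typeof(int), typeof(long), typeof(float), typeof(double), typeof(decimal) } },
        { typeof(byte),   new[] { typeof(short), typeof(ushort), typeof(int), typeof(uint), typeof(long), typeof(ulong), typeof(float), typeof(double), typeof(decimal) } },
        { typeof(short),  new[] { typeof(int), typeof(long), typeof(float), typeof(double), typeof(decimal) } },
        { typeof(ushort), new[] { typeof(int), typeof(uint), typeof(long), typeof(ulong), typeof(float), typeof(double), typeof(decimal) } },
        { typeof(int),    new[] { typeof(long), typeof(float), typeof(double), typeof(decimal) } },
        { typeof(uint),   new[] { typeof(long), typeof(ulong), typeof(float), typeof(double), typeof(decimal) } },
        { typeof(long),   new[] { typeof(float), typeof(double), typeof(decimal) } },
        { typeof(ulong),  new[] { typeof(float), typeof(double), typeof(decimal) } },
        { typeof(char),   new[] { typeof(ushort), typeof(int), typeof(uint), typeof(long), typeof(ulong), typeof(float), typeof(double), typeof(decimal) } },
        { typeof(float),  new[] { typeof(double) } },
    };

    public static bool HasBuiltInImplicitConversion(Type from, Type to)
    {
        Type[] targets;
        return Implicit.TryGetValue(from, out targets) && Array.IndexOf(targets, to) >= 0;
    }
}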
Timwi's answer is really complete, but I feel there's an even simpler way that would get you the same semantics (check "real" assignability), without actually defining yourself what this is.
You can just try the assignment in question and look for an InvalidCastException (I know it's obvious). This way you avoid the hassle of checking the three possible meanings of assignability as Timwi mentioned. Here's a sample using xUnit:
[Fact]
public void DecimalsShouldReallyBeAssignableFromInts()
{
var d = default(decimal);
var i = default(int);
Assert.Throws<InvalidCastException>(() => (int)d);
Assert.DoesNotThrow(() => (decimal)i);
}
What you are looking for is if there's an implicit cast from the one type to the other. I would think that's doable by reflection, though it might be tricky because the implicit cast should be defined as an operator overload which is a static method and I think it could be defined in any class, not just the one that can be implicitly converted.
In order to find out if one type can be assigned to another, you have to look for implicit conversions from one to the other. You can do this with reflection.
As Timwi said, you will also have to know some built-in rules, but those can be hard-coded.
It actually happens to be the case that the decimal type is not "assignable" to the int type, and vice versa. Problems occur when boxing/unboxing gets involved.
Take the example below:
int p = 0;
decimal d = 0m;
object o = d;
object x = p;
// ok
int a = (int)d;
// invalid cast exception
int i = (int)o;
// invalid cast exception
decimal y = (decimal)x;
// compile error
int j = d;
This code looks like it should work, but the casts from object produce invalid cast exceptions, and the last line generates a compile-time error.
The reason the assignment to a works is that the decimal type defines an explicit conversion operator to int. There is no implicit conversion operator from decimal to int.
Edit: the implicit operator in the reverse direction (int to decimal) is defined on System.Decimal rather than on Int32; Int32 itself implements IConvertible, which is another way it can be converted to decimal.
End Edit
In other words, the types are not assignable, but convertible.
You could scan assemblies for explicit type cast operators and IConvertible interfaces, but I get the impression that would not serve you as well as programming for the specific few cases you know you will encounter.
Good luck!
Why, in C#, is example A valid and compilable (it just wraps), while the examples in B will not compile?
A
int val = 0;
val = val + Int32.MaxValue +2;
or
int val = Int32.MaxValue;
val++;
B
int val = 0;
val = 2147483647 + 1;
or
int val = 0;
val = Int32.MaxValue + 1;
I know that arithmetic exceptions are not checked by default unless you explicitly do so using checked method, block or attribute in the config. My question relates more to the compiler than to how an arithmetic exception happens.
Your B examples are constant-folded at compile time, indicating to the compiler that it's guaranteed to overflow.
Because your A examples use variables, the expressions cannot be (completely) constant-folded, so the compiler can't guarantee that the values will result in an overflow.
For instance...
int val = 0;
// some other thread changes `val` to -5...
val = val + Int32.MaxValue +2; // no overflow
However, if you know that val won't change, and assign 0 to a const int:
const int startval = 0;
int val = startval + Int32.MaxValue + 2;
You can get your compile-time overflow check back because the value can be completely determined and therefore constant-folded.
I know that arithmetic exceptions are not checked by default unless you explicitly do so using checked method, block or attribute in the config
You do not know that because that statement is incorrect. And in fact you know it to be incorrect because you've provided a case where your statement is proven false!
I refer you to section 7.6.12 of the C# specification, a portion of which I reproduce here for your convenience:
For non-constant expressions (expressions that are evaluated at run-time) that are not enclosed by any checked or unchecked operators or statements, the default overflow checking context is unchecked unless external factors (such as compiler switches and execution environment configuration) call for checked evaluation.
For constant expressions (expressions that can be fully evaluated at compile-time), the default overflow checking context is always checked. Unless a constant expression is explicitly placed in an unchecked context, overflows that occur during the compile-time evaluation of the expression always cause compile-time errors.
See the spec for further details.
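Concretely, the B examples start compiling as soon as the constant expression is explicitly placed in an unchecked context:
// Constant expression in an explicit unchecked context: compiles and wraps.
int val = unchecked(2147483647 + 1); // -2147483648

// Without unchecked, the same constant expression is rejected at compile time:
// error CS0220: The operation overflows at compile time in checked mode
// int bad = 2147483647 + 1;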
It simply has to do with the limitations of the compile time checking. Certain things can only be known at runtime.