How do you go about casting numbers to specific data types?
Example:
253 is a 32-bit signed integer (int)
253L is a 64-bit signed integer (long)
253D is a double-precision float (double)
As you can see, you can cast directly to long and double, but there are certain problems I have here: I cannot cast to byte, single, 16-bit, unsigned, etc.
It becomes a problem when I have to input data into many different functions with arguments of varying data types:
Method1( byte Value );
Method2( sbyte Value );
Method3( ushort Value );
//Etc.
Using int.Parse(string) or Convert.ToInt32 will do the trick.
Or you could cast the value explicitly, like this:
int age = 53;
Method1((byte) age);
Try using the Convert class:
http://msdn.microsoft.com/en-us/library/System.Convert_methods(v=vs.110).aspx
e.g.
int myInt = Convert.ToInt32(anything);
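Putting both suggestions together, here is a minimal sketch of feeding the question's methods (Method1–Method3 are stand-ins for the hypothetical signatures in the question). The key difference: an explicit cast silently truncates an out-of-range value, while the Convert class throws an OverflowException instead.

```csharp
using System;

class CastDemo
{
    static void Method1(byte value)   { Console.WriteLine(value); }
    static void Method2(sbyte value)  { Console.WriteLine(value); }
    static void Method3(ushort value) { Console.WriteLine(value); }

    static void Main()
    {
        int age = 53;

        // Explicit casts: silently truncate if the value doesn't fit.
        Method1((byte)age);
        Method2((sbyte)age);
        Method3((ushort)age);

        // Convert: throws OverflowException instead of truncating.
        Method1(Convert.ToByte(age));

        // The difference matters for out-of-range values:
        int big = 300;
        Console.WriteLine((byte)big);   // 44 (300 mod 256), silently truncated
        // Convert.ToByte(big);         // would throw OverflowException
    }
}
```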
Related
I want to use a hex number to assign a value to an int:
int i = 0xFFFFFFFF; // effectively, set i to -1
Understandably, the compiler complains.
Question: how do I make the above work?
Here is why I need this.
The WriteableBitmap class exposes its pixel array as int[]. So if I want to set a pixel to blue, I would use 0xFF0000FF (ARGB), which as a signed int is -16776961.
Plus I am curious if there is an elegant, compile time solution.
I know there is a:
int i = BitConverter.ToInt32(new byte[] { 0xFF, 0x00, 0x00, 0xFF }, 0);
but it is neither elegant, nor compile time.
Give someone a fish and you feed them for a day. Teach them to pay attention to compiler error messages and they don't have to ask questions on the internet that are answered by the error message.
int i = 0xFFFFFFFF;
produces:
Cannot implicitly convert type 'uint' to 'int'. An explicit conversion exists
(are you missing a cast?)
Pay attention to the error message and try adding a cast:
int i = (int)0xFFFFFFFF;
Now the error is:
Constant value '4294967295' cannot be converted to a 'int'
(use 'unchecked' syntax to override)
Again, pay attention to the error message. Use the unchecked syntax.
int i = unchecked((int)0xFFFFFFFF);
Or
unchecked
{
int i = (int)0xFFFFFFFF;
}
And now, no error.
As an alternative to using the unchecked syntax, you could specify /checked- on the compiler switches, if you like to live dangerously.
Bonus question:
What makes the literal a uint in the first place?
The type of an integer literal does not depend on whether it is hex or decimal. Rather:
If the literal has a U and/or L suffix then it is uint, long or ulong, depending on which combination of suffixes you choose.
If it does not have a suffix then we take the value of the literal and see if it fits into the range of an int, uint, long or ulong. Whichever one matches first on that list is the type of the expression.
In this case the hex literal has a value that is outside the range of int but inside the range of uint, so it is treated as a uint.
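The literal-typing rules above can be checked directly, since the type of a literal is observable through GetType():

```csharp
using System;

class LiteralTypeDemo
{
    static void Main()
    {
        Console.WriteLine((253).GetType().Name);        // Int32  -- fits in int
        Console.WriteLine((253L).GetType().Name);       // Int64  -- L suffix
        Console.WriteLine((253U).GetType().Name);       // UInt32 -- U suffix
        Console.WriteLine((4294967295).GetType().Name); // UInt32 -- too big for int
        Console.WriteLine((0xFFFFFFFF).GetType().Name); // UInt32 -- same value in hex
        Console.WriteLine((4294967296).GetType().Name); // Int64  -- too big for uint
    }
}
```

Note that 4294967295 (decimal) and 0xFFFFFFFF (hex) get the same type, confirming that the base of the literal is irrelevant; only the value and the suffixes matter.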
You just need an unchecked cast:
unchecked
{
int i = (int)0xFFFFFFFF;
Console.WriteLine("here it is: {0}", i);
}
The unchecked syntax seems a bit gar'ish (!) when compared to the various single-letter numerical suffixes available.
So I tried for a shellfish:
static public class IntExtMethods
{
public static int ui(this uint a)
{
return unchecked((int)a);
}
}
Then
int i = 0xFFFFFFFF.ui();
Because the lake has more fish.
Note: it's not a constant expression, so it can't be used to initialize an enum field for instance.
I have a problem with an enum in C#:
enum myenum { one, two, three };
public myenum type;
type = 2;
Why doesn't this work? How do I cast an integer to an enum in such a case?
You have to explicitly cast the integer to myenum:
type = (myenum) 2;
See this thread for more explanation: Cast int to enum in C#
There is no implicit conversion between an int and enum. There is however an explicit conversion
type = 2; // Error
type = (myenum)2; // Ok
The one exception to this rule is the literal 0. The literal 0 implicitly converts to any enum type
type = 0; // Ok
You can explicitly convert between the two. Also note that unless you declare the enum members with specific values, automatic numbering starts from zero.
enum myenum {one = 1, two = 2, three = 3} ;
private void GoEnum()
{
myenum x = (myenum)1;
Console.WriteLine(x);
Console.WriteLine((int)x);
}
The entire purpose of enums is to avoid "magic numbers". Their purpose is to avoid code just like that, in which the reader needs to just "know" that 2 represents myenum.two. Behind the scenes it is just an integer (or some other integral type), but the language works to hide that fact from you.
It does allow you to convert an integer into its enumeration representation, and vice versa, because there are times where this is simply necessary. But because it should be avoided wherever possible, the language forces you to explicitly cast the integer (type = (myenum) 2;), rather than implicitly converting it for you, so that your intentions are made very clear in code.
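A minimal sketch of all the conversions discussed above, using the myenum type from the question:

```csharp
using System;

enum myenum { one, two, three }

class EnumCastDemo
{
    static void Main()
    {
        myenum type;

        // type = 2;          // error: no implicit int -> enum conversion
        type = (myenum)2;     // ok: explicit cast (gives 'three', since numbering starts at 0)
        type = 0;             // ok: the literal 0 is the one implicit exception

        int back = (int)myenum.two;  // enum -> int also requires an explicit cast
        Console.WriteLine(type);     // three
        Console.WriteLine(back);     // 1
    }
}
```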
An enum cannot be converted to an integer (or vice versa) without an explicit cast. Note also that unless you specify values, the members are numbered automatically starting from zero.
Example:
enum MyEnum {
One,
Two,
Three
};
Would become:
enum MyEnum {
One = 1,
Two = 2,
Three = 3
};
By default an enum's underlying type is int (4 bytes), which is what allows the cast to int; but as I have said, it is better practice to assign explicit values to enum options. If you want to change the underlying type of MyEnum, do this:
enum MyEnum : long /* or short, ushort, ulong, uint, sbyte, byte etc */ {
One = 1L,
Two = 2L,
Three = 3L
};
Note that the only types that can be used are:
byte, sbyte, short, ushort, int, uint, long or ulong
This does not include anything like System.Integer or System.Byte.
I'm using BitConverter.GetBytes() to convert various variables (of different types) to a byte array, to pass it to a custom method where I need to check the value of each byte.
I've noticed that I can pass a variable of type byte to BitConverter.GetBytes() (even though it is not listed in the overload list: see the related MSDN page), and in this case I always get a 2-byte array as the return value.
Shouldn't I get a single-byte array instead? How does .NET interpret the byte argument?
Sample:
byte arg = 0x00;
byte[] byteArr = BitConverter.GetBytes(arg);
// Result: byteArr is a 2-byte array where byteArr[0] == 0 and byteArr[1] == 0
When you look up GetBytes() you will find that there is no overload that takes a byte parameter.
You are looking at the results of the closest match, GetBytes(Int16) and that of course produces a byte[2].
In other words, your code:
byte arg = 0x00;
byte[] byteArr = BitConverter.GetBytes(arg);
is equivalent to:
byte arg = 0x00;
short _temp = arg;
byte[] byteArr = BitConverter.GetBytes(_temp);
As the other answers have pointed out, there is no GetBytes overload that takes a byte parameter. The next question is why does it choose the overload that takes a short. It could pick any of these for example:
GetBytes(short)
GetBytes(int)
GetBytes(long)
GetBytes(float)
...
The reasoning for why it chooses short is not just because short is the next closest thing. There is better reasoning behind it. The C# language specification explains:
"Given an implicit conversion C1 that converts from a type S to a type T1, and an implicit conversion C2 that converts from a type S to a type T2, the better conversion of the two conversions is determined as follows" [1]
Here are the two candidate conversions from S (byte) to T1 and T2:
C1: byte -> short (T1)
C2: byte -> int (T2)
The rule that works here is:
"If an implicit conversion from T1 to T2 exists, and no implicit conversion from T2 to T1 exists, C1 is the better conversion."
There is an implicit conversion from short to int, but no implicit conversion from int to short, so C1 (byte to short) is the better conversion, and the GetBytes(short) overload is chosen.
[1] http://msdn.microsoft.com/en-us/library/aa691339(v=vs.71).aspx (old copy)
It's actually using the overload for short instead of byte, which means it's up-casting your byte to a short, which is 2 bytes long.
There is no overload for GetBytes which accepts a byte.
However, Section 6.1.2 of the C# language spec says that there is an implicit numeric conversion
• From byte to short, ushort, int, uint, long, ulong, float, double, or decimal.
This causes the compiler to convert the byte to short (which is 2 bytes), which causes the method to return a 2 byte array.
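The overload-resolution behavior described in the answers above can be confirmed with a short check (0x2A is an arbitrary sample value):

```csharp
using System;

class GetBytesDemo
{
    static void Main()
    {
        byte arg = 0x2A;

        // There is no GetBytes(byte) overload; overload resolution picks
        // GetBytes(short), because byte -> short is the "better conversion".
        byte[] viaByte  = BitConverter.GetBytes(arg);
        byte[] viaShort = BitConverter.GetBytes((short)arg);

        Console.WriteLine(viaByte.Length);   // 2 -- same as calling the short overload
        Console.WriteLine(viaShort.Length);  // 2
    }
}
```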
Expanding on Jon Skeet's answer to this previous question: Skeet's answer doesn't address the failure that occurs when negative values and two's-complement representations enter the picture.
In short, I want to convert any simple type (held in an unknown boxed object) to System.UInt64 so I can work with the underlying binary representation.
Why do I want to do this? See the explanation at the bottom.
The example below shows the cases where Convert.ToInt64(object) and Convert.ToUInt64(object) both break (OverflowException).
There are only two causes for the OverflowExceptions below:
-10UL causes an exception when converting to Int64 because the negative value casts to 0xfffffffffffffff6 (in the unchecked context), which is a positive number larger than Int64.MaxValue. I want this to convert to -10L.
When converting to UInt64, signed types holding negative values cause an exception because -10 is less than UInt64.MinValue. I want these to convert to their true two's complement value (which is 0xFFFFFFFFFFFFFFF6). Unsigned types don't truly hold the negative value -10 because it is converted to two's complement in the unchecked context; thus, no exception occurs with unsigned types.
The kludge solution would seem to be conversion to Int64 followed by an unchecked cast to UInt64. This intermediate cast would be easier because only one instance causes an exception for Int64 versus eight failures when converting directly to UInt64.
Note: The example uses an unchecked context only for the purpose of forcing negative values into unsigned types during boxing (which creates a positive two's complement equivalent value). This unchecked context is not a part of the problem at hand.
using System;
enum DumbEnum { Negative = -10, Positive = 10 };
class Test
{
static void Main()
{
unchecked
{
Check((sbyte)10);
Check((byte)10);
Check((short)10);
Check((ushort)10);
Check((int)10);
Check((uint)10);
Check((long)10);
Check((ulong)10);
Check((char)'\u000a');
Check((float)10.1);
Check((double)10.1);
Check((bool)true);
Check((decimal)10);
Check((DumbEnum)DumbEnum.Positive);
Check((sbyte)-10);
Check((byte)-10);
Check((short)-10);
Check((ushort)-10);
Check((int)-10);
Check((uint)-10);
Check((long)-10);
//Check((ulong)-10); // OverflowException
Check((float)-10);
Check((double)-10);
Check((bool)false);
Check((decimal)-10);
Check((DumbEnum)DumbEnum.Negative);
CheckU((sbyte)10);
CheckU((byte)10);
CheckU((short)10);
CheckU((ushort)10);
CheckU((int)10);
CheckU((uint)10);
CheckU((long)10);
CheckU((ulong)10);
CheckU((char)'\u000a');
CheckU((float)10.1);
CheckU((double)10.1);
CheckU((bool)true);
CheckU((decimal)10);
CheckU((DumbEnum)DumbEnum.Positive);
//CheckU((sbyte)-10); // OverflowException
CheckU((byte)-10);
//CheckU((short)-10); // OverflowException
CheckU((ushort)-10);
//CheckU((int)-10); // OverflowException
CheckU((uint)-10);
//CheckU((long)-10); // OverflowException
CheckU((ulong)-10);
//CheckU((float)-10.1); // OverflowException
//CheckU((double)-10.1); // OverflowException
CheckU((bool)false);
//CheckU((decimal)-10); // OverflowException
//CheckU((DumbEnum)DumbEnum.Negative); // OverflowException
}
}
static void Check(object o)
{
Console.WriteLine("Type {0} converted to Int64: {1}",
o.GetType().Name, Convert.ToInt64(o));
}
static void CheckU(object o)
{
Console.WriteLine("Type {0} converted to UInt64: {1}",
o.GetType().Name, Convert.ToUInt64(o));
}
}
WHY?
Why do I want to be able to convert all these value types to and from UInt64? Because I have written a class library that converts structs or classes to bit fields packed into a single UInt64 value.
Example: Consider the DiffServ field in every IP packet header, which is composed of a number of binary bit fields:
Using my class library, I can create a struct to represent the DiffServ field. I created a BitFieldAttribute which indicates which bits belong where in the binary representation:
struct DiffServ : IBitField
{
[BitField(3,0)]
public PrecedenceLevel Precedence;
[BitField(1,3)]
public bool Delay;
[BitField(1,4)]
public bool Throughput;
[BitField(1,5)]
public bool Reliability;
[BitField(1,6)]
public bool MonetaryCost;
}
enum PrecedenceLevel
{
Routine, Priority, Immediate, Flash, FlashOverride, CriticEcp,
InternetworkControl, NetworkControl
}
My class library can then convert an instance of this struct to and from its proper binary representation:
// Create an arbitrary DiffServe instance.
DiffServ ds = new DiffServ();
ds.Precedence = PrecedenceLevel.Immediate;
ds.Throughput = true;
ds.Reliability = true;
// Convert struct to value.
long dsValue = ds.Pack();
// Create struct from value.
DiffServ ds2 = Unpack<DiffServ>(0x66);
To accomplish this, my class library looks for fields/properties decorated with the BitFieldAttribute. Getting a member's value retrieves an object containing the boxed value type (int, bool, enum, etc.). Therefore, I need to unbox any value type and convert it to its bare-bones binary representation so that the bits can be extracted and packed into a UInt64 value.
I'm going to post my best solution as fodder for the masses.
These conversions eliminate all exceptions (except for very large float, double, decimal values which do not fit in 64-bit integers) when unboxing an unknown simple value type held in object o:
long l = o is ulong ? (long)(ulong)o : Convert.ToInt64(o);
ulong u = o is ulong ? (ulong)o : (ulong)Convert.ToInt64(o);
Any improvements to this will be welcomed.
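Here is the same idea wrapped into a helper (the names BitHelper and ToBits are mine, not from the original library): ulong is special-cased because Convert.ToInt64 would overflow on large values, and everything else is routed through Int64 and reinterpreted unchecked.

```csharp
using System;

static class BitHelper
{
    // Unbox any simple value type to its raw two's-complement bits in a ulong.
    public static ulong ToBits(object o) =>
        o is ulong u ? u : unchecked((ulong)Convert.ToInt64(o));
}

class BitHelperDemo
{
    static void Main()
    {
        Console.WriteLine(BitHelper.ToBits((int)-10).ToString("X"));       // FFFFFFFFFFFFFFF6
        Console.WriteLine(BitHelper.ToBits(ulong.MaxValue).ToString("X")); // FFFFFFFFFFFFFFFF
        Console.WriteLine(BitHelper.ToBits((byte)10));                     // 10
    }
}
```

As in the original code, this still throws for float/double/decimal values too large for a 64-bit integer.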
I am reading a row from a SQL Server table. One of the columns is of type tinyint.
I want to get the value into an int or int32 variable.
rdr.GetByte(j)
(byte) rdr.GetValue(j)
...seems to be the only way to retrieve the value. But how do I get the result into an int variable?
int value = rdr.GetByte(j);
An explicit cast is not required, because byte-to-int is a widening conversion (there is no possibility of data loss).
See the documentation for BitConverter.ToInt32 (contains more examples):
byte[] bytes = { 0, 0, 0, 25 };
// If the system architecture is little-endian (that is, little end first),
// reverse the byte array.
if (BitConverter.IsLittleEndian)
Array.Reverse(bytes);
int i = BitConverter.ToInt32(bytes, 0);
Console.WriteLine("int: {0}", i);
// Output: int: 25
Assigning a byte to an int works:
int myInt = myByte;
But maybe you're getting an exception inside IDataRecord.GetByte, in which case you should check that the index you're using to access the data record really points to a tinyint column. You can check the type returned from GetValue. It should be a byte for a tinyint column.
Trace.Assert(rdr.GetValue(j).GetType() == typeof(byte));
Another option is to forego the fragile numeric index altogether:
int myInt = rdr.GetByte(rdr.GetOrdinal(TheNameOfTheTinyintColumn));
(int)rdr.GetByte(j)
Casting the byte to int should work just fine:
int myInt = (int) rdr.GetByte(j);
Since C# supports implicit conversions from byte to int, you can alternatively just do this:
int myInt = rdr.GetByte(j);
Which one you choose is a matter of preference (whether you want to document the fact that a cast is taking place or not). Note that you will need the explicit cast if you want to use type inference, or otherwise myInt will have the wrong type:
var myInt = (int) rdr.GetByte(j);
Quick tidbit I ran into as a kind of corner case: if you have an object that holds a boxed System.Byte, you cannot cast it directly to int. You must first unbox it to a byte, then cast to an int.
public int Method(object myByte)
{
// throws an InvalidCastException: cannot unbox a byte directly to int
// var val = (int)myByte;

// works: unbox to byte first, then convert to int
var val = (int)((byte)myByte);
return val;
}
Method((byte)1);
This is similar to Stephen Cleary's comment on the accepted answer, however I am required to specify the size of the int. This worked for me:
int value = Convert.ToInt32(rdr.GetValue(j));
(And it also provided backward compatibility with a database column using an int.)
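The backward compatibility works because Convert.ToInt32 dispatches on the runtime type of the boxed value, whereas a direct cast requires the exact boxed type. A sketch simulating what a data reader's GetValue(j) might return for each column type:

```csharp
using System;

class ConvertDemo
{
    static void Main()
    {
        // Simulating what a data reader's GetValue(j) might return
        object fromTinyint = (byte)5;   // tinyint column -> boxed byte
        object fromInt     = 5;         // int column     -> boxed int

        // A direct cast (int)fromTinyint would throw InvalidCastException,
        // but Convert.ToInt32 handles both runtime types:
        Console.WriteLine(Convert.ToInt32(fromTinyint)); // 5
        Console.WriteLine(Convert.ToInt32(fromInt));     // 5
    }
}
```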