I am dealing with a SQL table that has a column of type varbinary(max). I intend to store two Int16 values in it via a stored procedure. I am going to consume this column value in C# code, so I was hoping I could do something like 'save one value in the first 16 bits and the second value in the last 16 bits'. I explored SQL bitwise operators but could not work out how to do it.
Would greatly appreciate if I can get any pointers or links to read-up.
You can convert the stored procedure parameters to binary and concatenate them:
DECLARE @T TABLE (BinaryValue VARBINARY(MAX))
DECLARE @Int1 SMALLINT
DECLARE @Int2 SMALLINT

SELECT
    @Int1 = 32767,
    @Int2 = -32768

INSERT @T (BinaryValue)
SELECT CAST(ISNULL(@Int1, 0) AS VARBINARY(2)) + CAST(ISNULL(@Int2, 0) AS VARBINARY(2))

SELECT
    BinaryValue,
    Int1 = CAST(SUBSTRING(BinaryValue, 1, 2) AS SMALLINT),
    Int2 = CAST(SUBSTRING(BinaryValue, 3, 2) AS SMALLINT)
FROM
    @T
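One thing to watch when you consume this column in C#: SQL Server's CAST from SMALLINT to VARBINARY emits the bytes in big-endian order, while BitConverter follows the machine's (usually little-endian) byte order. A minimal sketch of decoding the 4-byte value on the C# side, assuming the layout produced by the T-SQL above:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Bytes as stored by the T-SQL above: 32767 (0x7FFF) then -32768 (0x8000),
        // each SMALLINT in big-endian order.
        byte[] binaryValue = { 0x7F, 0xFF, 0x80, 0x00 };

        // Reassemble each SMALLINT from its big-endian byte pair.
        short int1 = (short)((binaryValue[0] << 8) | binaryValue[1]);
        short int2 = (short)((binaryValue[2] << 8) | binaryValue[3]);

        Console.WriteLine(int1); // prints 32767
        Console.WriteLine(int2); // prints -32768
    }
}
```

Shifting by hand like this works on any machine; BitConverter.ToInt16 would give the wrong values here on little-endian hardware unless you reverse each pair first.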
To store two Int16 values you need a total of 32 bits, or 4 bytes. Here is some C# code that shows how to convert your two Int16 values to a byte array and back again using bit shifting.
I realize that you may need to do some of this inside a stored procedure, but if you study the simple bit-shifting logic, you shouldn't have a hard time translating it into your procedure.
Hopefully this will get you started:
public static void Main(string[] args)
{
Int16 value1 = 12345;
Int16 value2 = 31210;
byte[] bytes = new byte[4];
bytes[0] = (byte)(value1 >> 8);
bytes[1] = (byte)value1;
bytes[2] = (byte)(value2 >> 8);
bytes[3] = (byte)value2;
// store the byte array in your db column.
// Now let's pretend we're reading the byte array and converting back to our numbers.
Int16 decodedValue1 = (Int16)((bytes[0] << 8) | bytes[1]);
Int16 decodedValue2 = (Int16)((bytes[2] << 8) | bytes[3]);
Console.WriteLine(decodedValue1); // prints 12345
Console.WriteLine(decodedValue2); // prints 31210
}
Here is another way to do it in C# without explicit bit shifting, using the built-in BitConverter class. Note that BitConverter uses the machine's byte order (little-endian on x86/x64), so the bytes land in the array in a different order than in the bit-shifting example above; round-tripping with the same method is fine, but don't mix the two encodings.
public static void Main(string[] args)
{
Int16 value1 = 12345;
Int16 value2 = 31210;
byte[] bytes = new byte[4];
Array.Copy(BitConverter.GetBytes(value1), 0, bytes, 0, 2);
Array.Copy(BitConverter.GetBytes(value2), 0, bytes, 2, 2);
// store the byte array in your db column.
// Now let's pretend we're reading the byte array and converting back to our numbers.
Int16 decodedValue1 = BitConverter.ToInt16(bytes, 0);
Int16 decodedValue2 = BitConverter.ToInt16(bytes, 2);
Console.WriteLine(decodedValue1); // prints 12345
Console.WriteLine(decodedValue2); // prints 31210
}
I'm trying to convert an object with only int fields (ushort, ulong, uint, int, etc.) into a byte array containing each int as a byte in the order that it appears in the object.
For example, if I have an object of the form
obj = {subobj: {uint firstProp: 500, ushort secondProp: 12}, byte lastProp: 5}
then I expect the byte array to be
{0, 0, 1, 244, 0, 12, 5}
I tried to create this byte array by using Serialization (as described in this answer), but I'm noticing there's a bunch of stuff before and after each byte. Based on this website, I believe this represents the database and the file, which I don't want.
I know that in C++ I can use reinterpret_cast<uint8_t*>(obj) to get the desired result. Is there an equivalent way to do this in C#?
You can try something like this:
foreach (int value in obj) // pseudocode: a plain object isn't enumerable; iterate its int fields however suits you
{
    byte b0 = (byte)value;          // bits 0-7  (least significant)
    byte b1 = (byte)(value >> 8);   // bits 8-15
    byte b2 = (byte)(value >> 16);  // bits 16-23
    byte b3 = (byte)(value >> 24);  // bits 24-31 (most significant)
}
Obviously this is only the idea.
You should use a for loop instead of a foreach, and convert each element to int as needed.
Another way is to check the type of each value with
value.GetType() // option 1
// OR
if (value is int) // option 2
and then convert it to bytes as you need.
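For the nested-object case in the question, a reflection-based sketch can walk the public fields and emit each integer's bytes most-significant-first, which matches the expected {0, 0, 1, 244, 0, 12, 5} output. The class and field names below are invented to mirror the question's example, and note that GetFields() returning fields in declaration order is an implementation detail rather than a guarantee:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical classes mirroring obj = {subobj: {firstProp: 500, secondProp: 12}, lastProp: 5}.
class SubObj { public uint FirstProp = 500; public ushort SecondProp = 12; }
class Obj { public SubObj Sub = new SubObj(); public byte LastProp = 5; }

static class FieldSerializer
{
    // Recursively walks public instance fields and appends each
    // integer's bytes in big-endian order.
    public static byte[] ToBytes(object obj)
    {
        var bytes = new List<byte>();
        Append(obj, bytes);
        return bytes.ToArray();
    }

    static void Append(object obj, List<byte> bytes)
    {
        foreach (FieldInfo field in obj.GetType().GetFields())
        {
            object value = field.GetValue(obj);
            switch (value)
            {
                case byte b: bytes.Add(b); break;
                case ushort us: bytes.Add((byte)(us >> 8)); bytes.Add((byte)us); break;
                case uint ui:
                    for (int shift = 24; shift >= 0; shift -= 8)
                        bytes.Add((byte)(ui >> shift));
                    break;
                default: Append(value, bytes); break; // assume a nested object; recurse
            }
        }
    }
}

class Program
{
    static void Main()
    {
        byte[] result = FieldSerializer.ToBytes(new Obj());
        Console.WriteLine(string.Join(", ", result)); // 0, 0, 1, 244, 0, 12, 5
    }
}
```

Unlike BinaryFormatter-style serialization, this emits only the raw field bytes, with none of the type metadata the question complained about.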
First: I have read many posts and tried BitConverter methods for the conversion, but I haven't got the desired result.
From a 2-byte array:
byte[] dateArray = new byte[] { 0x07, 0xE4 };
I need to get an integer with the value 2020, i.e. the decimal of 0x7E4.
The following method does not return the desired value:
int i1 = BitConverter.ToInt16(dateArray, 0);
Endianness tells you how multi-byte numbers are stored on your computer. There are two possibilities: little endian and big endian.
Big endian means the most significant byte is stored first, i.e. 2020 becomes 0x07, 0xE4.
Little endian means the least significant byte is stored first, i.e. 2020 becomes 0xE4, 0x07.
Most computers are little endian, hence the other way round from what a human would expect. With BitConverter.IsLittleEndian you can check which endianness your computer has. Your code becomes:
byte[] dateArray = new byte[] { 0x07, 0xE4 };
if (BitConverter.IsLittleEndian)
{
    Array.Reverse(dateArray);
}
int i1 = BitConverter.ToInt16(dateArray, 0);
Alternatively, assemble the value by hand, which works regardless of the machine's endianness:
int i1 = (dateArray[0] << 8) | dateArray[1];
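On .NET Core 2.1+ and .NET 5+, the System.Buffers.Binary.BinaryPrimitives class reads integers with an explicit byte order, which avoids the IsLittleEndian check entirely. A minimal sketch:

```csharp
using System;
using System.Buffers.Binary;

class Program
{
    static void Main()
    {
        byte[] dateArray = { 0x07, 0xE4 };

        // Read the two bytes as a big-endian 16-bit value: 0x07E4 = 2020.
        short year = BinaryPrimitives.ReadInt16BigEndian(dateArray);

        Console.WriteLine(year); // prints 2020
    }
}
```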
I have a byte array received from a C++ program.
arr[0..3] // a real32,
arr[4] // a uint8,
How can I interpret arr[4] as an int?
(uint)arr[4] // Err: can't implicitly convert string to int.
BitConverter.ToUint16(arr[4]) // Err: Invalid argument.
buff[0+4] as int // Err: must be reference or nullable type
Do I have to zero consecutive byte to interpret it as a UInt16?
OK, here is the confusion. Initially, I defined my class.
byte[] buff;
buff = getSerialBuffer();
public class Reading{
public string scale_id;
public string measure;
public int measure_revised;
public float wt;
}
rd = new Reading();
// !! here is the confusion... !!
// Err: Can't implicitly convert 'string' to 'int'
rd.measure = string.Format("{0}", buff[0 + 4]);
// then I thought, maybe I should convert buff[4] to int first?
// I threw all forms of conversion at it; none worked.
// but, later it turned out:
rd.measure_revised = buff[0+4]; // just ok.
So basically, I don't understand why this happens
rd.measure = string.Format("{0}", buff[0 + 4]);
//Err: Can't implicitly convert 'string' to 'int'
If buff[4] is a byte and a byte is a uint8, what does it mean by 'can't implicitly convert string to int'? It confuses me.
You were almost there. Assuming you wanted a 32-bit int from the first 4 bytes (it's hard to interpret your question):
BitConverter.ToInt32(arr, 0);
This says to take the 4 bytes from arr, starting at index 0, and turn them into a 32-bit int. (docs)
Note that BitConverter uses the endianness of the computer, so on x86/x64 this will be little-endian.
If you want to use an explicit endianness, you'll need to construct the int by hand:
int littleEndian = arr[0] | (arr[1] << 8) | (arr[2] << 16) | (arr[3] << 24);
int bigEndian = arr[3] | (arr[2] << 8) | (arr[1] << 16) | (arr[0] << 24);
If instead you wanted a 32-bit floating-point number from the first 4 bytes, see Dmitry Bychenko's answer.
If I've understood you right, you have a byte (not a string) array:
byte[] arr = new byte[] {
182, 243, 157, 63, // Real32 - C# Single or float (e.g. 1.234f)
123 // uInt8 - C# byte (e.g. 123)
};
To get the float and the byte back you can try BitConverter:
// read float / single starting from 0th byte
float realPart = BitConverter.ToSingle(arr, 0);
byte bytePart = arr[4];
Console.Write($"Real Part: {realPart}; Integer Part: {bytePart}");
Outcome:
Real Part: 1.234; Integer Part: 123
Same idea (BitConverter class) if we want to encode arr:
float realPart = 1.234f;
byte bytePart = 123;
byte[] arr =
BitConverter.GetBytes(realPart)
.Concat(new byte[] { bytePart })
.ToArray();
Console.Write(string.Join(" ", arr));
Outcome:
182 243 157 63 123
I need to convert an int to a byte array of size 3. This means dropping the last byte, for example:
var temp = BitConverter.GetBytes(myNum).Take(3).ToArray();
However, is there a better way to do this? Maybe by creating a custom struct?
EDIT
For this requirement I have a predefined max value of 16777215 for this new data type.
Something like this (no LINQ, just extracting the bytes):
int value = 123;
byte[] result = new byte[] {
(byte) (value & 0xFF),
(byte) ((value >> 8) & 0xFF),
(byte) ((value >> 16) & 0xFF),
};
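Going the other way, assembling the int back from the same little-endian 3-byte layout is just the mirror-image shifts. A quick sketch (variable names assumed):

```csharp
using System;

class Program
{
    static void Main()
    {
        byte[] result = { 123, 0, 0 }; // 123 encoded as 3 little-endian bytes

        // Mirror of the encoding above: byte 0 is least significant.
        int value = result[0] | (result[1] << 8) | (result[2] << 16);

        Console.WriteLine(value); // prints 123
    }
}
```

Since the top byte was dropped on encode, this always yields a value in the range 0 to 16777215.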
Sounds like you want to create a new struct that represents a 3-byte unsigned integer (based solely on the max value quoted).
Your original method is prone to failure: firstly, Take(3) depends on whether the system you're running on is big-endian or little-endian; secondly, it doesn't account for being passed a negative int, which your new struct can't handle.
You will need to write the conversions yourself. I would take the int as given, check that it isn't negative and isn't bigger than 16777215; if it passes those checks it fits in three bytes and you can store it in your new struct. Extract the three bytes with explicit shifts, as in the snippet above, rather than Take(3); that sidesteps the endianness problem entirely.
As input I get an int (well, actually a string I should convert to an int).
This int should be converted to bits.
For each bit position that has a 1, I should get the position.
In my database, I want all records that have an int value field that has this position as value.
I currently have the following naive code that should ask my entity (holding the databaseValue) whether it matches the position, but it obviously doesn't work correctly:
Byte[] bits = BitConverter.GetBytes(theDatabaseValue);
return bits[position].equals(1);
Firstly, I have an array of bytes because there apparently is no bit type. Should I use Boolean[]?
Then, how can I fill this array?
Lastly, if previous statements are solved, I should just return bits[position]
I feel like this should somehow be solved with bitmasks, but I don't know where to start.
Any help would be appreciated
Your feeling is correct. This should be solved with bitmasks. BitConverter does not return bits (and how could it? "bits" isn't an actual data type), it converts raw bytes to CLR data types. Whenever you want to extract the bits out of something, you should think bitmasks.
If you want to check if a bit at a certain position is set, use the & operator. Bitwise & is only true if both bits are set. For example if you had two bytes 109 and 33, the result of & would be
0110 1101
& 0010 0001
-----------
0010 0001
If you just want to see if a bit is set in an int, you & it with a number that has only the bit you're checking set (i.e. 1, 2, 4, 8, 16, 32, and so forth) and check if the result is non-zero.
List<int> BitPositions(uint input) {
    List<int> result = new List<int>();
    uint mask = 1;
    int position = 0;
    do {
        if ((input & mask) != 0) {
            result.Add(position);
        }
        mask <<= 1;
        position++;
    } while (mask != 0);
    return result;
}
I suspect BitArray is what you're after. Alternatively, using bitmasks yourself isn't hard:
for (int i=0; i < 32; i++)
{
if ((value & (1 << i)) != 0)
{
Console.WriteLine("Bit {0} was set!", i);
}
}
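The BitArray mentioned above wraps exactly this masking loop; a minimal sketch of using it for the same job:

```csharp
using System;
using System.Collections;

class Program
{
    static void Main()
    {
        int value = 42; // binary 101010

        // BitArray(int[]) exposes bit i of the value as bits[i].
        var bits = new BitArray(new[] { value });

        for (int i = 0; i < bits.Length; i++)
        {
            if (bits[i])
            {
                Console.WriteLine("Bit {0} was set!", i); // bits 1, 3 and 5 for 42
            }
        }
    }
}
```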
Do not use Boolean. Although a bool has only two values, a standalone bool local is typically padded to a full 32-bit slot like an int, and in array form each bool still occupies a whole byte, not a single bit. Either way, a bitmask or a BitArray is far more compact.