I have a function that receives some data and has to respond with a HEX value.
public byte[] GetData(string value)
{
    byte[] returnVal = null;
    switch (value)
    {
        case "demo1":
            byte byte1 = (byte)(32769 & 0x000000FF);
            byte byte2 = (byte)((32769 & 0x0000FF00) >> 8);
            returnVal = new byte[] { byte1, byte2 };
            break;
        .....
    }
    return returnVal;
}
In this example I have to respond with 0x8001.
I created the following code to manually build a 2-byte array with the right response.
byte byte1 = (byte)((32769 & 0x000000FF));
byte byte2 = (byte)((32769 & 0x0000FF00) >> 8);
var resCmd = new byte[] { byte1, byte2 };
The response can be different depending on the value I receive, so I want to have an enum with the responses and then convert that to a byte array.
For example:
public enum Commands
{
    CMD1 = 0x8001,
    CMD2 = 0x8002,
    CMD3 = 0x8003
};
How can I convert an enum value, for example CMD1, to the 2-byte array that I need?
Thanks
Use BitConverter after casting the value to a 16-bit unsigned int:
var bytes = BitConverter.GetBytes((UInt16)Commands.CMD1);
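For example, the GetData method from the question could then become something like this (just a sketch; note that BitConverter.GetBytes returns the bytes in the machine's byte order, which on a little-endian system gives { 0x01, 0x80 } and so matches the array built by hand above):
public byte[] GetData(string value)
{
    switch (value)
    {
        case "demo1":
            // 0x8001 -> { 0x01, 0x80 } on a little-endian machine
            return BitConverter.GetBytes((UInt16)Commands.CMD1);
        // ... remaining cases map their values to CMD2, CMD3, and so on
        default:
            return null;
    }
}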
I have a struct that gets used all over the place and that I store as a byte array on the HD and also send to other platforms.
I used to do this by getting a string version of the struct and using getBytes(utf-8) and getString(utf-8) during serialization. With that I guess I avoided the little- and big-endian problems?
However that was quite a bit of overhead and I am now using this:
public static explicit operator byte[](Int3 self)
{
    // Pack x, y and z as 4 bytes each, least significant byte first.
    byte[] int3ByteArr = new byte[12]; // 4 * 3
    int x = self.x;
    int3ByteArr[0] = (byte)x;
    int3ByteArr[1] = (byte)(x >> 8);
    int3ByteArr[2] = (byte)(x >> 0x10);
    int3ByteArr[3] = (byte)(x >> 0x18);
    int y = self.y;
    int3ByteArr[4] = (byte)y;
    int3ByteArr[5] = (byte)(y >> 8);
    int3ByteArr[6] = (byte)(y >> 0x10);
    int3ByteArr[7] = (byte)(y >> 0x18);
    int z = self.z;
    int3ByteArr[8] = (byte)z;
    int3ByteArr[9] = (byte)(z >> 8);
    int3ByteArr[10] = (byte)(z >> 0x10);
    int3ByteArr[11] = (byte)(z >> 0x18);
    return int3ByteArr;
}

public static explicit operator Int3(byte[] self)
{
    // Reassemble x, y and z from the same least-significant-byte-first layout.
    int x = self[0] + (self[1] << 8) + (self[2] << 0x10) + (self[3] << 0x18);
    int y = self[4] + (self[5] << 8) + (self[6] << 0x10) + (self[7] << 0x18);
    int z = self[8] + (self[9] << 8) + (self[10] << 0x10) + (self[11] << 0x18);
    return new Int3(x, y, z);
}
It works quite well for me, but I am not quite sure how little/big endian works. Do I still have to take care of something here to be safe when another machine receives an int I sent as a byte array?
Your current approach will not work when your application runs on a big-endian system; in that situation you wouldn't need that reordering at all.
You don't need to reverse byte arrays yourself, and you don't need to check the endianness of the system yourself.
The static method IPAddress.HostToNetworkOrder converts an integer to an integer with big-endian byte order.
The static method IPAddress.NetworkToHostOrder converts an integer back to the byte order your system uses.
These methods check the endianness of the system and reorder the integers only when needed.
For getting bytes from an integer and back, use BitConverter.
public struct ThreeIntegers
{
    public int One;
    public int Two;
    public int Three;
}

public static byte[] ToBytes(this ThreeIntegers value)
{
    byte[] bytes = new byte[12];
    byte[] bytesOne = IntegerToBytes(value.One);
    Buffer.BlockCopy(bytesOne, 0, bytes, 0, 4);
    byte[] bytesTwo = IntegerToBytes(value.Two);
    Buffer.BlockCopy(bytesTwo, 0, bytes, 4, 4);
    byte[] bytesThree = IntegerToBytes(value.Three);
    Buffer.BlockCopy(bytesThree, 0, bytes, 8, 4);
    return bytes;
}

public static byte[] IntegerToBytes(int value)
{
    int reordered = IPAddress.HostToNetworkOrder(value);
    return BitConverter.GetBytes(reordered);
}
And converting from the bytes back to the struct:
public static ThreeIntegers GetThreeIntegers(byte[] bytes)
{
    int rawValueOne = BitConverter.ToInt32(bytes, 0);
    int valueOne = IPAddress.NetworkToHostOrder(rawValueOne);
    int rawValueTwo = BitConverter.ToInt32(bytes, 4);
    int valueTwo = IPAddress.NetworkToHostOrder(rawValueTwo);
    int rawValueThree = BitConverter.ToInt32(bytes, 8);
    int valueThree = IPAddress.NetworkToHostOrder(rawValueThree);
    // ThreeIntegers (as declared above) has no three-argument constructor, so use an object initializer.
    return new ThreeIntegers { One = valueOne, Two = valueTwo, Three = valueThree };
}
If you use BinaryReader and BinaryWriter for saving and sending to other platforms, then BitConverter and the byte array manipulation can be dropped entirely.
// BinaryWriter.Write has an overload for Int32
public static void SaveThreeIntegers(ThreeIntegers value)
{
    using (var stream = CreateYourStream())
    using (var writer = new BinaryWriter(stream))
    {
        int reorderedOne = IPAddress.HostToNetworkOrder(value.One);
        writer.Write(reorderedOne);
        int reorderedTwo = IPAddress.HostToNetworkOrder(value.Two);
        writer.Write(reorderedTwo);
        int reorderedThree = IPAddress.HostToNetworkOrder(value.Three);
        writer.Write(reorderedThree);
    }
}
For reading the values back:
public static ThreeIntegers LoadThreeIntegers()
{
    using (var stream = CreateYourStream())
    using (var reader = new BinaryReader(stream))
    {
        int rawValueOne = reader.ReadInt32();
        int valueOne = IPAddress.NetworkToHostOrder(rawValueOne);
        int rawValueTwo = reader.ReadInt32();
        int valueTwo = IPAddress.NetworkToHostOrder(rawValueTwo);
        int rawValueThree = reader.ReadInt32();
        int valueThree = IPAddress.NetworkToHostOrder(rawValueThree);
        return new ThreeIntegers { One = valueOne, Two = valueTwo, Three = valueThree };
    }
}
Of course you can refactor the methods above into a cleaner solution, or add them as extension methods on BinaryWriter and BinaryReader, as sketched below.
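For instance, the calls above could be packaged like this (only a sketch; the class and method names here are made up):
public static class BinaryEndianExtensions
{
    // Writes an Int32 reordered to network (big-endian) byte order, as in SaveThreeIntegers above.
    public static void WriteInt32BigEndian(this BinaryWriter writer, int value)
    {
        writer.Write(IPAddress.HostToNetworkOrder(value));
    }

    // Reads an Int32 back and restores the host byte order, as in LoadThreeIntegers above.
    public static int ReadInt32BigEndian(this BinaryReader reader)
    {
        return IPAddress.NetworkToHostOrder(reader.ReadInt32());
    }
}
With these in place, SaveThreeIntegers and LoadThreeIntegers shrink to three WriteInt32BigEndian / ReadInt32BigEndian calls each.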
Yes you do. If the endianness changes, serialization that simply preserves the in-memory byte order will run into trouble.
Take the int value 385. On a big-endian system it is stored as the bytes
00 00 01 81
A little-endian system reads those same bytes as the value
0x81010000
which is a completely different (and, as a signed Int32, negative) number.
If you use the BitConverter class there is a bool value, BitConverter.IsLittleEndian, that tells you the endianness of the system. BitConverter can also produce the byte arrays for you.
You will have to settle on one endianness and reverse the byte arrays according to the endianness of the serializing or deserializing system.
The description on MSDN is actually quite detailed; there they use Array.Reverse for simplicity. I am not certain that your casting to/from byte in order to do the bit manipulation is in fact the fastest way of converting, but that is easily benchmarked.
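For example, if you standardize on big-endian for the serialized form, the reverse-if-needed pattern looks roughly like this (a sketch with made-up helper names):
// Convert an int to big-endian bytes regardless of the machine's endianness.
static byte[] ToBigEndianBytes(int value)
{
    byte[] bytes = BitConverter.GetBytes(value);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(bytes); // BitConverter produced little-endian bytes, so flip them
    return bytes;
}

// And back again on the receiving side.
static int FromBigEndianBytes(byte[] bytes)
{
    if (BitConverter.IsLittleEndian)
        Array.Reverse(bytes); // back to the machine's native order before converting
    return BitConverter.ToInt32(bytes, 0);
}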
I want to create a byte array that contains 64 bits. How can I get a particular bit's value, say the 17th bit, and also how can I get the hex value of that index of the byte array? I did it like this. Is this correct?
byte[] _byte = new byte[8];
var bit17 = (_byte[2] >> 1) & 0x01;
string hex = BitConverter.ToString(_byte, 2, 4).Replace("-", string.Empty);
You could use a BitArray:
var bits = new BitArray(64);
bool bit17 = bits[17];
I'm not sure what you mean by the "hex value of that bit" - it will be 0 or 1, because it's a bit.
If you have the index of a bit in a byte (between 0 and 7 inclusive) then you can convert that to a hex string as follows:
int bitNumber = 7; // For example.
byte value = (byte)(1 << bitNumber);
string hex = value.ToString("x");
Console.WriteLine(hex);
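If you already have the byte[8] from the question, you can also build the BitArray directly from it (a sketch; BitArray counts from the least significant bit of the first byte, so index 17 corresponds to (_byte[2] >> 1) & 0x01):
byte[] _byte = new byte[8];
var bits = new BitArray(_byte); // bit 0 = least significant bit of _byte[0]
bool bit17 = bits[17];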
You can just use the ToString() method:
byte[] arr= new byte[8];
int index = 0;
string hexValue = arr[index].ToString("X");
I am getting a SOAP response which contains a Base64 string. I am using XDocument to get the value of the element and a function like this to read it:
public void main()
{
    //****UPDATE
    string data64 = "";
    data64 = removeNewLinesFromString(data64);
    char[] content = data64.ToCharArray();
    byte[] binaryData = Convert.FromBase64CharArray(content, 0, content.Length);
    Stream stream = new MemoryStream(binaryData);
    BinaryReader reader = new BinaryReader(stream, Encoding.UTF8);
    string object64 = SoapSerializable.ReadUTF(reader);
}
This is the ReadUTF function:
public static String ReadUTF(BinaryReader reader)
{
    // read the following string's length in bytes
    int length = Helpers.FlipInt32(reader.ReadInt32());
    // read the string's data bytes
    byte[] utfString = reader.ReadBytes(length);
    // get the string by interpreting the read data as UTF-8
    return System.Text.Encoding.UTF8.GetString(utfString, 0, utfString.Length);
}
And my FlipInt32 function:
public static Int32 FlipInt32(Int32 value)
{
    Int32 a = (value >> 24) & 0xFF;
    Int32 b = (value >> 16) & 0xFF;
    Int32 c = (value >> 8) & 0xFF;
    Int32 d = (value >> 0) & 0xFF;
    return (((((d << 8) | c) << 8) | b) << 8) | a;
}
But the resulting values are slightly different from the results an online decoder gives.
Am I missing something here?
I am not sure what you are trying to do with BinaryReader, but here is what I do to get "this is a dummy encoded base64 string." back out of your Base64 data:
string data64 = "dGhpcyBpcyBhIGR1bW15IGVuY29kZWQgYmFzZTY0IHN0cmluZy4=";
var buf = Convert.FromBase64String(data64);
var str = Encoding.UTF8.GetString(buf);
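With the dummy string above, str comes out as "this is a dummy encoded base64 string.", which you can compare against what your online decoder shows.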
I have retrieved the size of my struct by using Marshal.SizeOf like below:
int len = Marshal.SizeOf(packet);
Now len has a value of 40. I have to assign this 40 to a 3-byte field of my structure. My structure looks like below:
public struct TCP_CIFS_Packet
{
    public byte zerobyte;
    public byte[] lengthCIFSPacket;
    public CIFSPacket cifsPacket;
}
I tried assigning the values like the following:
tcpCIFSPacket.lengthCIFSPacket = new byte[3];
tcpCIFSPacket.lengthCIFSPacket[0] = Convert.ToByte(0);
tcpCIFSPacket.lengthCIFSPacket[1] = Convert.ToByte(0);
tcpCIFSPacket.lengthCIFSPacket[2] = Convert.ToByte(40);
But this doesn't seem to be the right way. Is there any other way I can do this?
Edit (@ho1 and @Rune Grimstad):
After using BitConverter.GetBytes like the following:
tcpCIFSPacket.lengthCIFSPacket = BitConverter.GetBytes(lengthofPacket);
The size of lengthCIFSPacket changes to 4 bytes, but I only have 3 bytes of space for tcpCIFSPacket.lengthCIFSPacket in the packet structure.
int number = 500000;
byte[] bytes = new byte[3];
bytes[0] = (byte)((number & 0xFF) >> 0);
bytes[1] = (byte)((number & 0xFF00) >> 8);
bytes[2] = (byte)((number & 0xFF0000) >> 16);
or
byte[] bytes = BitConverter.GetBytes(number); // this will return 4 bytes of course
Edit: you can also do this:
byte[] bytes = BitConverter.GetBytes(number);
tcpCIFSPacket.lengthCIFSPacket = new byte[3];
tcpCIFSPacket.lengthCIFSPacket[0] = bytes[0];
tcpCIFSPacket.lengthCIFSPacket[1] = bytes[1];
tcpCIFSPacket.lengthCIFSPacket[2] = bytes[2];
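A variant of the same idea, using Array.Copy to take the first three bytes:
byte[] bytes = BitConverter.GetBytes(number);
tcpCIFSPacket.lengthCIFSPacket = new byte[3];
Array.Copy(bytes, tcpCIFSPacket.lengthCIFSPacket, 3); // copies bytes[0], bytes[1] and bytes[2]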
Look at BitConverter.GetBytes. It'll convert the int to an array of bytes. See here for more info.
You can use the BitConverter class to convert an Int32 to an array of bytes using the GetBytes method.
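For example, with the len variable from the question (the local name here is just illustrative):
byte[] lengthBytes = BitConverter.GetBytes(len); // 4 bytes; your field only has room for the first 3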
I have a string that contains the following 7 bits: 1101000. How can I convert it to a byte or int?
This should do it:
string binaryText = "1101000";
int value1 = Convert.ToInt32(binaryText, 2); // 104
byte value2 = Convert.ToByte(binaryText, 2); // 104
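If you want to sanity-check the result, Convert.ToString goes the other way:
string backToBinary = Convert.ToString(value1, 2); // "1101000"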
To convert the string itself into a byte array (note that this gives you the ASCII codes of the '1' and '0' characters, not the numeric value):
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
byte[] dBytes = encoding.GetBytes(str);