Well, I have my hex string (PacketS), for example "70340A0100000000000000". I want to split it after every 2 characters and put the result into a byte array that I then write to a stream (Stream).
That means {70, 34, 0A, 01, 00, 00, 00, 00, 00, 00, 00}.
The shortest path (.NET 4+), depending on the length and origin of your string, is:
byte[] myBytes = BigInteger.Parse("70340A0100000000000000", NumberStyles.HexNumber).ToByteArray();
Array.Reverse(myBytes);
myStream.Write(myBytes, 0, myBytes.Length);
For earlier framework versions, string.Length / 2 also gives the length of a byte array that can be filled by parsing each pair of characters. That byte array can be written to the stream as above.
In both cases, if your origin string is very long and you want to avoid a huge byte array, process it in steps, taking groups of N characters from the origin at a time (see the sketch below).
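A minimal sketch of that chunked approach, reusing the PacketS and myStream names from the question; the chunk size of 1024 hex characters is only an illustrative assumption:
const int chunkChars = 1024;                       // must be even so hex pairs are never split
for (int offset = 0; offset < PacketS.Length; offset += chunkChars)
{
    string chunk = PacketS.Substring(offset, Math.Min(chunkChars, PacketS.Length - offset));
    byte[] part = new byte[chunk.Length / 2];
    for (int i = 0; i < part.Length; i++)
        part[i] = byte.Parse(chunk.Substring(i * 2, 2), NumberStyles.HexNumber);
    myStream.Write(part, 0, part.Length);          // write each small buffer instead of one huge array
}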
This actually worked perfectly! I am sorry if your code does the same thing, but I just do not understand it.
public static byte[] ConvertHexStringToByteArray(string hexString)
{
    if (hexString.Length % 2 != 0)
    {
        throw new ArgumentException(String.Format(CultureInfo.InvariantCulture, "The binary key cannot have an odd number of digits: {0}", hexString));
    }

    byte[] HexAsBytes = new byte[hexString.Length / 2];
    for (int index = 0; index < HexAsBytes.Length; index++)
    {
        string byteValue = hexString.Substring(index * 2, 2);
        HexAsBytes[index] = byte.Parse(byteValue, NumberStyles.HexNumber, CultureInfo.InvariantCulture);
    }

    return HexAsBytes;
}
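Usage with the stream from the original question might then look like this (myStream is assumed to be the target stream):
byte[] packetBytes = ConvertHexStringToByteArray("70340A0100000000000000");
// packetBytes == { 0x70, 0x34, 0x0A, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
myStream.Write(packetBytes, 0, packetBytes.Length);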
I'm trying to calculate a CRC-16 for a Concox VTS device. After a lot of searching I've found several formulas, lookup tables, the Crc32.NET library, etc., and tried them, but I didn't get the result I wanted.
For example, when I use the Crc32.NET library:
byte[] data = { 11, 01, 03, 51, 51, 00, 94, 10, 95, 20, 20, 08, 25, 81, 00, 23 };
UInt32 crcOut = Crc32Algorithm.ComputeAndWriteToEnd(data);
Console.WriteLine(crcOut.ToString());
It returns: 3232021645
But it should actually return: 90DD
I've tried other examples too, but they did not return the proper value either.
Edit:
Here is the raw data from the device:
{78}{78}{11}{01}{03}{51}{51}{00}{94}{10}{95}{20}{20}{08}{25}{81}{00}{23}{90}{DD}{0D}{0A}
When split according to the data sheet, it looks like this:
{78}{78} = Start Bit
{11} = Packet Length(17) = (01 + 12 + 02 + 02) = Decimal 17 = Hexadecimal 11
{01} = Protocol No
{03}{51}{51}{00}{94}{10}{95}{20} = TerminalID = 0351510094109520
{20}{08} = Model Identification Code
{25}{81} = Time Zone Language
{00}{23} = Information Serial No
{90}{DD} = Error Check (CRC : Packet Length to Information Serial No)
{0D}{0A} = Stop Bit
The data sheet says the error check needs a CRC computed from Packet Length through Information Serial No. To reply to this packet I also need to build a data packet with its CRC code.
I've found an online calculator at the link below; the data matches CRC-16/X-25.
Now I need to calculate it in C# code.
https://crccalc.com/?crc=11,%2001,%2003,%2051,%2051,%2000,%2094,%2010,%2095,%2020,%2020,%2008,%2025,%2081,%2000,%2023&method=CRC-16/X-25&datatype=hex&outtype=0
Waiting for your reply.
Thanks
The CRC-16 you assert that you need in your comment (which needs to be in your question) is the CRC-16/X-25. On your data, that CRC gives 0xcac0, not 0x90dd.
In fact, none of the documented CRC-16's listed in that catalog produce 0x90dd for your data. You need to provide a reference for the CRC that you need, and how you determined that 0x90dd is the expected result for that data.
Update for updated question:
The bytes you provided in your example data are in decimal:
byte[] data = { 11, 01, 03, 51, 51, 00, 94, 10, 95, 20, 20, 08, 25, 81, 00, 23 };
That is completely wrong, since, based on the actual message data in your updated question that you want the CRC for, those digits must be interpreted as hexadecimal. (Just by chance, none of those numbers contain the hexadecimal digits a..f.) To represent that test vector in your code correctly, it needs to be:
byte[] data = { 0x11, 0x01, 0x03, 0x51, 0x51, 0x00, 0x94, 0x10, 0x95, 0x20, 0x20, 0x08, 0x25, 0x81, 0x00, 0x23 };
This computes the CRC-16/X-25:
ushort crc16_x25(byte[] data, int len) {
    ushort crc = 0xffff;
    for (int i = 0; i < len; i++) {
        crc ^= data[i];                 // XOR the next byte into the low bits
        for (int k = 0; k < 8; k++)     // process 8 bits with the reflected polynomial 0x8408
            crc = (ushort)((crc & 1) != 0 ? (crc >> 1) ^ 0x8408 : crc >> 1);
    }
    return (ushort)~crc;                // final XOR with 0xffff
}
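Called on the corrected test vector, it should reproduce the checksum carried in the packet:
byte[] data = { 0x11, 0x01, 0x03, 0x51, 0x51, 0x00, 0x94, 0x10, 0x95, 0x20, 0x20, 0x08, 0x25, 0x81, 0x00, 0x23 };
ushort crc = crc16_x25(data, data.Length);
Console.WriteLine(crc.ToString("X4"));   // prints 90DD, matching the {90}{DD} error-check bytes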
For example, I want to convert 3 values to bytes and then to an ASCII string:
int a = random.Next(0, 100);
int b = random.Next(0, 1000);
int c = random.Next(0, 30);
byte[] byte1 = BitConverter.GetBytes(a);
byte[] byte2 = BitConverter.GetBytes(b);
byte[] byte3 = BitConverter.GetBytes(c);
byte[] bytes = byte1.Concat(byte2).Concat(byte3).ToArray();
string asciiString = Encoding.ASCII.GetString(bytes, 0, bytes.Length);
label1.Text = asciiString;
It only shows byte1 instead of all bytes.
If you look at your asciiString variable in the debugger, you will see that all 3 letters are there, but in between there is always a 0x00 char.
(screenshot from LINQPad Dump)
This is unfortunately interpreted as the end of the string, which is why you see only the first byte/letter.
The documentation of GetBytes(char) says that it returns:
An array of bytes with length 2.
If you now get the bytes of a single char:
byte[] byte1 = BitConverter.GetBytes('a');
You get a 2-byte array whose second byte is 0x00 (for 'a' that is { 0x61, 0x00 }).
The solution would be to pick only the bytes that are not 0x00:
bytes = bytes.Where(x => x != 0x00).ToArray();
string asciiString = Encoding.ASCII.GetString(bytes, 0, bytes.Length);
label1.Text = asciiString;
This example is based on the char overload of GetBytes, but the same holds for all other overloads of this method. They all return an array sized to hold the maximum value of the corresponding data type, so this will always happen when the value is small enough that the last byte in the array is unused and ends up as 0!
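A quick way to see that padding, assuming a little-endian machine:
byte[] fromChar = BitConverter.GetBytes('a');   // length 2: { 0x61, 0x00 }
byte[] fromInt  = BitConverter.GetBytes(97);    // length 4: { 0x61, 0x00, 0x00, 0x00 }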
I have
0x4D5A90000300000004000000FFFF0000B80000000000000040...
generated from SQL Server.
How can I insert this byte string into a byte[] column in the database using Entity Framework?
As per my comment above, I strongly suspect that the best thing to do here is to return the data as a byte[] from the server; this should be fine and easy to do. However, if you have to use a string, then you'll need to parse it out - take off the 0x prefix, divide the length by 2 to get the number of bytes, then loop and parse each 2-character substring using Convert.ToByte(s, 16) in turn. Something like (completely untested):
int len = (value.Length / 2) - 1;              // number of bytes after the "0x" prefix
var arr = new byte[len];
for (int i = 0; i < len; i++) {
    var s = value.Substring((i + 1) * 2, 2);   // skip "0x", then take 2 hex chars per byte
    arr[i] = Convert.ToByte(s, 16);
}
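Wrapped in a small helper (the name ParseHexLiteral is made up here), a quick sanity check could look like this:
static byte[] ParseHexLiteral(string value)
{
    int len = (value.Length / 2) - 1;          // bytes after the "0x" prefix
    var arr = new byte[len];
    for (int i = 0; i < len; i++)
        arr[i] = Convert.ToByte(value.Substring((i + 1) * 2, 2), 16);
    return arr;
}

// ParseHexLiteral("0x4D5A90") == { 0x4D, 0x5A, 0x90 }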
The following code is a work in progress that I am also using to learn more about converting between bits, hex, and Int32.
A lot of it is obviously repetitive, since we're doing the same thing to 7 different "packages," so feel free to gloss over the repeats (I just wanted the entire code structure up front, to maybe answer some questions ahead of time).
/* Pack bits into containers to send them as 32-bit (4 bytes) items */
int finalBitPackage_1 = 0;
int finalBitPackage_2 = 0;
int finalBitPackage_3 = 0;
int finalBitPackage_4 = 0;
int finalBitPackage_5 = 0;
int finalBitPackage_6 = 0;
int finalBitPackage_7 = 0;
var bitContainer_1 = new BitArray(32, false);
var bitContainer_2 = new BitArray(32, false);
var bitContainer_3 = new BitArray(32, false);
var bitContainer_4 = new BitArray(32, false);
var bitContainer_5 = new BitArray(32, false);
var bitContainer_6 = new BitArray(32, false);
var bitContainer_7 = new BitArray(32, false);
string hexValue = String.Empty;
...
/* assign 32 bits (from bools) to every bitContainer_N here */
...
/* Using this single 1-D array for all assignments works because as soon as we convert arrays,
we store the result; this way we never overwrite ourselves */
int[] data = new int[1];
/* Copy containers to a 1-dimensional array, then into an Int for transmission */
bitContainer_1.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_1 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_2.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_2 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_3.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_3 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_4.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_4 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_5.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_5 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_6.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_6 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
bitContainer_7.CopyTo(data, 0);
hexValue = data[0].ToString("X");
finalBitPackage_7 = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
From what I've learned so far, when a binary value is converted to Int32 the first digit tells whether it will be negative or positive, where 1 indicates (-) and 0 indicates (+); however, the bitArrays that start with a 0 show up as negative numbers when I do the CopyTo(int[]) operation, and the bitArrays that start with a 1 show up as positive when they are copied.
In addition, there is the problem of converting them from their Int32 values into hex values. Any values that come out of the array conversion as negative don't get the 8 F's added to the front, as they do when checked with http://www.binaryhexconverter.com/. I wasn't sure what to make of that difference, since my hex knowledge is limited, and I don't want to lose meaningful data when I transmit it to another system (over TCP/IP, if that matters to anyone). I'll post the values I'm getting out of everything below to help clear it up.
Variable        Binary                            Int32        My Hex
bitContainer_1  01010101010101010101010101010101  -1431655766  AAAAAAAA
bitContainer_2  10101010101010101010101010101010   1431655765  55555555
bitContainer_3  00110011001100110011001100110011   -858993460  CCCCCCCC
bitContainer_4  11001100110011001100110011001100    858993459  33333333
bitContainer_5  11100011100011100011100011100011   -954437177  C71C71C7
bitContainer_6  00011100011100011100011100011100    954437176  38E38E38
bitContainer_7  11110000111100001111000011110000    252645135  F0F0F0F
Online Hex Values:
FFFFFFFFAAAAAAAA
55555555
FFFFFFFFCCCCCCCC
33333333
FFFFFFFFC71C71C7
38E38E38
F0F0F0F
If every integer's sign is reversed, multiply by -1 (theIntegerValue * -1) to un-reverse it. It could also have something to do with how you're calling ToString("X"); maybe use a blank format string?
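For what it's worth, here is a sketch (not part of the original answer) of one way to keep the full 32-bit pattern regardless of sign when formatting: reinterpret the int as a uint and pad to 8 hex digits with "X8".
int finalBitPackage = -1431655766;                              // bit pattern 0xAAAAAAAA
string hex = unchecked((uint)finalBitPackage).ToString("X8");   // "AAAAAAAA", no bits lost, no sign extension
int roundTrip = unchecked((int)uint.Parse(hex, System.Globalization.NumberStyles.HexNumber));
// roundTrip == -1431655766, so the value survives transmission as hex text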
I'm trying to convert two bytes into an unsigned short so I can retrieve the actual server port value. I'm basing this on this protocol specification, under Reply Format. I tried using BitConverter.ToUInt16() for this, but the problem is that it doesn't seem to return the expected value. See below for a sample implementation:
int bytesRead = 0;
while (bytesRead < ms.Length)
{
int first = ms.ReadByte() & 0xFF;
int second = ms.ReadByte() & 0xFF;
int third = ms.ReadByte() & 0xFF;
int fourth = ms.ReadByte() & 0xFF;
int port1 = ms.ReadByte();
int port2 = ms.ReadByte();
int actualPort = BitConverter.ToUInt16(new byte[2] {(byte)port1 , (byte)port2 }, 0);
string ip = String.Format("{0}.{1}.{2}.{3}:{4}-{5} = {6}", first, second, third, fourth, port1, port2, actualPort);
Debug.WriteLine(ip);
bytesRead += 6;
}
Given one data sample, let's say the two byte values are 105 and 135: the expected port value after conversion should be 27015, but instead I get 34665 using BitConverter.
Am I doing it the wrong way?
If you reverse the values in the BitConverter call, you should get the expected result:
int actualPort = BitConverter.ToUInt16(new byte[2] {(byte)port2 , (byte)port1 }, 0);
On a little-endian architecture, the low order byte needs to be second in the array. And as lasseespeholt points out in the comments, you would need to reverse the order on a big-endian architecture. That could be checked with the BitConverter.IsLittleEndian property. Or it might be a better solution overall to use IPAddress.HostToNetworkOrder (convert the value first and then call that method to put the bytes in the correct order regardless of the endianness).
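A sketch of that IPAddress-based route, assuming the two bytes arrive in network byte order as in the question (for 16-bit values, NetworkToHostOrder and HostToNetworkOrder perform the same swap):
using System.Net;

short networkOrder = BitConverter.ToInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
int actualPort = (ushort)IPAddress.NetworkToHostOrder(networkOrder);
// 105, 135 -> 27015 on both little- and big-endian machines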
BitConverter is doing the right thing; you just have the low byte and high byte mixed up. You can verify this with a manual bit shift:
byte port1 = 105;
byte port2 = 135;
ushort value = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
ushort value2 = (ushort)(port1 + (port2 << 8)); //same output
To work on both little and big endian architectures, you must do something like:
if (BitConverter.IsLittleEndian)
actualPort = BitConverter.ToUInt16(new byte[2] {(byte)port2 , (byte)port1 }, 0);
else
actualPort = BitConverter.ToUInt16(new byte[2] {(byte)port1 , (byte)port2 }, 0);
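Alternatively, a plain bit shift sidesteps endianness entirely, because it operates on values rather than on memory layout:
int actualPort = (port1 << 8) | port2;   // 105, 135 -> 27015 on any architecture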