I'm playing around with the Source RCON Protocol, but I'm having issues converting a string to a byte array successfully.
Original Code (VB.NET) + Pastebin: http://pastebin.com/4BkbTRfD
Private Function RCON_Command(ByVal Command As String,
                              ByVal ServerData As Integer) As Byte()
    Dim Packet As Byte() = New Byte(CByte((13 + Command.Length))) {}
    Packet(0) = Command.Length + 9 'Packet Size (Integer)
    Packet(4) = 0 'Request Id (Integer)
    Packet(8) = ServerData 'SERVERDATA_EXECCOMMAND / SERVERDATA_AUTH (Integer)
    For X As Integer = 0 To Command.Length - 1
        Packet(12 + X) = System.Text.Encoding.Default.GetBytes(Command(X))(0)
    Next
    Return Packet
End Function
My current code in C# + Pastebin: http://pastebin.com/eVv0nZCf
byte[] RCONCommand(string cmd, int serverData)
{
    int packetSize = cmd.Length + 12;
    byte[] byteList = new byte[packetSize];
    byteList[0] = (byte)packetSize;
    byteList[4] = 0;
    byteList[8] = (byte)serverData;
    for (int X = 0; X < cmd.Length; X++)
    {
        byteList[12 + X] = Encoding.ASCII.GetBytes(cmd)[X];
    }
    return byteList;
}
When I use Encoding.ASCII.GetString(RCONCommand("Word", 3)); the result is a square mark. I tried Encoding.UTF8.GetString() too, but got the same result.
Packet structure can be found here: https://developer.valvesoftware.com/wiki/Source_RCON_Protocol#Basic_Packet_Structure
I just can't figure out what I've done wrong, because I'm not very familiar with bytes and such. PS: The example C# applications posted for the Source RCON Protocol documentation are hard to follow, because people use so much OOP and create millions of class files that I can't even find the relevant parts.
A byte is 8 bits. That means that a value such as Size, which according to the spec is a 32-bit little-endian unsigned integer, will require 4 bytes (32 ÷ 8 = 4).
To fit 32 bits of information into four bytes, you have to split it up. Think of this as dividing a four-digit number into four strings, one for each digit. Except we're working in binary, so it's a little more complicated. We have to do some bit shifting and masking to get just the bits we want into each "string."
The spec calls for little-endian, so the least significant byte comes first; this takes some getting used to if you haven't done it before.
packet[0] = (byte)(size & 0xFF);
packet[1] = (byte)((size >> 8) & 0xFF);
packet[2] = (byte)((size >> 16) & 0xFF);
packet[3] = (byte)((size >> 24) & 0xFF);
If you want to rely on the CLR, you can use BitConverter, although it's platform dependent, and not all platforms are little-endian.
var tmp = BitConverter.GetBytes(size);
if (BitConverter.IsLittleEndian)
{
    packet[0] = tmp[0];
    packet[1] = tmp[1];
    packet[2] = tmp[2];
    packet[3] = tmp[3];
}
else // in case you are running on a big-endian machine (e.g., an older PowerPC Mac)
{
    packet[0] = tmp[3];
    packet[1] = tmp[2];
    packet[2] = tmp[1];
    packet[3] = tmp[0];
}
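Putting this together with the packet layout from the linked wiki page (Size, ID, Type, null-terminated body, and an empty trailing string), a corrected version of your C# function could look roughly like this. The size calculation follows my reading of the spec, and it relies on BitConverter producing little-endian output (which it does on the usual x86/x64 platforms), so treat it as a sketch rather than a drop-in replacement:

byte[] RCONCommand(string cmd, int serverData)
{
    byte[] body = Encoding.ASCII.GetBytes(cmd);

    // Size = ID (4) + Type (4) + body + its null terminator (1) + empty trailing string (1)
    int size = body.Length + 10;
    byte[] packet = new byte[size + 4];                   // +4 for the Size field itself

    BitConverter.GetBytes(size).CopyTo(packet, 0);        // Size
    BitConverter.GetBytes(0).CopyTo(packet, 4);           // Request ID
    BitConverter.GetBytes(serverData).CopyTo(packet, 8);  // SERVERDATA_AUTH / SERVERDATA_EXECCOMMAND
    body.CopyTo(packet, 12);                              // Body

    return packet;                                        // the two trailing zero bytes are already 0
}

Note that the body needs two terminating zero bytes after it, which your original loop never wrote; they come for free here because a new byte[] is zero-initialized.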
I've found a method that implements the Adler-32 algorithm in C# and I would like to use it, but I do not understand part of the code.
Can someone explain to me:
1) why bit operators are used when sum1 and sum2 are initialized
2) why sum2 is shifted?
Adler32 on wiki https://en.wikipedia.org/wiki/Adler-32
& operator explanation:
(Binary AND Operator copies a bit to the result if it exists in both operands)
private bool MakeForBuffer(byte[] bytesBuff, uint adlerCheckSum)
{
    if (Object.Equals(bytesBuff, null))
    {
        checksumValue = 0;
        return false;
    }
    int nSize = bytesBuff.GetLength(0);
    if (nSize == 0)
    {
        checksumValue = 0;
        return false;
    }
    uint sum1 = adlerCheckSum & 0xFFFF;          // 1) why bit operator is used?
    uint sum2 = (adlerCheckSum >> 16) & 0xFFFF;  // 2) why bit operator is used? , why is it shifted?
    for (int i = 0; i < nSize; i++)
    {
        sum1 = (sum1 + bytesBuff[i]) % adlerBase;
        sum2 = (sum1 + sum2) % adlerBase;
    }
    checksumValue = (sum2 << 16) + sum1;
    return true;
}
1) why bit operator is used?
& 0xFFFF sets the two high bytes of the checksum to 0, so sum1 is simply the lower 16 bits of the checksum.
2) why bit operator is used? , why is it shifted?
adlerCheckSum >> 16 shifts the 16 high bits down into the low 16 bits, and & 0xFFFF does the same as in the first step: it clears the 16 high bits.
Example
adlerChecksum = 0x12345678
adlerChecksum & 0xFFFF = 0x00005678
adlerChecksum >> 16 = 0x00001234
(in C# the vacated high bits of a uint are filled with zeros; with a signed type, or in a language where the shift sign-extends, the high bits could end up non-zero, which is why the extra mask is applied)
(adlerChecksum >> 16) & 0xFFFF = 0x00001234, so now you can be sure it's 0x1234; this step is just a precaution that's probably unnecessary for an unsigned value in C#.
adlerChecksum = 0x12345678
sum1 = 0x00005678
sum2 = 0x00001234
Those two operations combined simply split the UInt32 checksum into two UInt16.
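As a quick sanity check, here is a minimal sketch (using the example value above) showing the split and the recombination that happens at the end of the method:

uint adlerCheckSum = 0x12345678;

uint sum1 = adlerCheckSum & 0xFFFF;            // low 16 bits  -> 0x5678
uint sum2 = (adlerCheckSum >> 16) & 0xFFFF;    // high 16 bits -> 0x1234

uint recombined = (sum2 << 16) + sum1;         // 0x12345678 again
Console.WriteLine("{0:X8} {1:X4} {2:X4} {3:X8}", adlerCheckSum, sum1, sum2, recombined);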
From the adler32 Tag-Wiki:
Adler-32 is a fast checksum algorithm used in zlib to verify the results of decompression. It is composed of two sums modulo 65521. Start with s1 = 1 and s2 = 0, then for each byte x, s1 = s1 + x, s2 = s2 + s1. The two sums are combined into a 32-bit value with s1 in the low 16 bits and s2 in the high 16 bits.
I'm trying to convert 3 bytes to signed integer (Big-endian) in C#.
I've tried to use the BitConverter.ToInt32 method, but my problem is what value the last byte should have.
Can anybody suggest how I can do it a different way?
I also need to convert 5 (or 6 or 7) bytes to a signed long; is there any general rule for how to do it?
Thanks in advance for any help.
As a last resort you could always shift+add yourself:
byte b1, b2, b3;
int r = b1 << 16 | b2 << 8 | b3;
Just swap b1/b2/b3 until you have the desired result.
On second thought, this will never produce negative values.
What result do you want when the msb >= 0x80 ?
Part 2, brute force sign extension:
private static int Bytes2Int(byte b1, byte b2, byte b3)
{
    int r = 0;
    byte b0 = 0xff;
    if ((b1 & 0x80) != 0) r |= b0 << 24;   // sign-extend: fill the top byte with 1s when the msb of b1 is set
    r |= b1 << 16;
    r |= b2 << 8;
    r |= b3;
    return r;
}
I've tested this with:
byte[] bytes = BitConverter.GetBytes(p);
int r = Bytes2Int(bytes[2], bytes[1], bytes[0]);
Console.WriteLine("{0} == {1}", p, r);
for several p.
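If you need the same thing for 5, 6 or 7 bytes into a long, the shift-and-sign-extend idea generalizes. Here is a rough sketch (the method name and the big-endian byte order are my own assumptions, not from the question):

// Interpret `bytes` as a big-endian signed integer of bytes.Length bytes (1..8).
static long BytesToSignedLong(byte[] bytes)
{
    if (bytes.Length < 1 || bytes.Length > 8)
        throw new ArgumentException("1 to 8 bytes expected");

    long result = 0;
    foreach (byte b in bytes)
        result = (result << 8) | b;          // accumulate, most significant byte first

    int usedBits = bytes.Length * 8;
    if (usedBits < 64 && (result & (1L << (usedBits - 1))) != 0)
        result |= -1L << usedBits;           // sign-extend: fill the unused high bits with 1s

    return result;
}

For example, BytesToSignedLong(new byte[] { 0xFF, 0xFF, 0xFF }) returns -1.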
The last byte should be 0 for a positive number and 0xFF (255) for a negative one, so that the sign bits are filled in.
To know what you should pass in, you can try converting it the other way:
var bytes = BitConverter.GetBytes(i);
int x = BitConverter.ToInt32(bytes, 0);
To add to the existing answers here, there's a bit of a gotcha in that BitConverter.ToInt32() will throw an ArgumentException if the array is less than sizeof(int) (4) bytes in size:
Destination array is not long enough to copy all the items in the collection. Check array index and length.
Given an array less than sizeof(int) (4) bytes in size, you can compensate for left/right padding like so:
Right-pad
Results in positive Int32 numbers
int intByteSize = sizeof(int);
byte[] padded = new byte[intByteSize];
Array.Copy(sourceBytes, 0, padded, 0, sourceBytes.Length);
sourceBytes = padded;
Left-pad
Results in negative Int32 numbers, assuming the most significant bit of the last source byte (sourceBytes[sourceBytes.Length - 1]) is set.
int intByteSize = sizeof(int);
byte[] padded = new byte[intByteSize];
Array.Copy(sourceBytes, 0, padded, intByteSize - sourceBytes.Length, sourceBytes.Length);
sourceBytes = padded;
Once padded, you can safely call int myValue = BitConverter.ToInt32(sourceBytes, 0);.
I have 10 bytes - 4 bytes of low order, 4 bytes of high order, 2 bytes of highest order - that I need to convert to an unsigned long. I've tried a couple different methods but neither of them worked:
Try #1:
var id = BitConverter.ToUInt64(buffer, 0);
Try #2:
var id = GetID(buffer, 0);
long GetID(byte[] buffer, int startIndex)
{
    var lowOrderUnitId = BitConverter.ToUInt32(buffer, startIndex);
    var highOrderUnitId = BitConverter.ToUInt32(buffer, startIndex + 4);
    var highestOrderUnitId = BitConverter.ToUInt16(buffer, startIndex + 8);
    return lowOrderUnitId + (highOrderUnitId * 100000000) + (highestOrderUnitId * 10000000000000000);
}
Any help would be appreciated, thanks!
As the comments indicate, 10 bytes will not fit in a long (which is a 64-bit data type - 8 bytes). However, you could use a decimal (which is 128-bits wide - 16 bytes):
var lowOrderUnitId = BitConverter.ToUInt32(buffer, startIndex);
var highOrderUnitId = BitConverter.ToUInt32(buffer, startIndex + 4);
var highestOrderUnitId = BitConverter.ToUInt16(buffer, startIndex + 8);
decimal n = highestOrderUnitId;
n *= 4294967296m;   // 2^32, i.e. shift the accumulated value left by 32 bits
n += highOrderUnitId;
n *= 4294967296m;
n += lowOrderUnitId;
I've not actually tested this, but I think it will work...
As has been mentioned, a ulong isn't large enough to hold 10 bytes of data, it's only 8 bytes. You'd need to use a Decimal. The most efficient way (not to mention least code) would probably be to get a UInt64 out of it first, then add the high-order bits:
ushort high = BitConverter.ToUInt16(buffer, 0);
ulong low = BitConverter.ToUInt64(buffer, 2);
decimal num = (decimal)high * ulong.MaxValue + high + low;
(You need to add high a second time because otherwise you'd need to multiply by the value ulong.MaxValue + 1, and that's a lot of annoying casting and parentheses.)
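If you are on .NET 4 or later, System.Numerics.BigInteger (add a reference to System.Numerics) is another way to sidestep the 8-byte limit entirely. A rough sketch, assuming the buffer holds the 10 bytes least-significant first as the question describes:

using System.Numerics;

static BigInteger GetId(byte[] buffer, int startIndex)
{
    byte[] bytes = new byte[11];                   // one extra zero byte keeps the result non-negative
    Array.Copy(buffer, startIndex, bytes, 0, 10);
    return new BigInteger(bytes);                  // this constructor treats the array as little-endian
}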
I want to convert an int to a byte[2] array using BCD.
The int in question will come from DateTime representing the Year and must be converted to two bytes.
Is there any pre-made function that does this or can you give me a simple way of doing this?
example:
int year = 2010
would output:
byte[2]{0x20, 0x10};
static byte[] Year2Bcd(int year) {
    if (year < 0 || year > 9999) throw new ArgumentException();
    int bcd = 0;
    for (int digit = 0; digit < 4; ++digit) {
        int nibble = year % 10;
        bcd |= nibble << (digit * 4);
        year /= 10;
    }
    return new byte[] { (byte)((bcd >> 8) & 0xff), (byte)(bcd & 0xff) };
}
Beware that you asked for a big-endian result, that's a bit unusual.
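For what it's worth, a quick check (my own, not part of the original answer) of the example from the question:

byte[] bcd = Year2Bcd(2010);
// bcd[0] == 0x20, bcd[1] == 0x10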
Use this method.
public static byte[] ToBcd(int value){
    if(value<0 || value>99999999)
        throw new ArgumentOutOfRangeException("value");
    byte[] ret=new byte[4];
    for(int i=0;i<4;i++){
        ret[i]=(byte)(value%10);
        value/=10;
        ret[i]|=(byte)((value%10)<<4);
        value/=10;
    }
    return ret;
}
This is essentially how it works.
If the value is less than 0 or greater than 99999999, the value won't fit in four bytes. More formally, if the value is less than 0 or is 10^(n*2) or greater, where n is the number of bytes, the value won't fit in n bytes.
For each byte:
Set that byte to the remainder of the value divided by 10. (This will place the last digit in the low nibble [half-byte] of the current byte.)
Divide the value by 10.
Add 16 times the remainder of the value-divided-by-10 to the byte. (This will place the now-last digit in the high nibble of the current byte.)
Divide the value by 10.
(One optimization is to set every byte to 0 beforehand -- which is implicitly done by .NET when it allocates a new array -- and to stop iterating when the value reaches 0. This latter optimization is not done in the code above, for simplicity. Also, if available, some compilers or assemblers offer a divide/remainder routine that allows retrieving the quotient and remainder in one division step, an optimization which is not usually necessary though.)
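As a quick illustration (my own example), note that the four-byte result comes back least-significant digit pair first:

byte[] bcd = ToBcd(2010);
// bcd == { 0x10, 0x20, 0x00, 0x00 }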
Here's a terrible brute-force version. I'm sure there's a better way than this, but it ought to work anyway.
int digitOne = year / 1000;
int digitTwo = (year - digitOne * 1000) / 100;
int digitThree = (year - digitOne * 1000 - digitTwo * 100) / 10;
int digitFour = year - digitOne * 1000 - digitTwo * 100 - digitThree * 10;
byte[] bcdYear = new byte[] { (byte)(digitOne << 4 | digitTwo), (byte)(digitThree << 4 | digitFour) };
The sad part about it is that fast binary to BCD conversions are built into the x86 microprocessor architecture, if you could get at them!
Here is a slightly cleaner version than Jeffrey's:
static byte[] IntToBCD(int input)
{
    if (input > 9999 || input < 0)
        throw new ArgumentOutOfRangeException("input");

    int thousands = input / 1000;
    int hundreds = (input -= thousands * 1000) / 100;
    int tens = (input -= hundreds * 100) / 10;
    int ones = (input -= tens * 10);

    byte[] bcd = new byte[] {
        (byte)(thousands << 4 | hundreds),
        (byte)(tens << 4 | ones)
    };
    return bcd;
}
Maybe a simple parse function containing this loop:
int i = 0;
while (id > 0)
{
    int twoDigits = id % 100;                              // need 2 digits per byte
    arr[i] = (byte)(twoDigits % 10 + twoDigits / 10 * 16); // first digit in the low 4 bits, second digit shifted up 4 bits
    id /= 100;
    i++;
}
A more general solution:
private IEnumerable<Byte> GetBytes(Decimal value)
{
    Byte currentByte = 0;
    Boolean odd = true;
    while (value > 0)
    {
        if (odd)
            currentByte = 0;
        Decimal rest = value % 10;
        value = (value - rest) / 10;
        currentByte |= (Byte)(odd ? (Byte)rest : (Byte)((Byte)rest << 4));
        if (!odd)
            yield return currentByte;
        odd = !odd;
    }
    if (!odd)
        yield return currentByte;
}
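Used on the year from the question (my own example; note the digit pairs come out least-significant first, and ToArray() needs System.Linq):

byte[] bcd = GetBytes(2010m).ToArray();
// bcd == { 0x10, 0x20 }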
Same version as Peter O. but in VB.NET
Public Shared Function ToBcd(ByVal pValue As Integer) As Byte()
    If pValue < 0 OrElse pValue > 99999999 Then Throw New ArgumentOutOfRangeException("value")
    Dim ret As Byte() = New Byte(3) {} 'All bytes are init with 0's
    For i As Integer = 0 To 3
        ret(i) = CByte(pValue Mod 10)
        pValue = Math.Floor(pValue / 10.0)
        ret(i) = ret(i) Or CByte((pValue Mod 10) << 4)
        pValue = Math.Floor(pValue / 10.0)
        If pValue = 0 Then Exit For
    Next
    Return ret
End Function
The trick here is to be aware that simply using pValue /= 10 will round the value (VB's / operator does floating-point division), so if for instance the argument is 16, the first nibble of the byte will be correct, but the result of the division will be 2 (1.6 is rounded up). Therefore I use the Math.Floor method; VB's integer-division operator \ would avoid the issue as well.
I made a generic routine posted at IntToByteArray that you could use like:
var yearInBytes = ConvertBigIntToBcd(2010, 2);
static byte[] IntToBCD(int input) {
    // Pack the four decimal digits of input (0..9999) into a 16-bit BCD value first.
    int bcdValue = 0;
    for (int shift = 0; shift < 16; shift += 4) {
        bcdValue |= (input % 10) << shift;
        input /= 10;
    }
    byte[] bcd = new byte[] {
        (byte)(bcdValue >> 8),
        (byte)(bcdValue & 0x00FF)
    };
    return bcd;
}
Assumption: Converting a byte[] from Little Endian to Big Endian means inverting the order of the bits in each byte of the byte[].
Assuming this is correct, I tried the following to understand this:
byte[] data = new byte[] { 1, 2, 3, 4, 5, 15, 24 };
byte[] inverted = ToBig(data);

var little = new BitArray(data);
var big = new BitArray(inverted);

int i = 1;
foreach (bool b in little)
{
    Console.Write(b ? "1" : "0");
    if (i == 8)
    {
        i = 0;
        Console.Write(" ");
    }
    i++;
}
Console.WriteLine();

i = 1;
foreach (bool b in big)
{
    Console.Write(b ? "1" : "0");
    if (i == 8)
    {
        i = 0;
        Console.Write(" ");
    }
    i++;
}
Console.WriteLine();

Console.WriteLine(BitConverter.ToString(data));
Console.WriteLine(BitConverter.ToString(ToBig(data)));

foreach (byte b in data)
{
    Console.Write("{0} ", b);
}
Console.WriteLine();

foreach (byte b in inverted)
{
    Console.Write("{0} ", b);
}
The convert method:
private static byte[] ToBig(byte[] data)
{
    byte[] inverted = new byte[data.Length];
    for (int i = 0; i < data.Length; i++)
    {
        var bits = new BitArray(new byte[] { data[i] });
        var invertedBits = new BitArray(bits.Count);
        int x = 0;
        for (int p = bits.Count - 1; p >= 0; p--)
        {
            invertedBits[x] = bits[p];
            x++;
        }
        invertedBits.CopyTo(inverted, i);
    }
    return inverted;
}
The output of this little application is different from what I expected:
00000001 00000010 00000011 00000100 00000101 00001111 00011000
00000001 00000010 00000011 00000100 00000101 00001111 00011000
80-40-C0-20-A0-F0-18
01-02-03-04-05-0F-18
1 2 3 4 5 15 24
1 2 3 4 5 15 24
For some reason the data remains the same, unless printed using BitConverter.
What am I not understanding?
Update
New code produces the following output:
10000000 01000000 11000000 00100000 10100000 11110000 00011000
00000001 00000010 00000011 00000100 00000101 00001111 00011000
01-02-03-04-05-0F-18
80-40-C0-20-A0-F0-18
1 2 3 4 5 15 24
128 64 192 32 160 240 24
But as I have been told now, my method is incorrect anyway because I should invert the bytes
and not the bits?
This hardware developer I'm working with told me to invert the bits because he cannot read the data.
Context where I'm using this
The application that will use this does not really work with numbers.
I'm supposed to save a stream of bits to a file where
1 = white and 0 = black.
They represent pixels of a bitmap 256x64.
byte 0 to byte 31 represents the first row of pixels
byte 32 to byte 63 the second row of pixels.
I have code that outputs these bits... but the developer is telling
me they are in the wrong order... He says the bytes are fine but the bits are not.
So I'm left confused :p
No. Endianness refers to the order of bytes, not bits. Big endian systems store the most-significant byte first and little-endian systems store the least-significant first. The bits within a byte remain in the same order.
Your ToBig() function is returning the original data rather than the bit-swapped data, it seems.
Your method may be correct at this point. There are different meanings of endianness, and it depends on the hardware.
Typically, it's used for converting between computing platforms. Most CPU vendors (now) use the same bit ordering, but different byte ordering, for different chipsets. This means, that, if you are passing a 2-byte int from one system to another, you leave the bits alone, but swap bytes 1 and 2, ie:
int somenumber -> byte[2]: somenumber[high],somenumber[low] ->
byte[2]: somenumber[low],somenumber[high] -> int newNumber
However, this isn't always true. Some hardware still uses inverted BIT ordering, so what you have may be correct. You'll need to either trust your hardware dev. or look into it further.
I recommend reading up on this on Wikipedia - always a great source of info:
http://en.wikipedia.org/wiki/Endianness
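To make the byte-swap (as opposed to bit-swap) case concrete, here is a minimal sketch for a 2-byte value (the variable names are mine):

ushort someNumber = 0x1234;

byte low  = (byte)(someNumber & 0xFF);          // 0x34
byte high = (byte)((someNumber >> 8) & 0xFF);   // 0x12

// Swapping endianness only changes the order of the bytes;
// the bits inside each byte are untouched.
byte[] swapped = { high, low };                                // { 0x12, 0x34 } instead of { 0x34, 0x12 }
ushort newNumber = (ushort)(swapped[0] | (swapped[1] << 8));   // reads back as 0x3412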
Your ToBig method has a bug.
At the end:
invertedBits.CopyTo(data, i);
}
return data;
You need to change that to:
byte[] newData = new byte[data.Length];
invertedBits.CopyTo(newData, i);
}
return newData;
You're overwriting your input data, so you end up with both arrays inverted. The problem is that arrays are reference types, so the method can modify the original array.
As greyfade already said, endianness is not about bit ordering.
The reason that your code doesn't do what you expect, is that the ToBig method changes the array that you send to it. That means that after calling the method the array is inverted, and data and inverted are just two references pointing to the same array.
Here's a corrected version of the method.
private static byte[] ToBig(byte[] data) {
    byte[] result = new byte[data.Length];
    for (int i = 0; i < data.Length; i++) {
        var bits = new BitArray(new byte[] { data[i] });
        var invertedBits = new BitArray(bits.Count);
        int x = 0;
        for (int p = bits.Count - 1; p >= 0; p--) {
            invertedBits[x] = bits[p];
            x++;
        }
        invertedBits.CopyTo(result, i);
    }
    return result;
}
Edit:
Here's a method that changes endianness for a byte array:
static byte[] ConvertEndianness(byte[] data, int wordSize) {
    if (data.Length % wordSize != 0)
        throw new ArgumentException("The data length does not divide into an even number of words.");

    byte[] result = new byte[data.Length];
    int offset = wordSize - 1;
    for (int i = 0; i < data.Length; i++) {
        result[i + offset] = data[i];
        offset -= 2;
        if (offset < -wordSize) {
            offset += wordSize * 2;
        }
    }
    return result;
}
Example:
byte[] data = { 1,2,3,4,5,6 };
byte[] inverted = ConvertEndianness(data, 2);
Console.WriteLine(BitConverter.ToString(inverted));
Output:
02-01-04-03-06-05
The second parameter is the word size. As endianness is the ordering of bytes in a word, you have to specify how large the words are.
Edit 2:
Here is a more efficient method for reversing the bits:
static byte[] ReverseBits(byte[] data) {
    byte[] result = new byte[data.Length];
    for (int i = 0; i < data.Length; i++) {
        int b = data[i];
        int r = 0;
        for (int j = 0; j < 8; j++) {
            r <<= 1;
            r |= b & 1;
            b >>= 1;
        }
        result[i] = (byte)r;
    }
    return result;
}
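Running it over the question's input (my own check) reproduces the bit-reversed pattern seen earlier:

byte[] data = { 1, 2, 3, 4, 5, 15, 24 };
Console.WriteLine(BitConverter.ToString(ReverseBits(data)));   // 80-40-C0-20-A0-F0-18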
One big problem I see is ToBig changes the contents of the data[] array that is passed to it.
You're calling ToBig on an array named data and assigning the result to inverted, but since you didn't create a new array inside ToBig, you modified both arrays. You then proceed to treat data and inverted as different arrays when in reality they are not.