Number formation for byte conversion - C#

How do I form a number that can then be converted to a byte[] array?
I know how to convert the resulting value to a byte[]; the problem is how to form the intermediate number.
This is my data:
int packet = 10;
int value = 20;
long data = 02; // This will take 3 bytes [Last 3 Bytes]
I need the long value; through it I can shift and fill the byte array like this:
Byte[0] = 10;
Byte[1] = 20;
Byte[2] = 00;
Byte[3] = 00;
Byte[4] = 02;
Bytes 2, 3 and 4 are the data,
but forming the intermediate value is the problem. How do I form it?
Here is a sample:
long data = 683990319104; // I refer to this as the intermediate value.
This is the number I receive from a built-in application.
byte[] by = new byte[5];
for (int i = 0; i < 5; i++)
{
    by[i] = (byte)(data & 0xFF);
    data >>= 8;
}
Here:
Byte[0] = 00;
Byte[1] = 00; // bytes 0, 1, 2 are the data (i.e. data = 0)
Byte[2] = 00;
Byte[3] = 65; // byte 3 is the value (i.e. value = 65)
Byte[4] = 159; // byte 4 is the packet (i.e. packet = 159)
This is one sample.
Currently BitConverter.GetBytes(..) is used on the receiving side; while sending, the parameter is a long.
So I want the formula that generates the number 683990319104 from
packet = 159
value = 65
data = 0
I think it's in an understandable format now :)

Not entirely sure what you are asking exactly, but I think you are looking for BitConverter.GetBytes(data).
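For illustration, a minimal sketch; note that the byte order of BitConverter depends on the machine, and the output shown assumes little-endian:
long data = 683990319104;
byte[] raw = BitConverter.GetBytes(data); // 8 bytes, least significant byte first on little-endian systems
Console.WriteLine(BitConverter.ToString(raw)); // 00-00-00-41-9F-00-00-00
Note that 0x41 = 65 (the value) and 0x9F = 159 (the packet) land exactly where the question's shifting loop put them.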

The use of 3 bytes to define the long seems unusual; if it is just the 3 bytes... why a long? why not an int?
For example (note I've had to make assumptions about your byte-trimming based on your example - you won't have the full int/long range...):
static void Main() {
    int packet = 10;
    int value = 20;
    long data = 02;
    byte[] buffer = new byte[5];
    WritePacket(buffer, 0, packet, value, data);
    for (int i = 0; i < buffer.Length; i++)
    {
        Console.Write(buffer[i].ToString("x2"));
    }
    Console.WriteLine();
    ReadPacket(buffer, 0, out packet, out value, out data);
    Console.WriteLine(packet);
    Console.WriteLine(value);
    Console.WriteLine(data);
}
static void WritePacket(byte[] buffer, int offset, int packet,
    int value, long data)
{
    // note I'm trimming as per the original question
    buffer[offset++] = (byte)packet;
    buffer[offset++] = (byte)value;
    int tmp = (int)(data); // more efficient to work with int, and
                           // we're going to lose the MSB anyway...
    buffer[offset++] = (byte)(tmp >> 16);
    buffer[offset++] = (byte)(tmp >> 8);
    buffer[offset] = (byte)(tmp);
}
static void ReadPacket(byte[] buffer, int offset, out int packet,
    out int value, out long data)
{
    // note I'm trimming as per the original question
    packet = buffer[offset++];
    value = buffer[offset++];
    data = ((int)buffer[offset++] << 16)
         | ((int)buffer[offset++] << 8)
         | (int)buffer[offset];
}

Oooh, it's simple:
int packet = 159;
int value = 65;
int data = 0;
long address = packet;
address = address << 8;
address = address | value;
address = address << 24;
address = address | data;
Now the value of address is 683990319104, which is what I termed the intermediate value. Let me know the exact term.
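Combining fields with shifts and ORs like this is commonly called bit packing; the combined number is often just called the packed value. A minimal round-trip sketch (Pack/Unpack are hypothetical helper names, not part of any API), with packet in bits 32-39, value in bits 24-31 and data in bits 0-23:
static long Pack(int packet, int value, int data)
{
    // hypothetical helper; layout: [packet:8][value:8][data:24], 40 bits of the long used
    return ((long)(byte)packet << 32)
         | ((long)(byte)value << 24)
         | (uint)(data & 0xFFFFFF);
}

static void Unpack(long address, out int packet, out int value, out int data)
{
    // hypothetical helper; inverse of Pack
    data = (int)(address & 0xFFFFFF);
    value = (int)((address >> 24) & 0xFF);
    packet = (int)((address >> 32) & 0xFF);
}
Pack(159, 65, 0) returns 683990319104, matching the sample above.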

Related

Equivalent C code in C#. (uint8_t *) cast?

I am trying to convert this piece of C code into C#, but I've stumbled on some parts where I don't understand exactly what is happening, so I am unable to translate it.
void foo(uint64_t *output, uint64_t *input, uint32_t Length)
{
    uint64_t st[25];
    memcpy(st, input, Length);
    ((uint8_t *)st)[Length] = 0x01;
    memset(((uint8_t *)st) + Length + 1, 0x00, 128 - Length - 1);
    for(int i = 16; i < 25; ++i) st[i] = 0x00UL;
    // Last bit of padding
    st[16] = 0x8000000000000000UL;
    bar(st);
    memcpy(output, st, 200);
}
More specifically, the ((uint8_t *)st)[Length] = 0x01; part. I cannot understand the cast/pointer in this line. What is the * there for? If somebody could explain what is happening, I would be grateful.
What I got so far in C#:
private void foo(ref ulong[] output, ref ulong[] input, uint Length)
{
    ulong[] st = new ulong[25];
    //memcpy(st, input, Length);
    Buffer.BlockCopy(input, 0, st, 0, (int)Length);
    // Help with this line please:
    //((uint8_t *)st)[Length] = 0x01;
    // Still don't know what to do here either:
    //memset(((uint8_t *)st) + Length + 1, 0x00, 128 - Length - 1);
    for (int i = 16; i < 25; ++i)
    {
        st[i] = 0x00U;
    }
    // Last bit of padding
    st[16] = 0x8000000000000000U;
    bar(st);
    //memcpy(output, st, 200);
    Buffer.BlockCopy(st, 0, output, 0, 200);
}
Thank you.
What your function does is rather primitive memory manipulation:
void foo(uint64_t *output, uint64_t *input, uint32_t Length)
{
uint64_t st[25];
Your function takes a pointer to up to 16 elements' worth of uint64_t input and prepares a larger local buffer (st) to store a copy of them.
memcpy(st, input, Length);
((uint8_t *)st)[Length] = 0x01;
memset(((uint8_t *)st) + Length + 1, 0x00, 128 - Length - 1);
Make a copy of the input array into st, write a byte with value 1 right after the copied Length bytes, and zero-fill the rest up to byte 128. The (uint8_t *)st cast simply reinterprets the uint64_t buffer as a plain byte array so it can be addressed byte by byte; that is what the * in the cast is for.
for(int i = 16; i < 25; ++i) st[i] = 0x00UL;
The remaining 9 elements of the buffer (st[16] through st[24]) are cleared to 0. Writing st[16] here is pointless as it will be changed again by the next line...
// Last bit of padding
st[16] = 0x8000000000000000UL;
Now we have up to 16*8 bytes from the input buffer, one extra byte with value 1 right after them, the element st[16] holding 1 << 63, and 0 bytes everywhere else.
bar(st);
memcpy(output, st, 200);
}
Do something we have no clue about (bar) and copy the resulting 25 elements (200 bytes) into the output buffer.
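To translate this to safe C#, one option is to do the byte-addressed work in a byte[] first and only then copy into the ulong[]. Here is a minimal sketch, assuming a little-endian platform (so Buffer.BlockCopy sees the same byte layout as the C cast) and Length < 128; Foo and Bar are placeholder names standing in for the poster's methods:
private static void Foo(ulong[] output, ulong[] input, uint length)
{
    // do the byte-level part in a byte buffer (25 * 8 = 200 bytes)
    byte[] bytes = new byte[200];
    Buffer.BlockCopy(input, 0, bytes, 0, (int)length); // memcpy(st, input, Length)
    bytes[length] = 0x01;                              // ((uint8_t *)st)[Length] = 0x01
    // the memset to 0x00 is implicit: a new C# array is already zero-filled

    ulong[] st = new ulong[25];
    Buffer.BlockCopy(bytes, 0, st, 0, 128);  // reassemble the first 16 elements
    st[16] = 0x8000000000000000UL;           // last bit of padding

    Bar(st);                                 // placeholder for whatever bar() does
    Buffer.BlockCopy(st, 0, output, 0, 200); // memcpy(output, st, 200)
}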

Calculate CRC8 of data received on serial port in C#

I am receiving data from the serial port into a byte array.
How can I calculate the checksum of the data, excluding the sync byte (54) and the checksum byte (F2), and match it against that last checksum byte?
Updated:
int bytes = comport.BytesToRead;
int sumCRC = 0;
byte checksumCRC = 0;
byte checksum;
byte[] RXBuffer = new byte[bytes];
comport.Read(RXBuffer, 0, bytes);
checksum = RXBuffer.Last();
for (int indexCRC = 1; indexCRC < RXBuffer.Length; indexCRC++)
{
    sumCRC = sumCRC + RXBuffer[indexCRC];
}
checksumCRC = (byte)sumCRC;
Start the index from 1, but before doing that drop the last element of the array (the checksum byte) and store the rest in another array, something like this:
int Secbytes = comport.BytesToRead;
byte[] SecRXBuffer = new byte[Secbytes];
Array.Copy(RXBuffer, VanguardConstants.RECEIVEINDEX, SecRXBuffer, 0, Secbytes);
// drop the trailing checksum byte
byte[] tmp = new byte[Secbytes - 1];
Array.Copy(SecRXBuffer, tmp, Secbytes - 1);
byte Sum = 0;
for (int i = 1; i < tmp.Length; i++) // start at 1 to skip the sync byte
{
    Sum = (byte)(Sum + tmp[i]);
}
byte Checksum = Sum;
http://err.se/crc8-for-ibutton-in-c/
Here is an implementation of a CRC8 function in C#.
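For reference, here is a minimal sketch of the Dallas/Maxim-style CRC8 used by iButton devices (reflected polynomial 0x8C, i.e. x^8 + x^5 + x^4 + 1); verify it against the linked article and your device's protocol before relying on it:
static byte Crc8(byte[] data, int offset, int count)
{
    // Dallas/Maxim-style CRC8 (reflected polynomial 0x8C); assumed, not taken from the question
    byte crc = 0;
    for (int i = offset; i < offset + count; i++)
    {
        byte b = data[i];
        for (int bit = 0; bit < 8; bit++)
        {
            bool mix = ((crc ^ b) & 0x01) != 0;
            crc >>= 1;
            if (mix)
                crc ^= 0x8C;
            b >>= 1;
        }
    }
    return crc;
}
On a frame like the one in the question, you would run it over everything between the sync byte and the trailing checksum byte, e.g. Crc8(RXBuffer, 1, RXBuffer.Length - 2), and compare the result with RXBuffer.Last().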

How to do a generic conversion from 32/24-bit bytes to 16-bit bytes

I have been searching for a solution for two days.
I want to convert my 32- or 24-bit WAV data to 16-bit.
This is my code (after reading a few Stack Overflow topics):
byte[] data = Convert.FromBase64String("-- Wav String encoded --"); // 32 or 24 bit samples
int conv = Convert.ToInt16(data);
byte[] intBytes = BitConverter.GetBytes(conv);
if (BitConverter.IsLittleEndian)
    Array.Reverse(intBytes);
byte[] result = intBytes;
But when I WriteAllBytes the result, there is nothing to hear...
Here is a method that cuts off the least significant bytes of each sample:
static byte[] To16Bit(byte[] data, int bitsPerSample) // To16Bit is a made-up name; pass 32 or 24
{
    int skipBytes;
    int samples;
    if (bitsPerSample == 32)
    {
        skipBytes = 2;
        samples = data.Length / 4;
    }
    else if (bitsPerSample == 24)
    {
        skipBytes = 1;
        samples = data.Length / 3;
    }
    else
    {
        throw new ArgumentException("Expected 24 or 32 bits per sample.");
    }
    byte[] data16bit = new byte[samples * 2];
    int writeIndex = 0;
    int readIndex = 0;
    for (var i = 0; i < samples; ++i)
    {
        readIndex += skipBytes; // skip the least significant bytes
        // read the two most significant bytes
        data16bit[writeIndex++] = data[readIndex++];
        data16bit[writeIndex++] = data[readIndex++];
    }
    return data16bit;
}
This assumes a little endian byte order (least significant byte is the first byte, usual for WAV RIFF). If you have big endian, you have to put the readIndex += ... after the two read lines.
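That is, for big-endian samples the loop body would read first and skip afterwards, along these lines:
// big-endian: the two most significant bytes come first
data16bit[writeIndex++] = data[readIndex++];
data16bit[writeIndex++] = data[readIndex++];
readIndex += skipBytes; // then skip the least significant bytes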
You could implement your own conversion iterator for this task like so:
IEnumerable<byte> ConvertTo16Bit(byte[] data, int skipBytes)
{
    int bytesToRead = 0;
    int bytesToSkip = skipBytes;
    int readIndex = 0;
    while (readIndex < data.Length)
    {
        if (bytesToSkip > 0)
        {
            readIndex += bytesToSkip;
            bytesToSkip = 0;
            bytesToRead = 2;
            continue;
        }
        if (bytesToRead == 0)
        {
            bytesToSkip = skipBytes;
            continue;
        }
        yield return data[readIndex++];
        bytesToRead--;
    }
}
This way you don't have to create a new array if there is no need for it. And you could simply convert the data array to a new 16 bit array with the IEnumerable<T> extension methods:
var data16bit = ConvertTo16Bit(data, 1).ToArray();
Or if you don't need the array, you can iterate the data skipping the least significant bytes:
foreach (var b in ConvertTo16Bit(data, 1))
{
    Console.WriteLine(b);
}

Represent a Guid as a set of integers

If I want to represent a guid as a set of integers how would I handle the conversion? I'm thinking along the lines of getting the byte array representation of the guid and breaking it up into the fewest possible 32 bit integers that can be converted back into the original guid. Code examples preferred...
Also, what will the length of the resulting integer array be?
As a GUID is just 16 bytes, you can convert it to four integers:
Guid id = Guid.NewGuid();
byte[] bytes = id.ToByteArray();
int[] ints = new int[4];
for (int i = 0; i < 4; i++) {
    ints[i] = BitConverter.ToInt32(bytes, i * 4);
}
Converting back is just getting the bytes of each integer and putting them together:
byte[] bytes = new byte[16];
for (int i = 0; i < 4; i++) {
    Array.Copy(BitConverter.GetBytes(ints[i]), 0, bytes, i * 4, 4);
}
Guid id = new Guid(bytes);
System.Guid guid = System.Guid.NewGuid();
byte[] guidArray = guid.ToByteArray();
// condition
System.Diagnostics.Debug.Assert(guidArray.Length % sizeof(int) == 0);
int[] intArray = new int[guidArray.Length / sizeof(int)];
System.Buffer.BlockCopy(guidArray, 0, intArray, 0, guidArray.Length);
byte[] guidOutArray = new byte[guidArray.Length];
System.Buffer.BlockCopy(intArray, 0, guidOutArray, 0, guidOutArray.Length);
System.Guid guidOut = new System.Guid(guidOutArray);
// check
System.Diagnostics.Debug.Assert(guidOut == guid);
Somehow I had much more fun doing it this way:
byte[] bytes = guid.ToByteArray();
int[] ints = new int[bytes.Length / sizeof(int)];
for (int i = 0; i < bytes.Length; i++) {
    ints[i / sizeof(int)] = ints[i / sizeof(int)] | (bytes[i] << 8 * ((sizeof(int) - 1) - (i % sizeof(int))));
}
and converting back:
byte[] bytesAgain = new byte[ints.Length * sizeof(int)];
for (int i = 0; i < bytesAgain.Length; i++) {
    bytesAgain[i] = (byte)((ints[i / sizeof(int)] & (byte.MaxValue << 8 * ((sizeof(int) - 1) - (i % sizeof(int))))) >> 8 * ((sizeof(int) - 1) - (i % sizeof(int))));
}
Guid guid2 = new Guid(bytesAgain);
Will the built-in Guid structure not suffice?
Constructor:
public Guid(byte[] b)
And
public byte[] ToByteArray()
which returns a 16-element byte array that contains the value of this instance.
Packing the bytes into integers and vice versa should be trivial.
A Guid is typically just a 128-bit number.
-- Edit
So in C#, you can get the 16 bytes via
byte[] b = Guid.NewGuid().ToByteArray();

Why do I get the following output when inverting bits in a byte?

Assumption: converting a byte[] from Little Endian to Big Endian means inverting the order of the bits in each byte of the byte[].
Assuming this is correct, I tried the following to understand this:
byte[] data = new byte[] { 1, 2, 3, 4, 5, 15, 24 };
byte[] inverted = ToBig(data);
var little = new BitArray(data);
var big = new BitArray(inverted);
int i = 1;
foreach (bool b in little)
{
    Console.Write(b ? "1" : "0");
    if (i == 8)
    {
        i = 0;
        Console.Write(" ");
    }
    i++;
}
Console.WriteLine();
i = 1;
foreach (bool b in big)
{
    Console.Write(b ? "1" : "0");
    if (i == 8)
    {
        i = 0;
        Console.Write(" ");
    }
    i++;
}
Console.WriteLine();
Console.WriteLine(BitConverter.ToString(data));
Console.WriteLine(BitConverter.ToString(ToBig(data)));
foreach (byte b in data)
{
    Console.Write("{0} ", b);
}
Console.WriteLine();
foreach (byte b in inverted)
{
    Console.Write("{0} ", b);
}
The convert method:
private static byte[] ToBig(byte[] data)
{
    for (int i = 0; i < data.Length; i++)
    {
        var bits = new BitArray(new byte[] { data[i] });
        var invertedBits = new BitArray(bits.Count);
        int x = 0;
        for (int p = bits.Count - 1; p >= 0; p--)
        {
            invertedBits[x] = bits[p];
            x++;
        }
        invertedBits.CopyTo(data, i);
    }
    return data;
}
The output of this little application is different from what I expected:
00000001 00000010 00000011 00000100 00000101 00001111 00011000
00000001 00000010 00000011 00000100 00000101 00001111 00011000
80-40-C0-20-A0-F0-18
01-02-03-04-05-0F-18
1 2 3 4 5 15 24
1 2 3 4 5 15 24
For some reason the data remains the same, unless printed using BitConverter.
What am I not understanding?
Update
New code produces the following output:
10000000 01000000 11000000 00100000 10100000 11110000 00011000
00000001 00000010 00000011 00000100 00000101 00001111 00011000
01-02-03-04-05-0F-18
80-40-C0-20-A0-F0-18
1 2 3 4 5 15 24
128 64 192 32 160 240 24
But as I have been told now, my method is incorrect anyway because I should invert the bytes
and not the bits?
This hardware developer I'm working with told me to invert the bits because he cannot read the data.
Context where I'm using this
The application that will use this does not really work with numbers.
I'm supposed to save a stream of bits to file where
1 = white and 0 = black.
They represent pixels of a bitmap 256x64.
byte 0 to byte 31 represents the first row of pixels
byte 32 to byte 63 the second row of pixels.
I have code that outputs these bits... but the developer is telling
me they are in the wrong order... He says the bytes are fine but the bits are not.
So I'm left confused :p
No. Endianness refers to the order of bytes, not bits. Big endian systems store the most-significant byte first and little-endian systems store the least-significant first. The bits within a byte remain in the same order.
Your ToBig() function is returning the original data rather than the bit-swapped data, it seems.
Your method may be correct at this point. There are different meanings of endianness, and it depends on the hardware.
Typically, it's used for converting between computing platforms. Most CPU vendors (now) use the same bit ordering, but different byte ordering, for different chipsets. This means, that, if you are passing a 2-byte int from one system to another, you leave the bits alone, but swap bytes 1 and 2, ie:
int somenumber -> byte[2]: somenumber[high],somenumber[low] ->
byte[2]: somenumber[low],somenumber[high] -> int newNumber
However, this isn't always true. Some hardware still uses inverted BIT ordering, so what you have may be correct. You'll need to either trust your hardware dev. or look into it further.
I recommend reading up on this on Wikipedia - always a great source of info:
http://en.wikipedia.org/wiki/Endianness
Your ToBig method has a bug.
At the end:
invertedBits.CopyTo(data, i);
}
return data;
You need to change that to:
byte[] newData = new byte[data.Length]; // allocate this before the loop
invertedBits.CopyTo(newData, i);
}
return newData;
You're resetting your input data, so you're receiving both arrays inverted. The problem is that arrays are reference types, so you can modify the original data.
As greyfade already said, endianness is not about bit ordering.
The reason that your code doesn't do what you expect, is that the ToBig method changes the array that you send to it. That means that after calling the method the array is inverted, and data and inverted are just two references pointing to the same array.
Here's a corrected version of the method.
private static byte[] ToBig(byte[] data) {
    byte[] result = new byte[data.Length];
    for (int i = 0; i < data.Length; i++) {
        var bits = new BitArray(new byte[] { data[i] });
        var invertedBits = new BitArray(bits.Count);
        int x = 0;
        for (int p = bits.Count - 1; p >= 0; p--) {
            invertedBits[x] = bits[p];
            x++;
        }
        invertedBits.CopyTo(result, i);
    }
    return result;
}
Edit:
Here's a method that changes endianness for a byte array:
static byte[] ConvertEndianness(byte[] data, int wordSize) {
    if (data.Length % wordSize != 0)
        throw new ArgumentException("The data length does not divide into an even number of words.");
    byte[] result = new byte[data.Length];
    int offset = wordSize - 1;
    for (int i = 0; i < data.Length; i++) {
        result[i + offset] = data[i];
        offset -= 2;
        if (offset < -wordSize) {
            offset += wordSize * 2;
        }
    }
    return result;
}
Example:
byte[] data = { 1, 2, 3, 4, 5, 6 };
byte[] inverted = ConvertEndianness(data, 2);
Console.WriteLine(BitConverter.ToString(inverted));
Output:
02-01-04-03-06-05
The second parameter is the word size. As endianness is the ordering of bytes in a word, you have to specify how large the words are.
Edit 2:
Here is a more efficient method for reversing the bits:
static byte[] ReverseBits(byte[] data) {
    byte[] result = new byte[data.Length];
    for (int i = 0; i < data.Length; i++) {
        int b = data[i];
        int r = 0;
        for (int j = 0; j < 8; j++) {
            r <<= 1;
            r |= b & 1;
            b >>= 1;
        }
        result[i] = (byte)r;
    }
    return result;
}
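Applied to the question's input, this reproduces exactly the bit-reversed bytes the hardware developer expects:
byte[] data = { 1, 2, 3, 4, 5, 15, 24 };
Console.WriteLine(BitConverter.ToString(ReverseBits(data))); // 80-40-C0-20-A0-F0-18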
One big problem I see is ToBig changes the contents of the data[] array that is passed to it.
You're calling ToBig on an array named data and assigning the result to inverted, but since you didn't create a new array inside ToBig, you modified both arrays. You then proceed to treat data and inverted as different arrays when in reality they are not.
