Identifying socket messages - C#

I have a code snippet below that processes a socket message, and I would like to know what message must be sent so that the code does not return early.
SocketPacket is a class that stores the received socket, DataLength is the length of the received message, and dataBuffer stores the message itself.
int num3;
byte num6 = 0;
SocketPacket workSocket;
int DataLength;

if (workSocket.dataBuffer[0] == 0x33)
{
    if (DataLength < 0xbb)
    {
        return false;
    }
    for (num3 = 0; num3 < 0xba; num3++)
    {
        num6 = (byte)(num6 + workSocket.dataBuffer[num3]);
    }
    // how to get past this if condition??
    if (num6 != workSocket.dataBuffer[0xba])
    {
        return false;
    }
}
So:
What would be the message to send to the server to get past the last if condition? (According to my understanding, the message should be at least 187 bytes long and the first character should be "3:.........................".)
What are 0xba, 0x33, 0xbb, etc.? Hexadecimals? How should I construct the input message? Convert these to ASCII? Or decimal? It doesn't make any sense to me.
I tried to convert workSocket.dataBuffer[0] (or any other index) to a readable string. Convert.ToChar(workSocket.dataBuffer[0]) and workSocket.dataBuffer[0].ToString() give different results. Why is that?

Well, what you have there is a fixed-length message (a 187-byte message). The first byte is a marker identifying the beginning of the message, so if the first byte is not 0x33 your code doesn't process the bytes in the buffer.
Next, in the for statement you have a checksum. It adds up the first 186 bytes so the result can be compared with the last byte (the precalculated checksum). This verifies the message is intact (and is redundant, by the way, because the transport protocol already guarantees the stream/datagram arrives intact).
So, about your questions:
What would be the message to send to the server such to get pass the last if condition?
Well, you need to send a 187-byte message (simply a byte[187]): the first byte has to be 0x33, then comes the content, and the last byte has to be the checksum (calculated the same way your snippet shows it):

[ 0x33 | THE CONTENT | CHKSUM ]
  byte 0  bytes 1..185  byte 186
For example, the following buffer holds a valid message (one that will pass the if condition). It simply begins with the marker byte (0x33) and the next 185 bytes are zero (I didn't assign values), so the checksum is 0x33 + 0 + 0 + ... + 0 = 0x33:
var buffer = new byte[187];
buffer[0] = 0x33;
buffer[186] = 0x33;
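For a message with non-zero content, compute the checksum the same way the receiving snippet does: sum the first 186 bytes and store the truncated result in the last byte. A minimal sketch (the ASCII content here is an arbitrary example, since the real format is unknown):

// Build a 187-byte message: marker + 185 content bytes + checksum
var message = new byte[187];
message[0] = 0x33;  // marker byte

// Fill in some content (what these bytes mean depends on the protocol;
// this ASCII payload is just a placeholder)
byte[] content = System.Text.Encoding.ASCII.GetBytes("hello");
Array.Copy(content, 0, message, 1, content.Length);

// Checksum: sum of the first 186 bytes, truncated to a byte
byte checksum = 0;
for (int i = 0; i < 186; i++)
{
    checksum = (byte)(checksum + message[i]);
}
message[186] = checksum;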
What are 0xba, 0x33, 0xbb, etc.? Hexadecimals?
Yes, they are just numbers written in hexadecimal: 0x33 is decimal 51 (which is also the character code for '3'), 0xba is 186, and 0xbb is 187.
I tried to convert the bytes to a readable string … Convert.ToChar and ToString give different results. Why is that?
Sockets send/receive bytes (just numbers), but the real question is: why do you assume they have to be text? Probably they are text, yes, but who knows. That is part of the agreement (the protocol) that both endpoints share and that allows them to exchange data. So you have to know what those 185 bytes (187, minus 1 marker byte, minus 1 checksum byte) mean in order to be able to process them.
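As for Convert.ToChar versus ToString: the first reinterprets the byte's numeric value as a character code, while the second formats the numeric value as decimal text, so the two will rarely agree. A quick illustration:

byte b = 0x33;               // decimal 51
char c = Convert.ToChar(b);  // '3'  (51 is the character code for '3')
string s = b.ToString();     // "51" (the number rendered as decimal text)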
Now, what you are doing is reverse engineering a protocol: it is clear you don't know the message format, and I guess you don't know what the content means either. Even if you are right and the content is just text, you still don't know the encoding used. Those are the things you need to focus on.
I hope this helps you.

Related

Indicating the end of a raw data chunk in an RLE algorithm that can contain all byte values

I'm writing an RLE algorithm in C# that can work on any file as input. The approach to encoding I'm taking is as follows:
An RLE packet contains 1 byte for the length and 1 byte for the value. For example, if the byte 0xFF appeared 3 times in a row, 0x03 0xFF would be written to the file.
If representing the data as raw data would be more efficient, I use 0x00 as a terminator. This works because the length of a packet can never be zero. If I wanted to add the bytes 0x53 0x2C 0x01 to my compressed file it would look like this:
0x03 0xFF 0x00 0x53 0x2C 0x01
However, a problem arises when trying to switch back to RLE packets. I can't use a byte as a terminator like I did for switching to raw data, because any byte value from 0x00 to 0xFF can appear in the input data, and the decoder would misinterpret that byte as a terminator and ruin everything.
What can I do to indicate that I have to switch back to RLE packets when it can't be written as data in the file?
Here is my code if it helps:
private static void RunLengthEncode(ref byte[] bytes)
{
    // Create a list to store the bytes
    List<byte> output = new List<byte>();
    byte runLengthByte;
    int runLengthCounter = 0;
    // Set the RLE byte to the first byte in the array and increment the RLE counter
    runLengthByte = bytes[0];
    // For each byte in the input array...
    for (int i = 0; i < bytes.Length; i++)
    {
        if (runLengthByte == bytes[i] || runLengthCounter == 255)
        {
            runLengthCounter++;
        }
        else
        {
            // RLE packets under 3 should be written as raw data to avoid increasing the file size
            if (runLengthCounter < 3)
            {
                // Add a 0x00 to indicate raw data
                output.Add(0x00);
                // Add the bytes that were skipped while counting the run length
                for (int j = i - runLengthCounter; j < i; j++)
                {
                    output.Add(bytes[j]);
                }
            }
            else
            {
                // Add 2 bytes, one for the number of bytes and one for the value
                output.Add((byte)runLengthCounter);
                output.Add(runLengthByte);
            }
            runLengthCounter = 1;
            runLengthByte = bytes[i];
        }
        // Add the last bytes to the list when finishing
        if (i == bytes.Length - 1)
        {
            // Add 2 bytes, one for the number of bytes and one for the value
            output.Add((byte)runLengthCounter);
            output.Add(runLengthByte);
        }
    }
    // Set the bytes to the RLE encoded data
    bytes = output.ToArray();
}
Also if you want to comment and say that RLE isn't very efficient for binary data, I know it isn't. This is a project I'm doing to implement many kinds of compression to learn about them, not for an actual product.
Any help would be appreciated! Thanks!
There are many ways to unambiguously encode run lengths. One simple way is, when decoding: if you see two equal bytes in a row, then the next byte is a count of additional repeats of that byte after those first two, i.e. 0..255 additional repeats, encoding runs of 2..257. (There's no point in encoding runs of 0 or 1.)
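A minimal decoder sketch of that scheme (the method name and List-based buffering are illustrative, not from the original question):

using System.Collections.Generic;

static byte[] RunLengthDecode(byte[] input)
{
    var output = new List<byte>();
    int i = 0;
    while (i < input.Length)
    {
        byte value = input[i++];
        output.Add(value);
        // Two equal bytes in a row mean a repeat count follows
        if (i < input.Length && input[i] == value)
        {
            i++;                    // consume the second byte of the pair
            output.Add(value);
            int extra = input[i++]; // 0..255 additional repeats beyond the pair
            for (int j = 0; j < extra; j++)
            {
                output.Add(value);
            }
        }
    }
    return output.ToArray();
}

The matching encoder writes a lone byte as itself, and any run of two or more as value, value, count-of-extras; since a pair is always followed by a count byte, the decoder never misreads data as a marker.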

What is the conventional way to convert a range of bits to an integer value in c# .net?

I've been programming for many years, but have never needed to use bitwise operations too much or really deal with data too much on a bit or even byte level, until now. So, please forgive my lack of knowledge.
I'm having to process streaming message frame data that I'm getting via socket communication. The message frames are a series of hex bytes encoded Big Endian which I read into a byte array called byteArray. Take the following 2 bytes for example:
0x03 0x20
The data I need is represented in the first 14 bits - meaning I need to convert the first 14 bits into an int value. (The last 2 bits represent 2 other bool values). I have coded the following to accomplish this:
if (BitConverter.IsLittleEndian)
{
Array.Reverse(byteArray);
}
BitArray bitArray = GetBitArrayFromRange(new BitArray(byteArray), 0, 14);
int dataValue = GetIntFromBitArray(bitArray);
The dataValue variable ends up with the correct result which is: 800
The two functions I'm calling are here:
private static BitArray GetBitArrayFromRange(BitArray bitArray, int startIndex, int length)
{
    var newBitArray = new BitArray(length);
    for (int i = startIndex; i < length; i++)
    {
        newBitArray[i] = bitArray.Get(i);
    }
    return newBitArray;
}

private static int GetIntFromBitArray(BitArray bitArray)
{
    int[] array = new int[1];
    bitArray.CopyTo(array, 0);
    return array[0];
}
Since I have a lack of experience in this area, my question is: Does this code look correct/reasonable? Or, is there a more preferred/conventional way of accomplishing what I need?
Thanks!
"The dataValue variable ends up with the correct result which is: 800"
Shouldn't that correct result actually be 200?
1) 00000011 00100001 : the integer 0x0321 (now drop the trailing two bits, 01)
2) xx000000 11001000 : the 14 extracted bits, shifted right (the two missing xx bits count as zero)
3) 00000000 11001000 : the expected final result of the 14-bit extraction = 200
At present it looks like you have an empty (zero-filled) 16 bits into which you put the 14 bits, but you are putting them in the exact same positions (aligned to the left-hand side instead of the right-hand side):
Original bits : 00000011 00100001
16-bit slots  : XXXXXXXX XXXXXXXX
Correct way   : XX000000 11001000
What you did  : 00000011 001000XX
Your right-hand-side XX bits are zero, so your result is 00000011 00100000, which gives 800. But that's wrong, because it's not the true value of the specific 14 bits you extracted.
"Is there a more preferred/conventional way of accomplishing what I
need?"
I guess bit-shifting is the conventional way...
Solution (pseudo-code):
var myShort = 0x0321;        // "short" meaning 2 bytes
var answer = (myShort >> 2); // bit-shift right by 2 places
By nudging everything 2 places to the right, the two now-empty places at the far left become the XX bits (automatically zero until you change them), and the shift has also removed the 2 right-hand bits you wanted to ignore, leaving you with the correct 14-bit value.
PS:
Regarding your code... I've not had a chance to test it all, but the logic below seems more appropriate for your GetBitArrayFromRange function:
for (int i = 0; i < length; i++)
{
    newBitArray[i] = bitArray.Get(startIndex + i);
}
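Putting it together without BitArray at all, a minimal sketch for the two example bytes (it assumes, as the question states, that the frame is big-endian and the two flag bits are the lowest-order bits):

byte[] byteArray = { 0x03, 0x20 };

// Combine the two big-endian bytes into one 16-bit value: 0x0320 == 800
int raw = (byteArray[0] << 8) | byteArray[1];

// The upper 14 bits are the data value; the shift discards the 2 flag bits
int dataValue = raw >> 2;      // 800 >> 2 == 200

// The two low-order bits are the bool flags
bool flagA = (raw & 0b10) != 0;
bool flagB = (raw & 0b01) != 0;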

How to reduce the size of a specific format string?

I have designed a 2-pass assembler for my project. The output is in hexadecimal form, i.e. 15 is 0F.
I am working with a COM port, and to send "0F" over the line it is currently sent as a string.
But the problem is that I can only receive 1 byte on the other end, and sizeof("0F") > 1 byte.
There is no way of decompressing the data on the other end; I need to do all the work on my end and still receive "0F" on the other end.
Can I do this? If yes, then how?
I did this to get the hexadecimal string:
String.Format("{0:X2}", 15);
In addition,
using System.IO.Ports;
private SerialPort comPort = new SerialPort();
comPort.Write("0F");
On the receiving end I have an 8-bit processor which has 256 blocks of 1 byte each, i.e. 256 bytes. "0F", when received, arrives as 2 bytes and cannot be stored in a single 1-byte block. So I want "0F" to be 1 byte.
Looks like you need something like this:
// create buffer
byte[] buffer = new byte[256];
// put values you need to send to buffer
buffer[0] = 0x0f;
// ... add another bytes if you need...
// send them
var comPort = new SerialPort();
comPort.Write(buffer, 0, 1); // 0 is buffer offset, 1 is number of bytes to write
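If the rest of your pipeline already produces hex strings like "0F", you can convert each one back to the single raw byte before writing; Convert.ToByte with base 16 does the parsing:

string hex = String.Format("{0:X2}", 15);   // "0F" -- two characters of text
byte rawByte = Convert.ToByte(hex, 16);     // the single byte 0x0F == 15
comPort.Write(new byte[] { rawByte }, 0, 1);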

Convert large number to two bytes in C#

I'm trying to convert a number from a textbox into 2 bytes which can then be sent over serial. The numbers range from 500 to -500. I already have a setup where I can simply send a string which is then converted to bytes. Here's an example:
send_serial("137", "1", "244", "128", "0")
The textbox number will go in the 2nd and 3rd bytes
This will make my Roomba (The robot that all this code is for) drive forward at a velocity of 500 mm/s. The 1st number sent tells the roomba to drive, 2nd and 3rd numbers are the velocity and the 4th and 5th numbers are the radius of the turn (between 2000 and -2000, also has a special case where 32768 is straight).
var value = "321";
var shortNumber = Convert.ToInt16(value);
var bytes = BitConverter.GetBytes(shortNumber);
Alternatively, if you require Big-Endian ordering:
var bigEndianBytes = new[]
{
(byte) (shortNumber >> 8),
(byte) (shortNumber & byte.MaxValue)
};
Assuming you are using System.IO.Ports.SerialPort, you will use SerialPort.Write(byte[], int, int) to send the data.
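Putting those pieces together for the drive command described in the question (opcode 137, then velocity and radius each sent as two big-endian bytes; the port name here is an assumption):

short velocity = 500;              // mm/s, range -500..500
ushort radius = 0x8000;            // 32768: the special "drive straight" case

byte[] command =
{
    137,                           // drive opcode
    (byte)(velocity >> 8),         // velocity high byte (big-endian)
    (byte)(velocity & 0xFF),       // velocity low byte
    (byte)(radius >> 8),           // radius high byte
    (byte)(radius & 0xFF),         // radius low byte
};

var port = new SerialPort("COM1"); // port name is an assumption
port.Open();
port.Write(command, 0, command.Length);

This reproduces the example from the question: 137, 1, 244, 128, 0.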
If your input is like this: 99,255, you can do the following to extract the two bytes:
// Split the string into two parts
string[] strings = textBox1.Text.Split(',');
byte byte1, byte2;
// Make sure it has only two parts,
// and parse each string into a byte, safely
if (strings.Length == 2
    && byte.TryParse(strings[0], System.Globalization.NumberStyles.Integer, System.Globalization.CultureInfo.InvariantCulture, out byte1)
    && byte.TryParse(strings[1], System.Globalization.NumberStyles.Integer, System.Globalization.CultureInfo.InvariantCulture, out byte2))
{
    // Form the bytes to send
    byte[] bytes_to_send = new byte[] { 137, byte1, byte2, 128, 0 };
    // Write the data to the serial port
    serialPort1.Write(bytes_to_send, 0, bytes_to_send.Length);
}
else
{
    // Show some kind of error message?
}
Here I assume your "byte" is from 0 to 255, which is the same as C#'s byte type. I used byte.TryParse to parse the string into a byte.

received byte never over 127 in serial port

I have a program that sends a stream of bytes to another PC. The values range from 0 to 255. I set up my serial port like this:
sp.BaudRate = 115200;
sp.PortName = "COM53";
sp.DataBits = 8;
sp.StopBits = System.IO.Ports.StopBits.One;
sp.Parity = System.IO.Ports.Parity.None;
sp.ReadTimeout = 0;
sp.Open();
sp.DataReceived += new System.IO.Ports.SerialDataReceivedEventHandler(sp_DataReceived);
and then I have this
void sp_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    string Mystring = sp.ReadExisting();
    byte testbyte = 254;
    // Gather all the bytes until 102 is reached
    foreach (byte c in Mystring)
    {
        if (pixelcount < 102)
            pixel[pixelcount] = c;
        pixelcount++;
        if (c > 126)
            Console.WriteLine("big number {0}", c); // biggest number ever printed is 127
    }
    // Got all the bytes, now draw them
    if (pixelcount == 102)
    {
        Console.WriteLine("testbyte = {0}", testbyte);
        oldx = 0;
        pixelcount = 0;
        pictureBox_rawData.Invalidate();
    }
}
My problem is that "c" is never over 127.
What am I missing here?
I've tried every encoding but I cannot solve this problem. Please help.
Thanks.
If you want to get the raw bytes, you should be using SerialPort.Read to read them into a byte array. Using SerialPort.ReadExisting to read the data into a string forces a conversion of some kind (i.e. an encoding will convert the bytes to chars).
In the documentation for SerialPort.Write (Remarks section):
By default, SerialPort uses ASCIIEncoding to encode the characters. ASCIIEncoding encodes all characters greater than 127 as (char)63 or '?'. To support additional characters in that range, set Encoding to UTF8Encoding, UTF32Encoding, or UnicodeEncoding.
Maybe ReadExisting behaves similarly and converts every byte greater than 127 to 63.
You are not reading bytes, you are reading text, which is produced by converting the bytes that the port receives according to the SerialPort.Encoding property value. That property defaults to Encoding.ASCII, an encoding that only has characters for byte values 0 through 127. Byte values outside that range are replaced by the "?" character.
Which explains what you see. Choosing another Encoding is an unlikely solution in your case; use SerialPort.Read() instead. The equivalent of ReadExisting is calling Read() with a sufficiently large count argument. You'll get back whatever fits; the actual number of bytes copied into the buffer is the method's return value. It blocks when the input buffer is empty, which can only happen in the DataReceived event handler when e.EventType is not equal to SerialData.Chars. Not usually a problem.
Beware that your call to pictureBox_rawData.Invalidate() is invalid. DataReceived runs on a threadpool thread. You can only touch control members on the UI thread. You'll need to use Control.BeginInvoke().
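A minimal sketch of that marshaling (assuming this code runs inside the DataReceived handler on a threadpool thread):

// Hop back to the UI thread before touching the control
pictureBox_rawData.BeginInvoke(
    new Action(() => pictureBox_rawData.Invalidate()));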
Just as Hans Passant said, you need to use SerialPort.Read(). Something like this would work:
'retrieve number of bytes in the buffer
Dim bytes1 As Integer = ComPort.BytesToRead
'create a byte array to hold the awaiting data
Dim comBuffer As Byte() = New Byte(bytes1 - 1) {}
'read the data and store it to comBuffer
ComPort.Read(comBuffer, 0, bytes1)
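That snippet is VB.NET; a C# equivalent would be:

// Retrieve the number of bytes waiting in the buffer
int bytes1 = comPort.BytesToRead;
// Create a byte array to hold the awaiting data
byte[] comBuffer = new byte[bytes1];
// Read the data and store it in comBuffer
comPort.Read(comBuffer, 0, bytes1);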
