C# send int array via serial port to Arduino

I'm trying to send an int array from C# to Arduino using the serial port. In C#, first I have the input string
input = "98;1;5;160;0;255;421;101";
then, I convert it to an int array
int [] sendint = input.Split(';').Select(n => Convert.ToInt32(n)).ToArray();
//this array is what I need to send to Arduino
then, I convert it to a byte array to send via the serial port
byte[] sendbytes = sendint.Select(x => (byte)x).ToArray();
// because Write() requires byte, not int
and finally, I send it
serialport.Write(sendbytes,0,sendbytes.Length);
// port is open, baudrate is OK, portname is OK
Then, it should be received by my Arduino
int recdata[10];
int bytes = 0;
if (Serial.available())
{
    while (Serial.available())
    {
        recdata[bytes] = Serial.read();
        bytes++;
    }
    checkdata(); // function which checks the received data
}
so recdata should be an int array
recdata = {98,1,5,160,0,255,421,101};
but it isn't. When I print it to another serial port to check...
for (int i = 0; i < 10; i++) // called before the checkdata() function in the code above
{
    Serial1.print(recdata[i]);
    Serial1.print(" ");
}
I get 3 outputs instead of 1, as if the serial port sends the first int, then the second, and then the rest.
98 0 0 0 0 0 0 0 0 0 1checkfail     // 1checkfail comes from the checkdata() function
1 0 0 0 0 0 0 0 0 0 1checkfail      // and it is saying that the data
5 160 0 255 165 101 0 0 0 0 1checkfail // is wrong
98 1 5 160 0 255 421 101 0 0 1checkok  // this is how it should look
// 421 and 165: I know I'm trying to store 421 in a byte, which only holds 0 to 255, so that is a separate problem
Does anyone have a suggestion for this problem?

We would need to see what your checkdata() function does, but I'm pretty sure that you don't check the bytes variable.
What happens is this: you are using serial communication, which means you don't get all your data at once; you get it one byte at a time. If you send 8 bytes from the PC but call Serial.available() after only two bytes have arrived, you will get 2 as the answer. I think that if you modify your code in this way
// Move these OUTSIDE the loop function
int recdata[10];
int bytes = 0;
// This is in the loop function
if (Serial.available())
{
    while (Serial.available())
    {
        recdata[bytes] = Serial.read();
        bytes++;
    }
    if (bytes >= 8)
    {
        checkdata(); // function which checks the received data
        bytes = 0;
    }
}
it will work properly...
Otherwise, post your checkdata() function and we'll see.
By the way, I'd put a check in the reading part, like this:
while (Serial.available())
{
    if (bytes < 10)
    {
        recdata[bytes] = Serial.read();
        bytes++;
    }
}
So you can avoid memory corruption if you receive 12 bytes instead of 10...

The Arduino loop() runs faster than the serial data arrives, so the receive block is entered three times instead of just once.
You can use Serial.readBytes() instead of Serial.read(). It will keep filling your buffer until it times out. Use Serial.setTimeout() to choose a reasonable timeout instead of the default 1000 milliseconds.
char recdata[10];
int bytes = 0;
// in setup(): Serial.setTimeout(100); // or whatever timeout suits your data rate
if (Serial.available())
{
    bytes = Serial.readBytes(recdata, sizeof(recdata)); // returns once 10 bytes arrive or the timeout expires
    checkdata(); // function which checks the received data
}
For the problem of converting ints to bytes, look at this question.
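For values above 255 such as 421, one option on the C# side is sketched below (a sketch only, reusing the sendint and serialport variables from the question): send each value as a two-byte little-endian short, and have the Arduino rebuild each pair of bytes as low | (high << 8).
// Sketch: each int becomes two bytes, low byte first, so 421 is sent as 0xA5, 0x01
// and survives the trip instead of being truncated to 165.
byte[] payload = sendint
    .SelectMany(v => BitConverter.GetBytes((short)v))
    .ToArray();
serialport.Write(payload, 0, payload.Length);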

Related

FlatBuffers: Encoding in C++ versus C#, decoding in C# end-to-end example

Imagine a schema:
namespace MyEvents;
table EventAddress
{
id:uint;
timestamp:ulong;
adress:string;
}
table EventSignalStrength
{
id:uint;
timestamp:ulong;
strength:float;
}
table EventStatus
{
status:string;
}
union Events {EventAddress, EventSignalStrength, EventStatus}
table EventHolder
{
theEvent:Events;
}
root_type EventHolder;
For status message "EXIT", in C++ I encode and send over the wire like:
std::string message("EXIT");
flatbuffers::FlatBufferBuilder builder;
auto messageString= builder.CreateString(message); // Message to send.
auto statusEvent= MyEvents::CreateEventStatus(builder, messageString);
auto eventHolder= MyEvents::CreateEventHolder(builder, MyEvents::Events_EventStatus, statusEvent.Union());
builder.Finish(eventHolder);
// Code to decode to check my work omitted, but the data decode properly in my real-world application.
ret= sendto(m_udpSocket, reinterpret_cast<const char*>(builder.GetBufferPointer()), static_cast<int>(builder.GetSize()), 0, reinterpret_cast<SOCKADDR *>(&m_destination), sizeof(m_destination));
For the same message, "EXIT", in C# I encode and send over the wire like:
string message= "EXIT";
FlatBufferBuilder builder = new FlatBufferBuilder(1);
StringOffset messageOffset = builder.CreateString(message);
EventStatus.StartEventStatus(builder);
EventStatus.AddStatus(builder, messageOffset);
Offset<EventStatus> eventStatusOffset = EventStatus.EndEventStatus(builder);
EventHolder.StartEventHolder(builder);
EventHolder.AddTheEventType(builder, Events.EventStatus);
EventHolder.AddTheEvent(builder, eventStatusOffset.Value);
Offset<EventHolder> eventHolderOffset = EventHolder.EndEventHolder(builder);
EventHolder.FinishEventHolderBuffer(builder, eventHolderOffset);
// Test the encoding by decoding:
EventHolder flatBuffer = EventHolder.GetRootAsEventHolder(builder.DataBuffer);
Events flatBufferType = flatBuffer.TheEventType; // Type looks good.
EventStatus decodedEvent= new EventStatus();
flatBuffer.GetDataObject<EventStatus>(decodedEvent); // decodedEvent.Status looks good.
// This code seems to send the correct data:
Byte[] sendSized = builder.SizedByteArray();
udpClient.Send(sendSized, sendSized.Length);
// This code does not seem to send the correct data:
//ByteBuffer sendByteBuffer = builder.DataBuffer;
//udpClient.Send(sendByteBuffer.Data, sendByteBuffer.Data.Length);
In my client application, written in C#, I decode as:
Byte[] receiveBytes = udpClient.Receive(ref m_remoteEndpoint);
ByteBuffer flatBufferBytes= new ByteBuffer(receiveBytes);
EventHolder flatBuffer = EventHolder.GetRootAsEventHolder(flatBufferBytes);
Events flatBufferType= flatBuffer.DataObjectType;
EventAddress eventAddress = null;
EventSignalStrength eventSignalStrength = null;
EventStatus eventStatus = null;
switch (flatBufferType)
{
    case Events.EventAddress:
    {
        eventAddress = new EventAddress();
        flatBuffer.GetDataObject<EventAddress>(eventAddress);
        ProcessEventAddress(eventAddress);
        break;
    }
    case Events.EventSignalStrength:
    {
        eventSignalStrength = new EventSignalStrength();
        flatBuffer.GetDataObject<EventSignalStrength>(eventSignalStrength);
        ProcessEventSignalStrength(eventSignalStrength);
        break;
    }
    case Events.EventStatus:
    {
        eventStatus = new EventStatus();
        flatBuffer.GetDataObject<EventStatus>(eventStatus);
        Console.WriteLine("\nStatus Message: {0}", eventStatus.status);
        break;
    }
}
When I receive EventStatus messages from the C++ application, they decode properly.
When I receive EventStatus messages from the C# sending application, they decode properly.
When I dump the buffers sent from the applications, they are (in decimal):
C++ - 12 0 0 0 8 0 14 0 7 0 8 0 8 0 0 0 0 0 0 4 12 0 0 0 0 0 6 0 8 0 4 0 6 0 0 0 4 0 0 0 4 0 0 0 69 88 73 84 0 0 0 0
C# - 12 0 0 0 8 0 10 0 9 0 4 0 8 0 0 0 12 0 0 0 0 4 6 0 8 0 4 0 6 0 0 0 4 0 0 0 4 0 0 0 69 88 73 84 0 0 0 0
Originally, the messages from the C# sender were not decoding properly; now they are. I had made a change to the sender, so maybe I had not rebuilt.
I am a little mystified that the received C++ buffer and the C# buffer are different, yet they decode properly to the same result.
My real-world schema is much more complex - am I following the proper procedure for decoding on the C# side?
Am I following the correct procedure for reducing the FlatBuffer to Byte[] for sending over the wire in C#? It looks like I am, but it did not seem to work for a while.
Any input appreciated.
The ByteBuffer contains the buffer, but not necessarily at offset 0, so yes, turning it into a byte array (or sending the ByteBuffer contents starting from its offset) are the only correct ways of sending it.
The encoding may differ between languages, as implementations may serialize things in different orders. Here, the C++ implementation decides to write the union type field before the offset, which happens to be inefficient for alignment, so it is a bit bigger. C# does the opposite.
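For completeness, a rough sketch of sending straight from the ByteBuffer while respecting that offset (assuming the same ByteBuffer.Data and ByteBuffer.Position members the question already uses); this is effectively what SizedByteArray() does for you:
ByteBuffer bb = builder.DataBuffer;
int length = bb.Data.Length - bb.Position;        // only the bytes actually used
byte[] payload = new byte[length];
Array.Copy(bb.Data, bb.Position, payload, 0, length);
udpClient.Send(payload, payload.Length);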

Get the value of a bit in a byte - C#

I need to write a system that checks whether a user is valid or not by the number issued to the customer (NIC).
The data is given as bytes, 255 kilobytes in total, and I need to convert the bytes to bits. 255 KB converted to bits is 2,088,960 bits.
Let's say we take F9 as the first byte; converted to binary it becomes 11111001.
NIC | Binary
1 = 1
2 = 1
3 = 1
4 = 1
5 = 1
6 = 0
7 = 0
8 = 1
0 = False
1 = True
For example, the NIC for this customer is 3, so the value of the bit is 1. For another customer, say his NIC is 6, the value of the bit is 0.
If the value of the bit is 0, the customer is valid; if the value is 1, the customer is not valid.
Here is what I have done so far:
var reader = com.ExecuteScalar() as byte[];
if (reader != null)
{
    // From database to byte array
    list_bytes = reader;
    // From byte array to bit array
    BitArray bits = new BitArray(list_bytes);
    for (int a = 1; a <= NIC; a++)
    {
        // Debug purpose
        if (a == NIC)
            lblStatus.Text = Convert.ToBoolean(bits[a - 1]).ToString();
    }
}
The problem is: if I enter NIC 1, it returns True. When I enter NIC 2, it returns False, but the answer should be True.
NIC 1 = True = 1
NIC 2 = False = 0
NIC 3 = False = 0
NIC 4 = True = 1
NIC 5 = True = 1
NIC 6 = True = 1
NIC 7 = True = 1
NIC 8 = True = 1
The binary is 10011111, which as a byte is 0x9F, but the data should be 0xF9.
I have been googling for a few hours and no answer fits my problem. Kindly let me know if this question is not clear.
The binary is 10011111, which as a byte is 0x9F, but the data should be 0xF9.
The problem is that the first bit you push into the array is the first bit you get out of it again. In your case you don't want this; you want to start at the last bit and work your way back to the first. Then it will work and ultimately give you 0xF9.
for (int a = NIC; a >= 0; a--)
{
    // Loop runs from NIC down to 0 (or down to 1, depending on whether your collection is zero-based)
}
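Alternatively, if you keep the BitArray, here is a minimal sketch (the helper name is hypothetical) that maps an MSB-first, 1-based NIC onto the LSB-first indexing that BitArray(byte[]) uses:
// Hypothetical helper: nic is 1-based and counts bits MSB-first within each byte,
// while BitArray(byte[]) stores each byte LSB-first.
bool GetNicBit(BitArray bits, int nic)
{
    int zeroBased = nic - 1;
    int byteIndex = zeroBased / 8;     // which byte the NIC falls into
    int bitInByte = zeroBased % 8;     // 0 = most significant bit of that byte
    return bits[byteIndex * 8 + (7 - bitInByte)];
}
// For the first byte 0xF9 this returns True for NIC 2, as the question expects.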
This actually appears to be a rather simple problem covered up by too much code. One suggestion: forget about converting anything to bits. There is only pain that way.
As I understand it you have an array of bytes and each byte is a customer. The customer has a NIC and you simply want to check whether a particular bit is set in the customer's byte according to the NIC. The code looks something like this.
Byte customer_byte = list_bytes[customer_id];
Boolean isvalid = test_bit(customer_byte, customer_nic);
The NIC bits are numbered from 1 to 8, where 1 means 0x80 and 8 means 0x01. The test_bit function to do that could be written:
Boolean test_bit(Byte value, int bitno) {
    Byte mask = (Byte)(1 << (8 - bitno)); // bit 1 -> 0x80 ... bit 8 -> 0x01
    return (value & mask) != 0;           // AND against the mask, not the bit number
}
I leave writing the actual code (and fixing my misunderstandings) as an exercise to the reader.
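For the example byte from the question, 0xF9 (1111 1001), that function gives:
Boolean bit3 = test_bit(0xF9, 3);   // true  - NIC 3 is set
Boolean bit6 = test_bit(0xF9, 6);   // false - NIC 6 is clear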

Identifying socket messages

I have a code snippet below that processes a socket message, and I would like to know what message should be sent so that it does not result in a return.
SocketPacket is a class which stores the received socket, DataLength is the length of the received message, and dataBuffer stores the message.
int num3;
byte num6 = 0;
SocketPacket workSocket;
int DataLength;
if (workSocket.dataBuffer[0] == 0x33)
{
    if (DataLength < 0xbb)
    {
        return false;
    }
    for (num3 = 0; num3 < 0xba; num3++)
    {
        num6 = (byte)(num6 + workSocket.dataBuffer[num3]);
    }
    // how do I get past this if condition?
    if (num6 != workSocket.dataBuffer[0xba])
    {
        return false;
    }
}
So,
What message should be sent to the server to get past the last if condition? (According to my understanding, the message should be at least 187 bytes in length and the first character should be "3".)
What are 0xba, 0x33, 0xbb, etc.? Hexadecimal numbers? How should I reconstruct the input message? Convert these to ASCII, or to decimal? It doesn't make any sense to me.
I tried to convert workSocket.dataBuffer[0 or 1 or any int] to a readable string. Convert.ToChar(workSocket.dataBuffer[0]) and workSocket.dataBuffer[0].ToString() give different results. Why is that?
Well, what you have there is a fixed-length message (a 187-byte message). The first byte is a mark to identify the beginning of the message, so if the first byte is not 0x33 your code doesn't process the bytes in the buffer.
Next, in the for loop you have a checksum. It adds up the first 186 bytes in order to compare the result with the last byte (the precalculated checksum). This verifies the message is intact (and it is redundant, by the way, because the transport protocol already guarantees the stream/datagram arrives intact).
So, about your questions:
What message should be sent to the server to get past the last if condition?
Well, you need to send a 187-byte message (simply a byte[187]): the first byte has to be 0x33, then the content, and the last byte has to be the checksum (calculated the same way your snippet shows):
[0x33 | THE CONTENT | CHKSUM]
   0     1 ... 185     186
For example, the following buffer holds a valid message (one that will pass the if condition). It simply begins with the mark byte (0x33) and the next 185 bytes are zero (I didn't assign values), so the checksum is 0x33 + 0 + 0 + ... + 0 = 0x33:
var buffer = new byte[187];
buffer[0] = 0x33;
buffer[186] = 0x33;
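If the content is not all zeros, you just compute the checksum the same way the receiving loop does (an additive checksum truncated to one byte, like num6 in the snippet), for example:
var message = new byte[187];
message[0] = 0x33;                      // start mark
// ... fill message[1] .. message[185] with the actual content ...
byte checksum = 0;
for (int i = 0; i < 186; i++)
{
    checksum = (byte)(checksum + message[i]);   // wraps modulo 256, same as num6
}
message[186] = checksum;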
What are the 0xba, 0x33, 0xbb etc....? Hexadecimals?
Yes, they are just numbers in hexadecimal.
I tried to convert (sic) gives different results. Why is that?
Sockets send and receive bytes (just numbers), but the real question is: why do you assume they have to be text? They probably are text, yes, but who knows. That is part of the agreement (the protocol) that both endpoints accepted and that allows them to exchange data. So, you have to know what those 185 bytes (187 - 1 mark byte - 1 checksum byte) mean in order to be able to process them.
What you are doing now is reverse engineering a protocol, because it is clear you don't know the message format, and I guess you don't know what the content means either; even if you are right and the content is just text, you don't know the encoding used. Those are the things you need to focus on.
I hope this helps you.

How to reduce the size of a specific format string?

I have designed a two-pass assembler for my project. The output is in hexadecimal form, i.e. 15 is 0F.
I am working with a COM port, and to send "0F" over the line it would have to be sent as a string.
The problem is that I can only receive 1 byte on the other end, and the string "0F" is larger than 1 byte.
There is no way to decompress data on the other end; I need to do all the work on my end and still receive 0F on the other end.
Can I do this? If yes, how?
I did this to get the hexadecimal string:
String.Format("{0:X2}", 15);
In addition,
using System.IO.Ports;
private SerialPort comPort = new SerialPort();
comPort.Write("0F");
On the receiving end I have an 8-bit processor which has 256 blocks of 1 byte each, i.e. 256 bytes. "0F" is received as 2 bytes and cannot be stored in a single 1-byte block. So I want "0F" to fit in 1 byte.
Looks like you need something like this:
// create buffer
byte[] buffer = new byte[256];
// put values you need to send to buffer
buffer[0] = 0x0f;
// ... add another bytes if you need...
// send them
var comPort = new SerialPort();
comPort.Write(buffer, 0, 1); // 0 is buffer offset, 1 is number of bytes to write
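And if the value is only available as the formatted hex string (the "0F" produced by String.Format), a small sketch of converting it back to a single byte before writing it, using Convert.ToByte with base 16:
string hex = String.Format("{0:X2}", 15);   // "0F"
byte value = Convert.ToByte(hex, 16);       // 0x0F - one byte, not two characters
comPort.Write(new byte[] { value }, 0, 1);  // send exactly one byte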

Convert large number to two bytes in C#

I'm trying to convert a number from a textbox into 2 bytes which can then be sent over serial. The numbers range from -500 to 500. I already have a setup so I can simply send a string which is then converted to a byte. Here's an example:
send_serial("137", "1", "244", "128", "0")
The textbox number will go in the 2nd and 3rd bytes
This will make my Roomba (the robot that all this code is for) drive forward at a velocity of 500 mm/s. The 1st number sent tells the Roomba to drive, the 2nd and 3rd numbers are the velocity, and the 4th and 5th numbers are the radius of the turn (between -2000 and 2000, with a special case where 32768 means drive straight).
var value = "321";
var shortNumber = Convert.ToInt16(value);
var bytes = BitConverter.GetBytes(shortNumber);
Alternatively, if you require Big-Endian ordering:
var bigEndianBytes = new[]
{
(byte) (shortNumber >> 8),
(byte) (shortNumber & byte.MaxValue)
};
Assuming you are using System.IO.Ports.SerialPort, you would use SerialPort.Write(byte[], int, int) to send the data.
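Putting it together, a rough sketch of the full 5-byte drive command (the textbox and port names are placeholders; velocity is sent big-endian as above, and radius 32768 = 0x80, 0x00 means drive straight):
// Sketch: drive opcode 137, velocity from the textbox (-500..500) as two
// big-endian bytes, radius 32768 meaning "drive straight".
short velocity = Convert.ToInt16(velocityTextBox.Text);
byte[] command =
{
    137,                          // drive opcode
    (byte)(velocity >> 8),        // velocity high byte (e.g. 500 -> 0x01)
    (byte)(velocity & 0xFF),      // velocity low byte  (e.g. 500 -> 0xF4)
    0x80,                         // radius high byte
    0x00                          // radius low byte
};
serialPort1.Write(command, 0, command.Length);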
If your input looks like this: 99,255, you can do the following to extract the two bytes:
// Split the string into two parts
string[] strings = textBox1.Text.Split(',');
byte byte1, byte2;
// Make sure it has only two parts,
// and parse each string into a byte, safely
if (strings.Length == 2
    && byte.TryParse(strings[0], System.Globalization.NumberStyles.Integer, System.Globalization.CultureInfo.InvariantCulture, out byte1)
    && byte.TryParse(strings[1], System.Globalization.NumberStyles.Integer, System.Globalization.CultureInfo.InvariantCulture, out byte2))
{
    // Form the bytes to send
    byte[] bytes_to_send = new byte[] { 137, byte1, byte2, 128, 0 };
    // Write the data to the serial port.
    serialPort1.Write(bytes_to_send, 0, bytes_to_send.Length);
}
else
{
    // Show some kind of error message?
}
Here I assume your "byte" is from 0 to 255, which is the same as C#'s byte type. I used byte.TryParse to parse the string into a byte.
