I want to create the same message and send it from C# as I do from C++, where everything works. Note that I have a C# client where I have troubles, a C++ client where everything works fine, and a C++ server that should read messages from both the C# and C++ clients.
Here is how I send the message from C++:
void ConnectAuthserverCommand::SendLogin(tcp::socket &s, const flatbuffers::FlatBufferBuilder &builder) const {
    ClientOpcode opc = CLIENT_LOGIN_REQUEST;
    flatbuffers::FlatBufferBuilder builder2;
    auto email = builder2.CreateString("test@abv.bg");
    auto password = builder2.CreateString("test");
    auto loginRequest = Vibranium::CreateLoginRequest(builder2, email, password);
    builder2.FinishSizePrefixed(loginRequest);
    size_t size2 = builder2.GetSize();
    uint8_t *buf2 = builder2.GetBufferPointer();
    uint8_t *actualBuffer2 = new uint8_t[size2 + 2];
    actualBuffer2[0] = (opc & 0xFF); // low byte of the opcode first (little-endian)
    actualBuffer2[1] = (opc >> 8);   // high byte second
    memcpy(actualBuffer2 + 2, buf2, size2);
    boost::asio::write(s, boost::asio::buffer(actualBuffer2, size2 + 2));
    delete[] actualBuffer2;
}
ClientOpcode is as follows:
enum ClientOpcode : uint16_t {
    CLIENT_AUTH_CONNECTION = 0x001,
    CLIENT_LOGIN_REQUEST   = 0x002,
    CLIENT_NUM_MSG_TYPES   = 0x003,
};
What I do is the following: I take a ClientOpcode, which I want to put in front of the FlatBuffers message. So I create an array of uint8_t that is exactly 2 bytes larger (because the size of uint16_t is 2 bytes). Then on the server I read the first 2 bytes in order to get the header, and here is how I do that:
void Vibranium::Client::read_header() {
    auto self(shared_from_this());
    _packet.header_buffer.resize(_packet.header_size);
    boost::asio::async_read(socket,
        boost::asio::buffer(_packet.header_buffer.data(), _packet.header_size),
        [this, self](boost::system::error_code ec, std::size_t bytes_transferred)
        {
            if ((boost::asio::error::eof == ec) || (boost::asio::error::connection_reset == ec))
            {
                Disconnect();
            }
            else
            {
                assert(_packet.header_buffer.size() >= sizeof(_packet.headerCode));
                std::memcpy(&_packet.headerCode, &_packet.header_buffer[0], sizeof(_packet.headerCode));
                if (_packet.headerCode)
                    read_size();
                else
                    Logger::Log("UNKNOWN HEADER CODE", Logger::FatalError);
            }
        });
}
So far so good. However, I am not able to send the same, correctly formatted message from the C# client, even though I send exactly the same data. Take a look:
Client authClient = GameObject.Find("Client").GetComponent<AuthClient>().client; // This is how I get the Client class instance.
ClientOpcode clientOpcode = ClientOpcode.CLIENT_LOGIN_REQUEST;
var builder = new FlatBuffers.FlatBufferBuilder(1);
var email = builder.CreateString("test@abv.bg");
var password = builder.CreateString("test");
var loginRequest = LoginRequest.CreateLoginRequest(builder, email, password);
builder.FinishSizePrefixed(loginRequest.Value);
authClient.Send(builder, clientOpcode);
And here is how I actually prepend the header and send the data in C#:
public static byte[] PrependClientOpcode(FlatBufferBuilder builder, ClientOpcode code)
{
    var originalArray = builder.SizedByteArray();
    byte[] buffer = new byte[originalArray.Length + 2];
    buffer[0] = (byte)code;                // low byte of the opcode first (little-endian)
    buffer[1] = (byte)((ushort)code >> 8); // high byte second
    Array.Copy(originalArray, 0, buffer, 2, originalArray.Length);
    return buffer;
}
public void Send(FlatBufferBuilder builder, ClientOpcode opcode)
{
    var bufferToSend = PrependClientOpcode(builder, opcode);
    // respect max message size to avoid allocation attacks.
    if (bufferToSend.Length > MaxMessageSize)
    {
        Logger.LogError("Client.Send: message too big: " + bufferToSend.Length + ". Limit: " + MaxMessageSize);
        return;
    }
    if (Connected)
    {
        // add to send queue and return immediately.
        // calling Send here would be blocking (sometimes for long times
        // if the other side lags or the wire was disconnected)
        sendQueue.Enqueue(bufferToSend);
        sendPending.Set(); // interrupt SendThread WaitOne()
    }
    else
    {
        Logger.LogWarning("Client.Send: not connected!");
    }
}
ClientOpcode enum on C# is as follows:
public enum ClientOpcode : ushort
{
    CLIENT_AUTH_CONNECTION = 0x001,
    CLIENT_LOGIN_REQUEST   = 0x002,
    CLIENT_NUM_MSG_TYPES   = 0x003,
}
I believe ushort is the C# equivalent of uint16_t, which is why ClientOpcode is declared as ushort.
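As a quick sanity check that the two header bytes match what the C++ side expects, the opcode can also be split with BitConverter (a sketch; note that BitConverter uses the machine's byte order, which is little-endian on typical x86/ARM targets):

ushort opcode = (ushort)ClientOpcode.CLIENT_LOGIN_REQUEST; // 0x0002
byte[] header = BitConverter.GetBytes(opcode);             // { 0x02, 0x00 } on little-endian machines
Console.WriteLine(BitConverter.ToString(header));          // prints "02-00"

This is exactly what the manual buffer[0]/buffer[1] assignments above produce.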
When I send the message, the server logs the error UNKNOWN HEADER CODE. If you look at the C++ server code that reads the header, you'll see that this message is logged when the server is unable to recognize the header code. So somehow I am unable to place the ClientOpcode header correctly in front of the TCP message sent from the C# client.
In order to find out what the differences are, I installed Wireshark on the host and captured both messages. In the captures, the C# message is longer: the C++ message has a TCP length of 58, while the C# message has a length of 62. Why?
The C++ client is sending data:
0200340000000c00000008000c00040008000800000014000000040000000400000074657374000000000b00000074657374406162762e626700
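For reference, that working dump decodes as follows (my own annotation, matching the framing described above):

// 02 00           -> 2-byte opcode, little-endian: 0x0002 = CLIENT_LOGIN_REQUEST
// 34 00 00 00     -> FlatBuffers size prefix: 0x34 = 52 bytes of table data follow
// 0c 00 00 00 ... -> the 52 bytes of FlatBuffers table data ("test", "test@abv.bg")
// total: 2 + 4 + 52 = 58 bytes, which matches the capture length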
When the C# client is sending:
0000003a0200340000000c00000008000c00040008000800000014000000040000000400000074657374000000000b00000074657374406162762e626700
The C# client is adding 0000003a in front of its message. (0x0000003A read big-endian is 58, which is exactly the length of the working C++ message, so it looks like a 4-byte length header is being prepended somewhere in the send path.) If I remove it, the messages are identical and everything should work.
Why is my C# client adding this extra data in front, and how can I fix it?
Related
I am new to using Modbus communication, and I found some related threads here, but unfortunately they were for other languages, or used TCP rather than an RTU connection for Modbus.
So I have this segment of C# code that I can use to send data:
byte address = Convert.ToByte(txtSlaveID.Text);
ushort start = Convert.ToUInt16(txtWriteRegister.Text);
short[] value = new short[1];
if (Int16.TryParse(txtWriteValue.Text, out short numberValue))
{
    value[0] = numberValue; // This part works!
}
else
{
    value = new short[3] { 0x52, 0x4E, 0x56 }; // This is where I am trying to send letters/ASCII
}
try
{
    mb.SendFc16(address, start, (ushort)value.Length, value);
}
catch (Exception err)
{
    WriteLog("Error in write function: " + err.Message);
}
WriteLog(mb.modbusStatus);
So when I want to send down a single value, this code works. It takes the short array and does the following to build the packet:
// Put write values into message prior to sending:
for (int i = 0; i < registers; i++)
{
    message[7 + 2 * i] = (byte)(values[i] >> 8); // high byte of the register value
    message[8 + 2 * i] = (byte)(values[i]);      // low byte
}
So as you can see, I attempted to put the hex values in the array and send them down to the registers.
How can I modify the first sample of code so that it can send down hex values and write characters into the register space?
I think the problem is in how you write the characters to the device's registers.
You do not need to store hex values in the short array. Simply store the characters in the array, and convert them to bytes before writing them into the register.
Note: whatever data is written into the device's registers should be in bytes.
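A minimal sketch of that idea, reusing the mb.SendFc16 helper and the text boxes from the question (the names come from the snippet above; one character per register is an assumption about the device's layout):

string text = txtWriteValue.Text;  // e.g. "RNV"
short[] value = new short[text.Length];
for (int i = 0; i < text.Length; i++)
{
    // store each character's byte value in its own register:
    // 'R' -> 0x52, 'N' -> 0x4E, 'V' -> 0x56
    value[i] = (byte)text[i];
}
mb.SendFc16(address, start, (ushort)value.Length, value);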
I'm utilizing Protobuf3 with C# for my client and C++ for my server, where the .proto files are compiled with the corresponding protoc3 compiler. I'm new to using Google Protocol Buffers, and I'm currently trying to figure out how to re-parse the received bytes on my C++ server so that they turn back into the originating google::protobuf::Message object.
My client sends a CodedOutputStream in C#:
public PacketHandler()
{
    m_client = GetComponent<ClientObject>();
    m_packet = new byte[m_client.GetServerConnection().GetMaxPacketSize()];
    m_ostream = new BinaryWriter(new MemoryStream(m_packet));
}

public void DataToSend(IMessage message)
{
    CodedOutputStream output = new CodedOutputStream(m_ostream.BaseStream, true);
    output.WriteMessage(message);
    output.Flush();
    m_client.GetServerConnection().GetClient().Send(m_packet, message.CalculateSize());
}
This seems to be working, the message that is sent right now is a simple Ping message that looks like this:
// Ping.proto
syntax = "proto3";
package server;
import "BaseMessage.proto";

message Ping {
    base.BaseMessage base = 1;
}

message Pong {
    base.BaseMessage base = 1;
}
My BaseMessage looks like this :
// BaseMessage.proto
syntax = "proto3";
package base;

message BaseMessage {
    uint32 size = 1;
}
My plan is to extend all of my messages from a BaseMessage, to eventually allow identification of the particular message, once I get the parsing and re-parsing figured out.
The received message that I am getting on my C++ server side looks like this: \u0004\n\u0002\b
When receiving the message I am attempting to re-parse using the CodedInputStream object by attempting to parse the received bytes.
PacketHandler::PacketHandler(QByteArray& packet, const Manager::ClientPtr client) :
    m_packet(packet),
    m_client(client)
{
    //unsigned char buffer[512] = { 0 };
    unsigned char data[packet.size()] = { 0 };
    memcpy(data, packet.data(), packet.size());
    google::protobuf::uint32 msgSize;
    google::protobuf::io::CodedInputStream inputStream(data, packet.size());
    //inputStream.ReadVarint32(&msgSize);
    //inputStream.ReadRaw(buffer, packet.size());
    server::Ping pingMsg;
    pingMsg.ParseFromCodedStream(&inputStream);
    qDebug() << pingMsg.base().size();
}
This is where I am a bit unsure of the process needed to re-parse the bytes into the particular message. I believe that having a BaseMessage that all messages extend will let me identify the particular message, so I know which one to create. However, in this current test, where I know it will be a Ping message, ParseFromCodedStream doesn't seem to recreate the original message: during my qDebug() call, pingMsg.base().size() is not the value that was set during the sending phase in my C# client.
I was able to get this solved by incrementing my sent byte count by 1.
public void DataToSend(IMessage message)
{
    CodedOutputStream output = new CodedOutputStream(m_ostream.BaseStream, true);
    output.WriteMessage(message);
    output.Flush();
    (m_ostream.BaseStream as MemoryStream).SetLength(0); // reset stream for next packet(s)
    m_client.GetServerConnection().GetClient().Send(m_packet, message.CalculateSize() + 1);
}
I'm still a bit unsure why I need to add one. I suspect it is the length prefix that WriteMessage emits in front of the message (a one-byte varint for small messages), which CalculateSize() does not include, rather than a null terminator. Without the addition my message would look like: #\u0012!\n\u001Ftype.googleapis.com/server.Pin
With the addition, the message arrives complete: #\u0012!\n\u001Ftype.googleapis.com/server.Ping
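If that is right, the hand-tuned +1 can be avoided by asking the library for the length-prefixed size instead (a sketch; to my knowledge CodedOutputStream.ComputeMessageSize returns the message size including its varint length prefix):

public void DataToSend(IMessage message)
{
    CodedOutputStream output = new CodedOutputStream(m_ostream.BaseStream, true);
    output.WriteMessage(message); // writes a varint length prefix, then the message bytes
    output.Flush();
    (m_ostream.BaseStream as MemoryStream).SetLength(0);
    // length prefix size + CalculateSize(), so no manual "+ 1" is needed
    int total = CodedOutputStream.ComputeMessageSize(message);
    m_client.GetServerConnection().GetClient().Send(m_packet, total);
}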
Also, during this process I figured out how to classify my messages using the protobuf3 Any type. My BaseMessage is now defined as:
syntax = "proto3";
package base;
import "google/protobuf/any.proto";

message BaseMessage {
    uint32 size = 1;
    google.protobuf.Any msg = 2;
}
This allows me to place any sort of google::protobuf::Message into the msg field; when parsing from the coded stream, I can then check from the type_url which particular message it is.
PacketHandler::PacketHandler(QByteArray& packet, const Manager::ClientPtr client) :
    m_packet(packet),
    m_client(client)
{
    unsigned char data[packet.size()] = { 0 };
    memcpy(data, packet.data(), packet.size());
    google::protobuf::io::CodedInputStream inputStream(data, packet.size());

    // read the prefixed length of the message & discard
    google::protobuf::uint32 msgSize;
    inputStream.ReadVarint32(&msgSize);

    // collect the BaseMessage & execute functionality based on type_url
    base::BaseMessage msg;
    if (!msg.ParseFromCodedStream(&inputStream))
    {
        qDebug() << msg.DebugString().c_str();
        return;
    }
    if (msg.msg().Is<server::Ping>())
    {
        server::Ping pingMsg;
        msg.msg().UnpackTo(&pingMsg);
    }
}
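For completeness, the matching C# sending side packs the inner message into the Any field roughly like this (a sketch; BaseMessage and Ping are the classes generated from the .protos above, and DataToSend is the helper from earlier):

using Google.Protobuf.WellKnownTypes;

var ping = new Ping();
var baseMsg = new BaseMessage
{
    Size = (uint)ping.CalculateSize(), // the protocol's own size field
    Msg = Any.Pack(ping)               // stores the message together with its type_url
};
DataToSend(baseMsg);                   // WriteMessage adds the varint length prefix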
I have a client-server app which sends data from a C# client to a C++ server. When the server receives a data request, 9 out of 10 times it works OK, but there is always one time where garbage data is appended to the end of the received data on the server side.
For example, instead of receiving the number 1, it will receive 1C or 1#????.
Here are snippets of the client and server code; any help will be appreciated.
C# client
int flagSide = 1;
msg = name;
msg += "+";
msg += "qty";
msg += "+";
msg += flagSide.ToString();
ZeroMQ.ZmqContext context = ZeroMQ.ZmqContext.Create();
ZeroMQ.ZmqSocket socket = context.CreateSocket(SocketType.REQ);
socket.Connect("tcp://111.111.0.111:5556");
socket.Send(Encoding.ASCII.GetBytes(msg.ToCharArray()));
Thread.Sleep(1);
string reply = socket.Receive(Encoding.ASCII);
Console.WriteLine("Received reply = " + reply + "\n");
C++ Server
std::tr1::unordered_map<std::string, std::string> aMap;
zmq::context_t context(1);
zmq::socket_t responder(context, ZMQ_REP);
responder.bind("tcp://*:5556");
while (1)
{
    zmq::message_t recvMsg;
    responder.recv(&recvMsg);
    char* t = static_cast<char*>(recvMsg.data());
    std::string s(t);
    std::vector<std::string> strs;
    boost::split(strs, s, boost::is_any_of("+"));
    aMap["name"] = strs[0];
    aMap["qty"]  = strs[1];
    aMap["flag"] = strs[2];
    ..........
Outputting the split string on the server reveals that sometimes the flag (strs[2]) receives the garbage data.
Please help me if you see something that I'm not seeing.
Thanks
In C#, strings converted to bytes are not null-terminated, while the C++ std::string constructor you are using expects a null-terminated pointer.
So I presume what is happening here is a buffer over-read: you are reading memory that does not belong to the message. The fix is to construct the string with an explicit length, e.g. std::string s(static_cast<char*>(recvMsg.data()), recvMsg.size());
I wrote a C# chat application that uses a new (at least to me) system that I call a request system. I don't know if it has been created before, but for now I think of it as my creation :P
Anyhow, this system works like this:
soc receives a signal
checks the signal
if the data it just received is the number 2, the client software knows that the server is about to send a chat message; if the number is 3, the client knows that the server is about to send the member list, and so on.
The problem is this: when I step through the code in VS2012 it works fine and the chat works properly, but when I run it in debug mode or just run it on my desktop, data seems to go missing, which shouldn't happen, because the code itself works...
Example of the code for sending and receiving messages on the client:
public void RecieveSystem()
{
    while (true)
    {
        byte[] req = new byte[1];
        soc.Receive(req);
        int requestID = int.Parse(Encoding.UTF8.GetString(req));
        if (requestID == 3)
        {
            byte[] textSize = new byte[5];
            soc.Receive(textSize);
            byte[] text = new byte[int.Parse(Encoding.UTF8.GetString(textSize))];
            soc.Receive(text);
            Dispatcher.Invoke(() => { ChatBox.Text += Encoding.UTF8.GetString(text) + "\r\n"; });
        }
    }
}

public void OutSystem(string inputText)
{
    byte[] req = Encoding.UTF8.GetBytes("3");
    soc.Send(req);
    byte[] textSize = Encoding.UTF8.GetBytes(Encoding.UTF8.GetByteCount(inputText).ToString());
    soc.Send(textSize);
    byte[] text = Encoding.UTF8.GetBytes(inputText);
    soc.Send(text);
    Thread.CurrentThread.Abort();
}
and on the server:
public void UpdateChat(string text)
{
    byte[] req = Encoding.UTF8.GetBytes("3");
    foreach (User user in onlineUsers)
        user.UserSocket.Send(req);
    byte[] textSize = Encoding.UTF8.GetBytes(Encoding.UTF8.GetByteCount(text).ToString());
    foreach (User user in onlineUsers)
        user.UserSocket.Send(textSize);
    byte[] data = Encoding.UTF8.GetBytes(text);
    foreach (User user in onlineUsers)
        user.UserSocket.Send(data);
}

public void RequestSystem(Socket soc)
{
        ~~~
        else if (request == 3)
        {
            byte[] dataSize = new byte[5];
            soc.Receive(dataSize);
            byte[] data = new byte[int.Parse(Encoding.UTF8.GetString(dataSize))];
            soc.Receive(data);
            UpdateChat(Encoding.UTF8.GetString(data));
        }
    }
    catch
    {
        if (!soc.Connected)
        {
            Dispatcher.Invoke(() => { OnlineMembers.Items.Remove(decodedName + " - " + soc.RemoteEndPoint); Status.Text += soc.RemoteEndPoint + " Has disconnected"; });
            onlineUsers.Remove(user);
            Thread.CurrentThread.Abort();
        }
    }
}
What could be the problem?
You're assuming that you'll have one packet for each Send call. That's not stream-oriented - that's packet-oriented. You're sending multiple pieces of data which I suspect are coalesced into a single packet, and then you'll get them all in a single Receive call. (Even if there are multiple packets involved, a single Receive call could still receive all the data.)
If you're using TCP/IP, you should be thinking in a more stream-oriented fashion. I'd also encourage you to change the design of your protocol, which is odd to say the least. It's fine to use a length prefix before each message, but why would you want to encode it as text when you've got a perfectly good binary connection between the two computers?
I suggest you look at BinaryReader and BinaryWriter: use TcpClient and TcpListener rather than Socket (or at least use NetworkStream), and use the reader/writer pair to make it easier to read and write pieces of data (either payloads or primitives such as the length of messages). (BinaryWriter.Write(string) even performs the length-prefixing for you, which makes things a lot easier.)
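A minimal sketch of that suggestion, assuming an already-connected TcpClient, and using a raw byte for the request id instead of the UTF-8 text digit:

using System;
using System.IO;
using System.Net.Sockets;

// sender: BinaryWriter.Write(string) length-prefixes the string for you
void SendChat(TcpClient client, string text)
{
    var writer = new BinaryWriter(client.GetStream());
    writer.Write((byte)3); // request id: 3 = chat message
    writer.Write(text);    // 7-bit-encoded length prefix + UTF-8 bytes
    writer.Flush();
}

// receiver: BinaryReader blocks until the requested bytes have arrived
void ReceiveChat(TcpClient client)
{
    var reader = new BinaryReader(client.GetStream());
    byte requestId = reader.ReadByte();
    if (requestId == 3)
    {
        string text = reader.ReadString(); // reads the matching length prefix
        Console.WriteLine(text);
    }
}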
I am trying to send a word over to an Arduino running as a server, from a WPF C# application. Every now and again the complete word is not sent.
C# Code
public void send(String message)
{
    TcpClient tcpclnt = new TcpClient();
    ConState.Content = "Connecting.....";
    try
    {
        tcpclnt.Connect("192.168.0.177", 23);
        ConState.Content = "Connected";
        String str = message;
        Stream stm = tcpclnt.GetStream();
        ASCIIEncoding asen = new ASCIIEncoding();
        byte[] ba = asen.GetBytes(str);
        stm.Write(ba, 0, ba.Length);
        tcpclnt.Close();
    }
    catch (Exception)
    {
        ConState.Content = "Not Connected";
        return;
    }
}
How it is sent to the method:
String mes = "back;";
send(mes);
Arduino code:
if (client.available() > 0) {
    // Read the bytes incoming from the client:
    char thisChar = client.read();
    if (thisChar == ';')
    {
        // End of a word: print a newline
        Serial.println("");
    }
    else {
        // Print because it's not the ';' terminator
        Serial.write(thisChar);
    }
}
The Arduino is using the chat server example. I am sending "back;" and "forward;" across. The results on the serial monitor:
back
forwaback
forward
back
forwaforwar
The problem seems to be with this code:
if (client.available() > 0) {
    // read the bytes incoming from the client:
    char thisChar = client.read();
    ...
}
What it does is:
Check if we have received data from the client
Read a single byte from the client buffer
Exit, and go on to do other things
As the OP pointed out, this comes straight from the Arduino chat server example. In that example, working correctly in loop() depends on the alreadyConnected flag being set right after a new connection is made: if it isn't, the buffer is flushed before any data is read. That's one possible landmine.
Nonetheless, there is no reason not to change the if block into a while loop in the OP's case. In other words, instead of
if (client.available() > 0) {
have
while (client.available() > 0) {
The only reason to have an if statement there is to make sure you still get to do other processing in loop() frequently when a client sends a lot of data: if the reading of client data is done inside a while, the loop will not exit until there is no more data from the client. Since that doesn't seem to be an issue in the asked-about case, the if-to-while change makes sense.