I have a client-server app which sends data from a C# client to a C++ server. When the server receives a data request, 9 times out of 10 it works fine, but every so often garbage data is appended to the end of the received data on the server side.
For example, instead of receiving the number 1, it will receive 1C or 1#????.
Here are snippets of the client and server code; any help will be appreciated.
C# client
int flagSide = 1;
msg = name;
msg += "+";
msg += "qty";
msg += "+";
msg += flagSide.ToString();
ZeroMQ.ZmqContext context = ZeroMQ.ZmqContext.Create();
ZeroMQ.ZmqSocket socket = context.CreateSocket(SocketType.REQ);
socket.Connect("tcp://111.111.0.111:5556");
socket.Send(Encoding.ASCII.GetBytes(msg.ToCharArray()));
Thread.Sleep(1);
string reply = socket.Receive(Encoding.ASCII);
Console.WriteLine("Received reply = " + reply + "\n");
C++ Server
std::tr1::unordered_map <std::string, std::string> aMap;
zmq::context_t context( 1 );
zmq::socket_t responder( context, ZMQ_REP );
responder.bind ("tcp://*:5556");
while ( 1 )
{
zmq::message_t recvMsg;
responder.recv( &recvMsg );
char* t = static_cast<char*>( recvMsg.data() );
std::string s(t);
std::vector<std::string> strs;
boost::split(strs, s, boost::is_any_of("+"));
aMap["name"] = strs[0];
aMap["qty"] = strs[1];
aMap["flag"] = strs[2];
..........
Outputting the split string on the server reveals that sometimes the flag, strs[2], contains the garbage data.
Please help me if you see something that I'm not seeing.
Thanks
In C#, strings converted to bytes are not null-terminated, and the C++ std::string constructor taking a char* expects a null-terminated pointer.
So I presume what is happening here is a buffer over-read: you are reading memory which does not belong to the string.
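A minimal fix, as a sketch, is to construct the std::string from the message's explicit length instead of relying on a terminator:

std::string s( static_cast<char*>( recvMsg.data() ), recvMsg.size() );

zmq::message_t knows its own size, so no change is needed on the C# side.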
I want to create the same message and send it with C# as I do with C++, where everything works. Note that I have a C# client where I have trouble, a C++ client where everything works fine, and a C++ server that should read messages from both the C# and C++ clients.
Here is how I send the message from C++:
void ConnectAuthserverCommand::SendLogin(tcp::socket &s, const flatbuffers::FlatBufferBuilder &builder) const {
ClientOpcode opc = CLIENT_LOGIN_REQUEST;
flatbuffers::FlatBufferBuilder builder2;
auto email = builder2.CreateString("test@abv.bg");
auto password = builder2.CreateString("test");
auto loginRequest = Vibranium::CreateLoginRequest(builder2, email, password);
builder2.FinishSizePrefixed(loginRequest);
size_t size2 = builder2.GetSize();
uint8_t *buf2 = builder2.GetBufferPointer();
uint8_t *actualBuffer2 = new uint8_t[size2 + 2];
actualBuffer2[1] = (opc >> 8);
actualBuffer2[0] = (opc&0xFF);
memcpy(actualBuffer2 + 2, buf2, size2);
boost::asio::write(s, boost::asio::buffer(actualBuffer2, size2 + 2)); // send the 2-byte opcode plus the payload
delete[] actualBuffer2;
}
ClientOpcode is as follows:
enum ClientOpcode : uint16_t{
CLIENT_AUTH_CONNECTION = 0x001,
CLIENT_LOGIN_REQUEST = 0x002,
CLIENT_NUM_MSG_TYPES = 0x003,
};
What I do is the following: I have a ClientOpcode which I want to put in front of the FlatBuffers message, so I create an array of uint8_t extended by exactly 2 bytes (because sizeof(uint16_t) is 2 bytes). The wire format is therefore a 2-byte opcode followed by the size-prefixed FlatBuffer. Then, on the server, I read the first 2 bytes to get the header. Here is how I do that:
void Vibranium::Client::read_header() {
auto self(shared_from_this());
_packet.header_buffer.resize(_packet.header_size);
boost::asio::async_read(socket,
boost::asio::buffer(_packet.header_buffer.data(), _packet.header_size),
[this, self](boost::system::error_code ec,std::size_t bytes_transferred)
{
if ((boost::asio::error::eof == ec) || (boost::asio::error::connection_reset == ec))
{
Disconnect();
}
else
{
assert(_packet.header_buffer.size() >= sizeof(_packet.headerCode));
std::memcpy(&_packet.headerCode, &_packet.header_buffer[0], sizeof (_packet.headerCode));
if(_packet.headerCode)
read_size();
else
Logger::Log("UNKNOWN HEADER CODE", Logger::FatalError);
}
});
}
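(One aside on the memcpy above: it copies the two header bytes straight into a uint16_t, which matches the client's low-byte-first layout only on a little-endian host. An endianness-independent read, as a sketch, would be:

uint16_t headerCode = static_cast<uint16_t>(_packet.header_buffer[0])
                    | static_cast<uint16_t>(_packet.header_buffer[1]) << 8;

This is likely not the cause of the problem below, just something to be aware of.)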
So far so good; however, I am not able to send a correctly formatted message of the same kind from the C# client. Note that I send exactly the same data; take a look:
Client authClient = GameObject.Find("Client").GetComponent<AuthClient>().client; // This is how I get Client class instance.
ClientOpcode clientOpcode = ClientOpcode.CLIENT_LOGIN_REQUEST;
var builder = new FlatBuffers.FlatBufferBuilder(1);
var email = builder.CreateString("test@abv.bg");
var password = builder.CreateString("test");
var loginRequest = LoginRequest.CreateLoginRequest(builder, email, password);
builder.FinishSizePrefixed(loginRequest.Value);
authClient.Send(builder, clientOpcode);
And here is how I actually prepend the header and send the data in C#:
public static Byte[] PrependClientOpcode(FlatBufferBuilder byteBuffer, ClientOpcode code)
{
var originalArray = byteBuffer.SizedByteArray();
byte[] buffer = new byte[originalArray.Length + 2];
buffer[1] = (byte)((ushort)code / 0x0100);
buffer[0] = (byte)code;
Array.Copy(originalArray, 0, buffer, 2, originalArray.Length);
return buffer;
}
public void Send(FlatBufferBuilder builder, ClientOpcode opcode)
{
byte[] buffer = builder.SizedByteArray();
var bufferToSend = PrependClientOpcode(builder, opcode);
if (bufferToSend.Length > MaxMessageSize)
{
Logger.LogError("Client.Send: message too big: " + bufferToSend.Length + ". Limit: " + MaxMessageSize);
return;
}
if (Connected)
{
// respect max message size to avoid allocation attacks.
if (bufferToSend.Length <= MaxMessageSize)
{
// add to send queue and return immediately.
// calling Send here would be blocking (sometimes for long times
// if other side lags or wire was disconnected)
sendQueue.Enqueue(bufferToSend);
sendPending.Set(); // interrupt SendThread WaitOne()
}
}
else
{
Logger.LogWarning("Client.Send: not connected!");
}
}
ClientOpcode enum on C# is as follows:
public enum ClientOpcode : ushort
{
CLIENT_AUTH_CONNECTION = 0x001,
CLIENT_LOGIN_REQUEST = 0x002,
CLIENT_NUM_MSG_TYPES = 0x003,
}
I think I can use ushort as a replacement for uint16_t in C#; that is why ClientOpcode is a ushort.
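As a quick sanity check of the byte order (a sketch; the expected bytes come from the working C++ capture below):

byte[] opcodeBytes = BitConverter.GetBytes((ushort)ClientOpcode.CLIENT_LOGIN_REQUEST);
// On a little-endian machine opcodeBytes is { 0x02, 0x00 },
// matching the "0200" at the start of the working C++ capture.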
When I send the message, I get an error on the server saying UNKNOWN HEADER CODE. If you look at the C++ server code that reads the header, you'll see that this message is logged when the server is unable to recognize the header code. So somehow I am unable to place the ClientOpcode header correctly in front of the TCP message sent from the C# client.
In order to find out what the differences are, I installed Wireshark on the host and captured both messages. Here they are:
This one is from the correctly working C++ client: [Wireshark capture]
And this one is the dump of the C# client: [Wireshark capture]
As you can see in the second capture, the C# message is longer: the C++ message has a length of 58, while the C# message's length is 62. Why?
The C++ client is sending data:
0200340000000c00000008000c00040008000800000014000000040000000400000074657374000000000b00000074657374406162762e626700
When the C# client is sending:
0000003a0200340000000c00000008000c00040008000800000014000000040000000400000074657374000000000b00000074657374406162762e626700
The C# client is adding 0000003a in front of its message. (Note that 0x0000003A is 58, exactly the length of the rest of the message, so it looks like a four-byte length header is being prepended somewhere in the send path.) If I remove it, the messages should be identical and everything should work.
Why is my C# client adding this extra data in front, and how can I fix it?
I am using Lapsnapper (a transponder timing system) running on Android.
Lapsnapper exposes a TCP/IP server to which a connection can be made, in order to build a custom interface and get other relevant data (transponders etc.) from the system.
I do not understand the Lapsnapper TCP server specification.
I have done some TCP work before, but I am mostly a higher-level programmer and, to be honest, I am a bit out of my depth with this raw TCP stuff.
The spec reads: [spec excerpt not reproduced]
What I don't understand is how to "send" the TCP data.
I don't understand how 0x70, 0x17 equates to 6000 in only 2 bytes...
The same goes for 0x13, 0x00, 0x00, 0x00 = 19, which the spec says is 4 bytes, while the string "19" is 2 bytes?
I am trying to understand what I am reading. Any help would be appreciated as I need to do quite a bit of comms to this server, and I want to understand what I am doing...
I have asked for help from Lapsnapper support, but in the meantime I would like to learn something new as per the above.
What do I actually "send" on the TCP connection? (There is a sketch of my current best guess at the end of this post.)
The spec says that I should expect a message back, but with my current implementation, a connection seems to be established, but I never receive anything back.
Response message to expect: [spec excerpt not reproduced]
My code:
(P.S. This code block works if I make a simple connection to an SMTP server; I get a basic reply from that server. However, I never receive a reply when I talk to the Lapsnapper TCP server using the code below.)
string lapSnapperIP = "10.0.0.131";
int lapsnapperPort = 9001;
string lapSnapperMessageID;
string lapsnapperLengthOfMessage;
string lapsnapperProductID;
string lapsnapperServerVersion;
string lapsnapperPasswordLength;
string lapsnapperPassword;
lapSnapperMessageID = "6000";
lapsnapperLengthOfMessage = "19"; //to implement
lapsnapperProductID = "50";
lapsnapperServerVersion = "100000";
lapsnapperPasswordLength = "4";
lapsnapperPassword = "1234";
string lapSnapperDataSend;
lapSnapperDataSend = lapSnapperMessageID + lapsnapperLengthOfMessage + lapsnapperProductID + lapsnapperServerVersion + lapsnapperPasswordLength + lapsnapperPassword;
Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
s.Connect(lapSnapperIP, lapsnapperPort);
byte[] sendMessage = Encoding.UTF8.GetBytes(lapSnapperDataSend);
byte[] receiveBytes = new byte[256];
int i = 0;
string receivedMessage = "";
//send data
i = s.Send(sendMessage);
//receive data
i = s.Receive(receiveBytes); // (no reply here...)
receivedMessage = Encoding.UTF8.GetString(receiveBytes, 0, i); // decode only the bytes actually received
Thanks
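Update: for anyone finding this later, here is my current best guess at what the spec means, i.e. raw little-endian binary fields written with BinaryWriter instead of concatenated ASCII strings. The field widths marked "assumed" are my guesses; they happen to add up to the stated message length of 19 bytes:

using System.IO;
using System.Text;

static byte[] BuildLoginMessage()
{
    var ms = new MemoryStream();
    var writer = new BinaryWriter(ms);             // BinaryWriter writes little-endian
    writer.Write((ushort)6000);                    // message ID, 2 bytes: 0x70, 0x17
    writer.Write(19);                              // message length, 4 bytes: 0x13, 0x00, 0x00, 0x00
    writer.Write((byte)50);                        // product ID, 1 byte (assumed width)
    writer.Write(100000);                          // server version, 4 bytes (assumed width)
    writer.Write(4);                               // password length, 4 bytes (assumed width)
    writer.Write(Encoding.ASCII.GetBytes("1234")); // password, 4 bytes
    return ms.ToArray();                           // 2 + 4 + 1 + 4 + 4 + 4 = 19 bytes
}

With that, the send becomes s.Send(BuildLoginMessage()); instead of sending the UTF-8 bytes of a concatenated string.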
I've been breaking my head over a bug in this system I've been building. Basically, I use sockets to communicate between two C# applications, or rather between a Unity C# script server and a C# client application.
With manual tests, the system works perfectly fine, no anomalies whatsoever.
In order to test performance and multi-user functionality, I wrote a tester class which launches multiple threads (clients) and has each fire X messages at the server. Here's where my problem occurs... sometimes.
When a Socket sends or receives, it returns an integer containing the number of bytes that were sent/received. When the problem occurs, I can see that the correct number of bytes arrived at the server. However, after converting the bytes into a string, I'm suddenly left with an empty string instead of the message I'd normally see.
I'm at a loss as to what's causing this problem. I'm using Encoding.Default.GetString() to translate the bytes into a string.
Any help is appreciated!
David
public void ReceiveFromClient (Socket handlerSocket)
{
serverBuffer = new byte[iBufferSize]; //iBufferSize = 8192;
int i = handlerSocket.Receive (serverBuffer);
Debug.Log ("Bytes received: " + i);
string message = Encoding.UTF8.GetString (serverBuffer, 0, i);
Debug.Log ("Message received: " + message);
//Do stuff with the message
}
bool SendMessageToUnity(string input)
{//returns a bool saying whether the message was sent or not
if (clientSocket != null)
{
if (clientSocket.Connected)
{
byte[] bytes = Encoding.UTF8.GetBytes(input+"|");
txtOutput.BeginInvoke(new Action(() => txtOutput.AppendText("Sending message: " + Encoding.UTF8.GetString(bytes) + Environment.NewLine)));
int i = clientSocket.Send(bytes);
txtOutput.BeginInvoke(new Action(() => txtOutput.AppendText("Sending "+i+" bytes. "+ Environment.NewLine)));
return true;
}
}
return false;
}
Look for a zero byte ('\0') in your array of bytes before converting it to a string.
private string GetString(byte[] data)
{
data = data.Where(b => b != 0).ToArray();
return Encoding.UTF8.GetString(data);
}
If you receive the byte array correctly, then the problem is in the encoding.
Check the encoding used by the sender; it is usually UTF-8, but you have to verify it.
Then: var inputStr = Encoding.UTF8.GetString(InputByteArray);
I have a C# named pipe server created like so:
NamedPipeServerStream pipeServer = new NamedPipeServerStream(IVConstants.PIPENAME, PipeDirection.InOut);
pipeServer.WaitForConnection();
pipeWriter = new StreamWriter(pipeServer);
pipeWriter.AutoFlush = true;
try
{
pipeWriter.WriteLine("You are Connected!!!");
}
catch (IOException e)
{
Console.WriteLine("ERROR: {0}", e.Message);
}
Writing out small strings like the one above gives no problem.
However, when I start chugging out huge strings, e.g. 1500 chars, the pipe hangs and stays hung until I kill the client it is trying to send to. The client is a Java app.
I can see that it DOES send stuff to the client, after which the hang happens.
The client is a Java app receiving with this:
RandomAccessFile pipe = new RandomAccessFile("\\\\.\\pipe\\mypipe", "rw");
while(true)
{
String received = pipe.readLine();
processEvent(received);
System.out.println("Response: " + received );
}
The client doesn't throw an exception, and I can see the System.out output after the readLine().
So what gives?
Bah! Pure foolishness on my part.
It seems something was hanging in
processEvent(received);
I thought it was getting to
System.out.println("Response: " + received );
But it wasn't: processEvent(...) had a duplicate System.out.println("Response: " + received); in it, hence my confusion.
It had nothing to do with the named pipe after all.
Thanks guys =)
In case this helps anyone: my C# pipe server was hanging because I was trying to send a char array; I had to convert it to a string first.
This fails:
char[] buf = new char[1024];
// (move stuff to buf...)
var pipeServer = new NamedPipeServerStream(...);
StreamWriter writer = new StreamWriter(pipeServer);
writer.WriteLine(buf); // hangs if char[]
This works:
string str = "";
for (int i = 0; i < buf.Length && buf[i] != '\0'; i++) // check the bound before reading buf[i]
str = str + Convert.ToChar(buf[i]);
writer.WriteLine(str);
I'm new at C#, so there's probably a smarter way of converting to a string.
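For what it's worth, a shorter conversion (a sketch, assuming the buffer is padded with '\0' after the content):

int len = Array.IndexOf(buf, '\0');
if (len < 0) len = buf.Length;             // no terminator found: use the whole buffer
writer.WriteLine(new string(buf, 0, len));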
I'm currently working on an asynchronous TCP client. I am able to send and receive messages. However, the following code is driving me crazy at the moment:
int rx = theSockId.thisSocket.EndReceive(asyn);
char[] rcvd = new char[rx + 1];
System.Text.Decoder d = System.Text.Encoding.ASCII.GetDecoder();
int charLen = d.GetChars(theSockId.dataBuffer, 0, rx, rcvd, 0);
System.String szData = new System.String(rcvd);
Normally, everything works fine, but as soon as a message starts with a dollar sign ($), I only see that one character.
I have been searching for a long time, but I couldn't find any solution...
Receive can complete when any amount of data has arrived at the socket, not necessarily a whole "message". You have to buffer the received data until a whole message (as defined by your protocol) has been received.
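As a sketch of that idea (length-prefixed framing over a blocking socket; the 4-byte length header and all names are illustrative assumptions, not something from the original code):

using System;
using System.Net.Sockets;

static byte[] ReceiveExactly(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        // Receive returns as soon as *some* data is available, so loop until full.
        int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // peer closed
        offset += read;
    }
    return buffer;
}

static byte[] ReceiveMessage(Socket socket)
{
    byte[] header = ReceiveExactly(socket, 4);    // fixed-size length prefix
    int length = BitConverter.ToInt32(header, 0); // little-endian on typical hosts
    return ReceiveExactly(socket, length);        // then exactly one whole message
}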