TCP socket programming, strange behavior when reading data - C#

I'm writing a TCP socket program.
When I send a string from the client that is shorter than the previous one and receive it on the server, something strange happens. For example:
First I send '999999999'. The server receives it fine.
After that, I send a shorter string: '7777777'. But the data on the server is '777777799'.
Why does the remnant of the previous data still show up in the next receive?
My code is below:
// Section: client sending data ----
NetworkStream serverStream = clientSocket.GetStream();
byte[] outStream = Encoding.ASCII.GetBytes("999999999");
serverStream.Write(outStream, 0, outStream.Length);
serverStream.Flush();
// Section: Server reading data ----
while ((true))
{
NetworkStream networkStream = clientSocket.GetStream();
networkStream.Read(bytesFrom, 0, (int)clientSocket.ReceiveBufferSize);
dataFromClient = Encoding.ASCII.GetString(bytesFrom);
networkStream.Flush();
}

You're ignoring the amount of data you've read, instead always converting the whole byte array into a string, including any data which is still present from a previous read (or the initial byte array elements). You should have:
int bytesRead = networkStream.Read(bytesFrom, 0, bytesFrom.Length);
dataFromClient = Encoding.ASCII.GetString(bytesFrom, 0, bytesRead);
Note that I've changed the third argument to networkStream.Read, too - otherwise if there's more data than you have space for in the array, you'll get an exception. (If you really want to use ReceiveBufferSize, then create the array for that length.)
Also, you should check that bytesRead is positive - otherwise you'll get an exception if the connection is closed.
Basically, you should pretty much never ignore the return value of Stream.Read.
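As a rough sketch of that advice (not a drop-in replacement; the buffer size and the Console output are arbitrary choices), the server loop could look like this:
// assumes: using System.Net.Sockets; using System.Text;
NetworkStream networkStream = clientSocket.GetStream();
byte[] bytesFrom = new byte[4096]; // explicit buffer size instead of ReceiveBufferSize
while (true)
{
    // Read reports how many bytes it actually placed in the buffer.
    int bytesRead = networkStream.Read(bytesFrom, 0, bytesFrom.Length);
    if (bytesRead <= 0)
    {
        break; // 0 means the client closed the connection
    }
    // Decode only the bytes received on this iteration.
    string dataFromClient = Encoding.ASCII.GetString(bytesFrom, 0, bytesRead);
    Console.WriteLine(dataFromClient);
}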

Related

C# - Seems to be receiving two TCP packets at once?

I'm making a chatroom application, and I send TCP packets to communicate between the server and the client.
I have the following code:
string returnMessage = "[EVT]USERSUCCESS";
bytes = Encoding.ASCII.GetBytes(returnMessage);
info.WriteToStream(bytes);
foreach (ConnectionInfo con in connections)
{
info.WriteToStream(bytes);
bytes = Encoding.ASCII.GetBytes("[EVT]USERJOIN;" + username);
con.WriteToStream(bytes);
}
However, when the client reads this, the response is:
[EVT]USERSUCCESS[EVT]USERJOIN;Name
Which seems as though it is receiving both packets at once..?
This is the code I have for receiving:
static void ServerListener()
{
while (true)
{
byte[] bytes = new byte[1024];
int numBytes = stream.Read(bytes, 0, bytes.Length);
string message = Encoding.ASCII.GetString(bytes, 0, numBytes);
if (HandleResponse(message) && !WindowHasFocus())
{
player.Play();
}
}
}
Which is run as a separate thread. HandleResponse() is fully working.
Thanks in advance!
TCP is a stream protocol, not a packet protocol.
You could get the bytes in multiple reads or in a single read, as you are seeing here.
What you need to do is add a byte that signifies the end of a message (such as a null byte).
Pseudo code (a C# sketch follows below):
Have a buffer that you append every byte received to.
If there is a null byte in the buffer, split the buffer at the null byte.
Handle the left side as a complete message. Keep the right side in the buffer until you get a new null byte.
Advanced TCP:
You may want to add your own error checking in your "packet".
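A minimal C# sketch of that pseudo code, assuming a null byte ('\0') as the delimiter and reusing the stream and HandleResponse names from the question:
// assumes: using System.Text; and a connected NetworkStream called 'stream'
StringBuilder pending = new StringBuilder();
byte[] buffer = new byte[1024];
while (true)
{
    int numBytes = stream.Read(buffer, 0, buffer.Length);
    if (numBytes <= 0)
        break; // connection closed
    pending.Append(Encoding.ASCII.GetString(buffer, 0, numBytes));
    // Extract every complete message that ends with the '\0' delimiter.
    int delimiter;
    while ((delimiter = pending.ToString().IndexOf('\0')) >= 0)
    {
        string message = pending.ToString(0, delimiter);
        pending.Remove(0, delimiter + 1); // drop the message and its delimiter
        HandleResponse(message);
    }
}
The sender would then write a single '\0' after each message so the receiver knows where it ends.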

Specified argument was out of the range of valid values. Parameter name: size & Serial Port Communication

I need to create an application which requires communicating to an existent software using TCP/IP, where both mine and the other application will be using the port number specified below.
private void frmScan_Load(object sender, EventArgs e)
{
clientSocket.Connect("100.100.100.30", 76545);
}
public void msg(string mesg)
{
textBox1.Text = textBox1.Text + Environment.NewLine + " >> " + mesg;
}
private void cmdSCANok_Click(object sender, EventArgs e)
{
msg("Client Started");
NetworkStream serverStream = clientSocket.GetStream();
byte[] outStream = Encoding.ASCII.GetBytes("PCK|SCAN|5025066840471");
serverStream.Write(outStream, 0, outStream.Length);
serverStream.Flush();
byte[] inStream = new byte[10025];
serverStream.Read(inStream, 0, (int)clientSocket.ReceiveBufferSize);
string returndata = Encoding.ASCII.GetString(inStream, 0, inStream.Length);
msg("Data from Server : " + returndata);
}
The program I am communicating with has a built-in command language: it understands the code that I send and returns data according to it. So in the code above I sent a three-part command, "PCK|SCAN|5025066840471", which will find a specific item in the database. When it runs, I get an error on this line:
serverStream.Read(inStream, 0, (int)clientSocket.ReceiveBufferSize);
the error shows the following:
"Specified argument was out of the range of valid values.
Parameter name: size"
I followed the tutorial I saw on this website: http://csharp.net-informations.com/communications/csharp-client-socket.htm - but I did it slightly differently. Instead of putting
string returndata = Encoding.ASCII.GetString(inStream);
I wrote:
string returndata = Encoding.ASCII.GetString(inStream, 0, inStream.Length);
I am extremely confused about why I am getting these problems. To be honest, I don't understand much of what the code is doing; I have a rough idea, but not enough to troubleshoot this. Can someone help, please?
Much appreciated!
PS: I am programming for Windows CE (portable device) on Visual Studio 2010.
Your code is a great example of how not to do TCP communication. I've seen this code copied over and over many times, and I'd be very happy to point you to a good tutorial on TCP - too bad I haven't seen one yet :)
Let me point out some errors first:
TCP doesn't guarantee you the packet arrives as one bunch of bytes. So (theoretically) the Write operation could result in a split, requiring two reads on the other side. Sending data without headers over TCP is a very bad idea - the receiving side has no idea how much it has to read. So you've got two options - either write the length of the whole bunch of data before the data itself, or use a control character to end the "packet"
The first point should also clarify that your reading is wrong as well. It may take more than a single read operation to read the whole "command", or a single read operation might give you two commands at once!
You're reading ReceiveBufferSize bytes into a 10025 long buffer. ReceiveBufferSize might be bigger than your buffer. Don't do that - read a max count of inStream.Length. If you were coding in C++, this would be a great example of a buffer overflow.
As you're converting the data to a string, you're expecting the whole buffer is full. That's most likely not the case. Instead, you have to store the return value of the read call - it tells you how many bytes were actually read. Otherwise, you're reading garbage, and basically having another buffer overflow.
So a much better (though still far from perfect) implementation would be like this:
NetworkStream serverStream = clientSocket.GetStream();
byte[] outStream = Encoding.ASCII.GetBytes("PCK|SCAN|5025066840471");
// It would be much nicer to send a terminator or data length first,
// but if your server doesn't expect that, you're out of luck.
serverStream.Write(outStream, 0, outStream.Length);
// When using magic numbers, at least use nice ones :)
byte[] inStream = new byte[4096];
// This will read at most inStream.Length bytes - it can be less, and it
// doesn't tell us how much data there is left for reading.
int bytesRead = serverStream.Read(inStream, 0, inStream.Length);
// Only convert bytesRead bytes - the rest is garbage
string returndata = Encoding.ASCII.GetString(inStream, 0, bytesRead);
Oh, and I have to recommend this essay on TCP protocol design.
It talks about many of the misconceptions about TCP, most importantly see the Message Framing part.
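For illustration only (the server in this question expects raw text, so this is not a drop-in fix), a length-prefixed exchange might look roughly like this:
// Sender side: prefix the payload with its length as a 4-byte little-endian int.
byte[] payload = Encoding.ASCII.GetBytes("PCK|SCAN|5025066840471");
byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
serverStream.Write(lengthPrefix, 0, lengthPrefix.Length);
serverStream.Write(payload, 0, payload.Length);
// Receiver side (conceptually): read exactly 4 bytes, decode the length with
// BitConverter.ToInt32, then keep calling Read until that many payload bytes have
// arrived. A loop is required because Read may return fewer bytes than asked for.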
NetworkStream.Read method has the following check inside:
if (size < 0 || size > (buffer.Length - offset))
    throw new ArgumentOutOfRangeException("size");
In your case:
size = clientSocket.ReceiveBufferSize
offset = 0
buffer = inStream
The error you received means that clientSocket.ReceiveBufferSize > inStream.Length. In other words, you are asking to read more bytes than your buffer can hold. Try the following code instead:
...
var count = serverStream.Read(inStream, 0, inStream.Length);
string returndata = Encoding.ASCII.GetString(inStream, 0, count);
See also an example here.

Best way to indicate that all data has been received over the network?

I've got a client / server application that works pretty well, but it's missing one crucial piece of behavior to make it a bit more solid.
Right now, it's far from "strong" in terms of network capabilities. I'm trying to get it there, and research has lead me to believe that I need some sort of protocol in place to ensure that no data is lost during network transmissions.
I've heard of a few methods. One that I think will work best for our situations is to use a terminator, something like an <EOF> tag. My issue is that I'm not sure of the best way to implement this.
Here's a couple code snippets that I'll be modifying to include a terminator after figuring out the best solution.
Client:
TcpClient client = new TcpClient();
client.Connect(hostname, portNo);
using (var stream = client.GetStream())
{
//send request
stream.Write(data, 0, data.Length);
stream.Flush();
//read server response
if (stream.CanRead)
{
byte[] buffer = new byte[1024];
string response = "";
int bytesRead = 0;
do
{
bytesRead = stream.Read(buffer, 0, buffer.Length);
response += Encoding.ASCII.GetString(buffer, 0, bytesRead);
} //trying to replace 'DataAvailable', it doesn't work well
while (stream.DataAvailable);
}
}
Note that I'm trying to replace the stream.DataAvailable method of checking for more data in the stream. It's been causing problems.
Server:
var listener = new TcpListener(IPAddress.Any, portNo);
listener.Start();
var client = listener.AcceptTcpClient();
using (var stream = client.GetStream())
{
var ms = new System.IO.MemoryStream();
byte[] buffer = new byte[4096];
int bytesRead = 0;
do
{
bytesRead = stream.Read(buffer, 0, buffer.Length);
ms.Write(buffer, 0, bytesRead);
} //also trying to replace this 'stream.DataAvailable'
while (stream.DataAvailable);
ms.Position = 0;
string requestString = Encoding.UTF8.GetString(ms.ToArray());
ms.Position = 0;
/*
process request and create 'response'
*/
byte[] responseBytes = Encoding.UTF8.GetBytes(response);
stream.Write(responseBytes, 0, responseBytes.Length);
stream.Flush();
}
So, given these two code examples, how can I modify these to both include and check for some sort of data terminator that indicates it's safe to stop reading data?
You can rely on TCP transmitting all the data before the FIN. The problem with your code is that DataAvailable is not a valid test for end of stream. It is only a test for data being available right now.
So you are terminating your reading loop prematurely, and thus missing data at the receiver and getting resets at the sender.
You should just block in the Read() method until you receive the end-of-stream indication, which in C# is Read() returning 0.
You don't need your own additional EOS indicator.
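A minimal sketch of that on the server side, reusing the names from the question; it only works if the client closes (or shuts down) its send side after writing:
var ms = new System.IO.MemoryStream();
byte[] buffer = new byte[4096];
int bytesRead;
// Read returns 0 only when the peer has finished sending and closed its side.
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    ms.Write(buffer, 0, bytesRead);
}
string requestString = Encoding.UTF8.GetString(ms.ToArray());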
We ended up using an EOS (end of stream) indicator on both ends of our project. I won't post the full code example, but here's a small snippet of how it works:
stream.Write(data, 0, data.Length);
stream.WriteByte(Convert.ToByte(ConsoleKey.Escape));
stream.Flush();
On the receiving end of this stream, the loop reads data byte-by-byte. Once it receives the ConsoleKey.Escape byte, it terminates the loop. It works!
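For reference, the matching receive loop could look roughly like this, assuming the same stream variable and a payload that never contains the escape byte itself:
byte terminator = Convert.ToByte(ConsoleKey.Escape); // 0x1B, matching the sender
var received = new System.IO.MemoryStream();
int next;
// ReadByte blocks until a byte arrives and returns -1 when the connection closes.
while ((next = stream.ReadByte()) != -1)
{
    if (next == terminator)
        break; // full message received
    received.WriteByte((byte)next);
}
string message = Encoding.ASCII.GetString(received.ToArray());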

comparing values sent to/from server by NetworkStream

Does anyone know why the string "kamote" sent to the server and the string "kamote" received back from the server are not the same?
CLIENT
tcpClient = new TcpClient();
tcpClient.Connect(ServerIP, Port);
connectionState = (HandShake("kamote", tcpClient)) ? "Connected to " + ServerIP.ToString() : "Host unreachable.";
private bool HandShake(String str, TcpClient tcpClient)
{
using (NetworkStream ns = tcpClient.GetStream())
{
byte[] toServer = Encoding.ASCII.GetBytes(str);
ns.Write(toServer,0,toServer.Length);
ns.Flush();
byte[] fromServer = new byte[10025];
ns.Read(fromServer, 0, (int)tcpClient.ReceiveBufferSize);
return Encoding.ASCII.GetString(fromServer).Equals(str);
}
}
SERVER
TcpClient tcpClient = new TcpClient();
tcpClient = tcpListener.AcceptTcpClient();
NetworkStream ns = tcpClient.GetStream();
byte[] fromClient = new byte[10025];
ns.Read(fromClient, 0, (int)tcpClient.ReceiveBufferSize);
byte[] toClient = fromClient;
ns.Write(toClient, 0, toClient.Length);
ns.Flush();
Client sent "kamote"
Server received "kamote"
Server sent "kamote"
Client received "kamote"
HandShake() always returns false. How can I fix this?
As in the previous question you asked, you're not keeping track of the number of bytes you received. So what's happening is this:
On the client, you send the string "kamote".
On the server, it receives that string into a buffer that's 10025 bytes long.
The server then sends the entire buffer back to the client -- all 10025 bytes
The client receives all or part of those 10025 bytes and converts them to a string.
The string that gets converted is really "kamote" with a bunch of 0's after it.
You must use the return value from Read to know how many bytes you received.
Did you try limiting the string length to the actual read bytes like this:
noOfBytes = ns.Read(bytes, 0, ...);
Encoding.ASCII.GetString(bytes, 0, noOfBytes);
You are including a lot of 0 characters, since you are including the entire fromServer in getstring. 0s don't print, but they are there. You must tell it the correct number of bytes to decode.
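Putting both answers together, a corrected HandShake might look roughly like this (still a sketch: it assumes the whole echo arrives in a single Read, which TCP does not guarantee):
private bool HandShake(String str, TcpClient tcpClient)
{
    using (NetworkStream ns = tcpClient.GetStream())
    {
        byte[] toServer = Encoding.ASCII.GetBytes(str);
        ns.Write(toServer, 0, toServer.Length);
        byte[] fromServer = new byte[1024];
        // Only decode the bytes that were actually received.
        int bytesRead = ns.Read(fromServer, 0, fromServer.Length);
        return Encoding.ASCII.GetString(fromServer, 0, bytesRead).Equals(str);
    }
}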

Strange behaviour of networkstream.read() in C#

When I write two separate byte arrays to a network stream, I sometimes don't get the first byte array.
Why is that?
For example, this fails: the header is sometimes not received by Read() on the other side.
byte[] header = msg.getByteHeader();
byte[] data = msg.getByteData();
clientStream.Write(header, 0, header.Length);
clientStream.Write(data, 0, data.Length);
clientStream.Flush();
However, this succeeds:
NetworkStream clientStream = tcpClient.GetStream();
byte[] header = msg.getByteHeader();
byte[] data = msg.getByteData();
byte[] message = new byte[header.Length + data.Length];
int pos = 0;
Array.Copy(header, 0, message, pos, header.Length);
pos += header.Length;
Array.Copy(data, 0, message, pos, data.Length);
clientStream.Write(message, 0, message.Length);
This is how my Read() looks
try
{
//blocks until a client sends a message
bytesRead = clientStream.Read(message, 0, 4);
//string stringData = Encoding.ASCII.GetString(message, 0, bytesRead);
len = BitConverter.ToInt32(message, 0);
//MessageBox.Show(len.ToString());
bytesRead = clientStream.Read(message, 0, 5 + len);
}
I believe this is a timing issue. There's a lag between when you first open socket communication and when you can read the first data from the buffer. It's not instantaneous. You can query the DataAvailable boolean on the network stream before attempting to read. If no data is available, sleep the thread for, say, 100 ms and then try reading again.
You could troubleshoot it by commenting out the second Write and seeing whether your server receives any data.
Your reading mechanism does look very fragile, and I'd agree with Simon Fox that it does not look correct. Why is the second read asking for len + 5 bytes? I would have thought it would only be len bytes, since the first read was for the 4 header bytes.
If I were you I'd add a delimiter to the start of your header transmission. That will allow your receivers to scan for it to determine the start of a packet. With TCP you will often get fragmented transmissions or multiple transmissions bundled into the same read. Things will go wrong once you deploy to real networks such as the internet if you are always relying on getting exactly the number of bytes you request.
Either that, or switch to UDP, where you can rely on having one transmission per packet.
Aren't you overwriting what you read in the first call to Read with the second call? The second argument to Read is the offset at which to start storing the data read; both calls use 0, so the second overwrites the first...
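As a more robust pattern for this kind of header-plus-payload read, here is a sketch that loops until exactly the requested number of bytes has arrived, assuming the first 4 bytes carry the payload length as the first Read in the question suggests (the ReadExactly helper name is illustrative):
// Reads exactly 'count' bytes from the stream, looping because a single
// Read may return fewer bytes than requested.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int bytesRead = stream.Read(buffer, offset, count - offset);
        if (bytesRead <= 0)
            throw new System.IO.EndOfStreamException("Connection closed mid-message.");
        offset += bytesRead;
    }
    return buffer;
}
// Usage: first the 4-byte length header, then the payload it describes.
byte[] headerBytes = ReadExactly(clientStream, 4);
int len = BitConverter.ToInt32(headerBytes, 0);
byte[] payload = ReadExactly(clientStream, len);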
