C# Socket Trick

I am sending and receiving bytes between a server and a client. The server regularly sends a message in the form of bytes and the client receives it.
The message format is:
{Key:Value,Key:Value,Key:Value}
Now on the client side, instead of receiving this message once, I am receiving multiple copies of it (the last one truncated), which is not what I want.
The client is receiving this:
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,
Can someone help me figure out the problem?
Updated
This is the code that sends the instructions:
var client = (param as System.Net.Sockets.Socket);
while (true)
{
    try
    {
        var instructions = "{";
        instructions += "Window:" + window + ",";
        instructions += "Time:" + System.DateTime.Now.ToShortTimeString() + ",";
        instructions += "Message:" + msgToSend + "";
        instructions += "}";
        var bytes = System.Text.Encoding.Default.GetBytes(instructions);
        client.Send(bytes, 0, bytes.Length, System.Net.Sockets.SocketFlags.None);
    }
    catch (Exception ex)
    {
        continue;
    }
}
This is the code that receives on the client side:
while (true)
{
    try
    {
        var data = new byte[tcpClient.ReceiveBufferSize];
        stream.Read(data, 0, tcpClient.ReceiveBufferSize);
        instructions = System.Text.Encoding.Default.GetString(data.ToArray());
    }
    catch (Exception ex)
    {
        continue;
    }
}

Okay, a few problems with this code:
You're using Encoding.Default, which is almost certainly not what you want to do
You're always decoding the whole string, rather than just the amount you've actually managed to read - you're ignoring the return value of stream.Read
You're just continuing after an exception, with no logging, error handling or anything
As Dean says, you're repeatedly sending the same data
Ideally, it would be useful for your messages to have a prefix saying how long each one is, in bytes. Then on the receiving side you can read that length, then loop to repeatedly read into a buffer until you've read all the data you need. Then perform the decoding.
If you can't change the protocol, you'll still need to loop round, but checking for the end delimiter ("}" presumably) explicitly - and noting that you may receive data from the next message which you'll have to store until you next want to read.
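For instance, here is a minimal sketch of the length-prefix approach on the receiving side; the 4-byte little-endian header and the ReadExact helper name are assumptions for illustration, not part of the original protocol:
// Reads exactly count bytes, looping because Read may return fewer.
static byte[] ReadExact(System.IO.Stream stream, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new System.IO.EndOfStreamException("Connection closed mid-message");
        offset += read;
    }
    return buffer;
}

// Reads one message: a 4-byte length header, then exactly that many payload bytes.
static string ReadMessage(System.IO.Stream stream)
{
    int length = System.BitConverter.ToInt32(ReadExact(stream, 4), 0);
    byte[] payload = ReadExact(stream, length);
    return System.Text.Encoding.UTF8.GetString(payload);
}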

You've got:
while (true)
In the sender: it's just going to keep sending the same thing over and over...
Also, if you get an exception trying to send or receive the data, you can't just try again and expect it to work. Depending on the exact error, you might need to reestablish the connection, or it might be that the network has gone away completely. In any case, simply retrying immediately is almost always going to be the wrong thing to do.

The problem has been figured out, as Dean Harding said.
But besides that, you should be clearer about which side is the "client" and which is the "server".
Basically:
Only the server side should wait (in a loop) for messages. The client (sender) sends messages when needed, or when some condition is met.
You can send messages in a loop, but you should control and regulate it with a Sleep or a Timer. That way you spare resources and give the receiver more time to process each message completely.
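For instance, a sketch of the sending loop throttled with Thread.Sleep; the one-second interval, the keepSending flag, and the BuildInstructions helper are made up for illustration:
while (keepSending) // some flag you control, instead of a bare while (true)
{
    var instructions = BuildInstructions(); // composes the {Key:Value,...} string
    var bytes = System.Text.Encoding.UTF8.GetBytes(instructions);
    client.Send(bytes, 0, bytes.Length, System.Net.Sockets.SocketFlags.None);
    System.Threading.Thread.Sleep(1000); // throttle to one message per second
}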

You are sending your data through TCP. TCP is a stream-oriented protocol, so you know the client will receive the same stream of bytes in the same order, but you lose the packet boundaries. Your protocol seems to be packet-oriented instead. You then have a choice:
switch to a packet-oriented protocol (UDP), or
delimit the packets yourself on the receiving side (as Jon Skeet said, by looking for the delimiters).
Keep in mind that TCP has reliability features not found in UDP. If reliability is not a concern, switch to UDP. Otherwise, finding the delimiters on the client side should be easier than implementing your own reliability layer.
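If you stay with TCP and the existing delimiter, the receiving loop might look like this sketch. It assumes "}" never occurs inside a value, and HandleMessage is a placeholder; note also that a production version should use a System.Text.Decoder, since a multi-byte character could be split across two reads:
var pending = new System.Text.StringBuilder();
var buffer = new byte[4096];
while (true)
{
    int read = stream.Read(buffer, 0, buffer.Length);
    if (read == 0)
        break; // the remote side closed the connection
    pending.Append(System.Text.Encoding.UTF8.GetString(buffer, 0, read));

    int end;
    while ((end = pending.ToString().IndexOf('}')) >= 0)
    {
        string message = pending.ToString(0, end + 1);
        pending.Remove(0, end + 1); // keep any bytes of the next message
        HandleMessage(message);
    }
}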

Related

How to Properly Handle a Socket Client disconnection in a Socket/Thread list?

I'm fairly new to programming with sockets. I have a class, called clientInfo, whose instance variables include a client's socket and a client's thread. I created a list of clientInfos to keep track of the connections going into the server, where I've successfully managed to have multiple clients send messages to each other.
listOfClients.Add(new clientInfo(listen.Accept()));
The thread of the clientInfo is in an infinite loop to always receive incoming data, as shown in the code below. The idea that I had was, if I get an exception from the server trying to receive data from a disconnected client, all I should do is remove the client in the list causing the exception, right?
I would iterate through the clients to find exactly which spot in the list the error is coming from, by sending a heartbeat message to each one. Should sending fail, I would then have the exact location of the problematic socket, and I would close its socket, abort its thread, and remove that clientInfo from the list, right? I hope I have the right idea for that logic. However, when I do so, I still haven't truly solved the exception, which is why (I think) the code shoots itself in the foot by closing all the other connections as well. Honestly, I'm at a loss as to how to solve this.
There's also the unfortunate factor of sending packets to each socket in the list, where the ObjectDisposedException is raised should I close, abort, and remove a socket from a list. Is there a way to completely remove an item from the list as if it were never added in the first place? I assumed removeAt(i) would have done so, but I'm wrong about that.
I've read many answers stating that the best way to handle clients disconnecting is to use socket.close() and list.removeAt(i). My desired goal is that, even if 98 out of 100 clients unexpectedly lose connection, I would like the remaining two clients to still be able to send each other packets through the server. Am I on the right path or is my approach completely wrong?
byte[] buff;
int readBytes;
while (true) {
    try {
        buff = new byte[clientSocket.SendBufferSize];
        readBytes = clientSocket.Receive(buff);
        //This line raises an exception should a client disconnect unexpectedly.
        if (readBytes > 0) {
            Packet pack = new Packet(buff);
            handleData(pack);
        }
    }
    catch (SocketException e) {
        Console.WriteLine("A client disconnected!");
        for (int i = 0; i < listOfClients.Count; i++) {
            try {
                string message = "This client is alive!";
                Packet heartbeat = new Packet(Packet.PacketType.Send, "Server");
                heartbeat.data.Add(message);
                clientSocket.Send(heartbeat.toByte());
            }
            catch (SocketException ex) {
                Console.WriteLine("Removing " + listOfClients[i].clientEndPointy.Address + ":" + listOfClients[i].clientEndPointy.Port);
                //listOfClients[i].clientSocket.Disconnect(reuseSocket: true);
                listOfClients[i].clientSocket.Close();
                listOfClients[i].clientThread.Abort();
                listOfClients.RemoveAt(i);
            }
        }
    }
}
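One pattern that avoids both the skipped-index problem and the ObjectDisposedException is to iterate backwards (so RemoveAt doesn't shift the entries you haven't visited yet) and to probe each client's own socket rather than the one that raised the exception. A rough sketch, reusing the Packet and clientInfo names from the question:
// Probe every client with a heartbeat and prune the dead ones.
for (int i = listOfClients.Count - 1; i >= 0; i--)
{
    try
    {
        Packet heartbeat = new Packet(Packet.PacketType.Send, "Server");
        heartbeat.data.Add("This client is alive!");
        // Send on this client's own socket, not the socket that threw.
        listOfClients[i].clientSocket.Send(heartbeat.toByte());
    }
    catch (SocketException)
    {
        listOfClients[i].clientSocket.Close();
        listOfClients.RemoveAt(i); // backwards iteration keeps indices valid
    }
}
With the socket closed, that client's receive loop will see an exception or a zero-byte read and can simply return, so there is no need for Thread.Abort.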

Simple One Thread Socket - TCP Server

First, I don't know if Stack Overflow is the best site to post this kind of message, but I don't know another site like it.
In order to properly understand TCP programming in C#, I decided to implement all the possible approaches from scratch. Here is what I want to cover (not necessarily in this order):
- Simple One Thread Socket Server (this article)
- Simple Multiple Threads Socket Server (I don't know how yet, because threads are complicated)
- Simple Thread Socket Server (put the client management in another thread)
- Multiple Threads Socket Server
- Using TcpListener
- Using async / await
- Using tasks
The ultimate objective is to know how to build the best TCP server, not by just copy/pasting pieces of code, but by properly understanding everything.
So, this is my first part: a single-threaded TCP server.
Here is my code, but I don't expect anybody to correct much, because it's largely copied from MSDN: http://msdn.microsoft.com/en-us/library/6y0e13d3(v=vs.110).aspx
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Text;

namespace SimpleOneThreadSocket
{
    public class ServerSocket
    {
        private int _iPport = -1;
        private static int BUFFER_SIZE = 1024;
        private Socket _listener = null;

        public ServerSocket(int iPort)
        {
            // Create a TCP/IP socket.
            this._listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            // Save the port
            this._iPport = iPort;
        }

        public void Start()
        {
            byte[] buffer = null;
            String sDatasReceived = null;

            // Bind the socket to loopback address
            try
            {
                this._listener.Bind(new System.Net.IPEndPoint(System.Net.IPAddress.Loopback, _iPport));
                this._listener.Listen(2);
            }
            catch (Exception e)
            {
                System.Console.WriteLine(e.ToString());
            }

            // Listening
            try
            {
                Console.WriteLine("Server listening on 127.0.0.1:" + _iPport);
                while (true)
                {
                    Socket client = this._listener.Accept();
                    Console.WriteLine("Incoming connection from : " + IPAddress.Parse(((IPEndPoint)client.RemoteEndPoint).Address.ToString()) + ":" + ((IPEndPoint)client.RemoteEndPoint).Port.ToString());

                    // An incoming connection needs to be processed.
                    while (true)
                    {
                        buffer = new byte[BUFFER_SIZE];
                        int bytesRec = client.Receive(buffer);
                        sDatasReceived += Encoding.ASCII.GetString(buffer, 0, bytesRec);

                        if (sDatasReceived.IndexOf("<EOF>") > -1)
                        {
                            // Show the data on the console.
                            Console.WriteLine("Text received : {0}", sDatasReceived);
                            // Echo the data back to the client.
                            byte[] msg = Encoding.ASCII.GetBytes(sDatasReceived);
                            client.Send(msg);
                            sDatasReceived = "";
                            buffer = null;
                        }
                        else if (sDatasReceived.IndexOf("exit") > -1)
                        {
                            client.Shutdown(SocketShutdown.Both);
                            client.Close();
                            sDatasReceived = "";
                            buffer = null;
                            break;
                        }
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }
    }
}
But I have some questions about it:
The Listen method on Socket has a parameter: backlog. According to MSDN, backlog is the number of available connections. I don't know why, when I pass 0, I can still connect to my server with multiple Telnet sessions. EDIT: 0 and 1 both allow 2 connections (1 current, 1 pending), 2 allows 3 connections (1 current, 2 pending), etc... So I must have misunderstood what MSDN means.
Can you confirm that the Accept method takes each connection one after another, and that this is why I see text from different Telnet sessions in my server?
Can you confirm that (my server being a C# library) I can't kill my server (with this kind of code) without killing the process? It would be possible with threads, but that will come later.
If something is wrong in my code, please help me :)
I will come back soon with a simple multi-threaded socket server, but I don't know how yet (I think there's one step to take before using threads or async/await).
First off, do your best not to even learn this. If you can possibly use a SignalR server, then do so. There is no such thing as a "simple" socket server at the TCP/IP level.
If you insist on the painful route (i.e., learning proper TCP/IP server design), then there's a lot to learn. First, the MSDN examples are notoriously bad starting points; they barely work and tend to not handle any kind of error conditions, which is absolutely necessary in the real world when working at the TCP/IP level. Think of them as examples of how to call the methods, not examples of socket clients or servers.
I have a TCP/IP FAQ that may help you, including a description of the backlog parameter. This is how many connections the OS will accept on your behalf before your code gets around to accepting them, and it's only a hint anyway.
To answer your other questions: A single call to Accept will accept a single new socket connection. The code as-written has an infinite loop, so it will work like any other infinite loop; it will continue executing until it encounters an exception or its thread is aborted (which happens on process shutdown).
If something is wrong in my code, please help me
Oh, yes. There are lots of things wrong with this code. It's an MSDN socket example, after all. :) Off the top of my head:
The buffer size is an arbitrary value, rather low. I would start at 8K myself, so it's possible to get a full Ethernet packet in a single read.
The Bind explicitly uses the loopback address. OK for playing around, I guess, but remember to set this to IPAddress.Any in the real world.
backlog parameter is OK for testing, but should be int.MaxValue on a true server to enable the dynamic backlog in modern server OSes.
The code will fall through the first catch and attempt to Accept even after Bind/Listen has failed.
If any exception occurs (e.g., from Listen or Receive), then the entire server shuts down. Note that a client socket being terminated will result in an exception that should be logged/ignored, but it would stop this server.
The read buffer is re-allocated on each time through the loop, even though the old buffer is never used again.
ASCII is a lossy encoding.
If a client cleanly shuts down without sending <EOF>, then the server enters an infinite busy loop.
Received data is not properly separated into messages; it is possible that the echoed message contains all of one message and part of another. In this particular example it doesn't matter (since it's just an echo server and it's using ASCII instead of a real encoding), but this example hides the fact that you need to handle message framing properly in any real-world application.
The decoding should be done after the message framing. This isn't strictly necessary for ASCII (where each byte is a single character), but it's required for any real encoding like UTF-8.
Since the server is only either receiving or sending at any time (and never both), it cannot detect or recover from a half-open socket situation. A half-open socket will cause this server to hang.
The server is only capable of a single connection at a time.
That was just after a brief readthrough. There could easily be more.
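To illustrate two of those points, here is a minimal sketch of a receive loop that checks the return value of Receive for 0 (a clean shutdown) and treats a SocketException as fatal only for that one client; the client socket and the framing layer are assumed from context:
var buffer = new byte[8192]; // 8K, so a full Ethernet packet fits in one read
while (true)
{
    int bytesRec;
    try
    {
        bytesRec = client.Receive(buffer);
    }
    catch (SocketException)
    {
        break; // log it and drop this client; don't take the whole server down
    }
    if (bytesRec == 0)
        break; // 0 means the remote side shut down cleanly
    // hand buffer[0..bytesRec) to the message-framing layer here
}
client.Close();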

C# network stream fragmented data

My Context
I have a TCP networking program that sends large objects that have been serialized and encoded into base64 over a connection. I wrote a client library and a server library, and they both use NetworkStream's Begin/EndRead and Begin/EndWrite. Here's the (very much simplified) version of the code I'm using:
For the server:
var Server = new TcpServer(/* network stuffs */);
Server.Connect();
Server.OnClientConnect += new ClientConnectEventHandler(Server_OnClientConnect);

void Server_OnClientConnect()
{
    LargeObject obj = CalculateLotsOfBoringStuff();
    Server.Write(obj.SerializeAndEncodeBase64());
}
Then the client:
var Client = new TcpClient(/* more network stuffs */);
Client.Connect();
Client.OnMessageFromServer += new MessageEventHandler(Client_OnMessageFromServer);

void Client_OnMessageFromServer(MessageEventArgs mea)
{
    DoSomethingWithLargeObject(mea.Data.DecodeBase64AndDeserialize());
}
The client library has a callback method for NetworkStream.BeginRead which triggers the event OnMessageFromServer that passes the data as a string through MessageEventArgs.
My Problem
When receiving large amounts of data through BeginRead/EndRead, however, the data appears to be fragmented over multiple messages. E.g., pretend this is a long message:
"This is a really long message except not because it's for explanatory purposes."
If that really were a long message, Client_OnMessageFromServer might be called... say three times with fragmented parts of the "long message":
"This is a really long messa"
"ge except not because it's for explanatory purpos"
"es."
Soooooooo.... takes deep breath
What would be the best way to have everything sent through one Begin/EndWrite to be received in one call to Client_OnMessageFromServer?
You can't. On TCP, how things arrive is not necessarily the same as how they were sent. It is the job of your code to know what constitutes a complete message, and if necessary to buffer incoming data until you have a complete message (taking care not to discard the start of the next message in the process).
In text protocols, this usually means "spot the newline / nul-char". For binary, it usually means "read the length-header in the preamble to the message".
TCP is a stream protocol, and has no fixed message boundaries. This means you can receive part of a message or the end of one and the beginning of another.
There are two ways to solve this:
Alter your protocol to add end-of-message markers. This way you continuously receive until you find the special marker. This can, however, leave you with a buffer containing the end of one message and the beginning of another, which is why I recommend the next option.
Alter your protocol to first send the length of the message. Then you will know exactly how long the message is, and can count down while receiving so you won't read the beginning of the next message.
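A sketch of the second option on the sending side, assuming a 4-byte length header written with BitConverter (both sides must agree on the endianness; "message" is a placeholder for the string being sent):
byte[] payload = Encoding.UTF8.GetBytes(message);
byte[] header = BitConverter.GetBytes(payload.Length); // 4-byte length prefix
stream.Write(header, 0, header.Length);
stream.Write(payload, 0, payload.Length);
The receiver then reads exactly 4 bytes, decodes the length, and loops until it has read exactly that many payload bytes.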

Read contents from SslStream

I have an application using which I execute IMAP commands using:
TcpClient to connect to the IMAP server
SslStream to write and read commands
Problem:
Cannot read the complete output content from the stream
while loop on the SslStream.Read seems not to work
StreamReader.ReadLine, ReadToEnd, Read methods do not work
Sample code:
while ((l = reader.ReadLine()) != null)
{
    output.AppendLine(l);
}
This code snippet would read 1 to 2 lines and then hang in reader.ReadLine().
A workaround I tried, with the ReadTimeout property set:
try
{
    _output = new byte[_tcpclient.ReceiveBufferSize];
    _sslstream.Read(_output, 0, _output.Length);
    textBox1.Text = Encoding.ASCII.GetString(_output);
}
catch (IOException ex)
{
    textBox1.Text = "ERROR !! " + ex.Message;
}
Help:
How can I read the complete output of a command from the stream?
Note: I do not want to use any third party libraries.
The TCP stream cannot know whether the current response has finished. All it knows is whether it has just received data on the wire; it cannot know whether the next packet is going to come right now (a multi-packet response) or whether it will come much later (if the response is finished).
Instead, you need to determine from the protocol itself when the response is complete: keep reading until you receive the tagged completion response, as documented in the IMAP protocol.
However, IMAP seems to be intended to be read continuously on a background thread, since the server can send you information at any time. Therefore, you probably ought to have a separate thread which is always in ReadLine().
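As a rough sketch of that read-until-tagged-response loop (the tag "a1" is a placeholder, and reader is assumed to be a StreamReader over the SslStream as in the question):
string tag = "a1"; // the tag you sent with the command
var output = new StringBuilder();
string line;
while ((line = reader.ReadLine()) != null)
{
    output.AppendLine(line);
    // The server terminates the response with "<tag> OK|NO|BAD ..."
    if (line.StartsWith(tag + " "))
        break;
}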

Handling dropped TCP packets in C#

I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet, it seems to drop data.
I send 20000 strings using the socket.Send() method and receive them using a loop which does socket.Receive(). Each string is delimited by unique characters which I use to count the number received (this is the protocol, if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anything between 17000-20000. It seems to be worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?
Secondly, how can I get round this? Receiving all of the 20000 strings is vital.
Socket receiving code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
while (socket.Connected)
{
    byte[] recvBuffer = new byte[1024];
    int bytesRead = 0;
    try
    {
        bytesRead = socket.Receive(recvBuffer);
    }
    catch (SocketException e)
    {
        if (!socket.Connected)
        {
            return;
        }
    }
    string input = encoding.GetString(recvBuffer, 0, bytesRead);
    CountStringsIn(input);
}
Socket sending code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(message));
If you're dropping packets, you'll see a delay in transmission since it has to re-transmit the dropped packets. This could be very significant, although there's a TCP option called selective acknowledgement which, if supported by both sides, will trigger a resend of only those packets which were dropped, and not every packet since the dropped one. There's no way to control that in your code. By default, you can always assume that every packet is delivered in order for TCP, and if there's some reason that it can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.
What you're seeing is most likely the result of Nagle's algorithm. What it does is, instead of sending each bit of data as you post it, it sends one byte and then waits for an ack from the other side. While it's waiting, it aggregates all the other data that you want to send and combines it into one big packet and then sends it. Since the max size for TCP is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the max window size of the receiver is less than 65k, it will only send as much as the last advertised window size of the receiver. The window size also affects Nagle's algorithm in terms of how much data it can aggregate prior to sending, because it can't send more than the window size.
The reason you see this is that on the internet, unlike your network, that first ack takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the return is effectively instantaneous, so it's able to send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side by using SetSockOpt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagling on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
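For reference, this is what that option looks like in .NET, shown only so you can recognize it; per the advice above, leave it alone unless you are sure:
socket.NoDelay = true; // disables Nagle's algorithm on this socket
// equivalently:
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);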
Well there's one thing wrong with your code to start with, if you're counting the number of calls to Receive which complete: you appear to be assuming that you'll see as many Receive calls finish as you made Send calls.
TCP is a stream-based protocol - you shouldn't be worrying about individual packets or reads; you should be concerned with reading the data, expecting that sometimes you won't get a whole message in one packet and sometimes you may get more than one message in a single read. (One read may not correspond to one packet, too.)
You should either prefix each message with its length before sending, or have a delimiter between messages.
It's definitely not TCP's fault. TCP guarantees in-order, exactly-once delivery.
Which strings are "missing"? I'd wager it's the last ones; try flushing from the sending end.
Moreover, your "protocol" here (I'm talking about the application-layer protocol you're inventing) is lacking: you should consider sending the number of objects and/or their length so the receiver knows when it's actually done receiving them.
How long are each of the strings? If they aren't exactly 1024 bytes, they'll be merged by the remote TCP/IP stack into one big stream, which you read big blocks of in your Receive call.
For example, using three Send calls to send "A", "B", and "C" will most likely come to your remote client as "ABC" (as either the remote stack or your own stack will buffer the bytes until they are read). If you need each string to come without it being merged with other strings, look into adding in a "protocol" with an identifier to show the start and end of each string, or alternatively configure the socket to avoid buffering and combining packets.
