NetworkStream Receive: how to process data without using 100% CPU? (C#)

I'm making a small game server that will have dozens of connections sending player data constantly. I've got the basics working and data is being sent and received, but now I'm flooding both the server and the client with too much data. Even after throttling it back, I'm hitting 90-100% CPU simply from receiving and processing the incoming data.
The method below is a bare version of receiving data from the server. The server sends a List of data to the player, which then iterates through that list. I've considered using a dictionary keyed by type instead of a for loop, but I don't think that will improve things significantly; the real problem is that data is being processed non-stop, because player positions are constantly updated, sent to the server, and then forwarded to the other players.
The code below shows the receive loop for the client; the server's receive code looks very similar. How might I begin to overcome this issue? Please be nice, I'm still new to network programming.
private void Receive(System.Object client)
{
    MemoryStream memStream = null;
    TcpClient thisClient = (TcpClient)client;
    List<System.Object> objects = new List<System.Object>();
    while (thisClient.Connected && playerConnected == true)
    {
        try
        {
            do
            {
                // When receiving data, the length comes first, then the payload.
                byte[] buffer = GetStreamByteBuffer(netStream, 4); // blocks while waiting for data
                int msgLength = BitConverter.ToInt32(buffer, 0);
                if (msgLength <= 0)
                {
                    playerConnected = false;
                    thisClient.Close();
                    break;
                }
                if (msgLength > 0)
                {
                    buffer = GetStreamByteBuffer(netStream, msgLength);
                    memStream = new MemoryStream(buffer);
                }
            } while (netStream.DataAvailable);
            if (memStream != null)
            {
                BinaryFormatter formatter = new BinaryFormatter();
                memStream.Position = 0;
                objects = new List<System.Object>((List<System.Object>)formatter.Deserialize(memStream));
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception: " + ex.ToString());
            if (thisClient.Connected == false)
            {
                playerConnected = false;
                netStream.Close();
                thisClient.Close();
                break;
            }
        }
        try
        {
            if (objects != null)
            {
                for (int i = 0; i < objects.Count; i++)
                {
                    if (objects[i] != null)
                    {
                        if (objects[i].GetType() == typeof(GameObject))
                        {
                            GameObject p = (GameObject)objects[i];
                            GameObject item;
                            if (mapGameObjects.TryGetValue(p.objectID, out item))
                            {
                                mapGameObjects[p.objectID] = p;
                            }
                            else
                            {
                                mapGameObjects.Add(p.objectID, p);
                            }
                        }
                    }
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception " + ex.ToString());
            if (thisClient.Connected == false)
            {
                playerConnected = false;
                netStream.Close();
                break;
            }
        }
    }
    Console.WriteLine("Receive thread closed for client.");
}
public static byte[] GetStreamByteBuffer(NetworkStream stream, int n)
{
    byte[] buffer = new byte[n];
    int bytesRead = 0;
    int chunk = 0;
    while (bytesRead < n)
    {
        chunk = stream.Read(buffer, bytesRead, buffer.Length - bytesRead);
        if (chunk == 0)
        {
            break;
        }
        bytesRead += chunk;
    }
    return buffer;
}

Based on the code shown, I can't say why the CPU utilization is high. The loop will wait for data, and that wait should not consume CPU. That said, it still polls the connection by checking the DataAvailable property, which is inefficient and, in the implementation shown, can cause you to ignore received data (that's not an inherent problem with DataAvailable).
I'll go one further than the other answer and state that you should simply rewrite the code. Polling the socket is just no way to handle network I/O. This would be true in any scenario, but it is especially problematic if you are trying to write a game server, because you're going to use up a lot of your CPU bandwidth needlessly, taking it away from game logic.
The two biggest changes you should make here are:
Don't use the DataAvailable property. Ever. Instead, use one of the asynchronous APIs for dealing with network I/O. My favorite approach with the latest .NET is to wrap the Socket in a NetworkStream (or get the NetworkStream from a TcpClient, as you do in your code) and then use Stream.ReadAsync() along with async and await. But the older asynchronous APIs for Sockets work well also.
Separate your network I/O code from the game logic code. The Receive() method you show here has both the I/O and the actual processing of the data relative to the game state in the same method. These two pieces of functionality really belong in two separate classes. Keep both classes, and especially the interface between them, very simple, and the code will be a lot easier to write and to maintain.
If you decide to ignore all of the above, you should at least be aware that your GetStreamByteBuffer() method has a bug in it: if you reach the end of the stream before reading as many bytes as were requested, you still return a buffer as large as was requested, with no way for the caller to know the buffer is incomplete.
And finally, IMHO you should be more careful about how you shut down and close the connection. Read about "graceful closure" for the TCP protocol. It's important that each end signal that it is done sending, and that each end receive the other end's signal, before either end actually closes the connection. This allows the underlying networking protocol to release resources as efficiently and as quickly as possible. Note that TcpClient exposes the socket as the Client property, which you can use to call Shutdown().
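As a rough illustration of that graceful closure (a sketch only, reusing netStream and thisClient from the question's code; exact placement in your shutdown path is up to you):

// Signal the peer that we are done sending, then drain its remaining data before closing.
thisClient.Client.Shutdown(SocketShutdown.Send);

// Keep reading until the peer signals it is done too (Read returns 0 at end of stream).
byte[] drain = new byte[1024];
while (netStream.Read(drain, 0, drain.Length) > 0)
{
    // Discard (or process) any data still in flight.
}

netStream.Close();
thisClient.Close();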

Polling is rarely a good approach to communication, unless you're programming 16-bit microcontrollers (and even then, probably not the best solution).
What you need to do is to switch to a producer-consumer pattern, where your input port (a serial port, an input file, or a TCP socket) will act as a producer filling a FIFO buffer (a queue of bytes), and some other part of your program will be able to asynchronously consume the enqueued data.
In C#, there are several ways to do it: you can simply write a couple of methods using a ConcurrentQueue<byte> or a BlockingCollection, or you can try a library like the TPL Dataflow Library, which IMO doesn't add much value over the existing structures in .NET 4. Prior to .NET 4, you would simply use a Queue<byte>, a lock, and an AutoResetEvent to do the same job.
So the general idea is:
When your input port fires a "data received" event, enqueue all received data into the FIFO buffer and set a synchronization event to notify the consumer,
In your consumer thread, wait for the synchronization event. When the signal is received, check whether there is enough data in the queue; if yes, process it, if not, continue waiting for the next signal.
For robustness, use an additional watchdog timer (or simply "time since last received data") to be able to fail on timeout.
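To make the idea concrete, here is a minimal producer-consumer sketch using a BlockingCollection. It is not taken from the question's code: the MessagePump name and the Console.WriteLine body are placeholders for whatever deserialization and game-state update you actually do.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class MessagePump
{
    // Shared FIFO buffer between the network thread (producer) and game logic (consumer).
    private readonly BlockingCollection<byte[]> _messages = new BlockingCollection<byte[]>();

    // Called by the receive code whenever a complete message has been read off the socket.
    public void Enqueue(byte[] message) => _messages.Add(message);

    // Consumer loop: blocks without spinning until a message is available.
    public Task Start() => Task.Run(() =>
    {
        foreach (var message in _messages.GetConsumingEnumerable())
        {
            // Placeholder: deserialize and apply the message to the game state here.
            Console.WriteLine($"Processing {message.Length} bytes");
        }
    });

    // Call on shutdown so GetConsumingEnumerable completes and the consumer task ends.
    public void Complete() => _messages.CompleteAdding();
}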

You want to use the Task-based Asynchronous Pattern, probably making liberal use of the async modifier and the await keyword.
You'd be best off replacing GetStreamByteBuffer with a direct call to ReadAsync.
For instance, you could asynchronously read from a stream like this:
private static async Task<T> ReadAsync<T>(
    Stream source,
    CancellationToken token)
{
    int requestLength;
    {
        var initialBuffer = new byte[sizeof(int)];
        var readCount = await source.ReadAsync(
            initialBuffer,
            0,
            sizeof(int),
            token);
        if (readCount != sizeof(int))
        {
            throw new InvalidOperationException(
                "Not enough bytes in stream to read request length.");
        }
        requestLength = BitConverter.ToInt32(initialBuffer, 0);
    }
    var requestBuffer = new byte[requestLength];
    var bytesRead = await source.ReadAsync(
        requestBuffer,
        0,
        requestLength,
        token);
    if (bytesRead != requestLength)
    {
        throw new InvalidDataException(
            string.Format(
                "Not enough bytes in stream to match request length." +
                " Expected:{0}, Actual:{1}",
                requestLength,
                bytesRead));
    }
    var serializer = new BinaryFormatter();
    using (var requestData = new MemoryStream(requestBuffer))
    {
        return (T)serializer.Deserialize(requestData);
    }
}
Like your code, this reads an int from the stream to get the length, then reads that many bytes and uses the BinaryFormatter to deserialize the data to the specified generic type.
Using this generic function, you can simplify your logic:
private async Task Receive(
    TcpClient thisClient,
    CancellationToken token)
{
    IList<object> objects = null;
    while (thisClient.Connected && playerConnected == true)
    {
        try
        {
            objects = await ReadAsync<List<object>>(netStream, token);
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception: " + ex.ToString());
            if (thisClient.Connected == false)
            {
                playerConnected = false;
                netStream.Close();
                thisClient.Close();
                break;
            }
        }
        try
        {
            foreach (var p in objects.OfType<GameObject>())
            {
                if (p != null)
                {
                    mapGameObjects[p.objectID] = p;
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception " + ex.ToString());
            if (thisClient.Connected == false)
            {
                playerConnected = false;
                netStream.Close();
                break;
            }
        }
    }
    Console.WriteLine("Receive thread closed for client.");
}
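A hypothetical call site might look like this (where the CancellationTokenSource lives, and how thisClient is obtained, is up to the surrounding server code):

var cts = new CancellationTokenSource();
// Fire up the receive loop for this connection; call cts.Cancel() on shutdown.
await Receive(thisClient, cts.Token);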

You need to put a Thread.Sleep(10) in your while loop. This is also a very fragile way to receive TCP data, because it assumes the other side has sent all of its data before you call this receive. If the other side has only sent half of the data, this method fails. That can be countered either by sending fixed-size packets or by sending the length of each packet first.

Your player position update is similar to the framebuffer update in the VNC protocol, where the client requests a screen frame and the server responds with the updated screen data. There is one important difference, though: the VNC server doesn't blindly send the whole screen, it only sends the changes when there are any. So you need to change the logic from sending the entire requested list of objects to sending only the objects that have changed since the last send. In addition, you should send the entire object only once; after that, send only the changed properties. This will greatly reduce the amount of data sent and processed on both the clients and the server.
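A minimal sketch of the "send only what changed" idea, reusing the GameObject/objectID names from the question. The DeltaTracker class and the Equals-based comparison are illustrative assumptions (it also assumes each GameObject passed in is a fresh snapshot with value-based Equals, not an instance mutated in place after it is stored):

using System.Collections.Generic;
using System.Linq;

class DeltaTracker
{
    // Snapshot of the last state that was actually sent to the clients.
    private readonly Dictionary<int, GameObject> _lastSent = new Dictionary<int, GameObject>();

    // Returns only the objects that are new or have changed since the previous send.
    public List<GameObject> GetChanges(IEnumerable<GameObject> current)
    {
        var changed = current
            .Where(o => !_lastSent.TryGetValue(o.objectID, out var previous)
                        || !previous.Equals(o)) // assumes a value-based Equals on GameObject
            .ToList();

        foreach (var o in changed)
        {
            _lastSent[o.objectID] = o; // remember what was sent, for the next comparison
        }
        return changed;
    }
}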

Related

C# TCP server receive multiple JSON responses issue

I'm learning C# and I've got a TCP server that works asynchronously and supports multiple connected clients, and I'm having trouble keeping a consistent flow into a method processData(String). I think I've managed to trace the issue down to the byte buffer: it shows up when I receive data from multiple clients (and also when a single client sends the same data multiple times in quick succession).
I've tried using an isWorking bool to prevent processData() from being re-entered, and that threw a JsonReaderException, I think because an incomplete string was being handed over for processing. At that point I tried sending an [EOF] at the end of each string and only calling processData() when that check was true. However, this presented another problem, where the buffer would contain half of one string and the beginning of another - at this point I posted on here thinking there may be a better approach to ensure properly formatted strings are passed to the processData() method.
I also tried splitting the response into an array using the } as a delimiter, but this didn't work, resulting in the same half-of-one-string-and-beginning-of-another issue.
string data = content.Replace("[EOF]", "");
string[] TooManyDataObjects = data.Split('}');
string firstObject = TooManyDataObjects[0] += "}";
Here's the code I use to create the listen task, and the HandleDevice() method that runs when a device connects.
private async Task Listen()
{
    try
    {
        while (true)
        {
            Token.ThrowIfCancellationRequested();
            TcpClient client = await server.AcceptTcpClientAsync();
            client.NoDelay = true;
            Console.WriteLine("Connected!");
            await Task.Run(async () => await HandleDevice(client), Token);
        }
    }
    catch (SocketException e)
    {
        Console.WriteLine("Exception: {0}", e);
    }
}

private async Task HandleDevice(TcpClient client)
{
    string data = null;
    Byte[] bytes = new Byte[8196];
    int i;
    try
    {
        using (stream = client.GetStream())
        {
            while ((i = await stream.ReadAsync(bytes, 0, bytes.Length, Token)) != 0)
            {
                Token.ThrowIfCancellationRequested();
                string hex = BitConverter.ToString(bytes);
                data += Encoding.UTF8.GetString(bytes, 0, i);
                //Console.WriteLine(data);
                if (data.IndexOf("[EOF]") >= 0)
                {
                    processData(data);
                    data = "";
                    Array.Clear(bytes, 0, bytes.Length);
                }
            }
        }
    }
    catch (OperationCanceledException) { }
    catch (Exception e)
    {
        Console.WriteLine("Exception: {0}", e.ToString());
    }
    finally
    {
        client.Close();
    }
}
Question: How can I get properly formatted JSON strings received from clients into processData(), queued in the order in which they were received?
I'm learning C# and I've got a TCP server that works asynchronously
Sorry, but you're trying to learn three fairly complex topics simultaneously? I strongly recommend you learn C#, sockets, and asynchrony one at a time. C# and asynchrony are fairly common, so I'd recommend learning those first; socket development is much more rare, so I'd learn that only if necessary.
the buffer would contain half of one string and the beginning of another
This is a common problem when starting out with socket programming. My first response to everyone running into this is to drop the sockets completely and use something much higher-level like ASP.NET Core (self-hosted if necessary).
But if you need sockets for some reason, then the proper solution for this socket problem (the first of several usually encountered) is message framing. I discuss message framing on my blog, as part of a series on socket programming in .NET. I also have a video series I did earlier this year on asynchronous socket programming that would be well worth watching if you really need to learn socket programming (again, the vast majority of developers never need to learn that).
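As a rough illustration of delimiter-based framing for the [EOF] marker used above, here is a sketch that copes with a read containing several complete messages, or only part of one. It is a sketch only; a production version would also use a System.Text.Decoder so that multi-byte UTF-8 characters split across reads are handled correctly.

using System;
using System.Text;

class EofFramer
{
    private readonly StringBuilder _pending = new StringBuilder();

    // Call this with each chunk read from the socket; complete messages are handed to onMessage.
    public void Append(byte[] buffer, int count, Action<string> onMessage)
    {
        _pending.Append(Encoding.UTF8.GetString(buffer, 0, count));

        string all = _pending.ToString();
        int marker;
        // There may be zero, one, or several complete messages buffered.
        while ((marker = all.IndexOf("[EOF]", StringComparison.Ordinal)) >= 0)
        {
            onMessage(all.Substring(0, marker));          // one complete JSON string
            all = all.Substring(marker + "[EOF]".Length); // keep whatever follows
        }

        _pending.Clear();
        _pending.Append(all); // a partial message (if any) waits for the next read
    }
}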

Best Way to write large data over network stream of TcpClient

We have a requirement to upload large firmware files to printers to upgrade the firmware of the device. The printer device is on the same network as my server, and the size of the firmware we are trying to upload is approximately 200-500 MB. The approach we have chosen is to load the firmware (.bin file) into a MemoryStream and write it in chunks over the network using TcpClient.
Based on the response from the network stream, we display the status of the firmware upgrade to the client. The following is the code snippet we have used for the firmware upgrade. I want to know whether it is the best approach, as a wrong one may harm the device.
EDIT:
class MyClass
{
    int port = 9100;
    string _deviceip;
    byte[] m_ReadBuffer = null;
    TcpClient _tcpclient;
    NetworkStream m_NetworkStream;
    static string CRLF = "\r\n";

    public event EventHandler<DeviceStatus> onReceiveUpdate;

    public async Task<bool> UploadFirmware(Stream _stream)
    {
        bool success = false;
        try
        {
            _tcpclient = new TcpClient();
            _tcpclient.Connect(_deviceip, port);
            _stream.Seek(0, SeekOrigin.Begin);
            m_NetworkStream = _tcpclient.GetStream();
            byte[] buffer = new byte[1024];
            m_ReadBuffer = new byte[1024];
            int readcount = 0;
            m_NetworkStream.BeginRead(m_ReadBuffer, 0, m_ReadBuffer.Length,
                new AsyncCallback(EndReceive), null);
            await Task.Run(() =>
            {
                while ((readcount = _stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    m_NetworkStream.Write(buffer, 0, readcount);
                    m_NetworkStream.Flush();
                }
            });
            success = true;
        }
        catch (Exception ex)
        {
            upgradeStatus = false;
        }
        return success;
    }

    private void EndReceive(IAsyncResult ar)
    {
        try
        {
            int nBytes;
            nBytes = m_NetworkStream.EndRead(ar);
            if (nBytes > 0)
            {
                string res = Encoding.UTF8.GetString(m_ReadBuffer, 0, nBytes);
                DeviceStatus status = new DeviceStatus();
                string[] readlines = res.Split(new string[] { CRLF },
                    StringSplitOptions.RemoveEmptyEntries);
                foreach (string readline in readlines)
                {
                    if (readline.StartsWith("CODE"))
                    {
                        // read readline string here
                        break;
                    }
                }
            }
            if (m_NetworkStream.CanRead)
            {
                do
                {
                    m_NetworkStream.BeginRead(m_ReadBuffer, 0, m_ReadBuffer.Length,
                        new AsyncCallback(EndReceive), null);
                } while (m_NetworkStream.DataAvailable);
            }
        }
        catch (ObjectDisposedException ods)
        {
            return;
        }
        catch (System.IO.IOException ex)
        {
        }
    }
}
Any help will be really appreciated.
Your code is basically fine with a few issues:
m_NetworkStream.Flush(); AFAIK this does nothing. If it did something it would harm throughput. So delete that.
_stream.Seek(0, SeekOrigin.Begin); seeking is the caller's concern, so remove that. This is a layering violation.
Use bigger buffers. Determine the right size experimentally. I usually start at 64KB for bulk transfers. This makes the IOs less chatty.
Turn on nagling, which helps with bulk transfers because it saves you from spurious small packets.
You can replace the entire read-write loop with Stream.CopyTo (or CopyToAsync).
The way you report exceptions to the callers hides a lot of information. Just let the exception bubble out. Don't return a bool.
Use using for all resource to ensure they are cleaned up in the error case.
nBytes = m_NetworkStream.EndRead(ar); here, you assume that a single read will return all data that will be coming. You might receive just the first byte, though. Probably, you should use StreamReader.ReadLine in a loop until you know you are done.
catch (System.IO.IOException ex) { } What is that about? If firmware updates are such a critical thing suppressing errors appears very dangerous. How else can you find out about bugs?
I would convert the reading code to use await. A rough sketch combining several of these points follows.
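A sketch only, under stated assumptions: the status-response handling (DeviceStatus parsing) and error reporting are left to the caller, and the 64 KB buffer size is just a starting point to tune experimentally.

public async Task UploadFirmwareAsync(Stream firmware, string deviceIp, int port = 9100)
{
    // using blocks ensure the connection is cleaned up even when an exception bubbles out.
    using (var client = new TcpClient())
    {
        await client.ConnectAsync(deviceIp, port);
        using (var network = client.GetStream())
        {
            // Copy the firmware in 64 KB chunks; let any exception propagate to the caller.
            await firmware.CopyToAsync(network, 64 * 1024);

            // Tell the device we are done sending, then read its status response here
            // (for example with a StreamReader.ReadLineAsync loop) before closing.
            client.Client.Shutdown(SocketShutdown.Send);
        }
    }
}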
As the maximum length of a TCP packet is 65535 (2^16 - 1), if you send any packet exceeding this length it will be truncated. If I were you, I think the best way of sending large packets is to set a header on every packet and number them. For example:
C->S ; [P1] <content>
and then the same structure, just incremented: [P2] <content>
To do so, just use a few substrings to split the data and send the chunks.
Cheers!

Async await TCP Server for concurrent connection with database calls

I have to write an asynchronous TCP server to which multiple GPS devices (approximately 1000) will connect simultaneously and push data of less than 1 KB each to the server. In response, the server will send them a simple message containing the count of bytes received. The same procedure will happen every 5 minutes.
The data received at the server is in CSV format and contains many decimal values; the server is supposed to process this data and insert it into a database table.
After doing lots of Googling I decided to go with the .NET 4.5 async and await methods.
This is the first time I am implementing a TCP server, and I believe this is not really the most efficient or professional code, so any input, however small, is greatly appreciated. My sample code is below.
// Server starts from here
public async void Start()
{
    IPAddress ipAddre = IPAddress.Parse("192.168.2.5");
    TcpListener listener = new TcpListener(ipAddre, _listeningPort);
    listener.Start();
    while (true)
    {
        try
        {
            var tcpClient = await listener.AcceptTcpClientAsync();
            HandleConnectionAsync(tcpClient);
        }
        catch (Exception exp)
        {
        }
    }
}

// Handle the incoming connection and call process data
private async void HandleConnectionAsync(TcpClient tcpClient)
{
    try
    {
        using (var networkStream = tcpClient.GetStream())
        using (var reader = new StreamReader(networkStream, Encoding.Default))
        using (var writer = new StreamWriter(networkStream))
        {
            networkStream.ReadTimeout = 5000;
            networkStream.WriteTimeout = 5000;
            char[] resp = new char[1024];
            while (true)
            {
                var dataFromServer = await reader.ReadAsync(resp, 0, resp.Length);
                string dataFromServer1 = new string(resp);
                string status = await ProcessDataReceived(dataFromServer1);
                if (status.Length != 0)
                    await writer.WriteAsync(status);
            }
        }
    }
    catch (Exception exp) { }
}

// Process data function
private async Task<string> ProcessDataReceived(string dataFromServer)
{
    List<string> values = dataFromServer.Split(',').ToList();
    // Do some calculation and rearrange the data
    // Create the datatable and insert the data into datatable
    using (SqlBulkCopy bulkcopy = new SqlBulkCopy(_dbConn))
    {
        bulkcopy.WriteToServer(table);
    }
    return "status";
}
At present, I have tested it with a single GPS device and it works for some 10-15 minutes, then simply crashes, and I am very doubtful about how it will behave when there are multiple concurrent connections.
I just want to make sure my basic approach, as shown in the code, is in the right direction. Am I processing the data in the correct way, or should I use a queue or some other data structure for processing?
Any input is greatly appreciated.
The lack of information means that I'm unable to tell whether this is your actual problem, but as far as problems go, this one's a biggie.
You're not checking whether the other end has finished. That is indicated by a return value of 0 from ReadAsync.
The result value can be less than the number of bytes requested if the number of bytes currently available is less than the requested number, or it can be 0 (zero) if the end of the stream has been reached.
When this condition is detected, you need to get out of the loop, otherwise bad stuff will happen...
while (true)
{
    //....
    var dataFromServer = await reader.ReadAsync(resp, 0, resp.Length); // bad name!
    if (dataFromServer == 0)
    {
        break;
    }
    //....
}
As a rule, when you're doing network programming, you need to trap every possible exception and understand what that exception means. Looking at failure in terms of "oh... it crashed" won't get you very far at all. Network stuff fails all the time and you have to understand why it's failing by reading all the diagnostic information you have to hand.

Out of memory exception when receiving large amounts of data on TCP connection

I have an application that receives data from GPRS clients in the field over a TCP connection. From time to time the GPRS client devices lose their connection and buffer the data; when the connection is restored, all the buffered data is sent over the TCP connection, causing my application to throw a System.OutOfMemoryException.
I presume this is because the data received is bigger than my buffer size (which is set to int.MaxValue).
How do I prevent my application from running out of memory?
How do I make sure that I do not lose any of the incoming data?
Below is the code used to listen for, and handle, incoming data.
public void Listen(string ip, int port)
{
    _logger.Debug("All.Tms.SensorDataServer : SensorDataListener : Listen");
    try
    {
        var listener = new TcpListener(IPAddress.Parse(ip), port);
        listener.Start();
        while (true)
        {
            var client = listener.AcceptTcpClient();
            client.SendBufferSize = int.MaxValue;
            var thread = new Thread(() => ReadAndHandleIncommingData(client));
            thread.IsBackground = true;
            thread.Start();
        }
    }
    catch (Exception ex)
    {
        _logger.Error("TMS.Sensor.SensorDataServer : SensorDataListener : Error : ", ex);
    }
}
and
private void ReadAndHandleIncommingData(TcpClient connection)
{
    try
    {
        var stream = connection.GetStream();
        var data = new byte[connection.ReceiveBufferSize];
        var bytesRead = stream.Read(data, 0, System.Convert.ToInt32(connection.ReceiveBufferSize));
        var sensorDataMapper = new SensorDataMapperProvider().Get(data);
        if (sensorDataMapper != null)
        {
            _sensorDataHandler.Handle(sensorDataMapper.Map(data));
        }
    }
    catch (Exception ex)
    {
        _logger.Error("TMS.Sensor.SensorDataServer : SensorDataListener : ReadAndHandleIncommingData : Error : ", ex);
    }
    finally
    {
        try
        {
            connection.Close();
        }
        catch (Exception ex)
        {
            _logger.Error("All.Tms.SensorDataServer : SensorDataListener : ReadAndHandleIncommingData : Error : ", ex);
        }
    }
}
About buffers
OutOfMemoryExceptions are thrown when there is no contiguous block of memory left.
In your case, that means connection.ReceiveBufferSize is too big to allocate in one piece, not that the data received is bigger than your buffer size.
You can use a smaller, fixed-size buffer to receive the bytes, append them somewhere, and reuse the same buffer for the rest of the data until you have it all.
One thing to look out for is the collection you use to store the received data. You can't use a List<byte>, for example, because it stores its elements in a single array under the hood, which is no different from allocating everything in one go, as you do now.
You can see MemoryTributary, a stream implementation meant to replace MemoryStream. You can copy your stream to this one and keep it as a stream. This page also contains a lot of information that can help you understand the causes of the OutOfMemoryExceptions.
In addition, I wrote a buffer manager to provide fixed sized buffers a while ago. You can find that in Code Review, here.
About threads
Creating a thread for every single connection is brutal. They cost you about 1 MB of stack each, so you should either use the ThreadPool or, better, IOCP (via the asynchronous methods of the Socket class).
You may want to look into these for common pitfalls and best practices about socket programming:
Stephen Cleary (the blog) | TCP/IP .NET Sockets FAQ
Stack Overflow | How to write a scalable Tcp/Ip based server
Code Project | C# SocketAsyncEventArgs High Performance Socket Code
Use a fixed receive buffer, and parse data incrementally
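A minimal sketch of that approach; HandleChunk is a hypothetical placeholder for whatever incremental parsing the sensor data needs, and the 8 KB buffer size is arbitrary:

private void ReadIncrementally(TcpClient connection)
{
    var buffer = new byte[8 * 1024]; // small, fixed buffer reused for every read
    using (var stream = connection.GetStream())
    {
        int bytesRead;
        // Read returns 0 when the remote side has finished sending.
        while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Feed each chunk to an incremental parser instead of
            // accumulating the whole payload in memory.
            HandleChunk(buffer, bytesRead);
        }
    }
}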

Best way to accept multiple tcp clients?

I have a client/server infrastructure. At present they use a TcpClient and TcpListener to send and receive data between all the clients and the server.
What I currently do is: when data is received (on its own thread), it is put in a queue for another thread to process, in order to free the socket so it is ready and open to receive new data.
// Enter the listening loop.
while (true)
{
    Debug.WriteLine("Waiting for a connection... ");

    // Perform a blocking call to accept requests.
    using (client = server.AcceptTcpClient())
    {
        data = new List<byte>();

        // Get a stream object for reading and writing
        using (NetworkStream stream = client.GetStream())
        {
            // Loop to receive all the data sent by the client.
            int length;
            while ((length = stream.Read(bytes, 0, bytes.Length)) != 0)
            {
                var copy = new byte[length];
                Array.Copy(bytes, 0, copy, 0, length);
                data.AddRange(copy);
            }
        }
    }
    receivedQueue.Add(data);
}
However, I wanted to find out if there is a better way to do this. For example, if there are 10 clients and they all want to send data to the server at the same time, one will get through while all the others will fail. Or if one client has a slow connection and hogs the socket, all other communication will halt.
Is there not some way to receive data from all clients at the same time and add each client's received data to the queue for processing once it has finished downloading?
So here is an answer that will get you started - which is more beginner level than my blog post.
.Net has an async pattern that revolves around a Begin* and End* call. For instance - BeginReceive and EndReceive. They nearly always have their non-async counterpart (in this case Receive); and achieve the exact same goal.
The most important thing to remember is that the socket ones do more than just make the call async - they expose something called IOCP (I/O Completion Ports; Linux/Mono has these too, but I forget the name), which is extremely important to use on a server; the crux of what IOCP does is that your application doesn't consume a thread while it waits for data.
How to Use The Begin/End Pattern
Every Begin* method will have exactly 2 more arguments in comparison to its non-async counterpart. The first is an AsyncCallback, the second is an object. What these two mean is, "here is a method to call when you are done" and "here is some data I need inside that method." The method that gets called always has the same signature; inside this method you call the End* counterpart to get what would have been the result if you had done it synchronously. So for example:
private void BeginReceiveBuffer()
{
    _socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, EndReceiveBuffer, buffer);
}

private void EndReceiveBuffer(IAsyncResult state)
{
    var buffer = (byte[])state.AsyncState; // This is the last parameter.
    var length = _socket.EndReceive(state); // This is the return value of the method call.
    DataReceived(buffer, 0, length);        // Do something with the data.
}
What happens here is that .NET starts waiting for data from the socket; as soon as it gets data it calls EndReceiveBuffer and passes the 'custom data' (in this case buffer) to it via state.AsyncState. When you call EndReceive it will give you back the length of the data that was received (or throw an exception if something failed).
Better Pattern for Sockets
This form will give you central error handling - it can be used anywhere where the async pattern wraps a stream-like 'thing' (e.g. TCP arrives in the order it was sent, so it could be seen as a Stream object).
private Socket _socket;
private ArraySegment<byte> _buffer;

public void StartReceive()
{
    ReceiveAsyncLoop(null);
}

// Note that this method is not guaranteed (in fact
// unlikely) to remain on a single thread across
// async invocations.
private void ReceiveAsyncLoop(IAsyncResult result)
{
    try
    {
        // This only gets called once - via StartReceive()
        if (result != null)
        {
            int numberOfBytesRead = _socket.EndReceive(result);
            if (numberOfBytesRead == 0)
            {
                OnDisconnected(null); // 'null' being the exception. The client disconnected normally in this case.
                return;
            }
            var newSegment = new ArraySegment<byte>(_buffer.Array, _buffer.Offset, numberOfBytesRead);
            // This method needs its own error handling. Don't let it throw exceptions unless you
            // want to disconnect the client.
            OnDataReceived(newSegment);
        }
        // Because of this method call, it's as though we are creating a 'while' loop.
        // However this is called an async loop, but you can see it the same way.
        _socket.BeginReceive(_buffer.Array, _buffer.Offset, _buffer.Count, SocketFlags.None, ReceiveAsyncLoop, null);
    }
    catch (Exception ex)
    {
        // Socket error handling here.
    }
}
Accepting Multiple Connections
What you generally do is write a class that contains your socket etc. (as well as your async loop) and create one for each client. So for instance:
public class InboundConnection
{
    private Socket _socket;
    private ArraySegment<byte> _buffer;

    public InboundConnection(Socket clientSocket)
    {
        _socket = clientSocket;
        _buffer = new ArraySegment<byte>(new byte[4096], 0, 4096);
        StartReceive(); // Start the read async loop.
    }

    private void StartReceive() ...
    private void ReceiveAsyncLoop() ...
    private void OnDataReceived() ...
}
Each client connection should be tracked by your server class (so that you can disconnect them cleanly when the server shuts down, as well as search/look them up).
You should use asynchronous socket programming to achieve this. Take a look at the example provided by MSDN.
You should use an asynchronous method of reading the data; an example is:
// Enter the listening loop.
while (true)
{
    Debug.WriteLine("Waiting for a connection... ");
    client = server.AcceptTcpClient();
    ThreadPool.QueueUserWorkItem(new WaitCallback(HandleTcp), client);
}

private void HandleTcp(object tcpClientObject)
{
    TcpClient client = (TcpClient)tcpClientObject;
    data = new List<byte>();

    // Get a stream object for reading and writing
    using (NetworkStream stream = client.GetStream())
    {
        // Loop to receive all the data sent by the client.
        int length;
        while ((length = stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            var copy = new byte[length];
            Array.Copy(bytes, 0, copy, 0, length);
            data.AddRange(copy);
        }
    }
    receivedQueue.Add(data);
}
Also, you should consider using an AutoResetEvent or ManualResetEvent to be notified when new data is added to the collection, so the thread that handles the data will know when data has been received; and if you are using .NET 4.0, you'd be better off using a BlockingCollection instead of a Queue.
What I usually do is use a thread pool with several threads.
Upon each new connection I run the connection-handling code (in your case, everything you do in the using clause) on one of the threads from the pool.
That way you achieve both performance, since you allow several simultaneously accepted connections, and a limit on the number of resources (threads, etc.) you allocate for handling incoming connections.
You have a nice example here
Good Luck
