Chat server with a lot of clients - C#

I read some C# chat server source code and noticed that on a chat server with many connected clients, the server listener runs in a separate thread, and each connected client also runs in its own separate thread.
Code examples:
Start the server and begin listening in a separate thread:
public void StartListening()
{
    // Get the IP of the first network device; note this can prove unreliable on certain configurations
    IPAddress ipaLocal = ipAddress;
    // Create the TCP listener object using the IP of the server and the specified port
    tlsClient = new TcpListener(1986);
    // Start the TCP listener and listen for connections
    tlsClient.Start();
    // The while loop will check this flag before checking for connections
    ServRunning = true;
    // Start the new thread that hosts the listener
    thrListener = new Thread(KeepListening);
    thrListener.Start();
}

private void KeepListening()
{
    // While the server is running
    while (ServRunning == true)
    {
        // Accept a pending connection
        tcpClient = tlsClient.AcceptTcpClient();
        // Create a new instance of Connection
        Connection newConnection = new Connection(tcpClient);
    }
}
And each connection also runs in a separate thread:
public Connection(TcpClient tcpCon)
{
    tcpClient = tcpCon;
    // The thread that accepts the client and awaits messages
    thrSender = new Thread(AcceptClient);
    // The thread calls the AcceptClient() method
    thrSender.Start();
}
So, if a chat server has 10,000 connected clients, the chat server application will have 10,002 threads (one main thread, one listener thread and 10,000 client threads). I think the chat server will have too much overhead with such a large number of threads. Please help me find a better solution. Thanks.
UPDATE:
I believe these chat examples are only for learning networking and are not suitable for a real-world application. Please give me a real-world solution. Thanks.

If you use .NET Framework 2.0 SP2 or higher, you can use the newer asynchronous sockets model based on I/O completion ports. In this case you shouldn't create your own threads, because the I/O completion ports do all the work for you.
Here are some examples:
tcpServer = new System.Net.Sockets.TcpListener(IPAddress.Any, port);
tcpServer.Start();
tcpServer.BeginAcceptSocket(EndAcceptSocket, tcpServer);

private void EndAcceptSocket(IAsyncResult asyncResult)
{
    TcpListener listener = (TcpListener)asyncResult.AsyncState;
    Socket sock = listener.EndAcceptSocket(asyncResult);
    // handle the socket connection (you may add the socket to your internal storage or something)

    // start accepting other sockets
    listener.BeginAcceptSocket(EndAcceptSocket, listener);

    // start an asynchronous receive on the newly accepted socket
    SocketAsyncEventArgs e = new SocketAsyncEventArgs();
    e.Completed += ReceiveCompleted;
    e.SetBuffer(new byte[socketBufferSize], 0, socketBufferSize);
    sock.ReceiveAsync(e);
}
void ReceiveCompleted(object sender, SocketAsyncEventArgs e)
{
    var sock = (Socket)sender;
    if (!sock.Connected)
    {
        // handle socket disconnection
        return;
    }
    // e.BytesTransferred is the number of bytes actually received in this operation
    // (0 also indicates the peer closed the connection)
    int size = e.BytesTransferred;
    var buf = new byte[size];
    Array.Copy(e.Buffer, buf, size);
    // handle received data
    // start reading new data
    sock.ReceiveAsync(e);
}
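To reply to a client (for example, to relay a chat message), the same event-based pattern applies. Here is a rough sketch only; sock is assumed to be one of the accepted sockets kept in your internal storage:
void SendMessage(Socket sock, string message)
{
    byte[] data = Encoding.UTF8.GetBytes(message);
    var args = new SocketAsyncEventArgs();
    args.SetBuffer(data, 0, data.Length);
    args.Completed += (s, ea) => { /* check ea.SocketError here */ };
    // SendAsync returns false when the send completed synchronously,
    // in which case the Completed event will not be raised
    if (!sock.SendAsync(args))
    {
        // handle synchronous completion (e.g. check args.SocketError) here
    }
}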

A standard mechanism to ease the burden is select(), which can multiplex multiple Socket instances and report which ones are ready to be read from or written to. See this document: http://codeidol.com/csharp/csharp-network/Csharp-Network-Programming-Classes/Csharp-Socket-Programming/ and scroll down to the section on select().
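For illustration only, a select-based loop might look roughly like this (the listener and clients variables are assumptions for the sketch, not part of the linked article):
// listener is a listening Socket; clients is a List<Socket> of accepted connections
var readable = new List<Socket>(clients) { listener };
// Select blocks up to the timeout (in microseconds) and trims the list
// down to the sockets that are actually ready to read
Socket.Select(readable, null, null, 1000000);
foreach (Socket s in readable)
{
    if (s == listener)
    {
        clients.Add(listener.Accept());   // a new connection is pending
    }
    else
    {
        byte[] buffer = new byte[8192];
        int read = s.Receive(buffer);     // will not block: data (or a close) is ready
        if (read == 0) { clients.Remove(s); s.Close(); }   // peer closed the connection
        // else handle 'read' bytes from 'buffer'
    }
}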

1) You'll NEVER want that many threads running - even if you could get them to run on your box (which you can't - each thread has a stack associated with it that takes real RAM, and as you start more and more you'll run out of physical resources on your box and watch it blow up).
2) You'll want to look into thread pooling - using a smaller number of threads to tackle a larger amount of work - typically reading from a queue of work that you try to get through as quickly as possible (see the sketch after this list).
3) You'll want to look into I/O completion ports - a means of having a callback when I/O (like a disk read or network I/O) is waiting for you to take action - think of a thread (or pool of threads) dedicated to getting I/O notifications and then shoving the action to take for that I/O into a queue, and then another pool of threads that takes care of the actual messaging/logging/etc.
4) What happens when you scale beyond one machine? Which you hope to do if you're successful, right? :-) Typically people dedicate a set of N machines to chat - then they hash based on an identifier for the user (think a GUID that represents the user, or a UserID/bigint, depending on what corresponds to some internal authentication token that is consistent from login to login), which allows them to deterministically route the user's status/state information to a specific machine in that set of N boxes dedicated to messaging. So if a user that hashes to server N[2] needs to check whether their friends are logged in, it is easy to know for each of their friends exactly which machine that friend's status should be on, because the backend consistently hashes those friends to the IM machine that corresponds to each userid hash (i.e. you know just from the userid which server in the farm should be handling the IM status for that user).
Just don't think you're going to spin up a bunch of threads and that will save the day. It's sloppy and only works at very small scale.
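As a rough illustration of point 2, here is a minimal sketch of a small worker pool draining a shared queue; ChatMessage and Deliver are hypothetical placeholder names, not part of any existing API:
// requires System.Collections.Concurrent (.NET 4+) and System.Threading
BlockingCollection<ChatMessage> workQueue = new BlockingCollection<ChatMessage>();

void StartWorkers(int workerCount)
{
    for (int i = 0; i < workerCount; i++)
    {
        Thread worker = new Thread(() =>
        {
            // GetConsumingEnumerable blocks until work arrives and only ends
            // when CompleteAdding() is called during shutdown
            foreach (ChatMessage msg in workQueue.GetConsumingEnumerable())
                Deliver(msg);
        });
        worker.IsBackground = true;
        worker.Start();
    }
}

// Producers (e.g. socket receive callbacks) simply enqueue work:
void OnMessageReceived(ChatMessage msg)
{
    workQueue.Add(msg);
}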

To make matters worse, you would also have to communicate between some arbitrary number of threads (it's a chat server; people want to talk to each other, not to themselves). I would suggest looking into UDP - it can be done with a single thread on the server and fits the network activity well - people rarely write more than a couple of sentences at a time in chat exchanges, which is very convenient for size-limited UDP datagrams.
There are other approaches of course, but one sure thing is that you will never be able to do thread-per-socket at that scale.
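A minimal single-threaded UDP relay sketch, just to illustrate the idea (the clients collection and the port number are assumptions for this sketch):
// Single thread, single socket: receive a datagram, relay it to the other known endpoints.
// 'clients' is an assumed collection of endpoints learned from earlier datagrams; 9050 is arbitrary.
Socket udp = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
udp.Bind(new IPEndPoint(IPAddress.Any, 9050));
byte[] buffer = new byte[2048];
EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
while (true)
{
    // blocks until a datagram arrives; 'sender' is filled with the source endpoint
    int read = udp.ReceiveFrom(buffer, ref sender);
    foreach (EndPoint client in clients)
        if (!client.Equals(sender))
            udp.SendTo(buffer, read, SocketFlags.None, client);
}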

I suggest you read this great article in MSDN Magazine.
It describes:
- a threaded server
- a select-based server
- an asynchronous server
with code in C# & VB.NET

Related

C# best way to implement TCP Client Server Application

I want to extend my experience with the .NET framework and want to build a client/server application.
Actually, the client/server is a small Point Of Sale system but first, I want to focus on the communication between server and client.
In the future, I want to make it a WPF application but for now, I simply started with a console application.
2 functionalities:
client(s) receive a dataset and, every 15/30 minutes, an update with changed prices/new products
(so the code would be in an async method with a Thread.Sleep for 15/30 minutes).
when closing the client application, sending a kind of a report (for example, an xml)
On the internet, I found lots of examples, but I can't decide which one is the best/safest/best-performing way of working, so I need some advice on which techniques I should implement.
CLIENT/SERVER
I want one server application that handles a maximum of 6 clients. I read that threads use a lot of memory, and maybe a better way would be tasks with async/await functionality.
Example with ASYNC/AWAIT
http://bsmadhu.wordpress.com/2012/09/29/simplify-asynchronous-programming-with-c-5-asyncawait/
Example with THREADS
mikeadev.net/2012/07/multi-threaded-tcp-server-in-csharp/
Example with SOCKETS
codereview.stackexchange.com/questions/5306/tcp-socket-server
This seems to be a great example of sockets; however, the revised code doesn't work completely because not all of the classes are included.
msdn.microsoft.com/en-us/library/fx6588te(v=vs.110).aspx
This MSDN example does a lot more, with a buffer size and a signal for the end of a message. I don't know if this is just an "old way" of doing it, because in my previous examples they just send a string from the client to the server and that's it.
.NET FRAMEWORK REMOTING/ WCF
I also found something about .NET Remoting and WCF, but I don't know if I need to implement this, because I think the example with Async/Await isn't bad.
SERIALIZED OBJECTS / DATASET / XML
What is the best way to send data between them? Just an XML serializer, or binary?
Example with Dataset -> XML
stackoverflow.com/questions/8384014/convert-dataset-to-xml
Example with Remoting
akadia.com/services/dotnet_dataset_remoting.html
If I should use the Async/Await method, is it right to do something like this in the server application:
while (true)
{
    string input = Console.ReadLine();
    if (input == "products")
        SendProductToClients(port);
    if (input == "rapport")
    {
        string Example = Console.ReadLine();
    }
}
Here are several things anyone writing a client/server application should consider:
Application layer packets may span multiple TCP packets.
Multiple application layer packets may be contained within a single TCP packet.
Encryption.
Authentication.
Lost and unresponsive clients.
Data serialization format.
Thread based or asynchronous socket readers.
Retrieving packets properly requires a wrapper protocol around your data. The protocol can be very simple. For example, it may be as simple as an integer that specifies the payload length. The snippet I have provided below was taken directly from the open source client/server application framework project DotNetOpenServer available on GitHub. Note this code is used by both the client and the server:
private byte[] buffer = new byte[8192];
private int payloadLength;
private int payloadPosition;
private MemoryStream packet = new MemoryStream();
private PacketReadTypes readState;
private Stream stream;
private void ReadCallback(IAsyncResult ar)
{
try
{
int available = stream.EndRead(ar);
int position = 0;
while (available > 0)
{
int lengthToRead;
if (readState == PacketReadTypes.Header)
{
lengthToRead = (int)packet.Position + available >= SessionLayerProtocol.HEADER_LENGTH ?
SessionLayerProtocol.HEADER_LENGTH - (int)packet.Position :
available;
packet.Write(buffer, position, lengthToRead);
position += lengthToRead;
available -= lengthToRead;
if (packet.Position >= SessionLayerProtocol.HEADER_LENGTH)
readState = PacketReadTypes.HeaderComplete;
}
if (readState == PacketReadTypes.HeaderComplete)
{
packet.Seek(0, SeekOrigin.Begin);
BinaryReader br = new BinaryReader(packet, Encoding.UTF8);
ushort protocolId = br.ReadUInt16();
if (protocolId != SessionLayerProtocol.PROTOCAL_IDENTIFIER)
throw new Exception(ErrorTypes.INVALID_PROTOCOL);
payloadLength = br.ReadInt32();
readState = PacketReadTypes.Payload;
}
if (readState == PacketReadTypes.Payload)
{
lengthToRead = available >= payloadLength - payloadPosition ?
payloadLength - payloadPosition :
available;
packet.Write(buffer, position, lengthToRead);
position += lengthToRead;
available -= lengthToRead;
payloadPosition += lengthToRead;
if (packet.Position >= SessionLayerProtocol.HEADER_LENGTH + payloadLength)
{
if (Logger.LogPackets)
Log(Level.Debug, "RECV: " + ToHexString(packet.ToArray(), 0, (int)packet.Length));
MemoryStream handlerMS = new MemoryStream(packet.ToArray());
handlerMS.Seek(SessionLayerProtocol.HEADER_LENGTH, SeekOrigin.Begin);
BinaryReader br = new BinaryReader(handlerMS, Encoding.UTF8);
if (!ThreadPool.QueueUserWorkItem(OnPacketReceivedThreadPoolCallback, br))
throw new Exception(ErrorTypes.NO_MORE_THREADS_AVAILABLE);
Reset();
}
}
}
stream.BeginRead(buffer, 0, buffer.Length, new AsyncCallback(ReadCallback), null);
}
catch (ObjectDisposedException)
{
Close();
}
catch (Exception ex)
{
ConnectionLost(ex);
}
}
private void Reset()
{
readState = PacketReadTypes.Header;
packet = new MemoryStream();
payloadLength = 0;
payloadPosition = 0;
}
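For context, the matching send side of such a length-prefixed protocol is very small. The sketch below is illustrative only and does not reproduce DotNetOpenServer's actual API; the field layout (a 2-byte protocol identifier followed by a 4-byte payload length) simply mirrors what the reader above expects:
// Illustrative only: write a header (protocol id + payload length) followed by the payload
void SendPacket(Stream stream, ushort protocolId, byte[] payload)
{
    using (var ms = new MemoryStream())
    using (var bw = new BinaryWriter(ms, Encoding.UTF8))
    {
        bw.Write(protocolId);       // 2-byte protocol identifier
        bw.Write(payload.Length);   // 4-byte payload length
        bw.Write(payload);          // payload bytes
        byte[] packet = ms.ToArray();
        stream.Write(packet, 0, packet.Length);
    }
}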
If you're transmitting point of sale information, it should be encrypted. I suggest TLS, which is easily enabled through .NET. The code is very simple and there are quite a few samples out there, so for brevity I'm not going to show it here. If you are interested, you can find an example implementation in DotNetOpenServer.
All connections should be authenticated. There are many ways to accomplish this. I've used Windows Authentication (NTLM) as well as Basic. Although NTLM is powerful as well as automatic, it is limited to specific platforms. Basic authentication simply passes a username and password after the socket has been encrypted. Basic authentication can still, however, authenticate the username/password combination against the local server or domain controller, essentially impersonating NTLM. The latter method enables developers to easily create non-Windows client applications that run on iOS, Mac, Unix/Linux flavors as well as Java platforms (although some Java implementations support NTLM). Your server implementation should never allow application data to be transferred until after the session has been authenticated.
There are only a few things we can count on: taxes, networks failing and client applications hanging. It's just the nature of things. Your server should implement a method to clean up both lost and hung client sessions. I've accomplished this in many client/server frameworks through a keep-alive (AKA heartbeat) protocol. On the server side I implement a timer that is reset every time a client sends a packet, any packet. If the server doesn't receive a packet within the timeout, the session is closed. The keep-alive protocol is used to send packets when the other application layer protocols are idle. Since your application only sends XML once every 15 minutes, sending a keep-alive packet once a minute would enable the server side to issue an alert to the administrator when a connection is lost prior to the 15 minute interval, possibly enabling the IT department to resolve a network issue in a more timely fashion.
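A minimal sketch of that server-side timeout, assuming a hypothetical 60-second window and reusing the Close() idea from the read callback above:
// Sketch: one System.Threading.Timer per session; any packet from the client pushes the deadline out
private Timer keepAliveTimer;

private void StartKeepAliveTimer()
{
    // fire once after 60 seconds unless reset (60 seconds is an arbitrary choice)
    keepAliveTimer = new Timer(OnKeepAliveTimeout, null, 60000, Timeout.Infinite);
}

private void OnPacketReceived()
{
    // called whenever any packet arrives; push the timeout back out
    keepAliveTimer.Change(60000, Timeout.Infinite);
}

private void OnKeepAliveTimeout(object state)
{
    // nothing received within the window: treat the session as lost and clean up
    Close();
}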
Next, data format. In your case XML is great. XML enables you to change up the payload however you want whenever you want. If you really need speed, then binary will always trump the bloated nature of string represented data.
Finally, as #NSFW already stated, threads or asynchronous doesn't really matter in your case. I've written servers that scale to 10000 connections based on threads as well as asynchronous callbacks. It's all really the same thing when it comes down to it. As #NSFW said, most of us are using asynchronous callbacks now and the latest server implementation I've written follows that model as well.
Threads are not terribly expensive, considering the amount of RAM available on modern systems, so I don't think it's helpful to optimize for a low thread count. Especially if we're talking about a difference between 1 thread and 2-5 threads. (With hundreds or thousands of threads, the cost of a thread starts to matter.)
But you do want to optimize for minimal blocking of whatever threads you do have. So for example instead of using Thread.Sleep to do work on 15 minute intervals, just set a timer, let the thread return, and trust the system to invoke your code 15 minutes later. And instead of blocking operations for reading or writing information over the network, use non-blocking operations.
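For example, a minimal sketch of the timer approach, reusing the SendProductToClients method and port from your example (the 15-minute interval is the one you mentioned):
// Sketch: let a timer invoke the update on a thread-pool thread every 15 minutes,
// instead of an async method sleeping in a loop
var updateTimer = new System.Threading.Timer(
    _ => SendProductToClients(port),
    null,
    TimeSpan.FromMinutes(15),    // first run after 15 minutes
    TimeSpan.FromMinutes(15));   // then every 15 minutes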
The async/await pattern is the new hotness for asynchronous programming on .Net, and it is a big improvement over the Begin/End pattern that dates back to .Net 1.0. Code written with async/await is still using threads, it is just using features of C# and .Net to hide a lot of the complexity of threads from you - and for the most part, it hides the stuff that should be hidden, so that you can focus your attention on your application's features rather than the details of multi-threaded programming.
So my advice is to use the async/await approach for all of your IO (network and disk) and use timers for periodic chores like sending those updates you mentioned.
And about serialization...
One of the biggest advantages of XML over binary formats is that you can save your XML transmissions to disk and open them up using readily-available tools to confirm that the payload really contains the data that you thought would be in there. So I tend to avoid binary formats unless bandwidth is scarce - and even then, it's useful to develop most of the app using a text-friendly format like XML, and then switch to binary after the basic mechanics of sending and receiving data have been fleshed out.
So my vote is for XML.
And regarding your code example, well, there's no async/await in it...
But first, note that a typical simple TCP server will have a small loop that listens for incoming connections and starts a thread to handle each new connection. The code for the connection thread will then listen for incoming data, process it, and send an appropriate response. So the listen-for-new-connections code and the handle-a-single-connection code are completely separate.
So anyway, the connection thread code might look similar to what you wrote, but instead of just calling ReadLine you'd do something like "string line = await ReadLine();" The await keyword is approximately where your code lets one thread exit (after invoking ReadLine) and then resumes on another thread (when the result of ReadLine is available). Except that awaitable methods should have a name that ends with Async, for example ReadLineAsync. Reading a line of text from the network is not a bad idea, but you'll have to write ReadLineAsync yourself, building upon the existing network API.
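For illustration, a per-connection handler along those lines might look roughly like this. It assumes newline-delimited text messages (a protocol choice on your part) and leans on StreamReader, which already provides a ReadLineAsync in .NET 4.5, rather than writing one from scratch:
// Sketch only: one async handler per accepted TcpClient, no thread blocked while waiting for data
async Task HandleClientAsync(TcpClient client)
{
    using (NetworkStream stream = client.GetStream())
    using (var reader = new StreamReader(stream, Encoding.UTF8))
    using (var writer = new StreamWriter(stream, Encoding.UTF8) { AutoFlush = true })
    {
        string line;
        // ReadLineAsync returns null when the client closes the connection
        while ((line = await reader.ReadLineAsync()) != null)
        {
            // process 'line' and send a reply
            await writer.WriteLineAsync("echo: " + line);
        }
    }
}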
I hope this helps.

Simple One Thread Socket - TCP Server

First, I don't know if Stack Overflow is the best site to post this kind of question, but I don't know of other sites like it.
In order to properly understand TCP programming in C#, I decided to work through all the possible approaches from scratch. Here is what I want to learn (not in the right order):
- Simple One Thread Socket Server (this article)
- Simple Multiple Threads Socket Server (I don't know how, cause threads are complicated)
- Simple Thread Socket Server (put the client management in another thread)
- Multiple Threads Socket Server
- Using tcpListener
- Using async / Await
- Using tasks
The ultimate objective is to know how to write the best possible TCP server, without just copy/pasting parts of code, but properly understanding everything.
So, this is my first part : a single thread tcp server.
Here is my code, but I don't think anybody will have much to correct, because it's pretty much a copy from MSDN: http://msdn.microsoft.com/en-us/library/6y0e13d3(v=vs.110).aspx
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Text;
namespace SimpleOneThreadSocket
{
public class ServerSocket
{
private int _iPport = -1;
private static int BUFFER_SIZE = 1024;
private Socket _listener = null;
public ServerSocket(int iPort)
{
// Create a TCP/IP socket.
this._listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// Save the port
this._iPport = iPort;
}
public void Start()
{
byte[] buffer = null;
String sDatasReceived = null;
// Bind the socket to loopback address
try
{
this._listener.Bind(new System.Net.IPEndPoint(System.Net.IPAddress.Loopback, _iPport));
this._listener.Listen(2);
}
catch (Exception e)
{
System.Console.WriteLine(e.ToString());
}
// Listening
try
{
Console.WriteLine("Server listening on 127.0.0.1:" + _iPport);
while (true)
{
Socket client = this._listener.Accept();
Console.WriteLine("Incoming connection from : " + IPAddress.Parse(((IPEndPoint)client.RemoteEndPoint).Address.ToString()) + ":" + ((IPEndPoint)client.RemoteEndPoint).Port.ToString());
// An incoming connection needs to be processed.
while (true)
{
buffer = new byte[BUFFER_SIZE];
int bytesRec = client.Receive(buffer);
sDatasReceived += Encoding.ASCII.GetString(buffer, 0, bytesRec);
if (sDatasReceived.IndexOf("<EOF>") > -1)
{
// Show the data on the console.
Console.WriteLine("Text received : {0}", sDatasReceived);
// Echo the data back to the client.
byte[] msg = Encoding.ASCII.GetBytes(sDatasReceived);
client.Send(msg);
sDatasReceived = "";
buffer = null;
}
else if (sDatasReceived.IndexOf("exit") > -1)
{
client.Shutdown(SocketShutdown.Both);
client.Close();
sDatasReceived = "";
buffer = null;
break;
}
}
}
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
}
}
}
But I have some questions about that :
The Listen method of Socket has a parameter: backlog. According to MSDN, backlog is the number of available connections. I don't know why, when I put 0, I can connect to my server with multiple Telnet sessions. EDIT: 0 & 1 both allow 2 connections (1 current, 1 pending), 2 allows 3 connections (1 current, 2 pending), etc... So I didn't understand the MSDN description well.
Can you confirm that the Accept method will take each connection one after another, and that's why I see text from different Telnet sessions in my server?
Can you confirm that (my server is a C# library) I can't kill my server (with this kind of code) without killing the process? It could be possible with threads, but that will come later.
If something is wrong in my code, please help me :)
I will come back soon with a simple multiple-thread socket server, but I don't know how yet (I think there is one more step before using threads or async/await).
First off, do your best not to even learn this. If you can possibly use a SignalR server, then do so. There is no such thing as a "simple" socket server at the TCP/IP level.
If you insist on the painful route (i.e., learning proper TCP/IP server design), then there's a lot to learn. First, the MSDN examples are notoriously bad starting points; they barely work and tend to not handle any kind of error conditions, which is absolutely necessary in the real world when working at the TCP/IP level. Think of them as examples of how to call the methods, not examples of socket clients or servers.
I have a TCP/IP FAQ that may help you, including a description of the backlog parameter. This is how many connections the OS will accept on your behalf before your code gets around to accepting them, and it's only a hint anyway.
To answer your other questions: A single call to Accept will accept a single new socket connection. The code as-written has an infinite loop, so it will work like any other infinite loop; it will continue executing until it encounters an exception or its thread is aborted (which happens on process shutdown).
If something is wrong in my code, please help me
Oh, yes. There are lots of things wrong with this code. It's an MSDN socket example, after all. :) Off the top of my head:
The buffer size is an arbitrary value, rather low. I would start at 8K myself, so it's possible to get a full Ethernet packet in a single read.
The Bind explicitly uses the loopback address. OK for playing around, I guess, but remember to set this to IPAddress.Any in the real world.
backlog parameter is OK for testing, but should be int.MaxValue on a true server to enable the dynamic backlog in modern server OSes.
Code will fall through the first catch and attempt to Accept after a Bind/Listen failed.
If any exception occurs (e.g., from Listen or Receive), then the entire server shuts down. Note that a client socket being terminated will result in an exception that should be logged/ignored, but it would stop this server.
The read buffer is re-allocated on each time through the loop, even though the old buffer is never used again.
ASCII is a lossy encoding.
If a client cleanly shuts down without sending <EOF>, then the server enters an infinite busy loop.
Received data is not properly separated into messages; it is possible that the echoed message contains all of one message and part of another. In this particular example it doesn't matter (since it's just an echo server and it's using ASCII instead of a real encoding), but this example hides the fact that you need to handle message framing properly in any real-world application.
The decoding should be done after the message framing. This isn't necessary for ASCII (a lossy encoding), but it's required for any real encodings like UTF8.
Since the server is only either receiving or sending at any time (and never both), it cannot detect or recover from a half-open socket situation. A half-open socket will cause this server to hang.
The server is only capable of a single connection at a time.
That was just after a brief readthrough. There could easily be more.

How can I read from a socket repeatedly?

To start I am coding in C#. I am writing data of varying sizes to a device through a socket. After writing the data I want to read from the socket because the device will write back an error code/completion message once it has finished processing all of the data. Currently I have something like this:
byte[] resultErrorCode = new byte[1];
resultErrorCode[0] = 255;
while (resultErrorCode[0] == 255)
{
    try
    {
        ReadFromSocket(ref resultErrorCode);
    }
    catch (Exception)
    {
    }
}
Console.WriteLine(ErrorList[resultErrorCode[0] - 48]);
I use ReadFromSocket in other places, so I know that it is working correctly. What ends up happening is that the port I am connecting from (on my machine) changes to random ports. I think that this causes the firmware on the other side to have a bad connection. So when I write data on the other side, it tries to write data to the original port that I connected through, but after trying to read several times, the connection port changes on my side.
How can I read from the socket continuously until I receive a completion command? I know that something is wrong with the loop because, for my smallest test file, it takes 1 min and 13 seconds pretty consistently. I have tested the code by removing the loop and putting the code to sleep for 1 min and 15 seconds. When it resumes, it successfully reads the completion command that I am expecting. Does anyone have any advice?
What you should have is a separate thread which will act like a driver for your external hardware. This thread will receive all data, parse it and transmit the appropriate messages to the rest of your application. This portion of code will give you an idea of how to receive and parse data from your hardware.
public void ContinuousReceive()
{
    byte[] buffer = new byte[1024];
    bool terminationCodeReceived = false;
    while (!terminationCodeReceived)
    {
        try
        {
            if (server.Receive(buffer) > 0)
            {
                // We got something
                // Parse the received data and check if the termination code
                // is received or not
            }
        }
        catch (SocketException e)
        {
            Console.WriteLine("Oops! Something bad happened: " + e.Message);
        }
    }
}
Notes:
If you want to open a specific port on your machine (some external hardware is configured to talk to a predefined port), then you should specify that when you create your socket.
Never close your socket until you want to stop your application or the external hardware's API requires it. Keeping your socket open will resolve the random port change.
Using Thread.Sleep when dealing with external hardware is not a good idea. When possible, you should either use events (in the case of RS232 connections) or blocking calls on separate threads, as is the case in the code above.
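Putting it together, the loop above would typically be started once on its own background thread, for example:
// Sketch: run the driver loop on a dedicated background thread so it never blocks
// the rest of the application
Thread receiveThread = new Thread(ContinuousReceive);
receiveThread.IsBackground = true;
receiveThread.Start();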

Why would TcpListener be leaking ESTABLISHED connections?

I have an application that's listening for messages from modems in some 30 cars. I've used TcpListener to implement server code that looks like this (error handling elided):
...
listener.Start();
...

void BeginAcceptTcpClient()
{
    if (listener.Server.IsBound) {
        listener.BeginAcceptTcpClient(TcpClientAccepted, null);
    }
}

void TcpClientAccepted(IAsyncResult ar)
{
    var buffer = new byte[bufferSize];
    var total = 0;
    BeginAcceptTcpClient();
    using (var client = EndAcceptTcpClient(ar)) {
        using (var stream = client.GetStream()) {
            var count = 0;
            while ((count = stream.Read(buffer, total, bufferSize - total)) > 0) {
                total += count;
            }
        }
        DoSomething(buffer);
    }
}
I get the messages correctly; my problem lies with disconnections. Every 12 hours the modems get reset and get a new IP address, but the server continues to hold the old connections active (they are marked as ESTABLISHED in TcpView). Is there any way to set a timeout for the old connections? I thought that by closing the TcpClient the TCP connection was closed (and that's what happens in my local tests), so what am I doing wrong?
I'm actually a little confused by the code sample - the question suggests that these connections are open a reasonably long time, but the code is more typical for very short bursts; for a long-running connection, I would expect to see one of the async APIs here, not the sync API.
Sockets that die without trace are very common, especially when distributed with a number of intermediate devices that would all need to spot the shutdown. Wireless networks in particular sometimes try to keep sockets artificially alive, since it is pretty common to briefly lose a wireless connection, as the devices don't want that to kill every connection every time.
As such, it is pretty common to implement some kind of heartbeat on connections, so that you can keep track of who is still really alive.
As an example - I have a websocket server here, which in theory handles both graceful shutdowns (via a particular sequence that indicates closure), and ungraceful socket closure (unexpectedly terminating the connection) - but of the 19k connections I've seen in the last hour or so, 70 have died without hitting either of those. So instead, I track activity against a (slow) heartbeat, and kill them if they fail to respond after too long.
Re timeout; you can try the ReceiveTimeout, but that will only help you if you aren't usually expecting big gaps in traffic.
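For what it's worth, a minimal sketch of that option on the accepted client (30 seconds is an arbitrary value chosen for illustration):
// Sketch: make blocking reads give up after 30 seconds instead of hanging forever on a dead
// connection; Read/Receive will then throw, which you can treat as a lost client
client.ReceiveTimeout = 30000;   // milliseconds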

C# Socket.SendTo in a loop eventually causes SocketException (depends on the router)

I am doing some basic Socket messaging. I have a routine that works well but there is a problem under load.
I'm using UDP to do a connectionless SendTo to basically do a ping-like operation to see if any of my listeners are out there on the LAN. Ideally I would just use the broadcast address, but wireless routers don't seem to relay my broadcast. My workaround is to iterate through all IPs on the subnet and send my datagram to each IP. The other PCs are listening, and if they get the message they will reply, and that is how peers find each other. Here is the code in the loop which sends the datagram to each IP in the subnet.
string msgStr = "some message here...";
byte[] sendbuf = Encoding.ASCII.GetBytes(msgStr);
Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
socket.Blocking = true;
socket.SendTo(sendbuf, remoteEndPt);
//socket.Close();
This works, but when the Subnet range is large, say 255.255.0.0 (meaning ~60,000 IPs to iterate through) I will eventually get a SocketException with error code "10022", meaning "Invalid Argument". This tends to happen after ~10,000 or so successful sends then I start to see this error. Also, the router I use at work handles it and is presumably a high powered router, but the cheap-o one in my lab is the one that produces the error.
If I put in a wait time after catching the SocketException and before resuming the loop it will typically recover but eventually I'll get the error again.
I think what is happening is that the buffer on the router gets full and I cannot send anymore data. The higher quality one at work can handle it but the cheap-o one gets bogged down. Does that sound plausible?
A couple questions:
1) When using SendTo in a connectionless manner, do I need to call Close() on my Socket?
I haven't seen any benefit in calling Close(), but when I do call Close() it severely slows down my iteration (I have it commented out above because it slows things down a lot). Does this make sense?
2) Is there a way for me to tell I should wait before trying to send more data? It doesn't seem right to just catch the Exception which I still don't know what the cause of it is.
Thanks, J.
I am not sure it is only the router; I suspect that you are also running into some limit in the OS...
Any reason you are creating the Socket every time you send?
Just reuse it...
Anyway, according to http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx it is a good idea to call Shutdown() and then Close() on the Socket... perhaps not with every send, but every 255 IPs or so...
Check out UdpClient - that could make the implementation easier / more robust.
EDIT - as per comment:
If you want a Socket reuse "cache"... this, for example, would make sure that a specific Socket is only reused every 256 checks...
// build/fill your Socket-Queue, for example in the constructor
class SocketExample
{
    Queue<Socket> a = new Queue<Socket>();

    SocketExample()
    {
        int ii = 0, C = 256;
        for (ii = 0; ii < C; ii++)
        {
            a.Enqueue(new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp));
        }
    }

    // in your function you just dequeue a Socket and use it,
    // after you are finished you enqueue it
    void CheckNetIP (some parameters...)
    {
        Socket S = a.Dequeue();
        // do whatever you want to do...
        // IF there is no exception
        a.Enqueue(S);
    }
}
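If you go the UdpClient route mentioned above, the send loop itself shrinks to something like this (EnumerateSubnetEndpoints is a hypothetical helper standing in for your existing subnet iteration):
// Sketch: one reused UdpClient for the whole scan instead of a new Socket per IP
byte[] sendbuf = Encoding.ASCII.GetBytes("some message here...");
using (var udp = new UdpClient())
{
    foreach (IPEndPoint remoteEndPt in EnumerateSubnetEndpoints())   // hypothetical helper
    {
        udp.Send(sendbuf, sendbuf.Length, remoteEndPt);
    }
}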
