I am currently creating a Windows Service that will open TCP connections to multiple machines (the same port on all machines) and then listen for 'events' from those machines. I am trying to write code that creates a connection, spawns a thread that listens to the connection waiting for packets from the machine, decodes the packets as they arrive, and calls a function depending on the payload of each packet.
The problem is I'm not entirely sure how to do that in C#. Does anyone have any helpful suggestions or links that might help me do this?
Thanks in advance for any help!
Depending on how many concurrent clients you plan on supporting, a thread-per-connection architecture will probably break down very quickly. Reason being, each thread requires significant resources. By default each .NET thread gets 1MB of stack space so that's 1MB per connection plus any overhead.
Instead, when supporting multiple connected clients, you will typically use the asynchronous methods (see here also), which are very efficient because Windows uses "completion ports" that basically free up the thread to do other things while waiting on some event to complete.
For this you would look at methods such as BeginAccept, BeginReceive, BeginSend, etc.
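For illustration, here is a minimal sketch of what an asynchronous accept/receive loop could look like (the class name, state object and buffer size are my own assumptions, not anything from the question):

using System;
using System.Net;
using System.Net.Sockets;

class AsyncServerSketch
{
    // Per-connection state passed through the async callbacks.
    class ClientState
    {
        public Socket Socket;
        public byte[] Buffer = new byte[4096];
    }

    Socket listener;

    public void Start(int port)
    {
        listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, port));
        listener.Listen(100);
        listener.BeginAccept(OnAccept, null);   // returns immediately; no thread is blocked
    }

    void OnAccept(IAsyncResult ar)
    {
        Socket client = listener.EndAccept(ar);
        listener.BeginAccept(OnAccept, null);   // keep accepting further clients

        var state = new ClientState { Socket = client };
        client.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnReceive, state);
    }

    void OnReceive(IAsyncResult ar)
    {
        var state = (ClientState)ar.AsyncState;
        int read = state.Socket.EndReceive(ar);
        if (read == 0) { state.Socket.Close(); return; }   // remote side closed the connection

        // ... decode state.Buffer[0..read) and dispatch here ...

        // Post the next receive so the connection keeps being serviced.
        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnReceive, state);
    }
}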
A simpler approach which also avoids making blocking calls and avoids multiple threads is to use the Socket.Select method in a loop. This allows a single thread to service multiple sockets. The thread can only physically read or write to a single socket at a time but the idea is that you are checking the state of multiple sockets which may or may not contain data to read.
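Something along these lines, as a rough sketch (the class name, buffer size and timeout are arbitrary choices of mine, and the connection list is assumed to be filled by your own connect code):

using System.Collections.Generic;
using System.Net.Sockets;

class SelectPollerSketch
{
    // 'connections' is assumed to already hold the connected client sockets.
    readonly List<Socket> connections = new List<Socket>();
    readonly byte[] buffer = new byte[4096];

    public void Poll()
    {
        while (true)
        {
            if (connections.Count == 0) { System.Threading.Thread.Sleep(100); continue; }

            // Select trims the list down to the sockets that are ready to read,
            // so hand it a fresh copy on every pass.
            var readable = new List<Socket>(connections);
            Socket.Select(readable, null, null, 1000000);     // timeout in microseconds

            foreach (Socket s in readable)
            {
                int read = s.Receive(buffer);                 // won't block: Select said it's readable
                if (read == 0) { s.Close(); connections.Remove(s); continue; }
                // ... decode buffer[0..read) here ...
            }
        }
    }
}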
In any case, the thread-per-connection approach is much simpler to get your head around at first, but it does have significant scalability problems. I would suggest doing that first with the synchronous methods like Accept, Receive, Send, etc. Then later on refactor your code to use the asynchronous methods so that you don't exhaust the server's memory.
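For the synchronous starting point, the skeleton might look roughly like this, with one dedicated thread per accepted client (the names, port and buffer size are my own placeholders):

using System.Net;
using System.Net.Sockets;
using System.Threading;

class BlockingServerSketch
{
    public void Run(int port)
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, port));
        listener.Listen(10);

        while (true)
        {
            Socket client = listener.Accept();               // blocks until a client connects
            var worker = new Thread(() =>
            {
                var buffer = new byte[4096];
                int read;
                while ((read = client.Receive(buffer)) > 0)  // blocks until data arrives
                {
                    // ... decode buffer[0..read) and dispatch here ...
                }
                client.Close();
            });
            worker.IsBackground = true;
            worker.Start();
        }
    }
}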
You can have an asynchronous receive on every socket connection and decode the data coming from the other machines to perform your tasks (you can find some useful information about asynchronous methods here: http://msdn.microsoft.com/en-us/library/2e08f6yc.aspx).
To create a connection, you can do:
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
sock.Connect(new IPEndPoint(address, port));
In the same way you can create multiple connections and keep their references in a List or Dictionary (whichever you prefer).
For receiving data asynchronously on a socket, you can do:
sock.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, new AsyncCallback(OnDataReceived), null);
The method OnDataReceived will be called when the receive operation completes.
In the OnDataReceived method you have to do:
void OnDataReceived(IAsyncResult result)
{
    int dataReceived = sock.EndReceive(result);
    // Make sure that dataReceived equals the amount of data you wanted to receive. If it is
    // less than the data you wanted, you can do a synchronous Receive to read the remaining data.
    // When all of the data has been received, call BeginReceive again to receive more data.

    // ... do your decoding and call the appropriate method here ...
}
I hope this helps you.
Have a single thread that runs the accept() to pick up new connections. For each new connection you get, spawn a worker thread using the thread pool.
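As a rough sketch of that idea (HandleClient is a hypothetical method standing in for your decode/dispatch logic):

using System.Net.Sockets;
using System.Threading;

class AcceptDispatcher
{
    // One dedicated thread runs this loop; each accepted socket is handed to the pool.
    public void AcceptLoop(Socket listener)
    {
        while (true)
        {
            Socket client = listener.Accept();   // blocks until a connection arrives
            ThreadPool.QueueUserWorkItem(state => HandleClient((Socket)state), client);
        }
    }

    // Hypothetical worker: reads from the socket, decodes, and dispatches events.
    void HandleClient(Socket client)
    {
        // ... Receive/decode loop goes here ...
    }
}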
I don't know if it's possible in your situation, but have you thought about using a WCF service that gets called by the multiple machines? You can host this in a custom Windows service or in IIS. It will consume very little resource while waiting for events, and it's much simpler to code than all that low-level scary socket stuff. It's automatically async, you get nice messages to your service rather than a packet you need to deserialize and/or parse, and you can use any number of bindings such as HTTP/REST or binary TCP.
You will of course need to create the process on the other end that sends the messages.
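For what it's worth, a skeleton of such a service might look something like this; the contract name, operation and binding are all assumptions on my part:

using System.ServiceModel;

[ServiceContract]
public interface IEventSink
{
    // One-way: the sending machine fires the event and does not wait for a reply.
    [OperationContract(IsOneWay = true)]
    void ReportEvent(string machineName, string payload);
}

public class EventSink : IEventSink
{
    public void ReportEvent(string machineName, string payload)
    {
        // ... dispatch to the appropriate handler based on the payload ...
    }
}

// Hosting inside the Windows service's OnStart, for example:
// var host = new ServiceHost(typeof(EventSink), new Uri("net.tcp://localhost:9000/events"));
// host.AddServiceEndpoint(typeof(IEventSink), new NetTcpBinding(), "");
// host.Open();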
Just a thought...
Cheers
Related
I am trying to write a C# server/client pair that will simultaneously send byte arrays to each other over TCP. I'm trying to wrap my head around how to accomplish this. All of the examples I have seen wait for a message, then send a response. I need communication to happen simultaneously.
Would I need to create two separate TCP socket connections, one for incoming and one for outgoing, on both the server and the client? Or can I pass data simultaneously over one connection in a "full duplex" fashion? Any help is appreciated.
I would advise you to look at asynchronous sockets. The reason is that they don't block threads while receiving or sending data.
socket.BeginReceive(buffer, offset, size, SocketFlags.None, endReceiveMethod, null);
The endReceiveMethod callback will be called when bytes have been received (on another thread).
This is the same for sending.
socket.BeginSend(buffer, offset, size, SocketFlags.None, endSendMethod, null);
I remember in the early days I was worried about reading and writing on the same thread, creating difficult constructions with read timeouts etc. and giving each client its own thread.
This isn't needed with asynchronous sockets. They don't use a thread per client; they use I/O Completion Ports http://msdn.microsoft.com/en-us/library/windows/desktop/aa365198(v=vs.85).aspx instead of blocking threads.
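To answer the full-duplex part directly: one TCP connection is enough, since a receive and a send can both be outstanding on the same socket at the same time. A rough sketch, with the class and callback names being my own invention:

using System;
using System.Net.Sockets;

class DuplexPeer
{
    readonly Socket socket;
    readonly byte[] receiveBuffer = new byte[4096];

    public DuplexPeer(Socket connectedSocket)
    {
        socket = connectedSocket;
        // Start reading immediately; this does not stop us from sending below.
        socket.BeginReceive(receiveBuffer, 0, receiveBuffer.Length, SocketFlags.None, OnReceived, null);
    }

    public void Send(byte[] data)
    {
        // Can be called at any time, even while a receive is pending.
        socket.BeginSend(data, 0, data.Length, SocketFlags.None, OnSent, null);
    }

    void OnReceived(IAsyncResult ar)
    {
        int read = socket.EndReceive(ar);
        if (read == 0) { socket.Close(); return; }
        // ... handle receiveBuffer[0..read) ...
        socket.BeginReceive(receiveBuffer, 0, receiveBuffer.Length, SocketFlags.None, OnReceived, null);
    }

    void OnSent(IAsyncResult ar)
    {
        socket.EndSend(ar);
    }
}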
You should look into using the Select() method to listen on the server and client file descriptors (or fds). http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.select.aspx
Basically, if you have a TCP server socket, let us say fd0, and a client sends a connect, then the server fd creates a new fd for the new connection; let us call it fd1. Now you want to do three things with these: (a) listen for new incoming connections on fd0, (b) wait for data to receive on fd1, and (c) send data on fd1. Sending data is usually non-blocking, so you don't have to worry about that. But for (a) and (b) you can use a Select: if there is data to be read on fd1, you get a read event; likewise, if there is a new connection on fd0, you also get a read event and can call Accept.
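A rough C# version of that loop, where the listening socket and the connected sockets are checked in the same Select call (the port, buffer size and timeout are arbitrary):

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class SelectLoopSketch
{
    public void Run(int port)
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, port));
        listener.Listen(10);

        var clients = new List<Socket>();
        var buffer = new byte[4096];

        while (true)
        {
            // Check the listener ("fd0") and every connected client ("fd1", ...) for readability.
            var readable = new List<Socket> { listener };
            readable.AddRange(clients);
            Socket.Select(readable, null, null, 1000000);      // timeout in microseconds

            foreach (Socket s in readable)
            {
                if (s == listener)
                {
                    clients.Add(listener.Accept());            // (a) new incoming connection
                }
                else
                {
                    int read = s.Receive(buffer);              // (b) data ready on a client fd
                    if (read == 0) { s.Close(); clients.Remove(s); continue; }
                    // ... process buffer[0..read) ...
                }
            }
        }
    }
}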
I'm trying to write an async socket application which transfers complex objects between both sides.
I used the example here...
Everything is fine until I try to send multi-package data. When the transferred data requires multiple package transfers, the server application hangs and goes out of control without any errors...
After many hours I found a solution: if I close the client sender socket after each EndSend callback, the problem goes away. But I couldn't understand why this is necessary. Or is there any other solution for this situation?
My two projects are the same as the example above; I only changed the EndSend callback method as follows:
public void EndSendCallback(IAsyncResult result)
{
    Status status = (Status)result.AsyncState;
    int size = status.Socket.EndSend(result);
    status.Socket.Close(); // <--------------- This line solved the situation
    Console.Out.WriteLine("Send data: " + size + " bytes.");
    Console.ReadLine();
    allDone.Set();
}
Thanks..
This is due to the example code given not handling multiple packages (and being broken).
A few observations:
The server can only handle 1 client at a time.
The server simply checks whether the data arriving in a single read is smaller than the amount requested and, if so, assumes that it was the last part.
The server then ignores the client socket while leaving the connection open. This puts the responsibility of closing the connection on the client side which can be confusing and which will waste resources on the server.
Now the first observation is an implementation detail and not really relevant in your case. The second observation is relevant for you, since it will likely result in unexplained bugs, probably not in development, but when this code is actually running somewhere in a real scenario. Sockets are stream-oriented, not message-oriented: when the client sends over 1000 bytes, this might require 1 call to read on the server, or 10. A call to read simply returns as soon as there is 'some' data available. What you need to do is implement some sort of protocol that communicates either how much data is being sent over, or when all the data has been sent over (see the length-prefix sketch below). I really recommend just sticking with the HTTP protocol, since this is a well-tested and well-supported protocol that suits most scenarios.
The third observation might also cause bugs where the server is running out of resources since it leaves all connections open.
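As mentioned above, one simple way to communicate how much data is being sent is a 4-byte length prefix in front of every message. A minimal sketch, with the helper names being my own:

using System;
using System.Net.Sockets;

static class Framing
{
    // Sends the payload preceded by its length so the receiver knows when it has everything.
    public static void SendFramed(Socket socket, byte[] payload)
    {
        byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);   // 4 bytes
        socket.Send(lengthPrefix);
        socket.Send(payload);
    }

    // Reads exactly one framed message, looping until all bytes have arrived.
    public static byte[] ReceiveFramed(Socket socket)
    {
        byte[] lengthPrefix = ReceiveExactly(socket, 4);
        int length = BitConverter.ToInt32(lengthPrefix, 0);
        return ReceiveExactly(socket, length);
    }

    static byte[] ReceiveExactly(Socket socket, int count)
    {
        var buffer = new byte[count];
        int received = 0;
        while (received < count)
        {
            int read = socket.Receive(buffer, received, count - received, SocketFlags.None);
            if (read == 0) throw new SocketException();   // connection closed mid-message
            received += read;
        }
        return buffer;
    }
}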
I've been looking into the Asynchronous Client and Asynchronous Server Socket examples on MSDN and have happily punched up the example, which works flawlessly when one client connects to one server. My problem is that I need to synchronise a chunk of work with a number of machines so they execute at about the same time (within a millisecond or so of each other). The action is reasonably simple: talk to the child servers (all running on the same machine but on different ports for initial testing), simulate their processing, and send a 'Ready' signal back to the caller. Once all the servers have returned this flag (or a time-out occurs), a second message is passed from the client to the acknowledged servers telling them to execute.
My approach so far has been to create two client instances, stored within a list, and start the routine by looping through the list. This works well but not particularly fast, as each client's routine is run synchronously. To speed up the process, I created a new thread for each client and executed the routine on that. This does work, allowing two or more servers to return and synchronise appropriately. Unfortunately, it is very error prone and the code fails with an 'ObjectDisposedException' on the following line of the 'ReceiveCallback' method...
// Read data from the remote device.
int bytesRead = client.EndReceive(ar);
With some investigation and debugging I tracked the sockets being passed to the routine (using their handles) and found that, although it is reported as not connected, it is always the second socket to return that fails, and not the first, which successfully reads its response. In addition, these socket instances (based upon the handle value) appear to be separate instances, but somehow the second (and subsequent) responses continue to error out on this line.
What is causing these sockets to inappropriately dispose of themselves before being legitimately processed? As they are running in separate threads and there are no shared routines, is the first socket being inappropriately used by the other instances? Tbh, I feel a bit lost at sea, and while I could band-aid over these errors, sacrificing the reliability of the code and potentially losing returned acknowledgements is not a favourable outcome. Any pointers?
Kind regards
Turns out the shared/static ManualResetEvent was being set across the different instances, so thread 1 would set the ManualResetEvent and thereby dispose the socket on the second thread. By ensuring that no methods/properties were shared/static, each thread and socket executes under its own scope.
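In other words, the per-connection state has to belong to the connection rather than the class. A minimal sketch of that shape (names are my own):

using System;
using System.Net.Sockets;
using System.Threading;

// Everything a single client call needs lives in its own instance,
// so setting one client's event cannot affect another client's socket.
class ClientSession
{
    public Socket Socket;
    public byte[] Buffer = new byte[4096];
    public ManualResetEvent Done = new ManualResetEvent(false);   // deliberately not static

    public void ReceiveCallback(IAsyncResult ar)
    {
        int bytesRead = Socket.EndReceive(ar);
        // ... process this session's response only ...
        Done.Set();   // signals only the thread waiting on *this* session
    }
}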
I have a server that has several clients C1...Cn to each of which there is a TCP connection established. There are less than 10,000 clients.
The message protocol is request/response based, where the server sends a request to a client and then the client sends a response.
The server has several threads, T1...Tm, and each of these may send requests to any of the clients. I want to make sure that only one of these threads can send a request to a specific client at any one time, while the other threads wanting to send a request to the same client will have to wait.
I do not want to block threads from sending requests to different clients at the same time.
E.g. If T1 is sending a request to C3, another thread T2 should not be able to send anything to C3 until T1 has received its response.
I was thinking of using a simple lock statement on the socket:
lock (c3Socket)
{
// Send request to C3
// Get response from C3
}
I am using asynchronous sockets, so I may have to use Monitor instead:
Monitor.Enter(c3Socket); // Before calling .BeginReceive()
And
Monitor.Exit(c3Socket); // In .EndReceive
I am worried about stuff going wrong and not letting go of the monitor and therefore blocking all access to a client. I'm thinking that my heartbeat thread could use Monitor.TryEnter() with a timeout and throw out sockets that it cannot get the monitor for.
Would it make sense for me to make the Begin and End calls synchronous in order to be able to use the lock() statement? I know that I would be sacrificing concurrency for simplicity in this case, but it may be worth it.
Am I overlooking anything here? Any input appreciated.
My answer here would be a state machine per socket. The states would be free and busy:
If socket is free, the sender thread would mark it busy and start sending to client and waiting for response.
You might want to setup a timeout on that wait just in case a client gets stuck somehow.
If the state is busy, the thread sleeps, waiting for a signal.
When that client-related timeout expires, close the socket; the client is dead.
When a response is successfully received/parsed, mark the socket free again and signal/wakeup the waiting threads.
Only lock around socket state inquiry and manipulation, not the actual network IO. That means a lock per socket, plus some sort of wait primitive like a condition variable (in .NET, Monitor.Wait/Pulse on the lock object serves this purpose).
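A sketch of what that could look like in C#, using Monitor.Wait/Pulse as the wait primitive (all names here are my own assumptions):

using System;
using System.Threading;

class ClientChannel
{
    readonly object gate = new object();
    bool busy;

    // Marks the channel busy, waiting if another thread currently owns it.
    public void Acquire(TimeSpan timeout)
    {
        lock (gate)
        {
            while (busy)
            {
                if (!Monitor.Wait(gate, timeout))
                    throw new TimeoutException("Client did not become free in time.");
            }
            busy = true;
        }
    }

    // Called after the response has been received (or the client declared dead).
    public void Release()
    {
        lock (gate)
        {
            busy = false;
            Monitor.Pulse(gate);   // wake one waiting sender thread
        }
    }
}

// Usage from a sender thread Tn, with the actual send/receive done outside the lock:
// channel.Acquire(TimeSpan.FromSeconds(30));
// try { /* send request to Cx, wait for response */ }
// finally { channel.Release(); }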
Hope this helps.
You certainly can't use the locking approach that you've described. Since your system is primarily asynchronous, you can't know which thread your operations will run on. This means that you may call Exit on the wrong thread (and have a SynchronizationLockException thrown), or some other thread may call Enter and succeed even though that client is "in use", just because it happened to get the same thread that Enter was originally called on.
I'd agree with Nikolai that you need to hold some additional state alongside each socket to determine whether it is currently in use or not. You would of course need locking to update this shared state.
My question is quite simple and is about asynchronous sockets working with the TCP protocol.
When I send some data with the "BeginSend" method, when will the callback be called?
Will it be called when the data has just been sent out to the network, or only once we are ensured that the data has reached its destination (as it should, according to the TCP specification)?
Thanks for your answers.
KiTe.
ps : I'm sorry if my english is a bit bad ^^.
From MSDN:
"When your application calls BeginSend, the system will use a separate thread to execute the specified callback method, and will block on EndSend until the Socket sends the number of bytes requested or throws an exception."
"The successful completion of a send does not indicate that the data was successfully delivered. If no buffer space is available within the transport system to hold the data to be transmitted, send will block unless the socket has been placed in nonblocking mode."
http://msdn.microsoft.com/en-us/library/38dxf7kt.aspx
When the callback is called, you can be sure that the data has been handed over to the transport's send buffer (the asynchronous operation uses a separate thread so that your calling thread is not blocked in case there is no room in the transmit buffer and it has to wait to send the data) and that it will reach its destination, but not that it has reached it yet.
Because of the TCP protocol's nature, however, you can be sure (well, I guess almost sure) that it will get to the destination eventually.
However, for timing purposes you should not consider the time of the callback as being the same as the time the data reaches the other party.
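To make the distinction concrete, here is a minimal sketch of an asynchronous send with comments on what the callback does and does not guarantee (the class and names are my own illustration):

using System;
using System.Net.Sockets;
using System.Text;

// Minimal illustration of what the BeginSend callback does and does not tell you.
class SendExample
{
    readonly Socket socket;   // assumed to be already connected

    public SendExample(Socket connectedSocket) { socket = connectedSocket; }

    public void SendMessage(string text)
    {
        byte[] data = Encoding.UTF8.GetBytes(text);
        socket.BeginSend(data, 0, data.Length, SocketFlags.None, OnSendCompleted, null);
    }

    void OnSendCompleted(IAsyncResult ar)
    {
        // EndSend returns once the bytes have been accepted by the local transport buffer.
        // TCP will keep retransmitting until they arrive, but at this point there is
        // no guarantee that the remote application has received or processed them yet.
        int bytesQueued = socket.EndSend(ar);
        Console.WriteLine("Queued {0} bytes for transmission.", bytesQueued);
    }
}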