Evening,
I am writing this aware that I am missing some fundamental understanding of named pipe server/client architecture, and I hope somebody can help me. My goal is to have a server application that a client can connect to and, in case the client crashes, a restart of the client application should allow it to reconnect to the server. Both applications run on the same machine.
The scenario: I have an application/program that opens a named pipe server, like so:
// Provide full access to the current user so more pipe instances can be created
PipeSecurity m_ps = new PipeSecurity();
m_ps.AddAccessRule(
    new PipeAccessRule(WindowsIdentity.GetCurrent().User, PipeAccessRights.FullControl,
        AccessControlType.Allow)
);
m_ps.AddAccessRule(
    new PipeAccessRule(
        new SecurityIdentifier(WellKnownSidType.AuthenticatedUserSid, null),
        PipeAccessRights.ReadWrite, AccessControlType.Allow
    )
);

// Init the pipe
LauncherPipeServer = new NamedPipeServerStream("CommandPipe",
    PipeDirection.In,
    1000,
    PipeTransmissionMode.Message,
    PipeOptions.Asynchronous,
    1,
    1,
    m_ps);
In a separate thread of this program I then wait for a connection from the client
...
LauncherPipeServerWorker = new System.ComponentModel.BackgroundWorker();
LauncherPipeServerWorker.DoWork += ServerPipeHandling_DoWork;
LauncherPipeServerWorker.RunWorkerAsync();

private void ServerPipeHandling_DoWork(object sender, System.ComponentModel.DoWorkEventArgs e)
{
    // ...
    LauncherPipeServer.WaitForConnection();
    // React to incoming messages
}
I can run this connection and it works, doing what it should. The interesting bit comes when the application running the client crashes: upon a restart of that application, the client is unable to reconnect to the server.
Of the solutions I found, I wasn't really able to tell whether or how they would solve my problem. My understanding was that the server object can accept one or multiple incoming clients and send and receive messages from all of them, so if my client crashes, I just restart it and it can connect and register as a new client. But it doesn't seem to work that way.
Most likely an exception is thrown but you don't see it because it's on a worker thread; you have to handle it in a try/catch/finally block.
Most likely you're using a StreamReader to read the pipe contents and then disposing of it before calling WaitForConnection again. Disposing of a StreamReader also closes its underlying stream (unless the leaveOpen constructor parameter was true). The next call to WaitForConnection will then throw, but it might go unnoticed, leaving the impression that the waiting has begun.
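Here is a minimal sketch of a serve loop that survives client crashes, assuming the LauncherPipeServer field from the question, newline-terminated text messages, and .NET 4.5 (for the leaveOpen overload of StreamReader):

private void ServerPipeHandling_DoWork(object sender, System.ComponentModel.DoWorkEventArgs e)
{
    while (true)
    {
        try
        {
            LauncherPipeServer.WaitForConnection();
            // leaveOpen: true, so disposing the reader does NOT close the pipe itself
            using (var reader = new System.IO.StreamReader(
                LauncherPipeServer, System.Text.Encoding.UTF8, false, 4096, leaveOpen: true))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    // react to incoming messages here
                }
            }
        }
        catch (System.IO.IOException)
        {
            // the client crashed or the pipe broke mid-read; reset below and wait again
        }
        finally
        {
            if (LauncherPipeServer.IsConnected)
                LauncherPipeServer.Disconnect(); // frees this instance for the next client
        }
    }
}

The key point is that the NamedPipeServerStream itself is never disposed between clients: the loop only calls Disconnect() and then WaitForConnection() again, so a restarted client can reconnect to the same instance.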
Related
The task at hand is to have a "Container" app and a "Child" app communicate via a named pipe. The problem is that the pipe sometimes fails to be created.
When things go smoothly, communication works pretty well.
The problem I'm seeing is that sometimes the pipe server will fail to initialize. I can see why it happens in some instances, namely that the previous version of my app is still running in the background, did not exit properly for some reason, and is therefore hanging on to the pipe. But I have also seen it fail when I put in a new random name for the pipe that should not be used by any other process, and this is the part that worries me. Perhaps it is a limitation set by the OS on the same process name, or something to do with Visual Studio debug mode?
To illustrate this, I have some code that tries to create a server stream (the pipe server):
NamedPipeServerStream server = new NamedPipeServerStream(pipeName, PipeDirection.InOut, 1);
The exception I often see is this:
Could not create server:System.IO.IOException: All pipe instances are busy.
I have tried a few variations of this, with security options passed to the pipe and with increasing the number of allowed server instances from 1 to something higher, but then the "Child"/client might connect to what I assume is another process that was not properly closed out, and hence to the wrong pipe server.
My ideas are:
figure out a way to "force" take over a pipe.
figure out a way to close out all dead instances of my own app somehow?
negotiate a new pipe to use by writing some random pipe name to a file that both apps read first? This seems like overkill, and it's still not ideal given the odd behavior where I can't create a pipe even when the name is different.
Since this is hard to recreate, I am simulating the problem by doing:
var pipeName = "myApp22";
NamedPipeServerStream server = new NamedPipeServerStream(pipeName, PipeDirection.InOut, 1);
// Here I would want to catch the exception, force-close the stale instance, then repeat this:
server = new NamedPipeServerStream(pipeName, PipeDirection.InOut, 1);
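One hedged sketch of what that catch-and-recover step could look like; the fallback name and the one-second delay are arbitrary placeholders, and note that a process cannot simply take over a pipe name that another live process still holds:

NamedPipeServerStream server = null;
try
{
    server = new NamedPipeServerStream(pipeName, PipeDirection.InOut, 1);
}
catch (IOException) // "All pipe instances are busy."
{
    // Another (possibly dead-but-not-exited) process still owns this name. From here we can
    // only wait and retry, kill the stale process (idea 2 above), or fall back to a fresh name (idea 3).
    Thread.Sleep(1000);
    server = new NamedPipeServerStream(pipeName + "_" + Guid.NewGuid().ToString("N"),
                                       PipeDirection.InOut, 1);
}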
Any input would be appreciated. Thanks.
We are using the IBM.XMS 8.0.0.5 .NET library to connect to the IBM MQ server and create our listeners.
But sometimes the VPN tunnel goes into sleep mode (this happens if one of our servers restarts, for example). To prevent this, it is necessary to keep the VPN tunnel 'awake' by sending a network packet through the tunnel.
I looked around, but IBM MQ does not have any implementation to test the connection to the server. I need some kind of 'ping' that will keep the tunnel up, but pinging is not allowed; I think they reject ICMP echoes.
I am planning to create an async Task which will test the connection regularly depending on a configured interval.
Any advice on this one please?
PS: Sometimes the connection falls asleep, as I mentioned above. Without knowing anything, the system engineer restarts the service, which sends a dispose to the IBM server; even that dispose message brings the connection back up, and all items on the queue start being consumed by us while the service is being stopped. I need to solve this too, but I don't have a clue how.
EDIT
Connection is made and consumers are set up as follows:
var factoryFactory = XMSFactoryFactory.GetInstance(XMSC.CT_WMQ);
// Create WMQ Connection Factory.
var cf = factoryFactory.CreateConnectionFactory();
// Set the properties
cf.SetStringProperty(XMSC.WMQ_HOST_NAME, _parameters.IbmMqHost);
cf.SetIntProperty(XMSC.WMQ_PORT, _parameters.IbmMqPort);
cf.SetStringProperty(XMSC.WMQ_CHANNEL, _parameters.IbmMqChannel);
cf.SetIntProperty(XMSC.WMQ_CONNECTION_MODE, XMSC.WMQ_CM_CLIENT);
cf.SetStringProperty(XMSC.WMQ_QUEUE_MANAGER, _parameters.IbmMqQueueManager);
cf.SetStringProperty(XMSC.USERID, username);
cf.SetStringProperty(XMSC.PASSWORD, pw);
cf.SetIntProperty(XMSC.WMQ_CLIENT_RECONNECT_OPTIONS, XMSC.WMQ_CLIENT_RECONNECT);
// Create connection.
connection = cf.CreateConnection();
connection.ExceptionListener = ExceptionCallback;
session = connection.CreateSession(false, AcknowledgeMode.AutoAcknowledge);
destination = session.CreateQueue(queuename);
consumer = session.CreateConsumer(destination);
consumer.MessageListener = listener;
connection.Start();
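Not an XMS-level ping, but one low-tech way to keep packets flowing would be a background task that periodically opens a plain TCP connection to the queue manager's listener, reusing the _parameters fields from the snippet above. This is only a sketch under those assumptions; the method name, interval handling and CancellationToken wiring are mine:

private async Task KeepTunnelAliveAsync(TimeSpan interval, CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        try
        {
            using (var probe = new System.Net.Sockets.TcpClient())
            {
                // Opening (and immediately closing) a TCP connection to the MQ listener port
                // pushes a few packets through the VPN tunnel without touching any queue.
                await probe.ConnectAsync(_parameters.IbmMqHost, _parameters.IbmMqPort);
            }
        }
        catch (Exception)
        {
            // The connect failed, so the tunnel (or the queue manager) is likely down.
            // This is the place to log and/or trigger a rebuild of the XMS connection.
        }
        await Task.Delay(interval, token);
    }
}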
I just searched for a possible solution to identify when the client disconnects.
I found this:
public bool IsConnected(Socket s)
{
    try
    {
        return !(s.Poll(1, SelectMode.SelectRead) && s.Available == 0);
    }
    catch (SocketException) { return false; }
}
I'm using a while loop in my Main with Thread.Sleep(500) and running the IsConnected method. It works all right when I run it through Visual Studio: when I click 'Stop Debugging', the server-side program is actually notified. But when I just go to the exe in the bin directory and launch it, it does indeed notify me of a connection, yet when I close the program (manually via the 'X' button) or through Task Manager, the IsConnected method apparently still returns true.
I'm using a simple TCP connection:
client = new TcpClient();
client.Connect("10.0.0.2", 10);
server:
Socket s = tcpClient.Client;
while (true)
{
    if (!IsConnected(s))
        MessageBox.Show("disconnected");
}
(It's running on a thread, by the way.)
Any suggestions?
I even tried to close the connection when the client closes:
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    client.Close();
    s.Close();
    Environment.Exit(0);
}
I don't know what to do.
What you are asking for is not possible. TCP will not report an error on the connection unless an attempt is made to send on the connection. If all your program ever does is receive, it will never notice that the connection no longer exists.
There are some platform-dependent exceptions to this rule, but none involving the simple disappearance of the remote endpoint.
The correct way for a client to disconnect is for it to gracefully close the connection with a "shutdown" operation. In .NET, this means the client code calls Socket.Shutdown(SocketShutdown.Send). The client must then continue to receive until the server calls Socket.Shutdown(SocketShutdown.Both). Note that the shutdown "reason" is generally "send" for the endpoint initiating the closure, and "both" for the endpoint acknowledging and completing the closure.
Each endpoint will detect that the other endpoint has shutdown its end by the completion of a receive operation with 0 as the byte count return value for that operation. Neither endpoint should actually close the socket (i.e. call Socket.Close()) until this two-way graceful closure has completed. I.e. each endpoint has both called Socket.Shutdown() and seen a zero-byte receive operation completion.
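As a rough client-side sketch of that sequence (the Socket variable s and the buffer size are placeholders):

// Client side, assuming a connected Socket named s:
s.Shutdown(SocketShutdown.Send);            // "I will send no more data"

byte[] buffer = new byte[4096];
int received;
while ((received = s.Receive(buffer)) > 0)  // keep draining whatever the server still sends
{
    // process the remaining data
}
// received == 0: the server has shut down its side as well, so the closure is complete
s.Close();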
The above is how graceful closure works, and it should be the norm for server/client interactions. Of course, things do break. A client could crash, the network might be disconnected, etc. Typically, the right thing to do is to delay recognition of such problems as long as possible; for example, as long as the server and client have no need to actually communicate, then a temporary network outage should not cause an error. Forcing one is pointless in that case.
In other words, don't add code to try to detect a connection failure. For maximum reliability, let the network try to recover on its own.
In some less-common cases, it is desirable to detect connection failures earlier. In these cases, you can enable "keep alive" on the socket (to force data to be sent over the connection, thus detecting interruptions in the connection…see SocketOptionName.KeepAlive) or implement some timeout mechanism (to force the connection to fail if no data is sent after some period of time). I would generally recommend against the use of this kind of technique, but it's a valid approach in some cases.
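For reference, turning keep-alive on is a single call (again assuming a Socket named s):

// The probe timing is then governed by OS-level settings unless you tune it further
// (e.g. via Socket.IOControl with keep-alive values on Windows).
s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);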
I'm trying to make a stunnel clone in C# just for fun. The main loop goes something like this (ignore the catch-everything-and-do-nothing try/catches for now):
ServicePointManager.ServerCertificateValidationCallback = Validator;
TcpListener a = new TcpListener(9999);
a.Start();
while (true) {
    Console.Error.WriteLine("Spinning...");
    try {
        TcpClient remote = new TcpClient("XXX.XX.XXX.XXX", 2376);
        SslStream ssl = new SslStream(remote.GetStream(), false, new RemoteCertificateValidationCallback(Validator));
        ssl.AuthenticateAsClient("mirai.ca");
        TcpClient user = a.AcceptTcpClient();
        new Thread(new ThreadStart(() => {
            Thread.CurrentThread.IsBackground = true;
            try {
                forward(user.GetStream(), ssl); // forward is a blocking function I wrote
            } catch {}
        })).Start();
    } catch {
        Thread.Sleep(1000);
    }
}
I found that if I do the remote SSL connection, as I did, before waiting for the user, then when the user connects the SSL is already set up (this is for tunneling HTTP so latency is pretty important). On the other hand, my server closes long-inactive connections, so if no new connection happens in, say, 5 minutes, everything locks up.
What is the best way?
Also, I observe my program generating as many as 200 threads, which of course means that context-switching overhead is pretty big and sometimes results in the whole thing just blocking for seconds, even with just one user tunneling through the program. My forward function goes, in a gist, like
new Thread(new ThreadStart(() => inStream.CopyTo(outStream))).Start();
outStream.CopyTo(inStream);
of course with lots of error handling to prevent broken connections from hanging around forever. This seems to stall a lot, though. I can't figure out how to use asynchronous methods like BeginRead, which should help according to Google.
For any kind of proxy server (including an stunnel clone), opening the backend connection after you accept the frontend connection is clearly much simpler to implement.
If you pre-open backend connections in anticipation of receiving frontend connections, you can certainly save an RTT (which is good for latency), but you have to deal with the issue you hinted at: the backend will close idle connections. Any time you receive a frontend connection, you run the risk that the backend connection you are about to associate with it, which was opened some time ago, is too old to use and may be closed by the backend. You will have to manage a pool of currently open backend connections and periodically close and refresh them when they have been idle for too long. There is even a race condition: if the backend decides a connection has been idle too long and closes it just as the proxy receives a new frontend connection, the frontend may try to forward a request through that backend connection while the backend is closing it. That means you must know a priori how long backend connections can stay idle before the backend closes them (i.e., you must know the timeout values configured on the backend), so you can give them up just before the backend decides they are too old.
So in summary: pre-opening backend connections will save an RTT versus opening them only on demand, but it is a lot of work, including subtle connection-pool management that is quite tough to implement bug-free. It is up to you to judge whether the extra complexity is worth it.
By the way, concerning your comment about handling several hundred simultaneous connections, I recommend implementing such an I/O-bound program as a proxy server around an event loop instead of around threads. Basically, you use non-blocking sockets and process events in a single thread (e.g. "this socket has new data waiting to be forwarded to the other side") instead of spawning a thread for each connection (which gets expensive both in thread creation and context switches). To scale such an event-based model to multiple CPU cores, you can start a small number of parallel threads or processes (roughly one per CPU core), each handling many hundreds (or thousands) of simultaneous connections.
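In .NET specifically, you do not have to hand-roll a select()-style loop to get that effect: async stream copies (CopyToAsync, .NET 4.5+) already use completion-based I/O underneath, so hundreds of tunnels share the thread pool instead of pinning two dedicated threads each. A hedged sketch of what the question's forward() could look like in that style (the method name is mine, not an API):

static async Task ForwardAsync(Stream user, Stream ssl)
{
    using (user)
    using (ssl)
    {
        Task upstream   = user.CopyToAsync(ssl);
        Task downstream = ssl.CopyToAsync(user);
        await Task.WhenAny(upstream, downstream); // either side closing tears down the tunnel
        // (a production version would also observe/cancel the other copy before disposing)
    }
}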
Just to be clear, all of the TCPClients I'm referring to here are not instances of my own class, they are all instances of System.Net.Sockets.TcpClient from Mono's implementation of .NET 4.0.
I have a server that is listening for client connections, as servers do. Whenever it gets a new client it creates a new TCPClient to handle the connection on a new thread. I'm keeping track of all the connections and threads with a dictionary. If the client disconnects, it sends a disconnect message to the server, the TCPClient is closed, the dictionary entry is removed and the thread dies a natural death. No fuss, no muss. The server can handle multiple clients with no problem.
However, I'm simulating what happens if the client gets disconnected, doesn't have a chance to send a disconnect message, then reconnects. I'm detecting whether a client has reconnected with a username system (it'll be more secure when I'm done testing). If I just make a new TCPClient and leave the old one running, the system works just fine, but then I have a bunch of useless threads lying around taking up space and doing nothing. Slackers.
So I try to close the TCPClient associated with the old connection. When I do that, the new TCPClient also dies and the client program throws this error:
E/mono (12944): Unhandled Exception: System.IO.IOException: Write failure ---> System.Net.Sockets.SocketException: The socket has been shut down
And the server throws this error:
Unable to write data to the transport connection: An established connection was aborted by the software in your host machine.
Cannot read from a closed TextReader.
So closing the old TCPClient with a remote endpoint of, say, 192.168.1.10:50001 also breaks the new TCPClient with a remote endpoint of, say, 192.168.1.10:50002.
The two TCPClient objects have the same remote endpoint IP address but different remote endpoint ports, yet closing the one seems to stop the other from working. I want to be able to close the old TCPClient to do my cleanup without closing the new TCPClient.
I suspect this is something to do with how TCPClient works with sockets at a low level, but not having any real understanding of that, I'm not in a position to fix it.
I had a similar issue on my socket server. I used a simple List instead of a dictionary to hold all of my current connections. In a continuous while loop that listens for new streams, I have a try/catch, and in the catch block it kills the client if it has disconnected.
Something like this on the server.cs:
public static void CloseClient(SocketClient whichClient)
{
    ClientList.Remove(whichClient);
    whichClient.Client.Close();
    // dispose of the client object
    whichClient.Dispose();
    whichClient = null;
}
and then a simple dispose method on the client:
public void Dispose()
{
    System.GC.SuppressFinalize(this);
}
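For context, the "continuous while loop" mentioned above could look roughly like this; SocketClient is my own wrapper class, and this sketch assumes its Client property is a TcpClient (the buffer size and message handling are placeholders):

var buffer = new byte[4096];
while (true)
{
    try
    {
        // A zero-byte read means the remote side closed gracefully.
        int read = whichClient.Client.GetStream().Read(buffer, 0, buffer.Length);
        if (read == 0)
            throw new System.IO.IOException("remote side closed the connection");
        // ...handle the received bytes...
    }
    catch
    {
        // Disconnected (gracefully or not): remove it from ClientList and release the socket.
        CloseClient(whichClient);
        break;
    }
}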
EDIT: this is the OP's resolution, which they found on their own with help from my code.
So to clarify, the situation is that I have two TCPClient objects, TCPClientA and TCPClientB, with different remote endpoint ports but the same IP:
TCPClientA.Client.RemoteEndPoint.ToString();
returns: 192.168.1.10:50001
TCPClientB.Client.RemoteEndPoint.ToString();
returns: 192.168.1.10:50002
TCPClientA needs to be cleaned up because it's no longer useful, so I call
TCPClientA.Close();
But this closes the socket for the client at the other end of TCPClientB, for some reason. However, writing
TCPClientA.Client.Close();
TCPClientA.Close();
Successfully closes TCPClientA without interfering with TCPClientB. So I've fixed the problem, but I don't understand why it works that way.
It looks like you have found a solution, but just so you are aware, there are many similar pitfalls when writing client/server applications in .NET. There is an open-source network library (which is fully supported in Mono) where these problems have already been solved: networkComms.net. A basic sample is here.
Disclaimer: This is a commercial product and I am the founder.
This is clearly an error in your code. Merely closing one inbound connection cannot possibly close another one. Clearly something else is happening elsewhere in your code.