If I have a client that is connected to a server and the server crashes, how can I determine, from my client, that the connection is down? The idea is that my client's while loop waits to read a line from the server (String a = sr.ReadLine();), and if the server crashes while the client is waiting to receive that line, how do I close the thread that contains my while loop?
Many have told me that in that while(alive) { .. } I should just change the alive value to false, but if my program is currently waiting for a line to read, it won't get to exit the while because it will be trapped at sr.ReadLine().
I was thinking that if I can't send a line to the server I should just close the client thread with .Abort(). Any ideas?
Have a TimeOut parameter in the ReadLine method which takes a TimeSpan value and times out after that interval if the response is not received:
public string ReadLine(TimeSpan timeout)
{
// ..your logic.
}
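One way to fill in that logic, as a minimal sketch: run the blocking read on a Task and wait on it with a timeout. This assumes sr is the StreamReader from your loop; note that the abandoned read still blocks in the background, so close the underlying connection after a timeout.

public string ReadLine(TimeSpan timeout)
{
    // Run the blocking ReadLine on the thread pool and wait with a timeout.
    Task<string> readTask = Task.Run(() => sr.ReadLine());
    if (readTask.Wait(timeout))
        return readTask.Result;    // the line arrived in time
    // No data within the timeout: assume the server is gone.
    // The pending ReadLine still blocks, so close the socket/stream here.
    throw new TimeoutException("Server did not respond within " + timeout);
}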
For examples, check these SO posts:
Implementing a timeout on a function returning a value
Implement C# Generic Timeout
Is the server app your own, or something off the shelf?
If it's yours, send a "heartbeat" every couple of seconds to let the clients know that the connection and service are still alive. (This is a bit more reliable than just seeing whether the connection is closed, since it's possible for the connection to remain open while the server app is locked up.)
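A minimal sketch of such a heartbeat on the server side (the clientWriters collection and the 2-second interval are assumptions, not code from the question):

var heartbeat = new System.Timers.Timer(2000);   // tick every 2 seconds
heartbeat.Elapsed += (sender, args) =>
{
    foreach (StreamWriter writer in clientWriters)
    {
        try
        {
            writer.WriteLine("PING");   // clients reply, or just reset their timeout
            writer.Flush();
        }
        catch (IOException)
        {
            // The write failed, so this connection is dead: remove the client here.
        }
    }
};
heartbeat.Start();

On the client side, not seeing a PING for a couple of intervals is the signal that the server (or the link) is gone.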
A server crash looks no different to your clients than any other failure; there are several external factors that can make the connection go down: the client itself is one of them, internet/LAN problems are another.
It doesn't matter why something fails; it should be handled anyway. Servers going down will make your users scream ;)
Regarding multithreading, I suggest that you look at the BeginXXX/EndXXX asynchronous methods. They give you much more power and a more robust solution.
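As a rough sketch of that pattern on a raw Socket (socket, buffer and OnReceive are illustrative names, not from the question): the callback fires even when the peer disconnects, so you are never stuck in a blocking read.

private Socket socket;                 // connected client socket (assumed)
private byte[] buffer = new byte[4096];

private void BeginRead()
{
    socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
}

private void OnReceive(IAsyncResult ar)
{
    try
    {
        int read = socket.EndReceive(ar);
        if (read == 0)
            return;                    // server closed the connection gracefully
        // ...process buffer[0..read) here, then queue the next receive:
        BeginRead();
    }
    catch (SocketException)
    {
        // Connection dropped hard (e.g. the server crashed): clean up here.
    }
}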
Try to avoid any strategy that relies on Thread.Abort(). If you cannot avoid it, make sure you understand the idiom for that mechanism, which involves running the work in a separate AppDomain and catching ThreadAbortException.
If the server crashes, I imagine you will have more problems than just fixing a while loop. Your program may enter an unstable state for other reasons; state should not be overlooked. That being said, a nice "server timed out" message may suffice. You could take it a step further and ping the server, then give a slightly more advanced message: "server appears to be down".
Related
I have a REST service in a self hosted ASP.Net WebApi application (Console).
Some clients poll the server in specific intervals to fetch new data. In general all is working fine.
The problem is that the server stops responding to requests after some random duration (~30 minutes to 2.5 hours). All client requests start to time out.
The weird thing is, the server doesn't seem to receive the requests anymore (no controller method is invoked). The server doesn't throw any exceptions and the console app is still responsive, so I can only suppose there is a problem before the request reaches the API controller.
In the debugger everything seems fine.
How can I diagnose such an issue?
What else can I try to fix the described behavior?
Notes:
Tested on multiple systems
.Net 4.5.1
Asp.Net WebApi 5.1.2
I have found the issue: the reason this happens is connection leaks. If you are opening connections and aren't closing them correctly, either after the request is finished or within an exception handler, the number of open connections will eventually reach its maximum. Either you raise the maximum number of open connections in the connection string or (the preferred way) make sure your code handles the closing part:
SqlConnection myConnection = new SqlConnection(ConnectionString);
try
{
myConnection.Open();
someCall(myConnection);
}
finally
{
myConnection.Close();
}
Credit goes to How can I solve a connection pool problem between ASP.NET and SQL Server?, where you can read more about this.
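For what it's worth, a using block achieves the same thing with less ceremony, since Dispose() closes the connection and returns it to the pool even when an exception is thrown:

using (SqlConnection myConnection = new SqlConnection(ConnectionString))
{
    myConnection.Open();
    someCall(myConnection);
}   // Dispose() runs here, even on exceptions, releasing the pooled connection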
In my case, the issue was caused by never-ending tasks. Due to a misuse of the Reactive Extensions API, I randomly created never-ending tasks. It seems that at some point the task scheduler simply couldn't handle them anymore, although I'm not completely sure about that.
Thing learned: it seems that by doing bad things in your app code (too many tasks, leaked SQL connections, ...) you can kill the WebApi infrastructure so that it no longer handles requests at any level.
I'm trying to write an async socket application that transfers complex objects between the two sides.
I used the example here...
Everything is fine until I try to send multi-packet data. When the transferred data requires multiple packets, the server application hangs and the server goes out of control without any errors...
After many hours I found a solution: if I close the client sender socket after each EndSend callback, the problem is solved. But I couldn't understand why this is necessary. Or is there another solution for the situation?
My two projects are the same as the example above; I only changed the EndSend callback method as follows:
public void EndSendCallback(IAsyncResult result)
{
Status status = (Status)result.AsyncState;
int size = status.Socket.EndSend(result);
status.Socket.Close(); // <--------------- This line solved the situation
Console.Out.WriteLine("Send data: " + size + " bytes.");
Console.ReadLine();
allDone.Set();
}
Thanks..
This is due to the example code not handling multiple packets (and being broken in general).
A few observations:
The server can only handle one client at a time.
The server simply checks whether the data coming in on a single read is smaller than the amount requested, and if so, assumes that's the last part.
The server then ignores the client socket while leaving the connection open. This puts the responsibility of closing the connection on the client side, which can be confusing and which will waste resources on the server.
Now the first observation is an implementation detail and not really relevant in your case. The second observation is relevant for you, since it will likely result in unexplained bugs: probably not in development, but when this code is actually running somewhere in a real scenario. Sockets are stream-based, not message-based: when the client sends 1000 bytes, this might require one call to read on the server or ten. A call to read simply returns as soon as there is 'some' data available. What you need to do is implement some sort of protocol that communicates either how much data is being sent over, or when all the data has been sent (see the sketch after these observations). I really recommend just sticking with the HTTP protocol, since this is a well-tested and well-supported protocol that suits most scenarios.
The third observation might also cause bugs where the server runs out of resources, since it leaves all connections open.
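Here is a minimal length-prefix framing sketch of the kind of protocol meant above (an assumption about one workable design, not the linked example's code): the sender writes a 4-byte length before the payload, and the receiver loops until exactly that many bytes have arrived.

static void SendFrame(Socket socket, byte[] payload)
{
    socket.Send(BitConverter.GetBytes(payload.Length));   // 4-byte length header
    socket.Send(payload);
}

static byte[] ReceiveFrame(Socket socket)
{
    int length = BitConverter.ToInt32(ReadExactly(socket, 4), 0);
    return ReadExactly(socket, length);
}

static byte[] ReadExactly(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)                                // loop until the frame is complete
    {
        int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0)
            throw new SocketException();                  // peer closed mid-frame
        offset += read;
    }
    return buffer;
}

With framing like this, neither side ever has to guess whether a read was "the last part", and the sender never needs to close the socket just to mark the end of a message.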
I'm writing a simple C# TCP message server that needs to respond to the fact that a connected client has been silent for the last TimeSpan timeout. In other words:
Client A connects.
Client A sends stuff.
Server responds to client A.
Client B connects.
timeout time passes without client A sending anything.
Server sends "ping" (not as in network-ping, but as in a message, SendPing) to A.
Client B sends stuff.
Server responds.
pingTimeout time after ping was sent to A, connection to A is dropped, and the client is removed.
Same happens if B is silent too long.
Long story short: if no word has been heard from client[n] within timeout, send a ping. If the ping is replied to, simply update client[n].LastReceivedTime; however, if client[n] fails to respond within pingTimeout, drop the connection.
As far as I understand, this must be done with some kind of scheduler, because simply writing a loop like this
while(true) {
foreach(var c in clients) {
if(DateTime.Now.Subtract(c.LastReceivedTime) >= timeout && !c.WaitingPing)
c.SendPing();
else if(DateTime.Now.Subtract(c.LastReceivedTime) >= timeout + pingTimeout && c.WaitingPing)
c.Drop();
}
}
would simply fry the CPU and be no good at all. Is there a good, simple algorithm/class for handling cases like this that can easily be implemented in C#? It needs to support 100-500 clients at once as a minimum (more is a bonus).
Your solution would be OK, I think, if you use a dedicated thread and put a Thread.Sleep(1000) in the loop so you don't, as you say, fry the CPU. Avoid blocking calls on this thread; e.g. make sure your calls to SendPing and Drop are asynchronous so this thread only does one thing.
The other solution is to use a System.Timers.Timer per client connection with an interval equal to your ping timeout. I'm using this method and have tested it with 500 clients (20-second interval) with no issues. If your interval is much shorter, I would not recommend this; look at other solutions using a single thread to check (like your solution).
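A sketch of that per-client timer, assuming a Client class with the LastReceivedTime, WaitingPing, SendPing() and Drop() members from the question:

var timer = new System.Timers.Timer(timeout.TotalMilliseconds);
timer.Elapsed += (sender, args) =>
{
    TimeSpan silentFor = DateTime.Now - client.LastReceivedTime;
    if (!client.WaitingPing && silentFor >= timeout)
    {
        client.SendPing();                       // first strike: ask if it's alive
    }
    else if (client.WaitingPing && silentFor >= timeout + pingTimeout)
    {
        client.Drop();                           // second strike: drop the connection
        timer.Stop();
    }
};
timer.Start();

Remember to reset (or simply restart) the timer whenever data arrives from that client.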
I'd like to implement a WebService containing a method whose reply will be delayed anywhere from under a second to about an hour (it depends on whether the data is already cached or needs to be fetched).
Basically my question is what would be the best way to implement this if you are only able to connect from the client to the WebService (no notification possible)?
AFAIK this is only possible with some kind of polling. But polling is bad, so I'd rather avoid it. The other extreme would be to just leave the connection open until the method is done, but I guess this could end up slowing down the web server and the network. I considered combining these two techniques: the client would call the method, and the server would return after at most 10 seconds with either a message that the client needs to poll again or the actual result.
What are your thoughts?
You probably want to have a look at Comet.
I would suggest a sort of intelligent polling, if possible:
On the first request, return a token to represent the request. This is what gets presented in future requests, so it's easy to check whether or not that request has really completed.
On future requests, hold the connection open for a certain amount of time (e.g. a minute, possibly specified by the client) and return either the result or a response of "still no results; please try again at X", where X is your best guess about when the response will be completed.
Advantages:
You allow the client to use the "hold a connection open" model, which is relatively expensive (in terms of connections) but allows the response to be served as soon as it's ready. Make sure you don't hold onto a thread per connection though! (And have some kind of time limit...)
By saying when the client should come back, you can implement a backoff policy: even if you don't know when it will be ready, you could have a "back off for 1, 2, 4, 8, 16, 30, 30, 30, 30..." minutes policy. (You should potentially check that the client isn't ignoring this.) You don't end up with masses of wasted polls for long misses, but you still get quick results quickly.
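A hypothetical client-side loop for this token scheme (StartRequest, Poll and PollResult are illustrative names, not a real API):

string token = service.StartRequest(parameters);      // first request returns a token
while (true)
{
    PollResult result = service.Poll(token);          // server holds this open up to a minute
    if (result.IsComplete)
        return result.Value;
    TimeSpan wait = result.RetryAt - DateTime.UtcNow; // server's "come back at X" hint
    if (wait > TimeSpan.Zero)
        Thread.Sleep(wait);
}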
I think that for something which could take an hour to respond a web service is not the best mechanism to use.
Why is polling bad? Surely if you adjust the frequency of the polling it won't be so bad. Perhaps double the time between polls, with a maximum of about five minutes.
Some web services I've worked with return a "please try again in …" XML message when they can't respond immediately. I realise this is just a refinement of the polling technique, but if your server can determine at the time of the request what the likely delay is going to be, it could tell the client that and then forget about it, leaving the client to ask again once the polling interval has expired.
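A sketch of that doubling backoff on the client (TryGetResult is a hypothetical call that returns false while the server is still working):

TimeSpan delay = TimeSpan.FromSeconds(10);            // assumed starting interval
TimeSpan maxDelay = TimeSpan.FromMinutes(5);
string result;
while (!TryGetResult(out result))
{
    Thread.Sleep(delay);
    // Double the interval on each miss, capped at five minutes.
    delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
}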
There are timeouts in IIS and on the client side which will prevent you from leaving the connection open that long.
This is also not practical, because resources/connections are blocked on the server.
Why do you want the user to wait for such a long running task? Let them look up the status of the operation somewhere.
In developing a relatively simple web service that takes the data provided by a POST and records it in a database table, we're getting this error:
Exception caught: The remote server returned an error: (500) Internal Server Error.
Stack trace: at System.Net.HttpWebRequest.GetResponse()
on some servers, but not others. The ones getting this error are the physical machines; the others are virtual, and obviously the physical servers are far more powerful.
As far as we can tell, the problem is that the DB connections aren't being released back to the pools after each query. I'm using the using pattern below:
using (VoteDaoDataContext dao = new VoteDaoDataContext())
{
dao.insert_response_and_update_count(answerVal, swid, agent, geo, DateTime.Now, ip);
dao.SubmitChanges();
msg += "Thank you for your vote.";
dao.Dispose();
}
I added the dao.Dispose() call to ensure that connections are released when the method finishes, but I don't know whether or not it's necessary.
Am I using this pattern correctly? Is there something else I need to do to ensure that connections get returned to the pools correctly?
Thanks!
Your diagnostic information is not good enough; an HTTP 500 isn't enough detail to really tell whether your theory is correct. You're going to need to capture a full stack trace in your logging if you want to get to the problem. I think you've jumped to a conclusion here. And no, you do not need that Dispose() before the end of your using{} block: that's what using{} does.
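In other words, the using block already compiles down to exactly the try/finally-with-Dispose you are adding by hand, so the body can simply be:

using (VoteDaoDataContext dao = new VoteDaoDataContext())
{
    dao.insert_response_and_update_count(answerVal, swid, agent, geo, DateTime.Now, ip);
    dao.SubmitChanges();
    msg += "Thank you for your vote.";
}   // dao.Dispose() is called here automatically, even if an exception was thrown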
I thought that Dispose() call was redundant, but I wanted to be sure.
We're seeing the connection pools saturating in the SQL logs (I can't look at them directly; I'm just a developer, and this stuff is running in a prod environment), and my ops guy said he's seeing connections timing out... and once they time out, the server starts running again, until the next time the connection pool saturates.
We're going through the process of tweaking the connection pool settings at the moment... I wanted to be certain that I wasn't doing anything wrong, since this is my first time using LINQ.
Thanks!