I am kind of stumped with this one, and was hoping I could find some answers here.
Basically, I have an ASP.NET application that is running across 2 servers. Server A has all of the business logic/data access exposed as web services, and Server B has the website which talks to those services (via WCF, with net.tcp binding).
The problem occurs a few seconds after a recycle of my app pool is initiated by IIS on Server A. The recycle happens after the allotted time (using the default of 29 hours set in IIS).
In the server log (of Server A):
A worker process with process id of
'####' serving application pool
'AppPoolName' has requested a recycle
because the worker process reached its
allowed processing time limit.
I believe that this is normal behavior. The problem is that a few seconds later, I get this exception on Server B:
This channel can no longer be used to
send messages as the output session
was auto-closed due to a
server-initiated shutdown. Either
disable auto-close by setting the
DispatchRuntime.AutomaticInputSessionShutdown
to false, or consider modifying the
shutdown protocol with the remote
server.
This doesn't happen on every recycle; I assume that it happens when someone is hitting the site with a request WHILE the recycle happens.
Furthermore, my application is down until I intervene; this exception continues to occur every time a subsequent request is made to the page. I intervene by editing the web.config (adding a space or something benign to the end of the file) and saving it; I assume that causes my application to recompile and brings the services back up. I have also experimented with running a batch file that does this for me every time the exception happens ;)
Now, I could barely find any information on this exception, and I've been looking for a while. Most of the information I did find pertains to WCF settings that I am not using.
I already read up on "DispatchRuntime.AutomaticInputSessionShutdown" and I don't think it pertains to this situation. That particular property refers to the service shutting down automatically in response to behavior on the client side, which is not what is happening here. Here, the service is shut down by IIS.
I did read this, which went through some sort of workaround to bring the service back up automatically, but I am really looking to understand what is going on here, not to hack around it!
I have started playing around with the settings in IIS7, specifically turning Overlapped Recycling on/off and increasing the process startup/shutdown times. I am wondering whether it is safe to turn off recycling completely (I believe by setting the recycling time interval to 0?). But again, I want to know what's going on!
Anyway, if you need more information, let me know. Thanks in advance!
This is probably related to how you open and close WCF connections.
If you open a proxy when your app starts and then keep using it, a break in the connection (caused by the restart on the server side) results in an error on the client side, since the server instance that the proxy was talking to is no longer there.
When you restart the client side (changing the web.config) new proxies are created against a server that is running.
The way to fix this is to make sure that you close a WCF connection after you use it.
http://www.codeguru.com/csharp/.net/net_wcf/article.php/c15941/
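In practice that usually means the classic close-or-abort pattern around each call. A minimal sketch, where MyServiceClient and DoWork stand in for whatever proxy and operation you generated:

    using System;
    using System.ServiceModel;

    static class Example
    {
        // MyServiceClient/DoWork are placeholders for the generated proxy and its operation.
        static void CallService()
        {
            var client = new MyServiceClient();
            try
            {
                client.DoWork();
                client.Close();      // graceful close when the call succeeded
            }
            catch (CommunicationException)
            {
                client.Abort();      // channel is faulted; Close() would throw again
            }
            catch (TimeoutException)
            {
                client.Abort();
            }
        }
    }

That way a recycled server only faults the one proxy you were using at that moment, and the next request builds a fresh channel against the new worker process.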
You should also make sure that you're using the correct SessionMode for your Web Service. I remember having similar trouble with some of my Services until I sorted out the correct mode. This is especially true when you're mixing this with any other authentication mode that is not "None".
This link might have some pointers:
http://msdn.microsoft.com/en-us/library/ms731193.aspx
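For illustration (IOrderService is just a made-up contract name), the SessionMode is declared on the service contract and has to be compatible with the binding; net.tcp supports sessions, so Required/Allowed behave quite differently from NotAllowed:

    using System.ServiceModel;

    // Illustrative contract; with net.tcp, SessionMode.Required ties the client to one
    // session on the server, while SessionMode.Allowed lets the binding decide.
    [ServiceContract(SessionMode = SessionMode.Allowed)]
    public interface IOrderService
    {
        [OperationContract]
        void SubmitOrder(int orderId);
    }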
My suggestion is to simply stop using IIS to host your services. Unless there is something you really need from IIS, I would recommend just writing a standard Windows Service to host your WCF endpoints.
If you can't do that, then by all means turn off recycling. AppPool recycling is mainly there because web developers write crappy code. I know that sounds rather blunt, but if you have enough sense to write code that doesn't leak then there is no reason to have IIS constantly restart your program.
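A minimal sketch of what self-hosting looks like, assuming a hypothetical IOrderService contract and OrderService implementation:

    using System.ServiceModel;
    using System.ServiceProcess;

    // Hosts the WCF endpoint inside a plain Windows Service instead of IIS.
    public class WcfHostService : ServiceBase
    {
        private ServiceHost _host;

        protected override void OnStart(string[] args)
        {
            _host = new ServiceHost(typeof(OrderService));
            _host.AddServiceEndpoint(typeof(IOrderService),
                new NetTcpBinding(), "net.tcp://localhost:8000/OrderService");
            _host.Open();
        }

        protected override void OnStop()
        {
            if (_host != null)
            {
                _host.Close();   // drains in-flight calls, then shuts the listener down
                _host = null;
            }
        }
    }

With that, the process lifetime is entirely under your control and nothing recycles it behind your back.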
Related
I have some WCF services hosted in IIS 7.5 with the Application Initialization module installed. Most of the services work perfectly fine except one.
For that WCF service I'm sure that Application_Start is called during the application pool start/recycle process (according to the log); however, calling the .svc afterwards takes around 45 seconds to respond, and subsequent calls are fast. It seems the warm-up is not happening for this service, and I have no idea what those 45 seconds are doing.
As that WCF application has a number of .svc files, the first call to each of them also takes around 40 seconds to respond, even when another .svc has already been called; it feels like they initialize individually.
One more question: is there any difference between sending a warm-up request to /service and to /service/service.svc? I've tried both, but there seems to be no difference. I'm worried that, if I have multiple .svc files inside my WCF application, I might have to send a warm-up request to each of them.
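For reference, the warm-up I've been sending is roughly just an HTTP GET against each .svc after a recycle; the hostnames and paths below are placeholders:

    using System.Net;

    static class Warmup
    {
        // Touch every .svc once so the first real client call doesn't pay the
        // initialization cost. The URLs below are placeholders.
        static readonly string[] Endpoints =
        {
            "http://localhost/service/ServiceA.svc",
            "http://localhost/service/ServiceB.svc"
        };

        public static void Run()
        {
            using (var client = new WebClient())
            {
                foreach (var url in Endpoints)
                {
                    client.DownloadString(url);   // response body is ignored; the request itself triggers the warm-up
                }
            }
        }
    }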
I took a look at the svclog and it seems the issue is not related to Application Initialization.
Solved
After more research it turns out the whole issue is not related to Application Initialization, but to WCF's metadata exchange. Actually the image above already gave a hint of the issue. Thanks for your help, Francesco B.!
http://www.synergex.com/blog/2015/06/25/why-is-that-first-wcf-operation-so-slow/
I've tried researching this, but haven't found much that sounds similar to something I'm needing to implement. In short, we'll be running an ASP Website on a server that will be accessed by clients. Ideally, we have a function that we want to initialize upon the start of a user's session, and stop when the session ends. While the session is happening, this function sends and receives messages via socket communication, meaning we need to access the send/receive functions of this class from pages in order to move information. What's the best way to go about this?
Look into SignalR. That's probably what you're wanting. Its "hubs" are effectively what you're looking for to spin up on session initiation, and spin down when the user disappears. It has a client-side JS library that automatically chooses the best connection method available (e.g., websockets > server-sent-events > long-polling), and it allows you to send messages both from the client to the server, and from the server to the client.
http://www.asp.net/signalr
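A minimal sketch of a SignalR 2.x hub; OnConnected/OnDisconnected roughly map to the per-session start/stop hooks described above, and StartSocketWorker/StopSocketWorker stand in for your own send/receive logic:

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class SessionHub : Hub
    {
        public override Task OnConnected()
        {
            // e.g. spin up the per-user worker, keyed by the connection id
            // StartSocketWorker(Context.ConnectionId);
            return base.OnConnected();
        }

        public override Task OnDisconnected(bool stopCalled)
        {
            // StopSocketWorker(Context.ConnectionId);
            return base.OnDisconnected(stopCalled);
        }

        // Callable from the browser via the client-side JS library.
        public void Send(string message)
        {
            // push a message back to the caller (or Clients.All, a group, etc.)
            Clients.Caller.receiveMessage(message);
        }
    }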
Another alternative that I've played around with in the past is XSockets:
https://xsockets.net/
It's similar to SignalR in many respects, but it's not free.
It's hard to tell from your description: are you looking to communicate with the client browser via sockets, or are you trying to communicate with some other service via sockets?
Web applications are not ideally suited for deterministic types of actions. It's difficult for the web server to know whether or not the client has actually closed their browser. In most cases, sessions simply time out after a period of inactivity (20+ minutes in most cases). So you cannot reliably know when the user's session has actually ended.
To top it off, there are certain edge cases where Session_End will not fire. For instance, if the app pool recycles, then no Session_End event will fire. This may not be an issue, since if the app pool recycles your other connections would also recycle, but it's still an issue to keep in mind.
Finally, Web apps are not intended to be long running.
I'm currently working on a Windows service (my first) and I'm wondering how to handle disconnect events and the like. In essence, this Windows service polls our Exchange servers for new emails. Once an email is received we parse it and insert it into a database. Now, I have everything working so long as everything is working in my favour. Since that is impossible to maintain I need to look for ways to ensure my service stays on line regardless of what may happen that is out of my control (minus the server hosting the service that is).
The main issues I can foresee are our Exchange servers going down for whatever reason or losing internet connectivity. Two problems which can happen several times a year.
Currently, if an exception is thrown regarding connectivity issues I keep attempting to connect every n minutes with a 30 second time out. So say our Exchange servers go down (either planned maintenance or unforeseen events) for 2 hours then the service would try and reconnect every n minutes until a connection is made.
Is this a sustainable strategy to ensure my service always stays online? If not, what is a better way?
What I want to avoid is my service going down because Exchange had issues making me have to manually restart my Windows service.
Thank you.
Your strategy sounds like the only thing that's practical.
It may also be worth considering adding the ability to view event logs from the service remotely so you can diagnose issues that you don't currently know about. If you're really paranoid, a second "watcher" service could be used to periodically check the primary service and report if it fails.
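A rough sketch of that reconnect loop, with the event log trail suggested above; ConnectToExchange() is a placeholder for whatever call actually opens the connection and throws on failure:

    using System;
    using System.Diagnostics;
    using System.Threading;

    public class MailImporter
    {
        private volatile bool _stopping;   // set when the service is asked to stop

        // Placeholder for the real Exchange connection logic.
        private void ConnectToExchange()
        {
            throw new NotImplementedException();
        }

        private void EnsureConnected(TimeSpan retryInterval)
        {
            while (!_stopping)
            {
                try
                {
                    ConnectToExchange();           // throws while Exchange or the network is down
                    return;                        // connected; resume normal polling
                }
                catch (Exception ex)
                {
                    // leave a trail so the failure can be diagnosed remotely later
                    EventLog.WriteEntry("MailImporter",
                        "Connect failed, will retry: " + ex.Message,
                        EventLogEntryType.Warning);
                    Thread.Sleep(retryInterval);   // wait n minutes, then try again
                }
            }
        }
    }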
I have a Windows service written in C# (.NET Framework 3.5) and would like to know the best way to check whether the previous shutdown of the service was regular.
Upon starting the service, there should be a check whether the last shutdown was regular (via the stop button in the services management console) or whether somebody just killed the process (or it crashed for some reason not directly linked to the service itself).
I thought about writing an encrypted XML file to the hard drive upon starting the service, and then editing it with some values when the service is being stopped. That way, the next time I start the service, I could check the XML and see whether the values were edited correctly during shutdown; if they were not, I'd know the process was killed or it crashed.
This way seems too unreliable and not a good practice. What do you suggest?
Clarification:
What the service does is sit on a server and listen for connections from client machines. Once a connection has been established, it talks to a remote database via web services and determines whether the caller has the right to connect (and therefore use the application that made the call). One aspect of the protection is a concurrency check: if I have a limit of 5 workstations, I keep the TcpClient connections alive from the Windows service to, let's say, 5 workstations, and the sixth one cannot connect.
If I kill the service process and restart it, those connections are gone; I still have 5 "licensed" apps running on workstations, but now there are 5 free connection slots to be taken by 5 more.
I also can't see anything bad in using a file. You could even use this file to log some more information.
E.g., you could attach to the AppDomain's UnhandledException event and try to log that exception.
Or you could evaluate how long your service has been running/not running (parsing a logfile for that task is a little bit harder).
Of course, this is not an excuse for not using logfiles.
I went with this in the end:
The service already checked up on the connected workstations to see if they're alive, but now I've also built a periodic check into all the workstations (they connect through a common router dll where I've built in the check). Every 10 seconds the connection is verified, and if there is none, the client tries to reconnect after 15 seconds, which will succeed if there was just a temporary network problem, but will fail if the service was shut down forcefully (since all of its TCP objects will be lost).
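Roughly what the check in the router dll amounts to; the host, port and names below are placeholders:

    using System;
    using System.Net.Sockets;
    using System.Threading;

    class KeepAlive
    {
        private TcpClient _client;
        private Timer _timer;

        public void Start()
        {
            _client = Connect();
            // verify the connection every 10 seconds
            _timer = new Timer(_ => Check(), null,
                TimeSpan.FromSeconds(10), TimeSpan.FromSeconds(10));
        }

        private void Check()
        {
            // note: TcpClient.Connected only reflects the last send/receive, so the real
            // implementation does a small request/response exchange here instead
            if (_client != null && _client.Connected) return;

            // connection lost: wait 15 seconds, then try once to reconnect
            Thread.Sleep(TimeSpan.FromSeconds(15));
            try { _client = Connect(); }
            catch (SocketException) { _client = null; }   // service still down; next tick retries
        }

        private TcpClient Connect()
        {
            return new TcpClient("licenseserver", 9000);  // placeholder host/port
        }
    }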
I would suggest using the EventLog. Add a log event when the service starts or stops, and read through the event logs to detect anomalies.
Here's a basic sample from CodeProject.
Here's a walkthrough from MSDN how to create/delete/read event logs and entries.
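A minimal sketch of that idea; "MyService" is a placeholder source name, and creating the source needs admin rights (normally done once at install time). Call LastShutdownWasClean() before LogStart() when the service starts:

    using System.Diagnostics;

    static class ShutdownLog
    {
        private const string Source = "MyService";

        public static void LogStart()
        {
            if (!EventLog.SourceExists(Source))
                EventLog.CreateEventSource(Source, "Application");
            EventLog.WriteEntry(Source, "Service started", EventLogEntryType.Information);
        }

        public static void LogCleanStop()
        {
            EventLog.WriteEntry(Source, "Service stopped cleanly", EventLogEntryType.Information);
        }

        // Walk backwards through our own entries: a "started" with no "stopped cleanly"
        // after it means the previous run did not end gracefully.
        public static bool LastShutdownWasClean()
        {
            var log = new EventLog("Application");
            for (int i = log.Entries.Count - 1; i >= 0; i--)
            {
                EventLogEntry entry = log.Entries[i];
                if (entry.Source != Source) continue;
                if (entry.Message.StartsWith("Service stopped cleanly")) return true;
                if (entry.Message.StartsWith("Service started")) return false;
            }
            return true;   // no history yet
        }
    }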
Unless the service is running some sort of security system that needs to be tamper-proof, I don't see why using a file is a bad solution.
Personally, I think an encrypted XML file is overkill; a simple text file should be enough.
I think you are on the right track. I'm not sure why you want to edit the values; just use the file (or a registry key) as a marker to indicate that the service was started and is running. During a graceful shutdown, remove the marker. You then just need to look for the existence of the marker to know whether you were shut down gracefully or crashed.
If you are finding that the file isn't created reliably, then make sure you are closing and flushing and disposing of the file object rather than relying on the garbage collector.
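A sketch of that marker approach; the path is just an example location:

    using System;
    using System.IO;

    static class RunMarker
    {
        private static readonly string MarkerPath =
            Path.Combine(Path.GetTempPath(), "MyService.running");

        // Call from OnStart: returns true if the previous run ended without a clean stop.
        public static bool StartAndDetectCrash()
        {
            bool crashed = File.Exists(MarkerPath);
            File.WriteAllText(MarkerPath, DateTime.UtcNow.ToString("o"));
            return crashed;
        }

        // Call from OnStop: removing the marker records a graceful shutdown.
        public static void CleanStop()
        {
            if (File.Exists(MarkerPath))
                File.Delete(MarkerPath);
        }
    }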
--- EDIT following clarification ---
So the requirement is for a licensing system and not simply to determine whether the service was shut down gracefully. I'm guessing that the desire is for the 'licenses' to be cleared on a graceful shutdown and restored following a crash; the scenarios are interchangeable.
I would probably use a database backing store, with suitable security, to hold the license keys at the server. As each client connects and requests a license, it is provided with a key that has to be presented with each communication from the client. The server then verifies that the presented key is valid for the current session. Should the server be gracefully shut down, it can clear the key table; if it crashes, the keys would still be present and can be honoured. That's probably the simplest approach I can think of that's secure.
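A rough sketch of that key handling on the server side; the table, connection string and method names are all placeholders, and a real version would need proper hardening:

    using System;
    using System.Data.SqlClient;

    static class LicenseKeys
    {
        private const string ConnectionString = "...";   // server-side database

        // Issue a key to a connecting workstation; the client presents it on every call.
        public static Guid Issue(string workstation)
        {
            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO SessionKeys (KeyValue, Workstation, IssuedUtc) VALUES (@k, @w, GETUTCDATE())", conn))
            {
                var key = Guid.NewGuid();
                cmd.Parameters.AddWithValue("@k", key);
                cmd.Parameters.AddWithValue("@w", workstation);
                conn.Open();
                cmd.ExecuteNonQuery();
                return key;
            }
        }

        // Called only during a graceful shutdown; after a crash the rows survive,
        // so the previously issued keys keep being honoured.
        public static void ClearOnGracefulShutdown()
        {
            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand("DELETE FROM SessionKeys", conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }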
If there's yet more to the story then let us know.
I have created a timeclock application in C# that connects to a web service on our server in order to clock employees in/out. The application resides in the system tray and clocks users out if they shut down/suspend their machines or if they are idle for more than three hours to which it clocks them out at the time of last activity.
My issue arises that when a user brings his machine back up from a sleep state (which fires the SystemEvents.PowerModeChanged event), the application attempts to clock the employee back in but the network connection isn't fully initialized at that time and the web-service call times out.
An obvious solution, albeit a hack, would be to put a delay on the clock-in, but this wouldn't necessarily fix the problem across the board. What I am looking to do is a sort of "relentless" clock-in, where it waits until it can see the server before it actually attempts to clock in.
What is the best method to determine if a connection to a web service can be made?
The best way is going to be to actually try to make the connection and catch the errors. You can ping the machine, but that will only tell you if the machine is running and on the network, which doesn't necessarily reflect on whether the webservice is running and available.
When handling the event, put your connection code into a method that will loop through until success, catching errors and retrying.
Even a delay wouldn't be perfect as depending on the individual systems and other applications running it can take varying times for the network connection to be re-established.
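A minimal sketch of that retry-until-it-works approach; the actual clock-in call is passed in as a delegate, and the names in the usage comment are hypothetical:

    using System;
    using System.Net;
    using System.Threading;

    static class RetryHelper
    {
        // Keep retrying the given call (e.g. the clock-in web service call) until it
        // succeeds; only network-type failures are swallowed.
        public static void RetryUntilSuccess(Action call, TimeSpan delay)
        {
            while (true)
            {
                try
                {
                    call();
                    return;
                }
                catch (WebException)
                {
                    Thread.Sleep(delay);   // connection not re-established yet; wait and retry
                }
                catch (TimeoutException)
                {
                    Thread.Sleep(delay);
                }
            }
        }
    }

    // usage from the PowerModeChanged handler, with hypothetical names:
    // RetryHelper.RetryUntilSuccess(() => timeclockService.ClockIn(employeeId),
    //                               TimeSpan.FromSeconds(15));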
Implement a queue where you post messages and have a thread periodically try to flush the in-memory queue to the web service.
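Something along these lines; SendToWebService is a placeholder for the real clock-in/out call, and Queue<T> with a lock keeps it friendly to older framework versions:

    using System;
    using System.Collections.Generic;
    using System.Threading;

    class ClockEventQueue
    {
        private readonly Queue<string> _pending = new Queue<string>();
        private readonly object _gate = new object();
        private readonly Timer _flushTimer;

        public ClockEventQueue()
        {
            // try to flush every 30 seconds
            _flushTimer = new Timer(_ => Flush(), null,
                TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
        }

        public void Post(string message)
        {
            lock (_gate) _pending.Enqueue(message);
        }

        private void Flush()
        {
            while (true)
            {
                string next;
                lock (_gate)
                {
                    if (_pending.Count == 0) return;
                    next = _pending.Peek();
                }

                try
                {
                    SendToWebService(next);           // may throw while the network is down
                    lock (_gate) _pending.Dequeue();  // only remove after a successful send
                }
                catch (Exception)
                {
                    return;                           // still offline; try again on the next tick
                }
            }
        }

        // placeholder for the real web service call
        private void SendToWebService(string message)
        {
        }
    }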
If the problem is latency in re-establishing the network connection, ping is one solution; it's like ringing the doorbell to see if anyone is home.
If the ping succeeds, then try calling the web service, catching exceptions appropriately (I think both SocketException and SoapException can occur, depending on readiness/responsiveness).
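For example, something like this before attempting the call; the host name in the usage comment is a placeholder:

    using System.Net.NetworkInformation;

    static class Reachability
    {
        // Quick check that the server answers ICMP before attempting the web service call.
        public static bool ServerIsReachable(string host)
        {
            try
            {
                using (var ping = new Ping())
                {
                    PingReply reply = ping.Send(host, 2000);   // 2 second timeout
                    return reply != null && reply.Status == IPStatus.Success;
                }
            }
            catch (PingException)
            {
                return false;   // no route, or name resolution failed
            }
        }
    }

    // e.g. if (Reachability.ServerIsReachable("timeclock.example.com")) { /* try the clock-in */ }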
Ping can be disabled even though the web service port is open. I wouldn't use this method...