Does anyone know why sometimes a Directory.Move() operation in C# hangs/waits instead of throwing an exception immediately?
For example:
If I use the Directory.Move() method inside a try block and then navigate to that folder in File Explorer, Windows creates handles that lock it.
I then expect the catch block to execute immediately, but instead the application just hangs for 10-15 seconds before it throws an exception.
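Roughly, the code in question looks like this (a minimal sketch; the paths are placeholders):

using System;
using System.IO;

try
{
    // Hangs for 10-15 seconds if the source folder is open in File Explorer.
    Directory.Move(@"C:\Temp\Source", @"C:\Temp\Destination");
}
catch (IOException ex)
{
    // I expect to land here immediately when the folder is locked.
    Console.WriteLine(ex.Message);
}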
The funny thing is that if I navigate out of the folder in File Explorer during these 10-15 seconds, the application actually completes the Move() operation.
It's as if, instead of throwing an exception immediately, Windows waits 10-15 seconds to see whether whoever owns the handles (locks) closes them on their own.
Is there a way to make the application throw the exception immediately?
The answer to your question "Does anyone know why sometimes a move operation in C# hangs/waits instead of throwing an exception immediately?" is probably that the .NET Framework's request for an NTFS lock is left in a pending state, and it eventually gives up.
System.IO.Directory.Move maps directly to a Kernel32 function; I would guess that this eventually ends up calling LockFileEx (https://msdn.microsoft.com/en-us/library/windows/desktop/aa365203(v=vs.85).aspx), which allows the caller to specify whether to fail immediately if the lock cannot be obtained or to wait for a specified time. I would also guess that Kernel32 uses the variant that sets a timeout, and the .NET Framework doesn't seem to have any influence on what timeout is used.
Related
I have a console application which takes as input a folder, picks up all the files in that folder and process them.
The processing is sequential, and for each document it launches a separate STA thread that runs a WPF-dependent action inside it.
The application manages to process ~1k documents before getting an OutOfMemoryException and throwing an error because the Dispatcher is null.
Looking with ProcessExplorer I can see that:
there aren't any running/hanging .NET threads
there are ~49k handles allocated when it crashes
out of these 4k are Thread handles
Questions:
What could cause the thread handles not to be released? (I can see them being created and deleted live in Process Explorer, but deletion doesn't seem to keep up with the rate at which they're created.)
How can I see what those 49k handles are? Process Explorer only shows about 5k items; what are the rest of them?
How can I work around the OutOfMemoryException? My understanding is that the entire process dies because it ends up allocating too much memory and causing fragmentation. I tried separating the threads via AppDomains and calling the GC forcefully, but nothing changed.
It's been a while, but I'd like to capture the answer in case someone else runs into this issue as well.
The problem was caused by a double shutdown of the WPF Dispatcher.
Because of how we were calling Dispatcher.InvokeShutdown, it sometimes resulted in multiple calls on the same dispatcher in the same thread. Even though this doesn't raise any exceptions, every subsequent call into WPF no longer disposes its handles, which results in a memory leak.
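One way to guard against that is to check the dispatcher's shutdown flags before calling InvokeShutdown. A minimal sketch, assuming the dispatcher being shut down belongs to the current thread (adapt it to however you hold the reference):

using System.Windows.Threading;

var dispatcher = Dispatcher.CurrentDispatcher;

// Only ever ask WPF to shut this dispatcher down once.
if (!dispatcher.HasShutdownStarted && !dispatcher.HasShutdownFinished)
{
    dispatcher.InvokeShutdown();
}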
I have an instance of System.Diagnostics.Process which was created via Process.GetProcessesByName.
After I successfully open the process, I perform various operations on it, such as reading its memory and its window title.
These operations are executed repeatedly on a timer, by which I mean the Timer.Elapsed event handler is the source of the process operations.
Now, I noticed that I have a race condition that I've been unable to solve using anything I know. Here is how it happens:
timerElapsedEvent(...) {
    if (!process.HasExited) {
        process.Refresh(); // Update title.
        var title = process.MainWindowTitle;
    }
}
If the process is running, and my code enters the if block, there is a small chance the process might exit before the process.MainWindowTitle call is executed, which would cause an exception.
What I need is a way to somehow capture the exit event of the process and keep the process alive until it is safe to close it without crashing my monitoring application, thus making sure it waits for process.MainWindowTitle before closing (or any other solution that would solve this problem).
Moreover, at the same time, another method might be running a ReadProcessMemory, which would crash too.
How can I solve this?
PS: The Process.Exited event handler doesn't work, because it won't be fired before process.MainWindowTitle; it will only be fired after the current instruction has finished.
I'm pretty sure that somehow controlling the exit event is the only way to solve this, because HasExited could change at any time, no matter how many checks I have before actually calling a method on the process.
PS2: I just realized this is a TOCTTOU case, which is unsolvable unless I can control the process I opened, so I'm leaving this here just to see if anyone knows a way to do that.
Short version: you can't.
There is a fundamental "time-of-check-to-time-of-use" issue here that you don't have enough control over to solve. The OS is always able to kill the process you are dealing with (either arbitrarily, or due to some failure in the process), between the time you check the HasExited property and the time you check the MainWindowTitle property.
The Process class doesn't do much to enforce getting the exception, but it does enough. In particular, calling Refresh() forces the class to "forget" anything it knows about the process, so that it will re-retrieve the information when you ask for it again. This includes the main window handle for the process.
The Process class uses the native window enumeration functions to search for the window handle for the known process ID. Since the process has exited, it fails to find the handle and gets a NULL value (IntPtr.Zero in managed terms). On seeing the null return value, the Process class throws the InvalidOperationException.
The only reliable solution is to always be prepared to catch the exception. There will always be a chance that between checking for the state and trying to do something that relies on it, that state can change.
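In practice that means wrapping the property access itself. A minimal sketch (what you do with the title, and on failure, is up to you):

try
{
    if (!process.HasExited)
    {
        process.Refresh();
        var title = process.MainWindowTitle;
        // ... use the title ...
    }
}
catch (InvalidOperationException)
{
    // The process exited between the HasExited check and the property read;
    // treat this the same as if HasExited had returned true.
}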
While academic, I find it interesting to note that if you set the EnableRaisingEvents property, the Process class can be (and usually is) even more efficient about detecting the exited process and throwing the exception.
In particular, when the EnableRaisingEvents property is set, the Process class registers to be notified by the OS (via the thread pool's RegisterWaitForSingleObject() method) when the process handle is signaled. I.e. the Process class does not even need to go through the effort of searching for the main window handle in this case, because it's notified almost instantly if the process exits.
(Of course, there's still potentially an internal race condition, in a very tiny window of opportunity, since the notification may not have arrived yet when the Process class checks for the has-exited state, but the process may still have exited before the Process class enumerates the windows).
Anyway, this last bit doesn't affect the basic answer; it's just a bit of trivia I learned and found interesting while wandering through the Process source code. :)
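For reference, opting into that behavior is just a property set plus an event subscription; a minimal sketch (the handler body is illustrative):

process.EnableRaisingEvents = true;
process.Exited += (sender, e) =>
{
    // Runs once the OS signals the process handle; e.g. stop the timer here.
    Console.WriteLine("Monitored process exited with code " + process.ExitCode);
};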
Is there a way to fire an Http call to an external web API within my own web API without having to wait for results?
The scenario I have is that I really don't care whether or not the call succeeds and I don't need the results of that query.
I'm currently doing something like this within one of my web API methods:
var client = new HttpClient() { BaseAddress = someOtherApiAddress };
client.PostAsync("DoSomething", null);
I cannot put this piece of code within a using statement because the call doesn't go through in that case. I also don't want to access .Result on the task because I don't want to wait for the query to finish.
I'm trying to understand the implications of doing something like this. I read all over that this is really dangerous, but I'm not sure why. What happens, for example, when my initial query ends? Will IIS dispose the thread and the client object, and can this cause problems at the other end of the query?
Is there a way to fire an Http call to an external web API within my own web API without having to wait for results?
Yes. It's called fire and forget. However, it seems like you have already discovered it.
I'm trying to understand the implications of doing something like this
One of the links in the answers mentioned above states the three risks:
An unhandled exception in a thread not associated with a request will take down the process. This occurs even if you have a handler setup via the Application_Error method.
This means that any exception thrown in your application or in the receiving application won't be caught (there are ways to get past this).
If you run your site in a Web Farm, you could end up with multiple instances of your app that all attempt to run the same task at the same time. A little more challenging to deal with than the first item, but still not too hard. One typical approach is to use a resource common to all the servers, such as the database, as a synchronization mechanism to coordinate tasks.
You could have multiple fire-and-forget calls when you mean to have just one.
The AppDomain your site runs in can go down for a number of reasons and take down your background task with it. This could corrupt data if it happens in the middle of your code execution.
Here is the danger: should your AppDomain go down, it may corrupt the data being sent to the other API, causing strange behavior at the other end.
I'm trying to understand the implications of doing something like this. I read all over that this is really dangerous
Dangerous is relative. If you execute something that you don't care at all if it completes or not, then you shouldn't care at all if IIS decides to recycle your app while it's executing either, should you? The thing you'll need to keep in mind is that offloading work without registration might also cause the entire process to terminate.
Will IIS dispose the thread and the client object?
IIS can recycle the AppDomain, causing your thread to abort abnormally. Whether it does so depends on many factors, such as how recycling is configured in your IIS and whether you're doing any other operations that may cause a recycle.
In many of his posts, Stephen Cleary tries to convey the point that offloading work without registering it with ASP.NET is dangerous and may cause undesirable side effects, for all the reasons you've read. That's also why there are libraries such as AspNetBackgroundTasks, or Hangfire for that matter.
The thing you should most worry about is that a thread which isn't associated with a request can cause your entire process to terminate:
An unhandled exception in a thread not associated with a request will take down the process. This occurs even if you have a handler setup via the Application_Error method.
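If you're on .NET 4.5.2 or later, one built-in way to at least register the work with ASP.NET is HostingEnvironment.QueueBackgroundWorkItem, so the runtime knows about it and tries to delay shutdown until it completes. A minimal sketch, reusing the someOtherApiAddress from your snippet:

using System.Net.Http;
using System.Web.Hosting;

// Registered with ASP.NET: the runtime tracks the work and tries to delay
// AppDomain shutdown until queued items finish.
HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
{
    using (var client = new HttpClient { BaseAddress = someOtherApiAddress })
    {
        await client.PostAsync("DoSomething", null, cancellationToken);
    }
});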
Yes, there are a few ways to fire-and-forget a "task" or piece of work without needing confirmation. I've used Hangfire and it has worked well for me.
The dangers, from what I understand, are that an exception in a fire-and-forget thread could bring down your entire IIS process.
See this excellent link about it.
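With Hangfire, the fire-and-forget call becomes a persisted background job rather than a bare thread. Roughly like this (the class, method, and URL are illustrative, and Hangfire itself still has to be configured at startup):

using System;
using System.Net.Http;
using Hangfire;

public class Notifications
{
    // Enqueue a fire-and-forget job; Hangfire persists it and runs it on a worker,
    // retrying on failure instead of letting an exception take down the IIS process.
    public static void QueueNotification()
    {
        BackgroundJob.Enqueue(() => NotifyOtherApi());
    }

    // The job Hangfire will invoke (body illustrative).
    public static void NotifyOtherApi()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://other-api.example/") })
        {
            client.PostAsync("DoSomething", null).Wait();
        }
    }
}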
I have several long-running threads in an MVC3 application that are meant to run forever.
I'm running into a problem where a ThreadAbortException is being called by some other code (not mine) and I need to recover from this gracefully and restart the thread. Right now, our only recourse is to recycle the worker process for the appDomain, which is far from ideal.
Here are some details about how this code works:
A singleton service class exists for this MVC3 application. It has to be a singleton because it caches data. This service is responsible for making requests to a database. A 3rd party library is used for the actual database connection code.
In this singleton class we use a collection of classes that are called "QueryRequestors". These classes identify unique package+stored_procedure names for requests to the database, so that we can queue those calls. That is the purpose of the QueryRequestor class: to make sure calls to the same package+stored_procedure (although they may have any number of different parameters) are queued and do not happen simultaneously. This eases our database strain considerably and improves performance.
The QueryRequestor class uses an internal BlockingCollection and an internal Task (thread) to monitor its queue (blocking collection). When a request comes into the singleton service, it finds the correct QueryRequestor class via the package+stored_procedure name, and it hands the query over to that class. The query gets put in the queue (blocking collection). The QueryRequestor's Task sees there's a request in the queue and makes a call to the database (now the 3rd party library is involved). When the results come back they are cached in the singleton service. The Task continues processing requests until the blocking collection is empty, and then it waits.
Once a QueryRequestor is created and up and running, we never want it to die. Requests come in to this service 24/7 every few minutes. If the cache in the service has data, we use it. When data is stale, the very next request gets queued (and subsequent simultaneous requests continue to use the cache, because they know someone (another thread) is already making a queued request, and this is efficient).
So the issue here is what to do when the Task inside a QueryRequestor class encounters a ThreadAbortException. Ideally I'd like to recover from that and restart the thread. Or, at the very least, dispose of the QueryRequestor (it's in a "broken" state now as far as I'm concerned) and start over. Because the next request that matches the package+stored_procedure name will create a new QueryRequestor if one is not present in the service.
I suspect the thread is being killed by the 3rd party library, but I can't be certain. All I know is that nowhere do I abort or attempt to kill the thread/task. I want it to run forever. But clearly we have to have code in place for this exception. It's very annoying when the service bombs because a thread has been aborted.
What is the best way to handle this? How can we handle this gracefully?
You can stop the automatic re-throwing of a ThreadAbortException by calling Thread.ResetAbort.
Note that the most common cause of the exception is a Redirect call, and canceling the thread abort may cause the undesired effect of executing request code that would otherwise have been skipped because the thread was killed. This is a more common issue in Web Forms (where the separation of code and rendering is less clear) than in MVC (where you can return special redirect results from controllers).
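In the worker that would look roughly like this (a sketch only; whether swallowing the abort is actually safe depends on who requested it and why):

using System.Threading;

try
{
    DoWork(); // the QueryRequestor's queue-processing loop
}
catch (ThreadAbortException)
{
    // Cancels the pending abort so the exception is not automatically
    // re-thrown at the end of this catch block, letting the thread continue.
    Thread.ResetAbort();
}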
Here's what I came up with for a solution, and it works quite nicely.
The real issue here isn't preventing the ThreadAbortException, because you can't prevent it anyway, and we don't want to prevent it. It's actually a good thing if we get an error report telling us this happened. We just don't want our app coming down because of it.
So, what we really needed was a graceful way to handle this Exception without bringing down the application.
The solution I came up with was to create a bool flag property on the QueryRequestor class called "IsValid". This property is set to true in the constructor of the class.
In the DoWork() call that is run on the separate thread in the QueryRequestor class, we catch the ThreadAbortException and we set this flag to FALSE. Now we can tell other code that this class is in an Invalid (broken) state and not to use it.
So now, the singleton service that makes use of this QueryRequestor class knows to check for this IsValid property. If it's not valid, it replaces the QueryRequestor with a new one, and life moves on. The application doesn't crash and the broken QueryRequestor is thrown away, replaced with a new version that can do the job.
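In code, the relevant part of the class looks roughly like this (a condensed sketch of the idea, not the actual implementation; Execute and queue are placeholders):

public bool IsValid { get; private set; }   // set to true in the constructor

private void DoWork()
{
    try
    {
        foreach (var request in queue.GetConsumingEnumerable())
        {
            Execute(request); // call the 3rd party library, cache the results, etc.
        }
    }
    catch (ThreadAbortException)
    {
        // The thread is being killed: mark this QueryRequestor as broken so
        // the singleton service throws it away and builds a fresh one next time.
        IsValid = false;
    }
}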
In testing, this worked quite well. I would intentionally call Thread.Abort() on the DoWork() thread, and watch the Debug window for output lines. The app would report that the thread had been aborted, and then the singleton service was correctly replacing the QueryRequestor. The replacement was then able to successfully handle the request.
My process sometimes throws an exception like DllNotFoundException after it starts. I have a monitor service responsible for maintaining the consistent state of the process.
How can I keep track of the state of my process using a Windows service?
Is there an open-source implementation of a Windows service that maintains/tracks the state of a process in Windows?
That's not possible. Exceptions are local to a thread first, and local to a process second if they are unhandled. An unhandled exception will terminate the process. The only shrapnel you could pick up from such a dead process is the process exit code, which should be set to 0xe0434f4e, the exception code for an unmanaged exception. No other relevant info is available, unless there's an unhandled-exception handler in the process that logs state. That state is very unreliable; the process suffered a major heart attack.
Keeping multiple processes in sync and running properly when they may die from exceptions is extraordinarily difficult. Only death can be detected reliably; avoid doing more.
Edit: So the actual problem wasn't that the process was dying, but that the process was stuck in an exception-handler dialog waiting for the user to hit Debug or Cancel. The solution to the problem was to disable the .NET JIT debugging dialog; instructions here:
http://weblogs.asp.net/fmarguerie/archive/2004/08/27/how-to-turn-off-disable-the-net-jit-debugging-dialog.aspx
My original proposed solution is below.
Not a Windows service, but this is a pretty easy .NET program to write.
Use System.Diagnostics.Process to get a Process object for the process you want to check. You can use GetProcessesByName if you want to open an existing process. If you create the process from C#, then you will already have the Process object.
Then you can call WaitForExit on the Process object, either with or without a timeout, or test the HasExited property, or register an Exited callback. Once the process has exited, you can check the ExitCode property to find out whether the process returned an error value.
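A minimal sketch of that approach (the process name is a placeholder):

using System;
using System.Diagnostics;
using System.Linq;

// Attach to an already-running process by name (placeholder name).
var process = Process.GetProcessesByName("MyApp").FirstOrDefault();
if (process != null)
{
    process.EnableRaisingEvents = true;
    process.Exited += (s, e) =>
        Console.WriteLine("Process exited with code " + process.ExitCode);

    process.WaitForExit(); // or poll HasExited / use a timeout overload instead
}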
Have your process write events and exceptions to the system's Application log, and have your monitor check for entries periodically to find events relating to your process. You can also check the System log for service start and stop events.
If the process itself is a Windows service, you can check its status using System.ServiceProcess.ServiceController.
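For example (a sketch; the service name is a placeholder):

using System.ServiceProcess; // reference System.ServiceProcess.dll

var controller = new ServiceController("MyServiceName");
if (controller.Status != ServiceControllerStatus.Running)
{
    // The service has stopped (or is stuck starting/stopping); react here.
}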
This worked for now:
http://weblogs.asp.net/fmarguerie/archive/2004/08/27/how-to-turn-off-disable-the-net-jit-debugging-dialog.aspx
In the case of DllNotFoundException and other things that happen at startup, you can have the application indicate when it's finished starting up. Have it write a timestamp to a file, for instance. Your monitor can compare the time the application started with the time in the file.
One thing you could do is monitor the CPU usage of the process. I am assuming your process goes away when the exception is thrown, so its CPU usage should be 0 since it is no longer available. Therefore, if the CPU usage stays at zero for a certain period of time, you can safely assume that the process has raised the exception. This method is not foolproof, since you are basing your decision on CPU usage and a legitimate process may have zero CPU usage for a given period of time. You can incorporate this check inside your monitoring service, or you could write a simple VB script to check process CPU usage externally.
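A rough sketch of that kind of check in C# (the process name and sampling interval are placeholders):

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;

var process = Process.GetProcessesByName("MyApp").FirstOrDefault();
if (process != null)
{
    var before = process.TotalProcessorTime;
    Thread.Sleep(TimeSpan.FromSeconds(30)); // sampling interval (placeholder)
    process.Refresh();

    if (process.HasExited || process.TotalProcessorTime == before)
    {
        // No CPU time consumed over the interval (or the process is gone):
        // treat it as dead/hung and have the monitor restart it.
    }
}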