.NET Denying Access to Directories After Copy Operations

I'm performing a "safe" copy of a directory over another directory as follows:
Given the source C:\Source and target C:\Target
Copy C:\Source to C:\Target-incoming
Move C:\Target (if it exists) to C:\Target-outgoing
Move C:\Target-incoming to C:\Target
Delete C:\Target-outgoing (if it exists)
If any of the first three steps fail, I'll attempt to put things back as they were to prevent data loss.
However, the move of C:\Target-incoming to C:\Target fails with "Access to the path C:\Target-incoming is denied" most of the time.
At the moment, inserting Thread.Sleep(100) just before the move operation fixes the problem for me. However, waiting .1 of a second seems ridiculous to me. Thread.Sleep(10) isn't enough to fix it. I also have the sinking feeling that the value I have to wait depends on the speed of disk IO.
So, my questions:
Can I prevent this from happening?
If not, is there a way of finding out when the lock on the directory is released after copying it?
Edit: For clarity, I'm doing all these operations in one method on one thread, and I'm just using Thread.Sleep() to pause code flow for a moment. The moves and copies are done with the standard .NET Directory.Move(), Directory.CreateDirectory() and FileInfo.CopyTo() methods. It appears that the .NET methods return before the locks on the respective files are released, which makes it necessary to wait some amount of time before continuing.
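For reference, a minimal sketch of the swap sequence described above; the recursive CopyDirectory helper is hypothetical, since .NET has no built-in recursive directory copy:
string source = @"C:\Source";
string target = @"C:\Target";
string incoming = target + "-incoming";
string outgoing = target + "-outgoing";

CopyDirectory(source, incoming);                 // 1. copy to C:\Target-incoming

if (Directory.Exists(target))
    Directory.Move(target, outgoing);            // 2. move C:\Target aside

Directory.Move(incoming, target);                // 3. the step that fails with "access denied"

if (Directory.Exists(outgoing))
    Directory.Delete(outgoing, true);            // 4. delete C:\Target-outgoing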

What is probably happening is that your thread is trying to "Move C:\Target-incoming to C:\Target" WHILE the "Move C:\Target to C:\Target-outgoing" has NOT finished YET.
This theory is supported by the fact that your process succeeds after a short Thread.Sleep.
Try chaining your steps: divide each step into a specific method, and call the methods one after the other, synchronizing the start of each method with the end of the previous one.
There are various ways to do that (among others, syncing/locking/chaining different threads per process/step).
You could check Thread Synchronization in .NET.
But of course, this is not the only possible cause of your problem.
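If the steps really were running on separate threads, a hedged sketch of chaining them with Tasks, where each placeholder method stands in for one of the question's four steps and each step starts only after the previous one completes successfully:
using System.Threading.Tasks;

Task.Factory.StartNew(() => CopyToIncoming())
    .ContinueWith(t => MoveTargetToOutgoing(), TaskContinuationOptions.OnlyOnRanToCompletion)
    .ContinueWith(t => MoveIncomingToTarget(), TaskContinuationOptions.OnlyOnRanToCompletion)
    .ContinueWith(t => DeleteOutgoing(), TaskContinuationOptions.OnlyOnRanToCompletion)
    .Wait(); // surfaces a failure of the final step; earlier failures cancel the rest of the chain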

After a bunch of testing, it seems like the very act of trying to move a locked folder gets the OS to hurry up and release the lock, even if the first attempt fails.
I wrote this extension method to DirectoryInfo:
public static void TryToMoveTo(this DirectoryInfo o, string targetPath)
{
    int attemptsRemaining = 5;
    while (true)
    {
        try
        {
            o.MoveTo(targetPath);
            break; // success
        }
        catch (Exception)
        {
            if (attemptsRemaining == 0)
            {
                throw; // out of retries; let the original exception surface
            }
            attemptsRemaining--;
            System.Threading.Thread.Sleep(10); // give the OS a moment to release the lock
        }
    }
}
While debugging the original problem, I settled on waiting for 100ms as anything less seemed to cause exceptions (I tried 10, 25, 50, 75 and 100ms). However, in the method above I wait 10ms before retrying, and I never, ever got more than one exception thrown in each of my hundreds of test runs.

You can always try waiting in a loop, up to a maximum number of tries. You can check whether the directory is locked by calling CreateFile and checking its return value. Be sure to read through the "flags" section of the docs, because you need to pass in a special flag to open a directory.
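A hedged sketch of that CreateFile probe via P/Invoke; FILE_FLAG_BACKUP_SEMANTICS is the special flag required to open a directory handle, and the retry policy is left to the caller:
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class DirectoryProbe
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    const uint GENERIC_READ = 0x80000000;
    const uint FILE_SHARE_ALL = 0x1 | 0x2 | 0x4;        // read | write | delete
    const uint OPEN_EXISTING = 3;
    const uint FILE_FLAG_BACKUP_SEMANTICS = 0x02000000; // required to open a directory

    // Returns true if the directory handle could be opened, i.e. no
    // conflicting lock was held at the moment of the call.
    public static bool CanOpenDirectory(string path)
    {
        using (SafeFileHandle h = CreateFile(path, GENERIC_READ, FILE_SHARE_ALL,
            IntPtr.Zero, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, IntPtr.Zero))
        {
            return !h.IsInvalid;
        }
    }
}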
Someone else mentioned in a comment that you may want to try Transactional NTFS. If you can, you might want to try that.

Check whether the source and target directories exist before copying or moving, using System.IO.Directory.Exists.
The "access denied" error can be caused by either the source or the target not being found.

Related

Is there a way to validate that the CNC GCode program actually started running?

My current solution of asking the CNC (via ThincAPI) whether or not the program has completed is not working. It doesn't care if I change programs; once it has succeeded, it will always report true, even after changing the loaded program.
What I would like is a variable that I can reset right before firing cycle start so I can check and see if the program truly ran. Ideally I would reset this CycleComplete method that is already being used.
I think what I'm going to end up doing is writing to a macro (common) variable and setting a value, then having the GCode change that value at the very end of the GCode program. Then I will read that value to verify it changed.
Okuma.CMDATAPI.DataAPI.CProgram myCProgram;
myCProgram = new Okuma.CMDATAPI.DataAPI.CProgram();
...
case "cycle":
    string cycle = myCProgram.CycleComplete().ToString();
    Console.WriteLine(" Response: " + cycle);
    return cycle;
You might have to check that the machine is in Auto mode, and check the running status, using the CMachine class methods:
GetNCStatus()
GetOperationMode()
In the case of a schedule program, the part program is loaded very quickly by the NC. As a result, you might always see RUNNING status.
Using a CV (common variable) is also a good way to ensure that the program has been set/reset.
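A hedged sketch of that status check. CMachine, GetNCStatus() and GetOperationMode() are named above, and the namespace mirrors the CProgram example from the question, but the exact return types and string values here are assumptions, not verified signatures:
var myCMachine = new Okuma.CMDATAPI.DataAPI.CMachine();

string status = myCMachine.GetNCStatus().ToString();
string mode = myCMachine.GetOperationMode().ToString();
Console.WriteLine(" Status: " + status + ", Mode: " + mode);

// Only trust CycleComplete() when the NC is in Auto mode and no longer
// reports a running status; with schedule programs the next part program
// can load so fast that the status looks permanently RUNNING.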
I suspect you must be using an SDF Scheduled Program and the next program is being called before your application has a chance to catch that the previous .MIN program has completed.
The CycleComplete() method will reset when a new program is selected.
If it is returning true and the program in question didn't complete, that is because the subsequent .MIN program completed.
I would suggest putting a Dwell in between the PSelect calls in the SDF to give your app time to catch that the previous .MIN has completed or not.

Monitor.TryEnter and Threading.Timer race condition

I have a Windows service that every 5 seconds checks for work. It uses System.Threading.Timer for handling the check and processing and Monitor.TryEnter to make sure only one thread is checking for work.
Just assume it has to be this way as the following code is part of 8 other workers that are created by the service and each worker has its own specific type of work it needs to check for.
readonly object _workCheckLocker = new object();
public Timer PollingTimer { get; private set; }

void InitializeTimer()
{
    if (PollingTimer == null)
        PollingTimer = new Timer(PollingTimerCallback, null, 0, 5000);
    else
        PollingTimer.Change(0, 5000);
    Details.TimerIsRunning = true;
}

void PollingTimerCallback(object state)
{
    if (!Details.StillGettingWork)
    {
        if (Monitor.TryEnter(_workCheckLocker, 500))
        {
            try
            {
                CheckForWork();
            }
            catch (Exception ex)
            {
                Log.Error(EnvironmentName + " -- CheckForWork failed. " + ex);
            }
            finally
            {
                Monitor.Exit(_workCheckLocker);
                Details.StillGettingWork = false;
            }
        }
    }
    else
    {
        Log.Standard("Continuing to get work.");
    }
}

void CheckForWork()
{
    Details.StillGettingWork = true;
    //Hit web server to grab work.
    //Log Processing
    //Process Work
}
Now here's the problem:
The code above is allowing 2 Timer threads to get into the CheckForWork() method. I honestly don't understand how this is possible, but I have experienced this with multiple clients where this software is running.
The logs I got today when I pushed some work showed that it checked for work twice and I had 2 threads independently trying to process which kept causing the work to fail.
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Unloaded AppDomain - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
AppDomain is already unloaded - at 09/14 10:15:501255801
=== Starting Update Process === - at 09/14 10:15:513756009
Downloading File X - at 09/14 10:15:525631183
Downloading File Y - at 09/14 10:15:525631183
=== Starting Update Process === - at 09/14 10:15:525787359
Downloading File X - at 09/14 10:15:525787359
Downloading File Y - at 09/14 10:15:525787359
The logs are written asynchronously and are queued, so don't dig too deep on the fact that the times match exactly, I just wanted to point out what I saw in the logs to show that I had 2 threads hit a section of code that I believe should have never been allowed. (The log and times are real though, just sanitized messages)
Eventually what happens is that the 2 threads start downloading a big enough file where one ends up getting access denied on the file and causes the whole update to fail.
How can the above code actually allow this? I experienced this problem last year when I had a lock instead of the Monitor, and I assumed it was because the Timer callbacks eventually got offset enough, due to the lock blocking, that timer threads were stacking up: one would block for 5 seconds and get through right as the Timer triggered another callback, and both would somehow make it in. That's why I went with the Monitor.TryEnter option, so I wouldn't just keep stacking timer threads.
Any clue? In all the cases where I have tried to solve this issue before, the System.Threading.Timer has been the one constant, and I think it's the root cause, but I don't understand why.
I can see in the log you've provided that you got an AppDomain restart; is that correct? If so, are you sure that you have one and only one object for your service across the AppDomain restart? I suspect that during the restart not all threads are stopped at exactly the same time, so some of them could continue polling the work queue, and two different threads in different AppDomains could end up with the same Id for the work.
You could probably fix this by marking your _workCheckLocker as static:
static object _workCheckLocker;
and introducing a static constructor for your class to initialize this field (with inline initialization you could face some more complicated problems). But I'm not sure this is enough for your case: during an AppDomain restart the static class is reloaded too, so, as I understand it, this is not an option for you.
Maybe you could introduce a static dictionary for your workers instead of a plain object, so you can check the Id of the documents being processed.
Another approach is to handle the Stopping event for your service, which will probably be raised during the AppDomain restart; there you can introduce a CancellationToken and use it to stop all work under such circumstances.
Also, as #fernando.reyes said, you could introduce a heavier lock structure, called a mutex, for synchronization, but this will degrade your performance.
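A hedged sketch of that mutex idea: a named Mutex, unlike a static object, is an OS-level object visible across AppDomains (and processes), so it still guards CheckForWork() through an AppDomain restart. The mutex name is an arbitrary example:
using System;
using System.Threading;

using (var mutex = new Mutex(false, @"Global\MyServiceWorkCheck"))
{
    // Same 500 ms timeout the question uses with Monitor.TryEnter.
    if (mutex.WaitOne(TimeSpan.FromMilliseconds(500)))
    {
        try
        {
            CheckForWork(); // from the question
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}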
TL;DR
Production stored procedure has not been updated in years. Workers were getting work they should have never gotten and so multiple workers were processing update requests.
I finally found the time to properly set myself up locally to act as a production client through Visual Studio. Although I wasn't able to reproduce it the way I'd experienced it, I did accidentally stumble upon the issue.
Those who assumed that multiple workers were picking up the work were indeed correct, and that's something that should never have been able to happen, as each worker is unique in the work it does and requests.
It turns out that in our production environment, the stored procedure that retrieves work based on the work type had not been updated in years (yes, years!) of deploys. Anything that checked for work automatically got the update work, which meant that when the Update worker and worker Foo checked at the same time, they both ended up with the same work.
Thankfully, the fix is database side and not a client update.

Handle system folders event in windows

I am writing some C# code and I need to detect if a specific folder on my windows file system has been opened while the application is running. Is there any way to do it? WinAPI maybe?
There are three API things I think you should check out:
FindFirstChangeNotification() http://msdn.microsoft.com/en-us/library/aa364417%28VS.85%29.aspx
That gives you a handle you can wait on to find changes to files in a particular directory or tree of directories. It won't tell you when a directory is browsed, but it will tell you when a file is saved, renamed, and so on and so forth (a sketch follows at the end of this answer).
SetWindowsHookEx() http://msdn.microsoft.com/en-us/library/ms644990%28v=VS.85%29.aspx
You can set that up to give you a callback when any number of events occur. In fact, I'm pretty positive that you CAN get this callback when a directory is opened, but it will probably be inordinately difficult, because you'll be intercepting messages to Explorer's window. So you'll be rebooting during debugging.
Windows Shells http://msdn.microsoft.com/en-us/library/bb776778%28v=VS.85%29.aspx
If that wasn't painful enough, you can try writing a shell program.
If you're trying to write a rootkit, I suppose you don't want me to spoil the details for you. If you're NOT trying to write a rootkit, I suggest you look it up - carefully. There are open source rootkits, and they all basically have to monitor file access this way to hide from the user / OS.
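For the first suggestion, a minimal sketch of FindFirstChangeNotification() via P/Invoke, watching one directory tree for renames and writes (the path and timeout are examples):
using System;
using System.Runtime.InteropServices;

static class ChangeWatcher
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern IntPtr FindFirstChangeNotification(
        string lpPathName, bool bWatchSubtree, uint dwNotifyFilter);

    [DllImport("kernel32.dll")]
    static extern bool FindCloseChangeNotification(IntPtr hChangeHandle);

    [DllImport("kernel32.dll")]
    static extern uint WaitForSingleObject(IntPtr hHandle, uint dwMilliseconds);

    const uint FILE_NOTIFY_CHANGE_FILE_NAME = 0x00000001;
    const uint FILE_NOTIFY_CHANGE_LAST_WRITE = 0x00000010;
    const uint WAIT_OBJECT_0 = 0;

    public static void WatchOnce(string path)
    {
        IntPtr handle = FindFirstChangeNotification(
            path, true, FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE);
        if (handle == IntPtr.Zero || handle == new IntPtr(-1))
            throw new InvalidOperationException("FindFirstChangeNotification failed.");

        try
        {
            // Blocks (up to 30 s) until a rename or write happens under 'path'.
            if (WaitForSingleObject(handle, 30000) == WAIT_OBJECT_0)
                Console.WriteLine("Change detected under " + path);
        }
        finally
        {
            FindCloseChangeNotification(handle);
        }
    }
}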
Go with the Windows Shell Extensions. You can use Shell Namespace Extensions to make a "virtual" folder that isn't there (or hides a real one), like the GAC (C:\Windows\assembly)
Here are several examples of Shell Extension coding in .Net 4.0.
A Column Handler would let you know when a folder is "Opened", and even let you provide extra data for each of the files (new details columns).
Check out the FileSystemWatcher class.
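A minimal FileSystemWatcher sketch (the path is an example); note that, like FindFirstChangeNotification, it reports file and folder changes, not the folder merely being opened in Explorer:
using System;
using System.IO;

var watcher = new FileSystemWatcher(@"C:\WatchedFolder")
{
    IncludeSubdirectories = true,
    NotifyFilter = NotifyFilters.FileName | NotifyFilters.DirectoryName | NotifyFilters.LastWrite
};

watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);
watcher.Renamed += (s, e) => Console.WriteLine("Renamed: " + e.FullPath);
watcher.EnableRaisingEvents = true;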
The closest thing that I can think of that may be useful to you is using the static Directory class. It provides methods to determine the last time a file or directory was accessed. You could set up a BackgroundWorker to monitor whether the directory was accessed during a specified interval. Keep track of the start and end of the interval using DateTime, and if the last access time falls between those, use the BackgroundWorker's ProgressChanged event to notify the application.
BackgroundWorker folderWorker = new BackgroundWorker();
folderWorker.WorkerReportsProgress = true;
folderWorker.WorkerSupportsCancellation = true;
folderWorker.DoWork += FolderWorker_DoWork;
folderWorker.ProgressChanged += FolderWorker_ProgressChanged;
folderWorker.RunWorkerAsync();

void FolderWorker_DoWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = (BackgroundWorker)sender;
    DateTime intervalStart = DateTime.Now;
    while (!worker.CancellationPending)
    {
        DateTime lastAccess = Directory.GetLastAccessTime(DIRECTORY_PATH);
        // Check to see if lastAccess falls between the last time the loop
        // started and now.
        if (lastAccess >= intervalStart)
        {
            object state = null; // Modify this if you need to send back data.
            worker.ReportProgress(0, state);
        }
        intervalStart = DateTime.Now;
        System.Threading.Thread.Sleep(1000); // the "specified interval" between checks
    }
}

void FolderWorker_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    // Take action here when worker.ReportProgress is invoked.
}
You could use FileSystemInfo's LastAccessTime property. The problem, though, is that it can be cached.
FileSystemInfo: http://msdn.microsoft.com/en-us/library/975xhcs9.aspx
LastAccessTime property: http://msdn.microsoft.com/en-us/library/system.io.filesysteminfo.lastaccesstimeutc.aspx
As the docs note, this value can be pre-cached:
"The value of the LastAccessTimeUtc property is pre-cached if the current instance of the FileSystemInfo object was returned from any of the following DirectoryInfo methods:
GetDirectories
GetFiles
GetFileSystemInfos
EnumerateDirectories
EnumerateFiles
EnumerateFileSystemInfos
To get the latest value, call the Refresh method."
Therefore call the Refresh method, but the value still might not be up to date, because Windows itself caches it. According to the MSDN docs: "FileSystemInfo.Refresh takes a snapshot of the file from the current file system. Refresh cannot correct the underlying file system even if the file system returns incorrect or outdated information. This can happen on platforms such as Windows 98." (link: http://msdn.microsoft.com/en-us/library/system.io.filesysteminfo.refresh.aspx)
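A minimal sketch of that Refresh() call (the path is an example):
using System;
using System.IO;

var dir = new DirectoryInfo(@"C:\WatchedFolder");
DateTime possiblyStale = dir.LastAccessTime; // may be the pre-cached value
dir.Refresh();                               // re-reads from the file system
DateTime refreshed = dir.LastAccessTime;     // still subject to OS-level caching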
I think the only way you can reliably achieve this is by monitoring the currently running processes and watch closely for new Explorer.exe instances and/or new Explorer.exe spawned threads (the "Run every window on a separate process" setting gets in the way here).
I admit I don't have a clue about how to code this, but that's what I would look for.

Using an SSH library to connect to Unix and tail a file: Is this the right approach?

As I've stated in a few other questions, I've been using a new SSH .NET library to connect to a Unix server and run various scripts and commands. Well, I've finally attempted to use it to run a Unix tail -f on a live log file and display the tail in a WinForms RichTextBox.
Since the library is not fully fleshed out, the only kinda-sorta solution I've come up with seems lacking, like the feeling you get when you know there has to be a better way. I have the connection/tailing code in a separate thread to avoid UI thread lock-ups. The thread supports cancellation requests (which allow the connection to exit gracefully, the only way to ensure the process on the Unix side is killed). Here's my code thus far (which, for the record, seems to work; I would just like some thoughts on whether this is the right way to go about it):
PasswordConnectionInfo connectionInfo = new PasswordConnectionInfo(lineIP, userName, password);
string command = "cd /logs; tail -f " + BuildFileName() + " \r\n";
using (var ssh = new SshClient(connectionInfo))
{
    ssh.Connect();
    var output = new MemoryStream();
    var shell = ssh.CreateShell(Encoding.ASCII, command, output, output);
    shell.Start();
    long positionLastWrite = 0;
    while (!TestBackgroundWorker.CancellationPending) // checks for cancel request
    {
        output.Position = positionLastWrite;
        var result = new StreamReader(output, Encoding.ASCII).ReadToEnd();
        positionLastWrite = output.Position;
        UpdateTextBox(result);
        Thread.Sleep(1000);
    }
    shell.Stop();
    e.Cancel = true;
}
The UpdateTextBox() function is a thread-safe way of updating the RichTextBox used to display the tail from a different thread. The positionLastWrite bookkeeping is an attempt to make sure I don't lose any data during the Thread.Sleep(1000).
Now I'm not sure about two things: first, I have the feeling I might be missing out on some data each time with the whole changing-MemoryStream-position thing (due to my lack of experience with MemoryStreams); second, the whole sleep-for-a-second-then-update-again approach seems pretty archaic and inefficient. Any thoughts?
Mh, I just realized that you are not the creator of the SSH library (although it's on CodePlex, so you could submit patches). Anyway: you might want to wrap your loop in a try {} finally {} block and call shell.Stop() in the finally block to make sure it is always cleaned up, as in the sketch below.
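A minimal sketch of that cleanup, reusing the shell, output-reading loop, and cancellation flag from the question:
var shell = ssh.CreateShell(Encoding.ASCII, command, output, output);
shell.Start();
try
{
    while (!TestBackgroundWorker.CancellationPending)
    {
        // ... read from output and call UpdateTextBox(), as in the question ...
        Thread.Sleep(1000);
    }
}
finally
{
    shell.Stop(); // runs even if the read loop throws, so the remote tail -f is always torn down
}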
Depending on the available interfaces, polling might be the only way to go, and it is not inherently bad. Whether or not you lose data depends on what the shell object does for buffering: does it buffer all output in memory, or does it throw away some output after a certain time?
My original points still stand:
One thing which comes to mind is that it looks like the shell object is buffering the whole output in memory the whole time, which poses a potential resource problem (out of memory). One option for changing the interface is to use something like a blocking queue in the shell object. The shell then enqueues the output from the remote host, and in your client you can just sit there and dequeue, which will block if there is nothing to read.
Also: I would consider making the shell object (whatever type CreateShell returns) IDisposable. From your description it sounds like shell.Stop() is required to clean up, which won't happen if an exception is thrown in the while loop.
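A hedged sketch of that blocking-queue idea using BlockingCollection<string> (the BCL's blocking queue; the producer side inside the shell is assumed):
using System.Collections.Concurrent;

var outputQueue = new BlockingCollection<string>();

// Producer (hypothetically, inside the shell's reader thread):
//     outputQueue.Add(chunkOfRemoteOutput);
// and outputQueue.CompleteAdding() when the connection closes, which ends the loop below.

// Consumer (replaces the Sleep(1000) polling loop); Take() blocks until a
// chunk arrives, and GetConsumingEnumerable() wraps that in a foreach:
foreach (string chunk in outputQueue.GetConsumingEnumerable())
{
    UpdateTextBox(chunk); // the question's thread-safe UI update
}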

Monitoring a remote process

I have a method that stops a service (or services), but I also need to delete the logs. Usually this is not a problem, but although the service appears stopped, the process takes some additional time to close properly. Since the process is still running, I cannot delete the logs, so I need to find a way to monitor the .exe to know when it's safe to delete them.
So far my best option is a do-while loop; unfortunately, the first iteration of the delete statement throws an exception and stops the program.
do
{
    // delete logs
}
while (System.Diagnostics.Process.GetProcessesByName(processName, machineName).Length > 0);
I'm sure there is a simple solution, but my lack of experience is the real problem.
This is probably not the best answer either, but you could invert the loop so the condition is evaluated before the contents are executed:
while (System.Diagnostics.Process.GetProcessesByName(processName, machineName).Length > 0)
{
    // Process still running: wait before checking again.
    System.Threading.Thread.Sleep(500);
}
// delete log files
This evaluates the condition of the loop before executing the contents, so according to your statements the logs will not be deleted until the process has exited.
A hackish way around this is to perform a loop and break out manually once the conditions are met:
bool CloseProcessOperation = true; // control variable in case you want to abort the loop
while (CloseProcessOperation)
{
    if (System.Diagnostics.Process.GetProcessesByName(processName, machineName).Length > 0)
    {
        // Process still running: skip this pass and check again.
        System.Threading.Thread.Sleep(500);
        continue;
    }
    // break if no logs exist
    // break for some other condition
    // etc
    // delete logs
    break;
}
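A hedged variant with a timeout, so a hung process can't spin the wait forever (processName and machineName as in the question):
using System;
using System.Diagnostics;
using System.Threading;

static bool WaitForProcessExit(string processName, string machineName, TimeSpan timeout)
{
    DateTime deadline = DateTime.UtcNow + timeout;
    while (Process.GetProcessesByName(processName, machineName).Length > 0)
    {
        if (DateTime.UtcNow > deadline)
            return false;  // still running; let the caller decide what to do
        Thread.Sleep(500); // re-check twice a second
    }
    return true;           // process gone; safe to delete the logs
}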
