I have a method that stops a service (or services), but I also need to delete its logs. Usually this is not a problem, but the process can take a little time to close. Although the service appears stopped, the process takes additional time to exit properly. While the process is still running I cannot delete the logs, so I need a way to monitor the .exe to know when it's safe to delete them.
So far my best option is a do-while loop; unfortunately, the first iteration of the delete statement throws an exception and stops the program.
do
{
// delete logs
}
while (System.Diagnostics.Process.GetProcessesByName(processName, machineName).Length > 0);
I'm sure there is a simple solution, but my lack of experience is the real problem.
This is probably not the best answer either, but you could invert the loop to:
while (System.Diagnostics.Process.GetProcessesByName(processName, machineName).Length > 0)
{
    // the process is still shutting down; wait briefly before checking again
    System.Threading.Thread.Sleep(500);
}
// delete log files.
I would suppose this evaluates the condition of the loop before executing the contents, so according to your statements, the delete code will not run until the process has exited.
A hackish way around this is to perform a loop and break out manually once your conditions are met:
bool CloseProcessOperation = true; // control variable in case you want to abort the loop
while (CloseProcessOperation)
{
    if (System.Diagnostics.Process.GetProcessesByName(processName, machineName).Length == 0)
    {
        // the process has fully exited, so it is now safe to delete the logs
        // delete logs
        break;
    }
    // break if no logs exist
    // break for some other condition
    // etc
    System.Threading.Thread.Sleep(500); // give the process time to finish closing
}
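If the process runs on the local machine, a cleaner option than polling is to block on Process.WaitForExit, which returns once the process has fully shut down and released its file handles. A minimal sketch (note this drops the machineName argument; Process objects for remote machines do not support WaitForExit):
foreach (var proc in System.Diagnostics.Process.GetProcessesByName(processName))
{
    proc.WaitForExit(); // blocks until the OS reports the process has exited
}
// delete logs here: the .exe has released its file handles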
I've got a routine called GetEmployeeList that loads when my Windows Application starts.
This routine pulls in basic employee information from our Active Directory server and retains this in a list called m_adEmpList.
We have a few Windows accounts set up as Public Profiles that most of our employees on our manufacturing floor use. This m_adEmpList gives our employees the ability to log in to select features using those Public Profiles.
Once all of the Active Directory data is loaded, I attempt to "auto logon" that employee based on System.Environment.UserName if that person is logged in under their private profile. (Employees love this, by the way.)
If I do not thread GetEmployeeList, the Windows Form will appear unresponsive until the routine is complete.
The problem with GetEmployeeList is that we have had times when the Active Directory server was down, the network was down, or a particular computer was not able to connect over our network.
To get around these issues, I have included a ManualResetEvent m_mre with the THREADSEARCH_TIMELIMIT timeout so that the process does not go off forever. I cannot login someone using their Private Profile with System.Environment.UserName until I have the list of employees.
I realize I am not showing ALL of the code, but hopefully it is not necessary.
public static ADUserList GetEmployeeList()
{
if ((m_adEmpList == null) ||
(((m_adEmpList.Count < 10) || !m_gotData) &&
((m_thread == null) || !m_thread.IsAlive))
)
{
m_adEmpList = new ADUserList();
m_thread = new Thread(new ThreadStart(fillThread));
m_mre = new ManualResetEvent(false);
m_thread.IsBackground = true;
m_thread.Name = FILLTHREADNAME;
try {
m_thread.Start();
m_gotData = m_mre.WaitOne(THREADSEARCH_TIMELIMIT * 1000);
} catch (Exception err) {
Global.LogError(_CODEFILE + "GetEmployeeList", err);
} finally {
if ((m_thread != null) && (m_thread.IsAlive)) {
// m_thread.Abort();
m_thread = null;
}
}
}
return m_adEmpList;
}
I would like to just put a basic lock on something like m_adEmpList, but I'm not sure it is a good idea to lock the very object that I need to populate, and the actual population is going to happen on another thread in the routine fillThread.
If the ManualResetEvent's WaitOne times out before the data is collected, there is probably a network issue, and m_adEmpList will not have many records (if any). So I would need to try to pull this information again the next time.
If anyone understands what I'm trying to explain, I'd like to see a better way of doing this.
It just seems too forced right now; I keep thinking there is a better way to do it.
I think you're going about the multithreading part the wrong way: threads should cooperate, not compete for resources, and competition is exactly what's happening here. Another problem is that your timeout is at the same time too long (it annoys users) and too short (the AD server may be slow but still there and serving). Your goal should be to let the thread run in the background and have it update the list when it finishes. In the meantime, present some fallback to the user along with a notification that the user list is still being populated.
A few more notes on your code above:
You have a variable m_thread that is only used locally. Further, your code contains a redundant check whether that variable is null.
If you create a user list with defaults/fallbacks first and then update it through a function (making sure to check the InvokeRequired flag of the displaying control!), you won't need a lock. The thread does not touch the list stored in the member variable; it fills a separate list to which it has exclusive access. The update function then replaces (!) the member list, so the new list is for exclusive use by the UI. A sketch of this pattern follows below.
Lastly, if the AD server is really not there, try to forward the error from the background thread to the UI in some way, so that the user knows what's broken.
If you want, you can add an event to signal the thread to stop, but in most cases that won't even be necessary.
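Here is a minimal sketch of that replace-the-list pattern, assuming a WinForms form; FillFromActiveDirectory and ReportLookupError are hypothetical stand-ins for your fillThread logic and your error display:
private void StartEmployeeLookup()
{
    var worker = new Thread(() =>
    {
        // The thread fills a list it has exclusive access to; no lock needed.
        var localList = new ADUserList();
        try
        {
            FillFromActiveDirectory(localList); // may be slow or fail entirely
            UpdateEmployeeList(localList);
        }
        catch (Exception err)
        {
            ReportLookupError(err); // forward the failure to the UI
        }
    });
    worker.IsBackground = true;
    worker.Start();
}

private void UpdateEmployeeList(ADUserList newList)
{
    if (InvokeRequired)
    {
        // Marshal the call onto the UI thread.
        Invoke(new Action<ADUserList>(UpdateEmployeeList), newList);
        return;
    }
    m_adEmpList = newList; // replace the list; the UI now has exclusive use of it
}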
Something tells me this might be a stupid question and I have in fact approached my problem from the wrong direction, but here goes.
I have some code that loops through all the documents in a folder. The alphabetical order of these documents in each folder is important, and this order is also reflected in the order the documents are printed. Here is a simplified version:
var wordApp = new Microsoft.Office.Interop.Word.Application();
foreach (var file in Directory.EnumerateFiles(folder))
{
fileCounter++;
// Print file, referencing a previously instantiated word application object
    wordApp.Documents.Open(...);
    wordApp.PrintOut(...);
    wordApp.ActiveDocument.Close(...);
}
It seems (and I could be wrong) that the PrintOut call is asynchronous, and the application sometimes gets into a situation where the documents are printed out of order. This is supported by the fact that if I step through, or place a long enough Sleep() call, the order of all the files is correct.
How should I prevent the next print task from starting before the previous one has finished?
I initially thought that I could use a lock(someObject){} until I remembered that they are only useful for preventing multiple threads accessing the same code block. This is all on the same thread.
There are some events I can wire into on the Microsoft.Office.Interop.Word.Application object: DocumentOpen, DocumentBeforeClose and DocumentBeforePrint
I have just thought that this might actually be a problem with the print queue not being able to accurately distinguish lots of documents that are added within the same second. This can't be the problem, can it?
As a side note, this loop is within the code called from the DoWork event of a BackgroundWorker object. I'm using this to prevent UI blocking and to feedback the progress of the process.
Your event-handling approach seems like a good one. Instead of using a loop, you could add a handler to the DocumentBeforeClose event, in which you would get the next file to print, send it to Word, and continue. Something like this:
List<string> m_files = Directory.EnumerateFiles(folder).ToList();
wordApp.DocumentBeforeClose += ProcessNextDocument;
...
void ProcessNextDocument(...)
{
string file = null;
lock(m_files)
{
if (m_files.Count > 0)
{
file = m_files[0]; // take from the front to preserve the alphabetical print order
m_files.RemoveAt(0);
}
else
{
// Done!
}
}
if (file != null)
{
PrintDocument(file);
}
}
void PrintDocument(string file)
{
    wordApp.Documents.Open(...);
    wordApp.ActiveDocument.PrintOut(...);
    wordApp.ActiveDocument.Close(...);
}
The first parameter of Application.PrintOut specifies whether the printing should take place in the background or not. By setting it to false it will work synchronously.
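For example, a minimal sketch (this uses the named COM arguments available since C# 4; earlier versions need explicit ref object parameters):
var doc = wordApp.Documents.Open(file);
doc.PrintOut(Background: false); // blocks until Word has finished spooling this document
doc.Close(SaveChanges: false);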
This is the case:
A business has several sites, and each site has several cameras that take a lot of pictures daily (about a thousand each). These pictures are then stored in folders (one folder per day) on one computer.
The business owns an image-analyzing program that takes an "in.xml" file as input and returns an "out.xml" file for the analysis of one picture. This program must be used and cannot be changed.
I wrote a UI for that program that runs over that folder and processes each camera from each site, sending picture after picture to the program, which runs as a separate process.
Because this processing is asynchronous, I use events at the start and end of every picture's handling, and the same for sites and the cameras on them.
The program runs fine at that business most of the time, but sometimes it gets stuck after handling a picture, as if it missed the end_pic_analizing event and is still waiting for it to be raised.
I tried adding a timer for every picture that moves on to the next picture in such cases, but the program still got stuck, acting as if it missed the timer event as well.
This bug happens too often, even when running almost as a single process on that computer, and it has even gotten stuck at the start of a run (it once happened at the third picture). The bug doesn't depend on specific pictures either: it can get stuck at different pictures, or not at all, when running repeatedly on the same folder.
Code samples:
On the Image class:
static public void image_timer_Elapsed(object sender, ElapsedEventArgs e)
{
    // Stop the timer and calculate how much time is left before the next check for the file.
    _timer.Stop();
    _timerCount += (int)_timer.Interval;
    int timerSpan = (int)(Daily.duration * 1000) - _timerCount;
    // Daily.duration is the max duration to seek the "out.xml" file before quitting.
    if (timerSpan < _timer.Interval) _timer.Interval = timerSpan + 1;

    // Check for the file and analyze it.
    String fileName = Daily.OutPath + @"\out.xml";
    ResultHandler.ResultOut output = ResultHandler.GetResult(ref _currentImage);

    // If no file was found and there is time left, wait and check again.
    if (output == ResultHandler.ResultOut.FileNotFound && timerSpan > 0)
    {
        _timer.Start();
    }
    else // file found, or time ran out
    {
        if (MyImage.ImageCheckCompleted != null)
            MyImage.ImageCheckCompleted(_currentImage); // raise the event
        // the program probably gets stuck here
    }
}
On the Camera class:
static public void Camera_ImageCheckCompleted(MyImage image)
{
// if this is not the last image ("parent" is the Camera)
if (image.Id + 1 < image.parent.imageList.Count)
{
image.parent.imageList[image.Id + 1].RunCheck(); //check next image
}
else
{
if (Camera.CameraCheckCompleted != null)
Camera.CameraCheckCompleted(image.parent); // raise the event
}
}
You don't appear to have any error handling or logging code, so if an exception is thrown your program will halt and you might not have a record of what happened. This is especially true since your program is processing the images asynchronously, so the main thread may have already exited by the time an error occurs in one of your processing threads.
So first and foremost, I would suggest throwing a try/catch block around all the code that gets run in the separate thread. If an exception gets thrown there, you will want to catch that and either fire ImageCheckCompleted with some special event arguments to indicate there was an error or fire some other event that you create specifically for when errors occur. That way your program can continue to process even if an exception is thrown inside your code.
try
{
    // ... do your processing
    // This will happen if everything worked correctly.
    InvokeImageCheckCompleted(new ImageCheckCompletedEventArgs());
}
catch (Exception e)
{
    // This will happen if an exception was thrown.
    InvokeImageCheckCompleted(new ImageCheckCompletedEventArgs(e));
}
For the sake of simplicity, I'd suggest using a for loop to process each image. You can use a ManualResetEvent to block execution until the ImageCheckCompleted event fires for each check. This should make it easier to log the execution of each loop, catch errors that may be preventing the ImageCheckCompleted event from firing, and even possibly move on to process the next image if one of them appears to be taking too long.
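For instance, a rough sketch (it assumes the ImageCheckCompleted delegate takes the image as its only parameter, and the 60-second timeout is an arbitrary choice):
var checkDone = new ManualResetEvent(false);
MyImage.ImageCheckCompleted += img => checkDone.Set(); // fires on success or error

foreach (MyImage image in camera.imageList)
{
    checkDone.Reset();
    image.RunCheck(); // starts the asynchronous check
    if (!checkDone.WaitOne(TimeSpan.FromSeconds(60)))
    {
        // Timed out: log it here, then move on to the next image.
    }
}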
Finally, if you can make your image processing thread-safe, you might consider using Parallel.ForEach to make it so that multiple images can be processed at the same time. This will probably significantly improve the overall speed of processing the batch.
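A sketch of the parallel version, assuming a hypothetical ProcessImage helper that runs one image's analysis synchronously and is thread-safe (Parallel.ForEach requires .NET 4's System.Threading.Tasks):
Parallel.ForEach(camera.imageList, image =>
{
    ProcessImage(image); // must not share mutable state across iterations
});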
I'm performing a "safe" copy of a directory over another directory as follows:
Given the source C:\Source and target C:\Target
Copy C:\Source to C:\Target-incoming
Move C:\Target (if it exists) to C:\Target-outgoing
Move C:\Target-incoming to C:\Target
Delete C:\Target-outgoing (if it exists)
If any of the first three steps fail, I'll attempt to put things back as they were to prevent data loss.
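In code, the sequence is roughly this (CopyDirectory stands in for my recursive copy logic, since System.IO has no built-in directory copy):
CopyDirectory(@"C:\Source", @"C:\Target-incoming");
if (Directory.Exists(@"C:\Target"))
    Directory.Move(@"C:\Target", @"C:\Target-outgoing");
Directory.Move(@"C:\Target-incoming", @"C:\Target"); // this is the step that fails
if (Directory.Exists(@"C:\Target-outgoing"))
    Directory.Delete(@"C:\Target-outgoing", true);   // true = recursive delete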
However, the move of C:\Target-incoming to C:\Target fails with "Access to the path C:\Target-incoming is denied" most of the time.
At the moment, inserting Thread.Sleep(100) just before the move operation fixes the problem for me. However, waiting .1 of a second seems ridiculous to me. Thread.Sleep(10) isn't enough to fix it. I also have the sinking feeling that the value I have to wait depends on the speed of disk IO.
So, my questions:
Can I prevent this from happening?
If not, is there a way of finding out when the lock on the directory is released after copying it?
Edit: For clarity, I'm doing all these operations in one method on one thread, and I'm just using Thread.Sleep() to pause code flow for a moment. The moves and copies are done with the standard .NET Directory.Move(), Directory.CreateDirectory() and File.CopyTo() methods. It would appear that these methods return before the locks on the respective files are released, making it necessary to wait before continuing.
What is probably happening is that your thread tries to "Move C:\Target-incoming to C:\Target" WHILE the "Move C:\Target to C:\Target-outgoing" has NOT finished YET.
This theory is supported by the fact that your process succeeds after a short Thread.Sleep.
Try to chain your processes: divide each step into a specific method, and call the methods one after the other (synchronizing the start of each method with the end of the previous one).
There are various ways to do that (among others syncing/locking/chaining different threads per process/step)
You could check Thread Synchronization in .NET
But of course, this is not the only possible cause for your problem.
After a bunch of testing, it seems like the very act of trying to move a locked folder gets the OS to hurry up and release the lock, even if the first attempt fails.
I wrote this extension method to DirectoryInfo:
public static void TryToMoveTo(this DirectoryInfo o, string targetPath) {
int attemptsRemaining = 5;
while (true) {
try {
o.MoveTo(targetPath);
break;
} catch (Exception) {
if (attemptsRemaining == 0) {
throw;
} else {
attemptsRemaining--;
System.Threading.Thread.Sleep(10);
}
}
}
}
While debugging the original problem, I settled on waiting for 100ms as anything less seemed to cause exceptions (I tried 10, 25, 50, 75 and 100ms). However, in the method above I wait 10ms before retrying, and I never, ever got more than one exception thrown in each of my hundreds of test runs.
You can always try waiting in a loop, up to a maximum number of tries. You can check whether the directory is locked by calling CreateFile and checking its return value. Be sure to read through the "flags" section of the docs, because you need to pass in a special flag to open a directory.
Someone else mentioned in a comment that you may want to try Transactional NTFS. If you can, you might want to try that.
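A sketch of that CreateFile check via P/Invoke (the constants are standard Win32 values; FILE_FLAG_BACKUP_SEMANTICS is the special flag needed to open a directory handle):
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class DirectoryProbe
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    const uint GENERIC_READ = 0x80000000;
    const uint FILE_SHARE_ALL = 0x00000001 | 0x00000002 | 0x00000004; // read/write/delete
    const uint OPEN_EXISTING = 3;
    const uint FILE_FLAG_BACKUP_SEMANTICS = 0x02000000; // required to open a directory

    public static bool IsDirectoryAccessible(string path)
    {
        using (SafeFileHandle handle = CreateFile(path, GENERIC_READ, FILE_SHARE_ALL,
            IntPtr.Zero, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, IntPtr.Zero))
        {
            return !handle.IsInvalid; // invalid handle => locked, missing, or no access
        }
    }
}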
Check whether the source and target directories exist before copying or moving, using IO.Directory.Exists.
The access denied error is caused by either the source or the target not being found.
Or not!
I have a fairly simple application timer program. The program will launch a user selected (from file dialog) executable and then terminate the process after the user specified number of minutes. During testing I found that a crash occurs when I call the Process.Kill() method and the application is minimized to the system tray.
The executable in question is Fraps.exe, which I use frequently and is the reason I wrote the app timer in the first place. I always minimize Fraps to the tray, and this is when the crash occurs.
My use of Kill() is straightforward enough...
while (true)
{
//keep checking if timer expired or app closed externally (ie. by user)
if (dtEndTime <= DateTime.Now || p.HasExited)
{
if (!p.HasExited)
p.Kill();
break;
}
System.Threading.Thread.Sleep(500);
}
In searching for alternative methods to close an external application programmatically, I found only Close() and Kill() (CloseMainWindow() is not helpful to me at all). I tried using Close(), which works provided the application is not minimized to the tray. If the app is minimized, Close() doesn't cause a crash, but the app remains open and active.
One thing I noticed in a few posts regarding closing external applications was the comment "Personally I'd try to find a more graceful way of shutting it down though." made in THIS thread here on Stack Overflow (no offense to John). The thing is, I ran across comments like that on a few sites, with no attempt at describing what a graceful or elegant (or crash-free!) method might be.
Any suggestions?
The crash is not consistent, and I have few details to offer. I am unable to debug using VS2008, as I get a message along the lines of "cannot debug a crashing application", and depending on what other programs I have running at the time, some of them also crash when Kill() is called (also programs running only in the tray), so I'm thinking this is some sort of problem specifically related to the system tray.
Is it possible that your code is being executed in a way such that the Kill() statement could sometimes be called twice? The docs for Process.Kill() say that Kill executes asynchronously: after you call Kill(), execution continues on your main thread. Further, the docs state that Kill will throw a Win32Exception if you call it on an app that is already in the process of closing. The docs suggest using WaitForExit() to wait for the process to exit. What happens if you put a call to WaitForExit() immediately after the call to Kill()? The loop looks OK (with the break statement), but is it possible that you are entering that loop twice?
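In other words, something like this (a sketch of the above dropped into your loop):
if (!p.HasExited)
{
    p.Kill();
    p.WaitForExit(); // block until the kill has actually completed
}
break;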
If that's not the problem, maybe there is another way to catch that exception:
Try hooking the AppDomain.CurrentDomain.UnhandledException event
(currentDomain is a static member)
The problem is that Kill runs asynchronously, so if it throws an exception, it occurs on a different thread. That's why your exception handler doesn't catch it. Further, I think an unhandled async exception (which is what I believe you have) causes an immediate unload of your application (which is what is happening).
Edit: Example code for hooking the UnhandledExceptionEvent
Here is a simple console application that demonstrates the use of AppDomain.UnhandledException:
using System;
public class MyClass
{
public static void Main()
{
System.AppDomain.CurrentDomain.UnhandledException += MyExceptionHandler;
System.Threading.ThreadPool.QueueUserWorkItem(DoWork);
Console.ReadLine();
}
private static void DoWork(object state)
{
throw new ApplicationException("Test");
}
private static void MyExceptionHandler(object sender, System.UnhandledExceptionEventArgs e)
{
// get the message
System.Exception exception = e.ExceptionObject as System.Exception;
Console.WriteLine("Unhandled Exception Detected");
if(exception != null)
Console.WriteLine("Message: {0}", exception.Message);
// for this console app, hold the window open until I press enter
Console.ReadLine();
}
}
My first thought is to put a try/catch block around the Kill() call and log the exception you get, if there is one. It might give you a clue what's wrong. Something like:
try
{
if(!p.HasExited)
{
p.Kill();
}
break;
}
catch(Exception ex)
{
System.Diagnostics.Trace.WriteLine(String.Format("Could not kill process {0}, exception {1}", p.ToString(), ex.ToString()));
}
I don't think I should claim this to be "THE ANSWER", but it's a decent workaround. Adding the following two lines of code...
p.WaitForInputIdle(10000);
am.hWnd = p.MainWindowHandle;
...stopped the crashing issue. These lines were placed immediately after the Process.Start() statement; both lines are required, and in using them I opened the door to a few other questions that I will be investigating over the next few days. The first line is just an up-to-10-second wait for the started process to go "idle" (i.e., finish starting). am.hWnd is a property of type IntPtr in my AppManagement class, and this is the only usage of both sides of the assignment. For lack of a better explanation, these two lines are analogous to a debouncing method.
I modified the while loop only slightly to allow for a call to CloseMainWindow() which seems to be the better route to take - though if it fails I then Kill() the app:
while (true)
{
//keep checking if timer expired or app closed externally (ie. by user)
if (dtEndTime <= DateTime.Now || p.HasExited) {
try {
if (!p.HasExited) // if the app hasn't already exitted...
{
if (!p.CloseMainWindow()) // did message get sent?
{
if (!p.HasExited) //has app closed yet?
{
p.Kill(); // force app to exit
p.WaitForExit(2000); // a few moments for app to shut down
}
}
p.Close(); // free resources
}
}
catch { /* blah blah */ }
break;
}
System.Threading.Thread.Sleep(500);
}
My initial intention for getting the MainWindowHandle was to maximize/restore an app if it was minimized, and I might implement that in the near future. I decided to see whether other programs that run like Fraps (i.e., they have a UI but mostly run in the system tray, like messenger services such as Yahoo et al.) behave the same way. I tested with XFire, and nothing I could do would return a value for the MainWindowHandle. Anyway, this is a separate issue, but one I found interesting.
P.S. A bit of credit to JMarsch, as it was his suggestion regarding Win32Exception that actually led me to finding this workaround, as unlikely as that seems.