I have an external executable that I need to run, and I need to read both its output and its errors.
prc = new Process();
prc.StartInfo.UseShellExecute = false;
prc.StartInfo.FileName = fileName;
prc.StartInfo.Arguments = arguments;
prc.StartInfo.LoadUserProfile = true;
prc.StartInfo.RedirectStandardOutput = true;
prc.StartInfo.RedirectStandardError = true;
prc.OutputDataReceived += (sendingProcess, outLine) => { if (outLine.Data != null) outputText.AppendLine(outLine.Data); };
prc.ErrorDataReceived += (sendingProcess, errorLine) => { if (errorLine.Data != null) errorText.AppendLine(errorLine.Data); };
prc.Start();
prc.BeginOutputReadLine();
prc.BeginErrorReadLine();
The thread where this happens may get aborted at any time, and there is nothing I can change about that, so I can't use prc.WaitForExit(): it cannot be aborted until the process itself is killed.
That means that every time the thread gets aborted, all execution just hangs.
So I replaced it with this in separate method:
while (!Thread.CurrentThread.ThreadState.HasFlag(ThreadState.AbortRequested) && !Thread.CurrentThread.ThreadState.HasFlag(ThreadState.Aborted))
{
if (!process.HasExited) continue;
return process.ExitCode;
}
The only problem with this is the outputs. Strangely, strings are sometimes missing from the end of both the output and the error stream, and adding a Thread.Sleep solves the problem, which suggests the asynchronous reading cannot keep up for some reason.
This seems like a trivial problem, yet I can't find a reliable solution. Thread.Sleep is hardly good practice, and execution time is very important. What is the best way to reliably get all errors and output?
The problem with the output is that the write ends of the anonymous pipes used to carry it to your process may not be flushed until the process exits. It doesn't always happen, but you can't rely on it not happening. Instead, why not run the process work, including WaitForExit() (or maybe just WaitForExit()), in a separate thread? Your original thread can then wait for this worker to finish in an abortable manner (e.g. Thread.Join()), and you'll have your complete output. Or, if you aren't worried about ThreadAbortException but need to be able to cancel the WaitForExit() call, you can wait on both handles at once with WaitHandle.WaitAny, passing a wait handle wrapped around the process handle together with cancellationToken.WaitHandle.
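A minimal sketch of this worker-thread pattern (fileName and arguments as in the question; the rest of the names are illustrative, not the answerer's exact code):

```csharp
using System.Diagnostics;
using System.Text;
using System.Threading;

int exitCode = -1;
var outputText = new StringBuilder();
var errorText = new StringBuilder();

// All blocking process work happens on this worker thread.
var worker = new Thread(() =>
{
    using (var prc = new Process())
    {
        prc.StartInfo.UseShellExecute = false;
        prc.StartInfo.FileName = fileName;        // from the question
        prc.StartInfo.Arguments = arguments;      // from the question
        prc.StartInfo.RedirectStandardOutput = true;
        prc.StartInfo.RedirectStandardError = true;
        prc.OutputDataReceived += (s, e) => { if (e.Data != null) outputText.AppendLine(e.Data); };
        prc.ErrorDataReceived += (s, e) => { if (e.Data != null) errorText.AppendLine(e.Data); };
        prc.Start();
        prc.BeginOutputReadLine();
        prc.BeginErrorReadLine();
        prc.WaitForExit();    // untimed: also waits for both streams to drain
        exitCode = prc.ExitCode;
    }
});
worker.IsBackground = true;
worker.Start();

// The original (abortable) thread waits here. A ThreadAbortException
// interrupts the Join, not the WaitForExit running on the worker.
worker.Join();
```

Because the worker owns the process, an abort of the original thread no longer hangs anything: the worker keeps draining output until the process really exits.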
Related
I recently found that the cause of an obscure bug was Process.Start. The method pumped the event queue while the process was starting, which in turn ran code that depended on the process having started.
The bug seemed to be timing related since it did not happen consistently, and was quite difficult to reproduce. Since we have more places in the code where we use Process.Start, and we might very well use it more in the future, I thought it would be good to get to the bottom of how to trigger it more consistently.
We created a standalone WinForms project with a single button; here is the click event handler:
private void button1_Click(object sender, EventArgs e) {
this.BeginInvoke(new MethodInvoker(() => { MessageBox.Show("Hello World"); }));
var process = new System.Diagnostics.Process() { EnableRaisingEvents = true };
process.Exited += CurrentEditorProcess_Exited;
// Does pump
process.StartInfo = new System.Diagnostics.ProcessStartInfo(@"C:\Users\micnil\Pictures\test.png");
// Does not pump
//process.StartInfo = new System.Diagnostics.ProcessStartInfo("Notepad.exe");
process.Start();
Thread.Sleep(3000);
Console.WriteLine("Done");
}
Running this code will:
show the message box,
finish opening the test image,
and, after the message box is dismissed, sleep 3 seconds and then log "Done".
If I replace the ProcessStartInfo argument with "Notepad.exe" instead, the code will:
start Notepad,
log "Done",
and then show the message box.
This can be reproduced consistently. The problem is, the program that we are starting is more similar to notepad. It is our own custom text editor built in WPF. So why does starting our executable sometimes trigger a pump, but I cannot make Notepad do the same?
We found a few others that have been hit by the same problem:
Which blocking operations cause an STA thread to pump COM messages?
No answer to why
Process.Start causes processing of Windows Messages:
Quoting Dave Andersson:
ShellExecuteEx may pump window messages when opening a document in its associated application, specifically in the case where a DDE conversation is specified in the file association.
Additionally, ShellExecuteEx may create a separate thread to do the heavy lifting. Depending on the flags specified in the SHELLEXECUTEINFO structure, the ShellExecuteEx call may not return until the separate thread completes its work. In this case, ShellExecuteEx will pump window messages to prevent windows owned by the calling thread from appearing hung.
You can either ensure that the variables in question are initialized prior to calling Process.Start, move the Process.Start call to a separate thread, or call ShellExecuteEx directly with the SEE_MASK_ASYNCOK flag set.
"May create a thread" - How do I know when?, "You can either ensure that the variables in question are initialized prior to calling Process.Start" - How?
We solved the problem by setting process.StartInfo.UseShellExecute = false before starting the process. But the question remains: does anyone know how to write a minimal program that triggers a pump when starting an executable like Notepad?
Update:
Done some more investigation and have two new findings:
private void button1_Click(object sender, EventArgs e) {
this.BeginInvoke(new MethodInvoker(() => {
Log("BeginInvoke");
}));
string path = Environment.GetEnvironmentVariable("PROCESS_START_EXE");
var process = new System.Diagnostics.Process() { EnableRaisingEvents = true };
process.Exited += CurrentEditorProcess_Exited;
process.StartInfo = new System.Diagnostics.ProcessStartInfo(path, "params");
process.Start();
Thread.Sleep(3000);
Log("Finished Sleeping");
}
1:
When setting the environment variable PROCESS_START_EXE to "Notepad" the logging becomes:
BeginInvoke
Finished Sleeping
When setting the environment variable PROCESS_START_EXE to "Notepad.exe" (note the ".exe") the logging becomes:
Finished Sleeping
BeginInvoke
Which I find strange, but I don't think it is related to the problem we were having. We always specify the exact path and file name of the executable, including ".exe".
2:
The scenario in which I found the bug was that I launched the application through a Windows shortcut with a target similar to this:
C:\Windows\System32\cmd.exe /c "SET PROCESS_START_EXE=<path to custom editor> && START /D ^"<path to application directory>^" App.exe"
It first sets an environment variable with the path to the executable that is supposed to be started, and then launches the winforms application.
If I use this shortcut to launch the application, and use the environment variable to start the process, then the logging always becomes:
BeginInvoke
Finished Sleeping
This does seem like the problem we were having. Only that I couldn't reproduce it consistently every time. Why does it matter that I am setting the environment variable right before launching the application?
Note that if I use the same shortcut, but do not use the environment variable to start the process, the message loop will not pump.
This has been a bit of detective work with a very surprising outcome - at least to me.
We are running external commands using a Process object. We are capturing standard output (and standard error - I have removed this from the sample for clarity reasons) asynchronously. This has been answered several times here. The code looks like this and works fine:
var process = new Process
{
StartInfo =
{
UseShellExecute = false,
RedirectStandardOutput = true,
FileName = @"...\TestApp.exe"
}
};
process.OutputDataReceived += Process_OutputDataReceived;
process.Start();
process.BeginOutputReadLine();
process.WaitForExit();
process.OutputDataReceived -= Process_OutputDataReceived;
Now, what if I want to add a hard timeout to the WaitForExit()? Easy: there's an overloaded version of WaitForExit(int milliseconds). So a call like this should do:
process.WaitForExit(60000);
Now, this renders the output capture incomplete in some cases.
This is not a secret, at least not to Microsoft's developers. After several hours of research, I came to the conclusion that something must be wrong with WaitForExit's implementation. So I took a look into the code and I found this comment in Microsoft's implementation:
// If we have a hard timeout, we cannot wait for the streams
Here's my question - and I have done quite some effort of evaluation without success:
How can I have a hard timeout on WaitForExit() and at the same time make sure that the console output capture is complete?
Have you tried using the OnExited event in a derived class? See this. That might allow you to time out the wait yourself instead of passing the timeout into WaitForExit.
There's actually another (better) solution: add one AutoResetEvent per stream and set it when the data received by the corresponding event handler is null. Then check not only that the process has exited, but also that both events have been signaled (their WaitOne calls returned true).
This was answered here: ProcessStartInfo hanging on "WaitForExit"? Why?
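The pattern from the linked answer can be sketched like this (the file name is a placeholder from the question's sample; the event names are illustrative):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

const int timeout = 60000;

using (var process = new Process())
using (var outputDone = new AutoResetEvent(false))
using (var errorDone = new AutoResetEvent(false))
{
    process.StartInfo.FileName = @"...\TestApp.exe"; // placeholder path
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.RedirectStandardError = true;

    process.OutputDataReceived += (s, e) =>
    {
        if (e.Data == null) outputDone.Set();      // null marks end of stream
        else Console.WriteLine(e.Data);
    };
    process.ErrorDataReceived += (s, e) =>
    {
        if (e.Data == null) errorDone.Set();
        else Console.Error.WriteLine(e.Data);
    };

    process.Start();
    process.BeginOutputReadLine();
    process.BeginErrorReadLine();

    if (process.WaitForExit(timeout) &&
        outputDone.WaitOne(timeout) &&
        errorDone.WaitOne(timeout))
    {
        // Process exited AND both streams are fully drained:
        // the captured output is complete.
    }
    else
    {
        // Timed out: the capture may be incomplete.
    }
}
```

The key point is that the hard timeout now covers both the process exit and the end-of-stream signals, so a successful return guarantees complete output.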
@Hans Passant, your comment pointed me in the right direction. I do not actually know how long to Sleep(), but I can add a simple watchdog for this. Since the process has already finished by the time I have to start waiting for the missing captures, I can assume that the OutputDataReceived event keeps firing until the stream is empty. So all I did was add a Stopwatch and restart it at the beginning of the OutputDataReceived event handler. In the main code, I replaced the call to WaitForExit(60000) with the following:
if (process.WaitForExit(60000))
{
while (outputWatch.ElapsedMilliseconds < 20) Thread.Sleep(20);
}
Okay, there's a Thread.Sleep() inside this code, as well. But it definitely counts to the 0.05%... :)
Consider this example C# code (irrelevant pieces left out):
using System.Diagnostics;
var process = new Process();
var startInfo = process.StartInfo;
startInfo.UseShellExecute = false; // required for redirection
startInfo.RedirectStandardOutput = true;
startInfo.RedirectStandardError = true;
process.EnableRaisingEvents = true;
process.OutputDataReceived += OutputHandler;
process.ErrorDataReceived += ErrorHandler;
process.Exited += ExitHandler;
process.Start();
process.BeginOutputReadLine();
process.BeginErrorReadLine();
Now, I want to notify a listener that the process is finished after there is no more output (stdout/stderr) to read from it. How do I ensure in my ExitHandler method that all remaining stdout/stderr is processed by OutputHandler and ErrorHandler before determining that the process has truly finished?
There is an interlock when you explicitly use Process.WaitForExit(-1). It won't return until the asynchronous readers for stdout and stderr have indicated end-of-file status. Call it in your Exited event handler. You must use a timeout of -1 or this won't work. Or just WaitForExit(). Which is fine, you know it already exited.
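A sketch of what this looks like in the Exited handler (NotifyListenerProcessFinished is a hypothetical callback, not from the original):

```csharp
using System;
using System.Diagnostics;

void ExitHandler(object sender, EventArgs e)
{
    var process = (Process)sender;

    // The process has already exited, so this does not wait on the process
    // itself. With no timeout, however, it blocks until the asynchronous
    // stdout/stderr readers have both reported end-of-file, i.e. until
    // every pending OutputDataReceived/ErrorDataReceived call has fired.
    process.WaitForExit();

    // Safe to report completion now: all output has been delivered.
    NotifyListenerProcessFinished(process.ExitCode);  // hypothetical callback
}
```

This relies on the behavior described above: only the untimed overload waits for the readers to drain; WaitForExit(milliseconds) does not.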
The Exited event is called when the process aborts or terminates. There should therefore never be a situation whereby there is still data to read.
When the operating system shuts down a process, any process component that is waiting for an exit is notified. The component can then access the associated process information that is still resident in operating system memory (such as the ExitTime property) by using the handle that it has to the process.
Because the associated process has exited, the Handle property of the component no longer points to an existing process resource. Instead, it can be used only to access the operating system's information about the process resource. The system is aware of handles to exited processes that have not been released by Process components, so it keeps the ExitTime and Handle property information in memory until the Process component specifically frees the resources.
I need to run several instances of an external executable from my app. The average run time for this executable is about 3 minutes.
I want to redirect output from these processes, and update a progress bar in my GUI.
Of course I don't want to wait for them to return before I can continue using my app.
I think I should create a thread for every instance and update my progress bar when a thread finishes.
Is this the right approach?
Also, can you recommend a good resource or documentation to understand how this works? So far I've only found http://www.dotnetperls.com/threadpool.
edit: these processes are network-based, i.e. the run time may vary a lot depending on link latency/bandwidth.
Concerning the progress bar, I would like to update it every time a process finishes. Is there a handler for that? Later I will add more detailed updates, based on the processes' output, to advance the progress at each execution step.
edit 2 :
Thanks for your input. As I may have to run a lot of processes (up to 20), and I don't want to saturate the bandwidth, I'll run at most 5 in parallel. Every time a process finishes, I increment the progress counter (for my progress bar) and run another one until they're all completed, using:
Process p = new Process();
p.StartInfo.FileName = pathToApp;
p.EnableRaisingEvents = true;
p.Exited += OnCalibrationProcessExited;
p.Start();
private void OnCalibrationProcessExited(object sender, EventArgs e)
{
    // increment the progress counter and start the next queued process
    RunAnotherOne();
}
Is it correct or is there a more elegant way to achieve this ?
I don't want my app to be blocked during execution of course.
Is it better to use background workers for this ?
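The throttled launch described above might be sketched like this (pathsToApps and progressBar are assumed names, not from the original; the throttle logic itself is the point):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

const int MaxParallel = 5;
var pending = new Queue<string>(pathsToApps);  // pathsToApps: the ~20 executable paths
object sync = new object();
int finished = 0;

void StartNext()
{
    string path;
    lock (sync)
    {
        if (pending.Count == 0) return;
        path = pending.Dequeue();
    }

    var p = new Process();
    p.StartInfo.FileName = path;
    p.EnableRaisingEvents = true;
    p.Exited += (s, e) =>
    {
        int done;
        lock (sync) done = ++finished;

        // Exited fires on a thread-pool thread, so marshal the UI update.
        progressBar.BeginInvoke(new Action(() => progressBar.Value = done));

        StartNext();  // keep the pipeline full
    };
    p.Start();
}

// Prime the pump with the first five processes.
for (int i = 0; i < MaxParallel; i++) StartNext();
```

The lock around the queue and counter matters because Exited handlers for different processes can run concurrently.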
You should be using Process and ProcessStartInfo.
You'll need to set ProcessStartInfo.UseShellExecute to false, ErrorDialog to false, RedirectStandardOutput to true (and possibly RedirectStandardError too).
You'll also need to provide a delegate to the Process object to handle the output generated by the external process through OutputDataReceived (and possibly ErrorDataReceived as well).
There's also an Exited delegate you can set that will be called whenever the process exits.
Example:
ProcessStartInfo processInfo = new ProcessStartInfo("Write500Lines.exe");
processInfo.ErrorDialog = false;
processInfo.UseShellExecute = false;
processInfo.RedirectStandardOutput = true;
processInfo.RedirectStandardError = true;
Process proc = Process.Start(processInfo);
proc.ErrorDataReceived += (sender, errorLine) => { if (errorLine.Data != null) Trace.WriteLine(errorLine.Data); };
proc.OutputDataReceived += (sender, outputLine) => { if (outputLine.Data != null) Trace.WriteLine(outputLine.Data); };
proc.BeginErrorReadLine();
proc.BeginOutputReadLine();
proc.WaitForExit();
Just waiting for each thread to end before updating the progress bar results in nothing happening for a while... then a quick jump... three times. You might as well skip the progress bar.
The correct way to do it, IMHO, would be to calculate the total work done across all 3 processes:
totalwork = time1 + time2 + time3
Now, if you have multiple processors it will take more like max(time1, time2, time3), but that's OK; it's a representation of the work.
Have a shared variable for work-done. Each time a process does some more work, update the progress bar by calculating work-done += my-work-increment. The progress is just work-done / totalwork.
This will give good results regardless of whether the threads run sequentially or in parallel. Since you don't know how things are going to run (you might have a single processor cpu) this is the best approach.
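A sketch of that shared-counter idea (the time values are made-up estimates; Interlocked keeps the concurrent updates safe):

```csharp
using System.Threading;

// Estimated work units per process (assumed values for illustration).
long time1 = 180, time2 = 180, time3 = 180;
long totalWork = time1 + time2 + time3;
long workDone = 0;

// Called from any worker thread whenever it completes some units of work.
void ReportProgress(long increment)
{
    // Interlocked.Add makes the shared counter safe without a lock.
    long done = Interlocked.Add(ref workDone, increment);
    int percent = (int)(100 * done / totalWork);
    // marshal 'percent' to the UI thread to update the progress bar
}
```

Because the counter only ever grows, the displayed progress is monotonic no matter how the three processes interleave.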
Just create several instances of the Process class by calling the constructor, set the properties to redirect output stream, and then start them.
Your program won't wait for the called process to exit, as long as you don't call the WaitForExit method. No multi-threading is required.
Create a single thread.
In that thread (pseudocode):
Thread begins here
for each externalApp
Run the application with redirect output
Wait for exit
Update progress bar
end for
Thread ends here
See http://msdn.microsoft.com/en-us/library/ty0d8k56.aspx for wait for exit
Or... do you want to run the external apps in parallel?
Edit, based on the latest updates in the original post:
If you don't know the actual progress then don't use a regular progress bar. How about an infinite progress bar or a "working" icon?
An infinite progress bar could be a progress bar that fills up and then starts over from the beginning until everything is done. A working icon is like the Windows busy cursor (the ever-spinning circle).
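For reference, WinForms already has a built-in indeterminate mode that behaves like the "infinite" bar described above (progressBar1 is assumed to be a ProgressBar control on the form):

```csharp
// Marquee style animates continuously without needing a known total.
progressBar1.Style = ProgressBarStyle.Marquee;
progressBar1.MarqueeAnimationSpeed = 30;  // milliseconds per animation step

// ...and when all processes have finished, switch back to a normal bar:
progressBar1.Style = ProgressBarStyle.Blocks;
```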
How about creating an ObservableCollection of TaskProgressInfo,
where TaskProgressInfo is a custom class, to which you write your progress.
Bind a WPF listview to that collection, using a datatemplate (with target type = TaskProgressInfo) to show a progressbar for each item (task).
Create an array of BackgroundWorkers that launch the external app and monitor it.
Each background worker should update its TaskProgressInfo, thus updating the datasource of a progressbar.
Upon completion each BackgroundWorker should remove its TaskProgressInfo from the ObservableCollection, thus removing a progress bar from the UI.
Since BackgroundWorker raises its progress and completion events on the thread that created it (the UI thread), the changes to the ObservableCollection will be made by its creating thread (thread safe).
Behind the scenes, .NET uses a ThreadPool: some BackgroundWorkers will share threads.
Here is the C# code I'm using to launch a subprocess and monitor its output:
using (process = new Process()) {
process.StartInfo.FileName = executable;
process.StartInfo.Arguments = args;
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.RedirectStandardInput = true;
process.StartInfo.CreateNoWindow = true;
process.Start();
using (StreamReader sr = process.StandardOutput) {
string line = null;
while ((line = sr.ReadLine()) != null) {
processOutput(line);
}
}
if (process.ExitCode == 0) {
jobStatus.State = ActionState.CompletedNormally;
jobStatus.Progress = 100;
} else {
jobStatus.State = ActionState.CompletedAbnormally;
}
OnStatusUpdated(jobStatus);
}
I am launching multiple subprocesses in separate ThreadPool threads (but no more than four at a time, on a quad-core machine). This all works fine.
The problem I am having is that one of my subprocesses will exit, but the corresponding call to sr.ReadLine() will block until ANOTHER one of my subprocesses exits. I'm not sure what it returns, but this should NOT be happening unless there is something I am missing.
There's nothing about my subprocess that would cause them to be "linked" in any way - they don't communicate with each other. I can even look in Task Manager / Process Explorer when this is happening, and see that my subprocess has actually exited, but the call to ReadLine() on its standard output is still blocking!
I've been able to work around it by spinning the output monitoring code out into a new thread and doing a process.WaitForExit(), but this seems like very odd behavior. Anyone know what's going on here?
The MSDN documentation for ProcessStartInfo.RedirectStandardOutput discusses in detail the deadlocks that can arise when doing what you are doing here. A solution using ReadToEnd is provided, but I imagine the same advice and remedy apply when you use ReadLine.
Synchronous read operations introduce a dependency between the caller reading from the StandardOutput stream and the child process writing to that stream. These dependencies can cause deadlock conditions. When the caller reads from the redirected stream of a child process, it is dependent on the child. The caller waits for the read operation until the child writes to the stream or closes the stream. When the child process writes enough data to fill its redirected stream, it is dependent on the parent. The child process waits for the next write operation until the parent reads from the full stream or closes the stream. The deadlock condition results when the caller and child process wait for each other to complete an operation, and neither can continue. You can avoid deadlocks by evaluating dependencies between the caller and child process.
The best solution seems to be async I/O rather than the sync methods:
You can use asynchronous read operations to avoid these dependencies and their deadlock potential. Alternately, you can avoid the deadlock condition by creating two threads and reading the output of each stream on a separate thread.
There is a sample here that ought to be useful to you if you go this route.
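If you go the asynchronous route, the original loop might be reworked along these lines (executable, args, processOutput and the jobStatus handling are from the question's code; this is a sketch, not a drop-in replacement):

```csharp
using System.Diagnostics;

using (var process = new Process())
{
    process.StartInfo.FileName = executable;        // as in the original code
    process.StartInfo.Arguments = args;
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.CreateNoWindow = true;

    // Event-based reads replace the blocking sr.ReadLine() loop, so a
    // stray wakeup or another child's exit cannot stall this reader.
    process.OutputDataReceived += (s, e) =>
    {
        if (e.Data != null) processOutput(e.Data);  // null means end of stream
    };

    process.Start();
    process.BeginOutputReadLine();
    process.WaitForExit();    // untimed overload also drains the async reader

    // inspect process.ExitCode and update jobStatus as before
}
```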
I think it's not your code that's the issue. Blocking calls can unblock for a number of reasons, not only because their task was accomplished.
I don't know about Windows, I must admit, but in the Unix world, when a child finishes, a signal is sent to the parent process, and this wakes it from any blocking call. That would unblock a read on whatever input the parent was waiting for.
It wouldn't surprise me if Windows worked similarly. In any case, read up on the reasons why a blocking call may unblock.