I need to communicate with a program from my C# code. I have received a communication library from the company that made the program, along with a quite detailed explanation of how to integrate with the application. As far as I can tell I have done everything correctly, and the C++ functions all return success statuses, but the most important part, the callback function, is never invoked.
I cannot use [DllImport(...)]; I have to use LoadLibrary, GetProcAddress and Marshal.GetDelegateForFunctionPointer to load the library into memory and call its functions. This way I keep an instance of the library in memory so that it remembers the callback function, which is registered once and then used by subsequent requests to communicate additional information.
I'm keeping a reference to the callback function so the GC doesn't collect it, which would make it unavailable to the C++ code.
When I wait for the callback, nothing happens. I get no errors, I see nothing specific happening; it just lets me wait forever. The callback never happens.
I'm hoping that somebody can tell me what I have missed, because I don't see the solution anymore.
class Program
{
    private static Callback keepAlive;

    static void Main(string[] args)
    {
        try
        {
            var comLib = LoadLibrary(@"C:\BIN\ComLib.dll");

            var isInstalledHandle = GetProcAddress(comLib, nameof(IsInstalled));
            var isInstalled = Marshal.GetDelegateForFunctionPointer<IsInstalled>(isInstalledHandle);
            var isInstalledResult = isInstalled("activation-key"); // returns success
            Console.WriteLine("Is installed result: " + isInstalledResult);

            var registerCallbackHandle = GetProcAddress(comLib, nameof(RegisterCallback));
            var registerCallback = Marshal.GetDelegateForFunctionPointer<RegisterCallback>(registerCallbackHandle);
            keepAlive = CallbackFunc;
            var registerResult = registerCallback(keepAlive); // returns success
            Console.WriteLine("Register result: " + registerResult);

            var initHandle = GetProcAddress(comLib, nameof(Init));
            var init = Marshal.GetDelegateForFunctionPointer<Init>(initHandle);
            var initResult = init("ERP_INTEGRATION", "activation-key"); // returns success
            Console.WriteLine("Init result: " + initResult);

            Console.WriteLine("Wait...");
            Console.ReadLine();

            FreeLibrary(comLib);
        }
        catch (Exception e)
        {
            Console.WriteLine(e);
        }
    }

    private static int CallbackFunc(int id, string data)
    {
        Console.WriteLine($"{id} > {data}");
        return 0;
    }

    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    private delegate int IsInstalled(string registryKey);

    private delegate int Callback(int id, string data);

    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    private delegate int RegisterCallback(Callback callback);

    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    private delegate int Init(string id, string command);

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
    public static extern IntPtr LoadLibrary([MarshalAs(UnmanagedType.LPStr)] string lpLibFileName);

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
    public static extern IntPtr GetProcAddress(IntPtr hModule, [MarshalAs(UnmanagedType.LPStr)] string lpProcName);

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
    public static extern bool FreeLibrary(IntPtr hModule);
}
EDIT 1: I got word back from the developers of the software. Apparently, because I'm doing all the work on the main thread, the app can't call back because that thread is busy. I should offload the work (and the callback method) to another thread so the app is free to call the callback function. Unfortunately, I have no idea how to do this.
EDIT 2: On request, I put the code in a WinForms app and the callback works nicely there. I do not fully understand why, except that the callback must happen while either a) the main thread is free (waiting for input from the form), or b) the work is offloaded to another thread. I have no idea how to simulate either in a console app.
I found an answer that is a little hacky, but it works. I converted my console application to a WinForms application, which gives me access to System.Windows.Forms.Application.DoEvents. That allows me to start a Task that runs off the main thread and keeps pumping the events that need to be handled, which allows the external program to invoke the callback function.
private static volatile bool _callbackOccured = false;

static async Task Main(string[] args)
{
    try
    {
        var comLib = LoadLibrary(@"C:\BIN\ComLib.dll");

        await Task.Run(async () =>
        {
            var isInstalledHandle = GetProcAddress(comLib, nameof(IsInstalled));
            var isInstalled = Marshal.GetDelegateForFunctionPointer<IsInstalled>(isInstalledHandle);
            var isInstalledResult = isInstalled("activation-key"); // returns success
            Console.WriteLine("Is installed result: " + isInstalledResult);

            var registerCallbackHandle = GetProcAddress(comLib, nameof(RegisterCallback));
            var registerCallback = Marshal.GetDelegateForFunctionPointer<RegisterCallback>(registerCallbackHandle);
            keepAlive = CallbackFunc;
            var registerResult = registerCallback(keepAlive); // returns success
            Console.WriteLine("Register result: " + registerResult);

            var initHandle = GetProcAddress(comLib, nameof(Init));
            var init = Marshal.GetDelegateForFunctionPointer<Init>(initHandle);
            var initResult = init("ERP_INTEGRATION", "activation-key"); // returns success
            Console.WriteLine("Init result: " + initResult);

            while (!_callbackOccured)
            {
                await Task.Delay(100);
                Application.DoEvents();
            }

            FreeLibrary(comLib);
        });
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
    }
}

private static int CallbackFunc(int id, string data)
{
    Console.WriteLine($"{id} > {data}");
    _callbackOccured = true;
    return 0;
}
Edit: it didn't fully work with Task.Run; I had to use an actual Thread and Thread.Start to get it to work correctly. The Task got killed too quickly and I still had the same problem that the external app was closed before it got to do anything useful, but with a Thread it works as expected.
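For reference, a minimal sketch of that Thread-based variant, reusing the delegates, P/Invoke declarations and the _callbackOccured flag from the code above (CallbackFunc is assumed to set the flag, as it does in the snippet above; whether the vendor library also needs a message pump is not something this sketch can answer):
private static volatile bool _callbackOccured = false;

static void Main(string[] args)
{
    var worker = new Thread(() =>
    {
        var comLib = LoadLibrary(@"C:\BIN\ComLib.dll");

        var registerCallback = Marshal.GetDelegateForFunctionPointer<RegisterCallback>(
            GetProcAddress(comLib, nameof(RegisterCallback)));
        keepAlive = CallbackFunc;
        registerCallback(keepAlive);

        var init = Marshal.GetDelegateForFunctionPointer<Init>(
            GetProcAddress(comLib, nameof(Init)));
        init("ERP_INTEGRATION", "activation-key");

        // Keep this thread alive until the callback has fired,
        // so the library stays loaded and the registration stays valid.
        while (!_callbackOccured)
        {
            Thread.Sleep(100);
        }

        FreeLibrary(comLib);
    });

    worker.IsBackground = false; // a foreground thread keeps the process alive
    worker.Start();
    worker.Join();
}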
See the answer to this question: C# C++ Interop callback. Try converting your delegate to a native function pointer before you pass it to the library.
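For what it's worth, a minimal sketch of that suggestion against the code in the question; the RegisterCallbackPtr delegate is an assumed variant of the export that takes a raw pointer instead of a marshalled delegate:
// Assumed alternative signature for the export, taking a raw function pointer.
[UnmanagedFunctionPointer(CallingConvention.StdCall)]
private delegate int RegisterCallbackPtr(IntPtr callback);

// ...
keepAlive = CallbackFunc; // keep the delegate alive for as long as the library may call it
IntPtr nativeCallback = Marshal.GetFunctionPointerForDelegate(keepAlive);
var registerCallback = Marshal.GetDelegateForFunctionPointer<RegisterCallbackPtr>(registerCallbackHandle);
var registerResult = registerCallback(nativeCallback);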
This is my class:
class Program
{
    [DllImport("User32", CharSet = CharSet.Auto)]
    public static extern int SystemParametersInfo(int uiAction, int uiParam,
        string pvParam, uint fWinIni);

    static void Main(string[] args)
    {
        while (true)
        {
            if (!Directory.Exists("C:/temp"))
            {
                Directory.CreateDirectory("C:/temp");
            }
            if (!File.Exists(@"c:\temp\image.png"))
            {
                using (WebClient client = new WebClient())
                {
                    client.DownloadFile(new Uri("https://i.imgur.com/dhaP3Mu.png"), @"c:\temp\image.png");
                }
            }
            if (!CheckWallpaper(@"c:\temp\image.png"))
            {
                SystemParametersInfo(0x0014, 0, @"c:\temp\image.png", 0x0001);
            }
            Thread.Sleep(1000);
        }
    }

    static bool CheckWallpaper(string imagePath)
    {
        bool isSame = false;
        try
        {
            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(@"Control Panel\Desktop\WallPaper"))
            {
                if (key != null)
                {
                    Object o = key.GetValue(imagePath);
                    if (o != null)
                    {
                        Version version = new Version(o as String); // "as" because it's REG_SZ... otherwise ToString() might be safe(r)
                    }
                }
            }
        }
        catch (Exception ex) // just for demonstration... it's always best to handle specific exceptions
        {
            // react appropriately
        }
        return isSame;
    }
}
It's just a little prank that makes it impossible for the user to change their wallpaper. But now I can't build it, because eData instantly flags it as a virus of type "Variant.Razy.587029". I can understand why it thinks it's a virus, so is there a way to improve my code so that it doesn't look suspicious? I don't have the problem with Kaspersky, and I cannot change the virus scan settings, as it is obviously not my PC.
EDIT: I'm not finished yet; for example, CheckWallpaper() always returns false, I just wanted to test it.
EDIT 2: I have now changed the wallpaper check from looking in the registry to comparing %appdata%/Roaming/Microsoft/Windows/Themes/TranscodedWallpaper with my Image.png. I also added a Thread.Sleep(1000), otherwise the program gets detected again.
As Glenn van Acker said, it's essentially a virus. If you really want such a program to be ignored by antivirus software, the "easiest" way is to deploy, sign, and publish the program, and then write to the antivirus companies to have it whitelisted.
I am new to C# and currently working on the backend code to support a PIN pad. Basically, my code flow is:
OpenDevice() -> RequestPIN()
-> key in PIN on PIN PAD -> GetResultPIN()
-> ConsolePrintOutPIN() -> Print keyed PIN on the Console
I don't know how to write the threading for this, so that once the Enter key is hit on the device after the PIN, the program automatically rolls on to GetResultPIN(). So, with my elementary knowledge, I wrote the following code, using Console.ReadLine() to separate each procedure:
static void Main(string[] args)
{
    // 1. Open PIN Pad device
    OpenDevice();
    Console.ReadLine(); // to hold up from the previous procedure, it is *not* for input data purposes

    // 2. Request PIN from PIN Pad device.
    //    On the PIN Pad device, it reads:
    //    "Key in the PIN: "
    RequestPIN();
    Console.ReadLine(); // to hold up from the previous procedure, it is *not* for input data purposes

    // 3. Get PIN from the device
    GetResultPIN();

    // 4. Print out on the Console
    ConsolePrintOutPIN();
    Console.ReadLine(); // to hold up from the previous procedure, it is *not* for input data purposes
}
Question: Can anyone give me any suggestions on how to use threading/events/delegates to avoid using Console.ReadLine()?
As commented above, Console.ReadLine() is used just to hold up the procedure (sorry about my naivety in using it this way...). Once I use Console.ReadLine() between RequestPIN() and GetResultPIN(), the system at least waits for me to key in the PIN on the PIN pad (connected to the computer through USB, not from the keyboard), and then I hit any key on the keyboard to get past Console.ReadLine(), and GetResultPIN() is able to get my PIN number from the PIN pad. The whole program works now, but it is not customer ready, because it is very choppy and doesn't flow due to the Console.ReadLine() calls I added.
So ideally, all the methods would flow together. Once the device is opened, RequestPIN() should show a prompt on the PIN pad screen asking for the PIN number; someone keys it in and hits Enter on the PIN pad, and it naturally flows to GetResultPIN(), reads the result, and then prints the PIN on the console,
or
if the person doesn't key in a PIN, the device waits for 30s, goes directly to GetResultPIN(), and prints out "0000" on the console.
I have looked up threading and delegates, but am not sure how to use them in this situation... Thank you!
Reference: requestPIN() and GetResultPIN() are listed below:
mIPAD.requestPIN(waitTime, pinMsg, minLen, maxLen, tone, option, ",");
//This function wraps device command 0x04.
//It directs the device to prompt the user to enter a PIN
//by displaying one of five predetermined messages and playing
// a specified sound.
//The messages on the device’s screen look like the figures below.
//The event associated with this function is
//OnPINRequestCompleteEvent.
waitTime: Time the device should wait for the user to begin PIN entry
pinMsg: Message to display as a user prompt, such as "Enter PIN", "ReEnter PIN", "Verify PIN", etc
minLen and maxLen: minimum length and maximum length of PIN
tone: beeping tone option
Option: Verify PIN, not Verify PIN, ISO0 Format, ISO3 Format
Output would be: an integer, 0: Success, Non-Zero: Error
public void GetResultPIN()
{
    StringBuilder sb = new StringBuilder();
    sb.Append(mIPAD.pin.KSN);
    // Key Serial Number:
    // a number given by the device, unique for each device
    sb.Append("," + mIPAD.pin.EPB);
    // EPB: encrypted PIN block (the PIN after DUKPT TripleDES encryption),
    // essentially, the EPB is the PIN
    sb.Append("," + mIPAD.getStatusCode());
    // status code: zero is good/done
    //              non-zero is an error
    sb.Append("\r\n");
    result = sb.ToString();
}
Basically, GetResultPIN() returns a comma-separated string, for example:
9A00030000047A2000AB,AD781711481B08A2,0 when the PIN entry is successful. If the PIN-input part is skipped, it returns ,,0.
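For what it's worth, splitting that comma-separated result back into its parts could look like this (purely illustrative, based only on the format described above):
// result looks like "KSN,EPB,statusCode", e.g. "9A00030000047A2000AB,AD781711481B08A2,0"
string[] parts = result.TrimEnd('\r', '\n').Split(',');
string ksn = parts[0];                 // empty when PIN entry was skipped
string epb = parts[1];                 // encrypted PIN block, empty when skipped
int statusCode = int.Parse(parts[2]);  // 0 = success, non-zero = error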
Really hard to know if this will work or not without hardware to play with...
This is the way I envisioned it working:
static void Main()
{
    OpenDevice();
    RequestPIN();
    if (GetResultPIN())
    {
        // do something with the PIN:
        var pin = mIPAD.pin.EPB;
        // ...
    }
    else
    {
        Console.WriteLine("0000");
    }
}

public static bool GetResultPIN()
{
    TimeSpan timeout = TimeSpan.FromSeconds(30);
    System.Diagnostics.Stopwatch SW = new System.Diagnostics.Stopwatch();
    SW.Start();
    while (mIPAD.getStatusCode() != 0 && SW.Elapsed < timeout)
    {
        System.Threading.Thread.Sleep(50); // small call to prevent CPU usage ramping to 100%
    }
    return (mIPAD.getStatusCode() == 0);
}
You can rewrite your API to:
make GetResultPIN() return a value
use this value as input for ConsolePrintOutPIN() (see the sketch below)
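A rough sketch of that refactoring, using the method names from the question (the signatures are assumed):
// GetResultPIN() now returns the result string instead of storing it in a field,
// and ConsolePrintOutPIN() takes it as a parameter.
string pinResult = GetResultPIN();
ConsolePrintOutPIN(pinResult);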
In GetResultPIN() you need to create a Task to read your PIN and wait for it.
See : https://msdn.microsoft.com/en-us/library/dd537610(v=vs.110).aspx
You can do something like this:
public string GetResultPIN()
{
    StringBuilder sb = new StringBuilder();
    sb.Append(mIPAD.pin.KSN);
    // Key Serial Number:
    // a number given by the device, unique for each device
    sb.Append("," + mIPAD.pin.EPB);
    // EPB: encrypted PIN block (the PIN after DUKPT TripleDES encryption),
    // essentially, the EPB is the PIN
    sb.Append("," + mIPAD.getStatusCode());
    // status code: zero is good/done
    //              non-zero is an error
    sb.Append("\r\n");
    Thread.Sleep(20 * 1000); // it is in milliseconds
    return sb.ToString();
}
Thanks for posting... The solution is still not ideal....
I also did some more testing regarding the function RequestPIN(). I have the following four scenarios:
User finishes keying in PIN sooner than the waitTime goes out.
onPINRequestComplete :
OpStatus:0
KSN:9A00030000047A2000C8
EPB:39DED176D3EA40B9
..............................
User doesn't finish keying in PIN when the waitTime is going out.
onPINRequestComplete :
OpStatus:2
KSN:00000000000000000000
EPB:0000000000000000
..............................
User cancels the PIN pad option by pressing "Cancel X" key on the PIN Pad.
onPINRequestComplete :
OpStatus:1
KSN:00000000000000000000
EPB:0000000000000000
..............................
User doesn't key in PIN at all during the waitTime, and then waitTime goes out.
onPINRequestComplete :
OpStatus:2
KSN:00000000000000000000
EPB:0000000000000000
..............................
So, scenarios 1 and 3 would require the thread to wake up right away, while 2 and 4 would require the thread to wake up when the waitTime runs out. Using Thread.Sleep(20*1000) within GetResultPIN() therefore works perfectly for scenarios 2 and 4. For 1 and 3, however, the user has to wait for a long time...
On the other hand, I found some code about Event
Within Car.cs:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace WaitOnTasksToComplete
{
    class Car
    {
        public event Action OnChange;

        private double speed;
        public double Speed
        {
            get { return speed; }
            set
            {
                speed = value;
                if (speed >= 60)
                {
                    if (OnChange != null)
                    {
                        OnChange();
                    }
                }
            }
        }
    }
}
Within Program.cs:
using System;

namespace WaitOnTasksToComplete
{
    class Program
    {
        static void Main(string[] args)
        {
            Car c = new Car();
            c.OnChange += C_OnChange;

            c.Speed = 5;
            c.Speed = 55;
            c.Speed = 65;
            c.Speed = 75;
        }

        private static void C_OnChange()
        {
            Console.WriteLine("Event fired: Car goes higher than 60 MPH.");
        }
    }
}
So, basically, once Car.Speed goes above 60, the alarm shows. I am considering borrowing that condition for my situation: initialize OpStatus = -999; when OpStatus = 0 or 1, go ahead and execute GetResultPIN() and PrintMessagePIN(); if OpStatus = 2 or anything else, keep waiting...
Those are just my thoughts... I still have no clue how to implement it... Any related ideas or suggestions would be appreciated...
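For what it's worth, if the SDK really exposes the OnPINRequestCompleteEvent mentioned in the requestPIN() documentation above, the same event idea could replace polling entirely. A rough sketch, with the event name and handler signature assumed rather than taken from the SDK documentation:
private static readonly ManualResetEventSlim _pinDone = new ManualResetEventSlim(false);

static void EventGetPIN()
{
    // Assumed: the SDK raises this event once OpStatus is final (0, 1 or 2).
    mIPAD.OnPINRequestCompleteEvent += (sender, e) => _pinDone.Set();

    RequestPIN();

    // In all four scenarios above the device eventually reports completion;
    // the timeout here is only a safety net in case the event never fires.
    if (_pinDone.Wait(TimeSpan.FromSeconds(waitTime + 5)))
    {
        GetResultPIN();
        ConsolePrintOutPIN();
    }
    else
    {
        Console.WriteLine("0000");
    }
}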
Ah, I figured it out. I am basically using threading here. The main flow is OpenDevice() -> RequestPIN() -> new Thread(() => CheckOpStatus(getResultPIN)) -> Thread.Start(). Within the thread, a loop checks every half second what the OpStatus is. Per my previous post, OpStatus is the output parameter of the PIN pad: zero = success, non-zero = failure. That said, the loop proceeds until either bool condition_WithinWaitTime or bool condition_NoKeyEvent breaks. After breaking out, it invokes getResultPIN and so on...
Here is my source code. As PIN input is only one of my functions, and the rest of them have very similar behavior in terms of programming (request -> manual operation -> feedback), I also included a delegate variable to represent all of the functions (card swiping, PIN, signature, and so on).
static void Main(string[] args)
{
    OpenDevice();
    EventGetPIN();
}

static void EventGetPIN()
{
    // myDel is a delegate type declared elsewhere, e.g.: delegate void myDel();
    myDel getResult = new myDel(GetResultPIN);
    Thread thread1 = new Thread(() => CheckOpStatus(getResult));

    myDel requestDel = new myDel(RequestPIN);
    requestDel();

    thread1.Start();
}

static void CheckOpStatus(Delegate getResult)
{
    int count = 0;
    int checkingPeriod = 500;
    int totalWaitTime = waitTime * 1000 + offsetTime;
    string OpStatus;
    string ksnStart = mIPAD.getKSN();
    string ksn = ksnStart;
    bool condition_WithinWaitTime = true;
    bool condition_NoKeyEvent = true;

    while (condition_WithinWaitTime & condition_NoKeyEvent)
    {
        count++;
        OpStatus = mIPAD.getStatusCode().ToString();
        ksn = mIPAD.getKSN();
        //Console.WriteLine(OpStatus);
        condition_WithinWaitTime = (count * checkingPeriod) < totalWaitTime;
        condition_NoKeyEvent = (ksn == ksnStart);
        Thread.Sleep(checkingPeriod);
    }

    getResult.DynamicInvoke();
}
I'm creating new processes using the System.Diagnostics.Process class from my application. I want these processes to be killed when/if my application crashes. But if I kill my application from Task Manager, the child processes are not killed. Is there any way to make the child processes dependent on the parent process?
From this forum, credit to 'Josh'.
Application.Quit() and Process.Kill() are possible solutions, but have proven to be unreliable. When your main application dies, you are still left with child processes running. What we really want is for the child processes to die as soon as the main process dies.
The solution is to use "job objects" http://msdn.microsoft.com/en-us/library/ms682409(VS.85).aspx.
The idea is to create a "job object" for your main application, and register your child processes with the job object. If the main process dies, the OS will take care of terminating the child processes.
public enum JobObjectInfoType
{
AssociateCompletionPortInformation = 7,
BasicLimitInformation = 2,
BasicUIRestrictions = 4,
EndOfJobTimeInformation = 6,
ExtendedLimitInformation = 9,
SecurityLimitInformation = 5,
GroupInformation = 11
}
[StructLayout(LayoutKind.Sequential)]
public struct SECURITY_ATTRIBUTES
{
public int nLength;
public IntPtr lpSecurityDescriptor;
public int bInheritHandle;
}
[StructLayout(LayoutKind.Sequential)]
struct JOBOBJECT_BASIC_LIMIT_INFORMATION
{
public Int64 PerProcessUserTimeLimit;
public Int64 PerJobUserTimeLimit;
public Int16 LimitFlags;
public UInt32 MinimumWorkingSetSize;
public UInt32 MaximumWorkingSetSize;
public Int16 ActiveProcessLimit;
public Int64 Affinity;
public Int16 PriorityClass;
public Int16 SchedulingClass;
}
[StructLayout(LayoutKind.Sequential)]
struct IO_COUNTERS
{
public UInt64 ReadOperationCount;
public UInt64 WriteOperationCount;
public UInt64 OtherOperationCount;
public UInt64 ReadTransferCount;
public UInt64 WriteTransferCount;
public UInt64 OtherTransferCount;
}
[StructLayout(LayoutKind.Sequential)]
struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
{
public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
public IO_COUNTERS IoInfo;
public UInt32 ProcessMemoryLimit;
public UInt32 JobMemoryLimit;
public UInt32 PeakProcessMemoryUsed;
public UInt32 PeakJobMemoryUsed;
}
public class Job : IDisposable
{
[DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
static extern IntPtr CreateJobObject(object a, string lpName);
[DllImport("kernel32.dll")]
static extern bool SetInformationJobObject(IntPtr hJob, JobObjectInfoType infoType, IntPtr lpJobObjectInfo, uint cbJobObjectInfoLength);
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process);
private IntPtr m_handle;
private bool m_disposed = false;
public Job()
{
m_handle = CreateJobObject(null, null);
JOBOBJECT_BASIC_LIMIT_INFORMATION info = new JOBOBJECT_BASIC_LIMIT_INFORMATION();
info.LimitFlags = 0x2000;
JOBOBJECT_EXTENDED_LIMIT_INFORMATION extendedInfo = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
extendedInfo.BasicLimitInformation = info;
int length = Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION));
IntPtr extendedInfoPtr = Marshal.AllocHGlobal(length);
Marshal.StructureToPtr(extendedInfo, extendedInfoPtr, false);
if (!SetInformationJobObject(m_handle, JobObjectInfoType.ExtendedLimitInformation, extendedInfoPtr, (uint)length))
throw new Exception(string.Format("Unable to set information. Error: {0}", Marshal.GetLastWin32Error()));
}
#region IDisposable Members
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
#endregion
private void Dispose(bool disposing)
{
if (m_disposed)
return;
if (disposing) {}
Close();
m_disposed = true;
}
public void Close()
{
Win32.CloseHandle(m_handle);
m_handle = IntPtr.Zero;
}
public bool AddProcess(IntPtr handle)
{
return AssignProcessToJobObject(m_handle, handle);
}
}
Looking at the constructor ...
JOBOBJECT_BASIC_LIMIT_INFORMATION info = new JOBOBJECT_BASIC_LIMIT_INFORMATION();
info.LimitFlags = 0x2000;
The key here is to setup the job object properly. In the constructor I'm setting the "limits" to 0x2000, which is the numeric value for JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE.
MSDN defines this flag as:
Causes all processes associated with
the job to terminate when the last
handle to the job is closed.
Once this class is set up... you just have to register each child process with the job. For example:
[DllImport("user32.dll", SetLastError = true)]
public static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId);
Excel.Application app = new Excel.ApplicationClass();
uint pid = 0;
Win32.GetWindowThreadProcessId(new IntPtr(app.Hwnd), out pid);
job.AddProcess(Process.GetProcessById((int)pid).Handle);
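For a child started directly with Process.Start, registration with the Job class above is even simpler (a small usage sketch; "child.exe" is just a placeholder name):
var job = new Job();
var child = Process.Start("child.exe"); // placeholder executable
job.AddProcess(child.Handle);
// When the parent dies, the last handle to the job is closed and the child is terminated with it.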
This answer started with @Matt Howells' excellent answer plus others (see links in the code below). Improvements:
Supports 32-bit and 64-bit.
Fixes some problems in @Matt Howells' answer:
The small memory leak of extendedInfoPtr
The 'Win32' compile error, and
A stack-unbalanced exception I got in the call to CreateJobObject (using Windows 10, Visual Studio 2015, 32-bit).
Names the Job, so if you use SysInternals, for example, you can easily find it.
Has a somewhat simpler API and less code.
Here's how to use this code:
// Get a Process object somehow.
Process process = Process.Start(exePath, args);
// Add the Process to ChildProcessTracker.
ChildProcessTracker.AddProcess(process);
To support Windows 7 requires:
A simple app.manifest change as @adam smith describes.
Registry settings to be added if you are using Visual Studio.
In my case, I didn't need to support Windows 7, so I have a simple check at the
top of the static constructor below.
/// <summary>
/// Allows processes to be automatically killed if this parent process unexpectedly quits.
/// This feature requires Windows 8 or greater. On Windows 7, nothing is done.</summary>
/// <remarks>References:
/// https://stackoverflow.com/a/4657392/386091
/// https://stackoverflow.com/a/9164742/386091 </remarks>
public static class ChildProcessTracker
{
/// <summary>
/// Add the process to be tracked. If our current process is killed, the child processes
/// that we are tracking will be automatically killed, too. If the child process terminates
/// first, that's fine, too.</summary>
/// <param name="process"></param>
public static void AddProcess(Process process)
{
if (s_jobHandle != IntPtr.Zero)
{
bool success = AssignProcessToJobObject(s_jobHandle, process.Handle);
if (!success && !process.HasExited)
throw new Win32Exception();
}
}
static ChildProcessTracker()
{
// This feature requires Windows 8 or later. To support Windows 7 requires
// registry settings to be added if you are using Visual Studio plus an
// app.manifest change.
// https://stackoverflow.com/a/4232259/386091
// https://stackoverflow.com/a/9507862/386091
if (Environment.OSVersion.Version < new Version(6, 2))
return;
// The job name is optional (and can be null) but it helps with diagnostics.
// If it's not null, it has to be unique. Use SysInternals' Handle command-line
// utility: handle -a ChildProcessTracker
string jobName = "ChildProcessTracker" + Process.GetCurrentProcess().Id;
s_jobHandle = CreateJobObject(IntPtr.Zero, jobName);
var info = new JOBOBJECT_BASIC_LIMIT_INFORMATION();
// This is the key flag. When our process is killed, Windows will automatically
// close the job handle, and when that happens, we want the child processes to
// be killed, too.
info.LimitFlags = JOBOBJECTLIMIT.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
var extendedInfo = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
extendedInfo.BasicLimitInformation = info;
int length = Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION));
IntPtr extendedInfoPtr = Marshal.AllocHGlobal(length);
try
{
Marshal.StructureToPtr(extendedInfo, extendedInfoPtr, false);
if (!SetInformationJobObject(s_jobHandle, JobObjectInfoType.ExtendedLimitInformation,
extendedInfoPtr, (uint)length))
{
throw new Win32Exception();
}
}
finally
{
Marshal.FreeHGlobal(extendedInfoPtr);
}
}
[DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string name);
[DllImport("kernel32.dll")]
static extern bool SetInformationJobObject(IntPtr job, JobObjectInfoType infoType,
IntPtr lpJobObjectInfo, uint cbJobObjectInfoLength);
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process);
// Windows will automatically close any open job handles when our process terminates.
// This can be verified by using SysInternals' Handle utility. When the job handle
// is closed, the child processes will be killed.
private static readonly IntPtr s_jobHandle;
}
public enum JobObjectInfoType
{
AssociateCompletionPortInformation = 7,
BasicLimitInformation = 2,
BasicUIRestrictions = 4,
EndOfJobTimeInformation = 6,
ExtendedLimitInformation = 9,
SecurityLimitInformation = 5,
GroupInformation = 11
}
[StructLayout(LayoutKind.Sequential)]
public struct JOBOBJECT_BASIC_LIMIT_INFORMATION
{
public Int64 PerProcessUserTimeLimit;
public Int64 PerJobUserTimeLimit;
public JOBOBJECTLIMIT LimitFlags;
public UIntPtr MinimumWorkingSetSize;
public UIntPtr MaximumWorkingSetSize;
public UInt32 ActiveProcessLimit;
public Int64 Affinity;
public UInt32 PriorityClass;
public UInt32 SchedulingClass;
}
[Flags]
public enum JOBOBJECTLIMIT : uint
{
JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE = 0x2000
}
[StructLayout(LayoutKind.Sequential)]
public struct IO_COUNTERS
{
public UInt64 ReadOperationCount;
public UInt64 WriteOperationCount;
public UInt64 OtherOperationCount;
public UInt64 ReadTransferCount;
public UInt64 WriteTransferCount;
public UInt64 OtherTransferCount;
}
[StructLayout(LayoutKind.Sequential)]
public struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
{
public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
public IO_COUNTERS IoInfo;
public UIntPtr ProcessMemoryLimit;
public UIntPtr JobMemoryLimit;
public UIntPtr PeakProcessMemoryUsed;
public UIntPtr PeakJobMemoryUsed;
}
I carefully tested both the 32-bit and 64-bit versions of the structs by programmatically comparing the managed and native versions to each other (the overall size as well as the offsets for each member).
I've tested this code on Windows 7, 8, and 10.
This post is intended as an extension to @Matt Howells' answer, specifically for those who run into problems with using Job Objects under Vista or Win7, especially if you get an access denied error ('5') when calling AssignProcessToJobObject.
tl;dr
To ensure compatibility with Vista and Win7, add the following manifest to the .NET parent process:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<v3:trustInfo xmlns:v3="urn:schemas-microsoft-com:asm.v3">
<v3:security>
<v3:requestedPrivileges>
<v3:requestedExecutionLevel level="asInvoker" uiAccess="false" />
</v3:requestedPrivileges>
</v3:security>
</v3:trustInfo>
<compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
<!-- We specify these, in addition to the UAC above, so we avoid Program Compatibility Assistant in Vista and Win7 -->
<!-- We try to avoid PCA so we can use Windows Job Objects -->
<!-- See https://stackoverflow.com/questions/3342941/kill-child-process-when-parent-process-is-killed -->
<application>
<!--The ID below indicates application support for Windows Vista -->
<supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/>
<!--The ID below indicates application support for Windows 7 -->
<supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
</application>
</compatibility>
</assembly>
Note that when you add a new manifest in Visual Studio 2012 it will already contain the above snippet, so you do not need to copy it from here. It will also include a node for Windows 8.
full explanation
Your job association will fail with an access denied error if the process you're starting is already associated with another job. Enter Program Compatibility Assistant, which, starting in Windows Vista, will assign all kinds of processes to its own jobs.
In Vista you can mark your application to be excluded from PCA by simply including an application manifest. Visual Studio seems to do this for .NET apps automatically, so you're fine there.
A simple manifest no longer cuts it in Win7. [1] There, you have to specifically declare that you're compatible with Win7 using the supportedOS tag in your manifest. [2]
This led me to worry about Windows 8. Will I have to change my manifest once again? Apparently there's a break in the clouds, as Windows 8 now allows a process to belong to multiple jobs. [3] So I haven't tested it yet, but I imagine that this madness will be over now if you simply include a manifest with the supportedOS information.
Tip 1: If you're developing a .NET app with Visual Studio, as I was, here [4] are some nice instructions on how to customize your application manifest.
Tip 2: Be careful with launching your application from Visual Studio. I found that, after adding the appropriate manifest, I still had problems with PCA when launching from Visual Studio, even if I used Start without Debugging. Launching my application from Explorer worked, however. After manually adding devenv for exclusion from PCA using the registry, starting applications that used Job Objects from VS started working as well. [5]
Tip 3: If you ever want to know if PCA is your problem, try launching your application from the command line, or copy the program to a network drive and run it from there. PCA is automatically disabled in those contexts.
[1] http://blogs.msdn.com/b/cjacks/archive/2009/06/18/pca-changes-for-windows-7-how-to-tell-us-you-are-not-an-installer-take-2-because-we-changed-the-rules-on-you.aspx
[2] http://ayende.com/blog/4360/how-to-opt-out-of-program-compatibility-assistant
[3] http://msdn.microsoft.com/en-us/library/windows/desktop/ms681949(v=vs.85).aspx:
"A process can be associated with more than one job in Windows 8"
[4] How can I embed an application manifest into an application using VS2008?
[5] How to stop the Visual Studio debugger starting my process in a job object?
Here's an alternative that may work for some when you have control of the code the child process runs. The benefit of this approach is it doesn't require any native Windows calls.
The basic idea is to redirect the child's standard input to a stream whose other end is connected to the parent, and use that stream to detect when the parent has gone away. When you use System.Diagnostics.Process to start the child, it's easy to ensure its standard input is redirected:
Process childProcess = new Process();
childProcess.StartInfo = new ProcessStartInfo("pathToConsoleModeApp.exe");
childProcess.StartInfo.RedirectStandardInput = true;
childProcess.StartInfo.CreateNoWindow = true; // no sense showing an empty black console window which the user can't input into
And then, on the child process, take advantage of the fact that reads from the standard input stream will always return with at least 1 byte until the stream is closed, after which they will start returning 0 bytes. An outline of the way I ended up doing this is below; my way also uses a message pump to keep the main thread available for things other than watching standard input, but this general approach could be used without message pumps too.
using System;
using System.IO;
using System.Threading;
using System.Windows.Forms;
static int Main()
{
Application.Run(new MyApplicationContext());
return 0;
}
public class MyApplicationContext : ApplicationContext
{
private SynchronizationContext _mainThreadMessageQueue = null;
private Stream _stdInput;
public MyApplicationContext()
{
_stdInput = Console.OpenStandardInput();
// feel free to use a better way to post to the message loop from here if you know one ;)
System.Windows.Forms.Timer handoffToMessageLoopTimer = new System.Windows.Forms.Timer();
handoffToMessageLoopTimer.Interval = 1;
handoffToMessageLoopTimer.Tick += new EventHandler((obj, eArgs) => { PostMessageLoopInitialization(handoffToMessageLoopTimer); });
handoffToMessageLoopTimer.Start();
}
private void PostMessageLoopInitialization(System.Windows.Forms.Timer t)
{
if (_mainThreadMessageQueue == null)
{
t.Stop();
_mainThreadMessageQueue = SynchronizationContext.Current;
}
// constantly monitor standard input on a background thread that will
// signal the main thread when stuff happens.
BeginMonitoringStdIn(null);
// start up your application's real work here
}
private void BeginMonitoringStdIn(object state)
{
if (SynchronizationContext.Current == _mainThreadMessageQueue)
{
// we're already running on the main thread - proceed.
var buffer = new byte[128];
_stdInput.BeginRead(buffer, 0, buffer.Length, (asyncResult) =>
{
int amtRead = _stdInput.EndRead(asyncResult);
if (amtRead == 0)
{
_mainThreadMessageQueue.Post(new SendOrPostCallback(ApplicationTeardown), null);
}
else
{
BeginMonitoringStdIn(null);
}
}, null);
}
else
{
// not invoked from the main thread - dispatch another call to this method on the main thread and return
_mainThreadMessageQueue.Post(new SendOrPostCallback(BeginMonitoringStdIn), null);
}
}
private void ApplicationTeardown(object state)
{
// tear down your application gracefully here
_stdInput.Close();
this.ExitThread();
}
}
Caveats to this approach:
the actual child .exe that is launched must be a console application so it remains attached to stdin/out/err. As in the above example, I easily adapted my existing application that used a message pump (but didn't show a GUI) by just creating a tiny console project that referenced the existing project, instantiating my application context and calling Application.Run() inside the Main method of the console .exe.
Technically, this merely signals the child process when the parent exits, so it will work whether the parent process exited normally or crashed, but it's still up to the child process to perform its own shutdown. This may or may not be what you want...
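If the child doesn't have a message pump, the same stdin trick can be boiled down to a single background thread. A rough sketch under the same assumption as above, namely that the parent starts the child with RedirectStandardInput = true:
static void Main()
{
    var watchdog = new Thread(() =>
    {
        var stdIn = Console.OpenStandardInput();
        var buffer = new byte[64];
        // Read blocks until data arrives and returns 0 only when the stream
        // is closed, i.e. when the parent process has gone away.
        while (stdIn.Read(buffer, 0, buffer.Length) > 0)
        {
            // ignore whatever the parent writes; we only care about the close
        }
        Environment.Exit(1);
    });
    watchdog.IsBackground = true;
    watchdog.Start();

    // ... the child's real work goes here ...
}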
There is another relevant method, easy and effective, to finish child processes on program termination: you can implement and attach a debugger to them from the parent; when the parent process ends, the child processes will be killed by the OS. It can also go the other way, attaching a debugger to the parent from the child (note that you can only attach one debugger at a time). You can find more info on the subject here.
Here you have a utility class that launches a new process and attaches a debugger to it. It has been adapted from this post by Roger Knapp. The only requirement is that both processes need to share the same bitness; you cannot debug a 32-bit process from a 64-bit process or vice versa.
public class ProcessRunner
{
// see http://csharptest.net/1051/managed-anti-debugging-how-to-prevent-users-from-attaching-a-debugger/
// see https://stackoverflow.com/a/24012744/2982757
public Process ChildProcess { get; set; }
public bool StartProcess(string fileName)
{
var processStartInfo = new ProcessStartInfo(fileName)
{
UseShellExecute = false,
WindowStyle = ProcessWindowStyle.Normal,
ErrorDialog = false
};
this.ChildProcess = Process.Start(processStartInfo);
if (ChildProcess == null)
return false;
new Thread(NullDebugger) {IsBackground = true}.Start(ChildProcess.Id);
return true;
}
private void NullDebugger(object arg)
{
// Attach to the process we provided the thread as an argument
if (DebugActiveProcess((int) arg))
{
var debugEvent = new DEBUG_EVENT {bytes = new byte[1024]};
while (!this.ChildProcess.HasExited)
{
if (WaitForDebugEvent(out debugEvent, 1000))
{
// return DBG_CONTINUE for all events but the exception type
var continueFlag = DBG_CONTINUE;
if (debugEvent.dwDebugEventCode == DebugEventType.EXCEPTION_DEBUG_EVENT)
continueFlag = DBG_EXCEPTION_NOT_HANDLED;
ContinueDebugEvent(debugEvent.dwProcessId, debugEvent.dwThreadId, continueFlag);
}
}
}
else
{
//we were not able to attach the debugger
//do the processes have the same bitness?
//throw ApplicationException("Unable to attach debugger") // Kill child? // Send Event? // Ignore?
}
}
#region "API imports"
private const int DBG_CONTINUE = 0x00010002;
private const int DBG_EXCEPTION_NOT_HANDLED = unchecked((int) 0x80010001);
private enum DebugEventType : int
{
CREATE_PROCESS_DEBUG_EVENT = 3,
//Reports a create-process debugging event. The value of u.CreateProcessInfo specifies a CREATE_PROCESS_DEBUG_INFO structure.
CREATE_THREAD_DEBUG_EVENT = 2,
//Reports a create-thread debugging event. The value of u.CreateThread specifies a CREATE_THREAD_DEBUG_INFO structure.
EXCEPTION_DEBUG_EVENT = 1,
//Reports an exception debugging event. The value of u.Exception specifies an EXCEPTION_DEBUG_INFO structure.
EXIT_PROCESS_DEBUG_EVENT = 5,
//Reports an exit-process debugging event. The value of u.ExitProcess specifies an EXIT_PROCESS_DEBUG_INFO structure.
EXIT_THREAD_DEBUG_EVENT = 4,
//Reports an exit-thread debugging event. The value of u.ExitThread specifies an EXIT_THREAD_DEBUG_INFO structure.
LOAD_DLL_DEBUG_EVENT = 6,
//Reports a load-dynamic-link-library (DLL) debugging event. The value of u.LoadDll specifies a LOAD_DLL_DEBUG_INFO structure.
OUTPUT_DEBUG_STRING_EVENT = 8,
//Reports an output-debugging-string debugging event. The value of u.DebugString specifies an OUTPUT_DEBUG_STRING_INFO structure.
RIP_EVENT = 9,
//Reports a RIP-debugging event (system debugging error). The value of u.RipInfo specifies a RIP_INFO structure.
UNLOAD_DLL_DEBUG_EVENT = 7,
//Reports an unload-DLL debugging event. The value of u.UnloadDll specifies an UNLOAD_DLL_DEBUG_INFO structure.
}
[StructLayout(LayoutKind.Sequential)]
private struct DEBUG_EVENT
{
[MarshalAs(UnmanagedType.I4)] public DebugEventType dwDebugEventCode;
public int dwProcessId;
public int dwThreadId;
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 1024)] public byte[] bytes;
}
[DllImport("Kernel32.dll", SetLastError = true)]
private static extern bool DebugActiveProcess(int dwProcessId);
[DllImport("Kernel32.dll", SetLastError = true)]
private static extern bool WaitForDebugEvent([Out] out DEBUG_EVENT lpDebugEvent, int dwMilliseconds);
[DllImport("Kernel32.dll", SetLastError = true)]
private static extern bool ContinueDebugEvent(int dwProcessId, int dwThreadId, int dwContinueStatus);
[DllImport("Kernel32.dll", SetLastError = true)]
public static extern bool IsDebuggerPresent();
#endregion
}
Usage:
new ProcessRunner().StartProcess("c:\\Windows\\system32\\calc.exe");
One way is to pass the PID of the parent process to the child. The child will periodically poll whether the process with the specified PID exists or not; if not, it just quits.
You can also use the Process.WaitForExit method in the child process to be notified when the parent process ends, but that might not work in the case of Task Manager.
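A rough sketch of that polling approach, assuming (purely as a convention for this example) that the parent passes its own PID as the first command-line argument to the child:
static void Main(string[] args)
{
    int parentPid = int.Parse(args[0]); // assumed convention: parent PID as first argument

    new Thread(() =>
    {
        while (true)
        {
            try
            {
                Process.GetProcessById(parentPid); // throws once the parent is gone
            }
            catch (ArgumentException)
            {
                Environment.Exit(1);
            }
            Thread.Sleep(1000);
        }
    }) { IsBackground = true }.Start();

    // ... the child's normal work goes here ...
}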
I was looking for a solution to this problem that did not require unmanaged code. I was also not able to use standard input/output redirection because it was a Windows Forms application.
My solution was to create a named pipe in the parent process and then connect the child process to the same pipe. If the parent process exits then the pipe becomes broken and the child can detect this.
Below is an example using two console applications:
Parent
private const string PipeName = "471450d6-70db-49dc-94af-09d3f3eba529";
public static void Main(string[] args)
{
Console.WriteLine("Main program running");
using (NamedPipeServerStream pipe = new NamedPipeServerStream(PipeName, PipeDirection.Out))
{
Process.Start("child.exe");
Console.WriteLine("Press any key to exit");
Console.ReadKey();
}
}
Child
private const string PipeName = "471450d6-70db-49dc-94af-09d3f3eba529"; // same as parent
public static void Main(string[] args)
{
Console.WriteLine("Child process running");
using (NamedPipeClientStream pipe = new NamedPipeClientStream(".", PipeName, PipeDirection.In))
{
pipe.Connect();
pipe.BeginRead(new byte[1], 0, 1, PipeBrokenCallback, pipe);
Console.WriteLine("Press any key to exit");
Console.ReadKey();
}
}
private static void PipeBrokenCallback(IAsyncResult ar)
{
// the pipe was closed (parent process died), so exit the child process too
try
{
NamedPipeClientStream pipe = (NamedPipeClientStream)ar.AsyncState;
pipe.EndRead(ar);
}
catch (IOException) { }
Environment.Exit(1);
}
Use event handlers to hook a few exit scenarios:
var process = Process.Start("program.exe");
AppDomain.CurrentDomain.DomainUnload += (s, e) => { process.Kill(); process.WaitForExit(); };
AppDomain.CurrentDomain.ProcessExit += (s, e) => { process.Kill(); process.WaitForExit(); };
AppDomain.CurrentDomain.UnhandledException += (s, e) => { process.Kill(); process.WaitForExit(); };
Yet another addition to the abundant richness of solutions proposed so far....
The problem with many of them is that they rely upon the parent and child process to shut down in an orderly manner, which isn't always true when development is underway. I found that my child process was often being orphaned whenever I terminated the parent process in the debugger, which required me to kill the orphaned process(es) with Task Manager in order to rebuild my solution.
The solution: pass the parent process ID in on the command line (or, even less invasively, in the environment variables) of the child process.
In the parent process, the process ID is available as:
Process.GetCurrentProcess().Id;
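Passing it along when starting the child could then look like this (illustrative only; "child.exe" and the argument convention are placeholders):
int parentId = Process.GetCurrentProcess().Id;
Process.Start("child.exe", parentId.ToString()); // the child parses this into parentProcessId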
In the child process:
Process parentProcess = Process.GetProcessById(parentProcessId);
parentProcess.EnableRaisingEvents = true; // required, otherwise the Exited event never fires
parentProcess.Exited += (s, e) =>
{
    // clean up what you can.
    this.Dispose();
    // maybe log an error
    ....
    // And terminate with prejudice!
    // (since something has already gone terribly wrong)
    Process.GetCurrentProcess().Kill();
};
I am of two minds as to whether this is acceptable practice in production code. On the one hand, this should never happen. But on the other hand, it may mean the difference between restarting a process, and rebooting a production server. And what should never happen often does.
And it sure is useful while debugging orderly shutdown problems.
Solution that worked for me:
When creating the process, set the flag process.EnableRaisingEvents = true:
csc = new Process();
csc.StartInfo.UseShellExecute = false;
csc.StartInfo.CreateNoWindow = true;
csc.StartInfo.FileName = Path.Combine(HLib.path_dataFolder, "csc.exe");
csc.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
csc.StartInfo.ErrorDialog = false;
csc.StartInfo.RedirectStandardInput = true;
csc.StartInfo.RedirectStandardOutput = true;
csc.EnableRaisingEvents = true;
csc.Start();
I see two options:
If you know exactly which child processes could be started, and you are sure they are only started from your main process, then you could consider simply searching for them by name and killing them.
Iterate through all processes and kill every process that has your process as its parent (I guess you need to kill the child processes first). Here is explained how you can get the parent process id.
I've made a child process management library where the parent process and the child process are monitored through a bidirectional WCF pipe. If either the child process or the parent process terminates, the other one is notified.
There is also a debugger helper available which automatically attaches the VS debugger to the started child process.
Project site:
http://www.crawler-lib.net/child-processes
NuGet Packages:
https://www.nuget.org/packages/ChildProcesses
https://www.nuget.org/packages/ChildProcesses.VisualStudioDebug/
Just my 2018 version.
Use it alongside your Main() method.
using System.Management;
using System.Diagnostics;
...
// Called when the main window is closed
protected override void OnClosed(EventArgs e)
{
    string query = "Select * From Win32_Process Where ParentProcessId = " + Process.GetCurrentProcess().Id;
    ManagementObjectSearcher searcher = new ManagementObjectSearcher(query);
    ManagementObjectCollection processList = searcher.Get();

    foreach (var obj in processList)
    {
        object data = obj.Properties["processid"].Value;
        if (data != null)
        {
            // retrieve the child process
            var childId = Convert.ToInt32(data);
            var childProcess = Process.GetProcessById(childId);
            // kill it if it is still alive
            if (childProcess != null) childProcess.Kill();
        }
    }
    Environment.Exit(0);
}
It is better to call job.AddProcess after the process has started:
prc.Start();
job.AddProcess(prc.Handle);
When AddProcess is called before the process is started, the child processes are not killed. (Windows 7 SP1)
private void KillProcess(Process proc)
{
var job = new Job();
job.AddProcess(proc.Handle);
job.Close();
}
I had the same problem: I was creating child processes that never got killed if my main app crashed, and I had to kill the child processes manually when debugging. I found that there was no need to make the children depend on the parent somehow; in my Main, I added a try/catch to do a CleanUp() of the child processes on exit.
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
try
{
Application.Run(new frmMonitorSensors());
}
catch(Exception ex)
{
CleanUp();
ErrorLogging.Add(ex.ToString());
}
}
static private void CleanUp()
{
List<string> processesToKill = new List<string>() { "Process1", "Process2" };
foreach (string toKill in processesToKill)
{
Process[] processes = Process.GetProcessesByName(toKill);
foreach (Process p in processes)
{
p.Kill();
}
}
}
If you can catch the situation in which your process tree should be killed, then since .NET Core 3.0 (and therefore also .NET 5.0 and later) you can use Process.Kill(bool entireProcessTree).
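A minimal sketch of that, assuming .NET Core 3.0 / .NET 5+ and a placeholder child executable:
var child = Process.Start("child.exe"); // placeholder
try
{
    // ... do the parent's work ...
}
finally
{
    // Kills the child and any processes the child started in turn.
    // Note this only runs if the parent actually reaches the finally block,
    // i.e. it does not help when the parent is killed from Task Manager.
    child.Kill(entireProcessTree: true);
}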
I'm creating new processes using System.Diagnostics.Process class from my application. I want this processes to be killed when/if my application has crashed. But if I kill my application from Task Manager, child processes are not killed. Is there any way to make child processes dependent on parent process?
From this forum, credit to 'Josh'.
Application.Quit() and Process.Kill() are possible solutions, but have proven to be unreliable. When your main application dies, you are still left with child processes running. What we really want is for the child processes to die as soon as the main process dies.
The solution is to use "job objects" http://msdn.microsoft.com/en-us/library/ms682409(VS.85).aspx.
The idea is to create a "job object" for your main application, and register your child processes with the job object. If the main process dies, the OS will take care of terminating the child processes.
public enum JobObjectInfoType
{
AssociateCompletionPortInformation = 7,
BasicLimitInformation = 2,
BasicUIRestrictions = 4,
EndOfJobTimeInformation = 6,
ExtendedLimitInformation = 9,
SecurityLimitInformation = 5,
GroupInformation = 11
}
[StructLayout(LayoutKind.Sequential)]
public struct SECURITY_ATTRIBUTES
{
public int nLength;
public IntPtr lpSecurityDescriptor;
public int bInheritHandle;
}
[StructLayout(LayoutKind.Sequential)]
struct JOBOBJECT_BASIC_LIMIT_INFORMATION
{
public Int64 PerProcessUserTimeLimit;
public Int64 PerJobUserTimeLimit;
public Int16 LimitFlags;
public UInt32 MinimumWorkingSetSize;
public UInt32 MaximumWorkingSetSize;
public Int16 ActiveProcessLimit;
public Int64 Affinity;
public Int16 PriorityClass;
public Int16 SchedulingClass;
}
[StructLayout(LayoutKind.Sequential)]
struct IO_COUNTERS
{
public UInt64 ReadOperationCount;
public UInt64 WriteOperationCount;
public UInt64 OtherOperationCount;
public UInt64 ReadTransferCount;
public UInt64 WriteTransferCount;
public UInt64 OtherTransferCount;
}
[StructLayout(LayoutKind.Sequential)]
struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
{
public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
public IO_COUNTERS IoInfo;
public UInt32 ProcessMemoryLimit;
public UInt32 JobMemoryLimit;
public UInt32 PeakProcessMemoryUsed;
public UInt32 PeakJobMemoryUsed;
}
public class Job : IDisposable
{
[DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
static extern IntPtr CreateJobObject(object a, string lpName);
[DllImport("kernel32.dll")]
static extern bool SetInformationJobObject(IntPtr hJob, JobObjectInfoType infoType, IntPtr lpJobObjectInfo, uint cbJobObjectInfoLength);
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process);
private IntPtr m_handle;
private bool m_disposed = false;
public Job()
{
m_handle = CreateJobObject(null, null);
JOBOBJECT_BASIC_LIMIT_INFORMATION info = new JOBOBJECT_BASIC_LIMIT_INFORMATION();
info.LimitFlags = 0x2000;
JOBOBJECT_EXTENDED_LIMIT_INFORMATION extendedInfo = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
extendedInfo.BasicLimitInformation = info;
int length = Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION));
IntPtr extendedInfoPtr = Marshal.AllocHGlobal(length);
Marshal.StructureToPtr(extendedInfo, extendedInfoPtr, false);
if (!SetInformationJobObject(m_handle, JobObjectInfoType.ExtendedLimitInformation, extendedInfoPtr, (uint)length))
throw new Exception(string.Format("Unable to set information. Error: {0}", Marshal.GetLastWin32Error()));
}
#region IDisposable Members
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
#endregion
private void Dispose(bool disposing)
{
if (m_disposed)
return;
if (disposing) {}
Close();
m_disposed = true;
}
public void Close()
{
Win32.CloseHandle(m_handle);
m_handle = IntPtr.Zero;
}
public bool AddProcess(IntPtr handle)
{
return AssignProcessToJobObject(m_handle, handle);
}
}
Looking at the constructor ...
JOBOBJECT_BASIC_LIMIT_INFORMATION info = new JOBOBJECT_BASIC_LIMIT_INFORMATION();
info.LimitFlags = 0x2000;
The key here is to setup the job object properly. In the constructor I'm setting the "limits" to 0x2000, which is the numeric value for JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE.
MSDN defines this flag as:
Causes all processes associated with
the job to terminate when the last
handle to the job is closed.
Once this class is setup...you just have to register each child process with the job. For example:
[DllImport("user32.dll", SetLastError = true)]
public static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId);
Excel.Application app = new Excel.ApplicationClass();
uint pid = 0;
Win32.GetWindowThreadProcessId(new IntPtr(app.Hwnd), out pid);
job.AddProcess(Process.GetProcessById((int)pid).Handle);
This answer started with #Matt Howells' excellent answer plus others (see links in the code below). Improvements:
Supports 32-bit and 64-bit.
Fixes some problems in #Matt Howells' answer:
The small memory leak of extendedInfoPtr
The 'Win32' compile error, and
A stack-unbalanced exception I got in the call to CreateJobObject (using Windows 10, Visual Studio 2015, 32-bit).
Names the Job, so if you use SysInternals, for example, you can easily find it.
Has a somewhat simpler API and less code.
Here's how to use this code:
// Get a Process object somehow.
Process process = Process.Start(exePath, args);
// Add the Process to ChildProcessTracker.
ChildProcessTracker.AddProcess(process);
To support Windows 7 requires:
A simple app.manifest change as #adam smith describes.
Registry settings to be added if you are using Visual Studio.
In my case, I didn't need to support Windows 7, so I have a simple check at the
top of the static constructor below.
/// <summary>
/// Allows processes to be automatically killed if this parent process unexpectedly quits.
/// This feature requires Windows 8 or greater. On Windows 7, nothing is done.</summary>
/// <remarks>References:
/// https://stackoverflow.com/a/4657392/386091
/// https://stackoverflow.com/a/9164742/386091 </remarks>
public static class ChildProcessTracker
{
/// <summary>
/// Add the process to be tracked. If our current process is killed, the child processes
/// that we are tracking will be automatically killed, too. If the child process terminates
/// first, that's fine, too.</summary>
/// <param name="process"></param>
public static void AddProcess(Process process)
{
if (s_jobHandle != IntPtr.Zero)
{
bool success = AssignProcessToJobObject(s_jobHandle, process.Handle);
if (!success && !process.HasExited)
throw new Win32Exception();
}
}
static ChildProcessTracker()
{
// This feature requires Windows 8 or later. To support Windows 7 requires
// registry settings to be added if you are using Visual Studio plus an
// app.manifest change.
// https://stackoverflow.com/a/4232259/386091
// https://stackoverflow.com/a/9507862/386091
if (Environment.OSVersion.Version < new Version(6, 2))
return;
// The job name is optional (and can be null) but it helps with diagnostics.
// If it's not null, it has to be unique. Use SysInternals' Handle command-line
// utility: handle -a ChildProcessTracker
string jobName = "ChildProcessTracker" + Process.GetCurrentProcess().Id;
s_jobHandle = CreateJobObject(IntPtr.Zero, jobName);
var info = new JOBOBJECT_BASIC_LIMIT_INFORMATION();
// This is the key flag. When our process is killed, Windows will automatically
// close the job handle, and when that happens, we want the child processes to
// be killed, too.
info.LimitFlags = JOBOBJECTLIMIT.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
var extendedInfo = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
extendedInfo.BasicLimitInformation = info;
int length = Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION));
IntPtr extendedInfoPtr = Marshal.AllocHGlobal(length);
try
{
Marshal.StructureToPtr(extendedInfo, extendedInfoPtr, false);
if (!SetInformationJobObject(s_jobHandle, JobObjectInfoType.ExtendedLimitInformation,
extendedInfoPtr, (uint)length))
{
throw new Win32Exception();
}
}
finally
{
Marshal.FreeHGlobal(extendedInfoPtr);
}
}
[DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string name);
[DllImport("kernel32.dll")]
static extern bool SetInformationJobObject(IntPtr job, JobObjectInfoType infoType,
IntPtr lpJobObjectInfo, uint cbJobObjectInfoLength);
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process);
// Windows will automatically close any open job handles when our process terminates.
// This can be verified by using SysInternals' Handle utility. When the job handle
// is closed, the child processes will be killed.
private static readonly IntPtr s_jobHandle;
}
public enum JobObjectInfoType
{
AssociateCompletionPortInformation = 7,
BasicLimitInformation = 2,
BasicUIRestrictions = 4,
EndOfJobTimeInformation = 6,
ExtendedLimitInformation = 9,
SecurityLimitInformation = 5,
GroupInformation = 11
}
[StructLayout(LayoutKind.Sequential)]
public struct JOBOBJECT_BASIC_LIMIT_INFORMATION
{
public Int64 PerProcessUserTimeLimit;
public Int64 PerJobUserTimeLimit;
public JOBOBJECTLIMIT LimitFlags;
public UIntPtr MinimumWorkingSetSize;
public UIntPtr MaximumWorkingSetSize;
public UInt32 ActiveProcessLimit;
public Int64 Affinity;
public UInt32 PriorityClass;
public UInt32 SchedulingClass;
}
[Flags]
public enum JOBOBJECTLIMIT : uint
{
JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE = 0x2000
}
[StructLayout(LayoutKind.Sequential)]
public struct IO_COUNTERS
{
public UInt64 ReadOperationCount;
public UInt64 WriteOperationCount;
public UInt64 OtherOperationCount;
public UInt64 ReadTransferCount;
public UInt64 WriteTransferCount;
public UInt64 OtherTransferCount;
}
[StructLayout(LayoutKind.Sequential)]
public struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
{
public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
public IO_COUNTERS IoInfo;
public UIntPtr ProcessMemoryLimit;
public UIntPtr JobMemoryLimit;
public UIntPtr PeakProcessMemoryUsed;
public UIntPtr PeakJobMemoryUsed;
}
I carefully tested both the 32-bit and 64-bit versions of the structs by programmatically comparing the managed and native versions to each other (the overall size as well as the offsets for each member).
I've tested this code on Windows 7, 8, and 10.
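For reference, here is a minimal sketch of what such a layout check could look like, using Marshal.SizeOf and Marshal.OffsetOf on the structs above. The printed values would be compared against numbers produced by a small native helper using sizeof()/offsetof() on the SDK definitions; that helper is not shown and the method name is mine.
// Sketch only: print the managed sizes/offsets so they can be compared with the native ones.
static void DumpManagedLayout()
{
    Console.WriteLine("sizeof(JOBOBJECT_BASIC_LIMIT_INFORMATION) = {0}",
        Marshal.SizeOf(typeof(JOBOBJECT_BASIC_LIMIT_INFORMATION)));
    Console.WriteLine("offsetof(LimitFlags) = {0}",
        Marshal.OffsetOf(typeof(JOBOBJECT_BASIC_LIMIT_INFORMATION), "LimitFlags"));
    Console.WriteLine("sizeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION) = {0}",
        Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION)));
    Console.WriteLine("offsetof(JobMemoryLimit) = {0}",
        Marshal.OffsetOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION), "JobMemoryLimit"));
}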
This post is intended as an extension to @Matt Howells' answer, specifically for those who run into problems using Job Objects under Vista or Win7, especially if you get an access denied error ('5') when calling AssignProcessToJobObject.
tl;dr
To ensure compatibility with Vista and Win7, add the following manifest to the .NET parent process:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<v3:trustInfo xmlns:v3="urn:schemas-microsoft-com:asm.v3">
<v3:security>
<v3:requestedPrivileges>
<v3:requestedExecutionLevel level="asInvoker" uiAccess="false" />
</v3:requestedPrivileges>
</v3:security>
</v3:trustInfo>
<compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
<!-- We specify these, in addition to the UAC above, so we avoid Program Compatibility Assistant in Vista and Win7 -->
<!-- We try to avoid PCA so we can use Windows Job Objects -->
<!-- See https://stackoverflow.com/questions/3342941/kill-child-process-when-parent-process-is-killed -->
<application>
<!--The ID below indicates application support for Windows Vista -->
<supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/>
<!--The ID below indicates application support for Windows 7 -->
<supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
</application>
</compatibility>
</assembly>
Note that when you add a new manifest in Visual Studio 2012 it will already contain the above snippet, so you do not need to copy it from here. It will also include a node for Windows 8.
full explanation
Your job association will fail with an access denied error if the process you're starting is already associated with another job. Enter Program Compatibility Assistant, which, starting in Windows Vista, will assign all kinds of processes to its own jobs.
In Vista you can mark your application to be excluded from PCA by simply including an application manifest. Visual Studio seems to do this for .NET apps automatically, so you're fine there.
A simple manifest no longer cuts it in Win7. [1] There, you have to explicitly declare that you're compatible with Win7 using the supportedOS tag in your manifest. [2]
This led me to worry about Windows 8. Will I have to change my manifest once again? Apparently there's a break in the clouds, as Windows 8 now allows a process to belong to multiple jobs. [3] So I haven't tested it yet, but I imagine that this madness will be over now if you simply include a manifest with the supportedOS information.
Tip 1: If you're developing a .NET app with Visual Studio, as I was, here [4] are some nice instructions on how to customize your application manifest.
Tip 2: Be careful with launching your application from Visual Studio. I found that, after adding the appropriate manifest, I still had problems with PCA when launching from Visual Studio, even if I used Start without Debugging. Launching my application from Explorer worked, however. After manually adding devenv for exclusion from PCA using the registry, starting applications that used Job Objects from VS started working as well. [5]
Tip 3: If you ever want to know if PCA is your problem, try launching your application from the command line, or copy the program to a network drive and run it from there. PCA is automatically disabled in those contexts.
[1] http://blogs.msdn.com/b/cjacks/archive/2009/06/18/pca-changes-for-windows-7-how-to-tell-us-you-are-not-an-installer-take-2-because-we-changed-the-rules-on-you.aspx
[2] http://ayende.com/blog/4360/how-to-opt-out-of-program-compatibility-assistant
[3] http://msdn.microsoft.com/en-us/library/windows/desktop/ms681949(v=vs.85).aspx:
"A process can be associated with more than one job in Windows 8"
[4] How can I embed an application manifest into an application using VS2008?
[5] How to stop the Visual Studio debugger starting my process in a job object?
Here's an alternative that may work for some when you have control of the code the child process runs. The benefit of this approach is it doesn't require any native Windows calls.
The basic idea is to redirect the child's standard input to a stream whose other end is connected to the parent, and use that stream to detect when the parent has gone away. When you use System.Diagnostics.Process to start the child, it's easy to ensure its standard input is redirected:
Process childProcess = new Process();
childProcess.StartInfo = new ProcessStartInfo("pathToConsoleModeApp.exe");
childProcess.StartInfo.RedirectStandardInput = true;
childProcess.StartInfo.CreateNoWindow = true; // no sense showing an empty black console window which the user can't input into
childProcess.Start();
Then, in the child process, take advantage of the fact that reads from the standard input stream always return at least 1 byte until the stream is closed, at which point they start returning 0 bytes. An outline of the way I ended up doing this is below; my approach also uses a message pump to keep the main thread available for things other than watching standard input, but this general approach could be used without message pumps too.
using System;
using System.IO;
using System.Threading;
using System.Windows.Forms;
internal static class Program
{
    [STAThread]
    static int Main()
    {
        Application.Run(new MyApplicationContext());
        return 0;
    }
}
public class MyApplicationContext : ApplicationContext
{
private SynchronizationContext _mainThreadMessageQueue = null;
private Stream _stdInput;
public MyApplicationContext()
{
_stdInput = Console.OpenStandardInput();
// feel free to use a better way to post to the message loop from here if you know one ;)
System.Windows.Forms.Timer handoffToMessageLoopTimer = new System.Windows.Forms.Timer();
handoffToMessageLoopTimer.Interval = 1;
handoffToMessageLoopTimer.Tick += new EventHandler((obj, eArgs) => { PostMessageLoopInitialization(handoffToMessageLoopTimer); });
handoffToMessageLoopTimer.Start();
}
private void PostMessageLoopInitialization(System.Windows.Forms.Timer t)
{
if (_mainThreadMessageQueue == null)
{
t.Stop();
_mainThreadMessageQueue = SynchronizationContext.Current;
}
// constantly monitor standard input on a background thread that will
// signal the main thread when stuff happens.
BeginMonitoringStdIn(null);
// start up your application's real work here
}
private void BeginMonitoringStdIn(object state)
{
if (SynchronizationContext.Current == _mainThreadMessageQueue)
{
// we're already running on the main thread - proceed.
var buffer = new byte[128];
_stdInput.BeginRead(buffer, 0, buffer.Length, (asyncResult) =>
{
int amtRead = _stdInput.EndRead(asyncResult);
if (amtRead == 0)
{
_mainThreadMessageQueue.Post(new SendOrPostCallback(ApplicationTeardown), null);
}
else
{
BeginMonitoringStdIn(null);
}
}, null);
}
else
{
// not invoked from the main thread - dispatch another call to this method on the main thread and return
_mainThreadMessageQueue.Post(new SendOrPostCallback(BeginMonitoringStdIn), null);
}
}
private void ApplicationTeardown(object state)
{
// tear down your application gracefully here
_stdInput.Close();
this.ExitThread();
}
}
Caveats to this approach:
the actual child .exe that is launched must be a console application so it remains attached to stdin/out/err. As in the above example, I easily adapted my existing application that used a message pump (but didn't show a GUI) by just creating a tiny console project that referenced the existing project, instantiating my application context and calling Application.Run() inside the Main method of the console .exe.
Technically, this merely signals the child process when the parent exits, so it will work whether the parent process exited normally or crashed, but it's still up to the child process to perform its own shutdown. This may or may not be what you want...
There is another relevant method, easy and effective, for finishing child processes on program termination: attach a debugger to them from the parent; when the parent process ends, the child processes will be killed by the OS. It can also go the other way, attaching a debugger to the parent from the child (note that a process can only have one debugger attached at a time). You can find more info on the subject here.
Here is a utility class that launches a new process and attaches a debugger to it. It has been adapted from this post by Roger Knapp. The only requirement is that both processes share the same bitness: you cannot debug a 32-bit process from a 64-bit process or vice versa.
public class ProcessRunner
{
// see http://csharptest.net/1051/managed-anti-debugging-how-to-prevent-users-from-attaching-a-debugger/
// see https://stackoverflow.com/a/24012744/2982757
public Process ChildProcess { get; set; }
public bool StartProcess(string fileName)
{
var processStartInfo = new ProcessStartInfo(fileName)
{
UseShellExecute = false,
WindowStyle = ProcessWindowStyle.Normal,
ErrorDialog = false
};
this.ChildProcess = Process.Start(processStartInfo);
if (ChildProcess == null)
return false;
new Thread(NullDebugger) {IsBackground = true}.Start(ChildProcess.Id);
return true;
}
private void NullDebugger(object arg)
{
// Attach to the process we provided the thread as an argument
if (DebugActiveProcess((int) arg))
{
var debugEvent = new DEBUG_EVENT {bytes = new byte[1024]};
while (!this.ChildProcess.HasExited)
{
if (WaitForDebugEvent(out debugEvent, 1000))
{
// return DBG_CONTINUE for all events but the exception type
var continueFlag = DBG_CONTINUE;
if (debugEvent.dwDebugEventCode == DebugEventType.EXCEPTION_DEBUG_EVENT)
continueFlag = DBG_EXCEPTION_NOT_HANDLED;
ContinueDebugEvent(debugEvent.dwProcessId, debugEvent.dwThreadId, continueFlag);
}
}
}
else
{
//we were not able to attach the debugger
//do the processes have the same bitness?
//throw ApplicationException("Unable to attach debugger") // Kill child? // Send Event? // Ignore?
}
}
#region "API imports"
private const int DBG_CONTINUE = 0x00010002;
private const int DBG_EXCEPTION_NOT_HANDLED = unchecked((int) 0x80010001);
private enum DebugEventType : int
{
CREATE_PROCESS_DEBUG_EVENT = 3,
//Reports a create-process debugging event. The value of u.CreateProcessInfo specifies a CREATE_PROCESS_DEBUG_INFO structure.
CREATE_THREAD_DEBUG_EVENT = 2,
//Reports a create-thread debugging event. The value of u.CreateThread specifies a CREATE_THREAD_DEBUG_INFO structure.
EXCEPTION_DEBUG_EVENT = 1,
//Reports an exception debugging event. The value of u.Exception specifies an EXCEPTION_DEBUG_INFO structure.
EXIT_PROCESS_DEBUG_EVENT = 5,
//Reports an exit-process debugging event. The value of u.ExitProcess specifies an EXIT_PROCESS_DEBUG_INFO structure.
EXIT_THREAD_DEBUG_EVENT = 4,
//Reports an exit-thread debugging event. The value of u.ExitThread specifies an EXIT_THREAD_DEBUG_INFO structure.
LOAD_DLL_DEBUG_EVENT = 6,
//Reports a load-dynamic-link-library (DLL) debugging event. The value of u.LoadDll specifies a LOAD_DLL_DEBUG_INFO structure.
OUTPUT_DEBUG_STRING_EVENT = 8,
//Reports an output-debugging-string debugging event. The value of u.DebugString specifies an OUTPUT_DEBUG_STRING_INFO structure.
RIP_EVENT = 9,
//Reports a RIP-debugging event (system debugging error). The value of u.RipInfo specifies a RIP_INFO structure.
UNLOAD_DLL_DEBUG_EVENT = 7,
//Reports an unload-DLL debugging event. The value of u.UnloadDll specifies an UNLOAD_DLL_DEBUG_INFO structure.
}
[StructLayout(LayoutKind.Sequential)]
private struct DEBUG_EVENT
{
[MarshalAs(UnmanagedType.I4)] public DebugEventType dwDebugEventCode;
public int dwProcessId;
public int dwThreadId;
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 1024)] public byte[] bytes;
}
[DllImport("Kernel32.dll", SetLastError = true)]
private static extern bool DebugActiveProcess(int dwProcessId);
[DllImport("Kernel32.dll", SetLastError = true)]
private static extern bool WaitForDebugEvent([Out] out DEBUG_EVENT lpDebugEvent, int dwMilliseconds);
[DllImport("Kernel32.dll", SetLastError = true)]
private static extern bool ContinueDebugEvent(int dwProcessId, int dwThreadId, int dwContinueStatus);
[DllImport("Kernel32.dll", SetLastError = true)]
public static extern bool IsDebuggerPresent();
#endregion
}
Usage:
new ProcessRunner().StartProcess("c:\\Windows\\system32\\calc.exe");
One way is to pass the PID of the parent process to the child. The child then periodically polls whether a process with that PID still exists, and quits when it does not.
You can also use the Process.WaitForExit method in the child to be notified when the parent process ends, but it might not work if the parent is killed from Task Manager. A sketch of this approach follows.
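A minimal sketch of that idea, assuming the parent passes its PID as the first command-line argument (the names here are illustrative, and a polling loop around Process.GetProcessById could replace WaitForExit):
// Child process: exit as soon as the parent (whose PID was passed as args[0]) goes away.
using System;
using System.Diagnostics;
using System.Threading;

class ChildWatchdog
{
    static void Main(string[] args)
    {
        int parentPid = int.Parse(args[0]);
        var watcher = new Thread(() =>
        {
            try
            {
                // Blocks until the parent terminates, whether it exited normally or was killed.
                Process.GetProcessById(parentPid).WaitForExit();
            }
            catch (ArgumentException)
            {
                // The parent is already gone.
            }
            Environment.Exit(0);
        });
        watcher.IsBackground = true;
        watcher.Start();

        // ... the child's real work goes here ...
        Thread.Sleep(Timeout.Infinite);
    }
}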
I was looking for a solution to this problem that did not require unmanaged code. I was also not able to use standard input/output redirection because it was a Windows Forms application.
My solution was to create a named pipe in the parent process and then connect the child process to the same pipe. If the parent process exits then the pipe becomes broken and the child can detect this.
Below is an example using two console applications:
Parent
private const string PipeName = "471450d6-70db-49dc-94af-09d3f3eba529";
public static void Main(string[] args)
{
Console.WriteLine("Main program running");
using (NamedPipeServerStream pipe = new NamedPipeServerStream(PipeName, PipeDirection.Out))
{
Process.Start("child.exe");
Console.WriteLine("Press any key to exit");
Console.ReadKey();
}
}
Child
private const string PipeName = "471450d6-70db-49dc-94af-09d3f3eba529"; // same as parent
public static void Main(string[] args)
{
Console.WriteLine("Child process running");
using (NamedPipeClientStream pipe = new NamedPipeClientStream(".", PipeName, PipeDirection.In))
{
pipe.Connect();
pipe.BeginRead(new byte[1], 0, 1, PipeBrokenCallback, pipe);
Console.WriteLine("Press any key to exit");
Console.ReadKey();
}
}
private static void PipeBrokenCallback(IAsyncResult ar)
{
// the pipe was closed (parent process died), so exit the child process too
try
{
NamedPipeClientStream pipe = (NamedPipeClientStream)ar.AsyncState;
pipe.EndRead(ar);
}
catch (IOException) { }
Environment.Exit(1);
}
Use event handlers to make hooks on a few exit scenarios:
var process = Process.Start("program.exe");
AppDomain.CurrentDomain.DomainUnload += (s, e) => { process.Kill(); process.WaitForExit(); };
AppDomain.CurrentDomain.ProcessExit += (s, e) => { process.Kill(); process.WaitForExit(); };
AppDomain.CurrentDomain.UnhandledException += (s, e) => { process.Kill(); process.WaitForExit(); };
Yet another addition to the abundant richness of solutions proposed so far....
The problem with many of them is that they rely upon the parent and child process to shut down in an orderly manner, which isn't always true when development is underway. I found that my child process was often being orphaned whenever I terminated the parent process in the debugger, which required me to kill the orphaned process(es) with Task Manager in order to rebuild my solution.
The solution: pass the parent process ID on the command line (or, even less invasively, in an environment variable) of the child process.
In the parent process, the process ID is available as:
Process.GetCurrentProcess().Id;
In the child process:
Process parentProcess = Process.GetProcessById(parentProcessId);
parentProcess.EnableRaisingEvents = true; // without this, the Exited event never fires
parentProcess.Exited += (s, e) =>
{
// clean up what you can.
this.Dispose();
// maybe log an error
....
// And terminate with prejudice!
//(since something has already gone terribly wrong)
Process.GetCurrentProcess().Kill();
};
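For completeness, one way the PID might travel from parent to child (a sketch; "child.exe" and the argument layout are illustrative):
// Parent: pass our PID to the child on its command line.
var child = Process.Start("child.exe", Process.GetCurrentProcess().Id.ToString());

// Child (e.g. at the top of Main): recover the parent PID.
int parentProcessId = int.Parse(Environment.GetCommandLineArgs()[1]);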
I am of two minds as to whether this is acceptable practice in production code. On the one hand, this should never happen. But on the other hand, it may mean the difference between restarting a process, and rebooting a production server. And what should never happen often does.
And it sure is useful while debugging orderly shutdown problems.
Solution that worked for me:
When creating the process, set process.EnableRaisingEvents = true;:
csc = new Process();
csc.StartInfo.UseShellExecute = false;
csc.StartInfo.CreateNoWindow = true;
csc.StartInfo.FileName = Path.Combine(HLib.path_dataFolder, "csc.exe");
csc.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
csc.StartInfo.ErrorDialog = false;
csc.StartInfo.RedirectStandardInput = true;
csc.StartInfo.RedirectStandardOutput = true;
csc.EnableRaisingEvents = true;
csc.Start();
I see two options:
If you know exactly which child processes could be started and you are sure they are only started from your main process, you could simply search for them by name and kill them.
Iterate through all processes and kill every process that has your process as its parent (you probably need to kill the child processes first). Here is explained how you can get the parent process id; a sketch of that lookup is shown below.
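A minimal sketch of the parent-PID lookup via NtQueryInformationProcess (a widely used but formally undocumented ntdll call; the helper name and error handling here are mine, not from the linked explanation):
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class ParentPid
{
    [StructLayout(LayoutKind.Sequential)]
    struct PROCESS_BASIC_INFORMATION
    {
        public IntPtr ExitStatus;
        public IntPtr PebBaseAddress;
        public IntPtr AffinityMask;
        public IntPtr BasePriority;
        public UIntPtr UniqueProcessId;
        public UIntPtr InheritedFromUniqueProcessId; // this is the parent PID
    }

    [DllImport("ntdll.dll")]
    static extern int NtQueryInformationProcess(IntPtr processHandle, int processInformationClass,
        ref PROCESS_BASIC_INFORMATION processInformation, int processInformationLength, out int returnLength);

    public static int GetParentProcessId(Process process)
    {
        var pbi = new PROCESS_BASIC_INFORMATION();
        int returnLength;
        int status = NtQueryInformationProcess(process.Handle, 0 /* ProcessBasicInformation */,
            ref pbi, Marshal.SizeOf(typeof(PROCESS_BASIC_INFORMATION)), out returnLength);
        if (status != 0)
            throw new InvalidOperationException("NtQueryInformationProcess failed: " + status);
        return (int)pbi.InheritedFromUniqueProcessId.ToUInt64();
    }
}
With that in place, the cleanup loop is just: enumerate Process.GetProcesses() and Kill() every process whose GetParentProcessId equals Process.GetCurrentProcess().Id.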
I've made a child process management library where the parent process and the child process are monitored via a bidirectional WCF pipe. If either the child process or the parent process terminates, the other is notified.
There is also a debugger helper available which automatically attaches the VS debugger to the started child process.
Project site:
http://www.crawler-lib.net/child-processes
NuGet Packages:
https://www.nuget.org/packages/ChildProcesses
https://www.nuget.org/packages/ChildProcesses.VisualStudioDebug/
Just my 2018 version.
Use it alongside your Main() method.
using System.Management;
using System.Diagnostics;
...
// Called when the Main Window is closed
protected override void OnClosed(EventArgs e)
{
string query = "Select * From Win32_Process Where ParentProcessId = " + Process.GetCurrentProcess().Id;
ManagementObjectSearcher searcher = new ManagementObjectSearcher(query);
ManagementObjectCollection processList = searcher.Get();
foreach (var obj in processList)
{
object data = obj.Properties["processid"].Value;
if (data != null)
{
// retrieve the process
var childId = Convert.ToInt32(data);
var childProcess = Process.GetProcessById(childId);
// make sure the child process is still running before killing it
if (childProcess != null) childProcess.Kill();
}
}
Environment.Exit(0);
}
It is better to call job.AddProcess right after the process has been started:
prc.Start();
job.AddProcess(prc.Handle);
If AddProcess is called only shortly before the process is terminated (as in the snippet below), the child processes it has already spawned are not killed. (Windows 7 SP1)
private void KillProcess(Process proc)
{
var job = new Job();
job.AddProcess(proc.Handle);
job.Close();
}
I had the same problem. I was creating child processes that never got killed if my main app crashed, and I had to manually destroy the child processes when debugging. I found that there was no need to make the children depend on the parent in any way: in my Main, I added a try/catch that runs CleanUp() on the child processes on exit.
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
try
{
Application.Run(new frmMonitorSensors());
}
catch(Exception ex)
{
CleanUp();
ErrorLogging.Add(ex.ToString());
}
}
static private void CleanUp()
{
List<string> processesToKill = new List<string>() { "Process1", "Process2" };
foreach (string toKill in processesToKill)
{
Process[] processes = Process.GetProcessesByName(toKill);
foreach (Process p in processes)
{
p.Kill();
}
}
}
If you can catch the situation in which your process tree should be killed, then since .NET Core 3.0 (and therefore also in .NET 5 and later) you can use Process.Kill(bool entireProcessTree).
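A minimal sketch of that overload in a parent's shutdown path ("child.exe" is a placeholder):
using System.Diagnostics;

class Parent
{
    static void Main()
    {
        var child = Process.Start("child.exe");
        try
        {
            // ... the parent's real work ...
        }
        finally
        {
            if (!child.HasExited)
                child.Kill(entireProcessTree: true); // also kills the child's own descendants (requires .NET Core 3.0+)
        }
    }
}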