I'm using WMI to get some information, but when a class is not available due to insufficient permissions, everything hangs for a few (~5) seconds. Even setting a low timeout doesn't help (not to mention that it would be a poor solution anyway).
The problem isn't the insufficient permissions; the problem is the hang.
Is there any way to check whether the current process has the privileges to read information from a given class, so I can avoid both the hang and the "access denied" exception?
ConnectionOptions co = new ConnectionOptions();
co.Impersonation = ImpersonationLevel.Impersonate;
co.Authentication = AuthenticationLevel.Packet;
co.Timeout = new TimeSpan(0, 0, 1); // 1 second, but still hangs for ~5
co.EnablePrivileges = false; // false or true, doesn't matter
ManagementPath mp = new ManagementPath();
mp.NamespacePath = @"root\CIMV2\Security\MicrosoftTpm";
mp.Server = "";
ManagementScope ms = new ManagementScope(mp, co);
ms.Connect(); // Hangs for ~5 seconds and throws "access denied"
What you need to do is run the operation on a background thread. It may still take five seconds (or more) but your application will remain responsive instead of hanging. Try something like this:
Task.Factory.StartNew(() =>
{
// Your permission checking code here.....
}).ContinueWith((t) =>
{
// Inform user of permissions status.
});
If you aren't using a version of the framework that supports Task, try a BackgroundWorker instead. These are common ways to keep long-running operations from hanging your app.
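For example, a rough BackgroundWorker sketch (CheckWmiPermissions here is just a placeholder for whatever check you run against the WMI class):
using System.ComponentModel;

var worker = new BackgroundWorker();
worker.DoWork += (sender, e) =>
{
    // Runs on a thread-pool thread, so the UI stays responsive even if the
    // WMI call blocks for several seconds before throwing "access denied".
    e.Result = CheckWmiPermissions(); // placeholder for your permission check
};
worker.RunWorkerCompleted += (sender, e) =>
{
    // Back on the UI thread here.
    if (e.Error != null)
    {
        // The check threw (e.g. UnauthorizedAccessException) - tell the user.
    }
    else
    {
        // Use e.Result to report the permission status.
    }
};
worker.RunWorkerAsync();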
Overall Goal
I'm attempting to kill all of the processes by a certain name (notepad.exe below) that I currently own. Generally speaking, it's along the lines of:
Get all of the applications with a certain name that I'm the owner of
In this case, "I" will usually be a service account
Kill all of them.
Questions
How likely is it that from the time I grab a PID to the time I kill it, another application could have spawned that uses that PID? If I grab a PID of ID 123, how likely is it that it could have closed and a different application now owns PID 123?
What is the best way I can reasonably pull this off while limiting the potential that I kill off the wrong PID?
What I Have So Far
The below code is based on another SO answer and uses WMI to get all the processes by a certain name and list the users.
What's next: kill the processes that are owned by me. However, how can I tell that the PIDs I have here will still refer to the same processes when I try to kill them?
static void Main(string[] args)
{
const string PROCESS_NAME = "notepad.exe";
var queryString = string.Format("Name = '{0}'", PROCESS_NAME);
var propertiesToSelect = new[] { "Handle", "ProcessId" };
var processQuery = new SelectQuery("Win32_Process", queryString, propertiesToSelect);
using (var searcher = new ManagementObjectSearcher(processQuery))
{
using (var processes = searcher.Get())
foreach (var aProcess in processes)
{
var process = (ManagementObject)aProcess;
var outParameters = new object[2]; // GetOwner returns two out parameters: User and Domain
var result = (uint)process.InvokeMethod("GetOwner", outParameters);
if (result == 0)
{
var user = (string)outParameters[0];
var domain = (string)outParameters[1];
var processId = (uint)process["ProcessId"];
Console.WriteLine("PID: {0} | User: {1}\\{2}", processId, domain, user);
// TODO: Use process data...
}
else
{
// TODO: Handle GetOwner() failure...
}
}
}
Console.ReadLine();
}
Yes, there is a risk of killing the wrong process. The reuse of PIDs is probably a historical accident that has caused a lot of grief over the years.
Do it like this:
Find the PIDs you want to kill.
Obtain handles to those processes to stabilize the PIDs. Note that this might obtain handles to the wrong processes.
Re-find the PIDs you want to kill.
Kill those processes that you have stabilized and that are in the second find result set.
By inserting this lock-and-validate step you can be sure.
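A minimal sketch of that lock-and-validate sequence, using System.Diagnostics.Process instead of the WMI query for brevity (the process name "notepad" and the exception handling are assumptions; in the real code the first and second find would also filter by owner):
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Diagnostics;
using System.Linq;

// 1) First find: collect candidate PIDs (in the real code this is the WMI query above).
List<int> firstFind = Process.GetProcessesByName("notepad").Select(p => p.Id).ToList();

// 2) Stabilize: open a real OS handle to each PID. While a handle is held,
//    Windows will not reuse that PID, even if the process exits.
var held = new List<Process>();
foreach (int pid in firstFind)
{
    try
    {
        Process p = Process.GetProcessById(pid);
        IntPtr forceOpen = p.Handle; // forces the handle open; may throw if access is denied
        held.Add(p);
    }
    catch (ArgumentException) { /* the process already exited */ }
    catch (InvalidOperationException) { /* exited before the handle was opened */ }
    catch (Win32Exception) { /* no permission to open it; skip it */ }
}

// 3) Second find: re-query and keep only the PIDs that still match the name.
var secondFind = new HashSet<int>(Process.GetProcessesByName("notepad").Select(p => p.Id));

// 4) Kill only the processes that are both stabilized and present in the second find.
foreach (Process p in held)
{
    try
    {
        if (secondFind.Contains(p.Id))
            p.Kill();
    }
    catch (InvalidOperationException) { /* it exited on its own */ }
    finally
    {
        p.Dispose();
    }
}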
How likely is it that from the time I grab a PID to the time I kill it, another application could have spawned that uses that PID?
Another application wouldn't be assigned the same PID if it was spawned while the original process was still alive, so that condition can't happen: Windows PIDs are unique among the processes running at any given time.
If I grab a PID of ID 123, how likely is it that it could have closed and a different application now owns PID 123?
It is technically feasible that the process could be closed between the time you get your handle on it and when you want to kill it, but that depends entirely on how long your code holds onto the PID. There will always be edge cases where the application closes just as you're about to hook onto it, but if you're talking milliseconds to a couple of seconds, I imagine it would be few and far between. As for Windows assigning the same PID again immediately afterwards: I don't know for sure, but PIDs seem fairly random and are not allocated again immediately after use, though they eventually will be reused.
What is the best way I can reasonably pull this off while limiting the potential that I kill off the wrong PID?
There is the ManagementEventWatcher class, which appears to allow you to monitor the starting and stopping of processes. Maybe this could be used to capture an event whenever a process with your given name closes, so you know it no longer exists?
Another answer discussing Management Event Watcher
MSDN ManagementEventWatcher class with example usage
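As a rough sketch (the process name is an assumption), a watcher for process-deletion events could look like this:
using System;
using System.Management;

// Poll WMI once a second for deletion events on Win32_Process instances
// whose name matches the process we care about.
var query = new WqlEventQuery(
    "SELECT * FROM __InstanceDeletionEvent WITHIN 1 " +
    "WHERE TargetInstance ISA 'Win32_Process' AND TargetInstance.Name = 'notepad.exe'");

using (var watcher = new ManagementEventWatcher(query))
{
    watcher.EventArrived += (sender, e) =>
    {
        var target = (ManagementBaseObject)e.NewEvent["TargetInstance"];
        Console.WriteLine("Process {0} (PID {1}) has exited", target["Name"], target["ProcessId"]);
    };
    watcher.Start();
    Console.ReadLine(); // keep watching until Enter is pressed
    watcher.Stop();
}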
Consider the opposite approach: adjust the permissions on the service account so that it can't kill other users' processes.
I believe such permissions are very close to the default for non-admin accounts (or simply are the default), so unless you run the service as a box admin/SYSTEM you may be fine with a no-code solution.
A process id is guaranteed to stay the same as long as the process continues to run. Once the process exits... there is no guarantee.
When a new process starts, Windows will pick an available process ID and assign it to the new process. It's unlikely, but possible, that the ID chosen was associated with a process that recently exited.
Have you looked at System.Diagnostics.Process?
It has a GetProcessesByName method that returns an array of Process objects.
Process [] localByName = Process.GetProcessesByName("notepad");
Then you can simply iterate through the Processes and kill them. Since the Process object has a handle to the process... an attempt to kill it will generate a useful exception, which you can catch.
foreach (Process p in localByName)
{
try
{
p.Kill();
}
catch (Exception e)
{
// process either couldn't be terminated or was no longer running
}
}
I have an application that hosts WF4 workflows in IIS using WorkflowApplication.
The workflow is defined by the user (using a rehosted workflow designer) and its XML is stored in the database. Then, depending on the user's actions in the application, the XML is loaded from the database and the workflow is created / resumed.
My problem is: when the workflow reaches a bookmark and goes idle, it stays locked for a varying amount of time. Then, if the user tries to perform another action on this workflow, I get this exception:
The execution of an InstancePersistenceCommand was interrupted because the instance '52da4562-896e-4959-ae40-5cd016c4ae79' is locked by a different instance owner. This error usually occurs because a different host has the instance loaded. The instance owner ID of the owner or host with a lock on the instance is 'd7339374-2285-45b9-b4ea-97b18c968c19'.
Now for some code.
When a workflow goes idle, I specify that it should be unloaded:
private PersistableIdleAction handlePersistableIdle(WorkflowApplicationIdleEventArgs arg)
{
this.Logger.DebugFormat("Workflow '{1}' is persistableIdle on review '{0}'", arg.GetReviewId(), arg.InstanceId);
return PersistableIdleAction.Unload;
}
For each WorkflowApplication I need, I create a new SqlWorkflowInstanceStore:
var store = new SqlWorkflowInstanceStore(this._connectionString);
store.RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(5);
store.InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry;
Here is how my WorkflowApplication is created
WorkflowApplication wfApp = new WorkflowApplication(root.RootActivity);
wfApp.Extensions.Add(...);
wfApp.InstanceStore = this.createStore();
wfApp.PersistableIdle = this.handlePersistableIdle;
wfApp.OnUnhandledException = this.handleException;
wfApp.Idle = this.handleIdle;
wfApp.Unloaded = this.handleUnloaded;
wfApp.Aborted = this.handleAborted;
wfApp.SynchronizationContext = new CustomSynchronizationContext();
return wfApp;
Then I call the Run method to start it.
Some explanations:
- root.RootActivity: it's the activity created from the workflow XML stored in database
- CustomSynchronizationContext: a synchronisation context that handles authorisations
- in the handleUnloaded method I log when a workflow is unloaded, and I can see that the workflow is correctly unloaded before the next user action, but it seems the workflow stays locked after being unloaded (?)
Then, later, when I need to resume the workflow, I create the WorkflowApplication the same way and call:
wfApp.Load(workflowInstanceId);
which throws the "locked" exception quoted above.
If I wait a few minutes and try again, it works fine.
I read a post here that says we need to set an owner.
So I've also tried using a static SqlWorkflowInstanceStore with the owner set using this code:
if (_sqlWorkflowInstanceStore != null)
return _sqlWorkflowInstanceStore;
lock (_mutex)
{
if (_sqlWorkflowInstanceStore != null)
return _sqlWorkflowInstanceStore;
// Configure store
_sqlWorkflowInstanceStore = new SqlWorkflowInstanceStore(this._connectionString);
_sqlWorkflowInstanceStore.RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(5);
_sqlWorkflowInstanceStore.InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry;
// Set owner - Store will be read-only beyond this point and will not be configurable anymore
var handle = _sqlWorkflowInstanceStore.CreateInstanceHandle();
var view = _sqlWorkflowInstanceStore.Execute(handle, new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(5));
handle.Free();
_sqlWorkflowInstanceStore.DefaultInstanceOwner = view.InstanceOwner;
}
return _sqlWorkflowInstanceStore;
But then I get this kind of exception:
The execution of an InstancePersistenceCommand was interrupted because the instance owner registration for owner ID '9efb4434-8560-469f-9d03-098a2d48821e' has become invalid. This error indicates that the in-memory copy of all instances locked by this owner have become stale and should be discarded, along with the InstanceHandles. Typically, this error is best handled by restarting the host.
Does anyone know how to make sure that the lock on the workflow is released immediately when the workflow is unloaded?
I've seen some posts doing this with a WorkflowServiceHost (using WorkflowIdleBehavior), but here I'm not using WorkflowServiceHost, I'm using WorkflowApplication.
Thank you for any help!
I suspect the problem is with the InstanceOwner of the SqlWorkflowInstanceStore. It isn't deleted, so the workflow has to wait for the previous owner's lock to time out.
Creating an instance owner
var instanceStore = new SqlWorkflowInstanceStore(connStr);
var instanceHandle = instanceStore.CreateInstanceHandle();
var createOwnerCmd = new CreateWorkflowOwnerCommand();
var view = instanceStore.Execute(instanceHandle, createOwnerCmd, TimeSpan.FromSeconds(30));
instanceStore.DefaultInstanceOwner = view.InstanceOwner;
Deleting an instance owner
var deleteOwnerCmd = new DeleteWorkflowOwnerCommand();
instanceStore.Execute(instanceHandle, deleteOwnerCmd, TimeSpan.FromSeconds(30));
Another possible issue is that when a workflow aborts, the Unloaded callback isn't called.
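As a sketch only: if you keep a reference to the store and to the InstanceHandle used to register the owner (the _store and _ownerHandle fields below are assumptions), you could delete the owner when the workflow unloads, and again in the Aborted callback since Unloaded won't fire there. That should release the lock immediately instead of waiting for the owner's lease to expire:
// Assumption: _store is the SqlWorkflowInstanceStore and _ownerHandle is the
// InstanceHandle used when the owner was registered for this WorkflowApplication.
private void handleUnloaded(WorkflowApplicationEventArgs arg)
{
    this.Logger.DebugFormat("Workflow '{0}' unloaded", arg.InstanceId);

    // Deleting the owner releases the instance lock right away instead of
    // waiting for the owner's lease to expire.
    _store.Execute(_ownerHandle, new DeleteWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
    _ownerHandle.Free();
}

private void handleAborted(WorkflowApplicationAbortedEventArgs arg)
{
    // Unloaded is not raised when the workflow aborts, so release the owner here too.
    _store.Execute(_ownerHandle, new DeleteWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
    _ownerHandle.Free();
}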
I've found multiple online tutorials for establishing WMI connections to remote machines using c#. These tutorials describe a process like the following:
ConnectionOptions cOpts = new ConnectionOptions();
ManagementObjectCollection moCollection;
ManagementObjectSearcher moSearcher;
ManagementScope mScope;
ObjectQuery oQuery;
mScope = new ManagementScope(String.Format("\\\\{0}\\{1}", host.hostname, "ROOT\\CIMV2"), cOpts);
oQuery = new ObjectQuery("Select * from Win32_OperatingSystem");
moSearcher = new ManagementObjectSearcher(mScope, oQuery);
moCollection = moSearcher.Get();
The happy path cases - connecting to a local host, or connecting to a remote host with proper credentials - work fine. I'm working on a project where we need to support the case when the currently logged in account does not have access to the remote host we're attempting to connect to. That is, we need to catch this case, bring the bad credentials to the attention of the user, and prompt them to supply credentials again.
When I specify credentials in my ConnectionOptions object that do not have context on the remote machine, my call to moSearcher.Get() hangs (seemingly) indefinitely. Similarly, a call to the Connect() function in ManagementScope hangs in the same manner.
We have similar logic in place to perform the equivalent WMI commands in c++, and I can report that those return almost immediately if improper credentials are supplied. An appropriate "access is denied" message is returned. The hosts I'm using for test purposes right now are the same ones we use when testing our existing c++ logic, so I have no reason to believe that WMI is incorrectly configured in our environment.
I've searched for timeout issues surrounding WMI connections in c#. I've explored the Timeout property of ConnectionOptions and moSearcher.Options. I've also looked at the ReturnImmediately property of the EnumerationOptions object that can be associated with ManagementObjectSearcher instance. These options did not have the desired effect for me.
I suppose I could perform these WMI commands in a separate thread, and surround the thread with monitoring code that kills it if it hasn't returned in a reasonable amount of time. That seems like a fair amount of work that would be pushed to all consumers of the c# WMI routines, and I'm hoping there's an easier way. Plus, I'm not sure that killing an outstanding thread this way properly cleans up the WMI connection.
Pinging the remote host doesn't do me any good, because knowing the host is up and running does not tell me if the credentials I have are appropriate (and if the c# WMI calls will hang). Is there another way to validate the credentials against a remote host?
It's always possible that there's an obvious flag or API I'm missing, because I would think others have run into this problem. Any information/assistance would be appreciated. Thanks for reading this lengthy post.
I don't know what all your special functions are, but here's a little routine to help you troubleshoot; it wraps your call in a thread and gives it 5 seconds to execute:
void Fake() {
bool ok = false;
ConnectionOptions cOpts = new ConnectionOptions();
ManagementObjectCollection moCollection;
ManagementObjectSearcher moSearcher;
ManagementScope mScope;
ObjectQuery oQuery;
if (cOpts != null) {
mScope = new ManagementScope(String.Format("\\\\{0}\\{1}", host.hostname, "ROOT\\CIMV2"), cOpts);
if (mScope != null) {
oQuery = new ObjectQuery("Select * from Win32_OperatingSystem");
if (oQuery != null) {
moSearcher = new ManagementObjectSearcher(mScope, oQuery);
if (moSearcher != null) {
ManualResetEvent mre = new ManualResetEvent(false);
Thread thread1 = new Thread(() => {
moCollection = moSearcher.Get();
mre.Set();
});
thread1.Start();
ok = mre.WaitOne(5000); // wait 5 seconds
} else {
Console.WriteLine("ManagementObjectSearcher failed");
}
} else {
Console.WriteLine("ObjectQuery failed");
}
} else {
Console.WriteLine("ManagementScope failed");
}
} else {
Console.WriteLine("ConnectionOptions failed");
}
}
Hope that helps or gives you some ideas.
I took jp's suggestion to surround the WMI API calls in a separate thread that could be killed if they exceeded a timeout. When testing, the separate thread threw an exception of type System.UnauthorizedAccessException. I removed the threading logic and added a catch statement to handle this exception type. Sure enough, the exception is caught almost immediately following the call to ManagementObjectSearcher.Get().
try
{
moCollection = moSearcher.Get();
}
catch (System.UnauthorizedAccessException)
{
return Program.ERROR_FUNCTION_FAILED;
}
catch (System.Runtime.InteropServices.COMException)
{
MessageBox.Show("Error, caught COMException.");
return Program.ERROR_FUNCTION_FAILED;
}
(Note that the System.Runtime.InteropServices.COMException catch statement already existed in my code)
I don't know why this exception isn't thrown (or at least is not brought to the user's attention through the VS 2010 IDE) when executed as part of the parent thread. In any event, this is exactly what I was looking for, and is consistent with the behavior of the WMI connection routines in c++.
I have two Windows applications: one is a Windows service which creates an EventWaitHandle and waits on it. The second is a Windows GUI application which opens it by calling EventWaitHandle.OpenExisting() and tries to Set the event. But I am getting an exception in OpenExisting: "Access to the path is denied".
Windows service code
EventWaitHandle wh = new EventWaitHandle(false, EventResetMode.AutoReset, "MyEventName");
wh.WaitOne();
Windows GUI code
try
{
EventWaitHandle wh = EventWaitHandle.OpenExisting("MyEventName");
wh.Set();
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
I tried the same code with two sample console applications, and it worked fine.
You need to use the version of the EventWaitHandle constructor that takes an EventWaitHandleSecurity instance. For example, the following code should work (it's not tested, but hopefully will get you started):
// create a rule that allows anybody in the "Users" group to synchronise with us
var users = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
var rule = new EventWaitHandleAccessRule(users, EventWaitHandleRights.Synchronize | EventWaitHandleRights.Modify,
AccessControlType.Allow);
var security = new EventWaitHandleSecurity();
security.AddAccessRule(rule);
bool created;
var wh = new EventWaitHandle(false, EventResetMode.AutoReset, "MyEventName", out created, security);
...
Also, if you're running on Vista or later, you need to create the event in the global namespace (that is, prefix the name with "Global\"). You'd also have to do this on Windows XP if you use the "Fast User Switching" feature.
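On the GUI side you can then open it by its global name; a minimal sketch, assuming the service created the event as "Global\MyEventName" with the security descriptor above:
using System.Security.AccessControl;
using System.Threading;

// Assumption: the service created the event as @"Global\MyEventName" using the
// EventWaitHandleSecurity shown above.
EventWaitHandle wh = EventWaitHandle.OpenExisting(
    @"Global\MyEventName",
    EventWaitHandleRights.Synchronize | EventWaitHandleRights.Modify);
wh.Set();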
This might be caused by the service process running at an elevated privilege level, but the GUI process is not. If you put the same code into two console apps, they'll both be running at user level and won't have any trouble accessing each other's named shared objects.
Try running the GUI app with the "Run as administrator" flag from the Windows start menu. If that solves the issue, you need to read up on how to request elevation within your code. (I haven't done that)
I want to kill a process programmatically in Vista/Windows 7 (I'm not sure if there are significant differences in the UAC implementation between the two that would matter).
Right now, my code looks like:
if(killProcess){
System.Diagnostics.Process[] process = System.Diagnostics.Process.GetProcessesByName("MyProcessName");
// Before starting the new process make sure no other MyProcessName is running.
foreach (System.Diagnostics.Process p in process)
{
p.Kill();
}
myProcess = System.Diagnostics.Process.Start(psi);
}
I have to do this because I need to make sure that if the user crashes the program or exits abruptly, this secondary process gets restarted when the application is restarted, and also so the user can change the parameters for this secondary process.
The code works fine in XP, but fails in Windows 7 (and I assume in Vista) with an 'access is denied' message. From what the Almighty Google has told me, I need to run my killing program as administrator to get around this problem, but that's just weak sauce. The other potential answer is to use LinkDemand, but I don't understand the msdn page for LinkDemand as it pertains to processes.
I could move the code into a thread, but that has a whole host of other difficulties inherent to it that I really don't want to discover.
You are correct in that it's because you don't have administrative privileges. You can solve this by installing a service that runs under the Local System account and sending it a custom command as needed.
In your windows form app:
private enum SimpleServiceCustomCommands { KillProcess = 128 };
ServiceControllerPermission scp = new ServiceControllerPermission(ServiceControllerPermissionAccess.Control, Environment.MachineName, "SERVICE_NAME");
scp.Assert();
System.ServiceProcess.ServiceController serviceCon = new System.ServiceProcess.ServiceController("SERVICE_NAME", Environment.MachineName);
serviceCon.ExecuteCommand((int)SimpleServiceCustomCommands.KillProcess);
myProcess = System.Diagnostics.Process.Start(psi);
In your service:
private enum SimpleServiceCustomCommands { KillProcess = 128 };
protected override void OnCustomCommand(int command)
{
switch (command)
{
case (int)SimpleServiceCustomCommands.KillProcess:
if(killProcess)
{
System.Diagnostics.Process[] process = System.Diagnostics.Process.GetProcessesByName("MyProcessName");
// Before starting the new process make sure no other MyProcessName is running.
foreach (System.Diagnostics.Process p in process)
{
p.Kill();
}
}
break;
default:
break;
}
}
I'll add the code for Simon Buchan's suggestion. It makes sense and should work as well, assuming your Windows Forms app is what launched the process in the first place.
Here's where you create the process. Notice the variable myProc. That's your handle on it:
System.Diagnostics.Process myProc = new System.Diagnostics.Process();
myProc.EnableRaisingEvents = false;
myProc.StartInfo.FileName = "PATH_TO_EXE";
myProc.Start();
Later, just kill it with:
myProc.Kill();
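If the process might have exited on its own by the time you get there, a slightly more defensive variant (a sketch, assuming myProc is the Process started above):
// Assumption: myProc is the Process instance started above.
if (myProc != null && !myProc.HasExited)
{
    try
    {
        myProc.Kill();
        myProc.WaitForExit(5000); // give it up to five seconds to terminate
    }
    catch (InvalidOperationException)
    {
        // the process exited between the HasExited check and the Kill call
    }
    catch (System.ComponentModel.Win32Exception)
    {
        // the process could not be terminated (e.g. insufficient rights)
    }
}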