My question is: "Can this be done better?" and if so, how? Any ideas?
We need to start a captive IE session from within an "invisible" C# .NET 3.5 application, and quit both the IE session and the "parent" application after processing a certain request.
I've been mucking around with this problem for the last week or so, and this morning I've finally reached what I think is a robust solution; but I'm a bit of a C# noob (though I've been a professional programmer for 10 years), so I'm seeking a second or third opinion, and any other options, critiques, suggestions, or comments. Especially: is SHDocVw still the preferred method of creating a "captive but not embedded" Internet Explorer session?
As I see things, the tricky bit is disposing of the unmanaged InternetExplorerApplication COM object, so I've wrapped it in an IDisposable class called InternetExplorer.
My basic approach is:
Application.Run(MyApp), where MyApp is-a ApplicationContext and implements IQuitable.
I think an app is needed to keep the program open whilst we wait for the IE request?
I guess maybe a (non-daemon) listener-loop thread might also work?
MyApp's constructor creates a new InternetExplorer object, passing (IQuitable)this.
InternetExplorer's constructor starts a new IE session and navigates it to a URL.
When a certain URL is requested, InternetExplorer calls back the parent's Quit method.
Background:
The real story is: I'm writing a plugin for MapInfo (a GIS client). The plugin hijacks the "Start Extraction" HTTP request from IE to the server, modifies the URL slightly and sends an HTTPRequest in its place. We parse the response XML into MIF files, which we then import and open in MapInfo. Then we quit the IE session and close the "plugin" application.
SSCCE
using System;
using System.Windows.Forms;
// IE COM interface
// reference ~ C:\Windows\System32\SHDocVw.dll
using SHDocVw;

namespace QuitAppFromCaptiveIE
{
    static class Program {
        [STAThread]
        static void Main() {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            Application.Run(new MyApp());
        }
    }

    interface IQuitable {
        void Quit();
    }

    public class MyApp : ApplicationContext, IQuitable {
        private InternetExplorer ie = null;

        public MyApp() {
            // create a new Internet Explorer COM component - starts the IE application.
            this.ie = new InternetExplorer(this);
            this.ie.Open("www.microsoft.com");
        }

        #region IQuitable Members
        public void Quit() {
            if (ie != null) {
                ie.Dispose();
                ie = null;
            }
            Application.Exit();
        }
        #endregion
    }

    class InternetExplorer : IDisposable, IQuitable
    {
        // allows us to end the parent application when IE is closed.
        private IQuitable parent;
        private bool _parentIsQuited = false;
        private bool _ieIsQuited = false;

        private SHDocVw.InternetExplorer ie; // Old (VB4-era) IE COM component

        public InternetExplorer(IQuitable parent) {
            // lock onto the parent app to quit it when IE is closed.
            this.parent = parent;
            // create a new Internet Explorer COM component - starts the IE application.
            this.ie = new SHDocVw.InternetExplorerClass();
            // hook up our navigate-event interceptor
            ie.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(ie_BeforeNavigate2);
        }

        public void Open(string url) {
            object o = null;
            // make the captive IE session navigate to the given URL.
            ie.Navigate(url, ref o, ref o, ref o, ref o);
            // now make the IE window visible
            ie.Visible = true;
        }

        // This callback event handler is invoked prior to the captive IE
        // session navigating to (opening) a URL. Navigate-TWO handles both
        // external (normal) and internal (AJAX) requests.
        // For example: you could create a history-log file of every page
        // visited by each captive session.
        // Being fired BEFORE the actual navigation allows you to hijack
        // (i.e. intercept) requests to certain URLs; in this case a request
        // to http://support.microsoft.com/ terminates the browser session
        // and this program!
        void ie_BeforeNavigate2(object pDisp, ref object URL, ref object Flags, ref object TargetFrameName, ref object PostData, ref object Headers, ref bool Cancel) {
            if (URL.Equals("http://support.microsoft.com/")) {
                this.Quit();
            }
        }

        #region IDisposable Members
        public void Dispose() {
            quitIE();
        }
        #endregion

        private void quitIE() {
            // close my unmanaged COM object
            if (ie != null && !_ieIsQuited) {
                _ieIsQuited = true;
                ie.Quit();
                ie = null;
            }
        }

        #region IQuitable Members
        public void Quit() {
            // close my unmanaged COM object
            quitIE();
            // quit the parent app as well.
            if (parent != null && !_parentIsQuited) {
                _parentIsQuited = true;
                parent.Quit();
                parent = null;
            }
        }
        #endregion
    }
}
I'm reasonably sure that System.Windows.Forms.WebBrowser actually uses the IE Trident browser control internally. It shouldn't be necessary to do COM interop unless you are using C# 1.x.
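For what it's worth, here's a minimal, untested sketch of that managed-control approach: the WinForms WebBrowser exposes a Navigating event that plays roughly the same role as BeforeNavigate2, though it embeds the browser in a form you own rather than driving a separate IE window. The URL and shutdown behaviour just mirror the SSCCE above.
// Sketch only: intercepting navigation with System.Windows.Forms.WebBrowser
// instead of SHDocVw interop.
using System;
using System.Windows.Forms;

class BrowserForm : Form
{
    private readonly WebBrowser browser = new WebBrowser { Dock = DockStyle.Fill };

    public BrowserForm()
    {
        Controls.Add(browser);
        browser.Navigating += OnNavigating;              // fires before each navigation
        browser.Navigate("http://www.microsoft.com");
    }

    private void OnNavigating(object sender, WebBrowserNavigatingEventArgs e)
    {
        // Hijack the "magic" URL: cancel the navigation and shut the app down.
        if (e.Url.ToString().StartsWith("http://support.microsoft.com/"))
        {
            e.Cancel = true;
            Application.Exit();
        }
    }
}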
The long and the short of it appears to be (I'm still NOT an expert, by any stretch of the imagination) that SHDocVw.dll is still the preferred method for launching a "captive" Internet Explorer session (as opposed to embedding a browser in your application).
The code I posted previously isn't the best solution, IMHO. In the final version:
IQuitable is history
Both MyApp and InternetExplorer classes implement IDisposable
Both Dispose methods just return if _isDisposed is true.
The following code includes some pseudocode, for brevity:
private volatile bool _isDisposed = false;

/**
 * _isDisposed stops the two "partners" in the conversation (us and
 * Internet Explorer) from going into "infinite recursion" by calling
 * each other's Dispose methods from within their Dispose methods.
 *
 * NOTE: I **think** that making _isDisposed volatile deals adequately
 * with the inherent race condition, but I'm NOT certain! Comments
 * welcome on this one.
 */
public void Dispose() {
    if (!_isDisposed) {
        _isDisposed = true;
        try {
            try { release my unmanaged resources } catch { log }
            try {
                IE calls MyApp.Dispose() here // and FALL OUT on failure
                MyApp calls IE.Dispose() here
            } catch {
                log
            }
        } finally {
            base.Dispose(); // ALWAYS dispose the base, no matter what!
        }
    }
}
To quit the application from the IE class you just call its local Dispose method, which calls MyApp's Dispose, which calls IE's Dispose again, but _isDisposed is now true so it just returns. Then we call Application.ExitThread() and fall out of MyApp's Dispose... and then we fall out of IE's Dispose method, the event system stops, and the application terminates nicely. Finally!
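For anyone who wants that spelled out, here is a compressed, untested sketch of one side of the handshake (the class and member names are from my example above; MyApp's Dispose mirrors this exactly, and additionally calls Application.ExitThread()):
// Sketch only: the InternetExplorer wrapper's side of the mutual-dispose handshake.
class InternetExplorer : IDisposable
{
    private MyApp parent;                      // the "partner" that owns us
    private SHDocVw.InternetExplorer ie;       // the unmanaged COM object
    private volatile bool _isDisposed = false;

    public void Dispose()
    {
        if (_isDisposed) return;               // second call (from MyApp) just falls out
        _isDisposed = true;
        try { if (ie != null) { ie.Quit(); ie = null; } } catch { /* log */ }
        try { if (parent != null) { parent.Dispose(); parent = null; } } catch { /* log */ }
    }
}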
NOTE ON EDIT: I've just reused this approach with a Robo framework process, which runs as a "captive process" of MyApp... the thing it's remote-controlling. Convoluted, but it works... so I've updated my self-answer with my latest learnings while I was here.
Background info
I am writing an integration test that spawns a child process (a C# console app). The test is counting some rows in the database after the process is spun up and after the process is closed. The process is closed via process.Kill().
When the process is killed in this manner, it doesn't hit the Stop method within the process. I need to call this stop method to stop threads and remove entries from the database in order for the test to pass.
Original Code
The console app process that I am spawning in my test:
static void Main(string[] args)
{
    TaskManager tm = new TaskManagerProcess();

    if (Environment.UserInteractive ||
        (args.EmptyForNull().Any(a => a.Equals("-RunInteractive", StringComparison.OrdinalIgnoreCase) || a.Equals("/RunInteractive"))))
    {
        tm.ConsoleStart(args);
        Console.WriteLine("Press [Enter] to shut down, any other key to mark");
        while (true)
        {
            ConsoleKeyInfo key = Console.ReadKey(true);
            if (key.Key == ConsoleKey.Enter)
                break;
            Console.WriteLine("========================================================");
            Console.Out.Flush();
        }
        Console.WriteLine("Shutting down...");
        tm.ConsoleStop();
    }
    else
    {
        ServiceBase.Run(tm);
    }
}
The test code:
//count before starting child proc
int preCount;
//count after process is spun up
int runningCount;
//count after stopped
int postCount;
//Get an initial count of the logged in modules before svc host is started
user = ApiMethod.GetLoggedInUsers().Where(x => x.RecId == userRecID).FirstOrDefault();
preCount = user.LoggedInModules.Count;
Process proc = Helper.StartProcess(ConnectionBundle);
//Give process time to spin up leaders and workers
await Task.Delay(TimeSpan.FromSeconds(30));
//Get a count of modules after process is spun up
user = ApiMethod.GetLoggedInUsers().Where(x => x.RecId == userRecID).FirstOrDefault();
runningCount = user.LoggedInModules.Count;
//Write a line terminator to the child svc host process -
//this allows it to shutdown normally
Helper.ProcessInput.WriteLine();
Helper.ProcessInput.Close();
Helper.KillProcess(proc);
await Task.Delay(TimeSpan.FromSeconds(5));
//Get count of logged in modules after process is closed
user = ApiMethod.GetLoggedInUsers().Where(x => x.RecId == userRecID).FirstOrDefault();
postCount = user.LoggedInModules.Count;
Helper is a static class that sets up the process start info (including args) and starts the process. In Helper I've redirected the StandardInput and added a property ProcessInput which is set to the StandardInput of the created process.
My goal is to send input of "Enter" from the test to the spawned process so that it will break from the loop and call tm.ConsoleStop().
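For reference, the redirection in Helper amounts to something like this (paraphrased sketch, not the actual Helper code; the file name and property usage are illustrative):
// Sketch of the StandardInput redirection assumed to be in Helper.StartProcess:
var psi = new ProcessStartInfo
{
    FileName = "TaskManagerConsole.exe",   // hypothetical path to the console app
    UseShellExecute = false,               // must be false to redirect streams
    RedirectStandardInput = true
};
Process proc = Process.Start(psi);
ProcessInput = proc.StandardInput;         // exposed so the test can write to the child

// Later, the test asks the child to shut down cleanly:
ProcessInput.WriteLine();                  // read by the child as an empty line / Enter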
TaskManagerProcess is a private custom class that controls the process. It does not inherit from System.Diagnostics.Process. As an alternate approach, my test could interact with TaskManagerProcess directly. However, I can't make TaskManagerProcess public and I need to run TaskManagerProcess in its own AppDomain because calling ConsoleStop is disposing objects in the API that I need to finish the test.
Things I've Tried
[DllImport("Kernel32")]
private static extern bool SetConsoleCtrlHandler(CloseProcDelgate handler, bool add);
I tried adding a call to Kernel32.SetConsoleCtrlHandler (and the necessary delegate) to call ConsoleStop when the process is exited. This doesn't seem to work when the process is killed via process.Kill()
With the original process code, I noticed an exception when I wrote to the StandardInput. The exception message told me to use Console.Read instead of Console.ReadKey(). This actually works intermittently! I can sometimes get a breakpoint on int cKey = Console.Read() (with debugger attached to child process) but other times it doesn't hit the breakpoint.
while (true)
{
    //Changing this to Console.Read instead of Console.ReadKey
    //Allows us to send redirected input to process?
    int cKey = Console.Read();
    if ((ConsoleKey)cKey == ConsoleKey.Enter)
        break;
    Console.WriteLine("========================================================");
    Console.Out.Flush();
}
Finally, I tried interacting with TaskManagerProcess directly. I made the private class internal, and marked the internals visible to my test assembly. I cannot make the class public.
When I go this route, calling tm.ConsoleStop() blows away some objects in my API, so I can't check the count after this method is called. For this reason, I thought I would create a new AppDomain and call AppDomain.CreateInstanceAndUnwrap() on the TaskManagerProcess class. However, I get an exception here; I believe it's due to the fact that the class is internal.
I am really stuck at this point! Any help is appreciated and thanks for taking the time to read this!
Edit
I created a demo project here that shows what I am trying to do and has both approaches in the Test method.
Initially I thought I couldn't call AppDomain.CreateInstanceAndUnwrap() because the TaskManagerProcess class was internal. However, after playing with my demo project, I think I just can't load the assembly.
I'm guessing here, but I believe your TaskManagerProcess is a service application. If it is not, please ignore this. If it is, be advised that you should include details like this in your question. Debugging service applications can be complicated, believe me, I've been there. But before proceeding, some more advice:
Test the methods in your modules, not whole running programs, as Michael Randall just said.
Unless absolutely necessary, don't do tests against a database. Mock whatever you need to test your code.
You should go back to your alternate approach of interacting with TaskManagerProcess directly. From the code of your console app, the only working method I see called is tm.ConsoleStart(args); the rest inside the loop is console writing and reading. So you can't change the access level of that class; again, I've been there. What I have done in the past to overcome this is to use conditional compilation to create a kind of public facade in my private or internal modules.
Suppose you have:
internal class TaskManagerContainer
{
    private class TaskManagerProcess
    {
        internal void Start()
        {
            // stuff
        }

        private void DoSomething(int arg)
        {
            // more stuff
        }
    }
}
Change it like this:
#define TEST
// Symbol TEST can also be defined using the GUI of your IDE or the compiler's /define option

internal class TaskManagerContainer
{
#if TEST
    public class TaskManagerProcess
#else
    private class TaskManagerProcess
#endif
    {
        internal void Start()
        {
            // stuff
        }

        private void DoSomething(int arg)
        {
            // more stuff
        }

        #region Methods Facade for Testing
#if TEST
        public void Start_Test()
        {
            Start();
        }

        public void DoSomething_Test(int arg)
        {
            DoSomething(arg);
        }
#endif
        #endregion
    }
}
I really hope this helps you make the methods visible to the test assembly, and that it won't blow away objects in your API.
I think I got it with a brute force approach.
while (!testProcess.HasExited)
{
    testProcess.StandardInput.WriteLine();
}
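If you're worried about that loop spinning forever when the child never exits, a bounded variant might be safer (sketch only; the timeout value is arbitrary):
// Sketch: give the child a limited time to shut down cleanly, then fall back to Kill().
var deadline = DateTime.UtcNow.AddSeconds(10);
while (!testProcess.HasExited && DateTime.UtcNow < deadline)
{
    testProcess.StandardInput.WriteLine();   // nudge the child's Console.Read loop
    Thread.Sleep(100);
}
if (!testProcess.HasExited)
    testProcess.Kill();                      // last resort; skips ConsoleStop()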
Thanks everyone for the input!
I've got a problem with making calls to a third-party C++ dll which I've wrapped in a class using DllImport to access its functions.
The dll demands that before use a session is opened, which returns an integer handle used to refer to that session when performing operations. When finished, one must close the session using the same handle. So I did something like this:
public void DoWork(string input)
{
    int apiHandle = DllWrapper.StartSession();

    try
    {
        // do work using the apiHandle
    }
    catch (ApplicationException ex)
    {
        // log the error
    }
    finally
    {
        DllWrapper.CloseSession(apiHandle);
    }
}
The problem I have is that CloseSession() sometimes causes the Dll in question to throw an error when running threaded:
System.AggregateException: One or more errors occurred. --->
System.AccessViolationException: Attempted to read or write protected
memory. This is often an indication that other memory is corrupt.
I'm not sure there's much I can do about stopping this error, since it seems to be arising from using the Dll in a threaded manner - it is supposed to be thread safe. But since my CloseSession() function does nothing except call that Dll's close function, there's not much wiggle room for me to "fix" anything.
The end result, however, is that the session doesn't close properly. So when the process tries again, which it's supposed to do, it encounters an open session and just keeps throwing new errors. That session absolutely has to be closed.
I'm at a loss as to how to design an error handling statement that's more robust and will ensure the session always closes.
I would change the wrapper to include disposal of the external resource and to also wrap the handle. I.e. instead of representing a session by a handle, you would represent it by a wrapper object.
Additionally, wrapping the calls to the DLL in lock statements (as @Serge suggests) could prevent the multithreading issues completely. Note that the lock object is static, so that all DllWrappers are using the same lock object.
public class DllWrapper : IDisposable
{
    private static object _lockObject = new object();
    private int _apiHandle;
    private bool _isOpen;

    public void StartSession()
    {
        lock (_lockObject) {
            _apiHandle = ...; // TODO: open the session
        }
        _isOpen = true;
    }

    public void CloseSession()
    {
        const int MaxTries = 10;
        for (int i = 0; _isOpen && i < MaxTries; i++) {
            try {
                lock (_lockObject) {
                    // TODO: close the session
                }
                _isOpen = false;
            } catch {
            }
        }
    }

    public void Dispose()
    {
        CloseSession();
    }
}
Note that the methods are instance methods, now.
Now you can ensure the closing of the session with a using statement:
using (var session = new DllWrapper()) {
    try {
        session.StartSession();
        // TODO: work with the session
    } catch (ApplicationException ex) {
        // TODO: log the error
        // This is for exceptions not related to closing the session. If such exceptions
        // cannot occur, you can drop the try-catch completely.
    }
} // Closes the session automatically by calling `Dispose()`.
You can improve naming by calling this class Session and the methods Open and Close. The user of this class does not need to know that it is a wrapper. This is just an implementation detail. Also, the naming of the methods is now symmetrical and there is no need to repeat the name Session.
By encapsulating all the session related stuff, including error handling, recovery from error situations and disposal of resources, you can considerably diminish the mess in your code. The Session class is now a high-level abstraction. The old DllWrapper was somewhere at mid distance between low-level and high-level.
I'm writing a windows phone app which stores data in a local database. There are multiple threads in my app that access the database and up until this point I have used the technique described here with an AutoResetEvent to ensure that only one thread can access the database at any one time.
So far this has worked very reliably, but now I want to add a ScheduledTask to do some work in the background so I've potentially got multiple processes now competing for access to the database.
Can anyone advise how I can adapt the AutoResetEvent technique to be used across multiple processes on Windows Phone?
I have seen approaches using a Mutex. If I acquire the Mutex before each DB call and then release it afterwards (similar to the way I'm using AutoResetEvent), will this do the trick? Are there any potential problems with this technique, e.g. performance?
OK, so first of all, my problem was actually two problems:
Need to ensure that if the foreground app is running, the background process won't run.
Need to ensure that only one thread can access the database at once; this needs to work across processes, to cater for the (admittedly rare, but possible) scenario where the foreground app is started while the background process is in progress.
Based on the good work done in this thread, I created a couple of classes to help.
To solve problem (1), I created the SingleInstanceSynchroniser:
/// <summary>
/// Used to ensure only one instance (foreground app or background app) runs at once
/// </summary>
public class SingleInstanceSynchroniser : IDisposable
{
    private bool hasHandle = false;
    Mutex mutex;

    private void InitMutex()
    {
        string mutexId = "Global\\SingleInstanceSynchroniser";
        mutex = new Mutex(false, mutexId);
    }

    public SingleInstanceSynchroniser()
    {
        InitMutex();
        hasHandle = mutex.WaitOne(0);
    }

    public void Dispose()
    {
        if (hasHandle && mutex != null)
            mutex.ReleaseMutex();
    }

    public bool HasExclusiveHandle { get { return hasHandle; } }
}
Usage:
In App.xaml.cs:
...
SingleInstanceSynchroniser singleInstanceSynchroniser;
public App()
{
singleInstanceSynchroniser = new SingleInstanceSynchroniser();
...
In ScheduledAgent.cs:
SingleInstanceSynchroniser singleInstanceSynchroniser;

protected override void OnInvoke(ScheduledTask task)
{
    singleInstanceSynchroniser = new SingleInstanceSynchroniser();

    if (singleInstanceSynchroniser.HasExclusiveHandle)
    {
        //Run background process
        ...
    }
    else
    {
        //Do not run if foreground app is running
        NotifyComplete();
    }
}
To solve problem (2), I created the SingleAccessSynchroniser:
/// <summary>
/// Used to ensure only one call is made to the database at once
/// </summary>
public class SingleAccessSynchroniser : IDisposable
{
    public bool hasHandle = false;
    Mutex mutex;

    private void InitMutex()
    {
        string mutexId = "Global\\SingleAccessSynchroniser";
        mutex = new Mutex(false, mutexId);
    }

    public SingleAccessSynchroniser() : this(0)
    { }

    public SingleAccessSynchroniser(int TimeOut)
    {
        InitMutex();
        if (TimeOut <= 0)
            hasHandle = mutex.WaitOne();
        else
            hasHandle = mutex.WaitOne(TimeOut);
        if (hasHandle == false)
            throw new TimeoutException("Timeout waiting for exclusive access on SingleInstance");
    }

    public void Release()
    {
        if (hasHandle && mutex != null)
        {
            mutex.ReleaseMutex();
            hasHandle = false;
        }
    }

    public void Dispose()
    {
        Release();
    }
}
Usage: In all database calls:
using (var dbSync = new SingleAccessSynchroniser())
{
    //Execute your database calls
}
This has been running reliably for a few weeks now. Hope someone else finds it useful.
I ran into some problems using Ben's solution on Windows Phone 8. Please see this thread for a complete documentation of the problems.
I was able to resolve the issues by removing "Global\" from "Global\SingleInstanceSynchroniser".
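In other words, the only change needed is to the mutex name in InitMutex (sketch):
private void InitMutex()
{
    // On Windows Phone 8 the "Global\" prefix caused the problems described above,
    // so use a local name instead:
    string mutexId = "SingleInstanceSynchroniser";
    mutex = new Mutex(false, mutexId);
}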
Concurrent access to a database between an agent and an app shouldn't be an issue. In fact, using Linq2SQL is one of the recommended ways for communicating between the app and agent.
In practice, it's rarely necessary for the app and agent to run at the same time and so it may be more appropriate to prevent that happening instead.
Potential performance issues will be dependent upon what you're doing. You'll need to measure this to see if it's really an issue.
I am trying to automate multiple parallel instances of Office InfoPath 2010 via a windows service. I understand automating Office from a service is not supported however it is a requirement of my customer.
I can automate other Office applications in a parallel fashion, however InfoPath behaves differently.
What I have found is that there will only ever be one instance of the INFOPATH.EXE process created, no matter how many parallel calls to CreateObject("InfoPath.Application") are made. In contrast to this, multiple instances of WINWORD.EXE can be created via the similar mechanism CreateObject("Word.Application")
To reproduce this issue, a simple console application can be used.
static void Main(string[] args) {
    // Create two instances of word in parallel
    ThreadPool.QueueUserWorkItem(Word1);
    ThreadPool.QueueUserWorkItem(Word2);

    System.Threading.Thread.Sleep(5000);

    // Attempt to create two instances of infopath in parallel
    ThreadPool.QueueUserWorkItem(InfoPath1);
    ThreadPool.QueueUserWorkItem(InfoPath2);
}

static void Word1(object context) {
    OfficeInterop.WordTest word = new OfficeInterop.WordTest();
    word.Test();
}

static void Word2(object context) {
    OfficeInterop.WordTest word = new OfficeInterop.WordTest();
    word.Test();
}

static void InfoPath1(object context) {
    OfficeInterop.InfoPathTest infoPath = new OfficeInterop.InfoPathTest();
    infoPath.Test();
}

static void InfoPath2(object context) {
    OfficeInterop.InfoPathTest infoPath = new OfficeInterop.InfoPathTest();
    infoPath.Test();
}
The InfoPathTest and WordTest classes (VB) are in another project.
Public Class InfoPathTest
    Public Sub Test()
        Dim ip As Microsoft.Office.Interop.InfoPath.Application
        ip = CreateObject("InfoPath.Application")
        System.Threading.Thread.Sleep(5000)
        ip.Quit(False)
    End Sub
End Class

Public Class WordTest
    Public Sub Test()
        Dim app As Microsoft.Office.Interop.Word.Application
        app = CreateObject("Word.Application")
        System.Threading.Thread.Sleep(5000)
        app.Quit(False)
    End Sub
End Class
The interop classes simply create the automation objects, sleep and then quit (although in the case of Word, I have completed more complex tests).
When running the console app, I can see (via Task Manager) two WINWORD.EXE processes created in parallel, and only a single INFOPATH.EXE process created. In fact when the first instance of InfoPathTest calls ip.Quit, the INFOPATH.EXE process terminates. When the second instance of InfoPathTest calls ip.Quit, a DCOM timeout exception is thrown - it appears as though the two instances were sharing the same underlying automation object, and that object no longer exists after the first call to ip.Quit.
At this stage my thoughts were only a single INFOPATH.EXE is supported per user login. I expanded the windows service to start two new processes (a console application called InfoPathTest), each running under a different user account. These new processes would then attempt to automate INFOPATH.EXE
Here's where it gets interesting, this actually works, but only on some machines, and I cannot figure out why that is the case.
And the service code (with help from AsproLock):
public partial class InfoPathService : ServiceBase {
    private Thread _mainThread;
    private bool isStopping = false;

    public InfoPathService() {
        InitializeComponent();
    }

    protected override void OnStart(string[] args) {
        if (_mainThread == null || _mainThread.IsAlive == false) {
            _mainThread = new Thread(ProcessController);
            _mainThread.Start();
        }
    }

    protected override void OnStop() {
        isStopping = true;
    }

    public void ProcessController() {
        while (isStopping == false) {
            try {
                IntPtr hWinSta = GetProcessWindowStation();
                WindowStationSecurity ws = new WindowStationSecurity(hWinSta, System.Security.AccessControl.AccessControlSections.Access);
                ws.AddAccessRule(new WindowStationAccessRule("user1", WindowStationRights.AllAccess, System.Security.AccessControl.AccessControlType.Allow));
                ws.AddAccessRule(new WindowStationAccessRule("user2", WindowStationRights.AllAccess, System.Security.AccessControl.AccessControlType.Allow));
                ws.AcceptChanges();

                IntPtr hDesk = GetThreadDesktop(GetCurrentThreadId());
                DesktopSecurity ds = new DesktopSecurity(hDesk, System.Security.AccessControl.AccessControlSections.Access);
                ds.AddAccessRule(new DesktopAccessRule("user1", DesktopRights.AllAccess, System.Security.AccessControl.AccessControlType.Allow));
                ds.AddAccessRule(new DesktopAccessRule("user2", DesktopRights.AllAccess, System.Security.AccessControl.AccessControlType.Allow));
                ds.AcceptChanges();

                ThreadPool.QueueUserWorkItem(Process1);
                ThreadPool.QueueUserWorkItem(Process2);
            } catch (Exception ex) {
                System.Diagnostics.Debug.WriteLine(String.Format("{0}: Process Controller Error {1}", System.Threading.Thread.CurrentThread.ManagedThreadId, ex.Message));
            }
            Thread.Sleep(15000);
        }
    }

    private static void Process1(object context) {
        SecureString pwd2;
        Process process2 = new Process();
        process2.StartInfo.FileName = @"c:\debug\InfoPathTest.exe";
        process2.StartInfo.UseShellExecute = false;
        process2.StartInfo.LoadUserProfile = true;
        process2.StartInfo.WorkingDirectory = @"C:\debug\";
        process2.StartInfo.Domain = "DEV01";
        pwd2 = new SecureString(); foreach (char c in "password") { pwd2.AppendChar(c); };
        process2.StartInfo.Password = pwd2;
        process2.StartInfo.UserName = "user1";
        process2.Start();
        process2.WaitForExit();
    }

    private static void Process2(object context) {
        SecureString pwd2;
        Process process2 = new Process();
        process2.StartInfo.FileName = @"c:\debug\InfoPathTest.exe";
        process2.StartInfo.UseShellExecute = false;
        process2.StartInfo.LoadUserProfile = true;
        process2.StartInfo.WorkingDirectory = @"C:\debug\";
        process2.StartInfo.Domain = "DEV01";
        pwd2 = new SecureString(); foreach (char c in "password") { pwd2.AppendChar(c); };
        process2.StartInfo.Password = pwd2;
        process2.StartInfo.UserName = "user2";
        process2.Start();
        process2.WaitForExit();
    }

    [DllImport("user32.dll", SetLastError = true)]
    public static extern IntPtr GetProcessWindowStation();

    [DllImport("user32.dll", SetLastError = true)]
    public static extern IntPtr GetThreadDesktop(int dwThreadId);

    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern int GetCurrentThreadId();
}
The InfoPathTest.exe process simply calls the InfoPathTest.Test() method detailed above.
In summary, this works, but only on certain machines. When it fails, the second INFOPATH.EXE process is actually created, but immediately quits with an exitcode of 0. There is nothing in the event logs, nor any exceptions in the code.
I've looked at many things to try and differentiate between working / non working machines, but I'm now stuck.
Any pointers appreciated, especially if you have other thoughts on how to automate multiple InfoPath instances in parallel.
I'm guessing you'd get similar behavior if you tried to do the same thing with Outlook, which would mean Microsoft thinks it is a bad idea to run multiple copies.
If that is so, I see two options.
Option one is to make your Infopath automation synchronous, running one instance at a time.
Option two, and I have NO idea if it would even work, would be to see if you can launch virtual machines to accomplish your InfoPath work.
I hope this can at least spark some new train of thought that will lead to success.
I’ve encountered a very similar issue with Outlook. The restriction of allowing only a single instance of the application to be running does not apply per user, but per interactive login session. You may read more about it in Investigating Outlook's Single-Instance Restriction:
Outlook was determining whether or not another instance was already running in the interactive login session. […] During Outlook's initialization, it checks to see if a window named "Microsoft Outlook" with class name "mspim_wnd32" exists, and if so, it assumes that another instance is already running.
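To make that concrete (illustration only; the window and class names below are Outlook's, taken straight from the quote, and InfoPath's equivalents, if any, aren't documented), the startup check described there boils down to a FindWindow call:
// Illustration only: the kind of single-instance check the quoted article describes.
[DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

static bool AnotherInstanceIsRunning()
{
    // Outlook looks for a window of class "mspim_wnd32" named "Microsoft Outlook".
    return FindWindow("mspim_wnd32", "Microsoft Outlook") != IntPtr.Zero;
}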
There are ways of hacking around it – there is a tool for launching multiple Outlook instances on the Hammer of God site (scroll down) – but they will probably involve intercepting Win32 calls.
As for your code only working on certain machines: That’s probably due to a race condition. If both processes manage to start up fast enough simultaneously, then they won’t detect each other’s window, and assume that they’re the only instance running. However, if the machine is slow, one process would open its window before the other, thereby causing the second process to detect the first process’s window and shut itself down. To reproduce, try introducing a delay of several seconds between launching the first process and the second – this way, only the first process should ever succeed.
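In your ProcessController that amounts to something like this (sketch; the delay value is arbitrary):
// Sketch: stagger the two launches so the second INFOPATH.EXE reliably sees
// (or, for reproduction purposes, reliably fails to see) the first one's window.
ThreadPool.QueueUserWorkItem(Process1);
Thread.Sleep(TimeSpan.FromSeconds(10));   // delay between the two launches
ThreadPool.QueueUserWorkItem(Process2);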
I have a class that instantiates a COM exe out of process. The class is
public class MyComObject : IDisposable
{
    private bool disposed = false;
    MyMath test;

    public MyComObject()
    {
        test = new MyMath();
    }

    ~MyComObject()
    {
        Dispose(false);
    }

    public double GetRandomID()
    {
        if (test != null)
            return test.RandomID();
        else
            return -1;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        if (test != null)
        {
            Marshal.ReleaseComObject(test);
            test = null;
        }
        disposed = true;
    }
}
and I call it as follows
static void Main(string[] args)
{
    MyComObject test = new MyComObject();
    MyComObject test2 = new MyComObject();

    //Do stuff

    test.Dispose();
    test2.Dispose();

    Console.ReadLine();
}
Now, this cleans up my COM object when the program executes normally. However, if I close the program in the middle of its execution, the framework doesn't ever call the code that releases my unmanaged object. Which is fair enough. But is there a way to force the program to clean itself up even though it's been killed?
EDIT: it doesn't look promising for a hard kill from Task Manager :(
Wrapping it in a try finally or using clause will get you most of the way there:
using (MyComObject test = new MyComObject())
using (MyComObject test2 = new MyComObject()) {
    // do stuff
}
Console.ReadLine();
The specifics of how your app is being shut down, though, will dictate any other measures you can take (like using CriticalFinalizerObject).
I think that a console app that gets closed (from the little x) is the same as a Ctrl-C - in which case you can subscribe to Console.CancelKeyPress for that.
Edit: You should also ReleaseComObject until it returns <= 0.
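A rough, untested sketch of both suggestions together (the Ctrl-C / close handling plus the release loop; as noted above, whether the close button actually raises CancelKeyPress is not guaranteed):
// In Main: dispose the wrappers if the console session is interrupted.
Console.CancelKeyPress += (sender, e) =>
{
    test.Dispose();
    test2.Dispose();
};

// In MyComObject.Dispose(bool): keep releasing the underlying COM object
// (the MyMath field) until its RCW reference count actually reaches zero.
if (test != null)
{
    while (Marshal.ReleaseComObject(test) > 0) { }
    test = null;
}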
Well, one best practice is to use using statements:
using (MyComObject test = new MyComObject())
using (MyComObject test2 = new MyComObject())
{
    //Do stuff
}
That essentially puts finally blocks in to call dispose automatically at the end of the scope. You should have using statements pretty much whenever you have an instance of IDisposable that you take responsibility for cleaning up. However, it doesn't fix the situation where your whole process is aborted abruptly. That's pretty rare though, and you may not want to worry about it. (It's pretty hard to get round it.) I'd have expected your finalizer to be called with your previous code though..