Access COM object through a windows process handle - c#

I'm currently automating an application at work using COM, and I have an issue: if the original application is already open when my automation program runs, anyone using my program runs into problems. I know how to locate the process if it's open, but instead of having to worry about closing it or working around it, I want to try to use the existing application instead of opening a new one.
This is how I normally start the application in my automation program:
Designer.Application desApp = new Designer.Application();
Now I'm attempting to try and use the handle from an existing application:
Designer.Application desApp = (Designer.Application)((System.Diagnostics.Process.GetProcessesByName("Designer.exe")[0]).Handle)
(I know this doesn't work, since .Handle returns an IntPtr, but I'm using it as an example.)
Is there any way to accomplish this? How do I return a usable object if I know the handle/process?

The COM way of attaching to an existing automation object is to retrieve the object from the Running Object Table (ROT): http://msdn.microsoft.com/en-us/library/ms695276(VS.85).aspx.
You can use the IRunningObjectTable interface to register your COM objects in the ROT.
http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.comtypes.irunningobjecttable.aspx
And you can query the ROT for an existing instance of your object with System.Runtime.InteropServices.Marshal.GetActiveObject, for example.
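As a minimal sketch of that approach (this assumes the Designer application registers itself in the ROT under a ProgID such as "Designer.Application", which you would need to verify for your product):

// Attach to a running instance via the ROT; the ProgID here is an assumption.
object running = System.Runtime.InteropServices.Marshal.GetActiveObject("Designer.Application");
var desApp = (Designer.Application)running;

If GetActiveObject throws because no instance is registered, you can fall back to new Designer.Application() as you do today.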

You cannot make this work in the client code; it has to be dealt with in the server. The server must call CoRegisterClassObject(), passing REGCLS_MULTIPLEUSE so that multiple clients are allowed to use the single server instance. There is no other mechanism to allow a client to obtain an interface pointer to the Application object.
This is very much by design, the server has to be designed and written to support such usage. It cannot be bolted on later.

Related

How to best deploy Sentry in cross-assembly environment?

So we built this library/framework thing full of code related to business processes and common elements that are shared across multiple applications (C#, .NET 4.7.1, WPF, MVVM). Our logging is all set up through this framework, so naturally it felt like the best place for Sentry. All the references in our individual applications are manually pointed to the DLLs in the folder where our shared library thingy installs itself. So far so good.
When we set up Sentry initially everything seemed to work great. We do some updates and errors seem to be going way down. That's cause we are awesome and Sentry helped us be more awesome, right? Nope! Well I mean kind of.
The scope is being disposed of so we are no longer getting Unhandled exceptions. We didn't notice at first because we are still getting sentry logs when we are handling errors through our Logging.Log() method. This logging method calls SentrySdk.Init() which I suspect is disposing the client in the executing assembly.
We also started using Sentry for some simple Usage tracking by spinning up a separate project in Sentry called Usage-Tracker and passing a simple "DoThingApplication has been launched" with an ApplicationName.UsageTracker Enum as a parameter to our Logging method.
Question: What is a good way to handle this where my setup can have a Sentry instance that wraps my using(sentryClientStuff){ ComposeObjects(); } and still have my logging method look for the existing client and use it if it exists?
Caveats:
I believe before any of this happens we still need to make a call to send a Sentry log to our UsageTracker.
I would like to pass in as few options as possible if I'm setting up the Sentry Client/Scope in our shared library. Maybe Release and Environment. Maybe check tags for Fingerprint and set it in the Log method.
I'm open to new approaches to any of this.
Some related thoughts
Maybe there is a better way to handle references that could solve both this and some other pain points when they become mismatched between the client and the shared framework/library.
Maybe the answer can be found through adding some Unit Tests, but I could use a Sentry-specific example or a nudge there because I don't know much about that.
Maybe there is a way to use my shared library to return a Sentry Client or Scope that I could use in my client assembly that would not be so fragile and the library could somehow also use it.
Maybe there is a better solution I can't conceive because I'm just kind of an OK programmer and it escapes me. I'm open to any advice/correction/ridicule.
Maybe there is a smarter way to handle "Usage-Tracker" type signals in Sentry
Really I want a cross-assembly singleton kind of thing in practice.
There are really many things going on here. Also, without looking at any code it's hard to picture how things are laid out. There's a better chance you can get the answer you are looking for if you share some (even dummy) example of the structure of your project.
I'll try to break it down and address what I can anyway:
With regards to:
Usage-Tracker:
You can create a new client and bind it to a scope. That way any use of the SentrySdk static class (which I assume your Logger.Log routes to) will pick it up.
In other words, call SentrySdk.Init as you currently do, with the options that are shared across any application using your shared library, and after that create a client using the DSN of your Usage-Tracker project in Sentry. Push a scope, bind the client, and you can use SentrySdk with it.
There's an example in the GitHub repo of the SDK:
using (SentrySdk.PushScope())
{
    SentrySdk.AddBreadcrumb(request.Path, "request-path");

    // Change the SentryClient in case the request is to the admin part:
    if (request.Path.StartsWith("/admin"))
    {
        // Within this scope, the _adminClient will be used instead of whatever
        // client was defined before this point:
        SentrySdk.BindClient(_adminClient);
    }

    SentrySdk.CaptureException(new Exception("Error at the admin section"));
    // Else it uses the default client
    _middleware?.Invoke(request);
} // Scope is disposed.
The SDK only has to be initialized once, but you can always create a new client with new SentryClient, push a new scope (SentrySdk.PushScope()) and bind the client to that new scope (SentrySdk.BindClient). Once you pop the scope, the client is no longer accessible via SentrySdk.CaptureException or any other method on the static class SentrySdk.
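For the Usage-Tracker case, that could look roughly like the following sketch (the DSN string is a placeholder for your Usage-Tracker project's DSN):

var usageTrackerClient = new SentryClient(
    new SentryOptions { Dsn = new Dsn("https://examplePublicKey@sentry.io/usage-tracker") });

using (SentrySdk.PushScope())
{
    // While this scope is active, SentrySdk routes events to the Usage-Tracker client:
    SentrySdk.BindClient(usageTrackerClient);
    SentrySdk.CaptureMessage("DoThingApplication has been launched");
} // Scope popped; whatever client was bound before is used again.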
You can also use the client directly, without binding it to the scope at all.
using (var c = new SentryClient(new SentryOptions { Dsn = new Dsn("...") }))
{
    c.CaptureMessage("hello world!");
}
The using block is there to make sure the background thread flushes the event.
Central place to initialize the SDK:
There will be configuration which you want to have fixed in your shared framework/library but surely each application (composition root) will have its own setting. Release is auto-discovered.
From docs.sentry.io:
The SDK will firstly look at the entry assembly’s AssemblyInformationalVersionAttribute, which accepts a string as value and is often used to set a GIT commit hash.
If that returns null, it’ll look at the default AssemblyVersionAttribute which accepts the numeric version number.
If you patch your assemblies in your build server, the correct Release should be reported automatically. If not, you could define it per application by taking a delegate that passes the SentryOptions as argument.
Something like:
Framework code:
public class MyLogging
{
    public static void Init(Action<SentryOptions> configuration)
    {
        var o = new SentryOptions();

        // Add things that should run for all users of this library:
        o.AddInAppExclude("SomePrefixTrueForAllApplications");
        o.AddEventProcessor(new GeneralEventProcessor());

        // Give the application a chance to reconfigure anything it needs:
        configuration?.Invoke(o);

        // Finally initialize the SDK with the combined options:
        SentrySdk.Init(o);
    }
}
App code:
void Main()
{
    MyLogging.Init(o => o.Environment = "my env");
}
"The scope is being disposed of so we are no longer getting Unhandled exceptions."
Not sure I understand what's going on here. Pushing and popping (disposing) scopes don't affect the ability of the SDK to capture unhandled exceptions. Could you please share a repro?
"This logging method calls SentrySdk.Init() which I suspect is disposing the client in the executing assembly."
Unless you create a client "by hand" with new SentryClient, there's only 1 client in the running process. Please note I said running process and not assembly. Instances are not held within an assembly. The assembly only contains the code that can be executed. If you call SentrySdk.CaptureException it will dispatch the call to the SentryClient bound to the current scope. If you didn't PushScope, there's always an implicit scope, the root scope. In this case it's all transparent enough you shouldn't care there's a scope in there. You also can't dispose of that scope since you never got a handle to do so (you didn't call PushScope so you didn't get what it returns to call Dispose on).
"All the references in our individual applications are manually pointed to the DLLs in the folder where our shared library thingy installs itself."
One thing to consider, depending on your environment is to distribute packages via NuGet. I'm unsure whether you expect to use these libraries in non .NET Framework applications (like .NET Core). But considering .NET Core 3.0 is bringing Windows Desktop framework support like WPF and WinForm, it's possible that eventually you will. If that's the case, consider targeting .NET Standard instead of .NET Framework for your code libraries.

Can't set ADOX.Catalog.ActiveConnection to ADODB Connection coming from .NET

I've been tasked with the incremental porting of a legacy VB6 app (using MS Access as a database, don't ask) to .NET.
This is going to be a long one, but I think it's better to give a bit of context.
This app has a main MDI form with a menu, which is created dynamically based on the DLLs found in the app's folders. It's fundamentally a plug-in kind of thing: each DLL is represented by a menu item which, when clicked, will open up the main form contained in the DLL, calling SetParent() as needed.
The MDI form is my starting point. I want to rewrite just enough of it (redesigning and unit testing as I go, of course) to be able to open said forms. Once I have that one nailed, I will start rewriting one DLL at a time.
Every DLL needs an ADO connection, which I've been able to pass along from C#.
The thing is, one of those plug-ins (at least, but possibly many others) uses ADOX to do things on the database, and here lies the problem: when I try to set ADOX.Catalog's ActiveConnection property to the ADO connection, all I get is run-time error 3001: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another.
I can't for the life of me figure out what I'm doing wrong.
The VB6 code is as simple as it can possibly get:
Dim c As New ADOX.Catalog
Set c.ActiveConnection = theAdoConnectionComingFromDotNet ' error!
The C# code that creates the ADO connection is as straightforward as the VB one:
var conn = new ADODB.Connection();
conn.Open("Provider=Microsoft.JET.Oledb;[...]");
and the call to Open() succeeds.
If I attempt to set the ActiveConnection on the C# side, like so:
var catalog = new ADOX.Catalog();
catalog.ActiveConnection = conn;
everything works.
Now, I could work around the problem by simply instantiating ADOX on the C# side and passing it to VB6, but tweaking the VB6 code (which of course has not a single unit test) could prove to be a PITA, and I'm not even sure that would be easy to do in the first place (because the app can use multiple Access DBs at once, opening and closing connections to each of them as needed).
So, does anybody have any idea what I'm doing wrong? From C# I've tried referencing ADODB from both the .NET tab and from the COM one (and the ADO version I chose from the COM tab is the right one: 2.5... again, don't ask), but still no joy.
EDIT
The exact same thing happens when I try to assign a RecordSet's ActiveConnection property to the connection that comes from C#, like so:
Dim rs As New ADODB.Recordset
Set rs.ActiveConnection = theAdoConnectionComingFromDotNet
Another workaround I can think of, since ActiveConnection is a Variant, would be to set it to the connection's ConnectionString property. That works, but it would create and open a new connection every time, and quite frankly I wouldn't like it.
It seems the wrong way around, but this is so suspiciously similar to a problem I had – where ADO stopped working in a COM object recompiled on a Win7 machine then used on an XP machine – that I think this may be down to the same thing. Namely, the disastrous Windows update that broke MDAC ADO in COM objects (very long thread, expect slow loading). If so, the official fix can be found here.
If it's not that and you can't find the solution, I think your best course of action is to just use the connection string workaround you mentioned in your edit. It's not ideal but you say you're going to start rewriting the DLLs anyway, so it will only be a temporary arrangement.

Calling 32bit COM from c# running in 64bit mode

I have a 3rd party COM object(32 bit) that I need to call from my c# application (64 bit).
I know I have to run the COM object in a separate process.
This COM object has many classes implemented in it, so I'm trying to avoid writing my own remoting wrapper that exposes all the methods. COM+ seems to be the most straightforward solution. I opened the Component Services menu, created a new COM+ Application, added my COM object as a component to this application. Everything seemed to import beautifully.
In my C# application, I added the original COM object as a reference (which automatically generates the type library). Using the type library reference, I can create objects from the COM+ component (I see them begin to spin in the Component Services window), but when I try to access one of the methods of the object, I get an error saying the interface is not registered.
Does anyone have a clue? I went back and ran regsvr32 on the COM object, but I don't think it was necessary, and it didn't help.
Is my usage in C# correct? VS2008 autocomplete had no problem seeing those methods.
The exact exception is:
"Interface not registered (Exception from HRESULT:0x80040155)"
Being unclear about exactly what the permissions and roles in Component Services are for, I tried setting the COM+ object identity to run under the System account, both as a local service and as the interactive user. I've added Everyone as a user in the Roles.
Everything is running locally, so there shouldn't be an issue with file privileges or anything like that.
I also want to reiterate that this COM object contains many classes. I successfully instantiated one class object in my client and set some property values.
I also successfully instantiated another class object, but received this exception when attempting to call a method of this second object .... so I don't think there is an issue with which registry my COM object is registered in.
We had a similar situation, working with a COM dll from VFP.
It all depends on rights and permissions, like Yahia says.
We got it working by doing this:
Install VFP oledb 9 drivers (dunno what you have so probably not required).
give Network Service IIS_IUSR full control on the COM folder (required so the DLL can do some logging in its own folder, when called from the website).
run regsvr32.exe "c:\xxx\yourfile.dll" -> this should be successful!
Create COM+ application, and add the DLL as a part
Set the COM+ application credentials to a user with sufficient rights
And we had to do some more rights settings in the application pool / IIS, but that's not required for you, I guess.
Anyway, just make sure you have enough logging, make sure the DLL is registered, and after that it's all about rights, rights, rights...
Good luck with it!
Sorry to use the "Answer" to respond to comments, but it seems to be my only avenue.
The whole purpose of moving to a 64bit operating system was to gain the extra addressable memory space, so running the entire application in 32bit mode is not an option.
It might be relevant to the problem that, after successfully creating three class objects, I was able to set properties on one and call a method with no arguments on the second, but calling a method on the third, which took the other two objects as arguments, is what threw the exception.

Creating a COM Automation Server in C#

I currently have a .NET class library written in C# that exposes its functionality via COM to a C++ program (pre-.NET).
We now want to move the library out-of-process to free up address space in the main application (it is an image-processing application, and large images eat up address space). I remember from my VB6 days that one could create an "OLE automation server". The OS would automatically start and stop the server .exe as objects were created/destroyed. This looks like the perfect fit for us: as far as I can see nothing would change in the client except it would call CoCreateInstance with CLSCTX_LOCAL_SERVER instead of CLSCTX_INPROC_SERVER.
How would I create such an out-of-process server in C#? Either there is no information online about it, or my terminology is off/out of date!
You can actually do this in .NET (I've done it before as a proof-of-concept), but it's a bit of work to get everything working right (process lifetime, registration, etc).
Create a new Windows application. In the Main method, call RegistrationServices.RegisterTypeForComClients; this is a managed wrapper around CoRegisterClassObject that takes care of the class factory for you. Pass it the Type of the managed ComVisible class (the one you actually want to create; .NET supplies the class factory automatically) along with RegistrationClassContext.LocalServer and RegistrationConnectionType.SingleUse. Now you have a very basic exe that can be registered as a LocalServer32 for COM activation. You'll still have to work out the lifetime of the process (implement refcounts on the managed objects with constructors/finalizers; when you hit zero, call UnregisterTypeForComClients and exit). You can't let Main exit until all your objects are dead.
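To make that concrete, here is a rough sketch of such a Main method. MyComObject stands in for your own ComVisible class, the GUID is a placeholder, and the lifetime handling is deliberately simplified (you would signal the event from your own refcounting):

using System;
using System.Runtime.InteropServices;
using System.Threading;

[ComVisible(true)]
[Guid("00000000-0000-0000-0000-000000000000")] // placeholder GUID; generate your own
public class MyComObject
{
    // Your actual automation methods go here.
}

static class Program
{
    // Signalled by your own refcounting once every COM object has been released.
    static readonly ManualResetEvent AllObjectsReleased = new ManualResetEvent(false);

    [STAThread]
    static void Main()
    {
        var reg = new RegistrationServices();
        // Managed wrapper around CoRegisterClassObject; .NET supplies the class factory.
        int cookie = reg.RegisterTypeForComClients(
            typeof(MyComObject),
            RegistrationClassContext.LocalServer,
            RegistrationConnectionType.SingleUse);

        // Keep the process alive until all objects are dead, then unregister and exit.
        AllObjectsReleased.WaitOne();
        reg.UnregisterTypeForComClients(cookie);
    }
}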
The registration isn't too bad: create a ComRegisterFunction-attributed method that adds a LocalServer32 key under HKCR\CLSID\{your CLSID here}, whose default value is the path to your exe. Run regasm yourexe.exe /codebase /tlb, and you're good to go.
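A hedged sketch of that registration piece (the key path follows the standard LocalServer32 convention; class and method names are illustrative):

using System;
using System.Reflection;
using System.Runtime.InteropServices;
using Microsoft.Win32;

public static class ComRegistration
{
    [ComRegisterFunction]
    public static void Register(Type t)
    {
        // Write HKCR\CLSID\{clsid}\LocalServer32 with this exe's path as the default value.
        using (var key = Registry.ClassesRoot.CreateSubKey(@"CLSID\{" + t.GUID + @"}\LocalServer32"))
        {
            key.SetValue(null, Assembly.GetExecutingAssembly().Location);
        }
    }

    [ComUnregisterFunction]
    public static void Unregister(Type t)
    {
        Registry.ClassesRoot.DeleteSubKeyTree(@"CLSID\{" + t.GUID + @"}\LocalServer32", false);
    }
}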
You could always expose your .NET classes as COM classes using InteropServices and then configure the library as a COM+ application. The .NET library would run out-of-process and be hosted by a DLLHOST.EXE instance.
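If you go the COM+ route, a rough illustration of what that looks like (attribute values and names are examples only; the class must derive from ServicedComponent and the assembly needs a strong name):

using System.EnterpriseServices;
using System.Runtime.InteropServices;

[assembly: ApplicationName("MyImageLibrary")]
[assembly: ApplicationActivation(ActivationOption.Server)] // out-of-process, hosted by dllhost.exe

[ComVisible(true)]
public class ImageProcessor : ServicedComponent
{
    public void ProcessImage(string path)
    {
        // Heavy image work runs in the dllhost.exe surrogate process.
    }
}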
Here is an article on MSDN that covers all aspects of how to create a COM local server in C# (.NET): link
Your post started a while ago and I had the same problem. The following link is absolute gold and tells you everything:
http://www.andymcm.com/blog/2009/10/managed-dcom-server.html

Passing data to .NET C# WPF applications/DLLs

I have a .NET C# WPF application that I am trying to make into a single-instance application using a Mutex.
This .NET application is called by a C++-based DLL using CreateProcessAsUser() and is given parameters via environment variables.
Subsequent instances will also be created by the C++ DLL in the same way.
Subsequent instances would then need to pass their parameters to the first instance of the application before exiting.
The problem is: what methods can be used in the .NET application so that subsequent instances can pass their data to the first instance? The simpler, the better.
I have researched some but I hope there are simpler ways.
Things I have researched:
Named Pipes
.NET Remoting
Windows Messaging (Sending WM_COPYDATA to the first instance window)
Since I am just trying to pass 4 strings to the first instance, I am trying to avoid the above mentioned methods because they are somewhat overkill for my problem.
The simplest approach I can think of is to export a function from the .NET application so that subsequent instances can just call this function on the first instance and pass the data as its parameters. However, is this possible in .NET? I've read that .NET EXEs or DLLs cannot export functions.
Thanks!
The simplest I can think of is to export a function from the .NET application and then the subsequent instances can just call this function and pass the parameters to it.
This is not how this works. You'll load the .NET assembly in the calling process, not magically cross the process boundary and talk to the child.
Just have the parent open the child with redirected pipes using the Process class, and have the child read from stdin using Console.Read*
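A minimal sketch of that idea, with a placeholder child path and a made-up delimiter:

// Parent process: start the child with its stdin redirected and write the parameters to it.
var psi = new System.Diagnostics.ProcessStartInfo("Child.exe")
{
    RedirectStandardInput = true,
    UseShellExecute = false
};
var child = System.Diagnostics.Process.Start(psi);
child.StandardInput.WriteLine("param1|param2|param3|param4");

// Child process: read the line from stdin and split it back into the four strings.
string line = Console.ReadLine();
string[] parameters = line.Split('|');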
thanks for the reply, Paul!
I've added more detail to my question above, though, because I'm not sure my scenario was understood correctly.
But regarding your answer: the parent of the .NET app will be a C++-based DLL, and all it will do is call the .NET app and give it parameters. The C++-based DLL will also exit after this, so I wouldn't want to add any more behavior to it.
So, passing of data would then be done between the instances of the .NET applications only.
Since you are going from .NET to .NET, I'd recommend just doing a WCF call. You can use a named pipes transport between the two .NET instances to expose the "service" (which is what your first instance would expose).
Subsequent instances would do the single instance check, and if they detect an already running instance, they could make a WCF call to the service running in the first instance and pass the data that way.
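A rough sketch of how that could look (the contract, class names, and pipe address are made up for illustration):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IArgumentSink
{
    [OperationContract]
    void SendArguments(string[] args);
}

public class ArgumentSink : IArgumentSink
{
    public void SendArguments(string[] args)
    {
        // Hand the four strings off to the first instance's UI/logic here.
    }
}

// First instance: host the service over a named pipe.
var host = new ServiceHost(typeof(ArgumentSink), new Uri("net.pipe://localhost/MyWpfApp"));
host.AddServiceEndpoint(typeof(IArgumentSink), new NetNamedPipeBinding(), "args");
host.Open();

// Subsequent instance: detect the running instance, send its arguments, then exit.
var factory = new ChannelFactory<IArgumentSink>(
    new NetNamedPipeBinding(),
    new EndpointAddress("net.pipe://localhost/MyWpfApp/args"));
IArgumentSink proxy = factory.CreateChannel();
proxy.SendArguments(new[] { "a", "b", "c", "d" });
((IClientChannel)proxy).Close();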
