Multiple processes and copies of static DLL data in COM DLL dependencies - c#

I wish to understand some unexpected behaviour in the running of COM DLLs, where it appears static C++ data is being shared across multiple processes. The environment is a little complex and my understanding of the various COM threading models rather weak, so I am hoping someone can help.
The environment
An IIS server on a 64-bit OS running multiple C# web services, with each service in its own 32-bit application pool, and therefore its own process
The pools have the "Enable 32-Bit Applications"=True setting
Each 32-bit C# service calls a different in-process 32-bit COM DLL (so service A calls COM DLL 1, service C calls COM DLL 2). The COM DLLs are written in C++ using Qt 4.8 ActiveQt
The COM DLLs depend on a number of 32-bit C++ DLLs, which are shared, i.e. both COM DLL 1 and COM DLL 2 depend on Utilities.dll
As far as I can tell, there is no ThreadingModel set for the COM DLLs, so I am expecting the system will fall back on the main STA.
I am aware this is frowned upon, but I do not have enough knowledge to change it currently.
Utilities.dll contains some static C++ data
The COM DLLs were registered using "regsvr32" and do not appear to be listed in "Component Services", though my knowledge of the latter is minimal.
The observed issue is that the static data in Utilities.dll appears to end up shared between the different IIS processes, with undesirable consequences. I had expected that, as the COM DLLs were in the main STA, they would be accessed as if they were not thread safe, and that each process would get its own copy of the DLL static data, but this appears not to be the case.
Can someone explain how the static data ends up shared between processes?
How can I avoid this? (apart from refactoring the code to remove all static data, which is not really viable currently)

If you are seeing data shared between COM objects, it means they are hosted in the same process. Yes, it is possible to share data between processes, but not accidentally. Since your app pools are different processes, these COM objects must be hosted out of process, and it is just stubs that are loaded into each app pool.
If you have control of Utilities.dll (and it sounds like you do), I would try adding some debugging information to find out which process id is hosting the COM objects. I expect you'll find that it doesn't match the app pool's process id, and you'll be able to use that id to find out what's going on.
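For instance (a minimal sketch; GetHostProcessId is a hypothetical diagnostic method you would add to the COM interface for debugging, returning GetCurrentProcessId() from inside Utilities.dll):

    using System;
    using System.Diagnostics;

    public static class ComHostDiagnostics
    {
        public static void LogHostingProcess(dynamic comObject)
        {
            int appPoolPid = Process.GetCurrentProcess().Id; // the w3wp.exe for this app pool
            int comHostPid = comObject.GetHostProcessId();   // where the COM code actually runs

            Console.WriteLine($"App pool process: {appPoolPid}, COM host process: {comHostPid}");
            if (appPoolPid != comHostPid)
            {
                // Out-of-process hosting: look this pid up in Task Manager or
                // Process Explorer - it is likely a dllhost.exe surrogate.
                Console.WriteLine("COM object is hosted out of process.");
            }
        }
    }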
Ideally it shouldn't matter where well-designed COM objects live; that's supposed to be something of an implementation detail. Is it possible to do away with the shared data structures?

Related

Reserve hardcoded memory-mapped address in .NET before app starts up

We have a C# .NET application which binds to an unmanaged C dll.
The C dll uses CreateFileMapping() and MapViewOfFileEx() to share memory with other processes on the same machine.
MapViewOfFileEx() takes a base address as its last parameter (lpBaseAddress), which we set to 0x08000000 in our case. This is fine for all C/C++ applications if you call the initializing function in the DLL early enough.
In C#, however, bigger applications allocate a lot of memory before we even get the chance to call the DLL's init function from the auto-generated static Main() method. (Small C# console apps work fine.)
Is there a way to preallocate the region 0x08000000 to 0x08400000 at startup of the .NET application, before the automatic allocation for classes and such kicks in? Then it would be no problem to map the shared memory to 0x08000000 at any time. (A sketch of one attempt follows the restrictions below.)
Restrictions:
The obvious solutions cannot be applied here.
It is not possible to use a dynamically assigned virtual address by calling MapViewOfFile, as this would require heavy changes to the C DLL, which currently works with absolute pointers in all processes.
It is also not possible to negotiate a common base address between all processes, as they don't startup at the same time.
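One possible workaround, sketched below under the assumption that the top of Main() runs before the CLR happens to allocate in that region: reserve the range with VirtualAlloc as the very first statement, then release it immediately before the DLL's init function maps its view there. No guarantees - whether this is early enough depends on what the CLR commits during startup.

    using System;
    using System.Runtime.InteropServices;

    static class FixedRegionReserver
    {
        const uint MEM_RESERVE = 0x2000;
        const uint MEM_RELEASE = 0x8000;
        const uint PAGE_NOACCESS = 0x01;

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize,
                                          uint flAllocationType, uint flProtect);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool VirtualFree(IntPtr lpAddress, UIntPtr dwSize, uint dwFreeType);

        static IntPtr reserved = IntPtr.Zero;

        // Call as the very first statement in Main().
        public static void Reserve()
        {
            reserved = VirtualAlloc(new IntPtr(0x08000000),
                                    new UIntPtr(0x00400000), // 0x08000000..0x08400000
                                    MEM_RESERVE, PAGE_NOACCESS);
            if (reserved == IntPtr.Zero)
                throw new InvalidOperationException("Region already taken; reservation failed.");
        }

        // Call immediately before the C DLL's init function, which will then
        // MapViewOfFileEx() at the now-free fixed address.
        public static void Release()
        {
            if (reserved != IntPtr.Zero && VirtualFree(reserved, UIntPtr.Zero, MEM_RELEASE))
                reserved = IntPtr.Zero;
        }
    }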

C# Restricting DLLs to only one instance

I essentially want to make an API for an application, but I only want one instance of that DLL to be running at a time.
Multiple applications also need to be able to use the DLL at the same time, as you would expect from a normal API.
However, I want it to be the same instance of the DLL that the different applications use. This is because of communication with hardware that I don't want to overlap.
DLLs are usually loaded once per process, so if your application is guaranteed to only be running in single-instance mode, there's nothing else you have to do. Your single application instance will have only one loaded DLL.
Now, if you want to "share" a "single instance" of a DLL across applications, you will inevitably have to resort to a client-server architecture. Your DLL will have to be wrapped in a Windows Service, which would expose an HTTP (or WCF) API.
You can't do it the way you intend. The best approach would be to have a single process (a DLL is not a process) which receives and processes messages, and have your multiple clients use an API (this would be your DLL) that just sends messages to that process.
The intercommunication of those two sides (your single process and the clients sending or receiving messages via your API) could be done in many ways; choose the one that suits you best (basically, any kind of client/server architecture, even if the clients and the server are running on the same hardware).
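To make the shape concrete, here is a minimal sketch (all names are illustrative, not from the question) using a named pipe: one broker process owns the hardware and serializes operations by construction, while the DLL inside each client application just forwards commands to it:

    using System;
    using System.IO;
    using System.IO.Pipes;

    static class HardwareBrokerDemo
    {
        // Broker process: the ONLY code that ever touches the hardware.
        static void RunBroker()
        {
            while (true)
            {
                using (var server = new NamedPipeServerStream("hw-broker"))
                {
                    server.WaitForConnection();
                    using (var reader = new StreamReader(server))
                    using (var writer = new StreamWriter(server) { AutoFlush = true })
                    {
                        string command = reader.ReadLine();
                        // ... perform the hardware operation here ...
                        writer.WriteLine("OK: " + command);
                    }
                }
            }
        }

        // Client side: what your "API DLL" would do internally in each application.
        static string SendCommand(string command)
        {
            using (var client = new NamedPipeClientStream(".", "hw-broker", PipeDirection.InOut))
            {
                client.Connect(2000); // ms; fails fast if the broker isn't running
                var writer = new StreamWriter(client) { AutoFlush = true };
                writer.WriteLine(command);
                return new StreamReader(client).ReadLine();
            }
        }
    }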
This is an XY-Problem type of question. Your actual requirement is serializing interactions with the underlying hardware, so they do not overlap. Perhaps this is what you should explicitly and specifically be asking about.
Your proposed solution is to have a DLL that is a kind of OS-wide singleton or something like that. This is what you are actually asking about, although it is still not the right approach, in my opinion. The OS is in charge of managing the lifetime of the DLL modules in each process. There are many aspects to this, but for one: much of a DLL instance is already shared between every process (mostly code sections, resources and such; data, of course, is not shared by default).
To solve your actual problem, you would have to resort to multi-process synchronization techniques. In Windows, this works mostly through named kernel objects like mutexes, semaphores, events and such. Another approach would be to use IPC, as other folks have already mentioned in their respective answers, which then again would require in itself some kind of synchronization.
Maybe all this is already handled by that hardware's device driver. What would be the real scenarios in which overlapped interactions with the underlying hardware would have a negative impact on the applications that use your DLL?
To ensure you have loaded one DLL per machine, you would need to run a controlling assembly in a separate AppDomain, then try creating a named pipe for remoting (with IpcChannel) and claim the hardware resources. The IpcChannel will fail to be created a second time in the same environment. If you need high-performance communication with your hardware, use remoting only for claiming and releasing the resource, and do the actual work through another assembly used by the applications.
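A rough sketch of that claim (legacy .NET Remoting; the port name is illustrative). Per the description above, the first process to create the IPC port wins, and a second attempt throws, which is the signal that the hardware is already claimed:

    using System;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Ipc;

    static class HardwareClaim
    {
        // Returns true if this process now owns the hardware "instance".
        public static bool TryClaim()
        {
            try
            {
                // Creating the IPC port (a named pipe underneath) fails if another
                // process on this machine has already registered the same name.
                var channel = new IpcChannel("hardware-owner-port");
                ChannelServices.RegisterChannel(channel, ensureSecurity: false);
                return true;
            }
            catch (RemotingException)
            {
                return false; // someone else already claimed the hardware
            }
        }
    }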
A Mutex is one solution for exclusive control across multiple processes.
But be careful: careless use of a Mutex can sometimes lead to deadlock.
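For example (a minimal sketch with an illustrative mutex name): waiting with a timeout avoids blocking forever, and handling AbandonedMutexException covers an owner process that died while holding the mutex:

    using System;
    using System.Threading;

    static class HardwareLock
    {
        // The "Global\" prefix makes the mutex visible across all sessions.
        static readonly Mutex mutex = new Mutex(false, @"Global\MyHardwareMutex");

        public static void DoExclusiveHardwareWork(Action work)
        {
            bool acquired = false;
            try
            {
                try
                {
                    // Time out instead of deadlocking if another process never releases.
                    acquired = mutex.WaitOne(TimeSpan.FromSeconds(5));
                }
                catch (AbandonedMutexException)
                {
                    acquired = true; // previous owner died; we now hold the mutex
                }
                if (!acquired)
                    throw new TimeoutException("Hardware is busy.");
                work();
            }
            finally
            {
                if (acquired) mutex.ReleaseMutex();
            }
        }
    }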

Launch multiple processes of COM-Object for a multiuser capable Webservice

I'm more or less new to .NET development as well as to techniques like WCF and COM(+). I think I'm somewhat informed about these topics, even though I have only skimmed some of them.
Problem:
I want to develop a multi-client capable web service which should have the ability to make "session"-based usage of an existing COM object. (The COM object already exists and cannot be changed.)
The COM object (a DLL) itself loads two (unmanaged) DLLs. And now here comes the tricky part:
If I create ONE instance of the COM object in a sample C# client (console app), everything works fine at that point, because the instance runs in-process. (Those unmanaged DLLs are loaded just once per process.) But if I create another instance of the COM object in the same process, the app crashes. This is expected behaviour, because those DLLs are not thread safe.
A first conclusion was that I have to create just ONE COM instance per (isolated) process!
But what about creating multiple instances of the COM object in a web app based on user sessions?
What I have tried already:
1. I created a new AppDomain for each COM instance.
-> Doesn't work, because unmanaged code knows nothing about AppDomains.
2. I made a COM+ library application from the COM object, created a WCF service with the utility (ComSvcConfig) and finally hosted it in IIS (WAS).
-> Doesn't work, because it runs in the same (worker) process.
3. I created a first WCF service around the COM object, with an appropriate service operation for each COM function.
I then created a second WCF service that just spins up a new instance of the first, self-hosted WCF service. The second WCF service starts it and returns a unique URL for the first service, so that each client has its own self-hosted service in a separate process (see the sketch below).
-> Works! But it seems very complicated, and it probably isn't good programming style IMO.
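For reference, a stripped-down sketch of approach 3 (the port scheme, ComSessionHost.exe and the contract are all illustrative, not the actual code): a broker service spawns one host process per client session and hands back its URL, and each host self-hosts the COM-wrapping service so the fragile COM object lives alone in that process:

    using System;
    using System.Diagnostics;
    using System.ServiceModel;
    using System.Threading;

    [ServiceContract]
    public interface IComWrapper
    {
        [OperationContract]
        string Invoke(string operation);
    }

    // The first service: wraps the COM object, one instance per host process.
    public class ComWrapperService : IComWrapper
    {
        public string Invoke(string operation)
        {
            // ... delegate to the single COM instance living in this process ...
            return "done: " + operation;
        }
    }

    // The second ("broker") service: spawns one isolated host per client session.
    public class SessionBroker
    {
        static int nextPort = 9000;

        public string CreateSessionHost()
        {
            int port = Interlocked.Increment(ref nextPort);
            string url = $"http://localhost:{port}/ComSession";
            // ComSessionHost.exe (hypothetical) runs the Main() below.
            Process.Start("ComSessionHost.exe", url);
            return url; // the client uses this unique address from now on
        }
    }

    // Entry point of ComSessionHost.exe: self-hosts the wrapper at the given URL.
    class ComSessionHostProgram
    {
        static void Main(string[] args)
        {
            using (var host = new ServiceHost(typeof(ComWrapperService), new Uri(args[0])))
            {
                host.Open();        // default endpoints are added automatically (WCF 4.0+)
                Console.ReadLine(); // keep the process, and its one COM instance, alive
            }
        }
    }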
Conclusion:
I may not have considered several aspects that would make for a suitable solution (due to lack of knowledge). Are there other (better) ways to solve the problem?
Do you have any advice or tips for solving the problem more conveniently?
Thanks in advance!

How to share a process?

How can I snuggle into another process? Like, share another process's name? So if my application is griddemo.exe, and I want to snug into, let's say, explorer.exe, is that possible? I just read something about CreateRemoteThread() from kernel32. Is that the right direction? Would there be security/UAC issues?
First of all, sorry, but my answer will be longer than the other answers.
I have used DLL injection for years on different versions of the operating system (from Windows NT 4.0 to Windows 7) and I have never had any problem with any virus scanner (including both Norton and McAfee in different versions). So I disagree with Stephen Cleary (see his answer) on this aspect.
Usage of CreateRemoteThread() is really only one of the ways. AppInit_DLLs is another. Both have their advantages and disadvantages. The main advantage of AppInit_DLLs is the simplicity of injecting a DLL into every process. The main disadvantages of the AppInit_DLLs approach are the following:
Every GUI application will load the DLL. If you want to load it only into one process, like explorer.exe, you can't. So the working set of every GUI process will be increased by your DLL, and an error in your DLL (especially inside DllMain, or in any dependency of your DLL) can crash many processes you aren't currently even aware of.
You cannot inject your DLL via the AppInit_DLLs approach into a console application or any EXE that has no dependency on User32.dll.
You should be very careful inside your DllMain, because it will be called before User32.dll is fully initialized. So the only DLL that is safe to use inside your DLL's DllMain is Kernel32.dll.
With CreateRemoteThread() one can start an additional thread in a process. The main problem with CreateRemoteThread() is that its lpStartAddress parameter must be an address in the remote process, so one has to use OpenProcess, VirtualAllocEx and WriteProcessMemory to write some information into the memory of the destination process. To be able to open the process, one has to have the debug privilege enabled. If you only want to compute 2 + 2 inside the destination process, you could copy the corresponding binary code directly into it, but all the really interesting work is done via the Windows API, so mostly one doesn't copy code. Instead, one calls LoadLibrary("MyPath\\MyDll.dll") inside the destination process. Because the prototype of LoadLibrary matches the prototype of the ThreadProc of CreateThread, you can use LoadLibrary as the ThreadProc of CreateRemoteThread(). This technique is called DLL injection.
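Sketched in C# via P/Invoke (a hedged illustration of exactly these steps; error handling omitted, and it assumes a 32-bit injector targeting a 32-bit process):

    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    static class DllInjector
    {
        const uint PROCESS_ALL_ACCESS = 0x001F0FFF;
        const uint MEM_COMMIT_RESERVE = 0x3000; // MEM_COMMIT | MEM_RESERVE
        const uint PAGE_READWRITE = 0x04;

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr OpenProcess(uint access, bool inherit, int pid);
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr VirtualAllocEx(IntPtr hProc, IntPtr addr, UIntPtr size,
                                            uint type, uint protect);
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool WriteProcessMemory(IntPtr hProc, IntPtr addr, byte[] buf,
                                              UIntPtr size, out UIntPtr written);
        [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
        static extern IntPtr GetModuleHandle(string name);
        [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
        static extern IntPtr GetProcAddress(IntPtr hModule, string name);
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr CreateRemoteThread(IntPtr hProc, IntPtr attrs, uint stack,
                                                IntPtr startAddr, IntPtr param,
                                                uint flags, IntPtr threadId);

        public static void Inject(int pid, string dllPath)
        {
            IntPtr hProc = OpenProcess(PROCESS_ALL_ACCESS, false, pid);

            // 1. Put the DLL path into the target's address space.
            byte[] path = Encoding.ASCII.GetBytes(dllPath + '\0');
            IntPtr remoteBuf = VirtualAllocEx(hProc, IntPtr.Zero, (UIntPtr)path.Length,
                                              MEM_COMMIT_RESERVE, PAGE_READWRITE);
            WriteProcessMemory(hProc, remoteBuf, path, (UIntPtr)path.Length, out _);

            // 2. LoadLibraryA has the same shape as a ThreadProc, and kernel32 is
            //    (almost always) mapped at the same base address in every process,
            //    so our local address of it is valid in the target too.
            IntPtr loadLibrary = GetProcAddress(GetModuleHandle("kernel32.dll"),
                                                "LoadLibraryA");

            // 3. The remote thread runs LoadLibraryA(remoteBuf) inside the target.
            CreateRemoteThread(hProc, IntPtr.Zero, 0, loadLibrary, remoteBuf, 0, IntPtr.Zero);
        }
    }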
I recommend using DLL injection only if it is really required. If your destination application offers some other way to load your DLL into the process, such as plug-ins, you should use that instead of DLL injection.
There are some general problems you will have to solve once you have a working example of DLL injection. You may not see these problems at first, but after long usage of your application you will see their importance:
You should find the moment when the destination process is already running, and fully initialized, before you use CreateRemoteThread(); do not call it too early. In the case of explorer.exe, you can start a small trigger program from the Run registry key; at that moment explorer.exe is fully prepared for DLL injection.
You should take 64-bit versions of Windows into consideration.
Don't forget about DLL relocation inside the destination process. Be careful: your DLL can be loaded in the destination process at a different address than in your process, so it is mostly a good idea to choose a good base address (a linker option) for the DLL you will inject. Kernel32.dll can sometimes (very seldom) be loaded at a different address than in your source process; you can write DLL injection code that is free of this problem.
Terminal Services isolates each terminal session by design, so CreateRemoteThread fails if the target process is in a different session than the calling process. You can see this problem on XP (when not connected to a domain), and especially on Vista or Windows 7, if you try DLL injection from a Windows service. To fix it, either perform the DLL injection from a process running in the same terminal session as the destination process, or switch the current session before using CreateRemoteThread: your process must have the SE_TCB_NAME privilege enabled and use SetTokenInformation with the TokenSessionId parameter. To get the session id of the destination process you can use different methods; functions with the WTS prefix (like WTSGetActiveConsoleSessionId) can be very useful.
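A rough sketch of that token dance (hedged: it assumes the code runs as a service holding SE_TCB_NAME, and the subsequent CreateProcessAsUser call plus all error handling are omitted):

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    static class SessionSwitch
    {
        const uint TOKEN_ALL_ACCESS = 0xF01FF;
        const int TokenSessionId = 12;       // TOKEN_INFORMATION_CLASS value
        const int SecurityImpersonation = 2;
        const int TokenPrimary = 1;

        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool OpenProcessToken(IntPtr hProcess, uint access, out IntPtr hToken);
        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool DuplicateTokenEx(IntPtr hToken, uint access, IntPtr attrs,
                                            int impersonationLevel, int tokenType,
                                            out IntPtr hNewToken);
        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool SetTokenInformation(IntPtr hToken, int infoClass,
                                               ref uint info, uint infoLength);
        [DllImport("kernel32.dll")]
        static extern uint WTSGetActiveConsoleSessionId();

        // Produce a primary token retargeted at the interactive console session;
        // pass it to CreateProcessAsUser to launch the injector in that session.
        public static IntPtr MakeConsoleSessionToken()
        {
            OpenProcessToken(Process.GetCurrentProcess().Handle, TOKEN_ALL_ACCESS,
                             out IntPtr token);
            DuplicateTokenEx(token, TOKEN_ALL_ACCESS, IntPtr.Zero,
                             SecurityImpersonation, TokenPrimary, out IntPtr primary);
            uint sessionId = WTSGetActiveConsoleSessionId();
            // Requires the SE_TCB_NAME privilege (i.e. running as a service).
            SetTokenInformation(primary, TokenSessionId, ref sessionId, sizeof(uint));
            return primary;
        }
    }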
So none of this is very easy, but it is a really interesting subject where you can learn a lot about the operating system. Spend a little time analysing your problem and the different ways to solve it before you choose the one that corresponds to your project requirements and start programming.
DLL injection is the traditional method of doing this. It's quite tricky, especially since virus scanners look askance at the practice. So even if you get it working, Norton/McAfee would be likely to block you - or block you in the future.
One easy way of DLL injection is the AppInit_DLLs registry value. Note that Microsoft has reserved the right to simply remove this functionality (and likely will do so in the future).
The Microsoft-approved way to achieve DLL injection is licensing Microsoft Detours.
Note that your DLL must be built against CLR version 4.0 or higher to perform DLL injection safely, because this is the first version to support in-proc side-by-side.
If you mean injecting your code into another process, then dll injection is one technique:
http://en.wikipedia.org/wiki/DLL_injection
Haven't done this for years, so I'm not sure how happy modern MS Windows operating systems (i.e. post-XP) will be with this.
I've not tried this lately, but another way to do this would be to create a Hook DLL:
Create a DLL that contains a Hook Procedure like MessageProc.
Install this DLL into Windows\System32.
Use FindWindows(Ex) to locate your victim process' window.
Use GetWindowThreadProcessId() to find the owning thread of that window. This is necessary to avoid injecting your DLL into every single process on the system.
Use SetWindowsHookEx to hook that thread.
PostMessage a WM_USER message to the window - activating your Hook DLL if it isn't already active.
This would likely invoke the new Windows Vista/7 UIPI/UAC if you're not a sufficiently privileged user but this depends on many factors - your mileage may vary.
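In P/Invoke terms, the targeting part of these steps looks roughly like this (a sketch; the hook procedure itself must live in a native DLL - HookDll.dll and its exported MessageProc are hypothetical names):

    using System;
    using System.Runtime.InteropServices;

    static class HookInjector
    {
        const int WH_GETMESSAGE = 3;
        const uint WM_USER = 0x0400;

        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr FindWindow(string className, string windowName);
        [DllImport("user32.dll")]
        static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint pid);
        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr SetWindowsHookEx(int idHook, IntPtr lpfn, IntPtr hMod,
                                              uint threadId);
        [DllImport("user32.dll")]
        static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);
        [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
        static extern IntPtr LoadLibrary(string path);
        [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
        static extern IntPtr GetProcAddress(IntPtr hModule, string name);

        public static void HookTargetWindow(string windowClass)
        {
            // Locate the victim window and its owning thread (steps 3 and 4).
            IntPtr hWnd = FindWindow(windowClass, null);
            uint threadId = GetWindowThreadProcessId(hWnd, out _);

            // The hook procedure must be exported by a native DLL (steps 1 and 2).
            IntPtr hookDll = LoadLibrary("HookDll.dll");
            IntPtr proc = GetProcAddress(hookDll, "MessageProc");

            // Hook only that thread, then poke it so the DLL gets loaded (steps 5 and 6).
            SetWindowsHookEx(WH_GETMESSAGE, proc, hookDll, threadId);
            PostMessage(hWnd, WM_USER, IntPtr.Zero, IntPtr.Zero);
        }
    }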

What are app domains used for?

I understand roughly what an AppDomain is; however, I don't fully understand the uses for one.
I'm involved in a large server based C# / C++ application and I'm wondering how using AppDomains could improve stability / security / performance.
In particular:
I understand that a fault or fatal exception in one domain does not affect other app domains running in the same process - does this also hold true for unmanaged / C++ exceptions, possibly even heap corruption or other memory issues?
How does inter-AppDomain communication work?
How is using AppDomains different from simply spawning many processes?
The basic use case for an AppDomain is in an environment that is hosting 3rd party code, so it will be necessary not just to load assemblies dynamically but also unload them.
There is no way to unload an assembly individually. So you have to create a separate AppDomain to house anything that might need to be unloaded. You can then trash and rebuild the whole AppDomain when necessary.
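For example (a minimal sketch; PluginHost and the domain name are illustrative):

    using System;

    // Lives in the plug-in assembly; MarshalByRefObject so calls cross the boundary.
    public class PluginHost : MarshalByRefObject
    {
        public string Run() => "running in " + AppDomain.CurrentDomain.FriendlyName;
    }

    class Program
    {
        static void Main()
        {
            AppDomain domain = AppDomain.CreateDomain("PluginSandbox");

            var plugin = (PluginHost)domain.CreateInstanceAndUnwrap(
                typeof(PluginHost).Assembly.FullName, // assembly holding the plug-in
                typeof(PluginHost).FullName);

            Console.WriteLine(plugin.Run());          // executes in PluginSandbox

            AppDomain.Unload(domain); // the only way to get the assembly unloaded
        }
    }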
By the way, native code corrupting the heap cannot be protected against by any feature of the CLR. Ultimately the CLR is implemented natively and shares the same address space, so native code in the process can scribble all over the internals of the CLR! The only way to isolate badly behaved (i.e. most) native code is actual process isolation at the OS level: launch multiple .exe processes and have them communicate via some IPC mechanism.
I highly recommend CLR Via C# by Jeffrey Richter. In particular chapter 21 goes into good detail regarding the purpose and uses of AppDomains.
In answer to your points/question:
AppDomains will not protect your application from rogue unmanaged code. If this is an issue you will most likely need to use full process isolation provided by the OS.
Communication between AppDomains is performed using .NET remoting to enforce isolation. This can be via marshal by reference or marshal by value semantics, with a trade off between performance and flexibility.
AppDomains are a lightweight way of achieving process-like isolation within managed code. They are considered lightweight because you can create multiple AppDomains within a single process, avoiding the resource and performance overhead of multiple OS processes. Also, a single thread can execute code in one AppDomain and then in another, as Windows knows nothing about AppDomains (you can see the current one via System.AppDomain.CurrentDomain).
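The two marshalling semantics in miniature (a sketch; type names are illustrative): a MarshalByRefObject stays in its home domain and calls travel to it as remoting messages, while a [Serializable] object is copied across the boundary:

    using System;

    public class ByRefService : MarshalByRefObject // proxy crosses, object stays put
    {
        public string WhereAmI() => AppDomain.CurrentDomain.FriendlyName;
        public ByValueDto MakeDto() => new ByValueDto(); // serialized back to the caller
    }

    [Serializable]
    public class ByValueDto // a full copy crosses the boundary
    {
        public string CreatedIn = AppDomain.CurrentDomain.FriendlyName;
    }

    class Demo
    {
        static void Main()
        {
            AppDomain other = AppDomain.CreateDomain("Other");

            var svc = (ByRefService)other.CreateInstanceAndUnwrap(
                typeof(ByRefService).Assembly.FullName, typeof(ByRefService).FullName);

            Console.WriteLine(svc.WhereAmI());      // "Other": the call ran over there
            Console.WriteLine(svc.MakeDto().CreatedIn); // "Other", but the DTO is a local copy

            AppDomain.Unload(other);
        }
    }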
Actually, it is not true that a critical failure in one AppDomain can't impact others. In the case of bad things, the best bet is to tear down the process. There are a few examples, but to be honest I haven't memorised them - I simply took a mental note: "bad things = tear down process (check)".
Benefits of AppDomain:
you can unload an AppDomain; I use this for a system that compiles itself (meta-programming) based on data from the database - it can spin up an appdomain to host the new dll for a while, and then swap it safely when new data is available (and built)
comms between AppDomains are relatively cheap. IMO this is the only time I am happy to use remoting (although you still need to be really careful about the objects on the boundary to avoid bleeding references between them, causing "fusion" to load extra dlls into the primary AppDomain, causing a leak) - it is really easy too - just CreateInstanceAndUnwrap (or is it CreateInstanceFromAndUnwrap?).
vs spawning an extra process - you could go either way, but you don't need another exe for AppDomain work, and it is much easier to set up any comms that you need
I'm not claiming to be an expert on AppDomains, so my answer will not be all-encompassing. Perhaps I should start off by linking to a great introduction by a guy who does come off as somewhat an expert, and what does seem like covering all aspects of AppDomain usage.
My own main encounter with AppDomains has been in the security field. There, the greatest advantage I've found has been the ability to have a master domain running in high trust spawn several child domains with restricted permissions. If you restricted permissions without the use of app domains, the restricted code would still have the permission to elevate its own privileges.
Using AppDomain segregation to run completely independent code modules, in order to address memory-sharing and stability concerns, is more of an illusion than a reality.
