I understand that an application domain forms an isolation boundary for security, versioning, reliability, and unloading of managed code, but so does a process.
Can someone please help me understand the practical benefits of an application domain?
I assumed an app domain provides a container in which to load one version of an assembly, but recently I discovered that multiple versions of a strong-named assembly can be loaded into a single app domain.
My concept of an application domain is still not clear, and I am struggling to understand why it was introduced when the concept of a process already exists.
Thank you.
I can't tell if you are talking in general or specifically about .NET's AppDomain.
I am going to assume you mean .NET's AppDomain, and explain why it can be really useful when you need isolation inside a single process.
For instance:
Say you are dealing with a library that has certain worker classes, you have no choice but to use those workers, and you can't modify the code. Your job is to build a Windows Service that manages said workers, makes sure they all stay up and running, and runs them in parallel.
Easy enough, right? Well, so you hoped. It turns out your worker library is prone to throwing exceptions, uses static configuration, and is generally just a real PITA.
You could try to launch each worker in its own process, but to monitor them you'd need to implement named pipes or carefully parse each process's STDIN and STDOUT.
What else can you do? Well, AppDomain actually solves this. I can spawn an AppDomain for each worker and give each its own configuration; the workers can't screw each other up by changing static properties, because they are isolated, and if the library bombs out and I fail to catch the exception, it doesn't bother the workers in the other domains. And during all of this, I can still communicate with those workers easily.
Sadly, I have had to do this before.
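Here is a minimal sketch of that pattern. The WorkerHost class and per-worker config file name are hypothetical, and the host type must derive from MarshalByRefObject so calls cross the domain boundary through a proxy:

    using System;

    // Hypothetical wrapper around the problematic worker library.
    // MarshalByRefObject lets the managing service call it across
    // the AppDomain boundary via a remoting proxy.
    public class WorkerHost : MarshalByRefObject
    {
        public void Run() { /* call into the worker library here */ }
    }

    public static class WorkerLauncher
    {
        public static void Launch()
        {
            // One isolated domain per worker, each with its own config file.
            var setup = new AppDomainSetup
            {
                ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
                ConfigurationFile = "worker1.config" // hypothetical per-worker config
            };
            AppDomain domain = AppDomain.CreateDomain("Worker1", null, setup);

            var host = (WorkerHost)domain.CreateInstanceAndUnwrap(
                typeof(WorkerHost).Assembly.FullName,
                typeof(WorkerHost).FullName);

            try
            {
                host.Run(); // static state inside stays isolated to "Worker1"
            }
            catch (Exception)
            {
                // A crash here leaves the other workers' domains untouched;
                // unload the broken domain and spin up a fresh one.
                AppDomain.Unload(domain);
            }
        }
    }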
EDIT: Started to write this as a comment response, but got too large
Individual processes work great in many scenarios, but there are times when they become a pain. I am not saying one should use an AppDomain over another process; I think it's uncommon to need either a separate process or an AppDomain, but once you need one, you'll definitely know.
The main problem I see with processes in the scenario above is that processes have their own downfalls, which are easier to mitigate with an AppDomain.
A process can go rogue, become unresponsive, and crash or be killed at any point.
If you're managing processes, you need to keep track of each process ID and monitor its status. IPC is great, but it takes time to get proper communication going back and forth as needed.
As an example, let's say your process just dies. What do you do? Depending on the mechanism you chose for monitoring, maybe the communication thread died, or perhaps the work finished and you still show it as "processing". What do you do?
Now what happens when you have 20 processes and your management app dies? You don't have any real information; all you have is 20 instances of "myprocess.exe", and you may now have to start parsing the command-line arguments they were started with to figure out which workers you actually have. Obviously with an AppDomain all 20 would have died too, but did you really gain anything with the processes? You still have to code the ability to recover, except now you also have to code all of the recovery for your processes instead of just firing the workers back up.
As with anything in programming, there are a thousand different ways to achieve the same goal. It's up to you to decide which solution you feel is most appropriate.
Some practical benefits of using app domains:
Multiple app domains can run in a single process, and you can stop an individual app domain without stopping the entire process. This alone drastically increases server scalability.
The app domain life cycle is managed programmatically by runtime hosts (you can override it as well). With processes and threads, you have to manage the life cycle explicitly yourself; initialization, execution, termination, and inter-process/multithreaded communication are complex, which is why it's easier to defer them to the CLR.
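As a small illustration of that first point (names are illustrative), a domain can be torn down while the process keeps running:

    using System;

    public static class Demo
    {
        // Must be a static method: delegates to static methods marshal
        // cleanly across the AppDomain boundary.
        private static void SayHello()
        {
            Console.WriteLine("running in " + AppDomain.CurrentDomain.FriendlyName);
        }

        public static void Main()
        {
            AppDomain scratch = AppDomain.CreateDomain("Scratch");
            scratch.DoCallBack(SayHello);  // runs inside "Scratch"
            AppDomain.Unload(scratch);     // tears down only "Scratch"; the process lives on
        }
    }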
Source: https://learn.microsoft.com/en-us/dotnet/framework/app-domains/application-domains
I am currently health checking an application which experiences UI stuttering during heavy usage.
Using the Microsoft Concurrency Visualizer extension for Visual Studio 2015 showed that quite a lot of short-lived threads are created and stopped after ~100 ms of execution.
Unfortunately, their displayed call stack looks like clr.dll!0x98071 ntdll.dll!0x634fb, and I am not quite sure how to extract useful information out of it.
I have no clue what the purpose of those threads is, or which part of the application's code is creating them.
How can I better identify where each one of them gets started?
In the code, I was able to grep a handful of Tasks, another handful of QueueUserWorkItem calls, several dozen plain Thread instantiations, some System.Threading.Timer and System.Timers.Timer usages, and no Reactive Extensions. I put breakpoints on all of them, but it seems I am missing some...
I don't think those threads come from the thread pool, because then they would be shown in a synchronization state in Concurrency Visualizer; instead they just end, and another one with another ID gets created later. But maybe I am mistaken.
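One low-tech trick that can help narrow this down (assuming you can touch at least some of the creation sites) is to name every thread you create explicitly; Concurrency Visualizer and the Visual Studio Threads window display managed thread names, so any thread that still shows up unnamed must come from a site you haven't instrumented yet. A rough sketch, where ProcessQueue is a hypothetical worker routine:

    using System;
    using System.Threading;

    static class ThreadTagging
    {
        static void ProcessQueue() // hypothetical worker routine
        {
            // Logging the managed id at the start of each callback also helps
            // map thread-pool work (Tasks, timers) to the threads running it.
            Console.WriteLine("started on managed thread " +
                              Thread.CurrentThread.ManagedThreadId);
        }

        static void Start()
        {
            // Named threads are easy to spot in Concurrency Visualizer.
            var worker = new Thread(ProcessQueue)
            {
                Name = "QueueConsumer",  // shows up instead of an anonymous id
                IsBackground = true
            };
            worker.Start();
        }
    }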
We also use a few third-party libs and a bunch of JuggerNET generated code, so maybe the origin is not even in the application itself.
I was finally able to find the culprit behind those short-lived threads by looking closely at some of the cryptic call stacks, which included, for instance:
mmdevapi.dll
wdmaud.drv
avrt.dll
audioses.dll
That led me to think I should double-check the sound alert system, and it was indeed the one spawning those threads.
Note:
I will not accept my own answer, however, because I would like someone to share a better process, or any kind of tips and tricks, for diagnosing the origin of unwanted threads.
I essentially want to make an API for an application, but I only want one instance of that DLL to be running at a time.
Multiple applications also need to be able to use the DLL at the same time, as you would expect from a normal API.
However, I want the different applications to use the same instance of the DLL. This is because of communication with hardware that I don't want to overlap.
DLLs are usually loaded once per process, so if your application is guaranteed to only be running in single-instance mode, there's nothing else you have to do. Your single application instance will have only one loaded DLL.
Now, if you want to "share" a "single instance" of a DLL across applications, you will inevitably have to resort to a client-server architecture. Your DLL will have to be wrapped in a Windows Service, which would expose an HTTP (or WCF) API.
You can't do it the way you intend to. The best approach would be to have a single process (a DLL is not a process) that receives and processes messages, and have your multiple clients use an API (this would be your DLL) that just sends messages to that process.
The intercommunication between those processes (your single server process and the clients sending or receiving messages via your API) could be done in many ways; choose the one that suits you best (basically, any kind of client/server architecture, even if the clients and the server are running on the same hardware).
This is an XY-Problem type of question. Your actual requirement is serializing interactions with the underlying hardware, so they do not overlap. Perhaps this is what you should explicitly and specifically be asking about.
Your proposed solution is to have a DLL that acts as a kind of OS-wide singleton or something like that. This is actually what you are asking about, although it is still not the right approach, in my opinion. The OS is in charge of managing the lifetime of the DLL modules in each process. There are many aspects to this, but for one: most DLL contents are already shared between processes (mostly code sections, resources and such; data, of course, is not shared by default).
To solve your actual problem, you would have to resort to multi-process synchronization techniques. In Windows, this works mostly through named kernel objects like mutexes, semaphores, events and such. Another approach would be to use IPC, as other folks have already mentioned in their respective answers, though that would in turn require some kind of synchronization itself.
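A minimal sketch of the named-mutex approach (the mutex name and helper are hypothetical; the "Global\" prefix makes the mutex visible across sessions):

    using System.Threading;

    static class HardwareGate
    {
        static void UseHardwareExclusively()
        {
            // Every process that opens a mutex with the same name contends
            // on the same kernel object, so hardware access is serialized.
            using (var mutex = new Mutex(false, @"Global\MyHardwareLock")) // hypothetical name
            {
                mutex.WaitOne();          // block until no other process holds it
                try
                {
                    TalkToHardware();     // hypothetical critical section
                }
                finally
                {
                    mutex.ReleaseMutex(); // always release, even on failure
                }
            }
        }

        static void TalkToHardware() { /* ... */ }
    }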
Maybe all this is already handled by that hardware's device driver. What would be the real scenarios in which overlapped interactions with the underlying hardware would have a negative impact on the applications that use your DLL?
To ensure you have loaded only one DLL per machine, you could run a controlling assembly in a separate AppDomain, then try creating a named pipe for remoting (with IpcChannel) and claim the hardware resources; creating the IpcChannel will fail the second time in the same environment. If you need high-performance communication with your hardware, use remoting only for claiming and releasing the resource, and do the actual work through another assembly used by the applications.
A Mutex is one solution for exclusive control across multiple processes.
But a Mutex can sometimes lead to deadlock. Be careful if you use one.
I have a process that I need to turn into a service. This process runs autonomously right now, so there are no concerns with user interaction; I just need to "turn" it into a service. I got to thinking about it and decided that I could just create a service that launches the process. This would give me the added benefit of having outside control of the process: I could watch it for an unexpected exit and re-launch it, and I could also watch its memory usage and kill it if it gets out of hand. I don't think I have seen many other applications do this, though, and I was thinking there must be a reason why, so...
It's going to add complexity.
Instead of just having the process exist, you'll now need to make a second executable to "launch and monitor" this process. This adds overhead (the service and process both running), adds complexity, and makes life as a whole a bit more difficult.
That being said, if you've got a .NET console application, turning it into a service is trivial. Your Main routine basically just gets moved into a method and launched in a thread. Once you do that, the service application is effectively done: all that's left is configuring the service (which can be done in a designer) and overriding OnStart to spin up a thread and call your routine.
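A sketch of that conversion (the type and method names are placeholders):

    using System.ServiceProcess;
    using System.Threading;

    public class MyWorkerService : ServiceBase // hypothetical service class
    {
        private Thread _worker;

        protected override void OnStart(string[] args)
        {
            // WorkerProgram.MainLoop is the old console Main, moved into a method.
            _worker = new Thread(WorkerProgram.MainLoop) { IsBackground = true };
            _worker.Start();
        }

        protected override void OnStop()
        {
            // Signal the loop to finish; a real implementation would set an
            // event or CancellationToken that MainLoop checks.
        }

        public static void Main()
        {
            ServiceBase.Run(new MyWorkerService());
        }
    }

    // Hypothetical: the old console program's entry logic.
    public static class WorkerProgram
    {
        public static void MainLoop() { /* former Main body */ }
    }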
This is a good idea, but you've reinvented the wheel. What you're thinking of is essentially server monitoring. There are several high-quality open source implementations of what you want.
Pretty much anything that you can do this way you can do with less complexity by just putting the application logic in the service. Not to mention that you get Service Recovery for free by doing it in the service directly.
I am writing a C# Windows service which will perform some background processing; basically, it is a consumer for a work queue.
It needs to not go down (stop processing new items), and if it does go down, I need to be notified.
What are some design guidelines and considerations for (a) ensuring that such a service is as reliable as possible, and (b) sending out a notification if something does go wrong? I have considered, for instance, creating a watcher thread whose only job is to make sure the worker thread is still processing jobs.
There are a number of things you can do here to improve reliability, as well as to verify that you have a solution that is going to meet your needs.
Testing
First and foremost, the testing process you go through will need to be very solid: test those "unexpected" situations (loss of network connection, etc.), make sure you exercise them, and see what happens. Notification on failure can be a bit of a "mixed bag"; for example, you can't e-mail yourself if no network connection is available.
Proper Code Design
In addition to setting up valid test scenarios, make sure your code is as bullet-proof as possible. Since you are creating a Windows service, be sure that you are capturing, logging, and dealing with every possible error; if an error bubbles up to the OS, your service will go down.
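For instance, the worker loop might catch and log everything at the top level so nothing reaches the OS. A rough sketch (the stop flag, work routine, and event-log source name are hypothetical; the source must be registered at install time):

    using System;
    using System.Diagnostics;

    class QueueWorker
    {
        volatile bool _stopRequested; // hypothetical flag set from OnStop

        void WorkLoop()
        {
            while (!_stopRequested)
            {
                try
                {
                    ProcessNextItem(); // hypothetical unit of work
                }
                catch (Exception ex)
                {
                    // Log and keep the service alive instead of letting the
                    // exception bubble up and take the process down.
                    EventLog.WriteEntry("MyQueueService", ex.ToString(),
                                        EventLogEntryType.Error);
                }
            }
        }

        void ProcessNextItem() { /* dequeue and handle one item */ }
    }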
Monitoring
Consider putting monitoring in place. At my day job we use two types of monitoring: errors are reported to the Windows Event Log in some cases, and Microsoft MOM is used to notify us of any and all issues in the environment. A second mechanism we use is a scheduled job that, every X minutes, validates that the critical service is in a "Started" state; if it isn't, the job restarts it. Not elegant, but it works.
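That second job can be as small as this sketch, run from a scheduled task (the service name is hypothetical):

    using System;
    using System.ServiceProcess;

    class Watchdog
    {
        static void Main()
        {
            // Restart the critical service if it is not running.
            using (var sc = new ServiceController("MyQueueService")) // hypothetical name
            {
                if (sc.Status != ServiceControllerStatus.Running)
                {
                    sc.Start();
                    sc.WaitForStatus(ServiceControllerStatus.Running,
                                     TimeSpan.FromSeconds(30));
                }
            }
        }
    }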
I think MOM and/or SolarWinds or some other monitoring application, which your system administrator might already be using to monitor the machine the service is deployed on, could take the proper action (send email, ring phones :)).
I understand roughly what an AppDomain is; however, I don't fully understand the uses for one.
I'm involved in a large server-based C#/C++ application, and I'm wondering how using AppDomains could improve its stability, security, and performance.
In particular:
I understand that a fault or fatal exception in one domain does not affect other app domains running in the same process. Does this also hold true for unmanaged / C++ exceptions, possibly even heap corruption or other memory issues?
How does inter-AppDomain communication work?
How is using AppDomains different from simply spawning many processes?
The basic use case for an AppDomain is an environment that hosts third-party code, where it is necessary not just to load assemblies dynamically but also to unload them.
There is no way to unload an assembly individually, so you have to create a separate AppDomain to house anything that might need to be unloaded. You can then trash and rebuild the whole AppDomain when necessary.
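A sketch of that pattern follows. The key is that only a MarshalByRefObject proxy crosses the boundary, so the third-party assembly is loaded into the child domain alone and disappears when that domain is unloaded (paths and names are illustrative):

    using System;
    using System.Reflection;

    // Lives in a shared assembly; instances of it run inside the child domain.
    public class PluginLoader : MarshalByRefObject
    {
        public string Describe(string assemblyPath)
        {
            // Loads the assembly into THIS (the child) domain only.
            Assembly asm = Assembly.LoadFrom(assemblyPath);
            return asm.FullName; // strings marshal by value, so no plugin types leak
        }
    }

    public static class PluginHost
    {
        public static void Run()
        {
            AppDomain child = AppDomain.CreateDomain("PluginDomain");
            var loader = (PluginLoader)child.CreateInstanceAndUnwrap(
                typeof(PluginLoader).Assembly.FullName,
                typeof(PluginLoader).FullName);

            Console.WriteLine(loader.Describe(@"C:\plugins\thirdparty.dll")); // hypothetical path

            // Unloading the domain unloads every assembly it loaded.
            AppDomain.Unload(child);
        }
    }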
By the way, no feature of the CLR can protect against native code corrupting the heap. Ultimately the CLR is implemented natively and shares the same address space, so native code in the process can scribble all over the internals of the CLR! The only way to isolate badly behaved (i.e. most) native code is actual process isolation at the OS level: launch multiple .exe processes and have them communicate via some IPC mechanism.
I highly recommend CLR Via C# by Jeffrey Richter. In particular chapter 21 goes into good detail regarding the purpose and uses of AppDomains.
In answer to your points/question:
AppDomains will not protect your application from rogue unmanaged code. If this is an issue you will most likely need to use full process isolation provided by the OS.
Communication between AppDomains is performed using .NET remoting to enforce isolation. This can use marshal-by-reference or marshal-by-value semantics, with a trade-off between performance and flexibility.
AppDomains are a lightweight way of achieving process-like isolation within managed code. They are considered lightweight because you can create multiple AppDomains within a single process, avoiding the resource and performance overhead of multiple OS processes. Also, a single thread can execute code in one AppDomain and then in another, since Windows knows nothing about AppDomains (you can see which domain you're in by using System.AppDomain.CurrentDomain).
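To illustrate the two marshaling semantics mentioned above (both types are illustrative):

    using System;

    // Marshal by reference: the object stays in its home domain and callers
    // in other domains get a transparent proxy; every call crosses the boundary.
    public class ByRefService : MarshalByRefObject
    {
        public int Add(int a, int b) { return a + b; }
    }

    // Marshal by value: the object is serialized and copied into the calling
    // domain; later calls are local and cheap, but the two copies can diverge.
    [Serializable]
    public class ByValueDto
    {
        public string Payload;
    }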
Actually, it is not true that a critical failure in one AppDomain can't impact others. In the case of bad things happening, the best bet is to tear down the process. There are a few examples, but to be honest I haven't memorised them; I simply took a mental note: "bad things = tear down process (check)".
Benefits of AppDomain:
you can unload an AppDomain; I use this for a system that compiles itself (meta-programming) based on data from the database. It can spin up an app domain to host the new DLL for a while, and then swap it safely when new data is available (and built)
comms between AppDomains are relatively cheap. IMO this is the only time I am happy to use remoting (although you still need to be really careful about the objects on the boundary, to avoid bleeding references between domains, which causes "fusion" to load extra DLLs into the primary AppDomain, causing a leak). It is really easy, too: just CreateInstanceAndUnwrap (or is it CreateInstanceFromAndUnwrap?)
vs spawning an extra process: you could go either way, but you don't need another exe for AppDomain work, and it is much easier to set up any comms you need
I'm not claiming to be an expert on AppDomains, so my answer will not be all-encompassing. Perhaps I should start off by linking to a great introduction by a guy who does come off as somewhat of an expert, and which does seem to cover all aspects of AppDomain usage.
My own main encounter with AppDomains has been in the security field. There, the greatest advantage I've found has been the ability to have a master domain running in high trust spawn several child domains with restricted permissions. Without separate app domains, merely restricting permissions inside the high-trust code wouldn't be enough, because the restricted code would still have the permission to elevate its own privileges.
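A sketch of that setup on the full .NET Framework (the permission choices and path are illustrative):

    using System;
    using System.Security;
    using System.Security.Permissions;

    public static class SandboxFactory
    {
        public static AppDomain CreateSandbox()
        {
            // Grant set for the child domain: it may execute code,
            // but gets no file, network, or UI permissions.
            var permissions = new PermissionSet(PermissionState.None);
            permissions.AddPermission(
                new SecurityPermission(SecurityPermissionFlag.Execution));

            var setup = new AppDomainSetup
            {
                ApplicationBase = @"C:\sandbox" // hypothetical: where untrusted code lives
            };

            // Code loaded into this domain cannot elevate beyond its grant set.
            return AppDomain.CreateDomain("Sandbox", null, setup, permissions);
        }
    }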
Using app domain segregation to run completely independent code modules, in order to address memory-sharing and stability concerns, is more of an illusion than a reality.