Dynamically update DLLs on a running instance - C#

We have a service running that connects with hundreds of devices over TCP. Every time we want to update this service, we need to restart it, which causes a connection loss for all devices.
To prevent this, we want to divide our application into a connection part and a business logic/data layer part. This would give us the option to update the business logic/data layer without restarting the connection part. It could be done with WCF services, but the system should respond as fast as possible, and introducing another connection would add extra delay.
Would it be possible to update a DLL file without restarting the application, and give the application an instruction so it will load the new DLL and discard the old one? Of course, this only works as long as the interface between the layers doesn't break.

According to MSDN:
"There is no way to unload an individual assembly without unloading all of the application domains that contain it. Even if the assembly goes out of scope, the actual assembly file will remain loaded until all application domains that contain it are unloaded."
Reference: http://msdn.microsoft.com/en-us/library/ms173101(v=vs.90).aspx
My approach would probably involve some sort of local communication between the communication layer and the business logic, each in a different context (AppDomain), via named pipes or memory-mapped files, for example.
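As a rough illustration of the named-pipe option (the pipe name and line-based message format are invented here, not taken from the question), the connection layer keeps its TCP sessions alive and simply reconnects to whichever business-logic process is currently running:

```csharp
using System.IO;
using System.IO.Pipes;

static class PipeBridge
{
    // Business-logic side: answer one request per connection.
    // When a new build of this process replaces the old one, the
    // connection layer just reconnects; its TCP sessions stay up.
    public static void ServeOnce()
    {
        using (var server = new NamedPipeServerStream("BusinessLogicPipe"))
        {
            server.WaitForConnection();
            var reader = new StreamReader(server);
            var writer = new StreamWriter(server) { AutoFlush = true };
            string request = reader.ReadLine();
            writer.WriteLine("processed: " + request); // real logic goes here
        }
    }

    // Connection-layer side: forward a device message, return the reply.
    public static string Forward(string message)
    {
        using (var client = new NamedPipeClientStream(".", "BusinessLogicPipe"))
        {
            client.Connect(1000); // ms; fails fast if the logic process is mid-update
            var writer = new StreamWriter(client) { AutoFlush = true };
            var reader = new StreamReader(client);
            writer.WriteLine(message);
            return reader.ReadLine();
        }
    }
}
```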

Here is a good example of loading/unloading an assembly dynamically:
http://www.c-sharpcorner.com/uploadfile/girish.nehte/how-to-unload-an-assembly-loaded-dynamically-using-reflection/
Be careful about speed: MethodInfo.Invoke is slow, so you might want to look into using DynamicMethod instead. Also, creating and destroying app domains is slow.
http://www.wintellect.com/blogs/krome/getting-to-know-dynamicmethod
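To make the AppDomain approach concrete, here is a minimal sketch; IWorker, the assembly path, and the type name are placeholders, not anything from the linked article. The contract interface must live in an assembly both sides reference, and the plugin type must derive from MarshalByRefObject so only a proxy crosses the boundary:

```csharp
using System;

// Shared contract, referenced by both the host and the plugin DLL.
public interface IWorker
{
    string Process(string input);
}

public static class PluginHost
{
    public static string RunOnce(string assemblyPath, string typeName, string input)
    {
        AppDomain domain = AppDomain.CreateDomain("PluginDomain");
        try
        {
            // The instance lives in the plugin domain; the host only
            // holds a remoting proxy to it.
            var worker = (IWorker)domain.CreateInstanceFromAndUnwrap(assemblyPath, typeName);
            return worker.Process(input);
        }
        finally
        {
            // Unloading the whole domain is the only way to release the DLL.
            AppDomain.Unload(domain);
        }
    }
}
```

Calls through the proxy pay remoting overhead, which is why the speed warnings above matter.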
You can also use what is called a "plugin" framework. CodePlex has one: MEF, the Managed Extensibility Framework.
http://mef.codeplex.com/
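A minimal MEF sketch for comparison (the IPlugin contract and plugin folder are invented for illustration); note that MEF composes parts but, like any other code, cannot unload an assembly without unloading its AppDomain:

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin
{
    void Execute();
}

[Export(typeof(IPlugin))]
public class SamplePlugin : IPlugin
{
    public void Execute() { /* plugin work goes here */ }
}

public class Host
{
    [ImportMany]
    public IPlugin[] Plugins { get; set; }

    public void Compose()
    {
        // Scan a folder for DLLs that export IPlugin.
        var catalog = new DirectoryCatalog(@".\plugins");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // populates the Plugins array
    }
}
```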

Related

C# Restricting DLLs to only one instance

I essentially want to make an API for an application, but I only want one instance of that DLL to be running at one time.
Multiple applications also need to be able to use the DLL at the same time, as you would expect from a normal API.
However, I want the different applications to use the same instance of the DLL. This is because it communicates with hardware, and I don't want those interactions to overlap.
DLLs are usually loaded once per process, so if your application is guaranteed to only be running in single-instance mode, there's nothing else you have to do. Your single application instance will have only one loaded DLL.
Now, if you want to "share" a "single instance" of a DLL across applications, you will inevitably have to resort to a client-server architecture. Your DLL will have to be wrapped in a Windows Service, which would expose an HTTP (or WCF) API.
You can't do that the way you intend to. The best approach would be to have a single process (a DLL is not a process) which receives and processes messages, and have your multiple clients use an API (this would be your DLL) that just sends messages to this process.
The intercommunication of those two processes (your single process and the clients sending or receiving messages via your API) could be done in many ways; choose the one that suits you best (basically, any kind of client/server architecture, even if the clients and the server run on the same hardware).
This is an XY-problem type of question. Your actual requirement is serializing interactions with the underlying hardware so that they do not overlap. Perhaps this is what you should explicitly and specifically be asking about.
Your proposed solution is to have a DLL that is kind of an OS-wide singleton or something like that. This is actually what you are asking about; although it is still not the right approach, in my opinion. The OS is in charge of managing the lifetime of the DLL modules in each process. There are many aspects to this, but for one: most DLL instances are already being shared between every process (mostly code sections, resources and such - data, of course, is not shared by default).
To solve your actual problem, you would have to resort to multi-process synchronization techniques. In Windows, this works mostly through named kernel objects like mutexes, semaphores, events and such. Another approach would be to use IPC, as other folks have already mentioned in their respective answers, which then again would require in itself some kind of synchronization.
Maybe all this is already handled by that hardware's device driver. What would be the real scenarios in which overlapped interactions with the underlying hardware would have a negative impact on the applications that use your DLL?
To ensure you have loaded one DLL per machine, you would need to run a controlling assembly in a separate AppDomain, then try creating a named pipe for remoting (with IpcChannel) and claim the hardware resources. Creating the IpcChannel will fail the second time in the same environment. If you need high-performance communication with your hardware, use remoting only for claiming and releasing the resource, not for the data path itself.
A Mutex is one solution for exclusive control across multiple processes.
But be careful: misusing a Mutex can sometimes lead to deadlock.
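A minimal sketch of the named-Mutex approach (the mutex name is a placeholder; the "Global\" prefix makes it visible across user sessions), with a timeout and abandoned-mutex handling to limit deadlock damage:

```csharp
using System;
using System.Threading;

static class HardwareGate
{
    // Named mutexes are machine-wide kernel objects, so this serializes
    // hardware access across every process that uses the DLL.
    static readonly Mutex Gate = new Mutex(false, @"Global\MyHardwareMutex");

    public static void UseHardware(Action work)
    {
        bool owned = false;
        try
        {
            // Time out instead of waiting forever.
            owned = Gate.WaitOne(TimeSpan.FromSeconds(5));
            if (!owned)
                throw new TimeoutException("Hardware is busy.");
        }
        catch (AbandonedMutexException)
        {
            // A previous holder died without releasing; we now own it anyway.
            owned = true;
        }

        try { work(); }
        finally { if (owned) Gate.ReleaseMutex(); }
    }
}
```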

Plugin architecture for .NET multi-agent simulation (runtime load/unload)

DESCRIPTION
I am currently designing an architecture for a C# multiagent simulation, where agent actions are driven by many modules in their "brain", which may read sensors, vote for an action or send messages/queries to other modules (all of this is implemented through the exchange of messages).
Of course, modules can have a state.
Modules run in parallel: they have an update method which consumes messages and queries and performs some sort of computation. The update methods return iterators and have multiple yields in their bodies, so that I can schedule modules cooperatively. I do not use a separate thread for each module because I expect to have hundreds to thousands of modules for every agent, which would lead to a huge amount of RAM being occupied by thread overhead.
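(For illustration, the iterator-based scheduling could be sketched like this; IModule and the message plumbing are simplified stand-ins, not the actual project code.)

```csharp
using System.Collections.Generic;

// One brain module: Update() yields wherever it is willing to be paused.
interface IModule
{
    IEnumerator<object> Update();
}

class CooperativeScheduler
{
    readonly List<IEnumerator<object>> running = new List<IEnumerator<object>>();

    public void Add(IModule module)
    {
        running.Add(module.Update());
    }

    // One simulation tick: advance every module to its next yield.
    // No threads involved, so thousands of modules cost only iterator state.
    public void Tick()
    {
        for (int i = running.Count - 1; i >= 0; i--)
        {
            if (!running[i].MoveNext())
                running.RemoveAt(i); // module finished this update
        }
    }
}
```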
I would like these modules to behave like runtime plugins, so that while the simulation is running I can add new module classes and rewrite/debug existing ones, without ever stopping the simulation process, and then use those classes to add and remove modules from the agents' brains, or just let existing modules change their behaviours due to new implementations of their methods.
POSSIBLE SOLUTIONS
I have come up with a number of possible solutions in the last few days, which all have something disappointing:
Compile my modules into DLLs, load each in a different AppDomain and then use AppDomain.CreateInstanceFromAndUnwrap() to instantiate the module, which I would then cast to some IModule interface, shared between my simulation and the modules (and implemented by each module class). The interface would expose just the SendMessage, the Update and a few other members, common to all modules.
The problem with this solution is that calls between AppDomains are much slower than direct calls (within the same AppDomain).
Also, I don't know the overhead of AppDomains, but I suppose that they are not free, so having thousands could become a problem.
Use some scripting language for the modules, while keeping C# for the underlying engine, so that there is no assembly loading/unloading. Instead, I would host an execution context for the scripting language for each module.
My main concern is that I do not know a scripting language which is big (as in 'Python, Lua, Ruby, and JS are big; AutoIt and Euphoria are not'), fast, embeddable into .NET, and supports step-by-step execution (which I need in order to schedule module execution cooperatively).
Another concern is that I suppose I'd have to use a runtime context for each module, which in turn would have massive overhead.
Lastly, a scripting language would probably be slower than C#, which would reduce performance.
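For what it's worth, IronPython is one .NET-embeddable candidate, and Python generators map naturally onto the step-by-step execution requirement. A minimal hosting sketch (the module source and names are invented):

```csharp
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

static class ScriptedModule
{
    public static void Run()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // A module written as a generator: each yield is a scheduling point.
        engine.Execute(
            "def update():\n" +
            "    for step in range(3):\n" +
            "        yield step\n",
            scope);

        dynamic update = scope.GetVariable("update");
        foreach (var step in update())
        {
            // The generator is suspended here; other modules can run in between.
        }
    }
}
```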
Avoid unloading assemblies; instead, rename/version them somehow, so that I can have a ton of different versions and just use the latest one for each type.
I'm not even sure this is possible (due to identically named types and namespaces).
Even if possible, it would be very memory-inefficient.
Do a transparent restart of the simulation, which means pausing the simulation (and execution of the scheduler of brains/modules), serializing everything (including every module), exiting the simulation, recompiling the code, starting the simulation again, deserializing everything, catching any exception raised due to the changes I made to the class and resuming execution.
This is a lot of work, so I consider it my last resort.
Also, this whole process would become very slow at some point, depending on the number of modules and their sizes, making it impractical.
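A checkpoint for solution 4 could look roughly like this (BrainState is a stand-in for the real simulation state); note that deserialization is exactly where changed module classes would throw, as described above:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class BrainState
{
    // Stand-in for the real agent/module state.
    public Dictionary<string, object> ModuleState = new Dictionary<string, object>();
}

static class Checkpoint
{
    public static void Save(BrainState state, string path)
    {
        using (var stream = File.Create(path))
            new BinaryFormatter().Serialize(stream, state);
    }

    public static BrainState Load(string path)
    {
        // Throws if a serialized type no longer matches the recompiled code.
        using (var stream = File.OpenRead(path))
            return (BrainState)new BinaryFormatter().Deserialize(stream);
    }
}
```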
I could overcome this last problem (the whole process in solution 4 becoming slow) by mixing solutions 3 and 4: loading many, many assemblies with some form of versioning and performing a restart to clean up the mess every now and then. Still, I would prefer something that doesn't interrupt the whole simulation just because I made a small change in a module class.
ACTUAL QUESTION
So here are my questions: is there any other solution? Did I miss any workaround for the problems with the ones I found?
For example, is there some scripting language for .NET which satisfies my needs (solution #2)? Is versioning possible, in the way I vaguely described it (solution #3)?
Or, more simply: is .NET the wrong platform for this project? (I'd like to stick with it because C# is my main language, but I could see myself doing this in Python or something similar if necessary.)
Did you consider the Managed Extensibility Framework?
I'm working on a simulation system that works in a very similar way, treating agent modules as plugins.
I created a Plugin Manager that handles everything related to domain loading: it checks plugin validity in a dummy domain and then hot-loads the plugin in the engine domain.
AppDomains give you full control, and you can reduce processing time by running your Plugin Manager's tasks in parallel.
AppDomains aren't cost-free, but you can manage with only two (or three, if you need more isolation between the validation and execution domains).
Once a plugin file is validated, you can load it into the main process at any time. Creating a shadow copy in a domain's probing path (or in its dynamic path, if set) and targeting that instead of the original file is useful for checking versioning and updates.
Using one domain for validation and another for execution may require a swap context that takes care of previous-version instances while updating.
Keep a scheduled task that checks for new plugins and new versions; then block plugin module usage, swap files, reload, and unblock, re-instancing new versions from the previous ones if necessary.
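A sketch of the shadow-copy setup described above (paths and names are illustrative); shadow copying is what keeps the original plugin files unlocked so new versions can be dropped in:

```csharp
using System;

static class EngineDomainFactory
{
    public static AppDomain Create(string pluginDir)
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            PrivateBinPath = pluginDir,        // probing path for plugin DLLs
            ShadowCopyFiles = "true",          // note: a string, not a bool
            ShadowCopyDirectories = pluginDir  // copy these before loading
        };

        // Assemblies load from the shadow copy, so the original files
        // stay unlocked and can be overwritten with new versions.
        return AppDomain.CreateDomain("EngineDomain", null, setup);
    }
}
```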

Windows service split into multiple services to reduce load, but all services use a common DLL for processing. Does it help reduce load?

I'm trying to reduce the load on a Windows service by splitting the load pie into multiple services on the same server box.
But each service uses the same DLL, shared between all five Windows services, to perform the underlying processing.
Would this model of distribution of load / load-balancing make sense?
Would I be better off if I deployed each service with its own processor.dll?
Thanks
If I understand your question correctly, you are asking whether sharing a DLL would affect the load balancing of the app; as in, "am I really running two instances if I'm sharing the DLL?"
In that case, no, it will not: there is no difference if both processes use the exact same folder/DLL or even EXE file. There is no need to deploy different files for each service.
Every time you start the service (a Windows service, I'm assuming), a new process is created in complete isolation.
Sharing the DLL will certainly save you some physical memory (especially if it's a large DLL), and of course disk space.
It may be more efficient to make a single multi-threaded service; then you will be able to load several CPU cores while using common RAM structures.

Running a .NET application in a sandbox

Over the past few months, I've developed a personal tool that I use to compile C# 3.5 XAML projects online. Basically, I compile with the CodeDom compiler. I'm thinking about making it public, but the problem is that it is -very-very- easy to do anything on the server with this tool.
The reason I want to protect my server is because there's a 'Run' button to test and debug the app (in screenshot mode).
Is it possible to run an app in a sandbox - in other words, limiting memory access, hard drive access and BIOS access - without having to run it in a VM? Or should I just analyze all the code, or 'disable' the Run mode?
Spin up an AppDomain, load assemblies into it, look for an interface you control, activate the implementing type, and call your method. Just don't let any instances you don't 100% control cross that AppDomain boundary (including exceptions!).
Controlling the security policies for your external-code AppDomain is a bit much for a single answer, but you can check this link on MSDN or just search for "code access security msdn" to get details about how to secure this domain.
Edit: There are exceptions you cannot stop, so it is important to watch for them and record in some manner the assemblies that caused the exception so you will not load them again.
Also, it is always better to inject into this second AppDomain a type that you then use to do all loading and execution. That way you are assured that no type that could bring down your entire application will cross any AppDomain boundary. I've found it useful to define a type that extends MarshalByRefObject and runs the insecure code in the second AppDomain when you call its methods. It should never accept or return an unsealed type that isn't marked Serializable across the boundary, either as a method parameter or as a return type. As long as you can accomplish this, you are 90% of the way there.
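A minimal sandbox sketch along those lines, assuming .NET 4's simplified-sandboxing overload of AppDomain.CreateDomain (the grant set here allows execution only; the path is a placeholder):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

static class Sandbox
{
    public static void RunUntrusted(string assemblyPath)
    {
        // Grant only the right to execute: no file, network, or UI access.
        var grantSet = new PermissionSet(PermissionState.None);
        grantSet.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        var setup = new AppDomainSetup
        {
            ApplicationBase = @"C:\sandbox" // keep it away from your own binaries
        };

        AppDomain domain = AppDomain.CreateDomain("Sandbox", null, setup, grantSet);
        try
        {
            // Anything beyond pure execution throws SecurityException in here.
            domain.ExecuteAssembly(assemblyPath);
        }
        finally
        {
            AppDomain.Unload(domain);
        }
    }
}
```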

How to make my AppDomain live longer?

Here is the situation that we're in.
We are distributing our assemblies (pure DLLs) to our clients (we don't have control over their environment).
They call us, passing a list of item IDs, and we search through our huge database and return the items with the highest price. Since we have an SLA (30 milliseconds) to meet, we cache our items in a memory cache (using Microsoft's MemoryCache); we cache about a million items.
The problem here is that the cache only lives for the lifetime of our client's application. When the process exits, so do all the cached items.
Is there a way I can make my MemoryCache live longer, so that a subsequent process can reuse the cached items?
I have considered having a Windows service and letting all these different processes communicate with the one instance on the same box, but that's going to create a huge mess when it comes to deployment.
We are using AppFabric as our distributed cache, but the only way we can meet our SLA is to use MemoryCache.
Any help would be greatly appreciated. Thank you
I don't see a way to make sure that your AppDomain lives longer, since all the calling assembly has to do is unload the AppDomain...
One option, although messy too, could be to implement some sort of "persistent MemoryCache"... to achieve performance you could use a ConcurrentDictionary persisted in a MemoryMappedFile...
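A bare-bones sketch of that memory-mapped-file idea (the file path, map name, and layout are invented; a real cache would need a serialization and indexing scheme on top):

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

static class SharedCacheFile
{
    // File-backed, so the data survives any single process; the map name
    // lets concurrently running processes share the same view.
    public static MemoryMappedFile Open()
    {
        return MemoryMappedFile.CreateFromFile(
            @"C:\cache\items.bin", FileMode.OpenOrCreate,
            "ItemCache", 64L * 1024 * 1024); // 64 MB capacity
    }

    public static void Demo()
    {
        using (var mmf = Open())
        using (var view = mmf.CreateViewAccessor())
        {
            view.Write(0, 12345L);          // e.g. an item price at offset 0
            long price = view.ReadInt64(0); // another process can read it back
            Console.WriteLine(price);
        }
    }
}
```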
Another option would be to use a local database, which could even be SQLite, and implement the cache interface in memory such that all writes/updates/deletes are "write-through" while reads are pure RAM access...
Another option could be to include an EXE (as an embedded resource, for example) and start it from inside the DLL if it is not already running... the EXE would provide the MemoryCache, and communication could be via IPC (for example, shared memory...). Since the EXE is a separate process, it would stay alive even after your AppDomain is unloaded... the problem with this is more whether the client likes it and/or permissions allow it...
I really like the Windows Service approach, although I agree that it could be a deployment mess...
The basic issue seems to be that you don't have control of the run-time Host - which is what controls the lifespan (and hence the cache).
I'd investigate creating some sort of (light-weight ?) host - maybe a .exe or a service.
The bulk of your DLLs would hang off the new host, but you could still deploy a "facade" DLL which in turn calls your main solution (tied to your host). Yes, you could have the external clients call your new host directly, but that would mean changing/re-configuring those external callers, whereas leaving your original DLL/API in place would isolate the external callers from your internal changes.
This would (I assume) mean completely gutting and re-structuring your solution, particularly whatever DLLs the external callers currently hit, because instead of processing the requests themselves they would just pass the requests off to your new host.
Performance
Inter-process communication is more expensive than keeping everything within a process; I'm not sure how the change in approach would affect your performance and your ability to hit the SLA.
In particular, spinning up a new instance of the host will incur a performance hit.
