Why use delegates when using object remoting with MarshalByRefObject? - c#

My app allows plugins. I have a Core class (a MarshalByRefObject) that plugins must inherit, and this class offers various functionality. Now my question is: when this class is instantiated in the main app domain and passed to the plugin in a different app domain, what would be the benefit of using delegates in such a scenario?
public class Core : MarshalByRefObject
{
    public void DoSomething()
    {
        MyMainApp.Delegate1("SomeMethod", "Test");
    }
}
So as you can see, my core class calls a delegate method on MyMainApp. I could as well just do MyMainApp.SomeMethod("test") instead.
However in many examples online about how remoting and plugin system works, everyone seems to be using delegates. Is there any specific reason for that? Could someone give me a more practical example of why?

Most of the time the controls in a user interface are created by the main thread unless you intentionally create them in another thread. Here is the important bit: ONLY the thread which created the control can access that control.
If you call DoSomething directly, and code in DoSomething wants to interact with a UI control, it will not be allowed and you will get an exception. MyMainApp.Delegate1("DoSomething", ...) is equivalent to saying: please execute the specified method on the main thread. Now it can access UI controls.
There are other reasons too but that is the most important bit to remember. See MSDN for more.
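The classic way this shows up in Windows Forms is the Control.InvokeRequired/Control.Invoke pattern. A minimal sketch (the statusLabel control and the ReportProgress name are illustrative, not from the question):

```csharp
// Inside a Windows Forms form. If a plugin callback arrives on a worker
// thread, touching statusLabel directly throws InvalidOperationException.
public void ReportProgress(string message)
{
    if (statusLabel.InvokeRequired)
    {
        // Marshal the call onto the UI thread that owns the control.
        statusLabel.Invoke(new Action<string>(ReportProgress), message);
        return;
    }
    statusLabel.Text = message; // safe: we are on the UI thread here
}
```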

One of the benefits is that the information passed to MyMainApp.Delegate1 is serialized for transport from the plugin appdomain to the main appdomain. The Delegate1 method will execute DoSomething in the main domain. The two domains don't share memory (so no direct access to object instances is possible), so this way you can dynamically run methods in another appdomain. And if it's done via reflection, a plugin might even be able to run unlisted methods.
I'd rather not use this type of construction, because there is no compile-time check on the methods being called. I'd rather use interfaces placed in satellite assemblies (to prevent the main appdomain from getting a reference to, and loading, the plugin assembly, after which it could never be unloaded).
The other thing:
If you call MyMainApp.SomeMethod("test") directly, the plugin must know the definition of the plugin loader. That gives you tight coupling from the plugin to a specific version of the parent application, which makes the whole plugin structure useless. You could fix that by implementing an ISupportSomeMethod interface on MyMainApp, defined in a satellite assembly that is used by both the main app and the plugin. If your MyMainApp doesn't implement the ISupportSomeMethod interface, the plugin isn't compatible with that program. This way your MyMainApp can support multiple plugin structures.
In this case you'd prefer an event structure, because the child object wants to trigger a method of its parent. Unfortunately, cross-domain event calls are not useful here, because your main module would load the plugin assembly and then couldn't unload it anymore. You could write a proxy class for that.
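A sketch of that satellite-assembly approach, using the ISupportSomeMethod name from above (everything else is invented for illustration):

```csharp
// Contracts assembly, referenced by both the host and the plugin.
public interface ISupportSomeMethod
{
    void SomeMethod(string text);
}

// In the host application (never referenced by the plugin directly):
public class MainApp : MarshalByRefObject, ISupportSomeMethod
{
    public void SomeMethod(string text) { Console.WriteLine(text); }
}

// In the plugin assembly; it compiles against the contracts assembly only.
public class Core : MarshalByRefObject
{
    private readonly ISupportSomeMethod _host;
    public Core(ISupportSomeMethod host) { _host = host; }

    public void DoSomething()
    {
        _host.SomeMethod("test"); // compile-time checked, loosely coupled
    }
}
```

If the host doesn't implement ISupportSomeMethod, a simple `host is ISupportSomeMethod` check tells the plugin it is incompatible with that program.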

Related

Mef : "Cannot serialize" when trying to load an ApiController from another appdomain

I'm trying to find a way to update an Asp.net web api (.Net framework 4.5) at runtime (without recycling the main appdomain) by adding new ApiController (downloaded by another service).
I tried to use Mef and was able to load the new ApiController in the current appdomain, but I got stuck when trying to update an existing plugin (the assembly is already added to the appdomain, so I can't add the new one).
So I decided to load the plugin containing the ApiController in a separate appdomain and use MarshalByRefObject to load it from the main appdomain but it turns out that ApiController cannot be serialized.
Do you know how I could serialize it?
Do you know an alternative?
Edit:
I managed to load different versions of an assembly (in the same appdomain) if the assembly is signed, but it doesn't match my requirements.
I haven't used MEF (because it is just as easy to implement its functionality from scratch, in contrast to MAF), but I do have some experience with bare AppDomains.
It is hard to tell much without seeing your code, but from what you wrote, it seems to me that you are confusing some things.
As you probably know and you already pointed out too, you can't actually update an already loaded assembly. Loading another version of it (having a different signature) means that you have two different assemblies loaded. The types within them will have different strong names. You could actually handle this if you want. The only way to unload an assembly is to unload the appdomain that contains it.
My problem is with this sentence:
... load the plugin containing the ApiController in a separate appdomain
and use MarshalByRefObject to load it from the main appdomain
Type (class) definition+code and instance data are two different things. Loading an assembly into an appdomain means you are loading type definition and code. Serialization comes into view when you want to transfer instance data across appdomain borders. You can't load type definition and code from another appdomain as you wrote (actually you could, but I doubt you need to). To be able to transfer instance data, both sides need to have knowledge of the type definition of the instance being transferred. The serialization and the transfer in this case are managed by the .NET remoting runtime.
You have two choices: either move all the instance data and have it serialized all the time, or choose the MarshalByRefObject way, as you said you did. Let's stay with this. To be able to work with an instance in another appdomain, you need to instantiate the type in the other appdomain using the activator (you can't use the new operator in this case) and get a reference to it, which will be a proxy based on the type you know (that can be an interface or a base class too, not only the exact type). Reflection is somewhat limited in such a situation, and ASP.NET is even less prepared to figure out the methods of a remote object - but you can help it with proper interfaces.
So, let's imagine you have created an instance of the controller in the other appdomain, and you have a remoting reference to it, assignable to an interface type that defines all the methods you need to expose to ASP.NET. Now serialization comes into view when you try to access the members of the controller class. Each method parameter and method return type needs to be serializable. But not the class itself, as it is a MarshalByRefObject descendant and will not be marshalled as an instance. And MarshalByRefObject has nothing to do with how you load the assembly into the appdomain.
But wait! Both MarshalByRefObject and ApiController are abstract classes. How do you want to derive your actual controller class from both? You can't. Thus I don't think you can use ApiControllers from another appdomain directly.
I could imagine two things:
1) Stay with loading the new signed version into the same appdomain and customize the routing mechanism to direct requests to the latest version (might no longer be valid, but could still be a good starting point: https://www.strathweb.com/2013/08/customizing-controller-discovery-in-asp-net-web-api/).
Of course, on restart, you should load only the latest one if you don't need to have multiple versions in parallel.
2) Make a slightly complex infrastructure:
define an interface for the controller logic
create the ApiController version-less and logic-less, but capable of creating and unloading appdomains, loading assemblies into them, keeping references to the instances (implementing the interface above) created in them, and directing requests to those
be aware that you won't be able to pass some things (like the controller context) to the logic in the other appdomain; you will have to extract what you need or recreate it on the other side
this way you can have the logic as a MarshalByRefObject descendant in the "remote" appdomain and your controller as an ApiController descendant in the main appdomain.
I would create an interim abstract class extending ApiController with the capability of handling the above separation on its own. The rest of the application wouldn't be aware of this.
be aware of the lifetime services involved in remoting, which you can handle either by using a sponsor or by overriding some methods of MarshalByRefObject.
Neither approach is simple; you will face some further challenges...
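A rough sketch of option 2, assuming .NET Framework AppDomain remoting; all names here (IControllerLogic, ProxyController, the route shape) are invented for illustration:

```csharp
// Shared contracts assembly, known to both appdomains.
public interface IControllerLogic
{
    string Execute(string request); // parameters and returns must be serializable
}

// Plugin assembly: loaded only into the child appdomain.
public class ControllerLogic : MarshalByRefObject, IControllerLogic
{
    public string Execute(string request) { return "handled: " + request; }
}

// Main appdomain: a version-less, logic-less controller that forwards calls.
public class ProxyController : ApiController
{
    private static AppDomain _child;
    private static IControllerLogic _logic;

    // Called when a new plugin version has been downloaded.
    public static void LoadLogic(string assemblyName, string typeName)
    {
        if (_child != null) AppDomain.Unload(_child); // drop the old version
        _child = AppDomain.CreateDomain("plugin-domain");
        _logic = (IControllerLogic)_child.CreateInstanceAndUnwrap(assemblyName, typeName);
    }

    public string Get(string id)
    {
        return _logic.Execute(id); // call is marshalled into the child appdomain
    }
}
```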

Allowing C# plugins register on application hooks

I am building a .NET based application, and would like to allow a more extensible and pluggable design.
For the sake of simplicity, the application exposes a set of operations and events:
DoSomething()
DoOtherThing()
OnError
OnSuccess
I would like to offer "plugins" to be loaded and hook into some of these operations (something like: When Event1 fires, run plugin1).
For example -- run plugin1.HandleError() when the OnError event fires.
This can be done easily with event subscription:
this.OnError += new Plugin1().HandleError;
The problem is that:
My app doesn't know of the type "Plugin1" (it is a plugin; my app does not reference it directly).
Doing so will instantiate the plugin ahead of time, something I do not want to do.
In "traditional" plugin models, the application (the "client" of the plugins) loads and executes the plugin code at certain key points (for example, an image-processing application when a certain operation is performed).
The control of when to instantiate the plugin code and when to execute it are known to the client application.
In my application, the plugin itself is to decide when it should execute ("Plugin should register on the OnError event").
Keeping the plugin "execution" code alongside the "registration" code poses the issue that the plugin DLL will get loaded into memory at registration time, something I wish to prevent.
For example, if I add a Register() method in the plugin DLL, the plugin DLL will have to be loaded into memory in order for the Register method to be called.
What could be a good design solution for this particular issue?
Lazily loading (or offering lazy/eager loading) of plugin DLLs.
Allowing plugins to control which various parts of the system/app they hook into.
You are trying to solve a non-existent problem. The mental image of all the types getting loaded when your code calls Assembly.LoadFrom() is wrong. The .NET Framework takes full advantage of Windows being a demand-paged virtual memory operating system. And then some.
The ball gets rolling when you call LoadFrom(). That makes the CLR create a memory-mapped file, a core operating system abstraction. It updates a bit of internal state to keep track of an assembly now being resident in the AppDomain; that is very minor. The MMF sets up the memory mapping, creating virtual memory pages that map the content of the file. Just a small descriptor in the processor's TLB. Nothing is actually read from the assembly file yet.
Next you'll use reflection to try to discover a type that implements an interface. That causes the CLR to read some of the assembly metadata from the assembly. At this point, page faults cause the processor to map the content of some of the pages that cover the metadata section of the assembly into RAM. A handful of kilobytes, possibly more if the assembly contains a lot of types.
Next, the just-in-time compiler springs into action to generate code for the constructor. That causes the processor to fault the page that contains the constructor IL into RAM.
Etcetera. The core idea is that assembly content is always read lazily, only when needed. This mechanism is no different for plugins; they work just like the regular assemblies in your solution, with the only difference that the order is slightly different. You load the assembly first, then immediately call the constructor - as opposed to calling the constructor of a type in your code and the CLR then immediately loading the assembly. It takes just as long.
What you need to do is find the path of the DLL, and then create an assembly object from it. From there, you will need to get the classes you wish to retrieve (for instance, anything that implements your interface):
var assembly = Assembly.Load(AssemblyName.GetAssemblyName(fileFullName));
foreach (Type t in assembly.GetTypes())
{
    // Skip anything that isn't a concrete implementation of the plugin interface.
    if (t.IsAbstract || !typeof(IMyPluginInterface).IsAssignableFrom(t)) continue;
    var instance = Activator.CreateInstance(t) as IMyPluginInterface;
    // Voila: you have lazily loaded the plugin DLL and hooked the plugin class into your code
}
Of course, from here you are free to do whatever you wish, use methods, subscribe to events etc.
For loading plugin assemblies I tend to lean on my IoC container to load the assemblies from a directory (I use StructureMap), although you can load them manually as per @Oskar's answer.
If you wanted to support loading plugins whilst your application is running, StructureMap can be "reconfigured", thus picking up any new plugins.
For your application hooks you can dispatch events to an event bus. The example below uses StructureMap to find all registered event handlers, but you could use plain old reflection or another IoC container:
public interface IEvent { }

public interface IHandle<TEvent> where TEvent : IEvent {
    void Handle(TEvent e);
}

public static class EventBus {
    public static void RaiseEvent<TEvent>(TEvent e) where TEvent : IEvent {
        foreach (var handler in ObjectFactory.GetAllInstances<IHandle<TEvent>>())
            handler.Handle(e);
    }
}
You can then raise an event like so:
public class Foo {
    public Foo() {
        EventBus.RaiseEvent(new FooCreatedEvent { Created = DateTime.UtcNow });
    }
}

public class FooCreatedEvent : IEvent {
    public DateTime Created { get; set; }
}
And handle it (in your plugin for example) like so:
public class FooCreatedEventHandler : IHandle<FooCreatedEvent> {
    public void Handle(FooCreatedEvent e) {
        Logger.Log("Foo created on " + e.Created.ToString());
    }
}
I would definitely recommend this post by Shannon Deminick which covers many of the issues with developing pluggable applications. It's what we used as a base for our own "plugin manager".
Personally I would avoid loading the assemblies on demand. IMO it is better to have a slightly longer start up time (even less of an issue on a web application) than users of the running application having to wait for the necessary plugins to be loaded.
Would the Managed Extensibility Framework fit your needs?

get information on an assembly

I want to get information on an Assembly in my C# application. I use the following:
Assembly.GetCallingAssembly();
This works perfectly returning information on the calling Assembly.
I want to share this functionality with other applications, so I include this in a class in my class library.
I reference this class library in multiple applications. When I make a call to this method from my applications, it returns information on the class library and not the application. Is there a way I can alter my above code to return information on the web applications assembly while still having the code included in the class library?
Instead of having the class library be intelligent, why don't you have the caller pass an Assembly as an argument to the method? When you call the method from within some application you would pass Assembly.GetExecutingAssembly(), and the method within the class library will then be able to fetch the assembly of the actual caller.
I'm not sure what you provide on top of reflection, but maybe you're abstracting a facility that doesn't need to be abstracted. Reflection already handles this, so why not let it do its job?
But if this API gives you back some useful information (such as plugging nicely into your database, etc), then maybe your approach makes some sense. But I still recommend that you refactor it:
Put this code in only one shared project/assembly, and just link that project/assembly when you need to call that functionality. Needing to duplicate code to get your job done is considered a code smell.
Take an Assembly object as a parameter, rather than trying to determine the current assembly. This will allow more flexibility in case you come up with some code that wants to get data on a bunch of other assemblies, and will still allow you to pass the current assembly. (Note: Darin already made this point)
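A minimal sketch of that refactoring (the helper name and method are made up for illustration):

```csharp
using System.Reflection;

// In the shared class library:
public static class AssemblyInfoHelper
{
    // The caller decides which assembly to report on.
    public static string Describe(Assembly asm)
    {
        AssemblyName name = asm.GetName();
        return name.Name + " v" + name.Version;
    }
}
```

Each application then calls `AssemblyInfoHelper.Describe(Assembly.GetExecutingAssembly())`, so the library reports on the caller rather than on itself.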

How can I use a mock control in a Web project, and have the compiler not complain about it?

We have a couple mini-applications (single Web form look-up stuff) that need to run in the context of a much larger site and application (hereafter "the Monolith")
We don't want to install the Monolith on every developer's machine, so we want some developers to be able to develop these little apps in their own isolated sandbox project (hereafter "Sandbox"). The idea is that we would move (1) the resulting DLL, and (2) the Web form (aspx) file from the Sandbox to the Monolithic Web App, where it would run.
This is all well and good, except that a couple of these little apps need to use a control that exists in the Monolith. And this control won't run without all the infrastructure of the Monolith behind it.
So, we had the great idea of creating a mock control. We stubbed out a control with the same namespace, class name, and properties as the control in the Monolith. We compiled this into a DLL, and put in in the Sandbox. We can develop against it, and it just spits out Lorem Ipsum-type data, which is cool.
The only reference to the control is this:
<Namespace:MyControl runat="server"/>
In the Sandbox, this invokes the mock object. In the Monolith, this invokes the actual control. We figured that since the only connection to the control is the tag above, it should work on both sides. At runtime, it would just invoke different things, depending on where it was running.
So we moved the aspx file and the app's DLL (not the mock object DLL) into the Monolith...and it didn't run.
It seems we didn't count on the "mypage.aspx.designer.cs" file coming out of Visual Studio. This gets compiled into the DLL, and it has a reference all the way back to our mock object DLL. So, when it runs in the Monolith, it complains that it can't load the mock object DLL.
So, our goal is to have the control tag as above, and have that invoke different things depending on the environment. In both cases, it would execute the same namespace and class, but that namespace and class would be different things between the two environments. In the Monolith it would be our actual control. In the Sandbox, it would be our mock object.
Essentially, we want this tag evaluated at runtime, not compile time. We want the DLL free of any reference back to the mock object's DLL.
Possible? Other solutions for the core problem?
The Solution
In the end, it was quite simple.
I work on my app with a control that is compiled against my mock object DLL. When I'm ready to compile for the last time, I delete the reference to the mock object DLL, and add a reference to the Monolith DLL. Then I compile and use that DLL.
The resulting DLL has no idea it wasn't developed against the Monolith DLL all along.
This looks like a job for loose coupling! One of the most helpful, but least followed principles of software design is Dependency Inversion; classes should not depend directly on other concrete classes, but on abstractions such as an abstract base class or interface.
You had a great instinct with the mock object, but you implemented it incorrectly, if I understand right. You basically replaced a dependency on one concrete class (the Monolith control) with another (the mocked control). The mock could have the same name, even the same namespace, as the Monolith control, but it resides in a different assembly. The dependent code expects that assembly, because it was compiled to refer to it, but does not find it in the production environment.
I would start by creating an interface for your control, which defines the functionality available to consumers of the control (methods, properties, etc.). Have both the Monolith's control and the mocked control implement this interface, and declare usages of either control as being of the interface type instead of the Monolith or mock type. Place this interface in a relatively lightweight DLL, separate from the mocked object, the Sandbox, and the existing Monolith DLLs, that you can place on developers' machines alongside the mocked object's DLL and that is also present in the Monolith codebase. Now, classes that need your control only really need the interface; you no longer need direct references to the Monolith or to any mock DLL.
Now, when instantiating objects that are dependent on this control, you need some way of giving your new dependent object a concrete class that implements the interface, either the mock or the Monolith, instead of the dependent class creating a new one. This is called Dependency Injection, and is the natural extension of Dependency Inversion; specifying an interface is great, but if your dependent class has to know how to create a new instance of an object implementing that interface, you haven't gained anything. The logic for creating a concrete class must lie outside your dependent class.
So, define a third class that knows how to hook the control into classes that depend on it. The go-to method is to bring in an IoC framework, which acts as a big Factory for any object that either is a dependency, or has a dependency. You register one of the two controls as the implementation of the interface for a particular environment (the mock for dev boxes, the Monolith's control in production); the registration information is specific to each environment (usually located in the app.config). Then, instead of newing up an instance of the control class, you ask the container to give you an instance of a class that implements the interface, and out pops a new mock or Monolith control. Or, put the control and all dependent classes in the IoC container, and ask for the dependent classes, which the container will return fully "hydrated" with a reference to their control. All of this happens without any of the classes that depend on the control having to know where it came from, or even exactly what it is.
IoC frameworks can be a pain to integrate into an existing design, though. A workalike would be to create a Factory class that uses reflection to dynamically instantiate either of the two controls, and place an AppSetting in the app.config file that tells the factory which assembly and type to use in this environment. Then, wherever you'd normally new up a control, call the Factory instead.
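A sketch of such a factory, assuming an appSettings key named "ControlType" (the key name and type names are illustrative):

```csharp
using System;
using System.Configuration;

// app.config in each environment decides which concrete control is created:
//   <appSettings>
//     <add key="ControlType"
//          value="Sandbox.Mocks.MyControl, Sandbox.Mocks" />  <!-- dev box -->
//   </appSettings>
public static class ControlFactory
{
    public static object CreateControl()
    {
        // Assembly-qualified type name selects mock vs. Monolith per environment.
        string typeName = ConfigurationManager.AppSettings["ControlType"];
        Type type = Type.GetType(typeName, throwOnError: true);
        return Activator.CreateInstance(type);
    }
}
```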
I don't have a super quick fix for you (maybe someone else will), but you might be interested in an IoC container like Unity.

Pass and execute delegate in separate AppDomain

I want to execute some piece of code in a separate AppDomain with a delegate. How can I do this?
UPD1: some more details about my problem
My program processes some data (one iteration is: get some data from the DB, evaluate it, create assemblies at runtime, execute the dynamic assemblies, and write the results to the DB).
Current solution: each iteration runs in a separate thread.
Better solution: each iteration runs in a separate AppDomain (to be able to unload the dynamic assemblies).
UPD2: All, thanks for answers.
I have found one for me in this thread:
Replacing Process.Start with AppDomains
Although you can make a call into a delegate which will be handled by a separate AppDomain, I personally have always used the 'CreateInstanceAndUnwrap' method which creates an object in the foreign app domain and returns a proxy to it.
For this to work your object has to inherit from MarshalByRefObject.
Here is an example:
public interface IRuntime
{
    bool Run(RuntimeSetupInfo setupInfo);
}

// The runtime class derives from MarshalByRefObject, so that a proxy can be returned
// across an AppDomain boundary.
public class Runtime : MarshalByRefObject, IRuntime
{
    public bool Run(RuntimeSetupInfo setupInfo)
    {
        // your code here
        return true;
    }
}
// Sample code follows here to create the appdomain, set startup params
// for the appdomain, create an object in it, and execute a method
AppDomain childDomain = null;
try
{
    // Construct and initialize settings for a second AppDomain.
    AppDomainSetup domainSetup = new AppDomainSetup()
    {
        ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase,
        ConfigurationFile = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile,
        ApplicationName = AppDomain.CurrentDomain.SetupInformation.ApplicationName,
        LoaderOptimization = LoaderOptimization.MultiDomainHost
    };

    // Create the child AppDomain used for the service tool at runtime.
    childDomain = AppDomain.CreateDomain(
        "Your Child AppDomain", null, domainSetup);

    // Create an instance of the runtime in the second AppDomain.
    // A proxy to the object is returned.
    IRuntime runtime = (IRuntime)childDomain.CreateInstanceAndUnwrap(
        typeof(Runtime).Assembly.FullName, typeof(Runtime).FullName);

    // Start the runtime. The call will marshal into the child runtime appdomain.
    return runtime.Run(setupInfo);
}
finally
{
    // The runtime has exited; finish off by unloading the runtime appdomain.
    if (childDomain != null) AppDomain.Unload(childDomain);
}
In the above sample, the code executes a 'Run' method, passing in some setup information; completion of the Run method is taken to indicate that all code in the child AppDomain has finished running, so we have a finally block that ensures the AppDomain is unloaded.
You often want to be careful about which types you place in which assemblies - you may want to use an interface and place it in a separate assembly that both the caller (our code that sets up the appdomain and calls into it) and the implementer (the Runtime class) depend on. This, IIRC, allows the parent AppDomain to load only the assembly that contains the interface, while the child appdomain will load both the assembly that contains Runtime and its dependency (the IRuntime assembly). Any user-defined types used by the IRuntime interface (e.g. our RuntimeSetupInfo class) should usually also be placed in the same assembly as IRuntime. Also, be careful how you define these user-defined types - if they are data transfer objects (as RuntimeSetupInfo probably is), you should probably mark them with the [Serializable] attribute, so that a copy of the object is passed (serialized from the parent appdomain to the child). You want to avoid calls being marshalled from one appdomain to another, since this is pretty slow. Passing DTOs by value (serialization) means accessing values on the DTO doesn't incur a cross-appdomain call (since the child appdomain has its own copy of the original). Of course, this also means that value changes are not reflected in the parent appdomain's original DTO.
As coded in the example, the parent appdomain will actually end up loading both the IRuntime and Runtime assemblies, but that is because in the call to CreateInstanceAndUnwrap I am using typeof(Runtime) to get the assembly name and fully qualified type name. You could instead hardcode these strings or retrieve them from a file - which would decouple the dependency.
There is also a method on AppDomain named 'DoCallBack' which looks like it allows calling a delegate in a foreign AppDomain. However, the delegate type it takes is 'CrossAppDomainDelegate', whose definition is:
public delegate void CrossAppDomainDelegate();
So, it won't allow you to pass any data into it. And, since I've never used it, I can't tell you if there are any particular gotchas.
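One common workaround, since CrossAppDomainDelegate takes no parameters, is to shuttle data through the target domain's name/value store with SetData/GetData. A sketch (the delegate must not capture any non-serializable state; the keys are arbitrary):

```csharp
AppDomain worker = AppDomain.CreateDomain("worker");

// Pass input through the worker domain's property store.
worker.SetData("input", "hello");

worker.DoCallBack(() =>
{
    // This body runs inside the worker appdomain.
    string input = (string)AppDomain.CurrentDomain.GetData("input");
    AppDomain.CurrentDomain.SetData("output", input.ToUpper());
});

// Read the result back in the parent appdomain.
string result = (string)worker.GetData("output"); // "HELLO"
AppDomain.Unload(worker);
```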
Also, I'd recommend looking into the LoaderOptimization property. What you set this to can have a significant effect on performance, since some settings of this property force the new appdomain to load separate copies of all assemblies (and JIT them, etc.) even if (IIRC) the assembly is in the GAC (i.e. this includes CLR assemblies). This can give you horrible performance if you use a large number of assemblies from your child appdomain. For example, I've used WPF from child appdomains, which caused huge startup delays for my app until I set up a more appropriate load policy.
In order to execute a delegate in another AppDomain you can use System.AppDomain.DoCallBack(). The linked MSDN page has an excellent example. Note that you can only use delegates of type CrossAppDomainDelegate.
You need to read up on .NET Remoting and specifically on Remote Objects as these are all you can pass through AppDomains.
The long and short of it is that your object is either passed by value or by reference (via a proxy).
By value requires that your object be serializable. Delegates are not serializable, AFAIK. That means this is not a good route to follow.
By reference requires that you inherit from MarshalByRefObject. This way, the remoting infrastructure can create the proxy. However, it also means that your delegate will be executed in the app domain that created it - not in the client app domain.
All in all, it's gonna be tricky. You might want to consider making your delegates full-fledged serializable objects, so that they can be easily moved around with remoting (and will work well with other technologies).
This doesn't answer your question directly but perhaps it would be better to create a WCF service or web service in the other AppDomain to preserve isolation. I don't know your particular situation but isolated architectural design is almost always the right way to go.
