Instantiate plugin class in a DLL - C#

I'm learning C# and am researching how to allow people to write plugins for an app I'm writing.
To start, I publish an API (a dll with interfaces) that their code must adhere to.
Now, I'm trying to understand how to work with their code. I've written a test plugin, built to a dll, and put it into a "plugins" directory that my script is watching.
I'm just not sure what to do next.
Since the API interfaces are shared, my app knows what to expect. For example, they should have a main class which implements the Plugin interface.
// Example API interface:
public interface Plugin {
    void Initialize();
}

// Example of their code:
public class TestPlugin : Plugin {
    public void Initialize() {
        // ... do stuff
    }
}
My question is, how can I instantiate their TestPlugin, so that I can properly call Initialize and any other methods?
I have some ideas but am still too new to C# and don't want to jump the gun.

You need to find the assemblies, load them, and look for classes that implement IPlugin (please use an I prefix like IPlugin for interfaces).
There are helper libraries that do this for you, although they feel overly complex to me. MEF is the best known: https://msdn.microsoft.com/en-us/library/dd460648(v=vs.110).aspx
If you want to roll your own:
Enumerate the 'plugins' directory for all .dll files
Call Assembly.LoadFrom on each one
Enumerate the types and see if any class implements IPlugin
If so, call Activator.CreateInstance (a sketch of these steps follows below)
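A minimal sketch of those steps, assuming the Plugin interface from the question and a "plugins" folder next to the executable (the folder name and class layout are illustrative):

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

static class PluginLoader
{
    // Scan a folder, load each DLL, and instantiate every concrete class
    // that implements the shared Plugin interface.
    public static List<Plugin> LoadPlugins(string pluginDirectory)
    {
        var plugins = new List<Plugin>();
        foreach (string file in Directory.GetFiles(pluginDirectory, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(file);
            foreach (Type type in assembly.GetTypes())
            {
                if (typeof(Plugin).IsAssignableFrom(type) && !type.IsAbstract && !type.IsInterface)
                {
                    // Assumes a public parameterless constructor.
                    var plugin = (Plugin)Activator.CreateInstance(type);
                    plugin.Initialize();
                    plugins.Add(plugin);
                }
            }
        }
        return plugins;
    }
}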
good luck

The best way to do this would be to use MEF (Managed Extensibility Framework), otherwise known as the System.ComponentModel.Composition library.
If you did this, the library writer would put the following line above their class:
[Export(typeof(Plugin))]
Then you create some MEF classes to import any plugins. Start with a DirectoryCatalog since you are loading from a folder:
DirectoryCatalog pluginDir = new DirectoryCatalog("Plugins");
CompositionContainer mefContainer = new CompositionContainer(pluginDir);
Afterwards create a CompositionContainer from your catalog (shown above). Now you can have a class member marked with ImportMany like so:
[ImportMany]
private List<Plugin> plugins;
and call ComposeParts on the container; this will auto-populate your list with any exported classes found. Alternatively, you can ask the container directly for exports of a given type:
IEnumerable<Plugin> plugins = mefContainer.GetExportedValues<Plugin>();
One thing to note when using MEF: you get one, and only one, instance of each plugin. If you want multiple instances for some reason, have your users export a factory.
If you want to go the hard way, you could load the assembly manually using Assembly.Load and then use reflection to find the types implementing your interface. MEF does this work for you, so I would go with it.
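Putting the pieces together, a minimal MEF host might look like the following. This is only a sketch, assuming the Plugin interface from the question and a "Plugins" folder next to the executable:

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public class PluginHost
{
    // MEF fills this in during composition with every exported Plugin it finds.
    [ImportMany]
    private IEnumerable<Plugin> plugins;

    public void LoadPlugins()
    {
        var catalog = new DirectoryCatalog("Plugins");
        var container = new CompositionContainer(catalog);

        // Satisfies the [ImportMany] member on this instance.
        container.ComposeParts(this);

        foreach (Plugin plugin in plugins)
            plugin.Initialize();
    }
}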

Related

How to load third-party implementations at run-time

I'm making a .Net application to manage and install mods. The application itself shouldn't be able to install mods for any particular game but should be able to call 3rd-party extensions to do so.
Let's say my mod manager expects an implementation of the given interface:
interface IGameManager {
    // Deploy a modding configuration to the targeted game
    void Deploy();

    // Remove all managed mods from the targeted game
    void Purge();

    // ...
}
And someone else, working on a different code base, implements IGameManager to manage a specific game:
class MinecraftManager : IGameManager {
    // ...
}
Then this person compiles it, publishes it, and anyone can simply feed this extension to the main mod manager so it can manage their mods for the targeted game.
But how? Is there a way for my application to safely load and use such third-party implementations at run-time? And how can I make writing third-party extensions easier (e.g. by giving an interface to build on, but in a more elegant and maintainable way)?
Edit 1: Invalid syntax in MinecraftManager signature
You are essentially trying to design a plugin system. There are many implementations that you could reuse, but the general idea is that:
You need your manager to be able to discover extensions. There are many ways to do that, but the simplest and most used approach is to place extension assemblies under a well-known directory in the file system. Then your manager can enumerate the assemblies in that folder by enumerating the files (or, if you prefer that each extension has its own subfolder, by enumerating the subfolders).
Load the assembly. For that you will use one of the Assembly.Load... methods. Since it is not possible to unload assemblies, you may want to first load the assembly for reflection only, and once you decide that the assembly is valid you can load it into the application domain in order to use it.
Use reflection to enumerate all classes of the assembly you just loaded and find the ones that implement the right interface (IGameManager). Alternatively, you can require that extensions contain an "entry point" class with a known name, then look for that class by name (using reflection).
Create an instance of the class(es) and use it (perhaps also keep it in a collection of loaded extensions). See the sketch below.
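Such a loader might look like this (a sketch, assuming IGameManager lives in a shared contracts assembly, the extensions sit in a known folder, and each manager has a public parameterless constructor):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

static class ExtensionLoader
{
    // Discover, load, and instantiate every IGameManager implementation
    // found in the given extensions directory.
    public static IList<IGameManager> LoadGameManagers(string extensionsDirectory)
    {
        var managers = new List<IGameManager>();
        foreach (string dll in Directory.EnumerateFiles(extensionsDirectory, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(dll);
            var managerTypes = assembly.GetTypes()
                .Where(t => typeof(IGameManager).IsAssignableFrom(t)
                            && !t.IsAbstract && !t.IsInterface);

            foreach (Type type in managerTypes)
            {
                managers.Add((IGameManager)Activator.CreateInstance(type));
            }
        }
        return managers;
    }
}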
Regarding the interface that extensions must implement: You should put the interface (and any other supporting interfaces) in a separate assembly. The assembly should contain only interfaces, no implementation. You can then publish the assembly. Once published the interface should never change.
If you need to add functionality you should create a new interface. This way old versions of the manager will work with newer versions of an extension (that is designed to implement the new functionality as well). Also your manager can determine which interfaces are implemented by an extension and act accordingly (thus maintaining compatibility). If the new functionality is mandatory, your manager should discard any extension that does not implement both interfaces.
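For example, new functionality could go into a second interface that the manager probes for at run time. The IGameManagerV2 name and its member below are made up for illustration:

using System;
using System.Collections.Generic;

// Hypothetical follow-up interface, published in a later version of the contracts assembly.
public interface IGameManagerV2 : IGameManager
{
    // Made-up member: report which mods are currently deployed.
    IEnumerable<string> ListDeployedMods();
}

public static class ManagerRunner
{
    public static void UseExtension(IGameManager manager)
    {
        manager.Deploy();

        // Older extensions only implement IGameManager; newer ones also offer the V2 features.
        if (manager is IGameManagerV2 v2)
        {
            foreach (string mod in v2.ListDeployedMods())
                Console.WriteLine(mod);
        }
    }
}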
I just found out .NET 4 already has an extensibility framework, MEF (documentation here), which is cleaner and safer than using Assembly.Load().
Here's the snippet I use to load my plugins from a given directory, in case anyone runs into the same problem:
// Where T is the type you want to retrieve from the assemblies
private static IEnumerable<Lazy<T>> LoadExternalAssemblyFromPath<T>(string path, string pattern) {
    CreateDirectoryIfDoesntExist(path);

    AggregateCatalog catalog = new AggregateCatalog();
    catalog.Catalogs.Add(new DirectoryCatalog(path, pattern));

    CompositionContainer container = new CompositionContainer(catalog);
    return container.GetExports<T>();
}

// Usage (the type argument cannot be inferred, so it has to be supplied explicitly):
IEnumerable<Lazy<IGameManager>> managers = LoadExternalAssemblyFromPath<IGameManager>("C:/path/to/plugins", "*.dll");
Regarding the implementation of such a plugin, Kouvarakis' solution on the matter is correct.

How to keep dynamically loaded assemblies from breaking code at compile time?

I am linking an external resource at runtime in my code using something like the below:
System.Reflection.Assembly assembly = System.Reflection.Assembly.LoadFrom("MyNice.dll");
Type type = assembly.GetType("MyType");
Tool = Activator.CreateInstance(type) as Tool;
Now as you can see, at the end of the object creation it has to cast the resulting object to the Tool class, because there are a lot of references to the methods and properties of the Tool class in my code; if it is not there, the code will fail at compile time.
Now, this is a bad situation, because I wanted to remove the DLL from my references and have it loaded dynamically at runtime, but at the same time there are pieces of my code that refer to and depend on the Tool assembly. How can I make it independent? Do I have to use reflection all over my code, or is there an easier alternative out there?
for example:
if (Tool.ApplicationIsOpen)
return StatusResult.Success;
is in the same class, assumes that the Tool class already exists, and will break if I remove it from my references.
Any suggestions?
I would suggest making a shared DLL, referenced from both projects, that contains an interface which Tool implements.
In this shared project, make an interface such as ITool that exposes the functionality you need in the consumer project.
Shared Project
public interface ITool
{
    void Something();
}
Separate Project
public class Tool : ITool
{
    public void Something()
    {
        // do something
    }
}
Consumer Project
System.Reflection.Assembly assembly = System.Reflection.Assembly.LoadFrom("MyNice.dll");
Type type = assembly.GetTypes().FirstOrDefault(t => typeof(ITool).IsAssignableFrom(t) && !t.IsInterface && !t.IsAbstract);
ITool tool = Activator.CreateInstance(type) as ITool;
Now you can delete the reference to the project containing Tool, but you still need the reference to the shared project that contains ITool. If you truly don't want any references, then explore the reflection route, but be warned it'll probably be messy.
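For comparison, if you go the pure-reflection route, the check from the question ends up looking something like this (a sketch using the "MyType" and "ApplicationIsOpen" names from the question, and assuming ApplicationIsOpen is an instance property; a static property would pass null instead of the instance):

using System;
using System.Reflection;

static class ToolGate
{
    public static bool IsToolApplicationOpen()
    {
        Assembly assembly = Assembly.LoadFrom("MyNice.dll");
        Type toolType = assembly.GetType("MyType");
        object tool = Activator.CreateInstance(toolType);

        // Every member access becomes a string lookup, checked only at run time.
        PropertyInfo property = toolType.GetProperty("ApplicationIsOpen");
        return (bool)property.GetValue(tool, null);
    }
}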
The shared-interface strategy is the basis for many plugin systems. I'd recommend you check out some Dependency Injection (DI for short) libraries that can do a lot of this heavy lifting for you.
Here is a list of DI libraries: http://www.hanselman.com/blog/ListOfNETDependencyInjectionContainersIOC.aspx
Personally I've been using Ninject lately.
Some relevant links:
Using Ninject in a plugin like architecture
Google something like "plugin framework DI C#"

How to organize code using an optional assembly reference?

I am working on a project and want to optionally use an assembly if available. This assembly is only available on WS 2008 R2, and my ideal product would be a common binary for computers both with and without the assembly. However, I'm primarily developing on a Windows 7 machine, where I cannot install the assembly.
How can I organize my code so that I can (with minimum changes) build my code on a machine without the assembly, and secondly, how do I ensure that I call the assembly's functions only when it is present?
(NOTE : The only use of the optional assembly is to instantiate a class in the library and repeatedly call a (single) function of the class, which returns a boolean. The assembly is fsrmlib, which exposes advanced file system management operations on WS08R2.)
I'm currently thinking of writing a wrapper class, which will always return true if the assembly is not present. Is this the right way to go about doing this?
My approach would be to dynamically load the assembly instead of hard-coding a reference. Your code could then decide whether to use the assembly (if it loaded) or return some other value. If you use the assembly, you'll need to use reflection to instantiate the class and use the method. That way your code will build and run on any platform, but its behavior will change if it detects the presence of fsrmlib.
The System.Reflection.Assembly documentation has example code for doing this.
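A sketch of that idea combined with the wrapper class suggested in the question: the fsrmlib assembly, type, and method names below are placeholders (the real API isn't shown in the question), and the wrapper falls back to returning true whenever the library is unavailable.

using System;
using System.Reflection;

public class QuotaChecker
{
    private readonly object fsrmInstance;
    private readonly MethodInfo checkMethod;

    public QuotaChecker()
    {
        try
        {
            // Placeholder assembly/type/method names; substitute the real fsrmlib ones.
            Assembly fsrm = Assembly.Load("fsrmlib");
            Type type = fsrm.GetType("Fsrm.SomeManagementClass");
            fsrmInstance = Activator.CreateInstance(type);
            checkMethod = type.GetMethod("CheckSomething");
        }
        catch (Exception)
        {
            // Assembly not present (e.g. on Windows 7): leave the fields null.
            fsrmInstance = null;
            checkMethod = null;
        }
    }

    public bool Check()
    {
        if (checkMethod == null)
            return true; // Default when fsrmlib is unavailable, per the question.

        return (bool)checkMethod.Invoke(fsrmInstance, null);
    }
}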
Hide the functionality behind an interface, say:
public interface IFileSystemManager
{
    void Manage(IFoo foo);
}
Create two implementations:
An implementation that wraps the desired functionality from fsrmlib
A Null Object implementation that does nothing (sketched below)
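The Null Object implementation is trivial; the fsrmlib-backed one would wrap whatever calls that library requires and is not shown here, since its API isn't part of the question:

// Used on machines without fsrmlib: satisfies the interface and does nothing.
public class NullFileSystemManager : IFileSystemManager
{
    public void Manage(IFoo foo)
    {
        // Intentionally empty.
    }
}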
Inject the IFileSystemManager into your consumers using Constructor Injection:
public class Consumer
{
    private readonly IFileSystemManager fileSystemManager;

    public Consumer(IFileSystemManager fileSystemManager)
    {
        if (fileSystemManager == null)
        {
            throw new ArgumentNullException("fileSystemManager");
        }
        this.fileSystemManager = fileSystemManager;
    }

    // Use the file system manager...
    public void Bar()
    {
        this.fileSystemManager.Manage(someFoo);
    }
}
Make the selection of IFileSystemManager a configuration option by delegating the mapping from IFileSystemManager to a concrete class to the config file, so that you can change the implementation without recompiling the application; a hand-rolled version of that lookup is sketched further below.
Configure applications running on WS 2008 R2 to use the implementation that wraps fsrmlib, and configure all other applications to use the Null Object implementation.
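If you do roll that mapping yourself rather than using a container, it can be as small as reading a type name from the config file and instantiating it. This is only a sketch: the appSettings key and the type names in the comment are placeholders, and NullFileSystemManager is the do-nothing class sketched above.

using System;
using System.Configuration;

// App.config on WS 2008 R2 might point at the fsrmlib-backed class, e.g.:
// <appSettings>
//   <add key="fileSystemManager"
//        value="MyApp.FsrmFileSystemManager, MyApp.Fsrm" />
// </appSettings>

public static class FileSystemManagerFactory
{
    public static IFileSystemManager Create()
    {
        string typeName = ConfigurationManager.AppSettings["fileSystemManager"];
        if (string.IsNullOrEmpty(typeName))
        {
            // No entry configured: fall back to the do-nothing implementation.
            return new NullFileSystemManager();
        }

        Type type = Type.GetType(typeName, throwOnError: true);
        return (IFileSystemManager)Activator.CreateInstance(type);
    }
}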
I would recommend that you use a DI Container for the configuration part instead of rolling this functionality yourself.
Alternatively you could also consider treating the IFileSystemManager as an add-in and use MEF to wire it up for you.

C# method that is executed after assembly is loaded

I'm writing a C# class library and I want to use Ninject to provide dependency injection for my classes. Is it possible for a class library to declare some code (a method) that would be executed each time the class library is loaded? I need this to define bindings for Ninject.
It sounds like you are looking for the equivalent of C++'s DllMain. There is no way to do this in C#.
Can you give us some more information about your scenario and why you need code to execute in a DllMain style function?
Defining a static constructor on a type does not solve this problem. A static type constructor is only guaranteed to run before the type itself is used in any way. You can define a static constructor, use other code within the DLL that does not access the type, and its constructor will never run.
I have used Ninject quite a bit over the last 9 months. It sounds like what you need to do is "load" the modules that exist in your library into the Ninject kernel in order to register the bindings.
I am not sure if you're using Ninject 1.x or the 2.0 beta. The two versions perform things slightly differently, though conceptually, they are the same. I'll stick with version 1.x for this discussion. The other piece of information I don't know is if your main program is instantiating the Ninject kernel and your library is simply adding bindings to that kernel, or if your library itself contains the kernel and bindings. I am assuming that you need to add bindings in your library to an existing Ninject kernel in the main assembly. Finally, I'll make the assumption that you are dynamically loading this library and that it's not statically linked to the main program.
The first thing to do is define a Ninject module in your library in which you register all your bindings -- you may have already done this, but it's worth mentioning. For example:
public class MyLibraryModule : StandardModule {
    public override void Load() {
        Bind<IMyService>().To<ServiceImpl>();
        // ... more bindings ...
    }
}
Now that your bindings are contained within a Ninject module, you can easily register them when loading your assembly. The idea is that once you load your assembly, you can scan it for all types that are derived from StandardModule. Once you have these types, you can load them into the kernel.
// Somewhere, you define the kernel...
var kernel = new StandardKernel();

// ... then elsewhere, load your library and load the modules in it ...
var myLib = Assembly.Load("MyLibrary");
var stdModuleTypes = myLib
    .GetExportedTypes()
    .Where(t => typeof(StandardModule).IsAssignableFrom(t));

foreach (Type type in stdModuleTypes) {
    kernel.Load((StandardModule)Activator.CreateInstance(type));
}
One thing to note, you can generalize the above code further to load multiple libraries and register multiple types. Also, as I mentioned above, Ninject 2 has this sort of capability built-in -- it actually has the ability to scan directories, load assemblies and register modules. Very cool.
If your scenario is slightly different than what I've outlined, similar principles can likely be adapted.
Have you tried the AppDomain.AssemblyLoad event? It fires after an assembly has been loaded.
AppDomain.CurrentDomain.AssemblyLoad += (s, e) =>
{
    Assembly justLoaded = e.LoadedAssembly;
    // ... etc.
};
Can you control the client code? If yes, instead of trying to do magic when loading the assembly, I would go for implementing a single class, say Registry, that does the bindings and implements an interface IRegistry. Then during loading you can look for the implementation of IRegistry in your assembly and fire the necessary methods; a sketch of this follows below.
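A rough sketch of that approach (the exact shape of IRegistry and its method name are assumptions, not an established API):

using System;
using System.Linq;
using System.Reflection;
using Ninject; // Ninject 2.x namespace; Ninject 1.x uses Ninject.Core instead.

// Published contract: the class library implements this to register its bindings.
public interface IRegistry
{
    void RegisterBindings(IKernel kernel);
}

public static class RegistryLoader
{
    public static void Load(IKernel kernel, Assembly assembly)
    {
        var registryTypes = assembly.GetExportedTypes()
            .Where(t => typeof(IRegistry).IsAssignableFrom(t)
                        && !t.IsAbstract && !t.IsInterface);

        foreach (Type type in registryTypes)
        {
            var registry = (IRegistry)Activator.CreateInstance(type);
            registry.RegisterBindings(kernel);
        }
    }
}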
You can also have attributes on your classes:
[Component(Implements = typeof(IMyDependency))]
Look for these attributes and load the decorated types into the container on the client side.
Or you can take a look at MEF, which is a library for these kinds of situations.
As far as I know, the answer is no.
As I understand it, you want to configure your IoC container in your class library, and if that's the case it's not a good idea. If you define your bindings in your class library, then what's the use of dependency injection? We use dependency injection so that we can inject dependencies at runtime and supply different objects in different scenarios. The best place to configure an IoC container is the start-up of your application (since an IoC container is like a backbone for an application :) ), but it should live in a bootstrapper that is responsible for starting the application. In simple applications that can be the Main method.

Using Ninject in a plugin like architecture

I'm learning DI, and made my first project recently.
In this project I've implemented the repository pattern. I have the interfaces and the concrete implementations. I wonder if it is possible to build the implementations of my interfaces as "plugins", DLLs that my program will load dynamically.
So the program could be improved over time without having to rebuild it: you just place the DLL in the "plugins" folder, change settings, and voilà!
Is this possible? Can Ninject help with this?
While Sean Chambers' solution works in the case that you control the plugins, it does not work in the case where plugins might be developed by third parties and you don't want them to have to depend on writing Ninject modules.
This is pretty easy to do with the Conventions Extension for Ninject:
public static IKernel CreateKernel()
{
    var kernel = new StandardKernel();
    kernel.Scan(scanner => {
        scanner.FromAssembliesInPath(@"Path\To\Plugins");
        scanner.AutoLoadModules();
        scanner.WhereTypeInheritsFrom<IPlugin>();
        scanner.BindWith<PluginBindingGenerator<IPlugin>>();
    });
    return kernel;
}
private class PluginBindingGenerator<TPluginInterface> : IBindingGenerator
{
    private readonly Type pluginInterfaceType = typeof(TPluginInterface);

    public void Process(Type type, Func<IContext, object> scopeCallback, IKernel kernel)
    {
        if (!pluginInterfaceType.IsAssignableFrom(type))
            return;
        if (type.IsAbstract || type.IsInterface)
            return;
        kernel.Bind(pluginInterfaceType).To(type);
    }
}
You can then get all loaded plugins with kernel.GetAll<IPlugin>().
The advantages of this method are:
Your plugin dlls don't need to know that they are being loaded with ninject
The concrete plugin instances will be resolved by Ninject, so they can have constructors that inject types the plugin host knows how to construct (see the example below).
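To illustrate the second point, a third-party plugin can take host services through its constructor without referencing any Ninject code itself. The ILogger service and the Run method below are made up for illustration:

// Hypothetical service interface that the host binds in its own kernel.
public interface ILogger
{
    void Log(string message);
}

// Inside the third-party plugin assembly: no Ninject module, no registration code.
public class BackupPlugin : IPlugin
{
    private readonly ILogger logger;

    // Ninject supplies ILogger from the host's kernel when it constructs the plugin.
    public BackupPlugin(ILogger logger)
    {
        this.logger = logger;
    }

    // Made-up IPlugin member for illustration.
    public void Run()
    {
        logger.Log("Backup plugin running.");
    }
}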
This question applies to the same answer I provided over here: Can NInject load modules/assemblies on demand?
I'm pretty sure this is what you're looking for:
var kernel = new StandardKernel();
kernel.Load(Assembly.Load("yourpath_to_assembly.dll"));
If you look at KernelBase with Reflector in Ninject.dll, you will see that this call will recursively load all modules in the loaded assemblies (the Load method takes an IEnumerable<Assembly>):
public void Load(IEnumerable<Assembly> assemblies)
{
    foreach (Assembly assembly in assemblies)
    {
        this.Load(assembly.GetNinjectModules());
    }
}
I'm using this for scenarios where I don't want a direct assembly reference to something that will change very frequently, and where I can swap out the assembly to provide a different model to the application (granted I have the proper tests in place).
Extending on @ungood's good answer, which is based on v2, with v3 of Ninject (currently at RC3) it can be made even easier. You don't need a custom IBindingGenerator anymore; just write:
var kernel = new StandardKernel();
kernel.Bind(scanner => scanner
    .FromAssembliesInPath(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))
    .SelectAllClasses()
    .InheritedFrom<IPlugin>()
    .BindToAllInterfaces());
Please note I'm looking for plugins implementing IPlugin (put your interface here) in the same path as the application.
You can easily do it with normal C# reflection; you don't need any extra technology.
There are quite a few examples on the web, e.g.
http://www.codeproject.com/KB/cs/c__plugin_architecture.aspx
In general in your main application, you need to load the assembly implementing the plugin, e.g.:
Assembly ass = Assembly.Load(name);
and then you need to create an instance of your plugin. If you know the name of the class it would look like this:
Type ObjType = ass.GetType(typename);
IPlugin plugin = (IPlugin)Activator.CreateInstance(ObjType);
and then you just use it.
Take a look at Managed Extensibility Framework. http://www.codeplex.com/MEF
There are multiple ways to go about this, and you have already accomplished the main prerequisite by having concrete implementations behind pre-defined interfaces. Realistically, if your interfaces remain stable, you should be able to build off of your core application.
I am not sure how the implementation would work with Ninject, however. You can do this with the Provider Model or with reflection - although I think reflection is overkill, if you don't absolutely need to do it.
With the provider model approach, you place the file in the /bin folder, or any other folder that you are probing, and adjust the .config file to reflect the presence of the provider. If you have a specific "plugin" folder, you can create a method, called at application startup and periodically thereafter, that scans for new or removed instances and reloads the providers.
This would work in ASP.NET, under C# or VB. However, if you are doing some sort of other application, you would need to consider another approach. The provider is really just Microsoft's spin on the Strategy Pattern.
I got this as a hit for Activator.CreateInstance + Ninject and just wanted to point out something in this area - hopefully it will inspire someone to come up with a real killer answer to this question on SO.
If you haven't yet gone to the trouble of auto-scanning modules and classes and registering them with Ninject properly, and are still creating your plugin via Activator.CreateInstance, then you can inject the dependencies after CreateInstance via:
IKernel k = ...
var o = Activator.CreateInstance(...);
k.Inject( o );
Of course, this would only be a temporary solution on the way to something like http://groups.google.com/group/ninject/browse_thread/thread/880ae2d14660b33c
I think there is no need for a framework. This tutorial solves your problem: http://www.codeproject.com/KB/cs/c__plugin_architecture.aspx
The problem is that you might need to recompile if the objects you set up in the Load of your module are used inside the program. The reason is that your program might not have the latest version of the assembly containing your class. For example, say you create a new concrete class for one of your interfaces, i.e. you change the plugin DLL. Ninject will load it fine, but when it is returned inside your program (kernel.Get<...>()), your program might not have the assembly and will throw an error.
Example of what I am talking about:
// Get your object from the Ninject kernel. You get your concrete object back,
// and "auto" has its dependencies (the interfaces inside it) filled in by the kernel.
BaseAuto auto = kernel.Get<BaseAuto>();
//Somewhere else:
public class BaseModule : StandardModule
{
    public override void Load()
    {
        Bind<BaseAuto>().ToSelf();
        Bind<IEngine>().To<FourCylinder>(); // Bind the interface
    }
}
If you create a new FourCylinder replacement called SixCylinder, your real program will not have any reference to your new object. So, once you load BaseModule.cs from the plugin, you might get some trouble with the missing reference. To make it work, you will need to distribute the new DLL containing this concrete implementation along with your plugin, which holds the module Ninject requires to map the interface to the concrete class. This can be done without problems, but you end up with a whole application that relies on loading from plugins, and that can be problematic at some point. Be aware.
BUT, if you do want more plugin information, you can find some tutorials on CodeProject.
