I'm developing a system which needs to support customization via a plugins module. I'm coding against interfaces so that plugin code only needs to implement these interfaces in order to be able to plug into the system.
// for illustration purposes; not actual code
public interface IPluggable
{
    void Setup(PluginConfig c);
    bool Process(IProcessable p);
}
I read from configuration which plugins need to be loaded, where the assembly name and fully-qualified type name are specified.
<plugin assembly="Foo.Bar.PluginAssembly" type="Foo.Bar.Plugins.AwesomePlugin" />
Where the type Foo.Bar.Plugins.AwesomePlugin implements IPluggable and is contained in the assembly Foo.Bar.PluginAssembly.dll. With this information I proceed to create instances of the required plugins.
IPluggable plugin = (IPluggable)Activator.CreateInstance(assemblyName, typeName).Unwrap();
So my question is threefold:
What would be a recommended pattern for a plugin system? Does the approach I'm taking make sense or are there any obvious flaws/caveats I'm missing?
Is Activator.CreateInstance() a good choice for dynamically instantiating the plugin objects?
How can I be more specific about the assembly to load and its location? Say, if I want to load plugins only from assemblies located in a .\plugins subfolder.
Answers to your questions, in order:
I like this approach and use patterns like it when I need to write plug-in components. Other people recommend various frameworks - I know that MEF is very popular. But I find that plain .NET is easy enough for me, and MEF is just another thing I'd need to learn and remember. It's probably worth a try, but that's up to you.
I've always used Assembly.CreateInstance, but the difference is probably not going to affect you (Difference between Assembly.CreateInstance and Activator.CreateInstance?)
You simply use the System.IO namespace. The DirectoryInfo class has a GetFiles method that enumerates all the files matching a given pattern (presumably *.dll). For each match I'd use the System.Reflection namespace to interrogate the assembly and find any types that implement your interface, and then CreateInstance, as sketched below.
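A minimal sketch of that scan, assuming the .\plugins subfolder from your third question (building the PluginConfig from your XML is left out):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

// Scan .\plugins for assemblies, find concrete types implementing
// IPluggable, and instantiate each one.
var pluginDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "plugins");

var pluginTypes =
    from file in new DirectoryInfo(pluginDir).GetFiles("*.dll")
    from type in Assembly.LoadFrom(file.FullName).GetTypes()
    where typeof(IPluggable).IsAssignableFrom(type)
          && type.IsClass && !type.IsAbstract
    select type;

var plugins = pluginTypes
    .Select(t => (IPluggable)Activator.CreateInstance(t))
    .ToList();

// then wire them up, e.g.:
// foreach (var plugin in plugins) plugin.Setup(config);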
Just on MEF, my opinion is this: if I were going to be using a large, manageable and flexible plug-in system on a number of systems or projects then I'd be very interested in it, leveraging the work that other people have done to save time and avoid common pitfalls.
If I were writing a very simple, one-off plug-in system and I know the basics of how to do so using the .NET framework, I'd skip the overhead of learning MEF and write the code. I could write a reasonable plug-in process in far less than an hour, but after downloading, referencing, attempting to configure MEF - I doubt I'd have anything to show for it.
Related
I'm writing an app which plays host to a series of plug-ins. Those plug-ins generally use two libraries .Common and .UI which contain the interfaces that the plug-ins need to implement etc.
I am now at the point where I'm adding the capability for plug-ins to be subject to licensing. I have modified my host application such that it will only load plug-ins which define an interface instance (ILicenseInfoProvider) and export it through MEF. That bit is fine.
We have a selected provider of licensing code, and their licensing system involves use of a library. Now, I don't want to force each plug-in to be licensed through that system, and, by extension, require a reference to that system's assembly. So, I am planning on putting the code that references the third-party library in its own assembly (something like .Licensing.Vendor). This way plug-ins can simply add a reference to that assembly, and include a class that looks somewhat like this:
[Export(typeof(ILicenseInfoProvider))]
class MyAssemblyLicenseInfoProvider : BaseVendorLicenseInfoProvider
{
    public MyAssemblyLicenseInfoProvider() : base("My Assembly's Product Name") { }
}
I'm reasonably happy with that set-up, apart from one niggling thing - which is that the .Licensing.Vendor assembly will only contain a single class, which is the BaseVendorLicenseInfoProvider relating to the specific licensing system in use.
So, after all that, my question is pretty simple:
Does it seem overkill to put that class in its own assembly, or is the benefit of not forcing all plug-ins to hold a reference to the third-party library worth it?
At the moment there's a suitable purpose for the assembly - a publicly visible assembly for third parties to provide a means to interact via licensing. Seems perfectly reasonable to me:
- even if there is only the one class currently, there may be more in the future
- it's publicly visible, so you want to provide only that which is necessary
- it encapsulates a reasonable level of responsibility, namely licensing, without forcing specific implementations
I vote no, it's not overkill; some plugins may not need a license, some may.
It depends on what you are trying to achieve. Assemblies are a way of physically separating code whereas namespaces are a way of logically separating code.
Given that there can be a slight performance hit of loading too many assemblies (by which I mean a significant number, not just a few) then I suppose you could consider if it is possible to group as much as you can into one assembly but separate them by namespaces. But if you feel that it really does make sense to keep BaseVendorLicenseInfoProvider completely separate from everything else then I also do not see that as an issue.
At the end of the day it is all about what you feel is right. Everyone has their own opinion, of course, but as long as what you have works for you, I don't see a problem.
I am currently working on an application and would like to add new functionality to it.
One would be to update the application's code directly.
Another would be to offer an extensibility layer that new features can be added to.
Having read multiple posts on plugin architectures and on using MEF for creating composable apps, I am a bit confused whether the two terms actually mean the same thing, and if not, how they differ.
Also, I am interested to know of any good design solutions that assist in "opening up" my application to allow easier expansion in the future (new features can be added "as an extension").
You will definitely need a plug-in based architecture to have a generic extensibility framework.
However, you do not necessarily need a Dependency Container or MEF.
It may be as simple as defining an IPlugIn interface and scanning assemblies for types implementing the interface, then instantiating an instance of the type to get going.
I'm interested in creating a desktop application composed of modules such that the source code to those modules is embedded in the application itself, allowing the user to edit the application as they are running it and have the updated modules put into use without restarting the application. Can anyone suggest a good architecture for this?
I'm looking to use Microsoft.Net and C# for this. DLR is not an option.
Thanks!
It's not easy to suggest a good architecture for this in a short posting.
First, I'd define a contract (an interface) every module the user writes/modifies must implement. It should contain at least an Execute method.
Then I'd create a wrapper class for these modules which:
- loads the source code from a file
- compiles the file and makes sure it implements the contract
- contains an indicator of whether the file compiled successfully
- itself implements the contract, for easy calling and handling
Then I'd have some kind of shell which contains a collection of all the module wrappers. Any wrapper that compiled successfully would then let the shell call the Execute method of the module interface.
When it comes to compiling and executing code on the fly, this link should provide all the information you need:
http://www.west-wind.com/presentations/dynamicCode/DynamicCode.htm
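For a flavour of what that looks like, here's a rough sketch of the wrapper's compile step using CodeDOM (IModule, Contract.dll and UserModule.cs are placeholder names for your contract and module file, not anything prescribed):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

var compiler = new CSharpCodeProvider();
var options = new CompilerParameters
{
    GenerateInMemory = true, // note: in-memory assemblies cannot be unloaded
    ReferencedAssemblies = { "System.dll", "Contract.dll" }
};
CompilerResults results = compiler.CompileAssemblyFromFile(options, "UserModule.cs");

if (results.Errors.HasErrors)
{
    // Surface compile errors to the user instead of executing anything.
    foreach (CompilerError error in results.Errors)
        Console.WriteLine(error.ErrorText);
}
else
{
    // Make sure the compiled code implements the contract, then run it.
    foreach (var type in results.CompiledAssembly.GetTypes())
        if (typeof(IModule).IsAssignableFrom(type) && !type.IsAbstract)
            ((IModule)Activator.CreateInstance(type)).Execute();
}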
Well, a dynamic language certainly would have been the best fit...
You can use the types in the System.Reflection.Emit namespace to dynamically create assemblies.
However, it's going to be really painful: you'd need to load those dynamic assemblies into custom AppDomains, since otherwise you won't be able to unload them again.
This again means that you must address marshalling and assembly resolution issues related to cross-AppDomain communication.
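A minimal sketch of that AppDomain round trip (the assembly and type names are illustrative, and the module class must derive from MarshalByRefObject so calls are proxied across the domain boundary):

var domain = AppDomain.CreateDomain("ModuleDomain");
try
{
    var module = (IModule)domain.CreateInstanceAndUnwrap(
        "UserModules",            // assembly name
        "UserModules.MyModule");  // fully-qualified type name
    module.Execute();
}
finally
{
    // Unloading the domain unloads every assembly it loaded.
    AppDomain.Unload(domain);
}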
What you are probably looking for is the concept of Dependency Injection.
Dependency Injection means that instead of having module X use module Y directly, module X only relies on an interface, and the application tells module X which implementation to use, e.g. module Y.
There are several ways of implementing Dependency Injection. One is to have references to the interfaces in each of your modules, and explicitly let the application configure each of its modules with the right implementation of the interface.
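The first way might look like this in C# (the types are hypothetical):

public interface IDatabase
{
    void ExecuteQuery(string sql);
}

public class ReportModule
{
    private readonly IDatabase _database;

    // The application decides which IDatabase implementation to hand in;
    // ReportModule never names a concrete class.
    public ReportModule(IDatabase database)
    {
        _database = database;
    }
}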
The second way of implementing it (and probably the most useful in your case) is by using a central registry. Define all the interfaces that you want to have in your application. These are the interfaces for which you want to dynamically change the implementation. Then define identifications for these interfaces. These could be strings, integers or GUIDs.
Then make a map in your application that maps the identifications to the interfaces, and fill the map with the correct implementations of the interfaces. In a C++ application (I'm not very skilled in C# yet) this could work like this:
std::map<std::string, IInterface *> appInterfaces;
appInterfaces["database"] = new OracleDatabaseModule();
appInterfaces["userinterface"] = new VistaStyleUserInterface();
Make all modules go to this central registry whenever they want to use one of the other modules. Make sure they don't access the modules directly but only go via the registry. E.g.
void MyModule::someMethod()
{
    IDatabaseInterface *dbInterface = dynamic_cast<IDatabaseInterface *>(appInterfaces["database"]);
    dbInterface->executeQuery(...);
}
If you now want to change the implementation for an interface in the application, you can simply change the entry in the registry, like this:
IInterface *iface = appInterfaces["database"];
if (iface) delete iface;
appInterface["database"] = new SqlServerDatabaseInterface();
I am working on an application that loads plugins at startup from a subdirectory, and currently I am doing this by using reflection to iterate over the types in each assembly and find the public classes implementing the IPluginModule interface.
Since reflection involves a performance hit, and I expect that there will be several plugins after a while, I wondered if it would be useful to define a custom attribute applied at the assembly level that could be checked before iterating over the types (possibly about a dozen types in an assembly, including one implementor of IPluginModule).
The attribute, if present, could then provide a method to return the needed types or instances, and iterating over the types would then only be a fallback mechanism. Storing the type info in a configuration file is not an option.
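Something like this sketch (all names are hypothetical):

using System;
using System.Linq;
using System.Reflection;

// Declared in the shared contract assembly.
[AttributeUsage(AttributeTargets.Assembly)]
public class PluginModuleAttribute : Attribute
{
    public Type ModuleType { get; private set; }
    public PluginModuleAttribute(Type moduleType) { ModuleType = moduleType; }
}

// Each plugin assembly would declare:
// [assembly: PluginModule(typeof(MyPluginModule))]

static class PluginScanner
{
    // The host reads the single assembly-level attribute and only falls
    // back to iterating over all types when it is missing.
    public static Type FindModuleType(Assembly assembly)
    {
        var attr = (PluginModuleAttribute)assembly
            .GetCustomAttributes(typeof(PluginModuleAttribute), false)
            .FirstOrDefault();
        if (attr != null)
            return attr.ModuleType;

        return assembly.GetTypes() // fallback: full type scan
            .FirstOrDefault(t => typeof(IPluginModule).IsAssignableFrom(t) && !t.IsAbstract);
    }
}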
Would this improve performance, or does it just not matter compared to the time to actually takes to load the assembly from storage? Also, would this usage be appropriate for an attribute at all?
I will answer your question with a question: Why are you worried about this?
You're worrying about a potential performance hit in a one-time operation because there might be several plugins at a later date.
Unless your application startup time is excessively long to a user, I wouldn't waste time thinking about it - there are probably much better things that you can work on to improve your application.
You could also put the pluggable types in a configuration file, so you know the exact classes instead of looping through all of them. You would have to build some configuration utility for this option... but you could possibly get a good increase in performance, depending on the number of classes you are looping through.
I believe both of Microsoft's .NET plugin frameworks, the Managed AddIn Framework (MAF) and the Managed Extensibility Framework (MEF), can use either attributes or reflection to discover plugins. So Microsoft seems to feel attributes are appropriate.
I'm not sure what the performance differences are, though.
A good solution is to cache all information about plugins. The first time the application is started it does a full scan of the plugin dlls, and saves the list of types found in a file. The next time the application starts, it loads the information from the file, which will be much faster than scanning all the dlls again. The application can also store a timestamp of each dll, so if it detects a change in a dll it can re-scan it and update the cache.
That's basically the approach followed by the Mono.Addins framework.
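A sketch of that cache (the file format and the IPluginModule scan are illustrative, not Mono.Addins' actual format; it assumes one plugin type per assembly to keep things short):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// Cache file holds one "dllPath|lastWriteTicks|typeName" line per plugin dll.
var pluginDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "plugins");
var cacheFile = Path.Combine(pluginDir, "plugins.cache");

var cached = File.Exists(cacheFile)
    ? File.ReadAllLines(cacheFile).Select(l => l.Split('|')).ToDictionary(p => p[0])
    : new Dictionary<string, string[]>();

var lines = new List<string>();
foreach (var dll in Directory.GetFiles(pluginDir, "*.dll"))
{
    long stamp = File.GetLastWriteTimeUtc(dll).Ticks;
    string typeName;
    if (cached.TryGetValue(dll, out var entry) && long.Parse(entry[1]) == stamp)
    {
        typeName = entry[2]; // cache hit: no assembly load or type scan needed
    }
    else
    {
        var asm = Assembly.LoadFrom(dll); // cache miss: do the full scan once
        typeName = asm.GetTypes()
            .First(t => typeof(IPluginModule).IsAssignableFrom(t) && !t.IsAbstract)
            .FullName;
    }
    lines.Add(dll + "|" + stamp + "|" + typeName);
}
File.WriteAllLines(cacheFile, lines);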
I'd have thought that asking an assembly for all the classes tagged with an attribute would also use reflection. It would then come down to which is the faster lookup in the metadata: interface implementation or attribute marking?
I have to develop a system to monitor sensor information, but many sensors might be added in the future.
That said, the idea would be to develop a system that would consist of the application skeleton. The sensors (as each of them has its communication and data presentation characteristics) would be added as plugins to the system.
How would I code this in C#? Is it a case of component-driven development? Should I use dynamic libraries?
There are a huge number of ad-hoc plug-in systems for C#. One is described in Plugin Architecture using C# (at The Code Project). The general approach is that the host application publishes an assembly with interfaces. It enumerates through a folder and finds assemblies that define a class that implement its interfaces and loads them and instantiates the classes.
In practice you want to do more. It's best if the host application defines two interfaces, an IHost and an IPlugIn. The IHost interface provides services that a plug-in can subscribe to. The IPlugIn gets constructed taking an IHost.
To load a plug-in, you should do more than simply get a plug-in. You should enumerate all plug-ins that are loadable. Construct them each. Ask them if they can run. Ask them to export APIs into the host. Ask them to import APIs from the host. Plug-ins should be able to ask about the existence of other plug-ins.
This way, plug-ins can extend the application by offering more APIs.
PlugIns should include events. This way plug-ins can monitor the process of plug-ins loading and unloading.
At the end of the world, you should warn plug-ins that they're going to go away. Then take them out.
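Pulling those responsibilities together, the two interfaces might look roughly like this (the member names are my own guesses, not from the article):

using System;
using System.Collections.Generic;

public interface IHost
{
    void ExportApi(string name, object api);    // plug-ins offer APIs to others
    object ImportApi(string name);              // and consume host/plug-in APIs
    IEnumerable<string> LoadedPlugIns { get; }  // plug-ins can ask what else is loaded
    event EventHandler PlugInLoaded;            // monitor loading/unloading
    event EventHandler PlugInUnloading;
}

public interface IPlugIn
{
    bool CanRun { get; }     // asked before the plug-in is started
    void Start(IHost host);  // importing/exporting happens here
    void Stop();             // the "warn them, then take them out" step
}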
This will leave you with an application that can be written in a tiny framework and implemented entirely in plug-ins if you want it to.
As an added bonus, you should also make it so that in the plug-ins folder, you resolve shortcuts to plug-ins. This lets you write your application and deliver it to someone else. They can author a plug-in in their development environment, create a shortcut to it in the application's plug-ins folder and not have to worry about deploying after each compile.
Managed Extensibility Framework (MEF) is what you need here. You could also use a dependency injection container; it's not quite the obvious fit, though a perfectly viable solution in itself.
Each sensor should implement a standard interface so that routines that handle lists of sensors can treat them in a standard manner. Include an ID field in the interface that is unique to each type of sensor so you can handle special cases.
Look at the Reflection API to learn how to scan a directory of .NET Assemblies and look inside them.
Each assembly should have a factory class whose job is to return a list of the sensors in that assembly. I recommend that you make it a subroutine rather than a function and pass it a list that it appends to: SensorDLL1 appends 4 sensors to the empty list, SensorDLL2 appends 8 sensors to the list which now has 12 sensors, and so on. This approach is the most flexible in the long run.
You will either have to make up a naming convention to find the factory class or use an attribute. Note I don't recommend just scanning the assembly for everything that implements your sensor interface as you could have code inside the factory that controls which sensors are available. This is useful for licensing.
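A sketch of those two contracts (the names are hypothetical):

using System.Collections.Generic;

public interface ISensor
{
    string Id { get; }  // unique per sensor type, for handling special cases
    void Poll();
}

// One factory per plugin assembly. It appends its sensors to the shared
// list, and can withhold sensors (e.g. unlicensed ones).
public interface ISensorFactory
{
    void AddSensors(IList<ISensor> sensors);
}

The host then builds one empty List<ISensor>, locates the factory in each assembly (by naming convention or attribute, as above), and passes the same list to every AddSensors call.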
Depending upon the sensors themselves, it sounds like you need to define a single interface which all sensors implement. Your main "application skeleton" will then work against the ISensor interface and need not concern itself with the concrete implementations of each of the sensor classes/objects/components.
Whether each Sensor is simply a class within the same project, or a separate assembly is up to you, although if they are separate assemblies, you'd need a way to load these assemblies dynamically.
Some references which may help here are:
Command Pattern Design Pattern:
- http://en.wikipedia.org/wiki/Command_pattern
Observer Pattern Design Pattern:
- http://en.wikipedia.org/wiki/Observer_pattern
Dynamically loading assemblies:
- http://www.divil.co.uk/net/articles/plugins/plugins.asp
Hope this helps.
We once made a plug-in system in a school project of ours in 2006, Socio. You can find the code here and here.
The basic lesson learned was that it is very simple to dynamically load code in C#. If you just have a plugin DLL and an application which adheres to an interface of yours and links against a common DLL in which that interface exists, it just works™.
In essence, it is what plinth described in his answer.
Take a look at:
Composite UI Application Block
and Smart Client Software Factory
It's a very old post, but I still thought it would be useful for someone to look at appPress.in, where we have developed a framework with plugin functionality. Here we allow a plugin to modify the UI of the core application horizontally and vertically, add its own pages, and hook into events like Init, OnClick and OnChange.