Short Version
I have an application which utilizes a plug-in infrastructure. The plug-ins have configurable properties that help them know how to do their job. The plug-ins are grouped into profiles to define how to complete a task, and the profiles are stored in XML files serialized by the DataContractSerializer. The problem is that, when reading the configuration files, the deserializing application has to have knowledge of all of the plug-ins defined in them. I'm looking for a way to handle the resolution of unknown plug-ins. See the proposed solution section below for a couple of the ideas I've looked into implementing, but I am open to just about anything (though I'd rather not have to reinvent the application).
Detail
Background
I've developed a sort of Business Process Automation System in C# 4 for internal use at the company I'm currently working for. It makes extensive use of 'plug-ins' to define everything (from the tasks that are to be performed to the definition of units of work) and relies heavily on a dynamic configuration model, which in turn relies on C# 4/DLR dynamic objects to fulfill jobs. It's a little heavy while executing because of its dynamic nature, but it works consistently and performs well enough for our needs.
It includes a WinForms configuration UI that uses Reflection extensively to determine the configurable properties/fields of the plug-ins, as well as the properties/fields that define each unit of work to be processed. The UI is also built on top of the BPA engine, so it has a thorough understanding of the (loose) object model put in place that allows the engine to do its job, which, coincidentally, has led to several user experience improvements, such as ad-hoc job execution and configure-time validation of user input. Again there is room for improvement; however, it seems to do its job.
The configuration UI utilizes the DataContractSerializer to serialize/deserialize the settings specified, so any plug-ins referenced by the configuration must be loaded before (or at the time of) configuration load.
Structure
The BPA engine is implemented as a shared assembly (DLL) which is referenced by the BPA service (a Windows Service), the Configuration UI (WinForms app), and a plug-in tester (Console application version of the Windows Service). Each of the three applications that reference the shared assembly only include the minimum amount of code necessary to perform their specific purpose. Additionally, all plug-ins must reference a very thin assembly which basically just defines the interface(s) that the plugin must implement.
Problem
Because of the extensibility model used in the application, there has always been a requirement that the config UI is run from the same directory (on the same PC) as the Service application. That way the UI always knows about all of the assemblies that the Service knows about, so they can be deserialized without running into missing assemblies. Now that we are getting close to rolling out the system, our network admins have demanded, for security purposes, that the Configuration UI be able to run remotely on any PC in our network. Typically this wouldn't be a problem if there were always a known set of assemblies to deploy; however, with the ability to extend the application using user-built assemblies, there has to be a way to resolve the assemblies from which the plug-ins can be instantiated/used.
Proposed (potentially obvious) Solution
Add a WCF service to the Service application to allow the typical CRUD operations against the configurations which that instance of the service is aware of, and rework the configuration UI to act more like SSMS with a Connect/Disconnect model. This doesn't really solve the problem on its own, so we would also need to expose some sort of ServiceContract from the Service application to allow querying of the assemblies it knows about/has access to. That's fine and fairly straightforward, but the question arises, "When should the UI find out about the assemblies that the Service is aware of?" On connect we could send all of the assemblies from the Service to the UI to ensure that it always knows about all of the assemblies the Service does, but that gets messy with AppDomain management (potentially unnecessarily) and assembly version conflicts. So I suggested hooking into the AppDomain.AssemblyResolve/AppDomain.TypeResolve events to only download the assemblies that the client isn't aware of yet, and only as needed. This doesn't necessarily clean up the AppDomain management issues, but it definitely helps address the version conflicts and related issues.
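A rough sketch of the AssemblyResolve idea, assuming a hypothetical WCF contract (here called IAssemblyProvider with a GetAssemblyBytes operation) through which the Service would serve assembly images:

using System;
using System.Reflection;

// Sketch only: IAssemblyProvider and GetAssemblyBytes are invented names for the
// operation the Service would expose for serving assembly images to the UI.
public interface IAssemblyProvider
{
    byte[] GetAssemblyBytes(string simpleAssemblyName);
}

public static class RemoteAssemblyResolver
{
    public static void Attach(IAssemblyProvider service)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // args.Name is the full name of the assembly the runtime failed to locate locally.
            byte[] raw = service.GetAssemblyBytes(new AssemblyName(args.Name).Name);
            return raw != null ? Assembly.Load(raw) : null;
        };
    }
}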
Question
If you've stuck with me this long I applaud and thank you, but now I'm finally getting to the actual question here. After months of research and finally coming to a conclusion I am wondering if anyone here has had to deal with a similar issue and how you dealt with the pitfalls and shortcomings? Is there a standard way of handling this that I have missed completely, or do you have any recommendations based on how you have seen this successfully handled in the past? Do you see any problems with the proposed approaches or can you offer an alternative?
I'm aware that not everyone lives in my head so please let me know if you need further clarification/explanation. Thanks!
Update
I've given MEF a fair shake and feel that it is too simplistic for my purposes. It's not that it couldn't be bent to handle the plug-in requirements of my application, the problem is doing so would be too cumbersome and dirty to make it feasible. It is a nice suggestion and it has a lot of potential, but in its current state it just isn't there yet.
Any other ideas or feedback on my proposed solutions?
Update
I don't know if the issue I'm encountering is just too localized, if I failed to properly describe what I am trying to achieve, or if this question is just too unreasonably long to be read in its entirety; but the few answers I've received have been subtly helpful enough to help me think through the problem differently and identify some shortcomings in what I am after.
In short, what I'm trying to do is take three applications which in their current state share information (configuration/assemblies) using a common directory structure, and try to make those applications work across a network with minimal impact on usability and architecture.
File shares seem like the obvious answer to this problem (as @SimonMourier proposed in the comments), but using them translates into a lack of control and debuggability when something goes wrong. I can see them as a viable short-term solution, but long term they just don't seem feasible.
tl;dr, but I'm 90% sure you should take a look at MEF.
When I first saw it I was like "aah, another acronym", but you'll see it's very simple, and it's built into .NET 4. Best of all, it even runs seamlessly on Mono, and it takes less than an hour (including a coffee break) to go from hearing about it to compiling hello-world examples and getting comfortable with the features. It's really that simple.
Basically, you "export" something in an assembly and "import" it into another (all via simple attribute decorations), and you choose where to search for it (for example, in the application's directory, a plug-ins folder, etc.).
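To give a feel for it, a minimal sketch of the export/import flow (ITask, CleanupTask, and the plug-in folder are invented names for illustration, not part of the original answer):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface ITask { void Run(); }

// In a plug-in assembly: mark the implementation as an export of the shared contract.
[Export(typeof(ITask))]
public class CleanupTask : ITask
{
    public void Run() { /* do the work */ }
}

// In the host application: declare what you need and let MEF satisfy it from a folder.
public class Engine
{
    [Import]
    public ITask Task { get; set; }

    public void Compose(string pluginFolder)
    {
        var container = new CompositionContainer(new DirectoryCatalog(pluginFolder));
        container.ComposeParts(this); // fills Task from whatever exports are found
    }
}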
Edit: what if you try to download and load (and possibly cache) plugins on-the-fly on configuration load?
I think that you could be overlooking a relatively simple solution that derives somewhat from the Microsoft web.config approach:
Have two sections in the config file:
Section 1 contains enough information about the plugin (i.e. name, version) to allow you to load it into an app domain.
Section 2 contains the information serialized by the plugin.
On loading the plugin, pass the information in section 2 and let the plugin deserialize it according to its needs.
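A minimal sketch of that approach, assuming an XElement-based layout and an invented IConfigurablePlugin interface (none of these names come from the question):

using System;
using System.Reflection;
using System.Xml.Linq;

// Hypothetical shape of one configuration entry:
// <plugin assembly="Acme.Plugins" type="Acme.Plugins.FtpTask" version="1.2.0.0">
//   <settings> ...whatever the plug-in serialized... </settings>
// </plugin>
public interface IConfigurablePlugin
{
    void LoadSettings(XElement settings);
}

public static class PluginConfigLoader
{
    public static IConfigurablePlugin Load(XElement pluginElement)
    {
        // Section 1: just enough information to locate and load the plug-in.
        var assembly = Assembly.Load((string)pluginElement.Attribute("assembly"));
        var type = assembly.GetType((string)pluginElement.Attribute("type"), throwOnError: true);
        var plugin = (IConfigurablePlugin)Activator.CreateInstance(type);

        // Section 2: handed to the plug-in, which deserializes it according to its needs.
        plugin.LoadSettings(pluginElement.Element("settings"));
        return plugin;
    }
}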
Maybe you can divide this problem into two parts:
the administrator allows users to download one of several predefined configurations (sets of libraries), and MEF helps to inject the required dependencies
each user activity should pass through a security proxy; plug-in modules are not allowed to call the BL directly. The proxy could match a custom security attribute against the allowed activities.
e.g.
[MyRole(Name = new[] { "Security.Action" })]
void BlockAccount(string accountId){}
[MyRole(Name = new[] { "Manager.Action" })]
void CreateAccount(string userName){}
[MyRole(Name = new[] { "Security.View", "Manager.View" })]
List<Account> AccountList(Predicate<Account> p){}
and map the allowed activities to AD groups (an abstract description):
corp\securityOperators = "Security.*" //allow calls to all security manipulation
corp\HQmanager = "Manager.View" //allow only view access
corp\Operator = "Manager.*"
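A sketch of what the custom attribute and the proxy check might look like (MyRoleAttribute is taken from the example above; the wildcard matching and method names here are assumptions):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class MyRoleAttribute : Attribute
{
    // The activities a caller must hold to invoke the decorated BL method.
    public string[] Name { get; set; }
}

public static class SecurityProxy
{
    // callerActivities would be derived from the caller's AD groups (e.g. "Security.*", "Manager.View").
    public static void Demand(MethodInfo method, IEnumerable<string> callerActivities)
    {
        var attr = (MyRoleAttribute)Attribute.GetCustomAttribute(method, typeof(MyRoleAttribute));
        if (attr == null) return; // treat unattributed methods as unrestricted (or deny, if preferred)

        bool allowed = attr.Name.Any(required =>
            callerActivities.Any(granted =>
                granted.EndsWith(".*")
                    ? required.StartsWith(granted.Substring(0, granted.Length - 1))
                    : granted == required));

        if (!allowed)
            throw new UnauthorizedAccessException("Caller may not perform " + method.Name);
    }
}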
I'm not sure I completely understand the problem but I think this situation calls for "type-preserving serialization" - that is, the serialized file contains enough type information to deserialize back to the original object graph without any hints from the calling application as to what types are involved.
I've used Json.NET to do this and I can highly recommend the library for type-preserving serialization of object graphs. It looks like the NetDataContractSerializer can also do this, according to the Remarks on MSDN:
The NetDataContractSerializer differs from the DataContractSerializer in one important way: the NetDataContractSerializer includes CLR type information in the serialized XML, whereas the DataContractSerializer does not. Therefore, the NetDataContractSerializer can be used only if both the serializing and deserializing ends share the same CLR types.
I chose Json.NET because it can serialize POCOs without any special attributes or interfaces. Both Json.NET and the NetDataContractSerializer allow you to use a custom SerializationBinder; here you could put any logic regarding loading assemblies that may not yet be loaded.
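For illustration, a sketch using Json.NET (the Binder property name varies between Json.NET versions, and PluginAwareBinder is an invented example of where assembly-resolution logic could live):

using System;
using System.Reflection;
using System.Runtime.Serialization;
using Newtonsoft.Json;

public static class ProfileSerializer
{
    static readonly JsonSerializerSettings Settings = new JsonSerializerSettings
    {
        TypeNameHandling = TypeNameHandling.Auto, // embed CLR type names where needed
        Binder = new PluginAwareBinder()          // hook for assemblies that aren't loaded yet
    };

    public static string Save(object profile)
    {
        return JsonConvert.SerializeObject(profile, typeof(object), Settings);
    }

    public static object Load(string json)
    {
        return JsonConvert.DeserializeObject(json, Settings);
    }
}

// Example binder: a plain load here; a real implementation could fall back to
// probing a plug-in directory or asking the Service for the assembly.
public class PluginAwareBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        Assembly assembly = Assembly.Load(assemblyName);
        return assembly.GetType(typeName, throwOnError: true);
    }
}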
Unfortunately, changing serialization schemes might be the "breaking-est" change to suggest because all your existing files will become incompatible. You might be able to write a conversion utility that deserializes a file using the old method and serializes the resulting object graph using the new method.
Related
I have a WinForms app. I give it to three clients and each one wants a small tweak or customization specific just to them. To accomplish this, I'd have to keep a separate version just for each client. I may wind up having many versions doing it this way. I thought dependency injection would be how to handle this, but I hear you have to register your dependencies in the main method, and you'd still have to add a reference to each client's DLL, so I'd still need different versions. What is the preferred object-oriented way to handle this? Any better ways to handle this?
You can use a Plug-in pattern to load an assembly at runtime (from the link):
Separated Interface (476) is often used when application code runs in multiple runtime environments, each requiring different implementations of particular behavior.
Most DI frameworks provide this functionality. You can search and find lots of examples for the framework you choose, if you don't want to roll your own:
Ninject
MEF
You can use a configuration file to configure your DI container, so that you can reuse the same binaries with different configuration files to implement the different customizations. But you need to be sure that you thoroughly test all of your different configurations. Slightly different versions of the same application are not trivial to maintain without causing unanticipated breaks.
Depending on the nature of the customizations, you might be able to capture all relevant modifications in a distinct part of the project (as opposed to keeping them spread all over the project). If you can (e.g. a filtering functionality is provided by the client), you can then load a DLL dynamically (e.g. based on a config file) and allow the functions in the DLL to perform the necessary functionality that accomplishes the customization (based on parameters provided by the main code).
This way you provide pre-defined hooks to your code that can be changed dynamically (even if only by loading the DLLs at startup time) as per the needs of the client. You can separate these DLLs into multiple ones if there are distinct features that the clients want to change, but not necessarily all of the clients want all of the features changed. Then you can provide a "default" version of the DLLs.
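A minimal sketch of loading such a DLL from a config entry (ICustomizationHook, the "CustomizationHook" setting, and its "assembly.dll, Type.Name" format are all invented for illustration):

using System;
using System.Configuration;
using System.IO;
using System.Reflection;

public interface ICustomizationHook
{
    void Apply(object context); // hypothetical hook contract
}

public static class HookLoader
{
    public static ICustomizationHook Load()
    {
        // e.g. <appSettings><add key="CustomizationHook" value="ClientA.Hooks.dll, ClientA.Hooks.Filtering"/></appSettings>
        string[] parts = ConfigurationManager.AppSettings["CustomizationHook"].Split(',');
        string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, parts[0].Trim());

        Assembly assembly = Assembly.LoadFrom(path);
        Type hookType = assembly.GetType(parts[1].Trim(), throwOnError: true);
        return (ICustomizationHook)Activator.CreateInstance(hookType);
    }
}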
Who develops the hooks is dependent on your setup with the clients.
Make sure you provide adequate documentation on how these hooks are supposed to work, even if you end up developing them yourself.
I hope this question makes sense. Basically, I am looking for a set of guidelines, or even a tutorial, that will show how to make an application that can easily add and remove "modules" or "add-ins"
For example, in Microsoft Office, you will commonly see programs that you can download and install and they will just add an extra tab into Microsoft Word (for example) that will implement some new feature.
I have several applications that use basically the same data source, and I'd like to consolidate them and also leave open the possibility of adding more functionality in the future without 1. Requiring a brand new install and 2. Tweaking every piece of my code.
I'm looking for a place to start, mostly.
Thanks in advance.
Edit: To elaborate a little more...
The thing I have in mind specifically is an application that accesses a large set of data that is stored in text files and uses some of the data to create a few graphs and maybe some tables. I'd like the ability to add different graphs in the future using the same data. So, you can click Button_A and generate Graph_A, then a few weeks later, you can click Button_B and generate Graph_B.
It would be really nice if I could come up with a way that only required reading the data from the file(s) once, but I know that would involve having to adjust my DataReader class a bit.
One place to start would be to define an interface for your future modules, and build a utility that scans all the DLLs in a designated folder, looking for classes that implement said interface.
Once you've found supporting classes, you can create instances at runtime and add them to your application. That's a common idiom in .NET for supporting "plug-ins".
The Activator class is a common way to create instances from a Type at runtime.
http://msdn.microsoft.com/en-us/library/system.activator.aspx
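A sketch of that idiom, using a placeholder IModule interface and a simple folder scan (the interface and folder name are up to you):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

public interface IModule
{
    void Initialize(); // placeholder contract for your future modules
}

public static class ModuleLoader
{
    public static IEnumerable<IModule> LoadModules(string folder)
    {
        foreach (string dll in Directory.GetFiles(folder, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(dll);
            var moduleTypes = assembly.GetTypes()
                .Where(t => t.IsClass && !t.IsAbstract && typeof(IModule).IsAssignableFrom(t));

            foreach (Type type in moduleTypes)
                yield return (IModule)Activator.CreateInstance(type);
        }
    }
}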
It's hard to give more details without more info in your question. Can you elaborate a bit?
Take a look at the Composite Application Library from Microsoft.
It is aimed at WPF but you could get some ideas from there.
As Adam said, the first thing to do is define the interface for your plugin modules - what can they expect to receive from the container, and what methods must the container be able to call?
As far as the container itself goes, I'm partial to MEF as a location technology; you can create catalogs and re-compose the system when new DLLs are added. I've built a similar system to this for parsing dissimilar files, and the composition capabilities of MEF are awesome for runtime discovery.
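For reference, a sketch of the catalog/re-composition setup described here (IPlugin and the "Plugins" folder are placeholders, not part of the original answer):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin { }

public class PluginHost
{
    [ImportMany(AllowRecomposition = true)]
    public IEnumerable<IPlugin> Plugins { get; set; }

    private DirectoryCatalog directoryCatalog;

    public void Compose()
    {
        directoryCatalog = new DirectoryCatalog("Plugins");
        var catalog = new AggregateCatalog(
            new AssemblyCatalog(typeof(PluginHost).Assembly),
            directoryCatalog);

        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // fills Plugins with every discovered export
    }

    public void Rescan()
    {
        // Picks up DLLs added since the last scan; imports marked AllowRecomposition are re-satisfied.
        directoryCatalog.Refresh();
    }
}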
I'm currently working on a large real-time OLAP application. All data is held in RAM (a few gigabytes) and the common tasks involve brute-force scanning over that large quantity of data (which is fine). The results of processing are exposed via a web service (singleton/multithreaded) and presented using a Silverlight-based client.
The problem is that various customers need different functionality/algorithms and I don't know how to provide extensibility on the server-side. For the client side (Silverlight) I can use MEF/Prism, but I'm not sure what would be a good approach to tackle this problem on the server.
Please note that ideally other web-services should have a direct access (i.e. without marshaling) to the data of the main/current service which holds the large data model.
Are there any:
a) frameworks/libraries
b) patterns
c) good practices
which would help me to modularize the application and make the selection of desired modules and their deployment relatively easy?
Sounds to me like Dependency Inversion is required: isolate logical parts of the system (algorithms, etc) by defining interfaces, then use a DI / IoC framework to load the desired implementation at runtime (or on application start, etc).
I haven't used Ninject, but plenty of people love it, so you could try that; there's also Spring.Net.
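A rough sketch with Ninject, just to make the idea concrete (IAggregationAlgorithm and CustomerXAlgorithm are invented names; any IoC container would look similar):

using System.Collections.Generic;
using System.Linq;
using Ninject;
using Ninject.Modules;

public interface IAggregationAlgorithm
{
    decimal Aggregate(IEnumerable<decimal> values);
}

public class CustomerXAlgorithm : IAggregationAlgorithm
{
    public decimal Aggregate(IEnumerable<decimal> values) { return values.Sum(); }
}

// A module per customer/deployment; which module gets loaded could itself be driven by config.
public class CustomerXModule : NinjectModule
{
    public override void Load()
    {
        Bind<IAggregationAlgorithm>().To<CustomerXAlgorithm>();
    }
}

// At application start:
// var kernel = new StandardKernel(new CustomerXModule());
// var algorithm = kernel.Get<IAggregationAlgorithm>();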
Good Practices:
Ensure you have clear precise logging so you know what's being used and when.
Think about whether you want a 'default' implementation to load if the desired one fails, or whether you deliberately want to fail so that the wrong data isn't returned by mistake (such as through the use of a different algorithm).
I've found that using attributes to decorate injectable modules is really helpful (especially in a web-based system that you don't have immediate access to); one reason for this is that you can build pages or controls that list all the known/available implementations at runtime.
You can also use the attribute approach to build a UI that lets users select which one they want; I use it for an open source web-application framework I built: http://www.morphological.geek.nz/Morphfolia/Capabilities/AttributeDriven.aspx
I am working on an application that loads plug-ins at startup from a subdirectory, and currently I am doing this by using reflection to iterate over the types of each assembly and find the public classes implementing the IPluginModule interface.
Since reflection involves a performance hit, and I expect that there will be several plug-ins after a while, I wondered if it would be useful to define a custom attribute applied at the assembly level that could be checked before iterating over the types (possibly about a dozen types in an assembly, including one implementor of IPluginModule).
The attribute, if present, could then provide a method to return the needed types or instances, and iterating over the types would then only be a fallback mechanism. Storing the type info in a configuration file is not an option.
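A minimal sketch of what such an assembly-level attribute might look like (names invented for illustration):

using System;

[AttributeUsage(AttributeTargets.Assembly)]
public class PluginModuleProviderAttribute : Attribute
{
    public Type[] ModuleTypes { get; private set; }

    public PluginModuleProviderAttribute(params Type[] moduleTypes)
    {
        ModuleTypes = moduleTypes;
    }
}

// In the plug-in assembly:
// [assembly: PluginModuleProvider(typeof(MyPluginModule))]

// In the host, checked before falling back to iterating every type:
// var attr = (PluginModuleProviderAttribute)Attribute.GetCustomAttribute(
//     assembly, typeof(PluginModuleProviderAttribute));
// Type[] moduleTypes = attr != null ? attr.ModuleTypes : ScanForModules(assembly);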
Would this improve performance, or does it just not matter compared to the time it actually takes to load the assembly from storage? Also, would this usage be appropriate for an attribute at all?
I will answer your question with a question: Why are you worried about this?
You're worrying about a potential performance hit in a one time operation because there might be several plugins at a later date.
Unless your application startup time is excessively long to a user, I wouldn't waste time thinking about it - there are probably much better things that you can work on to improve your application.
You could also list the pluggable types in a configuration file, so you know the exact classes instead of looping through all classes. You would need some configuration utility for this option, but you could possibly get a good increase in performance depending on the number of classes you are looping through.
I believe both of Microsoft's .NET plug-in frameworks, the Managed AddIn Framework (MAF) and the Managed Extensibility Framework (MEF), can use either attributes or reflection to discover plug-ins. So Microsoft seems to feel attributes are appropriate.
I'm not sure what the performance differences are, though.
A good solution is to cache all information about plugins. The first time the application is started it does a full scan of the plugin dlls, and saves the list of types found in a file. The next time the application starts, it loads the information from the file, which will be much faster than scanning all the dlls again. The application can also store a timestamp of each dll, so if it detects a change in a dll it can re-scan it and update the cache.
That's basically the approach followed by the Mono.Addins framework.
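A sketch of that caching scheme (the cache file format here, one "path|lastWriteTicks|type1;type2" line per DLL, is purely illustrative):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

public interface IPluginModule { } // stand-in for the interface from the question

public static class PluginCache
{
    public static IEnumerable<string> GetModuleTypeNames(string dllPath, string cachePath)
    {
        long stamp = File.GetLastWriteTimeUtc(dllPath).Ticks;

        var cache = File.Exists(cachePath)
            ? File.ReadAllLines(cachePath).Select(l => l.Split('|')).ToDictionary(p => p[0])
            : new Dictionary<string, string[]>();

        string[] entry;
        if (cache.TryGetValue(dllPath, out entry) && long.Parse(entry[1]) == stamp)
            return entry[2].Split(';'); // cache hit: no assembly scan needed

        // Cache miss or stale timestamp: scan the assembly and rewrite the cache.
        string[] types = Assembly.LoadFrom(dllPath).GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract && typeof(IPluginModule).IsAssignableFrom(t))
            .Select(t => t.AssemblyQualifiedName)
            .ToArray();

        cache[dllPath] = new[] { dllPath, stamp.ToString(), string.Join(";", types) };
        File.WriteAllLines(cachePath, cache.Values.Select(v => string.Join("|", v)));
        return types;
    }
}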
I'd have thought that asking an assembly for all the classes that are tagged with an attribute would also use reflection. It would then come down to which is the faster lookup in the metadata: interface implementation or attribute marking?
PREAMBLE:
This is by far the longest post I've left here...but I think it's required in this case.
I've had questions about these kinds of things for a long time: how to name assemblies, and how to divide up classes within them.
I'd like to give an example of an application here, with only a bare minimum of classes to demonstrate what I'm trying to understand.
Imagine an application that
Accepts client messages, stores them in a db, and then later dequeues them to an MTA server.
It's a Web application that has both an ASP.NET interface to write a message + attach attachments.
There's also a Silverlight client, so the webapp exposes a ClientServices WCF ServiceContract, with one OperationContract (SaveMessage).
There's also a Windows client... it does the same thing as the Silverlight contract.
OK, that should be enough of a fake scenario to demonstrate my cluelessness.
The above will need the following classes:
Message
MessageAddress
MessageAddressType (an enum with From, To)
MessageAddressCollection
MessageAttachment
MessageAttachmentType
MessageAttachmentCollection
MessageException
MessageAddressFormatException
MessageExtensions (static extension for Message)
MessageAddressExtensions (static extension for MessageAddress)
MessageAttachmentExtensions (static extension for MessageAttachment)
Project.Contract.dll
My first stab at organizing the above into the right assemblies would be observing that Message, MessageAddress, MessageAttachment, the enums needed for their properties (MessageAddressType, MessageAttachmentType), and the collections needed for them (MessageAddressCollection, MessageAttachmentCollection) are all to be marked as [DataContract] so that they can be serialized between the WCF client and the server.
Being common to both, I think I would move them into a neutral shared assembly called Contract.
Project.Client.dll
I'll need a Client proxy of the server [ServiceContract], that refs the classes in the Contract.dll.
So the server, which also refs Project.Contract.dll, could now take serialized Messages received from a WCF client and save them into a db.
Plugins
Next I would realize that I would like to have these objects be processed server side by 3rd party plugins (e.g. a virus checker)...
But plugins should have read-only access to the variables, in order to check them and throw errors if they see something they don't like.
So I would think about going back and having Message implement IMessageReadOnly... but where do I put that interface?
Project.Interfaces.dll
If I put it in an assembly called Project.Interfaces.dll, this would work for the plugins, which could reference that without having a reference to Contracts.dll... but now the client has to reference both the Contracts assembly AND Interfaces... doesn't sound like a good direction...
Duplicate Objects
Alternatively, I could have two Messages structures (and duplicate the other MessageAttachment, etc. classes as well)...one for communicating from client to server (in the Contracts.dll), and then use a second ServerMessage/ServerMessageAddress/ServerMessageAddressCollection on the server side, which inherits from IMessageReadOnly, and then it would appear that I am closer to what I want.
With duplicate objects, plugins are limited in access, while Server BL, etc. has full access for types relevant to its work, all while the client has different but identical objects...
In fact, I should probably start considering them as non-identical, making it clearer in my head that the client-facing objects are just there to talk to clients (i.e. Contract/Comm objects)...
The Website UI
which brings up... hmm... if there are two different Messages, and they now have different properties... which one is the most appropriate for backing the ASP.NET forms? The ServerMessage object seems fastest (no mapping going on between types)... but all the logic has already been worked out against client message objects (with different properties and internal logic). So would I use a ClientMessage and map it to a ServerMessage, to keep the various UI logics the same across different mediums? Or should I prefer mapping, and just rewrite the UI validation?
What about the third case, Silverlight... The Contracts assembly was a full-framework assembly, which Silverlight can't reference (different framework/build mechanism), so the assembly that I have on the Silverlight side might be exactly the same code, but it has to be a different assembly. How does that work out?
What exactly to Consider as DataContract?
Finally...and this is, I swear, near the end of my huge question...what about the pesky extra classes that are not clearly DataContract?
For example, MessageAddress was a DataContract. OK. And the enums it exposes are part of it... makes sense... But if the MessageAddress constructor raises a MessageAddressFormatException, is it considered part of the DataContract?
Can there be Classes common to both Server, Client, AND Plugins?
Or is it an exception that is common to BOTH ServerMessageAddress and ClientMessageAddress, so should not be duplicated, and instead be in a Common assembly...so that in the end, the client has to bind to Contracts AND Common? (Didn't we just go down this alley with the Interfaces assembly?)
What about common Base classes/Interfaces?
And should these exceptions have common base classes? For example... ClientMessageAddressException, ServerMessageAddressException, ServerMessageVirusException (from a plugin)... should I struggle to get them all to derive, as best as possible, from an abstract MessageException, or is there a point where inheritance/reuse is just no longer an appropriate goal to strive for?
HUGE THANKS FOR READING THIS FAR.
I'm a developer and on the tech side I can bumble along ok...but these kinds of questions, where I've had to lay out the assemblies, the architecture, myself, leave me hugely perplexed...and lose me SOOOO much time, as I drive myself batty, moving things around from one assembly to another to see which one is the best fit, all while not really certain of what I am doing, and trying to not get circular references...
So -- really -- thanks for listening, and I hope this gets read by people who can describe how to lay out the above cleanly, hopefully expressing how to think my way through it for future projects as well.
After spending 10 minutes editing the question for formatting, I'm still going to downvote it. There's no way I'm going to read all that.
Go pick up a copy of
Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition)
As an architect, I've learned that it doesn't pay to get too wrapped up in getting things absolutely perfect the first time, and perfect is subjective. Refactoring, especially moving classes between assemblies, doesn't have too huge a cost. It sounds to me like you're already thinking things through logically and correctly. Here's my opinions on a few of your questions:
Q: Should I have read-only contracts for my data contract classes?
The plugins most likely shouldn't be aware of your data contracts at all. A virus checker may take a byte array, a spell checker a string and a locale, etc. If you're making a general interface layer for the plugins, you should isolate what's shared to just the data specific to the plugin. This will allow you to maximize their reuse. Thus, I think you'll get little payoff from creating interfaces for your data contract structures, which should mostly be dumb bags of data with little logic, practically interfaces themselves.
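In other words, something along these lines, where each plug-in contract exposes only the data that plug-in actually works on (the interface names here are illustrative, not from the question):

using System.Collections.Generic;
using System.Globalization;

// The virus checker never sees the Message data contract, only the attachment bytes.
public interface IAttachmentScanner
{
    bool IsClean(byte[] attachmentContent, string fileName);
}

// The spell checker only needs text and a locale.
public interface ISpellChecker
{
    IEnumerable<string> FindMisspellings(string text, CultureInfo locale);
}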
Q: Should I use the same data contract classes as my Silverlight app does in my ASP.NET application or use server-side classes directly?
I would go with the client message objects so you can benefit from code reuse. Object creation is fairly cheap, and I'm sure that most of the mapping would be one-to-one. It's not as fast, true, but that won't be the bottleneck in your application.
Q: Where do I put my exception classes?
I would put your example exception classes in the assembly with the data contract, since they are all raised due to contract violations or as a means to communicate errors while fulfilling the contract.
Q: Should the exceptions have common base classes?
I have yet to need to do this, but I don't know your code base as well as you do. My guess is that it will gain you little if anything.
Edit:
You may be overplanning for the future. In my experience, taking a YAGNI approach has allowed us to get the important things done more quickly. Making incremental design changes is preferred to spending valuable time building an elaborate architecture that you might never even benefit from.