Interface for plugins to implement in .Net - c#

I want to implement a plug-in system in my .net application, without the use of MEF.
My application loads DLLs and creates instances of the types they contain.
There is an interface (IPluginContract) that the main application assembly uses when loading the DLL types, and this very same interface is implemented by the DLL projects (the plug-ins).
So different projects need access to the same interface.
I can satisfy this requirement by moving the interface into a separate class library that both the main app and the plug-ins reference.
Is this the correct way to solve the described problem?

Yes, pushing your interfaces out into a shared library is a preferred solution. You then only need to distribute this library to plugin developers, which could be considered lightweight, but the plugin will be coupled to an exact version of the interface.
Another solution is a convention-based one, where plugin writers have types that "conform" to an interface, e.g. a class with the appropriate methods, which they point to via a config file. You can then use reflection, IL generation, etc. to wire this up to a concrete internal interface/proxy. The benefit here is that plugins are not hard-wired to a specific interface version, so there is more flexibility in versioning.
You could also consider versioning by maintaining all versions of your interface, e.g. IPlugin_1, IPlugin_2, etc. It's then up to plugin writers to implement whichever version they choose, and up to you to handle each version.
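To make the shared-library approach concrete, here is a minimal sketch; IPluginContract comes from the question, while the Execute member, the folder scan and the loader class name are purely illustrative assumptions:

// PluginContracts.dll - the shared class library referenced by both sides
namespace PluginContracts
{
    public interface IPluginContract
    {
        string Name { get; }
        void Execute();   // hypothetical member, just for illustration
    }
}

// Host application (separate file) - discovers and instantiates plug-in types via reflection
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using PluginContracts;

public static class PluginLoader
{
    public static IPluginContract[] LoadPlugins(string folder)
    {
        return Directory.GetFiles(folder, "*.dll")
            .Select(Assembly.LoadFrom)
            .SelectMany(a => a.GetTypes())
            .Where(t => typeof(IPluginContract).IsAssignableFrom(t)
                        && t.IsClass && !t.IsAbstract)
            .Select(t => (IPluginContract)Activator.CreateInstance(t))
            .ToArray();
    }
}

Each plug-in project references only PluginContracts.dll and implements IPluginContract; the host never needs to reference the plug-in assemblies at compile time.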

We have successfully taken two different approaches to this issue depending on the circumstances at the time (time to market, implementation difficulty, internals exposure concerns, etc):
1) Move the interface into its own DLL. This works well if the plugins don't need any other support objects/functions/data embedded in your main application DLL or if you don't want to expose public members in your main DLL to plugin writers.
2) Leave the interface in the main DLL. We have primarily used this when the refactoring cost to move the interface and associated classes was too high or when the plugins were completely self-contained (i.e. we author them for customers).

Related

Class visible only to shared project?

I want to separate platform-independent logic of my C# program into a shared project. Now I would like to hide repositories, service classes and such from my platform-specific projects. What access modifier can I use? internal doesn't seem to work, as they are compiled into the same executable (I think) and I don't want to go tag all my classes with InternalsVisibleToAttribute.
Is there a way to make classes in my shared project invisible to my platform-specific code?
There's only one place where you need to know the real type you're trying to instantiate - the platform provider. Everyone else should just use the interfaces that are platform-invariant.
All the platform-specific implementations can then be private or internal for all you care - you just need to ensure the provider has access. Your application will use the platform-specific provider to get the platform-specific instances, while only ever using the platform-invariant interfaces.
As for "being compiled into a single executable", that's not really important. Most likely you care entirely about compile-time checking, and that's still present regardless of how the final executable is packaged. There's some restrictions on reflection in a partial trust environment, but by that point you shouldn't care - you're only in it for the compile checks, not the runtime safety.
No, there is no such feature in C#. If marking every other project with InternalsVisibleToAttribute is an option for you, that would do the trick.
If possible, you could split off those other files (repositories, service files) to another assembly, which is not included in your shared project.
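For what it's worth, InternalsVisibleToAttribute is applied once per assembly (typically in AssemblyInfo.cs), not to individual classes, so the tagging effort is smaller than it may sound; the friend assembly name below is just a placeholder:

using System.Runtime.CompilerServices;

// Grants the named friend assembly access to every internal type in this one.
[assembly: InternalsVisibleTo("MyApp.PlatformSpecific")]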

Why registering COM interfaces?

I've used COM for some years now but I keep learning new (and strange) things.
Recently I've realized that COM interfaces don't have to be registered in the registry for components implementing them to work.
I've come to this conclusion after analysing the registry of a workstation where COM DLLs (implemented in .Net/C#) were registered with .reg files created by RegAsm because the user was not an administrator. And RegAsm only generates registry keys for COM classes and not interfaces.
If that's true, my guess is that interfaces only matter for early binding and only have to be present in TLB files. Registering implementations (classes), on the contrary, is essential because they are backed by physical code on the file system that needs to be located.
1) So am I crazy, missing something, or can interfaces be omitted?
2) If they can be omitted what are the consequences if any?
There are a lot of things that you can't do without the interface being registered. Many of the features of COM -- marshaling, proxying, asynchronous calling -- have standard implementations that prevent you from having to roll this stuff yourself. For example, CoMarshalInterface is a standard way of taking any COM object interface and marshaling that interface into a stream so that it can be unmarshaled in another thread, process or machine. The interface information is critical in this -- without the interface metadata, the standard COM implementations of things like this won't work, as the infrastructure simply doesn't know enough about your interfaces to do what it needs to do in a generic way that works for all COM objects.
Additionally, while most automation clients (like VBA, C# and C++) can reference a type library file directly for purposes of early-binding, there are still limitations. For example, suppose you're working with a type library that contains some classes that implement interfaces from a different type library, or maybe the interfaces in the first type library accept parameters or return values that are defined by interfaces/enums/etc in another type library. In order for an automation client to work with these interfaces which contain cross-references, the cross-referenced type library must be discoverable somehow. Registration is the way this is accomplished.
Worth noting: In my experience, pretty much everything that works when a COM object is registered machine-wide (registered in HKLM) works exactly the same when registered per-user (in HKCU). This often makes COM registration more palatable in situations where machine-wide registration can't be performed (e.g. the user is not an admin). However, there are some significant gotchas, most notably https://techcommunity.microsoft.com/t5/Windows-Blog-Archive/Per-User-COM-Registrations-and-Elevated-Processes-with-UAC-on/ba-p/228531
Pretty vague, not sure I could read all the words between the bold ones. There is in general more than one way to skin this cat. COM requires using a class factory to get an object created; the generic work-horse is CoCreateInstance(), and CreateObject() is popular in scripting environments. You give it a number and it spits an interface pointer back, with the COM runtime taking care of locating the executable file that contains the coclass, loading it and finding the proper class factory implementation.
Finding the executable is the tricky part; this is commonly done through info in the registry, entered there when the component was registered. Not exclusively though: a manifest can also be the source of this info. It needs to be embedded in the client app, which is one reason it is not a universal solution. More modern is the package manifest in a Windows Store/Phone/Universal application, where it is required; only very privileged components (Microsoft's own) can still use the registry to let themselves be found.
A completely different tack is having custom class factories. The way it is done in DirectX for example, it doesn't depend on the registry at all. You call CreateDevice() instead. Still calling this COM is a bit of a stretch, it is a more general technique called interface-based programming.
This all applies to objects, interfaces are different. You call IUnknown::QueryInterface() to obtain an interface pointer. No registration required, it is the coclass that handles it.
Nevertheless, you'll find lots and lots of registered interfaces with Regedit.exe in the HKLM\Software\Classes\Interface registry key. They take care of another COM detail: if the component does not live in the same machine, the same process or the same thread as the client code, then extra work must be done to get the call serialized across the machine/process/thread boundary. It's the same kind of thing that happens in .NET Remoting; it requires a proxy, an object that also implements the same interface but doesn't execute the method directly, instead passing the arguments to the stub so it can make the call.
That is simple to do in .NET; Reflection makes it very easy. It is not simple in COM: an extra component is required that knows how to serialize the arguments into an interop packet, and to get the return value back the same way. Proxy/stubs are normally built automatically from the IDL. Or, very common in .NET since it doesn't use IDL, you use the standard marshaller that digs out method details from the type library - a mechanism that's highly comparable to .NET Reflection, with the type library playing the exact same role as .NET metadata does.
The ProxyStubClsId32 registry key inside the Interface key contains the CLSID of that component. You'll very commonly find {00000320-0000-0000-C000-000000000046} there, that's the system provided marshaller that uses the type library.
Regasm doesn't write the interface keys; it sets the ThreadingModel key for a .NET [ComVisible] class to "Both", so that the methods can be called from both an STA and an MTA thread without having to be marshaled. That's very optimistic and very rarely tested; writing thread-safe .NET code isn't that easy.
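For reference, this is roughly what a COM-exposed .NET class looks like on the C# side; the GUIDs and member names are made up for illustration:

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
[Guid("11111111-2222-3333-4444-555555555555")]
[InterfaceType(ComInterfaceType.InterfaceIsDual)]
public interface ICalculator
{
    int Add(int a, int b);
}

[ComVisible(true)]
[Guid("66666666-7777-8888-9999-AAAAAAAAAAAA")]
[ClassInterface(ClassInterfaceType.None)]
public class Calculator : ICalculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

When the class is registered, it is the class's CLSID key that ends up in the registry, including the ThreadingModel value mentioned above.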
Regarding your first question, if the interface is not supposed to be used across COM contexts, or if the interface derives from IDispatch and you only use late-binding, you don't need to register it.
However, if you use early-binding, or if the interface is supposed to be used across COM contexts, you need to register it.
Just registering an interface doesn't enable marshaling; all argument types and return types must be marshalable too, i.e. not HANDLE or the like.
Regarding your second question, my hope is that you can answer it yourself after reading the answer thus far. If not:
if you don't register an interface, you can't use it directly across COM contexts. If it derives from some registered interface, you can use that base interface, as is the case with IDispatch-based interfaces.
However, very few interfaces are as general as IDispatch, so for any other base interface, you won't be able to use your derived interface's new methods.
In type libraries, if you don't register event dispinterfaces, then development tools (typically IDEs) won't be able to show you which events can be fired, or any event at all. The only other option is to implement the dispinterfaces by hand, if your programming language has that option, which requires documentation equivalent to the missing IDL in the first place.
One common extreme of this is to have all objects simply implement IDispatch and no other interface, but again this will hinder any effort a development tool might make towards method listing, code completion and/or argument choice (e.g. IntelliSense). Note that sometimes this is enough, such as when implementing a window.external object for IE's JScript, but it's a bit of laziness when done in more general objects.
In general, if registering your interfaces requires very little extra effort, given you're already targeting COM, do so.

.NET Client supporting multiple versions of an unmanaged DLL

I am developing a .NET 4.0 client that will utilize a C Library for data processing. The user will be able to specify the DLL file they wish to load for processing.
I am doing late binding / assembly loading as described here. http://blogs.msdn.com/b/jonathanswift/archive/2006/10/03/dynamically-calling-an-unmanaged-dll-from-.net-_2800_c_23002900_.aspx
For each DLL, the method call sequence in my client will be the same, but the method signatures or the data structs passed in will change. The data populated in the structures will also differ depending on the version of the DLL and other factors. For example, the definition of MyStruct will change depending on the version of the DLL.
public delegate int INTF_my_method(ref MyStruct pDataStruct);
What design patterns or design decision are recommended for this approach? I need to load the appropriate C method delegates and data definitions based on the version of the DLL that the user has specified, and populate the structures appropriately. Has anyone done something like this before?
There is no clean approach to this, neither in managed code nor native code. The best you could possibly do is to declare an interface type that tries to cover all possible versions and then write concrete wrapper classes for each individual version of the API. If there's at least some common functionality then you can shovel that in a base class.
Notable too is that you cannot just let the user pick a DLL, you have to pair the DLL with the concrete wrapper class instance.
Building this kind of flexibility in your program is obviously very expensive.
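A sketch of what the wrapper-per-version idea might look like on top of the LoadLibrary/GetProcAddress technique from the linked article; the struct layout, the exported function name ("my_method") and the interface shape are all assumptions:

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

// Common interface the client codes against, hiding per-version differences.
public interface IDataProcessor : IDisposable
{
    int Process(int value);
}

internal static class NativeMethods
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    public static extern IntPtr LoadLibrary(string fileName);

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    public static extern IntPtr GetProcAddress(IntPtr module, string procName);

    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern bool FreeLibrary(IntPtr module);
}

// Wrapper for (hypothetical) version 1 of the C library.
public sealed class DataProcessorV1 : IDataProcessor
{
    [StructLayout(LayoutKind.Sequential)]
    private struct MyStructV1 { public int Value; }   // v1 layout (assumed)

    private delegate int INTF_my_method(ref MyStructV1 pDataStruct);

    private readonly IntPtr _module;
    private readonly INTF_my_method _myMethod;

    public DataProcessorV1(string dllPath)
    {
        _module = NativeMethods.LoadLibrary(dllPath);
        if (_module == IntPtr.Zero) throw new Win32Exception();

        IntPtr proc = NativeMethods.GetProcAddress(_module, "my_method");
        if (proc == IntPtr.Zero) throw new Win32Exception();

        _myMethod = (INTF_my_method)Marshal.GetDelegateForFunctionPointer(
            proc, typeof(INTF_my_method));
    }

    public int Process(int value)
    {
        var data = new MyStructV1 { Value = value };
        return _myMethod(ref data);
    }

    public void Dispose()
    {
        NativeMethods.FreeLibrary(_module);
    }
}

A DataProcessorV2 with its own struct layout follows the same shape, and a single factory method pairs the user's chosen DLL with the right wrapper - the pairing the answer above says you cannot skip.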
You can load different versions of your DLLs, but only from separate AppDomains. That is, for each DLL you want to load, you will have to create a new AppDomain.

Interface Library Versioning - Breaking Changes

I currently have a C# project which uses plugins and has a fairly common approach to plugin handling: an IPlugin interface is stored in a dll which is linked in a traditional dynamic way. The host app looks for class libraries exporting classes exposing this interface and loads them via reflection at run time.
The dll containing the interface also contains helper classes, for updating plugins, providing abstract base classes and so on.
My question is, what does it take to break the interface between my host and plugin assemblies? In other words, if I compile and distribute the host app and then distribute plugins that have been linked with a later version of the plugin dll (in which helper classes have changed, but IPlugin is defined in exactly the same way), will the host still pick up the plugins? How much of a change do I need to make to the plugin library before IPlugin is considered a different "type" by the reflection methods I am using?
If the assembly isn't loaded by a specific version then I would say the only breaking changes you will really encounter are when you change the interface contract. If you are just changing helper classes it shouldn't be a problem.
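As a concrete illustration (IPlugin stands in for the real contract; the discovery helper is a typical sketch, not the asker's exact code):

using System;
using System.Linq;
using System.Reflection;

public interface IPlugin { }   // placeholder for the contract in the shared DLL

public static class PluginDiscovery
{
    // The host only ever asks reflection about IPlugin itself, so changes to
    // helper or abstract base classes elsewhere in the contract DLL don't break
    // discovery. What does break it is anything that changes the identity of the
    // contract assembly the plugin compiled against: a renamed assembly, or (for
    // strongly named assemblies) a changed version/public key with no binding
    // redirect, which makes the plugin's IPlugin a different type than the host's.
    public static Type[] FindPluginTypes(Assembly pluginAssembly)
    {
        return pluginAssembly.GetTypes()
            .Where(t => typeof(IPlugin).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract)
            .ToArray();
    }
}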

System with plugins in C#

I have to develop a system to monitor sensor information, but many sensors might be added in the future.
That said, the idea would be to develop a system that would consist of the application skeleton. The sensors (as each of them has its communication and data presentation characteristics) would be added as plugins to the system.
How would I code this in C#? Is it a case of component-driven development? Should I use dynamic libraries?
There are a huge number of ad-hoc plug-in systems for C#. One is described in Plugin Architecture using C# (at The Code Project). The general approach is that the host application publishes an assembly with interfaces. It enumerates through a folder and finds assemblies that define a class that implement its interfaces and loads them and instantiates the classes.
In practice you want to do more. It's best if the host application defines two interfaces, an IHost and an IPlugIn. The IHost interface provides services that a plug-in can subscribe to. The IPlugIn gets constructed taking an IHost.
To load a plug-in, you should do more than simply get a plug-in. You should enumerate all plug-ins that are loadable. Construct them each. Ask them if they can run. Ask them to export APIs into the host. Ask them to import APIs from the host. Plug-ins should be able to ask about the existence of other plug-ins.
This way, plug-ins can extend the application by offering more APIs.
PlugIns should include events. This way plug-ins can monitor the process of plug-ins loading and unloading.
At the end of the world, you should warn plug-ins that they're going to go away. Then take them out.
This will leave you with an application that can be written in a tiny framework and implemented entirely in plug-ins if you want it to.
As an added bonus, you should also make it so that in the plug-ins folder, you resolve shortcuts to plug-ins. This lets you write your application and deliver it to someone else. They can author a plug-in in their development environment, create a shortcut to it in the application's plug-ins folder and not have to worry about deploying after each compile.
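A pared-down sketch of that host/plug-in pair; the member names (CanRun, Start, Stop) and the folder scan are invented for illustration, and shortcut resolution is left out:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// Services the host publishes to plug-ins (logging, lookup of other plug-ins, ...).
public interface IHost
{
    void Log(string message);
    IEnumerable<IPlugIn> LoadedPlugIns { get; }
}

// Contract every plug-in implements; each plug-in is constructed taking an IHost.
public interface IPlugIn
{
    string Name { get; }
    bool CanRun { get; }
    void Start();
    void Stop();              // the "warn them they're going away" step
}

public class PlugInHost : IHost
{
    private readonly List<IPlugIn> _plugIns = new List<IPlugIn>();

    public IEnumerable<IPlugIn> LoadedPlugIns { get { return _plugIns; } }

    public void Log(string message) { Console.WriteLine(message); }

    public void LoadAll(string plugInFolder)
    {
        foreach (string dll in Directory.GetFiles(plugInFolder, "*.dll"))
        {
            var plugInTypes = Assembly.LoadFrom(dll).GetTypes()
                .Where(t => typeof(IPlugIn).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract);

            foreach (Type t in plugInTypes)
            {
                // Plug-in classes are expected to have a constructor taking IHost.
                var plugIn = (IPlugIn)Activator.CreateInstance(t, (IHost)this);
                if (plugIn.CanRun)
                {
                    _plugIns.Add(plugIn);
                    plugIn.Start();
                }
            }
        }
    }

    public void UnloadAll()
    {
        foreach (IPlugIn p in _plugIns) p.Stop();
        _plugIns.Clear();
    }
}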
Managed Extensibility Framework (MEF) is what you need here. You could also use a dependency injection container, but that's a bit not what you'd expect, though a perfectly viable solution in itself.
Each sensor should implement a standard interface so that routines that handle lists of sensors can treat them in a standard manner. Include an ID field in the interface that is unique to each type of sensor so you can handle special cases.
Look at the Reflection API to learn how to scan a directory of .NET Assemblies and look inside them.
Each assembly should have a factory class whose job is to return the list of sensors in that assembly. I recommend that you make it a subroutine rather than a function and pass it a list that it appends to: SensorDLL1 appends 4 sensors to the empty list, SensorDLL2 appends 8 sensors to the list which now has 12 sensors, and so on. This approach is the most flexible in the long run.
You will either have to make up a naming convention to find the factory class or use an attribute. Note I don't recommend just scanning the assembly for everything that implements your sensor interface as you could have code inside the factory that controls which sensors are available. This is useful for licensing.
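A sketch of that factory arrangement; the ISensor members, the attribute used to find the factory, and the sample sensor are all assumptions:

using System;
using System.Collections.Generic;

// The standard interface every sensor implements (members are illustrative).
public interface ISensor
{
    string Id { get; }          // unique per sensor type, for special cases
    double ReadValue();
}

// Marker the host looks for to locate the factory class in each assembly,
// instead of relying on a naming convention.
[AttributeUsage(AttributeTargets.Class)]
public sealed class SensorFactoryAttribute : Attribute { }

// Inside SensorDLL1: the factory appends its sensors to the list the host passes in.
[SensorFactory]
public class Dll1SensorFactory
{
    public void AddSensors(IList<ISensor> sensors)
    {
        // Licensing or configuration checks could go here to control which
        // sensors are actually exposed, as the answer suggests.
        sensors.Add(new TemperatureSensor());
    }
}

internal sealed class TemperatureSensor : ISensor
{
    public string Id { get { return "TEMP-1"; } }
    public double ReadValue() { return 21.5; }   // dummy reading
}

The host scans each assembly for the class carrying [SensorFactory], instantiates it, and calls AddSensors with the shared list.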
Depending upon the Sensors themselves, this sounds like you would need to define a single interface which all Sensors will implement. Your main "application skeleton" will then work against the ISensor interface and need not concern itself with the concrete implementations of each of the Sensor classes/objects/components.
Whether each Sensor is simply a class within the same project, or a separate assembly is up to you, although if they are separate assemblies, you'd need a way to load these assemblies dynamically.
Some references which may help here are:
Command Design Pattern:
- http://en.wikipedia.org/wiki/Command_pattern
Observer Design Pattern:
- http://en.wikipedia.org/wiki/Observer_pattern
Dynamically loading assemblies:
- http://www.divil.co.uk/net/articles/plugins/plugins.asp
Hope this helps.
We once made a plug-in system in a school project of ours in 2006, Socio. You can find the code here and here.
The basic lesson learned was that it is very simple to dynamically load code in C#. If you just have a plugin DLL and an application which adheres to an interface of yours and links against a common DLL in which that interface exists, it just works™.
In essence, it is what plinth described in his answer.
Take a look at:
Composite UI Application Block
and Smart Client Software Factory
It's a very old post, but I still thought it would be useful to point someone to appPress.in, where we have developed a framework with plugin functionality. Here we allow a plugin to modify the UI of the core application horizontally and vertically, add its own pages, and hook into events like Init, OnClick and OnChange.
