I currently have a C# project that uses plugins and takes a fairly common approach to plugin handling: an IPlugin interface is stored in a dll that is linked in the traditional dynamic way. The host app looks for class libraries exporting classes that expose this interface and loads them via reflection at run time.
The dll containing the interface also contains helper classes, for updating plugins, providing abstract base classes and so on.
My question is: what does it take to break the interface between my host and plugin assemblies? In other words, if I compile and distribute the host app and then distribute plugins that were linked against a later version of the plugin dll (in which the helper classes have changed, but IPlugin is defined in exactly the same way), will the host still pick up the plugins? How much of a change can I make to the plugin library before IPlugin is considered a different "type" by the reflection methods I am using?
If the assembly isn't loaded by a specific version, then the only breaking changes you will really encounter are changes to the interface contract itself. If you are only changing helper classes, it shouldn't be a problem.
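To make the question concrete, here is a minimal sketch of the reflection-based discovery the question describes. The `HelloPlugin` and `PluginLoader` names are hypothetical; in the real setup IPlugin lives in the shared dll and the plugin types come from scanned class libraries rather than the current assembly:

```csharp
using System;
using System.Linq;
using System.Reflection;

// The shared contract (in the real setup this lives in the plugin dll).
public interface IPlugin
{
    string Name { get; }
}

// A sample plugin; in the real setup this comes from a scanned class library.
public class HelloPlugin : IPlugin
{
    public string Name => "Hello";
}

public static class PluginLoader
{
    // Find every concrete type in an assembly that implements IPlugin
    // and instantiate it. Type identity is what matters here: the
    // IsAssignableFrom check succeeds only if the plugin was compiled
    // against the very same IPlugin type (same assembly identity).
    public static IPlugin[] Load(Assembly assembly) =>
        assembly.GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t)
                            && !t.IsAbstract && !t.IsInterface)
                .Select(t => (IPlugin)Activator.CreateInstance(t))
                .ToArray();
}
```

As long as IPlugin's identity (assembly name, type name) is unchanged, this check keeps succeeding even when unrelated helper classes in the same dll change.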
For reasons I can't explain here, I need to create a single dll file that can be used in .NET Framework applications. In that library, I use a 3rd-party library that cannot be (directly) used/imported by the end-user application. I use Fody/Costura to bundle the library into one dll file.
In my own library, I have a class that inherits from a class in the 3rd party library. I can use that class without problems in the application, but I get compile errors when I use properties of the base class. I can avoid these compile errors by creating a "proxy" in the child class.
Is there any way to make this work, without these proxy properties?
When implementing the facade pattern in Java, I can easily hide the subsystem of the facade by using the package-private modifier. As a result, there is only a small interface accessible from outside the facade/package, other classes of the sub-system are not visible.
As you already know, there is no package-private modifier in C#, but a similar one called internal. According to the docs, classes defined as internal are only accessible within the same assembly.
From what I understand, I have to create at least two assemblies (in practice, two .exe/.dll files) in order to hide the subsystem of the facade physically. By physically I mean that the classes a) cannot be instantiated from outside and b) are not shown by IntelliSense outside the facade.
Do I really have to split my small project into one .exe and one .dll (for the facade) so that the internal keyword has an effect? My facade's subsystem only consists of 2 classes, so a dedicated .dll seems like overkill.
If yes, what is the best-practice way in Visual Studio to move my facade into its own assembly?
Don't get me wrong, I have no real need to split up my program into several assemblies. I just want to hide some classes behind my facade from IntelliSense and prevent instantiation from outside. But if I'm not wrong, there is no easier way than that.
Using a separate project is the general preferred approach. In fact, you often have interfaces or facades in a third assembly that both the implementation and UI assemblies reference.
That said, you can accomplish this in a single assembly using a nested private subclass.
public interface IMyService {}

public static class MyServiceBuilder
{
    public static IMyService GetMyService()
    {
        // Most likely your real implementation has the service stored somewhere
        return new MyService();
    }

    private sealed class MyService : IMyService
    {
        // ...
    }
}
The outer class effectively becomes your 'package' for privacy scoping purposes. You probably wouldn't want to do this for large 'packages'; in those cases, it's cleaner to move the code to a separate assembly and use internal.
Note that if your primary objection to multiple assemblies is deployment, you can actually merge multiple assemblies to produce a simpler executable or library deployment. This way you retain the isolation benefits of multiple projects/assemblies without the headache of multiple files that could potentially be distributed or versioned independently.
I want to implement a plug-in system in my .net application, without the use of MEF.
My application loads and creates instances of types that are contained in the DLLs.
There is an interface (IPluginContract) that the main application assembly uses to load dll types, and this same interface is implemented by the dll projects (the plug-ins).
So different projects need access to the same interface.
I can satisfy this requirement by moving the interface into a separate Class Library that both the main app and the plug-ins will reference.
Is it a correct way to work around the described problem?
Yes, pushing your interfaces out into a shared library is a preferred solution. You then only need to distribute this library to plugin developers, which could be considered lightweight, but the plugins will be coupled to an exact version of the interface.
Another solution is a convention-based one, where plugin writers have types that "conform" to an interface, e.g. have appropriate methods on a class which they can point to via a config file. You can then use reflection, IL generation, etc., to wire this up to a concrete internal interface/proxy. The benefit here is that plugins are not hard-wired to a specific interface version, so there is more flexibility in versioning.
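A minimal sketch of the convention-based wiring, using plain reflection (the `ReflectionPluginProxy` and `ShoutPlugin` names are hypothetical; the plugin type name would normally come from a config file):

```csharp
using System;
using System.Reflection;

// Internal contract the host programs against.
public interface IPluginProxy
{
    string Run(string input);
}

// Adapter that wires a convention-conforming type (any class with a
// public Run(string) method) to the host's internal interface.
public sealed class ReflectionPluginProxy : IPluginProxy
{
    private readonly object _instance;
    private readonly MethodInfo _run;

    public ReflectionPluginProxy(Type pluginType)
    {
        _instance = Activator.CreateInstance(pluginType);
        _run = pluginType.GetMethod("Run", new[] { typeof(string) })
               ?? throw new MissingMethodException(pluginType.Name, "Run");
    }

    public string Run(string input) =>
        (string)_run.Invoke(_instance, new object[] { input });
}

// A plugin that merely conforms to the convention; it references no
// host assembly and implements no shared interface.
public class ShoutPlugin
{
    public string Run(string input) => input.ToUpperInvariant();
}
```

Because the plugin never references a host assembly, recompiling the host's interfaces cannot break it; only the convention itself is the contract.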
You could also consider versioning by maintaining all versions of your interface e.g. IPlugin_1, IPlugin_2, etc. It's then up to plugin writers to implement whichever version, and for you to be able to handle each version.
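A sketch of what handling several interface versions side by side might look like (the `IPlugin_1`/`IPlugin_2` shapes and the `PluginInvoker` helper are illustrative assumptions):

```csharp
using System;

// Each interface version is kept forever; plugin authors pick one.
public interface IPlugin_1
{
    string Execute();
}

public interface IPlugin_2
{
    string Execute(string argument);   // v2 added a parameter
}

public class OldPlugin : IPlugin_1 { public string Execute() => "old"; }
public class NewPlugin : IPlugin_2 { public string Execute(string a) => "new:" + a; }

public static class PluginInvoker
{
    // The host handles every version it has ever shipped, adapting
    // older plugins to the newest calling convention.
    public static string Invoke(object plugin, string argument)
    {
        switch (plugin)
        {
            case IPlugin_2 v2: return v2.Execute(argument);
            case IPlugin_1 v1: return v1.Execute();   // v1 ignores the argument
            default: throw new ArgumentException("Not a recognized plugin version");
        }
    }
}
```

The cost is that the host accumulates adapter code for every version it has ever supported, so this works best when the interface changes rarely.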
We have successfully taken two different approaches to this issue depending on the circumstances at the time (time to market, implementation difficulty, internals exposure concerns, etc):
1) Move the interface into its own DLL. This works well if the plugins don't need any other support objects/functions/data embedded in your main application DLL or if you don't want to expose public members in your main DLL to plugin writers.
2) Leave the interface in the main DLL. We have primarily used this when the refactoring cost to move the interface and associated classes was too high or when the plugins were completely self-contained (i.e. we author them for customers).
This strikes me as likely to have an agreed best-practice answer, but I can't seem to find one anywhere.
I have an application that will load and use classes that implement a specific interface. The implementation classes will be written by someone else and I want to send them the minimum they need to successfully implement the interface.
So far the best I've come up with is the following:
The application solution that contains:
A project that contains just the interface definition and compiles to a dll.
A project for the application that uses the interface and references the dll.
A separate solution for an example implementation that builds to a dll and references the interface dll.
Is this the best way to do this? i.e. distribute a compiled version of the interface to anyone that needs to implement the interface.
I tried using just a copy of the interface source files in the example implementation and my application failed to recognise the class as implementing the interface. Is this to be expected or is my class loading code bugged (it does work when the example references the pre-compiled dll)?
You should put your interface in an assembly and distribute that assembly (or the whole project, if needed), so that anyone who wants to implement the interface just references your assembly and gets access to the very same interface. That is not the case if you just send the interface (.cs) file: the interface then gets compiled into a different assembly, and a .NET type's identity includes its assembly, so it will certainly end up with another namespace or assembly name. That is why your implementation class was not recognised as implementing your interface: it was simply not the same interface, even though the methods and properties were identical. ;)
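A small illustration of the identity point. In the question the two types differed by assembly rather than namespace, but the effect is the same: two interfaces with an identical shape are still distinct types, and reflection checks against one will not match implementations of the other (the `VendorA`/`VendorB` names are made up for the example):

```csharp
using System;

namespace VendorA { public interface IPlugin { void Run(); } }
namespace VendorB { public interface IPlugin { void Run(); } }

// Implements VendorA's interface only, even though VendorB's
// interface declares the exact same members.
public class Plugin : VendorA.IPlugin
{
    public void Run() { }
}
```

`typeof(VendorA.IPlugin).IsAssignableFrom(typeof(Plugin))` is true, while the same check against `VendorB.IPlugin` is false; structural equality of the members is irrelevant to the CLR.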
I think your first approach is best if you don't want people to change your code and just want them to use the interface; otherwise, share the whole project containing the interface.
That's the approach I've used, and seen in other projects - give the shared assembly a generic name such as MyApp.Interfaces, in case you end up with multiple shared interfaces.
An alternative approach is to use the Managed Extensibility Framework: http://msdn.microsoft.com/en-us/library/dd460648.aspx - but that may be overkill for a small project.
We have a couple mini-applications (single Web form look-up stuff) that need to run in the context of a much larger site and application (hereafter "the Monolith")
We don't want to install the Monolith on every developer's machine, so we want some developers to be able to develop these little apps in their own isolated sandbox project (hereafter "Sandbox"). The idea is that we would move (1) the resulting DLL, and (2) the Web form (aspx) file from the Sandbox to the Monolithic Web App, where it would run.
This is all well and good, except that a couple of these little apps need to use a control that exists in the Monolith. And this control won't run without all the infrastructure of the Monolith behind it.
So, we had the great idea of creating a mock control. We stubbed out a control with the same namespace, class name, and properties as the control in the Monolith. We compiled this into a DLL, and put in in the Sandbox. We can develop against it, and it just spits out Lorem Ipsum-type data, which is cool.
The only reference to the control is this:
<Namespace:MyControl runat="server"/>
In the Sandbox, this invokes the mock object. In the Monolith, this invokes the actual control. We figured that since the only connection to the control is the tag above, it should work on both sides. At runtime, it would just invoke different things, depending on where it was running.
So we moved the aspx file and the app's DLL (not the mock object DLL) into the Monolith...and it didn't run.
It seems we didn't count on the "mypage.aspx.designer.cs" file coming out of Visual Studio. This gets compiled into the DLL, and it has a reference all the way back to our mock object DLL. So, when it runs in the Monolith, it complains that it can't load the mock object DLL.
So, our goal is to have the control tag as above, and have that invoke different things depending on the environment. In both cases, it would execute the same namespace and class, but that namespace and class would be different things between the two environments. In the Monolith it would be our actual control. In the Sandbox, it would be our mock object.
Essentially, we want this tag evaluated at runtime, not compile time. We want the DLL free of any reference back to the mock object's DLL.
Possible? Other solutions for the core problem?
The Solution
In the end, it was quite simple.
I work on my app with a control that is compiled against my mock object DLL. When I'm ready to compile for the last time, I delete the reference to the mock object DLL, and add a reference to the Monolith DLL. Then I compile and use that DLL.
The resulting DLL has no idea it wasn't developed against the Monolith DLL all along.
This looks like a job for loose coupling! One of the most helpful, but least followed principles of software design is Dependency Inversion; classes should not depend directly on other concrete classes, but on abstractions such as an abstract base class or interface.
You had a great instinct with the mock object, but you implemented it incorrectly, if I understand right. You basically replaced a dependency on one concrete class (the Monolith control) with another (the mocked control). The mock could have the same name, even the same namespace, as the Monolith control, but it resides in a different assembly. Because your code was compiled against the mock's assembly, it looks for that assembly at run time and does not find it in the production environment.
I would start by creating an interface for your control, which defines the functionality available to consumers of the control (methods, properties, etc.). Have both the Monolith's control and the mocked control implement this interface, and declare usages of either control as being of the interface type instead of the Monolith or mock type. Place this interface in a relatively lightweight DLL, separate from the mocked object, the Sandbox and the existing Monolith DLLs, that you can place on developers' machines alongside the mocked object's DLL, and that is also present in the Monolith codebase. Now, classes that need your control only really need the interface; you no longer need direct references to the Monolith or to any mock DLL.
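A minimal sketch of that shape, with hypothetical names (`IDataControl`, `MonolithControl`, `MockDataControl`, `LookupPage`); only the interface would live in the shared lightweight DLL:

```csharp
// The shared abstraction, in its own lightweight DLL.
public interface IDataControl
{
    string GetData();
}

// In the Monolith assembly:
public class MonolithControl : IDataControl
{
    public string GetData() => "real data"; // would use the Monolith infrastructure
}

// In the mock assembly on developer machines:
public class MockDataControl : IDataControl
{
    public string GetData() => "Lorem ipsum";
}

// Dependent code declares only the interface type, so its compiled
// DLL references the interface DLL, never the mock or the Monolith.
public class LookupPage
{
    private readonly IDataControl _control;
    public LookupPage(IDataControl control) => _control = control;
    public string Show() => _control.GetData();
}
```

The dependent DLL now compiles against only the interface assembly, which is identical in both environments, so it can move from Sandbox to Monolith unchanged.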
Now, when instantiating objects that are dependent on this control, you need some way of giving your new dependent object a concrete class that implements the interface, either the mock or the Monolith, instead of the dependent class creating a new one. This is called Dependency Injection, and is the natural extension of Dependency Inversion; specifying an interface is great, but if your dependent class has to know how to create a new instance of an object implementing that interface, you haven't gained anything. The logic for creating a concrete class must lie outside your dependent class.
So, define a third class that knows how to hook the control into classes that depend on it. The go-to method is to bring in an IoC framework, which acts as a big Factory for any object that either is a dependency, or has a dependency. You register one of the two controls as the implementation of the interface for a particular environment (the mock for dev boxes, the Monolith's control in production); the registration information is specific to each environment (usually located in the app.config). Then, instead of newing up an instance of the control class, you ask the container to give you an instance of a class that implements the interface, and out pops a new mock or Monolith control. Or, put the control and all dependent classes in the IoC container, and ask for the dependent classes, which the container will return fully "hydrated" with a reference to their control. All of this happens without any of the classes that depend on the control having to know where it came from, or even exactly what it is.
IoC frameworks can be a pain to integrate into an existing design, though. A workalike would be to create a Factory class that uses reflection to dynamically instantiate either of the two controls, and place an AppSetting in the app.config file that tells the factory which assembly and type to use for this environment. Then, wherever you'd normally new up a control, call the Factory instead.
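A sketch of that factory, under the assumption that both controls implement a shared interface (here called `IMyControl`; `MockControl` and `ControlFactory` are illustrative names). For testability the type name is passed in directly; in the real app it would come from `ConfigurationManager.AppSettings`:

```csharp
using System;

// The abstraction both the mock and the Monolith control implement.
public interface IMyControl
{
    string Render();
}

public class MockControl : IMyControl
{
    public string Render() => "Lorem ipsum";
}

public static class ControlFactory
{
    // In the real app the type name would come from app.config, e.g.
    //   <add key="ControlType" value="Sandbox.MockControl, Sandbox.Mocks"/>
    // read via ConfigurationManager.AppSettings["ControlType"].
    public static IMyControl Create(string assemblyQualifiedTypeName)
    {
        Type type = Type.GetType(assemblyQualifiedTypeName, throwOnError: true);
        return (IMyControl)Activator.CreateInstance(type);
    }
}
```

Swapping environments then means editing one config value; no dependent code mentions either concrete control.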
I don't have a super quick fix for you (maybe someone else will), but you might be interested in an IoC container like Unity.