Is it bad practice to have two separate MEF plugin containers? - c#

Let's say I have a class (not a static class), A, that in some way uses plugins. I use MEF to manage those plugins, and add methods for my users to add parts catalogs. Example usage:
var myA = new A();
myA.LoadPlugins(new DirectoryCatalog("path/to/plugins"));
myA.DoStuffWithPlugins();
In the same namespace as A is class B. B also uses MEF to manage plugins, and has its own CompositionContainer. If a user wants to interact with B's plugins, she must use B's plugin management methods.
B is used just like A is, above.
My question is, is this bad? Should I care that there are two separate places to load plugins in my namespace? If it is bad, what are the alternatives?

Not necessarily. There is no reason you can't have two completely separate compositions within the same application.
That being said, there's also no real reason, in most cases, to have more than a single composition. MEF will compose both sets of data at once. In your case, you could compose your importers and your reports with the same composition container, which would have the advantage of allowing somebody who is extending your system to only create a single assembly which extends both portions of your application.
One potential minor red flag here is that these are two separate types within the same namespace, each with its own plugin system. Typically, a framework with a full plugin system is complex enough that I'd question whether the two belong in the same namespace - though with type names like "A" and "B" it's impossible to know whether this is truly inappropriate.

I don't see any problem with that. I would recommend a base class for class A and class B so they can reuse the plugin-management methods.
class A : BaseCompositionClass {
// implementations
}
class B : BaseCompositionClass {
// implementations
}
You could use a single CatalogExportProvider and then query that provider for the matching imports and exports. You could then use a single composition factory from which class A and class B request their compositions.
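As a rough sketch of that base-class idea, using a simple in-memory registry as a stand-in for MEF's CompositionContainer (the names and the registry here are illustrative, not MEF's actual API):

```csharp
using System.Collections.Generic;

// Simplified stand-in for a shared MEF composition: one registry that both
// A and B load into, so an extender only deals with a single plugin surface.
public abstract class BaseCompositionClass
{
    private static readonly List<object> Plugins = new List<object>();

    public static int PluginCount => Plugins.Count;

    public void LoadPlugins(IEnumerable<object> plugins)
    {
        Plugins.AddRange(plugins);
    }
}

public class A : BaseCompositionClass { /* A-specific behavior */ }

public class B : BaseCompositionClass { /* B-specific behavior */ }
```

Because the registry is shared, plugins loaded through A are visible to B as well - the single-composition analogue of giving both classes one CompositionContainer.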

Using classes from other modules

I am working in a project that has several modules made by different teams.
I must use the repositories and the code-first entity classes from other modules (referencing the dll), but I can't access the code and I can't modify it.
I want to protect myself from changes in the structure of the external code, and I want to add functionality to those classes.
What is the best approach?
I am thinking about making something like a service layer: get the external data, add some functionality, and map it to my own classes to avoid extra dependence on the external assemblies in my code.
If some day the external classes change, I only need to modify this service layer.
What do you think? Other ways for doing it? I can only change my module.
Thanks a lot!
The teams must work together!
It is a good idea to work against interfaces instead of concrete classes. Classes should implement different interfaces representing their different facets. Maybe the classes themselves can be split into smaller ones having only one responsibility.
See: Interface segregation principle.
See: Single responsibility principle.
If you are using only a portion of an object, there is no point in making you dependent on the whole object. If you work against an interface representing the very aspects of the class you are working with, it is less likely that changes on other parts will affect you. But the teams must sit together and define those interfaces.
I cannot come up with a sophisticated method that would better fit the situation, but what you need is some kind of abstraction. You could create a wrapper object, which could be as simple as the following:
public class MyType
{
// Your own implementation
// Properties
// And methods
public static MyType Create(TheirEntity entity)
{
// Create an instance of your type from their object
}
// Or provide a conversion operator, it could be implicit if you'd like
public static explicit operator MyType(TheirEntity entity)
{
// Implement a conversion here
}
}
If you still want to use the repositories from the external libraries, why don't you inherit from the classes you want to extend? If you don't need to add properties or fields, I'd use extension methods. Those allow you to use your project-specific functionality on external classes.
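A minimal sketch of the extension-method approach, with TheirEntity as a hypothetical stand-in for one of the external classes:

```csharp
// Hypothetical entity from the external module; we cannot modify its code.
public class TheirEntity
{
    public string Name { get; set; } = "";
}

// Extension methods add behavior without touching or inheriting the external class.
public static class TheirEntityExtensions
{
    public static string DisplayName(this TheirEntity entity) => "Entity: " + entity.Name;
}
```

The call site reads like a normal instance method: new TheirEntity { Name = "Orders" }.DisplayName().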

How to implement the facade pattern in C# AND physically hide the subsystem

When implementing the facade pattern in Java, I can easily hide the subsystem of the facade by using the package-private modifier. As a result, there is only a small interface accessible from outside the facade/package, other classes of the sub-system are not visible.
As you already know, there is no package-private modifier in C#, but a similar one called internal. According to the docs, classes defined as internal are only accessible within the same assembly.
From what I understand, I have to create at least two assemblies (in practice, two .exe/.dll files) in order to hide the subsystem of the facade physically. By physically I mean that the classes a) cannot be instantiated from outside and b) are not shown by IntelliSense outside the facade.
Do I really have to split my small project into one .exe and one .dll (for the facade) so that the internal keyword has an effect? My facade's subsystem only consists of 2 classes, an own .dll seems to be overkill.
If yes, what is the best practice way in Visual Studio to outsource my facade to its own assembly?
Don't get me wrong, I have no real need to split up my program into several assemblies. I just want to hide some classes behind my facade from IntelliSense and prevent instantiation from outside. But if I'm not wrong, there is no easier way than that.
Using a separate project is the general preferred approach. In fact, you often have interfaces or facades in a third assembly that both the implementation and UI assemblies reference.
That said, you can accomplish this in a single assembly using a private nested class.
public interface IMyService {}
public static class MyServiceBuilder
{
public static IMyService GetMyService()
{
//Most likely your real implementation has the service stored somewhere
return new MyService();
}
private sealed class MyService : IMyService
{
//...
}
}
The outer class effectively becomes your 'package' for privacy scoping purposes. You probably wouldn't want to do this for large 'packages'; in those cases, it's cleaner to move the code to a separate assembly and use internal.
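A slightly fleshed-out version of that sketch, with a member added so the hiding is visible in use (the member is illustrative):

```csharp
public interface IMyService
{
    string Describe();
}

public static class MyServiceBuilder
{
    public static IMyService GetMyService() => new MyService();

    // Private nested class: cannot be instantiated outside MyServiceBuilder
    // and never shows up in IntelliSense there either.
    private sealed class MyService : IMyService
    {
        public string Describe() => "real implementation";
    }
}
```

IMyService svc = MyServiceBuilder.GetMyService(); compiles, while new MyServiceBuilder.MyService() is a compile-time error because the nested type is private.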
Note that if your primary objection to multiple assemblies is deployment, you can actually merge multiple assemblies for the purpose of creating a simpler executable or library deployment. This way you can retain the insulation benefits of multiple projects/assemblies without the headache of multiple files that can potentially be distributed or versioned independently.

In what layer of a project do common extension methods belong?

I'm curious what "layer" of a system common extensions that are global to the application belong in. For instance, I may have extensions that let me use the Rails-like "DaysAgo", "MonthsAgo", etc. type of extension method on integers. What layer of a project does this typically belong in? I was thinking Infrastructure, but that seems to mean database-related (e.g. base repositories and data access). I have a "Core" library project, so maybe it belongs there?
I understand that you want to group extensions that are related to a specific group of classes, but these are essentially used across the entire application. In the days before extension methods, they would be in a Utilities static class or the like, so where should they live now?
You can (better) create multiple infrastructure projects based on scope, for example:
Infrastructure.Common (general infrastructure - the best fit for extension methods)
Infrastructure.Data (data access)
...and so on for the other scopes you need.
I would put these methods at the lowest possible layer where those objects/entities exist - in the interfaces, the entities, or a core assembly. The lower, the better, so all upper layers can use them. :)
I would put them in a base project (something like 'Core') that is referenced from all other projects. If you start a new project in some time you can reuse these extension methods easily.
I would think about the namespaces you will use for your different sets of extension methods. If some really belong to Data, just put them in a Core.Data namespace so your code in other projects won't be cluttered with extension methods that have no meaning in that context.
Consider putting logic that is outside the scope of the entire application in its own assembly or assemblies.
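As a minimal sketch, assuming a Core project and illustrative names, such an extension class might look like this:

```csharp
using System;

namespace Core
{
    // Application-wide date helpers; any layer that references Core picks these up.
    public static class IntegerExtensions
    {
        public static DateTime DaysAgo(this int days) => DateTime.UtcNow.AddDays(-days);

        public static DateTime MonthsAgo(this int months) => DateTime.UtcNow.AddMonths(-months);
    }
}
```

After adding using Core;, any project can write 30.DaysAgo() without pulling in anything data- or UI-specific.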

Partial classes in separate dlls

Is it possible to have two parts (same namespace, same class name) to a partial class in separate DLLs?
From MSDN - Partial Classes and Methods:
All partial-type definitions meant to be parts of the same type must be defined in the same assembly and the same module (.exe or .dll file). Partial definitions cannot span multiple modules.
No. Partial classes are purely a language feature. When an assembly is compiled, the partial definitions are combined to create the type. It isn't possible to spread the parts across different assemblies.
Depending on what you want to do, though, you might be able to use extension methods to accomplish what you need.
No, it is not possible. When the assembly is compiled, the class needs to be complete.
While the other answers do provide the unpleasant "No" that anyone landing on this page didn't want to see, I was struck by another thought that hasn't been mentioned here yet. If partial classes were allowed across assemblies, you would get access to private members of existing types you didn't write, letting you manipulate them in ways the original author never intended and jeopardizing the functionality of all inheriting classes too.
Not only that: those classes in other assemblies (and their children) would need to be recompiled for it to work. Thus it is logically not possible to allow splitting a class over different assemblies.
Note: read @Zar Shardan's comment below. That is another very important issue, even more important than private member access.
You can use extension methods when you want to add a method to a class in a different dll.
One drawback of this approach is that you can't add static methods.
The question is why would you want to make a partial class in another assembly? You can define abstract classes and interfaces across assemblies, maybe you need to look into that.
You probably just want to create a wrapper class, within your own library, around the class in the 3rd-party library, and then add whatever functionality you need to the wrapper class.
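A minimal sketch of such a wrapper, with ThirdPartyWidget as a hypothetical stand-in for the external class:

```csharp
// Hypothetical sealed class from a 3rd-party library; we cannot inherit from it
// or declare a partial counterpart in our assembly.
public sealed class ThirdPartyWidget
{
    public int Value { get; set; }
}

// Wrapper in our own library: adds behavior without partial classes or inheritance.
public class WidgetWrapper
{
    private readonly ThirdPartyWidget _inner;

    public WidgetWrapper(ThirdPartyWidget inner)
    {
        _inner = inner;
    }

    public int Value => _inner.Value;

    // New functionality lives on the wrapper.
    public int DoubledValue() => _inner.Value * 2;
}
```
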

What are some advantages to using an interface in C#?

I was forced into a software project at work a few years ago, and was forced to learn C# quickly. My programming background is weak (Classic ASP).
I've learned quite a bit over the years, but due to the forced nature of how I learned C#, there are a lot of basic concepts I am unclear on.
Specifically, an interface. I understand the basics, but when writing an app, I'm having a hard time figuring out a practical use of one. Why would one want to write an interface for their application?
Thanks
Kevin
An interface says how something should work. Think of it as a contract or a template. It is key to things such as Inversion of Control and Dependency Injection.
I use StructureMap as my IoC container. This allows me to define an interface for all of my classes. Where you might say
Widget w = new Widget();
I would say
IWidget w = ObjectFactory.GetInstance<IWidget>();
This is very powerful in that my code isn't necessarily saying what a Widget truly is. It just knows what a Widget can do based on the IWidget interface.
This has some great power to it in that, now that I am using an IoC container, I can do a couple more nifty things. In my unit tests where I need to use a Widget, I can create a mock for Widget. So say that my Widget does something very powerful by way of connecting to a database or a web service; my mock can simulate connecting to these resources and return stubbed data to me. This makes my tests run faster and behave more reliably. Because I am using StructureMap, I can tell it to load the real implementation of my Widget during production use of my code and the mocked version of the Widget during testing, either programmatically or by configuration.
Also, because I am using an IoC container, I can provide cool new features to my application, such as three different ways to cache data. I can have a local developer box cache using a tool such as Lucene.NET. I can have a development server use the .NET cache, which runs great on one box. And then I can have a third option for my production servers: a cache layer such as MemCache Win32 or Velocity. As long as all three caching implementations conform to the same interface, their actual implementation doesn't concern me (or my code) at all. I simply ask StructureMap for the current environment's implementation and then go to work.
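That caching setup can be sketched as follows (the interface and class names are illustrative, and the IoC wiring is omitted):

```csharp
using System.Collections.Generic;

// One contract; the container decides which implementation to hand out.
public interface ICache
{
    void Set(string key, string value);
    string? Get(string key);
}

// Local in-memory cache, e.g. for a developer box.
public class InMemoryCache : ICache
{
    private readonly Dictionary<string, string> _store = new Dictionary<string, string>();

    public void Set(string key, string value) => _store[key] = value;

    public string? Get(string key) => _store.TryGetValue(key, out var value) ? value : null;
}

// No-op cache, handy as a default or in tests.
public class NullCache : ICache
{
    public void Set(string key, string value) { }

    public string? Get(string key) => null;
}
```

Consuming code depends only on ICache, so swapping in MemCache Win32 or Velocity for production becomes a wiring change, not a code change.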
If you follow Dependency Injection at all then interfaces come in handy here also with an IoC container such as StructureMap in that I can declare the usage of a class by way of an Interface in the constructor of my class.
public class Widget : IWidget
{
    public Widget(IWidgetRepository repository, IWidgetService service)
    {
        //do something here using my repository and service
    }
}
And then when I new up an instance of Widget by way of StructureMap such as this
IWidget widget = ObjectFactory.GetInstance<IWidget>();
Notice that I am not specifying the repository or service in the call. StructureMap knows, by way of the interfaces specified in the constructor, how to get the appropriate instances and pass them in. This makes for very powerful and clean code!
All from the simple definition of Interfaces and some clever usage of them!
One Simple Answer: Use interfaces to program against the contract rather than the implementation.
How could that possibly help? Starting to use interfaces will (hopefully) get you in the habit of coupling classes more loosely. When you code against your own concrete classes, it's easy to start poking the data structures without a strict separation of concerns. You end up with classes which "know" everything about the other classes and things can get pretty tangled. By limiting yourself to an interface, you only have the assurance that it fulfills the interface's contract. It injects a sometimes helpful friction against tight coupling.
The basic case is the "IWriter" case.
Suppose you are making a class that can write to the console, and it has all kinds of useful functions like write() and peek().
Then you would like to write a class that can write to the printer, so instead of reinventing a new class, you use the IWriter interface.
Now the cool thing about interfaces is that you can write all your writing code without knowing your writing target beforehand. Then, when the user decides (at runtime) whether he wants to write to the console or the printer, you just instantiate the object as a console/printer writer, and you don't need to change anything in your writing code, because both use the same front end (interface).
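A minimal sketch of that idea; the printer is simulated with a string buffer, so the names here are illustrative:

```csharp
using System;
using System.Text;

// The shared front end.
public interface IWriter
{
    void Write(string text);
}

public class ConsoleWriter : IWriter
{
    public void Write(string text) => Console.Write(text);
}

// Stand-in for a printer: spools output into a buffer.
public class PrinterWriter : IWriter
{
    public StringBuilder Spool { get; } = new StringBuilder();

    public void Write(string text) => Spool.Append(text);
}

// Writing code that never knows its concrete target.
public static class Greeter
{
    public static void WriteGreeting(IWriter writer) => writer.Write("hello");
}
```

At runtime you pick new ConsoleWriter() or new PrinterWriter(), and the writing code is untouched.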
An example: consider an MDI application that shows reports; there are basically two different report types, a chart and a grid. I need to save these reports as PDF, and I need to mail them to someone.
The event handler for the menu the user clicks to save a report to PDF could do this:
void ExportPDF_Clicked(...) {
if(currentDocument is ChartReport) {
ChartReport r = currentDocument as ChartReport;
r.SavePDF();
} else if(currentDocument is GridReport) {
GridReport r = currentDocument as GridReport;
r.SavePDF();
}
}
I'd rather make my ChartReport and GridReport implement this interface:
public interface Report {
void MailTo();
void SavePDF();
}
Now I can do:
void ExportPDF_Clicked(...) {
Report r = currentDocument as Report;
r.SavePDF();
}
Similar for other code that needs to do the same operation (save it to a file, zoom in, print, etc.) on the different report types.
The above code will still work fine when I add a PivotTableReport, also implementing Report, next week.
IoC and dependency injection have already been mentioned above, and I would urge you to look at them.
Largely, however, interfaces allow a contract to be specified for an object that doesn't require an inheritance model.
Let's say I have a class Foo that has functions x and y and property z, and I build my code around it.
If I discover a better way to do Foo, or another sort of Foo requires implementation, I can, of course, extend a base Foo class to FooA, FooB, MyFoo, etc. However, that would require that all Foos have the same core functionality or, indeed, that any future Foo creators have access to the base Foo class and understand its internal workings. In C#, that would mean future Foos could not inherit from anything but Foo, as C# does not support multiple inheritance.
It would also require me to be aware of possible future states of Foo, and try not to inhibit them in my base Foo class.
Using an interface IFoo simply states the 'contract' that a class requires to work in my Foo framework, and I don't care what any future Foo classes may inherit from or look like internally, as long as they have fn x fn y and z. It makes a framework much more flexible and open to future additions.
If, however, Foo requires a large amount of core at its base to work that would not be applicable in a contract scenario, that is when you would favour inheritance.
Here is a book that talks all about interfaces. It promotes the notion that interfaces belong to the client, that is to say the caller. It's a nice notion. If you only need the thing that you're calling to implement - say - count() and get(), then you can define such an interface and let classes implement those functions. Some classes will have many other functions, but you're only interested in those two - so you need to know less about the classes you're working with. As long as they satisfy the contract, you can use them.
An interface is a contract that guarantees to a client how a class or struct will behave.
http://www.codeguru.com/csharp/csharp/cs_syntax/interfaces/article.php/c7563
This might be the clearest, easiest way of explaining it that I have come across:
"The answer is that they provide a fairly type-safe means of building routines that accept objects when you don't know the specific type of object that will be passed ahead of time. The only thing you know about the objects that will be passed to your routine are that they have specific members that must be present for your routine to be able to work with that object.
The best example I can give of the need for interfaces is in a team environment. Interfaces help define how different components talk to each other. By using an interface, you eliminate the possibility that a developer will misinterpret what members they must add to a type or how they will call another type that defines an interface. Without an interface, errors creep into the system and don't show up until runtime, when they are hard to find. With interfaces, errors in defining a type are caught immediately at compile time, where the cost is much less."
A couple of things: when you implement an interface, you are forced to implement all the methods it defines. It is also a good way to get a form of multiple inheritance, which is not supported for regular classes.
http://msdn.microsoft.com/en-us/library/ms173156.aspx
Simple answer based on first principles:
A program is a universe with its own metaphysics (the reality/substance/stuff of the code) and epistemology (what you can know/believe/reason about the code). A good programming language tries to maximize the metaphysical flexibility (lets you make the stuff easily) while ensuring epistemic rigor (makes sure your universe is internally consistent).
So, think of implementation inheritance as a metaphysical building block (the stuff that makes up your little universe of code) and interface inheritance as an epistemic constraint (it allows you to believe something about your code).
You use interfaces when you only want to ensure that you can believe something. Most of the time that's all you need.
You mentioned having difficulty finding a practical use for interfaces. I've found that they come into their own when building extensible applications, for example a plugin-based app where a third-party plugin must conform to specific rules. These rules can be defined by an interface.
You could make it so that when the plugin is loaded, it must have an Init method that takes a class implementing the IServices interface.
public interface IServices
{
DataManager Data { get; set; }
LogManager Log { get; set; }
SomeOtherManager SomeOther { get; set; }
}
public class MrPlugin
{
public void Init(IServices services)
{
// Do stuff with services
}
}
So.. If you have a class that implements the IServices interface, and then you instantiate it once, you can pass it to all the plugins upon initialisation and they can use whatever services you have defined in the interface.
