I had a discussion with friends last week about consuming classes that live in a .NET DLL. I have a .NET DLL whose classes I need to consume in my EXE.
Normally, I:
Add the DLL to my solution and reference it from my EXE.
Create an object of the class (which is in my DLL).
Start calling the methods/functions of the class on the object just created.
But we finally decided that we should use reflection instead. The reason is loose coupling: someone can change the functionality in the DLL and recompile it, and in such situations you don't need to recompile the client code.
I have a question with this background.
Suppose I have a very simple application (say, a console application) with two classes, each written to do different work.
using System;

class Program
{
    static void Main()
    {
        // How do you create an object of class A here?
    }
}

class A
{
    // A member cannot share the name of its enclosing type, so the field is renamed.
    string text = "I am from A";

    public B b;

    public A()
    {
        b = new B();
    }
}

class B
{
    string text = "I am from B";

    public B()
    {
    }

    public void Print()
    {
        Console.WriteLine(text);
    }
}
How do you create an object of class A when all three classes are in the same EXE, and how do you create the same object when class A and class B are in different DLLs?
One answer for the second part of the question is to use an interface and reflection.
Is reflection really required, or is it just a kind of programming standard?
What is the best practice for creating an object of a class?
Interfaces provide a way to have loose coupling.
If you want to provide the ability to extend or replace the functionality after the fact without recompiling or even redeploying, then you're basically looking at a plug-in type architecture on top of the interface-based loose coupling.
You could use reflection to iterate over the types in an assembly and create an instance of the one you need, but the other option is configuration/registration. For example, in your config file (or the registry, etc.) you could point to the file and class that implements the interface, and use System.Activator to create it at runtime.
Example:
http://msdn.microsoft.com/en-us/library/ms972962.aspx
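For instance, here is a minimal sketch of the configuration-driven approach; the IPlugin interface, the appSettings keys, and the assembly/type names are all hypothetical stand-ins:

using System;

public interface IPlugin
{
    void Run();
}

// App.config:
//   <appSettings>
//     <add key="PluginAssembly" value="MyPlugins.dll" />
//     <add key="PluginType" value="MyPlugins.ConcretePlugin" />
//   </appSettings>
public static class PluginLoader
{
    public static IPlugin Load()
    {
        // Read the assembly file and type name from configuration.
        string assemblyFile = System.Configuration.ConfigurationManager.AppSettings["PluginAssembly"];
        string typeName = System.Configuration.ConfigurationManager.AppSettings["PluginType"];

        // Load the assembly and instantiate the configured type at runtime.
        var assembly = System.Reflection.Assembly.LoadFrom(assemblyFile);
        Type type = assembly.GetType(typeName);
        return (IPlugin)Activator.CreateInstance(type);
    }
}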
Another, more robust option is MEF. It's a plug-in framework developed by the .NET Framework team.
Check out this link:
http://mef.codeplex.com/
From that link (reinforcing your question):
"Application requirements change frequently and software is constantly evolving. As a result, such applications often become monolithic making it difficult to add new functionality. The Managed Extensibility Framework (MEF) is a new library in .NET Framework 4 and Silverlight 4 that addresses this problem by simplifying the design of extensible applications and components. "
Hope that helps.
I have to build three different editions of a DLL that contain API calls of our software. So far I have figured out the following way of doing it using inheritance. Can someone please confirm that I'm using inheritance the correct way (or suggest a proper/better way of doing it)? I am new to this, so I'm still learning C# project programming.
So far I have the main API_Calls class (whose calls are common to all DLL editions) as follows:
namespace APIcalls
{
    public partial class API_Calls
    {
        public void Common_function1()
        {
        }

        public void Common_function2()
        {
        }
    }
}
Then I have three .cs class files with something like the following in each of them (Edition_A, Edition_B, and Edition_C are the differing factors for each edition of the DLL); any additional calls are included in the partial API_Calls class as follows:
namespace dll_edition
{
    public class Edition_A
    {
        public Edition_A()
        {
            // Code here for checking whether the current DLL is authorized;
            // otherwise throw an exception
        }
    }
}

namespace APIcalls
{
    using dll_edition;

    public partial class API_Calls : Edition_A
    {
        public void Additional_Edition_A_function1()
        {
        }

        public void Additional_Edition_A_function2()
        {
        }
    }
}
In each assembly build I include the Edition_A, Edition_B, or Edition_C file, and then I build all three assemblies, which gives me three DLLs.
My question: Is this the proper way of doing it? Are there any downsides to how I have done it, or is there a better way of doing this? My ultimate goal is to have three editions of the DLL with some common API calls in them, plus various API calls specific to each DLL edition.
Thank you for any input that you may have!
-DD
From what I understand, you have a set of common functions in a common base class that is to be used by various other classes.
There are several ways of doing it, each with its own pros and cons:
1) Creating separate libraries for each edition, which is what you are doing. Only the intended functionality goes to the end user and the size of each DLL stays small. This is better suited to a plug-and-play model, where you just drop the DLL into the bin folder and the new functionality is in place.
This also keeps your changes localized, so you know where they are. But if you have distributed your DLLs to end clients and they need a method from another DLL, you have to republish your changes.
2) Doing it all in one DLL. Unwanted functionality is exposed to the client and the deployment package could be heavy, but all the functionality is readily available.
To summarize, the choice mainly depends on your business and deployment model.
Personally I am a bigger fan of doing it all in one DLL and using a Factory Pattern to determine which version gets run at runtime, but if it must be three DLLs based on your requirements, here is what I recommend.
Create 4 DLLs.
The first project will just contain the edition interface (that is, the structure of the DLL, but no content on how it will work). This interface can be implemented by the classes for the different editions of the DLL. Structuring it this way sets up the calling code so that it can use dependency injection to pick an edition of the DLL.
The other three DLLs will be the different editions that you are required to build.
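A minimal sketch of that layout, with illustrative interface and member names:

// DLL 1: the contract assembly, referenced by the three editions and by callers.
public interface IApiEdition
{
    void Common_function1();
    void Common_function2();
}

// DLL 2 (and likewise DLLs 3 and 4): one edition per assembly.
public class Edition_A_Api : IApiEdition
{
    public void Common_function1() { /* Edition A behavior */ }
    public void Common_function2() { /* Edition A behavior */ }
}

// Calling code depends only on the interface, so any edition
// can be injected without recompiling the caller.
public class ApiClient
{
    private readonly IApiEdition api;

    public ApiClient(IApiEdition api)
    {
        this.api = api;
    }

    public void DoWork()
    {
        api.Common_function1();
    }
}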
I am linking one external resource at runtime in my code, using something like the following:
System.Reflection.Assembly assembly = System.Reflection.Assembly.LoadFrom("MyNice.dll");
Type type = assembly.GetType("MyType");
Tool tool = Activator.CreateInstance(type) as Tool;
As you can see, at the end of the object creation it has to cast the resulting object to the Tool class, because there are a lot of references to the methods and properties of the Tool class in my code; if the type is not there, the code will fail at compile time.
This is a bad situation, because I wanted to remove the DLL from my references and have it loaded dynamically at runtime, but at the same time there are pieces of my code that refer to, and depend on, the Tool assembly. How can I make my code independent of it? Do I have to use reflection all over my code, or is there an easier alternative out there?
For example:

if (Tool.ApplicationIsOpen)
    return StatusResult.Success;

is in the same class; it assumes that the Tool class already exists and will break if I remove the DLL from my References folder.
Any suggestions?
I would suggest making a shared DLL, referenced from both projects, that contains an interface which Tool implements.
In this shared project, make an interface such as ITool that exposes the functionality the consumer project needs.
Shared Project
public interface ITool
{
    void Something();
}
Separate Project
public class Tool : ITool
{
    public void Something()
    {
        // do something
    }
}
Consumer Project
// Requires "using System.Linq;" for FirstOrDefault.
System.Reflection.Assembly assembly = System.Reflection.Assembly.LoadFrom("MyNice.dll");
Type type = assembly.GetTypes().FirstOrDefault(t => t.IsClass && typeof(ITool).IsAssignableFrom(t));
ITool tool = Activator.CreateInstance(type) as ITool;
Now you can delete the reference to the project containing Tool, but you still need the reference to the shared project that contains ITool. If you truly don't want any references, then explore the reflection route, but be warned it'll probably be messy.
This strategy is the basis for many plugin systems. I'd recommend you check out some Dependency Injection (DI for short) libraries that can do a lot of this heavy lifting for you.
Here is a list of DI libraries: http://www.hanselman.com/blog/ListOfNETDependencyInjectionContainersIOC.aspx
Personally I've been using Ninject lately.
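For example, here is a minimal Ninject sketch, reusing the ITool/Tool pair from above and binding it by hand for brevity (a real plugin setup would bind whatever types a scan of the plugin assembly discovers):

using Ninject;

public static class Bootstrapper
{
    public static ITool CreateTool()
    {
        // Compose the kernel once at startup.
        var kernel = new StandardKernel();
        kernel.Bind<ITool>().To<Tool>();

        // Consumers ask for the interface and never see the concrete type.
        return kernel.Get<ITool>();
    }
}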
Some relevant links:
Using Ninject in a plugin like architecture
Google something like "plugin framework DI C#"
When implementing the facade pattern in Java, I can easily hide the subsystem of the facade by using the package-private modifier. As a result, there is only a small interface accessible from outside the facade/package, other classes of the sub-system are not visible.
As you already know, there is no package-private modifier in C#, but a similar one called internal. According to the docs, classes defined as internal are only accessible within the same assembly.
From what I understand, I have to create at least two assemblies (meaning, practically, two .exe/.dll files) in order to hide the subsystem of the facade physically. By physically I mean that the classes a) cannot be instantiated from outside and b) are not shown by IntelliSense outside the facade.
Do I really have to split my small project into one .exe and one .dll (for the facade) so that the internal keyword has an effect? My facade's subsystem only consists of two classes; a dedicated .dll seems like overkill.
If so, what is the best-practice way in Visual Studio to move my facade out into its own assembly?
Don't get me wrong: I have no real need to split my program into several assemblies. I just want to hide some classes behind my facade from IntelliSense and prevent instantiation from outside. But if I'm not wrong, there is no easier way than that.
Using a separate project is the general preferred approach. In fact, you often have interfaces or facades in a third assembly that both the implementation and UI assemblies reference.
That said, you can accomplish this in a single assembly using a private nested class.
public interface IMyService {}

public static class MyServiceBuilder
{
    public static IMyService GetMyService()
    {
        // Most likely your real implementation has the service stored somewhere
        return new MyService();
    }

    private sealed class MyService : IMyService
    {
        //...
    }
}
The outer class effectively becomes your 'package' for privacy scoping purposes. You probably wouldn't want to do this for large 'packages'; in those cases, it's cleaner to move the code to a separate assembly and use internal.
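Consumers then go through the builder and only ever see the interface:

// MyService itself is inaccessible here; only IMyService is exposed.
IMyService service = MyServiceBuilder.GetMyService();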
Note that if your primary objection to multiple assemblies is deployment, you can merge multiple assemblies (with a tool such as ILMerge) to create a simpler executable or library deployment. This way you retain the insulation benefits of multiple projects/assemblies without the headache of multiple files that could be distributed or versioned independently.
Let's say I have a class (not a static class), A, that in some way uses plugins. I use MEF to manage those plugins, and add methods for my users to add parts catalogs. Example usage:
var myA = new A();
myA.LoadPlugins(new DirectoryCatalog("path/to/plugins"));
myA.DoStuffWithPlugins();
In the same namespace as A is class B. B also uses MEF to manage plugins, and has its own CompositionContainer. If a user wants to interact with B's plugins, she must use B's plugin management methods.
B is used just like A is, above.
My question is, is this bad? Should I care that there are two separate places to load plugins in my namespace? If it is bad, what are the alternatives?
My question is, is this bad? Should I care that there are two separate places to load plugins in my namespace? If it is bad, what are the alternatives?
Not necessarily. There is no reason you can't have two completely separate compositions within the same application.
That being said, there's also no real reason, in most cases, to have more than a single composition. MEF will compose both sets of data at once. In your case, you could compose your importers and your reports with the same composition container, which would have the advantage of allowing somebody who is extending your system to only create a single assembly which extends both portions of your application.
One potential minor red flag here is that these are two separate types within the same namespace, but each of which have their own plugin system. Typically, a framework with a full plugin system is going to be complex enough that I'd question whether they belong in the same namespace - though from going by type names of "A" and "B" it's impossible to know whether this is truly inappropriate.
I don't see any problem with that. I would recommend a base class for class A and class B to reuse methods.
class A : BaseCompositionClass
{
    // implementations
}

class B : BaseCompositionClass
{
    // implementations
}
You could use a single CatalogExportProvider and then query that provider for the matching imports and exports. You could then use a single composition factory from which class A and class B request their compositions.
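A rough sketch of the shared-container idea, assuming A and B expose their imports as attributed members instead of managing their own containers (type names follow the question):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// One catalog and one container serve both classes, so a single
// plugin assembly can extend A's and B's extension points at once.
var catalog = new DirectoryCatalog("path/to/plugins");
var container = new CompositionContainer(catalog);

var myA = new A();
var myB = new B();
container.ComposeParts(myA, myB); // satisfies the imports of both objects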
I am working on a project and want to optionally use an assembly if available. This assembly is only available on WS 2008 R2, and my ideal product would be a common binary for computers both with and without the assembly. However, I'm primarily developing on a Windows 7 machine, where I cannot install the assembly.
How can I organize my code so that I can (with minimal changes) build it on a machine without the assembly, and secondly, how do I ensure that I call the assembly's functions only when it is present?
(NOTE: The only use of the optional assembly is to instantiate a class in the library and repeatedly call a (single) function of the class, which returns a boolean. The assembly is fsrmlib, which exposes advanced file system management operations on WS08R2.)
I'm currently thinking of writing a wrapper class, which will always return true if the assembly is not present. Is this the right way to go about doing this?
My approach would be to dynamically load the assembly instead of hard-coding a reference. Your code could then decide whether to use the assembly (if it loaded) or return some other value. If you use the assembly, you'll need to use reflection to instantiate the class and call the method. That way your code will build and run on any platform, but its behavior will change if it detects the presence of fsrmlib.
The System.Reflection.Assembly documentation has example code for doing this.
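A sketch of that approach; the assembly, type, and method names below are assumptions, since fsrmlib's actual API isn't shown:

using System;
using System.Reflection;

public class QuotaChecker
{
    private readonly object fsrmInstance;
    private readonly MethodInfo checkMethod;

    public QuotaChecker()
    {
        try
        {
            // Hypothetical names; substitute the real ones from fsrmlib.
            Assembly fsrm = Assembly.Load("fsrmlib");
            Type type = fsrm.GetType("Fsrmlib.QuotaManager", throwOnError: true);
            fsrmInstance = Activator.CreateInstance(type);
            checkMethod = type.GetMethod("CheckQuota");
        }
        catch (Exception)
        {
            // Assembly not present: fall back to the default answer below.
            fsrmInstance = null;
        }
    }

    public bool CheckQuota(string path)
    {
        if (fsrmInstance == null || checkMethod == null)
            return true; // graceful default when fsrmlib is unavailable

        return (bool)checkMethod.Invoke(fsrmInstance, new object[] { path });
    }
}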
Hide the functionality behind an interface, say:
public interface IFileSystemManager
{
    void Manage(IFoo foo);
}
Create two implementations:
An implementation that wraps the desired functionality from fsrmlib
A Null Object implementation that does nothing
Inject the IFileSystemManager into your consumers using Constructor Injection:
public class Consumer
{
    private readonly IFileSystemManager fileSystemManager;

    public Consumer(IFileSystemManager fileSystemManager)
    {
        if (fileSystemManager == null)
        {
            throw new ArgumentNullException("fileSystemManager");
        }

        this.fileSystemManager = fileSystemManager;
    }

    // Use the file system manager...
    public void Bar()
    {
        // someFoo stands in for whatever IFoo instance the consumer works on.
        this.fileSystemManager.Manage(someFoo);
    }
}
Make the selection of IFileSystemManager a configuration option by delegating the mapping from IFileSystemManager to a concrete class to the config file, so that you can change the implementation without recompiling the application.
Configure applications running on WS 2008 R2 to use the implementation that wraps fsrmlib, and configure all other applications to use the Null Object implementation.
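If you do wire this up by hand, the config-file mapping could be as simple as the following sketch (the appSettings key and the assembly-qualified type names are illustrative):

using System;
using System.Configuration;

// App.config on WS 2008 R2:
//   <add key="fileSystemManager"
//        value="MyApp.FsrmFileSystemManager, MyApp.Fsrm" />
// App.config everywhere else:
//   <add key="fileSystemManager"
//        value="MyApp.NullFileSystemManager, MyApp" />
public static class FileSystemManagerFactory
{
    public static IFileSystemManager Create()
    {
        string typeName = ConfigurationManager.AppSettings["fileSystemManager"];
        Type type = Type.GetType(typeName, throwOnError: true);
        return (IFileSystemManager)Activator.CreateInstance(type);
    }
}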
I would recommend that you use a DI Container for the configuration part instead of rolling this functionality yourself.
Alternatively you could also consider treating the IFileSystemManager as an add-in and use MEF to wire it up for you.