Maintaining and Deploying Two Versions of an Application Simultaneously - C#

I have a C# WinForms application in Visual Studio 2010 that is used by two different customers. The basic functionality of the application is the same for each customer, but certain lines of code (names of stored procedures, resources, certain behaviors) are different between versions. So far, I have kept the application in the same project, and used preprocessor directives when building/publishing to switch between which deployment to use. However, the scope of the project has grown to a point where this is no longer feasible.
Since so much of the code is shared, I'm trying to avoid duplicating source code files. I'm wondering what the best approach is to maintaining an application that requires different versions to be deployed simultaneously.

Use interfaces to define your classes. Having an interface means that you can have multiple implementations of the same interface, one for each of the clients. This will require you to analyze your existing codebase and identify logical separations in your code where these interfaces can be defined.
You then have the ability to load an implementation of an interface as needed based on the client. You could, for example, do this via configuration: based on a configuration value you load Implementation1 or Implementation2. There are many, many ways to accomplish this particular bit. You should read up on dependency injection and inversion of control, and have a look at tools like Ninject, Autofac, and Unity.
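For illustration, a minimal sketch of that configuration-driven selection; the interface, type names, and appSettings key are all hypothetical:

using System;
using System.Configuration;

public interface IOrderProcessor
{
    void Process(int orderId);
}

public static class OrderProcessorFactory
{
    public static IOrderProcessor Create()
    {
        // Hypothetical App.config entry, different per customer:
        //   <add key="CustomerImplementation"
        //        value="MyApp.CustomerA.OrderProcessor, MyApp.CustomerA" />
        string typeName = ConfigurationManager.AppSettings["CustomerImplementation"];
        Type type = Type.GetType(typeName, throwOnError: true);
        return (IOrderProcessor)Activator.CreateInstance(type);
    }
}

Switching customers then means editing one config value rather than rebuilding with different preprocessor symbols.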
It may actually be difficult at first, considering how you have been using preprocessor directives, but since your application is growing, you will need this refactoring to happen. Keep in mind that if you do not do it now, the refactoring will be far more expensive later as your application becomes more complex.

The different functionality should be a part of the application's architecture. If you need different functionality for different customers, abstract it away: create an interface that wraps up the behaviour, then implement it in two different ways in two different assemblies. Then (depending on your deployment mechanism), you can ship your app with either one DLL or the other. To avoid having to recompile and add references each time, you can use dependency injection frameworks such as Ninject, Castle Windsor, or MEF. That amounts to a "plug-in" architecture, if the code is sufficiently different.
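For example, a minimal Ninject sketch (the interface and implementation are hypothetical); which binding is registered could itself come from a config value:

using Ninject;

public interface ICustomerBehaviour
{
    void Run();
}

public class CustomerAImplementation : ICustomerBehaviour
{
    public void Run() { /* customer A specific behaviour */ }
}

public static class Bootstrapper
{
    public static ICustomerBehaviour Resolve()
    {
        var kernel = new StandardKernel();
        // Bind the abstraction to this deployment's implementation.
        kernel.Bind<ICustomerBehaviour>().To<CustomerAImplementation>();
        return kernel.Get<ICustomerBehaviour>();
    }
}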
If you're talking about text, colours, and other basic differences, they should simply not be hard-coded but instead data-driven. If your app is internet-connected, it could download the appropriate settings when the user logs in. Otherwise, the text/colours/behaviour could be specified in a config file specific to the customer. You can use config transforms to simplify that process.

You might be able to separate some of the differences by using resource, configuration, or property files of some kind. By this, I mean you store some kind of value in the file, such as the name of the stored procedure to use in a particular situation. Then your code reads the name from the file and runs it. You can change the values in the file without needing to rebuild your code for each deployment.
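For example, a sketch that reads a stored procedure name from App.config (the key and helper are hypothetical):

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public static class OrderRepository
{
    public static void RunCustomerQuery(string connectionString)
    {
        // Hypothetical key; App.config maps it to e.g. "usp_GetOrders_CustomerA"
        // for one customer and "usp_GetOrders_CustomerB" for the other.
        string procName = ConfigurationManager.AppSettings["GetOrdersProcedure"];

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(procName, connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            command.ExecuteNonQuery(); // or ExecuteReader, depending on the proc
        }
    }
}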

Related

Why use a DLL instead of a class

I joined a new project where they use C#.
I noticed that several DLLs were being added to the references.
From my knowledge and the e-learning that I have done, after building a class (which has some methods and data), a DLL is generated.
Then, in a new project, the class that was just compiled into a DLL is added as a reference so that the functions defined in it can be called.
So, now my questions are:
1) What is the need for converting the class file into a DLL file? Even if it were a class file, I could still call the functions defined in it by adding its namespace at the top of the code.
2) If, after adding the reference to the DLL, I deleted the entire contents of the project, leaving only the DLL untouched (and in the same place), would the class using this DLL still work?
Separating your code into different projects (each of which will create a separate assembly) has various benefits:
It makes the structure of your code clear. For example, it can separate your storage layer from your business logic, and also from your user interface.
It allows reuse: two different user interfaces can refer to the same assembly containing the business logic, for example.
It allows greater encapsulation: classes which are only needed within their own assemblies can be declared as internal (which is the default for top-level classes in C# anyway) which means code in other assemblies won't even know about them. If all your code is in a single assembly, all those classes will "know about" each other.
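As a small illustration of that encapsulation point (hypothetical names):

// In assembly BusinessLogic.dll:
public class InvoiceService { }            // usable from referencing assemblies

internal class DiscountCalculator { }      // invisible outside BusinessLogic.dll

// In assembly Ui.exe, which references BusinessLogic.dll:
// var service = new InvoiceService();     // compiles
// var calc = new DiscountCalculator();    // compile error: inaccessible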
Now choosing just how many projects to have is a balancing act - I've certainly seen applications where this has gone much too far, with lots of assemblies containing just a single class. If you have a large number of assemblies, that becomes a headache in terms of project and reference management. However, having too few assemblies makes it harder to reuse that code cleanly.
In addition to Jon Skeet's answer, I'd like to add "updateability" as well. For me, this has two benefits:
one is that the build time becomes smaller if only one project needs to be rebuilt,
and second, pushing to "release" could be limited to a few DLLs instead of one major .exe.
The first might not be a big deal in C# since projects build pretty fast, but in C++, for instance, the difference would be significant, since C++ code takes a long time to compile.
The benefit of separating is that it lets you change the internal implementation without breaking client code. It doesn't protect you if you decide that you need to change the interface to your code, but that's a different matter.
They can reuse their code. But if they used plain class files, every time they needed that code they would have to copy and paste it into the new project.
When they use DLLs instead of classes, they can update all projects easily by just updating one or more DLLs, whereas if you copy a class into multiple projects, you would have to modify every copy in every project.
I might add that a class is a language construct while an assembly is a deployment package.
Already in UML those are two totally different things.
http://en.wikipedia.org/wiki/Package_(UML)
When approaching the new idea of subdividing a solution, projects may be seen as "places" in which to put namespaces (i.e. folders) and classes (i.e. files).
It will take some time until you realize that a project best fits the concept of stratum (or layer) which is an architectural separation of a system.
When stratifying a system, you'll realize that the most crucial problem to tackle are the dependencies between strata (which would be the references to projects or dlls).
There cannot be loops, but more importantly, you should study the OCP (Open-Closed Principle), ISP (Interface Segregation Principle), and DIP (Dependency Inversion Principle) of SOLID:
http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
At that point a new question will emerge: how can you know which classes should depend on each other and which should not? You may draw class diagrams, but there is a conceptual approach to the problem; over the years it becomes a "practice" of designing systems. The concepts are described for educational purposes in GRASP:
http://en.wikipedia.org/wiki/GRASP_(object-oriented_design)
The most important parts of GRASP for stratification are "Low Coupling" and "High Cohesion". In other words, you should group functionally similar classes in a stratum and use the stratification to separate classes that are not closely related to each other.
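As a small illustration of DIP across strata (all names hypothetical): the upper stratum owns the abstraction and the lower stratum implements it, so the dependency arrow points toward the policy rather than the detail:

// Stratum: Domain (upper layer) - owns the abstraction.
public class Order { public int Id; }

public interface IOrderRepository
{
    void Save(Order order);
}

public class OrderService
{
    private readonly IOrderRepository repository;
    public OrderService(IOrderRepository repository) { this.repository = repository; }
    public void Place(Order order) { repository.Save(order); }
}

// Stratum: Persistence (lower layer) - references the Domain stratum
// and depends on its abstraction, never the other way around.
public class SqlOrderRepository : IOrderRepository
{
    public void Save(Order order) { /* write to the database */ }
}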

Changing WinForms app for specific clients

I have a WinForms app. I give it to three clients, and each one wants a small tweak or customization specific just to them. To accomplish this, I'd have to keep a separate version just for each client, and I may wind up having many versions this way. I thought dependency injection would be the way to handle this, but I hear you have to register your dependencies in the main method, and you'd still have to add a reference to each client's DLL, so I'd still need different versions. What is the preferred object-oriented way to handle this? Any better ways to handle this?
You can use a plug-in pattern to load assemblies at runtime (from the link):
Separated Interface (476) is often used when application code runs in multiple runtime environments, each requiring different implementations of particular behavior.
Most DI frameworks provide this functionality. You can search and find lots of examples for whichever framework you choose - if you don't want to roll your own.
Ninject
MEF
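If you do roll your own, a minimal sketch of runtime loading (the folder name and plug-in interface are hypothetical):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

public interface IPlugin
{
    void Execute();
}

public static class PluginLoader
{
    public static void RunAll(string folder)
    {
        // Scan a client-specific folder, load each assembly, and
        // instantiate every concrete type implementing IPlugin.
        foreach (string file in Directory.GetFiles(folder, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(file);
            var pluginTypes = assembly.GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract);
            foreach (Type type in pluginTypes)
            {
                var plugin = (IPlugin)Activator.CreateInstance(type);
                plugin.Execute();
            }
        }
    }
}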
You can use a configuration file to configure your DI container, so that you can reuse the same binaries with different configuration files to implement the different customizations. But you need to be sure that you thoroughly test all of your different configurations. Slightly different versions of the same application are not trivial to maintain without causing unanticipated breaks.
Depending on the nature of the customizations, you might be able to capture all relevant modifications in a distinct part of the project (as opposed to keeping them spread all over the project). If you can (e.g. a filtering functionality is provided by the client), you can then load a DLL dynamically (e.g. based on a config file) and allow the functions in the DLL to perform the necessary functionality that accomplishes the customization (based on parameters provided by the main code).
This way you provide pre-defined hooks in your code that can be changed dynamically (even if only by loading the DLLs at startup time) as per the needs of the client. You can separate these DLLs into multiple ones if there are distinct features that some clients want to change, but not necessarily all of the clients all of the features. Then you can provide a "default" version of the DLLs.
Who develops the hooks depends on your setup with the clients.
Make sure you provide adequate documentation on how these hooks are supposed to work -- even if you end up developing them.
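For instance, a minimal sketch of such a hook; the interface, file path, and default fallback are hypothetical:

using System;
using System.IO;
using System.Linq;
using System.Reflection;

public interface IFilterHook
{
    bool Include(string record);
}

public class DefaultFilterHook : IFilterHook
{
    public bool Include(string record) { return true; } // default: keep everything
}

public static class HookLoader
{
    public static IFilterHook LoadFilterHook(string path)
    {
        if (!File.Exists(path))
            return new DefaultFilterHook(); // no customization shipped for this client

        Type hookType = Assembly.LoadFrom(path).GetTypes()
            .FirstOrDefault(t => typeof(IFilterHook).IsAssignableFrom(t) && !t.IsAbstract);
        return hookType == null
            ? (IFilterHook)new DefaultFilterHook()
            : (IFilterHook)Activator.CreateInstance(hookType);
    }
}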

Dynamic Assembly Resolution/Management

Short Version
I have an application which utilizes a plug-in infrastructure. The plug-ins have configurable properties that help them know how to do their job. The plug-ins are grouped into profiles to define how to complete a task, and the profiles are stored in XML files serialized by the DataContractSerializer. The problem is when reading the configuration files, the application deserializing has to have knowledge of all of the plug-ins defined in the configuration file. I'm looking for a way to handle the resolution of unknown plug-ins. See the proposed solution section below for a couple of the ideas I've looked into implementing, but I am open to just about anything (though I'd rather not have to reinvent the application).
Detail
Background
I've developed a sort of Business Process Automation System for internal use for the company I'm currently working for in C# 4. It makes exhaustive use of 'plug-ins' to define everything (from the tasks that are to be performed to the definition of units of work) and relies heavily on a dynamic configuration model which in turn relies on C# 4/DLR dynamic objects to fulfill jobs. It's a little heavy while executing because of its dynamic nature but it works consistently and performs well enough for our needs.
It includes a WinForms configuration UI that uses Reflection extensively to determine the configurable properties/fields of the plug-ins, as well as the properties/fields that define each unit of work to be processed. The UI is also built on top of the BPA engine, so it has a thorough understanding of the (loose) object model put in place that allows the engine to do its job, which, coincidentally, has led to several user experience improvements, such as ad-hoc job execution and configure-time validation of user input. Again, there is room for improvement, but it seems to do its job.
The configuration UI utilizes the DataContractSerializer to serialize/deserialize the settings specified, so any plug-ins referenced by the configuration must be loaded before (or at the time of) configuration load.
Structure
The BPA engine is implemented as a shared assembly (DLL) which is referenced by the BPA service (a Windows Service), the Configuration UI (WinForms app), and a plug-in tester (Console application version of the Windows Service). Each of the three applications that reference the shared assembly only include the minimum amount of code necessary to perform their specific purpose. Additionally, all plug-ins must reference a very thin assembly which basically just defines the interface(s) that the plugin must implement.
Problem
Because of the extensibility model used in the application, there has always been a requirement that the config UI is run from the same directory (on the same PC) as the Service application. That way the UI always knows about all of the assemblies that the Service knows about, so configurations can be deserialized without running into missing assemblies. Now that we are getting close to rolling out the system, a demand to allow the Configuration UI to run remotely on any PC in our network has come from our network admins for security purposes. Typically this wouldn't be a problem if there were always a known set of assemblies to deploy; however, with the ability to extend the application using user-built assemblies, there has to be a way to resolve the assemblies from which the plug-ins can be instantiated/used.
Proposed (potentially obvious) Solution
Add a WCF service to the Service application to allow the typical CRUD operations against the configurations that that instance of the service is aware of, and rework the configuration UI to act more like SSMS with a Connect/Disconnect model. This doesn't fully solve the problem, so we would also need to expose some sort of ServiceContract from the Service application to allow querying of the assemblies it knows about/has access to. That's fine and fairly straightforward; however, the question arises, "When should the UI find out about the assemblies that the Service is aware of?" On connect we could send all of the assemblies from the Service to the UI to ensure that it always knows about all of the assemblies the Service does, but that gets messy with AppDomain management (potentially unnecessarily) and assembly version conflicts. So I suggested hooking into the AppDomain.AssemblyResolve/AppDomain.TypeResolve events to download only the assemblies that the client isn't aware of yet, and only as needed. This doesn't necessarily clean up the AppDomain management issues, but it definitely helps address the version conflicts and related issues.
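For what it's worth, a minimal sketch of that AssemblyResolve hook; IAssemblyService stands in for the proposed WCF contract and is hypothetical:

using System;
using System.Reflection;

// Hypothetical contract exposed by the proposed WCF service.
public interface IAssemblyService
{
    byte[] GetAssemblyBytes(string fullName);
}

public static class RemoteAssemblyResolver
{
    public static void Install(IAssemblyService service)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // args.Name is the full name the runtime failed to resolve.
            // Fetch the bytes from the Service and load them on demand, so the
            // UI only downloads the plug-in assemblies it actually encounters.
            byte[] raw = service.GetAssemblyBytes(args.Name);
            return raw == null ? null : Assembly.Load(raw);
        };
    }
}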
Question
If you've stuck with me this long I applaud and thank you, but now I'm finally getting to the actual question here. After months of research and finally coming to a conclusion I am wondering if anyone here has had to deal with a similar issue and how you dealt with the pitfalls and shortcomings? Is there a standard way of handling this that I have missed completely, or do you have any recommendations based on how you have seen this successfully handled in the past? Do you see any problems with the proposed approaches or can you offer an alternative?
I'm aware that not everyone lives in my head so please let me know if you need further clarification/explanation. Thanks!
Update
I've given MEF a fair shake and feel that it is too simplistic for my purposes. It's not that it couldn't be bent to handle the plug-in requirements of my application; the problem is that doing so would be too cumbersome and dirty to be feasible. It is a nice suggestion and it has a lot of potential, but in its current state it just isn't there yet.
Any other ideas or feedback on my proposed solutions?
Update
I don't know if the issue I'm encountering is just too localized, if I failed to properly describe what I am trying to achieve, or if this question is just too unreasonably long to be read in its entirety; but the few answers I've received have been subtly helpful enough to help me think through the problem differently and identify some shortcomings in what I am after.
In short, what I'm trying to do is take three applications which in their current state share information (configuration/assemblies) using a common directory structure, and try to make those applications work across a network with minimal impact on usability and architecture.
File shares seem like the obvious answer to this problem (as @SimonMourier proposed in the comments), but using them translates into a lack of control and debuggability when something goes wrong. I can see them as a viable short-term solution, but long term they just don't seem feasible.
tl;dr, but I'm 90% sure you should take a look into MEF.
When I first saw it I was like "aah, another acronym", but you'll see it's very simple, and it's built into .NET 4. Best of all, it even runs seamlessly on Mono, and it's a matter of less than an hour (including coffee break) between hearing about it and compiling hello worlds to get comfortable with the features. It's really that simple.
Basically, you "export" something in one assembly and "import" it into another (all via simple attribute decorations), and you choose where to search for it (for example, the application's directory, a plug-ins folder, etc.).
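A minimal sketch (the plug-in interface and folder name are hypothetical):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin
{
    void Execute();
}

[Export(typeof(IPlugin))]
public class HelloPlugin : IPlugin
{
    public void Execute() { /* ... */ }
}

public class Host
{
    [ImportMany]
    public IPlugin[] Plugins { get; set; }

    public void Compose()
    {
        // Search the plug-ins folder for exports and satisfy the imports above.
        var catalog = new DirectoryCatalog("Plugins");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}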
Edit: what if you try to download and load (and possibly cache) plugins on-the-fly on configuration load?
I think that you could be overlooking a relatively simple solution that derives somewhat from the Microsoft web.config approach:
Have two sections in the config file:
Section 1 contains enough information about the plugin (i.e. name, version) to allow you to load it into an app domain.
Section 2 contains the information serialized by the plugin.
On loading the plugin, pass the information in section 2 and let the plugin deserialize it according to its needs.
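A sketch of that two-phase load; the element names and the plug-in's LoadSettings method are hypothetical:

using System;
using System.Xml.Linq;

// Hypothetical config shape:
// <plugin name="MyPlugins.Widget, MyPlugins" version="1.2">
//   <settings> ...opaque, plugin-owned XML... </settings>
// </plugin>

public static class PluginConfigLoader
{
    public static void Load(string path)
    {
        XElement pluginElement = XElement.Load(path);

        // Section 1: just enough identity to get the plug-in into the AppDomain.
        string typeName = (string)pluginElement.Attribute("name");
        Type pluginType = Type.GetType(typeName, throwOnError: true);
        dynamic plugin = Activator.CreateInstance(pluginType);

        // Section 2: hand the opaque settings element to the plug-in
        // and let it deserialize it according to its own needs.
        plugin.LoadSettings(pluginElement.Element("settings"));
    }
}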
Maybe you can divide this problem into two parts:
the administrator allows users to download one of several predefined configurations (sets of libraries), and MEF helps to inject the required dependencies;
each user activity should pass through a security proxy, and plug-in modules are not allowed to call the business layer directly. The proxy could match a custom security attribute against the allowed activities.
For example:
[MyRole(Name = new[] { "Security.Action" })]
void BlockAccount(string accountId){}
[MyRole(Name = new[] { "Manager.Action" })]
void CreateAccount(string userName){}
[MyRole(Name = new[] { "Security.View", "Manager.View" })]
List<> AcountList(Predicate p){}
and then map the allowed actions to AD groups (an abstract description):
corp\securityOperators = "Security.*" //allow calls to all security manipulation
corp\HQmanager = "Manager.View" //allow only view access
corp\Operator = "Manager.*"
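A sketch of how the proxy might enforce that attribute via reflection; the attribute matches the usage above, while the method lookup and role matching are hypothetical and ignore the "Security.*" wildcard syntax:

using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class MyRoleAttribute : Attribute
{
    public string[] Name { get; set; }
}

public static class SecurityProxy
{
    public static void Invoke(object target, string methodName, string[] userRoles, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName,
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        var attr = method.GetCustomAttributes(typeof(MyRoleAttribute), true)
                         .Cast<MyRoleAttribute>()
                         .FirstOrDefault();

        // Deny the call unless one of the user's roles matches a declared role.
        if (attr == null || !attr.Name.Any(userRoles.Contains))
            throw new UnauthorizedAccessException(methodName);

        method.Invoke(target, args);
    }
}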
I'm not sure I completely understand the problem but I think this situation calls for "type-preserving serialization" - that is, the serialized file contains enough type information to deserialize back to the original object graph without any hints from the calling application as to what types are involved.
I've used Json.NET to do this and I can highly recommend the library for type-preserving serialization of object graphs. It looks like the NetDataContractSerializer can also do this, per the MSDN remarks:
The NetDataContractSerializer differs from the DataContractSerializer in one important way: the NetDataContractSerializer includes CLR type information in the serialized XML, whereas the DataContractSerializer does not. Therefore, the NetDataContractSerializer can be used only if both the serializing and deserializing ends share the same CLR types.
I chose Json.NET because it can serialize POCOs without any special attributes or interfaces. Both Json.NET and the NetDataContractSerializer allow you to use a custom SerializationBinder - in here you could put any logic regarding loading assemblies that may not yet be loaded.
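For example, a sketch with Json.NET; TypeNameHandling and the Binder hook are standard Json.NET settings, while PluginSerializationBinder is a hypothetical place to load assemblies that aren't in the AppDomain yet:

using System;
using System.Reflection;
using System.Runtime.Serialization;
using Newtonsoft.Json;

// Hypothetical binder: the hook where not-yet-loaded plug-in
// assemblies could be downloaded, cached, and loaded.
public class PluginSerializationBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        Assembly assembly = Assembly.Load(assemblyName);
        return assembly.GetType(typeName, throwOnError: true);
    }
}

public static class ProfileSerializer
{
    private static readonly JsonSerializerSettings Settings = new JsonSerializerSettings
    {
        // Embed CLR type names so the original object graph can be rebuilt
        // without listing every concrete plug-in type up front.
        TypeNameHandling = TypeNameHandling.Auto,
        Binder = new PluginSerializationBinder()
    };

    public static string Save(object profile)
    {
        return JsonConvert.SerializeObject(profile, Formatting.Indented, Settings);
    }

    public static object Load(string json)
    {
        return JsonConvert.DeserializeObject(json, Settings);
    }
}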
Unfortunately, changing serialization schemes might be the "breaking-est" change to suggest because all your existing files will become incompatible. You might be able to write a conversion utility that deserializes a file using the old method and serializes the resulting object graph using the new method.

Need help avoiding the use of a Singleton

I'm not a hater of singletons, but I know they get abused and for that reason I want to learn to avoid using them when not needed.
I'm developing an application to be cross-platform (Windows XP/Vista/7, Windows Mobile 6.x, Windows CE5, Windows CE6). As part of the process I am refactoring out code into separate projects to reduce code duplication, and hence a chance to fix the mistakes of the initial system.
One such part of the application that is being made separate is quite simple, its a profile manager. This project is responsible for storing Profiles. It has a Profile class that contains some configuration data that is used by all parts of the application. It has a ProfileManager class which contains Profiles. The ProfileManager will read/save Profiles as separate XML files on the harddrive, and allow the application to retrieve and set the "active" Profile. Simple.
On the first internal build, the GUI was the anti-pattern SmartGUI. It was a WinForms implementation without MVC/MVP, done because we wanted it working sooner rather than being well engineered. This led to ProfileManager being a singleton. This was so that, from anywhere in the application, the GUI could access the active Profile.
This meant I could just go ProfileManager.Instance.ActiveProfile to retrieve the configuration for different parts of the system as needed. Each GUI could also make changes to the profile, so each GUI had a save button, so they all had access to ProfileManager.Instance.SaveActiveProfile() method as well.
I see nothing wrong with using the singleton here, yet I know singletons aren't ideal. Is there a better way this should be handled? Should an instance of ProfileManager be passed into every Controller/Presenter? When the ProfileManager is created, should other core components be created and register for events raised when profiles are changed? The example is quite simple, and probably a common feature in many systems, so I think this is a great place to learn how to avoid singletons.
P.s. I'm having to build the application against Compact Framework 3.5, which limits a lot of the normal .NET Framework classes that can be used.
One of the reasons singletons are maligned is that they often act as a container for global, shared, and sometimes mutable, state. Singletons are a great abstraction when your application really does need access to global, shared state: your mobile app that needs to access the microphone or audio playback needs to coordinate this, as there's only one set of speakers, for instance.
In the case of your application, you have a single, "active" profile, that different parts of the application need to be able to modify. I think you need to decide whether or not the user's profile truly fits into this abstraction. Given that the manifestation of a profile is a single XML file on disk, I think it's fine to have as a singleton.
I do think you should either use dependency injection or a factory pattern to get hold of a profile manager, though. You only need to write a unit test for a class that requires the use of a profile to understand the need for this; you want to be able to pass in a programmatically created profile at runtime, otherwise your code will have a tightly coupled dependency on some XML file on disk somewhere.
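A minimal sketch of that constructor injection; the interface members are guesses based on the question:

public class Profile { /* configuration data */ }

public interface IProfileManager
{
    Profile ActiveProfile { get; }
    void SaveActiveProfile();
}

public class SettingsPresenter
{
    private readonly IProfileManager profileManager;

    // Depending on the abstraction means a unit test can pass in a
    // programmatically created fake instead of touching XML on disk.
    public SettingsPresenter(IProfileManager profileManager)
    {
        this.profileManager = profileManager;
    }

    public void Save()
    {
        profileManager.SaveActiveProfile();
    }
}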
One thing to consider is to have an interface for your ProfileManager, and pass an instance of that to the constructor of each view (or anything) that uses it. This way, you can easily have a singleton, or an instance per thread / user / etc, or have an implementation that goes to a database / web service / etc.
Another option would be to have all the things that use the ProfileManager call a factory instead of accessing it directly. Then that factory could return an instance, again it could be a singleton or not (go to database or file or web service, etc, etc) and most of your code doesn't need to know.
Doesn't answer your direct question, but it does make the impact of a change in the future close to zero.
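A sketch of the factory option, reusing the IProfileManager interface from the sketch above (XmlProfileManager is a hypothetical concrete type):

public static class ProfileManagerFactory
{
    private static IProfileManager instance;

    // Callers ask the factory rather than newing up a concrete type; whether
    // this returns a singleton, a per-thread instance, or a database-backed
    // implementation is an implementation detail hidden here.
    public static IProfileManager Get()
    {
        return instance ?? (instance = new XmlProfileManager());
    }
}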
"Singletons" are really only bad if they're essentially used to replace "global" variables. In this case, and if that's what it's being used for, it's not necessarily Singleton anyway.
In the case you describe, it's fine, and in fact ideal so that your application can be sure that the Profile Manager is available to everyone that needs it, and that no other part of the application can instantiate an extra one that will conflict with the existing one. This reduces ugly extra parameters/fields everywhere too, where you're attempting to pass around the one instance, and then maintaining extra unnecessary references to it. As long as it's forced into one and only one instantiation, I see nothing wrong with it.
Singleton was designed to avoid multiple instantiations and single point of "entry". If that's what you want, then that's the way to go. Just make sure it's well documented.

Code Organization Conundrum: Web Project With Multiple Supporting DLLs?

I am trying to get a handle on the best practice for code organization within my project. I have looked around on the internet for good examples and, so far, I have seen examples of a web project with one or multiple supporting class libraries that it references, or a web project with sub-folders that follow its namespace conventions.
Assuming there is no right answer, this is what I currently have for code organization:
MyProjectWeb
This is my web site. I am referencing my class libraries here.
MyProject.DLL
As the base namespace, I am using this DLL for files that need to be generally consumable. For example, my class "Enums" that has all the enumerations in my project lives there. As does class MyProjectException for all exception handling.
MyProject.IO.DLL
This is a grouping of maybe 20 files that handle file upload and download (so far).
MyProject.Utilities.DLL
All my common classes and methods bunched up together in one generally consumable DLL. Each class follows an "XHelper" convention, such as SqlHelper, AuthHelper, SerializationHelper, and so on.
MyProject.Web.DLL
I am using this DLL as the main client interface. Right now, the majority of class files here are:
1) properties (such as School, Location, Account, Posts)
2) authorization stuff (such as custom membership, custom role, & custom profile providers)
My question is simply - does this seem logical?
Also, how do I avoid having to cross-reference DLLs from one project library to the next? For example, MyProject.Web.DLL uses code from MyProject.Utilities.DLL, and MyProject.Utilities.DLL uses code from MyProject.DLL. Is this solved by clicking on properties and selecting "Dependencies"? I tried that but still don't seem to be accessing the namespaces of the assembly I have selected. Do I have to reference every assembly I need for each class library?
Responses appreciated and thanks for your patience.
It is logical in that it proceeds logically from your assumptions. The fact that you are asking the question leads me to believe you might not think it is rational.
In general, things should be broken down along conceptual boundaries rather than technical ones. MyProject.IO.DLL is an example of this principle surfacing in your current design. All of the IO things logically go together, so they end up in a single binary. Makes sense.
Breaking things down into namespaces based on their technical type - enum, class, etc. - is going to be a little more problematic.
The dependencies problem is the same one you'd have breaking one class up with many and it is resolved using the same technique: inversion of dependency. Where two things seemingly need to depend on one another, add an intermediary thing that represents the contract between the first two. This can be abstractions, constants, mediators etc... whatever you need to make it so that instead of thing A depending on thing B and thing B depending on thing A, you have things A and B depending on thing C.
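A small sketch of that shape using the asker's assemblies (member names hypothetical): instead of MyProject.Web.DLL and MyProject.Utilities.DLL referencing each other, both reference a contracts assembly:

// Thing C - in a new MyProject.Contracts.DLL, referenced by both sides:
public interface IAuthService
{
    bool Authorize(string userName);
}

// Thing A - in MyProject.Web.DLL, depends only on the contract:
public class AccountController
{
    private readonly IAuthService auth;
    public AccountController(IAuthService auth) { this.auth = auth; }
}

// Thing B - in MyProject.Utilities.DLL, implements the contract:
public class AuthHelper : IAuthService
{
    public bool Authorize(string userName) { return userName != null; }
}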
