Duplicate Functionality Amongst Multiple Projects - C#

I'm currently working on two social networking sites that have a lot in common, yet are distinctively different. I find myself writing a lot of the same code for both (including UI), and was wondering if there is a best practice that will limit duplicating code.
One of the main problems is that these projects are very independent of each other and will likely have more differences than similarities before long. Also, once the initial work is done, they might be handed off to other programmers, so having shared code libraries might end up being a big problem.
Any suggestions from people who have had to deal with a similar situation?
PS: I'm the only developer on both of these projects, and it looks like it's going to stay that way for a while.

Abstracting shared functionality back to a framework or library with defined interfaces and default implementations is a common way to handle this. For example, your plugin architecture, if you choose to support one, is probably something that could be shared among all of your projects. Most of the time the things you want to share are pretty basic functionality or relatively abstract functionality that can be easily customized. The former are easier to recognize and factor out to common libraries. The latter may sometimes be more work than simply re-implementing the code with minor changes (sharing patterns rather than code).
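For example, a minimal sketch of what that kind of shared abstraction could look like (the notification types here are purely hypothetical, not something from your projects):

    // In a shared class library referenced by both sites (hypothetical example).
    public interface INotificationSender
    {
        void Send(string recipient, string message);
    }

    // A default implementation either site can use as-is or replace.
    public class EmailNotificationSender : INotificationSender
    {
        public void Send(string recipient, string message)
        {
            // Shared, generic behaviour lives here; a site supplies its own
            // implementation only when it needs something different.
            System.Console.WriteLine("Sending '" + message + "' to " + recipient);
        }
    }

Each site then codes against INotificationSender, and only the site that needs different behaviour has to provide its own implementation.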
One thing you want to be careful of is to let the actual re-use drive the design of common libraries rather than coming up with a shared architecture in advance. It's very tempting to get caught up in framework design and abstracting it out for shared use. Unfortunately you often find that the shared use never develops or develops in a different direction than you expected and you end up rewriting or throwing away much of the framework -- or even worse, keeping and maintaining unused code. Let YAGNI (you aren't gonna need it) be your guide and delay refactoring to common libraries until you actually have a need.

There are (at least) a couple of different approaches here, and you could certainly use both. First, you could move some common code into a separate project and call it statically. This is pretty easy to do, and I sometimes take this approach with simple helper functions that probably don't belong in a class in my main project - a good example would be a math library or something like that. The other approach is to extract common functionality into a class or interface which you then inherit and extend. Depending on what code you are looking to reuse, you might use either (or both) of these approaches.
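As a rough sketch of the two approaches (all names are invented for the example):

    // Approach 1: a static helper class in a shared project, called directly.
    public static class MathHelpers
    {
        public static double Clamp(double value, double min, double max)
        {
            return value < min ? min : (value > max ? max : value);
        }
    }

    // Approach 2: a shared base class that each project inherits and extends.
    public abstract class ProfilePageBase
    {
        public string Title { get; set; }

        // Common behaviour both sites can share.
        public virtual string BuildPageTitle()
        {
            return Title + " | My Network";
        }
    }

    public class SiteAProfilePage : ProfilePageBase
    {
        // Site-specific behaviour goes in the derived class.
        public override string BuildPageTitle()
        {
            return Title + " - Site A";
        }
    }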
I suspect you will find it easier than you think. Try it with some simple code: set up a new project in the same solution, reference your library from your existing code and see how it goes. There is also no reason you can't reference your shared project from multiple solutions.
Having shared code libraries need not be a problem if development gets handed off. For now you can have your two sites reference the same library (or libraries), which you maintain, but if and when you split the projects out to other teams you can give each team its own copy of the shared code.

Related

Should I write a library from some code I have that has the possibility to be needed across projects?

I am writing a C# application with Visual Studio that is divided into several modules (namespaces). Each module is of course going to be in charge of some particular function. For example, there is a module that will deal with calling some firmware somewhere else, another module that will deal with just the UI for the user, etc.
So far I have no problem doing that. But while writing the first module, I realized that the classes I create for it could very well be used in other future projects, since their architectures seem similar.
So I am wondering if it would be a good decision to:
write a separate DLL that deals with all this functionality from scratch and then call that in my project
or just write the project, make it work and then later separate the particular module and shape it like a DLL.
You have to look at all the options and consider everything.
If you create a separate library then you need to be sure that you know all the requirements ahead of time, so that you can keep the library as stable as possible, because each time you update the library you will need to update all of the projects which use it.
Creating a library will be at least a little more work initially.
A well designed and developed library will give you the ability to be able to just drop it into a future project and be sure that it will do what you want.
A badly designed one will mean that you keep going back to it to make changes time and time again, and then either have to keep updating all your projects or maintain backwards compatibility, which means you could end up with multiple versions of the same method and something difficult to maintain and update.
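If you do end up keeping old signatures around for backwards compatibility, marking them with [Obsolete] at least steers callers toward the newer version; a small hypothetical example:

    public class ImageResizer
    {
        // Old signature kept only so existing callers still compile.
        [System.Obsolete("Use Resize(width, height, preserveAspectRatio) instead.")]
        public void Resize(int width, int height)
        {
            Resize(width, height, true);
        }

        // Newer version of the same method.
        public void Resize(int width, int height, bool preserveAspectRatio)
        {
            // ... actual resizing logic ...
        }
    }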
You have to weigh all of this up against the advantages you would gain by using a library.
My experience is that if you only need to do something twice, copy and paste is often better. If you need it more times than that, or sometimes if it is quite complex, then a library starts to pay off. But for little things copy and paste is still easier, quicker, and lighter.

Function-related libraries, or a single service layer?

My team and I are looking for evidence to support either a multi-library approach for similar functionality, or condensing all of this functionality within a single service layer. It's important to note that this is going to sit behind a web API and either approach is valid, but we need to decide which holds more benefit. To illustrate the layer(s) we are looking at, the following is what we'll have:
Solution
  WebAPI
  Services      ---- this is the layer we're looking at
  DataAccess
Bear in mind that if we did use the multi-library approach we would still have a Services project, but it would be much thinner and have more specific functionality. We are not planning to independently deploy these libraries; everything that needs them will either reference them in the same solution or access them via the web API.
What the rest of us would propose is something like the following:
Solution
  WebAPI
  Services
  Services.Geography        ---
  Services.Membership       --- the alternative approach
  Services.ProductDelivery  ---
  etc...
The benefits we see in the first option are having all of this code organized within a single library, which allows for easier extraction of duplicate code, potentially easier unit testing, and perhaps some relief in the build process.
The benefits of option two are having a clear delineation in functionality between projects, having isolated code which is potentially portable should the need arise, and generally being able to independently work on and configure different facets of the application.
The drawbacks we see in option one are that the Service layer now becomes responsible for each facet of the application, which bloats that library and in my opinion sort of violates Single Responsibility. We realize that rule is not as applicable to libraries so much as it is methods and classes, but it still seems like there are other benefits to be had by separating functionality. There's also the potential to mistakenly place code somewhere it doesn't belong, or use classes available to the entire project where they may not apply.
The drawbacks in option two are an obvious increase in overhead on project builds, working in configs (even though this may be desirable) and potentially cluttering the solution with more projects than necessary. I think we'd plan to consolidate like functionality into single projects (i.e., we might build multiple implementations of ProductDelivery within that project to be able to switch between them or use different ones for different reasons).
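For instance, the multiple ProductDelivery implementations mentioned above might look roughly like this inside that one project (the type names are placeholders):

    public interface IProductDeliveryService
    {
        void Schedule(int orderId);
    }

    public class CourierDeliveryService : IProductDeliveryService
    {
        public void Schedule(int orderId)
        {
            // Call the courier's API here.
        }
    }

    public class InStorePickupService : IProductDeliveryService
    {
        public void Schedule(int orderId)
        {
            // Reserve stock for in-store pickup here.
        }
    }

    // The Web API (or configuration) decides which implementation gets used.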
We realize all of our business rules can be accomplished with either approach, we just have reached an impasse in deciding which approach is better practice.
So the question is, which of these two approaches is better practice?
There are two things that would make me lean toward the first option:
Your services use only one data layer library.
Your services are really short (something like implementing just the CRUD).
Splitting into libraries where a class only amounts to a few lines in an entire library can be awkward, unless you know it will grow a lot later (into several classes, of course).
If not, I would say option 2 is better because:
it's easier to replace a part of the service: change the library you want, and it's done.
it should be more abstracted, if you want to avoid strong coupling between the libraries.
it should be easier to test a specific part of it.
it should be more configurable, and you can configure all of the libraries from the project that references them (even if, for one or several libraries, that doesn't change much).
it is less likely to turn into a god library.
it might be more reusable in other projects, depending on how specialized your libraries are.
And I disagree with these points:
a single library which allows for easier extraction of duplicate code
If you are careful, your duplicate code can be extracted into a parent library common to all the others, so all your duplicate code should still end up factored out (except if there is a lack of communication or people prefer to copy/paste code - but one library would not change that; it might even be harder to find where the code already exists).
potentially unit testing
Why would several libraries be more difficult to test?
If you have several libraries, you will have to make them more abstracted to allow for change, and then your testing should be easy.
and perhaps some relief from the build process.
Why? If all your libraries are well named, where would the problems be?
Deploying one DLL or several DLLs shouldn't be that hard.
If it's about the configuration, with one library or more you will still have much the same configuration to write, not necessarily more (but probably a bit more).
I also disagree that single responsibility doesn't apply to libraries. It does.
Each library should be responsible for one area of the business, not all of them. If you end up with a set of libraries it can become a framework, and even a framework ends up with a single responsibility, just a much more general one than the responsibility of methods, classes, libraries, etc.
But you might want an opinion from a more experienced architect/developer than me.
If someone disagrees with me, don't hesitate to comment on my answer. I would be happy to learn from your knowledge.
With the comments from my first answer in mind:
"The current plan is to have a single data layer. Many of our potential other libraries would be third party API wrappers that don't necessarily need to interact with the database. Those that do could potentially have their own data layer which may or may not interact with the same database or an independent database. I think doing that makes them self contained and able to exist without the rest of the solution. Still not totally sure if this is the approach we want to take yet though."
Dependency injection?
StructureMap as our IoC dependency resolver
You will end up with several libraries, unless all the libraries you use have to be used together.
Either your services become a kind of proxy for the third party libraries, or your services use proxies for the third party libraries.
Either way, the proxy parts should not all live together in the same library; that would make it harder to change the third party library.
If you choose the solution where your services use proxies for the third party libraries, you can inject those proxies into your services easily, thanks to dependency injection.
If you change the third party library, change the proxy implementation and the injection, and it's done.
If, instead, you make your services the proxies, it's almost the same but with one layer less. Your service implementations then have to be exported into different libraries, and you will have to be more careful when changing a service, because you can end up breaking things elsewhere in your app.
For that last reason, having a proxy layer used by your services sounds better to me at the moment.
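A rough sketch of the "services use proxies" option with constructor injection (the geocoding names are hypothetical, and the StructureMap registration is left out):

    // The proxy abstraction, living in its own small library.
    public interface IGeocodingProxy
    {
        string LookupRegion(string address);
    }

    // Wraps whichever third party client is currently in use.
    public class AcmeGeocodingProxy : IGeocodingProxy
    {
        public string LookupRegion(string address)
        {
            // Call into the third party API here.
            return "Unknown";
        }
    }

    // The service depends only on the proxy interface, so swapping the
    // third party library means swapping the registered implementation.
    public class GeographyService
    {
        private readonly IGeocodingProxy _geocoding;

        public GeographyService(IGeocodingProxy geocoding)
        {
            _geocoding = geocoding;
        }

        public string GetRegionFor(string address)
        {
            return _geocoding.LookupRegion(address);
        }
    }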
I'm still thinking; I imagine this answer will see more edits.

Executable vs wrapper class

I am working on a project using ASP.NET and C# and I need to pull in something like wkhtmltopdf. I realize that several good wrapper classes have been written to simplify calls to the DLLs from C#. But is there a reason why I should not invoke the executable directly? Is there any performance or security gain from using a wrapper library?
Although my specific need now is to use wkhtmltopdf, I have had the same question in the past when using libraries like ImageMagick as well.
It's a matter of preference. By using the wrapper classes you mentioned, you reduce the work of implementing components you may not be so familiar with, freeing up your valuable time to concentrate on the aspects of the application where you can add the most value, such as the overall application architecture and design, or the application's business logic.
If you choose to write all the code yourself, then you may find that you're a less productive developer than your competition.
And, as @UweKeim points out in his comment, performance may be a factor as well. If the wrapper code does not perform to your needs, you may well need to bypass it and go straight to the component/code library you're calling.
It's important to strike a balance between using code that others have written and writing your own. Important factors include how well the third-party code is written, how well it is supported, and how well it performs. Choose wisely!
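For what it's worth, calling the executable directly is not much code with System.Diagnostics.Process; a minimal sketch (the install path and arguments are illustrative, so check them against your wkhtmltopdf setup):

    using System.Diagnostics;

    public static class PdfRenderer
    {
        public static bool RenderHtmlToPdf(string htmlPath, string pdfPath)
        {
            var startInfo = new ProcessStartInfo
            {
                // Adjust to wherever wkhtmltopdf.exe actually lives.
                FileName = @"C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe",
                Arguments = "\"" + htmlPath + "\" \"" + pdfPath + "\"",
                UseShellExecute = false,
                RedirectStandardError = true,
                CreateNoWindow = true
            };

            using (var process = Process.Start(startInfo))
            {
                string errors = process.StandardError.ReadToEnd();
                process.WaitForExit();
                // Inspect errors / the exit code before trusting the output file.
                return process.ExitCode == 0;
            }
        }
    }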

How should I store my custom classes?

I've gotten to the point where I have made a few classes that I have found to be rather useful for a variety of different projects; they're either extensions of already existing .NET ones or something entirely new.
Although I may not use them for EVERY project, I would most certainly use them again at some point. My question is: what is the best way to keep them stored?
I was thinking about compiling them into a .dll that I can simply reference when necessary, but at the moment there are only about four different classes, and I've always thought that a .dll is more suited to a larger number of classes.
Would it just be simpler to store them somewhere in the cloud so I can access them from pretty much any computer?
What has worked best for you?
Edit: I'll be using more than one computer as I sometimes use the university computer facilities.
The classes range from memory management helper classes in XNA to niche functions in regular .NET/C#.
If the classes don't fit together naturally as an assembly, keep the source files somewhere like GitHub and include them in your projects where needed. You can always rearrange them into components at a later date, when you feel it's worthwhile.
Are these classes in any way related? If you want to use one of them, do you need the others? If not, then those don't belong in a common package together.
Robert C. Martin provides some decent introduction in the chapter "Principles of Package and Component Design" of his book "Agile Software Development". There is also a C# adapted version with very similar content called "Agile Principles, Patterns and Practices in C#".
What I'm just saying is, packaging components is not only about thinking components X and Y are "cool enough" to be reused, but also about how you organize things and how well libraries or packages fit into the big picture.
You could compile them as a DLL and install them to the GAC. Then you can reference the DLLs from any project you need, just like any native C# library.
And I agree with Jim Brissom. Compile only the classes that go together as one assembly.
I keep my common classes in SourceGear and then share them into other projects as required.

When do you decide to split up large projects into smaller projects?

When/where do you decide to split a large Visual Studio project into multiple smaller projects? When it can be reusable? When the project is too big? (But how big is too big?)
And when you do split the project, do you:
group by database tables
group by similar functionality
other..
Pros of many projects:
Easier to isolate code for unit testing. I like to isolate code that has a dependency on a big external server thing; for example, code that talks to the SMTP server gets its own assembly, code that talks to the database gets its own assembly, code that talks to the webserver, and code that is pure business logic like validations (there's a small sketch of this after these lists).
Pros of few projects:
Visual Studio goes faster.
Some developers just don't get your vision about dividing up responsibilities and will start putting classes everywhere, so you end up with the pain of extra projects and the benefits of putting everything into one project.
Each project has a configuration, and when you make a decision about project configuration you often have to make the same change everywhere, such as setting or changing the strong name key.
Pros of many solutions:
You hit the practical limit on the number of projects later.
Only the stuff in your current solution gets compiled every time you hit F5.
If the project isn't expected to change in the life of your application, why re-compile it over and over? Call it done and move it to its own solution.
Cons of many solutions:
It's up to you to work out the dependencies between solutions and manually compile the dependencies first. This leads to complicated build scripts.
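To make the isolation point from the "pros of many projects" list concrete, here is a small hypothetical example of putting an SMTP dependency behind its own interface/assembly:

    // Lives in its own small assembly, e.g. MyApp.Email (name is illustrative).
    public interface IMailSender
    {
        void Send(string to, string subject, string body);
    }

    // Business logic depends only on the interface, so unit tests can
    // substitute a fake instead of talking to a real SMTP server.
    public class RegistrationService
    {
        private readonly IMailSender _mail;

        public RegistrationService(IMailSender mail)
        {
            _mail = mail;
        }

        public void Register(string email)
        {
            // ... validation and persistence would go here ...
            _mail.Send(email, "Welcome", "Thanks for signing up.");
        }
    }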
Projects should be cohesive. Logic should be related and accomplish a similar goal.
This answer will depend on the size of the product you are supporting. In general we organize our projects along domain and logic, and we divide those even further; the more you divide, the more organized you must be, or you are going to hit the dreaded circular dependency issue.
When I do choose to break up a project, it is when it grows too large or when two areas are becoming too similar.
When complexity is rising I do not split by tables; I generally split by functionality.
Reusability is another excellent occasion to reduce lines of code, as well as to introduce a new project. However, be careful how many "utility" libraries you introduce, because they do have an impact on readability/understandability.
I do not think there is a line in the sand that says if you hit 3k SLOC you have too much. It is all contextual.
I always have several projects (and therefore a solution), instead of one project with all of my source in it.
In some cases it is unavoidable, because you are using an open source library and want to be able to debug it. But more pragmatically, I typically have my applications provide functionality via plugins. This allows me to change the behavior or offer user-selectable behavior at runtime. In the non-plugin case, it allows you to update one portion of your program without updating everything. There are also cases where you can ship the main application and only download the modules/assemblies when you need them.
One other reason is that you can create smaller test apps to exercise an assembly, rather than building a very large solution and potentially requiring a user to execute several (and irrelevant) GUI operations before even reaching the part you want to test. And this isn't just a testing concern -- maybe you have less-savvy users in your organization that only want to be presented with the bits that concern them.
When the overall purpose of the project remains the same, but the number of classes is becoming large, I tend to create folders and namespaces to better group functionality within the project. Classes that are coupled to each other tend to go in the same folder/namespace, so that if I need to understand a given class, the related classes are nearby in the Solution Explorer. I usually only create new projects if I realize that a particular piece of functionality is very different in purpose or if there is a common dependency between existing projects.
I usually wind up with a few relatively small Framework projects that define interfaces for loose coupling between other projects, with larger projects for the different types of concrete functionality. That's always at least one project for the UI and one project for logic and data (often split into two projects if the data layer becomes very large in its own right.)
I move code to a new project if it has general functionality (theoretically) usable by other projects too. If the project is large because it represents a complex problem, then namespaces provide a great way to bring order to the code. Here you can, for example, introduce a (sub-)namespace for each SQL table, etc.
