In an effort to introduce reusable code at my new employer, I've elected to create a class library that will be referenced by 200+ existing small applications. This library contains logging, database connection logic, etc.
Is there a way to set up TFS Online's build service to automatically determine which projects reference this common library as a NuGet package? I'd like them to build after (or as part of) the CI build for the common library.
The projects that will depend on the NuGet package do exist in the same TFS Team Project, but they are not in the same branches; each application has its own set of branches.
Not really, and I'd say that what you want to do kind of defeats the purpose of NuGet.
You have 200 applications consuming this common library. The common library presumably works. Awesome. When you release a new production-stable version of the package, you should bump its version number and let everything that's using the old version continue to do so.
It should be the responsibility of the consumer of that library to choose whether to update it or not when a newer version is made available. The team responsible for each application should be able to make a conscious decision to upgrade the component.
Also, keep the single responsibility principle in mind. Having a "god assembly" that contains logging, database logic, and other totally unrelated stuff sounds like a really bad idea, especially if these things are going to continue to evolve over time. You'll bump into a situation where an application needs New Feature X in the database piece, but unfortunately someone made Unrelated Breaking Change Y in the logger logic a few weeks ago. Now you have to integrate Unrelated Breaking Change Y into your application even if you don't want or need it.
I started at a new company which manages multiple projects (around 30). However, all their projects are in one Git repository. I now want to split all our projects into one Git repository per project. To achieve that, I went ahead and extracted every folder into a new folder containing its own Git repository.
However, some references broke. While investigating, I found that project referencing was done in multiple ways, depending on the project:
1. Including the entire solution/project in the current solution.
2. Referencing the .csproj file of another solution.
3. Referencing the built .dll (bin/debug).
In my opinion, the first way should not be the way to go, right? It seems like way too much overhead. So I'm split between 2 and 3, and I would like to hear how you are doing it.
Looking forward to your input!
It's normal to have code you want to share between multiple solutions.
For this, we use projects like 'Infrastructure' or 'Logging' with their own CI builds. When they are done, we create a release build that uploads the DLLs to a private NuGet server.
These projects are then included as DLLs in the other projects through NuGet and updated when needed. You also don't break other solutions when you change something in your logging library, because each solution has to update to the new logging version first.
What I do is host a NuGet server in the company; you can also use Azure DevOps for that: https://learn.microsoft.com/en-us/azure/devops/artifacts/get-started-nuget?view=azure-devops.
After you set up the NuGet server you can publish and consume packages for each project. So when you update the code of a shared project, push it to the NuGet server and then update all the other projects that use it.
Your question sounds like "How to split a solution into smaller solutions".
30 projects is not that much; around 30 C# projects is just the point at which splitting your solution starts to make sense. The solution is also the natural base unit for a repository.
If you analyze the dependencies of your C# projects, you can certainly form clusters: there are basic projects, referenced by everything, and front-end parts, referenced by nothing.
Basic projects (which depend on nothing but are referenced by many) tend to be more stable and change less frequently; they are also more dangerous to change, with a higher risk of breaking changes. It's good to make access to them more complicated (i.e. put them in a different solution), so you do not change them frivolously just because you can see and edit the source code.
The code and architecture become cleaner, since programmers tend to use a wrapper, derived classes, or interfaces to do what they want inside their active solution.
They no longer change the solutions they depend on as quickly and easily, so those stay more stable.
You can consider a solution a product of its own, either a library or a final product.
So split projects by what will potentially be reused in upcoming projects over the next years, versus what is used as a product for one client only.
Suppose you start a new product next week: which projects would you most likely include there? They belong in a library.
It also simplifies life for new programmers if you group your C# projects into such clusters: you can tell them "just use it, you don't need to dig into the source code" or "get familiar with this solution only", and they are not so overwhelmed by the quantity.
Branching is also done per solution: you create a branch of one solution for a client request, and a branch of another solution to stay up to date with technology. This is much easier to handle with smaller bundles of projects.
A NuGet server, as proposed by others, is a good way to distribute updates. Without a server you link to DLLs directly. If you do not have many updates, you either invest time in setting up the server or in copying a few DLLs around twice a year; one is not more complicated or time-consuming than the other. Manual jobs done by different people carry the risk of human error, but the task "copy all DLLs from one directory to another directory" can still work. Do not reference the output directory of one solution directly from the other solution. Put the "productive" DLLs in a separate directory and do your update by explicitly saying "yes, I want to update - use it now". An "automatic update" that happens just because someone decides to build the other solution can cause trouble.
I fully understand what NuGet/OpenWrap were primarily made and designed for, and how they have been adopted and applied since their release a while ago.
I can, however, see other ways to use them. One of the things I was thinking of concerns runtime dependencies.
The enterprise product suite I'm working on basically comes with a core that consists of various services, plus optional modules. These modules plug right in to provide specific functionality and form unique solutions per requirements. These unique solutions get deployed to remote servers in-house, in data centers, in the cloud, on your patio... pretty much anywhere.
Needless to say, deploying updates for bug fixes and maintenance is complicated and has to be carried out manually, which has proven to be error-prone and clumsy, especially since interface revisions and other components have to match and major deployments usually require a deployment of each and every module.
Personally I'm not a big fan of creating installer packages (MSI, Web Installer, etc.) for every unique solution as this would get out of hand soon and doesn't scale very well.
I was wondering whether a package manager and custom feeds could help us streamline this process. Maybe I'm thinking in the wrong direction; I would appreciate comments and thoughts.
We've done that successfully. OpenWrap can simply be called to update packages into specific directories. Deploying an app is then just a matter of adding a new descriptor with the packages you want to see deployed and letting OpenWrap do the resolution for you.
This works especially well because OpenWrap has the concept of a system repository (which is per user), which can also be redirected (in case you want to partition multiple repositories, one per application, or for testing...).
Deploying a new app is then only a matter of either adding a new folder with an associated descriptor, or adding the application straight into the system repository. Auto-update can be implemented by simply running the OpenWrap command-line tools in a batch job.
If you want to go one level up, you can make your application composite by leveraging the OpenWrap API, and adding / removing packages dynamically. We have runtime assembly resolving available.
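To illustrate the runtime assembly resolving idea, here is a minimal sketch using plain .NET's AssemblyResolve hook. This is not the OpenWrap API itself, just the general mechanism; the package folder path and class name are made up for the example.

    using System;
    using System.IO;
    using System.Reflection;

    static class PackageAssemblyLoader
    {
        // Made-up folder for the example; a real setup would point at whatever
        // directory your package manager resolves packages into.
        const string PackageRoot = @"C:\packages\current";

        public static void Install()
        {
            AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
            {
                // args.Name is the full assembly name, e.g. "Logging, Version=1.2.0.0, ..."
                string shortName = new AssemblyName(args.Name).Name;
                string candidate = Path.Combine(PackageRoot, shortName + ".dll");
                return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
            };
        }
    }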
We are in a situation whereby we have 4 developers with a bit of free time on our hands (talking about 3-4 weeks).
Across our code base, for different projects, there is a fair amount of framework-y code that gets rewritten for every new project we start. Since we have some free time on our hands, I'm in the process of creating a "standard" set of libraries that all projects can reuse, such as:
Caching
Logging
Although the two above rely on libraries such as Enterprise Library, each new project writes its own wrappers around them, etc., so we're consolidating all this code.
I'm looking for suggestions on the standard libraries that you have built in-house and share across many projects.
To give you some context, we build LOB internal apps and public-facing websites, i.e. we are not a software house selling shrink-wrapped products, so we don't need stuff like a licensing module.
Any thoughts would be much appreciated - our developers are yearning to write some code, and I would very much love to give them something to do that would benefit the organization in the long run.
Cheers
Unit testing infrastructure - can you easily run all your unit tests? Do you have unit tests?
Build process - can you build and deploy an app from scratch with only one or two commands?
Some of the major things we do:
Logging (with some wrappers around TraceSource)
Serialization wrappers (so you can serialize/deserialize in one line of code; see the sketch after this list)
Compression (wrappers for the .NET functionality, to make it so you can do this in one line of code)
Encryption (same thing, wrappers for .NET Framework functionality, so the developer doesn't have to work in byte[]'s all the time)
Context - a class that walks the stack trace to bring back a data structure that has all the information about the current call (assembly, class, member, member type, file name, line number, etc)
etc, etc...
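As a rough sketch of the "one line of code" wrapper idea, here is what a serialization helper could look like. The class and method names are made up for the example; the real library may of course look different.

    using System.IO;
    using System.Xml.Serialization;

    // Hypothetical helper; names are illustrative only.
    public static class Serializer
    {
        public static string ToXml<T>(T instance)
        {
            var xs = new XmlSerializer(typeof(T));
            using (var writer = new StringWriter())
            {
                xs.Serialize(writer, instance);
                return writer.ToString();
            }
        }

        public static T FromXml<T>(string xml)
        {
            var xs = new XmlSerializer(typeof(T));
            using (var reader = new StringReader(xml))
            {
                return (T)xs.Deserialize(reader);
            }
        }
    }

    // Usage: one line each way.
    // string xml = Serializer.ToXml(order);
    // Order order = Serializer.FromXml<Order>(xml);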
Hope that helps
ok, most importantly, don't reinvent the wheel!
Spend some time researching libraries which you can easily leverage:
For logging I highly recommend Log4Net.
For testing, NUnit.
For mocking, Rhino Mocks.
Also, take a look at Inversion of Control containers; I recommend Castle Windsor.
For indexing I recommend Solr (on top of Lucene).
Next, write some wrappers:
These should be the entry points of your API (common library, but think of it as an API).
Focus on abstracting all the libraries you use internally in your API, so that if you no longer want to use Log4Net or Castle Windsor, you can swap them out, by writing well-structured abstractions and concentrating on loosely coupled design patterns.
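A minimal sketch of what such an abstraction could look like for logging, assuming you hide Log4Net behind your own interface. The interface and class names are illustrative, not a standard API.

    using System;
    using log4net;

    // Consumers of your API reference only this interface, never log4net directly.
    public interface IAppLogger
    {
        void Info(string message);
        void Error(string message, Exception ex);
    }

    // The only class that knows about log4net; swap it out to change providers.
    public class Log4NetLogger : IAppLogger
    {
        private readonly ILog _log;

        public Log4NetLogger(Type owner)
        {
            _log = LogManager.GetLogger(owner);
        }

        public void Info(string message)
        {
            _log.Info(message);
        }

        public void Error(string message, Exception ex)
        {
            _log.Error(message, ex);
        }
    }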
Adopt Domain-Driven Design:
Think of APIs as domains: modular abstractions that internally use other common APIs, like your common data access library.
Suggestions:
I'd start with a super flexible, general DAL library that makes it easy to access any type of data across multiple storage media.
I'd use Fluent NHibernate for the relational DB stuff, and I'd have all the method calls into your data access layer support LINQ, since it's a C# language feature.
Use LINQ to query databases, indexes, files, XML, etc.
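As a sketch of that idea, assuming a hypothetical repository abstraction that exposes IQueryable so the same LINQ syntax works regardless of the backing store:

    using System.Linq;

    // Hypothetical abstraction; the names are illustrative only.
    public interface IRepository<T>
    {
        IQueryable<T> Query();
    }

    public class Customer
    {
        public string Name { get; set; }
        public string City { get; set; }
    }

    public class CustomerService
    {
        private readonly IRepository<Customer> _customers;

        public CustomerService(IRepository<Customer> customers)
        {
            _customers = customers;
        }

        public Customer[] FindByCity(string city)
        {
            // The same LINQ whether the store is NHibernate, an index, or in-memory objects.
            return _customers.Query()
                             .Where(c => c.City == city)
                             .OrderBy(c => c.Name)
                             .ToArray();
        }
    }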
Here is one thing that can keep all developers busy for a month:
Run your apps' unit tests in a profiler with code coverage (nUnit or VS Code Coverage).
Figure out which areas need more tests.
Write unit tests for those sub-systems.
Now, if the system was not written using TDD, chances are it's very monolithic and will require significant refactoring to introduce test surfaces. Hopefully, at the end of it, you end up with a more modular, less tightly coupled, more testable system.
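As an example of introducing a test surface during that refactoring, here is a minimal NUnit-style sketch with made-up names: the calculator no longer talks to the database directly, so it can be tested in isolation.

    using NUnit.Framework;

    // Seam introduced by refactoring: a dependency the test can replace.
    public interface IRateProvider
    {
        decimal GetRate(string region);
    }

    public class PriceCalculator
    {
        private readonly IRateProvider _rates;

        public PriceCalculator(IRateProvider rates)
        {
            _rates = rates;
        }

        public decimal Total(decimal net, string region)
        {
            return net * (1 + _rates.GetRate(region));
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        private class FixedRate : IRateProvider
        {
            public decimal GetRate(string region) { return 0.2m; }
        }

        [Test]
        public void Total_applies_the_regional_rate()
        {
            var calc = new PriceCalculator(new FixedRate());
            Assert.AreEqual(120m, calc.Total(100m, "EU"));
        }
    }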
My attitude is that one should almost never write standard libraries. Instead, one should refactor existing, working code to remove duplication and improve ease of use and ease of testing.
The result will be very much like a "standard library", except that you will know that it works (you reran your unit tests after every change, right?), and you'll know that it will be used, since it was already being used. Otherwise, you run the risk of creating a wonderful standard library that isn't used and doesn't work when it is used.
A previous job hit a little downtime while the business sorted out what the next version should be. There were a few things we did that helped:
Migrated from .NET Remoting to WCF
Searched for pain points in the code that all devs just hated working with, and refactored them
Introduced a good automated build system that would run unit tests and send out emails for failed builds. It would also package that version and place it in a shared directory for QA to pick up
Scripted the DB so that you could easily upgrade the database rather than being forced to take an out-of-date copy polluted with irrelevant data that other devs had been playing with
Introduced proper bug tracking and triage process
Researched how we could migrate from WinForms to WPF
Looked at CAB (the Composite Application Block) or plug-in frameworks so configuration would get simpler (at that time, setup and configuration took a tremendous amount of time)
Other things I would do now might be
Look at PostSharp to weave cross-cutting concerns, which would simplify logging, exception handling, or anywhere code is repeated over and over again
Look at AutoMapper so that conversions from one type to another are driven by configuration rather than by changing code in many places (see the sketch after this list)
Look at education around TDD (if you don't do it already) or BDD-style unit tests
Invest time in streamlining automated integration tests (as these are difficult to set up and configure manually, they tend to get dropped from the SDLC)
Look at the viability of dev tools such as ReSharper
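Here is the sketch referred to above for the AutoMapper point. The exact API varies by AutoMapper version, so treat the types and calls as illustrative.

    using AutoMapper;

    public class Order    { public int Id { get; set; } public string CustomerName { get; set; } }
    public class OrderDto { public int Id { get; set; } public string CustomerName { get; set; } }

    public static class MappingExample
    {
        public static OrderDto ToDto(Order order)
        {
            // In real code the configuration is created once at startup;
            // adding matching properties to both types needs no extra mapping code.
            var config = new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderDto>());
            var mapper = config.CreateMapper();
            return mapper.Map<OrderDto>(order);
        }
    }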
HTH
As an ISV company we are slowly running into the "structure your code" issue. We mainly develop using Visual Studio 2008 and 2010 RC, with C# and VB.NET. We have our own Team Foundation Server and of course we use source control.
When we started developing based on the .NET Framework, we also began using namespaces in a primitive way. Over time we "became more mature", I mean we learned to use namespaces and structured the code more and more, but only within the scope of a single solution.
Now we have about 100 different projects and solutions in our source control. We realized that many of our own classes are coded very redundantly: a Write2Log, GetExtensionFromFilename, or similar function can be found anywhere from one to 20 times across all these projects and solutions.
So my idea is:
Creating one single root folder in source control and starting our own namespace hierarchy below this root; let's name it CompanyName.
A Write2Log class would then be found in CompanyName.System.Logging.
Whenever we create a new solution or project and need a log function, we will 'namespace' that solution and place it accordingly somewhere below the CompanyName root folder. To get the logging functionality, we then import (add) the existing project to the solution.
The Write2Log class used by those 20+ projects/solutions can then be maintained in one single place.
To my questions:
- Is that a good idea, this philosophy of namespaces and source control?
- There must be a good book explaining namespaces combined with source control, right? Any hints/directions/tips?
- How do you manage your 50+ projects?
Here's how we do it (we're also an ISV, and we use TFS):
We have an in-house framework that all of our products use. The framework includes base classes for our Data Access Layer, services like logging, utility features, UI controls, etc.
So, we have a Team Project for our framework:
Framework\v1.0\Main\Framework
(note the repetition of "framework", looks weird, but it's important)
Then we have a Team Project for each product, and we branch the framework into the team project:
ProductName\v1.0\Main\ProductName
ProductName\v1.0\Main\Framework (branched from \Framework\v1.0\Main\Framework; we make this branch read-only)
Any code under "\Main\ProductName" can reference any code under "\Main\Framework".
Further, if we need to create working branches of our product, we just branch at "Main" like so:
ProductName\v1.0\WIP\MyBranch\ (branched from Main, where MyBranch == Main)
That gives us 2 really cool features:
I can create branches without messing up my references, as long as I keep everything below "Main" together. This is because VS uses relative paths for the references, and as long as I keep everything below Main together (and I do NOT reference anything "above" Main), the relative paths remain intact.
If I update the "real" framework (under \Framework\v1.0), I can choose, for each product, when I want to merge those framework updates into the product's code base.
That's really useful if you use shared libraries, because it decouples internal releases of your shared framework from external releases of your products. If you are just moving to shared libraries, one of the problems you are going to encounter is "collisions", where a change to your shared code mandates changes to your product code in order to stay compatible. By branching your shared code, you can update your framework without immediately impacting all of your products at the same time.
We have a base product with bespoke development for each client that extends and overrides the functionality of the base.
We also have a custom framework that the base product sits on top of.
We use inherited forms to override the base functionality, and to date all the forms and classes have been lumped into the same projects, i.e. UI, Data, Business...
We now need to clean up the code base to allow multiple client projects to run off the base product at once, and I was looking for advice in the following areas:
Ways of organising the solution to fit the above requirements. The number of projects in the solution is quite large and we want to reduce it to increase developer productivity; we are thinking of making the framework DLL references instead of project references
Are there any build and deployment tricks we are missing? We currently have a half-automated build and release process
What is the best way to manage versioning
Any best practices for product development
I personally strongly believe that a highly modular architecture will fit here nicely: the core application should provide basic/common services, and all customer-specific functionality should be implemented as plug-ins (think MEF; see the sketch after these points). Hence, several thoughts:
I'd go for one solution for core application plus additional solution for each and every customer.
One-step build is a must. Just invest some time in writing a handful of MSBuild scripts: this will pay off tenfold.
See APR's Version Numbering for inspiration.
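Here is the sketch referred to in the plug-in point above: a minimal MEF composition where the core discovers customer modules from a folder. The interface, class, and folder names are made up for the example.

    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    // Contract that every customer-specific module implements.
    public interface ICustomerModule
    {
        void Initialize();
    }

    // Lives in a customer-specific assembly dropped into the plug-in folder.
    [Export(typeof(ICustomerModule))]
    public class ContosoModule : ICustomerModule
    {
        public void Initialize() { /* customer-specific wiring */ }
    }

    public class CoreHost
    {
        [ImportMany]
        public IEnumerable<ICustomerModule> Modules { get; set; }

        public void LoadPlugins()
        {
            // The core scans a folder; shipping a customer build is largely a matter
            // of which module assemblies you deploy next to the core.
            var catalog = new DirectoryCatalog("Plugins");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);

            foreach (var module in Modules)
                module.Initialize();
        }
    }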
Too broad a question.
I can give you advice on your first question and maybe a little on the fourth: if I were you, I would go with a framework DLL solution that can easily be managed and further developed by a team, and separate solutions for each subsequent project. However, the framework solution would have to be properly developed, with extra care for one design principle, the Open/Closed Principle [1], so that future development of the framework does not break the existing implementations.
[1] http://en.wikipedia.org/wiki/Open/closed_principle
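A minimal sketch of what "closed for modification, open for extension" can look like in the framework DLL. The names are made up; the point is that client solutions derive from the framework instead of editing it.

    // Framework assembly: the base class stays untouched between framework releases.
    public abstract class ExportJob
    {
        public void Run()
        {
            string data = LoadData();
            Write(data);
        }

        protected abstract string LoadData();

        // Extension point: clients override behaviour without modifying the framework.
        protected virtual void Write(string data)
        {
            System.IO.File.WriteAllText("export.txt", data);
        }
    }

    // Client solution: extends the framework rather than changing it.
    public class CrmExportJob : ExportJob
    {
        protected override string LoadData()
        {
            return "customer data";
        }
    }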