Background
There are a lot of good resources on how to use MVVM/MVP to separate your layers so that the designer and coder can work separately. This isn't a Prism question, but I have also checked several Prism tutorials (including this excellent 4+ hour series by Microsoft's Mike Taulty, and many more).
These tutorials/books/videos explain the internal workings, such as how to pass messages across view models, how to modularize the application, the best security practices and so on.
However, no one talks about how to actually *logically divide* an application (WPF or ASP.Net MVC) so that multiple people can work on it.
Question
How do you generally go about assigning responsibilities to your development team?
Assuming that you use a high-level UML tool, once a high-level diagram is ready but no code has been written, how do you ensure that:
1. The developer(s) working on the UI will know about, and be able to access, the class-library functions that the class-library developers will write?
2. Two libraries written for two different purposes by two different developers will interoperate?
I hope I am not being confusing here. I am just looking for a few good rules of thumb: when two people working on two projects (WPF/Silverlight or ASP.Net MVC) in one solution take two different approaches, how do the methods/classes/functions written by one fit together with the other's?
Thank you
However, no one talks about how to actually *logically divide* an application (WPF or ASP.Net MVC) so that multiple people can work on it.
You don't really need to divide an application so that multiple people can work on it: you can instead use source control such as Team Foundation Server. There's also a free version available for 5 users or fewer.
Tutorial: Getting Started with TFS in VS2010
As I understand your question(s), you want an infrastructure in your project so that people with different skills can work separately. If I am right, "Domain-Driven Design" would be the best approach you can choose.
Domain-driven design (DDD) is an approach to developing software for complex needs by connecting the implementation to an evolving model. The premise of domain-driven design is the following:
Placing the project's primary focus on the core domain and domain logic.
Basing complex designs on a model of the domain.
Initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems.
The term was coined by Eric Evans in his book of the same title.
There is a great project that can help you: the Microsoft Spain - Domain Oriented N-Layered .NET 4.0 Sample App. It is based on simple, easy-to-understand scenarios and is well documented.
Concepts like modularity, layering, etc. are discussed in the project very carefully, which I believe can fulfill your expectations.
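To make the layering idea concrete, here is a minimal sketch of what DDD-style separation can look like in C#. The names (Order, IOrderRepository, OrderService) are invented for illustration and are not taken from the sample app:

```csharp
using System;

// Domain layer: entities and repository contracts; no infrastructure references.
public class Order
{
    public int Id { get; private set; }
    public decimal Total { get; private set; }

    public void AddCharge(decimal amount)
    {
        if (amount <= 0) throw new ArgumentException("Amount must be positive.");
        Total += amount;
    }
}

public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

// Application layer: orchestrates domain objects; the UI team codes against this.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void AddCharge(int orderId, decimal amount)
    {
        Order order = _repository.GetById(orderId);
        order.AddCharge(amount);
        _repository.Save(order);
    }
}
```

Because the UI and infrastructure teams depend only on the interfaces and the service, each group can work on its own layer independently.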
Obviously, you also need source control, such as Team Foundation Server or one of its alternatives, to manage changes to the source code among multiple developers.
When two people working on two projects (WPF/Silverlight or ASP.Net MVC) in one solution take two different approaches, how do the methods/classes/functions written by one fit together with the other's?
You need to maintain a common library for shared functions, and the two developers should hold code reviews every 2 or 3 weeks so they can find their differences and learn from each other.
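One way to make this concrete is to agree on the shared contracts up front, so both developers can compile against the same signatures before either implementation exists. A minimal sketch, with invented names:

```csharp
// Shared contracts assembly (invented names) that both developers reference;
// neither developer references the other's implementation project.
namespace MyApp.Contracts
{
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Implemented by the class-library developer, consumed by the UI developer.
    public interface ICustomerService
    {
        CustomerDto GetCustomer(int id);
        void SaveCustomer(CustomerDto customer);
    }
}
```

The periodic code reviews then mostly need to verify that changes to this contracts library are agreed on by both sides.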
I work on a .NET C# application which contains two solutions, one for the client and one for the server. On the server side there are 80+ projects, used to separate the following architectural layers:
Infrastructure Layer
Integration Layer (External Systems)
Domain Layer
Repository Layer
Manager Layer
Service Layer
In addition, almost every layer has a test project. Now the solution takes 2 to 3 minutes to build, and many developers (including me :)) feel we need to tackle this problem.
Therefore, the proposed solution was to reduce the number of projects by merging them. In my view, it is probably a good way to minimize the build time and achieve what we want.
The proposal is to merge our projects into three areas: one library for production code, one library for test code, and one for deployment projects (WCF host, etc.), with the layers divided logically within the same project by separating the namespaces.
However, my concerns are:
Is this separation good for maintainability, given that each namespace will hold more than a hundred classes, approximately?
If we have common functionality such as helpers, where do we put it?
Is there any other way to layer the solution?
I guess you should split your solution into logical layers.
As for where to put the helpers: make a solution for them, at one of the lowest levels.
EXAMPLE
Software for a farm. You'll need to keep track of your animals and vegetables. You need a module for feeding the animals and one for selling the animals and vegetables on the consumer market.
This could be split into the following solutions:
Back-end
Sell Module: Everything for selling your products
Buy Module: Buying seeds, food for your animals, other products, ...
Scheduler Module: Trigger events for sowing seeds, harvesting, ...
Prediction Module: Predicting harvest quantities from the weather, market prices, ...
...
Each of these back-end modules can have its own Data Access Layer, DTOs, WCF services, ...
These solutions will only contain business logic, data access, and so on. There can be multiple front-end solutions connecting to these back-end solutions.
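As a hedged illustration of one such module boundary, a WCF contract for the Sell module might look like this (the names are invented for the example):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// DTO crossing the service boundary of the Sell module.
[DataContract]
public class ProductOrderDto
{
    [DataMember] public int ProductId { get; set; }
    [DataMember] public int Quantity { get; set; }
}

// The contract every front-end (MVC, WPF, mobile) programs against.
[ServiceContract]
public interface ISellService
{
    [OperationContract]
    void PlaceOrder(ProductOrderDto order);

    [OperationContract]
    decimal GetPrice(int productId);
}
```

Each front-end solution then only needs a service reference (or the shared contract assembly) rather than the back-end projects themselves.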
Front-end
ASP.NET MVC Application: Webshop for selling to a consumer
WPF Application: Approving sales
Other WPF Application: Buying things.
Mobile application: Getting the events to your phone or something.
(Another option is to connect two or more back-end solutions to one front-end solution.)
...
This is a BIG change for your project and it will have an impact. Make sure you think this through if you want to change it.
Multiple solutions will INCREASE your overall build time, and it's important to have a nightly build so every developer can always work with the latest binaries, without having to build all the solutions on his local machine.
Note you can still use your layers in the different solutions:
Infrastructure Layer
Integration Layer (External Systems)
Domain Layer
Repository Layer
Manager Layer
Service Layer
To make this all work together without getting into a mess with binaries, you can map a drive, e.g. X:, with a binaries folder that contains a folder for each solution, where each solution copies its assemblies in the post-build event. (Script this, so it works on every machine.)
If you have a good network infrastructure, you can also copy them to a server. Then when you build all solutions, for example in TFS, it can copy them to a location all developers can access.
If you build in TFS, make sure your build order is correct: first the lowest layer, last the highest layer.
But as you split your application up into solutions, you probably won't need every layer in every solution.
I recently read an article about Onion Architecture; maybe you can have a look at that too. (It's specific to ASP.NET MVC.)
You can also have a look into CQRS.
Why 80+ projects when you only have 6 layers in your application?
You might answer that they cover a large number of functional areas, but do you need all these functional areas in one solution in the first place?
I'd recommend reflecting architectural divisions with projects and functional divisions with solutions. Different solutions can reuse the same projects. This way you'll have one project for each reusable architectural layer and as many Domain projects as there are functional areas.
I definitely wouldn't merge the projects... I think you'll quickly end up with spaghetti code in each layer as the developers take shortcuts (whether they mean to or not) that they shouldn't be taking.
I'd be more inclined to separate the layers out into separate solutions... and use binary references instead of project references across the tiers. This can play havoc with branching though, so be careful.
I've seen build times drop by making the projects build to a common place - apparently this can prevent VS rebuilding projects when it doesn't need to - but I don't know if this is true or not.
Some ideas here: http://blogs.microsoft.co.il/blogs/arik/archive/2011/05/17/speed-up-visual-studio-builds.aspx
Finally.... is the three minutes for a full build or just to unit test one project? Focus on whichever is the biggest issue. If unit testing is taking a long time, you've got a problem with dependencies. If the full solution is taking a long time, get a build server and focus on bringing your unit test development time down.
Hope that helps
A low impact way I've dealt with a problem like that in the past is to create a series of solution files that include just one of the projects and its test project (and perhaps the project's dependencies). Then, get yourself a tool like NCrunch and do most of your coding in these solutions, probably using TDD. This will give you lightning fast feedback loops and is decidedly in the spirit of the layered, decoupled approach. When I've done this in the past, I find that I only actually run the entire application a few times a day, max, and I rely heavily on red-green-refactor, which is nice anyway.
If you want, you don't even have to source control these little solution files -- developers can create their own and they can be borderline throw-away.
Of course, this is by no means a panacea and won't address the problem of long compile times when you want to run the application, but it can definitely help cut down feedback time while promoting good design/development practice, and it has the advantage of being extremely low risk and fast to set up.
I'm currently trying to convince management of the need to port one of our applications to .NET. The application has grown to be a bit of a monster in Access (backend in SQL), with 700 linked tables, 650 forms/subforms, 130 modules and 850 queries.
I pretty much know all the major benefits of doing this, but now need to look at how this can be achieved technically, so I can put a project plan together.
So, my plan was to convert the queries into stored procedures and/or views on the backend and re-write the forms in WPF or WinForms.
Now, the code is where I come unstuck. Is it possible to package up the code-behind and modules into DLLs and consume them while the application is slowly ported to VB/C#?
What we can't be left with is half an application in VB/C# and half in Access, it must 'appear' to all be one application, even half way through the migration.
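One idea I had (a rough sketch, with hypothetical names) is to expose the new .NET classes to the existing Access VBA via COM interop while the port is in progress:

```csharp
using System.Runtime.InteropServices;

// Sketch: a .NET class exposed to Access/VBA through COM interop
// (hypothetical names; register with regasm.exe /codebase after building).
[ComVisible(true)]
[ClassInterface(ClassInterfaceType.AutoDual)]
[ProgId("MyCompany.OrderCalculator")]
public class OrderCalculator
{
    public double ApplyDiscount(double price, double percent)
    {
        return price * (1.0 - percent / 100.0);
    }
}
```

The VBA side would then call CreateObject("MyCompany.OrderCalculator") instead of the old module, so users would never notice the switch. I don't know whether this approach scales to 130 modules, though.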
Thanks in advance.
EDIT: Just some more info about what we do and why we're looking at moving away from Access.
We are essentially an ISV and the Access application is our main product. This application has been developed over a period of 15 years, by many, many developers on an ad hoc basis. There is no documentation for this application.
We also have problems with getting branching in SCC to work properly, so we've currently got 4 or 5 code bases for the half a dozen clients we have. On top of that, all the testing we do is completely manual, which you can imagine is very labour intensive, and only scratches the surface of what really needs to be tested.
We're currently looking to expand, and have a number of sales leads in the final stages. I'm worried that with these new sales we're going to be swamped with support and testing, and that this application is going to become even more entangled and buggy.
I'll also add that we're just about to enter the spec phase of a brand new product, which is almost certainly going to be built in .NET. If we rewrite the Access application in .NET, then the people we use for that can go straight on to this new development. If we stay in Access, then we'd have to bring in some new Access people, who would have to be retrained once we start the new development.
So essentially it has come down to two choices: major refactoring work in Access to try and 'organise' the code a bit better (and those of you who have suggested culling parts are most probably right; I'm sure there are parts that are no longer used), or a rewrite in .NET. However, I fear that if we stay in Access we still won't be able to build in effective testing and we still won't have proper SCC branching, which will lead to support continuing to be a nightmare, and any future development on this product will make things worse. Either way, there is a lot of work we're about to embark on, which is either going to be done in Access or in .NET.
I have been involved in a lot of migration projects converting from one platform to another. I've also seen spectacular cost overruns, and spectacular underestimations of how difficult these types of projects can be.
In one of the projects and platforms I created, which I had built for about $25,000, another team of people took over the project to replace and rewrite the application, and the resulting cost was in excess of $750,000.
You're also making an assumption that the current system needs to be replaced. You MUST have clear in your mind what the actual goals of moving and replacing the current platform and software are. Simply rewriting something and moving the functionality over to another platform yields you very little, except spending a lot of money that really doesn't benefit your business at all (but hey, those developers will take the money if they convince you of the need for doing this).
You might want to read this wonderful article by Joel on Software (Joel, by the way, created this forum, Stack Overflow, and I was a moderator of some of his discussion forums for almost 10 years). In this article, Joel cautions against simply rewriting, out of the blue, perfectly good software, which does not rust or wear out.
Things You Should Never Do, Part I
by Joel Spolsky
They did. They did it by making the single worst strategic mistake that any software company can make:
They decided to rewrite the code from scratch.
Article here: http://www.joelonsoftware.com/articles/fog0000000069.html
Joel notes that, in the past 10 years, that article has remained one of his more popular (and somewhat controversial) ones.
It makes no sense to take a perfectly good application that's been running great for 10 years and doing its job, and simply rewrite it on another platform, especially if you don't have the manpower, expertise, and personnel available to maintain the new system. This is especially so if the new system is not going to accomplish anything more than what the previous system was doing. In fact, if you did have that manpower, they would likely have already started converting this system over time. I mean, why all of a sudden, out of the blue, did someone throw a light switch and realize that new developers need to be brought in to rewrite a system that's already been running fine?
I'll also point out, having been in this business for a long time (as both a published author and a technical editor of Access books), that I have done migration projects from mainframe systems to desktops, and migrations of desktop database systems to large mainframe systems.
I can only say that it is rare to see an application with that many tables. In fact, this issue raises alarm bells right off the bat.
Because of such a large number of tables, I have to think there are likely very many processes, and multiple applications cobbled together, that represent this whole system. If that is not the case, then rewriting in .NET does not make sense unless you address the un-normalized nature of the system. The fact that the data is already in SQL Server helps, but that might just mean you had the horsepower, capacity, and infrastructure to scale something that was poorly designed in the first place.
A very big portion of software flexibility comes from having properly normalized data models. The problem is that you have the data in SQL Server, and it's very tempting to rewrite parts of the forms and functionality as .NET forms and continue to use the current data models. Unfortunately, this puts you between a rock and a hard place, because you want to continue to use the existing data and start rewriting functionality in .NET. However, rewriting functionality in .NET without addressing the data models is a very bad idea.
In an ironic twist of fate, this is a catch-22, because if that system had really fantastically designed data models, you might not even need to redesign and move it into .NET. Access and SQL Server can scale to hundreds of users with ease anyway. And Access supports the use of class objects, and even source code control.
In other words, keep in mind that people might be asking to rewrite this in .NET because they believe the application will then magically have increased flexibility, and be able to be changed faster than their changing business rules. In fact, often the opposite occurs, because Access is a very RAD tool. This means the frontline people can often make modifications due to changing business rules faster than the IT department and its team of developers working away on the next great version of the application. And worse, you don't want to saddle that IT department and those developers with a poor data model.
I mean, are you now going to have the IT department build every single Excel spreadsheet for people because the current business processes are not flexible enough? It would be wonderful if the IT department could go around to everybody's desk, hold their hands, and build the Excel sheets CORRECTLY for everybody, but it's not practical in the real world. So in addition to taking Access away from these people, you might as well take Excel away from them too.
I am just pointing out that my spider sense suggests the data models here are going to be a real challenge. Remember, I would always take poor code and greatly designed data models over the reverse (great code, but terrible data models). The reason is that with great data designs, the code and applications practically write themselves. And with great data models, the ease with which you can adapt to ever-changing business rules again favors great data designs over great code. You can also refactor the code over time WHEN you have good data models. So, with good data models, you can move forms, functionality, and the UI over into .NET, and you can do this seamlessly and easily WHEN the existing data models make sense to keep.
Also, it makes no sense to move to these new technologies unless you're going to introduce the possibility of things like self-serve web portals for the existing business processes. Today we can allow customers to manipulate and use some of the information that is currently locked up in the system. This might be as simple as them checking the status of their orders instead of wasting valuable customer phone time. Or it might be something like how a major package company in Canada saved an estimated $10,000,000 in the first year of implementing their package-tracking system. Or it might be as simple as allowing the customer to look up their account balance.
Right now in the marketplace, these self-serve customer web portals allow customers to enter, use, and get at their information instead of calling up someone employed within the organization, who then turns around, launches the application, and manipulates the information for that customer while on the phone. You might just as well let the customer do this work! From order status, to balances owing, to banking, or whatever it is, the real ticket today is to allow the customer to use a self-serve web portal that exposes all the valuable information that the internal application is creating.
As mentioned, you have to ask where the manpower and personnel are going to come from to build and maintain this application. Obviously the existing system, with the enormous numbers of forms and tables you are throwing out, must somehow have been created, and it represents a significant investment of time and effort. The key question is: where did those significant investments and resources to build the existing application come from, and who is going to maintain the new system? In other words, you need to design the new system to reduce maintenance costs. (New versions of my software can reduce maintenance by as much as 10 or 15 hours per year per customer.)
At the end of the day, good software development and good designs are good designs. Using Access, or VB6, or VB.NET doesn't matter if the system is meeting your business needs now.
I should also point out that the new version, Access 2010, can create web forms. They are XAML (.NET) forms. I am pointing this out because changing the front-end skin from Access to .NET yields you VERY little unless the underlying data structures and designs are also modified to take advantage of new business processes that can be accomplished with new technologies (such as those self-serve web portals). Simply repainting the front ends with .NET forms very much amounts to a waste of money, in my humble opinion, UNLESS other issues are being addressed, such as the data model, or some type of web portal that will improve flexibility here.
You have some great advice here already. Keep in mind this really comes down to the goals and reasons behind these people's desire to have this software rewritten in .NET. Those new goals and desires had better not be based on the pretext of simply remaking the forms you have now in .NET, as that will really accomplish nothing at all, and will not improve their ability to address the changing business needs that the system they are currently using has obviously been meeting in the past.
Good luck with this. I don't think this is the kind of question that can be answered in a simple forum post, but at least you have lots to chew on here so you can get the ball rolling.
Instead of telling you how hard it is -- which I'm sure you already realize -- I'll try to toss out some hints:
1) Start by moving everything you can out of MS Access and into MS SQL. This means tables, stored procedures, views, etc. If you get this step right, your MS Access app will be a front-end for a real database, which is already a win (a sketch of how .NET can later consume those stored procedures follows this list).
2) Consider giving up before you start. Instead of porting everything, it might make more sense to recognize which features can just be left alone, while new ones get WCF or MVC front-ends.
3) It's tempting to port from VBA to VB.NET, on the theory that it's more similar, but I don't recommend this.
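To illustrate hint 1: once the queries live in SQL Server as stored procedures, the eventual .NET front-end can call them through plain ADO.NET. A minimal sketch, assuming a hypothetical dbo.GetCustomerOrders procedure and connection string:

```csharp
using System.Data;
using System.Data.SqlClient;

public static class OrderQueries
{
    // Sketch: calling a ported stored procedure from .NET (hypothetical names).
    public static DataTable GetCustomerOrders(int customerId)
    {
        using (var connection = new SqlConnection(
            "Data Source=.;Initial Catalog=AppDb;Integrated Security=True"))
        using (var command = new SqlCommand("dbo.GetCustomerOrders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", customerId);

            var table = new DataTable();
            new SqlDataAdapter(command).Fill(table);  // opens the connection itself
            return table;
        }
    }
}
```

Nothing here depends on Access, which is the point: the same procedure serves the old front-end and the new one during the transition.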
I'm working in a department which is mainly responsible for replacing old Access applications with .NET solutions. In my company, Access applications are used in simple scenarios to fulfill the business needs of a single employee or a small group of employees.
Sometimes an Access application grows, its group of users grows, or too many changes are needed. In that case the department using that Access application can start a project to recreate it. When this begins, we can be sure that the new application will end up far away from the current Access application.
First, a business analyst is assigned to the project. His responsibility is to map the current solution, and to discuss the problems of the current solution and the expectations for the new solution. I haven't seen a project where the "customer" wanted only a replacement of the current solution. Every time, the customer also wants some new features and extensions which were not possible in Access.
The business analyst creates an initial description of the expected solution, which is passed to architects. The architects decide what type of application will be built, what type of HW infrastructure will be needed, and how the application will be connected to other systems (if needed). After this initial phase, IT has the big picture of the application and the needed changes. An initial estimate is made here so the project can be planned and resources can be allocated. This estimate is a boundary for the project. Then my team starts to do the job.
We use an agile approach, so our customer (an internal team) incrementally sees new features in the application. First we gather an initial set (backlog) of user stories (a special form of requirements); we estimate these user stories and let the customer prioritize them. We choose a subset of user stories for an iteration (usually 2-4 weeks). New user stories can be added to the backlog at any time, but the selected user stories can't change during an iteration. After the iteration, we present the customer with a working part of the software. Based on that working part, the customer can decide to change priorities in the backlog or create new user stories. We repeat this approach until the customer says stop or until the budget is consumed. The important point is that not all user stories have to be done: the project was planned with some budget, and some low-priority user stories don't have to be undertaken.
From the technical point of view it is a project like any other, except for a few differences:
You have an initial database, and you always have to make sure that the parts already implemented in your new solution also have a working migration of the existing data.
You have an existing UI. Users may like it or hate it. Make sure that you understand it, so that the UI you create is not worse than the existing one. I have created applications where the UI had to be completely different, and applications where the UI had to be exactly the same so that users didn't need additional training.
Try to add some new features so that the new application is worthwhile. It is always easier to justify the need for the new application if you can describe newly needed features.
650 forms/subforms is large by any standards. That represents a major conversion project, and a 'slow' port will be a nightmare.
I would suggest developing a new .NET application 'spike' that contains the basic functionality that is absolutely required and then build upon it. At the same time, freeze the Access application from all but essential fixes.
There are a few tools that will convert MS Access forms to .NET, but they will likely fail on complex forms with sub-forms.
'Effortlessly' Convert Access Forms to VB Objects
So if a user is in Access and they take an action to open a form that happens to be a different executable written in C#, won't it 'appear' to be the same application?
There has to be a user group that would love a separate application that only has the 5-10 forms they use.
Getting rid of tables/forms/features that are not used is an increase in functionality. I don't know the level of documentation for this app, but start there. When users find out they have to document areas of an application and justify the need for parts they don't use, they'll volunteer to have it removed.
When/where do you decide to split a large Visual Studio project into multiple smaller projects? When it can be reusable? When the project gets too big? (But how big is too big?)
And when you do split the project, do you:
group by database tables
group by similar functionality
other..
Pros of many projects:
Easier to isolate code for unit testing. I like to isolate code that has a dependency on a big external server, for example: code that talks to the SMTP server gets its own assembly, code that talks to the database gets its own assembly, code that talks to the web server likewise, and code that is pure business logic, like validations, stays separate from all of them.
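For instance, a hedged sketch (hypothetical names) of what that isolation buys you: the SMTP dependency lives behind an interface in its own assembly, so business-logic tests never touch a mail server.

```csharp
// Contracts assembly (hypothetical names): what the business logic sees.
public interface IEmailSender
{
    void Send(string to, string subject, string body);
}

// SMTP assembly: the only project that references System.Net.Mail.
public class SmtpEmailSender : IEmailSender
{
    public void Send(string to, string subject, string body)
    {
        using (var client = new System.Net.Mail.SmtpClient("mail.example.com")) // hypothetical host
        {
            client.Send("noreply@example.com", to, subject, body);
        }
    }
}

// Unit tests substitute a fake IEmailSender and never open a network connection.
```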
Pros of few projects:
Visual Studio goes faster.
Some developers just don't get your vision about dividing up responsibilities and will start putting classes everywhere, so you end up with the pain of extra projects and the benefits of putting everything into one project.
Each project has a configuration, and when you make a decision about project configuration you often have to make the same change everywhere, such as setting or changing the strong-name key.
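One common mitigation for that (a sketch of a well-known pattern, not something from the list above) is a single SharedAssemblyInfo.cs that every project includes via "Add > Existing Item > Add As Link", so settings such as the version or strong-name key change in one place:

```csharp
// SharedAssemblyInfo.cs -- linked (not copied) into every project in the solution.
using System.Reflection;

[assembly: AssemblyCompany("MyCompany")]        // values are placeholders
[assembly: AssemblyProduct("MyProduct")]
[assembly: AssemblyVersion("1.2.0.0")]
[assembly: AssemblyFileVersion("1.2.0.0")]
// Strong naming can be centralized the same way if you sign via attributes:
// [assembly: AssemblyKeyFile(@"..\keys\product.snk")]
```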
Pros of many Solutions
You hit the maximum project level later.
Only the stuff in your current solution gets compiled every time you hit F5.
If the project isn't expected to change in the life of your application, why re-compile it over and over? Call it done and move it to its own solution.
Cons of many Solutions
It's up to you to work out the dependencies between solutions and manually compile the dependencies first. This leads to complicated build scripts.
Projects should be cohesive: the logic should be related and accomplish a similar goal.
This answer depends on the size of the product you are supporting. In general we organize our projects along domain and logic, and we divide those even further; the more you divide, the more organized you must be, or you are going to hit the dreaded recursive-dependency issue.
When I do choose to break up a project, it is when it grows too large or when two areas are becoming too similar.
When complexity is rising, I do not split by tables; I generally split by functionality.
Reusability is another excellent occasion to reduce lines of code as well as introduce a new project. However, be careful how many "utility" libraries you introduce, because they do have an impact on readability/understandability.
I do not think there is a line in the sand that says that if you hit 3k SLOC you have too much. It is all contextual.
I always have several projects (and therefore a solution), instead of one project with all of my source in it.
In some cases it is unavoidable, because you are using an open-source library and want to be able to debug it. But more pragmatically, I typically have my applications provide functionality via plugins. This allows me to change the behavior, or offer user-selectable behavior, at runtime. In the non-plugin case, it allows you to update one portion of your program without updating everything. There are also cases where you can ship the main application on its own and only download the modules/assemblies when you need them.
One other reason is that you can create smaller test apps to exercise an assembly, rather than building a very large solution and potentially requiring a user to execute several (and irrelevant) GUI operations before even reaching the part you want to test. And this isn't just a testing concern -- maybe you have less-savvy users in your organization that only want to be presented with the bits that concern them.
When the overall purpose of the project remains the same, but the number of classes is becoming large, I tend to create folders and namespaces to better group functionality within the project. Classes that are coupled to each-other tend to go in the same folder/namespace, so that if I need to understand a given class, the related classes are nearby in the Solution Explorer. I usually only create new projects if I realize that a particular piece of functionality is very different in purpose or if there is a common dependency between existing projects.
I usually wind up with a few relatively small Framework projects that define interfaces for loose coupling between other projects, with larger projects for the different types of concrete functionality. That's always at least one project for the UI and one project for logic and data (often split into two projects if the data layer becomes very large in its own right.)
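As a small illustration of that shape (invented names): the Framework project holds only the interface, and the concrete projects depend on the Framework rather than on each other.

```csharp
// Framework project: defines the seam, nothing else.
public interface IReportExporter
{
    void Export(string reportName, string outputPath);
}

// Concrete project: one of possibly several implementations.
public class PdfReportExporter : IReportExporter
{
    public void Export(string reportName, string outputPath)
    {
        // PDF-specific logic lives here; UI code only ever sees IReportExporter.
    }
}
```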
I move code to a new project if it has general functionality (theoretically) usable by other projects too. If the project is large because it represents a complex problem, then namespaces provide a great way to bring order to the code. Here you can, for example, introduce a (sub-)namespace for each SQL table, etc.
We have a base product with bespoke development for each client that extends and overrides the functionality of the base.
We also have a custom framework that the base product sits on top of.
We use inherited forms to override the base functionality, and to date all the forms and classes have been lumped into the same projects, i.e. UI, Data, Business...
We now need to clean up the code base to allow multiple client projects to run off the base product at once, and I was looking for advice in the following areas:
Ways of organising the solution to fit the above requirements; the number of projects in the solution is quite large and we want to reduce it to increase developer productivity. We are thinking of making the Framework references DLL references instead of project references.
Are there any build and deployment tricks we are missing? We currently have a half-automated build and release process.
What is the best way to manage versioning?
Any best practices for product development?
I personally strongly believe that a highly modular architecture will fit here nicely: the core application should provide basic/common services, and all customer-specific functionality should be implemented as plug-ins (think MEF; see the sketch after this list). Hence, several thoughts:
I'd go for one solution for the core application, plus an additional solution for each and every customer.
A one-step build is a must. Just invest some time in writing a handful of MSBuild scripts: this will pay off tenfold.
See APR's Version Numbering for inspiration.
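A minimal MEF sketch of that plug-in shape, assuming .NET 4's System.ComponentModel.Composition (the ICustomerModule contract and folder layout are invented):

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// Contract defined by the core application (invented name).
public interface ICustomerModule
{
    void Initialize();
}

// Lives in a customer-specific assembly dropped into the plug-ins folder.
[Export(typeof(ICustomerModule))]
public class AcmeCustomizations : ICustomerModule
{
    public void Initialize() { /* customer-specific behaviour */ }
}

// Core application: discovers and loads whatever plug-ins are present.
public class ModuleLoader
{
    [ImportMany]
    public IEnumerable<ICustomerModule> Modules { get; set; }

    public void LoadFrom(string pluginDirectory)
    {
        var catalog = new DirectoryCatalog(pluginDirectory);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);   // fills the Modules collection

        foreach (var module in Modules)
            module.Initialize();
    }
}
```

The core solution builds without any customer code; each customer solution only has to produce assemblies that export the contract.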
Too broad a question.
I can give you advice on your first question, and maybe a little on the fourth: if I were you, I would go with a framework DLL solution that can easily be managed and further developed by a team, and different solutions for each subsequent project. However, the framework solution would have to be properly developed, with extra care given to one design principle, the open/closed principle [1], so that future development of the framework does not break the existing implementations. (A small sketch of the idea follows the reference below.)
[1] http://en.wikipedia.org/wiki/Open/closed_principle
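To make the open/closed idea concrete, here is a small sketch with invented names: client projects extend the framework through its extension points instead of editing it.

```csharp
// Framework solution: closed for modification, open for extension.
public abstract class BaseForm
{
    public void Display()
    {
        ApplyBranding();                      // extension point
        // ... common rendering logic shared by every client ...
    }

    // Clients override behaviour here instead of editing the framework.
    protected virtual void ApplyBranding() { }
}

// Client solution: bespoke behaviour without touching the framework DLL.
public class AcmeClientForm : BaseForm
{
    protected override void ApplyBranding()
    {
        // Acme-specific look and feel.
    }
}
```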