Is Noda Time applicable in this case? - C#

The project I work on is basically a data collector. By way of orientation, it may help to think of it as Wireshark (or equivalent) with parsing/analysis capabilities at the Application layer (OSI Layer 7). The current version is a legacy MFC application with 15+ years under its belt. It still works, but maintenance, stability, and scalability are real concerns that we are facing. The project leadership team has recently agreed that we need to begin developing the next generation of the product, and we are targeting .NET since the product is strictly a Windows desktop application.
Given that our users routinely analyze log files collected literally around the world, message timestamping is very important. The current product uses _ftime_s() to assign a timestamp, and I had assumed we would simply use System.DateTime.UtcNow to get timestamps on the .NET side. That is, until I read about Noda Time. Now I'm thinking that our problem domain requires more care around time-related functionality than I had ever cared to consider.
So a few questions.
From the description I've provided above, does it make sense to incorporate noda-time using NodaTime.Instant for timestamping?
Given a choice, I'd much rather pay for dedicated support than use an open-source project, for fear (paranoia?) that the project will be abandoned. Any thoughts or guidance on this point from those more inclined to embrace the open-source philosophy?
Noda Time is currently in its 2nd beta. Is there a target date for NodaTime 1.0.0?

As Matt says, you could easily just use DateTimeOffset to represent an instant in time. I'd argue it's not as clear as using Instant, in that it suggests you might actually be interested in a local time and an offset, rather than it really just being a timestamp - but if this is the only reason you'd use Noda Time, it would make sense to stick with DateTimeOffset.
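For comparison, here is a minimal sketch of the two options; this is just an illustration, not from either answer, and it uses the Noda Time 1.x API:

    using System;
    using NodaTime;

    class TimestampSketch
    {
        static void Main()
        {
            // BCL: DateTimeOffset carries a local time plus an offset, which can
            // suggest you care about local time even when you only want a timestamp.
            DateTimeOffset bclStamp = DateTimeOffset.UtcNow;

            // Noda Time: an Instant is nothing but a point on the global timeline,
            // which states the "timestamp" intent directly. (In Noda Time 2.x this
            // call became SystemClock.Instance.GetCurrentInstant().)
            Instant nodaStamp = SystemClock.Instance.Now;

            Console.WriteLine("BCL:  {0:o}", bclStamp);
            Console.WriteLine("Noda: {0}", nodaStamp);
        }
    }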
This is a reasonable fear, but you have my personal word that I'm not about to abandon Noda Time. Of course, the reverse argument is that if I did abandon it, you'd still be able to patch it - whereas if you used a commercial product and the company folded, you'd be stuck :) I do understand the concern though.
As it happens, I'm hoping to release v1.0.0 today :)

Related

How to avoid rewriting a VB6 application from scratch when migrating to .NET

In our company we develop and sell a VB6 application and we think it's about time to migrate it to .NET.
The main reasons are:
We expect VB6 runtime support to end at some point, and we do not want to start the migration only then, since it's probably going to be a lengthy process.
There are just 1 1/2 VB6 developers left. The half one being me.
More and more customers are asking for features like cloud and mobile device support.
I know that rewriting an application from scratch is the least recommended way of migrating to .NET. I totally agree with that! Throwing away over a decade of code feels just wrong, and would be such a waste of money spent that I have a hard time recommending and justifying it to our management.
But right now I don't see another way to do it.
Let me tell you a little bit about the application:
Like I said, it has been developed for over a decade. There have been numerous developers working on it, most of them rather inexperienced at the time. We have one developer left from the initial team. That application has been his first and biggest software project, and by now he realizes that many of the architectural decisions made over the last 15 years were horribly wrong; others were right at the time but were never refactored to match changes made in other parts of the application, and so became wrong at some point. This application seems to be a showcase example of code rot.
We are talking about an application of about 150 KSLOC, all in one single executable. It uses about 15 external DLLs, some of them third-party ActiveX controls, some of them our own .NET assemblies.
Adding new features to the application is still possible and being done, but it takes ages compared to our other .NET applications. The reason is that every little change in the codebase requires changes all over the place. The only reason changes are possible at all is that one developer simply knows most of the dependencies and quirks of the application. As you might have guessed, the rate of unexpected side effects and bugs is quite high.
My first thought about migrating that application was to first clean up and refactor, then migrate/convert possibly using tools from Artinsoft/Microsoft/WhoEver and then refactor again to get a nice and clean .NET application.
But I see some problems:
There seems to be no way of refactoring the old application. There is no automated testing whatsoever, not even a formal method for manual testing. Every little change requires manual testing by experienced users who just know where defects might hide.
On the other hand, I have established a process and set of tools for testing our .NET applications, which gives us a solid base for making refactorings.
Converting that code to .NET without major refactoring feels like: garbage in, garbage out. I hate calling the old application garbage, though, because somehow it works and has proven itself useful.
Our management has a habit of explicitly demanding quick-and-dirty solutions, disregarding the effect this has on productivity and against all recommendations from the development team, which has at some point started to deny the existence of quick-and-dirty solutions in order to be able to do things right. That does not mean that we polish features, but we do include the time to write tests and do refactoring in our estimates. So, knowing this, I suspect that once the code is converted to .NET and fixed to the point where the application starts and seems to work, the refactoring phase will be canceled and the application will be shipped to some customers.
So what I think is that, despite the fact that rewriting from scratch will take a lot of time and resources, it might still be our only option.
Am I missing an option? Do you see possibilities of not having to rewrite that application?
I suggest that you take a step back and read this paper by Brian Foote & Joseph Yoder (University of Illinois). It provides some architectural insight into the problem you have and options to solve it. It's titled 'Big Ball of Mud' (please don't laugh, it is a serious paper). Here is the abstract:
While much attention has been focused on high-level software architectural patterns, what is, in effect, the de-facto standard software architecture is seldom discussed. This paper examines the most frequently deployed architecture: the BIG BALL OF MUD. A BIG BALL OF MUD is a casually, even haphazardly, structured system. Its organization, if one can call it that, is dictated more by expediency than design. Yet, its enduring popularity cannot merely be indicative of a general disregard for architecture.

These patterns explore the forces that encourage the emergence of a BIG BALL OF MUD, and the undeniable effectiveness of this approach to software architecture. In order to become so popular, it must be doing something right. If more high-minded architectural approaches are to compete, we must understand what the forces that lead to a BIG BALL OF MUD are, and examine alternative ways to resolve them.

A number of additional patterns emerge out of the BIG BALL OF MUD. We discuss them in turn. Two principal questions underlie these patterns: Why are so many existing systems architecturally undistinguished, and what can we do to improve them?
BTW, I think your best option is to use the current application as your requirements specification and rewrite everything in VB.NET or C# using a proper design.
There are four main options when you have an application like this:
Do nothing: this is always an option; as everybody knows, if it ain't broke, don't fix it. However, this might not be an option for several reasons, such as needing to comply with security requirements at the company, or simply because one of the components doesn't work on new platforms.
Rewrite: This would be the dream, right? Being able to get rid of all the bad practices and duplicated code and so on? Well, it might be that way; however, you have to think about all the risks involved in developing a new application from scratch. Do you have all the formal requirements? What about test cases? Does your team know every little detail in the code, or would you need to go line by line trying to figure out why that if is there? Also, how many bugs would you expect to introduce along the way?
Buy something off-the-shelf: Since you are an ISV, this won't be an option.
Migrate: Of course you'll be bound by the programming practices used for the original development, but you'll get to a new platform faster: all your business logic will be automatically migrated, you can actually hire developers for the new platform, and you can get rid of the legacy elements. From there you can also take advantage of all the tools available for refactoring code, continuous integration, unit testing, etc.
Also, with an automatic migration you can actually go further than just WinForms. There are also tools that can take your C# code all the way to the web using a modern architecture.
Of course, I work for Mobilize.Net (previously Artinsoft) and this is my biased perspective.
We've been working on this for around 15 years and have seen dozens of clients who come to us after trying to rewrite their application and failing after months or even years of struggling, without being able to deliver a working application.

Migrating an application from Microsoft Access to VB or C#.NET

I'm currently trying to convince management of the need to port one of our applications to .NET. The application has grown to be a bit of a monster in Access (backend in SQL), with 700 linked tables, 650 forms/subforms, 130 modules and 850 queries.
I pretty much know all the major benefits of doing this, but now need to look at how this can be achieved technically, so I can put a project plan together.
So, my plan was to convert the queries into stored procedures and/or views on the backend and re-write the forms in WPF or WinForms.
Now, the code is where I come unstuck. Is it possible to package up the code-behind and modules into DLLs and consume them while the application is slowly ported to VB/C#?
What we can't be left with is half an application in VB/C# and half in Access, it must 'appear' to all be one application, even half way through the migration.
Thanks in advance.
EDIT: Just some more info about what we do and why we're looking at moving away from Access.
We are essentially an ISV and the Access application is our main product. This application has been developed over a period of 15 years, by many, many developers on an ad hoc basis. There is no documentation for this application.
We also have problems with getting branching in SCC to work properly, so we've currently got 4 or 5 code bases for the half a dozen clients we have. On top of that, all the testing we do is completely manual, which you can imagine is very labour intensive, and only scratches the surface of what really needs to be tested.
We're currently looking to expand, and have a number of sales leads that are in the final stages. I'm worried that with these new sales we're going to be swamped with support and testing, and that this application is going to become even more entangled and buggy.
I'll also add to this the fact that we're just about to enter the spec phase of a brand new product, which is almost certainly going to be built in .NET. If we were to rewrite the Access application in .NET, then the people we use for that can go straight on to the new development. If we were to stay in Access, we'd have to bring in some new Access people, who would have to be retrained once we start the new development.
So essentially it has come down to two choices: major refactoring work in Access to try and 'organise' the code a bit better (and those of you who have suggested culling parts are most probably right; I'm sure there are parts that are no longer used), or a rewrite in .NET. However, I fear that if we stay in Access we still won't be able to build in effective testing, and we still won't have proper SCC branching, which will lead to support continuing to be a nightmare and any future development on this product making things worse. Either way, there is a lot of work we're about to embark on, whether in Access or in .NET.
I've been involved in a lot of migration projects converting from one platform to another. I've also seen spectacular cost overruns, and spectacular underestimations of how difficult these types of projects can be.
In one of those projects, a platform I had created and built for about $25,000 was handed to another team to replace and rewrite, and the resulting cost was in excess of $750,000.
You're also making an assumption that the current system needs to be replaced. You MUST be clear in your mind about the actual goals of moving off of and replacing the current platform and software. Simply rewriting something and moving the functionality over to another platform yields you very little, except spending a lot of money that really doesn't benefit your business at all (but hey, those developers will take the money if they convince you of the need for doing this).
You might want to read this wonderful article by Joel on Software (Joel, by the way, developed and created this forum, Stack Overflow, and I was a moderator of some of his discussion forums for almost 10 years). In this article, Joel warns and cautions against simply rewriting, out of the blue, perfectly good software, which does not rust or wear out.
Things You Should Never Do, Part I
by Joel Spolsky
They did. They did it by making the single worst strategic mistake that any software company can make:
They decided to rewrite the code from scratch.
Article here: http://www.joelonsoftware.com/articles/fog0000000069.html
Joel goes on to note that in the past 10 years, that article has remained one of his more popular (and somewhat controversial) ones.
It makes no sense to take a perfectly good application that's been running great for 10 years and doing its job, and simply rewrite it on another platform, especially if you don't have the manpower, expertise, and personnel available to maintain this new system. And this is especially so if the new system is not going to accomplish anything more than what the previous system was doing. In fact, if you did have that manpower, they'd likely already have STARTED converting this system over time. I mean, why all of a sudden, out of the blue, did someone throw a light switch and realize that new developers need to be brought in to rewrite a system that's already been running fine?
I'll also point out, having been in this business for a long time (as both a published author and technical editor of Access books), that I've done migration projects from mainframe systems to desktops, and I've also done migrations of desktop database systems to large mainframe systems.
I can only say that it is rare to see an application with that many tables. In fact, this issue raises alarm bells right off the bat.
Because of such a large number of tables, I have to think there are likely very many processes, and multiple applications cobbled together, that represent this whole system. If this is not the case, then rewriting in .NET does not make sense unless you address the unnormalized nature of the system. The fact that the data is already in SQL Server helps, but that might just mean you had the horsepower, capacity, and infrastructure to scale something that was poorly designed in the first place.
A very big portion of software flexibility comes from having properly normalized data models. The problem is that you have the data in SQL Server, and it's very tempting to rewrite parts of the forms and functionality as .NET forms and continue to use the existing data models. Unfortunately, this puts you between a rock and a hard place, because you want to continue to use the existing data while starting to rewrite functionality in .NET. However, rewriting functionality in .NET without addressing the data models is a very bad idea.
In an ironic twist of fate, this is a catch-22, because if that system had really fantastic, well-designed data models, you might not even have the need to redesign it and move it into .NET. Access and SQL Server can scale out to hundreds of users with ease anyway. And Access supports the use of class objects, and even source code control.
In other words, keep in mind that people might be asking to rewrite this in .NET because they believe the application will then magically have increased flexibility and be able to change as fast as their changing business rules. In fact, often the opposite occurs, because Access is a very RAD tool. This means the frontline people can often make modifications in response to changing business rules faster than the IT department and their team of developers working away on the next great version of the application. And worse, you don't want to saddle that IT department and those developers with a poor data model.
I mean, are you now going to have the IT department build every single Excel spreadsheet for people because the current business processes are not flexible enough? It would be wonderful if the IT department could go around to everybody's desk, hold their hand, and build the Excel sheets CORRECTLY for everybody, but it's not practical in the real world. So in addition to taking Access away from these people, you might as well take Excel away from them too.
I am just pointing out that my spider sense suggests the data models here are going to be a real challenge. Remember, I would always take poor code and well-designed data models over the reverse (great code, but terrible data models). The reason is that with great data designs, the code and applications practically write themselves. And with great data models, the ease with which you can adapt to ever-changing business rules again favors great data designs over great code. You can also refactor the code over time WHEN you have good data models. So, with good data models you can move forms, functionality, and the UI over into .NET, and you can do this seamlessly and easily WHEN the existing data models make sense to keep.
Also, it makes no sense to move to these new technologies unless you're going to introduce something like self-serve web portals for the existing business processes. Today we can allow customers to manipulate and use some of the information that is currently locked up in the system. This might be as simple as letting them check the status of their orders instead of wasting valuable customer phone time. Or it might be something like how a major package company in Canada saved an estimated $10,000,000 in the first year of implementing their package tracking system. Or it might be as simple as allowing the customer to look up their account balance.
So right now in the marketplace, these self-serve customer web portals allow customers to enter, use, and get at their information, instead of calling up some employee within the organization who then launches the application and manipulates the information for that customer while on the phone. You might just as well let the customer do this work! So from order status, to balances owing, to banking, or whatever it is, the real ticket today is to give the customer a self-serve web portal that exposes all the valuable information that the internal application is creating.
As mentioned, you have to ask where the manpower and personnel are going to come from to build and maintain this application. Obviously, the existing system, with the enormous number of forms and tables you are throwing out, must somehow have been created, and it represents a significant investment of time and effort. The key question to ask is: where did those significant investments and resources come from to build the existing application, and who is going to maintain the new system? In other words, you need to design the new system to reduce maintenance costs. (New versions of my software can reduce maintenance by as much as 10 or 15 hours per year per customer.)
At the end of the day, good software development and good designs are good designs. Using Access, or VB6, or VB.NET doesn't matter if the system is meeting your business needs now.
I should also point out that the new version, Access 2010, can create web forms. They are XAML ('zammel') .NET forms. I am pointing this out because changing the front-end skin from Access to .NET yields you VERY little unless the underlying data structures and designs are also modified to take advantage of new business processes made possible by new technologies (such as those self-serve web portals). Simply repainting the front ends with .NET forms amounts to a waste of money, in my humble opinion, UNLESS other issues are being addressed, such as the data model, or some type of web portal that will improve flexibility here.
You have some great advice here already. Keep in mind this really comes down to the goals and reasons behind these people's desire to have the software rewritten in .NET. Those new goals and desires had better not be based on the pretext of simply remaking the forms you have now in .NET, as that will accomplish nothing at all, and will not improve their ability to address the changing business needs that the current system has obviously been handling in the past.
Good luck on this. I don't think this is the kind of question that can be answered in a simple forum post, but at least you have lots to chew on here so you can get the ball rolling.
Instead of telling you how hard it is -- which I'm sure you already realize -- I'll try to toss out some hints:
1) Start by moving everything you can out of MS Access and into MS SQL. This means tables, stored procedures, views, etc. If you get this step right, your MS Access app will be a front-end for a real database, which is already a win.
2) Consider giving up before you start. Instead of porting everything, it might make more sense to recognize which features can just be left alone, while new ones get WCF or MVC front-ends.
3) It's tempting to port from VBA to VB.NET, on the theory that it's more similar, but I don't recommend this.
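As a minimal sketch of what hint 1) buys you (illustrative names only; the procedure and connection string are invented), once the queries live in SQL Server as stored procedures, both the legacy Access front-end and any new .NET pieces can share them:

    using System.Data;
    using System.Data.SqlClient;

    // Hypothetical example: a query moved out of Access into SQL Server
    // as a stored procedure can be called from the new .NET front-end.
    public static class OrderQueries
    {
        public static DataTable GetOpenOrders(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("dbo.GetOpenOrders", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                var table = new DataTable();
                new SqlDataAdapter(command).Fill(table); // Fill opens/closes the connection itself
                return table;
            }
        }
    }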
I'm working in a department which is mainly responsible for replacing old Access applications with .NET solutions. In my company, Access applications are used in simple scenarios to fulfill the business needs of a single employee or a small group of employees.
Sometimes the Access application grows, the group of users grows, or too many changes are needed. In that case the department using that Access application can start a project to recreate the application. When this begins, we can be sure that the new application will be far away from the current Access application.
First, a business analyst is assigned to the project. His responsibility is to map the current solution, and to discuss the problems of the current solution and the expectations for the new solution. I haven't seen a project where the "customer" wanted only a replacement of the current solution. Every time, the customer also wants new features and extensions which were not possible in Access.
The business analyst creates an initial description of the expected solution, which is passed to architects. The architects decide which type of application will be built, what type of HW infrastructure will be needed, and how the application will be connected to other systems (if needed). After this initial phase, IT has a big picture of the application and the needed changes. An initial estimate is made here so the project can be planned and resources can be allocated. This estimate is a boundary for the project. Then my team starts to do the job.
We use an agile approach, so our customer (an internal team) incrementally sees new features in the application. First we gather an initial set (backlog) of user stories (a special form of requirements), we estimate these user stories, and we let the customer prioritize them. We choose a subset of user stories for an iteration (usually 2-4 weeks). New user stories can be added to the backlog at any time, but the selected user stories can't change during an iteration. After the iteration, we present the customer a working part of the software. Based on the working part, the customer can decide to change priorities in the backlog or create new user stories. We repeat this approach until the customer says stop or until the budget is consumed. The important point is that not all user stories have to be done. The project has been planned with some budget, and some low-priority user stories may never be undertaken.
From the technical point of view it is a project like any other, except for a few differences:
You have an initial database, and you always have to be sure that the already-implemented part of your new solution also has a working migration of existing data.
You have an existing UI. Users may like it or hate it. Make sure that you understand it, so that you create a UI which is not worse than the existing one. I have created applications where the UI had to be completely different, and I have created applications where the UI had to be exactly the same, so that users didn't need additional training.
Try to add some new features, so that the new application is a reasonable step forward. It is always easier to justify the need for the new application if you can describe needed new features.
650 forms/subforms is large by any standards. That represents a major conversion project, and a 'slow' port will be a nightmare.
I would suggest developing a new .NET application 'spike' that contains the basic functionality that is absolutely required and then build upon it. At the same time, freeze the Access application from all but essential fixes.
There are a few tools that will convert MS Access forms to .NET, but they will likely fail on complex forms with sub-forms.
'Effortlessly' Convert Access Forms to VB Objects
So if a user is in Access and they take an action to open a form that happens to be a different executable written in C#, won't it 'appear' to be the same application?
There has to be a user group that would love a separate application that only has the 5-10 forms they use.
Getting rid of tables/forms/features that are not used is an increase in functionality. I don't know the level of documentation for this app, but start there. When users find out they have to document areas of an application and justify the need for parts they don't use, they'll volunteer to have it removed.

To Workflow or Not to Workflow?

I am responsible for a team of developers who are about to start development of a lightweight insurance claims system. The system involves a lot of manual tasks and business workflows, and we are looking at using Windows Workflow (.NET 4.0).
An example of the business domain is as follows:
A policy holder calls the contact centre to lodge a claim. This “event” fires two sub-tasks which are manually actioned in parallel and may take a lengthy time to complete:
Check customer for fraud – A manual process whereby an operator calls various credit companies to check and assess the potential of a fraudulent customer. From here the sub-task can enter a number of sub-statuses (Check in Progress, Failed Reference Check, Passed Reference Check, etc.)
Send item to repairs centre – A manual process where the item for which the policy holder lodged the claim is sent to the repairs centre to be fixed. From here the sub-task can enter a number of sub-statuses (Awaiting Repair, In Progress, Repaired, Posted, etc.).
The claim can only proceed once the status of each sub-task has reached a predefined status (based on the business rules).
On the surface it seems that Workflow is indeed the best technology choice; however I do have a few concerns in using WF 4.0.
Skill set – Looking at the average developer skill set I do not see many developers who understand or know Workflow.
Maintainability – There seems to be little support within the community for WF 4.0 projects and this coupled with the lack of skill set raise concerns around maintainability.
Barrier to entry – I have a feeling that Windows Workflow has a steep learning curve and it’s not always that easy to pick up.
New product – As Workflow has been completely rewritten for .NET 4.0 I see the product as a first generation product and may not have the necessary stability.
Reputation – Previous versions of Workflow were not well received, considered difficult to develop with and resulted in poor business uptake.
So my question is should we use Windows Workflow (WF) 4.0 for this situation or is there an alternative technology (e.g., Simple State Machine, etc) or even a better workflow engine to use?
I have done several WF4 projects, so let's see if I can add any useful info to the other answers.
From the description of your business problem it sounds like WF4 is a good match, so no problems there.
Regarding your concerns, you are right. Basically, WF4 is a new product, lacking some important features and with some rough edges. There is a learning curve, and you do have to do some things differently. The main point is long-running workflows and serialization, something the average developer is not used to and which requires some thought to get right; I hear far too often that people have problems serializing an Entity Framework data context.
Most of the time, using workflow services hosted in IIS/WAS is the best route when doing these long-running types of workflows. That also makes the versioning problem not too hard to solve: have the first message return the workflow version and make it part of each subsequent message, then put a WCF router in between that routes each message to the correct endpoint based on the version. The basic rule is never to change an existing workflow; always create a new one.
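To make the routing idea concrete, here is a rough sketch (invented names; a real implementation would more likely use the .NET 4.0 RoutingService with message filters than hand-rolled code). The version returned by the first message travels on every later message, and the router resolves the endpoint hosting that workflow definition:

    using System;
    using System.Collections.Generic;

    // Illustrative only: map the workflow version carried on each message
    // to the endpoint hosting that (immutable) workflow definition.
    static class WorkflowVersionRouter
    {
        static readonly Dictionary<string, Uri> EndpointsByVersion =
            new Dictionary<string, Uri>
            {
                { "1.0", new Uri("http://server/ClaimWorkflow/v1.xamlx") },
                { "2.0", new Uri("http://server/ClaimWorkflow/v2.xamlx") },
            };

        public static Uri Resolve(string workflowVersion)
        {
            Uri endpoint;
            if (!EndpointsByVersion.TryGetValue(workflowVersion, out endpoint))
                throw new InvalidOperationException(
                    "No endpoint registered for workflow version " + workflowVersion);
            return endpoint;
        }
    }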
So what is my advice to you?
Don't take a big gamble on an unknown, and for you unproven, piece of technology. Do a small, non-critical piece of the application using WF4. That way, if it works, you can expand on it, and if it fails, you can rip it out and replace it with more traditional .NET code. That way you get real experience with WF4 instead of having to base a decision on second-hand information, and you learn a new and powerful technology in the process. If possible, take a course on WF4, as that will save you a lot of time in getting up to speed (shameless self-plug here).
About the Simple State Machine: I have not used it, but I was under the impression it was for short-running, in-memory state machines. One of the main benefits of WF4 is the long-running aspect.
I have come to this dilemma a couple of times, and I chose not to use Workflow Foundation. Some of the considerations (similar to yours) were:
The workflows involved were a lot simpler (a combination of state machine and sequential actions), and doing it in WF seemed overkill for the effort involved.
The learning curve for developers to understand and use WF effectively was considered high. Instead, a status transition table describing the valid transitions and the actions to be taken was used for flexibility; developers were comfortable with it and easily understood its concept and purpose (see the sketch after this list).
Chances of business process changes were slim, and rudimentary changes were easily possible with the help of the transition table. A change in a transition would mean a database script, while a change in actions would result in a new release/patch. However, the probability of such an occurrence was deemed low.
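For illustration, a transition table of the kind described above might look like this in C# (states, triggers, and names are invented for the sketch; the table could equally live in a database, so a transition change is a script rather than a release):

    using System;
    using System.Collections.Generic;

    public enum ClaimState { Lodged, FraudCheckInProgress, PassedCheck, Closed }
    public enum ClaimTrigger { StartFraudCheck, FraudCheckPassed, Finalize }

    public class ClaimStateMachine
    {
        // The transition table: (current state, trigger) -> next state.
        private static readonly Dictionary<Tuple<ClaimState, ClaimTrigger>, ClaimState> Transitions =
            new Dictionary<Tuple<ClaimState, ClaimTrigger>, ClaimState>
            {
                { Tuple.Create(ClaimState.Lodged, ClaimTrigger.StartFraudCheck), ClaimState.FraudCheckInProgress },
                { Tuple.Create(ClaimState.FraudCheckInProgress, ClaimTrigger.FraudCheckPassed), ClaimState.PassedCheck },
                { Tuple.Create(ClaimState.PassedCheck, ClaimTrigger.Finalize), ClaimState.Closed },
            };

        public ClaimState Current { get; private set; }

        public ClaimStateMachine() { Current = ClaimState.Lodged; }

        public void Fire(ClaimTrigger trigger)
        {
            ClaimState next;
            if (!Transitions.TryGetValue(Tuple.Create(Current, trigger), out next))
                throw new InvalidOperationException(
                    string.Format("Trigger {0} is not valid from state {1}", trigger, Current));
            Current = next;
        }
    }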
Looking back after 13-14 months, I still think the decision not to use WF was correct. IMO, WF makes sense where there is a strong likelihood that the workflow or the business rules can change. WF lets you isolate the workflow in a separate file, so making it configurable by users will be simpler.
We have been using WF 4.0 for the last couple of months. I have to say it's challenging to think the Workflow way, but I can tell you it's worth it. We knew very little when we started. We bought beginner and professional books for WF 4.0, which helped. I, myself, watched many videos online and followed PDC 2009 for the latest news about WF 4.0 and how it differs from the previous, somewhat sucky versions.
One major thing we had to propose a solution for is how to deal with In/Out Arguments in a workflow without binding our custom activities to certain data types, and how to pass parameters between activities. I came up with a good solution for that, and the workflow experience we have so far is not bad at all. Actually, we have a workflow-intensive application that is getting bigger and bigger, and I really cannot imagine myself solving it in a different environment. I love the visual effect it has: it keeps me away from the details of if/else and similar constructs, and makes the business rules apparent in a way that doesn't force you to dive into lines of code to know what's going on or how to fix some bug.
By the way, the project that we worked on is very similar to what you described and it's a medium-sized project.
You can tell from my words that I like it, and I do recommend it, although it incorporates some risks, as it's a new technology and you have to come up with some innovative ideas.
my 2 cents...
I did three projects in WF 3.5, and I have to say it is not easy. It forces you to think in a whole new way, especially when persistence is used. Updating an application which contains hundreds of incomplete persisted workflows is challenging; a single breaking change in serialization crashes them all. Introducing multiple versions of the same library to support new and old running workflows is common. It was challenging.
I haven't tried WF 4.0 yet, but based on experience with BizTalk and WF 3.5, I think it will be similar.
Anyway, the best approach you can take is to do a proof of concept. Take a single workflow from your requirements and try to implement it in WF 4.0. You will spend some time with it, but you will prove whether you are able to do it in WF 4.0 and whether there are any visible benefits.
If you decide to use WF 4.0, I insist that you check out the possibility of running WF as a WCF service hosted in Windows Server AppFabric. AppFabric provides some out-of-the-box functionality for hosting WFs.
I think it does not really make sense today to talk about Workflow in WF4 as a technology choice for this kind of problem. What is really appropriate, as mentioned by Ladislav Mrnka above, is WCF WF Services hosted in AppFabric.
My experience with this is that it pays great dividends and is very enjoyable, but problems arise in the beginning because it is not properly appreciated that for many programmers this is a methodology shift more than a technology shift. On the other hand, generalists and those with a problem-solving mindset saw WCF WF AppFabric as a set of exciting opportunities. So if the mix of people on the project are fairly conservative C# devs attached to their daily set of OO and patterns, it will be hard to introduce. If the team is more innovative, then adoption will be much easier because the potential and new doorways multiply with each discovery.
Two main conceptual problems programmers had in moving to this technology were:
a) Message correlation and message exchange patterns
b) Workflows and unit testing
In standard systems in C# for example a workflow is rarely explicit and therefore rarely unit tested. The overall workflow is left for testing by acceptance scenarios or integration. Introduce an explicit WF as a software artifact and suddenly standard devs want to try and unit test it, which is usually not worth doing.
The message correlation aspect of it is a bit of a mindset shift for those not familiar with message exchange patterns. Most devs have dealt with in-process and remote calls, web services and SOAP, and usually focussed on one or two of those. To abstract above it all and work with a general message-based system can be confusing at first.
On the positive side, though, the end result is something that saves a lot of time and creates a lot of opportunities. One main thing is that the workflow, if visually clear, is something that can be worked on by the end user, developer, and analyst together, eliminating unnecessary steps in the development lifecycle and focusing the parties on one artifact. Further, it discourages islands of functionality in dedicated apps, with dedicated glue layers, by encouraging a suite of business processes in WF per business domain. Further, with AppFabric, the plumbing for persistence, logging, and waking up scheduled activities is all done for you. WF4 performance is outstanding, too.
My recommendation would be to have the most innovative or explorative team member do the initial scouting to discover the tricky parts and get the core functions working, and then have that person be responsible for compartmentalising the remaining work.
In order to do an insurance claim system of any complexity that involves roles and "sub-tasks", you really need a BPM solution, not just workflow. Workflow Foundation 4.0 is slick, but it really doesn't come close to the functionality of a BPM product.
BPM solutions, like Metastorm BPM, Global360, and K2.NET, provide human-centric workflow, tasks, roles, and system integration that can model and streamline business processes like your insurance claim system. Use ASP.NET to build the interface that integrates with the BPM workflow engine, as their built-in designers are usually limited and force you to use their custom-built web controls, which usually are not as full-featured as the ASP.NET web controls.
Go with the technology your team knows and feels comfortable with. Workflow Foundation is not a product that you can use straight away; it's rather a set of pieces you can embed in your application in order to build a workflow system. IMHO the workflow logic is the least important piece of technology; first of all you have to concentrate on the GUI, because business owners will not see anything but the GUI. But if your system is a success, then you have to be prepared for never-ending change requests and new requirements, so you have to implement your business logic so that it's easy to change and easy to divide into separate processes to suit different (sometimes contradicting) user needs. BPM helps in this task because it allows you to have separate, multiple versions of business processes suiting various business needs. You don't need a full-fledged BPM engine for that, but it's useful to code your business logic so that it can be versioned and divided into individual business processes; the worst thing to have is an unmaintainable and entangled blob of code that handles 'everything' and that no one can understand. There are many ideas for that - state machines, DSLs (domain-specific languages), scripts, etc. - you decide what the implementation should be. But you should always think in terms of business processes and organize your logic accordingly so that it reflects these processes.
And be prepared for the coexistence of many variants of business logic and data structures; this is the most difficult design task, IMHO.
I'm in a situation where I have to use 4.0, as .NET 4.5 isn't accredited for use in our production environment yet. I had major pain understanding how to get long-running workflows going to suit our business needs, but eventually found an elegant solution. It's not something that just anyone coming along later to provide support can pick up with ease, because there's so much to think about, but I do believe in WF as a tool for managing workflow states.
One big thing I take issue with in WF 4.0, though, is Maurice's comment:
The basic rule is never to change an existing workflow; always create a new one
That's great if you just want a new version, but what if you have 50,000 persisted workflows and realise at some point that there's a bug in the workflow? You need to be able to update the xamlx and still stay coupled to the existing instances. I've tried un-gzipping the various metadata columns in the SQL Server instances table to find something that ties the instance to the workflow definition, without any luck.
I did write a synchronisation application for importing data from an old system into our new WF 4.0-driven one. We basically load the data into the system, then run a process which automatically calls into the workflow steps and calls validation methods, essentially mocking user interaction. This only really worked well for us due to the architecture we implemented for access to the workflow service host. It's great as a one-off, where after running you can go through and do checks to ensure the consistency of the data migration process, but having to use this approach for potentially hundreds of thousands of cases once a system is live isn't an approach that instills confidence, and it over-burdens the process of integrating simple bug fixes.
My recommendation is that you avoid WF 4.0 altogether and go straight to 4.5 if your environment supports it. The Dynamic Update and side-by-side versioning it provides cater for bug fixing and WF versioning out of the box. I've still yet to investigate exactly how, as 4.5 still isn't accredited for use by our client, but I'm eagerly awaiting the opportunity.
What I'm desperately hoping for is that our client doesn't request changes to policy (and therefore workflow adjustments) and that the current workflows hold up without any bugs. The latter being a vain and empty hope as bugs always pop up.
I really can't understand what was going through the WF dev team's heads when they released a system where you can't easily fix bugs out of the box. They should have developed a technique for re-binding an instance to a new xamlx.

Planning a programming project by example (C# or C++)

I am in the last year of my undergraduate degree, and I am stumped by the lack of examples of large C++ and C# projects at my university. All the mini-projects and assignments are based on text-based databases, which are inefficient, and console display and commands, which are frustrating.
I want to develop a complete prototype of corporate software dealing in inventory, sales, marketing, etc.: everything you would usually find in SAP. I would be grateful if any of you could direct me to books, articles, or sample programs.
Some of the questions are:
How do I plan this kind of program? Should each concern (such as inventory) have its own process and program, with an integrator sitting over all of them, or should I integrate everything into one big program?
How do I build and address a database? I have a little knowledge of databases and I know SQL, but I have never addressed a database from a program before. Databases are tables; how are you supposed to represent a table in an OOP way?
For the development stack, which is better: PHP and C++, or C# and ASP.NET? I am planning to use a web interface for forms and information, but a background program to handle the computation. .NET is very much integrated and coding should be much faster, but I really wonder about its performance compared to a PHP and C++ package.
Thank you for the info.
This may not answer your question directly, but I thought this might help you get started in some way. So here it goes: I would say, "think through the process". This means, think through the software development process:
Gather requirements
Identify and define the problem.
Get as much information/facts as you can. (turn on green light, think about everything that you want to go into your software)
Come up with a baseline (turn on red lights: what do you really want? The minimum functionality your software "must have" - can't live without)
Analyze
Know what you don’t know, what are the missing facts?
Evaluate your information or lack of it/reliability of information source.
Infer facts that you don’t know.
Form an assumption, opinion, or possible solutions.
Consider alternatives and implications of each solution.
Form an action plan.
Identify technology pros/cons.
Decide technology
Come up with a functional spec.
Research
Dig into stuff that you would want to know (Best database, ORM, design practices, code samples - gather everything, read about inventory systems that are already there)
Design
Develop
Test
Fix
Prepare deployment plan
Release the product
Gather user feedback
Analyze user feedback
Plan for items in next release.
Repeat steps
And Enjoy!
Before I start: this is a shallow answer to a deep question.
1) It looks like you have a reasonable grasp of the major components of your target application. As a .NET developer, I'd build assemblies that matched broad areas of functionality (not sure what the equivalent is in PHP), and then you can use those assemblies together as a single large app, or separately as required. It's unlikely you'll get it right the first time, so build it how it feels right, and then do some ruthless refactoring to make it better once you've got a handle on the problem.
2) This whole area is covered by Object-Relational Mapping (ORM); NHibernate is the best of the bunch in the .NET world (see the sketch below). BTW, if you learn that, you'll be way ahead of the game come graduation/work time. Raw SQL is so last decade. I guess you know that SQL Server Express is a free download?
3) For development, go with the languages/environment you feel most comfortable in. My preference is .NET, and the integrated coding is much faster. Performance is definitely good enough, especially as this is a learning project; SO runs on .NET, and that supports a gazillion users pretty well.
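To make the ORM point in 2) concrete, here is a minimal hypothetical sketch (names invented): the table becomes a plain class, and the mapper (NHibernate here) moves rows in and out of objects so you rarely touch raw SQL.

    // A database table represented as a plain class; NHibernate maps
    // columns to these properties (virtual so it can proxy them).
    public class InventoryItem
    {
        public virtual int Id { get; set; }              // primary key column
        public virtual string Name { get; set; }         // Name column
        public virtual int QuantityOnHand { get; set; }  // QuantityOnHand column
    }

    // With a configured NHibernate ISessionFactory, usage looks roughly like:
    //   using (var session = sessionFactory.OpenSession())
    //   using (var tx = session.BeginTransaction())
    //   {
    //       var item = session.Get<InventoryItem>(42);   // SELECT by key
    //       item.QuantityOnHand -= 1;
    //       tx.Commit();                                 // UPDATE the row
    //   }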
Enjoy
I don't have any good recommendations for SAP-like projects in particular, but in general the best examples to use for things like this are well-established open-source projects. Anything else is going to be a "toy" example in one way or the other, and will be simplified and cleaned up. It's the "cleaned up" that makes it most unrealistic -- one of the really key things that makes real-world large software projects different from university examples is that the real world is messy, and real-world requirements are messy, and collaboration between lots of people with not quite the same priorities is messy, and real-world software projects have to adapt to and thrive in this messiness.
In answer to your specific questions, though:
1.) Do things in a modular way. This means you have something you can test and work with as soon as you get the first module done. That's especially important when you're learning, because (a) you probably won't have time to actually finish the whole thing, (b) you'll learn a lot from writing the first bit that you'll want to apply in future bits and then you'll probably want to rewrite the first bit, and (c) you'll learn even more from using the first bit.
2.) There are many views on this, and many online articles and books. I can't answer that in an answer here (except to note that in some cases trying to represent it in an OOP way is the wrong programming paradigm -- be careful about overconstraining the answer by the question you ask!); the right answer is to find things to read and spend some days reading them.
3.) You do not care about that sort of performance issue here. Successful programs have been written in both forms. You care about what will teach you the most, and what you are comfortable working with. Either one should be fine. You'll probably find more open-source pieces to look at with PHP and C++.
Your question pretty much covers the whole gamut of planning for a project; a whole thesis might be written (+:
Keep in mind what your team and your teaching-staff want out of the project.
1) Modular is my choice. It'll force you to address the application one module at a time and keep you focussed, but that is subject to
The familiarity of your team with the preferred/recommended language for this project.
Time in hand
Remember that modular means you will necessarily have to provide for module integration too.
2) C++ or C#? Whichever offers the greater learning experience. My own experience with both mentioned technologies is limited, but I remember there used to be a Database Template Library (DTL) for C++. C#, on the other hand, will probably be faster to develop in. I could be wrong. There are any number of free DBMS engines available on the net. Unless the assignment explicitly recommends using a text-based database, opt for one of these.
3) I concur w/Brooks up there ^^^
Good Luck!
You are a university undergraduate, and you are talking about a complete inventory system.
I suggest building a blog application first with all the best practices (like BlogEngine), then moving on to e-commerce sites (nopCommerce, dotCommerce). And then do whatever you like.
This is a common problem with undergrads like you: jumping way higher without building any simple projects first.
As a full-time PHP developer: PHP sucks! ASP.NET is okay (mmm... no, it sucks too), but it locks you into proprietary licenses.
If you're starting from scratch, go for node.js. It's C++ and server-side JavaScript. Yes, it's new, but it has engineering promise. It'll be more commonplace in a few years.
And if you're worried about performance, don't be. JavaScript on V8 is extremely fast.
http://shootout.alioth.debian.org/u32/which-programming-languages-are-fastest.php
Here are some node.js links to get you started:
http://www.delicious.com/homer6/nodejs
Enjoy.

Ever done a total rewrite of a large C++ application in C#? [closed]

I know Joel says to never do it, and I agree with this in most cases. I do think there are cases where it is justified.
We have a large C++ application (around 250,000 total lines of code) that uses an MFC front end and a Windows service as the core components. We are thinking about moving the project to C#.
The reasons we are thinking about rewriting are:
Faster development time
Use of WCF and other .NET built-in features
More consistent operation on various systems
Easier 64-bit support
Many nice .NET libraries and components out there
Has anyone done a rewrite like this? Was it successful?
EDIT:
The project is almost 10 years old now, and we are getting to the point that adding new features we want would be writing significant functionality that .NET already has built-in.
Have you thought about, instead of rewriting from scratch, starting to separate out the GUI and back-end layers (if they are not separate already)? Then you can begin to write pieces of it in C#.
The 250,000 lines were not written overnight; they represent many man-years of effort, so nobody sane would suggest rewriting it all from scratch at once.
The best approach, if you are intent on doing it, is piece by piece. Otherwise, ask your management for several years of development effort during which no new features are implemented in your existing product (basically stagnating in front of the competition).
My company actually did that. We had a C++ code base of roughly that size, and everybody (programmers, management, customers) more or less agreed that it wasn't the best piece of software. We wanted some features that would have been extremely hard to implement in the old code base, so we decided (after many discussions and test projects) to rewrite it in .NET. We reused the code that was modular enough using C++/CLI (about 20% of it - mostly performance-critical number-crunching stuff that should have been written in C++ anyway), but the rest was re-written from scratch. It took about 2 man-years, but that number really depends a lot on the kind of application, the size of your team, and on your programmers, of course. I would consider the whole thing a success: we were able to re-architect the whole system to enable new features that would have been near-impossible with the old code base. We also could avoid problems we often had in the old software by re-designing around them. Also, the new system is much more flexible and modular in the places where we learned that flexibility was needed. (Actually, I'm sometimes surprised at how easily new features can be incorporated into the new system, even though we never thought of them when we designed it.)
So in a nutshell: for a medium-sized project (100-500 kLOC), a rewrite is an option, but you should definitely be aware of the price and risk you're taking. I would only do it if the old codebase is really low-quality and resists refactoring.
Also, there are two mistakes you shouldn't make:
Hire a new .NET programmer and let him/her do the rewrite - someone new can help, but most of the work and especially the design has to be done by developers who have enough experience with the old code, so they have a solid understanding of the requirements. Otherwise, you'll just repeat your old mistakes (plus a couple of new ones) in a different language.
Let a C++ programmer do the rewrite as their first C# project. That's a recipe for disaster, for obvious reasons. When you tackle a project of that size, you must have a solid understanding of the framework you're using.
(I think these two mistakes might be reasons why so many rewrites fail.)
It's been tried before - not only C++ => C#, but VB6 => VB.NET, C++ => Java, and any other old => new that you can think of. It never really worked. I think that because people don't consider that transformation for what it really is (a total rewrite), they tend to take it lightly.
The migration story from C++ => .NET should be through C++/CLI, carefully deciding what becomes managed and what remains unmanaged, and s-l-o-w-l-y "fixing" piece by piece.
Expression Blend was originally an MFC app. The current version uses WPF for the UI but the engine is still all native. I saw a great talk by principal architect Henry Sowizral about a year ago where he described the process of the migration. Make the engine UI agnostic and you will be able to support whatever the latest UI technology is. The Expression team at one point had what he referred to as the hydra-headed version. Two front-end UIs running simultaneously with one underlying engine - in this way they could see where behavior had unintentionally deviated from the previous version. Since the UI subscribed to events and notifications, changes made in a WPF toolwindow were reflected in the old MFC toolwindow.
EDIT: Looks like some PowerPoint slides are available here, or as HTML here.
I've been through a project that did exactly what you're describing with approximately the same size codebase. Initially, I was completely onboard with the rewrite. It ended up taking 3+ years and nearly turned into a death march. In general, I now agree far more with the incrementalists.
Based on our experience, though, I will say that such a rewrite (especially if you're able to reuse some C++ business logic code in .NET), is not as technically dangerous as it may seem. However, it can be very socially dangerous!
First, you have to make sure that everyone fully understands that what you are undertaking initially is a "rewrite" (or "remake") not an upgrade or "reimagining." The 1998 Psycho was a shot-for-shot remake of the 1960 original. The 2003 Battlestar Galactica was a reimagining of the 1978 original. See the difference?
In our case, the initial plan was to recreate the existing product in .NET. That would not have been technically daunting, since we understood the original well. However, in practice, the urge to add and fix and improve just a few things proved irresistible, and ultimately added 2-3 years to the timeline.
Second, you have to make sure that everyone from the execs to the sales staff to the end users is OK with your current product remaining unchanged during the development of the remake. If your market is moving in such a way that you won't be able to sustain your business during that period, then don't do it.
So the main obstacles for us turned out to be social, rather than technical. Users and business interests became very frustrated with the lack of visible progress. Everyone felt compelled to push for their own pet improvements and features, too, so our final product bore only a superficial resemblance to the original. It was definitely a reimagining rather than a remake.
In the end it seems to have turned out ok for us, but it was a real grind, and not something we'd choose to do again. We burned through a lot of goodwill and patience (both internal and external), which could've largely been avoided with an incremental approach.
C++ won't automatically translate to C# (not so you'd want to maintain it, anyway), and you're talking about using different frameworks.
That means you're doing a total rewrite of 250K lines of code. This is effectively the same as a new 250K-line project, except that you've got the requirements nicely spec'd out to start with. Well, not "nicely"; there's doubtless some difficult-to-understand code in there, some likely because of important issues that made elegance difficult, and the overall structure will be somewhat obscured.
That's a very large project. At the end, what you'll have is code that does the same thing, likely with more bugs, probably fairly badly structured (although you can refactor that over time), with more potential for future development. It won't have any of the new features people have been asking for during the project (unless you like living dangerously).
I'm not saying not to do it. I'm saying that you should know what you're proposing, what the cost will be, and what the benefits would be. In most cases, this adds up to "Don't do that!"
I did something similar. Part of my job involves developing and supporting some software called ContractEdge. It was originally developed in Visual C++ 6 by a team in India. I took over the development role after it was basically done in 2004. Later on, when Windows Vista was made available as a beta, I discovered that ContractEdge would crash in Vista. The same thing happened in the release candidate.
So I was faced with a decision. Either hunt for the problem in tens of thousands of lines of mostly unfamiliar code, or take the opportunity to rewrite it in .NET. Well, I rewrote it in VB.NET 2.0 in about 2 months. I approached it as a total rewrite, essentially scrapping everything and I simply focused on duplicating the functionality with a different language. As it turns out I only had to write about 1/10th the number of lines of code as the original. Then we held a one month long beta program to iron out any remaining bugs. Immediately after that we launched it and it's been a big success ever since, with fewer problems than the C++ version it replaced.
In our particular scenario I think the rewrite worked out well. The decision was made easier based on the fact that nobody on our team was as familiar with C++ as they were with .NET. So from that perspective, maintainability is now far easier. Nowadays I do think C++ is too low-level of a language for most business software. You really can get a lot more done in .NET with less code. I wrote about this subject on my blog.
Total rewrite for the sake of rewrite? I would not recommend it.
In addition to other responses, I would not take "faster development time" for granted. Sure, for most "business" data-centric applications it will probably be the case, but there are many areas where .NET will not bring in significant productivity increases, plus you need to take the learning curve into account.
We've done a big C++ => C# migration as part of our move to .NET. It was quite a tough project. Management will hardly bite on the funding for it, so you have to go for a compromise. The best approach is to leave the innermost (or lowest) layers in C++ and cover the upper part with C#, with better APIs designed with newer concepts like readability and API usability in mind, safeguarded with unit tests and advanced tools like FxCop. These are obviously great wins.
It also helps you layer your components a bit better, as it forces certain cuts. The end product is not as nice as you had hoped, though: you end up copying a lot of the C++ code, because years and years of coding contain many bug fixes and many undocumented and hard-to-understand optimizations. Add to that all the pointer tricks you could do in C (our code evolved from C into C++ over time). As you stabilize, you find yourself more and more reading the C++ code and moving it into the C#, as opposed to the 'cleaner design' goals you had in mind at the beginning.
Then you find out that interop performance sucks. That may call for a second rewrite - maybe use unsafe C# code now. Grrr!
If all the team members come from C++, the new code will also look like a C++ design. Try to go for a mix of C# and C++ developers in the team, so you can get a more .NET-like API at the end.
After a while, the project may lose interest and management may not fund the entire rewrite, so you end up with C#-sugarcoated C++ code, and you may still have Unicode/64-bit issues unresolved. It really calls for very, very careful planning.
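As a rough sketch of the layering described above (all names invented, and a real project might prefer C++/CLI wrappers over P/Invoke for richer types): the C++ core stays in a native DLL and the C# upper layer wraps it. Each call crosses the interop boundary, which is exactly where the performance pain mentioned above shows up, so keep the boundary coarse-grained.

    using System;
    using System.Runtime.InteropServices;

    // Hypothetical C# wrapper over the retained C++ core ("LegacyCore.dll"
    // and its export are invented names for this sketch).
    internal static class LegacyCore
    {
        [DllImport("LegacyCore.dll", CallingConvention = CallingConvention.Cdecl)]
        private static extern int ProcessBatch(IntPtr data, int length);

        public static int Process(byte[] data)
        {
            // Pin the buffer and make one coarse call for the whole batch,
            // rather than many small interop crossings.
            GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
            try
            {
                return ProcessBatch(handle.AddrOfPinnedObject(), data.Length);
            }
            finally
            {
                handle.Free();
            }
        }
    }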
I was involved in a very similarly sized project. It was necessary to rewrite the GUI front end because of new hardware and new requirements. We decided to port it to .NET using C++/CLI. We were able to reuse more than half of the code, and porting it worked quite well.
We were able to take advantage of .NET where it made the most sense. This made major parts of the code much cleaner. We found the book "Pro Visual C++/CLI and the .NET 2.0 platform" by Stephen R. G. Fraser very helpful.
Have you considered a port to C++.NET? It might be less painful.
I'm currently rewriting a rather large web application.
One thing to remember is that when converting from one language to another, especially something like C++ to .NET, you may end up with less, and probably cleaner, code, due either to language advances or to framework code.
That's one advantage for future maintainability, even aside from the opportunity to re-architect the less robust aspects of the old application.
Some additional comments.
Depending on the lifespan of your application you may be forced to rewrite it in a modern language since I suspect that C++ developers will become increasingly hard to find.
Just moving the app to a new language will not reap great rewards. You'll probably want to do a redesign of the app as well! Do not underestimate the effort required to do this. I would guess the effort for a redesign plus rewrite could be as much as 50% of the effort for the original implementation. (Of course, 50% is a totally unscientific guess.)
It's way too easy to fool yourself into thinking "Well, C# and WPF are just so much more productive that rewriting this mess would be a piece of cake!"
Interestingly most of the answers from people who have done this seem positive. The most important thing IMO is to have good unit tests so that you can be sure your rewrite does what you want it to do (which may not be exactly the same as what the old code did).
