What factors drive you to use embedded scripting in your applications?

What factors do you weigh when selecting an embedded scripting language, and can issues of configuration and variation be overcome purely with IoC and the Plugin pattern?
Many times the Strategy and Decorator patterns are good ways to tame variation in domain logic. The problem set I am faced with is that calculations and starting points of workflows will vary throughout the year based on different marketing campaigns, and not all data requirements and business rules are known until just before a campaign begins. Being a small shop, we would like a solution where we can make configuration changes and test rigorously, yet not be forced to compile, link, and re-deploy on a constant basis.

Many times I have made internal data structures scriptable through Python or Lua, sometimes externalizing business logic into loadable modules. This turns your solution into a reusable framework that you can deploy to other clients cheaply. It also makes 'in the field' fixes quicker (just update a text file) and gives clients the freedom to modify their own dynamic data.
If you can safely expose the API without breaking things ... why not?
My two concerns are:
Can I externalize logic that is specific to a client, or that might be tailored later?
Can I do this whilst meeting performance and intellectual property constraints?
A clean framework == Reusable code and a platform you can quickly modify to solve problems of the same ilk without recoding.
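For a concrete picture, here is a minimal sketch of hosting IronPython from C# to externalize a single rule. The Order type and the rule text are hypothetical stand-ins; the rule source would normally come from a text file or database row so it can be changed in the field:

    // Requires the IronPython package (engine + DLR hosting APIs).
    using IronPython.Hosting;
    using Microsoft.Scripting.Hosting;

    public class Order   // hypothetical domain object
    {
        public double Total { get; set; }
        public double Discount { get; set; }
    }

    public static class RuleHost
    {
        public static void Apply(Order order, string ruleCode)
        {
            ScriptEngine engine = Python.CreateEngine();
            ScriptScope scope = engine.CreateScope();
            scope.SetVariable("order", order);  // expose only what the rule needs
            engine.Execute(ruleCode, scope);
        }
    }

    // Usage: RuleHost.Apply(order, "if order.Total > 100: order.Discount = 0.1");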

I've used this technique in 2 scenarios:
I want to provide some facility for customers/clients to do some (limited) customisation themselves, e.g. to colour a chart red if a value exceeds 50 by letting them type a short script like "if val > limit then 'red'". You have to be careful to limit what the user can program, however, for obvious reasons (a sketch of this kind of guarded evaluation follows below).
When I need some customisation of the software in the field (i.e. recompilation isn't an option) and I know I'll need a little more than being able to turn components on/off. In that case I'll implement some mechanism where the application configures itself via scriptable components.
(I'm mainly Java-based, so I use the Java 6 embedded scripting API plus a variety of languages to achieve the above, depending on the focus.)
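To make the first scenario concrete in C# terms (the answer above is Java-based), here is a hedged sketch of evaluating a short user expression with IronPython. The names and the naive token check are illustrative only; real sandboxing needs more than this (e.g. a restricted engine or a separate AppDomain):

    using System;
    using IronPython.Hosting;
    using Microsoft.Scripting.Hosting;

    public static class ChartRules
    {
        // userScript is the short customer-supplied expression, e.g.
        // "'red' if val > limit else 'black'"   (hypothetical syntax)
        public static string EvaluateColour(string userScript, double val, double limit)
        {
            // Naive guard against the obvious escapes; NOT a real sandbox.
            if (userScript.Contains("import") || userScript.Contains("__"))
                throw new ArgumentException("Script uses a forbidden construct");

            ScriptEngine engine = Python.CreateEngine();
            ScriptScope scope = engine.CreateScope();
            scope.SetVariable("val", val);      // the script sees only these
            scope.SetVariable("limit", limit);  // two variables, nothing else
            return (string)engine.Execute(userScript, scope);
        }
    }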

Related

Adding new types to compiled code (treating it like data)

This question is strictly about the inefficient architecture of an existing system that needs to be rebuilt. It solicits validation from fellow developers who have had experience with managing such awkward systems. I have tried to abstract it as best as I could below.
The application caters to a very complex need and it delivers very well. The problem is that internal plumbing makes code management and scalability a nightmare. The little information I can share about context includes the fact that we need to treat code as a data commodity. In other words, the system can only function if implemented classes are added to it on a continuing basis.
What the application delivers to end-users is not data, but an [Action] that requires a code execution context. So the application has to execute some code on the target system in order to deliver what the user expects. Now these expectations are not known at compile time, and new ones need to be added almost daily. That means developers keep adding [Actions] to the system regularly.
The existing system links to these [Action] classes statically! Not only does that make code management a nightmare, but also requires a recompile every time an action is added.
My first instinct was to have the system dynamically link to assemblies at runtime where each assembly would contain a bunch of actions. This would be akin to adding extensibility capabilities to the application. I thought about the MEF framework but it just did not feel right.
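For comparison, the plain reflection route is small enough to sketch here. The IAction contract and the plugins folder are hypothetical, and each action type is assumed to have a parameterless constructor:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Reflection;

    // Hypothetical contract every dynamically added action implements.
    public interface IAction
    {
        string Name { get; }
        void Execute(IDictionary<string, object> context);
    }

    public static class ActionLoader
    {
        // Scan a folder of assemblies and instantiate every IAction found.
        public static IEnumerable<IAction> LoadActions(string pluginDir)
        {
            foreach (string file in Directory.GetFiles(pluginDir, "*.dll"))
            {
                Assembly asm = Assembly.LoadFrom(file);
                foreach (Type t in asm.GetTypes())
                {
                    if (typeof(IAction).IsAssignableFrom(t) && !t.IsAbstract)
                        yield return (IAction)Activator.CreateInstance(t);
                }
            }
        }
    }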
The only alternative I can think of is storing each action in the database as either source code or a compiled module. Each has its own trade-offs: storing source is less secure but gives me more control over code review and continued maintenance, while storing compiled modules has the benefit of server-side assembly signing.
I would appreciate some advice about how to structure a system like this.
I don't think you need a more flexible architecture, but a more flexible software process. Adding new functionality on a daily basis is what most developers do; that isn't a valid argument for a plugin system.
You don't need a plugin architecture. You need a good software development methodology, such as an agile process (Scrum or XP), and you need to be able to do this:
Let developers build new components in branches.
After thorough testing, merge new functionality to the main branch.
This way the main branch always has production quality and you can roll out new versions each day using continuous integration and continuous delivery.

Dynamic Business Rules

I am creating an interface where users can build their own business rules out of domain-specific objects at runtime, have those rules persisted in the database, and then used by the application. Some of these are complex predicates and others require combinations of domain objects in what seem like fairly complicated relations. So far I have looked into GoF patterns, dynamics with eval, and CodeDom. Does anyone have a suggestion on what should be used?
Actually, you can develop your application with the WF rules engine API without using WF itself: http://blogs.microsoft.co.il/blogs/bursteg/archive/2007/08/09/WF-Rules-Engine-without-Workflow.aspx This will save you a lot of work.
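In outline, and as a hedged sketch only (the Order type and the rule contents are invented), the WF 3.x rules types can be driven without any workflow host like this:

    // Uses System.Workflow.Activities.Rules with no workflow involved.
    using System.CodeDom;
    using System.Workflow.Activities.Rules;

    public class Order   // hypothetical domain object
    {
        public double Total { get; set; }
        public double Discount { get; set; }
    }

    public static class DiscountRules
    {
        // Expresses: IF this.Total > 100 THEN this.Discount = 0.1
        public static void Apply(Order order)
        {
            var rule = new Rule("BigOrderDiscount")
            {
                Condition = new RuleExpressionCondition(
                    new CodeBinaryOperatorExpression(
                        new CodePropertyReferenceExpression(new CodeThisReferenceExpression(), "Total"),
                        CodeBinaryOperatorType.GreaterThan,
                        new CodePrimitiveExpression(100.0)))
            };
            rule.ThenActions.Add(new RuleStatementAction(
                new CodeAssignStatement(
                    new CodePropertyReferenceExpression(new CodeThisReferenceExpression(), "Discount"),
                    new CodePrimitiveExpression(0.1))));

            var ruleSet = new RuleSet("OrderRules");
            ruleSet.Rules.Add(rule);

            var validation = new RuleValidation(typeof(Order), null);
            ruleSet.Execute(new RuleExecution(validation, order));
        }
    }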
Kaizen, depending on the scope and kind of your dynamic rules, you could use a workflow engine like MS WF to define the rules as workflow activities, for example. In this way you isolate the logic and do not need a full rebuild of the application when you need to change anything in the workflow.
This might not be the best solution but could be an alternative...
Having spent a year building a rules engine and fighting over approaches, I can tell you it's not easy, especially when you focus on what your goal is. If it's to get users to write the rules for the system, you really need to focus hard on that area. What's easy for a developer is perhaps much harder for most business users. We built a rules-authoring platform in Excel that was compiled into C# and run dynamically... the problem was that users found the spreadsheets and flow of logic too complicated, and hired ASP.NET contractors to build the rules.
BizTalk has a rules engine that I believe can be used from .NET apps:
http://www.microsoft.com/biztalk/en/us/business-rule-framework.aspx
Have fun!
How often do the rules change? Building a system that lets the business build (and version) their own rules is significantly more challenging than building a system that lets a programmer update the rules dynamically.
When a similar requirement came up in a past project, the business admitted that while yes, the rules will change, they won't change so often that it has to be them making the updates.
We ended up using IronPython for the dynamic parts, storing the code in the database; the system would pull up the appropriate rules on load. The rest of the app was written in C#. A win for us and for the business.
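A hedged sketch of that arrangement (rule storage and names invented): each rule's Python source lives in a database row, gets compiled once, and is then executed against each domain object:

    using System.Collections.Generic;
    using IronPython.Hosting;
    using Microsoft.Scripting.Hosting;

    public class RuleRunner
    {
        private readonly ScriptEngine engine = Python.CreateEngine();
        private readonly Dictionary<string, CompiledCode> cache =
            new Dictionary<string, CompiledCode>();

        // ruleSource is the Python text pulled from the database on load.
        public void Run(string ruleName, string ruleSource, object target)
        {
            CompiledCode code;
            if (!cache.TryGetValue(ruleName, out code))
            {
                code = engine.CreateScriptSourceFromString(ruleSource).Compile();
                cache[ruleName] = code;   // compile once, reuse per call
            }
            ScriptScope scope = engine.CreateScope();
            scope.SetVariable("target", target);  // expose only the domain object
            code.Execute(scope);
        }
    }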

Design pattern for handling many parameters and business rules

I am working on a project that is responsible for creating a "job" which is composed of one or more "tasks" which are persisted to a database through a DAL. The job and task columns are composed of values that are set according to business rules.
The class, as it exists now, is getting complicated and unwieldy because the business rules dictate that it needs access to many databases across our system to decide whether a job can be created and/or how it should be set up.
To further complicate things, it must be possible to submit a list of jobs, and it must be callable in a variety of ways (as a referenced assembly, via a Windows service, or via a web service).
Here are some examples of the things it does:
Generate a job cost estimate
Take in an account and/or user to which to assign the job
Emit an event for job submission progress tracking
Merge in data from an outside, user-defined list (.csv, .xls, etc.)
Copy files from a local drive to a network accessible drive (if necessary)
My question is: What are the best practices or design patterns to make this as manageable and simple as possible?
It seems like the class needs to be refactored, as it would appear to violate the Single Responsibility Principle. I would recommend that each one of the bullet points above get its own implementation class. In this way you would be implementing the facade pattern, where your main class represents the high-level abstraction of what the system is doing.
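A minimal sketch of that shape (all type names are hypothetical placeholders): the facade stays thin, and each responsibility above lives behind its own interface.

    using System;

    public class JobRequest { public string Account; public string ExternalListPath; }
    public class Job        { public string Account; public decimal Estimate; }

    public interface ICostEstimator { decimal Estimate(JobRequest request); }
    public interface IListMerger    { void Merge(Job job, string path); }
    public interface IFileStager    { void CopyToNetworkShare(Job job); }

    public class JobSubmissionFacade
    {
        private readonly ICostEstimator estimator;
        private readonly IListMerger merger;
        private readonly IFileStager stager;

        public JobSubmissionFacade(ICostEstimator estimator, IListMerger merger, IFileStager stager)
        {
            this.estimator = estimator;
            this.merger = merger;
            this.stager = stager;
        }

        public event EventHandler ProgressChanged;   // submission progress tracking

        public Job Submit(JobRequest request)
        {
            var job = new Job { Account = request.Account, Estimate = estimator.Estimate(request) };
            merger.Merge(job, request.ExternalListPath);   // merge user-defined list
            stager.CopyToNetworkShare(job);                // copy files if necessary
            var handler = ProgressChanged;
            if (handler != null) handler(this, EventArgs.Empty);
            return job;
        }
    }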
This type of program can get really messy if not kept clean from the ground up. I myself always try to stick with the basic 3-Tier Application (Presentation, Business, Data). There is a lot of good information out there for building applications in this manner, and it's best to do some demo projects, and read what others have to say about the subject. Here is the MSDN reference.
I myself had to redesign an application that did something very similar. Once I got my data layer separated from everything else, my life became a lot easier.
My best advice is to take the time to plan a lot: use diagrams, flowcharts, and so on. When a program is this complex, I like to have the groundwork for my layers laid out before I ever start writing code.
Given your description of the requirements, there's no real "simple" way to go about this; its requisite functionality is massive and diverse. My only suggestions are to make the entire thing into a DLL library (or even a set of DLLs), to separate the various frontends so that referencing the assembly need not rely on the Windows service (for instance), and to stick to basic OOP commandments like loose coupling.
Besides recommending SOLID and going the extra mile to keep it DRY, I'll suggest introducing the concept of rules into the system.
By modeling the rules you can switch to a more configurable/flexible approach. You can combine multiple rules to expose different operations that affect the outcome of jobs and the related tasks.
This allows you to have rules that are composed of other rules. Depending on the scenario, that could greatly simplify how you deal with it, since some operations that involve implicit rules spread across all those systems can be expressed as a combination of simple rules. I'd keep it as simple as possible, but as you extend it you might find the need for different ways to combine the rules, and patterns will emerge on their own.
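As a hedged sketch of what "rules composed of other rules" can look like (names invented), a specification-style combinator is often enough to start with:

    using System;

    public interface IRule<T>
    {
        bool IsSatisfiedBy(T candidate);
    }

    public class PredicateRule<T> : IRule<T>
    {
        private readonly Func<T, bool> predicate;
        public PredicateRule(Func<T, bool> predicate) { this.predicate = predicate; }
        public bool IsSatisfiedBy(T candidate) { return predicate(candidate); }
    }

    public static class RuleCombinators
    {
        // Composite rules: And/Or build bigger rules out of smaller ones.
        public static IRule<T> And<T>(this IRule<T> left, IRule<T> right)
        {
            return new PredicateRule<T>(c => left.IsSatisfiedBy(c) && right.IsSatisfiedBy(c));
        }

        public static IRule<T> Or<T>(this IRule<T> left, IRule<T> right)
        {
            return new PredicateRule<T>(c => left.IsSatisfiedBy(c) || right.IsSatisfiedBy(c));
        }
    }

    // Usage: var canCreateJob = hasBudget.And(validAccount.Or(adminOverride));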
As for SOLID, I recommend checking the ebook here and trying to keep an evolving-code approach.

Migrating application from Microsoft Access to VB or C#.NET

I'm currently trying to convince management of the need to port one of our applications to .NET. The application has grown to be a bit of a monster in Access (backend in SQL), with 700 linked tables, 650 forms/subforms, 130 modules and 850 queries.
I pretty much know all the major benefits of doing this, but now need to look at how this can be achieved technically, so I can put a project plan together.
So, my plan was to convert the queries into stored procedures and/or views on the backend and re-write the forms in WPF or WinForms.
Now, the code is where I come unstuck. Is it possible to package up the code-behind and modules into DLLs and consume them whilst the application is slowly ported to VB/C#?
What we can't be left with is half an application in VB/C# and half in Access, it must 'appear' to all be one application, even half way through the migration.
Thanks in advance.
EDIT: Just some more info about what we do and why we're looking at moving away from Access.
We are essentially an ISV and the Access application is our main product. This application has been developed over a period of 15 years, by many, many developers on an ad hoc basis. There is no documentation for this application.
We also have problems with getting branching in SCC to work properly, so we've currently got 4 or 5 code bases for the half a dozen clients we have. On top of that, all the testing we do is completely manual, which you can imagine is very labour intensive, and only scratches the surface of what really needs to be tested.
We're currently looking to expand, and have a number of sales leads in the final stages. I'm worried that with these new sales we're going to be swamped with support and testing, and that this application is going to become even more entangled and buggy.
I'll also add that we're just about to enter the spec phase of a brand new product, which is almost certainly going to be built in .NET. If we were to rewrite the Access application in .NET, then the people we use for that could go straight on to this new development. If we were to stay in Access, we'd have to bring in new Access people, who would have to be retrained once we start the new development.
So essentially it has come down to two choices: major refactoring work in Access to try and 'organise' the code a bit better (and those of you who have suggested culling parts are most probably right; I'm sure there are parts that are no longer used), or a rewrite in .NET. I fear that if we stay in Access we still won't be able to build in effective testing and we still won't have proper SCC branching, which will lead to support continuing to be a nightmare and any future developments on this product making things worse. Either way there is a lot of work that we're about to embark on, which is either going to be done in Access or in .NET.
I've been involved in a lot of migration projects converting from one platform to another, and I've seen spectacular cost overruns and spectacular underestimations of how difficult these types of projects can be.
In one case, a system I created and built for about $25,000 was handed to another team to replace and rewrite, and the resulting cost was in excess of $750,000.
You're also making an assumption that the current system needs to be replaced. You MUST have clear in your mind what the actual goals of moving off and replacing the current platform and software are. Simply rewriting something and moving the functionality over to another platform yields you very little, except spending a lot of money that doesn't really benefit your business at all (but hey, those developers will take the money if they convince you of the need to do this).
You might want to read this wonderful article by Joel on Software (Joel, by the way, created this forum, Stack Overflow; I was a moderator of some of his discussion forums for almost 10 years). In this article, Joel warns against simply rewriting, out of the blue, perfectly good software that does not rust or wear out.
Things You Should Never Do, Part I
by Joel Spolsky
They did. They did it by making the single worst strategic mistake that any software company can make:
They decided to rewrite the code from scratch.
Article here: http://www.joelonsoftware.com/articles/fog0000000069.html
Joel notes that 10 years on, that article remains one of his more popular (and somewhat controversial) ones.
It makes no sense to take a perfectly good application that's been running great for 10 years and doing its job, and simply rewrite it on another platform, especially if you don't have the manpower, expertise, and personnel available to maintain the new system. That's especially true if the new system isn't going to accomplish anything more than the previous one was doing. In fact, if you did have that manpower, they would likely have STARTED converting this system already, over time. I mean, why all of a sudden, out of the blue, did someone throw a light switch and realize that new developers need to be brought in to rewrite a system that's already been running fine?
I'll also point out, having been in this business for a long time (as both a published author and a technical editor of Access books), that I've done migration projects from mainframe systems to desktops, and migrations of desktop database systems to large mainframe systems.
I can only say that it is rare to see an application with that many tables. In fact this issue raises alarm bells right off the bat.
Because of such a large number of tables, I have to think there are likely very many processes, and multiple applications cobbled together, represented in this whole system. If that is not the case, then rewriting in .net does not make sense unless you address the un-normalized nature of the system. The fact that the data is already in SQL Server helps, but that might just mean you had the horsepower, capacity, and infrastructure to scale something that was poorly designed in the first place.
A very big portion of software flexibility comes from having properly normalized data models. The problem is that you have the data in SQL Server, and it's very tempting to rewrite parts of the forms and functionality as .net forms while continuing to use the existing data models. Unfortunately this puts you between a rock and a hard place, because you want to continue to use existing data and start rewriting functionality in .net. However, rewriting functionality in .net without addressing the data models is a very bad idea.
In an ironic twist of fate, this is a catch-22, because if that system had fantastically well-designed data models, you might not even need to redesign and move it into .net. Access and SQL Server can scale out to hundreds of users with ease anyway. And Access supports the use of class objects, and even source code control.
In other words, keep in mind that people might be asking to rewrite this in .net because they believe the application will then magically have increased flexibility and be able to change faster than their changing business rules. In fact, often the opposite occurs, because Access is a very RAD tool. That means frontline people can often make modifications, as business rules change, faster than the IT department and their team of developers working away on the next great version of the application. And worse, you don't want to saddle that IT department and those developers with a poor data model.
I mean, are you now going to have the IT department build every single Excel spreadsheet for people because the current business processes are not flexible enough? It would be wonderful if the IT department could go around to everybody's desk, hold their hand, and build the Excel sheets CORRECTLY for everybody, but it's not practical in the real world. So in addition to taking Access away from these people, you might as well take Excel away from them too.
I am just pointing out that my spider sense suggests the data models here are going to be a real challenge. Remember, I would always take poor code and well-designed data models over the reverse (great code, but terrible data models). The reason is that with great data designs, the code and applications practically write themselves. And with great data models, the ease with which you can adapt to ever-changing business rules again favours great data designs over great code. You can also refactor the code over time WHEN you have good data models. So, with good data models you can move forms, functionality, and the UI over into .net, and you can do this seamlessly and easily WHEN the existing data models make sense to keep.
It also makes no sense to move to these new technologies unless you're going to introduce possibilities like self-serve web portals for the existing business processes. Today we can allow customers to manipulate and use some of the information that is currently locked up in the system. This might be as simple as letting them check the status of their orders instead of wasting valuable customer phone time. Or it might be something like how a major package company in Canada saved an estimated $10,000,000 in the first year of implementing their package tracking system. Or it might be as simple as allowing the customer to look up their account balance.
Right now in the marketplace, these self-serve customer web portals allow customers to enter, use, and get at their information instead of calling up someone within the organization, who then launches the application and manipulates the information for that customer while on the phone. You might as well let the customer do this work! So from order status, to balances owing, to banking, or whatever it is, the real ticket today is to let the customer use a self-serve web portal that exposes all that valuable information the internal application is creating.
As mentioned, you have to ask where the manpower and personnel are going to come from to build and maintain this new application. Obviously the existing system, with its enormous number of forms and tables, must somehow have been created, and represents a significant investment of time and effort. The key question is: where did those significant investments and resources come from to build the existing application, and who is going to maintain the new system? In other words, you need to design the new system to reduce maintenance costs. (New versions of my own software have reduced maintenance by as much as 10 or 15 hours per year per customer.)
At the end of the day, good software development and good designs are good designs. Whether you use Access, VB6, or vb.net doesn't matter if the system is meeting your business needs now.
I should also point out that the new version, Access 2010, can create web forms (they are XAML, pronounced 'zammel', .net forms). I point this out because changing the front-end skin from Access to .net yields VERY little unless the underlying data structures and designs are also modified to take advantage of new business processes that can be accomplished with new technologies (such as those self-serve web portals). Simply repainting the front ends with .net forms amounts to a waste of money, in my humble opinion, UNLESS other issues are being addressed, such as the data model or some type of web portal that will improve flexibility here.
You have some great advice here already. Keep in mind this really comes down to what the goals and reasons are for these people desiring this software to be rewritten in .net. Those new goals and desires had better not be based on the pretext of simply remaking the forms you have now in .net, as that will accomplish nothing at all, and will not improve their ability to address the changing business needs that the current system has obviously been handling in the past.
Good luck on this. I don't think this is the kind of question that can be answered in a simple forum post, but at least you have lots to chew on here so you can get the ball rolling.
Instead of telling you how hard it is -- which I'm sure you already realize -- I'll try to toss out some hints:
1) Start by moving everything you can out of MS Access and into MS SQL. This means tables, stored procedures, views, etc. If you get this step right, your MS Access app will be a front-end for a real database, which is already a win.
2) Consider giving up before you start. Instead of porting everything, it might make more sense to recognize which features can just be left alone, while new ones get WCF or MVC front-ends.
3) It's tempting to port from VBA to VB.NET, on the theory that it's more similar, but I don't recommend this.
I'm working in a department which is mainly responsible for replacing old Access applications with .NET solutions. In my company, Access applications are used in simple scenarios to fulfil the business needs of a single employee or a small group of employees.
Sometimes an Access application grows, the group of users grows, or too many changes are needed. In that case the department using the Access application can start a project to recreate it. When this begins, we can be sure that the new application will be far away from the current Access application.
First a business analyst is assigned to the project. His responsibility is to map the current solution, and to discuss problems of the current solution and expectations for the new solution. I haven't seen a project where the "customer" wanted only a replacement of the current solution. Every time, the customer also wanted new features and extensions which were not possible in Access.
The business analyst creates an initial description of the expected solution, which is passed to architects. The architects decide what type of application will be built, what type of HW infrastructure will be needed, and how the application will be connected to other systems (if needed). After this initial phase, IT has a big picture of the application and the needed changes. An initial estimate is done so the project can be planned and resources can be allocated; this estimate is a boundary for the project. Then my team starts to do the job.
We use an agile approach, so our customer (an internal team) incrementally sees new features in the application. First we gather an initial set (backlog) of user stories (a special form of requirements), we estimate these user stories, and we let the customer prioritize them. We choose a subset of user stories for an iteration (usually 2-4 weeks). New user stories can be added to the backlog at any time, but the selected user stories can't change during an iteration. After the iteration we present the customer a working part of the software. Based on that, the customer can decide to change priorities in the backlog or create new user stories. We repeat this approach until the customer says stop or the budget is consumed. The important point is that not all user stories have to be done: the project was planned with some budget, and some low-priority user stories may never be undertaken.
From the technical point of view it is a project like any other, except for a few differences:
You have an initial database, and you always have to be sure that the already implemented part of your new solution also has a working migration of existing data.
You have an existing UI. Users may like it or hate it. Make sure that you understand it, so that you create a UI which is no worse than the existing one. I have created applications where the UI had to be completely different, and applications where the UI had to be exactly the same so that users didn't need additional training.
Try to add some new features so that the new application is reasonable. It is always easier to justify the need for the new application if you can describe newly needed features.
650 forms/subforms is large by any standards. That represents a major conversion project, and a 'slow' port will be a nightmare.
I would suggest developing a new .NET application 'spike' that contains the basic functionality that is absolutely required and then build upon it. At the same time, freeze the Access application from all but essential fixes.
There are a few tools that will convert MS Access forms to .NET, but they will likely fail on complex forms with sub-forms.
'Effortlessly' Convert Access Forms to VB Objects
So if a user is in Access and they take an action to open a form that happens to be a different executable written in C#, won't it 'appear' to be the same application?
There has to be a user group that would love a separate application that only has the 5-10 forms they use.
Getting rid of tables/forms/features that are not used is an increase in functionality. I don't know the level of documentation for this app, but start there. When users find out they have to document areas of an application and justify the need for parts they don't use, they'll volunteer to have them removed.

To Workflow or Not to Workflow?

I am responsible for a team of developers who are about to start development of a lightweight insurance claims system. The system involves a lot of manual tasks and business workflows, and we are looking at using Windows Workflow (.NET 4.0).
An example of the business domain is as follows:
A policy holder calls the contact centre to lodge a claim. This "event" fires two sub-tasks which are manually actioned in parallel and may take a lengthy time to complete:
Check customer for fraud – A manual process whereby an operator calls various credit companies to check and assess the potential of a fraudulent customer. From here the sub-task can enter a number of sub-statuses (Check In Progress, Failed Reference Check, Passed Reference Check, etc.)
Send item to repairs centre – A manual process where the item for which the policy holder lodged the claim is sent to the repairs centre to be fixed. From here the sub-task can enter a number of sub-statuses (Awaiting Repair, In Progress, Repaired, Posted, etc.)
The claim can only proceed once the status of each sub-task has reached a predefined status (based on the business rules).
On the surface it seems that Workflow is indeed the best technology choice; however I do have a few concerns in using WF 4.0.
Skill set – Looking at the average developer skill set I do not see many developers who understand or know Workflow.
Maintainability – There seems to be little support within the community for WF 4.0 projects and this coupled with the lack of skill set raise concerns around maintainability.
Barrier to entry – I have a feeling that Windows Workflow has a steep learning curve and it’s not always that easy to pick up.
New product – As Workflow has been completely rewritten for .NET 4.0 I see the product as a first generation product and may not have the necessary stability.
Reputation – Previous versions of Workflow were not well received, considered difficult to develop with and resulted in poor business uptake.
So my question is should we use Windows Workflow (WF) 4.0 for this situation or is there an alternative technology (e.g., Simple State Machine, etc) or even a better workflow engine to use?
I have done several WF4 projects, so let's see if I can add any useful info to the other answers.
From the description of your business problem it sounds like WF4 is a good match, so no problems there.
Regarding your concerns, you are right. Basically WF4 is a new product; it is lacking some important features and has some rough edges. There is a learning curve, and you do have to do some things differently. The main point is long-running workflows and serialization, which is something the average developer is not used to and which requires some thought to get right; I hear far too often that people have problems serializing an Entity Framework data context.
Most of the time, hosting workflow services in IIS/WAS is the best route for these long-running types of workflows. That makes the versioning problem not too hard to solve either: just have the first message return the workflow version, and make that a part of each subsequent message. Then put the WCF router in between, routing each message to the correct endpoint based on the version. The basic rule is never to change an existing workflow; always create a new one.
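A hedged sketch of the message shape that implies (contract and member names invented): the start operation returns the definition version, and every later message echoes it so a router can pick the matching endpoint.

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class StartClaimResponse
    {
        [DataMember] public string ClaimId { get; set; }
        [DataMember] public string WorkflowVersion { get; set; }   // e.g. "1.2"
    }

    [DataContract]
    public class FraudCheckResult
    {
        [DataMember] public string ClaimId { get; set; }
        [DataMember] public string WorkflowVersion { get; set; }   // echoed back for routing
        [DataMember] public bool Passed { get; set; }
    }

    [ServiceContract]
    public interface IClaimService
    {
        [OperationContract]
        StartClaimResponse StartClaim(string policyNumber);

        [OperationContract]
        void ReportFraudCheck(FraudCheckResult result);
    }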
So what is my advice to you?
Don't take a big gamble on an unknown, and for you unproven, piece of technology. Do a small, non-critical piece of the application using WF4. That way, if it works you can expand on it, and if it fails you can rip it out and replace it with more traditional .NET code. You get real experience with WF4 instead of having to base a decision on second-hand information, and you learn a new and powerful technology in the process. If possible, take a course on WF4, as that will save you a lot of time getting up to speed (shameless self-plug here).
About the Simple State Machine: I have not used it, but I was under the impression it was for short-running, in-memory state machines. One of the main benefits of WF4 is the long-running aspect.
I have faced this dilemma a couple of times, and I chose not to use Workflow Foundation. Some of my considerations (similar to yours) were:
The workflows involved were a lot simpler (a combination of state machine and sequential actions), and doing it in WF seemed overkill for the effort involved.
The learning curve for developers to understand and use WF effectively was considered high. A status transition table describing valid transitions and the actions to be taken was used instead for additional flexibility, and developers were comfortable with it, easily understanding the concept and purpose (a sketch of this approach follows below).
The chances of business process changes were slim, and rudimentary changes were easily possible with the help of the transition table. A change in transitions would mean a database script, while a change in actions would result in a new release/patch. However, the probability of such an occurrence was deemed low.
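A minimal sketch of such a transition table (all names hypothetical): each (state, trigger) pair maps to the next state plus an action, and the rows could just as well be loaded from a database.

    using System;
    using System.Collections.Generic;

    public enum ClaimState { Lodged, FraudCheck, Repair, Closed }
    public enum Trigger    { PassedCheck, FailedCheck, Repaired }

    public class TransitionTable
    {
        private readonly Dictionary<Tuple<ClaimState, Trigger>, Tuple<ClaimState, Action>> table =
            new Dictionary<Tuple<ClaimState, Trigger>, Tuple<ClaimState, Action>>();

        public void Add(ClaimState from, Trigger trigger, ClaimState to, Action action)
        {
            table[Tuple.Create(from, trigger)] = Tuple.Create(to, action);
        }

        // Fires a trigger: runs the row's action and returns the new state.
        public ClaimState Fire(ClaimState current, Trigger trigger)
        {
            Tuple<ClaimState, Action> row;
            if (!table.TryGetValue(Tuple.Create(current, trigger), out row))
                throw new InvalidOperationException("Invalid transition");
            row.Item2();
            return row.Item1;
        }
    }

    // Usage:
    // table.Add(ClaimState.FraudCheck, Trigger.PassedCheck, ClaimState.Repair,
    //           () => Console.WriteLine("send item to repairs centre"));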
Looking back after 13-14 months, I still think the decision not to use WF was correct. IMO, WF makes sense where there is a strong likelihood that the workflow or the business rules will change. WF allows you to isolate the workflow in a separate file, so making it configurable by users will be simpler.
We have been using WF 4.0 for the last couple of months. I have to say it's challenging to think the Workflow way, but I can tell you it's worth it. We knew very little when we started. We bought a beginner and a professional book for WF 4.0, which helped. I myself watched many videos online and followed PDC 2009 for the news about WF 4.0 and how it differs from the previous, somewhat sucky, versions.
One major thing we had to propose a solution for was how to deal with In/Out arguments in a workflow without binding our custom activities to certain data types, and how to pass parameters between activities. I came up with a good solution for that, and the workflow experience we have had so far is not bad at all. Actually, we have a workflow-intensive application that is getting bigger and bigger, and I really cannot imagine myself solving it in a different environment. I love the visual effect it has: it keeps me away from the details of if/else constructs and makes the business rules apparent in a way that doesn't force you to dive into lines of code to know what's going on or how to fix a bug.
By the way, the project that we worked on is very similar to what you described and it's a medium-sized project.
You can tell from my words that I like it, and I do recommend it, although it incorporates some risks: it's a new technology and you have to come up with some innovative ideas.
my 2 cents...
I did three projects in WF 3.5 and I have to say it is not easy. It forces you to think in a whole new way, especially when persistence is used. Updating an application which contains hundreds of incomplete persisted workflows is challenging: a single breaking change in serialization crashes them all. Introducing multiple versions of the same library to support new and old running workflows is common. It was hard.
I haven't tried WF 4.0 yet, but based on experience with BizTalk and WF 3.5 I think it will be similar.
Anyway, the best approach you can take is a proof of concept. Take a single workflow from your requirements and try to implement it in WF 4.0. You will spend some time on it, but you will find out whether you are able to do it in WF 4.0 and whether there are any visible benefits.
If you decide to use WF 4.0, I insist that you check the possibility of running WF as a WCF service hosted in Windows Server AppFabric. AppFabric provides some out-of-the-box functionality for hosting WFs.
I think it does not really make sense today to talk about workflow in WF4 as a technology choice for this kind of problem. What is really appropriate, as mentioned by Ladislav Mrnka above, is WCF WF services hosted in AppFabric.
My experience with this is that it pays great dividends and is very enjoyable, but problems arise in the beginning because it is not properly appreciated that for many programmers this is a methodology shift more than a technology shift. On the other hand, generalists and those with a problem-solving mindset saw WCF WF AppFabric as a set of exciting opportunities. So if the mix of people on the project is mostly conservative C# devs attached to their daily set of OO patterns, it will be hard to introduce. If the team is more innovative, then adoption will be much easier, because the potential and new doorways multiply with each discovery.
The two main conceptual problems programmers had in moving to this technology were:
a) Message correlation and message exchange patterns
b) Workflows and unit testing
In standard C# systems, for example, a workflow is rarely explicit and therefore rarely unit tested; the overall workflow is left to be tested by acceptance scenarios or integration. Introduce an explicit WF as a software artifact and suddenly standard devs want to try to unit test it, which is usually not worth doing.
The message correlation aspect is a bit of a mindset shift for those not familiar with message exchange patterns. Most devs have dealt with in-process and remote calls, web services and SOAP, and usually focused on one or two of those. To abstract above it all and work with a general message-based system can be confusing at first.
On the positive side, though, the end result is something that saves a lot of time and creates a lot of opportunities. One main thing is that the workflow, if visually clear, is something that can be worked on by the end user, developer, and analyst together, eliminating unnecessary steps in the development lifecycle and focusing the parties on one artifact. Further, it discourages islands of functionality in dedicated apps, with dedicated glue layers, by encouraging a suite of business processes in WF per business domain. And with AppFabric, the plumbing for persistence, logging, and waking up scheduled activities is all done for you. WF4 performance is outstanding, too.
My recommendation would be to have the most innovative or explorative team member do the initial scouting to discover the tricky parts and get the core functions working, and then have that person be responsible for compartmentalising the remaining work.
In order to build an insurance claim system of any complexity that involves roles and "sub-tasks", you really need a BPM solution, not just workflow. Workflow Foundation 4.0 is slick, but it really does not come close to the functionality of a BPM product.
BPM solutions, like Metastorm BPM, Global 360, and K2.NET, provide human-centric workflow, tasks, roles, and system integration that can model and streamline business processes like your insurance claim system. Use ASP.NET to build the interface that integrates with the BPM workflow engine, as their built-in designers are usually limited and force you to use their custom-built web controls, which usually are not as full-featured as the ASP.NET web controls.
Go with the technology your team knows and feels comfortable with. Workflow Foundation is not a product that you can use straight away; it's rather a set of pieces you can embed in your application in order to build a workflow system. IMHO the workflow logic is the least important piece of technology; first of all you have to concentrate on the GUI, because business owners will not see anything but the GUI.
But if your system is a success, then you have to be prepared for never-ending change requests and new requirements, so you have to implement your business logic so that it's easy to change and easy to divide into separate processes to suit different (sometimes contradicting) user needs. BPM helps with this task because it allows you to have separate, multiple versions of business processes suiting various business needs.
You don't need a full-fledged BPM engine for that, but it's useful to code your business logic so that it can be versioned and divided into individual business processes; the worst thing to have is an unmaintainable and entangled blob of code that handles 'everything' and that no one can understand. There are many ideas for that (state machines, DSLs (domain-specific languages), scripts, etc.); you decide what the implementation should be. But you should always think in terms of business processes and organize your logic accordingly, so that it reflects these processes.
And be prepared for coexistence of many variants of business logic and data structures - this is the most difficult design task imho.
I'm in a situation where I have to use 4.0, as .NET 4.5 isn't accredited for use in our prod environment yet. I had major pain understanding how to get long-running workflows going to suit our business needs, but eventually found an elegant solution. It's not something that just anyone coming later to support it can pick up with ease, because there's so much to think about, but I do believe in WF as a tool for managing workflow state.
One big thing I take issue with in WF 4.0, though, is Maurice's comment:
The basic rule is never to change an existing workflow; always create a new one.
That's great if you just want a new version, but what if you have 50,000 persisted workflows and realise at some point that there's a bug in the workflow? You need to be able to update the xamlx and still be coupled to the existing instances. I've tried un-gzipping the various metadata columns in the SQL Server instances table to find something that ties an instance to its workflow definition, without any luck.
I did write a synchronisation application for importing data from an old system into our new WF 4.0-driven one. We basically load the data into the system, then run a process which automatically calls into the workflow steps and validation methods, essentially mocking user interaction. This only worked well for us because of the architecture we implemented for access to the workflow service host. It's great as a one-off, where after running you can go through and check the consistency of the data migration, but having to use this approach for potentially hundreds of thousands of cases once a system is live isn't an approach that instils confidence, and it overburdens the process of integrating simple bug fixes.
My recommendation is that you avoid WF 4.0 altogether and go straight to 4.5 if your environment supports it. The Dynamic Update and side-by-side versioning it provides cater for bug fixing and WF versioning out of the box. I've yet to investigate exactly how, as 4.5 still isn't accredited for use by our client, but I'm eagerly awaiting the opportunity.
What I'm desperately hoping is that our client doesn't request changes to policy (and therefore workflow adjustments) and that the current workflows hold up without any bugs. The latter is a vain and empty hope, as bugs always pop up.
I really can't understand what was going through the WF dev team's heads when they released a system where, out of the box, you can't fix bugs easily. They should have provided a technique for re-binding an instance to a new xamlx.
