I want to implement a workflow system on a new website which I am developing. Basically I have
an order object (in future there may be many more objects) which can have different statuses, i.e. initial, assigned, dispatched, cancelled, etc. An order can only move from one status to certain other statuses, e.g. it can go from assigned to dispatched but can't go from initial to dispatched. I'm hoping someone can suggest the best approach to take for something like this.
Try Windows Workflow Foundation, though it might be overkill for your application.
If your WF system is that simple and you do not expect it to evolve much, you could use regular objects with an enumerated type or a dictionary/list of statuses.
Type and value together will give you current status and a list of available actions. Persistence of WF objects will also be very easy.
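For example, a minimal sketch of that approach (the statuses and the transition table below are assumptions based on the question, not a definitive design):

using System;
using System.Collections.Generic;
using System.Linq;

public enum OrderStatus { Initial, Assigned, Dispatched, Cancelled }

public class Order
{
    // Allowed transitions: current status -> statuses it may move to.
    // Adjust the table to match your real business rules.
    private static readonly Dictionary<OrderStatus, OrderStatus[]> Transitions =
        new Dictionary<OrderStatus, OrderStatus[]>
        {
            { OrderStatus.Initial,    new[] { OrderStatus.Assigned, OrderStatus.Cancelled } },
            { OrderStatus.Assigned,   new[] { OrderStatus.Dispatched, OrderStatus.Cancelled } },
            { OrderStatus.Dispatched, new OrderStatus[0] },
            { OrderStatus.Cancelled,  new OrderStatus[0] }
        };

    public OrderStatus Status { get; private set; }

    // The available actions for the current status.
    public IEnumerable<OrderStatus> AvailableTransitions
    {
        get { return Transitions[Status]; }
    }

    public void MoveTo(OrderStatus next)
    {
        if (!Transitions[Status].Contains(next))
            throw new InvalidOperationException(
                string.Format("Cannot move from {0} to {1}", Status, next));
        Status = next;
    }
}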
A common REST API pattern is to define a single item lookup from a collection like this:
//get user with id 123
GET /users/123
Another common REST API pattern is to define a search using a POST + body like this:
POST /users/
{
FirstName:"John",
LastName:"Smith"
}
For the sake of consolidation, development, maintenance and support throughput, how common is it to implement all lookups through a single search like this:
POST /users/
{
"Id": 123,
"FirstName": "John",
"LastName": "Smith"
}
It seems that if an org is trying to maximize development throughput and minimize maintenance and support overhead, consolidating the API calls like this would be a reasonable solution. How common is it for developers to implement this type of pattern these days?
This isn't a great question for SO, given that it's primarily opinion based.
It seems that if an org is trying to maximize development throughput and minimize maintenance and support overhead, consolidating the API calls like this would be a reasonable solution.
Which is better: your opinion above, or the single responsibility principle?
Presumably, if you're given a resource ID, the underlying implementation can look it up efficiently.
Search assumes a search-like implementation, that is, searching for a resource given a set of parameters. This can be efficient or inefficient depending on its underlying implementation.
If you were to implement a single API call that has different behavior depending on its arguments, you end up with a more complex implementation, which is harder to test, which may make that implementation more error prone.
With an API design that alters the control flow based on the presence of inputs, you open up design choices around whether it's an error if both sets of inputs are provided, or whether one set takes priority over another. Further, in the priority case, if one set produces no results, do you fall back to the other set?
Often in design, the simpler the implementation, the easier its functionality is to reason about.
Thinking about the principle of least surprise, an API that better conforms to convention would be easier conceptually to understand than one that does not. While that isn't a strong argument in and of itself, there is merit to having an API that can be used in a fashion similar to other popular REST APIs.
As a consumer of your API, when should I use the ID and when should I use search? Contrast that with an API that shows very clearly that if I have an ID I can use it to retrieve a resource, AND if I don't, I can use search to find that resource.
Also, food for thought: why implement search as a POST, and not a GET with query string parameters?
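For example (parameter names here are just placeholders), the same search could be expressed as:
//search users by name
GET /users?firstName=John&lastName=Smith
That keeps searches cacheable and consistent with the single-item GET, though very long or sensitive filter sets are a common reason people fall back to POST.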
In my opinion:
If one variable (like an id) is enough, use the first.
If you need more info use the second.
The third makes no sense to me: if you have the ID, you don't need to provide anything else, so why should I bother the client (or myself, if I put myself in their position) with setting up the whole object structure?
I think this is related to test-driven development. Make it as clear and easy as possible for yourself (and for others who will use your API).
We've got a business logic/data access layer that we're exposing on a couple of different endpoints via a WCF service. We've created DTOs for use as the data contract of the service. We'll be using the service via the different endpoints for multiple different applications. In some of the applications, we only need a few fields from the DTO while in others we may need almost all of them. For those in which we only need a few, we really don't want to be sending the entire object "over the wire" every time - we'd like to pare it down to what we actually need for a given application.
I've gone back and forth between creating specific sets of DTOs for use with each application (overkill?) and using something like EmitDefaultValue=false on the members that are only needed in certain apps. I've also considered using the XmlSerializer rather than DataContractSerializer in order to have greater control over the serialization within the service.
My question is - first off, should we worry that much about the size of data we're passing? Second, assuming the answer is 'yes' or that we decide to care about it even if it is 'no', what approach is recommended here and why?
EDIT
Thanks for the responses so far. I was concerned we might be getting into premature optimizations. I'd like to leave the question open for now, however, in hopes that I can get some answers to the rest of it, both for my own edification and in case anybody else has this question and has a valid reason to need to optimize.
first off, should we worry that much about the size of data we're passing?
You didn't give the number or sizes of the fields, but in general: no. You've already got the envelope(s) and the overhead of setting up the channel; a few more bytes won't matter much.
So unless we're talking about hundreds of doubles or something similar, I would first wait and if there's a real problem: experiment and measure.
Should you worry? Maybe. Performance/stress test your services and find out.
If you decide you do care...a couple options:
Create a different service (or maybe different operations in the same service) that returns the same DataContracts, but only partially hydrated.
Create "lite" versions of your DataContracts and return those instead. Basically the same as option 1, but with this approach you don't have to worry about consumers misusing the full DataContract (potentially getting null reference exceptions and such).
I prefer option 2, but if you have control over your consumers, option 1 might work for you.
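A rough sketch of option 2, assuming a hypothetical Customer entity (the names and members below are made up for illustration):

using System.Runtime.Serialization;

[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string BillingAddress { get; set; }
    [DataMember] public string Notes { get; set; }
    // ...many more members used by the "full" applications
}

// "Lite" contract returned by operations that only need the basics, so
// consumers can't accidentally rely on members that were never populated.
[DataContract]
public class CustomerLiteDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}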
It seems you may be entering the "premature optimization" zone. I'd avoid using application-specific DataContracts for an entity because of the maintenance problems it will cause in the long run. However, if your application has a valid need to hide information from some client applications and not others, then it's good to have multiple DataContracts for a given entity. @Henk is right: unless you're dealing with massive, deeply nested entities (in which case you have a different problem), do not "optimize" your design simply to reduce network transmission size.
Here's the story so far:
I'm doing a C# winforms application to facilitate specifying equipment for hire quotations.
In it, I have a List<T> of ~1500 stock items.
These items have a property called AutospecQty that has a get accessor that needs to execute some code that is specific to each item. This code will refer to various other items in the list.
So, for example, one item (let's call it Item0001) has this get accessor that may need to execute some code that may look something like this:
[some code to get the following items from the list here]
if (Item0002.Value + Item0003.Value > Item0004.Value)
{ return Item0002.Value; }
else
{ return Item0004.Value; }
Which is all well and good, but these bits of code are likely to change on a weekly basis, so I'm trying to avoid redeploying that often. Also, each item could (will) have wildly different code. Some will be querying the list, some will be doing some long-ass math functions, some will be simple addition as above...some will depend on variables not contained in the list.
What I'd like to do is to store the code for each item in a table in my database, then when the app starts just pull the relevant code out and bung it in a list, ready to be executed when the time comes.
Most of the examples I've seen on the internot regarding executing a string as code seem quite long-winded, convoluted, and/or not particularly novice-coder friendly (I'm a complete amateur), and don't seem to take into account being passed variables.
So the questions are:
Is there an easier/simpler way of achieving what I'm trying to do?
If 1 = false (I'm guessing that's the case), is it worth the effort given all the potential problems of this approach, or would my time be better spent writing an automatic update feature into the application and just keeping it all inside the main app (so the user would just have to let the app update itself once a week)?
Another (probably bad) idea I had was shifting all the autospec code out to a separate DLL and just redeploying that when necessary. Or is it even possible to reference a single DLL on a shared network drive?
I guess this is some pretty dangerous territory whichever way I go. Can someone tell me if I'm opening a can of worms best left well and truly shut?
Is there a better way of going about this whole thing? I have a habit of overcomplicating things that I'm trying to kick :P
Just as additional info, the autospec code will not be user-input. It'll be me updating it every week (no-one else has access to it), so hopefully that will mitigate some security concerns at least.
Apologies if I've explained this badly.
Thanks in advance
Some options to consider:
1) If you had a good continuous integration system with automatic build and deployment, would deploying every week be such an issue?
2) Have you considered MEF or similar which would allow you to substitute just a single DLL containing the new rules?
3) If the formula can be expressed simply (without needing to eval some code, e.g. A+B+C+D > E+F+G+H => J or K) you might be able to use reflection to gather the parameter values and then apply them.
4) You could use Expressions in .NET 4 and build an expression tree from the database and then evaluate it (see the sketch after this list).
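A minimal sketch of option 4, skipping the part where the tree would be parsed out of the stored text; the parameter names and the formula are invented for illustration:

using System;
using System.Linq.Expressions;

class ExpressionRuleDemo
{
    static void Main()
    {
        // Parameters the rule refers to; in practice these would map to item values.
        var a = Expression.Parameter(typeof(decimal), "a");
        var b = Expression.Parameter(typeof(decimal), "b");
        var c = Expression.Parameter(typeof(decimal), "c");

        // if (a + b > c) return b; else return c;
        var body = Expression.Condition(
            Expression.GreaterThan(Expression.Add(a, b), c),
            b,
            c);

        // Compile once, then reuse the delegate every time AutospecQty is read.
        var rule = Expression.Lambda<Func<decimal, decimal, decimal, decimal>>(body, a, b, c)
                             .Compile();

        Console.WriteLine(rule(2m, 3m, 4m)); // 2 + 3 > 4, so prints 3
    }
}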
Looks like you may be well served by implementing the specification pattern.
As wikipedia describes it:
whereby business logic can be recombined by chaining the business logic together using boolean logic.
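A bare-bones version of that pattern (the interface and combinator below are just one common way to write it):

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// Combines two specifications with boolean AND; OR and NOT work the same way.
public class AndSpecification<T> : ISpecification<T>
{
    private readonly ISpecification<T> _left;
    private readonly ISpecification<T> _right;

    public AndSpecification(ISpecification<T> left, ISpecification<T> right)
    {
        _left = left;
        _right = right;
    }

    public bool IsSatisfiedBy(T candidate)
    {
        return _left.IsSatisfiedBy(candidate) && _right.IsSatisfiedBy(candidate);
    }
}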
Have you considered something like MEF, then you could have lots of small dlls implementing various versions of your calculations and simply reference which one to load up from the database.
That is assuming you can wrap them all in a single (or small number of) interfaces.
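As a sketch of how that might look with MEF (the IAutospecRule interface, member names, and folder handling are all assumptions):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IAutospecRule
{
    string ItemCode { get; }                                // which stock item the rule is for
    decimal Calculate(IDictionary<string, decimal> values); // values of the other items it needs
}

public class RuleLoader
{
    [ImportMany]
    public IEnumerable<IAutospecRule> Rules { get; set; }

    public void Load(string pluginFolder)
    {
        // Picks up every exported rule from the DLLs in the folder.
        var catalog = new DirectoryCatalog(pluginFolder);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}

// In the separately deployed rules DLL:
[Export(typeof(IAutospecRule))]
public class Item0001Rule : IAutospecRule
{
    public string ItemCode { get { return "Item0001"; } }

    public decimal Calculate(IDictionary<string, decimal> v)
    {
        return v["Item0002"] + v["Item0003"] > v["Item0004"] ? v["Item0002"] : v["Item0004"];
    }
}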
I would attack this problem by creating a domain-specific language which the program could interpret to execute the rules, then put snippets of the DSL code in the database.
As you can see, I also like to overcomplicate things. :-) But it works as long as the long-term use is simplified.
You could have your program compile your rules at runtime into a class that acts like a plugin, using the CSharpCodeProvider.
See Compiling code during runtime for a sample of how to do this.
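To give a feel for it, here's a stripped-down sketch (the IRule interface, the "Rule" class-name convention, and the error handling are assumptions on my part):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

public interface IRule
{
    decimal Evaluate();
}

public static class RuleCompiler
{
    // source is the C# text pulled from the database; it is expected to define
    // a public class called "Rule" that implements IRule.
    public static IRule Compile(string source)
    {
        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };
        options.ReferencedAssemblies.Add("System.dll");
        options.ReferencedAssemblies.Add(typeof(IRule).Assembly.Location);

        CompilerResults results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException(results.Errors[0].ToString());

        return (IRule)results.CompiledAssembly.CreateInstance("Rule");
    }
}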
So I have a structure like this:
Widget:
Component 1:
Component 2:
Component 3:
...
Component n:
I'm building an ASP.NET MVC web app that as part of its functionality will allow a user to create a Widget object and assign Component objects (which have a number of properties) as "children" of the Widget object. Users might have no Components or might add 50. In addition, they may edit a Widget object and arbitrarily remove or change Component properties.
Everything in the application is working, but I'm not happy with the way this is structured. On submit, I currently send all Components with ALL their properties, delete all Components currently associated with this Widget, and then enumerate over each Component and re-add it.
...But I'm not happy with this solution. For some Widgets with a massive number of components (say 500), this process can be time consuming even if the user only changed one component. But the alternative (tracking creates/updates/deletes on a per-Component basis) seems really painful to build.
I'm sure that I can do this better, so I'm interested in knowing what sort of patterns can be applied to solve this problem (generally speaking) and particular in the context of web applications.
Why is tracking the Create/Update/Delete so much harder? Take a look at my response to a similar question about finding the differences between what is in your repository and what is being posted back. Provided each Component has a unique ID (which it sounds like it does), it shouldn't be that difficult. Also it should be somewhat quicker for larger Widgets with lots of Components as you're not rebuilding its list every time.
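For example, a sketch of that diff (Component, IComponentRepository, and the change check are placeholders; the real comparison would look at whichever properties matter to you):

using System.Collections.Generic;
using System.Linq;

public class Component
{
    public int Id { get; set; }
    // ...other properties...
}

public interface IComponentRepository
{
    void Add(int widgetId, Component component);
    void Update(Component component);
    void Delete(int componentId);
}

public class ComponentSync
{
    // Applies only the differences between what's stored and what was posted,
    // instead of deleting and re-adding all of a Widget's components.
    public void Sync(int widgetId, IList<Component> existing, IList<Component> posted,
                     IComponentRepository repository)
    {
        var existingById = existing.ToDictionary(c => c.Id);
        var postedIds = new HashSet<int>(posted.Select(c => c.Id));

        foreach (var component in posted)
        {
            Component current;
            if (existingById.TryGetValue(component.Id, out current))
            {
                // Assumes Component.Equals compares the properties you care about.
                if (!current.Equals(component))
                    repository.Update(component);
            }
            else
            {
                repository.Add(widgetId, component);   // newly added by the user
            }
        }

        foreach (var removed in existing.Where(c => !postedIds.Contains(c.Id)))
            repository.Delete(removed.Id);             // removed by the user
    }
}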
I'm in the process of designing a system that will allow me to represent broad-scope tasks as workflows, which expose their workitems via an IEnumerable method. The intention here is to use C#'s 'yield' mechanism to allow me to write pseudo-procedural code that the workflow execution system can execute as it sees fit.
For example, say I have a workflow that includes running a query on the database and sending an email alert if the query returns a certain result. This might be the workflow:
public override IEnumerable<WorkItem> Workflow() {
    // These would probably be injected from elsewhere
    var db = new DB();
    var emailServer = new EmailServer();

    // other workitems here

    var ci = new FindLowInventoryItems(db);
    yield return ci;

    if (ci.LowInventoryItems.Any()) {
        var email = new SendEmailToWarehouse("Inventory is low.", ci.LowInventoryItems);
        yield return email;
    }

    // other workitems here
}
FindLowInventoryItems and SendEmailToWarehouse are objects deriving from WorkItem, which has an abstract Execute() method that the subclasses implement, encapsulating the behavior for those actions. The Execute() method gets called by the workflow framework: I have a WorkflowRunner class which enumerates the Workflow(), wraps pre- and post-events around each workitem, and calls Execute() in between those events. This allows the consuming application to do whatever it needs before or after workitems, including canceling, changing workitem properties, etc.
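For what it's worth, a stripped-down version of the pieces described above might look like this (the event hooks are simplified; the real runner presumably exposes richer pre/post information):

using System;
using System.Collections.Generic;

public abstract class WorkItem
{
    public abstract void Execute();
}

public class WorkflowRunner
{
    // Simplified hooks so the consuming application can react before/after each workitem.
    public event Action<WorkItem> BeforeWorkItem;
    public event Action<WorkItem> AfterWorkItem;

    public void Run(IEnumerable<WorkItem> workflow)
    {
        foreach (var item in workflow)
        {
            if (BeforeWorkItem != null) BeforeWorkItem(item);

            // After Execute(), iteration resumes inside the workflow method,
            // so the workflow code can branch on the workitem's results.
            item.Execute();

            if (AfterWorkItem != null) AfterWorkItem(item);
        }
    }
}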
The benefit to all this, I think, is that I can express the core logic of a task in terms of the workitems responsible for getting the work done, and I can do it in a fairly straightforward, almost procedural way. Also because I'm using IEnumerable, and C#'s syntactic sugar that supports it, I can compose these workflows - like higher-level workflows that consume and manipulate sub-workflows. For example I wrote a simple workflow that just interleaves two child workflows together.
My question is this - does this sort of architecture seem reasonable, especially from a maintainability perspective? It seems to achieve several goals for me - self-documenting code (the workflow reads procedurally, so I know what will be executed in what steps), separation of concerns (finding low inventory items does not depend on sending email to the warehouse), etc. Also - are there any potential problems with this sort of architecture that I'm not seeing? Finally, has this been tried before - am I just re-discovering this?
Personally, this would be a "buy before build" decision for me. I'd buy something before I'd write it.
I work for a company that's rather large and can be foolish with its money, so if you're writing this for yourself and can't buy something I'll retract the comment.
Here are a few random ideas:
I'd externalize the workflow into a configuration that I could read in on startup, maybe from a file or a database.
It'd look something like a finite state machine with states, transitions, events, and actions.
I'd want to be able to plug in different actions so I could customize different flows on the fly.
I'd want to be able to register different subscribers who would want to be notified when a particular event happened.
I wouldn't expect to see anything as hard-coded as that e-mail server. I'd rather encapsulate that into an EmailNotifier that I could plug into events that demanded it. What about a beeper notification? Or a cell phone? Blackberry? Same architecture, different notifier.
Do you want to include a handler for human interaction? All the workflows that I deal with are a mix of human and automated processing.
Do you anticipate wanting to connect to other systems, like databases, other apps, web services?
It's a tough problem. Good luck.
@Erik: (Addressing a comment about the applicability of my answer.) If you enjoy the technical challenge of designing and building your own custom workflow system, then my answer is not helpful. But if you are trying to solve a real-world WF problem with code that needs to be supported into the future, then I recommend using the built-in WF system.
Workflow support is now part of the .NET Framework and is called Windows Workflow Foundation (WF). It is almost certainly easier to learn how to use the built-in library than to write one of your own, as duffymo pointed out in his "buy before build" comment.
Workflows are expressed in XAML and are supported by a designer in Visual Studio.
There are three types of workflows (from Wikipedia, link below):
Sequential Workflow (typically flow-chart based; progresses from one stage to the next and does not step back)
State Machine Workflow (progresses from 'state' to 'state'; these workflows are more complex and can return to a previous point if required)
Rules-driven Workflow (implemented on top of a Sequential or State Machine workflow; the rules dictate the progress of the workflow)
Wikipedia: Windows Workflow Foundation
MSDN: Getting Started with Workflow Foundation (WF)