Here's the story so far:
I'm building a C# WinForms application to facilitate specifying equipment for hire quotations.
In it, I have a List<T> of ~1500 stock items.
These items have a property called AutospecQty that has a get accessor that needs to execute some code that is specific to each item. This code will refer to various other items in the list.
So, for example, one item (let's call it Item0001) has a get accessor that may need to execute code looking something like this:
// [some code to get the following items from the list here]
if (Item0002.Value + Item0003.Value > Item0004.Value)
{ return Item0002.Value; }
else
{ return Item0004.Value; }
Which is all well and good, but these bits of code are likely to change on a weekly basis, so I'm trying to avoid redeploying that often. Also, each item could (will) have wildly different code. Some will be querying the list, some will be doing some long-ass math functions, some will be simple addition as above...some will depend on variables not contained in the list.
What I'd like to do is to store the code for each item in a table in my database, then when the app starts just pull the relevant code out and bung it in a list, ready to be executed when the time comes.
Most of the examples I've seen on the internet regarding executing a string as code seem quite long-winded, convoluted, and/or not particularly novice-coder friendly (I'm a complete amateur), and don't seem to take into account being passed variables.
So the questions are:
Is there an easier/simpler way of achieving what I'm trying to do?
If the answer to 1 is no (I'm guessing that's the case), is it worth the effort of all the potential problems of this approach, or would my time be better spent writing an automatic update feature into the application and just keeping it all inside the main app (so the user would just have to let the app update itself once a week)?
Another (probably bad) idea I had was shifting all the autospec code out to a separate DLL and just redeploying that when necessary. Is it even possible to reference a single DLL on a shared network drive?
I guess this is some pretty dangerous territory whichever way I go. Can someone tell me if I'm opening a can of worms best left well and truly shut?
Is there a better way of going about this whole thing? I have a habit of overcomplicating things that I'm trying to kick :P
Just as additional info, the autospec code will not be user-input. It'll be me updating it every week (no-one else has access to it), so hopefully that will mitigate some security concerns at least.
Apologies if I've explained this badly.
Thanks in advance
Some options to consider:
1) If you had a good continuous integration system with automatic build and deployment, would deploying every week be such an issue?
2) Have you considered MEF or similar which would allow you to substitute just a single DLL containing the new rules?
3) If the formula can be expressed simply (without needing to eval some code, e.g. A+B+C+D > E+F+G+H => J or K) you might be able to use reflection to gather the parameter values and then apply them.
4) You could use Expressions in .NET 4 and build an expression tree from the database and then evaluate it.
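If option 4 appeals, here is a minimal sketch of building and compiling such an expression tree by hand. The rule shape mirrors the question's Item0002/Item0003/Item0004 example; in a real system the parameters and the comparison would be assembled from whatever rows the database returns rather than hard-coded:

using System;
using System.Linq.Expressions;

class ExpressionRuleDemo
{
    static void Main()
    {
        // Hypothetical rule: if (a + b > c) return a; else return c;
        var a = Expression.Parameter(typeof(decimal), "a");
        var b = Expression.Parameter(typeof(decimal), "b");
        var c = Expression.Parameter(typeof(decimal), "c");

        var body = Expression.Condition(
            Expression.GreaterThan(Expression.Add(a, b), c),
            a,   // value when the condition holds
            c);  // value otherwise

        // Compile once at startup, then call like any delegate.
        var rule = Expression.Lambda<Func<decimal, decimal, decimal, decimal>>(
            body, a, b, c).Compile();

        Console.WriteLine(rule(2m, 3m, 4m)); // prints 2, since 2 + 3 > 4
    }
}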
Looks like you may be well served by implementing the specification pattern.
As Wikipedia describes it, it is a pattern "whereby business logic can be recombined by chaining the business logic together using boolean logic."
Have you considered something like MEF? Then you could have lots of small DLLs implementing various versions of your calculations and simply reference which one to load from the database.
That is assuming you can wrap them all in a single (or small number of) interfaces.
I would attack this problem by creating a domain specific language which the program could interpret to execute the rules, then put snippets of the DSL code in the database.
As you can see, I also like to overcomplicate things. :-) But it works as long as the long-term use is simplified.
You could have your program compile your rules at runtime into a class that acts like a plugin, using the CSharpCodeProvider.
See Compiling code during runtime for a sample of how to do this.
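For flavor, a minimal sketch of the CSharpCodeProvider route, assuming each database row stores the full source of a small rule class (Rule0001 and its Evaluate signature are made up for illustration):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class RuntimeCompileDemo
{
    static void Main()
    {
        // In the real app this string would be pulled from the database.
        const string source = @"
            public class Rule0001
            {
                public double Evaluate(double a, double b, double c)
                {
                    return (a + b > c) ? a : c;
                }
            }";

        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters { GenerateInMemory = true };
            CompilerResults results =
                provider.CompileAssemblyFromSource(options, source);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException("Rule failed to compile.");

            // Instantiate the freshly compiled class and invoke the rule.
            object rule = results.CompiledAssembly.CreateInstance("Rule0001");
            object result = rule.GetType().GetMethod("Evaluate")
                                .Invoke(rule, new object[] { 2.0, 3.0, 4.0 });
            Console.WriteLine(result); // 2
        }
    }
}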
Let's assume I am in charge of developing a Scrabble game, given that one of the principal requirements of the client is the ability to later try out different ways and modes of the game. I already made a design that is flexible enough to support those kinds of changes. The only question left is what to expose to the client (objects' access modifiers), and how to organize it (how to expose my objects in namespaces/packages).
How should I define things such that the client can both easily use my standard implementation (a standard Scrabble game) and yet be able to make all the modifications he wants? I guess what I need is a kind of framework on which he can work.
I organized my classes/interfaces in a non-strict layered system:
Data Types
Contains basic data types that might be used in the whole system. This package and its members can be accessed by anyone in the system. All its members are public.
Domain
Contains all the interfaces I've defined and that might be useful to be able to make client's new Scrabble's implementations. Also contains value types, like Piece, that are used in the game. All its members are public.
Implementations
Contains all the needed classes/code to implement my standard Scrabble game, in an Implementations.StandardScrabble package. If the client decides to implement other variants of the game, he can create them in Implementations.XYZ, for example.
These classes are all package protected and the only thing that is available to the outside of the package is a Game façade. Uses both Domain and Data Types packages.
UI
Contains the UI class that I have implemented so that both the client and the users of the program can run the game (my implementation). Can access all the other layers.
There are several drawbacks to the way I am organizing things, the most obvious being that if the client wants to create his own version of the game, he will have to implement almost everything by himself (I share the interfaces in Domain, but he can do almost nothing with them alone). I feel I should maybe move all the Implementations classes into Domain and then have only a façade that builds up my standard Scrabble in the Implementations namespace?
How would you approach this? Is there any recommended reading on how to build these kinds of programs (basically, frameworks)?
Thanks
I think you're trying to give too much freedom to the client. This must be making things difficult for you to handle. Based on what you have described, it seems the client will be able to modify almost all parts of your game: model, logic, UI... I think it would be better to restrict the modifiable areas in your application but expose some of them via a general plugin interface set. This would make it easier for the user as well: he would only need to learn how the plugins work, not the entire application's logic. Define areas for your plugins if you want (a UI plugin, a game mode plugin, and so on). Many production applications and games work this way (recall Diablo II and the AMAZING variety of plugins it has!).
For the algorithms and strategies I would define interfaces and default implementations, and provide abstract superclasses which your own implementations extend, so that all the boilerplate code sits in the abstract superclass. In addition, I would allow the client to subclass your implementations. Just write more than one implementation yourself, and you'll see what to place where.
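A small sketch of that layering in C#, with hypothetical names (IWordScorer, WordScorerBase); the boilerplate sits in the abstract base once, so your standard implementation and any client variant stay thin:

// Hypothetical Scrabble scoring strategy; names are illustrative only.
public interface IWordScorer
{
    int Score(string word);
}

// Shared boilerplate (argument checks, common helpers) lives here.
public abstract class WordScorerBase : IWordScorer
{
    public int Score(string word)
    {
        if (string.IsNullOrEmpty(word))
            return 0;
        return ScoreCore(word);
    }

    protected abstract int ScoreCore(string word);
}

// Your default implementation; a client's variant subclasses the same base.
public class StandardWordScorer : WordScorerBase
{
    protected override int ScoreCore(string word)
    {
        return word.Length; // placeholder for real letter values
    }
}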
But most important: Give your client the code. If he needs to understand where to place his code, he should be able to see what you have coded, too. No need to hide stuff.
Whatever design you come up with, I would err on the side of hiding as much of the implementation as possible. Once you expose an implementation, you cannot take it back (unless you're ready to wage a flame war with your client base). You can always provide default implementations later as you see fit.
Generally, I'd start with only providing thin interfaces. Then, before providing abstract classes, I might offer utility classes (e.g., Factories, Builders, etc.).
I'd recommend reading Effective Java by Josh Bloch for useful general practices when designing object-oriented code.
MVC/Compound Pattern
You could release an early version of your package and upgrade it later based on user requirements.
If you use MVC or another compound pattern wisely, I believe you will also be able to upgrade your package easily.
Today I had an epiphany, and it was that I was doing everything wrong. Some history: I inherited a C# application, which was really just a collection of static methods, a completely procedural mess of C# code. I refactored this the best I knew at the time, bringing in lots of post-college OOP knowledge. To make a long story short, many of the entities in code have turned out to be Singletons.
Today I realized I needed 3 new classes, each of which would follow the same Singleton pattern to match the rest of the software. If I keep tumbling down this slippery slope, eventually every class in my application will be a Singleton, which will really be no logically different from the original group of static methods.
I need help on rethinking this. I know about Dependency Injection, and that would generally be the strategy to use in breaking the Singleton curse. However, I have a few specific questions related to this refactoring, and all about best practices for doing so.
How acceptable is the use of static variables to encapsulate configuration information? I have a brain block on using static, and I think it is due to an early OO class in college where the professor said static was bad. But, should I have to reconfigure the class every time I access it? When accessing hardware, is it ok to leave a static pointer to the addresses and variables needed, or should I continually perform Open() and Close() operations?
Right now I have a single method acting as the controller. Specifically, I continually poll several external instruments (via hardware drivers) for data. Should this type of controller be the way to go, or should I spawn separate threads for each instrument at the program's startup? If the latter, how do I make this object oriented? Should I create classes called InstrumentAListener and InstrumentBListener? Or is there some standard way to approach this?
Is there a better way to do global configuration? Right now I simply have Configuration.Instance.Foo sprinkled liberally throughout the code. Almost every class uses it, so perhaps keeping it as a Singleton makes sense. Any thoughts?
A lot of my classes are things like SerialPortWriter or DataFileWriter, which must sit around waiting for this data to stream in. Since they are active the entire time, how should I arrange these in order to listen for the events generated when data comes in?
Any other resources, books, or comments about how to get away from Singletons and other pattern overuse would be helpful.
Alright, here's my best shot at attacking this question:
(1) Statics
The problem with static that you may be having is that it means different things in .NET and, say, C++. Static basically means the member is accessible on the class itself. As for its acceptability, I'd say it's something you'd use for non-instance-specific operations on a class, or general things like Math.Abs(...). What you should use for a global config is probably a statically accessed property holding the current/active configuration, perhaps with some static methods for loading/saving it. The config itself, however, should be an object so it can be passed around, manipulated, etc.
public class MyConfiguration
{
    public const string DefaultConfigPath = "./config.xml";

    protected static MyConfiguration _current;

    public static MyConfiguration Current
    {
        get
        {
            if (_current == null)
                Load(DefaultConfigPath);
            return _current;
        }
    }

    public static MyConfiguration Load(string path)
    {
        // Do your actual loading here (e.g. deserialize the XML at 'path')
        var loadedConfig = new MyConfiguration();
        _current = loadedConfig;
        return loadedConfig;
    }

    // Static save function would go here

    //*********** Non-Static Members *********//

    public string MyVariable { get; set; }
    // etc..
}
(2) Controller/Hardware
You should probably look into a reactive approach with IObserver<T>/IObservable<T>; it's part of the Reactive Extensions (Rx).
Another approach is using a ThreadPool to schedule your polling tasks, as you may otherwise end up with a large number of threads if you have a lot of hardware to poll. Before using any kind of threading, make sure you learn a lot about it: it's very easy to make mistakes you may not even realize you've made. This book is an excellent source and will teach you lots.
Either way you should probably build services (just a name, really) for managing your hardware, each responsible for collecting information from its instrument (essentially a model pattern). From there your central controller can use them to access the data, keeping the program logic in the controller and the hardware logic in the services.
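As a sketch of that service idea (IInstrument and InstrumentService are invented names), Rx lets each service own its polling schedule while the controller just subscribes:

using System;
using System.Reactive.Linq; // Rx (Reactive Extensions)

// Hypothetical wrapper around one piece of hardware.
public interface IInstrument
{
    double ReadValue(); // assumed blocking read against the driver
}

public class InstrumentService
{
    private readonly IInstrument _instrument;

    public InstrumentService(IInstrument instrument)
    {
        _instrument = instrument;
    }

    // Expose readings as a push-based stream; consumers subscribe
    // instead of running their own polling loops.
    public IObservable<double> Readings(TimeSpan interval)
    {
        return Observable.Interval(interval)
                         .Select(_ => _instrument.ReadValue());
    }
}

// Usage from the controller:
// service.Readings(TimeSpan.FromSeconds(1))
//        .Subscribe(v => Console.WriteLine("Reading: {0}", v));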
(3) Global Configuration
I may have touched on this subject in point #1, but generally that's the way to go; if you find yourself typing too much, you can always pull it into a local variable, assuming the .Instance is an object:
MyConfiguration cfg = MyConfiguration.Current;
var foo = cfg.Foo; // etc...
(4) Listening for Data
Again, the reactive framework could help you out, or you could build an event-driven model that uses triggers for incoming data. This makes sure you're not blocking a thread until data comes in, and it can greatly reduce the complexity of your application.
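A bare-bones version of that event-driven shape (all names hypothetical): the IO layer raises an event when bytes arrive, and consumers like the question's SerialPortWriter or DataFileWriter subscribe rather than block:

using System;

public class DataReceivedEventArgs : EventArgs
{
    public byte[] Data { get; private set; }
    public DataReceivedEventArgs(byte[] data) { Data = data; }
}

public class InstrumentChannel
{
    // Raised by the IO layer whenever new bytes show up.
    public event EventHandler<DataReceivedEventArgs> DataReceived;

    protected virtual void OnDataReceived(byte[] data)
    {
        var handler = DataReceived; // local copy for thread safety
        if (handler != null)
            handler(this, new DataReceivedEventArgs(data));
    }
}

// A consumer just subscribes:
// channel.DataReceived += (s, e) => dataFileWriter.Write(e.Data);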
For starters, you can limit the use of singletons through the "Registry" pattern, which effectively means you have one singleton that lets you get to a bunch of other preconfigured objects.
This is not a "fix" but an improvement: it makes the many objects that are singletons a little more normal and testable. E.g. (totally contrived example):
HardwareRegistry.SerialPorts.Serial1.Send("blah");
But the real problem seems to be that you are struggling to make a set of objects work nicely together. There are two kinds of steps in OO: configuring objects, and letting objects do their thing.
So perhaps look at how you can configure non-singleton objects to work together, and then hang them off a registry.
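A contrived sketch of that registry shape (ISerialPort and the method names are made up); the point is a single configuration step at startup, which a test can repeat with fakes:

using System.Collections.Generic;

public interface ISerialPort
{
    void Send(string data);
}

public static class HardwareRegistry
{
    private static IDictionary<string, ISerialPort> _ports =
        new Dictionary<string, ISerialPort>();

    // Called once at startup; tests call it again with mock ports.
    public static void Configure(IDictionary<string, ISerialPort> ports)
    {
        _ports = ports;
    }

    public static ISerialPort Port(string name)
    {
        return _ports[name];
    }
}

// Usage: HardwareRegistry.Port("Serial1").Send("blah");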
Static :-
Plenty of exceptions to the rules here, but in general, avoid it. It is useful, though, for implementing singletons and for creating methods that do "general" computing outside the context of an object (like Math.Min).
Data Monitoring :-
It's often better to do as you hint at: create a thread with a bunch of preconfigured objects that do your monitoring. Use message passing between threads (through a thread-safe queue) to limit locking problems, and use the registry pattern to access hardware resources.
You want something like an InstrumentListener that uses an InstrumentProtocol (which you subclass for each protocol) to, I dunno, LogData. The command pattern may be of use here.
Configuration :-
Take your configuration information and use something like the "builder" pattern to translate it into a set of objects set up in a particular way. I.e., don't make your classes aware of configuration; make an object that configures objects in a particular way.
Serial Ports :-
I do a bunch of work with these. What I have is a serial connection that generates a stream of characters, which it posts as an event. Then I have something that interprets the protocol stream into meaningful commands. My protocol classes work with a generic "IConnection", of which a SerialConnection is one implementation; I also have TcpConnections, MockConnections, etc., to be able to inject test data, pipe serial ports from one computer to another, and so on. So the protocol classes just interpret a stream, have a state machine, and dispatch commands. The protocol is preconfigured with a Connection; various things get registered with the protocol, so when it has meaningful data they are triggered and do their thing. All of this is built from a configuration at the beginning, or rebuilt on the fly if something changes.
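A stripped-down sketch of that arrangement; the member names are guesses, but the shape is the one described: protocol classes are written against a generic IConnection, so a MockConnection can stand in for real hardware:

using System;

public class CharsReceivedEventArgs : EventArgs
{
    public string Chars { get; private set; }
    public CharsReceivedEventArgs(string chars) { Chars = chars; }
}

// The generic connection the protocol classes consume.
public interface IConnection
{
    event EventHandler<CharsReceivedEventArgs> CharsReceived;
    void Send(string data);
}

// Test double: feeds canned data to whatever is listening.
public class MockConnection : IConnection
{
    public event EventHandler<CharsReceivedEventArgs> CharsReceived;

    public void Send(string data) { /* record or ignore */ }

    public void InjectTestData(string chars)
    {
        var handler = CharsReceived;
        if (handler != null)
            handler(this, new CharsReceivedEventArgs(chars));
    }
}

// A protocol interpreter is preconfigured with any IConnection.
public class ProtocolInterpreter
{
    public ProtocolInterpreter(IConnection connection)
    {
        connection.CharsReceived += (s, e) => Interpret(e.Chars);
    }

    private void Interpret(string chars)
    {
        // state machine: accumulate chars, dispatch commands when complete
    }
}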
Since you know about Dependency Injection, have you considered using an IoC container to manage lifetimes? See my answer to a question on static classes.
You (the OP) seem preoccupied with OO design, so I'll put it this way when thinking about static variables: the core concepts are encapsulation and reuse. Some things you couldn't care less about reusing, but you almost always want the encapsulation. If it's a static variable, it's not really encapsulated, is it? Think about who needs to access it, why, and how far you can hide it from client code. Good designs can often change their internals without much breakage to clients; that is what you want to think about. I agree with Scott Meyers (Effective C++) about many things; OOP goes way beyond the class keyword. If you've never heard of them, look up properties: yes, they can be static, and C# has a very good way of using them, as opposed to literally using a static variable. Like I hinted at the start of this item: think about how not to shoot yourself in the foot later as the class changes with time. That's something many programmers fail to do when designing classes.
Take a look at the Rx framework someone mentioned. The threading model to use in a situation like the one you describe is not readily decidable without more specifics about the use case, IMHO. Be sure you know what you're doing with threads: a lot of people can't figure out threads to save their lives. It's not that hard, but being thread safe can be when (re)using code. Remember that controllers should often be separate from the objects they control (i.e., not the same class); if you don't know why, look up a book on MVC and buy the Gang of Four book.
Depends on what you need. For many applications a class that is almost entirely filled with static data is good enough: like a singleton for free. It can be done very OO. Sometimes you would rather have multiple instances or play with injection, which makes things more complex.
I suggest threads and events. The ease of making code event driven is actually one of the nicer things about C# IMHO.
Hmm, killing off singletons...
In my experience, a lot of the more common uses young programmers put singletons to are little more than a waste of the class keyword: namely, something they meant as a stateful module rolled into a there-can-be-only-one "highlander" class, and there are some bad singleton implementations out there to match. Whether this is because they failed to learn what they're doing, or only had Java in college, I dunno. Back in C land, it's called using data at file scope and exposing an API. In C# (and Java) you're bound to the class construct more than in many languages, but OOP != the class keyword; learn the left-hand side of that expression well.
A decently written class can use static data to effectively implement a singleton, and make the compiler do the legwork of keeping it one, or as close to one as you are ever going to get. Do NOT replace singletons with inheritance unless you seriously know what the heck you are doing. Poorly done inheritance of such things leads to more brittle code that knows waaaay too much. Classes should be dumb; data is smart. That sounds stupid unless you look at the statement deeply. Using inheritance for such a thing is, IMHO, generally a bad thing(tm); languages have the concept of modules/packages for a reason.
If you are up for it (hey, you did convert it all to singletons ages ago, right?), sit down and think a bit: how can I best structure this app in order to make it work XXX way? Then think about how doing it XXX way impacts things; for example, is doing it this way going to be a source of contention among threads? You can go over a lot of options in an hour like that. When you get older, you'll learn better techniques.
Here is one suggestion for an XXX way to start with: visualize writing a composite controller class that works as a manager over the objects it references. Those objects were your singletons; now the controller holds them, and they are just instances of those classes. This isn't the best design for a lot of applications (it can particularly be an issue in heavily threaded ones, IMHO), but it will generally solve what causes most younglings to reach for a singleton, and it will perform suitably for a vast array of programs. It's, uh, like design pattern CS 102. Forget the singleton you learned in CS 607.
That controlling class (perhaps "Application" would be a more apt name ;)) basically solves your need for singletons and for storing configuration. How to do it in a sublimely OO way (assuming you do understand OOP) and not shoot yourself in the foot (again) is an exercise for your own education.
If it shows, I am not a fan of the so-called singleton pattern, particularly how it is often misused. Moving a code base away from it often depends on how much refactoring you are prepared to do. Singletons are like global variables: convenient but not butter. Hmm, I think I'll put that in my quotations file; it has a nice ring to it...
Honestly, you know more about the code base and the application in question than anyone here. So no one can really design it for you, and advice speaks less than action, at least where I come from.
I limit myself to at most two singletons in an application / process. One is usually called SysConfig and houses things that might otherwise end up as global variables or other corrupt concepts. I don't have a name for the second one since, so far, I've never actually reached my limit. :-)
Static member variables have their uses, but I view them as I view proctologists: a lifesaver when you need one, but the odds should be "a million to one" (Seinfeld reference) that you can't find a better way to solve the problem.
Create a base instrument class that implements a threaded listener. Derived classes of it would contain the instrument-specific drivers, etc. Instantiate a derived class for each instrument, then store the objects in a container of some sort; at cleanup time, just iterate through the container. Each instrument instance should be constructed by passing it some registration information on where to send its output/status/whatever. Use your imagination here; the OO stuff gets quite powerful.
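A rough sketch of that base class (names invented); each derived class overrides ReadInstrument() with its driver-specific calls, and the container owner calls Stop() on every instance at cleanup:

using System.Threading;

public abstract class InstrumentBase
{
    private Thread _listener;
    private volatile bool _running;

    public void Start()
    {
        _running = true;
        _listener = new Thread(Listen) { IsBackground = true };
        _listener.Start();
    }

    public void Stop()
    {
        _running = false;
        _listener.Join();
    }

    private void Listen()
    {
        while (_running)
            ReadInstrument(); // blocking, driver-specific read
    }

    // Derived classes talk to their particular instrument here and
    // forward results to wherever they were registered to send them.
    protected abstract void ReadInstrument();
}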
I recently had to tackle a similar problem, and what I did seemed to work well for me, maybe it will help you:
(1) Group all "global" information into a single class. Let's call it Configuration.
(2) For all classes which used to use these static objects, change them to (ultimately) inherit from a new abstract base class that looks something like this:
abstract class MyBaseClass
{
    protected Configuration config; // You can also wrap it in a property

    public MyBaseClass(Configuration config)
    {
        this.config = config;
    }
}
(3) Change all constructors of classes deriving from MyBaseClass accordingly. Then just create one instance of Configuration at start and pass it on everywhere.
Cons:
You need to refactor many of your constructors and every place in which they are called
This won't work well if your top-level classes cannot all derive from MyBaseClass (for example, because some already inherit from another base class). You can add the config field to each such class instead; it's just less elegant.
Pros
Not a lot of effort: just change inheritance and constructors, and bang, you can replace every Configuration.Instance with config.
You get rid of static variables completely, so there are no problems if, for example, your application suddenly turns into a library and someone tries to invoke multiple methods concurrently, or whatever.
Great question. A few quick thoughts from me...
static in C# should only be used for data that is exactly the same for all instances of a given class. Since you're currently stuck in Singleton hell, you only have one instance of everything anyway, but once you break out of that, this is the general rule (at least, it is for me). If you start threading your classes, you may want to back off static usage, because then you have potential concurrency issues, but that's something that can be tackled later.
I'm not sure how your hardware actually works, but assuming there's some basic functionality that's the same across all of it (like how you interface with the devices at the raw data level), then this is a perfect case for a class hierarchy: the base class implements the low-level/common stuff, with virtual methods that descendant classes override to properly interpret the data / feed it onward / whatever.
Good luck.
Updated version of question
Hi.
My company has a few legacy code bases which I hope to get under test as soon as they migrate to .NET 3.5. I have selected Moq as my Mocking framework (I fell in love with the crisp syntax immediately).
One common scenario, which I expect to see a lot of in the future, is where I see an object which interacts with some other objects.
I know the works of Michael Feathers and I am getting good at identifying inflection points and isolating decent sized components. Extract and Override is king.
However, there is one feature which would make my life a whole lot easier.
Imagine Component1 interacting with Component2. Component2 is some weird serial-line interface to a fire central or some such, with a lot of byte inspection, casting and pointer manipulation. I do not wish to understand Component2, and the legacy interface it exposes to Component1 carries a lot of baggage.
What I would like to do, is to extract the interface of Component2 consumed by Component1 and then do something like this:
component1.FireCentral = new Mock<IComponent2>(component2);
I am creating a normal mock, but I am passing an instance of the real Component2 in as a constructor argument to the mock object. It may seem like I'm making my test depend on Component2, but I am not planning on keeping this code; this is part of the "place object under test" ritual.
Now, I would fire up the real system (with a physical fire central connected) and then interact with my object.
What I would then wish for is to inspect the mock to see a log of how Component1 interacted with Component2 (using the debugger to inspect some collection of strings on the mock). And, even better, the mock could provide a list of expectations (in C#) that would recreate this behavior in a mock that did not depend on Component2, which I would then use in my test code.
In short. Using the mocking framework to record the interaction so that I can play it back in my test code.
Old version of question
Hi.
Working with legacy code and with a lot of utility classes, I sometimes find myself wondering how a particular class is acted upon by its surroundings in a number of scenarios.
One case that I was working on this morning involved subclassing a regular MemoryStream so that it would dump its contents to file when reaching a certain size.
// TODO: Remove - diagnostic hack only
private class MyLimitedMemoryStream : MemoryStream
{
    public override void Write(byte[] buffer, int offset, int count)
    {
        // Dump the stream's contents to a file once it grows past ~10 MB.
        if (Length > 10000000)
        {
            using (var file = new FileStream("c:\\foobar.html", FileMode.Create))
            {
                // GetBuffer() returns the whole internal buffer; only the
                // first Length bytes contain valid data.
                file.Write(GetBuffer(), 0, (int)Length);
            }
        }
        base.Write(buffer, offset, count);
    }
}
(and I used a breakpoint in here to exit the program after the file was written).
This worked, and I found which web form (web part -> web part -> web part) control rendered incorrectly. However, MemoryStream has a bunch of Write and WriteLine overloads.
Can I use a mocking framework to quickly get an overview of how a particular instance is acted upon? Any clever tricks there? We use Rhino Mocks.
I see this as a great asset in working with legacy code, especially if recorded actions in a scenario can easily be set up as new expectations/acceptance criteria for that same scenario replicated in a unit test.
Every input is appreciated. Thank you for reading.
Welcome to the small club of "visionaries" who understand this requirement :)
Unfortunately I'll tell you, I don't think this exists yet for .NET. I'm also pretty sure it doesn't exist for Java either... as I've been searching periodically for a few years, and even offered a cash bounty for this on a pay-for-work website, and nothing turned up (some Russian developer offered to implement it from scratch, but it was outside my budget).
But I have created a proof of concept in PHP to demonstrate the idea, and perhaps to get other people interested in developing this for other languages (.NET for you, Java for me).
Here is the PHP proof-of-concept:
http://code.google.com/p/php-mock-recorder/
I don't think you can use a mocking framework to easily get an overview of how a particular instance is acted upon.
You can, however, use the mocking framework to define how it should be acted upon, in order to verify that it is acted upon in this way. In legacy code this often requires making the code testable first, e.g. by introducing interfaces.
One technique that can be used with legacy code without needing too much restructuring is logging seams. You can read more about this in this InfoQ article: Using Logging Seams for Legacy Code Unit Testing.
If you want more tips on how to test legacy code, I recommend the book Working Effectively with Legacy Code by Michael Feathers.
Hope this helps!
Yes, this is possible. If you use a strict mock and run a unit test that exercises the mock, the test will fail, telling you which unexpected method was called.
Is this what you're looking for?
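For example, with Moq (which the updated question settles on), a strict mock throws on any call that has no matching Setup, and the exception message names the member. Open(), Poll(), and the FireCentral wiring below are hypothetical stand-ins for whatever the extracted IComponent2 interface really contains:

using Moq;

var fireCentral = new Mock<IComponent2>(MockBehavior.Strict);
fireCentral.Setup(fc => fc.Open()); // only expected calls get a Setup

var component1 = new Component1 { FireCentral = fireCentral.Object };

// Any member invoked without a Setup now throws a MockException that
// names the unexpected call - a crude interaction log, surfaced one
// missing expectation at a time.
component1.Poll();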
Mock frameworks weren't designed for this problem. I don't see how you can make this work with either Moq or Rhino Mocks; even the powerful TypeMock may not be able to do what you're asking.
Instead, use an aspect-oriented programming (AOP) tool to weave pre- and post-method-invocation calls. This will do exactly what you want: see all interactions with a particular type. For example, in the PostSharp AOP framework, you simply specify methods you'd like called before and after a method call on some other object:
public class Component2TracerAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        if (eventArgs.Method == somethingOnComponent2) // Pseudo-code
        {
            Trace.TraceInformation("Entering {0}.", eventArgs.Method);
        }
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        if (eventArgs.Method == somethingOnComponent2) // Pseudo-code
        {
            Trace.TraceInformation("Leaving {0}.", eventArgs.Method);
        }
    }
}
That will log all the methods that are called on Component2.
I'm in the process of designing a system that will allow me to represent broad-scope tasks as workflows, which expose their workitems via an IEnumerable method. The intention here is to use C#'s 'yield' mechanism to allow me to write pseudo-procedural code that the workflow execution system can execute as it sees fit.
For example, say I have a workflow that includes running a query on the database and sending an email alert if the query returns a certain result. This might be the workflow:
public override IEnumerable<WorkItem> Workflow()
{
    // These would probably be injected from elsewhere
    var db = new DB();
    var emailServer = new EmailServer();

    // other workitems here

    var ci = new FindLowInventoryItems(db);
    yield return ci;

    if (ci.LowInventoryItems.Any())
    {
        var email = new SendEmailToWarehouse("Inventory is low.", ci.LowInventoryItems);
        yield return email;
    }

    // other workitems here
}
FindLowInventoryItems and SendEmailToWarehouse are objects deriving from WorkItem, which has an abstract Execute() method that the subclasses implement, encapsulating the behavior for those actions. The Execute() method gets called by the workflow framework: I have a WorkflowRunner class which enumerates the Workflow(), wraps pre- and post-events around each workitem, and calls Execute() in between the events. This allows the consuming application to do whatever it needs before or after workitems, including canceling, changing workitem properties, etc.
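A sketch of such a runner, assuming an event-args shape (the question itself only names WorkItem, Execute(), and the pre/post events):

using System;
using System.Collections.Generic;

public abstract class WorkItem
{
    public abstract void Execute();
}

public class WorkItemEventArgs : EventArgs
{
    public WorkItem Item { get; private set; }
    public bool Cancel { get; set; } // consumers may cancel a step
    public WorkItemEventArgs(WorkItem item) { Item = item; }
}

public class WorkflowRunner
{
    public event EventHandler<WorkItemEventArgs> BeforeWorkItem;
    public event EventHandler<WorkItemEventArgs> AfterWorkItem;

    public void Run(IEnumerable<WorkItem> workflow)
    {
        // Enumerating lazily means each workitem is created only when
        // the workflow's yield return for it is reached.
        foreach (var item in workflow)
        {
            var args = new WorkItemEventArgs(item);
            if (BeforeWorkItem != null) BeforeWorkItem(this, args);
            if (args.Cancel) continue;
            item.Execute();
            if (AfterWorkItem != null) AfterWorkItem(this, args);
        }
    }
}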
The benefit to all this, I think, is that I can express the core logic of a task in terms of the workitems responsible for getting the work done, and I can do it in a fairly straightforward, almost procedural way. Also because I'm using IEnumerable, and C#'s syntactic sugar that supports it, I can compose these workflows - like higher-level workflows that consume and manipulate sub-workflows. For example I wrote a simple workflow that just interleaves two child workflows together.
My question is this - does this sort of architecture seem reasonable, especially from a maintainability perspective? It seems to achieve several goals for me - self-documenting code (the workflow reads procedurally, so I know what will be executed in what steps), separation of concerns (finding low inventory items does not depend on sending email to the warehouse), etc. Also - are there any potential problems with this sort of architecture that I'm not seeing? Finally, has this been tried before - am I just re-discovering this?
Personally, this would be a "buy before build" decision for me. I'd buy something before I'd write it.
I work for a company that's rather large and can be foolish with its money, so if you're writing this for yourself and can't buy something I'll retract the comment.
Here are a few random ideas:
I'd externalize the workflow into a configuration that I could read in on startup, maybe from a file or a database.
It'd look something like a finite state machine with states, transitions, events, and actions.
I'd want to be able to plug in different actions so I could customize different flows on the fly.
I'd want to be able to register different subscribers who would want to be notified when a particular event happened.
I wouldn't expect to see anything as hard-coded as that e-mail server. I'd rather encapsulate that into an EmailNotifier that I could plug into events that demanded it. What about a beeper notification? Or a cell phone? Blackberry? Same architecture, different notifier.
Do you want to include a handler for human interaction? All the workflows that I deal with are a mix of human and automated processing.
Do you anticipate wanting to connect to other systems, like databases, other apps, web services?
It's a tough problem. Good luck.
#Erik: (Addressing a comment about the applicability of my answer.) If you enjoy the technical challenge of designing and building your own custom workflow system then my answer is not helpful. But if you are trying to solve a real-world WF problem with code that needs to be supported into the future then I recommend using the built-in WF system.
Workflow support is now part of the .NET Framework and is called Windows Workflow Foundation (WF). It is almost certainly easier to learn how to use the built-in library than to write your own, as duffymo pointed out in his "buy before build" comment.
Workflows are expressed in XAML and are supported by a designer in Visual Studio.
There are three types of workflows (from Wikipedia, link below):

Sequential Workflow (typically flowchart based; progresses from one stage to the next and does not step back)

State Machine Workflow (progresses from 'State' to 'State'; these workflows are more complex and can return to a previous point if required)

Rules-driven Workflow (implemented on top of a Sequential/State Machine workflow; the rules dictate the progress of the workflow)
Wikipedia: Windows Workflow Foundation
MSDN: Getting Started with Workflow Foundation (WF)