Refactoring a C# Save Command Handler

I have the following command handler. The handler takes a command object and uses its properties to either create or update an entity.
It decides between the two based on the command's nullable Id property: if Id is null, it creates; otherwise, it updates.
public class SaveCategoryCommandHandler : ICommandHandler<SaveCategoryCommand>
{
    public SaveCategoryCommandHandler(
        ICategoryRepository<Category> categoryRepository,
        ITracker<User> tracker,
        IMapProcessor mapProcessor,
        IUnitOfWork unitOfWork,
        IPostCommitRegistrator registrator)
    {
        // Private fields are set up. The definitions for the fields have been removed for brevity.
    }

    public override void Handle(SaveCategoryCommand command)
    {
        // The only thing here that is important to the question is the ternary operator below.
        var category = command.Id.HasValue ? GetForUpdate(command) : Create(command);

        // The code below is not important to the question, but it is common to both
        // the create and update operations.
        MapProcessor.Map(command, category);
        UnitOfWork.Commit();
        Registrator.Committed += () =>
        {
            command.Id = category.Id;
        };
    }

    private Category GetForUpdate(SaveCategoryCommand command)
    {
        // Category is retrieved and tracking information added.
    }

    private Category Create(SaveCategoryCommand command)
    {
        // Category is created via the ICategoryRepository, and some other things happen too.
    }
}
I used to have two handlers, one for creating and one for updating, along with two commands for creating and updating. Everything was wired up using IoC.
After refactoring into one class to reduce the amount of code duplication, I ended up with the handler class above. Another motivation was to avoid having two commands (UpdateCategoryCommand and CreateCategoryCommand), which led to more duplication in validation and similar concerns.
One example of this was needing two validation decorators for what were effectively the same command (they differed only by an Id property). The decorators did use inheritance, but it is still a pain when there are a lot of commands to deal with.
There are a few things that bug me about the refactored handler.
One is the number of dependencies being injected. Another is that there is a lot going on in the class. The ternary also bothers me - it seems like a bit of a code smell.
One option is to inject some sort of helper class into the handler. This could implement some sort of ICategoryHelper interface with concrete Create and Update implementations. This would mean the ICategoryRepository and ITracker dependencies could be replaced with a single dependency on ICategoryHelper.
The only potential issue is that this would require some sort of conditional injection from the IoC container, based on whether the Id field on the command is null or not.
I am using SimpleInjector, and I am unsure of the syntax for this, or whether it can be done at all.
Is doing this via IoC itself a smell, or should it be the handler's responsibility to decide?
Are there any other patterns or approaches for solving this problem? I had thought a decorator could possibly be used, but I can't really see how to approach it that way.

My experience is that having two separate commands (SaveCategory and UpdateCategory) with one command handler gives the best results (although two separate command handlers can sometimes be fine as well).
The commands should not inherit from a CategoryCommandBase base class, but instead the data that both commands share should be extracted to a DTO class that is exposed as a property on both classes (composition over inheritance). The command handler should implement both interfaces and this allows it to contain shared functionality.
[Permission(Permissions.CreateCategories)]
class SaveCategory
{
    [Required, ValidateObject]
    public CategoryData Data;

    // Assuming name can't be changed after creation
    [Required, StringLength(50)]
    public string Name;
}

[Permission(Permissions.ManageCategories)]
class UpdateCategory
{
    [NonEmptyGuid]
    public Guid CategoryId;

    [Required, ValidateObject]
    public CategoryData Data;
}

class CategoryData
{
    [NonEmptyGuid]
    public Guid CategoryTypeId;

    [Required, StringLength(250)]
    public string Description;
}
Having two commands works best because, when every action has its own command, it is easier to log them and to give them different permissions (using attributes, for instance, as shown above). Having a shared data object works best because it lets you pass it around in the command handler and lets the view bind to it. And inheritance is almost always ugly.
class CategoryCommandHandler :
    ICommandHandler<SaveCategory>,
    ICommandHandler<UpdateCategory>
{
    // Repository field and constructor wiring omitted for brevity.
    public CategoryCommandHandler() { }

    public void Handle(SaveCategory command)
    {
        var c = new Category { Name = command.Name };
        UpdateCategory(c, command.Data);
    }

    public void Handle(UpdateCategory command)
    {
        var c = this.repository.GetById(command.CategoryId);
        UpdateCategory(c, command.Data);
    }

    private void UpdateCategory(Category cat, CategoryData data)
    {
        cat.CategoryTypeId = data.CategoryTypeId;
        cat.Description = data.Description;
    }
}
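One way the [Permission] attributes above can be enforced is with a command-handler decorator that the container wraps around every handler. The sketch below makes two assumptions not shown in the thread: the PermissionAttribute exposes a Permission property, and IUserContext is a hypothetical abstraction for the current user.

```csharp
using System;
using System.Linq;
using System.Security;

public class PermissionCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;
    private readonly IUserContext userContext; // hypothetical current-user abstraction

    public PermissionCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratee, IUserContext userContext)
    {
        this.decoratee = decoratee;
        this.userContext = userContext;
    }

    public void Handle(TCommand command)
    {
        // Read the [Permission] attribute from the command type, if present.
        var attribute = typeof(TCommand)
            .GetCustomAttributes(typeof(PermissionAttribute), inherit: true)
            .Cast<PermissionAttribute>()
            .FirstOrDefault();

        if (attribute != null && !userContext.HasPermission(attribute.Permission))
            throw new SecurityException(
                "Current user lacks permission: " + attribute.Permission);

        decoratee.Handle(command);
    }
}
```

Registered once as an open-generic decorator, this keeps the permission check out of every individual handler.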
Do note that CRUDy operations will always result in solutions that seem less clean than task-based operations do. That's one of the many reasons I push developers and requirement engineers to think about the tasks they want to perform. This results in a better UI, greater UX, more expressive audit trails, a more pleasant design, and better overall software. But some parts of your application will always be CRUDy, no matter what you do.

I think you can separate this command into two well-defined commands, e.g. CreateCategory and UpdateCategory (though of course you should choose the most appropriate names). Also, design both commands via the Template Method design pattern. In the base class, define a protected abstract method for obtaining the category, call that method from Handle, and then process the remaining logic of the original Handle method:
public abstract class %YOUR_NAME%CategoryBaseCommandHandler<T> : ICommandHandler<T>
{
    public void Handle(T command)
    {
        var category = LoadCategory(command);
        MapProcessor.Map(command, category);
        UnitOfWork.Commit();
        Registrator.Committed += () =>
        {
            command.Id = category.Id;
        };
    }

    protected abstract Category LoadCategory(T command);
}
In the derived classes you just override the LoadCategory method.
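For illustration, the two derived classes might look like this. This is a sketch: the placeholder base-class name is filled in as CategoryBaseCommandHandler, and the repository calls are assumptions based on the comments in the question's original handler.

```csharp
public class CreateCategoryCommandHandler
    : CategoryBaseCommandHandler<CreateCategoryCommand>
{
    protected override Category LoadCategory(CreateCategoryCommand command)
    {
        // Create via the repository, as in the original Create method.
        return CategoryRepository.Create(command);
    }
}

public class UpdateCategoryCommandHandler
    : CategoryBaseCommandHandler<UpdateCategoryCommand>
{
    protected override Category LoadCategory(UpdateCategoryCommand command)
    {
        // Retrieve the existing category and add tracking information,
        // as in the original GetForUpdate method.
        return CategoryRepository.GetById(command.Id);
    }
}
```

Each concrete handler then only contains the code that actually differs between create and update.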

Related

wcf - multiple KnownType commands - proper pattern for execution

I have a WCF service used to execute some commands from clients.
I have a base class, AbstractCommand and a number of derived classes which define concrete commands.
The WCF web service has a single method void Execute(AbstractCommand command). It can accept concrete commands (classes derived from AbstractCommand) by means of the [KnownType] attribute. The commands are executed against a database via a Repository.
To simplify, let's say that commands are executed in the service something like this:
public void Execute(AbstractCommand command)
{
    // Concrete command 1
    var theCommand = command as ConcreteCommand1;
    if (theCommand != null)
    {
        var par1 = theCommand.Par1;
        var par2 = theCommand.Par2;
        ...
        _repository.DoSomething(par1, par2...);
        return;
    }

    // Concrete command 2
    var theCommand2 = command as ConcreteCommand2;
    ...
This if-branching looks a little bit scary and I want to refactor it.
I'm thinking about something like this: AbstractCommand should define, and the ConcreteCommands should implement, an Execute method that looks like this:
public class ConcreteCommand1 : AbstractCommand
{
    public int Par1 { get; set; }
    public int Par2 { get; set; }
    ...
    public void Execute(IRepository repository)
    {
        repository.DoSomething(Par1, Par2...);
    }
}
so that in the service I no longer need that nasty if-branching and can do just this:
public void Execute(AbstractCommand command)
{
    command.Execute(_repository);
}
It looks fine. The only drawback with this approach is that the ConcreteCommand classes, instead of just being DTOs (par1, par2, ...), now need to define logic (the Execute method) and have to be aware of the IRepository.
Can anybody suggest a better approach?
In the end I implemented the solution the way I described above - it has proven quite flexible and extensible.
If anyone has suggestions for improvement, they are welcome.
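For anyone who prefers to keep the commands as pure DTOs, a sketch of an alternative (not the approach ultimately taken above) is to keep the logic in the service and dispatch on the command's runtime type via a dictionary, which also removes the if-chain. The registration style and CommandService name are illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;

public class CommandService
{
    private readonly IRepository _repository;
    private readonly Dictionary<Type, Action<AbstractCommand>> _handlers;

    public CommandService(IRepository repository)
    {
        _repository = repository;
        _handlers = new Dictionary<Type, Action<AbstractCommand>>
        {
            // One entry per concrete command; the commands stay logic-free DTOs.
            [typeof(ConcreteCommand1)] = c =>
            {
                var cmd = (ConcreteCommand1)c;
                _repository.DoSomething(cmd.Par1, cmd.Par2);
            },
            // [typeof(ConcreteCommand2)] = ...
        };
    }

    public void Execute(AbstractCommand command)
    {
        // Single lookup replaces the chain of as-casts.
        _handlers[command.GetType()](command);
    }
}
```

The trade-off is that the mapping must be maintained in one place, but the commands remain serializable data with no knowledge of IRepository.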

Strategy Pattern with each algorithm having a different method signature

I am refactoring some existing code.
We have a list of investors with amounts assigned to each. The total of the amounts should equal another total, but sometimes there are a couple of cents of difference, so we use different algorithms to assign those differences to the investors.
The current code is something like this:
public void Round(IList<Investors> investors, Enum algorithm, [here goes a list of many parameters])
{
    // some checks and logic here - OMITTED FOR BREVITY

    // pick method given algorithm Enum
    if (algorithm == Enum.Algorithm1)
    {
        SomeStaticClass.Algorithm1(investors, remainders, someParameter1, someParameter2, someParameter3, someParameter4);
    }
    else if (algorithm == Enum.Algorithm2)
    {
        SomeStaticClass.Algorithm2(investors, remainders, someParameter3);
    }
}
So far we only have two algorithms, and I have to implement the third. I was also given the opportunity to refactor both existing implementations and to write generic code so this works for future algorithms, which may be custom to each client.
My first thought was "ok, this is a strategy pattern". But the problem I see is that the algorithms receive different parameter lists (apart from the first two parameters), and future algorithms can receive different parameter lists as well. The only things in "common" are the investor list and the remainders.
How can I design this so I have a cleaner interface?
I thought of:
1. Establishing an interface with ALL possible parameters, and sharing it among all implementations.
2. Using an object with all possible parameters as properties, and using that generic object as part of the interface. I would have three parameters: the list of investors, the remainders object, and a "parameters" object. But in this case I have a similar problem: how each object is instantiated and which properties must be filled depends on the algorithm (unless I set all of them). I would have to use a factory (or something) to instantiate it, using all parameters in the interface, am I right? I would just be moving the too-many-parameters problem to that "factory" or whatever.
3. Using a dynamic object instead of a statically typed object. This still presents the same instantiation problem as before.
I also thought of using the Visitor pattern, but as I understand it, that would apply if I had different algorithms for different entities, like another class of investors. So I don't think it is the right approach.
So far the second option convinces me the most, although I am still a bit hesitant about it.
Any ideas?
Thanks
Strategy has different implementations. It's straightforward when all the alternative concrete strategies require the same type signature. But when concrete implementations start asking for different data from the Context, we have to gracefully take a step back by relaxing encapsulation ("breaking encapsulation" is a known drawback of Strategy): we can pass the Context to the strategies either in the method signature or in the constructor, depending on how much is needed.
By using interfaces and breaking big object trees into smaller containments, we can restrict access to most of the Context's state.
The following code demonstrates passing the context through a method parameter.
public class Context {
    private String name;
    private int id;
    private double salary;

    Strategy strategy;

    void contextInterface() {
        strategy.algorithmInterface(this);
    }

    public String getName() {
        return name;
    }

    public int getId() {
        return id;
    }

    public double getSalary() {
        return salary;
    }
}
public interface Strategy {
    // WE CAN NOT DECIDE ON A COMMON SIGNATURE HERE,
    // AS ALL IMPLEMENTATIONS REQUIRE DIFFERENT PARAMS
    void algorithmInterface(Context context);
}

public class StrategyA implements Strategy {
    @Override
    public void algorithmInterface(Context context) {
        // OBSERVE HERE THE BREAKING OF ENCAPSULATION
        // BY OPERATING ON SOMEBODY ELSE'S DATA
        context.getName();
        context.getId();
    }
}

public class StrategyB implements Strategy {
    @Override
    public void algorithmInterface(Context context) {
        // OBSERVE HERE THE BREAKING OF ENCAPSULATION
        // BY OPERATING ON SOMEBODY ELSE'S DATA
        context.getSalary();
        context.getId();
    }
}
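The interface-segregation point above can be made concrete: instead of handing each strategy the full Context, the context can implement narrow role interfaces so each strategy only sees the state it needs. A sketch (in C#, matching the rest of the thread; the interface names are illustrative):

```csharp
using System;

// Narrow role interfaces carve the Context state into per-strategy views.
public interface IIdentityInfo
{
    int Id { get; }
    string Name { get; }
}

public interface ISalaryInfo
{
    int Id { get; }
    double Salary { get; }
}

public class Context : IIdentityInfo, ISalaryInfo
{
    public int Id { get; set; }
    public string Name { get; set; }
    public double Salary { get; set; }
}

// StrategyA can only reach identity data; StrategyB only salary data.
public class StrategyA
{
    public void AlgorithmInterface(IIdentityInfo context)
    {
        // context.Salary is not accessible through this interface.
        Console.WriteLine($"{context.Id}: {context.Name}");
    }
}

public class StrategyB
{
    public void AlgorithmInterface(ISalaryInfo context)
    {
        Console.WriteLine($"{context.Id}: {context.Salary}");
    }
}
```

Each strategy still receives "the context", but the compiler now enforces how much of it each one may touch.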
Okay, I might be going in the wrong direction... but it seems kind of weird that you're passing in the arguments for all the algorithms along with an identifier for which algorithm to actually use. Shouldn't the Round() function ideally just get what it needs to operate?
I'm imagining the function that invokes Round() to look something like:
if (something)
    algToUse = Enum.Algorithm1;
else if (otherthing)
    algToUse = Enum.Algorithm2;
else
    algToUse = Enum.Algorithm3;

Round(investors, remainder, algToUse, dayOfMonth, lunarCycle, numberOfGoblinsFound, etc);
... what if, instead, you did something like this:
public abstract class RoundingAlgorithm
{
    public abstract void PerformRounding(IList<Investors> investors, int remainder);
}

public class RoundingRandomly : RoundingAlgorithm
{
    private int someNum;
    private DateTime anotherParam;

    public RoundingRandomly(int someNum, DateTime anotherParam)
    {
        this.someNum = someNum;
        this.anotherParam = anotherParam;
    }

    public override void PerformRounding(IList<Investors> investors, int remainder)
    {
        // ... code ...
    }
}

// ... and other subclasses of RoundingAlgorithm

// ... later on:
public void Round(IList<Investors> investors, RoundingAlgorithm roundingMethodToUse)
{
    // ...your other code (checks, etc)...
    roundingMethodToUse.PerformRounding(investors, remainders);
}
... and then your earlier function simply looks like:
RoundingAlgorithm roundingMethod;
if (something)
    roundingMethod = new RoundingByStreetNum(1, "asdf", DateTime.Now);
else if (otherthing)
    roundingMethod = new RoundingWithPrejudice(null);
else
    roundingMethod = new RoundingDefault(1000);

Round(investors, roundingMethod);
... basically, instead of populating that Enum value, just create a RoundingAlgorithm object and pass that in to Round() instead.

Is this design pattern (extended view helper) acceptable?

The class (ViewHelper) that takes care of the user input and sends it back to the model is getting bigger, and I want to create an extended class (ExtendedViewHelper) that inherits from the ViewHelper class. The problem is that I don't know whether this follows pure OO design. Here is a class diagram:
Now some code to simplify it even more:
//ViewHelper class
public ViewHelper(View tempForm)
{
xForm = tempForm;
//some more code
}
//ExtendedViewHelper class
public ExtendedViewHelper(View yForm): base(xForm)
{
//some more code
}
//And the View
public View()
{
//Instantiating the object to ExtendedViewHelper
viewHelper = new ExtendedViewHelper(this);
//Calling method from class ViewHelper
viewHelper.OnButtonClicked();
//and from ExtendedViewHelper
((ExtendedViewHelper)viewHelper).OnSecondBtnClicked();
}
Would you say that this is a good solution to the problem (if it's even considered a problem), or am I overengineering things? Is there a better solution, or should I stick with a single ViewHelper (~700 lines of code)?
The best solutions are the ones that create the least amount of coupling and the simplest possible classes.
Your View currently depends on its ViewHelper. This is acceptable.
However, if your View ever casts something as an ExtendedViewHelper, it is then coupled to two objects, which could give the system two reasons to change and two places where things can break. This violates the Single Responsibility Principle.
The one role of the View should be to display things. It should not be concerned with where the system functionality exists or how to process commands.
The ViewHelper also should have one role. It should act as the go-between from the View to the Controller/Services/Functionality Layer. The ViewHelper should never have implementation details of how any operations are performed.
So a better solution looks like this:
public View()
{
    //Instantiating the object as an ExtendedViewHelper
    viewHelper = new ExtendedViewHelper(this);
    //Calling a method from the ViewHelper class
    viewHelper.OnButtonClicked();
    //and from ExtendedViewHelper
    viewHelper.OnSecondBtnClicked();
}

//OldViewHelper Constructor
public ViewHelper(View tempForm, OldFunctionalityService oldService)
{
    xForm = tempForm;
    xService = oldService;
}

//First Button Implementation Code
public void OnButtonClicked()
{
    xService.DoStuff();
}

//NewViewHelper Constructor
public ViewHelper(View tempForm, OldFunctionalityService oldService, NewFunctionalityService newService)
{
    xForm = tempForm;
    xService = oldService;
    xNewService = newService;
}

//Second Button Implementation Code
public void OnSecondBtnClicked()
{
    xNewService.DoStuff();
}

C# Activator.CreateInstance generic instance getting lost

FYI: the verbose preamble is to help explain why I am using Activator.CreateInstance. I have a number of entities (objects corresponding to database column information) that are "contained" in multiple databases, each of which has a different table/column setup. So I am able to retrieve an entity from each database, but the way I retrieve it is different per database. The database type is not known till runtime and could vary throughout execution. I have created the following setup:
First define the query operations each entity should support and each entity reader should support these operations.
public abstract class Operations<T>
{
    public delegate T findDelegate(int id);
    public findDelegate find;
}

// there are many of these N=1,2,..., but here is one
// use an abstract class since the implementation of queries should be done by handlers
public class EntityNReader : Operations<EntityN>
{
    public EntityNReader();
}
Define an interface for "Handler" classes, i.e. these classes implement the query operations listed above.
public interface IHandler<T>
{
    T find(int id);
}

// there are many of these N,M=1,2,..., but here is one
// the interface is used to force handlers to implement all query types
public class EntityNHandlerForDbTypeM : IHandler<EntityN>
{
    public EntityN find(int id) { /* whatever */ }
}
This allows the developers to create a single class for handling EntityN query operations for DbTypeM. Now, create a Database class that contains the reader objects and binds the handler methods to the reader delegates.
public class Database
{
    // there are many of these, but here is one
    public EntityNReader EntitiesN;

    public Database(string dbType)
    {
        // this is called for each EntityNReader
        bindHandlers<EntityNReader, EntityN>(EntitiesN, dbType);
        // ...
        // NullReferenceException here
        EntitiesN.find(0);
    }

    // this is a factory that also performs late binding
    private void bindHandlers<T, K>(T reader, string dbTypeM)
        where T : Operations<K>, new()
    {
        // create instance of EntityNReader
        var r = (T)Activator.CreateInstance(typeof(T));
        // r != null

        // create instance of handler
        IHandler<K> h = (IHandler<K>)Activator.CreateInstance(
            Type.GetType("namespace.to.EntityNHandlerForDbTypeM"),
            new object[] { this });

        // bind delegates
        r.find = h.find;
    }
}
As you can see in Database's constructor, the way the code is written now I get a NullReferenceException, even though the instance r of EntityNReader is created and (verified to be) not null.
However, if I instantiate EntitiesN where it is declared in Database, instead of within bindHandlers, the code compiles and everything works. The reason I don't just do that is that (subsequently) I would like to conditionally create readers/handlers inside bindHandlers when the Database class is instantiated.
What is happening here? Link to actual code if necessary.
P.S. I am relatively new to programming, so I am open to hearing how an experienced developer might design this component (especially if I am heading down the wrong path).
I realize your code was just samples, but I spotted this right off the bat...
if (supports[typeof(Entity1).Name]) { bindHandlers<Entity1Reader, Entity1>(Entities1, CamaDbType); }
if (supports[typeof(Entity2).Name]) { bindHandlers<Entity1Reader, Entity1>(Entities1, CamaDbType); }
Is it possible that you have a simple copy/paste mistake? Notice that Entities1 is passed in for both bindHandlers calls.
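The copy/paste slip aside, the NullReferenceException in the question is most likely caused by bindHandlers assigning the Activator-created instance to a local variable rather than to the EntitiesN field: the reader reference is passed by value, so reassigning it inside the method never touches the field. One hedged fix is to have bindHandlers return the configured reader so the constructor can assign it (names mirror the question's code):

```csharp
// Sketch: bindHandlers now returns the reader it configures, so the caller
// can assign the Database field itself. Constraints mirror the question.
private T bindHandlers<T, K>(string dbTypeM)
    where T : Operations<K>, new()
{
    var reader = new T(); // same effect as Activator.CreateInstance(typeof(T))

    // The handler type name is resolved at runtime, as in the question.
    IHandler<K> h = (IHandler<K>)Activator.CreateInstance(
        Type.GetType("namespace.to.EntityNHandlerForDbTypeM"),
        new object[] { this });

    reader.find = h.find;
    return reader;
}

// In the Database constructor:
// EntitiesN = bindHandlers<EntityNReader, EntityN>(dbType);
```

Alternatively, the parameter could be declared `ref T reader`, but returning the instance keeps the call sites easier to read.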

Is it a good design option to call a class' initializing method within a factory before injecting it

I am in the process of refactoring a rather large portion of spaghetti code. In a nutshell it is a big "God-like" class that branches into two different processes depending in some condition. Both processes are lengthy and have lots of duplicated code.
So my first effort has been to extract those two processes into their own classes and putting the common code in a parent they both inherit from.
It looks something like this:
public class ExportProcess
{
    public ExportProcess(IExportDataProvider dataProvider, IExporterFactory exporterFactory)
    {
        _dataProvider = dataProvider;
        _exporterFactory = exporterFactory;
    }

    public void DoExport(SomeDataStructure someDataStructure)
    {
        _dataProvider.Load(someDataStructure.Id);
        var exporter = _exporterFactory.Create(_dataProvider, someDataStructure);
        exporter.Export();
    }
}
I am an avid reader of Mark Seemann's blog, and in this entry he explains that this code has a temporal-coupling smell, since the Load method must be called on the data provider before it is in a usable state.
Based on that, and since the object is being injected into the ones returned by the factory anyway, I am thinking of changing the factory to do this:
public IExporter Create(IExportDataProvider dataProvider, SomeDataStructure someDataStructure)
{
    dataProvider.Load(someDataStructure.Id);

    if (dataProvider.IsNewExport)
    {
        return new NewExportExporter(dataProvider, someDataStructure);
    }

    return new UpdateExportExporter(dataProvider, someDataStructure);
}
Because of the name "DataProvider" you probably guessed that the Load method is actually doing a database access.
Something tells me that doing a database access inside the Create method of an abstract factory is not good design.
Are there any guidelines, best practices or something that say this is effectively a bad idea?
Thanks for your help.
Typically, a factory is used to resolve concrete types of a requested interface or abstract type, so you can decouple consumers from implementations. So usually a factory will just discover or specify the concrete type, help resolve dependencies, and instantiate the concrete type and return it. There is no hard-and-fast rule as to what it can or can't do, but it is sensible to give it access only to the resources it needs to resolve and instantiate concrete types.
Another good use of a factory is to hide type dependencies that are not relevant to the consumer. For example, it seems IExportDataProvider is only relevant internally and can be abstracted away from consumers (such as ExportProcess).
One code smell in your example, however, is how IExportDataProvider is used. The way it currently seems to work, you get an instance of it once, but its state can change in subsequent usages (by calling Load). This can lead to issues with concurrency and corrupted state. Since I don't know what that type does or how it's actually used by your IExporter, it's hard to make a recommendation. In my example below, I make an adjustment so that we can assume the provider is stateless: Load returns some sort of state object that the factory can use to resolve the concrete type of exporter and then provide data to it. You can adjust that as you see fit. On the other hand, if the provider has to be stateful, you'll want to create an IExportDataProviderFactory, use it in your exporter factory, and create a new instance of the provider for each call to the exporter factory's Create.
public interface IExporterFactory
{
    IExporter Create(SomeDataStructure someData);
}

public class MyConcreteExporterFactory : IExporterFactory
{
    public MyConcreteExporterFactory(IExportDataProvider provider)
    {
        if (provider == null) throw new ArgumentNullException();
        Provider = provider;
    }

    public IExportDataProvider Provider { get; private set; }

    public IExporter Create(SomeDataStructure someData)
    {
        var providerData = Provider.Load(someData.Id);
        // do whatever. for example...
        return providerData.IsNewExport
            ? new NewExportExporter(providerData, someData)
            : new UpdateExportExporter(providerData, someData);
    }
}
And then consume:
public class ExportProcess
{
    public ExportProcess(IExporterFactory exporterFactory)
    {
        if (exporterFactory == null) throw new ArgumentNullException();
        _exporterFactory = exporterFactory;
    }

    private IExporterFactory _exporterFactory;

    public void DoExport(SomeDataStructure someData)
    {
        var exporter = _exporterFactory.Create(someData);
        // etc.
    }
}
