Chain modifications in events - c#

ASP.NET Boilerplate has an EventBus system, and we have events such as:
EntityCreatingEventData
EntityCreatedEventData
EntityDeletingEventData
EntityDeletedEventData
…
But these events fire after SaveChanges() has been called (the data is already in the DB). We want the event system to run before SaveChanges() is called, while the data is not yet written to the DB.
We also want a recursive event system, for example:
creating object A => triggers EntityCreatingBeforeSaveEventData(a) => in this handler we create a new object B and call Repository.Insert(b) => which triggers EntityCreatingBeforeSaveEventData(b)...
This process repeats as long as any modification remains in the DB context.

Dependent on entity.Id
It is not possible to have a domain event system run before SaveChanges() is called:
ASP.NET Boilerplate only detects changes when SaveChanges() is called.
Recursive events can cause an infinite loop (see #1616).
An entity whose auto-generated Id has not been set yet cannot be identified.
If you depend on such a system, it may indicate poor separation of concerns.
Independent of entity.Id
You can use IEventBus directly.
Trigger event:
public class AManager : DomainService, IAManager
{
    public void CreateA()
    {
        var a = new A();
        Repository.Insert(a);
        EventBus.Trigger(new EntityCreatingBeforeSaveEventData<A>
        {
            Property = a.SomeProperty // Can pass other properties
        });
    }
}

public class BManager : DomainService, IBManager
{
    // Similar to above
}
Handle event:
public class AHandler : IEventHandler<EntityCreatingBeforeSaveEventData<A>>, ITransientDependency
{
    public IBManager BManager { get; set; }

    public void HandleEvent(EntityCreatingBeforeSaveEventData<A> eventData)
    {
        var aSomeProperty = eventData.Property;
        BManager.CreateB();
    }
}
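Note that EntityCreatingBeforeSaveEventData<T> is not a built-in ABP event; you define it yourself. A minimal sketch (the Property member simply mirrors the example above):

```csharp
// Hypothetical custom event data class; custom ABP events
// typically derive from Abp.Events.Bus.EventData.
public class EntityCreatingBeforeSaveEventData<TEntity> : EventData
{
    // Carry whatever state the handler needs, since the entity's
    // auto-generated Id is not available before SaveChanges().
    public string Property { get; set; }
}
```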


Blazor components: notify of collection-changed event causing thread collisions

I am working on an ASP.NET Core Blazor application with .NET Core 3.0 (I am aware of 3.1, but due to Mordac I am stuck with this version for now).
I have a multiple-component page, and some of those components require access to the same data and need to all be updated when the collection is updated. I've been trying to use EventHandler-based callbacks, but those get invoked on their own threads at about the same time (if I understand correctly), causing the callbacks in the .razor components to attempt to make service calls to the context at the same time.
Note: I've tried making my DbContext's lifetime transient, but I'm still getting the race conditions.
It's quite possible that I've gotten myself into an async blender and don't know how to get out.
I've tentatively concluded that the EventHandler methodology will not work here. I need some way to trigger "collection changed" updates to the components without triggering a race condition.
I've thought about updating the services involved in these race conditions with the following:
Replace every search function with a publicly bindable collection property
Have every create/update/delete call update every single one of these collections
This would allow the components to bind directly to the collections that are changed, which I think would cause every binding to them in any component to update without needing to be explicitly told, and this in turn would allow me to ditch the "collection changed" event handling entirely.
But I'm hesitant to try this and haven't done it yet because it would introduce a fair amount of overhead on each major service function.
Other ideas? Please help. If a collection has changed, I want Blazor components that rely on that collection to somehow be able to update, whether through notifications or binding or some other way.
The following code is a heavy simplification of what I've got, and it's still causing race conditions when the event handlers are invoked from the service.
Model
public class Model
{
    public int Id { get; set; }
    public string Msg { get; set; }
}
MyContext
public class MyContext : DbContext
{
    public MyContext() : base()
    {
        Models = Set<Model>();
    }

    public MyContext(DbContextOptions<MyContext> options) : base(options)
    {
        Models = Set<Model>();
    }

    public DbSet<Model> Models { get; set; }
}
ModelService
public class ModelService
{
    private readonly MyContext context;
    private event EventHandler? CollectionChangedCallbacks;

    public ModelService(MyContext context)
    {
        this.context = context;
    }

    public void RegisterCollectionChangedCallback(EventHandler callback)
    {
        CollectionChangedCallbacks += callback;
    }

    public void UnregisterCollectionChangedCallback(EventHandler callback)
    {
        CollectionChangedCallbacks -= callback;
    }

    public async Task<Model[]> FindAllAsync()
    {
        return await Task.FromResult(context.Models.ToArray());
    }

    public async Task CreateAsync(Model model)
    {
        context.Models.Add(model);
        await context.SaveChangesAsync();
        // No args necessary; the callbacks know what to do.
        CollectionChangedCallbacks?.Invoke(this, EventArgs.Empty);
    }
}
Startup.cs (excerpt)
public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();
    services.AddServerSideBlazor();
    string connString = Configuration["ConnectionStrings:DefaultConnection"];
    services.AddDbContext<MyContext>(optionsBuilder => optionsBuilder.UseSqlServer(connString), ServiceLifetime.Transient);
    services.AddScoped<ModelService>();
}
ParentPage.razor
@page "/simpleForm"
@using Data
@inject ModelService modelService
@implements IDisposable

@if (AllModels is null)
{
    <p>Loading...</p>
}
else
{
    @foreach (var model in AllModels)
    {
        <label>@model.Msg</label>
    }
    <label>Other view</label>
    <ChildComponent></ChildComponent>
    <button @onclick="(async () => await modelService.CreateAsync(new Model()))">Add</button>
}

@code {
    private Model[] AllModels { get; set; } = null!;
    public bool ShowForm { get; set; } = true;

    private object disposeLock = new object();
    private bool disposed = false;

    public void Dispose()
    {
        lock (disposeLock)
        {
            disposed = true;
            modelService.UnregisterCollectionChangedCallback(CollectionChangedCallback);
        }
    }

    protected override async Task OnInitializedAsync()
    {
        AllModels = await modelService.FindAllAsync();
        modelService.RegisterCollectionChangedCallback(CollectionChangedCallback);
    }

    private void CollectionChangedCallback(object? sender, EventArgs args)
    {
        // Feels dirty that I can't await this without changing the function signature. Adding async
        // will make it unable to be registered as a callback.
        InvokeAsync(async () =>
        {
            AllModels = await modelService.FindAllAsync();
            // Protect against event-handler-invocation race conditions with disposing.
            lock (disposeLock)
            {
                if (!disposed)
                {
                    StateHasChanged();
                }
            }
        });
    }
}
ChildComponent.razor
Copy-paste (for the sake of demonstration) of ParentPage minus the label, ChildComponent, and model-adding button.
Note: I've also experimented with attempting to insert a block of code into the HTML portion of the component, but that didn't work either since I can't use an await there.
Possibly bad idea that I experimented with (and that still didn't avoid the threading collision):
@if (AllModels is null)
{
    <p><em>Loading...</em></p>
    @Load();
    @*
    Won't compile.
    @((async () => await Load())());
    *@
}
else
{
    ...everything else
}

@code {
    ...Initialization, callbacks, etc.

    // Note: Have to return _something_ or else the @Load() call won't compile.
    private async Task<string> Load()
    {
        ActiveChargeCodes = await chargeCodeService.FindActiveAsync();
        return string.Empty;
    }
}
Please help. I'm experimenting in (for me) uncharted territory.
Since I'm currently in a situation that looks awfully like yours, let me share what I found out. My issue was StateHasChanged(). Since I've seen that call in your code too, maybe the following helps:
I got a pretty simple callback handler:
case AEDCallbackType.Edit:
    // show a notification in the UI
    await ShowNotification(new NotificationMessage() { Severity = NotificationSeverity.Success, Summary = "Data Saved", Detail = "", Duration = 3000 });
    // reload entity in local context to update UI
    await dataService.ReloadCheckAfterEdit(_currentEntity.Id);
The notification function does this:
async Task ShowNotification(NotificationMessage message)
{
    notificationService.Notify(message);
    await InvokeAsync(() => { StateHasChanged(); });
}
The reload function does this:
public async Task ReloadCheckAfterEdit(int id)
{
    Check entity = context.Checks.Find(id);
    await context.Entry(entity).ReloadAsync();
}
The problem was the StateHasChanged() call. It tells the UI to re-render. The UI consists of a datagrid component, and the datagrid calls a query in the data service to fetch data from the DB.
This happens just before ReloadAsync is called, which is awaited. Once ReloadAsync actually executes, it runs on a different thread, causing the dreaded "A second operation started on this context before a previous operation completed" exception.
My solution was to remove the StateHasChanged line completely from where it was and call it once after everything else had completed. No more concurrent-caller issues.
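With that fix, the callback handler above would be reordered roughly like this (a sketch using the same types and services as the snippets above):

```csharp
case AEDCallbackType.Edit:
    // show the notification without forcing a re-render yet
    notificationService.Notify(new NotificationMessage() { Severity = NotificationSeverity.Success, Summary = "Data Saved", Detail = "", Duration = 3000 });
    // reload the entity in the local context first
    await dataService.ReloadCheckAfterEdit(_currentEntity.Id);
    // re-render exactly once, after all context work has finished,
    // so no grid query overlaps the awaited ReloadAsync
    await InvokeAsync(StateHasChanged);
    break;
```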
Good luck solving this, I feel your pain.

Trouble connecting to my database with entity framework c# web api

I have a LogContext Model :
using System.Data.Entity;

namespace Logging.Models
{
    public class LogContext : DbContext
    {
        // You can add custom code to this file. Changes will not be overwritten.
        //
        // If you want Entity Framework to drop and regenerate your database
        // automatically whenever you change your model schema, add the following
        // code to the Application_Start method in your Global.asax file.
        // Note: this will destroy and re-create your database with every model change.
        //
        // System.Data.Entity.Database.SetInitializer(new System.Data.Entity.DropCreateDatabaseIfModelChanges<Logging.Models.ProductContext>());

        public LogContext() : base("name=LogContext")
        {
            Database.SetInitializer<LogContext>(null);
        }

        public DbSet<Log> Logs { get; set; }
    }
}
but when I try to reference the Logs in my other LogContext class under App_Code, I get an error trying to reference context.Logs.Load():
"cannot be accessed with an instance reference; qualify it with a type name"
How do I reference and render all the rows in my table? What am I doing wrong?
Thanks
using System;
using System.Collections.Generic;
using System.Linq;
using Logging.Controllers;
using Logging.Models;

namespace Logging
{
    public class LogContext : IDisposable
    {
        private static readonly List<Log> Logs = new List<Log>();

        static LogContext()
        {
            using (var context = new LogContext())
            {
                context.Logs.Load();
            }
            //Logs.Add(new Log() { Id = 1, LoggerName = "TESTSYS1", InnerException = "InnerException", LogText = "LogText", ThreadID = 1, StackTrace = "Stack Trace", eLevel = "INFO" });
            //Logs.Add(new Log() { Id = 2, LoggerName = "TESTSYS2", InnerException = "InnerException", LogText = "LogText", ThreadID = 2, StackTrace = "Stack Trace", eLevel = "ERROR" });
            //Logs.Add(new Log() { Id = 3, LoggerName = "TESTSYS3", InnerException = "InnerException", LogText = "LogText", ThreadID = 3, StackTrace = "Stack Trace", eLevel = "WARN" });
        }

        void IDisposable.Dispose()
        {
        }

        public void GetLoggies()
        {
            using (var context = new LogContext())
            {
                foreach (var log in context.GetLogs())
                {
                    Logs.Add(log);
                }
            }
        }

        public Log GetLog(int id)
        {
            var log = Logs.Find(p => p.Id == id);
            return log;
        }

        public IEnumerable<Log> GetLogs()
        {
            return LogContext.Logs;
        }

        public Log AddLog(Log p)
        {
            Logs.Add(p);
            return p;
        }

        public void Delete(int id)
        {
            var product = Logs.FirstOrDefault(p => p.Id == id);
            if (product != null)
            {
                Logs.Remove(product);
            }
        }

        public bool Update(int id, Log log)
        {
            Log rLog = Logs.FirstOrDefault(p => p.Id == id);
            if (rLog != null)
            {
                rLog = log;
                return true;
            }
            return false;
        }
    }
}
The problem is frankly very bad design.
Your class here has the same name as your context and also has a member with the same name as a member on your context, i.e. Logs. This is a case study in how intelligent the compiler is, in that the only reason the whole thing doesn't explode, is because it's able to make some sense out of which you want in which place, given context. Still, it might guess wrong, and you will certainly get confused at some point. If you insist on maintaining it this way, you should fully-qualify all uses of your actual context class, i.e. new Namespace.To.LogContext(), so the compiler isn't just guessing.
Using using around a context is a hugely bad idea. A context instance should ideally be request-scoped. Among other things, the context employs change tracking, and when you start passing entities between different context instances, you're going to run headlong into a brick wall. Instead, you should inject your context into this class and save it as a field on the class.
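Injecting the context instead would look roughly like this (a sketch; the wrapper is renamed LogStore here to avoid the name clash described above):

```csharp
public class LogStore
{
    private readonly Logging.Models.LogContext context;

    // The container creates and disposes the request-scoped context;
    // this class neither news it up nor implements IDisposable.
    public LogStore(Logging.Models.LogContext context)
    {
        this.context = context;
    }

    public Log GetLog(int id)
    {
        return context.Logs.Find(id);
    }
}
```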
Implementing IDisposable is not something you should do lightly. There's a very particular way it needs to be implemented or you're actually causing more harm than good.
public class Base : IDisposable
{
    private bool disposed = false;

    // Implement IDisposable.
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                // Free other state (managed objects).
            }
            // Free your own state (unmanaged objects).
            // Set large fields to null.
            disposed = true;
        }
    }

    // Use C# destructor syntax for finalization code.
    ~Base()
    {
        // Simply call Dispose(false).
        Dispose(false);
    }
}
See: https://msdn.microsoft.com/en-us/library/b1yfkh5e(v=vs.100).aspx
However, if you inject your context, this class will no longer own the context, and therefore wouldn't need to even implement IDisposable. And, for the love of everything good and holy, don't implement IDisposable when you're injecting dependencies. I see far too many developers do this and end up with strange bugs because resources are being disposed incorrectly.
Finally, just throw this class away completely. What you're essentially trying to create here (incorrectly) is a repository, and you don't need that: Entity Framework already implements the repository and unit of work patterns. As you can see from your methods here, all you're doing is basically proxying from your method to a nearly equivalent method on the DbSet. You're buying yourself nothing but an additional layer that now has to be maintained, more entropy for your application code, and technical debt. For a more detailed description of why this is the wrong approach, see: https://softwareengineering.stackexchange.com/a/220126/65618
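Using the context directly from a Web API controller could be sketched like this (LogsController is an assumed name; ApiController already provides the Dispose(bool) hook):

```csharp
public class LogsController : ApiController
{
    private readonly LogContext context = new LogContext();

    // The DbSet is already a repository; query it directly.
    public IEnumerable<Log> GetLogs()
    {
        return context.Logs.ToList();
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            context.Dispose();
        }
        base.Dispose(disposing);
    }
}
```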

unit of work - I don't need to use transactions?

If I use Microsoft implementation unit of work from this tutorial:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
public class UnitOfWork : IDisposable
{
private SchoolContext context = new SchoolContext();
private GenericRepository<Department> departmentRepository;
private GenericRepository<Course> courseRepository;
public GenericRepository<Department> DepartmentRepository
{
get
{
if (this.departmentRepository == null)
{
this.departmentRepository = new GenericRepository<Department>(context);
}
return departmentRepository;
}
}
public GenericRepository<Course> CourseRepository
{
get
{
if (this.courseRepository == null)
{
this.courseRepository = new GenericRepository<Course>(context);
}
return courseRepository;
}
}
public void Save()
{
context.SaveChanges();
}
//......
}
I don't need to use transactions when I must add related items? For example, when I must add an order and its order positions to the database, I don't need to start a transaction, because if something goes wrong then the Save() method won't commit anything, yes? Am I right?
_unitOfWork.OrdersRepository.Insert(order);
_unitOfWork.OrderPositionsRepository.Insert(orderPosition);
_unitOfWork.Save();
SaveChanges itself is transactional. Nothing happens at the database level when you call Insert, which, based on the tutorial, merely calls Add on the DbSet. Only once SaveChanges is called on the context does the database get hit, and everything that happened up to that point is sent in one transaction.
You need transactions if you have multiple SaveChanges calls in one method, or a chain of method calls using the same context.
Then you can roll back across the multiple SaveChanges calls when your final update fails.
An example would be multiple repositories wrapping CRUD for an entity under the unit of work (i.e. a generic class). You may have many functions inserting and saving in each repository; however, at the end you may find an issue which causes you to roll back the previous saves.
E.g. in a service layer that needs to hit many repositories and execute a complex operation.
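When you do need to span multiple SaveChanges calls, EF6's explicit transaction API can be sketched like this (the context and entities are assumed to be the tutorial's):

```csharp
using (var context = new SchoolContext())
using (var transaction = context.Database.BeginTransaction())
{
    try
    {
        context.Departments.Add(department);
        context.SaveChanges();

        context.Courses.Add(course);
        context.SaveChanges();

        // Both saves commit together...
        transaction.Commit();
    }
    catch
    {
        // ...or roll back together if anything fails.
        transaction.Rollback();
        throw;
    }
}
```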

Multiple Ajax Requests per MVC 4 View

I'm using the repository pattern with a context, and Ninject as the IoC container. I have a service which handles getting and setting page properties in the database.
public class MyContext : DbContext
{
    public MyContext() : base("DefaultConnection")
    {
    }

    public DbSet<PageProperty> PageProperties { get; set; }
    public DbSet<Contact> Contacts { get; set; }
}

public class DefaultRepository : IRepository
{
    MyContext _context;

    public DefaultRepository(MyContext context)
    {
        _context = context;
    }

    public IQueryable<PageProperty> PageProperties { get { return _context.PageProperties; } }
    public IQueryable<Contact> Contacts { get { return _context.Contacts; } }
}
public class ModuleLoader : NinjectModule
{
    public ModuleLoader()
    {
    }

    public override void Load()
    {
        var context = new MyContext();
        context.Database.Initialize(false);
        Bind<MyContext>().ToConstant(context).InSingletonScope();
        Bind<IRepository>().To<DefaultRepository>();
        Bind<IPagePropertyProvider>().To<DefaultPagePropertyProvider>().InSingletonScope();
    }
}
public class DefaultPagePropertyProvider : IPagePropertyProvider
{
    IRepository _repository;
    object _syncLock = new object();

    public DefaultPagePropertyProvider(IRepository repository)
    {
        _repository = repository;
    }

    public string GetValue(string pageName, string propertyName)
    {
        lock (_syncLock)
        {
            var prop = _repository.PageProperties.FirstOrDefault(x => x.Property.Equals(propertyName) && x.PageName.Equals(pageName)).Value;
            return prop;
        }
    }

    public void SetValue(string pageName, string propertyName, string value)
    {
        var pageProp = _repository.PageProperties.FirstOrDefault(x => x.Property.Equals(propertyName) && x.PageName.Equals(pageName));
        pageProp.Value = value;
        _repository.SaveSingleEntity(pageProp);
    }
}
In my view I am doing three ajax calls: one to get a list of contacts to fill out a table, one to determine how many pages I have depending on the page size I'm using, and one to set the page size that I want to use. So a select box changes the page size (How many contacts per page: [ 30 ]), a table displays the contacts (generated from jQuery, which deciphers the JSON), and finally a div contains a list of page numbers to click.
The workflow is: call GetContacts(), which queries PageProperties to find out the page size to use; then call GetPages(), which also queries PageProperties for the page size; and SetPageSize(), which sets the page size. So GetContacts() and GetPages() are used when a page is selected from the div; SetPageSize(), then GetContacts() and GetPages(), are called when the select box change event is fired. GetContacts() and GetPages() are only called when the first SetPageSize() $.ajax request is done() and that call succeeded.
Now, before I added lock(syncLock) in the DefaultPageProperty service and before I added InSingletonScope to both that service and the context, I was getting two errors.
The connection was not closed. The connection's current state is connecting.
An EdmType cannot be mapped to CLR classes multiple times
I assumed because the connection was in a connecting state, that the context was being reused and reused and reused, so I thought putting that to SingletonScope() would mean that only one connection was made, then I thought the same about DefaultPageProperty and then because I was making async calls to that service, I should put a lock over the database querying.
It works, and the problems don't exist. But I don't know if what I have done is correct within the pattern I'm using, I'm wondering if I've missed something fundamental? My question is, is this a proper/viable solution which won't create any caveats later down the road? Have I actually solved the issue or just created more?
I redesigned the way I create my context now.
I have my context, then I implement IDbContextFactory<TContext> as DefaultContextFactory<MyContext>, and I inject them.
In the repository's public constructor I have _context = contextFactory.Create();.
Then throughout the repository I just use _context.Whatever() and it's fine.
I also did Bind<IRepository>().To<DefaultRepository>().InTransientScope() in the ModuleLoader, in order to make every call to it create a new repository.
I don't need a repository factory because I only have one repository!
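The pieces described above could be sketched like this (IDbContextFactory<TContext> is EF6's factory interface; other names come from the question, with members omitted):

```csharp
public class DefaultContextFactory : IDbContextFactory<MyContext>
{
    // Each call hands out a fresh context, so no two concurrent
    // requests share a connection or change tracker.
    public MyContext Create()
    {
        return new MyContext();
    }
}

public class DefaultRepository : IRepository
{
    private readonly MyContext _context;

    public DefaultRepository(IDbContextFactory<MyContext> contextFactory)
    {
        _context = contextFactory.Create();
    }

    // IQueryable properties as in the question, omitted...
}
```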

Separating the service layer from the validation layer

I currently have a service layer based on the article Validating with a service layer from the ASP.NET site.
According to this answer, this is a bad approach because the service logic is mixed with the validation logic which violates the single responsibility principle.
I really like the alternative that is supplied but during re-factoring of my code I have come across a problem that I am unable to solve.
Consider the following service interface:
interface IPurchaseOrderService
{
    void CreatePurchaseOrder(string partNumber, string supplierName);
}
with the following concrete implementation based on the linked answer:
public class PurchaseOrderService : IPurchaseOrderService
{
    public void CreatePurchaseOrder(string partNumber, string supplierName)
    {
        var po = new PurchaseOrder
        {
            Part = PartsRepository.FirstOrDefault(p => p.Number == partNumber),
            Supplier = SupplierRepository.FirstOrDefault(p => p.Name == supplierName),
            // Other properties omitted for brevity...
        };

        validationProvider.Validate(po);
        purchaseOrderRepository.Add(po);
        unitOfWork.SaveChanges();
    }
}
The PurchaseOrder object that is passed to the validator also requires two other entities, Part and Supplier (let's assume for this example that a PO only has a single part).
Both the Part and Supplier objects could be null if the details supplied by the user do not correspond to entities in the database which would require the validator to throw an exception.
The problem I have is that at this stage the validator has lost the contextual information (the part number and the supplier name) so is unable to report an accurate error to the user. The best error I can supply is along the lines of "A purchase order must have an associated part" which would not make sense to the user because they did supply a part number (it just does not exist in the database).
Using the service class from the ASP.NET article I am doing something like this:
public void CreatePurchaseOrder(string partNumber, string supplierName)
{
    var part = PartsRepository.FirstOrDefault(p => p.Number == partNumber);
    if (part == null)
    {
        validationDictionary.AddError("",
            string.Format("Part number {0} does not exist.", partNumber));
    }

    var supplier = SupplierRepository.FirstOrDefault(p => p.Name == supplierName);
    if (supplier == null)
    {
        validationDictionary.AddError("",
            string.Format("Supplier named {0} does not exist.", supplierName));
    }

    var po = new PurchaseOrder
    {
        Part = part,
        Supplier = supplier,
    };

    purchaseOrderRepository.Add(po);
    unitOfWork.SaveChanges();
}
This allows me to provide much better validation information to the user but means that the validation logic is contained directly in the service class, violating the single responsibility principle (code is also duplicated between service classes).
Is there a way of getting the best of both worlds? Can I separate the service layer from the validation layer whilst still providing the same level of error information?
Short answer:
You are validating the wrong thing.
Very long answer:
You are trying to validate a PurchaseOrder but that is an implementation detail. Instead what you should validate is the operation itself, in this case the partNumber and supplierName parameters.
Validating those two parameters by themselves would be awkward, but this is caused by your design: you're missing an abstraction.
Long story short, the problem is with your IPurchaseOrderService interface. It shouldn't take two string arguments, but rather one single argument (a Parameter Object). Let's call this Parameter Object CreatePurchaseOrder:
public class CreatePurchaseOrder
{
    public string PartNumber;
    public string SupplierName;
}
With the altered IPurchaseOrderService interface:
interface IPurchaseOrderService
{
    void CreatePurchaseOrder(CreatePurchaseOrder command);
}
The CreatePurchaseOrder Parameter Object wraps the original arguments. This Parameter Object is a message that describes the intent of the creation of a purchase order. In other words: it's a command.
Using this command, you can create an IValidator<CreatePurchaseOrder> implementation that can do all the proper validations including checking the existence of the proper parts supplier and reporting user friendly error messages.
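The IValidator<T> abstraction used here is not defined in the answer; a minimal sketch consistent with the generic decorator further down could be:

```csharp
public interface IValidator<T>
{
    // Yields no results when the command is valid.
    IEnumerable<ValidationResult> Validate(T command);
}
```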
But why is the IPurchaseOrderService responsible for the validation? Validation is a cross-cutting concern and you should prevent mixing it with business logic. Instead you could define a decorator for this:
public class ValidationPurchaseOrderServiceDecorator : IPurchaseOrderService
{
    private readonly IValidator<CreatePurchaseOrder> validator;
    private readonly IPurchaseOrderService decoratee;

    public ValidationPurchaseOrderServiceDecorator(
        IValidator<CreatePurchaseOrder> validator,
        IPurchaseOrderService decoratee)
    {
        this.validator = validator;
        this.decoratee = decoratee;
    }

    public void CreatePurchaseOrder(CreatePurchaseOrder command)
    {
        this.validator.Validate(command);
        this.decoratee.CreatePurchaseOrder(command);
    }
}
This way you can add validation by simply wrapping a real PurchaseOrderService:
var service =
    new ValidationPurchaseOrderServiceDecorator(
        new CreatePurchaseOrderValidator(),
        new PurchaseOrderService());
The problem, of course, with this approach is that it would be really awkward to define such a decorator class for each service in the system. That would cause severe code duplication.
But that problem is caused by a design flaw: defining an interface per specific service (such as IPurchaseOrderService) is typically problematic. You already defined the CreatePurchaseOrder command, so you can now define one single abstraction for all business operations in the system:
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}
With this abstraction you can now refactor PurchaseOrderService to the following:
public class CreatePurchaseOrderHandler : ICommandHandler<CreatePurchaseOrder>
{
    public void Handle(CreatePurchaseOrder command)
    {
        var po = new PurchaseOrder
        {
            Part = ...,
            Supplier = ...,
        };

        unitOfWork.SaveChanges();
    }
}
With this design, you can now define one single generic decorator to handle all validations for every business operation in the system:
public class ValidationCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly IValidator<T> validator;
    private readonly ICommandHandler<T> decoratee;

    public ValidationCommandHandlerDecorator(
        IValidator<T> validator, ICommandHandler<T> decoratee)
    {
        this.validator = validator;
        this.decoratee = decoratee;
    }

    public void Handle(T command)
    {
        var errors = this.validator.Validate(command).ToArray();
        if (errors.Any())
        {
            throw new ValidationException(errors);
        }

        this.decoratee.Handle(command);
    }
}
Notice how this decorator is almost the same as the previously defined ValidationPurchaseOrderServiceDecorator, but now as a generic class. This decorator can be wrapped around your new service class:
var service =
    new ValidationCommandHandlerDecorator<CreatePurchaseOrder>(
        new CreatePurchaseOrderValidator(),
        new CreatePurchaseOrderHandler());
But since this decorator is generic, you can wrap it around every command handler in your system. Wow! How's that for being DRY?
This design also makes it really easy to add cross-cutting concerns later on. For instance, your service currently seems responsible for calling SaveChanges on the unit of work. This can be considered a cross-cutting concern as well and can easily be extracted to a decorator. This way your service classes become much simpler with less code left to test.
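Such a decorator could be sketched as follows (assuming the IUnitOfWork used by the handler below exposes a SaveChanges method):

```csharp
public class SaveChangesCommandHandlerDecorator<T> : ICommandHandler<T>
{
    private readonly IUnitOfWork unitOfWork;
    private readonly ICommandHandler<T> decoratee;

    public SaveChangesCommandHandlerDecorator(
        IUnitOfWork unitOfWork, ICommandHandler<T> decoratee)
    {
        this.unitOfWork = unitOfWork;
        this.decoratee = decoratee;
    }

    public void Handle(T command)
    {
        // Run the business operation, then commit exactly once.
        this.decoratee.Handle(command);
        this.unitOfWork.SaveChanges();
    }
}
```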
The CreatePurchaseOrder validator could look as follows:
public sealed class CreatePurchaseOrderValidator : IValidator<CreatePurchaseOrder>
{
    private readonly IRepository<Part> partsRepository;
    private readonly IRepository<Supplier> supplierRepository;

    public CreatePurchaseOrderValidator(
        IRepository<Part> partsRepository,
        IRepository<Supplier> supplierRepository)
    {
        this.partsRepository = partsRepository;
        this.supplierRepository = supplierRepository;
    }

    public IEnumerable<ValidationResult> Validate(
        CreatePurchaseOrder command)
    {
        var part = this.partsRepository.GetByNumber(command.PartNumber);
        if (part == null)
        {
            yield return new ValidationResult("Part Number",
                $"Part number {command.PartNumber} does not exist.");
        }

        var supplier = this.supplierRepository.GetByName(command.SupplierName);
        if (supplier == null)
        {
            yield return new ValidationResult("Supplier Name",
                $"Supplier named {command.SupplierName} does not exist.");
        }
    }
}
And your command handler like this:
public class CreatePurchaseOrderHandler : ICommandHandler<CreatePurchaseOrder>
{
    private readonly IUnitOfWork uow;

    public CreatePurchaseOrderHandler(IUnitOfWork uow)
    {
        this.uow = uow;
    }

    public void Handle(CreatePurchaseOrder command)
    {
        var order = new PurchaseOrder
        {
            Part = this.uow.Parts.Get(p => p.Number == command.PartNumber),
            Supplier = this.uow.Suppliers.Get(p => p.Name == command.SupplierName),
            // Other properties omitted for brevity...
        };

        this.uow.PurchaseOrders.Add(order);
    }
}
Note that command messages will become part of your domain. There is a one-to-one mapping between use cases and commands, and instead of validating entities, those entities become an implementation detail. The commands become the contract and are what gets validated.
Note that it will probably make your life much easier if your commands contain as many IDs as possible. Your system could benefit from defining the command as follows:
public class CreatePurchaseOrder
{
    public int PartId;
    public int SupplierId;
}
When you do this, you won't have to check whether a part by the given name exists. The presentation layer (or an external system) passed you an ID, so you don't have to validate the existence of that part anymore. The command handler should of course fail when there's no part with that ID, but in that case there is either a programming error or a concurrency conflict. In either case there is no need to communicate expressive, user-friendly validation errors back to the client.
This does, however, move the problem of getting the right IDs to the presentation layer. There, the user will have to select a part from a list for us to get that part's ID. Still, in my experience this makes the system much easier to work with and more scalable.
It also solves most of the problems that are stated in the comments section of the article you are referring to, such as:
The problem with entity serialization goes away, because commands can easily be serialized and model bound.
DataAnnotation attributes can be applied easily to commands and this enables client side (Javascript) validation.
A decorator can be applied to all command handlers that wraps the complete operation in a database transaction.
It removes the circular reference between the controller and the service layer (via the controller's ModelState), removing the need for the controller to new up the service class.
If you want to learn more about this type of design, you should absolutely check out this article.
