Does it make more sense to remove an object using its name/id or passing the actual object? For example:
void MyList::remove(MyObject &myObject) { /* blah */ }
// or
void MyList::remove(std::string id) { /* blah */ }
I've used both, but I can't really see the advantages vs. disadvantages. Is there a preferred standard?
EDIT: this would probably be a better example providing what I'm trying to do:
Let's say I have an Account class with a collection of Transactions. Am I better off passing the Transaction object or the id of the Transaction?
class Account
{
private List<Transaction> transactions = new List<Transaction>();
public void Remove(Transaction transaction) { }
// OR
public void Remove(string name) { }
// OR
public void Remove(Guid id) { }
}
NOTE: this question has both C++ and C# code...
You may not always have a reference to the item, so it is better to have remove methods that take a name or id.
Whether name or id is preferable depends on your business requirements. Whichever you use has to be unique, otherwise you will remove the wrong item, so the business requirements have to decide that.
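If you want both conveniences, one common compromise is to let the id overload look the item up and delegate to the object overload. A rough sketch (assuming Transaction exposes a unique Id and the collection lives in memory):
using System;
using System.Collections.Generic;
using System.Linq;

public class Transaction
{
    public Guid Id { get; set; }
}

public class Account
{
    private readonly List<Transaction> transactions = new List<Transaction>();

    // Remove by reference: unambiguous when the caller already holds the object.
    public bool Remove(Transaction transaction)
    {
        return transactions.Remove(transaction);
    }

    // Remove by id: look the item up first, then delegate to the reference overload.
    public bool Remove(Guid id)
    {
        var match = transactions.FirstOrDefault(t => t.Id == id);
        return match != null && Remove(match);
    }
}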
Suppose I have a CQRS command that looks like below:
public sealed class DoSomethingCommand : IRequest
{
public Guid Id { get; set; }
public Guid UserId { get; set; }
public string A { get; set; }
public string B { get; set; }
}
That's processed in the following command handler:
public sealed class DoSomethingCommandHandler : IRequestHandler<DoSomethingCommand, Unit>
{
private readonly IAggregateRepository _aggregateRepository;
public DoSomethingCommandHandler(IAggregateRepository aggregateRepository)
{
_aggregateRepository = aggregateRepository;
}
public async Task<Unit> Handle(DoSomethingCommand request, CancellationToken cancellationToken)
{
// Find aggregate from id in request
var id = new AggregateId(request.Id);
var aggregate = await _aggregateRepository.GetById(id);
if (aggregate == null)
{
throw new NotFoundException();
}
// Translate request properties into a value object relevant to the aggregate
var something = new AggregateValueObject(request.A, request.B);
// Get the aggregate to do whatever the command is meant to do and save the changes
aggregate.DoSomething(something);
await _aggregateRepository.Save(aggregate);
return Unit.Value;
}
}
I have a requirement to save auditing information such as the "CreatedByUserID" and "ModifiedByUserID". This is a purely technical concern because none of my business logic is dependent on these fields.
I've found a related question here, where there was a suggestion to raise an event to handle this. This would be a nice way to do it because I'm also persisting changes based on the domain events raised from an aggregate using an approach similar to the one described here.
(TL;DR: Add events into a collection in the aggregate for every action, pass the aggregate to a single Save method in the repository, use pattern matching in that repository method to handle each event type stored in the aggregate to persist the changes)
e.g.
The DoSomething behavior from above would look something like this:
public void DoSomething(AggregateValueObject something)
{
// Business logic here
...
// Add domain event to a collection
RaiseDomainEvent(new DidSomething(/* required information here */));
}
The AggregateRepository would then have methods that looked like this:
public void Save(Aggregate aggregate)
{
var events = aggregate.DequeueAllEvents();
DispatchAllEvents(events);
}
private void DispatchAllEvents(IReadOnlyCollection<IEvent> events)
{
foreach (var @event in events)
{
DispatchEvent((dynamic) @event);
}
}
private void Handle(DidSomething @event)
{
// Persist changes from event
}
As such, adding a RaisedByUserID to each domain event seems like a good way to allow each event handler in the repository to save the "CreatedByUserID" or "ModifiedByUserID". It also seems like good information to have when persisting domain events in general.
My question is related to whether there is an easy way to make the UserId from the DoSomethingCommand flow down into the domain event, or whether I should even bother doing so.
At the moment, I think there are two ways to do this:
Option 1:
Pass the UserId into every single use case on an aggregate, so it can be passed into the domain event.
e.g.
The DoSomething method from above would change like so:
public void DoSomething(AggregateValueObject something, Guid userId)
{
// Business logic here
...
// Add domain event to a collection
RaiseDomainEvent(new DidSomething(/* required information here */, userId));
}
The disadvantage to this method is that the user ID really has nothing to do with the domain, yet it needs to be passed into every single use case on every single aggregate that needs the auditing fields.
Option 2:
Pass the UserId into the repository's Save method instead. This approach would avoid introducing irrelevant details to the domain model, even though the repetition of requiring a userId parameter on all the event handlers and repositories is still there.
e.g.
The AggregateRepository from above would change like so:
public void Save(Aggregate aggregate, Guid userId)
{
var events = aggregate.DequeueAllEvents();
DispatchAllEvents(events, userId);
}
private void DispatchAllEvents(IReadOnlyCollection<IEvent> events, Guid userId)
{
foreach (var @event in events)
{
DispatchEvent((dynamic) @event, userId);
}
}
private void Handle(DidSomething @event, Guid userId)
{
// Persist changes from event and use user ID to update audit fields
}
This makes sense to me as the userId is used for a purely technical concern, but it still has the same repetitiveness as the first option. It also doesn't allow me to encapsulate a "RaisedByUserID" in the immutable domain event objects, which seems like a nice-to-have.
Option 3:
Could there be any better ways of doing this or is the repetition really not that bad?
I considered adding a UserId field to the repository that can be set before any actions, but that seems bug-prone even if it removes all the repetition as it would need to be done in every command handler.
Could there be some magical way to achieve something similar through dependency injection or a decorator?
It will depend on the concrete case. I'll try to explain a couple of different problems and their solutions.
You have a system where the auditing information is naturally part of the domain.
Let's take a simple example:
A banking system that makes contracts between the Bank and a Person. The Bank is represented by a BankEmployee. When a Contract is either signed or modified you need to include the information on who did it in the contract.
public class Contract {
public void AddAdditionalClause(BankEmployee employee, Clause clause) {
AddEvent(new AdditionalClauseAdded(employee, clause));
}
}
You have a system where the auditing information is not a natural part of the domain.
There are a couple of things here that need to be addressed. For example, can only users issue commands to your system? Sometimes another system can invoke commands.
Solution: Record all incoming commands and their status after processing: successful, failed, rejected, etc.
Include the information of the command issuer.
Record the time when the command occurred. You can include the information about the issuer in the command or not.
public interface ICommand {
DateTime Timestamp { get; }
}
public class CommandIssuer {
public CommandIssuerType Type { get; private set; }
public CommandIssuerInfo Issuer {get; private set; }
}
public class CommandContext {
public ICommand cmd { get; private set; }
public CommandIssuer CommandIssuer { get; private set; }
}
public class CommandDispatcher {
public void Dispatch(ICommand cmd, CommandIssuer issuer){
LogCommandStarted(issuer, cmd);
try {
DispatchCommand(cmd);
LogCommandSuccessful(issuer, cmd);
}
catch(Exception ex){
LogCommandFailed(issuer, cmd, ex);
}
}
// or
public void Dispatch(CommandContext ctx) {
// rest is the same
}
}
pros: This will keep your domain free of the knowledge that someone issues commands
cons: If you need more detailed information about the changes and have to match commands to events, you will need to match timestamps and other information. Depending on the complexity of the system this may get ugly
Solution: Record all incoming commands in the entity/aggregate with the corresponding events. Check this article for a detailed example. You can include the CommandIssuer in the events.
public class SomethingAggregate {
public void Handle(CommandContext ctx) {
RecordCommandIssued(ctx);
Process(ctx.cmd);
}
}
You do include some information from the outside to your aggregates, but at least it's abstracted, so the aggregate just records it. It doesn't look so bad, does it?
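To make that concrete, the recording part of the class above can be as simple as appending to an internal list (a sketch with illustrative names, expanding the snippet above):
using System.Collections.Generic;

public class SomethingAggregate
{
    // Hypothetical: recorded command contexts live alongside the usual domain events.
    private readonly List<CommandContext> issuedCommands = new List<CommandContext>();

    public void Handle(CommandContext ctx)
    {
        RecordCommandIssued(ctx);
        Process(ctx.cmd);
    }

    private void RecordCommandIssued(CommandContext ctx)
    {
        // Only record; the aggregate never acts on the issuer details.
        issuedCommands.Add(ctx);
    }

    private void Process(ICommand cmd)
    {
        // Domain behaviour goes here.
    }
}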
Solution: Use a saga that will contain all the information about the operation you are using. In a distributed system, most of the time you will need to do this anyway, so it would be a good solution. In another system it will add complexity and an overhead that you may not want to have :)
public class DoSomethingSagaCoordinator {
public void Handle(CommandContext cmdCtx) {
var saga = new DoSomethingSaga(cmdCtx);
sagaRepository.Save(saga);
saga.Process();
sagaRepository.Update(saga);
}
}
I've used all the methods described here and also a variation of your Option 2. In my version, when a request was handled, the Repositories had access to a context that contained the user info, so when they saved events this information was included in an EventRecord object that had both the event data and the user info. It was automated, so the rest of the code was decoupled from it. I used DI to inject the context into the repositories. In this case I was just recording the events to an event log; my aggregates were not event sourced.
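A minimal sketch of that variation (IUserContext and EventRecord are names I'm making up here; the context would be populated per request, e.g. from the auth pipeline, and injected by the container):
using System;
using System.Collections.Generic;

// Hypothetical ambient context, filled per request.
public interface IUserContext
{
    Guid UserId { get; }
}

// Envelope stored in the event log: the event data plus who raised it and when.
public sealed class EventRecord
{
    public EventRecord(IEvent @event, Guid raisedByUserId, DateTime timestampUtc)
    {
        Event = @event;
        RaisedByUserId = raisedByUserId;
        TimestampUtc = timestampUtc;
    }

    public IEvent Event { get; }
    public Guid RaisedByUserId { get; }
    public DateTime TimestampUtc { get; }
}

public class AggregateRepository
{
    private readonly IUserContext userContext;

    public AggregateRepository(IUserContext userContext)
    {
        this.userContext = userContext;
    }

    public void Save(Aggregate aggregate)
    {
        foreach (var @event in aggregate.DequeueAllEvents())
        {
            // Wrapping happens in one place; the domain model never sees the user id.
            Persist(new EventRecord(@event, userContext.UserId, DateTime.UtcNow));
        }
    }

    private void Persist(EventRecord record)
    {
        // Write the record to the event log / audit store.
    }
}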
I use these guidelines to choose an approach:
If it's a distributed system -> go for Saga
If it's not:
Do I need to relate detailed information to the command?
Yes: pass Commands and/or CommandIssuer info to aggregates
If no then:
Does the database have good transactional support?
Yes: save Commands and CommandIssuer outside of aggregates.
No: save Commands and CommandIssuer in aggregates.
There exists an "Audit" object that is used throughout the code base that I'm trying to refactor to allow for dependency injection and, eventually, better unit testing. Up until this point I have had no problems creating interfaces for my classes and injecting those through the constructor. This class, however, is different. I see why/how it's different, but I'm not sure how to go about fixing it to work "properly".
Here is an example (dumbed down version, but the problem persists even in the example):
namespace ConsoleApplication1.test.DI.Original
{
public class MultiUseDependencies
{
public MultiUseDependencies()
{
}
public void Update()
{
Audit a = new Audit();
a.preAuditValues = "Update";
// if data already exists, delete it
this.Delete();
// Update values, implementation not important
// Audit changes to the data
a.AuditInformation();
}
public void Delete()
{
Audit a = new Audit();
a.preAuditValues = "Delete";
// Delete data, implementation omitted.
a.AuditInformation();
}
}
public class Audit
{
public string preAuditValues { get; set; }
public void AuditInformation()
{
Console.WriteLine("Audited {0}", preAuditValues);
}
}
}
In the above, the Update function (implementation not shown) gets the "pre change" version of the data, deletes the data (and audits it), inserts/updates the changes to the data, then audits the insert/update.
If I were to run from a console app:
Console.WriteLine("\n");
test.DI.Original.MultiUseDependencies mud = new test.DI.Original.MultiUseDependencies();
mud.Update();
I would get:
Audited Delete
Audited Update
This is the expected behavior. Now in the way the class is implemented, I can already see there will be a problem, but I'm not sure how to correct it. See the (initial) refactor with DI:
namespace ConsoleApplication1.test.DI.Refactored
{
public class MultiUseDependencies
{
private readonly IAudit _audit;
public MultiUseDependencies(IAudit audit)
{
_audit = audit;
}
public void Update()
{
_audit.preAuditValues = "Update";
// if data already exists, delete it
this.Delete();
// Update values, implementation not important
// Audit changes to the data
_audit.AuditInformation();
}
public void Delete()
{
_audit.preAuditValues = "Delete";
// Delete data, implementation omitted.
_audit.AuditInformation();
}
}
public interface IAudit
{
string preAuditValues { get; set; }
void AuditInformation();
}
public class Audit : IAudit
{
public string preAuditValues { get; set; }
public void AuditInformation()
{
Console.WriteLine("Audited {0}", preAuditValues);
}
}
}
Running:
Console.WriteLine("\n");
test.DI.Refactored.MultiUseDependencies mudRefactored = new test.DI.Refactored.MultiUseDependencies(new test.DI.Refactored.Audit());
mudRefactored.Update();
I get (as expected, but incorrect):
Audited Delete
Audited Delete
The above is expected based on the implementation, but incorrect as per the original behavior. I'm not sure how exactly to proceed. The original implementation relies on distinct Audits to correctly keep track of what's changing. When I'm passing in the implementation of IAudit in the refactor, I am only getting a single instance of Audit, where the two are butting heads with each other.
Basically before the refactor, Audit is scoped on the function level. After the refactor, Audit is scoped on the class.
Is there an easy way to correct this?
Here's a fiddle with it in action:
https://dotnetfiddle.net/YbpTm4
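One possibility might be injecting a factory instead of the instance, so each method gets a fresh Audit again. A rough sketch, assuming the container can resolve a Func<IAudit>:
using System;

public class MultiUseDependencies
{
    private readonly Func<IAudit> _auditFactory;

    public MultiUseDependencies(Func<IAudit> auditFactory)
    {
        _auditFactory = auditFactory;
    }

    public void Update()
    {
        // A fresh IAudit per call restores the original function-level scoping.
        IAudit a = _auditFactory();
        a.preAuditValues = "Update";
        this.Delete();
        a.AuditInformation();
    }

    public void Delete()
    {
        IAudit a = _auditFactory();
        a.preAuditValues = "Delete";
        a.AuditInformation();
    }
}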
The problem is in your design. Audit is an object that is mutable, and that makes it runtime data. Injecting runtime data into the constructors of your components is an anti-pattern.
The solution is to change the design, for instance by defining an IAuditHandler abstraction like this:
public interface IAuditHandler {
void AuditInformation(string preAuditValues);
}
For this abstraction you can create the following implementation:
public class AuditHandler : IAuditHandler {
public void AuditInformation(string preAuditValues) {
var audit = new Audit();
audit.preAuditValues = preAuditValues;
audit.AuditInformation();
}
}
The consumers can now depend on IAuditHandler:
public class MultiUseDependencies
{
private readonly IAuditHandler _auditHandler;
public MultiUseDependencies(IAuditHandler auditHandler) {
_auditHandler = auditHandler;
}
public void Update() {
this.Delete();
_auditHandler.AuditInformation("Update");
}
public void Delete() {
// Delete data, implementation omitted.
_auditHandler.AuditInformation("Delete");
}
}
But I would take it even a step further, because with your current approach you are polluting business code with cross-cutting concerns. The code for the audit trail is spread out and duplicated throughout your code base.
This, however, would be quite a change in your application's design, but it would probably be very beneficial. You should definitely read this article to get an idea of how you can improve your design this way.
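To sketch one way of pulling the auditing out (the ICommandHandler<TCommand> abstraction here is an assumption, not something from your code), a decorator keeps the audit call in one place and the container applies it to every handler:
// Hypothetical handler abstraction; substitute whatever your code base uses.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Decorator that audits around any handler, so business code stays clean.
public class AuditingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;
    private readonly IAuditHandler auditHandler;

    public AuditingCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated, IAuditHandler auditHandler)
    {
        this.decorated = decorated;
        this.auditHandler = auditHandler;
    }

    public void Handle(TCommand command)
    {
        decorated.Handle(command);
        // One registration in the container audits every command type.
        auditHandler.AuditInformation(typeof(TCommand).Name);
    }
}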
Try this:
public void Update()
{
// if data already exists, delete it
this.Delete();
//preAuditValues should be changed after the delete or it will keep
//the old value
_audit.preAuditValues = "Update";
// Update values, implementation not important
// Audit changes to the data
_audit.AuditInformation();
}
Or this should work too:
public void Delete()
{
string oldValue = _audit.preAuditValues;
_audit.preAuditValues = "Delete";
// Delete data, implementation omitted.
_audit.AuditInformation();
//Restoring oldValue after finished with Delete
_audit.preAuditValues = oldValue;
}
I have a business object that contains a collection of ACL items and I'm trying to decide whether to put the authorization code in the business object like this:
class Foo
{
public IEnumerable<Permission> Permissions { get; set; }
public bool HasPermission(string username, FooOperation operation)
{
// check this Foo's Permissions collection and return the result
}
}
class FooHandler
{
public void SomeOperation(Foo foo)
{
if(foo.HasPermission(username, FooOperation.SomeOperation))
{
// do some operation
}
}
}
Or in the object handler like this:
class Foo
{
public IEnumerable<Permission> Permissions { get; set; }
}
class FooHandler
{
public void SomeOperation(Foo foo)
{
if(SecurityManager.HasPermission(foo, username, FooOperation.SomeOperation))
{
// do some operation
}
}
}
class SecurityManager
{
public bool HasPermission(Foo foo, string username, FooOperation operation)
{
// check foo's Permissions collection and return the result
}
}
What are the pros and cons of each approach? Keep in mind that the Permissions collection will be public in either scenario b/c I'm using Entity Framework in my data layer to persist the business objects directly (I'm willing to change this down the road if necessary).
The second approach is closest to an MVC Controller structure :)
But for your question: the best practice is to separate authorization from business logic. You can implement access management as a separate method and call it anywhere you need to check access permissions. This is very similar to the Authorize filter in an MVC controller.
Additional description:
I would like to remove ACL collection from business objects and retrieve them from the repository within the SecurityManager class.
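A rough sketch of that direction (IPermissionRepository is hypothetical, and I'm assuming a Permission carries a username and an operation), so Foo no longer needs a public Permissions collection:
using System.Collections.Generic;
using System.Linq;

// Hypothetical repository that loads the ACL entries from the data layer.
public interface IPermissionRepository
{
    IEnumerable<Permission> GetPermissionsFor(Foo foo);
}

public class SecurityManager
{
    private readonly IPermissionRepository permissions;

    public SecurityManager(IPermissionRepository permissions)
    {
        this.permissions = permissions;
    }

    public bool HasPermission(Foo foo, string username, FooOperation operation)
    {
        // The ACL stays in the data layer; Foo carries no security state.
        return permissions.GetPermissionsFor(foo)
            .Any(p => p.Username == username && p.Operation == operation);
    }
}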
I'm trying to implement basic auditing for a system where users can login, change their passwords and emails etc.
The functions I want to audit are all in the business layer and I would like to create an Audit object that stores the datetime the function was called including the result.
I recently attended a conference and one of the sessions was on well-crafted web applications, and I am trying to implement some of the ideas. Basically I am using an Enum to return the result of the function and a switch statement to update the UI in that layer. The functions use early returns, which doesn't leave any room for creating, setting and saving the audit.
My question is: what approaches do others take when auditing business functions, and what approach would you take if you had a function like mine (if you say ditch it I'll listen, but I'll be grumpy)?
The code looks a little like this:
function Login(string username, string password)
{
User user = repo.getUser(username, password);
if (user.failLogic1) { return failLogic1Enum; }
if (user.failLogic2) { return failLogic2Enum; }
if (user.failLogic3) { return failLogic3Enum; }
if (user.failLogic4) { return failLogic4Enum; }
user.AddAudit(new Audit(AuditTypeEnum.LoginSuccess));
user.Save();
return successEnum;
}
I could expand the if statements to create a new audit in each one but then the function starts to get messy. I could do the auditing in the UI layer in the switch statement but that seems wrong.
Is it really bad to stick it all in a try/catch with a finally, and use the finally to create the Audit object and set its information in there, thus solving the early return problem? My impression is that a finally is for cleaning up, not auditing.
My name is David, and I'm just trying to be a better coder. Thanks.
I can't say I have used it, but this seems like a candidate for Aspect Oriented Programming. Basically, you can inject code in each method call for stuff like logging/auditing/etc in an automated fashion.
Separately, a try/catch/finally block isn't ideal, but I would run a cost/benefit analysis to see if it is worth it. If you can reasonably refactor the code cheaply so that you don't have to use it, do that. If the cost is exorbitant, I would use the try/finally. I think a lot of people get caught up in the "best solution", but time/money are always constraints, so do what "makes sense".
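As a taste of the AOP idea without pulling in a framework, .NET's DispatchProxy can wrap an interface and run audit code around every call. A sketch only (T must be an interface, and the audit sink is just Console for brevity):
using System;
using System.Reflection;

public class AuditProxy<T> : DispatchProxy where T : class
{
    private T target;

    // Wrap any interface implementation; T must be an interface type.
    public static T Wrap(T target)
    {
        var proxy = Create<T, AuditProxy<T>>() as AuditProxy<T>;
        proxy.target = target;
        return proxy as T;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        try
        {
            return targetMethod.Invoke(target, args);
        }
        finally
        {
            // Every call is audited in one place, even on early returns.
            Console.WriteLine($"Audited {targetMethod.Name} at {DateTime.UtcNow:O}");
        }
    }
}
Wrapping happens once at composition time, e.g. var users = AuditProxy<IUserService>.Wrap(new UserService()); (both names hypothetical).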
The issue with an enum is it isn't really extensible. If you add new components later, your Audit framework won't be able to handle the new events.
In our latest system using EF we created a basic POCO for our audit event in the entity namespace:
public class AuditEvent : EntityBase
{
public string Event { get; set; }
public virtual AppUser AppUser { get; set; }
public virtual AppUser AdminUser { get; set; }
public string Message { get; set; }
private DateTime _timestamp;
public DateTime Timestamp
{
get { return _timestamp == DateTime.MinValue ? DateTime.UtcNow : _timestamp; }
set { _timestamp = value; }
}
public virtual Company Company { get; set; }
// etc.
}
In our Task layer, we implemented an abstract base AuditEventTask:
internal abstract class AuditEventTask<TEntity>
{
internal readonly AuditEvent AuditEvent;
internal AuditEventTask()
{
AuditEvent = InitializeAuditEvent();
}
internal void Add(UnitOfWork unitOfWork)
{
if (unitOfWork == null)
{
throw new ArgumentNullException(Resources.UnitOfWorkRequired_Message);
}
new AuditEventRepository(unitOfWork).Add(AuditEvent);
}
private AuditEvent InitializeAuditEvent()
{
return new AuditEvent {Event = SetEvent(), Timestamp = DateTime.UtcNow};
}
internal abstract void Log(UnitOfWork unitOfWork, TEntity entity, string appUserName, string adminUserName);
protected abstract string SetEvent();
}
Log must be implemented to record the data associated with the event, and SetEvent is implemented to force the derived task to set its event type:
internal class EmailAuditEventTask : AuditEventTask<Email>
{
internal override void Log(UnitOfWork unitOfWork, Email email, string appUserName, string adminUserName)
{
AppUser appUser = new AppUserRepository(unitOfWork).Find(au => au.Email.Equals(appUserName, StringComparison.OrdinalIgnoreCase));
AuditEvent.AppUser = appUser;
AuditEvent.Company = appUser.Company;
AuditEvent.Message = email.EmailType;
Add(unitOfWork);
}
protected override string SetEvent()
{
return AuditEvent.SendEmail;
}
}
The hiccup here is the internal base task - the base task COULD be public so that later additions to the Task namespace could use it - but overall I think that gives you the idea.
When it comes to implementation, our other tasks determine when logging should occur, so in your case:
AuditEventTask task;
if (user.failLogic1) { task = new FailLogin1AuditEventTask(fail 1 params); }
if (user.failLogic2) { task = new FailLogin2AuditEventTask(fail 2 params); }
if (user.failLogic3) { task = new FailLogin3AuditEventTask(etc); }
if (user.failLogic4) { task = new FailLogin4AuditEventTask(etc); }
task.Log();
user.Save();
This is a very hard question to explain, and I hope my code extract explains most of it.
Let's say you have the following database design:
Music style relations diagram: http://img190.yfrog.com/img190/2080/musicstylerelations.jpg
And you want to build one generic interface to modify the musicstyle relations between all three entities. Currently I have created a MusicStyleController which requires the type of Entity it is related to (Member, Event or Band).
[AcceptVerbs(HttpVerbs.Post)]
public JsonResult DeleteMusicStyle(int id, string type, int typeid)
{
if (!(Session["MemberLoggedIn"] is Member)) return Json(string.Empty);
Member member = (Member)Session["MemberLoggedIn"];
switch (type) {
case "member":
_memberService.DeleteMusicStyle(member, id);
break;
case "band":
Band band = _bandService.GetBand(typeid);
_bandService.DeleteMusicStyle(band, id);
break;
case "event":
Event @event = _eventService.GetEvent(typeid);
_eventService.DeleteMusicStyle(@event, id);
break;
}
return SelectedMusicStyles();
}
I make myself sick writing such code, but can't find another, more elegant way.
Note that this function is called using jquery.post().
The question
How would you refactor this code, and would you normalize the database even more? Keep in mind that I'm using the Entity Framework as a data model.
Assuming that id represents the member's id, you could create 3 separate functions to handle each type, thus separating your concerns more than they are now.
Example:
[AcceptVerbs(HttpVerbs.Post)]
public JsonResult DeleteMusicStyleByMember(int id)
{
if (!(Session["MemberLoggedIn"] is Member)) return Json(string.Empty);
Member member = (Member)Session["MemberLoggedIn"];
_memberService.DeleteMusicStyle(member, id);
return SelectedMusicStyles();
}
[AcceptVerbs(HttpVerbs.Post)]
public JsonResult DeleteMusicStyleByBand(int id, int typeid)
{
Band band = _bandService.GetBand(typeid);
_bandService.DeleteMusicStyle(band, id);
return SelectedMusicStyles();
}
[AcceptVerbs(HttpVerbs.Post)]
public JsonResult DeleteMusicStyleByEvent(int id, int typeid)
{
Event @event = _eventService.GetEvent(typeid);
_eventService.DeleteMusicStyle(@event, id);
return SelectedMusicStyles();
}
Then you would just modify your jquery post to go to the respective methods depending on what you're trying to do.
How would you refactor this code?
1) The code which checks the user is logged in should be moved:
if (!(Session["MemberLoggedIn"] is Member)) return Json(string.Empty);
Member member = (Member)Session["MemberLoggedIn"];
This is a cross-cutting concern, which should be applied using a security framework; Spring pops to mind as an example.
2) I would avoid using a singleton pattern to represent these use-cases; they can quickly turn into a collection of scripts which, when they grow large, make it difficult to know where to place code. Consider using the Command Pattern instead.
This pattern will allow you to return the results as JSON, XML or any other format, based on the interfaces you wish your command to conform to.
class DeleteMusicStyleByBandCommand : JsonResultModelCommand, XmlResultModelCommand {
public DeleteMusicStyleByBandCommand(int id, int typeid) {
// set private members
}
public void Execute() {
..
}
public JsonResult GetJsonResult() { .. }
public XmlResult GetXmlResult() { .. }
}
The Command pattern IMHO is much better at representing use-cases than many methods in a Service.
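A controller action could then shrink to building and executing a command (a sketch, assuming the command above and constructor wiring for brevity):
[AcceptVerbs(HttpVerbs.Post)]
public JsonResult DeleteMusicStyleByBand(int id, int typeid)
{
    // The controller only orchestrates; the use-case lives in the command.
    var command = new DeleteMusicStyleByBandCommand(id, typeid);
    command.Execute();
    return command.GetJsonResult();
}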