I'm working on my first DDD project, and I think I understand the basic roles of entities, data access objects, and their relationship. I have a basic validation implementation that stores each validation rule with its associated entity. This works fine for rules that apply only to the current entity, but falls apart when other data is needed. For example, if I have the restriction that a username must be unique, I would like the IsValid() call to return false when there is an existing user with the current name.
However, I'm not finding any clean way to keep this validation rule on the entity itself. I'd like to have an IsNameUnique function on the entity, but most of the solutions to do this would require me to inject a user data access object. Should this logic be in an external service? If so, how do I still keep the logic with the entity itself? Or is this something that should be outside of the user entity?
Thanks!
I like Samuel's response, but for the sake of simplicity, I would recommend implementing a Specification. You create a Specification that returns a boolean to see if an object meets certain criteria. Inject an IUserRepository into the Specification, check if a user already exists with that name, and return a boolean result.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T entity);
}
public class UniqueUsernameSpecification : ISpecification<User>
{
    private readonly IUserRepository _userRepository;

    public UniqueUsernameSpecification(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public bool IsSatisfiedBy(User user)
    {
        User foundUser = _userRepository.FindUserByUsername(user.Username);
        return foundUser == null;
    }
}
// App code
User newUser;
// ... registration stuff...
var userRepository = new UserRepository();
var uniqueUserSpec = new UniqueUsernameSpecification(userRepository);

if (uniqueUserSpec.IsSatisfiedBy(newUser))
{
    // proceed
}
In DDD, there is a concept called an aggregate. It is basically responsible for consistency within the application.
IMHO, in this case specifically, the CustomerRepository would live inside something like a "Customer aggregate", with the Customer class as the aggregate root.
The root would then be responsible for doing all this work, and no one else could access the CustomerRepository's operations. There are some possibilities:
The CustomerRepository could throw an exception if the name is not unique (and the Customer would catch and return the error, or something like that)
The CustomerRepository could have an IsValidUserName(), that the Customer would call before doing anything else
Any other option you can think of
I'm going to say that this is simply beyond the scope of DDD.
What you have with DDD is an aggregation of events that produce a useful model. However, data relationships between such models are not necessarily possible.
What consistency model are you using?
If you can commit multiple events in an ACID transaction you can ensure that changes to a group of aggregates happen atomically.
If you use an eventual consistency model you might not be able to actually validate these things until later. And when you do, you might need to compensate for things that have supposedly happened but are no longer valid.
Uniqueness must be answered within a context. If your model is small (in the thousands) you can have an aggregate represent the set of values that you want to be unique, assuming a transaction across that group of aggregates is possible.
If not, well, then you simply project your model to a database that supports a uniqueness constraint. If this projection fails you have to go back to your aggregate and somehow mark it as invalid. All the while you need to consider failure. This is where a distributed long-running process, like the saga pattern, can be useful, but it also requires that you think about additional things.
In summary, if you cannot use a storage model with a strong consistency guarantee, everything becomes a lot more complicated. That said, there are good, well thought out patterns for managing failures in a distributed environment, but it turns the problem on its head a bit because now you need to consider failure at every point along the way, which is a good thing but will require a bigger time investment.
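As a concrete example of the "project to a database that supports a uniqueness constraint" idea, with Entity Framework you might catch the constraint violation at save time and translate it into a domain-level failure. This is only a sketch: it assumes a unique index on Username, and IsUniqueViolation is a hypothetical, provider-specific helper you would implement for your database:

```csharp
// Sketch: rely on a unique index on Username and handle the violation.
// IsUniqueViolation is a hypothetical helper that inspects the provider's
// error code (e.g. 2601/2627 on SQL Server).
public async Task<bool> TrySaveUserAsync(User user)
{
    _dbContext.Users.Add(user);
    try
    {
        await _dbContext.SaveChangesAsync();
        return true;
    }
    catch (DbUpdateException ex) when (IsUniqueViolation(ex))
    {
        // The username was taken between our check and the commit;
        // compensate (e.g. mark the aggregate invalid, notify the user).
        return false;
    }
}
```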
I'm working on a domain model, writing my software all DDD and stuff, doing a great job, when I suddenly bump into the same problem I have been facing over and over again, and now it's time to share some insights. The root of the problem lies in the uniqueness of data.
For example, let's say we're writing this awesome domain model for a user. Obviously the username is unique and just to be as flexible as possible we want the user to be able to change his name, so I implemented the following method:
public void SetUsername(string value)
{
    if (string.IsNullOrWhiteSpace(value))
    {
        throw new UserException(UserErrorCode.UsernameNullOrEmpty,
            "The username cannot be null or empty");
    }

    if (!Regex.IsMatch(value, RegularExpressions.Username))
    {
        throw new UserException(UserErrorCode.InvalidUsername,
            $"The username {value} does not meet the required format");
    }

    if (!Equals(Username, value))
    {
        Username = value;
        SetState(TrackingState.Modified);
    }
}
Again, this is all fine and fancy, but this function lacks the ability to check whether the username is unique. Having read all these nice articles about DDD, this would be a nice use case for a Domain Service. Ideally, I would inject that service using dependency injection, but this ruins the constructor of my domain model. Alternatively, I can demand an instance of a domain service as a function argument, like so: public void SetUsername(string value, IUsersDomainService service). To be honest, I don't see any solid alternatives.
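For completeness, the method-injection variant just mentioned might look like this. It's a sketch: IUsersDomainService's IsUsernameUnique member and the DuplicateUsername error code are assumed names, not from the code above:

```csharp
// Sketch of method injection; IsUsernameUnique and DuplicateUsername
// are assumed names for illustration.
public void SetUsername(string value, IUsersDomainService service)
{
    if (service == null)
        throw new ArgumentNullException(nameof(service));

    if (!service.IsUsernameUnique(value))
    {
        throw new UserException(UserErrorCode.DuplicateUsername,
            $"The username {value} is already taken");
    }

    SetUsername(value); // reuse the existing local validation
}
```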
Who has faced this problem and maybe came up with a nice rock-solid solution?
I agree with @TomTom. But as most times with software decisions, it depends; there is almost always a tradeoff. As a rule of thumb, you gain more by not injecting a domain service into an entity. This is a common question when one is starting with DDD and CQRS+ES, and it has been thoroughly discussed on the CQRS mailing list.
However, there are some cases where the approach you suggested (known as method injection) might be beneficial, depending on the scenario. I'll try to lay out some analysis points next.
Consider the case where you want to make some validation before creating an entity. Let's think of a hypothetical and way oversimplified international banking context, with the following entity:
public class BankNote
{
    private BankNote() { }

    public static BankNote FromCurrency(
        Currency currency,
        ISupportedCurrencyService currencyService)
    {
        if (!currencyService.IsAvailable(currency))
            throw new ArgumentException($"Currency {currency} is not supported.", nameof(currency));

        return new BankNote();
    }
}
I am using the factory method pattern (FromCurrency) inside the entity to abstract the entity creation process and add some validation, so that the entity is always created in a correct state.
Since the supported currencies might change over time, and the logic of which currencies are supported is a different responsibility than the bank-note issuing logic, injecting the ISupportedCurrencyService in the factory method gives us the following benefits:
By the way, method dependency injection for domain services is suggested in the book Hands-On Domain-Driven Design with .NET Core by Alexey Zimarev, Chapter 5 "Implementing the Model", page 120.
Pros
The BankNote is always created with a supported Currency, even if the supported currencies change over time.
Since we are depending on an interface instead of a concrete implementation, we can easily swap and change the implementation without changing the entity.
The service is never stored as an instance variable of the class, so no risk of depending on it more than we need.
Cons
If we keep going this way we might add a lot of dependencies injected into the entity, and it will become hard to maintain over time.
We are still adding a loosely coupled dependency to the entity, so the entity now needs to know about that interface. We are violating the Single Responsibility Principle, and now you would need to mock the ISupportedCurrencyService to test the factory method.
We can’t instantiate the entity without depending on a service implemented externally from the domain. This can cause serious memory leak and performance issues depending on the scenario.
Another approach
You can avoid all the cons if you call the service before trying to instantiate the entity: have a separate factory class instead of a factory method, and make that factory use the ISupportedCurrencyService and only then call the entity constructor.
public class BankNoteFactory
{
    private readonly ISupportedCurrencyService _currencyService;

    public BankNoteFactory(ISupportedCurrencyService currencyService)
        => _currencyService = currencyService;

    public BankNote FromCurrency(Currency currency)
    {
        // To call the constructor here you would also need
        // to change the constructor visibility to internal.
        if (_currencyService.IsAvailable(currency))
            return new BankNote(currency);

        throw new ArgumentException($"Currency {currency} is not supported.", nameof(currency));
    }
}
Using this approach you end up with one extra class, and an entity that could still be instantiated with unsupported currencies (by calling the constructor directly), but with better SRP compliance.
I'm creating a Web API with users having different roles. In addition, as in any other application, I do not want User A to access User B's resources, like below:
Orders/1 (User A)
Orders/2 (User B)
Of course I can grab the JWT from the request and query the database to check if this user owns that order, but that will make my controller actions too heavy.
This example uses AuthorizeAttribute, but it seems too broad, and I'll have to add tons of conditionals for all routes in the API to check which route is being accessed and then query the database, making several joins that lead back to the users table, to return whether the request is valid or not.
Update
For routes, the first line of defense is a security policy which requires certain claims.
My question is about the second line of defense, which is responsible for making sure users only access their own data/resources.
Are there any standard approaches to be taken in this scenario?
The approach that I take is to automatically restrict queries to records owned by the currently authenticated user account.
I use an interface to indicate which data records are account specific.
public interface IAccountOwnedEntity
{
    Guid AccountKey { get; set; }
}
And provide an interface to inject the logic for identifying which account the repository should be targeting.
public interface IAccountResolver
{
    Guid ResolveAccount();
}
The implementation of IAccountResolver I use today is based on the authenticated user's claims.
public class ClaimsPrincipalAccountResolver : IAccountResolver
{
    private readonly HttpContext _httpContext;

    public ClaimsPrincipalAccountResolver(IHttpContextAccessor httpContextAccessor)
    {
        _httpContext = httpContextAccessor.HttpContext;
    }

    public Guid ResolveAccount()
    {
        var accountKeyClaim = _httpContext
            ?.User
            ?.Claims
            ?.FirstOrDefault(c => String.Equals(c.Type, ClaimNames.AccountKey, StringComparison.InvariantCulture));

        var validAccountKey = Guid.TryParse(accountKeyClaim?.Value, out var accountKey);

        return validAccountKey ? accountKey : throw new AccountResolutionException();
    }
}
Then within the repository I limit all returned records to being owned by that account.
public class SqlRepository<TRecord, TKey>
    where TRecord : class, IAccountOwnedEntity
{
    private readonly DbContext _dbContext;
    private readonly IAccountResolver _accountResolver;

    public SqlRepository(DbContext dbContext, IAccountResolver accountResolver)
    {
        _dbContext = dbContext;
        _accountResolver = accountResolver;
    }

    public async Task<IEnumerable<TRecord>> GetAsync()
    {
        var accountKey = _accountResolver.ResolveAccount();

        return await _dbContext
            .Set<TRecord>()
            .Where(record => record.AccountKey == accountKey)
            .ToListAsync();
    }

    // Other CRUD operations
}
With this approach, I don't have to remember to apply my account restrictions on each query. It just happens automatically.
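For completeness, the ASP.NET Core wiring for these types might look like the following sketch (it assumes the classes above live in your project and you are using the built-in DI container):

```csharp
// In Startup.ConfigureServices (ASP.NET Core)
public void ConfigureServices(IServiceCollection services)
{
    // Required so ClaimsPrincipalAccountResolver can read the current HttpContext.
    services.AddHttpContextAccessor();

    services.AddScoped<IAccountResolver, ClaimsPrincipalAccountResolver>();

    // Register the open generic repository once for all record types.
    services.AddScoped(typeof(SqlRepository<,>));
}
```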
Using the [Authorize] attribute is called declarative authorization, but it is executed before the controller or action runs. When you need resource-based authorization and the document has an author property, you must load the document from storage before the authorization evaluation. This is called imperative authorization.
There is a post on Microsoft Docs on how to deal with imperative authorization in ASP.NET Core. I think it is quite comprehensive and answers your question about a standard approach. You can also find a code sample there.
To make sure User A cannot view an Order with Id=2 (which belongs to User B), I would do one of these two things:
One:
Have a GetOrderByIdAndUser(long orderId, string username) and, of course, take the username from the JWT.
If the user doesn't own the order he won't see it, and there's no extra DB call.
Two:
First fetch the order with GetOrderById(long orderId) from the database, then validate that the username property of the order matches the logged-on user in the JWT.
If the user doesn't own the order, throw an exception, return 404, or whatever, with no extra DB call.
void ValidateUserOwnsOrder(Order order, string username)
{
    if (order.Username != username)
    {
        throw new Exception("Wrong user.");
    }
}
You can create multiple policies in the ConfigureServices method of your Startup class, containing roles or, fitting your example here, names, like this:
AddPolicy("UserA", builder => builder.RequireClaim("Name", "UserA"))
or replace "UserA" with "Accounting" and "Name" with "Role".
Then restrict controller methods by role:
[Authorize(Policy = "UserA")]
Of course this is on the controller level again, but you don't have to hack around tokens or the database. This will give you a direct indicator as to what role or user can use what method.
Your statements are wrong, and you are also designing it wrong.
Over optimization is the root of all evil
That can be summarized as "test the performance before claiming it won't work."
Using the identity (JWT token or whatever you configured) to check if the actual user is accessing the right resource (or, maybe better, serving just the resources it owns) is not too heavy.
If it becomes heavy, you are doing something wrong.
It might be that you have tons of simultaneous accesses and you just need to cache some data, like a dictionary order->ownerid that gets cleared over time... but that doesn't seem to be the case.
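If you did need that cache, it could be sketched with IMemoryCache from Microsoft.Extensions.Caching.Memory. The owner-lookup delegate here is an assumption standing in for however you actually query the order's owner:

```csharp
// Sketch: cache order ownership to avoid repeated DB lookups.
// loadOwnerIdAsync is a placeholder for your real data access.
public class OrderOwnershipCache
{
    private readonly IMemoryCache _cache;
    private readonly Func<string, Task<string>> _loadOwnerIdAsync;

    public OrderOwnershipCache(IMemoryCache cache, Func<string, Task<string>> loadOwnerIdAsync)
    {
        _cache = cache;
        _loadOwnerIdAsync = loadOwnerIdAsync;
    }

    public async Task<bool> CanUserSeeOrderAsync(string userId, string orderId)
    {
        var ownerId = await _cache.GetOrCreateAsync("order-owner:" + orderId, entry =>
        {
            // Entries expire so stale ownership data clears itself over time.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _loadOwnerIdAsync(orderId);
        });

        return ownerId == userId;
    }
}
```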
About the design: make a reusable service that can be injected and has a method to access every resource you need, accepting a user (IdentityUser, JWT subject, just the user id, or whatever you have), something like:
public interface ICustomerStore
{
    Task<Order[]> GetUserOrders(string userId);
    Task<bool> CanUserSeeOrder(string userId, string orderId);
}
Implement it accordingly and use this class to systematically check whether the user can access the resources.
The first question you need to answer is "When can I make this authorisation decision?". When do you actually have the information needed to make the check?
If you can almost always determine the resource being accessed from route data (or other request context), then a policy with a matching requirement and handler may be appropriate. This works best when you are interacting with data clearly silo'd out by resource, as it doesn't help at all with things like filtering of lists, and you'll have to fall back to imperative checks in those cases.
If you can't really figure out whether a user can fiddle with a resource until you've actually examined it, then you are pretty much stuck with imperative checks. There is a standard framework for this, but it isn't IMO as useful as the policy framework. It's probably valuable at some point to write an IUserContext which can be injected at the point you query your domain (so into repos/wherever you use LINQ) and which encapsulates some of these filters (IEnumerable<Order> Restrict(this IEnumerable<Order> orders, IUserContext ctx)).
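That IUserContext idea could be sketched like this; the Order type and its OwnerId property are assumptions for illustration:

```csharp
// Sketch of the suggested IUserContext plus a Restrict extension.
// Order.OwnerId is an assumed property, not from the original post.
public interface IUserContext
{
    Guid UserId { get; }
    bool IsAdmin { get; }
}

public static class OrderQueryExtensions
{
    public static IEnumerable<Order> Restrict(this IEnumerable<Order> orders, IUserContext ctx)
    {
        // Admins see everything; everyone else sees only their own orders.
        return ctx.IsAdmin ? orders : orders.Where(o => o.OwnerId == ctx.UserId);
    }
}
```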
For a complex domain there won't be an easy silver bullet. If you use an ORM it may be able to help you - but don't forget that navigable relationships in your domain will allow code to break context, particularly if you haven't been strict on trying to keep aggregates isolated (myOrder.Items[n].Product.Orderees[notme]...).
Last time I did this I managed to use the policy-based-on-route approach for 90% of cases, but still had to do some manual imperative checks for the odd listing or complex query. The danger in using imperative checks, as I'm sure you are aware, is that you forget them. A potential solution for this is to apply your [Authorize(Policy = "MatchingUserPolicy")] at controller level, add an additional policy "ISolemlySwearIHaveDoneImperativeChecks" on the action, and then in your MatchUserRequirementsHandler, check the context and bypass the naive user/order matching checks if imperative checks have been 'declared'.
I have an issue where I would like my handler to use data generated from the handlers:
UpdateUserProfileImageCommandHandlerAuthorizeDecorator
UpdateUserProfileImageCommandHandlerUploadDecorator
UpdateUserProfileImageCommandHandler
My problem is both architectural and performance.
UpdateUserCommandHandlerAuthorizeDecorator makes a call to the repository (entityframework) to authorize the user. I have other decorators similar to this that should use and modify entities and send it up the chain.
UpdateUserCommandHandler should just save the user to the database. I currently have to make another repository call and update the entity while I could have worked on the entity from the previous decorator.
My issue is that the command only accepts the user Id and some properties to update. In the case where I get the user entity from the Authorize decorator, how can I still work on that entity up the chain? Is it Ok to add that User property to the command and work on that?
Code:
public class UpdateUserProfileImageCommand : Command
{
    public UpdateUserProfileImageCommand(Guid id, Stream image)
    {
        this.Id = id;
        this.Image = image;
    }

    public Stream Image { get; set; }
    public Uri ImageUri { get; set; }
}
public class UpdateUserProfileImageCommandHandlerAuthorizeDecorator : ICommandHandler<UpdateUserProfileImageCommand>
{
    public void Handle(UpdateUserProfileImageCommand command)
    {
        // I would like to use this entity in `UpdateUserProfileImageCommandHandlerUploadDecorator`
        var user = userRepository.Find(u => u.UserId == command.Id);

        if (userCanModify(user, currentPrincipal))
        {
            decoratedHandler(command);
        }
    }
}
public class UpdateUserProfileImageCommandHandlerUploadDecorator : ICommandHandler<UpdateUserProfileImageCommand>
{
    public void Handle(UpdateUserProfileImageCommand command)
    {
        // Instead of asking for this from the repository again, I'd like to reuse the entity from the previous decorator
        var user = userRepository.Find(u => u.UserId == command.Id);

        fileService.DeleteFile(user.ProfileImageUri);
        command.ImageUri = fileService.Upload(generatedUri, command.Image);

        decoratedHandler(command);
    }
}
public class UpdateUserProfileImageCommandHandler : ICommandHandler<UpdateUserProfileImageCommand>
{
    public void Handle(UpdateUserProfileImageCommand command)
    {
        // Again I'm asking for the user...
        var user = userRepository.Find(u => u.UserId == command.Id);

        user.ProfileImageUri = command.ImageUri;

        // I actually have this in a PostCommit Decorator.
        unitOfWork.Save();
    }
}
You should not try to pass on any extra data just for the sake of performance. Besides, using decorators, you can't change the contract. Instead you should allow that user entity to be cached, and this should typically be the responsibility of the repository implementation. With Entity Framework this is actually rather straightforward: you can call DbSet.Find(id) and EF will first look up the entity in its cache. This prevents unneeded round trips to the database. I do this all the time.
So the only thing you have to do is add a Find(key) or GetById method to your repository that maps to EF's Find(key) method, and you're done.
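That repository method can be a one-liner delegating to EF's Find, which checks the context's identity map before querying the database (a sketch, assuming an EF-backed repository with a _dbContext field):

```csharp
// Sketch: GetById delegates to DbSet.Find, which returns the already-tracked
// entity if the context loaded it earlier in this unit of work.
public User GetById(Guid id)
{
    return _dbContext.Set<User>().Find(id);
}
```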
Furthermore, I agree with Pete. Decorators should be primarily for cross-cutting concerns. Adding other things in decorators can be okay sometimes, but you seem to split up the core business logic over both the handler and its decorators. Writing the file to disk belongs to the core logic. You might be concerned about adhering to the Single Responsibility Principle, but it seems to me that you're splitting a single responsibility out over multiple classes. That doesn't mean that your command handlers should be big. As Pete said, you probably want to extract this to a service and inject that service into the handler.
Validating authorization is a cross-cutting concern, so having this in a decorator seems okay, but there are a few problems with your current implementation. First of all, doing it like this will leave you with many non-generic decorators, which leads to a lot of maintenance. Besides, you silently skip the execution if the user is unauthorized, which is typically not what you want.
Instead of silently skipping, consider throwing an exception and prevent the user from being able to call this functionality under normal circumstances. This means that if an exception is thrown, there's either a bug in your code, or the user is hacking your system. Silently skipping without throwing an exception can make it much harder to find bugs.
Another thing you might want to consider is implementing this authorization logic as a generic decorator; for instance, have a generic authorization decorator or validation decorator. This might not always be possible, but you might be able to mark commands with an attribute. For instance, in the system I'm currently working on we mark our commands like this:
[PermittedRole(Role.LabManagement)]
We have an AuthorizationVerifierCommandHandlerDecorator<TCommand> that checks the attributes of the command being executed and verifies whether the current user is allowed to execute that command.
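Such a decorator might be sketched as follows. This is an illustration, not the actual implementation from that system: the PermittedRoleAttribute shape, the ICommandHandler<TCommand> abstraction, and the IUserRoles service are all assumed names:

```csharp
// Sketch of a generic authorization decorator driven by [PermittedRole].
// IUserRoles is a hypothetical abstraction over the current user's roles.
public class AuthorizationVerifierCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _decorated;
    private readonly IUserRoles _userRoles;

    public AuthorizationVerifierCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated, IUserRoles userRoles)
    {
        _decorated = decorated;
        _userRoles = userRoles;
    }

    public void Handle(TCommand command)
    {
        var permitted = typeof(TCommand)
            .GetCustomAttributes(typeof(PermittedRoleAttribute), inherit: true)
            .Cast<PermittedRoleAttribute>()
            .Select(attribute => attribute.Role)
            .ToList();

        // Only enforce when the command declares [PermittedRole];
        // throw instead of silently skipping, as argued above.
        if (permitted.Count > 0 && !permitted.Any(role => _userRoles.CurrentUserIsIn(role)))
            throw new UnauthorizedAccessException(
                "The current user may not execute " + typeof(TCommand).Name);

        _decorated.Handle(command);
    }
}
```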
UPDATE
Here's an example of what I think your UpdateUserProfileImageCommandHandler could look like:
public class UpdateUserProfileImageCommandHandler
    : ICommandHandler<UpdateUserProfileImageCommand>
{
    private readonly IFileService fileService;

    public UpdateUserProfileImageCommandHandler(IFileService fileService)
    {
        this.fileService = fileService;
    }

    public void Handle(UpdateUserProfileImageCommand command)
    {
        var user = userRepository.GetById(command.Id);

        this.fileService.DeleteFile(user.ProfileImageUri);
        command.ImageUri = this.fileService.Upload(generatedUri, command.Image);
        user.ProfileImageUri = command.ImageUri;
    }
}
Why do this via decorators in the first place?
Validation
The normal approach is to have clients do any and all validation required before submitting the command. Any command that is created/published/executed should have all (reasonable) validation performed before submitting. I include 'reasonable' because there are some things, like uniqueness, that can't be 100% validated beforehand. Certainly, authorization to perform a command can be done before submitting it.
Split Command Handlers
Having a decorator that handles just a portion of the command handling logic and then enriches the command object seems like over-engineering to me. IMHO, decorators should be used to extend a given operation with additional functionality, e.g. logging, transactions, or authentication (although like I said, I don't think that applies to decorating command handlers).
It seems that uploading the image, and then assigning the new image URL in the database are the responsibility of one command handler. If you want the details of those two different operations to be abstracted, then inject your handlers with classes that do so, like IUserimageUploader.
Generally
Normally, commands are considered immutable, and should not be changed once created. This is to help enforce that commands should contain up front all the necessary information to complete the operation.
I'm a little late here, but what I do is define an IUserContext class that you can inject via IoC. That way you can load the important user data once, cache it, and let all other dependencies use the same instance. You can have that data expire after a while and it'll take care of itself.
I've been noticing static classes getting a lot of bad rep on SO with regard to being used to store global information (and global variables being scorned upon in general). I'd just like to know what a good alternative is for my example below...
I'm developing a WPF app, and many views of the data retrieved from my db are filtered based on the ID of the current logged-in user. Similarly, certain points in my app should only be accessible to users who are deemed 'admins'.
I'm currently storing a loggedInUserId and an isAdmin bool in a static class.
Various parts of my app need this info and I'm wondering why it's not ideal in this case, and what the alternatives are. It seems very convenient to get up and running.
The only thing I can think of as an alternative is to use an IoC Container to inject a Singleton instance into classes which need this global information, the classes could then talk to this through its interface. However, is this overkill / leading me into analysis paralysis?
Thanks in advance for any insight.
Update
So I'm leaning towards dependency injection via IoC as it would lend itself better to testability, as I could swap in a service that provides "global" info with a mock if needed. I suppose what remains is whether or not the injected object should be a singleton or static. :-)
Will prob pick Mark's answer although waiting to see if there's any more discussion. I don't think there's a right way as such. I'm just interested to see some discussion which would enlighten me as there seems to be a lot of "this is bad" "that is bad" statements on some similar questions without any constructive alternatives.
Update #2
So I picked Robert's answer seeing as it is a great alternative (I suppose alternative is a weird word, probably the One True Way seeing as it is built into the framework). It's not forcing me to create a static class/singleton (although it is thread static).
The only thing that still makes me curious is how this would have been tackled if the "global" data I had to store had nothing to do with User Authentication.
Forget Singletons and static data. That pattern of access is going to fail you at some time.
Create your own custom IPrincipal and replace Thread.CurrentPrincipal with it at a point where login is appropriate. You typically keep the reference to the current IIdentity.
In your routine where the user logs on, e.g. you have verified their credentials, attach your custom principal to the Thread.
IIdentity currentIdentity = System.Threading.Thread.CurrentPrincipal.Identity;
System.Threading.Thread.CurrentPrincipal =
    new MyAppUser(1234, false, currentIdentity);
In ASP.NET you would also set HttpContext.Current.User at the same time.
public class MyAppUser : IPrincipal
{
    private IIdentity _identity;

    public int UserId { get; private set; }
    public bool IsAdmin { get; private set; } // perhaps use IsInRole

    public MyAppUser(int userId, bool isAdmin, IIdentity identity)
    {
        if (identity == null)
            throw new ArgumentNullException("identity");

        UserId = userId;
        IsAdmin = isAdmin;
        _identity = identity;
    }

    #region IPrincipal Members

    public System.Security.Principal.IIdentity Identity
    {
        get { return _identity; }
    }

    // typically this stores a list of roles,
    // but this conforms with the OP question
    public bool IsInRole(string role)
    {
        if ("Admin".Equals(role))
            return IsAdmin;

        throw new ArgumentException("Role " + role + " is not supported");
    }

    #endregion
}
This is the preferred way to do it, and it's in the framework for a reason. This way you can get at the user in a standard way.
We also do things like add properties if the user is anonymous (unknown) to support a scenario of mixed anonymous/logged-in authentication scenarios.
Additionally:
you can still use DI (Dependency Injection) by injecting the membership service that retrieves / checks credentials.
you can use the Repository pattern to also gain access to the current MyAppUser (although arguably it's just making the cast to MyAppUser for you, there can be benefits to this)
There are many other answers here on SO that explain why statics (including Singletons) are bad for you, so I will not go into details (although I wholeheartedly second those sentiments).
As a general rule, DI is the way to go. You can then inject a service that can tell you anything you need to know about the environment.
However, since you are dealing with user information, Thread.CurrentPrincipal may be a viable alternative (although it is Thread Static).
For convenience, you can wrap a strongly typed User class around it.
I'd try a different approach. The static data class is going to get you in trouble -- this is from experience. You could have a User object (see @Robert Paulson's answer for a great way to do this) and pass that to every object as you construct it -- it might work for you, but you'll get a lot of template code that just repeats everywhere.
You could store all your objects in a database / encrypted file with the necessary permissions and then dynamically load all of them based on your user's permissions. With a simple admin form on the database, it's pretty easy to maintain (the file is a little bit harder).
You could create a RequiresAdminPermissionAttribute to apply to all your sensitive objects and check it at run time against your User object to conditionally load the objects.
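That attribute-based check could be sketched like this; PermissionGuard and its EnsureCanLoad method are illustrative names, and the admin flag is assumed to come from your User object:

```csharp
// Sketch: mark sensitive types with a marker attribute and
// check it via reflection before loading them.
[AttributeUsage(AttributeTargets.Class)]
public sealed class RequiresAdminPermissionAttribute : Attribute { }

public static class PermissionGuard
{
    public static void EnsureCanLoad(Type objectType, bool userIsAdmin)
    {
        bool needsAdmin = Attribute.IsDefined(objectType, typeof(RequiresAdminPermissionAttribute));

        if (needsAdmin && !userIsAdmin)
            throw new UnauthorizedAccessException(
                objectType.Name + " requires admin permissions.");
    }
}
```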
While the route you're on now has merit, I think there are some better options to try.
I've commonly seen examples like this on business objects:
public void Save()
{
    if (this.id > 0)
    {
        ThingyRepository.UpdateThingy(this);
    }
    else
    {
        int id = 0;
        ThingyRepository.AddThingy(this, out id);
        this.id = id;
    }
}
So why here, on the business object? This seems contextual or data-related rather than business logic.
For example, a consumer of this object might go through something like this...
...Get form values from a web app...
Thingy thingy = Thingy.CreateNew(Form["name"].Value, Form["gadget"].Value, Form["process"].Value);
thingy.Save();
Or, something like this for an update...
... Get form values from a web app...
Thingy thingy = Thingy.GetThingyByID(Int32.Parse(Form["id"].Value));
Thingy.Name = Form["name"].Value;
Thingy.Save();
So why is this? Why not contain actual business logic such as calculations, business specific rules, etc., and avoid retrieval/persistence?
Using this approach, the code might look like this:
... Get form values from a web app...
Thingy thingy = Thingy.CreateNew(Form["name"].Value, Form["gadget"].Value, Form["process"].Value);
ThingyRepository.AddThingy(ref thingy, out id);
Or, something like this for an update...
... get form values from a web app ...
Thingy thingy = ThingyRepository.GetThingyByID(Int32.Parse(Form["id"].Value));
thingy.Name = Form["Name"].Value;
ThingyRepository.UpdateThingy(ref thingy);
In both of these examples, the consumer, who knows best what is being done to the object, calls the repository and requests either an ADD or an UPDATE. The object remains DUMB in that context, but still provides its core business logic as it pertains to itself, not how it is retrieved or persisted.
In short, I am not seeing the benefit of consolidating the GET and SAVE methods within the business object itself.
Should I just stop complaining and conform, or am I missing something?
This leads into the Active Record pattern (see P of EAA p. 160).
Personally I am not a fan. Tightly coupling business objects and persistence mechanisms so that changing the persistence mechanism requires a change in the business object? Mixing data layer with domain layer? Violating the single responsibility principle? If my business object is Account then I have the instance method Account.Save but to find an account I have the static method Account.Find? Yucky.
That said, it has its uses. For small projects with objects that directly conform to the database schema and have simple domain logic and aren't concerned with ease of testing, refactoring, dependency injection, open/closed, separation of concerns, etc., it can be a fine choice.
Your domain objects should have no reference to persistence concerns.
Create a repository interface in the domain that represents a persistence service, and implement it outside the domain (you can implement it in a separate assembly).
This way your aggregate root doesn't need to reference the repository (since it's an aggregate root, it should already have everything it needs), and it will be free of any dependency or persistence concern. Hence easier to test, and domain focused.
While I have no understanding of DDD, it makes sense to have one method (which will do an UPSERT: insert if the record doesn't exist, update otherwise).
The user of the class can act dumb and simply call Save, whether the record is new or existing.
Having one point of action is much clearer.
EDIT: The decision of whether to do an INSERT or UPDATE is better left to the repository. The user can call Repository.Save(....), which can result in a new record (if the record is not already in the DB) or an update.
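That repository-level upsert might be sketched like this, reusing the Id convention from the question's example (Thingy and its members are taken from the code above; the method placement inside a repository is the assumption):

```csharp
// Sketch: the repository decides between INSERT and UPDATE,
// so callers just call Save regardless of the record's state.
public void Save(Thingy thingy)
{
    if (thingy.Id > 0)
    {
        UpdateThingy(thingy);          // existing record
    }
    else
    {
        AddThingy(thingy, out int id); // new record; DB assigns the id
        thingy.Id = id;
    }
}
```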
If you don't like their approach, make your own. Personally, Save() instance methods on business objects smell really good to me -- one less class name I need to remember. However, I don't have a problem with a factory save, and I don't see why it would be so difficult to have both, i.e.:
class myObject
{
    public void Save()
    {
        myObjFactory.Save(this);
    }
}
...
class myObjectFactory
{
    public void Save(myObject obj)
    {
        // Upsert myObject
    }
}