I am attempting my first foray into DDD, and I asked a question about bulk imports here, but I am going in circles trying to apply the validation for my domain model.
Essentially, I want to run through all the validation without throwing exceptions, so that I can reject the command with all of the validation errors via a list of CommandResult objects inside the Command object. Some of the checks are just configurable mandatory-field checks and will be handled outside the aggregate, but there are also business rules, so I don't want to duplicate the validation logic, and I don't want to fall into an anaemic model by moving everything outside the aggregate just to maintain the always-valid mantra for entities.
I am at a bit of a loss, so I thought it best to ask the experts whether I am going about this correctly before I muddy the waters further!
To try and demonstrate:
Take the example below: a fairly simple UserProfile aggregate whose constructor takes the minimum information required for a profile to exist.
public class UserProfile : AggregateRoot
{
    public Guid Id { get; private set; }
    public Name Name { get; private set; }
    public CardDetail PaymentInformation { get; private set; }

    public UserProfile(Guid id, Name name, CardDetail paymentInformation)
    {
        Id = id;
        Name = name;
        PaymentInformation = paymentInformation;
    }
}
public class CardDetail : ValueObject
{
public string Number {get; private set;}
public string CVC {get; private set; }
public DateTime? IssueDate {get; private set;}
public DateTime ExpiryDate {get;private set;}
public CardDetail(string number, string cvc, DateTime? issueDate, DateTime expiryDate)
{
if(!IsValidCardNumber(number))
{
/*Do something to say details invalid, but not throw exception, possibly?*/
}
Number = number;
CVC = cvc;
        IssueDate = issueDate;
ExpiryDate = expiryDate;
}
private bool IsValidCardNumber(string number)
{
        return Regex.IsMatch(number, /*regex for card number*/);
}
}
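One way to get the "collect every problem, throw nothing" behaviour hinted at in that constructor comment is a static factory on the value object that gathers failures into a result and only builds the instance when everything passes (essentially the Notification pattern). This is only a sketch of the idea; the ValidationResult helper and the expiry rule are illustrative rather than part of the original code, and IsValidCardNumber is assumed to be made static:

using System;
using System.Collections.Generic;

public class ValidationResult
{
    private readonly List<string> _errors = new List<string>();

    public IReadOnlyList<string> Errors => _errors;
    public bool IsValid => _errors.Count == 0;

    public void AddError(string message) => _errors.Add(message);
}

public class CardDetail : ValueObject
{
    // Excerpt: the properties and the assigning constructor from above are omitted here.

    public static ValidationResult TryCreate(
        string number, string cvc, DateTime? issueDate, DateTime expiryDate,
        out CardDetail cardDetail)
    {
        var result = new ValidationResult();
        cardDetail = null;

        // Collect every failure rather than stopping at the first one.
        if (!IsValidCardNumber(number))
            result.AddError("Card number is not in a valid format.");
        if (expiryDate <= DateTime.UtcNow)
            result.AddError("Card has expired."); // illustrative rule

        if (result.IsValid)
            cardDetail = new CardDetail(number, cvc, issueDate, expiryDate);

        return result;
    }
}

CreateProfile could then merge the returned errors into command.Results and only construct the UserProfile when every value object came back valid.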
I then have a method which accepts a command object; it constructs a UserProfile and saves it to the database, but I want to validate before saving:
public void CreateProfile(CreateProfileCommand command)
{
    var paymentInformation = new CardDetail(command.CardNumber, command.CardCVC, command.CardIssueDate, command.CardExpiryDate);
    var errors = /* list of errors added to from card detail validation, possibly? */;
    var profile = new UserProfile(/* pass args, add to errors? */);
    if (errors.Any())
    {
        command.Results.AddRange(errors.Select(x => new CommandResult { Severity = Severity.Error, Message = x.Message }));
return;
}
/* no errors, so continue to save */
}
Now, I could catch exceptions and add them to the command result, but that seems expensive and surely violates the rule against using exceptions for control flow. On the other hand, I want to keep entities and value objects valid, so I find myself in a bit of a rut!
Also, in the example above, the profile could be imported or created manually from a creation screen, but the user should get all of the error messages rather than one at a time in the order they occur. In the application I am working on the rules are a bit more complex, but the idea is the same. I am aware that I shouldn't let a UI concern drive the domain as such, but I don't want to duplicate all of the validation twice more just to make sure the command won't fail, as that will cause maintainability issues further down the line (the situation I currently find myself in and am trying to resolve!)
The question is maybe a bit broad and concerns architectural design, which is something you will have to decide upon yourself, but I will try to assist anyway; I just cannot help myself.
Firstly: this is a great article that might already hint that you are being too critical of your design: http://jeffreypalermo.com/blog/the-fallacy-of-the-always-valid-entity/
You need to decide how your system is going to handle validation.
That is, do you want a system where the domain will absolutely never fail consistency? Then you might need additional classes to sanitize any commands, as you have, and validate them before you accept or reject the change to the domain (a sanitation layer). Alternatively, as in that article, it might indicate that a completely different type of object is required to deal with a specific case (something like legacy data which does not conform to the current rules).
Is it acceptable for the domain to throw an exception when something seriously goes wrong? Then discard all changes in the current aggregate (or even current context) and notify the user.
If you are looking for a peaceful intermediate solution, maybe consider something like this:
public OperationResult UpdateAccount(IBankAccountValidator validator, IAccountUpdateCommand newAccountDetails)
{
var result = validator.Validate(newAccountDetails);
if(result.HasErrors)
{
result.AddMessage("Could not update bank account", Severity.Error);
return result;
}
//apply further logic here
//return success
}
Now you can have all the validation logic in a separate class, but you have to pass it in and call it via double dispatch, and you will have to add the result handling shown above to every call.
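The snippet above does not show the supporting types, so here is one possible shape for them, purely as an assumption about what IBankAccountValidator and OperationResult might look like (IAccountUpdateCommand is taken as given from the snippet):

using System.Collections.Generic;
using System.Linq;

public enum Severity { Info, Warning, Error }

public class OperationMessage
{
    public string Text { get; set; }
    public Severity Severity { get; set; }
}

public class OperationResult
{
    private readonly List<OperationMessage> _messages = new List<OperationMessage>();

    public IReadOnlyList<OperationMessage> Messages => _messages;
    public bool HasErrors => _messages.Any(m => m.Severity == Severity.Error);

    public void AddMessage(string text, Severity severity) =>
        _messages.Add(new OperationMessage { Text = text, Severity = severity });
}

public interface IBankAccountValidator
{
    // Runs every rule and reports all failures at once rather than
    // stopping at the first broken one.
    OperationResult Validate(IAccountUpdateCommand newAccountDetails);
}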
You will truly have to decide what style is acceptable for you/team and what will remain maintainable in the long run.
What is the best practice with respect to encapsulation?
I am using code first strategy with Entity Framework in .NET 6.
public class Employee
{
[Required]
public string Name { get; private set; }
public void SetName(string value)
{
this.Name = value;
}
}
or
public class Employee
{
private string _name;
[Required]
public string Name {
get { return _name; }
set { _name = value; }
}
}
or
public class Employee
{
[Required]
public string Name {get;set;}
}
Or is there a better way?
Thanks!
It is entirely personal preference and working through the scenarios you expect to encounter and how you want to safeguard or streamline them.
As a general rule I personally advocate for simplicity. A simple domain that is easy to understand is easy for other developers and consumers to pick up or otherwise be instructed in. Often these decisions are made to try to restrict developers, to silo the domain so that, for example, UI developers cannot directly modify data, or to tightly control access. This may be necessary in very large projects/teams, and it can work provided your "gatekeepers" can keep updates regular and consistent so that everyone can do what needs to be done; but often, due to time constraints or responsibilities changing hands (gatekeepers leave and get backfilled by others who don't understand or don't agree), bypasses inevitably leak into the model, leading to a confusing and unnecessarily complicated mess.
When it comes to the domain, I generally follow a more DDD-based approach similar to your first example, except I only use methods where I expect there to be validation or a specific combination of state that the entity can enforce itself. The responsibility for mutator methods like this falls on either the entity or the repository (as I typically use a repository pattern).
For a value that can just change and might have simple validation or none at all, I will just use public setters. For no validation:
public string SomeValue { get; set; }
For basic validation that the entity can perform itself, use either attributes or validation logic inside the setter:
private string _someValue;
public string SomeValue
{
get { return _someValue; }
set
{
if (string.IsNullOrEmpty(value)) throw new ArgumentException("SomeValue is not optional.");
_someValue = value;
}
}
Often, updates to state involve changing more than one thing, where the combination of data should be validated together against the rest of the current entity state. We don't want to set values one at a time, because that means the entity could be left in an invalid state, and there is no guarantee that a caller won't simply set one value while ignoring the fact that the other values are now technically invalid. A very rough example of the concept, without getting into the validation itself, is updating an address. Sure, it is possible that we may want to correct a single address field, but typically if we are changing one address field we will most likely be invalidating the rest. For example, if I have an address that contains a Street Name, Number, City, PostCode, and Country, changing just the city or just the country would most often make the address completely invalid. In these cases I would use a setter method to encapsulate updating an address:
public string Country { get; internal set; }
public string City { get; internal set; }
public string PostCode { get; internal set; }
public string StreetName { get; internal set; }
public string StreetNumber { get; set; }
public void UpdateAddress(string country, string city, string postCode, string streetName, string streetNumber)
{ // ...
}
It might be fine to allow them to just change the street number on its own, or possibly even the street name, without calling UpdateAddress, so these might have public setters. City and Country might be FK values (CityId/CountryId), so there would be even less need to update those independently. Simply having this method gatekeep the setting of the values should send a clear message to developers that they should be sending the complete, valid address details at once, rather than relying on correctly chaining piecemeal updates.
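The body of UpdateAddress was left out above; as a rough sketch only, with illustrative rules, it might enforce the combination like this:

public void UpdateAddress(string country, string city, string postCode,
    string streetName, string streetNumber)
{
    // Validate the combination as a whole before mutating anything, so the
    // entity is never left holding a half-updated address.
    if (string.IsNullOrWhiteSpace(country)) throw new ArgumentException("Country is required.", nameof(country));
    if (string.IsNullOrWhiteSpace(city)) throw new ArgumentException("City is required.", nameof(city));
    if (string.IsNullOrWhiteSpace(postCode)) throw new ArgumentException("PostCode is required.", nameof(postCode));
    if (string.IsNullOrWhiteSpace(streetName)) throw new ArgumentException("StreetName is required.", nameof(streetName));

    Country = country;
    City = city;
    PostCode = postCode;
    StreetName = streetName;
    StreetNumber = streetNumber;
}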
Where I might want to validate changes against existing data state, I would use an internal setter and put the update method on the repository. For example, say I want to allow them to update a Name but ensure the name is unique. The repository has access to the domain, so I find it a good location for this responsibility:
public void UpdateUserName(User user, string newName)
{
if (user == null) throw new ArgumentNullException("user");
if (string.IsNullOrEmpty(newName)) throw new ArgumentNullException("newName");
if (user.Name == newName) return; // Nothing to do.
var nameExists = _context.Users.Any(x => x.Name == newName && x.UserId != user.UserId);
if (nameExists) throw new ArgumentException("The name is not unique.");
user.Name = newName; // Allowed via the internal Setter.
}
It would be expected that, if this is driven by a UI, the UI would validate that the name is unique before saving, but the persistence layer should validate as well in case this can be called via other avenues such as APIs, with things like unique constraints on the DB serving as the final guard.
Similarly, when it comes to creating entities, I will use factory methods, much like the above, in the repository classes to do things like CreateAddress(...), which ensures that address entities are not simply newed up and filled ad hoc. This ensures that when an entity is created, all required fields and relationships are provided and populated. The objective of this approach is to help ensure that, from the point an entity is created and at every point through its mutation, it is in a valid and complete state.
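A rough sketch of what such a repository factory method might look like; the Address entity, the _context.Addresses set, and the specific rules here are illustrative assumptions rather than code from the answer:

public Address CreateAddress(string country, string city, string postCode,
    string streetName, string streetNumber)
{
    if (string.IsNullOrWhiteSpace(country)) throw new ArgumentException("Country is required.", nameof(country));
    if (string.IsNullOrWhiteSpace(city)) throw new ArgumentException("City is required.", nameof(city));
    if (string.IsNullOrWhiteSpace(postCode)) throw new ArgumentException("PostCode is required.", nameof(postCode));

    var address = new Address
    {
        Country = country,
        City = city,
        PostCode = postCode,
        StreetName = streetName,
        StreetNumber = streetNumber
    };
    _context.Addresses.Add(address); // tracked so it is saved with the unit of work
    return address;
}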
Hopefully that gives you some food for thought on the subject. Ultimately, though, you should look at what is important for your particular scenario and what real, actual problems you want to address. Don't get too caught up in trying to ward off hypothetical worst-case scenarios, ending up with something so rigid that it negatively impacts how responsive you can be when coding.
Suppose I have a CQRS command that looks like below:
public sealed class DoSomethingCommand : IRequest
{
public Guid Id { get; set; }
public Guid UserId { get; set; }
public string A { get; set; }
public string B { get; set; }
}
That's processed in the following command handler:
public sealed class DoSomethingCommandHandler : IRequestHandler<DoSomethingCommand, Unit>
{
private readonly IAggregateRepository _aggregateRepository;
    public DoSomethingCommandHandler(IAggregateRepository aggregateRepository)
{
_aggregateRepository = aggregateRepository;
}
public async Task<Unit> Handle(DoSomethingCommand request, CancellationToken cancellationToken)
{
// Find aggregate from id in request
var id = new AggregateId(request.Id);
var aggregate = await _aggregateRepository.GetById(id);
if (aggregate == null)
{
throw new NotFoundException();
}
// Translate request properties into a value object relevant to the aggregate
var something = new AggregateValueObject(request.A, request.B);
// Get the aggregate to do whatever the command is meant to do and save the changes
aggregate.DoSomething(something);
await _aggregateRepository.Save(aggregate);
return Unit.Value;
}
}
I have a requirement to save auditing information such as the "CreatedByUserID" and "ModifiedByUserID". This is a purely technical concern because none of my business logic is dependent on these fields.
I've found a related question here, where there was a suggestion to raise an event to handle this. This would be a nice way to do it because I'm also persisting changes based on the domain events raised from an aggregate using an approach similar to the one described here.
(TL;DR: Add events into a collection in the aggregate for every action, pass the aggregate to a single Save method in the repository, use pattern matching in that repository method to handle each event type stored in the aggregate to persist the changes)
e.g.
The DoSomething behavior from above would look something like this:
public void DoSomething(AggregateValueObject something)
{
// Business logic here
...
// Add domain event to a collection
RaiseDomainEvent(new DidSomething(/* required information here */));
}
The AggregateRepository would then have methods that looked like this:
public void Save(Aggregate aggregate)
{
var events = aggregate.DequeueAllEvents();
DispatchAllEvents(events);
}
private void DispatchAllEvents(IReadOnlyCollection<IEvent> events)
{
        foreach (var @event in events)
{
            DispatchEvent((dynamic) @event);
}
}
private void Handle(DidSomething @event)
{
// Persist changes from event
}
As such, adding a RaisedByUserID to each domain event seems like a good way to allow each event handler in the repository to save the "CreatedByUserID" or "ModifiedByUserID". It also seems like good information to have when persisting domain events in general.
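For context, the event might carry that value roughly like this; DomainEvent, RaisedByUserId and the trimmed-down DidSomething constructor are illustrative names rather than code from my project:

public abstract class DomainEvent : IEvent
{
    public Guid RaisedByUserId { get; }
    public DateTime OccurredAtUtc { get; } = DateTime.UtcNow;

    protected DomainEvent(Guid raisedByUserId)
    {
        RaisedByUserId = raisedByUserId;
    }
}

public sealed class DidSomething : DomainEvent
{
    // Other event data omitted for brevity.
    public DidSomething(Guid raisedByUserId) : base(raisedByUserId)
    {
    }
}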
My question is whether there is an easy way to make the UserId from the DoSomethingCommand flow down into the domain event, or whether I should even bother doing so.
At the moment, I think there are two ways to do this:
Option 1:
Pass the UserId into every single use case on an aggregate, so it can be passed into the domain event.
e.g.
The DoSomething method from above would change like so:
public void DoSomething(AggregateValueObject something, Guid userId)
{
// Business logic here
...
// Add domain event to a collection
RaiseDomainEvent(new DidSomething(/* required information here */, userId));
}
The disadvantage to this method is that the user ID really has nothing to do with the domain, yet it needs to be passed into every single use case on every single aggregate that needs the auditing fields.
Option 2:
Pass the UserId into the repository's Save method instead. This approach would avoid introducing irrelevant details to the domain model, even though the repetition of requiring a userId parameter on all the event handlers and repositories is still there.
e.g.
The AggregateRepository from above would change like so:
public void Save(Aggregate aggregate, Guid userId)
{
var events = aggregate.DequeueAllEvents();
DispatchAllEvents(events, userId);
}
private void DispatchAllEvents(IReadOnlyCollection<IEvent> events, Guid userId)
{
        foreach (var @event in events)
        {
            DispatchEvent((dynamic) @event, userId);
}
}
private void Handle(DidSomething @event, Guid userId)
{
// Persist changes from event and use user ID to update audit fields
}
This makes sense to me as the userId is used for a purely technical concern, but it still has the same repetitiveness as the first option. It also doesn't allow me to encapsulate a "RaisedByUserID" in the immutable domain event objects, which seems like a nice-to-have.
Option 3:
Could there be any better ways of doing this or is the repetition really not that bad?
I considered adding a UserId field to the repository that can be set before any actions, but that seems bug-prone even if it removes all the repetition as it would need to be done in every command handler.
Could there be some magical way to achieve something similar through dependency injection or a decorator?
It will depend on the concrete case. I'll try to explain a couple of different problems and their solutions.
You have a system where the auditing information is naturally part of the domain.
Let's take a simple example:
A banking system that makes contracts between the Bank and a Person. The Bank is represented by a BankEmployee. When a Contract is either signed or modified you need to include the information on who did it in the contract.
public class Contract {
public void AddAdditionalClause(BankEmployee employee, Clause clause) {
AddEvent(new AdditionalClauseAdded(employee, clause));
}
}
You have a system where the auditing information is not natural part of the domain.
There are a couple of things here that need to be addressed. For example, can only users issue commands to your system? Sometimes another system can invoke commands.
Solution: Record all incoming commands and their status after processing: successful, failed, rejected, etc.
Include the information about the command issuer.
Record the time when the command occurred. You can include the information about the issuer in the command or not.
public interface ICommand {
    DateTime Timestamp { get; }
}
public class CommandIssuer {
    public CommandIssuerType Type { get; private set; }
public CommandIssuerInfo Issuer {get; private set; }
}
public class CommandContext {
public ICommand cmd { get; private set; }
public CommandIssuer CommandIssuer { get; private set; }
}
public class CommandDispatcher {
public void Dispatch(ICommand cmd, CommandIssuer issuer){
LogCommandStarted(issuer, cmd);
try {
DispatchCommand(cmd);
LogCommandSuccessful(issuer, cmd);
}
catch(Exception ex){
LogCommandFailed(issuer, cmd, ex);
}
}
// or
public void Dispatch(CommandContext ctx) {
// rest is the same
}
}
Pros: this keeps the knowledge that someone issues commands out of your domain.
Cons: if you need more detailed information about the changes and have to match commands to events, you will need to correlate timestamps and other information. Depending on the complexity of the system, this may get ugly.
Solution: Record all incoming commands in the entity/aggregate with the corresponding events. Check this article for a detailed example. You can include the CommandIssuer in the events.
public class SomethingAggregate {
    public void Handle(CommandContext ctx) {
        RecordCommandIssued(ctx);
        Process(ctx.cmd);
}
}
You do bring some information from the outside into your aggregates, but at least it's abstracted, so the aggregate just records it. It doesn't look so bad, does it?
Solution: Use a saga that contains all the information about the operation you are performing. In a distributed system you will need to do this most of the time anyway, so it would be a good solution. In a non-distributed system it will add complexity and overhead that you may not want to have :)
public class DoSomethingSagaCoordinator {
public void Handle(CommandContext cmdCtx) {
var saga = new DoSomethingSaga(cmdCtx);
sagaRepository.Save(saga);
saga.Process();
sagaRepository.Update(saga);
}
}
I've used all of the methods described here and also a variation of your Option 2. In my version, when a request was handled, the repositories had access to a context that contained the user info, so when they saved events this information was included in an EventRecord object that held both the event data and the user info. It was automated, so the rest of the code was decoupled from it. I used DI to inject the context into the repositories. In this case I was just recording the events to an event log; my aggregates were not event sourced.
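As a sketch of that variation (the names ICurrentUserContext and EventRecord are mine, and Aggregate, IEvent and DequeueAllEvents are assumed from the question), the injected context might be used like this:

using System;

public interface ICurrentUserContext
{
    Guid UserId { get; }
}

public sealed class EventRecord
{
    public IEvent Event { get; }
    public Guid RaisedByUserId { get; }
    public DateTime RecordedAtUtc { get; }

    public EventRecord(IEvent @event, Guid raisedByUserId)
    {
        Event = @event;
        RaisedByUserId = raisedByUserId;
        RecordedAtUtc = DateTime.UtcNow;
    }
}

public class AggregateRepository
{
    private readonly ICurrentUserContext _currentUser;

    public AggregateRepository(ICurrentUserContext currentUser)
    {
        _currentUser = currentUser;
    }

    public void Save(Aggregate aggregate)
    {
        foreach (var @event in aggregate.DequeueAllEvents())
        {
            var record = new EventRecord(@event, _currentUser.UserId);
            // Append the record to the event log / audit store here.
        }
    }
}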
I use these guidelines to choose an approach:
If it's a distributed system -> go for the Saga.
If it's not:
Do I need to relate detailed information to the command?
Yes: pass the Commands and/or CommandIssuer info to the aggregates.
If no, then:
Does the database have good transactional support?
Yes: save Commands and CommandIssuer outside of the aggregates.
No: save Commands and CommandIssuer in the aggregates.
I'm making a file-sharing system for a school project using an n-tier architecture.
I want to validate user input in my business logic and be able to notify the user what input has errors and which error it is.
I don't really know how to approach this. My business logic has a method to insert a new upload like this:
public bool NewFile(File entity)
{
return repo.Insert(entity);
}
This is my model of the File object:
public class File : Upload
{
public int UploadId { get; set; }
public string FileType { get; set; }
public string Category { get; set; }
public int Upvote { get; set; }
public int Downvote { get; set; }
}
The upload model contains properties like title, description etc.
How will I be able to notify the user about input errors with a method that returns a Boolean? Do I make a separate validation class and have the method return an instance of that class? Or do I throw custom exceptions with the right error messages and catch them in my presentation layer?
Would appreciate it if anyone could point me in the right direction
I don't know what framework you are using, but a good way to validate user input is to perform the validations before trying to insert into the database.
There is this solution that is quite common in ASP.NET MVC; you might be able to use it in your case.
If that is not convenient, I would suggest using a try/catch around your insert, but you would have to write the logic yourself to notify the user which input threw the error and how it can be fixed (maybe there is a size limit, for example).
EDIT: a try/catch is enough for a school project, but in production you would prefer to anticipate any possible errors before the insert, like so:
public bool NewFile(File entity)
{
if( /* check a validation rule */)return false;
else if( /* check another rule */ )return false;
return repo.Insert(entity);
}
Of course, if you want to send information back to the user, you might prefer to return a message explaining which validation rule did not pass, rather than a bare bool.
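As a sketch of that idea (the NewFileResult type and the specific rules are illustrative, Title is assumed to come from the Upload base class, and repo is the field from your question):

using System.Collections.Generic;

public class NewFileResult
{
    public bool Success { get; set; }
    public List<string> Errors { get; } = new List<string>();
}

public NewFileResult NewFile(File entity)
{
    var result = new NewFileResult();

    // Illustrative rules only; substitute your real ones.
    if (string.IsNullOrWhiteSpace(entity.Title))
        result.Errors.Add("A title is required.");
    if (string.IsNullOrWhiteSpace(entity.FileType))
        result.Errors.Add("The file type could not be determined.");

    result.Success = result.Errors.Count == 0 && repo.Insert(entity);
    return result;
}

The presentation layer can then show result.Errors directly instead of guessing what a false return value meant.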
I'm writing an add-in for another piece of software through its API. The classes returned by the API can only be accessed through the native software and the API, so I am writing my own standalone POCO/DTO objects which map to the API classes. I'm working on a feature which reads in a native file and returns a collection of these POCO objects which I can store elsewhere. Currently I'm using JSON.NET to serialize these classes to JSON, if that matters.
For example I might have a DTO like this
public class MyPersonDTO
{
public string Name {get; set;}
public string Age {get; set;}
public string Address {get; set;}
}
..and a method like this to read the native "Persons" into my DTO objects
public static class MyDocReader
{
public static IList<MyPersonDTO> GetPersons(NativeDocument doc)
{
//Code to read Persons from doc and return MyPersonDTOs
}
}
I have unit tests set up with a test file; however, I keep running into unexpected problems when running my export on other files. Sometimes native objects have unexpected values, or there are flat-out bugs in the API which throw exceptions when there is no reason to.
Currently when something "exceptional" happens I just log the exception and the export fails. But I've decided that I'd rather export what I can, and record the errors somewhere.
The easiest option would be to just log and swallow the exceptions and return what I can, however then there would be no way for my calling code to know when there was a problem.
One option I'm considering is returning a dictionary of errors as a separate out parameter. The key would identify the property which could not be read, and the value would contain the details of the exception/error.
public static class MyDocReader
{
    public static IList<MyPersonDTO> GetPersons(NativeDocument doc, out IDictionary<string, string> errors)
{
//Code to read persons from doc
}
}
Alternatively I was also considering just storing the errors in the return object itself. This inflates the size of my object, but has the added benefit of storing the errors directly with my objects. So later if someone's export generates an error, I don't have to worry about tracking down the correct log file on their computer.
public class MyPersonDTO
{
public string Name {get; set;}
public string Age {get; set;}
public string Address {get; set;}
public IDictionary<string, string> Errors {get; set;}
}
How is this typically handled? Is there another option for reporting the errors along with the return values that I'm not considering?
Instead of returning errors as part of the entities you could wrap the result in a reply or response message. Errors could then be a part of the response message instead of the entities.
The advantage of this is that the entities stay clean.
The downside is that it will be harder to map the errors back to the offending entities/attributes.
When sending batches of entities this downside can be a big problem; when the API is more single-entity oriented, it doesn't matter that much.
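One possible shape for such a response wrapper, sketched against the GetPersons example from the question (the names and the error-key convention are illustrative):

using System.Collections.Generic;

public class GetPersonsResponse
{
    public IList<MyPersonDTO> Persons { get; } = new List<MyPersonDTO>();

    // Keyed by whatever identifies the offending record/property,
    // e.g. "Persons[3].Age", with the error details as the value.
    public IDictionary<string, string> Errors { get; } = new Dictionary<string, string>();

    public bool HasErrors => Errors.Count > 0;
}

public static class MyDocReader
{
    public static GetPersonsResponse GetPersons(NativeDocument doc)
    {
        var response = new GetPersonsResponse();
        // Read persons from doc, adding them to response.Persons and recording
        // any problems in response.Errors instead of throwing.
        return response;
    }
}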
In principle, if something goes wrong in the API (and cannot be recovered), the calling code must be made aware that an exception has occurred, so it can have a strategy in place to deal with it.
Therefore, the approach that comes to my mind is influenced by the same philosophy:
1> Define your own exception, let's say IncompleteReadException.
This Exception shall have a property IList<MyPersonDTO> to store the records read until the exception occurred.
public class IncompleteReadException : Exception
{
    public IList<MyPersonDTO> RecordsRead { get; private set; }
public IncompleteReadException(string message, IList<MyPersonDTO> recordsRead, Exception innerException) : base(message,innerException)
{
this.RecordsRead = recordsRead;
}
}
2> When an exception occurs while reading, catch the original exception, wrap it in this one, and throw the IncompleteReadException.
This allows the calling code (application code) to have a strategy in place for dealing with the situation where incomplete data is read.
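A sketch of how that wrapping might look inside GetPersons (the message text is illustrative):

public static IList<MyPersonDTO> GetPersons(NativeDocument doc)
{
    var records = new List<MyPersonDTO>();
    try
    {
        // Read from doc, adding each successfully mapped person to records.
    }
    catch (Exception ex)
    {
        // Preserve what was read so far along with the original failure.
        throw new IncompleteReadException(
            "The document could not be read completely.", records, ex);
    }
    return records;
}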
Instead of throwing Exceptions throughout your code, you can return some extra information together with whatever you want to return.
public (ErrMsg Msg, int? Result) Divide(int x, int y)
{
ErrMsg msg = new ErrMsg();
try
{
if(x == 0){
msg = new ErrMsg{Severity = Severity.Warning, Text = "X is zero - result will always be zero"};
return (msg, x/y);
}
else
{
msg = new ErrMsg{Severity = Severity.Info, Text = "All is well"};
return (msg, x/y);
}
}
catch (System.Exception ex)
{
logger.Error(ex);
msg = new ErrMsg{Severity=Severity.Error, Text = ex.Message};
return (msg, null);
}
}
Just curious whether someone can shed some light on whether this is a good practice or not.
Currently I am working on a C# project that inserts a record: it runs through 4 or 5 methods to validate that the record can be added, and it returns a string that tells the presentation layer whether or not the record has been submitted.
Is this a good practice? Pros/Cons?
The call from the presentation is:
protected void btnProduct_Click(object sender, EventArgs e)
{
    lblProduct.Text = ProductBLL.CreateProduct(txtProductType.Text, txtProduct.Text, Convert.ToInt32(txtID.Text));
}
The BLL method is:
public class ProductBLL
{
// Create The Product w/ all rules validated
public static string CreateProduct(string productType, string product, int id)
{
// CHECK IF PRODUCT NAME IN DB
        if (ValidateIfProductNameExists(product) == true)
{
return "Invalid Product Name";
}
// CHECK IF 50 PRODUCTS CREATED
else if (ValidateProductCount(id) == true)
{
return "Max # of Products created Can't add Product";
}
// CHECK IF PRODUCT TYPE CREATED
else if (ValidateProductType(productType) == false)
{
return "No Product Type Created";
}
// NOW ADD PRODUCT
InsertProduct(productType, product,id);
return "Product Created Successfully";
    }
}
As mentioned in the previous post, use Enum types.
Below is sample code that could be used in your application.
public struct Result
{
public Result(ActionType action, Boolean success, ErrorType error) :
this()
{
this.Action = action;
        this.HasSucceeded = success;
this.Error = error;
}
public ActionType Action { get; private set; }
    public Boolean HasSucceeded { get; private set; }
public ErrorType Error { get; private set; }
}
public enum ErrorType
{
InvalidProductName, InvalidProductType, MaxProductLimitExceeded, None,
InvalidCategoryName // and so on
}
public enum ActionType
{
CreateProduct, UpdateProduct, DeleteProduct, AddCustomer // and so on
}
public class ProductBLL
{
public Result CreateProduct(String type, String name, Int32 id)
{
Boolean success = false;
// try to create the product
// and set the result appropriately
// could create the product without errors?
success = true;
return new Result(ActionType.CreateProduct, success, ErrorType.None);
}
}
Don't use hardcoded strings.
Use an Enum for the return value, you can do much more and more efficiently with enums.
Validations must be done; the only thing you can improve is to put the whole validation process in a single method.
After you call that method, you can have a single if statement in the main method to check the enum it returned.
if (IsValidated(productType, product,id) == MyEnumType.Success) { }
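A sketch of what that single method could look like, reusing the validation helpers from the question; the MyEnumType members are illustrative:

public enum MyEnumType
{
    Success,
    InvalidProductName,
    MaxProductsReached,
    NoProductTypeCreated
}

private static MyEnumType IsValidated(string productType, string product, int id)
{
    if (ValidateIfProductNameExists(product)) return MyEnumType.InvalidProductName;
    if (ValidateProductCount(id)) return MyEnumType.MaxProductsReached;
    if (!ValidateProductType(productType)) return MyEnumType.NoProductTypeCreated;
    return MyEnumType.Success;
}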
I'd use exceptions rather than a string or an enum...
I would recommend looking at the validation framework used by Imar Spaanjaar in his N-Layer architecture series. The framework he uses is very versatile and it even supports localization through the use of resource files for the validation strings.
It is not a best practice to return a string with the status of the method.
The main reason is that it violates the separation of concerns between the UI layer and the business layer. You've taken the time to separate out the business logic into its own business layer; that's a good thing. But now the business layer is basically returning the error message directly to the UI. The error message to display to the user should be determined by the UI layer.
With the current implementation the business layer also becomes hard to use (for anyone without explicit knowledge of the implementation) because there is no contract. The current contract is that the method will return a string that you should display to the user. This approach makes reuse difficult. Two common scenarios that could cause headaches are if you want to support a new language (localization) or if you want to expose some of these business methods as a service.
I've been bitten trying to use old code like this before. The scenario is that I want to reuse the method because it does exactly what I want, but I also want to take some action if a specific error occurs. In this case you end up either rewriting the business logic (which is sometimes not possible) or hard-coding a horrible if statement into your application, e.g.
if (ProductBLL.CreateProduct(productType, product, ID) ==
"Max # of Products created Can't add Product")
{
...
}
Then a requirement comes down that the message should be changed to something different ("The maximum number of products has been exceeded. Please add less products and try again."). This will break the above code. In production. On a Saturday night.
So in summary: don't do it.
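To illustrate the alternative in a hedged way: suppose CreateProduct returned a result code (here a hypothetical CreateProductResult enum) instead of display text. The UI layer would then own the wording, so the Saturday-night message change touches only presentation code:

protected void btnProduct_Click(object sender, EventArgs e)
{
    // CreateProduct now returns a result code instead of a display string.
    CreateProductResult result = ProductBLL.CreateProduct(
        txtProductType.Text, txtProduct.Text, Convert.ToInt32(txtID.Text));

    switch (result)
    {
        case CreateProductResult.Success:
            lblProduct.Text = "Product created successfully.";
            break;
        case CreateProductResult.MaxProductsReached:
            lblProduct.Text = "The maximum number of products has been exceeded. " +
                              "Please add fewer products and try again.";
            break;
        default:
            lblProduct.Text = "The product could not be created.";
            break;
    }
}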