C# MVC: Logging method execution and performance

There are many other posts regarding recording method execution time (for example through PostSharp, through action filters, or through a custom method attribute).
So recording the time for a method to complete is relatively straightforward at this point.
What I am looking to do, however, is to get more fine-grained performance metrics on a per-request basis, utilizing, for example, the session id to track all operations that occurred for a given request - and the time elapsed for each of them, not just the parent (i.e. controller action) method.
For example, I would like to be able to do something like:
namespace MvcApplication1.Controllers
{
    public class ProductController : Controller
    {
        // record start of method
        public ActionResult Index()
        {
            // record start of service1.method call
            var data = service1.method();
            // store total time of service1.method call

            // record start of db call
            var objects = db.select(obj).where(id = 0);
            // store total time of db call

            return View();
        }
        // record total time of method
    }
}
Ideally I want to link all of these operations (the parent method, the service call and the db call) together - the most likely candidate would be through the session id - but that means that each call would need access to the session id.
From what I've read, the best way of accomplishing this would be to utilize a method attribute to record the parent performance time, and then some sort of custom library function to store the timing of the individual calls (probably using NLog to record them).
What I am asking for are opinions on the best way (if at all possible) to accomplish the above.
Am I missing something with any third-party libraries that exist - i.e. does Unity or PostSharp (or some other library) provide this functionality?
Is it possible to link all of these records via the session id? For example, I don't see how, via PostSharp, to (1) store individual method calls INSIDE the MVC action, and (2) pass variables between calls.
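(For reference, the action-filter route mentioned at the top could look roughly like the sketch below; the attribute name and the HttpContext.Items key are made up, and a plain Trace call stands in for NLog. The stopwatch is stashed in HttpContext.Items because filter instances may be shared between requests.)

using System.Diagnostics;
using System.Web.Mvc;

public class TimeActionAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        filterContext.HttpContext.Items["ActionStopwatch"] = Stopwatch.StartNew();
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var stopwatch = (Stopwatch)filterContext.HttpContext.Items["ActionStopwatch"];
        stopwatch.Stop();
        var sessionId = filterContext.HttpContext.Session.SessionID;
        // Hand off to NLog or any other sink, keyed by the session id.
        Trace.WriteLine(string.Format("[{0}] action took {1} ms", sessionId, stopwatch.ElapsedMilliseconds));
    }
}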

According to your question, you need to log all operations related to a request. I'll provide my point of view; I hope it is useful.
Whether or not you use an existing framework depends on many factors; for now I'll focus on a custom implementation.
First, to accomplish this you need a log structure:
using System;

public enum LogEntryType
{
    Event,
    Task,
    Message,
    Warning,
    Error
}

public class LogEntry
{
    public int? LogEntryID { get; set; }
    public int? LogEntryType { get; set; }
    public DateTime? EntryDate { get; set; }
    public TimeSpan? ElapsedTime { get; set; }
    public string Key { get; set; }
    public string Description { get; set; }
}
Next, you need to create a logger object and invoke it at each point you want to log, for example:
namespace MvcApp.Controllers
{
    public class ProductController : Controller
    {
        protected ILogger Logger;

        public ProductController(ILogger logger)
        {
            Logger = logger;
        }

        public ActionResult Index()
        {
            Logger.Write(LogEntryType.Event, Session.SessionID, "Start of '{0}' action call", "Index");

            // Stopwatch comes from System.Diagnostics
            var serviceStopwatch = Stopwatch.StartNew();
            Logger.Write(LogEntryType.Task, Session.SessionID, "Start of '{0}' task's execution", "GetData");
            var data = service.GetData();
            serviceStopwatch.Stop();
            Logger.Write(LogEntryType.Task, Session.SessionID, serviceStopwatch.Elapsed, "End of '{0}' task's execution", "GetData");

            var dbCallStopwatch = Stopwatch.StartNew();
            Logger.Write(LogEntryType.Task, Session.SessionID, "Start of '{0}' db call", "GetObjects");
            var objects = repository.GetObjects();
            dbCallStopwatch.Stop();
            Logger.Write(LogEntryType.Task, Session.SessionID, dbCallStopwatch.Elapsed, "End of '{0}' db call", "GetObjects");

            Logger.Write(LogEntryType.Event, Session.SessionID, "End of '{0}' action call", "Index");
            return View();
        }
    }
}
In the code above, we take the key's value from the session id (automatically generated) to group all entries.
The Logger.Write method's signatures should be something like these:
public void Write(LogEntryType logEntryType, string key, string message, params string[] args)
{
    var item = new LogEntry
    {
        LogEntryType = (int?)logEntryType,
        EntryDate = DateTime.Now,
        Key = key,
        Description = string.Format(message, args)
    };

    // Code to save the log entry to a text file or database, send an email if it's an error, etc.
}

public void Write(LogEntryType logEntryType, string key, TimeSpan elapsedTime, string message, params string[] args)
{
    var item = new LogEntry
    {
        LogEntryType = (int?)logEntryType,
        EntryDate = DateTime.Now,
        ElapsedTime = elapsedTime,
        Key = key,
        Description = string.Format(message, args)
    };

    // Code to save the log entry to a text file or database, send an email if it's an error, etc.
}
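(For completeness, the ILogger abstraction the controller receives through its constructor isn't defined in the original answer; a minimal sketch matching the two Write signatures above could be:)

public interface ILogger
{
    void Write(LogEntryType logEntryType, string key, string message, params string[] args);
    void Write(LogEntryType logEntryType, string key, TimeSpan elapsedTime, string message, params string[] args);
}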
Usually in real business applications we need to have workflow definitions for execution metrics and other concerns, but at this moment I don't know how complex you want this feature to be.
If you add the logger calls at each required point and save them all to a database (SQL or NoSQL), you can then extract all of the events for a given session id.
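(As a sketch of that extraction step, assuming the saved entries are reachable through something LINQ-queryable; logEntries and sessionId are hypothetical names:)

// Rebuild the timeline of a single request from its session id.
var requestTimeline = logEntries
    .Where(e => e.Key == sessionId)
    .OrderBy(e => e.EntryDate)
    .ToList();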
As you can see above, there are log entry type definitions for warnings and errors. Suppose you add a try-catch block for error handling; inside the catch block, if there is an exception, you can log it:
Logger.Write(LogEntryType.Error, Session.SessionID, "There was an error on '{0}' task. Details: '{1}'", "Index", ex.Message);
As an additional point, it's better to make the logging operations asynchronous so they don't block request handling.
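(One possible shape for that, sketched here as a queue with a single background consumer so the request thread only enqueues; this is an assumption on my part, not part of the original design:)

using System.Collections.Concurrent;
using System.Threading.Tasks;

public class BackgroundLogWriter
{
    private readonly BlockingCollection<LogEntry> _queue = new BlockingCollection<LogEntry>();

    public BackgroundLogWriter()
    {
        // A single long-running consumer drains the queue off the request threads.
        Task.Factory.StartNew(() =>
        {
            foreach (var entry in _queue.GetConsumingEnumerable())
            {
                Persist(entry);
            }
        }, TaskCreationOptions.LongRunning);
    }

    // Called from Logger.Write; returns immediately without blocking the request.
    public void Enqueue(LogEntry entry)
    {
        _queue.Add(entry);
    }

    private void Persist(LogEntry entry)
    {
        // Save to a text file, database, etc.
    }
}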
If this answer makes sense, we can refine the concepts; this is a basic idea of how you can solve your issue.

Related

DDD: Is there an elegant way to pass auditing information through while updating an aggregate root?

Suppose I have a CQRS command that looks like below:
public sealed class DoSomethingCommand : IRequest
{
public Guid Id { get; set; }
public Guid UserId { get; set; }
public string A { get; set; }
public string B { get; set; }
}
That's processed in the following command handler:
public sealed class DoSomethingCommandHandler : IRequestHandler<DoSomethingCommand, Unit>
{
    private readonly IAggregateRepository _aggregateRepository;

    public DoSomethingCommandHandler(IAggregateRepository aggregateRepository)
    {
        _aggregateRepository = aggregateRepository;
    }

    public async Task<Unit> Handle(DoSomethingCommand request, CancellationToken cancellationToken)
    {
        // Find aggregate from id in request
        var id = new AggregateId(request.Id);
        var aggregate = await _aggregateRepository.GetById(id);
        if (aggregate == null)
        {
            throw new NotFoundException();
        }

        // Translate request properties into a value object relevant to the aggregate
        var something = new AggregateValueObject(request.A, request.B);

        // Get the aggregate to do whatever the command is meant to do and save the changes
        aggregate.DoSomething(something);
        await _aggregateRepository.Save(aggregate);

        return Unit.Value;
    }
}
I have a requirement to save auditing information such as the "CreatedByUserID" and "ModifiedByUserID". This is a purely technical concern because none of my business logic is dependent on these fields.
I've found a related question here, where there was a suggestion to raise an event to handle this. This would be a nice way to do it because I'm also persisting changes based on the domain events raised from an aggregate using an approach similar to the one described here.
(TL;DR: Add events into a collection in the aggregate for every action, pass the aggregate to a single Save method in the repository, use pattern matching in that repository method to handle each event type stored in the aggregate to persist the changes)
e.g.
The DoSomething behavior from above would look something like this:
public void DoSomething(AggregateValueObject something)
{
    // Business logic here
    ...

    // Add domain event to a collection
    RaiseDomainEvent(new DidSomething(/* required information here */));
}
The AggregateRepository would then have methods that looked like this:
public void Save(Aggregate aggregate)
{
    var events = aggregate.DequeueAllEvents();
    DispatchAllEvents(events);
}

private void DispatchAllEvents(IReadOnlyCollection<IEvent> events)
{
    foreach (var @event in events)
    {
        DispatchEvent((dynamic)@event);
    }
}

private void Handle(DidSomething @event)
{
    // Persist changes from event
}
As such, adding a RaisedByUserID to each domain event seems like a good way to allow each event handler in the repository to save the "CreatedByUserID" or "ModifiedByUserID". It also seems like good information to have when persisting domain events in general.
My question is whether there is an easy way to make the UserId from the DoSomethingCommand flow down into the domain event, or whether I should even bother doing so.
At the moment, I think there are two ways to do this:
Option 1:
Pass the UserId into every single use case on an aggregate, so it can be passed into the domain event.
e.g.
The DoSomething method from above would change like so:
public void DoSomething(AggregateValueObject something, Guid userId)
{
    // Business logic here
    ...

    // Add domain event to a collection
    RaiseDomainEvent(new DidSomething(/* required information here */, userId));
}
The disadvantage to this method is that the user ID really has nothing to do with the domain, yet it needs to be passed into every single use case on every single aggregate that needs the auditing fields.
Option 2:
Pass the UserId into the repository's Save method instead. This approach would avoid introducing irrelevant details to the domain model, even though the repetition of requiring a userId parameter on all the event handlers and repositories is still there.
e.g.
The AggregateRepository from above would change like so:
public void Save(Aggregate aggregate, Guid userId)
{
    var events = aggregate.DequeueAllEvents();
    DispatchAllEvents(events, userId);
}

private void DispatchAllEvents(IReadOnlyCollection<IEvent> events, Guid userId)
{
    foreach (var @event in events)
    {
        DispatchEvent((dynamic)@event, userId);
    }
}

private void Handle(DidSomething @event, Guid userId)
{
    // Persist changes from event and use user ID to update audit fields
}
This makes sense to me as the userId is used for a purely technical concern, but it still has the same repetitiveness as the first option. It also doesn't allow me to encapsulate a "RaisedByUserID" in the immutable domain event objects, which seems like a nice-to-have.
Option 3:
Could there be any better ways of doing this or is the repetition really not that bad?
I considered adding a UserId field to the repository that can be set before any actions, but that seems bug-prone even if it removes all the repetition as it would need to be done in every command handler.
Could there be some magical way to achieve something similar through dependency injection or a decorator?
It will depend on the concrete case. I'll try to explain a couple of different problems and their solutions.
You have a system where the auditing information is naturally part of the domain.
Let's take a simple example:
A banking system that makes contracts between the Bank and a Person. The Bank is represented by a BankEmployee. When a Contract is either signed or modified you need to include the information on who did it in the contract.
public class Contract
{
    public void AddAdditionalClause(BankEmployee employee, Clause clause)
    {
        AddEvent(new AdditionalClauseAdded(employee, clause));
    }
}
You have a system where the auditing information is not a natural part of the domain.
There are a couple of things here that need to be addressed. For example, can only users issue commands to your system? Sometimes another system can invoke commands.
Solution: Record all incoming commands and their status after processing: successful, failed, rejected, etc.
Include the information of the command issuer.
Record the time when the command occurred. You can include the information about the issuer in the command or not.
public interface ICommand
{
    DateTime Timestamp { get; }
}

public class CommandIssuer
{
    public CommandIssuerType Type { get; private set; }
    public CommandIssuerInfo Issuer { get; private set; }
}

public class CommandContext
{
    public ICommand cmd { get; private set; }
    public CommandIssuer CommandIssuer { get; private set; }
}
public class CommandDispatcher
{
    public void Dispatch(ICommand cmd, CommandIssuer issuer)
    {
        LogCommandStarted(issuer, cmd);
        try
        {
            DispatchCommand(cmd);
            LogCommandSuccessful(issuer, cmd);
        }
        catch (Exception ex)
        {
            LogCommandFailed(issuer, cmd, ex);
        }
    }

    // or
    public void Dispatch(CommandContext ctx)
    {
        // rest is the same
    }
}
Pros: this keeps your domain free of the knowledge that someone issues commands.
Cons: if you need more detailed information about the changes and have to match commands to events, you will need to match timestamps and other information. Depending on the complexity of the system this may get ugly.
Solution: Record all incoming commands in the entity/aggregate with the corresponding events. Check this article for a detailed example. You can include the CommandIssuer in the events.
public class SomethingAggregate
{
    public void Handle(CommandContext ctx)
    {
        RecordCommandIssued(ctx);
        Process(ctx.cmd);
    }
}
You do include some information from the outside in your aggregates, but at least it's abstracted, so the aggregate just records it. It doesn't look so bad, does it?
Solution: Use a saga that will contain all the information about the operation you are performing. In a distributed system you will need to do this most of the time anyway, so it would be a good solution. In another system it will add complexity and an overhead that you may not want to have :)
public class DoSomethingSagaCoordinator
{
    public void Handle(CommandContext cmdCtx)
    {
        var saga = new DoSomethingSaga(cmdCtx);
        sagaRepository.Save(saga);

        saga.Process();
        sagaRepository.Update(saga);
    }
}
I've used all of the methods described here, and also a variation of your Option 2. In my version, when a request was handled, the repositories had access to a context that contained the user info, so when they saved events this information was included in an EventRecord object that had both the event data and the user info. It was automated, so the rest of the code was decoupled from it. I used DI to inject the context into the repositories. In this case I was just recording the events to an event log; my aggregates were not event sourced.
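(A rough sketch of that variation, with hypothetical names; the point is that the DI-injected context keeps the auditing concern out of both the aggregates and the command handlers:)

public interface IUserContext
{
    Guid UserId { get; }
}

public class EventRecord
{
    public object EventData { get; set; }
    public Guid RaisedByUserId { get; set; }
    public DateTime Timestamp { get; set; }
}

public class AuditingAggregateRepository
{
    private readonly IUserContext _userContext; // injected per request by the container

    public AuditingAggregateRepository(IUserContext userContext)
    {
        _userContext = userContext;
    }

    public void Save(Aggregate aggregate)
    {
        foreach (var @event in aggregate.DequeueAllEvents())
        {
            var record = new EventRecord
            {
                EventData = @event,
                RaisedByUserId = _userContext.UserId,
                Timestamp = DateTime.UtcNow
            };
            // append the record to the event log
        }
    }
}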
I use these guidelines to choose an approach:
If it's a distributed system -> go for a Saga.
If it's not:
Do I need to relate detailed information to the command?
Yes: pass Commands and/or CommandIssuer info to aggregates.
If no, then:
Does the database have good transactional support?
Yes: save Commands and CommandIssuer outside of aggregates.
No: save Commands and CommandIssuer in aggregates.

DDD validation without throwing exceptions

I am attempting my first foray into DDD, and I asked a question about bulk imports here, but I am going in circles trying to apply the validation for my domain model.
Essentially I want to run through all the validation without throwing exceptions, so that I can reject the command with all the validation errors via a list of CommandResult objects inside the Command object. Whilst some checks are just configurable mandatory-field checks and so will be handled outside the aggregate, there are also business rules, so I don't want to duplicate the validation logic, and I don't want to fall into an anaemic model by moving everything outside the aggregate in order to maintain the always-valid mantra for entities.
I am at a bit of a loss, so I thought it best to ask the experts whether I am going about things correctly before I start muddying the waters further!
To try and demonstrate:
Take the below: we have a fairly simple UserProfile aggregate, whose constructor takes the minimum information required for a profile to exist.
public class UserProfile : AggregateRoot
{
    public Guid Id { get; private set; }
    public Name Name { get; private set; }
    public CardDetail PaymentInformation { get; private set; }

    public UserProfile(Guid id, Name name, CardDetail paymentInformation)
    {
        Id = id;
        Name = name;
        PaymentInformation = paymentInformation;
    }
}

public class CardDetail : ValueObject
{
    public string Number { get; private set; }
    public string CVC { get; private set; }
    public DateTime? IssueDate { get; private set; }
    public DateTime ExpiryDate { get; private set; }

    public CardDetail(string number, string cvc, DateTime? issueDate, DateTime expiryDate)
    {
        if (!IsValidCardNumber(number))
        {
            /* Do something to say details invalid, but not throw exception, possibly? */
        }
        Number = number;
        CVC = cvc;
        IssueDate = issueDate;
        ExpiryDate = expiryDate;
    }

    private bool IsValidCardNumber(string number)
    {
        return Regex.IsMatch(number, /* regex for card number */);
    }
}
I then have a method which accepts a command object, which will construct a UserProfile and save it to the database, but I want to validate before saving:
public void CreateProfile(CreateProfileCommand command)
{
    var paymentInformation = new CardDetail(command.CardNumber, command.CardCVC, command.CardIssueDate, command.CardExpiryDate);
    var errors = /* list of errors added to from card detail validation, possibly? */
    var profile = new UserProfile(/* pass args, add to errors? */);
    if (errors.Any())
    {
        command.Results.Add(errors.Select(x => new CommandResult { Severity = Severity.Error, Message = x.Message }));
        return;
    }
    /* no errors, so continue to save */
}
Now, I could handle exceptions and add them to the command result, but that seems expensive and surely violates the rule against using exceptions for control flow? On the other hand, I want to keep entities and value objects valid, so I find myself in a bit of a rut!
Also, in the example above, the profile could be imported or created manually from a creation screen, but the user should get all error messages rather than each one in the order it occurs. In the application I am working on, the rules applied are a bit more complex, but the idea is the same. I am aware that I shouldn't let a UI concern impact the domain as such, but I don't want to have to duplicate all the validation twice more just so I can make sure the command won't fail, as that will cause maintainability issues further down the line (the situation I find myself in and am trying to resolve!)
The question is maybe a bit broad and concerns architectural design, which is something you should decide upon, but I will try and assist anyway - I just cannot help myself.
Firstly: this is a great article that might already hint that you are being too critical of your own design: http://jeffreypalermo.com/blog/the-fallacy-of-the-always-valid-entity/
You would need to decide about the way your system is going to handle validation.
That is, do you want a system where the domain will just absolutely never, ever fail consistency? Then you might need additional classes to sanitize commands such as yours and validate them before you accept or reject the change to the domain (a sanitation layer). Alternatively, as in that article, it might indicate that a completely different type of object is required to deal with a specific case (something like legacy data which does not conform to current rules).
Is it acceptable for the domain to throw an exception when something seriously goes wrong? Then discard all changes in the current aggregate (or even current context) and notify the user.
If you are looking for a peaceful intermediate solution, maybe consider something like this:
public OperationResult UpdateAccount(IBankAccountValidator validator, IAccountUpdateCommand newAccountDetails)
{
    var result = validator.Validate(newAccountDetails);
    if (result.HasErrors)
    {
        result.AddMessage("Could not update bank account", Severity.Error);
        return result;
    }

    // apply further logic here
    // return success
}
Now you can have all the validation logic in a separate class, but you have to pass it in and call it via double dispatch, and you will add the result handling seen above in every call.
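(The OperationResult and Severity types in the snippet above are not defined in the original; a plausible minimal shape that accumulates messages instead of throwing might be:)

using System.Collections.Generic;
using System.Linq;

public enum Severity { Info, Warning, Error }

public class ValidationMessage
{
    public string Message { get; set; }
    public Severity Severity { get; set; }
}

public class OperationResult
{
    private readonly List<ValidationMessage> _messages = new List<ValidationMessage>();

    public bool HasErrors
    {
        get { return _messages.Any(m => m.Severity == Severity.Error); }
    }

    public IEnumerable<ValidationMessage> Messages
    {
        get { return _messages; }
    }

    public void AddMessage(string message, Severity severity)
    {
        _messages.Add(new ValidationMessage { Message = message, Severity = severity });
    }
}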
You will truly have to decide what style is acceptable for you/team and what will remain maintainable in the long run.

Rebus: advice for adding a usercontext to each message

First, let's define 'UserContext' as a number of properties required to execute the receiving message in the correct context of the user, so it is a bit more than just a string. In my case this also includes data on which application 'instance' the user was working.
As I see it there are 2 main options to provide a 'UserContext' for a message:
As a Header
As a base class for the message
When using a Header, I need to provide my own serialization, when using a base class, Rebus will solve the serialization for me.
So I spiked using a base class using a little sample program:
public class UserContext
{
    public string Name { get; set; }
    public int UserId { get; set; }
    public Guid AppId { get; set; }
}

public class UserContextMessageBase
{
    public UserContext UserContext { get; set; }
}

public class SimpleMessage : UserContextMessageBase
{
    public string Data { get; set; }
}

internal class Program
{
    private static void Main(string[] args)
    {
        using (var adapter = new BuiltinContainerAdapter())
        using (var timer = new Timer())
        {
            //adapter.Register(typeof(UserContextHandler));
            adapter.Register(typeof(SimpleMessageHandler));

            var bus = Configure.With(adapter)
                .Transport(t => t.UseMsmqAndGetInputQueueNameFromAppConfig())
                .MessageOwnership(d => d.FromRebusConfigurationSection())
                //.SpecifyOrderOfHandlers(o => o.First<UserContextHandler>())
                .CreateBus()
                .Start();

            timer.Elapsed += delegate
            {
                bus.Send(new Messages.SimpleMessage { Data = Guid.NewGuid().ToString() });
            };
            timer.Interval = 10000;
            timer.Start();

            Console.WriteLine("Press enter to quit");
            Console.ReadLine();
        }
    }
}
internal class UserContextHandler : IHandleMessages<UserContextMessageBase>
{
    protected UserContext _context;

    public void Handle(UserContextMessageBase message)
    {
        var old = Console.ForegroundColor;
        if (_context != null)
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine("Context is already populated");
        }

        Console.ForegroundColor = ConsoleColor.DarkYellow;
        Console.WriteLine("Processing UserContextMessageBase");
        // create the correct Context to process the message
        _context = message.UserContext;
        Console.ForegroundColor = old;
    }
}
internal class SimpleMessageHandler : UserContextHandler, IHandleMessages<SimpleMessage>
{
    public void Handle(SimpleMessage message)
    {
        // allow to use the _context to process this message
        Console.WriteLine("Received SimpleMessage {0}", message.Data);
    }
}
But when I run the program, I see that the SimpleMessage is getting processed twice. Is this 'by design' or perhaps a bug?
On the other hand, I can uncomment the registration for the UserContextHandler, and not inherit the SimpleMessageHandler from the UserContextHandler, but then I would have to stuff the UserContext into the MessageContext, and use it as such from the SimpleMessageHandler.
In my opinion, both approaches are valid - personally, I'd lean towards using headers because they're less noisy, and because that's really what they're there for :) but, as you correctly state, that requires that you somehow take care of "serializing" the user context into one or more headers, deserializing it again upon receiving each message.
The header approach could be done pretty elegantly, though, in the MessageSent and MessageContextEstablished events for sending and receiving respectively, staying out of your message handlers, and then the user context could be made available in the message context.
The other approach with using a message base class is definitely valid too, and I can see that you're hit by the fact that the lookup for the incoming message will get a new handler instance for each lookup - therefore, the pipeline will contain two handler instances, and the message will then be dispatched "as much as possible" (i.e. once for each compatible type/supertype) to each handler instance, thus resulting in effectively handling the message twice.
In your case, I suggest you do as you hint at towards the end: Make the UserContextHandler a separate handler that you ensure gets to be first in the pipeline, thus allowing it to stash the user context in MessageContext.GetCurrent().Items for all subsequent handlers to extract.
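(A minimal sketch of that arrangement, assuming the Rebus 1.x MessageContext API from the question's era; the "UserContext" key name is arbitrary:)

internal class UserContextHandler : IHandleMessages<UserContextMessageBase>
{
    public void Handle(UserContextMessageBase message)
    {
        // Runs first in the pipeline; stash the context for subsequent handlers.
        MessageContext.GetCurrent().Items["UserContext"] = message.UserContext;
    }
}

internal class SimpleMessageHandler : IHandleMessages<SimpleMessage>
{
    public void Handle(SimpleMessage message)
    {
        var userContext = (UserContext)MessageContext.GetCurrent().Items["UserContext"];
        Console.WriteLine("Received SimpleMessage {0} for user {1}", message.Data, userContext.Name);
    }
}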
I'd love to cook up an example showing a way to do exactly what you need but using headers (possibly in the form of simply a ;-separated list of key-value pairs, or something similar), but I'm afraid I cannot promise that such an example will be available within the next few days.
Let me know if it works out for you :)
Update: I've added a sample to Rebus' sample repo that demonstrates how an ambient user context can be picked up and passed around in a message header, including a few nifties around configuration and DI - it's called UserContextHeaders - check it out :)

Reading the document immediately after storing it using a transaction in RavenDB

Here I'm storing a document of type TransactionSummary using a transaction scope as follows:
public class TransactionSummary
{
    [JsonIgnore]
    public Guid? Etag { get; set; }
    public String Id { get; set; }
    public String TransactId { get; set; }
    public OpenOrClosed BalanceType { get; set; }
    public TransactStatus Status { get; set; }
    public String PayeeAccountNo { get; set; }
    public Decimal AmountPaid { get; set; }
}

using (var trans = new TransactionScope())
{
    using (IDocumentSession sess = GetConnection())
    {
        sess.Store(fldtrans);
        sess.SaveChanges();
    }
    trans.Complete();
}
After storing it, I immediately need to retrieve it, so I'm doing the following:
using (IDocumentSession sess = GetConnection())
{
    sess.Advanced.AllowNonAuthoritativeInformation = false;
    sess.Advanced.UseOptimisticConcurrency = true;
    transact = sess.Query<TransactionSummary>().Where(x => x.TransactId == transactid).FirstOrDefault();
    transact.Etag = sess.Advanced.GetEtagFor(transact);
}
Here I'm getting an exception as follows:
ex = {"Value cannot be null.\r\nParameter name: key"}
StackTrace = "
    at System.Collections.Generic.Dictionary`2.FindEntry(TKey key)
    at System.Collections.Generic.Dictionary`2.TryGetValue(TKey key, TValue& value)
    at Raven.Client.Document.InMemoryDocumentSessionOperations.GetDocumentMetadata[T](T instance) in c...
I understand that it takes a certain time to commit the transaction, so when the document is read immediately the read fails. But how can I overcome this without sacrificing my requirement?
Matt, here I'm doing a lot of other work in that transaction scope too; I've just shown a glimpse so you can understand. One part of it is that I post the TransactionId to a queue, and my background service fetches that TransactionId (not the document id) and does some other processing which needs to happen post-transaction. What happens is that the queue fetches the TransactionId before the transaction has been committed to the real database.
This is my GetConnection code for reference:
public class DataAccess : IDataAccess
{
    static IDocumentStore _docStore;

    public DataAccess()
    {
        _docStore = new DocumentStore { Url = "http://localhost:8081" };
        _docStore.Initialize();
        _docStore.Conventions.IdentityPartsSeparator = "-";
    }

    #region IDataAccess Members

    public IDocumentSession GetConnection()
    {
        _docStore.DatabaseCommands.EnsureDatabaseExists("MyDB");
        return _docStore.OpenSession("MyDB");
    }

    #endregion
}
Based on just what you have shown, there's no need to explicitly define a transaction scope. There is already an implicit transaction around the unit of work defined by the scope of the session. The only time you should need to explicitly use TransactionScope is if you are making calls to two or more separate databases with different sessions - or calling raven and some other transaction-aware process.
I'm not sure why you would want to query in a new session immediately after storing. You certainly will have stale index issues to contend with. If you really must do this, you should probably just load the document by its Id.
Perhaps you are not aware of this, but the Id is available immediately after calling .Store() in your first session - even before you save changes. And if you want to get the document's etag, you can make the call to .GetEtagFor() right after the .SaveChanges() call in the first session. There's really no need to create another session for either of these purposes.
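(Putting those two points together against the question's own code, a sketch; fldtrans is the document instance from the question:)

using (var trans = new TransactionScope())
{
    using (IDocumentSession sess = GetConnection())
    {
        sess.Store(fldtrans);   // fldtrans.Id is assigned here, before SaveChanges
        sess.SaveChanges();

        var etag = sess.Advanced.GetEtagFor(fldtrans);  // available right after SaveChanges

        // If a later read is unavoidable, load by Id rather than querying,
        // since a load by Id does not go through a possibly-stale index:
        // var loaded = sess.Load<TransactionSummary>(fldtrans.Id);
    }
    trans.Complete();
}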
If you haven't already, you should also read this RavenDB KB article about optimistic concurrency and etag issues. I think you'll find most of your concerns addressed there.
One last thing - please update your question to show the code for your GetConnection() method. It's hard to tell if you are using IDocumentSession and IDocumentStore properly without showing that. Thanks.

C# OOP - Return a string from a Business Object that performs Validation and Inserts a record - Good Practice or Not?

Just curious if someone can shed some light on whether this is a good practice or not.
Currently I am working on a C# project that inserts a record and runs through 4 or 5 methods to validate that the record can be added, returning a string that tells the presentation layer whether or not the record has been submitted.
Is this a good practice? Pros/cons?
The call from the presentation is:
protected void btnProduct_Click(object sender, EventArgs e)
{
    lblProduct.Text = ProductBLL.CreateProduct(txtProductType.Text, txtProduct.Text, Convert.ToInt32(txtID.Text));
}
The BLL method is:
public class ProductBLL
{
    // Create the product w/ all rules validated
    public static string CreateProduct(string productType, string product, int id)
    {
        // CHECK IF PRODUCT NAME IN DB
        if (ValidateIfProductNameExists(product) == true)
        {
            return "Invalid Product Name";
        }
        // CHECK IF 50 PRODUCTS CREATED
        else if (ValidateProductCount(id) == true)
        {
            return "Max # of Products created Can't add Product";
        }
        // CHECK IF PRODUCT TYPE CREATED
        else if (ValidateProductType(productType) == false)
        {
            return "No Product Type Created";
        }

        // NOW ADD PRODUCT
        InsertProduct(productType, product, id);
        return "Product Created Successfully";
    }
}
As mentioned in the previous post, use enum types.
Below is sample code that could be used in your application:
public struct Result
{
    public Result(ActionType action, Boolean success, ErrorType error)
        : this()
    {
        this.Action = action;
        this.HasSucceeded = success;
        this.Error = error;
    }

    public ActionType Action { get; private set; }
    public Boolean HasSucceeded { get; private set; }
    public ErrorType Error { get; private set; }
}

public enum ErrorType
{
    InvalidProductName, InvalidProductType, MaxProductLimitExceeded, None,
    InvalidCategoryName // and so on
}

public enum ActionType
{
    CreateProduct, UpdateProduct, DeleteProduct, AddCustomer // and so on
}

public class ProductBLL
{
    public Result CreateProduct(String type, String name, Int32 id)
    {
        Boolean success = false;

        // try to create the product
        // and set the result appropriately

        // could create the product without errors?
        success = true;

        return new Result(ActionType.CreateProduct, success, ErrorType.None);
    }
}
Don't use hardcoded strings.
Use an enum for the return value; you can do much more, and more efficiently, with enums.
The validations must be done; the only thing you can improve is to put the whole validation process in a single method.
After you call the method, you can have a single if statement in the main method to check the returned enum:
if (IsValidated(productType, product, id) == MyEnumType.Success) { }
I'd use exceptions rather than a string or an enum...
I would recommend looking at the validation framework used by Imar Spaanjaars in his N-Layer architecture series. The framework he uses is very versatile, and it even supports localization through using resource files for the validation strings.
It is not a best practice to return a string with the status of the method.
The main reason is that it violates the separation of concerns between the UI layer and the business layer. You've taken the time to separate out the business logic into its own business layer; that's a good thing. But now the business layer is basically returning the error message directly to the UI. The error message to display to the user should be determined by the UI layer.
With the current implementation the business layer also becomes hard to use (for anyone without explicit knowledge of the implementation) because there is no contract. The current contract is that the method will return a string that you should display to the user. This approach makes reuse difficult. Two common scenarios that could cause headaches are if you want to support a new language (localization) or if you want to expose some of these business methods as a service.
I've been bitten when trying to use some old code like this before. The scenario is that I want to reuse the method because it does exactly what I want, but I want to take some action if a specific error occurs. In this case you end up either rewriting the business logic (which is sometimes not possible) or you end up having to hard-code a horrible if statement into your application, e.g.:
if (ProductBLL.CreateProduct(productType, product, ID) ==
    "Max # of Products created Can't add Product")
{
    ...
}
Then a requirement comes down that the message should be changed to something different ("The maximum number of products has been exceeded. Please add less products and try again."). This will break the above code. In production. On a Saturday night.
So in summary: don't do it.
