How to manage unicity in my business logic layer? - c#

In an ASP.NET Core MVC execution context, I have this simple entity:
public class Foo
{
    public int Id { get; private set; }
    public string Name { get; private set; }
    public string Code { get; private set; }

    private Foo() { }

    public Foo(string name, string code)
    {
        GuardClauses.IsNullOrWhiteSpace(name, nameof(name), "cannot be null or empty");
        GuardClauses.IsNullOrWhiteSpace(code, nameof(code), "cannot be null or empty");
        this.Name = name;
        this.Code = code;
    }
}
In my DbContext I have this configuration that ensures Code is unique from a persistence point of view:
protected override void OnModelCreating(ModelBuilder builder)
{
    builder.Entity<Foo>()
           .HasIndex(u => u.Code)
           .IsUnique();
}
I want the addNewFoo method in my service class to ensure that, across all Foos in my application, the Code property is unique before adding a new one.
I try as much as I can to respect the persistence ignorance principle, but I'm not as skilled as I'd wish at doing that.
For starters, is it the role of a Builder to determine whether the Code field is unique?
Secondly, I know that in my validation layer I can check whether an existing Foo already has the same Code as the one I'm currently trying to add. But this approach isn't thread-safe or transactional.
The fact is, I don't want to wait until I add my Foo and get a SqlException just to learn it cannot be done.
What is the best approach to ensure unicity in my application, with the Fail Fast principle in mind?

Because there isn't a concrete example or description of a system, I will generalize a bit. If you provide a concrete example, I can add additional info. Every solution has a context to which it applies best, and of course there is always a trade-off.
Let's ask a couple of questions about the nature of this Code and what it represents:
1. Who is responsible for generating the Code: the user of the system or the system itself?
2. Can the Code be completely random (a UUID, for example)?
3. Is the Code generated by some special algorithm (an SSN, or maybe a CarPartNumber composed of different parts with special meanings)?
And one more very important question:
4. How frequently do we expect these uniqueness violations to occur?
If the answer to question 2 is yes, then you don't have a problem. You can have duplicate UUIDs, but the chances are very low. You can add a unique constraint to your DB just in case and treat a violation as a normal error that you don't care about, since it's going to happen once in a million years.
If the answer to question 3 is yes, then we have a different situation. In a multi-user system you cannot avoid concurrency. There are a couple of ways to deal with the situation:
Option 1: Optimistic Offline Lock
Option 2: Pessimistic Offline Lock
Option 3: If the system is generating codes, have a special service and queue code generation requests.
If you choose to use a lock, you can either lock the whole resource Foo or lock only the Code generation.
Option 1:
You will have to handle the SQLException. This option is used in most applications today because it ensures a smooth user experience: the application doesn't stall for long periods because someone has locked a resource.
You can use an abstraction, for example a Repository. Define your own application-level exception, UniqueCodeViolationException, to be thrown by the Repository. The Repository will try/catch the SQLException, inspect its error code, and wrap it in a UniqueCodeViolationException. This won't save you the check, but at least it hides the concrete errors and keeps the processing in one place.
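A minimal sketch of such a repository, assuming EF Core on SQL Server; the names FooRepository and UniqueCodeViolationException follow the text above, while the use of DbUpdateException and SQL Server's duplicate-key error numbers 2601/2627 are implementation assumptions:

using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;

// Application-level exception that hides the persistence details.
public class UniqueCodeViolationException : Exception
{
    public UniqueCodeViolationException(string code, Exception inner)
        : base($"A Foo with code '{code}' already exists.", inner) { }
}

public class FooRepository
{
    private readonly DbContext _context; // your concrete DbContext in practice

    public FooRepository(DbContext context) => _context = context;

    public void Add(Foo foo)
    {
        _context.Set<Foo>().Add(foo);
        try
        {
            _context.SaveChanges();
        }
        catch (DbUpdateException ex)
            when (ex.GetBaseException() is SqlException sql
                  && (sql.Number == 2601 || sql.Number == 2627)) // duplicate-key errors
        {
            throw new UniqueCodeViolationException(foo.Code, ex);
        }
    }
}

Callers now catch one well-named application exception instead of parsing provider-specific errors at every call site.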
Option 2:
Sometimes you really need to ensure that there is no concurrency, so you use this option. In this case you lock the process of creating a Foo to a single user, and don't even allow others to open the dialog/form/page for creating a Foo while the lock is held.
This ensures consistency and avoids the problem, at the cost of a system that is basically unusable for multiple users targeting the same Foo. It's quite possible that the application you are building will have only one person responsible for Foo creation, or that concurrency is very low, so this may be a good solution.
I have friends who use this lock in an insurance application. Usually one person goes to one office to take out an insurance policy, so the possibility of concurrent creation of a policy for the same person is very low, but the cost of having multiple policies for the same person is very high.
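A minimal sketch of one way to implement such a pessimistic offline lock, assuming a SQL Server table named Locks whose primary key is the resource name; every name here is illustrative, not from the answer:

using Microsoft.Data.SqlClient;

public class OfflineLockService
{
    private readonly string _connectionString;

    public OfflineLockService(string connectionString) => _connectionString = connectionString;

    // Returns true if the lock was acquired. The primary key makes concurrent
    // acquisition attempts fail in the database, not in C#.
    public bool TryAcquire(string resourceName)
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var command = new SqlCommand(
            "INSERT INTO Locks (ResourceName, AcquiredAt) VALUES (@name, SYSUTCDATETIME())",
            connection);
        command.Parameters.AddWithValue("@name", resourceName);
        try { command.ExecuteNonQuery(); return true; }
        catch (SqlException ex) when (ex.Number == 2627) { return false; } // PK violation: already locked
    }

    public void Release(string resourceName)
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();
        using var command = new SqlCommand(
            "DELETE FROM Locks WHERE ResourceName = @name", connection);
        command.Parameters.AddWithValue("@name", resourceName);
        command.ExecuteNonQuery();
    }
}

In practice you would also expire stale locks (for example, via the AcquiredAt column) so a crashed client cannot block everyone forever.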
Option 3:
On the other hand, if your Code is generated by the system, you can have a CodeGenerationService that deals with code generation and ensures unique codes are produced. You can queue these requests. In each generation operation, the service can check whether the code already exists and return an error (or throw an exception).
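A minimal sketch of such a service, serializing requests through a lock as an in-process stand-in for a queue; ICodeStore and the eight-character code format are assumptions for illustration:

using System;

public interface ICodeStore
{
    bool Exists(string code);
    void Reserve(string code);
}

public class CodeGenerationService
{
    private readonly object _gate = new object();
    private readonly ICodeStore _store;

    public CodeGenerationService(ICodeStore store) => _store = store;

    public string GenerateUniqueCode()
    {
        lock (_gate) // requests are handled one at a time, as if queued
        {
            string code;
            do
            {
                code = Guid.NewGuid().ToString("N").Substring(0, 8).ToUpperInvariant();
            }
            while (_store.Exists(code)); // retry on the rare collision

            _store.Reserve(code); // persist the reservation before returning
            return code;
        }
    }
}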
Now to question 4. If you don't expect collisions often, just add a unique constraint in your DB and treat a violation as a general unexpected error. Add a check for whether the Code already exists, and show an error if it does.
You can still have concurrency here, so there is a slim chance that one user will add a Foo while another gets an error like "Oops... something went wrong, please try again." Since this will happen once in a hundred years, it's OK.
This last solution makes your system a lot simpler by ignoring special situations that occur only rarely.
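As a concrete illustration, the asker's AddNewFoo could combine the fail-fast check with the unique index as a backstop. IFooRepository and the small Result type are hypothetical, and UniqueCodeViolationException refers to the repository sketch earlier:

public record Result(bool Success, string? Error = null)
{
    public static Result Ok() => new(true);
    public static Result Fail(string error) => new(false, error);
}

public interface IFooRepository
{
    bool ExistsWithCode(string code);
    void Add(Foo foo);
}

public class FooService
{
    private readonly IFooRepository _repository;

    public FooService(IFooRepository repository) => _repository = repository;

    public Result AddNewFoo(string name, string code)
    {
        // Fail fast: catches almost all duplicates before touching the DB.
        if (_repository.ExistsWithCode(code))
            return Result.Fail($"Code '{code}' is already in use.");

        try
        {
            _repository.Add(new Foo(name, code));
            return Result.Ok();
        }
        catch (UniqueCodeViolationException)
        {
            // Lost the race between the check and the insert; the unique
            // index was the backstop. Show the same friendly error.
            return Result.Fail($"Code '{code}' is already in use.");
        }
    }
}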

Related

Should I throw on null parameters in private/internal methods?

I'm writing a library that has several public classes and methods, as well as several private or internal classes and methods that the library itself uses.
In the public methods I have a null check and a throw like this:
public int DoSomething(int number)
{
    if (number == null)
    {
        throw new ArgumentNullException(nameof(number));
    }
}
But then this got me thinking: to what level should I be adding parameter null checks to methods? Do I also start adding them to private methods? Should I only do it for public methods?
Ultimately, there isn't a uniform consensus on this. So instead of giving a yes or no answer, I'll try to list the considerations for making this decision:
Null checks bloat your code. If your procedures are concise, the null guards at the beginning of them may form a significant part of the overall size of the procedure, without expressing the purpose or behaviour of that procedure.
Null checks expressively state a precondition. If a method is going to fail when one of the values is null, having a null check at the top is a good way to demonstrate this to a casual reader without them having to hunt for where it's dereferenced. To improve this, people often use helper methods with names like Guard.AgainstNull instead of writing the check each time (see the sketch after this list).
Checks in private methods are untestable. By introducing a branch in your code which you have no way of fully traversing, you make it impossible to fully test that method. This conflicts with the point of view that tests document the behaviour of a class, and that that class's code exists to provide that behaviour.
The severity of letting a null through depends on the situation. Often, if a null does get into the method, it'll be dereferenced a few lines later and you'll get a NullReferenceException. This really isn't much less clear than throwing an ArgumentNullException. On the other hand, if that reference is passed around quite a bit before being dereferenced, or if throwing an NRE will leave things in a messy state, then throwing early is much more important.
Some libraries, like .NET's Code Contracts, allow a degree of static analysis, which can add an extra benefit to your checks.
If you're working on a project with others, there may be existing team or project standards covering this.
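A minimal sketch of such a guard helper; the names Guard.AgainstNull, ReportPrinter, and Print are conventional placeholders, not from a specific library:

using System;

public static class Guard
{
    // Returns the value so the guard can also be used inline in assignments.
    public static T AgainstNull<T>(T value, string parameterName) where T : class
    {
        if (value == null)
            throw new ArgumentNullException(parameterName);
        return value;
    }
}

public class ReportPrinter
{
    public void Print(string report)
    {
        Guard.AgainstNull(report, nameof(report)); // precondition reads as one line
        Console.WriteLine(report);
    }
}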
If you're not a library developer, don't be defensive in your code
Write unit tests instead
In fact, even if you're developing a library, throwing is most of the time BAD.
1. Testing null on an int must never be done in C#:
It raises warning CS4072, because it's always false.
2. Throwing an Exception means it's exceptional: abnormal and rare.
It should never be raised in production code. Especially because exception stack trace traversal can be a CPU-intensive task, and you'll never be sure where the exception will be caught, whether it's caught and logged or just silently ignored (after killing one of your background threads), because you don't control the user code. There are no "checked exceptions" in C# (unlike Java), which means you never know, unless it's well documented, which exceptions a given method could raise. By the way, that kind of documentation must be kept in sync with the code, which is not always easy to do (it increases maintenance costs).
3. Exceptions increase maintenance costs.
As exceptions are thrown at runtime and under certain conditions, they can be detected really late in the development process. As you may already know, the later an error is detected in the development process, the more expensive the fix is. I've even seen exception-raising code make its way into production and not raise for a week, only to raise every day thereafter (killing production, oops!).
4. Throwing on invalid input means you don't control input.
That's the case for public methods of libraries. However, if you can check it at compile time with another type (for example, a non-nullable type like int), then that's the way to go. And of course, as they are public, it's their responsibility to check the input.
Imagine a user who passes what they think is valid data, and then, by a side effect, a method deep in the stack trace throws an ArgumentNullException.
What will be his reaction?
How can he cope with that?
Will it be easy for you to provide an explanatory message?
5. Private and internal methods should never ever throw exceptions related to their input.
You may throw exceptions in your code because an external component (maybe a database, a file, or something else) is misbehaving and you can't guarantee that your library will continue to run correctly in its current state.
Making a method public doesn't mean that it should be called from outside your library, only that it can (look at Public versus Published from Martin Fowler). Use IoC, interfaces, and factories, and publish only what the user needs, while making the whole library's classes available for unit testing. (Or you can use the InternalsVisibleTo mechanism.)
6. Throwing exceptions without any explanatory message is making fun of the user.
No need to recall the feelings one has when a tool is broken without giving any clue about how to fix it. Yes, I know, you come to SO and ask a question...
7. Invalid input means it breaks your code
If your code can produce a valid output with the value, then it's not invalid and your code should handle it. Add a unit test for this value.
8. Think in user terms:
Do you like it when a library you use throws exceptions in your face? Like: "Hey, it's invalid, you should have known that!"
Even if, from your point of view, with your knowledge of the library internals, the input is invalid, consider how you can explain it to the user (be kind and polite):
Clear documentation (in XML doc; an architecture summary may also help).
Publish the XML doc with the library.
Clear error explanation in the exception if any.
Give the choice:
Look at the Dictionary class. Which call do you prefer? Which call do you think is the fastest? Which call can raise an exception?
Dictionary<string, string> dictionary = new Dictionary<string, string>();
string res;
dictionary.TryGetValue("key", out res);
or
var other = dictionary["key"];
9. Why not use Code Contracts?
It's an elegant way to avoid the ugly if-then-throw and to isolate the contract from the implementation, allowing you to reuse the contract for different implementations at the same time. You can even publish the contract to your library's users to further explain to them how to use the library.
As a conclusion: even though you can easily use throw, and even though you will see exceptions raised when you use the .NET Framework, that doesn't mean they should be used without caution.
Here are my opinions:
General Cases
Generally speaking, it is better to check for invalid inputs before you process them in a method, for robustness reasons, be it a private, protected, internal, protected internal, or public method. Although this approach has some performance cost, in most cases it is worth it, rather than spending more time debugging and patching the code later.
Strictly Speaking, however...
Strictly speaking, however, it is not always necessary. Some methods, usually private ones, can be left without any input checking, provided you have a full guarantee that no call to the method is ever made with invalid inputs. This may give you some performance benefit, especially if the method is called frequently to do some basic computation or action. In such cases, checking input validity may impair performance significantly.
Public Methods
Now the public method is trickier. This is because, strictly speaking, the access modifier alone can only tell who can use the methods, not who will use them. Moreover, it also cannot tell how the methods are going to be used (that is, whether they will be called with invalid inputs in the given scopes or not).
The Ultimate Determining Factor
Although access modifiers for methods in the code can hint at how to use the methods, ultimately it is humans who will use them, and it is up to the humans how they are going to use them and with what inputs. Thus, in some rare cases, it is possible to have a public method that is only called in some private scope, and in that private scope the inputs for the public method are guaranteed to be valid before it is called.
In such cases, even though the access modifier is public, there isn't any real need to check for invalid inputs, except for robust-design reasons. And why is this so? Because there are humans who know completely when and how the methods will be called!
Here we can see there is no guarantee that public methods always require invalid-input checks. And if this is true for public methods, it must also be true for protected, internal, protected internal, and private methods.
Conclusions
So, in conclusion, we can say a couple of things to help us make decisions:
Generally, it is better to check for any invalid inputs, for robust-design reasons, provided performance is not at stake. This is true for any access modifier.
The invalid-input check can be skipped if doing so brings a significant performance gain, provided it is also guaranteed that the scopes calling the method always pass valid inputs.
A private method is usually where we skip such checking, but there is no guarantee we cannot do the same for a public method as well.
Humans are the ones who ultimately use the methods. Regardless of how access modifiers hint at the use of the methods, how the methods are actually used and called depends on the coders. Thus, we can only speak of general good practice, without claiming it is the only way of doing things.
The public interface of your library deserves tight checking of preconditions, because you should expect the users of your library to make mistakes and violate the preconditions by accident. Help them understand what is going on in your library.
The private methods in your library do not require such runtime checking because you call them yourself. You are in full control of what you are passing. If you want to add checks because you are afraid of messing up, use asserts instead; they will catch your own mistakes but do not impede performance at runtime.
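A minimal sketch of that split, using Debug.Assert, which is compiled out of Release builds and therefore costs nothing at runtime; the Order types and the calculation are made up for illustration:

using System;
using System.Diagnostics;
using System.Linq;

public record OrderLine(decimal Price, int Quantity);
public record Order(OrderLine[] Lines);

public class PriceCalculator
{
    // Public surface: callers are outside our control, so validate and throw.
    public decimal Total(Order order)
    {
        if (order == null)
            throw new ArgumentNullException(nameof(order));
        return TotalCore(order);
    }

    // Private: we control every call site. The assert documents the
    // precondition and catches our own mistakes in Debug builds only.
    private decimal TotalCore(Order order)
    {
        Debug.Assert(order != null, "order must be validated by the caller");
        return order.Lines.Sum(l => l.Price * l.Quantity);
    }
}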
Though you tagged this language-agnostic, it seems to me there probably isn't a general answer.
Notably, in your example you hinted the argument's type: in a language that enforces type hints, an error fires as soon as the function is entered, before you can take any action.
In such a case, the only solution is to check the argument before calling your function... but since you're writing a library, that makes no sense!
On the other hand, with no hinting, it remains realistic to check inside the function.
So at this step of the reflection, I'd already suggest giving up hinting.
Now let's go back to your precise question: to what level should it be checked?
For a given piece of data, it would happen only at the highest level where it can "enter" (there may be several occurrences for the same data), so logically it would concern only public methods.
That's the theory. But maybe you're planning a huge, complex library, so it might not be easy to be certain you've registered every "entry point".
In that case, I'd suggest the opposite: consider simply applying your checks everywhere, then omit them only where you clearly see duplication.
Hope this helps.
In my opinion you should ALWAYS check for "invalid" data, independent of whether it is a private or a public method.
Looked at from the other way: why should you be able to work with something invalid just because the method is private? That doesn't make sense, right? Always try to use defensive programming and you will be happier in life ;-)
This is a question of preference. But consider instead why you are checking for null, or rather, checking for valid input. It's probably because you want to let the consumer of your library know when they are using it incorrectly.
Let's imagine that we have implemented a class PersonList in a library. This list can only contain objects of type Person. We have also implemented some operations on our PersonList, and therefore we do not want it to contain any null values.
Consider the two following implementations of the Add method for this list:
Implementation 1
public void Add(Person item)
{
    if (_size == _items.Length)
    {
        EnsureCapacity(_size + 1);
    }
    _items[_size++] = item;
}
Implementation 2
public void Add(Person item)
{
    if (item == null)
    {
        throw new ArgumentNullException(nameof(item), "Cannot add null to PersonList");
    }
    if (_size == _items.Length)
    {
        EnsureCapacity(_size + 1);
    }
    _items[_size++] = item;
}
Let's say we go with implementation 1:
Null values can now be added to the list.
All operations implemented on the list will have to handle these null values.
If we check for and throw an exception in each operation, the consumer will be notified about the exception when calling one of the operations, and at that point it will be very unclear what they have done wrong (it just wouldn't make any sense to go with this approach).
If we instead choose implementation 2, we make sure input to our library has the quality we require for our class to operate on it. This means we only need to handle it here, and then we can forget about it while implementing our other operations.
It will also be clearer to consumers that they are using the library the wrong way when they get an ArgumentNullException on .Add instead of in .Sort or similar.
To sum up, my preference is to check for valid arguments when they are supplied by the consumer and are not being handled by the private/internal methods of the library. This basically means we have to check arguments in public constructors/methods that take parameters. Our private/internal methods can only be called from our public ones, which have already checked the input, so we are good to go!
Using Code Contracts should also be considered when verifying input.

Does the need to make the code simpler justify the use of wrong abstractions?

Suppose we have a CommandRunner class that runs Commands. When a Command is created, it's kept in the processingQueue for processing. If the execution of the Command finishes with errors, the Command is moved to the faultedQueue for later processing, but when everything is OK, the Command is moved to the archiveQueue. The archiveQueue is not going to be processed in any way.
The CommandRunner looks something like this:
class CommandRunner
{
    private readonly IQueue<Command> processingQueue;
    private readonly IQueue<Command> faultedQueue;
    private readonly IQueue<Command> archiveQueue;

    public CommandRunner(IQueue<Command> processingQueue,
                         IQueue<Command> faultedQueue,
                         IQueue<Command> archiveQueue)
    {
        this.processingQueue = processingQueue;
        this.faultedQueue = faultedQueue;
        this.archiveQueue = archiveQueue;
    }

    public void RunCommands()
    {
        while (processingQueue.HasItems)
        {
            var current = processingQueue.Dequeue();
            var result = current.Run();

            if (result.HasError)
                current.MoveTo(faultedQueue);
            else
                current.MoveTo(archiveQueue);
            ...
        }
    }
}
The CommandRunner receives the three dependencies as PersistentQueues. The PersistentQueue is responsible for the long-term storage of the Commands, so the CommandRunner is freed from handling it.
The only purpose of the archiveQueue is to keep the design homogeneous, keeping the CommandRunner persistence-ignorant and with few dependencies.
For example, we can imagine a property like this:
IEnumerable<Command> AllCommands
{
    get
    {
        return Enumerate(archiveQueue).Union(processingQueue).Union(faultedQueue);
    }
}
Many portions of the class need to do this (handle the archive as a Queue, to keep the code simple as shown above).
Does it make sense to use a Queue even if it's not the best abstraction, or do I have to use another abstraction for the archive concept?
What are the alternatives for meeting these requirements?
Keep in mind that code, especially running code, usually gets tangled and messy as time passes. To combat this, good names, good design, and meaningful comments come into play.
If you aren't going to process the archiveQueue, and it's just storage for messages that have been successfully processed, you can always store it as a different type (list, collection, set, whatever suits your needs), and then choose one of the following two (a sketch follows the list):
Keep the name archiveQueue and change the underlying type. I would leave a comment where it's defined (or injected) saying: Notice that this might not be an actual queue. The name is for consistency reasons only.
Change the name to archiveRepository or something similar, while keeping the queue type. Obviously, since it's still a queue, you'll leave a comment saying: Notice, this is actually a queue.
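A minimal sketch of the first option applied to the question's class, keeping the field name but injecting a plain collection; ICollection<Command> is just one reasonable choice of type:

class CommandRunner
{
    private readonly IQueue<Command> processingQueue;
    private readonly IQueue<Command> faultedQueue;
    // Notice that this might not be an actual queue. The name is for
    // consistency reasons only; nothing ever dequeues from the archive.
    private readonly ICollection<Command> archiveQueue;

    public CommandRunner(IQueue<Command> processingQueue,
                         IQueue<Command> faultedQueue,
                         ICollection<Command> archiveQueue)
    {
        this.processingQueue = processingQueue;
        this.faultedQueue = faultedQueue;
        this.archiveQueue = archiveQueue;
    }
}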
Another thing to keep in mind is that if you have n people working on your code base, you'll probably get n+1 different preferences about how it should be done :)
A queue is a useful structure when you care about the order of the items inside it. If, when post-processing your commands, you care about the order in which they ran, then a queue can be a good choice.
If you don't need information about the order of the commands, maybe you can use a List (in the System.Collections namespace).
I think your choice is good. In the same case, I'd use queues; we have a good example in OS design principles: inside the OS (in the kernel), processes are queued for execution. Clearly the OS queues are more complicated, because they take other variables into account, like priority and CPU utilization, but we can learn from the use of queues as data structures in process management.

Best practice for interface to allow adding, deleting etc. child objects w/ broadcasting events (similar to ObservableCollection)

I'm trying to specify an interface for a Folder. That interface should allow one to:
- Add or delete files of type IFile
- Get a list of IFile
- Broadcast events whenever a file was added/deleted/changed (e.g. for the GUI to subscribe to)
and I'm trying to find the best way to do it. So far, I came up with three ideas:
1
public interface IFolder_v1
{
    ObservableCollection<IFile> files { get; }
}
2
public interface IFolder_v2
{
    void add(IFile file);
    void remove(IFile file);
    IEnumerable<IFile> files { get; }
    EventHandler OnFileAdded { get; }
    EventHandler OnFileRemoved { get; }
    EventHandler OnFileChanged { get; }
}
3
public interface IFolder_v3
{
    void add(IFile file);
    void remove(IFile file);
    IEnumerable<IFile> files { get; }
    EventHandler<CRUD_EventArgs> OnFilesChanged { get; }
}
public class CRUD_EventArgs : EventArgs
{
    public enum Operations
    {
        added,
        removed,
        updated
    }

    private Operations _op;

    public CRUD_EventArgs(Operations operation)
    {
        this._op = operation;
    }

    public Operations operation
    {
        get { return this._op; }
    }
}
Idea #1 seems really nice to implement, as it doesn't require much code, but it has some problems: what, for example, if an implementation of IFolder only allows files of specific types (say, text files) to be added, and throws an exception whenever another file type is added? I don't think that would be feasible with a simple ObservableCollection.
Idea #2 seems OK, but requires more code. Also, defining three separate events seems a bit tedious: what if an object needs to subscribe to all events? We'd need to subscribe to three different event handlers for that. Seems annoying.
It's also a little less easy to use than solution #1: one now needs to call .add to add files, while the list of files is stored in .files, etc., so the naming conventions are a bit less clear than having everything bundled up in one simple sub-object (.files from idea #1).
Idea #3 circumvents all of those problems, but has the longest code. Also, I have to use a custom EventArgs class, which I can't imagine is particularly clean in an interface definition. (It also seems overkill to define a class like that for simple CRUD event notifications; shouldn't there be an existing class of some sort?)
Would appreciate some feedback on what you think is the best solution (possibly even something I haven't thought of at all). Is there any best practice?
Take a look at the Framework's FileSystemWatcher class. It does pretty much what you need, but if you still need to implement your own class anyway, you can take ideas from how it is implemented (which is, by the way, similar to your #2 approach).
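A minimal usage sketch of the BCL's FileSystemWatcher, which exposes separate Created/Deleted/Changed events much like approach #2; the folder path is a placeholder:

using System;
using System.IO;

var watcher = new FileSystemWatcher(@"C:\some\folder");
watcher.Created += (sender, e) => Console.WriteLine($"Added: {e.Name}");
watcher.Deleted += (sender, e) => Console.WriteLine($"Removed: {e.Name}");
watcher.Changed += (sender, e) => Console.WriteLine($"Changed: {e.Name}");
watcher.EnableRaisingEvents = true; // start watching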
Having said that, I personally think that #3 is also a very valid approach. Don't be afraid of writing long code (within reasonable limits of course) if the result is more readable and maintainable than it would be with shorter code.
Personally I would go with #2.
In #1 you just expose an entire collection of objects, allowing everyone to do anything with them.
#3 seems less self-explanatory to me. Though I like to keep things simple when coding, so I may be biased.
If watchers are going to be shorter-lived than the thing being watched, I would avoid events. The pattern exemplified by ObservableCollection, where the collection gives a subscribed observer an IDisposable object that can be used to unsubscribe, is a much better approach. If you use such a pattern, you can have your class hold a weak reference (probably a "long" weak reference) to the subscription object, which in turn holds a strong reference (probably a delegate) to the subscriber and to the weak reference that identifies it. Abandoned subscriptions will thus get cleaned up by the garbage collector; it is the subscriber's duty to ensure that a strongly rooted reference to the subscription object exists.
Beyond the fact that abandoned subscriptions can get cleaned up, another advantage of the "disposable subscription-object" approach is that unsubscription can easily be made lock-free and thread-safe, and run in constant time: to dispose a subscription, simply null out the delegate contained within it. If each attempt to add a subscription causes the subscription manager to inspect a couple of subscriptions to ensure they are still valid, the total number of subscriptions in existence will never grow to more than twice the number that were valid as of the last garbage collection.

@Transactional equivalent in C# and concurrency for DDD application services

I'm reading Vaughn Vernon's book on Implementing Domain-Driven Design. I have also been going through the book's code, C# version, from his GitHub here.
The Java version of the book has @Transactional annotations, which I believe come from the Spring framework.
public class ProductBacklogItemService
{
    @Transactional
    public void assignTeamMemberToTask(
        String aTenantId,
        String aBacklogItemId,
        String aTaskId,
        String aTeamMemberId)
    {
        BacklogItem backlogItem =
            backlogItemRepository.backlogItemOfId(
                new TenantId(aTenantId),
                new BacklogItemId(aBacklogItemId));

        Team ofTeam =
            teamRepository.teamOfId(
                backlogItem.tenantId(),
                backlogItem.teamId());

        backlogItem.assignTeamMemberToTask(
            new TeamMemberId(aTeamMemberId),
            ofTeam,
            new TaskId(aTaskId));
    }
}
What would be the equivalent manual implementation in C#? I'm thinking something along the lines of:
public class ProductBacklogItemService
{
    private static object lockForAssignTeamMemberToTask = new object();
    private static object lockForOtherAppService = new object();

    public void AssignTeamMemberToTask(string aTenantId,
                                       string aBacklogItemId,
                                       string aTaskId,
                                       string aTeamMemberId)
    {
        lock (lockForAssignTeamMemberToTask)
        {
            // application code as before
        }
    }

    public void OtherAppsService(string aTenantId)
    {
        lock (lockForOtherAppService)
        {
            // some other code
        }
    }
}
This leaves me with the following questions:
Do we lock by application service, or by repository? That is, should we not be doing backlogItemRepository.lock()?
When we are reading from multiple repositories as part of our application service, how do we protect dependencies between repositories during transactions (where aggregate roots reference other aggregate roots by identity)? Do we need interconnected locks between repositories?
Are there any DDD infrastructure frameworks that handle any of this locking?
Edit
Two useful answers came in suggesting transactions. As I haven't selected the persistence layer I'll be using, I'm working with in-memory repositories; these are pretty raw and I wrote them myself (they don't have transaction support, as I don't know how to add it!).
I will design the system so that I do not need to commit atomic changes to more than one aggregate root at the same time. I will, however, need to read consistently across a number of repositories (i.e. if a BacklogItemId is referenced from multiple other aggregates, then we need to protect against race conditions should the BacklogItem be deleted).
So, can I get away with just using locks, or do I need to look at adding TransactionScope support to my in-memory repositories?
TL;DR version
You need to wrap your code in a System.Transactions.TransactionScope. Be careful about multi-threading btw.
Full version
So the point of aggregates is that they define a consistency boundary. That means any change should leave the state of the aggregate still honouring its invariants. That's not necessarily the same as a transaction. Real transactions are a cross-cutting implementation detail, so they should probably be implemented as such.
A warning about locking
Don't do locking. Try to forget any notion you have of implementing pessimistic locking. To build scalable systems you have no real choice. The very fact that data takes time to be requested and travel from disk to your screen means you have eventual consistency, so you should build for that. You can't really protect against race conditions as such; you just need to account for the fact that they can happen and be able to warn the "losing" user that their command failed. Often you can detect these issues later on (seconds, minutes, hours, days, whatever your domain experts tell you the SLA is) and tell users so they can do something about it.
For example, imagine two payroll clerks paying an employee's expenses at the same time through the bank. They would find out later, when the books were being balanced, and take some compensating action to rectify the situation. You wouldn't want to scale your payroll department down to a single person working at a time in order to avoid these (rare) issues.
My implementation
Personally I use the Command Processor style, so all my application services are implemented as ICommandHandler<TCommand>. The CommandProcessor itself is the thing looking up the correct handler and asking it to handle the command. This means that the CommandProcessor.Process(command) method can have its entire contents executed in a System.Transactions.TransactionScope.
Example:
public class CommandProcessor : ICommandProcessor
{
    public void Process(Command command)
    {
        using (var transaction = new TransactionScope())
        {
            var handler = LookupHandler(command);
            handler.Handle(command);
            transaction.Complete();
        }
    }
}
You've not gone for this approach, so to make your transactions a cross-cutting concern you're going to need to move them a level higher in the stack. This is highly dependent on the tech you're using (ASP.NET, WCF, etc.), so if you add a bit more detail there might be an obvious place to put this stuff.
Locking wouldn't allow any concurrency on those code paths.
I think you're looking for a transaction scope instead.
I don't know which persistence layer you are going to use, but the standard ones like ADO.NET, Entity Framework, etc. support the TransactionScope semantics:
using (var tr = new TransactionScope())
{
    doStuff();
    tr.Complete();
}
The transaction is committed if tr.Complete() is called. In any other case it is rolled back.
Typically, the aggregate is a unit of transactional consistency. If you need the transaction to spread across multiple aggregates, then you should probably reconsider your model.
lock (lockForAssignTeamMemberToTask)
{
    // application code as before
}
This takes care of synchronization. However, you also need to revert the changes in case of an exception, so the pattern will be something like:
lock (lockForAssignTeamMemberToTask)
{
    try
    {
        // application code as before
    }
    catch (Exception e)
    {
        // rollback/restore previous values
    }
}

Name for this pattern? (Answer: lazy initialization with double-checked locking)

Consider the following code:
public class Foo
{
    private static object _lock = new object();

    public void NameDoesNotMatter()
    {
        if (SomeDataDoesNotExist())
        {
            lock (_lock)
            {
                if (SomeDataDoesNotExist())
                {
                    CreateSomeData();
                }
                else
                {
                    // someone else also noticed the lack of data. We
                    // both contended for the lock. The other guy won
                    // and created the data, so we no longer need to.
                    // But once he got out of the lock, we got in.
                    // There's nothing left to do.
                }
            }
        }
    }

    private bool SomeDataDoesNotExist()
    {
        // Note - this method must be thread-safe.
        throw new NotImplementedException();
    }

    private bool CreateSomeData()
    {
        // Note - This shouldn't need to be thread-safe
        throw new NotImplementedException();
    }
}
First, there are some assumptions I need to state:
There is a good reason I couldn't just do this once at app startup. Maybe the data wasn't available yet, etc.
Foo may be instantiated and used concurrently from two or more threads. I want one of them to end up creating the data (but not both of them); then I'll allow both to access that same data (ignore the thread safety of accessing the data).
The cost of SomeDataDoesNotExist() is not huge.
Now, this doesn't necessarily have to be confined to some data creation situation, but this was an example I could think of.
The part that I'm especially interested in identifying as a pattern is the check -> lock -> check. I've had to explain this pattern to developers on a few occasions who didn't get the algorithm at first glance but could then appreciate it.
Anyway, other people must do something similar. Is this a standardized pattern? What's it called?
Though I can see how you might think this looks like double-checked locking, what it actually looks like is dangerously broken and incorrect double-checked locking. Without an actual implementation of SomeDataDoesNotExist and CreateSomeData to critique we have no guarantee whatsoever that this thing is actually threadsafe on every processor.
For an example of an analysis of how double-checked locking can go wrong, check out this broken and incorrect version of double-checked locking:
C# manual lock/unlock
My advice: don't use any low-lock technique without a compelling reason and a code review from an expert on the memory model; you'll probably get it wrong. Most people do.
In particular, don't use double-checked locking unless you can describe exactly what memory access reorderings the processors can do on your behalf and provide a convincing argument that your solution is correct given any possible memory access reordering. The moment you step away even slightly from a known-to-be-correct implementation, you need to start the analysis over from scratch. You can't assume that just because one implementation of double-checked locking is correct, that they all are; almost none of them are correct.
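For completeness, a minimal sketch of the known-to-be-correct route in C#: the BCL's Lazy<T>, whose ExecutionAndPublication mode (the default) does the locking for you. SomeData stands in for whatever the original CreateSomeData produces:

using System;
using System.Threading;

public class SomeData { } // hypothetical payload type

public class Foo
{
    // The factory runs exactly once, no matter how many threads race;
    // later readers get the cached instance without taking a lock.
    private static readonly Lazy<SomeData> _data =
        new Lazy<SomeData>(CreateSomeData, LazyThreadSafetyMode.ExecutionAndPublication);

    public void NameDoesNotMatter()
    {
        var data = _data.Value; // triggers creation on first access only
    }

    private static SomeData CreateSomeData() => new SomeData();
}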
Lazy initialization with double-checked locking?
The part that I'm especially interested in identifying as a pattern is the check -> lock -> check.
That is called double-checked locking.
Beware that in older Java versions (before Java 5) it is not safe because of how Java's memory model was defined. In Java 5 and newer changes were made to the specification of Java's memory model so that it is now safe.
The only name that comes to mind for this kind of thing is "faulting". This name is used in iOS's Core Data framework to similar effect.
Basically, your method NameDoesNotMatter is a fault, and whenever someone invokes it, the object gets populated or initialized.
See http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/Articles/cdFaultingUniquing.html for more details on how this design pattern is used.
