I have a payment system written in ASP.NET Core with Entity Framework and SQL Server.
I have to deal with many security-sensitive situations where I need to prevent two or more actions from being performed by the same user at the same time. For example:
Every time a payment transaction happens:
Check user balance and block if there is not enough credit.
Perform the payment and reduce the user balance.
Now if a user fires two or more requests to create payments, some of the requests will pass the available-credit validation.
Since I have many scenarios similar to this one, I thought about a general solution that can solve all of them: adding a middleware which will check each request and do the following:
If the request is a GET request, it will pass - this will allow concurrent GET requests.
If the request is a POST/PUT/DELETE request, it will check whether there is already a POST/PUT/DELETE in flight for this specific user (assuming the user is logged in). If there is, a Bad Request response will be returned to the client.
In order to do this correctly and support more than one server, I understand that I need to do this at the database level. I know that I can lock a specific row in Oracle, and I am thinking about locking the user row at the beginning of each UPDATE/CREATE/DELETE and releasing it at the end. What is the best approach to do this with EF? Is there a better solution?
I'm using the Unit of Work pattern, where each request has its own scope.
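For clarity, here is a minimal sketch of the kind of row locking I have in mind (assuming EF Core; the UPDLOCK/ROWLOCK hints, FromSqlInterpolated usage, and the table/column names are illustrative, not my actual code):

// Sketch only: pessimistic per-user locking with EF Core + SQL Server.
using (var tx = dbContext.Database.BeginTransaction())
{
    // Locks the user's row until the transaction completes, so concurrent
    // requests for the same user block here instead of racing the balance check.
    var user = await dbContext.Users
        .FromSqlInterpolated($"SELECT * FROM Users WITH (UPDLOCK, ROWLOCK) WHERE Id = {userId}")
        .SingleAsync();

    if (user.Balance < amount)
        throw new InvalidOperationException("Insufficient credit.");

    user.Balance -= amount;
    await dbContext.SaveChangesAsync();
    tx.Commit();
}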
I'd vote against using row locks as a mechanism of request synchronization:
even though Oracle is known for not escalating row locks, there are transaction-level and other optimizations that may decide to escalate, which can lead to reduced scalability and deadlocks.
if two users want to transfer money to each other, there's a chance they'll be deadlocked. (If you don't have such a feature right now, you may have it in the future, so it's better to create an architecture that won't be invalidated that easily.)
Now, a system that returns "bad request" just because another request from the same user happens to take longer is, of course, fail-safe, but its reliability (a measure of running without failures) suffers. I'd expect any payment system in the world to be both fail-safe and reliable.
Is there a better solution?
An architecture based on CQRS and shared-nothing approaches:
ASP.NET server ("web tier"):
directly performs read (GET) operations, as it does now
submits write (POST/PUT/DELETE) operations into a queue, and returns HTTP 200 immediately.
Application tier: a cluster of (micro)services that fetch and perform the write requests, in a shared-nothing manner:
at any moment, requests from any particular user are processed by at most one thread in the whole system (across all processes and machines).
the shared-nothing approach ensures that you never have to concurrently process requests from the same user.
Implementing shared-nothing
The shared-nothing architecture can be implemented by partitioning (AKA sharding). For example:
you have N processing threads running (inside some processes) on a cluster of M machines
each machine is assigned a unique role to run a specific range of threads out of these N
each request from a user is always dispatched to the same specific thread by calculating: thread_index = HASH(User) % N, or if User ID is an integer: thread_index = USER_ID % N.
how the dispatched requests are passed to the processing threads depends on the chosen queue. For example, web tier can submit requests to N different topics, or it can directly push the requests to a distributed actor (see Akka.Net), or you can just use database table as a queue, and make each thread fetch the requests that belong to it.
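As a rough illustration, the dispatch calculation could look like this in C# (IWriteQueue, WriteRequest, and the partition-per-topic setup are hypothetical, a sketch only):

// Sketch: route each user's writes to a fixed partition so that at most
// one consumer thread in the cluster ever processes requests for that user.
public record WriteRequest(int UserId, string Payload);

public interface IWriteQueue // hypothetical queue abstraction (e.g., one topic per partition)
{
    Task EnqueueAsync(int partition, WriteRequest request);
}

public class PartitionedDispatcher
{
    private readonly int _partitionCount; // N processing threads across the cluster
    private readonly IWriteQueue _queue;

    public PartitionedDispatcher(int partitionCount, IWriteQueue queue)
    {
        _partitionCount = partitionCount;
        _queue = queue;
    }

    public Task DispatchAsync(int userId, WriteRequest request)
    {
        // The same user always maps to the same partition, hence the same thread.
        int partition = userId % _partitionCount;
        return _queue.EnqueueAsync(partition, request);
    }
}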
In addition, you'll need an orchestrator to ensure that each of the M machines is up and running. If a machine goes down, the orchestrator spins up another machine with the same role. For example, if you dockerize your services, you can use Kubernetes with StatefulSet.
I stumbled recently across the same thoughts. For some reason some of my users were able to post a form twice, which resulted in duplicated data.
Even though this is an old question, I hope it helps someone.
Like you mentioned, one approach is to use the database for the locking, but like you, I couldn't find a solid implementation of that. I'm also assuming that you have a monolithic application; @felix-b already mentioned a very good solution for the distributed case.
I went the way of making the threads that would normally run concurrently run in sequence instead. This solution may have disadvantages, but I could not find any. Please let me know your thoughts.
So I solved it with a dictionary containing the user ID and a SemaphoreSlim.
Then I simply marked the controllers with an ActionFilter and throttled the execution of the controller methods to one at a time per user.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class AtomicOperationPerUserAttribute : ActionFilterAttribute
{
    private readonly ILogger<AtomicOperationPerUserAttribute> _logger;
    private readonly IConcurrencyService _concurrencyService;

    public AtomicOperationPerUserAttribute(ILogger<AtomicOperationPerUserAttribute> logger, IConcurrencyService concurrencyService)
    {
        _logger = logger;
        _concurrencyService = concurrencyService;
    }

    public override void OnActionExecuting(ActionExecutingContext context)
    {
        int userId = YourWayToGetTheUserId; // mine was context.HttpContext.AppSpecificExtensionMethod()
        _logger.LogInformation($"User {userId} claims semaphore with RequestId {context.HttpContext.TraceIdentifier}");
        var semaphore = _concurrencyService.SemaphorePerUser(userId);
        semaphore.Wait();
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        int userId = YourWayToGetTheUserId; // mine was context.HttpContext.AppSpecificExtensionMethod()
        var semaphore = _concurrencyService.SemaphorePerUser(userId);
        _logger.LogInformation($"User {userId} releases semaphore with RequestId {context.HttpContext.TraceIdentifier}");
        semaphore.Release();
    }
}
The "ConcurrentService" is a Singleton registered in the Startup.cs.
public interface IConcurrencyService
{
    SemaphoreSlim SemaphorePerUser(int userId);
}

public class ConcurrencyService : IConcurrencyService
{
    public static ConcurrentDictionary<int, SemaphoreSlim> Semaphores = new ConcurrentDictionary<int, SemaphoreSlim>();

    public SemaphoreSlim SemaphorePerUser(int userId)
    {
        // Use the factory overload so a new SemaphoreSlim is only allocated
        // when the user has no entry yet.
        return Semaphores.GetOrAdd(userId, _ => new SemaphoreSlim(1, 1));
    }
}
Since in my case I needed dependencies in the ActionFilter, I mark the controller actions with [ServiceFilter(typeof(AtomicOperationPerUserAttribute))].
Accordingly, I registered the services in Startup.cs:
services.AddScoped<AtomicOperationPerUserAttribute>();
services.AddSingleton<IConcurrencyService, ConcurrencyService>();
I'm trying to implement DDD in my small project, but I'm not able to understand how to raise a domain event in the case below.
Account Domain
public class Account : BaseEntity
{
    public string PhoneNumber { get; set; }
    public int OTP { get; set; }

    public Account()
    {
    }

    public Account(string phoneNumber, short otp)
    {
        this.PhoneNumber = phoneNumber;
        this.OTP = otp;
        CreatedDate = DateTime.Now;
        RowKey = Guid.NewGuid().ToString();
        PartitionKey = phoneNumber;
    }
}
Account Service
public async Task<bool> GenerateOTP(string phoneNumber)
{
    if (phoneNumber.Length != 10)
        throw new ArgumentException(ApplicationConstraint.InvalidNumber);

    var otp = Convert.ToInt16(new Random().Next(1000, 9999));
    var account = new Account(phoneNumber, otp);
    await this.accountRepository.AddEntity(account);
    return true;
}
Account Repository (Azure Table Storage is my database)
public virtual async Task AddEntity(TEntity entity)
{
    TableOperation insertOperation = TableOperation.Insert(entity);
    await table.ExecuteAsync(insertOperation);
}
I want to raise a domain event only when the data has been saved to the database. As a workaround, I'm calling the messaging service from the account service.
Given the limited information provided, one option would be to create an AccountCreated event (or an EntityCreated event if this is a cross-cutting concern) and publish it through some bus where consumers can asynchronously receive it and do any subsequent processing needed.
The event need not use domain entities, and it can contain the information/data necessary to do any subsequent processing without the need to access a shared db (and as such adhering to DDD & microservice guidelines).
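As a minimal sketch (the IEventBus abstraction and the event's property set are assumptions, not code from the question):

// Sketch only: a domain event carrying plain data, published after the save.
public class AccountCreated
{
    public string PhoneNumber { get; set; }
    public string RowKey { get; set; }
    public DateTime CreatedDate { get; set; }
}

public interface IEventBus // hypothetical abstraction over your message bus of choice
{
    Task PublishAsync<TEvent>(TEvent @event);
}

// In AccountService.GenerateOTP, after the repository call succeeds:
await this.accountRepository.AddEntity(account);
await this.eventBus.PublishAsync(new AccountCreated
{
    PhoneNumber = account.PhoneNumber,
    RowKey = account.RowKey,
    CreatedDate = account.CreatedDate
});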
----Edit----
In the above I assumed that this is an established system and Azure Storage isn't something that can change. Publishing an event and handling it is pretty simple, but there are some things you need to be aware of. In general, you have three options here:
1. Publishing right after saving. This isn't wrong; it's the simple way to do it, and (if you adopt an event-first methodology) you can do it in a generic way across your entities with minimal work. However, you need to be conscious of how to deal with errors. Specifically, if you store the entity first, before publishing the event, and the process then crashes for whatever reason, the event may be missed, so later workflows will not kick off. If you do the reverse (publish, then store), you run the risk of double-publishing the event. In this case you have two options:
1.1. If you store-then-publish: just accept the (really rare) possibility of not publishing an event. This is something you need to discuss with the business, and you can mitigate the severity by logging the event before trying to save the entity.
1.2. If you publish-then-store (you'll need to do this if the cost of fixing any issues ad hoc is too great): you can fix the problem by having your consumers check the ID of the incoming message against the ones they have already processed and reject duplicates, OR make the processing idempotent (if possible), meaning that doing it twice isn't a problem.
2. Using event sourcing. This isn't difficult in my opinion, but it's obviously an overhead if this is a simple application, and while not difficult, it does need a significant amount of reading up if you're not familiar with it. If this is a non-trivial application, event sourcing can help a lot, because observers can just observe the events in the store and respond to them (so there is no need to explicitly publish the changes).
3. Appending the event to a separate table within the same transaction in which you're storing the entity, and using the outbox pattern (publish those events from a separate service, marking them as published once they've been sent); see the sketch below. Honestly, the pattern as usually shown is a bit simplistic, and there are a lot of small but tricky complexities, so prefer an existing implementation if you can find one.
Honestly, if you can get away with 1.1, do that. It's simple, and problems appear only very rarely. Just log the operation before you do it so that you can replay it manually in the rare case of issues.
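For option 3, a very reduced sketch of the outbox idea (the table, DbContext, and EF Core usage are assumptions; with Azure Table Storage you'd approximate the same atomicity with a batch operation within one partition):

// Sketch only: the entity and its outbox row are saved atomically,
// and a separate publisher loop delivers the events later.
// JsonSerializer is System.Text.Json.
public class OutboxMessage
{
    public Guid Id { get; set; }
    public string Type { get; set; }
    public string Payload { get; set; }        // serialized event
    public DateTime? PublishedAt { get; set; } // null until delivered
}

public async Task SaveAccountWithEvent(MyDbContext db, Account account)
{
    db.Accounts.Add(account);
    db.OutboxMessages.Add(new OutboxMessage
    {
        Id = Guid.NewGuid(),
        Type = nameof(AccountCreated),
        Payload = JsonSerializer.Serialize(new AccountCreated
        {
            PhoneNumber = account.PhoneNumber,
            RowKey = account.RowKey,
            CreatedDate = account.CreatedDate
        })
    });

    // SaveChanges wraps both inserts in one transaction:
    // either the entity and its event are stored, or neither is.
    await db.SaveChangesAsync();
}

// A separate service then reads rows where PublishedAt == null, publishes
// them to the bus, and sets PublishedAt, retrying on failure
// (at-least-once delivery, so consumers should be idempotent).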
I am currently developing an application in ASP.NET Core 2.0.
The following is the action inside my controller that gets executed when the user clicks the submit button.
The following is the function that gets called by the action.
As a measure to prevent duplicates inside the database I have the function IsSignedInJob(). The function works.
My Problem:
Sometimes, when the internet connection is slow or the server is not responding right away, it is possible to click the submit button more than once. When the connection is re-established, the browser (in my case Chrome) sends multiple HTTP POST requests to the server. In that case the functions (the same function from different instances) are executed so close together in time that, before one instance's change to the database is committed, the other instances make the same change without being aware of each other.
Is there a way to solve this problem on the server side without being too "hacky"?
Thank you
As suggested in the comments (and this is my preferred approach), you can simply disable the button once it is clicked the first time.
Another solution would be to add something to a dictionary indicating that the job has already been registered, but this will probably have to use a lock, as you need to make sure that only one thread can read and write at a time; a concurrent collection won't do the trick, since the problem is not whether a single operation is thread-safe. The IsSignedInJob method you have could do this behind the scenes, but I wouldn't check the database for this, as the latency could be too high - adding/removing a key in an in-memory dictionary should be a lot faster.
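A minimal sketch of that idea (the jobKey naming and the guard methods are made up, and this protects a single process only):

// Sketch only: an in-memory guard against double submission of the same job.
private static readonly object _gate = new object();
private static readonly HashSet<string> _inFlightJobs = new HashSet<string>();

public bool TryBeginJob(string jobKey)
{
    lock (_gate) // only one thread reads/writes the set at a time
    {
        return _inFlightJobs.Add(jobKey); // false if the job was already registered
    }
}

public void EndJob(string jobKey)
{
    lock (_gate)
    {
        _inFlightJobs.Remove(jobKey);
    }
}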
Icarus's answer is great for the user experience and should be implemented. If you also need to make sure the request is only handled once on the server side, you have a few options. Here is one using the ReaderWriterLockSlim class.
// Static so the lock is shared across requests: MVC creates a new controller instance per request.
private static readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
private static readonly TimeSpan timeout = TimeSpan.FromSeconds(1); // pick a sensible timeout

[HttpPost]
public async Task SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout)) // proceed only if the lock was acquired in time
    {
        try
        {
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
This will prevent overlapping DoWork code. It does not prevent DoWork from finishing completely and then another post executing that runs DoWork again.
If you want to prevent the post from happening twice, implement the AntiForgeryToken and store the token in session. Something like this (I haven't used session in forever, so it may not compile, but you should get the idea):
private const string SomeMethodTokenName = "SomeMethodToken";

[HttpPost]
public async Task SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout))
    {
        try
        {
            var token = Request.Form["__RequestVerificationToken"].ToString();
            var lastToken = Session[SomeMethodTokenName] as string;
            if (token == lastToken) return; // this token was already handled, skip the work

            Session[SomeMethodTokenName] = token;
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
Not exactly perfect: two different requests could still alternate over and over, so you could store in session the list of all used tokens for this session. There is no perfect way, because even then someone could technically cause an OutOfMemoryException if they wanted to (too many tokens stored in session), but you get the idea.
Try not to use asynchronous processing. Remove Task, await, and async.
Short question
How can I lock my entity so that only one operation by only one user can be performed on it at a time in an MVC project?
Long question
I have an MVC project where I want my action methods to be marked [SessionState(SessionStateBehavior.ReadOnly)]. But when doing this, users can execute other action methods even before a long-running action method has completed. As I have a lot of calculations, and the action methods have to be executed in a predefined order, executing another action method before one ends creates lots of problems. To give an example: I have a main entity called Report, and I have to somehow ensure that one report undergoes only one operation by only one user at a time. So I have to lock my Report. Even if I do not use [SessionState(SessionStateBehavior.ReadOnly)], I have to lock the report so that multiple users do not edit the same report at a time, and for other specific reasons. Currently I am writing this information to the database, roughly something like:
ReportId
LockedUserId
IsInProcess
I have to set IsInProcess to true every time before an operation begins and reset it to false after the operation completes. As I have lots of action methods, I created an ActionFilter, something like below:
public class ManageReportLockAttribute : FilterAttribute, IActionFilter
{
    public ManageReportLockAttribute()
    {
    }

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        ...
        ReportLockInfo lockInfo = GetFromDatabase(reportId);
        if (lockInfo.IsInProcess)
            RedirectToInformationView();

        lockInfo.IsInProcess = true;
        SaveToDatabase(lockInfo);
        ...
    }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
        ...
        ReportLockInfo lockInfo = GetFromDatabase(reportId);
        lockInfo.IsInProcess = false;
        SaveToDatabase(lockInfo);
        ...
    }
}
It works, for the most part, but it has some strange problems (see this question for more info).
My question is: "How can I achieve the same functionality (locking the report) in a different, more acceptable way?"
I feel like it is something similar to locking when using multithreading, but it is not exactly the same, IMO.
Sorry for the long, broad and awkward question, but I want a direction to follow. Thanks in advance.
One reason why OnActionExecuted is not called even though OnActionExecuting runs as expected is that an unhandled exception occurs in OnActionExecuting. Especially when dealing with the database, there are various reasons that could lead to an exception, e.g.:
User1 starts the process and locks the entity.
User2 also wants to start the process before User1 has saved the change. So the check of IsInProcess does not lead to the redirection and User2 also wants to save the lock. In this case, a concurrency violation should occur because User1 has saved the entity in the meantime.
To illustrate the process over time (C is the check whether IsInProcess is set, S is SaveChanges): first a good case:
User1  C--S
User2        C--S    (check is done after save, no problem)
Now a bad case:
User1  C----S
User2    C----S      (User2's check takes place after User1's check, but before User1's SaveChanges becomes effective ==> concurrency violation)
As the example shows, it is critical to make sure that only one user can place the lock. There are several ways to handle this. In all cases make sure that there are as few reasons for exceptions in OnActionExecuting as possible. Handle and log the exceptions.
Please note that all synchronisation methods will have a negative impact on the performance of your application. So if you haven't already thought about whether you could avoid having to lock the report by restructuring your actions or the data model, this would be the first thing to do.
Easy approach: thread synchronisation
An easy approach is to use thread synchronisation. This will only work if the application runs in a single process, not in a web farm or the cloud. You need to decide whether you will be able to change the application if it is installed in a farm at a later point in time. This sample shows an easy approach that uses a static object for locking:
public class ManageReportLockAttribute : FilterAttribute, IActionFilter
{
    private static readonly object lockObj = new object();

    // ...

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        ...
        ReportLockInfo lockInfo = GetFromDatabase(reportId);
        if (lockInfo.IsInProcess)
            RedirectToInformationView();

        lock (lockObj)
        {
            // read anew just in case the lock was set in the meantime;
            // a new context should be used
            lockInfo = GetFromDatabase(reportId);
            if (lockInfo.IsInProcess)
                RedirectToInformationView();

            lockInfo.IsInProcess = true;
            SaveToDatabase(lockInfo);
            ...
        }
    }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
        ...
        lock (lockObj)
        {
            ReportLockInfo lockInfo = GetFromDatabase(reportId);
            if (lockInfo.IsInProcess) // check whether the lock was released in the meantime
            {
                lockInfo.IsInProcess = false;
                SaveToDatabase(lockInfo);
            }
            ...
        }
    }
}
For details on using lock, see this link. If you need more control, have a look at the overview of thread synchronization with C#. A named mutex is an alternative that provides locking in a more fine-grained manner (one mutex per name).
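A minimal sketch of the named-mutex variant (the mutex name and timeout are assumptions; note that it only spans processes on one machine, not a farm):

// Sketch only: one named mutex per report, shared across processes on the same machine.
using (var mutex = new Mutex(false, $"Global\\ReportLock_{reportId}"))
{
    if (mutex.WaitOne(TimeSpan.FromSeconds(5))) // wait up to 5 seconds for the lock
    {
        try
        {
            // perform the operation on the report
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}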
If you want to lock on reportId instead of a static object, you need to use a lock object that is the same for the same reportId. A dictionary can store the lock objects:
private static readonly IDictionary<int, object> lockObjectsByReportId = new Dictionary<int, object>();

private static object GetLockObjectByReportId(int reportId)
{
    object lockObjByReportId;
    if (lockObjectsByReportId.TryGetValue(reportId, out lockObjByReportId))
        return lockObjByReportId;

    lock (lockObj) // use the global lock for this short operation
    {
        if (lockObjectsByReportId.TryGetValue(reportId, out lockObjByReportId))
            return lockObjByReportId;

        lockObjByReportId = new object();
        lockObjectsByReportId.Add(reportId, lockObjByReportId);
        return lockObjByReportId;
    }
}
Instead of using lockObj in OnActionExecuting and OnActionExecuted, you'd use the function:
// ...
lock(GetLockObjectByReportId(reportId))
{
// ...
}
Database approach: Transactions and isolation levels
Another way to handle this is to use database transactions and isolation levels. This approach will also work in a multi-server environment. In this case, you would not use Entity Framework for this part of the database access but move the code to a stored procedure that runs on the database server. By running the stored procedure in a transaction and picking the right isolation level, you can prevent one user from reading the data while another one is changing them.
This link shows an overview of isolation levels for SQL Server.
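For illustration, a sketch of what the call site could look like from .NET (the stored procedure name, its parameters, and the error behaviour are assumptions):

// Sketch only: run an assumed stored procedure inside a serializable
// transaction so that concurrent lock attempts for the same report
// conflict instead of interleaving.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction(IsolationLevel.Serializable))
    {
        using (var command = new SqlCommand("dbo.AcquireReportLock", connection, transaction))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@ReportId", reportId);
            command.Parameters.AddWithValue("@UserId", userId);
            command.ExecuteNonQuery(); // assumed to raise an error if the report is already locked
        }

        transaction.Commit();
    }
}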
I have read many posts saying that multithreaded applications must use a separate session per thread. Perhaps I don't understand how the locking works, but if I put a lock on the session in all repository methods, would that not make a single static session thread-safe?
like:
public void SaveOrUpdate(T instance)
{
    if (instance == null) return;

    lock (_session)
    {
        using (ITransaction transaction = _session.BeginTransaction())
        {
            lock (instance)
            {
                _session.SaveOrUpdate(instance);
                transaction.Commit();
            }
        }
    }
}
EDIT:
Please consider the context/type of applications I'm writing:
Not multi-user, no typical user interaction, but a self-running robot reacting to remote events like financial data and order updates, performing tasks and saves based on them. Intermittently this can create bursts of up to 10 saves per second. Typically it's the same object graph that needs to be saved every time. Also, on startup, the program loads the full database into an entity object graph. So it basically just reads once, then performs SaveOrUpdates as it runs.
Given that the application is typically editing the same object graph, it would make more sense to have a single thread dedicated to applying these edits to the object graph and then saving them to the database, or perhaps a pool of threads servicing a common queue of edits, where each thread has its own (dedicated) session that it does not need to lock. Look up producer/consumer queues (to start, look here).
Something like this:
[Producer Threads]           [Database Servicer Thread]
Edit Event -\
Edit Event ---> Queue -----> Dequeue and Apply to Session -> Database
Edit Event -/
I'd imagine that a BlockingCollection<Action<Session>> would be a good starting point for such an implementation.
Here's a rough example (note this is obviously untested):
// Assuming you have a work queue defined as
public static BlockingCollection<Action<Session>> myWorkQueue = new BlockingCollection<Action<Session>>();

// and your event args look something like this
public class MyObjectUpdatedEventArgs : EventArgs
{
    public MyObject MyObject { get; set; }
}

// And one of your event handlers
public void MyObjectWasChangedEventHandler(object sender, MyObjectUpdatedEventArgs e)
{
    myWorkQueue.Add(s => SaveOrUpdate(e.MyObject));
}

// Then a thread in a constant loop processing these items could work:
public void ProcessWorkQueue()
{
    var mySession = mySessionFactory.CreateSession();
    while (true)
    {
        var nextWork = myWorkQueue.Take();
        nextWork(mySession);
    }
}

// And to run the above:
var dbUpdateThread = new Thread(ProcessWorkQueue);
dbUpdateThread.IsBackground = true;
dbUpdateThread.Start();
At least two disadvantages are:
You reduce performance significantly. Having this on a busy web server is like having a crowd outside a cinema but letting people in through a person-wide entrance.
A session has an internal identity map (cache). A single session for the whole application means that memory consumption grows as users access more and more data from the database; ultimately you can even end up with the whole database in memory, which simply would not work. This then requires calling a method to drop the first-level cache from time to time. However, there is no good moment to drop the cache: you can't drop it at the beginning of a request, because other concurrent sessions could suffer from it.
I am sure people will add other disadvantages.
This link http://msdn.microsoft.com/en-us/library/aa772153(VS.85).aspx says:
You can register up to five notification requests on a single LDAP connection. You must have a dedicated thread that waits for the notifications and processes them quickly. When you call the ldap_search_ext function to register a notification request, the function returns a message identifier that identifies that request. You then use the ldap_result function to wait for change notifications. When a change occurs, the server sends you an LDAP message that contains the message identifier for the notification request that generated the notification. This causes the ldap_result function to return with search results that identify the object that changed.
I cannot find a similar behavior looking through the .NET documentation. If anyone knows how to do this in C# I'd be very grateful to know. I'm looking to see when attributes change on all the users in the system so I can perform custom actions depending on what changed.
I've looked through stackoverflow and other sources with no luck.
Thanks.
I'm not sure it does what you need, but have a look at http://dunnry.com/blog/ImplementingChangeNotificationsInNET.aspx
Edit: Added text and code from the article:
There are three ways of figuring out things that have changed in Active Directory (or ADAM). These have been documented for some time over at MSDN in the aptly titled "Overview of Change Tracking Techniques". In summary:
1. Polling for changes using uSNChanged. This technique checks the 'highestCommittedUSN' value to start and then performs searches for 'uSNChanged' values that are higher subsequently. The 'uSNChanged' attribute is not replicated between domain controllers, so you must go back to the same domain controller each time for consistency. Essentially, you perform a search looking for the highest 'uSNChanged' value + 1 and then read in the results, tracking them in any way you wish.
Benefits: This is the most compatible way. All languages and all versions of .NET support it, since it is a simple search.
Disadvantages: There is a lot here for the developer to take care of. You get the entire object back, and you must determine what has changed on the object (and whether you care about that change). Dealing with deleted objects is a pain. This is a polling technique, so it is only as real-time as how often you query; this can be a good thing depending on the application. Note that intermediate values are not tracked here either.
2. Polling for changes using the DirSync control. This technique uses the ADS_SEARCHPREF_DIRSYNC option in ADSI and the LDAP_SERVER_DIRSYNC_OID control under the covers. Simply make an initial search, store the cookie, and then later search again and send the cookie. It will return only the objects that have changed.
Benefits: This is an easy model to follow. Both System.DirectoryServices and System.DirectoryServices.Protocols support this option. Filtering can reduce what you need to bother with; as an example, if my initial search is for all users "(objectClass=user)", I can subsequently filter on polling with "(sn=dunn)" and only get back the combination of both filters, instead of having to deal with everything from the initial filter. The Windows 2003+ option removes the administrative limitation for using this option (object security). The Windows 2003+ option will also give you the ability to return only the incremental values that have changed in large multi-valued attributes; this is a really nice feature. It also deals well with deleted objects.
Disadvantages: This is a .NET 2.0+ only option; users of .NET 1.1 will need to use uSNChanged tracking. Scripting languages cannot use this method. You can only scope the search to a partition; if you want to track only a particular OU or object, you must sort out those results yourself later. Using this with non-Windows 2003 mode domains comes with the restriction that you must have replication-get-changes permissions (by default, admin only). This is a polling technique, and it does not track intermediate values either: if an object you want to track changes multiple times between the searches, you will only get the last change. This can be an advantage depending on the application.
3. Change notifications in Active Directory. This technique registers a search on a separate thread that will receive notifications when any object that matches the filter changes. You can register up to 5 notifications per async connection.
Benefits: Instant notification; the other techniques require polling. Because this is a notification, you will get all changes, even the intermediate ones that would have been lost in the other two techniques.
Disadvantages: Relatively resource intensive. You don't want to do a whole ton of these, as it could cause scalability issues with your controller. This only tells you that the object has changed, but not what the change was; you need to figure out whether the attribute you care about has changed or not. That being said, it is pretty easy to tell whether the object has been deleted (easier than with uSNChanged polling at least). You can only do this in unmanaged code or with System.DirectoryServices.Protocols.
For the most part, I have found that DirSync has fit the bill for me in virtually every situation. I never bothered to try any of the other techniques. However, a reader asked if there was a way to do the change notifications in .NET. I figured it was possible using SDS.P, but had never tried it. Turns out, it is possible and actually not too hard to do. My first thought was to use the sample code found on MSDN (and referenced from option #3) and simply convert it to System.DirectoryServices.Protocols. This turned out to be a dead end: the way you do it in SDS.P and the way the sample code works are different enough that it is of no help. Here is the solution I came up with:
public class ChangeNotifier : IDisposable
{
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn,                //root the search here
            "(objectClass=*)", //very inclusive
            scope,             //any scope works
            null               //we are interested in all attributes
            );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request);

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);

        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
It is a relatively simple class that you can use to register searches. The trick is using the GetPartialResults method in the callback method to get only the change that has just occurred. I have also included the very simplified EventArgs class I am using to pass results back. Note, I am not doing anything about threading here and I don't have any error handling (this is just a sample). You can consume this class like so:
static void Main(string[] args)
{
    using (LdapConnection connect = CreateConnection("localhost"))
    {
        using (ChangeNotifier notifier = new ChangeNotifier(connect))
        {
            //register some objects for notifications (limit 5)
            notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
            notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

            notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

            Console.WriteLine("Waiting for changes...");
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}

static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
    Console.WriteLine(e.Result.DistinguishedName);

    foreach (string attrib in e.Result.Attributes.AttributeNames)
    {
        foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
        {
            Console.WriteLine("\t{0}: {1}", attrib, item);
        }
    }

    Console.WriteLine();
    Console.WriteLine("====================");
    Console.WriteLine();
}