Checking the throttle before using a GraphServiceClient - c#

I have several GraphServiceClients and I'm using them to retrieve information from the Microsoft Graph API. There's a throttle on the GraphServiceClient calls. As far as I understand from this documentation, you can't call the APIs more than 10,000 times in a 10-minute time frame and you can only have 4 concurrent requests in flight at the same time. What's a thread-safe and efficient way to check whether I have reached the maximum limit?
My implementation
I came up with this, but I'm not sure it matches how Microsoft Graph actually enforces the limits.
public class ThrottledClient
{
    private readonly TimeSpan _throttlePeriod;
    private readonly int _throttleLimit;

    public ThrottledClient(int throttleLimit, TimeSpan throttlePeriod)
    {
        _throttleLimit = throttleLimit;
        _throttlePeriod = throttlePeriod;
    }

    private readonly ConcurrentQueue<DateTime> _requestTimes = new();

    public required GraphServiceClient GraphClient { get; set; }

    public async Task CheckThrottleAsync(CancellationToken cancellationToken)
    {
        _requestTimes.Enqueue(DateTime.UtcNow);

        if (_requestTimes.Count > _throttleLimit)
        {
            Console.WriteLine($"Count limit, {DateTime.Now:HH:mm:ss}");

            _requestTimes.TryDequeue(out var oldestRequestTime);

            var timeRemaining = oldestRequestTime + _throttlePeriod - DateTime.UtcNow;
            if (timeRemaining > TimeSpan.Zero)
            {
                Console.WriteLine($"Sleeping for {timeRemaining}");
                await Task.Delay(timeRemaining, cancellationToken).ConfigureAwait(false);
                Console.WriteLine($"Woke up, {DateTime.Now:HH:mm:ss}");
            }
        }
    }
}
public class Engine
{
    public async Task RunAsync()
    {
        var client = GetClient();

        await client.CheckThrottleAsync(_cts.Token).ConfigureAwait(false);
        await DoSomethingAsync(client.GraphClient).ConfigureAwait(false);
    }
}
I can think of other approaches, such as a lock or a SemaphoreSlim, but again, I'm not sure I'm thinking about this correctly.
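For the concurrency part of the limit, this is roughly the SemaphoreSlim shape I have in mind (an untested sketch that caps the number of in-flight Graph calls at 4; ExecuteThrottledAsync is just a name I made up):

// Sketch: cap concurrent Graph calls with a SemaphoreSlim (4 slots).
private static readonly SemaphoreSlim _concurrencyLimiter = new SemaphoreSlim(4, 4);

private async Task<T> ExecuteThrottledAsync<T>(Func<Task<T>> graphCall, CancellationToken ct)
{
    await _concurrencyLimiter.WaitAsync(ct).ConfigureAwait(false);
    try
    {
        return await graphCall().ConfigureAwait(false);
    }
    finally
    {
        _concurrencyLimiter.Release();
    }
}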

I believe you can use the Microsoft Graph Developer Proxy to test these scenarios.
Microsoft Graph Developer Proxy aims to provide a better way to test applications that use Microsoft Graph. By using the proxy to simulate errors, mock responses, and behaviors like throttling, developers can identify and fix issues in their code early in the development cycle, before they reach production.
More details can be found here https://github.com/microsoftgraph/msgraph-developer-proxy

We use Polly to automatically retry failed HTTP calls. It also has support for exponential backoff.
So why not handle the error in a way that works for your application, instead of trying to figure out what the limit is beforehand (and doing extra bookkeeping to count calls against that limit)? You can test those scenarios with the Graph Developer Proxy from the other answer.
We also use a circuit breaker to fail fast without making extra calls to Graph, and retry later.
The 4 concurrent requests you're mentioning apply to Outlook resources (I've written many UserVoice posts, GitHub issues and escalations about it). Since Q4 2022, you can put 20 requests in a batch for all resources (in the same tenant). Batching reduces the HTTP overhead and might help you overcome throttling limits by combining requests in a smart way.
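A rough sketch of that combination with Polly (the status-code checks, timings and thresholds are illustrative, and how you attach the policy to your Graph calls depends on your SDK setup):

// Sketch: retry 429/5xx responses with exponential backoff, and trip a circuit breaker
// after repeated throttling so we stop hammering Graph for a while.
var retry = Policy
    .Handle<HttpRequestException>()
    .OrResult<HttpResponseMessage>(r => (int)r.StatusCode == 429 || (int)r.StatusCode >= 500)
    .WaitAndRetryAsync(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))); // 2s, 4s, 8s, ...

var breaker = Policy
    .Handle<HttpRequestException>()
    .OrResult<HttpResponseMessage>(r => (int)r.StatusCode == 429)
    .CircuitBreakerAsync(handledEventsAllowedBeforeBreaking: 4, durationOfBreak: TimeSpan.FromMinutes(1));

var resilience = Policy.WrapAsync(retry, breaker);

// usage: wrap the actual HTTP call (ideally also honouring the Retry-After header)
// var response = await resilience.ExecuteAsync(() => httpClient.GetAsync("https://graph.microsoft.com/v1.0/me"));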

Related

Do Azure Functions retry policies count towards billing and timeouts?

Microsoft have recently released retry policies for Azure Functions (preview), which can be applied using the FixedDelayRetry and ExponentialBackoffRetry attributes. Do these retry policies hook into the Azure Functions runtime and operate at a level below the function invocations, or are they effectively the same as an await Task.Delay in user-code? Specifically, would the retry delays of these policies count towards the function execution time, and hence be billed and cause timeouts if they exceed the 10-minute maximum duration on consumption plan?
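For context, applying them looks roughly like this (a sketch; the queue trigger and the interval values are only illustrative):

// Sketch: the preview retry attributes applied to a function.
[FunctionName("ProcessQueueMessage")]
[ExponentialBackoffRetry(5, "00:00:04", "00:15:00")] // maxRetryCount, minimumInterval, maximumInterval
public static void Run(
    [QueueTrigger("myqueue-items")] string message,
    ILogger log)
{
    log.LogInformation($"Processing {message}");
}

// Fixed-delay variant:
// [FixedDelayRetry(5, "00:00:10")] // maxRetryCount, delayInterval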
I can only find the following relevant method in the source code (simplified version below), which enforces retry delays using an await Task.Delay, but I might be missing something.
namespace Microsoft.Azure.WebJobs.Host.Executors
{
    internal static class FunctionExecutorExtensions
    {
        public static async Task<IDelayedException> TryExecuteAsync(this IFunctionExecutor executor, Func<IFunctionInstance> instanceFactory, ILoggerFactory loggerFactory, CancellationToken cancellationToken)
        {
            var attempt = 0;
            while (true)
            {
                var functionInstance = instanceFactory.Invoke();
                var functionException = await executor.TryExecuteAsync(functionInstance, cancellationToken);

                if (functionException == null)
                    return null; // function invocation succeeded

                if (functionInstance.FunctionDescriptor.RetryStrategy == null)
                    return functionException; // retry is not configured

                var retryStrategy = functionInstance.FunctionDescriptor.RetryStrategy;
                if (retryStrategy.MaxRetryCount != -1 && ++attempt > retryStrategy.MaxRetryCount)
                    return functionException; // retry count exceeded

                TimeSpan nextDelay = retryStrategy.GetNextDelay(retryContext);
                await Task.Delay(nextDelay);
            }
        }
    }
}
There are many shortcomings in other aspects of these retry policies, so unless they offer some advantage through their integration with the functions runtime, I'd prefer to stick with a mature reusable library such as Polly.
The retry policies cannot be customized. ExponentialBackoffRetry uses a hardcoded factor of 2 that cannot be changed. FixedDelayRetry does not support random jitter, and ExponentialBackoffRetry has a hardcoded random jitter of ±20%.
Retry failures get logged as errors, needlessly cluttering the error logs. It doesn't seem possible to disable these retry errors without losing all other function errors too.
There is no way of getting the retry count. RetryContext still gives a binding error on the latest non-beta packages.
Technical documentation is sparse, with the above details needing to be inferred from the source code.
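For reference, this is roughly what a jittered exponential backoff looks like with Polly (a sketch; CallDownstreamAsync is a placeholder for the actual work):

// Sketch: exponential backoff plus random jitter via Polly.
var jitterer = new Random();
var retry = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(5, attempt =>
        TimeSpan.FromSeconds(Math.Pow(2, attempt))
        + TimeSpan.FromMilliseconds(jitterer.Next(0, 1000)));

// await retry.ExecuteAsync(() => CallDownstreamAsync());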

Azure Durable Function error handling: Is there a way to identify which retry you are on?

The Durable Functions documentation specifies the following pattern to set up automatic handling of retries when an exception is raised within an activity function:
public static async Task Run(DurableOrchestrationContext context)
{
    var retryOptions = new RetryOptions(
        firstRetryInterval: TimeSpan.FromSeconds(5),
        maxNumberOfAttempts: 3);

    await context.CallActivityWithRetryAsync("FlakyFunction", retryOptions, "ABC");

    // ...
}
However I can't see a way to check which retry you're up to within the activity function:
[FunctionName("FlakyFunction")]
public static string[] MyFlakyFunction(
[ActivityTrigger] string id,
ILogger log)
{
// Is there a built-in way to tell what retry attempt I'm up to here?
var retry = ??
DoFlakyStuffThatMayCauseException();
}
EDIT: I know it can probably be handled by mangling some sort of count into the RetryOptions.Handle delegate, but that's a horrible solution. It can be handled manually by maintaining an external state each time it's executed, but given that there's an internal count of retries I'm just wondering if there's any way to access that. Primary intended use is debugging and logging, but I can think of many other uses.
There does not seem to be a way to identify the retry. Activity functions are unaware of state and retries. When the CallActivityWithRetryAsync call is made the DurableOrchestrationContext calls the ScheduleWithRetry method of the OrchestrationContext class inside the DurableTask framework:
public virtual Task<T> ScheduleWithRetry<T>(string name, string version, RetryOptions retryOptions, params object[] parameters)
{
    Task<T> RetryCall() => ScheduleTask<T>(name, version, parameters);
    var retryInterceptor = new RetryInterceptor<T>(this, retryOptions, RetryCall);

    return retryInterceptor.Invoke();
}
There the Invoke method on the RetryInterceptor class is called and that does a foreach loop over the maximum number of retries. This class does not expose properties or methods to obtain the number of retries.
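One workaround is to skip CallActivityWithRetryAsync and drive the retries from the orchestrator yourself, passing the attempt number to the activity as part of its input. A rough sketch against the v1.x API (the names, input shape and delay are made up):

// Sketch: orchestrator-driven retries so the activity can see which attempt it is on.
[FunctionName("RetryAwareOrchestrator")]
public static async Task Run([OrchestrationTrigger] DurableOrchestrationContext ctx)
{
    const int maxAttempts = 3;
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            // the activity can read Attempt from its input and log it
            await ctx.CallActivityAsync<string[]>("FlakyFunction", new { Id = "ABC", Attempt = attempt });
            return; // success
        }
        catch (FunctionFailedException) when (attempt < maxAttempts)
        {
            // durable timer instead of Task.Delay, so the wait is replay-safe
            await ctx.CreateTimer(ctx.CurrentUtcDateTime.AddSeconds(5), CancellationToken.None);
        }
    }
}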
Another workaround to help with debugging could be logging statements inside the activity function. And when you're running it locally you can put in a breakpoint there to see how often it stops there. Note that there is already a feature request to handle better logging for retries. You could add your feedback there or raise a new issue if you feel that's more appropriate.
To be honest, I think it's good that an activity is unaware of state and retries. That should be the responsibility of the orchestrator. It would be useful however if you could get some statistics on retries to see if there is a trend in performance degradation.

Lock create/update/delete requests per user

I have a payment system written in Asp.Net Core & Entity Framework & SQL Server.
I have to deal with many security situations where I need to prevent 2 or more actions performed by the user at the same time. For example:
Every time a payment transaction happens
Check user balance and block if there is not enough credit.
Perform the payment and reduce the user balance.
Now if a user fires 2 or more requests to create payments, some of those requests will pass the available-credit validation.
Since I have many scenarios similar to this one, I thought about a general solution that can solve all of them. I thought about adding a middleware which will check each request and do the following:
If the request is a GET request it will pass - this enables concurrent GET requests.
If the request is a POST/PUT/DELETE request it will check if there is already an existing POST/PUT/DELETE for this specific user (assuming the user is logged in). If there is, a bad request response will be returned to the client.
In order to do this the correct way and support more than 1 server, I understand that I need to do this at the database level. I know that I can lock a specific row in Oracle and am thinking about locking the user row at the beginning of each UPDATE/CREATE/DELETE and releasing it at the end. What is the best approach to do this with EF? Is there a better solution?
I'm using the UnitOfWork pattern which each request has its own scope.
I'd vote against using row locks as a mechanism of request synchronization:
even though Oracle is known for not escalating row locks, there are transaction-level and other optimizations that may decide to escalate, which can lead to reduced scalability and deadlocks.
if two users want to transfer money to each other, there's a chance they'll be deadlocked. (If you don't have such a feature right now, you may have it in the future, so better create an architecture that wouldn't be invalidated that easily).
Now, a system that returns "bad request" just because another request from the same user happens to take a longer time, is of course a fail-safe system, but its reliability (a metric of running without failures) suffers. I'd expect any payment system in the world to be both fail-safe and reliable.
Is there a better solution?
An architecture based on CQRS and shared-nothing approaches:
ASP.NET server ("web tier"):
directly performs read (GET) operations, as it does now
submits write (POST/PUT/DELETE) operations into a queue, and returns HTTP 200 immediately.
Application tier: a cluster of (micro)services that fetch and perform the write requests, in a shared-nothing manner:
at any moment, requests from any particular user are processed by at most one thread in the whole system (across all processes and machines).
the shared-nothing approach ensures that you never have to concurrently process requests from the same user.
Implementing shared-nothing
The shared-nothing architecture can be implemented by partitioning (AKA sharding). For example:
you have N processing threads running (inside some processes) on a cluster of M machines
each machine is assigned a unique role to run a specific range of threads out of these N
each request from a user is always dispatched to the same specific thread by calculating thread_index = HASH(User) % N, or, if the user ID is an integer, thread_index = USER_ID % N (see the sketch after this list).
how the dispatched requests are passed to the processing threads depends on the chosen queue. For example, web tier can submit requests to N different topics, or it can directly push the requests to a distributed actor (see Akka.Net), or you can just use database table as a queue, and make each thread fetch the requests that belong to it.
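A small sketch of that dispatch calculation (FNV-1a is just one possible stable hash; Object.GetHashCode is not suitable because it isn't stable across processes):

// Sketch: map a user to one of N partitions with a hash that is stable across processes.
static int PartitionFor(string userId, int partitionCount)
{
    const uint fnvOffset = 2166136261;
    const uint fnvPrime = 16777619;

    uint hash = fnvOffset;
    foreach (byte b in System.Text.Encoding.UTF8.GetBytes(userId))
    {
        hash ^= b;
        hash *= fnvPrime;
    }

    return (int)(hash % (uint)partitionCount);
}

// For integer user ids this reduces to: partition = userId % partitionCount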
In addition, you'll need an orchestrator to ensure that each of the M machines is up and running. If a machine goes down, the orchestrator spins up another machine with the same role. For example, if you dockerize your services, you can use Kubernetes with StatefulSet.
I recently stumbled across the same problem. For some reason some of my users are able to post a form twice, which results in duplicated data.
Even though this is an old question, I hope this helps someone.
Like you mentioned, one approach is to use the database for the locking, but like you, I couldn't find a solid implementation. I'm also assuming that you have a monolithic application; #felix-b already mentioned a very good solution.
I went the way of making the requests that would normally run concurrently run in sequence instead. This solution may have disadvantages, but I could not find any. Please let me know your thoughts.
So I solved it with a dictionary mapping the UserId to a SemaphoreSlim.
Then I simply marked the controllers with an ActionFilter and throttled the execution of controller methods per user.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class AtomicOperationPerUserAttribute : ActionFilterAttribute
{
    private readonly ILogger<AtomicOperationPerUserAttribute> _logger;
    private readonly IConcurrencyService _concurrencyService;

    public AtomicOperationPerUserAttribute(ILogger<AtomicOperationPerUserAttribute> logger, IConcurrencyService concurrencyService)
    {
        _logger = logger;
        _concurrencyService = concurrencyService;
    }

    public override void OnActionExecuting(ActionExecutingContext context)
    {
        int userId = YourWayToGetTheUserId; //mine was with context.HttpContext.AppSpecificExtensionMethod()

        _logger.LogInformation($"User {userId} claims semaphore with RequestId {context.HttpContext.TraceIdentifier}");

        var semaphore = _concurrencyService.SemaphorePerUser(userId);
        semaphore.Wait();
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        int userId = YourWayToGetTheUserId; //mine was with context.HttpContext.AppSpecificExtensionMethod()

        var semaphore = _concurrencyService.SemaphorePerUser(userId);

        _logger.LogInformation($"User {userId} releases semaphore with RequestId {context.HttpContext.TraceIdentifier}");
        semaphore.Release();
    }
}
The "ConcurrentService" is a Singleton registered in the Startup.cs.
public interface IConcurrencyService
{
    SemaphoreSlim SemaphorePerUser(int userId);
}

public class ConcurrencyService : IConcurrencyService
{
    public static ConcurrentDictionary<int, SemaphoreSlim> Semaphores = new ConcurrentDictionary<int, SemaphoreSlim>();

    public SemaphoreSlim SemaphorePerUser(int userId)
    {
        return Semaphores.GetOrAdd(userId, new SemaphoreSlim(1, 1));
    }
}
Since in my case I needed dependencies in the ActionFilter, I mark a controller action with [ServiceFilter(typeof(AtomicOperationPerUserAttribute))].
Accordingly, I registered the services in Startup.cs:
services.AddScoped<AtomicOperationPerUserAttribute>();
services.AddSingleton<IConcurrencyService, ConcurrencyService>();

BadHttpRequestException due to MinRequestBodyDataRate and poor connection

I have an ASP.NET Core web application that accepts file uploads from authenticated users, along with some error handling to notify me when an exception occurs (Exceptional). My problem is I'm periodically getting alerts of BadHttpRequestExceptions due to users accessing the application via mobile devices in areas with unreliable coverage. I used to get this with 2.0 too, but until the exception description got updated in 2.1, I never knew it was specifically related to MinRequestBodyDataRate, or that it was configurable.
I've upped the defaults (240 bytes per second with a 5-second grace period) with the following:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.Limits.MinRequestBodyDataRate = new MinDataRate(240.0, TimeSpan.FromSeconds(10.0));
        })
        .UseStartup<Startup>();
This doubles the duration it will wait for the client/browser, but I'm still getting error alerts. I can't control the poor reception of the users, but I'm still hoping to find a way to mitigate the errors. I can extend the minimum even more, but I don't want to do it for the entire application if possible. Ideally it'd be for the specific route/action handling the upload. The documentation I was able to find indicated this minimum can be configured using the IHttpMinRequestBodyDataRateFeature. I thought about putting this inside the action, but I don't think the action is even called until model binding has gotten the entire file uploaded in order to bind it to my parameter.
public async Task<IActionResult> Upload(IFormFile file)
{
    var minRequestBodyDataRateFeature = HttpContext.Features.Get<IHttpMinRequestBodyDataRateFeature>();
    minRequestBodyDataRateFeature.MinDataRate = new MinDataRate(240, TimeSpan.FromSeconds(30));

    myDocumentService.SaveFile(file);
}
This is somewhat mitigated by the fact that I had previously implemented chunked uploading, and only save the file to the service/database when the last chunk comes in (building up the file gradually in a temporary location in between). But I'm still getting these BadHttpRequestExceptions even with the extended duration done via CreateWebHostBuilder (chunking not shown, as it's not relevant).
I'm thinking the best bet might be to try and wire it up in the Configure method that sets up the middleware, but I'm not sure about the best way to only apply it to one action. By URL maybe? And I'm wondering what the implications would be if I disabled the minimum rate entirely by setting it to null.
I'm basically just looking to make the connection more tolerant of poor quality connections while not upsetting too much of the day to day for other aspects of the application.
I need a way to apply the change (potentially in the middleware Startup.Configure()), in such a way that it only applies to the affected route/url/action.
Are there implications to consider if I disable it entirely (versus enlarging it) for that route/url/action? Entire application?
Failing any of that, is it safe to simply ignore these errors and never log them? Do I create a blind spot to a malicious interactions with the application/server?
I have noticed the same issue. I have bigger files that users can download from my site and some people, depending on the region, have slow download rates and the download stops after a while. I also see the exceptions in the log.
I do not recommend disabling the feature, because stuck or very slow connections may add up. Just increase the grace period and lower the rate to values where you can say that your web page is still usable.
I would not do that for specific URLs. It does not hurt if all pages have the same settings, unless you notice performance issues.
Also keep in mind that there is also a MinResponseDataRate option.
The best way is to set some reasonable values at application startup for all routes. There will always be exceptions in the log from people who lose their internet connection, and Kestrel has to close the connection after a while to free up resources.
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.Limits.MinRequestBodyDataRate =
                new MinDataRate(bytesPerSecond: 80, gracePeriod: TimeSpan.FromSeconds(20));
            options.Limits.MinResponseDataRate =
                new MinDataRate(bytesPerSecond: 80, gracePeriod: TimeSpan.FromSeconds(20));
        })
        .ConfigureLogging((hostingContext, logging) =>
        {
            logging.ClearProviders();
            logging.SetMinimumLevel(LogLevel.Trace);
        })
        .UseNLog()
        .UseStartup<Startup>()
        .Build();
Please also have a look at your async Upload method. Make sure you have an await or return a task. I don't think it compiles this way.
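Something along these lines, assuming the document service has (or can be given) an asynchronous save method; the names are illustrative:

public async Task<IActionResult> Upload(IFormFile file)
{
    // hypothetical async variant of the save call
    await myDocumentService.SaveFileAsync(file);
    return Ok();
}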
Here is an option using a ResourceFilter, which runs before model binding.
using System;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.AspNetCore.Server.Kestrel.Core.Features;
using Microsoft.Extensions.DependencyInjection;

namespace YourNameSpace
{
    public class RateFilter : Attribute, IResourceFilter
    {
        private const string EndPoint = "YourEndPoint";

        public void OnResourceExecuting(ResourceExecutingContext context)
        {
            try
            {
                if (!context.HttpContext.Request.Path.Value.Contains(EndPoint))
                {
                    throw new Exception($"This filter is intended to be used only on a specific end point '{EndPoint}' while it's being called from '{context.HttpContext.Request.Path.Value}'");
                }

                var minRequestRateFeature = context.HttpContext.Features.Get<IHttpMinRequestBodyDataRateFeature>();
                var minResponseRateFeature = context.HttpContext.Features.Get<IHttpMinResponseDataRateFeature>();

                //Default Bytes/s = 240, Default TimeOut = 5s
                if (minRequestRateFeature != null)
                {
                    minRequestRateFeature.MinDataRate = new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
                }

                if (minResponseRateFeature != null)
                {
                    minResponseRateFeature.MinDataRate = new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
                }
            }
            catch (Exception ex)
            {
                //Log or Throw
            }
        }

        public void OnResourceExecuted(ResourceExecutedContext context)
        {
        }
    }
}
Then you can use the attribute on a specific end point like
[RateFilter]
[HttpPost]
public IActionResult YourEndPoint(YourModel request)
{
    return Ok();
}
You can further customize the filter to take the endpoint/rates as constructor parameters. You can also remove the check for a specific endpoint, and you can return instead of throwing.

Registering change notification with Active Directory using C#

This link http://msdn.microsoft.com/en-us/library/aa772153(VS.85).aspx says:
You can register up to five notification requests on a single LDAP connection. You must have a dedicated thread that waits for the notifications and processes them quickly. When you call the ldap_search_ext function to register a notification request, the function returns a message identifier that identifies that request. You then use the ldap_result function to wait for change notifications. When a change occurs, the server sends you an LDAP message that contains the message identifier for the notification request that generated the notification. This causes the ldap_result function to return with search results that identify the object that changed.
I cannot find a similar behavior looking through the .NET documentation. If anyone knows how to do this in C# I'd be very grateful to know. I'm looking to see when attributes change on all the users in the system so I can perform custom actions depending on what changed.
I've looked through stackoverflow and other sources with no luck.
Thanks.
I'm not sure it does what you need, but have a look at http://dunnry.com/blog/ImplementingChangeNotificationsInNET.aspx
Edit: Added text and code from the article:
There are three ways of figuring out things that have changed in Active Directory (or ADAM). These have been documented for some time over at MSDN in the aptly titled "Overview of Change Tracking Techniques". In summary:

1. Polling for changes using uSNChanged. This technique checks the 'highestCommittedUSN' value to start and then performs searches for 'uSNChanged' values that are higher subsequently. The 'uSNChanged' attribute is not replicated between domain controllers, so you must go back to the same domain controller each time for consistency. Essentially, you perform a search looking for the highest 'uSNChanged' value + 1 and then read in the results, tracking them in any way you wish.

Benefits: This is the most compatible way. All languages and all versions of .NET support it, since it is a simple search.

Disadvantages: There is a lot here for the developer to take care of. You get the entire object back, and you must determine what has changed on the object (and whether you care about that change). Dealing with deleted objects is a pain. This is a polling technique, so it is only as real-time as how often you query. This can be a good thing depending on the application. Note that intermediate values are not tracked here either.

2. Polling for changes using the DirSync control. This technique uses the ADS_SEARCHPREF_DIRSYNC option in ADSI and the LDAP_SERVER_DIRSYNC_OID control under the covers. Simply make an initial search, store the cookie, and then later search again and send the cookie. It will return only the objects that have changed.

Benefits: This is an easy model to follow. Both System.DirectoryServices and System.DirectoryServices.Protocols support this option. Filtering can reduce what you need to bother with. As an example, if my initial search is for all users "(objectClass=user)", I can subsequently filter on polling with "(sn=dunn)" and only get back the combination of both filters, instead of having to deal with everything from the initial filter. The Windows 2003+ option removes the administrative limitation for using this option (object security). The Windows 2003+ option will also give you the ability to return only the incremental values that have changed in large multi-valued attributes. This is a really nice feature. It deals well with deleted objects.

Disadvantages: This is a .NET 2.0+ option only. Users of .NET 1.1 will need to use uSNChanged tracking. Scripting languages cannot use this method. You can only scope the search to a partition. If you want to track only a particular OU or object, you must sort out those results yourself later. Using this with non-Windows 2003 mode domains comes with the restriction that you must have replicate-changes permissions (by default, admins only) to use it. This is a polling technique. It does not track intermediate values either, so if an object you want to track changes multiple times between searches, you will only get the last change. This can be an advantage depending on the application.

3. Change notifications in Active Directory. This technique registers a search on a separate thread that will receive notifications when any object that matches the filter changes. You can register up to 5 notifications per async connection.

Benefits: Instant notification. The other techniques require polling. Because this is a notification, you will get all changes, even the intermediate ones that would have been lost in the other two techniques.

Disadvantages: Relatively resource intensive. You don't want to do a whole ton of these, as it could cause scalability issues with your domain controller. This only tells you that the object has changed, but it does not tell you what the change was. You need to figure out if the attribute you care about has changed or not. That being said, it is pretty easy to tell if the object has been deleted (easier than uSNChanged polling at least). You can only do this in unmanaged code or with System.DirectoryServices.Protocols.

For the most part, I have found that DirSync has fit the bill for me in virtually every situation. I never bothered to try any of the other techniques. However, a reader asked if there was a way to do the change notifications in .NET. I figured it was possible using SDS.P, but had never tried it. Turns out, it is possible and actually not too hard to do. My first thought on writing this was to use the sample code found on MSDN (and referenced from option #3) and simply convert it to System.DirectoryServices.Protocols. This turned out to be a dead end. The way you do it in SDS.P and the way the sample code works are different enough that it is of no help. Here is the solution I came up with:
public class ChangeNotifier : IDisposable
{
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn,                //root the search here
            "(objectClass=*)", //very inclusive
            scope,             //any scope works
            null               //we are interested in all attributes
            );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request);

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);

        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
It is a relatively simple class that you can use to register searches. The trick is using the GetPartialResults method in the callback method to get only the change that has just occurred. I have also included the very simplified EventArgs class I am using to pass results back. Note, I am not doing anything about threading here and I don't have any error handling (this is just a sample). You can consume this class like so:
static void Main(string[] args)
{
    using (LdapConnection connect = CreateConnection("localhost"))
    {
        using (ChangeNotifier notifier = new ChangeNotifier(connect))
        {
            //register some objects for notifications (limit 5)
            notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
            notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

            notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

            Console.WriteLine("Waiting for changes...");
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}

static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
    Console.WriteLine(e.Result.DistinguishedName);

    foreach (string attrib in e.Result.Attributes.AttributeNames)
    {
        foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
        {
            Console.WriteLine("\t{0}: {1}", attrib, item);
        }
    }

    Console.WriteLine();
    Console.WriteLine("====================");
    Console.WriteLine();
}
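For completeness, the DirSync polling technique mentioned above can be used from plain System.DirectoryServices as well; a rough sketch (the LDAP path and filter are placeholders):

// Sketch: DirSync polling. The first search returns everything plus a cookie;
// later searches that send the stored cookie return only what changed since then.
var root = new DirectoryEntry("LDAP://DC=example,DC=com");
var searcher = new DirectorySearcher(root, "(objectClass=user)")
{
    DirectorySynchronization = new DirectorySynchronization() // no cookie yet: full sync
};

using (SearchResultCollection initial = searcher.FindAll())
{
    // seed your own state from the initial result set
}

byte[] cookie = searcher.DirectorySynchronization.GetDirectorySynchronizationCookie();

// ...persist the cookie, then on the next poll:
searcher.DirectorySynchronization = new DirectorySynchronization(cookie);
using (SearchResultCollection changes = searcher.FindAll())
{
    foreach (SearchResult change in changes)
    {
        // only changed objects come back, with only the changed attributes populated
    }
}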
