ASP.NET Cache - long running operation - c#

I'm storing data in cache so as not to hit the database constantly (it doesn't matter if the data is a little stale). The dataset isn't particularly large, but the operation can take some time due to the complexity of the query (lots of joins and subqueries). I have a static helper class and the data is used for binding on individual pages. The page calls it like so:
public static List<MyList> MyDataListCache
{
    get
    {
        var myList = HttpContext.Current.Cache["myList"];
        if (myList == null)
        {
            var result = MyLongRunningOperation();
            HttpContext.Current.Cache.Add("myList", result, null, DateTime.Now.AddMinutes(3),
                Cache.NoSlidingExpiration, CacheItemPriority.Normal, null);
            return result;
        }
        else
        {
            return (List<MyList>)myList;
        }
    }
}
This works fine unless lots of people hit the page at the same time when the item is out of cache. Hundreds of the long-running operations are spun up and cause the application to crash. How do I avoid this problem? I've tried using async tasks to check whether the task is currently running, but had no luck in getting it to work.

Upon starting this service, you should call MyLongRunningOperation() immediately to warm up your cache.
Second, you always want something to be returned, so I would consider a background task to refresh this cache prior to its expiry.
Doing these two things will avoid the situation you described. The cache will be refreshed by a background worker, and so everyone is happy :)
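If you'd rather keep the lazy-load shape from the question, a minimal sketch of one common fix is to guard the cache miss with a double-checked lock, so only the first request runs MyLongRunningOperation while the rest wait and then read the cached result. This reuses the property and names from the question; the static lock object is the only addition.
private static readonly object CacheLock = new object();

public static List<MyList> MyDataListCache
{
    get
    {
        // Fast path: no locking when the item is already cached.
        var myList = HttpContext.Current.Cache["myList"] as List<MyList>;
        if (myList != null) return myList;

        lock (CacheLock)
        {
            // Re-check inside the lock: another request may have
            // repopulated the cache while we were waiting.
            myList = HttpContext.Current.Cache["myList"] as List<MyList>;
            if (myList != null) return myList;

            var result = MyLongRunningOperation();
            HttpContext.Current.Cache.Add("myList", result, null,
                DateTime.Now.AddMinutes(3), Cache.NoSlidingExpiration,
                CacheItemPriority.Normal, null);
            return result;
        }
    }
}
Caching a Lazy<List<MyList>> instead gives the same single-execution guarantee and is closer to the async-task idea from the question, but the lock above is usually enough.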


Handling multiple instances of the same controller

I am currently developing an application in ASP.NET Core 2.0.
The following is the action inside my controller that gets executed when the user clicks the submit button.
The following is the function that gets called by the action.
As a measure to prevent duplicates inside the database I have the function IsSignedInJob(). The function works.
My Problem:
Sometimes when the internet connection is slow or the server is not responding right away, it is possible to click the submit button more than once. When the connection is reestablished the browser (in my case Chrome) sends multiple HttpPost requests to the server. In that case the functions (the same function from different instances) are executed so close together in time that before the change in the database is made, other instances are making the same change without being aware of each other.
Is there a way to solve this problem on the server side without being too "hacky"?
Thank you
As suggested in the comments (and this is my preferred approach), you can simply disable the button once it has been clicked the first time.
Another solution would be to add something to a dictionary indicating that the job has already been registered, but this will probably have to use a lock, as you need to make sure that only one thread can read/write at a time. A concurrent collection won't do the trick here, as the problem is not whether a single operation is thread-safe but whether the check-then-act sequence as a whole is atomic. The IsSignedInJob method you have could do this behind the scenes, but I wouldn't check the database for it as the latency could be too high. Adding/removing a key in a dictionary should be a lot faster.
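A minimal sketch of that dictionary-plus-lock idea; the TryClaimJob/ReleaseJob names and the string job id are hypothetical:
// Hypothetical in-memory registry of jobs currently being processed.
private static readonly Dictionary<string, bool> _jobsInFlight = new Dictionary<string, bool>();
private static readonly object _jobsLock = new object();

// Returns true only for the first caller to claim this job id;
// duplicate posts get false and can be rejected cheaply.
private static bool TryClaimJob(string jobId)
{
    lock (_jobsLock)
    {
        if (_jobsInFlight.ContainsKey(jobId)) return false;
        _jobsInFlight[jobId] = true;
        return true;
    }
}

private static void ReleaseJob(string jobId)
{
    lock (_jobsLock) { _jobsInFlight.Remove(jobId); }
}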
Icarus's answer is great for the user experience and should be implemented. If you also need to make sure the request is only handled once on the server side, you have a few options. Here is one using the ReaderWriterLockSlim class.
// static so the lock is shared across controller instances (a new controller is created per request)
private static readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
private static readonly TimeSpan timeout = TimeSpan.FromMilliseconds(500); // pick a timeout that suits your workload

[HttpPost]
public async Task SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout))
    {
        try
        {
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
This will prevent overlapping DoWork code. It does not prevent DoWork from finishing completely and then another post coming in that runs DoWork again.
If you want to prevent the post from happening twice, implement the AntiForgeryToken, then store the token in session. Something like this (haven't used session in forever) may not compile, but you should get the idea.
private const string SomeMethodTokenName = "SomeMethodToken";

[HttpPost]
public async Task SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout))
    {
        try
        {
            var token = Request.Form["__RequestVerificationToken"].ToString();
            var previousToken = Session[SomeMethodTokenName];
            if (token == previousToken) return;
            Session[SomeMethodTokenName] = token;
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
Not exactly perfect: two different requests could still alternate over and over, so you could store in session the list of all used tokens for this session. There is no perfect way, because even then, someone could technically cause an OutOfMemoryException if they wanted to (too many tokens stored in session), but you get the idea.
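If you do go down the token-list route, here is a rough sketch, assuming ASP.NET Core session with the GetString/SetString extensions; the UsedTokensKey name and the JSON round-trip are my own choices (Newtonsoft.Json shown since the question is on Core 2.0):
using System.Collections.Generic;
using Microsoft.AspNetCore.Http; // GetString/SetString session extensions
using Newtonsoft.Json;

public static class TokenReplayGuard
{
    private const string UsedTokensKey = "UsedTokens"; // hypothetical session key

    // Returns true if this token was already used in this session.
    public static bool IsTokenReplay(ISession session, string token)
    {
        var json = session.GetString(UsedTokensKey);
        var used = json == null
            ? new List<string>()
            : JsonConvert.DeserializeObject<List<string>>(json);

        if (used.Contains(token)) return true;

        used.Add(token);
        session.SetString(UsedTokensKey, JsonConvert.SerializeObject(used));
        return false;
    }
}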
Try not to use asynchronous processing. Remove Task, await, and async.

auto refresh cache ASP.NET [duplicate]

This question already has an answer here:
Automatically refresh ASP.NET Output Cache on expiry
I have a website with a lot of data in it.
I use C# .NET MVC4 for development.
I have a big problem with slow page loads when the cache is empty.
Currently I'm using a cache that contains all the data that I need, and when the cache is populated the pages load right away, but when the cache expires it takes about 10s for a page to be fully loaded.
I'm looking for an option to auto refresh the cache when it expires.
I've been searching over Google but couldn't find anything on the matter.
How should this be done?
Or are there other options to solve this problem?
Thanks
You could cache it on the first call with a TTL, let it invalidate, and then the next call will get it and cache it back again. The problem with this is that you are slowing down your thread while it has to go fetch the data as it is unavailable, and multiple threads will wait for it (assuming you lock the read to prevent flooding).
One way to get around the first-load issue is to prime your cache on application startup. This ensures that when your application is ready to be used, the data is already loaded up and will be fast. Create a quick interface like ICachePrimer { void Prime() }, scan your assemblies for it, resolve the implementations, then run them.
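A sketch of what that primer scan could look like. The reflection-based discovery and the parameterless-constructor assumption are mine; swap in your IoC container's resolution if you have one:
using System;
using System.Linq;

public interface ICachePrimer
{
    void Prime();
}

public static class CachePriming
{
    // Call this from Application_Start so the cache is warm before traffic arrives.
    public static void PrimeAll()
    {
        var primers = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .Where(t => typeof(ICachePrimer).IsAssignableFrom(t)
                        && !t.IsInterface && !t.IsAbstract)
            .Select(t => (ICachePrimer)Activator.CreateInstance(t));

        foreach (var primer in primers)
            primer.Prime();
    }
}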
The way I like to get around the empty-cache-on-invalidation issue is to refresh the data before it is removed. To do this easily in .NET, you can utilize the MemoryCache's CacheItemPolicy callbacks.
UpdateCallback occurs before the item is removed, and allows you to refresh the item.
RemovedCallback occurs after the item has been removed.
In the example below, my CachedRepository will refresh the cached item when it is invalidated. Other threads will continue to receive the "old" value until the refresh completes.
public class MyCachedRepository : IMyRepository
{
    private readonly IMyRepository _baseRepository;
    private readonly ObjectCache _cache;

    public MyCachedRepository(IMyRepository baseRepository, ObjectCache cache)
    {
        _baseRepository = baseRepository;
        _cache = cache;
    }

    public string GetById(string id)
    {
        var value = _cache.Get(id) as string;
        if (value == null)
        {
            value = _baseRepository.GetById(id);
            if (value != null)
                _cache.Set(id, value, GetPolicy());
        }
        return value;
    }

    private CacheItemPolicy GetPolicy()
    {
        return new CacheItemPolicy
        {
            UpdateCallback = CacheItemRemoved,
            SlidingExpiration = TimeSpan.FromMinutes(0.1), //set your refresh interval
        };
    }

    private void CacheItemRemoved(CacheEntryUpdateArguments args)
    {
        if (args.RemovedReason == CacheEntryRemovedReason.Expired || args.RemovedReason == CacheEntryRemovedReason.Removed)
        {
            var id = args.Key;
            var updatedEntity = _baseRepository.GetById(id);
            args.UpdatedCacheItem = new CacheItem(id, updatedEntity);
            args.UpdatedCacheItemPolicy = GetPolicy();
        }
    }
}
Source: http://pdalinis.blogspot.in/2013/06/auto-refresh-caching-for-net-using.html
There is no mechanism to auto refresh a cache when the keys expire. All caching systems employ passive expiration. The keys are invalidated the first time they are requested after the expiration, not automatically at that exact expiration time.
What you're talking about is essentially a cache that never expires, which is easy enough to achieve. Simply pass either no expiration (if the caching mechanism allows it) or a far-future expiration. Then, your only problem is refreshing the cache on some schedule so that it does not become stale. For that, one option is to create a console application that sets the values in the cache (importantly, without caring if there's something there already) and then use Task Scheduler or similar to schedule it to run at set intervals. Another option is to use something like Revalee to schedule callbacks into your web application at defined intervals. This is basically the same as creating a console app, only the code could be integrated into your same website project.
You can also use Hangfire to perform the scheduling directly within your web application, and could use that to run a console application, hit a URL, whatever. The power of Hangfire is that it allows you to schedule pretty much any process you want, but that also means you have to actually provide the code for what should happen, i.e. actually connect with HttpClient and fetch the URL, rather than just telling Revalee to hit a particular URL.
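A rough sketch of the Hangfire variant. The refresh endpoint URL is made up; the point is just a recurring job (every five minutes here) that hits the site so it rebuilds its own cache:
using System.Net.Http;
using Hangfire;

public static class CacheRefreshJob
{
    private static readonly HttpClient Client = new HttpClient();

    // Fetch a page (or a dedicated refresh endpoint) so the application
    // rebuilds its cache before any real user asks for it.
    public static void RefreshCache()
    {
        Client.GetAsync("https://example.com/cache/refresh") // hypothetical endpoint
              .GetAwaiter().GetResult();
    }

    // Call once at startup, after Hangfire is configured.
    public static void Schedule()
    {
        // Run every five minutes; the job id keeps re-registration idempotent.
        RecurringJob.AddOrUpdate("refresh-cache", () => RefreshCache(), "*/5 * * * *");
    }
}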

First query is slow and pre-generated views aren't being hit (probably)

I'm having a bit of trouble with the time it takes EF to pull some entities. The entity in question has a boatload of props that live in 1 table, but it also has a handful of ICollection properties that relate to other tables. I've abandoned the idea of loading the entire object graph as it's way too much data, and instead will have my Silverlight client send out a new request to my WCF service as details are needed.
After slimming down to 1 table's worth of stuff, it's taking roughly 8 seconds to pull the data, then another 1 second to .ToList() it up (I expect this to be < 1 second). I'm using the Stopwatch class to take measurements. When I run the SQL query in SQL Management Studio, it takes only a fraction of a second, so I'm pretty sure the SQL statement itself isn't the problem.
Here is how I am trying to query my data:
public List<ComputerEntity> FindClientHardware(string client)
{
    long time1 = 0;
    long time2 = 0;
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();

    // query construction always takes about 8 seconds, give or take a few ms.
    var entities =
        DbSet.Where(x => x.CompanyEntity.Name == client); // .AsNoTracking() has no impact on performance
        //.Include(x => x.CompanyEntity)
        //.Include(x => x.NetworkAdapterEntities) // <-- using these 4 includes has no impact on SQL performance, but faster to make lists without these
        //.Include(x => x.PrinterEntities)        // I've also abandoned the idea of using these as I don't want the entire object graph (although it would be nice)
        //.Include(x => x.WSUSSoftwareEntities)
    //var entities = Find(x => x.CompanyEntity.Name == client); // <-- another test, no impact on performance, same execution time

    stopwatch.Stop();
    time1 = stopwatch.ElapsedMilliseconds;
    stopwatch.Restart();

    var listify = entities.ToList(); // 1 second with the 1 table, over 5 seconds if I use all the includes.

    stopwatch.Stop();
    time2 = stopwatch.ElapsedMilliseconds;

    var showmethesql = entities.ToString();
    return listify;
}
I'm assuming that using the .Include means eager loading, although it isn't relevant in my current case as I just want the 1 table's worth of stuff. The SQL generated by this statement (which executes super fast in SSMS) is:
SELECT
[Extent1].[AssetID] AS [AssetID],
[Extent1].[ClientID] AS [ClientID],
[Extent1].[Hostname] AS [Hostname],
[Extent1].[ServiceTag] AS [ServiceTag],
[Extent1].[Manufacturer] AS [Manufacturer],
[Extent1].[Model] AS [Model],
[Extent1].[OperatingSystem] AS [OperatingSystem],
[Extent1].[OperatingSystemBits] AS [OperatingSystemBits],
[Extent1].[OperatingSystemServicePack] AS [OperatingSystemServicePack],
[Extent1].[CurrentUser] AS [CurrentUser],
[Extent1].[DomainRole] AS [DomainRole],
[Extent1].[Processor] AS [Processor],
[Extent1].[Memory] AS [Memory],
[Extent1].[Video] AS [Video],
[Extent1].[IsLaptop] AS [IsLaptop],
[Extent1].[SubnetMask] AS [SubnetMask],
[Extent1].[WINSserver] AS [WINSserver],
[Extent1].[MACaddress] AS [MACaddress],
[Extent1].[DNSservers] AS [DNSservers],
[Extent1].[FirstSeen] AS [FirstSeen],
[Extent1].[IPv4] AS [IPv4],
[Extent1].[IPv6] AS [IPv6],
[Extent1].[PrimaryUser] AS [PrimaryUser],
[Extent1].[Domain] AS [Domain],
[Extent1].[CheckinTime] AS [CheckinTime],
[Extent1].[ActiveComputer] AS [ActiveComputer],
[Extent1].[NetworkAdapterDescription] AS [NetworkAdapterDescription],
[Extent1].[DHCP] AS [DHCP]
FROM
[dbo].[Inventory_Base] AS [Extent1]
INNER JOIN [dbo].[Entity_Company] AS [Extent2]
ON [Extent1].[ClientID] = [Extent2].[ClientID]
WHERE
[Extent2].[CompanyName] = @p__linq__0
Which is basically a select all columns in this table, join a second table that has a company name, and filter with a where clause of companyname == input value to the method. The particular company I'm pulling only returns 75 records.
Disabling object tracking with .AsNoTracking() has zero impact on execution time.
I also gave the Find method a go, and it had the exact same execution time. The next thing I tried was to pregenerate the views in case the issue was there. I am using code first, so I used the EF power tools to do this.
This long period of time to run this query causes too long of a delay for my users. When I hand write the SQL code and don't touch EF, it is super quick. Any ideas as to what I'm missing?
Also, maybe related or not, but since I'm doing this in WCF which is stateless I assume absolutely nothing gets cached? The way I think about it is that every new call is a firing up this WCF service library for the first time, therefore there is no pre-existing cache. Is this an accurate assumption?
Update 1
So I ran this query twice within the same unit test to check out the cold/warm query thing. The first query is horrible as expected, but the 2nd one is lightning fast, clocking in at 350ms for the whole thing. Since WCF is stateless, is every single call to my WCF service going to be treated like this first ugly-slow query? I still need to figure out how to get this first query to not suck.
Update 2
You know those pre-generated views I mentioned earlier? Well... I don't think they are being hit. I put a few breakpoints in the autogenerated-by-EF-powertools ReportingDbContext.Views.cs file, and they never get hit. Coupled with the cold/warm query performance I see, this sounds like it could be meaningful. Is there a particular way I need to pregenerate views with the EF power tools in a code-first environment?
Got it! The core problem was the whole cold query thing. How to get around this cold query issue? By making a query. This will "warm up" Entity Framework so that subsequent query compilation is much faster. My pre-generated views did nothing to help with the query I was compiling in this question, but they do seem to work if I want to dump an entire table to an array (a bad thing). Since I am using WCF, which is stateless, will I have to "warm up" EF for every single call? Nope! Since EF lives in the app domain and not the context, I just need to do my warm-up on the init of the service. For dev purposes I self host, but in production it lives in IIS.
To do the query warm up, I made a service behavior that takes care of this for me. Create your behavior class as such:
using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels; // for those without resharper, here are the "usings"
using System.ServiceModel.Description;

public class InitializationBehavior : Attribute, IServiceBehavior
{
    public InitializationBehavior()
    {
    }

    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
    }

    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase,
        Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
    {
        Bootstrapper.WarmUpEF();
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
    }
}
I then used this to do the warmup:
public static class Bootstrapper
{
    public static int initialized = 0;

    public static void WarmUpEF()
    {
        using (var context = new ReportingDbContext())
        {
            context.Database.Initialize(false);
        }
        initialized = 9999; // I'll explain this
    }
}
This SO question helped with the warmup code:
How do I initialize my Entity Framework queries to speed them up?
You then slap this behavior on your WCF service like so:
[InitializationBehavior]
public class InventoryService : IInventoryService
{
    // implement your service
}
I launched my services project in debug mode, which in turn fired up the initialization behavior. After spamming the method that makes the query referenced in my question, my breakpoint in the behavior wasn't being hit (other than when I first self hosted it). I verified this by checking the static initialized variable. I then published this bad boy into IIS with my verification int, and it had the exact same behavior.
So, in short, if you are using Entity Framework 5 with a WCF service and don't want a crappy first query, warm it up with a service behavior. There are probably other/better ways of doing this, but this way works too!
edit:
If you are using NUnit and want to warm up EF for your unit tests, setup your test as such:
[TestFixture]
public class InventoryTests
{
    [SetUp]
    public void Init()
    {
        // warm up EF.
        using (var context = new ReportingDbContext())
        {
            context.Database.Initialize(false);
        }
        // init other stuff
    }

    // tests go here
}
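If you'd rather not write a service behavior, a sketch of the same warm-up from Global.asax, assuming the WCF service is hosted in IIS inside a web project (Application_Start fires once per app domain, so this runs before the first real request):
using System;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Same warm-up as the Bootstrapper above: force EF to build
        // its model once, before the first real request arrives.
        using (var context = new ReportingDbContext())
        {
            context.Database.Initialize(false);
        }
    }
}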

Is locking single session in repository thread safe? (NHibernate)

I read many posts saying multithreaded applications must use a separate session per thread. Perhaps I don't understand how the locking works, but if I put a lock on the session in all repository methods, would that not make a single static session thread safe?
like:
public void SaveOrUpdate(T instance)
{
    if (instance == null) return;
    lock (_session)
    {
        using (ITransaction transaction = _session.BeginTransaction())
        {
            lock (instance)
            {
                _session.SaveOrUpdate(instance);
                transaction.Commit();
            }
        }
    }
}
EDIT:
Please consider the context/type of applications I'm writing:
Not multi-user, not typical user-interaction, but a self-running robot reacting to remote events like financial data and order-updates, performing tasks and saves based on that. Intermittently this can create clusters of up to 10 saves per second. Typically it's the same object graph that needs to be saved every time. Also, on startup, the program does load the full database into an entity-object-graph. So it basically just reads once, then performs SaveOrUpdates as it runs.
Given that the application is typically editing the same object graph, perhaps it would make more sense to have a single thread dedicated to applying these edits to the object graph and then saving them to the database, or perhaps a pool of threads servicing a common queue of edits, where each thread has its own (dedicated) session that it does not need to lock. Look up producer/consumer queues (to start, look here).
Something like this:
[Producer Threads]            [Database Servicer Thread]
Edit Event -\
Edit Event ---> Queue ------> Dequeue and Apply to Session -> Database
Edit Event -/
I'd imagine that a BlockingCollection<Action<Session>> would be a good starting point for such an implementation.
Here's a rough example (note this is obviously untested):
// Assuming you have a work queue defined as
public static BlockingCollection<Action<ISession>> myWorkQueue = new BlockingCollection<Action<ISession>>();

// and your eventargs looks something like this
public class MyObjectUpdatedEventArgs : EventArgs
{
    public MyObject MyObject { get; set; }
}

// And one of your event handlers
public void MyObjectWasChangedEventHandler(object sender, MyObjectUpdatedEventArgs e)
{
    // the queued action receives the servicer thread's session
    myWorkQueue.Add(s => s.SaveOrUpdate(e.MyObject));
}

// Then a thread in a constant loop processing these items could work:
public void ProcessWorkQueue()
{
    var mySession = mySessionFactory.OpenSession();
    while (true)
    {
        var nextWork = myWorkQueue.Take();
        nextWork(mySession);
    }
}

// And to run the above:
var dbUpdateThread = new Thread(ProcessWorkQueue);
dbUpdateThread.IsBackground = true;
dbUpdateThread.Start();
At least two disadvantages are:
You are reducing the performance significantly. Having this on a busy web server is like having a crowd outside a cinema but letting people go in through a person-wide entrance.
A session has its internal identity map (cache). A single session per application means that the memory consumption grows as users access different data from the database. Ultimately you can even end up with the whole database in memory, which of course would just not work. This then requires calling a method to drop the first-level cache from time to time. However, there is no good moment to drop the cache. You just can't drop it at the beginning of a request, because other concurrent sessions could suffer from this.
I am sure people will add other disadvantages.
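For contrast, a minimal sketch of the conventional session-per-unit-of-work shape, which avoids both problems; here _sessionFactory is assumed to be an injected NHibernate ISessionFactory:
public void SaveOrUpdate(T instance)
{
    if (instance == null) return;

    // A fresh session per unit of work: a short-lived identity map,
    // no contention, and no shared state to lock.
    using (var session = _sessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        session.SaveOrUpdate(instance);
        transaction.Commit();
    }
}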

Registering change notification with Active Directory using C#

This link http://msdn.microsoft.com/en-us/library/aa772153(VS.85).aspx says:
You can register up to five notification requests on a single LDAP connection. You must have a dedicated thread that waits for the notifications and processes them quickly. When you call the ldap_search_ext function to register a notification request, the function returns a message identifier that identifies that request. You then use the ldap_result function to wait for change notifications. When a change occurs, the server sends you an LDAP message that contains the message identifier for the notification request that generated the notification. This causes the ldap_result function to return with search results that identify the object that changed.
I cannot find a similar behavior looking through the .NET documentation. If anyone knows how to do this in C# I'd be very grateful to know. I'm looking to see when attributes change on all the users in the system so I can perform custom actions depending on what changed.
I've looked through stackoverflow and other sources with no luck.
Thanks.
I'm not sure it does what you need, but have a look at http://dunnry.com/blog/ImplementingChangeNotificationsInNET.aspx
Edit: Added text and code from the article:
There are three ways of figuring out things that have changed in Active Directory (or ADAM). These have been documented for some time over at MSDN in the aptly titled "Overview of Change Tracking Techniques". In summary:

1. Polling for Changes Using uSNChanged. This technique checks the 'highestCommittedUSN' value to start and then performs searches for 'uSNChanged' values that are higher subsequently. The 'uSNChanged' attribute is not replicated between domain controllers, so you must go back to the same domain controller each time for consistency. Essentially, you perform a search looking for the highest 'uSNChanged' value + 1 and then read in the results, tracking them in any way you wish.

Benefits:
- This is the most compatible way. All languages and all versions of .NET support it, since it is a simple search.

Disadvantages:
- There is a lot here for the developer to take care of. You get the entire object back, and you must determine what has changed on the object (and if you care about that change).
- Dealing with deleted objects is a pain.
- This is a polling technique, so it is only as real-time as how often you query. This can be a good thing depending on the application. Note that intermediate values are not tracked here either.

2. Polling for Changes Using the DirSync Control. This technique uses the ADS_SEARCHPREF_DIRSYNC option in ADSI and the LDAP_SERVER_DIRSYNC_OID control under the covers. Simply make an initial search, store the cookie, and then later search again and send the cookie. It will return only the objects that have changed.

Benefits:
- This is an easy model to follow. Both System.DirectoryServices and System.DirectoryServices.Protocols support this option.
- Filtering can reduce what you need to bother with. As an example, if my initial search is for all users "(objectClass=user)", I can subsequently filter on polling with "(sn=dunn)" and only get back the combination of both filters, instead of having to deal with everything from the initial filter.
- The Windows 2003+ option removes the administrative limitation for using this option (object security).
- The Windows 2003+ option will also give you the ability to return only the incremental values that have changed in large multi-valued attributes. This is a really nice feature.
- Deals well with deleted objects.

Disadvantages:
- This is a .NET 2.0 or later only option. Users of .NET 1.1 will need to use uSNChanged tracking. Scripting languages cannot use this method.
- You can only scope the search to a partition. If you want to track only a particular OU or object, you must sort out those results yourself later.
- Using this with non-Windows 2003 mode domains comes with the restriction that you must have replication-get-changes permissions (default only admin) to use it.
- This is a polling technique. It does not track intermediate values either. So, if an object you want to track changes multiple times between the searches, you will only get the last change. This can be an advantage depending on the application.

3. Change Notifications in Active Directory. This technique registers a search on a separate thread that will receive notifications when any object changes that matches the filter. You can register up to 5 notifications per async connection.

Benefits:
- Instant notification. The other techniques require polling.
- Because this is a notification, you will get all changes, even the intermediate ones that would have been lost in the other two techniques.

Disadvantages:
- Relatively resource intensive. You don't want to do a whole ton of these, as it could cause scalability issues with your controller.
- This only tells you that the object has changed, but it does not tell you what the change was. You need to figure out if the attribute you care about has changed or not. That being said, it is pretty easy to tell if the object has been deleted (easier than uSNChanged polling at least).
- You can only do this in unmanaged code or with System.DirectoryServices.Protocols.

For the most part, I have found that DirSync has fit the bill for me in virtually every situation. I never bothered to try any of the other techniques. However, a reader asked if there was a way to do the change notifications in .NET. I figured it was possible using SDS.P, but had never tried it. Turns out, it is possible and actually not too hard to do. My first thought on writing this was to use the sample code found on MSDN (and referenced from option #3) and simply convert this to System.DirectoryServices.Protocols. This turned out to be a dead end. The way you do it in SDS.P and the way the sample code works are different enough that it is of no help. Here is the solution I came up with:
public class ChangeNotifier : IDisposable
{
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn,                //root the search here
            "(objectClass=*)", //very inclusive
            scope,             //any scope works
            null               //we are interested in all attributes
        );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request);

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);
        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
It is a relatively simple class that you can use to register searches. The trick is using the GetPartialResults method in the callback method to get only the change that has just occurred. I have also included the very simplified EventArgs class I am using to pass results back. Note, I am not doing anything about threading here and I don't have any error handling (this is just a sample). You can consume this class like so:
static void Main(string[] args)
{
    using (LdapConnection connect = CreateConnection("localhost"))
    {
        using (ChangeNotifier notifier = new ChangeNotifier(connect))
        {
            //register some objects for notifications (limit 5)
            notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
            notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

            notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

            Console.WriteLine("Waiting for changes...");
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}

static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
    Console.WriteLine(e.Result.DistinguishedName);
    foreach (string attrib in e.Result.Attributes.AttributeNames)
    {
        foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
        {
            Console.WriteLine("\t{0}: {1}", attrib, item);
        }
    }
    Console.WriteLine();
    Console.WriteLine("====================");
    Console.WriteLine();
}
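Since the article recommends DirSync for most cases but only shows the notification code, here is a minimal DirSync polling sketch in System.DirectoryServices.Protocols. The base DN reuses the sample domain from the article; persisting the cookie between runs (so polling survives restarts) is left out:
using System;
using System.DirectoryServices.Protocols;

static byte[] PollChanges(LdapConnection connection, byte[] cookie)
{
    var request = new SearchRequest(
        "dc=dunnry,dc=net",   // DirSync searches are scoped to a partition
        "(objectClass=user)", // only users
        SearchScope.Subtree,
        null);                // all attributes

    // Pass null/empty cookie on the first call; thereafter pass the
    // cookie returned by the previous poll to get only the deltas.
    request.Controls.Add(new DirSyncRequestControl(
        cookie, DirectorySynchronizationOptions.ObjectSecurity));

    var response = (SearchResponse)connection.SendRequest(request);

    foreach (SearchResultEntry entry in response.Entries)
    {
        // Only changed objects come back on incremental polls.
        Console.WriteLine(entry.DistinguishedName);
    }

    // Hand the new cookie back for the next poll.
    foreach (DirectoryControl control in response.Controls)
    {
        if (control is DirSyncResponseControl dirSync)
            return dirSync.Cookie;
    }
    return cookie;
}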
