How to distribute work to a pool of computers - C#

I have some data that needs to be processed. The data is a tree. The processing goes like this: take a node N; check whether all of its children have already been processed; if not, process them first; if yes, process N. So we go from top to bottom (recursively) to the leaves, process the leaves, then the leaves' parent nodes, and so on, upwards until we arrive back at the root (a post-order traversal).
I know how to write a program that runs on ONE computer that takes the data (i.e. the root node) and processes it as described above. Here is a sketch in C#:
// We assume data is already there, so I do not provide constructor/setters.
public class Data
{
    public object OwnData { get; }
    public IList<Data> Children { get; }
}

// The main class. We just need to call Process once and wait for it to finish.
public class DataManager
{
    internal ISet<Data> ProcessedData { get; init; }

    public DataManager()
    {
        ProcessedData = new HashSet<Data>();
    }

    public void Process(Data rootData)
    {
        new DataHandler(this).Process(rootData);
    }
}

// The handler class that processes data recursively by spawning new instances.
// It informs the manager about data processed.
internal class DataHandler
{
    private readonly DataManager Manager;

    internal DataHandler(DataManager manager)
    {
        Manager = manager;
    }

    internal void Process(Data data)
    {
        if (Manager.ProcessedData.Contains(data))
            return;
        foreach (var subData in data.Children)
            new DataHandler(Manager).Process(subData);
        // ... do some processing of OwnData
        Manager.ProcessedData.Add(data);
    }
}
But how can I write the program so that I can distribute the work to a pool of computers (that are all in the same network, either some local one or the internet)? What do I need to do for that?
Some thoughts/ideas:
1. The DataManager should run on one computer (the main one / the server?); the DataHandlers should run on all the others (the clients?).
2. The DataManager needs to know the computers by some id (what id would that be?), which is set during construction of the DataManager.
3. The DataManager must be able to create new instances of DataHandler on these computers (or kill them if something goes wrong). How?
4. The DataManager must know which computers currently have a running instance of DataHandler and which do not, so that it can decide on which computer it can spawn the next DataHandler (or, if none is free, wait).
These are not requirements! I do not know if these ideas are viable.
In the above thoughts I assumed that each computer can just have one instance of DataHandler. I know this is not necessarily so (because CPU cores and threads...), but in my use case it might actually be that way: The real DataManager and DataHandler are not standalone but run in a SolidWorks context. So in order to run any of that code, I need to have a running SolidWorks instance. From my experience, more than one SolidWorks instance on the same Windows does not work (reliably).
From my half-knowledge it looks like what I need is a kind of multi-computer OS: in a single-computer setting, points 2, 3 and 4 are usually taken care of by the OS. And point 1 kind of is the OS (the OS=DataManager spawns processes=DataHandlers; the OS keeps track of data=ProcessedData and the processes report back).
What exactly do I want to know?
Hints to words, phrases or introductory articles that let me dive into the topic (so that I can work toward implementing this). Possibly language-agnostic.
Hints to C# libraries/frameworks that are fit for this situation.
Tips on what I should or shouldn't do (typical beginners issues). Possibly language-agnostic.
Links to example/demonstration C# projects, e.g. on GitHub. (If not C#, VB is also alright.)

You should read up on microservices and message queues, like RabbitMQ: the producer/consumer approach.
https://www.rabbitmq.com/getstarted.html
If you integrate your microservices with Docker, you can do some pretty nifty stuff.
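Here is a minimal producer/consumer sketch, assuming the RabbitMQ.Client NuGet package (the 6.x API); the queue name and message format are made up for illustration. For your tree, the manager would only publish a node once all of its children have been reported as done:
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class WorkQueueSketch
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // One durable queue into which the manager publishes ready nodes.
            channel.QueueDeclare(queue: "nodes-to-process", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);

            // Producer side (the DataManager): publish a node whose children
            // have all been reported as processed.
            var body = Encoding.UTF8.GetBytes("node-42");
            channel.BasicPublish(exchange: "", routingKey: "nodes-to-process",
                                 basicProperties: null, body: body);

            // Consumer side (one per worker machine): take one node at a time
            // and acknowledge only after processing, so the message is
            // redelivered to another worker if this one crashes.
            channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var nodeId = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine("Processing " + nodeId);
                // ... do the actual work, then report completion back ...
                channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
            };
            channel.BasicConsume(queue: "nodes-to-process", autoAck: false, consumer: consumer);

            Console.ReadLine();
        }
    }
}
In practice the producer and consumers run in separate processes on separate machines; the prefetchCount of 1 matches the one-DataHandler-per-computer constraint, because each worker only holds one node at a time, and the manager can track completions on a reply queue before publishing parent nodes.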

How to pass data to injected services

I want to code a small WinForms or WPF program that ensures that only one person/computer at a time uses a specific shared file through a network share. The file needs to be copied locally while being worked on and copied back to the network share when the work is finished.
I want to create a small WPF application with only one button that flags the file as locked, copies it locally and opens the application associated with that file extension. When this application is closed, the file is copied back to the network share and the lock is released. Each computer will use this application to access the file, so there should never be two computers editing the same file at the same time.
The application has to have some kind of configuration file with the paths of the local and remote folders, the name of the file and the path of the application that opens that file. To ease setup, this will be done with a SettingsWindow.
I am trying to do it with IoC and some kind of lightweight DI container (e.g. SimpleInjector), but I have some questions about how to do it properly:
Program.cs
static class Program
{
    [STAThread]
    static void Main()
    {
        var container = Bootstrap();

        // Any additional configuration, e.g. of your desired MVVM toolkit.
        RunApplication(container);
    }

    private static Container Bootstrap()
    {
        // Create the container as usual.
        var container = new Container();

        // Register your types, for instance:
        container.Register<ILauncher, Launcher>();
        container.Register<IFileSyncService, FileSyncronizer>();
        container.Register<IAppRunnerService, AppRunnerService>();
        container.Register<ILockService, LockService>();

        // Register your windows and view models:
        container.Register<MainWindow>();
        container.Register<ConfigurationWindow>();

        container.Verify();
        return container;
    }

    private static void RunApplication(Container container)
    {
        try
        {
            var app = new App();
            var mainWindow = container.GetInstance<MainWindow>();
            app.Run(mainWindow);
        }
        catch (Exception ex)
        {
            // Log the exception and exit.
        }
    }
}
MainWindow
I modified the constructor to receive an interface ILauncher that will be resolved by the DI container.
public partial class MainWindow : Window
{
    public MainWindow(ILauncher launcher)
    {
        InitializeComponent();
    }
    …
}
ILauncher interface
public interface ILauncher
{
    void Run();
}
Launcher implementation
The launcher implementation will take care of coordinating every task needed to launch the application and edit the file: checking and acquiring the lock, synchronizing the file, and executing and monitoring the application until it is closed. To follow the Single Responsibility Principle (SRP), this is done through some injected services:
public class Launcher : ILauncher
{
    public Launcher(
        ILockService lockService,
        IAppRunnerService appRunnerService,
        IFileSyncService fileSyncService)
    {
        LockService = lockService;
        AppRunnerService = appRunnerService;
        FileSyncService = fileSyncService;
    }

    public ILockService LockService { get; }
    public IAppRunnerService AppRunnerService { get; }
    public IFileSyncService FileSyncService { get; }

    public void Run()
    {
        // TODO: check lock + acquire lock
        // TODO: synchronize file
        // TODO: subscribe to the IAppRunnerService.OnStop event
        // TODO: start app through IAppRunnerService.Run method
    }
}
My questions:
Creating new instances with the "new" keyword or with manual calls to the container inside classes or windows is a bad practice, so the entry point of the desktop application (usually some kind of main window) should ask (via the constructor) for everything it needs. That doesn't seem like a proper approach once the application grows. Am I right? What is the solution?
In my application, every service needs some kind of runtime data (the executable file path, the local or remote directory…). The container instantiates them early in the application execution, even before this data is known. How can the services receive this data? (Please note that data could be modified later by the SettingsWindow).
Sorry for the length of this post, but I wanted to make my problem and the context clear.
As you've said, you have to ask for some services in the entry point, but what do you mean by "ask for everything"? Your entry point should have only the services it directly uses specified as ctor parameters. If your application grows and you suddenly have a dozen services called directly by your entry point, that could indicate that some of the services could be consolidated into a bigger unit. That's opinion-based territory, though; as long as you consider it maintainable, it's okay.
There are many possible solutions here; one would be an ISettingsManager allowing real-time access to the settings from any service that takes it as a dependency.
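A minimal sketch of that idea, with invented names (AppSettings and its properties are illustrative, and persistence is left out):
using System.IO;

public class AppSettings
{
    public string LocalFolder { get; set; }
    public string RemoteFolder { get; set; }
    public string FileName { get; set; }
    public string EditorPath { get; set; }
}

public interface ISettingsManager
{
    AppSettings Current { get; }
    void Save(AppSettings settings); // called by the SettingsWindow
}

// Services depend on ISettingsManager rather than on raw values, so they
// read the settings at call time instead of construction time:
public class FileSynchronizerSketch /* : IFileSyncService */
{
    private readonly ISettingsManager settingsManager;

    public FileSynchronizerSketch(ISettingsManager settingsManager)
    {
        this.settingsManager = settingsManager;
    }

    public void CopyToLocal()
    {
        var s = settingsManager.Current; // always the latest values
        File.Copy(Path.Combine(s.RemoteFolder, s.FileName),
                  Path.Combine(s.LocalFolder, s.FileName), overwrite: true);
    }
}
Register it once as a singleton (e.g. container.RegisterSingleton<ISettingsManager, MySettingsManager>() in SimpleInjector) so the SettingsWindow and every service share the same, always-current instance.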

Pass parameters from a project to a specific class in another project

I just started to learn C# for a school project but I'm stuck on something.
I have a solution with 2 projects (and each project has a class), something like this:
Solution:
Server (project) (...) MyServerClass.cs, Program.cs
App (project) (...) MyAppClass.cs, Program.cs
In my "MyServerClass.cs", I have this:
class MyServerClass
{
    ...
    public void SomeMethod()
    {
        Process.Start("App.exe", "MyAppClass");
    }
}
How can I properly send, for example, an IP address and port? Would something like this work?
class MyServerClass
{
    ....
    public void SomeMethod()
    {
        string ip = "127.0.0.1";
        int port = 8888;
        Process.Start("App.exe", "MyAppClass " + ip + " " + port);
    }
}
Then in my "MyAppClass.cs", how can I receive that IP address and port?
EDIT:
The objective of this work is to practice processes/threads/sockets. The idea is to have a server that receives emails and filters them, to know whether they're spam or not. We need to have 4 or 5 filters. The idea was to have them as separate projects (e.g. Filter1.exe, Filter2.exe, ...), but I was trying to have only one project (e.g. Filters.exe) and have the filters as classes (Filter1.cs, Filter2.cs, ...), and then create a new process for each different filter.
I guess I'll stick to a project for each filter!
Thanks!
There are a number of ways to achieve this, each with their own pros/cons.
Some possible solutions:
Pass the values in on the command line (a minimal receiving-side sketch follows below). Pros: Easy. Cons: Can only be passed in once, at launch. Unidirectional (the child process can't send info back). Doesn't scale well for complex structured data.
Create a web service (in either the server or the client). Connect to it and either pull/push the appropriate settings. Pros: Flexible, ongoing, potentially bi-directional with some form of polling, and works if client/server are on different hosts. Cons: A little more complex, and requires one app to be able to locate the web address of the other, which is trivial locally and more involved over a network.
Use shared memory via a memory-mapped file. This approach allows multiple processes to access the same chunk of memory. One process can write the required data and the others can read it. Pros: Efficient, bi-directional, can be disk-backed to persist state through restarts. Cons: Requires more plumbing (and, in some approaches, pointers and an understanding of how they work), plus a little more manipulation of data to perform a read/write.
There are dozens more ways. Without knowing your situation in detail, it's hard to recommend one over another.
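For the command-line option, the receiving side is just the child application's Main. A minimal sketch matching the Process.Start call from the question (names are illustrative):
// In the App project's Program.cs. With "static void Main(string[] args)",
// args does NOT include the executable name, so for
//     Process.Start("App.exe", "MyAppClass 127.0.0.1 8888")
// you get args[0] = "MyAppClass", args[1] = "127.0.0.1", args[2] = "8888".
static void Main(string[] args)
{
    if (args.Length < 3)
    {
        Console.WriteLine("Usage: App.exe ClassName IP Port");
        return;
    }

    string className = args[0];
    string ip = args[1];
    int port = int.Parse(args[2]);
    Console.WriteLine("Starting {0}, connecting to {1}:{2}", className, ip, port);
}
Note the contrast with Environment.GetCommandLineArgs() used further below, which does include the executable path at index 0.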
Edit Re: Updated requirements
Ok, command line is definitely a good choice here. A quick detour into some architecture...
There's no reason you can't do this with a single project.
First up, use an interface to make sure all your filters are interchangeable. Something like this...
public interface IFilter {
    FilterResult Filter(string email);
    void SetConfig(string config);
}
SetConfig() is optional but potentially useful to reconfigure a filter without a recompile.
You also need to decide what your IFilter's FilterResult is going to be. Is it a pass/fail? Or a score? Maybe some flags and other metrics.
If you wanted to do multiple projects, you'd put that interface in a "shared" or "common" project on its own and reference it from every other project. This also makes it easy for third parties to develop a filter.
Anyway, next up. Let's look at how the filter is hosted. You want something that's going to listen on the network but that's not the responsibility of the filter itself, so we need a network client. What you use here is up to you. WCF in one flavour or another seems to be a prime candidate. Your network client class should take in its constructor a network port to listen on and an instance of the filter...
public class NetworkClient {
    private string endpoint;
    private IFilter filter;

    public NetworkClient(string Endpoint, IFilter Filter) {
        this.filter = Filter;
        this.endpoint = Endpoint;
        this.Setup();
    }

    void Setup() {
        // Set up your network client to listen on endpoint.
        // When it receives a message, pass it to filter.Filter(msg);
    }
}
Finally, we need an application to host everything. It's up to you whether you go for a console app or winforms/wpf. Depends if you want the process to have a GUI. If it's running as a service, the UI won't be visible on a user desktop anyway.
So, we'll have a process that takes the endpoint for the NetworkClient to listen on, a class name for the filter to use, and (optionally) a configuration string to be passed in to the filter before first use.
So, in your app's Main(), do something like this...
static void Main() {
    try {
        const string usage = "Usage: Filter.exe Endpoint FilterType [Config]";

        // Note: Environment.GetCommandLineArgs() includes the executable
        // path at index 0, so the user-supplied parameters start at args[1].
        var args = Environment.GetCommandLineArgs();
        Type filterType;
        IFilter filter;
        string endpoint;
        string config = null;
        NetworkClient networkClient;

        switch (args.Length) {
            case 1:
                throw new InvalidOperationException(String.Format("{0}. An endpoint and filter type are required", usage));
            case 2:
                throw new InvalidOperationException(String.Format("{0}. A filter type is required", usage));
            case 3:
                // We've been given an endpoint and type.
                break;
            case 4:
                // We've been given an endpoint, type and config.
                config = args[3];
                break;
            default:
                throw new InvalidOperationException(String.Format("{0}. Max three parameters supported. If your config contains spaces, ensure you are quoting/escaping as required.", usage));
        }

        endpoint = args[1];
        filterType = Type.GetType(args[2]); // Look at the overloads here to control where you're searching.

        // Now actually create an instance of the filter.
        filter = (IFilter)Activator.CreateInstance(filterType);
        if (config != null) {
            // If required, set config.
            filter.SetConfig(config);
        }

        // Make a new NetworkClient and tell it where to listen and what to host.
        networkClient = new NetworkClient(endpoint, filter);

        // In a console app, loop here until shutdown is requested, however you've implemented that.
        // In winforms, the main UI loop will keep you alive.
    } catch (Exception e) {
        Console.WriteLine(e.ToString()); // Or display a dialog.
    }
}
You should then be able to invoke your process like this...
Filter.exe "127.0.0.1:8000" MyNamespace.MyFilterClass
or
Filter.exe "127.0.0.1:8000" MyNamespace.MyFilterClass "dictionary=en-gb;cutoff=0.5"
Of course, you can use a helper class to convert the config string into something your filter can use (like a dictionary).
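For example, a hedged helper for a "key=value;key=value" config format (the separator characters are an assumption; adjust them to whatever format you settle on):
using System;
using System.Collections.Generic;

static class ConfigParser
{
    // Turns "dictionary=en-gb;cutoff=0.5" into {"dictionary" -> "en-gb", "cutoff" -> "0.5"}.
    public static Dictionary<string, string> Parse(string config)
    {
        var result = new Dictionary<string, string>();
        foreach (var pair in config.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var kv = pair.Split(new[] { '=' }, 2); // split on the first '=' only
            result[kv[0].Trim()] = kv.Length > 1 ? kv[1].Trim() : "";
        }
        return result;
    }
}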
When the network client gets a FilterResult back from the filter, it can pass the data back to the server / act accordingly.
I'd also suggest a little reading on Dependency Injection / Inversion of control and Unity. It makes a pluggable architecture much, much simpler. Instead of instantiating everything manually and tracking concrete instances, you can just do something like...
container.Resolve<IFilter>(filterType);
And the container will make sure that you get the appropriate instance for your thread/context.
Hope that helps

Is locking single session in repository thread safe? (NHibernate)

I read many posts saying multithreaded applications must use a separate session per thread. Perhaps I don't understand how the locking works, but if I put a lock on the session in all repository methods, would that not make a single static session thread safe?
like:
public void SaveOrUpdate(T instance)
{
    if (instance == null) return;

    lock (_session)
    {
        using (ITransaction transaction = _session.BeginTransaction())
        {
            lock (instance)
            {
                _session.SaveOrUpdate(instance);
                transaction.Commit();
            }
        }
    }
}
EDIT:
Please consider the context/type of applications I'm writing:
Not multi-user, not typical user-interaction, but a self-running robot reacting to remote events like financial data and order-updates, performing tasks and saves based on that. Intermittently this can create clusters of up to 10 saves per second. Typically it's the same object graph that needs to be saved every time. Also, on startup, the program does load the full database into an entity-object-graph. So it basically just reads once, then performs SaveOrUpdates as it runs.
Given that the application is typically editing the same object graph, perhaps it would make more sense to have a single thread dedicated to applying these edits to the object graph and then saving them to the database, or perhaps a pool of threads servicing a common queue of edits, where each thread has its own (dedicated) session that it does not need to lock. Look up producer/consumer queues (to start, look here).
Something like this:
[Producer Threads]                [Database Servicer Thread]
Edit Event -\
Edit Event ---> Queue -----> Dequeue and Apply to Session -> Database
Edit Event -/
I'd imagine that a BlockingCollection<Action<ISession>> would be a good starting point for such an implementation.
Here's a rough example (note this is obviously untested):
// Assuming you have a work queue defined as:
public static BlockingCollection<Action<ISession>> myWorkQueue = new BlockingCollection<Action<ISession>>();

// ...and your event args look something like this:
public class MyObjectUpdatedEventArgs : EventArgs {
    public MyObject MyObject { get; set; }
}

// And one of your event handlers:
public void MyObjectWasChangedEventHandler(object sender, MyObjectUpdatedEventArgs e) {
    // The queued action runs against the consumer thread's dedicated session.
    myWorkQueue.Add(session => session.SaveOrUpdate(e.MyObject));
}

// Then a thread in a constant loop processing these items could work:
public void ProcessWorkQueue() {
    var mySession = mySessionFactory.OpenSession();
    while (true) {
        var nextWork = myWorkQueue.Take();
        nextWork(mySession);
    }
}

// And to run the above:
var dbUpdateThread = new Thread(ProcessWorkQueue);
dbUpdateThread.IsBackground = true;
dbUpdateThread.Start();
At least two disadvantages are:
You are reducing the performance significantly. Having this on a busy web server is like having a crowd outside a cinema but letting people in through a single-person-wide entrance.
A session has its internal identity map (cache). A single session per application means that the memory consumption grows as users access different data from the database. Ultimately you can even end up with the whole database in memory, which of course would just not work. This then requires calling a method to drop the first-level cache from time to time (see the snippet below). However, there is no good moment to drop the cache. You can't just drop it at the beginning of a request, because other concurrent sessions could suffer from it.
I am sure people will add other disadvantages.
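Regarding the second point, the methods in question are NHibernate's ISession.Clear() and ISession.Evict(). A hedged illustration, not a recommendation to actually share the session:
using NHibernate;

static class SessionCacheHelper
{
    // Drop the whole first-level cache. As noted above, on a session shared
    // across concurrent requests there is no safe moment to call this.
    public static void DropFirstLevelCache(ISession session)
    {
        session.Clear(); // detaches everything the session currently tracks
    }

    // Or detach just one entity instead of everything.
    public static void DropOne(ISession session, object entity)
    {
        session.Evict(entity);
    }
}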

A question about making a C# class persistent during a file load

Apologies for the nondescript title; it's the best I could think of for the moment.
Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running and be able to call methods on it, even if its calling class is shut down.
The singleton class is simple. It starts a thread that loads the file into the database, while having methods to report on the current status. In a nutshell it's a little like this:
public sealed class BulkFileLoader {
    static BulkFileLoader instance = null;
    int currentCount = 0;

    BulkFileLoader() { }

    public static BulkFileLoader Instance
    {
        get
        {
            // Instantiate the instance if necessary, and return it.
            if (instance == null) instance = new BulkFileLoader();
            return instance;
        }
    }

    public void Go() {
        // Kick off the 'ProcessFile' thread.
    }

    public int GetCurrentCount() {
        return currentCount;
    }

    private void ProcessFile() {
        while (/* more rows in the import file */) {
            // Insert the row into the database.
            currentCount++;
        }
    }
}
The idea is that you can get an instance of BulkFileLoader to execute, which will process a file to load, while at any time you can get real-time updates on the number of rows it has done so far using the GetCurrentCount() method.
This works fine, except the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed, and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.
I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it around as a process. This fixes one problem, since now when I kick off the process, the file will continue to load even if I close the application that called the process. However, now the problem is that I cannot get updates on the current count, since if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the one that is currently in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.
In the end, I want to be able to kick off the BulkFileLoader, and at any time be able to find out how many rows it's processed. However, that is even if I close the application I used to start it.
Can anyone see a solution to my problem?
You could create a Windows Service which will expose, say, a WCF endpoint as its API. Through this API you'll be able to query the service's status and add more files for processing.
You should make your "Bulk Uploader" a service, and have your other processes speak to it via IPC.
You need a service because your upload takes hours, and it sounds like you'd like it to run unattended if necessary and be detached from the calling thread. That's what services do well.
You need some form of Inter-Process Communication because you'd like to send information between processes.
For communicating with your service see NetNamedPipeBinding
You can then send "Job Start" and "Job Status" commands and queries whenever you feel like to your background service.
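A hedged sketch of that combination, reusing the BulkFileLoader from the question; the contract name and pipe addresses are illustrative, not an existing API:
using System;
using System.ServiceModel;

// The API the service exposes over a named pipe.
[ServiceContract]
public interface IBulkLoaderApi
{
    [OperationContract]
    void Go();

    [OperationContract]
    int GetCurrentCount();
}

// Inside the Windows Service process: host a wrapper around the singleton.
// InstanceContextMode.Single is required to pass an instance to ServiceHost.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class BulkLoaderApi : IBulkLoaderApi
{
    public void Go() { BulkFileLoader.Instance.Go(); }
    public int GetCurrentCount() { return BulkFileLoader.Instance.GetCurrentCount(); }
}

public static class HostingSketch
{
    public static ServiceHost Open()
    {
        var host = new ServiceHost(new BulkLoaderApi(),
            new Uri("net.pipe://localhost/BulkFileLoader"));
        host.AddServiceEndpoint(typeof(IBulkLoaderApi), new NetNamedPipeBinding(), "");
        host.Open();
        return host;
    }
}

// Any client process, started and stopped independently, can then ask:
public static class ClientSketch
{
    public static int QueryCount()
    {
        var factory = new ChannelFactory<IBulkLoaderApi>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/BulkFileLoader"));
        IBulkLoaderApi proxy = factory.CreateChannel();
        int count = proxy.GetCurrentCount();
        ((IClientChannel)proxy).Close();
        factory.Close();
        return count;
    }
}
Because the host lives in the always-running service process, any number of short-lived client apps can attach, read the count, and exit without affecting the load.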

Registering change notification with Active Directory using C#

This link http://msdn.microsoft.com/en-us/library/aa772153(VS.85).aspx says:
You can register up to five notification requests on a single LDAP connection. You must have a dedicated thread that waits for the notifications and processes them quickly. When you call the ldap_search_ext function to register a notification request, the function returns a message identifier that identifies that request. You then use the ldap_result function to wait for change notifications. When a change occurs, the server sends you an LDAP message that contains the message identifier for the notification request that generated the notification. This causes the ldap_result function to return with search results that identify the object that changed.
I cannot find a similar behavior looking through the .NET documentation. If anyone knows how to do this in C# I'd be very grateful to know. I'm looking to see when attributes change on all the users in the system so I can perform custom actions depending on what changed.
I've looked through stackoverflow and other sources with no luck.
Thanks.
I'm not sure it does what you need, but have a look at http://dunnry.com/blog/ImplementingChangeNotificationsInNET.aspx
Edit: Added text and code from the article:
There are three ways of figuring out things that have changed in Active Directory (or ADAM). These have been documented for some time over at MSDN in the aptly titled "Overview of Change Tracking Techniques". In summary:

1. Polling for Changes using uSNChanged. This technique checks the 'highestCommittedUSN' value to start and then performs searches for 'uSNChanged' values that are higher subsequently. The 'uSNChanged' attribute is not replicated between domain controllers, so you must go back to the same domain controller each time for consistency. Essentially, you perform a search looking for the highest 'uSNChanged' value + 1 and then read in the results, tracking them in any way you wish.

Benefits:
- This is the most compatible way. All languages and all versions of .NET support this way since it is a simple search.

Disadvantages:
- There is a lot here for the developer to take care of. You get the entire object back, and you must determine what has changed on the object (and if you care about that change).
- Dealing with deleted objects is a pain.
- This is a polling technique, so it is only as real-time as how often you query. This can be a good thing depending on the application. Note, intermediate values are not tracked here either.

2. Polling for Changes Using the DirSync Control. This technique uses the ADS_SEARCHPREF_DIRSYNC option in ADSI and the LDAP_SERVER_DIRSYNC_OID control under the covers. Simply make an initial search, store the cookie, and then later search again and send the cookie. It will return only the objects that have changed.

Benefits:
- This is an easy model to follow. Both System.DirectoryServices and System.DirectoryServices.Protocols support this option.
- Filtering can reduce what you need to bother with. As an example, if my initial search is for all users "(objectClass=user)", I can subsequently filter on polling with "(sn=dunn)" and only get back the combination of both filters, instead of having to deal with everything from the initial filter.
- The Windows 2003+ option removes the administrative limitation for using this option (object security).
- The Windows 2003+ option will also give you the ability to return only the incremental values that have changed in large multi-valued attributes. This is a really nice feature.
- Deals well with deleted objects.

Disadvantages:
- This is a .NET 2.0+ only option. Users of .NET 1.1 will need to use uSNChanged tracking. Scripting languages cannot use this method.
- You can only scope the search to a partition. If you want to track only a particular OU or object, you must sort out those results yourself later.
- Using this with non-Windows 2003 mode domains comes with the restriction that you must have replication-get-changes permissions (default only admin) to use it.
- This is a polling technique. It does not track intermediate values either. So, if an object you want to track changes multiple times between the searches, you will only get the last change. This can be an advantage depending on the application.

3. Change Notifications in Active Directory. This technique registers a search on a separate thread that will receive notifications when any object changes that matches the filter. You can register up to 5 notifications per async connection.

Benefits:
- Instant notification. The other techniques require polling.
- Because this is a notification, you will get all changes, even the intermediate ones that would have been lost in the other two techniques.

Disadvantages:
- Relatively resource intensive. You don't want to do a whole ton of these as it could cause scalability issues with your controller.
- This only tells you if the object has changed, but it does not tell you what the change was. You need to figure out if the attribute you care about has changed or not. That being said, it is pretty easy to tell if the object has been deleted (easier than uSNChanged polling at least).
- You can only do this in unmanaged code or with System.DirectoryServices.Protocols.

For the most part, I have found that DirSync has fit the bill for me in virtually every situation. I never bothered to try any of the other techniques. However, a reader asked if there was a way to do the change notifications in .NET. I figured it was possible using SDS.P, but had never tried it. Turns out, it is possible and actually not too hard to do.

My first thought on writing this was to use the sample code found on MSDN (and referenced from option #3) and simply convert this to System.DirectoryServices.Protocols. This turned out to be a dead end. The way you do it in SDS.P and the way the sample code works are different enough that it is of no help. Here is the solution I came up with:
public class ChangeNotifier : IDisposable
{
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn,                // root the search here
            "(objectClass=*)", // very inclusive
            scope,             // any scope works
            null               // we are interested in all attributes
        );

        // Register our search.
        request.Controls.Add(new DirectoryNotificationControl());

        // We will send this async and register our callback.
        // Note how we would like to have partial results.
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), // set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request);

        // Store in the hash set for disposal later.
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        // Since our search is long running, we don't want to use EndSendRequest.
        PartialResultsCollection prc = _connection.GetPartialResults(result);
        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members
    public void Dispose()
    {
        foreach (var result in _results)
        {
            // End each async search.
            _connection.Abort(result);
        }
    }
    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
It is a relatively simple class that you can use to register searches. The trick is using the GetPartialResults method in the callback method to get only the change that has just occurred. I have also included the very simplified EventArgs class I am using to pass results back. Note, I am not doing anything about threading here and I don't have any error handling (this is just a sample). You can consume this class like so:
static void Main(string[] args)
{
    using (LdapConnection connect = CreateConnection("localhost"))
    {
        using (ChangeNotifier notifier = new ChangeNotifier(connect))
        {
            // Register some objects for notifications (limit 5).
            notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
            notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

            notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

            Console.WriteLine("Waiting for changes...");
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}

static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
    Console.WriteLine(e.Result.DistinguishedName);
    foreach (string attrib in e.Result.Attributes.AttributeNames)
    {
        foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
        {
            Console.WriteLine("\t{0}: {1}", attrib, item);
        }
    }
    Console.WriteLine();
    Console.WriteLine("====================");
    Console.WriteLine();
}
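For comparison, since the article above says DirSync "has fit the bill in virtually every situation", here is a hedged sketch of option #2 (DirSync polling) in System.DirectoryServices.Protocols as well; the root DN and filter are placeholders, and the scope must be the partition root:
using System;
using System.DirectoryServices.Protocols;

class DirSyncSketch
{
    static byte[] cookie; // null/empty on the first poll; persist between polls

    static void Poll(LdapConnection connection)
    {
        var request = new SearchRequest(
            "dc=dunnry,dc=net",   // partition root (placeholder)
            "(objectClass=user)", // your filter (placeholder)
            SearchScope.Subtree,
            null);                // all attributes
        request.Controls.Add(new DirSyncRequestControl(cookie));

        var response = (SearchResponse)connection.SendRequest(request);

        // Only objects changed since the last cookie come back.
        foreach (SearchResultEntry entry in response.Entries)
            Console.WriteLine("Changed: " + entry.DistinguishedName);

        // Store the new cookie (e.g. to disk) for the next poll.
        foreach (DirectoryControl control in response.Controls)
            if (control is DirSyncResponseControl dirSync)
                cookie = dirSync.Cookie;
    }
}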
