What architecture to choose (WCF services, server-side logic) - C#

I am developing server-side logic to process requests and respond with data to a front-end server as well as to direct mobile app connections.
I have implemented a SessionContext class that basically ensures there is a matching session record in the DB for every service that is called (with exceptions for forgot-password cases, etc.).
I am now trying to implement event logging. I want common logic so I can log all requests, exceptions, data, etc.
I have come up with this code, but somehow I don't feel good about it - too much code for each service method. Are there any clever tricks I might use to make it shorter and easier to read and write?
The current implementation uses an EventLogic class to log events to an event table. At some point some events might be related to a session, so I am passing eventLog as a parameter to SessionContext (to create a link between the event and the session). SessionContext saves entity data on successful dispose... I have a gut feeling that something is wrong with my design.
public Session CreateUser(string email, string password, System.Net.IPAddress ipAddress)
{
    using (var eventLog = new EventLogic())
    {
        try
        {
            eventLog.LogCreateUser(email, password, ipAddress);
            using (var context = SessionContext.CreateUser(eventLog, email, password, ipAddress))
            {
                return new Session()
                {
                    Id = context.Session.UId,
                    HasExpired = context.Session.IsClosed,
                    IsEmailVerified = context.Session.User.IsEmailVerified,
                    TimeCreated = context.Session.TimeCreated,
                    PublicUserId = CryptoHelper.GuidFromId(context.Session, context.Session.UserId, CryptoHelper.TargetTypeEnum.PublicUser),
                    ServerTime = context.Time
                };
            }
        }
        catch (Exception e)
        {
            eventLog.Exception(e);
            throw; // rethrow so every code path still returns or throws
        }
    }
}

You should consider using an established logging framework for this - something like SLF4J + LogBack in the Java world, or NLog/Serilog on .NET - rather than hand-rolling the plumbing.
If your classes follow SRP, you should have no more than one call like LogCreateUser in your application, and in that case there is no need to extract the logging logic into a new class.
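Independently of that, one way to cut the per-method boilerplate the question complains about is to pull the try/log/dispose plumbing into a single wrapper that takes delegates. This is only a sketch built on the types from the question (EventLogic, SessionContext); the Execute helper and MapSession are hypothetical names:

private static TResult Execute<TResult>(Action<EventLogic> logCall, Func<EventLogic, TResult> body)
{
    // Hypothetical wrapper: owns the EventLogic lifetime and logs any exception exactly once.
    using (var eventLog = new EventLogic())
    {
        try
        {
            logCall(eventLog);
            return body(eventLog);
        }
        catch (Exception e)
        {
            eventLog.Exception(e);
            throw;
        }
    }
}

// Each service method then shrinks to its own logic only:
public Session CreateUser(string email, string password, System.Net.IPAddress ipAddress)
{
    return Execute(
        log => log.LogCreateUser(email, password, ipAddress),
        log =>
        {
            using (var context = SessionContext.CreateUser(log, email, password, ipAddress))
            {
                return MapSession(context); // same mapping to the Session DTO as in the question
            }
        });
}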

Related

How to raise domain Event When I don't want to share actual domain model

I'm trying to implement DDD in my small project, but I am not able to understand how to raise a domain event in the case below.
Account Domain
public class Account : BaseEntity
{
    public string PhoneNumber { get; set; }
    public int OTP { get; set; }

    public Account()
    {
    }

    public Account(string phoneNumber, short otp)
    {
        this.PhoneNumber = phoneNumber;
        this.OTP = otp;
        CreatedDate = DateTime.Now;
        RowKey = Guid.NewGuid().ToString();
        PartitionKey = phoneNumber;
    }
}
Account Service
public async Task<bool> GenerateOTP(string phoneNumber)
{
    if (phoneNumber.Length != 10)
        throw new ArgumentException(ApplicationConstraint.InvalidNumber);
    var otp = Convert.ToInt16(new Random().Next(1000, 9999));
    var account = new Account(phoneNumber, otp);
    await this.accountRepository.AddEntity(account);
    return true;
}
Account Repository (Azure Table Storage is my database)
public virtual async Task AddEntity(TEntity entity)
{
    TableOperation insertOperation = TableOperation.Insert(entity);
    await table.ExecuteAsync(insertOperation);
}
I want to raise the domain event only when the data has actually been saved in the database. As a workaround, I'm calling the messaging service from the account service.
Given the limited information provided, one option would be to create an AccountCreated event (or an EntityCreated event if this is a cross-cutting concern) and publish it through some bus where consumers can asynchronously receive it and do any subsequent processing needed.
The event need not use domain entities, and it can contain the information/data necessary to do any subsequent processing without the need to access a shared DB (thus adhering to DDD & microservice guidelines).
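As a sketch of what such an event might carry (the type and property names here are illustrative, not part of the question's code), it is just a plain DTO holding the data consumers need rather than the Account entity itself:

// Hypothetical event payload; deliberately decoupled from the Account domain entity.
public class AccountCreated
{
    public string PhoneNumber { get; set; }
    public string AccountRowKey { get; set; }
    public DateTime CreatedDate { get; set; }
}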
----Edit----
In the above I assumed that this is an established system and that Azure Storage isn't something that can change. Publishing an event and handling it is pretty simple, but there are some things you need to be aware of. In general, you have 3 options here:
1. Publish right after (or right before) saving. This isn't wrong; it's the simple way to do it, and (if you adopt an event-first methodology) you can do it in a generic way across your entities with minimal work. However, you need to be conscious of how to deal with errors. Specifically, if you store the entity first and then publish the event, and the process crashes in between, the event may be missed, so later workflows will not kick off. If you do the reverse (publish then store), you run the risk of double-publishing the event. In this case you have two options:
1.1. If you store-then-publish: just accept the (really rare) possibility of not publishing an event. This is something you need to discuss with the business, and you can mitigate the severity by logging the event before trying to save the entity.
1.2. If you publish-then-store (you'll need to do this if the cost of fixing any issues ad hoc is too great): you can fix the problem by having your consumers check the id of the incoming message to see whether they have processed it before and reject it if they have, OR make the process idempotent (if possible), meaning that doing the process twice isn't a problem.
2. Use event sourcing. This isn't difficult in my opinion, but it's obviously overhead if this is a simple application, and while not difficult, it does need a significant amount of reading up if you're not familiar with it. If this is a non-trivial application, event sourcing can help a lot, because observers can just observe the events in the stream and respond to them (so there is no need to explicitly publish the changes).
3. Append the event to a separate table within the same transaction in which you're storing the entity, and use the outbox pattern (publish those events from a separate service, marking them as published once they've been sent). Honestly, the pattern as usually shown is a bit simplistic and there are a lot of small, tricky complexities, so prefer an existing implementation if you can find one.
Honestly, if you can get away with 1.1, do that. It's simple and problems only very rarely appear. Just log the operation before you do it so that you can redo it manually in the rare case of issues.
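To make option 1.1 concrete, here is a minimal store-then-publish sketch of the question's GenerateOTP flow. The logger and IEventPublisher abstractions and the AccountCreated event are assumptions, not part of the question's code; any logging library and any bus (Azure Service Bus, an in-process dispatcher, etc.) could sit behind them:

public interface IEventPublisher
{
    // Hypothetical bus abstraction.
    Task PublishAsync<TEvent>(TEvent @event);
}

public async Task<bool> GenerateOTP(string phoneNumber)
{
    if (phoneNumber.Length != 10)
        throw new ArgumentException(ApplicationConstraint.InvalidNumber);

    var otp = Convert.ToInt16(new Random().Next(1000, 10000)); // upper bound is exclusive
    var account = new Account(phoneNumber, otp);

    // 1.1: log the intent first so a missed publish can be replayed by hand,
    // then store, then publish.
    this.logger.LogInformation("Creating account for {PhoneNumber}", phoneNumber);
    await this.accountRepository.AddEntity(account);
    await this.eventPublisher.PublishAsync(new AccountCreated
    {
        PhoneNumber = account.PhoneNumber,
        AccountRowKey = account.RowKey,
        CreatedDate = account.CreatedDate
    });
    return true;
}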

Pass parameters from a project to a specific class in another project

I just started to learn C# for a school project but I'm stuck on something.
I have a solution with 2 projects (and each project has a class), something like this:
Solution:
Server (project) (...) MyServerClass.cs, Program.cs
App (project) (...) MyAppClass.cs, Program.cs
In my "MyServerClass.cs", I have this:
class MyServerClass
{
    ...
    public void SomeMethod()
    {
        Process.Start("App.exe", "MyAppClass");
    }
}
How can I properly send, for example, an IP address and port? Would something like this work?
class MyServerClass
{
    ....
    public void SomeMethod()
    {
        string ip = "127.0.0.1";
        int port = 8888;
        Process.Start("App.exe", "MyAppClass " + ip + " " + port);
    }
}
Then in my "MyAppClass.cs", how can I receive that IP address and port?
EDIT:
The objective of this work is to practice processes/threads/sockets. The idea is to have a server that receives emails and filters them to determine whether or not they're spam. We need 4 or 5 filters. The idea was to have them as separate projects (e.g. Filter1.exe, Filter2.exe, ...), but I was trying to have only one project (e.g. Filters.exe) with the filters as classes (Filter1.cs, Filter2.cs, ...) and then create a new process for each different filter.
I guess I'll stick to a project for each filter!
Thanks!
There are a number of ways to achieve this, each with their own pros/cons.
Some possible solutions:
Pass the values in on the command line. Pros: Easy. Cons: Can only be passed in once on launch. Unidirectional (child process can't send info back). Doesn't scale well for complex structured data.
Create a webservice (either in the server or client). Connect to it and either pull/push the appropriate settings. Pros: Flexible, ongoing, potentially bi-directional with some form of polling and works if client/server are on different hosts. Cons: A little bit more complex, requires one app to be able to locate the web address of the other which is trivial locally and more involved over a network.
Use shared memory via a memory mapped file. This approach allows multiple processes to access the same chunk of memory. One process can write the required data and the others can read it. Pros: Efficient, bi-directional, can be disk-backed to persist state through restarts. Cons: Requires pointers and an understanding of how they work. Requires a little more manipulation of data to perform a read/write.
There are dozens more ways. Without knowing your situation in detail, it's hard to recommend one over another.
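To make the shared-memory option (the third one above) concrete, a minimal, Windows-only sketch using a named memory-mapped file could look like the following; the map name and the length-prefixed string format are arbitrary choices for illustration:

using System.IO.MemoryMappedFiles;
using System.Text;

// Writer (server) side: create a named map and write a small settings string.
using (var mmf = MemoryMappedFile.CreateNew("MyAppSettings", 1024))
using (var accessor = mmf.CreateViewAccessor())
{
    byte[] payload = Encoding.UTF8.GetBytes("127.0.0.1:8888");
    accessor.Write(0, payload.Length);                  // length prefix
    accessor.WriteArray(4, payload, 0, payload.Length); // data
    // ... keep mmf alive while the child process runs ...
}

// Reader (child) side: open the same map by name and read the string back.
using (var mmf = MemoryMappedFile.OpenExisting("MyAppSettings"))
using (var accessor = mmf.CreateViewAccessor())
{
    int length = accessor.ReadInt32(0);
    byte[] payload = new byte[length];
    accessor.ReadArray(4, payload, 0, length);
    string settings = Encoding.UTF8.GetString(payload); // "127.0.0.1:8888"
}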
Edit Re: Updated requirements
Ok, command line is definitely a good choice here. A quick detour into some architecture...
There's no reason you can't do this with a single project.
First up, use an interface to make sure all your filters are interchangeable. Something like this...
public interface IFilter {
    FilterResult Filter(string email);
    void SetConfig(string config);
}
SetConfig() is optional but potentially useful to reconfigure a filter without a recompile.
You also need to decide what your IFilter's FilterResult is going to be. Is it a pass/fail? Or a score? Maybe some flags and other metrics.
If you wanted to do multiple projects, you'd put that interface in a "shared" or "common" project on its own and reference it from every other project. This also makes it easy for third parties to develop a filter.
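For illustration only, FilterResult could be as simple as a verdict plus a score, and a trivial keyword-based filter might implement IFilter like this (both types are hypothetical examples, not part of the answer's required design):

public class FilterResult {
    public bool IsSpam { get; set; }
    public double Score { get; set; }
}

public class KeywordFilter : IFilter {
    private string[] keywords = { "viagra", "lottery" };

    public FilterResult Filter(string email) {
        // Count keyword hits and flag the mail if any are present.
        int hits = 0;
        foreach (var keyword in keywords) {
            if (email.IndexOf(keyword, StringComparison.OrdinalIgnoreCase) >= 0) {
                hits++;
            }
        }
        return new FilterResult { IsSpam = hits > 0, Score = hits };
    }

    public void SetConfig(string config) {
        // e.g. a semicolon-separated keyword list
        if (!string.IsNullOrEmpty(config)) {
            keywords = config.Split(';');
        }
    }
}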
Anyway, next up. Let's look at how the filter is hosted. You want something that's going to listen on the network but that's not the responsibility of the filter itself, so we need a network client. What you use here is up to you. WCF in one flavour or another seems to be a prime candidate. Your network client class should take in its constructor a network port to listen on and an instance of the filter...
public class NetworkClient {
    private string endpoint;
    private IFilter filter;

    public NetworkClient(string Endpoint, IFilter Filter) {
        this.filter = Filter;
        this.endpoint = Endpoint;
        this.Setup();
    }

    void Setup() {
        // Set up your network client to listen on endpoint.
        // When it receives a message, pass it to filter.Filter(msg);
    }
}
Finally, we need an application to host everything. It's up to you whether you go for a console app or winforms/wpf. Depends if you want the process to have a GUI. If it's running as a service, the UI won't be visible on a user desktop anyway.
So, we'll have a process that takes the endpoint for the NetworkClient to listen on, a class name for the filter to use, and (optionally) a configuration string to be passed in to the filter before first use.
So, in your app's Main(), do something like this...
static void Main() {
    try {
        const string usage = "Usage: Filter.exe Endpoint FilterType [Config]";
        var args = Environment.GetCommandLineArgs(); // args[0] is the executable path
        Type filterType;
        IFilter filter;
        string endpoint;
        string config = null;
        NetworkClient networkClient;

        switch (args.Length) {
            case 1:
                throw new InvalidOperationException(String.Format("{0}. An endpoint and filter type are required", usage));
            case 2:
                throw new InvalidOperationException(String.Format("{0}. A filter type is required", usage));
            case 3:
                // We've been given an endpoint and type
                break;
            case 4:
                // We've been given an endpoint, type and config.
                config = args[3];
                break;
            default:
                throw new InvalidOperationException(String.Format("{0}. Max three parameters supported. If your config contains spaces, ensure you are quoting/escaping as required.", usage));
        }

        endpoint = args[1];
        filterType = Type.GetType(args[2]); // Look at the overloads here to control where you're searching

        // Now actually create an instance of the filter
        filter = (IFilter)Activator.CreateInstance(filterType);

        if (config != null) {
            // If required, set config
            filter.SetConfig(config);
        }

        // Make a new NetworkClient and tell it where to listen and what to host.
        networkClient = new NetworkClient(endpoint, filter);

        // In a console, loop here until shutdown is requested, however you've implemented that.
        // In winforms, the main UI loop will keep you alive.
    } catch (Exception e) {
        Console.WriteLine(e.ToString()); // Or display a dialog
    }
}
You should then be able to invoke your process like this...
Filter.exe "127.0.0.1:8000" MyNamespace.MyFilterClass
or
Filter.exe "127.0.0.1:8000" MyNamespace.MyFilterClass "dictionary=en-gb;cutoff=0.5"
Of course, you can use a helper class to convert the config string into something your filter can use (like a dictionary).
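A minimal helper for that conversion might look like this (assuming the semicolon-separated key=value format used in the example above):

public static class ConfigParser {
    // Parses "dictionary=en-gb;cutoff=0.5" into { "dictionary" -> "en-gb", "cutoff" -> "0.5" }.
    public static Dictionary<string, string> Parse(string config) {
        var result = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        if (string.IsNullOrEmpty(config)) {
            return result;
        }
        foreach (var pair in config.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries)) {
            var parts = pair.Split(new[] { '=' }, 2);
            if (parts.Length == 2) {
                result[parts[0].Trim()] = parts[1].Trim();
            }
        }
        return result;
    }
}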
When the network client gets a FilterResult back from the filter, it can pass the data back to the server / act accordingly.
I'd also suggest a little reading on Dependency Injection / Inversion of control and Unity. It makes a pluggable architecture much, much simpler. Instead of instantiating everything manually and tracking concrete instances, you can just do something like...
container.Resolve<IFilter>(filterType);
And the container will make sure that you get the appropriate instance for your thread/context.
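If you go the Unity route, the registration side might look roughly like this (the filter class names are illustrative, and the exact bootstrapping varies by Unity version):

var container = new UnityContainer();

// Register each filter implementation under a name so it can be picked at runtime.
container.RegisterType<IFilter, KeywordFilter>("keyword");
container.RegisterType<IFilter, BayesianFilter>("bayesian");

// Later, resolve whichever filter was named on the command line.
IFilter filter = container.Resolve<IFilter>(filterNameFromArgs);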
Hope that helps

How to deal with WCF connection failing

Let's imagine I have a WCF service and a client that consumes some methods of that service.
There are tons of posts on how to handle various exceptions during client/service communication. The only thing still confusing me is the following case:
Service:
[ServiceContract]
public interface IService1
{
    [OperationContract]
    bool ExportData(object data);
}

public class Service1 : IService1
{
    public bool ExportData(object data)
    {
        // Simulate long operation (i.e. inserting data to the DB)
        Thread.Sleep(1000000);
        return true;
    }
}
Client:
class Program
{
    static wsService1.Service1Client client1 = new wsService1.Service1Client();

    static void Main(string[] args)
    {
        object data = GetRecordsFromLocalDB();
        bool result = client1.ExportData(data);
        if (result)
        {
            DeleteRecordsFromLocalDB();
        }
    }
}
The client gets some data from the local DB and sends it to the server. If the result is successful, the client removes the exported rows from the local DB. Now imagine that after the data has already been sent to the server, the connection suddenly fails (e.g. WiFi was disconnected). In this case the data is successfully processed on the server side, but the client never knows about it. And yes, I can catch the connection exception, but I still don't know what I should do with the records in my local DB. I can send the data again later, but then I'll get duplication in the server DB (duplication is allowed in the remote DB), and I don't want to send the same data multiple times.
So, my question is: how do I handle such cases? What are the best practices?
I checked some info about asynchronous operations, but that still assumes I have a stable connection.
As a workaround I could store my export operation under some GUID remotely and check the status for that GUID later. The only thing is that I can't change the remote DB. So please suggest what would be better in my case.
Here are some points to consider:
On the server side you can catch all kinds of errors (with a custom class implementing IErrorHandler) and provide a specific error to the client, letting it know the reason for the failure.
The concept of a service is that it acts as a kind of intermediary between the client and the database, so why would the client retrieve data and then send it to the service?
One way out is to use a transaction, which assures that if an error occurs then no changes are retained.
By the way, if you expect the service to throw an exception, do not create a global service object, since it will end up in a faulted state. Create a new instance for every single call instead (make use of a using statement so as to dispose of the instance). A bool return type does not provide much information about the error if one takes place. Let the operation have a void return type and wrap the call in a try/catch block, which gives you a chance to learn more about the source and nature of the error.
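To make the last point concrete, per-call usage of the generated client could look roughly like this (Service1Client follows the question; Close/Abort is used instead of a plain using block because Dispose throws on a faulted channel, and the retry bookkeeping is only indicated by comments):

object data = GetRecordsFromLocalDB();
var client = new wsService1.Service1Client();
try
{
    client.ExportData(data);      // better: a void operation that throws a fault on failure
    client.Close();
    DeleteRecordsFromLocalDB();   // delete local rows only after the call completed cleanly
}
catch (TimeoutException)
{
    client.Abort();
    // The call may or may not have been processed server-side: keep the local rows
    // and retry later, ideally tagging the batch with an id so the server can spot duplicates.
}
catch (CommunicationException)
{
    client.Abort();
    // Same handling: do not delete local data, schedule a retry.
}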

SslStream, disable session caching

The MSDN documentation says
The Framework caches SSL sessions as they are created and attempts to reuse a cached session for a new request, if possible. When attempting to reuse an SSL session, the Framework uses the first element of ClientCertificates (if there is one), or tries to reuse an anonymous sessions if ClientCertificates is empty.
How can I disable this caching?
At the moment I am experiencing a problem with reconnecting to a server (i.e., the first connection works fine, but on an attempt to reconnect the server breaks the session). Restarting the application helps (but of course only for the first connection attempt). I assume the root of the problem is caching.
I've checked the packets with a sniffer; the difference is in just a single place, only in the Client Hello messages:
First connection to the server (successful):
Second connection attempt (no program restart, failed):
The difference seems to be just the session identifier.
P.S. I'd like to avoid using 3rd-party SSL clients. Is there a reasonable solution?
This is a translation of this question from ru.stackoverflow
Caching is handled inside SecureChannel - an internal class that wraps SSPI and is used by SslStream. I don't see any hooks inside it that you could use to disable session caching for client connections.
You can clear cache between connections using reflection:
var sslAssembly = Assembly.GetAssembly(typeof(SslStream));
var sslSessionCacheClass = sslAssembly.GetType("System.Net.Security.SslSessionsCache");
var cachedCredsInfo = sslSessionCacheClass.GetField("s_CachedCreds", BindingFlags.NonPublic | BindingFlags.Static);
var cachedCreds = (Hashtable)cachedCredsInfo.GetValue(null);
cachedCreds.Clear();
But this is very bad practice. Consider fixing the server side instead.
So I solved this problem a bit differently. I really didn't like the idea of reflecting into that private static cache to dump it, because you don't really know what you're getting into by doing so; you're basically circumventing encapsulation, and that could cause unforeseen problems. More than that, I was worried about race conditions where I dump the cache and, before I send the request, some other thread comes in and establishes a new session, so my first thread inadvertently hijacks that session. Bad news... anyway, here's what I did.
I stopped to think about whether or not there was a way to sort of isolate the process and then an Android co-worker of mine recalled the availability of AppDomains. We both agreed that spinning one up should allow the Tcp/Ssl call to run, isolated from everything else. This would allow the caching logic to remain intact without causing conflicts between SSL sessions.
Basically, I had originally written my SSL client to be internal to a separate library. Then within that library, I had a public service act as a proxy/mediator to that client. In the application layer, I wanted the ability to switch between services (HSM services, in my case) based on the hardware type, so I wrapped that into an adapter and put it behind an interface to be used with a factory. Ok, so how is that relevant? Well, it just made it easier to do this AppDomain thing cleanly, without forcing this behavior on any other consumer of the public service (the proxy/mediator I spoke of). You don't have to follow this abstraction, I just like to share good examples of abstraction whenever I find them :)
Now, in the adapter, instead of calling the service directly, I basically create the domain. Here is the ctor:
public VCRklServiceAdapter(
    string hostname,
    int port,
    IHsmLogger logger)
{
    Ensure.IsNotNullOrEmpty(hostname, nameof(hostname));
    Ensure.IsNotDefault(port, nameof(port), failureMessage: $"It does not appear that the port number was actually set (port: {port})");
    Ensure.IsNotNull(logger, nameof(logger));

    ClientId = Guid.NewGuid();
    _logger = logger;
    _hostname = hostname;
    _port = port;

    // configure the domain
    _instanceDomain = AppDomain.CreateDomain(
        $"vcrypt_rkl_instance_{ClientId}",
        null,
        AppDomain.CurrentDomain.SetupInformation);

    // using the configured domain, grab a command instance from which we can
    // marshall in some data
    _rklServiceRuntime = (IRklServiceRuntime)_instanceDomain.CreateInstanceAndUnwrap(
        typeof(VCServiceRuntime).Assembly.FullName,
        typeof(VCServiceRuntime).FullName);
}
All this does is create a named domain in which my actual service will run in isolation. Now, most articles that I came across on how to actually execute within the domain sort of over-simplify how it works. The examples typically involve calling myDomain.DoCallback(() => ...); which isn't wrong, but trying to get data in and out of that domain will likely become problematic, as serialization will stop you dead in your tracks. Simply put, objects that are instantiated outside of DoCallback() are not the same objects when referenced from inside DoCallback, since they were created outside of this domain (see object marshalling), so you'll likely get all kinds of serialization errors. This isn't a problem if the entire operation, input and output and all, can run from inside myDomain.DoCallback(), but it is problematic if you need to use external parameters and return something across the AppDomain boundary back to the originating domain.
I came across a different pattern here on SO that worked for me and solved this problem. Look at the _rklServiceRuntime = assignment in my sample ctor. What this does is ask the domain to instantiate an object for you to act as a proxy from that domain, which allows you to marshal some objects in and out of it. Here is my implementation of IRklServiceRuntime:
public interface IRklServiceRuntime
{
    RklResponse Run(RklRequest request, string hostname, int port, Guid clientId, IHsmLogger logger);
}

public class VCServiceRuntime : MarshalByRefObject, IRklServiceRuntime
{
    public RklResponse Run(
        RklRequest request,
        string hostname,
        int port,
        Guid clientId,
        IHsmLogger logger)
    {
        Ensure.IsNotNull(request, nameof(request));
        Ensure.IsNotNullOrEmpty(hostname, nameof(hostname));
        Ensure.IsNotDefault(port, nameof(port), failureMessage: $"It does not appear that the port number was actually set (port: {port})");
        Ensure.IsNotNull(logger, nameof(logger));

        // these are set here instead of passed in because they are not
        // serializable
        var clientCert = ApplicationValues.VCClientCertificate;
        var clientCerts = new X509Certificate2Collection(clientCert);

        using (var client = new VCServiceClient(hostname, port, clientCerts, clientId, logger))
        {
            var response = client.RetrieveDeviceKeys(request);
            return response;
        }
    }
}
This inherits from MarshalByRefObject, which allows it to cross AppDomain boundaries, and it has a single method that takes your external parameters and executes your logic from within the domain that instantiated it.
So now, back to the service adapter: all the service adapter has to do now is call _rklServiceRuntime.Run(...) and feed in the necessary serializable parameters. Then I just create as many instances of the service adapter as I need, and they all run in their own domains. This works for me because my SSL calls are small and brief, and these requests are made inside an internal web service where instancing requests like this is very important. Here is the complete adapter:
public class VCRklServiceAdapter : IRklService
{
    private readonly string _hostname;
    private readonly int _port;
    private readonly IHsmLogger _logger;
    private readonly AppDomain _instanceDomain;
    private readonly IRklServiceRuntime _rklServiceRuntime;

    public Guid ClientId { get; }

    public VCRklServiceAdapter(
        string hostname,
        int port,
        IHsmLogger logger)
    {
        Ensure.IsNotNullOrEmpty(hostname, nameof(hostname));
        Ensure.IsNotDefault(port, nameof(port), failureMessage: $"It does not appear that the port number was actually set (port: {port})");
        Ensure.IsNotNull(logger, nameof(logger));

        ClientId = Guid.NewGuid();
        _logger = logger;
        _hostname = hostname;
        _port = port;

        // configure the domain
        _instanceDomain = AppDomain.CreateDomain(
            $"vc_rkl_instance_{ClientId}",
            null,
            AppDomain.CurrentDomain.SetupInformation);

        // using the configured domain, grab a command instance from which we can
        // marshall in some data
        _rklServiceRuntime = (IRklServiceRuntime)_instanceDomain.CreateInstanceAndUnwrap(
            typeof(VCServiceRuntime).Assembly.FullName,
            typeof(VCServiceRuntime).FullName);
    }

    public RklResponse GetKeys(RklRequest rklRequest)
    {
        Ensure.IsNotNull(rklRequest, nameof(rklRequest));

        var response = _rklServiceRuntime.Run(
            rklRequest,
            _hostname,
            _port,
            ClientId,
            _logger);

        return response;
    }

    /// <summary>
    /// Releases unmanaged and - optionally - managed resources.
    /// </summary>
    public void Dispose()
    {
        AppDomain.Unload(_instanceDomain);
    }
}
Notice the Dispose method - don't forget to unload the domain. This service implements IRklService, which implements IDisposable, so when I use it, it is used with a using statement.
This seems a bit contrived, but it's really not, and now the logic runs in its own domain, in isolation, so the caching logic remains intact but non-problematic. Much better than meddling with the SslSessionsCache!
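For completeness, consumption of the adapter then looks something like this (the hostname, the request construction, and the logger instance are illustrative):

using (IRklService service = new VCRklServiceAdapter("hsm.example.local", 9999, logger))
{
    // Each adapter instance runs its SSL call inside its own AppDomain,
    // so the session cache of other instances cannot interfere.
    RklResponse response = service.GetKeys(new RklRequest());
}
// Dispose unloads the AppDomain created in the constructor.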
Please forgive any naming inconsistencies, as I was sanitizing the actual names quickly after writing the post. I hope this helps someone!

Registering change notification with Active Directory using C#

This link http://msdn.microsoft.com/en-us/library/aa772153(VS.85).aspx says:
You can register up to five notification requests on a single LDAP connection. You must have a dedicated thread that waits for the notifications and processes them quickly. When you call the ldap_search_ext function to register a notification request, the function returns a message identifier that identifies that request. You then use the ldap_result function to wait for change notifications. When a change occurs, the server sends you an LDAP message that contains the message identifier for the notification request that generated the notification. This causes the ldap_result function to return with search results that identify the object that changed.
I cannot find a similar behavior looking through the .NET documentation. If anyone knows how to do this in C# I'd be very grateful to know. I'm looking to see when attributes change on all the users in the system so I can perform custom actions depending on what changed.
I've looked through stackoverflow and other sources with no luck.
Thanks.
I'm not sure it does what you need, but have a look at http://dunnry.com/blog/ImplementingChangeNotificationsInNET.aspx
Edit: Added text and code from the article:
There are three ways of figuring out things that have changed in Active Directory (or ADAM). These have been documented for some time over at MSDN in the aptly titled "Overview of Change Tracking Techniques". In summary:

1. Polling for changes using uSNChanged. This technique checks the 'highestCommittedUSN' value to start and then performs searches for 'uSNChanged' values that are higher subsequently. The 'uSNChanged' attribute is not replicated between domain controllers, so you must go back to the same domain controller each time for consistency. Essentially, you perform a search looking for the highest 'uSNChanged' value + 1 and then read in the results, tracking them in any way you wish.
Benefits: This is the most compatible way. All languages and all versions of .NET support this way since it is a simple search.
Disadvantages: There is a lot here for the developer to take care of. You get the entire object back, and you must determine what has changed on the object (and whether you care about that change). Dealing with deleted objects is a pain. This is a polling technique, so it is only as real-time as how often you query; this can be a good thing depending on the application. Note that intermediate values are not tracked here either.

2. Polling for changes using the DirSync control. This technique uses the ADS_SEARCHPREF_DIRSYNC option in ADSI and the LDAP_SERVER_DIRSYNC_OID control under the covers. Simply make an initial search, store the cookie, and then later search again and send the cookie. It will return only the objects that have changed.
Benefits: This is an easy model to follow. Both System.DirectoryServices and System.DirectoryServices.Protocols support this option. Filtering can reduce what you need to bother with; as an example, if my initial search is for all users "(objectClass=user)", I can subsequently filter on polling with "(sn=dunn)" and only get back the combination of both filters, instead of having to deal with everything from the initial filter. The Windows 2003+ option removes the administrative limitation for using this option (object security), and it will also give you the ability to return only the incremental values that have changed in large multi-valued attributes, which is a really nice feature. Deals well with deleted objects.
Disadvantages: This is a .NET 2.0+ only option; users of .NET 1.1 will need to use uSNChanged tracking. Scripting languages cannot use this method. You can only scope the search to a partition; if you want to track only a particular OU or object, you must sort out those results yourself later. Using this with non-Windows 2003 mode domains comes with the restriction that you must have "replication get changes" permissions (by default only admins) to use it. This is a polling technique and it does not track intermediate values either, so if an object you want to track changes multiple times between the searches, you will only get the last change; this can be an advantage depending on the application.

3. Change notifications in Active Directory. This technique registers a search on a separate thread that will receive notifications when any object that matches the filter changes. You can register up to 5 notifications per async connection.
Benefits: Instant notification; the other techniques require polling. Because this is a notification, you will get all changes, even the intermediate ones that would have been lost in the other two techniques.
Disadvantages: Relatively resource intensive. You don't want to do a whole ton of these as it could cause scalability issues with your controller. This only tells you that the object has changed, but it does not tell you what the change was; you need to figure out whether the attribute you care about has changed or not. That being said, it is pretty easy to tell if the object has been deleted (easier than with uSNChanged polling at least). You can only do this in unmanaged code or with System.DirectoryServices.Protocols.

For the most part, I have found that DirSync has fit the bill for me in virtually every situation. I never bothered to try any of the other techniques. However, a reader asked if there was a way to do the change notifications in .NET. I figured it was possible using SDS.P, but had never tried it. Turns out, it is possible and actually not too hard to do. My first thought on writing this was to use the sample code found on MSDN (and referenced from option #3) and simply convert this to System.DirectoryServices.Protocols. This turned out to be a dead end: the way you do it in SDS.P and the way the sample code works are different enough that it is of no help. Here is the solution I came up with:
public class ChangeNotifier : IDisposable
{
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn, //root the search here
            "(objectClass=*)", //very inclusive
            scope, //any scope works
            null //we are interested in all attributes
            );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request);

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);

        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
It is a relatively simple class that you can use to register searches. The trick is using the GetPartialResults method in the callback method to get only the change that has just occurred. I have also included the very simplified EventArgs class I am using to pass results back. Note, I am not doing anything about threading here and I don't have any error handling (this is just a sample). You can consume this class like so:
static void Main(string[] args)
{
    using (LdapConnection connect = CreateConnection("localhost"))
    {
        using (ChangeNotifier notifier = new ChangeNotifier(connect))
        {
            //register some objects for notifications (limit 5)
            notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
            notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

            notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

            Console.WriteLine("Waiting for changes...");
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}

static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
    Console.WriteLine(e.Result.DistinguishedName);

    foreach (string attrib in e.Result.Attributes.AttributeNames)
    {
        foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
        {
            Console.WriteLine("\t{0}: {1}", attrib, item);
        }
    }

    Console.WriteLine();
    Console.WriteLine("====================");
    Console.WriteLine();
}
