Redis keyspace notifications with StackExchange.Redis - c#

I've been looking around and I'm unable to find how to subscribe to keyspace notifications on Redis using the StackExchange.Redis library.
Checking the available tests I found pub/sub using channels, but that works more like a service bus/queue rather than a subscription to events on specific Redis keys.
Is it possible to take advantage of this Redis feature using StackExchange.Redis?

The regular subscriber API should work fine here - it makes no assumptions about use-cases.
However, I do kinda agree that this is inbuilt functionality that could perhaps benefit from helper methods on the API, and perhaps a different delegate signature - to encapsulate the syntax of the keyspace notifications so that people don't need to duplicate it. For that: I suggest you log an issue so that it doesn't get forgotten.
Simple sample of how to subscribe to a keyspace event
First of all, it's important to check that Redis keyspace events are enabled. For example, to get events for keys of type Set, they can be enabled using the CONFIG SET command (K = keyspace channel, E = keyevent channel, s = Set commands):
CONFIG SET notify-keyspace-events KEs
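If you prefer to do this from code, StackExchange.Redis exposes the same setting via IServer.ConfigSet (a minimal sketch; note that CONFIG SET does not persist across restarts, so production setups usually put notify-keyspace-events in redis.conf instead):

using StackExchange.Redis;

// allowAdmin is required to issue CONFIG commands through StackExchange.Redis
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("localhost,allowAdmin=true");
IServer server = connection.GetServer("localhost", 6379);

// "K" = keyspace channel, "E" = keyevent channel, "s" = Set commands
server.ConfigSet("notify-keyspace-events", "KEs");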
Once keyspace events are enabled, it's just about subscribing to the pub-sub channel:
using (ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("localhost"))
{
    IDatabase db = connection.GetDatabase();
    ISubscriber subscriber = connection.GetSubscriber();

    // Pattern-subscribe to all keyspace events for database 0
    subscriber.Subscribe("__keyspace@0__:*", (channel, value) =>
    {
        if ((string)channel == "__keyspace@0__:users" && (string)value == "sadd")
        {
            // Do stuff if some item is added to a hypothetical "users" set in Redis
        }
    });
}
Learn more about keyspace events in the Redis documentation.

Just to extend what the selected answer already describes:
using (ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("localhost"))
{
    IDatabase db = connection.GetDatabase();
    ISubscriber subscriber = connection.GetSubscriber();

    string keyName = "mykey"; // the specific key you want notifications for (illustrative)
    subscriber.Subscribe($"__keyspace@0__:{keyName}", (channel, value) =>
    {
        // Do whatever key-specific handling you need to do here; in my case I used
        // the exact key name that I wanted expiration events for.
    });
}
Another important thing: I had to enable the x class as well (CONFIG SET notify-keyspace-events KEx) to get channel-based updates for expiration notifications.
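For expiration events specifically, you can also subscribe to the keyevent channel, where the message is the name of the key that expired rather than the event name (a sketch; assumes database 0 and that the x flag is enabled as above):

using System;
using StackExchange.Redis;

using (ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("localhost"))
{
    ISubscriber subscriber = connection.GetSubscriber();

    // __keyevent@0__:expired fires once per expired key; the payload is the key name
    subscriber.Subscribe("__keyevent@0__:expired", (channel, key) =>
    {
        Console.WriteLine($"Key expired: {key}");
    });

    Console.ReadLine(); // keep the process (and subscription) alive
}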

Related

How to raise a domain event when I don't want to share the actual domain model

I'm trying to implement DDD in my small project but I'm not able to understand how to raise a domain event in the case below.
Account Domain
public class Account : BaseEntity
{
    public string PhoneNumber { get; set; }
    public int OTP { get; set; }

    public Account()
    {
    }

    public Account(string phoneNumber, short otp)
    {
        this.PhoneNumber = phoneNumber;
        this.OTP = otp;
        CreatedDate = DateTime.Now;
        RowKey = Guid.NewGuid().ToString();
        PartitionKey = phoneNumber;
    }
}
Account Service
public async Task<bool> GenerateOTP(string phoneNumber)
{
    if (phoneNumber.Length != 10)
        throw new ArgumentException(ApplicationConstraint.InvalidNumber);

    var otp = Convert.ToInt16(new Random().Next(1000, 9999));
    var account = new Account(phoneNumber, otp);
    await this.accountRepository.AddEntity(account);
    return true;
}
Account Repository (Azure Table Storage is my database)
public virtual async Task AddEntity(TEntity entity)
{
    TableOperation insertOperation = TableOperation.Insert(entity);
    await table.ExecuteAsync(insertOperation);
}
I want to raise a domain event only when the data has been saved to the database. As a workaround, I'm calling the messaging service from the account service.
Given the limited information provided, one option would be to create an AccountCreated event (or an EntityCreated event if this is a cross-cutting concern) and publish it through some bus where consumers can asynchronously receive it and do any subsequent processing needed.
The event need not use domain entities, and it can contain the information/data necessary to do any subsequent processing without the need to access a shared db (and as such adhering to DDD & microservice guidelines).
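For example, a minimal event contract could look like this (a sketch; the type and property names are illustrative, not taken from the question):

using System;

// Plain data contract - no domain entities leak out of the bounded context
public class AccountCreated
{
    public string AccountId { get; set; }    // e.g. the entity's RowKey
    public string PhoneNumber { get; set; }
    public DateTime CreatedDate { get; set; }
}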
----Edit----
In the above I assumed that this is an established system and Azure Storage isn't something that can change. Publishing an event and handling it is pretty simple, but there are some things you need to be aware of. In general, you have 3 options here:
1. Publishing right after saving isn't wrong. It's a simple way to do it, and (if you adopt an event-first methodology) you can do it in a generic way across your entities with minimal work. However, you need to be conscious of how to deal with errors. Specifically, if you store the entity first and the process then crashes for whatever reason before publishing, the event may be missed, so later workflows will not kick off. If you do the reverse (publish then store), you run the risk of double-publishing the event. In this case you have two options:
1.1. If you store-then-publish: just accept the (really rare) possibility of not publishing an event. This is something you need to discuss with the business, and you can mitigate the severity by logging the event before trying to save the entity.
1.2. If you publish-then-store (you'll need to do this if the cost of fixing any issues ad hoc is too great): you can fix the problem by having your consumers check the id of each incoming message against those already processed and reject duplicates, OR by making the processing idempotent (if possible), meaning that doing it twice isn't a problem.
2. Using event sourcing. This isn't difficult in my opinion, but obviously it's an overhead if this is a simple application, and while not difficult, it does need a significant amount of reading up if you're not familiar with it. If this is a non-trivial application, event sourcing can help a lot, because observers can just observe the events in the buffer and respond to them (so no need to explicitly publish the changes).
3. Append the event to a separate table within the same transaction in which you're storing the entity, and use the outbox pattern (publish those events from a separate service, marking them as published once they've been sent). Honestly, the pattern as usually shown is a bit simplistic and hides a lot of small, tricky complexities, so prefer an existing implementation if you can find one.
Honestly, if you can get away with 1.1, do that - it's simple, and problems only very rarely appear; a sketch follows below. Just log the operation before you do it so that you can manually repair things in the rare case of an issue.
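A minimal sketch of option 1.1 applied to the GenerateOTP service above (AccountCreated is the event contract sketched earlier; logger and eventBus are illustrative placeholders, not part of the question's code):

public async Task<bool> GenerateOTP(string phoneNumber)
{
    if (phoneNumber.Length != 10)
        throw new ArgumentException(ApplicationConstraint.InvalidNumber);

    var otp = Convert.ToInt16(new Random().Next(1000, 10000)); // Next's upper bound is exclusive
    var account = new Account(phoneNumber, otp);

    var evt = new AccountCreated
    {
        AccountId = account.RowKey,
        PhoneNumber = account.PhoneNumber,
        CreatedDate = account.CreatedDate
    };

    // 1. Log the intent first, so a crash between store and publish can be repaired manually
    logger.LogInformation("Creating account {AccountId}", evt.AccountId);

    // 2. Store, then 3. publish - accepting the (really rare) risk of a missed event
    await this.accountRepository.AddEntity(account);
    await this.eventBus.PublishAsync(evt);

    return true;
}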

How to allow SignalR to push updates from DB to the browser using EF6

How can I allow SignalR to push updates from a SQL Server database to the browser using Entity Framework 6?
Here's my action method:
public ActionResult Index()
{
    var currentGates = _ctx.Transactions
        .GroupBy(item => item.SubGateId)
        .SelectMany(group => group.OrderByDescending(x => x.TransactionDateTime)
                                  .Take(1))
        .Include(g => g.Card)
        .Include(g => g.Student)
        .Include(g => g.Student.Faculty)
        .Include(g => g.Student.Department)
        .Include(g => g.SubGate)
        .ToList();

    return View(currentGates);
}
After a lot of searching, the only result I got is this:
ASP.NET MVC 5 SignalR, SqlDependency and EntityFramework 6
I have tried the suggested way but it didn't work. In addition to that, I found a very important security issue concerning storing sensitive data in a hidden field!
My question is: how can I update my view in response to any insert on the Transaction table?
So basically what you need to do is override the SaveChanges() method and invoke your SignalR function:
public class ApplicationDbContext : IdentityDbContext<IdentityUser>
{
    public override int SaveChanges()
    {
        // Capture the new transactions before saving - their state changes after SaveChanges()
        // (assumes the entity type is called Transaction)
        var added = ChangeTracker.Entries()
            .Where(x => x.Entity is Transaction && x.State == EntityState.Added)
            .Select(x => x.Entity)
            .ToList();

        // Persist first, so clients are only notified about successful saves
        int result = base.SaveChanges();

        IHubContext hubContext = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
        foreach (var entity in added)
        {
            hubContext.Clients.All.notifyClients(entity);
        }

        return result;
    }
}
Normally I would plug into either the service creating the Transaction or the SaveChanges event, as @Vince suggested. However, because of your new requirement:
"the problem is: I don't have this control over the SDK that pushes the students' transactions to the DB, so I hope that there's some way to work directly over the DB table."
In your case you can just watch the table using SqlTableDependency.
Note: Be careful using SqlDependency class - it has problems with memory leaks.
// Note: keep the dependency alive for the lifetime of the application -
// wrapping it in a using block would dispose it immediately and stop the notifications.
var tableDependency = new SqlTableDependency<Transaction>(conString);
tableDependency.OnChanged += TableDependency_Changed;
tableDependency.Start();
void TableDependency_Changed(object sender, RecordChangedEventArgs<Transaction> e)
{
    if (e.ChangeType != ChangeType.None)
    {
        var changedEntity = e.Entity;

        // You'll need to change this logic to send only to the people you want to
        IHubContext hubContext = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
        hubContext.Clients.All.notifyClients(changedEntity);
    }
}
Edit
It seems you may have other requirements, e.g. the Includes you want with your results.
So what you can do is resolve the Id from the changed entity and then fetch the full entity using Entity Framework:
if (e.ChangeType != ChangeType.None)
{
    var changedEntity = e.Entity;
    var id = GetPrimaryKey(changedEntity); // e.g. via the metadata-based sketch below
    var fullEntity = _ctx.Transactions.Find(id); // re-query with EF to get the Includes
}
Edit
Other approaches:
If your entities have a last-updated field, you can scan for changes to the table on a timer: the client keeps a timer, and when it elapses it sends the last time it checked for changes; the server then returns all entities whose last-updated timestamp is greater than the time passed in.
Although you don't have access to the EF code, you probably have access to the web API requests. Whenever someone calls an update or create method, just grab the id of the changed entity and pop the ids of all changed entities onto a background service which will send the SignalR notifications.
The only issue now is obtaining the primary key. That can be done with a manual mapping of the types to their id property, or using the metadata from the EF model, but that's a bit more complicated (see the sketch below).
Note: you can probably modify this so it works generically on all tables.
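For the EF6 metadata route mentioned above, a generic key resolver could look like this (a sketch; assumes a single-column primary key and that you have the DbContext at hand - the GetPrimaryKey helper referenced earlier is hypothetical):

using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Linq;

static object GetPrimaryKey<TEntity>(DbContext ctx, TEntity entity) where TEntity : class
{
    // Ask the EF model which property is the key, then read it via reflection
    var objectContext = ((IObjectContextAdapter)ctx).ObjectContext;
    var keyName = objectContext
        .CreateObjectSet<TEntity>()
        .EntitySet.ElementType.KeyMembers
        .Single().Name; // assumes a single-column key

    return typeof(TEntity).GetProperty(keyName).GetValue(entity);
}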
I understand you have the following dataflow:
6x card reader devices
a 'manager' receives the card reader data and pushes it to
a SQL Server 'transaction' table
All of this is 3rd party. Your goal is to have a custom ASP.NET web page that displays the most recent record received by a card reader.
So the issue basically is, that your ASP.NET service needs to be notified of changes in a monitored database table.
Quick-and-dirty approach:
Implement a database trigger on your big 'transaction' table that updates a lightweight 'state' table
CREATE TABLE state (
GateId INT NOT NULL,
UserName VARCHAR (20) NOT NULL,
PRIMARY KEY (GateId)
);
This table is intended to have 1 record per gate only. 6 gates -> 6 records in this table.
Change your ASP.NET service to poll the 'state' table at an interval of e.g. 200ms. When doing such a hack, make sure the table you are polling does not have many records! Once you detect a change, notify your SignalR clients; a rough sketch follows.
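A rough sketch of such a poller (MyHub and notifyClients come from the snippets above; LoadState is an illustrative placeholder for a SELECT over the 'state' table):

using System.Collections.Generic;
using System.Threading;
using Microsoft.AspNet.SignalR;

public static class StatePoller
{
    static Dictionary<int, string> _lastSeen = new Dictionary<int, string>();
    static Timer _timer;

    public static void Start()
    {
        _timer = new Timer(Poll, null, 0, 200); // poll every 200ms
    }

    static void Poll(object state)
    {
        Dictionary<int, string> current = LoadState();

        foreach (var row in current)
        {
            string previous;
            if (!_lastSeen.TryGetValue(row.Key, out previous) || previous != row.Value)
            {
                // This gate changed - push the update to the browsers
                var hub = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
                hub.Clients.All.notifyClients(new { GateId = row.Key, UserName = row.Value });
            }
        }

        _lastSeen = current;
    }

    static Dictionary<int, string> LoadState()
    {
        // Placeholder - replace with an ADO.NET/EF query: SELECT GateId, UserName FROM state
        return new Dictionary<int, string>();
    }
}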
IMHO, a dataflow via a database is a bad design decision (or limitation). It would be way better if your 'manager' did not only push data to the database, but subsequently notified all SignalR clients. This dataflow is basically what @Vince's answer assumes.
Edit:
As you have posted the actual device you are using, I'd encourage you to double-check whether you can connect to the card reader directly. It seems that there are approaches to register some kind of callback for when the device has read a student card. Do this with the sole goal of achieving a straight dataflow / architecture like this:
-> connect 6x card readers to your
-> ASP.Net service which at a single point in your code:
-> updates the database
-> updates the Signal R clients
http://kb.supremainc.com/bs2sdk/doku.php?id=en:getting_started
https://github.com/supremainc/BioStar2_device_SDK
You might ask the vendor for support, as the C# documentation is not too plentiful :|

How do I handle claim check with cqrs

TL;DR:
1. Am I creating an anti-pattern?
2. What is the best way to handle a claim check with CQRS?
I have several entry points in my system (webapi passing in json and xml), as well as through the file system with fixed-length files.
I am using Rebus with MSMQ and SQL Server to manage my messaging. The data can be larger than 4MB (MSMQ's max message size, if I recall correctly). When the system receives a file I convert it into a stream and create a command that implements IAttachmentCommand as below:
public interface IAttachmentCommand : ICommand
{
    Stream Attachment { get; }
    IClaimCheckCommand ToClaimCheck(string attachmentId);
}

public interface IClaimCheckCommand : ICommand
{
    string AttachmentId { get; }
}
I then send it using a command bus (Rebus). If the command is an IAttachmentCommand, I create an attachment in the Rebus DataBus table and build a new claim-check command via ToClaimCheck on the original command. The claim-check command is effectively a carbon copy of the original, except it carries the AttachmentId instead of the data.
I then call Send on my Rebus bus with the new claim-check command as below:
public void Send<TCommand>(TCommand command) where TCommand : ICommand
{
    if (command is IAttachmentCommand)
    {
        var cmd = command as IAttachmentCommand;

        // Method excluded, but persists to the Rebus DataBus and returns the AttachmentId
        var task = CreateAttachment(cmd);
        var claimCheck = task.Result;

        _activator.Bus.Send(claimCheck);
    }
    else
    {
        _activator.Bus.Send(command);
    }
}
This seems to be working, although I am happy to have my code pulled to shreds. I can send commands, apply the events that are generated by my aggregate roots, persist to the event store etc etc.
I simply pick up a file from a webapi call or the file system, create a command and send it off with my command bus.
In a separate windows service I have a command dispatcher monitoring MSMQ for these messages. When a message comes in it will then iterate through however many CommandValidationHandlers there are to validate the command. CommandValidationHandlers implement the following:
public interface ICommandValidationHandler<in TCommand> where TCommand : ICommand
{
    ValidationResult Validate(TCommand command);
}
ValidationResult effectively returns a collection of errors. These errors are logged and published as an InvalidCommand event that contains the command info and the errors - this then allows any subscribers that are listening to pick up the event and send a mail or call a web service etc. to say that the message failed, with the reasons. If the command is invalid, an exception is then thrown and the process stops.
My concern is that on validation I have the attachmentId, and have to retrieve the file, which is then validated, for example against an xsd.
From there I need to deserialize it to an object (generally a collection of financial transactions with a header which contains meta data such as no of transactions etc) and perform extra validation on data in the object.
Once this validation is complete I need to iterate through the collection of transactions in the object and send these to their relevant bounded contexts using the command bus, and further processing takes place.
It seems in this instance that I will be hitting the claim store a number of times - once for each validation handler (although I guess this could be resolved with a composite collection of validators), and then again in the command handler once validation has taken place.
In the various event handlers that need access to all the data, I have to retrieve the data from the claim store each time and deserialize it a number of times.
This seems like a code smell to me. Should I consider caching the file the first time I retrieve it and clearing it from the cache once all event handlers have finished their work?
Does anybody have better suggestions?
From what I understand about your problem the question is really: "should I use a caching mechanism for reading the claim store on the validation handlers?"
In your case, because the data in the claim store is immutable, you can cache it for as long as you need. That is the beauty of immutable data: it is forever cacheable.
To implement the caching mechanism you could use the decorator pattern over the claim store, and switch to the cached version in the composition root of your dependency container. That way you can switch back to the uncached one at any time.
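A minimal sketch of such a decorator (IClaimStore and its single method are illustrative stand-ins for whatever your claim store abstraction actually looks like):

using System.Collections.Concurrent;
using System.Threading.Tasks;

public interface IClaimStore
{
    Task<byte[]> ReadAsync(string attachmentId);
}

// Caching decorator - safe because claim store data is immutable
public class CachedClaimStore : IClaimStore
{
    readonly IClaimStore _inner;
    readonly ConcurrentDictionary<string, byte[]> _cache =
        new ConcurrentDictionary<string, byte[]>();

    public CachedClaimStore(IClaimStore inner)
    {
        _inner = inner;
    }

    public async Task<byte[]> ReadAsync(string attachmentId)
    {
        byte[] data;
        if (_cache.TryGetValue(attachmentId, out data))
            return data;

        data = await _inner.ReadAsync(attachmentId);
        _cache.TryAdd(attachmentId, data);
        return data;
    }
}

Evicting an entry once all handlers for an attachment have finished (as the question itself suggests) keeps the cache memory-bounded.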
You could go further and cache even the result of the validation, if the validated data never changes and the same data is validated repeatedly over time.

Webhooks per entity

I'm using the ASP.NET Webhooks packages to allow users to receive callbacks when certain events occur in my application.
e.g. entityUpdated, entityCreated, entityDeleted
I would like to expose the possibility to users of registering Webhooks only for updates on specific entities in case they are only interested in receiving callbacks for those specific entities.
e.g. entityUpdated for entity1
The filters seem like a good candidate for implementing this behavior. Users can subscribe to events using filters.
e.g. entity* (to receive all events concerning entities)
So I was thinking of exposing events per entity like: entity_1_Updated.
That would mean the list of exposed events will change during the runtime of the application (as entities get created or deleted).
More concretely, the implementation of IWebHookFilterProvider would perform a database query to fetch the list of entities for which events can occur.
Like so:
class EntityWebHookFilterProvider : IWebHookFilterProvider
{
    public async Task<Collection<WebHookFilter>> GetFiltersAsync()
    {
        List<int> ids = await repository.GetAllUpdatableEntitiesAsync();
        return new Collection<WebHookFilter>(
            ids.Select(id => new WebHookFilter { Name = string.Format("entity_{0}_Updated", id) })
               .ToList());
    }
}
Would this be a good solution? Or should the list of events/filters be fixed?
An easier way may be to use a separate field in the WebHook registration - the Properties part - to indicate the specific ID the subscriber is interested in.
Then when you send a notification on the server side you can use the overload which takes a Func, enabling you to filter so that WebHooks are only generated when the ID matches that of the WebHook registration, for example:
// Create an event with action 'event1' and additional data
await this.NotifyAsync("event1", new { P1 = "p1", EntityId = entityId }, (w, s) =>
{
    // Check that the property included in the event data matches that of the
    // WebHook registration. "entityId" is a hypothetical key you would store
    // in w.Properties at registration time.
    object registeredId;
    return w.Properties.TryGetValue("entityId", out registeredId)
        && Equals(registeredId, entityId);
});
Hope this helps,
Henrik

Registering change notification with Active Directory using C#

This link http://msdn.microsoft.com/en-us/library/aa772153(VS.85).aspx says:
You can register up to five notification requests on a single LDAP connection. You must have a dedicated thread that waits for the notifications and processes them quickly. When you call the ldap_search_ext function to register a notification request, the function returns a message identifier that identifies that request. You then use the ldap_result function to wait for change notifications. When a change occurs, the server sends you an LDAP message that contains the message identifier for the notification request that generated the notification. This causes the ldap_result function to return with search results that identify the object that changed.
I cannot find a similar behavior looking through the .NET documentation. If anyone knows how to do this in C# I'd be very grateful to know. I'm looking to see when attributes change on all the users in the system so I can perform custom actions depending on what changed.
I've looked through stackoverflow and other sources with no luck.
Thanks.
I'm not sure it does what you need, but have a look at http://dunnry.com/blog/ImplementingChangeNotificationsInNET.aspx
Edit: Added text and code from the article:
There are three ways of figuring out things that have changed in Active Directory (or ADAM). These have been documented for some time over at MSDN in the aptly titled "Overview of Change Tracking Techniques". In summary:

1. Polling for changes using uSNChanged. This technique checks the 'highestCommittedUSN' value to start and then performs searches for 'uSNChanged' values that are higher subsequently. The 'uSNChanged' attribute is not replicated between domain controllers, so you must go back to the same domain controller each time for consistency. Essentially, you perform a search looking for the highest 'uSNChanged' value + 1 and then read in the results, tracking them in any way you wish.
Benefits: This is the most compatible way. All languages and all versions of .NET support it, since it is a simple search.
Disadvantages: There is a lot here for the developer to take care of. You get the entire object back, and you must determine what has changed on the object (and whether you care about that change). Dealing with deleted objects is a pain. This is a polling technique, so it is only as real-time as how often you query - which can be a good thing depending on the application. Note that intermediate values are not tracked here either.

2. Polling for changes using the DirSync control. This technique uses the ADS_SEARCHPREF_DIRSYNC option in ADSI and the LDAP_SERVER_DIRSYNC_OID control under the covers. Simply make an initial search, store the cookie, and then later search again and send the cookie. It will return only the objects that have changed.
Benefits: This is an easy model to follow. Both System.DirectoryServices and System.DirectoryServices.Protocols support this option. Filtering can reduce what you need to bother with. As an example, if my initial search is for all users "(objectClass=user)", I can subsequently filter on polling with "(sn=dunn)" and only get back the combination of both filters, instead of having to deal with everything from the initial filter. The Windows 2003+ option removes the administrative limitation for using this option (object security), and will also give you the ability to return only the incremental values that have changed in large multi-valued attributes - a really nice feature. It deals well with deleted objects.
Disadvantages: This is a .NET 2.0+ only option; users of .NET 1.1 will need to use uSNChanged tracking. Scripting languages cannot use this method. You can only scope the search to a partition: if you want to track only a particular OU or object, you must sort out those results yourself later. Using this with non-Windows 2003 mode domains comes with the restriction that you must have replication-get-changes permissions (default: admin only). This is a polling technique, and it does not track intermediate values either - if an object you want to track changes multiple times between searches, you will only get the last change. This can be an advantage depending on the application.

3. Change notifications in Active Directory. This technique registers a search on a separate thread that will receive notifications when any object changes that matches the filter. You can register up to 5 notifications per async connection.
Benefits: Instant notification - the other techniques require polling. Because this is a notification, you will get all changes, even the intermediate ones that would have been lost in the other two techniques.
Disadvantages: Relatively resource intensive. You don't want to do a whole ton of these, as it could cause scalability issues with your domain controller. It only tells you that the object has changed, not what the change was - you need to figure out whether the attribute you care about has changed or not. That being said, it is pretty easy to tell if the object has been deleted (easier than with uSNChanged polling at least). You can only do this in unmanaged code or with System.DirectoryServices.Protocols.

For the most part, I have found that DirSync has fit the bill for me in virtually every situation. I never bothered to try any of the other techniques. However, a reader asked if there was a way to do the change notifications in .NET. I figured it was possible using SDS.P, but had never tried it. Turns out, it is possible and actually not too hard to do. My first thought was to use the sample code found on MSDN (and referenced from option #3) and simply convert it to System.DirectoryServices.Protocols. This turned out to be a dead end - the way you do it in SDS.P and the way the sample code works are different enough that it is of no help. Here is the solution I came up with:
public class ChangeNotifier : IDisposable
{
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn,                 //root the search here
            "(objectClass=*)",  //very inclusive
            scope,              //any scope works
            null                //we are interested in all attributes
        );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request);

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);

        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
It is a relatively simple class that you can use to register searches. The trick is using the GetPartialResults method in the callback method to get only the change that has just occurred. I have also included the very simplified EventArgs class I am using to pass results back. Note, I am not doing anything about threading here and I don't have any error handling (this is just a sample). You can consume this class like so:
static void Main(string[] args)
{
    using (LdapConnection connect = CreateConnection("localhost"))
    {
        using (ChangeNotifier notifier = new ChangeNotifier(connect))
        {
            //register some objects for notifications (limit 5)
            notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
            notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

            notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

            Console.WriteLine("Waiting for changes...");
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}

static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
    Console.WriteLine(e.Result.DistinguishedName);

    foreach (string attrib in e.Result.Attributes.AttributeNames)
    {
        foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
        {
            Console.WriteLine("\t{0}: {1}", attrib, item);
        }
    }

    Console.WriteLine();
    Console.WriteLine("====================");
    Console.WriteLine();
}
