CQRS for streamed data - C#

I am using CQRS and it works very well for transactional commands or request-response exchanges from one node to a remote node.
I have a use case where a command is issued to a remote node, and this results in a "stream" of data (much like a remote command running, with the server giving us textual updates as it progresses):
// this is sent from the requesting node to the remote node to initiate the stream
public class LongRunningCommand : ICommand
{
    public Guid Session { get; set; }       // the session ID to use
    public string CommandLine { get; set; } // the command the remote node will run
}
This data is then sent in a number of packets over a period of time from the remote node to the requesting node:
// this is sent from the remote node to the requestor in multiple updates over time
public class UpdateProgress : ICommand
{
    public Guid Session { get; set; }   // possibility to multiplex sessions
    public int Sequence { get; set; }   // de-dupe/resequence out-of-order packets (lower QoS)
    public byte[] Payload { get; set; } // the data to be passed to the application
}
This is not really a command, nor is it a request-reply (as there are multiple replies) - it is a long running session of sorts, but I am not sure how this fits in with CQRS.
What would be the best way to model this? Could my requesting node have a command handler like the one below (where UpdateProgress is the "command" being handled):
public class UpdateProgressCommandHandler : ICommandHandler<UpdateProgress>
{
    public async Task HandleAsync(UpdateProgress message)
    {
        // resequence in handler or chained infrastructure - omitted for brevity
        var window = GetWindowForSession(message.Session);
        var updateFromServer = System.Text.Encoding.UTF8.GetString(message.Payload);
        await window.WriteLine(updateFromServer);
    }
}
The above works (and I think fairly well), but the terminology seems a bit funky (the command name UpdateProgress reads more like an event than a command).
Or am I better off dropping the notion of commands/queries altogether and going with a full event bus? If I did, how would I handle the initial request, since that is not an event but a command (which wouldn't make sense semantically on an event bus that deals with events, not commands or queries)?
Or am I just getting caught up in naming conventions? Since this is my first time doing this, I would appreciate a best-practice view for the above use case.

I'm not sure if I understood you correctly, but a Command that needs to talk to a remote node does not appear to be part of your Domain. That is not to say it is not a Command; you can still define it as a Command, just not within your Domain, IMO. You could potentially look into integration events here.
Without having a full understanding of your domain, here is how you could define your process:
1. Execute a Command to modify your domain (something like Status: Pending)
2. Raise an Integration Event from your CommandHandler onto a separate worker / Service Bus
3. The separate worker completes the process and then raises another integration event
4. A handler on your side subscribes to that event and updates the relevant pieces of your domain (e.g. Status: Completed).
A rough sketch of steps 2-4 follows.
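This is not the poster's code, just an illustration of the flow under the assumption of a simple integration-event publisher; the IIntegrationEventBus interface and the event name below are made up for this example:
// Hypothetical integration event carried to the separate worker
public class RemoteExecutionRequested
{
    public Guid Session { get; set; }
    public string CommandLine { get; set; }
}

public class LongRunningCommandHandler : ICommandHandler<LongRunningCommand>
{
    private readonly IIntegrationEventBus _bus; // assumed Service Bus / worker-facing publisher

    public LongRunningCommandHandler(IIntegrationEventBus bus)
    {
        _bus = bus;
    }

    public async Task HandleAsync(LongRunningCommand command)
    {
        // step 1: update the domain, e.g. mark the session Status: Pending (persistence omitted)
        // step 2: raise an integration event that the separate worker picks up
        await _bus.PublishAsync(new RemoteExecutionRequested
        {
            Session = command.Session,
            CommandLine = command.CommandLine
        });
        // steps 3-4: the worker completes, raises e.g. a RemoteExecutionCompleted event,
        // and a handler on this side subscribes to it and sets Status: Completed
    }
}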

Related

How to raise a domain event when I don't want to share the actual domain model

I'm trying to implement DDD in my small project, but I'm not able to understand how to raise a domain event in the case below.
Account Domain
public class Account : BaseEntity
{
    public string PhoneNumber { get; set; }
    public int OTP { get; set; }

    public Account()
    {
    }

    public Account(string phoneNumber, short otp)
    {
        this.PhoneNumber = phoneNumber;
        this.OTP = otp;
        CreatedDate = DateTime.Now;
        RowKey = Guid.NewGuid().ToString();
        PartitionKey = phoneNumber;
    }
}
Account Service
public async Task<bool> GenerateOTP(string phoneNumber)
{
    if (phoneNumber.Length != 10)
        throw new ArgumentException(ApplicationConstraint.InvalidNumber);

    var otp = Convert.ToInt16(new Random().Next(1000, 9999));
    var account = new Account(phoneNumber, otp);
    await this.accountRepository.AddEntity(account);
    return true;
}
Account Repository (Azure Storage Table is my database)
public virtual async Task AddEntity(TEntity entity)
{
    TableOperation insertOperation = TableOperation.Insert(entity);
    await table.ExecuteAsync(insertOperation);
}
I want to raise a domain event only when the data has been saved to the database. As a workaround, I'm calling the messaging service from the account service.
Given the limited information provided, one option would be to create an AccountCreated event (or an EntityCreated event if this is a cross-cutting concern) and publish it through some bus where consumers can asynchronously receive it and do any subsequent processing needed.
The event need not use domain entities, and it can contain the information/data necessary to do any subsequent processing without the need to access a shared db (and as such adhering to DDD & microservice guidelines).
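For illustration only (the event class and the consumer below are assumptions, not existing code), such an event would carry just the plain data the downstream processing needs:
// Hypothetical integration event - plain data, no domain entities
public class AccountCreated
{
    public string PhoneNumber { get; set; }
    public short OTP { get; set; }
    public DateTime CreatedDate { get; set; }
}

// A consumer elsewhere subscribes and does its own processing, e.g. sending the OTP SMS,
// without reaching back into the account database.
public class AccountCreatedHandler
{
    public Task HandleAsync(AccountCreated evt)
    {
        // send the SMS / notification using evt.PhoneNumber and evt.OTP
        return Task.CompletedTask;
    }
}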
----Edit----
In the above I assumed that this is an established system and Azure Storage isn't something that can change. Publishing an event and handling it is pretty simple, but there are some things you need to be aware of. In general, you have 3 options here:
1. Publishing right after saving isn't wrong. It's the simple way to do it, and (if you adopt an event-first methodology) you can do it in a generic way across your entities with minimal work. However, you need to be conscious of how you deal with errors. Specifically, if you store the entity first, before publishing the event, and the process then crashes for whatever reason, the event may be missed, so later workflows will not kick off. If you do the reverse (publish then store), you run the risk of double-publishing the event. In this case you have two options:
1.1. If you store-then-publish: just accept the (really rare) possibility of not publishing an event. This is something you need to discuss with the business, and you can mitigate the severity by logging the event before trying to save the entity.
1.2. If you publish-then-store (you'll need to do this if the cost of fixing any issues ad hoc is too great): you can fix the problem by having your consumers check the id of the incoming message to see whether they have processed it before and reject it if they have, OR make the process idempotent (if possible), meaning that doing it twice isn't a problem.
2. Use event sourcing. This isn't difficult in my opinion, but obviously it's overhead if this is a simple application, and while not difficult, it does need a significant amount of reading up if you're not familiar with it. If this is a non-trivial application, event sourcing can help a lot, because observers can just observe the events in the buffer and respond to them (so there is no need to explicitly publish the changes).
3. Append the event to a separate table within the same transaction in which you store the entity, and use the outbox pattern (publish those events from a separate service, marking them as published once they've been sent). Honestly, the pattern as usually shown is a bit simplistic and there are a lot of small, tricky complexities, so prefer an existing implementation if you can find one.
Honestly, if you can get away with 1.1, do that. It's simple, and problems only very rarely appear. Just log the operation before you do it so that you can redo it manually in the rare case of issues; a rough sketch is below.
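A minimal sketch of option 1.1 (log, then store, then publish). The logger and event publisher used here are placeholders for whatever you already have, not a specific library:
public async Task<bool> GenerateOTP(string phoneNumber)
{
    if (phoneNumber.Length != 10)
        throw new ArgumentException(ApplicationConstraint.InvalidNumber);

    var otp = Convert.ToInt16(new Random().Next(1000, 9999));
    var account = new Account(phoneNumber, otp);
    var evt = new AccountCreated { PhoneNumber = phoneNumber, OTP = otp, CreatedDate = DateTime.Now };

    // 1. log the intended event first, so a crash between save and publish can be repaired by hand
    this.logger.LogInformation("Creating account and publishing AccountCreated for {Phone}", phoneNumber);

    // 2. store the entity
    await this.accountRepository.AddEntity(account);

    // 3. publish; if the process dies right here, the log line above is the recovery record
    await this.eventBus.PublishAsync(evt);
    return true;
}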

Effectively wait for multiple messages

Imagine you have a client and a server communicating via some kind of a message bus, which has this interface:
interface IBus {
    void Send(Message m);
    void Receive(Message m);
}
with Message being some POCO like this:
class Message {
    public Guid Id { get; set; }
    public string Data { get; set; }
}
So you can send a message via Send, and when the response arrives the messaging infrastructure invokes the Receive method, which is essentially just a callback. What I would like to do now is write a method that lets me wait for the response:
Message WaitForResponse(Message request);
and use it like this:
var response = WaitForResponse(request);
Console.Write(response.Data);
I tried using TaskCompletionSource for this and it works great, but it requires async/await and I already have a lot of code written in the sync style. That code currently uses ManualResetEventSlim objects stored in a ConcurrentDictionary for synchronization, but it runs into performance issues when the number of requests waiting for a response grows to a couple of hundred (I assume because all the threads are blocked in manualResetEventSlim.Wait()). I guess there should be a better way to do it that only requires changes to the implementation of WaitForResponse and keeps all method signatures untouched.
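For context, the approach being described probably looks something like the following reconstruction (not the actual code; correlating the response by the request Id is an assumption):
class SyncBusClient {
    private readonly IBus bus;
    private readonly ConcurrentDictionary<Guid, ManualResetEventSlim> waiters =
        new ConcurrentDictionary<Guid, ManualResetEventSlim>();
    private readonly ConcurrentDictionary<Guid, Message> responses =
        new ConcurrentDictionary<Guid, Message>();

    public SyncBusClient(IBus bus) { this.bus = bus; }

    public Message WaitForResponse(Message request) {
        var waiter = new ManualResetEventSlim(false);
        waiters[request.Id] = waiter;
        bus.Send(request);

        waiter.Wait(); // blocks the calling thread until Receive signals it

        waiters.TryRemove(request.Id, out _);
        responses.TryRemove(request.Id, out var response);
        return response;
    }

    // invoked by the messaging infrastructure when a response arrives
    public void Receive(Message m) {
        responses[m.Id] = m;
        if (waiters.TryGetValue(m.Id, out var waiter))
            waiter.Set();
    }
}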

How do I handle claim check with CQRS

TL;DR:
1. Am I creating an anti-pattern?
2. What is the best way to handle a claim check with CQRS?
I have several entry points in my system (webapi passing in json and xml), as well as through the file system with fixed-length files.
I am using Rebus with MSMQ and SQL Server to manage my messaging. The data can be larger than 4 MB (MSMQ's maximum message size, I believe). When the system receives a file I convert it into a stream and create a command that implements IAttachmentCommand as below:
public interface IAttachmentCommand : ICommand
{
    Stream Attachment { get; }

    IClaimCheckCommand ToClaimCheck(string attachmentId);
}

public interface IClaimCheckCommand : ICommand
{
    string AttachmentId { get; }
}
I then send it using a command bus (using Rebus). If the command is of type IAttachmentCommand I create an attachment in the rebus databus table and return a new IAttachmentCommand using ToClaimCheck on the original command. The AttachmentCommand is effectively a carbon copy of the original command, except it now has the attachmentId instead of the data.
I will then call Send on my Rebus bus with my new AttachmentId as below:
public void Send<TCommand>(TCommand command) where TCommand : ICommand
{
    if (command is IAttachmentCommand)
    {
        var cmd = command as IAttachmentCommand;
        var task = CreateAttachment(cmd); // method excluded, but persists to Rebus DataBus and returns AttachmentId
        var claimCheck = task.Result;
        _activator.Bus.Send(claimCheck);
    }
    else
    {
        _activator.Bus.Send(command);
    }
}
This seems to be working, although I am happy to have my code pulled to shreds. I can send commands, apply the events that are generated by my aggregate roots, persist to the event store etc etc.
I simply pick up a file from a webapi call or the file system, create a command and send it off with my command bus.
In a separate windows service I have a command dispatcher monitoring MSMQ for these messages. When a message comes in it will then iterate through however many CommandValidationHandlers there are to validate the command. CommandValidationHandlers implement the following:
public interface ICommandValidationHandler<in TCommand> where TCommand : ICommand
{
    ValidationResult Validate(TCommand command);
}
ValidationResult effectively returns a collection of errors. These errors are logged and published as an InvalidCommand event that contains the command info and the errors - this then allows any subscribers that are listening to pick up the event and send a mail, call a web service, etc. to say that the message failed, with the reasons. If the command is invalid, an exception is then thrown and the process stops.
My concern is that on validation I have the attachmentId, and have to retrieve the file, which is then validated, for example against an xsd.
From there I need to deserialize it to an object (generally a collection of financial transactions with a header which contains meta data such as no of transactions etc) and perform extra validation on data in the object.
Once this validation is complete I need to iterate through the collection of transactions in the object and send these to their relevant bounded contexts using the command bus, and further processing takes place.
It seems in this instance that I will be hitting the claim store a number of times - once for each validation handler (although I guess this could be resolved with a composite collection of validators), but then again in the Command Handler once validation has taken place.
In the various Event Handlers I have that need access to all the data I need to retrieve the data from the claim store each time and deserialize a number of times.
This seems like a code smell to me. Should I consider caching the file the first time I retrieve it and clearing it from the cache once all event handlers have finished their work?
Does anybody have better suggestions?
From what I understand about your problem, the question really is: "should I use a caching mechanism for reading the claim store in the validation handlers?"
In your case, because the data in the claim store is immutable, you can cache it for as long as you need it. That is the beauty of immutable data: it is forever cacheable.
To implement the caching mechanism you could use the decorator pattern over the claim store and switch to the cached version in your composition root in the dependency container. That way you can switch back to the uncached one at any time; a sketch follows below.
You could go even further and cache the result of the validation as well, if the validated data never changes and the same validation is repeated over time.
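A rough sketch of that decorator, assuming a simple IClaimStore abstraction (the interface and the in-memory dictionary here are illustrative, not the actual Rebus DataBus API):
public interface IClaimStore
{
    Task<byte[]> ReadAttachment(string attachmentId);
}

public class CachingClaimStore : IClaimStore
{
    private readonly IClaimStore inner;
    private readonly ConcurrentDictionary<string, byte[]> cache =
        new ConcurrentDictionary<string, byte[]>();

    public CachingClaimStore(IClaimStore inner)
    {
        this.inner = inner;
    }

    public async Task<byte[]> ReadAttachment(string attachmentId)
    {
        // immutable data: once read, it can be cached indefinitely
        if (cache.TryGetValue(attachmentId, out var cached))
            return cached;

        var data = await inner.ReadAttachment(attachmentId);
        cache[attachmentId] = data;
        return data;
    }
}

// In the composition root: wrap the real store; swap back to the uncached one at any time
// (RebusDataBusClaimStore is a placeholder name for your concrete implementation)
// container.Register<IClaimStore>(() => new CachingClaimStore(new RebusDataBusClaimStore()));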

Pass parameters from a project to a specific class in another project

I just started to learn C# for a school project but I'm stuck on something.
I have a solution with 2 projects (and each project has a class), something like this:
Solution:
Server (project) (...) MyServerClass.cs, Program.cs
App (project) (...) MyAppClass.cs, Program.cs
In my "MyServerClass.cs", I have this:
class MyServerClass
{
    ...
    public void SomeMethod()
    {
        Process.Start("App.exe", "MyAppClass");
    }
}
How can I properly send, for example, an IP address and port? Would something like this work?
class MyServerClass
{
    ....
    public void SomeMethod()
    {
        string ip = "127.0.0.1";
        int port = 8888;
        Process.Start("App.exe", "MyAppClass " + ip + " " + port);
    }
}
Then in my "MyAppClass.cs", how can I receive that IP address and port?
EDIT:
The objective of this work is to practice processes/threads/sockets. The idea is to have a server that receives emails and filters them, to determine whether they're spam or not. We have to have 4 or 5 filters. The idea was to have them as separate projects (e.g. Filter1.exe, Filter2.exe, ...), but I was trying to have only one project (e.g. Filters.exe) with the filters as classes (Filter1.cs, Filter2.cs, ...), and then create a new process for each different filter.
I guess I'll stick to a project for each filter!
Thanks!
There are a number of ways to achieve this, each with their own pros/cons.
Some possible solutions:
Pass the values in on the command line. Pros: Easy. Cons: Can only be passed in once on launch. Unidirectional (child process can't send info back). Doesn't scale well for complex structured data. (A short sketch of the receiving side is shown after this list.)
Create a webservice (either in the server or client). Connect to it and either pull/push the appropriate settings. Pros: Flexible, ongoing, potentially bi-directional with some form of polling and works if client/server are on different hosts. Cons: A little bit more complex, requires one app to be able to locate the web address of the other which is trivial locally and more involved over a network.
Use shared memory via a memory mapped file. This approach allows multiple processes to access the same chunk of memory. One process can write the required data and the others can read it. Pros: Efficient, bi-directional, can be disk-backed to persist state through restarts. Cons: Requires pointers and an understanding of how they work. Requires a little more manipulation of data to perform a read/write.
There are dozens more ways. Without knowing your situation in detail, it's hard to recommend one over another.
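For the command-line option specifically, the receiving side could look roughly like this (a sketch; the argument order simply mirrors the Process.Start call in the question):
// In the App project's Program.cs
static void Main(string[] args)
{
    // args[0] = class name, args[1] = ip, args[2] = port, matching "MyAppClass " + ip + " " + port
    string className = args[0];
    string ip = args[1];
    int port = int.Parse(args[2]);

    Console.WriteLine("Starting {0} on {1}:{2}", className, ip, port);
    // ... construct the requested class and hand it the endpoint
}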
Edit Re: Updated requirements
Ok, command line is definitely a good choice here. A quick detour into some architecture...
There's no reason you can't do this with a single project.
First up, use an interface to make sure all your filters are interchangeable. Something like this...
public interface IFilter {
    FilterResult Filter(string email);
    void SetConfig(string config);
}
SetConfig() is optional but potentially useful to reconfigure a filter without a recompile.
You also need to decide what your IFilter's FilterResult is going to be. Is it a pass/fail? Or a score? Maybe some flags and other metrics.
If you wanted to do multiple projects, you'd put that interface in a "shared" or "common" project on its own and reference it from every other project. This also makes it easy for third parties to develop a filter.
Anyway, next up. Let's look at how the filter is hosted. You want something that's going to listen on the network, but that's not the responsibility of the filter itself, so we need a network client. What you use here is up to you; WCF in one flavour or another seems to be a prime candidate. Your network client class should take in its constructor an endpoint to listen on and an instance of the filter...
public class NetworkClient {
    private string endpoint;
    private IFilter filter;

    public NetworkClient(string Endpoint, IFilter Filter) {
        this.filter = Filter;
        this.endpoint = Endpoint;
        this.Setup();
    }

    void Setup() {
        // Set up your network client to listen on endpoint.
        // When it receives a message, pass it to filter.Filter(msg);
    }
}
Finally, we need an application to host everything. It's up to you whether you go for a console app or winforms/wpf. Depends if you want the process to have a GUI. If it's running as a service, the UI won't be visible on a user desktop anyway.
So, we'll have a process that takes the endpoint for the NetworkClient to listen on, a class name for the filter to use, and (optionally) a configuration string to be passed in to the filter before first use.
So, in your app's Main(), do something like this...
static void Main() {
    try {
        const string usage = "Usage: Filter.exe Endpoint FilterType [Config]";
        // Note: GetCommandLineArgs() includes the executable path as args[0],
        // so the counts in the switch below are offset by one.
        var args = Environment.GetCommandLineArgs();
        Type filterType;
        IFilter filter;
        string endpoint;
        string config = null;
        NetworkClient networkClient;

        switch (args.Length) {
            case 1:
                throw new InvalidOperationException(String.Format("{0}. An endpoint and filter type are required", usage));
            case 2:
                throw new InvalidOperationException(String.Format("{0}. A filter type is required", usage));
            case 3:
                // We've been given an endpoint and type
                break;
            case 4:
                // We've been given an endpoint, type and config.
                config = args[3];
                break;
            default:
                throw new InvalidOperationException(String.Format("{0}. Max three parameters supported. If your config contains spaces, ensure you are quoting/escaping as required.", usage));
        }

        endpoint = args[1];
        filterType = Type.GetType(args[2]); // Look at the overloads here to control where you're searching

        // Now actually create an instance of the filter
        filter = (IFilter)Activator.CreateInstance(filterType);

        if (config != null) {
            // If required, set config
            filter.SetConfig(config);
        }

        // Make a new NetworkClient and tell it where to listen and what to host.
        networkClient = new NetworkClient(endpoint, filter);

        // In a console, loop here until shutdown is requested, however you've implemented that.
        // In winforms, the main UI loop will keep you alive.
    } catch (Exception e) {
        Console.WriteLine(e.ToString()); // Or display a dialog
    }
}
You should then be able to invoke your process like this...
Filter.exe "127.0.0.1:8000" MyNamespace.MyFilterClass
or
Filter.exe "127.0.0.1:8000" MyNamespace.MyFilterClass "dictionary=en-gb;cutoff=0.5"
Of course, you can use a helper class to convert the config string into something your filter can use (like a dictionary).
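Such a helper could be as simple as the following sketch, assuming the "key=value;key=value" format used in the example above:
static class ConfigParser {
    // Turns "dictionary=en-gb;cutoff=0.5" into { ["dictionary"]="en-gb", ["cutoff"]="0.5" }
    public static Dictionary<string, string> Parse(string config) {
        var result = new Dictionary<string, string>();
        if (string.IsNullOrWhiteSpace(config)) return result;

        foreach (var pair in config.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries)) {
            var parts = pair.Split(new[] { '=' }, 2);
            result[parts[0].Trim()] = parts.Length > 1 ? parts[1].Trim() : "";
        }
        return result;
    }
}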
When the network client gets a FilterResult back from the filter, it can pass the data back to the server / act accordingly.
I'd also suggest a little reading on Dependency Injection / Inversion of control and Unity. It makes a pluggable architecture much, much simpler. Instead of instantiating everything manually and tracking concrete instances, you can just do something like...
container.Resolve<IFilter>(filterType);
And the container will make sure that you get the appropriate instance for your thread/context.
Hope that helps

MVVMLight : is this the right way to use the Messenger?

I have a classic business application that manages clients and addresses.
There are tab items (Id, GenericInfo and a few more) with each their own ViewModel.
There is a MainViewModel that handles the save and load commands of a client and its addresses.
We retrieve the data from a WCF service. The data received/sent from each WCF Function is aggregated in a different container.
In my MainViewModel I create a SaveContainer and then send it with the messenger.
public void Save()
{
    var container = new SaveContainer();
    MessengerInstance.Send(container);

    // the container is now populated and ready to be sent via WCF
    Console.WriteLine(container.User.Name);
    Console.WriteLine(container.Address.StreetName);
    Console.WriteLine(container.Address2.StreetName);
}
In my UserViewModel I register for that container, and the viewmodel populates it with the data it has (the user).
public UserViewModel()
    : base(Messenger.Default)
{
    User = new User();
    MessengerInstance.Register<SaveContainer>(this, (x) => x.User = User);
}
And in my AddressViewModel I do the same.
public AddressViewModel()
    : base(Messenger.Default)
{
    Address = new Address();
    Address2 = new Address() { StreetName = "Washington Street" };
    MessengerInstance.Register<SaveContainer>(this, x =>
    {
        x.Address = Address;
        x.Address2 = Address2;
    });
}
I'd do the same when I have to load data.
After I send the message, I assume that every registered ViewModel has received the message and handled it. Am I assuming wrong? Do you think this is a correct way to use the Messenger? What would you improve?
There is no single right way to use the messenger. However, you have to consider that the message is handled by all recipients that have registered for it, not just an intended subset. Furthermore, when using messaging you have no control over when the message handling is finished, nor do you get notified when all recipients are done handling the message. In addition - depending on the implementation of the messenger - the messages may be handled in parallel.
So the problem with your approach (and #cadrell0's extension using a callback) is that you don't know when all recipients have handled the message. Using the callback you will get one callback for each recipient handling the message (i.e. n recipients, n callbacks).
So how can you check when all recipients are done handling the message?
You could use a counter to determine how many recipients have called back - but this is error prone, as registering another message recipient later would throw off your count.
Another way would be validating the save container and, once it is complete, continuing processing - but this might lead to a race condition: you may think all recipients have handled the message and continue, but then one late recipient calls in and invalidates your save container ... not good.
As I see it, messaging is designed more as a notification mechanism, i.e. you notify some recipients that something has happened. If you know and can ensure that there is only one recipient, you can even use it in the manner you describe, but as soon as more than one recipient is involved you run into the problems mentioned above.
So where does this leave you? In your scenario I would tend to design the viewmodels as "related" (i.e. the main view model knows about the user view model and the address view models - or the main view model knows about the user view model, which in turn knows about the address view models, if that is more appropriate). Usually I would also design a model that holds the unit of work I have to deal with (in your case the SaveContainer). All view models are then constructed from this model and write their data to it; a rough sketch follows below. In normal cases this unit of work is what you get from your data storage service and what, in turn, gets saved by the data store in a single transaction.
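A rough sketch of the "related view models" idea, reusing the types from the question (the constructor wiring here is illustrative, not prescriptive):
public class MainViewModel
{
    private readonly UserViewModel userViewModel;
    private readonly AddressViewModel addressViewModel;

    public MainViewModel(UserViewModel userViewModel, AddressViewModel addressViewModel)
    {
        this.userViewModel = userViewModel;
        this.addressViewModel = addressViewModel;
    }

    public void Save()
    {
        // The main view model asks its children directly, so it knows exactly
        // when the container is complete - no guessing about message recipients.
        var container = new SaveContainer
        {
            User = userViewModel.User,
            Address = addressViewModel.Address,
            Address2 = addressViewModel.Address2
        };

        // container is now ready to be sent via WCF
    }
}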
But again, there is no right way to MVVM!
If I need to do something after a recipient responds to a message, I include a callback on my message. When the recipient is done, it executes the callback. Adding parameters to the callback allows the recipient to send data to the sender. This also allows the recipient to perform an async operation; a small sketch is below.
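A minimal sketch of such a callback-carrying message (the class below is made up for illustration; MVVM Light also ships NotificationMessageAction, which follows the same idea):
public class SaveRequestedMessage
{
    public SaveContainer Container { get; private set; }
    private readonly Action<SaveContainer> callback;

    public SaveRequestedMessage(SaveContainer container, Action<SaveContainer> callback)
    {
        Container = container;
        this.callback = callback;
    }

    // Each recipient calls this once it has finished filling in its part
    public void Done()
    {
        callback(Container);
    }
}

// Sender:
// MessengerInstance.Send(new SaveRequestedMessage(container, c => { /* one callback per recipient */ }));
// Recipient:
// MessengerInstance.Register<SaveRequestedMessage>(this, msg => { msg.Container.User = User; msg.Done(); });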
