MVVMLight: is this the right way to use the Messenger?

I have a classic business application that manages clients and addresses.
There are tab items (Id, GenericInfo and a few more), each with its own ViewModel.
There is a MainViewModel that handles the save and load commands for a client and its addresses.
We retrieve the data from a WCF service. The data received/sent by each WCF function is aggregated in a different container.
In my MainViewModel I create a SaveContainer and then send it with the messenger.
public void Save()
{
    var container = new SaveContainer();
    MessengerInstance.Send(container);

    //the container is now populated and ready to be sent via WCF
    Console.WriteLine(container.User.Name);
    Console.WriteLine(container.Address.StreetName);
    Console.WriteLine(container.Address2.StreetName);
}
In my UserViewModel I register for that container, and the view model populates it with the data it has (the user).
public UserViewModel()
    : base(Messenger.Default)
{
    User = new User();
    MessengerInstance.Register<SaveContainer>(this, x => x.User = User);
}
And in my AddressViewModel I do the same.
public AddressViewModel()
    : base(Messenger.Default)
{
    Address = new Address();
    Address2 = new Address() { StreetName = "Washington Street" };
    MessengerInstance.Register<SaveContainer>(this, x =>
    {
        x.Address = Address;
        x.Address2 = Address2;
    });
}
I'd do the same when I have to load data.
After I send the message, I assume that every registered ViewModel has received and handled it. Am I assuming wrong? Do you consider this a correct way to use the Messenger? What would you improve?

There is no single right way to use the messenger. However, you have to consider that the message is handled by all recipients that have registered for it, not just an intended subset. Furthermore, when using messaging you have no control over when the message handling is finished, nor do you get notified when all recipients are done handling the message. In addition, depending on the implementation of the messenger, the messages may be handled in parallel.
So the problem with your approach (and #cadrell0's extension using a callback) is that you don't know when all recipients have handled the message. Using the callback, you will get one callback for each recipient handling the message (i.e. n recipients, n callbacks).
So how can you check when all recipients are done handling the message?
One option is a counter to determine how many recipients have called back, but this is error-prone: as soon as you register another message recipient, the count is off and your system breaks.
Another way would be validating the save container and continuing once it is complete, but this can lead to a race condition: you may think all recipients have handled the message and continue, but then one late recipient calls in and invalidates your save container ... not good.
As I see it, messaging is designed more as a notification mechanism, i.e. you notify some recipients that something has happened. If you know and can ensure that there is only one recipient, you can even use it in the manner you describe, but as soon as more than one recipient is involved, the problems mentioned above appear.
So where does this leave you? In your scenario I would tend to design the view models as "related" (i.e. the main view model knows about the user view model and the address view models; or the main view model knows about the user view model, which in turn knows about the address view models, if that is more appropriate). Usually I would also design a model that holds the unit of work I have to deal with (in your case the SaveContainer). All view models are then constructed from this model and write their data to it. In normal cases this unit of work is what you get from your data storage service and what, in turn, gets saved by the data store in a single transaction.
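A minimal sketch of that "related view models" approach, assuming the main view model simply owns its children (class shapes are illustrative, not taken from the question):
public class MainViewModel
{
    private readonly UserViewModel _userViewModel = new UserViewModel();
    private readonly AddressViewModel _addressViewModel = new AddressViewModel();

    public void Save()
    {
        //The container is complete by construction; no messaging round-trip needed
        var container = new SaveContainer
        {
            User = _userViewModel.User,
            Address = _addressViewModel.Address,
            Address2 = _addressViewModel.Address2
        };
        //container is now ready to be sent via WCF
    }
}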
But again, there is no right way to MVVM!

If I need to do something after a recipient responds to a message I include a callback on my message. When the recipient is done, it executes the callback. Adding parameters to the callback allows the recipient to send data to the sender. This also allows the recipient to perform an async operation.
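A minimal sketch of that pattern, assuming a hand-rolled message class (the names are illustrative, not MVVM Light types):
public class SaveRequestMessage
{
    public SaveContainer Container { get; private set; }
    public Action<SaveContainer> Callback { get; private set; }

    public SaveRequestMessage(SaveContainer container, Action<SaveContainer> callback)
    {
        Container = container;
        Callback = callback;
    }
}

//Sender
MessengerInstance.Send(new SaveRequestMessage(new SaveContainer(), c => Console.WriteLine(c.User.Name)));

//Recipient
MessengerInstance.Register<SaveRequestMessage>(this, m =>
{
    m.Container.User = User; //populate the container
    m.Callback(m.Container); //tell the sender this recipient is done
});
Note that, as the answer above points out, with multiple recipients you will get one callback per recipient.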

Related

Manage state in Cosmos DB instead of in-memory for Bot to Human handover scenario

I am working on a bot with human handover features (human-to-human chat), where the bot is in charge of the whole communication. The user can start the communication with the bot, and if they are not satisfied with the bot's responses, they can ask for further assistance from a human.
The bot is able to connect the user to a live agent using a third-party system. The bot forwards the message from a dialog to an API endpoint of this system along with a callback URL. The third-party system uses a callback mechanism to pass the message written by the agent to this specified URL.
I have created an API controller endpoint and pass it to this system as the callback URL. When the agent sends a message, the system notifies this endpoint. It is a simple Web API controller with no direct affiliation to the Bot Framework.
I maintain Conversation and User State for the bot in Cosmos DB, with properties that track the status of the chat connection (ChatConnected, ChatClosed, etc.). To pass these message notifications to the bot, I maintain two concurrent dictionaries: one for the ConversationReference and a second for the TurnContext.
The ConversationReference lets me pass the agent's message from the bot to the user using ContinueConversationAsync.
The TurnContext lets me manage and update the state of these properties when a session closes, etc. I also use it to send a message after a certain period of inactivity, since the last turn carries an activity timestamp.
Both of these are in-memory, which means entries are added and removed as new chat sessions are created and as more messages are exchanged. I now want to move this out of memory into a shared cache or low-latency Cosmos DB, so that I can also auto-scale new instances of the bot service when required. I am using App Services currently, but due to this coupling new instances don't have access to the in-memory data and therefore cannot serve these requests. I don't think that enabling the AffinityCookie for bot scenarios actually works.
I am able to serialize the ConversationReference object (through Newtonsoft), but serializing the TurnContext throws a JSON serialization exception due to an internal loop in the object. I tried to mitigate that with serializer settings that ignore reference loops, but that does not work either: VS throws a stack overflow exception while debugging.
So how can I make this code independent of a singleton ConcurrentDictionary on a single instance?
private readonly ConcurrentDictionary<string, ITurnContext> TurnContextReferences;

private void AddTurnContext(ITurnContext turnContext, string sessionId)
{
    if (turnContext != null && !string.IsNullOrWhiteSpace(sessionId))
    {
        //Add the Session Id and TurnContext to the dictionary
        TurnContextReferences.AddOrUpdate(sessionId, turnContext, (key, newValue) => turnContext);
    }
}
//Using the above method inside a function

//Trim the incoming message
var userMessage = messageActivity.Text.Trim();
if (!string.IsNullOrWhiteSpace(userMessage))
{
    //Send the incoming message from the client to the agent
    await TPSystem.SendMessageAsync(messageActivity.Conversation.Id, conversationData.SessionId, messageActivity.Text.Trim());
}

//Add to the TurnContext dictionary
AddTurnContext(stepContext.Context, conversationData.SessionId);

//Inside the API controller
//Get the TurnContext from the dictionary
TurnContextReferences.TryGetValue(sessionStateChangedEventData.SessionId, out ITurnContext turnContext);
if (turnContext != null)
{
    var conversationData = await BotStateAccessors.ConversationStateAccessor.GetAsync(turnContext, () => new ConversationStateDataModel());
    if (!conversationData.LiveAgentChatClosed)
    {
        conversationData.LiveAgentChatClosed = true;
        await BotStateAccessors.ConversationStateAccessor.SetAsync(turnContext, conversationData);
        await BotConversationState.SaveChangesAsync(turnContext);
    }
}
Any ideas to think through would be appreciated.
Conversation references contain a subset of the information in activities, and an activity is just one property of a turn context, so a conversation reference contains a subset of the information in a turn context. It's redundant to be saving both conversation references and turn contexts because if you save turn contexts then you'll already have all the information from the conversation references.
That said, it's a very bad idea to try to save turn contexts. If you need some pieces of information that aren't in the conversation references then just save that specific information. For example, you can create your own class that contains a conversation reference and a timestamp that signifies the time of the last message from that conversation.
public class ConversationInfo
{
    [JsonProperty(PropertyName = "conversationReference")]
    public ConversationReference ConversationReference { get; set; }

    [JsonProperty(PropertyName = "timestamp")]
    public DateTimeOffset Timestamp { get; set; }
}
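From the API controller, you can then rehydrate a turn context on demand instead of caching one. Below is a hedged sketch: ContinueConversationAsync is the standard Bot Framework adapter method, but the store, its GetAsync lookup, and the variable names are illustrative assumptions.
//Inside the API controller: load the persisted ConversationInfo
//(hypothetical Cosmos-backed store keyed by session id)
ConversationInfo info = await conversationInfoStore.GetAsync(sessionStateChangedEventData.SessionId);

//Rebuild a turn context from the saved ConversationReference and
//run the state update inside the callback
await adapter.ContinueConversationAsync(
    botAppId,
    info.ConversationReference,
    async (turnContext, cancellationToken) =>
    {
        //State accessors work on this rehydrated turn context as well
        var conversationData = await BotStateAccessors.ConversationStateAccessor
            .GetAsync(turnContext, () => new ConversationStateDataModel(), cancellationToken);
        if (!conversationData.LiveAgentChatClosed)
        {
            conversationData.LiveAgentChatClosed = true;
            await BotStateAccessors.ConversationStateAccessor.SetAsync(turnContext, conversationData, cancellationToken);
            await BotConversationState.SaveChangesAsync(turnContext, cancellationToken: cancellationToken);
        }
    },
    CancellationToken.None);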

How to raise a domain event when I don't want to share the actual domain model

I'm trying to implement DDD in my small project, but I am not able to understand how to raise a domain event in the case below.
Account Domain
public class Account : BaseEntity
{
    public string PhoneNumber { get; set; }
    public int OTP { get; set; }

    public Account()
    {
    }

    public Account(string phoneNumber, short otp)
    {
        this.PhoneNumber = phoneNumber;
        this.OTP = otp;
        CreatedDate = DateTime.Now;
        RowKey = Guid.NewGuid().ToString();
        PartitionKey = phoneNumber;
    }
}
Account Service
public async Task<bool> GenerateOTP(string phoneNumber)
{
    if (phoneNumber.Length != 10)
        throw new ArgumentException(ApplicationConstraint.InvalidNumber);

    var otp = Convert.ToInt16(new Random().Next(1000, 9999));
    var account = new Account(phoneNumber, otp);
    await this.accountRepository.AddEntity(account);
    return true;
}
Account Repository (Azure Table Storage is my database)
public virtual async Task AddEntity(TEntity entity)
{
    TableOperation insertOperation = TableOperation.Insert(entity);
    await table.ExecuteAsync(insertOperation);
}
I want to raise a domain event only when the data has been saved in the database. As a workaround, I'm calling the messaging service from the account service.
Given the limited information provided, one option would be to create an AccountCreated event (or an EntityCreated event if this is a cross-cutting concern) and publish it through some bus where consumers can asynchronously receive it and do any subsequent processing needed.
The event need not use domain entities; it can carry the information/data necessary for any subsequent processing without the need to access a shared DB (and as such adhere to DDD & microservice guidelines), as in the sketch below.
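A hedged sketch of such an event as a plain DTO (the class name and properties are illustrative):
//A plain event DTO: no domain entities, just the data consumers need
public class AccountCreated
{
    public string AccountId { get; set; } //e.g. the RowKey
    public string PhoneNumber { get; set; }
    public DateTime CreatedDate { get; set; }
}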
----Edit----
In the above I assumed that this is an established system and Azure Storage isn't something that can change. Publishing an event and handling it is pretty simple, but there are some things you need to be aware of. In general, you have three options here:
1. Publishing right after saving isn't wrong. It's a simple way to do it, and (if you adopt an event-first methodology) you can do it in a generic way across your entities with minimal work. However, you need to be conscious of how to deal with errors. Specifically, if you store the entity first, before publishing the event, and the process then crashes for whatever reason, the event may be missed, so later workflows will not kick off. If you do the reverse (publish then store), you run the risk of double-publishing the event. In this case you have two options:
1.1. If you store-then-publish: just accept the (really rare) possibility of not publishing an event. This is something you need to discuss with the business, and you can mitigate the severity by logging the event before trying to save the entity.
1.2. If you publish-then-store (you'll need to do this if the cost of fixing any issues ad hoc is too great): you can fix the problem by having your consumers check the id of the incoming message to see if they have ever processed it before, and reject it if they have, OR make the process idempotent (if possible), meaning that doing the process twice isn't a problem.
2. Using event sourcing. This isn't difficult in my opinion, but it is obviously an overhead if this is a simple application, and while not difficult, it does require a significant amount of reading up if you're not familiar with it. If this is a non-trivial application, event sourcing can help a lot, because observers can just observe the events in the buffer and respond to them (so there is no need to explicitly publish the changes).
3. Append the event to a separate table within the same transaction in which you store the entity, and use the outbox pattern (publish those events from a separate service, marking them as published once they've been published). Honestly, the commonly shown version of this pattern is a bit simplistic and there are a lot of tricky little complexities, so prefer an existing implementation if you can find one.
Honestly, if you can get away with 1.1, do that. It's simple, and problems only very rarely appear. Just log the operation before you do it so that you can manually replay it in the rare case of issues.
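A minimal sketch of option 1.1 (log, store, then publish), assuming some IEventBus abstraction, an injected logger, and the AccountCreated event sketched above; none of these names come from a specific library:
public async Task<bool> GenerateOTP(string phoneNumber)
{
    if (phoneNumber.Length != 10)
        throw new ArgumentException(ApplicationConstraint.InvalidNumber);

    var otp = Convert.ToInt16(new Random().Next(1000, 9999));
    var account = new Account(phoneNumber, otp);
    var accountCreated = new AccountCreated
    {
        AccountId = account.RowKey,
        PhoneNumber = account.PhoneNumber,
        CreatedDate = DateTime.Now
    };

    //Log first, so a crash between save and publish can be replayed by hand
    logger.LogInformation("About to publish AccountCreated for {AccountId}", accountCreated.AccountId);

    await this.accountRepository.AddEntity(account);
    await this.eventBus.PublishAsync(accountCreated); //only after the save succeeded

    return true;
}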

How do I handle claim check with CQRS

TL;DR:
1. Am I creating an anti-pattern?
2. What is the best way to handle a claim check with CQRS?
I have several entry points into my system (Web API passing in JSON and XML), as well as the file system with fixed-length files.
I am using Rebus with MSMQ and SQL Server to manage my messaging. The data can be larger than 4 MB (MSMQ's max message size, I believe). When the system receives a file, I convert it into a stream and create a command that implements IAttachmentCommand as below:
public interface IAttachmentCommand : ICommand
{
    Stream Attachment { get; }

    IClaimCheckCommand ToClaimCheck(string attachmentId);
}

public interface IClaimCheckCommand : ICommand
{
    string AttachmentId { get; }
}
I then send it using a command bus (using Rebus). If the command is of type IAttachmentCommand, I create an attachment in the Rebus databus table and return a new IClaimCheckCommand via ToClaimCheck on the original command. The claim-check command is effectively a carbon copy of the original, except it carries the AttachmentId instead of the data.
I then call Send on my Rebus bus with the new claim-check command, as below:
public void Send<TCommand>(TCommand command) where TCommand : ICommand
{
    if (command is IAttachmentCommand)
    {
        var cmd = command as IAttachmentCommand;
        var task = CreateAttachment(cmd); // method excluded, but persists to the Rebus DataBus and returns the AttachmentId
        var claimCheck = task.Result;
        _activator.Bus.Send(claimCheck);
    }
    else
    {
        _activator.Bus.Send(command);
    }
}
This seems to be working, although I am happy to have my code pulled to shreds. I can send commands, apply the events that are generated by my aggregate roots, persist to the event store etc etc.
I simply pick up a file from a webapi call or the file system, create a command and send it off with my command bus.
In a separate Windows service I have a command dispatcher monitoring MSMQ for these messages. When a message comes in, it iterates through however many CommandValidationHandlers there are to validate the command. The validation handlers implement the following:
public interface ICommandValidationHandler<in TCommand> where TCommand : ICommand
{
    ValidationResult Validate(TCommand command);
}
ValidationResult effectively returns a collection of errors. These errors are logged and published as an InvalidCommand event that contains the command info and the errors; any subscribers that are listening can then pick up the event and send a mail, call a web service, etc. to say that the message failed, and why. If the command is invalid, an exception is thrown and the process stops.
My concern is that at validation time I only have the AttachmentId and have to retrieve the file, which is then validated, for example against an XSD.
From there I need to deserialize it to an object (generally a collection of financial transactions, with a header containing metadata such as the number of transactions) and perform extra validation on the data in the object.
Once this validation is complete, I need to iterate through the collection of transactions in the object and send these to their relevant bounded contexts using the command bus, where further processing takes place.
It seems in this instance that I will be hitting the claim store a number of times: once per validation handler (although I guess this could be resolved with a composite collection of validators), and then again in the command handler once validation has taken place.
The various event handlers that need access to all the data then have to retrieve it from the claim store and deserialize it each time.
This seems like a code smell to me. Should I consider caching the file the first time I retrieve it and clearing it from the cache once all event handlers have finished their work?
Does anybody have better suggestions?
From what I understand of your problem, the question really is: "should I use a caching mechanism for reading the claim store in the validation handlers?"
In your case, because the data in the claim store is immutable, you can cache it as long as you need it. That is the beauty of immutable data: it is forever cacheable.
To implement the caching mechanism, you could use the decorator pattern over the claim store and switch to the cached version in your composition root in the dependency container. That way you can switch back to the uncached one at any time.
You could go even further and also cache the result of the validation, if the validated data never changes and the validation is repeated over time.
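A rough sketch of that decorator, assuming a minimal claim-store interface (the interface and class names are illustrative, not Rebus types):
public interface IClaimStore
{
    Task<Stream> GetAttachmentAsync(string attachmentId);
}

public class CachingClaimStore : IClaimStore
{
    private readonly IClaimStore _inner;
    private readonly ConcurrentDictionary<string, byte[]> _cache = new ConcurrentDictionary<string, byte[]>();

    public CachingClaimStore(IClaimStore inner)
    {
        _inner = inner;
    }

    public async Task<Stream> GetAttachmentAsync(string attachmentId)
    {
        //Immutable data: once fetched, it can be cached indefinitely
        if (!_cache.TryGetValue(attachmentId, out var bytes))
        {
            using (var source = await _inner.GetAttachmentAsync(attachmentId))
            using (var buffer = new MemoryStream())
            {
                await source.CopyToAsync(buffer);
                bytes = buffer.ToArray();
            }
            _cache.TryAdd(attachmentId, bytes);
        }
        return new MemoryStream(bytes);
    }
}
In the composition root you register CachingClaimStore wrapping the real store, and can swap back to the undecorated version at any time.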

Webhooks per entity

I'm using the ASP.NET Webhooks packages to allow users to receive callbacks when certain events occur in my application.
e.g. entityUpdated, entityCreated, entityDeleted
I would like to expose the possibility for users to register webhooks only for updates of specific entities, in case they are only interested in receiving callbacks for those specific entities.
e.g. entityUpdated for entity1
The filters seem like a good candidate for implementing this behavior. Users can subscribe to events using filters.
e.g. entity* (to receive all events concerning entities)
So I was thinking of exposing events per entity like: entity_1_Updated.
That would mean the list of exposed events will change during the runtime of the application (as entities get created or deleted).
More concretely, the implementation of IWebHookFilterProvider would perform a database query to fetch the list of entities for which events can occur.
Like so:
class EntityWebHookFilterProvider : IWebHookFilterProvider
{
    public async Task<Collection<WebHookFilter>> GetFiltersAsync()
    {
        List<int> ids = await repository.GetAllUpdatableEntitiesAsync();
        return new Collection<WebHookFilter>(
            ids.Select(id => new WebHookFilter { Name = string.Format("entity_{0}_Updated", id) }).ToList());
    }
}
Would this be a good solution? Or should the list of events/filters be fixed?
An easier way may be to use a separate field in the registration to indicate the specific ID the subscriber is interested in, using the Properties part of the WebHook registration.
Then, when you send a notification on the server side, you can use the overload that takes a Func, enabling you to filter so that webhooks are only generated when the ID matches that of the WebHook registration, for example:
// Create an event with action 'event1' and additional data
// (the "entityId" key is illustrative; use whatever your registrations store)
await this.NotifyAsync("event1", new { P1 = "p1", EntityId = 1 }, (w, s) =>
{
    // Check that the property included in the event data matches that
    // of the WebHook registration
    return w.Properties.TryGetValue("entityId", out var id) && Equals(id, 1);
});
Hope this helps,
Henrik

Handling client/server messages?

From my client/server I receive serialized data; once the data is deserialized, it goes into a command handler, where receivedData.Action is the ClientMessage:
Command._handlers[receivedData.Action].Handle(receivedData.Profile);
The command handler will work out the client message and return the response that should be given to the client.
I have an enum for the client messages as follow:
public enum ClientMessage
{
    INIT = 1,
    NEW_PROFILE,
    UPDATE_PROFILE_EMAIL,
    UPDATE_PROFILE_PASSWORD,
    UPDATE_PROFILE_PHONE,
    UPDATE_PROFILE_DATE,
    UPDATE_PROFILE_SECRET_ANSWER,
    UPDATE_PROFILE_POSTAL_CODE,
    UPDATE_SUCCESS,
    PING,
    PONG,
    QUIT
}
What I am having difficulty with is how to define all the actions. For example:
Should I have a separate enum for what the client sends and another for what the server should reply with?
Or should I have a single enum with all messages and follow it as requested?
Or how else should I go about defining the messages and handling them?
This is what my server/client currently does just to give you a better view:
Server starts
Client connects
Client send auth to server
Server verify client and send connected approval message
Client will from there start sending and updating profiles to the server
This is roughly an example only.
IPacketHandler
public interface IPacketHandler
{
    MyCommunicationData Handle(ProfileData profile);
}
Command
public class Command
{
    public static Dictionary<ClientMessage, IPacketHandler> _handlers = new Dictionary<ClientMessage, IPacketHandler>()
    {
        { ClientMessage.INIT, new Init() },
        { ClientMessage.NEW_PROFILE, new NewProfile() },
        { ClientMessage.UPDATE_PROFILE_EMAIL, new UpdateEmail() },
        { ClientMessage.UPDATE_PROFILE_PASSWORD, new UpdatePassword() },
        { ClientMessage.UPDATE_PROFILE_PHONE, new UpdatePhone() },
        { ClientMessage.UPDATE_PROFILE_DATE, new UpdateDate() },
        { ClientMessage.UPDATE_PROFILE_SECRET_ANSWER, new UpdateSecretAnswer() },
        { ClientMessage.UPDATE_PROFILE_POSTAL_CODE, new UpdatePostalCode() },
        { ClientMessage.UPDATE_SUCCESS, new Success() },
        { ClientMessage.PING, new Ping() },
        { ClientMessage.PONG, new Pong() },
        { ClientMessage.QUIT, new Quit() },
    };
}
Example of the INIT
public class Init : IPacketHandler
{
    public MyCommunicationData Handle(ProfileData profile)
    {
        // Some verification to auth the client here
        // bla bla
        // return response (note: CONNECTED does not appear in the ClientMessage
        // enum above, which hints that replies may belong in their own enum)
        return new MyCommunicationData() { Action = ClientMessage.CONNECTED };
    }
}
PS: If my title is off and you have a better suggestion, let me know, or go ahead and update it; I was not sure how to describe this in English.
If your question is about how to design the classes and interactions, as I understood it, then I would (and it's totally dependent on the specifics of your application) separate this big enumeration type into separate, smaller ones that are more descriptive of what they do and of your intentions, for example ProfileAction, ActionResult, PingStatus, etc. Then, when you use these enums, you get compile-time checks that you're doing it correctly; otherwise, what you're doing is almost like just passing strings.
It also has to do with sticking to the Single Responsibility Principle in OO design: an object should have a single responsibility. Your enum as it stands now has more than one responsibility.
With issues like these, I find it helpful to look at what the .NET Framework does: for example, look at the Ping class and how it uses the PingStatus enumeration, as well as other enumerations.
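An illustrative split along those lines (the names are suggestions, not from the question's codebase):
public enum ProfileAction
{
    Init = 1,
    NewProfile,
    UpdateEmail,
    UpdatePassword,
    UpdatePhone,
    UpdateDate,
    UpdateSecretAnswer,
    UpdatePostalCode
}

public enum ActionResult
{
    Connected,
    UpdateSuccess,
    Failed
}

public enum PingStatus
{
    Ping,
    Pong
}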
I'm not sure I'd use an enum at all. They are great inside a piece of code; exposed as a communicated value, they are considerably less great.
For me, I'd have a different class per message, not one message type with a god property.
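A minimal sketch of the class-per-message idea (the types are illustrative):
public interface IMessage { }

public sealed class UpdateEmailMessage : IMessage
{
    public string NewEmail { get; private set; }

    public UpdateEmailMessage(string newEmail)
    {
        NewEmail = newEmail;
    }
}

public sealed class PingMessage : IMessage { }

//Handlers can then be resolved by message type rather than by enum value,
//e.g. a Dictionary<Type, IPacketHandler> keyed by typeof(UpdateEmailMessage)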
