Imagine you have a client and a server communicating via some kind of message bus, which has this interface:
interface IBus {
    void Send(Message m);
    void Receive(Message m);
}
with Message being some POCO like this:
class Message {
    public Guid Id { get; set; }
    public string Data { get; set; }
}
So you can send a message via Send, and when the response arrives, the messaging infrastructure invokes the Receive method, which is essentially just a callback. What I would like to do now is write a method that lets me wait for a response:
Message WaitForResponse(Message request);
and use it like this:
var response = WaitForResponse(request);
Console.Write(response.Data);
I tried using TaskCompletionSource for this and it works great, but it requires async/await, and I already have a lot of code written in this synchronous style. That code currently uses ManualResetEventSlim objects stored in a ConcurrentDictionary for synchronization, but it runs into performance problems once the number of requests waiting for a response grows to a couple of hundred (I assume because all those threads are blocked in manualResetEventSlim.Wait()). I suspect there is a better way to do this that only requires changing the implementation of WaitForResponse and keeps all method signatures untouched.
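For what it's worth, a minimal sketch of the TaskCompletionSource approach that keeps the synchronous signature is shown below. It assumes the response carries the same Id as the request (a correlation convention of mine) and that the infrastructure can be pointed at this Receive as its callback; note that blocking on tcs.Task.Result still parks one thread per outstanding request, just like the ManualResetEventSlim version.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class RequestResponseClient
{
    private readonly IBus _bus;
    private readonly ConcurrentDictionary<Guid, TaskCompletionSource<Message>> _pending =
        new ConcurrentDictionary<Guid, TaskCompletionSource<Message>>();

    public RequestResponseClient(IBus bus)
    {
        _bus = bus;
    }

    // The messaging infrastructure invokes this when a response arrives.
    public void Receive(Message m)
    {
        TaskCompletionSource<Message> tcs;
        if (_pending.TryRemove(m.Id, out tcs))
            tcs.SetResult(m); // wakes up the waiting caller
    }

    public Message WaitForResponse(Message request)
    {
        var tcs = new TaskCompletionSource<Message>();
        _pending[request.Id] = tcs;
        _bus.Send(request);
        return tcs.Task.Result; // blocks; no async/await needed in callers
    }
}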
TL;DR:
1. Am I creating an anti-pattern?
2. What is the best way to handle a claim check with CQRS?
I have several entry points into my system (a Web API passing in JSON and XML) as well as the file system with fixed-length files.
I am using Rebus with MSMQ and SQL Server to manage my messaging. The data can be larger than 4 MB (MSMQ's maximum message size, if I remember correctly). When the system receives a file, I convert it into a stream and create a command that implements IAttachmentCommand, as below:
public interface IAttachmentCommand : ICommand
{
    Stream Attachment { get; }

    IClaimCheckCommand ToClaimCheck(string attachmentId);
}

public interface IClaimCheckCommand : ICommand
{
    string AttachmentId { get; }
}
I then send it using a command bus (Rebus). If the command is of type IAttachmentCommand, I create an attachment in the Rebus data bus table and get a new IClaimCheckCommand by calling ToClaimCheck on the original command. The claim-check command is effectively a carbon copy of the original command, except that it carries the attachment ID instead of the data.
I then call Send on my Rebus bus with the new claim-check command, as below:
public void Send<TCommand>(TCommand command) where TCommand : ICommand
{
    if (command is IAttachmentCommand)
    {
        var cmd = command as IAttachmentCommand;
        var task = CreateAttachment(cmd); // method excluded; persists to the Rebus data bus and returns the claim-check command carrying the attachment ID
        var claimCheck = task.Result;
        _activator.Bus.Send(claimCheck);
    }
    else
    {
        _activator.Bus.Send(command);
    }
}
This seems to be working, although I am happy to have my code pulled to shreds. I can send commands, apply the events generated by my aggregate roots, persist to the event store, etc.
I simply pick up a file from a Web API call or the file system, create a command and send it off with my command bus.
In a separate Windows service I have a command dispatcher monitoring MSMQ for these messages. When a message comes in, it iterates through however many CommandValidationHandlers there are to validate the command. Each handler implements the following:
public interface ICommandValidationHandler<in TCommand> where TCommand : ICommand
{
    ValidationResult Validate(TCommand command);
}
ValidationResult is effectively a collection of errors. These errors are logged and published as an InvalidCommand event that contains the command info and the errors; any listening subscribers can then pick up the event and send a mail, call a web service, etc. to report that the message failed and why. If the command is invalid, an exception is thrown and the process stops.
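For reference, here is a plausible shape for ValidationResult consistent with the description above (an assumption on my part; the actual class is not shown):

using System.Collections.Generic;

public class ValidationResult
{
    private readonly List<string> _errors = new List<string>();

    public IEnumerable<string> Errors { get { return _errors; } }

    public bool IsValid { get { return _errors.Count == 0; } }

    public void AddError(string error)
    {
        _errors.Add(error);
    }
}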
My concern is that at validation time I only have the attachment ID, so I have to retrieve the file before it can be validated, for example against an XSD.
From there I need to deserialize it into an object (generally a collection of financial transactions, plus a header containing metadata such as the number of transactions) and perform extra validation on the data in the object.
Once this validation is complete, I iterate through the collection of transactions in the object and send each to its relevant bounded context using the command bus, where further processing takes place.
It seems in this instance that I will be hitting the claim store a number of times: once for each validation handler (although I guess this could be resolved with a composite collection of validators, sketched below), and then again in the command handler once validation has taken place.
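A sketch of that composite idea, reusing the assumed ValidationResult shape above: the dispatcher sees a single handler, and combined with a cached claim store the attachment is only fetched once, no matter how many inner validators run.

using System.Collections.Generic;

public class CompositeValidationHandler<TCommand> : ICommandValidationHandler<TCommand>
    where TCommand : ICommand
{
    private readonly IEnumerable<ICommandValidationHandler<TCommand>> _inner;

    public CompositeValidationHandler(IEnumerable<ICommandValidationHandler<TCommand>> inner)
    {
        _inner = inner;
    }

    public ValidationResult Validate(TCommand command)
    {
        // Aggregate the errors from every inner validator into one result.
        var result = new ValidationResult();
        foreach (var handler in _inner)
        {
            foreach (var error in handler.Validate(command).Errors)
            {
                result.AddError(error);
            }
        }
        return result;
    }
}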
In the various event handlers that need access to all the data, I have to retrieve it from the claim store each time and deserialize it a number of times.
This seems like a code smell to me. Should I consider caching the file the first time I retrieve it and clearing it from the cache once all event handlers have finished their work?
Does anybody have better suggestions?
From what I understand of your problem, the question really is: "should I use a caching mechanism for reading the claim store in the validation handlers?"
In your case, because the data in the claim store is immutable, you could cache it as long as you need it. That is the beauty of immutable data: it is forever cacheable.
To implement the caching mechanism you could use the decorator pattern over the claim store and switch to the cached version in your composition root in the dependency container. That way you can switch back to the uncached one at any time.
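A minimal sketch of such a decorator, assuming a hypothetical IClaimStore abstraction over the claim store (Rebus's actual data bus API differs):

using System.Collections.Concurrent;

// Hypothetical abstraction over the claim store.
public interface IClaimStore
{
    byte[] Read(string attachmentId);
}

// Caching decorator: safe precisely because claim-store data is immutable.
public class CachingClaimStore : IClaimStore
{
    private readonly IClaimStore _inner;
    private readonly ConcurrentDictionary<string, byte[]> _cache =
        new ConcurrentDictionary<string, byte[]>();

    public CachingClaimStore(IClaimStore inner)
    {
        _inner = inner;
    }

    public byte[] Read(string attachmentId)
    {
        // The first read hits the store; subsequent reads are served from memory.
        return _cache.GetOrAdd(attachmentId, id => _inner.Read(id));
    }
}

Registering CachingClaimStore around the real store in the composition root makes the caching a one-line decision that can be reverted at any time.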
You could take this even further and cache the result of the validation as well, if the validated data never changes and the same data shows up repeatedly over time.
Let's imagine I have a WCF service and a client that consumes some of its methods.
There are tons of posts on how to handle various exceptions during client/service communication. The only thing that still confuses me is the following case:
Service:
[ServiceContract]
public interface IService1
{
    [OperationContract]
    bool ExportData(object data);
}

public class Service1 : IService1
{
    public bool ExportData(object data)
    {
        // Simulate long operation (i.e. inserting data to the DB)
        Thread.Sleep(1000000);
        return true;
    }
}
Client:
class Program
{
    static wsService1.Service1Client client1 = new wsService1.Service1Client();

    static void Main(string[] args)
    {
        object data = GetRecordsFromLocalDB();
        bool result = client1.ExportData(data);
        if (result)
        {
            DeleteRecordsFromLocalDB();
        }
    }
}
The client gets some data from a local DB and sends it to the server. If the result is successful, the client removes the exported rows from the local DB. Now imagine that after the data has been sent to the server, the connection suddenly fails (e.g. the WiFi disconnects). In this case the data is successfully processed on the server side, but the client never knows about it. Yes, I can catch the connection exception, but I still don't know what to do with the records in my local DB. I can send the data again later, but then I get duplicates in the server DB (duplication is allowed on the remote DB), and I don't want to send the same data multiple times.
So, my question is: how do I handle such cases? What are the best practices?
I read up on asynchronous operations, but that still only covers the case where I have a stable connection.
As a workaround I could store my export operation under some GUID remotely and check the status of that GUID later; the only problem is that I can't change the remote DB. So please suggest what would work best in my case.
Here are some points to consider:
On the server side you can catch all kinds of errors (with a custom class implementing IErrorHandler) and return a specific error to the client, letting it know the reason for the failure.
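For illustration, a minimal IErrorHandler sketch (it still has to be attached to the service via a behavior, which is omitted here):

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Converts unhandled service-side exceptions into a fault the client can inspect.
public class ExportErrorHandler : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        // Log the exception here; returning true marks it as handled.
        return true;
    }

    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        var faultException = new FaultException("Export failed: " + error.Message);
        MessageFault messageFault = faultException.CreateMessageFault();
        fault = Message.CreateMessage(version, messageFault, faultException.Action);
    }
}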
The concept of a service is that it acts as a kind of intermediary between the client and the database, so why would the client retrieve data and then send it to the service?
One way out is to use a transaction, which ensures that if an error occurs, no changes are retained.
By the way, if you expect the service to throw an exception, do not create a global client object, since it will end up in a faulted state. Create a new instance for every single call instead (making use of a using statement so its instance gets disposed). A bool return type does not provide much information about any error that takes place; give the operation a void return type and wrap the call in a try/catch block, which gives you a chance to learn more about the source and nature of the error.
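A sketch of the per-call pattern; note that many WCF guides prefer an explicit Close/Abort over a bare using block, because Dispose can itself throw when the channel is faulted:

using System;

static void Export(object data)
{
    var client = new wsService1.Service1Client();
    try
    {
        client.ExportData(data);
        client.Close();
    }
    catch
    {
        client.Abort(); // the channel may be faulted; don't let Close throw again
        throw;          // inspect FaultException/CommunicationException upstream
    }
}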
I have been trying to adopt Rebus in one of my applications. It is easy to configure and everything works well. I have to implement pub/sub communication to get responses from multiple sources.
So what I made is:
Saga (Publisher)
SearchProductSaga : Saga<ProductSagaData>,
                    IAmInitiatedBy<SearchProduct>,
                    IHandleMessages<SearchStarted>,
                    IHandleMessages<SearchProductResponse>,
                    IHandleMessages<SearchCompleted>
The input queue for the saga is ProductSaga.Queue.
Subscriber 1
contains the following sequence of execution:
public class ProductHandler_1 : IHandleMessages<SearchProduct>
{
    public void Handle(SearchProduct message)
    {
        Bus.Reply(new SearchStarted());
        // Some business logic to find products
        Bus.Reply(new SearchProductResponse());
        Bus.Reply(new SearchCompleted());
    }
}
Subscriber 2
contains the same sequence of execution but different business logic:
public class ProductHandler_2 : IHandleMessages<SearchProduct>
{
    public void Handle(SearchProduct message)
    {
        Bus.Reply(new SearchStarted());
        // Some business logic to find products
        Bus.Reply(new SearchProductResponse());
        Bus.Reply(new SearchCompleted());
    }
}
Now, after this implementation, what I was expecting is:
I should be able to count the number of currently executing subscribers from the SearchStarted messages received by SearchProductSaga;
and once the subscribers are done with their business logic, they would send a SearchCompleted message to tell the saga "we are done", at which point I would call MarkAsComplete() on the saga.
But the result I'm getting is quite disappointing. What I found is that if you reply multiple times from a handler (like the execution sequence in my subscriber logic), all the messages are sent to the publisher's queue together, once the handler's execution scope ends.
Correct me if I'm wrong, and please suggest a solution if anyone has one. I could achieve the same with threading, but I don't want to manage it myself, so is there any asynchronous way to push messages to the queue as and when Reply is called in code?
What you're experiencing is a consequence of the fact that a message is handled in a queue transaction in which all outgoing messages are sent as well.
This means that all sent messages, even though they may have been delivered to whichever queueing system you're using, will not be delivered to anyone until the transaction is committed.
This also means that you'd have to divide your saga actions into multiple discrete steps in order to achieve what you're after.
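To make those discrete steps concrete, here is a rough sketch. PerformSearch is a made-up message type, and the responses are assumed to be routed back to ProductSaga.Queue via endpoint mappings; each handler's transaction commits (and its messages actually go out) before the next step runs.

// Step 1: acknowledge immediately, then hand the real work to a second handler.
public class ProductSearchStep1 : IHandleMessages<SearchProduct>
{
    readonly IBus _bus;

    public ProductSearchStep1(IBus bus) { _bus = bus; }

    public void Handle(SearchProduct message)
    {
        _bus.Reply(new SearchStarted());     // goes out when this transaction commits
        _bus.SendLocal(new PerformSearch()); // hypothetical message triggering step 2
    }
}

// Step 2: do the work and send the remaining messages in a separate transaction.
public class ProductSearchStep2 : IHandleMessages<PerformSearch>
{
    readonly IBus _bus;

    public ProductSearchStep2(IBus bus) { _bus = bus; }

    public void Handle(PerformSearch message)
    {
        // ...business logic to find products...
        _bus.Send(new SearchProductResponse()); // routed to ProductSaga.Queue by mapping
        _bus.Send(new SearchCompleted());
    }
}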
Does that make sense?
From my client/server I receive serialized data. Once the data is deserialized, it goes into a command handler, where receivedData.Action is the ClientMessage:
Command._handlers[receivedData.Action].Handle(receivedData.Profile);
The command handler will work out the client message and return the response that should be given to the client.
I have an enum for the client messages as follows:
public enum ClientMessage
{
    INIT = 1,
    NEW_PROFILE,
    UPDATE_PROFILE_EMAIL,
    UPDATE_PROFILE_PASSWORD,
    UPDATE_PROFILE_PHONE,
    UPDATE_PROFILE_DATE,
    UPDATE_PROFILE_SECRET_ANSWER,
    UPDATE_PROFILE_POSTAL_CODE,
    UPDATE_SUCCESS,
    PING,
    PONG,
    QUIT
}
What I am having difficulty with is how to organize all the actions, for example:
Should I have a separate enum for what the client sends and another for what the server should reply with?
Or should I have a single enum with all messages and follow it as requested?
Or how should I go about defining the messages and handling them?
This is what my server/client currently does just to give you a better view:
Server starts
Client connects
Client sends auth to server
Server verifies the client and sends a connected-approval message
Client will from there start sending and updating profiles to the server
This is just a rough example.
IPacketHandler
public interface IPacketHandler
{
    MyCommunicationData Handle(ProfileData profile);
}
Command
public class Command
{
    public static Dictionary<ClientMessage, IPacketHandler> _handlers = new Dictionary<ClientMessage, IPacketHandler>()
    {
        { ClientMessage.INIT, new Init() },
        { ClientMessage.NEW_PROFILE, new NewProfile() },
        { ClientMessage.UPDATE_PROFILE_EMAIL, new UpdateEmail() },
        { ClientMessage.UPDATE_PROFILE_PASSWORD, new UpdatePassword() },
        { ClientMessage.UPDATE_PROFILE_PHONE, new UpdatePhone() },
        { ClientMessage.UPDATE_PROFILE_DATE, new UpdateDate() },
        { ClientMessage.UPDATE_PROFILE_SECRET_ANSWER, new UpdateSecretAnswer() },
        { ClientMessage.UPDATE_PROFILE_POSTAL_CODE, new UpdatePostalCode() },
        { ClientMessage.UPDATE_SUCCESS, new Success() },
        { ClientMessage.PING, new Ping() },
        { ClientMessage.PONG, new Pong() },
        { ClientMessage.QUIT, new Quit() },
    };
}
Example of the INIT handler:
public class Init : IPacketHandler
{
    public MyCommunicationData Handle(ProfileData profile)
    {
        // Some verification to auth the client here
        // bla bla
        // Return the response. Note: CONNECTED is a server-to-client reply
        // and is not defined in the ClientMessage enum above.
        return new MyCommunicationData() { Action = ClientMessage.CONNECTED };
    }
}
PS: If my title is off and you have a better suggestion, let me know or go ahead and update it; I was not sure how to describe this in English.
If your question is about how to design the classes and interactions, as I understood it, then I would (and it totally depends on the specifics of your application) separate this big enumeration type into separate, smaller ones that are more descriptive of what they do and of your intentions, for example ProfileAction, ActionResult, PingStatus, etc. Then when you use these enums, you get compile-time checks that you're doing it correctly; otherwise, what you're doing is almost like just passing strings around.
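For example, a sketch of what that split might look like (the names are illustrative, borrowed from the suggestion above):

// Smaller, intention-revealing enums instead of one catch-all type.
public enum ProfileAction
{
    Init,
    NewProfile,
    UpdateEmail,
    UpdatePassword,
    UpdatePhone,
    UpdateDate,
    UpdateSecretAnswer,
    UpdatePostalCode
}

public enum ActionResult
{
    Success,
    ValidationFailed,
    NotAuthorized
}

A handler that updates a profile can then accept a ProfileAction and return an ActionResult, and the compiler will reject a ping value where a profile action is expected.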
It also has to do with sticking to the Single Responsibility Principle in OO design: an object should have a single responsibility. Your enum as it stands has more than one.
With issues like these, I find it helpful to look at what the .NET Framework does: for example, look at the Ping class and how it uses the IPStatus enumeration, among others.
Not sure I'd use an enum at all. They are great inside a piece of code; exposed as a communicated value, they are considerably less than great.
For me, I'd have a different class per message, not one message type with a god property.
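A sketch of what that might look like, with all names illustrative: the CLR type itself identifies the message, and dispatch is keyed on the type rather than on an enum value.

using System;
using System.Collections.Generic;

public interface IMessage { }

public class PingMessage : IMessage { }

public class UpdateEmailMessage : IMessage
{
    public string NewEmail { get; set; }
}

public interface IMessageHandler
{
    void Handle(IMessage message);
}

public class Dispatcher
{
    private readonly Dictionary<Type, IMessageHandler> _handlers =
        new Dictionary<Type, IMessageHandler>();

    public void Register<T>(IMessageHandler handler) where T : IMessage
    {
        _handlers[typeof(T)] = handler;
    }

    public void Dispatch(IMessage message)
    {
        // The message's own type picks the handler; no god property required.
        _handlers[message.GetType()].Handle(message);
    }
}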
I'm trying to build a small message/event system where messages may be requests.
Request handlers implement the IHandlerOf<T> interface, like this:
public class UserService : IHandlerOf<ISearchRequest>
{
    public void ProcessRequest(ISearchRequest request)
    {
    }
}
I'm unsure of how I should handle replies since multiple handlers can "answer" a request. How would you design the reply part? Build a list of replies in the message broker, or include the reply object in the process method and let all handlers work against the same reply object?
Examples would be appreciated.
Or do you have any links to existing solutions? Using service buses (like NServiceBus) seems a bit overkill since everything is in-process.
Update
My current solution (work in progress): the broker creates the response object by inspecting the IHandlerOf<> interface that is registered for the request type passed to BeginRequest.
The downside of the solution is that nothing ties the request and reply together, so there are no compile errors if an incorrect reply type is mapped to a request type, although the broker would throw an error during registration if a request got two different response types.
The broker wraps each handler invocation in try/catch so that it can continue processing the request handlers even if one of them throws an exception. I haven't really decided what to do with the exceptions yet; one handler might throw while another successfully handles the request.
The handler interface:
// interface defining a class which handles a request
public interface IHandlerOf<TRequest, TResponse>
    where TRequest : IRequest
    where TResponse : IResponse
{
    void ProcessRequest(IRequestContext<TRequest, TResponse> context);
}
Example implementation
public class FindContactsRequest : IRequest
{
    public string SearchValue { get; set; }
}

public class FindContactsResponse : IResponse
{
    public ICollection<string> Contacts { get; set; }
}

public class UserService : IHandlerOf<FindContactsRequest, FindContactsResponse>
{
    public void ProcessRequest(IRequestContext<FindContactsRequest, FindContactsResponse> context)
    {
        if (context.Request.SearchValue == "blabla")
        {
            context.Response.Contacts.Add("My contact name");
        }
    }
}
Broker interface:
public interface IMessageBroker
{
    IAsyncResult BeginRequest(IRequest request, AsyncCallback callback, object state);

    T EndRequest<T>(IAsyncResult result) where T : IResponse;
}
Sample usage
var ar = _broker.BeginRequest(new FindContactsRequest { SearchValue = "blabla" }, null, null);
var response = _broker.EndRequest<FindContactsResponse>(ar);
Console.WriteLine("Woho, found " + response.Contacts.Count + " contacts.");
If all of the handlers work against the same reply object, then the reply object needs some kind of logic to prevent a bad handler from destroying the replies from other handlers. That is, if the reply object contained a List<string>, for example, a misbehaving handler could call Clear on the list and all would be lost. So the reply object would need to wrap that list (by providing an AddReply method or some such) to prevent such behavior.
Also, if all of the handlers work against the same reply object, then multithreaded request handling becomes more difficult. The reply object has to handle thread synchronization to prevent data corruption.
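A sketch of such a guarded, thread-safe reply object, assuming string replies as in the List<string> example above:

using System.Collections.Generic;

public class ReplyCollector
{
    private readonly object _gate = new object();
    private readonly List<string> _replies = new List<string>();

    // Handlers can only add; nothing lets them clear or replace
    // what other handlers have contributed.
    public void AddReply(string reply)
    {
        lock (_gate)
        {
            _replies.Add(reply);
        }
    }

    // Returns a snapshot so callers cannot mutate the internal list.
    public IList<string> GetReplies()
    {
        lock (_gate)
        {
            return _replies.ToArray();
        }
    }
}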
If, on the other hand, the message broker handles combining the replies, you're much more flexible. It can call each handler in turn (sequentially), or it can use asynchronous calls to run multiple handlers in parallel. It seems like the message broker would be the easier and more flexible place to put the logic for combining replies.
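For comparison, a sketch of broker-side combining (the handler delegate shape here is an assumption): each handler produces its own replies, and a failing handler cannot disturb what has already been collected.

using System;
using System.Collections.Generic;

public static class ReplyCombiner
{
    public static IList<string> GatherReplies(IEnumerable<Func<IList<string>>> handlers)
    {
        var combined = new List<string>();
        foreach (var handler in handlers)
        {
            try
            {
                combined.AddRange(handler()); // each handler returns its own replies
            }
            catch (Exception)
            {
                // A misbehaving handler is isolated; replies already
                // collected from the other handlers are untouched.
            }
        }
        return combined;
    }
}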