I have been trying to adopt Rebus in one of my applications. It is easy to configure and everything works well. Now I have to implement pub/sub communication in order to receive responses from multiple sources.
So what I made is:
Saga(Publisher)
SearchProductSaga : Saga<ProductSagaData>, IAmInitiatedBy<SearchProduct>, IHandleMessages<SearchStarted>, IHandleMessages<SearchProductResponse>, IHandleMessages<SearchCompleted>
The input queue for the saga is ProductSaga.Queue
Subscriber 1
contains the following sequence of execution:
public class ProductHandler_1 : IHandleMessages<SearchProduct>
{
    public void Handle(SearchProduct message)
    {
        Bus.Reply(new SearchStarted());

        // Some business logic to find products

        Bus.Reply(new SearchProductResponse());
        Bus.Reply(new SearchCompleted());
    }
}
Subscriber 2
contains the same sequence of execution, but different business logic:
public class ProductHandler_2 : IHandleMessages<SearchProduct>
{
    public void Handle(SearchProduct message)
    {
        Bus.Reply(new SearchStarted());

        // Some different business logic to find products

        Bus.Reply(new SearchProductResponse());
        Bus.Reply(new SearchCompleted());
    }
}
Now, after this implementation, what I was expecting is:
I should be able to count the number of currently executing subscribers by receiving SearchStarted messages in SearchProductSaga;
and once a subscriber is done with its business logic, it would send a SearchCompleted message to tell the saga "we are done", at which point the saga executes MarkAsComplete().
But the result I'm getting is quite disappointing. What I found is that if you reply multiple times from a handler (like the execution sequence in my subscriber logic), all the messages are sent to the publisher queue together, once the handler's execution scope ends.
Correct me if I'm wrong, and suggest a solution if anyone has one. I could achieve the same with threading, but I don't want to manage that myself, so: is there an asynchronous way to push messages to the queue as and when they are replied to from code?
What you're experiencing is a consequence of the fact that a message is handled in a queue transaction in which all outgoing messages are sent as well.
This means that all sent messages, even though they may have been delivered to whichever queueing system you're using, will not be delivered to anyone until the transaction is committed.
This also means that you'd have to divide your saga actions into multiple discrete steps in order to achieve what you're after.
Does that make sense?
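For what it's worth, "multiple discrete steps" could be sketched roughly like this, assuming Rebus' current async handler style. FindProducts is a hypothetical step message and the wiring is only illustrative: each handler completes (and commits its queue transaction, releasing its outgoing messages) before the next step begins. Since Reply only works for the message currently being handled, the saga's queue name travels along in the step message.

```csharp
using System.Threading.Tasks;
using Rebus.Bus;
using Rebus.Handlers;

// Message types from the question, sketched as empty classes
public class SearchProduct { }
public class SearchStarted { }
public class SearchProductResponse { }
public class SearchCompleted { }

// Hypothetical step message that carries the work over into a second transaction
public class FindProducts
{
    public string ReplyTo { get; set; }
}

public class ProductHandler_1 : IHandleMessages<SearchProduct>, IHandleMessages<FindProducts>
{
    readonly IBus _bus;

    public ProductHandler_1(IBus bus) => _bus = bus;

    public async Task Handle(SearchProduct message)
    {
        // step 1: this reply is committed as soon as this handler completes
        await _bus.Reply(new SearchStarted());

        // hand the rest of the work to ourselves as a new message (= a new
        // queue transaction), carrying the saga's queue along explicitly
        await _bus.SendLocal(new FindProducts { ReplyTo = "ProductSaga.Queue" });
    }

    public async Task Handle(FindProducts message)
    {
        // step 2: the actual search; these messages are committed when this
        // (separate) handler invocation completes
        await _bus.Advanced.Routing.Send(message.ReplyTo, new SearchProductResponse());
        await _bus.Advanced.Routing.Send(message.ReplyTo, new SearchCompleted());
    }
}
```

This way the saga sees SearchStarted as soon as step 1 commits, instead of receiving all three messages at once when the whole handler finishes.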
I've got an Azure app service with a REST API controller that can be called from Azure Event Grid. As you may know, Event Grid can send multiple events in a batch. I don't know under which circumstances this actually happens, because so far the event payload has always been an array consisting of only one single element, but the documentation makes it clear that there can be multiple events in one go.
Let's say I receive five events, and event #3 is somehow incorrect. How do I let Event Grid know that I've accepted four of the five events, and that it should retry or deadletter the third event?
[Route("api/[controller]")]
[ApiController]
public class EventGridController : ControllerBase
{
    [HttpPost("CustomerUpdated")]
    public async Task<IActionResult> CustomerUpdated()
    {
        // ... manage subscription validation here

        using var reader = new StreamReader(Request.Body, Encoding.UTF8);
        var eventStr = await reader.ReadToEndAsync();
        var gridEvents = JsonConvert.DeserializeObject<IEnumerable<GridEvent>>(eventStr);

        // ... let's say one of five received events is somehow incorrect here
        // ... how do I tell Event Grid which events failed and which events were accepted?

        return BadRequest(); // currently: fail the whole batch, even if some events succeeded
    }
}
Currently, I mark the entire batch as failed by returning BadRequest(), even if some of the events actually succeeded. It's an okay-ish "better safe than sorry" trade-off because I'm currently the lone ranger on this project and can make sure my code is idempotent. But I want to make sure that some other developer in the future can't make the codebase non-idempotent and have data inconsistencies popping up all over the place, because my code tells Event Grid that the entire batch failed even though some of the events were actually processed successfully. What can I do?
The Azure Event Grid output batching policy is designed as all-or-none; see more details here.
In a case like the one you described, the subscriber needs to handle the logic for skipping already successfully processed events, and the returned status code should be one that causes delivery to be retried (e.g. 503). Note that a 400 Bad Request will immediately trigger dead-lettering, if it is enabled.
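A sketch of that skip-and-retry logic in the controller from the question. Here _processedEventIds is a hypothetical durable store of already-handled event ids (a database table in practice) and TryProcessAsync a hypothetical processing method; GridEvent is assumed to expose the unique per-event Id that Event Grid assigns:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;

public class GridEvent
{
    public string Id { get; set; } // Event Grid assigns a unique id per event
}

[Route("api/[controller]")]
[ApiController]
public class EventGridController : ControllerBase
{
    // hypothetical durable store of handled event ids (would be a DB in practice)
    private static readonly ISet<string> _processedEventIds = new HashSet<string>();

    [HttpPost("CustomerUpdated")]
    public async Task<IActionResult> CustomerUpdated()
    {
        using var reader = new StreamReader(Request.Body, Encoding.UTF8);
        var eventStr = await reader.ReadToEndAsync();
        var gridEvents = JsonConvert.DeserializeObject<List<GridEvent>>(eventStr);

        var anyFailed = false;

        foreach (var gridEvent in gridEvents)
        {
            // skip events already handled on a previous delivery attempt
            if (_processedEventIds.Contains(gridEvent.Id)) continue;

            if (await TryProcessAsync(gridEvent)) _processedEventIds.Add(gridEvent.Id);
            else anyFailed = true;
        }

        // 503 => Event Grid redelivers the whole batch later; the skip above keeps retries idempotent
        // 400 => the batch goes straight to dead-lettering (if enabled)
        return anyFailed ? StatusCode(StatusCodes.Status503ServiceUnavailable) : Ok();
    }

    // hypothetical per-event processing; returns false when an event is rejected
    private static Task<bool> TryProcessAsync(GridEvent gridEvent) =>
        Task.FromResult(gridEvent != null);
}
```

The key point is that the idempotency check lives in the controller itself, so a future developer adding a non-idempotent handler is still protected by the skip.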
What would be the best way to log all commands and response from an Akka.net cluster? The logging would be handled by a hierarchy of logger actors, but how would these loggers receive or intercept the various commands and responses?
I've tried using the event bus to subscribe to specific commands, but that does not seem to work as I thought, since no command is intercepted.
In Akka.NET every actor has its own mailbox used to order incoming messages. There are a few ways of intercepting them:
The simplest one is to create your own abstract actor type, used by all other actors in your system, and override its AroundReceive method; that lets you wrap the inheriting actors' receive handlers with any sort of logic you want:
public abstract class AppActor : ActorBase
{
    protected readonly ILoggingAdapter logger = Context.GetLogger();

    protected override bool AroundReceive(Receive receive, object message)
    {
        logger.Debug("Incoming request: {0}", message);
        return receive(message);
    }
}
This also means that you need a separate base type for every other variant of actor used in your system, like receive actors or persistent actors.
If you want an out-of-the-box solution, there's a commercial tool called Phobos, which can collect metrics for all messages in flight inside your actor system, log them, and even trace causality between them (a.k.a. tracing).
Imagine you have a client and a server communicating via some kind of a message bus, which has this interface:
interface IBus {
    void Send(Message m);
    void Receive(Message m);
}
with Message being some POCO like this:
class Message {
    public Guid Id { get; set; }
    public string Data { get; set; }
}
So you can send a message via Send, and when the response arrives the messaging infrastructure invokes the Receive method, which essentially is just a callback. What I would like to do now is write a method that allows me to wait for the response:
Message WaitForResponse(Message request);
and use it like this:
var response = WaitForResponse(request);
Console.Write(response.Data);
I tried using TaskCompletionSource for it and it works great, but it requires async/await, and I already have a lot of code written in this sync style. That code currently uses ManualResetEventSlim objects stored in a ConcurrentDictionary for synchronization, but it runs into performance issues when the number of requests waiting for a response grows to a couple of hundred (I assume because all the waiting threads are blocked in manualResetEventSlim.Wait()). I guess there should be a better way to do it, one which requires changes only to the implementation of WaitForResponse and keeps all method signatures untouched.
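The correlation the question describes can be sketched with TaskCompletionSource instead of ManualResetEventSlim (Message is the POCO from the question; RequestResponseBus and SendAndWaitAsync are illustrative names, not an existing API). The sync signature is kept by blocking on the task:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class Message
{
    public Guid Id { get; set; }
    public string Data { get; set; }
}

public class RequestResponseBus
{
    private readonly ConcurrentDictionary<Guid, TaskCompletionSource<Message>> _pending = new();
    private readonly Action<Message> _send; // the underlying IBus.Send

    public RequestResponseBus(Action<Message> send) => _send = send;

    // called by the messaging infrastructure when a response arrives
    public void Receive(Message response)
    {
        if (_pending.TryRemove(response.Id, out var tcs))
            tcs.TrySetResult(response);
    }

    // async core: no thread is blocked while waiting for the response
    public Task<Message> SendAndWaitAsync(Message request)
    {
        var tcs = new TaskCompletionSource<Message>(TaskCreationOptions.RunContinuationsAsynchronously);
        _pending[request.Id] = tcs;
        _send(request);
        return tcs.Task;
    }

    // sync wrapper, keeping the existing signature; still blocks the calling thread
    public Message WaitForResponse(Message request) => SendAndWaitAsync(request).Result;
}
```

Note the caveat: the sync wrapper still parks one thread per outstanding request, just like the ManualResetEventSlim version, so the scalability win only materializes for callers that are migrated to await SendAndWaitAsync directly.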
I am currently using MassTransit with the Courier pattern.
I've set up an Activity which may fail, and I want to be able to subscribe to this failure and act accordingly.
My problem is that even though I can subscribe to the failure, and even see the exception that caused it, I am unable to pass any arguments along with it.
For testing purposes, suppose I have the following activity:
public class MyActivity : ExecuteActivity<MyMessage>
{
    public Task<ExecutionResult> Execute(ExecuteContext<MyMessage> context)
    {
        try
        {
            // .... some code

            throw new FaultException<RegistrationRefusedData>(
                new RegistrationRefusedData(RegistrationRefusedReason.ItemUnavailable));

            // .... some code
        }
        catch (Exception ex)
        {
            return Task.FromResult(context.Faulted(ex));
        }
    }
}
The problem is the reason (RegistrationRefusedReason) I am passing as an argument of the exception. If I subscribe a RoutingSlipActivityFaulted consumer, I can get almost all the information I need:
public class ActivityFaultedConsumer : IMessageConsumer<RoutingSlipActivityFaulted>
{
    public void Consume(RoutingSlipActivityFaulted message)
    {
        string exceptionMessage = message.ExceptionInfo.Message;  // OK
        string messageType = message.ExceptionInfo.ExceptionType; // OK
        RegistrationRefusedReason reason = ??????;
    }
}
I feel like I am missing something important here, (maybe misusing the pattern?).
Is there any other way to get parameters from a faulted activity ?
So, the case you're describing isn't a Fault. It's a failure to meet a business condition. In this case, you wouldn't want to retry the transaction, you'd want to terminate it. To notify the initiator of the routing slip, you'd Publish a business event signifying that the transaction was not completed due to the business condition.
For instance, in your case, you may do something like:
await context.Publish<RegistrationRefused>(new
{
    CustomerId = xxx,
    ItemId = xxxx,
    Reason = "Item was unavailable"
});

return context.Terminate();
This would terminate the routing slip (the subsequent activities would not be executed), and produce a RoutingSlipTerminated event.
That's the proper way to end a routing slip due to a business condition or rule. Exceptions are for exceptional behavior only, since you'll likely want to retry them to handle the failure.
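On the receiving side, the initiator can then act on the business event with an ordinary consumer. IConsumer/ConsumeContext are the standard MassTransit consumer abstractions; the RegistrationRefused contract below is a guess at the shape implied by the anonymous object above:

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

// Assumed event contract matching the anonymous type published by the activity
public interface RegistrationRefused
{
    Guid CustomerId { get; }
    Guid ItemId { get; }
    string Reason { get; }
}

public class RegistrationRefusedConsumer : IConsumer<RegistrationRefused>
{
    public Task Consume(ConsumeContext<RegistrationRefused> context)
    {
        // the strongly typed reason is available here, no exception parsing needed
        Console.WriteLine($"Registration refused: {context.Message.Reason}");
        return Task.CompletedTask;
    }
}
```

This is the payoff of publishing a business event instead of throwing: the reason arrives as a first-class property rather than buried inside ExceptionInfo.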
Kinda raising this from the dead, but I really haven't found a neat solution to this.
Here is my scenario:
I want to implement a request/response, but I want to wait for the execution of a routing slip.
As Fabio, I want to compensate for any previous activities and I want to pass data back to the request client in case of a fault.
Conveniently, Chris provided a RoutingSlipRequestProxy/RoutingSlipResponseProxy which does just that. I've found 2 approaches, but both of them seem very hacky to me.
Approach 1:
1. The request client waits for ISimpleResponse or ISimpleFailResponse.
2. RoutingSlipRequestProxy sets the ResponseAddress in the variables.
3. The activity sends ISimpleFailResponse to the ResponseAddress.
4. The client waits for either response.
5. The RoutingSlipResponseProxy sends Fault<ISimpleResponse> back to the ResponseAddress.
From what I see, the hackiness comes from steps 4 and 5 and their ordering. I am pretty sure it works, but it could easily stop working if messages are consumed out of order.
Sample code: https://github.com/steliyan/Sample-RequestResponse/commit/3fcb196804d9db48617a49c7a8f8c276b47b03ef
Approach 2:
1. The request client waits for ISimpleResponse or ISimpleFailResponse.
2. The activity calls ReviseItinerary with the variables and adds a faulty activity.*
3. The faulty activity faults.
4. The RoutingSlipResponseProxy2 gets the ValidationErrors and sends ISimpleFailResponse back to the ResponseAddress.
* The activity needs to be an Activity and not an ExecuteActivity, because there is no overload of ReviseItinerary that takes variables without also taking an activity log.
This approach seems hacky because an additional fault activity is added to the itinerary, just to be able to add a variable to the routing slip.
Sample code: https://github.com/steliyan/Sample-RequestResponse/commit/e9644fa683255f2bda8ae33d8add742f6ffe3817
Conclusion:
Looking at MassTransit code, it doesn't seem like a problem to add a FaultedWithVariables overload. However, I think Chris' point is that there should be a better way to design the workflow, but I am not sure about that.
TL;DR:
1. Am I creating an anti-pattern?
2. What is the best way to handle a claim check with CQRS?
I have several entry points in my system (webapi passing in json and xml), as well as through the file system with fixed-length files.
I am using Rebus with MSMQ and Sql Server to manage my messaging. The data can be larger than 4 MB (MSMQ's maximum message size, if I remember correctly). When the system receives a file, I convert it into a stream and create a command that implements IAttachmentCommand, as below:
public interface IAttachmentCommand : ICommand
{
    Stream Attachment { get; }

    IClaimCheckCommand ToClaimCheck(string attachmentId);
}

public interface IClaimCheckCommand : ICommand
{
    string AttachmentId { get; }
}
I then send it using a command bus (using Rebus). If the command is of type IAttachmentCommand, I create an attachment in the Rebus data bus table and return a new IClaimCheckCommand using ToClaimCheck on the original command. The claim-check command is effectively a carbon copy of the original command, except it now has the attachmentId instead of the data.
I then call Send on my Rebus bus with my new AttachmentId, as below:
public void Send<TCommand>(TCommand command) where TCommand : ICommand
{
    if (command is IAttachmentCommand cmd)
    {
        var task = CreateAttachment(cmd); // method excluded, but persists to the Rebus data bus and returns the AttachmentId
        var claimCheck = task.Result;

        _activator.Bus.Send(claimCheck);
    }
    else
    {
        _activator.Bus.Send(command);
    }
}
This seems to be working, although I am happy to have my code pulled to shreds. I can send commands, apply the events that are generated by my aggregate roots, persist to the event store etc etc.
I simply pick up a file from a webapi call or the file system, create a command and send it off with my command bus.
In a separate windows service I have a command dispatcher monitoring MSMQ for these messages. When a message comes in it will then iterate through however many CommandValidationHandlers there are to validate the command. CommandValidationHandlers implement the following:
public interface ICommandValidationHandler<in TCommand> where TCommand : ICommand
{
    ValidationResult Validate(TCommand command);
}
ValidationResult effectively returns a collection of errors. These errors are logged and published as an InvalidCommand event that contains the command info and the errors; any subscribers that are listening can then pick up the event and send a mail, call a web service, etc. to say that the message failed, and why. If the command is invalid, an exception is thrown and the process stops.
My concern is that on validation I have the attachmentId, and have to retrieve the file, which is then validated, for example against an xsd.
From there I need to deserialize it to an object (generally a collection of financial transactions with a header which contains meta data such as no of transactions etc) and perform extra validation on data in the object.
Once this validation is complete I need to iterate through the collection of transactions in the object and send these to their relevant bounded contexts using the command bus, and further processing takes place.
It seems in this instance that I will be hitting the claim store a number of times: once for each validation handler (although I guess this could be resolved with a composite collection of validators), and then again in the command handler once validation has taken place.
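The "composite collection of validators" idea could be sketched like this: one composite fetches and deserializes the attachment a single time and fans the materialized payload out to plain validators. IClaimCheckCommand is from the question (its ICommand base is omitted here); IPayloadValidator and CompositeValidationHandler are hypothetical names:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// From the question (ICommand base omitted for brevity)
public interface IClaimCheckCommand
{
    string AttachmentId { get; }
}

// Hypothetical: validators that work on the already-deserialized payload
public interface IPayloadValidator<TPayload>
{
    IEnumerable<string> Validate(TPayload payload);
}

// Composite that reads the claim store once and fans the payload out to all validators
public class CompositeValidationHandler<TPayload>
{
    private readonly Func<string, TPayload> _loadPayload; // hits the claim store and deserializes
    private readonly IReadOnlyList<IPayloadValidator<TPayload>> _validators;

    public CompositeValidationHandler(
        Func<string, TPayload> loadPayload,
        IReadOnlyList<IPayloadValidator<TPayload>> validators)
    {
        _loadPayload = loadPayload;
        _validators = validators;
    }

    public IReadOnlyList<string> Validate(IClaimCheckCommand command)
    {
        var payload = _loadPayload(command.AttachmentId); // single claim-store read
        return _validators.SelectMany(v => v.Validate(payload)).ToList();
    }
}
```

With this shape, adding a validator never adds another round-trip to the claim store.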
In the various event handlers that need access to all the data, I have to retrieve the data from the claim store and deserialize it a number of times.
This seems like a code smell to me. Should I consider caching the file the first time I retrieve it and clearing it from the cache once all event handlers have finished their work?
Does anybody have better suggestions?
From what I understand of your problem, the question is really: "should I use a caching mechanism for reading the claim store in the validation handlers?"
In your case, because the data in the claim store is immutable, you can cache it for as long as you need it. That is the beauty of immutable data: it is forever cacheable.
To implement the caching mechanism you could use the decorator pattern over the claim store, and switch to the cached version in the composition root of your dependency container. That way you can switch back to the uncached one at any time.
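Such a decorator could be as small as this sketch; IClaimStore is a hypothetical minimal interface standing in for whatever abstraction wraps the Rebus data bus:

```csharp
using System.Collections.Concurrent;

// Hypothetical minimal claim-store interface standing in for the Rebus data bus
public interface IClaimStore
{
    byte[] Read(string attachmentId);
}

// Decorator that caches reads; safe because stored attachments are immutable
public class CachingClaimStore : IClaimStore
{
    private readonly IClaimStore _inner;
    private readonly ConcurrentDictionary<string, byte[]> _cache = new();

    public CachingClaimStore(IClaimStore inner) => _inner = inner;

    public byte[] Read(string attachmentId) =>
        _cache.GetOrAdd(attachmentId, id => _inner.Read(id));
}
```

In the composition root you would register `new CachingClaimStore(innerStore)` (where innerStore is your real implementation) instead of the inner store itself; unregistering the decorator switches the caching off again.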
You could take it even further: you could even cache the result of the validation, if the validated data never changes and the same validation is repeated over time.