Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
We are planning to develop an audit service web API using Akka.NET.
The API will receive audit log data and spawn an Audit actor which will do some security checks, insert the data into a database, and send an email notification in case of any major security breach. The plan is to use the Tell method to do the processing in a fire-and-forget fashion. The API will always return 200 OK so that the calling service is not affected.
When we did some research, we found that most posts suggest creating a static instance of ActorSystem in Global.asax. We found two ways of creating an actor:
Spawn an actor (with a unique name) inside the API, initiating a new instance of the actor for every call, and call its Tell method
Create a single static instance of the actor and call its Tell method
We feel approach 2 is the best way to leverage Akka.NET. Are we on the right path?
I would normally go for option 1 in this type of scenario. You need to think about how you handle failure. I would probably create some sort of supervisor actor to handle this process each time you get a command to do it. The supervisor could then create and supervise (i.e. determine how failure is handled) two actors - one for saving to the DB, another to send the email. The supervisor would be responsible for managing the process and killing itself (and children) when finished.
If you go for the second option, you will have a single queue for all messages, so you will run into problems with scaling. You could get around this by having a pool of actors, but I think it will be more difficult to handle retries and failures, especially if you need to know which things have been saved but didn't send emails. You could probably still get it to work, especially if you don't care whether the save-then-email process fully completes, but I just think the first option fits the actor model better.
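A minimal Akka.NET sketch of option 1 along those lines (all the names here - AuditCommand, AuditSupervisor, DbWriterActor, EmailActor - are hypothetical, and the restart policy is just one possible choice):

```csharp
using Akka.Actor;

public class AuditCommand
{
    public string Payload { get; set; }
}

// Children do the real work and report back to their parent supervisor.
public class DbWriterActor : ReceiveActor
{
    public DbWriterActor()
    {
        Receive<AuditCommand>(cmd =>
        {
            // ...save the audit data to the database here...
            Context.Parent.Tell("done");
        });
    }
}

public class EmailActor : ReceiveActor
{
    public EmailActor()
    {
        Receive<AuditCommand>(cmd =>
        {
            // ...send the breach-notification email here...
            Context.Parent.Tell("done");
        });
    }
}

// One supervisor is spawned per audit request; it creates the two workers,
// decides how their failures are handled, and kills itself when both finish.
public class AuditSupervisor : ReceiveActor
{
    private int pending = 2;

    public AuditSupervisor()
    {
        Receive<AuditCommand>(cmd =>
        {
            Context.ActorOf(Props.Create(() => new DbWriterActor())).Tell(cmd);
            Context.ActorOf(Props.Create(() => new EmailActor())).Tell(cmd);
        });
        Receive<string>(_ =>
        {
            if (--pending == 0) Context.Stop(Self); // stopping the parent also stops the children
        });
    }

    protected override SupervisorStrategy SupervisorStrategy() =>
        new OneForOneStrategy(
            maxNrOfRetries: 3,
            withinTimeMilliseconds: 5000,
            localOnlyDecider: ex => Directive.Restart); // retry a failed child a few times
}
```

The API action would then do something like `actorSystem.ActorOf(Props.Create(() => new AuditSupervisor())).Tell(command)` and immediately return 200 OK.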
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
The title of the question may not be clear enough, so allow me to explain the background here:
I would like to design a web service that generates a PDF and submits it to a printer. Here is the workflow:
The user submits a request to the web service. The request should be one-off so that the user doesn't have to wait for the job to complete; the user receives an HTTP 200 and continues their work.
Once the web service receives the request, it generates the PDF and submits it to the designated printer. This process can take some time and CPU resources. As I don't want to drain all resources on that server, I may use the producer-consumer pattern here: there might be a queue for client jobs, which are processed one by one.
My questions are:
I'm new to C#: what is the proper pattern to queue and process these jobs? Should I use ConcurrentQueue and ThreadPool to achieve it?
What is the proper way to notify the user that the job succeeded or failed? Instead of using a callback service, is async an ideal way? My concern is that there may be lots of jobs in the queue, and I don't want the client to suffer from waiting for completion.
The web service is placed behind a load balancer; how can I maintain a 'process queue' across instances? I've tried Hangfire and it seems okay, but I'm looking for alternatives.
How can I know the number of jobs in the queue and how many threads are currently running? The web service will be deployed on IIS; is there a native way to achieve this, or should I implement a web service call to obtain these numbers?
Any help will be appreciated, thanks!
WCF supports the idea of fire-and-forget methods. You just mark your contract interface method as one-way, and there will be no waiting for a return:
[OperationContract( IsOneWay = true )]
void PrintPDF( PrintRequest request );
The only downside, of course, is that you won't get any notification from the server that your request was successful or even valid. You'd have to do some kind of periodic polling to see what's going on. I guess you could put a Guid into the PrintRequest, so you could interrogate for that job later.
If you're not married to WCF, you might consider SignalR...there's a comprehensive sample app of both a server and a simple WPF client here. It has the advantage that either party can initiate an exchange once the connection has been established.
If you need to stick with WCF, there's the possibility of using WSDualHttpBinding. The client connects with an endpoint to call back to...and the server can then post notifications as work completes. You can get a feel for it from this sample.
Both SignalR and WCF dual HTTP are pretty straightforward. I guess my preference would be based on the experience of the folks doing the work. SignalR has the advantage of playing nicely with browser-based clients...if that ever turns into a concern for you.
As for the queue itself...and keeping with the WCF model, you want to make sure your requests are serializable...so if need be, you can drain the queue and restart it later. In WCF, that typically means making data contracts for queue items. As an aside, I never like to send a boatload of arguments to a service; I prefer instead to make a data contract for method parameters and return types.
Data contracts are typically just simple types marked up with attributes to control serialization. The WCF methods do the magic of serializing/deserializing your types over the wire without you having to do much thinking. The client sends a whizzy and the server receives a whizzy as its parameter.
There are caveats...in particular, deserialization doesn't call your constructor (it creates the instance without running any constructor, via FormatterServices.GetUninitializedObject)...so you can't rely on the constructor to initialize properties. To that end, you have to remember that, for example, collection types that aren't required might need to be lazily initialized. For example:
[DataContract]
public class ClientState
{
    private static object sync = new object( );

    //--> and then somewhat later...

    [DataMember( Name = "UpdateProblems", IsRequired = false, EmitDefaultValue = false )]
    List<UpdateProblem> updateProblems;

    /// <summary>Problems encountered during previous Windows Update sessions</summary>
    public List<UpdateProblem> UpdateProblems
    {
        get
        {
            lock ( sync )
            {
                if ( updateProblems == null ) updateProblems = new List<UpdateProblem>( );
            }
            return updateProblems;
        }
    }

    //--> ...and so on...
}
Something I always do is to mark the backing variable as the serializable member, so deserialization doesn't invoke the property logic. I've found this to be an important "trick".
Producer/consumer is easy to write...and easy to get wrong. Look around on StackOverflow...you'll find plenty of examples. One of the best is here. You can do it with ConcurrentQueue and avoid the locks, or just go at it with a good ol' simple Queue as in the example.
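A minimal sketch of such a producer/consumer queue, using BlockingCollection (which wraps a ConcurrentQueue by default, so no manual locks are needed); the PrintJob and PrintQueue names are hypothetical:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class PrintJob
{
    public Guid Id { get; } = Guid.NewGuid(); // lets clients poll for status later
    public string DocumentName { get; set; }
}

public class PrintQueue
{
    // BlockingCollection wraps a ConcurrentQueue by default; no manual locks needed.
    private readonly BlockingCollection<PrintJob> jobs = new BlockingCollection<PrintJob>();
    private readonly Task consumer;

    public PrintQueue()
    {
        // A single consumer processes jobs one by one, so PDF generation
        // never drains more than one core at a time.
        consumer = Task.Run(() =>
        {
            foreach (var job in jobs.GetConsumingEnumerable())
            {
                // ...generate the PDF and submit it to the printer here...
                Console.WriteLine($"processed {job.DocumentName} ({job.Id})");
            }
        });
    }

    public int PendingCount => jobs.Count;           // answers "how many jobs are queued?"
    public void Enqueue(PrintJob job) => jobs.Add(job);

    public void Shutdown()
    {
        jobs.CompleteAdding();  // let the consumer drain the queue and exit
        consumer.Wait();
    }
}
```

Note this queue is in-memory and per-instance; behind a load balancer you'd still need the shared-queue or affinity strategy discussed below.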
But really...you're so much better off using some kind of service bus architecture and not rolling your own queue.
Being behind a load balancer means you probably want them all calling to a service instance to manage a single queue. You could roll your own, or you could let each instance manage its own queue. That might be more processing than you want going on on your server instances...that's your call. With WSDualHttpBinding, you may need your load balancer to be configured for client affinity...so you can have session-oriented two-way communications. SignalR supports a message bus backed by SQL Server, Redis, or Azure Service Bus, so you don't have to worry about affinity with a particular server instance. It has performance implications that are discussed here.
I guess the most salient advice is...find out what's out there and try to avoid reinventing the wheel. By all means, go for it if you're in burning/learning mode and can afford the time. But, if you're getting paid, find and learn the tools that are already in the field.
Since you're using .Net on both sides, you might consider writing all your contracts (service contracts and data contracts) into a .DLL that you use on both the client and the service. The nice thing about that is it's easy to keep things in sync, and you don't have to use the (rather weak) generated data contract types that come through WSDL discovery or the service reference wizard, and you can spin up client instances using ChannelFactory<IYourServiceContract>.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
I've planned to build multiple applications to learn more about how I can use event sourcing in .NET.
This is my planned architecture so far.
So, for example, both web app 1 and web app 2 can create users: when web app 1 creates a user, it sends a command message with the user properties to RabbitMQ, and event handlers 1 & 2 use the needed properties to create a user in their respective DBs.
What I don't understand is, for example:
A visitor on Web App 1 creates an account.
Web App 1 sends the UserCreatedAccount message to rabbitmq
Both event handlers subscribe to the message and create a user in their respective DBs
Now, how should web application 1 know that a user really has been created? Should it just assume that everything went fine and let the user through?
Is my architecture plan missing something?
I won't be generating an aggregate from the event store, but just store the current state of an object in respective db.
I won't be generating an aggregate from the event store, but just store the current state of an object in respective db
Firstly, it doesn't sound like you're actually implementing event-sourcing. In event sourcing it's the event store that's the source of truth for the state of your aggregates.
Secondly, your diagram shows that the web apps send commands to rabbitmq. Are you planning to put both events and commands there? It's not very clear if you're trying to implement event-sourcing or command-sourcing or both or neither.
This indecision becomes most apparent here:
Web App 1 sends the UserCreatedAccount message to rabbitmq
...
Now, how should web application 1 know that a user really has been created ?
The thing about events is that they are facts. If the UserCreatedAccount event happened then it's undeniable - it's a fact. If your web app is not sure whether it's true or not, whether the user has been created or not, then it must not emit such an event.
I think what you are really trying to do (which is more consistent with the diagram) is have the web app issue a CreateUserAccount command. That command is picked up by a command handler that, if successful, actually emits the UserCreatedAccount event. Since commands can fail (events cannot - they're always in the past), the web app now has a legitimate reason to wonder whether it succeeded or not.
Now onto the solutions.
The web app can monitor the events emitted by the command handlers to know whether the command failed or not (a correlation ID will come in handy), but it would also have to be able to time out (or possibly retry). This will get complicated and will depend on the desired latency and how errors are meant to be communicated.
An alternative is to use rabbitmq only for the events but send commands directly from the web app to the command handler. This way you can still implement event sourcing properly but have a "normal" call from the web app to the command handler, so that you get the response back whether it succeeded or failed. To add location-transparency to this, I'd consider using something like akka.net for this bit.
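To make the command/event split concrete, a minimal sketch (all names hypothetical; the web app calls Handle directly, and only a successful command results in a published event):

```csharp
using System;

// Commands can fail...
public class CreateUserAccount
{
    public Guid CorrelationId { get; } = Guid.NewGuid();
    public string UserName { get; set; }
}

// ...events are facts, emitted only after the command has succeeded.
public class UserCreatedAccount
{
    public Guid CorrelationId { get; set; }
    public string UserName { get; set; }
}

public class CommandResult
{
    public bool Succeeded { get; set; }
    public string Error { get; set; }
}

public class UserCommandHandler
{
    // Called directly (not via RabbitMQ), so the web app gets a response back.
    public CommandResult Handle(CreateUserAccount cmd, Action<UserCreatedAccount> publish)
    {
        if (string.IsNullOrWhiteSpace(cmd.UserName))
            return new CommandResult { Succeeded = false, Error = "user name required" };

        // ...create the user in the write model here...

        // Only now is the fact published (e.g. to RabbitMQ) for the event handlers;
        // the correlation ID ties it back to the originating command.
        publish(new UserCreatedAccount { CorrelationId = cmd.CorrelationId, UserName = cmd.UserName });
        return new CommandResult { Succeeded = true };
    }
}
```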
In event sourcing, how does my publisher application know a command was successful, with many subscribers?
The usual answer in event sourcing is that the publisher knows a command was successful when the book of record acknowledges that it has been updated successfully.
Subscribers don't necessarily see those updates immediately (key search term: eventual consistency).
Thus, the usual design is that events get written to a durable event store first, and only after that do they get put onto a queue that will distribute them to the subscribers.
Thus, the durable event store acts as the book of record -- the single source of truth in your system, and the subscribers have reflections of that truth.
(The underlying issue being that messages can get lost, or re-ordered, and you need a way to recover from those conditions).
If the messages are commands, then we typically have an aggregate that translates the command into new events to write to the store. If the messages are events (FormSubmitted is an event, as is HttpRequestReceived), then you don't need an "aggregate", as such, just synchronous write to ensure that the change has been made durable.
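The store-first, queue-second ordering described above can be sketched as follows (hypothetical types; the two delegates stand in for the durable store and the message queue):

```csharp
using System;

// The ordering rule: events are appended to the durable store (the book of
// record) first, and only then put on the queue for subscribers.
public class BookOfRecord
{
    private readonly Action<string, object> appendToStore;  // durable, synchronous write
    private readonly Action<object> publishToQueue;         // eventually-consistent fan-out

    public BookOfRecord(Action<string, object> appendToStore, Action<object> publishToQueue)
    {
        this.appendToStore = appendToStore;
        this.publishToQueue = publishToQueue;
    }

    public void Record(string stream, object @event)
    {
        appendToStore(stream, @event);  // 1: the single source of truth is updated...
        publishToQueue(@event);         // 2: ...then subscribers get their reflection of it
    }
}
```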
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
Still relatively new to .NET and C#, and I've been googling this, but I'm convinced it's possible and that I'm probably just not using the right terminology/keywords to find what I'm looking for. I'll give you a full rundown of what I'm doing.
My first class takes 2 IP addresses, a start and a finish. I want to scan each IP address in the range and display it as online or offline (I also want to do other stuff like resolve host names, OS, MAC address etc., but that will come later). What I want is to split the work up into multiple worker threads, but I'm having problems with the process.
So what I want is for the main thread to start a second thread that will act as a listener. It will then use a for loop to spin up a thread for each IP address that needs scanning (a variable number of IP addresses, from 1 to 255).
These threads will scan the IP address, resolve the host name and anything else I want. Then comes the part I'm stuck on: I want to combine these variables into a single object that can be extracted by the listener thread, send it over, and then terminate.
So an object that contains (index, IP, status, hostname, MAC, OS).
The listener thread will take these objects as they're passed to it and combine them into a large collection (not sure that's the right name), and once all threads have returned with either an object for an online device or a message of offline, it will return to the main thread with the data in an object again, to be displayed on screen etc.
What I'm not familiar with, and what I don't think I'm wording right in my Google searches, is the correct way to package this data up, send it over, and then extract it on the other side.
If anyone has any links to knowledge articles, tutorials or examples of similar stuff, that would be great. Or even if you can let me know the correct terminology, so that I might have better luck searching for this in the future.
Thanks for any help
Mat
I think Parallel.ForEach is what you are looking for. It will allow you to perform the requests in parallel, and you can then use some type of thread-safe collection to merge the results.
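A minimal sketch along those lines, using Parallel.ForEach to ping each address and a ConcurrentBag as the thread-safe collection (the ScanResult/Scanner names are hypothetical, and host name, MAC and OS lookups are left as comments):

```csharp
using System.Collections.Concurrent;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

public class ScanResult
{
    public string Address;
    public bool Online;
    public long RoundtripMs;
}

public static class Scanner
{
    // Scans the given addresses in parallel and returns one result per host.
    public static ConcurrentBag<ScanResult> Scan(string[] addresses)
    {
        var results = new ConcurrentBag<ScanResult>(); // thread-safe "large collection"
        Parallel.ForEach(addresses, address =>
        {
            using (var ping = new Ping())
            {
                try
                {
                    PingReply reply = ping.Send(address, 1000); // 1 second timeout
                    results.Add(new ScanResult
                    {
                        Address = address,
                        Online = reply.Status == IPStatus.Success,
                        RoundtripMs = reply.RoundtripTime
                        // ...resolve hostname, MAC, OS here as needed...
                    });
                }
                catch (PingException)
                {
                    results.Add(new ScanResult { Address = address, Online = false });
                }
            }
        });
        return results;
    }
}
```

Parallel.ForEach blocks until all addresses have been processed, so there's no need for a separate listener thread: the main thread simply gets the filled collection back.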
Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
While evaluating queueing mechanisms in general, and Rebus in particular, I came up with the following questions about bus instance lifecycle:
1. When you need access to the bus instance (one-way client mode) from several WCF services hosted in a Windows service, is Singleton mode the only option for instancing?
2. Is there a way to pause a bus (stop dispatching messages to the message handlers) and then start it again? Or is the only option to dispose it and create a new one?
A use case for this is when you connect to systems that have throughput limitations or transactions-per-hour limits.
3. Can sagas have multiple workers? If so, and assuming the events were sent in the correct order (initiator first), is there a way to guarantee that the initiator is handled first, thereby creating the saga, before the following events are handled by multiple workers?
4. If several bus instances are used in the same host, and inside a message handler we call Send on another bus instance based on the same configuration, the correlation ID won't be transmitted, and things like Reply won't work properly, right?
I'd prefer concrete answers on how Rebus does or does not support this, with code references/examples.
1: It's really simple: The bus instance (i.e. the implementation of IBus that gets put into the container and is handed to you when you do the Configure.With(...) configuration spells) is supposed to be a singleton instance that you keep around for the entire duration of your application's lifetime.
You can easily create multiple instances though, but that would be useful only for hosting multiple Rebus endpoints in the same process.
IOW the bus is fully reentrant and can safely be shared among threads in your web application.
2: Not readily, no - at least not in a way that is supported by the public API. You can do this though: ((RebusBus)bus).SetNumberOfWorkers(0) (i.e. cast the IBus instance to RebusBus and change the number of worker threads), which will block until the number of workers has been adjusted to the desired number.
This way, you can actually achieve what you're after. It's just not an official feature of Rebus (yet), but it might be in the future. I can guarantee, though, that the ability to adjust the number of workers at runtime will not go away.
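A small wrapper around that cast might look like this (a sketch against an older Rebus API; SetNumberOfWorkers is not part of the public IBus contract, so this relies on the concrete RebusBus type and its namespace in that version):

```csharp
using Rebus;      // IBus in older Rebus versions
using Rebus.Bus;  // the concrete RebusBus type

public static class BusThrottle
{
    // Blocks until all worker threads have stopped; no new messages are dispatched.
    public static void Pause(IBus bus)
    {
        ((RebusBus)bus).SetNumberOfWorkers(0);
    }

    // Resume dispatching, e.g. once a downstream system's hourly quota resets.
    public static void Resume(IBus bus, int workers)
    {
        ((RebusBus)bus).SetNumberOfWorkers(workers);
    }
}
```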
3: Yes, sagas are guarded by an optimistic concurrency scheme no matter which persistence layer you choose. If you're unsure which type of message will arrive first at your saga, you should make your saga tolerant to this - i.e. just implement IAmInitiatedBy<> for each potentially initiating message type and make the saga handle that properly.
Being (fairly) tolerant to out-of-order messages is a good general robustness principle that will serve you well also when messages are redelivered after having stayed a while in an error queue.
4: Rebus will pick up the current message context even though you're using multiple bus instances because it uses an "ambient context" (i.e. a MessageContext instance mounted on the worker thread) to pick up the fact that you're sending a message from within a handler, which in turn will cause the correlation ID of the handled message to be copied to any outgoing messages.
Thus bus.Reply will work, too.
But as I stated in (1), the bus instance is fully reentrant and there's no need to have multiple instances around, unless they're actually logically different endpoints.
I hope this answers your questions :)
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
How can I implement background processing queues in my ASP.NET MVC web app? While most data changes, updates etc. need to be visible immediately, there are other updates that don't need real time processing which I would like to hand off to a lower-priority background process which will take care of it at its own pace.
As an example, take Stack Overflow's badge award system. Usually you take a specific action that would award you a badge, but the actual 'award' happens later (typically between 10 minutes and a couple of hours later). I assume this is done via a separate background process, as it is not critical for SO's workings to award badges immediately when earned.
So, I'm trying to create some kind of queue system in which I could stuff tasks (say anything implementing ITask interface which will have a Process() method) which will eventually get executed by a separate process.
How would I go about implementing such a system? Ideas/Hint/Sample Code?
Thank you!
Windows Services and MSMQ to communicate with them (if you even need to).
-- Edit
To slightly expand.
You'll create several services, depending on what you want to do, and have each of them run an endless loop with an appropriate sleep interval to do what you want. They will then update the database appropriately, and you will not have to do anything on the client side.
You may wish to interact with them from an admin point of view, hence you may have an MSMQ that they listen to admin commands on. Depending on your implementation, you may need to restart them for some reason, or possibly just 'force' a running of whatever they wish to do.
So you'll use an MSMQ private queue to do it (System.Messaging namespace). One of the main things to note about MSMQ is that messages need to be < 4 MB. So if you intend to send large object graphs, serialise to a file first and just send the filename.
MSMQ is quite beautiful. You can send based on a 'Correlation ID' if you need to, but, for some amusing reason, the correlation ID must be in the form of:
{guid}\1
Anything else does not work (at least in version 2.0 of the framework, the code may have changed).
-- Edit
Example, as requested:
using System.Messaging;
...
MessageQueue queue = new MessageQueue(".\\Private$\\yourqueue");
queue.Formatter = new BinaryMessageFormatter();
Message m = new Message();
m.Body = "your serialisable object or just plain string";
queue.Send(m);
// on the other side
MessageQueue queue = new MessageQueue(".\\Private$\\yourqueue");
queue.Formatter = new BinaryMessageFormatter();
Message m = queue.Receive();
string s = m.Body as string;
// s contains that string now
Jeff has a great post showing how he achieved this originally for Stack Overflow at https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
While it might not be as reliable as a service if that's an option to you, it's served me well in the past for non essential tasks, and I've had it working in a virtual hosting environment.
Just found this question when searching for background processing in ASP.NET MVC. HostingEnvironment.QueueBackgroundWorkItem (available as of .NET 4.5.2) lets you queue work on the ASP.NET thread pool:
public ActionResult InitiateLongRunningProcess(Emails emails)
{
    if (ModelState.IsValid)
    {
        HostingEnvironment.QueueBackgroundWorkItem(ct => LongRunningProcessAsync(emails.Email));
        return RedirectToAction("Index", "Home");
    }
    return View(emails);
}
Note: Personally I wouldn't use the web server to run background tasks. Also, don't reinvent the wheel: I would highly recommend using Hangfire.
Read this great article from Hanselman: How To Run Background Tasks In ASP.NET.