Is there a WCF service request queue performance counter? - c#

There is a nice ASP.NET performance counter category with a set of counters that can be used to track the request queue during perf test runs. However, I can't find a similar set for a WCF service that is not hosted through IIS. Our WCF services run as Windows services using net.tcp bindings. I've learned that there are a couple of binding parameters that control queuing (Binding.MaxConnections and Binding.ListenBacklog), but it wasn't an easy find. So, going forward, is there a way to track these two values in PerfMon?

Under the ServiceModelService performance counter category you can find the following set of queue performance counters:
Queue Dropped Messages
Queue Dropped Messages Per Second
Queued Poison Messages
Queued Poison Messages Per Second
Queued Rejected Messages
Queued Rejected Messages Per Second
None of these provides the information you're looking for. The closest performance counter I could find to what you want is:
Percent of Max Concurrent Calls
Which provides the number of concurrent calls as a percent of maximum concurrent calls.
A complete list of available WCF performance counters is available in the MSDN documentation.
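For completeness, here is a minimal sketch of sampling one of these counters from code. It assumes the counters have been enabled in the host's config (performanceCounters="ServiceOnly" under system.serviceModel/diagnostics); the category name shown is the one used by .NET 4, and the instance name is a made-up placeholder, so verify both in PerfMon for your framework version and service URI.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class WcfCounterSample
{
    static void Main()
    {
        // Category/counter names as typically exposed by WCF on .NET 4; confirm in PerfMon.
        const string category = "ServiceModelService 4.0.0.0";
        const string counterName = "Percent Of Max Concurrent Calls";

        // Hypothetical instance name; copy the real one from PerfMon for your service.
        const string instance = "MyService@net.tcp:||localhost:8000|MyService";

        using (var counter = new PerformanceCounter(category, counterName, instance, readOnly: true))
        {
            while (true)
            {
                Console.WriteLine($"{counterName}: {counter.NextValue():F1}");
                Thread.Sleep(1000);
            }
        }
    }
}
```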

Related

NServiceBus 7 concurrency doubts

In NServiceBus 7 you can set the concurrency level, which determines how many messages from a queue your software can process in parallel.
This is done at the NServiceBus endpoint level.
I have a few doubts about this concept:
Is the concurrency per queue, not per message type?
If I use satellites, which means I'll have N different queues (for example, one per message type), will the concurrency still be per queue?
For example:
I have configured 1 endpoint (so 1 queue) and set the concurrency level to 10. I handle 5 different commands (handlers). All the commands are stored in the same queue, mixed. In this case, the endpoint is able to take 10 commands at a time from the queue without considering the type, correct?
In a second scenario I have 5 satellites which manage the 5 message types, with one dedicated queue per type. In this case, is each satellite able to take 10 messages at a time from its queue?
Satellites are an advanced feature for the raw processing of messages without all the benefits of the NServiceBus message processing pipeline. It's not normal to use them—they're most often used when implementing a message transport. For example, the RabbitMQ transport uses a Satellite for a feature that makes an endpoint instance individually addressable, so you have a QueueName queue plus a QueueName-InstanceName queue on the broker, so that another endpoint can do context.Reply() and have the reply go to the specific server that sent the original command. In any case, each satellite manages its concurrency separately, as it's a more low-level construct.
So yes, the concurrency through the main queue is for the endpoint instance, not per message type, because there's a 1:1 relationship between endpoint and queue, and you can't selectively pull messages off the queue by type.
As a result, the endpoint is your unit of scalability, both scaling up (by increasing the concurrency) or out (by adding more endpoint instances on different servers).
This means you should be careful about what message types you process in the same endpoint. They should generally have the same SLA. You wouldn't want a bunch of messages that take 50ms to process held up behind a glut of messages that process for 20 seconds.
Some people will take this to an extreme and go with one endpoint per message type. This level of complexity is usually not necessary, but it does give you the ultimate control over scalability for every single message type.
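As a concrete illustration, this is roughly how the per-endpoint limit is set in NServiceBus 7; the endpoint name and the LearningTransport are placeholders for the sketch.

```csharp
using System.Threading.Tasks;
using NServiceBus;

class Program
{
    static async Task Main()
    {
        var endpointConfiguration = new EndpointConfiguration("Sales");
        endpointConfiguration.UseTransport<LearningTransport>(); // placeholder transport

        // Up to 10 messages from this endpoint's input queue are processed in
        // parallel, mixed across all the message types its handlers cover.
        endpointConfiguration.LimitMessageProcessingConcurrencyTo(10);

        var endpointInstance = await Endpoint.Start(endpointConfiguration);
        // ... run ...
        await endpointInstance.Stop();
    }
}
```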

Can I access the amount of messages that have been prefetched, from the outside?

In MassTransit, I'd like to make an ASP.NET Core health check that considers my app to be in a "degraded" state when the prefetch buffer is full and the number of consumes per second is low enough.
In other words, when there are still messages on the queue and the consumer is slow. I'll then use this to configure AWS to autoscale my consumers based on the health check.
Is there a way to access the amount of messages that have been prefetched, from the outside? Or is this entirely encapsulated within MassTransit?
I'm using ActiveMQ as the underlying transport.
The message counts are not exposed by MassTransit. You may be able to find something by monitoring ActiveMQ and looking at pending message counts for queues – I don't have any real experience monitoring ActiveMQ because I haven't used it in production.
If you are using metrics with MassTransit, for instance using Prometheus, you can observe the number of active consumers per queue per instance and use those thresholds to determine whether additional instances are needed to handle the load and scale up. That's the approach I would take, since you also get the ability to set up Grafana dashboards to observe system metrics in real time.
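A rough, hedged sketch of that approach is below. It assumes MassTransit v7 with the MassTransit.Prometheus and prometheus-net.AspNetCore packages (UsePrometheusMetrics and MapMetrics come from those packages; in MassTransit v8 metrics moved to OpenTelemetry, so verify the names against your version). MyMessage and MyConsumer are placeholders.

```csharp
using System.Threading.Tasks;
using MassTransit;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMassTransit(x =>
{
    x.AddConsumer<MyConsumer>();

    x.UsingActiveMq((context, cfg) =>
    {
        cfg.Host("localhost", h =>
        {
            h.Username("admin");
            h.Password("admin");
        });

        // Publishes consumer/receive metrics that Prometheus can scrape.
        cfg.UsePrometheusMetrics();
        cfg.ConfigureEndpoints(context);
    });
});

var app = builder.Build();
app.MapMetrics(); // scrape endpoint at /metrics (prometheus-net)
app.Run();

public record MyMessage(string Text);

public class MyConsumer : IConsumer<MyMessage>
{
    public Task Consume(ConsumeContext<MyMessage> context) => Task.CompletedTask;
}
```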

Log number of queued requests OWIN self-hosted

I have my OWIN application hosted as a Windows Service and I am getting a lot of timeout issues from different clients. I have some metrics in place around request/response times, however the numbers are very different: for example, I can see the client taking around one minute to perform a request that appears to take 3-4 seconds on the server. I am therefore assuming that the number of requests that can be accepted has reached its limit, and subsequent incoming requests get queued up. Am I right? If that's the case, is there any way I can monitor the number of incoming requests at a given time and how big the queue is (as in the number of requests pending to be served)?
I am playing around with https://msdn.microsoft.com/en-us/library/microsoft.owin.host.httplistener.owinhttplistener.setrequestprocessinglimits(v=vs.113).aspx but it doesn't appear to have any effect.
Any feedback is much appreciated.
Thanks!
HttpListener is built on top of Http.Sys so you need to use its performance counters and ETW traces to get this level of information.
https://msdn.microsoft.com/en-us/library/windows/desktop/cc307239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
http://blogs.msdn.com/b/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx
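As a starting point, something like the following can sample the Http.Sys request-queue counters from code. The category and counter names shown ("Http Service Request Queues", "CurrentQueueSize", "RejectedRequests") are what that counter set typically exposes, but confirm them, and the instance name that corresponds to your self-hosted process, in PerfMon first.

```csharp
using System;
using System.Diagnostics;

class HttpSysQueueMonitor
{
    static void Main()
    {
        const string category = "Http Service Request Queues";

        // Enumerate the request-queue instances Http.Sys currently exposes.
        var instances = new PerformanceCounterCategory(category).GetInstanceNames();
        foreach (var instance in instances)
        {
            using (var queueSize = new PerformanceCounter(category, "CurrentQueueSize", instance, readOnly: true))
            using (var rejected  = new PerformanceCounter(category, "RejectedRequests", instance, readOnly: true))
            {
                Console.WriteLine($"{instance}: queued={queueSize.NextValue()}, rejected={rejected.NextValue()}");
            }
        }
    }
}
```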

Multi-threaded RPC with timeout and resend using RabbitMQ

What I'm looking to build is a Map/Reduce system using RabbitMQ as the transport to and from the worker agents; this allows a simple scaling roadmap while also allowing a single-server implementation for development and testing.
I've seen a few articles on sending a single message with a timeout using RPC over RabbitMQ, but these appear to be blocking. I need to be able to fire off a batch of requests, each of which could be served by a different agent, and then collate all of the responses so I can crunch them down to a single answer.
If I were to use RPC, I believe I'd end up with a very complicated serial processing of the tasks rather than in parallel.
I'd also like to be able to resend a request under certain circumstances (e.g. the agent reports a transient error), but this isn't essential.
I'm guessing I'm going to need to spawn threads in the master application, each making an RPC call, but as it's a web app and there could be in the region of 20 tasks to be served by the agents, I'm not keen on this approach and suspect it would quickly become a bottleneck.
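For what it's worth, a non-blocking scatter/gather over plain RabbitMQ.Client (pre-7.x synchronous API) might look roughly like the sketch below. The queue name "task_queue", the batch size, and the payloads are placeholders, and the reduce step is left as a comment; each request carries a CorrelationId and a shared ReplyTo queue so the agents can work in parallel and the caller collects replies as they arrive.

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;
using System.Threading;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class ScatterGather
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // One exclusive, server-named queue receives all replies for this batch.
        var replyQueue = channel.QueueDeclare().QueueName;

        var pending = new ConcurrentDictionary<string, byte[]>();
        var allDone = new CountdownEvent(20);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
        {
            // Collate the reply against the request it answers.
            if (pending.TryRemove(ea.BasicProperties.CorrelationId, out _))
            {
                // ... fold ea.Body.ToArray() into the reduce step here ...
                allDone.Signal();
            }
        };
        channel.BasicConsume(replyQueue, autoAck: true, consumer);

        // Scatter: fire off the whole batch without waiting for any single reply.
        for (var i = 0; i < 20; i++)
        {
            var correlationId = Guid.NewGuid().ToString();
            var props = channel.CreateBasicProperties();
            props.CorrelationId = correlationId;
            props.ReplyTo = replyQueue;

            var body = Encoding.UTF8.GetBytes($"task {i}");
            pending[correlationId] = body;
            channel.BasicPublish(exchange: "", routingKey: "task_queue", basicProperties: props, body: body);
        }

        // Gather: wait for the batch with an overall timeout; anything left in
        // `pending` afterwards is a candidate for resending.
        if (!allDone.Wait(TimeSpan.FromSeconds(30)))
        {
            Console.WriteLine($"{pending.Count} task(s) timed out and could be resent.");
        }
    }
}
```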

Queuing in OneWay WCF Messages using Windows Service and SQL Server

I need to implement a queuing mechanism for WCF service requests. The service will be called by clients in a one-way manner. These request messages should be stored in a SQL Server database, and a Windows service will queue the messages. The time at which the requests are processed will be configurable. If an error occurs while processing a message, it needs to be retried up to 100 times; if it still fails, it needs to be terminated.
There should also be a mechanism to monitor the number of transactions made per day and the number of failures.
QUESTIONS
If I were using MSMQ, clients could have forwarded the message to the queue without knowing the service endpoint. But I am using SQL Server to store the request messages. How can the clients put the requests into SQL Server?
Is the solution feasible? Do we have any article/book that explains how to implement the above?
What are the steps to prevent service and client reaching faulted state in this scenario?
What is the best method to store incoming message to database?
What is the best method to implement retry mechanism? Anything already exist so that I don't have to reinvent the wheel?
Is there any book/article that explains this implementation?
NOTES
The content of the messages will be complex XML, for example travel expense items of an employee or a list of employees.
READING
Logging WCF Request to Database
Guaranteed processing of data in WCF service
MSMQ vs. SQL Server Service Broker
Is it possible to persist and then forward WCF messages to destination services?
WCF 4 Routing Service - protocol bridging issue
https://softwareengineering.stackexchange.com/questions/134605/designing-a-scalable-and-robust-retry-mechanism
Integrating SQL Service Broker and NServiceBus
Can a subscriber also publish/send message in NServiceBus?
I'm a DBA, so that flavors my response, but here's what I'd do:
If you're using SQL 2005+, use Service Broker to store the messages in the database rather than storing them in a table. You get a queueing mechanism with this, so you can get rid of MSMQ. You'll also have a table, but it's just going to store the conversation handle (essentially, a pointer to the message) along with how many times this message has been attempted. Lastly, you'll want some sort of a "dead letter box" where messages that reach your retry threshold go.
In your message processing code, do the following:
Begin a transaction
Receive a message off of the queue
If the retry count is greater than the threshold, move it to the dead letter box and commit
Increment the counter on the table for this message
Process the message
If the processing succeeded, commit the transaction
If the processing failed, put a new message on the queue with the same contents and then commit the transaction
Notice that there aren't any planned rollbacks. Rollbacks in Service Broker can be bad; if you rollback 5 times without a successful receive, the queue will become disabled for both enqueuing and dequeuing. But you still want to have transactions for the case when your message processor dies in the middle of processing (i.e. the server crashes).
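A hedged sketch of that loop in C# might look like the following; the queue name (dbo.TargetQueue), the attempt-tracking table (dbo.MessageAttempts), and the dead-letter/requeue helpers are placeholders to adapt to your own schema.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ServiceBrokerWorker
{
    const int RetryThreshold = 100;

    static void ProcessOne(string connectionString)
    {
        using var connection = new SqlConnection(connectionString);
        connection.Open();

        // Begin the transaction so a crash mid-processing returns the message to the queue.
        using var tx = connection.BeginTransaction();

        var receive = new SqlCommand(
            @"WAITFOR (RECEIVE TOP (1) conversation_handle, message_body
                       FROM dbo.TargetQueue), TIMEOUT 5000;",
            connection, tx);

        var received = false;
        var handle = Guid.Empty;
        byte[] body = null;
        using (var reader = receive.ExecuteReader())
        {
            if (reader.Read())
            {
                received = true;
                handle = reader.GetGuid(0);
                body = reader.IsDBNull(1) ? null : (byte[])reader[1];
            }
        }
        if (!received) { tx.Commit(); return; }   // queue was empty

        // Track how many times this conversation has been attempted.
        var bump = new SqlCommand(
            @"MERGE dbo.MessageAttempts AS t
              USING (SELECT @h AS ConversationHandle) AS s
                 ON t.ConversationHandle = s.ConversationHandle
              WHEN MATCHED THEN UPDATE SET Attempts = t.Attempts + 1
              WHEN NOT MATCHED THEN INSERT (ConversationHandle, Attempts) VALUES (@h, 1)
              OUTPUT inserted.Attempts;",
            connection, tx);
        bump.Parameters.Add("@h", SqlDbType.UniqueIdentifier).Value = handle;
        var attempts = (int)bump.ExecuteScalar();

        try
        {
            if (attempts > RetryThreshold)
                MoveToDeadLetterBox(connection, tx, handle, body);  // over the threshold
            else
                ProcessBody(body);                                   // normal processing

            tx.Commit();                                             // success path
        }
        catch
        {
            // Failure path: re-enqueue a copy of the message and commit rather than
            // rolling back, so the queue never trips Service Broker's
            // five-rollback poison-message disable.
            Requeue(connection, tx, body);
            tx.Commit();
        }
    }

    static void ProcessBody(byte[] body) { /* application-specific */ }
    static void MoveToDeadLetterBox(SqlConnection c, SqlTransaction tx, Guid handle, byte[] body) { /* INSERT into the dead-letter table */ }
    static void Requeue(SqlConnection c, SqlTransaction tx, byte[] body) { /* SEND a new message carrying the same body */ }
}
```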
1. If I were using MSMQ, clients could have forwarded the message to the queue without knowing the service endpoint.
Yes - but they would need to know the MSMQ endpoint in order to send their message to the queue.
But I am using SQL Server to store the request messages. How can the clients put the requests into SQL Server?
The clients won't put their requests into SQL Server - that's what the service on the server will do. The clients just call a service method, and the code in there stores the request into a SQL Server table.
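For illustration, the service-side code could be as simple as the following sketch; the contract, the connection string name, and the table are placeholders.

```csharp
using System.Configuration;
using System.Data.SqlClient;
using System.ServiceModel;

[ServiceContract]
public interface IRequestIntake
{
    [OperationContract(IsOneWay = true)]
    void Submit(string requestXml);
}

public class RequestIntakeService : IRequestIntake
{
    public void Submit(string requestXml)
    {
        // Persist the incoming one-way request; the Windows service picks it up later.
        using var connection = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Queue"].ConnectionString);
        connection.Open();

        var insert = new SqlCommand(
            "INSERT INTO dbo.PendingRequests (ReceivedAt, Body) VALUES (SYSUTCDATETIME(), @body);",
            connection);
        insert.Parameters.AddWithValue("@body", requestXml);
        insert.ExecuteNonQuery();
    }
}
```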
2. Is the solution feasible? Do we have any article/book that explains how to implement the above?
Sure, I don't see any big issue. The only point unclear to me right now is: how will the clients get their results? Do they need to call another service to fetch them, or something like that?
3. What are the steps to prevent service and client reaching faulted state in this scenario?
As always - just make sure your service code catches all exceptions and either handles them internally, or returns interoperable SOAP faults instead of .NET exceptions.
It sounds like what you want to do is classic store-and-forward queuing between your clients and your service.
In this case you can use netMsmqBinding between your service and your service consumers.
The only thing you won't get out of the box is the retrying. However if you make the queue transactional then this functionality can be implemented in your service code.
If there is a failure in your dequeue operation the message will not be removed from the queue. It will therefore be available for further dequeue attempts.
However, you would need to implement retry attempt threshold code which fails a message after a certain number of attempts.
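To make that concrete, a minimal, hedged sketch of a one-way contract hosted over netMsmqBinding might look like this; the queue path and the contract are placeholders, and ExactlyOnce requires the underlying MSMQ queue to be transactional.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IExpenseSubmission
{
    [OperationContract(IsOneWay = true)]
    void Submit(string expenseReportXml);
}

class ExpenseSubmissionService : IExpenseSubmission
{
    public void Submit(string expenseReportXml) { /* dequeue handling goes here */ }
}

class MsmqHost
{
    static void Main()
    {
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None)
        {
            ExactlyOnce = true,   // transactional delivery, enables retry-on-failure
            Durable = true
        };

        using var host = new ServiceHost(typeof(ExpenseSubmissionService));
        host.AddServiceEndpoint(typeof(IExpenseSubmission), binding,
            "net.msmq://localhost/private/ExpenseSubmission");
        host.Open();

        Console.WriteLine("Listening. Press Enter to stop.");
        Console.ReadLine();
    }
}
```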
I would suggest a different approach to the ones suggested here. If you are able to, I would consider introducing a messaging framework such as NServiceBus. It satisfies many of your requirements right out of the box. Let me try and address this in the context of your requirements.
The service will be called by clients in a one-way manner.
All communication between endpoints in NServiceBus is one way. The underlying transport NServiceBus uses is MSMQ, so much like your WCF approach, your client is communicating with queues, rather than specific service endpoints.
These request messages should be stored in a SQL Server database, and a Windows service will queue the messages.
If you wanted to store your request messages in a database then you can configure NServiceBus to forward all messages sent to your request processing endpoint to another "audit" queue, which you can use to persist to the database. This has the added benefit of separating your application logic from your auditing implementation.
The time at which the requests are processed will be configurable.
NServiceBus allows you to defer when a message is sent. Normally a message is sent via the Send method of a Bus instance: Bus.Send(msg). You can use the Defer method to send the message at some point in the future, e.g. Bus.Defer(DateTime.Now.AddDays(1), msg). There's nothing more you really have to do; NServiceBus will handle the message once the specified time has been reached.
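Note that Bus.Defer is the older API; in NServiceBus 6 and later the same idea is expressed through SendOptions, roughly as in this sketch (ProcessExpenseReport is a placeholder command type).

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Placeholder command type for the sketch.
public class ProcessExpenseReport : ICommand { }

public static class DeferredSendExample
{
    public static Task SendTomorrow(IMessageSession endpointInstance)
    {
        var options = new SendOptions();

        // Absolute form: do not deliver before a specific point in time.
        options.DoNotDeliverBefore(DateTimeOffset.UtcNow.AddDays(1));
        // Relative form: options.DelayDeliveryWith(TimeSpan.FromHours(6));

        return endpointInstance.Send(new ProcessExpenseReport(), options);
    }
}
```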
If an error occurs while processing a message, it needs to be retried up to 100 times and if it still fails it needs to be terminated.
By default, NServiceBus will enlist your message in a transaction as soon as your message leaves the queue. This ensures that, in the event of failure, the message is rolled back to the originating queue. In such an event, NServiceBus will automatically try to reprocess the message a configurable number of times, the default being 5. You can of course set this to whatever you want, although I am not sure why you would want to set it to 100. At any rate, NServiceBus uses this setting to stop an endless loop of automatic retries. Once the limit has been reached, the message is sent to an error queue where it sits until you fix whatever issue caused the exception, or until you decide to push the message back to the queue for processing. Either way, you are assured that the message is never lost.
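For reference, adjusting those retry limits in the NServiceBus 6+ recoverability API looks roughly like this; the numbers are illustrative, not recommendations.

```csharp
using System;
using NServiceBus;

class RecoverabilityConfigExample
{
    static EndpointConfiguration Configure()
    {
        var endpointConfiguration = new EndpointConfiguration("RequestProcessor");

        var recoverability = endpointConfiguration.Recoverability();
        recoverability.Immediate(immediate => immediate.NumberOfRetries(5));
        recoverability.Delayed(delayed =>
        {
            delayed.NumberOfRetries(3);
            delayed.TimeIncrease(TimeSpan.FromMinutes(10));
        });

        // Messages that exhaust these retries land in the configured error queue.
        endpointConfiguration.SendFailedMessagesTo("error");
        return endpointConfiguration;
    }
}
```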
There should also be a mechanism to monitor the number of transactions made per day and the number of failures.
The beauty of using MSMQ as the transport is that performance monitoring can be achieved at an infrastructure level. How your applications perform can be measured by how long messages sit in the queue. NServiceBus comes with performance counters that track the length of time a message has been in the queue, and you can also add the performance counters built into Windows to track other activity. To monitor errors, all you need to do is check the number of messages in the error queue.
One of the main features of NServiceBus is reliability. WCF will only do so much for you, and then you are on your own. That's a lot of code and complexity, and frankly it's hugely error prone. The things I have described here are all standard features of NServiceBus, and I have barely scratched the surface of all the other things you can do with it. I recommend you check it out.
