I was reading the following post: How to correctly use IHttpModule
*
Now let's think of the word itself: application pool. Yes, pool. It means that a certain web application is running multiple HttpApplication instances in one pool. Yes, multiple. Otherwise it wouldn't be called a pool. »How many?« you may ask. That doesn't really matter as long as you know there could be more than one. We trust IIS to do its job. And it obviously does it so well that it has made this fact completely transparent to us developers, hence not many completely understand its inner workings. We rely on its robustness to provide the service. And it does. Each of these HttpApplication instances in the pool keeps its own list of HTTP modules that it uses with each request it processes.
*
I have a question: under what scenario can multiple instances of an Application object run for a single application? Until now I was under the impression that a single application object exists per application. So I am curious to know whether it is true that multiple instances can run per application, and how that is decided.
Each HttpApplication object instance is unique to a single request. If your site is processing multiple requests in parallel, each one must have its own instance of HttpApplication. That object holds per-request state that must not change during the request's lifetime (including the body of the request and response!).
The instances are pooled, as described in the article. Each one will be reused to service multiple subsequent requests, up to the limit set on the application pool, then it'll be allowed to die off.
Note that you're specifically asking about HttpApplication. This is distinct from the System.Windows.Forms.Application class, which is in fact a singleton class that only exists once per application.
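To make the distinction concrete, here is a minimal, hedged sketch (classic System.Web ASP.NET; the Global class shown and the X-Elapsed-Ms header are illustrative, not from the original post) of why an instance field on the Global class behaves as per-request state while a static field is shared by every pooled instance:

using System;
using System.Web;

public class Global : HttpApplication
{
    // Per-instance: each pooled HttpApplication handles one request at a time,
    // so this field is effectively per-request state.
    private DateTime requestStarted;

    // Static: shared by every instance in the app domain, so it must be guarded.
    private static long totalRequests;

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        requestStarted = DateTime.UtcNow;
        System.Threading.Interlocked.Increment(ref totalRequests);
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // Safe: still the same instance, still the same request.
        double elapsedMs = (DateTime.UtcNow - requestStarted).TotalMilliseconds;
        Context.Response.AppendHeader("X-Elapsed-Ms", elapsedMs.ToString("F0"));
    }
}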
I am finding it hard to find any detailed documentation on the use of StatelessWorkers.
I want to achieve something similar to this. As suggested in the document I need to use Stateless Workers in order to process some messages and activate the grains that will eventually hold the state.
I would like to have multiple instances of a dispatcher grain processing the "initialization", since this grain does not hold any state and the messages do not need to be processed in any particular order.
Do I need to mark this grain as Reentrant, or will the StatelessWorker attribute be enough?
With regards to activation, it seems like I need to inherit from IGrainWithIntegerKey (or a similar interface). This means that I need to activate the grain as follows:
GrainClient.GrainFactory.GetGrain<IDispatcherActor>(0)
Since I am always using 0 as the ID, will multiple instances of the grain still be activated, or do I need to create different IDs? It seems like I cannot call the grain as follows:
GrainClient.GrainFactory.GetGrain<IDispatcherActor>()
even if I inherit from IGrain
Short Answer
You can create a stateless worker by inheriting from IGrainWithIntegerKey and using a key of 0.
Long Answer
Stateless workers are the same as normal grains with a couple of differences:
They are always activated locally (in the same silo as the caller).
Multiple activations can be created if the calls to a stateless worker activation build up.
They are subject to the same deactivation semantics.
It might be surprising that stateless workers have keys, but there are a couple of reasons why keys might be useful:
Stateless worker activations may have different 'flavours', which could be related to their key.
A larger pool of stateless workers could be activated by addressing them with a range of keys.
But if these features aren't useful to you, the convention is to use a key of 0.
It is sometimes stated that they can only be called from inside a silo, but that is not the case: StatelessWorker grains can be called from clients. That's actually one of the popular scenarios, where calls from clients are preprocessed before being routed to other grains for the actual processing.
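For reference, a minimal sketch of the pattern discussed in this question, assuming Orleans APIs of the era the question uses (IStatefulGrain and InitializeAsync are hypothetical names standing in for the grain that will eventually hold the state):

using System.Threading.Tasks;
using Orleans;
using Orleans.Concurrency;

public interface IDispatcherActor : IGrainWithIntegerKey
{
    Task DispatchAsync(string message);
}

public interface IStatefulGrain : IGrainWithIntegerKey
{
    Task InitializeAsync(string message);
}

// [StatelessWorker] lets the runtime create multiple local activations when
// calls build up; no [Reentrant] is needed for that fan-out to happen.
[StatelessWorker]
public class DispatcherActor : Grain, IDispatcherActor
{
    public Task DispatchAsync(string message)
    {
        // Forward to the (hypothetical) stateful grain that holds the state.
        var target = GrainFactory.GetGrain<IStatefulGrain>(ComputeKey(message));
        return target.InitializeAsync(message);
    }

    private static long ComputeKey(string message) => message.GetHashCode();
}

// From the client, always use key 0; the runtime may still fan out to several
// local activations under load:
// var dispatcher = GrainClient.GrainFactory.GetGrain<IDispatcherActor>(0);
// await dispatcher.DispatchAsync("init-message");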
Does ASP.NET Core pipeline handle requests by multithreading?
If it does, how do you configure the number of threads? And also, should singleton services be thread safe?
The first question was already answered in the comment above (look into KestrelServerOptions)
Regarding thread safety, the answer is in the documentation:
Singleton lifetime services are created the first time they are requested (or when ConfigureServices is run if you specify an instance there) and then every subsequent request will use the same instance. If your application requires singleton behavior, allowing the services container to manage the service's lifetime is recommended instead of implementing the singleton design pattern and managing your object's lifetime in the class yourself.
That means all requests for the service get the same object, which is shared across threads; there are no per-thread instances, and thus no thread safety is provided for you.
Thread safety
Singleton services need to be thread safe. If a singleton service has a dependency on a transient service, the transient service may also need to be thread safe depending how it’s used by the singleton.
Couldn't be more clear. Since the objects are not created per thread, they are not thread safe by default (though it's possible some services are designed to be).
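A minimal sketch of what that means in practice (IVisitCounter and VisitCounter are illustrative names, not from the documentation): because a singleton is handed to every request on every thread, any mutable state inside it has to be protected by the service itself.

using System.Threading;
using Microsoft.Extensions.DependencyInjection;

public interface IVisitCounter
{
    int Increment();
}

public class VisitCounter : IVisitCounter
{
    private int _count;

    // The same instance is used by all concurrent requests, so the increment
    // must be atomic; Interlocked takes care of that.
    public int Increment() => Interlocked.Increment(ref _count);
}

// Registration, e.g. in Startup.ConfigureServices:
// services.AddSingleton<IVisitCounter, VisitCounter>(); // one instance for every request
// services.AddTransient<SomeOtherService>();            // a new instance per resolution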
I read this question, but the answers and discussions are confusing myself.
So I decided to check, but how could I do it? How do I create a test to prove whether the HttpWebClientProtocol class is thread safe or not?
I have already done the following test:
Create one HttpWebClientProtocol to call a WS.
I created the WS myself and it has just a Thread.Sleep(30000) inside.
So I create two independent threads to call this HttpWebClientProtocol at the same time.
The result is: both threads called the WS with no problems (one thread didn't have to wait for the first call to finish).
With this test, have I proved that the object IS thread safe and that the "correct" answer of the other question is wrong?
Well... I have a better test for you.
HttpWebClientProtocol Class
Directly from MSDN. Here's a copy/pasta of what they have to say about thread safety:
Thread Safety
The properties on this class are copied into a new instance of a WebRequest object for each XML Web service method call. While you can call XML Web service methods on the same WebClientProtocol instance from different threads at the same time, there is no synchronization done to ensure that a consistent snapshot of the properties gets transferred to the WebRequest object. Therefore, if you need to modify the properties and make concurrent method calls from different threads you should use a different instance of the XML Web service proxy or provide your own synchronization.
About thread safety
Thread safety isn't just about the call "being available". It's about making sure that data/state affected by one thread does not affect the correct execution of another thread.
If instances share data structures, and those structures are accessed from multiple threads, they are not thread-safe. The issue might not be easily apparent, but on a system with heavy use of that class across many threads, you could run into bugs/exceptions/weird behaviours that you will not be able to reproduce in a development environment and that "only happen in production".
That, my friend, is NOT thread safe.
About HttpWebClientProtocol and why it's not thread-safe
While the documentation is clear about being able to reuse the HttpWebClientProtocol, it is important to know that the instance's properties are copied into a new WebRequest for each call without any synchronization, so a call made on another thread may see a half-updated set of properties.
Meaning that if you have two threads modifying the Credentials property, you might end up with some requests using the wrong credentials. This would be bad in a web application with impersonation, where requests could be made with someone else's credentials and you could end up serving someone else's data.
However, if you only need to set the initial properties once, then yes. You can reuse the instance.
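A hedged sketch of both the failure mode and the safe pattern described above (MyServiceProxy and DoWork are hypothetical stand-ins for a generated proxy deriving from HttpWebClientProtocol):

using System.Net;

class CredentialsExample
{
    static readonly MyServiceProxy shared = new MyServiceProxy();

    static void Risky(string user, string password)
    {
        // Two threads doing this against the shared instance can race: the
        // property snapshot copied into the WebRequest for one call may mix
        // values written by both threads.
        shared.Credentials = new NetworkCredential(user, password);
        shared.DoWork();
    }

    static void Safe(string user, string password)
    {
        // One proxy per call/thread (or external synchronization) avoids the race.
        var proxy = new MyServiceProxy
        {
            Credentials = new NetworkCredential(user, password)
        };
        proxy.DoWork();
    }
}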
I have a situation where I have an object C that is required by two types of classes. One of these classes runs in a separate thread; the other one creates multiple threads with the help of a timer-elapsed event.
So there are basically two life times of object C.
Object C is created along with A and B by a factory. For class 1 I create the instance through the master factory, but for the second one I will have to pass the entire factory. The second class will then decide at run time (based on the timer tick) how to create object C.
My question is about the second case: I am passing the entire factory, which, besides knowing how to create object C, also knows how to create A and B. Is this considered bad design?
I am attaching a snapshot of what I am doing
Composition Snapshot
When working with multiple threads, each thread should get its own object graph. This means that every time you spin off some operation to a new thread (or a thread from the thread pool), you should ask the container again for the root object to work with. Prevent passing services from one thread to the other, because this scatters the knowledge about the thread-safety of your services throughout the code base, while with dependency injection you try to centralize this knowledge in a single place (the composition root). When this knowledge is scattered throughout the application, it becomes much harder to change the behavior of components where thread-safety is concerned.
When you do this, there is probably no need to even have two different configurations for that class. That class might simply be registered as transient, and because you resolve it at each pulse of the timer, each thread gets its own instance. Alternatively the lifetime can be scoped, in which case the class's lifetime will probably end when the timed operation ends.
The code that the timer calls and calls back into the container should be part of the composition root. Since the service is resolved on a background thread, you will often have to wrap that call in some sort of scope (lifetime scope, child container, etc). This allows that instance (or any other registered service) to live for the duration of that scope.
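A minimal sketch of that advice, assuming Microsoft.Extensions.DependencyInjection (IObjectC, ObjectC and TimedProcessor are illustrative names based on the question, not a prescribed design):

using System;
using System.Threading;
using Microsoft.Extensions.DependencyInjection;

public interface IObjectC { void Handle(); }
public class ObjectC : IObjectC { public void Handle() { /* real work here */ } }

// Lives in the composition root: on every timer tick it asks the container
// for a fresh, scoped object graph instead of sharing instances across threads.
public class TimedProcessor : IDisposable
{
    private readonly IServiceProvider _provider;
    private readonly Timer _timer;

    public TimedProcessor(IServiceProvider provider)
    {
        _provider = provider;
        _timer = new Timer(OnTick, null, dueTime: 0, period: 1000);
    }

    private void OnTick(object state)
    {
        // Each tick runs on a thread-pool thread; the scope gives that thread
        // its own instances and disposes them when the work is done.
        using (var scope = _provider.CreateScope())
        {
            scope.ServiceProvider.GetRequiredService<IObjectC>().Handle();
        }
    }

    public void Dispose() => _timer.Dispose();
}

// Registration: services.AddScoped<IObjectC, ObjectC>(); (or AddTransient)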
I added log4net to my application and can now see the thread IDs of user activities as they navigate through my website. Is there any specific algorithm to how thread assignment happens with IIS7, or is it just a random number assignment (I suspect it's not completely random, because my low-traffic site shows threads mostly in the range 10-30)? Is there any maximum to the number of threads available? And I notice that my scheduler shows up with a weird thread id; any reason for this? The scheduler is Quartz.NET and the id shows as "Scheduler_Worker-10", not just a number.
This explains all you need to know.
An Excerpt:
When ASP.NET is hosted on IIS 7.0 in integrated mode, the use of threads is a bit different. First of all, the application-level queues are no more. Their performance was always really bad, there was no hope in fixing this, and so we got rid of them. But perhaps the biggest difference is that in IIS 6.0, or ISAPI mode, ASP.NET restricts the number of threads concurrently executing requests, but in IIS 7.0 integrated mode, ASP.NET restricts the number of concurrently executing requests. The difference only matters when the requests are asynchronous (the request either has an asynchronous handler or a module in the pipeline completes asynchronously). Obviously if the requests are synchronous, then the number of concurrently executing requests is the same as the number of threads concurrently executing requests, but if the requests are asynchronous then these two numbers can be quite different, as you could have far more requests than threads.
So basically, if requests are synchronous, there is one thread per concurrently executing request. See here for the various parameters.
I've explained this in a blog post on my blog:
ASP.NET Performance-Instantiating Business Layers
The title doesn't coincide with your question, but I explain the way IIS handles requests and I believe you'll find your answer.
A quote from the article
When IIS fields a request for your application it hands it over to the worker process. The worker process in turn creates an instance of your Global class (which is of type HttpApplication). From that point on the typical flow of an ASP.NET application takes place (the ASP.NET pipeline). However, what you need to know and understand is that the worker process (think of it as IIS, really) keeps the instance of your HttpApplication (an instance of your Global class) alive in order to field other requests. In fact, by default it will create and cache up to 10 instances of your Global class, if required (lazy instantiation), depending on load, the number of requests your website receives, and other factors. In Figure 1 above, the instances of your ASP.NET application are shown as the red boxes. There could be up to 10 of these cached by the worker process. These are really threads that the worker process has created and cached, and each thread has its own instance of your Global class. Note that each of these threads is in the same App Domain. So any static classes you may have in your application are shared across each of these threads or application instances.
I suggest you read that article and I'll be happy to answer any questions you may have. Please note that I've intentionally kept the article simple, in that I don't talk about what happens in the kernel or go into details of the various components that participate. Keeping it simple helps people understand the concepts a lot better (I feel).
I'll answer some of your other questions here:
Is there any specific algorithm to how thread assignment happens with IIS7?
No, for all intents and purposes it's random. This is explained in the article I pointed to. The short answer is that if a cached thread is available, then IIS will use it. If not, it will create a new thread, create an instance of your HttpApplication (Global) and assign all of the context to it. So on a site that's not busy, you may see the same threads handle requests, but there are no guarantees. If there is more than one free thread, IIS will pick a thread at random to service that request. You should note here that even on a not-so-busy site, if your requests take a long time, IIS will be forced to create new threads to service other incoming requests.
Any maximum to the number of threads available?
Yes (as explained in the article): typically 10 threads per worker process. This can be adjusted, but I've worked on a number of extremely busy websites and I've never had to. The key is to make your applications respond as fast as possible. Mind you, an application can have multiple worker processes assigned to it (configured in your app pool), so on busy sites you actually want multiple worker processes for your application; the implication, however, is that you have the required hardware (CPU cores and memory).
The scheduler is Quartz.net and the id shows as "Scheduler_Worker-10", and not just a number
Threads can have names instead of ids. If the thread has been assigned a name, then you'll see that instead of an id. Of course, for the threads IIS creates you have no such control. Mind you, I've not used (nor do I know much about) Quartz, so I can't say for certain, but I'm guessing that's the case.
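To illustrate, a small hedged example: as far as I know, log4net's %thread pattern prints the thread's Name when one has been set and falls back to a numeric id otherwise, which is why Quartz.NET workers show up as "Scheduler_Worker-10".

using System.Threading;

class ThreadNameDemo
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            // Log statements made here would show "Scheduler_Worker-10" in the
            // %thread column, because the name is set; unnamed threads (such as
            // the ones IIS uses for requests) show a plain number instead.
        });
        worker.Name = "Scheduler_Worker-10";
        worker.Start();
        worker.Join();
    }
}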