I have a Cloud Service deployment with 4 worker roles, one of which has auto-scaling enabled. As soon as auto-scaling occurs, all instances of all roles recycle.
Ideally, I'd like to stop the roles from recycling or at least terminate the work of all other roles in a controlled way.
I found out that you can react to the RoleEnvironment.Changing event and cancel it to request a graceful shutdown (i.e. have OnStop called). However, after adding tracing output to the Changing event handler, I noticed that the Changing event was apparently not even fired, so the cancellation was never registered either.
private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    // This tracing output does not show up in the logs table.
    Trace.TraceInformation("RoleEnvironmentChanging event fired.");

    if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
    {
        // This one does not show up either.
        Trace.TraceInformation("One of the changes is a RoleEnvironmentConfigurationSettingChange. Cancelling...");
        e.Cancel = true;
    }

    if (e.Changes.Any(change => change is RoleEnvironmentTopologyChange))
    {
        // This one does not show up either.
        Trace.TraceInformation("One of the changes is a RoleEnvironmentTopologyChange. Cancelling...");
        e.Cancel = true;
    }
}
public override bool OnStart()
{
    // Hook up to the Changing event to prevent roles from unnecessarily restarting.
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    // Set the maximum number of concurrent connections.
    ServicePointManager.DefaultConnectionLimit = 12;

    bool result = base.OnStart();
    return result;
}
Adding an internal endpoint to each role did not change anything either. Here is the configuration from the .csdef:
<WorkerRole name="MyRole" vmsize="Medium">
  [...ConfigurationSettings...]
  <Endpoints>
    <InternalEndpoint name="Endpoint1" protocol="http" />
  </Endpoints>
</WorkerRole>
Changing the protocol to "any" wasn't successful either.
How can I stop my role instances from recycling after a scaling operation?
EDIT:
» Included code snippets
» Fixed typos
Did you try one of the following?
Check whether the event is being fired in the instances of the role that is auto-scaling (to make sure it's not a problem with the internal endpoint)
Do a complete re-deployment (instead of an update).
Add a short Thread.Sleep() after the tracing output in the event handler (sometimes the role is shut down before the trace output can be registered)
Make a change to one of the configs via the management portal (and check whether the event is being triggered)
Check whether the other events (for instance RoleEnvironment.Changed) are being fired
Wow, over 2 years w/o a real answer here. Too bad.
My experience with the topic is:
Set e.Cancel to false if your instance is able to keep working during and after scaling without needing to be reconfigured.
if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
{
    Trace.WriteLine("with recycle");
    e.Cancel = true;
}
else
{
    Trace.WriteLine("without recycle");
    e.Cancel = false;
}
Maybe you also want to set Trace.AutoFlush = true in OnStart.
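A minimal sketch of that (assuming the standard trace listeners are already configured for the role):

public override bool OnStart()
{
    // Flush trace output immediately so messages are not lost if the instance recycles.
    Trace.AutoFlush = true;

    RoleEnvironment.Changing += RoleEnvironmentChanging;
    return base.OnStart();
}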
Role Environment Methods and Events
There are five main places where you can write code to respond to environment changes. Two of these, OnStart and OnStop, are methods on the RoleEntryPoint class which you can override in your main role class (which is called WebRole or WorkerRole by default). The other three are events on the RoleEnvironment class which you can subscribe to: Changing, Changed and Stopping.
The purpose of these methods is pretty clear from their names:
OnStart gets called when the instance is first started.
Changing gets called when something about the role environment is about to change.
Changed gets called when something about the role environment has just been changed.
Stopping gets called when the instance is about to be stopped.
OnStop gets called when the instance is being stopped.
In all cases, there’s nothing your code can do to prevent the corresponding action from occurring, but you can respond to it in any way you wish. In the case of the Changing event, you can also choose whether the instance should be recycled to deal with the configuration change by setting e.Cancel = true.
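As an illustrative sketch (not taken from any particular project), wiring up the three events from OnStart could look like this:

public override bool OnStart()
{
    RoleEnvironment.Changing += (s, e) =>
        Trace.TraceInformation("Changing fired with {0} change(s).", e.Changes.Count);
    RoleEnvironment.Changed += (s, e) =>
        Trace.TraceInformation("Changed fired with {0} change(s).", e.Changes.Count);
    RoleEnvironment.Stopping += (s, e) =>
        Trace.TraceInformation("Stopping fired.");

    return base.OnStart();
}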
Why aren’t Changing and Changed firing in my application?
When I first started exploring this topic, I observed the following unusual behaviour in both the Windows Azure Compute Emulator (previously known as the Development Fabric) and in the cloud:
The Changing and Changed events did not fire on any instance when I made configuration changes.
RoleEnvironment.CurrentRoleInstance.Role.Instances.Count always returned 1, even when there were many instances in the role.
It turns out that this is the expected behaviour when a role has no internal endpoints defined, as documented in this MSDN article. So the solution is simply to define an internal endpoint in your ServiceDefinition.csdef file like this:
<Endpoints>
  <InternalEndpoint name="InternalEndpoint1" protocol="http" />
</Endpoints>
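Once the endpoint is defined, a quick sanity check (sketch only) is to log the instance count, which should no longer be stuck at 1:

// With an internal endpoint in place this should report the real number of instances.
Trace.TraceInformation("Instances in role: {0}",
    RoleEnvironment.CurrentRoleInstance.Role.Instances.Count);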
Which Events Fire Where and When?
Even though the names of the events seem pretty self-explanatory, the exact behaviour when scaling deployments up and down is not necessarily what you might expect. The diagram in the linked post below shows which events fire in an example scenario containing a single role: 2 instances are deployed initially, the deployment is then scaled to 4 instances, then back down to 3, and finally the deployment is stopped.
taken from http://azure.microsoft.com/blog/2011/01/04/responding-to-role-topology-changes/
Related
First of all, let me say that I know this is bad practice, not good, probably not (technically) allowed, etc., etc., etc... to force stop another service from my app.
However, there ARE some use cases that warrant this need. In my case, for example, there is a 3rd party service that is installed by my app because of my reference to it ("it" being a barcode scanning SDK). The SDK states that I must call a method called
GetScannerService();
I have observed that this call will either start the service or grab the instance to it if it is already running.
Furthermore, there are some calls that have to be done during onStop and onDestroy of my app which effectively will stop this third party service.
All that said, I have seen cases where this service gets stuck in a weird state. I have no control over the code (and bugs) in this package. Yes, I have reached out to them, but so far I have been unsuccessful in getting them to fix the root cause. When it is stuck in this state, I can see it in the list of running services (and sometimes it is listed in the cached ones), but when my app calls GetScannerService, it throws an exception that basically states the service cannot be started... but it already is.
So, when this happens, if I manually go to the running services list, find it (again, sometimes it is in cached) and click force stop, this fixes things and my app works as expected again... until it happens again, that is.
So, I want and need to have my app control this service. The thought is on startup, when I do the first call to GetScannerService, if it returns the exception, I will basically force stop it so that I can then call again and have it started. In other words, I want to automate the force stop function.
I know technically this is not allowed but I have also read that there are ways to do it, even if you don't have root.
So far, I can get the list of all running services and I can see the service in question in my list, which means I also have access to a lot of information about the running service. But what I have tried does not work. I have tried KillBackgroundProcesses, but it did not work; the service is still in the list.
Here is what I have tried so far:
private void Button_Click(object sender, EventArgs e)
{
    var am = (ActivityManager)this.Application.ApplicationContext.GetSystemService(Context.ActivityService);
    var taskList = am.GetRunningServices(serviceListLimit);

    List<string> serviceNames = new List<string>();
    foreach (var t in taskList)
    {
        serviceNames.Add(t.Service.PackageName);
    }

    var adapter = new ArrayAdapter<string>(this, Android.Resource.Layout.SimpleListItem1, serviceNames);
    services.Adapter = adapter;

    if (serviceNames.Contains(emdkServiceName))
        testKillService(am, emdkClass);
}

private void testKillService(ActivityManager am, Class emdkClass)
{
    am.KillBackgroundProcesses(emdkServiceName);
}
So, I can list them and see it in the list as well as grab details about the item in the list. Anybody know how I can force stop it?
You can use the Process class:
void killProcess (int pid)
You can get the "pid" from the ActivityManager.RunningAppProcessInfo.
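A rough Xamarin/C# sketch of that idea (untested; KillByPackageName is just an illustrative helper, and killing a process you don't own normally requires sharing its UID or holding the relevant permission, so this may silently do nothing for a third-party service):

// Illustrative helper: find the pid of a process by package name and kill it.
private void KillByPackageName(ActivityManager am, string packageName)
{
    // Note: on newer Android versions this list may only contain your own processes.
    var processes = am.RunningAppProcesses;
    if (processes == null)
        return;

    foreach (var info in processes)
    {
        foreach (var pkg in info.PkgList ?? new string[0])
        {
            if (pkg == packageName)
                Android.OS.Process.KillProcess(info.Pid);
        }
    }
}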
I am new at messaging architectures, so I might be going at this the wrong way. But I wanted to introduce NServiceBus slowly in my team by solving a tiny problem.
Appointments in agendas have states. Two users could be looking at the same appointment in the same agenda, in the same application. They start this application via a remote session on a central server. So if user 1 updates the state of the appointment, I'd like user 2 to see the new state in 'real time'.
To simulate this, or make a proof of concept if you will, I made a new console application. Via NuGet I got both NServiceBus and NServiceBus.Host, because as I understood from the documentation I need both. And I know that in production code it is not recommended to put everything in the same assembly, but the publisher and subscriber will most likely end up in the same assembly anyway...
In class Program method Main I wrote the following code:
BusConfiguration configuration = new BusConfiguration();
configuration.UsePersistence<InMemoryPersistence>();
configuration.UseSerialization<XmlSerializer>();
configuration.UseTransport<MsmqTransport>();
configuration.TimeToWaitBeforeTriggeringCriticalErrorOnTimeoutOutages(new TimeSpan(1, 0, 0));

ConventionsBuilder conventions = configuration.Conventions();
conventions.DefiningEventsAs(t => t.Namespace != null
                               && t.Namespace.Contains("Events"));

using (IStartableBus bus = Bus.Create(configuration))
{
    bus.Start();

    Console.WriteLine("Press key");
    Console.ReadKey();

    bus.Publish<Events.AppointmentStateChanged>(a =>
    {
        a.AppointmentID = 1;
        a.NewState = "New state";
    });

    Console.WriteLine("Event published.");
    Console.ReadKey();
}
In class EndPointConfig method Customize I added:
configuration.UsePersistence<InMemoryPersistence>();
configuration.UseSerialization<XmlSerializer>();
configuration.UseTransport<MsmqTransport>();
ConventionsBuilder conventions = configuration.Conventions();
conventions.DefiningEventsAs(t => t.Namespace != null
&& t.Namespace.Contains("Events"));
AppointmentStateChanged is a simple class in the Events folder like so:
public class AppointmentStateChanged : IEvent
{
    public int AppointmentID { get; set; }
    public string NewState { get; set; }
}
AppointmentStateChangedHandler is the event handler:
public class AppointmentStateChangedHandler : IHandleMessages<Events.AppointmentStateChanged>
{
    public void Handle(Events.AppointmentStateChanged message)
    {
        Console.WriteLine("AppointmentID: {0}, changed to state: {1}",
            message.AppointmentID,
            message.NewState);
    }
}
If I start up one console app, everything works fine; I see the handler handle the event. But if I try to start up a second console app, it crashes with System.Messaging.MessageQueueException (Timeout for the requested operation has expired). So I must be doing something wrong, and it makes me second-guess whether I'm missing something at a higher level. Could anyone point me in the right direction, please?
Update
Everything is in the namespace AgendaUpdates, except for the event class, which is in the AgendaUpdates.Events namespace.
Update 2
Steps taken:
Copied the AgendaUpdates solution (to an AgendaUpdates2 folder)
In the copy I changed the Endpoint attribute of the MessageEndpointMappings in App.config to "AgendaUpdates2"
I got MSMQ exception: "the queue does not exist or you do not have sufficient permissions to perform the operation"
In the copy I added this line of code to EndPointConfig: configuration.EndpointName("AgendaUpdates2");
I got MSMQ exception: "the queue does not exist or you do not have sufficient permissions to perform the operation"
In the copy I added this line of code to the Main method in the Program class:
configuration.EndpointName("AgendaUpdates2");
Got the original exception again after pressing a key.
--> I tested it by starting 2 Visual Studio instances with the original and the copied solution, and then starting both console apps in the IDE.
I'm not exactly sure why you are getting that specific exception, but I can explain why what you are trying to do fails. The problem is not having publisher and subscriber in the same application (this is possible and can be useful); the problem is that you are running two instances of the same application on the same machine.
NServiceBus relies on queuing technology (MSMQ in your case), and for everything to work properly each application needs to have its own unique queue. When you fire up two identical instances, both are trying to share the same queue.
There are a few things you can tinker with to get your scenario to work and to better understand how the queuing works:
Change the EndPointName of your second instance
Run the second instance on a separate machine
Separate the publisher and subscriber into separate processes
Regardless of which way you go, you will need to adjust your MessageEndpointMappings (on the consumer/subscriber) to reflect where the host/publisher queue lives (the "owner" of the message type):
http://docs.particular.net/nservicebus/messaging/message-owner#configuring-endpoint-mapping
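As a sketch (assuming the event types live in the AgendaUpdates assembly under the AgendaUpdates.Events namespace, and that the publisher's queue is named AgendaUpdates), the subscriber's App.config mapping would look something like this:

<UnicastBusConfig>
  <MessageEndpointMappings>
    <!-- Endpoint = the queue of the publisher that "owns" the event types -->
    <add Assembly="AgendaUpdates" Namespace="AgendaUpdates.Events" Endpoint="AgendaUpdates" />
  </MessageEndpointMappings>
</UnicastBusConfig>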
Edit based on your updates
I know this is a test setup/proof of concept, but it's still useful to think of these two deployments (of the same code) as publisher/host and subscriber/client. So let's call the original the host and the copy the client. I assume you don't want to have each subscribe to the other (at least for this basic test).
Also, make sure you are running both IDEs as Administrator on your machine. I'm not sure if this is interfering or not.
In the copy I changed the Endpoint attribute of the MessageEndpointMappings in App.config to "AgendaUpdates2". I got MSMQ exception: "the queue does not exist or you do not have sufficient permissions to perform the operation"
Since the copy is the client, you want to point its mapping to the host. So this should be "AgendaUpdates" (omit the "2").
In the copy I added this line of code to EndPointConfig: configuration.EndpointName("AgendaUpdates2"); I got MSMQ exception: "the queue does not exist or you do not have sufficient permissions to perform the operation"
In the copy I added this line of code to the Main method in the Program class: configuration.EndpointName("AgendaUpdates2"); Got the original exception again after pressing a key
I did not originally notice this, but you don't need to configure the endpoint twice. I believe your EndPointConfig is not getting called, as it is only used when hosting via the NSB host executable. You can likely just delete this class.
This otherwise sounds reasonable, but remember that your copy should not be publishing if it's the subscriber, so don't press any keys after it starts (only press keys in the original).
If you want the publisher to also be the receiver of the message, you need to specify this in configuration.
This is clearly explained in this article, where the solution to your problem is completely at the end of the article.
Generally with services, the task you want to complete is repeated, maybe in a loop, maybe on a trigger, or maybe something else.
I'm using Topshelf to complete a repeated task for me, specifically I'm using the Shelf'ing functionality.
The problem I'm having is how to handle the looping of the task.
When bootstrapping the service in Topshelf, you pass it a class (in this case ScheduledQueueService) and indicate its Start method and its Stop method:
Example:
public class QueueBootstrapper : Bootstrapper<ScheduledQueueService>
{
    public void InitializeHostedService(IServiceConfigurator<ScheduledQueueService> cfg)
    {
        cfg.HowToBuildService(n => new ScheduledQueueService());
        cfg.SetServiceName("ScheduledQueueHandler");
        cfg.WhenStarted(s => s.StartService());
        cfg.WhenStopped(s => s.StopService());
    }
}
But in my StartService() method I am using a while loop to repeat the task I'm running. When I attempt to stop the service through Windows Services it fails to stop, and I suspect that's because the StartService() method never returned when it was originally called.
Example:
public class ScheduledQueueService
{
    bool QueueRunning;

    public ScheduledQueueService()
    {
        QueueRunning = false;
    }

    public void StartService()
    {
        QueueRunning = true;
        while (QueueRunning)
        {
            // do some work
        }
    }

    public void StopService()
    {
        QueueRunning = false;
    }
}
What is a better way of doing this?
I've considered using System.Threading.Tasks to run the work in, and then maybe ending the work from StopService() (rough sketch below).
Maybe using Quartz to repeat the task and then remove it.
Thoughts?
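Roughly what I had in mind with Tasks is something like this (untested sketch, using System.Threading and System.Threading.Tasks):

public class ScheduledQueueService
{
    private CancellationTokenSource _cts;
    private Task _worker;

    public void StartService()
    {
        _cts = new CancellationTokenSource();

        // Run the loop on a background task so StartService() returns immediately.
        _worker = Task.Factory.StartNew(() =>
        {
            while (!_cts.Token.IsCancellationRequested)
            {
                // do some work
            }
        }, _cts.Token);
    }

    public void StopService()
    {
        _cts.Cancel();
        _worker.Wait(); // let the current iteration finish
    }
}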
Generally, how I would handle this is to have a Timer event that fires off a few moments after StartService() is called. At the end of the event handler, I would check for a stop flag (set in StopService()); if the flag (e.g. your QueueRunning) isn't set, I would register a single event on the Timer to fire again in a few moments.
We do something pretty similar in Topshelf itself, when polling the file system: https://github.com/Topshelf/Topshelf/blob/v2_master/src/Topshelf/FileSystem/PollingFileSystemEventProducer.cs#L80
Now that uses the internal scheduler type instead of a Timer object, but generally it's the same thing. The fiber is basically which thread to process the event on.
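A minimal sketch of that pattern with a plain System.Timers.Timer (names illustrative, not Topshelf-specific):

private System.Timers.Timer _timer;
private volatile bool _stopRequested;

public void StartService()
{
    // AutoReset = false: the timer fires once and we re-arm it ourselves after each run.
    _timer = new System.Timers.Timer(1000) { AutoReset = false };
    _timer.Elapsed += (s, e) =>
    {
        // do one unit of work here

        if (!_stopRequested)
            _timer.Start(); // register the next run
    };
    _timer.Start();
}

public void StopService()
{
    _stopRequested = true;
}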
If you have future questions, you are also welcomed to join the Topshelf mailing list. We try to be pretty responsive on there. http://groups.google.com/group/topshelf-discuss
I was working on some similar code today when I stumbled on https://stackoverflow.com/a/2033431/981 by accident, and it's been working like a charm for me.
I don't know about Topshelf specifically but when writing a standard windows service you want the start and stop events to complete as quickly as possible. If the start thread takes too long windows assumes that it has failed to start up, for example.
To get around this I generally use a System.Timers.Timer. This is set to call a startup method just once with a very short interval (so it runs almost immediately). This then does the bulk of the work.
In your case this could be your method that is looping. Then at the start of each loop, check a global shutdown variable; if it's true, you quit the loop and the program can stop.
You may need a bit more (or maybe even less) complexity than this, depending on where exactly the error is, but the general principle should be fine, I hope.
Once again, though, I will disclaim that this knowledge is not based on Topshelf, just general service development.
This is a rather weird issue that I am facing with my WCF/Silverlight application. I am using a WCF service to get data from a database for my Silverlight application, and the completed event is not firing for one of the WCF methods on some systems. I have checked that the called method executes properly and returns the values. I have also checked via Fiddler, and it clearly shows that the response contains the returned values as well. However, the completed event is not getting triggered. Moreover, on a few of the systems everything is fine and I am able to process the returned value in the completed method.
Any thoughts or suggestions would be greatly appreciated. I have tried searching around the web but without any luck :(
Following is the code.. Calling the method..
void RFCDeploy_Loaded(object sender, RoutedEventArgs e)
{
    btnSelectFile.IsEnabled = true;
    btnUploadFile.IsEnabled = false;

    btnSelectFile.Click += new RoutedEventHandler(btnSelectFile_Click);
    btnUploadFile.Click += new RoutedEventHandler(btnUploadFile_Click);
    RFCChangeDataGrid.KeyDown += new KeyEventHandler(RFCChangeDataGrid_KeyDown);
    btnAddRFCManually.Click += new RoutedEventHandler(btnAddRFCManually_Click);

    ServiceReference1.DataService1Client ws = new BEVDashBoard.ServiceReference1.DataService1Client();
    ws.GetRFCChangeCompleted += new EventHandler<BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs>(ws_GetRFCChangeCompleted);
    ws.GetRFCChangeAsync();

    this.BusyIndicator1.IsBusy = true;
}
Completed Event....
void ws_GetRFCChangeCompleted(object sender, BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs e)
{
    PagedCollectionView view = new PagedCollectionView(e.Result);
    view.GroupDescriptions.Add(new PropertyGroupDescription("RFC"));
    RFCChangeDataGrid.ItemsSource = view;

    foreach (CollectionViewGroup group in view.Groups)
    {
        RFCChangeDataGrid.CollapseRowGroup(group, true);
    }

    this.BusyIndicator1.IsBusy = false;
}
Please note that this WCF service has lots of other methods as well, and all of them are working fine... I have a problem with only this one method...
Thanks...
As others have noted, a look at some of your code would help. But some things to check:
(1) Turn off "Enable Just My Code" under Debug/Options/Debugging/General, and set some breakpoints in the Reference.cs file, to see whether any of the low-level callback methods there are getting hit.
(2) Confirm that you're setting the completed event handlers, and on the right instance of the proxy client. If you're setting the event handlers on one instance, and making the call on another, that could result in the behavior you're describing.
(3) Poke around with MS Service Trace Viewer, as described here, and see if there are any obvious errors (usually helpfully highlighted in red).
Likely there are other things you could check, but this will keep you busy for a day or so :-).
(Edits made after code posted)
(4) You might want to try defining your ws variable at the class level rather than the function. In theory, having an event-handler defined on it means that it won't get garbage collected, but it's still a little odd, in that once you're out of the function, you don't have a handle to it anymore, and hence can't do important things like, say, closing it.
(5) If you haven't already, try rebuilding your proxy class through the Add Service Reference dialog box in Visual Studio. I've seen the occasional odd problem pop up when the web service has changed subtly and the client wasn't updated to reflect the changes: some methods will get called successfully, others won't.
(6) If you're likely to have multiple instances of a proxy client open at the same time, consider merging them into one instance (and use the optional "object userState" parameter of the method call to pass the callback, so you don't run into the nasty possibility of multiple event handlers getting assigned). I've run into nasty problems in the past when multiple instances were stepping on each other, and my current best practice is to structure my code in such a way that there's only ever one client instance open at a time. I know that's not necessarily what MS says, but it's been my experience.
This issue was caused by special characters in one of the fields returned from the DB, which the browser was not able to render. After considerable debugging and searching over the web, I was able to figure this out. I used regular expressions to remove these special characters in the WCF service, and the newly returned values from the method rendered successfully in various browsers on different systems. :)
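For illustration only (the exact expression depends on which characters caused the trouble), something along these lines strips everything outside the printable ASCII range before the value is returned; rawValueFromDb is a placeholder for the string read from the database:

using System.Text.RegularExpressions;

string sanitized = Regex.Replace(rawValueFromDb, @"[^\u0020-\u007E]", string.Empty);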
Make sure you have checked 'Generate asynchronous operations' in your service reference. Right-click on the service reference and check the box. This solved it for me.
I was creating an HTTP module, and while debugging I noticed something which at first (at least) seemed like weird behaviour.
When I set a breakpoint in the Init method of the HttpModule, I can see that the Init method is being called several times even though I have only started up the website for debugging and made one single request (sometimes it is hit only 1 time, other times as many as 10 times).
I know that I should expect several instances of the HttpApplication to be running, and that for each of them the HTTP modules will be created. But when I request a single page, it should be handled by a single HttpApplication object and therefore the associated events should only fire once. Still, the events fire several times for each request, which makes no sense, unless the handlers have been added several times within that HttpApplication. That would mean it is the same HttpModule's Init method being called every time, and not a new HttpApplication being created each time it hits my breakpoint (see my code example at the bottom, etc.).
What could be going wrong here? Is it because I am debugging and set a breakpoint in the http module?
I have noticed that if I start up the website for debugging and quickly step over the breakpoint in the HttpModule, the Init method is only hit once, and the same goes for the event handler. If I instead let it hang at the breakpoint for a few seconds, the Init method is called several times (it seems to depend on how long I wait before stepping over the breakpoint). Maybe this is some built-in feature to make sure that the HttpModule is initialized and the HttpApplication can serve requests, but it also seems like something that could have catastrophic consequences.
This could seem logical, as it might be trying to finish the request, and since I have set the breakpoint it thinks something has gone wrong and calls the Init method again, so it can handle the request?
But is this what is happening, and is everything fine (I am just guessing), or is it a real problem?
What I am especially concerned about is that if something makes it hang on the production/live server for a few seconds, a lot of event handlers are added through Init, and then each request to the page suddenly fires the event handler several times.
This behaviour could quickly bring any site down.
I have looked at the "original" .NET code used for the HTTP modules for Forms authentication, the RoleManagerModule, etc., but my code isn't any different from what those modules use.
My code looks like this.
public void Init(HttpApplication app)
{
    if (CommunityAuthenticationIntegration.IsEnabled)
    {
        FormsAuthenticationModule formsAuthModule = (FormsAuthenticationModule)app.Modules["FormsAuthentication"];
        formsAuthModule.Authenticate += new FormsAuthenticationEventHandler(this.OnAuthenticate);
    }
}
Here is an example how it is done in the RoleManagerModule from the .NET framework:
public void Init(HttpApplication app)
{
    if (Roles.Enabled)
    {
        app.PostAuthenticateRequest += new EventHandler(this.OnEnter);
        app.EndRequest += new EventHandler(this.OnLeave);
    }
}
Does anyone know what is going on?
(I just hope someone out there can tell me why this is happening and assure me that everything is perfectly fine) :)
UPDATE:
I have tried to narrow down the problem, and so far I have found that the Init method is always called on a new object of my HttpModule (contrary to what I thought before).
It seems that for the first request (when starting up the site), all of the HttpApplication objects being created, and their modules, try to serve the first request and therefore all hit the event handler that is being added.
I can't really figure out why this is happening.
If I request another page, all the HttpApplications created (and their modules) will again try to serve the request, causing the event handler to be hit multiple times.
But it also seems that if I then jump back to the first page (or another one), only one HttpApplication will take care of the request and everything is as expected, as long as I don't let it hang at a breakpoint.
If I let it hang at a breakpoint, it begins to create new HttpApplication objects and starts adding HttpApplications (more than one) to serve/handle the request (which is already in the process of being served by the HttpApplication that is currently stopped at the breakpoint).
I guess or hope that it might be some intelligent "behind the scenes" way of helping to distribute and handle load and / or errors. But I have no clue.
I hope someone out there can assure me that this is perfectly fine and how it is supposed to be.
It's normal for the Init() method to be called multiple times. When an application starts up, the ASP.NET Worker process will instantiate as many HttpApplication objects as it thinks it needs, then it'll pool them (e.g. reuse them for new requests, similar to database connection pooling).
Now for each HttpApplication object, it will also instantiate one copy of each IHttpModule that is registered and call the Init method that many times. So if 5 HttpApplication objects are created, 5 copies of your IHttpModule will be created, and your Init method called 5 times. Make sense?
Now why is it instantiating, say, 5 HttpApplication objects? Well, maybe your ASPX page has links to other resources which your browser will try to download: CSS, JavaScript, WebResource.axd, maybe an iframe somewhere. Or maybe the ASP.NET worker process 'is in the mood' for starting more than one HttpApplication object; that's really an internal detail/optimisation of the ASP.NET process running under IIS (or the VS built-in web server).
If you want code that's guaranteed to run just once (and don't want to use the Application_StartUp event in the Global.asax), you could try the following in your IHttpModule:
private static bool HasAppStarted = false;
private readonly static object _syncObject = new object();
public void Init(HttpApplication context)
{
if (!HasAppStarted)
{
lock (_syncObject)
{
if (!HasAppStarted)
{
// Run application StartUp code here
HasAppStarted = true;
}
}
}
}
I've done something similar and it seems to work, though I'd welcome critiques of my work in case I've missed something.
Inspect HttpContext.Current.Request to see for which request the module's Init is fired. It could be the browser sending multiple requests.
If you are connected to IIS, check the IIS logs to see whether any request is received during the time you are sitting at the breakpoint.
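For example, a quick (illustrative) way to log which request each Init call belongs to; note that in the IIS integrated pipeline the request may not be accessible yet, hence the guard:

public void Init(HttpApplication app)
{
    string url = "(no current request)";
    try
    {
        if (HttpContext.Current != null)
            url = HttpContext.Current.Request.RawUrl;
    }
    catch (HttpException)
    {
        // "Request is not available in this context" - keep the placeholder.
    }
    System.Diagnostics.Trace.WriteLine("Init called, request: " + url);
}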
Here is a bit of explanation as to what you should use, when, and how they work.
When to use Application_Start vs Init in Global.asax?
Edit: More reading
The ASP Column: HTTP Modules
INFO: Application Instances, Application Events, and Application State in ASP.NET
The example above locks the IHttpModule for all requests, and then it freezes the whole application.
If your IHttpModule handles the request several times, you need to call the HttpApplication method CompleteRequest and dispose of the IHttpModule's HttpApplication instance in the EndRequest event, in order to remove the HttpApplication instance, like this:
public class TestModule : IHttpModule
{
    #region IHttpModule Members

    public void Dispose()
    {
    }

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new EventHandler(context_BeginRequest);
        context.EndRequest += new EventHandler(context_EndRequest);
    }

    void context_EndRequest(object sender, EventArgs e)
    {
        HttpApplication app = sender as HttpApplication;
        app.CompleteRequest();
        app.Dispose();
    }

    void context_BeginRequest(object sender, EventArgs e)
    {
        // your code here
    }

    #endregion
}
If you need the IHttpModule to handle requests every time, without re-requesting on postback, use the code above.