Description:
On a C# ASP.Net web application, we have implemented some timers to periodically run background tasks. One of the timers occasionally seems to get "doubled" or more rarely "tripled".
The timer is set to run once every minute and seems to run properly for a while. Eventually, however, it seems like a second timer gets started and calls the timed process a second time within the same time interval. I've even seen a case where we had three processes running.
Since this process locks some database records, and a second (or third) process doing the same thing causes a deadlock or timeout error on the database connection, we've implemented a mechanism to allow only one thread at a time to execute the database-critical portion of the process code. When the process takes longer than a minute to run, this mechanism successfully blocks the next run triggered by its own timer. But the thread locking fails if the process is triggered by the second (or third) timer.
In our logs, I output both the Process ID and the Managed Thread ID, which lets me see which thread is starting, finishing, or erring out. The strange thing is that, regardless of which timer instance kicked off the process, the Process ID is the same.
var processID = System.Diagnostics.Process.GetCurrentProcess().Id;
var thread = System.Threading.Thread.CurrentThread.ManagedThreadId;
How do I prevent multiple instances of the timer?
We have a web farm with 2 servers behind a load balancer. I've been assured that the app pool on each server is configured to run only a single worker process (no web garden). A web.config setting specifies which server will run the timed process; the other server will not load the timer.
Relevant Code:
On the Global.asax.cs
protected static WebTaskScheduler PersonGroupUpdateScheduler
{
get;
private set;
}
protected void StartSchedulers()
{
using (var logger = new LogManager())
{
// ... other timers configured in similar fashion ...
if (AppSetting.ContinuousPersonGroupUpdates)
{
// clear out-of-date person-group-updater lock
logger.AppData.Remove("PersonGroupUpdater"); // database record to prevent interference with another process outside the web application.
var currentServer = System.Windows.Forms.SystemInformation.ComputerName;
if (currentServer.EqualsIngoreCase(AppSetting.ContinuousPersonGroupUpdateServer))
{
PersonGroupUpdateScheduler = new WebTaskScheduler() {
AutoReset = true,
Enabled = true,
Interval = AppSetting.ContinuousPersonGroupUpdateInterval.TotalMilliseconds,
SynchronizingObject = null,
};
PersonGroupUpdateScheduler.Elapsed += new ElapsedEventHandler(DistributePersonGroupProcessing);
PersonGroupUpdateScheduler.Start();
HostingEnvironment.RegisterObject(PersonGroupUpdateScheduler);
logger.Save(Log.Types.Info, "Starting Continuous Person-Group Updating Timer.", "Web");
}
else
{
logger.Save(Log.Types.Info, string.Format("Person-Group Updating set to run on server {0}.", AppSetting.ContinuousPersonGroupUpdateServer), "Web");
}
}
else
{
logger.Save(Log.Types.Info, "Person-Group Updating is turned off.", "Web");
}
}
}
private void DistributePersonGroupProcessing(object state, ElapsedEventArgs eventArgs)
{
// to start with a clean connection, create a new data context (part of default constructor)
// with each call.
using (var groupUpdater = new GroupManager())
{
groupUpdater.HttpContext = HttpContext.Current;
groupUpdater.ContinuousGroupUpdate(state, eventArgs);
}
}
In a separate file, we have the WebTaskScheduler class, which just wraps System.Timers.Timer and implements the IRegisteredObject interface so that IIS will recognize the triggered process as something it needs to deal with when shutting down.
public class WebTaskScheduler : Timer, IRegisteredObject
{
private Action _action = null;
public Action Action
{
get
{
return _action;
}
set
{
_action = value;
}
}
private readonly WebTaskHost _webTaskHost = new WebTaskHost();
public WebTaskScheduler()
{
}
public void Stop(bool immediate)
{
this.Stop();
_action = null;
}
}
Finally, the locking mechanism for the critical section of the code.
public void ContinuousGroupUpdate(object state, System.Timers.ElapsedEventArgs eventArgs)
{
var pgUpdateLock = PersonGroupUpdaterLock.Instance;
try
{
if (0 == Interlocked.Exchange(ref pgUpdateLock.LockCounter, 1))
{
if (LogManager.AppData["GroupImporter"] == "Running")
{
Interlocked.Exchange(ref pgUpdateLock.LockCounter, 0);
LogManager.Save(Log.Types.Info, string.Format("Group Import is running, exiting Person-Group Updater. Person-Group Update Signaled at {0:HH:mm:ss.fff}.", eventArgs.SignalTime), "Person-Group Updater");
return;
}
try
{
LogManager.Save(Log.Types.Info, string.Format("Continuous Person-Group Update is Starting. Person-Group Update Signaled at {0:HH:mm:ss.fff}.", eventArgs.SignalTime), "Person-Group Updater");
LogManager.AppData["PersonGroupUpdater"] = "Running";
// ... prep work is done here ...
try
{
// ... real work is done here ...
LogManager.Save(Log.Types.Info, "Continuous Person-Group Update is Complete", "Person-Group Updater");
}
catch (Exception ex)
{
ex.Data["Continuous Person-Group Update Activity"] = "Processing Groups";
ex.Data["Current Record when failure occurred"] = currentGroup ?? string.Empty;
LogManager.Save(Log.Types.Error, ex, "Person-Group Updater");
}
}
catch (Exception ex)
{
LogManager.Save(Log.Types.Error, ex, "Person-Group Updater");
}
finally
{
Interlocked.Exchange(ref pgUpdateLock.LockCounter, 0);
LogManager.AppData.Remove("PersonGroupUpdater");
}
}
else
{
// exit if another thread is already running this method
LogManager.Save(Log.Types.Info, string.Format("Continuous Person-Group Update is already running, exiting Person-Group Updater. Person-Group Update Signaled at {0:HH:mm:ss.fff}.", eventArgs.SignalTime), "Person-Group Updater");
}
}
catch (Exception ex)
{
Interlocked.Exchange(ref pgUpdateLock.LockCounter, 0);
LogManager.Save(Log.Types.Error, ex, "Person-Group Updater");
}
}
IIS can/will host multiple AppDomains under a worker process (w3wp). These AppDomains can't/don't/shouldn't really talk to each other. It's IIS's responsibility to manage them.
I suspect what's happening is that you have multiple AppDomains loaded.
That said... just to be 100% sure... the timer is being started under Application_Start in your Global.asax, correct? That will get executed once per AppDomain (not per HttpApplication, as its name suggests).
You can check how many AppDomains are running for your process by using the ApplicationManager's GetRunningApplications() and GetAppDomain(string id) methods.
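For example, a minimal sketch along these lines would list the applications the current worker process is hosting (the Debug.WriteLine output is just illustrative):
using System.Web.Hosting;
...
var appManager = ApplicationManager.GetApplicationManager();
foreach (ApplicationInfo app in appManager.GetRunningApplications())
{
    // one entry per application hosted inside this w3wp process
    System.Diagnostics.Debug.WriteLine(
        string.Format("App ID: {0}, VirtualPath: {1}", app.ID, app.VirtualPath));
}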
In theory you could also do some inter-appdomain communication in there to make sure your process only starts once...but I'd strongly advise against it. In general, relying on scheduling from a web application is ill advised (because your code is meant to be ignorant of how IIS manages your application lifetime).
The preferred/recommended approach for scheduling is via a Windows Service.
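If you do move the timer out of IIS, a bare-bones service host might look roughly like this (PersonGroupService and RunPersonGroupUpdate are illustrative names; the real work would call into the same update logic the web app uses today):
public class PersonGroupService : System.ServiceProcess.ServiceBase
{
    private System.Timers.Timer _timer;

    protected override void OnStart(string[] args)
    {
        // fire once a minute, independent of any IIS application lifetime
        _timer = new System.Timers.Timer(60000) { AutoReset = true };
        _timer.Elapsed += (s, e) => RunPersonGroupUpdate();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
        _timer.Dispose();
    }

    private void RunPersonGroupUpdate()
    {
        // call into the existing person-group update logic here
    }
}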
Related
We have a Windows service that runs a while loop and monitors the database for pending orders. It works fine; however, lately we've noticed that in a high-load environment it opens two threads to process orders instead of one.
In this code, when StartService() is called, it opens a new thread and processes orders in the DB. This code should only ever call StartService once, so why do we see multiple threads open? Do you see any bug with this design?
Here Queue.IsFull is a volatile bool flag.
public static void StartWork()
{
bool started = false;
//Infinite Loop
while (continueWork)
{
try
{
//Bool flag to prevent back to back call
if (started == false)
{
started = true;
// Do work only if Any Pending Request in Database.
if (AppSettings.AnythingToPRocess() == true)
{
if (Queue.IsFull == false)
{
StartService(); //set Queue.IsFull to True inside
}
}
started = false;
}
}
catch (Exception exp)
{
LogError("Failed to Start" , exp);
}
finally
{
System.Threading.Thread.Sleep(5000); //5 seconds
}
}
}
private static void StartService()
{
// Set flag to true here to prevent back-to-back calls
Queue.IsFull = true;
Log("Service started");
Thread ServiceThread = new Thread(() =>
{
Service service = new Service();
service.Process();
});
ServiceThread.Name = "Thread1";
ServiceThread.Start();
}
Thread.Sleep takes milliseconds, not seconds; Sleep(5) is 5 milliseconds, not 5 seconds.
Unless there's an exception, started will always end up false so if StartService is asynchronous then the try block will run again.
As I understand from your post, you are starting the service and it runs on a different thread. In that case the started flag should only be set back to false when the service exits.
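One way to make that concrete is an Interlocked flag that the worker itself clears when it finishes; this is a rough sketch that replaces the started/Queue.IsFull bookkeeping (Service, Process, and Log come from the question, _working is new):
private static int _working = 0; // 0 = idle, 1 = a worker is running

private static void StartService()
{
    // atomically claim the guard; bail out if a worker is already running
    if (Interlocked.CompareExchange(ref _working, 1, 0) != 0)
        return;

    Log("Service started");
    Thread serviceThread = new Thread(() =>
    {
        try
        {
            Service service = new Service();
            service.Process();
        }
        finally
        {
            // release only when the work is actually done
            Interlocked.Exchange(ref _working, 0);
        }
    });
    serviceThread.Name = "Thread1";
    serviceThread.Start();
}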
I am building an ASP.NET Web API service. One API call needs more than 2 minutes to retrieve the desired data, so I implemented a cache mechanism: for every request sent to the API server, the server returns the cached data and meanwhile starts a new thread to load fresh data into the cache. The issue is that if I submit a lot of requests, a lot of threads will be running, which eventually crashes the server. I want to implement a mechanism that allows only one of these threads at any given time, but I know ASP.NET Web API is inherently multi-threaded. How do I tell the other requests to wait, because one thread is already retrieving the new set of data?
[Dependency]
public ICacheManager<OrderArray> orderArrayCache { get; set; }
private ReadOrderService Service = new ReadOrderService();
private const string _ckey = "all";
public dynamic Get()
{
try
{
OrderArray cache = orderArrayCache.Get(_ckey);
if(cache == null || cache.orders.Length == 0)
{
OrderArray data = Service.GetAllOrders();
orderArrayCache.Add(_ckey, data);
return data;
}
else
{
Caching();
return cache;
}
}
catch (Exception error)
{
ErrorLog.WriteLog(Config._SystemName, this.GetType().Name, System.Reflection.MethodBase.GetCurrentMethod().Name, error.ToString());
return 0;
}
}
public void Caching()
{
Thread worker = new Thread(() => CacheWorker());
worker.Start();
}
public void CacheWorker()
{
try
{
//ActivityLog.WriteLog(Config._SystemName, this.GetType().Name, System.Reflection.MethodBase.GetCurrentMethod().Name, "Cache Worker Is Starting to Work");
OrderArray data = Service.GetAllOrders();
orderArrayCache.Put(_ckey, data);
//ActivityLog.WriteLog(Config._SystemName, this.GetType().Name, System.Reflection.MethodBase.GetCurrentMethod().Name, "Cache Worker Is Working Hard");
}
catch(Exception error)
{
//ActivityLog.WriteLog(Config._SystemName, this.GetType().Name, System.Reflection.MethodBase.GetCurrentMethod().Name, error.ToString());
}
}
Without commenting on the overall architecture, it's as trivial as setting a flag that you're working, and not starting the thread if that flag is set.
Of course in the ASP.NET MVC/WebAPI context, a controller instance is created for every request, so a simple field won't work. You could make it static, but that'll only work per AppDomain: one application can run in multiple AppDomains, by using multiple worker processes.
You could solve that by using a mutex, but then your application could be in a server farm, introducing a whole shebang of new problems.
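For what it's worth, a named-mutex version would look roughly like this; it coordinates worker processes on one machine only, and since a Mutex must be released by the thread that acquired it, the acquisition happens inside the worker (the mutex name is illustrative):
private static readonly Mutex CacheMutex = new Mutex(false, "MyApp.CacheRefresh");

public void Caching()
{
    new Thread(() =>
    {
        // Mutex is thread-affine: take and release it on this worker thread
        if (!CacheMutex.WaitOne(TimeSpan.Zero))
            return; // another refresh is already running on this machine

        try
        {
            CacheWorker();
        }
        finally
        {
            CacheMutex.ReleaseMutex();
        }
    }).Start();
}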
That being said, the naive, static approach:
private static bool _currentlyRetrievingCacheableData = false;
public void Caching()
{
if (_currentlyRetrievingCacheableData)
{
return;
}
Thread worker = new Thread(() => CacheWorker());
worker.Start();
}
public void CacheWorker()
{
try
{
_currentlyRetrievingCacheableData = true;
// ...
}
catch(Exception error)
{
// ...
}
finally
{
_currentlyRetrievingCacheableData = false;
}
}
There's still a race issue here, but at most two threads can be accessing the CacheWorker() method. You can prevent that by using a lock statement.
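A sketch of closing that race with an atomic flag instead of the plain bool: Interlocked.CompareExchange flips the flag before the worker starts, so at most one refresh runs even if two requests pass the cache check together (this assumes CacheWorker no longer toggles the flag itself):
private static int _refreshing = 0; // 0 = idle, 1 = a refresh is in flight

public void Caching()
{
    // only the request that wins the exchange gets to start a worker
    if (Interlocked.CompareExchange(ref _refreshing, 1, 0) != 0)
        return;

    new Thread(() =>
    {
        try
        {
            CacheWorker();
        }
        finally
        {
            Interlocked.Exchange(ref _refreshing, 0);
        }
    }).Start();
}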
Do note that all of these are workarounds for doing the obvious: let the cache-refreshing mechanism live outside your web application code, for example in a Windows Service or a Scheduled Task.
I have a series of code blocks that are taking too long. I don't need any finesse when one fails. In fact, I want to throw an exception when these blocks take too long and just fall out through our standard error handling. I would prefer NOT to create methods out of each block (which is the only suggestion I've seen so far), as it would require a major rewrite of the code base.
Here's what I would LIKE to create, if possible.
public void MyMethod( ... )
{
...
using (MyTimeoutObject mto = new MyTimeoutObject(new TimeSpan(0,0,30)))
{
// Everything in here must complete within the timespan
// or mto will throw an exception. When the using block
// disposes of mto, then the timer is disabled and
// disaster is averted.
}
...
}
I've created a simple object to do this using the Timer class. (NOTE for those that like to copy/paste: THIS CODE DOES NOT WORK!!)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Timers;
public class MyTimeoutObject : IDisposable
{
private Timer timer = null;
public MyTimeoutObject (TimeSpan ts)
{
timer = new Timer();
timer.Elapsed += timer_Elapsed;
timer.Interval = ts.TotalMilliseconds;
timer.Start();
}
void timer_Elapsed(object sender, ElapsedEventArgs e)
{
throw new TimeoutException("A code block has timed out.");
}
public void Dispose()
{
if (timer != null)
{
timer.Stop();
}
}
}
It does not work because the System.Timers.Timer class catches, absorbs, and ignores any exception thrown from its Elapsed handler, which -- as I've discovered -- defeats my design. Is there any other way of creating this class/functionality without a total redesign?
This seemed so simple two hours ago, but is causing me much headache.
OK, I've spent some time on this one and I think I have a solution that will work for you without having to change your code all that much.
The following is how you would use the Timebox class that I created.
public void MyMethod( ... ) {
// some stuff
// instead of this
// using(...){ /* your code here */ }
// you can use this
var timebox = new Timebox(TimeSpan.FromSeconds(1));
timebox.Execute(() =>
{
/* your code here */
});
// some more stuff
}
Here's how Timebox works.
A Timebox object is created with a given Timespan
When Execute is called, the Timebox creates a child AppDomain to hold a TimeboxRuntime object reference, and returns a proxy to it
The TimeboxRuntime object in the child AppDomain takes an Action as input to execute within the child domain
Timebox then creates a task to call the TimeboxRuntime proxy
The task is started (and the action execution starts), and the "main" thread waits for as long as the given TimeSpan
After the given TimeSpan (or when the task completes), the child AppDomain is unloaded whether the Action was completed or not.
A TimeoutException is thrown if action times out, otherwise if action throws an exception, it is caught by the child AppDomain and returned for the calling AppDomain to throw
A downside is that your program will need elevated enough permissions to create an AppDomain.
Here is a sample program which demonstrates how it works (I believe you can copy-paste this, if you include the correct usings). I also created this gist if you are interested.
public class Program
{
public static void Main()
{
try
{
var timebox = new Timebox(TimeSpan.FromSeconds(1));
timebox.Execute(() =>
{
// do your thing
for (var i = 0; i < 1000; i++)
{
Console.WriteLine(i);
}
});
Console.WriteLine("Didn't Time Out");
}
catch (TimeoutException e)
{
Console.WriteLine("Timed Out");
// handle it
}
catch(Exception e)
{
Console.WriteLine("Another exception was thrown in your timeboxed function");
// handle it
}
Console.WriteLine("Program Finished");
Console.ReadLine();
}
}
public class Timebox
{
private readonly TimeSpan _ts;
public Timebox(TimeSpan ts)
{
_ts = ts;
}
public void Execute(Action func)
{
AppDomain childDomain = null;
try
{
// Construct and initialize settings for a second AppDomain. Perhaps some of
// this is unnecessary but perhaps not.
var domainSetup = new AppDomainSetup()
{
ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase,
ConfigurationFile = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile,
ApplicationName = AppDomain.CurrentDomain.SetupInformation.ApplicationName,
LoaderOptimization = LoaderOptimization.MultiDomainHost
};
// Create the child AppDomain
childDomain = AppDomain.CreateDomain("Timebox Domain", null, domainSetup);
// Create an instance of the timebox runtime child AppDomain
var timeboxRuntime = (ITimeboxRuntime)childDomain.CreateInstanceAndUnwrap(
typeof(TimeboxRuntime).Assembly.FullName, typeof(TimeboxRuntime).FullName);
// Start the runtime, passing it the function we're timeboxing
Exception ex = null;
var timeoutOccurred = true;
var task = new Task(() =>
{
ex = timeboxRuntime.Run(func);
timeoutOccurred = false;
});
// start task, and wait for the alloted timespan. If the method doesn't finish
// by then, then we kill the childDomain and throw a TimeoutException
task.Start();
task.Wait(_ts);
// if the timeout occurred then we throw the exception for the caller to handle.
if(timeoutOccurred)
{
throw new TimeoutException("The child domain timed out");
}
// If no timeout occurred, then throw whatever exception was thrown
// by our child AppDomain, so that calling code "sees" the exception
// thrown by the code that it passes in.
if(ex != null)
{
throw ex;
}
}
finally
{
// kill the child domain whether or not the function has completed
if(childDomain != null) AppDomain.Unload(childDomain);
}
}
// don't strictly need this, but I prefer having an interface point to the proxy
private interface ITimeboxRuntime
{
Exception Run(Action action);
}
// Need to derive from MarshalByRefObject... proxy is returned across AppDomain boundary.
private class TimeboxRuntime : MarshalByRefObject, ITimeboxRuntime
{
public Exception Run(Action action)
{
try
{
// Nike: just do it!
action();
}
catch(Exception e)
{
// return the exception to be thrown in the calling AppDomain
return e;
}
return null;
}
}
}
EDIT:
The reason I went with an AppDomain instead of Threads or Tasks only is that there is no bulletproof way to terminate Threads or Tasks running arbitrary code [1][2][3]. An AppDomain, for your requirements, seemed like the best approach to me.
Here's an async implementation of timeouts:
...
private readonly SemaphoreSlim sem = new SemaphoreSlim(1, 1);
...
// total time allowed here is 100ms
var tokenSource = new CancellationTokenSource(100);
try{
await WorkMethod(parameters, tokenSource.Token); // work
} catch (OperationCanceledException ocx){
// gracefully handle cancellations:
label.Text = "Operation timed out";
}
...
public async Task WorkMethod(object prm, CancellationToken ct){
try{
await sem.WaitAsync(ct); // equivalent to lock(object){...}
// synchronized work,
// call tokenSource.Token.ThrowIfCancellationRequested() or
// check tokenSource.IsCancellationRequested in long-running blocks
// and pass ct to other tasks, such as async HTTP or stream operations
} finally {
sem.Release();
}
}
NOT that I advise it, but you could pass the tokenSource instead of its Token into WorkMethod and periodically do tokenSource.CancelAfter(200) to add more time if you're certain you're not at a spot that can be dead-locked (waiting on an HTTP call) but I think that would be an esoteric approach to multithreading.
Instead your threads should be as fast as possible (minimum IO) and one thread can serialize the resources (producer) while others process a queue (consumers) if you need to deal with IO multithreading (say file compression, downloads etc) and avoid deadlock possibility altogether.
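A self-contained sketch of that producer/consumer shape using BlockingCollection; all names and the fake payloads are illustrative:
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class ProducerConsumerSketch
{
    static void Main()
    {
        var queue = new BlockingCollection<string>(boundedCapacity: 100);

        // single producer: the only place IO would happen
        var producer = Task.Run(() =>
        {
            for (var i = 0; i < 10; i++)
                queue.Add("payload " + i);   // stand-in for reading a file/HTTP response
            queue.CompleteAdding();          // signal consumers that no more items are coming
        });

        // consumers: process items off the queue in parallel
        var consumers = Enumerable.Range(0, 3).Select(_ => Task.Run(() =>
        {
            foreach (var item in queue.GetConsumingEnumerable())
                Console.WriteLine("processed " + item);
        })).ToArray();

        Task.WaitAll(consumers);
    }
}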
I really liked the visual idea of a using statement. However, that is not a viable solution. Why? Well, a sub-thread (the object/thread/timer within the using statement) cannot disrupt the main thread and inject an exception, thus causing it to stop what it was doing and jump to the nearest try/catch. That's what it all boils down to. The more I sat and worked with this, the more that came to light.
In short, it can't be done the way I wanted to do it.
However, I've taken Pieter's approach and mangled my code a bit. It does introduce some readability issues, but I've tried to mitigate them with comments and such.
public void MyMethod( ... )
{
...
// Placeholder for thread to kill if the action times out.
Thread threadToKill = null;
Action wrappedAction = () =>
{
// Take note of the action's thread. We may need to kill it later.
threadToKill = Thread.CurrentThread;
...
/* DO STUFF HERE */
...
};
// Now, execute the action. We'll deal with the action timeouts below.
IAsyncResult result = wrappedAction.BeginInvoke(null, null);
// Set the timeout to 10 minutes.
if (result.AsyncWaitHandle.WaitOne(10 * 60 * 1000))
{
// Everything was successful. Just clean up the invoke and get out.
wrappedAction.EndInvoke(result);
}
else
{
// We have timed out. We need to abort the thread!!
// Don't let it continue to try to do work. Something may be stuck.
threadToKill.Abort();
throw new TimeoutException("This code block timed out");
}
...
}
Since I'm doing this in three or four places per major section, this does get harder to read over. However, it works quite well.
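If the repetition becomes a problem, the same pattern can be pulled into a small helper; TimeboxedRunner and Run are illustrative names, and the semantics (including the Thread.Abort caveats) are unchanged:
public static class TimeboxedRunner
{
    public static void Run(TimeSpan timeout, Action action)
    {
        Thread threadToKill = null;
        Action wrapped = () =>
        {
            // remember the worker thread so we can abort it on timeout
            threadToKill = Thread.CurrentThread;
            action();
        };

        IAsyncResult result = wrapped.BeginInvoke(null, null);
        if (result.AsyncWaitHandle.WaitOne(timeout))
        {
            wrapped.EndInvoke(result);
        }
        else
        {
            if (threadToKill != null)
                threadToKill.Abort(); // last resort, same as the inline version above
            throw new TimeoutException("This code block timed out");
        }
    }
}

// usage: TimeboxedRunner.Run(TimeSpan.FromMinutes(10), () => { /* DO STUFF HERE */ });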
I have a SQL Server CLR stored proc that retrieves a large set of rows, then does some processing and updates a count in another table.
Here's the flow:
select -> process -> update count -> mark the selected rows as processed
The nature of the process is that it should not count the same set of data twice. And the SP is called with a GUID as an argument.
So I'm keeping a list of GUIDs (in a static list in the SP) that are currently in process and halt the execution for subsequent calls to the SP with the same argument until one currently in process finishes.
I have the code that removes the GUID when a process finishes in a finally block, but it's not called every time. There are instances (like when the user cancels execution of the SP) where the SP exits without calling the finally block and without removing the GUID from the list, so subsequent calls keep waiting indefinitely.
Can you give me a solution to make sure that my finally block will be called no matter what, or any other way to make sure only one ID is in process at any given time?
Here's a sample of the code with the processing bits removed
[Microsoft.SqlServer.Server.SqlProcedure]
public static void TransformSurvey(Guid PublicationId)
{
AutoResetEvent autoEvent = null;
bool existing = false;
//check if the process is already running for the given Id
//concurrency handler holds a dictionary of publicationIds and AutoresetEvents
lock (ConcurrencyHandler.PublicationIds)
{
existing = ConcurrencyHandler.PublicationIds.TryGetValue(PublicationId, out autoEvent);
if (!existing)
{
//there's no process in progress. so OK to start
autoEvent = new AutoResetEvent(false);
ConcurrencyHandler.PublicationIds.Add(PublicationId, autoEvent);
}
}
if (existing)
{
//wait on the shared object
autoEvent.WaitOne();
lock (ConcurrencyHandler.PublicationIds)
{
ConcurrencyHandler.PublicationIds.Add(PublicationId, autoEvent); //add this again as the exiting thread has removed this from the list
}
}
try
{
// ... do the processing here..........
}
catch (Exception ex)
{
//exception handling
}
finally
{
//remove the pubid
lock (ConcurrencyHandler.PublicationIds)
{
ConcurrencyHandler.PublicationIds.Remove(PublicationId);
autoEvent.Set();
}
}
}
Wrapping the code at a higher level is a good solution; another option could be the using statement with IDisposable.
public class SQLCLRProcedure : IDisposable
{
public bool Execute(Guid guid)
{
// Do work
}
public void Dispose()
{
// Remove GUID
// Close Connection
}
}
using (SQLCLRProcedure procedure = new SQLCLRProcedure())
{
procedure.Execute(guid);
}
This isn't verified in a compiler but it's commonly referred to as the IDisposable Pattern.
http://msdn.microsoft.com/en-us/library/system.idisposable.aspx
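Adapted to the question's code, the idea might look roughly like this (PublicationLock is an illustrative name; ConcurrencyHandler.PublicationIds is the dictionary from the question). Keep in mind that using compiles down to try/finally, so it carries the same caveat: Dispose runs on exceptions, but not if the thread is killed outright.
public sealed class PublicationLock : IDisposable
{
    private readonly Guid _publicationId;

    public PublicationLock(Guid publicationId)
    {
        _publicationId = publicationId;
        // ... register the id / wait on the AutoResetEvent here, as in the question ...
    }

    public void Dispose()
    {
        // same cleanup the question performs in its finally block
        lock (ConcurrencyHandler.PublicationIds)
        {
            ConcurrencyHandler.PublicationIds.Remove(_publicationId);
        }
    }
}

// usage inside TransformSurvey:
using (var pubLock = new PublicationLock(PublicationId))
{
    // ... do the processing here ...
}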
I am writing a Windows service that checks for a particular service; if that service is stopped, my service will start it...
protected override void OnStart(string[] args)
{
Thread thread = new Thread(new ThreadStart(ServiceThreadFunction));
thread.Start();
}
public void ServiceThreadFunction()
{
try
{
ServiceController dc = new ServiceController("WebClient");
//ServiceController[] services = ServiceController.GetServices();
while (true)
{
if ((int)dc.Status == 1)
{
dc.Start();
WriteLog(dc.Status.ToString);
if ((int)dc.Status == 0)
{
//heartbeat
}
}
else
{
//service started
}
//Thread.Sleep(1000);
}
}
catch (Exception ex)
{
// log errors
}
}
I want my service to check for the other service and start it... please help me, how can I do that?
First of all, why are you casting the ServiceController's Status property from the convenient ServiceControllerStatus enum to an int? Best to leave it as an enum. Especially since your Heartbeat code, which compares it to 0, will never be run because ServiceControllerStatus doesn't have 0 as a possible value.
Secondly, you shouldn't use a while(true) loop. Even with the Thread.Sleep you have commented out there, it's a needless drain on resources. You can just use the WaitForStatus method to wait for the service to start:
ServiceController sc = new ServiceController("WebClient");
if (sc.Status == ServiceControllerStatus.Stopped)
{
sc.Start();
sc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
}
This will wait up to 30 seconds (or whatever) for the service to reach the Running state.
UPDATE: I re-read the original question, and I think what you're trying to do here shouldn't even be done with code. If I understood correctly, you want to set a dependency for your service on the WebClient service when you're installing it. Then, when the user starts your service in the Service Manager, it will automatically try to start the dependent service.
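If the dependency is declared in the service's installer class, it might look roughly like this (MyWatchdogService is an illustrative name; running sc config MyWatchdogService depend= WebClient achieves the same thing from the command line):
using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class MyServiceInstaller : Installer
{
    public MyServiceInstaller()
    {
        var processInstaller = new ServiceProcessInstaller
        {
            Account = ServiceAccount.LocalSystem
        };
        var serviceInstaller = new ServiceInstaller
        {
            ServiceName = "MyWatchdogService",          // illustrative name
            StartType = ServiceStartMode.Automatic,
            ServicesDependedOn = new[] { "WebClient" }  // the service from the question
        };
        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}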