I have implemented a solution in my ASP.NET project to automatically send emails on a schedule. I have done this using the ASP.NET cache (HttpRuntime.Cache), specifically the CacheItemRemovedCallback. First of all, I add the task to the cache in the Application_Start method:
protected void Application_Start(object sender, EventArgs e)
{
...
AddTask(reportElement.name, totalMinutes);
...
}
and the AddTask method then adds the item to the cache:
private void AddTask(string name, int minutes)
{
OnCacheRemove = new CacheItemRemovedCallback(CacheItemRemoved);
HttpRuntime.Cache.Insert(name, minutes, null, DateTime.Now.AddMinutes(minutes), Cache.NoSlidingExpiration,
CacheItemPriority.NotRemovable, OnCacheRemove);
}
So when the cache entry expires after the minutes specified in the absolute expiration, it calls my CacheItemRemoved method. This basically runs a report, sends an email and then re-adds the task to the cache, so it will run again after the interval elapses - simple. Here is the part of the code we are concerned with in CacheItemRemoved:
public void CacheItemRemoved(string taskName, object minutes, CacheItemRemovedReason r)
{
...
finally
{
AddTask(taskName, Convert.ToInt32(minutes));
}
...
}
There is exception handling in the code; as you can see, the re-adding of the task is in the finally block, so it should always get called. All the exception catch blocks do is log the error to a file, as I want to keep the task running even if the previous run fails.
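For completeness, here is the full shape of the handler as just described; RunReport, SendEmail and LogError are hypothetical placeholders for the elided parts:

public void CacheItemRemoved(string taskName, object minutes, CacheItemRemovedReason r)
{
    try
    {
        RunReport(taskName);   // hypothetical: generate the report
        SendEmail(taskName);   // hypothetical: send it out
    }
    catch (Exception ex)
    {
        LogError(ex);          // hypothetical: log to file only, so the task keeps running
    }
    finally
    {
        // Always re-add the task so it fires again after the interval.
        AddTask(taskName, Convert.ToInt32(minutes));
    }
}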
This works perfectly on my local machine, but on a Windows Server 2003 box it basically just runs once. I have added extra debugging, and it looks like the second time the cache entry is added, it simply doesn't expire. I am completely stuck now. The Windows server is running IIS 6.0. Are there any settings for the cache I don't know about? Also, on the server it seems to expire at a completely different time to what was specified in the minutes.
Thanks in advance.
HttpRuntime.Cache.Insert(name, minutes, null, DateTime.Now.AddMinutes(minutes), Cache.NoSlidingExpiration,
CacheItemPriority.NotRemovable, OnCacheRemove);
When you add your cache item, why are you specifying CacheItemPriority.NotRemovable? Surely this will prevent the item from ever being removed (unless you run out of memory).
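As a diagnostic sketch (not something from the original post), logging the CacheItemRemovedReason before re-adding the task would show whether the entry actually expired or was evicted for another reason:

public void CacheItemRemoved(string taskName, object minutes, CacheItemRemovedReason r)
{
    // Expired means the absolute expiration fired as intended; Removed,
    // Underused or DependencyChanged mean something else evicted the entry.
    LogToFile(string.Format("Task '{0}' removed, reason: {1}", taskName, r)); // LogToFile is a hypothetical helper
    // ...run the report, send the email, and re-add the task as before
}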
I'm developing an app which basically performs some tasks on a timer tick (in this case, searching for beacons) and sends the results to the server. My goal was to create an app which does its job constantly in the background.

Fortunately, I'm using logging all over the code, so when we started to test it we found that sometime later the timer's callback wasn't being called on time. There were some pauses which had obviously been caused by standby and doze mode. At that moment I was using a background service and System.Threading.Timer. Then, after some research, I rewrote the service to use AlarmManager + wake locks, but the pauses were still there. The next try was to make the service foreground and use it with a Handler to post delayed tasks, and everything seemed to be fine while the device was connected to the computer. When the device is not connected to a charger, those pauses are there again.

The interesting thing is that we cannot actually predict this behavior. Sometimes it works perfectly fine and sometimes not. And this is really strange, because the code that schedules it is pretty simple and straightforward:
...
private int scanThreadsCount = 0;
private Android.OS.Handler handler = new Android.OS.Handler();
private bool LocationInProgress
{
get { return Interlocked.CompareExchange(ref scanThreadsCount, 0, 0) != 0; }
}
public void ForceLocation()
{
if (!LocationInProgress) DoLocation();
}
private async void DoLocation()
{
Interlocked.Increment(ref scanThreadsCount);
Logger.Debug("Location is started");
try
{
// Location...
}
catch (Exception e)
{
Logger.Error(e, "Location cannot be performed due to an unexpected error");
}
finally
{
if (LocationInterval > 0)
{
// It's here. The location interval is 60 seconds
// and the service is running in the foreground!
// But in the screenshot we can see the delay, which
// sometimes reaches 10 minutes or even more.
handler.PostDelayed(ForceLocation, LocationInterval * 1000);
}
Logger.Debug("Location has been finished");
Interlocked.Decrement(ref scanThreadsCount);
}
}
...
Actually, a small drift could be OK, but I need the service to do its job strictly on time; the callback is being called with a delay of a few seconds, or even a few minutes, and that's not acceptable.
The Android documentation says that foreground services are not restricted by standby and doze mode, but I cannot really find the cause of this strange behavior. Why is the callback not being called on time? Where do these 10-minute pauses come from? It's pretty frustrating, because I cannot move further unless I have a robust basis. Does anybody know the reason for such strange behavior, or have any suggestions on how I can get the callback executed on time?
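In case it is relevant, this is the doze-aware exact-alarm scheduling I am considering as a fallback (a sketch only; LocationAlarmReceiver is a hypothetical BroadcastReceiver that would run the scan and schedule the next alarm):

// Sketch: schedule the next scan via an exact, doze-aware alarm from inside the service.
var alarmManager = (AlarmManager)GetSystemService(Context.AlarmService);
var intent = new Intent(this, typeof(LocationAlarmReceiver));
var pending = PendingIntent.GetBroadcast(this, 0, intent, PendingIntentFlags.UpdateCurrent);
long triggerAtMs = Java.Lang.JavaSystem.CurrentTimeMillis() + LocationInterval * 1000;
if (Build.VERSION.SdkInt >= BuildVersionCodes.M)
{
    // Documented to fire even while the device is idle/dozing (API 23+).
    alarmManager.SetExactAndAllowWhileIdle(AlarmType.RtcWakeup, triggerAtMs, pending);
}
else
{
    alarmManager.SetExact(AlarmType.RtcWakeup, triggerAtMs, pending);
}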
P.S. The current version of the app is here. I know it's quite boring trying to figure out what is wrong with someone else's code, but there are only 3 files which relate to this problem:
~/Services/BeaconService.cs
~/Services/BeaconServiceScanFunctionality.cs
~/Services/BeaconServiceSyncFunctionality.cs
The project is provided for anyone who would like to try it in action and figure it out by themselves.
Any help will be appreciated!
Thanks in advance
Given the code:
protected void Application_Start(object sender, EventArgs e)
{
var testTimer = new Timer(
LogTimer,
null,
new TimeSpan(0, 0, 0, 0),
new TimeSpan(0, 0, 0, 1)
);
}
public static void LogTimer(object sender)
{
"Hello".Log();
}
At seemingly random times the timer stops firing, and it won't start again unless I restart the website.
It doesn't throw any exceptions, but looking in the Windows error log there are some entries:
The Open Procedure for service "Lsa" in DLL "C:\Windows\System32\Secur32.dll" failed. Performance data for this service will not be available. The first four bytes (DWORD) of the Data section contains the error code.
Unable to open the Server service performance object. The first four bytes (DWORD) of the Data section contains the status code.
The site is active (the start mode of the app pool is AlwaysRunning).
I understand that using timers in this way is not a recommended approach for critical things for exactly this reason, but I am failing to come up with an explanation as to why it's silently and apparently randomly just giving up.
From your code, I would expect the garbage collector to collect your timer, since there is no reference keeping it alive. Have you tried something like:
static Timer testTimer;
protected void Application_Start(object sender, EventArgs e)
{
testTimer = new Timer(...);
}
ASP.NET isn't suited to running timers due to the way AppDomains get unloaded, the threading model and many other factors.
I suggest you read this blog post from Scott Hanselman that discusses various ways to successfully run timer-based code in ASP.NET web applications.
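For illustration, here is a minimal sketch of one pattern from that discussion: keep a strong reference to the timer (which also covers the garbage collection point above) and register its holder with the hosting environment, so ASP.NET signals shutdown instead of silently killing the timer with the AppDomain:

using System;
using System.Threading;
using System.Web.Hosting;

public class TimerHost : IRegisteredObject
{
    private readonly Timer _timer;

    public TimerHost()
    {
        // Tell ASP.NET this object must be notified before the AppDomain unloads.
        HostingEnvironment.RegisterObject(this);
        _timer = new Timer(_ => "Hello".Log(), null,   // same Log() extension as above
                           TimeSpan.Zero, TimeSpan.FromSeconds(1));
    }

    public void Stop(bool immediate)
    {
        // Called by ASP.NET on shutdown; clean up and deregister.
        _timer.Dispose();
        HostingEnvironment.UnregisterObject(this);
    }
}

Note this does not stop IIS from unloading an idle application; it only keeps the timer referenced and gives it a clean shutdown while the AppDomain lives.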
I have a Windows service that polls a remote FTP server every three seconds. It checks a directory for files, downloads any files present, and deletes those files once downloaded. Average file size is 10 KB, and rarely they will go up to the 100 KB range.
Occasionally (I have noticed no pattern), the WebClient will throw the following:
System.Net.WebException: The operation has timed out.
at System.Net.WebClient.OpenRead(Uri address)
It will do this for one or more files, usually whatever files are in the remote directory at that time. It will continue to do so indefinitely, churning on the "stuck" files at each polling interval. The bizarre part is that when I stop/start the Windows service, the "stuck" files download perfectly and the polling/downloading works again for long stretches of time. This is bizarre because I download like this:
private object _pollingLock = new object();
public void PollingTimerElapsed(object sender, ElapsedEventArgs e)
{
if (Monitor.TryEnter(_pollingLock))
{
//FtpHelper lists content of files in directory
...
foreach(var file in files)
{
using(var client = new WebClient())
{
client.Proxy = null;
using (var data = client.OpenRead(file.Uri))
{
//Use data stream to write file locally
...
}
}
//FtpHelper deletes the file
...
}
}
//Release the _pollingLock inside a finally
}
I would assume that a new connection is opened and closed for each file (unless .NET is doing something behind the scenes). If a file download had an issue, it would get a fresh retry on the next polling interval (in 3 sec). Why would a service restart make things work?
I've begun to suspect that the issue has something to do with caching (file or connection). Recently I tried going into Internet Explorer and clearing the cache. Approximately 30 sec or so later, all the files downloaded with no service restart. But, the next batch of files to arrive all got hung up again. I might try adding a line like this:
client.CachePolicy = new RequestCachePolicy(RequestCacheLevel.NoCacheNoStore);
or try disabling KeepAlives, but I want to get some opinions before I start trying random stuff.
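For what it's worth, WebClient itself has no keep-alive switch; disabling keep-alives would mean overriding GetWebRequest in a subclass (for ftp:// URIs the underlying request is an FtpWebRequest, which does expose KeepAlive). A sketch of what I have in mind:

using System;
using System.Net;
using System.Net.Cache;

public class NoKeepAliveWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        // Bypass any caching layer entirely.
        request.CachePolicy = new RequestCachePolicy(RequestCacheLevel.NoCacheNoStore);
        var ftpRequest = request as FtpWebRequest;
        if (ftpRequest != null)
        {
            // Close the control connection after each file instead of pooling it.
            ftpRequest.KeepAlive = false;
        }
        return request;
    }
}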
So: What is causing the occasional timeouts? Why does restarting the service work? Why did clearing the cache work?
Update
I made the cache policy and keep-alive changes mentioned above about two weeks ago. I just now got my first timeout since then. The changes appear to have reduced the frequency, but alas, it is still happening.
Update
As requested, this is how I am kicking off the Timer:
_pollingTimer.AutoReset = true;
_pollingTimer.Elapsed += PollingTimerElapsed;
_pollingTimer.Interval = 10000;
_pollingTimer.Enabled = true;
Looks like you are kicking off your processing using the System.Timers.Timer.Elapsed event.
One gotcha that I found is that if your Elapsed event takes longer to execute than the timer interval, your event can be called again from another thread before it has finished executing.
This is specifically mentioned in the docs:
If the SynchronizingObject property is null, the Elapsed event is raised on a ThreadPool thread. If the processing of the Elapsed event lasts longer than Interval, the event might be raised again on another ThreadPool thread. In this situation, the event handler should be reentrant.
Assuming you are indeed using a vanilla timer with AutoReset=true (it's on by default), the first thing to do would be to address this potential issue. You can use a SynchronizingObject; alternatively, you can do something like this:
//setup code
Timer myTimer = new Timer(30000);
myTimer.AutoReset = false;
....
//Elapsed handler
public void PollingTimerElapsed(object sender, ElapsedEventArgs e)
{
//do what you currently do
...
//when finished, kick off the timer again
myTimer.Start();
}
Either way, the main thing is to ensure that your code doesn't accidentally get called simultaneously by multiple threads - if that happens there's a good chance that occasionally you'll have one thread trying to download something from the site while another thread is simultaneously deleting the file.
The things that you mentioned (that it only happens occasionally, that file sizes are normally small, that it's fixed by a restart, etc.) would point me in the direction of this being the issue.
Assume I have two Quartz.NET jobs:
one that downloads a CSV file with a delta of changes for a period (e.g. 24h) and then imports the data (called IncrementalImportJob)
one that downloads a CSV file with all the records and then imports the data (called FullImportJob)
The requirement is that IncrementalImportJob runs at a minimum once per period (e.g. 24h). If that window is missed, or the job didn't complete successfully, then FullImportJob should run instead, because the changes for that (missed) day would otherwise not be imported. This condition is rather exceptional.
The FullImportJob requires resources (time, CPU, database, memory) to import all the data, which may impact other systems. Further, the delta of changes is often minimal or non-existent. So the goal is to favour running IncrementalImportJob when possible.
How does one configure quartz.net to run FullImportJob if IncrementalImportJob hasn't completed successfully in a specific time period (say 24h)?
Searching the web for "quartz.net recovery" and "quartz.net misfire" doesn't reveal whether it's supported or even possible.
There is native misfire handling in Quartz.NET; however, it only goes as far as specifying whether the job should fire immediately again, after a period of time, or a number of times after misfiring.
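For example, the built-in handling is configured on the trigger's schedule; a sketch with made-up trigger names:

// On a misfire, fire the trigger again as soon as possible instead of
// waiting for the next scheduled interval.
ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("incrementalImportTrigger", "importGroup") // hypothetical names
    .WithSimpleSchedule(x => x
        .WithIntervalInHours(24)
        .RepeatForever()
        .WithMisfireHandlingInstructionFireNow())
    .Build();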
I think one option is to handle this internally from IncrementalImportJob.
try
{
//download data
//import data
}
catch (Exception e) //something went wrong
{
//log the error
UpdateFullImportJobTrigger(sched);
}
//Reschedule FullImportJob to run at a time of your choosing.
public void UpdateFullImportJobTrigger(IScheduler sched)
{
    ITrigger oldTrigger = sched.GetTrigger(new TriggerKey("oldTrigger", "group1"));
    TriggerBuilder tb = oldTrigger.GetTriggerBuilder();
    //if you want it to run based on a schedule use this:
    ITrigger newTrigger = tb.WithSimpleSchedule(x => x
            .WithIntervalInSeconds(10)
            .WithRepeatCount(10))
        .Build();
    sched.RescheduleJob(oldTrigger.Key, newTrigger);
    //or use a simple trigger if you want it to run immediately and only once, so that
    //it runs again on schedule the next time.
}
This is one way of doing it. Another would be abstracting this logic to a maintenance job that checks the logs every so often and if it finds a failure message from IncrementalImportJob, it fires FullImportJob. However, this depends to some extent on your logging system (most people use NLog or log4net).
If, on the other hand, your concern is that the job never ran in the first place because, for instance, the app/database/server was down, you could schedule FullImportJob to fire a few hours later and check whether IncrementalImportJob has fired, as follows:
//this is done from FullImportJob
//how you retrieve triggerKey will depend on whether
//you are using RAMJobStore or ADO.NET JobStore
public void Execute(IJobExecutionContext context)
{
ITrigger incImportJobTrigger = context.Scheduler.GetTrigger(triggerKey);
//if the job has been rescheduled with a new time quartz will set this to null
if (!incImportJobTrigger.GetPreviousFireTimeUtc().HasValue) return;
DateTimeOffset utcTime = incImportJobTrigger.GetPreviousFireTimeUtc().Value;
DateTime previousFireTime = utcTime.LocalDateTime;
if (previousFireTime.Day == DateTime.Now.Day) return;
//IncrementalImportJob has not run today, let's run FullImportJob
}
Hope this helps.
I implemented a Windows service with an EventLog and a FileSystemWatcher that looks for changes in a specific directory and writes messages into MyLog.
strange thing 1:
I install it via installUtil.exe (since VS2012 doesn't have installer templates), and in some situations when I go to "Services" and start the service I get:
The [service name] service on local computer started and then stopped. Some Services stop automatically if they are not in use by another services or programs.
I've already seen this question. Two answers from that post explain why it can happen:
1) No thread is started in the OnStart() method.
I use the designer and set most of the properties in the Properties window, and I never started any thread manually; but in some cases everything was working, so I think this is not the case.
2) An exception occurs in the OnStart() method. I think it's not the case either, because I don't change the code. I just uninstall, build and install the same service again, and in some cases it runs, in some not.
When I had been stuck for maybe 2 hours with this thing, I noticed that the Source property of eventLog was too long: "FilesMonitoringServices". I changed it to "MonitorSource" and everything started to work. Then I reinstalled it a couple of times and got the same warning as above. I changed the Source property again and now the service runs.
This is the first strange thing.
strange thing 2: worse. Even when it runs, it logs only the OnStart() and OnStop() methods; I mean the fileSystemWatcher event handler never executes. It is strange because today I reinstalled this service maybe a hundred times, and 3 times it was working, but after I reinstalled it once again it stopped. And I haven't changed the code between the reinstallations at all.
Here are the methods and constructor from my class (MonitoringService), which inherits ServiceBase:
public MonitoringService()
{
InitializeComponent();
if (!EventLog.SourceExists(eventLog.Source))
{
EventLog.CreateEventSource(eventLog.Source, eventLog.Log);
}
// haven't changed it between the reinstallations
fileWatcher.Path = @"path";
}
protected override void OnStart(string[] args)
{
fileWatcher.EnableRaisingEvents = true;
eventLog.WriteEntry("start", EventLogEntryType.Information);
base.OnStart(args);
}
protected override void OnStop()
{
fileWatcher.EnableRaisingEvents = false;
fileWatcher.Dispose();
eventLog.WriteEntry("stop", EventLogEntryType.Information);
base.OnStop();
}
And file system watcher event handler:
private void fileSystemWatcher1_Changed(object sender, FileSystemEventArgs e)
{
using (var conn = new SqlConnection(GetConnectionString()))
{
conn.Open();
var productId = Convert.ToInt32(Regex.Match(e.Name, @"\d+").Value);
const string cmd = "UPDATE Products SET ImageModifiedDate=@date WHERE ProductId=@productId";
using (var command = new SqlCommand(cmd, conn))
{
command.Parameters.AddWithValue("@productId", productId);
command.Parameters.AddWithValue("@date", DateTime.Now);
command.ExecuteNonQuery();
}
}
eventLog.WriteEntry(string.Format("{0} has been changed: {1}", e.Name, DateTime.Now), EventLogEntryType.Information);
}
Question: it seems to me that this behavior is caused not by my code but rather by operating system settings. Can it be so?
Edits: I just discovered more specific stuff:
1) If it shows the message (when I want to start the service):
The [service name] service on local computer started and then stopped. ....
I need to change the Source property of eventLog, rebuild and reinstall. Then the message will not show up; maybe it will next time.
2) I have the following folder hierarchy: images/prod-images. The images and prod-images directories both contain image files. When the service is running and I change an image in the prod-images folder, the message is written into the log as I wanted and the database is updated. But after this one event the service stops! (I checked this 3 times.) And when I restart it and repeat this a couple more times, it updates the database and writes logs, and on the 3rd time I get
The [service name] service on local computer started and then stopped. ....
But this is not the best part) If I change an image that is in the images directory, I can do it multiple times and the service doesn't stop. (Only images from images/prod-images are bound to entries in the database.)
So, maybe this behavior somehow relates to the database access?
Edits 2: in Visual Studio I use DEBUG -> Attach to Process to debug the service. I set up the breakpoints and change the image. The first time, the event handler executes flawlessly: the database is updated and the log message is written. But then I continue to press F11 (Step Into) and this event handler executes a second time. At the line
var productId = Convert.ToInt32(Regex.Match(e.Name, @"\d+").Value);
I get "FormatException was unhandled". After this I stop debugging and the service stops! That's it: the exception occures in event handler.
Do you have any idea why it executes a second time? Thanks!
P.S. I've already accepted Davut Gürbüz's answer, because he pointed me in the right direction.
Anyway, check out my own answer that explains the actual problem.
If you get the started-and-then-stopped error, it means you have an error in the constructor.
Put a try/catch into your ctor. You can log the error to the event log in the catch block.
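Something like this (a sketch; "MonitorSource" stands for whatever event source you registered):

public MonitoringService()
{
    try
    {
        InitializeComponent();
        // ...the rest of the ctor setup
    }
    catch (Exception ex)
    {
        // Surface the real failure instead of the generic
        // "started and then stopped" message.
        EventLog.WriteEntry("MonitorSource", ex.ToString(), EventLogEntryType.Error);
        throw;
    }
}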
Besides this, I create a Main method and start the Windows service as a console app. If I get an instance of the service in my Main method, I can also debug it.
//You should select Console Application from Application properties
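//Note: OnStart/OnStop are protected on ServiceBase, so this Main method
//must live inside the service class (or you must expose public wrappers)
//for these calls to compile.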
static void Main(string[] args)
{
MyWindowsService service = new MyWindowsService();
if (Environment.UserInteractive)
{
service.OnStart(args);
Console.WriteLine("Press any key to stop program");
Console.Read();
service.OnStop();
}
else
{
ServiceBase.Run(service);
}
}
Hope it helps.
The reason the fileSystemWatcher1_Changed event handler was executing twice is that I was monitoring the images folder with subdirectories included, and the handler was reacting to all LastWrite events.
So, when I changed an image in the images/prod-images directory, the handler reacted to the image changing and also to the folder changing.
In this situation I can either change the monitored path to prod-images or insert an if statement when updating the DB (see the sketch below).
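If you go the if-statement route, a minimal sketch of the guard (the directory's own LastWrite time changes when a file inside it changes, which is what raised the second event):

private void fileSystemWatcher1_Changed(object sender, FileSystemEventArgs e)
{
    // Ignore events raised for directories; only react to actual files.
    if (Directory.Exists(e.FullPath))
        return;
    // ...update the database and write the log entry as before
}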
Such a silly mistake took me a couple of days to figure out))