I read on MSDN that although timers cannot guarantee to fire at the exact interval (in my case 1 second), they will not fire before the interval.
The timer on one PC (Windows 7) works fine, while on the other (Windows Server 2003) it fires every 0.99999936 seconds.
I'm really interested in why this is happening.
I noticed this because I had code counting seconds as newSeconds = newSeconds + delta.Seconds,
where delta was DateTime.Now - lastTime.
The Seconds part was showing 1 on Windows 7 and 0 on Windows Server 2003.
The solution was to just read TotalSeconds instead, but I still wonder why it fires early.
Can anyone elaborate on this?
Edit
I actually have it happening on two different Windows Server 2003 PCs.
My wondering goes deeper into the areas of: is there a difference between the OSs? Is .NET Framework 4 different on Windows 7 vs Windows Server 2003? Are there any other deviations people might know of? How are the timers implemented; could it be a hardware-related issue?
And as opposed to this question:
C# timer getting fired before their interval time
I have it happening all the time, on every tick. No need for a long-running timer.
Thanks
Edit
public void OnTick(object sender, EventArgs e)
{
    var delta = DateTime.Now - _lastTime;
    DoStuff();
    _lastTime = DateTime.Now;
}
This line:
newSeconds = newSeconds + delta.Seconds
Could, and probably will, drift further and further away from a true measure. Imagine measuring 1 mile with a yardstick: you will eventually drift off.
The actual time discrepancy may be the result of the small amount of time between when the event fires and when you grab the time. There must be a few CPU cycles in between those two points.
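A minimal sketch of the truncation at work (the numbers here are illustrative, chosen to mimic the 0.99999936-second tick):
// TimeSpan.Seconds is the truncated whole-seconds component, so the
// fractional part of every delta is thrown away and the sum drifts.
var delta = TimeSpan.FromTicks(9999993); // ~0.9999993 s, just under one second

int wholeSeconds = 0;
double totalSeconds = 0;
for (int i = 0; i < 3600; i++)
{
    wholeSeconds += delta.Seconds;      // adds 0 on every tick
    totalSeconds += delta.TotalSeconds; // accumulates the true elapsed time
}
Console.WriteLine(wholeSeconds); // 0 -- a whole "hour" lost to truncation
Console.WriteLine(totalSeconds); // ~3599.997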
I would recommend using Reactive Extensions. Read the "It's all about time" chapter in this blog post.
Related
I've been given a task to write a program that counts how many page views are requested from our site. My current approach is to get the data from the Google Analytics Real Time API, which works, to my surprise.
My problem is that to get page views every minute I need to poll data from the Google API twice (because it returns the sum of the last 29 minutes plus a value from a timer that resets every minute). After I set up 'the point of reset', let's say at the 55th second of every minute, I poll data at the 56th second and later at the 53rd second, which gives me a relatively good estimate of new users / page views requested.
So this is my current approach:
static System.Timers.Timer myTimer = new System.Timers.Timer();
myTimer.AutoReset = false;
myTimer.Interval = interval();
myTimer.Elapsed += myTimer_Elapsed2;
myTimer.Start();
static double interval()
{
    return 1000 - DateTime.Now.Millisecond;
}
static void myTimer_Elapsed2(object sender, System.Timers.ElapsedEventArgs e)
{
    if (DateTime.Now.Second == (resetPoint.Second - 1) % 60 && warden)
    {
        DoStuff(); // mostly inserting Google API data into the database
    }
    else if (DateTime.Now.Second == (resetPoint.Second + 1) % 60) // so we don't get a ridiculous 60 or above
    {
        // I get some data here, to later use in DoStuff - mostly to calculate the gap
    }
    myTimer.Interval = interval(); // Because DoStuff() takes about 0.5 sec to execute, I need to recalibrate
    myTimer.Start();
}
And it works really well, until it stops after about 2 hours; for now I have no idea why (the program keeps running, the timer just doesn't do its work anymore).
How do I make it stable for long periods of time? Best case scenario would be to run it for months without intervention.
# I edited to give a better sense of what I'm actually doing
#END CREDITS
I ended up using two timers, each running in a one-minute cycle. A database write sometimes crashed and I didn't handle the corresponding exception properly. The log told me that the Google API calls from time to time take a bit longer to retrieve data, which led to multiple Threading.Event calls and made my database data handling throw an exception, hence stopping the timer.
I tried the Quartz approach, but its lack of a human-friendly how-to made me abandon the library.
You should really look into using Quartz.net for scheduling events on a reliable basis. Using a timer for scheduling is asking for stuff like race conditions, event skips and database deadlocks.
http://www.quartz-scheduler.net/ allows you to schedule events at precise intervals, independent of when your code starts or stops.
An example of how to use it: this will build a trigger that fires at the top of the next hour, then repeats every 2 hours, forever:
ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("trigger8") // because group is not specified, "trigger8" will be in the default group
    .StartAt(DateBuilder.EvenHourDate(null)) // get the next even hour (minutes and seconds zero ("00:00"))
    .WithSimpleSchedule(x => x
        .WithIntervalInHours(2)
        .RepeatForever())
    // note that in this example, 'ForJob(..)' is not called
    // - which is valid if the trigger is passed to the scheduler along with the job
    .Build();

scheduler.ScheduleJob(job, trigger);
http://www.quartz-scheduler.net/documentation/quartz-2.x/tutorial/simpletriggers.html has a few examples. I really URGE you to use it, since it will severely simplify development.
The .NET timer is reliable. That is, it won't just stop working randomly for no apparent reason.
Most likely, something in your timer event handler is throwing an exception, which is not surfaced because System.Timers.Timer squashes exceptions. As the documentation states:
The Timer component catches and suppresses all exceptions thrown by event handlers for the Elapsed event. This behavior is subject to change in future releases of the .NET Framework.
That bit about the behavior being "subject to change" has been there since at least .NET 2.0.
What I think is happening is that the timer calls your event handler. The event handler or one of the methods it calls throws an exception, and the timer just drops it on the floor because you don't handle it.
You need to put an exception handler in your myTimer_Elapsed2 method so that you can at least log any exceptions that crop up. With the information provided from the exception log, you can probably identify what the problem is.
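A minimal sketch of that guard, applied to the handler from the question (the Log method is a hypothetical placeholder for whatever logging you use):
static void myTimer_Elapsed2(object sender, System.Timers.ElapsedEventArgs e)
{
    try
    {
        // ... the existing polling / DoStuff() logic ...
    }
    catch (Exception ex)
    {
        Log(ex); // hypothetical logging call; System.Timers.Timer would otherwise swallow this silently
    }
    finally
    {
        // Re-arm even after a failure, so one bad tick doesn't kill the timer.
        myTimer.Interval = interval();
        myTimer.Start();
    }
}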
Better yet, stop using System.Timers.Timer. Use System.Threading.Timer instead.
Finally, there's no way that your code as written will reliably give you a timer tick at exactly 55 seconds past the minute, every minute. The timer isn't exact. It will be off by a few milliseconds each minute. Over time, it's going to start ticking at 54 seconds (or maybe 56), and then 53 (or 57), etc. If you really need this to tick reliably at 55 seconds past the minute, then you'll need to reset the timer after every minute, taking into account the current time.
I suspect that your need to check every minute at exactly the 55 second mark is overkill. Just set your timer to tick every minute, and then determine the exact elapsed time since the last tick. So one "minute" might be 61 or 62 seconds, and another might be 58 or 59 seconds. If you store the number of requests and the elapsed time, subsequent processing can smooth the bumps and give you a reliable requests-per-minute number. Trying to gather the data on exact one-minute boundaries is going to be exceedingly difficult, if even possible with a non-real-time operating system like Windows.
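A sketch of that approach, with GetPageViewCount() and SaveSample() as hypothetical stand-ins for the Google API call and the database write:
static readonly Stopwatch _sinceLastTick = Stopwatch.StartNew(); // requires System.Diagnostics

static void OnMinuteTick(object sender, System.Timers.ElapsedEventArgs e)
{
    double elapsedSeconds = _sinceLastTick.Elapsed.TotalSeconds;
    _sinceLastTick.Restart();

    long views = GetPageViewCount(); // hypothetical wrapper around the Google API
    double viewsPerMinute = views / (elapsedSeconds / 60.0);

    // Store the raw count and the actual elapsed time; later processing
    // can smooth the bumps into a reliable requests-per-minute figure.
    SaveSample(views, elapsedSeconds, viewsPerMinute); // hypothetical database write
}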
The code below shows that Sleep(1) sleeps an average of 2 milliseconds!
DateTime dt = DateTime.Now;
int i = 0;
long max = -1;
while (true)
{
    Stopwatch st = new Stopwatch();
    st.Restart();
    System.Threading.Thread.Sleep(1);
    long el = st.ElapsedMilliseconds;
    max = Math.Max(el, max);
    i++;
    double time = DateTime.Now.Subtract(dt).TotalMilliseconds;
    if (time >= 1000)
    {
        Console.WriteLine("Time =" + time);
        Console.WriteLine("i =" + i);
        Console.WriteLine("max =" + max);
        System.Threading.Thread.Sleep(200);
        i = 0;
        dt = DateTime.Now;
        max = -1;
    }
}
Typical Output:
Time =1000.1553
i =495
max =5
Could somebody explain the reason? And how can I fix this problem?
Getting 2 milliseconds is fairly unusual; most anybody who runs your code will get 15 instead. It is rather machine dependent and mostly depends on what other programs you've got running on your machine. One way to change it, for example, is to start Chrome: you'll see (close to) 1 msec sleeps.
You should display more digits to avoid rounding artifacts. A simplification of the code:
static void Main(string[] args) {
    Stopwatch st = new Stopwatch();
    while (true) {
        st.Restart();
        System.Threading.Thread.Sleep(1);
        st.Stop();
        Console.Write("{0} ", st.Elapsed.Ticks / 10000.0);
        System.Threading.Thread.Sleep(200);
    }
}
Which produces on my machine:
16.2074 15.6224 15.6291 15.5313 15.6242 15.6176 15.6152 15.6279 15.6194 15.6128
15.6236 15.6236 15.6134 15.6158 15.6085 15.6261 15.6297 15.6128 15.6261 15.6218
15.6176 15.6055 15.6218 15.6224 15.6212 15.6134 15.6128 15.5928 15.6375 15.6279
15.6146 15.6254 15.6248 15.6091 15.6188 15.4679 15.6019 15.6212 15.6164 15.614
15.7504 15.6085 15.55 15.6248 15.6152 15.6248 15.6242 15.6158 15.6188 15.6206 ...
This is normal output, I have no programs running on my machine that mess with the operating system. This will be the way it works on most machines.
Some background on what's going on. When you call Thread.Sleep() with a value larger than 0 then you voluntarily give up the processor and your thread goes into a wait state. It will resume when the operating system's thread scheduler runs and enough time has expired.
What's key about that sentence is "when the thread scheduler runs". It runs at distinct times in Windows, driven by the clock interrupt that wakes up the processor from the HALT state. This starts in the kernel; one primary task of the clock interrupt is to increment the clock value - the one that's used by, for example, DateTime.Now and Environment.TickCount.
The clock does not have infinite resolution, it only changes when the clock interrupt occurs. By default on all modern Windows versions, that clock interrupt occurs 64 times per second. Which makes the clock accuracy 1 / 64 = 15.625 milliseconds. You can clearly see this value back in the output of the program on my machine.
So what happened on your machine is that a program changed the clock interrupt rate. That is a rather unfortunate inheritance from Windows 3.1, the first Windows version that supported multi-media timers. Timers that can tick at a high rate to support programs that need to do things with media, like animating a GIF file, tune the frame rate of a video player, keep the audio card fed with sound without stutter or excessive latency. Programs like Chrome.
They do this by calling timeBeginPeriod(). They usually go whole-hog and pick the smallest allowable value, 1 millisecond. Apparently 2 msec on your machine. You can do this too, you'll see the Sleep(1) call now taking about 1 msec instead of 2. Don't forget to call timeEndPeriod() when you no longer need the high rate.
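If you want to try this yourself, both functions live in winmm.dll and can be P/Invoked from C#. A sketch; the "before" timing assumes a default-configured machine:
using System;
using System.Runtime.InteropServices;
using System.Threading;

class ClockRateDemo
{
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uPeriod);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uPeriod);

    static void Main()
    {
        timeBeginPeriod(1); // request a 1 msec clock interrupt rate
        try
        {
            var st = System.Diagnostics.Stopwatch.StartNew();
            Thread.Sleep(1);
            Console.WriteLine(st.Elapsed.TotalMilliseconds); // now ~1-2 msec instead of ~15.6
        }
        finally
        {
            timeEndPeriod(1); // always undo the request when you no longer need it
        }
    }
}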
But do keep in mind that this is a pretty unfriendly thing to do. Waking up the processor this often is very detrimental to battery life, always an issue on portable machines. Which explains what mystified this site's founding father in his blog post "Why does Windows have terrible battery life". It doesn't; Chrome has terrible battery life :) If you want to find out which program messed with the clock, you can run powercfg -energy from the command line.
I don't think it's weird to see this result. The Stopwatch itself probably takes a millisecond. I highly doubt you can expect a precise 1 millisecond; there is always overhead involved, and I doubt Sleep guarantees that the sleep time is that precise.
Personally I would expect a range of 1-5 milliseconds.
Thread.Sleep is designed to pause a thread for at least the number of milliseconds you specify. It basically gives up execution of the current thread, and it's up to the operating system's scheduler to wake it again. The thing is, you cannot be sure that the underlying OS's scheduler will allow the thread to resume immediately.
I think System.Threading.Thread.SpinWait is what you are looking for.
I have a small problem regarding threading in C#.
For some reason, my thread's delay drops from 32ms to 16ms when I open Chrome; when I close Chrome it goes back to 32ms. I'm using Thread.Sleep(1000 / 60) for the delay.
Can somebody explain why this is happening, and maybe suggest a possible solution?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ConsoleApplication2
{
    class Program
    {
        static bool alive;
        static Thread thread;
        static DateTime last;

        static void Main(string[] args)
        {
            alive = true;
            thread = new Thread(new ThreadStart(Loop));
            thread.Start();
            Console.ReadKey();
        }

        static void Loop()
        {
            last = DateTime.Now;
            while (alive)
            {
                DateTime current = DateTime.Now;
                TimeSpan span = current - last;
                last = current;
                Console.WriteLine("{0}ms", span.Milliseconds);
                Thread.Sleep(1000 / 60);
            }
        }
    }
}
Just a post to confirm Matthew's correct answer. The accuracy of Thread.Sleep() is affected by the clock interrupt rate on Windows. It by default ticks 64 times per second, once every 15.625 msec. A Sleep() can only complete when such an interrupt occurs. The mental image here is the one induced by the word "sleep", the processor is in fact asleep and not executing code. Only that clock interrupt is going to wake it up again to resume executing your code.
Your choice of 1000/60 was a very unhappy one: it asks for 16 msec, just a bit over 15.625, so you'll always wake back up at least 2 ticks later: 2 x 15.625 = 31 msec. That is what you measured.
That interrupt rate is however not fixed, it can be altered by a program. It does so by calling CreateTimerQueueTimer() or the legacy timeBeginPeriod(). A browser in general has a need to do so. Something simple as animating a GIF requires a better timer since GIF frame times are specified with a unit of 10 msec. Or in general any multi-media related operation needs it.
A very ugly side-effect of a program doing this is that this increased clock interrupt rate has system-wide effects. Like it did in your program. Your timer suddenly got accurate and you actually got the sleep duration you asked for, 16 msec. So Chrome is changing the rate to, probably, 1000 ticks per second. The maximum supported. And good for business when you have a competing operating system.
You can avoid this problem by picking a sleep duration that's a closer match to the default interrupt rate. If you ask for 15 then you'll get 15.625 and Chrome cannot have an effect on that. 31 is the next sweet spot. Etcetera, integer multiples of 15.625 and rounded down.
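As a small illustrative helper (my own sketch, not from the answer), rounding a requested duration down to whole clock ticks looks like this:
// Round a requested sleep down to a whole number of default clock ticks,
// so the actual duration no longer depends on other programs raising the rate.
static int SafeSleepDuration(double requestedMs)
{
    const double tickMs = 1000.0 / 64.0;          // 15.625 msec per clock interrupt
    int ticks = Math.Max(1, (int)(requestedMs / tickMs));
    return (int)(ticks * tickMs);                 // 15, 31, 46, 62, ...
}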
UPDATE: do note that this behavior changed just recently. Starting at Win10 version 2004, the effect is no longer global so Chrome can no longer affect your program.
Starting at Win11, an app with an inactive window operates with the default interrupt rate.
This is possibly occurring because Chrome (or some component of Chrome) is calling timeBeginPeriod() with a value that increases the resolution of the Windows API function Sleep(), which is called from Thread.Sleep().
See this thread for more information: Can I improve the resolution of Thread.Sleep?
I noticed this behavior with Windows Media Player some years ago: the behavior of one of our applications changed depending on whether Windows Media Player was running or not. It turned out WMP was calling timeBeginPeriod().
However, in general, Thread.Sleep() (and by extension, the Windows API Sleep()) is extremely inaccurate.
Basically, Thread.Sleep isn't very accurate.
Thread.Sleep(1000/60) (which evaluates to Thread.Sleep(16)), asks the thread to go to sleep and come back when 16ms has elapsed. However, that thread might not get to execute again until a greater amount of time has elapsed; say, for example, 32ms.
As for why Chrome is having an effect, I don't know but since Chrome spawns one new thread for each tab, it'll have an effect on the system's threading behaviour.
First, 1000 / 60 = 16 ms
The PC clock has a resolution of around 18-20ms, Sleep() and the result of DateTime.Now will be rounded to a multiple of that value.
So, Thread.Sleep(5) and Thread.Sleep(15) will delay for the same amount of time. And that can be 20, 40 or even 60 ms. You do not get much guarantees, the argument for Sleep() is only a minimum.
And another process (Chrome) that hogs the CPU (even a little) can influence the behavior of your program that way. Edit: that is the reverse of what you're seeing, so something slightly different is happening here. Still, it's about rounding to time slices.
You are hitting a resolution issue with DateTime. You should use Stopwatch for this kind of precision. Eric Lippert states that DateTime is only accurate to around 30 ms, so your readings with it in this case will not tell you anything.
Measurement is half of your problem. The actual time variation for your loop is due to Sleep resolution (as stated in the other answers).
In my application, I have used a number of System.Threading.Timer instances, set to fire every 1 second. The application executes the thread every second, but the millisecond at which it executes differs each time.
In my application I use an OPC server and an OPC group. One thread reads the data from the OPC server (for example, one variable changes its value and I want to log the moment of the change into my application every 1 s).
Another thread then reads this data from the first thread every 1 s and stores it in the MySQL database.
In this process, when I read the data from the first thread I get old data values: for example, reading the data at 10:28:01.530 gives me the information from 10:28:00.260. So I want to arrange these threads so that the first thread works at 000 milliseconds and the second thread works at 500 milliseconds: the first thread updates the data on the second, and the second thread reads it at 500 milliseconds.
My output is given below:
10:28:32.875
10:28:33.390
10:28:34.875
....
10:28:39.530
10:28:40.875
However, I want following results:
10:28:32.000
10:28:33.000
10:28:34.000
....
10:28:39.000
10:28:40.000
How can the timer be set so the callback is executed at "000 milliseconds"?
First of all, it's impossible. Even if you schedule your 'events' to fire a few milliseconds ahead of time and then compare the millisecond component of the current time with zero in a loop, control could be taken away from your code at any given moment.
You will have to rethink your design a little and not depend on when the event fires, but think of an algorithm that will compensate for the milliseconds of delay.
Also, you won't get much help from Threading.Timer; you would have a better chance using your own thread that periodically does the following:
check the current time and see how long it is until the next full second
Sleep() for that amount minus a 'spice' factor
do the work you have to do
You'll calculate your 'spice' factor depending on the results you are getting - whether the sleep finishes ahead of or behind schedule - as sketched below.
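A minimal sketch of that loop (running, DoWork and the spice value are placeholders; tune spiceMs from your own measurements):
const int spiceMs = 15; // tuning fudge factor, adjusted from observed results

while (running)
{
    // Sleep until just before the next full second, then do the work.
    int untilNextSecond = 1000 - DateTime.Now.Millisecond;
    Thread.Sleep(Math.Max(0, untilNextSecond - spiceMs));
    DoWork(); // hypothetical: whatever must happen at the second boundary
}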
If you give more information about your apparent need to have the event at exactly zero milliseconds, I could help you get rid of that requirement.
HTH
I would say that it's impossible. You have to understand that a context switch takes CPU time (if another process is running, you have to wait; the CPU scheduler is working). Each CPU tick takes some time, so synchronization to 0 milliseconds is impossible. Maybe by setting a high priority for your process you can get closer to 0, but you won't ever achieve it.
IMHO it is impossible to really get a timer to fire exactly every 1 sec (to the millisecond) - even in hardcore assembler this would be a very hard task on a normal Windows machine.
I think the first thing you need to do is set the right dueTime for the timer. I do it like this:
dueTime = 1000 - DateTime.Now.Millisecond + X; where X serves for accuracy and you need to pick it by testing. Each time a Threading.Timer ticks, it runs on a thread from the CLR thread pool, and as tests show, this thread is different each time. Creating threads slows the timer down; because of this you can use a WaitableTimer, which will always run on the same thread. Instead of a WaitableTimer you can use the Thread.Sleep method this way:
Thread.CurrentThread.Priority = ThreadPriority.Highest; // If time is really critical

Thread.Sleep(1000 - DateTime.Now.Millisecond + 50); // Make bound = 1 s
while (SomeBoolCondition)
{
    Thread.Sleep(980); // 1000 ms = 1 second, but some ms will be spent getting out of Sleep
    // Here you write your code
}
It will work faster than a timer.
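For the Threading.Timer variant, a sketch of re-arming with that dueTime on every tick might look like this (a one-shot timer that realigns itself; the +X tuning offset is omitted for brevity):
using System;
using System.Threading;

class SecondAlignedTimer
{
    static Timer _timer;

    static void Main()
    {
        // One-shot timer: period = Infinite, re-armed from the callback.
        _timer = new Timer(Tick, null, DueTime(), Timeout.Infinite);
        Console.ReadKey();
    }

    static int DueTime()
    {
        // Milliseconds until the next full second.
        return 1000 - DateTime.Now.Millisecond;
    }

    static void Tick(object state)
    {
        Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff"));
        _timer.Change(DueTime(), Timeout.Infinite); // realign for the next second
    }
}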
I have a WPF app that uses a DispatcherTimer to update a clock's hands every tick.
However, after my application has been running for approx 6 hours, the clock hands' angles no longer change. I have verified with the debugger that the DispatcherTimer is still firing and that the angle values are still updating; however, the screen render does not reflect the change.
I have also verified, using the WPFPerf tools' Visual Profiler, that Unlabeled Time, Tick (Time Manager) and AnimatedRenderMessageHandler (Media Content) are all gradually growing until they are consuming nearly 80% of the CPU; memory, however, is running stable.
The hHandRT.Angle is a reference to a RotateTransform
hHandRT = new RotateTransform(_hAngle);
This code works perfectly for approx 5 hours of straight running, but after that it lags and the angle change does not render to the screen. Any suggestions on how to troubleshoot this problem, or any possible solutions you may know of?
.NET 3.5, Windows Vista SP1 or Windows XP SP3 (both show the same behavior)
EDIT: Adding Clock Tick Function
// In constructor
...
_dt = new DispatcherTimer();
_dt.Interval = new TimeSpan(0, 0, 1);
_dt.Tick += new EventHandler(Clock_Tick);
...

private void Clock_Tick(object sender, EventArgs e)
{
    DateTime startTime = DateTime.UtcNow;
    TimeZoneInfo tst = TimeZoneInfo.FindSystemTimeZoneById(_timeZoneId);
    _now = TimeZoneInfo.ConvertTime(startTime, TimeZoneInfo.Utc, tst);

    int hoursInMinutes = _now.Hour * 60 + _now.Minute;
    int minutesInSeconds = _now.Minute * 60 + _now.Second;
    _hAngle = (double)hoursInMinutes * 360 / 720;
    _mAngle = (double)minutesInSeconds * 360 / 3600;
    _sAngle = (double)_now.Second * 360 / 60;

    // Use _sAngle to showcase more movement during testing.
    //hHandRT.Angle = _sAngle;
    hHandRT.Angle = _hAngle;
    mHandRT.Angle = _mAngle;
    sHandRT.Angle = _sAngle;

    //DSEffect
    // Add shadows to hands, creating a UNIFORM light
    //hands.Effect = textDropShadow;
}
Along the lines of too much happening in the clock tick, I'm currently trying this adjustment to see if it helps. Too bad it takes 5 hours for the bug to manifest itself :(
//DateTime startTime = DateTime.UtcNow;
//TimeZoneInfo tst = TimeZoneInfo.FindSystemTimeZoneById(_timeZoneId);
//_now = TimeZoneInfo.ConvertTime(startTime, TimeZoneInfo.Utc, tst);
_now = _now.AddSeconds(1);
You say you're creating an instance of the Clock class each time? Note that timers in .NET will root themselves to keep themselves from being garbage collected. They'll keep on firing until you stop them yourself, and they will keep your Clock objects alive because they are referenced in the timer tick event.
I think what's happening is that with each Clock you create, you start another timer. At first you only fire one event per second, but then you add another timer and get two per second, and they continue to accumulate in this way. Eventually you see your Tick handler and AnimatedRenderMessageHandler rising in CPU until they bog down and are unable to update your screen. That would also explain why increasing the frequency of the timer firings made your symptoms appear sooner.
The fix should be simple: just stop or dispose the DispatcherTimer when you are done with your Clock object.
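A sketch of that cleanup, assuming a teardown method of your own on the Clock class (the Dispose name here is just a convention):
// Hypothetical teardown for the Clock class: stop the timer and detach the
// handler so the DispatcherTimer no longer roots this instance.
public void Dispose()
{
    _dt.Stop();
    _dt.Tick -= Clock_Tick;
    _dt = null;
}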
You're assuming it's the DispatcherTimer and focusing totally on that. I personally have a hard time believing it has anything to do with the timer itself, but rather think it has to do with whatever you're doing within the timer tick. Can you tell us more about exactly what is going on each time the timer ticks?
hHandRT.Angle = _hAngle;
mHandRT.Angle = _mAngle;
sHandRT.Angle = _sAngle;
I believe you have to look at your code above once again.
You are setting the Angle property of all three transforms even when they don't need to change every second. The minute hand only changes once every 60 seconds and the hour hand once every 3600 seconds, so you can at least stop updating the hour angle on every tick.
What is happening here is that whenever you request transform changes, WPF queues the request into its priority dispatcher queue, and every second you are pushing more changes than it can process. That is the reason your CPU usage keeps increasing while memory stays stable.
Detailed Analysis:
After looking at your code, I feel your DispatcherTimer_Tick event does too much calculation. Remember that the Dispatcher thread is already overloaded with lots of things to do, like managing event routing, visual updates etc.; if you keep the CPU busy with custom tasks on the dispatcher thread, and on every one-second tick at that, it will definitely keep growing the queue of pending tasks.
You might think it's a small multiplication, but for the Dispatcher thread it can be costly when it comes to loading time zones, converting time values etc. You should profile and see the tick execution time.
You should use a System.Threading.Timer object, which runs on another thread; on every tick event, when you are done calculating the final angles required, you can pass them on to the Dispatcher thread,
like:
Dispatcher.BeginInvoke((Action)delegate()
{
    hHandRT.Angle = _hAngle;
    mHandRT.Angle = _mAngle;
    sHandRT.Angle = _sAngle;
});
By doing this, you will reduce the dispatcher thread's workload a little bit.