I'd like to use 3 or 4 C# Timers with an Interval that could be 40 ms (to work with image data: 1000 / 25 = 40).
According to MSDN this seems to be a good pattern for performing a task every 40 ms. The default interval is 100 ms.
In practice, I'd like to know whether 40 ms is still OK, or whether I should use a different threading design pattern. Is the wake-up/sleep behaviour close to CPU-free?
There is no special relevance to the 100 msec default; you can change it as needed.
You do need to pick your values carefully if you want an interval that's consistent from one machine to another. The accuracy of the Timer class is affected by the operating system's clock interrupt rate. On most Windows machines that interrupt occurs 64 times per second, i.e. every 15.625 milliseconds. There are machines with a higher rate; some go as low as 1 msec, usually as a side effect of another program changing the interrupt rate. The timeBeginPeriod() winapi function does this, and it has a global effect.
So the best intervals to pick are ones that sit just below a multiple of 15.625, so that your chosen interval repeats the same way on any machine. That makes the good choices:
15
31
46
62
etc.
Your best bet for aiming near 40 msec is therefore 46. It will be accurate to about 1.4% on any machine. I always pick 45 myself; nice round number.
Do beware that actual intervals can be arbitrarily longer if the machine is under heavy load or you have a lot of active threadpool threads in your program.
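As a rough illustration (a minimal sketch, not production code), you can check what interval a System.Timers.Timer actually delivers on your machine by timestamping the ticks with a Stopwatch; the 46 here is just the value suggested above:

    using System;
    using System.Diagnostics;
    using System.Timers;

    class TimerIntervalCheck
    {
        static void Main()
        {
            var stopwatch = Stopwatch.StartNew();
            long lastMs = 0;

            // Ask for 46 ms, i.e. just under 3 clock interrupts of 15.625 ms each.
            var timer = new Timer(46);
            timer.Elapsed += (sender, e) =>
            {
                long now = stopwatch.ElapsedMilliseconds;
                Console.WriteLine("Actual interval: {0} ms", now - lastMs);
                lastMs = now;
            };
            timer.AutoReset = true;
            timer.Start();

            Console.ReadLine();   // let it run; expect roughly 46.9 ms per tick on a 64 Hz clock
        }
    }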
I'm converting some code over from .NET Micro Framework which I was running on a Netduino. The code measures the frequency of a square-wave oscillator that has a maximum frequency of about 1000 Hz, or a period of about 1 millisecond. The application is a rain detector that varies its capacitance depending on how wet it is. Capacitance increases with wetness, which reduces the oscillator frequency.
On the Netduino, I used an InterruptPin. It's not a genuine interrupt but schedules a .NET event, and the EventArgs contains a timestamp of when the pin value changed. On the Netduino I could also configure whether the rising or the falling edge would trigger the event. I managed to get this working fairly well, and 1 kHz was approaching the maximum throughput that the Netduino could reliably measure.
On the Raspberry Pi, things don't go as well. It's running Windows 10 IoT Core, which admittedly is quite a different environment from the Netduino. I have a ValueChanged event that I can tap into, but there is no timestamp, and it fires twice as often because it gets triggered by both halves of the waveform. I hoped that, with its faster quad-core CPU, the Raspberry Pi might be able to cope with this, but in fact the best throughput I can get appears to be on the order of 1 event every 30 milliseconds - an order of magnitude worse than what I got on the Netduino - which means I'm falling a long way short of timing a 1 kHz square wave.
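For reference, this is roughly what I'm doing now - a trimmed-down sketch of the Windows.Devices.Gpio approach (the pin number is just an example), timestamping each ValueChanged event with a Stopwatch since the event itself carries no timestamp:

    using System;
    using System.Diagnostics;
    using Windows.Devices.Gpio;

    class FrequencyCounter
    {
        private GpioPin _pin;
        private readonly Stopwatch _clock = Stopwatch.StartNew();
        private long _lastTicks;

        public void Start()
        {
            var controller = GpioController.GetDefault();
            _pin = controller.OpenPin(5);               // example pin number
            _pin.SetDriveMode(GpioPinDriveMode.Input);
            _pin.ValueChanged += OnValueChanged;        // fires on both edges of the waveform
        }

        private void OnValueChanged(GpioPin sender, GpioPinValueChangedEventArgs args)
        {
            if (args.Edge != GpioPinEdge.RisingEdge)
                return;                                  // count only one edge per cycle

            long now = _clock.ElapsedTicks;
            double periodSeconds = (now - _lastTicks) / (double)Stopwatch.Frequency;
            _lastTicks = now;
            Debug.WriteLine("Approx. frequency: {0:F1} Hz", 1.0 / periodSeconds);
        }
    }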
So I'm looking for ideas. I've thought about slowing the oscillator down. The original circuit was running at around 1 MHz and I've added a lot of resistors to increase the time constant, bringing it down to around 1 KHz. I could go on adding resistors but there comes a point where it starts to get silly and I'm worried about component tolerances making the thing hard to calibrate.
It would be handy if the Raspberry Pi exposed some counter/timer functionality, but none of these 'maker' boards seem to do that, for some unfathomable reason.
One approach could be to use an A-to-D converter to somehow get a direct reading, but the electronics is a bit beyond me (hey, I'm a software guy!).
There is enough grunt in the Raspberry Pi that I ought to be able to get this to work! Has anyone found a way of getting faster throughput to the GPIO pins?
Suppose two machines are running the same code, but you want to offset the timing of the code being run so that there's no possibility of them running simultaneously, where by 'simultaneously' I mean running within 5 seconds of each other.
One could generate a random number of seconds prior to the start of the running code, but that may generate the same number.
Is there an algorithm to independently guarantee different random numbers?
In order to guarantee that the apps don't run at the same time, you need some sort of communication between the two. This could be as simple as someone setting a configuration value to run at a specific time (or delay by a set amount of seconds if you can guarantee they will start at the same time). Or it might require calling into a database (or similar) to determine when it is going to start.
It sounds like you're looking for a scheduler. You'd want a third service (the scheduler) which maintains when applications are supposed to/allowed to start. I would avoid having the applications talk directly to each other, as this will become a nightmare as your requirements become more complex (a third computer gets added, another program has to follow similar scheduling rules, etc.).
Have the programs send something unique (the MAC address of the machine, a GUID that only gets generated once and stored in a config file, etc.) to the scheduling service, and have it respond with how many seconds (if any) that program has to wait to begin its main execution loop. Or better yet, give the scheduler permissions on both machines to run the program at specified times.
You can't do this in pure isolation though. Let's say one program independently decides to wait 5 seconds and the other decides to wait 7 seconds - what happens when the counter for program 2 is started 2 seconds before program 1? A scheduler can take care of that for you.
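Purely as an illustration (the scheduler endpoint, its URL and its response format below are hypothetical, not an existing service), each program could send its identifier and wait for whatever delay the scheduler hands back:

    using System;
    using System.Net.Http;
    using System.Threading;

    class SchedulerClient
    {
        static void Main()
        {
            // Hypothetical scheduler service that hands out start delays per machine.
            string machineId = Environment.MachineName;   // or a GUID generated once and kept in config

            using (var http = new HttpClient())
            {
                string reply = http.GetStringAsync(
                    "http://scheduler.example.local/delay?machine=" + machineId).Result;

                int delaySeconds = int.Parse(reply);       // scheduler replies with seconds to wait
                Thread.Sleep(TimeSpan.FromSeconds(delaySeconds));
            }

            // ... main execution loop starts here ...
        }
    }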
As pointed out in the comments and the other answers, true randomness can't really guarantee that independently generated values fall into different ranges when the programs run in parallel.
Assuming your goal is to not run multiple processes at the same time, you can force each machine to pick a different time slot in which to run the process.
If you can get consensus between these machines on the current time and on each machine's "index", then you can run your program in assigned slots, with a possible random offset within the slot.
I.e. use a time service to synchronize the clocks (the default behaviour on most operating systems for machines connected to pretty much any network) and pre-assign sequential IDs to the machines (keeping track of the total count). Then let the machine with a given ID run in a time slot like the one below (assuming count < 60; otherwise adjust the start time based on the count, and leave enough slack to avoid overlaps when a small time drift occurs between synchronization intervals):
start of the hour + (ID * 1 minute) + random_offset(0, 30 seconds)
This way no communication between the machines is needed.
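A minimal sketch of that slot calculation (the machine ID is assumed to have been assigned ahead of time by whatever convention you choose):

    using System;
    using System.Threading;

    class SlotScheduler
    {
        static void Main()
        {
            int machineId = 2;        // pre-assigned sequential ID (0-based), total count < 60
            var random = new Random();

            // Slot start = top of the current hour + (ID minutes) + random offset of 0-30 s.
            DateTime now = DateTime.UtcNow;
            DateTime topOfHour = new DateTime(now.Year, now.Month, now.Day, now.Hour, 0, 0, DateTimeKind.Utc);
            DateTime slotStart = topOfHour
                .AddMinutes(machineId)
                .AddSeconds(random.Next(0, 30));

            if (slotStart < now)
                slotStart = slotStart.AddHours(1);     // this hour's slot already passed; wait for the next one

            Thread.Sleep(slotStart - now);
            // ... run the process here ...
        }
    }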
Have both apps read a local config file, wait the number of seconds specified, and then start running.
Put 0 in one and 6+ in the other; they'll not start within 5 seconds of each other. (Adjust the 6+ as necessary to cater for variations in machine load, speed, etc.)
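A minimal sketch of that, assuming a StartDelaySeconds setting in each machine's app.config (the key name is just an example):

    using System;
    using System.Configuration;
    using System.Threading;

    class DelayedStart
    {
        static void Main()
        {
            // app.config on machine A: <add key="StartDelaySeconds" value="0" />
            // app.config on machine B: <add key="StartDelaySeconds" value="6" />
            int delay = int.Parse(ConfigurationManager.AppSettings["StartDelaySeconds"]);

            Thread.Sleep(TimeSpan.FromSeconds(delay));
            // ... start the real work here ...
        }
    }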
Not really an algorithm, but you could give each app an array of numbers with no values in common and have it pick a number at random from its array before it starts.
What is the penalty for them running at the same time?
The reason I ask is that even if you offset the starting times, one could still start before the other has finished. If the data they process grows, that becomes more likely as time goes on and the 5-second rule becomes obsolete.
If they use the same resources, then it would be best to use those resources to tell you, e.g. set a flag in the database, or check whether there is enough memory available to run.
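For the database-flag idea, one possible sketch (assuming SQL Server; sp_getapplock is its built-in application lock, but any shared store with an atomic check would do - the resource name here is made up):

    using System.Data;
    using System.Data.SqlClient;

    class SharedResourceGuard
    {
        // Returns true if this process got the lock and may run; false if the other one holds it.
        // The connection must already be open; the lock is held for the lifetime of the session.
        static bool TryAcquireRunLock(SqlConnection connection)
        {
            using (var command = new SqlCommand("sp_getapplock", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@Resource", "MyBatchJob");
                command.Parameters.AddWithValue("@LockMode", "Exclusive");
                command.Parameters.AddWithValue("@LockOwner", "Session");
                command.Parameters.AddWithValue("@LockTimeout", 0);   // don't wait; fail immediately

                var result = command.Parameters.Add("@Result", SqlDbType.Int);
                result.Direction = ParameterDirection.ReturnValue;

                command.ExecuteNonQuery();
                return (int)result.Value >= 0;   // 0 or 1 = lock granted, negative = not granted
            }
        }
    }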
I don't understand the meaning of timer precision and resolution. Can anyone explain it to me?
NOTE: This question is related to Stopwatch.
Accuracy and precision are opposing goals; you can't get both. An example of a very accurate timing source is DateTime.UtcNow. It provides absolute time that's automatically corrected for clock-rate errors by the kernel, which uses a timing service to periodically re-calibrate the clock. You have probably heard of time.windows.com, the NTP server that most Windows PCs use. Very accurate: you can count on less than a second of error over an entire year. But not precise: the value only updates 64 times per second, so it is useless for timing anything that takes less than a second with any kind of decent precision.
The clock source for Stopwatch is very different. It uses a free-running counter that is driven by a frequency source available somewhere in the chipset. This used to be a dedicated crystal running at the color burst frequency (3.579545 MHz), but relentless cost cutting has eliminated that from most PCs. Stopwatch is very precise, as you can tell from its Frequency property: you should get something between a megahertz and the CPU clock frequency, allowing you to time down to a microsecond or better. But it is not accurate; it is subject to electronic component tolerances. Particularly mistrust any Frequency beyond a gigahertz, since that's derived from a multiplier, which also multiplies the error. And beware the Heisenberg principle: starting and stopping the Stopwatch takes non-zero overhead that will affect the accuracy of very short measurements. Another common accuracy problem with Stopwatch is the operating system switching out your thread to let other code run. You need to take multiple samples and use the median value.
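A small sketch of that "multiple samples, take the median" advice (the workload being timed is just a placeholder):

    using System;
    using System.Diagnostics;
    using System.Linq;

    class MedianTiming
    {
        static void Main()
        {
            const int sampleCount = 25;
            var samples = new double[sampleCount];

            for (int i = 0; i < sampleCount; i++)
            {
                var sw = Stopwatch.StartNew();
                WorkloadUnderTest();                       // placeholder for the code being measured
                sw.Stop();
                samples[i] = sw.Elapsed.TotalMilliseconds;
            }

            // The median is more robust than the mean against samples inflated by thread switches.
            double median = samples.OrderBy(s => s).ElementAt(sampleCount / 2);
            Console.WriteLine("Median: {0:F3} ms", median);
        }

        static void WorkloadUnderTest()
        {
            // Placeholder: something short enough that DateTime.UtcNow could not resolve it.
            double x = 0;
            for (int i = 0; i < 100000; i++) x += Math.Sqrt(i);
        }
    }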
They are the same as with any measurement. See this Wikipedia article for more details: http://en.wikipedia.org/wiki/Accuracy_and_precision
There are different types of timers in .NET (3 or 4 of them, if I remember correctly), each working with its own algorithm. The precision of a timer means how accurate it is in raising its tick events to the application. For example, if you set a timer to raise its tick event every 1000 ms, the precision of the timer means how close to the specified 1000 ms it will actually tick.
For more information (at least in C#), I suggest you read the MSDN page on timers.
From MSDN Stopwatch Class: (emphasis mine)
"The Stopwatch measures elapsed time by counting timer ticks in the underlying timer mechanism. If the installed hardware and operating system support a high-resolution performance counter, then the Stopwatch class uses that counter to measure elapsed time. Otherwise, the Stopwatch class uses the system timer to measure elapsed time. Use the Frequency and IsHighResolution fields to determine the precision and resolution of the Stopwatch timing implementation."
I don't have a great deal of experience with threads. I'm using .NET 4 and would like to use the .NET 4 threading features to solve this. Here is what I want to do.
I have a class with two methods, 'A' and 'B'. I want 'A' to call 'B' some number of times (like 100) every some number of milliseconds (like 3000). I want to record the average execution time of method 'B' when it's done executing its 100 (or whatever) times. The class will have some private properties to keep track of the total elapsed execution time of 'B' in order to calculate an average.
I'm not sure if method 'A' should call 'B' via a System.Timers.Timer thread (where the interval can be set, but not the number of times) or if there is a better (.NET 4) way of doing this.
Thanks very much.
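Something along these lines is roughly what I have in mind - a rough sketch only, with the timer type and names being my own guesses:

    using System;
    using System.Diagnostics;
    using System.Timers;

    class AverageTimer
    {
        private readonly Timer _timer = new Timer(3000);   // fire every 3000 ms
        private readonly Stopwatch _stopwatch = new Stopwatch();
        private int _callCount;
        private long _totalTicks;

        public void A()
        {
            _timer.Elapsed += (sender, e) =>
            {
                _stopwatch.Restart();
                B();
                _stopwatch.Stop();

                _totalTicks += _stopwatch.ElapsedTicks;
                if (++_callCount == 100)                    // stop after 100 calls and report
                {
                    _timer.Stop();
                    double averageMs = (_totalTicks / 100.0) * 1000.0 / Stopwatch.Frequency;
                    Console.WriteLine("Average execution time of B: {0:F3} ms", averageMs);
                }
            };
            _timer.Start();
        }

        private void B()
        {
            // ... the work being measured ...
        }
    }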
In reading over your question, I think the root question you have is about safely kicking off a set of events and timing their execution in a thread-safe manner. In your example, you are running 100 iterations every 3000ms. That means that at most each iteration should only take 30ms. Unfortunately, the System.Timers.Timer (which is System.Threading.Timer with a wrapper around it) is not that precise. Expect a precision of 10ms at best and possibly a lot worse. In order to get the 1ms precision you really need, you are going to need to tap into the native interop. Here is a quote I found on this:
The precision of multithreaded timers depends on the operating system, and is typically in the 10–20 ms region. If you need greater precision, you can use native interop and call the Windows multimedia timer. This has precision down to 1 ms and it is defined in winmm.dll. First call timeBeginPeriod to inform the operating system that you need high timing precision, and then call timeSetEvent to start a multimedia timer. When you’re done, call timeKillEvent to stop the timer and timeEndPeriod to inform the OS that you no longer need high timing precision. You can find complete examples on the Internet that use the multimedia timer by searching for the keywords dllimport winmm.dll timesetevent
-Joseph Albahari ( http://www.albahari.com/threading/part3.aspx )
If you follow his advice, you should get the precision you need.
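If you go that route, a hedged P/Invoke sketch of the multimedia timer might look like this (the 30 ms period matches the per-iteration budget mentioned above; keep a reference to the callback delegate alive so the garbage collector doesn't collect it while the timer is running):

    using System;
    using System.Runtime.InteropServices;

    class MultimediaTimer
    {
        // Native callback signature used by timeSetEvent.
        delegate void TimerCallback(uint id, uint msg, UIntPtr user, UIntPtr reserved1, UIntPtr reserved2);

        [DllImport("winmm.dll")] static extern uint timeBeginPeriod(uint period);
        [DllImport("winmm.dll")] static extern uint timeEndPeriod(uint period);
        [DllImport("winmm.dll")] static extern uint timeSetEvent(uint delay, uint resolution,
            TimerCallback callback, UIntPtr user, uint eventType);
        [DllImport("winmm.dll")] static extern uint timeKillEvent(uint timerId);

        const uint TIME_PERIODIC = 1;

        static TimerCallback _callback;   // keep the delegate alive while the timer runs

        static void Main()
        {
            _callback = (id, msg, user, r1, r2) => Console.WriteLine("tick");

            timeBeginPeriod(1);                                      // request 1 ms timing precision
            uint timerId = timeSetEvent(30, 1, _callback, UIntPtr.Zero, TIME_PERIODIC);

            Console.ReadLine();                                      // let it run

            timeKillEvent(timerId);
            timeEndPeriod(1);
        }
    }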
Debug.WriteLine("Timer is high-resolution: {0}", Stopwatch.IsHighResolution);
Debug.WriteLine("Timer frequency: {0}", Stopwatch.Frequency);
Result:
Timer is high-resolution: True
Timer frequency: 2597705
This article (from 2005!) mentions a Frequency of 3579545, a million more than mine. This blog post mentions a Frequency of 3,325,040,000, which is insane.
Why is my Frequency so much comparatively lower? I'm on an i7 920 machine, so shouldn't it be faster?
3,579,545 is the magic number. That's the frequency in Hertz before it is divided by 3 and fed into the 8253 timer chip in the original IBM PC. The odd-looking number wasn't chosen by accident; it is the frequency of the color burst signal in the NTSC TV system used in the US and Japan. The IBM engineers were looking for a cheap crystal to implement the oscillator, and nothing was cheaper than the one used in every TV set.
Once IBM clones became widely available, it was still important for their designers to choose the same frequency. A lot of MS-DOS software relied on the timer ticking at that rate. Directly addressing the chip was a common crime.
That changed once Windows came around. A version of Windows 2 was the first to virtualize the timer chip. In other words, software wasn't allowed to address the timer chip directly anymore. The processor was configured to run in protected mode, so the attempt to use the I/O instruction was intercepted and kernel code ran instead, allowing the return value of the instruction to be faked. It was now possible to have multiple programs using the timer without them stepping on each other's toes, an important first step in breaking the dependency on how the hardware is actually implemented.
The Win32 API (Windows NT 3.1 and Windows 95) formalized access to the timer with an API: QueryPerformanceCounter() and QueryPerformanceFrequency(). A kernel-level component, the Hardware Abstraction Layer, allows the BIOS to pass that frequency on to Windows. That made it possible for hardware designers to really drop the dependency on the exact frequency. It took a long time, by the way; around 2000 the vast majority of machines still had the legacy rate.
But the never-ending quest to cut costs in PC design put an end to that. Nowadays the hardware designer just picks whatever frequency happens to be readily available in the chipset. 3,325,040,000 would be such a number; it is most probably the CPU clock rate. High frequencies like that are common in cheap designs, especially the ones with an AMD core. Your number is pretty unusual; odds are your machine wasn't cheap, and that its timer is a lot more accurate, since CPU clocks have typical electronic component tolerances.
The frequency depends on the HAL (hardware abstraction layer). Back in the Pentium days, it was common to use the CPU tick counter (which is based on the CPU clock rate), so you ended up with really high-frequency timers.
With multi-processor and multi-core machines, and especially with variable-rate CPUs (the CPU clock slows down in low-power states), using the CPU tick as the timer becomes difficult and error prone, so the writers of the HAL seem to have chosen a slower but more reliable hardware clock, like the real-time clock.
The Stopwatch.Frequency value is per second, so your frequency of 2,597,705 means you have more than 2.5 million ticks per second. Exactly how much precision do you need?
As for the variations in frequency, that is a hardware-dependent thing. Some of the most common hardware differences are the number of cores, the frequency of each core, the current power state of your cpu (or cores), whether you have enabled the OS to dynamically adjust the cpu frequency, etc. Your frequency will not always be the same, and depending on what state your cpu is in when you check it, it may be lower or higher, but generally around the same (for you, probably around 2.5 million.)
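To put that number in perspective, a quick sketch converting raw Stopwatch ticks into time units using the reported Frequency:

    using System;
    using System.Diagnostics;

    class TickResolution
    {
        static void Main()
        {
            // Smallest measurable interval = 1 / Frequency seconds.
            double nanosecondsPerTick = 1e9 / Stopwatch.Frequency;
            Console.WriteLine("One tick = {0:F1} ns", nanosecondsPerTick);   // ~385 ns at 2,597,705 Hz

            var sw = Stopwatch.StartNew();
            // ... code being measured ...
            sw.Stop();

            double microseconds = sw.ElapsedTicks * 1e6 / Stopwatch.Frequency;
            Console.WriteLine("Elapsed: {0:F1} us", microseconds);
        }
    }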
I think 2,597,705 corresponds to your processor frequency. Mine is 2,737,822 (i7 930).