I have a code snippet like this:
while (true)
{
    myStopWatch.Start();
    DoMyJob();
    myStopWatch.Stop();
    FPS = 1000 / myStopWatch.Elapsed.TotalMilliseconds;
    myStopWatch.Reset();
}
which works pretty well; I get an FPS of around 100 (+/- 2). But sometimes I want to focus on the performance of a certain part of DoMyJob() and add some feedback, so I split DoMyJob() into DoMyJob_1() and DoMyJob_2(). The first part is mainly calculation; the second part visualizes the calculation on the Form and updates some indicators.
So the code becomes:
while (true)
{
    myStopWatch.Start();
    DoMyJob_1();
    myStopWatch.Stop();
    FPS = 1000 / myStopWatch.Elapsed.TotalMilliseconds;
    myStopWatch.Reset();
    DoMyJob_2();
}
I did not expect anything to mess up the FPS, since DoMyJob_1 is almost the same as the original DoMyJob. But it did: the FPS goes crazy, bouncing between 40 and 600 in a seemingly random manner. When I removed DoMyJob_2(), the FPS went back to a steady 100.
When I examined the FPS sequence more closely, I found the values are not random at all - they fall into 4 or 5 distinct ranges; in my case 30-50, 100-120, 300-360, 560-600, etc. Not a single number falls into the gaps. I then tried the code on another laptop and the issue still exists, just with different ranges. I know Stopwatch uses the Win32 API. Is it buggy because I run the code on a 64-bit system?
BTW: what is the best way to measure FPS in a .NET Windows Forms app (say, when the FPS is 100 or more)?
If DoMyJob_2 takes a variable amount of time, then you have a slice of every loop iteration that is not being taken into account. You could use your method to calculate an average time to execute DoMyJob_1, but not to determine frames per second. For example:
loop 1:
task 1: 5ms
reported fps: 1000/5ms = 200
task 2: 15ms
real fps: 1000/20ms = 50
loop 2:
task 1: 5ms
reported fps: 1000/5ms = 200
task 2: 25ms
real fps: 1000/30ms = 33
...
So I'm not sure that's what you are seeing, but it seems possible. What you are describing (fluctuating reported fps) might actually make more sense if the total length of the job tends to be stable, but the way you split the job makes each part variable.
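One way to get both numbers, sketched against the hypothetical DoMyJob_1/DoMyJob_2 methods from the question: time the part under investigation with one stopwatch and the whole frame with another, and derive the FPS only from the whole-frame time.

var frameWatch = new System.Diagnostics.Stopwatch();
var partWatch = new System.Diagnostics.Stopwatch();

while (true)
{
    frameWatch.Restart();

    partWatch.Restart();
    DoMyJob_1();                    // the part being profiled
    partWatch.Stop();

    DoMyJob_2();                    // visualization / indicator updates
    frameWatch.Stop();

    double job1Ms = partWatch.Elapsed.TotalMilliseconds;    // feedback on the calculation step
    FPS = 1000.0 / frameWatch.Elapsed.TotalMilliseconds;    // FPS comes from the full frame time
}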
Related
I have an application which monitors a particular event and starts to calculate things once it happens. Events are irregular and can arrive in any pattern, from several per second to none for a long time.
I want to measure %% of time the application is busy (similar to CPU % Usage)
I want to use Timer100Ns counter
Two questions:
Do I increment it by hardware ticks or by DateTime ticks (e.g. if I use Stopwatch - do I use sw.ElapsedTicks or sw.Elapsed.Ticks) ?
Do I need a base counter for it?
So I am about to write something like this:
Stopwatch sw = new Stopwatch();
sw.Start();
// Do some operation which is irregular by nature
sw.Stop();
// Measure utilization of the application
myCounterOfTypeTimer100Ns.IncrementBy(sw.Elapsed.Ticks);
Will it do?
EDIT: I experimented with it a bit and now it's even more confusing: it actually shows the values I increment it by, not a percentage.
The mystery unravelled. It appears I wasn't using it the way it was supposed to be used (or rather, I didn't read TFM properly). If the sampling interval is 1 s (as in the perfmon live window) and your intervals are longer than 1 s, it shows you a nonsense number. To get a smooth reading, the activity you are trying to measure really must take fractions of a second; otherwise this counter is not a good idea.
The answer for this kind of problem (not obvious, and a bit disturbing that nobody suggested it in a week) is actually SampleCounter.
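For reference, a minimal sketch of how the Timer100Ns approach from the original snippet would be wired up; the category and counter names are made up, and creating a category normally needs elevated rights. On the two original questions: use sw.Elapsed.Ticks (TimeSpan ticks are 100 ns units, which is what Timer100Ns expects; sw.ElapsedTicks is in raw Stopwatch-frequency units), and to my knowledge Timer100Ns needs no separate base counter, since its denominator is elapsed wall-clock time.

using System.Diagnostics;

// Hypothetical names; the category must be created once, with sufficient privileges.
const string category = "MyApp";
const string counterName = "Busy Time %";

if (!PerformanceCounterCategory.Exists(category))
{
    var counters = new CounterCreationDataCollection
    {
        new CounterCreationData(counterName, "Fraction of time spent handling events",
                                PerformanceCounterType.Timer100Ns)
    };
    PerformanceCounterCategory.Create(category, "Example counters",
                                      PerformanceCounterCategoryType.SingleInstance, counters);
}

var busyCounter = new PerformanceCounter(category, counterName, readOnly: false);

var sw = Stopwatch.StartNew();
// ... the irregular work ...
sw.Stop();

// TimeSpan ticks are 100 ns units, matching what Timer100Ns expects.
busyCounter.IncrementBy(sw.Elapsed.Ticks);

As the EDIT above notes, this only produces a sensible percentage when the measured activity is much shorter than perfmon's sampling interval.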
I have a small problem regarding threading in C#.
For some reason, my thread's delay drops from 32 ms to 16 ms when I open Chrome, and when I close Chrome it goes back to 32 ms. I'm using Thread.Sleep(1000 / 60) for the delay.
Can somebody explain why this is happening, and maybe suggest a possible solution?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ConsoleApplication2
{
    class Program
    {
        static bool alive;
        static Thread thread;
        static DateTime last;

        static void Main(string[] args)
        {
            alive = true;
            thread = new Thread(new ThreadStart(Loop));
            thread.Start();
            Console.ReadKey();
        }

        static void Loop()
        {
            last = DateTime.Now;
            while (alive)
            {
                DateTime current = DateTime.Now;
                TimeSpan span = current - last;
                last = current;
                Console.WriteLine("{0}ms", span.Milliseconds);
                Thread.Sleep(1000 / 60);
            }
        }
    }
}
Just a post to confirm Matthew's correct answer. The accuracy of Thread.Sleep() is affected by the clock interrupt rate on Windows. By default it ticks 64 times per second, once every 15.625 msec. A Sleep() can only complete when such an interrupt occurs. The mental image here is the one induced by the word "sleep": the processor is in fact asleep and not executing code. Only that clock interrupt is going to wake it up again to resume executing your code.
Your choice of 1000/60 was a very unhappy one; it asks for 16 msec, just a bit over 15.625, so you'll always wake back up at least 2 ticks later: 2 x 15.625 = 31 msec. Which is what you measured.
That interrupt rate is however not fixed, it can be altered by a program. It does so by calling CreateTimerQueueTimer() or the legacy timeBeginPeriod(). A browser in general has a need to do so. Something as simple as animating a GIF requires a better timer, since GIF frame times are specified with a unit of 10 msec. Or in general any multimedia-related operation needs it.
A very ugly side-effect of a program doing this is that this increased clock interrupt rate has system-wide effects. Like it did in your program. Your timer suddenly got accurate and you actually got the sleep duration you asked for, 16 msec. So Chrome is changing the rate to, probably, 1000 ticks per second. The maximum supported. And good for business when you have a competing operating system.
You can avoid this problem by picking a sleep duration that's a closer match to the default interrupt rate. If you ask for 15 then you'll get 15.625 and Chrome cannot have an effect on that. 31 is the next sweet spot. Etcetera, integer multiples of 15.625 and rounded down.
UPDATE: do note that this behavior changed just recently. Starting at Win10 version 2004, the effect is no longer global so Chrome can no longer affect your program.
Starting at Win11, an app with an inactive window operates with the default interrupt rate.
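If you actually want the finer rate regardless of what other programs do, you can request it yourself with the same multimedia timer API; a rough P/Invoke sketch (1 ms requested here; keep in mind that on older Windows versions this raises the interrupt rate system-wide, which is exactly the side effect described above):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class HighResSleepDemo
{
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uMilliseconds);

    static void Main()
    {
        timeBeginPeriod(1);             // ask for a 1 ms clock interrupt rate
        try
        {
            Thread.Sleep(1000 / 60);    // now wakes after roughly 16 ms instead of ~31 ms
        }
        finally
        {
            timeEndPeriod(1);           // always undo the request when you're done
        }
    }
}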
This is possibly occurring because Chrome (or some component of Chrome) is calling timeBeginPeriod() with a value that increases the resolution of the Windows API function Sleep(), which is called from Thread.Sleep().
See this thread for more information: Can I improve the resolution of Thread.Sleep?
I noticed this behavior with Windows Media Player some years ago: The behavior of one of our applications changed depending on whether Windows Media Player was running or not. It turned out, WMP was calling timeBeginPeriod().
However, in general, Thread.Sleep() (and by extension, the Windows API Sleep()) is extremely inaccurate.
Basically, Thread.Sleep isn't very accurate.
Thread.Sleep(1000/60) (which evaluates to Thread.Sleep(16)), asks the thread to go to sleep and come back when 16ms has elapsed. However, that thread might not get to execute again until a greater amount of time has elapsed; say, for example, 32ms.
As for why Chrome is having an effect, I don't know, but since Chrome spawns a separate process (with its own threads) for each tab, it can affect the system's scheduling behaviour.
First, 1000 / 60 = 16 ms
The PC clock typically has a resolution of around 15-20 ms; Sleep() and the result of DateTime.Now are effectively rounded to a multiple of that value.
So, Thread.Sleep(5) and Thread.Sleep(15) will delay for the same amount of time, and that can be 20, 40 or even 60 ms. You don't get many guarantees; the argument to Sleep() is only a minimum.
And another process (Chrome) that hogs the CPU (even a little) can influence the behavior of your program that way. Edit: that is the reverse of what you're seeing, so something slightly different is happening here. Still, it's about rounding to timeslices.
You are hitting a resolution issue with DateTime. You should use Stopwatch for this kind of precision. Eric Lippert states that DateTime is only accurate to around 30 ms, so your readings with it in this case will not tell you anything.
Measurement is half of your problem. The actual time variation for your loop is due to Sleep resolution (as stated in the other answers).
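A minimal rework of the measuring part of the question's loop using Stopwatch instead of DateTime (reusing the alive flag from the question; the Sleep itself is still quantized as the other answers explain):

static void Loop()
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    while (alive)
    {
        // TotalMilliseconds avoids both DateTime's coarse resolution and the
        // truncation of TimeSpan.Milliseconds to a whole-number component.
        double elapsedMs = sw.Elapsed.TotalMilliseconds;
        sw.Restart();
        Console.WriteLine("{0:F1}ms", elapsedMs);
        Thread.Sleep(1000 / 60);
    }
}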
In my application I have used a number of System.Threading.Timer instances, set to fire every second. The callbacks do run every second, but the millisecond component of the execution time differs each time.
In my application I use an OPC server and an OPC group. One thread reads data from the OPC server (a variable keeps changing its value, and I want to log the changed value in my application every second).
Another thread reads this data from the first thread every second and stores it in a MySQL database.
In this process, when I read the data from the first thread I get stale values: for example, reading at 10:28:01.530 gives me the information from 10:28:00.260. So I want to coordinate the threads so that the first thread does its work at 000 milliseconds and the second thread at 500 milliseconds: the first thread updates the data on the full second, and the second thread reads it at the half second.
My output is given below:
10:28:32.875
10:28:33.390
10:28:34.875
....
10:28:39.530
10:28:40.875
However, I want following results:
10:28:32.000
10:28:33.000
10:28:34.000
....
10:28:39.000
10:28:40.000
How can the timer be set so the callback is executed at "000 milliseconds"?
First of all, it's impossible. Even if you schedule your 'events' to fire a few milliseconds ahead of time and then compare the millisecond component of the current time with zero in a loop, control can be taken away from your code at any given moment.
You will have to rethink your design a little: don't depend on exactly when the event fires, but think of an algorithm that compensates for the milliseconds of delay.
Also, Threading.Timer won't help you much here; you would have a better chance using your own thread which, periodically:
checks the current time and sees how long it is until the next full second
Sleep()s for that amount minus the 'spice' factor
does the work you have to do.
You calculate your 'spice' factor depending on the results you are getting - whether the sleep finishes ahead of or behind schedule.
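A minimal sketch of that loop; the spiceMs offset and the running/DoWork names are placeholders you would tune and fill in yourself:

const int spiceMs = 15;                        // hypothetical tuning offset, adjust from observed drift

while (running)
{
    // How far into the current second are we, and how long until the next full second?
    int msIntoSecond = DateTime.Now.Millisecond;
    int msToWait = 1000 - msIntoSecond - spiceMs;
    if (msToWait > 0)
        Thread.Sleep(msToWait);

    DoWork();                                  // runs close to the .000 boundary, never exactly on it
}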
If you give more information about your apparent need to have the event at exactly zero milliseconds, I could help you get rid of that requirement.
HTH
I would say that it's impossible. You have to understand that a CPU context switch takes time (if another process is running, you have to wait - the CPU scheduler is working). Each CPU tick takes some time, so synchronization to 0 milliseconds is impossible. Maybe by setting a high priority on your process you can get closer to 0, but you will never achieve it exactly.
IMHO it is impossible to really get a timer to fire exactly every second (on the millisecond) - even in hardcore assembler this would be a very hard task on a normal Windows machine.
I think the first thing you need to do is set the right dueTime for the timer. I do it like this:
dueTime = 1000 - DateTime.Now.Millisecond + X; where X is there for accuracy and you need to select it by testing. Each time a Threading.Timer ticks it runs on a thread from the CLR thread pool and, as tests show, this thread is different each time. Creating threads slows the timer down, so you can use a WaitableTimer instead, which will always run on the same thread. Instead of a WaitableTimer you can use the Thread.Sleep method in this way:
Thread.CurrentThread.Priority = ThreadPriority.Highest; // if timing is really critical
Thread.Sleep(1000 - DateTime.Now.Millisecond + 50);     // align roughly to the 1 s boundary
while (SomeBoolCondition)
{
    Thread.Sleep(980); // 1000 ms = 1 second, but some ms will be spent exiting Sleep
    // Here you write your code
}
It will work faster than a timer.
Ideally I would like to have something similar to the Stopwatch class, but with an extra property called Speed which would determine how quickly the timer's minutes advance. I am not quite sure how I would go about implementing this.
Edit
Since people don't quite seem to understand why I want to do this, consider playing a soccer game, or any sports game. The halves are measured in minutes, but the time frame in which the game is played is significantly shorter, i.e. a 45-minute half is played in about 2.5 minutes.
Subclass it, call through to the superclass methods to do their usual work, but multiply all the return values by Speed as appropriate.
I would use the Stopwatch as it is, then just multiply the result, for example:
var Speed = 1.2; //Time progresses 20% faster in this example
var s = new Stopwatch();
s.Start();
//do things
s.Stop();
var parallelUniverseMilliseconds = s.ElapsedMilliseconds * Speed;
The reason the simple "multiplication" approach doesn't work is that it doesn't speed up the passing of time: the factor is applied to all time that has already passed, as well as to time that is passing.
So, if you set your speed factor to 3 and then wait 10 minutes, your clock will correctly read 30 minutes. But if you then change the factor to 2, your clock will immediately read 20 minutes because the multiplication is applied to time already passed. That's obviously not correct.
I don't think the stopwatch is the class you want to measure "system time" with. I think you want to measure it yourself, and store elapsed time in your own variable.
Assuming that your target project really is a game, you will likely have your "game loop" somewhere in code. Each time through the loop, you can use a regular stopwatch object to measure how much real-time has elapsed. Multiply that value by your speed-up factor and add it to a separate game-time counter. That way, if you reduce your speed factor, you only reduce the factor applied to passing time, not to the time you've already recorded.
You can wrap all this behaviour into your own stopwatch class if need be. If you do that, then I'd suggest that you calculate/accumulate the elapsed time both "every time it's requested" and "every time the factor is changed." So you have a class something like this (note that I've skipped the field declarations and some simple private methods for brevity - this is just a rough idea):
public class SpeedyStopwatch
{
    // This is the time that your game/system will run from
    public TimeSpan ElapsedTime
    {
        get
        {
            CalculateElapsedTime();
            return this._elapsedTime;
        }
    }

    // This can be set to any value to control the passage of time
    public double TimeFactor
    {
        get { return this._timeFactor; }
        set
        {
            CalculateElapsedTime();
            this._timeFactor = value;
        }
    }

    private void CalculateElapsedTime()
    {
        // Find out how long (real-time) since we last called the method
        TimeSpan lastTimeInterval = GetElapsedTimeSinceLastCalculation();

        // Multiply this time by our factor
        lastTimeInterval = TimeSpan.FromTicks((long)(lastTimeInterval.Ticks * this._timeFactor));

        // Add the multiplied time to our elapsed time
        this._elapsedTime += lastTimeInterval;
    }
}
According to modern physics, what you need to do to make your timer go "faster" is to speed up the computer that your software is running on. I don't mean the speed at which it performs calculations, but its physical speed. The closer you get to the speed of light (the constant c), the greater the rate at which time passes for your computer, so as you approach the speed of light, time will "speed up" for you.
It sounds like what you might actually be looking for is an event scheduler, where you specify that certain events must happen at specific points in simulated time and you want to be able to change the relationship between real time and simulated time (perhaps dynamically). You can run into boundary cases when you start to change the speed of time in the process of running your simulation and you may also have to deal with cases where real time takes longer to return than normal (your thread didn't get a time slice as soon as you wanted, so you might not actually be able to achieve the simulated time you're targeting.)
For instance, suppose you wanted to update your simulation at least once per 50ms of simulated time. You can implement the simulation scheduler as a queue where you push events and use a scaled output from a normal Stopwatch class to drive the scheduler. The process looks something like this:
Push (simulate at t=0) event to event queue
Start stopwatch
lastTime = 0
simTime = 0
While running
    simTime += scale * (stopwatch.Time - lastTime)
    lastTime = stopwatch.Time
    While events in queue that have passed their time
        pop and execute event
        push (simulate at t = lastEventT + dt) event to event queue
This can be generalized to different types of events occurring at different intervals. You still need to deal with the boundary case where the event queue is ballooning because the simulation can't keep up with real time.
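A rough C# sketch of that loop under the question's numbers (45 simulated minutes played in 2.5 real minutes, so scale = 18, with a 50 ms simulated step); the class and variable names are invented, and a real scheduler with mixed event types would use a priority queue rather than a plain queue:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class ScaledScheduler
{
    static void Main()
    {
        double scale = 18.0;                              // 45 simulated minutes in 2.5 real minutes
        TimeSpan step = TimeSpan.FromMilliseconds(50);    // simulate at least once per 50 ms sim-time

        int simulationSteps = 0;
        var events = new Queue<(TimeSpan due, Action action)>();
        events.Enqueue((TimeSpan.Zero, () => simulationSteps++));

        var stopwatch = Stopwatch.StartNew();
        TimeSpan lastReal = TimeSpan.Zero;
        TimeSpan simTime = TimeSpan.Zero;

        while (simTime < TimeSpan.FromMinutes(45))
        {
            // Advance simulated time by the scaled real time since the last pass.
            TimeSpan nowReal = stopwatch.Elapsed;
            simTime += TimeSpan.FromTicks((long)((nowReal - lastReal).Ticks * scale));
            lastReal = nowReal;

            // Run every event whose simulated due time has passed, then reschedule it one step later.
            while (events.Count > 0 && events.Peek().due <= simTime)
            {
                var (due, action) = events.Dequeue();
                action();
                events.Enqueue((due + step, action));
            }

            Thread.Sleep(1);   // yield; a real game loop would do its other work here
        }

        Console.WriteLine("ran {0} simulation steps", simulationSteps);
    }
}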
I'm not entirely sure what you're looking to do (doesn't a minute always have 60 seconds?), but I'd utilize Thread.Sleep() to accomplish what you want.
I'm trying to determine the "beats per minute" from real-time audio in C#. It is not music that I'm detecting, though, just a constant tapping sound. My problem is determining the time between those taps so I can determine "taps per minute". I have tried using the WaveIn.cs class that's out there, but I don't really understand how it samples. I'm not getting a set number of samples a second to analyze. I guess I really just don't know how to read in an exact number of samples per second to know the time between two samples.
Any help to get me in the right direction would be greatly appreciated.
I'm not sure which WaveIn.cs class you're using, but usually with code that records audio, you either A) tell the code to start recording, and then at some later point you tell the code to stop, and you get back an array (usually of type short[]) that comprises the data recorded during this time period; or B) tell the code to start recording with a given buffer size, and as each buffer is filled, the code makes a callback to a method you've defined with a reference to the filled buffer, and this process continues until you tell it to stop recording.
Let's assume that your recording format is 16 bits (aka 2 bytes) per sample, 44100 samples per second, and mono (1 channel). In the case of (A), let's say you start recording and then stop recording exactly 10 seconds later. You will end up with a short[] array that is 441,000 (44,100 x 10) elements in length. I don't know what algorithm you're using to detect "taps", but let's say that you detect taps in this array at element 0, element 22,050, element 44,100, element 66,150 etc. This means you're finding taps every .5 seconds (because 22,050 is half of 44,100 samples per second), which means you have 2 taps per second and thus 120 BPM.
In the case of (B) let's say you start recording with a fixed buffer size of 44,100 samples (aka 1 second). As each buffer comes in, you find taps at element 0 and at element 22,050. By the same logic as above, you'll calculate 120 BPM.
Hope this helps. With beat detection in general, it's best to record for a relatively long time and count the beats through a large array of data. Trying to estimate the "instantaneous" tempo is more difficult and prone to error, just like estimating the pitch of a recording is more difficult to do in realtime than with a recording of a full note.
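A hedged sketch of case (A): given the whole recorded buffer, find taps with a naive amplitude threshold (a stand-in for whatever detection method you actually use) and turn the average spacing between them into taps per minute. The names, the threshold value, and the 100 ms re-trigger window are all made up.

using System;
using System.Collections.Generic;

static class TapDetector
{
    // buffer: 16-bit mono samples recorded at sampleRate samples per second.
    public static double TapsPerMinute(short[] buffer, int sampleRate)
    {
        const short threshold = 20000;            // naive amplitude threshold, tune for your input
        int refractorySamples = sampleRate / 10;  // ignore re-triggers within 100 ms of a tap

        var tapPositions = new List<int>();
        int lastTap = -refractorySamples;
        for (int i = 0; i < buffer.Length; i++)
        {
            if (Math.Abs((int)buffer[i]) > threshold && i - lastTap >= refractorySamples)
            {
                tapPositions.Add(i);
                lastTap = i;
            }
        }

        if (tapPositions.Count < 2)
            return 0;                             // not enough taps to measure an interval

        // Average spacing in samples between consecutive taps.
        double avgSpacing = (tapPositions[tapPositions.Count - 1] - tapPositions[0])
                            / (double)(tapPositions.Count - 1);
        double secondsPerTap = avgSpacing / sampleRate;
        return 60.0 / secondsPerTap;              // taps per minute
    }
}

With the example above (taps at samples 0, 22,050, 44,100 and 66,150 at 44,100 samples/second) this returns 120.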
I think you might be confusing samples with "taps."
A sample is a number representing the height of the sound wave at a given moment in time. A typical wave file might be sampled 44,100 times a second, so if you have two channels for stereo, you have 88,200 sixteen-bit numbers (samples) per second.
If you take all of these numbers and graph them, you get a waveform (the original answer illustrated this with a waveform image from vbaccelerator.com). The tap is the sharp peak that stands out from the rest of that waveform.
Assuming we're talking about the same WaveIn.cs, the constructor of WaveLib.WaveInRecorder takes a WaveLib.WaveFormat object as a parameter. This allows you to set the audio format, i.e. sample rate, bit depth, etc. Just scan the audio samples for peaks (or however you're detecting "taps") and record the average distance in samples between peaks.
Since you know the sample rate of the audio stream (e.g. 44,100 samples/second), take your average peak distance (in samples), multiply by 1/(sample rate) to get the time (in seconds) between taps, divide by 60 to get the time (in minutes) between taps, and invert to get taps per minute.
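The same chain of steps in code, assuming you already have the average peak spacing in samples (both values below are placeholders):

double avgPeakDistanceSamples = 22050;   // e.g. taps half a second apart
double sampleRate = 44100;               // samples per second

double secondsBetweenTaps = avgPeakDistanceSamples / sampleRate;   // 0.5 s
double minutesBetweenTaps = secondsBetweenTaps / 60.0;             // divide by 60
double tapsPerMinute = 1.0 / minutesBetweenTaps;                   // invert: 120 taps/minute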
Hope that helps