Process.TotalProcessorTime exceeds the actual time passing - c#

I want to calculate the average CPU usage % between two points of time.
I use the ratio between (t1 - t0) and (Process.TotalProcessorTime1 - Process.TotalProcessorTime0),
where t is the value of DateTime.Now at each point.
But sometimes, when the computer is busy, the TotalProcessorTime difference (in ticks) is larger than the actual time (in ticks) that passed, so my CPU % exceeds 100.
How can that be?
long currentLogTime = DateTime.Now.Ticks;
long currentCpuUsageTime = _process.TotalProcessorTime.Ticks;
long timeDiff = currentLogTime - m_LastLogTime;
if (timeDiff != 0)
{
    cpuUsage = (currentCpuUsageTime - m_LastCPUUsageTime) * 100 / timeDiff;
}

If a single process uses more than one processor, it can accumulate processor time faster than real time passes.

DateTime.Now is measured with a lower resolution than TotalProcessorTime.
You'll need to use a high-resolution timer to measure the elapsed time. Consider using a Stopwatch instance for this purpose.
Use the StartNew method when you start a new "log period":
m_stopwatch = Stopwatch.StartNew();
Then simply read the Elapsed.Ticks property to determine how much time has elapsed:
long timeDiff = m_stopwatch.Elapsed.Ticks;
(Note: use Elapsed.Ticks rather than ElapsedTicks here. ElapsedTicks is measured in units of Stopwatch.Frequency, which are generally not the same as the 100-nanosecond ticks used by TimeSpan and TotalProcessorTime.)
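Putting both answers together, a minimal sketch of the corrected measurement (field names mirror the question's code; dividing by Environment.ProcessorCount is what prevents the multi-core overshoot, and Elapsed.Ticks keeps the units consistent with TotalProcessorTime.Ticks):

```csharp
using System;
using System.Diagnostics;

class CpuSampler
{
    private readonly Process _process = Process.GetCurrentProcess();
    private Stopwatch m_stopwatch = Stopwatch.StartNew();
    private TimeSpan m_lastCpuUsageTime;

    public double SampleCpuPercent()
    {
        TimeSpan cpuNow = _process.TotalProcessorTime;
        long wallTicks = m_stopwatch.Elapsed.Ticks;  // 100 ns units, same as TimeSpan
        double cpuUsage = 0;
        if (wallTicks > 0)
        {
            // Normalize by core count so a multi-threaded process tops out at 100%.
            cpuUsage = (cpuNow - m_lastCpuUsageTime).Ticks * 100.0
                       / (wallTicks * Environment.ProcessorCount);
        }
        m_lastCpuUsageTime = cpuNow;
        m_stopwatch = Stopwatch.StartNew();  // begin the next log period
        return cpuUsage;
    }
}
```

The first call measures since construction; each subsequent call measures one log period.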

Related

C# Process wait millisecond precise

I am developing an application (sort of a game helper), which sends keystrokes to a game at certain time intervals (you can specify what key will be pressed).
The problem is that I need to send the KeyPress with millisecond precision. After some research I found out that Thread.Sleep has a resolution of 20-50 ms, and the best I could find so far was spinning on a Stopwatch, like the following:
cmd_PlayJumps = new DelegateCommand(
    () =>
    {
        ActivateWindow();
        Stopwatch _timer = new Stopwatch();
        Stopwatch sw = new Stopwatch();
        double dElapsed = 0;

        // initial key press
        _timer.Start();
        _Keyboard.KeyPress(WindowsInput.Native.VirtualKeyCode.RETURN);

        int iTotalJumps = SelectedLayout.JumpCollection.Count;

        // loop through collection
        for (int iJump = 0; iJump < iTotalJumps - 1; iJump++)
        {
            dElapsed = _timer.Elapsed.TotalMilliseconds;
            sw.Restart();
            while (sw.Elapsed.TotalMilliseconds < SelectedLayout.JumpCollection[iJump + 1].WaitTime - dElapsed)
            {
                // busy-wait
            }
            _timer.Restart();
            _Keyboard.KeyPress(WindowsInput.Native.VirtualKeyCode.RETURN);
        }

        // final key press
        _Keyboard.KeyPress(WindowsInput.Native.VirtualKeyCode.RETURN);
        _timer.Stop();
        _timer = null;
    });
As the duration of the KeyPress event varies between 0.3 and 1.5 ms, I also keep track of it in order to cancel out that deviation.
Nonetheless, I am only able to get about 60% accuracy with this code, as even the Stopwatch is not that precise (assuming my code is correct).
I would like to know: how can I achieve at least 90% accuracy?
The problem is that you need to be lucky: it all depends on how often the tick is reached, which will be around 0.2-2 ms depending on your hardware. This is extremely difficult to avoid, but you could try setting a high process priority to steal the CPU away and get more ticks in.
This can be achieved with:
System.Diagnostics.Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
Also try changing the comparison to:
while (sw.Elapsed.TotalMilliseconds <= SelectedLayout.JumpCollection[iJump + 1].WaitTime - dElapsed)
which should save you another tick sometimes and nudge the accuracy up a little.
Other than that, the main issue is that Windows itself is not the best timekeeper: DateTime.Now has a tolerance of about 16 ms, for instance, and Windows was never designed as a "real-time" operating system.
As a side note: if you really need this to be as accurate as possible, I'd advise you to look into Linux.
I got it down to an average timing miss of 0.448 milliseconds using a combination of Thread.Sleep and a spin-waiter. Setting the thread to high priority does not change the logic, but it helps, as the thread needs to be running and continuously checking the clock during the final spin.
static void Main(string[] args)
{
    Thread.CurrentThread.Priority = ThreadPriority.Highest;
    var timespans = new List<TimeSpan>(50);
    while (timespans.Count < 50)
    {
        var scheduledTime = DateTime.Now.AddSeconds(0.40);
        Console.WriteLine("Scheduled to run at: {0:hh:mm:ss.FFFF}", scheduledTime);

        // sleep until ~50 ms before the deadline, then spin-wait the rest
        var wait = scheduledTime - DateTime.Now + TimeSpan.FromMilliseconds(-50);
        Thread.Sleep((int)Math.Abs(wait.TotalMilliseconds));
        while (DateTime.Now < scheduledTime) ;

        var offset = DateTime.Now - scheduledTime;
        Console.WriteLine("Actual: {0}", offset);
        timespans.Add(offset);
    }
    Console.WriteLine("Average delay: {0}", timespans.Aggregate((a, b) => a + b).TotalMilliseconds / 50);
    Console.Read();
}
Please note that truly real-time behavior cannot be obtained with standard CLR code running on Windows. The garbage collector can step in, even between loop cycles, and start collecting objects, at which point you have a good chance of getting imprecise timing.
You can reduce the chance of this happening by changing the garbage collector's latency mode, so that it won't perform full collections except in extremely low-memory situations. If this is not enough for you, consider writing the above solution in a language with better timing guarantees (e.g. C++).
You could try using ElapsedTicks. It is the smallest unit the Stopwatch can measure, and you can convert the number of elapsed ticks to seconds (and fractions of seconds, of course) using the Frequency property. I don't know whether it is better than Elapsed.TotalMilliseconds, but it's worth a try.
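For reference, a short sketch of that conversion (Stopwatch.Frequency is the number of Stopwatch ticks per second, which is generally not the same as TimeSpan's 100 ns ticks):

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        System.Threading.Thread.Sleep(1);  // stand-in for the work being measured
        sw.Stop();

        // Convert Stopwatch ticks to milliseconds through Frequency:
        double ms = sw.ElapsedTicks * 1000.0 / Stopwatch.Frequency;
        Console.WriteLine("{0:F4} ms", ms);
    }
}
```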

Get milliseconds passed

I just need a stable count of the current program's progression in milliseconds in C#. I don't care what timestamp it counts from (when the program starts, midnight, or the epoch); I just need a single function that returns a stable millisecond value that does not change in an abnormal manner, only increasing by 1 each millisecond. You'd be surprised how few comprehensive and simple answers I could find by searching.
Edit: Why did you remove the C# from my title? I'd figure that's a pretty important piece of information.
When your program starts, create a Stopwatch and Start() it.
private Stopwatch sw = new Stopwatch();
public void StartMethod()
{
    sw.Start();
}
At any point you can query the Stopwatch:
public void SomeMethod()
{
    var a = sw.ElapsedMilliseconds;
}
If you want something accurate/precise then you need to use a Stopwatch, and please read Eric Lippert's blog (he was formerly the principal developer on the C# compiler team): Precision and accuracy of DateTime.
Excerpt:
Now, the question "how much time has elapsed from start to finish?" is a completely different question than "what time is it right now?" If the question you want to ask is about how long some operation took, and you want a high-precision, high-accuracy answer, then use the Stopwatch class. It really does have nanosecond precision and accuracy that is close to its precision.
If you don't need an accurate time, and you don't care about precision or the possibility of edge cases that can cause your milliseconds to actually be negative, then use DateTime.
Do you mean DateTime.Now? It holds absolute time, and subtracting two DateTime instances gives you a TimeSpan object which has a TotalMilliseconds property.
You could store the current time in milliseconds when the program starts; then, in your function, get the current time again and subtract.
Edit:
If what you're going for is a stable count of process cycles, I would use processor clocks instead of time.
As per your comment, you can use DateTime.Ticks; one tick is 1/10,000 of a millisecond.
Also, if you want to do the time thing, you can store DateTime.Now in a variable when your program starts and take another DateTime.Now whenever you want the time. It has a Millisecond property.
Either way, DateTime is what you're looking for.
It sounds like you are just trying to get the current date and time, in milliseconds. If you are just trying to get the current time, in milliseconds, try this:
long milliseconds = DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond;
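One caveat for the "stable, only increasing" requirement: DateTime.Now can jump backwards or forwards when the system clock is adjusted (NTP sync, DST). A sketch of two monotonic alternatives, assuming Environment.TickCount64 is available (.NET Core 3.0 or later):

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Environment.TickCount64: milliseconds since the system started;
        // monotonic, unaffected by wall-clock adjustments (.NET Core 3.0+).
        long uptimeMs = Environment.TickCount64;

        // Stopwatch.GetTimestamp(): monotonic tick count from an arbitrary
        // origin; differences converted through Frequency give elapsed ms.
        long t0 = Stopwatch.GetTimestamp();
        // ... program runs ...
        long elapsedMs = (Stopwatch.GetTimestamp() - t0) * 1000 / Stopwatch.Frequency;

        Console.WriteLine("{0} {1}", uptimeMs, elapsedMs);
    }
}
```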

sleep in c# does not work properly

The code below shows that Sleep(1) sleeps for an average of 2 milliseconds!
DateTime dt = DateTime.Now;
int i = 0;
long max = -1;
while (true)
{
    Stopwatch st = new Stopwatch();
    st.Restart();
    System.Threading.Thread.Sleep(1);
    long el = st.ElapsedMilliseconds;
    max = Math.Max(el, max);
    i++;
    double time = DateTime.Now.Subtract(dt).TotalMilliseconds;
    if (time >= 1000)
    {
        Console.WriteLine("Time =" + time);
        Console.WriteLine("i =" + i);
        Console.WriteLine("max =" + max);
        System.Threading.Thread.Sleep(200);
        i = 0;
        dt = DateTime.Now;
        max = -1;
    }
}
Typical Output:
Time =1000.1553
i =495
max =5
Could somebody explain the reason to me? And how can I fix this problem?
Getting 2 milliseconds is fairly unusual, most anybody that runs your code will get 15 instead. It is rather machine dependent and mostly depends on what other programs you've got running on your machine. One way to change it, for example, is to start Chrome and you'll see (close to) 1 msec sleeps.
You should display more digits to avoid rounding artifacts. A simplification of the code:
static void Main(string[] args) {
    Stopwatch st = new Stopwatch();
    while (true) {
        st.Restart();
        System.Threading.Thread.Sleep(1);
        st.Stop();
        Console.Write("{0} ", st.Elapsed.Ticks / 10000.0);
        System.Threading.Thread.Sleep(200);
    }
}
Which produces on my machine:
16.2074 15.6224 15.6291 15.5313 15.6242 15.6176 15.6152 15.6279 15.6194 15.6128
15.6236 15.6236 15.6134 15.6158 15.6085 15.6261 15.6297 15.6128 15.6261 15.6218
15.6176 15.6055 15.6218 15.6224 15.6212 15.6134 15.6128 15.5928 15.6375 15.6279
15.6146 15.6254 15.6248 15.6091 15.6188 15.4679 15.6019 15.6212 15.6164 15.614
15.7504 15.6085 15.55 15.6248 15.6152 15.6248 15.6242 15.6158 15.6188 15.6206 ...
This is normal output, I have no programs running on my machine that mess with the operating system. This will be the way it works on most machines.
Some background on what's going on. When you call Thread.Sleep() with a value larger than 0 then you voluntarily give up the processor and your thread goes into a wait state. It will resume when the operating system's thread scheduler runs and enough time has expired.
What's key about that sentence is "when the thread scheduler runs". It runs at distinct times in Windows, driven by the clock interrupt that wakes up the processor from the HALT state. This starts in the kernel; one primary task of the clock interrupt is to increment the clock value: the one that's used by, for example, DateTime.Now and Environment.TickCount.
The clock does not have infinite resolution; it only changes when the clock interrupt occurs. By default on all modern Windows versions, that clock interrupt occurs 64 times per second, which makes the clock accuracy 1/64 second = 15.625 milliseconds. You can clearly see this value in the output of the program on my machine.
So what happened on your machine is that a program changed the clock interrupt rate. That is a rather unfortunate inheritance from Windows 3.1, the first Windows version that supported multi-media timers. Timers that can tick at a high rate to support programs that need to do things with media, like animating a GIF file, tune the frame rate of a video player, keep the audio card fed with sound without stutter or excessive latency. Programs like Chrome.
They do this by calling timeBeginPeriod(). They usually go whole-hog and pick the smallest allowable value, 1 millisecond. Apparently 2 msec on your machine. You can do this too, you'll see the Sleep(1) call now taking about 1 msec instead of 2. Don't forget to call timeEndPeriod() when you no longer need the high rate.
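A sketch of what that looks like from C#, using P/Invoke against winmm.dll (pairing timeBeginPeriod/timeEndPeriod in a try/finally is the important part, since the setting is system-wide):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

static class TimerResolution
{
    // Real winmm.dll exports; both return 0 (TIMERR_NOERROR) on success.
    [DllImport("winmm.dll")]
    private static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    private static extern uint timeEndPeriod(uint uMilliseconds);

    public static void Demo()
    {
        timeBeginPeriod(1);      // raise the clock interrupt rate to ~1 ms
        try
        {
            Thread.Sleep(1);     // now waits close to 1 ms instead of ~15.6 ms
        }
        finally
        {
            timeEndPeriod(1);    // always restore: the setting is system-wide
        }
    }
}
```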
But do keep in mind that this is a pretty unfriendly thing to do. Waking up the processor this often is very detrimental to battery life, always an issue on portable machines. Which explains what mystified this site's founding father in his blog post "Why does Windows have terrible battery life". It doesn't; Chrome has terrible battery life :) If you want to find out what program messed with the clock, you can run powercfg -energy from the command line.
I don't think it's weird to see this result. The Stopwatch call itself probably takes a millisecond, and I highly doubt you can expect a precise 1 millisecond: there is always overhead involved, and Sleep does not guarantee that the sleep time is that precise.
Personally, I would expect a range of 1-5 milliseconds.
Thread.Sleep is designed to pause a thread for at least the number of milliseconds you specify. It basically yields execution of the current thread, and it's up to the operating system's scheduler to wake it again. The thing is, you cannot be sure that the underlying OS scheduler will let the thread resume immediately.
I think System.Threading.Thread.SpinWait is what you are looking for.
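For a time-based wait, the related SpinWait.SpinUntil helper is usually more convenient than Thread.SpinWait's raw iteration count; a sketch (the 1 ms deadline is just an illustrative value, and the spin keeps a core at 100% for the whole wait):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class Program
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        double deadlineMs = 1.0;  // illustrative target

        // Burn CPU until the Stopwatch-based deadline passes.
        SpinWait.SpinUntil(() => sw.Elapsed.TotalMilliseconds >= deadlineMs);

        Console.WriteLine(sw.Elapsed.TotalMilliseconds);
    }
}
```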

More precise Thread.Sleep

How can I do Thread.Sleep(10.4166667);?
OK, I see now that Sleep is not the way to go.
So I used a Timer, but a Timer also works in milliseconds, and I need something more precise.
Is there a timer with nanosecond accuracy?
So you want your thread to sleep for precisely that time and then resume? Forget about it. This parameter tells the system to wake the thread after at least that number of milliseconds. At least. And after resuming, the thread could be put to sleep once again in the blink of an eye. That's just how operating systems work, and you cannot control it.
Please note that Thread.Sleep sleeps at least as long as you tell it (and not even precisely that), no matter how long the code before or after it takes to execute.
Your question seems to imply that you want some code to be executed in certain intervals, since a precise time seems to matter. Thus you might prefer a Timer.
To do such a precise sleep you would need to use a real time operating system and you would likely need specialized hardware. Integrity RTOS claims to respond to interrupts in nanoseconds, as do others.
This isn't going to happen with C# or any kind of high level sleep call.
Please note that the argument is in milliseconds, so 10 is 10 milliseconds. Are you sure you want 10.4166667 milliseconds? If you actually want about 10.417 seconds, you can use 10417.
The input to Thread.Sleep is the number of milliseconds for which the thread is blocked. After that it will be runnable, but you have no influence over when it is actually scheduled. I.e. in theory the thread could wait forever before resuming execution.
It hardly ever makes sense to rely on specific number of milliseconds here. If you're trying to synchronize work between two threads there are better options than using Sleep.
As you already mentioned: you could combine a DispatcherTimer with a Stopwatch (making sure Stopwatch.IsHighResolution and Stopwatch.Frequency suit your needs). Start the timer and the stopwatch, and on discrete ticks of the timer check the exact elapsed time on the stopwatch.
If you are trying to rate-limit a calculation and insist on using only Thread.Sleep, then be aware there is an underlying kernel pulse rate (roughly 15 ms), so your thread will only resume when a pulse occurs. The guarantee provided is to "wait at least the specified duration." For example, if you call Thread.Sleep(1) (to wait 1 ms) and the last pulse was 13 ms ago, then you will end up waiting about 2 ms until the next pulse comes.
The draw synchronization I implemented for a rendering engine does something similar to dithering to get the quantization to the 15ms intervals to be uniformly distributed around my desired time interval. It is mostly just a matter of subtracting half the pulse interval from the sleep duration, so only half the invocations wait the extra duration to the next 15ms pulse, and half occur early.
public class TimeSynchronizer {
    //see https://learn.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
    public const double THREAD_PULSE_MS = 15.6d;//TODO read exact value for your system
    public readonly TimeSpan Min = TimeSpan.Zero;

    public TimeSynchronizer(TimeSpan? min = null) {
        if (min.HasValue && min.Value.Ticks > 0L) this.Min = min.Value;
    }

    private DateTime _targetTimeUtc = DateTime.UtcNow;//you may wish to defer this initialization so the first Synchronize() call assuredly doesn't wait

    public void Synchronize() {
        if (this.Min.Ticks > 0L) {
            DateTime nowUtc = DateTime.UtcNow;
            TimeSpan waitDuration = this._targetTimeUtc - nowUtc;
            //store the exact desired return time for the next interval
            if (waitDuration.Ticks > 0L)
                this._targetTimeUtc += this.Min;
            else this._targetTimeUtc = nowUtc + this.Min;//missed it (this does not preserve absolute synchronization and can de-phase from metered interval times)

            if (waitDuration.TotalMilliseconds > THREAD_PULSE_MS / 2d)
                Thread.Sleep(waitDuration.Subtract(TimeSpan.FromMilliseconds(THREAD_PULSE_MS / 2d)));
        }
    }
}
I do not recommend this solution if your nominal sleep durations are significantly less than the pulse rate, because it will frequently not wait at all in that case.
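A usage sketch for the class above, where RenderFrame() is a hypothetical stand-in for the per-frame work:

```csharp
// Cap a render loop at ~30 fps using the TimeSynchronizer above.
var sync = new TimeSynchronizer(TimeSpan.FromMilliseconds(1000d / 30d));
while (true)
{
    sync.Synchronize();  // blocks until roughly 33.3 ms since the last frame
    RenderFrame();       // hypothetical per-frame work
}
```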
The following screenshot shows rough percentile bands for how long the wait truly takes (buckets of 20 samples each; dark green marks the median values), with the (nominal) minimum duration between frames set for 30 fps (33.333 ms):
I am suspicious that the exact pulse duration is 1 second / 600, since in SQL server a single DateTime tick is exactly 1/300th of a second

How would I go about implementing a stopwatch with different speeds?

Ideally I would like to have something similar to the Stopwatch class but with an extra property called Speed which would determine how quickly the timer changes minutes. I am not quite sure how I would go about implementing this.
Edit
Since people don't quite seem to understand why I want to do this: consider playing a soccer game, or any sports game. The halves are measured in minutes, but the time frame in which the game is played is significantly shorter, i.e. a 45-minute half is played in about 2.5 minutes.
Subclass it, call through to the superclass methods to do their usual work, but multiply all the return values by Speed as appropriate.
I would use the Stopwatch as it is, then just multiply the result, for example:
var Speed = 1.2; //Time progresses 20% faster in this example
var s = new Stopwatch();
s.Start();
//do things
s.Stop();
var parallelUniverseMilliseconds = s.ElapsedMilliseconds * Speed;
The reason your simple "multiplication" doesn't work is that it doesn't speed up the passing of time: the factor applies to all time that has already passed, as well as time that is passing.
So, if you set your speed factor to 3 and then wait 10 minutes, your clock will correctly read 30 minutes. But if you then change the factor to 2, your clock will immediately read 20 minutes, because the multiplication is applied to time already passed. That's obviously not correct.
I don't think the Stopwatch is the class you want to measure "system time" with. I think you want to measure it yourself, and store the elapsed time in your own variable.
Assuming that your target project really is a game, you will likely have your "game loop" somewhere in code. Each time through the loop, you can use a regular stopwatch object to measure how much real-time has elapsed. Multiply that value by your speed-up factor and add it to a separate game-time counter. That way, if you reduce your speed factor, you only reduce the factor applied to passing time, not to the time you've already recorded.
You can wrap all this behaviour into your own stopwatch class if needs be. If you do that, then I'd suggest that you calculate/accumulate the elapsed time both "every time it's requested" and also "every time the factor is changed." So you have a class something like this (note that I've skipped field declarations and some simple private methods for brevity - this is just a rough idea):
public class SpeedyStopwatch
{
    // This is the time that your game/system will run from
    public TimeSpan ElapsedTime
    {
        get
        {
            CalculateElapsedTime();
            return this._elapsedTime;
        }
    }

    // This can be set to any value to control the passage of time
    public double TimeFactor
    {
        get { return this._timeFactor; }
        set
        {
            CalculateElapsedTime();
            this._timeFactor = value;
        }
    }

    private void CalculateElapsedTime()
    {
        // Find out how long (real-time) since we last called the method
        TimeSpan lastTimeInterval = GetElapsedTimeSinceLastCalculation();

        // Multiply this time by our factor
        lastTimeInterval = TimeSpan.FromTicks((long)(lastTimeInterval.Ticks * this._timeFactor));

        // Add the multiplied time to our elapsed time
        this._elapsedTime += lastTimeInterval;
    }
}
According to modern physics, what you need to do to make your timer go "faster" is to speed up the computer that your software is running on. I don't mean the speed at which it performs calculations, but its physical speed. The closer you get to the speed of light (the constant c), the greater the rate at which time passes for your computer, so as you approach the speed of light, time will "speed up" for you.
It sounds like what you might actually be looking for is an event scheduler, where you specify that certain events must happen at specific points in simulated time and you want to be able to change the relationship between real time and simulated time (perhaps dynamically). You can run into boundary cases when you start to change the speed of time in the process of running your simulation and you may also have to deal with cases where real time takes longer to return than normal (your thread didn't get a time slice as soon as you wanted, so you might not actually be able to achieve the simulated time you're targeting.)
For instance, suppose you wanted to update your simulation at least once per 50ms of simulated time. You can implement the simulation scheduler as a queue where you push events and use a scaled output from a normal Stopwatch class to drive the scheduler. The process looks something like this:
Push (simulate at t=0) event to event queue
Start stopwatch
lastTime = 0
simTime = 0
While running
    simTime += scale * (stopwatch.Time - lastTime)
    lastTime = stopwatch.Time
    While events in queue that have passed their time
        pop and execute event
        push (simulate at t = lastEventT + dt) event to event queue
This can be generalized to different types of events occurring at different intervals. You still need to deal with the boundary case where the event queue is ballooning because the simulation can't keep up with real time.
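A minimal C# rendering of that pseudocode (event times are assumed distinct, since SortedList rejects duplicate keys; Pump() would be called from the main loop):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class ScaledScheduler
{
    private readonly Stopwatch _clock = Stopwatch.StartNew();
    private readonly SortedList<double, Action> _events = new SortedList<double, Action>();
    private double _simTime, _lastReal;

    // Changing Scale mid-run only affects time passing from now on,
    // never the simulated time already accumulated.
    public double Scale { get; set; } = 1.0;

    public void Schedule(double simTimeMs, Action action) => _events.Add(simTimeMs, action);

    public void Pump()
    {
        double nowReal = _clock.Elapsed.TotalMilliseconds;
        _simTime += Scale * (nowReal - _lastReal);
        _lastReal = nowReal;

        // Fire every event whose simulated time has been reached.
        while (_events.Count > 0 && _events.Keys[0] <= _simTime)
        {
            Action action = _events.Values[0];
            _events.RemoveAt(0);
            action();
        }
    }
}
```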
I'm not entirely sure what you're looking to do (doesn't a minute always have 60 seconds?), but I'd utilize Thread.Sleep() to accomplish what you want.
