How can I do Thread.Sleep(10.4166667)?
OK, I see now that Sleep is not the way to go.
So I'd use a Timer, but a Timer also works in milliseconds and I need something more precise.
Is there a timer with nanosecond accuracy?
So you want your thread to sleep precisely for that time and then resume? Forget about it. This parameter tells the system to wake the thread after at least this number of milliseconds. At least. And after resuming, the thread could be put to sleep once again in the blink of an eye. That's just how operating systems work and you cannot control it.
Please note that Thread.Sleep sleeps as long as you tell it (and not even that precisely), no matter how long the code before or after it takes to execute.
Your question seems to imply that you want some code to be executed in certain intervals, since a precise time seems to matter. Thus you might prefer a Timer.
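For instance, a minimal sketch with System.Threading.Timer (the 10 ms period is illustrative; actual callback timing is still bounded by the OS clock resolution):

using System;
using System.Threading;

class TimerDemo
{
    static void Main()
    {
        //fires roughly every 10 ms; the callback runs on a thread-pool thread
        using (var timer = new Timer(_ => Console.WriteLine(DateTime.UtcNow.ToString("HH:mm:ss.fff")),
                                     null, 0, 10))
        {
            Thread.Sleep(1000); //let it run for about a second
        }
    }
}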
To do such a precise sleep you would need to use a real time operating system and you would likely need specialized hardware. Integrity RTOS claims to respond to interrupts in nanoseconds, as do others.
This isn't going to happen with C# or any kind of high level sleep call.
Please note that the argument is in milliseconds, so 10 is 10 milliseconds. Are you sure you want 10.41 etc milliseconds? If you want 10.41 seconds, then you can use 10416.
The input to Thread.Sleep is the number of milliseconds for which the thread is blocked. After that it will be runnable, but you have no influence over when it is actually scheduled. I.e. in theory the thread could wait forever before resuming execution.
It hardly ever makes sense to rely on a specific number of milliseconds here. If you're trying to synchronize work between two threads, there are better options than using Sleep.
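For example, if one thread simply needs to wait until another has produced a result, an event handle avoids guessing at sleep durations entirely (a minimal sketch; the names are illustrative):

using System;
using System.Threading;

class HandoffDemo
{
    static readonly AutoResetEvent DataReady = new AutoResetEvent(false);
    static int _payload;

    static void Main()
    {
        var consumer = new Thread(() =>
        {
            DataReady.WaitOne(); //blocks until signaled; no guessed Sleep duration
            Console.WriteLine(_payload);
        });
        consumer.Start();

        _payload = 42;   //produce the result
        DataReady.Set(); //wake the consumer as soon as the scheduler allows
        consumer.Join();
    }
}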
As you already mentioned: you could combine DispatcherTimer with Stopwatch (making sure IsHighResolution and Frequency suit your needs). Start the timer and the stopwatch, and on discrete ticks of the timer check the exact elapsed time on the stopwatch.
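A rough sketch of that combination, assuming a WPF application with a running Dispatcher (DispatcherTimer lives in WindowsBase):

using System;
using System.Diagnostics;
using System.Windows.Threading;

class PreciseTickDemo
{
    private readonly Stopwatch _watch = new Stopwatch();
    private readonly DispatcherTimer _timer = new DispatcherTimer();

    public void Start()
    {
        //verify the high-resolution hardware counter is available
        Console.WriteLine("IsHighResolution: {0}, Frequency: {1}",
                          Stopwatch.IsHighResolution, Stopwatch.Frequency);

        _timer.Interval = TimeSpan.FromMilliseconds(10); //coarse schedule only
        _timer.Tick += (s, e) =>
        {
            //the tick itself is imprecise; the stopwatch gives the exact elapsed time
            Console.WriteLine("Elapsed: {0:F4} ms", _watch.Elapsed.TotalMilliseconds);
        };
        _watch.Start();
        _timer.Start();
    }
}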
If you are trying to rate-limit a calculation and insist on using only Thread.Sleep, then be aware there is an underlying kernel pulse rate (roughly 15 ms), so your thread will only resume when a pulse occurs. The guarantee provided is to "wait at least the specified duration." For example, if you call Thread.Sleep(1) (to wait 1 ms) and the last pulse was 13 ms ago, you will end up waiting 2 ms until the next pulse comes.
The draw synchronization I implemented for a rendering engine does something similar to dithering to get the quantization to the 15ms intervals to be uniformly distributed around my desired time interval. It is mostly just a matter of subtracting half the pulse interval from the sleep duration, so only half the invocations wait the extra duration to the next 15ms pulse, and half occur early.
using System;
using System.Threading;

public class TimeSynchronizer {
    //see https://learn.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
    public const double THREAD_PULSE_MS = 15.6d;//TODO read exact value for your system
    public readonly TimeSpan Min = TimeSpan.Zero;

    public TimeSynchronizer(TimeSpan? min = null) {
        if (min.HasValue && min.Value.Ticks > 0L) this.Min = min.Value;
    }

    private DateTime _targetTimeUtc = DateTime.UtcNow;//you may wish to defer this initialization so the first Synchronize() call assuredly doesn't wait

    public void Synchronize() {
        if (this.Min.Ticks > 0L) {
            DateTime nowUtc = DateTime.UtcNow;
            TimeSpan waitDuration = this._targetTimeUtc - nowUtc;
            //store the exact desired return time for the next interval
            if (waitDuration.Ticks > 0L)
                this._targetTimeUtc += this.Min;
            else this._targetTimeUtc = nowUtc + this.Min;//missed it (this does not preserve absolute synchronization and can de-phase from metered interval times)

            if (waitDuration.TotalMilliseconds > THREAD_PULSE_MS / 2d)
                Thread.Sleep(waitDuration.Subtract(TimeSpan.FromMilliseconds(THREAD_PULSE_MS / 2d)));
        }
    }
}
I do not recommend this solution if your nominal sleep durations are significantly less than the pulse rate, because it will frequently not wait at all in that case.
The following screenshot shows rough percentile bands on how long it truly takes (from buckets of 20 samples each - dark green are the median values), with a (nominal) minimum duration between frames set at 30fps (33.333ms):
I suspect that the exact pulse duration is 1 second / 600, since in SQL Server a single DateTime tick is exactly 1/300th of a second.
The code below shows that Sleep(1) will sleep an average of 2 milliseconds!
DateTime dt = DateTime.Now;
int i = 0;     //number of Sleep(1) calls completed this second
long max = -1; //longest observed Sleep(1), in ms
while (true)
{
    Stopwatch st = new Stopwatch();
    st.Restart();
    System.Threading.Thread.Sleep(1);
    long el = st.ElapsedMilliseconds;
    max = Math.Max(el, max);
    i++;
    double time = DateTime.Now.Subtract(dt).TotalMilliseconds;
    if (time >= 1000)
    {
        Console.WriteLine("Time =" + time);
        Console.WriteLine("i =" + i);
        Console.WriteLine("max =" + max);
        System.Threading.Thread.Sleep(200);
        i = 0;
        dt = DateTime.Now;
        max = -1;
    }
}
Typical Output:
Time =1000.1553
i =495
max =5
Could somebody explain the reason, and how can I fix this problem?
Getting 2 milliseconds is fairly unusual; most anybody who runs your code will get 15 instead. It is rather machine dependent and mostly depends on what other programs you've got running on your machine. One way to change it, for example, is to start Chrome and you'll see (close to) 1 msec sleeps.
You should display more digits to avoid rounding artifacts. A simplification of the code:
static void Main(string[] args) {
    Stopwatch st = new Stopwatch();
    while (true) {
        st.Restart();
        System.Threading.Thread.Sleep(1);
        st.Stop();
        Console.Write("{0} ", st.Elapsed.Ticks / 10000.0);
        System.Threading.Thread.Sleep(200);
    }
}
Which produces on my machine:
16.2074 15.6224 15.6291 15.5313 15.6242 15.6176 15.6152 15.6279 15.6194 15.6128
15.6236 15.6236 15.6134 15.6158 15.6085 15.6261 15.6297 15.6128 15.6261 15.6218
15.6176 15.6055 15.6218 15.6224 15.6212 15.6134 15.6128 15.5928 15.6375 15.6279
15.6146 15.6254 15.6248 15.6091 15.6188 15.4679 15.6019 15.6212 15.6164 15.614
15.7504 15.6085 15.55 15.6248 15.6152 15.6248 15.6242 15.6158 15.6188 15.6206 ...
This is normal output, I have no programs running on my machine that mess with the operating system. This will be the way it works on most machines.
Some background on what's going on. When you call Thread.Sleep() with a value larger than 0, you voluntarily give up the processor and your thread goes into a wait state. It will resume when the operating system's thread scheduler runs and enough time has expired.
What's key about that sentence is "when the thread scheduler runs". It runs at distinct times in Windows, driven by the clock interrupt that wakes up the processor from the HALT state. This starts in the kernel: one primary task of the clock interrupt is to increment the clock value, the one that's used by, for example, DateTime.Now and Environment.TickCount.
The clock does not have infinite resolution, it only changes when the clock interrupt occurs. By default on all modern Windows versions, that clock interrupt occurs 64 times per second. Which makes the clock accuracy 1 / 64 = 15.625 milliseconds. You can clearly see this value back in the output of the program on my machine.
So what happened on your machine is that a program changed the clock interrupt rate. That is a rather unfortunate inheritance from Windows 3.1, the first Windows version that supported multi-media timers. Timers that can tick at a high rate to support programs that need to do things with media, like animating a GIF file, tune the frame rate of a video player, keep the audio card fed with sound without stutter or excessive latency. Programs like Chrome.
They do this by calling timeBeginPeriod(). They usually go whole-hog and pick the smallest allowable value, 1 millisecond. Apparently 2 msec on your machine. You can do this too, you'll see the Sleep(1) call now taking about 1 msec instead of 2. Don't forget to call timeEndPeriod() when you no longer need the high rate.
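For reference, a sketch of making that call from C# via P/Invoke (timeBeginPeriod and timeEndPeriod are real winmm.dll exports; error handling is omitted):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class HighResSleepDemo
{
    [DllImport("winmm.dll")] static extern uint timeBeginPeriod(uint uPeriod);
    [DllImport("winmm.dll")] static extern uint timeEndPeriod(uint uPeriod);

    static void Main()
    {
        timeBeginPeriod(1); //request a 1 ms clock interrupt rate (affects the whole system)
        try
        {
            Thread.Sleep(1); //now completes in roughly 1-2 ms instead of ~15.6 ms
        }
        finally
        {
            timeEndPeriod(1); //always restore the default rate
        }
    }
}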
But do keep in mind that this is a pretty unfriendly thing to do. Waking up the processor this often is very detrimental to battery life, always an issue on portable machines. Which explains what mystified this site's founding father in his blog post "Why does Windows have terrible battery life". It doesn't, Chrome has terrible battery life :) If you want to find out what program messed with the clock then you can run powercfg -energy from the command line.
I don't think it's weird to see this result. The Stopwatch itself probably takes a millisecond. I highly doubt you can expect a precise 1 millisecond. There is always overhead involved, and I doubt Sleep guarantees that the sleep time is that precise.
Personally I would expect a range from 1-5 milliseconds.
Thread.Sleep is designed to pause a thread for at least the number of milliseconds you specify. It basically leaves the execution of the current thread, and it's up to the operating system's scheduler to wake it again. The thing is, you cannot be sure that the underlying OS scheduler will allow the thread to resume immediately.
I think System.Threading.Thread.SpinWait is what you are looking for.
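Keep in mind that spinning burns a CPU core rather than yielding it. A common pattern for sub-millisecond precision is to busy-wait on a Stopwatch (a sketch; the helper name is made up):

using System.Diagnostics;
using System.Threading;

static class PreciseDelay
{
    //busy-waits until the requested interval has elapsed; precise but CPU-hungry
    public static void Wait(double milliseconds)
    {
        var sw = Stopwatch.StartNew();
        while (sw.Elapsed.TotalMilliseconds < milliseconds)
            Thread.SpinWait(10); //brief spin between checks
    }
}

With something like this, the original Thread.Sleep(10.4166667) becomes PreciseDelay.Wait(10.4166667), at the cost of one fully busy core for the duration.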
I have a small problem regarding threading in C#.
For some reason, my thread speeds up from a 32 ms delay to a 16 ms delay when I open Chrome; when I close Chrome it goes back to 32 ms. I'm using Thread.Sleep(1000 / 60) for the delay.
Can somebody explain why this is happening, and maybe suggest a possible solution?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ConsoleApplication2
{
    class Program
    {
        static bool alive;
        static Thread thread;
        static DateTime last;

        static void Main(string[] args)
        {
            alive = true;
            thread = new Thread(new ThreadStart(Loop));
            thread.Start();
            Console.ReadKey();
        }

        static void Loop()
        {
            last = DateTime.Now;
            while (alive)
            {
                DateTime current = DateTime.Now;
                TimeSpan span = current - last;
                last = current;
                Console.WriteLine("{0}ms", span.Milliseconds);
                Thread.Sleep(1000 / 60);
            }
        }
    }
}
Just a post to confirm Matthew's correct answer. The accuracy of Thread.Sleep() is affected by the clock interrupt rate on Windows. By default it ticks 64 times per second, once every 15.625 msec. A Sleep() can only complete when such an interrupt occurs. The mental image here is the one induced by the word "sleep": the processor is in fact asleep and not executing code. Only that clock interrupt is going to wake it up again to resume executing your code.
Your choice of 1000/60 was a very unhappy one; it asks for 16 msec, just a bit over 15.625, so you'll always wake back up at least 2 ticks later: 2 x 15.625 = 31 msec. That's what you measured.
That interrupt rate is however not fixed, it can be altered by a program. It does so by calling CreateTimerQueueTimer() or the legacy timeBeginPeriod(). A browser in general has a need to do so. Something simple as animating a GIF requires a better timer since GIF frame times are specified with a unit of 10 msec. Or in general any multi-media related operation needs it.
A very ugly side-effect of a program doing this is that this increased clock interrupt rate has system-wide effects. Like it did in your program. Your timer suddenly got accurate and you actually got the sleep duration you asked for, 16 msec. So Chrome is changing the rate to, probably, 1000 ticks per second. The maximum supported. And good for business when you have a competing operating system.
You can avoid this problem by picking a sleep duration that's a closer match to the default interrupt rate. If you ask for 15 then you'll get 15.625 and Chrome cannot have an effect on that. 31 is the next sweet spot. Etcetera, integer multiples of 15.625 and rounded down.
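In code, picking that sweet spot might look like this tiny helper (illustrative only, not a library API):

using System;

static class SleepSweetSpot
{
    //round a desired delay down to a whole multiple of the default 15.625 ms clock tick,
    //so a program raising the interrupt rate cannot change what you actually get
    public static int FromMilliseconds(double desiredMs)
    {
        const double clockTickMs = 15.625;
        return (int)(Math.Floor(desiredMs / clockTickMs) * clockTickMs); //16 -> 15, 33.3 -> 31
    }
}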
UPDATE: do note that this behavior changed just recently. Starting at Win10 version 2004, the effect is no longer global so Chrome can no longer affect your program.
Starting at Win11, an app with an inactive window operates with the default interrupt rate.
This is possibly occurring because Chrome (or some component of Chrome) is calling timeBeginPeriod() with a value that increases the resolution of the Windows API function Sleep(), which is called from Thread.Sleep().
See this thread for more information: Can I improve the resolution of Thread.Sleep?
I noticed this behavior with Windows Media Player some years ago: The behavior of one of our applications changed depending on whether Windows Media Player was running or not. It turned out, WMP was calling timeBeginPeriod().
However, in general, Thread.Sleep() (and by extension, the Windows API Sleep()) is extremely inaccurate.
Basically, Thread.Sleep isn't very accurate.
Thread.Sleep(1000/60) (which evaluates to Thread.Sleep(16)) asks the thread to go to sleep and come back when 16 ms has elapsed. However, that thread might not get to execute again until a greater amount of time has elapsed; say, for example, 32 ms.
As for why Chrome is having an effect, I don't know, but since Chrome spawns one new thread for each tab, it'll have an effect on the system's threading behaviour.
First, 1000 / 60 = 16 ms (integer division).
The PC clock has a default resolution of around 15-16 ms; Sleep() and the result of DateTime.Now are effectively rounded to a multiple of that value.
So, Thread.Sleep(5) and Thread.Sleep(15) will delay for the same amount of time. And that can be 20, 40 or even 60 ms. You do not get many guarantees; the argument to Sleep() is only a minimum.
And another process (Chrome) that hogs the CPU (even a little) can influence the behavior of your program that way. Edit: that is the reverse of what you're seeing, so something slightly different is happening here. Still, it's about rounding to timeslices.
You are hitting a resolution issue with DateTime. You should use Stopwatch for this kind of precision. Eric Lippert states that DateTime is only accurate to around 30 ms, so your readings with it in this case will not tell you anything.
Measurement is half of your problem. The actual time variation for your loop is due to Sleep resolution (as stated in the other answers).
I'm not sure how the Windows kernel handles thread timing.
I'm speaking about DST and any other event that affects the time of day on Windows boxes.
For example, Thread.Sleep will block a thread from zero to infinite milliseconds.
If the kernel uses the same "clock" as that used for the time of day, then when
(a) someone manually changes the time of day, or
(b) some synchronization to a time server changes the time of day, or
(c) Daylight Saving Time begins or ends and the system has been configured to respond to these two DST events,
et cetera,
are sleeping threads in any way affected? I.e., does the kernel handle such events in such a way that the programmer need do nothing?
N.B.: for non-critical applications, this is likely a "who cares?" situation.
For critical applications, knowing the answer to this question is important because of the possibility that one must program for such exception conditions.
Thank you.
Edit: I thought of a simple test, which I've run in LINQPad 4.
The test involved putting the thread to sleep and starting a manual timer at approximately the same time, then (a) moving the time ahead one hour, and for the second test (b) moving the time back two hours. In both tests, the period of sleep was not affected.
Bottom line: with Thread.Sleep, there is no need to worry about events that affect the time of day.
Here's the trivial C# code:
Int32 secondsToSleep;
String seconds;
Boolean inputOkay;

Console.WriteLine("How many seconds?");
seconds = Console.ReadLine();
inputOkay = Int32.TryParse(seconds, out secondsToSleep);
if (inputOkay)
{
    Console.WriteLine("sleeping for {0} second(s)", secondsToSleep);
    Thread.Sleep(secondsToSleep * 1000);
    Console.WriteLine("i am awake!");
}
else Console.WriteLine("invalid input: [{0}]", seconds);
No, Thread.Sleep is not affected. The kernel furthermore keeps time in UTC exclusively. Time zones and DST are just layered above that.
In my application, I have used a number of System.Threading.Timer instances, each set to fire every 1 second. My application executes the thread every 1 second, but the millisecond at which it executes differs each time.
In my application I use an OPC server and an OPC group. One thread reads data from the OPC server (a variable keeps changing its value, and I want to log the moment of each change in my application every 1 s).
Another thread reads this data from the first thread every 1 s, and that second thread stores the data in a MySQL database.
The problem is that when I read the data from the first thread, I get old values: reading at 10:28:01.530, I get the information from 10:28:00.260. So I want to manage these threads so that the first thread works at the .000 millisecond mark and the second thread at the .500 mark: the first thread updates the data at .000, and the second thread reads it at .500.
My output is given below:
10:28:32.875
10:28:33.390
10:28:34.875
....
10:28:39.530
10:28:40.875
However, I want following results:
10:28:32.000
10:28:33.000
10:28:34.000
....
10:28:39.000
10:28:40.000
How can the timer be set so the callback is executed at "000 milliseconds"?
First of all, it's impossible. Even if you schedule your 'events' to fire a few milliseconds ahead of time and then compare the millisecond component of the current time with zero in a loop, control could be taken away from your code at any given moment.
You will have to rethink your design a little, and not depend on exactly when the event fires, but think of an algorithm that compensates for the delayed milliseconds.
Also, you won't get much help from Threading.Timer; you would have a better chance using your own thread that periodically does the following (see the sketch after this list):
check the current time and see how long remains until the next full second,
Sleep() for that amount minus a 'spice' factor,
do the work you have to do.
You'll calculate your 'spice' factor depending on the results you are getting: does the sleep finish ahead of or behind schedule?
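A sketch of that loop (the starting 'spice' value here is made up; tune it from what you observe):

using System;
using System.Threading;

class SecondBoundaryLoop
{
    static void Main()
    {
        int spiceMs = 15; //tuning factor: wake slightly early; adjust from observed results
        while (true)
        {
            int msUntilNextSecond = 1000 - DateTime.Now.Millisecond;
            int sleepMs = msUntilNextSecond - spiceMs;
            if (sleepMs > 0) Thread.Sleep(sleepMs);

            //runs close to the .000 boundary, but never exactly on it
            Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff"));
        }
    }
}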
If you give more information about your apparent need to have the event at exactly zero ms, I could help you get rid of that requirement.
HTH
I would say that it's impossible. You have to understand that context switching takes time (if another process is running, you have to wait while the CPU scheduler does its work). Each CPU tick takes some time, so synchronization to 0 milliseconds is impossible. Maybe by setting a high priority for your process you can get closer to 0, but you will never achieve it exactly.
IMHO it is impossible to really get a timer to fire exactly every 1 sec (on the millisecond); even in hardcore assembler this would be a very hard task on your typical Windows machine.
I think the first thing you need to do is set the right dueTime for the timer. I do it like so:
dueTime = 1000 - DateTime.Now.Millisecond + X; where X serves for accuracy and you need to select it by testing. Each tick of a Threading.Timer runs on a thread from the CLR thread pool, and as tests show, this thread is different each time; creating threads slows the timer down. Because of this you can use a WaitableTimer, which will always run on the same thread. Instead of a WaitableTimer you can use the Thread.Sleep method in the following way:
Thread.CurrentThread.Priority = ThreadPriority.Highest; //if timing is really critical
Thread.Sleep(1000 - DateTime.Now.Millisecond + 50); //align to just past the next second boundary
while (SomeBoolCondition)
{
    Thread.Sleep(980); //1000 ms = 1 second, but some ms will be spent on exiting Sleep
    //Here you write your code
}
It will work faster than a timer.
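For completeness, a hedged sketch of the dueTime idea with System.Threading.Timer (X is the empirical correction described above; the class name is illustrative):

using System;
using System.Threading;

class AlignedTimerDemo
{
    static void Main()
    {
        const int X = 15; //empirical correction, select by testing on your machine
        int dueTime = 1000 - DateTime.Now.Millisecond + X;

        //first callback fires near the next second boundary, then every 1000 ms
        using (var timer = new Timer(_ => Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff")),
                                     null, dueTime, 1000))
        {
            Console.ReadKey(); //keep the process alive while the timer runs
        }
    }
}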
Ideally I would like to have something similar to the Stopwatch class but with an extra property called Speed which would determine how quickly the timer changes minutes. I am not quite sure how I would go about implementing this.
Edit
Since people don't quite seem to understand why I want to do this, consider playing a soccer game, or any sports game. The halves are measured in minutes, but the time-frame in which the game is played is significantly shorter, i.e. a 45-minute half is played in about 2.5 minutes.
Subclass it, call through to the superclass methods to do their usual work, but multiply all the return values by Speed as appropriate.
I would use the Stopwatch as it is, then just multiply the result, for example:
var speed = 1.2; //time progresses 20% faster in this example
var s = new Stopwatch();
s.Start();
//do things
s.Stop();
var parallelUniverseMilliseconds = s.ElapsedMilliseconds * speed;
The reason your simple "multiplication" doesn't work is that it doesn't speed up the passing of time: the factor applies to all time that has already passed, as well as time that is passing.
So, if you set your speed factor to 3 and then wait 10 minutes, your clock will correctly read 30 minutes. But if you then change the factor to 2, your clock will immediately read 20 minutes because the multiplication is applied to time already passed. That's obviously not correct.
I don't think the Stopwatch is the class you want to measure "system time" with. I think you want to measure it yourself and store the elapsed time in your own variable.
Assuming that your target project really is a game, you will likely have your "game loop" somewhere in code. Each time through the loop, you can use a regular stopwatch object to measure how much real-time has elapsed. Multiply that value by your speed-up factor and add it to a separate game-time counter. That way, if you reduce your speed factor, you only reduce the factor applied to passing time, not to the time you've already recorded.
You can wrap all this behaviour into your own stopwatch class if needs be. If you do that, then I'd suggest that you calculate/accumulate the elapsed time both "every time it's requested" and also "every time the factor is changed." So you have a class something like this (still just a rough idea, with the fields and the private helper filled in so it compiles):
using System;
using System.Diagnostics;

public class SpeedyStopwatch
{
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();
    private TimeSpan _elapsedTime = TimeSpan.Zero; //accumulated, factor-adjusted time
    private TimeSpan _lastRealElapsed = TimeSpan.Zero;
    private double _timeFactor = 1.0;

    // This is the time that your game/system will run from
    public TimeSpan ElapsedTime
    {
        get
        {
            CalculateElapsedTime();
            return this._elapsedTime;
        }
    }

    // This can be set to any value to control the passage of time
    public double TimeFactor
    {
        get { return this._timeFactor; }
        set
        {
            CalculateElapsedTime();
            this._timeFactor = value;
        }
    }

    private void CalculateElapsedTime()
    {
        // Find out how long (real-time) since we last called the method
        TimeSpan lastTimeInterval = GetElapsedTimeSinceLastCalculation();
        // Multiply this time by our factor (via ticks, so it also works on
        // frameworks without the TimeSpan * double operator)
        lastTimeInterval = TimeSpan.FromTicks((long)(lastTimeInterval.Ticks * this._timeFactor));
        // Add the multiplied time to our elapsed time
        this._elapsedTime += lastTimeInterval;
    }

    private TimeSpan GetElapsedTimeSinceLastCalculation()
    {
        TimeSpan realElapsed = this._stopwatch.Elapsed;
        TimeSpan interval = realElapsed - this._lastRealElapsed;
        this._lastRealElapsed = realElapsed;
        return interval;
    }
}
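Usage of the sketch above might look like this (Thread.Sleep stands in for real work):

var clock = new SpeedyStopwatch();
clock.TimeFactor = 3.0;               //game time passes 3x faster than real time
Thread.Sleep(1000);                   //one real second elapses...
Console.WriteLine(clock.ElapsedTime); //...and roughly 00:00:03 is reported
clock.TimeFactor = 1.0;               //changing the factor does not rewrite time already passed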
According to modern physics, what you need to do to make your timer go "faster" is to physically speed up the computer that your software is running on. I don't mean the speed at which it performs calculations, but its actual velocity. The closer you get to the speed of light (the constant c), the more slowly time passes for your computer relative to the rest of the world, so from your program's point of view everything else "speeds up".
It sounds like what you might actually be looking for is an event scheduler, where you specify that certain events must happen at specific points in simulated time, and you want to be able to change the relationship between real time and simulated time (perhaps dynamically). You can run into boundary cases when you change the speed of time while the simulation is running, and you may also have to deal with cases where real time takes longer to come back than expected (your thread didn't get a time slice as soon as you wanted, so you might not actually be able to achieve the simulated time you're targeting).
For instance, suppose you wanted to update your simulation at least once per 50ms of simulated time. You can implement the simulation scheduler as a queue where you push events and use a scaled output from a normal Stopwatch class to drive the scheduler. The process looks something like this:
push (simulate at t=0) event to event queue
start stopwatch
lastTime = 0
simTime = 0
while running
    simTime += scale * (stopwatch.Time - lastTime)
    lastTime = stopwatch.Time
    while events in queue that have passed their time
        pop and execute event
        push (simulate at t = lastEventT + dt) event to event queue
This can be generalized to different types of events occurring at different intervals. You still need to deal with the boundary case where the event queue balloons because the simulation can't keep up with real time.
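A rough C# rendering of that pseudocode, using a SortedList as the event queue (all names are illustrative; duplicate scheduled times would need extra handling in a real implementation):

using System;
using System.Collections.Generic;
using System.Diagnostics;

class ScaledScheduler
{
    private readonly SortedList<double, Action> _queue = new SortedList<double, Action>();
    private readonly Stopwatch _watch = Stopwatch.StartNew();
    private double _simTime, _lastReal;

    public double Scale { get; set; } = 1.0; //relationship between real and simulated time

    public void Push(double simTimeMs, Action action) => _queue.Add(simTimeMs, action);

    //call this from your main loop; it advances simulated time and fires due events
    public void Pump()
    {
        double nowReal = _watch.Elapsed.TotalMilliseconds;
        _simTime += Scale * (nowReal - _lastReal);
        _lastReal = nowReal;

        //execute every event whose scheduled simulated time has passed
        while (_queue.Count > 0 && _queue.Keys[0] <= _simTime)
        {
            Action action = _queue.Values[0];
            _queue.RemoveAt(0);
            action();
        }
    }
}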
I'm not entirely sure what you're looking to do (doesn't a minute always have 60 seconds?), but I'd utilize Thread.Sleep() to accomplish what you want.