Switching a bit at high speed in C#

I'd like to toggle a bit faster than once per millisecond. I'd prefer to do this in C# Windows Forms, but it could also be, for example, a console app in C++ or C#. What I want to do is toggle a bit and send it via the LPT port.
Toggling the bit with this code is too slow:
PortAccess.Output(888,1);
Thread.Sleep(1);
PortAccess.Output(888,0);
Thread.Sleep(1);
I've read this post: How to use QueryPerformanceCounter?, but that's only a timer.
Please help :)

There is no easy or obvious way to do this kind of fine-grained timing control in the C#/.NET environment. You can use the Stopwatch class to get close, but the resolution isn't great for real-time work. One way to use it for something like this (crude code, but you busy-loop until the elapsed time reaches your desired interval):
Stopwatch swatch = new Stopwatch();
while (true)
{
    swatch.Reset();
    swatch.Start();
    PortAccess.Output(888, 1);
    while (swatch.ElapsedMilliseconds < 1) { }
    swatch.Stop();

    swatch.Reset();
    swatch.Start();
    PortAccess.Output(888, 0);
    while (swatch.ElapsedMilliseconds < 1) { }
    swatch.Stop();
}
Sleep should not be used for timing anywhere. Sleep basically only says "sleep for at least X milliseconds", so Sleep(1) might sleep for 25 ms.
And by the way: next to no PCs have parallel ports anymore. This is an ancient - no, the most ancient - way to write bits or flip outputs external to a PC, and doing it by writing directly to a PC I/O port is really crude as well. You could look for an external digital I/O device/board/interface with a decent driver - a much better idea.

First, you have to be aware that Sleep() does not have such a fine resolution. Usually it's about 20 ms, so your calls will wait much longer than what you want.
Second, on a system like Windows, which provides no real-time guarantees, you cannot rely on actually being able to perform something every millisecond, even if you keep the thread busy (using SpinWait(), for instance). The thread may, and will, still be interrupted by the OS as part of normal task switching, so you'll have periods of no activity lasting up to several milliseconds.
In short, don't try that. It will not work.

Keeping in mind what Lucero and Kieren said, I'll still answer your question.
You can use the Stopwatch to get sub-millisecond precision using ticks, where 1 tick == 1/10,000 of a millisecond. For example, this waits 1/10 of a millisecond:
Stopwatch sw = Stopwatch.StartNew();
while (sw.ElapsedTicks <1000);
Debug.Print(sw.ElapsedTicks.ToString());
You should make sure that Stopwatch has a high enough frequency for your needs on the system where you'll run it. Also, remember that you are not at the driver level here, so there is nothing approaching real-time guarantees with this.
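For reference, a quick check of what Stopwatch offers on a given machine could look like this (a small sketch, not part of the original answer):

using System;
using System.Diagnostics;

class StopwatchResolutionCheck
{
    static void Main()
    {
        // Frequency is the number of Stopwatch ticks per second; on recent .NET
        // runtimes it is typically 10,000,000 (one tick = 100 ns), but it can differ.
        Console.WriteLine("IsHighResolution: " + Stopwatch.IsHighResolution);
        Console.WriteLine("Frequency: " + Stopwatch.Frequency + " ticks/s");
        Console.WriteLine("Best resolution: " + (1e9 / Stopwatch.Frequency) + " ns per tick");
    }
}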

Related

C# performance counter Timer100Ns. How to?

I have an application which monitors a particular event and then starts calculating things once it happens. Events are irregular and can come in any pattern, from several per second to none for a long time.
I want to measure the percentage of time the application is busy (similar to CPU % usage).
I want to use a Timer100Ns counter.
Two questions:
Do I increment it by hardware ticks or by DateTime ticks (e.g. if I use a Stopwatch, do I use sw.ElapsedTicks or sw.Elapsed.Ticks)?
Do I need a base counter for it?
So I am about to write something like this:
Stopwatch sw = new Stopwatch();
sw.Start();
// Do some operation which is irregular by nature
sw.Stop();
// Measure utilization of the application
myCounterOfTypeTimer100Ns.IncrementBy(sw.Elapsed.Ticks);
Will it do ?
EDIT: I experimented with it a bit and now it's even more confusing. It actually shows the values I increment it by, not percentages.
The mystery unravelled. It appears I wasn't using it the way it was meant to be used (or rather, I didn't read the manual properly). If the sampling interval is 1 s (as in the perfmon live window) and your measured intervals are longer than 1 s, it shows you a nonsense number. To get smooth readings, the activity you are trying to measure must really be a fraction of 1 s; otherwise this counter is not a good fit.
The answer for this kind of problem (although it's not obvious, and it is a bit disturbing that nobody suggested it in a week) is actually SampleCounter.
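For reference, registering and feeding a Timer100Ns counter like the one in the snippet above would look roughly like this (a sketch only: the category and counter names are made up, creating a category requires admin rights, and, per the conclusion above, a SampleCounter may fit coarse intervals better):

using System.Diagnostics;

// One-time registration; no base counter is declared for Timer100Ns here.
if (!PerformanceCounterCategory.Exists("MyAppStats"))
{
    var counters = new CounterCreationDataCollection {
        new CounterCreationData("BusyTime", "Fraction of time spent busy",
                                PerformanceCounterType.Timer100Ns)
    };
    PerformanceCounterCategory.Create("MyAppStats", "Hypothetical demo category",
        PerformanceCounterCategoryType.SingleInstance, counters);
}

var busy = new PerformanceCounter("MyAppStats", "BusyTime", readOnly: false);

var sw = Stopwatch.StartNew();
// ... the irregular operation ...
sw.Stop();

// Timer100Ns expects 100 ns units. TimeSpan ticks (sw.Elapsed.Ticks) are always 100 ns,
// whereas sw.ElapsedTicks is in Stopwatch.Frequency units and may differ.
busy.IncrementBy(sw.Elapsed.Ticks);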

Why is the first iteration always faster than the next in a loop?

I would like to understand why the first iteration in the loop executes more quickly than the rest.
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < 10; i++)
{
    System.Threading.Thread.Sleep(100);
    Console.WriteLine("Finished at : {0}", ((double)sw.ElapsedTicks / Stopwatch.Frequency) * 1e3);
}
When I execute the code I get the following:
Initially I thought it could be due to the accuracy of the Stopwatch class, but then why would it apply only to the first iteration? Correct me if I'm missing something.
This is a very flawed benchmark. For one, Thread.Sleep does not guarantee that you'll sleep for exactly 100 ms. Try much longer sleeps and you'll see more consistent results.
So it might even just be scheduling: the subsequent iterations are always doing sleep after sleep. Since Sleep works off the system interrupt clock, the sleeps after the first should take a similar amount of time, while the first has to "sync up" with the clock first.
If you add another sleep before the cycle (and before starting the stopwatch), you'll likely get closer times for each of the iterations.
Or even better, don't use sleeps at all. If you do some actual CPU work instead, you'll avoid thread switches (provided you have enough spare CPU) and many other costs not associated with the loop itself. For example:
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < 10; i++)
{
    Thread.SpinWait(10000000);
    Console.WriteLine("Finished at : {0}", ((double)sw.ElapsedTicks / Stopwatch.Frequency) * 1e3);
}
This will give you much more consistent results, because it doesn't depend on the clock at all.
There are many other things that can complicate a benchmark like this, which is why benchmarks simply aren't done this way. There will always be deviations, and they can get rather big, especially on a system with a lot of other work going on.
In other words, if you're getting differences in CPU work execution time on the scale of milliseconds, someone is stealing your work. There's nothing in a modern CPU that would account for such a huge difference just based on, e.g., an i++ being there or not.
I could describe a lot more issues with your code, but it probably isn't worth it. Just search for best practices on benchmarking CPU work in C#, and you'll get much more out of it.
Oh, and just to hammer the point home, on my computer the first iteration tends to land anywhere from 99 up to 100 ms. That would be highly unusual, since the default timer resolution is 15.6 ms rather than 1 ms, but the culprit is easily found: Chrome sets it to 1 ms. Ouch.
What you're outputting is the total time elapsed since the start, so the time increasing by about 100 ms per line is exactly what you should expect.
But when you use Thread.Sleep, you're giving up control of the thread, and you get it back only somewhere close to the time you've specified. That time will be in multiples of the system quantum, so what you specify cannot possibly be exact. If other threads of higher priority are doing work, it's less likely that your thread will be given processor time at a granularity close to the time you've requested.
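If the intent was to time each Sleep individually rather than the running total, a small variation of the question's loop (a sketch, not from the original answer) restarts the stopwatch on every pass:

Stopwatch sw = new Stopwatch();
for (int i = 0; i < 10; i++)
{
    sw.Restart();   // measure this iteration only, not the time since the loop began
    System.Threading.Thread.Sleep(100);
    Console.WriteLine("Iteration took: {0} ms", ((double)sw.ElapsedTicks / Stopwatch.Frequency) * 1e3);
}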

Sleep() in C# does not work properly

The code below shows that Sleep(1) actually sleeps for an average of 2 milliseconds!
DateTime dt = DateTime.Now;
int i = 0;
long max = -1;
while (true)
{
    Stopwatch st = new Stopwatch();
    st.Restart();
    System.Threading.Thread.Sleep(1);
    long el = st.ElapsedMilliseconds;
    max = Math.Max(el, max);
    i++;
    double time = DateTime.Now.Subtract(dt).TotalMilliseconds;
    if (time >= 1000)
    {
        Console.WriteLine("Time =" + time);
        Console.WriteLine("i =" + i);
        Console.WriteLine("max =" + max);
        System.Threading.Thread.Sleep(200);
        i = 0;
        dt = DateTime.Now;
        max = -1;
    }
}
Typical Output:
Time =1000.1553
i =495
max =5
Could somebody explain the reason, and how can I fix this problem?
Getting 2 milliseconds is fairly unusual; most people who run your code will get 15 instead. It is rather machine dependent and mostly depends on what other programs you've got running on your machine. One way to change it, for example, is to start Chrome, and you'll then see sleeps close to 1 msec.
You should display more digits to avoid rounding artifacts. A simplification of the code:
static void Main(string[] args) {
    Stopwatch st = new Stopwatch();
    while (true) {
        st.Restart();
        System.Threading.Thread.Sleep(1);
        st.Stop();
        Console.Write("{0} ", st.Elapsed.Ticks / 10000.0);
        System.Threading.Thread.Sleep(200);
    }
}
Which produces on my machine:
16.2074 15.6224 15.6291 15.5313 15.6242 15.6176 15.6152 15.6279 15.6194 15.6128
15.6236 15.6236 15.6134 15.6158 15.6085 15.6261 15.6297 15.6128 15.6261 15.6218
15.6176 15.6055 15.6218 15.6224 15.6212 15.6134 15.6128 15.5928 15.6375 15.6279
15.6146 15.6254 15.6248 15.6091 15.6188 15.4679 15.6019 15.6212 15.6164 15.614
15.7504 15.6085 15.55 15.6248 15.6152 15.6248 15.6242 15.6158 15.6188 15.6206 ...
This is normal output, I have no programs running on my machine that mess with the operating system. This will be the way it works on most machines.
Some background on what's going on. When you call Thread.Sleep() with a value larger than 0 then you voluntarily give up the processor and your thread goes into a wait state. It will resume when the operating system's thread scheduler runs and enough time has expired.
What's key about that sentence is "when the thread scheduler runs". It runs at distinct times in Windows, driven by the clock interrupt that wakes the processor from its HALT state. This starts in the kernel; one primary task of the clock interrupt is to increment the clock value, the one used by, for example, DateTime.Now and Environment.TickCount.
The clock does not have infinite resolution; it only changes when the clock interrupt occurs. By default on all modern Windows versions, that clock interrupt occurs 64 times per second, which makes the clock accuracy 1 / 64 = 15.625 milliseconds. You can clearly see this value in the output of the program on my machine.
So what happened on your machine is that a program changed the clock interrupt rate. That is a rather unfortunate inheritance from Windows 3.1, the first Windows version that supported multimedia timers: timers that can tick at a high rate to support programs that need to do things with media, like animating a GIF, tuning the frame rate of a video player, or keeping the audio card fed with sound without stutter or excessive latency. Programs like Chrome.
They do this by calling timeBeginPeriod(). They usually go whole-hog and pick the smallest allowable value, 1 millisecond; apparently 2 msec on your machine. You can do this too, and you'll then see the Sleep(1) call taking about 1 msec instead of 2. Don't forget to call timeEndPeriod() when you no longer need the high rate.
But do keep in mind that this is a pretty unfriendly thing to do. Waking up the processor this often is very detrimental to battery life, always an issue on portable machines. Which explains what mystified this site's founding father in his blog post "Why does Windows have terrible battery life": it doesn't, Chrome has terrible battery life :) If you want to find out which program messed with the clock, you can run powercfg -energy from the command line.
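For reference, timeBeginPeriod() and timeEndPeriod() are plain winmm.dll exports, so raising the rate from C# is a short P/Invoke (a sketch; remember the battery-life caveat above):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class TimerResolutionDemo
{
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint milliseconds);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint milliseconds);

    static void Main()
    {
        timeBeginPeriod(1);   // ask for a 1 msec clock interrupt rate
        try
        {
            var sw = System.Diagnostics.Stopwatch.StartNew();
            Thread.Sleep(1);
            Console.WriteLine("Sleep(1) took {0:F3} msec", sw.Elapsed.TotalMilliseconds);
        }
        finally
        {
            timeEndPeriod(1); // always restore the default rate when done
        }
    }
}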
I don't think it's weird to see this result. The Stopwatch itself probably takes a millisecond. I highly doubt you can expect a precise 1 millisecond; there is always overhead involved, and I doubt Sleep guarantees that the sleep time is that precise.
Personally I would expect a range from 1-5 milliseconds.
Thread.Sleep is designed to pause a thread for at least the number of milliseconds you specify. It basically gives up execution of the current thread, and it's up to the operating system's scheduler to wake it again. The thing is, you cannot be sure that the underlying OS scheduler will allow the thread to resume immediately.
I think System.Threading.Thread.SpinWait is what you are looking for.

Why Thread.Sleep waits longer than requested when other apps are running?

I have a small problem regarding threading in C#.
For some reason, my thread's delay goes from 32 ms down to 16 ms when I open Chrome, and when I close Chrome it goes back to 32 ms. I'm using Thread.Sleep(1000 / 60) for the delay.
Can somebody explain why this is happening, and maybe suggest a possible solution?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ConsoleApplication2
{
    class Program
    {
        static bool alive;
        static Thread thread;
        static DateTime last;

        static void Main(string[] args)
        {
            alive = true;
            thread = new Thread(new ThreadStart(Loop));
            thread.Start();
            Console.ReadKey();
        }

        static void Loop()
        {
            last = DateTime.Now;
            while (alive)
            {
                DateTime current = DateTime.Now;
                TimeSpan span = current - last;
                last = current;
                Console.WriteLine("{0}ms", span.Milliseconds);
                Thread.Sleep(1000 / 60);
            }
        }
    }
}
Just a post to confirm Matthew's correct answer. The accuracy of Thread.Sleep() is affected by the clock interrupt rate on Windows. By default it ticks 64 times per second, once every 15.625 msec. A Sleep() can only complete when such an interrupt occurs. The mental image here is the one induced by the word "sleep": the processor is actually asleep and not executing code, and only that clock interrupt is going to wake it up again to resume executing your code.
Your choice of 1000/60 was a very unhappy one: it asks for 16 msec, just a bit over 15.625, so you'll always wake back up at least 2 ticks later: 2 x 15.625 = 31 msec. Which is what you measured.
That interrupt rate is however not fixed; it can be altered by a program, by calling CreateTimerQueueTimer() or the legacy timeBeginPeriod(). A browser in general has a need to do so: something as simple as animating a GIF requires a better timer, since GIF frame times are specified in units of 10 msec. And in general any multimedia-related operation needs it.
A very ugly side effect of a program doing this is that the increased clock interrupt rate has system-wide effects, as it did in your program. Your timer suddenly got accurate and you actually got the sleep duration you asked for, 16 msec. Chrome is probably changing the rate to 1000 ticks per second, the maximum supported. And good for business when you have a competing operating system.
You can avoid this problem by picking a sleep duration that's a closer match to the default interrupt rate. If you ask for 15 you'll get 15.625, and Chrome cannot have an effect on that. 31 is the next sweet spot. Etcetera: integer multiples of 15.625, rounded down.
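To illustrate, a hypothetical helper that snaps a requested delay to the nearest such sweet spot (not from the answer; it assumes the default 64 Hz clock):

// Round a requested delay to the largest Sleep() argument that still completes
// in the same number of default 15.625 msec clock interrupts (e.g. 16 -> 15, 32 -> 31).
static int ToSweetSpot(double requestedMs)
{
    const double interrupt = 1000.0 / 64.0;   // 15.625 msec
    int ticks = Math.Max(1, (int)Math.Round(requestedMs / interrupt));
    return (int)Math.Floor(ticks * interrupt);
}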
UPDATE: do note that this behavior changed just recently. Starting at Win10 version 2004, the effect is no longer global so Chrome can no longer affect your program.
Starting at Win11, an app with an inactive window operates with the default interrupt rate.
This is possibly occurring because Chrome (or some component of Chrome) is calling timeBeginPeriod() with a value that increases the resolution of the Windows API function Sleep(), which is called from Thread.Sleep().
See this thread for more information: Can I improve the resolution of Thread.Sleep?
I noticed this behavior with Windows Media Player some years ago: The behavior of one of our applications changed depending on whether Windows Media Player was running or not. It turned out, WMP was calling timeBeginPeriod().
However, in general, Thread.Sleep() (and by extension, the Windows API Sleep()) is extremely inaccurate.
Basically, Thread.Sleep isn't very accurate.
Thread.Sleep(1000/60) (which evaluates to Thread.Sleep(16)), asks the thread to go to sleep and come back when 16ms has elapsed. However, that thread might not get to execute again until a greater amount of time has elapsed; say, for example, 32ms.
As for why Chrome is having an effect, I don't know but since Chrome spawns one new thread for each tab, it'll have an effect on the system's threading behaviour.
First, 1000 / 60 = 16 ms
The PC clock has a resolution of around 18-20ms, Sleep() and the result of DateTime.Now will be rounded to a multiple of that value.
So, Thread.Sleep(5) and Thread.Sleep(15) will delay for the same amount of time. And that can be 20, 40 or even 60 ms. You do not get much guarantees, the argument for Sleep() is only a minimum.
And another process (Chrome) that hogs the CPU (even a little) can influence the behavior of your program that way. Edit: that is the reverse of what you're seeing, so something slightly different is happening here. Still, it's about rounding to timeslices.
You are hitting a resolution issue with DateTime. You should use Stopwatch for this kind of precision. Eric Lippert states that DateTime is only accurate to around 30 ms, so your readings with it in this case won't tell you anything.
Measurement is half of your problem; the actual time variation in your loop is due to the Sleep resolution (as stated in the other answers).
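As an illustration (a sketch, not the original poster's code, reusing the question's alive flag), the measuring part of the loop rewritten around Stopwatch instead of DateTime could look like this:

static void Loop()
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    double lastMs = 0;
    while (alive)
    {
        double nowMs = sw.Elapsed.TotalMilliseconds;
        Console.WriteLine("{0:F3}ms", nowMs - lastMs);   // span since the previous pass
        lastMs = nowMs;
        Thread.Sleep(1000 / 60);
    }
}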

Need microsecond delay in .NET app for throttling UDP multicast transmission rate

I'm writing a UDP multicast client/server pair in C# and I need a delay on the order of 50-100 µsec (microseconds) to throttle the server transmission rate. This helps to avoid significant packet loss and also helps to keep from overloading the clients that are disk I/O bound. Please do not suggest Thread.Sleep or Thread.SpinWait. I would not ask if I needed either of those.
My first thought was to use some kind of a high-performance counter and do a simple while() loop checking the elapsed time but I'd like to avoid that as it feels kludgey. Wouldn't that also peg the CPU utilization for the server process?
Bonus points for a cross-platform solution, i.e. not Windows specific. Thanks in advance, guys!
Very short sleep times are generally best achieved by a CPU spin loop (like the kind you describe). You generally want to avoid using the high-precision timer calls as they can themselves take up time and skew the results. I wouldn't worry too much about CPU pegging on the server for such short wait times.
I would encapsulate the behavior in a class, as follows:
1. Create a class whose static constructor runs a spin loop for several million iterations and captures how long it takes. This gives you an idea of how long a single loop cycle takes on the underlying hardware.
2. Compute a µs-per-iteration value that you can use to compute arbitrary sleep times.
3. When asked to sleep for a particular period of time, divide the microseconds to sleep by the µs-per-iteration value previously computed, to get the number of loop iterations to perform.
4. Spin in a while loop until the estimated time elapses.
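A sketch of those steps (class and member names are illustrative, and the calibration assumes the CPU clock stays roughly constant and the thread is not preempted):

using System;
using System.Diagnostics;

static class MicroDelay
{
    static readonly double IterationsPerMicrosecond;

    // Static constructor: time a large, fixed number of spin iterations once,
    // then derive how many iterations correspond to one microsecond.
    static MicroDelay()
    {
        const long calibration = 10_000_000;
        var sw = Stopwatch.StartNew();
        Spin(calibration);
        sw.Stop();
        double elapsedUs = sw.ElapsedTicks * 1_000_000.0 / Stopwatch.Frequency;
        IterationsPerMicrosecond = calibration / elapsedUs;
    }

    static long _sink;   // written so the JIT cannot remove the loop entirely

    static void Spin(long iterations)
    {
        long acc = 0;
        for (long i = 0; i < iterations; i++) acc += i;
        _sink = acc;
    }

    // Busy-wait for approximately the requested number of microseconds.
    public static void Wait(double microseconds)
    {
        Spin((long)(microseconds * IterationsPerMicrosecond));
    }
}

MicroDelay.Wait(75) would then spin for roughly 75 µs, at the cost of one busy core; the estimate drifts if the OS preempts the thread or the CPU changes frequency.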
I would use a Stopwatch, but you'd need a loop.
Read this to add more extensions to the Stopwatch, like ElapsedMicroseconds.
Or something like this might work too (System.Diagnostics.Stopwatch.IsHighResolution must be true):
static void Main(string[] args)
{
    Stopwatch sw;
    sw = Stopwatch.StartNew();
    int i = 0;
    while (sw.ElapsedMilliseconds <= 5000)
    {
        if (sw.Elapsed.Ticks % 100 == 0)
        { i++; /* do something */ }
    }
    sw.Stop();
}
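The ElapsedMicroseconds extension referred to above can be a one-liner (a sketch; the method name is just what the linked article suggests, the math is raw ticks scaled by the counter frequency):

using System.Diagnostics;

static class StopwatchExtensions
{
    // Convert raw Stopwatch ticks to microseconds using the counter frequency.
    public static long ElapsedMicroseconds(this Stopwatch sw)
    {
        return sw.ElapsedTicks * 1_000_000L / Stopwatch.Frequency;
    }
}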
I've had experience with such a requirement when I needed more precision in my multicast application.
I found that the best solution resides with the multimedia timers, as seen in this example.
I used that implementation and added a TPL async invoke to it. See my SimpleMulticastAnalyzer project for more information.
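For readers who want to see what the multimedia-timer route looks like, here is a minimal P/Invoke sketch (Windows-only; the resolution floor is 1 msec, so it throttles in millisecond batches rather than 50-100 µs gaps; names like MultimediaTimerDemo are made up):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class MultimediaTimerDemo
{
    delegate void TimerProc(uint id, uint msg, UIntPtr user, UIntPtr dw1, UIntPtr dw2);

    [DllImport("winmm.dll")]
    static extern uint timeSetEvent(uint delayMs, uint resolutionMs, TimerProc callback,
                                    UIntPtr user, uint eventType);

    [DllImport("winmm.dll")]
    static extern uint timeKillEvent(uint timerId);

    const uint TIME_PERIODIC = 0x0001;

    static void Main()
    {
        int fired = 0;
        // Keep the delegate in a local that stays alive while native code can call it.
        TimerProc onTick = (id, msg, user, dw1, dw2) => Interlocked.Increment(ref fired);

        uint timerId = timeSetEvent(1, 0, onTick, UIntPtr.Zero, TIME_PERIODIC);
        Thread.Sleep(1000);
        timeKillEvent(timerId);

        Console.WriteLine("Callbacks in one second: " + fired);
        GC.KeepAlive(onTick);
    }
}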
static void udelay(long us)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    long v = (us * System.Diagnostics.Stopwatch.Frequency) / 1000000;
    while (sw.ElapsedTicks < v)
    {
    }
}

static void Main(string[] args)
{
    for (int i = 0; i < 100; i++)
    {
        Console.WriteLine("" + i + " " + DateTime.Now.Second + "." + DateTime.Now.Millisecond);
        udelay(1000000);
    }
}
I would discourage using a spin loop, as it consumes a core and blocks a thread. Thread.Sleep is better: it doesn't use processor resources while sleeping, it just gives up its time slice. Try it, and you'll see in Task Manager how the CPU usage spikes with the spin loop.
Have you looked at multimedia timers? You could probably find a .NET library that wraps the API calls.
