I called timeBeginPeriod(1), but it didn't work - c#

Windows' default precision for Thread.Sleep() is 15.625 ms (1000 / 64), i.e. if you call Thread.Sleep(1), the elapsed time is 15 ms or 16 ms. I want to improve the accuracy to 1 ms.
There's a function, timeBeginPeriod, which can change the accuracy, but I didn't get what I wanted. Here's my code:
[DllImport("winmm.dll", EntryPoint = "timeBeginPeriod")]
public static extern void TimeBeginPeriod(int t);
[DllImport("winmm.dll", EntryPoint = "timeEndPeriod")]
public static extern void TimeEndPeriod(int t);
TimeBeginPeriod(1);
var t1 = Environment.TickCount;
Thread.Sleep(1);
var t2 = Environment.TickCount;
Console.WriteLine(t2 - t1);
TimeEndPeriod(1);
What I expected is 1 or 2, but I got 15 or 16 actually.
Is there any code I missed?

Based on discussions in various forums, I have the suspicion that this feature was broken on Windows 10 for a while, but it appears to be fixed on my machine running Windows 10 version 2004. Without the call to timeBeginPeriod, the sleep timer has a resolution of ~15 ms. After calling timeBeginPeriod(1), the sleep timer resolution goes down to 1-2 ms.
Can others confirm this on an up-to-date Windows system?
Addendum 1: I just found https://stackoverflow.com/a/48011619/295690 which indicates that even though the feature might actually be repaired, there are good reasons to avoid it anyway.
Addendum 2: Even more insight from https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/ leads me to assume that it is not just a matter of the OS version. I could well imagine that certain power saving modes or hardware configurations prevent programs from increasing the system-wide tick frequency.

I have an update on this problem.
The accuracy of Environment.TickCount is about 16 ms; it's not a high-precision method. Actually, by using DateTime we can get a more accurate elapsed time even without using TimeBeginPeriod/TimeEndPeriod on Windows 10, version 1903.
long t1 = Environment.TickCount;
DateTime dt1 = DateTime.Now;
Thread.Sleep(1);
DateTime dt2 = DateTime.Now;
long t2 = Environment.TickCount;
Console.WriteLine("DateTime Method: " + (dt2-dt1).TotalMilliseconds);
Console.WriteLine("TickCount Method: " + (t2 - t1));
DateTime Method: 2.0074
TickCount Method: 0
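A Stopwatch sidesteps the coarse granularity of both Environment.TickCount and DateTime.Now. Below is a minimal measurement sketch of my own, reusing the winmm.dll imports from the question; the actual numbers will vary with OS version and power settings, as the addenda above note.
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

static class SleepResolutionTest
{
    [DllImport("winmm.dll", EntryPoint = "timeBeginPeriod")]
    static extern uint TimeBeginPeriod(uint ms);

    [DllImport("winmm.dll", EntryPoint = "timeEndPeriod")]
    static extern uint TimeEndPeriod(uint ms);

    static void Main()
    {
        Measure("default resolution");     // typically ~15-16 ms per Sleep(1)
        TimeBeginPeriod(1);
        try
        {
            Measure("timeBeginPeriod(1)"); // typically ~1-2 ms per Sleep(1)
        }
        finally
        {
            TimeEndPeriod(1);              // always undo the system-wide request
        }
    }

    static void Measure(string label)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 10; i++)
            Thread.Sleep(1);
        sw.Stop();
        Console.WriteLine("{0}: {1:F2} ms per Sleep(1)", label, sw.Elapsed.TotalMilliseconds / 10);
    }
}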


How to find how long my computer is being idle (idle time) on C# Desktop Application
Idle time is the total time a computer or device has been powered on, but has not been used. If a computer or computer device is idle for a set period of time, it may go into a standby mode or power off.
Is there any way to get that?
If by "idle" you mean the time elapsed since the last user input (as a GetIdleTime() function would report), then you have to use the GetLastInputInfo() function (see MSDN):
In C# you have to P/Invoke it:
[DllImport("user32.dll")]
static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);
[StructLayout(LayoutKind.Sequential)]
struct LASTINPUTINFO {
public uint cbSize;
public int dwTime;
}
It returns the tick count (milliseconds since system boot) of the last user input. The first thing you need, then, is the system boot time; you get that from Environment.TickCount (the number of milliseconds since boot):
DateTime bootTime = DateTime.UtcNow.AddMilliseconds(-Environment.TickCount);
Now you can have time of last input:
LASTINPUTINFO lii = new LASTINPUTINFO();
lii.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO));
GetLastInputInfo(ref lii);
DateTime lastInputTime = bootTime.AddMilliseconds(lii.dwTime);
Elapsed time will then simply be:
TimeSpan idleTime = DateTime.UtcNow.Subtract(lastInputTime);
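For reference, here is one way the pieces above could be combined into a single helper. This is a sketch of my own (the method name is arbitrary), and it ignores the fact that Environment.TickCount wraps after roughly 24.9 days of uptime.
using System;
using System.Runtime.InteropServices;

static class IdleTimer
{
    [StructLayout(LayoutKind.Sequential)]
    struct LASTINPUTINFO
    {
        public uint cbSize;
        public uint dwTime;
    }

    [DllImport("user32.dll")]
    static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

    // Returns the time elapsed since the last keyboard/mouse input.
    public static TimeSpan GetIdleTime()
    {
        var lii = new LASTINPUTINFO();
        lii.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO));
        if (!GetLastInputInfo(ref lii))
            throw new System.ComponentModel.Win32Exception();

        // Both values are milliseconds since boot, so their difference is the idle time.
        uint idleMs = unchecked((uint)Environment.TickCount - lii.dwTime);
        return TimeSpan.FromMilliseconds(idleMs);
    }
}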
Depending on your UI framework (you did not specify), you need to catch the WM_ENTERIDLE message and store the time at which the message is received.
How should you catch this message? These guys will tell you how to do it in WinForms.
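For WinForms specifically, one rough way to observe WM_ENTERIDLE (0x0121) is to override WndProc on the form that owns the dialog or menu. This is a sketch under that assumption; the field name is my own invention.
// Rough WinForms sketch: WM_ENTERIDLE is sent to the owner window while one of
// its modal dialogs or menus is waiting for input.
public partial class MainForm : System.Windows.Forms.Form
{
    private const int WM_ENTERIDLE = 0x0121;
    private DateTime _lastIdle;   // hypothetical field; store the timestamp wherever suits you

    protected override void WndProc(ref System.Windows.Forms.Message m)
    {
        if (m.Msg == WM_ENTERIDLE)
            _lastIdle = DateTime.UtcNow;
        base.WndProc(ref m);
    }
}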

Why is my C# program faster in a profiler?

I have a relatively large system (~25000 lines so far) for monitoring radio-related devices. It shows graphs and such using the latest version of ZedGraph.
The program is coded using C# on VS2010 with Win7.
The problem is:
when I run the program from within VS, it runs slow
when I run the program from the built EXE, it runs slow
when I run the program through Performance Wizard / CPU Profiler, it runs Blazing Fast.
when I run the program from the built EXE, and then start VS and Attach a profiler to ANY OTHER PROCESS, my program speeds up!
I want the program to always run that fast!
Every project in the solution is set to RELEASE, Debug unmanaged code is DISABLED, Define DEBUG and TRACE constants is DISABLED; for Optimize Code, Warning Level, and Suppress JIT I tried both settings,
in short I tried all the solutions already proposed on StackOverflow - none worked. Program is slow outside profiler, fast in profiler.
I don't think the problem is in my code, because it becomes fast if I attach the profiler to other, unrelated process as well!
Please help!
I really need it to be that fast everywhere, because it's a business critical application and performance issues are not tolerated...
UPDATES 1 - 8 follow
--------------------Update1:--------------------
The problem seems not to be ZedGraph-related, because it still manifests after I replaced ZedGraph with my own basic drawing.
--------------------Update2:--------------------
Running the program in a Virtual machine, the program still runs slow, and running profiler from the Host machine doesn't make it fast.
--------------------Update3:--------------------
Starting screen capture to video also speeds the program up!
--------------------Update4:--------------------
If I open the Intel graphics driver settings window (this thing: http://www.intel.com/support/graphics/sb/img/resolution_new.jpg)
and just constantly hover the cursor over buttons so they glow, etc., my program speeds up!
It doesn't speed up if I run GPUz or Kombustor though, so there's no downclocking on the GPU - it stays at a steady 850 MHz.
--------------------Update5:--------------------
Tests on different machines:
-On my Core i5-2400S with Intel HD2000, UI runs slow and CPU usage is ~15%.
-On a colleague's Core 2 Duo with Intel G41 Express, UI runs fast, but CPU usage is ~90% (which isn't normal either)
-On Core i5-2400S with dedicated Radeon X1650, UI runs blazing fast, CPU usage is ~50%.
--------------------Update6:--------------------
A snip of code showing how I update a single graph (graphFFT is an encapsulation of ZedGraphControl for ease of use):
public void LoopDataRefresh() //executes in a new thread
{
    while (true)
    {
        while (!d.Connected)
            Thread.Sleep(1000);
        if (IsDisposed)
            return;
        //... other graphs update here
        if (signalNewFFT && PanelFFT.Visible)
        {
            signalNewFFT = false;
            #region FFT
            bool newRange = false;
            if (graphFFT.MaxY != d.fftRangeYMax)
            {
                graphFFT.MaxY = d.fftRangeYMax;
                newRange = true;
            }
            if (graphFFT.MinY != d.fftRangeYMin)
            {
                graphFFT.MinY = d.fftRangeYMin;
                newRange = true;
            }
            List<PointF> points = new List<PointF>(2048);
            int tempLength = 0;
            short[] tempData = new short[2048];
            int i = 0;
            lock (d.fftDataLock)
            {
                tempLength = d.fftLength;
                tempData = (short[])d.fftData.Clone();
            }
            foreach (short s in tempData)
                points.Add(new PointF(i++, s));
            graphFFT.SetLine("FFT", points);
            if (newRange)
                graphFFT.RefreshGraphComplete();
            else if (PanelFFT.Visible)
                graphFFT.RefreshGraph();
            #endregion
        }
        //... other graphs update here
        Thread.Sleep(5);
    }
}
SetLine is:
public void SetLine(String lineTitle, List<PointF> values)
{
    IPointListEdit ip = zgcGraph.GraphPane.CurveList[lineTitle].Points as IPointListEdit;
    int tmp = Math.Min(ip.Count, values.Count);
    int i = 0;
    while (i < tmp)
    {
        if (values[i].X > peakX)
            peakX = values[i].X;
        if (values[i].Y > peakY)
            peakY = values[i].Y;
        ip[i].X = values[i].X;
        ip[i].Y = values[i].Y;
        i++;
    }
    while (ip.Count < values.Count)
    {
        if (values[i].X > peakX)
            peakX = values[i].X;
        if (values[i].Y > peakY)
            peakY = values[i].Y;
        ip.Add(values[i].X, values[i].Y);
        i++;
    }
    while (values.Count > ip.Count)
    {
        ip.RemoveAt(ip.Count - 1);
    }
}
RefreshGraph is:
public void RefreshGraph()
{
    if (!explicidX && autoScrollFlag)
    {
        zgcGraph.GraphPane.XAxis.Scale.Max = Math.Max(peakX + grace.X, rangeX);
        zgcGraph.GraphPane.XAxis.Scale.Min = zgcGraph.GraphPane.XAxis.Scale.Max - rangeX;
    }
    if (!explicidY)
    {
        zgcGraph.GraphPane.YAxis.Scale.Max = Math.Max(peakY + grace.Y, maxY);
        zgcGraph.GraphPane.YAxis.Scale.Min = minY;
    }
    zgcGraph.Refresh();
}
--------------------Update7:--------------------
Just ran it through the ANTS profiler. It tells me that the ZedGraph refresh counts when the program is fast are precisely two times higher compared to when it's slow.
Here are the screenshots:
I find it VERY strange that, considering the small difference in the length of the sections, performance differs by exactly a factor of two.
Also, I updated the GPU driver, that didn't help.
--------------------Update8:--------------------
Unfortunately, for a few days now, I'm unable to reproduce the issue... I'm getting a constant acceptable speed (which still appears a bit slower than what I had in the profiler two weeks ago) which isn't affected by any of the factors that used to affect it two weeks ago - profiler, video capturing, or the GPU driver window. I still have no explanation of what was causing it...
Luaan posted the solution in the comments above: it's the system-wide timer resolution. The default resolution is 15.6 ms; the profiler sets the resolution to 1 ms.
I had the exact same problem, very slow execution that would speed up when the profiler was opened. The problem went away on my PC but popped back up on other PCs seemingly at random. We also noticed the problem disappeared when running a Join Me window in Chrome.
My application transmits a file over a CAN bus. The app loads a CAN message with eight bytes of data, transmits it and waits for an acknowledgment. With the timer set to 15.6ms each round trip took exactly 15.6ms and the entire file transfer would take about 14 minutes. With the timer set to 1ms round trip time varied but would be as low as 4ms and the entire transfer time would drop to less than two minutes.
You can verify your system timer resolution as well as find out which program increased the resolution by opening a command prompt as administrator and entering:
powercfg -energy -duration 5
The output file will have the following in it somewhere:
Platform Timer Resolution:Platform Timer Resolution
The default platform timer resolution is 15.6ms (15625000ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The timer resolution may be increased due to multimedia playback or graphical animations.
Current Timer Resolution (100ns units) 10000
Maximum Timer Period (100ns units) 156001
My current resolution is 1 ms (10,000 units of 100 ns), and it is followed by a list of the programs that requested the increased resolution.
This information as well as more detail can be found here: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
Here is some code to increase the timer resolution (originally posted as the answer to this question: how to set timer resolution from C# to 1 ms?):
public static class WinApi
{
    /// <summary>TimeBeginPeriod(). See the Windows API documentation for details.</summary>
    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Interoperability", "CA1401:PInvokesShouldNotBeVisible"), System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Security", "CA2118:ReviewSuppressUnmanagedCodeSecurityUsage"), SuppressUnmanagedCodeSecurity]
    [DllImport("winmm.dll", EntryPoint = "timeBeginPeriod", SetLastError = true)]
    public static extern uint TimeBeginPeriod(uint uMilliseconds);

    /// <summary>TimeEndPeriod(). See the Windows API documentation for details.</summary>
    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Interoperability", "CA1401:PInvokesShouldNotBeVisible"), System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Security", "CA2118:ReviewSuppressUnmanagedCodeSecurityUsage"), SuppressUnmanagedCodeSecurity]
    [DllImport("winmm.dll", EntryPoint = "timeEndPeriod", SetLastError = true)]
    public static extern uint TimeEndPeriod(uint uMilliseconds);
}
Use it like this to increase resolution: WinApi.TimeBeginPeriod(1);
And like this to return to the default: WinApi.TimeEndPeriod(1);
The parameter passed to TimeEndPeriod() must match the parameter that was passed to TimeBeginPeriod().
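Since the calls must be balanced, one convenient pattern is a small disposable wrapper around the WinApi class above. This is a sketch of my own, not part of the original answer; the class name is arbitrary.
using System;

// Scoped wrapper so the matching TimeEndPeriod() call cannot be forgotten.
public sealed class TimerResolutionScope : IDisposable
{
    private readonly uint _ms;

    public TimerResolutionScope(uint ms)
    {
        _ms = ms;
        WinApi.TimeBeginPeriod(ms);
    }

    public void Dispose()
    {
        WinApi.TimeEndPeriod(_ms); // must match the value passed to TimeBeginPeriod()
    }
}

// Usage:
// using (new TimerResolutionScope(1))
// {
//     // code that needs ~1 ms sleep/timer granularity
// }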
There are situations when slowing down a thread can speed up other threads significantly, usually when one thread is polling or locking some common resource frequently.
For instance (this is a Windows Forms example), when the main thread is checking overall progress in a tight loop instead of using a timer:
private void SomeWork() {
    // start the worker thread here
    while (!PollDone()) {
        progressBar1.Value = PollProgress();
        Application.DoEvents(); // keep the GUI responsive
    }
}
Slowing it down could improve performance:
private void SomeWork() {
    // start the worker thread here
    while (!PollDone()) {
        progressBar1.Value = PollProgress();
        System.Threading.Thread.Sleep(300); // give the polled thread some time to work instead of responding to your poll
        Application.DoEvents(); // keep the GUI responsive
    }
}
Done correctly, one should avoid using the DoEvents call altogether:
private Timer tim = new Timer() { Interval = 300 };

private void SomeWork() {
    // start the worker thread here
    tim.Tick += tim_Tick;
    tim.Start();
}

private void tim_Tick(object sender, EventArgs e) {
    tim.Enabled = false; // prevent timer messages from piling up
    if (PollDone()) {
        tim.Tick -= tim_Tick;
        return;
    }
    progressBar1.Value = PollProgress();
    tim.Enabled = true;
}
Calling Application.DoEvents() can potentially cause a lot of headaches when GUI elements have not been disabled and the user kicks off other events, or the same event a second time, causing re-entrant calls that queue the first action behind the new one; but I'm going off topic.
That example is probably too WinForms-specific, so I'll try a more general one. If you have a thread that is filling a buffer that is processed by other threads, be sure to leave some System.Threading.Thread.Sleep() slack in the loop to allow the other threads to do some processing before checking whether the buffer needs to be filled again:
public class WorkItem {
    // populate with something useful
}

public static object WorkItemsSyncRoot = new object();
public static Queue<WorkItem> workitems = new Queue<WorkItem>();

public void FillBuffer() {
    while (!done) {
        lock (WorkItemsSyncRoot) {
            if (workitems.Count < 30) {
                workitems.Enqueue(new WorkItem(/* load a file or something */ ));
            }
        }
    }
}
The worker threads will have difficulty obtaining anything from the queue, since it is constantly being locked by the filling thread. Adding a Sleep() (outside the lock) can significantly speed up the other threads:
public void FillBuffer() {
    while (!done) {
        lock (WorkItemsSyncRoot) {
            if (workitems.Count < 30) {
                workitems.Enqueue(new WorkItem(/* load a file or something */ ));
            }
        }
        System.Threading.Thread.Sleep(50);
    }
}
Hooking up a profiler could in some cases have the same effect as the sleep function.
I'm not sure if I've given representative examples (it's quite hard to come up with something simple), but I guess the point is clear: putting Sleep() in the correct place can help improve the flow of other threads.
---------- Edit after Update7 -------------
I'd remove that LoopDataRefresh() thread altogether. Instead, put a timer in your window with an interval of at least 20 ms (which would be 50 frames per second if none were skipped):
private void tim_Tick(object sender, EventArgs e) {
    tim.Enabled = false; // skip frames that come while we're still drawing
    if (IsDisposed) {
        tim.Tick -= tim_Tick;
        return;
    }
    // Your code follows, I've tried to optimize it here and there, but no guarantee that it compiles or works, not tested at all
    if (signalNewFFT && PanelFFT.Visible) {
        signalNewFFT = false;
        #region FFT
        bool newRange = false;
        if (graphFFT.MaxY != d.fftRangeYMax) {
            graphFFT.MaxY = d.fftRangeYMax;
            newRange = true;
        }
        if (graphFFT.MinY != d.fftRangeYMin) {
            graphFFT.MinY = d.fftRangeYMin;
            newRange = true;
        }
        int tempLength = 0;
        short[] tempData;
        int i = 0;
        lock (d.fftDataLock) {
            tempLength = d.fftLength;
            tempData = (short[])d.fftData.Clone();
        }
        graphFFT.SetLine("FFT", tempData);
        if (newRange) graphFFT.RefreshGraphComplete();
        else if (PanelFFT.Visible) graphFFT.RefreshGraph();
        #endregion
        // End of your code
        tim.Enabled = true; // Drawing is done, allow new frames to come in.
    }
}
Here's the optimized SetLine() which no longer takes a list of points but the raw data:
public class GraphFFT {
    public void SetLine(String lineTitle, short[] values) {
        IPointListEdit ip = zgcGraph.GraphPane.CurveList[lineTitle].Points as IPointListEdit;
        int tmp = Math.Min(ip.Count, values.Length);
        int i = 0;
        peakX = values.Length;
        while (i < tmp) {
            if (values[i] > peakY) peakY = values[i];
            ip[i].X = i;
            ip[i].Y = values[i];
            i++;
        }
        while (ip.Count < values.Length) {
            if (values[i] > peakY) peakY = values[i];
            ip.Add(i, values[i]);
            i++;
        }
        while (values.Length > ip.Count) {
            ip.RemoveAt(ip.Count - 1);
        }
    }
}
I hope you get that working; as I commented before, I haven't had the chance to compile or check it, so there could be some bugs. There's more to be optimized, but the gains should be marginal compared to the boost from skipping frames and only collecting data when there is time to actually draw the frame before the next one comes in.
If you closely study the graphs in the video at iZotope, you'll notice that they too are skipping frames, and sometimes are a bit jumpy. That's not bad at all, it's a trade-off you make between the processing power of the foreground thread and the background workers.
If you really want the drawing to be done in a separate thread, you'll have to draw the graph to a bitmap (calling Draw() and passing the bitmap's device context). Then pass the bitmap to the main thread and have it update. That way you lose the convenience of the designer and property grid in your IDE, but you can make use of otherwise vacant processor cores.
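A rough sketch of that approach, assuming a PictureBox named pictureBox1 as the display surface; the exact ZedGraph Draw() overload, the cross-thread size reads, and error handling are all glossed over and should be checked against the version in use.
// Sketch: render the graph to a Bitmap on a worker thread, then hand only the
// finished bitmap to the UI thread.
private void RenderLoop()
{
    while (!IsDisposed)
    {
        var bmp = new Bitmap(pictureBox1.Width, pictureBox1.Height);
        using (Graphics g = Graphics.FromImage(bmp))
        {
            zgcGraph.GraphPane.Draw(g);   // draw off-screen, no UI thread involved
        }

        // Marshal only the finished bitmap to the UI thread.
        pictureBox1.BeginInvoke(new Action(() =>
        {
            var old = pictureBox1.Image;
            pictureBox1.Image = bmp;
            if (old != null) old.Dispose();
        }));

        Thread.Sleep(20);   // ~50 fps upper bound, same idea as the timer above
    }
}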
---------- edit answer to remarks --------
Yes, there is a way to tell what calls what. Look at your first screenshot: you have selected the "call tree" graph. Each successive line is indented a bit (it's a tree view, not just a list!). In a call graph, each tree node represents a method that has been called by its parent tree node (method).
In the first image, WndProc was called about 1800 times; it handled 872 messages, of which 62 triggered ZedGraphControl.OnPaint() (which in turn accounts for 53% of the main thread's total time).
The reason you don't see another root node is that the third dropdown box has "[604] Main Thread" selected, which I didn't notice before.
As for the more fluent graphs, I have second thoughts on that now after looking more closely at the screenshots. The main thread has clearly received more (double the) update messages, and the CPU still has some headroom.
It looks like the threads are out of sync and in sync at different times, where the update messages arrive just too late (when WndProc was done and went to sleep for a while), and then suddenly in time for a while. I'm not very familiar with ANTS, but does it have a side-by-side thread timeline including sleep time? You should be able to see what's going on in such a view. Microsoft's threads view tool would come in handy for this:
While I have never heard of or seen something similar, I'd recommend the common-sense approach of commenting out sections of code / injecting returns at the tops of functions until you find the logic that's producing the side effect. You know your code and likely have an educated guess where to start chopping. Otherwise, chop almost everything as a sanity test and start adding blocks back. I'm often amazed how fast one can track down those seemingly impossible bugs this way. Once you find the related code, you will have more clues to solve your issue.
There is an array of potential causes. Without stating completeness, here is how you could approach your search for the actual cause:
Environment variables: the timer issue in another answer is only one example. There might be modifications to the Path and to other variables, and new variables could be set by the profiler. Write the current environment variables to a file and compare both configurations (a small dump sketch follows this list). Try to find suspicious entries and unset them one by one (or in combinations) until you get the same behavior in both cases.
Processor frequency. This can easily happen on laptops. Potentially, the energy saving system sets the frequency of the processor(s) to a lower value to save energy. Some apps may 'wake' the system up, increasing the frequency. Check this via Performance Monitor (perfmon).
If the app runs slower than it could, there must be some inefficient resource utilization. Use the profiler to investigate this! You can attach the profiler to the (slow) running process to see which resources are under- or over-utilized. Mostly, there are two major categories of causes for slow execution: memory-bound and compute-bound execution. Both can give more insight into what is triggering the slowdown.
If, however, your app actually changes its efficiency when a profiler is attached, you can still use your favorite monitoring app to see which performance indicators actually change. Again, perfmon is your friend.
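As a starting point for the environment-variable comparison suggested above, a minimal dump utility of my own might look like this: run it once normally and once under the profiler, then diff the two output files.
using System;
using System.Collections;
using System.IO;
using System.Linq;

class DumpEnv
{
    static void Main(string[] args)
    {
        // Sort so that the two dumps can be compared line by line.
        var lines = Environment.GetEnvironmentVariables()
            .Cast<DictionaryEntry>()
            .Select(e => e.Key + "=" + e.Value)
            .OrderBy(s => s);
        File.WriteAllLines(args.Length > 0 ? args[0] : "env.txt", lines.ToArray());
    }
}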
If you have a method which throws a lot of exceptions, it can run slowly in debug mode and fast in CPU Profiling mode.
As detailed here, debug performance can be improved by using the DebuggerNonUserCode attribute. For example:
[DebuggerNonUserCode]
public static bool IsArchive(string filename)
{
    bool result = false;
    try
    {
        //this calls an external library, which throws an exception if the file is not an archive
        result = ExternalLibrary.IsArchive(filename);
    }
    catch
    {
    }
    return result;
}

Serial port data precise time stamp

I am working on a project which requires a precise time (in ms) for each data entry I read from a serial port connected to an encoder (US Digital S5 optical shaft encoder with a QSB).
I installed the encoder on a small cart, where I use it to measure the speed of the cart.
Here is what I did so far:
Connect to the serial port and write commands to the QSB to tell the encoder to stream data. Commands are available here:
www.usdigital.com/assets/general/QSB%20Commands%20List_1.pdf
www.usdigital.com/assets/general/QSB%20Applications%20Examples.pdf
Use ReadLine() to read the received data.
Put all lines of data into one StringBuilder and output it to a file.
I am able to get data entries about 1 ms apart when I set the output threshold and interval rate to be as fast as possible.
Here is what I got:
----time stamp(h/m/s/ms)-------value
data with correct time stamp: https://www.dropbox.com/s/pvo1dz56my4o99y/Capture1.JPG
However, there are abrupt "jumps" of roughly 200 ms even though the data should be continuous (I am rolling the cart at a constant speed).
data with incorrect time stamp: https://www.dropbox.com/s/sz3sxwv4qwsb2cn/Capture2.JPG
Here is my code:
private void buttonOpenEncoderPort_Click(object sender, EventArgs e)
{
    serialPortEncoder.Write("S0E\r\n");     // start streaming data
    System.Threading.Thread.Sleep(500);
    serialPortEncoder.Write("W0B0\r\n");    // set threshold to 0 so the encoder will stream data at the interval I set
    System.Threading.Thread.Sleep(500);
    serialPortEncoder.Write("W0C0000\r\n"); // set output interval to 0 so it will stream as fast as possible
    System.Threading.Thread.Sleep(1500);
    backgroundWorkerEncoder.RunWorkerAsync();
}

// I am using a background worker to pull data out.
private void backgroundWorkerEncoder_DoWork(object sender, DoWorkEventArgs e)
{
    while (serialPortEncoder.IsOpen)
    {
        if (serialPortEncoder.BytesToRead != 0)
        {
            try
            {
                String s = serialPortEncoder.ReadLine(); // read from encoder
                LazerBucket.Add(getCurrentTimeWithMS(timeEncoder) + "-----" + s + "\r\n"); // put one line of data with time stamp in a List<String>
                richTextBoxEncoderData.BeginInvoke(new MethodInvoker(delegate()
                {
                    richTextBoxEncoderData.Text = s;
                })); // update UI
            }
            catch (Exception ex) { MessageBox.Show(ex.ToString()); }
        }
    }
}

private String getCurrentTimeWithMS(DateTime d) // to get time
{
    StringBuilder s = new StringBuilder();
    d = DateTime.Now;
    int hour = d.Hour;
    int minute = d.Minute;
    int second = d.Second;
    int ms = d.Millisecond;
    s.Append(" ----" + hour.ToString() + ":" + minute.ToString() + ":" + second.ToString() + ":" + ms.ToString());
    return s.ToString();
}
I would appreciate it if someone could find the cause of the time jump. 200 ms is too much to be ignored.
EDIT:
As suggested, I tried Stopwatch, but there are still 200 ms delays. When I print the time stamps and BytesToRead together, I see that the data in the buffer decreases as ReadLine() executes. Eventually BytesToRead drops to a single digit, and that's where the delay happens. I am looking for better suggestions on how to implement the threads, and also for explanations of the delay. Maybe I am reading too fast, so the buffer can't keep up with me?
EDIT:
Problem solved; see my answer below. Thanks for replying though. Stopwatch really helps. Now I am trying to work out whether event-driven or polling is better.
After some endless researching on the web, I found the cause of the delay.
Device Manager ---> Ports ---> Advanced ---> changing the latency timer to 1 ms solves the problem.
I am now polling data using a separate thread. It works very well.
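For comparison with polling, the event-driven variant mentioned in the question's edit could look roughly like the sketch below. It is a sketch only: DataReceived fires on a thread-pool thread, ReadLine() can still block if a line is only partially received, and the field names are mine (it assumes System.IO.Ports, System.Diagnostics, and System.Collections.Concurrent).
// Rough sketch of the event-driven alternative: timestamp each line with a
// Stopwatch offset from a fixed start time, and queue it for later processing.
private readonly DateTime _startTime = DateTime.Now;
private readonly Stopwatch _clock = Stopwatch.StartNew();
private readonly ConcurrentQueue<string> _lines = new ConcurrentQueue<string>();

private void StartReading()
{
    serialPortEncoder.DataReceived += (s, e) =>
    {
        // DataReceived runs on a thread-pool thread; keep this handler short.
        while (serialPortEncoder.BytesToRead > 0)
        {
            string line = serialPortEncoder.ReadLine();
            DateTime stamp = _startTime + _clock.Elapsed;
            _lines.Enqueue(stamp.ToString("HH:mm:ss.fff") + "-----" + line);
        }
    };
}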
Are you using .NET 4.5? If so, I highly recommend using async/await over BackgroundWorker.
Also, DateTime isn't really accurate for real-time applications. I would recommend DateTime strictly as a start time and then using Stopwatch in System.Diagnostics to get the elapsed time since the start time.
private void backgroundWorkerEncoder_DoWork(object sender, DoWorkEventArgs e)
{
    var startTime = DateTime.Now;
    var stopwatch = Stopwatch.StartNew();
    while (serialPort.IsOpen && !backgroundWorker.CancellationPending)
    {
        if (serialPort.BytesToRead > 0)
        {
            try
            {
                var line = serialPort.ReadLine();
                var timestamp = (startTime + stopwatch.Elapsed);
                var lineString = string.Format("{0} ----{1}",
                    line,
                    timestamp.ToString("HH:mm:ss:fff"));
                // Handle formatted line string here.
            }
            catch (Exception ex)
            {
                // Handle exception here.
            }
        }
    }
}
As for the 200 ms discrepancy, it could be a variety of things. Perhaps the BackgroundWorker is on a lower priority and doesn't get as much CPU time as you hoped. Could also be something on the I/O side of either SerialPort or the actual serial device itself.
When you want precise measurements you should not use DateTime.Now; try Stopwatch instead.
As detailed here and here, DateTime is accurate but not precise to the millisecond. If you need precision and accuracy, save DateTime.Now when you start measuring and get the offset from the stopwatch.
While 200 ms seems like a long delay, even for DateTime, the Stopwatch might indeed solve your problem.
To me it seems that the OS is in your way.
I suggest the following.
Read the data from the port either in a separate process (or service) or in a separate thread with above-normal priority.
Store the raw(!) data in a queue with the accurate timestamp for later processing. This "task" should be as light as possible, to keep the GC or scheduler from kicking in and stalling it for even the smallest amount of time. No string concats or formats, for example; those operations cost time and put stress on memory.
Process that data in a separate thread or process. If that one gets held up for some time, there's no real harm done, as the timestamps are accurate.
In short: decouple reading from processing (a rough sketch follows at the end of this answer).
IMO, stock versions of Windows are too I/O-bound (they love and hug the concept of swapping) to be suitable for real-time processing. Using a different OS, on a different box, for reading, and sending the data to a Windows box for further processing could perhaps be an option to consider too (last resort?).
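To make the decoupling concrete, here is a rough sketch under the assumptions above; the class, field, and method names are mine, and the Log() body is left as a placeholder.
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.IO.Ports;

// The reader thread only grabs raw bytes plus a Stopwatch timestamp;
// a separate consumer thread does all parsing, formatting and file I/O.
class DecoupledLogger
{
    struct RawSample
    {
        public TimeSpan Stamp;   // Stopwatch-based offset from start, accurate
        public byte[] Data;      // raw bytes, untouched
    }

    readonly BlockingCollection<RawSample> _queue = new BlockingCollection<RawSample>();
    readonly Stopwatch _clock = Stopwatch.StartNew();
    readonly SerialPort _port;

    public DecoupledLogger(SerialPort port) { _port = port; }

    public void ReaderLoop()   // run on a thread with ThreadPriority.AboveNormal
    {
        byte[] buf = new byte[256];
        while (_port.IsOpen)
        {
            int n = _port.Read(buf, 0, buf.Length);   // blocks until at least one byte arrives
            byte[] chunk = new byte[n];
            Array.Copy(buf, chunk, n);
            _queue.Add(new RawSample { Stamp = _clock.Elapsed, Data = chunk });
        }
        _queue.CompleteAdding();
    }

    public void ConsumerLoop() // ordinary priority; formatting and logging happen here
    {
        foreach (RawSample s in _queue.GetConsumingEnumerable())
            Log(s);   // placeholder: reassemble lines, format timestamps, write to file
    }

    void Log(RawSample s) { /* ... */ }
}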

Datetime Granularity between 32bit and 64bit Windows

I have created a little engineering app for some network debugging. It takes a list of IP addresses and pings them, with a user-set timeout and rate. It logs the average round-trip time, and every time one of the sends fails it logs the duration of the failure and a timestamp of when it happened...
That's the idea. I developed it on a Win7 machine with .Net4 and have had to put the thing on a set of XP laptops.
The problem is that the duration values on my box during testing show nice millisecond durations; on the XP boxes they show 0 or 15.625 (magic number?)... and have the funny square box symbol in the string.
public void LinkUp()
{
    if (_isLinkUp) return;
    _upTime = DateTime.Now;
    var span = _upTime.Subtract(_downTime);
    _downTimeLog.Add(new LinkDown()
    {
        _span = span,
        _status = _ipStatus,
        _time = _downTime
    });
    _isLinkUp = true;
}
That's the bit that does the log. The _ipStatus is the ping failure reason (typically timeout).
_downEventLog.AppendLine(" Duration-> " + linkDownLogEvent._span.TotalMilliseconds + "ms\n");
That's the bit that does the print... Can anyone shed any light on this apparent difference?
The question has been answered but I will include an edit here for some more info.
EDIT:
It seems that the difference was down not to the Win7 vs. WinXP difference but to 32-bit vs. 64-bit.
In a 32-bit Windows system, as Henk points out, the granularity of the system clock is 15-16 ms; this is what gave me the value of 15.625 for every timespan value less than 16 ms.
In a 64-bit system, the system call goes to a different set of methods that have a much finer granularity, so on my x64 dev machine I had millisecond accuracy from my system clock!
Now, the Stopwatch uses a hardware interface via the processor instrumentation to record at a much finer granularity (probably not every processor tick, but I imagine something obscenely accurate along those lines). If the hardware underlying the OS does not have this level of instrumentation, it will fall back to the system time, so beware! I would guess most modern desktops/laptops have this instrumentation. Embedded devices or things of that nature might not, but then the Stopwatch class is not in the Compact Framework as far as I can see (there you have to use QueryPerformanceCounter()).
Hope all this helps. It's helped me a lot.
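For completeness, the QueryPerformanceCounter() route mentioned above boils down to a P/Invoke pair like this sketch; on desktop Windows these functions live in kernel32.dll, while on Windows CE the module name differs.
using System.Runtime.InteropServices;

static class HighResClock
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long count);

    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long frequency);

    // Current value of the high-resolution counter, converted to seconds.
    public static double NowSeconds()
    {
        long count, freq;
        QueryPerformanceCounter(out count);
        QueryPerformanceFrequency(out freq);
        return (double)count / freq;
    }
}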
Somewhere around the _spanStopWatch initialiser:
if (!_spanStopWatch.IsHighResolution)
{
throw new ThisMachineIsNotAccurateEnoughForMyLikingException("Find a better machine.");
}
The nuts and bolts:
public void LinkUp()
{
    if (_isLinkUp) return;
    _spanStopWatch.Stop();
    var span = _spanStopWatch.Elapsed;
    _downTimeLog.Add(new LinkDown()
    {
        _span = span,
        _status = _ipStatus,
        _time = _downTime
    });
    _isLinkUp = true;
}
0 or 15.625 (magic number?)
Yes, using DateTime.Now is accurate only to the length of a CPU timeslice, 15-20 ms depending on your hardware and OS version.
Use System.Diagnostics.Stopwatch for more accurate timing.

How to get a Fast .Net Http Request

I need an Http request that I can use in .Net which takes under 100 ms. I'm able to achieve this in my browser so I really don't see why this is such a problem in code.
I've tried WinHTTP as well as WebRequest.Create and both of them are over 500ms which isn't acceptable for my use case.
Here are examples of the simple test I'm trying to pass. (WinHttpFetcher is a simple wrapper I wrote; it only does the most trivial GET request, so I'm not sure it's worth pasting.)
I'm getting acceptable results with LibCurlNet, but if there are simultaneous usages of the class I get an access violation. Also, since it's not managed code and has to be copied to the bin directory, it's not ideal to deploy with my open-source project.
Any ideas of another implementation to try?
[Test]
public void WinHttp_Should_Get_Html_Quickly()
{
    var fetcher = new WinHttpFetcher();
    var startTime = DateTime.Now;
    var result = fetcher.Fetch(new Uri("http://localhost"));
    var endTime = DateTime.Now;
    Assert.Less((endTime - startTime).TotalMilliseconds, 100);
}

[Test]
public void WebRequest_Should_Get_Html_Quickly()
{
    var startTime = DateTime.Now;
    var req = (HttpWebRequest)WebRequest.Create("http://localhost");
    var response = req.GetResponse();
    var endTime = DateTime.Now;
    Assert.Less((endTime - startTime).TotalMilliseconds, 100);
}
When benchmarking, it is best to discard at least the first two timings as they are likely to skew the results:
Timing 1: Dominated by JIT overhead i.e. the process of turning byte code into native code.
Timing 2: A possible optimization pass for the JIT'd code.
Timings after this will reflect repeat performance much better.
The following is an example of a test harness that will automatically disregard JIT and optimization passes, and run a test a set number of iterations before taking an average to assert performance. As you can see the JIT pass takes a substantial amount of time.
JIT:410.79ms
Optimize:0.98ms.
Average over 10 iterations:0.38ms
Code:
[Test]
public void WebRequest_Should_Get_Html_Quickly()
{
    const int TestIterations = 10;
    const int MaxMilliseconds = 100;
    Action test = () =>
    {
        WebRequest.Create("http://localhost/iisstart.htm").GetResponse();
    };
    AssertTimedTest(TestIterations, MaxMilliseconds, test);
}
private static void AssertTimedTest(int iterations, int maxMs, Action test)
{
    double jit = Execute(test); //disregard jit pass
    Console.WriteLine("JIT:{0:F2}ms.", jit);
    double optimize = Execute(test); //disregard optimize pass
    Console.WriteLine("Optimize:{0:F2}ms.", optimize);
    double totalElapsed = 0;
    for (int i = 0; i < iterations; i++) totalElapsed += Execute(test);
    double averageMs = (totalElapsed / iterations);
    Console.WriteLine("Average:{0:F2}ms.", averageMs);
    Assert.Less(averageMs, maxMs, "Average elapsed test time.");
}

private static double Execute(Action action)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    action();
    return stopwatch.Elapsed.TotalMilliseconds;
}
Use the Stopwatch class to get accurate timings.
Then, make sure you're not seeing the results of un-optimized code or JIT compilation by running your timing test several times on Release code. Discard the first few calls to remove the impact of JIT, and then take the mean timing of the rest.
VS.NET has the ability to measure performance, and you might also want to use something like Fiddler to see how much time you're spending "on the wire" and sanity check that it's not your IIS/web server causing the delays.
500ms is a very long time, and it's possible to be in the 10s of ms with these classes, so don't give up hope (yet).
Update #1:
This is a great article that talks about micro benchmarking and what's needed to avoid seeing things like JIT:
http://blogs.msdn.com/b/vancem/archive/2009/02/06/measureit-update-tool-for-doing-microbenchmarks.aspx
You're not quite micro-benchmarking, but there are lots of best practices in here.
Update #2:
So, I wrote this console app (using VS.NET 2010)...
class Program
{
    static void Main(string[] args)
    {
        var stopwatch = Stopwatch.StartNew();
        var req = (HttpWebRequest)WebRequest.Create("http://localhost");
        var response = req.GetResponse();
        Console.WriteLine(stopwatch.ElapsedMilliseconds);
    }
}
... and Ctrl-F5'd it. It was compiled as debug, but I ran it without debugging, and I got 63ms. I'm running this on my Windows 7 laptop, and so http://localhost brings back the default IIS7 home page. Running it again I get similar times.
Running a Release build gives times in the 50ms to 55ms range.
This is the order of magnitude I'd expect. Clearly, if your website is performing an ASP.NET recompile, or recycling the app pool, or doing lots of back-end processing, then your timings will differ. If your markup is massive it will also differ, but none of the classes you're using client-side should be the rate-limiting step here. It'll be the network hop and/or the remote app processing.
Try setting the Proxy property of the HttpWebRequest instance to null.
If that works, then try setting it to GlobalProxySelection.GetEmptyWebProxy(), which seems to be more correct.
You can read more about it here:
- WebRequest slow?: http://holyhoehle.wordpress.com/2010/01/12/webrequest-slow/
Update 2018: Pulling this up from the comments.
System.Net.GlobalProxySelection is obsolete. This class has been deprecated. Please use WebRequest.DefaultWebProxy instead to access and set the global default proxy. Use null instead of GetEmptyWebProxy(). – jirarium Jul 22 '17 at 5:44
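In current terms, the suggestion amounts to something like the following sketch (not specific to the asker's wrapper):
// Disable proxy auto-detection for a single request (per the deprecation note above).
var req = (HttpWebRequest)WebRequest.Create("http://localhost");
req.Proxy = null;

// Or process-wide:
WebRequest.DefaultWebProxy = null;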
