I have tried the following cases and used a stopwatch to measure the actual time it takes for the socket to receive. Note that the timeouts are in milliseconds.
Dim NextClient As New TcpClient
'(NextClient is assumed to be connected to the server before the stream is requested)
NextClient.ReceiveTimeout = 1 'Case 1
NextClient.Client.ReceiveTimeout = 1 'Case 2
Dim ns As Net.Sockets.NetworkStream = NextClient.GetStream()
ns.ReadTimeout = 1 'Case 3
Dim gbytes(5996) As Byte 'receive buffer
Dim sw As New Stopwatch
sw.Start()
ns.Read(gbytes, 0, 5997)
sw.Stop()
BTW, ns.CanTimeout returns True. Also, I am not really expecting 1 ms precision; this is only for testing purposes. I actually started with 500 ms but wanted to test with this first.
In all cases and their combinations, even though I have measured 100+ milliseconds each time with the stopwatch, I receive the data with no exceptions. If I intentionally delay the response by a few seconds, however, I do get the exception. But even my ping / 2 to the server is much more than 1 ms.
Oddly, if I set the timeout to 1000 ms and the server response takes about 1020 ms, the exception is triggered.
So, is there a minimum value or something else?
MSDN:
The ReceiveTimeout property determines the amount of time that the Read method will block until it is able to receive data. This time is measured in milliseconds. If the time-out expires before Read successfully completes, TcpClient throws an IOException. There is no time-out by default.
... if I set the timeout to 1000 ms and the server response takes about 1020 ms, the exception is triggered. Well, if the response takes longer than the configured timeout, you'll get the exception.
If the time-out period is exceeded, the Receive method will throw a SocketException.
... is there a minimum value or something else?
Applicable values range from -1 through 0 up to Int32.MaxValue.
The time-out value, in milliseconds. The default value is 0, which indicates an infinite time-out period. Specifying -1 also indicates an infinite time-out period.
However, the underlying hardware and/or OS determines the resolution/granularity of the timeout (e.g., Windows typically has a default timer resolution of 15.625 ms; consequently, exceptions are only raised at multiples of 15.625 ms).
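A quick way to observe this quantization is to time the IOException for a series of small timeouts. A minimal sketch, assuming a connected endpoint that sends nothing unsolicited (the host and port here are placeholders):
using System;
using System.Diagnostics;
using System.IO;
using System.Net.Sockets;

var client = new TcpClient("example.com", 80); // placeholder endpoint
NetworkStream ns = client.GetStream();
var buffer = new byte[4096];

foreach (int timeoutMs in new[] { 1, 10, 20, 50 })
{
    ns.ReadTimeout = timeoutMs;
    var sw = Stopwatch.StartNew();
    try { ns.Read(buffer, 0, buffer.Length); }
    catch (IOException) { /* expected: nothing arrives in time */ }
    sw.Stop();
    // With the default Windows timer, the observed time tends to land on
    // multiples of ~15.625 ms rather than on the requested value.
    Console.WriteLine($"requested {timeoutMs} ms, observed {sw.ElapsedMilliseconds} ms");
}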
The definition from Microsoft appears to be here:
https://msdn.microsoft.com/en-us/library/bk6w7hs8%28v=vs.110%29.aspx
and the remark against it reads:
"If the read operation does not complete within the time specified by this property, the read operation throws an IOException."
So the issue is somewhat complicated because, as usual, it relies on the wording being written accurately and distinctly.
The definition makes no reference to any kind of timer quanta or limitations of any timer; therefore, if you specify a timeout of 1000 ms and a read is not received within 1000 ms, then an exception must be thrown, otherwise the mechanism is not compliant with its definition.
To make it correct, any definition would have to specify some value of timer accuracy, otherwise there is no workable definition of 'exceeds'. Just because the timeout is specified as an integer doesn't mean the interval is only exceeded after 1001 ms - you could justifiably say it's exceeded after 1000.0000000000000001 ms.
There is also no definition of 'complete' when it states "If the read operation does not complete...".
Does it mean if it hasn't read ALL the bytes you asked for, or, if you ask for a partial read, does it mean if it hasn't read anything?
It's just not worded very well, and I suspect that Microsoft would say that the definition is wrong rather than the function being wrong, which doesn't help you much because they could change the way the function works between releases in a way that might break your code, without even commenting on it.
Provided it still does what the description says then it still works according to spec, and if you use features that are 'undocumented' - in other words things you observe that it does rather than things that the specification says it does - then that's just tough.
Unfortunately this simply doesn't do what the spec says in the first place, so all you can do is to use and exploit its 'observed behaviour' and hope that it stays the same.
Related
Using C#, if I do:
System.Security.Cryptography.X509Certificates.X509Chain ch = new System.Security.Cryptography.X509Certificates.X509Chain();
What is the effect of:
ch.ChainPolicy.UrlRetrievalTimeout = new TimeSpan(4000);
?
According to https://msdn.microsoft.com/en-us/library/system.timespan(v=vs.110).aspx
I would be initializing a new instance of the TimeSpan structure to the specified number of ticks, and a tick is equal to 100 nanoseconds, or one ten-millionth of a second. There are 10,000 ticks in a millisecond. Ref: https://msdn.microsoft.com/en-us/library/system.timespan.ticks(v=vs.110).aspx
This looks like I am setting a timeout of less than a millisecond to download certificate revocation lists. In practice, however, very large CRL files are being downloaded, taking numerous seconds, using such code.
So what does that line do exactly?
TIA.
This looks like I am setting a timeout to less than a millisecond to download certificate revocation lists
https://referencesource.microsoft.com/#System/security/system/security/cryptography/x509/x509chain.cs,383:
ChainPara.dwUrlRetrievalTimeout = (uint)Math.Floor(timeout.TotalMilliseconds);
So you effectively set it to "0", resulting in the OS default value. It's just a question of how the value has to be rounded before being passed down to Windows.
The value maps to dwUrlRetrievalTimeout on CERT_CHAIN_PARA, which doesn't have a whole lot to say about how it's used (gap between downloads, a timeout during a message pump, et cetera). But you could try TimeSpan.FromMilliseconds(1) (or some other small number) and see what you see. 1ms is not likely to exceed your TCP and HTTP connection times, so it will probably have the effect of computing the chain in Offline mode.
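To make the rounding concrete, here is a small sketch showing why a tick-based TimeSpan of 4000 collapses to zero, alongside the four-second version that was probably intended:
var t = new TimeSpan(4000);                                // 4000 ticks = 0.4 ms
Console.WriteLine(t.TotalMilliseconds);                    // 0.4
Console.WriteLine((uint)Math.Floor(t.TotalMilliseconds));  // 0 -> OS default timeout

// What was probably intended:
var intended = TimeSpan.FromMilliseconds(4000);            // 4 seconds
Console.WriteLine(intended.TotalMilliseconds);             // 4000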
I am using "PerformanceCounter" class of C# to calculate below 2 counters "Available Bytes" and "% Committed Bytes In Use" under "Memory" category.
PerformanceCounter pc = new PerformanceCounter("Memory", "Available Bytes", true);
PerformanceCounter pc1 = new PerformanceCounter("Memory", "% Committed Bytes In Use", true);
var a = pc.RawValue;
var b = pc1.NextValue();
The issue I'm seeing here is that "RawValue" is used for "Available Bytes" counter whereas "NextValue()" is used for "% Committed Bytes In Use" counter.
Is there any uniform method to calculate both or all counters?
It only varies per category because different categories contain different counter types. The PerformanceCounter.CounterType property defines what type of data the counter is holding, and therefore how the data is calculated. It doesn't make sense for a counter that's measuring a difference over time to have the difference in the raw value, because the difference could be over different time periods for different clients wanting to do the measurement. See the Performance Counter Type Enumeration for more info on the different types.

If you really want to get into the details of how each type works, you have to resort to the Win32 documentation on which all of this is based. There used to be a single page with all of this, but I'm having trouble finding it at the moment. The closest I can find is here: https://technet.microsoft.com/en-us/library/cc960029.aspx.

Some performance counter types use a main counter and a "base" counter, and then use a formula based on the current and previous raw values for each of those (and possibly system time as well) to compute NextValue(). RawValue may appear to be invalid for certain counter types because it just doesn't make sense to interpret it in the same way as the computed value. For example, IIRC for % CPU used for the process, the raw value is the number of CPU ticks used since the program started, which, if interpreted as a percentage, is nonsense. It's only meaningful when compared to previous values and the elapsed time (from which you can also infer the maximum possible change).
Using RawValue makes sense for some counters, not for others. However, NextValue() often can't return a meaningful value the first time you call it, because when it's computed as a difference between samples, you don't have a previous sample to compare it to. You can just ignore that, or you can set up your code to call it once during startup so that subsequent calls get real values. Keep in mind that NextValue() is expected to be called on a timer. For example, if you're calling it on the Network Bytes Sent counter, it will return the number of bytes sent between the previous call and this one. So if you call NextValue() on that counter 2 seconds after the initial call and then again after 2 minutes, you're going to get very different values, even if the network transfer is steady, because the call after 2 seconds returns the number of bytes transferred in 2 seconds, and the call after 2 minutes returns the number of bytes transferred in 2 minutes.
So, in short, you can use NextValue() for all counter types, but you must throw away or ignore the first value returned, and you must call NextValue() on a fixed interval for the results to make sense (just like the interactive Windows Performance Monitor program does).
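A minimal sketch of that pattern, reusing the "% Committed Bytes In Use" counter from the question (Windows-only; on .NET Core this needs the System.Diagnostics.PerformanceCounter package):
using System;
using System.Diagnostics;
using System.Threading;

var pc = new PerformanceCounter("Memory", "% Committed Bytes In Use", readOnly: true);

pc.NextValue(); // first call only establishes the baseline sample; discard it

for (int i = 0; i < 5; i++)
{
    Thread.Sleep(1000); // fixed sampling interval, as recommended above
    Console.WriteLine($"{pc.NextValue():F2} %");
}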
In my experience, and per the MSDN documentation, it varies per Performance Counter category, and then again by the specific counter, such as Available Bytes or % Committed Bytes In Use in your case.
NextSample() might be what you are looking for.
Perf Counter
Property: RawValue
Gets or sets the raw, or uncalculated, value of this counter.
^ meaning that it's not necessarily up to the developer who created it.
Method: NextValue()
Obtains a counter sample and returns the calculated value for it.
^ meaning that it's up to the developer who created it.
Method: NextSample()
Obtains a counter sample, and returns the raw, or uncalculated, value for it.
Also, something that was explained to me long ago (so take it with a grain of salt): the concept of RawValue is not always valid.
RawValues are used to create samples. NextSample() returns averages of RawValues over time, which are much more realistic. NextValue() is a cleaned-up sample, converted into either a percentage or from bytes to kilobytes (based on the context of the value and the implementation by the developer).
So, in my humble opinion, even if the info is over 10 years old, the advice is to abandon usage of RawValue and use NextSample() in its place, should you need a realistic/accurate value.
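If you do go the NextSample() route, the framework can still do the per-type math for you: CounterSampleCalculator.ComputeCounterValue() turns two raw samples into the same calculated figure NextValue() would give. A minimal sketch, reusing the counter from the question:
using System;
using System.Diagnostics;
using System.Threading;

var pc = new PerformanceCounter("Memory", "% Committed Bytes In Use", readOnly: true);

CounterSample first = pc.NextSample();  // raw sample #1
Thread.Sleep(1000);                     // sampling interval
CounterSample second = pc.NextSample(); // raw sample #2

// The same math NextValue() performs internally, but you control both samples:
float value = CounterSampleCalculator.ComputeCounterValue(first, second);
Console.WriteLine($"{value:F2} %");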
How can I limit/reduce the timeout period for FindElement? I am scraping a website. For a table which appears in thousands of pages, I can have either an element stating there is no information, or the table.
I search for one of these elements and, when it is missing, I search for the other. The problem is that when one of them does not exist, it takes a long time until FindElement times out. Can this period be shortened? Can the timeout period be defined per element? Everything I found about waits is about prolonging the timeout period...
I'm working in a .NET environment, if that helps.
The delay in FindElement is caused by the implicit wait setting. You can temporarily set it to a different value:
driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.Zero); // with 0, FindElement checks only once
// look for the elements
driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(10)); // restore your original setting (10 s here is just an example)
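Wrapped up as a reusable helper, it might look like the hypothetical sketch below (FindElementQuickly is not a Selenium API, just a name for this pattern; defaultWait is whatever your suite normally uses):
using System;
using OpenQA.Selenium;

static IWebElement FindElementQuickly(IWebDriver driver, By by, TimeSpan defaultWait)
{
    driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.Zero); // check once, no polling
    try
    {
        return driver.FindElement(by);
    }
    catch (NoSuchElementException)
    {
        return null; // element absent; the caller can try the alternative locator
    }
    finally
    {
        driver.Manage().Timeouts().ImplicitlyWait(defaultWait); // restore
    }
}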
This is a bit of an academic question as I'm struggling with the thinking behind Microsoft using double as the data type for the Interval property!
Firstly, from MSDN: Interval is the time, in milliseconds, between Elapsed events. I would interpret that to be a discrete number, so why the use of a double? Surely int or long makes greater sense?
Can Interval support values like 5.768585 (5.768585 ms)? Especially when one considers System.Timers.Timer to have nowhere near sub-millisecond accuracy... Most accurate timer in .NET?
Seems a bit daft to me.. Maybe I'm missing something!
Disassembling shows that the interval is consumed via a call to (int)Math.Ceiling(this.interval) so even if you were to specify a real number, it would be turned into an int before use. This happens in a method called UpdateTimer.
Why? No idea, perhaps the spec said that double was required at one point and that changed? The end result is that double is not strictly required, because it is eventually converted to an int and cannot be larger than Int32.MaxValue according to the docs anyway.
Yes, the timer can "support" real numbers; it just doesn't tell you that it silently changed them. You can initialise and run the timer with 100.5d, and it turns it into 101.
And yes, it is all a bit daft: 4 wasted bytes, potential implicit casting, conversion calls, explicit casting, all needless if they'd just used int.
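A one-liner makes the rounding visible (just illustrating the Math.Ceiling call found in the disassembly):
double interval = 100.5;
int effective = (int)Math.Ceiling(interval); // what UpdateTimer effectively does
Console.WriteLine(effective);                // 101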
The reason to use a double here is the attempt to provide enough accuracy.
In detail: the system's interrupt time slices are given by ActualResolution, which is returned by NtQueryTimerResolution(). NtQueryTimerResolution is exported by the native Windows NT library NTDLL.DLL. The system time increments are given by TimeIncrement, which is returned by GetSystemTimeAdjustment().
These two values determine the behavior of the system timers. They are integer values and express 100 ns units. However, this is already insufficient for certain hardware today. On some systems ActualResolution is returned as 9766, which would correspond to 0.9766 ms. But in fact these systems are operating at 1024 interrupts per second (tuned by proper setting of the multimedia interface). 1024 interrupts a second cause the interrupt period to be 0.9765625 ms. This is too fine-grained; it reaches into the 100 ps regime and can therefore not be held in the standard ActualResolution format.
Therefore it has been decided to put such time parameters into a double. But: this does not mean that all of the possible values are supported/used. The granularity given by TimeIncrement will persist, no matter what.
When dealing with timers it is always advisable to look at the granularity of the parameters involved.
So back to your question: Can Interval support values like 5.768585 (ms) ?
No, the system I've taken as an example above cannot.
But it can support 5.859375 (ms)!
Other systems with different hardware may support other numbers.
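As an illustration of the example system above, the usable intervals are integer multiples of the interrupt period:
double periodMs = 1000.0 / 1024.0; // 0.9765625 ms per interrupt
for (int n = 1; n <= 6; n++)
    Console.WriteLine($"{n} x period = {n * periodMs} ms"); // 6 x period = 5.859375 ms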
So introducing a double here is not such a stupid idea and actually makes sense. Spending another 4 bytes to get things finally right is a good investment.
I've summarized some more details about Windows time matters here.
So a simple enough question really.
How exactly does the interval for System.Timers work?
Does it fire 1 second, each second, regardless of how long the timeout event takes, or does it require the routine to finish first and then restart the interval?
So either:
1 sec....1 sec....1 sec and so on
1 sec + process time....1 sec + process time....1 sec + process time and so on
The reason I ask this is that I know my "processing" takes much less than 1 second, but I would like to fire it every second on the dot (or as close as possible).
I had been using a Thread.Sleep method like so:
Thread.Sleep(1000 - ((int)(DateTime.Now.Subtract(start).TotalMilliseconds) >= 1000 ? 0 : (int)(DateTime.Now.Subtract(start).TotalMilliseconds)));
Where the start time is registered at the start of the routine. The problem here is that Thread.Sleep only works in milliseconds. So my routine could restart at 1000 ms, or a fraction over, like 1000.0234 ms. This can happen because one of my routines takes 0 ms according to TimeSpan, but it has obviously used some ticks/nanoseconds, which would mean the timing is off and no longer every second. If I could sleep by ticks or nanoseconds it would be bang on.
If number 1 applies to System.Timers then I guess I'm sorted. If not, I need some way to "sleep" the thread to a higher resolution of time, i.e. ticks/nanoseconds.
You might ask why I do an inline IF statement: well, sometimes the processing can go above 1000 ms, so we need to make sure we don't produce a negative figure. Also, by the time we determine this, the ending time has changed slightly - not by much, but it could make the thread delay slightly longer, throwing the entire subsequent sleep off.
I know, I know, the time would be negligible... but what happens if the system suddenly stalled for a few ms? It would protect against that in this case.
Update 1
Ok. So I didn't realise you could put a TimeSpan in as the timing value. So I used the code below:
Thread.Sleep(TimeSpan.FromMilliseconds(1000) - ((DateTime.Now.Subtract(start).TotalMilliseconds >= 1000) ? TimeSpan.FromMilliseconds(0) : DateTime.Now.Subtract(start)));
If I am right, this should then allow me to repeat the thread at exactly 1 second - or as close as the system will allow.
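For clarity, the same compensation written with Stopwatch (note that, as far as I can tell, Thread.Sleep(TimeSpan) is converted to whole milliseconds internally, so the TimeSpan overload doesn't actually buy sub-millisecond precision; DoWork here is a placeholder for my routine):
var period = TimeSpan.FromSeconds(1);
var sw = Stopwatch.StartNew();

DoWork(); // placeholder for the routine being timed

TimeSpan remaining = period - sw.Elapsed;
if (remaining > TimeSpan.Zero)
    Thread.Sleep(remaining); // still rounded to milliseconds by the runtime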
If you have set AutoReset = true, then your theory 1 is true; otherwise you would have to deal with it in code - see the documentation for Timer on MSDN.
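A minimal sketch of scenario 1 with System.Timers.Timer (the Elapsed handler runs on a thread-pool thread, so with AutoReset = true the next tick fires on schedule even if the previous handler is still running, meaning handlers can overlap):
using System;

var timer = new System.Timers.Timer(1000); // interval in milliseconds
timer.AutoReset = true;                    // fire every second: scenario 1
timer.Elapsed += (sender, e) =>
{
    // Keep this shorter than the interval, or guard against overlapping calls.
    Console.WriteLine(e.SignalTime);
};
timer.Start();
Console.ReadLine(); // keep the process alive while the timer runs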