What is the effect of X509Chain ChainPolicy UrlRetrievalTimeout initialization? - c#

Using C#, if I do:
System.Security.Cryptography.X509Certificates.X509Chain ch = new System.Security.Cryptography.X509Certificates.X509Chain();
What is the effect of:
ch.ChainPolicy.UrlRetrievalTimeout = new TimeSpan(4000);
?
According to https://msdn.microsoft.com/en-us/library/system.timespan(v=vs.110).aspx
I would be initializing a new instance of the TimeSpan structure to the specified number of ticks, and a tick is equal to 100 nanoseconds, or one ten-millionth of a second. There are 10,000 ticks in a millisecond. Ref: https://msdn.microsoft.com/en-us/library/system.timespan.ticks(v=vs.110).aspx
This looks like I am setting a timeout to less than a millisecond to download certificate revocation lists. In practice, however, very large CRL files are being downloaded taking numerous seconds using such code.
So what does that line do exactly?
TIA.

This looks like I am setting a timeout to less than a millisecond to download certificate revocation lists
https://referencesource.microsoft.com/#System/security/system/security/cryptography/x509/x509chain.cs,383:
ChainPara.dwUrlRetrievalTimeout = (uint)Math.Floor(timeout.TotalMilliseconds);
So you effectively set it to "0", resulting in the OS default value. It's just a problem of how it has to round the value to send it down to Windows.
The value maps to dwUrlRetrievalTimeout on CERT_CHAIN_PARA, which doesn't have a whole lot to say about how it's used (gap between downloads, a timeout during a message pump, et cetera). But you could try TimeSpan.FromMilliseconds(1) (or some other small number) and see what you see. 1ms is not likely to exceed your TCP and HTTP connection times, so it will probably have the effect of computing the chain in Offline mode.
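For illustration, here is a small hedged sketch (not from the original post) contrasting the ticks constructor with an explicitly built value; it only uses the X509Chain API quoted above:

using System;
using System.Security.Cryptography.X509Certificates;

// new TimeSpan(4000) is 4000 ticks = 0.4 ms; Math.Floor turns that into 0 ms,
// i.e. the OS default URL retrieval timeout.
Console.WriteLine(new TimeSpan(4000).TotalMilliseconds);                      // 0.4

// To request an actual 4-second timeout, build the TimeSpan explicitly:
var chain = new X509Chain();
chain.ChainPolicy.UrlRetrievalTimeout = TimeSpan.FromSeconds(4);              // 4000 ms
Console.WriteLine(chain.ChainPolicy.UrlRetrievalTimeout.TotalMilliseconds);   // 4000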

Related

Is there any uniform method to calculate any counters within any category for PerformanceCounter?

I am using "PerformanceCounter" class of C# to calculate below 2 counters "Available Bytes" and "% Committed Bytes In Use" under "Memory" category.
PerformanceCounter pc = new PerformanceCounter("Memory", "Available Bytes", true);
PerformanceCounter pc1 = new PerformanceCounter("Memory", "% Committed Bytes In Use", true);
var a = pc.RawValue;
var b = pc1.NextValue();
The issue I'm seeing here is that "RawValue" is used for the "Available Bytes" counter whereas "NextValue()" is used for the "% Committed Bytes In Use" counter.
Is there any uniform method to calculate both or all counters?
It only varies per category because different categories contain different counter types. The PerformanceCounter.CounterType property defines what type of data the counter is holding, and therefore how the data is calculated. It doesn't make sense for a counter that's measuring a difference over time to put that difference in the raw value, because the difference could be over different time periods for different clients wanting to do the measurement. See the Performance Counter Type Enumeration for more info on the different types. If you really want to get into the details of how each type works, you have to resort to the Win32 documentation on which all of this is based. There used to be a single page with all of this, but I'm having trouble finding it at the moment. The closest I can find is here: https://technet.microsoft.com/en-us/library/cc960029.aspx.
Some performance counter types use a main counter and a "base" counter, and then use a formula based on the current and previous raw values of each (and possibly the system time as well) to compute NextValue(). RawValue may appear to be invalid for certain counter types because it just doesn't make sense to interpret it in the same way as the computed value. For example, IIRC for % CPU used for the process, the raw value is the number of CPU ticks used since the program started, which, if interpreted as a percentage, is nonsense. It's only meaningful when compared to previous values and the elapsed time (from which you can also infer the maximum possible change).
Using RawValue makes sense for some counters and not for others. However, NextValue() often can't return a meaningful value the first time you call it, because when it's computed as a difference between samples there is no previous sample to compare against. You can just ignore that, or you can set up your code to call it once during startup so that subsequent calls get real values. Keep in mind that NextValue() is expected to be called on a timer. For example, if you're calling it on the Network Bytes Sent counter, it will return the number of bytes sent between the previous call and this one. So if you call NextValue() on that counter 2 seconds after the initial call and then again after 2 minutes, you're going to get very different values even if the network transfer is steady, because the call after 2 seconds returns the number of bytes transferred in 2 seconds, and the call after 2 minutes returns the number of bytes transferred in 2 minutes.
So, in short, you can use NextValue() for all counter types, but you must throw away or ignore the first value returned, and you must call NextValue() on a fixed interval for the results to make sense (just like the interactive Windows Performance Monitor program does).
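As a concrete illustration of that pattern, here is a small sketch (my own, using the counters from the question) that primes the counters once and then samples them on a fixed interval:

using System;
using System.Diagnostics;
using System.Threading;

var available = new PerformanceCounter("Memory", "Available Bytes", true);          // readOnly
var committed = new PerformanceCounter("Memory", "% Committed Bytes In Use", true); // readOnly

// The first call primes the counters; its return value is meaningless for computed types.
available.NextValue();
committed.NextValue();

for (int i = 0; i < 5; i++)
{
    Thread.Sleep(1000); // fixed sampling interval
    Console.WriteLine($"Available: {available.NextValue():N0} bytes, Committed: {committed.NextValue():F2} %");
}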
In my experience, and mostly per the MSDN documentation, it varies per performance counter category, and then again by the specific counter property, such as Available Bytes or % Committed in your case.
What might be what you are looking for is NextSample().
Perf Counter
Property: RawValue
Gets or sets the raw, or uncalculated, value of this counter.
^ meaning that it's not necessarily up to the developer who created it.
Method: NextValue()
Obtains a counter sample and returns the calculated value for it.
^ meaning that it's up to the developer who created it.
Method: NextSample()
Obtains a counter sample, and returns the raw, or uncalculated, value for it.
Also, something that was explained to me long ago (so take it with a grain of salt): the concept of RawValue is not always valid.
Raw values are used to create samples. NextSample() returns samples, which are averages of raw values over time and therefore much more realistic. NextValue() returns cleaned-up samples converted into either a percentage, or from bytes to kilobytes, depending on the context of the value and how the developer implemented the counter.
So my humble suggestion, even if the information it is based on is over 10 years old, is to abandon usage of RawValue and use NextSample() in its place, should you need a realistic/accurate value.
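A minimal sketch of that NextSample() idea (my own example counter, not from the original answer): take two samples a fixed interval apart and let CounterSample.Calculate apply the counter-type-specific formula.

using System;
using System.Diagnostics;
using System.Threading;

var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total", true); // readOnly

CounterSample first = cpu.NextSample();
Thread.Sleep(1000);                     // fixed interval between samples
CounterSample second = cpu.NextSample();

// Calculate applies the formula for the counter's CounterType to the two raw samples,
// which is essentially what NextValue() does internally between successive calls.
float value = CounterSample.Calculate(first, second);
Console.WriteLine($"% Processor Time over the interval: {value:F2}");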

TcpClient.ReceiveTimeout is not working on small values?

I have tried the following cases and used a stopwatch to measure the actual time it takes for the socket to receive. Note that the timeouts are in milliseconds.
Dim NextClient As New TcpClient          ' assumed to be connected before the stream is read
NextClient.ReceiveTimeout = 1            ' Case 1
NextClient.Client.ReceiveTimeout = 1     ' Case 2
Dim ns As Net.Sockets.NetworkStream = NextClient.GetStream()
ns.ReadTimeout = 1                       ' Case 3
Dim gbytes(5996) As Byte                 ' receive buffer
Dim sw As New Stopwatch
sw.Start()
ns.Read(gbytes, 0, 5997)
sw.Stop()
BTW, ns.CanTimeout returns true. Also, I am not really expecting a 1 ms precision and it is only for testing purposes. I have actually started with 500 ms but first wanted to test with this.
In all cases and their combinations, even though I have measured 100+ milliseconds each time with the stopwatch, I receive the data with no exceptions. If I intentionally delay the response by a few seconds, however, I can get the exception. But even my ping / 2 to the server is much more than 1 ms.
Oddly, if I set the timeout to 1000 ms and the server response takes about 1020 ms the exception is triggered.
So, is there a minimum value or something else?
MSDN:
The ReceiveTimeout property determines the amount of time that the Read method will block until it is able to receive data. This time is measured in milliseconds. If the time-out expires before Read successfully completes, TcpClient throws an IOException. There is no time-out by default.
... if I set the timeout to 1000 ms and the server response takes about 1020 ms the exception is triggered. Well, if the response takes longer than the configured timeout, you'll get the exception.
If the time-out period is exceeded, the Receive method will throw a SocketException.
... is there a minimum value or something else?
Applicable values range from -1, through 0, up to Int32.MaxValue.
The time-out value, in milliseconds. The default value is 0, which indicates an infinite time-out period. Specifying -1 also indicates an infinite time-out period.
However, the underlying hardware and/or OS determines the resolution/granularity of the timeout (e.g., Windows typically has a default timer resolution of 15.625 ms; consequently, exceptions are only raised at multiples of 15.625 ms).
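A rough way to observe that granularity yourself is a sketch along these lines (the host and port are placeholders for a server that accepts the connection but sends nothing):

using System;
using System.Diagnostics;
using System.IO;
using System.Net.Sockets;

var client = new TcpClient();
client.Connect("127.0.0.1", 9000);     // placeholder endpoint that never replies
client.ReceiveTimeout = 1;             // request a 1 ms timeout

var buffer = new byte[4096];
var sw = Stopwatch.StartNew();
try
{
    client.GetStream().Read(buffer, 0, buffer.Length);
}
catch (IOException)
{
    // With a 1 ms request, the timeout typically fires only on the next timer tick,
    // often around 15-16 ms with default Windows timer settings.
    Console.WriteLine($"Timed out after {sw.ElapsedMilliseconds} ms");
}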
The definition from Microsoft appears to be here:
https://msdn.microsoft.com/en-us/library/bk6w7hs8%28v=vs.110%29.aspx
and the remark against it reads:
"If the read operation does not complete within the time specified by this property, the read operation throws an IOException."
So the issue is somewhat complicated because, as usual, it relies on the wording being written accurately and distinctly.
The definition makes no reference to any kind of timer quanta or limitations of any timer, therefore if you specify a timeout of 1000 ms and a read is not received within 1000 ms then an exception must be thrown, otherwise the mechanism is not compliant with its definition.
To make it correct, any definition would have to specify some value of timer accuracy, otherwise there is no workable definition of 'exceeds'. Just because the timeout is specified as an integer doesn't mean the interval is exceeded after 1001 ms - you could justifiably say it's exceeded after 1000.0000000000000001 ms.
There is also no definition of 'complete' when it states "If the read operation does not complete...".
Does it mean if it hasn't read ALL the bytes you asked for, or, if you ask for a partial read, does it mean if it hasn't read anything?
It's just not worded very well, and I suspect that Microsoft would say that the definition is wrong rather than the function being wrong, which doesn't help you much because they could change the way the function works between releases in a way that might break your code without even commenting on it.
Provided it still does what the description says then it still works according to spec, and if you use features that are 'undocumented' - in other words things you observe that it does rather than things that the specification says it does - then that's just tough.
Unfortunately this simply doesn't do what the spec says in the first place, so all you can do is to use and exploit its 'observed behaviour' and hope that it stays the same.

Why is the data type of System.Timers.Timer.Interval a double?

This is a bit of an academic question as I'm struggling with the thinking behind Microsoft using double as the data type for the Interval property!
Firstly, from MSDN, Interval is the time, in milliseconds, between Elapsed events; I would interpret that to be a discrete number, so why the use of a double? Surely int or long makes more sense!?
Can Interval support values like 5.768585 (5.768585 ms)? Especially when one considers System.Timers.Timer to have nowhere near sub-millisecond accuracy... Most accurate timer in .NET?
Seems a bit daft to me... Maybe I'm missing something!
Disassembling shows that the interval is consumed via a call to (int)Math.Ceiling(this.interval) so even if you were to specify a real number, it would be turned into an int before use. This happens in a method called UpdateTimer.
Why? No idea, perhaps the spec said that double was required at one point and that changed? The end result is that double is not strictly required, because it is eventually converted to an int and cannot be larger than Int32.MaxValue according to the docs anyway.
Yes, the timer can "support" real numbers, it just doesn't tell you that it silently changed them. You can initialise and run the timer with 100.5d, it turns it into 101.
And yes, it is all a bit daft: 4 wasted bytes, potential implicit casting, conversion calls, explicit casting, all needless if they'd just used int.
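A tiny sketch of that rounding behavior (based on the disassembly described above; the Ceiling call is internal to the timer and is reproduced here only to show the effect):

using System;

var timer = new System.Timers.Timer { Interval = 100.5 };
Console.WriteLine(timer.Interval);                     // 100.5 -- the stored double
Console.WriteLine((int)Math.Ceiling(timer.Interval));  // 101   -- what the timer effectively uses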
The reason to use a double here is the attempt to provide enough accuracy.
In detail: The systems interrupt time slices are given by ActualResolution which is returned by NtQueryTimerResolution(). NtQueryTimerResolution is exported by the native Windows NT library NTDLL.DLL. The System time increments are given by TimeIncrement which is returned by GetSystemTimeAdjustment().
These two values determine the behavior of the system timers. They are integer values expressed in 100 ns units. However, this is already insufficient for certain hardware today. On some systems ActualResolution is returned as 9766, which would correspond to 0.9766 ms. But in fact these systems are operating at 1024 interrupts per second (tuned by a proper setting of the multimedia interface). 1024 interrupts a second make the interrupt period 0.9765625 ms. This level of detail reaches into the 100 ps regime and can therefore not be held in the standard ActualResolution format.
Therefore it was decided to put such time parameters into a double. But: this does not mean that all of the possible values are supported/used. The granularity given by TimeIncrement will persist, no matter what.
When dealing with timers it is always advisable to look at the granularity of the parameters involved.
So back to your question: Can Interval support values like 5.768585 (ms) ?
No, the system I've taken as an example above cannot.
But it can support 5.859375 (ms)!
Other systems with different hardware may support other numbers.
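A quick arithmetic check of that example (assuming the 1024-interrupts-per-second system described above):

using System;

double periodMs = 1000.0 / 1024;         // 0.9765625 ms per interrupt
Console.WriteLine(periodMs);             // cannot be expressed exactly in 100 ns units
Console.WriteLine(6 * periodMs);         // 5.859375 ms -- the supported value quoted above
Console.WriteLine(5.768585 % periodMs);  // nonzero remainder: not a whole multiple, so not supported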
So introducing a double here is not such a stupid idea, and it actually makes sense. Spending another 4 bytes to finally get things right is a good investment.
I've summarized some more details about Windows time matters here.

How to count the data sent or received by my PC (all processes/programs)?

I need to count the amount (in B/kB/MB/whatever) of data sent and received by my PC, by every running program/process.
Let's say I click "Start counting" and I get the sum of everything sent/received by my browser, FTP client, system updates, etc., from that moment until I choose "Stop".
To make it simpler, I want to count data transferred via TCP only - if it matters.
For now, I have got the combo-box list of NICs in the PC (based on the comment in the link below).
I tried to change the code given here but I failed, getting strange out-of-nowhere values in dataSent/dataReceived.
I also read the answer to question 442409, but as far as I can see it is about data sent/received by the program itself, which doesn't fit my requirements.
Perfmon should have counters for this type of thing that you want to do, so look there first.
Alright, I think I've found the solution, but maybe someone will suggest something better...
I made a timer (tested with a 10 ms interval) which reads the "Bytes Received/sec" PerformanceCounter value, adds it to a global "temporary" sum variable, and also increments a sample counter (in case there is any lag). Then I made a second timer with a 1 s interval, which takes the temporary sum, divides it by the sample counter, and adds the result to the overall amount (also global). Then it resets the temporary sum and the counter.
I'm just not sure if it is the right method, because I don't know how the values of the "Bytes Received/sec" PerformanceCounter vary during that one second. Maybe I should make some kind of histogram and get the average value?
For now, downloading an 8.6 MB file gave me a 9.2 MB overall amount - is it possible that other processes would generate that much network activity in less than 20 seconds?
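For what it's worth, a simplified sketch of the same idea (my own, with a placeholder NIC instance name) that samples the per-second counter once a second and accumulates the readings:

using System;
using System.Diagnostics;
using System.Threading;

string nic = "YOUR NIC INSTANCE NAME";   // must match an instance in the "Network Interface" category
var received = new PerformanceCounter("Network Interface", "Bytes Received/sec", nic, true);

received.NextValue();                    // prime the counter; the first value is meaningless
double totalBytes = 0;
for (int i = 0; i < 20; i++)             // ~20 seconds of measurement
{
    Thread.Sleep(1000);                  // 1 s sampling interval
    totalBytes += received.NextValue();  // rate over a 1 s window ~ bytes received in that second
}
Console.WriteLine($"Approx. bytes received: {totalBytes:N0} ({totalBytes / (1024 * 1024):F1} MB)");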

Keep track of number of events per timespan

What's the best way, in C# to keep track of the number of events per timespan?
For example, I want to limit my TCP application to, say, a maximum of 10 requests per minute before setting a flag. The TCP application is intended to be as efficient as possible and runs as a Windows service.
Maybe I should work on it tomorrow when my brain is less tired...
Thanks!
Check out the TimeSpan object - keep track of the DateTime when you process each event, and compare the difference using TimeSpan. If the quantity of events exceeds 10 and the TimeSpan is still under 60 seconds, you'll know to set the flag.
You cannot really limit the number of TCP "events" but you can throttle the amount of data that you send (simply by sleeping your thread that returns the data).
This way you can limit the amount of data used per TCP connection (e.g. to a maximum of 1k/sec).
Put time stamps in a queue. As long as the queue is full, don't let an event happen. When DateTime.Now - timeSpan > next_item_in_queue.TimeStamp, remove the item from the queue.
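A minimal sketch of that queue-of-timestamps idea (the type and member names are mine, not from the answer): allow at most a fixed number of events per sliding window, evicting timestamps as they age out.

using System;
using System.Collections.Generic;

var limiter = new RateLimiter(limit: 10, window: TimeSpan.FromMinutes(1));
for (int i = 0; i < 12; i++)
    Console.WriteLine($"Request {i + 1}: allowed = {limiter.TryRecordEvent()}"); // 11th and 12th print False

class RateLimiter
{
    private readonly Queue<DateTime> _timestamps = new Queue<DateTime>();
    private readonly int _limit;
    private readonly TimeSpan _window;

    public RateLimiter(int limit, TimeSpan window)
    {
        _limit = limit;
        _window = window;
    }

    public bool TryRecordEvent()
    {
        DateTime now = DateTime.UtcNow;

        // Evict timestamps that have aged out of the sliding window.
        while (_timestamps.Count > 0 && now - _timestamps.Peek() > _window)
            _timestamps.Dequeue();

        if (_timestamps.Count >= _limit)
            return false;               // over the limit -> caller can set its flag

        _timestamps.Enqueue(now);
        return true;
    }
}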
