Exponentially Delay Client-Side API Calls - C#

I'm interfacing with a third party API that returns a call limit threshold and how many calls I've used of the threshold so far. I believe it's 60 calls every minute. After 1 minute it resets.
I would like to delay my API calls more and more as I approach that limit, sort of like an exponential curve where the delay reaches double the max threshold (in seconds) once I hit the max threshold.
So at 0 it's 0 delay. At 60 it would be a 120 second delay.
And if they change the call limit, I want to be able to respond and adjust my maximum delay again to 2 * the new limit, still following the same exponential-style curve.
What algorithm can I use for this? (Preferably VB.NET, otherwise C#)

You could potentially do something along these lines; we did this so we wouldn't bombard our mail server when a camera went offline or had an error.
public static class Delay
{
    public static double ByInterval(int maximum, int interval) =>
        Math.Round(maximum / (Math.Pow(2, interval) - 1), 0);
}
So, for instance, if the maximum delay should be one hundred twenty and we'd like an interval of three, the output would be seventeen (120 / (2^3 - 1) ≈ 17.14, rounded). I'm also rounding to a whole number. Not sure if this is what you're looking for, but we coupled this to an appender, so we store the emails until our threshold is met. We converted our values (seconds) into ticks with (10000000 * Delay.ByInterval(120, 3)), for instance, since we stored ticks primarily.
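For the curve described in the question (zero delay at zero usage, rising exponentially to twice the call limit, in seconds, once usage reaches the limit), a minimal sketch could look like the following. The names callsUsed and callLimit are assumptions standing in for whatever the API's rate-limit response reports, and the Task.Delay line at the end is just one way to apply the result.

using System;

public static class RateLimitDelay
{
    // Delay in seconds: 0 when no calls have been used, rising along an
    // exponential curve to 2 * callLimit seconds when usage reaches the limit.
    // callsUsed and callLimit are assumed to come from the API's rate-limit info.
    public static double Seconds(int callsUsed, int callLimit)
    {
        if (callLimit <= 0 || callsUsed <= 0) return 0;

        double usage = Math.Min((double)callsUsed / callLimit, 1.0); // 0..1
        double maxDelay = 2.0 * callLimit;                           // e.g. 120 s for a limit of 60

        // (2^usage - 1) runs from 0 to 1 as usage runs from 0 to 1,
        // so the delay runs from 0 to maxDelay.
        return maxDelay * (Math.Pow(2, usage) - 1);
    }
}

// Possible usage:
// await Task.Delay(TimeSpan.FromSeconds(RateLimitDelay.Seconds(used, limit)));

Because (2^1 - 1) = 1, no extra normalization is needed for the delay to land exactly on 2 * callLimit at the limit, and if the provider changes the limit the curve rescales automatically, since both the exponent and the maximum are derived from callLimit.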

Related

What is the most efficient method of doing repeating timers?

When creating a looping timer for something like a game, with an "update" function that's called repeatedly at intervals, there are several ways to represent this, including:
Option A: Incrementing the timer value and resetting it to 0:
Keep a current timer value and add the delta time to it each update.
When the timer value exceeds the timer maximum, set the timer value to 0.
Option A presents the issue that the timer doesn't actually fire once every N seconds: since the timer value can exceed the maximum between checks, it will pretty much always fire slightly later than the intended time. If such precision isn't needed, for example when doing something expensive every few seconds instead of every update, this method is acceptable.
Option B: Same as A, except reduce the value by the timer maximum whenever it exceeds the maximum. This solves the issue of A, ensuring that the timer triggers almost exactly on its intended timing. It presents a new issue, though: if the timer value ever exceeds the maximum by a factor of more than 2, such as when an update "hangs" for a while and the delta time becomes larger than twice the timer maximum, then the timer value will be decremented to a value that is still larger than the maximum, causing it to trigger again on the next update (as many times in a row as the value is multiples of the maximum, e.g. 8x the maximum = triggers for the next 8 loops).
This could be useful if you want to guarantee that over a very large time frame an action will be performed a certain number of times.
Option C: Same as B, except the timer reduction/reset is encapsulated in a while loop:
while(timerValue > timerMaximum) {
    timerValue -= timerMaximum;
}
This resolves option B's issue, but would be inefficient if the timer value exceeds the maximum by a huge factor.
Option D: Instead of decrementing the value when it exceeds the maximum, do the following:
timerValue = ((timerValue / timerMax) % 1) * timerMax;
This finds the timer's value as a proportion of the timer maximum and sets it to the fractional part of that proportion multiplied by the maximum, achieving a nearly identical result to option C, but without using any loops.
Since all of these methods could be useful for different applications, depending on the acceptable precision and desired behaviour, how does the performance of each of these methods compare?
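For concreteness, here is a small sketch of options C and D side by side, assuming a double-based timer driven by a per-frame delta time; the class and method names are purely illustrative. It also shows the practical difference behind "nearly identical": D leaves the same residual timer value as C, but can only ever report a single trigger per update.

public sealed class RepeatingTimer
{
    private readonly double _timerMaximum;
    private double _timerValue;

    public RepeatingTimer(double timerMaximum)
    {
        _timerMaximum = timerMaximum;
    }

    // Option C: subtract the maximum in a loop; reports one trigger per full period elapsed.
    public int UpdateLoop(double deltaTime)
    {
        _timerValue += deltaTime;
        int triggers = 0;
        while (_timerValue > _timerMaximum)
        {
            _timerValue -= _timerMaximum;
            triggers++;
        }
        return triggers;
    }

    // Option D: keep only the fractional part of value / maximum; no loop, and at most
    // one trigger is reported no matter how large the delta time was.
    public bool UpdateModulo(double deltaTime)
    {
        _timerValue += deltaTime;
        if (_timerValue <= _timerMaximum) return false;
        _timerValue = ((_timerValue / _timerMaximum) % 1) * _timerMaximum;
        return true;
    }
}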

Measure precision of timer (e.g. Stopwatch/QueryPerformanceCounter)

Given that the Stopwatch class in C# can use something like three different timers underneath, e.g.:
System timer, e.g. a precision of approx. ±10 ms; depending on the timer resolution, which can be set with timeBeginPeriod, it can be approx. ±1 ms.
Time Stamp Counter (TSC), e.g. with a tick frequency of 2.5 MHz, or 1 tick = 400 ns, so ideally a precision of that.
High Precision Event Timer (HPET), e.g. with a tick frequency of 25 MHz, or 1 tick = 40 ns, so ideally a precision of that.
how can we measure the observable precision of this? Precision being defined as
Precision refers to the closeness of two or more measurements to each
other.
Now if the Stopwatch uses HPET does this mean we can use Stopwatch to get measurements of a precision equivalent to the frequency of the timer?
I don't think so, since this requires us to be able to use the timer with zero variance or a completely fixed overhead, which as far as I can tell is not true for Stopwatch. For example, when using HPET and calling:
var before_ticks = Stopwatch.GetTimestamp();
var after_ticks = Stopwatch.GetTimestamp();
var diff_ticks = after_ticks - before_ticks;
then the diff will be, say, approx. 100 ticks or 4000 ns, and it will have some variance too.
So how could one experimentally measure the observable precision of the Stopwatch, in a way that covers all the possible timer modes underneath?
My idea would be to search for the minimum number of ticks != 0, to first establish the overhead in ticks of the Stopwatch. For the system timer this would be 0 up until e.g. 10 ms, which is 10 * 1000 * 10 = 100,000 ticks, since the system timer has a tick resolution of 100 ns, although the precision is far from this. For HPET it will never be 0, since the overhead of calling Stopwatch.GetTimestamp() is higher than the tick period of the timer.
But this says nothing about how precise we can measure using the timer. My definition would be how small a difference in ticks we can measure reliably.
The search could be performed by measuring different numbers of iterations, like so:
var before = Stopwatch.GetTimestamp();
for (int i = 0; i < iterations; ++i)
{
    action(); // calling a no-op Action delegate, since this cannot be inlined
}
var after = Stopwatch.GetTimestamp();
First, a lower bound could be found where all of, say, 10 measurements for a given number of iterations yield a non-zero number of ticks; save these measurements in an array, long[] ticksLower. Then the closest possible number of iterations that yields a tick difference that is always higher than any of the first 10 measurements could be found; save these in long[] ticksUpper.
The worst-case precision would then be the highest value in ticksUpper minus the lowest value in ticksLower.
Does this sound reasonable?
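A rough sketch of that search could look like the following; it uses a coarse doubling of the iteration count rather than finding the exact closest count, and the class name, helper names and the 10-sample constant are assumptions, not part of the question.

using System;
using System.Diagnostics;
using System.Linq;

static class StopwatchPrecision
{
    const int Samples = 10;
    static readonly Action NoOp = () => { };

    static long[] Measure(int iterations)
    {
        var ticks = new long[Samples];
        for (int s = 0; s < Samples; s++)
        {
            long before = Stopwatch.GetTimestamp();
            for (int i = 0; i < iterations; i++)
            {
                NoOp(); // a no-op delegate call, so the loop body is not optimized away
            }
            long after = Stopwatch.GetTimestamp();
            ticks[s] = after - before;
        }
        return ticks;
    }

    public static long WorstCasePrecisionTicks()
    {
        // Lower bound: smallest (power-of-two) iteration count where every sample is non-zero.
        int lowerIterations = 1;
        long[] ticksLower;
        while ((ticksLower = Measure(lowerIterations)).Any(t => t == 0))
        {
            lowerIterations *= 2;
        }

        // Upper bound: smallest (power-of-two) iteration count whose samples all exceed
        // every lower-bound sample.
        long lowerMax = ticksLower.Max();
        long[] ticksUpper;
        int upperIterations = lowerIterations;
        while ((ticksUpper = Measure(upperIterations)).Any(t => t <= lowerMax))
        {
            upperIterations *= 2;
        }

        // Worst-case precision: highest upper sample minus lowest lower sample.
        return ticksUpper.Max() - ticksLower.Min();
    }
}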
Why do I want to know the observable precision of the Stopwatch? Because this can be used for determining the length of time you would need to measure for to get a certain level of precision of micro-benchmarking measurements. I.e. for 3 digit precision the length should be >1000 times the precision of the timer. Of course, one would measure multiple times with this length.
The Stopwatch class exposes a Frequency property that is the direct result of calling SafeNativeMethods.QueryPerformanceFrequency. Here is an excerpt of the property page:
The Frequency value depends on the resolution of the underlying timing
mechanism. If the installed hardware and operating system support a
high-resolution performance counter, then the Frequency value reflects
the frequency of that counter. Otherwise, the Frequency value is based
on the system timer frequency.
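To see which of those cases applies on a given machine, a small sketch like the following prints the frequency the excerpt talks about and the corresponding nominal tick duration. Note this is the nominal resolution, not the observable precision the question is asking about.

using System;
using System.Diagnostics;

// Which timer did Stopwatch resolve to, and what is its nominal resolution?
long frequency = Stopwatch.Frequency;        // ticks per second
double nanosecondsPerTick = 1e9 / frequency;

Console.WriteLine("IsHighResolution: " + Stopwatch.IsHighResolution);
Console.WriteLine("Frequency: " + frequency + " Hz, 1 tick = " + nanosecondsPerTick + " ns");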

What are the value of Stopwatch's ticks if their duration varies?

I'm not talking about the bloodsucking spider-like disease-spreader, but rather the sort of tick I recorded here:
Stopwatch noLyme = new Stopwatch();
noLyme.Start();
. . .
noLyme.Stop();
MessageBox.Show(string.Format(
    "elapsed milliseconds == {0}, elapsed ticks == {1}",
    noLyme.ElapsedMilliseconds, noLyme.ElapsedTicks));
What the message box showed me was 17357 milliseconds and 56411802 ticks; this equates to 3250.089416373797 ticks per millisecond, or approximately 3.25 million ticks per second.
Since the ratio is such an odd one (3250.089416373797:1), I assume the time length of a tick changes based on hardware used, or other factors. That being the case, in what practical way are tick counts used? To me, it seems milliseconds hold more value. IOW: why would I care about ticks (the variable time slices)?
From the documentation (with Frequency being another property of the Stopwatch):
Each tick in the ElapsedTicks value represents the time interval equal
to 1 second divided by the Frequency.
Stopwatch.ElapsedTicks (MSDN)
Ticks are useful if you need very precise timing based on the specifics of your hardware.
You would use ticks if you want to know a very precise performance measurement that is specific to a given machine. Internal hardware mechanisms determine the conversion from ticks to actual time.
Ticks are the raw, low-level units in which the hardware measures time.
It's like asking "what use are bits when we can use ints". Well, if we didn't have bits, ints wouldn't exist!
However, ticks can be useful. When counting ticks, converting them to milliseconds is a costly process. When measuring accurately, you can count everything in ticks and convert the results to seconds at the end of the process. When comparing measurements, absolute values may not be relevant; it may only be the relative differences that are of interest.
Of course, in these days of high level language and multitasking there aren't many cases where you would go to the bare metal in this way, but why shouldn't the raw hardware value be exposed through the higher level interfaces?
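As a sketch of that approach, the raw ticks can be carried through a measurement and only converted at the end using Stopwatch.Frequency (ticks per second), which is exactly where a machine-specific ratio like 3250 ticks per millisecond comes from:

using System;
using System.Diagnostics;

var noLyme = Stopwatch.StartNew();
// ... the work being measured ...
noLyme.Stop();

// ElapsedTicks are Stopwatch ticks; Frequency is ticks per second on this machine,
// so the conversion to real time can be deferred until the very end.
double seconds = (double)noLyme.ElapsedTicks / Stopwatch.Frequency;
Console.WriteLine(noLyme.ElapsedTicks + " ticks = " + seconds + " s = " + (seconds * 1e9) + " ns");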

Calculating algorithm time

I have an algorithm that works with extremely large numbers around the order of 2 raised to the power 4,500,000. I use the BigInteger class in .NET 4 to handle these numbers.
The algorithm is very simple in that it is a single loop that reduces a large initial number based on some predefined criteria. With each iteration, the number is reduced by around 10 exponents so 4,500,000 would become 4,499,990 in the next iteration.
I am currently getting 5.16 iterations per second or 0.193798 seconds per iteration. Based on that the total time for the algorithm should be roughly 22 hours to bring the exponent value down to 0.
The problem is, as the number is reduced, the time required to process the number in memory is reduced as well. Plus, as the exponent reduces to the 200,000 range, the iterations per second become huge and the reduction per iteration also increases exponentially.
Instead of letting the algo run for a whole day, is there a mathematical way to calculate how much time it would take based on an initial starting number and iterations per second?
This would be very helpful since I can measure improvements of optimization attempts quickly.
Consider the following pseudocode:
double e = 4500000; // 4,500,000.
Random r = new Random();
while (e > 0)
{
    BigInteger b = BigInteger.Pow(2, (int)e); // System.Numerics.BigInteger
    double log = BigInteger.Log(b, 10);
    e -= Math.Abs(log * r.Next(1, 10));
}
First rewrite
double log = BigInteger.Log(b, 10);
as
double log = Math.Log(2) / Math.Log(10) * e; // approx 0.301 * e
Then you notice that the algorithm terminates after O(1) iterations in expectation (roughly a 70% chance of terminating on each iteration, since any random multiplier of 4 or more reduces e by more than e itself), so you can probably neglect the cost of everything apart from the first iteration.
The total cost of your algo is about 1 to 2 times as expensive as computing BigInteger.Pow(2, e) for the initial exponent e. For base = 2 this is a trivial bit shift; for other bases you'll need square-and-multiply.
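Putting that rewrite back into the loop, here is a sketch of the same pseudocode without materializing the BigInteger at all, since log10(2^e) is just e * log10(2); the iteration counter is only there to show how quickly it terminates:

using System;

double e = 4500000;
var r = new Random();
double log10Of2 = Math.Log(2) / Math.Log(10); // approx 0.30103

int iterations = 0;
while (e > 0)
{
    double log = log10Of2 * e; // same value BigInteger.Log(BigInteger.Pow(2, (int)e), 10) would give
    e -= Math.Abs(log * r.Next(1, 10));
    iterations++;
}

Console.WriteLine("Terminated after " + iterations + " iterations"); // typically only a handful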
There is no way to estimate the time of the unknown since you are using Random!

Calculating Averages with Performance Counters

I have a service process, and I want to use performance counters to publish the average time that it takes to complete tasks. I am using the AverageTimer32 counter to do this.
It's almost working the way I want, but not quite: When I increment the counter, it will briefly bump up to the value that I expect (watching in Performance Monitor), but then it drops right back down to zero.
So, the counter is zero, I run a task, the task completes, the counter briefly bumps up (to the correct value), but then it almost immediately falls back to zero.
I am using the AverageTimer32 counter with an AverageBase as the denominator. I increment the AverageBase by 1 every time I start a task, and then I increment the AverageTimer32 by the number of ticks to complete every time I finish the task. Can anyone give me a push?
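For reference, a minimal sketch of that increment pattern might look like the following; the category and counter names are assumptions, and the AverageTimer32 counter and its AverageBase are assumed to already exist as an adjacent pair in the category (Windows performance counters only).

using System.Diagnostics;

// Assumed names; both counters must already be registered in the category.
var avgTime = new PerformanceCounter("MyService", "Average Task Time", readOnly: false);
var avgBase = new PerformanceCounter("MyService", "Average Task Time Base", readOnly: false);

avgBase.Increment();                                   // one more task started
long start = Stopwatch.GetTimestamp();
// ... run the task ...
avgTime.IncrementBy(Stopwatch.GetTimestamp() - start); // elapsed ticks for this task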
It turns out that the reason I could not do what I wanted was that none of the performance counter types provide for automatically calculating a running average. (The "average" counters calculate an average for that moment in time, like "bytes per second".)
I wanted a running average. So, I used the RawFraction performance counter type.
There was one problem with that method: the formula multiplies the ratio by 100 to produce a percentage, and I wanted the raw average time per task, not a percentage.
So, I incremented the denominator of the fraction by 100 for every 1 operation, cancelling out the built-in factor of 100.
My result: I can now display how long it takes, on average, for my service to complete a task. If my service isn't busy, the average remains constant so that you can see the long-term trend of my service's performance.
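A sketch of that workaround, with assumed category/counter names and milliseconds as the (assumed) time unit; the denominator is bumped by 100 per operation exactly as described above, so the displayed value comes out as total time / task count (Windows-only; creating the category requires sufficient rights):

using System;
using System.Diagnostics;

const string Category = "MyService";

// One-time category setup with a RawFraction counter and its RawBase denominator.
if (!PerformanceCounterCategory.Exists(Category))
{
    var counters = new CounterCreationDataCollection
    {
        new CounterCreationData("Avg Task Time (ms)", "Running average task time",
                                PerformanceCounterType.RawFraction),
        new CounterCreationData("Avg Task Time Base", "Denominator for the running average",
                                PerformanceCounterType.RawBase),
    };
    PerformanceCounterCategory.Create(Category, "Service counters",
        PerformanceCounterCategoryType.SingleInstance, counters);
}

var numerator = new PerformanceCounter(Category, "Avg Task Time (ms)", readOnly: false);
var denominator = new PerformanceCounter(Category, "Avg Task Time Base", readOnly: false);

// Per completed task: add the elapsed time to the numerator, and 100 (not 1) to the
// denominator so RawFraction's built-in * 100 cancels out, leaving total time / task count.
var sw = Stopwatch.StartNew();
// ... run the task ...
sw.Stop();
numerator.IncrementBy(sw.ElapsedMilliseconds);
denominator.IncrementBy(100);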
