What's the best way, in C# to keep track of the number of events per timespan?
For example, I want to limit my TCP application to, say, a maximum of 10 requests per minute before setting a flag. The TCP application is intended to be as efficient as possible and runs as a Windows service.
Maybe I should work on it tomorrow when my brain is less tired...
Thanks!
Check out the TimeSpan object - keep track of the DateTime when you process each event, and compare the difference using TimeSpan. If the quantity of events exceeds 10 and the TimeSpan is still under 60 seconds, you'll know to set the flag.
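For instance, a minimal sketch of that idea (the class and method names here are just illustrative, not from the question):

using System;
using System.Collections.Generic;

class RequestRateTracker
{
    private readonly List<DateTime> _timestamps = new List<DateTime>();

    // Record the current event and report whether the 10-per-60-seconds limit is exceeded.
    public bool RecordAndCheckLimit()
    {
        DateTime now = DateTime.UtcNow;
        _timestamps.Add(now);
        // Forget anything that happened more than 60 seconds ago
        _timestamps.RemoveAll(t => now - t > TimeSpan.FromSeconds(60));
        // More than 10 events left in the window -> set the flag
        return _timestamps.Count > 10;
    }
}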
You cannot really limit the number of TCP "events" but you can throttle the amount of data that you send (simply by sleeping your thread that returns the data).
This way you can limit the amount of data used per TCP connection (e.g. to a maximum of 1k/sec).
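A rough sketch of that kind of throttling (the stream and helper name are placeholders, not from the answer):

using System;
using System.IO;
using System.Threading;

static class ThrottledSender
{
    // Writes the data in chunks of roughly bytesPerSecond, sleeping one second
    // between chunks, which caps the effective rate at about bytesPerSecond (e.g. 1 kB/s).
    public static void SendThrottled(Stream stream, byte[] data, int bytesPerSecond = 1024)
    {
        int offset = 0;
        while (offset < data.Length)
        {
            int chunk = Math.Min(bytesPerSecond, data.Length - offset);
            stream.Write(data, offset, chunk);
            offset += chunk;
            if (offset < data.Length)
                Thread.Sleep(1000); // crude pacing: one chunk per second
        }
    }
}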
Put time stamps in a queue. As long as the queue is full, don't let an event happen. When DateTime.Now - timeSpan > next_item_in_queue.TimeStamp, remove the item from the queue.
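A minimal sketch of that queue-based gate (illustrative names, assuming the 10-events-per-minute limit from the question):

using System;
using System.Collections.Generic;

class SlidingWindowGate
{
    private readonly Queue<DateTime> _queue = new Queue<DateTime>();
    private readonly int _limit;
    private readonly TimeSpan _window;

    public SlidingWindowGate(int limit, TimeSpan window)
    {
        _limit = limit;   // e.g. 10
        _window = window; // e.g. TimeSpan.FromMinutes(1)
    }

    public bool TryAccept()
    {
        DateTime now = DateTime.UtcNow;
        // Remove timestamps that have aged out of the window
        while (_queue.Count > 0 && now - _window > _queue.Peek())
            _queue.Dequeue();
        if (_queue.Count >= _limit)
            return false; // the queue is full: don't let the event happen
        _queue.Enqueue(now);
        return true;
    }
}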
The documentation says: "The time, in milliseconds, between Elapsed events. The value must be greater than zero, and less than or equal to Int32.MaxValue" [2,147,483,647].
However, I need 2100 hours plus 1 minute as the Timer.Interval [7,560,060,000 ms], which is well over that limit.
How can I solve this? Is there another way?
Timers shouldn't live anywhere near that long. Fire a short timer periodically, and check the system clock to see if it's time to perform your long-running event or not.
Better yet, use Quartz.net, which is already designed for this.
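A hedged sketch of the "short timer plus clock check" approach (the class and names are illustrative): a System.Timers.Timer fires once a minute and compares the current time against the target, instead of trying to set a 2100-hour interval directly.

using System;
using System.Timers;

class LongDelayScheduler
{
    private readonly Timer _timer = new Timer(60000); // check once per minute
    private readonly DateTime _dueTimeUtc;
    private readonly Action _action;

    public LongDelayScheduler(TimeSpan delay, Action action)
    {
        // e.g. delay = TimeSpan.FromHours(2100) + TimeSpan.FromMinutes(1)
        _dueTimeUtc = DateTime.UtcNow + delay;
        _action = action;
        _timer.Elapsed += OnTick;
        _timer.Start();
    }

    private void OnTick(object sender, ElapsedEventArgs e)
    {
        if (DateTime.UtcNow >= _dueTimeUtc)
        {
            _timer.Stop();
            _action();
        }
    }
}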
I was thinking of changing the system's local time to the server's time and then using that, but I bet there are other ways to do this. I've been trying to find something like a clock in C#, but couldn't find anything. I'm receiving the server's time as a DateTime.
edit:
I need my application to work with the same time the server does while it is running. I just want to get the server's time once and, after that, have my application run in a while loop using the time I obtained from the server. There might be a difference between my system's time and the server's time (even 5 seconds), and that's why I want to do this.
It's not entirely clear what you mean, but you could certainly create your own IClock interface which you'd use everywhere in code, and then write an implementation of that which is regularly synchronized with your server (or with NTP).
My Noda Time project already uses the idea of an injectable clock - not for synchronization purposes, but for testability. (A time service is basically a dependency.) Basically the idea is workable :) You may well not find anything which already does this, but it shouldn't be too hard to write. You'll want to think about how to adjust time though - for example, if the server time gets ahead of your "last server time + local time measurements" you may want to slew it gradually rather than having a discrete jump.
This is always assuming you do want it to be local to your application, of course. Another alternative (which may well not be appropriate, depending on your context) is to require that the host runs a time synchronization client (I believe Windows does by default these days) and simply start failing if the difference between your server and the client gets too large. (It's never going to be exactly in sync anyway, or at least not for long - you'll need to allow for some leeway.)
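This is not Noda Time's actual API - just a minimal sketch of the injectable-clock idea, with an implementation that applies an offset measured against the server:

using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

// Uses the local clock plus an offset derived from the server's reported time.
public sealed class ServerSyncedClock : IClock
{
    private TimeSpan _offset = TimeSpan.Zero;

    public DateTime UtcNow
    {
        get { return DateTime.UtcNow + _offset; }
    }

    // Call this whenever you receive the server's current time.
    public void Synchronize(DateTime serverUtcNow)
    {
        _offset = serverUtcNow - DateTime.UtcNow;
    }
}

(A production version would slew the offset gradually rather than jumping, as noted above.)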
The answer @JonSkeet provided to sync the times looks good; I just wanted to point out a few things.
As @Alexei already said, users need admin privileges to change their local time (in Windows at least), but there may also be other issues that can cause the time to be out of sync (a bad internet connection, hacks, etc.). This means there is no guarantee that the client time is indeed the same as the server time, so you will at least need to check the time the request was received server-side anyway. Plus, there might also be a usability issue here: would I want an application to be able to change the time of my own local machine? Hell no.
To sum things up:
Check the time of the request server-side at least
Don't change the time of the client machine but show some kind of indicator in your application
How to handle the indicator in your application can be done in various ways.
Show a clock in your application (your initial idea) that is periodically synced with the server
Show some kind of countdown ("you can submit after x seconds.."), push a resetCountdown request to the clients when a request is received.
Enable a 'send button' or whatever you have; this would work similarly to the countdown.
Just remember, it's nearly impossible to validate a request like this client-side, so you have to build in some checks server-side!
I actually wanted to write a comment but it got kind of long.. :)
Okay, a bit of necromancy as this is 6 years old, but I had to deal with a similar problem for a network game.
I employed a technique I refer to as "marco-polo", for reasons that will be obvious soon. It requires the two clocks to be able to exchange messages, and its accuracy depends on how fast they can do that.
Disclaimer: I am fairly certain I am not the first to do this, and that this is the most rudimentary way to synchronize two clocks. Still I didn't find a documented way of doing so.
At Clock B (The clock we're trying to synchronize) we do the following ::
// Log the timestamp (kept in a field so the reply handler below can use it)
localTime_Marco_Send = DateTime.UtcNow;
// Send that to clock A
SendSyncRequest();
// Wait for an answer
Sleep(..);
At Clock A (the reference clock) we have the following handler ::
// This is triggered by SendSyncRequest
void OnReceiveSyncRequest()
{
// We received "Marco" - Send "Polo"
SendSyncReply(DateTime.UtcNow);
}
And back at Clock B ::
// This is triggered by SendSyncReply
void OnReceiveSyncReply(DateTime remoteHalfTime)
{
// Log the time we received it
DateTime localTime_Polo_Receive = DateTime.UtcNow;
// The remote time is somewhere between the two local times
// On average, it will be in the middle of the two
DateTime localHalfTime = localTime_Marco_Send +
    TimeSpan.FromTicks((localTime_Polo_Receive - localTime_Marco_Send).Ticks / 2);
// As a result, the estimated dT from A to B is
TimeSpan estimatedDT_A_B = localHalfTime - remoteHalfTime;
}
As a result we now have access to a nifty TimeSpan we can subtract from our current local time to estimate the remote time
DateTime estimatedRemoteTime = DateTime.UtcNow - estimatedDT_A_B;
The accuracy of this estimate is subject to the round-trip time of the send/receive exchange, and you should also account for clock drift (so you should do this more than once):
Round-trip time. If it were instant, you'd have the exact dT. If it takes 1 second to go and come back, you don't know whether the delay was on the sending or the receiving leg. As a result, your error is 0 < e < RTT, and on average it will be RTT/2. If you know that one leg (send or receive) takes longer than the other, use that to your advantage: the time you received is not the half-time, but is shifted according to how long each leg takes.
Clock drift. CPU clocks drift, maybe 1 s per day. So poll again once potential drift may start to play an important role.
Your server should always save the time in UTC.
You can save the time in UTC on the server like this:
DateTime utcTime = new DateTime(0, DateTimeKind.Utc);
or:
DateTime utcTimeNow = DateTime.UtcNow;
On the client, when you get the time (which is stored in UTC), you can convert it to local time like this:
public DateTime ToLocalTime(DateTime utcTime)
{
//Assumes that even if utcTime.Kind is not properly defined, it is indeed UTC time
DateTime serverTime = new DateTime(utcTime.Ticks, DateTimeKind.Utc);
return TimeZoneInfo.ConvertTimeFromUtc(serverTime, m_localTimeZone);
}
If you want to change your local time zone, here is a code example of how to read the time zone to use from config:
string localTimeZoneId = sysParamsHelper.ReadString(LOCAL_TIME_ZONE_ID_KEY, LOCAL_TIME_ZONE_DEFAULT_ID);
ReadOnlyCollection<TimeZoneInfo> timeZones = TimeZoneInfo.GetSystemTimeZones();
foreach (TimeZoneInfo timeZoneInfo in timeZones)
{
if(timeZoneInfo.Id.Equals(localTimeZoneId))
{
m_localTimeZone = timeZoneInfo;
break;
}
}
if (m_localTimeZone == null)
{
m_logger.Error(LogTopicEnum.AMR, "Could not find time zone with id: " + localTimeZoneId + " . will use default time zone (UTC).");
m_localTimeZone = TimeZoneInfo.Utc;
}
In my application I have used a number of System.Threading.Timer instances, set to fire every 1 second. The callback does run roughly every second, but the millisecond component of the firing time is different each time.
In my application I use an OPC server and an OPC group. One thread reads data from the OPC server (a variable keeps changing its value, and I want to log the moments when it changes every 1 s),
and another thread reads that data from the first thread every 1 s and stores it in a MySQL database.
The problem is that when the second thread reads from the first, it gets stale values: for example, reading at 10:28:01.530 returns the data from 10:28:00.260. So I want to arrange the threads so that the first one runs at the .000 millisecond mark and the second at the .500 mark; that way the first thread updates the data at .000 and the second thread reads it at .500.
My output is given below:
10:28:32.875
10:28:33.390
10:28:34.875
....
10:28:39.530
10:28:40.875
However, I want following results:
10:28:32.000
10:28:33.000
10:28:34.000
....
10:28:39.000
10:28:40.000
How can the timer be set so the callback is executed at "000 milliseconds"?
First of all, it's impossible. Even if you schedule your 'events' to fire a few milliseconds ahead of schedule and then compare the millisecond component of the current time with zero in a loop, control can be taken away from your code at any given moment.
You will have to rethink your design a little: don't depend on exactly when the event fires, but design an algorithm that compensates for the milliseconds of delay.
Also, Threading.Timer won't help you much here; you would have a better chance using your own thread that periodically (see the sketch after this list):
checks the current time and computes how long remains until the next full second
Sleep()s for that amount minus a 'spice' factor
does the work you have to do.
You'll calculate your 'spice' factor from the results you are getting: does the sleep finish ahead of or behind schedule?
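A rough sketch of that loop (the 'spice' value is the empirical correction factor mentioned above; 15 ms here is only a guess):

using System;
using System.Threading;

static class SecondAligner
{
    public static void RunAlignedToFullSeconds(Action work, int spiceMs = 15)
    {
        while (true)
        {
            int untilNextSecond = 1000 - DateTime.Now.Millisecond;
            int sleepMs = untilNextSecond - spiceMs;
            if (sleepMs > 0)
                Thread.Sleep(sleepMs);
            work(); // lands close to, but never exactly on, the .000 mark
        }
    }
}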
If you give more information about your apparent need to have the event fire at exactly zero milliseconds, I could help you get rid of that requirement.
HTH
I would say that it's impossible. You have to understand that a CPU context switch takes time (if another process is running, you have to wait - the CPU scheduler is working). Each CPU tick takes some time, so synchronizing to exactly 0 milliseconds is impossible. Maybe by setting a high priority for your process you can get closer to 0, but you will never fully achieve it.
IMHO it will be impossible to really get a timer to fire exactly every 1 second (to the millisecond) - even in hardcore assembler this would be a very hard task on a normal Windows machine.
I think the first thing you need to do is set the right dueTime for the timer. I do it like this:
dueTime = 1000 - DateTime.Now.Millisecond + X; where X serves for accuracy and you need to select it by testing. Then, each time Threading.Timer ticks, it runs on a thread from the CLR thread pool, and as tests show this thread is different each time. Creating threads slows the timer down; because of this you can use WaitableTimer, which will always run on the same thread. Instead of WaitableTimer you can use the Thread.Sleep method in the following way:
Thread.CurrentThread.Priority = ThreadPriority.Highest; // If timing is really critical
Thread.Sleep(1000 - DateTime.Now.Millisecond + 50); // Align roughly to the next second boundary
while (SomeBoolCondition)
{
Thread.Sleep (980); // 1000 ms = 1 second, but some ms will be spent exiting Sleep
//Here you write your code
}
It will work faster than a timer.
How can I do Thread.Sleep(10.4166667)?
OK, I see now that Sleep is not the way to go.
So I use a Timer, but a timer also works in milliseconds and I need something more precise.
Is there a timer with nanosecond accuracy?
So you want your thread to sleep precisely for that time and then resume? Forget about it. This parameter tells the system to wake the thread after at least this number of milliseconds. At least. And after resuming, the thread could be put to sleep again in the blink of an eye. That's just how operating systems work and you cannot control it.
Please note that Thread.Sleep sleeps for as long as you tell it (and not even that precisely), no matter how long the code before or after it takes to execute.
Your question seems to imply that you want some code to be executed in certain intervals, since a precise time seems to matter. Thus you might prefer a Timer.
To do such a precise sleep you would need to use a real time operating system and you would likely need specialized hardware. Integrity RTOS claims to respond to interrupts in nanoseconds, as do others.
This isn't going to happen with C# or any kind of high level sleep call.
Please note that the argument is in milliseconds, so 10 is 10 milliseconds. Are you sure you want 10.41 etc milliseconds? If you want 10.41 seconds, then you can use 10416.
The input to Thread.Sleep is the number of milliseconds for which the thread is blocked. After that it will be runnable, but you have no influence over when it is actually scheduled. I.e. in theory the thread could wait forever before resuming execution.
It hardly ever makes sense to rely on specific number of milliseconds here. If you're trying to synchronize work between two threads there are better options than using Sleep.
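For example (a hedged illustration, not from the answer): signal the consumer with an AutoResetEvent instead of having it Sleep and poll.

using System.Threading;

class ProducerConsumerSignal
{
    private readonly AutoResetEvent _dataReady = new AutoResetEvent(false);
    private int _latestValue;

    public void Produce(int value)
    {
        _latestValue = value;
        _dataReady.Set();     // wake the consumer immediately
    }

    public int Consume()
    {
        _dataReady.WaitOne(); // blocks until Produce() signals
        return _latestValue;
    }
}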
As you already mentioned, you could combine DispatcherTimer with Stopwatch (making sure IsHighResolution and Frequency suit your needs). Start the timer and the stopwatch, and on discrete ticks of the timer check the exact elapsed time on the stopwatch.
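A hedged sketch of that idea (a plain System.Threading.Timer is used here instead of a WPF DispatcherTimer so the sample stays self-contained; the names are illustrative): the coarse timer ticks often, and the Stopwatch decides whether the real interval has elapsed.

using System;
using System.Diagnostics;
using System.Threading;

class StopwatchPacedLoop
{
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();
    private readonly TimeSpan _interval = TimeSpan.FromSeconds(1);
    private TimeSpan _nextDue = TimeSpan.FromSeconds(1);
    private Timer _timer; // kept as a field so it is not garbage collected

    public void Start()
    {
        Console.WriteLine("IsHighResolution: " + Stopwatch.IsHighResolution + ", Frequency: " + Stopwatch.Frequency);
        // Tick every 50 ms; the Stopwatch decides whether a full interval has really elapsed.
        _timer = new Timer(_ =>
        {
            if (_stopwatch.Elapsed >= _nextDue)
            {
                _nextDue += _interval;
                Console.WriteLine("Elapsed: " + _stopwatch.Elapsed);
            }
        }, null, 0, 50);
    }
}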
If you are trying to rate-limit a calculation and insist on using only Thread.Sleep, be aware that there is an underlying kernel pulse rate (roughly 15 ms), so your thread will only resume when a pulse occurs. The guarantee provided is to "wait at least the specified duration." For example, if you call Thread.Sleep(1) (to wait 1 ms) and the last pulse was 13 ms ago, you will end up waiting 2 ms until the next pulse comes.
The draw synchronization I implemented for a rendering engine does something similar to dithering, so that the quantization to 15 ms intervals ends up uniformly distributed around my desired interval. It is mostly just a matter of subtracting half the pulse interval from the sleep duration, so only half the invocations wait the extra duration to the next 15 ms pulse, and half occur early.
public class TimeSynchronizer {
//see https://learn.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
public const double THREAD_PULSE_MS = 15.6d;//TODO read exact value for your system
public readonly TimeSpan Min = TimeSpan.Zero;
public TimeSynchronizer(TimeSpan? min = null) {
if (min.HasValue && min.Value.Ticks > 0L) this.Min = min.Value;
}
private DateTime _targetTimeUtc = DateTime.UtcNow;//you may wish to defer this initialization so the first Synchronize() call assuredly doesn't wait
public void Synchronize() {
if (this.Min.Ticks > 0L) {
DateTime nowUtc = DateTime.UtcNow;
TimeSpan waitDuration = this._targetTimeUtc - nowUtc;
//store the exact desired return time for the next interval
if (waitDuration.Ticks > 0L)
this._targetTimeUtc += this.Min;
else this._targetTimeUtc = nowUtc + this.Min;//missed it (this does not preserve absolute synchronization and can de-phase from metered interval times)
if (waitDuration.TotalMilliseconds > THREAD_PULSE_MS/2d)
Thread.Sleep(waitDuration.Subtract(TimeSpan.FromMilliseconds(THREAD_PULSE_MS/2d)));
}
}
}
I do not recommend this solution if your nominal sleep durations are significantly less than the pulse rate, because it will frequently not wait at all in that case.
The following screenshot shows rough percentile bands of how long it truly takes (from buckets of 20 samples each - dark green are the median values), with a (nominal) minimum duration between frames set at 30 fps (33.333 ms).
I am suspicious that the exact pulse duration is 1 second / 600, since in SQL Server a single DateTime tick is exactly 1/300th of a second.
I need to count the amount (in B/kB/MB/whatever) of data sent and received by my PC, by every running program/process.
Let's say I click "Start counting" and I get the sum of everything sent/received by my browser, FTP client, system updates, etc., from that moment until I choose "Stop".
To make it simpler, I want to count data transferred via TCP only - if it matters.
For now, I got the combo list of NICs in the PC (based on the comment in the link below).
I tried to change the code given here but I failed, getting strange out-of-nowhere values in dataSent/dataReceived.
I also read the answer to question 442409, but as far as I can see it is about data sent/received by the program itself, which doesn't fit my requirements.
Perfmon should have counters for this type of thing that you want to do, so look there first.
Alright, I think I've found the solution, but maybe someone will suggest something better...
I made a timer (tested with a 10 ms interval) which reads the "Bytes Received/sec" PerformanceCounter value, adds it to a global "temporary" variable, and also increments a sample counter (in case there is any lag). Then I made a second timer with a 1 s interval, which takes the accumulated sum, divides it by the sample counter, and adds the result to the overall total (also global). Then it resets the temporary sum and the counter.
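A hedged sketch of that two-timer scheme ("Network Interface" / "Bytes Received/sec" are standard Windows counters; the NIC instance name below is a placeholder you would replace with your own):

using System;
using System.Diagnostics;
using System.Threading;

class ReceivedBytesAccumulator
{
    private readonly PerformanceCounter _counter =
        new PerformanceCounter("Network Interface", "Bytes Received/sec", "YOUR_NIC_INSTANCE_NAME");

    private double _temporarySum;
    private int _sampleCount;
    private double _totalBytes;
    private Timer _sampler, _aggregator; // fields so the timers are not collected

    public double TotalBytes { get { return _totalBytes; } }

    public void Start()
    {
        // Fast timer: sample the counter every 10 ms and accumulate
        _sampler = new Timer(_ =>
        {
            _temporarySum += _counter.NextValue();
            _sampleCount++;
        }, null, 0, 10);

        // Slow timer: once per second, average the samples into the running total
        _aggregator = new Timer(_ =>
        {
            if (_sampleCount > 0)
                _totalBytes += _temporarySum / _sampleCount;
            _temporarySum = 0;
            _sampleCount = 0;
        }, null, 1000, 1000);
    }
}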
I'm just not sure whether this is the right method, because I don't know how the value of the "Bytes Received/sec" PerformanceCounter varies within that one second. Maybe I should build some kind of histogram and take the average value?
For now, downloading an 8.6 MB file gave me 9.2 MB overall - is it possible that the other processes generated that much network activity in less than 20 seconds?