Is DateTime.Now an I/O bound operation? - c#

What happens when you call DateTime.Now?
I followed the property code in Reflector; it adds the time-zone offset of the current locale to UtcNow. Following UtcNow, in turn, eventually led me to a Win32 API call.
I reflected on it and asked a related question but haven't received a satisfactory answer yet. From the links in the comments on that question, I infer that there is a hardware unit that keeps time. But I also want to know what unit it keeps time in, and whether the CPU is used to convert that time into a human-readable form. This would shed some light on whether retrieving the date and time is I/O-bound or compute-bound.

You are deeply in undocumented territory with this question. Time is provided by the kernel: the underlying native API call is NtQuerySystemTime(). This does get tinkered with across Windows versions - Windows 8 especially heavily altered the underlying implementation, with visible side-effects.
In nature it is I/O-bound: time is maintained by the RTC (Real-Time Clock), which used to be a dedicated chip but is nowadays integrated into the chipset. But there is very strong evidence that it isn't I/O-bound in practice: time updates in sync with the clock interrupt, so very likely the interrupt handler reads the RTC and you get a copy of that value. Something you can see when you tinker with timeBeginPeriod().
And you can see when you profile it that it only takes ~7 nanoseconds on Windows 10 - entirely too fast to be I/O bound.

You seem to be concerned with blocking. There are two cases where you'd want to avoid that.
On the UI thread it's about latency. It does not matter what you do (I/O or CPU) - it can't take long, or it freezes the UI. UtcNow is super fast, so it's not a concern.
Sometimes, non-blocking I/O is used as a way to scale throughput as more load is added. Here, the only reason is to save threads, because each thread consumes a lot of resources. Since there is no async way to call UtcNow, the question is moot: you just have to call it as-is.
Since time on Windows usually advances at 60 Hz, I'd assume that a call to UtcNow reads from an in-memory variable that is written at 60 Hz. That makes it CPU-bound. But it does not matter either way.
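A quick way to check the "super fast" claim is to time a large batch of UtcNow calls with Stopwatch. This is a rough sketch; the absolute numbers vary by machine and OS version.

```csharp
using System;
using System.Diagnostics;

class UtcNowCost
{
    static void Main()
    {
        const int iterations = 10_000_000;
        long sink = 0; // consume the value so the loop isn't optimized away

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            sink += DateTime.UtcNow.Ticks; // DateTime is a value type: no heap allocation
        }
        sw.Stop();

        double nsPerCall = sw.Elapsed.TotalMilliseconds * 1_000_000.0 / iterations;
        Console.WriteLine($"~{nsPerCall:F1} ns per call (sink {sink})");
    }
}
```

Anything in the single-digit to low-tens-of-nanoseconds range is consistent with reading a kernel-maintained variable, not with performing I/O.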

.NET relies on the API. MSDN has this to say about the API:
https://msdn.microsoft.com/de-de/library/windows/desktop/ms724961(v=vs.85).aspx
When the system first starts, it sets the system time to a value based on the real-time clock of the computer and then regularly updates the time [...] GetSystemTime copies the time to a SYSTEMTIME [...]
I have found no reliable sources to back up my claim that the time is stored as a SYSTEMTIME structure, updated in place, and simply copied into the receiving buffer when GetSystemTime is called. The smallest logical unit is 100 ns, from the NtQuerySystemTime system call, but we end up with 1 millisecond in the CLR's DateTime object. The resolution is not always the same.
We might be able to figure that out for Mono on Linux, but hardly for Windows, given that the API's source code is not public. So here is an assumption: the current time is a variable in kernel address space. It is updated by the OS - frequently by the system clock timer interrupt, and less frequently perhaps from a network source (the documentation mentions that callers may not rely on monotonic behavior, since a network sync can correct the current time backwards). The OS will synchronize access to prevent concurrent writes, but otherwise it is not an I/O-expensive operation.
On recent computers, the timer interval is no longer fixed, and can be controlled by the BIOS and OS. Applications can even request lower or higher clock rates: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted
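To see the 100 ns unit mentioned above directly, you can bypass DateTime and call the Win32 API yourself. A minimal P/Invoke sketch (Windows-only; GetSystemTimeAsFileTime fills a FILETIME, i.e. a count of 100 ns intervals since January 1, 1601):

```csharp
using System;
using System.Runtime.InteropServices;

class KernelTime
{
    // Writes the current UTC time as a 64-bit count of 100 ns intervals
    // since January 1, 1601 (a FILETIME, marshaled here as a long).
    [DllImport("kernel32.dll")]
    static extern void GetSystemTimeAsFileTime(out long fileTime);

    static void Main()
    {
        GetSystemTimeAsFileTime(out long ft);
        // DateTime.FromFileTimeUtc interprets the same 100 ns unit, so the
        // two clocks should agree to within the timer update interval.
        DateTime viaApi = DateTime.FromFileTimeUtc(ft);
        Console.WriteLine($"{viaApi:O} vs {DateTime.UtcNow:O}");
    }
}
```

On Windows 8 and later there is also GetSystemTimePreciseAsFileTime, which combines the kernel's time variable with the performance counter for sub-interrupt resolution.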

Related

Best way to throttle an external application's CPU Usage

Ok - here is the scenario:
I host a server application on Amazon AWS-hosted Windows instances. (I do not have access to the source code, so I cannot resolve the issues from within the application itself.)
These specific instances are able to build up CPU credits during times of idle CPU (less than 10-20% usage) and then spend those credits during times of increased compute requirement.
My server application, however, typically runs at around 15-20% CPU usage when no clients are connected - time when I would rather throttle it down to around 5%, while keeping enough CPU throughput to accept a TCP socket from incoming clients.
When a connected client is detected, I would like to remove the throttle and allow full access to the reserve of AWS CPU Credits.
I have got code in place that can Suspend and Resume processes via C# using Windows API calls.
I am however a bit fuzzy on how to accurately attain a target cpu usage for that process.
What I am doing so far, which is having moderate success:
Looping inside another application
check the CPU usage of the server application using performance counters (I don't like these - they require a 100-1000 ms wait in order to return a % value)
I determine if the current value is above or below the target value - if above, I increase an int value called 'sleep' by 10ms
If below - 'sleep' is decreased by 10ms.
Then the application will call:
Process.Suspend();
Thread.Sleep(sleep);
Process.Resume();
Like I said - this is having moderate success.
But there are several reasons I don't like it:
1. It requires a semi-rapid loop in an external application: this might end up just shifting CPU usage to that application.
2. I'm sure there are better mathematical solutions for working out the ideal sleep time.
I came across this application : http://mion.faireal.net/BES/
It seems to do everything I want, except I need to be able to control it, and I am not a c++ developer.
It also seems able to achieve accurate CPU throttling without consuming much CPU itself.
Can someone suggest CPU throttling techniques?
Remember - I cannot modify the source code of the application being throttled - at most, I could inject code into it: but it occurs to me that if I inject suspend code into it, then the resume code could not fire etc.
An external agent program might be the best way to go.
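For what it's worth, the suspend/sleep/resume loop described above amounts to a fixed duty cycle. A sketch, assuming the undocumented ntdll functions NtSuspendProcess/NtResumeProcess (one common way such Suspend/Resume helpers are implemented; the question's own helpers may differ):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

class DutyCycleThrottle
{
    // Undocumented but long-standing ntdll calls; the asker's
    // Suspend/Resume wrappers presumably do something similar.
    [DllImport("ntdll.dll")] static extern int NtSuspendProcess(IntPtr hProcess);
    [DllImport("ntdll.dll")] static extern int NtResumeProcess(IntPtr hProcess);

    static void Throttle(Process target, double targetUsage, CancellationToken ct)
    {
        // Fixed 100 ms duty cycle: run for targetUsage of it, suspend the rest.
        const int periodMs = 100;
        int runMs = (int)(periodMs * targetUsage);

        while (!ct.IsCancellationRequested)
        {
            NtResumeProcess(target.Handle);
            Thread.Sleep(runMs);
            NtSuspendProcess(target.Handle);
            Thread.Sleep(periodMs - runMs);
        }
        NtResumeProcess(target.Handle); // always leave the process running
    }
}
```

A 100 ms period with targetUsage = 0.05 runs the process 5 ms out of every 100 ms, which caps it near 5% of one core without needing a performance-counter feedback loop.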

Get system clock/time in C# without using any objects?

I'm developing an application in C# that requires many function calls (100-1000 per second) to happen at very specific times. However, there are extremely tight specs on the latency of this application, and so due to the latency increase associated with garbage collection, it's not feasible for me to use DateTime or Timer objects. Is there some way I can access the system time as a primitive type, without having to create DateTime objects?
TL;DR: Is there an analogue for Java's System.currentTimeMillis() for C#?
What makes you think DateTime allocates objects? It's a value type. No need for a heap allocation, and thus no need for garbage collection. (As TomTom says, if you have hard latency requirements, you'll need a real-time operating system etc. If you just have "low" latency requirements, that's a different matter.)
You should be able to use DateTime.Now or DateTime.UtcNow without any issues - UtcNow is faster, as it doesn't perform any time zone conversions.
As an example, I just timed 100 million calls to DateTime.UtcNow followed by using the Hour property, and on my laptop that takes about 3.5 seconds. Using the Ticks property (which doesn't involve as much computation) takes about 1.2 seconds. Without using any property at all, it takes only 1 second.
So basically if you're only performing 1000 calls per second, it's going to be irrelevant.
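The measurement described above can be reproduced with a small loop; a sketch (timings are illustrative and machine-dependent, and the delegate indirection here adds some overhead compared to a direct loop):

```csharp
using System;
using System.Diagnostics;

class DateTimeBench
{
    internal static double Measure(string label, Func<long> read, int n)
    {
        long sink = 0; // keep the JIT from discarding the reads
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) sink += read();
        sw.Stop();
        Console.WriteLine($"{label}: {sw.ElapsedMilliseconds} ms (sink {sink})");
        return sw.Elapsed.TotalSeconds;
    }

    static void Main()
    {
        const int n = 100_000_000;
        Measure("UtcNow.Ticks", () => DateTime.UtcNow.Ticks, n);
        Measure("UtcNow.Hour ", () => DateTime.UtcNow.Hour, n);
    }
}
```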
Consider not using Windows. Simple as that. Not even "not using C#" - just not using Windows.
However, there are extremely tight specs on the latency of this application,
There are special real-time operating systems that are built for exactly this.
Is there an analogue for Java's System.currentTimeMillis()?
Yes, but that still will not help.
The best you CAN do is high-precision multimedia timers, which work like a charm but also have no real-time guarantees. The language is not the problem - your OS of choice is unsuitable for the task at hand.
GC is totally not an issue if you program smartly. Objects are not an issue; using a concurrent GC and avoiding EXCESSIVE creation of objects helps a lot. You are dramatizing a problem that is not there to start with.
There is a kernel API that can handle very low millisecond precision and can be accessed from C#:
http://www.codeproject.com/Articles/98346/Microsecond-and-Millisecond-NET-Timer
The real problem is that you must reconfigure the kernel to fire that interrupt at short intervals, or you are at the mercy of the scheduler, which does not offer such a fine resolution.
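To answer the literal question: the closest built-in analogues to Java's System.currentTimeMillis(), none of which allocate on the GC heap, are shown below (Environment.TickCount64 requires .NET Core 3.0 or later):

```csharp
using System;
using System.Diagnostics;

class CurrentMillis
{
    static void Main()
    {
        // Wall-clock milliseconds since the Unix epoch, like currentTimeMillis().
        // DateTimeOffset is a value type, so this does not touch the GC heap.
        long unixMs = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();

        // Monotonic milliseconds since boot - better for measuring intervals,
        // since it is immune to wall-clock corrections.
        long uptimeMs = Environment.TickCount64;

        // Highest-resolution monotonic source.
        long stamp = Stopwatch.GetTimestamp();
        double stampMs = stamp * 1000.0 / Stopwatch.Frequency;

        Console.WriteLine($"{unixMs} {uptimeMs} {stampMs:F3}");
    }
}
```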

Using many timers may cause application crash?

I have an application with six timers. Each timer has a different interval: 1 s, 1 s, 3 s, 3 s, 3 s, and 3 s respectively. CPU usage is consistently 2% to 3%.
On my PC this is fine because of my PC's capability.
But I suspect it may crash the application on a PC with lower capability.
Is there a more effective way to use timers, or some other way to run this in the background?
The reason I use timers is that they query the database (to get a total amount) whenever a user adds, edits, or deletes a product record - and not just product records, any record.
One 1 s timer shows the date and time on a label.
The other 1 s timer interacts with the DataGridView, updating a whole column.
The remaining timers fetch data from the MySQL server. By my estimate, at most 10 records are involved.
Thanks
It's unclear why you think you need multiple timers here, and you don't even say which timer implementation you are using - and it would likely make a difference.
Employing a single timer that triggers at a reasonable minimal precision (1 s, 100 ms, etc.) would reduce the overall overhead and would likely serve your purpose better. Of course, that's said without any indication of what you're actually trying to achieve.
It sounds as if you may have multiple issues, but to answer your question: running multiple timers will not cause your application to crash. How you implement the timers, and whether you lock the code blocks that run when a timer fires, is important. If you allow a code block to execute before a previous call to it has finished, your application can become unstable. You should look at timers and perhaps even threads. Without knowing more about what you are doing, it is difficult to provide a more definitive answer to your question.
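A sketch of the single-timer consolidation suggested above, using System.Threading.Timer with a 1 s base tick. Monitor.TryEnter implements the "skip a tick if the previous one is still running" advice, and the three job methods are placeholders for the work the question's timers do:

```csharp
using System;
using System.Threading;

class SingleTimerDispatch
{
    static int tick;
    static readonly object gate = new object();

    static void Main()
    {
        // One 1 s timer replaces the six separate timers: jobs that need a
        // 3 s period simply run on every third tick.
        using var timer = new Timer(_ =>
        {
            // Skip this tick entirely if the previous one is still running,
            // so callbacks never overlap.
            if (!Monitor.TryEnter(gate)) return;
            try
            {
                int t = Interlocked.Increment(ref tick);
                UpdateClockLabel();            // every 1 s
                RefreshGrid();                 // every 1 s
                if (t % 3 == 0) QueryTotals(); // every 3 s
            }
            finally { Monitor.Exit(gate); }
        }, null, dueTime: 0, period: 1000);

        Thread.Sleep(3500); // keep the demo alive briefly
    }

    static void UpdateClockLabel() { /* update the date/time label */ }
    static void RefreshGrid()      { /* update the DataGridView column */ }
    static void QueryTotals()      { Console.WriteLine("query MySQL"); }
}
```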

Time of program at different moments

I have just finished a project, but I've got a question from my teacher: why does my program (same algorithm, same data, same environment) finish in a different time at different moments?
Can anyone help me?
Example: right now my program runs in 1.03 s, but then it runs in 1.05 s (and sometimes faster, 1.01 s).
That happens because your program is not the only entity executing in the system and it does not get all the resources immediately at all times.
For this reason it's practically of little value to measure short execution times as they are going to vary quite noticeably. Instead, if you're interested in more accurate time measurements, you should execute your code many times and calculate the average time of all runs.
Just an idea, but could it be because memory usage and CPU usage by background applications change over time? I mean the time difference would come only from:
The memory in use by other applications
Physical conditions, such as CPU heat (the resulting changes in time are really small)
And the system clock: if you do random-number generation, or any operation that uses the system clock in the background, that might create the change.
Hope this helps.
Cheers.
That's easy. You capture a system-time difference using a counter that is imprecise because it uses system resources. Other programs run in parallel with yours, and some take priority over your code, causing temporary suspension of your thread (~20 ms, depending on OS settings). Even in DOS there is code that runs quasi-parallel with yours: given that only one thread is possible, your code is stalled while the time is still ticking (the time is governed by that code).
Because Windows is not a real-time operating system, much other activity can happen while your program executes, and the CPU shares its cycles with other running processes. The time can vary even more if your program needs to read from physical devices such as disks (databases too) or the network, because a physical resource can be busy serving other requests. Memory can change things as well: if there are page faults, your app needs to read pages back in from virtual memory, and you will see a performance decrease. Since you are using C#, the time can differ noticeably between the first execution and later ones in the same process, because the code is JITted - compiled from intermediate code to machine code the first time it is seen - and afterwards the compiled form, which is dramatically faster, is reused.
The assumption is wrong. The environment does not stay the same. The available resources for your program depend on many things. E.g. CPU and memory utilization by other processes, e.g. background processes. The harddisk and/or network utilization due to other processes. Even if there are no other processes running your program will change the internal state of the caches.
In "real world" performance scenarios it is not uncommon to see fluctuations of +/- 20% after "warm up". That is: measure 10 times in a row as "warm up" and discard the results. Measure 10 times more and collect the results. --> +/- 20% is quite common. If you do not warm up you might even see differences several orders of magnitude due to "cold" caches.
Conclusion: your program is very small, uses very few resources, and does not benefit from durable cache mechanisms.
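The warm-up-then-average procedure from the answer above can be sketched as follows; the workload and run counts here are arbitrary:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class WarmAverage
{
    // Measure the way the answer suggests: discard warm-up runs (JIT,
    // cold caches), then average the rest so scheduler noise mostly cancels.
    internal static double AverageMs(Action work, int warmup = 10, int runs = 10)
    {
        for (int i = 0; i < warmup; i++) work(); // warm-up: results discarded

        var samples = new double[runs];
        for (int i = 0; i < runs; i++)
        {
            var sw = Stopwatch.StartNew();
            work();
            sw.Stop();
            samples[i] = sw.Elapsed.TotalMilliseconds;
        }
        return samples.Average();
    }

    static void Main()
    {
        double sum = 0;
        double avg = AverageMs(() => { for (int i = 0; i < 1_000_000; i++) sum += i; });
        Console.WriteLine($"avg {avg:F2} ms");
    }
}
```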

Need help with the architecture for a penny bidding website

I'm trying to create a website similar to BidCactus and LanceLivre.
The specific part I'm having trouble with is the seconds aspect of the timer.
When an auction starts, a timer of 15 seconds starts counting down, and every time a person bids, the timer is reset and the price of the item is increased by $0.01.
I've tried using SignalR for this bit, and while it does work well during trial runs in the office, it's just not good enough for real-world usage where seconds count. I would get HTTP 503 errors when too many users were bidding and idling on the site.
How can I make the timer on the client's end show the correct remaining time?
Would HTTP GETting that information with AJAX every second allow me to properly display the remaining time? That's a request every second!
And not only that, but when a user issues that GET, I calculate the remaining seconds - yet by the time the user sees the response, that value is no longer accurate, as a second or more might pass between processing and delivery. Do you see my conundrum?
Any suggestions on how to approach this problem?
There are a couple problems with the solution you described:
1. It is extremely wasteful. There is already a fairly high-accuracy clock built into every computer on the Internet.
2. The Internet always has latency. By the time the packet reaches the client, it will be old.
3. The Internet is a variable-latency network, so the time-update packets you get could be a second or more behind for one packet and as little as 20 ms behind for another.
It takes complicated algorithms to deal with #2 and #3.
If you actually need second-level accuracy
There is existing Internet-standard software that solves it - the Network Time Protocol.
Use a real NTP client (not the one built into Windows - it only guarantees it will be accurate to within a couple seconds) to synchronize your server with national standard NTP servers, and build a real NTP client into your application. Sync the time on your server regularly, and sync the time on the client regularly (possibly each time they log in/connect? Maybe every hour?). Then simply use the system clock for time calculations.
Don't try to sync the client's system time - they may not have access to do so, and certainly not from the browser. Instead, you can get a reference time relative to the system time, and simply add the difference as an offset on client-side calculations.
If you don't actually need second-level accuracy
You might not really need to guarantee accuracy to within a second.
If you make this decision, you can simplify things a bit. Simply transmit a relative finish time to the client for each auction, rather than an absolute time. Re-request it on the client side every so often (e.g. every minute). Their global system time may be out of sync, but the second-hand on their clock should pretty accurately tick down seconds.
If you want to make this a little more slick, you could try to determine the (relative) latency for each call to the server. Keep track of how much time has passed between calls to the server, and the time-left value from the previous call. Compare them. Then, calculate whichever is smaller, and base your new time off that calculation.
I'd be careful when engineering such a solution, though. If you get the calculations wrong, or are dealing with inaccurate system clocks, you could break your whole syncing model, or unintentionally cause the client to prefer the highest-latency call. Make sure you account for all cases if you write the "slick" version of this code :)
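A sketch of the client-side bookkeeping for the "relative finish time" scheme (the type and member names are hypothetical; Environment.TickCount64 serves as a monotonic clock, so wall-clock corrections can't break the countdown):

```csharp
using System;

class AuctionCountdown
{
    // The server sends "N ms remaining"; we anchor it to a local monotonic
    // clock instead of trusting the client's wall clock.
    long remainingAtSyncMs;
    long syncedAtMs;

    public void OnServerUpdate(long remainingMs, long estimatedLatencyMs)
    {
        // Assume roughly half the round trip elapsed before the value
        // arrived, so the payload is already about that much stale.
        remainingAtSyncMs = remainingMs - estimatedLatencyMs / 2;
        syncedAtMs = Environment.TickCount64;
    }

    public long RemainingMs()
    {
        long elapsed = Environment.TickCount64 - syncedAtMs;
        return Math.Max(0, remainingAtSyncMs - elapsed);
    }
}
```

Calling OnServerUpdate on every push (or periodic re-request) keeps the local countdown within one round trip of the server's value.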
One way to get really good real-time communication is to open a connection from the browser to a special tcp/ip socket server that you write on the server. This is how a lot of chat packages on the web work.
Duplex sockets allow you to push data both directions. Because the connection is already open, you can send quite a bit of very fast data across.
In the past, you needed to use Adobe Flash to accomplish this. I'm not sure if browsers have advanced enough to handle this without a plugin (eg, websockets?)
Another approach worth looking at is long polling. In concept, a connection is made to the server that just doesn't die, and it gives you the opportunity on the server to trickle bits of realtime data down to the clients.
Just some pointers. I have written web software using JavaScript <-> Flash <-> Python/PHP, and was pleased with how it worked.
Good luck.
