While using Stopwatch.GetTimestamp() we find that if you record the return value and then keep calling it, comparing each new return value to the previous one, it will eventually, though unpredictably, return a value less than the original.
Is this expected behavior?
The purpose of doing this in the production code is to have a microsecond-accurate system time.
The technique involves calling DateTime.UtcNow and also calling Stopwatch.GetTimestamp() once, storing the results as originalUtcNow and originalTimestamp, respectively.
From that point forward, the application simply calls Stopwatch.GetTimestamp() and, using Stopwatch.Frequency, converts the difference from originalTimestamp into elapsed time, which it then adds to originalUtcNow.
Then, voilà... an efficient and accurate microsecond DateTime.
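In code, the technique looks roughly like this (a minimal sketch; the class and member names are ours, not from any library):

using System;
using System.Diagnostics;

static class MicrosecondClock
{
    static readonly DateTime originalUtcNow = DateTime.UtcNow;
    static readonly long originalTimestamp = Stopwatch.GetTimestamp();

    public static DateTime UtcNow
    {
        get
        {
            long elapsed = Stopwatch.GetTimestamp() - originalTimestamp;
            // Convert stopwatch ticks to 100 ns DateTime ticks via Frequency.
            long dateTimeTicks = (long)(elapsed * (10000000.0 / Stopwatch.Frequency));
            return originalUtcNow.AddTicks(dateTimeTicks);
        }
    }
}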
But, we find that sometimes Stopwatch.GetTimestamp() will return a lower number.
It happens quite rarely. Our thinking is to simply "reset" when that happens and continue.
HOWEVER, it makes us doubt the accuracy of Stopwatch.GetTimestamp(), or suspect there is a bug in the .NET library.
If you can shed some light on this, please do.
FYI, based on the current timestamp value, the frequency, and long.MaxValue, it appears unlikely that it will roll over during our lifetime, unless it's a hardware issue.
EDIT: We're now calculating this value "per thread" and then "clamping" it, so we can detect jumps between cores and reset.
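For illustration, the clamping part might look something like this (a sketch only; the [ThreadStatic] field and method names are our own):

using System.Diagnostics;

static class PerThreadClock
{
    [ThreadStatic] static long lastTimestamp;

    public static long ClampedTimestamp()
    {
        long t = Stopwatch.GetTimestamp();
        if (t < lastTimestamp)
            t = lastTimestamp;   // clamp: a core jump made time run backwards
        else
            lastTimestamp = t;
        return t;
    }
}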
It's possible that you get the jump in time because your thread is jumping cores. See the "note" on this page: http://msdn.microsoft.com/en-us/library/ebf7z0sw.aspx
The behavior of the Stopwatch class will vary from system to system depending on hardware support.
See: http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.ishighresolution.aspx
Also, I believe the underlying equivalent win32 call (QueryPerformanceCounter) contains useful documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx
I can't say exactly why it runs backwards (which sounds like a small step backwards), but I have so far experienced three times that the value of Stopwatch.GetTimestamp() can change so enormously that it causes overflow exceptions in further calculations of roughly this form:
(Stopwatch.GetTimestamp() - ProgramStartStopwatchTimestamp) * n
where n is some large value, but small enough that if the Stopwatch weren't jumping around enormously, the program could run for years without an overflow exception. Note also that these exceptions occurred many hours after the program started, so the issue is not just that the Stopwatch ran backwards a little bit immediately after start. It simply jumped to a totally different range, in whatever direction.
Regarding the Stopwatch rolling over: in one of the above cases the Stopwatch itself (not the difference) took on a value of something like 0xFF4? ???? ???? ????, so it jumped to a range that was very close to rolling over. After restarting the program multiple times, this new range was still consistently in effect. If that matters anymore, considering the need to handle the jumps anyway...
If it is additionally necessary to determine which core the timestamp was taken on, then it helps to know the executing core number. For this, there are functions called GetCurrentProcessorNumber (available since Server 2003 and Vista) and GetCurrentProcessorNumberEx (available since Server 2008 R2 and Windows 7). See also this question's answers for more options (including for Windows XP).
Note that the scheduler can change the core number at any time. But if one reads the core number before reading the Stopwatch timestamp and again after, and the core number remained the same, then perhaps one can assume that the Stopwatch read was also performed on this core...
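A sketch of that sandwich check via P/Invoke (GetCurrentProcessorNumber is the real kernel32 export; the retry convention is our own):

using System.Diagnostics;
using System.Runtime.InteropServices;

static class CoreStableTimestamp
{
    [DllImport("kernel32.dll")]
    static extern uint GetCurrentProcessorNumber();

    // Returns false if the scheduler may have moved the thread mid-read.
    public static bool TryGetTimestamp(out long timestamp, out uint core)
    {
        core = GetCurrentProcessorNumber();
        timestamp = Stopwatch.GetTimestamp();
        return GetCurrentProcessorNumber() == core;
    }
}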
To specifically answer the high-level question "How often does Stopwatch.GetTimestamp() roll over?", Microsoft's answer is:
Not less than 100 years from the most recent system boot, and potentially longer based on the underlying hardware timer used. For most applications, rollover isn't a concern.
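That claim is easy to sanity-check yourself: with a typical 10 MHz counter frequency, long.MaxValue lasts on the order of 29,000 years. A quick sketch:

using System;
using System.Diagnostics;

class RolloverEstimate
{
    static void Main()
    {
        // Remaining counter range divided by ticks per second gives seconds left.
        double secondsLeft = (long.MaxValue - Stopwatch.GetTimestamp()) / (double)Stopwatch.Frequency;
        Console.WriteLine("Years until rollover: {0:N0}", secondsLeft / (365.25 * 24 * 3600));
    }
}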
I want to provide a trial version of my software. This version should only be evaluated within a specific period of time. Let's say only during January 2011.
As this software makes heavy use of the system clock in its processing, it would be quite annoying to have to set the clock to an earlier time over and over to keep using it. Because of this, I don't plan on a more complicated protection mechanism.
So I have thought about exiting after a test like:
if (DateTime.Now.Year != 2011 || DateTime.Now.Month != 1) // || rather than &&, otherwise e.g. January 2012 would pass the check
{
MessageBox.Show("expired!");
Application.Exit();
}
How easy will this be cracked :-) ?
Is there a "safe" way to do this ?
Basically, you can say it is impossible to secure trial software against cracking. Everything you do in software can be bypassed (yes, even Ring-0 drivers).
Even an external dongle from which you receive the authorization to start can be spoofed through software, although it isn't easy :-)
You can only make it hard, not impossible :-)
It's not exactly related to cracking it, but it's worth noting that if this is an app that can be used internationally, it'll show as 'expired' for many users before they've even had a chance to try it at all. Date values that go through the local user's culture reflect that culture's calendar, so the year can come out as 1431 for Arabic (Hijri) cultures and 2553 for the Thai culture, for instance. Months can similarly differ, so you shouldn't hardcode values for them without checking the culture first.
You can get around this by using the InvariantCulture each time, e.g. DateTime.Now.Year.ToString(System.Globalization.CultureInfo.InvariantCulture);
This method could be cracked if the user sets his computer date to 2009: your software could then be used for another two years.
If you want to use this method, I think the best way is to check the actual date over the internet.
However, some users may not have a connection; in that case you can use something like a countdown that keeps the software executable for n days.
You ship your software with an encrypted file that contains the date of installation;
The next time the software runs, you check for that file (see the sketch after this list):
If it exists and the day is different, increment a counter;
If it exists but is corrupted, or if it doesn't exist, set the counter to "max-days";
If it exists but the counter has reached "max-days", exit the application;
Update the old file with the new values;
Obviously the counter will be stored in another encrypted file and, as for the other file, you have to check its state (corruption or deletion).
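A rough sketch of that scheme, using DPAPI (ProtectedData) for the encryption; for simplicity it keeps the date and the counter in one protected file, and the file name and 30-day limit are illustrative choices, not part of the answer above:

using System;
using System.IO;
using System.Security.Cryptography;   // requires a reference to System.Security
using System.Text;

static class TrialCounter
{
    const int MaxDays = 30;   // illustrative limit
    static readonly string FilePath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "myapp.trial");

    public static bool IsExpired()
    {
        string lastDay;
        int daysUsed;
        try
        {
            // Decrypt "yyyy-MM-dd|daysUsed" written on a previous run.
            string[] parts = Encoding.UTF8.GetString(ProtectedData.Unprotect(
                File.ReadAllBytes(FilePath), null, DataProtectionScope.CurrentUser)).Split('|');
            lastDay = parts[0];
            daysUsed = int.Parse(parts[1]);
        }
        catch
        {
            // Missing or corrupted file: jump straight to the limit.
            lastDay = "";
            daysUsed = MaxDays;
        }

        string today = DateTime.UtcNow.ToString("yyyy-MM-dd");
        if (today != lastDay && daysUsed < MaxDays)
            daysUsed++;   // a new day of use

        File.WriteAllBytes(FilePath, ProtectedData.Protect(
            Encoding.UTF8.GetBytes(today + "|" + daysUsed), null, DataProtectionScope.CurrentUser));
        return daysUsed >= MaxDays;
    }
}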
Here's my opinion.
The weak point of any time-limit or dongle protection is that in the end everything must be checked with an 'if' statement. Back in the old x86 days that meant the JNE, JE, JNZ family of instructions. Such 'if' statements exist in the hundreds, if not thousands or more, in an application, so any cracker must find out where to start looking. For instance, dongle checkers almost always use the DeviceIoControl API, which can be pinpointed quickly. Once the calls to the DeviceIoControl API are found, the cracker just reverse engineers the routine around the API call and tries to change the JNE instructions around it to JE, or the other way around.
In your case, the usage of DateTime is the tell-tale (though of course DateTime is used in a lot of places for other things, which makes it harder for the cracker). To make things difficult, copy the current date into object fields in, say, 20 places or so. Or even better, retrieve the current date from the net as well as from the local computer. Don't check DateTime directly; use the values you stored in those objects earlier, to make the cracker's job harder. Use a consistency-check mechanism to ensure the dates are within tolerance, and kill the app if you find that two of the stored dates differ from the others (allow two days of tolerance or so).
Also check whether the clock has been turned back by the user: if you find that CurrentDateTime < StoredDateTimeInRegistry, you should store a kill flag somewhere in the registry. You might also want to use a file in addition to the registry.
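For example, the rollback check could look roughly like this (the registry key and value names are invented for the sketch):

using System;
using Microsoft.Win32;

static class RollbackCheck
{
    public static bool ClockWasTurnedBack()
    {
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyApp"))
        {
            long lastSeen = Convert.ToInt64(key.GetValue("LastSeen", 0L));
            long now = DateTime.UtcNow.Ticks;
            if (now < lastSeen || Convert.ToInt32(key.GetValue("Killed", 0)) == 1)
            {
                key.SetValue("Killed", 1);   // remember the tampering permanently
                return true;
            }
            key.SetValue("LastSeen", now, RegistryValueKind.QWord);
            return false;
        }
    }
}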
For every kind of check you do, try to do it in many places and in different ways.
In the end, I must say that what Bigbohne said is true (nothing is impossible to crack); it is just that, by making it difficult for the cracker, we change his or her effort-to-result ratio, hopefully discouraging him or her from continuing the cracking process.
Checking trial-period expiration in C# code is easy to crack, even if you obfuscate the code, because it is compiled to CLR bytecode. It is better to carry out this check in code compiled to native machine code. You can also read the topic Protect .NET code from reverse engineering? about protecting .NET code from reverse engineering.
Software licensing is a complete subject on its own, and it looks like that you are looking for a simplest solution to be implemented for your trial software.
What you can simply do: on startup of your application, log the current date/time in the registry and use it as a reference point for validation. Then even if the system time is changed, it won't affect your application's validation logic.
If possible, write the registry-access library in C++, which is much harder to crack. Good luck.
The idea is that an existing project uses timeGetTime() (for Windows targets) quite frequently.
milliseconds = timeGetTime();
Now, this could be replaced with
double tmp = (double)lpPerformanceCount.QuadPart / lpFrequency.QuadPart;
milliseconds = rint(tmp * 1000);
with lpPerformanceCount.QuadPart and lpFrequency.QuadPart taken from single calls to QueryPerformanceCounter() and QueryPerformanceFrequency(), respectively.
I know Windows' internals are kind of voodoo, but can someone say which of the two is more accurate and/or has more overhead?
I suspect the accuracy might be the same but that QueryPerformanceCounter might have less overhead. But I have no hard data to back that up.
Of course, I wouldn't be surprised if the opposite is true.
If the overhead is tiny either way, I would be more interested in whether there's any difference in accuracy.
The accuracy of timeGetTime() is variable, based on the last used timeBeginPeriod. It will never be better than one millisecond. QueryPerformanceCounter is variable too, depending on hardware support. It will never be worse than about a microsecond.
Neither of them has notable overhead; QPC is probably a bit heavier. Whether that's significant to you is quite unclear from your question. I doubt it, but measure. With QPC.
Be careful: QueryPerformanceCounter may be processor dependent. If your thread grabs the perf counter on one CPU, and ends up on another CPU before it grabs again, the results may not be reliable. See the MSDN entry.
Accuracy is better on QPC. timeGetTime is accurate within the 1-10ms range (and its resolution is no finer than 1ms), whereas QPC can give you accuracy in the microsecond range.
The overhead varies. QPC uses the best hardware timer available. That may be some lightweight one built into the CPU, or it may have to go out to the motherboard which adds significant latency. And it might be made more expensive by having to go through a driver correcting for the timer hardware being buggy.
But neither is prohibitively expensive. If you're not going to call the timer millions of times per second, the overhead is insignificant for both.
We have updated the documentation for QueryPerformanceCounter and this should help to answer the questions above. Please see
http://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx
Ed Briggs
Microsoft Corporation
QueryPerformanceCounter does not quite give you a time. To convert its values into time measures, you have to use QueryPerformanceFrequency, which is supposed to tell you the rate at which the counter increments. But the frequency value is more or less an estimate: it can vary with the underlying hardware and with the version of the OS, and it should not be treated as a constant. It has an offset and is sometimes accompanied by thermal drift.
That said, I recommend using QueryPerformanceCounter with care.
Some still confuse accuracy with granularity. QueryPerformanceCounter has finer granularity, while timeGetTime has better accuracy.
However, the fastest source is GetSystemTimeAsFileTime, which returns a time value in 100 ns units. But its granularity is not 100 ns: it depends on the result of timeGetDevCaps and the setting of timeBeginPeriod. Setting the latter properly can yield a granularity of about 10000 (in 100 ns units), which corresponds to about 1 ms.
I've written some more details here.
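One way to observe that granularity yourself is to spin until the reported file time changes (a small sketch; the measuring loop is mine, GetSystemTimeAsFileTime itself is the real kernel32 export):

using System;
using System.Runtime.InteropServices;

class FileTimeGranularity
{
    // FILETIME is 8 bytes, so marshaling it as a long is safe here.
    [DllImport("kernel32.dll")]
    static extern void GetSystemTimeAsFileTime(out long fileTime);

    static void Main()
    {
        long t0, t1;
        GetSystemTimeAsFileTime(out t0);
        do { GetSystemTimeAsFileTime(out t1); } while (t1 == t0);
        // The step comes out in 100 ns units; ~10000 means ~1 ms granularity.
        Console.WriteLine("Granularity: {0} x 100 ns", t1 - t0);
    }
}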
Is it a viable option to compare two FileInfo.CreationTimeUtc.Ticks of two files on two different computers to see which version is newer - or is there a better way?
Do Ticks depend on OS time or are they really physical ticks from some fixed date in the past?
The aim of a UTC time is that it will be universal - but both computers would have to have synchronized clocks for it to be appropriate to just compare the ticks. For example, if both our computers updated a file at the exact same instant (relativity aside) they could still record different times - it's not like computers tend to come with atomic clocks built-in.
Of course, they can synchronize time with NTP etc to be pretty close to each other. Whether that's good enough for your uses is hard to say without more information. You should also bear in mind the possibility of users deliberately messing with their system clocks - could that cause a problem for your use case?
Do Ticks depend on OS time or are they really physical ticks from some fixed date in the past?
Ticks are pretty much independent of the OS.
If I remember correctly, 1 second = 10000000 ticks.
So whatever time you are checking, what you get from Ticks is about ten million times what you get from TotalSeconds. (Although it is more accurate than TotalSeconds, obviously.)
A tick is basically the smallest unit of time that you can measure from .NET.
As for UTC: yes, it is as good as you can get. If the system time on the machine your app is running on is accurate enough, you'll manage without issue.
Basically, the more frequent the updates to the files, the more inaccurate this will be. If someone creates two versions of the file within one second, all of the system times must be precisely synchronized to get a good result.
If you only have different versions once per several minutes, then it is very much good enough for you.
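In practice that means comparing with a tolerance window rather than raw ticks; something like this sketch (the five-second window is an arbitrary choice, not a recommendation):

using System;
using System.IO;

static class NewerFile
{
    static readonly TimeSpan Tolerance = TimeSpan.FromSeconds(5);

    // Returns the newer file, or null when the clocks are too close to call.
    public static FileInfo Pick(FileInfo a, FileInfo b)
    {
        TimeSpan diff = a.CreationTimeUtc - b.CreationTimeUtc;
        if (diff.Duration() <= Tolerance)
            return null;
        return diff > TimeSpan.Zero ? a : b;
    }
}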
The short answer is that no, this is not a viable solution, at least not in the theoretical, anything-can-happen type of world.
The clock of a computer might be accurate to a billionth of a billionth of a second, but the problem isn't the accuracy, it's whether the clocks of the two computers are synchronized.
Here's an experiment. Look at your watch, then ask a random stranger around you what the time is. If your file was written to on your computer when you looked at the watch, and written to on the computer belonging to the person you're asking 1 second ago, would your comparison determine that your file or his/her file was newer?
Now, your solution might work, assuming that:
- You're not going to have to compare clock values that differ by only milli- or nanoseconds
- The clocks of the computers in question are synchronized against some common source, or at the very least are set as close to each other as your inaccuracy criteria allow
Depending on your requirements, I would seriously look at trying to find a different way of ensuring that you get the right value.
If you are talking about arbitrary files, then the answer about UTC is worth a try. The clocks should be the same.
If you have control over the files and the machines writing to them, I would write a version number at the start of the file. Then you can compare the version number. You would have to look into how to get unique version numbers if two machines write a file independently. To solve this, a version webservice could be implemented which both machines use.
Some file formats have version information built-in, and also time stamps. For example, Microsoft Office formats. You could then use the internal information to decide which was the newest. But you could end up with version conflict on these as well.
From the way you phrased the question, I assume that for some reason you cannot call the DateTime comparison methods. Assuming this, the MSDN page for the Ticks property states: "A single tick represents one hundred nanoseconds or one ten-millionth of a second. There are 10,000 ticks in a millisecond." Thus Ticks refers to the value assigned by the .NET library; it does not depend on the machine/OS (which is probably Windows, since you are using C#) and can be safely used for comparisons.
I made a bunch of benchmarks of Framework 4.0 and older, and I can't understand why the same code is slower when using WPF compared to Windows Forms:
This is the code, it has nothing to do with the UI elements:
Random rnd = new Random(845038);
Int64 number = 0;
for (int i = 0; i < 500000000; i++)
{
number += rnd.Next();
}
The code takes 5968ms - 6024ms to execute in Windows Forms and 6953ms in WPF.
Here is the post with the downloadable solution: http://blog.bettiolo.it/2010/04/benchmark-of-net-framework-40.html
A lot can happen in six seconds on a Windows machine. I would imagine that the background processing that takes place in WPF is a little different (and carries a little more overhead) than that which occurs in Winforms.
The first loop runs at the same speed for me.
Are you measuring without the debugger attached?
When I downloaded the zip file and looked at your code, the problem became obvious: It is not the same code.
Since the two tests have different code, they are compiled differently by the C# compiler and optimized differently by the JIT compiler. Different registers are assigned to local variables. Different calling techniques are used. Different stack offsets are used.
Here are a few differences I noted between the two benchmark methods:
They take different parameter types
They contain different numbers (9 vs 7) and types of local variables
They make different numbers of method calls
They have a different number of loops
One calls Application.DoEvents() and the other doesn't
My guess is that in your WinForms version of the code, the JIT compiler placed the variable 'i' at a stack offset whereas in the WPF version it placed it in a register, which then needed to be saved on each iteration.
In any case, don't blame the difference on WPF vs WinForms: Blame the difference on having two different tests that look superficially similar but got optimized differently.
Break the testing code out into a static method in a separate class. If you use identical code in both benchmarks I can pretty much guarantee you that you will get identical results.
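For instance, something along these lines, shared by both projects (a sketch using the question's own loop and seed; the class name is ours):

using System;
using System.Diagnostics;

public static class SharedBenchmark
{
    // Both the WinForms and the WPF host call this one method,
    // so the JIT compiler sees identical IL in both cases.
    public static long Run()
    {
        Random rnd = new Random(845038);
        long number = 0;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < 500000000; i++)
            number += rnd.Next();
        sw.Stop();
        Console.WriteLine("{0} ms (checksum {1})", sw.ElapsedMilliseconds, number);
        return number;
    }
}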
First off, to rule out any environmental factors, you would have to run this test for each solution over a period of 24 to 48 hours. Second, the actual logic behind one being slower is flawed: if you detach any GUI code from this application, you will see that they both target the same framework, ergo they should not differ.
If you are testing which GUI framework is faster, then your test is invalid, as it does not play to either one's strengths or weaknesses. To test WPF against WinForms in this manner is to miss the fundamental differences between the two frameworks.
There is no unbiased way of testing each framework, since both coexist. To say that WPF is faster in terms of rendering complex primitives or expensive GUI operations is flawed, as the test would be biased toward WPF.
The fact that the underlying models can be so different means that any testing of this nature will be subjective. I do not trust tests of this nature, simply because they are either trying to prove an author's point or disprove someone else's methods.
I would not worry about speed simply because if a customer has the ability to run a WPF application properly then the gains or misses will be so minuscule that they will not matter.
Do you have a form of some kind showing on the screen? I would think the overhead difference for the form may be what you're seeing.
How accurate is System.Diagnostics.Stopwatch? I am trying to do some metrics for different code paths and I need it to be exact. Should I be using stopwatch or is there another solution that is more accurate.
I have been told that sometimes stopwatch gives incorrect information.
I've just written an article that explains how a test setup must be done to get high accuracy (better than 0.1 ms) out of the stopwatch. I think it should explain everything.
http://www.codeproject.com/KB/testing/stopwatch-measure-precise.aspx
The System.Diagnostics.Stopwatch class does accurately measure elapsed time, but the way the ElapsedTicks property works has led some people to conclude that it is not accurate, when they really just have a logic error in their code.
The reason that some developers think that the Stopwatch is not accurate is that the ElapsedTicks from the Stopwatch DO NOT EQUATE to the Ticks in a DateTime.
The problem arises when the application code uses the ElapsedTicks to create a new DateTime.
var watch = new Stopwatch();
watch.Start();
// ... perform a set of operations ...
watch.Stop();
var wrongDate = new DateTime(watch.ElapsedTicks); // WRONG: these are stopwatch ticks, not DateTime ticks.
If necessary, the stopwatch duration can be converted to a DateTime in the following way:
// This converts stopwatch ticks into DateTime ticks.
// First convert to TimeSpan, then convert to DateTime
var rightDate = new DateTime(watch.Elapsed.Ticks);
Here is an article that explains the problem in more detail:
http://geekswithblogs.net/BlackRabbitCoder/archive/2012/01/12/c.net-little-pitfalls-stopwatch-ticks-are-not-timespan-ticks.aspx
Note that the content is no longer available at the original link. Here is a reference to the archived content from the Wayback Machine:
https://web.archive.org/web/20190104073827/http://geekswithblogs.net:80/BlackRabbitCoder/archive/2012/01/12/c.net-little-pitfalls-stopwatch-ticks-are-not-timespan-ticks.aspx
The Stopwatch class returns different values under different configurations, as Frequency depends on the installed hardware and operating system.
Using the Stopwatch class we can get only a rough estimate of execution time. Each run returns a different value, so we have to take the average over several runs.
More Info : http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx
First, "exact" is of course not a possible or meaningful concept when talking about time or space, since no empirical measurement of a physical magnitude can ever claim to be exact.
Second, David Bolton's blog article may be useful. I'm quoting:
If this was timed with the high-resolution counter then it will be accurate to microseconds. It is actually accurate to nanoseconds (10^-9 seconds, i.e. a billionth of a second), but there is so much other stuff going on that nanosecond accuracy is really a bit pointless. When doing timing or benchmarking of code, you should do a number of runs and take the average time; because of other processes running under Windows, how much swapping to disk is occurring, etc., the values between two runs may vary.
MSDN has some examples of the Stopwatch. They also show how accurate it is, to within nanoseconds. Hope this helps!
Why don't you profile your code instead of focusing on micro-benchmarks?
There are some good Open Source profilers like:
NProf
Prof-It for C#
NProfiler
ProfileSharp
In addition to seconding the advice of HUAGHAGUAH above, I'd add that you should be VERY skeptical of micro-benchmarks in general. While close-focused performance testing has a legitimate place, it's very easy to end up tuning an unimportant detail. So write and verify code that is designed for readability and clarity, then profile it to find out where the hot spots are (or whether there are any worth worrying about), and then tune (only) those portions.
I recall working with a programmer who micro-optimized a bit of code that executed while the system waited for human input. The time savings absolutely disappeared in the between-keystroke lag!
If you want more precise timings, take a look at QueryPerformanceCounter. MSDN link for QueryPerformanceCounter. A neat implementation is given here. That example loads coredll.dll for CE; for desktop Windows you should load kernel32.dll, as stated in the MSDN documentation.
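A minimal P/Invoke sketch along those lines (desktop Windows, hence kernel32.dll; the wrapper class is our own naming):

using System.Runtime.InteropServices;

static class HighResTimer
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long count);

    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long frequency);

    // Current counter reading converted to seconds.
    public static double Seconds()
    {
        long count, freq;
        QueryPerformanceCounter(out count);
        QueryPerformanceFrequency(out freq);
        return (double)count / freq;
    }
}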