Is it a viable option to compare the FileInfo.CreationTimeUtc.Ticks of two files on two different computers to see which version is newer - or is there a better way?
Do Ticks depend on OS time or are they really physical ticks from some fixed date in the past?
The aim of a UTC time is that it will be universal - but both computers would have to have synchronized clocks for it to be appropriate to just compare the ticks. For example, if both our computers updated a file at the exact same instant (relativity aside) they could still record different times - it's not like computers tend to come with atomic clocks built-in.
Of course, they can synchronize time with NTP etc to be pretty close to each other. Whether that's good enough for your uses is hard to say without more information. You should also bear in mind the possibility of users deliberately messing with their system clocks - could that cause a problem for your use case?
Do Ticks depend on OS time or are they really physical ticks from some fixed date in the past?
Ticks are pretty much independent of the OS.
If I remember correctly, 1 second = 10000000 ticks.
So whatever time you are checking, what you get from Ticks is about ten million times what you get from TotalSeconds. (Although it is more accurate than TotalSeconds, obviously.)
A tick is basically the smallest unit of time that you can measure from .NET.
As for UTC, yes, it is as good as you can get. If the system time on the machine your app is running on is accurate enough, you'll manage with it without issue.
Basically, the more frequently the files are updated, the less accurate this will be. If someone creates two versions of the file within one second, all of the system times must be precisely synchronized to get a good result.
If you only get new versions once every several minutes, then it is very much good enough for you.
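As a minimal sketch of that trade-off (the five-second tolerance and the use of LastWriteTimeUtc instead of CreationTimeUtc are assumptions for illustration, not part of the original suggestion):

using System;
using System.IO;

static class FileVersionComparer
{
    // Assumed tolerance: timestamps closer than this are treated as "same version",
    // since two NTP-synchronized clocks can easily disagree by a fraction of a second.
    static readonly TimeSpan ClockSkewTolerance = TimeSpan.FromSeconds(5);

    // Returns >0 if 'local' looks newer, <0 if 'remote' looks newer, 0 if too close to call.
    public static int Compare(FileInfo local, FileInfo remote)
    {
        long delta = local.LastWriteTimeUtc.Ticks - remote.LastWriteTimeUtc.Ticks;
        if (Math.Abs(delta) <= ClockSkewTolerance.Ticks)
            return 0;
        return delta > 0 ? 1 : -1;
    }
}

The tolerance is the knob: the more often your files change, the smaller it has to be, and the more you depend on the clocks actually being synchronized.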
The short answer is that no, this is not a viable solution, at least not in the theoretical, anything-can-happen type of world.
The clock of a computer might be accurate to a billionth of a billionth of a second, but the problem isn't the accuracy, it's whether the clocks of the two computers are synchronized.
Here's an experiment. Look at your watch, then ask a random stranger nearby what the time is. If your file was written on your computer at the moment you looked at your watch, and the other copy was written one second earlier on the computer belonging to the person you asked, would your comparison correctly determine which file was newer?
Now, your solution might work, assuming that:
You're not going to have to compare clock values that differ by only milliseconds or nanoseconds
The clocks of the computers in question are synchronized against some common source, or at the very least set as close to each other as your inaccuracy criteria allow
Depending on your requirements, I would seriously look at trying to find a different way of ensuring that you get the right value.
If you are talking about arbitrary files, then the answer about UTC is worth a try; the clocks on both machines should be the same.
If you have control over the files and the machines writing to them, I would write a version number at the start of the file. Then you can compare the version number. You would have to look into how to get unique version numbers if two machines write a file independently. To solve this, a version webservice could be implemented which both machines use.
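As a rough sketch of that idea (the four-byte header layout and the helper names here are invented, not an established format):

using System;
using System.IO;

static class VersionedFile
{
    // Writes a 4-byte little-endian version number followed by the payload.
    public static void Write(string path, int version, byte[] payload)
    {
        using (var stream = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            stream.Write(BitConverter.GetBytes(version), 0, 4);
            stream.Write(payload, 0, payload.Length);
        }
    }

    // Reads just the version header so two copies can be compared
    // without trusting either machine's clock.
    public static int ReadVersion(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            var header = new byte[4];
            if (stream.Read(header, 0, 4) != 4)
                throw new InvalidDataException("Missing version header.");
            return BitConverter.ToInt32(header, 0);
        }
    }
}

Two copies can then be compared by ReadVersion alone; handing out unique version numbers still needs a central authority such as the version web service mentioned above.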
Some file formats have version information built-in, and also time stamps. For example, Microsoft Office formats. You could then use the internal information to decide which was the newest. But you could end up with version conflict on these as well.
From the way you phrased the question, I assume that for some reason you cannot call the DateTime comparison methods. Assuming this, the MSDN page for the Ticks property states: "A single tick represents one hundred nanoseconds or one ten-millionth of a second. There are 10,000 ticks in a millisecond." Thus the ticks refer to the value assigned by the .NET library, do not depend on the machine/OS (which is probably Windows since you are using C#), and can be safely used for comparisons.
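For example (the paths are hypothetical), comparing Ticks gives the same ordering as comparing the DateTime values directly:

using System;
using System.IO;

class TickComparison
{
    static void Main()
    {
        var a = new FileInfo(@"C:\data\copyA.dat");   // hypothetical paths
        var b = new FileInfo(@"C:\data\copyB.dat");

        // Ticks count 100 ns intervals since 0001-01-01, so a larger value is a later time.
        bool aNewerByTicks = a.CreationTimeUtc.Ticks > b.CreationTimeUtc.Ticks;
        bool aNewerByDateTime = DateTime.Compare(a.CreationTimeUtc, b.CreationTimeUtc) > 0;

        Console.WriteLine(aNewerByTicks == aNewerByDateTime);   // always true
    }
}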
I created an App which uses Timer class to callback a method at a certain time of a day and recall it every 24 hours after that.
I use Ticks to signify 24 hours later: (int)TimeSpan.FromHours(24).TotalMilliseconds
I use that to retrieve the ticks for 24 hours.
This works fine for me but on different computers, the trigger time is way off.
Any way to debug this? How should I handle this issue?
How much is "way off" to you? If you want an app to run at a specific time, schedule it for that specific time, not 24 hours from the time it finishes - you're inevitably going to see some slippage doing it that way, because the next day's trigger will always be off by X seconds, where X is how long the program took to complete the previous day.
How "way off" yes desktop computers clocks frequently fluctuate by a second or more every day, they generally use a NTP server to correct these fluctuations. But its just the nature of the beast.
First of all, there is no "my ticks" because a Tick is a well-defined value.
A single tick represents one hundred nanoseconds or one ten-millionth of a second. There are 10,000 ticks in a millisecond.
A DateTime object also has a Ticks property that you can use to access it. I wrote some simple code that I've posted here; it worked great for me, producing perfect results.
I see no reason your implementation should drift so much. Please make a sample that consistently produces the problem or post the relevant pieces of your own source.
In using Stopwatch.GetTimestamp() we find that if you record the return value and then continue calling it and comparing to the previous return value, it will eventually but unpredictably return a value less than the original.
Is this expected behavior?
The purpose of doing this in the production code is to have a microsecond-accurate system time.
The technique involves calling DateTime.UtcNow and Stopwatch.GetTimestamp() once at startup and storing the results as originalUtcNow and originalTimestamp, respectively.
From that point forward, the application simply calls Stopwatch.GetTimestamp() and using Stopwatch.Frequency it calculates the difference from the originalTimestamp variable and then adds that difference to the originalUtcNow.
Then, Voila...an efficient and accurate microsecond DateTime.
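A minimal sketch of the technique as described (class and field names are invented; they correspond to the originalUtcNow/originalTimestamp pair above):

using System;
using System.Diagnostics;

class HighResolutionClock
{
    readonly DateTime baselineUtc;      // originalUtcNow
    readonly long baselineTimestamp;    // originalTimestamp

    public HighResolutionClock()
    {
        baselineUtc = DateTime.UtcNow;
        baselineTimestamp = Stopwatch.GetTimestamp();
    }

    public DateTime UtcNow
    {
        get
        {
            long elapsed = Stopwatch.GetTimestamp() - baselineTimestamp;
            // Convert Stopwatch units to 100 ns DateTime ticks.
            long ticks = (long)(elapsed * (10000000.0 / Stopwatch.Frequency));
            // Note: if GetTimestamp() ever returns a value below the baseline,
            // 'elapsed' goes negative and this clock jumps backwards too.
            return baselineUtc.AddTicks(ticks);
        }
    }
}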
But, we find that sometimes Stopwatch.GetTimestamp() will return a lower number.
It happens quite rarely. Our thinking is to simply "reset" when that happens and continue.
HOWEVER, it makes us doubt the accuracy of the Stopwatch.GetTimestamp() or suspect there is a bug in the .Net library.
If you can shed some light on this, please do.
FYI, based on the current timestamp value, the frequency, and long.MaxValue, it appears unlikely that it will roll over during our lifetime unless it's a hardware issue.
EDIT: We're now calculating this value "per thread" and then "clamping it" to watch for jumps between cores to reset it.
It's possible that you get the jump in time because your thread is jumping cores. See the "note" on this page: http://msdn.microsoft.com/en-us/library/ebf7z0sw.aspx
The behavior of the Stopwatch class will vary from system to system depending on hardware support.
See: http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.ishighresolution.aspx
Also, I believe the underlying equivalent win32 call (QueryPerformanceCounter) contains useful documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx
I don't know exactly about running backwards (which sounds like a small step backwards), but I have so far experienced three times that the value of Stopwatch.GetTimestamp() can change so enormously that it causes overflow exceptions in some further calculations of a form roughly like this:
(Stopwatch.GetTimestamp() - ProgramStartStopwatchTimestamp) * n
where n is some large value, but small enough that if the Stopwatch weren't jumping around enormously, the program could run for years without an overflow exception. Note also that these exceptions occurred many hours after the program started, so the issue is not just that the Stopwatch ran backwards a little bit immediately after start. It just jumped to a totally different range, in whatever direction.
Regarding Stopwatch rolling over, in one of the above cases it (not the difference, but the Stopwatch itself) obtained a value of something like 0xFF4? ???? ???? ????, so it jumped to a range that was very close to rolling over. After restarting the program multiple times, this new range was still consistently in effect. If that matters anymore, considering the need to handle the jumps anyway...
If it were additionally necessary to determine which core the timestamp was taken on, then it probably helps to know the executing core number. For this end, there are functions called GetCurrentProcessorNumber (available since Server 2003 and Vista) and GetCurrentProcessorNumberEx (available since Server 2008 R2 and Windows 7). See also this question's answers for more options (including Windows XP).
Note that core number can be changed by the scheduler any time. But when one reads the core number before reading Stopwatch timestamp, and after, and the core number remained same, then perhaps one can assume that the Stopwatch read was also performed on this core...
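A sketch of that read-core / read-timestamp / re-check-core pattern, using P/Invoke for GetCurrentProcessorNumber (the wrapper class and retry loop are my own illustration):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class CoreStableTimestamp
{
    [DllImport("kernel32.dll")]
    static extern uint GetCurrentProcessorNumber();

    // Returns a timestamp together with the core it was (very likely) read on.
    // If the scheduler moved the thread between the two core reads, retry.
    public static long GetTimestampOnCore(out uint core)
    {
        while (true)
        {
            uint before = GetCurrentProcessorNumber();
            long ts = Stopwatch.GetTimestamp();
            uint after = GetCurrentProcessorNumber();
            if (before == after)
            {
                core = before;
                return ts;
            }
        }
    }
}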
To specifically answer the high-level question "How often does Stopwatch.GetTimestamp() roll over?", Microsoft's answer is:
Not less than 100 years from the most recent system boot, and potentially longer based on the underlying hardware timer used. For most applications, rollover isn't a concern.
I want to provide a trial version of my software. This version should only be evaluated within a specific period of time. Let's say only during January 2011.
As this software massively uses the system clock in processing, it would be quite annoying to set the clock to an earlier time to be able to use it over and over. So, because of this, I wouldn't think of a more complicated protection mechanism.
So I have thought about exiting after a test like:
if (DateTime.Now.Year != 2011 || DateTime.Now.Month != 1)   // anything outside January 2011 counts as expired
{
    MessageBox.Show("expired!");
    Application.Exit();
}
How easy will this be cracked :-) ?
Is there a "safe" way to do this ?
Basically, you can say it is impossible to secure trial software against cracking. Everything you do in software can be bypassed (yes, even Ring-0 drivers).
Even an external dongle from which you receive the authentication to start can be spoofed through software, although it isn't easy :-)
You can only make it hard not impossible :-)
It's not exactly related to cracking it, but it's worth noting that if this is an app that can be used internationally, it'll show as being 'expired' for many users before they've even had a chance to try it at all. The values returned by DateTime reflect the local user's culture, so DateTime.Now.Year returns 1431 for Arabic cultures and 2553 for the Thai culture, for instance. Months can similarly differ, so you shouldn't hardcode values for them without checking the culture first.
You can get round this by using the InvariantCulture each time, e.g. DateTime.Now.Year.ToString(System.Globalization.CultureInfo.InvariantCulture);
This method could be cracked if the user sets his computer date to 2009: your software would then be usable for another two years.
If you want to use this method, I think the best way is to check the actual date on the internet.
However, some users might not have a connection; in that case you can use something like a countdown that makes the software executable for n days.
You ship your software with an encrypted file that contains the date of installation;
The next time, you check for that file:
If it exists and the day is different, increment a counter;
If it exists but is corrupted, or if it doesn't exist, set the counter to "max-day";
If it exists but the counter is equal to "max-day", exit the application;
Update the old file with the new values;
Obviously the counter will be another encrypted file and, as for the other file, you have to check its state (corruption or deletion).
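A minimal sketch of the encrypted install-date file (the path, the use of DPAPI via ProtectedData, and treating a missing or corrupt file as expired are all assumptions; on .NET Framework this needs a reference to System.Security):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class TrialState
{
    // Hypothetical location; adjust to your app's data folder.
    static readonly string StatePath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "MyApp", "trial.bin");

    public static void SaveFirstRunDate(DateTime utcNow)
    {
        Directory.CreateDirectory(Path.GetDirectoryName(StatePath));
        byte[] plain = Encoding.UTF8.GetBytes(utcNow.ToString("o"));
        // DPAPI ties the blob to the current user; a cracker can still delete it,
        // which is why a missing/corrupt file is treated as "max-day" above.
        byte[] protectedBytes = ProtectedData.Protect(plain, null, DataProtectionScope.CurrentUser);
        File.WriteAllBytes(StatePath, protectedBytes);
    }

    public static DateTime? ReadFirstRunDate()
    {
        try
        {
            byte[] plain = ProtectedData.Unprotect(
                File.ReadAllBytes(StatePath), null, DataProtectionScope.CurrentUser);
            return DateTime.Parse(Encoding.UTF8.GetString(plain), null,
                                  System.Globalization.DateTimeStyles.RoundtripKind);
        }
        catch
        {
            return null; // missing or corrupted: treat as expired ("max-day")
        }
    }
}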
Here's my opinion.
The weak point of any time-limit or dongle protection is that in the end everything must be checked with an 'if' statement. In the old x86 days there was the JNE, JE, JNZ family of instructions. Such 'if' statements must exist in the hundreds, if not thousands or more, in the application. So any cracker must find out where to start looking; for instance, dongle checkers almost always use the DeviceIoControl API, which can be pinpointed quickly. Once the calls to the DeviceIoControl API are found, the cracker just reverse engineers the routine around the API call and tries to change the JNE instructions around it to JE, or the other way around.
In your case, the usage of DateTime is the tell-tale (but of course, there are a lot of places where DateTime is used for other things, which makes it harder for the cracker). To make things difficult for the cracker, copy the current date into some object's values, and store the DateTime in, say, 20 places or so. Or even better, retrieve the current date from the net as well as from the current computer. Don't check the DateTime directly, but use the values that you stored in those objects earlier, to make it harder for the cracker. Use a consistency-check mechanism to ensure the dates are within tolerance, and kill the app if you find that two of the stored dates differ from the others (give a tolerance of 2 days or so).
Also check whether the clock is not turned back by the user, if you found out that CurrentDateTime < StoredDateTimeInRegistry then you should store a kill flag somewhere in the registry. Or you might also want to use a file in addition to the registry.
For every kind of check you do, try to do it in many places and in different ways.
At the end, I must say that what Bigbohne said is true (nothing is impossible to crack) - it is just that, by making it difficult for the cracker, we change his/her effort-to-result ratio, hopefully discouraging him from continuing the cracking process.
Checking trial period expiration in C# code is easy to crack, even if you obfuscate the code, because it is compiled to IL. It is better to carry out this check in code that is compiled to native machine code. Also, you can read the topic Protect .NET code from reverse engineering? about protecting .NET code from reverse engineering.
Software licensing is a complete subject on its own, and it looks like that you are looking for a simplest solution to be implemented for your trial software.
What you can simply do is, on startup of your application, log the current date/time in the registry and use it as a reference point for validation. Then even if the system time is changed, it won't affect your application's validation logic.
If possible, write the registry access library in C++, which is harder to crack. Good luck.
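A sketch of the registry-logging part in C# (the key path, value name, and the 30-day window are invented; as suggested above, moving this into native code would make it harder to patch):

using System;
using Microsoft.Win32;

static class TrialCheck
{
    const string KeyPath = @"Software\MyApp";      // hypothetical key
    const string ValueName = "FirstRunUtc";

    // Returns true if the trial window (30 days from first run) is still open.
    public static bool IsTrialValid()
    {
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(KeyPath))
        {
            string stored = key.GetValue(ValueName) as string;
            DateTime firstRun;
            if (stored == null || !DateTime.TryParse(stored, null,
                    System.Globalization.DateTimeStyles.RoundtripKind, out firstRun))
            {
                firstRun = DateTime.UtcNow;
                key.SetValue(ValueName, firstRun.ToString("o"));
            }

            // If the clock has been rolled back before the recorded first run,
            // treat the trial as expired rather than trusting the system time.
            if (DateTime.UtcNow < firstRun)
                return false;

            return (DateTime.UtcNow - firstRun) <= TimeSpan.FromDays(30);
        }
    }
}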
I am trying to implement an exact algorithm for minimizing total tardiness on a single machine. I was searching the web to get an idea of how I can implement it using dynamic programming. I read the paper by Lawler, who proposed a pseudopolynomial algorithm back in '77. However, I still have not been able to map it to Java or C# code.
Could you please help me providing some reference how to implement this exact algorithm effectively?
Edit-1: @bcat: not really. I need to implement it for our software. :( I am still not able to find any guidance on how to implement it. The greedy one is easy to implement, but the resulting schedule is not that impressive.
Kind Regards,
Xiaon
You could have specified the exact characteristics of your particular problem along with limits for different variables and the overall characteristics of the input.
Without this, I assume you use this definition:
You want to minimize the total tardiness in a single machine scheduling problem with n independent jobs, each having its processing time and due date.
What you want to do is to pick up a subset of the jobs so that they do not overlap (single machine available) and you can also pick the order in which you do them, keeping the due dates.
I guess the first step is to sort by due dates; it seems there is no benefit in sorting them any other way.
Next what is left is to pick the subset. Here is where the dynamic programming comes to help. I'll try to describe the state and recursive relation.
State:
[current time][current job] = maximum number of jobs done
Relation:
You either process the current job and call
f(current_time + processing_time_of[current_job], current_job + 1)
or you skip the job and call
f(current_time, current_job + 1)
Finally, you take the maximum of the two values returned by those calls and store it in state
[current time][current job]
And you start the recursion at time 0 and job 0.
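A sketch of that recursion with memoization (this follows the maximize-on-time-jobs formulation sketched above, not Lawler's tardiness decomposition; the class and field names are invented, and the jobs are assumed to be already sorted by due date):

using System;
using System.Collections.Generic;

class Job { public int ProcessingTime; public int DueDate; }

class OnTimeScheduler
{
    readonly Job[] jobs;                       // assumed sorted by due date (EDD)
    readonly Dictionary<long, int> memo = new Dictionary<long, int>();

    public OnTimeScheduler(Job[] jobsSortedByDueDate) { jobs = jobsSortedByDueDate; }

    // f(time, index) = maximum number of jobs that can still finish on time,
    // given the machine is free at 'time' and jobs[0..index-1] are already decided.
    public int MaxOnTimeJobs(int time, int index)
    {
        if (index == jobs.Length) return 0;
        long key = (long)time * jobs.Length + index;
        int cached;
        if (memo.TryGetValue(key, out cached)) return cached;

        // Option 1: skip the current job.
        int best = MaxOnTimeJobs(time, index + 1);

        // Option 2: process it, but only if it still meets its due date.
        int finish = time + jobs[index].ProcessingTime;
        if (finish <= jobs[index].DueDate)
            best = Math.Max(best, 1 + MaxOnTimeJobs(finish, index + 1));

        memo[key] = best;
        return best;
    }
}

Starting it with MaxOnTimeJobs(0, 0) gives the answer for an empty machine at time zero; the table size is pseudopolynomial in the sum of processing times, which is the same flavour of bound as Lawler's result.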
By the way, greedy seems to be doing pretty well, check this out:
http://www.springerlink.com/content/f780526553631475/
For single machine, Longest Processing Time schedule (also known as Decreasing-Time Algorithm) is optimal if you know all jobs ahead of time and can sort them from longest to shortest (so all you need to do is sort).
As has been mentioned already, you've specified no information about your scheduling problem, so it is hard to help beyond this (as no universally optimal and efficient scheduler exists).
Maybe the Job Scheduling Section of the Stony Brook Algorithm Repository can help you.
Is there an upper limit on the number of instances of WorkflowInstance I may have running at one time?
I am using .Net 3.5 and C# (not that the language should make a difference.)
Please note that I am not suggesting that it is good design to have many running at once; I am simply curious about the upper limit, if one exists.
I asked this question of an MS guy at a conference.
With a persistence service and idle instances unloaded, there is only a small per-saved-instance memory cost (I seem to recall 64 bytes). Therefore it is easily possible for a single process to support hundreds of thousands.
Clearly you need to scale out if many of the instances are not idle, in order to handle the processing load of the non-idle instances.
There is no hard-coded limit that I'm aware of, if that's what you're asking. There's a practical limit based on your system resources, obviously.