What is the C time_t equivalent in C#? - c#

There is a native method in a DLL written in C which takes a parameter of type time_t.
Is it possible to use a C# uint or ulong for this parameter?

I would not say they are equivalent, but you can convert a time_t to a DateTime like this:
int t = 1070390676; // value of time_t as an ordinary integer
System.DateTime dt = new System.DateTime(1970, 1, 1).AddSeconds(t);
This example is from How can I convert date-time data in time_t to C# DateTime class format?
I should also say that UInt32 is used for time_t too; check DateTime to time_t.

Depends on how time_t was defined in the Standard C header files the DLL was compiled against.
If time_t is 64-bit, the C# equivalent is long.
If time_t is 32-bit, then it has the Year 2038 bug and you should ask whoever wrote the DLL for a non-buggy version.
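To make the marshalling concrete, here is a minimal P/Invoke sketch. The library name legacy.dll and the exported function set_timestamp are hypothetical placeholders; the signature assumes a 64-bit time_t, so swap long for int if the DLL was built with a 32-bit time_t:
using System;
using System.Runtime.InteropServices;

static class NativeClock
{
    // Hypothetical export: void set_timestamp(time_t t); assumes 64-bit time_t.
    [DllImport("legacy.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern void set_timestamp(long t);

    public static void SetNow()
    {
        // Seconds since the Unix epoch, the usual time_t convention.
        set_timestamp(DateTimeOffset.UtcNow.ToUnixTimeSeconds());
    }
}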

According to Wikipedia's article on time_t you could use an integer (Int32 or Int64):
Unix and POSIX-compliant systems implement time_t as an integer or real-floating type (typically a 32- or 64-bit integer) which represents the number of seconds since the start of the Unix epoch: midnight UTC of January 1, 1970 (not counting leap seconds).

Bastardo's solution did not help me. I was facing an issue with DST, so an additional conversion to local time was required; otherwise the resulting time differed by one hour. This is what I do:
return new DateTime(1970, 1, 1).ToLocalTime().AddSeconds(time_t);
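For what it's worth, on .NET 4.6 and later you can sidestep the manual epoch arithmetic entirely with DateTimeOffset.FromUnixTimeSeconds; a minimal sketch:
long unixSeconds = 1070390676;
// FromUnixTimeSeconds produces a UTC-based DateTimeOffset (.NET 4.6+).
DateTimeOffset utc = DateTimeOffset.FromUnixTimeSeconds(unixSeconds);
// ToLocalTime applies the local zone rules, including DST.
DateTime local = utc.ToLocalTime().DateTime;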

Related

converting c# timestamp to datetime in php

I have a timestamp coming from C# (a 64-bit binary value) that I need to convert in PHP.
The timestamp is created with the DateTime.ToBinary() method.
An example is -8586307756854775808, which gives 10/11/2019 10:00:00 PM.
I tried with unpack() in PHP but without success:
$timestamp = unpack('q', -8586307756854775808);
How can I do this in PHP?
Thanks
First you must separate the ticks from the 64-bit binary value.
The ticks portion of the 64-bit value is the same when creating the DateTime in C# regardless of whether DateTimeKind.Utc or DateTimeKind.Local is used.
Then the ticks can be converted to a Unix timestamp.
gmdate() returns a date string from the timestamp.
<?php
// Value from C# DateTime.ToBinary()
// 2019-10-11 22:00:00
$num = -8586307756854775808;
// Mask off the top two kind bits, keeping the 62-bit tick count.
$ticks = $num & 0x3fffffffffffffff;
// 10,000,000 ticks per second; 62135596800 seconds between 0001-01-01 and 1970-01-01.
$linuxTs = intdiv($ticks, 10000000) - 62135596800;
$strDate = gmdate("Y-m-d H:i:s", $linuxTs);
echo $strDate; // "2019-10-11 22:00:00"
Note: This solution requires a 64-bit platform.
The kind information encoded in the top two bits of $num may also be useful:
$kind = (ord(pack('J', $num)) & 0xC0) >> 6; // 0 = Unspecified, 1 = Utc, 2 = Local
DateTime.ToBinary creates a value that combines the .Ticks and .Kind property into a single 64-bit value. You shouldn't use it unless you intend to pass that value back into DateTime.FromBinary in .NET.
While it is possible to interpret it outside of .NET, one would have to apply the same logic that is in the implementation of DateTime.FromBinary. You can see the source code here.
Instead, consider using a standard interchange format. The most common is ISO 8601 (also covered in part by RFC 3339).
string iso = dateTime.ToString("O");
The above uses the "round-trip" specifier to get the ISO 8601 format. There are other ways as well.
If your values are UTC based, you could also consider passing a Unix Timestamp. The simplest way to get one from .NET is through DateTimeOffset.ToUnixTimeSeconds or DateTimeOffset.ToUnixTimeMilliseconds depending on what precision you want. Of course, you will need to convert your DateTime to a DateTimeOffset first.
long ms = new DateTimeOffset(dateTime).ToUnixTimeMilliseconds();
The above assumes that your DateTime has either DateTimeKind.Local or DateTimeKind.Utc in its .Kind property. If it is DateTimeKind.Unspecified, then you may need to work with TimeZoneInfo to get aligned to UTC before figuring out the Unix Timestamp.
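A minimal sketch of that Unspecified case, assuming you know which zone the value actually represents (the "Eastern Standard Time" ID here is just a placeholder):
DateTime unspecified = new DateTime(2019, 10, 11, 22, 0, 0, DateTimeKind.Unspecified);
// Placeholder zone ID; substitute the zone the value actually represents.
TimeZoneInfo tz = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
// Resolve the zone's offset at that instant, then build the DateTimeOffset.
DateTimeOffset dto = new DateTimeOffset(unspecified, tz.GetUtcOffset(unspecified));
long ms = dto.ToUnixTimeMilliseconds();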

FILETIME implementation in C#

I am trying to convert a C++ project to C#. I am struggling with a time class I encountered in the C++ code. It has methods like:
const RsDateTime& RsDateTime::GetTime()
{
    FILETIME ft;
    ::GetSystemTimeAsFileTime(&ft);
    ::FileTimeToLocalFileTime(&ft, &m_Time); // Why call to local time? What does it do?
    return *this;
}

FILETIME RsDateTime::GetUTCTimeAsFile() const
{
    FILETIME ft;
    ::LocalFileTimeToFileTime(&m_Time, &ft); // Why call to local time?
    return ft;
}

static unsigned __int64 GetAsUINT64(const FILETIME* ft)
{
    ULARGE_INTEGER li;
    li.LowPart = ft->dwLowDateTime;
    li.HighPart = ft->dwHighDateTime;
    return li.QuadPart;
}
Can someone please explain how to approach converting this to C#? Which methods do I need to use? Also, what is the difference between FileTime and LocalFileTime as used above?
In Win32 (which this code seems to be), FILETIME is:
A 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).
Unless you're dependent on the 1601 epoch, in C#/.NET just use the DateTime type, which contains:
The value of this property represents the number of 100-nanosecond intervals that have elapsed since 12:00:00 midnight, January 1, 0001.
My advice is to completely disregard the C++ code unless you know the data it returns is used somewhere that depends on the Win32 FILETIME format specifically; if not, remove the code and use System.DateTime.
The C# DateTime class now has a method FromFileTime which converts the 64-bit FILETIME value into a DateTime object. At that point, you can use all the DateTime methods and properties normally in C#.
Int64 exampleTime = 0x01cff7924ade4400L;
DateTime dtModified = DateTime.FromFileTime(exampleTime);
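One caveat worth a sketch: DateTime.FromFileTime converts to local time, while DateTime.FromFileTimeUtc keeps the value in UTC; the reverse direction uses ToFileTime/ToFileTimeUtc. A minimal illustration:
long fileTime = 0x01cff7924ade4400L;
// FromFileTime returns local time; FromFileTimeUtc returns UTC.
DateTime local = DateTime.FromFileTime(fileTime);
DateTime utc = DateTime.FromFileTimeUtc(fileTime);
// Round-trip back to the 64-bit FILETIME representation.
long roundTrip = utc.ToFileTimeUtc();
System.Diagnostics.Debug.Assert(roundTrip == fileTime);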

How to convert integer to unsigned 64-bit integer

I'm converting this code from C# to Ruby:
C# Code
DateTime dtEpoch = new DateTime(1970, 01, 01, 0, 0, 0, 0, DateTimeKind.Utc);
string strTimeStamp = Convert.ToUInt64((DateTime.UtcNow - dtEpoch).TotalSeconds).ToString();
Ruby Code
now = Time.now.utc
epoch = Time.utc(1970,01,01, 0,0,0)
time_diff = ((now - epoch).to_s).unpack('Q').first.to_s
I need to convert the integer into an unsigned 64-bit integer. Is unpack really the way to go?
I'm not really sure what the returned value of your code is, but it sure isn't seconds since the epoch.
Ruby stores dates and times internally as seconds since the epoch. Time.now.to_i will return exactly what you're looking for.
require 'date'
# Seconds from epoch to this very second
puts Time.now.to_i
# Seconds from epoch until today, 00:00
puts Date.today.to_time.to_i
In short, Time.now.to_i is enough.
Ruby internally stores Times in seconds since 1970-01-01T00:00:00.000+0000:
Time.now.to_f #=> 1439806700.8638804
Time.now.to_i #=> 1439806700
And you don't have to convert the value to something like ulong in C#, because Ruby automatically coerces the integer type so that it doesn't fight against your common sense.
A slightly more verbose explanation: Ruby stores integers as instances of Fixnum if the number fits in 63 bits (not 64, weird huh?). If the number exceeds that size, Ruby automatically converts it to a Bignum, which has arbitrary size.

Timestamp as UTC Integer

I have a legacy database with a field containing an integer representing a datetime in UTC.
From the documentation:
"Timestamps within a CDR appear in Universal Coordinated Time (UTC). This value remains
independent of daylight saving time changes"
An example of a value is 1236772829.
My question is: what is the best way to convert it to a .NET DateTime (in CLR code, not in the DB), both as the UTC value and as a local time value?
I have tried to Google it, but without any luck.
You'll need to know what the integer really means. This will typically consist of:
An epoch/offset (i.e. what 0 means) - for example "midnight Jan 1st 1970"
A scale, e.g. seconds, milliseconds, ticks.
If you can get two values and what they mean in terms of UTC, the rest should be easy. The simplest way would probably be to have a DateTime (or DateTimeOffset) as the epoch, then construct a TimeSpan from the integer, e.g. TimeSpan.FromMilliseconds etc. Add the two together and you're done. EDIT: Using AddSeconds or AddMilliseconds as per aakashm's answer is a simpler way of doing this bit :)
Alternatively, do the arithmetic yourself and call the DateTime constructor which takes a number of ticks. (Arguably the version taking a DateTimeKind as well would be better, so you can explicitly state that it's UTC.)
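A minimal sketch of that ticks-based route, assuming (as the next answer suggests) that the integer is seconds since the Unix epoch:
const long UnixEpochTicks = 621355968000000000L; // ticks at 1970-01-01T00:00:00Z
int cdrTimestamp = 1236772829;
// Do the arithmetic in ticks and state explicitly that the result is UTC.
DateTime utc = new DateTime(
    UnixEpochTicks + cdrTimestamp * TimeSpan.TicksPerSecond,
    DateTimeKind.Utc);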
Googling that exact phrase gives me this Cisco page, which goes on to say "The field specifies a time_t value that is obtained from the operating system."
time_t is a C library concept which strictly speaking doesn't have to be any particular implementation, but typically for UNIX-y systems will be the number of seconds since the start of the Unix epoch, 1970 January 1 00:00.
Assuming this to be right, this code will give you a DateTime from an int:
DateTime epochStart = new DateTime(1970, 1, 1);
int cdrTimestamp = 1236772829;
DateTime result = epochStart.AddSeconds(cdrTimestamp);
// Now result is 2009 March 11 12:00:29
You should sanity check the results you get to confirm that this is the correct interpretation of these time_ts.
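And since the question asks for a local-time value as well, one extra step on top of that snippet: mark the result as UTC, then let the framework convert it.
// Stamp the result as UTC so conversions behave correctly.
DateTime utcResult = DateTime.SpecifyKind(result, DateTimeKind.Utc);
// ToLocalTime applies the machine's time zone, including any DST adjustment.
DateTime localResult = utcResult.ToLocalTime();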

Convert System::DateTime to _timeb

I have a legacy C++-based application that timestamps incoming network traffic using the CRT _ftime() function. The _ftime() function returns a _timeb structure, which has a 32-bit and a 64-bit implementation. We are using the 32-bit implementation, which looks like this:
struct _timeb {
    long time;              // 4 bytes
    unsigned short millitm; // 2 bytes
    short timezone;         // 2 bytes
    short dstflag;          // 2 bytes
};
From the MSDN documentation, here is how each field is interpreted:
dstflag - nonzero if daylight savings time is currently in effect for the local time zone (see _tzset for an explanation of how daylight savings time is determined.)
millitm - fraction of a second in milliseconds
time - time in seconds since midnight (00:00:00), January 1, 1970, coordinated universal time (UTC).
timezone - difference in minutes, moving westward, between UTC and local time. The value of timezone is set from the value of the global variable _timezone (see _tzset).
I am re-working the portion of the code that does the timestamping to use C# in .NET 3.5. Timestamps are now generated using the System.DateTime structure, but I still need to convert them back to the _timeb structure so the legacy C++ code can operate on them. Here is how I am doing that in my managed C++ bridge library:
DateTime dateTime = DateTime::UtcNow;
DateTime baseTime(1970, 1, 1, 0, 0, 0, DateTimeKind::Utc);
TimeSpan delta = dateTime - baseTime;
_timeb timestamp;
timestamp.time = delta.TotalSeconds;
timestamp.millitm = dateTime.Millisecond;
timestamp.dstflag = TimeZoneInfo::Local->IsDaylightSavingTime(dateTime) ? 1 : 0;
timestamp.timezone = TimeZoneInfo::Local->BaseUtcOffset.TotalMinutes * -1;
From what I can tell, this appears to reconstruct the _timeb structure as if I had called _ftime() directly, and that's good. The thing is, timestamps are a critical piece of our application, so this has to be right.
My question is two-fold.
Is my algorithm flawed somehow? Does anyone see anything obvious that I've missed? Are there boundary conditions where this won't work right?
Is there a better way to do the conversion? Does .NET have a way to do this in a more straightforward manner?
You're aware of the Y2K38 problem? I assume you checked the sign of .timezone. Avoid the cleverness of using dateTime.Millisecond; that just confuses the next guy. Looks good otherwise.
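To make that last piece of advice concrete, here is a sketch in C# (the same calls exist in C++/CLI) that derives both the seconds and the milliseconds from the single delta, so the two fields can never disagree. The TimebStamp struct is a hypothetical managed stand-in for the native _timeb:
using System;

// Hypothetical managed mirror of the native _timeb layout.
struct TimebStamp
{
    public int time;       // seconds since 1970-01-01 UTC
    public ushort millitm; // milliseconds part of the same instant
    public short timezone; // minutes west of UTC
    public short dstflag;  // nonzero if DST is in effect locally
}

static class TimebConverter
{
    public static TimebStamp Now()
    {
        DateTime utcNow = DateTime.UtcNow;
        TimeSpan delta = utcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

        TimebStamp t;
        // Derive seconds and milliseconds from the same delta so they always agree.
        long totalMs = (long)delta.TotalMilliseconds;
        t.time = (int)(totalMs / 1000);
        t.millitm = (ushort)(totalMs % 1000);
        t.dstflag = (short)(TimeZoneInfo.Local.IsDaylightSavingTime(utcNow.ToLocalTime()) ? 1 : 0);
        t.timezone = (short)(-TimeZoneInfo.Local.BaseUtcOffset.TotalMinutes);
        return t;
    }
}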
