How to create Delphi Extended 10 bytes from a number? - c#

I know this is probably a very rare question.
I have a service written with Delphi, and a client written with C#. The Delphi service tries to read a 10-byte Extended data type from the C# client.
After some research, I found some sample code in C# to convert a 10-byte Extended to a number (Convert Delphi Extended to C#). But I couldn't find any sample to convert a number to a 10-byte Extended, so that I can send it back to the service.
I tried to write code by myself, but the calculation is very difficult for me to understand.
Can anyone help me?

Delphi (32-bit target) natively supports the Extended data type. You can just copy the 10 bytes into an Extended variable. For example:

const
  // Binary representation of the Extended number 123456789012345678
  Bin : array [0..9] of Byte = (0, 167, 121, 24, 211,
                                165, 77, 219, 55, 64);

procedure TForm1.Button1Click(Sender: TObject);
var
  V : Extended;
begin
  V := PExtended(@Bin[0])^; // Copy Bin to V
  Memo1.Lines.Add(Format('%22f', [V]));
end;
The binary format for extended data type can be found here.
A better description of the format is here.

Sorry for not making the question clear, and thanks for all the comments here.
I got the code working. It's not perfect, but the unit tests pass.
Thanks for the link posted by @fpiette; it gave me the ideas below.
public static byte[] WriteExtendedToBuffer(double value)
{
    var extendedBuffer = Enumerable.Repeat((byte)0x0, 10).ToArray();
    if (!double.IsNaN(value) && !double.IsInfinity(value) && (value != 0))
    {
        var doubleBuff = BitConverter.GetBytes(value);
        var sign = doubleBuff[7] & 0x80;
        doubleBuff[7] = (byte)(doubleBuff[7] & 0x7F);
        var exp = BitConverter.ToUInt16(doubleBuff, 6);
        doubleBuff[7] = 0;
        doubleBuff[6] = (byte)(doubleBuff[6] & 0x0F);
        var massive = BitConverter.ToUInt64(doubleBuff, 0);
        exp >>= 4;
        if (exp == 0)
        {
            // Subnormal double: no implicit leading 1 bit.
            exp = 16383 - 1022;
            Buffer.BlockCopy(BitConverter.GetBytes(exp), 0, extendedBuffer, 8, 2);
            extendedBuffer[9] = (byte)(extendedBuffer[9] | sign);
            massive <<= 11;
            Buffer.BlockCopy(BitConverter.GetBytes(massive), 0, extendedBuffer, 0, 8);
        }
        else
        {
            // Rebias the exponent from double (1023) to extended (16383).
            exp = (ushort)(16383 + exp - 1023);
            Buffer.BlockCopy(BitConverter.GetBytes(exp), 0, extendedBuffer, 8, 2);
            extendedBuffer[9] = (byte)(extendedBuffer[9] | sign);
            massive <<= 11;
            Buffer.BlockCopy(BitConverter.GetBytes(massive), 0, extendedBuffer, 0, 8);
            // Extended stores the leading integer bit explicitly.
            extendedBuffer[7] = (byte)(extendedBuffer[7] | 0x80);
        }
    }
    return extendedBuffer;
}
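For completeness, the reverse conversion (a 10-byte Extended back to a double) can be sketched as below. This helper is my addition, not part of the original thread; the name ReadExtendedFromBuffer is made up, and like the writer above it ignores the special encodings (NaN, infinity, unnormals):

```csharp
using System;

static class ExtendedReader
{
    // Hypothetical counterpart to WriteExtendedToBuffer (name is mine):
    // decodes a 10-byte Delphi Extended (IEEE 754 80-bit) into a double.
    // Precision beyond the double's 53 mantissa bits is lost.
    public static double ReadExtendedFromBuffer(byte[] ext)
    {
        ushort signAndExp = BitConverter.ToUInt16(ext, 8);
        int sign = (signAndExp & 0x8000) != 0 ? -1 : 1;
        int exp = signAndExp & 0x7FFF;
        // the 64-bit mantissa stores its leading integer bit explicitly
        ulong mantissa = BitConverter.ToUInt64(ext, 0);
        if (exp == 0 && mantissa == 0)
            return 0.0;
        // value = sign * mantissa * 2^(exp - 16383 - 63)
        return sign * (double)mantissa * Math.Pow(2.0, exp - 16383 - 63);
    }

    static void Main()
    {
        // the Bin constant from the Delphi sample above
        byte[] bin = { 0, 167, 121, 24, 211, 165, 77, 219, 55, 64 };
        Console.WriteLine(ReadExtendedFromBuffer(bin)); // ~1.23456789012345678E+17
    }
}
```

Decoding the Bin constant from the Delphi sample gives back approximately 123456789012345678, which is a useful round-trip check for WriteExtendedToBuffer.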

Related

CRC-16/X-25 Calculation

I'm trying to calculate a CRC-16 for a concox VTS device. After a lot of searching I found several formulas, lookup tables, the CRC32.NET library, etc., and tried them, but I didn't get the result I wanted.
For example, when I use crc32.net library :
byte[] data = { 11, 01, 03, 51, 51, 00, 94, 10, 95, 20, 20, 08, 25, 81, 00, 23 };
UInt32 crcOut = Crc32Algorithm.ComputeAndWriteToEnd(data);
Console.WriteLine(crcOut.ToString());
It returns 3232021645, but it should actually return 90DD.
I've tried other examples too, but they also did not return the proper value.
Edit :
Here is the RAW data from Device :
{78}{78}{11}{01}{03}{51}{51}{00}{94}{10}{95}{20}{20}{08}{25}{81}{00}{23}{90}{DD}{0D}{0A}
When split by following data sheet, it looks like -
{78}{78} = Start Bit
{11} = Packet Length(17) = (01 + 12 + 02 + 02) = Decimal 17 = Hexadecimal 11
{01} = Protocol No
{03}{51}{51}{00}{94}{10}{95}{20} = TerminalID = 0351510094109520
{20}{08} = Model Identification Code
{25}{81} = Time Zone Language
{00}{23} = Information Serial No
{90}{DD} = Error Check (CRC : Packet Length to Information Serial No)
{0D}{0A} = Stop Bit
The data sheet says the error check is the CRC from Packet Length to Information Serial No. To reply to this packet I also need to build a data packet with a CRC code.
I found an online calculator at the link below; the data matches CRC-16/X-25.
Now I need to calculate it in C# code.
https://crccalc.com/?crc=11,%2001,%2003,%2051,%2051,%2000,%2094,%2010,%2095,%2020,%2020,%2008,%2025,%2081,%2000,%2023&method=CRC-16/X-25&datatype=hex&outtype=0
Waiting for your reply.
Thanks
The CRC-16 you assert that you need in your comment (which needs to be in your question) is the CRC-16/X-25. On your data, that CRC gives 0xcac0, not 0x90dd.
In fact, none of the documented CRC-16's listed in that catalog produce 0x90dd for your data. You need to provide a reference for the CRC that you need, and how you determined that 0x90dd is the expected result for that data.
Update for updated question:
The bytes you provided in your example data are in decimal:
byte[] data = { 11, 01, 03, 51, 51, 00, 94, 10, 95, 20, 20, 08, 25, 81, 00, 23 };
That is completely wrong: based on the actual message data in your updated question, those digits must be interpreted as hexadecimal. (Just by chance, none of those numbers contain the hexadecimal digits a..f.) To represent that test vector correctly in your code, it needs to be:
byte[] data = { 0x11, 0x01, 0x03, 0x51, 0x51, 0x00, 0x94, 0x10, 0x95, 0x20, 0x20, 0x08, 0x25, 0x81, 0x00, 0x23 };
This computes the CRC-16/X-25:
ushort crc16_x25(byte[] data, int len) {
    ushort crc = 0xffff;
    for (int i = 0; i < len; i++) {
        crc ^= data[i];
        for (int k = 0; k < 8; k++)
            crc = (ushort)((crc & 1) != 0 ? (crc >> 1) ^ 0x8408 : crc >> 1);
    }
    return (ushort)~crc;
}
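As a quick sanity check (my addition, not part of the original answer), the routine can be verified against the standard CRC-16/X-25 check value, which is 0x906E for the ASCII string "123456789":

```csharp
using System;
using System.Text;

class Crc16X25Check
{
    // CRC-16/X-25: poly 0x1021 (reflected: 0x8408), init 0xFFFF,
    // reflected input/output, final XOR 0xFFFF.
    public static ushort Crc16X25(byte[] data, int len)
    {
        ushort crc = 0xffff;
        for (int i = 0; i < len; i++)
        {
            crc ^= data[i];
            for (int k = 0; k < 8; k++)
                crc = (ushort)((crc & 1) != 0 ? (crc >> 1) ^ 0x8408 : crc >> 1);
        }
        return (ushort)~crc;
    }

    static void Main()
    {
        // standard check vector for CRC-16/X-25
        byte[] check = Encoding.ASCII.GetBytes("123456789");
        Console.WriteLine(Crc16X25(check, check.Length).ToString("X4")); // prints 906E
    }
}
```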

How can I create a byte from 8 bool values in C#

I need to read 8 bool values and create a byte from them. How is this done?
Rather than hardcoding the following 1's and 0's, how can I create that binary value from a series of Boolean values in C#?
byte myValue = 0b001_0000;
There are many ways of doing it; for example, building it from an array:
bool[] values = ...;
byte result = 0;
for (int i = values.Length - 1; i >= 0; --i) // assuming you store them "in reverse"
    result |= (byte)((values[i] ? 1 : 0) << (values.Length - 1 - i));
My solution with Linq:
public static byte CreateByte(bool[] bits)
{
    if (bits.Length > 8)
    {
        throw new ArgumentOutOfRangeException();
    }
    return (byte)bits.Reverse().Select((val, i) => Convert.ToByte(val) << i).Sum();
}
The call to Reverse() is optional and depends on whether you want index 0 to be the LSB (without Reverse) or the MSB (with Reverse).
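For example (my addition, wrapping the method above in a runnable demo), with Reverse() in place, index 0 maps to the most significant bit:

```csharp
using System;
using System.Linq;

class CreateByteDemo
{
    public static byte CreateByte(bool[] bits)
    {
        if (bits.Length > 8)
            throw new ArgumentOutOfRangeException();
        // Reverse makes index 0 the MSB; each true contributes 1 << i
        return (byte)bits.Reverse().Select((val, i) => Convert.ToByte(val) << i).Sum();
    }

    static void Main()
    {
        bool[] bits = { true, false, false, false, false, false, false, false };
        Console.WriteLine(CreateByte(bits)); // index 0 is the MSB -> prints 128
    }
}
```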
var values = new bool[8];
values[7] = true;
byte result = 0;
for (var i = 0; i < 8; i++)
{
    // edited to bit shifting because of community complaints :D
    if (values[i]) result |= (byte)(1 << i);
}
// result => 128
This might be absolutely overkill, but I felt like playing around with SIMD. It could've probably been written even better but I don't know SIMD all that well.
If you want reverse bit order to what this generates, just remove the shuffling part from the SIMD approach and change (7 - i) to just i
For those not familiar with SIMD, this approach is about 3 times faster than a normal for loop.
public static byte ByteFrom8Bools(ReadOnlySpan<bool> bools)
{
    if (bools.Length < 8)
        Throw();

    static void Throw() // Throwing in a separate method helps JIT produce better code, or so I've heard
    {
        throw new ArgumentException("Not enough booleans provided");
    }

    // these are JIT compile time constants; only one of the branches will be compiled
    // depending on the CPU running this code, eliminating the branch entirely
    if (Sse2.IsSupported && Ssse3.IsSupported)
    {
        // copy out the 64 bits all at once
        ref readonly bool b = ref bools[0];
        ref bool refBool = ref Unsafe.AsRef(b);
        ulong ulongBools = Unsafe.As<bool, ulong>(ref refBool);
        // load our 64 bits into a vector register
        Vector128<byte> vector = Vector128.CreateScalarUnsafe(ulongBools).AsByte();
        // this is just to propagate the 1 set bit in true bools to the most significant bit
        Vector128<byte> allTrue = Vector128.Create((byte)1);
        Vector128<byte> compared = Sse2.CompareEqual(vector, allTrue);
        // reverse the bytes we care about, leave the rest in their place
        Vector128<byte> shuffleMask = Vector128.Create((byte)7, 6, 5, 4, 3, 2, 1, 0, 8, 9, 10, 11, 12, 13, 14, 15);
        Vector128<byte> shuffled = Ssse3.Shuffle(compared, shuffleMask);
        // move the most significant bit of each byte into a bit of int
        int mask = Sse2.MoveMask(shuffled);
        // returning byte = returning the least significant byte of the int
        return (byte)mask;
    }
    else
    {
        // fall back to a more generic algorithm if the CPU lacks these instructions
        byte bits = 0;
        for (int i = 0; i < 8; i++)
        {
            bool b = bools[i];
            bits |= (byte)(Unsafe.As<bool, byte>(ref b) << (7 - i));
        }
        return bits;
    }
}

Determining values in a bitmask

I have a protocol guide for a piece of hardware where I can extract 16 different kinds of data. To indicate I want all data, I would enter 65535 as the mask.
2^0 (1)
+ 2^1 (2)
+ 2^2 (4)
...
+ 2^15 (32768)
==============
65535
I now need to indicate I need options 9, 10, and 13. Presumably I simply need to use the following calculation:
2^9 (512)
+ 2^10 (1024)
+ 2^13 (8192)
==============
9728
(If I'm off-base here, or there is a programmatic way to do this, I'd be interested to know!)
What I would like to know is how I would in future extract all the numbers that were involved in the summation.
I had thought I would be able to check with (9728 & 9) == 9, (9728 & 10) == 10, and (9728 & 13) == 13, but all those return false.
Counting the options from 1, option 9 is bit 8, i.e. 256; option 10 is 512; option 13 is 4096. (With the 2^9/2^10/2^13 numbering used above, the values would be 512, 1024 and 8192 instead; either way, the key point is to test against the bit's value, not the option number itself.)
So:
if((val & 256) != 0) { /* bit 9 is set */ }
if((val & 512) != 0) { /* bit 10 is set */ }
if((val & 4096) != 0) { /* bit 13 is set */ }
You could also use an enum for convenience:
[Flags]
public enum MyFlags {
    None = 0,
    Foo = 1,
    Bar = 2,
    ...
    SomeFlag = 256,
    AnotherFlag = 512,
    ...
}
then:
MyFlags flags = (MyFlags)val;
if((flags & MyFlags.SomeFlag) != 0) {/* SomeFlag is set */}
And likewise:
MyFlags thingsWeWant = MyFlags.Foo | MyFlags.SomeFlag | MyFlags.AnotherFlag;
int val = (int)thingsWeWant;
Did you mean something like this?
var value = 512 | 1024 | 8192;
var pos = 9;
var isSetNine = (value & (1 << pos)) != 0;
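To answer the "programmatic way" part as well (my addition, following the question's 2^9/2^10/2^13 numbering), all the options baked into a mask can be recovered by testing each bit position in a loop:

```csharp
using System;
using System.Collections.Generic;

class MaskDemo
{
    static void Main()
    {
        int mask = 512 | 1024 | 8192; // options 9, 10 and 13 -> 9728
        var options = new List<int>();
        for (int bit = 0; bit < 16; bit++)
            if ((mask & (1 << bit)) != 0) // is this bit set in the mask?
                options.Add(bit);
        Console.WriteLine(string.Join(",", options)); // prints 9,10,13
    }
}
```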

Converting 2 bytes to Short in C#

I'm trying to convert two bytes into an unsigned short so I can retrieve the actual server port value. I'm basing it on this protocol specification, under Reply Format. I tried using BitConverter.ToUInt16() for this, but the problem is that it doesn't seem to return the expected value. See below for a sample implementation:
int bytesRead = 0;
while (bytesRead < ms.Length)
{
    int first = ms.ReadByte() & 0xFF;
    int second = ms.ReadByte() & 0xFF;
    int third = ms.ReadByte() & 0xFF;
    int fourth = ms.ReadByte() & 0xFF;
    int port1 = ms.ReadByte();
    int port2 = ms.ReadByte();
    int actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
    string ip = String.Format("{0}.{1}.{2}.{3}:{4}-{5} = {6}", first, second, third, fourth, port1, port2, actualPort);
    Debug.WriteLine(ip);
    bytesRead += 6;
}
Given sample data where the two byte values are 105 and 135, the expected port value after conversion should be 27015, but instead I get 34665 from BitConverter.
Am I doing it the wrong way?
If you reverse the values in the BitConverter call, you should get the expected result:
int actualPort = BitConverter.ToUInt16(new byte[2] {(byte)port2 , (byte)port1 }, 0);
On a little-endian architecture, the low order byte needs to be second in the array. And as lasseespeholt points out in the comments, you would need to reverse the order on a big-endian architecture. That could be checked with the BitConverter.IsLittleEndian property. Or it might be a better solution overall to use IPAddress.HostToNetworkOrder (convert the value first and then call that method to put the bytes in the correct order regardless of the endianness).
BitConverter is doing the right thing, you just have low-byte and high-byte mixed up - you can verify using a bitshift manually:
byte port1 = 105;
byte port2 = 135;
ushort value = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
ushort value2 = (ushort)(port1 + (port2 << 8)); //same output
To work on both little and big endian architectures, you must do something like:
if (BitConverter.IsLittleEndian)
    actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port2, (byte)port1 }, 0);
else
    actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
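A more recent alternative (my addition; requires System.Buffers.Binary, available since .NET Core 2.1 or via the System.Memory package): BinaryPrimitives reads the big-endian port directly, regardless of the host's endianness, with no branching:

```csharp
using System;
using System.Buffers.Binary;

class PortDemo
{
    static void Main()
    {
        // the protocol sends the port in big-endian (network) byte order
        byte[] portBytes = { 105, 135 };
        ushort port = BinaryPrimitives.ReadUInt16BigEndian(portBytes);
        Console.WriteLine(port); // prints 27015 (105 * 256 + 135)
    }
}
```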

Problem porting PHP crypt() function to C#

I'm working on porting some old ALP user accounts to a new ASP.NET solution, and I would like the users to be able to use their old passwords.
However, in order for that to work, I need to be able to compare the old hashes to a newly calculated one, based on a newly typed password.
I searched around, and found this as the implementation of crypt() called by PHP:
char *
crypt_md5(const char *pw, const char *salt)
{
    MD5_CTX ctx, ctx1;
    unsigned long l;
    int sl, pl;
    u_int i;
    u_char final[MD5_SIZE];
    static const char *sp, *ep;
    static char passwd[120], *p;
    static const char *magic = "$1$";

    /* Refine the Salt first */
    sp = salt;

    /* If it starts with the magic string, then skip that */
    if (!strncmp(sp, magic, strlen(magic)))
        sp += strlen(magic);

    /* It stops at the first '$', max 8 chars */
    for (ep = sp; *ep && *ep != '$' && ep < (sp + 8); ep++)
        continue;

    /* get the length of the true salt */
    sl = ep - sp;

    MD5Init(&ctx);

    /* The password first, since that is what is most unknown */
    MD5Update(&ctx, (const u_char *)pw, strlen(pw));

    /* Then our magic string */
    MD5Update(&ctx, (const u_char *)magic, strlen(magic));

    /* Then the raw salt */
    MD5Update(&ctx, (const u_char *)sp, (u_int)sl);

    /* Then just as many characters of the MD5(pw,salt,pw) */
    MD5Init(&ctx1);
    MD5Update(&ctx1, (const u_char *)pw, strlen(pw));
    MD5Update(&ctx1, (const u_char *)sp, (u_int)sl);
    MD5Update(&ctx1, (const u_char *)pw, strlen(pw));
    MD5Final(final, &ctx1);
    for (pl = (int)strlen(pw); pl > 0; pl -= MD5_SIZE)
        MD5Update(&ctx, (const u_char *)final,
            (u_int)(pl > MD5_SIZE ? MD5_SIZE : pl));

    /* Don't leave anything around in vm they could use. */
    memset(final, 0, sizeof(final));

    /* Then something really weird... */
    for (i = strlen(pw); i; i >>= 1)
        if (i & 1)
            MD5Update(&ctx, (const u_char *)final, 1);
        else
            MD5Update(&ctx, (const u_char *)pw, 1);

    /* Now make the output string */
    strcpy(passwd, magic);
    strncat(passwd, sp, (u_int)sl);
    strcat(passwd, "$");
    MD5Final(final, &ctx);

    /*
     * and now, just to make sure things don't run too fast
     * On a 60 Mhz Pentium this takes 34 msec, so you would
     * need 30 seconds to build a 1000 entry dictionary...
     */
    for (i = 0; i < 1000; i++) {
        MD5Init(&ctx1);
        if (i & 1)
            MD5Update(&ctx1, (const u_char *)pw, strlen(pw));
        else
            MD5Update(&ctx1, (const u_char *)final, MD5_SIZE);
        if (i % 3)
            MD5Update(&ctx1, (const u_char *)sp, (u_int)sl);
        if (i % 7)
            MD5Update(&ctx1, (const u_char *)pw, strlen(pw));
        if (i & 1)
            MD5Update(&ctx1, (const u_char *)final, MD5_SIZE);
        else
            MD5Update(&ctx1, (const u_char *)pw, strlen(pw));
        MD5Final(final, &ctx1);
    }

    p = passwd + strlen(passwd);
    l = (final[ 0]<<16) | (final[ 6]<<8) | final[12];
    _crypt_to64(p, l, 4); p += 4;
    l = (final[ 1]<<16) | (final[ 7]<<8) | final[13];
    _crypt_to64(p, l, 4); p += 4;
    l = (final[ 2]<<16) | (final[ 8]<<8) | final[14];
    _crypt_to64(p, l, 4); p += 4;
    l = (final[ 3]<<16) | (final[ 9]<<8) | final[15];
    _crypt_to64(p, l, 4); p += 4;
    l = (final[ 4]<<16) | (final[10]<<8) | final[ 5];
    _crypt_to64(p, l, 4); p += 4;
    l = final[11];
    _crypt_to64(p, l, 2); p += 2;
    *p = '\0';

    /* Don't leave anything around in vm they could use. */
    memset(final, 0, sizeof(final));
    return (passwd);
}
And, here is my version in C#, along with an expected match.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
using System.Security.Cryptography;
using System.IO;
using System.Management;

namespace Test
{
    class Program
    {
        static void Main(string[] args)
        {
            byte[] salt = Encoding.ASCII.GetBytes("$1$ls3xPLpO$Wu/FQ.PtP2XBCqrM.w847/");
            Console.WriteLine("Hash: " + Encoding.ASCII.GetString(salt));
            byte[] passkey = Encoding.ASCII.GetBytes("suckit");
            byte[] newhash = md5_crypt(passkey, salt);
            Console.WriteLine("Hash2: " + Encoding.ASCII.GetString(newhash));
            byte[] newhash2 = md5_crypt(passkey, newhash);
            Console.WriteLine("Hash3: " + Encoding.ASCII.GetString(newhash2));
            Console.ReadKey(true);
        }

        public static byte[] md5_crypt(byte[] pw, byte[] salt)
        {
            MemoryStream ctx, ctx1;
            ulong l;
            int sl, pl;
            int i;
            byte[] final;
            int sp, ep; //** changed pointers to array indices
            MemoryStream passwd = new MemoryStream();
            byte[] magic = Encoding.ASCII.GetBytes("$1$");

            // Refine the salt first
            sp = 0; //** Changed to an array index, rather than a pointer.

            // If it starts with the magic string, then skip that
            if (salt[0] == magic[0] &&
                salt[1] == magic[1] &&
                salt[2] == magic[2])
            {
                sp += magic.Length;
            }

            // It stops at the first '$', max 8 chars
            for (ep = sp;
                 (ep + sp < salt.Length) && //** Converted to array indices; rather than check for null termination, check for the end of the array.
                 salt[ep] != (byte)'$' &&
                 ep < (sp + 8);
                 ep++)
                continue;

            // Get the length of the true salt
            sl = ep - sp;

            ctx = MD5Init();

            // The password first, since that is what is most unknown
            MD5Update(ctx, pw, pw.Length);

            // Then our magic string
            MD5Update(ctx, magic, magic.Length);

            // Then the raw salt
            MD5Update(ctx, salt, sp, sl);

            // Then just as many characters of the MD5(pw,salt,pw)
            ctx1 = MD5Init();
            MD5Update(ctx1, pw, pw.Length);
            MD5Update(ctx1, salt, sp, sl);
            MD5Update(ctx1, pw, pw.Length);
            final = MD5Final(ctx1);
            for (pl = pw.Length; pl > 0; pl -= final.Length)
                MD5Update(ctx, final,
                    (pl > final.Length ? final.Length : pl));

            // Don't leave anything around in vm they could use.
            for (i = 0; i < final.Length; i++) final[i] = 0;

            // Then something really weird...
            for (i = pw.Length; i != 0; i >>= 1)
                if ((i & 1) != 0)
                    MD5Update(ctx, final, 1);
                else
                    MD5Update(ctx, pw, 1);

            // Now make the output string
            passwd.Write(magic, 0, magic.Length);
            passwd.Write(salt, sp, sl);
            passwd.WriteByte((byte)'$');
            final = MD5Final(ctx);

            // and now, just to make sure things don't run too fast
            // On a 60 Mhz Pentium this takes 34 msec, so you would
            // need 30 seconds to build a 1000 entry dictionary...
            for (i = 0; i < 1000; i++)
            {
                ctx1 = MD5Init();
                if ((i & 1) != 0)
                    MD5Update(ctx1, pw, pw.Length);
                else
                    MD5Update(ctx1, final, final.Length);
                if ((i % 3) != 0)
                    MD5Update(ctx1, salt, sp, sl);
                if ((i % 7) != 0)
                    MD5Update(ctx1, pw, pw.Length);
                if ((i & 1) != 0)
                    MD5Update(ctx1, final, final.Length);
                else
                    MD5Update(ctx1, pw, pw.Length);
                final = MD5Final(ctx1);
            }

            //** Section changed to use a memory stream, rather than a byte array.
            l = (((ulong)final[0]) << 16) | (((ulong)final[6]) << 8) | ((ulong)final[12]);
            _crypt_to64(passwd, l, 4);
            l = (((ulong)final[1]) << 16) | (((ulong)final[7]) << 8) | ((ulong)final[13]);
            _crypt_to64(passwd, l, 4);
            l = (((ulong)final[2]) << 16) | (((ulong)final[8]) << 8) | ((ulong)final[14]);
            _crypt_to64(passwd, l, 4);
            l = (((ulong)final[3]) << 16) | (((ulong)final[9]) << 8) | ((ulong)final[15]);
            _crypt_to64(passwd, l, 4);
            l = (((ulong)final[4]) << 16) | (((ulong)final[10]) << 8) | ((ulong)final[5]);
            _crypt_to64(passwd, l, 4);
            l = final[11];
            _crypt_to64(passwd, l, 2);

            byte[] buffer = new byte[passwd.Length];
            passwd.Seek(0, SeekOrigin.Begin);
            passwd.Read(buffer, 0, buffer.Length);
            return buffer;
        }

        public static MemoryStream MD5Init()
        {
            return new MemoryStream();
        }

        public static void MD5Update(MemoryStream context, byte[] source, int length)
        {
            context.Write(source, 0, length);
        }

        public static void MD5Update(MemoryStream context, byte[] source, int offset, int length)
        {
            context.Write(source, offset, length);
        }

        public static byte[] MD5Final(MemoryStream context)
        {
            long location = context.Position;
            byte[] buffer = new byte[context.Length];
            context.Seek(0, SeekOrigin.Begin);
            context.Read(buffer, 0, (int)context.Length);
            context.Seek(location, SeekOrigin.Begin);
            return MD5.Create().ComputeHash(buffer);
        }

        // Changed to use a memory stream rather than a character array.
        public static void _crypt_to64(MemoryStream s, ulong v, int n)
        {
            char[] _crypt_a64 = "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz".ToCharArray();
            while (--n >= 0)
            {
                s.WriteByte((byte)_crypt_a64[v & 0x3f]);
                v >>= 6;
            }
        }
    }
}
What am I doing wrong? I am making some big assumptions about the workings of the MD5xxxx functions in the FreeBSD version, but it seems to work.
Is this not the actual version used by PHP? Does anyone have any insight?
EDIT:
I downloaded a copy of PHP's source code and found that it uses the glibc library. So I downloaded a copy of glibc's source code, found the __md5_crypt_r function, duplicated its functionality, and it came back with the EXACT same hashes as the FreeBSD version.
Now, I am pretty much stumped. Did PHP 4 use a different method than PHP 5? What is going on?
Alright, so here is the answer:
PHP uses the glibc implementation of the crypt function. (attached: C# implementation)
The reason my old passwords don't match the hash is that the Linux box my old website (hosted by GoDaddy) sat on used a non-standard hashing algorithm (possibly to fix some of the WEIRD stuff done in the algorithm).
However, I have tested the following implementation against glibc's unit tests and against a Windows install of PHP. Both passed 100%.
EDIT
Here is the link: (moved to a Github Gist)
https://gist.github.com/1092558
The crypt() function in PHP uses whatever hash algorithm the underlying operating system provides for hashing the data; have a look at its documentation. So the first step should be to find out how the data was hashed (what hashing algorithm was used). Once you know that, it should be trivial to find the same algorithm for C#.
You can always shell out (via Process.Start in C#, the equivalent of system()) to a PHP command-line script that does the crypt for you.
I would recommend forcing a password change though after successful login. Then you can have a flag that indicates if the user has changed. Once everyone has changed you can dump the php call.
Just reuse the php implementation... Make sure php's crypt libraries are in your system environment path...
You may need to update your interop method to make sure your string marshaling/charset is correct... you can then use the original hashing algorithm.
[DllImport("crypt.dll", CharSet = CharSet.Ansi)]
private static extern string crypt(string password, string salt);

public bool ValidLogin(string username, string password)
{
    string hash = crypt(password, null);
    ...
}
It does not look trivial.
UPDATE: Originally I wrote: "The PHP crypt function does not look like a standard hash. Why not? Who knows." As pointed out in the comments, the PHP crypt() is the same as that used in BSD for passwd crypt. I don't know if that is a de jure standard, but it is a de facto standard. So.
I stand by my position that it does not appear to be trivial.
Rather than porting the code, you might consider keeping the old PHP running, and use it strictly for password validation of old passwords. As users change their passwords, use a new hashing algorithm, something a little more "open". You would have to store the hash, as well as the "flavor of hash" for each user.
