I'm trying to recreate the functionality of
slappasswd -h {md5}
in .NET.
I have this code in Perl:
use Digest::MD5;
use MIME::Base64;
$ctx = Digest::MD5->new;
$ctx->add('fredy');
print "Line $.: ", $ctx->clone->hexdigest, "\n";
print "Line $.: ", $ctx->digest, "\n";
$hashedPasswd = '{MD5}' . encode_base64($ctx->digest,'');
print $hashedPasswd . "\n";
I've tried to do the same in VB.NET, C#, etc., but the only part that works is
$ctx->clone->hexdigest # result: b89845d7eb5f8388e090fcc151d618c8
in C#, using this MSDN sample:
static string GetMd5Hash(MD5 md5Hash, string input)
{
// Convert the input string to a byte array and compute the hash.
byte[] data = md5Hash.ComputeHash(Encoding.UTF8.GetBytes(input));
// Create a new Stringbuilder to collect the bytes
// and create a string.
StringBuilder sBuilder = new StringBuilder();
// Loop through each byte of the hashed data
// and format each one as a hexadecimal string.
for (int i = 0; i < data.Length; i++)
{
sBuilder.Append(data[i].ToString("x2"));
}
// Return the hexadecimal string.
return sBuilder.ToString();
}
With this code in a console app:
string source = "fredy";
using (MD5 md5Hash = MD5.Create())
{
string hash = GetMd5Hash(md5Hash, source);
Console.WriteLine("The MD5 hash of " + source + " is: " + hash + ".");
}
it outputs: The MD5 hash of fredy is: b89845d7eb5f8388e090fcc151d618c8.
But I also need to implement the $ctx->digest function, which outputs binary data like
¸˜E×ë_ƒˆàüÁQÖÈ
This is the output Perl produces on both Linux and Windows.
Any ideas?
Thanks
As I already said in my comment above, you are mixing some things up. What the digest in Perl creates is a set of bytes. When those are printed, Perl automatically converts them to a string representation, because (simplified) it assumes that anything you print goes to a screen and you want to be able to read it. C# does not do that. That doesn't mean the Perl digest and the C# digest are not the same; just their representations differ.
You have already established that they are equal if you convert both of them to a hexadecimal representation.
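To see that this is purely a representation issue, note that the very same byte[] can be rendered in different ways; for example (a quick sketch reusing the GetMd5Hash input from above):

byte[] data = md5Hash.ComputeHash(Encoding.UTF8.GetBytes("fredy"));
// The same 16 bytes Perl prints as raw characters, rendered as hex instead:
Console.WriteLine(BitConverter.ToString(data)); // B8-98-45-D7-EB-5F-83-88-E0-90-FC-C1-51-D6-18-C8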
Now what you need to do to get output in C# that looks like the string that Perl prints when you do this:
print $ctx->digest; # output: ¸˜E×ë_ƒˆàüÁQÖÈ
... is to convert the C# byte[] data to a string of characters.
That has been answered before, for example here: How to convert byte[] to string?
Using that technique, I believe the function you need would look like this. Please note that I am a Perl developer and have no means of testing this; consider it C#-like pseudo-code.
static string GetMd5PerlishString(MD5 md5Hash, string input)
{
    // Convert the input string to a byte array and compute the hash.
    byte[] data = md5Hash.ComputeHash(Encoding.UTF8.GetBytes(input));
    // Map each raw byte to its corresponding character. Windows-1252 matches the
    // Perl output shown above (˜, ƒ and ˆ are Windows-1252 characters); UTF-8
    // would replace the invalid byte sequences with '�' instead.
    string result = Encoding.GetEncoding(1252).GetString(data);
    return result;
}
Now it should look the same.
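Also, if the end goal is the full slappasswd -h {md5} value rather than Perl's raw print, note that your Perl code ultimately Base64-encodes the raw digest ('{MD5}' . encode_base64($ctx->digest, '')), so you can skip the character conversion entirely. A minimal sketch (same namespaces as the MSDN sample above; the method name GetLdapMd5Hash is mine):

static string GetLdapMd5Hash(string password)
{
    using (MD5 md5 = MD5.Create())
    {
        // Raw 16-byte digest, the equivalent of Perl's $ctx->digest.
        byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(password));
        // Base64-encode and prefix, mirroring '{MD5}' . encode_base64($ctx->digest, '').
        return "{MD5}" + Convert.ToBase64String(digest);
    }
}

For "fredy" this yields {MD5}uJhF1+tfg4jgkPzBUdYYyA==, the Base64 form of the hex digest above.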
Please also note that MD5 is not a secure hashing algorithm for passwords any more. Please do not use it to store user passwords!
Related
I'm working on rewriting a piece of PHP code in C#. This code is used for password hashing. In the first step it produces a string like "password{salt}", then hashes it with the SHA-512 algorithm. After that, a loop repeatedly hashes the combination of the previous hash and the salt (5000 hash calls in total).
The PHP Code looks like this:
<?php
$password = 'abc';
$salt = 'def';
$salted = $password.'{'.$salt.'}';
$digest = hash('sha512', $salted, true);
for ($i=1; $i<5000; $i++) {
$digest = hash('sha512', $digest.$salted, true);
}
$encodedPassword = base64_encode($digest);
//$encodedPassword contains the final hash code
I was able to get it working without the loop (with just the first hash() call), so the basic hashing and Base64 encoding are done correctly. I found out that this is the part I cannot manage to rewrite in C#:
$digest.$salted
$digest seems to be a binary value, since PHP's hash() function was used with true as the last parameter (see the PHP hash manual). $salted is a string. Both somehow get magically combined by PHP's dot/concatenation operator. I guess there is some sort of standard conversion from binary to string under the hood when the dot operator is used with a non-string operand.
This is my code so far:
void Main()
{
string password = "abc";
string salt = "def";
string salted = String.Format("{0}{{{1}}}", password, salt);
byte[] digest = hash(salted);
for(int i = 1; i < 1; i++)
{
digest = hash(String.Format("{0}{1}", System.Text.Encoding.UTF8.GetString(digest), salted));
}
var encodedPassword = System.Convert.ToBase64String(digest);
//$encodedPassword should contain the final hash code
}
static byte[] hash(string toHash)
{
System.Security.Cryptography.SHA512 sha512 = new System.Security.Cryptography.SHA512Managed();
return sha512.ComputeHash(System.Text.Encoding.UTF8.GetBytes(toHash));
}
As you can see, I tried to convert the hash bytes back to a string with System.Text.Encoding.UTF8.GetString() and then append the salt, but that doesn't produce the same output as the PHP code.
I would be very happy if someone could help me on this. Thank you very much.
In the PHP version you loop 4999 times, while in the C# version you loop 0 times (your loop condition is i < 1). The second problem is that the bytes returned from hash() are raw binary with no text encoding at all, so converting them to a UTF-8 string corrupts them.
This should give you the same result as the PHP version:
// Note: Concat() requires "using System.Linq;".
System.Security.Cryptography.SHA512 sha512 = new System.Security.Cryptography.SHA512Managed();
var saltedUtf8Bytes = System.Text.Encoding.UTF8.GetBytes(salted);
// First round, equivalent to hash('sha512', $salted, true) in PHP.
byte[] digest = sha512.ComputeHash(saltedUtf8Bytes);
for (int i = 1; i < 5000; i++)
{
    // Concatenate the raw digest bytes with the salted bytes -- no string round-trip.
    digest = sha512.ComputeHash(digest.Concat(saltedUtf8Bytes).ToArray());
}
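Putting it all together, here is a minimal self-contained sketch (the method name EncodePassword is mine, not from your code):

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static string EncodePassword(string password, string salt)
{
    string salted = password + "{" + salt + "}";
    byte[] saltedBytes = Encoding.UTF8.GetBytes(salted);
    using (SHA512 sha512 = SHA512.Create())
    {
        // First round: hash('sha512', $salted, true)
        byte[] digest = sha512.ComputeHash(saltedBytes);
        // Remaining 4999 rounds: hash('sha512', $digest.$salted, true)
        for (int i = 1; i < 5000; i++)
            digest = sha512.ComputeHash(digest.Concat(saltedBytes).ToArray());
        // base64_encode($digest)
        return Convert.ToBase64String(digest);
    }
}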
I'm trying to get a website login page and a C# launcher to connect to a MySQL database. My C# code converts a string to SHA-256, but in uppercase, so I figured it would be easier to change the case on the PHP side: using strtoupper, I pass the uppercased hash as the password. It works great; the only problem is this:
bec4c38f480db265e86e1650b1515216be5095f7a049852f76eea9934351b9ac - Original
BEC4C38F48DB265E86E1650B1515216BE5095F7A049852F76EEA9934351B9AC - C#
^ Right here there is meant to be a 0
I'm not sure what's gone wrong, as both sides use the exact same hashing method, and it's odd that only one character differs... Has anyone experienced this before?
PHP code to hash text with SHA-256 and then uppercase it with strtoupper:
$encrypt_password=(hash('sha256', $mypassword));
$s_password = strtoupper($encrypt_password);
C# code to hash a string with SHA-256:
System.Security.Cryptography.SHA256 sha256 = new System.Security.Cryptography.SHA256Managed();
byte[] sha256Bytes = System.Text.Encoding.Default.GetBytes(txtpass.Text);
byte[] cryString = sha256.ComputeHash(sha256Bytes);
string sha256Str = string.Empty;
for (int i = 0; i < cryString.Length; i++)
{
sha256Str += cryString[i].ToString("X");
}
This is the only code that involves hashing on both sides.
A value like 13 is just "D", not "0D", as it would be represented in the hash. You need to pad values that are less than two digits: use "X2" as the format string.
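For example, the loop from your code becomes (the StringBuilder is optional; the actual fix is just the "X2" format string):

var sb = new System.Text.StringBuilder();
for (int i = 0; i < cryString.Length; i++)
{
    // "X2" pads to two hex digits, so 0x0D becomes "0D" instead of "D".
    sb.Append(cryString[i].ToString("X2"));
}
string sha256Str = sb.ToString();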
Possible duplicate: Converting byte array to string and back again in C#
I am using Huffman coding for compression and decompression of some text, from here.
The code there builds a Huffman tree and uses it for encoding and decoding. Everything works fine when I use the code directly.
For my situation, I need to get the compressed content, store it, and decompress it whenever needed.
The output from the encoder and the input to the decoder are BitArrays.
When I try to convert this BitArray to a string, back to a BitArray, and then decode it using the following code, I get a weird answer.
string input = Console.ReadLine();
Tree huffmanTree = new Tree();
huffmanTree.Build(input);
BitArray encoded = huffmanTree.Encode(input);
// Print the bits
Console.Write("Encoded Bits: ");
foreach (bool bit in encoded)
{
Console.Write((bit ? 1 : 0) + "");
}
Console.WriteLine();
// Convert the bit array to bytes
Byte[] e = new Byte[(encoded.Length / 8 + (encoded.Length % 8 == 0 ? 0 : 1))];
encoded.CopyTo(e, 0);
// Convert the bytes to string
string output = Encoding.UTF8.GetString(e);
// Convert string back to bytes
e = Encoding.UTF8.GetBytes(output);
// Convert bytes back to bit array
BitArray todecode = new BitArray(e);
string decoded = huffmanTree.Decode(todecode);
Console.WriteLine("Decoded: " + decoded);
Console.ReadLine();
The original code from the tutorial decodes the text correctly; my code produces a different, garbled result (output screenshots omitted).
Where am I wrong, friends? Help me. Thanks in advance.
You cannot stuff arbitrary bytes into a string. That concept is just undefined. Conversions happen using Encoding.
string output = Encoding.UTF8.GetString(e);
e is just binary garbage at this point, it is not a UTF8 string. So calling UTF8 methods on it does not make sense.
Solution: don't convert and back-convert to/from string; that does not round-trip. Why are you doing that in the first place? If you need a string, use a round-trippable format like Base64 or base-85.
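For example, a minimal sketch of a Base64 round-trip for the encoded bits (note that a BitArray rebuilt from bytes is padded to a multiple of 8 bits, so store encoded.Length as well if your decoder is sensitive to trailing bits):

// Pack the BitArray into bytes, then Base64-encode for safe storage as text.
byte[] buffer = new byte[(encoded.Length + 7) / 8];
encoded.CopyTo(buffer, 0);
string stored = Convert.ToBase64String(buffer);

// Later: Base64-decode back to the exact same bytes and rebuild the bits.
byte[] restored = Convert.FromBase64String(stored);
BitArray todecode = new BitArray(restored);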
I'm pretty sure Encoding doesn't round-trip; that is, you can't encode an arbitrary sequence of bytes to a string and then use the same Encoding to get the bytes back and always expect them to be the same.
If you want to be able to round-trip from your raw bytes to a string and back to the same raw bytes, you need to use Base64 encoding, e.g.
http://blogs.microsoft.co.il/blogs/mneiter/archive/2009/03/22/how-to-encoding-and-decoding-base64-strings-in-c.aspx
I have an algorithm in C#, running on the server side, which hashes a Base64-encoded string.
byte[] salt = Convert.FromBase64String(serverSalt); // Step 1
SHA256Managed sha256 = new SHA256Managed(); // Step 2
byte[] hash = sha256.ComputeHash(salt); // Step 3
Echo("String b64: " + Convert.ToBase64String(hash)); // Step 4
The hash is then checked against a database list of hashes.
I'd love to achieve the same in JavaScript, using the serverSalt as it is transmitted from C# through a websocket.
I know SHA-256 hashes of strings differ between C# and JavaScript, because the two use different string encodings.
But I know I can pad zeros in the byte array to make JavaScript behave like C# (so step 1 above is solved):
var newSalt = getByteArrayFromCSharpString(salt); // Pad zeros where needed
function getByteArrayFromCSharpString(inString)
{
var bytes = [];
for (var i = 0; i < inString.length; ++i)
{
bytes.push(inString.charCodeAt(i));
bytes.push(0);
}
return bytes;
}
Could anyone provide some insight on which algorithms I could use to reproduce steps 2, 3 and 4?
PS: there are already questions and answers around, but not a single code snippet.
Here's the solution; I really hope it can help other people in the same situation.
In the HTML file, load the crypto-js library:
<!-- library for doing password hashing, base64 encoding / decoding -->
<script src="http://crypto-js.googlecode.com/svn/tags/3.0.2/build/components/core-min.js"></script>
<script src="http://crypto-js.googlecode.com/svn/tags/3.0.2/build/components/enc-base64-min.js"></script>
<script src="http://crypto-js.googlecode.com/svn/tags/3.0.2/build/rollups/sha256.js"></script>
In the JavaScript, do the following:
// This function takes a base64 string, hashes it with the SHA256 algorithm
// and returns a base64 string.
function hashBase64StringAndReturnBase64String(str)
{
// Take the base64 string and parse it into a javascript variable
var words = CryptoJS.enc.Base64.parse(str);
// Create the hash using the CryptoJS implementation of the SHA256 algorithm
var hash = CryptoJS.SHA256(words);
var outString = hash.toString(CryptoJS.enc.Base64);
// Display what you just got and return it
console.log("Output string is: " + outString);
return outString;
}
Check this JavaScript SHA-256 implementation at the following URL:
http://www.movable-type.co.uk/scripts/sha256.html
I've written my first COM classes. My unit tests work fine, but my first use of the COM objects has hit a snag.
The COM classes provide methods which accept a string, manipulate it and return a string. The consumer of the COM objects is a dBASE PLUS program.
When the input string contains common keyboard characters (ASCII 127 or lower), the COM methods work fine. However, if the string contains characters beyond the ASCII range, some of them get remapped from Windows-1252 to C#'s Unicode. This table shows the mapping that takes place: http://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WINDOWS/CP1252.TXT
For example, if the dBASE program calls the COM object with:
oMyComObject.MyMethod("It will cost€123") where the € is hex 80,
the C# method receives it as Unicode:
public string MyMethod(string source)
{
// source is Unicode and now the Euro symbol is hex 20AC
...
}
I would like to avoid this remapping because I want the original hex content of the string.
I've tried adding the following to MyMethod to convert the string back to Windows-1252, but the Euro symbol gets lost because it becomes a question mark:
byte[] UnicodeBytes = Encoding.Unicode.GetBytes(source.ToString());
byte[] Win1252Bytes = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(1252), UnicodeBytes);
string Win1252 = Encoding.GetEncoding(1252).GetString(Win1252Bytes);
Is there a way to prevent this conversion of the "source" parameter to Unicode? Or, is there a way to convert it 100% from Unicode back to Windows-1252?
Yes, I'm answering my own question. The answer by "Jigsore" put me on the right track, but I want to explain more clearly in case someone else makes the same mistake I made.
I eventually figured out that I had misdiagnosed the problem. dBASE was passing the string fine and C# was receiving it fine. It was how I checked the contents of the string that was in error.
This turnkey example builds on Jigsore's answer:
void Main()
{
string unicodeText = "\u20AC\u0160\u0152\u0161";
byte[] unicodeBytes = Encoding.Unicode.GetBytes(unicodeText);
byte[] win1252bytes = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(1252), unicodeBytes);
for (int i = 0; i < win1252bytes.Length; i++)
Console.Write("0x{0:X2} ", win1252bytes[i]); // output: 0x80 0x8A 0x8C 0x9A
// win1252String represents the string passed from dBASE to C#
string win1252String = Encoding.GetEncoding(1252).GetString(win1252bytes);
Console.WriteLine("\r\nWin1252 string is " + win1252String); // output: Win1252 string is €ŠŒš
Console.WriteLine("looking at the code of the first character the wrong way: " + (int)win1252String[0]);
// output: looking at the code of the first character the wrong way: 8364
byte[] bytes = Encoding.GetEncoding(1252).GetBytes(win1252String[0].ToString());
Console.WriteLine("looking at the code of the first character the right way: " + bytes[0]);
// output: looking at the code of the first character the right way: 128
// Warning: if your input contains character codes larger than what a byte can
// hold (e.g. multi-byte Chinese characters), you will need to look at more than just bytes[0].
}
The reason the first method was wrong is that casting, as in (int)win1252String[0] (or the converse, casting an integer j to a character with (char)j), involves an implicit conversion using the Unicode character set that C# uses internally.
I consider this resolved and would like to thank each person who took the time to comment or answer for their time and trouble. It is appreciated!
Actually you're doing the Unicode to Win-1252 conversion correctly, but you're performing an extra step. The original Win1252 codes are in the Win1252Bytes array!
Check the following code:
string unicodeText = "\u20AC\u0160\u0152\u0161";
byte[] unicodeBytes = Encoding.Unicode.GetBytes(unicodeText);
byte[] win1252bytes = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(1252), unicodeBytes);
for (int i = 0; i < win1252bytes.Length; i++)
    Console.Write("0x{0:X2} ", win1252bytes[i]); // output: 0x80 0x8A 0x8C 0x9A
The output shows the Win-1252 codes for the unicodeText string; you can verify this against the CP1252.TXT table.