I use the Crypto-JS v2.5.3 (hmac.min.js) library (http://code.google.com/p/crypto-js/) to calculate a client-side hash, and the script is:
$("#PasswordHash").val(Crypto.HMAC(Crypto.SHA256, $("#pwd").val(), $("#PasswordSalt").val(), { asByte: true }));
This returns something like this:
b3626b28c57ea7097b6107933c6e1f24f586cca63c00d9252d231c715d42e272
Then on the server side I use the following code to calculate the hash:
private string CalcHash(string PlainText, string Salt) {
    string result = "";
    ASCIIEncoding enc = new ASCIIEncoding();
    byte[]
        baText2BeHashed = enc.GetBytes(PlainText),
        baSalt = enc.GetBytes(Salt);
    System.Security.Cryptography.HMACSHA256 hasher = new HMACSHA256(baSalt);
    byte[] baHashedText = hasher.ComputeHash(baText2BeHashed);
    result = string.Join("", baHashedText.ToList().Select(b => b.ToString("x")).ToArray());
    return result;
}
and this method returned:
b3626b28c57ea797b617933c6e1f24f586cca63c0d9252d231c715d42e272
As you can see, there are just some zero characters that the server-side method drops. Where is the problem? Is there any fault in my server-side method? I just need these two values to be the same for an equal string and salt.
As you can see, there are just some zero characters that the server-side method drops. Where is the problem?
Here - your conversion to hex in C#:
b => b.ToString("x")
If b is 10, that will just give "a" rather than "0a".
Personally I'd suggest a simpler hex conversion:
return BitConverter.ToString(baHashedText).Replace("-", "").ToLowerInvariant();
(You could just change "x" to "x2" instead, to specify a length of 2 characters, but it's still a somewhat roundabout way of performing a bytes-to-hex conversion.)
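Putting the pieces together, here is a minimal sketch of the server-side method with the two-digit fix applied, using the BitConverter approach (names are kept from the question; this is illustrative rather than a drop-in replacement):

private string CalcHash(string plainText, string salt)
{
    ASCIIEncoding enc = new ASCIIEncoding();
    byte[] textBytes = enc.GetBytes(plainText);
    byte[] saltBytes = enc.GetBytes(salt);

    using (var hasher = new System.Security.Cryptography.HMACSHA256(saltBytes))
    {
        byte[] hash = hasher.ComputeHash(textBytes);
        // Two hex digits per byte, so 0x0a becomes "0a" instead of "a".
        return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
    }
}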
Everyone else keeps recommending things like BitConverter with the "-" trimmed off, or ToString("x2"). There is a better solution: a class that has been in .NET since 1.1, SoapHexBinary.
using System.Runtime.Remoting.Metadata.W3cXsd2001;
public byte[] StringToBytes(string value)
{
    SoapHexBinary soapHexBinary = SoapHexBinary.Parse(value);
    return soapHexBinary.Value;
}

public string BytesToString(byte[] value)
{
    SoapHexBinary soapHexBinary = new SoapHexBinary(value);
    return soapHexBinary.ToString();
}
This will produce the exact format you want.
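For example, a quick round trip with these helpers might look like this (note that SoapHexBinary emits uppercase hex, so the ToLowerInvariant() call is only needed if you want lowercase; the sample hex string is made up):

byte[] bytes = StringToBytes("B3626B28C57EA709");   // hex string -> bytes
string hex = BytesToString(bytes);                   // bytes -> "B3626B28C57EA709"
string lower = hex.ToLowerInvariant();               // lowercase, if required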
I believe the problem is here:
result = string.Join("", baHashedText.ToList().Select(b => b.ToString("x")).ToArray());
change it to:
result = string.Join("", baHashedText.ToList().Select(b => b.ToString("x2")).ToArray());
Hi, I'm trying to convert a string containing special characters like û and … into a hexadecimal representation.
In my research and tests I almost succeeded using the following function:
public static string ToHex(this string input)
{
    char[] values = input.ToCharArray();
    string hex = "0x";
    string add = "";
    foreach (char c in values)
    {
        int value = Convert.ToInt32(c);
        add = String.Format("{0:X}", value).Length == 1 ?
            "0" + String.Format("{0:X}", value) + "00"
            : String.Format("{0:X}", value) + "00";
        hex += add;
    }
    return hex;
}
If I try to convert ´o¸sçPQ^ûË\u000f±d, it works correctly and turns it into 0xB4006F00B8007300E700500051005E00FB00CB000F00B1006400,
but when I try to convert ´o¸sçPQ](ÂF\u0012…a, it fails and turns it into 0xB4006F00B8007300E700500051005D002800C200460012002026006100 instead of
0xB4006F00B8007300E700500051005D002800C2004600120026206100.
Doing a minimum of debugging, I saw that the string is transformed from
´o¸sçPQ](ÂF\u0012…a to ´o¸sçPQ](ÂF.a; I'm afraid that might be the problem, but I'm not sure.
EDIT
0xB4006F00B8007300E700500051005D002800C2004600120026206100 ´o¸sçPQ](ÂF…a CORRECT
0xB4006F00B8007300E700500051005D002800C200460012002026006100 ´o¸sçPQ](ÂF.a MY OUTPUT
0xB4006F00B8007300E700500051005D003D00CB0042000C00A50061006000AD004500BB00 ´o¸sçPQ]=ËB¥a`E» CORRECT
0xB4006F00B8007300E700500051005D003D00CB0042000C00A50061006000AD004500BB00 ´o¸sçPQ]=ËB¥a`E» MY OUTPUT
0xB4006F00B8007300E700500051005D002F00D30042001900B7006E006100 ´o¸sçPQ]/ÓB·na CORRECT
0xB4006F00B8007300E700500051005D002F00D30042001900B7006E006100 ´o¸sçPQ]/ÓB·na MY OUTPUT
0xB4006F00B8007300E700500051005F001A20BC006B0021003500DD00 ´o¸sçPQ_‚¼k!5Ý CORRECT
0xB4006F00B8007300E700500051005F00201A00BC006B0021003500DD00 ´o¸sçPQ_'¼k!5Ý MY OUTPUT
0xB4006F00B8007300E700500051005D002F00EE006B00290014204E004100 ´o¸sçPQ]/îk)—NA CORRECT
0xB4006F00B8007300E700500051005D002F00EE006B0029002014004E004100 ´o¸sçPQ]/îk)-NA MY OUTPUT
0xB4006F00B8007300E700500051005D003800E600690036001C204C004F00 ´o¸sçPQ]8æi6“LO CORRECT
0xB4006F00B8007300E700500051005D003800E60069003600201C004C004F00 ´o¸sçPQ]8æi6"LO MY OUTPUT
0xB4006F00B8007300E700500051005D002F00F3006200390014204E004700C602 ´o¸sçPQ]/ób9—NGˆ CORRECT
0xB4006F00B8007300E700500051005D002F00F300620039002014004E0047002C600 ´o¸sçPQ]/ób9-NG^ MY OUTPUT
0xB4006F00B8007300E700500051005D003B00EE007200330078014100 ´o¸sçPQ];îr3ŸA CORRECT
0xB4006F00B8007300E700500051005D003B00EE0072003300178004100 ´o¸sçPQ];îr3YA MY OUTPUT
0xB4006F00B8007300E700500051005D003000F20064003E009D004B00 ´o¸sçPQ]0òd>K CORRECT
0xB4006F00B8007300E700500051005D003000F20064003E009D004B00 ´o¸sçPQ]0òd>?K MY OUTPUT
0xB4006F00B8007300E700500051005D002F00E60075003E00 ´o¸sçPQ]/æu> CORRECT
0xB4006F00B8007300E700500051005D002F00E60075003E00 ´o¸sçPQ]/æu> MY OUTPUT
0xB4006F00B8007300E700500051005D002F00EE006A003000DC024500 ´o¸sçPQ]/îj0˜E CORRECT
0xB4006F00B8007300E700500051005D002F00EE006A0030002DC004500 ´o¸sçPQ]/îj0~E MY OUTPUT
Thanks in advance for any replies or comments.
This is due to endianness, and different integer and string encodings.
char cc = '…';

Console.WriteLine(cc);

Console.WriteLine(((int)cc).ToString("x"));
// 2026 <-- note, hex value differs from byte representation shown below

Console.WriteLine(BytesToHex(BitConverter.GetBytes((int)cc)));
// 26200000

Console.WriteLine(BytesToHex(Encoding.GetEncoding("utf-16").GetBytes(new[] { cc })));
// 2620
You should not treat chars as integers. There are plenty of different ways to encode strings; .NET internally uses UTF-16. All encodings work with bytes, not with integers, and explicitly converting chars to integers can lead to unexpected results, like yours. Why don't you get the encoding you need and work with bytes via Encoding.GetBytes?
void Main()
{
    // output you expect: 0xB4006F00B8007300E700500051005D002800C2004600120026206100
    Console.WriteLine(BytesToHex(Encoding.GetEncoding("utf-16").GetBytes("´o¸sçPQ](ÂF\u0012…a")));
}

public static string BytesToHex(byte[] bytes)
{
    // whatever way to convert bytes to hex
    return "0x" + BitConverter.ToString(bytes).Replace("-", "");
}
I want to create a method in C# that will accept my unique email or username and return a unique string, just like YouTube's video id (https://www.youtube.com/watch?v=_MSYfOYFF14). I can't simply use a GUID, because I want to generate a unique string for every user that stays the same for that user each time I call the method.
So is that possible somehow?
1) Use MD5 to get the byte array
2) Convert the byte array to a Base64 string
3) Remove the last two characters (the trailing "==" padding)
using System.Security.Cryptography;
//...
private string GenerateUniqueString(string input)
{
    using (MD5 md5 = MD5.Create())
    {
        byte[] hash = md5.ComputeHash(Encoding.Default.GetBytes(input));
        var res = Convert.ToBase64String(hash);
        return res.Substring(0, res.Length - 2);
    }
}
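Usage might look like this (the sample e-mail address is just an illustration):

// The same input always yields the same 22-character string.
string id1 = GenerateUniqueString("alice@example.com");
string id2 = GenerateUniqueString("alice@example.com");
Console.WriteLine(id1 == id2);   // True: the method is deterministic
Console.WriteLine(id1.Length);   // 22 (24 Base64 characters minus the "==" padding)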
If it is not going to be exposed to someone else, then it can be human readable. Given that the username or email is unique, a combination of the two values concatenated with some character that cannot appear in an email or username must also be unique. The result is then very simple:
var uniqueString = $"{uniqueName}|{uniqueEmail}";
Simple example of hashing together multiple strings.
public static string Hash(bool caseInsensitive, params string[] strs)
{
    using (var sha256 = SHA256.Create())
    {
        for (int i = 0; i < strs.Length; i++)
        {
            string str = caseInsensitive ? strs[i].ToUpperInvariant() : strs[i];
            byte[] bytes = Encoding.UTF8.GetBytes(str);
            byte[] length = BitConverter.GetBytes(bytes.Length);
            sha256.TransformBlock(length, 0, length.Length, length, 0);
            sha256.TransformBlock(bytes, 0, bytes.Length, bytes, 0);
        }

        sha256.TransformFinalBlock(new byte[0], 0, 0);
        var hash = sha256.Hash;
        return Convert.ToBase64String(hash);
    }
}
There is a caseInsensitive parameter, because foo@bar.com is equivalent to foo@BAR.COM (and quite often FOO@bar.com is equivalent to both of them). Note how I'm encoding the strings: before each string I prepend the length of the encoded string (in UTF-8). In this way "Hello", "World" is different from "Hello World", because one will be converted to something similar to "5Hello5World" while the other will be "11Hello World".
Usage:
string base64hash = Hash(true, "Donald Duck", "donaldduck@disney.com");
Note that thanks to the params keyword, the Hash method can accept any number of (string) parameters.
I need some sort of conversion/mapping like the one that is done, for example, by the CLCL clipboard manager.
What it does is like that:
I copy the following Unicode text: ūī
And CLCL converts it to: ui
Is there any technique to do such a conversion? Or maybe there are mapping tables that can be used, so that, let's say, the symbol ū is mapped to u.
UPDATE
Thanks to all for the help. Here is what I came up with (a hybrid of two solutions): one posted by Erik Schierboom and one taken from http://blogs.infosupport.com/normalizing-unicode-strings-in-c/#comment-8984
public static string ConvertUnicodeToAscii(string unicodeStr, bool skipNonConvertibleChars = false)
{
    if (string.IsNullOrWhiteSpace(unicodeStr))
    {
        return unicodeStr;
    }

    var normalizedStr = unicodeStr.Normalize(NormalizationForm.FormD);
    if (skipNonConvertibleChars)
    {
        return new string(normalizedStr.ToCharArray().Where(c => (int)c <= 127).ToArray());
    }

    return new string(
        normalizedStr.Where(
            c =>
            {
                UnicodeCategory category = CharUnicodeInfo.GetUnicodeCategory(c);
                return category != UnicodeCategory.NonSpacingMark;
            }).ToArray());
}
I have used the following code for some time:
private static string NormalizeDiacriticalCharacters(string value)
{
    if (value == null)
    {
        throw new ArgumentNullException("value");
    }

    var normalised = value.Normalize(NormalizationForm.FormD).ToCharArray();
    return new string(normalised.Where(c => (int)c <= 127).ToArray());
}
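As a quick check against the example from the question, this reproduces what CLCL does: FormD normalization splits each accented letter into a base letter plus a combining mark, and the filter then drops the marks.

// 'ū' decomposes to 'u' + U+0304 (combining macron); anything above 127 is dropped.
string result = NormalizeDiacriticalCharacters("ūī");
Console.WriteLine(result);   // ui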
In general, it is not possible to convert Unicode to ASCII because ASCII is a subset of Unicode.
That being said, it is possible to convert characters within the ASCII subset of Unicode to ASCII.
In C#, generally there's no need to do the conversion, since all strings are Unicode by default anyway, and all components are Unicode-aware, but if you must do the conversion, use the following:
string myString = "SomeString";
byte[] asciiString = System.Text.Encoding.ASCII.GetBytes(myString);
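Keep in mind that characters outside the ASCII range are replaced with '?' (0x3F) by this encoding. A small illustration (the input string here is just an example):

// 'é' has no ASCII representation, so Encoding.ASCII substitutes '?'.
byte[] bytes = System.Text.Encoding.ASCII.GetBytes("résumé");
Console.WriteLine(System.Text.Encoding.ASCII.GetString(bytes));   // r?sum?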
I need to convert a string into its binary equivalent and keep it in a string, then turn it back into its ASCII equivalent.
You can encode a string into a byte-wise representation by using an Encoding, e.g. UTF-8:
var str = "Out of cheese error";
var bytes = Encoding.UTF8.GetBytes(str);
To get back a .NET string object:
var strAgain = Encoding.UTF8.GetString(bytes);
// str == strAgain
You seem to want the representation as a series of '1' and '0' characters; I'm not sure why you do, but that's possible too:
var binStr = string.Join("", bytes.Select(b => Convert.ToString(b, 2).PadLeft(8, '0')));
Encodings take an abstract string (in the sense that they're an opaque representation of a series of Unicode code points), and map them into a concrete series of bytes. The bytes are meaningless (again, because they're opaque) without the encoding. But, with the encoding, they can be turned back into a string.
You seem to be mixing up "ASCII" with strings; ASCII is simply an encoding that deals only with code points below 128. If you have a string containing an 'é', for example, it has no ASCII representation, and so most definitely cannot be represented using a series of ASCII bytes, even though it can exist peacefully in a .NET string object.
See this article by Joel Spolsky for further reading.
You can use these functions to convert a string to binary and restore it back:

public static string BinaryToString(string data)
{
    List<Byte> byteList = new List<Byte>();
    for (int i = 0; i < data.Length; i += 8)
    {
        byteList.Add(Convert.ToByte(data.Substring(i, 8), 2));
    }
    return Encoding.ASCII.GetString(byteList.ToArray());
}

and for converting a string to binary:

public static string StringToBinary(string data)
{
    StringBuilder sb = new StringBuilder();
    foreach (char c in data.ToCharArray())
    {
        sb.Append(Convert.ToString(c, 2).PadLeft(8, '0'));
    }
    return sb.ToString();
}
Hope this helps you.
First convert the string into bytes, as described in my comment and in Cameron's answer; then iterate, convert each byte into an 8-digit binary number (possibly with Convert.ToString, padding appropriately), then concatenate. For the reverse direction, split by 8 characters, run through Convert.ToInt16, build up a byte array, then convert back to a string with GetString.
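A minimal sketch of that approach, assuming UTF-8 as the encoding (the helper names are made up for illustration):

using System;
using System.Linq;
using System.Text;

public static class BinaryStringHelpers
{
    // String -> "01001000..." : UTF-8 bytes, each padded to 8 binary digits.
    public static string ToBinaryString(string text)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(text);
        return string.Concat(bytes.Select(b => Convert.ToString(b, 2).PadLeft(8, '0')));
    }

    // "01001000..." -> string : split into 8-character chunks, parse each as a byte, decode as UTF-8.
    public static string FromBinaryString(string binary)
    {
        byte[] bytes = Enumerable.Range(0, binary.Length / 8)
            .Select(i => Convert.ToByte(binary.Substring(i * 8, 8), 2))
            .ToArray();
        return Encoding.UTF8.GetString(bytes);
    }
}

Because the bytes come from an encoding rather than from casting chars, this round-trips non-ASCII text as well.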
I am trying to read a String in UTF-16 encoding scheme and perform MD5 hashing on it. But strangely, Java and C# are returning different results when I try to do it.
The following is the piece of code in Java:
public static void main(String[] args) {
    String str = "preparar mantecado con coca cola";
    try {
        MessageDigest digest = MessageDigest.getInstance("MD5");
        digest.update(str.getBytes("UTF-16"));
        byte[] hash = digest.digest();
        String output = "";
        for (byte b : hash) {
            output += Integer.toString((b & 0xff) + 0x100, 16).substring(1);
        }
        System.out.println(output);
    } catch (Exception e) {
    }
}
The output for this is: 249ece65145dca34ed310445758e5504
The following is the piece of code in C#:
public static string GetMD5Hash()
{
    string input = "preparar mantecado con coca cola";
    System.Security.Cryptography.MD5CryptoServiceProvider x = new System.Security.Cryptography.MD5CryptoServiceProvider();
    byte[] bs = System.Text.Encoding.Unicode.GetBytes(input);
    bs = x.ComputeHash(bs);
    System.Text.StringBuilder s = new System.Text.StringBuilder();
    foreach (byte b in bs)
    {
        s.Append(b.ToString("x2").ToLower());
    }
    string output = s.ToString();
    Console.WriteLine(output);
    return output;
}
The output for this is: c04d0f518ba2555977fa1ed7f93ae2b3
I am not sure, why the outputs are not the same. How do we change the above piece of code, so that both of them return the same output?
UTF-16 != UTF-16.
In Java, getBytes("UTF-16") returns an a big-endian representation with optional byte-ordering mark. C#'s System.Text.Encoding.Unicode.GetBytes returns a little-endian representation. I can't check your code from here, but I think you'll need to specify the conversion precisely.
Try getBytes("UTF-16LE") in the Java version.
The first thing I can find, and this might not be the only problem, is that C#'s Encoding.Unicode.GetBytes() is little-endian, while Java's natural byte order is big-endian.
You could use System.Text.Encoding.Unicode.GetString(byte[]) to convert back from bytes to a string. That way you can be sure everything happens in the same Unicode encoding.