I have the following code to encode a password field, but it throws an error when the password is longer than ten characters.
private string base64Encode(string sData)
{
try
{
byte[] encData_byte = new byte[sData.Length];
encData_byte = System.Text.Encoding.UTF8.GetBytes(sData);
string encodedData = Convert.ToBase64String(encData_byte);
return encodedData;
}
catch (Exception ex)
{
throw new Exception("Error in base64Encode" + ex.Message);
}
}
This is the code to decode the encoded value
public string base64Decode(string sData)
{
System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding();
System.Text.Decoder utf8Decode = encoder.GetDecoder();
byte[] todecode_byte = Convert.FromBase64String(sData);
int charCount = utf8Decode.GetCharCount(todecode_byte, 0, todecode_byte.Length);
char[] decoded_char = new char[charCount];
utf8Decode.GetChars(todecode_byte, 0, todecode_byte.Length, decoded_char, 0);
string result = new String(decoded_char);
return result;
}
That code itself shouldn't be failing - but it's not actually providing any protection for the password. I'm not sure what kind of "encoding" you're really trying to do, but this is not the way to do it. Issues:
Even if this worked, it's terrible from a security point of view - this isn't encryption, hashing, or anything like it
You're allocating a new byte array for no good reason - why?
You're catching Exception, which is almost always a bad idea
Your method ignores .NET naming conventions
If you can explain to us what the bigger picture is, we may be able to suggest a much better approach.
My guess is that the exception you're seeing is actually coming when you call Convert.FromBase64String, i.e. in the equivalent decoding method, which you haven't shown us.
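For what it's worth, here's a minimal sketch of what the pair could look like once the extra allocation, the catch-all and the naming are cleaned up (the names Base64Encode/Base64Decode are just suggestions, and this still provides no protection for a password):
private static string Base64Encode(string text)
{
    // UTF-8 encode the text, then Base64 encode the resulting bytes.
    return Convert.ToBase64String(Encoding.UTF8.GetBytes(text));
}

private static string Base64Decode(string base64)
{
    // Reverse the steps: Base64 decode to bytes, then interpret those bytes as UTF-8 text.
    return Encoding.UTF8.GetString(Convert.FromBase64String(base64));
}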
I think you will need to modify your code.
These are two links which give more details:
Encrypt and decrypt a string
http://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacryptoserviceprovider.aspx
They are correct that this is not secure, but the question you asked was why the code is failing. A Base64 string usually takes up more space than the string it encodes: you are representing the same data with a smaller alphabet (64 symbols instead of 256 possible byte values), so the encoded form is longer. Since you are dimensioning the array based on the length of the original string, any time the Base64 string is longer than the original string you get an error. Instead of writing the conversion yourself, use the built-in converters.
System.Convert.ToBase64String()
System.Convert.FromBase64String()
But as I mentioned before, this is not secure, so only use this if you are trying to do something with a legacy system, and you need to preserve functionality for some reason.
Related
From a call to an external API, my method receives an image/png as an IRestResponse, where the Content property is a string representation of the image.
I need to convert this string representation of the image/png into a byte array without saving it to a file first and then using File.ReadAllBytes. How can I achieve this?
You can try a hex-string-to-byte conversion. Here is a method I've used before. Note that you may have to pad the string depending on how it arrives; the method will throw an error to let you know. However, if whoever sent the image converted it into bytes and then into a hex string (which they should have, based on what you are saying), then you won't have to worry about padding.
// Requires: using System.Globalization; (for NumberStyles and CultureInfo)
public static byte[] HexToByte(string hexString)
{
    // A valid hex string has two characters per byte, so its length must be even.
    if (hexString.Length % 2 != 0)
        throw new FormatException("Invalid HEX");
    byte[] retArray = new byte[hexString.Length / 2];
    for (int i = 0; i < retArray.Length; ++i)
    {
        // Parse each pair of hex digits into one byte.
        retArray[i] = byte.Parse(hexString.Substring(i * 2, 2), NumberStyles.HexNumber, CultureInfo.InvariantCulture);
    }
    return retArray;
}
This might not be the fastest solution, by the way, but it's a good representation of what needs to happen, so you can optimize later.
This also assumes the string being sent to you is a plain hex dump of the raw bytes. If the sender did anything like a Base58 conversion or something else, you will need to decode that first and then use this method.
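For example, assuming the response body really is a plain hex string (response here is a hypothetical IRestResponse variable from the question):
// Hypothetical usage; 'response' is the IRestResponse mentioned in the question.
byte[] imageBytes = HexToByte(response.Content);
// imageBytes now holds the raw PNG data, with no temporary file involved.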
I have found that the IRestResponse from RestSharp actually contains a 'RawBytes' property holding the raw bytes of the response content. This meets my needs, and no conversion is necessary!
I'm trying to convert a string to a byte[] using the ASCIIEncoding class in the .NET library. The string will never contain non-ASCII characters, but it will usually have a length greater than 16. My code looks like the following:
public static byte[] Encode(string packet)
{
ASCIIEncoding enc = new ASCIIEncoding();
byte[] byteArray = enc.GetBytes(packet);
return byteArray;
}
By the end of the method, the byte array should contain packet.Length bytes, but IntelliSense tells me that all bytes after byteArray[15] are literally question marks that cannot be observed. I used Wireshark to view byteArray after I sent it, and it was received fine on the other side, but the end device did not follow the instructions encoded in byteArray. I'm wondering if this has anything to do with IntelliSense not being able to display all elements in byteArray, or if my packet is completely wrong.
If your packet string basically contains characters in the range 0-255, then ASCIIEncoding is not what you should be using. ASCII only defines character codes 0-127; anything in the range 128-255 will get turned into question marks (as you have observed) because those characters are not defined in ASCII.
Consider using a method like this to convert the string to a byte array. (This assumes that the ordinal value of each character is in the range 0-255 and that the ordinal value is what you want.)
// Extension method: must be declared inside a non-nested static class.
public static byte[] ToOrdinalByteArray(this string str)
{
if (str == null) { throw new ArgumentNullException("str"); }
var bytes = new byte[str.Length];
for (int i = 0; i < str.Length; ++i) {
// Wrapping the cast in checked() will trigger an OverflowException
// if the character being converted is out of range for a byte.
bytes[i] = checked((byte)str[i]);
}
return bytes;
}
The Encoding class hierarchy is specifically designed for handling text. What you have here doesn't seem to be text, so you should avoid using these classes.
The standard encoders use the replacement character fallback strategy. If a character doesn't exist in the target character set, they encode a replacement character ('?' by default).
To me, that's worse than a silent failure; it's data corruption. I prefer that libraries tell me when my assumptions are wrong.
You can derive an encoder that throws an exception:
Encoding.GetEncoding(
"us-ascii",
new EncoderExceptionFallback(),
new DecoderExceptionFallback());
If you are truly using only characters in Unicode's ASCII range then you'll never see an exception.
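A quick sketch of how that could be used with the question's Encode method (names unchanged from the question; requires using System.Text;):
public static byte[] Encode(string packet)
{
    // Throws EncoderFallbackException instead of silently emitting '?'
    // if packet contains anything outside the ASCII range 0-127.
    Encoding strictAscii = Encoding.GetEncoding(
        "us-ascii",
        new EncoderExceptionFallback(),
        new DecoderExceptionFallback());
    return strictAscii.GetBytes(packet);
}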
I'm writing an application that needs to verify HMAC-SHA256 checksums. The code I currently have looks something like this:
static bool VerifyIntegrity(string secret, string checksum, string data)
{
// Verify HMAC-SHA256 Checksum
byte[] key = System.Text.Encoding.UTF8.GetBytes(secret);
byte[] value = System.Text.Encoding.UTF8.GetBytes(data);
byte[] checksum_bytes = System.Text.Encoding.UTF8.GetBytes(checksum);
using (var hmac = new HMACSHA256(key))
{
byte[] expected_bytes = hmac.ComputeHash(value);
return checksum_bytes.SequenceEqual(expected_bytes);
}
}
I know that this is susceptible to timing attacks.
Is there a message digest comparison function in the standard library? I realize I could write my own time hardened comparison method, but I have to believe that this is already implemented elsewhere.
EDIT: Original answer is below - still worth reading IMO, but regarding the timing attack...
The page you referenced gives some interesting points about compiler optimizations. Given that you know the two byte arrays will be the same length (assuming the size of the checksum isn't particularly secret, you can immediately return if the lengths are different) you might try something like this:
public static bool CompareArraysExhaustively(byte[] first, byte[] second)
{
if (first.Length != second.Length)
{
return false;
}
bool ret = true;
for (int i = 0; i < first.Length; i++)
{
ret = ret & (first[i] == second[i]);
}
return ret;
}
Now that still won't take the same amount of time for all inputs - if the two arrays are both in L1 cache for example, it's likely to be faster than if it has to be fetched from main memory. However, I suspect that is unlikely to cause a significant issue from a security standpoint.
Is this okay? Who knows. Different processors and different versions of the CLR may take different amounts of time for an & operation depending on the two operands. Basically this is the same as the conclusion of the page you referenced - that it's probably as good as we'll get in a portable way, but that it would require validation on every platform you try to run on.
At least the above code only uses relatively simple operations. I would personally avoid using LINQ operations here as there could be sneaky optimizations going on in some cases. I don't think there would be in this case - or they'd be easy to defeat - but you'd at least have to think about them. With the above code, there's at least a reasonably close relationship between the source code and IL - leaving "only" the JIT compiler and processor optimizations to worry about :)
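To tie it back to the question, here's a sketch of how that comparison might slot into VerifyIntegrity (assuming, as in the original answer below, that the checksum arrives Base64-encoded):
static bool VerifyIntegrity(string secret, string checksum, string data)
{
    byte[] key = Encoding.UTF8.GetBytes(secret);
    byte[] value = Encoding.UTF8.GetBytes(data);
    byte[] checksumBytes = Convert.FromBase64String(checksum);
    using (var hmac = new HMACSHA256(key))
    {
        byte[] expectedBytes = hmac.ComputeHash(value);
        // Compare without short-circuiting, as discussed above.
        return CompareArraysExhaustively(checksumBytes, expectedBytes);
    }
}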
Original answer
There's one significant problem with this: in order to provide the checksum, you have to have a string whose UTF-8 encoded form is the same as the checksum. There are plenty of byte sequences which simply don't represent UTF-8-encoded text. Basically, trying to encode arbitrary binary data as text using UTF-8 is a bad idea.
Base64, on the other hand, is basically designed for this:
static bool VerifyIntegrity(string secret, string checksum, string data)
{
// Verify HMAC-SHA256 Checksum
byte[] key = Encoding.UTF8.GetBytes(secret);
byte[] value = Encoding.UTF8.GetBytes(data);
byte[] checksumBytes = Convert.FromBase64String(checksum);
using (var hmac = new HMACSHA256(key))
{
byte[] expectedBytes = hmac.ComputeHash(value);
return checksumBytes.SequenceEqual(expectedBytes);
}
}
Alternatively, instead of using SequenceEqual on the byte array, you could Base64 encode the actual hash and see whether that matches:
static bool VerifyIntegrity(string secret, string checksum, string data)
{
// Verify HMAC-SHA256 Checksum
byte[] key = Encoding.UTF8.GetBytes(secret);
byte[] value = Encoding.UTF8.GetBytes(data);
using (var hmac = new HMACSHA256(key))
{
return checksum == Convert.ToBase64String(hmac.ComputeHash(value));
}
}
I don't know of anything better within the framework. It wouldn't be too hard to write a specialized SequenceEqual operator for arrays (or general ICollection<T> implementations) which checked for equal lengths first... but given that the hashes are short, I wouldn't worry about that.
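Such a helper might look roughly like this (just a sketch with an illustrative name; note that unlike the exhaustive comparison above it short-circuits, so it isn't appropriate where timing matters):
public static bool ArraysEqual<T>(T[] first, T[] second)
{
    // Treat two nulls as equal; otherwise a null never equals a non-null array.
    if (first == null || second == null) { return ReferenceEquals(first, second); }
    // Cheap length check up front, then an element-wise comparison.
    return first.Length == second.Length && first.SequenceEqual(second);
}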
If you're worried about the timing of the SequenceEqual, you could always replace it with something like this:
checksum_bytes.Zip( expected_bytes, (a,b) => a == b ).Aggregate( true, (a,r) => a && r );
This returns the same result as SequenceEqual for arrays of equal length (Zip stops at the end of the shorter sequence, so add a length check if the lengths can differ), but it always checks every element before giving an answer, so there is less chance of revealing anything through a timing attack.
How is it susceptible to timing attacks? Your code takes the same amount of time whether the digest is valid or invalid. And computing the digest and comparing it looks like the easiest way to check this.
I was wondering if someone could help me convert this method to Ruby. Is this possible at all?
public static string getSHA512(string str){
UnicodeEncoding UE = new UnicodeEncoding();
byte[] HashValue = null;
byte[] MessageBytes = UE.GetBytes(str);
System.Security.Cryptography.SHA512Managed SHhash = new System.Security.Cryptography.SHA512Managed();
string strHex = "";
HashValue = SHhash.ComputeHash(MessageBytes);
foreach (byte b in HashValue){
strHex += string.Format("{0:x2}", b);
}
return strHex;
}
Thanks in advance
UPDATE:
I just want to make it clear that, unfortunately, this method is not plain SHA512 generation but a custom one. I believe Digest::SHA512.hexdigest would correspond to just the SHhash part, but if you look at the method carefully you can see that it differs a bit from simple hash generation.
The results of both functions follow.
# in C#
getSHA512("hello") => "5165d592a6afe59f80d07436e35bd513b3055429916400a16c1adfa499c5a8ce03a370acdd4dc787d04350473bea71ea8345748578fc63ac91f8f95b6c140b93"
# in Ruby
Digest::SHA512.hexdigest("hello") || Digest::SHA2 => "9b71d224bd62f3785d96d46ad3ea3d73319bfbc2890caadae2dff72519673ca72323c3d99ba5c11d7c7acc6e14b8c5da0c4663475c2e5c3adef46f73bcdec043"
require 'digest/sha2'
class String
def sha512
Digest::SHA2.new(512).hexdigest(encode('UTF-16LE'))
end
end
'hello'.sha512 # => '5165d592a6afe59f80d07436e35bd…5748578fc63ac91f8f95b6c140b93'
As with all my code snippets on StackOverflow, I always assume the latest version of Ruby. Here's one that also works with Ruby 1.8:
require 'iconv'
require 'digest/sha2'
class String
def sha512(src_enc='UTF-8')
Digest::SHA2.new(512).hexdigest(Iconv.conv(src_enc, 'UTF-16LE', self))
end
end
'hello'.sha512 # => '5165d592a6afe59f80d07436e35bd…5748578fc63ac91f8f95b6c140b93'
Note that in this case, you have to know the encoding the string is in and tell Ruby about it explicitly. In Ruby 1.9, Ruby always knows what encoding a string is in and will convert it accordingly when required. I chose UTF-8 as the default encoding because it is backwards-compatible with ASCII, is the standard encoding on the internet, and is otherwise widely used. However, both .NET and Java, for example, use UTF-16LE, not UTF-8. If your string is not UTF-8 or ASCII-encoded, you will have to pass the encoding name into the sha512 method.
Off-Topic: 9 lines of code reduced to 1. I love Ruby!
Well, actually that is a little bit unfair. You could have written it something like this:
var messageBytes = new UnicodeEncoding().GetBytes(str);
var hashValue = new System.Security.Cryptography.SHA512Managed().
ComputeHash(messageBytes);
return hashValue.Aggregate<byte, string>("",
(s, b) => s += string.Format("{0:x2}", b)
);
Which is really only 3 lines (broken into 5 for StackOverflow's layout) and most importantly gets rid of that ugly 1950s-style explicit for loop for a nice 1960s-style fold (aka. reduce aka. inject aka. Aggregate aka. inject:into: … it's all the same).
There's probably an even more elegant way to write this, but a) I don't actually know C# and .NET and b) this question is about Ruby. Focus, Jörg, focus! :-)
Aaand … found it:
return string.Join("", from b in hashValue select string.Format("{0:x2}", b));
I knew there had to be an equivalent to Ruby's Enumerable#join somewhere, I just was looking in the wrong place.
Use the Digest::SHA2 class.
I've tried every example I can find on the web, but I cannot get my .NET code to produce the same MD5 hash results as my VB6 app.
The VB6 app produces identical results to this site:
http://www.functions-online.com/md5.html
But I cannot get the same results for the same input in C# (using either the MD5.ComputeHash method or the FormsAuthentication encryption method)
Please help!!!!
As requested, here is some code. This is pulled straight from MSDN:
public string hashString(string input)
{
// Create a new instance of the MD5CryptoServiceProvider object.
MD5 md5Hasher = MD5.Create();
// Convert the input string to a byte array and compute the hash.
byte[] data = md5Hasher.ComputeHash(Encoding.Default.GetBytes(input));
// Create a new Stringbuilder to collect the bytes
// and create a string.
StringBuilder sBuilder = new StringBuilder();
// Loop through each byte of the hashed data
// and format each one as a hexadecimal string.
for (int i = 0; i < data.Length; i++)
{
sBuilder.Append(data[i].ToString("x2"));
}
// Return the hexadecimal string.
return sBuilder.ToString();
}
My test string is:
QWERTY123TEST
The results from this code is:
8c31a947080131edeaf847eb7c6fcad5
The result from Test MD5 is:
f6ef5dc04609664c2875895d7da34eb9
Note: The result from the TestMD5 is what I am expecting
Note: I've been really, really stupid, sorry - just realised I had the wrong input. As soon as I hard-coded it, it worked. Thanks for the help
This is a C# MD5 method that I know works; I have used it to authenticate against various RESTful web APIs:
public static string GetMD5Hash(string input)
{
System.Security.Cryptography.MD5CryptoServiceProvider x = new System.Security.Cryptography.MD5CryptoServiceProvider();
byte[] bs = System.Text.Encoding.UTF8.GetBytes(input);
bs = x.ComputeHash(bs);
System.Text.StringBuilder s = new System.Text.StringBuilder();
foreach (byte b in bs)
{
s.Append(b.ToString("x2").ToLower());
}
return s.ToString();
}
What makes the "functions-online" site (http://www.functions-online.com/md5.html) an authority on MD5? For me, it works correctly only for ISO-8859-1 input. When I try pasting anything other than ISO-8859-1 into it, it returns the same MD5 hash. Try Cyrillic capital В by itself, code point 0x412. Or try the Han character for wind, code point 0x98A8.
As far as I know, the posted C# code is correct.
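To illustrate how much the chosen encoding matters, here's a small sketch (no hash values shown; the point is only that the bytes fed to MD5, and therefore the digest, depend on the encoding):
string input = "QWERTY123TEST";
byte[] utf8Bytes = Encoding.UTF8.GetBytes(input);
byte[] latin1Bytes = Encoding.GetEncoding("ISO-8859-1").GetBytes(input);

using (MD5 md5 = MD5.Create())
{
    // For pure-ASCII input like this the two byte arrays are identical, so the
    // digests match; add any character outside ASCII and they will generally differ.
    byte[] utf8Hash = md5.ComputeHash(utf8Bytes);
    byte[] latin1Hash = md5.ComputeHash(latin1Bytes);
}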