C# AES remove trailing null characters

I'm new to cryptography, so I wanted to try out AES. My problem is that, after decryption, the target buffer has trailing '\0' characters. I guess this happens because the data is encrypted in blocks and my cipher byte[] is bigger than the initial data. Is there a fancy way to handle this? My current solution looks like this:
using (MemoryStream ms = new MemoryStream(cipher))
using (CryptoStream cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
{
    byte[] cipherBuffer = new byte[cipher.Length];
    int bytesRead = cs.Read(cipherBuffer, 0, cipherBuffer.Length);
    // trim trailing '\0' bytes left over from block alignment
    int i = bytesRead - 1;
    while (i >= 0 && cipherBuffer[i] == 0)
        i--;
    byte[] targetBuffer = new byte[i + 1];
    Array.Copy(cipherBuffer, targetBuffer, i + 1);
    return targetBuffer;
}
Let's say I want to encrypt an image that actually ends with a '\0' byte; this code would then produce a wrong result.
I tried looking for a way to have this trailing-null removal handled out of the box, but I didn't find any other solution to this.
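For reference, a padding-aware pattern avoids the trimming entirely. This is a minimal sketch (the helper name DecryptExact is made up here), assuming the decryptor was created from an Aes instance with its default PKCS7 padding, so the CryptoStream yields exactly the original plaintext length:
using System.IO;
using System.Security.Cryptography;

static byte[] DecryptExact(byte[] cipher, ICryptoTransform decryptor)
{
    using (var ms = new MemoryStream(cipher))
    using (var cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
    using (var result = new MemoryStream())
    {
        cs.CopyTo(result);       // the transform strips the PKCS7 padding itself
        return result.ToArray(); // exact plaintext length, no trailing zeros
    }
}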

Related

Custom Newline in Binary Stream using Hex Array in WPF

I have a binary file I am reading and printing into a textbox while wrapping at a set point, but it is wrapping at places it shouldn't. I want to ignore all line-feed characters except those I have defined.
There isn't a single newline byte; rather, it seems to be a series of them. I think I found the series of hex values 00-01-01-0B that corresponds to where the line feeds should be.
How do I ignore existing line breaks, and use what I want instead?
This is where I am at:
shortFile = new FileStream(@"tempfile.dat", FileMode.Open, FileAccess.Read);
DisplayArea.Text = "";
byte[] block = new byte[1000];
shortFile.Position = 0;
int bytesRead;
while ((bytesRead = shortFile.Read(block, 0, 1000)) > 0)
{
    // only decode the bytes actually read; the last block may be short
    string trimmedText = System.Text.Encoding.Default.GetString(block, 0, bytesRead);
    DisplayArea.Text += trimmedText + "\n";
}
I had just figured it out a couple of minutes before dlatikay posted, but I really appreciated seeing that he also had the right idea. I just replaced all control characters with spaces, as shown below.
for (int i = 0; i < block.Length; i++)
{
    if (block[i] < 32)
    {
        block[i] = 0x20; // replace control bytes with spaces
    }
}
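For completeness, here is a sketch of inserting the custom breaks as well. It assumes the 00-01-01-0B marker from the question; the helper name FormatBlock is made up for illustration, and the byte-to-char cast assumes single-byte text, as with Encoding.Default above:
static string FormatBlock(byte[] block, int length)
{
    var sb = new System.Text.StringBuilder();
    for (int i = 0; i < length; i++)
    {
        // treat the assumed 4-byte marker as a line break
        if (i + 3 < length && block[i] == 0x00 && block[i + 1] == 0x01
            && block[i + 2] == 0x01 && block[i + 3] == 0x0B)
        {
            sb.Append('\n');
            i += 3; // skip the rest of the marker
        }
        else if (block[i] < 32)
        {
            sb.Append(' '); // blank out other control bytes
        }
        else
        {
            sb.Append((char)block[i]);
        }
    }
    return sb.ToString();
}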

C# BinaryReader ReadBytes(len) returns different results than Read(bytes, 0, len)

I've got a BinaryReader reading a number of bytes into an array. The underlying Stream for the reader is a BufferedStream (whose underlying stream is a network stream). I noticed that sometimes the reader.Read(arr, 0, len) method returns different (wrong) results than reader.ReadBytes(len).
Basically my setup code looks like this:
var httpClient = new HttpClient();
var reader = new BinaryReader(new BufferedStream(await httpClient.GetStreamAsync(url).ConfigureAwait(false)));
Later on down the line, I'm reading a byte array from the reader. I can confirm the sz variable is the same for both scenarios.
int sz = ReadSize(reader); // size of the array to read
if (bytes == null || bytes.Length <= sz)
{
    bytes = new byte[sz];
}
// reader.Read will return different results than reader.ReadBytes sometimes;
// everything else is the same up until this point
//var tempBytes = reader.ReadBytes(sz); <- this will return right results
reader.Read(bytes, 0, sz); // <- this will not return the right results sometimes
It seems like the reader.Read method is reading further into the stream than it needs to or something, because the rest of the parsing will break after this happens. Obviously I could stick with reader.ReadBytes, but I want to reuse the byte array to go easy on the GC here.
Would there ever be any reason that this would happen? Is a setting wrong or something?
Make sure you clear out the bytes array before calling this function, because Read(bytes, 0, len) does NOT clear the given byte array, and it may return fewer than len bytes (it returns the actual count read), so stale bytes from a previous read can remain at the end of the buffer. I also had this problem long ago in one of my parsers. Either set all elements to zero, check the return value and keep reading until you have len bytes, or make sure you only parse as many bytes as were actually read.
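A minimal sketch of the read-until-full approach (the helper name ReadExactly is made up here; ReadBytes does this looping internally, which is why it appears to behave correctly):
using System.IO;

static void ReadExactly(BinaryReader reader, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        // Read may return fewer bytes than requested on a network stream
        int read = reader.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException(); // stream ended early
        offset += read;
    }
}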

PHP - C# translated hashing function not working?

I'll get right to the question,
We have this block of C# code:
using (Rfc2898DeriveBytes pbkdf2 = new Rfc2898DeriveBytes(password, passwordSaltBytes, iterationCount))
{
    pbkdf2Bytes = pbkdf2.GetBytes(derivedLength + iterationCountBytes.Length);
}
This returns a byte array whose first byte has a value of 252.
We attempt the same thing in PHP:
$key = hash_pbkdf2("SHA1", $password, $password.$salt, $iterationCount, 48);
The first byte is 102...
The values all match before this specific part; it's just the hashing function that isn't giving me consistent results.
Any help is appreciated, cheers.
Edit: if it's not obvious, I'm trying to understand why those two values don't match, and what encoding/decoding I'm misunderstanding or doing incorrectly.
This is the full C# code. As you can see there are some unnecessary loops etc., but there were two reasons this wasn't working:
As somebody pointed out, PHP's hash_pbkdf2 does not output raw bytes by default, so the hash, and consequently its bytes, weren't identical to those of the C# script.
Previously, I thought (as others also pointed out) that I should pass in the $salt as-is, without any encoding or transformation. But looking closer at the actual C# code, we can see in the second for loop that it actually appends saltBytes onto passwordBytes, creating the equivalent of $password.$salt in PHP.
Combining the two fixes, passing $password.$salt as the salt and setting the $raw_output option to true outputs the same hash, the same bytes, as C# does.
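For reference, the corrected PHP call would look like this (the length argument is kept from the question; $raw_output is the final boolean parameter of hash_pbkdf2):
$key = hash_pbkdf2("SHA1", $password, $password.$salt, $iterationCount, 48, true);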
byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
byte[] saltBytes = Encoding.UTF8.GetBytes(salt);
byte[] iterationCountBytes = BitConverter.GetBytes(iterationCount);
int derivedLength = passwordBytes.Length + saltBytes.Length;
byte[] passwordSaltBytes = new byte[derivedLength];
byte[] pbkdf2Bytes;
string encryptedString;
// concatenate passwordBytes + saltBytes, the equivalent of $password.$salt in PHP
for (int i = 0; i < passwordBytes.Length; i++)
{
    passwordSaltBytes[i] = passwordBytes[i];
}
for (int i = 0; i < saltBytes.Length; i++)
{
    passwordSaltBytes[passwordBytes.Length + i] = saltBytes[i];
}
using (Rfc2898DeriveBytes pbkdf2 = new Rfc2898DeriveBytes(password, passwordSaltBytes, iterationCount))
{
    pbkdf2Bytes = pbkdf2.GetBytes(derivedLength + iterationCountBytes.Length);
}
Thanks.

Byte to value error

In C#, I needed a generator for random numbers below a given number, and I found one on Stack Overflow. Near the end, it converts the byte array into a BigInteger. I tried doing the same, though I am using the Deveel-Math lib, as it allows me to use BigDecimals. But when I try to change the array into a value, and that into a String, I keep getting a "Could not find any recognizable digits." error, and as of now I am stumped.
public static BigInteger RandomIntegerBelow1(BigInteger N)
{
    byte[] bytes = N.ToByteArray();
    BigInteger R;
    Random random = new Random();
    do
    {
        random.NextBytes(bytes);
        bytes[bytes.Length - 1] &= (byte)0x7F; // force sign bit to positive
        R = BigInteger.Parse(BytesToStringConverted(bytes));
        // the parameter needs a String value, e.g. BigInteger.Parse("100")
    } while (R >= N);
    return R;
}
static string BytesToStringConverted(byte[] bytes)
{
    using (var stream = new MemoryStream(bytes))
    {
        using (var streamReader = new StreamReader(stream))
        {
            return streamReader.ReadToEnd();
        }
    }
}
Deveel-Math
Wrong string conversion
You are converting your byte array to a string of characters based on UTF encoding. I'm pretty sure this is not what you want.
If you want to convert a byte array to a string that contains a number expressed in decimal, try this answer using BitConverter.
if (BitConverter.IsLittleEndian)
    Array.Reverse(array); // need the bytes in reverse order
int value = BitConverter.ToInt32(array, 0);
This is way easier
On the other hand, I notice that Deveel-Math's BigInteger has a constructor that takes a byte array as input (see line 226). So you should be able to greatly simplify your code by doing this:
R = new Deveel.Math.BigInteger(1, bytes);
However, since Deveel.Math appears to be big-endian, you may need to reverse the array first:
System.Array.Reverse(bytes);
R = new Deveel.Math.BigInteger(1, bytes);
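Putting this together, here is a sketch of the generator built on the (sign, magnitude) constructor. It assumes Deveel.Math.BigInteger also offers ToByteArray() and a Java-style CompareTo(), which I have not verified:
public static Deveel.Math.BigInteger RandomIntegerBelow(Deveel.Math.BigInteger N)
{
    byte[] bytes = N.ToByteArray(); // assumed big-endian, as in Java's BigInteger
    Random random = new Random();
    Deveel.Math.BigInteger R;
    do
    {
        random.NextBytes(bytes);
        // sign = 1 makes the value positive; bytes are the unsigned magnitude
        R = new Deveel.Math.BigInteger(1, bytes);
    } while (R.CompareTo(N) >= 0); // retry until the value falls below N
    return R;
}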

Convert c# crypto code to ruby

I'm currently trying to convert this C# code into Ruby, but I'm having difficulty with the hex conversion that is being used.
public static string Decrypt(string hexString, string key, string iv)
{
    var bytes = Enumerable.Range(0, hexString.Length)
        .Where(x => x % 2 == 0)
        .Select(x => Convert.ToByte(hexString.Substring(x, 2), 16))
        .ToArray();
    //===== AES provider
    var provider = new AesCryptoServiceProvider();
    provider.Mode = CipherMode.CBC;
    provider.Key = Encoding.UTF8.GetBytes(key);
    provider.IV = Encoding.UTF8.GetBytes(iv);
    var transform = provider.CreateDecryptor();
    using (var ms = new MemoryStream(bytes))
    {
        using (var cs = new CryptoStream(ms, transform, CryptoStreamMode.Read))
        {
            using (var sr = new StreamReader(cs))
            {
                cs.Flush();
                var plainText = sr.ReadToEnd();
                return plainText;
            }
        }
    }
}
Here is a fiddle of the working code: https://dotnetfiddle.net/JI8SID
With these inputs:
var iv = "8E394493F1E54545";
var key = "36D65EA1F6A849AF9964E0BAA98096B3";
var encrypted = "0A1D18A104A568FDE4770E0B816870C6";
I should be getting:
"testing"
My code is below, but I keep getting a "key length too short" error (OpenSSL::Cipher::CipherError). I'm guessing there's something wrong with my hex_to_bin conversion, but it is stumping me.
require 'openssl'

def hex_to_bin(str)
  str.scan(/../).map { |x| x.hex.chr }.join
end

def decrypt(data, hex_key, hex_iv)
  decipher = OpenSSL::Cipher::AES256.new(:CBC)
  decipher.decrypt
  decipher.key = hex_to_bin(hex_key)
  decipher.iv = hex_to_bin(hex_iv)
  (decipher.update(hex_to_bin(data)) + decipher.final)
end
iv = "8E394493F1E54545"
key = "36D65EA1F6A849AF9964E0BAA98096B3"
encrypted = "0A1D18A104A568FDE4770E0B816870C6"
puts decrypt(encrypted, key, iv)
Thank you in advance!
Use a key of exactly the length the algorithm specifies; for AES-256, make the key exactly 32 bytes long. Otherwise an implementation can do whatever it wants, from null-padding, to reading garbage bytes past the end of the key, to throwing an error.
In the code there is a 32-character hexadecimal key, but it is converted to a 16-byte binary key by the call hex_to_bin(hex_key).
In a similar manner, the 16-character hex IV is reduced to 8 bytes by the call hex_to_bin(hex_iv).
You really need to supply longer hex keys. Just eliminating the conversion calls gives a 32-byte key, matching the C# code (which uses the UTF-8 bytes of the hex string directly as the key), though it carries only 128 bits of entropy.
Your intuition is correct: the problem is the call to hex_to_bin on the key and IV. Here is a working decrypt routine which emits the string 'testing' when plugged into your sample code:
def decrypt(data, hex_key, hex_iv)
  decipher = OpenSSL::Cipher::AES256.new(:CBC)
  decipher.decrypt
  decipher.key = hex_key
  decipher.iv = hex_iv
  (decipher.update(hex_to_bin(data)) + decipher.final)
end
