Encrypting files using RC4 encryption algorithm in C#

My question is, how do I encrypt and decrypt a file in C# using the RC4 encryption algorithm?
This is not a duplicate of these questions:
What is a NullReferenceException, and how do I fix it?
RC4 Algorithm: Unable to Encrypt / Decrypt data where client uses Javascript and Server c#
RC4 128 bit encryption in C#
I do acknowledge that at first glance this question will appear to be a duplicate of the last question above; however, that one is around 7 months old and still has no answer with working code that solves the problem directly.
I have also referred to the links below, but none of them answers the question fully, or in fact, at all.
http://www.codeproject.com/Articles/5719/Simple-encrypting-and-decrypting-data-in-C
http://www.codeproject.com/Articles/5068/RC-Encryption-Algorithm-C-Version
I do know that the built-in System.Security.Cryptography library in Visual Studio 2013 supports RC2, but what I want to focus on right now is RC4, as part of a research project. I know RC4 is weak, but I'm still using it; no important data is going to be protected with this encryption.
Preferably the answer should include a code example that accepts a stream as input. I caused some confusion earlier by not describing my concerns properly: I am opting for a stream input because any other kind of input may slow down the processing of large files.
Specifications: .NET Framework 4.5, C#, WinForms.

Disclaimer: While this code works, it might not be correctly implemented and/or secure.
Here's an example of file encryption/decryption using the BouncyCastle's RC4Engine:
// NOTE: this uses BouncyCastle; the snippet assumes
// using Org.BouncyCastle.Crypto.Engines;
// using Org.BouncyCastle.Crypto.Parameters;

// Your encryption/decryption key as a byte array
var key = Encoding.UTF8.GetBytes("secretpassword");
var cipher = new RC4Engine();
var keyParam = new KeyParameter(key);
// for decrypting the file just switch the first param here to false
cipher.Init(true, keyParam);
using (var inputFile = new FileStream(@"C:\path\to\your\input.file", FileMode.Open, FileAccess.Read))
using (var outputFile = new FileStream(@"C:\path\to\your\output.file", FileMode.OpenOrCreate, FileAccess.Write))
{
    // process the file 4KB at a time
    byte[] buffer = new byte[1024 * 4];
    long totalBytesToRead = inputFile.Length;
    while (totalBytesToRead > 0)
    {
        // make sure that the enclosing method is marked as async
        int read = await inputFile.ReadAsync(buffer, 0, buffer.Length);
        // break the loop if we didn't read anything (EOF)
        if (read == 0)
        {
            break;
        }
        totalBytesToRead -= read;
        byte[] outBuffer = new byte[1024 * 4];
        cipher.ProcessBytes(buffer, 0, read, outBuffer, 0);
        await outputFile.WriteAsync(outBuffer, 0, read);
    }
}
The resulting file was tested using this website and it appears to be working as expected.

Related

How to shrink a string and be able to find the original later

I am working on an app that is still in beta, so I have set up a logging system. The log is too long to be used in a mailto URL, so I thought about shrinking the text and then recovering the original later.
Let's say I have a 50-line log; this should help me produce something like zef16z1e6f8 and then have a procedure that uses that token to recover all 50 lines of the log.
I would like to note that I don't need any fancy TripleDES encryption or anything like that.
First, I would suggest revisiting why you can't just mail the entire log content. Unless you have large logs (>5MB), I'd suggest just mailing the log. If you still want to pursue a shrinking strategy, there are two I'd consider.
If you want a simple reference string which can be used to lookup your log data at some later stage you can just associate some sort of identifier with the data (e.g. a GUID as suggested by Eugene). This has the benefit of having a constant length, irrespective of the log size.
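A minimal sketch of the lookup idea (the dictionary here is purely illustrative; any persistent store would do):
// Illustrative only: associate a constant-length token with the log text.
var logStore = new Dictionary<string, string>();
string id = Guid.NewGuid().ToString("N"); // constant-length token to put in the mailto URL
logStore[id] = longLogText; // persist however you like, then look the log up by id later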
Alternatively you could just compress the log; this will shrink the data somewhat (anything up to about 90%, as Dan mentioned). However, this has the downside of having a variable length, and for very large logs it may still exceed your size limitations. If you go this route, you could do something like this (not tested):
private string GetCompressedString()
{
    byte[] byteArray = Encoding.UTF8.GetBytes("Some long log string");
    using (var ms = new MemoryStream())
    {
        using (var gz = new GZipStream(ms, CompressionMode.Compress, true))
        {
            // note: write through the GZipStream, not the MemoryStream,
            // otherwise the data is never actually compressed
            gz.Write(byteArray, 0, byteArray.Length);
        }
        ms.Position = 0;
        var compressedBytes = new byte[ms.Length];
        ms.Read(compressedBytes, 0, compressedBytes.Length);
        return Convert.ToBase64String(compressedBytes);
    }
}
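To recover the original text later, a matching decompression sketch (assuming the same UTF-8 and Base64 conventions as above):
private string GetDecompressedString(string base64)
{
    byte[] compressedBytes = Convert.FromBase64String(base64);
    using (var input = new MemoryStream(compressedBytes))
    using (var gz = new GZipStream(input, CompressionMode.Decompress))
    using (var output = new MemoryStream())
    {
        gz.CopyTo(output);
        return Encoding.UTF8.GetString(output.ToArray());
    }
}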

Serialize an Object to a byte[], then encrypting the byte[]

I came across a case where I need to encrypt an object in order to send it for delivery over a .NET Remoting connection. I have already implemented string encryption in our code, so I used a similar design for encrypting the object:
public static byte[] Encrypt(object obj)
{
    byte[] bytes = ToByteArray(obj); // code below
    using (SymmetricAlgorithm algo = SymmetricAlgorithm.Create())
    {
        using (System.IO.MemoryStream ms = new System.IO.MemoryStream())
        {
            byte[] key = Encoding.ASCII.GetBytes(KEY);
            byte[] iv = Encoding.ASCII.GetBytes(IV);
            using (CryptoStream cs = new CryptoStream(ms, algo.CreateEncryptor(key, iv), CryptoStreamMode.Write))
            {
                cs.Write(bytes, 0, bytes.Length);
                cs.Close();
                return ms.ToArray();
            }
        }
    }
}
private static byte[] ToByteArray(object obj)
{
    byte[] bytes = null;
    if (obj != null)
    {
        using (System.IO.MemoryStream ms = new System.IO.MemoryStream())
        {
            BinaryFormatter bf = new BinaryFormatter();
            bf.Serialize(ms, obj);
            bytes = ms.ToArray();
        }
    }
    return bytes;
}
Is there anything I need to watch out for by serializing and encrypting the object in this way?
What exactly is the aim here? If you are only interested in the security of the data over-the-wire, then since you are using .NET remoting which supports transparent encryption, why not use that, rather than go to the trouble of implementing it yourself?
If you do insist on performing your own encryption, then it's worth noting that you appear to be using a constant IV, which is incorrect. The IV should be a random byte array, and different for each encrypted message. This is generally prepended to the encrypted message before transmission for use in the decryption process.
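A minimal sketch of that pattern (EncryptWithRandomIv is an illustrative name, not part of the original code; it assumes System.Security.Cryptography):
public static byte[] EncryptWithRandomIv(byte[] plaintext, byte[] key)
{
    using (var aes = Aes.Create())
    {
        aes.Key = key;
        aes.GenerateIV(); // a fresh random IV for every message
        using (var ms = new MemoryStream())
        {
            ms.Write(aes.IV, 0, aes.IV.Length); // prepend the IV for the receiver
            using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plaintext, 0, plaintext.Length);
            }
            return ms.ToArray(); // MemoryStream.ToArray is valid even after the stream is closed
        }
    }
}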
As a further note, since your key and IV are both converted from strings using Encoding.ASCII.GetBytes, which is intended for 7-bit ASCII input, you are reducing the effective key space significantly.
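A common alternative is to run the passphrase through a key-derivation function instead; a sketch using PBKDF2 (saltBytes and the iteration count here are illustrative):
// Rfc2898DeriveBytes (PBKDF2) turns a passphrase into full-entropy key material.
using (var kdf = new Rfc2898DeriveBytes(KEY, saltBytes, 10000))
{
    byte[] key = kdf.GetBytes(32); // 256-bit key
}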
The only thing I can think of, is that generally after encrypting the data you should write over the plain-text array with NULLs. This is so that the plain-text isn't recoverable in memory, and if it is written out to disk in a page file or swap, it won't be recoverable by a malicious program/user. If the sensitivity of the data is not a concern though, this may not be necessary, but it is a good habit to get into.
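In .NET that can be as simple as clearing the buffer right after encryption (a sketch; note it does not protect against copies the garbage collector may already have made):
// overwrite the plain-text buffer once the ciphertext has been produced
Array.Clear(bytes, 0, bytes.Length);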
EDIT:
That being said however, never roll your own if you don't have to (which you rarely ever do). Chances are very good you'll screw it up and make it vulnerable to attack unless you really know what you're doing. If you don't understand or don't like the built-in APIs, the gents over at Bouncy Castle have done outstanding work at creating libraries for you to use, and in many different languages.

AES Decryption - Porting code from C# to Java

I am trying to port the following code from C# into Java. I have made multiple attempts to try and decrypt my encrypted data and I get gibberish every time. The code below uses the org.bouncycastle library and unfortunately there doesn't seem to be a 1-1 mapping between the C# code and the Java code.
I basically know three things:
byte[] file - This contains my encrypted file. Usually a pretty large array of bytes.
byte[] padding - It is 32 bytes every time, and it seems that the first 16 bytes of this are used as the IV.
byte[] aesKey - It is 32 bytes every time, and I do not know how exactly the C# code is using this array.
Original C# Code
private byte[] decryptmessage(byte[] cmessage, byte[] iVector, byte[] m_Key)
{
    // randomly generated number acts as initialization vector
    byte[] m_IV = new byte[16];
    Array.Copy(iVector, 0, m_IV, 0, 16);
    KeyParameter aesKeyParam = ParameterUtilities.CreateKeyParameter("AES", m_Key);
    ParametersWithIV aesIVKeyParam = new ParametersWithIV(aesKeyParam, m_IV);
    IBufferedCipher cipher = CipherUtilities.GetCipher("AES/CFB/NoPadding");
    cipher.Init(false, aesIVKeyParam);
    return cipher.DoFinal(cmessage);
}
My attempt in Java
private static byte[] decryptMessage(byte[] file, byte[] iVector, byte[] aesKey) throws Exception {
    IvParameterSpec spec = new IvParameterSpec(Arrays.copyOfRange(iVector, 0, 16));
    SecretKeySpec key = new SecretKeySpec(Arrays.copyOfRange(aesKey, 0, 16), "AES");
    Cipher cipher = Cipher.getInstance("AES/CFB/NoPadding");
    cipher.init(Cipher.DECRYPT_MODE, key, spec);
    return cipher.doFinal(file);
}
P.S.: This is the final step of decryption. Before all this, I had to take an initial set of bytes out of my encrypted file and decrypt them using an RSA private key to get this AES key.
If someone has a link or document I can read that properly explains the whole process of using AES to encrypt a file and then using RSA on the key and IV at the beginning of the encrypted file, I will be extremely happy. I have just been staring at the C# code; I'd like to see something with pictures.
EDIT: Bytes, not bits.
EDIT2: Renamed padding to iVector for consistency and correctness.
In the C# code, you initialize the key with 256 bits (32 bytes) and thus get AES-256. In the Java code, you only use 128 bits (16 bytes) and get AES-128.
So the fix is probably:
SecretKeySpec key = new SecretKeySpec(aesKey, "AES");
You might then find that Java doesn't want to use 256-bit keys (for legal reasons). You then have to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 6.

Speeding up Encryption

So this is how I am doing encryption right now:
public static byte[] Encrypt(byte[] Data, string Password, string Salt)
{
    char[] converter = Salt.ToCharArray();
    byte[] salt = new byte[converter.Length];
    for (int i = 0; i < converter.Length; i++)
    {
        salt[i] = (byte)converter[i];
    }
    PasswordDeriveBytes pdb = new PasswordDeriveBytes(Password, salt);
    MemoryStream ms = new MemoryStream();
    Aes aes = new AesManaged();
    aes.Key = pdb.GetBytes(aes.KeySize / 8);
    aes.IV = pdb.GetBytes(aes.BlockSize / 8);
    CryptoStream cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write);
    cs.Write(Data, 0, Data.Length);
    cs.Close();
    return ms.ToArray();
}
I am using this algorithm on data streaming over a network. The problem is that it is a bit slow for what I am trying to do, so I was wondering if anyone has a better way of doing it? I am no expert on encryption; this method was pieced together from different sources and I am not entirely sure how it works.
I have clocked it at about 0.5-1.5 ms per call and I need to get it down to about 0.1 ms. Any ideas?
I'm pretty sure that performance is the least of your problems here.
Is the salt re-used for each packet? If so, you're using a strong cypher in a weak fashion. You're starting each packet with the cypher in exactly the same state. This is a security flaw. Someone skilled in cryptography would be able to crack your encryption after only a couple thousand packets.
I'm assuming you're sending a stream of packets to the same receiver. In that case, your use of AES will be much stronger if you keep the Aes object around and re-use it. That will make your use of the cypher much, much stronger and speed things up greatly.
As to the performance question, most of your time is being spent initializing the cypher. If you don't re-initialize it every time, you'll speed up quite a lot.
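A sketch of that idea (the class and names are illustrative, and Rfc2898DeriveBytes stands in for PasswordDeriveBytes): derive the key once, keep the Aes instance alive, and only generate a fresh IV per packet.
public class PacketEncryptor
{
    private readonly Aes _aes;

    public PacketEncryptor(string password, byte[] salt)
    {
        // the expensive key derivation happens once, not per packet
        var pdb = new Rfc2898DeriveBytes(password, salt);
        _aes = Aes.Create();
        _aes.Key = pdb.GetBytes(_aes.KeySize / 8);
    }

    public byte[] Encrypt(byte[] data)
    {
        _aes.GenerateIV(); // fresh IV per packet, prepended so the receiver can decrypt
        using (var ms = new MemoryStream())
        {
            ms.Write(_aes.IV, 0, _aes.IV.Length);
            using (var cs = new CryptoStream(ms, _aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(data, 0, data.Length);
            }
            return ms.ToArray();
        }
    }
}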
aes.KeySize>>3 would be faster than aes.KeySize / 8.

ZIP file created with SharpZipLib cannot be opened on Mac OS X

Argh, today is the day of stupid problems and me being an idiot.
I have an application which creates a zip file containing some JPEGs from a certain directory. I use this code in order to:
read all files from the directory
append each of them to a ZIP file
using (var outStream = new FileStream("Out2.zip", FileMode.Create))
{
using (var zipStream = new ZipOutputStream(outStream))
{
foreach (string pathname in pathnames)
{
byte[] buffer = File.ReadAllBytes(pathname);
ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
entry.DateTime = now;
zipStream.PutNextEntry(entry);
zipStream.Write(buffer, 0, buffer.Length);
}
}
}
All works well under Windows: when I open the file, e.g. with WinRAR, the files are extracted. But as soon as I try to unzip my archive on Mac OS X, it only creates a .cpgz file. Pretty useless.
A normal .zip file created manually from the same files on Windows is extracted without any problems on both Windows and Mac OS X.
I found the above code on the Internet, so I am not absolutely sure the whole thing is correct. I wonder if it is necessary to use zipStream.Write() in order to write directly to the stream?
I got the exact same problem today. I tried implementing the CRC stuff as proposed, but it didn't help.
I finally found the solution on this page: http://community.sharpdevelop.net/forums/p/7957/23476.aspx#23476
As a result, I just had to add this line to my code:
oZIPStream.UseZip64 = UseZip64.Off;
And the file opens up as it should on Mac OS X :-)
Cheers
fred
I don't know for sure, because I am not very familiar with either SharpZipLib or OSX, but I still might have some useful insight for you.
I've spent some time wading through the zip spec, and actually I wrote DotNetZip, which is a zip library for .NET, unrelated to SharpZipLib.
Currently on the user forums for DotNetZip, there's a discussion going on about zip files generated by DotNetZip that cannot be read on OSX. One of the people using the library is having a problem that seems similar to what you are seeing, except that I have no idea what a .cpgz file is.
We tracked it down, a little. At this point the most promising theory is that OSX does not like "bit 3" in the "general purpose bitfield" in the header of each zip entry.
Bit 3 is not new. PKWare added bit 3 to the spec 17 years ago. It was intended to support streaming generation of archives, in the way that SharpZipLib works. DotNetZip also has a way to produce a zipfile as it is streamed out, and it will also set bit-3 in the zip file if used in this way, although normally DotNetZip will produce a zipfile with bit-3 unset in it.
From what we can tell, when bit 3 is set, the OSX zip reader (whatever it is - like I said, I'm not familiar with OSX) chokes on the zip file. The same zip contents produced without bit 3 allow the zip file to be opened. Actually it's not as simple as just flipping one bit - the presence of the bit signals the presence of other metadata. So I am using "bit 3" as a shorthand for all that.
So the theory is that bit 3 causes the problem. I haven't tested this myself. There's been some impedance mismatch on the communication with the person who has the OSX machine - so it is unresolved as yet.
But, if this theory holds, it would explain your situation: that WinRar and any Windows machine can open the file, but OSX cannot.
On the DotNetZip forums, we had a discussion about what to do about the problem. As near as I can tell, the OSX zip reader is broken, and cannot handle bit 3, so the workaround is to produce a zip file with bit 3 unset. I don't know if SharpZipLib can be convinced to do that.
I do know that if you use DotNetZip, and use the normal ZipFile class, and save to a seekable stream (like a filesystem file), you will get a zip that does not have bit 3 set. If the theory is correct, it should open with no problem on the Mac, every time. This is the result the DotNetZip user has reported. It's just one result so not generalizable yet, but it looks plausible.
example code for your scenario:
using (ZipFile zip = new ZipFile())
{
    zip.AddFiles(pathnames);
    zip.Save("Out2.zip");
}
Just for the curious, in DotNetZip you will get bit 3 set if you use the ZipFile class and save it to a nonseekable stream (like ASPNET's Response.OutputStream) or if you use the ZipOutputStream class in DotNetZip, which always writes forward only (no seeking back).
I think SharpZipLib's ZipOutputStream is also always "forward only."
So, I searched for a few more examples of how to use SharpZipLib, and I finally got it to work on Windows and OS X. Basically, I added the "Crc32" of each file to the zip archive. No idea what this is, though.
Here is the code that worked for me:
using (var outStream = new FileStream("Out3.zip", FileMode.Create))
{
using (var zipStream = new ZipOutputStream(outStream))
{
Crc32 crc = new Crc32();
foreach (string pathname in pathnames)
{
byte[] buffer = File.ReadAllBytes(pathname);
ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
entry.DateTime = now;
entry.Size = buffer.Length;
crc.Reset();
crc.Update(buffer);
entry.Crc = crc.Value;
zipStream.PutNextEntry(entry);
zipStream.Write(buffer, 0, buffer.Length);
}
zipStream.Finish();
// I dont think this is required at all
zipStream.Flush();
zipStream.Close();
}
}
Explanation from Cheeso:
CRC is Cyclic Redundancy Check - it's a checksum on the entry data. Normally the header for each entry in a zip file contains a bunch of metadata, including some things that cannot be known until all the entry data has been streamed - CRC, Uncompressed size, and compressed size. When generating a zipfile through a streamed output, the zip spec allows setting a bit (bit 3) to specify that these three data fields will immediately follow the entry data.
If you use ZipOutputStream, normally as you write the entry data, it is compressed and a CRC is calculated, and the 3 data fields are written immediately after the file data.
What you've done is streamed the data twice - the first time implicitly as you calculate the CRC on the file before writing it. If my theory is correct, what is happening is this: when you provide the CRC to the zipStream before writing the file data, this allows the CRC to appear in its normal place in the entry header, which keeps OSX happy. I'm not sure what happens to the other two quantities (compressed and uncompressed size).
I had exactly the same problem; my mistake was (and in your example code as well) that I didn't supply the file length for each entry.
Example code:
...
ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
entry.DateTime = now;
var fileInfo = new FileInfo(pathname);
entry.Size = fileInfo.Length;
...
I was separating the folder names with a backslash... when I changed this to a forward slash it worked!
What's going on with the .cpgz file is that Archive Utility is being launched by a file with a .zip extension. Archive Utility examines the file and thinks it isn't compressed, so it's compressing it. For some bizarre reason, .cpgz (CPIO archiving + gzip compression) is the default. You can set a different default in Archive Utility's Preferences.
If you do indeed discover this is a problem with OS X's zip decoder, please file a bug. You can also try using the ditto command-line tool to unpack it; you may get a better error message. Of course, OS X also ships unzip, the Info-ZIP utility, but I'd expect that to work.
I agree with Cheeso's answer; however, if the input file size is greater than 2GB, then byte[] buffer = File.ReadAllBytes(pathname); will throw an IO exception.
So I modified Cheeso's code and it works like a charm for all files:
long maxDataToBuffer = 104857600; // 100MB
DateTime now = DateTime.Now;
using (var outStream = new FileStream("Out3.zip", FileMode.Create))
{
    using (var zipStream = new ZipOutputStream(outStream))
    {
        Crc32 crc = new Crc32();
        foreach (string pathname in pathnames)
        {
            long tempBuffLength = maxDataToBuffer;
            using (FileStream fs = System.IO.File.OpenRead(pathname))
            {
                ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
                entry.DateTime = now;
                entry.Size = fs.Length;
                crc.Reset();
                long totalBuffLength = 0;
                if (fs.Length <= tempBuffLength) tempBuffLength = fs.Length;
                byte[] buffer;
                // first pass: compute the CRC in chunks
                while (totalBuffLength < fs.Length)
                {
                    if ((fs.Length - totalBuffLength) <= tempBuffLength)
                        tempBuffLength = (fs.Length - totalBuffLength);
                    totalBuffLength += tempBuffLength;
                    buffer = new byte[tempBuffLength];
                    fs.Read(buffer, 0, buffer.Length);
                    crc.Update(buffer, 0, buffer.Length);
                }
                entry.Crc = crc.Value;
                zipStream.PutNextEntry(entry);
                // second pass: rewind and copy the data into the archive in chunks
                fs.Seek(0, SeekOrigin.Begin);
                tempBuffLength = maxDataToBuffer;
                totalBuffLength = 0;
                if (fs.Length <= tempBuffLength) tempBuffLength = fs.Length;
                while (totalBuffLength < fs.Length)
                {
                    if ((fs.Length - totalBuffLength) <= tempBuffLength)
                        tempBuffLength = (fs.Length - totalBuffLength);
                    totalBuffLength += tempBuffLength;
                    buffer = new byte[tempBuffLength];
                    fs.Read(buffer, 0, buffer.Length);
                    zipStream.Write(buffer, 0, buffer.Length);
                }
            }
        }
        zipStream.Finish();
        // I don't think this is required at all
        zipStream.Flush();
        zipStream.Close();
    }
}
I had a similar problem, but on Windows 7. I updated to the latest version of ICSharpZipLib as of this writing, 0.86.0.518. From then on, I could no longer decompress any ZIP archives created with the code that had been working so far.
The error messages were different depending on the tool I tried to extract with:
Unknown compression method.
Compressed size in local header does not match that of central directory header in new zip file.
What did the trick was to remove the CRC calculation as mentioned here: http://community.sharpdevelop.net/forums/t/8630.aspx
So I removed this line:
entry.Crc = crc.Value;
From then on, I could again unzip the ZIP archives with any third-party tool. I hope this helps someone.
There are two things:
Ensure your underlying output stream is seekable, or SharpZipLib won't be able to back up and fill in any ZipEntry fields that you omitted (size, crc, compressed size, ...). As a result, SharpZipLib will force "bit 3" to be enabled. The background has been explained pretty well in previous answers.
Fill in ZipEntry.Size, or explicitly set stream.UseZip64 = UseZip64.Off. The default is to conservatively assume the stream could be very large. Unzipping then requires "pk 4.5" support.
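A minimal sketch combining both points (illustrative; pathnames is assumed to be your list of files):
using (var outStream = new FileStream("Out.zip", FileMode.Create)) // a filesystem stream is seekable
using (var zipStream = new ZipOutputStream(outStream))
{
    zipStream.UseZip64 = UseZip64.Off; // point 2; alternatively rely on entry.Size below
    foreach (string pathname in pathnames)
    {
        byte[] buffer = File.ReadAllBytes(pathname);
        var entry = new ZipEntry(Path.GetFileName(pathname))
        {
            DateTime = DateTime.Now,
            Size = buffer.Length // point 2: a known size lets the header be written up front
        };
        zipStream.PutNextEntry(entry);
        zipStream.Write(buffer, 0, buffer.Length);
    }
}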
I encountered weird behavior when the archive is empty (no entries inside it): it cannot be opened on Mac OS X either and only generates a .cpgz file. The workaround was to put a dummy .txt file in the archive whenever there are no files to compress.
