I am trying to port a working solution for signing PDFs from a standard C# class library to a portable class library (or a Windows Store 8.1 app). iTextSharp version: 5.5.3.0.
The logic is as follows: I create a signature appearance in iTextSharp, hash it (SHA-256, a third-party requirement), and send the hash to a webservice, which returns the signed content.
As mentioned, the solution works fine in, e.g., ASP.NET web applications, but all attempts to implement it in the WinRT environment fail: the signature is applied, but it is invalid. The PDF reader reports: "Document has been altered or corrupted since the Signature was applied".
After analyzing the code differences, the only difference that seems relevant in this case is the hashing part. In the standard C# class library I had solved it as follows, and got valid signatures as a result:
PdfSignatureAppearance sap = stp.SignatureAppearance; // stp is the PdfStamper
// some appearance properties are filled here...
PdfSignature dic = new PdfSignature(PdfName.ADOBE_PPKLITE, PdfName.ADBE_PKCS7_DETACHED);
dic.Date = new PdfDate(sap.SignDate);
dic.Reason = sap.Reason;
dic.Location = sap.Location;
sap.CryptoDictionary = dic;
// csize is the space reserved for the signature; the /Contents value is
// hex-encoded (hence csize * 2) plus 2 for the '<' and '>' delimiters
Dictionary<PdfName, int> exc = new Dictionary<PdfName, int>();
exc.Add(PdfName.CONTENTS, csize * 2 + 2);
sap.PreClose(exc);
// hash the byte ranges covered by the signature
HashAlgorithm sha = new SHA256CryptoServiceProvider();
var sapStream = sap.GetRangeStream();
int read = 0;
byte[] buff = new byte[8192];
while ((read = sapStream.Read(buff, 0, 8192)) > 0)
{
    sha.TransformBlock(buff, 0, read, buff, 0);
}
sha.TransformFinalBlock(buff, 0, 0);
// the digest is now available in sha.Hash;
// here I am sending the hash to the third party webservice,
// obtaining the 'signed' response
byte[] outc = new byte[csize];
PdfDictionary dic2 = new PdfDictionary();
Array.Copy(response, 0, outc, 0, response.Length);
dic2.Put(PdfName.CONTENTS, new PdfString(outc).SetHexWriting(true));
sap.Close(dic2);
Since the libraries available in WinRT are partly different, I implemented the hashing using a different class:
var sapStream = sap.GetRangeStream();
HashAlgorithmProvider alg = Windows.Security.Cryptography.Core.HashAlgorithmProvider.OpenAlgorithm(HashAlgorithmNames.Sha256);
var hasher = alg.CreateHash();
int read = 0;
byte[] buff = new byte[8192];
while ((read = await sapStream.ReadAsync(buff, 0, 8192)) > 0)
{
    hasher.Append(buff.AsBuffer());
}
String hashText = CryptographicBuffer.EncodeToBase64String(hasher.GetValueAndReset());
Then I send hashText to the webservice, obtain the response, and put it into the file in the same manner as above, but the signature is invalid.
What am I missing?
The issue in the WinRT version is that it ignores the read value in the hashing loop:
int read = 0;
byte[] buff = new byte[8192];
while ((read = await sapStream.ReadAsync(buff, 0, 8192)) > 0)
{
    hasher.Append(buff.AsBuffer());
}
In particular, the last block will normally not fill the buffer buff completely, so the final hasher.Append call hashes the last block plus trailing garbage bytes, which falsifies the result.
Only the first read bytes of buff may be hashed.
The OP eventually solved it like this:
while ((read = await sapStream.ReadAsync(buff, 0, 8192)) > 0)
{
    // copy only the `read` bytes that were actually filled, then hash that copy
    byte[] newArr = new byte[read];
    Array.Copy(buff, newArr, read);
    hasher.Append(newArr.AsBuffer());
}
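As an aside (not part of the original answer), the per-iteration copy can likely be avoided with the AsBuffer overload that wraps only a slice of the array:

while ((read = await sapStream.ReadAsync(buff, 0, 8192)) > 0)
{
    // wrap only the first `read` bytes instead of copying them
    hasher.Append(buff.AsBuffer(0, read));
}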
I am trying to sign a PDF through an external webservice. The webservice receives a SHA-256 hex-encoded hash and, once it has been signed, returns a hex-encoded signature.
At the moment I am stuck trying to use the iText 7 (C#) library to sign the document with the signature values. I did manage to sign a PDF document with iText using a local self-signed certificate, but not with the external container yet; Adobe Acrobat always reports problems with the BER decoding, which makes me believe something is going wrong either with the parsing on my side or with the signature values.
The signing steps with code samples are as follows:
First add a blank signature, sign the document, and then generate a hash (SHA-256).
public static void CalculateHash(Stream fileStream, out byte[] docBytesHash, out byte[] preSignedBytes)
{
    PdfName filter = PdfName.Adobe_PPKLite;
    PdfName subFilter = PdfName.Adbe_pkcs7_detached;
    int estimatedSize = 8192;

    PdfReader reader = new PdfReader(fileStream);
    MemoryStream baos = new MemoryStream();
    PdfSigner signer = new PdfSigner(reader, baos, new StampingProperties());
    signer.SetCertificationLevel(PdfSigner.CERTIFIED_NO_CHANGES_ALLOWED);
    PdfSignatureAppearance appearance = signer.GetSignatureAppearance();
    appearance.SetLayer2Text("Signature field which signing is deferred.")
        .SetPageRect(new Rectangle(36, 600, 200, 100))
        .SetPageNumber(1);
    signer.SetFieldName("DeferredSignature1");

    // custom container that hashes the ranged stream and leaves the signature blank
    DigestCalcBlankSigner external = new DigestCalcBlankSigner(filter, subFilter);
    //IExternalSignatureContainer external = new ExternalBlankSignatureContainer(filter, subFilter);
    signer.SignExternalContainer(external, estimatedSize);

    docBytesHash = external.GetDocBytesHash();
    preSignedBytes = baos.ToArray();
}
The hash is then converted to a hex string and sent to the webservice:
public static string ByteArrayToString(byte[] array)
{
    StringBuilder stringBuilder = new StringBuilder();
    foreach (byte b in array)
        stringBuilder.AppendFormat("{0:x2}", b);
    return stringBuilder.ToString();
}
The webservice replies with the signature data. An example of the returned string (512 hex characters, i.e. 256 bytes, which looks like a raw 2048-bit RSA signature value rather than a complete PKCS#7 container):
"473e8e376ca067f3c806902f718be21bf8a788ddbd31786b14fc47678596d6993a4f1ecb80e091f93af4820a75d97aee4b1a15c4a7914b4f881ca86e5d06b429b176d5b663c986c9ce2824333c98e0b5def0af53178b9ce38aa4efaa0adce2eee409487fb7fecf58e4c5bfcc3a0d083e35a83f9c722c73b78784e9990b6f00b89ae4934714c92b34699ce00ad5a662d0058bd613021449e9d09ab2d25376230de75591ab6ce4b5c5d24216794e8c871a690b4e19011621d41c66f4b0048abc9f2d4449072ee9e70c30dcf9b8b5a1ea8ee3a285163c2c5b293a3798a4a13ca59e83c66d9148d519b55e13643a3a7e0794732b92f50c1424f7be5774f67e910076"
The string is then converted back to a byte array with the following function:
public static byte[] StringToByteArray(String hex)
{
    int NumberChars = hex.Length;
    byte[] bytes = new byte[NumberChars / 2];
    for (int i = 0; i < NumberChars; i += 2)
        bytes[i / 2] = Convert.ToByte(hex.Substring(i, 2), 16);
    return bytes;
}
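As a quick sanity check (illustrative snippet only, using the two helpers above), the conversions should round-trip:

byte[] original = { 0x47, 0x3e, 0x8e };
string hex = ByteArrayToString(original);     // "473e8e"
byte[] roundTripped = StringToByteArray(hex); // { 0x47, 0x3e, 0x8e } again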
The signature container is then constructed from the signature byte array, and the document is signed using that container together with the pre-signed document bytes:
using (System.IO.FileStream outStream = new System.IO.FileStream(System.IO.Path.GetTempPath() + "tempout.pdf", System.IO.FileMode.Create))
{
    PdfName filter = PdfName.Adobe_PPKLite;
    PdfName subFilter = PdfName.Adbe_pkcs7_detached;

    MemoryStream baos = new MemoryStream();
    fileStream.CopyTo(baos); //temp
    byte[] preSignedBytes = baos.ToArray();

    // now use the external provider's signature
    ReadySignatureSigner extSigContainer = new ReadySignatureSigner(StringHelper.StringToByteArray(signature.data.attributes.signature));

    PdfDocument docToSign = new PdfDocument(new PdfReader(new MemoryStream(preSignedBytes)));
    PdfSigner.SignDeferred(docToSign, "DeferredSignature1", outStream, extSigContainer);
    docToSign.Close();
}
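The custom classes DigestCalcBlankSigner and ReadySignatureSigner are not shown in the question. Going by iText's deferred-signing samples, they are presumably IExternalSignatureContainer implementations along these lines (a minimal sketch, not the OP's actual code):

using System.IO;
using iText.Kernel.Pdf;
using iText.Signatures;

// Blank container: computes the SHA-256 digest of the byte ranges to be signed
// and leaves the signature contents empty for later deferred signing.
class DigestCalcBlankSigner : IExternalSignatureContainer
{
    private readonly PdfName filter;
    private readonly PdfName subFilter;
    private byte[] docBytesHash;

    public DigestCalcBlankSigner(PdfName filter, PdfName subFilter)
    {
        this.filter = filter;
        this.subFilter = subFilter;
    }

    public byte[] GetDocBytesHash()
    {
        return docBytesHash;
    }

    public byte[] Sign(Stream docBytes)
    {
        // hash the ranged stream; the reserved /Contents stays empty for now
        docBytesHash = System.Security.Cryptography.SHA256.Create().ComputeHash(docBytes);
        return new byte[0];
    }

    public void ModifySigningDictionary(PdfDictionary signDic)
    {
        signDic.Put(PdfName.Filter, filter);
        signDic.Put(PdfName.SubFilter, subFilter);
    }
}

// Ready container: injects an already produced CMS signature into the
// placeholder reserved by the blank container.
class ReadySignatureSigner : IExternalSignatureContainer
{
    private readonly byte[] cmsSignatureContents;

    public ReadySignatureSigner(byte[] cmsSignatureContents)
    {
        this.cmsSignatureContents = cmsSignatureContents;
    }

    public byte[] Sign(Stream docBytes)
    {
        return cmsSignatureContents;
    }

    public void ModifySigningDictionary(PdfDictionary signDic)
    {
    }
}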
I also added the original PDF, the unsigned PDF and the signed PDF for reference in the link below. I hope someone can have a look at the contents and maybe tell me what is going wrong here. I have already spent many hours looking for solutions online, but have found nothing so far.
pdf's
Thanks to mkl's answer I rewrote the code to build up the PKCS#7 container as follows. First create the hash and the pre-signed bytes and send them to the signing service:
Org.BouncyCastle.X509.X509Certificate[] signChain = new Org.BouncyCastle.X509.X509Certificate[1];
signChain[0] = Org.BouncyCastle.Security.DotNetUtilities.FromX509Certificate(signingcert);

PdfName filter = PdfName.Adobe_PPKLite;
PdfName subFilter = PdfName.Adbe_pkcs7_detached;
int estimatedSize = 8192;

PdfReader reader = new PdfReader(memoryStream);
MemoryStream baos = new MemoryStream();
PdfSigner signer = new PdfSigner(reader, baos, new StampingProperties());
signer.SetCertificationLevel(PdfSigner.CERTIFIED_NO_CHANGES_ALLOWED);
PdfSignatureAppearance appearance = signer.GetSignatureAppearance();
appearance.SetLayer2Text("Signature field which signing is deferred.")
    .SetPageRect(new Rectangle(36, 600, 200, 100))
    .SetPageNumber(1);
signer.SetFieldName("DeferredSignature1");

DigestCalcBlankSigner external = new DigestCalcBlankSigner(filter, subFilter);
signer.SignExternalContainer(external, estimatedSize);

var signatureContainer = new PdfPKCS7(null, signChain, HASH_ALGORITHM, true);
preSignedBytes = baos.ToArray();
sh = signatureContainer.GetAuthenticatedAttributeBytes(preSignedBytes, null, null, CryptoStandard.CMS);
I then use the returned signature to create the new PKCS#7 container and perform the deferred signing as follows (the hash mentioned is the byte array sh from the previous section):
var signatureContainer = new PdfPKCS7(null, signChain, HASH_ALGORITHM, true);
signatureContainer.SetExternalDigest(StringHelper.StringToByteArray(signature.data.attributes.signature), null, "RSA");
byte[] encodedSignature = signatureContainer.GetEncodedPKCS7(hash, null, null, null, CryptoStandard.CMS);

// now use the external provider's signature
ReadySignatureSigner extSigContainer = new ReadySignatureSigner(encodedSignature);

memoryStream.Position = 0;
PdfDocument docToSign = new PdfDocument(new PdfReader(memoryStream));
MemoryStream memoryOutStream = new MemoryStream();
PdfSigner.SignDeferred(docToSign, "DeferredSignature1", memoryOutStream, extSigContainer);
At the moment the certificate is visible in the PDF, but I still get the error that the document has been altered. I am a bit stuck; any suggestions? I am also curious whether the use of a private key is required here (I have not yet received a reply from the provider on whether it is necessary).
Does anyone have any insights? The webservice used is from Digidentity.
I'm getting different values when I attempt to hash the same password/salt combination in Node and .NET. (Yes, I know SHA-1 is dangerous; I'm trying to change that too.)
C#
byte[] unencodedBytes = Encoding.Unicode.GetBytes(password);
byte[] saltBytes = Convert.FromBase64String(salt);
byte[] buffer = new byte[unencodedBytes.Length + saltBytes.Length];
Buffer.BlockCopy(unencodedBytes, 0, buffer, 0, unencodedBytes.Length);
Buffer.BlockCopy(saltBytes, 0, buffer, unencodedBytes.Length - 1, saltBytes.Length);
byte[] hash = HashAlgorithm.Create("SHA1").ComputeHash(buffer);
//This is what I need
string hashedString = Convert.ToBase64String(hash);
Here's my JS
var buffer = [];
var unicodePassword = new Buffer(password, 'utf16le');
for (var i = 0; i < unicodePassword.length; ++i) {
    buffer.push(unicodePassword[i]);
}
var salt = new Buffer(userEntry.PasswordSalt, 'base64');
for (var i = 0; i < salt.length; i++) {
    buffer.push(salt[i]);
}
var bufferString = new Buffer(buffer);
//This is what I need
var hashedString = crypto.createHash('sha1').update(bufferString).digest('base64');
I know that I'm getting the exact same byte array in both implementations when I send it off to be hashed. It looks like this code is doing the exact same thing, but the value of hashedString is not the same. Any ideas what's going on?
The default Encoding.Unicode emits a byte order mark.
What if you create the encoding without a BOM in the C# code?
new UnicodeEncoding(false, false)
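A minimal sketch of that suggestion (password as in the question; only the construction of the encoding changes):

using System.Text;

// UnicodeEncoding(bigEndian: false, byteOrderMark: false) is little-endian
// UTF-16 that reports no BOM preamble, matching Node's 'utf16le'
var utf16NoBom = new UnicodeEncoding(false, false);
byte[] unencodedBytes = utf16NoBom.GetBytes(password);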
I need to decode an .xz file created with 7-Zip. The compression algorithm is LZMA2. How should I use the 7-Zip SDK for C# to solve this problem?
I already tried the following code, but it didn't work:
var coder = new SevenZip.Compression.LZMA.Decoder();
var input = new MemoryStream(); // filled with the byte array of the .xz file
var output = new MemoryStream();

// Read the decoder properties
byte[] properties = new byte[5];
input.Read(properties, 0, 5);

// Read in the decompressed file size.
byte[] fileLengthBytes = new byte[8];
input.Read(fileLengthBytes, 0, 8);
long fileLength = BitConverter.ToInt64(fileLengthBytes, 0);

coder.SetDecoderProperties(properties); // this throws an exception
coder.Code(input, output, input.Length, fileLength, null);
output.Flush();
output.Close();
I marked the line that causes the exception.
I'm trying to decompress an Android adb backup file using the Deflate algorithm. I've tried both DotNetZip's Ionic Zlib and Microsoft's built-in System.IO.Compression introduced in .NET 4.5, but both produce a corrupted archive. Both outputs have exactly the same file size, but the hashes don't match between the corrupt and the good archives.
I'm using the following code to decompress.
byte[] app = File.ReadAllBytes(tb_keyOutDir.Text + "\\app_stripped.ab");
MemoryStream ms = new MemoryStream(app);
//skip first two bytes to avoid invalid block length error
ms.Seek(2, SeekOrigin.Begin);
DeflateStream deflate = new DeflateStream(ms, CompressionMode.Decompress);
string dec = new StreamReader(deflate, Encoding.ASCII).ReadToEnd();
File.WriteAllText(tb_keyOutDir.Text + "\\app.tar", dec);
I can decompress it via Cygwin with OpenSSL and it decompresses properly, so I know my files aren't corrupted or anything:
cat app_stripped.ab | openssl zlib -d > app.tar
Use the Ionic library, and try this method to decompress. Note that it keeps everything as raw bytes instead of reading the binary output through an ASCII StreamReader as in the question:
public static byte[] Decompress(byte[] gzip) {
    using (var stream = new Ionic.Zlib.ZlibStream(new MemoryStream(gzip), Ionic.Zlib.CompressionMode.Decompress)) {
        const int size = 1024;
        byte[] buffer = new byte[size];
        using (MemoryStream memory = new MemoryStream()) {
            int count = 0;
            do {
                count = stream.Read(buffer, 0, size);
                if (count > 0) {
                    memory.Write(buffer, 0, count);
                }
            }
            while (count > 0);
            return memory.ToArray();
        }
    }
}
and call it like this:
byte[] app = Decompress(File.ReadAllBytes(tb_keyOutDir.Text + "\\app_stripped.ab"));
File.WriteAllBytes(tb_keyOutDir.Text + "\\app.tar", app);
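As an aside (not part of the original answer): on .NET 6 and later, System.IO.Compression.ZLibStream understands the zlib wrapper directly, so neither the two-byte header skip nor a third-party library would be needed. A sketch, assuming the same paths as above:

using System.IO;
using System.IO.Compression;

using (FileStream input = File.OpenRead(tb_keyOutDir.Text + "\\app_stripped.ab"))
using (ZLibStream zlib = new ZLibStream(input, CompressionMode.Decompress))
using (FileStream output = File.Create(tb_keyOutDir.Text + "\\app.tar"))
{
    // stream the decompressed tar straight to disk
    zlib.CopyTo(output);
}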
The original compressed data can be inflated back correctly. However, if I inflate the data, deflate it, and inflate it again, the resulting data are incorrect (the scenario is simple data extraction, modification, and recompression; in this test no modification actually occurs, so I can compare the results).
The resulting data are somehow "damaged": the first (about) 40 bytes are OK, and then a block of incorrect data follows (remnants of the original data are still there, but many bytes are missing).
Changing the compression level doesn't help (except that setting NO_COMPRESSION creates a somehow incomplete stream).
The question is simple: why is this happening?
using ICSharpCode.SharpZipLib.Zip.Compression;

public byte[] Inflate(byte[] inputData)
{
    Inflater inflater = new Inflater(false);
    using (var inputStream = new MemoryStream(inputData))
    using (var ms = new MemoryStream())
    {
        var inputBuffer = new byte[4096];
        var outputBuffer = new byte[4096];
        while (inputStream.Position < inputData.Length)
        {
            var read = inputStream.Read(inputBuffer, 0, inputBuffer.Length);
            inflater.SetInput(inputBuffer, 0, read);
            while (inflater.IsNeedingInput == false)
            {
                var written = inflater.Inflate(outputBuffer, 0, outputBuffer.Length);
                if (written == 0)
                    break;
                ms.Write(outputBuffer, 0, written);
            }
            if (inflater.IsFinished == true)
                break;
        }
        inflater.Reset();
        return ms.ToArray();
    }
}

public byte[] Deflate(byte[] inputData)
{
    Deflater deflater = new Deflater(Deflater.BEST_SPEED, false);
    deflater.SetInput(inputData);
    deflater.Finish();
    using (var ms = new MemoryStream())
    {
        var outputBuffer = new byte[65536 * 4];
        while (deflater.IsNeedingInput == false)
        {
            var read = deflater.Deflate(outputBuffer);
            ms.Write(outputBuffer, 0, read);
            if (deflater.IsFinished == true)
                break;
        }
        deflater.Reset();
        return ms.ToArray();
    }
}
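For reference, a round-trip through these two methods (illustrative driver code, not from the question) should reproduce the input byte for byte once the input data are intact:

byte[] original = System.Text.Encoding.ASCII.GetBytes("hello, deflate round-trip");
byte[] compressed = Deflate(original);
byte[] restored = Inflate(compressed);
// restored should now equal original byte for byte
bool ok = System.Linq.Enumerable.SequenceEqual(original, restored);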
Edit: My bad; by mistake I had overwritten the first several bytes of the original compressed data. This isn't SharpZipLib's fault, but mine.
I know this is a tangential answer, but the exact same thing happened to me; I abandoned SharpZipLib and went to DotNetZip:
http://dotnetzip.codeplex.com/
Easier API, and no corrupt or strange byte-order files.