Copying part of a byte[] array into a PdfReader - C#

This is a continuation of the ongoing struggle to reduce my memory load, mentioned in
How do you refill a byte array using SqlDataReader?
So I have a byte array of a set size; for this example, I'll say new byte[400000]. Inside this array, I'll be placing PDFs of different sizes (each less than 400,000 bytes).
Pseudocode would be:
public void Run()
{
    byte[] fileRetrievedFromDatabase = new byte[400000];
    foreach (var document in documentArray)
    {
        // Refill the buffer with the next document's data from the database
        int currentDocumentSize = PopulateFileWithPDFDataFromDatabase(fileRetrievedFromDatabase);
        // Take() trims the trailing bytes, but allocates a new array on every iteration
        var reader = new iTextSharp.text.pdf.PdfReader(fileRetrievedFromDatabase.Take(currentDocumentSize).ToArray());
        int pageCount = reader.NumberOfPages;
        // DO ADDITIONAL WORK
    }
}
private int PopulateFileWithPDFDataFromDatabase(byte[] fileRetrievedFromDatabase)
{
    // Data access code goes here (logoCMD is the command for the BLOB query)
    int documentSize = 0;
    int bufferSize = 100;                   // Size of the BLOB buffer.
    byte[] outbyte = new byte[bufferSize];  // The BLOB byte[] buffer to be filled by GetBytes.
    long startIndex = 0;
    long retval = 0;
    var myReader = logoCMD.ExecuteReader(CommandBehavior.SequentialAccess);
    Array.Clear(fileRetrievedFromDatabase, 0, fileRetrievedFromDatabase.Length);
    if (myReader == null)
    {
        return 0;
    }
    while (myReader.Read())
    {
        // Passing a null buffer makes GetBytes return the total length of the BLOB
        documentSize = (int)myReader.GetBytes(0, 0, null, 0, 0);
        // Reset the starting byte for the new BLOB.
        startIndex = 0;
        // Read the bytes into outbyte[] and retain the number of bytes returned.
        retval = myReader.GetBytes(0, startIndex, outbyte, 0, bufferSize);
        // Continue reading and writing while there are bytes beyond the size of the buffer.
        while (retval == bufferSize)
        {
            Array.Copy(outbyte, 0, fileRetrievedFromDatabase, startIndex, retval);
            // Reposition the start index to the end of the last buffer and fill the buffer.
            startIndex += retval;
            retval = myReader.GetBytes(0, startIndex, outbyte, 0, bufferSize);
        }
    }
    return documentSize;
}
The problem with the above code is that I keep getting a "Rebuild trailer not found. Original Error: PDF startxref not found" error when I try to access the PdfReader. I believe it's because the byte array is too long and has trailing zeros. But since I'm reusing the byte array so that I'm not continuously building new objects on the LOH, I need to do it this way.
So how do I get just the piece of the array that I need and send it to the PdfReader?
Updated
So I looked at the source and realized I had some variables from my actual code that were confusing. I'm basically reusing the fileRetrievedFromDatabase object in each iteration of the loop. Since it's passed by reference, it gets cleared (set to all zeros) and then refilled in PopulateFileWithPDFDataFromDatabase. This object is then used to create a new PdfReader.
If I didn't do it this way, a new large byte array would be created in every iteration, the Large Object Heap would fill up, and eventually an OutOfMemoryException would be thrown.

You have at least two options:
1. Treat your buffer like a circular buffer, with two indexes for the starting and ending positions. You need an index of the last byte written in outbyte, and you have to stop reading when you reach that index.
2. Simply read the same number of bytes as you have in your data array, to avoid reading into the "unknown" parts of the buffer which don't belong to the same file. In other words, instead of passing bufferSize as the last parameter, pass data.Length.
// Read the bytes into outbyte[] and retain the number of bytes returned.
retval = myReader.GetBytes(0, startIndex, outbyte, 0, data.Length);
If the data length is 10 and your outbyte buffer is 15, then you should only read data.Length bytes, not bufferSize.
However, I still don't see how you're reusing the outbyte "buffer", if that's what you're doing... I'm simply not following based on what you've provided in your answer. Maybe you can clarify exactly what is being reused.
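If the goal is to avoid the per-iteration allocation from Take().ToArray(), a third option is to hand PdfReader a stream view over the reused buffer. This is a minimal sketch, assuming the PdfReader(Stream) overload that iTextSharp provides alongside the byte[] one; currentDocumentSize is the valid length returned by PopulateFileWithPDFDataFromDatabase:
// A MemoryStream over a segment of the existing array copies nothing,
// so no new large object lands on the LOH per iteration.
using (var segment = new MemoryStream(fileRetrievedFromDatabase, 0, currentDocumentSize, writable: false))
{
    var reader = new iTextSharp.text.pdf.PdfReader(segment);
    int pageCount = reader.NumberOfPages;
    // DO ADDITIONAL WORK
}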

Apparently, the way the while loop is currently structured, it wasn't copying the data on its last iteration. I needed to add this:
if (outbyte != null && outbyte.Length > 0 && retval > 0)
{
    Array.Copy(outbyte, 0, currentDocument.Data, startIndex, retval);
}
It's now working, although I will definitely need to refactor.
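For reference, a minimal sketch of the corrected read loop, using the variable names from the question; the copy after the inner loop is the fix, handling the final chunk that comes back smaller than bufferSize:
startIndex = 0;
retval = myReader.GetBytes(0, startIndex, outbyte, 0, bufferSize);
while (retval == bufferSize)
{
    Array.Copy(outbyte, 0, fileRetrievedFromDatabase, startIndex, retval);
    startIndex += retval;
    retval = myReader.GetBytes(0, startIndex, outbyte, 0, bufferSize);
}
// The last GetBytes call returns fewer bytes than bufferSize,
// so the final partial chunk has to be copied explicitly.
if (retval > 0)
{
    Array.Copy(outbyte, 0, fileRetrievedFromDatabase, startIndex, retval);
}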

Related

Read mainframe file and parse data using .net

I have a file which is very long, and has no line breaks, CR or LF or other delimiters.
Records are fixed length, and the first control record length is 24 and all other record lengths are of fixed length 81 bytes.
I know how to read a fixed-length file on a per-line basis, and I am using the Multi Record Engine and have defined classes for each 81-byte record, but I can't figure out how I can read 80 characters at a time and then parse that string for the actual fields.
You can use a FileStream to read exactly the number of bytes you need, in your case either 24 or 81. Keep in mind that the stream position advances as you read, so the offset argument (which is an offset into the buffer, not into the stream) should stay 0 to start at the current position. Also be aware that if there is not enough data left on the stream, Read will return fewer bytes than requested and leave part of the buffer unfilled; the length check in the Read method below guards against that.
So you would end up with something like this:
var recordlength = 81;
var buffer = new byte[recordlength];
stream.Read(buffer, 0, recordlength); // offset = 0, start at current position
var record = System.Text.Encoding.UTF8.GetString(buffer); // single record
Since the record length is different for the control record, you could extract that logic into a single method, let's name it Read, and use that method to traverse the stream until you reach the end, like this:
public List<string> Records()
{
    var result = new List<string>();
    using (var stream = new FileStream(@"c:\temp\lipsum.txt", FileMode.Open))
    {
        // First record: the 24-byte control record
        result.Add(Read(stream, 24));
        var record = "";
        do
        {
            record = Read(stream);
            if (!string.IsNullOrEmpty(record)) result.Add(record);
        }
        while (record.Length > 0);
    }
    return result;
}
private string Read(FileStream stream, int length = 81)
{
    // Not enough bytes left for a full record
    if (stream.Length < stream.Position + length) return "";
    var buffer = new byte[length];
    stream.Read(buffer, 0, length);
    return System.Text.Encoding.UTF8.GetString(buffer);
}
This will give you a list of records (including the starting control record).
This is far from perfect, but it's an example; also keep in mind that even if the file is empty, there will always be one (empty) entry in the returned list.
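To get from a record string to the actual fields (the part the question asks about), each fixed-width record can be sliced with Substring. A sketch, with entirely hypothetical field positions, since the real layout isn't given:
// Hypothetical layout: field offsets and widths are examples only.
foreach (var record in Records())
{
    if (record.Length < 81) continue;  // skip the 24-byte control record
    var accountId = record.Substring(0, 10).Trim();
    var name = record.Substring(10, 30).Trim();
    var amount = decimal.Parse(record.Substring(40, 12).Trim());
    // ... map the remaining columns the same way
}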

C# BinaryReader ReadBytes(len) returns different results than Read(bytes, 0, len)

I've got a BinaryReader reading a number of bytes into an array. The underlying Stream for the reader is a BufferedStream (whose underlying stream is a network stream). I noticed that sometimes the reader.Read(arr, 0, len) method returns different (wrong) results than reader.ReadBytes(len).
Basically my setup code looks like this:
var httpClient = new HttpClient();
var reader = new BinaryReader(new BufferedStream(await httpClient.GetStreamAsync(url).ConfigureAwait(false)));
Later on down the line, I'm reading a byte array from the reader. I can confirm the sz variable is the same for both scenarios.
int sz = ReadSize(reader); // size of the array to read
if (bytes == null || bytes.Length <= sz)
{
    bytes = new byte[sz];
}
// reader.Read will return different results than reader.ReadBytes sometimes;
// everything else is the same up until this point.
//var tempBytes = reader.ReadBytes(sz); // <- this will return the right results
reader.Read(bytes, 0, sz); // <- this will not return the right results sometimes
It seems like the reader.Read method is reading further into the stream than it needs to or something, because the rest of the parsing will break after this happens. Obviously I could stick with reader.ReadBytes, but I want to reuse the byte array to go easy on the GC here.
Would there ever be any reason that this would happen? Is a setting wrong or something?
Make sure you clear out the bytes array before calling this function, because Read(bytes, 0, len) does NOT clear the given byte array, so leftover bytes from a previous read can mix with the new data. I also had this problem long ago in one of my parsers. Note also that Read returns the number of bytes actually read, which can be less than len, whereas ReadBytes(len) keeps reading until it has len bytes or hits the end of the stream. So either set all elements to zero, or make sure you only parse up to the number of bytes Read actually returned.
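A sketch of a fill-the-buffer loop that makes Read behave like ReadBytes while still reusing the array; the helper name is mine, not part of BinaryReader:
// Hypothetical helper: keep reading until 'count' bytes arrive or the stream ends.
static int ReadExactly(BinaryReader reader, byte[] buffer, int count)
{
    int total = 0;
    while (total < count)
    {
        int n = reader.Read(buffer, total, count - total);
        if (n == 0) break; // end of stream
        total += n;
    }
    return total;
}
Calling int got = ReadExactly(reader, bytes, sz); and checking got == sz gives the same guarantee as ReadBytes(sz) without allocating a new array per call.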

Memory growing with byte array copy

I have a problem with a simple byte[] copy. In a console application I load a 75MB DAT file into a byte[]. After that I would like to cut the array with the function below.
public static byte[] SubArray(this byte[] Data, int Index, int Length = 0)
{
if (Length == 0) Length = Data.Length - Index;
byte[] Result = new byte[Length];
Array.Copy(Data, Index, Result, 0, Length);
return Result;
}
If I use Data = Data.SubArray(32) once, memory grows from 100 to 180MB, but if I call Data = Data.SubArray(32) three times, memory triples to 340MB. I suppose the old arrays are still in memory. How do I release the old arrays from memory? I don't need them anymore, and with more SubArray calls in the code, memory grows to 2GB.
You need to let the garbage collector do its thing. To make it easier for the GC, you would normally set the old, unused reference to null or replace it with a new reference value. The GC also needs some time to kick in.
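A minimal sketch of that advice in action, reusing the SubArray extension from the question (the file path is a placeholder, and the GC.Collect calls are for diagnosis only; production code should normally let collection happen on its own):
byte[] data = File.ReadAllBytes(@"c:\temp\file.dat"); // ~75MB, lands on the Large Object Heap
data = data.SubArray(32); // the original 75MB array is now unreferenced

// Diagnosis only: force a full collection to confirm the old array is reclaimed.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Console.WriteLine(GC.GetTotalMemory(forceFullCollection: true));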

Issue reading file to byte array

I'm maintaining a program that has the following code to read a file to a byte array:
using (FileStream fileStream = new FileStream(filePath, FileMode.Open))
{
    fileStream.Position = 0;
    int fileSize = (int)fileStream.Length;
    int readSize;
    int remain = fileSize;
    var pos = 0;
    byteData = new byte[fileSize];
    // Read in chunks of up to 1KB until the whole file has been consumed
    while (remain > 0)
    {
        readSize = fileStream.Read(byteData, pos, Math.Min(1024, remain));
        pos += readSize;
        remain -= readSize;
    }
}
And then afterwards outputs this byte array as a Base64 string:
var value = "File contents:" + Environment.NewLine + Convert.ToBase64String(byteData)
The issue we are occasionally seeing is that the output is just a string of A's, like "AAAAAAAAAAAAAAAAAAAAAA", but longer. I've figured out that if you output a byte array that has been initialized to a given length but not assigned a value (i.e. each byte is still the initial value of 0) it will output in Base64 as a series of A's, so my hypothesis is that the byte array is being created to the size of the file, but then the value of each byte isn't being assigned. Looking at the code I can't see any obvious issues with it though, so if anyone knows better I'd be very grateful.
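A quick way to confirm that hypothesis (this snippet is mine, not from the program): a zeroed byte array really does encode to a run of 'A' characters, because the base64 digit for the value 0 is 'A'.
// 6 zero bytes -> 48 bits -> 8 base64 digits, all 'A'
Console.WriteLine(Convert.ToBase64String(new byte[6])); // prints "AAAAAAAA"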
For posterity: I did end up changing it to File.ReadAllBytes. However, I also found out that the issue was with the file itself, and an empty byte array was actually correct; each byte really was still the initial value of 0, so a corresponding base64 string of "A"s was also correct.
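For regular files, that replacement collapses the whole buffer loop into a single call:
// Reads the entire file into a new byte array in one call.
byteData = File.ReadAllBytes(filePath);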

What is the most efficient way to deal with methods that return variable length results into a fixed buffer?

I am using C# 4.0, and every so often I come across a method in the .NET class library, or some other library, that has a signature like the following (e.g. Socket.Receive, Stream.Read, etc.):
int DoSomethingSuperClever(byte[] buffer, int offset, int count)
The intention is always that you pass in a buffer and it fills that buffer with up to the maximum number of bytes you specify in the count argument (from the given offset), and returns how many bytes it actually managed to fill in.
This is my, super naive, way of dealing with this situation:
var data = new byte[0];
var buffer = new byte[1024];
int read;
while ((read = something.DoSomethingSuperClever(buffer, 0, buffer.Length)) > 0)
{
    // Grow 'data' by 'read' bytes and append the chunk just read
    int origLength = data.Length;
    var temp = new byte[origLength + read];
    Array.Copy(data, temp, origLength);
    Array.Copy(buffer, 0, temp, origLength, read);
    data = temp;
}
return data;
I think that is pretty rubbish because of all the array creations, but at least it does the job correctly, I suppose.
I wondered about having a List<byte>, adding to it, then calling ToArray at the end...
Of course, I cannot just call AddRange, because if read was less than the length of the buffer I would get junk appended (AddRange doesn't accept a length argument, so it always adds the entire collection). To go with that approach I would end up with a for loop and loads of Add calls, but surely that is even worse than the array copy, isn't it?
So, experts, what is the most efficient way that I should be dealing with these types of calls?
"So, experts, what is the most efficient way that I should be dealing with these types of calls?"
Well, one simple way is to write it all to a MemoryStream a chunk at a time, then use MemoryStream.ToArray and let it deal with buffer resizing, etc.:
MemoryStream ms = new MemoryStream();
var buffer = new byte[16 * 1024]; // declared here for completeness; any chunk size works
int read;
while ((read = something.DoSomethingSuperClever(buffer, 0, buffer.Length)) > 0)
{
    ms.Write(buffer, 0, read);
}
return ms.ToArray();
(I would typically have a bigger buffer than 1K, by the way. Obviously it depends on your use case, but I'd normally default to 8, 16 or 32K.)
I'd use a MemoryStream instead of a byte[]:
var data = new MemoryStream();
var buffer = new byte[1024];
int read = 0;
while ((read = something.DoSomethingSuperClever(buffer, 0, buffer.Length)) > 0)
{
    data.Write(buffer, 0, read);
}
return data.ToArray();
Depending on the source of your something object you might want to adjust the buffer size to be more efficient.
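If you can estimate the total size up front, a further tweak (a sketch; expectedSize is a placeholder, not from the question) is to presize the MemoryStream so its internal buffer never has to grow and re-copy:
int expectedSize = 64 * 1024;            // placeholder estimate for the total size
var ms = new MemoryStream(expectedSize); // presized: avoids internal regrowth copies
var buffer = new byte[32 * 1024];        // larger chunks mean fewer read calls
int read;
while ((read = something.DoSomethingSuperClever(buffer, 0, buffer.Length)) > 0)
{
    ms.Write(buffer, 0, read);
}
return ms.ToArray();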
