Need a fast method of deserializing 1 million Strings & Guids in C#

I want to deserialize a list of 1 million pairs of (String,Guid) for a performance critical app. The format can be anything I choose, and serialization does not have the same performance requirements.
What sort of approach is best? Text or binary? Write each pair (string,guid) consecutively, or write all strings followed by all guids?
I started playing in LINQPad (with the simpler example of deserializing strings only) and found that, slightly counter-intuitively, using a TextReader and ReadLine() was a fair bit faster than using a BinaryReader and ReadString(). (Is the file system cache playing tricks on me?)
public string[] DeSerializeBinary()
{
    var tmr = System.Diagnostics.Stopwatch.StartNew();
    long ms = 0;
    string[] arr = null;
    using (var rdr = new BinaryReader(new FileStream(file, FileMode.Open, FileAccess.Read)))
    {
        var num = rdr.ReadInt32();
        arr = new String[num];
        for (int i = 0; i < num; i++)
        {
            arr[i] = rdr.ReadString();
        }
        tmr.Stop();
        ms = tmr.ElapsedMilliseconds;
        Console.WriteLine("DeSerializeBinary took {0}ms", ms);
    }
    return arr;
}
public string[] DeserializeText()
{
    var tmr = System.Diagnostics.Stopwatch.StartNew();
    long ms = 0;
    string[] arr = null;
    using (var rdr = File.OpenText(file))
    {
        var num = Int32.Parse(rdr.ReadLine());
        arr = new String[num];
        for (int i = 0; i < num; i++)
        {
            arr[i] = rdr.ReadLine();
        }
        tmr.Stop();
        ms = tmr.ElapsedMilliseconds;
        Console.WriteLine("DeserializeText took {0}ms", ms);
    }
    return arr;
}
Some Edits:
I used RamMap to clear the file system cache, and it turns out there was very little difference between the text and binary readers for strings only.
I have a fairly simple class that holds the string and guid. It also holds an int index which corresponds to its position in the list. Obviously there's no need to include this in the serialization.
In a test deserializing strings and Guids alternately (binary), I get around 500ms.
The ideal timing is 50ms, or as close as I can get. However, a simple experiment showed it takes at least 120ms to read the (compressed) file into memory from a reasonably fast SSD, without any parsing at all. So 50ms seems unlikely.
Our strings have no theoretical length restriction. However, we can assume that the performance target only applies if they are all 20 characters or less.
Timings include opening the file.
Reading the strings is the clear bottleneck now (hence my experiments with serializing strings only). JIT_NewFast accounted for 30% of the time before I preallocated a 16-byte array for reading GUIDs.

It's not surprising that reading a bunch of strings is faster with StreamReader than with BinaryReader. StreamReader reads in blocks from the underlying stream, and parses the strings from that buffer. BinaryReader doesn't have a buffer like that. It reads the string length from the underlying stream, and then reads that many characters. So BinaryReader makes more calls to the base stream's Read method.
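If you want to stick with BinaryReader, one thing worth trying is wrapping the FileStream in a BufferedStream so those many small reads are served from memory. This is only a sketch, and since FileStream does some buffering of its own, it may or may not help in practice:
using (var fs = new FileStream(file, FileMode.Open, FileAccess.Read))
using (var buffered = new BufferedStream(fs, 1 << 16)) // 64 KB buffer; the size is an arbitrary choice
using (var rdr = new BinaryReader(buffered))
{
    var num = rdr.ReadInt32();
    var arr = new string[num];
    for (int i = 0; i < num; i++)
    {
        arr[i] = rdr.ReadString();
    }
}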
But there's more to deserializing a (String, Guid) pair than just reading. You also have to parse the Guid. If you write the file in binary then the Guid is written in binary, which makes it much easier and faster to create a Guid structure. If it's a string, then you have to call new Guid(string) to parse the text and create a Guid, after you split the line into its two fields.
Hard to say which of those will be faster.
I can't imagine that we're talking about a whole lot of time here. Certainly reading a file with a million lines will take around a second. Unless the string is really long. A GUID is only 36 characters if you count the separators, right?
With BinaryWriter, you can write the file like this:
writer.Write(count); // integer number of records
foreach (var pair in pairs)
{
    writer.Write(pair.theString);
    writer.Write(pair.theGuid.ToByteArray());
}
And to read it, you have:
count = reader.ReadInt32();
byte[] guidBytes = new byte[16];
for (int i = 0; i < count; ++i)
{
    string s = reader.ReadString();
    reader.Read(guidBytes, 0, guidBytes.Length);
    pairs.Add(new Pair(s, new Guid(guidBytes)));
}
Whether that's faster than splitting a string and calling the Guid constructor that takes a string parameter, I don't know.
I suspect that any difference is going to be pretty slight. I'd probably go with the simplest method: a text file.
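For reference, the text-based variant being weighed here might look something like this. I'm assuming one record per line with the string and the GUID separated by a tab, which is not a format the question specifies:
using (var rdr = File.OpenText(file))
{
    int count = int.Parse(rdr.ReadLine());
    for (int i = 0; i < count; i++)
    {
        // "value<TAB>guid" per line; split once and parse the GUID from text
        string line = rdr.ReadLine();
        int tab = line.IndexOf('\t');
        string s = line.Substring(0, tab);
        Guid g = new Guid(line.Substring(tab + 1));
        pairs.Add(new Pair(s, g));
    }
}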
If you want to get really crazy, you can write a custom format that you can easily slurp up in just a couple of large reads (a header, an index, and two arrays for strings and GUIDs), and do everything else in memory. That would almost certainly be faster. But faster enough to warrant the extra work? Doubtful.
Update
Or maybe not doubtful. Here's some code that writes and reads a custom binary format. The format is:
count (int32)
guids (count * 16 bytes)
strings (one big concatenated string)
index (index of each string's starting character in the big string)
I assume you're using a Dictionary<string, Guid> to hold these things. But your data structure doesn't really matter. The code would be substantially the same.
Note that I tested this very briefly. I won't say that the code is 100% bug free, but I think you can get the idea of what I'm doing.
private void WriteGuidFile(string filename, Dictionary<string, Guid> guids)
{
    using (var fs = File.Create(filename))
    {
        using (var writer = new BinaryWriter(fs, Encoding.UTF8))
        {
            List<int> stringIndex = new List<int>(guids.Count);
            StringBuilder bigString = new StringBuilder();

            // write count
            writer.Write(guids.Count);

            // Write the GUIDs and build the string index
            foreach (var pair in guids)
            {
                writer.Write(pair.Value.ToByteArray(), 0, 16);
                stringIndex.Add(bigString.Length);
                bigString.Append(pair.Key);
            }

            // Add one more entry to the string index.
            // makes deserializing easier
            stringIndex.Add(bigString.Length);

            // Write the string that contains all of the strings, combined
            writer.Write(bigString.ToString());

            // write the index
            foreach (var ix in stringIndex)
            {
                writer.Write(ix);
            }
        }
    }
}
Reading is just slightly more involved:
private Dictionary<string, Guid> ReadGuidFile(string filename)
{
    using (var fs = File.OpenRead(filename))
    {
        using (var reader = new BinaryReader(fs, Encoding.UTF8))
        {
            // read the count
            int count = reader.ReadInt32();

            // The guids are in a huge byte array sized 16*count
            byte[] guidsBuffer = new byte[16 * count];
            reader.Read(guidsBuffer, 0, guidsBuffer.Length);

            // Strings are all concatenated into one
            var bigString = reader.ReadString();

            // Index is an array of int. We can read it as an array of
            // ((count+1) * 4) bytes.
            byte[] indexBuffer = new byte[4 * (count + 1)];
            reader.Read(indexBuffer, 0, indexBuffer.Length);

            var guids = new Dictionary<string, Guid>(count);
            byte[] guidBytes = new byte[16];
            int startix = 0;
            int endix = 0;
            for (int i = 0; i < count; ++i)
            {
                endix = BitConverter.ToInt32(indexBuffer, 4 * (i + 1));
                string key = bigString.Substring(startix, endix - startix);
                Buffer.BlockCopy(guidsBuffer, (i * 16), guidBytes, 0, 16);
                guids.Add(key, new Guid(guidBytes));
                startix = endix;
            }
            return guids;
        }
    }
}
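For what it's worth, here's a quick way to exercise the two methods from the same class and time the read; the file name and sample data are placeholders:
// Round-trip a dictionary through the custom format and time the read.
var data = new Dictionary<string, Guid>();
for (int i = 0; i < 1000000; i++)
{
    data.Add("key" + i, Guid.NewGuid());
}
WriteGuidFile("guids.bin", data);

var sw = System.Diagnostics.Stopwatch.StartNew();
var roundTripped = ReadGuidFile("guids.bin");
sw.Stop();
Console.WriteLine("ReadGuidFile took {0}ms for {1} entries", sw.ElapsedMilliseconds, roundTripped.Count);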
A couple of notes here. First, I'm using BitConverter to convert the data in the byte arrays to integers. It would be faster to use unsafe code and just index into the arrays using an int32*.
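A rough sketch of what that unsafe variant could look like for the string index (compile with /unsafe; untested, in the same spirit as the code above):
unsafe
{
    fixed (byte* pIndex = indexBuffer)
    {
        int* index = (int*)pIndex;   // reinterpret the raw bytes as int32s
        int startix = 0;
        for (int i = 0; i < count; ++i)
        {
            int endix = index[i + 1];
            // ... substring and GUID handling exactly as in the loop above ...
            startix = endix;
        }
    }
}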
You might gain some speed by using pointers to index into guidsBuffer and calling the Guid(Int32, Int16, Int16, Byte, Byte, Byte, Byte, Byte, Byte, Byte, Byte) constructor rather than using Buffer.BlockCopy to copy the GUID into the temporary array.
You could make the string index an index of lengths rather than the starting positions. That would eliminate the need for the extra value at the end of the array, but it's unlikely that it'd make any difference in the speed.
There might be other optimization opportunities, but I think you get the general idea here.

Related

Array of bytes[] has no values when converted from int[]

I'm passing an int[] array that holds an image; later I want to convert it to a byte[] and save the image to a local path. However, I notice that bytePic[] has the same length as the int[] arrPic, but the values are missing.
Below is the entire function:
public string ChangeMaterialPicture(int[] arrPic, int materialId, string defaultPath)
{
    var material = _warehouseRepository.GetMaterialById(materialId);
    if (material is not null)
    {
        // Convert the Array to Bytes
        byte[] bytePic = new byte[arrPic.Length];
        for (var i = 0; i < arrPic.Length; i++)
        {
            AddByteToArray(bytePic, Convert.ToByte(arrPic[i]));
        }
        // Convert the Bytes to IMG
        string filename = Guid.NewGuid().ToString() + "_.png";
        System.IO.File.WriteAllBytes(@$"{defaultPath}\materials\{material.VendorId}\{filename}", bytePic);
        // Update the Image
        material.Picture = filename;
        _warehouseRepository.UpdateMaterial(material);
        return material.Picture;
    }
    else
    {
        return String.Empty;
    }
}
public byte[] AddByteToArray(byte[] bArray, byte newByte)
{
    byte[] newArray = new byte[bArray.Length + 1];
    bArray.CopyTo(newArray, 1);
    newArray[0] = newByte;
    return newArray;
}
You are creating the new array newArray in AddByteToArray and return it. But at the call site you are never using this returned value and the bytePic array remains unchanged.
The code in AddByteToArray makes no sense. Why create a new array when the intention was to insert one byte into an existing array? What you need to do is to cast the int into byte. Simply write:
byte[] bytePic = new byte[arrPic.Length];
for (int i = 0; i < arrPic.Length; i++)
{
    bytePic[i] = (byte)arrPic[i];
}
And delete the method AddByteToArray.
This assumes that every value in the int array is in the range 0 to 255 and therefore fits into one byte.
There are different ways to do this. With LINQ you could also write:
byte[] bytePic = arrPic.Select(i => (byte)i).ToArray();
I would assume your original array uses an int to represent a full RGBA pixel, since 32-bit-per-pixel mono images are very rare in my experience. And if you do have such an image, you probably want to be more intelligent about how you do this conversion. The only time just casting int to byte would be a good idea is if you are sure only the lower 8 bits are used; but if that is the case, why are you using an int array in the first place?
If you actually have RGBA pixels, you do not want to convert individual int values to bytes, but rather convert a single int value to 4 bytes. And this is not that difficult to do; you just need to use the right methods. The old-school option is to use Buffer.BlockCopy.
Example:
byte[] bytePic = new byte[arrPic.Length * 4];
Buffer.BlockCopy(arrPic, 0, bytePic, 0, bytePic.Length);
But if your write-method accepts a span you might want to just convert your array to a span and cast this to the right type, avoiding the copy.
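For example, on frameworks where System.Runtime.InteropServices.MemoryMarshal is available (.NET Core / modern .NET, or via the System.Memory package), something along these lines avoids the copy entirely. That availability is an assumption on my part:
// View the int[] as bytes without copying.
Span<byte> byteView = MemoryMarshal.AsBytes(arrPic.AsSpan());
// A write method that accepts a span can consume byteView directly;
// if a byte[] is still required, ToArray() makes a single copy:
byte[] bytePic = byteView.ToArray();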

Streaming int as text into a file without creating string objects

I'm streaming some text into a file. The following code is an MCVE to demonstrate the issue:
using (var fileStream = File.OpenWrite(@"d:\test.txt"))
{
    using (var streamWriter = new StreamWriter(fileStream))
    {
        for (int i = 0; i < 20000000; i++)
        {
            streamWriter.Write("Just ");
            streamWriter.Write(i);
            streamWriter.Write(" example.\r\n");
        }
    }
}
As you can see, it does not only stream strings into a file but also ints.
Unfortunately, the Write(int) method seems to call ToString() on the numbers, so the above code creates 20M string objects. This can be proven with a memory profiler like dotMemory. These strings will be garbage collected, but they cause 577 MB of throughput on the garbage collector.
Is there a way to directly stream the ints (in a human readable format, not binary) into the file without creating temporary strings?
You can do it one digit at a time using recursion:
void writeInt(StreamWriter w, int i)
{
    if (i < 0)
    {
        w.Write('-');
        writeInt(w, -i);
    }
    else if (i >= 10)
    {
        writeInt(w, i / 10);           // Write all but the last digit
        w.Write((char)('0' + i % 10)); // Write the last digit
    }
    else
    {
        w.Write((char)('0' + i));      // Write single digit number
    }
}
Call it like this for a given integer:
int i = 42;
streamWriter.Write("Just ");
writeInt(streamWriter, i);
streamWriter.Write(" example.\r\n");

Byte to value error

So in C#, I needed a generator for a random number below a given number, and I found one on Stack Overflow. But near the end, it converts the byte array into a BigInteger. I tried doing the same, though I am using the Deveel-Math lib as it allows me to use BigDecimals. I have tried to change the array into a value, and that into a String, but I keep getting a "Could not find any recognizable digits." error, and as of now I am stumped.
public static BigInteger RandomIntegerBelow1(BigInteger N)
{
    byte[] bytes = N.ToByteArray();
    BigInteger R;
    Random random = new Random();
    do
    {
        random.NextBytes(bytes);
        bytes[bytes.Length - 1] &= (byte)0x7F; // force sign bit to positive
        R = BigInteger.Parse(BytesToStringConverted(bytes));
        // the Param needs a String value, e.g. BigInteger.Parse("100")
    } while (R >= N);
    return R;
}

static string BytesToStringConverted(byte[] bytes)
{
    using (var stream = new MemoryStream(bytes))
    {
        using (var streamReader = new StreamReader(stream))
        {
            return streamReader.ReadToEnd();
        }
    }
}
Deveel-Math
Wrong string conversion
You are converting your byte array to a string of characters based on UTF encoding. I'm pretty sure this is not what you want.
If you want to convert a byte array to a string that contains a number expressed in decimal, try this answer using BitConverter.
if (BitConverter.IsLittleEndian)
    Array.Reverse(array); // need the bytes in the reverse order
int value = BitConverter.ToInt32(array, 0);
This is way easier
On the other hand, I notice that Deveel-Math's BigInteger has a constructor that takes a byte array as input (see line 226). So you should be able to greatly simplify your code by doing this:
R = new Deveel.Math.BigInteger(1, bytes);
However, since Deveel.Math appears to be BigEndian, you may need to reverse the array first:
System.Array.Reverse(bytes);
R = new Deveel.Math.BigInteger(1, bytes);

Concatenating a C# List of byte[]

I am creating several byte arrays that need to be joined together to create one large byte array. I'd prefer not to use byte[]'s at all, but I have no choice here...
I am adding each one to a List as I create them, so I only have to do the concatenation once I have all the byte[]'s. My question is: what is the best way of actually doing this when I have a list with an unknown number of byte[]'s and I want to concat them all together?
Thanks.
listOfByteArrs.SelectMany(byteArr=>byteArr).ToArray()
The above code will concatenate a sequence of sequences of bytes into one sequence - and store the result in an array.
Though readable, this is not maximally efficient: it doesn't make use of the fact that you already know the length of the resultant byte array, so it cannot avoid the dynamically extended .ToArray() implementation, which necessarily involves multiple allocations and array copies. Furthermore, SelectMany is implemented in terms of iterators, which means a lot of interface calls, and that is quite slow. However, for small-ish data-set sizes this is unlikely to matter.
If you need a faster implementation you can do the following:
var output = new byte[listOfByteArrs.Sum(arr => arr.Length)];
int writeIdx = 0;
foreach (var byteArr in listOfByteArrs)
{
    byteArr.CopyTo(output, writeIdx);
    writeIdx += byteArr.Length;
}
or as Martinho suggests:
var output = new byte[listOfByteArrs.Sum(arr => arr.Length)];
using (var stream = new MemoryStream(output))
    foreach (var bytes in listOfByteArrs)
        stream.Write(bytes, 0, bytes.Length);
Some timings:
var listOfByteArrs = Enumerable.Range(1, 1000)
    .Select(i => Enumerable.Range(0, i).Select(x => (byte)x).ToArray()).ToList();
Using the short method to concatenate these 500500 bytes takes 15ms, using the fast method takes 0.5ms on my machine - YMMV, and note that for many applications both are more than fast enough ;-).
Finally, you could replace Array.CopyTo with the static Array.Copy, the low-level Buffer.BlockCopy, or a MemoryStream with a preallocated back buffer - these all perform pretty much identically on my tests (x64 .NET 4.0).
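For completeness, the Buffer.BlockCopy variant mentioned above would look like this (a sketch; the behavior matches the CopyTo loop):
var output = new byte[listOfByteArrs.Sum(arr => arr.Length)];
int writeIdx = 0;
foreach (var byteArr in listOfByteArrs)
{
    // Buffer.BlockCopy copies raw bytes; for byte[] sources this is equivalent to Array.Copy.
    Buffer.BlockCopy(byteArr, 0, output, writeIdx, byteArr.Length);
    writeIdx += byteArr.Length;
}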
Here's a solution based on Andrew Bezzub's and fejesjoco's answers, pre-allocating all the memory needed up front. This yields Θ(N) memory usage and Θ(N) time (N being the total number of bytes).
byte[] result = new byte[list.Sum(a => a.Length)];
using (var stream = new MemoryStream(result))
{
    foreach (byte[] bytes in list)
    {
        stream.Write(bytes, 0, bytes.Length);
    }
}
return result;
Write them all to a MemoryStream instead of a list, then call MemoryStream.ToArray(). Or, once you have the list, first sum all the byte array lengths, create a new byte array with the total length, and copy each array in after the previous one.
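A minimal sketch of the MemoryStream suggestion, with a hypothetical ProduceByteArrays() standing in for wherever the arrays come from:
// Append each byte[] to a MemoryStream as it is created,
// then materialize the combined array once at the end.
var stream = new MemoryStream();
foreach (byte[] bytes in ProduceByteArrays())
{
    stream.Write(bytes, 0, bytes.Length);
}
byte[] combined = stream.ToArray();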
Use Linq:
List<byte[]> list = new List<byte[]>();
list.Add(new byte[] { 1, 2, 3, 4 });
list.Add(new byte[] { 1, 2, 3, 4 });
list.Add(new byte[] { 1, 2, 3, 4 });

IEnumerable<byte> result = Enumerable.Empty<byte>();
foreach (byte[] bytes in list)
{
    result = result.Concat(bytes);
}
byte[] newArray = result.ToArray();
A maybe-faster solution (not declaring the array upfront) would be:
IEnumerable<byte> bytesEnumerable = GetBytesFromList(list);
byte[] newArray = bytesEnumerable.ToArray();

private static IEnumerable<T> GetBytesFromList<T>(IEnumerable<IEnumerable<T>> list)
{
    foreach (IEnumerable<T> elements in list)
    {
        foreach (T element in elements)
        {
            yield return element;
        }
    }
}
It seems like the above would iterate each array only once.
Instead of storing each byte array into a List<byte[]>, you could instead add them directly to a List<byte>, using the AddRange method for each one.
Hmm, how about list.AddRange?
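A minimal sketch of that List<byte> / AddRange idea (the sample arrays are just placeholders):
// Accumulate bytes directly instead of keeping a List<byte[]>.
var allBytes = new List<byte>();
allBytes.AddRange(new byte[] { 1, 2, 3, 4 });
allBytes.AddRange(new byte[] { 5, 6, 7, 8 });
byte[] combined = allBytes.ToArray();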

How to read a large (1 GB) txt file in .NET?

I have a 1 GB text file which I need to read line by line. What is the best and fastest way to do this?
private void ReadTxtFile()
{
    string filePath = string.Empty;
    filePath = openFileDialog1.FileName;
    if (!string.IsNullOrEmpty(filePath))
    {
        using (StreamReader sr = new StreamReader(filePath))
        {
            String line;
            while ((line = sr.ReadLine()) != null)
            {
                FormatData(line);
            }
        }
    }
}
In FormatData() I check the starting word of line which must be matched with a word and based on that increment an integer variable.
void FormatData(string line)
{
    if (line.StartsWith(word))
    {
        globalIntVariable++;
    }
}
If you are using .NET 4.0, try MemoryMappedFile, which is a class designed for this scenario.
You can use StreamReader.ReadLine otherwise.
Using StreamReader is probably the way to go, since you don't want the whole file in memory at once. MemoryMappedFile is more for random access than sequential reading (sequential reading with a stream is about ten times as fast, while memory mapping is about ten times as fast for random access).
You might also try creating your StreamReader from a FileStream with FileOptions set to SequentialScan (see the FileOptions enumeration), but I doubt it will make much of a difference.
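For reference, that would look something like this (the 64 KB buffer size is an arbitrary choice):
// Hint to the OS that the file will be read front to back.
using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read,
                               FileShare.Read, 64 * 1024, FileOptions.SequentialScan))
using (var sr = new StreamReader(fs))
{
    string line;
    while ((line = sr.ReadLine()) != null)
    {
        FormatData(line);
    }
}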
There are, however, ways to make your example more efficient, since you do your formatting in the same loop as the reading. You're wasting clock cycles, so if you want even more performance, it would be better to use a multithreaded asynchronous solution where one thread reads data and another formats it as it becomes available. Check out BlockingCollection, which might fit your needs:
Blocking Collection and the Producer-Consumer Problem
If you want the fastest possible performance, in my experience the only way is to read in as large a chunk of binary data sequentially and deserialize it into text in parallel, but the code starts to get complicated at that point.
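A rough sketch of that producer/consumer split using BlockingCollection (the bounded capacity of 10000 is an arbitrary choice):
// One task reads lines, another formats them as they become available.
var lines = new BlockingCollection<string>(boundedCapacity: 10000);

var reader = Task.Run(() =>
{
    foreach (var line in File.ReadLines(filePath))
        lines.Add(line);
    lines.CompleteAdding();
});

var formatter = Task.Run(() =>
{
    foreach (var line in lines.GetConsumingEnumerable())
        FormatData(line);
});

Task.WaitAll(reader, formatter);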
You can use LINQ:
int result = File.ReadLines(filePath).Count(line => line.StartsWith(word));
File.ReadLines returns an IEnumerable<String> that lazily reads each line from the file without loading the whole file into memory.
Enumerable.Count counts the lines that start with the word.
If you are calling this from an UI thread, use a BackgroundWorker.
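For the UI-thread case, a minimal BackgroundWorker (System.ComponentModel) sketch might look like this:
// Run the counting off the UI thread and pick up the result when done.
var worker = new BackgroundWorker();
worker.DoWork += (s, e) =>
{
    e.Result = File.ReadLines(filePath).Count(line => line.StartsWith(word));
};
worker.RunWorkerCompleted += (s, e) =>
{
    int result = (int)e.Result;   // back on the UI thread here
    // update the UI with result
};
worker.RunWorkerAsync();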
Probably to read it line by line.
You should rather not try to force it into memory by reading to end and then processing.
StreamReader.ReadLine should work fine. Let the framework choose the buffering, unless you know by profiling you can do better.
TextReader.ReadLine()
I was facing the same problem on our production server at Agenty, where we see large files (sometimes 10-25 GB tab-delimited (\t) txt files). After lots of testing and research, I found the best way is to read large files in small chunks, with a for/foreach loop and offset/limit logic using File.ReadLines().
int TotalRows = File.ReadLines(Path).Count(); // Count the number of rows in file with lazy load
int Limit = 100000; // 100000 rows per batch
for (int Offset = 0; Offset < TotalRows; Offset += Limit)
{
    var table = Path.FileToTable(heading: true, delimiter: '\t', offset: Offset, limit: Limit);

    // Do all your processing here with limit and offset and save to drive in append mode.
    // The append mode will write the output to the same file for each processed batch.
    table.TableToFile(@"C:\output.txt");
}
See the complete code in my Github library : https://github.com/Agenty/FileReader/
Full Disclosure - I work for Agenty, the company who owned this library and website
My file is over 13 GB. You can use my class:
public static void Read(int length)
{
    StringBuilder resultAsString = new StringBuilder();

    using (MemoryMappedFile memoryMappedFile = MemoryMappedFile.CreateFromFile(@"D:\_Profession\Projects\Parto\HotelDataManagement\_Document\Expedia_Rapid.jsonl\Expedia_Rapi.json"))
    using (MemoryMappedViewStream memoryMappedViewStream = memoryMappedFile.CreateViewStream(0, length))
    {
        for (int i = 0; i < length; i++)
        {
            // Reads a byte from the stream and advances the position by one byte,
            // or returns -1 if at the end of the stream.
            int result = memoryMappedViewStream.ReadByte();
            if (result == -1)
            {
                break;
            }
            char letter = (char)result;
            resultAsString.Append(letter);
        }
    }
}
This code reads the text of the file from the start up to the length that you pass to the Read(int length) method and fills the resultAsString variable.
I'd read the file 10,000 bytes at a time. Then I'd analyse those 10,000 bytes and chop them into lines and feed them to the FormatData function.
Bonus points for splitting the reading and the line analysis across multiple threads.
I'd definitely use a StringBuilder to collect all strings and might build a string buffer to keep about 100 strings in memory all the time.
