Can I compare objects based on the protobuf serialization? - C#

I have some model objects that I save in a DB, serialized with protobuf. I want to compare the version I'm about to save to the existing one, to avoid adding the same version twice.
Ideally I would do:
byte[] existingBlob = GetFromDBExistingModelObject();
ModelType existingModel = existingBlob.Deserialize();
if (!model.Equals(existingModel))
{
byte[] serializedModel = model.Serialize();
Save(serializedModel); //Save in DB the new blob
}
However, I would have to implement .Equals on every model object, and that would be quite painful. I would like to do this instead:
byte[] existingBlob = GetFromDBExistingModelObject();
byte[] serializedModel = model.Serialize();
if (!compareBlob(existingBlob, serializedModel))
{
Save(serializedModel);
}
private bool compareBlob(byte[] existingBlob, byte[] serializedModel)
{
if (serializedModel.Length != existingBlob.Length)
{
return false;
}
return !serializedModel.Where((t, i) => t != existingBlob[i]).Any();
}
I also do this for performance, because I don't have to deserialize the existing blob.
What do you think of this implementation? Do you think I can rely on this comparison? I use protobuf for serialization.
Thanks for your comments.

protobuf-net will produce predictable output, but strictly speaking that is not guaranteed by the spec; there are two edge cases (field order, and sub-normal forms† for varint encoding) that technically could produce different output with the same meaning, but protobuf-net will always produce the same output currently.
I am toying with adding an option to deliberately use sub-normal varint forms to avoid some memory shuffling, but that would be opt-in only.
So; as long as you aren't building your binary files by appending (protobuf is an appendable format, but obviously all bets are off if you are appending in arbitrary orders), then yes: the data on the wire should be predictable, and you can compare the byte sequence to test for equality.
As a minor note, I would recommend a regular for loop here, for efficiency:
if (serializedModel.Length != existingBlob.Length)
{
return false;
}
for(int i = 0 ; i < serializedModel.Length ; i++)
if(serializedModel[i] != existingBlob[i]) return false;
return true;
(if you are particularly speed-crazy, you could even use unsafe code and compare it as an int* or long* instead (taking 1/4 or 1/8 of the tests), and just check the last few bytes manually)
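For illustration, a rough sketch of that unsafe trick (this is my hedged example, not protobuf-net code; it only tests equality, so endianness and the long-sized chunking don't affect correctness):
private static unsafe bool BlobsEqual(byte[] a, byte[] b)
{
    if (a.Length != b.Length) return false;
    fixed (byte* pa = a, pb = b)
    {
        long* la = (long*)pa, lb = (long*)pb;
        int longCount = a.Length / sizeof(long);
        for (int i = 0; i < longCount; i++)            // 8 bytes per test
            if (la[i] != lb[i]) return false;
        for (int i = longCount * sizeof(long); i < a.Length; i++)
            if (pa[i] != pb[i]) return false;          // the last few bytes, manually
    }
    return true;
}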
You might also consider comparing a hash (sha1 etc) instead of byte-by-byte; this would be especially useful for large models, especially if you can store the hashed value along with the original (so you never have to fetch the original existing BLOB - just the existing hash).
† : specifically, the bit-sequence 10000000 or 00000000 at the "big end" of a varint just means "and more zeros at the big end" (with or without more data to follow), so it has no impact on the number; hence any (reasonable) number of 0x80 0x80 0x00 on the end of a varint does not change the result; there is a use-case whereby this could be used to avoid having to move data around, by deliberately using an oversized varint as a length-prefix.
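A minimal sketch of the hash idea above (hedged: SHA1 is just one choice, and GetStoredHashFromDb/Save are hypothetical helpers; the point is to store the hash next to the blob so the existing blob never has to be fetched):
private static string ComputeBlobHash(byte[] serializedModel)
{
    using (var sha1 = SHA1.Create())
    {
        return Convert.ToBase64String(sha1.ComputeHash(serializedModel));
    }
}

// Usage (hypothetical helpers):
// string newHash = ComputeBlobHash(serializedModel);
// if (newHash != GetStoredHashFromDb())
//     Save(serializedModel, newHash);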

Related

C#: Is it possible to generate an identifier for an array of double values?

I am working with existing data and have records which contain an array, double[23] or double[46]. The values in the array can be the same across multiple records. I would like to generate an id (perhaps an int) to uniquely identify the values in each array.
There are places in the application where I need to group records based on the values in the array being identical. While there are ways to query for this, I was hoping for a single int field (or something similar) to group on. This would really help simplify queries and especially help with report tools where grouping on a smaller single field would help immensely.
I thought of generating a hash code, but I understand these are not guaranteed to be the same for each double[] with matching values. I had tried implementing
((IStructuralEquatable)combined).GetHashCode(EqualityComparer<double>.Default);
To compare the structure and data, but again, I don't think this is guaranteed to match another double[] having the same values.
Perhaps a form of checksum would work but admittedly I am having trouble implementing something. I am looking for suggestions/direction.
Here is the data for 3 sample records. Data in records 1 and 3 are the same, so the generated id should match for those.
32.7,48.9,55.9,48.9,47.7,46.9,45.7,44.4,43.4,41.9,40.4,38.4,36.7,34.4,32.4,30.4,27.9,25.4,22.4,19.4,16.4,13.4,10.4,47.9
40.8,49.0,50.0,49.0,47.8,47.0,45.8,44.5,43.5,42.0,40.5,38.5,36.8,34.5,32.5,30.5,28.0,25.5,22.5,19.5,16.5,13.5,10.5,48.0
32.7,48.9,55.9,48.9,47.7,46.9,45.7,44.4,43.4,41.9,40.4,38.4,36.7,34.4,32.4,30.4,27.9,25.4,22.4,19.4,16.4,13.4,10.4,47.9
Perhaps this is not possible without just checking all the data, but was hoping for a better solution to simplify the application and improve the speed.
The goal is to add a new id field to the existing records to represent the array data. That way, passing records into report tools would group together easily on one field rather than checking the whole array on each record.
I appreciate any direction.
EDIT - Some issues I ran into trying things (in case it helps someone)
In trying to understand this originally, I was calling this code (which is part of .NET). I understood these functions would hash the values of the array together (only the last 8 values, in this case). I didn't think it included the array handle. The result was not quite as expected because of a bug MS corrected in .NET, as per the commented line below. With the fix I was getting better results.
int IStructuralEquatable.GetHashCode(IEqualityComparer comparer) {
if (comparer == null)
throw new ArgumentNullException("comparer");
Contract.EndContractBlock();
int ret = 0;
for (int i = (this.Length >= 8 ? this.Length - 8 : 0); i < this.Length; i++) {
ret = CombineHashCodes(ret, comparer.GetHashCode(GetValue(i)));
//.NET 4.6.2, in .NET 4.5.2 it is ret = CombineHashCodes(ret, comparer.GetHashCode(GetValue(0)))
}
return ret;
}
internal static int CombineHashCodes(int h1, int h2) {
return (((h1 << 5) + h1) ^ h2);
}
I modified this to handle more than 8 values and still had some hashes not matching. I later determined the issue was in the data; I was unaware some of the records had some doubles stored with more than one decimal place (should have been rounded). This of course changed the hash. Now that I have the data consistent, I am seeing matching hashes; any arrays with identical values have an identical hash.
I thought of generating a hash code, but I understand these are not guaranteed to be the same for each double[] with matching values
Quite the opposite: a hash function is required by design to return equal hashes for equal inputs. For example, a function that always returns 0 is a valid (if useless) hash - it certainly returns equal values for equal rows. Everything else is just an optimization to try to reduce false positives.
Perhaps this is not possible without just checking all the data
Of course you need to check all the data; how else would you do it?
However, your implementation is broken. The default hash function for an array hashes the handle of the array itself, so different array instances with the same data will show up as different. What you want to do is use a HashCode instance and Add() each element of your array to it to get a proper hash code.
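To illustrate that suggestion, a minimal sketch (hedged: System.HashCode requires .NET Core 2.1+ or the Microsoft.Bcl.HashCode package on .NET Framework, and its seed is randomized per process, so this is only usable within a single run, not as a persisted id):
// Sketch only. HashCode combines the *values*, not the array reference, so two
// arrays with equal contents produce the same result within one process.
// Caveat: the seed is randomized per process - don't store this in the DB.
// For a persisted id, hash the raw bytes deterministically (e.g. SHA-256) instead.
static int GetValueHash(double[] values)
{
    var hash = new HashCode();
    foreach (double v in values)
        hash.Add(v);
    return hash.ToHashCode();
}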

Is it possible to concatenate a list of strings using only a single allocation?

After doing some profiling, we've discovered that the current way in which our app concatenates strings causes an enormous amount of memory churn and CPU time.
We're building a List<string> of strings to concatenate that is on the order of 500 thousand elements long, referencing several hundred megabytes worth of strings. We're trying to optimize this one small part of our app since it seems to account for a disproportionate amount of CPU and memory usage.
We do a lot of text processing :)
Theoretically, we should be able to perform the concatenation in a single allocation and N copies - we can know how many total characters are available in our string, so it should just be as simple as summing up the lengths of the component strings and allocating enough underlying memory to hold the result.
Assuming we're starting with a pre-filled List<string>, is it possible to concatenate all strings in that list using a single allocation?
Currently, we're using the StringBuilder class, but this stores its own intermediate buffer of all of the characters - so we have an ever growing chunk array, with each chunk storing a copy of the characters we're giving it. Far from ideal. The allocations for the array of chunks aren't horrible, but the worst part is that it allocates intermediate character arrays, which means N allocations and copies.
The best we can do right now is to call List<string>.ToArray() - which performs one copy of a 500k element array - and pass the resulting string[] to string.Concat(params string[]). string.Concat() then performs two allocations: one to copy the input array into an internal array, and one to allocate the destination string's memory.
From referencesource.microsoft.com:
public static String Concat(params String[] values) {
if (values == null)
throw new ArgumentNullException("values");
Contract.Ensures(Contract.Result<String>() != null);
// Spec#: Consider a postcondition saying the length of this string == the sum of each string in array
Contract.EndContractBlock();
int totalLength=0;
// -----------> Allocation #1 <---------
String[] internalValues = new String[values.Length];
for (int i=0; i<values.Length; i++) {
string value = values[i];
internalValues[i] = ((value==null)?(String.Empty):(value));
totalLength += internalValues[i].Length;
// check for overflow
if (totalLength < 0) {
throw new OutOfMemoryException();
}
}
return ConcatArray(internalValues, totalLength);
}
private static String ConcatArray(String[] values, int totalLength) {
// -----------------> Allocation #2 <---------------------
String result = FastAllocateString(totalLength);
int currPos=0;
for (int i=0; i<values.Length; i++) {
Contract.Assert((currPos <= totalLength - values[i].Length),
"[String.ConcatArray](currPos <= totalLength - values[i].Length)");
FillStringChecked(result, currPos, values[i]);
currPos+=values[i].Length;
}
return result;
}
Thus, in the best case, we have three allocations, two for arrays referencing the component strings, and one for the destination concatenated string.
Can we improve on this? Is it possible to concatenate a List<string> using a single allocation and a single loop of character copies?
Edit 1
I'd like to summarize the various approaches discussed so far, and why they are still sub-optimal. I'd also like to set the parameters of the situation in concrete a little more, since I've received a lot of questions that try to side step the central question.
...
First, the structure of the code that I am working within. There are three layers:
Layer one is a set of methods that produce my content. These methods return small-ish string objects, which I will call my 'component' strings. These string objects will eventually be concatenated into a single string. I do not have the ability to modify these methods; I have to face the reality that they return string objects and move forward.
Layer two is my code that calls these content producers and assembles the output, and is the subject of this question. I must call the content producer methods, collect the strings they return, and eventually concatenate the returned strings into a single string (reality is a little more complex; the returned strings are partitioned depending on how they're routed for output, and so I have several sets of large collections of strings).
Layer three is a set of methods that accept a single large string for further processing. Changing the interface of that code is beyond my control.
Talking about some numbers: a typical batch run will collect ~500000 strings from the content producers, representing about 200-500 MB of memory. I need the most efficient way to concatenate these 500k strings into a single string.
...
Now I'd like to examine the approaches discussed so far. For the sake of numbers, assume we're running 64-bit, assume that we are collecting 500000 string objects, and assume that the aggregate size of the string objects totals 200 megabytes worth of character data. Also, assume that the original string objects' memory is not counted toward any approach's total in the analysis below. I make this assumption because that memory is necessarily common to any and all approaches, given that we cannot change the interface of the content producers - they return 500k relatively small, fully formed string objects that I must then accept and somehow concatenate. As stated above, I cannot change this interface.
Approach #1
Content producers ----> StringBuilder ----> string
Conceptually, this would be invoking the content producers, and directly writing the strings they return to a StringBuilder, and then later calling StringBuilder.ToString() to obtain the concatenated string.
By analyzing StringBuilder's implementation, we can see that the cost of this boils down to 400 MB of allocations and copies:
During the stage where we collect the output from the content producers, we're writing 200 MB of data to the StringBuilder. We would be performing one 200 MB allocation to pre-allocate the StringBuilder, and then 200 MB worth of copies as we copy and discard the strings returned from the content producers.
After we've collected all output from the content producers and have a fully formed StringBuilder, we then need to call StringBuilder.ToString(). This performs exactly one allocation (string.FastAllocateString()), and then copies the string data from its internal buffers to the string object's internal memory.
Total cost: approximately 400 MB of allocations and copies
Approach #2
Content producers ---> pre-allocated char[] ---> string
This strategy is fairly simple. Assuming we know roughly how much character data we're going to be collecting from the producers, we can pre-allocate a char[] that is 200 MB large. Then, as we call the content producers, we copy the strings they return into our char[]. This accounts for 200 MB of allocations and copies. The final step to turn this into a string object is to pass it to the new string(char[]) constructor. However, since strings are immutable and arrays are not, the constructor will make a copy of that entire array, causing it to allocate and copy another 200 MB of character data.
Total cost: approximately 400 MB of allocations and copies
Approach #3:
Content producers ---> List<string> ----> string[] ----> string.Concat(string[])
Pre-allocate a List<string> to be about 500k elements - approximately 4 MB of allocations for List's underlying array (500k * 8 bytes per pointer == 4 MB of memory).
Call all of the content producers to collect their strings. Approximately 4 MB of copies, as we copy the pointer to the returned string into List's underlying array.
Call List<string>.ToArray() to obtain a string[]. Approximately 4 MB of allocations and copies (again, we're really just copying pointers).
Call string.Concat(string[]):
Concat will make a copy of the array provided to it before it does any real work. Approximately 4 MB of allocations and copies, again.
Concat will then allocate a single 'destination' string object using the internal string.FastAllocateString() special method. Approximately 200 MB of allocations.
Concat will then copy strings from its internal copy of the provided array directly into the destination. Approximately 200 MB of copies.
Total cost: approximately 212 MB of allocations and copies
None of these approaches is ideal; however, approach #3 is very close. We're assuming that the absolute minimum of memory that needs to be allocated and copied is 200 MB (for the destination string), and here we get pretty close - 212 MB.
If there were a string.Concat overload that 1) Accepted an IList<string> and 2) did not make a copy of that IList before using it, then the problem would be solved. No such method is provided by .Net, hence the subject of this question.
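As an aside (my addition, not part of the original question): on runtimes newer than the .NET 4.6 target used below - .NET Core 2.1+ / .NET Standard 2.1 - string.Create gives exactly this "one allocation, one copy pass" behaviour without unsafe code. A minimal sketch:
// Sketch assuming .NET Core 2.1+ (string.Create does not exist on .NET 4.6).
static string ConcatSingleAllocation(List<string> parts)
{
    int total = 0;
    for (int i = 0; i < parts.Count; i++)
        total += parts[i].Length;

    // string.Create allocates the destination string exactly once and lets us
    // write the characters directly into it via a Span<char>.
    return string.Create(total, parts, (span, list) =>
    {
        foreach (string s in list)
        {
            s.AsSpan().CopyTo(span);
            span = span.Slice(s.Length);
        }
    });
}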
Edit 2
Progress on a solution.
I've done some testing with some hacked IL, and found that directly invoking string.FastAllocateString(n) (which is not usually invokable...) is about as fast as invoking new string('\0', n), and both seem to allocate exactly as much memory as is expected.
From there, it seems it's possible to acquire a pointer to the freshly allocated string using the unsafe and fixed statements.
And so, a rough solution begins to appear:
private static string Concat( List<string> list )
{
int concatLength = 0;
for( int i = 0; i < list.Count; i++ )
{
concatLength += list[i].Length;
}
string newString = new string( '\0', concatLength );
unsafe
{
fixed( char* ptr = newString )
{
...
}
}
return newString;
}
The next biggest hurdle is implementing or finding an efficient block copy method, à la Buffer.BlockCopy, except one that will accept char* types.
If you can determine the length of the concatenation before trying to perform the operation, a char array can beat string builder in some use cases. Manipulating the characters within the array prevents the multiple allocations.
See: http://blogs.msdn.com/b/cisg/archive/2008/09/09/performance-analysis-reveals-char-array-is-better-than-stringbuilder.aspx
UPDATE
Please check out this internal implementation of the String.Join from .NET - it uses unsafe code with pointers to avoid multiple allocations. Unless I'm missing something, it would seem you can re-write this using your List to accomplish what you want:
[System.Security.SecuritySafeCritical] // auto-generated
public unsafe static String Join(String separator, String[] value, int startIndex, int count) {
//Range check the array
if (value == null)
throw new ArgumentNullException("value");
if (startIndex < 0)
throw new ArgumentOutOfRangeException("startIndex", Environment.GetResourceString("ArgumentOutOfRange_StartIndex"));
if (count < 0)
throw new ArgumentOutOfRangeException("count", Environment.GetResourceString("ArgumentOutOfRange_NegativeCount"));
if (startIndex > value.Length - count)
throw new ArgumentOutOfRangeException("startIndex", Environment.GetResourceString("ArgumentOutOfRange_IndexCountBuffer"));
Contract.EndContractBlock();
//Treat null as empty string.
if (separator == null) {
separator = String.Empty;
}
//If count is 0, that skews a whole bunch of the calculations below, so just special case that.
if (count == 0) {
return String.Empty;
}
int jointLength = 0;
//Figure out the total length of the strings in value
int endIndex = startIndex + count - 1;
for (int stringToJoinIndex = startIndex; stringToJoinIndex <= endIndex; stringToJoinIndex++) {
if (value[stringToJoinIndex] != null) {
jointLength += value[stringToJoinIndex].Length;
}
}
//Add enough room for the separator.
jointLength += (count - 1) * separator.Length;
// Note that we may not catch all overflows with this check (since we could have wrapped around the 4gb range any number of times
// and landed back in the positive range.) The input array might be modifed from other threads,
// so we have to do an overflow check before each append below anyway. Those overflows will get caught down there.
if ((jointLength < 0) || ((jointLength + 1) < 0) ) {
throw new OutOfMemoryException();
}
//If this is an empty string, just return.
if (jointLength == 0) {
return String.Empty;
}
string jointString = FastAllocateString( jointLength );
fixed (char * pointerToJointString = &jointString.m_firstChar) {
UnSafeCharBuffer charBuffer = new UnSafeCharBuffer( pointerToJointString, jointLength);
// Append the first string first and then append each following string prefixed by the separator.
charBuffer.AppendString( value[startIndex] );
for (int stringToJoinIndex = startIndex + 1; stringToJoinIndex <= endIndex; stringToJoinIndex++) {
charBuffer.AppendString( separator );
charBuffer.AppendString( value[stringToJoinIndex] );
}
Contract.Assert(*(pointerToJointString + charBuffer.Length) == '\0', "String must be null-terminated!");
}
return jointString;
}
Source: http://www.dotnetframework.org/default.aspx/4#0/4#0/DEVDIV_TFS/Dev10/Releases/RTMRel/ndp/clr/src/BCL/System/String#cs/1305376/String#cs
UPDATE 2
Good point on the fast allocate. According to an old SO post, you can wrap FastAllocate using reflection (assuming, of course, you'd cache the fastAllocate method reference so you just call Invoke each time). Perhaps the tradeoff of the call is better than what you're doing now.
var fastAllocate = typeof (string).GetMethods(BindingFlags.NonPublic | BindingFlags.Static)
.First(x => x.Name == "FastAllocateString");
var newString = (string)fastAllocate.Invoke(null, new object[] {20});
Console.WriteLine(newString.Length); // 20
Perhaps another approach is to use unsafe code to copy your allocation into a char* array, then pass this to the string constructor. The string constructor with char* is an extern passed to the underlying C++ implementation. I haven't found a reliable source for that code to confirm, but perhaps this can be faster for you. The non-prod ready code (no checks for potential overflow, add fixed to lock strings from garbage collection, etc) would start with:
public unsafe string MyConcat(List<string> values)
{
int index = 0;
int totalLength = values.Sum(m => m.Length);
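// Caution (added note): stackalloc draws from the thread stack (typically ~1 MB),
// so this sketch only works for small totals; a multi-hundred-MB workload like the
// question's would overflow the stack.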
char* concat = stackalloc char[totalLength + 1]; // Add additional char for null term
foreach (var value in values)
{
foreach (var c in value)
{
concat[index] = c;
index++;
}
}
concat[index] = '\0';
return new string(concat);
}
Now I'm all out of ideas for this :) Perhaps somebody can figure out a method here with marshalling to avoid unsafe code. Since introducing unsafe code requires adding the unsafe flag to compilation, consider adding this piece as a separate dll to minimize your app's security risk if you go down that route.
Unless the average length of the strings is very small, the most efficient approach, given a List<String>, will be to use ToArray() to copy it to a new String[], and pass that to a concatenation or joining method. Doing that may cause a wasted allocation for an array of references if the concatenation or joining method wants to make a copy of its array before it starts, but that would only allocate one reference per string, there will only be one allocation to hold character data, and it will be correctly sized to hold the entire string.
If you're building the data structure yourself, you might gain a little bit of efficiency by initializing a String[] to the estimated required size, populating it yourself, and expanding it as needed. That would save one allocation of a String[] worth of data.
Another approach would be to allocate a String[8192][] and then allocate a String[8192] for each array of strings as you go along. Once you're all done, you'll know exactly what size String[] you need to pass to the Concat method so you can create an array of that exact size. This approach would require a greater quantity of allocations, but only the final String[] and the String itself would need to go on the Large Object Heap.
It's a shame about the constraints you're putting on yourself. The process is very blockily structured, and it's hard to get any flow going. For example, if you expected only an IEnumerable instead of an IList, you might be able to make it easier for the producer of your content. Not only that, you could make your processing benefit from being able to consume the strings only as you need them - and only as they're produced.
This gets you on down the road to some nice asynchrony.
On the other end, they're making you send the whole thing at once. That's tough.
But having said that, and since you're going to run it over and over, etc... I'm wondering if you couldn't create your string buffer or byte buffer or StringBuilder or whatever - and reuse it between executions - allocate the max monster (or progressively bump-reallocate it as needed) one time - and don't let the gc have it. The string constructor will copy it over and over again - but that's a single allocation per cycle. If you're running this so much you're making the machine hot, then it might be worth the hit. I've made precisely that tradeoff in the near past (but I didn't have 5gb to choke on). It felt dirty at first - but ooohh - the throughput spoke loudly!
Also, it may be possible that, while your native API expects a string, you can lie to it - let it think you're giving it a string. You can very probably pass the buffer with a null char at the end - or with the length - depending on the API's particulars. I think one or two commenters spoke to this. In such a case, you will probably need your buffer pinned for the duration of the calls to the native consumer of your big ol' string.
If this is the case, you're down to a one-time allocation of a buffer, repeated copies into it, and that's it. It could go way under your proposed best case.
I have implemented a method to concatenate a List<string> into a single string that performs exactly one allocation.
The following code compiles under .NET 4.6 - Buffer.MemoryCopy wasn't added to .NET until 4.6.
The "unsafe" implementation:
public static unsafe class FastConcat
{
public static string Concat( IList<string> list )
{
string destinationString;
int destLengthChars = 0;
for( int i = 0; i < list.Count; i++ )
{
destLengthChars += list[i].Length;
}
destinationString = new string( '\0', destLengthChars );
unsafe
{
fixed( char* origDestPtr = destinationString )
{
char* destPtr = origDestPtr; // a pointer we can modify.
string source;
for( int i = 0; i < list.Count; i++ )
{
source = list[i];
fixed( char* sourcePtr = source )
{
Buffer.MemoryCopy(
sourcePtr,
destPtr,
long.MaxValue,
source.Length * sizeof( char )
);
}
destPtr += source.Length;
}
}
}
return destinationString;
}
}
The competing implementation is the following "safe" implementation:
public static string Concat( IList<string> list )
{
return string.Concat( list.ToArray() );
}
Memory consumption
The "unsafe" implementation performs exactly one allocation and zero temporary allocations. The List<string> is directly concatenated into a single, freshly allocated string object.
The "safe" implementation requires two copies of the list - one, when I call ToArray() to pass it to string.Concat, and another when string.Concat performs its own internal copy of the array.
When concatenating a 500k element list, the "safe" string.Concat method allocates exactly 8 MB of extra memory in a 64-bit process, which I've confirmed by running the test driver in a memory monitor. This is what we would expect with the array copies performed by the safe implementation.
CPU performance
For small worksets, the unsafe implementation seems to win by about 25%.
The test driver was tested by compiling for 64-bit, installing the program into the native image cache via NGEN, and running from outside the debugger on an unloaded workstation.
From my test driver with a small workset (500k strings each 2-10 chars long):
Unsafe Time: 17.266 ms
Unsafe Time: 18.419 ms
Unsafe Time: 16.876 ms
Safe Time: 21.265 ms
Safe Time: 21.890 ms
Safe Time: 24.492 ms
Unsafe average: 17.520 ms. Safe average: 22.549 ms. Safe takes about 25% longer than unsafe. This is likely due to the extra work the safe implementation has to do, allocating temporary arrays.
...
From my test driver with a large workset (500k strings, each 500-800 chars long):
Unsafe Time: 498.122 ms
Unsafe Time: 513.725 ms
Unsafe Time: 515.016 ms
Safe Time: 487.456 ms
Safe Time: 499.508 ms
Safe Time: 512.390 ms
As you can see, the performance difference with large strings is roughly zero, likely because the time is dominated by the raw copy.
Conclusion
If you don't care about the array copies, the safe implementation is dead simple to implement, and is roughly as fast as the unsafe implementation. If you want to be absolutely perfect with memory usage, use the unsafe implementation.
I've attached the code I used for the test harness:
class PerfTestHarness
{
private List<string> corpus;
public PerfTestHarness( List<string> corpus )
{
this.corpus = corpus;
// Warm up the JIT
// Note that `result` is discarded. We reference it via 'result[0]' as an
// unused parameter to my prints to be absolutely sure it doesn't get
// optimized out. Cheap hack, but it works.
string result;
result = FastConcat.Concat( this.corpus );
Console.WriteLine( "Fast warmup done", result[0] );
result = string.Concat( this.corpus.ToArray() );
Console.WriteLine( "Safe warmup done", result[0] );
GC.Collect();
GC.WaitForPendingFinalizers();
}
public void PerfTestSafe()
{
Stopwatch watch = new Stopwatch();
string result;
GC.Collect();
GC.WaitForPendingFinalizers();
watch.Start();
result = string.Concat( this.corpus.ToArray() );
watch.Stop();
Console.WriteLine( "Safe Time: {0:0.000} ms", watch.Elapsed.TotalMilliseconds, result[0] );
Console.WriteLine( "Memory usage: {0:0.000} MB", Environment.WorkingSet / 1000000.0 );
Console.WriteLine();
}
public void PerfTestUnsafe()
{
Stopwatch watch = new Stopwatch();
string result;
GC.Collect();
GC.WaitForPendingFinalizers();
watch.Start();
result = FastConcat.Concat( this.corpus );
watch.Stop();
Console.WriteLine( "Unsafe Time: {0:0.000} ms", watch.Elapsed.TotalMilliseconds, result[0] );
Console.WriteLine( "Memory usage: {0:0.000} MB", Environment.WorkingSet / 1000000.0 );
Console.WriteLine();
}
}
StringBuilder was designed to concatenate strings efficiently. It has no other purpose. Use the constructor which sets the initial capacity:
int totalLength = CalcTotalLength();
// sufficient capacity
StringBuilder sb = new StringBuilder(totalLength);
But then you say that even StringBuilder allocates intermediate memory, and you want to do better...
These are unusual requirements, so you need to write a function which suits your situation (creating a char[] of appropriate size, then filling it in). I'm sure you are more than capable.
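For what it's worth, a minimal sketch of that char[]-buffer approach (hedged: as the question's Approach #2 notes, the final new string(char[]) copies the buffer again, so this is still roughly 400 MB of allocations and copies for the stated workload):
static string ConcatViaCharArray(IReadOnlyList<string> parts)
{
    int total = 0;
    for (int i = 0; i < parts.Count; i++)
        total += parts[i].Length;

    var buffer = new char[total];              // allocation #1: the working buffer
    int pos = 0;
    foreach (string s in parts)
    {
        s.CopyTo(0, buffer, pos, s.Length);    // copy pass #1
        pos += s.Length;
    }
    return new string(buffer);                 // allocation #2 + copy pass #2
}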
The first two of my answers have already been incorporated into the question. Here is my third answer - highly situation-dependent, but useful.
Third Answer
If, in all these MBs of string, you are getting a lot of strings that are the same, then a smarter way would be to use two dictionaries: one, a Dictionary<int, int>, to store the position and the "Id" of the string at that position, and another to store each "Id" and the index of the actual string in the original string[].
Coincidentally for me, what I am describing is essentially already implemented in C#. It goes kinda like this...
If indeed there are a lot of identical strings, could this be one of the rare cases where String Interning is useful? You are guaranteed to save a considerable amount of your 200 MB target if a lot of matching strings are coming from the content producers.
What is String.Intern?
When you use strings in C#, the CLR does something clever called
string interning. It's a way of storing one copy of any string. If you
end up having a hundred—or, worse, a million—strings with the same
value, it's a waste to take up all of that memory storing the same
string over and over again. String interning is a way around that.
The CLR maintains a table called the intern pool that contains a
single, unique reference to every literal string that's either
declared or created programmatically while your program's running. And
the .NET Framework gives you two useful methods for interacting with
the intern pool: String.Intern() and String.IsInterned().
The way String.Intern() works is pretty straightforward. You pass it a
single string as an argument. If that string is already in the intern
pool, it returns a reference to that string. If it's not already in
the intern pool, it adds it and returns the same reference you passed
into it.
The way to use String Interning is explained in the link. For the sake of completeness of this answer I can add the code here but only if you feel that these solutions are useful.
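For completeness, a rough illustration of the interning idea (hedged: worthwhile only if many producer strings really are identical, and note that interned strings are never collected for the life of the process; the names below are illustrative):
static void Collect(List<string> collected, string produced)
{
    // Duplicates all end up referencing the single pooled instance
    // rather than each holding its own copy of the characters.
    collected.Add(string.Intern(produced));
}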

SHA1Managed.ComputeHash Occasionally Different on Different Servers

Background (You can skip this section)
I have a large amount of data (about 3 MB) that needs to be kept up to date on several hundred machines. Some of the machines run C# and some run Java. The data could change at any time and needs to be propagated to the clients within minutes. The data is delivered in Json format from 4 load balanced servers. These 4 servers are running ASP.NET 4.0 with Mvc 3 and C# 4.0.
The code that runs on the 4 servers has a hashing algorithm which hashes the Json response and then converts the hash to a string. This hash is given to the client. Then, every few minutes, the clients ping the server with the hash, and if the hash is out of date the new Json object is returned. If the hash is still current then a 304 with an empty body is returned.
Occasionally the hashes generated by the 4 boxes are inconsistent across the boxes, which means that the clients are constantly downloading the data (each request could hit a different server).
Code Snippet
Here is the code that is used to generate the hash.
internal static HashAlgorithm Hasher { get; set; }
...
Hasher = new SHA1Managed();
...
Convert.ToBase64String(Hasher.ComputeHash(Encoding.ASCII.GetBytes(jsonString)));
To try and debug the problem I split it out like this:
Prehash = PreHashBuilder.ToString();
ASCIIBytes = Encoding.ASCII.GetBytes(Prehash);
HashedBytes = Hasher.ComputeHash(ASCIIBytes);
Hash = Convert.ToBase64String(HashedBytes);
I then added a route which spits out the above values and used Beyond Compare to compare the differences.
Byte arrays are converted to a string format for BeyondCompare use by using:
private static string GetString(byte[] bytes)
{
StringBuilder sb = new StringBuilder();
foreach (byte b in bytes)
{
sb.Append(b);
}
return sb.ToString();
}
As you can see, the byte array is displayed literally as a sequence of bytes. It is not 'converted'.
The Problem
I discovered that the Prehash and ASCIIBytes values were the same, but the HashedBytes values were different - which meant that the Hash was also different.
I restarted the IIS WebSites on the 4 server boxes several times and, when they had different hashes, compared the values in BeyondCompare. In every case it was the "HashedBytes" value that was different (the result of SHA1Managed.ComputeHash(...)).
The Question
What am I doing wrong? The input to the ComputeHash function is identical. Is SHA1Managed machine dependent? That doesn't make sense, because half the time the 4 machines have the same hash.
I have searched StackOverFlow and Bing but have been unable to find anyone else with this problem. The closest thing I could find was people with problems with their encoding, but I think I have proven that the encoding is not an issue.
Output
I was hoping not to dump everything here because of how long it is, but here is a snippet of the dump I am comparing:
Hash:o1ZxBaVuU6OhE6De96wJXUvmz3M=
HashedBytes:163861135165110831631611916022224717299375230207115
ASCIIBytes:1151169710310146991111094779114100101114831011141181059910147115101114118105991014611511899591151051031101171129511510111411810599101114101102101114101110991011159598979910710111010011111410010111411510111411810599101951185095117114108611041161161125847471051159897991071011101004610910211598101115116971031014699111109477911410010111483101114118105991014711510111411810599101461151189947118505911510510311011711295115101114118105991011141011021011141011109910111595989799107101110100112971211091011101161151161111141011151011141....
Prehash:...
When I compare the two pages on the different servers the ASCII Bytes are identical but the HashedBytes are not. The dump method I use for the bytes does no conversions, it simply dumps each byte out in sequence. I could delimit the bytes with a '.' I suppose.
Follow Up
I have made the b.ToString(CultureInfo.InvariantCulture) change and have made the HashAlgorithm a local variable instead of a static property. I am waiting for the code to deploy to the servers.
I have been trying to duplicate the issue but have been unable to do so once I made the SHA1Managed property a local variable instead of global static.
The problem was with multi-threading. My code was thread safe except for the SHA1Managed instance that I had stored in a static property. I assumed SHA1Managed.ComputeHash would be thread safe underneath, but it is not when a single shared instance is used from multiple threads.
To repeat: calling ComputeHash concurrently on a shared SHA1Managed instance is not thread safe.
MSDN states:
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
The catch is that ComputeHash is an instance member; the guarantee above only covers static members of the type, and storing the instance in a static property (internal or public) does not make its instance methods thread safe.
I would mark #pst as the answer and add a comment to clarify the problem, but #pst made a comment so I can't mark it as the answer.
Thanks for all your input.
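For reference, a minimal sketch of the fix described above - a local, per-call SHA1Managed instead of a shared static instance (the method name is illustrative):
private static string ComputeJsonHash(string jsonString)
{
    // A fresh instance per call; instance members of SHA1Managed are not thread safe.
    using (var hasher = new SHA1Managed())
    {
        return Convert.ToBase64String(hasher.ComputeHash(Encoding.ASCII.GetBytes(jsonString)));
    }
}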
Your GetString method could potentially produce different results on machines of different cultures, because StringBuilder.Append(byte) calls byte.ToString(CultureInfo.CurrentCulture). Try
private static string GetString(byte[] bytes)
{
StringBuilder sb = new StringBuilder();
foreach (byte b in bytes)
{
sb.Append(b.ToString(CultureInfo.InvariantCulture));
}
return sb.ToString();
}
But using a method that doesn't use decimal string representations of the byte values would be better.
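For example, a hex-based dump (a small sketch; BitConverter output is culture-independent and keeps the byte boundaries visible):
private static string GetHexString(byte[] bytes)
{
    // Produces e.g. "A3-56-71-05-..." - unambiguous and identical on every machine.
    return BitConverter.ToString(bytes);
}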
The problem is that your code is likely losing leading 0's. Use the following as your array-to-string code for the comparison; it will produce reliable results and is specifically designed for turning byte arrays into strings so they can be transmitted between machines.
using System.Runtime.Remoting.Metadata.W3cXsd2001;
public byte[] StringToBytes(string value)
{
SoapHexBinary soapHexBinary = SoapHexBinary.Parse(value);
return soapHexBinary.Value;
}
public string BytesToString(byte[] value)
{
SoapHexBinary soapHexBinary = new SoapHexBinary(value);
return soapHexBinary.ToString();
}
Also, I would recommend that you check that the JSON is not subtly different, as that would create a totally different hash. For example, some cultures represent the number "One thousand six hundred point seven" as 1,600.7, 1 600.7, or even 1 600,7 (see this Wikipedia page).

Compare the contents of large files

I need to compare the contents of very large files. Speed of the program is important. I need a 100% match. I read a lot of information but did not find the optimal solution. I am considering two choices, and both have problems.
Compare whole file byte by byte - not fast enough for large files.
File comparison using hashes - not a guaranteed 100% match; two different files can have the same hash.
What would you suggest? Maybe I could make use of threads? Could MemoryMappedFile be helpful?
If you really need to guarantee 100% that the files are 100% identical, then you need to do a byte-to-byte comparison. That's just entailed in the problem - the only hashing method with a 0% risk of false matching is the identity function!
What we're left with are short-cuts that can give us quick answers and let us skip the byte-for-byte comparison some of the time.
As a rule, the only short-cut to proving equality is proving identity. In OO code that would be showing that two objects are in fact the same object. The closest thing with files is if a binding or NTFS junction means two paths point to the same file. This happens so rarely that, unless the nature of the work makes it more common than usual, it's not going to be a net gain to check for it.
So we're left short-cutting on finding mis-matches. Does nothing to increase our passes, but makes our fails faster:
Different size, not byte-for-byte equal. Simples!
If you will examine the same file more than once, then hash it and record the hash. Different hash, guaranteed not equal. The reduction in files that need a one-to-one comparison is massive.
Many file formats are likely to have some areas in common. In particular, the first bytes of many formats tend to be "magic numbers", headers, etc. Either skip them, or skip them at first and check them last (if there is a chance of them being different, but it's low).
Then there's the matter of making the actual comparison as fast as possible. Loading batches of 4 octets at a time into an integer and doing integer comparison will often be faster than octet-per-octet.
Threading can help. One way is to split the actual comparison of the file into more than one operation, but if possible a bigger gain will be found by doing completely different comparisons in different threads. I'd need to know a bit more about just what you are doing to advise much, but the main thing is to make sure the output of the tests is thread-safe.
If you do have more than one thread examining the same files, have them work far from each other. E.g. if you have four threads, you could split the file in four, or you could have one take byte 0, 4, 8 while another takes byte 1, 5, 9, etc. (or 4-octet group 0, 4, 8 etc). The latter is much more likely to have false sharing issues than the former, so don't do that.
Edit:
It also depends on just what you're doing with the files. You say you need 100% certainty, so this bit doesn't apply to you, but it's worth adding for the more general problem that if the cost of a false-positive is a waste of resources, time or memory rather than an actual failure, then reducing it through a fuzzy short-cut could be a net-win and it can be worth profiling to see if this is the case.
If you are using a hash to speed things up (it can at least find some definite mis-matches faster), then Bob Jenkins' SpookyHash is a good choice; it's not cryptographically secure, but if that's not your purpose it creates a 128-bit hash very quickly (much faster than a cryptographic hash, or even than the approaches taken by many GetHashCode() implementations) that is extremely good at avoiding accidental collisions (the sort of deliberate collisions cryptographic hashes prevent is another matter). I implemented it for .NET and put it on NuGet because nobody else had when I found myself wanting to use it.
Serial Compare
Test File Size(s): 118 MB
Duration: 579 ms
Equal? true
static bool Compare(string filePath1, string filePath2)
{
using (FileStream file = File.OpenRead(filePath1))
{
using (FileStream file2 = File.OpenRead(filePath2))
{
if (file.Length != file2.Length)
{
return false;
}
int count;
const int size = 0x1000000;
var buffer = new byte[size];
var buffer2 = new byte[size];
while ((count = file.Read(buffer, 0, buffer.Length)) > 0)
{
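// Note (added): Stream.Read may return fewer bytes than requested; a robust version
// would loop until 'count' bytes have also been read from file2 before comparing.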
file2.Read(buffer2, 0, buffer2.Length);
for (int i = 0; i < count; i++)
{
if (buffer[i] != buffer2[i])
{
return false;
}
}
}
}
}
return true;
}
Parallel Compare
Test File Size(s): 118 MB
Duration: 340 ms
Equal? true
static bool Compare2(string filePath1, string filePath2)
{
bool success = true;
var info = new FileInfo(filePath1);
var info2 = new FileInfo(filePath2);
if (info.Length != info2.Length)
{
return false;
}
long fileLength = info.Length;
const int size = 0x1000000;
Parallel.For(0, fileLength / size, x =>
{
long start = x * size; // long, not int: (int)x * size would overflow for files larger than 2 GB
if (start >= fileLength)
{
return;
}
using (FileStream file = File.OpenRead(filePath1))
{
using (FileStream file2 = File.OpenRead(filePath2))
{
var buffer = new byte[size];
var buffer2 = new byte[size];
file.Position = start;
file2.Position = start;
int count = file.Read(buffer, 0, size);
file2.Read(buffer2, 0, size);
for (int i = 0; i < count; i++)
{
if (buffer[i] != buffer2[i])
{
success = false;
return;
}
}
}
}
});
return success;
}
MD5 Compare
Test File Size(s): 118 MB
Duration: 702 ms
Equal? true
static bool Compare3(string filePath1, string filePath2)
{
byte[] hash1 = GenerateHash(filePath1);
byte[] hash2 = GenerateHash(filePath2);
if (hash1.Length != hash2.Length)
{
return false;
}
for (int i = 0; i < hash1.Length; i++)
{
if (hash1[i] != hash2[i])
{
return false;
}
}
return true;
}
static byte[] GenerateHash(string filePath)
{
MD5 crypto = MD5.Create();
using (FileStream stream = File.OpenRead(filePath))
{
return crypto.ComputeHash(stream);
}
}
tl;dr Compare byte segments in parallel to determine if two files are equal.
Why not both?
Compare with hashes for the first pass, then return to conflicts and perform the byte-by-byte comparison. This allows maximal speed with guaranteed 100% match confidence.
There's no avoiding doing byte-for-byte comparisons if you want perfect comparisons (The file still has to be read byte-for-byte to do any hashing), so the issue is how you're reading and comparing the data.
So there are two things you'll want to address:
Concurrency - Make sure you're reading data at the same time you're checking it.
Buffer Size - Reading the file 1 byte at a time is going to be slow; make sure you're reading it into a decent-sized buffer (about 8 MB should do nicely for very large files)
The objective is to make sure you can do your comparison as fast as the hard disk can read the data, and that you're always reading data with no delays. If you're doing everything as fast as the data can be read from the drive then that's as fast as it is possible to do it since the hard disk read speed becomes the bottleneck.
Ultimately a hash is going to read the file byte by byte anyway ... so if you are looking for an accurate comparison then you might as well do the comparison. Can you give some more background on what you are trying to accomplish? How big are the 'big' files? How often do you have to compare them?
If you have a large set of files and you are trying to identify duplicates, I would try to break down the work by order of expense.
I might try something like the following:
1) Group files by size. Files with different sizes clearly can't be identical. This information is very inexpensive to retrieve. If each group only contains 1 file, you are done - no dupes; otherwise proceed to step 2. (A sketch of this grouping step follows the list.)
2) Within each size group, generate a hash of the first n bytes of the file. Identify a reasonable n that will likely detect differences. Many files have identical headers, so you want to make sure n is greater than that header length. Group by the hashes; if each group contains 1 file, you are done (no dupes within this group), otherwise proceed to step 3.
3) At this point you are likely going to have to do more expensive work like generate a hash of the whole file, or do a byte by byte comparison. Depending on the number of files, and the nature of the file contents, you might try different approaches. Hopefully, the previous groupings will have narrowed down likely duplicates so that the number of files that you actually have to fully scan will be very small.
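A rough sketch of step 1 above (hedged: the names and the LINQ approach are illustrative, not the only way to do it):
// Groups candidate duplicates by file size; groups with a single member are
// dropped, because a unique size cannot have a duplicate.
static IEnumerable<List<string>> GroupBySize(IEnumerable<string> paths)
{
    return paths
        .GroupBy(path => new FileInfo(path).Length)
        .Where(group => group.Count() > 1)
        .Select(group => group.ToList());
}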
To calculate a hash, the entire file needs to be read.
How about opening both files together, and comparing them chunk by chunk?
Pseudo code:
open file A
open file B
while file A has more data
{
if next chunk of A != next chunk of B return false
}
return true
This way you are not loading too much together, and not reading in the entire file if you find a mismatch earlier. You should set up a test that varies the chunk size to determine the right size for optimal performance.

Compress Guids by hashing in small data sets

I'm working on a mobile app and I want to optimise the data that it's receiving from the server (as JSON).
There are 3 lists returned (each containing its own class of objects; the approximate list sizes are 50, 100 and 170). Each object has a Guid id and there is some relation data for each object. E.g.:
o = { Id = "8f088552-5b24-4ba4-a6e5-8958c4353581",
RelatedIds = ["19d2e562-0874-473f-8e05-7052e8defd9a", "615b4c47-199a-4f7d-8268-08ed43d9c891", ... ] }
Is there a way to compress these Guids to something shorter without storing an identity map? Perhaps using a hash function?
You can convert the 16-byte representation of a GUID into a Base 64 string. However you didn't mention a programming language so we can't help further.
A hash function is not recommended here because hash functions are generally lossy.
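Since the rest of this page is C#, a minimal sketch of that Base64 conversion (hedged: this yields 22 useful characters instead of 36, and '+' and '/' may still need replacing if the value has to be URL-safe):
static string ShortenGuid(Guid id)
{
    // 16 bytes -> 24 Base64 chars; the trailing "==" padding carries no information.
    return Convert.ToBase64String(id.ToByteArray()).TrimEnd('=');
}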
No. One of the attributes of (non-cryptographic) hashes is that they collide: hash(a) == hash(b) but a != b. They are a performance optimization in the case where you are doing a lot of equality checks and you expect many false results (because if hash(a) != hash(b) then a != b). A GUID->counter map is probably the best way to get smaller ids here.
You can convert the hex (base16) representation to base64 and remove all the punctuation. You should save about 25% by using base64, and another 4 characters by dropping the punctuation.
Thinking about it some more, I've realized that HTTP compression (if enabled) is probably going to compress that data well enough anyway, so it's not really worth the effort to compress the data manually.
