I have a protocol buffer setup like this:
[ProtoContract]
public class Foo
{
    [ProtoMember(1)]
    public Bar[] Bars;
}
A single Bar gets encoded to a 67-byte protocol buffer. This sounds about right, because I know that a Bar is pretty much just a 64-byte array, plus 3 bytes of overhead for length prefixing.
However, when I encode a Foo with an array of 20 Bars it takes 1362 bytes. 20 * 67 is 1340, so there are 22 bytes of overhead just for encoding an array!
Why does this take up so much space? And is there anything I can do to reduce it?
This overhead is quite simply the information it needs to know where each of the 20 objects starts and ends. There is nothing you can do differently here without breaking the format (i.e. doing something contrary to the spec).
If you really want the gory details:
An array or list is (if we exclude "packed", which doesn't apply here) simply a repeated block of sub-messages. There are two layouts available for sub-messages: strings and groups. With a string, the layout is:
[header][length][data]
where header is the varint-encoded mash of the wire-type and field-number (hex 0A in this case with field 1), length is the varint-encoded size of data, and data is the sub-object itself. For small objects (data less than 128 bytes) this often means 2 bytes overhead per object, depending on a: the field number (fields above 15 take more space), and b: the size of the data.
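For example, a 3-byte sub-message in field 1 would be laid out as 0A 03 xx xx xx: 0A is (field number 1 << 3) | wire-type 2, 03 is the payload length, and the three xx bytes are the payload, i.e. 2 bytes of overhead for that object.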
With a group, the layout is:
[header][data][footer]
where header is the varint-encoded mash of the wire-type and field-number (hex 0B in this case with field 1), data is the sub-object, and footer is another varint mash to indicate the end of the object (hex 0C in this case with field 1).
Groups are less favored generally, but they have the advantage that they don't incur any overhead as data grows in size. For small field-numbers (less than 16) again the overhead is 2 bytes per object. Of course, you pay double for large field-numbers, instead.
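If you want to experiment with the two layouts, I believe protobuf-net lets you opt into groups per member via DataFormat. A minimal sketch, reusing the types from the question:

using ProtoBuf;

[ProtoContract]
public class Foo
{
    // Encode each Bar as a group (header + data + footer)
    // instead of a length-prefixed string.
    [ProtoMember(1, DataFormat = DataFormat.Group)]
    public Bar[] Bars;
}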
By default, arrays aren't actually passed as arrays, but as repeated members, which have a little more overhead.
So I'd guess you actually have 1 byte of overhead for each repeated array element, plus 2 extra bytes overhead on top.
You can lose the per-element overhead by using a "packed" array, although note that packed encoding only applies to repeated primitive types, not to sub-messages like Bar. protobuf-net supports this: http://code.google.com/p/protobuf-net/
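For reference, opting into packed encoding in protobuf-net looks like the sketch below. It uses a repeated primitive field, since, as noted above, packing wouldn't apply to the Bar sub-messages:

using ProtoBuf;

[ProtoContract]
public class Numbers
{
    // Packed: one header plus one total length for the whole array,
    // instead of a header per element.
    [ProtoMember(1, IsPacked = true)]
    public int[] Values;
}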
The documentation for the binary format is here: http://code.google.com/apis/protocolbuffers/docs/encoding.html
Related
I have a huge list of strings. I want to hold this list in a memory-efficient way. I tried holding it in a List, but it uses 24 bytes for each string that has 5 characters, so there must be some overhead somewhere.
Then I tried holding it in a string array. The memory usage was a bit more efficient, but I still have a memory usage problem.
How can I hold a list of strings? I know that "C# reserves 2 bytes for each character". I want a string which has 5 characters to take 5*2 = 10 bytes. But why does it use 24 bytes for this?
Thanks for any help.
Firstly, note that the difference between a List<string> that was created at the correct size, and a string[] (of the same size) is inconsequential for any non-trivial size; a List<T> is really just a fancy wrapper for T[] with insert/resize/etc capabilities. If you only need to hold the data: T[] is fine, but so is List<T> usually.
As for the string - it isn't C# that reserves anything - it is .NET that defines that a string is an object, which is internally a length (int) plus memory for char data, 2 bytes per char. But: objects in .NET have object headers, padding/alignment, etc - and importantly: a minimum size. So yes, they take more memory than just the raw data you're trying to represent.
If you only need the actual data, you could perhaps store the data not as string, but as raw memory - either a simple large byte[] or byte*, or as a twinned pair of int[]/int* (for lengths and/or offsets into the page) and a char[]/char* (for the actual character data), or a byte[]/byte* if you can work with encoded data (i.e. you're mainly interested in IO work). However, working with such a form will be hugely inconvenient - virtually no common APIs will want to play with you unless you are talking in string. There are some APIs that accept raw byte/char data, but they are largely the encoder/decoder APIs, and some IO APIs. So again: unless that's what you're doing: it won't end well. Very recently, some Span<char> / Span<byte> APIs have appeared which would make this slightly less inconvenient (if you can use the latest .NET Core builds, etc), but: I strongly suspect that in most common cases you're just going to have to accept the string overhead and live with it.
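As a very rough sketch of that "twinned arrays" idea (the StringPage name and shape are mine, not a standard API): all the character data lives in one big char[] page, and each entry is just an (offset, length) pair into it, so the per-string object overhead disappears.

using System.Collections.Generic;

class StringPage
{
    private readonly char[] page;     // all character data, back to back
    private readonly int[] offsets;   // start of each entry in the page
    private readonly int[] lengths;   // length of each entry

    public StringPage(IReadOnlyList<string> source)
    {
        offsets = new int[source.Count];
        lengths = new int[source.Count];
        int total = 0;
        for (int i = 0; i < source.Count; i++) total += source[i].Length;
        page = new char[total];
        int pos = 0;
        for (int i = 0; i < source.Count; i++)
        {
            offsets[i] = pos;
            lengths[i] = source[i].Length;
            source[i].CopyTo(0, page, pos, source[i].Length);
            pos += source[i].Length;
        }
    }

    // Materializes a string on demand; as noted, most APIs will still want string.
    public string Get(int index) => new string(page, offsets[index], lengths[index]);
}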
Minimum size of any object in 64-bit .NET is 24 bytes.
In 32-bit it's a bit smaller, but there are always at least 8 bytes for the object header, and here we'd expect the string to store its length (4 bytes). 8 + 4 + 10 = 22. I'm guessing it also wants/needs all objects to be 4-byte aligned. So if you're storing them as objects, you're not going to get a much smaller representation.
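If you want to observe the overhead for yourself, a crude way (numbers vary by runtime, bitness and allocator state) is to compare GC.GetTotalMemory before and after allocating a batch of strings:

using System;

class Program
{
    static void Main()
    {
        const int N = 100000;
        var keep = new string[N];
        long before = GC.GetTotalMemory(true);
        for (int i = 0; i < N; i++)
            keep[i] = new string('x', 5);   // distinct 5-char strings, as in the question
        long after = GC.GetTotalMemory(true);
        // Rough bytes per string; expect well above the 10 bytes of raw char data.
        Console.WriteLine((after - before) / (double)N);
        GC.KeepAlive(keep);
    }
}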
If it's all 7-bit ASCII type characters, you could store them as arrays of bytes but each array would still take up some space.
Your best route (I appreciate this is more comment than answer) is to come up with different processing algorithms that don't require them all to be in memory at the same time in the first place.
So I've been working on this project for a while now, involving LSB steganography. Really fun stuff. Anyways, I just finished writing the code for embedding and extracting files from an image (instead of just plaintext), and I'm running into this problem. I can recognize the MIME type and extension of the bytes, but because the embedded file doesn't usually take up all of the LSBs of the image, there's a lot of garbage data. So I have the extracted file plus some garbage in the byte array right after it. I need to figure out how to cut these off, so that the file being exported is the correct, smaller size.
TLDR: I have a byte array with a recognized file in it, with some additional random bytes. How do I find out where the file ends and the random bytes begin?
Remember this is all in C#.
Any advice is appreciated.
Link to my project for reference: https://github.com/nicosogangstar/Steg
Generally you have two options.
End of stream marker
This is the more direct approach of the two, but it may lack some versatility depending on what data you want to hide. After you embed your data, continue by embedding a unique sequence of bits/bytes such that you know it cannot be prematurely encountered in the data before it. As you extract the bits, you can stop reading once you encounter this sequence. If you expect to hide only readable text, i.e. bytes with ASCII codes between 32 and 127, your marker can be as short as eight 0s or eight 1s. However, if you intend to hide arbitrary binary data, where any byte value can appear, you may accidentally encounter the marker while extracting legitimate data and thus halt the process prematurely.
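A sketch of the marker approach (the names here are illustrative, not from the project): scan the extracted bytes for the first occurrence of the marker and truncate there.

using System;

static class Stego
{
    public static byte[] TrimAtMarker(byte[] extracted, byte[] marker)
    {
        // Naive scan for the first occurrence of the marker sequence.
        for (int i = 0; i <= extracted.Length - marker.Length; i++)
        {
            bool match = true;
            for (int j = 0; j < marker.Length; j++)
            {
                if (extracted[i + j] != marker[j]) { match = false; break; }
            }
            if (match)
            {
                var payload = new byte[i];
                Array.Copy(extracted, payload, i);   // everything before the marker
                return payload;
            }
        }
        return extracted;   // no marker found; treat this as an error in practice
    }
}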
Header information
You can add a header preceding the data, e.g., another 16-24 bits (or any other amount) which can be translated to a number that tells you how many bits/bytes/pixels to read before stopping. For example, if you want to hide a byte array of size 1000, first embed 2 bytes encoding the length of the secret and then follow with the actual data. More specifically, split the length into 2 bytes, where the first byte holds the 8th to 15th bits and the second byte holds the 0th to 7th bits of the number 1000 in binary.
00000011 11101000      1000 in binary
       3      232      byte values
You can embed all sorts of information in a header, such as whether the data is encrypted or compressed with some algorithm, the original filename of the data, how many LSBs to read when extracting the information, etc.
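A sketch of the header approach with just a 2-byte big-endian length (so payloads up to 65,535 bytes; the names are illustrative):

using System;

static class StegoHeader
{
    // Prepend a 2-byte big-endian length before embedding.
    public static byte[] AddHeader(byte[] payload)
    {
        var framed = new byte[payload.Length + 2];
        framed[0] = (byte)(payload.Length >> 8);     // for 1000: 00000011 (3)
        framed[1] = (byte)(payload.Length & 0xFF);   // for 1000: 11101000 (232)
        Array.Copy(payload, 0, framed, 2, payload.Length);
        return framed;
    }

    // After extraction, read the length and drop the trailing garbage bytes.
    public static byte[] StripHeader(byte[] extracted)
    {
        int length = (extracted[0] << 8) | extracted[1];
        var payload = new byte[length];
        Array.Copy(extracted, 2, payload, 0, length);
        return payload;
    }
}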
I'm building a simulator for a 40-card deck game. The deck is divided into 4 suits, each one with 10 cards. Since there's only 1 suit that's different from the others (let's say, hearts), I've thought of a quite convenient way to store a set of 4 cards with the same value in 3 bits: the first two indicate how many of the non-heart cards of that value are left, and the last one is a marker that tells whether the heart card of that value is still in the deck.
So,
{7h 7c 7s} = 101
That allows me to store the whole deck in 30 bits of memory instead of 40. Now, when I was programming in C, I'd have allocated 3 chars (1 byte each = 32 bits), and played with the values with bit operations.
In C# I can't do that, since chars are 2 bytes each and playing with bits is much more of a pain, so the question is: what's the smallest amount of memory I'll have to use to store the data required?
PS: Keep in mind that I may have to allocate 100k+ of these decks in system memory, so saving 10 bits each is quite a lot
in C, I'd have allocated 3 chars ( 1 byte each = 32 bits)
3 bytes gives you 24 bits, not 32... you need 4 bytes to get 32 bits. (Okay, some platforms have non-8-bit bytes, but they're pretty rare these days.)
In C# I can't do that, since chars are 2 bytes each
Yes, so you use byte instead of char. You shouldn't be using char for non-textual information.
and playing with bits is much more of a pain
In what way?
But if you need to store 30 bits, just use an int or a uint. Or, better, create your own custom value type which backs the data with an int, but exposes appropriate properties and constructors to make it easier to work with.
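A minimal sketch of that custom value type, using the layout from the question (per card value, the two high bits of each 3-bit group count the non-heart cards and the low bit marks the heart; the Deck name and members are mine):

public readonly struct Deck
{
    private readonly uint bits;   // 10 values x 3 bits = 30 bits, packed into one uint

    public Deck(uint bits) { this.bits = bits; }

    // Number of non-heart cards of the given value (0-9) still in the deck.
    public int NonHeartCount(int value) => (int)((bits >> (value * 3 + 1)) & 0b11u);

    // Whether the heart card of the given value is still in the deck.
    public bool HasHeart(int value) => ((bits >> (value * 3)) & 1u) != 0;

    // "Draw" the heart of a given value, returning a new immutable deck.
    public Deck WithoutHeart(int value) => new Deck(bits & ~(1u << (value * 3)));
}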
PS: Keep in mind that I may have to allocate 100k+ of these decks in system memory, so saving 10 bits each is quite a lot
Is it a significant amount though? If it turned out you needed to store 8 bytes per deck instead of 4 bytes, that means 800K instead of 400K for 100,000 of them. Still less than a megabyte of memory. That's not that much...
In C#, unlike in C/C++, the concept of a byte is not overloaded with the concept of a character.
Check out the byte datatype, in particular byte[], which many of the APIs in the .NET Framework have special support for.
C# (and modern versions of C) have a type that's exactly 8 bits: byte (or uint8_t in C), so you should use that. C char usually is 8 bits, but it's not guaranteed and so you shouldn't rely on that.
In C#, you should use char and string only when dealing with actual characters and strings of characters, don't treat them as numbers.
I have data of 340 bytes in a string, mostly consisting of symbols and numbers, like "føàA¹º#ƒUë5§Ž§".
I want to compress it into 250 or fewer bytes to save it on my RFID card.
As this data is related to finger print temp., I want lossless compression.
So is there any algorithm I can implement in C# to compress it?
If the data is strictly numbers and signs, I highly recommend changing the numbers into int-based values. E.g.:
+12939272-23923+927392
can be compressed into 3 32-bit integers, which is 22 bytes => 12 bytes. Picking the right integer size (whether 32-bit, 24-bit, or 16-bit) should help.
If the integer size varies greatly, you could possibly use 8 bits to begin with, and use the value 255 to specify that the next 8 bits become the more significant bits of the integer, making it 15-bit.
Alternatively, you could identify the most frequent character and assign 0 to it; the second most frequent character gets 10, and the third 110. This is a very crude compression, but if your data is very limited, it might just do the job for you. A sketch of the first idea follows below.
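Assuming the data really is just sign-prefixed numbers, parse each number out of the string and store it as a 4-byte integer (NumberPacker is an illustrative name):

using System;
using System.Collections.Generic;

static class NumberPacker
{
    public static byte[] Pack(string data)   // e.g. "+12939272-23923+927392"
    {
        var bytes = new List<byte>();
        int i = 0;
        while (i < data.Length)
        {
            int start = i++;   // consume the leading '+' or '-'
            while (i < data.Length && char.IsDigit(data[i])) i++;
            int value = int.Parse(data.Substring(start, i - start));
            bytes.AddRange(BitConverter.GetBytes(value));   // 4 bytes per number
        }
        return bytes.ToArray();   // 22 characters in, 12 bytes out for 3 ints
    }
}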
Is there any other information you know about your string? For instance, does it contain certain characters more often than others? Does it contain all 256 possible characters or just a subset of them?
If so, Huffman coding may help you; see this or this other link for implementations in C#.
To be honest, it just depends on what your input string looks like. What I'd do is try rar, zip and 7zip (LZMA) with very small dictionary sizes (otherwise they'll just use up too much space for preprocessed information) and see how big the raw compressed output is (you'll probably have to use their libraries in order to strip the headers and conserve space). If any of them produce a file under 250 bytes, then find the C# library for it and there you go.
Can anyone tell me how many bytes the below string will take up?
string abc = "a";
From my article on strings:
In the current implementation at least, strings take up 20+(n/2)*4 bytes (rounding the value of n/2 down), where n is the number of characters in the string. The string type is unusual in that the size of the object itself varies. The only other classes which do this (as far as I know) are arrays. Essentially, a string is a character array in memory, plus the length of the array and the length of the string (in characters). The length of the array isn't always the same as the length in characters, as strings can be "over-allocated" within mscorlib.dll, to make building them up easier.
(StringBuilder does this, for instance.) While strings are immutable to the outside world, code within mscorlib can change the contents, so StringBuilder creates a string with a larger internal character array than the current contents requires, then appends to that string until the character array is no longer big enough to cope, at which point it creates a new string with a larger array.
The string length member also contains a flag in its top bit to say whether or not the string contains any non-ASCII characters. This allows for extra optimisation in some cases.
I suspect that was written before I had a chance to work with a 64-bit CLR; I suspect in 64-bit land each string takes up either 4 or 8 more bytes.
EDIT: I wrote up a blog post more recently which includes 64-bit information (and contradicts the above slightly for x86...)
Basically, each string object requires a constant 20 bytes for the object data.
The buffer requires 2 bytes per character.
So the memory usage estimate for a string, in bytes, is: 20 + (2 * Length).
So normally the memory used in the CLR for this string is 22 bytes.
However, when we pass or send this string to another end, or in any other usage, we do not need that much memory (we never need the 20 bytes of object data). So it depends on the type of encoding you select when you use it.
With the default encoding, it takes 1 byte per character.
So the answer is 1 byte with the default encoding.
You can check with this code:
Encoding.Default.GetBytes("a"); //It will give you a byte array of size 1.
Encoding.Default.GetBytes("ABC"); //It will give you a byte array of size 3.
If you're asking about the size of the string object, then it is hard to say; without a debugger it is impossible to tell exactly what it is. string uses pointers internally.
If you're asking about the size of the sequence of chars that it contains, then it is 2 bytes, because strings are stored in UTF-16 and all chars in the Basic Multilingual Plane are coded with two bytes.