Currently I'm working on an app that reads the stream from the Twitter API and parses it into objects. At the moment I read the stream and use ReadObject(...) from DataContractJsonSerializer to make my objects, and I write them to a buffer in memory (don't worry, I read them from that buffer asynchronously, and I keep a maximum of 100 objects before I start overwriting old ones).
This works great! However: do I have any guarantee that the reading/writing will keep up with the actual stream? If not, what can I do about it?
You could use a BlockingCollection for the buffer; that way, instead of overwriting old entries, an attempt to add beyond 100 items will block until your reader catches up.
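A minimal sketch of that, with a hypothetical Tweet type standing in for whatever ReadObject(...) gives you:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Tweet { }  // hypothetical stand-in for your deserialized type

class Program
{
    static void Main()
    {
        // Bounded to 100 items: Add blocks when the buffer is full
        // instead of overwriting, so nothing is silently lost.
        var buffer = new BlockingCollection<Tweet>(boundedCapacity: 100);

        var consumer = Task.Factory.StartNew(() =>
        {
            // Blocks until items arrive; completes after CompleteAdding.
            foreach (var tweet in buffer.GetConsumingEnumerable())
                Console.WriteLine("processing {0}", tweet);
        });

        for (int i = 0; i < 1000; i++)
            buffer.Add(new Tweet());  // in the real app: the object from ReadObject(...)

        buffer.CompleteAdding();      // signal end of stream
        consumer.Wait();
    }
}

The trade-off is that nothing is ever dropped, but a slow consumer now stalls the producer instead of overwriting old entries.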
From what I understand, you will not have that guarantee. If you've got a limit of 100 buffered objects, you may get to the point where that buffer is full of new objects, and a new one comes in and overwrites something. Really it's a trade-off: the more you allow in your buffer, the less likely you are to fall behind, at the cost of more RAM.
The only alternative I can see is somehow writing your own scheduler that prioritises processing the buffered objects over reading new ones from the stream.
I'm doing a testbed client/server system (.NET 4.0) that will eventually have two components communicating via streams across some transport medium, but at the moment has the two communicating via a single MemoryStream. I'd never used them before and assumed I could write and read at the same time. However, because there's only one 'cursor', I can't read from the stream until writing has finished and I can Seek() back to zero.
The named pipe classes support full duplex operation, but only if I set one object up as the server and have the other connect to it, which is not something I want to do at this point.
I can get the result I want by creating a byte buffer and having two MemoryStream instances pointing at that buffer, but that falls over when I reach the end of the buffer and get an exception because the memory stream can't be expanded.
I could probably do this by creating a file instead of the array and having two FileStream instances, but that seems a messy way of doing it, and if left running it would eventually fill the disk, since nothing would prune the data that's already been read.
What I'm after is a stream that doesn't support Seek() or Position, maintains separate read and write pointers, buffers data that's written to it, and discards it sometime after it's been read. It feels like reinventing the wheel to roll my own. Surely such a thing is already around somewhere?
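For reference, a minimal sketch of such a stream built over a BlockingCollection<byte[]> (the class name and the CompleteWriting method are my own, not framework members):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

// Forward-only, in-memory producer/consumer stream: no Seek/Position,
// independent read and write sides, data discarded once it has been read.
class ProducerConsumerStream : Stream
{
    private readonly BlockingCollection<byte[]> chunks = new BlockingCollection<byte[]>();
    private byte[] current;       // chunk currently being consumed
    private int currentOffset;

    public override void Write(byte[] buffer, int offset, int count)
    {
        if (count == 0) return;                        // don't queue empty chunks
        var chunk = new byte[count];
        Buffer.BlockCopy(buffer, offset, chunk, 0, count);
        chunks.Add(chunk);                             // copied, so the caller can reuse its buffer
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (current == null || currentOffset == current.Length)
        {
            if (!chunks.TryTake(out current, Timeout.Infinite))
                return 0;                              // writer called CompleteWriting: end of stream
            currentOffset = 0;
        }
        int n = Math.Min(count, current.Length - currentOffset);
        Buffer.BlockCopy(current, currentOffset, buffer, offset, n);
        currentOffset += n;
        return n;
    }

    public void CompleteWriting() { chunks.CompleteAdding(); }

    public override bool CanRead  { get { return true; } }
    public override bool CanWrite { get { return true; } }
    public override bool CanSeek  { get { return false; } }
    public override void Flush() { }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}

Passing a boundedCapacity to the BlockingCollection constructor would additionally cap memory by blocking the writer when the reader falls behind.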
I'm writing a binary data format to file containing a graph of serialized objects. To be more resilient to errors (and to be able to debug problems) I am considering length-prefixing each object in the stream. I'm using C# and a BinaryWriter at the moment, but it is quite a general problem.
The size of each object isn't known until it has been completely serialized, so there are a number of strategies for writing the length prefixes:
1) Use a write buffer with enough space for random access and insert the length at the correct position after the object is serialized.
2) Write each object to its own MemoryStream, then write the length of the buffer and the buffer contents to the main stream.
3) Write a zero length for all objects in a first pass, remember the positions of all the object sizes in the file (a table of object to size), and make a second pass filling in all the sizes.
4) ...?
The total size (and thus the size of the first/outermost object) is typically around 1 MB but can be as large as 50-100 MB. My concern is the performance and memory usage of the process.
Which strategy would be most efficient?
Which strategy would be most efficient?
The only way to determine this is to measure.
My first instinct would be to use #2, while knowing it is likely to add pressure to the GC (or fragmentation to the large object heap once the worker streams exceed the ~85 KB large-object threshold). However, #3 sounds interesting, assuming the complexity of tracking those positions doesn't hurt maintainability.
In the end you need to measure with your data, and consider that unless you have unusual circumstances, performance will be dominated by network or storage, not by processing in memory.
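For scale, a sketch of what #2 amounts to (writeObject stands in for your real serialization code):

using System;
using System.IO;

static class LengthPrefix
{
    // Strategy #2: serialize the object to a scratch MemoryStream,
    // then write "length, payload" to the real output stream.
    public static void WritePrefixed(BinaryWriter output, Action<BinaryWriter> writeObject)
    {
        using (var scratch = new MemoryStream())
        {
            var w = new BinaryWriter(scratch);
            writeObject(w);          // your serialization logic goes here
            w.Flush();

            output.Write((int)scratch.Length);                           // the length prefix
            output.Write(scratch.GetBuffer(), 0, (int)scratch.Length);   // payload, without a ToArray copy
        }
    }
}

Reusing one scratch stream across objects (scratch.SetLength(0) between writes) would keep the GC pressure discussed above in check.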
100 MB is only 2.5% of the memory in a 'small' server (or a standard desktop computer). I'd serialize to memory (e.g. a byte[] array / MemoryStream with BinaryWriter) and then flush that to disk when done.
This would also keep your code clean, compact, and easy to manage, saving you hours of tearing your hair out while seeking back and forth in a large blob :)
Hope this helps!
If you control the format, you could accumulate a list of object sizes and append a directory at the end of your file. However, don't forget that in .NET world your write buffers are copied several times before actually getting transferred to disk anyway. Therefore any gains you make by avoiding (say) an extra MemoryStream will not increase the overall efficiency much.
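A sketch of that layout, under my own conventions (offset/length pairs per object, and the directory's own position in the last eight bytes so a reader can find it):

using System;
using System.Collections.Generic;
using System.IO;

static class DirectoryFormat
{
    public static void WriteWithDirectory(Stream file, IEnumerable<byte[]> serializedObjects)
    {
        var writer = new BinaryWriter(file);
        var entries = new List<Tuple<long, int>>();   // (offset, length) per object

        foreach (var payload in serializedObjects)
        {
            entries.Add(Tuple.Create(file.Position, payload.Length));
            writer.Write(payload);                    // objects written back to back
        }

        long directoryOffset = file.Position;
        writer.Write(entries.Count);
        foreach (var e in entries)
        {
            writer.Write(e.Item1);                    // 8-byte offset
            writer.Write(e.Item2);                    // 4-byte length
        }
        writer.Write(directoryOffset);                // trailer: where the directory starts
        writer.Flush();
    }
}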
We are using protobuf-net for serialization and deserialization of messages in an application whose public protocol is based on Google Protocol Buffers. The library is excellent and covers all our requirements except for this one: we need to find out the serialized message length in bytes before the message is actually serialized.
The question has already been asked a year and a half ago and according to Marc, the only way to do this was to serialize to a MemoryStream and read the .Length property afterwards. This is not acceptable in our case, because MemoryStream allocates a byte buffer behind the scenes and we have to avoid this.
This line from the same response gives us hope that it might be possible after all:
If you clarify what the use-case is, I'm sure we can make it easily
available (if it isn't already).
Here is our use case. We have messages whose size varies between several bytes and two megabytes. The application pre-allocates the byte buffers used for socket operations and for serializing/deserializing, and once the warm-up phase is over, no additional buffers can be created (hint: avoiding GC pressure and heap fragmentation). Byte buffers are essentially pooled. We also want to avoid copying bytes between buffers/streams as much as possible.
We have come up with two possible strategies and both of them require message size upfront:
Use (large) fixed-size byte buffers and serialize all messages that fit into one buffer; send the content of the buffer using Socket.Send. We have to know when the next message won't fit into the buffer and stop serializing. Without the message size, the only way to detect this is to wait for an exception to occur during Serialize.
Use (small) variable-size byte buffers and serialize each message into one buffer; send the content of the buffer using Socket.Send. In order to check out a byte buffer of the appropriate size from the pool, we need to know how many bytes a serialized message will have.
Because the protocol is already defined (we cannot change this) and requires the message length prefix to be a Varint32, we cannot use the SerializeWithLengthPrefix method.
So is it possible to add a method that estimates a message size without serializing into a stream? If it does not fit the current feature set and roadmap of the library but is doable, we are interested in extending the library ourselves. We are also looking for alternative approaches, if there are any.
As noted, this is not immediately available, as the code deliberately tries to do a single pass over the data (especially for IEnumerable<T> etc.). Depending on your data, though, it might already be doing a moderate amount of copying to allow for the fact that sub-messages are also length-prefixed, so they might need juggling. This juggling can be greatly reduced by using the "grouped" sub-format internally in the message, as groups allow forwards-only construction without back-tracking.
So is it possible to add a method that estimates a message size without serializing into a stream?
An estimate is next to useless; since there is no terminator, it needs to be exact. Ultimately, the sizes are a little hard to predict without actually doing it. There was some code in v1 for size prediction, but the single-pass code currently seems preferred, and in most cases the buffer overhead is nominal (there is code in place to re-use the internal buffers so that it doesn't spend all the time allocating buffers for small messages).
If your message internally is forwards-only (grouped), then a cheat might be to serialize to a fake stream that measures, but drops all the data; you'd end up serializing twice, however.
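That measuring stream is only a few lines; a sketch:

using System;
using System.IO;

// Write-only stream that discards the data but records how much was
// written; serialize to this once to learn the exact size, then
// serialize for real.
sealed class CountingStream : Stream
{
    public long BytesWritten { get; private set; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        BytesWritten += count;   // measure, don't store
    }
    public override void WriteByte(byte value) { BytesWritten++; }

    public override bool CanRead  { get { return false; } }
    public override bool CanSeek  { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override void Flush() { }
    public override long Length { get { return BytesWritten; } }
    public override long Position
    {
        get { return BytesWritten; }
        set { throw new NotSupportedException(); }
    }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}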
Re:
and requires message length prefix to be Varint32, we cannot use SerializeWithLengthPrefix method
I'm not quite sure I see the relationship there - it allows a range of formats etc to be used here; perhaps if you can be more specific?
Re copying data around: an idea I played with here is using sub-normal forms for the length prefix. For example, it might be that in most cases 5 bytes is plenty, so rather than juggle, it could leave 5 bytes and then simply overwrite without condensing (since the octet 10000000 still means "zero and continue", even if it is redundant). This would still need to be buffered (to allow backfill), but would not require any movement of the data.
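To make the trick concrete, a sketch of writing a length as a fixed five-byte varint (my own helper, not part of protobuf-net); you reserve five bytes, serialize, then backfill the real length in place:

static class FixedVarint
{
    // Encode a value as exactly five varint bytes: bytes 0-3 carry
    // 7 bits each with the continuation bit (0x80) set, byte 4 carries
    // the remaining 4 bits. Padding with 0x80 is legal because that
    // octet means "zero and continue"; small values just waste bytes.
    public static void Write(byte[] buffer, int offset, uint value)
    {
        for (int i = 0; i < 4; i++)
        {
            buffer[offset + i] = (byte)((value & 0x7F) | 0x80);
            value >>= 7;
        }
        buffer[offset + 4] = (byte)value;  // top bit clear terminates the varint
    }
}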
A final simple idea would be simply: serialize to a FileStream; then write the file length, and the file data. It trades memory usage for IO, obviously.
I have a very large set of binary files where several thousand raw frames of video are being sequentially read and processed, and I’m now looking to optimize it as it appears to be more CPU-bound than I/O-bound.
The frames are currently being read in this manner, and I suspect this is the biggest culprit:
private byte[] frameBuf;
BinaryReader binRead = new BinaryReader(FS);
// Initialize a new buffer of sizeof(frame)
frameBuf = new byte[VARIABLE_BUFFER_SIZE];
// Read sizeof(frame) bytes from the file
frameBuf = binRead.ReadBytes(VARIABLE_BUFFER_SIZE);
Would it make much of a difference in .NET to re-organize the I/O to avoid creating all these new byte arrays with each frame?
My understanding of .NET’s memory allocation mechanism is weak as I am coming from a pure C/C++ background. My idea is to re-write this to share a static buffer class that contains a very large shared buffer with an integer keeping track of the frame’s actual size, but I love the simplicity and readability of the current implementation and would rather keep it if the CLR already handles this in some way I am not aware of.
Any input would be much appreciated.
You don't need to init frameBuf if you use binRead.ReadBytes -- you'll get back a new byte array which will overwrite the one you just created. This does create a new array for each read, though.
If you want to avoid creating a bunch of byte arrays, you could use binRead.Read, which will put the bytes into an array you supply to it. If other threads are using the array, though, they'll see the contents of it change right in front of them. Be sure you can guarantee you're done with the buffer before reusing it.
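A sketch of the reusable-buffer version (binRead and VARIABLE_BUFFER_SIZE are from the question); note that Read may return fewer bytes than requested, so it needs a loop:

// One buffer, allocated once, refilled for every frame.
byte[] frameBuf = new byte[VARIABLE_BUFFER_SIZE];

int total = 0;
while (total < frameBuf.Length)
{
    int n = binRead.Read(frameBuf, total, frameBuf.Length - total);
    if (n == 0) break;   // end of file
    total += n;
}
// The first 'total' bytes of frameBuf are now valid; finish
// processing them before the next refill overwrites the buffer.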
You need to be careful here. It is very easy to get completely bogus test results on code like this, results that never repro in real use. The problem is the file system cache, it will cache the data you read from a file. The trouble starts when you run your test over and over again, tweaking the code and looking for improvements.
The second and subsequent times you run the test, the data no longer comes off the disk. It is still present in the cache; it only takes a memory-to-memory copy to get it into your program. That's very fast: a microsecond or so of overhead plus the time needed to copy, which runs at bus speeds, at least 5 gigabytes per second on modern machines.
Your test will now show that you spend a lot of time allocating the buffer and processing the data, relative to the amount of time spent reading the data.
This will rarely repro in real use. The data won't be in the cache yet; now the sluggish disk drive needs to seek to the data (many milliseconds) and read it off the platter (a couple of dozen megabytes per second, at best). Reading the data now takes a good three or four orders of magnitude longer. If you manage to make the processing step twice as fast, your program will actually only run 0.05% faster. Give or take.
I have an application right now that uses a pipeline design. In the first stage it reads some data and files into a Stream. There are some intermediate stages that do stuff to the stream of data. And then there is a final stage that writes the stream out to somewhere. This all happens serially: one stage completes and then hands off to the next stage.
This has all been working just great, but now the amount of data is starting to get quite a bit larger (hundreds of GB potentially). So I'm thinking that I will need to do something to alleviate this. My initial thought is what I'm looking for feedback on (being an independent developer, I just don't have anyone to bounce ideas off).
I'm thinking of creating a parallel pipeline. The object that starts off the pipeline would create all of the stages and kick each one off in its own thread. When the first stage grows its stream to a certain size, it will pass that stream off to the next stage for processing and start a new stream of its own to continue filling. The idea here is that the final stage will be closing out streams as the first stage is building new ones, so my memory usage would stay lower.
So questions:
1) Any high level thoughts on directions for this design?
2) Is there a simpler approach that you can think of that might apply here?
3) Is there anything existing out there that does something like this that I could reuse (not a product I have to buy)?
Thanks,
MikeD
The producer/consumer model is a good way to proceed, and Microsoft's new Parallel Extensions should provide most of the groundwork for you. Look into the Task object. There's a preview release available for .NET 3.5 / VS2008.
Your first task should read blocks of data from your stream and then pass them on to other tasks. Then have as many tasks in the middle as logically fit. Smaller tasks are (generally) better. The only thing you need to watch out for is making sure the last task saves the data in the order it was read (because the tasks in the middle may finish in a different order from the one they started in).
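A rough sketch of that shape using the released .NET 4 types (Task plus BlockingCollection); all the stage methods here are stand-ins for your real read/transform/write code:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class Pipeline
{
    static void Main()
    {
        // Bounded queues keep memory flat: a fast stage blocks instead
        // of piling up chunks in front of a slow one.
        var readChunks      = new BlockingCollection<byte[]>(boundedCapacity: 4);
        var processedChunks = new BlockingCollection<byte[]>(boundedCapacity: 4);

        var reader = Task.Factory.StartNew(() =>
        {
            foreach (var chunk in ReadChunksFromSource())
                readChunks.Add(chunk);
            readChunks.CompleteAdding();
        });

        var worker = Task.Factory.StartNew(() =>
        {
            foreach (var chunk in readChunks.GetConsumingEnumerable())
                processedChunks.Add(Transform(chunk));
            processedChunks.CompleteAdding();
        });

        var writer = Task.Factory.StartNew(() =>
        {
            foreach (var chunk in processedChunks.GetConsumingEnumerable())
                WriteChunkToDestination(chunk);
        });

        Task.WaitAll(reader, worker, writer);
    }

    // Stand-ins for the real stages:
    static IEnumerable<byte[]> ReadChunksFromSource()
    {
        for (int i = 0; i < 10; i++) yield return new byte[64 * 1024];
    }
    static byte[] Transform(byte[] chunk) { return chunk; }
    static void WriteChunkToDestination(byte[] chunk) { }
}

With a single task per stage, ordering falls out for free; the reordering caveat above only bites once you run multiple workers per stage.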
For the design you've suggested, you'd want to have a good read up on producer/consumer problems if you haven't already. You'll need a good understanding of how to use semaphores in that situation.
Another approach you could try is to create multiple identical pipelines, each in a separate thread. This would probably be easier to code because it has a lot less inter-thread communication. However, depending on your data you may not be able to split it into chunks this way.
In each stage, do you read the entire chunk of data, do the manipulation, then send the entire chunk to the next stage?
If that is the case, you are using a "push" technique where you push the entire chunk of data to the next stage. Are you able to handle things in a more stream-like manner using a "pull" technique? Each stage is a stream, and as you read data from that stream, it pulls data from the previous stream by calling Read on it. As each stream is read, it reads from the previous stream in small bits, processes them, and returns the processed data. The destination stream determines how many bytes to read from the previous stream, and you never have to consume large amounts of memory. This is how applications like BizTalk work. There are some blogs about how BizTalk pipeline streams work, and I think it might be exactly what you want (there's a sketch of the shape after the links below).
Here's a multi-part blog entry that you might find interesting:
Part 1
Part 2
Part 3
Part 4
Part 5
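To make the pull idea concrete, here is a minimal sketch of a stage as a read-only Stream wrapper (TransformByte stands in for whatever the stage actually does):

using System;
using System.IO;

// A pipeline stage in the "pull" style: reading from this stream
// pulls a small block from the previous stage, transforms it in
// place, and hands it on. No stage ever buffers more than one block.
class TransformStage : Stream
{
    private readonly Stream inner;
    public TransformStage(Stream inner) { this.inner = inner; }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int n = inner.Read(buffer, offset, count);   // pull from the previous stage
        for (int i = 0; i < n; i++)
            buffer[offset + i] = TransformByte(buffer[offset + i]);
        return n;
    }

    private static byte TransformByte(byte b) { return b; }  // stand-in for real work

    public override bool CanRead  { get { return true; } }
    public override bool CanSeek  { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override void Flush() { }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}

Stages then compose like new TransformStage(new TransformStage(sourceStream)), and the final stage simply copies the outermost stream to its destination, pulling everything through in small blocks.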