What are the risks in manipulating a provided stream multiple times - c#

Given a stream by a user, where we expect them to manage its disposal through a typical using block:
using (var stream = new MemoryStream())
{
    MyMethod(stream);
}
Is there any risk in copying back to the stream after working on it? Specifically, we have a method that populates the data, but we have a conditional need to sort the data. So MyMethod is something like this:
void MyMethod(Stream stream, bool sort = false)
{
    // Stream is populated
    stream.Position = 0;
    if (sort)
    {
        Sort(stream);
    }
}
void Sort(Stream stream)
{
    using (var sortedStream = new MemoryStream())
    {
        // Sort per requirements into the new sorted local stream
        sortedStream.Position = 0;
        // Is this safe? Any risk of losing data or memory leak?
        sortedStream.CopyTo(stream);
    }
}
The thing to notice is that we populate the stream provided by the user and then sort it into a local stream. Since the local stream is owned by the local method, it is cleaned up there; conversely, we can NOT clean up the provided stream, yet we want to populate it with the local results.
To reiterate my question, is there anything wrong with this? Is there a risk of garbage data being in the stream or some other issue I am not thinking of?

Stream is an abstract class, and has a lot of different implementations. Not all streams can be written to, so in some cases the code may not work as expected, or could crash.
sortedStream.Position = 0;
sortedStream.CopyTo(stream);
You would need to check the CanSeek and CanWrite properties beforehand:
if (sortedStream.CanSeek && stream.CanWrite)
{
    sortedStream.Position = 0;
    sortedStream.CopyTo(stream);
}
else
{
    // not supported
}

Whether a given stream supports moving the position around and rewriting data over itself depends on the specific stream. Some support it and some don't: not all streams allow changing their position, not all are able to write, not all are able to overwrite existing data, and some are able to do all of those things.
A well-behaved stream shouldn't leak resources if you attempt any of those unsupported operations; it ought to just throw an exception. Of course, technically a custom stream could do whatever it wants, so you most certainly could write your own stream that leaks resources when changing the position. But at that point the resource leak is a bug in that stream's implementation, not in your code that sorts the data in the stream. The code you've shown here only needs to worry about a stream throwing an exception if an unsupported operation is performed.
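One extra wrinkle worth flagging for the "garbage data" part of the question: if the sort routine reads the caller's stream first, that stream's position will be at the end, so CopyTo would append the sorted bytes after the originals rather than replace them. A minimal sketch of a hardened copy-back, assuming the destination is seekable and writable:
void Sort(Stream stream)
{
    using (var sortedStream = new MemoryStream())
    {
        // ...sort per requirements into sortedStream...

        sortedStream.Position = 0;
        stream.Position = 0;                   // rewind so the copy overwrites instead of appending
        sortedStream.CopyTo(stream);
        stream.SetLength(sortedStream.Length); // truncate any stale bytes beyond the new data
    }
}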

I have no idea why you don't sort the data before you insert it into the stream, or why you use a stream at all when your access seems to be random-access, but technically it's fine. You can do it. It will work.

Related

Pass-Through Stream (not having to save to memory in the middle)

In my c# program, I am working with an object (third-party, so I have no way of changing its source code) that takes a stream in its constructor (var myObject = new MyObject(stream);).
My challenge is that I need to make some changes to some of the lines of my file before it is given to this object.
To do this, I wrote the following code (which does work):
using (var st = new MemoryStream())
using (var reader = new StreamReader(path))
{
    using (var writer = new StreamWriter(st))
    {
        while (!reader.EndOfStream)
        {
            var currentLine = reader.ReadLine().Replace(@"XXXX", @"YYYY");
            writer.Write(currentLine);
        }
        writer.Flush();
        st.Position = 0;
        var myObject = new MyObject(st);
    }
}
So, this does work, but it seems so inefficient, since it is no longer really streaming the information but storing it all in the memory stream before passing it through to the object.
Is there a way to create a transform / "pass-through stream" that will:
Read in each small amount from the streamreader
Make the adjustment on that small amount
Stream that amount through
So there won't be a large block of memory in the middle?
Thanks!!
You just create your own class that derives from Stream, and implements the required methods that your MyObject needs.
Implement all your Read/ReadAsync methods by calling the matching Read/ReadAsync methods on your StreamReader. You can then modify the data as it passes through.
You'd have a bit of work to do if the modification requires some sort of understanding of the data as you'll be working in unknown quantities of bytes at a time. You would need to buffer the data to an extent required to do your necessary transformations, but how you do that is very specific to the transformation of the stream that you want to achieve.
Unfortunately the design of the C# Stream class is loaded down with lots of baggage, so implementing the entire API of Stream is quite a bit of work, but chances are your MyObject only calls one or two methods on Stream so a bit of experimentation should soon get it working.
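A minimal sketch of that idea, assuming MyObject only calls Read (the class name is illustrative, and the line-by-line buffering mirrors the Replace in the original code; a real implementation would need more care with encodings and Read semantics):
using System;
using System.IO;
using System.Text;

class ReplacingStream : Stream
{
    private readonly StreamReader _reader;
    private byte[] _pending = new byte[0];
    private int _consumed;

    public ReplacingStream(StreamReader reader)
    {
        _reader = reader;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        // Refill the internal buffer one transformed line at a time.
        if (_consumed == _pending.Length)
        {
            string line = _reader.ReadLine();
            if (line == null)
                return 0; // end of underlying file
            _pending = Encoding.UTF8.GetBytes(line.Replace("XXXX", "YYYY") + Environment.NewLine);
            _consumed = 0;
        }
        int n = Math.Min(count, _pending.Length - _consumed);
        Buffer.BlockCopy(_pending, _consumed, buffer, offset, n);
        _consumed += n;
        return n;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}
Usage would then be:
var myObject = new MyObject(new ReplacingStream(new StreamReader(path)));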

Do C# Streams behave like pointers?

I've this class
class CacheHelper
{
    private Dictionary<string, MemoryStream> cacher;
    // ... other props, functions...etc
    public MemoryStream GetImageStream(string fileName)
    {
        if (!cacher.ContainsKey(fileName))
            return null;
        MemoryStream memStream = null;
        cacher.TryGetValue(fileName, out memStream); // TODO
        return memStream;
    }
}
and I'm using it like this:
Stream fileStream = _cacheHelper.GetImageStream(filePath);
and when I'm done I'm closing fileStream like this:
if (fileStream != null)
    fileStream.Dispose();
I'm not sure what's going on underneath the Stream implementation in C#, so I'm afraid that by closing fileStream I'm also closing the original MemoryStream (the one inside the internal cacher dictionary), i.e. that it's implemented on top of pointers, or something.
A MemoryStream is a class. All classes are reference types, which means the variable you have is indeed a kind of pointer to the actual instance. What happens is that you pass a reference to your memory stream somewhere; if you don't want that stream closed, you should not close it.
A better implementation might be to either cache byte arrays or handle everything using streams inside the cache itself. Passing a stateful object from your cache to somewhere it's used, and expecting it to keep its original state, is not such a good design. It's very easy to make mistakes that way.
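As a sketch of the byte-array variant (a hypothetical rework of GetImageStream; each caller gets a fresh, independently disposable stream over the cached data):
private Dictionary<string, byte[]> cacher;

public MemoryStream GetImageStream(string fileName)
{
    byte[] data;
    if (!cacher.TryGetValue(fileName, out data))
        return null;
    // Each caller gets its own stream over the cached bytes;
    // disposing it does not affect the cache. The second argument
    // makes the stream read-only so callers can't mutate the cache.
    return new MemoryStream(data, false);
}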

Reading a file with FileStream and FILE_FLAG_NO_BUFFERING

A little background: I've been experimenting with using the FILE_FLAG_NO_BUFFERING flag when doing IO with large files. We're trying to reduce the load on the cache manager in the hope that with background IO, we'll reduce the impact of our app on user machines. Performance is not an issue. Being behind the scenes as much as possible is a big issue. I have a close-to-working wrapper for doing unbuffered IO but I ran into a strange issue. I get this error when I call Read with an offset that is not a multiple of 4.
Handle does not support synchronous operations. The parameters to the FileStream constructor may need to be changed to indicate that the handle was opened asynchronously (that is, it was opened explicitly for overlapped I/O).
Why does this happen? And doesn't this message contradict itself? If I add the Asynchronous file option I get an IOException (The parameter is incorrect.)
I guess the real question is what these requirements, http://msdn.microsoft.com/en-us/library/windows/desktop/cc644950%28v=vs.85%29.aspx, have to do with multiples of 4.
Here is the code that demonstrates the issue:
FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;
int MinSectorSize = 512;
byte[] buffer = new byte[MinSectorSize * 2];
int i = 0;
while (i < MinSectorSize)
{
    try
    {
        using (FileStream fs = new FileStream(@"<some file>", FileMode.Open, FileAccess.Read, FileShare.None, 8, FileFlagNoBuffering | FileOptions.Asynchronous))
        {
            fs.Read(buffer, i, MinSectorSize);
            Console.WriteLine(i);
        }
    }
    catch { }
    i++;
}
Console.ReadLine();
When using FILE_FLAG_NO_BUFFERING, the documented requirement is that the memory address for a read or write must be a multiple of the physical sector size. In your code, you've allowed the address of the byte array to be randomly chosen (hence unlikely to be a multiple of the physical sector size) and then you're adding an offset.
The behaviour you're observing is that the call works if the offset is a multiple of 4. It is likely that the byte array is aligned to a 4-byte boundary, so the call is working if the memory address is a multiple of 4.
Therefore, your question can be rewritten like this: why is the read working when the memory address is a multiple of 4, when the documentation says it has to be a multiple of 512?
The answer is that the documentation doesn't make any specific guarantees about what happens if you break the rules. It may happen that the call works anyway. It may happen that the call works anyway, but only in September on even-numbered years. It may happen that the call works anyway, but only if the memory address is a multiple of 4. (It is likely that this depends on the specific hardware and device drivers involved in the read operation. Just because it works on your machine doesn't mean it will work on anybody else's.)
It probably isn't a good idea to use FILE_FLAG_NO_BUFFERING with FileStream in the first place, because I doubt that FileStream actually guarantees that it will pass the address you give it unmodified to the underlying ReadFile call. Instead, use P/Invoke to call the underlying API functions directly. You may also need to allocate your memory this way, because I don't know whether .NET provides any way to allocate memory with a particular alignment or not.
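A rough sketch of that P/Invoke route (these are the standard Win32 signatures; the wrapper class name is illustrative, and sector-aligned buffer allocation plus error handling are left out):
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class UnbufferedIo
{
    const uint GENERIC_READ = 0x80000000;
    const uint OPEN_EXISTING = 3;
    const uint FILE_FLAG_NO_BUFFERING = 0x20000000;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    // Declared for callers to read through the raw handle; the IntPtr
    // buffer lets you control the exact (aligned) memory address.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool ReadFile(
        SafeFileHandle hFile, IntPtr lpBuffer, uint nNumberOfBytesToRead,
        out uint lpNumberOfBytesRead, IntPtr lpOverlapped);

    public static SafeFileHandle OpenUnbuffered(string path)
    {
        // With this flag, offsets, buffer addresses, and read lengths
        // must all be sector-aligned per the documented requirements.
        return CreateFile(path, GENERIC_READ, 0, IntPtr.Zero,
                          OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, IntPtr.Zero);
    }
}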
Just call CreateFile directly with FILE_FLAG_NO_BUFFERING and then close it before opening with FileStream to achieve the same effect.

Strings appear to be sticking around too long

In short, I've got an application that converts a flat data file into an XML file. It does this by populating objects and then serializing them to XML.
The problem I'm running into is that the Garbage Collector does not seem to be taking care of the serialized strings. Files with 3500 records run into OutOfMemoryExceptions before they finish. Something is fishy, indeed.
When I take the serialization out of the mix and simply pass an empty string, the memory consumption remains as expected, so I've ruled out the possibility that my intermediate objects (between flat file and xml) are the problem here. They seem to be collected as expected.
Can anyone help? How do I make sure these strings are disposed of properly?
Update: Some sample code
// myObj.Serialize invokes an XmlSerializer instance to handle its work
string serialized = myObj.Serialize();
myXmlWriter.WriteRaw(serialized);
This is basically where the problem is occurring--if I take the serialized string out of play, the memory problems go away too, even though I'm still transforming the flat file into objects, one at a time.
Update 2: Serialize method
public virtual string Serialize()
{
    System.IO.StreamReader streamReader = null;
    System.IO.MemoryStream memoryStream = null;
    using (memoryStream = new MemoryStream())
    {
        memoryStream = new System.IO.MemoryStream();
        Serializer.Serialize(memoryStream, this);
        memoryStream.Seek(0, System.IO.SeekOrigin.Begin);
        using (streamReader = new System.IO.StreamReader(memoryStream))
        {
            return streamReader.ReadToEnd();
        }
    }
}
You need to make sure they aren't referenced anywhere. Before an OutOfMemoryException is thrown, the GC is run. If it isn't recovering that memory, that means something is still holding on to it. Like others said, if you post some code, we might be able to help. Otherwise you can use a profiler or WinDbg/SOS to help figure out what is holding onto your strings.
Very curious indeed. I added the following dandy after each serialized record is written to the XmlWriter:
if (GC.GetTotalMemory(false) > 104857600)
{
    GC.WaitForPendingFinalizers();
}
and wouldn't you know it, it's keeping it in check and it's processing without incident, never getting too far above the threshold I set. I feel like there should be a better way, but it almost seems like the code was executing too fast for the garbage collector to reclaim the strings in time.
Do you have an example of your code - how you're creating these strings? Are you breaking out into unmanaged code anywhere (which would mean you are required to clean up after yourself)?
Another thought is how you are converting flat data file into XML. XML can be somewhat heavy depending on how you are building the file. If you are trying to hold the entire object in memory, it is very likely (easy to do, in fact) that you are running out of memory.
It sure looks like your method could be cleaned up to be just:
public virtual string Serialize()
{
    StringBuilder sb = new StringBuilder();
    using (StringWriter writer = new StringWriter(sb))
    {
        this.serializer.Serialize(writer, this);
    }
    return sb.ToString();
}
You are creating an extra MemoryStream for no reason.
But if you are writing the string to a file, then why don't you just send a FileStream to the Serialize() method?
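Taking that one step further, the intermediate string can be avoided entirely by serializing straight to the destination; a sketch, assuming Serializer is the same XmlSerializer instance used above:
public virtual void Serialize(Stream output)
{
    // Write directly to the destination stream; no intermediate
    // MemoryStream or string is ever materialized.
    Serializer.Serialize(output, this);
}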

EndianBinaryReader - Continuous update of the input stream?

I am trying to use the EndianBinaryReader and EndianBinaryWriter that Jon Skeet wrote as part of his MiscUtil library. It works great for the two uses I have made of it.
The first is reading from a NetworkStream (TcpClient), where I sit in a loop reading the data as it comes in. I can create a single EndianBinaryReader and just dispose of it on application shutdown. I construct the EndianBinaryReader by passing in TcpClient.GetStream().
I am now trying to do the same thing when reading from a UdpClient, but this does not have a stream, as it is connectionless. So I get the data like so:
byte[] data = udpClientSnapShot.Receive(ref endpoint);
I could put this data into a memory stream
var memoryStream = new MemoryStream(data);
and then create the EndianBinaryReader
var endianbinaryReader = new EndianBinaryReader(
    new BigEndianBitConverter(), memoryStream, Encoding.ASCII);
but this means I have to create a new endian reader every time I do a read. Is there a way I can create a single stream that I just keep updating with the data from the UdpClient?
I can't remember whether EndianBinaryReader buffers - you could overwrite a single MemoryStream? But to be honest there is very little overhead from an extra object here. How big are the packets? (putting it into a MemoryStream will clone the byte[]).
I'd be tempted to use the simplest thing that works and see if there is a real problem. Probably the one change I would make is to introduce using (since they are IDisposable):
using (var memoryStream = new MemoryStream(data))
using (var endianbinaryReader = ..blah..)
{
    // use it
}
Your best option is probably an override of the .NET Stream class to provide your custom functionality. The class is designed to be overridable with custom behavior.
It may look daunting because of the number of members, but it is easier than it looks. There are a number of boolean properties like "CanWrite", etc. Override them and have them all return "false" except for the functionality that your reader needs (probably CanRead is the only one you need to be true.)
Then, just override all of the methods that start with the phrase "When overridden in a derived class" in the help for Stream, and have the unsupported methods throw a NotSupportedException (instead of the default NotImplementedException).
Implement the Read method to return data from your buffered UDP packets using perhaps a linked list of buffers, setting used buffers to "null" as you read past them so that the memory footprint doesn't grow unbounded.
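A rough sketch of that approach (the class name is illustrative; here Read simply returns 0 when no packet is buffered, whereas a real implementation would likely block or use async, and dequeued packets naturally become collectible so the footprint doesn't grow unbounded):
using System;
using System.Collections.Generic;
using System.IO;

class UdpPacketStream : Stream
{
    private readonly Queue<byte[]> packets = new Queue<byte[]>();
    private byte[] current;
    private int consumed;

    // Call this from the UdpClient receive loop.
    public void Enqueue(byte[] packet)
    {
        lock (packets) { packets.Enqueue(packet); }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (current == null || consumed == current.Length)
        {
            lock (packets)
            {
                if (packets.Count == 0)
                    return 0; // nothing buffered yet
                current = packets.Dequeue();
                consumed = 0;
            }
        }
        int n = Math.Min(count, current.Length - consumed);
        Buffer.BlockCopy(current, consumed, buffer, offset, n);
        consumed += n;
        return n;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}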
