Occasionally we need to copy huge files from one bucket to another in AWS S3. Whenever possible we use the CopyRequest to handle the operation entirely within AWS (no round trip back to the client). But sometimes we don't have that option, because we need to copy between two completely separate accounts, which requires a GET followed by a PUT.
Problems:
The response stream returned from the GET is not seekable, so it cannot be passed to the PUT request and stream seamlessly from one to the other
Copying the response stream to an intermediary stream (a MemoryStream) using CopyTo() and then passing that to the PUT operation works, but doesn't scale (large files throw OutOfMemory exceptions)
So I need an intermediary stream that I can read from and write to at the same time: I would read a chunk from the response stream and write it to the intermediary stream, while the PUT request reads the content back out; a seamless pass-through sort of scenario.
I found this post on Stack Overflow and it seemed promising at first, but it still throws an OutOfMemory exception with large files.
.NET Asynchronous stream read/write
Has anyone ever had to do something similar to this? How would you tackle it? Thanks in advance.
It's not clear why you would want to use MemoryStream. The Stream.CopyTo method in .NET 4 doesn't need to use an intermediate stream - it will just read into a local buffer of a fixed size, then write that buffer to the output stream, then read more data (overwriting the buffer) etc.
If you're not using .NET 4, it's easy to implement something similar, e.g.
public static void CopyTo(this Stream input, Stream output)
{
    byte[] buffer = new byte[64 * 1024]; // 64K buffer
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}
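For the S3 case in the question, here is a minimal sketch of what that pass-through might look like with raw HttpWebRequest objects. The URLs are illustrative and the S3 authentication headers for the two accounts are omitted; the point is only that the GET response body is pumped straight into the PUT request body with no intermediate MemoryStream.

// Hypothetical pass-through: GET response stream copied directly into the PUT request stream.
var getRequest = (HttpWebRequest)WebRequest.Create("https://source-bucket.s3.amazonaws.com/bigfile"); // illustrative URL
var getResponse = getRequest.GetResponse();

var putRequest = (HttpWebRequest)WebRequest.Create("https://dest-bucket.s3.amazonaws.com/bigfile");   // illustrative URL
putRequest.Method = "PUT";
putRequest.ContentLength = getResponse.ContentLength; // known from the GET response headers
putRequest.AllowWriteStreamBuffering = false;         // don't buffer the whole body in memory

using (var source = getResponse.GetResponseStream())
using (var target = putRequest.GetRequestStream())
{
    source.CopyTo(target); // .NET 4; on earlier versions use the extension method above
}
putRequest.GetResponse().Close();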
I found this, but it uses a Queue internally, which the author notes is an order of magnitude slower than a MemoryStream.
http://www.codeproject.com/Articles/16011/PipeStream-a-Memory-Efficient-and-Thread-Safe-Stre
I keep hoping I'll find an official MS library solution, but it seems that this wheel hasn't been properly invented yet.
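For what it's worth, a thread-safe pass-through stream isn't hard to sketch on top of BlockingCollection<byte[]> (.NET 4). This is only a minimal illustration, not the CodeProject PipeStream and not a production-ready Stream implementation, but the bounded capacity is what keeps memory use flat for large files:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

// Minimal pass-through stream: a producer thread Writes chunks in while a
// consumer thread Reads them out; at most 'capacity' chunks are held in memory.
public class BoundedPipeStream : Stream
{
    private readonly BlockingCollection<byte[]> _chunks;
    private byte[] _current;
    private int _offset;

    public BoundedPipeStream(int capacity) { _chunks = new BlockingCollection<byte[]>(capacity); }

    public override void Write(byte[] buffer, int offset, int count)
    {
        var chunk = new byte[count];
        Buffer.BlockCopy(buffer, offset, chunk, 0, count);
        _chunks.Add(chunk); // blocks the producer when the consumer falls 'capacity' chunks behind
    }

    // The producer must call this when done so Read can return 0 (end of stream).
    public void CompleteWriting() { _chunks.CompleteAdding(); }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (_current == null || _offset == _current.Length)
        {
            if (!_chunks.TryTake(out _current, Timeout.Infinite))
                return 0; // writer called CompleteWriting and everything has been drained
            _offset = 0;
        }
        int n = Math.Min(count, _current.Length - _offset);
        Buffer.BlockCopy(_current, _offset, buffer, offset, n);
        _offset += n;
        return n;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanWrite { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override void Flush() { }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}

The GET side would write its chunks into this stream on one thread and call CompleteWriting when done, while the PUT request reads from it on another thread. Note, though, that some upload APIs require a seekable stream or a known content length, so this only helps when the destination accepts a forward-only stream.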
Related
I have a pretty weird issue right now with how I'm sending my data over sockets. My read buffer is getting all the data I'm sending through the socket, but for some reason the File.Write or File.WriteAsync methods I'm calling are not writing all the bytes, or are just stopping (I think that's the issue), or it could be something wrong with the sockets and how they're being read from and written to.
So here is a snippet of my code; keep in mind both of these are in the same file but run on different threads.
byte[] Buffer = new Byte[1024];
int bytesRead;
while ((bytesRead = ReceiverSocket.Receive(Buffer)) > 0)
{
    totalBytesRead += bytesRead;
    //myDownload.Write(Buffer, 0, bytesRead);
    myDownload.WriteAsync(Buffer, 0, bytesRead);
}
The code above is the receive function for my download client. Keep in mind I'm getting the same results regardless of which write method I use.
And I am simply using this:
byte[] fileData = File.ReadAllBytes(fileToSend);
serverSocket.SendFile(fileToSend);
I send a file with a file size of 10560 and actually receive all the bytes through the receive loop, yet when I check my file after I'm done it only contains a portion, like so:
TotalBytesRead
1024
2048
3072
...
8192
9216
10240
10560
Which means it reads all the data, though in my program it's still stuck inside the loop, and when I close it to check the file size, its size is 8192, so it seems to me the last 3 chunks were never written to the file.
I'm at a loss here; is there anything I'm doing wrong? Could this be a threading issue I'm not aware of?
I figured out the answer: I was blocking on my
while((bytesRead = ReceiverSocket.Receive(Buffer)) > 0)
because the socket was always waiting on a receive; SendFile doesn't cause Receive to return 0 (that only happens when the connection is closed), so it was always stuck in the while loop. Once I ended the while loop, it finished writing to the file.
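For reference, a minimal sketch (not from the original poster) of one common way to avoid relying on Receive() returning 0: send the file length first, so the receiver knows exactly when to stop. Variable names mirror the snippets above; destinationPath is hypothetical.

// Sender: prefix the transfer with an 8-byte length header, then the file bytes.
byte[] fileData = File.ReadAllBytes(fileToSend);
serverSocket.Send(BitConverter.GetBytes((long)fileData.Length));
serverSocket.Send(fileData);

// Receiver: read the header, then loop until that many bytes have been written.
byte[] header = new byte[8];
ReceiverSocket.Receive(header, 8, SocketFlags.None); // a robust version would loop until all 8 header bytes arrive
long expected = BitConverter.ToInt64(header, 0);

byte[] Buffer = new byte[1024];
long totalBytesRead = 0;
using (var myDownload = File.OpenWrite(destinationPath)) // destinationPath is hypothetical
{
    while (totalBytesRead < expected)
    {
        int bytesRead = ReceiverSocket.Receive(Buffer);
        if (bytesRead == 0) break; // sender closed the connection early
        myDownload.Write(Buffer, 0, bytesRead);
        totalBytesRead += bytesRead;
    }
}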
If we create an HttpWebRequest and get the ResponseStream from its response, is the data downloaded completely at once, or is the content only downloaded from the network (and then read) when we call ReadBytes on the stream?
The code sample I am referring to is below:
var webRequest = HttpWebRequest.Create("url of a big file approx 700MB") as HttpWebRequest;
var webResponse = webRequest.GetResponse();
using (BinaryReader ns = new BinaryReader(webResponse.GetResponseStream()))
{
    Thread.Sleep(60000); // Sleep for 60 seconds, hoping the 700MB file gets downloaded in that time
    // At this point, is the response totally downloaded, or not downloaded at all?
    var buffer = ns.ReadBytes(bufferToRead);
    // Or is the ReadBytes call above responsible for downloading the content from the internet?
}
GetResponseStream opens and returns a Stream object. The stream object is sourced from the underlying Socket. This Socket is sent data by the network adapter asynchronously. The data just arrives and is buffered. GetResponseStream will block execution until the first data arrives.
ReadByte pulls the data up from the socket layer into C#. This method will block execution until there is a byte available.
Closing the stream prematurely will end the asynchronous transfer (closes the Socket, the sender will be notified of this as their connection will fail) and discard (flush) any buffered data that you have not used yet.
var webRequest = HttpWebRequest.Create("url of a big file approx 700MB") as HttpWebRequest;
Okay, we're set up ready to go. It's a bit different if you PUT or POST a stream of your own, but the differences are analogous.
var webResponse = webRequest.GetResponse();
When GetResponse() returns, it will at the very least have read all of the HTTP headers. It may well have read the headers of a redirect, and done another request to the URI it was redirected to. It's also possible that it's actually hitting a cache (either directly or because the webserver sent 304 Not Modified), but by default the details of that are hidden from you.
There will likely be some more bytes in the socket's buffer.
using (BinaryReader ns = new BinaryReader(webResponse.GetResponseStream()))
{
At this point, we've got a stream representing the network stream.
Let's remove the Thread.Sleep(); it does nothing except add a risk of the connection timing out. Even assuming it doesn't time out while waiting, the connection will have "backed off" from sending bytes since you weren't reading them, so the net effect is to slow things down even further than the deliberate 60-second delay itself.
var buffer = ns.ReadBytes(bufferToRead);
At this point, either bufferToRead bytes have been read to create a byte[] or else fewer than bufferToRead because the total size of the stream was less than that, in which case buffer contains the entire stream. This will take as long as it takes.
}
At this point, because a successful HTTP GET was performed, the underlying web-access layer may cache the response (probably not if it's very large - the default assumption is that very large requests don't get repeated a lot and don't benefit from caching).
Error conditions will raise exceptions if they occur, and in that case no caching will ever be done (there is no point caching a buggy response).
There is no need to sleep, or otherwise "wait" on it.
It's worth considering the following variant that works at just a slightly lower level by manipulating the stream directly rather than through a reader:
using(var stm = webResponse.GetResponseStream())
{
We're going to work on the stream directly;
byte[] buffer = new byte[4096];
int read; // declared outside the loop so the while condition below can see it
do
{
read = stm.Read(buffer, 0, 4096);
This will return up to 4096 bytes. It may read fewer, because it has a chunk of bytes already available and it returns that many immediately. It will only return 0 bytes if it is at the end of the stream. This gives us a balance between waiting and not waiting: it promises to wait long enough to get at least one byte, but whether it waits until it gets all 4096 bytes is up to the stream, which chooses whichever is more efficient.
DoSomething(buffer, 0, read);
We work with the bytes we got.
} while(read != 0);
Read() only gives us zero bytes if it's at the end of the stream.
}
And again, when the stream is disposed, the response may or may not be cached.
As you can see, even at the lowest level of access .NET gives us when using HttpWebResponse, there's no need to add code to wait on anything; that is always done for us.
You can use asynchronous access to the stream to avoid waiting, but then the asynchronous mechanism still means you get the result when it's available.
To answer your question about when streaming starts: GetResponseStream() will start receiving data from the server. However, at some point the network buffers will become full and the server will stop sending data if you don't read from them. For a detailed description of the TCP buffers, etc., see here.
So your sleep of 60000 ms will not help you much, as the network buffers along the way will fill up and data will stop arriving until you read it off. It is better to read it off and write it out in chunks as you go, as in the sketch below.
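Here is a minimal sketch of that chunked approach; the URL and destination file name are illustrative:

// Read the response in chunks and write each chunk out immediately, so the
// 700MB never has to fit in memory and the TCP buffers keep draining.
var webRequest = (HttpWebRequest)WebRequest.Create("http://example.com/bigfile.bin"); // illustrative URL
using (var webResponse = webRequest.GetResponse())
using (var responseStream = webResponse.GetResponseStream())
using (var fileStream = File.Create("bigfile.bin")) // illustrative destination
{
    byte[] buffer = new byte[64 * 1024];
    int read;
    while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        fileStream.Write(buffer, 0, read);
    }
}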
More info on the workings of ResponseStream here.
If you are wondering about what buffer size to use, see here.
I want to do a protocol analysis of a protocol that uses SSL/TLS; fortunately I can install my own certificate, and the DNS portion won't be an issue. My problem is what to use to do this. I've considered using Paros, but it would be more trouble than it's worth. So I thought I could write two C# applications: the first is the pseudo server and the other is the pseudo client. The two would have a TCP connection between them that I can then watch with Wireshark. The problem is that I have very little experience with streams. If anyone could point me to helpful articles, or if the code is pretty short, a sample would be great. Thank you in advance.
It's not terribly hard to read from and write to streams, but you can't just connect two streams directly; you'll need your own code to do this, preferably on its own thread (or worker process or task or whatever threading concept you need).
public void ConnectStreams(Stream inStream, Stream outStream)
{
    byte[] buffer = new byte[1024];
    int bytesRead = 0;
    while ((bytesRead = inStream.Read(buffer, 0, 1024)) != 0)
    {
        outStream.Write(buffer, 0, bytesRead);
        outStream.Flush();
    }
}
Basically Streams operate on byte arrays. When we run this line:
while((bytesRead = inStream.Read(buffer, 0, 1024)) != 0)
We are basically saying: perform Read on inStream, put the read bytes into buffer starting at index 0 (in buffer), and read a maximum of 1024 bytes.
Then we assign the return value to bytesRead, which is the number of bytes ACTUALLY read (between 0 and 1024 in this case), and if that is not equal to 0, we continue looping.
Then we simply write it back into the outStream, passing the buffer containing the data and the number of bytes actually read. We perform a Flush to force the output out rather than letting it stack up in an internal buffer.
When the stream reaches EOF, .Read will return 0, the loop will exit, and you can continue on. This is how you "connect" two streams at the most simple level.
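A hypothetical sketch of wiring this up for your pseudo client/server pair (the port and upstream host are illustrative): one TcpListener accepts the client, a TcpClient connects onward to the real server, and ConnectStreams pumps bytes in each direction on its own thread. It assumes using System.Net, System.Net.Sockets, and System.Threading.

public void RunProxy()
{
    var listener = new TcpListener(IPAddress.Loopback, 8443); // illustrative local port
    listener.Start();

    using (TcpClient clientSide = listener.AcceptTcpClient())
    using (TcpClient serverSide = new TcpClient("real-server.example", 443)) // illustrative upstream host
    {
        NetworkStream clientStream = clientSide.GetStream();
        NetworkStream serverStream = serverSide.GetStream();

        // Pump each direction on its own thread so neither side blocks the other.
        var upstream = new Thread(() => ConnectStreams(clientStream, serverStream));
        var downstream = new Thread(() => ConnectStreams(serverStream, clientStream));
        upstream.Start();
        downstream.Start();
        upstream.Join();
        downstream.Join();
    }
}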
My problem is with file-copying performance. We have a media management system that requires moving a lot of files around on the file system to different locations, including Windows shares on the same network, FTP sites, Amazon S3, etc. When we were all on one Windows network we could get away with using System.IO.File.Copy(source, destination) to copy a file. Since many times all we have is an input Stream (like a MemoryStream), we tried abstracting the copy operation to take an input Stream and an output Stream, but we are seeing a massive performance decrease. Below is some code for copying a file to use as a discussion point.
public void Copy(System.IO.Stream inStream, string outputFilePath)
{
    int bufferSize = 1024 * 64;
    using (FileStream fileStream = new FileStream(outputFilePath, FileMode.OpenOrCreate, FileAccess.Write))
    {
        int bytesRead = -1;
        byte[] bytes = new byte[bufferSize];
        while ((bytesRead = inStream.Read(bytes, 0, bufferSize)) > 0)
        {
            fileStream.Write(bytes, 0, bytesRead);
            fileStream.Flush();
        }
    }
}
Does anyone know why this performs so much slower than File.Copy? Is there anything I can do to improve performance? Am I just going to have to put special logic in to see if I'm copying from one windows location to another--in which case I would just use File.Copy and in the other cases I'll use the streams?
Please let me know what you think and whether you need additional information. I have tried different buffer sizes and it seems like a 64k buffer size is optimal for our "small" files and 256k+ is a better buffer size for our "large" files--but in either case it performs much worse than File.Copy(). Thanks in advance!
File.Copy was built around the CopyFile Win32 function, and that function gets a lot of attention from the MS crew (remember the Vista-related threads about slow copy performance).
Several clues to improve the performance of your method:
Like many have said earlier, remove the Flush call from your loop. You do not need it at all.
Increasing the buffer may help, but only for file-to-file operations; for network shares or FTP servers it will slow things down instead. 60 * 1024 is ideal for network shares, at least before Vista; for FTP, 32K will be enough in most cases.
Help the OS by telling it your caching strategy (in your case sequential reading and writing): use the FileStream constructor overload that takes a FileOptions parameter (SequentialScan).
You can speed up copying by using the asynchronous pattern (especially useful for network-to-file cases), but do not use threads for this; instead use overlapped I/O (BeginRead, EndRead, BeginWrite, EndWrite in .NET), and do not forget to set the Asynchronous option in the FileStream constructor (see FileOptions). A sketch of such a constructor call follows this list.
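A minimal sketch of the caching-strategy and Asynchronous suggestions above; the path and buffer size are illustrative:

// Tell the OS the access pattern and enable overlapped (asynchronous) I/O.
var destStream = new FileStream(
    @"C:\temp\copy.bin",                                   // illustrative destination path
    FileMode.Create,
    FileAccess.Write,
    FileShare.None,
    64 * 1024,                                             // internal buffer size
    FileOptions.SequentialScan | FileOptions.Asynchronous);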
Example of asynchronous copy pattern:
byte[] ActiveBuffer = new byte[64 * 1024]; // buffer whose contents are currently being written out (size is illustrative)
byte[] BackBuffer = new byte[64 * 1024];   // buffer the next read goes into
int Readed = 0;
IAsyncResult ReadResult;
IAsyncResult WriteResult;

ReadResult = sourceStream.BeginRead(ActiveBuffer, 0, ActiveBuffer.Length, null, null);
do
{
    Readed = sourceStream.EndRead(ReadResult);
    WriteResult = destStream.BeginWrite(ActiveBuffer, 0, Readed, null, null);
    if (Readed > 0)
    {
        // Start the next read into the back buffer while the write is in flight,
        // then swap the buffers for the next iteration.
        ReadResult = sourceStream.BeginRead(BackBuffer, 0, BackBuffer.Length, null, null);
        BackBuffer = Interlocked.Exchange(ref ActiveBuffer, BackBuffer);
    }
    destStream.EndWrite(WriteResult);
}
while (Readed > 0);
Three changes will dramatically improve performance:
Increase your buffer size; try 1MB (well, just experiment)
After you open your fileStream, call fileStream.SetLength(inStream.Length) to allocate the entire block on disk up front (only works if inStream is seekable)
Remove fileStream.Flush() - it is redundant and probably has the single biggest impact on performance as it will block until the flush is complete. The stream will be flushed anyway on dispose.
This seemed about 3-4 times faster in the experiments I tried:
public static void Copy(System.IO.Stream inStream, string outputFilePath)
{
    int bufferSize = 1024 * 1024;
    using (FileStream fileStream = new FileStream(outputFilePath, FileMode.OpenOrCreate, FileAccess.Write))
    {
        fileStream.SetLength(inStream.Length);
        int bytesRead = -1;
        byte[] bytes = new byte[bufferSize];
        while ((bytesRead = inStream.Read(bytes, 0, bufferSize)) > 0)
        {
            fileStream.Write(bytes, 0, bytesRead);
        }
    }
}
Dusting off reflector we can see that File.Copy actually calls the Win32 API:
if (!Win32Native.CopyFile(fullPathInternal, dst, !overwrite))
Which resolves to
[DllImport("kernel32.dll", CharSet=CharSet.Auto, SetLastError=true)]
internal static extern bool CopyFile(string src, string dst, bool failIfExists);
And here is the documentation for CopyFile
You're never going to be able to beat the operating system at doing something so fundamental with your own code, not even if you crafted it carefully in assembler.
If you need to make sure that your operations occur with the best performance AND you want to mix and match various sources, then you will need to create a type that describes the resource locations. You then create an API with functions such as Copy that take two such types and, having examined the descriptions of both, choose the best performing copy mechanism. E.g., having determined that both locations are Windows file locations, it would choose File.Copy; or, if the source is a Windows file but the destination is an HTTP POST, it would use a WebRequest. A sketch of this idea follows.
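This is a hypothetical sketch only; the type names and dispatch rules are illustrative, and opening a stream for each kind of location is left out:

public enum ResourceKind { LocalFile, WindowsShare, Http, Ftp, AmazonS3 }

public class ResourceLocation
{
    public ResourceKind Kind { get; set; }
    public string Path { get; set; } // file path, UNC path, or URL depending on Kind
}

public static class ResourceCopier
{
    public static void Copy(ResourceLocation source, ResourceLocation destination)
    {
        bool bothOnWindows =
            (source.Kind == ResourceKind.LocalFile || source.Kind == ResourceKind.WindowsShare) &&
            (destination.Kind == ResourceKind.LocalFile || destination.Kind == ResourceKind.WindowsShare);

        if (bothOnWindows)
        {
            // Both ends are plain files or shares: let the OS do the work.
            System.IO.File.Copy(source.Path, destination.Path, true);
            return;
        }

        // Otherwise fall back to a generic stream-to-stream copy.
        using (var input = OpenRead(source))
        using (var output = OpenWrite(destination))
        {
            input.CopyTo(output, 1024 * 1024);
        }
    }

    // Hypothetical helpers: open a readable/writable stream appropriate to each Kind
    // (WebRequest for HTTP, FtpWebRequest for FTP, the AWS SDK for S3, and so on).
    private static System.IO.Stream OpenRead(ResourceLocation location) { throw new System.NotImplementedException(); }
    private static System.IO.Stream OpenWrite(ResourceLocation location) { throw new System.NotImplementedException(); }
}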
Try removing the Flush call, or at least move it outside the loop.
Sometimes the OS knows best when to flush the I/O; leaving it alone allows it to make better use of its internal buffers.
Here's a similar answer
How do I copy the contents of one stream to another?
Your main problem is the call to Flush(); it binds your performance to the speed of the I/O.
Mark Russinovich would be the authority on this.
He wrote on his blog an entry Inside Vista SP1 File Copy Improvements which sums up the Windows state of the art through Vista SP1.
My semi-educated guess would be that File.Copy would be most robust over the greatest number of situations. Of course, that doesn't mean in some specific corner case, your own code might beat it...
One thing that stands out is that you are reading a chunk, writing that chunk, reading another chunk and so on.
Streaming operations are great candidates for multithreading. My guess is that File.Copy implements multithreading.
Try reading in one thread and writing in another thread. You will need to coordinate the threads so that the write thread doesn't start writing away a buffer until the read thread is done filling it up. You can solve this by having two buffers, one that is being read while the other is being written, and a flag that says which buffer is currently being used for which purpose.
I've been playing around with what I thought was a simple idea. I want to be able to read in a file from somewhere (website, filesystem, ftp), perform some operations on it (compress, encrypt, etc.) and then save it somewhere (somewhere may be a filesystem, ftp, or whatever). It's a basic pipeline design. What I would like to do is to read in the file and put it onto a MemoryStream, then perform the operations on the data in the MemoryStream, and then save that data in the MemoryStream somewhere. I was thinking I could use the same Stream to do this but run into a couple of problems:
Every time I use a StreamWriter or StreamReader I need to close it, and that closes the stream so I cannot use it anymore. It seems like there must be some way to get around that.
Some of these files may be big and so I may run out of memory if I try to read the whole thing in at once.
I was hoping to be able to spin up each of the steps as separate threads and have the compression step begin as soon as there is data on the stream, and then, as soon as the compression has some compressed data available on the stream, start saving it (for example). Is anything like this easily possible with C# Streams? Anyone have thoughts on how best to accomplish this?
Thanks,
Mike
Using a helper method to drive the streaming:
static public void StreamCopy(Stream source, Stream target)
{
    byte[] buffer = new byte[8 * 1024];
    int size;
    do
    {
        size = source.Read(buffer, 0, 8 * 1024);
        target.Write(buffer, 0, size);
    } while (size > 0);
}
You can easily combine whatever you need:
using (FileStream iFile = new FileStream(...))
using (FileStream oFile = new FileStream(...))
using (DeflateStream oZip = new DeflateStream(oFile, CompressionMode.Compress))
    StreamCopy(iFile, oZip);
Depending on what you are actually trying to do, you'd chain the streams differently. This also uses relatively little memory, because only the data being operated upon is in memory.
StreamReader/StreamWriter shouldn't have been designed to close their underlying stream -- that's a horrible misfeature in the BCL. But they do, and they won't be changed (because of backward compatibility), so we're stuck with this disaster of an API.
But there are some well-established workarounds, if you want to use StreamReader/Writer but keep the Stream open afterward.
For a StreamReader: don't Dispose the StreamReader. It's that simple. It's harmless to just let a StreamReader go without ever calling Dispose. The only effect is that your Stream won't get prematurely closed, which is actually a plus.
For a StreamWriter: there may be buffered data, so you can't get away with just letting it go. You have to call Flush, to make sure that buffered data gets written out to the Stream. Then you can just let the StreamWriter go. (Basically, you put a Flush where you normally would have put a Dispose.)
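A small sketch of that workaround in the pipeline context; the MemoryStream here just stands in for whatever stream you want to keep open:

var pipeline = new MemoryStream();           // any stream you want to keep open

var writer = new StreamWriter(pipeline);
writer.Write("some intermediate data");
writer.Flush();                              // push the buffered text down into 'pipeline'
// ...and simply never Dispose the writer; 'pipeline' stays open for the next stage.

pipeline.Position = 0;                       // rewind and keep using the same stream
var reader = new StreamReader(pipeline);
string roundTripped = reader.ReadToEnd();    // no Dispose here either, per the advice above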
Unless you're reading in streams bigger than your hard drive, I don't think you'll run out of memory:
http://blogs.msdn.com/ericlippert/archive/2009/06/08/out-of-memory-does-not-refer-to-physical-memory.aspx