Writing to ZipArchive using the HttpContext OutputStream - c#

I've been trying to get the "new" ZipArchive included in .NET 4.5 (System.IO.Compression.ZipArchive) to work in an ASP.NET site. But it seems like it doesn't like writing to the stream of HttpContext.Response.OutputStream.
My following code example will throw
System.NotSupportedException: Specified method is not supported
as soon as a write is attempted on the stream.
The CanWrite property on the stream returns true.
If I exchange the OutputStream with a FileStream pointing to a local directory, it works. What gives?
ZipArchive archive = new ZipArchive(HttpContext.Response.OutputStream, ZipArchiveMode.Create, false);
ZipArchiveEntry entry = archive.CreateEntry("filename");
using (StreamWriter writer = new StreamWriter(entry.Open()))
{
    writer.WriteLine("Information about this package.");
    writer.WriteLine("========================");
}
Stacktrace:
[NotSupportedException: Specified method is not supported.]
System.Web.HttpResponseStream.get_Position() +29
System.IO.Compression.ZipArchiveEntry.WriteLocalFileHeader(Boolean isEmptyFile) +389
System.IO.Compression.DirectToArchiveWriterStream.Write(Byte[] buffer, Int32 offset, Int32 count) +94
System.IO.Compression.WrappedStream.Write(Byte[] buffer, Int32 offset, Int32 count) +41

Note: This has been fixed in .NET Core 2.0. I'm not sure what the status of the fix is for .NET Framework.
Calbertoferreira's answer has some useful information, but the conclusion is mostly wrong. To create an archive, you don't need seek, but you do need to be able to read the Position.
According to the documentation, reading Position should be supported only for seekable streams, but ZipArchive seems to require this even from non-seekable streams, which is a bug.
So, all you need to do to support writing ZIP files directly to OutputStream is to wrap it in a custom Stream that supports getting Position. Something like:
class PositionWrapperStream : Stream
{
    private readonly Stream wrapped;
    private long pos = 0;

    public PositionWrapperStream(Stream wrapped)
    {
        this.wrapped = wrapped;
    }

    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }

    public override long Position
    {
        get { return pos; }
        set { throw new NotSupportedException(); }
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        pos += count;
        wrapped.Write(buffer, offset, count);
    }

    public override void Flush()
    {
        wrapped.Flush();
    }

    protected override void Dispose(bool disposing)
    {
        wrapped.Dispose();
        base.Dispose(disposing);
    }

    // all the other required methods can throw NotSupportedException
}
Using this, the following code will write a ZIP archive into OutputStream:
using (var outputStream = new PositionWrapperStream(Response.OutputStream))
using (var archive = new ZipArchive(outputStream, ZipArchiveMode.Create, false))
{
    var entry = archive.CreateEntry("filename");
    using (var writer = new StreamWriter(entry.Open()))
    {
        writer.WriteLine("Information about this package.");
        writer.WriteLine("========================");
    }
}
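A hedged side note: if the goal is for bytes to reach the client while the archive is still being built, ASP.NET's response buffering usually needs to be turned off and the download headers set before the first write. A minimal sketch (the filename is illustrative, not from the answer above):

Response.BufferOutput = false; // send bytes as they are written instead of buffering the whole response
Response.ContentType = "application/zip";
Response.AppendHeader("Content-Disposition", "attachment; filename=package.zip");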

If you compare your code adaptation with the version presented on the MSDN page, you'll see that ZipArchiveMode.Create is never used; what is used is ZipArchiveMode.Update.
Despite that, the main problem is that the OutputStream doesn't support Read and Seek, which ZipArchive needs in Update mode:
When you set the mode to Update, the underlying file or stream must support reading, writing, and seeking. The content of the entire archive is held in memory, and no data is written to the underlying file or stream until the archive is disposed.
Source: MSDN
You weren't getting any exceptions with the create mode because it only needs to write:
When you set the mode to Create, the underlying file or stream must support writing, but does not have to support seeking. Each entry in the archive can be opened only once for writing. If you create a single entry, the data is written to the underlying stream or file as soon as it is available. If you create multiple entries, such as by calling the CreateFromDirectory method, the data is written to the underlying stream or file after all the entries are created.
Source: MSDN
I believe you can't create a zip file directly in the OutputStream since it's a network stream and seek is not supported:
Streams can support seeking. Seeking refers to querying and modifying the current position within a stream. Seek capability depends on the kind of backing store a stream has. For example, network streams have no unified concept of a current position, and therefore typically do not support seeking.
An alternative could be writing to a memory stream, then use the OutputStream.Write method to send the zip file.
MemoryStream ZipInMemory = new MemoryStream();
using (ZipArchive UpdateArchive = new ZipArchive(ZipInMemory, ZipArchiveMode.Update))
{
    ZipArchiveEntry Zipentry = UpdateArchive.CreateEntry("filename.txt");
    foreach (ZipArchiveEntry entry in UpdateArchive.Entries)
    {
        using (StreamWriter writer = new StreamWriter(entry.Open()))
        {
            writer.WriteLine("Information about this package.");
            writer.WriteLine("========================");
        }
    }
}
byte[] buffer = ZipInMemory.GetBuffer();
Response.AppendHeader("content-disposition", "attachment; filename=Zip_" + DateTime.Now.ToString() + ".zip");
Response.AppendHeader("content-length", buffer.Length.ToString());
Response.ContentType = "application/x-compressed";
Response.OutputStream.Write(buffer, 0, buffer.Length);
EDIT: With feedback from comments and further reading: you could be creating large zip files, so the memory stream could cause you problems.
In this case I suggest you create the zip file on the web server and then output the file using Response.WriteFile, as sketched below.
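A minimal sketch of that approach, assuming a writable temp directory and a hypothetical BuildZipContents helper standing in for whatever adds your entries:

string tempZip = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ".zip");
using (var fileStream = new FileStream(tempZip, FileMode.Create))
using (var archive = new ZipArchive(fileStream, ZipArchiveMode.Create))
{
    BuildZipContents(archive); // hypothetical: create the entries here
}
Response.ContentType = "application/x-compressed";
Response.AppendHeader("content-disposition", "attachment; filename=download.zip");
Response.WriteFile(tempZip); // streams the file from disk rather than from memory
Response.Flush();
File.Delete(tempZip); // clean up the temp file afterwards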

A refinement to svick's answer of 2nd February 2014. I found that it was necessary to implement some more methods and properties of the Stream abstract class and to declare the pos member as long. After that it worked like a charm. I haven't extensively tested this class, but it works for the purposes of returning a ZipArchive in the HttpResponse. I assume I've implemented Seek and Read correctly, but they may need some tweaking.
class PositionWrapperStream : Stream
{
    private readonly Stream wrapped;
    private long pos = 0;

    public PositionWrapperStream(Stream wrapped)
    {
        this.wrapped = wrapped;
    }

    public override bool CanSeek
    {
        get { return false; }
    }

    public override bool CanWrite
    {
        get { return true; }
    }

    public override long Position
    {
        get { return pos; }
        set { throw new NotSupportedException(); }
    }

    public override bool CanRead
    {
        get { return wrapped.CanRead; }
    }

    public override long Length
    {
        get { return wrapped.Length; }
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        pos += count;
        wrapped.Write(buffer, offset, count);
    }

    public override void Flush()
    {
        wrapped.Flush();
    }

    protected override void Dispose(bool disposing)
    {
        wrapped.Dispose();
        base.Dispose(disposing);
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        // Delegate to the wrapped stream and mirror its resulting position,
        // so pos stays correct for any origin.
        pos = wrapped.Seek(offset, origin);
        return pos;
    }

    public override void SetLength(long value)
    {
        wrapped.SetLength(value);
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        // Advance pos by the number of bytes actually read,
        // not by the offset or the requested count.
        int result = wrapped.Read(buffer, offset, count);
        pos += result;
        return result;
    }
}

A simplified version of svick's answer for zipping a server-side file and sending it via the OutputStream:
using (var outputStream = new PositionWrapperStream(Response.OutputStream))
using (var archive = new ZipArchive(outputStream, ZipArchiveMode.Create, false))
{
    var entry = archive.CreateEntryFromFile(fullPathOfFileOnDisk, fileNameAppearingInZipArchive);
}
(In case this seems obvious, it wasn't to me!)
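One easy thing to miss with this variant: CreateEntryFromFile is an extension method from ZipFileExtensions, so on .NET Framework 4.5 the project needs an extra assembly reference beyond the one ZipArchive itself lives in:

// Requires assembly references to System.IO.Compression and System.IO.Compression.FileSystem
using System.IO.Compression; // ZipArchive, ZipArchiveMode, and the CreateEntryFromFile extension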

Presumably this is not an MVC app, where you could easily just use the FileStreamResult class.
I'm using this currently with ZipArchive created using a MemoryStream, so I know it works.
With that in mind, have a look at the FileStreamResult.WriteFile() method:
protected override void WriteFile(HttpResponseBase response)
{
    // grab chunks of data and write to the output stream
    Stream outputStream = response.OutputStream;
    using (FileStream)
    {
        byte[] buffer = new byte[_bufferSize];
        while (true)
        {
            int bytesRead = FileStream.Read(buffer, 0, _bufferSize);
            if (bytesRead == 0)
            {
                // no more data
                break;
            }
            outputStream.Write(buffer, 0, bytesRead);
        }
    }
}
(Entire FileStreamResult on CodePlex)
Here is how I'm generating and returning the ZipArchive.
You should have no issues replacing the FSR with the guts of the WriteFile method from above, where FileStream becomes resultStream from the code below:
var resultStream = new MemoryStream();
using (var zipArchive = new ZipArchive(resultStream, ZipArchiveMode.Create, true))
{
    foreach (var doc in req)
    {
        var fileName = string.Format("Install.Rollback.{0}.v{1}.docx", doc.AppName, doc.Version);
        var xmlData = doc.GetXDocument();
        var fileStream = WriteWord.BuildFile(templatePath, xmlData);
        var docZipEntry = zipArchive.CreateEntry(fileName, CompressionLevel.Optimal);
        using (var entryStream = docZipEntry.Open())
        {
            fileStream.CopyTo(entryStream);
        }
    }
}
resultStream.Position = 0;
// add the Response Header for downloading the file
var cd = new ContentDisposition
{
    FileName = string.Format(
        "{0}.{1}.{2}.{3}.Install.Rollback.Documents.zip",
        DateTime.Now.Year, DateTime.Now.Month, DateTime.Now.Day, (long)DateTime.Now.TimeOfDay.TotalSeconds),
    // always prompt the user for downloading, set to true if you want
    // the browser to try to show the file inline
    Inline = false,
};
Response.AppendHeader("Content-Disposition", cd.ToString());
// stuff the zip package into a FileStreamResult
var fsr = new FileStreamResult(resultStream, MediaTypeNames.Application.Zip);
return fsr;
Finally, if you will be writing large streams (or a large number of them at any given time), then you may want to consider using anonymous pipes to write the data to the output stream immediately after it is written to the underlying zip stream; otherwise you will be holding all the file contents in memory on the server. The end of this answer to a similar question has a nice explanation of how to do that.
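To make the pipe idea concrete, here is a hedged sketch (not the linked answer's code): a background task writes the archive into an anonymous pipe while the request thread copies the pipe's read end to the response. The PositionWrapperStream from the accepted answer is reused because pipe streams are also non-seekable, and the entry is illustrative.

// using System.IO.Pipes; using System.Threading.Tasks;
var pipeServer = new AnonymousPipeServerStream(PipeDirection.Out);
var pipeClient = new AnonymousPipeClientStream(PipeDirection.In, pipeServer.ClientSafePipeHandle);
var producer = Task.Run(() =>
{
    // Pipe streams are non-seekable, so the Position workaround is reused here.
    using (var positioned = new PositionWrapperStream(pipeServer))
    using (var zip = new ZipArchive(positioned, ZipArchiveMode.Create, false))
    {
        var entry = zip.CreateEntry("filename"); // illustrative entry
        using (var writer = new StreamWriter(entry.Open()))
        {
            writer.WriteLine("Information about this package.");
        }
    }
});
using (pipeClient)
{
    pipeClient.CopyTo(Response.OutputStream); // bytes leave the server as they are zipped
}
producer.Wait(); // surface any exception thrown by the producer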

Related

Return buffer while processing Stream

So I have a file upload form which (after uploading) encrypts the file and uploads it to an S3 bucket. However, I'm doing an extra step which I want to avoid. First, I'll show you some code what I am doing now:
using (MemoryStream memoryStream = new MemoryStream())
{
    Security.EncryptFile(FileUpload.UploadedFile.OpenReadStream(), someByteArray, memoryStream);
    memoryStream.Position = 0; // reset its position
    await S3Helper.Upload(objectName, memoryStream);
}
My Security.EncryptFile method:
public static void EncryptFile(Stream inputStream, byte[] key, Stream outputStream)
{
    CryptoStream cryptoStream;
    using (SymmetricAlgorithm cipher = Aes.Create())
    using (inputStream)
    {
        cipher.Key = key;
        // cipher.IV will be automatically populated with a secure random value
        byte[] iv = cipher.IV;
        // Write a marker header so we can identify how to read this file in the future
        outputStream.WriteByte(69);
        outputStream.WriteByte(74);
        outputStream.WriteByte(66);
        outputStream.WriteByte(65);
        outputStream.WriteByte(69);
        outputStream.WriteByte(83);
        outputStream.Write(iv, 0, iv.Length);
        using (cryptoStream =
            new CryptoStream(inputStream, cipher.CreateEncryptor(), CryptoStreamMode.Read))
        {
            cryptoStream.CopyTo(outputStream);
        }
    }
}
The S3Helper.Upload method:
public async static Task Upload(string objectName, Stream inputStream)
{
    try
    {
        // Upload a file to bucket.
        using (inputStream)
        {
            await minio.PutObjectAsync(S3BucketName, objectName, inputStream, inputStream.Length);
        }
        Console.Out.WriteLine("[Bucket] Successfully uploaded " + objectName);
    }
    catch (MinioException e)
    {
        Console.WriteLine("[Bucket] Upload exception: {0}", e.Message);
    }
}
So, what happens above is I'm creating a MemoryStream, running the EncryptFile() method (which outputs it back to the stream), I reset the stream position and finally reuse it again to upload it to the S3 bucket (Upload()).
The question
What I'd like to do is the following (if possible): directly upload the uploaded file to the S3 bucket, without storing the full file in memory first (kinda like the code below, even though it's not working):
await S3Helper.Upload(objectName, Security.EncryptFile(FileUpload.UploadedFile.OpenReadStream(), someByteArray));
So I assume it has to return a buffer to the Upload method, which uploads it and then waits for the EncryptFile() method to return another buffer, until the file has been fully read. Any pointers in the right direction will be greatly appreciated.
What you could do is make your own EncryptionStream that derives from the Stream class. When you read from this stream, it takes a block from the input stream, encrypts it, and then outputs the encrypted data.
As an example, something like this:
public class EncrypStream : Stream {
    private Stream _cryptoStream;
    private SymmetricAlgorithm _cipher;
    private Stream InputStream { get; }
    private byte[] Key { get; }

    public EncrypStream(Stream inputStream, byte[] key) {
        this.InputStream = inputStream;
        this.Key = key;
    }

    public override int Read(byte[] buffer, int offset, int count) {
        int headerBytes = 0;
        if (this._cipher == null) {
            _cipher = Aes.Create();
            _cipher.Key = Key;
            // _cipher.IV will be automatically populated with a secure random value
            byte[] iv = _cipher.IV;
            // Write a marker header so we can identify how to read this file in the future
            // #TODO Make sure the BUFFER is big enough for the 6 marker bytes plus the IV...
            var idx = offset;
            buffer[idx++] = 69;
            buffer[idx++] = 74;
            buffer[idx++] = 66;
            buffer[idx++] = 65;
            buffer[idx++] = 69;
            buffer[idx++] = 83;
            Array.Copy(iv, 0, buffer, idx, iv.Length);
            // Account for the header so it is reported as part of this read
            headerBytes = (idx + iv.Length) - offset;
            offset += headerBytes;
            count -= headerBytes;
            // Start up the stream
            this._cryptoStream = new CryptoStream(InputStream, _cipher.CreateEncryptor(), CryptoStreamMode.Read);
        }
        // Read the next encrypted block
        return headerBytes + this._cryptoStream.Read(buffer, offset, count);
    }

    protected override void Dispose(bool disposing) {
        base.Dispose(disposing);
        // Make SURE you properly dispose the underlying streams!
        this.InputStream?.Dispose();
        this._cipher?.Dispose();
        this._cryptoStream?.Dispose();
    }

    // Omitted other methods from Stream for readability...
}
Which allows you to call the stream as:
using (var stream = new EncrypStream(FileUpload.UploadedFile.OpenReadStream(), someByteArray)) {
    await S3Helper.Upload(objectName, stream);
}
As I notice your upload method requires the total byte length of the encrypted data, you can look into this post here to get an idea of how you can calculate this.
(I'm guessing that the CryptoStream does not return the expected length of the encrypted data, but please correct me if I'm wrong on this.)
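If you stay with AES in CBC mode and its default PKCS7 padding, the encrypted length is actually deterministic, so one option is to compute the upload length up front from the plaintext length. A sketch under those assumptions, using the 6-byte marker and 16-byte IV from the EncryptFile method above:

// Assumes AES-CBC with PKCS7 padding (the defaults for Aes.Create()).
static long EncryptedLength(long plaintextLength)
{
    const int headerLength = 6; // the marker bytes written before the IV
    const int ivLength = 16;    // cipher.IV for AES is one 16-byte block
    const int blockSize = 16;
    // PKCS7 always adds between 1 and 16 padding bytes.
    long paddedLength = (plaintextLength / blockSize + 1) * blockSize;
    return headerLength + ivLength + paddedLength;
}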

Create a filestream without a file c#

Is it possible to create a filestream without an actual file?
I'll try to explain:
I know how to create a stream from a real file:
FileStream s = new FileStream("FilePath", FileMode.Open, FileAccess.Read);
But can I create a fileStream with a fake file?
meaning:
define properties such as name, type, and size (and whatever else is necessary) on some file object (is there such a thing?) without any content, just the properties, and after that create a FileStream from this "file", so that the result is similar to the above code?
Edit:
I am using an API sample that has this code:
FileStream s = new FileStream("FilePath", FileMode.Open, FileAccess.Read);
try
{
    SolFS.SolFSStream stream = new SolFS.SolFSStream(Storage, FullName, true, false, true, true, true, "pswd", SolFS.SolFSEncryption.ecAES256_SHA256, 0);
    try
    {
        byte[] buffer = new byte[1024 * 1024];
        long ToRead = 0;
        while (s.Position < s.Length)
        {
            if (s.Length - s.Position < 1024 * 1024)
                ToRead = s.Length - s.Position;
            else
                ToRead = 1024 * 1024;
            s.Read(buffer, 0, (int)ToRead);
            stream.Write(buffer, 0, (int)ToRead);
        }
So it basically writes the FileStream "s" somewhere.
I don't want to take an existing file and write it; I want to "create" a different file without the content (I don't need the content), but with the properties of the real file, such as size, name, and type.
Apparently, you want to have a FileStream (explicitly with its FileStream-specific properties such as Name) that does not point to a file.
This is, to my knowledge, not possible based on the implementation of FileStream.
However, creating a wrapper class with the required properties would be a straightforward solution:
You could store all the properties you need in the wrapper.
The wrapper could wrap an arbitrary Stream, so you would be free to choose between FileStream, MemoryStream, or any other stream type.
Here is an example:
public class StreamContainer
{
    public StreamContainer(string name, Stream contents)
    {
        if (name == null) {
            throw new ArgumentNullException("name");
        }
        if (contents == null) {
            throw new ArgumentNullException("contents");
        }
        this.name = name;
        this.contents = contents;
    }

    private readonly string name;

    public string Name {
        get {
            return name;
        }
    }

    private readonly Stream contents;

    public Stream Contents {
        get {
            return contents;
        }
    }
}
Of course, you could then add some convenience creation methods for various stream types (as static methods in the above class):
public static StreamContainer CreateForFile(string path)
{
    return new StreamContainer(path, new FileStream(path, FileMode.Open, FileAccess.Read));
}

public static StreamContainer CreateWithoutFile(string name)
{
    return new StreamContainer(name, new MemoryStream());
}
In your application, wherever you want to use such a named stream, pass around the StreamContainer rather than expecting a Stream directly.
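For example, a consumer could be handed a container backed by memory and never know the difference. A small usage sketch (the name and content are illustrative):

var container = StreamContainer.CreateWithoutFile("report.txt");
using (var writer = new StreamWriter(container.Contents, Encoding.UTF8, 1024, leaveOpen: true))
{
    writer.WriteLine("No file on disk backs this stream.");
}
container.Contents.Position = 0; // rewind before handing the stream to a reader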

c# How to send a filestream still being written and keep sending until the end of the creation

I have a RESTful web service containing a method that returns a stream, and calling this method results in downloading the file associated with the stream.
Now, for performances purpose, I need this file to be downloaded while it's still being created.
Here is the actual code of the method :
[WebGet(UriTemplate = "SendFileStream/")]
public Stream SendFileStream()
{
    // This thread generates a big file
    FileCreatorThread fileCreator = new FileCreatorThread();
    Thread fileCreatorThread = new Thread(fileCreator.CreateFile);
    fileCreatorThread.Start();
    WebOperationContext.Current.OutgoingResponse.Headers.Add(HttpResponseHeader.Expires, DateTime.UtcNow.ToString("ddd, dd MMM yyyy HH:mm:ss 'GMT'"));
    WebOperationContext.Current.OutgoingResponse.ContentType = "multipart/related";
    FileStream fs = new FileStream(@"c:\test.txt", FileMode.Open, FileAccess.Read, FileShare.Write);
    return fs;
}
This method works great: the file is downloadable while being created, without throwing an IOException.
But the problem is that the download is faster than the creation, and it stops as soon as it reaches the end of the stream, without downloading the part that still has to be created.
So my question is: is there a way to keep the download pending until the file has been fully created (assuming we don't know the final length of the file in advance), without needing a second method to check whether the downloaded size matches the actual file length and a method to restart the download until it's completed?
Thanks in advance for your help.
PS: for those who wonder how it is possible to have access to a file in read and write with two different threads, here is the code of the thread generating the file.
public class FileCreatorThread
{
    public void CreateFile()
    {
        FileStream fs = new FileStream(@"c:\test.txt", FileMode.Create, FileAccess.Write, FileShare.Read);
        StreamWriter writer = new StreamWriter(fs);
        for (int i = 0; i < 1000000; i++)
        {
            writer.WriteLine("New line number " + i);
            writer.Flush();
            // Thread.Sleep(1);
        }
        writer.Close();
        fs.Close();
    }
}
Solution:
I finally found a solution to my problem, mainly based on these two pages:
this one
and this one
First of all, I enabled streaming in the web service.
Here's a code snippet from the web.config file; for more information see the links above.
<bindings>
  <webHttpBinding>
    <binding name="httpsStream" transferMode="Streamed" maxReceivedMessageSize="67108864">
      <security mode="Transport"/>
    </binding>
  </webHttpBinding>
</bindings>
Secondly, I created a custom stream with an override of the Read() method that checks if the end of the stream has been reached and, if so, waits a few milliseconds and retries the Read() to make sure this is really the end of the file.
Here is the code of the custom stream:
public class BigFileStream : Stream
{
    FileStream inStream;
    FileStream testStream;
    String filePath;

    internal BigFileStream(string filePath)
    {
        this.filePath = filePath;
        inStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Write);
    }

    public override bool CanRead
    {
        get { return inStream.CanRead; }
    }

    public override bool CanSeek
    {
        get { return false; }
    }

    public override bool CanWrite
    {
        get { return false; }
    }

    public override void Flush()
    {
        throw new Exception("This stream does not support writing.");
    }

    public override long Length
    {
        get { throw new Exception("This stream does not support the Length property."); }
    }

    public override long Position
    {
        get
        {
            return inStream.Position;
        }
        set
        {
            throw new Exception("This stream does not support setting the Position property.");
        }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int countRead = inStream.Read(buffer, offset, count);
        if (countRead != 0)
        {
            return countRead;
        }
        else
        {
            for (int i = 1; i < 10; i++)
            {
                Thread.Sleep(i * 15);
                countRead = inStream.Read(buffer, offset, count);
                if (countRead != 0)
                {
                    return countRead;
                }
            }
            return countRead;
        }
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        throw new Exception("This stream does not support seeking.");
    }

    public override void SetLength(long value)
    {
        throw new Exception("This stream does not support setting the Length.");
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        throw new Exception("This stream does not support writing.");
    }

    public override void Close()
    {
        inStream.Close();
        base.Close();
    }

    protected override void Dispose(bool disposing)
    {
        inStream.Dispose();
        base.Dispose(disposing);
    }
}
And the download method will now return this :
return new BigFileStream(@"c:\test.txt");
This is probably not the cleanest or the most efficient way to do it, so please do not hesitate to comment and suggest other solutions.
Update 2:
Here is a more efficient version of the Read() method, as the one I first posted could still fail if the writer takes more than 815 ms to write more bytes.
public override int Read(byte[] buffer, int offset, int count)
{
    int countRead = inStream.Read(buffer, offset, count);
    if (countRead != 0)
    {
        return countRead;
    }
    else
    {
        Boolean fileAccessible = false;
        while (!fileAccessible)
        {
            try
            {
                // Try to open the file in Write mode; if it throws an exception,
                // the file is still opened by the writer
                testStream = new FileStream(this.filePath, FileMode.Open, FileAccess.Write, FileShare.Read);
                testStream.Close();
                break;
            }
            catch (Exception e)
            {
                Thread.Sleep(500);
                countRead = inStream.Read(buffer, offset, count);
                if (countRead != 0)
                {
                    return countRead;
                }
            }
        }
        countRead = inStream.Read(buffer, offset, count);
        return countRead;
    }
}
The problem you have is because you are trying to get the entire file rather than a segment of the file.
When a file is being written from one side, the other side should request one segment at a time, say 4096 bytes per cycle, and then assemble the entire file once the segments add up to the file length, as sketched below.
I know there is a way to make async web service calls using WCF; try to solve your problem in this direction.
If you still have a problem, contact me.
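To make that concrete, here is a hedged sketch of the client-side loop; GetSegment and IsFileComplete are hypothetical service operations, not part of the question's service:

const int SegmentSize = 4096;
long offset = 0;
using (var output = new FileStream(@"c:\downloaded.txt", FileMode.Create, FileAccess.Write))
{
    while (true)
    {
        byte[] segment = client.GetSegment(offset, SegmentSize); // hypothetical service call
        if (segment.Length > 0)
        {
            output.Write(segment, 0, segment.Length);
            offset += segment.Length;
        }
        else if (client.IsFileComplete(offset)) // hypothetical completion check
        {
            break; // the writer has finished and every byte has been fetched
        }
        else
        {
            Thread.Sleep(100); // the writer is still producing; poll again shortly
        }
    }
}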

Upload file using a virtual path provider and Amazon S3 SDK

The background to this question is based on a virtual file system I'm developing. The concept I'm using is virtual path providers for different storage types, i.e. the local file system, Dropbox, and Amazon S3. My base class for a virtual file looks like this:
public abstract class CommonVirtualFile : VirtualFile {
    public virtual string Url {
        get { throw new NotImplementedException(); }
    }
    public virtual string LocalPath {
        get { throw new NotImplementedException(); }
    }
    public override Stream Open() {
        throw new NotImplementedException();
    }
    public virtual Stream Open(FileMode fileMode) {
        throw new NotImplementedException();
    }
    protected CommonVirtualFile(string virtualPath) : base(virtualPath) { }
}
The implementation of the second Open method is what my question is all about. If we look at my implementation for the local file system i.e saving a file on disk it looks like this:
public override Stream Open(FileMode fileMode) {
    return new FileStream("The_Path_To_The_File_On_Disk", fileMode);
}
If I would like to save a file on the local file system this would look something like this:
const string virtualPath = "/assets/newFile.txt";
var file = HostingEnvironment.VirtualPathProvider.GetFile(virtualPath) as CommonVirtualFile;
if (file == null) {
    var virtualDir = VirtualPathUtility.GetDirectory(virtualPath);
    var directory = HostingEnvironment.VirtualPathProvider.GetDirectory(virtualDir) as CommonVirtualDirectory;
    file = directory.CreateFile(VirtualPathUtility.GetFileName(virtualPath));
}
byte[] fileContent;
using (var fileStream = new FileStream(@"c:\temp\fileToCopy.txt", FileMode.Open, FileAccess.Read)) {
    fileContent = new byte[fileStream.Length];
    fileStream.Read(fileContent, 0, fileContent.Length);
}
// write the content to the local file system
using (Stream stream = file.Open(FileMode.Create)) {
    stream.Write(fileContent, 0, fileContent.Length);
}
What I want is that if I switch to my Amazon S3 virtual path provider, this code should work directly without any changes. So, to sum things up: how can I solve this using the Amazon S3 SDK, and how should I implement my Open(FileMode fileMode) method in my Amazon S3 virtual path provider?
Hey, I ran into this problem too, and I solved it by implementing a stream.
Here is how I did it; maybe it helps:
public static Stream OpenStream(S3TransferUtility transferUtility, string key)
{
    byte[] buffer = new byte[Buffersize + Buffersize / 2];
    S3CopyMemoryStream s3CopyStream =
        new S3CopyMemoryStream(key, buffer, transferUtility)
            .WithS3CopyFileStreamEvent(CreateMultiPartS3Blob);
    return s3CopyStream;
}
My stream overrides the Close() and Write(array, offset, count) methods and uploads the stream to Amazon S3 in parts.
public class S3CopyMemoryStream : MemoryStream
{
    public S3CopyMemoryStream WithS3CopyFileStreamEvent(StartUploadS3CopyFileStreamEvent doing)
    {
        S3CopyMemoryStream s3CopyStream = new S3CopyMemoryStream(this._key, this._buffer, this._transferUtility);
        s3CopyStream.StartUploadS3FileStreamEvent = new S3CopyMemoryStream.StartUploadS3CopyFileStreamEvent(CreateMultiPartS3Blob);
        return s3CopyStream;
    }

    public S3CopyMemoryStream(string key, byte[] buffer, S3TransferUtility transferUtility)
        : base(buffer)
    {
        if (buffer.LongLength > int.MaxValue)
            throw new ArgumentException("The length of the buffer may not be longer than int.MaxValue", "buffer");
        InitiatingPart = true;
        EndOfPart = false;
        WriteCount = 1;
        PartETagCollection = new List<PartETag>();
        _buffer = buffer;
        _key = key;
        _transferUtility = transferUtility;
    }
The StartUploadS3FileStreamEvent event invokes a call that initiates the multipart upload, uploads the parts, and completes the upload.
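For reference, this is roughly the low-level sequence that event drives, sketched with the AWS SDK's multipart types; s3Client, bucketName, key, partNumber, and partStream are assumptions, and older SDK versions spell these requests with With* fluent setters (as in the snippet further below) instead of object initializers.

var init = s3Client.InitiateMultipartUpload(new InitiateMultipartUploadRequest
{
    BucketName = bucketName,
    Key = key
});
var partETags = new List<PartETag>();
// ... for each accumulated part (at least 5 MB except the last):
var part = s3Client.UploadPart(new UploadPartRequest
{
    BucketName = bucketName,
    Key = key,
    UploadId = init.UploadId,
    PartNumber = partNumber,
    InputStream = partStream
});
partETags.Add(new PartETag(partNumber, part.ETag));
// ... when the stream is closed:
s3Client.CompleteMultipartUpload(new CompleteMultipartUploadRequest
{
    BucketName = bucketName,
    Key = key,
    UploadId = init.UploadId,
    PartETags = partETags
});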
Alternatively, you could implement a FileStream, which is much easier, because you can use

TransferUtilityUploadRequest request =
    new TransferUtilityUploadRequest()
        .WithAutoCloseStream(false)
        .WithBucketName(transferUtility.BucketName)
        .WithKey(key)
        .WithPartSize(stream.PartSize)
        .WithInputStream(stream) as TransferUtilityUploadRequest;
transferUtility.Upload(request);

in the Close() method of the overridden FileStream. The disadvantage is that you have to write all the data to disk first, and only then can you upload it.

Read file.inputstream twice

I need to read a CSV file twice, but after the first read:
using (var csvReader = new StreamReader(file.InputStream))
{
    fileFullText += csvReader.ReadToEnd();
    file.InputStream.Seek(0, SeekOrigin.Begin);
    csvReader.Close();
}
using the file in another function:
public static List<string> ParceCsv(HttpPostedFileBase file)
{
    //file.InputStream.Seek(0, SeekOrigin.Begin);
    using (var csvReader = new StreamReader(file.InputStream))
    {
        // csvReader.DiscardBufferedData();
        // csvReader.BaseStream.Seek(0, SeekOrigin.Begin);
        string inputLine = "";
        var values = new List<string>();
        while ((inputLine = csvReader.ReadLine()) != null)
        {
            values.Add(inputLine.Trim().Replace(",", "").Replace(" ", ""));
        }
        csvReader.Close();
        return values;
    }
}
The file.Length is 0.
Can anybody help?
The reason is that StreamReader's Dispose() method also closes the underlying stream; in your case, file.InputStream. The using statement calls Dispose() implicitly. Try replacing the using statements with explicit disposal of both StreamReaders after you have finished both read operations. As I remember, some stream classes have a bool option to leave the underlying stream open after dispose.
.NET 4.5 fixed this issue by introducing the leaveOpen parameter in the StreamReader constructor. See: MSDN
public StreamReader(
    Stream stream,
    Encoding encoding,
    bool detectEncodingFromByteOrderMarks,
    int bufferSize,
    bool leaveOpen
)
One more thing: you do not need to close the StreamReader yourself (the line with csvReader.Close();) when you wrap it in a using statement, since Dispose() and Close() do the same thing in the case of StreamReader. A sketch with leaveOpen follows.
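Applied to the question's first snippet, the .NET 4.5 overload looks like this (encoding and buffer size are spelled out only because this constructor overload requires them):

// leaveOpen: true keeps file.InputStream usable after the reader is disposed.
using (var csvReader = new StreamReader(file.InputStream, Encoding.UTF8,
    detectEncodingFromByteOrderMarks: true, bufferSize: 1024, leaveOpen: true))
{
    fileFullText += csvReader.ReadToEnd();
}
file.InputStream.Seek(0, SeekOrigin.Begin); // the stream is still open, so rewinding is safe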
If you're using HttpPostedFileBase you need to clone it first.
Use the code from this git here,
or just add this class to your namespace:
public static class HttpPostedFileBaseExtensions
{
    public static Byte[] ToByteArray(this HttpPostedFileBase value)
    {
        if (value == null)
            return null;
        var array = new Byte[value.ContentLength];
        value.InputStream.Position = 0;
        value.InputStream.Read(array, 0, value.ContentLength);
        return array;
    }
}
Now you can read the HttpPostedFileBase like so:
private static void doSomeStuff(HttpPostedFileBase file)
{
    try
    {
        using (var reader = new MemoryStream(file.ToByteArray()))
        {
            // do some stuff... say read it to xml
            using (var xmlTextReader = new XmlTextReader(reader))
            {
            }
        }
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
After using this you can still write in your main code:
file.SaveAs(path);
and it will save the file.
