HttpPostedFileBase Stream Upload to AWS SDK Bucket has no Data - c#

I'm testing how to upload to AWS using SDK with a sample .txt file from a web app. The file uploads to the Bucket, but the downloaded file from the bucket is just an empty Notepad document without the text from the original uploaded file. I'm new to working with streams, so I'm not sure what could be wrong here. Does anyone see why the data wouldn't be sent in the transfer request? Thanks in advance!
using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest1))
{
    //Save File to Bucket
    using (FileStream txtFileStream = (FileStream)UploadedHttpFileBase.InputStream)
    {
        try
        {
            TransferUtility fileTransferUtility = new TransferUtility();
            fileTransferUtility.Upload(txtFileStream, bucketLocation,
                UploadedHttpFileBase.FileName);
        }
        catch (Exception e)
        {
            e.Message.ToString();
        }
    }
}
EDIT:
Both TransferUtility and PutObjectRequest/PutObjectResponse/AmazonS3Client.PutObject saved a blank text file. Then, after some trouble instantiating a new FileStream, a MemoryStream (used after resetting the source's starting position to zero) still saved a blank text file. Any ideas?
New Code:
using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest1))
{
    Stream saveableStream = new MemoryStream();
    using (Stream source = (Stream)UploadedHttpFileBase.InputStream)
    {
        source.Position = 0;
        source.CopyTo(saveableStream);
    }
    //Save File to Bucket
    try
    {
        PutObjectRequest request = new PutObjectRequest
        {
            BucketName = bucketLocation,
            Key = UploadedHttpFileBase.FileName,
            InputStream = saveableStream
        };
        PutObjectResponse response = client.PutObject(request);
    }
    catch (Exception e)
    {
        e.Message.ToString();
    }
}

Most probably TransferUtility doesn't work well with temporary upload files. Try copying your input stream somewhere else (e.g. into another, not-so-temporary file, or even a MemoryStream if you're sure it won't give you an OutOfMemoryException at some point). Another option is to drop TransferUtility and use the low-level AmazonS3Client.PutObject, which gives you finer control over the Stream's lifetime (don't forget that you'll need to implement some retrying, as the S3 API is prone to returning random temporary errors).
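A minimal sketch of that suggestion, assuming the question's UploadedHttpFileBase and bucketLocation and an arbitrary retry count of three; this is one way it could look, not the poster's actual code. Note the rewind after CopyTo, which the question's "New Code" above is missing for saveableStream:

using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest1))
using (var buffer = new MemoryStream())
{
    UploadedHttpFileBase.InputStream.CopyTo(buffer);
    buffer.Position = 0; // rewind, or PutObject reads zero bytes from the end

    for (int attempt = 1; ; attempt++)
    {
        try
        {
            client.PutObject(new PutObjectRequest
            {
                BucketName = bucketLocation,
                Key = UploadedHttpFileBase.FileName,
                InputStream = buffer
            });
            break;
        }
        catch (AmazonS3Exception) when (attempt < 3)
        {
            buffer.Position = 0; // reset before retrying
        }
    }
}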

The answer had to do with nesting, which is still a little beyond my understanding, and not because the code posted here was inherently wrong. This code ran after an initial StreamReader that checked the first line of the text file to decide whether or not to save it. After moving the upload out of the while loop doing the ReadLines, it worked. Everything works as it should now that the validation is reorganized so that there's no need for the nested Stream or MemoryStream.
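For reference, a hedged sketch of what that reorganized flow might look like: buffer the upload once, validate the first line, rewind, then hand the same buffer to TransferUtility. IsValidHeader is a hypothetical stand-in for whatever check the poster's StreamReader loop performed:

using (var buffer = new MemoryStream())
{
    UploadedHttpFileBase.InputStream.CopyTo(buffer);
    buffer.Position = 0;

    string firstLine;
    using (var reader = new StreamReader(buffer, Encoding.UTF8,
        detectEncodingFromByteOrderMarks: true, bufferSize: 1024, leaveOpen: true))
    {
        firstLine = reader.ReadLine();
    }

    if (IsValidHeader(firstLine)) // hypothetical first-line check
    {
        buffer.Position = 0; // rewind so the upload starts from byte zero
        new TransferUtility().Upload(buffer, bucketLocation, UploadedHttpFileBase.FileName);
    }
}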

Related

Stream video from RavenDB through ASP.Net MVC

I'm trying to stream a video, saved as an attachment in a RavenDB database, through an ASP.NET MVC 5 action to a web browser. It works with the following code, but the video gets fully downloaded before it starts playing. I don't get what I'm doing wrong.
I found some ways to do streaming in MVC, but they seem to expect a seekable stream, and the stream I receive from RavenDB is not seekable; it doesn't even provide a length. So the only way of doing it would be to copy the RavenDB stream to a MemoryStream and serve PartialContent or similar from there.
Does anybody have a better solution? I can't be the only one who wants to stream a video from a database without loading the whole video into memory before sending it.
I'm fetching the attachment from ravendb like this:
public async Task<System.IO.Stream> GetAttachmentAsync(IAttachmentPossible attachedObject, string key)
{
    using (var ds = InitDatabase())
    {
        using (var session = ds.OpenAsyncSession())
        {
            try
            {
                var result = await session.Advanced.Attachments.GetAsync(attachedObject.Id, key);
                return result.Stream;
            }
            catch (Exception ex)
            {
                return null;
            }
        }
    }
}
After that I send the stream to the browser like this:
var file = await _database.GetAttachmentAsync(entry, attachmentId);
HttpResponseMessage msg = new HttpResponseMessage(HttpStatusCode.OK);
msg.Content = new StreamContent(file);
msg.Content.Headers.ContentType = new MediaTypeHeaderValue("video/mp4");
return msg;
Any ideas? Thank you very much!
I think it is correct to copy the stream into a MemoryStream. With the answer you linked (https://stackoverflow.com/a/39247028/10291808) you can do streaming.
It might also be an idea to think about the multiple calls that will be made to retrieve the file from the db (maybe you could cache the file for a limited time to improve performance).
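For what it's worth, a sketch of how that could look in a Web API-style action, using ByteRangeStreamContent (from System.Net.Http.Formatting) to honor Range requests once the attachment is buffered into a seekable MemoryStream. The entry, attachmentId and _database names come from the question; GetVideo is a hypothetical action name:

public async Task<HttpResponseMessage> GetVideo(IAttachmentPossible entry, string attachmentId)
{
    // Buffering the non-seekable RavenDB stream is what enables seeking.
    var ravenStream = await _database.GetAttachmentAsync(entry, attachmentId);
    var seekable = new MemoryStream();
    await ravenStream.CopyToAsync(seekable);
    seekable.Position = 0;

    var mediaType = new MediaTypeHeaderValue("video/mp4");

    // No Range header: send the whole file, as the original code does.
    if (Request.Headers.Range == null)
    {
        var full = new HttpResponseMessage(HttpStatusCode.OK);
        full.Content = new StreamContent(seekable);
        full.Content.Headers.ContentType = mediaType;
        return full;
    }

    // Range header present: return 206 Partial Content with just the
    // requested slice, so the browser can start playback and seek.
    var partial = new HttpResponseMessage(HttpStatusCode.PartialContent);
    partial.Content = new ByteRangeStreamContent(seekable, Request.Headers.Range, mediaType);
    return partial;
}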

What is the difference between SftpClient.UploadFile and SftpClient.WriteAllBytes?

I am observing some strange behaviour when I use SSH.NET to transfer files with SFTP. I am using SFTP to transfer XML files to another service (which I don't control) for processing. If I use SftpClient.WriteAllBytes the service complains the file is not valid XML. If I first write to a temporary file and then use SftpClient.UploadFile the transfer is successful.
What's happening?
Using .WriteAllBytes:
public void Send(string remoteFilePath, byte[] contents)
{
    using (var client = new SftpClient(new ConnectionInfo(/* username, password, etc. */)))
    {
        client.Connect();
        client.WriteAllBytes(remoteFilePath, contents);
    }
}
Using .UploadFile:
public void Send(string remoteFilePath, byte[] contents)
{
    var tempFileName = Path.GetTempFileName();
    File.WriteAllBytes(tempFileName, contents);
    using (var fs = new FileStream(tempFileName, FileMode.Open))
    using (var client = new SftpClient(new ConnectionInfo(/* username, password, etc. */)))
    {
        client.Connect();
        client.UploadFile(fs, remoteFilePath);
    }
}
Edit:
Will in the comments asked how I turn the XML into a byte-array. I didn't think this was relevant, but then again I'm the one asking the question... :P
// somewhere else:
// XDocument xdoc = CreateXDoc();
using (var st = new MemoryStream())
{
    using (var xw = XmlWriter.Create(st, new XmlWriterSettings { Encoding = Encoding.UTF8, Indent = true }))
    {
        xdoc.WriteTo(xw);
    }
    return st.ToArray();
}
I can reproduce your problem using SSH.NET 2016.0.0 from NuGet, but not with 2016.1.0-beta1.
Inspecting the code, I can see that the SftpFileStream (which WriteAllBytes uses) keeps writing the same (starting) piece of the data all the time.
It seems that you are suffering from this bug:
https://github.com/sshnet/SSH.NET/issues/70
While the bug description does not make it clear that it's your problem, the commit that fixes it matches the problem I have found:
Take into account the offset in SftpFileStream.Write(byte[] buffer, int offset, int count) when not writing to the buffer. Fixes issue #70.
To answer your question: the methods should indeed behave similarly.
Except that SftpClient.UploadFile is optimized for uploading large amounts of data, while SftpClient.WriteAllBytes is not, so the underlying implementations are very different.
Also, SftpClient.WriteAllBytes does not truncate an existing file, which matters when you are uploading less data than the existing file contains.
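Incidentally, SSH.NET's UploadFile accepts any Stream, not just a FileStream, so on a version with the bug fixed the temporary file can be skipped entirely. A minimal sketch, mirroring the question's placeholder connection info; the canOverride flag asks for an existing remote file to be replaced (and truncated):

public void Send(string remoteFilePath, byte[] contents)
{
    using (var client = new SftpClient(new ConnectionInfo(/* username, password, etc. */)))
    using (var ms = new MemoryStream(contents))
    {
        client.Connect();
        // canOverride: true replaces any existing remote file
        client.UploadFile(ms, remoteFilePath, canOverride: true);
    }
}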

Zip within a zip opens to undocumented System.IO.Compression.SubReadStream

I have a function I use for aggregating streams from a zip archive.
private void ExtractMiscellaneousFiles()
{
    foreach (var miscellaneousFileName in _fileData.MiscellaneousFileNames)
    {
        var fileEntry = _archive.GetEntry(miscellaneousFileName);
        if (fileEntry == null)
        {
            throw new ZipArchiveMissingFileException("Couldn't find " + miscellaneousFileName);
        }
        var stream = fileEntry.Open();
        OtherFileStreams.Add(miscellaneousFileName, (DeflateStream)stream);
    }
}
This works well in most cases. However, if I have a zip within a zip, I get an exception on casting the stream to a DeflateStream:
System.InvalidCastException: Unable to cast object of type 'System.IO.Compression.SubReadStream' to type 'System.IO.Compression.DeflateStream'.
I am unable to find Microsoft documentation for a SubReadStream. I would like my zip within a zip as a DeflateStream. Is this possible? If so how?
UPDATE
Still no success. I attempted @Sunshine's suggestion of copying the stream using the following code:
private void ExtractMiscellaneousFiles()
{
    _logger.Log("Extracting misc files...");
    foreach (var miscellaneousFileName in _fileData.MiscellaneousFileNames)
    {
        _logger.Log($"Opening misc file stream for {miscellaneousFileName}");
        var fileEntry = _archive.GetEntry(miscellaneousFileName);
        if (fileEntry == null)
        {
            throw new ZipArchiveMissingFileException("Couldn't find " + miscellaneousFileName);
        }
        var openStream = fileEntry.Open();
        var deflateStream = openStream;
        if (!(deflateStream is DeflateStream))
        {
            var memoryStream = new MemoryStream();
            deflateStream.CopyTo(memoryStream);
            memoryStream.Position = 0;
            deflateStream = new DeflateStream(memoryStream, CompressionLevel.NoCompression, true);
        }
        OtherFileStreams.Add(miscellaneousFileName, (DeflateStream)deflateStream);
    }
}
But I get a
System.NotSupportedException: Stream does not support reading.
I inspected deflateStream.CanRead and it is true.
I've discovered this happens not just on zips, but on files that are in the zip but are not compressed (because they're too small, for example). Surely there's a way to deal with this; surely someone has encountered this before. I'm opening a bounty on this question.
Here's the .NET source for SubReadStream, thanks to @Quantic.
The return type of ZipArchiveEntry.Open() is Stream. An abstract type; in practice it can be a DeflateStream (you'd be happy), a SubReadStream (boo) or a WrappedStream (boo). Woe be you if they decide to improve the class some day and use a ZopfliStream (boo). The workaround is not good either: you are trying to deflate data that is not compressed (boo).
Too many boos.
The only good solution is to change the type of your OtherFileStreams member. We can't see it, but it smells like a List<DeflateStream>. It needs to be a List<Stream>.
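A minimal sketch of that fix, assuming OtherFileStreams is actually a dictionary keyed by file name (the question calls .Add(name, stream)): declare it against the abstract Stream type and drop the cast entirely:

// Keyed by file name, since the question calls .Add(name, stream).
public Dictionary<string, Stream> OtherFileStreams = new Dictionary<string, Stream>();

private void ExtractMiscellaneousFiles()
{
    foreach (var name in _fileData.MiscellaneousFileNames)
    {
        var fileEntry = _archive.GetEntry(name);
        if (fileEntry == null)
        {
            throw new ZipArchiveMissingFileException("Couldn't find " + name);
        }
        // Works whether Open() returns a DeflateStream, SubReadStream,
        // or WrappedStream -- the code only ever reads from it.
        OtherFileStreams.Add(name, fileEntry.Open());
    }
}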
So it looks like when storing a zip file inside another zip, it doesn't deflate the inner zip but rather just inlines its content alongside the rest of the files, with some information that these entries are part of a sub zip file. Which makes sense, because applying compression to something that is already compressed is a waste of time.
This zip file is marked as CompressionMethodValues.Stored in the archive, which causes .NET to just return the original stream it read instead to wrapping it in a DeflateStream.
Source here: https://github.com/dotnet/corefx/blob/master/src/System.IO.Compression/src/System/IO/Compression/ZipArchiveEntry.cs#L670
You could pass the stream into a ZipArchive, if it's not a DeflateStream (if you are interested in the file inside):
var stream = entry.Open();
if (!(stream is DeflateStream))
{
    var subArchive = new ZipArchive(stream);
}
Or you can copy the stream to a FileStream (if you want to save it to disk):
var stream = entry.Open();
if (!(stream is DeflateStream))
{
    var fs = File.Create(Path.GetTempFileName());
    stream.CopyTo(fs);
    fs.Close();
}
Or copy to any stream you are interested in using.
Note: This is also how .NET 4.6 behaves

Creating zipFile exhausts memory

I have an ASP.NET Web API project where a user can download some stuff from a database.
My Download controller fetches data from the database instance. Every single result has a blob field, which is some kind of data (1).
I want to add each result to a ZIP file (2). After that I send the HTTP response with my stream content.
List<Result> results = m_Repository.GetResultsForResultId(given_id_by_request);

// 1
foreach (Result result in results)
{
    string fileName = String.Format("{0}-{1}.bin", id >> 16, result.Id);
    zipFile.AddEntry(fileName, result.Value);
}

// 2
PushStreamContent pushStreamContent = new PushStreamContent((stream, content, context) =>
{
    zipFile.Save(stream);
    stream.Close();
});

response = new HttpResponseMessage(HttpStatusCode.OK) { Content = pushStreamContent };
It works nicely! But on big download requests this exhausts my memory. I need to find a way to write streams into a zip archive without buffering everything. Can someone please help me?!
As far as I can see from the code you posted, you are not disposing the streams you create after use. This can add up to a great amount of memory being reserved by your app, which might be causing your problems.
I am using ZipArchive to put multiple files into a zip file in my web application. The code looks somewhat like this:
using (var compressedFileStream = new MemoryStream())
{
    using (var zipArchive = new ZipArchive(compressedFileStream, ZipArchiveMode.Update, false))
    {
        foreach (Result result in results)
        {
            string fileName = String.Format("{0}-{1}.bin", id >> 16, result.Id);
            var zipEntry = zipArchive.CreateEntry(fileName);
            using (var originalFileStream = new MemoryStream(result.Value))
            {
                using (var zipEntryStream = zipEntry.Open())
                {
                    originalFileStream.CopyTo(zipEntryStream);
                }
            }
        }
    }
    return File(compressedFileStream.ToArray(), "application/zip", string.Format("Download_{0:ddMMyyy_hhmm}.zip", DateTime.Now));
}
I am using that code snippet inside an MVC controller method, so you have to adapt the return part for your situation.
The above code works fine in my application for up to 300 entries or 50MB volume (those are the limits set by the requirements for my app).
Hope that helps you.
EDIT: I forgot the closing bracket of the first using block. The return statement has to be inside this using block, otherwise the stream will be disposed.
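A hedged alternative sketch for the original memory problem: create the ZipArchive (ZipArchiveMode.Create, which writes forward-only) directly over the stream that PushStreamContent hands you, so each entry is compressed straight into the response instead of being buffered in a MemoryStream first. The results and id names come from the question; only one result.Value blob is held in memory at a time:

PushStreamContent pushStreamContent = new PushStreamContent((stream, content, context) =>
{
    // ZipArchiveMode.Create writes forward-only, which suits a network stream.
    using (var archive = new ZipArchive(stream, ZipArchiveMode.Create, leaveOpen: false))
    {
        foreach (Result result in results)
        {
            var entry = archive.CreateEntry(String.Format("{0}-{1}.bin", id >> 16, result.Id));
            using (var entryStream = entry.Open())
            {
                // Each blob is compressed straight into the response stream.
                entryStream.Write(result.Value, 0, result.Value.Length);
            }
        }
    } // disposing the archive closes the stream and completes the response
}, "application/zip");

var response = new HttpResponseMessage(HttpStatusCode.OK) { Content = pushStreamContent };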

Nancy httpfile issue

I have problems parsing request files.
My file size is 1338521 bytes, but Nancy says the file size is sometimes 1751049 or 3200349.
On my Windows PC it works fine; on the Linux server this problem appears, so I can't save the file.
string result = Convert.ToBase64String(Core.ReadBytesFromStream(file.Value));
using (MemoryStream ms = new MemoryStream(Convert.FromBase64String(result)))
{
    using (Bitmap bm2 = new Bitmap(ms))
    {
        bm2.Save(path);
    }
}
Any ideas?
You don't need to convert the file like that.
var filename = Path.Combine(storagePath, Request.Files[0].Name);
using (var fileStream = new FileStream(filename, FileMode.Create))
{
    Request.Files[0].Value.CopyTo(fileStream);
}
Validate the file when it comes in to ensure the extension is accepted, create a save path, and copy the stream to a new file on the filesystem.
That's it.
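A minimal sketch of that validate-then-save flow, with a hypothetical storagePath and extension whitelist; the Position reset is a defensive assumption in case the multipart stream was already read earlier in the pipeline:

var file = Request.Files.FirstOrDefault();
var allowed = new[] { ".png", ".jpg", ".gif" }; // hypothetical whitelist

if (file != null && allowed.Contains(Path.GetExtension(file.Name).ToLowerInvariant()))
{
    var filename = Path.Combine(storagePath, Path.GetFileName(file.Name));
    using (var fileStream = new FileStream(filename, FileMode.Create))
    {
        file.Value.Position = 0; // defensive rewind before copying
        file.Value.CopyTo(fileStream);
    }
}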
