I am trying to upload a compressed GZipStream to an SFTP server using the SSH.NET library. The problem is that once I create the GZipStream, it can no longer be read. Below is my code:
using (SftpClient client = new SftpClient(connectionInfo))
{
    client.Connect();
    client.ChangeDirectory("/upload");
    var uploadFileDirectory = client.WorkingDirectory + "/testXml.xml.gz";
    using (GZipStream gzs = new GZipStream(stream, CompressionLevel.Fastest))
    {
        stream.CopyTo(gzs);
        client.UploadFile(gzs, "text.xml.gz"); // fails here: gzs cannot be read in compression mode
    }
}
The SftpClient's UploadFile takes a stream, and I need to upload the GZipStream as it is being compressed (without writing it to a local drive and reading it back). But the GZipStream doesn't allow reading while it is in compression mode. I tried doing the upload outside the GZipStream's using block, and it says the stream cannot be accessed.
How can I approach this? Is it even possible to do it directly this way, or do I need to write the file to a local drive and then upload it?
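As a quick aside, the write-only behavior is easy to confirm; a minimal sketch, assuming nothing but a throwaway MemoryStream:

using (var backing = new MemoryStream())
using (var gzs = new GZipStream(backing, CompressionLevel.Fastest))
{
    Console.WriteLine(gzs.CanRead);  // False: compression mode is write-only
    Console.WriteLine(gzs.CanWrite); // True
}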
For future reference, I managed to find out how to do this. You can't read the GZipStream in compression mode, but you can create another MemoryStream from the previous stream's bytes, like this:
using (SftpClient client = new SftpClient(connectionInfo))
{
    client.Connect();
    client.ChangeDirectory("/upload");
    using (MemoryStream outputStream = new MemoryStream())
    {
        using (var gzip = new GZipStream(outputStream, CompressionLevel.Fastest))
        {
            stream.CopyTo(gzip);
        } // disposing the GZipStream flushes the remaining compressed bytes into outputStream
        using (Stream stm = new MemoryStream(outputStream.ToArray()))
        {
            client.UploadFile(stm, "txt.gz");
        }
    }
}
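A variant that skips the second MemoryStream: if the GZipStream is created with leaveOpen: true (an assumption; the code above uses the default, which closes outputStream on dispose), the stream survives the compressor's dispose and can be rewound and uploaded directly:

using (MemoryStream outputStream = new MemoryStream())
{
    using (var gzip = new GZipStream(outputStream, CompressionLevel.Fastest, leaveOpen: true))
    {
        stream.CopyTo(gzip);
    }
    outputStream.Position = 0; // rewind so UploadFile reads from the beginning
    client.UploadFile(outputStream, "txt.gz");
}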
Related
I want to decompress a file that was uploaded gzip-encoded to S3, straight into a file stream.
Here is my method that returns the gzip stream after decompressing the S3 stream:
using var stream = await _s3.GetObjectStreamAsync(_processServiceOptions.BucketName, key, null);
using var gzipStream = new GZipStream(stream, CompressionMode.Decompress, true);
await WriteToFileAsync(gzipStream);
I'm trying to use it like so, copying it directly to the file stream instead of loading it into memory with another stream...
async Task WriteToFileAsync(Stream data)
{
    using (var fs = File.OpenWrite(path))
    {
        await data.CopyToAsync(fs);
    }
}
However, I'm getting System.IO.InvalidDataException: The archive entry was compressed using an unsupported compression method.
Why is that?
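For reference, that exception generally means the bytes reaching GZipStream do not start with the deflate-compressed gzip header it expects. A hedged diagnostic sketch, reusing the call from the question: peek at the first three bytes, which should be 0x1F 0x8B 0x08 per RFC 1952 (the S3 response stream isn't seekable, so the object has to be fetched again after this probe):

using var probe = await _s3.GetObjectStreamAsync(_processServiceOptions.BucketName, key, null);
var header = new byte[3];
int read = await probe.ReadAsync(header, 0, 3);
// a genuine gzip stream starts 1F 8B 08; anything else explains the InvalidDataException
Console.WriteLine($"read {read} bytes: {header[0]:X2} {header[1]:X2} {header[2]:X2}");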
I use the same code for gzipping streams all the time, in the same way, but for some reason it does not work with the Azure.Storage.Blobs version 12.6.0 library:
m_currentFileName = Guid.NewGuid() + ".txt.gz";
var blockBlob = new BlockBlobClient(m_connectionString, m_containerName, GetTempFilePath());
using (var stream = await blockBlob.OpenWriteAsync(true))
using (var currentStream = new GZipStream(stream, CompressionMode.Compress))
using (var writer = new StreamWriter(currentStream))
{
    writer.WriteLine("Hello world!");
}
After that I get a 0 B file in Azure Blob Storage.
The same code without GZipStream works as expected.
I found lots of code examples that copy the data to a MemoryStream first, but I don't want to keep my data in RAM. I didn't find any related issues on Stack Overflow or in the Azure Blob Storage GitHub repository, so I may be doing something wrong here. Any suggestions?
It appears the GZipStream needs to be closed explicitly. According to the GZipStream.Write documentation:
"The write operation might not occur immediately but is buffered until the buffer size is reached or until the Flush or Close method is called."
For example:
using (var stream = new MemoryStream())
{
    using (var gzipStream = new GZipStream(stream, CompressionMode.Compress, true))
    using (var writer = new StreamWriter(gzipStream))
    {
        writer.Write("Hello world!");
    } // disposing the writer and GZipStream flushes all buffered compressed data into stream
    stream.Position = 0; // rewind before handing the stream to the blob client
    await blockBlob.UploadAsync(stream);
}
Whenever I try to upload a file to the SFTP server with the .csv file extension, the only thing inside the file is the text "System.IO.MemoryStream". With a .txt extension it has all the values. I can manually convert the .txt to .csv and it is fine. Is it possible to upload it directly to the SFTP server as a CSV file?
The SFTP Service is using the SSH.NET library by Renci.
Using statement:
using (var stream = csvFileWriter.Write(data, new CsvMapper()))
{
    byte[] file = Encoding.UTF8.GetBytes(stream.ToString()); // bug: ToString() on a stream returns its type name
    sftpService.Put(SftpCredential.Credentials.Id, file, $"/file.csv");
}
SFTP service:
public void Put(int credentialId, byte[] source, string destination)
{
    using (SftpClient client = new SftpClient(GetConnectionInfo(credentialId)))
    {
        ConnectClient(client);
        using (MemoryStream memoryStream = new MemoryStream(source))
        {
            client.BufferSize = 4 * 1024; // bypass payload error on large files
            client.UploadFile(memoryStream, destination);
        }
        DisconnectClient(client);
    }
}
Solution:
The csvFileWriter I was using returned a Stream, not a MemoryStream, so by switching the csvFileWriter and CsvPut() over to MemoryStream it worked.
Updated using statement:
using (var stream = csvFileWriter.Write(data, new CsvMapper()))
{
    stream.Position = 0; // rewind before uploading
    sftpService.CsvPut(SftpCredential.Credentials.Id, stream, $"/file.csv");
}
Updated SFTP service:
public void CsvPut(int credentialId, MemoryStream source, string destination)
{
    using (SftpClient client = new SftpClient(GetConnectionInfo(credentialId)))
    {
        ConnectClient(client);
        client.BufferSize = 4 * 1024; // bypass payload error on large files
        client.UploadFile(source, destination);
        DisconnectClient(client);
    }
}
It looks like csvFileWriter.Write already returns a MemoryStream, and its ToString returns the string "System.IO.MemoryStream". That's the root cause of your problem.
Additionally, as you already have the MemoryStream, it's overkill to copy it to yet another MemoryStream; upload it directly. Copying the data over and over again is just a waste of memory.
Like this:
var stream = csvFileWriter.Write(data, new CsvMapper());
stream.Position = 0;
client.UploadFile(stream, destination);
See also:
Upload data from memory to SFTP server using SSH.NET
When uploading memory stream with contents created by csvhelper using SSH.NET to SFTP server, the uploaded file is empty
A simple test code to upload in-memory data:
var stream = new MemoryStream();
stream.Write(Encoding.UTF8.GetBytes("this is test"));
stream.Position = 0;
using (var client = new SftpClient("example.com", "username", "password"))
{
    client.Connect();
    client.UploadFile(stream, "/remote/path/file.txt");
}
You can avoid the unnecessary MemoryStream altogether by uploading straight from a file, like this:
using (var sftp = new SftpClient(GetConnectionInfo(SftpCredential.GetById(credentialId).Id)))
{
    sftp.Connect();
    using (var uplfileStream = System.IO.File.OpenRead(fileName))
    {
        sftp.UploadFile(uplfileStream, fileName, true);
    }
    sftp.Disconnect();
}
I want to create a zip file and return it to the browser so that it downloads the zip to the downloads folder.
var images = imageRepository.GetAll(allCountryId);
using (FileStream f2 = new FileStream("SudaAmerica", FileMode.Create))
using (GZipStream gz = new GZipStream(f2, CompressionMode.Compress, false))
{
    foreach (var image in images)
    {
        gz.Write(image.ImageData, 0, image.ImageData.Length);
    }
    return base.File(gz, "application/zip", "SudaAmerica");
}
I have tried the above, but then I get an error saying the stream is disposed.
Is this possible, or should I use another library than GZipStream?
The problem here is exactly what it says: you are handing it something based on gz, but gz gets disposed the moment you leave the using block.
One option would be to wait until outside the using block, then tell it to use the filename of the thing you just wrote ("SudaAmerica"). However, IMO you shouldn't actually be writing a file here at all. If you use a MemoryStream instead, you can use .ToArray() to get a byte[] of the contents, which you can pass to the File method. This requires no IO access, which is a win in about 20 different ways. Well, maybe 3 ways. But still:
var images = imageRepository.GetAll(allCountryId);
using (MemoryStream ms = new MemoryStream())
{
    using (GZipStream gz = new GZipStream(ms, CompressionMode.Compress, false))
    {
        foreach (var image in images)
        {
            gz.Write(image.ImageData, 0, image.ImageData.Length);
        }
    } // dispose the GZipStream first so all compressed data is flushed into ms
    return base.File(ms.ToArray(), "application/zip", "SudaAmerica");
}
Note that a gzip stream is not the same as a .zip archive, so I very much doubt this will give the result you want. Zip archive creation is available elsewhere in the .NET Framework, but not via GZipStream.
You probably want ZipArchive
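A minimal sketch of that approach (System.IO.Compression.ZipArchive), reusing the repository call from the question; the entry names are hypothetical, since the question doesn't say how images are named:

var images = imageRepository.GetAll(allCountryId);
using (MemoryStream ms = new MemoryStream())
{
    // leaveOpen: true so ms survives the archive's dispose, which writes the zip central directory
    using (var archive = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
    {
        int i = 0;
        foreach (var image in images)
        {
            var entry = archive.CreateEntry($"image{i++}.jpg"); // hypothetical naming scheme
            using (var entryStream = entry.Open())
            {
                entryStream.Write(image.ImageData, 0, image.ImageData.Length);
            }
        }
    }
    return base.File(ms.ToArray(), "application/zip", "SudaAmerica.zip");
}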
I have problems parsing request files.
My file size is 1338521 bytes, but Nancy sometimes says the file size is 1751049 or 3200349.
On my Windows PC it works fine; on the Linux server this problem appears, so I can't save the file.
string result = Convert.ToBase64String(Core.ReadBytesFromStream(file.Value));
using (MemoryStream ms = new MemoryStream(Convert.FromBase64String(result)))
{
    using (Bitmap bm2 = new Bitmap(ms))
    {
        bm2.Save(path);
    }
}
Any ideas?
You don't need to convert the file like that.
var filename = Path.Combine(storagePath, Request.Files[0].Name);
using (var fileStream = new FileStream(filename, FileMode.Create))
{
    Request.Files[0].Value.CopyTo(fileStream);
}
Validate the file when it comes in to ensure the extension is accepted (a sketch of that check follows below), create a save path, and copy the stream to a new file on the filesystem.
That's it.
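A minimal sketch of that validation step, assuming a hypothetical allow-list and the same Request.Files[0] shape as above (Contains with a comparer needs System.Linq; returning Nancy's HttpStatusCode is one way to reject inside a route handler):

// hypothetical allow-list of accepted extensions
var allowedExtensions = new[] { ".png", ".jpg", ".jpeg", ".gif" };

var name = Request.Files[0].Name;
var ext = Path.GetExtension(name);
if (!allowedExtensions.Contains(ext, StringComparer.OrdinalIgnoreCase))
    return HttpStatusCode.BadRequest; // reject unexpected types before touching the filesystem

var filename = Path.Combine(storagePath, Path.GetFileName(name)); // GetFileName guards against path traversal
using (var fileStream = new FileStream(filename, FileMode.Create))
{
    Request.Files[0].Value.CopyTo(fileStream);
}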