Empty array with BinaryReader on UploadedFile in C#

Assume the following code:
Stream file = files[0].InputStream;
var FileLen = files[0].ContentLength;
var b = new BinaryReader(file);
var bytes = b.ReadBytes(FileLen);
If I upload a CSV file with 10 records (257 bytes), the BinaryReader fills the byte array with zeros.
I also wrote a loop to step through the BinaryReader's ReadByte method, and on the first iteration of the loop I received the following exception:
Unable to read beyond the end of the stream
When I increased the CSV file to 200 records, everything worked just fine.
The question, then, is why this happens on smaller files, and whether there is a workaround that allows a binary read of smaller files.

Not sure why, but when you use BinaryReader on an uploaded stream, the start position needs to be set explicitly:
b.BaseStream.Position = 0;
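For completeness, here is a minimal sketch of the original snippet with the position reset applied (equivalent to setting b.BaseStream.Position, since BaseStream is the uploaded stream itself):
Stream file = files[0].InputStream;
var fileLen = files[0].ContentLength;

// The uploaded stream may already have been consumed by earlier processing,
// so rewind it before handing it to BinaryReader.
file.Position = 0;

using (var reader = new BinaryReader(file))
{
    byte[] bytes = reader.ReadBytes(fileLen);
    // bytes now contains the whole upload, even for small files.
}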

Related

Write to the end of a file on an Ubuntu server using SSH.NET

I want to add some lines to /etc/ssh/sshd_config using SSH.NET. I have tried the method
SftpClient.WriteAllLines(string path, string[] content), but it adds the content at the beginning of the file after deleting the same number of lines. In the documentation I've found this: "It is not first truncated to zero bytes." What does that mean?
I've also tried the method public abstract void Write(byte[] buffer, int offset, int count); with another file, because I didn't want to risk deleting other lines, but it doesn't work. Can anyone explain the values that offset and count must take?
string[] config = { "Match user Fadwa", "ChrootDirectory /fadwa/dhifi/test",
                    "ForceCommand internal-sftp -d /fadwa" };
byte[] bconfig = config.SelectMany(s => Encoding.UTF8.GetBytes(s + Environment.NewLine)).ToArray();
Renci.SshNet.Sftp.SftpFileStream stream = sftpClient.OpenWrite("/sftp/dhifi/test.txt");
if (stream.CanWrite)
{
    stream.Write(bconfig, 0, bconfig.Length);
}
else
{
    Console.WriteLine("Can't Write right now ! ");
}
Otherwise, any other solution is welcome. Thanks in advance!
SftpClient.OpenWrite() creates a Stream positioned at the beginning of the file. "It is not first truncated to zero bytes" means that Stream will not be writing into what is effectively an empty file, but instead overwriting the existing bytes of the file. Thus, if the existing file is longer than the content parameter of SftpFileStream.Write(), the resulting file will consist of content followed by the remainder of the existing file, which is exactly what you observed.
Looking at the documentation I see a few solutions:
SftpClient.AppendAllLines():
sftpClient.AppendAllLines("/sftp/dhifi/test.txt", config);
SftpClient.AppendText():
using (StreamWriter writer = sftpClient.AppendText("/sftp/dhifi/test.txt"))
    foreach (string line in config)
        writer.WriteLine(line);
Since you want to start writing at the end of the file, instead of SftpClient.OpenWrite() you can call SftpClient.Open(), which allows you to specify a FileMode of Append:
using (SftpFileStream stream = sftpClient.Open("/sftp/dhifi/test.txt", FileMode.Append))
    stream.Write(bconfig, 0, bconfig.Length);
I don't believe your check of CanWrite is necessary because the Stream should fail to open in the first place if FileAccess.Write could not be granted.
If FileMode.Append for some reason doesn't work with SSH.NET (and it looks like it does, since that's what AppendText() uses), you could instead use @jdweng's suggestion and seek to the end of the file before writing to it:
using (SftpFileStream stream = sftpClient.OpenWrite("/sftp/dhifi/test.txt"))
{
    stream.Seek(0, SeekOrigin.End);
    stream.Write(bconfig, 0, bconfig.Length);
}
As an aside, another way to inject NewLines between your config lines would be...
byte[] bconfig = Encoding.UTF8.GetBytes(
string.Join(Environment.NewLine, config)
);
That's a little shorter and more efficient since it reduces the number of intermediate strings and byte[]s.
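Putting the pieces together, a minimal end-to-end sketch might look like the following (it reuses the config array and sftpClient from the question and the append-mode Open() shown above; the extra Environment.NewLine at the end is my addition so the last appended line is terminated):
byte[] bconfig = Encoding.UTF8.GetBytes(
    string.Join(Environment.NewLine, config) + Environment.NewLine);

using (SftpFileStream stream = sftpClient.Open("/sftp/dhifi/test.txt", FileMode.Append))
{
    stream.Write(bconfig, 0, bconfig.Length);
}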

What is the simplest way to decompress a ZIP buffer in C#?

When I use zlib in C/C++, I have a simple method, uncompress, which only requires two buffers and nothing else. Its definition is like this:
int uncompress(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen);
/*
   Decompresses the source buffer into the destination buffer. sourceLen is the
   byte length of the source buffer. Upon entry, destLen is the total size of the
   destination buffer, which must be large enough to hold the entire uncompressed
   data. (The size of the uncompressed data must have been saved previously by the
   compressor and transmitted to the decompressor by some mechanism outside the
   scope of this compression library.) Upon exit, destLen is the actual size of the
   uncompressed data.

   uncompress returns Z_OK if success, Z_MEM_ERROR if there was not enough memory,
   Z_BUF_ERROR if there was not enough room in the output buffer, or Z_DATA_ERROR
   if the input data was corrupted or incomplete. In the case where there is not
   enough room, uncompress() will fill the output buffer with the uncompressed data
   up to that point.
*/
I want to know if C# has a similar way. I checked the SharpZipLib FAQ, quoted below, but did not quite understand it:
How do I compress/decompress files in memory?
Use a memory stream when creating the Zip stream!
MemoryStream outputMemStream = new MemoryStream();
using (ZipOutputStream zipOutput = new ZipOutputStream(outputMemStream)) {
// Use zipOutput stream as normal
...
You can get the resulting data with memory stream methods ToArray or GetBuffer.
ToArray is the cleaner and easiest to use correctly, with the penalty of duplicating allocated memory. GetBuffer returns a raw buffer, so you need to account for the true length yourself.
See the framework class library help for more information.
I can't figure out whether this block of code is for compression or decompression, or whether outputMemStream means a compressed stream or an uncompressed stream. I really hope there is an easy-to-understand way like in zlib. Thank you very much if you can help me.
Check out the ZipArchive class, which I think has the features you need to accomplish in-memory decompression of zip files.
Assuming you have an array of bytes (byte[]) which represents the ZIP file in memory, you have to instantiate a ZipArchive object which will be used to read that array of bytes and interpret it as the ZIP file you wish to load. If you check the ZipArchive class' available constructors in the documentation, you will see that they require a Stream object from which the data will be read. So the first step is to convert your byte[] array into a stream that the constructors can read, which you can do by using a MemoryStream object.
Here's an example of how to list all entries inside a ZIP archive represented in memory as a byte array:
byte[] zipArchiveBytes = ...; // Read the ZIP file in memory as an array of bytes
using (var inputStream = new MemoryStream(zipArchiveBytes))
using (var zipArchive = new ZipArchive(inputStream, ZipArchiveMode.Read))
{
    Console.WriteLine("Listing archive entries...");
    foreach (var archiveEntry in zipArchive.Entries)
        Console.WriteLine($" {archiveEntry.FullName}");
}
Each file in the ZIP archive will be represented as a ZipArchiveEntry instance. This class offers properties which allow you to retrieve information such as the original length of a file from the ZIP archive, its compressed length, its name, etc.
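For instance, a quick sketch of inspecting those properties (reusing the zipArchive from the listing above):
foreach (ZipArchiveEntry entry in zipArchive.Entries)
{
    // Length is the uncompressed size; CompressedLength is the size inside the archive.
    Console.WriteLine($"{entry.FullName}: {entry.Length} bytes (compressed: {entry.CompressedLength})");
}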
In order to read a specific file contained inside the ZIP file, you can use ZipArchiveEntry.Open(). The following example shows how to open a specific file from an archive, given its FullName inside the ZIP archive:
ZipArchiveEntry archEntry = zipArchive.GetEntry("my-folder-inside-zip/dog-picture.jpg");
byte[] readResult;
using (Stream entryReadStream = archEntry.Open())
{
    using (var tempMemStream = new MemoryStream())
    {
        entryReadStream.CopyTo(tempMemStream);
        readResult = tempMemStream.ToArray();
    }
}
This example reads the given file contents, and returns them as an array of bytes (stored in the byte[] readResult variable) which you can then use according to your needs.
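If you want something closer to zlib's single-call uncompress, you could wrap the two snippets above in a small helper. This is only a sketch combining the steps already shown; the method name and the entryName parameter are placeholders of my own:
// Sketch: extract one named entry from an in-memory ZIP archive into a byte array.
static byte[] ExtractEntry(byte[] zipArchiveBytes, string entryName)
{
    using (var inputStream = new MemoryStream(zipArchiveBytes))
    using (var zipArchive = new ZipArchive(inputStream, ZipArchiveMode.Read))
    {
        ZipArchiveEntry entry = zipArchive.GetEntry(entryName);
        if (entry == null)
            throw new FileNotFoundException($"Entry '{entryName}' not found in the archive.");

        using (Stream entryStream = entry.Open())
        using (var outputStream = new MemoryStream())
        {
            entryStream.CopyTo(outputStream);
            return outputStream.ToArray();
        }
    }
}
Unlike zlib's uncompress, you don't need to know the uncompressed size up front; the MemoryStream grows as needed.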

Strange results from OpenReadAsync() when reading data from Azure Blob storage

I'm having a go at modifying an existing C# (.NET Core) app that reads a type of binary file to use Azure Blob Storage.
I'm using WindowsAzure.Storage (8.6.0).
The issue is that this app reads the binary data from the files, via a Stream, in very small blocks (e.g. 5,000-6,000 bytes). This reflects how the data is structured.
Example pseudo code:
var blocks = new List<byte[]>();
var numberOfBytesToRead = 6240;
var numberOfBlocksToRead = 1700;
using (var stream = await blob.OpenReadAsync())
{
    stream.Seek(3000, SeekOrigin.Begin); // start reading at a particular position
    for (int i = 1; i <= numberOfBlocksToRead; i++)
    {
        byte[] traceValues = new byte[numberOfBytesToRead];
        stream.Read(traceValues, 0, numberOfBytesToRead);
        blocks.Add(traceValues);
    }
}
If I try to read a 10 MB file using OpenReadAsync(), I get invalid/junk values in the byte arrays after around 4,190,000 bytes.
If I set StreamMinimumReadSize to 100 MB, it works.
If I read more data per block (e.g. 1 MB), it works.
Some of the files can be more than 100 MB, so setting the StreamMinimumReadSize may not be the best solution.
What is going on here, and how can I fix this?
Are the invalid/junk values zeros? If so (and maybe even if not) check the return value from stream.Read. That method is not guaranteed to actually read the number of bytes that you ask it to. It can read less. In which case you are supposed to call it again in a loop, until it has read the total amount that you want. A quick web search should show you lots of examples of the necessary looping.
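For reference, a minimal sketch of such a loop applied to the pseudo code above (same stream, numberOfBytesToRead and blocks variables):
byte[] traceValues = new byte[numberOfBytesToRead];
int offset = 0;
while (offset < numberOfBytesToRead)
{
    // Read may return fewer bytes than requested, so keep reading until the
    // block is full or the stream ends.
    int read = stream.Read(traceValues, offset, numberOfBytesToRead - offset);
    if (read == 0)
        break; // end of stream
    offset += read;
}
blocks.Add(traceValues);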

File saving as 0 KB when using HttpPostedFileBase.InputStream.CopyToAsync(FileStream) [duplicate]

I'm saving an uploaded image using this code:
using (var fileStream = File.Create(savePath))
{
stream.CopyTo(fileStream);
}
When the image is saved to its destination folder, it's empty (0 KB). What could possibly be wrong here? I've checked stream.Length before copying and it's not empty.
There is nothing wrong with your code. The fact that you say "I've checked stream.Length before copying and it's not empty" makes me wonder about the stream position before copying.
If you've already consumed the source stream once then although the stream isn't zero length, its position may be at the end of the stream - so there is nothing left to copy.
If the stream is seekable (which it will be for a MemoryStream or a FileStream and many others), try putting
stream.Position = 0
just before the copy. This resets the stream position to the beginning, meaning the whole stream will be copied by your code.
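In context, a minimal sketch (assuming stream is the uploaded file's InputStream from the question):
// Rewind the source stream in case it has already been read once.
if (stream.CanSeek)
    stream.Position = 0;

using (var fileStream = File.Create(savePath))
{
    stream.CopyTo(fileStream);
}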
I would recommend putting the following before CopyTo():
fileStream.Position = 0
Make sure to call Flush() after the copy, to avoid an empty file:
fileStream.Flush()
This problem started for me after migrating my project from .NET Core 1 to 2.2.
I fixed this issue by setting the Position of my filestream to zero.
using (var fileStream = new FileStream(savePath, FileMode.Create))
{
    fileStream.Position = 0;
    await imageFile.CopyToAsync(fileStream);
}

How to truncate a file down to certain size but keep the end section?

I have a text file that's appended to over time, and periodically I want to truncate it down to a certain size, e.g. 10 MB, but keep the last 10 MB rather than the first.
Is there any clever way to do this? I'm guessing I should seek to the right point, read from there into a new file, delete the old file, and rename the new file to the old name. Any better ideas or example code? Ideally I wouldn't read the whole file into memory, because the file could be big.
Please no suggestions on using Log4Net etc.
If you're okay with just reading the last 10MB into memory, this should work:
using (MemoryStream ms = new MemoryStream(10 * 1024 * 1024)) {
    using (FileStream s = new FileStream("yourFile.txt", FileMode.Open, FileAccess.ReadWrite)) {
        s.Seek(-10 * 1024 * 1024, SeekOrigin.End);
        s.CopyTo(ms);
        s.SetLength(10 * 1024 * 1024);
        s.Position = 0;
        ms.Position = 0; // Begin from the start of the memory stream
        ms.CopyTo(s);
    }
}
You don't need to read the whole file before writing it, especially not if you're writing into a different file. You can work in chunks; reading a bit, writing, reading a bit more, writing again. In fact, that's how all I/O is done anyway. Especially with larger files you never really want to read them in all at once.
But what you propose is the only way of removing data from the beginning of a file. You have to rewrite it. Raymond Chen has a blog post on exactly that topic, too.
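A sketch of that chunked rewrite-to-a-new-file approach, keeping only the last maxSize bytes (the file names here are placeholders):
const int maxSize = 10 * 1024 * 1024;   // keep the last 10 MB
const int chunkSize = 64 * 1024;

using (var source = new FileStream("app.log", FileMode.Open, FileAccess.Read))
using (var target = new FileStream("app.log.tmp", FileMode.Create, FileAccess.Write))
{
    // Only skip ahead if the file is actually larger than the size we want to keep.
    if (source.Length > maxSize)
        source.Seek(-maxSize, SeekOrigin.End);

    var buffer = new byte[chunkSize];
    int read;
    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        target.Write(buffer, 0, read);
}

// Replace the original file with the trimmed copy.
File.Delete("app.log");
File.Move("app.log.tmp", "app.log");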
I tested the solution from "false", but it doesn't work for me: it trims the file but keeps the beginning of it, not the end.
I suspect that CopyTo copies the whole stream instead of starting from the stream position. Here's how I made it work:
int trimSize = 10 * 1024 * 1024;
using (MemoryStream ms = new MemoryStream(trimSize))
{
    using (FileStream s = new FileStream(logFilename, FileMode.Open, FileAccess.ReadWrite))
    {
        s.Seek(-trimSize, SeekOrigin.End);
        byte[] bytes = new byte[trimSize];
        s.Read(bytes, 0, trimSize);
        ms.Write(bytes, 0, trimSize);
        ms.Position = 0;
        s.SetLength(trimSize);
        s.Position = 0;
        ms.CopyTo(s);
    }
}
You could open the file as a binary stream and use the Seek method to retrieve just the last 10 MB of data and load that into memory. Then you save this data to a new file and delete the old one. The text could be cut mid-line at the start, though, so you have to decide whether that is acceptable.
Look here for an example of the Seek method:
http://www.dotnetperls.com/seek
