I'm saving an uploaded image using this code:
using (var fileStream = File.Create(savePath))
{
    stream.CopyTo(fileStream);
}
When the image is saved to its destination folder, it's empty, 0 kB. What could possibly be wrong here? I've checked the stream.Length before copying and it's not empty.
There is nothing wrong with your code. The fact that you say "I've checked the stream.Length before copying and it's not empty" makes me wonder about the stream position before copying.
If you've already consumed the source stream once then although the stream isn't zero length, its position may be at the end of the stream - so there is nothing left to copy.
If the stream is seekable (which it will be for a MemoryStream or a FileStream and many others), try putting
stream.Position = 0
just before the copy. This resets the stream position to the beginning, meaning the whole stream will be copied by your code.
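For example, applied to the question's code, the fix could look like this (just a sketch, assuming stream is the uploaded image's source stream):

// Rewind the source stream before copying; otherwise nothing is left to copy.
if (stream.CanSeek)
    stream.Position = 0;

using (var fileStream = File.Create(savePath))
{
    stream.CopyTo(fileStream);
}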
I would recommend putting the following before CopyTo():
fileStream.Position = 0
Make sure to call Flush() after the copy, to avoid an empty file:
fileStream.Flush()
This problem started for me after migrating my project from .NET Core 1 to 2.2.
I fixed this issue by setting the Position of my filestream to zero.
using (var fileStream = new FileStream(savePath, FileMode.Create))
{
    fileStream.Position = 0;
    await imageFile.CopyToAsync(fileStream);
}
Related
I have the following code:
using (var fs = new FileStream(@"C:\dump.bin", FileMode.Create))
{
    income.CopyTo(fs);
}
income is a stream that I need to save to disk; the problem is that I want to ignore the last 8 bytes and save everything before that. The income stream is read-only and forward-only, so I cannot predict its size, and I don't want to load the whole stream into memory because huge files are being sent.
Any help will be appreciated.
Maybe (or rather, probably) there is a cleaner way of doing it, but being pragmatic, the first thought that comes to my mind is this:
using (var fs = new FileStream(@"C:\dump.bin", FileMode.Create))
{
    income.CopyTo(fs);
    // Truncate the copy so the final 8 bytes are dropped.
    fs.SetLength(Math.Max(fs.Length - 8, 0));
}
This sets the file length after it has been written, truncating away the final 8 bytes.
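If truncating the destination afterwards isn't acceptable, a chunked copy that holds back the last 8 bytes also avoids buffering the whole stream. This is only a sketch; the 80 KB buffer size is an arbitrary assumption:

const int tailSize = 8;

using (var fs = new FileStream(@"C:\dump.bin", FileMode.Create))
{
    byte[] buffer = new byte[81920];
    byte[] carry = new byte[0];   // bytes held back from previous reads
    int read;

    while ((read = income.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Prepend whatever was held back to the newly read chunk.
        byte[] combined = new byte[carry.Length + read];
        Buffer.BlockCopy(carry, 0, combined, 0, carry.Length);
        Buffer.BlockCopy(buffer, 0, combined, carry.Length, read);

        // Write everything except the last tailSize bytes...
        int writable = Math.Max(combined.Length - tailSize, 0);
        fs.Write(combined, 0, writable);

        // ...and hold those bytes back in case the stream ends here.
        int keep = combined.Length - writable;
        carry = new byte[keep];
        Buffer.BlockCopy(combined, writable, carry, 0, keep);
    }
    // The final carry (the last 8 bytes of the stream) is simply never written.
}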
I have a file that gets GBs of data written to it over time (looping once it reaches the end). I would like to create the file ahead of time and reserve the storage so that the required space is never taken up by other downloads while the file is being written. This is done using Visual Studio 2012 in C#.
I have tried:
if (fileSizeRequirement or fileName is changed) // if file path or size requirement is changed
{
    // Open or create the file, set the file to the size requirement, close the filestream
    fileStream = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.ReadWrite);
    fileStream.SetLength((long)fileSizeRequirement);
    fileStream.Close();
}
1) Is this an appropriate way to "preallocate" space for a file?
2) Will SetLength require a seek to the beginning after setting the length, or does the position in the file stay at the beginning?
3) What is the correct way to achieve file preallocation of storage space?
Thanks ahead of time and I appreciate any suggestions.
Using SetLength is a common approach although I'd generally use a using statement here.
using (var fileStream = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    fileStream.SetLength((long)fileSizeRequirement);
}
Reading fileStream.Position straight after SetLength yields 0, so you shouldn't need to seek back to the beginning.
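As a quick sanity check, a minimal sketch (the file name and size are placeholders):

using (var fileStream = new FileStream("preallocated.bin", FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    fileStream.SetLength(1024L * 1024 * 1024);   // reserve 1 GB on disk
    Console.WriteLine(fileStream.Position);      // prints 0: still at the start of the file
}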
I have a text file that's appended to over time and periodically I want to truncate it down to a certain size, e.g. 10MB, but keeping the last 10MB rather than the first.
Is there any clever way to do this? I'm guessing I should seek to the right point, read from there into a new file, then delete the old file and rename the new file to the old name. Any better ideas or example code? Ideally I wouldn't read the whole file into memory, because the file could be big.
Please no suggestions on using Log4Net etc.
If you're okay with just reading the last 10MB into memory, this should work:
using (MemoryStream ms = new MemoryStream(10 * 1024 * 1024)) {
    using (FileStream s = new FileStream("yourFile.txt", FileMode.Open, FileAccess.ReadWrite)) {
        s.Seek(-10 * 1024 * 1024, SeekOrigin.End);
        s.CopyTo(ms);
        s.SetLength(10 * 1024 * 1024);
        s.Position = 0;
        ms.Position = 0; // Begin from the start of the memory stream
        ms.CopyTo(s);
    }
}
You don't need to read the whole file before writing it, especially not if you're writing into a different file. You can work in chunks: reading a bit, writing, reading a bit more, writing again. In fact, that's how all I/O is done anyway. Especially with larger files, you never really want to read them all in at once.
But what you propose is the only way of removing data from the beginning of a file. You have to rewrite it. Raymond Chen has a blog post on exactly that topic, too.
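Here is a minimal sketch of that chunked rewrite, assuming the file name from the earlier answer and an arbitrary 80 KB buffer:

const long keepBytes = 10L * 1024 * 1024;
string logPath = "yourFile.txt";
string tempPath = logPath + ".tmp";

using (var source = new FileStream(logPath, FileMode.Open, FileAccess.Read))
using (var target = new FileStream(tempPath, FileMode.Create, FileAccess.Write))
{
    // Start 10 MB from the end (or at the beginning if the file is smaller).
    if (source.Length > keepBytes)
        source.Seek(-keepBytes, SeekOrigin.End);

    byte[] buffer = new byte[81920];
    int read;
    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        target.Write(buffer, 0, read);
}

// Swap the truncated copy in place of the original.
File.Delete(logPath);
File.Move(tempPath, logPath);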
I tested the solution from "false", but it doesn't work for me: it trims the file but keeps the beginning of it, not the end.
I suspect that CopyTo copies the whole stream instead of starting from the stream position. Here's how I made it work:
int trimSize = 10 * 1024 * 1024;
using (MemoryStream ms = new MemoryStream(trimSize))
{
    using (FileStream s = new FileStream(logFilename, FileMode.Open, FileAccess.ReadWrite))
    {
        s.Seek(-trimSize, SeekOrigin.End);
        byte[] bytes = new byte[trimSize];
        s.Read(bytes, 0, trimSize);
        ms.Write(bytes, 0, trimSize);
        ms.Position = 0;
        s.SetLength(trimSize);
        s.Position = 0;
        ms.CopyTo(s);
    }
}
You could read the file into a binary stream and use the Seek method to retrieve just the last 10 MB of data and load it into memory. Then you save this stream into a new file and delete the old one. The text data could be truncated mid-line, though, so you have to decide whether this is acceptable.
Look here for an example of the Seek method:
http://www.dotnetperls.com/seek
Assume the following code:
Stream file = files[0].InputStream;
var FileLen = files[0].ContentLength;
var b = new BinaryReader(file);
var bytes = b.ReadBytes(FileLen);
If I upload a CSV file that is 10 records (257 bytes), the BinaryReader fills the array of bytes with zeros.
I also wrote a loop to step through the ReadByte method of the BinaryReader, and in the first iteration of the loop I received the following exception:
Unable to read beyond the end of the stream
When I increased the CSV file to 200 records, everything worked just fine.
The question, then, is: why does this happen on smaller files, and is there a workaround that allows the binary read of smaller files?
Not sure why, but when you are using BinaryReader on an uploaded stream, the start position needs to be explicitly set.
b.BaseStream.Position = 0;
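For context, here is the question's snippet with that line added (just a sketch):

Stream file = files[0].InputStream;
var FileLen = files[0].ContentLength;

var b = new BinaryReader(file);
b.BaseStream.Position = 0;         // rewind: the upload stream may already have been read
var bytes = b.ReadBytes(FileLen);  // now contains the actual file contents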
We are having an issue with one server and its utilization of the StreamWriter class. Has anyone experienced something similar to the issue below? If so, what was the solution to fix the issue?
using (StreamWriter logWriter = File.CreateText(logFileName))
{
    for (int i = 0; i < 500; i++)
        logWriter.WriteLine("Process completed successfully.");
}
When writing out the file the following output is generated:
Process completed successfully.
... (497 more lines)
Process completed successfully.
Process completed s
I tried adding logWriter.Flush() before closing, without any help. The more lines of text I write out, the more data loss occurs.
I had a very similar issue myself. I found that if I enabled AutoFlush before doing any writes to the stream, it started working as expected.
logWriter.AutoFlush = true;
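Applied to the question's snippet, that looks roughly like this:

using (StreamWriter logWriter = File.CreateText(logFileName))
{
    logWriter.AutoFlush = true;   // flush the writer's buffer after every write

    for (int i = 0; i < 500; i++)
        logWriter.WriteLine("Process completed successfully.");
}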
Sometimes, even if you call Flush(), it just won't do the magic, because Flush() will cause the stream to write most of the data in the stream except the last block of its buffer.
try
{
    // ... write method
    // I don't recommend using 'using' for unmanaged resources
}
finally
{
    stream.Flush();
    stream.Close();
    stream.Dispose();
}
Cannot reproduce this.
Under normal conditions, this should not and will not fail.
Is this the actual code that fails? The text "Process completed" suggests it's an extract.
Any threading involved?
Network drive or local?
etc.
This certainly appears to be a "flushing" problem to me, even though you say you added a call to Flush(). The problem may be that your StreamWriter is just a wrapper for an underlying FileStream object.
I don't typically use the File.CreateText method to create a stream for writing to a file; I usually create my own FileStream and then wrap it with a StreamWriter if desired. Regardless, I've run into situations where I've needed to call Flush on both the StreamWriter and the FileStream, so I imagine that is your problem.
Try adding the following code:
logWriter.Flush();
if (logWriter.BaseStream != null)
logWriter.BaseStream.Flush();
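If you'd rather create the FileStream yourself, as described above, a sketch (reusing the question's logFileName) might look like this:

using (var fileStream = new FileStream(logFileName, FileMode.Create, FileAccess.Write))
using (var logWriter = new StreamWriter(fileStream))
{
    for (int i = 0; i < 500; i++)
        logWriter.WriteLine("Process completed successfully.");

    logWriter.Flush();    // flush the StreamWriter's internal character buffer
    fileStream.Flush();   // flush the underlying FileStream to disk
}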
In my case, this is what I found with the output file:
Case 1: without Flush() and without Close()
Character Length = 23,371,776
Case 2: with Flush() and without Close()
logWriter.Flush()
Character Length = 23,371,201
Case 3: when properly closed
logWriter.Close()
Character Length = 23,375,887 (required)
So, in order to get the proper result, you always need to close the Writer instance.
I faced the same problem. The following worked for me:
using (StreamWriter tw = new StreamWriter(@"D:\Users\asbalach\Desktop\NaturalOrder\NatOrd.txt"))
{
    tw.Write(abc.ToString()); // + Environment.NewLine);
}
Using framework 4.6.1 and under heavy stress, it still has this problem. I'm not sure why it does this, though I found a way to solve it very differently (which strengthens my feeling it's indeed a .NET bug).
In my case I tried to write huge jagged arrays to disk (video caching).
Since the jagged array is quite large, it had to do a lot of repeated writes to store a large set of video frames, and although they were uncompressed and each cache file got exactly 1000 frames, the resulting cache files all had different sizes.
I had the problem when I used this:
// note: generateLogfileName is just a function that creates a filename
using (FileStream fs = new FileStream(generateLogfileName(), FileMode.OpenOrCreate))
{
    using (StreamWriter sw = new StreamWriter(fs))
    {
        // do your stuff, but it will be unreliable
    }
}
However, when I provided an Encoding type, all logged files got an equal size, and the problem was gone.
using (FileStream fs = new FileStream(generateLogfileName(), FileMode.OpenOrCreate))
{
    using (StreamWriter sw = new StreamWriter(fs, Encoding.Unicode))
    {
        // all data written correctly, no data lost
    }
}
Note: also read the file with the same encoding type!
This did the trick for me:
streamWriter.Flush();