Windows 8 C#: read and write simultaneously to a file

I have a process where I am writing a byte array to a file (StorageFile). The process runs at periodic intervals, and during that same time I need to read from the beginning of the file in another process. The two processes are in different classes. When I start reading the file, the write operation fails with an access denied error.
Here the file is a StorageFile inside the app folder.
The write method calls:
IRandomAccessStream randomStream = await targetFile.OpenAsync(FileAccessMode.ReadWrite);
Stream stream = randomStream.AsStreamForWrite();
The read method calls:
IRandomAccessStream randomStream = await targetFile.OpenAsync(FileAccessMode.Read);
Stream stream = randomStream.AsStreamForRead();
Both operations take place simultaneously from different methods, and this results in an access denied error. It seems I can read and write inside the same method when I open the file for write, but how do I do that from different methods?
Do we have anything similar to the old System.IO.FileShare so that I can explicitly say that this file also needs to be accessible from another location in the same app?

Besides switching to a database-type setup, I would suggest buffering your writes to an array and writing them in chunks, based on the buffer filling up and/or a certain amount of time elapsing. Unless you need the latest and greatest performance down to the millisecond, I doubt people will notice a performance impact. You will have to play with the buffer size to make sure you give your reader enough time to actually read the file.
On the reader side, you could implement a FileSystemWatcher object on the file.
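To illustrate the buffering idea on the writer side, here is a minimal sketch under my own assumptions (the class name BufferedAppender, the 32 KB threshold, and the open-append-close pattern are not from the question). The key point is that the file is only held open for the brief moment of each flush, which gives the reader its window.
using System.IO;
using System.Threading.Tasks;
using Windows.Storage;

public class BufferedAppender
{
    private readonly StorageFile _targetFile;
    private readonly MemoryStream _buffer = new MemoryStream();
    private const int FlushThreshold = 32 * 1024; // flush once roughly 32 KB has accumulated

    public BufferedAppender(StorageFile targetFile)
    {
        _targetFile = targetFile;
    }

    // Collect bytes in memory; the file itself is not touched until a flush.
    public async Task AppendAsync(byte[] bytes)
    {
        _buffer.Write(bytes, 0, bytes.Length);
        if (_buffer.Length >= FlushThreshold)
        {
            await FlushAsync();
        }
    }

    // Open, append, and close in one short burst so the writer never holds the
    // file open for long stretches.
    public async Task FlushAsync()
    {
        if (_buffer.Length == 0) return;

        byte[] pending = _buffer.ToArray();
        _buffer.Position = 0;
        _buffer.SetLength(0);

        using (var randomStream = await _targetFile.OpenAsync(FileAccessMode.ReadWrite))
        {
            Stream stream = randomStream.AsStreamForWrite();
            stream.Seek(0, SeekOrigin.End);            // append after the existing content
            await stream.WriteAsync(pending, 0, pending.Length);
            await stream.FlushAsync();
        }
    }
}
The reader can then open with FileAccessMode.Read between flushes, or react to changes with the FileSystemWatcher approach mentioned above.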

Related

Can FileStream create zero-fill files when Windows crashes?

I have received several reports from users of a C# application that their data was corrupted when Windows crashed while the application was running (e.g., the notebook PC was turned off, forced to shut down, etc.).
When I looked at the data, strangely enough, the file size was the same as a normal file's, but the contents were all zeros.
Is it possible that such a file could be generated from the following code?
var tempFilePath = $"{path}.temp";
using (var stream = File.Open(tempFilePath, FileMode.Create))
{
    await stream.WriteAsync(bytes, 0, bytes.Length);
} // Flush should happen automatically in Dispose
// Originally this did not go via a temp file, but it does now, after receiving the reports.
// That still did not resolve the issue.
File.Copy(tempFilePath, path, true);
File.Delete(tempFilePath);
Since no .temp file was left behind, I assume the code ran all the way through to the delete normally.
The file is small even in the normal case, less than 300 bytes. (Could its small size be preventing it from being written from the cache to disk?)
I found this article, but I thought it was irrelevant because it is about copying to another disk:
https://devblogs.microsoft.com/oldnewthing/20190517-00/?p=102500
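For reference, here is a sketch of the same pattern hardened to push the data past the OS write cache before the swap. This only illustrates the flushing concern in the comments above and is not a confirmed fix for the reported corruption; FileOptions.WriteThrough, FileStream.Flush(flushToDisk: true) and File.Replace are the standard APIs involved.
var tempFilePath = $"{path}.temp";
using (var stream = new FileStream(tempFilePath, FileMode.Create, FileAccess.Write,
                                   FileShare.None, 4096, FileOptions.WriteThrough))
{
    await stream.WriteAsync(bytes, 0, bytes.Length);
    stream.Flush(flushToDisk: true); // ask the OS to commit the bytes to the physical disk
}
// Swap the temp file in as a single operation instead of Copy + Delete.
if (File.Exists(path))
    File.Replace(tempFilePath, path, destinationBackupFileName: null);
else
    File.Move(tempFilePath, path);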

How can I get the length (file size) of a task stream

My primary requirement is to get the length (file size) of a task stream, System.Threading.Tasks.Task<System.IO.Stream>. With a MemoryStream I used to get the file size using "stream.Result.Length", but when I tried the same on a task stream it throws a System.NotSupportedException; it seems the stream doesn't support that property. I think there is a difference between memory streams and other streams.
Exception occurred handling notification:System.NotSupportedException:
This stream does not support seek operations.
Could you please give me any instructions on how I can achieve this?
I found this link which gives me instructions, but I am using .NET 3.5, so I can't use the ConvertTo() functions that are in .NET 4.
The point of a stream is that you don't need to have all the data available before you can start processing the first part of it. MemoryStream is an exception, in that it does have the entire contents of the stream in memory at the same time, so it can "support seek operations" to tell you things like how big the stream is.
If you need the full size of a stream that can't seek, you're going to have to consume the entire stream to find that out. If the stream is always going to be relatively small, you could copy its contents into a MemoryStream. If it's going to be larger, you could copy its contents into a file on disk.
You may want to examine why you need to know the length. If it's so that you can cancel uploads that are too large, for example, then perhaps you should just start processing the upload in chunks, but after each piece of data comes in check how much data you've received so far, and cancel the process if it gets too big.
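Here is a minimal sketch of the "copy into a MemoryStream" suggestion, written with a manual loop because Stream.CopyTo is not available on .NET 3.5; the helper name GetLength and the 8 KB buffer size are just for illustration.
static long GetLength(Stream source)
{
    using (var memory = new MemoryStream())
    {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            memory.Write(buffer, 0, read);   // drain the non-seekable stream
        }
        return memory.Length;                // now the size is known
    }
}
// Usage with the Task<Stream> from the question (blocks until the task completes):
// long size = GetLength(streamTask.Result);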

Read a file that is locked (?) by another process

I want to read the content of a file which is opened (and locked?) by another process.
I tried it with File.ReadAllText() and with new StreamReader(new FileStream(path, FileMode.Open, FileAccess.Read)), but both methods throw an IOException.
For example, I can open the file with Notepad++ and the content is shown, so I think it must be possible in C# too.
You need to use the FileStream constructor overload that takes a FileShare argument and pass FileShare.ReadWrite. You can only open the file if you permit write access, since the other program has already acquired that right. That is also why your attempts so far have failed: they used FileShare.Read, which cannot work because you cannot deny write access the other program already holds.
Dealing with the program writing to the file while you are reading it is entirely up to you; the results can be fairly unpredictable. Anything is possible, but in general, for a log file you'll get a partially written last line that trails behind the actual output of the program, some of which is still sitting in the program's file buffer; a buffer size of 4096 bytes is a common choice.
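A short sketch of the suggested call, assuming the locked file is a text log at a hypothetical path:
using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (var reader = new StreamReader(stream))
{
    // Ask only for read access, but allow the other process to keep its write handle.
    string content = reader.ReadToEnd();
    // content may end in a partially written last line if the writer is mid-flush
}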

Reading XML file as it is written C#

I have a program that logs its progress and other data to an XML file. I want to be able to open this XML file without blocking the writer program (which is not a .NET program, and which I have no control over), and to read the XML as it arrives, waiting for more once everything so far has been processed, until EOF is reached.
How can this be achieved in C#?
Note that there are 2 problems:
Keeping a reading stream open without blocking the other process.
Knowing when there is more input and waiting when there isn't.
If I needed to do this I would do something like the following:
Use a FileSystemWatcher to get notified when the file changes. Then just read the file and parse the XML as you require.
I would go down this route as it will be difficult to read the stream as and when the external application writes to the file.
I did something similar in the past, which resulted in an open-source program called Tailf.
Just check the code if you want to do it yourself, or take everything from it; it should almost work for you as well, apart from the fact that I only deal with text files.
You can open a file stream without locking it by passing in the following flags:
new FileStream(logFilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
As far as waiting for the "EOF" goes, if the other program only writes data intermittently, you may have to build some sort of heuristic into your program (i.e., stop polling for new data only once there has been nothing new for X minutes).
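Combining the two suggestions, here is a rough sketch of a tail-style reader; logFilePath, lastPosition and OnAppended are placeholder names, and dealing with partially written XML fragments is left to whatever parses the text.
long lastPosition = 0;

var watcher = new FileSystemWatcher(Path.GetDirectoryName(logFilePath), Path.GetFileName(logFilePath));
watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size;
watcher.Changed += (s, e) =>
{
    // Open without denying the writer anything, then read only what was appended.
    using (var stream = new FileStream(logFilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        stream.Seek(lastPosition, SeekOrigin.Begin);
        using (var reader = new StreamReader(stream))
        {
            string appended = reader.ReadToEnd();   // whatever arrived since the last event
            lastPosition = stream.Position;         // remember where we stopped
            OnAppended(appended);                   // hand the new fragment to the XML parsing code
        }
    }
};
watcher.EnableRaisingEvents = true;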

DotNetZip streaming

I'm trying to zip a bunch of files and make the data consumable via a stream.
I would like to keep the memory footprint as small as possible.
My idea was to implement a Stream where I have a bunch of FileStream objects as data members. When the Read method on my Stream is called, I would read some data from one of my file streams and use the ZipOutputStream instance to write zipped data to a temporary storage stream, to which I would then forward the read request.
This temporary storage stream would just be a queue of bytes. As these bytes are moved into a buffer (via a call to Read), they'd be deleted from the queue. This way, I'd only be storing the bytes that haven't been read yet.
Unfortunately, it seems that when I dispose a ZipOutputStream it needs to write to arbitrary locations in the file in order to create a valid zip, which prevents me from using my "fleeting data" solution.
Hopefully this is all clear :)
Is there another way to minimize memory footprint when creating zip files? Please Help!
Thanks!
ZipOutputStream doesn't need to write to random locations in the output stream (in other words, call Seek()). But if the stream you're writing into reports that it CanSeek, it will use that ability to update some headers.
So, make sure that the stream you're writing to returns false for CanSeek and everything should work fine.
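A minimal sketch of such a wrapper: it forwards reads, writes, and flushes to the inner stream but reports CanSeek as false, so the zip writer falls back to its non-seeking code path. The class name NonSeekableStream is just illustrative.
using System;
using System.IO;

public class NonSeekableStream : Stream
{
    private readonly Stream _inner;
    public NonSeekableStream(Stream inner) { _inner = inner; }

    public override bool CanSeek { get { return false; } }             // the important part
    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanWrite { get { return _inner.CanWrite; } }

    public override void Write(byte[] buffer, int offset, int count) { _inner.Write(buffer, offset, count); }
    public override int Read(byte[] buffer, int offset, int count) { return _inner.Read(buffer, offset, count); }
    public override void Flush() { _inner.Flush(); }

    // Everything seek-related is exactly what we are refusing to support.
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { return _inner.Position; }
        set { throw new NotSupportedException(); }
    }
}
// Usage sketch: wrap the real output stream before handing it to the zip stream,
// e.g. new ZipOutputStream(new NonSeekableStream(underlyingStream)).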
