I am developing an application that needs to read a text file which is continuously being updated. I need to read the file to the end of file (as it is at that moment) and remember that location for my next read. I am planning to develop this application in C#.NET. How can I perform these partial reads and remember the locations, given that C# does not provide pointers in file handling?
Edit: The file is updated every 30 seconds and new data is appended to the existing file.
I tried keeping track of the length of the previous read and then reading from that position, but the file cannot be accessed by two applications at the same time.
You can maintain the last offset of the read pointer in the file. You can do something like this:
long lastOffset = 0;

// FileShare.ReadWrite lets the other application keep appending while we read
using (var fs = new FileStream("myFile.bin", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    fs.Seek(lastOffset, SeekOrigin.Begin);

    // Read the file here

    // Once the file has been read, update lastOffset
    lastOffset = fs.Seek(0, SeekOrigin.End);
}
Open the file, read everything, and save the count of bytes you read in a variable (let's call it read_until_here).
Next time you read the file, just take the new information (whatever comes after the position stored in your read_until_here variable).
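A minimal sketch of that idea, assuming the writer only ever appends and opens the file with sharing enabled (the file name and polling interval here are made up for illustration):

long readUntilHere = 0; // byte offset where the last read stopped

while (true)
{
    // FileShare.ReadWrite so the writing application is not blocked
    using (var fs = new FileStream("log.txt", FileMode.Open,
                                   FileAccess.Read, FileShare.ReadWrite))
    {
        fs.Seek(readUntilHere, SeekOrigin.Begin);

        using (var reader = new StreamReader(fs, Encoding.UTF8))
        {
            string newText = reader.ReadToEnd(); // only the appended part
            if (newText.Length > 0)
                Console.Write(newText);

            readUntilHere = fs.Position; // remember where we stopped
        }
    }

    Thread.Sleep(TimeSpan.FromSeconds(30)); // the file is appended roughly every 30 s
}

(This assumes using System; System.IO; System.Text; and System.Threading; are in scope.)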
I am planning to develop this application in C#.NET. How can I perform these partial reads and remember the locations, given that C# does not provide pointers in file handling?
Not entirely sure why you'd be concerned about the supposed lack of pointers... I'd look into int FileStream.Read(byte[] array, int offset, int count) myself.
It allows you to read from an offset into a buffer, as many bytes as you want, and tells you how many bytes were actually read... which looks to be all the functionality you'd need.
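For example, a minimal sketch (the file name is made up) showing that the offset parameter indexes into the buffer and that the return value tells you how many bytes were actually read:

using System;
using System.IO;

class ReadExample
{
    static void Main()
    {
        byte[] buffer = new byte[4096];

        using (var fs = new FileStream("data.bin", FileMode.Open, FileAccess.Read))
        {
            // Ask for up to buffer.Length bytes, written into the buffer starting at index 0
            int bytesRead = fs.Read(buffer, 0, buffer.Length);

            Console.WriteLine($"Actually read {bytesRead} bytes; stream is now at position {fs.Position}");
        }
    }
}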
Related
I'm having a go at modifying an existing C# (.NET Core) app that reads a type of binary file, to use Azure Blob Storage.
I'm using WindowsAzure.Storage (8.6.0).
The issue is that this app reads the binary data from files from a Stream in very small blocks (e.g. 5000-6000 bytes). This reflects how the data is structured.
Example pseudo code:
var blocks = new List<byte[]>();
var numberOfBytesToRead = 6240;
var numberOfBlocksToRead = 1700;

using (var stream = await blob.OpenReadAsync())
{
    stream.Seek(3000, SeekOrigin.Begin); // start reading at a particular position

    for (int i = 1; i <= numberOfBlocksToRead; i++)
    {
        byte[] traceValues = new byte[numberOfBytesToRead];
        stream.Read(traceValues, 0, numberOfBytesToRead);
        blocks.Add(traceValues);
    }
}
If I try to read a 10 MB file using OpenReadAsync(), I get invalid/junk values in the byte arrays after around 4,190,000 bytes.
If I set StreamMinimumReadSize to 100 MB it works.
If I read more data per block (e.g. 1 MB) it works.
Some of the files can be more than 100 MB, so setting the StreamMinimumReadSize may not be the best solution.
What is going on here, and how can I fix this?
Are the invalid/junk values zeros? If so (and maybe even if not) check the return value from stream.Read. That method is not guaranteed to actually read the number of bytes that you ask it to. It can read less. In which case you are supposed to call it again in a loop, until it has read the total amount that you want. A quick web search should show you lots of examples of the necessary looping.
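A sketch of that loop, assuming you still want exactly numberOfBytesToRead bytes per block (names reused from the question's pseudo code):

using System.IO;

// Fills 'buffer' completely, or throws if the stream ends first.
static void ReadExactly(Stream stream, byte[] buffer)
{
    int totalRead = 0;
    while (totalRead < buffer.Length)
    {
        int bytesRead = stream.Read(buffer, totalRead, buffer.Length - totalRead);
        if (bytesRead == 0)
            throw new EndOfStreamException("Stream ended before the block was fully read.");
        totalRead += bytesRead;
    }
}

// Inside the original loop:
// byte[] traceValues = new byte[numberOfBytesToRead];
// ReadExactly(stream, traceValues);
// blocks.Add(traceValues);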
I have code written in MFC for file reading and writing. I am rewriting it in C#.
My file consists of three parts: Header, Body and Footer. In the MFC code, CArchive can write/read any of these parts. This is done with COleStreamFile::OpenStream. In this method we specify which part to read, and it returns a stream pointing to that location in the file. CArchive then uses the stream and reads/writes the file.
COleStreamFile stream;
// Stream is pointed at the footer location.
stream.OpenStream(m_pStg, "Footer", nOpenFlags, pError); // m_pStg is LPSTORAGE
CArchive ar(&stream, CArchive::load);
After this code, when I do ar >> or ar <<, it doesn't read the file from the start; it reads from the middle or the end (depending on the stream). Now I want to convert this code to C#. What is the replacement for COleStreamFile::OpenStream in C#?
Here is what I have done so far.
using (var stream = new FileStream(filePath, FileMode.Open))
{
    using (var binaryReader = new BinaryReader(stream))
    {
    }
}
Here the stream points to the start. I think I can tell it to read from a specific byte, but I don't know that byte location. What I know are the Header, Body and Footer names that the MFC code uses.
Or is there any way to find out the current location of CArchive while it is reading or writing? If I can get the byte location from there, I can use that as well.
I'm uploading big files by splitting them into chunks (small parts) to my ASMX web service (ASMX doesn't support streaming; I haven't found another way):
bool UploadChunk(byte[] bytes, string path, string md5)
{
    ...
    using (FileStream fs = new FileStream(tempPath, FileMode.Append))
    {
        fs.Write(bytes, 0, bytes.Length);
    }
    ...
    return status;
}
But on some files, after ~20-50 invocations, I get this error: The process cannot access the file because it is being used by another process.
I suspect this is related to Windows not releasing the file. Any idea how to get rid of this annoying error?
EDIT
the requests execute sequentially and synchronously
EDIT2
client code looks like:
_service.StartUpload(path);
...
do
{
    ..
    bool status = _service.UploadChunk(buf, path, md5);
    if (!status) return Status.Failed;
    ..
}
while (bytesRead > 0);
_service.CheckFile(path, md5);
Each request is handled independently. The process still accessing the file may be the one that handled the previous request.
In general, you should use file transfer protocols to transfer files. ASMX is not good for that.
And, I presume you have a good reason to not use WCF?
Use WhoLockMe at the moment the error occurs to check who is using the file. You could put the application into debug mode and hold it at a breakpoint to do this. In all probability it will be your own process.
Also try adding a delay after each transfer (and before the next) to see if it helps. Maybe your transfers are too fast and the stream is still in use or being flushed when the next transfer comes in.
Option 1: Get the requirements changed so you don't have to do this using ASMX. WCF supports a streaming model that I'm about to experiment with, but it should be much more effective for what you want.
Option 2: Look into WSE 3.0. I haven't looked at it much, but I think it extends ASMX web services to support things like DIME and MTOM which are designed for transferring files so that may help.
Option 3: Set the system up so that each call writes a piece of the file into a different filename, then write code to rejoin everything at the end.
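A rough sketch of option 3, assuming the chunk index is passed with each call and chunks are assembled once the upload completes (the paths and method shapes here are made up for illustration):

using System.IO;

// Each call writes its chunk to its own file, so no two requests
// ever open the same file.
void SaveChunk(string uploadId, int chunkIndex, byte[] bytes)
{
    string chunkPath = Path.Combine(@"C:\uploads", uploadId, chunkIndex + ".chunk");
    Directory.CreateDirectory(Path.GetDirectoryName(chunkPath));
    File.WriteAllBytes(chunkPath, bytes);
}

// Called once at the end (e.g. from CheckFile) to stitch the chunks together.
void AssembleChunks(string uploadId, string finalPath, int chunkCount)
{
    using (var output = new FileStream(finalPath, FileMode.Create))
    {
        for (int i = 0; i < chunkCount; i++)
        {
            string chunkPath = Path.Combine(@"C:\uploads", uploadId, i + ".chunk");
            byte[] chunk = File.ReadAllBytes(chunkPath);
            output.Write(chunk, 0, chunk.Length);
            File.Delete(chunkPath); // clean up as we go
        }
    }
}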
Use this for creating the file; if you want to append to it, use FileMode.Append instead of FileMode.OpenOrCreate:
var fileStream = new FileStream(name, FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite);
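Applied to the UploadChunk method from the question, that might look roughly like this (a sketch; tempPath and bytes are assumed from the question's code):

using (var fs = new FileStream(tempPath, FileMode.Append,
                               FileAccess.Write, FileShare.ReadWrite))
{
    fs.Write(bytes, 0, bytes.Length);
    fs.Flush(); // make sure the chunk is on disk before the call returns
}

The FileShare value controls what other handles are allowed to do with the file while this one is open, so combining it with flushing and disposing promptly reduces the chance of the next request colliding with a handle left over from the previous one.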
I am building a C# desktop application and I need to save a file into a database. I have a file chooser which gives me the correct path to the file. Now my question is how to save that file into the database using its path.
It really depends on the type and size of the file. If it's a text file, then you could use File.ReadAllText() to get a string that you can save in your database.
If it's not a text file, then you could use File.ReadAllBytes() to get the file's binary data, and then save that to your database.
Be careful though: databases are not a great way to store large files (you'll run into performance issues).
FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
BinaryReader br = new BinaryReader(fs);
long numBytes = new FileInfo(fileName).Length; // FileInfo.Length is a long
byte[] buff = br.ReadBytes((int)numBytes);     // ReadBytes takes an int
Then you upload it to the DB like anything else; I assume you are using a varbinary column (BLOB).
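For example, a minimal sketch of the insert, assuming SQL Server, a hypothetical Files table with Name and Data (varbinary(max)) columns, and a connection string of your own:

using System.Data;
using System.Data.SqlClient;
using System.IO;

byte[] buff = File.ReadAllBytes(fileName); // or the BinaryReader approach above

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "INSERT INTO Files (Name, Data) VALUES (@name, @data)", conn))
{
    cmd.Parameters.AddWithValue("@name", Path.GetFileName(fileName));
    // -1 means varbinary(max)
    cmd.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = buff;

    conn.Open();
    cmd.ExecuteNonQuery();
}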
FILESTREAM would be the way to go, but since you're using SQL Server 2005 you will have to do it the read-into-memory way, which consumes a lot of resources.
First of all, the column type varchar(max) is your friend; it gives you ~2 GB of data to play with, which is pretty big for most uses.
Next, read the data into a byte array and convert it to a Base64 string:
FileInfo _fileInfo = new FileInfo(openFileDialog1.FileName);
if (_fileInfo.Length < 2147483647) // 2,147,483,647 bytes (~2 GB) is the maximum we can hold in one array
{
    // File.ReadAllBytes opens, fully reads and closes the file for us
    byte[] _fileData = File.ReadAllBytes(_fileInfo.FullName);
    string _data = Convert.ToBase64String(_fileData);
}
else
{
    MessageBox.Show("File is too large for database.");
}
And reverse the process to recover
byte[] _fileData = Convert.FromBase64String(_data);
You'll want to release those strings as quickly as possible by setting them to string.Empty as soon as you have finished using them!
But if you can, just upgrade to 2008 and use FILESTREAM.
If you're using SQL Server 2008, you could use FILESTREAM (getting started guide here). An example of using this functionality from C# is here.
You would need to read the file into a byte array and then store that as a blob field in the database, possibly along with the name you want to give the file and the file type.
You could just reverse the process for getting the file out again.
When a file is opened in C# using a StreamReader, does the file remain in memory until it is closed?
For example, if a file of size 6 MB is opened by a program using a StreamReader to append a single line at the end of the file, will the program hold the entire 6 MB in memory until the file is closed? Or is a file pointer returned internally by the .NET code so the line is appended at the end, and the 6 MB of memory is not taken up by the program?
The whole point of a stream is so that you don't have to hold an entire object in memory. You read from it piece by piece as needed.
If you want to append to a file, you should use File.AppendText which will create a StreamWriter that appends to the end of a file.
Here is an example:
string path = @"c:\temp\MyTest.txt";

// This text is always added, making the file longer over time
// if it is not deleted.
using (StreamWriter sw = File.AppendText(path))
{
    sw.WriteLine("This");
    sw.WriteLine("is Extra");
    sw.WriteLine("Text");
}
Again, the whole file will not be stored in memory.
Documentation: http://msdn.microsoft.com/en-us/library/system.io.file.appendtext.aspx
The .NET FileStream will buffer a small amount of data (you can set this amount with some of the constructors).
The Windows OS will do more significant caching of the file, if you have plenty of RAM this might be the whole file.
A StreamReader uses FileStream to open the file. FileStream stores a Windows handle, returned by the CreateFile() API function. It is 4 bytes on a 32-bit operating system. FileStream also has a byte[] buffer, it is 4096 bytes by default. This buffer avoids having to call the ReadFile() API function for every single read call. StreamReader itself has a small buffer to make decoding the text in the file more efficient, it is 128 bytes by default. And it has some private variables to keep track of the buffer index and whether or not a BOM has been detected.
This all adds up to a few kilobytes. The data you read with StreamReader will of course take space in your program's heap. That could add up to 12 megabytes if you store every string in, say, a List<string>, since .NET strings are UTF-16 and each character takes two bytes in memory. You usually want to avoid that.
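As a rough illustration of keeping the footprint small, you can process the file line by line and let each string become garbage as soon as you are done with it (the file name is made up):

using System;
using System.IO;

class LineByLine
{
    static void Main()
    {
        int lineCount = 0;

        // Only one line (plus StreamReader's small internal buffers) is in memory at a time
        using (var reader = new StreamReader(@"c:\temp\big.txt"))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                lineCount++; // process the line here instead of storing it
            }
        }

        Console.WriteLine($"Read {lineCount} lines.");
    }
}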
StreamReader will not read the 6 MB file into memory. Also, you can't append a line to the end of the file using StreamReader. You might want to use StreamWriter.
update: not counting buffering and OS caching as someone else mentioned