We can read a file either by using StreamReader or by using File.ReadAllLines.
For example, I want to load each line into a List<string> or string[] for further manipulation of each line.
string[] lines = File.ReadAllLines(@"C:\file.txt");
foreach(string line in lines)
{
//DoSomething(line);
}
or
using (StreamReader reader = new StreamReader("file.txt"))
{
string line;
while ((line = reader.ReadLine()) != null)
{
//DoSomething(line); or //save line into List<string>
}
}
//if a list is created, loop through it here
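For concreteness, the List<string> variant mentioned in the comments would look roughly like this (DoSomething is just a placeholder):
var lines = new List<string>();
using (StreamReader reader = new StreamReader("file.txt"))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        lines.Add(line);   // collect every line for later processing
    }
}
foreach (string line in lines)
{
    // DoSomething(line);
}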
The application comes across text files of different sizes, which could occasionally grow from a few KB to several MB.
My question is: which is the preferred way, and why should one be preferred over the other?
If you want to process each line of a text file without loading the entire file into memory, the best approach is like this:
foreach (var line in File.ReadLines("Filename"))
{
// ...process line.
}
This avoids loading the entire file, and uses an existing .Net function to do so.
However, if for some reason you need to store all the strings in an array, you're best off just using File.ReadAllLines(); but if you only ever use foreach to access the data, use File.ReadLines().
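A minimal sketch of that distinction (the path is just an example): File.ReadAllLines materializes everything up front, while File.ReadLines streams lazily and only suits forward enumeration.
string[] all = File.ReadAllLines(@"C:\temp\data.txt");          // eager: whole file in memory, random access is fine
Console.WriteLine(all[all.Length - 1]);
IEnumerable<string> lazy = File.ReadLines(@"C:\temp\data.txt"); // lazy: lines are read as you iterate
foreach (var line in lazy)
{
    // single forward pass, one line in memory at a time
}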
Microsoft uses a StreamReader in File.ReadAllLines:
private static String[] InternalReadAllLines(String path, Encoding encoding)
{
Contract.Requires(path != null);
Contract.Requires(encoding != null);
Contract.Requires(path.Length != 0);
String line;
List<String> lines = new List<String>();
using (StreamReader sr = new StreamReader(path, encoding))
while ((line = sr.ReadLine()) != null)
lines.Add(line);
return lines.ToArray();
}
StreamReader reads the file line by line, so it consumes less memory.
File.ReadAllLines, on the other hand, reads all the lines at once and stores them in a string[], which consumes more memory. If the contents are too large to fit in the process's memory (a particular risk on a 32-bit OS), this will cause an out-of-memory error.
So, for bigger files StreamReader will be more efficient.
Related
I have a simple program that reads a file using a StreamReader and processes it line by line. But the file I am reading may sometimes be located in a network folder. While testing with such a file, I found that if the network connection is lost at some point while I am reading, the program keeps getting the same line back from stream.ReadLine() again and again, looping forever.
Is there a way to tell from the stream itself that the file handle is no longer available? I was expecting something like a FileNotAvailableException to be thrown when the file handle is lost by the StreamReader.
Here's my code snippet...
string file = @"Z://1601120903.csv"; // Network file
string line;
StringBuilder stb = new StringBuilder();
StreamReader stream = new StreamReader(file, Encoding.UTF8, true, 1048576);
do
{
line = stream.ReadLine();
// Do some work here
} while (line != "");
Compare with null, not with an empty string:
https://msdn.microsoft.com/en-us/library/system.io.streamreader.readline(v=vs.110).aspx
Return Value Type: System.String The next line from the input stream,
or null if the end of the input stream is reached.
do
{
    line = stream.ReadLine();
    if (line != null)
    {
        // Do some work here
    }
} while (line != null);
A better approach, however, is to let .Net do the work (line by line file reading) for you and drop all readers:
foreach (String line in File.ReadLines(file)) {
// Do some work here
}
Correct approach 1 (EndOfStream):
using(StreamReader sr = new StreamReader(...)) {
while(!sr.EndOfStream) {
string line = sr.ReadLine();
Console.WriteLine(line);
}
}
Correct approach 2 (Peek)
using(StreamReader sr = new StreamReader(...)) {
while(sr.Peek() >= 0) {
string line = sr.ReadLine();
}
}
Note that it is incorrect to treat an empty string as the end of the file.
if the network connection is lost at some point while I am reading, it'll stay on the same line again and again, looping infinitely with stream.ReadLine() returning the same line
I've checked this scenario just now - a System.IO.IOException ("The network path was not found.") should be thrown in this case.
Wrapping this with a try catch block will not fix my problem, will it?
In this case you can break the reading as follows:
string line;
do {
try {
line = sr.ReadLine();
// Do some work here
}
catch(System.IO.IOException) {
break;
}
} while(line != null);
If you write it with a while-loop:
while ((line = sr.ReadLine()) != null)
{
Console.WriteLine(line);
}
One more way would be to use File.ReadAllLines(); it takes care of opening the file, reading all the lines, and closing the file, and it may also handle the scenario where the network connection is lost.
var lines = File.ReadAllLines("Z://1601120903.csv");
foreach (var line in lines)
{
    // Do some work
}
Assuming the file shouldn't change while you're reading it and it's not huge, you might want to consider copying it to a temp file (locally) and then working on it without interference, as sketched below.
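A rough sketch of that idea (the temp-file handling here is an assumption, reusing the network path from the question):
string networkFile = @"Z://1601120903.csv";    // the network file from the question
string tempFile = Path.GetTempFileName();      // local temporary copy
try
{
    File.Copy(networkFile, tempFile, true);    // overwrite the empty temp file
    foreach (string line in File.ReadLines(tempFile))
    {
        // Do some work here, isolated from network interruptions
    }
}
finally
{
    File.Delete(tempFile);                     // clean up the temp copy
}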
If you want to get the position (line number) you have reached, this might help:
How to know position(linenumber) of a streamreader in a textfile?
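StreamReader does not expose a line number itself, so a minimal sketch that tracks it manually (reusing the file variable from the question) might look like this:
int lineNumber = 0;
using (var reader = new StreamReader(file))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        lineNumber++;   // 1-based position of the current line in the file
        // Do some work here
    }
}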
If your stream is a NetworkStream, the ReadLine method will wait indefinitely for more content once it has consumed everything received so far. I think, and the StreamReader documentation suggests, that it is designed to work with local file streams. In this case, you can read bytes directly from the NetworkStream.
https://learn.microsoft.com/pt-br/dotnet/api/system.net.sockets.networkstream.read?view=netcore-3.1#System_Net_Sockets_NetworkStream_Read_System_Span_System_Byte__
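A sketch of reading raw bytes from a NetworkStream; the endpoint is hypothetical, and a connected TcpClient is assumed:
using (var client = new TcpClient("example-host", 9000))   // hypothetical endpoint
using (NetworkStream stream = client.GetStream())
{
    var buffer = new byte[4096];
    int bytesRead;
    // Read returns 0 once the remote side closes the connection
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        string chunk = Encoding.UTF8.GetString(buffer, 0, bytesRead);
        // process the chunk
    }
}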
What is the quickest way to read a text file into a string variable?
I understand it can be done in several ways, such as reading individual bytes and then converting them to a string. I was looking for a method with minimal coding.
How about File.ReadAllText:
string contents = File.ReadAllText(#"C:\temp\test.txt");
A benchmark comparison of File.ReadAllLines vs StreamReader.ReadLine, from C# file handling:
Results. StreamReader is much faster for large files with 10,000+
lines, but the difference for smaller files is negligible. As always,
plan for varying sizes of files, and use File.ReadAllLines only when
performance isn't critical.
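If you want to reproduce such a comparison yourself, a rough Stopwatch-based sketch (the path is an assumption; this is not the article's benchmark code) could look like this:
const string path = @"C:\temp\big.txt";   // assumed test file
var sw = Stopwatch.StartNew();
string[] all = File.ReadAllLines(path);
sw.Stop();
Console.WriteLine($"File.ReadAllLines: {all.Length} lines in {sw.ElapsedMilliseconds} ms");
sw.Restart();
int count = 0;
using (var reader = new StreamReader(path))
{
    while (reader.ReadLine() != null) count++;
}
sw.Stop();
Console.WriteLine($"StreamReader.ReadLine: {count} lines in {sw.ElapsedMilliseconds} ms");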
StreamReader approach
As the File.ReadAllText approach has already been suggested by others, you can also try the quicker StreamReader approach (I have not quantitatively tested the performance impact, but it appears to be faster than File.ReadAllText; see the comparison below). The difference in performance will only be visible with larger files, though.
string readContents;
using (StreamReader streamReader = new StreamReader(path, Encoding.UTF8))
{
readContents = streamReader.ReadToEnd();
}
Comparison of File.Readxxx() vs StreamReader.Readxxx()
Viewing the indicative code through ILSpy, I have found the following about File.ReadAllLines and File.ReadAllText:
File.ReadAllText - Uses StreamReader.ReadToEnd internally
File.ReadAllLines - Also uses StreamReader.ReadLine internally, with the additional overhead of creating the List<string> to return as the read lines and looping until the end of the file.
So both the methods are an additional layer of convenience built on top of StreamReader. This is evident by the indicative body of the method.
File.ReadAllText() implementation as decompiled by ILSpy
public static string ReadAllText(string path)
{
if (path == null)
{
throw new ArgumentNullException("path");
}
if (path.Length == 0)
{
throw new ArgumentException(Environment.GetResourceString("Argument_EmptyPath"));
}
return File.InternalReadAllText(path, Encoding.UTF8);
}
private static string InternalReadAllText(string path, Encoding encoding)
{
string result;
using (StreamReader streamReader = new StreamReader(path, encoding))
{
result = streamReader.ReadToEnd();
}
return result;
}
string contents = System.IO.File.ReadAllText(path);
Here's the MSDN documentation
For the noobs out there who find this stuff fun and interesting, the fastest way to read an entire file into a string in most cases (according to these benchmarks) is the following:
using (StreamReader sr = File.OpenText(fileName))
{
string s = sr.ReadToEnd();
}
//you then have to process the string
However, the absolute fastest way to read a text file overall appears to be the following:
using (StreamReader sr = File.OpenText(fileName))
{
string s = String.Empty;
while ((s = sr.ReadLine()) != null)
{
//do what you have to here
}
}
Put up against several other techniques, it won out most of the time, including against the BufferedReader.
Take a look at the File.ReadAllText() method
Some important remarks:
This method opens a file, reads each line of the file, and then adds
each line as an element of a string. It then closes the file. A line
is defined as a sequence of characters followed by a carriage return
('\r'), a line feed ('\n'), or a carriage return immediately followed
by a line feed. The resulting string does not contain the terminating
carriage return and/or line feed.
This method attempts to automatically detect the encoding of a file
based on the presence of byte order marks. Encoding formats UTF-8 and
UTF-32 (both big-endian and little-endian) can be detected.
Use the ReadAllText(String, Encoding) method overload when reading
files that might contain imported text, because unrecognized
characters may not be read correctly.
The file handle is guaranteed to be closed by this method, even if
exceptions are raised
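For instance, the ReadAllText(String, Encoding) overload mentioned above can be used like this (the code page is just an illustrative assumption):
// Explicitly specify the encoding when the file may not carry a detectable byte order mark
string text = File.ReadAllText(@"C:\temp\test.txt", Encoding.GetEncoding(1252));   // assumed Windows-1252 source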
string text = File.ReadAllText("Path");
This gives you all the text in one string variable. If you need each line individually, you can use this:
string[] lines = File.ReadAllLines("Path");
System.IO.StreamReader myFile =
new System.IO.StreamReader("c:\\test.txt");
string myString = myFile.ReadToEnd();
If you want to pick a file from the application's Bin folder, you can try the following (and don't forget the exception handling):
string content = File.ReadAllText(Path.Combine(System.IO.Directory.GetCurrentDirectory(), @"FilesFolder\Sample.txt"));
@Cris, sorry. This is a quote from MSDN (Microsoft):
Methodology
In this experiment, two classes will be compared. The StreamReader and the FileStream class will be directed to read two files of 10K and 200K in their entirety from the application directory.
StreamReader (VB.NET)
sr = New StreamReader(strFileName)
Do
line = sr.ReadLine()
Loop Until line Is Nothing
sr.Close()
FileStream (VB.NET)
Dim fs As FileStream
Dim temp As UTF8Encoding = New UTF8Encoding(True)
Dim b(1024) As Byte
fs = File.OpenRead(strFileName)
Do While fs.Read(b, 0, b.Length) > 0
temp.GetString(b, 0, b.Length)
Loop
fs.Close()
Result
FileStream is obviously faster in this test. It takes an additional 50% more time for StreamReader to read the small file. For the large file, it took an additional 27% of the time.
StreamReader is specifically looking for line breaks while FileStream does not. This will account for some of the extra time.
Recommendations
Depending on what the application needs to do with a section of data, there may be additional parsing that will require additional processing time. Consider a scenario where a file has columns of data and the rows are CR/LF delimited. The StreamReader would work down the line of text looking for the CR/LF, and then the application would do additional parsing looking for a specific location of data. (Did you think String.SubString comes without a price?)
On the other hand, the FileStream reads the data in chunks and a proactive developer could write a little more logic to use the stream to his benefit. If the needed data is in specific positions in the file, this is certainly the way to go as it keeps the memory usage down.
FileStream is the better mechanism for speed but will take more logic.
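A C# sketch of the chunked FileStream approach described in that quote (the buffer size is an assumption, and only the bytes actually read are decoded):
using (FileStream fs = File.OpenRead(strFileName))
{
    var decoder = new UTF8Encoding(true);
    var buffer = new byte[1024];
    int bytesRead;
    while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Decode just this chunk; any parsing logic would go here
        string chunk = decoder.GetString(buffer, 0, bytesRead);
    }
}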
Well, the quickest way, meaning with the least possible C# code, is probably this:
string readText = System.IO.File.ReadAllText(path);
You can use:
public static void ReadFileToEnd()
{
try
{
//provide to reader your complete text file
using (StreamReader sr = new StreamReader("TestFile.txt"))
{
String line = sr.ReadToEnd();
Console.WriteLine(line);
}
}
catch (Exception e)
{
Console.WriteLine("The file could not be read:");
Console.WriteLine(e.Message);
}
}
string content = System.IO.File.ReadAllText( #"C:\file.txt" );
You can use it like this:
public static string ReadFileAndFetchStringInSingleLine(string file)
{
StringBuilder sb;
try
{
sb = new StringBuilder();
using (FileStream fs = File.Open(file, FileMode.Open))
{
using (BufferedStream bs = new BufferedStream(fs))
{
using (StreamReader sr = new StreamReader(bs))
{
string str;
while ((str = sr.ReadLine()) != null)
{
sb.Append(str);
}
}
}
}
return sb.ToString();
}
catch (Exception ex)
{
return "";
}
}
Hope this will help you.
You can also read text from a text file into a string as follows:
string str = "";
using (StreamReader sr = new StreamReader(Application.StartupPath + "\\Sample.txt"))
{
    while (sr.Peek() != -1)
    {
        str = str + sr.ReadLine();
    }
}
I made a comparison between ReadAllText and a StreamBuffer for a 2 MB CSV, and the difference seemed quite small, but ReadAllText seemed to take the upper hand in the times taken to complete.
I'd highly recommend using File.ReadLines(path) compared to StreamReader or any other file-reading methods. Please find below the detailed performance benchmark for both a small file and a large file.
I hope this helps.
File operation read results (benchmark screenshots, not reproduced here): one for a small file (just 8 lines) and one for a larger file (128,465 lines).
ReadLines example:
public void ReadFileUsingReadLines()
{
var contents = File.ReadLines(path);
}
Note : Benchmark is done in .NET 6.
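For reference, the StreamReader counterpart such a comparison would presumably measure against might look like this (a sketch, not the actual benchmark code):
public void ReadFileUsingStreamReader()
{
    using (var reader = new StreamReader(path))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            // consume the line
        }
    }
}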
This comment is for those who are trying to read a complete text file in a WinForms app using C++/CLI with the help of the ReadAllText function.
using namespace System::IO;
String^ filename = gcnew String(charfilename);
if (System::IO::File::Exists(filename))
{
    String^ data = gcnew String(System::IO::File::ReadAllText(filename)->Replace("\0", Environment::NewLine));
    textBox1->Text = data;
}
I can currently remove the last line of a text file using:
var lines = System.IO.File.ReadAllLines("test.txt");
System.IO.File.WriteAllLines("test.txt", lines.Take(lines.Length - 1).ToArray());
However, how is it possible to instead remove the first line of the text file?
Instead of lines.Take, you can use lines.Skip, like:
var lines = File.ReadAllLines("test.txt");
File.WriteAllLines("test.txt", lines.Skip(1).ToArray());
This truncates the file at the beginning, although the technique used (reading everything and writing everything back) is very inefficient.
About the efficient way: the inefficiency comes from the need to read the whole file into memory. An alternative would be to seek within a stream, copy it to another output file, delete the original, and rename the new file. That would be equally fast and yet consume much less memory.
Truncating a file at the end is much easier: you can just find the truncation position and call FileStream.SetLength().
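A sketch of that streaming idea for dropping the first line (the temporary file name is an assumption):
string source = "test.txt";
string temp = source + ".tmp";             // assumed temporary name
using (var reader = new StreamReader(source))
using (var writer = new StreamWriter(temp))
{
    reader.ReadLine();                     // discard the first line
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        writer.WriteLine(line);            // copy the rest as-is
    }
}
File.Delete(source);
File.Move(temp, source);                   // replace the original with the trimmed copy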
Here is an alternative:
using (var stream = File.OpenRead("C:\\yourfile"))
{
var items = new LinkedList<string>();
using (var reader = new StreamReader(stream))
{
reader.ReadLine(); // skip one line
string line;
while ((line = reader.ReadLine()) != null)
{
//it's far better to do the actual processing here
items.AddLast(line);
}
}
}
Update
If you need an IEnumerable<string> and don't want to waste memory you could do something like this:
public static IEnumerable<string> GetFileLines(string filename)
{
using (var stream = File.OpenRead(filename))
{
using (var reader = new StreamReader(stream))
{
reader.ReadLine(); // skip one line
string line;
while ((line = reader.ReadLine()) != null)
{
yield return line;
}
}
}
}
static void Main(string[] args)
{
foreach (var line in GetFileLines("C:\\yourfile.txt"))
{
// do something with the line here.
}
}
var lines = System.IO.File.ReadAllLines("test.txt");
System.IO.File.WriteAllLines("test.txt", lines.Skip(1).ToArray());
Skip discards the given number of elements from the beginning of the sequence and returns the rest. Take returns only the given number of elements from the beginning of the sequence and discards the rest.
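For instance, combining the two would drop both the first and the last line (a small sketch):
var lines = File.ReadAllLines("test.txt");
File.WriteAllLines("test.txt", lines.Skip(1).Take(lines.Length - 2));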
To remove the first line from a text file:
System.IO.StreamReader reader = new System.IO.StreamReader(filePath);
string data = reader.ReadToEnd();
reader.Close();
// Remove the first line (the pattern here assumes that line starts with '<')
data = Regex.Replace(data, "<.*\n", "");
System.IO.StreamWriter writer = new System.IO.StreamWriter(filePath, false);
writer.Write(data);
writer.Close();
You can also do it in one line:
File.WriteAllLines(originalFilePath, File.ReadAllLines(originalFilePath).Skip(1));
This assumes you are passing your filePath as a parameter to the function.
In C#, I'm reading a moderately sized file (100 KB ~ 1 MB), modifying some parts of the content, and finally writing it to a different file. All contents are text. Modification is done on string objects with string operations. My current approach is:
Read each line from the original file by using StreamReader.
Open a StringBuilder for the contents of the new file.
Modify the string object and call AppendLine of the StringBuilder (until the end of the file)
Open a new StreamWriter, and write the StringBuilder to the write stream.
However, I've found that StreamWriter.Write truncates at 32768 bytes (2^15), while the length of the StringBuilder is greater than that. I could write a simple loop to guarantee the entire string gets written to the file, but I'm wondering what the most efficient way to do this task in C# would be.
To summarize, I'd like to modify only some parts of a text file and write to a different file. But, the text file size could be larger than 32768 bytes.
== Answer == I'm sorry for the confusion! It was just that I didn't call Flush. StreamWriter.Write does not have any such size limitation.
StreamWriter.Write does not truncate the string and has no such limitation.
Internally it uses String.CopyTo, which in turn uses unsafe code (with fixed) to copy the characters, so it is very efficient.
The problem is most likely related to not closing the writer. See http://msdn.microsoft.com/en-us/library/system.io.streamwriter.flush.aspx.
But I would suggest not loading the whole file in memory if that can be avoided.
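As a small illustration of the closing/flushing point, writing the StringBuilder through a using block guarantees the writer is flushed and closed (the output path is a placeholder):
var sb = new StringBuilder();
// ...append the modified lines to sb...
using (var writer = new StreamWriter(@"C:\temp\output.txt"))
{
    writer.Write(sb.ToString());
}   // Dispose flushes and closes the writer, so nothing is left unwritten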
Can you try this:
void Test()
{
using (var inputFile = File.OpenText(#"c:\in.txt"))
{
using (var outputFile = File.CreateText(#"c:\out.txt"))
{
string current;
while ((current = inputFile.ReadLine()) != null)
{
outputFile.WriteLine(Process(current));
}
}
}
}
string Process(string current)
{
return current.ToLower();
}
It avoids having to load the full file into memory by processing it line by line and writing the output directly.
Well, that entirely depends on what you want to modify. If your modifications to one part of the text file depend on another part, you obviously need to have both of those parts in memory. If, however, you only need to modify the text file on a line-by-line basis, then use something like this:
using (StreamReader sr = new StreamReader(#"test.txt"))
{
using (StreamWriter sw = new StreamWriter(#"modifiedtest.txt"))
{
while (!sr.EndOfStream)
{
string line = sr.ReadLine();
//do some modifications
sw.WriteLine(line);
sw.Flush(); //force line to be written to disk
}
}
}
Instead of running through the whole document, I would use a regex to find what you are looking for. Sample:
public List<string> GetAllProfiles()
{
List<string> profileNames = new List<string>();
using (StreamReader reader = new StreamReader(_folderLocation + "profiles.pg"))
{
string profiles = reader.ReadToEnd();
var regex = new Regex("\nname=([^\r]{0,})", RegexOptions.IgnoreCase);
var regexMatchs = regex.Matches(profiles);
profileNames.AddRange(from Match regexMatch in regexMatchs select regexMatch.Groups[1].Value);
}
return profileNames;
}
I know normally you would use the File.ReadAllLines, but I'm trying to do it with an uploaded file.
Can I somehow put it into a temporary location, or read it from memory?
I was able to get this working
Is this a string, a Stream, or something else? Either way, you want a TextReader - the question is simply StringReader vs StreamReader. Once you have that, I would do something like:
public static IEnumerable<string> ReadLines(TextReader reader) {
string line;
while((line = reader.ReadLine()) != null) yield return line;
}
then with whichever reader, I can either use:
foreach(var line in ReadLines(reader)) {
// note: non-buffered - i.e. more memory-efficient
}
or:
string[] lines = ReadLines(reader).ToArray();
// note: buffered - all read into memory at once (less memory efficient)
i.e. if it is a Stream you are reading from:
using(var reader = new StreamReader(inputStream)) {
foreach(var line in ReadLines(reader)) {
// do something fun and interesting
}
}