I get an out of memory exception a few seconds after I execute the following code. It doesn't write anything before the exception is thrown. The text file is about half a gigabyte in size. The text file I'm writing to will end up being about 3/4 of a gigabyte. Are there any tricks to navigate around this exception? I assume it's because the text file is too large.
public static void ToCSV(string fileWRITE, string fileREAD)
{
StreamWriter commas = new StreamWriter(fileWRITE);
var readfile = File.ReadAllLines(fileREAD);
foreach (string y in readfile)
{
string q = (y.Substring(0,15)+","+y.Substring(15,1)+","+y.Substring(16,6)+","+y.Substring(22,6)+ ",NULL,NULL,NULL,NULL");
commas.WriteLine(q);
}
commas.Close();
}
I have changed my code to the following, yet I still get the same exception:
public static void ToCSV(string fileWRITE, string fileREAD)
{
StreamWriter commas = new StreamWriter(fileWRITE);
using (FileStream fs = File.Open(fileREAD, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (BufferedStream bs = new BufferedStream(fs))
using (StreamReader sr = new StreamReader(bs))
{
string y;
while ((y = sr.ReadLine()) != null)
{
string q = (y.Substring(0, 15) + "," + y.Substring(15, 1) + "," + y.Substring(16, 6) + "," + y.Substring(22, 6) + ",NULL,NULL,NULL,NULL");
commas.WriteLine(q);
}
}
commas.Close();
}
Read the file line by line; it will help you to avoid an OutOfMemoryException. And I personally prefer using statements to handle streams. They make sure that the file is closed even if an exception is thrown.
public static void ToCSV(string fileWRITE, string fileREAD)
{
using(var commas = new StreamWriter(fileWRITE))
using(var file = new StreamReader(fileREAD))
{
var line = file.ReadLine();
while( line != null )
{
string q = (line.Substring(0,15)+","+line.Substring(15,1)+","+line.Substring(16,6)+","+line.Substring(22,6)+ ",NULL,NULL,NULL,NULL");
commas.WriteLine(q);
line = file.ReadLine();
}
}
}
In the following post you can find numerous methods for reading and writing large files: Reading large text files with streams in C#
Basically you just need to read the bytes into a buffer that is reused. This way you will load only a very small amount of the file into memory.
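A minimal sketch of that reused-buffer idea (the 4 KB buffer size is an arbitrary choice, and path stands for your input file):
using (FileStream fs = File.OpenRead(path))
{
byte[] buffer = new byte[4096]; // reused for every chunk
int bytesRead;
while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
{
// Process buffer[0..bytesRead) here; only this chunk is in memory.
}
}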
Instead of reading the whole file, try to read and process it line by line.
That way you do not risk running into an out of memory exception, because even if you manage to give the program more memory, a day will come when the file is again too large.
The program may lose speed if you use less memory, so basically you have to balance memory use against execution time. Workarounds include using buffered output, reading more than one line at a time, or transforming the strings in multiple threads.
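For instance, here is a hedged sketch of a streaming version of the question's conversion (this assumes using System.Linq; File.ReadLines enumerates lazily, so only one line is in memory at a time, and File.WriteAllLines writes as it consumes the enumeration):
public static void ToCSV(string fileWRITE, string fileREAD)
{
// The Substring offsets are taken from the question's fixed-width format.
File.WriteAllLines(fileWRITE,
File.ReadLines(fileREAD).Select(y =>
y.Substring(0, 15) + "," + y.Substring(15, 1) + "," +
y.Substring(16, 6) + "," + y.Substring(22, 6) + ",NULL,NULL,NULL,NULL"));
}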
I'm new to C# and object-oriented programming in general. I have an application which parses a text file.
The objective of the application is to read the contents of the provided text file and replace the matching values.
When a file of about 800 MB to 1.2 GB is provided as the input, the application crashes with a System.OutOfMemoryException.
On researching, I came across a couple of answers which recommend changing the Target Platform to x64.
The same issue persists after changing the target platform.
Following is the code:
// Reading the text file
var _data = string.Empty;
using (StreamReader sr = new StreamReader(logF))
{
_data = sr.ReadToEnd();
sr.Dispose();
sr.Close();
}
foreach (var replacement in replacements)
{
_data = _data.Replace(replacement.Key, replacement.Value);
}
//Writing The text File
using (StreamWriter sw = new StreamWriter(logF))
{
sw.WriteLine(_data);
sw.Dispose();
sw.Close();
}
The error points to
_data = sr.ReadToEnd();
replacements is a dictionary. The Key contains the original word and the Value contains the word to be replaced.
The Key elements are replaced with the Value elements of the KeyValuePair.
The approach being followed is reading the file, replacing, and writing.
I tried using a StringBuilder instead of a string, yet the application still crashed.
Can this be overcome by reading the file one line at a time, replacing, and writing? What would be an efficient and faster way of doing this?
Update: The system memory is 8 GB, and on monitoring the performance it spikes up to 100% memory usage.
@Tim Schmelter's answer works well.
However, the memory utilization spikes to over 90%. It could be due to the following code:
String[] arrayofLine = File.ReadAllLines(logF);
// Generating Replacement Information
Dictionary<int, string> _replacementInfo = new Dictionary<int, string>();
for (int i = 0; i < arrayofLine.Length; i++)
{
foreach (var replacement in replacements.Keys)
{
if (arrayofLine[i].Contains(replacement))
{
arrayofLine[i] = arrayofLine[i].Replace(replacement, masking[replacement]);
if (_replacementInfo.ContainsKey(i + 1))
{
_replacementInfo[i + 1] = _replacementInfo[i + 1] + "|" + replacement;
}
else
{
_replacementInfo.Add(i + 1, replacement);
}
}
}
}
//Creating Replacement Information
StringBuilder sb = new StringBuilder();
foreach (var Replacement in _replacementInfo)
{
foreach (var replacement in Replacement.Value.Split('|'))
{
sb.AppendLine(string.Format("Line {0}: {1} ---> \t\t{2}", Replacement.Key, replacement, masking[replacement]));
}
}
// Writing the replacement information
if (sb.Length!=0)
{
using (StreamWriter swh = new StreamWriter("logF_Rep.txt"))
{
swh.WriteLine(sb.ToString());
swh.Dispose();
swh.Close();
}
}
sb.Clear();
It finds the line number at which each replacement was made. Can this be captured using Tim's code, in order to avoid loading the data into memory multiple times?
If you have very large files you should try MemoryMappedFile, which is designed for this purpose (files > 1 GB) and lets you read "windows" of a file into memory. But it's not easy to use.
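A hedged sketch of the idea (this assumes using System.IO.MemoryMappedFiles, that logF is the file from the question, and that the file is at least 4 KB; the offset and window size would come from your own scanning logic):
using (var mmf = MemoryMappedFile.CreateFromFile(logF, FileMode.Open))
using (var view = mmf.CreateViewStream(0, 4096)) // a 4 KB "window" into the file
using (var reader = new StreamReader(view))
{
string window = reader.ReadToEnd();
// Process just this window, then map the next offset.
}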
A simple optimization would be to read and replace line by line
int lineNumber = 0;
var _replacementInfo = new Dictionary<int, List<string>>();
using (StreamReader sr = new StreamReader(logF))
{
using (StreamWriter sw = new StreamWriter(logF_Temp))
{
while (!sr.EndOfStream)
{
string line = sr.ReadLine();
lineNumber++;
foreach (var kv in replacements)
{
bool contains = line.Contains(kv.Key);
if (contains)
{
List<string> lineReplaceList;
if (!_replacementInfo.TryGetValue(lineNumber, out lineReplaceList))
lineReplaceList = new List<string>();
lineReplaceList.Add(kv.Key);
_replacementInfo[lineNumber] = lineReplaceList;
line = line.Replace(kv.Key, kv.Value);
}
}
sw.WriteLine(line);
}
}
}
At the end you can use File.Copy(logF_Temp, logF, true); if you want to overwrite the old file.
Read the file line by line and append each changed line to another file. At the end, replace the source file with the new one (optionally keeping a backup).
var tmpFile = Path.GetTempFileName();
using (StreamReader sr = new StreamReader(logF))
{
using (StreamWriter sw = new StreamWriter(tmpFile))
{
string line;
while ((line = sr.ReadLine()) != null)
{
foreach (var replacement in replacements)
line = line.Replace(replacement.Key, replacement.Value);
sw.WriteLine(line);
}
}
}
File.Replace(tmpFile, logF, null); // you can pass a backup file name instead of null if you want a backup of the logF file
An OutOfMemoryException is thrown whenever the application tries and fails to allocate memory to perform an operation. According to Microsoft's documentation, the following operations can potentially throw an OutOfMemoryException:
Boxing (i.e., wrapping a value type in an Object)
Creating an array
Creating an object
If you try to create an infinite number of objects, then it's pretty reasonable to assume that you're going to run out of memory sooner or later.
(Note: don't forget about the garbage collector. Depending on the lifetimes of the objects being created, it will delete some of them if it determines they're no longer in use.)
What I suspect is this line:
foreach (var replacement in replacements)
{
_data = _data.Replace(replacement.Key, replacement.Value);
}
Sooner or later you will run out of memory. Did you ever count how many times it loops?
Try:
Increase the available memory.
Reduce the amount of data you are retrieving.
I am trying to append the text from a text box to a new line in a text file and have run into an issue. Let's say that there are already contents in the text file, looking something like this:
something
Something
Something<---- Cursor ends here
And the cursor ends where the arrow is pointing (after the 'g' on the last 'something'). If I try to use
File.AppendAllLines(@"Location", new[]{tb.Text});
Or
File.AppendAllText(@"Location", tb.Text + Environment.NewLine);
They will put the text where the cursor is, not on a new line under the last item in the text file. It works if the cursor starts on a new line, but as soon as it ends at the end of a word, everything goes on the same line.
Is there something I'm missing? Would using a StreamWriter or some other method fix this issue?
It does what you should expect. Your text doesn't start with a newline, so it is appended just after the point where the previous text ends. To solve your problem you could open the file before writing to it and read the last byte.
bool addNewLine = true;
using (FileStream fs = new FileStream(@"location", FileMode.Open))
using (BinaryReader rd = new BinaryReader(fs))
{
fs.Position = fs.Length - 1;
int last = rd.Read();
// The last byte is 10 if there is a LineFeed
if (last == 10)
addNewLine = false;
}
string allLines = (addNewLine ? Environment.NewLine + tb.Text : tb.Text);
File.AppendAllLines(@"Location", new[]{allLines});
As you can see this is a bit more complex, but it avoids reading the whole file into memory.
Actually, as pointed out in other answers, this is by design: File.AppendAllLines appends text at the end of the file. It does not add a line break before appending text. For this, you will have to read the file somehow and determine if the last character is a line break. There are multiple ways to do this.
The simplest way is to just read the existing text and check whether it ends with a line break. If not, just prepend a line break to your string before passing it to File.AppendAllLines.
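A minimal sketch of that simple approach (it loads the whole file into memory once; tb.Text is the text box from the question, and path stands for the file location):
string existing = File.ReadAllText(path);
string toAppend = tb.Text;
// Prepend a line break only if the file is non-empty and doesn't already end with one.
if (existing.Length > 0 && !existing.EndsWith("\n"))
{
toAppend = Environment.NewLine + toAppend;
}
File.AppendAllText(path, toAppend);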
However, if you are dealing with large files, or if you do not want to open the file multiple times, something like the following method will do this whilst still only opening the file once.
public void AppendAllLinesAtNewLine(string path, IEnumerable<string> content)
{
// No file, just append as usual.
if (!File.Exists(path))
{
File.AppendAllLines(path, content);
return;
}
using (var stream = File.Open(path, FileMode.Open, FileAccess.ReadWrite))
using (var reader = new StreamReader(stream))
using (var writer = new StreamWriter(stream))
{
// Determines if there is a new line at the end of the file. If not, one is appended.
long readPosition = stream.Length == 0 ? 0 : -1;
stream.Seek(readPosition, SeekOrigin.End);
string end = reader.ReadToEnd();
if (end.Length != 0 && !end.Equals("\n", StringComparison.Ordinal))
{
writer.Write(Environment.NewLine);
}
// Simple write all lines.
foreach (var line in content)
{
writer.WriteLine(line);
}
}
}
Note: the long readPosition = stream.Length == 0 ? 0 : -1; is for handling empty files.
This is exactly what you are asking it to do: it appends the text you have to the existing file. If that file is 'wrong', the writer can't help it.
You could make sure the line end is always written to the file, if the file is entirely under your control. Otherwise, you could read every line, append yours to them, and write the entire file back. This is obviously bad for large files:
File.WriteAllLines( @"Location"
, File.ReadAllLines(@"Location")
.Concat(new string[] { text })
);
If you put the Environment.NewLine in front of the text, it should give you the results you're looking for:
File.AppendAllText(#"Location", Environment.NewLine + tb.Text);
Thank you all for your suggestions, here is what I decided to do
FileStream stream = new FileStream("location", FileMode.Open, FileAccess.Read);
char actual = '\0';
if(stream.Length != 0)
{
stream.Seek(-1, SeekOrigin.End);
var lastChar = (byte)stream.ReadByte();
actual = (char)lastChar;
}
stream.Close();
try {
if(addCompT.Text != "")
{
if (actual == '\n' || actual == '\0')
File.AppendAllText("location", addCompT.Text + Environment.NewLine);
else
File.AppendAllText("location", Environment.NewLine + addCompT.Text + Environment.NewLine);
addCompT.Text = "";
}
}
catch(System.UnauthorizedAccessException)
{
MessageBox.Show("Please Run Program As Administrator!");
}
I appreciate everyone's help!
What is the quickest way to read a text file into a string variable?
I understand it can be done in several ways, such as read individual bytes and then convert those to string. I was looking for a method with minimal coding.
How about File.ReadAllText:
string contents = File.ReadAllText(@"C:\temp\test.txt");
A benchmark comparison of File.ReadAllLines vs StreamReader ReadLine from C# file handling
Results: StreamReader is much faster for large files with 10,000+ lines, but the difference for smaller files is negligible. As always, plan for varying sizes of files, and use File.ReadAllLines only when performance isn't critical.
StreamReader approach
Since the File.ReadAllText approach has already been suggested by others, you can also try the quicker StreamReader approach (I have not quantitatively tested the performance impact, but it appears to be faster than File.ReadAllText; see the comparison below). The difference in performance will only be visible for larger files, though.
string readContents;
using (StreamReader streamReader = new StreamReader(path, Encoding.UTF8))
{
readContents = streamReader.ReadToEnd();
}
Comparison of File.ReadXxx() vs StreamReader.ReadXxx()
Viewing the decompiled code through ILSpy, I have found the following about File.ReadAllLines and File.ReadAllText:
File.ReadAllText - uses StreamReader.ReadToEnd internally
File.ReadAllLines - also uses StreamReader.ReadLine internally, with the additional overhead of creating the List<string> to return as the read lines and looping till the end of the file
So both methods are an additional layer of convenience built on top of StreamReader. This is evident from the decompiled bodies of the methods.
File.ReadAllText() implementation as decompiled by ILSpy
public static string ReadAllText(string path)
{
if (path == null)
{
throw new ArgumentNullException("path");
}
if (path.Length == 0)
{
throw new ArgumentException(Environment.GetResourceString("Argument_EmptyPath"));
}
return File.InternalReadAllText(path, Encoding.UTF8);
}
private static string InternalReadAllText(string path, Encoding encoding)
{
string result;
using (StreamReader streamReader = new StreamReader(path, encoding))
{
result = streamReader.ReadToEnd();
}
return result;
}
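For comparison, File.ReadAllLines boils down to roughly the following (a paraphrase of the reference source rather than a verbatim decompilation):
private static string[] InternalReadAllLines(string path, Encoding encoding)
{
List<string> lines = new List<string>();
using (StreamReader sr = new StreamReader(path, encoding))
{
string line;
while ((line = sr.ReadLine()) != null)
{
lines.Add(line); // the extra overhead: a List<string> grows as lines are read
}
}
return lines.ToArray();
}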
string contents = System.IO.File.ReadAllText(path);
Here's the MSDN documentation
For the noobs out there who find this stuff fun and interesting, the fastest way to read an entire file into a string in most cases (according to these benchmarks) is by the following:
using (StreamReader sr = File.OpenText(fileName))
{
string s = sr.ReadToEnd();
}
//you then have to process the string
However, the absolute fastest to read a text file overall appears to be the following:
using (StreamReader sr = File.OpenText(fileName))
{
string s = String.Empty;
while ((s = sr.ReadLine()) != null)
{
//do what you have to here
}
}
Put up against several other techniques, it won out most of the time, including against the BufferedReader.
Take a look at the File.ReadAllText() method
Some important remarks:
This method opens a file, reads each line of the file, and then adds each line as an element of a string. It then closes the file. A line is defined as a sequence of characters followed by a carriage return ('\r'), a line feed ('\n'), or a carriage return immediately followed by a line feed. The resulting string does not contain the terminating carriage return and/or line feed.
This method attempts to automatically detect the encoding of a file based on the presence of byte order marks. Encoding formats UTF-8 and UTF-32 (both big-endian and little-endian) can be detected.
Use the ReadAllText(String, Encoding) method overload when reading files that might contain imported text, because unrecognized characters may not be read correctly.
The file handle is guaranteed to be closed by this method, even if exceptions are raised.
string text = File.ReadAllText("Path");
You have all the text in one string variable. If you need each line individually, you can use this:
string[] lines = File.ReadAllLines("Path");
System.IO.StreamReader myFile =
new System.IO.StreamReader("c:\\test.txt");
string myString = myFile.ReadToEnd();
If you want to pick the file from the application's bin folder, then you can try the following, and don't forget to do exception handling:
string content = File.ReadAllText(Path.Combine(System.IO.Directory.GetCurrentDirectory(), @"FilesFolder\Sample.txt"));
@Cris, sorry. This is a quote from MSDN Microsoft:
Methodology
In this experiment, two classes will be compared. The StreamReader and the FileStream class will be directed to read two files of 10K and 200K in their entirety from the application directory.
StreamReader (VB.NET)
Dim sr As StreamReader
Dim line As String
sr = New StreamReader(strFileName)
Do
line = sr.ReadLine()
Loop Until line Is Nothing
sr.Close()
FileStream (VB.NET)
Dim fs As FileStream
Dim temp As UTF8Encoding = New UTF8Encoding(True)
Dim b(1024) As Byte
fs = File.OpenRead(strFileName)
Do While fs.Read(b, 0, b.Length) > 0
temp.GetString(b, 0, b.Length)
Loop
fs.Close()
Result
FileStream is obviously faster in this test. It takes an additional 50% more time for StreamReader to read the small file. For the large file, it took an additional 27% of the time.
StreamReader is specifically looking for line breaks while FileStream does not. This will account for some of the extra time.
Recommendations
Depending on what the application needs to do with a section of data, there may be additional parsing that will require additional processing time. Consider a scenario where a file has columns of data and the rows are CR/LF delimited. The StreamReader would work down the line of text looking for the CR/LF, and then the application would do additional parsing looking for a specific location of data. (Did you think String.Substring comes without a price?)
On the other hand, the FileStream reads the data in chunks and a proactive developer could write a little more logic to use the stream to his benefit. If the needed data is in specific positions in the file, this is certainly the way to go as it keeps the memory usage down.
FileStream is the better mechanism for speed but will take more logic.
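For reference, here is a hedged C# sketch of the same FileStream chunk loop (assuming using System.Text; unlike the VB sample above, it decodes only the bytes actually read):
var decoder = new UTF8Encoding(true);
byte[] buffer = new byte[1024];
using (FileStream fs = File.OpenRead(strFileName))
{
int bytesRead;
while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
{
// Note: a multi-byte character split across two chunks would still be
// mangled here; production code would use a System.Text.Decoder instead.
string chunk = decoder.GetString(buffer, 0, bytesRead);
}
}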
Well, the quickest way, meaning with the least possible C# code, is probably this one:
string readText = System.IO.File.ReadAllText(path);
You can use:
public static void ReadFileToEnd()
{
try
{
// provide your complete text file path to the reader
using (StreamReader sr = new StreamReader("TestFile.txt"))
{
String line = sr.ReadToEnd();
Console.WriteLine(line);
}
}
catch (Exception e)
{
Console.WriteLine("The file could not be read:");
Console.WriteLine(e.Message);
}
}
string content = System.IO.File.ReadAllText( #"C:\file.txt" );
You can use something like this:
public static string ReadFileAndFetchStringInSingleLine(string file)
{
StringBuilder sb;
try
{
sb = new StringBuilder();
using (FileStream fs = File.Open(file, FileMode.Open))
{
using (BufferedStream bs = new BufferedStream(fs))
{
using (StreamReader sr = new StreamReader(bs))
{
string str;
while ((str = sr.ReadLine()) != null)
{
sb.Append(str);
}
}
}
}
return sb.ToString();
}
catch (Exception)
{
// Swallow any error and return an empty string on failure.
return "";
}
}
Hope this will help you.
You can also read the text from a text file into a string as follows:
string str = "";
StreamReader sr = new StreamReader(Application.StartupPath + "\\Sample.txt");
while(sr.Peek() != -1)
{
str = str + sr.ReadLine();
}
I made a comparison between ReadAllText and a buffered StreamReader for a 2 MB CSV, and the difference seemed quite small, but ReadAllText seemed to have the upper hand in the time taken to complete.
I'd highly recommend using File.ReadLines(path) compared to StreamReader or any other file-reading methods. Please find below the detailed performance benchmark for both a small file and a large file.
I hope this would help.
[Benchmark result images: file-operation read times for a small file (just 8 lines) and for a larger file (128,465 lines).]
ReadLines example:
public void ReadFileUsingReadLines()
{
var contents = File.ReadLines(path);
}
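Note that File.ReadLines is lazy: the line above only sets up the enumeration, and lines are actually read from disk as you iterate. A minimal usage sketch:
// Only one line is held in memory at a time while enumerating.
foreach (string line in File.ReadLines(path))
{
Console.WriteLine(line);
}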
Note: the benchmark was done on .NET 6.
This note is for those who are trying to read a complete text file in WinForms using C++/CLI with the help of the .NET ReadAllText function:
using namespace System::IO;
String^ filename = gcnew String(charfilename);
if (System::IO::File::Exists(filename))
{
String^ data = System::IO::File::ReadAllText(filename)->Replace("\0", Environment::NewLine);
textBox1->Text = data;
}
In C#, I'm reading a moderate size of file (100 KB ~ 1 MB), modifying some parts of the content, and finally writing to a different file. All contents are text. Modification is done as string objects and string operations. My current approach is:
Read each line from the original file by using StreamReader.
Open a StringBuilder for the contents of the new file.
Modify the string object and call AppendLine of the StringBuilder (until the end of the file)
Open a new StreamWriter, and write the StringBuilder to the write stream.
However, I've found that StreamWriter.Write truncates the output at 32,768 bytes (2^16), but the length of the StringBuilder is greater than that. I could write a simple loop to guarantee the entire string reaches the file, but I'm wondering what would be the most efficient way in C# of doing this task.
To summarize, I'd like to modify only some parts of a text file and write to a different file. But, the text file size could be larger than 32768 bytes.
== Answer == I'm sorry for the confusion! It was just that I didn't call Flush. StreamWriter.Write does not have a short (e.g., 2^16) limitation.
StreamWriter.Write does not truncate the string and has no such limitation.
Internally it uses String.CopyTo, which in turn uses unsafe code (fixed) to copy chars, so it is very efficient.
The problem is most likely related to not closing the writer. See http://msdn.microsoft.com/en-us/library/system.io.streamwriter.flush.aspx.
But I would suggest not loading the whole file in memory if that can be avoided.
Can you try this:
void Test()
{
using (var inputFile = File.OpenText(@"c:\in.txt"))
{
using (var outputFile = File.CreateText(@"c:\out.txt"))
{
string current;
while ((current = inputFile.ReadLine()) != null)
{
outputFile.WriteLine(Process(current));
}
}
}
}
string Process(string current)
{
return current.ToLower();
}
It avoids having the full file loaded in memory by processing it line by line and writing it directly.
Well, that entirely depends on what you want to modify. If your modifications of one part of the text file are dependent on another part of the text file, you obviously need to have both of those parts in memory. If, however, you only need to modify the text file on a line-by-line basis, then use something like this:
using (StreamReader sr = new StreamReader(@"test.txt"))
{
using (StreamWriter sw = new StreamWriter(@"modifiedtest.txt"))
{
while (!sr.EndOfStream)
{
string line = sr.ReadLine();
//do some modifications
sw.WriteLine(line);
sw.Flush(); //force line to be written to disk
}
}
}
Instead of running through the whole document, I would use a regex to find what you are looking for. Sample:
public List<string> GetAllProfiles()
{
List<string> profileNames = new List<string>();
using (StreamReader reader = new StreamReader(_folderLocation + "profiles.pg"))
{
string profiles = reader.ReadToEnd();
var regex = new Regex("\nname=([^\r]{0,})", RegexOptions.IgnoreCase);
var regexMatchs = regex.Matches(profiles);
profileNames.AddRange(from Match regexMatch in regexMatchs select regexMatch.Groups[1].Value);
}
return profileNames;
}
This is the way I read the file:
public static string readFile(string path)
{
StringBuilder stringFromFile = new StringBuilder();
StreamReader SR;
string S;
SR = File.OpenText(path);
S = SR.ReadLine();
while (S != null)
{
stringFromFile.Append(SR.ReadLine());
}
SR.Close();
return stringFromFile.ToString();
}
The problem is that it takes so long (the .txt file is about 2.5 MB). It took over 5 minutes. Is there a better way?
Solution taken
public static string readFile(string path)
{
return File.ReadAllText(path);
}
Took less than 1 second... :)
S = SR.ReadLine();
while (S != null)
{
stringFromFile.Append(SR.ReadLine());
}
Of note here: S is never set after that initial ReadLine(), so the S != null condition never becomes false once you enter the while loop, and the loop never exits. Try:
S = SR.ReadLine();
while (S != null)
{
stringFromFile.Append(S = SR.ReadLine());
}
or use one of the other comments.
If you need to remove newlines, use string.Replace(Environment.NewLine, "")
Leaving aside the horrible variable names and the lack of a using statement (you won't close the file if there are any exceptions), that should be okay, and it certainly shouldn't take 5 minutes to read 2.5 megs.
Where does the file live? Is it on a flaky network share?
By the way, the only difference between what you're doing and using File.ReadAllText is that you're losing line breaks. Is this deliberate? How long does ReadAllText take?
return System.IO.File.ReadAllText(path);
Marcus Griep has it right. It's taking so long because YOU HAVE AN INFINITE LOOP. I copied your code, made his changes, and it read a 2.4 MB text file in less than a second.
But I think you might miss the first line of the file. Try this:
S = SR.ReadLine();
while (S != null){
stringFromFile.Append(S);
S = SR.ReadLine();
}
Do you need the entire 2.5 MB in memory at once?
If not, I would try to work with what you need.
Use System.IO.File.ReadAllLines instead.
http://msdn.microsoft.com/en-us/library/system.io.file.readalllines.aspx
Alternatively, estimating the character count and passing that to StringBuilder's constructor as the capacity should speed it up.
Try this, should be much faster:
var str = System.IO.File.ReadAllText(path);
return str.Replace(Environment.NewLine, "");
By the way: Next time you're in a similar situation, try pre-allocating memory. This improves runtime drastically, regardless of the exact data structures you use. Most containers (StringBuilder as well) have a constructor that allow you to reserve memory. This way, less time-consuming reallocations are necessary during the read process.
For example, you could write the following if you want to read data from a file into a StringBuilder:
var info = new FileInfo(path);
var sb = new StringBuilder((int)info.Length);
(Cast necessary because System.IO.FileInfo.Length is long.)
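To complete the picture, a hedged sketch of filling the pre-sized StringBuilder (note that FileInfo.Length counts bytes, which only approximates the character count for non-ASCII encodings):
var info = new FileInfo(path);
var sb = new StringBuilder((int)info.Length); // capacity reserved up front
using (var reader = File.OpenText(path))
{
string line;
while ((line = reader.ReadLine()) != null)
{
sb.Append(line);
}
}
string contents = sb.ToString();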
ReadAllText was a very good solution for me. I used the following code for a 3,000,000-row text file, and it took 4-5 seconds to read all rows:
string fileContent = System.IO.File.ReadAllText(txtFilePath.Text);
string[] arr = fileContent.Split('\n');
The loop and StringBuilder may be redundant; try using ReadToEnd.
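That is, something like this minimal sketch:
using (StreamReader sr = File.OpenText(path))
{
return sr.ReadToEnd(); // no loop, no StringBuilder needed
}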
To read a text file quickly, you can use the same buffered StreamReader approach shown in ReadFileAndFetchStringInSingleLine earlier on this page. For more info, please visit the following link:
Fastest Way to Read Text Files