I want to read every single line from a txt file, but instead of every line I get every second line. Why does this happen, and what can I do about it?
list.txt:
60001
60002
60003
60004
...every number on its own line, and so on for 100 lines.
StreamReader podz = new StreamReader(@"D:\list.txt");
string pd="";
int count=0;
while ((pd = podz.ReadLine()) != null)
{
Console.WriteLine("\n number: {0}", podz.ReadLine());
count++;
}
Console.WriteLine("\n c {0}", count);
Console.ReadLine();
Because you're reading two lines per loop iteration:
while ((pd = podz.ReadLine()) != null) // here
{
Console.WriteLine("\n number: {0}", podz.ReadLine()); // and here
count++;
}
Instead, read the line only once (in the while condition) and use the pd variable that already holds it inside the loop:
while ((pd = podz.ReadLine()) != null)
{
Console.WriteLine("\n number: {0}", pd);
count++;
}
I suggest using the File class instead of streams and readers:
var lines = File.ReadLines(@"D:\list.txt");
int count = 0;
foreach (var line in lines) {
Console.WriteLine("\n number: {0}", line);
count++;
}
Console.WriteLine("\n c {0}", count);
Console.ReadLine();
Your code has some problems:
You are reading the line incorrectly (calling ReadLine twice in each iteration)
You are not closing the Stream
If the file is in use by another process (e.g. a process writing to it), you may get errors
The StreamReader class is useful when the file is very large; if you are dealing with a small file you can simply call System.IO.File.ReadAllLines("FileName").
If the file is large, follow this approach:
public static List<String> ReadAllLines(String fileName)
{
    using (System.IO.FileStream fs = new System.IO.FileStream(fileName, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read))
    {
        using (System.IO.StreamReader sr = new System.IO.StreamReader(fs))
        {
            List<String> lines = new List<String>();
            while (!sr.EndOfStream)
            {
                lines.Add(sr.ReadLine());
            }
            return lines;
        }
    }
}
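For illustration, here's a minimal sketch of how both options might be used (the D:\list.txt path is taken from the question; this sketch is not part of the original answer):
// Small file: one call does everything.
string[] smallFileLines = System.IO.File.ReadAllLines(@"D:\list.txt");

// Large file: use the streaming helper defined above.
List<String> largeFileLines = ReadAllLines(@"D:\list.txt");
foreach (String line in largeFileLines)
{
    Console.WriteLine("number: {0}", line);
}
Console.WriteLine("count: {0}", largeFileLines.Count);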
I was practicing writing to a file using C#, but my code is not working (nothing is written to the file).
{
int T, N; //T = testCase , N = number of dice in any Test
int index = 0, straight;
List<string> nDiceFaceValues = new List<string>(); //List for Dice Faces
string line = null; //string to read line from file
string[] lineValues = {}; //array of string to split string line values
string InputFilePath = @"E:\Visual Studio 2017\CodeJam_Dice Straight\A-small-practice.in"; //path of input file
string OuputFilePath = @"E:\Visual Studio 2017\CodeJam_Dice Straight\A-small-practice.out"; //path of output file
StreamReader InputFile = new StreamReader(InputFilePath);
StreamWriter Outputfile = new StreamWriter(OuputFilePath);
T = Int32.Parse(InputFile.ReadLine()); //test cases input
Console.WriteLine("Test Cases : {0}", T);
while (index < T) {
N = Int32.Parse(InputFile.ReadLine());
for (int i = 0; i < N; i++) {
line = InputFile.ReadLine();
lineValues = line.Split(' ');
foreach(string j in lineValues)
{
nDiceFaceValues.Add(j);
}
}
straight = ArrangeDiceINStraight(nDiceFaceValues);
Console.WriteLine("case: {0} , {1}", ++index, straight);
Outputfile.WriteLine("case: {0} , {1}", index, straight);
nDiceFaceValues.Clear();
}
}
What is wrong with this code?
How do I fix it?
Why is it not working?
Note: I want to write to the file line by line.
What's missing is closing things down (flushing the buffers, etc.):
using(var outputfile = new StreamWriter(ouputFilePath)) {
outputfile.WriteLine("case: {0} , {1}", index, straight);
}
However, if you're going to do that for every line, File.AppendText may be more convenient.
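For illustration, a minimal sketch of the File.AppendText alternative (reusing the variable names from the question; how it is wired in is an assumption):
using (StreamWriter outputfile = File.AppendText(OuputFilePath))
{
    outputfile.WriteLine("case: {0} , {1}", index, straight);
}
File.AppendText opens the file for appending (creating it if it does not exist) and returns a StreamWriter that the using block flushes and closes.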
In particular, note that new StreamWriter will be overwriting by default, so you'd also need to account for that:
using(var outputfile = new StreamWriter(ouputFilePath, true)) {
outputfile.WriteLine("case: {0} , {1}", index, straight);
}
The true argument here means append.
If you have opened a file for concurrent read/write, you could also try just adding outputfile.Flush();, but... it isn't guaranteed to do anything.
I am having trouble attempting to find words in a text file in C#.
I want to find the word that is input into the console then display the entire line that the word was found on in the console.
In my text file I have:
Stephen Haren,December,9,4055551235
Laura Clausing,January,23,4054447788
William Connor,December,13,123456789
Kara Marie,October,23,1593574862
Audrey Carrit,January,16,1684527548
Sebastian Baker,October,23,9184569876
So if I input "December" I want it to display "Stephen Haren,December,9,4055551235" and "William Connor,December,13,123456789".
I thought about using substrings but I figured there had to be a simpler way.
My Code After Given Answer:
using System;
using System.IO;
class ReadFriendRecords
{
public static void Main()
{
//the path of the file
FileStream inFile = new FileStream(@"H:\C#\Chapter.14\FriendInfo.txt", FileMode.Open, FileAccess.Read);
StreamReader reader = new StreamReader(inFile);
string record;
string input;
Console.Write("Enter Friend's Birth Month >> ");
input = Console.ReadLine();
try
{
//the program reads the record and displays it on the screen
record = reader.ReadLine();
while (record != null)
{
if (record.Contains(input))
{
Console.WriteLine(record);
}
record = reader.ReadLine();
}
}
finally
{
//after the record is done being read, the program closes
reader.Close();
inFile.Close();
}
Console.ReadLine();
}
}
Iterate through all the lines (StreamReader, File.ReadAllLines, etc.) and check if
line.Contains("December") (replace "December" with the user input).
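A minimal sketch of that idea, using the file path from the question (adjust the path to your own file):
string input = Console.ReadLine();
foreach (string record in File.ReadAllLines(@"H:\C#\Chapter.14\FriendInfo.txt"))
{
    if (record.Contains(input))
    {
        Console.WriteLine(record);
    }
}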
Edit:
I would go with the StreamReader in case you have large files, and use the IndexOf example from @Matias Cicero instead of Contains for a case-insensitive search.
Console.Write("Keyword: ");
var keyword = Console.ReadLine() ?? "";
using (var sr = new StreamReader("")) { // put the path to your file here
while (!sr.EndOfStream) {
var line = sr.ReadLine();
if (String.IsNullOrEmpty(line)) continue;
if (line.IndexOf(keyword, StringComparison.CurrentCultureIgnoreCase) >= 0) {
Console.WriteLine(line);
}
}
}
As mentioned by @Rinecamo, try this code:
string toSearch = Console.ReadLine().Trim();
This line reads the user input and stores it; then iterate over each line of the file:
foreach (string line in System.IO.File.ReadAllLines(FILEPATH))
{
if(line.Contains(toSearch))
Console.WriteLine(line);
}
Replace FILEPATH with the absolute or relative path, e.g. ".\file2Read.txt".
How about something like this:
//We read all the lines from the file
IEnumerable<string> lines = File.ReadAllLines("your_file.txt");
//We read the input from the user
Console.Write("Enter the word to search: ");
string input = Console.ReadLine().Trim();
//We identify the matches. If the input is empty, then we return no matches at all
IEnumerable<string> matches = !String.IsNullOrEmpty(input)
? lines.Where(line => line.IndexOf(input, StringComparison.OrdinalIgnoreCase) >= 0)
: Enumerable.Empty<string>();
//If there are matches, we output them. If there are not, we show an informative message
Console.WriteLine(matches.Any()
? String.Format("Matches:\n> {0}", String.Join("\n> ", matches))
: "There were no matches");
This approach is simple and easy to read, it uses LINQ and String.IndexOf instead of String.Contains so we can do a case insensitive search.
To find text in a file you can use this approach. Put the following code inside static void Main(string[] args):
StreamReader oReader;
if (File.Exists(@"C:\TextFile.txt"))
{
Console.WriteLine("Enter a word to search");
string cSearforSomething = Console.ReadLine().Trim();
oReader = new StreamReader(@"C:\TextFile.txt");
string cColl = oReader.ReadToEnd();
string cCriteria = @"\b" + cSearforSomething + @"\b";
System.Text.RegularExpressions.Regex oRegex = new System.Text.RegularExpressions.Regex(cCriteria, RegexOptions.IgnoreCase);
int count = oRegex.Matches(cColl).Count;
Console.WriteLine(count.ToString());
}
Console.ReadLine();
I want to count the number of occurrences of some strings and store the counts in a CSV file. I've tried it, but I don't know if this is the correct way, and in addition there are two problems.
First of all, here is my method:
public void CountMacNames(String macName)
{
string path = @"D:\Counter\macNameCounter.csv";
if (!File.Exists(path))
{
File.Create(path).Close();
}
var lines = File.ReadLines(path);
foreach (var line in lines)
{
bool isExists = line.Split(',').Any(x => x == macName);
if (isExists)
{
// macName exists, increment its value by 1
}
else
{
// macName does not exist, add macName to CSV file and start counter at 1
var csv = new StringBuilder();
var newLine = string.Format("{0},{1}", macName, 1);
csv.AppendLine(newLine);
File.WriteAllText(path, csv.ToString());
}
}
}
The first problem is this IOException:
The process cannot access the file 'D:\Counter\macNameCounter.csv'
because it is being used by another process.
The second problem is that I don't know how to increment the value by one if a macName already exists in the CSV file (see the first comment).
EDIT: Example calls of the CountMacNames method:
CountMacNames("Cansas");
CountMacNames("Wellback");
CountMacNames("Newton");
CountMacNames("Cansas");
CountMacNames("Princet");
Then, the CSV file should contain:
Cansas, 2
Wellback, 1
Newton, 1
Princet, 1
OK, this is what I'd do:
public void CountMacNames(String macName)
{
string path = @"D:\Counter\macNameCounter.csv";
// Read all lines, but only if file exists
string[] lines = new string[0];
if (File.Exists(path))
lines = File.ReadAllLines(path);
// This is the new CSV file
StringBuilder newLines = new StringBuilder();
bool macAdded = false;
foreach (var line in lines)
{
string[] parts = line.Split(',');
if (parts.Length == 2 && parts[0].Equals(macName))
{
int newCounter = Convert.ToInt32(parts[1]) + 1;
newLines.AppendLine(String.Format("{0},{1}", macName, newCounter));
macAdded = true;
}
else
{
newLines.AppendLine(line.Trim());
}
}
if (!macAdded)
{
newLines.AppendLine(String.Format("{0},{1}", macName, 1));
}
File.WriteAllText(path, newLines.ToString());
}
This code does this:
Read all the lines from file only if it exists - otherwise we start a new file
Iterate over all the lines
If the first part of a 2-part line equals the mac, add 1 to counter and add line to output
If the first part doesn't match or the line format is wrong, add the line to output as is
If we didn't find the mac in any line, add a new line for the mac with counter 1
Write the file back
You can't read and write to the same file at the same time (in a simple way).
For small files, there are already answers.
If your file is really large (too big to fit in memory) you need another approach, sketched below:
Read the input file line by line
optionally modify the current line
write the line to a temporary file
when finished, delete the input file and rename the temporary file
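A minimal sketch of that temp-file approach (the file names and the pass-through "modify" step are illustrative assumptions):
string path = @"D:\Counter\macNameCounter.csv";
string tempPath = path + ".tmp"; // assumed name for the temporary file

using (var reader = new StreamReader(path))
using (var writer = new StreamWriter(tempPath))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // optionally modify the current line here before writing it out
        writer.WriteLine(line);
    }
}

// when finished, delete the input file and rename the temporary file
File.Delete(path);
File.Move(tempPath, path);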
For the first problem you can either read all the lines into memory, work on them there, and then write them all out again, or use streams.
using (FileStream fs = File.Open(filePath, FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
var sw = new StreamWriter(fs);
var sr = new StreamReader(fs);
while (!sr.EndOfStream)
{
var line = sr.ReadLine();
//Do stuff with line.
//...
if (macExists)
{
//Increment the number, Note that in here we can only replace characters,
//We can't insert extra characters unless we rewrite the rest of the file
//Probably more hassle than it's worth but
//You could have a fixed number of characters like 000001 or 1
//Read the number as a string,
//Int.Parse to get the number
//Increment it
//work out the number of bytes in the line.
//get the stream position
//seek back to the beginning of the line
//Overwrite the whole line with the same number of bytes.
}
else
{
//Append a line, also harder to do with streams like this.
//Store the current position,
//Seek to the end of the file,
//WriteLine
//Seek back again.
}
}
}
You need to read the file in and release it, like this, to avoid the IO exception:
string[] lines = null;
using (var sr = new System.IO.StreamReader(path))
lines = sr.ReadToEnd().Split(new string[] {"\r", "\n"}, StringSplitOptions.RemoveEmptyEntries);
As for the count, you can just return an int value; change the method's return type to int, too.
public int CountMacNames(String macName, String path)
{
if (!File.Exists(path))
{
File.Create(path).Close();
}
string[] lines = null;
using (var sr = new System.IO.StreamReader(path))
lines = sr.ReadToEnd().Split(new string[] {"\r", "\n"}, StringSplitOptions.RemoveEmptyEntries);
return lines.Where(p => p.Split(',').Contains(macName)).Count();
}
and inside the method that calls it:
var path = @"<PATH TO FILE>";
var cnt = CountMacNames("Canvas", path);
if (cnt > 0)
{
using (var sw = new StreamWriter(path, true, Encoding.Unicode))
sw.WriteLine(string.Format("Canvas,{0}", cnt));
}
Now, var res = CountMacNames("Canvas","PATH"); will return 2, and the lines "Canvas,2" or "Newton,1" will be appended to the file, without overwriting it.
I am using a StreamReader to search for a record in a text file and then display that record in the console window. It searches the text file by question number (1-50). The only number it works with is question 1: it won't display any other question, but it displays question 1 perfectly. Here is the section of the code where the problem lies.
static void Amending(QuestionStruct[] _Updating)
{
string NumberSearch;
bool CustomerNumberMatch = false;
Console.Clear();
begin:
try
{
Console.Write("\t\tPlease enter the question number you want to append: ");
NumberSearch = Console.ReadLine();
Console.ReadKey();
}
catch
{
Console.WriteLine("Failed. Please try again.");
goto begin;
}
//finding the question number to replace
while (!CustomerNumberMatch)
{
var pathToCust = @"..\..\..\Files\questions.txt";
using (StreamReader sr = new StreamReader(pathToCust, true))
{
RecCount = 0;
questions[RecCount].QuestionNum = sr.ReadLine();
if (questions[RecCount].QuestionNum == NumberSearch)
{
Console.Clear();
Console.WriteLine("Question Number: {0}", questions[RecCount].QuestionNum);
questions[RecCount].Level = sr.ReadLine();
Console.WriteLine("Level: {0}", questions[RecCount].Level);
questions[RecCount].Question = sr.ReadLine();
Console.WriteLine("Question: {0}", questions[RecCount].Question);
questions[RecCount].answer = sr.ReadLine();
Console.WriteLine("Answer: {0}", questions[RecCount].answer);
CustomerNumberMatch = true;
}
RecCount++;
//sr.Dispose();
}
}
You are reopening the questions.txt file each time around your while loop, so it will start from the beginning of the file each time and never get past the first line (unless you are looking for question 1, when it will read the question detail lines). Instead, you need to loop inside of the using statement:
var pathToCust = @"..\..\..\Files\questions.txt";
using (StreamReader sr = new StreamReader(pathToCust, true))
{
while (!CustomerNumberMatch)
{
RecCount = 0;
questions[RecCount].QuestionNum = sr.ReadLine();
if (questions[RecCount].QuestionNum == NumberSearch)
{
Console.Clear();
Console.WriteLine("Question Number: {0}", questions[RecCount].QuestionNum);
questions[RecCount].Level = sr.ReadLine();
Console.WriteLine("Level: {0}", questions[RecCount].Level);
questions[RecCount].Question = sr.ReadLine();
Console.WriteLine("Question: {0}", questions[RecCount].Question);
questions[RecCount].answer = sr.ReadLine();
Console.WriteLine("Answer: {0}", questions[RecCount].answer);
CustomerNumberMatch = true;
}
RecCount++;
}
}
Something still seems a little off with the code - e.g. you are only populating the question properties when you find a match with NumberSearch, but hopefully this gets you closer.
Code:
static void MultipleFilesToSingleFile(string dirPath, string filePattern, string destFile)
{
string[] fileAry = Directory.GetFiles(dirPath, filePattern);
Console.WriteLine("Total File Count : " + fileAry.Length);
using (TextWriter tw = new StreamWriter(destFile, true))
{
foreach (string filePath in fileAry)
{
using (TextReader tr = new StreamReader(filePath))
{
tw.WriteLine(tr.ReadToEnd());
tr.Close();
tr.Dispose();
}
Console.WriteLine("File Processed : " + filePath);
}
tw.Close();
tw.Dispose();
}
}
I need to optimize this as it's extremely slow: it takes 3 minutes for 45 XML files of average size 40-50 MB.
Please note: 45 files averaging 45 MB is just one example; it can be n files of size m, where n is in the thousands and m can average 128 KB. In short, it can vary.
Could you please provide any suggestions for optimization?
General answer
Why not just use the Stream.CopyTo(Stream destination) method?
private static void CombineMultipleFilesIntoSingleFile(string inputDirectoryPath, string inputFileNamePattern, string outputFilePath)
{
string[] inputFilePaths = Directory.GetFiles(inputDirectoryPath, inputFileNamePattern);
Console.WriteLine("Number of files: {0}.", inputFilePaths.Length);
using (var outputStream = File.Create(outputFilePath))
{
foreach (var inputFilePath in inputFilePaths)
{
using (var inputStream = File.OpenRead(inputFilePath))
{
// Buffer size can be passed as the second argument.
inputStream.CopyTo(outputStream);
}
Console.WriteLine("The file {0} has been processed.", inputFilePath);
}
}
}
Buffer size adjustment
Please note that the mentioned method is overloaded.
There are two method overloads:
CopyTo(Stream destination).
CopyTo(Stream destination, int bufferSize).
The second method overload provides the buffer size adjustment through the bufferSize parameter.
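For example, a sketch of the copy loop above using the second overload (the 81920-byte buffer size is only an illustrative value, not something from the original answer):
using (var inputStream = File.OpenRead(inputFilePath))
{
    // Pass the buffer size as the second argument.
    inputStream.CopyTo(outputStream, 81920);
}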
One option is to utilize the copy command and let it do what it does well.
Something like:
static void MultipleFilesToSingleFile(string dirPath, string filePattern, string destFile)
{
var cmd = new ProcessStartInfo("cmd.exe",
String.Format("/c copy {0} {1}", filePattern, destFile));
cmd.WorkingDirectory = dirPath;
cmd.UseShellExecute = false;
Process.Start(cmd);
}
I would use a BlockingCollection so you can read and write concurrently.
Clearly you should write to a separate physical disk to avoid hardware contention.
This code will preserve order.
Reading is going to be faster than writing, so there is no need for parallel reads.
Again, since reading is faster, limit the size of the collection so reading does not get further ahead of writing than it needs to.
A simple task that reads the next file in parallel while writing the current one has the problem of differing file sizes: writing a small file is faster than reading a big one.
I use this pattern to read and parse text on T1 and then insert it into SQL on T2.
public void WriteFiles()
{
using (BlockingCollection<string> bc = new BlockingCollection<string>(10))
{
// play with 10 if you have several small files then a big file
// write can get ahead of read if not enough are queued
TextWriter tw = new StreamWriter(@"c:\temp\alltext.text", true);
// clearly you want to write to a different physical disk
// ideally write to solid state even if you move the files to regular disk when done
// Spin up a Task to populate the BlockingCollection
using (Task t1 = Task.Factory.StartNew(() =>
{
string dir = @"c:\temp\";
string fileText;
int minSize = 100000; // play with this
StringBuilder sb = new StringBuilder(minSize);
string[] fileAry = Directory.GetFiles(dir, @"*.txt");
foreach (string fi in fileAry)
{
Debug.WriteLine("Add " + fi);
fileText = File.ReadAllText(fi);
//bc.Add(fi); for testing just add filepath
if (fileText.Length > minSize)
{
if (sb.Length > 0)
{
bc.Add(sb.ToString());
sb.Clear();
}
bc.Add(fileText); // could be really big so don't hit sb
}
else
{
sb.Append(fileText);
if (sb.Length > minSize)
{
bc.Add(sb.ToString());
sb.Clear();
}
}
}
if (sb.Length > 0)
{
bc.Add(sb.ToString());
sb.Clear();
}
bc.CompleteAdding();
}))
{
// Spin up a Task to consume the BlockingCollection
using (Task t2 = Task.Factory.StartNew(() =>
{
string text;
try
{
while (true)
{
text = bc.Take();
Debug.WriteLine("Take " + text);
tw.WriteLine(text);
}
}
catch (InvalidOperationException)
{
// An InvalidOperationException means that Take() was called on a completed collection
Debug.WriteLine("That's All!");
tw.Close();
tw.Dispose();
}
}))
Task.WaitAll(t1, t2);
}
}
}
BlockingCollection Class
I tried the solution posted by sergey-brunov for merging a 2 GB file. The system took around 2 GB of RAM for this work. I have made some changes for more optimization, and it now takes 350 MB of RAM to merge a 2 GB file.
private static void CombineMultipleFilesIntoSingleFile(string inputDirectoryPath, string inputFileNamePattern, string outputFilePath)
{
string[] inputFilePaths = Directory.GetFiles(inputDirectoryPath, inputFileNamePattern);
Console.WriteLine("Number of files: {0}.", inputFilePaths.Length);
foreach (var inputFilePath in inputFilePaths)
{
using (var outputStream = File.AppendText(outputFilePath))
{
outputStream.WriteLine(File.ReadAllText(inputFilePath));
Console.WriteLine("The file {0} has been processed.", inputFilePath);
}
}
}
Several things you can do:
In my experience, the default buffer sizes can be increased with noticeable benefit up to about 120K. I suspect setting a large buffer on all streams will be the easiest and most noticeable performance booster:
new System.IO.FileStream("File.txt", System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read, 150000);
Use the Stream class, not the StreamReader class.
Read contents into a large buffer and dump them into the output stream at once; this will speed up operations on small files.
There is no need for the redundant Close/Dispose calls: you have the using statement.
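Putting those points together, a rough sketch (the 150000-byte buffer mirrors the size suggested above; filePath and destFile are assumed to come from the surrounding method):
byte[] buffer = new byte[150000]; // large buffer, as suggested above
using (var input = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read, buffer.Length))
using (var output = new FileStream(destFile, FileMode.Append, FileAccess.Write, FileShare.None, buffer.Length))
{
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}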
// Binary File Copy
public static void mergeFiles(string strFileIn1, string strFileIn2, string strFileOut, out string strError)
{
strError = String.Empty;
try
{
using (FileStream streamIn1 = File.OpenRead(strFileIn1))
using (FileStream streamIn2 = File.OpenRead(strFileIn2))
using (FileStream writeStream = File.OpenWrite(strFileOut))
{
// create a buffer to hold the bytes. Might be bigger.
byte[] buffer = new Byte[1024];
int bytesRead;
// while the read method returns bytes keep writing them to the output stream
while ((bytesRead = streamIn1.Read(buffer, 0, 1024)) > 0)
{
writeStream.Write(buffer, 0, bytesRead);
}
while ((bytesRead = streamIn2.Read(buffer, 0, 1024)) > 0)
{
writeStream.Write(buffer, 0, bytesRead);
}
}
}
catch (Exception ex)
{
strError = ex.Message;
}
}