Add new line to a text file in loop statement - c#

I created a loop that takes user names from text boxes. I would like to put the names into a text file, adding a new line each time. It's not working: it keeps overwriting the previous name. I know how to add a new line to a text file, but inside the loop it does not work.
Here's my code:
for (int i = 0; i < txt_user.Length; i++)
{
    File.WriteAllText(@"C:\mail\users.txt", txt_user[i].Text + Environment.NewLine);
}
Here is sample code outside the loop that writes a new line, and it works:
File.WriteAllText(@"C:\mail\users.txt", txt_user1.Text + Environment.NewLine + "abc");

You're close: there are File.AppendAllText() and File.AppendText(). You could also collect all lines in memory first and use File.AppendAllLines() (if you have enough RAM to hold all the lines).
WriteAllText() will write a new file or overwrite an existing one.
This will work well enough for smaller files, since the OS may apply some caching strategies. However, doing that in a loop for very large files may not be efficient. You should have a look at FileStream then.
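For example, the loop from the question could simply switch to File.AppendAllText (a sketch; txt_user is assumed to be the array of TextBox controls from the question):
for (int i = 0; i < txt_user.Length; i++)
{
    // AppendAllText adds to the end of the file and creates it if it does not exist yet.
    File.AppendAllText(@"C:\mail\users.txt", txt_user[i].Text + Environment.NewLine);
}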

If you have too many entries, it's better to use this code:
using (StreamWriter sw = File.CreateText(@"C:\mail\users.txt"))
{
    for (int i = 0; i < txt_user.Length; i++)
    {
        sw.WriteLine(txt_user[i].Text);
    }
}

This will open the file once, write lines to the text file as it enumerates them, and then close the file. It doesn't do multiple opens, nor does it build up a large string in the process, which likely makes it the most I/O- and memory-efficient of the answers given so far.
File.WriteAllLines(@"C:\mail\users.txt",
    Enumerable
        .Range(0, txt_user.Length)
        .Select(i => txt_user[i].Text));

Use File.AppendText if you want to add to an existing file.
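For instance, a sketch along the same lines as the question's loop (txt_user is again assumed to be the TextBox array):
using (StreamWriter sw = File.AppendText(@"C:\mail\users.txt"))
{
    for (int i = 0; i < txt_user.Length; i++)
    {
        // File.AppendText opens the file for appending, creating it if needed.
        sw.WriteLine(txt_user[i].Text);
    }
}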

I recommend using a StringBuilder:
var builder = new StringBuilder();
for (int i = 0; i < txt_user.Length; i++)
    builder.AppendLine(txt_user[i].Text);
File.WriteAllText(@"C:\mail\users.txt", builder.ToString());

Instead of opening and closing the file all the time (because that is what File.WriteAllText does on every call), prepare the text with a StringBuilder and write it to the file in one go. Since the text comes from text boxes, I assume that the resulting string will not be very long anyway.
var sb = new StringBuilder();
for (int i = 0; i < txt_user.Length; i++)
{
    sb.AppendLine(txt_user[i].Text);
}
File.WriteAllText(@"C:\mail\users.txt", sb.ToString());

Related

how to access and write each word in string array read from a file onto a new file in c#?

My testerfile contains:
processes
deleting
agreed
And this is the code in C#:
PorterStemmer testing = new PorterStemmer();
string temp, stemmed;
string[] lines = System.IO.File.ReadAllLines(@"C:\Users\PJM\Documents\project\testerfile.txt");
System.Console.WriteLine("Contents of testerfile.txt = ");
for (int i = 0; i < 2; i++)
{
    temp = lines[i];
    stemmed = testing.StemWord(temp);
    System.IO.File.WriteAllText(@"C:\Users\PJM\Documents\project\testerfile3.txt", stemmed);
    Console.WriteLine("\t" + stemmed);
}
After running the code, testerfile3 contains only "agre".
So my problem here is that I want each word in the string array to be processed separately, i.e. I am having a problem accessing the string array. Is there any way to access every index in the string array?
From the documentation of WriteAllText:
If the target file already exists, it is overwritten.
so each iteration in your for loop overwrites the file, and you're only left with the text from the last iteration.
You can use System.IO.File.AppendAllText instead.
Also, you can use the array's Length property to loop through all the words: for (int i = 0; i < lines.Length; i++)
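Put together, a sketch of the corrected loop (reusing the PorterStemmer instance and StemWord call from the question) would be:
for (int i = 0; i < lines.Length; i++)
{
    string stemmed = testing.StemWord(lines[i]);
    // AppendAllText adds to the file instead of overwriting it on every iteration.
    System.IO.File.AppendAllText(@"C:\Users\PJM\Documents\project\testerfile3.txt", stemmed + Environment.NewLine);
    Console.WriteLine("\t" + stemmed);
}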
Alternatively, instead of the for-loop you can use LINQ's Select to project the non-stemmed line to the stemmed one and use AppendAllLines to write the results:
System.IO.File.AppendAllLines(@"C:\Users\PJM\Documents\project\testerfile3.txt", lines.Select(l => testing.StemWord(l)));

C# - Reading Text Files (System.IO)

I would like to consecutively read from a text file that is generated by my program. The problem is that after parsing the file for the first time, my program reads the last line of the file before it can begin re-parsing, which causes it to accumulate unwanted data.
Three screenshots were attached: the first shows creating the tournament and the points, the second shows the text file, and the third shows that TeamA got 3 more points.
StreamReader rd = new StreamReader("Torneios.txt");
torneios = 0;
while (!rd.EndOfStream)
{
    string line = rd.ReadLine();
    if (line == "Tournament")
    {
        torneios++;
    }
    else
    {
        string[] arr = line.Split('-');
        equipaAA = arr[0];
        equipaBB = arr[1];
        res = Convert.ToChar(arr[2]);
    }
}
rd.Close();
That is what I'm using at the moment.
To avoid mistakes like these, I highly recommend using File.ReadAllText or File.ReadAllLines, unless you are working with large files (in which case they are not good choices). Here is an example of such an implementation:
string result = File.ReadAllText("textfilename.txt");
Regarding your particular code, an example using File.ReadAllLines which achieves this is:
string[] lines = File.ReadAllLines("textfilename.txt");
for (int i = 0; i < lines.Length; i++)
{
    string line = lines[i];
    // Do whatever you want here
}
Just to make it clear, this is not a good idea if the files you intend to read from are large (such as binary files).
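Applied to the code in the question, a rough sketch (keeping the variable names torneios, equipaAA, equipaBB and res from the question) might look like this:
string[] lines = File.ReadAllLines("Torneios.txt");
torneios = 0;
foreach (string line in lines)
{
    if (line == "Tournament")
    {
        torneios++;
    }
    else
    {
        string[] arr = line.Split('-');
        equipaAA = arr[0];
        equipaBB = arr[1];
        res = Convert.ToChar(arr[2]);
    }
}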

Insert text into large existing file

I have a file that contains about 2000 lines of text that I need to add a few lines to. My initial solution was to just copy the existing file to a new file and then add a few lines at the end of it. That was until I realized that the last line in the file always has to stay the last line. So now I need to add my new lines before that one line of text. I know that I can just read the entire file, keep it in my program and then write everything to a new file with my extra lines included. But since the file has that many lines, I wanted to know if there is a better way to do it.
You will need to copy it into a new file. There's no way to inject data into the middle of a file, unfortunately.
However, you don't have to load it into memory to do that. You can use a StreamReader and read only one line at a time, or better yet, the System.IO.File.ReadLines method.
int newLineIndex = 100;
string newLineText = "Here's the new line";
using (var writer = new StreamWriter(outputFileName))
{
    int lineNumber = 0;
    foreach (var line in File.ReadLines(inputFileName))
    {
        // Write the newline separator before every line except the first.
        if (lineNumber > 0)
        {
            writer.WriteLine();
        }
        if (lineNumber == newLineIndex)
        {
            writer.WriteLine(newLineText);
        }
        writer.Write(line);
        lineNumber++;
    }
}
Of course, this becomes substantially easier if you're comfortable assuming that the new line will always go at index zero. If that's the case, I'd be tempted to forgo much of this, and just go with a simple Stream.CopyTo after writing the first line. But this should still work.
string newLineText = "Here's the new line";
using (var writer = new StreamWriter(outputFileName))
using (var reader = File.OpenRead(inputFileName))
{
    writer.WriteLine(newLineText);
    // Flush so the buffered line hits the stream before copying directly to the base stream.
    writer.Flush();
    reader.CopyTo(writer.BaseStream);
}
Of course, there are any number of ways to perform this, with different trade-offs. This is just one option.
"I know that I can just read the entire file, save it to my program and then write everything to a new file with my extra lines included."
Not everything needs to be written. Just write the inserted lines and the lines after them back to the original file, starting from the position (byte index) where the insertion begins.
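A rough sketch of that idea, under the assumptions that the file is UTF-8 without a BOM, has no trailing newline after the last line, and that the new lines go right before the existing last line (the path and inserted text are placeholders):
using System.IO;
using System.Text;

string path = "existing.txt";                       // placeholder path
string[] inserted = { "new line 1", "new line 2" }; // placeholder content

string[] lines = File.ReadAllLines(path);
string lastLine = lines[lines.Length - 1];

using (var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite))
{
    // Seek to the byte index where the last line starts.
    fs.Seek(fs.Length - Encoding.UTF8.GetByteCount(lastLine), SeekOrigin.Begin);
    using (var writer = new StreamWriter(fs, new UTF8Encoding(false)))
    {
        foreach (var line in inserted)
            writer.WriteLine(line);
        writer.Write(lastLine);
    }
}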

How to read large text file and break it into batches for processing?

I have a large text file that contains GUIDs that I will use to load into the custom application that I am trying to create. Since the file is so large (it may contain millions of lines of GUIDs), I want to break it into parts, process each part, and then move on to the next part until the end of the file.
Example of text file
ASDFSADFJO23490234AJSDFKL
JOGIJO349230420GJDGJDO230
BJCIOJDFOBJOD239402390423
JFWEIOJFOWE2390423901230N
3490FJSDOFOIWEMO23MOFI23O
FJWEIOFJWEIOFJOI23J230022
Let's just say the text file has 99,000 lines and I want to process the first 10,000 values (and repeat until the end). I will create a new folder for the first batch of 10,000, using something like DateTime.Now as the folder name. Then a file will be created for each of the 10,000 values, using the value as the file name. After the first 10,000 values are done, I will create a new folder using DateTime.Now again and move on to the next 10,000 values in the text file. Repeat until the end of the file.
I am able to read the text file, create a folder named with DateTime.Now, and create the files with the appropriate names, but I do not know how to batch process the list of values from the text file.
This is how I am reading the file.
string[] source = File.ReadAllLines(@"C:\guids.txt");
I tried to use the Skip/Take method, and I think it works, but I just do not know how to create a new folder and add the new subset to it. Any help will be greatly appreciated. I am open to suggestions and can clarify if you need more details. Thanks!!
From the comments, I deduce that your problem is not in fact "how do I batch the reads from guid.txt?", but "how do I process these guids and create files in groups of ten thousand in separate folders?".
With this in mind, here's an example of how you could do that.
var batchSize = 10000;
var source = File.ReadLines(@"C:\guids.txt");
var i = 0;
var currentDirPath = "";
foreach (var line in source)
{
    if (i % batchSize == 0)
    {
        currentDirPath = Path.GetRandomFileName();
        Directory.CreateDirectory(currentDirPath);
    }
    var newFile = Path.Combine(currentDirPath, line + ".txt");
    File.WriteAllText(newFile, "Some content");
    i++;
}
Avoid using DateTime for file or folder names. The odds that some unforeseen behavior makes your code try to write to a file that already exists is just too high.
EDIT: About parallelism: use it only if you need it. It is always more complex than it seems, and it has a tendency to introduce hard-to-find bugs. That being said, here is an untested idea.
// Make sure the current folder is empty, otherwise the folders are very likely to already exist.
if (Directory.GetFiles(Directory.GetCurrentDirectory()).Any())
{
    throw new IOException("Current directory is not empty.");
}
var batchSize = 10000;
var source = File.ReadAllLines(@"C:\guids.txt");
// Create the folders synchronously to avoid race conditions.
var batchCount = (source.Length / batchSize) + 1;
for (int i = 0; i < batchCount; i++)
{
    Directory.CreateDirectory(i.ToString());
}
source.AsParallel().ForAll(line =>
{
    var folder = ((int)(Array.IndexOf(source, line) / batchSize)).ToString();
    var newFile = Path.Combine(folder, line + ".txt");
    File.WriteAllText(newFile, "Some content");
});

C# Best way to parse flat file with dynamic number of fields per row

I have a flat file that is pipe delimited and looks something like this as example
ColA|ColB|3*|Note1|Note2|Note3|2**|A1|A2|A3|B1|B2|B3
The first two columns are set and will always be there.
* denotes a count for how many repeating fields there will be following that count so Notes 1 2 3
** denotes a count for how many times a block of fields are repeated and there are always 3 fields in a block.
This is per row, so each row may have a different number of fields.
Hope that makes sense so far.
I'm trying to find the best way to parse this file, any suggestions would be great.
The goal at the end is to map all these fields into a few different files - data transformation. I'm actually doing all this within SSIS, but figured the default components won't be good enough, so I need to write my own code.
UPDATE: I'm essentially trying to read this like a source file, do some lookups and string manipulation on some of the fields in between, and spit out several different files, like in any normal file-to-file transformation SSIS package.
Using the above example, I may want to create a new file that ends up looking like this
"ColA","HardcodedString","Note1CRLFNote2CRLF","ColB"
And then another file
Row1: "ColA","A1","A2","A3"
Row2: "ColA","B1","B2","B3"
So I guess I'm after some ideas on how to parse this, as well as how to store the data (in Stacks, Lists, or something else) to work with and write out later.
One possibility would be to use a stack. First you split the line by the pipes.
// Reverse the split fields so that Pop() returns them in left-to-right order.
var stack = new Stack<string>(line.Split('|').Reverse());
Then you pop the first two from the stack to get them out of the way.
stack.Pop();
stack.Pop();
Then you parse the next element: 3*. For that you pop the next 3 items from the stack. With 2** you pop the next 2 x 3 = 6 items from the stack, and so on. You can stop as soon as the stack is empty.
while (stack.Count > 0)
{
    // Parse elements like 3*
}
Hope this is clear enough. I find this article very useful when it comes to String.Split().
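Putting the pieces together, a minimal sketch of that approach (assuming every count marker is a number followed by * for single repeating fields or ** for blocks of three, as in the example row) could look like this:
using System;
using System.Collections.Generic;
using System.Linq;

string line = "ColA|ColB|3*|Note1|Note2|Note3|2**|A1|A2|A3|B1|B2|B3";

// Reverse so that Pop() returns the fields in left-to-right order.
var stack = new Stack<string>(line.Split('|').Reverse());

string colA = stack.Pop();
string colB = stack.Pop();

var notes = new List<string>();
var blocks = new List<string[]>();

while (stack.Count > 0)
{
    string marker = stack.Pop();
    if (marker.EndsWith("**"))
    {
        // "2**": that many blocks of three fields each.
        int blockCount = int.Parse(marker.TrimEnd('*'));
        for (int b = 0; b < blockCount; b++)
        {
            blocks.Add(new[] { stack.Pop(), stack.Pop(), stack.Pop() });
        }
    }
    else if (marker.EndsWith("*"))
    {
        // "3*": that many single repeating fields (the notes).
        int noteCount = int.Parse(marker.TrimEnd('*'));
        for (int n = 0; n < noteCount; n++)
        {
            notes.Add(stack.Pop());
        }
    }
}

Console.WriteLine($"{colA}, {colB}, {notes.Count} notes, {blocks.Count} blocks");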
Something similar to below should work (this is untested)
// Example row: ColA|ColB|3*|Note1|Note2|Note3|2**|A1|A2|A3|B1|B2|B3
string[] columns = line.Split('|');
List<string> repeatingColumnNames = new List<string>();
List<List<string>> repeatingFieldValues = new List<List<string>>();
if (columns.Length > 2)
{
    // columns[2] holds the "3*" marker: how many repeating fields follow.
    int repeatingFieldCount = int.Parse(columns[2].TrimEnd('*'));
    int repeatingFieldStartIndex = 3;
    for (int i = 0; i < repeatingFieldCount; i++)
    {
        repeatingColumnNames.Add(columns[repeatingFieldStartIndex + i]);
    }

    // The next column holds the "2**" marker: how many blocks of 3 fields follow.
    int repeatingFieldSetCountIndex = repeatingFieldStartIndex + repeatingFieldCount;
    int repeatingFieldSetCount = int.Parse(columns[repeatingFieldSetCountIndex].TrimEnd('*'));
    int repeatingFieldSetStartIndex = repeatingFieldSetCountIndex + 1;
    const int fieldsPerSet = 3; // a block always has 3 fields
    for (int i = 0; i < repeatingFieldSetCount; i++)
    {
        string[] fieldSet = new string[fieldsPerSet];
        for (int j = 0; j < fieldsPerSet; j++)
        {
            fieldSet[j] = columns[repeatingFieldSetStartIndex + j + (i * fieldsPerSet)];
        }
        repeatingFieldValues.Add(new List<string>(fieldSet));
    }
}
System.IO.File.ReadAllLines("File.txt").Select(line => line.Split(new[] {'|'}))
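As a usage sketch of that one-liner (assuming File.txt holds rows in the pipe-delimited layout shown above):
using System;
using System.Linq;

var rows = System.IO.File.ReadAllLines("File.txt")
                         .Select(line => line.Split(new[] { '|' }));
foreach (var fields in rows)
{
    // fields[0] is ColA, fields[1] is ColB; the rest follow the count markers.
    Console.WriteLine(string.Join(", ", fields));
}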
