Best way of creating an IEnumerable<MyObj> from CSV file input? - c#

I've got some large csv files that I need to import into IEnumerable (prob a list) so that I can do some "magic" on them before saving into a db. I don't need every value (column) from the csv.
However, I can't find a better alternative than this:
Read csv file by line
Split the line on ,
new MyObj{
Prop1 = split[0],
Prop2 = split[1],
Prop3 = split[6],
Prop4 = split[7],
Prop5 = split[9]
}
Add new MyObj to List
This works and is quick enough, but it seems very clunky.
Is there an alternative (other than adding a ctor, which achieves the same as above)?

You can use a CSV parser - there is one in the Microsoft.VisualBasic.FileIO namespace - the TextFieldParser.
FileHelpers is another popular option, and there are many free ones around (just search).
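For illustration, here is a minimal TextFieldParser sketch (the MyObj shape and column indices are copied from the question above; string properties are an assumption):
using System.Collections.Generic;
using Microsoft.VisualBasic.FileIO; // add a reference to Microsoft.VisualBasic

public static IEnumerable<MyObj> ReadMyObjs(string path)
{
    using (var parser = new TextFieldParser(path))
    {
        parser.TextFieldType = FieldType.Delimited;
        parser.SetDelimiters(",");
        parser.HasFieldsEnclosedInQuotes = true; // copes with commas inside quoted fields

        while (!parser.EndOfData)
        {
            string[] split = parser.ReadFields();
            yield return new MyObj
            {
                Prop1 = split[0],
                Prop2 = split[1],
                Prop3 = split[6],
                Prop4 = split[7],
                Prop5 = split[9]
            };
        }
    }
}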

You may find the FileHelpers library useful.

Your code looks fine as long as you are sure that there will never be a comma in the data.
There are many csv readers around on the net that are more robust: here's one

I have a very similar application that performs the same kind of translation: we read all the lines into an IDictionary and then perform the new-object creation using data parallelism, Parallel.ForEach(sourceCollection, item => Process(item.Key, item.Value)). On any error we log the Key, which is the row number.
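A rough sketch of that approach (MyObj and the column mapping are placeholders, and the file name is assumed):
// requires: System, System.Collections.Concurrent, System.IO, System.Linq, System.Threading.Tasks
var source = File.ReadAllLines("input.csv")
    .Select((line, index) => new { Key = index + 1, Line = line })
    .ToDictionary(x => x.Key, x => x.Line);

var results = new ConcurrentBag<MyObj>();
Parallel.ForEach(source, item =>
{
    try
    {
        var split = item.Value.Split(',');
        results.Add(new MyObj { Prop1 = split[0], Prop2 = split[1] });
    }
    catch (Exception ex)
    {
        // the Key is the row number, as described above
        Console.Error.WriteLine("Row {0} failed: {1}", item.Key, ex.Message);
    }
});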

Related

Get certain row by searching for a string

I am very new to C# and am trying to feel it out. Slow going so far! What I am trying to achieve should be relatively simple; I want to read a row from a CSV file with a search. I.e. if I search for username "Toby" it would fetch the entire row, preferably as an array.
Here is my users.csv file:
Id,Name,Password
1,flugs,password
2,toby,foo
I could post the code that I've tried, but I haven't even come close in previous attempts. It's a bit easier to do such a thing in Python; it may be easy in C# too, but I'm far too new to know!
Does anyone have any ideas as to how I should approach/code this? Many thanks.
Easy to do in c# too:
var lineAsArray = File.ReadLines("path").First(s => s.Contains(",toby,")).Split(',');
If you want case insens, use e.g. Contains(",toby,", StringComparison.OrdinalIgnoreCase)
If your user is going to type in "Toby", you can either concatenate a comma onto the start/end of it to follow this simplistic searching (which will find Toby anywhere on the line), or you can split the line first and look to see if the second element is Toby:
var lineAsArray = File.ReadLines("path").Select(s => s.Split(',')).First(a => a[1].Equals("toby"));
To make this one case insensitive, put a suitable StringComparison argument into the Equals using the same approach as above
Sky's the limit with how involved you want to get with it; using a library that parses CSV to objects that represent your lines with named, typed properties is probably where I'd stop... take a look at CsvHelper from Josh Close or ServiceStack.Text, though there's no shortage of CSV parser libs - it's been done to death!
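For illustration, a sketch with CsvHelper (recent versions of its CsvReader take a CultureInfo; the User class below just mirrors the columns in users.csv):
// using System; using System.Globalization; using System.IO; using System.Linq; using CsvHelper;
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Password { get; set; }
}

using (var reader = new StreamReader("users.csv"))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
    var toby = csv.GetRecords<User>()
        .First(u => u.Name.Equals("toby", StringComparison.OrdinalIgnoreCase));
}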

C# - Comparing two CSV Files and giving an output

Need a bit of help. I have two sources of information, and the information is exported to two different CSV files by different programs. They are supposed to include the same information; however, that is exactly what needs to be checked.
Therefore what I would like to do is as follows:
Take the information from the two files.
Compare
Output any differences and which file the difference was in. (e.g File A Contained this, but File B did not and vice versa).
The files are 200,000-odd rows each, so this will need to be as efficient as possible.
I tried doing this with Excel, but it proved to be too complicated, and I'm really struggling to find a way to do it programmatically.
Assuming that the files are really supposed to be identical, right down to text qualifiers, ordering of rows, and number of rows contained in each file, the simplest approach may be to simply iterate through both files together and compare each line.
using (StreamReader f1 = new StreamReader(path1))
using (StreamReader f2 = new StreamReader(path2)) {
var differences = new List<string>();
int lineNumber = 0;
while (!f1.EndOfStream) {
if (f2.EndOfStream) {
differences.Add("Differing number of lines - f2 has less.");
break;
}
lineNumber++;
var line1 = f1.ReadLine();
var line2 = f2.ReadLine();
if (line1 != line2) {
differences.Add(string.Format("Line {0} differs. File 1: {1}, File 2: {2}", lineNumber, line1, line2));
}
}
if (!f2.EndOfStream) {
differences.Add("Differing number of lines - f1 has less.");
}
}
Depending on your answers to the comments on your question, if it doesn't really need to be done with code, you could do worse than download a compare tool, which is likely to be more sophisticated (WinMerge, for example).
OK, for anyone else who googles this and finds it, here is what my answer was.
I exported the details to CSV and ordered them numerically when they were exported, for ease of use. Once they were exported as two CSV files, I used a program called Beyond Compare, which can be found here. This allows the files to be compared.
At first I used Beyond Compare manually to test that what I was exporting was correct, etc. However, Beyond Compare can also be driven from the command line to run comparisons. Everything is then done programmatically, and all that remains is for a user to view the results in Beyond Compare. You may be able to export them to another CSV; I haven't looked, as the GUI of Beyond Compare is very nice and useful, so it is easier to use that.

Linq To Text Files

I have a Text File (Sorry, I'm not allowed to work on XML files :(), and it includes customer records. Each text file looks like:
Account_ID: 98734BLAH9873
User Name: something_85
First Name: ILove
Last Name: XML
Age: 209
etc... And I need to be able to use LINQ to get the data from these text files and just store them in memory.
I have seen Linq to SQL, Linq to BLAH, and many others, but nothing for Linq to Text. Can someone please help me out a bit?
Thank you
You can use code like this:
var pairs = File.ReadAllLines("filename.txt")
.Select(line => line.Split(':'))
.ToDictionary(cells => cells[0].Trim(), cells => cells[1].Trim());
Or use the .NET 4.0 File.ReadLines() method, which returns an IEnumerable<string> and is useful for processing big text files.
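For instance, the same projection over the streaming API, which avoids buffering every line into an array before the Select runs:
var pairs = File.ReadLines("filename.txt")
    .Select(line => line.Split(':'))
    .ToDictionary(cells => cells[0].Trim(), cells => cells[1].Trim());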
The concept of a text file data source is extremely broad (consider that XML is stored in text files). For that reason, I think it is unlikely that such a beast exists.
It should be simple enough to read the text file into a collection of Account objects and then use LINQ-to-Objects.
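Something along these lines, assuming one record per file and the "Key: Value" layout shown in the question (Account and its property names are illustrative):
// requires: System, System.Collections.Generic, System.IO, System.Linq
public class Account
{
    public string AccountId { get; set; }
    public string UserName { get; set; }
    public int Age { get; set; }
}

static Account ReadAccount(string path)
{
    // one record per file, fields laid out as "Key: Value" lines
    var fields = File.ReadLines(path)
        .Select(line => line.Split(new[] { ':' }, 2))
        .Where(parts => parts.Length == 2)
        .ToDictionary(parts => parts[0].Trim(), parts => parts[1].Trim());

    return new Account
    {
        AccountId = fields["Account_ID"],
        UserName = fields["User Name"],
        Age = int.Parse(fields["Age"])
    };
}

// then plain LINQ-to-Objects over a folder of such files:
// var adults = Directory.GetFiles(folder, "*.txt").Select(ReadAccount).Where(a => a.Age >= 18);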
FileHelpers is a really great open source solution to this:
http://filehelpers.sourceforge.net/
You just declare a class with attributes, and FileHelpers reads the flat file for you:
[FixedLengthRecord]
public class PriceRecord
{
[FieldFixedLength(6)]
public int ProductId;
[FieldFixedLength(8)]
[FieldConverter(typeof(MoneyConverter))]
public decimal PriceList;
[FieldFixedLength(8)]
[FieldConverter(typeof(MoneyConverter))]
public decimal PriceOnePay;
}
Once FileHelpers gives you back an array of rows, you can use LINQ to Objects to query the data.
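Typical read-side usage looks roughly like this (the generic FileHelperEngine is available in recent FileHelpers versions; the file name is an assumption):
var engine = new FileHelperEngine<PriceRecord>();
PriceRecord[] records = engine.ReadFile("prices.txt");

// plain LINQ to Objects from here on:
var expensive = records.Where(r => r.PriceList > 100m);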
We've had great success with it. I actually think Kaerber's solution is a nice, simple one; maybe stave off migrating to FileHelpers till you really need the extra power.

.NET writing a delimited text file

I am writing a framework for writing out collections into different formats for a project at my employer. One of the output formats is delimited text files (commonly known as the CSV -- even though CSVs aren't always delimited by a comma).
I am using the Microsoft.Jet.OLEDB.4.0 provider via OleDbConnection in ADO.NET. For reading these files, it's very quick. However, for writing, it's extremely slow.
In one case, I have a file with 160 records, with each record having about 250 fields. It takes approximately 30 seconds to create this file, seemingly CPU bound.
I have done the following, which provided significant performance boosts, but I can't think of anything else:
Preparing the statement once
Using unnamed parameters
Any other suggestions to speed this up some?
How about "don't use OleDbConnection"... writing delimited files with TextWriter is pretty simple (escaping aside). For reading, CsvReader.
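A minimal sketch of the TextWriter route, with basic RFC 4180-style quoting so embedded commas and quotes don't break the output (the method names are illustrative):
// requires: System.IO, System.Linq
static void WriteRecord(TextWriter writer, params string[] fields)
{
    writer.WriteLine(string.Join(",", fields.Select(Escape)));
}

static string Escape(string field)
{
    // quote fields containing the delimiter, quotes, or newlines; double any embedded quotes
    if (field.IndexOfAny(new[] { ',', '"', '\r', '\n' }) < 0)
        return field;
    return "\"" + field.Replace("\"", "\"\"") + "\"";
}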
I have written a small and simple set of classes at my employer to do just that (write and read CSV files or other flat files with a fixed field length).
I have just used the StreamWriter & StreamReader classes, and it is quite fast actually.
Try using System.Configuration.CommaDelimitedStringCollection, as in this code that prints a list of objects to a TextWriter:
public void CommaSeparatedWriteLine(TextWriter sw, params Object[] list)
{
if (list.Length > 0)
{
System.Configuration.CommaDelimitedStringCollection commaStr = new System.Configuration.CommaDelimitedStringCollection();
foreach (Object obj in list)
{
commaStr.Add(obj.ToString());
}
sw.WriteLine(commaStr.ToString());
}
}
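An example call, writing straight to a file (note that this helper does no quoting of its own, so check how CommaDelimitedStringCollection behaves if a value itself contains a comma):
using (var sw = new StreamWriter("output.csv"))
{
    CommaSeparatedWriteLine(sw, "Id", "Name", "Price");
    CommaSeparatedWriteLine(sw, 1, "Widget", 9.99m);
}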
Take a look at this LINQ to CSV library from code project:
http://www.codeproject.com/KB/linq/LINQtoCSV.aspx
I have not used this yet but I have had it in my reference file for about a year now.
"This library makes it easy to use CSV files with LINQ queries."

C# Datatype for large sorted collection with position?

I am trying to compare two large datasets from a SQL query. Right now the SQL query is done externally, and the results from each dataset are saved into their own CSV file. My little C# console application loads up the two text/CSV files, compares them for differences, and saves the differences to a text file.
It's a very simple application that just loads all the data from the first file into an ArrayList and does a .Contains() on the ArrayList as each line is read from the second CSV file. It then saves the records that don't match.
The application works, but I would like to improve the performance. I figure I can greatly improve performance if I can take advantage of the fact that both files are sorted, but I don't know of a datatype in C# that keeps order and would allow me to select a specific position. There's a basic array, but I don't know how many items are going to be in each list; I could have over a million records. Is there a data type available that I should be looking at?
If the data in both of your CSV files is already sorted and the files have the same number of records, you could skip the data structure entirely and do in-place analysis:
StreamReader one = new StreamReader(@"C:\file1.csv");
StreamReader two = new StreamReader(@"C:\file2.csv");
String lineOne;
String lineTwo;
StreamWriter differences = new StreamWriter("Output.csv");
while (!one.EndOfStream)
{
lineOne = one.ReadLine();
lineTwo = two.ReadLine();
// do your comparison.
bool areDifferent = true;
if (areDifferent)
differences.WriteLine(lineOne + lineTwo);
}
one.Close();
two.Close();
differences.Close();
System.Collections.Specialized.StringCollection allows you to add a range of values and, using the .IndexOf(string) method, allows you to retrieve the index of that item.
That being said, you could likely just load up a couple of byte[] from a FileStream and do byte comparison... don't even worry about loading that stuff into a formal data structure like StringCollection or string[]; if all you're doing is checking for differences, and you want speed, I would reckon byte differences are where it's at.
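For whole-file equality, that can be as simple as this sketch; it answers "are they identical?" quickly but won't tell you where they differ:
// requires: System.IO, System.Linq
static bool FilesAreIdentical(string path1, string path2)
{
    byte[] a = File.ReadAllBytes(path1);
    byte[] b = File.ReadAllBytes(path2);
    return a.Length == b.Length && a.SequenceEqual(b);
}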
This is an adaptation of David Sokol's code to work with varying number of lines, outputing the lines that are in one file but not the other:
StreamReader one = new StreamReader(@"C:\file1.csv");
StreamReader two = new StreamReader(@"C:\file2.csv");
String lineOne;
String lineTwo;
StreamWriter differences = new StreamWriter("Output.csv");
lineOne = one.ReadLine();
lineTwo = two.ReadLine();
while (!one.EndOfStream || !two.EndOfStream)
{
if(lineOne == lineTwo)
{
// lines match, read next line from each and continue
lineOne = one.ReadLine();
lineTwo = two.ReadLine();
continue;
}
if(two.EndOfStream || string.Compare(lineOne, lineTwo) < 0)
{
differences.WriteLine(lineOne);
lineOne = one.ReadLine();
}
else if(one.EndOfStream || string.Compare(lineTwo, lineOne) < 0)
{
differences.WriteLine(lineTwo);
lineTwo = two.ReadLine();
}
}
Standard caveat about code written off the top of my head applies -- you may need to special-case running out of lines in one while the other still has lines, but I think this basic approach should do what you're looking for.
Well, there are several approaches that would work. You could write your own data structure that did this. Or you can try and use SortedList. You can also return the DataSets in code, and then use .Select() on the table. Granted, you would have to do this on both tables.
You can easily use a SortedList to do fast lookups. If the data you are loading is already sorted, insertions into the SortedList should not be slow.
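A sketch of that idea, keying each record by its sort key (treating the first field as the key is an assumption about the layout):
// requires: System.Collections.Generic, System.IO
var index = new SortedList<string, string>();
foreach (var line in File.ReadLines(@"C:\file1.csv"))
{
    var key = line.Split(',')[0]; // assumes the sort key is the first field
    index[key] = line;
}

// O(log n) lookups while scanning the second file:
// bool known = index.ContainsKey(someKey);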
If you are looking simply to see if all lines in FileA are included in FileB you could read it in and just compare streams inside a loop.
File 1
Entry1
Entry2
Entry3
File 2
Entry1
Entry3
You could loop through with two counters and find omissions, going line by line through each file and see if you get what you need.
Maybe I misunderstand, but an ArrayList will maintain its elements in the order in which you added them. This means you can compare the two ArrayLists in a single pass - just increment the two scanning indices according to the comparison results.
One question I have: have you considered "out-sourcing" your comparison? There are plenty of good diff tools that you could just call out to. I'd be surprised if there wasn't one that let you specify two files and get only the differences. Just a thought.
I think the reason everyone has so many different answers is that you haven't quite specified your problem well enough to be answered. First off, it depends on what kind of differences you want to track. Are you wanting the differences to be output like in a WinDiff, where the first file is the "original" and the second file is the "modified", so you can list changes as INSERT, UPDATE or DELETE? Do you have a primary key that will allow you to match up two lines as different versions of the same record (when fields other than the primary key are different)? Or is this some sort of reconciliation, where you just want your difference output to say something like "RECORD IN FILE 1 AND NOT FILE 2"?
I think the answers to these questions will help everyone give you a suitable answer to your problem.
If you have two files that are each a million lines, as mentioned in your post, you might be using up a lot of memory. Some of the performance problem might be that you are swapping to disk. If you are simply comparing line 1 of file A to line 1 of file B, line 2 of file A to line 2 of file B, etc., I would recommend a technique that does not store so much in memory. You could read and write off of two file streams, as a previous commenter posted, and write out your results "in real time" as you find them; this would not explicitly store anything in memory. You could also dump chunks of each file into memory, say a thousand lines at a time, into something like a List<string>. This could be fine-tuned to meet your needs.
To resolve question #1, I'd recommend looking into creating a hash of each line. That way you can compare hashes quickly and easily using a dictionary.
To resolve question #2, one quick and dirty solution would be to use an IDictionary<string, string>: use itemId as the key and the rest of the line as the value. You can then quickly find whether an itemId exists and compare the lines. This of course assumes .NET 2.0+.
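A quick-and-dirty sketch of that dictionary approach (treating the first field as the itemId is an assumption about the layout):
// requires: System, System.Collections.Generic, System.IO
var fileA = new Dictionary<string, string>();
foreach (var line in File.ReadAllLines(@"C:\file1.csv"))
{
    var split = line.Split(new[] { ',' }, 2); // itemId, rest-of-line
    fileA[split[0]] = split.Length > 1 ? split[1] : "";
}

foreach (var line in File.ReadAllLines(@"C:\file2.csv"))
{
    var split = line.Split(new[] { ',' }, 2);
    string rest;
    if (!fileA.TryGetValue(split[0], out rest))
        Console.WriteLine("RECORD IN FILE 2 AND NOT FILE 1: " + line);
    else if (rest != (split.Length > 1 ? split[1] : ""))
        Console.WriteLine("RECORD CHANGED: " + line);
}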
