How do I create column headers using CsvHelper? - c#

So I just installed CsvHelper because that's the one I heard was the better one to use.
I looked at the documentation and tried to figure out how to accomplish what I wanted, which is to create one column and fill it with values that are separated by commas (,),
and then a cell under that one with the corresponding values, like this:
ID,Type,Name,Description,Image
11,Variation,MyCoolProduct,A super cool product, Image1 | Image2
I don't want different columns to the side; I want one column with a string inside that is formatted like that.
This is what I did, which didn't work, because you can't even open the file without getting a SYLK format issue:
var records = new List<Columns>
{
    new Columns
    {
        ID = 12, Type = "Variation", Description = "Simple product with different colors",
        Images = "Image1 | Image 2", Price = 19.99d
    }
};
using (StreamWriter sw = new StreamWriter("Testfile.csv"))
{
    var writer = new CsvWriter(sw);
    writer.WriteRecords(records);
}
UPDATE
I've mapped it like this now; how do I write it out to a text file?
public sealed class MyClassMap : ClassMap<Columns>
{
    public MyClassMap()
    {
        Map(m => m.ID);
    }
}

You should generally create a CsvClassMap class and map your class to the CSV format you need.
It's really simple and has a nice fluent interface. Just do something like:
public class ColMap : CsvClassMap<Column>
{
    public ColMap()
    {
        Map(m => m.ID).Name("ID").Index(0);
        // ...map the remaining properties the same way
    }
}
Some of the other options after the .Index call will allow you to further configure the format of each of your columns.
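For completeness (and to answer the UPDATE about writing the file), here is a minimal sketch of a full map plus the write step. It is an illustration, not the one true way: it assumes a recent CsvHelper version where ClassMap, Context.RegisterClassMap and the culture-taking CsvWriter constructor are available, and the Columns class and values are taken from the question.
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using CsvHelper;
using CsvHelper.Configuration;

public class Columns
{
    public int ID { get; set; }
    public string Type { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public string Images { get; set; }
    public double Price { get; set; }
}

public sealed class ColumnsMap : ClassMap<Columns>
{
    public ColumnsMap()
    {
        // Index controls column order, Name controls the header text.
        Map(m => m.ID).Name("ID").Index(0);
        Map(m => m.Type).Name("Type").Index(1);
        Map(m => m.Name).Name("Name").Index(2);
        Map(m => m.Description).Name("Description").Index(3);
        Map(m => m.Images).Name("Image").Index(4);
        // Price is intentionally left unmapped, so it is not written at all.
    }
}

public static class Program
{
    public static void Main()
    {
        var records = new List<Columns>
        {
            new Columns
            {
                ID = 11, Type = "Variation", Name = "MyCoolProduct",
                Description = "A super cool product", Images = "Image1 | Image2"
            }
        };

        using (var sw = new StreamWriter("Testfile.csv"))
        using (var csv = new CsvWriter(sw, CultureInfo.InvariantCulture))
        {
            csv.Context.RegisterClassMap<ColumnsMap>();
            // Writes the header line "ID,Type,Name,Description,Image",
            // followed by one comma-separated line per record.
            csv.WriteRecords(records);
        }
    }
}
Note that if a field value itself contains a comma (as in the description example from the question), CsvHelper quotes that field automatically.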

Related

How to detect if a row has extra columns (more than the header)

While reading a CSV file, how can I configure CsvHelper to enforce that each row has no extra columns beyond those found in the header? I cannot find any obvious property under CsvConfiguration nor under CsvHelper.Configuration.Attributes.
Context: In our CSV file format, the last column is a string description, which our users (editing in plain-text editors) sometimes forget to quote when the description contains commas. Such "raw" commas cause that row to have extra columns, and the description read into the software is truncated at the first raw comma. I want to detect this and throw an exception that suggests to the user that they may have forgotten to quote the description cell.
It looks like CsvConfiguration.DetectColumnCountChanges might be related, but at present the 29.0.0 library lacks any IntelliSense description of the CsvConfiguration properties, so I have no idea how to use this.
Similar information for other CSV libraries:
With LINQtoCSV this was done by setting IgnoreUnknownColumns = false in CsvFileDescription.
Can Lumenworks CSV parser error when there are too many columns in a row?
You were on the right track with CsvConfiguration.DetectColumnCountChanges.
void Main()
{
    var config = new CsvConfiguration(CultureInfo.InvariantCulture)
    {
        DetectColumnCountChanges = true
    };
    using (var reader = new StringReader("Id,Name\n1,MyName\n2,YourName,ExtraColumn"))
    using (var csv = new CsvReader(reader, config))
    {
        try
        {
            var records = csv.GetRecords<Foo>().ToList();
        }
        catch (BadDataException ex)
        {
            if (ex.Message.StartsWith("An inconsistent number of columns has been detected."))
            {
                Console.WriteLine("There is an issue with an inconsistent number of columns on row {0}", ex.Context.Parser.RawRow);
                Console.WriteLine("Row data: \"{0}\"", ex.Context.Parser.RawRecord);
                Console.WriteLine("Please check for commas in a field that were not properly quoted.");
            }
        }
    }
}
public class Foo
{
public int Id { get; set; }
public string Name { get; set; }
}

LinqToExcel Duplicate Column Names

I have a machine-generated Excel file that has a few columns with the same name, e.g.:
A                B            C                D
Group 1                       Group 2
Period           Name         Period           Name
And I have a DTO like this:
[ExcelColumn("Period")]
public string FirstPeriod { get; set; }
[ExcelColumn("Name")]
public string FirstName { get; set; }
[ExcelColumn("Period")]
public string SecondPeriod { get; set; }
[ExcelColumn("Name")]
public string SecondName { get; set; }
I use the following command to read the lines:
var excel = new ExcelQueryFactory(filePath);
excel.WorksheetRange<T>(beginCell, endColl + linesCount.ToString(), sheetIndex);
It reads the file just fine, but when I check the content of my DTO I see that all the 'Second' properties have the same values as the 'First' ones.
This post was the closest thing I found in my searches, and I think the problem could be solved with something like this:
excel.AddMapping<MyDto>(x => x.FirstPeriod, "A");
excel.AddMapping<MyDto>(x => x.FirstName, "B");
excel.AddMapping<MyDto>(x => x.SecondPeriod, "C");
excel.AddMapping<MyDto>(x => x.SecondName, "D");
But I don't know how to get the Excel column letters...
Note: I have some more code behind this, but I don't think it's relevant to the problem.
The problem you're having cannot be solved with LinqToExcel today, because it wraps the OleDb functions and maps properties based on column names, so you lose the OleDb options like "Fn" for specifying columns by position (e.g. "F1" for "A").
There's an issue about this on the LinqToExcel GitHub repo: https://github.com/paulyoder/LinqToExcel/issues/85
I recommend renaming the columns so there are no duplicate names (e.g. Period1, Name1, Period2, Name2). If that isn't possible because the file is machine generated, try changing the header names at runtime.
Another option is to make more than one query against the Excel file, with ranges split by group, and then merge the results later (see the sketch after the code below).
var excel = new ExcelQueryFactory(filePath);
var group1 = excel.WorksheetRange<T>("A1", "B" + rowCount);
var group2 = excel.WorksheetRange<T>("C1", "D" + rowCount);
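A rough sketch of that merge step, to make it concrete: GroupDto below is a hypothetical per-group type with a single Period/Name pair, each range is queried with it separately, and the rows are zipped back together into the MyDto from the question. It assumes both groups have the same number of rows, that the Period/Name header row is row 2, and it reuses filePath, rowCount and sheetIndex from the question's code.
// Hypothetical per-group DTO: within one range there is exactly one Period and one Name column.
public class GroupDto
{
    [ExcelColumn("Period")]
    public string Period { get; set; }

    [ExcelColumn("Name")]
    public string Name { get; set; }
}

// Query each group's range separately, then zip the two result sets row by row.
// (Zip requires using System.Linq;)
var excel = new ExcelQueryFactory(filePath);
var group1 = excel.WorksheetRange<GroupDto>("A2", "B" + rowCount, sheetIndex).ToList();
var group2 = excel.WorksheetRange<GroupDto>("C2", "D" + rowCount, sheetIndex).ToList();

var merged = group1.Zip(group2, (g1, g2) => new MyDto
{
    FirstPeriod = g1.Period,
    FirstName = g1.Name,
    SecondPeriod = g2.Period,
    SecondName = g2.Name
}).ToList();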
Edit: I'll work on a feature to try to solve this problem in an elegant manner, so maybe in the future you'll have a more flexible option to map columns and properties (if they accept my pull request).

Enforce LF line endings with CsvHelper

If I have some LF-converted (using N++) CSV files, every time I write data to them using JoshClose's CsvHelper the line endings are back to CRLF.
Since I'm having problems with CRLF row terminators in SQL Server, I wish to keep the line endings as they were in the initial state of the file.
I couldn't find it in the culture settings, and I compile my own version of the library.
How should I proceed?
Missing or incorrect Newline characters when using CsvHelper is a common problem with a simple but poorly documented solution. The other answers to this SO question are correct but are missing one important detail.
Configuration allows you to choose from one of four available alternatives:
// Pick one of these alternatives
CsvWriter.Configuration.NewLine = NewLine.CR;
CsvWriter.Configuration.NewLine = NewLine.LF;
CsvWriter.Configuration.NewLine = NewLine.CRLF;
CsvWriter.Configuration.NewLine = NewLine.Environment;
However, many people are tripped up by the fact that (by design) CsvWriter does not emit any newline character when you write the header using CsvWriter.WriteHeader() nor when you write a single record using CsvWriter.WriteRecord(). The reason is so that you can write additional header elements or additional record elements, as you might do when your header and row data comes from two or more classes rather than from a single class.
CsvWriter does emit the defined type of newline when you call CsvWriter.NextRecord(), and the author, JoshClose, states that you are supposed to call NextRecord() after you are done with the header and after each individual row added using WriteRecord. See GitHub issue 929.
When you are writing multiple records using WriteRecords() CsvWriter automatically emits the defined type of newline at the end of each record.
In my opinion this ought to be much better documented, but there it is.
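To make that pattern concrete, here is a minimal sketch of the WriteHeader/NextRecord flow. It assumes a CsvHelper version in roughly the 13.x-19.x range (as used elsewhere on this page) where NewLine is settable on the writer's Configuration, plus a Foo class like the one shown further down; the file name is just an example.
using (var writer = new StreamWriter("out.csv"))
using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
{
    csv.Configuration.NewLine = NewLine.LF;

    csv.WriteHeader<Foo>();
    csv.NextRecord();          // WriteHeader does not end the line; NextRecord emits the LF

    foreach (var foo in new[] { new Foo { Id = 1, Name = "one" }, new Foo { Id = 2, Name = "two" } })
    {
        csv.WriteRecord(foo);
        csv.NextRecord();      // likewise after each individually written record
    }
}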
From what I can tell, the line terminator isn't controlled by CsvHelper. I've gotten it to work by adjusting the TextWriter I pass to CsvWriter.
TextWriter tw = File.CreateText(filepathname);
tw.NewLine = "\n";
CsvWriter csvw = new CsvWriter(tw);
csvw.WriteRecords(records);
csvw.Dispose();
Might be useful for somebody:
public static void AppendToCsv(ShopDataModel shopRecord)
{
    using (var writer = new StreamWriter(DestinationFile, true))
    {
        using (var csv = new CsvWriter(writer))
        {
            csv.WriteRecord(shopRecord);
            writer.Write("\n");
        }
    }
}
As of CsvHelper 13.0.0, line-endings are now configurable via the NewLine configuration property.
E.g.:
using CsvHelper;
using CsvHelper.Configuration;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

void Main()
{
    using (var writer = new StreamWriter(@"my-file.csv"))
    {
        using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
        {
            csv.Configuration.HasHeaderRecord = false;
            csv.Configuration.NewLine = NewLine.LF; // <<####################
            var records = new List<Foo>
            {
                new Foo { Id = 1, Name = "one" },
                new Foo { Id = 2, Name = "two" },
            };
            csv.WriteRecords(records);
        }
    }
}

private class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

exporting a comma separated values text file to use in excel

In my C# WinForms program I want to make an export function that will create a comma-separated text file, or CSV. I am not sure about the best way to go about this. My exported file will be like this:
Family Name, First Name, Sex, Age
Dekker, Sean, Male, 23
Doe, John, Male, 40
So I want the first line to be the names of the columns, and the rest should be treated as values. Is it OK this way for later usage, or should I not include column names?
Would be nice to hear your experiences about this!
Sean,
sorry, I don't have enough privilege points to comment directly on your post. I think you may be confusing CSV and Excel files here. A CSV is simply a text file where each value is separated by a comma; there is no special formatting, etc. Excel will display CSV files since it knows how to open them, but you can just as easily open them in Notepad.
Excel .xlsx files are different and can contain all sorts of different formats, charts, etc. To format these files it's important to understand that .xlsx files are essentially zips. So the first place to start is to create an Excel file with some data, save it, and then rename the extension to .zip.
Open the zip file you just created and you will see a number of different folders and files; of these, the most important for your purposes is the xl directory. In this folder you will see a sharedStrings.xml file and a worksheets folder.
Let's start by going into the worksheets folder, opening sheet1.xml, and looking at how the cell values are stored.
If there is text in a cell, i.e. data that Excel should read as text, you will see something like <c r="A1" t="s"><v>0</v></c>. This indicates that cell A1 is of type string (t="s") and that its value is the entry at index 0 in the sharedStrings.xml file.
If there is a number in the cell, you may see something like <c r="A1"><v>234</v></c>. In this case Excel knows to use the value 234 in this cell.
So in your case you will need to do the following:
1: Create the Excel document in C# - there are a number of libraries available for this
2: Open the Excel file as a zip
3: Modify - in your case - the styles and worksheets XML files
4: Save the document
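As a rough illustration of steps 2 and 3, here is a minimal sketch using System.IO.Compression to open an existing .xlsx as a zip and read its shared strings part (the file path is only an example); writing entries back works the same way with ZipArchiveMode.Update.
using System;
using System.IO;
using System.IO.Compression;   // reference System.IO.Compression and System.IO.Compression.FileSystem

class XlsxAsZipDemo
{
    static void Main()
    {
        // Point this at the .xlsx created in step 1.
        using (ZipArchive archive = ZipFile.Open(@"c:\temp\book1.xlsx", ZipArchiveMode.Read))
        {
            // xl/sharedStrings.xml holds the text values referenced by t="s" cells.
            ZipArchiveEntry shared = archive.GetEntry("xl/sharedStrings.xml");
            if (shared != null)
            {
                using (var reader = new StreamReader(shared.Open()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }
    }
}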
That is absolutely fine to do (to state the obvious...). Excel has a little checkbox that allows the importing user to treat the first line as column headers if they select it.
I would also suggest that you leave out the spaces at the start of each piece of data; they aren't necessary.
In general it's best practice to include the column headers; the only reason not to would be an external program, over which you have no control, that accesses your data, doesn't realise the first row contains the column headers, and can't be changed.
To create the export function something like this should work:
class Program
{
    private static List<Person> people = new List<Person>();

    static void Main(string[] args)
    {
        // add some people
        people.Add(
            new Person() { firstName = "John", familyName = "Smith", sex = Sex.Male, age = 12 }
        );
        people.Add(
            new Person() { firstName = "Mary", familyName = "Doe", sex = Sex.Female, age = 25 }
        );
        // write the data
        Write();
    }

    static void Write()
    {
        using (TextWriter tw = new StreamWriter(@"c:\junk1\test.csv", false))
        {
            // write the header
            tw.WriteLine("Family Name, First Name, Sex, Age");
            // write the details
            foreach (Person person in people)
            {
                tw.WriteLine(String.Format("{0}, {1}, {2}, {3}", person.familyName, person.firstName, person.sex.ToString(), person.age.ToString()));
            }
        }
    }
}

/// <summary>
/// Applicable sexes
/// </summary>
public enum Sex
{
    Male,
    Female
}

/// <summary>
/// holds details about a person
/// </summary>
public class Person
{
    public string familyName { get; set; }
    public string firstName { get; set; }
    public Sex sex { get; set; }
    public int age { get; set; }
}
You can use a DataSet to do this.
Please refer here
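The linked example isn't reproduced here, but a minimal sketch of the DataSet/DataTable idea (column names and file path are just examples) could look like this: the DataTable's column names become the header line, and each row becomes one comma-separated line.
using System;
using System.Data;
using System.IO;
using System.Linq;

class DataTableCsvDemo
{
    static void Main()
    {
        // Build a DataTable whose column names become the CSV header.
        var table = new DataTable();
        table.Columns.Add("Family Name");
        table.Columns.Add("First Name");
        table.Columns.Add("Sex");
        table.Columns.Add("Age", typeof(int));
        table.Rows.Add("Dekker", "Sean", "Male", 23);
        table.Rows.Add("Doe", "John", "Male", 40);

        using (var writer = new StreamWriter(@"c:\temp\export.csv"))
        {
            // Header row from the column names.
            writer.WriteLine(string.Join(",", table.Columns.Cast<DataColumn>().Select(c => c.ColumnName)));
            // One line per data row.
            foreach (DataRow row in table.Rows)
            {
                writer.WriteLine(string.Join(",", row.ItemArray));
            }
        }
    }
}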
// Why not save the lines to a List<string>? First add your headers: build them with
// string.Join(",", yourHeaderArray) - don't append "," yourself, the Join method handles
// that for you. Here is an example where the header values come from a database via a
// SqlDataReader:
var reader = sqlcmdSomeQueryCommand.ExecuteReader();
var columns = new List<string>();

// get all the field names from the reader
for (int intCounter = 0; intCounter < reader.FieldCount; intCounter++)
{
    columns.Add(reader.GetName(intCounter));
}

string[] strarryTmpString = columns.ToArray();
string TmpFields = string.Join(", ", strarryTmpString);
columns.Clear();
columns.Add(TmpFields);

// You can keep adding the rest of your comma-delimited rows to the list, then write it out
// line by line in a foreach loop, or use List<string>.ForEach:
columns.ForEach(delegate(string delString)
{
    someStreamWriterObject.WriteLine(delString);
});

Column headers in CSV using FileHelpers library?

Is there a built-in field attribute in the FileHelpers library which will add a header row to the final generated CSV?
I have Googled and didn't find much info on it. Currently I have this:
DelimitedFileEngine _engine = new DelimitedFileEngine(T);
_engine.WriteStream(HttpContext.Current.Response.Output, dataSource, int.MaxValue);
It works, but without a header.
I'm thinking of having an attribute like FieldTitleAttribute and using this as a column header.
So, my question is at which point do I check the attribute and insert header columns? Has anyone done something similar before?
I would like to get the headers inserted and use custom text different from the actual field name just by having an attribute on each member of the object:
[FieldTitleAttribute("Custom Title")]
private string Name
and maybe an option to tell the engine to insert the header when it's generated.
So when WriteStream or WriteString is called, the header row will be inserted with custom titles.
I have found a couple of events on DelimitedFileEngine, but I'm not sure of the best way to detect whether the current record is the first row, and how to insert a row before it.
I know this is an old question, but here is an answer that works for v2.9.9
FileHelperEngine<Person> engine = new FileHelperEngine<Person>();
engine.HeaderText = engine.GetFileHeader();
Here's some code that'll do it: https://gist.github.com/1391429
To use it, you must decorate your fields with [FieldOrder] (a good FileHelpers practice anyway). Usage:
[DelimitedRecord(","), IgnoreFirst(1)]
public class Person
{
// Must specify FieldOrder too
[FieldOrder(1), FieldTitle("Name")]
string name;
[FieldOrder(2), FieldTitle("Age")]
int age;
}
...
var engine = new FileHelperEngine<Person>
{
HeaderText = typeof(Person).GetCsvHeader()
};
...
engine.WriteFile(#"C:\people.csv", people);
But support for this really needs to be added within FileHelpers itself. I can think of a few design questions off the top of my head that would need answering before it could be implemented:
What happens when reading a file? Afaik FileHelpers is currently all based on ordinal column position and ignores column names... but if we now have [FieldHeader] attributes everywhere then should we also try matching properties with column names in the file? Should you throw an exception if they don't match? What happens if the ordinal position doesn't agree with the column name?
When reading as a data table, should you use A) the field name (current design), or B) the source file column name, or C) the FieldTitle attribute?
I don't know if you still need this, but here is how FileHelpers works:
To include column headers, you need to define a string with the headers delimited the same way as your file.
For example, with '|' as the delimiter:
public const string HeaderLine = @"COLUMN1|COLUMN2|COLUMN3|...";
Then, when creating your engine:
var _engine = new DelimitedFileEngine<T> { HeaderText = HeaderLine };
If you don't want to write the headers, just don't set the HeaderText property on the engine.
List<MyClass> myList = new List<MyClass>();
FileHelperEngine engine = new FileHelperEngine(typeof(MyClass));
String[] fieldNames = Array.ConvertAll<FieldInfo, String>(typeof(MyClass).GetFields(), delegate(FieldInfo fo) { return fo.Name; });
engine.HeaderText = String.Join(";", fieldNames);
engine.WriteFile(MapPath("MyClass.csv"), myList);
Just to include a more complete example, which would have saved me some time, for version 3.4.1 of the FileHelpers NuGet package....
Given
[DelimitedRecord(",")]
public class Person
{
[FieldCaption("First")]
public string FirstName { get; set; }
[FieldCaption("Last")]
public string LastName { get; set; }
public int Age { get; set; }
}
and this code to create it
static void Main(string[] args)
{
    var people = new List<Person>();
    people.Add(new Person() { FirstName = "James", LastName = "Bond", Age = 38 });
    people.Add(new Person() { FirstName = "George", LastName = "Washington", Age = 43 });
    people.Add(new Person() { FirstName = "Robert", LastName = "Redford", Age = 28 });
    CreatePeopleFile(people);
}

private static void CreatePeopleFile(List<Person> people)
{
    var engine = new FileHelperEngine<Person>();
    using (var fs = File.Create(@"c:\temp\people.csv"))
    using (var sw = new StreamWriter(fs))
    {
        engine.HeaderText = engine.GetFileHeader();
        engine.WriteStream(sw, people);
        sw.Flush();
    }
}
You get this
First,Last,Age
James,Bond,38
George,Washington,43
Robert,Redford,28
I found that you can use the FileHelperAsyncEngine to accomplish this. Assuming your data is a list called "output" of type "outputData", then you can write code that looks like this:
FileHelperAsyncEngine outEngine = new FileHelperAsyncEngine(typeof(outputData));
outEngine.HeaderText = "Header1, Header2, Header3";
outEngine.BeginWriteFile(outputfile);
foreach (outputData line in output)
{
    outEngine.WriteNext(line);
}
outEngine.Close();
You can simply use FileHelper's GetFileHeader function from base class
var engine = new FileHelperEngine<ExportType>();
engine.HeaderText = engine.GetFileHeader();
engine.WriteFile(exportFile, exportData);
