How to write the header of a non-flat class using CsvHelper? - c#

public class Class1
{
    [CsvField(Name = "Field1")]
    public int Field1 { get; set; }
    [CsvField(Name = "Field2")]
    public int Field2 { get; set; }
    [CsvField(Ignore = true)]
    public Class2 Class2 { get; set; }
    [CsvField(Ignore = true)]
    public Class3 Class3 { get; set; }
}
public class Class2
{
    [CsvField(Name = "Field3")]
    public int Field3 { get; set; }
    [CsvField(Name = "Field4")]
    public int Field4 { get; set; }
}
public class Class3
{
    [CsvField(Name = "Field5")]
    public int Field5 { get; set; }
    [CsvField(Name = "Field6")]
    public int Field6 { get; set; }
}
I'm using CsvHelper to write data to a CSV file.
I need to write Class1 with a header like this:
Field1, Field2, Field3, Field4, Field5, Field6
How can I do this?

This is an old question, so you likely already have an answer. You have a few options.
Option 1 (that I know will work) Documentation
You just need to manually write out the contents of the CSV. Below is some code that will get you started, but you'll need to modify it based on how the contents of your objects are stored.
using (var stream = new MemoryStream())
{
    using (var streamWriter = new StreamWriter(stream))
    using (var csv = new CsvWriter(streamWriter))
    {
        // Write out header
        csv.WriteField("Field1");
        csv.WriteField("Field2");
        csv.WriteField("Field3");
        csv.WriteField("Field4");
        csv.WriteField("Field5");
        csv.WriteField("Field6");
        // Write out end of line
        csv.NextRecord();

        // Pseudocode
        foreach (var item in Class1Collection)
        {
            csv.WriteField(item.Field1);
            csv.WriteField(item.Field2);
            csv.WriteField(item.Class2.Field3);
            csv.WriteField(item.Class2.Field4);
            csv.WriteField(item.Class3.Field5);
            csv.WriteField(item.Class3.Field6);
            // Write out end of line
            csv.NextRecord();
        }
    }
}
Option 2 (have used, but not like this) Documentation
Your second option is to write a custom CsvClassMap that tells the CsvWriter how to handle the nested classes. I'm not sure how to deal with the Name, so you might have to work through that.
public sealed class Class1CSVMap : CsvClassMap<Class1>
{
    public Class1CSVMap()
    {
        Map(m => m.Field1).Name("Field1");
        Map(m => m.Field2).Name("Field2");
        Map(m => m.Class2).Name("Field3,Field4").TypeConverter<Class2Converter>();
        Map(m => m.Class3).Name("Field5,Field6").TypeConverter<Class3Converter>();
    }
}
Then you have your converters, one for Class2 and one for Class3 (the Class3 one is sketched after the Class2 code below):
public class Class2Converter : DefaultTypeConverter
{
    public override string ConvertToString(TypeConverterOptions options, object model)
    {
        var result = string.Empty;
        var classObject = model as Class2;
        if (classObject != null)
        {
            result = string.Format("{0},{1}", classObject.Field3, classObject.Field4);
        }
        return result;
    }
}
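The Class3 converter isn't shown in the original answer, but it would mirror the one for Class2 (a sketch under that assumption):
public class Class3Converter : DefaultTypeConverter
{
    public override string ConvertToString(TypeConverterOptions options, object model)
    {
        var result = string.Empty;
        var classObject = model as Class3;
        if (classObject != null)
        {
            // Join both nested values into one comma-separated string, as the Class2 converter does
            result = string.Format("{0},{1}", classObject.Field5, classObject.Field6);
        }
        return result;
    }
}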
Option 3 (have never used) Documentation
You can do an inline converter instead of creating a separate class. I've never tried this, but it should work.
public sealed class Class1CSVMap : CsvClassMap<Class1>
{
    public Class1CSVMap()
    {
        Map(m => m.Field1).Name("Field1");
        Map(m => m.Field2).Name("Field2");
        Map(m => m.Class2).Name("Field3,Field4").ConvertUsing(row => string.Format("{0},{1}", row.Field3, row.Field4));
        Map(m => m.Class3).Name("Field5,Field6").ConvertUsing(row => string.Format("{0},{1}", row.Field5, row.Field6));
    }
}
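Whichever map you end up with, it still has to be registered before writing. A minimal usage sketch, assuming the Class1Collection enumerable from Option 1 and the older CsvHelper API used throughout this answer (this part is not from the original answer):
using (var streamWriter = new StreamWriter("output.csv"))
using (var csv = new CsvWriter(streamWriter))
{
    // Register the class map so the nested members go through their converters
    csv.Configuration.RegisterClassMap<Class1CSVMap>();

    // WriteRecords writes the header row followed by one line per item
    csv.WriteRecords(Class1Collection);
}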

Take a look at this:
foreach (var item in Collection)
{
    csv.WriteField(item.Field1);
    csv.WriteField(item.Field2);
    csv.WriteField(item.Class2.Field3);
    csv.WriteField(item.Class2.Field4);
    csv.WriteField(item.Class3.Field5);
    csv.WriteField(item.Class3.Field6);
    // Write out end of line
    csv.NextRecord();
}

CsvHelper: Initialize a map member

Imagine a collection of EF DbSets like this:
public class Employee {
...
public string FirstName { get; set; }
public List<Badge> Badge { get; set; }
}
public class Badge {
public long CSN { get; set; }
public int EmployeeId { get; set; }
public int Type { get; set; }
}
These models are used in my database, and I want to use them to import data from a CSV file. But this file has a small difference: it gives only one badge, like this:
FIRSTNAME;CSN;TYPE
Jerome;12345;1
I have used a CollectionGenericConverter to initialize the list with a new record.
Map(m => m.Firstname).Name("Firstname");
Map(m => m.Badges).Name("CSN").TypeConverter<BadgeConverter>();
...
public class BadgeConverter : CollectionGenericConverter {
public override object ConvertFromString(String text, IReaderRow row, MemberMapData memberMapData) {
return new List<Badge> {
new Badge {
CSN = Convert.ToInt16(text)
}
};
}
}
My only problem is with the second value: using a second converter resets the list of badges:
Map(m => m.Badges).Name("Type").TypeConverter<AnotherOneBadgeConverter>();
And setting the first item directly does not work:
Map(m => m.Badges[0].Type).Name("Type");
How can I do that?
Something like this may work for you.
public class Program
{
public static void Main(string[] args)
{
using (MemoryStream stream = new MemoryStream())
using (StreamWriter writer = new StreamWriter(stream))
using (StreamReader reader = new StreamReader(stream))
using (CsvReader csv = new CsvReader(reader))
{
writer.WriteLine("FIRSTNAME;CSN;TYPE");
writer.WriteLine("Jerome;12345;1");
writer.Flush();
stream.Position = 0;
csv.Configuration.Delimiter = ";";
csv.Configuration.RegisterClassMap<EmployeeMap>();
var records = csv.GetRecords<Employee>().ToList();
}
}
}
public class Employee
{
public string FirstName { get; set; }
public List<Badge> Badge { get; set; }
}
public class Badge
{
public long CSN { get; set; }
public int EmployeeId { get; set; }
public int Type { get; set; }
}
public class EmployeeMap: ClassMap<Employee>
{
public EmployeeMap()
{
Map(m => m.FirstName).Name("FIRSTNAME");
Map(m => m.Badge).ConvertUsing(row =>
{
var list = new List<Badge>
{
new Badge { CSN = row.GetField<long>("CSN"), Type = row.GetField<int>("TYPE") },
};
return list;
});
}
}

CsvHelper - Read different record types in same CSV

I'm trying to read two types of records out of a CSV file with the following structure:
PlaceName,Longitude,Latitude,Elevation
NameString,123.456,56.78,40
Date,Count
1/1/2012,1
2/1/2012,3
3/1/2012,10
4/2/2012,6
I know this question has been covered previously in
Reading multiple classes from single csv file using CsvHelper
Multiple Record Types in One File?
but when I run my implementation I get a CsvMissingFieldException saying that "Fields 'Date' do not exist in the CSV file". I have two definition and map classes, one for the location and the other for the counts:
public class LocationDefinition
{
public string PlaceName { get; set; }
public double Longitude { get; set; }
public double Latitude { get; set; }
public double Elevation { get; set; }
}
public sealed class LocationMap : CsvClassMap<LocationDefinition>
{
public LocationMap()
{
Map(m => m.PlaceName).Name("PlaceName");
Map(m => m.Longitude).Name("Longitude");
Map(m => m.Latitude).Name("Latitude");
Map(m => m.Elevation).Name("Elevation");
}
}
public class CountDefinition
{
public DateTime Date { get; set; }
public int Count { get; set; }
}
public sealed class CountMap : CsvClassMap<CountDefinition>
{
public CountMap()
{
Map(m => m.Date).Name("Date");
Map(m => m.Count).Name("Count");
}
}
The code I have for reading the CSV file is:
LocationDefinition Location;
var Counts = new List<CountDefinition>();
using (TextReader fileReader = File.OpenText(@"Path\To\CsvFile"))
using (var csvReader = new CsvReader(fileReader))
{
csvReader.Configuration.RegisterClassMap<LocationMap>();
csvReader.Configuration.RegisterClassMap<CountMap>();
// Only reads a single line of Location data
csvReader.Read();
Location = csvReader.GetRecord<LocationDefinition>();
csvReader.Read(); // skip blank line
csvReader.Read(); // skip second header section
// Read count data records
while (csvReader.Read())
{
var tempCount = csvReader.GetRecord<CountDefinition>();
Counts.Add(tempCount);
}
}
The exception gets thrown on the tempCount line. From what I can tell it still expects a Location record, but I would have thought GetRecord<CountDefinition> would specify the record type. I've also tried ClearRecordCache and unregistering the LocationMap to no avail.
How should this code be changed to get it to read a csv file of this structure?
Try this
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
namespace ConsoleApplication1
{
enum State
{
FIND_RECORD,
GET_LOCATION,
GET_DATES
}
class Program
{
const string FILENAME = @"c:\temp\test.txt";
static void Main(string[] args)
{
StreamReader reader = new StreamReader(FILENAME);
State state = State.FIND_RECORD;
LocationDefinition location = null;
string inputLine = "";
while ((inputLine = reader.ReadLine()) != null)
{
inputLine = inputLine.Trim();
if (inputLine.Length == 0)
{
state = State.FIND_RECORD;
}
else
{
switch (state)
{
case State.FIND_RECORD :
if (inputLine.StartsWith("PlaceName"))
{
state = State.GET_LOCATION;
}
else
{
if (inputLine.StartsWith("Date"))
{
state = State.GET_DATES;
}
}
break;
case State.GET_DATES :
if (location.dates == null) location.dates = new CountDefinition();
location.dates.dates.Add(new CountDefinition(inputLine));
break;
case State.GET_LOCATION :
location = new LocationDefinition(inputLine);
break;
}
}
}
}
}
public class LocationDefinition
{
public static List<LocationDefinition> locations = new List<LocationDefinition>();
public CountDefinition dates { get; set; }
public string PlaceName { get; set; }
public double Longitude { get; set; }
public double Latitude { get; set; }
public double Elevation { get; set; }
public LocationDefinition(string location)
{
string[] array = location.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
PlaceName = array[0];
Longitude = double.Parse(array[1]);
Latitude = double.Parse(array[2]);
Elevation = double.Parse(array[3]);
locations.Add(this);
}
}
public class CountDefinition
{
public List<CountDefinition> dates = new List<CountDefinition>();
public DateTime Date { get; set; }
public int Count { get; set; }
public CountDefinition() { ;}
public CountDefinition(string count)
{
string[] array = count.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
Date = DateTime.Parse(array[0]);
Count = int.Parse(array[1]);
dates.Add(this);
}
}
}
I got a response from Josh Close on the issue tracker:
CsvReader not recognising different registered class maps
Here is his answer to this question:
Since you don't have a single header, you'll need to ignore headers and use indexes instead. This brings up an idea though. I could have the ReadHeader method parse headers for a specific record type.
Here is an example that should work for you though.
void Main()
{
LocationDefinition Location;
var Counts = new List<CountDefinition>();
using (var stream = new MemoryStream())
using (var reader = new StreamReader(stream))
using (var writer = new StreamWriter(stream))
using (var csvReader = new CsvReader(reader))
{
writer.WriteLine("PlaceName,Longitude,Latitude,Elevation");
writer.WriteLine("NameString,123.456,56.78,40");
writer.WriteLine();
writer.WriteLine("Date,Count");
writer.WriteLine("1/1/2012,1");
writer.WriteLine("2/1/2012,3");
writer.WriteLine("3/1/2012,10");
writer.WriteLine("4/2/2012,6");
writer.Flush();
stream.Position = 0;
csvReader.Configuration.HasHeaderRecord = false;
csvReader.Configuration.RegisterClassMap<LocationMap>();
csvReader.Configuration.RegisterClassMap<CountMap>();
csvReader.Read(); // get header
csvReader.Read(); // get first record
var locationData = csvReader.GetRecord<LocationDefinition>();
csvReader.Read(); // skip blank line
csvReader.Read(); // skip second header section
// Read count data records
while (csvReader.Read())
{
var tempCount = csvReader.GetRecord<CountDefinition>();
Counts.Add(tempCount);
}
}
}
public class LocationDefinition
{
public string PlaceName { get; set; }
public double Longitude { get; set; }
public double Latitude { get; set; }
public double Elevation { get; set; }
}
public sealed class LocationMap : CsvClassMap<LocationDefinition>
{
public LocationMap()
{
Map(m => m.PlaceName);
Map(m => m.Longitude);
Map(m => m.Latitude);
Map(m => m.Elevation);
}
}
public class CountDefinition
{
public DateTime Date { get; set; }
public int Count { get; set; }
}
public sealed class CountMap : CsvClassMap<CountDefinition>
{
public CountMap()
{
Map(m => m.Date);
Map(m => m.Count);
}
}

CsvHelper discards row if there is a missing field

I have a CSV files that can be structured similar to the following:
Header1,Header2,Header3
1,2,3
5,,6
4,4,4
When using Josh Close's CsvHelper and calling GetRecords<T> as per:
List<TestData> data = csvReader.GetRecords<TestData>().ToList();
The list of data does not contain the second row. I've tinkered with the settings and tried to implement a double converter that accepts an empty string and returns 0 when empty, but the row still gets discarded. I'm trying to avoid doing a manual get for each field; however, I would still be happy with a row-by-row solution, i.e. csvReader.GetRecord<TestData>() nested in a loop.
I have the following test code:
public class When_importing_csv_with_missing_filed
{
[Test]
public void Dont_discard_the_row_with_missing_field()
{
using (TextReader textReader = new StreamReader("Test.csv"))
{
Assert.IsTrue(File.Exists("Test.csv"));
var reader = new CsvReader(textReader);
reader.Configuration.RegisterClassMap<TestMap>();
reader.Configuration.IgnoreReadingExceptions = true;
reader.Configuration.SkipEmptyRecords = false;
List<TestData> testData = reader.GetRecords<TestData>().ToList();
}
}
}
public class TestMap : CsvClassMap<TestData>
{
public override void CreateMap()
{
Map(m => m.Header1).Name("Header1").TypeConverter<DoubleConverter>();
Map(m => m.Header2).Name("Header2").TypeConverter<NullValueTypeConverter>();
Map(m => m.Header3).Name("Header3").TypeConverter<DoubleConverter>();
}
}
public class NullValueTypeConverter : DoubleConverter
{
public override object ConvertFromString(TypeConverterOptions options, string text)
{
return String.IsNullOrEmpty(text) ? 0 : base.ConvertFromString(options, text);
}
public override bool CanConvertFrom(Type type)
{
return type == typeof(string);
}
}
public class TestData
{
public double? Header1 { get; set; }
public double? Header2 { get; set; }
public double? Header3 { get; set; }
}
Over to you..
This seems to work completely fine for me.
Code:
void Main()
{
using( var stream = new MemoryStream() )
using( var writer = new StreamWriter( stream ) )
using( var reader = new StreamReader( stream ) )
using( var csv = new CsvReader( reader ) )
{
writer.WriteLine( "Header1,Header2,Header3" );
writer.WriteLine( "1,2,3" );
writer.WriteLine( "5,,6" );
writer.WriteLine( "4,4,4" );
writer.Flush();
stream.Position = 0;
csv.Configuration.RegisterClassMap<TestMap>();
csv.Configuration.IgnoreReadingExceptions = true;
csv.Configuration.SkipEmptyRecords = false;
var records = csv.GetRecords<TestData>().ToList();
records.Dump();
}
}
public class TestData
{
public double? Header1 { get; set; }
public double? Header2 { get; set; }
public double? Header3 { get; set; }
}
public class TestMap : CsvClassMap<TestData>
{
public override void CreateMap()
{
Map(m => m.Header1).Name("Header1");
Map(m => m.Header2).Name("Header2");
Map(m => m.Header3).Name("Header3");
}
}
Result:
Header1  Header2  Header3
1        2        3
5        null     6
4        4        4
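If you would rather read row by row, as mentioned in the question, a minimal sketch (assuming the same TestData and TestMap as above, and the older CsvHelper behaviour where the header is consumed on the first Read):
using (TextReader textReader = new StreamReader("Test.csv"))
{
    var csv = new CsvReader(textReader);
    csv.Configuration.RegisterClassMap<TestMap>();

    var testData = new List<TestData>();
    while (csv.Read())
    {
        // GetRecord maps only the current row, so a problematic row can be handled individually
        testData.Add(csv.GetRecord<TestData>());
    }
}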

Remove the null property from object

I have a class with some properties. If any of them is null or empty on an object, I want to remove it from that object. Below is my code.
public class TestClass
{
public string Name { get; set; }
public int ID { get; set; }
public DateTime? DateTime { get; set; }
public string Address { get; set; }
}
TestClass t=new TestClass();
t.Address="address";
t.ID=132;
t.Name=string.Empty;
t.DateTime=null;
Now I want the TestClass object, but the Name and DateTime properties should not be there in the object.
Is that possible?
Please help me.
There's no such concept as removing a property from an individual object. The type decides which properties are present - not individual objects.
In particular, it will always be valid to have a method like this:
public void ShowDateTime(TestClass t)
{
Console.WriteLine(t.DateTime);
}
That code has no way of knowing whether you've wanted to "remove" the DateTime property from the object that t refers to. If the value is null, it will just get that value - that's fine. But you can't remove the property itself.
If you're listing the properties of an object somewhere, you should do the filtering there, instead.
EDIT: Okay, now you've given us some context:
OK, I am using a schemaless database, so null and empty values also take up space in the database; that's the reason.
So in the code you're using which populates that database, just don't set any fields which correspond to properties with a null value. That's purely a database population concern - not a matter for the object itself.
(I'd also argue that you should consider how much space you'll really save by doing this. Do you really care that much?)
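To illustrate that last point, the filtering can happen where you build the record to store rather than on the object itself. A rough sketch (the dictionary shape and the SaveDocument call are hypothetical stand-ins for whatever your schemaless database client expects):
// Build the document to store, skipping null or empty members of the TestClass instance
var document = new Dictionary<string, object>();
if (!string.IsNullOrEmpty(t.Name)) document["Name"] = t.Name;
document["ID"] = t.ID;
if (t.DateTime != null) document["DateTime"] = t.DateTime;
if (!string.IsNullOrEmpty(t.Address)) document["Address"] = t.Address;

// Hypothetical call to whatever API actually persists the document
// database.SaveDocument(document);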
I was bored and got this in LINQPad
void Main()
{
TestClass t=new TestClass();
t.Address="address";
t.ID=132;
t.Name=string.Empty;
t.DateTime=null;
t.Dump();
var ret = t.FixMeUp();
((object)ret).Dump();
}
public static class ReClasser
{
public static dynamic FixMeUp<T>(this T fixMe)
{
var t = fixMe.GetType();
var returnClass = new ExpandoObject() as IDictionary<string, object>;
foreach(var pr in t.GetProperties())
{
var val = pr.GetValue(fixMe);
if(val is string && string.IsNullOrWhiteSpace(val.ToString()))
{
}
else if(val == null)
{
}
else
{
returnClass.Add(pr.Name, val);
}
}
return returnClass;
}
}
public class TestClass
{
public string Name { get; set; }
public int ID { get; set; }
public DateTime? DateTime { get; set; }
public string Address { get; set; }
}
Here is a 'slightly' clearer and shorter version of the accepted answer.
/// <returns>A dynamic object with only the filled properties of an object</returns>
public static object ConvertToObjectWithoutPropertiesWithNullValues<T>(this T objectToTransform)
{
var type = objectToTransform.GetType();
var returnClass = new ExpandoObject() as IDictionary<string, object>;
foreach (var propertyInfo in type.GetProperties())
{
var value = propertyInfo.GetValue(objectToTransform);
var isEmptyString = value is string && string.IsNullOrWhiteSpace(value.ToString());
if (!isEmptyString && value != null)
{
returnClass.Add(propertyInfo.Name, value);
}
}
return returnClass;
}
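As with FixMeUp above, this is an extension method, so it has to be declared in a static class; usage is then a one-liner (sketch):
// Assuming the method above lives in a public static class
var cleaned = t.ConvertToObjectWithoutPropertiesWithNullValues();
// For the sample object from the question, cleaned exposes only ID and Address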
You could take advantage of the dynamic type:
class Program
{
static void Main(string[] args)
{
List<dynamic> list = new List<dynamic>();
dynamic
t1 = new ExpandoObject(),
t2 = new ExpandoObject();
t1.Address = "address1";
t1.ID = 132;
t2.Address = "address2";
t2.ID = 133;
t2.Name = "someName";
t2.DateTime = DateTime.Now;
list.AddRange(new[] { t1, t2 });
// later in your code
list.Select((obj, index) =>
new { index, obj }).ToList().ForEach(item =>
{
Console.WriteLine("Object #{0}", item.index);
((IDictionary<string, object>)item.obj).ToList()
.ForEach(i =>
{
Console.WriteLine("Property: {0} Value: {1}",
i.Key, i.Value);
});
Console.WriteLine();
});
// or maybe generate JSON
var s = JsonSerializer.Create();
var sb=new StringBuilder();
var w=new StringWriter(sb);
var items = list.Select(item =>
{
sb.Clear();
s.Serialize(w, item);
return sb.ToString();
});
items.ToList().ForEach(json =>
{
Console.WriteLine(json);
});
}
}
Maybe interfaces will be handy:
public interface IAdressAndId
{
int ID { get; set; }
string Address { get; set; }
}
public interface INameAndDate
{
string Name { get; set; }
DateTime? DateTime { get; set; }
}
public class TestClass : IAdressAndId, INameAndDate
{
public string Name { get; set; }
public int ID { get; set; }
public DateTime? DateTime { get; set; }
public string Address { get; set; }
}
Creating object:
IAdressAndId t = new TestClass()
{
Address = "address",
ID = 132,
Name = string.Empty,
DateTime = null
};
Also, you can put your interfaces in a separate namespace and make your class declaration internal. After that, create some public factories which will create the instances of your classes, as sketched below.
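A minimal sketch of that idea (the factory and namespace names are illustrative, not from the original answer):
namespace MyApp.Contracts
{
    public interface IAdressAndId
    {
        int ID { get; set; }
        string Address { get; set; }
    }
}

namespace MyApp.Domain
{
    // The concrete class is hidden from callers; only the interface is public
    internal class TestClass : MyApp.Contracts.IAdressAndId
    {
        public int ID { get; set; }
        public string Address { get; set; }
        public string Name { get; set; }
        public System.DateTime? DateTime { get; set; }
    }

    public static class TestClassFactory
    {
        // Callers only ever see the interface view of the instance
        public static MyApp.Contracts.IAdressAndId Create(int id, string address)
        {
            return new TestClass { ID = id, Address = address };
        }
    }
}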

Records are doubled when adding referenced entities using Entity Framework

I have a problem: when I add a new SessionImages entity and then add another one (by uploading it), the records get doubled.
For example:
Adding an image: (screenshot)
Adding another one afterwards: (screenshot)
Here's my controller code:
[HttpPost]
public ActionResult SessionImages(FormCollection collection)
{
int SessionID = Convert.ToInt32(collection[0]);
Sessions SessionModel = sessionsRepo.GetSessionById(SessionID);
bool uploadSucceded = Utility.Utility.UploadImages(this, Request.Files, Server.MapPath(Path.Combine("~/Photos/Sessions", SessionModel.Name)));
for (int i = 0; i < Request.Files.Count; i++)
{
if (Request.Files[i].ContentLength == 0)
{
continue;
}
SessionImages ImageModel = new SessionImages
{
Name = Request.Files[i].FileName,
Path = Path.Combine("~/Photos/Sessions/", SessionModel.Name, "actual", Request.Files[i].FileName),
Session = SessionModel,
SessionID = SessionID
};
sessionImagesRepo.Add(SessionModel, ImageModel);
}
return RedirectToAction("SessionImages",SessionModel.ID);
}
Here's the sessionImagesRepo.Add method:
public SessionImages Add(Sessions SessionModel, SessionImages ImageModel)
{
try
{
context.Entry(ImageModel).State = System.Data.EntityState.Added;
SessionModel.PhotosCount = SessionModel.Images.Count;
context.Entry(SessionModel).State = System.Data.EntityState.Modified;
context.SaveChanges();
}
catch
{
return null;
}
return ImageModel;
}
Here are my entities:
namespace FP.Domain.Entities
{
public class Sessions
{
public int ID { get; set; }
public string Name { get; set; }
public string Description { get; set; }
public DateTime Date { get; set; }
public int PhotosCount { get; set; }
public string CoverPhoto { get; set; }
public List<SessionImages> Images { get; set; }
}
}
namespace FP.Domain.Entities
{
public class SessionImages
{
public int ImageID { get; set; }
public string Name { get; set; }
public string Path { get; set; }
public int SessionID { get; set; }
public Sessions Session { get; set; }
}
}
Here's the configuration:
namespace FP.Domain.Configurations
{
public class SessionsConfig : EntityTypeConfiguration<Sessions>
{
public SessionsConfig()
{
ToTable("Sessions");
Property(p => p.Name).IsRequired();
Property(p => p.PhotosCount).IsRequired();
Property(p => p.CoverPhoto).IsRequired();
Property(p => p.Date).HasColumnType("date");
}
}
}
namespace FP.Domain.Configurations
{
public class SessionImagesConfig : EntityTypeConfiguration<SessionImages>
{
public SessionImagesConfig()
{
ToTable("SessionImages");
HasKey(e => e.ImageID);
Property(p => p.Path).IsRequired();
HasRequired(i => i.Session)
.WithMany(s => s.Images)
.HasForeignKey(i => i.SessionID);
}
}
}
Try changing the repository method like this:
public SessionImages Add(Sessions SessionModel, SessionImages ImageModel)
{
try
{
SessionModel.Images.Add(ImageModel);
SessionModel.PhotosCount = SessionModel.Images.Count;
context.Entry(SessionModel).State = System.Data.EntityState.Modified;
context.SaveChanges();
}
catch
{
return null;
}
return ImageModel;
}
And in the action remove the line that I've commented:
SessionImages ImageModel = new SessionImages
{
Name = Request.Files[i].FileName,
Path = Path.Combine("~/Photos/Sessions/", SessionModel.Name, "actual", Request.Files[i].FileName),
//Session = SessionModel,
SessionID = SessionID
};
I'd suggest you drop these lines, if you can work around it (I can see the point of the count, though)...
context.Entry(ImageModel).State = System.Data.EntityState.Added;
SessionModel.PhotosCount = SessionModel.Images.Count;
context.Entry(SessionModel).State = System.Data.EntityState.Modified;
That's probably what's messing with it. I've seen similar problems more than once - and then I usually end up doing quite the opposite:
context.Entry(Parent).State = System.Data.EntityState.Unchanged;
Everything you're doing there is to get the Count working.
I can understand the optimization reasons, but you do have that count in the collection inherently. I know this is faster if you don't want to load etc.
I'm not sure what to suggest - but I'm suspecting that to be the 'culprit'.
Try w/o it: turn off and disregard the count while testing. See what happens. If that's it, then think about how to reorganize it a bit.
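A rough sketch of what that could look like in the Add method from the question (untested, and it assumes SessionModel is tracked by the same context with its Images collection loaded; this is not code from the original answer):
public SessionImages Add(Sessions SessionModel, SessionImages ImageModel)
{
    // Add the child through the relationship only; no explicit Added state
    SessionModel.Images.Add(ImageModel);

    // Quite the opposite of marking the parent Modified: leave it Unchanged
    context.Entry(SessionModel).State = System.Data.EntityState.Unchanged;

    context.SaveChanges();
    return ImageModel;
}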
