How to update object values (based on the curr-previous pattern)? - c#

Assume you have a CSV file with the following simplified structure
LINE1: ID,Description,Value
LINE2: 1,Product1,2
LINE3: ,,3
LINE4: ,,4
LINE5: 2,Product2,2
LINE6: ,,3
LINE7: ,,5
I am using FileHelpers to read the CSV and have hooked up one of the interfaces that allows me to access the current line after it has been read. Refer to this SO question for more background.
The issue is that with that approach I will need to write many more if statements to check all the fields that need to be copied. (I have at least 6 CSV files at the moment, all with the same 'blank' format and all having more than 20 fields that need to be copied ~ 120 if statements. Urggh.)
Note that this is not a micro-optimisation exercise, since there will be more files with this 'incomplete' format.
How can I update the previous record in an elegant way, such that I won't have to write if conditions and declarations for each field?

The current solution is to annotate the required fields using a custom attribute called CopyMe and then use reflection to copy.
[DelimitedRecord(",")]
[IgnoreFirst(1)]
public class Product
{
    // Nullable so that FileHelpers can parse the blank continuation rows.
    [CopyMe()]
    public int? ID { get; set; }

    [CopyMe()]
    public string Description { get; set; }

    [FieldConverter(ConverterKind.Decimal)]
    public decimal Val { get; set; }
}
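For reference, CopyMe itself can be a bare marker attribute; a minimal sketch (it carries no data and only tags the members to copy):

[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class CopyMeAttribute : Attribute
{
}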
with the AfterRead method looking like so...
public void AfterRead(AfterReadEventArgs<Product> e)
{
    var record = e.Record;
    if (record.ID == null) // a blank ID indicates a continuation row
    {
        var fieldsToCopy = typeof(Product).GetProperties()
            .Where(p => p.GetCustomAttributes(typeof(CopyMeAttribute), true).Any());
        foreach (var item in fieldsToCopy)
        {
            var prevValue = item.GetValue(PreviousRecord, null);
            item.SetValue(record, prevValue, null);
        }
    }
    PreviousRecord = record; // remember the latest, now complete, record for the next row
}
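Since AfterRead runs once per row, it may also be worth reflecting over the type only once and caching the result, rather than calling GetProperties for every blank row. A small sketch of that idea (PropertyInfo lives in System.Reflection):

// Computed once per type instead of once per row; assumes the Product class above.
private static readonly PropertyInfo[] FieldsToCopy =
    typeof(Product).GetProperties()
        .Where(p => p.GetCustomAttributes(typeof(CopyMeAttribute), true).Any())
        .ToArray();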

Related

JSON to datatable - How to deserialize

I have this very simple JSON string:
{
    "data": {
        "id": 33306,
        "sport": {
            "id1": "FB",
            "id2": "HB"
        }
    }
}
I can't understand how to return a DataTable from this string.
I have tried to use this code but it's not working:
DataTable dt = (DataTable)JsonConvert.DeserializeObject(json, (typeof(DataTable)));
You have to flatten all the JSON properties first, and then convert to a DataTable:
var jObj = (JObject)JObject.Parse(json)["data"];
var properties = jObj.Properties().ToList();
for (var i = 0; i < properties.Count; i++)
{
    var prop = properties[i];
    if (prop.Value.Type == JTokenType.Object)
    {
        foreach (var p in ((JObject)prop.Value).Properties())
            jObj.Add(new JProperty(prop.Name + " " + p.Name, p.Value));
        prop.Remove();
    }
}
DataTable dt = new JArray { jObj }.ToObject<DataTable>();
Output:
[
    {
        "id": 33306,
        "sport id1": "FB",
        "sport id2": "HB"
    }
]
You need to do this in two steps.
Deserialize into a .net object whose structure matches that of the JSON
Populate a DataTable with the properties of that object
Step 1 - Deserialize
We need to define some classes to receive the deserialized data. The names of the classes aren't particularly important (as long as they're meaningful to you), however the names of the properties of those classes need to match the names of the corresponding elements in the JSON.
First the outermost class, which is the shape of the JSON you want to deserialize.
public class SomeClass
{
    public DataElement Data { get; set; }
}
Now we need to define the shape of DataElement.
public class DataElement
{
    public int Id { get; set; }
    public SportElement Sport { get; set; }
}
And now we need to define the shape of SportElement.
public class SportElement
{
    public string Id1 { get; set; }
    public string Id2 { get; set; }
}
The implementation above is fairly rigid and assumes that the shape of the JSON doesn't change from one document to the next. If, however, you expect the shape to vary - for example, if the sport element could contain any number of id1, id2, id3, ... id100 elements - then you can throw away the SportElement class and use a dictionary to represent that element instead.
public class DataElement
{
    public int Id { get; set; }
    public Dictionary<string, string> Sport { get; set; }
}
Which of those two approaches to use will depend on how predictable the structure of the JSON is (and whether or not it's under your control). I find that a dictionary is a good way of coping with JSON produced by third-party applications that aren't under my control, but the resulting objects aren't as easy to work with as those where I know exactly what shape the JSON will always be and can create a strongly-typed class structure representing that shape.
Whichever approach you choose, the usage is the same:
var deserialized = JsonConvert.DeserializeObject<SomeClass>(json);
Step 2 - Populate the DataTable from the object
How to do this step will depend on what you want the DataTable to look like (which is not clear from the question). For example, you might want it to look like this (which I think is what Serge's answer would return).
id    | sport id1 | sport id2
------|-----------|----------
33306 | FB        | HB
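For that first shape, a minimal sketch of step 2, assuming the strongly-typed classes from step 1, could look like this:

var dt = new DataTable();
dt.Columns.Add("id", typeof(int));
dt.Columns.Add("sport id1", typeof(string));
dt.Columns.Add("sport id2", typeof(string));
dt.Rows.Add(deserialized.Data.Id, deserialized.Data.Sport.Id1, deserialized.Data.Sport.Id2);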
Or (if for example the sport element could contain any number of id1, id2 and so on elements) you might want it to look like this.
id    | sport
------|--------
33306 | id1: FB
33306 | id2: HB
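For that second shape, a sketch using the dictionary-based DataElement from step 1:

var dt = new DataTable();
dt.Columns.Add("id", typeof(int));
dt.Columns.Add("sport", typeof(string));
foreach (var pair in deserialized.Data.Sport)
    dt.Rows.Add(deserialized.Data.Id, pair.Key + ": " + pair.Value);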
Or you might want some different representation altogether. Sorry if that's an incomplete answer; if you update the question with what you'd expect the DataTable to look like, I can update this answer with more detail on how to go about step 2.

Looping header and details records in same file

I have re-edited this question below. I have an example file which has multiple purchase orders, identified by the second column.
Order Number,Purchase Number,DATE,Item Code,Qty,Description
1245456,98978,12/01/2019,1545-878,1,"Test"
1245456,98978,12/01/2019,1545-342,2,"Test"
1245456,98978,12/01/2019,1545-878,2,"Test"
1245456,98979,12/02/2019,1545-878,3,"Test 3"
1245456,98979,12/02/2019,1545-342,4,"Test 4"
1245456,98979,12/02/2019,1545-878,5,"Test 4"
What I want as the end result is to be able to place the above into one class, like the following.
At the minute I am using FileHelpers to parse the CSV file. This would work fine if I had a separate header file and a separate rows file, but as you can see they are combined.
var engine = new FileHelperEngine<CSVLines>();
var lines = engine.ReadFile(csvFileName);
So the class should be like the one below:
[DelimitedRecord(",")]
public class SalesOrderHeader
{
    private Guid? _guid;
    public Guid RowID
    {
        get
        {
            return _guid ?? (_guid = Guid.NewGuid()).GetValueOrDefault();
        }
    }
    public string DocReference { get; set; }
    public string CardCode { get; set; }
    public string DocDate { get; set; }
    public string ItemCode { get; set; }
    public string Description { get; set; }
    public string Qty { get; set; }
    public string Price { get; set; }
    [FieldHidden]
    public List<SalesOrderHeader> OrdersLines { get; set; }
}
What I imagine I will have to do is two loops, as you will see from my CreateSalesOrder routine: I first create the header and then add the lines in.
public void CreateSalesOrder(List<SalesOrderHeader> _salesOrders)
{
    foreach (var record in _salesOrders.GroupBy(g => g.DocReference))
    {
        // Init the Order object
        oOrder = (SAPbobsCOM.Documents)company.GetBusinessObject(SAPbobsCOM.BoObjectTypes.oOrders);
        // set properties of the Order object
        // oOrder.NumAtCard = record.Where(w=>w.RowID = record.Where()
        oOrder.CardCode = record.First().CardCode;
        oOrder.DocDueDate = DateTime.Now;
        oOrder.DocDate = Convert.ToDateTime(record.First().DocDate);
        foreach (var recordItems in _salesOrders.SelectMany(e => e.OrdersLines)
                     .Where(w => w.DocReference == record.First().DocReference))
        {
            oOrder.Lines.ItemCode = recordItems.ItemCode;
            oOrder.Lines.ItemDescription = recordItems.Description;
            oOrder.Lines.Quantity = Convert.ToDouble(recordItems.Qty);
            oOrder.Lines.Price = Convert.ToDouble(recordItems.Price);
            oOrder.Lines.Add();
            log.Debug(string.Format("Order Line added to sap Item Code={0}, Description={1}, Qty={2}",
                recordItems.ItemCode, recordItems.Description, recordItems.Qty));
        }
        int lRetCode = oOrder.Add(); // Try to add the order to the database
        if (lRetCode == 0)
        {
            string body = "Purchase Order Imported into SAP";
        }
        else
        {
            int temp_int;
            string temp_string;
            company.GetLastError(out temp_int, out temp_string);
            if (temp_int != -4006) // In case adding an order failed
            {
                log.Error(string.Format("Error adding an order into sap ErrorCode {0},{1}", temp_int, temp_string));
            }
        }
    }
}
The problem, as you will see, is twofold: first, how do I split the CSV into the two lists, and second, how do I access the header rows correctly in the strongly-typed object? As you can see I am using First(), which will not work correctly.
With FileHelpers it is important to avoid using the mapping class for anything other than describing the underlying file structure. Here I suspect you are trying to map directly to a class which is too complex.
A FileHelpers class is just a way of defining the specification of a flat file using C# syntax.
As such, the FileHelpers classes are an unusual type of C# class and you should not try to use accepted OOP principles. FileHelpers should not have properties or methods beyond the ones used by the FileHelpers library.
Think of the FileHelpers class as the 'specification' of your CSV format only. That should be its only role. (This is good practice from a maintenance perspective anyway - if the underlying CSV structure were to change, it is easier to adapt your code).
Then, if you need the records in a more 'normal' object, map the results to something better - that is, a class that encapsulates all the functionality of the Order object rather than the CSVOrder.
So, one way of handling this type of file is to parse the file twice. In the first pass you extract the header records. Something like this:
var engine1 = new FileHelperEngine<CSVHeaders>();
var headers = engine1.ReadFile(csvFileName);
In the second pass you extract the details;
var engine2 = new FileHelperEngine<CSVDetails>();
var details = engine2.ReadFile(csvFileName);
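For illustration, the header mapping class might look something like the sketch below. The field names are assumptions based on the sample file, and note that FileHelpers expects a delimited record to declare every column in the line, so CSVDetails would declare the same six columns and you would simply use the detail-level fields from it.

[DelimitedRecord(",")]
[IgnoreFirst(1)]
public class CSVHeaders
{
    public string OrderNumber;
    public string PurchaseNumber;
    public string Date;
    public string ItemCode;
    public string Qty;
    [FieldQuoted]
    public string Description;
}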
Then you combine this information into a new dedicated class, maybe with some LINQ similar to this:
var niceOrders =
    headers
        .DistinctBy(h => h.OrderNumber)
        .SelectMany(h => details.Where(d => d.OrderNumber == h.OrderNumber))
        .Select(x =>
            new NiceOrder()
            {
                OrderNumber = x.OrderNumber,
                Customer = x.Customer,
                ItemCode = x.ItemCode
                // etc.
            });
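Here NiceOrder is simply a plain class you define to carry the combined record; a sketch (the property names are assumptions):

public class NiceOrder
{
    public string OrderNumber { get; set; }
    public string Customer { get; set; }
    public string ItemCode { get; set; }
    // ... plus whatever else the rest of the application needs
}

Note that DistinctBy is built into .NET 6+; on earlier frameworks it is available from the MoreLINQ package.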

Linq: Transforming specified column's datatypes and values while preserving unspecified columns

I have a list of Order objects. Order has the properties: int Id, decimal Price, string OrderNumber, string ShipperState, DateTime TimeStamp;
I know which columns I want to transform (Price, TimeStamp) and I want to keep the other columns without needing to specify them.
This example is transforming specified columns but I still need to include the non-transformed columns.
var myList = model.Orders.Select(x => new
{
    x.Id,
    x.OrderNumber,
    // decimal to string
    Price = x.Price.ToString("C", new CultureInfo("en-US")),
    x.ShipperState,
    // DateTime to string
    TimeStamp = x.TimeStamp.ToString("MM/dd/yyyy H:mm")
}).ToList();
If I were to add a column string ShipperCity to the Order class, I would like myList to also have that property without having to go back and update the projection.
An ideal answer would not rely on external libraries, reflection and only be a line or two.
If you do not want to modify the model class as #David suggested you can write extension methods for it like this:
public static class OrderExtensions
{
    public static string GetFormattedPrice(this Order order)
        => order.Price.ToString("C", new CultureInfo("en-US"));

    public static string GetFormattedTimestamp(this Order order)
        => order.TimeStamp.ToString("MM/dd/yyyy H:mm");
}
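Usage is then just a method call wherever the formatted value is needed; a quick sketch:

var order = model.Orders.First();
string price = order.GetFormattedPrice();     // e.g. "$100.00"
string stamp = order.GetFormattedTimestamp(); // e.g. "12/01/2019 9:30"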
UPDATE #1
The effect of this alternative is that wherever you wanted to use the transformed order.Price and order.TimeStamp, you have to use order.GetFormattedPrice() and order.GetFormattedTimestamp() respectively.
The question did not specify where the data comes from or what type of application it is used in. For example, methods cannot be used in XAML binding, nor anywhere else a property is required.
Please note:
In C# (almost) everything is strongly typed, hence once a class and its properties are defined, you cannot assign a value of a different type to a property, and you cannot change the type of the property. So by default you cannot avoid a projection when you need some transformation: if you need all the properties - either the original value or the transformed value - you have to list all of them in the projection.
almost everything except dynamic
You can actually transform the type and the value of a property but only if it is defined as dynamic. For example this works below:
public class Order
{
    public int Id { get; set; }
    public string OrderNumber { get; set; }
    // Original: decimal; Converted: string;
    public dynamic Price { get; set; }
    public string ShipperState { get; set; }
    // Original: DateTime; Converted: string;
    public dynamic Timestamp { get; set; }
}

public static class OrderExtensions
{
    public static void Transform(this Order order)
    {
        if (order.Price.GetType() == typeof(decimal))
            order.Price = order.Price.ToString("C", new CultureInfo("en-US"));
        if (order.Timestamp.GetType() == typeof(DateTime))
            order.Timestamp = order.Timestamp.ToString("MM/dd/yyyy H:mm");
    }
}

class Program
{
    static void Main(string[] args)
    {
        var originalList = new List<Order>()
        {
            new Order() { Id = 1, OrderNumber = "1", Price = 100m, Timestamp = DateTime.Now },
            new Order() { Id = 2, OrderNumber = "2", Price = 200m, Timestamp = DateTime.Now },
            new Order() { Id = 3, OrderNumber = "3", Price = 300m, Timestamp = DateTime.Now }
        };
        originalList.ForEach(order => order.Transform());
    }
}
Although this example works, there are some things to know about the dynamic type:
- This example looks like a hack; maybe it can even be considered one. :)
- In this example the original Order objects are changed, not a projection/clone of them.
- dynamic properties are not allowed in Entity Framework models, as you cannot specify the SQL column type for them, even using the methods of DbModelBuilder. I did not try it in other use-cases, but it seems to be a very restricted possibility.
- For dynamic properties there is no IntelliSense, so after typing order.Price. no list of methods or properties appears.
- You have to use these properties very carefully as there is no compile-time check; any typo or other mistake will only throw an exception at run-time.
- If this option somehow fits the needs, it might be worth implementing the conversion of the string value back to the original type.
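A sketch of that reverse conversion, assuming the en-US formats used above (NumberStyles lives in System.Globalization):

public static void TransformBack(this Order order)
{
    if (order.Price.GetType() == typeof(string))
        order.Price = decimal.Parse((string)order.Price, NumberStyles.Currency, new CultureInfo("en-US"));
    if (order.Timestamp.GetType() == typeof(string))
        order.Timestamp = DateTime.ParseExact((string)order.Timestamp, "MM/dd/yyyy H:mm", new CultureInfo("en-US"));
}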
That's all the update I could add to my original answer. Hope this is an acceptable answer to your comment.

Use C# Linq Lambda to combine fields from two objects into one, preferably without anonymous objects

I have a class setup like this:
public class Summary
{
    public Geometry geometry { get; set; }
    public SummaryAttributes attributes { get; set; }
}

public class SummaryAttributes
{
    public int SERIAL_NO { get; set; }
    public string District { get; set; }
}

public class Geometry
{
    public List<List<List<double>>> paths { get; set; }
}
and I take a JSON string of records for that object and cram them in there like this:
List<Summary> oFeatures = reportObject.layers[0].features.ToObject<List<Summary>>();
My end goal is to create a CSV file, so I need one flat List of records to send to the CSV writer I have.
I can do this:
List<SummaryAttributes> oAtts = oFeatures.Select(x => x.attributes).ToList();
and I get a nice List of the attributes and send that off to CSV. Easy peasy.
What I want, though, is to also pluck a field off the Geometry object and include it in my final List going to CSV.
So the final List going to the CSV writer would contain objects with all of the fields from SummaryAttributes, plus the first and last double values from the paths field on the Geometry object (paths[0][0][first] and paths[0][0][last]).
It's hard to explain. I want to graft two extra attributes onto the original SummaryAttributes object.
I would be OK with creating a new SummaryAttributesXY class with the two extra fields if that's what it takes.
But I'm trying to avoid creating a new anonymous object and having to spell out every field in the SummaryAttributes class, as there are many more than I have listed in this sample.
Any suggestions?
You can select a new anonymous object with the required fields, but you should be completely sure that paths has at least one item at each level of the lists:
var query = oFeatures.Select(s => new
{
    s.attributes.SERIAL_NO,
    s.attributes.District,
    First = s.geometry.paths[0][0].First(), // or [0][0][0]
    Last = s.geometry.paths[0][0].Last()
}).ToList();
Got it figured out. I include the X and Y fields in the original class definition. When the JSON gets deserialized they will be null. Then I loop back and fill them in.
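That is, SummaryAttributes gains two extra string properties that simply are not present in the JSON; a sketch:

public class SummaryAttributes
{
    public int SERIAL_NO { get; set; }
    public string District { get; set; }
    // Not present in the JSON; left null by the deserializer and filled in afterwards.
    public string XY1 { get; set; }
    public string XY2 { get; set; }
}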
List<Summary> oFeatures = reportObject.layers[0].features.ToObject<List<Summary>>();
List<Summary> summary = oFeatures.Select(s =>
{
    var t = new Summary
    {
        attributes = s.attributes
    };
    t.attributes.XY1 = string.Format("{0} , {1}", s.geometry.paths[0][0].First(), s.geometry.paths[0][1].First());
    t.attributes.XY2 = string.Format("{0} , {1}", s.geometry.paths[0][0].Last(), s.geometry.paths[0][1].First());
    return t;
}).ToList();
List<SummaryAttributes> oAtts = summary.Select(x => x.attributes).ToList();

How to create an index returning the input document type?

I have a Raven database which contains a document collection. I would like to retrieve a subset of the documents in that collection - only documents fulfilling certain criteria - but each document retrieved must be returned in its entirety.
Consider the following document type:
public class MyDocument {
    public string Id { get; set; }
    public string Name { get; set; }
    public int Foo { get; set; }
    public string Bar { get; set; }
}
Let's say I would like to retrieve all documents where the Foo property is greater than a given value (unknown at compile/index creation time). Using dynamic indexes, this could be done like:
IList<MyDocument> FindMyDocuments(int minFooValue) {
    using(IDocumentSession session = _store.OpenSession()) {
        return session.Query<MyDocument>().Where(d => d.Foo > minFooValue).ToList();
    }
}
However, as I understand it, there are benefits to using predefined indexes instead of dynamic indexes, so I would like to define an index for this operation up front. What would an implementation of AbstractIndexCreationTask<MyDocument, MyDocument> look like?
The following doesn't seem to work as Raven wants the Map to select a new anonymous type:
class MyDocumentIndex : AbstractIndexCreationTask<MyDocument, MyDocument> {
    public MyDocumentIndex() {
        Map = docs => from doc in docs
                      select doc;
    }
}
And shouldn't there be a Reduce part as well?
As you probably noticed, I'm rather new to this Map/Reduce concept :-).
David,
You do it like this:
public class MyDocumentIndex : AbstractIndexCreationTask<MyDocument> {
    public MyDocumentIndex() {
        Map = docs => from doc in docs
                      select new { doc.Foo };
    }
}
And then you query it with:
session.Query<MyDocument, MyDocumentIndex>().Where(x => x.Foo > minValue).ToArray();
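One more detail: the index class has to be registered with the document store before it can be queried, typically once at application startup. A sketch, assuming the standard RavenDB client helper:

// Scans the assembly and creates every AbstractIndexCreationTask-based index it finds.
IndexCreation.CreateIndexes(typeof(MyDocumentIndex).Assembly, documentStore);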
