I have re-edited this question. I have an example file which has multiple purchase orders in it, identified by the second column.
Order Number,Purchase Number,DATE,Item Code,Qty,Description
1245456,98978,12/01/2019,1545-878,1,"Test"
1245456,98978,12/01/2019,1545-342,2,"Test"
1245456,98978,12/01/2019,1545-878,2,"Test"
1245456,98979,12/02/2019,1545-878,3,"Test 3"
1245456,98979,12/02/2019,1545-342,4,"Test 4"
1245456,98979,12/02/2019,1545-878,5,"Test 4"
What I want the end result to be is to place the above into one class, like the one further below.
At the minute I am using FileHelpers to parse the CSV file. This would work fine if I had a separate header file and a separate row file, but they are combined as you can see:
var engine = new FileHelperEngine<CSVLines>();
var lines = engine.ReadFile(csvFileName);
So the class should be like the one below:
[DelimitedRecord(",")]
public class SalesOrderHeader
{
    private Guid? _guid;
    public Guid RowID
    {
        get
        {
            return _guid ?? (_guid = Guid.NewGuid()).GetValueOrDefault();
        }
    }

    public string DocReference { get; set; }
    public string CardCode { get; set; }
    public string DocDate { get; set; }
    public string ItemCode { get; set; }
    public string Description { get; set; }
    public string Qty { get; set; }
    public string Price { get; set; }

    [FieldHidden]
    public List<SalesOrderHeader> OrdersLines { get; set; }
}
What I imagine I will have to do is two loops. As you will see from my CreateSalesOrder routine, I first create the header and then add the lines in.
public void CreateSalesOrder(List<SalesOrderHeader> _salesOrders)
{
    foreach (var record in _salesOrders.GroupBy(g => g.DocReference))
    {
        // Init the Order object
        oOrder = (SAPbobsCOM.Documents)company.GetBusinessObject(SAPbobsCOM.BoObjectTypes.oOrders);

        // set properties of the Order object
        // oOrder.NumAtCard = record.Where(w=>w.RowID = record.Where()
        oOrder.CardCode = record.First().CardCode;
        oOrder.DocDueDate = DateTime.Now;
        oOrder.DocDate = Convert.ToDateTime(record.First().DocDate);

        foreach (var recordItems in _salesOrders.SelectMany(e => e.OrdersLines).Where(w => w.DocReference == record.First().DocReference))
        {
            oOrder.Lines.ItemCode = recordItems.ItemCode;
            oOrder.Lines.ItemDescription = recordItems.Description;
            oOrder.Lines.Quantity = Convert.ToDouble(recordItems.Qty);
            oOrder.Lines.Price = Convert.ToDouble(recordItems.Price);
            oOrder.Lines.Add();
            log.Debug(string.Format("Order Line added to SAP ItemCode={0}, Description={1}, Qty={2}",
                recordItems.ItemCode, recordItems.Description, recordItems.Qty));
        }

        int lRetCode = oOrder.Add(); // Try to add the order to the database
        if (lRetCode == 0)
        {
            string body = "Purchase Order Imported into SAP";
        }
        else
        {
            // retrieve the error details for the order that failed
            int temp_int;
            string temp_string;
            company.GetLastError(out temp_int, out temp_string);
            if (temp_int != -4006) // in case adding an order failed
            {
                log.Error(string.Format("Error adding an order into SAP ErrorCode {0},{1}", temp_int, temp_string));
            }
        }
    }
}
The problem, as you will see, is twofold: first, how do I split the CSV into the two lists, and second, how do I access the header rows correctly in the strongly typed object? As you can see I am using First(), which will not work correctly.
With FileHelpers it is important to avoid using the mapping class for anything other than describing the underlying file structure. Here I suspect you are trying to map directly to a class which is too complex.
A FileHelpers class is just a way of defining the specification of a flat file using C# syntax.
As such, the FileHelpers classes are an unusual type of C# class and you should not try to apply accepted OOP principles: a FileHelpers class should not have properties or methods beyond the ones used by the FileHelpers library.
Think of the FileHelpers class as the 'specification' of your CSV format only. That should be its only role. (This is good practice from a maintenance perspective anyway - if the underlying CSV structure were to change, it is easier to adapt your code).
Then, if you need the records in a more 'normal' object, map the results to something better, that is, a class that encapsulates all the functionality of the Order object rather than the CSVOrder.
So, one way of handling this type of file is to parse the file twice. In the first pass you extract the header records. Something like this:
var engine1 = new FileHelperEngine<CSVHeaders>();
var headers = engine1.ReadFile(csvFileName);
In the second pass you extract the details:
var engine2 = new FileHelperEngine<CSVDetails>();
var details = engine2.ReadFile(csvFileName);
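Neither record class is shown above; a minimal sketch of the header class might look like this (field names are assumed from the sample file, and since FileHelpers matches delimited fields by position, each class declares all six columns even if a pass only uses a few of them):
[DelimitedRecord(",")]
[IgnoreFirst(1)] // skip the column-name row
public class CSVHeaders
{
    public string OrderNumber;
    public string PurchaseNumber;
    public string Date;
    public string ItemCode;
    public string Qty;
    public string Description;
}
CSVDetails would declare the same six columns; the split into two classes is conceptual - one pass gets de-duplicated into headers, the other keeps every line.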
Then you combine this information into a new dedicated class, maybe with some LINQ similar to this:
var niceOrders =
    headers
        .DistinctBy(h => h.OrderNumber) // DistinctBy comes from MoreLINQ (or .NET 6+)
        .SelectMany(h => details
            .Where(d => d.OrderNumber == h.OrderNumber)
            .Select(d =>
                new NiceOrder()
                {
                    OrderNumber = h.OrderNumber,
                    Customer = h.Customer,
                    ItemCode = d.ItemCode
                    // etc.
                }));
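NiceOrder here is whatever dedicated class suits the rest of the application; a minimal sketch matching the projection above (property names assumed) would be:
public class NiceOrder
{
    public string OrderNumber { get; set; }
    public string Customer { get; set; }
    public string ItemCode { get; set; }
    // etc. - date, quantity, price, whatever the order needs
}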
I'm fetching data from a website that returns me an object in a string like this:
{
    index: 1,
    commentNumber: 20,
    feedComments: {
        3465665: {
            text: "I do not agree",
            likeRatio: 0
        },
        6169801: {
            text: "Hello",
            likeRatio: 12
        },
        7206201: {
            text: "Great job!",
            likeRatio: 5
        }
    }
}
I want to work with this as an object. That's pretty easy to do; I'll just do this:
string objectString = GetData(); // Artificial GetData() method
dynamic data = JObject.Parse(objectString);
And now I can easily get all the properties I want from this object using dynamic.
The problem is pretty obvious now: I want to get properties whose names start with a number (the data structure I fetch is just designed that way). But C# property/field names cannot begin with a number.
int commentNumber = data.commentNumber; // Works fine
string commentText = data.feedComments.3465665.text; // Obviously won't compile
Is there any way to do this?
Note that I want to work with the data I fetch as if it were an object. I know I could get the comment text right from the string that the GetData() method returns using some regex or something, but that's something I want to avoid.
You should really be parsing the JSON into concrete C# classes. Dynamic is slow and vulnerable to runtime errors that are hard to detect.
The comments will go into a Dictionary. For example:
public class Root
{
    public int Index { get; set; }
    public int CommentNumber { get; set; }
    public Dictionary<long, FeedComment> FeedComments { get; set; }
}

public class FeedComment
{
    public string Text { get; set; }
    public int LikeRatio { get; set; }
}
And deserialise like this:
var result = JsonConvert.DeserializeObject<Root>(objectString);
Now you can access the comments very easily:
var commentText = result.FeedComments[3465665].Text;
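And since FeedComments is an ordinary dictionary, enumerating every comment is just as easy:
foreach (var pair in result.FeedComments)
{
    Console.WriteLine("{0}: {1} ({2} likes)", pair.Key, pair.Value.Text, pair.Value.LikeRatio);
}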
I have a class setup like this:
public class Summary
{
    public Geometry geometry { get; set; }
    public SummaryAttributes attributes { get; set; }
}

public class SummaryAttributes
{
    public int SERIAL_NO { get; set; }
    public string District { get; set; }
}

public class Geometry
{
    public List<List<List<double>>> paths { get; set; }
}
and I take a JSON string of records for that object and cram them in there like this:
List<Summary> oFeatures = reportObject.layers[0].features.ToObject<List<Summary>>();
My end goal is to create a CSV file, so I need one flat List of records to send to the CSV writer I have.
I can do this:
List<SummaryAttributes> oAtts = oFeatures.Select(x => x.attributes).ToList();
and I get a nice List of the attributes and send that off to CSV. Easy peasy.
What I want, though, is to also pluck a field off of the Geometry object and include that in my final List to go to CSV.
So the final List going to the CSV writer would contain objects with all of the fields from SummaryAttributes, plus the first and last double values from the paths field on the Geometry object (paths[0][0][first] and paths[0][0][last]).
It's hard to explain. I want to graft two extra attributes onto the original SummaryAttributes object.
I would be ok with creating a new SummaryAttributesXY class with the two extra fields if that's what it takes.
But I'm trying to avoid creating a new anonymous object and having to delimit every field in the SummaryAttributes class, as there are many more than I have listed in this sample.
Any suggestions?
You can select new anonymous object with required fields, but you should be completely sure that paths has at least one item in each level of lists:
var query = oFeatures.Select(s => new {
    s.attributes.SERIAL_NO,
    s.attributes.District,
    First = s.geometry.paths[0][0].First(), // or [0][0][0]
    Last = s.geometry.paths[0][0].Last()
}).ToList();
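If you can't guarantee that, a hedged variant of the same query (same field names as above) can filter out records with missing geometry first:
var query = oFeatures
    .Where(s => s.geometry != null
             && s.geometry.paths != null
             && s.geometry.paths.Count > 0
             && s.geometry.paths[0].Count > 0
             && s.geometry.paths[0][0].Count > 0)
    .Select(s => new {
        s.attributes.SERIAL_NO,
        s.attributes.District,
        First = s.geometry.paths[0][0].First(),
        Last = s.geometry.paths[0][0].Last()
    }).ToList();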
Got it figured out. I include the X and Y fields in the original class definition. When the JSON gets deserialized they will be null. Then I loop back and fill them in.
List<Summary> oFeatures = reportObject.layers[0].features.ToObject<List<Summary>>();
List<Summary> summary = oFeatures.Select(s =>
{
    var t = new Summary
    {
        attributes = s.attributes
    };
    t.attributes.XY1 = string.Format("{0} , {1}", s.geometry.paths[0][0].First(), s.geometry.paths[0][1].First());
    t.attributes.XY2 = string.Format("{0} , {1}", s.geometry.paths[0][0].Last(), s.geometry.paths[0][1].First());
    return t;
}).ToList();
List<SummaryAttributes> oAtts = summary.Select(x => x.attributes).ToList();
I'm working on a web service that needs to accept a collection with three values of different types. The values are:
SkuNumber (integer),
FirstName (string),
LastName (string)
I want the web service to accept a list of 100 instances of these values but am not sure how to go about it. Do I use a multidimensional list or array? Maybe a tuple? Or can I just create a simple class structure and accept a list of that class?
This is all simple enough in a normal application, I'm just not sure how the app calling the web service would pass the data with any of the given options.
Can someone give me some pointers?
If a shared assembly is not feasible, you can always go with good ol' XML. It may not be the optimal solution and I'm sure plenty of users here will balk at the idea, but it is easy to support and relatively quick to implement, so it really depends on your individual situation and the skill level of the developers responsible for supporting the application.
The benefit to using XML here, is that the calling application can be written in almost any language on almost any platform, as long as it adheres to the XML structure.
The XML string should be easy enough to generate in the calling application, but the biggest downside here is that if you have a ton of data, the processing may take longer than desired -- on both ends of the service.
Here is a working sample if you want to give it a try:
public class Whatever
{
    public int SkuNumber { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
[WebMethod]
public void HelloWorld(string xmlString)
{
    // make all the node names + attribute names lowercase, to account for
    // erroneous xml formatting -- leave the values alone though
    xmlString = Regex.Replace(xmlString, @"<[^<>]+>", m => m.Value.ToLower(), RegexOptions.Multiline | RegexOptions.Singleline);
    var xmlDoc = LoadXmlDocument(xmlString);
    var listOfStuff = new List<Whatever>();
    var rootNode = xmlDoc.DocumentElement;
    foreach (XmlNode xmlNode in rootNode)
    {
        var whatever = new Whatever
        {
            FirstName = xmlNode["first_name"].InnerText,
            LastName = xmlNode["last_name"].InnerText,
            SkuNumber = Convert.ToInt32(xmlNode["sku_number"].InnerText)
        };
        listOfStuff.Add(whatever);
    }
}
public static XmlDocument LoadXmlDocument(string xmlString)
{
    // some extra stuff to account for URLEncoded strings, if necessary
    if (xmlString.IndexOf("%3e%") > -1)
        xmlString = HttpUtility.UrlDecode(xmlString);

    // swap double quotes for single quotes and escape stray ampersands
    // so LoadXml doesn't choke on them
    xmlString = xmlString.Replace((char)34, '\'').Replace("&", "&amp;").Replace("\\", "");

    var xmlDocument = new XmlDocument();
    xmlDocument.PreserveWhitespace = false;
    xmlDocument.LoadXml(xmlString);
    return xmlDocument;
}
Your XML would look like this:
<stuff_to_track>
    <whatever>
        <sku_number>1</sku_number>
        <first_name>jimi</first_name>
        <last_name>hendrix</last_name>
    </whatever>
    <whatever>
        <sku_number>2</sku_number>
        <first_name>miles</first_name>
        <last_name>davis</last_name>
    </whatever>
    <whatever>
        <sku_number>3</sku_number>
        <first_name>david</first_name>
        <last_name>sanborn</last_name>
    </whatever>
    <whatever>
        <sku_number>4</sku_number>
        <first_name>john</first_name>
        <last_name>coltrane</last_name>
    </whatever>
</stuff_to_track>
I also recommend validating the incoming XML, for both schema and data.
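A minimal sketch of that validation, assuming you have an XSD describing the format above (the file name stuff_to_track.xsd is hypothetical; this needs System.Xml, System.Xml.Schema and System.IO):
public static bool IsValidXml(string xmlString)
{
    var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
    settings.Schemas.Add(null, "stuff_to_track.xsd"); // hypothetical schema file

    try
    {
        using (var reader = XmlReader.Create(new StringReader(xmlString), settings))
        {
            while (reader.Read()) { } // read to the end; invalid content throws
        }
        return true;
    }
    catch (XmlSchemaValidationException)
    {
        return false;
    }
}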
Create a class and accept a list of that class. Be sure to mark it as [Serializable].
[Serializable]
public class Whatever
{
    public int SkuNumber { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
Best practice would be to define the class in an assembly that can be accessed by both the service and the project that calls it.
The trouble with a tuple or a multi-dimensional array is that the data you send doesn't have an inherent identity: you could stick any old thing in there. If you have a class, you are indicating that you are sending an Order or an Inquiry or a Coupon or whatever it is you are tracking. There's a level of meaning that goes along with it.
Just send what you want:
public class Whatever
{
    public int SkuNumber { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

[WebMethod]
public void TakeList(List<Whatever> theList)
{
    foreach (var w in theList)
    {
        // process each Whatever here
    }
}
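On the calling side, the data is passed the same way: build the list and hand it to the generated proxy. A sketch, assuming a proxy generated from the service's WSDL (the proxy class name TrackingServiceSoapClient is hypothetical):
var theList = new List<Whatever>
{
    new Whatever { SkuNumber = 1, FirstName = "jimi", LastName = "hendrix" },
    new Whatever { SkuNumber = 2, FirstName = "miles", LastName = "davis" }
};

var client = new TrackingServiceSoapClient(); // hypothetical generated proxy
client.TakeList(theList.ToArray());           // generated proxies often expose the list as an array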
I have a Raven database which contains a document collection. I would like to retrieve a subset of the documents in that collection. Only documents fulfilling certain criteria would be retrieved. However, for each document retrieved, the entire document must be retrieved.
Consider the following document type:
public class MyDocument {
    public string Id { get; set; }
    public string Name { get; set; }
    public int Foo { get; set; }
    public string Bar { get; set; }
}
Let's say I would like to retrieve all documents where the Foo property is greater than a given value (unknown at compile/index creation time). Using dynamic indexes, this could be done like:
IList<MyDocument> FindMyDocuments(int minFooValue) {
    using (IDocumentSession session = _store.OpenSession()) {
        return session.Query<MyDocument>().Where(d => d.Foo > minFooValue).ToList();
    }
}
However, as I understand it, there are benefits to using predefined indexes instead of dynamic indexes. So I would like to define an index for this operation up front. What would an implementation of AbstractIndexCreationTask<MyDocument, MyDocument> look like?
The following doesn't seem to work as Raven wants the Map to select a new anonymous type:
class MyDocumentIndex : AbstractIndexCreationTask<MyDocument, MyDocument> {
    public MyDocumentIndex() {
        Map = docs => from doc in docs
                      select doc;
    }
}
And shouldn't there be a Reduce part as well?
As you probably noticed, I'm rather new to this Map/Reduce concept :-).
David,
You do it like this:
public class MyDocumentIndex : AbstractIndexCreationTask<MyDocument> {
    public MyDocumentIndex() {
        Map = docs => from doc in docs
                      select new { doc.Foo };
    }
}
And then you query it with:
session.Query<MyDocument, MyDocumentIndex>().Where(x => x.Foo > minValue).ToArray();
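One detail the snippet skips: the index class must also be created on the server once, typically at application startup, before the typed query will find it. The standard RavenDB client call for that is:
// scans the assembly for index classes and creates any that are missing
IndexCreation.CreateIndexes(typeof(MyDocumentIndex).Assembly, documentStore);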
Assume you have a CSV file with the following simplified structure
LINE1: ID,Description,Value
LINE2: 1,Product1,2
LINE3: ,,3
LINE4: ,,4
LINE5: 2,Product2,2
LINE6: ,,3
LINE7: ,,5
I am using FileHelpers to read the CSV and have hooked up one of the interfaces that allows me to access the current line after it has been read. Refer to this SO question for more background.
The issue is that, using that approach, I will need to write many more if statements to check all the fields that need to be copied. (I have at least 6 CSV files at the moment with the same 'blank' format, all having more than 20 fields that need to be copied; that's roughly 120 if statements. Urggh.)
Now this is not a micro-optimisation exercise, since there will be more files with this 'incomplete' format.
How can I update the previous record in an elegant way such that I won't have to write if conditions and declarations for each field?
The current solution is to annotate the required fields using a custom attribute called CopyMe and then use reflection to copy.
[DelimitedRecord(",")]
[IgnoreFirst(1)]
public class Product
{
    // nullable so a blank CSV field parses as null rather than failing conversion
    [CopyMe()] public int? ID { get; set; }
    [CopyMe()] public string Description { get; set; }
    [FieldConverter(ConverterKind.Decimal)]
    public decimal Val { get; set; }
}
with the AfterRead method looking like so...
public void AfterRead(AfterReadEventArgs<Product> e)
{
    var record = e.Record;
    if (record.ID == null && PreviousRecord != null) // a blank ID marks a continuation row
    {
        var ttype = typeof(Product);
        // the members are auto-properties, so reflect over properties, not fields
        var propertiesToCopy = ttype.GetProperties().Where(property =>
            property.GetCustomAttributes(typeof(CopyMeAttribute), true).Any());
        foreach (var item in propertiesToCopy)
        {
            var prevValue = item.GetValue(PreviousRecord, null);
            item.SetValue(record, prevValue, null);
        }
    }
    else
    {
        // remember the last complete row so the blank rows that follow can copy from it
        PreviousRecord = record;
    }
}