Map element position in data file to class property - C#

I need to read/write files following a format provided by a third-party specification. The specification itself is pretty simple: it gives the position and the size of each piece of data saved in the file.
For example:
Position Size Description
--------------------------------------------------
0001     10   Device serial number
0011     02   Hour
0013     02   Minute
0015     02   Second
0017     02   Day
0019     02   Month
0021     02   Year
The list is very long; it has about 400 elements. But many of them can be combined: for example, hour, minute, second, day, month and year can be combined into a single DateTime object.
I've split the elements into about 4 categories and created separate classes for holding the data. So, instead of one big structure representing the data, I have some smaller classes. I've also created different classes for reading and writing the data.
The problem is: how do I map the positions in the file to the objects' properties, so that I don't need to repeat the values in the reading/writing classes? I could use custom attributes and retrieve them via reflection, but since the code will be running on devices with little memory and a slow processor, it would be nice to find another way.
My current read code looks like this:
public void Read() {
    DataFile dataFile = new DataFile();
    // the arguments are: position, size
    dataFile.SerialNumber = ReadLong(1, 10);
    //...
}
Any ideas on this one?

Custom attributes were going to be my suggestion, but I see you've already thought about that. Aside from that, my only other suggestion would be to store the mapping in, say, an XML file.
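If reflection is too heavy for the target devices, one lightweight alternative is a static mapping table: the position and size of each field live in exactly one place and drive both the reader and the writer. Here is a minimal sketch of that idea; the FieldDef class, the layout, and the DataFile fields are all made up for illustration.

using System;

// Hypothetical data class holding one category of fields.
public class DataFile {
    public long SerialNumber;
}

// One entry per element of the specification: position and size are
// declared once and shared by the reading and writing code.
public sealed class FieldDef {
    public int Position;                   // 1-based, as in the spec
    public int Size;
    public Action<DataFile, string> Set;   // raw text -> property
    public Func<DataFile, string> Get;     // property -> raw text
}

public static class DataFileLayout {
    public static readonly FieldDef[] Fields = {
        new FieldDef {
            Position = 1, Size = 10,
            Set = (d, s) => d.SerialNumber = long.Parse(s),
            Get = d => d.SerialNumber.ToString().PadLeft(10, '0')
        },
        // ... the remaining (or combined) entries of the specification
    };

    public static DataFile Read(string record) {
        var dataFile = new DataFile();
        foreach (var f in Fields)
            f.Set(dataFile, record.Substring(f.Position - 1, f.Size));
        return dataFile;
    }
}

Compared with attributes plus reflection, this costs a couple of delegates per field but does no reflection at read/write time, and the same Fields array can drive the writer via Get, so positions are never repeated.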

Related

OutputBuffer not working for large C# list

I'm currently using SSIS to improve a project. I need to insert single documents into a MongoDB collection of type Time Series. At some point I want to retrieve rows of data after they go through a C# transformation script. I did this:
foreach (BsonDocument bson in listBson)
{
    OutputBuffer.AddRow();
    OutputBuffer.DatalineX = (string)bson.GetValue("data");
}
But this piece of code, which works great with small files, does not work with a 6 million line file: there are no rows in the output. The tasks that follow validate, but react as if they had received nothing as input.
Where could the problem come from?
Your OutputBuffer has DatalineX defined as a string, either DT_STR or DT_WSTR, with a specific length. When you exceed that length, things go bad. Normal strings have a maximum length of 8000 or 4000 characters respectively.
Neither of which is useful for your use case of at least 6M characters. To handle that, you'll need to change your data type to DT_TEXT/DT_NTEXT. Those data types do not require a length, as they are "max" types. There are things to be aware of when using the LOB types:
- Performance can suck, depending on whether SSIS can keep the data in memory (good) or has to write intermediate values to disk (bad)
- You can't readily manipulate them in a data flow
- You'll use a different syntax in a Script Component to work with them
e.g.
// convert the string to bytes first; DT_NTEXT expects Unicode bytes
byte[] bytes = System.Text.Encoding.Unicode.GetBytes((string)bson.GetValue("data"));
Output0Buffer.DatalineX.AddBlobData(bytes);
There's a longer example, of questionable accuracy with regard to encoding the bytes (which you get to solve), at https://stackoverflow.com/a/74902194/181965

Decode EMV TLV Data

I am working on a POS application that supports EMV cards. I am able to read card data from a Verifone MX card reader in TLV, but I am facing issues decoding the TLV data into readable data.
I am able to split the data into TLV tags and their values. The resulting values are in hex instead of decoded text.
Example:
This is a sample of TLV data (I got this sample TLV data here):
6F2F840E325041592E5359532E4444463031A51DBF0C1A61184F07A0000000031010500A564953412044454249548701019000
When I check this TLV in TLVUtil, I get the data in certain tags in readable format (like tag 50 here).
The closest I could get in my application is this:
Tag   Value
50    56495341204445424954
4F    A0000000031010
61    4F07A0000000031010500A56495341204445424954870101
6F    840E325041592E5359532E4444463031A51DBF0C1A61184F07A0000000031010500A56495341204445424954870101
84    325041592E5359532E4444463031
87    1
90
A5    BF0C1A61184F07A0000000031010500A56495341204445424954870101
BF0C  61184F07A0000000031010500A56495341204445424954870101
I would like to know if there is any way to identify which tags need to be converted from hex to string, or if there is any TLV parser and decoder available in .NET that can replicate the TLVUtil tool.
A complete list of EMV tags is available in the EMVCo 4.3 specification, Book 3 -
you can download it from here - https://www.emvco.com/download_agreement.aspx?id=654
How data is represented differs from field to field. Check 'Annex A - Data Elements Dictionary'.
Details on encoding are in section 4.3.
Read both sections and your problem is solved.
There are only a few tags that need to be converted to string. Generally, the tags that are shown on the POS screen are personalized as the hex equivalent of a readable string:
5F20 : Cardholder Name
50 : Application Label
5F2D : Language Preference
You must know which tags can be converted.
Programmatically, you can identify things like whether:
- the tag is one byte (5A - PAN) or two bytes (5F20 - Cardholder Name), and
- the length is one byte or two bytes, and
- the tag is primitive or constructed. You can read more Here.
If you know the list of tags, you can get something useful Here; it defines the format of the tag you are looking for.
Since the formats are well defined, you can hard-code them, as in the sketch below.
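Putting those rules together, here is a minimal BER-TLV parser sketch in C#. The TlvParser class is made up, and it is illustrative rather than production code: only one- and two-byte long-form lengths are handled, there is no error checking, and the list of text tags is just the three mentioned above.

using System;
using System.Collections.Generic;
using System.Text;

public static class TlvParser {
    // Tags whose values are readable text per EMV Book 3, Annex A (not exhaustive).
    static readonly HashSet<string> TextTags =
        new HashSet<string> { "50", "5F20", "5F2D" };

    public static void Parse(byte[] data, int start, int end) {
        int i = start;
        while (i < end) {
            // Tag: one byte, plus continuation bytes if the low 5 bits are all 1.
            int tagStart = i;
            bool constructed = (data[i] & 0x20) != 0;   // bit 6 => constructed
            if ((data[i] & 0x1F) == 0x1F)
                while ((data[++i] & 0x80) != 0) { }     // multi-byte tag
            i++;
            string tag = BitConverter.ToString(data, tagStart, i - tagStart).Replace("-", "");

            // Length: short form (< 0x80), or long forms 0x81 / 0x82.
            int len = data[i++];
            if (len == 0x81) { len = data[i++]; }
            else if (len == 0x82) { len = (data[i] << 8) | data[i + 1]; i += 2; }

            // Value: decode known text tags to ASCII, leave everything else in hex.
            string hex = len == 0 ? "" : BitConverter.ToString(data, i, len).Replace("-", "");
            string display = TextTags.Contains(tag)
                ? Encoding.ASCII.GetString(data, i, len)   // e.g. tag 50 -> "VISA DEBIT"
                : hex;
            Console.WriteLine(tag + " " + display);

            if (constructed)
                Parse(data, i, i + len);    // recurse into constructed templates
            i += len;
        }
    }
}

Run against the decoded bytes of the sample above (Parse(bytes, 0, bytes.Length)), this prints tag 50 as "VISA DEBIT" and recurses through the constructed templates 6F, A5, BF0C and 61, leaving the remaining values in hex.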
Hope it helps.
That data beginning with 6F is File Control Information (FCI), returned by an EMV card after a SELECT command. There is an example in this video, also decoded and explained:
https://youtu.be/iWg8EBhsfjY
It's easy, check it out.

Data Container Quick Access

What is the best way to store data (no serialization - just a Stream + BinaryWriter/BinaryReader) in this scenario, for quick and easy access to the files?
The DataContainer holds 10 files, each 1 MB.
If I need to write to/read from file 5, it should read only that part of the 10 MB container and return 1 MB, using a unique name/ID, possibly stored in a header. Problems occur when you update a file in the middle of the container, because the offsets in the stream change (if the updated object is larger or smaller than the existing one).
How do I handle this without having to rewrite the entire container on every update?
I want to write this myself instead of using pre-existing libraries.
Any ideas?
I think you can store the sizes of the files instead of their offsets, so you can calculate the offsets. For example, consider this container:
file  size
1     10
2     15
3     20
4     25
The offset of file 4 is calculated simply as 10 + 15 + 20.
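A minimal sketch of that idea, assuming the header stores one size per file in order (the ContainerHeader class and the HeaderSize constant are made up):

using System.Collections.Generic;
using System.Linq;

public class ContainerHeader {
    // Size of each file, in the order they appear in the container.
    public List<long> Sizes = new List<long>();

    // Data starts after whatever space you reserve for the header itself.
    public const long HeaderSize = 4096;

    // Offset of a file = header + sum of the sizes of all files before it.
    // fileIndex is 0-based: for the 4th file, pass 3.
    public long OffsetOf(int fileIndex) {
        return HeaderSize + Sizes.Take(fileIndex).Sum();
    }
}

Note that this doesn't remove the underlying problem: if file 2 grows, every file after it still has to move. It only means the header stays correct by construction, because offsets are derived rather than stored.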

Best Way to Load a File, Manipulate the Data, and Write a New File

I have an issue where I need to load a fixed-length file, process some of the fields, generate a few others, and finally output a new file. The difficult part is that the file contains part numbers, and some of the products are superseded by other products (which can themselves be superseded). What I need to do is follow the supersession trail to get the information needed to replace some of the fields in the row I am looking at. So how can I best handle about 200,000 lines from a file, given the need to move up and down within the given products? I thought about using a collection to hold the data, or a DataSet, but I just don't think that is the right way. Here is an example of what I am trying to do:
Before
Part Number  List Price  Description      Superseding Part Number
0913982                                   3852943
3852943      0006710     CARRIER,BEARING
After
Part Number  List Price  Description      Superseding Part Number
0913982      0006710     CARRIER,BEARING  3852943
3852943      0006710     CARRIER,BEARING
As usual, any help would be appreciated. Thanks.
Wade
1. Create a structure with the given fields.
2. Read the file and put the structures in a collection. You can use the part number as the key of a hashtable to provide the fastest lookup.
3. Scan the collection and fix the data, as in the sketch below.
200,000 objects built from the given lines will fit easily in memory. For example, if your structure is 50 bytes, you will need only 10 MB of memory, which is nothing for a modern PC.
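A minimal sketch of steps 2 and 3, with a made-up Part class and a Dictionary keyed by part number:

using System.Collections.Generic;

// Hypothetical record for one line of the fixed-length file.
public class Part {
    public string PartNumber;
    public string ListPrice;
    public string Description;
    public string SupersedingPartNumber;   // empty if the part is current
}

public static class SupersessionFixer {
    // Step 3: walk each supersession trail to its end and copy the
    // price/description of the final part back onto the starting part.
    public static void Fix(Dictionary<string, Part> parts) {
        foreach (var part in parts.Values) {
            var current = part;
            // Follow the trail (a cycle guard is omitted for brevity).
            while (!string.IsNullOrEmpty(current.SupersedingPartNumber) &&
                   parts.TryGetValue(current.SupersedingPartNumber, out var next))
                current = next;
            part.ListPrice = current.ListPrice;
            part.Description = current.Description;
        }
    }
}

With the example above, 0913982 follows its trail to 3852943 and picks up the 0006710 price and CARRIER,BEARING description, while keeping 3852943 in its superseding column.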

Text File Mapping

I have text files that always come in the same text format (I do not have an XSD for the text file).
I want to map the data from it to some class.
Is there some standard way to do so, other than writing string parsers or some complicated regexes?
I really do not want to go with text parsers, because several of us are working on this and it would probably take each of us time to understand what the others are doing.
Example
Thanks.
If you have a special format, you need your own parser for sure.
If the format is a standard one like XML, YAML, JSON, CSV, etc., a parsing library will always be available in your language.
UPDATE
From the sample you provide, it seems the format is more like an INI file, but with custom entries. Maybe you could extend NINI.
Solution:
Change the format of that file to a standard format, like a tab-delimited or comma-separated CSV file.
Then use one of the many libraries out there to read such files, or import them into a database and use an ORM like Entity Framework to read them.
Assuming you cannot change the incoming file format to something more machine-readable, you will probably need to write your own custom parser. The best way to do it would be to create classes to represent and store all of the different kinds of data, using the appropriate data type for each field (custom enums, DateTime, Version, etc.).
Try to compartmentalize the code. For example, take this line:
272 298 9.663 18.665 -90.000 48 0 13 2 10 5 20009 1 2 1 257 "C4207" 0 0 1000 0 0
This could be a single class or struct. Its constructor could accept the above string as a parameter, and each value could be parsed into a different member. That same class could have a Save() or ToString() method that converts all the values back to a string if needed.
The parent class would then simply contain an array of the above structure, sized by how many entries are in the file.
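A minimal sketch of that record class; the field names are invented, since the meaning of each column isn't given, and the quoted column happens to contain no spaces, so a plain split works for this line:

using System;
using System.Globalization;
using System.Linq;

// One line like:
// 272 298 9.663 18.665 -90.000 48 0 13 2 10 5 20009 1 2 1 257 "C4207" 0 0 1000 0 0
public class Record {
    public int Id;                 // hypothetical names: only the first
    public int Count;              // five columns are mapped here
    public double X, Y, Angle;
    public string[] Rest;          // unmapped trailing columns, kept verbatim

    public Record(string line) {
        var p = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        Id    = int.Parse(p[0]);
        Count = int.Parse(p[1]);
        X     = double.Parse(p[2], CultureInfo.InvariantCulture);
        Y     = double.Parse(p[3], CultureInfo.InvariantCulture);
        Angle = double.Parse(p[4], CultureInfo.InvariantCulture);
        Rest  = p.Skip(5).ToArray();
    }

    // Converts the values back into a line, per the Save()/ToString() idea above.
    public override string ToString() {
        var head = new[] {
            Id.ToString(), Count.ToString(),
            X.ToString("0.000", CultureInfo.InvariantCulture),
            Y.ToString("0.000", CultureInfo.InvariantCulture),
            Angle.ToString("0.000", CultureInfo.InvariantCulture)
        };
        return string.Join(" ", head.Concat(Rest));
    }
}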
