Decode EMV TLV Data - c#

I am working on a POS application that supports EMV cards. I can read card data from a Verifone MX card reader in TLV format, but I am having trouble decoding the TLV data into readable text.
I am able to split the data into TLV tags and their values, but the resulting values are in hex instead of decoded text.
Example:
This is a sample TLV string (I got this sample TLV data here):
6F2F840E325041592E5359532E4444463031A51DBF0C1A61184F07A0000000031010500A564953412044454249548701019000
When I check this TLV in TLVUtil, I get the data of certain tags in readable format (like tag 50 here).
The closest I could get in my application is this:
Tag Value
50 56495341204445424954
4F A0000000031010
61 4F07A0000000031010500A56495341204445424954870101
6F 840E325041592E5359532E4444463031A51DBF0C1A61184F07A0000000031010500A56495341204445424954870101
84 325041592E5359532E4444463031
87 1
90
A5 BF0C1A61184F07A0000000031010500A56495341204445424954870101
BF0C 61184F07A0000000031010500A56495341204445424954870101
I would like to know if there is any way to identify which tags need to be converted from hex to string, or if there is a TLV parser and decoder available in .NET that can replicate the TLVUtil tool.

The complete list of EMV tags is available in the EMVCo 4.3 specification, Book 3.
You can download it from here: https://www.emvco.com/download_agreement.aspx?id=654
How data is represented differs from field to field; check 'Annex A - Data Elements Dictionary'.
Details on encoding are given in section 4.3.
Read both sections and your problem is solved.

There are only a few tags that need to be converted to a string. Generally they are the tags shown on the POS screen, personalized as the hex equivalent of a readable string:
5F20 : Cardholder Name
50 : Application Label
5F2D : Language Preference
You must know which tags can be converted.
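For those tags the value bytes are plain ASCII text, so the conversion is a straightforward hex-to-ASCII decode. A minimal sketch (the class and method names are mine):

```csharp
using System;
using System.Text;

static class TagDecoder
{
    // Convert a hex string such as "56495341204445424954" to its ASCII text.
    // Only safe for tags whose format is "ans" per Annex A,
    // e.g. 50 (Application Label) or 5F20 (Cardholder Name).
    public static string HexToAscii(string hex)
    {
        var bytes = new byte[hex.Length / 2];
        for (int i = 0; i < bytes.Length; i++)
            bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
        return Encoding.ASCII.GetString(bytes);
    }
}
```

For the example above, `TagDecoder.HexToAscii("56495341204445424954")` returns `"VISA DEBIT"`.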

Programmatically, you can identify the structure like this:
a tag is one byte (e.g. 5A, the PAN) or two bytes (e.g. 5F20, the cardholder name), AND
the length is one byte or two bytes, AND
a tag is either primitive or constructed. You can read more Here.
If you know the list of tags, you can get something useful Here; it defines the format of each tag you are looking for.
You can hard-code the formats, as they are well defined.
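A minimal BER-TLV parser following those rules might look like the sketch below (class and method names are mine; it handles multi-byte tags, long-form lengths, and recursion into constructed tags):

```csharp
using System;
using System.Collections.Generic;

static class TlvParser
{
    // Parse BER-TLV data into (tag, value-hex) pairs, recursing into
    // constructed tags (bit 6 of the first tag byte set).
    public static void Parse(byte[] data, int start, int end,
                             List<KeyValuePair<string, string>> result)
    {
        int i = start;
        while (i < end)
        {
            // Tag: if the low 5 bits of the first byte are all 1,
            // subsequent bytes belong to the tag (high bit = more follow).
            int tagStart = i;
            bool constructed = (data[i] & 0x20) != 0;
            if ((data[i] & 0x1F) == 0x1F)
                while ((data[++i] & 0x80) != 0) { }
            i++;
            string tag = BitConverter.ToString(data, tagStart, i - tagStart).Replace("-", "");

            // Length: short form (< 0x80) or long form (0x81/0x82 prefix).
            int len = data[i++];
            if (len == 0x81) len = data[i++];
            else if (len == 0x82) { len = (data[i] << 8) | data[i + 1]; i += 2; }

            result.Add(new KeyValuePair<string, string>(
                tag, BitConverter.ToString(data, i, len).Replace("-", "")));
            if (constructed) Parse(data, i, i + len, result);
            i += len;
        }
    }
}
```

Run on the sample data above, this produces the same tag list as in the question, with the values under 6F, A5, 61 and BF0C expanded recursively.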
Hope it helps.

That data beginning with 6F is the File Control Information (FCI) returned by an EMV card in response to a SELECT command. It is decoded and explained, with an example, in this video:
https://youtu.be/iWg8EBhsfjY
It's easy; check it out.


How to print exponential number "m³" using ESC/POS printer with C#?

I'm trying to print a ticket, and need to print out m³ as unit of measurement using a serial printer. And the following is what I have tried so far:
if (!printer.IsOpen)
printer.Open();
printer.WriteLine(string.Format("{0} m{1}", "2.34", Convert.ToChar(0xB3)));
printer.Close();
When I debug and view the value using the text visualizer, the text is correct: "2.34 m³". But when it comes to printing, the text changes to "2.34 m?", where the expected output is "2.34 m³".
I've been trying to figure this out for days. Please help. Thanks.
It seems your printer does not support the '³' character natively.
You therefore need the printer manufacturer's technical manual. In it, identify the control character sequence for superscript mode and put it in front of a normal character '3'; afterwards, switch superscript mode off again. These control codes are printer specific, so you must get them from the printer manufacturer.
Assuming "Superscript on" is ESC 0x4e and "Superscript off" is ESC 0x4f, your code would look like this:
printer.WriteLine(string.Format("{0} m{1}3{2}", "2.34", "\x1b\x4e", "\x1b\x4f"));
What font are you using? I think the font you're using doesn't support that character, which is why it put a ?. During debug it shows the right character because visual studio is using a font that does support that character. Try choosing a different font and see if that helps.
I don't know C# well enough to comment on the code in your question, but it's absolutely possible to print the ³ character in ESC/POS by sending "\x1B\x74\x02\xFC".
Here is a picture of a receipt from an Epson TM-T20:
I printed this by using a PHP library which understands how to convert UTF-8 into the available code pages of ESC/POS printers. A similar library exists for python, and you would be well advised to use a C# equivalent if it exists!
<?php
require __DIR__ . '/vendor/autoload.php';
use Mike42\Escpos\Printer;
use Mike42\Escpos\PrintConnectors\FilePrintConnector;
use Mike42\Escpos\CapabilityProfile;
$connector = new FilePrintConnector("php://stdout");
$profile = CapabilityProfile::load("default");
$printer = new Printer($connector, $profile);
$printer -> text("2.43 m³\n");
$printer -> cut();
$printer -> close();
This corresponds to the following hexdump.
$ php superscript-demo.php | hexdump -C
00000000 1b 40 32 2e 34 33 20 6d 1b 74 02 fc 0a 1d 56 41 |.@2.43 m.t....VA|
00000010 03 |.|
00000011
All of the commands here are:
ESC @ - set formatting and character encoding settings back to their defaults.
ESC t 2 - select code page 2.
LF - line break
GS V 65 3 - a cut command
The magic here is ESC t 2. The code page numbered 2 on Epson printers is legacy code page 850. Other vendors may differ, but the manual for your printer also shows CP850 at the same position.
Your default code page (CP437) does not contain the character you want, while in code page 850, ³ is represented by 0xFC. Once you change code pages, the change remains active until you either reset the printer or issue ESC @.
To save time finding special characters individually, you can select a code page which contains everything you plan to use, and then lean on your programming language standard library to encode strings with that code page.
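Sketching that approach in C# (the BuildTicket helper is my own; on .NET Core/.NET 5+ the legacy code pages require the System.Text.Encoding.CodePages package):

```csharp
using System;
using System.IO;
using System.Text;

static class EscPosDemo
{
    // Build the ESC/POS byte sequence for "2.34 m³" using CP850,
    // following the ESC @ / ESC t 2 commands described above.
    public static byte[] BuildTicket()
    {
        // Register the legacy code page provider (no-op on full .NET Framework,
        // needed on .NET Core with the System.Text.Encoding.CodePages package).
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
        Encoding cp850 = Encoding.GetEncoding(850);

        using var ms = new MemoryStream(); // stand-in for the printer stream
        ms.Write(new byte[] { 0x1B, 0x40 }, 0, 2);       // ESC @ : reset formatting/encoding
        ms.Write(new byte[] { 0x1B, 0x74, 0x02 }, 0, 3); // ESC t 2 : code page 850 (Epson numbering)
        byte[] text = cp850.GetBytes("2.34 m³\n");       // ³ encodes as 0xFC in CP850
        ms.Write(text, 0, text.Length);
        return ms.ToArray(); // send these bytes to the printer port
    }
}
```

The key point is that the string is encoded with the same code page the printer was switched to, so the standard library does the character lookup for you.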

USSD command translation

I need help decoding this received response.
at
OK
+CUSD: 0,"ar#?$ #9#d? ?# ???(d??)##1pD?"?T?Hc#
?& ?#D??? ?#??5 41 IA ?R",17
OK
+CUSD: 0,"ar?hb? ?' 10?# ? ?hb#?J##?#?? #f#??#?#S#d$#",17
I tried this when the dcs value was 72 on another network provider,
but I don't understand this value of 17.
How do I decode it?
These are the results:
AT+CSCS="UCS2"
OK
at+cusd=1,"002a003100350030002a0032002a00330032003300390031002a00360039003100370037002a00310023",15
+CUSD: 0,"00610072003f00680062003f0020003f00270020002000310030003f00400020003f0020003f006800620040003f004a00400040003f0040003f003f0020004000660040003f003f0040003f004000530040006400240040",17
AT+CSMP?
+CSMP: 17,167,0,0
OK
By the way, when I set AT+CSCS="UTF-8" it reports an error, but
"UTF-8" is reported back by the command AT+CSCS=?.
The format of the response is according to 27.007:
+CUSD=[<n>[,<str>[,<dcs>]]]
Thus the third parameter is <dcs>. Its format is just deferred:
<dcs>: 3GPP TS 23.038 [25] Cell Broadcast Data Coding Scheme in integer format
(default 0)
In chapter "5 CBS Data Coding Scheme" of 23.038 it states: "These codings may also be used for USSD."
For 17, binary 0001 0001:
bit 7..4 Coding Group Bits = 0001
bit 3..0 = 0001 --> UCS2; message preceded by language indication
And it notes that
An MS not supporting UCS2 coding will present the two character language identifier followed by improperly interpreted user data.
which is exactly the case in your output (e.g. ar meaning arabic followed by garbage).
For 72, binary 0100 1000:
bit 7..4 Coding Group Bits = 01xx
bit 5 = 0 --> uncompressed,
bit 4 = 0 --> no class meaning
bit 3 & 2 = 1 & 0 --> UCS2 (16bit)
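That bit breakdown can be mirrored in code; a rough sketch covering only the two coding groups discussed here (the class and method names are mine):

```csharp
using System;

static class DcsDecoder
{
    // Classify a USSD <dcs> byte per 3GPP TS 23.038 chapter 5.
    // Only the coding groups discussed above are handled.
    public static string Describe(int dcs)
    {
        int group = (dcs >> 4) & 0x0F;
        if (group == 0x01 && (dcs & 0x0F) == 0x01)
            return "UCS2; message preceded by language indication";
        if ((group & 0x0C) == 0x04) // coding group 01xx: general data coding
        {
            int charset = (dcs >> 2) & 0x03; // bits 3..2 select the character set
            if (charset == 2) return "UCS2 (16 bit)";
        }
        return "other";
    }
}
```

`Describe(17)` yields the language-indication UCS2 case from the question; `Describe(72)` yields plain UCS2.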
The "not supporting" part above might just mean that you are using a limited character set encoding (PCCP437). In any case, unless your modem does not support UTF-8, you really should use that and not PCCP437. Or you might use UCS2. If your modem lacks both of those character sets, you can try HEX (a guess on my part from what I saw while researching this answer; maybe you need to set the <dcs> parameter in AT+CSMP for this to work?).
Notice that after selecting UCS2 every string must be encoded that way, including switching to another character set, see this answer for an example.
Use the following functions to decode "UCS2" response data (the Hex2ByteArray helper was referenced but not shown; a minimal version is included here for completeness):
public static String HexStr2UnicodeStr(String strHex)
{
    byte[] ba = Hex2ByteArray(strHex);
    return HexBytes2UnicodeStr(ba);
}
public static String HexBytes2UnicodeStr(byte[] ba)
{
    // UCS2 over AT commands uses big-endian 16-bit code units
    var strMessage = Encoding.BigEndianUnicode.GetString(ba, 0, ba.Length);
    return strMessage;
}
public static byte[] Hex2ByteArray(String strHex)
{
    var ba = new byte[strHex.Length / 2];
    for (int i = 0; i < ba.Length; i++)
        ba[i] = Convert.ToByte(strHex.Substring(i * 2, 2), 16);
    return ba;
}
for example:
String str1 = SmsEngine.HexStr2UnicodeStr("002a003100350030002a0032002a00330032003300390031002a00360039003100370037002a00310023");
// str1 = "*150*2*32391*69177*1#"
Please also check UnicodeStr2HexStr()
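UnicodeStr2HexStr() is not shown in the answer; a sketch of what the encoding direction might look like, mirroring HexStr2UnicodeStr above:

```csharp
using System;
using System.Text;

static class SmsEngine
{
    // Encode a string as big-endian UCS2 hex, suitable for AT+CUSD
    // payloads after AT+CSCS="UCS2" has been selected.
    public static string UnicodeStr2HexStr(string str)
    {
        byte[] ba = Encoding.BigEndianUnicode.GetBytes(str);
        var sb = new StringBuilder(ba.Length * 2);
        foreach (byte b in ba)
            sb.Append(b.ToString("X2")); // two uppercase hex digits per byte
        return sb.ToString();
    }
}
```

For example, `SmsEngine.UnicodeStr2HexStr("*150*2*32391*69177*1#")` produces the same payload sent in the at+cusd command above.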

Issues modifying file attachment streams with Outlook .OFT files

I'm attempting to programmatically replace an embedded image within an OFT file (An Outlook message template), which is in Compound File Binary Format (because using anything human readable would make my life too easy).
To work with this file, I'm using OpenMCDF.
Since embedded images are basically file attachments, I can get the stream for the image like so:
static string FOOTER_IMG = "__substg1.0_37010102"; //Stream ID for embedded JPEG footer image
static string ATTACHMENT2 = "__attach_version1.0_#00000001"; //Storage ID for attached footer image
// ...
CFStream imgStream2 = file.RootStorage.GetStorage(ATTACHMENT2).GetStream(FOOTER_IMG);
I can then update that stream with the bytes from my desired image like so:
byte[] img2 = File.ReadAllBytes(footerimgFile); // New file
imgStream2.SetData(img2);
However, when I load the .OFT file in Outlook, the image no longer loads and I get a red X saying the image could not be loaded. I spent hours analyzing every bit of that OFT file, and the only thing that changed between the original template and the new template is that one stream that I replaced.
Here's where things get weird:
I noticed I could replace the bytes with the same exact bytes I had before and save it, so my saving mechanism is working. I thought maybe the OFT template stores some sort of hash of the image which has to match up. So I modified a few random bytes, and the image still loads (sometimes with some funky colors). Eventually, I realized it only breaks if the new image contains fewer bytes than the original image. I can replace the image with a larger image, and that works! I can also just pad a smaller image with trailing zeros at the end of the stream, and it still works.
This led me to come up with a true hackerific masterpiece:
if (img2.Length < 5585) img2 = img2.Concat(new byte[5585 - img2.Length]).ToArray();
Basically, if img2 is too small, I pad on enough bytes to make it the same size as the original image (5585 bytes to be exact). So this works. But.. yea.
My Question:
Does the Microsoft OFT file format store the byte count for attachments in some other stream or some other CDF container? If this was a standard property of CDF, you'd think OpenMCDF would update this count. This leads me to believe this is a property of the OFT file format, which OpenMCDF would of course know nothing about.
Why would writing a smaller stream corrupt the file, while writing a larger stream works?
Update:
From what I've read so far, the __properties_version1.0 stream contains a list of pointers (offsets?) to mark where various other streams are. I'm guessing something in here needs to be updated. Currently, I have these streams in the attachment container:
From what I can tell, __properties_version1.0 hardly changes at all between the first attachment (a 36,463 byte file) and the second attachment (a 5,585 byte file). The __properties_version1.0 for the second attachment is:
There's only a set of 8 bytes that change between those two attachments. In attachment 1 we have:
6F 8E 00 00 03 00 2D 00
In attachment 2 (pictured above) we have:
D1 15 00 00 03 00 6F 08
Are those offsets? Doesn't seem to be a range, or the numbers would go up. Those numbers are also way too big to be file sizes. Plus, it seems redundant to store file sizes in here anyway. So, I'm once again at a loss as to why changing the size of the 0x37010102 stream causes the image to no longer load.
Another thing that makes zero sense. I can change the size of the first attachment with either larger or smaller files, and nothing breaks. However, there's absolutely no difference between any stream in those two containers except the data in the 0x37010102 stream. Why does this approach work in one attachment and not the other?
Update 2:
I have noticed the two differences in the __properties_version1.0 stream between the two attachments do correspond to the file sizes:
6F 8E 00 00 03 00 2D 00 // Attachment 1
D1 15 00 00 03 00 6F 08 // Attachment 2
6F 8E seems to be a little-endian representation of the file size: 0x8E6F is 36463 in decimal, the number of bytes in the first attachment, and 0x15D1 is 5585, the size of the second attachment. So this stream definitely stores file sizes. Now to see whether fixing those bytes uncorrupts the file.
Update 3:
So, changing those bytes fixes a previously corrupted file, so that's the key! Now just to find a way to do this programmatically.
Are you working with embedded HTML images (which are just regular image attachments) or embedded RTF images (which OLE storage)?
Do you simply truncate a particular stream without adjusting any other properties?
Well, it's times like this I feel like an uber nerd. Here's the code that fixes the problem. Note that the byte offsets in propBytes might differ if you have other properties in the attachment.
// Fix the attachment size in the property stream
var propStream = file.RootStorage.GetStorage(ATTACHMENT2).GetStream("__properties_version1.0");
var propBytes = propStream.GetData();
// The size is stored as a 32-bit little-endian value (e.g. 6F 8E 00 00);
// write all four bytes so attachments larger than 64 KB also work
propBytes[0xb0] = (byte)(img2.Length & 0xFF);
propBytes[0xb1] = (byte)((img2.Length >> 8) & 0xFF);
propBytes[0xb2] = (byte)((img2.Length >> 16) & 0xFF);
propBytes[0xb3] = (byte)((img2.Length >> 24) & 0xFF);
propStream.SetData(propBytes);
I do like this solution better than padding extra zeros, though.
I think the real solution would be to use a third-party library that handles the .MSG format, but I could not find any that are free (we have zero budget for this) and don't require installing Outlook or Exchange on the server (which we can't do).

Text File Mapping

I have text files that always arrive in the same format (I do not have an XSD for them).
I want to map the data from them to a class.
Is there some standard way to do this, short of writing string parsers or complicated regexes?
I really do not want to go with hand-written parsers, because several people are working on this and it would take each of us time to understand what the others are doing.
Example
Thanks.
If you have a special format, you will certainly need your own parser.
If the format is a standard one like XML, YAML, JSON or CSV, a parsing library will always be available in your language.
UPDATE
From the sample you provided, the format looks like an INI file with custom entries. Maybe you could extend NINI.
Solution:
Change the format of that file to a standard one, like a tab-delimited or comma-separated CSV file.
Then use one of the many libraries out there to read such files, or import them into a database and use an ORM like Entity Framework to read them.
Assuming you cannot change the incoming file format to something more machine-readable, you will probably need to write your own custom parser. The best way would be to create classes to represent and store the different kinds of data, using an appropriate type for each field (custom enums, DateTime, Version, etc.).
Try to compartmentalize the code. For example, take these lines here:
272 298 9.663 18.665 -90.000 48 0 13 2 10 5 20009 1 2 1 257 "C4207" 0 0 1000 0 0
This could be a single class or struct. Its constructor could accept the above string as a parameter and parse each value into a different member. The same class could have a Save() or ToString() method that converts the values back to a string if needed.
Then the parent class would simply contain an array of the above structure, based on how many entries are in the file.
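As a sketch of that idea (the class name and field meanings are invented for illustration, since the real format is not specified):

```csharp
using System;
using System.Globalization;

// Hypothetical record class for one whitespace-separated data line such as:
// 272 298 9.663 18.665 -90.000 48 0 13 2 10 5 20009 1 2 1 257 "C4207" 0 0 1000 0 0
// The field meanings (Id, X, Label) are guesses for the example only.
class DataRow
{
    public int Id { get; private set; }
    public double X { get; private set; }
    public string Label { get; private set; }

    private readonly string[] fields;

    public DataRow(string line)
    {
        fields = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        Id = int.Parse(fields[0]);
        X = double.Parse(fields[2], CultureInfo.InvariantCulture);
        Label = fields[16].Trim('"'); // the quoted field
    }

    // Rebuild the original line when saving.
    public override string ToString() => string.Join(" ", fields);
}
```

Each teammate then only needs to understand the class's properties, not the raw line layout.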

Map element position in data file to class property

I need to read/write files, following a format provided by a third party specification. The specification itself is pretty simple: it says the position and the size of the data that will be saved in the file.
For example:
Position Size Description
--------------------------------------------------
0001 10 Device serial number
0011 02 Hour
0013 02 Minute
0015 02 Second
0017 02 Day
0019 02 Month
0021 02 Year
The list is very long, it has about 400 elements. But lots of them can be combined. For example, hour, minute, second, day, month and year can be combined in a single DateTime object.
I've split the elements into about 4 categories, and created separated classes for holding the data. So, instead of a big structure representing the data, I have some smaller classes. I've also created different classes for reading and writing the data.
The problem is: how to map the positions in the file to the objects properties, so that I don't need to repeat the values in the reading/writing class? I could use some custom attributes and retrieve them via reflection. But since the code will be running on devices with small memory and processor, it would be nice to find another way.
My current read code looks like this:
public void Read() {
DataFile dataFile = new DataFile();
// the arguments are: position, size
dataFile.SerialNumber = ReadLong(1, 10);
//...
}
Any ideas on this one?
Custom attributes was going to be my suggestion, but I see you've already thought about that. Aside from that, my only other suggestion would be to store the mapping in, say, an XML file.
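If attributes and reflection are too heavy for the device, another option is a static table of (position, size, setter) entries, so the mapping lives in one place and the reader just walks it. A sketch with invented field names (only the first three elements of the specification are shown):

```csharp
using System;

class DataFile
{
    public long SerialNumber { get; set; }
    public int Hour { get; set; }
    public int Minute { get; set; }
}

static class FileMap
{
    // One entry per element of the specification: 1-based position, size,
    // and a delegate that stores the parsed value on the target object.
    static readonly (int Pos, int Size, Action<DataFile, string> Set)[] Map =
    {
        (1, 10, (d, s) => d.SerialNumber = long.Parse(s)),
        (11, 2, (d, s) => d.Hour = int.Parse(s)),
        (13, 2, (d, s) => d.Minute = int.Parse(s)),
        // ... one line per remaining element
    };

    public static DataFile Read(string record)
    {
        var dataFile = new DataFile();
        foreach (var (pos, size, set) in Map)
            set(dataFile, record.Substring(pos - 1, size)); // spec positions are 1-based
        return dataFile;
    }
}
```

Writing works the same way in reverse with a getter delegate per entry, so the positions are never repeated outside the table.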
