I need help decoding this received response.
at
OK
+CUSD: 0,"ar#?$ #9#d? ?# ???(d??)##1pD?"?T?Hc#
?& ?#D??? ?#??5 41 IA ?R",17
OK
+CUSD: 0,"ar?hb? ?' 10?# ? ?hb#?J##?#?? #f#??#?#S#d$#",17
I tried this when the dcs value was 72 on another network provider, but I don't understand this value of 17.
How do I decode it?
Here are the results after issuing:
AT+CSCS="UCS2"
OK
at+cusd=1,"002a003100350030002a0032002a00330032003300390031002a00360039003100370037002a00310023",15
+CUSD: 0,"00610072003f00680062003f0020003f00270020002000310030003f00400020003f0020003f006800620040003f004a00400040003f0040003f003f0020004000660040003f003f0040003f004000530040006400240040",17
AT+CSMP?
+CSMP: 17,167,0,0
OK
By the way, when I set AT+CSCS="UTF-8" it reports an error, yet "UTF-8" is listed in the response to AT+CSCS=?.
The format of the response is according to 27.007:
+CUSD=[<n>[,<str>[,<dcs>]]]
Thus the third parameter is <dcs>. Its format is just deferred:
<dcs>: 3GPP TS 23.038 [25] Cell Broadcast Data Coding Scheme in integer format
(default 0)
In chapter "5 CBS Data Coding Scheme" in 23.038 it states: "These codings may also be used for USSD."
For 17, binary 0001 0001:
bit 7..4 Coding Group Bits = 0001
bit 3..0 = 0001 --> UCS2; message preceded by language indication
And it notes that
An MS not supporting UCS2 coding will present the two character language identifier followed by improperly interpreted user data.
which is exactly the case in your output (e.g. "ar", meaning Arabic, followed by garbage).
For 72, binary 0100 1000:
bit 7..4 Coding Group Bits = 01xx
bit 5 = 0 --> uncompressed,
bit 4 = 0 --> no class meaning
bit 3 & 2 = 1 & 0 --> UCS2 (16bit)
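As an illustration of that bit layout, a small sketch (mine, covering only the two coding groups seen here):

// Classify a USSD <dcs> byte per 3GPP TS 23.038 chapter 5 (sketch, two groups only)
static string DescribeDcs(byte dcs)
{
    int group = dcs >> 4;                     // bits 7..4: coding group
    if (group == 0x1 && (dcs & 0x0F) == 0x1)
        return "UCS2; message preceded by language indication";   // dcs = 17
    if ((group & 0xC) == 0x4)                 // 01xx: general data coding
    {
        bool compressed = (dcs & 0x20) != 0;  // bit 5
        int charset = (dcs >> 2) & 0x3;       // bits 3..2: 00 = GSM 7 bit, 01 = 8 bit, 10 = UCS2
        return (compressed ? "compressed, " : "uncompressed, ")
             + (charset == 2 ? "UCS2 (16 bit)" : "another character set");  // dcs = 72
    }
    return "unhandled coding group";
}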
The "not supporting" part above might just be that you are using a limited character set encoding (PCCP437). In any case, unless your modem does not support UTF-8 you really should use that and not this PCCP437. Or you might use USC2. If your modem lacks both of those characters, you can try HEX (guessing on my part from what I saw when researching this answer, maybe you need to set the <dcs> parameter in AT+CSMP for this to work?).
Notice that after selecting UCS2, every string must be encoded that way, including the command for switching to another character set; see this answer for an example.
Use the following functions to decode "UCS2" response data (they need using System.Text for Encoding):

public static String HexStr2UnicodeStr(String strHex)
{
    // Convert the hex string to raw bytes, then decode them as UTF-16BE
    byte[] ba = Hex2ByteArray(strHex);
    return HexBytes2UnicodeStr(ba);
}

public static String HexBytes2UnicodeStr(byte[] ba)
{
    // UCS2 in AT command responses is big-endian 16-bit code units
    var strMessage = Encoding.BigEndianUnicode.GetString(ba, 0, ba.Length);
    return strMessage;
}
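Hex2ByteArray is referenced but not shown; a minimal implementation might look like this (my sketch, assuming plain hex pairs with no separators):

public static byte[] Hex2ByteArray(String strHex)
{
    // Every two hex digits form one byte
    var ba = new byte[strHex.Length / 2];
    for (int i = 0; i < ba.Length; i++)
        ba[i] = Convert.ToByte(strHex.Substring(i * 2, 2), 16);
    return ba;
}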
for example:
String str1 = SmsEngine.HexStr2UnicodeStr("002a003100350030002a0032002a00330032003300390031002a00360039003100370037002a00310023");
// str1 = "*150*2*32391*69177*1#"
Please also check UnicodeStr2HexStr()
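For the opposite direction, here is a guess at what UnicodeStr2HexStr() does (a sketch, not the original implementation, assuming using System.Text), useful for building payloads like the AT+CUSD string above:

public static String UnicodeStr2HexStr(String str)
{
    // Encode as big-endian UTF-16 and print each byte as two hex digits
    byte[] ba = Encoding.BigEndianUnicode.GetBytes(str);
    var sb = new StringBuilder(ba.Length * 2);
    foreach (byte b in ba)
        sb.Append(b.ToString("x2"));
    return sb.ToString();
}
// UnicodeStr2HexStr("*150*2*32391*69177*1#") gives the hex string sent in AT+CUSD above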
I have a program for parsing GS1 barcodes (with a Zebra scanner), which worked just fine, or at least I thought it was OK...
Until I came across one box with 2 GS1 barcodes: one "linear" and one Data Matrix (UDI). For the linear one it worked just fine; I successfully got out the GTIN and serial. But the Data Matrix is different. For some reason its content is a bit longer than the linear code; it has a production date and something else at the end.
This is the Linear code: (01)00380652555852(17)260221(21)25146965079(30)1
This is data matrix: (01)00380652555852(17)260221(21)2514696507911210222240SA60AT225
I have problems with parsing out the serial number - 25146965079.
A serial number in GS1 has a length of 1-20 characters. This one has 11 characters, but how can I make the parser stop after them? How can I know that the serial ends there?
I tried transforming each character to UDI, but it seems that there is no special separating character or anything, so I honestly don't know what to do. Does anyone have any idea?
Here is the code, if anyone wants to try anything: https://prnt.sc/1x2sw8l
Those codes/products came right from the manufacturer, so there shouldn't be anything wrong with the code, I guess...
If you verify the barcode with a scanner that is designed to interpret a GS1 structure, you will see that the generated barcode is in fact incorrect.
You are missing a GS after the serial number; these separators MUST terminate a variable-length field when it is not the last one. This is specified in the GS1 General Specifications, section 7.8.5.2.
Without this separator you can't know where the serial ends - or rather, a machine interpreting the code can't know.
Tell the manufacturer that they need to study the GS1 specs.
Edit: the "correct" version would be:
(01)00380652555852(17)260221(21)25146965079<GS>(11)210222(240)SA60AT225
The parentheses are only human-readable notation and are not in the code itself; <GS> stands for an actual group separator character (ASCII 29).
Since you have two variable-length identifiers, (21) and (240), you need a GS no matter what you do. The only alternative would be to pad the serial number to a fixed length; then you could do without the separator.
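To make the separator's role concrete, here is a small parsing sketch (my own illustration with a hard-coded subset of AIs, not a complete GS1 parser), using ASCII 29 as the GS:

using System;
using System.Collections.Generic;

class Gs1Demo
{
    // Fixed-length AIs used in this example: AI -> payload length
    static readonly Dictionary<string, int> FixedLength = new Dictionary<string, int>
    {
        ["01"] = 14,  // GTIN
        ["17"] = 6,   // expiration date
        ["11"] = 6,   // production date
    };

    static void Main()
    {
        const char GS = (char)29;
        // The corrected code: the variable-length (21) is terminated by a GS
        string code = "0100380652555852172602212125146965079" + GS + "11210222240SA60AT225";
        int i = 0;
        while (i < code.Length)
        {
            // Try a 2-digit AI first, then 3 digits (a real parser needs the full AI table)
            string ai = code.Substring(i, 2);
            if (!FixedLength.ContainsKey(ai) && ai != "21")
                ai = code.Substring(i, 3);  // e.g. "240"
            i += ai.Length;

            string value;
            if (FixedLength.TryGetValue(ai, out int len))
            {
                value = code.Substring(i, len);      // fixed length: just take it
                i += len;
            }
            else
            {
                int end = code.IndexOf(GS, i);       // variable length: read up to GS or end
                value = end < 0 ? code.Substring(i) : code.Substring(i, end - i);
                i += value.Length + (end < 0 ? 0 : 1);
            }
            Console.WriteLine("(" + ai + ") " + value);
        }
    }
}

Run against the corrected string this prints the five fields; without the GS it would have no way to stop after 25146965079.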
According to the GS1 documentation (page 156 and onwards), all the fields are correct:
(01)00380652555852 --> GTIN
(17)260221 --> Expiration date
(21)25146965079 --> Serial Number
(11)210222 --> Production Date
(240)SA60AT225 --> Additional Product Identification
I tried scanning the image but the result was the same as yours.
So the problem is that the separators are not there. That is a problem for you, because there is no way to know where the serial number ends without the separator.
I am sorry, my English is not good.
The reason for this problem is that group separators are unreadable characters. For example, if you focus on a text box and press the Caps Lock or Shift key, nothing appears in the text box; the same happens with the GS.
To solve this, declare:
Public l As Integer

and put the following code in the KeyUp event:

If TextBox1.TextLength = l Then
    ' Length unchanged: the key produced no visible character (e.g. the GS),
    ' so send Enter to terminate the field
    My.Computer.Keyboard.SendKeys("{ENTER}")
End If
l = TextBox1.TextLength
This detects key presses that add no visible character, such as the group separator, and replaces them with an Enter so the fields get separated.
Store the raw input in the KeyPress event and then check each character for being a letter or digit:
if (e.KeyChar != 13)                 // ignore the terminating Enter key
{
    int asci = Convert.ToInt32(e.KeyChar);
    if (asci > 31 && asci < 128)     // printable ASCII: letters and digits
    {
        rawbcode += e.KeyChar;
    }
    else if (asci == 29)
    {
        rawbcode += "<GS>";          // GS1 separator (ASCII 29)
    }
}
This is from the Twitter doc: https://developer.twitter.com/en/docs/basics/counting-characters.html
"Twitter counts the length of a Tweet using the Normalization Form C (NFC) version of the text ... Twitter also counts the number of codepoints in the text rather than UTF-8 bytes."
It works for Western languages. But when I apply FormC normalization to the following, for example:
(I posted an example in Korean, but stackoverflow considers it spam and doesn't let me post it)
I get the value of 160. On Twitter's Web client, this is the maximum available message - adding even one character goes over the limit.
Applying FormD to the above gets a value over 300.
Since Twitter's limit is either 140 or 280, I really don't understand how that message's character count is determined by Twitter.
So - how in the world can I figure out what the actual message length is for non-Western languages for a tweet?
The code to normalize, in C#:
private static int GetCodepointLength(string inp)
{
    // Note: LengthInTextElements counts text elements (grapheme clusters),
    // not Unicode code points, which may explain the mismatch
    var info = new StringInfo(inp.Normalize(NormalizationForm.FormC));
    return info.LengthInTextElements;
}
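For comparison, counting actual Unicode code points after NFC normalization (rather than text elements) would be a sketch like this; note that it still ignores the weighted character ranges Twitter's current rules add on top:

private static int GetTrueCodepointLength(string inp)
{
    string nfc = inp.Normalize(NormalizationForm.FormC);
    int count = 0;
    for (int i = 0; i < nfc.Length; i++)
    {
        count++;
        // A surrogate pair is two UTF-16 code units but only one code point
        if (char.IsHighSurrogate(nfc[i]) && i + 1 < nfc.Length && char.IsLowSurrogate(nfc[i + 1]))
            i++;
    }
    return count;
}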
I am working on a POS application that supports EMV cards. I am able to read card data from a Verifone MX card reader in TLV, but I am facing issues in decoding the TLV data to readable data.
I am able to split the data into TLV tags and their values. The resulting value is in hex instead of decoded text.
Example:
This is a sample TLV data (I got this sample TLV data here):
6F2F840E325041592E5359532E4444463031A51DBF0C1A61184F07A0000000031010500A564953412044454249548701019000
When I check this TLV in TLVUtil, I get the data in certain tags in readable format (like tag 50 here).
The Closest I could get in my application is this:
Tag Value
50 56495341204445424954
4F A0000000031010
61 4F07A0000000031010500A56495341204445424954870101
6F 840E325041592E5359532E4444463031A51DBF0C1A61184F07A0000000031010500A56495341204445424954870101
84 325041592E5359532E4444463031
87 1
90
A5 BF0C1A61184F07A0000000031010500A56495341204445424954870101
BF0C 61184F07A0000000031010500A56495341204445424954870101
I would like to know if there is any way to identify which tags need to be converted from hex to string, or if there is any TLV parser and decoder available in .Net that can replicate the TLVUtil tool.
A complete list of EMV tags is available in the EMVCo 4.3 specification, Book 3 -
you can download it from here - https://www.emvco.com/download_agreement.aspx?id=654
How data is represented differs from field to field. Check 'Annex A - Data Elements Dictionary'.
Details on encoding are given in section 4.3.
Read both sections and your problem is solved.
There are only a few tags that need to be converted to string. Generally these are tags whose values are shown on the POS screen and are personalized as the hex equivalent of a readable string:
5F20 : Cardholder Name
50 : Application Label
5F2D : Language Preference
You must know which tags can be converted.
As it seems to me, programmatically you can identify things like whether:
a tag is one byte (5A, the PAN) or two bytes (5F20, the cardholder name), AND
the length field is one byte or two bytes, AND
the tag is primitive or constructed. More you can read Here.
And if you know the list, you can get something useful Here; it defines the format of the tag that you are looking for.
Here you can hard-code the format, as it is well defined.
Hope it helps.
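As an illustration of those rules, here is a minimal BER-TLV parsing sketch (my own, not an existing .Net library; it assumes well-formed EMV data with at most two-byte tags and no indefinite lengths):

using System;
using System.Collections.Generic;

public static class TlvSketch
{
    public static void Parse(byte[] data, int offset, int end, Dictionary<string, byte[]> result)
    {
        while (offset < end)
        {
            int tagStart = offset;
            // Tag: if the low 5 bits of the first tag byte are all 1, a second tag byte follows
            if ((data[offset] & 0x1F) == 0x1F) offset++;
            offset++;
            string tag = BitConverter.ToString(data, tagStart, offset - tagStart).Replace("-", "");

            // Length: 0x81 / 0x82 mean the next 1 / 2 bytes hold the actual length
            int len = data[offset++];
            if (len > 0x80)
            {
                int numBytes = len & 0x7F;
                len = 0;
                for (int i = 0; i < numBytes; i++) len = (len << 8) | data[offset++];
            }

            byte[] value = new byte[len];
            Array.Copy(data, offset, value, 0, len);
            result[tag] = value;

            // Constructed tag (bit 6 of the first tag byte set): recurse into its value
            if ((data[tagStart] & 0x20) == 0x20)
                Parse(data, offset, offset + len, result);

            offset += len;
        }
    }
}

Feeding it the sample FCI above and decoding tag 50 with Encoding.ASCII.GetString() gives "VISA DEBIT", which matches what TLVUtil shows.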
That data beginning with 6F is a File Control Information (FCI) template returned by an EMV card in response to a SELECT command. There is an example in this video, also decoded and explained:
https://youtu.be/iWg8EBhsfjY
It's easy, check it out.
So I am attempting to write a compare function in C which can take a UTF-8 encoded Unicode string and use the Windows CompareStringEx() function, and I am expecting it to work just like .NET's CultureInfo.CompareInfo.Compare().
Now the function I have written in C works some of the time, but not in all cases, and I'm trying to figure out why. Here is a case that fails (passes in C#, not in C):
CultureInfo cultureInfo = new CultureInfo("en-US");
CompareOptions compareOptions = CompareOptions.IgnoreCase | CompareOptions.IgnoreKanaType | CompareOptions.IgnoreWidth;
string stringA = "คนอ้วน ๆ";
string stringB = "はじめまして";
//Result is -1 which is expected
int result = cultureInfo.CompareInfo.Compare(stringA, stringB);
And here is what I have written in C. Keep in mind this is supposed to take a UTF-8 encoded string and use the Windows CompareStringEx() function, so conversion is necessary.
// Compare flags for the string comparison
#define COMPARE_STRING_FLAGS (NORM_IGNORECASE | NORM_IGNOREKANATYPE | NORM_IGNOREWIDTH)

int CompareStrings(int lenA, const void *strA, int lenB, const void *strB)
{
    int compareValue = -1;

    // Get the size of the strings as UTF-16 encoded Unicode strings.
    // Note: Passing 0 as the last parameter forces the MultiByteToWideChar function
    // to give us the required buffer size to convert the given string to UTF-16
    int strAWStrBufferSize = MultiByteToWideChar(CP_UTF8, 0, (LPCSTR)strA, lenA, NULL, 0);
    int strBWStrBufferSize = MultiByteToWideChar(CP_UTF8, 0, (LPCSTR)strB, lenB, NULL, 0);

    // Allocate buffers for the converted UTF-16 values
    LPWSTR utf16StrA = (LPWSTR) GlobalAlloc(GMEM_FIXED, strAWStrBufferSize * sizeof(WCHAR));
    LPWSTR utf16StrB = (LPWSTR) GlobalAlloc(GMEM_FIXED, strBWStrBufferSize * sizeof(WCHAR));

    // Convert the UTF-8 strings (SQLite will pass them as UTF-8 to us) to standard
    // Windows WCHAR (UTF-16\UCS-2) encoding for Unicode so they can be used in the
    // Windows CompareStringEx() function.
    if(strAWStrBufferSize != 0 && NULL != utf16StrA)
    {
        MultiByteToWideChar(CP_UTF8, 0, (LPCSTR)strA, lenA, utf16StrA, strAWStrBufferSize);
    }
    if(strBWStrBufferSize != 0 && NULL != utf16StrB)
    {
        MultiByteToWideChar(CP_UTF8, 0, (LPCSTR)strB, lenB, utf16StrB, strBWStrBufferSize);
    }

    // Compare the strings using the Windows compare function.
    // Note: because we passed the exact byte lengths (with no terminating null) to
    // MultiByteToWideChar, the returned sizes are exact character counts and can be
    // passed to CompareStringEx() as-is.
    if(NULL != utf16StrA && NULL != utf16StrB)
    {
        compareValue = CompareStringEx(L"en-US", COMPARE_STRING_FLAGS, utf16StrA, strAWStrBufferSize, utf16StrB, strBWStrBufferSize, NULL, NULL, 0);
    }

    // Free the conversion buffers; they are no longer needed
    if(NULL != utf16StrA) GlobalFree(utf16StrA);
    if(NULL != utf16StrB) GlobalFree(utf16StrB);

    // In the Windows CompareStringEx() function, 0 indicates an error, 1 indicates less than,
    // 2 indicates equal to, 3 indicates greater than, so subtract 2 to maintain the C convention
    if(compareValue > 0)
    {
        compareValue -= 2;
    }
    return compareValue;
}
Now if I run the following code, I expect the result to be -1 based on the .NET implementation (see above), but I get 1, indicating "greater than":
char strA[50] = "คนอ้วน ๆ";
char strB[50] = "はじめまして";
// Will be 1 when we expect it to be -1
int result = CompareStrings(strlen(strA), strA, strlen(strB), strB);
Any ideas on why the results I'm getting are different? I'm using the same LCID/cultureInfo and compareOptions in both implementations and the conversions are successful as far as I can tell.
FYI: This function will be used as a custom collation in SQLite. Not relevant to the question but in case anyone is wondering why the function signature is the way it is.
UPDATE: I also determined that when running the same code in .NET 4 I would see the behavior I saw in the native code. As a result there was now a discrepancy between .NET versions. See my answer below for the reasons behind this.
Well, your code performs several steps here, and it's not clear whether it is the compare step that is failing.
As a first step, I would write out - in both the .NET code and the C code - the exact UTF-16 code units which you've got in utf16StrA, utf16StrB, stringA and stringB. I wouldn't be at all surprised to find that there's a problem in the input data you're using in the C code.
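In C#, dumping the code units is a short sketch:

// Print each UTF-16 code unit as four hex digits so both sides can be diffed
foreach (char c in stringA)
    Console.Write("{0:X4} ", (int)c);
Console.WriteLine();

Doing the equivalent dump of utf16StrA/utf16StrB in the C code (e.g. via wprintf) lets you compare the inputs directly.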
What you are hoping for here is that your text editor will save the source code file in UTF-8 format, and that the compiler will then somehow not interpret the source code as UTF-8. That's too much to hope for, at least on my compiler:
warning C4566: character represented by universal-character-name '\u0E04' cannot be represented in the current code page (1252)
Fix:
const wchar_t* strA = L"คนอ้วน ๆ";
const wchar_t* strB = L"はじめまして";
And remove the conversion code.
So I ended up figuring out the issue after contacting Microsoft support. Here is what they had to say about the issue:
The reason for the issue you are seeing, namely, running CompareInfo.Compare against the same string with the same compare options but getting different return values when run under different versions of the .NET Framework, is that the sorting rules are tied to the Unicode spec, which evolves over time. Historically .NET has snapped data for side by side releases to correspond to the newest version of Windows and the corresponding version of Unicode implemented at that time so 2.0, 3.0 and 3.5 correspond to the version for Windows XP or Server 2003, whereas v4.0 matched the Vista sorting rules. As a result the sorting rules for the various versions of the .NET Framework have changed over time.
This also means that when I ran the native code I was calling sort methods that adhered to the Vista sorting rules, and when I ran under .NET 3.5 I was using sort methods that followed the Windows XP sorting rules. It seems odd to me that the Unicode spec would change in such a manner as to cause such a dramatic difference, but apparently that's the case here. Changing the Unicode spec in such a dramatic way strikes me as a fantastic way to break backwards compatibility.
Recently our site has been deluged with the resurgence of the Asprox botnet SQL injection attack. Without going into details, the attack attempts to execute SQL code by encoding T-SQL commands in an ASCII-encoded binary string. It looks something like this:
DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x44004500...06F007200%20AS%20NVARCHAR(4000));EXEC(@S);--
I was able to decode this in SQL, but I was a little wary of doing this since I didn't know exactly what was happening at the time.
I tried to write a simple decode tool, so I could decode this type of text without even touching SQL Server. The main part I need to be decoded is:
CAST(0x44004500...06F007200 AS NVARCHAR(4000))
I've tried all of the following commands with no luck:
txtDecodedText.Text =
System.Web.HttpUtility.UrlDecode(txtURLText.Text);
txtDecodedText.Text =
Encoding.ASCII.GetString(Encoding.ASCII.GetBytes(txtURLText.Text));
txtDecodedText.Text =
Encoding.Unicode.GetString(Encoding.Unicode.GetBytes(txtURLText.Text));
txtDecodedText.Text =
Encoding.ASCII.GetString(Encoding.Unicode.GetBytes(txtURLText.Text));
txtDecodedText.Text =
Encoding.Unicode.GetString(Convert.FromBase64String(txtURLText.Text));
What is the proper way to translate this encoding without using SQL Server? Is it possible? I'll take VB.NET code since I'm familiar with that too.
Okay, I'm sure I'm missing something here, so here's where I'm at.
Since my input is a basic string, I started with just a snippet of the encoded portion - 4445434C41 (which translates to DECLA) - and the first attempt was to do this...
txtDecodedText.Text = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(txtURL.Text));
...and all it did was return the exact same thing that I put in, since it just converted each character to a byte and back.
I realized that I need to parse every two characters into a byte manually since I don't know of any methods yet that will do that, so now my little decoder looks something like this:
while (!boolIsDone)
{
// Note: byte.Parse defaults to decimal, so hex pairs such as "4C" will throw
bytURLChar = byte.Parse(txtURLText.Text.Substring(intParseIndex, 2));
bytURL[intURLIndex] = bytURLChar;
intParseIndex += 2;
intURLIndex++;
if (txtURLText.Text.Length - intParseIndex < 2)
{
boolIsDone = true;
}
}
txtDecodedText.Text = Encoding.UTF8.GetString(bytURL);
Things look good for the first couple of pairs, but then the loop balks when it gets to the "4C" pair and says that the string is in the incorrect format.
Interestingly enough, when I step through the debugger and call the GetString method on the byte array that I was able to parse up to that point, I get ",-+" as the result.
How do I figure out what I'm missing - do I need to do a "direct cast" for each byte instead of attempting to parse it?
I went back to Michael's post, did some more poking and realized that I did need to do a double conversion, and eventually worked out this little nugget:
Convert.ToString(Convert.ToChar(Int32.Parse(EncodedString.Substring(intParseIndex, 2), System.Globalization.NumberStyles.HexNumber)));
From there I simply made a loop to go through all the characters 2 by 2 and get them "hexified" and then translated to a string.
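Put together, the loop described above might look like this sketch (the method name is mine):

public static string DecodeHexPairs(string encoded)
{
    var sb = new System.Text.StringBuilder();
    // Walk the string two hex digits at a time and convert each pair to a char
    for (int i = 0; i + 1 < encoded.Length; i += 2)
        sb.Append(Convert.ToChar(Int32.Parse(encoded.Substring(i, 2), System.Globalization.NumberStyles.HexNumber)));
    return sb.ToString();
}
// DecodeHexPairs("4445434C41") returns "DECLA"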
To Nick, and anybody else interested, I went ahead and posted my little application over in CodePlex. Feel free to use/modify as you need.
Try removing the 0x first and then call Encoding.UTF8.GetString. I think that may work.
Essentially: 0x44004500
Remove the 0x; then every two bytes are one character:
44 00 = D
45 00 = E
6F 00 = o
72 00 = r
So it's definitely a Unicode/UTF format with two bytes/character.
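In other words (a sketch assuming that pattern holds throughout the payload): strip the 0x prefix, parse the hex into bytes, and decode the bytes as little-endian UTF-16:

using System;
using System.Text;

public static string DecodeCastHex(string hex)
{
    // Strip the 0x prefix if present
    if (hex.StartsWith("0x", StringComparison.OrdinalIgnoreCase))
        hex = hex.Substring(2);

    // Every two hex digits form one byte
    var bytes = new byte[hex.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
        bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);

    // 44 00 = 'D': the payload is little-endian UTF-16, so Encoding.Unicode fits
    return Encoding.Unicode.GetString(bytes);
}
// DecodeCastHex("0x44004500") returns "DE"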