The following code works and allows me to get the UID of a Mifare 1k card. Unfortunately, it does not work with DESFire cards.
public byte[] GetUid()
{
    byte[] uid = new byte[6];
    // PC/SC GET DATA pseudo-APDU: FF CA 00 00 Le, with Le = 0x04 (expect a 4-byte UID)
    int rc = Communicate(new byte[] { 0xff, 0xca, 0x00, 0x00, 0x04 }, ref uid);
    if (rc != 0)
        throw new Exception("failure: " + rc);
    // Status word SW1 SW2: handle both a 2-byte (status only) and a 6-byte (UID + status) response
    int rc1, rc2;
    if (uid.Length == 2)
    {
        rc1 = uid[0];
        rc2 = uid[1];
    }
    else
    {
        rc1 = uid[4];
        rc2 = uid[5];
    }
    if (rc1 != 0x90 || rc2 != 0x00)
        throw new Exception("failure: " + rc1 + "/" + rc2);
    byte[] result = new byte[4];
    Array.Copy(uid, result, 4);
    return result;
}
I had a look at the following resources:
http://ridrix.wordpress.com/2009/09/19/mifare-desfire-communication-example/
http://code.google.com/p/nfc-tools/source/browse/trunk/libfreefare/libfreefare/mifare_desfire.c?r=532
... and tried to do it like this:
byte[] outb = new byte[15];
int rc9 = Communicate(new byte[] { 0x60 }, ref outb);
outb always contains { 0x67, 0x00 } and not, as expected, { af 04 01 01 00 02 18 05 }.
Connect is successful, and SCardGetAttrib allows me to fetch the ATR. The Communicate method uses SCardTransmit internally. I can post the code if it helps.
Thanks for any pointer!
EDIT:
Thanks for the first answer! I changed the program as suggested:
byte[] outb = new byte[9];
int rc5 = Communicate(new byte[]{0x90, 0x60, 0x00, 0x00, 0x00, 0x00}, ref outb);
Now outb is { 0x91, 0x7E }. This seems better: 0x91 looks like an ISO 7816 status byte, but unfortunately it is not 0x90, as expected. (I also had a look at the DESFIRE_TRANSCEIVE macro in the second link, which continues reading if it receives 0xf2.) I tried a Google search for ISO 7816 APDU response codes, but had no success decoding the error code.
EDIT 2:
I also found the following comment:
with an omnikey 5321 I get DESFire ATR 3B8180018080 UID 04 52 2E AA 47 23 80 90 00 [from apdu FFCA000000] All other apdu give 917E unknown error
This explains my error code and gives me another hint: FFCA000000 looks quite similar to my other Mifare 1k command string. So with FFCA000000 I get a 9-byte response that seems to contain the UID. Interestingly, FFCA000000 also works with the 1k cards, so maybe my solution is just to change the last 04 to 00 and deal with responses of different lengths. Right?
EDIT 3:
It seems the penny has dropped... 0x04 = 4 bytes response = too small for a 7 byte UID = response 917E = buffer too small :-)
This code seems to work:
// Le = 0x00: let the reader return the UID at whatever length it has (4, 7 or 10 bytes)
int rc = Communicate(new byte[] { 0xff, 0xca, 0x00, 0x00, 0x00 }, ref uid);
if (rc != 0)
    throw new Exception("failure: " + rc);
// The status word SW1 SW2 is the last two bytes of the response
int rc1 = uid[uid.Length - 2], rc2 = uid[uid.Length - 1];
if (rc1 != 0x90 || rc2 != 0x00)
    throw new Exception("failure: " + rc1 + "/" + rc2);
// Everything before the status word is the UID
byte[] result = new byte[uid.Length - 2];
Array.Copy(uid, result, uid.Length - 2);
return result;
Any comments?
CLA=FF commands are PC/SC Part 3 commands. INS=CA should work with any contactless reader that is PC/SC 2.0x compliant.
Try the "Native wrapped" version of the first link you supplied instead. Your interface expects ISO 7816-4 style APDU's (as it returns an ISO 7816-4 status word meaning wrong length).
Related
I'm trying to figure out the byte order after converting from BitArray to byte[].
Firstly, here is the BitArray content:
BitArray encoded = huffmanTree.Encode(input);
foreach (bool bit in encoded)
{
    Console.Write(bit ? 1 : 0);
}
Console.WriteLine();
Output:
Encoded: 000001010110101011111111
Okay, so if we convert this binary to hex manually we get: 05 6A FF
However, when I do the conversion in C#, here is what I get:
BitArray encoded = huffmanTree.Encode(input);
byte[] bytes = new byte[encoded.Length / 8 + (encoded.Length % 8 == 0 ? 0 : 1)];
encoded.CopyTo(bytes, 0);
string StringByte = BitConverter.ToString(bytes);
Console.WriteLine(StringByte); // just to check the Hex
Output:
A0-56-FF
Nevertheless, as I have mentioned, it should be 05 6A FF. Please help me understand why that is so.
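For what it's worth, BitArray.CopyTo stores bit index 0 in the least significant bit of the first byte, whereas reading the printed bit string as hex treats the first bit as the most significant one. A minimal sketch that packs the bits MSB-first (the helper name is mine, not from the question) would be:
static byte[] ToBytesMsbFirst(BitArray bits)
{
    byte[] bytes = new byte[(bits.Length + 7) / 8];
    for (int i = 0; i < bits.Length; i++)
    {
        if (bits[i])
            bytes[i / 8] |= (byte)(0x80 >> (i % 8)); // bit 0 becomes the high bit of byte 0
    }
    return bytes;
}
// BitConverter.ToString(ToBytesMsbFirst(encoded)) then prints 05-6A-FF for the example above.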
I am running into an issue building a program to process files provided to us. These files are UTF-8 XML files. Oddly, some of them end with the bytes 0x0A 0x00, which causes our XML parser to throw an error. I am looking to build a function that removes these bytes from the end of a file if they exist, without hard-coding 0x0A 0x00. Ideally this function could be reused in the future for similar behavior with an array of any size.
Here is the exception:
System.Xml.XmlException:
hexadecimal value 0x00, is an invalid character. Line 250, position 1.
This occurs in some files, but not all. The root cause of this behavior is yet to be discovered.
I'm sorry, I do not have a code sample, as I have not been able to get anything remotely close to work :) I will edit this post if I get something somewhat working.
Any insight is appreciated!
Something like this should do the trick. Keep in mind, though, that it does not have any error handling built in; this is just the bare-bones functionality:
static void TrimFile(string filePath, byte[] badBytes)
{
    using (FileStream file = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite))
    {
        // Read the last badBytes.Length bytes of the file
        byte[] bytes = new byte[badBytes.Length];
        file.Seek(-badBytes.Length, SeekOrigin.End);
        file.Read(bytes, 0, badBytes.Length);

        // If they match the unwanted trailer, truncate the file by that many bytes
        // (Enumerable.SequenceEqual requires using System.Linq)
        if (Enumerable.SequenceEqual(bytes, badBytes))
        {
            file.SetLength(Math.Max(0, file.Length - badBytes.Length));
        }
    }
}
You can call it like this:
TrimFile(filePath, new byte[] { 0x0A, 0x00 });
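In the original scenario the trim could run right before parsing, for example (a sketch; XDocument is just one of several ways to load the XML):
TrimFile(filePath, new byte[] { 0x0A, 0x00 });
XDocument doc = XDocument.Load(filePath); // no longer trips over the trailing 0x00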
Here's a test file I created with 0xCA 0xFE 0xFF 0xFF at the end (some bunk data)
62 75 6E 6B 20 66 69 6C 65 CA FE FF FF
bunk fileÊþÿÿ
After running TrimFile(filePath, new byte[] { 0xCA, 0xFE, 0xFF, 0xFF });
62 75 6E 6B 20 66 69 6C 65
bunk file
Hope this comes in handy!
I need to run mode 0x100 and send a hex message.
This is the Mode100 function I have created (which is working):
public static byte Mode100(byte[] p)
{
    byte lcs = 0;
    foreach (byte b in p)
    {
        lcs += b;
    }
    return lcs;
}
This is what I'm trying to send:
byte[] msg = { 0X06, 0XA2, 0XD2, 0X06, 0XD3, 0X11, 0XD4, 0X65, 0X6F };
var Mode100Dec = Mode100(msg);//value in int
string Mode100hex = Mode100Dec.ToString("X"); //value in hex 0-F
byte [] Mode100Byte = Encoding.Default.GetBytes(Mode100hex);//value in dec ascci of hex
var hexString = BitConverter.ToString(Mode100Byte); //value in hex of ascii
For this example the Mode100 function returns 12 (dec),
which is 0x0C (hex).
But how do I convert it to a byte[] so I can send 0x0C?
Because right now it converts the "C" to 67 dec / 0x43 hex,
which is wrong...
I have looked at this post:
How do you convert a byte array to a hexadecimal string, and vice versa?
but it didn't help me to get the answer I need
I agree with the note from #500-InternalServerError that it seems you want the BINARY value, not the hex. Your example actually shows this -- your checksum would be 0x0C, which is what you send. The ASCII for 0x0C would be two characters, '0' and 'C', which in ASCII would be 0x30 0x43. So I think you are confusing "decimal" and "hex" (which are conversions of binary values to strings) with the binary value itself (which is neither decimal nor hex).
To get the result you seem to be looking for, remove the conversion to a HEX string:
byte[] msg = { 0X06, 0XA2, 0XD2, 0X06, 0XD3, 0X11, 0XD4, 0X65, 0X6F };
var Mode100val = Mode100(msg); //value as a BYTE
byte[] Mode100Byte = new byte[] { Mode100val };
The array of a single byte will contain a single binary value 12 decimal / 0C hex.
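If the goal is to transmit the original message with the checksum appended, a minimal sketch might look like this (the frame layout, message bytes followed by a single LRC byte, is my assumption, not something stated in the question):
byte[] msg = { 0x06, 0xA2, 0xD2, 0x06, 0xD3, 0x11, 0xD4, 0x65, 0x6F };
byte lrc = Mode100(msg); // 0x0C for this message

byte[] frame = new byte[msg.Length + 1];
Buffer.BlockCopy(msg, 0, frame, 0, msg.Length);
frame[frame.Length - 1] = lrc; // append the checksum byte

// frame can now be written out as raw bytes, e.g. port.Write(frame, 0, frame.Length);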
I'm trying to generate a shared secret between a web server running PHP and a C# desktop application. I'm aware of the BouncyCastle library, but I'd prefer not having to use it since it's pretty huge.
I'm using phpecc and ECDiffieHellmanCng and trying to generate a shared secret between the two parties but I'm having issues with exporting/importing in C#.
It seems phpecc requires DER/PEM format in order to import a key, and ECDiffieHellmanCng doesn't seem to have any easy way to export in a compatible format.
Would I need to write my own pem/der encoder and decoder in order to do this or is there some alternative easier way?
Currently I'm doing the following in C#:
using (var ecdh = new ECDiffieHellmanCng())
{
    ecdh.HashAlgorithm = CngAlgorithm.ECDiffieHellmanP384;
    ecdh.KeyDerivationFunction = ECDiffieHellmanKeyDerivationFunction.Hash;
    var encoded = EncodePem(ecdh.PublicKey.ToByteArray());
    //... do something with encoded
}
private static string EncodePem(byte[] data)
{
    var pemDat = new StringBuilder();
    var chunk = new char[64];

    pemDat.AppendLine("-----BEGIN PUBLIC KEY-----");
    var encodedData = Convert.ToBase64String(data);

    // Emit the Base64 data in 64-character lines
    for (var i = 0; i < encodedData.Length; i += chunk.Length)
    {
        var index = 0;
        while (index != chunk.Length && i + index < encodedData.Length)
        {
            chunk[index] = encodedData[i + index];
            index++;
        }
        // Only write the characters filled in this iteration so the last (shorter)
        // line does not pick up leftovers from the previous chunk
        pemDat.AppendLine(new string(chunk, 0, index));
    }

    pemDat.AppendLine("-----END PUBLIC KEY-----");
    return pemDat.ToString();
}
Obviously the above does only the PEM encoding, so the PHP side returns an error when it tries to parse it:
Type: Runtime
Exception Message: Invalid data.
File: /.../vendor/mdanter/ecc/src/Serializer/PublicKey/Der/Parser.php
Line: 49
.NET Core 1.0 and .NET Framework 4.7 have the ECParameters struct to import/export keys. The ToByteArray() method you called is producing a CNG EccPublicBlob which has very little to do with the SEC-1 ECParameters format.
I'm going to assume that you wanted to use secp384r1/NIST P-384, even though you specified that as a hash algorithm. If you want some other curve, you'll need to do some translations.
The (.NET) ECParameters struct will only help you get started. Turning that into a file requires translating it into a PEM-encoded DER-encoded ASN.1-based structure. (But if you're sticking with NIST P-256/384/521, you can do it with the byte[] you currently have)
In SEC 1 v2.0 we get the following structures:
SubjectPublicKeyInfo ::= SEQUENCE {
algorithm AlgorithmIdentifier {{ECPKAlgorithms}} (WITH COMPONENTS {algorithm, parameters}),
subjectPublicKey BIT STRING
}
ECPKAlgorithms ALGORITHM ::= {
ecPublicKeyType |
ecPublicKeyTypeRestricted |
ecPublicKeyTypeSupplemented |
{OID ecdh PARMS ECDomainParameters {{SECGCurveNames}}} |
{OID ecmqv PARMS ECDomainParameters {{SECGCurveNames}}},
...
}
ecPublicKeyType ALGORITHM ::= {
OID id-ecPublicKey PARMS ECDomainParameters {{SECGCurveNames}}
}
ECDomainParameters{ECDOMAIN:IOSet} ::= CHOICE {
specified SpecifiedECDomain,
named ECDOMAIN.&id({IOSet}),
implicitCA NULL
}
An elliptic curve point itself is represented by the following type
ECPoint ::= OCTET STRING
whose value is the octet string obtained from the conversion routines given in Section 2.3.3.
Distilling this down to the relevant parts, you need to write
SEQUENCE (SubjectPublicKeyInfo)
SEQUENCE (AlgorithmIdentifier)
OBJECT IDENTIFIER id-ecPublicKey
OBJECT IDENTIFIER secp384r1 (or whatever named curve you're using)
BIT STRING
public key encoded as ECPoint
The AlgorithmIdentifier contains data that's fixed given you don't change the curve:
SEQUENCE (AlgorithmIdentifier)
30 xx [yy [zz]]
OBJECT IDENTIFIER id-ecPublicKey (1.2.840.10045.2.1)
06 07 2A 86 48 CE 3D 02 01
OBJECT IDENTIFIER secp384r1 (1.3.132.0.34)
06 05 2B 81 04 00 22
and we can now count how many bytes were in the payload: 16 (0x10), so we fill in the length:
30 10 06 07 2A 86 48 CE 3D 02 01 06 05 2B 81 04
00 22
The public key encoding that everyone understands is "uncompressed point", which is
04 th eb yt es of x. th eb yt es of y.
Turns out, that has a fixed size for a given curve, too, so unlike most things that are DER encoded, you can do this in one pass :). For secp384r1 the x and y coordinates are each 384-bit values, or (384 + 7)/8 == 48 bytes, so the ECPoint is 48 + 48 + 1 == 97 (0x61) bytes. Then it needs to be wrapped in a BIT STRING, which adds one payload byte (the unused-bits count) plus the tag and length. So, we get:
private static byte[] s_secp384r1PublicPrefix = {
    // SEQUENCE (SubjectPublicKeyInfo, 0x76 bytes)
    0x30, 0x76,
    // SEQUENCE (AlgorithmIdentifier, 0x10 bytes)
    0x30, 0x10,
    // OBJECT IDENTIFIER (id-ecPublicKey)
    0x06, 0x07, 0x2A, 0x86, 0x48, 0xCE, 0x3D, 0x02, 0x01,
    // OBJECT IDENTIFIER (secp384r1)
    0x06, 0x05, 0x2B, 0x81, 0x04, 0x00, 0x22,
    // BIT STRING, 0x61 content bytes, 0 unused bits.
    0x03, 0x62, 0x00,
    // Uncompressed EC point
    0x04,
};
...
using (ECDiffieHellman ecdh = ECDiffieHellman.Create())
{
    ecdh.KeySize = 384;

    byte[] prefix = s_secp384r1PublicPrefix;
    byte[] derPublicKey = new byte[120];
    Buffer.BlockCopy(prefix, 0, derPublicKey, 0, prefix.Length);

    byte[] cngBlob = ecdh.PublicKey.ToByteArray();
    Debug.Assert(cngBlob.Length == 104);
    Buffer.BlockCopy(cngBlob, 8, derPublicKey, prefix.Length, cngBlob.Length - 8);

    // Now move it to PEM
    StringBuilder builder = new StringBuilder();
    builder.AppendLine("-----BEGIN PUBLIC KEY-----");
    builder.AppendLine(
        Convert.ToBase64String(derPublicKey, Base64FormattingOptions.InsertLineBreaks));
    builder.AppendLine("-----END PUBLIC KEY-----");

    Console.WriteLine(builder.ToString());
}
Running the output I got from that into OpenSSL:
$ openssl ec -pubin -text -noout
read EC key
(paste)
-----BEGIN PUBLIC KEY-----
MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEwpbxYmcsNvr14D8k+0VQCkSY4WCV/3V10AiIq7sFdmUX
9+0DMuuLDmcKjL1ZFEFk0yHCPpY+pdkYtzPwE+dsApCPT3Ljk0AxHQBTSo4yjwsElMoA4Mtp8Qdo
LZD1Nx6v
-----END PUBLIC KEY-----
Private-Key: (384 bit)
pub:
04:c2:96:f1:62:67:2c:36:fa:f5:e0:3f:24:fb:45:
50:0a:44:98:e1:60:95:ff:75:75:d0:08:88:ab:bb:
05:76:65:17:f7:ed:03:32:eb:8b:0e:67:0a:8c:bd:
59:14:41:64:d3:21:c2:3e:96:3e:a5:d9:18:b7:33:
f0:13:e7:6c:02:90:8f:4f:72:e3:93:40:31:1d:00:
53:4a:8e:32:8f:0b:04:94:ca:00:e0:cb:69:f1:07:
68:2d:90:f5:37:1e:af
ASN1 OID: secp384r1
NIST CURVE: P-384
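To finish the round trip on the original question (the shared secret), here is a minimal sketch of importing the other party's uncompressed point and deriving a secret. It assumes a platform where ECDiffieHellman exposes ECParameters (the versions mentioned above), and the SHA-384 hash KDF is an assumption both sides would have to agree on:
// peerPoint: the 97-byte ECPoint (0x04 || X || Y) taken from the other party's
// SubjectPublicKeyInfo, i.e. the last 97 bytes of a DER blob like the one built above.
static byte[] DeriveSharedSecret(ECDiffieHellman ours, byte[] peerPoint)
{
    byte[] x = new byte[48];
    byte[] y = new byte[48];
    Buffer.BlockCopy(peerPoint, 1, x, 0, 48);
    Buffer.BlockCopy(peerPoint, 49, y, 0, 48);

    var peerParameters = new ECParameters
    {
        Curve = ECCurve.NamedCurves.nistP384,
        Q = new ECPoint { X = x, Y = y },
    };

    using (ECDiffieHellman peer = ECDiffieHellman.Create())
    {
        peer.ImportParameters(peerParameters);
        // Hash-based KDF over the raw agreement; both ends must pick the same hash
        return ours.DeriveKeyFromHash(peer.PublicKey, HashAlgorithmName.SHA384);
    }
}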
I have a textbox that I use to convert things like:
74 00 65 00 73 00 74 00
back into a string. The above says "test", but for some reason when I click the convert button it displays only the first letter "t" (74 00); other byte arrays work just as expected and the entire text gets converted.
Here are the two pieces of code I have tried, both of which show the same behavior of not properly converting the entire byte array back to a word:
byte[] bArray = ByteStrToByteArray(iSequence.Text);
ASCIIEncoding enc = new ASCIIEncoding();
string word = enc.GetString(bArray);
iResult.Text = word + Environment.NewLine;
which uses the function:
private byte[] ByteStrToByteArray(string byteString)
{
    byteString = byteString.Replace(" ", string.Empty);
    byte[] buffer = new byte[byteString.Length / 2];
    for (int i = 0; i < byteString.Length; i += 2)
        buffer[i / 2] = (byte)Convert.ToByte(byteString.Substring(i, 2), 16);
    return buffer;
}
Another way I was using is:
string str = iSequence.Text.Replace(" ", "");
byte[] bArray = Enumerable.Range(0, str.Length)
.Where(x => x % 2 == 0)
.Select(x => Convert.ToByte(str.Substring(x, 2), 16))
.ToArray();
ASCIIEncoding enc = new ASCIIEncoding();
string word = enc.GetString(bArray);
iResult.Text = word + Environment.NewLine;
I tried checking the lengths to see whether it was iterating through, and it was.
I don't really know how to debug why this is happening with this particular byte array; all the other byte arrays seemed to work just fine, only this one outputs just its first letter.
Have I done something wrong that could produce this behavior somehow?
What could I try in order to find out what is wrong?
If you have the byte sequence
var bytes = new byte[] { 0x74, 0x00, 0x65, 0x00, 0x73, 0x00, 0x74, 0x00 };
and you decode it to a string using ASCII encoding (Encoding.ASCII), then you get
var result = Encoding.ASCII.GetString(bytes);
// result == "\x74\x00\x65\x00\x73\x00\x74\x00" == "t\0e\0s\0t\0"
Notice the null (\0) characters? When you display such a string in a textbox, only the part of the string up to the first null character is displayed.
Since you say the result should read "test", the input is actually not encoded in ASCII but in UTF-16LE (Encoding.Unicode).
var result = Encoding.Unicode.GetString(bytes);
// result == "\u0074\u0065\u0073\u0074" == "test"
You're converting a Unicode string to ASCII; you're not specifying the code page on your machine to convert from.
System.Text.Encoding.GetEncoding("codepage").GetString()
If my memory serves me correctly. Also note that any control in .NET is Unicode, so what you're trying to put into the text box (if the conversion isn't correct) could be an end-of-line character, an EOF, or any other kind of control character; it all depends on your code page.
I tried debugging the first program using breakpoints in VS2010. I found out that the line
string word = enc.GetString(bArray);
output word as "t\0e\0s\0t".
The last line
iResult.Text = word + Environment.NewLine;
gives iResult.Text as simply "t".
So I was thinking that since \0 is not a valid escape sequence, the compiler ignored everything after it. I could be wrong, though; try removing all occurrences of 00 in the input string.
I'm not really into C#. I'm only suggesting this because it looks like C++.
It works for me:
string outputText = "t\0e\0s\0t";
outputText = outputText.Replace("\0", " ");