Add CRL Distribution Points (CDP) Extension to X509Certificate2 Certificate - c#

I am trying to add a certificate extension to my X509Certificate2 object in pure .NET 4.7.2.
I was using BouncyCastle with this method:
private static void AddCdpUrl(X509V3CertificateGenerator certificateGenerator, string cdpUrl)
{
    var uriGeneralName = new GeneralName(GeneralName.UniformResourceIdentifier, cdpUrl);
    var cdpName = new DistributionPointName(DistributionPointName.FullName, uriGeneralName);
    var cdp = new DistributionPoint(cdpName, null, null);
    certificateGenerator.AddExtension(X509Extensions.CrlDistributionPoints, false, new CrlDistPoint(new[] { cdp }));
}
And it works and I get a great result:
Now in pure .NET I am using this method:
const string X509CRLDistributionPoints = "2.5.29.31";
certificateRequest.CertificateExtensions.Add(new X509Extension(new Oid(X509CRLDistributionPoints), Encoding.UTF8.GetBytes("http://crl.example.com"), false));
And I get this result:
I am missing the sequences for "Distribution Point Name", "Full Name" and "URL=".
How can I generate the same result that BouncyCastle does with pure .NET?
Thanks

If you only want to write one distribution point, and it's less than or equal to 119 ASCII characters long, and you aren't delegating CRL signing authority to a different certificate:
private static X509Extension MakeCdp(string url)
{
    byte[] encodedUrl = Encoding.ASCII.GetBytes(url);

    if (encodedUrl.Length > 119)
    {
        throw new NotSupportedException();
    }

    byte[] payload = new byte[encodedUrl.Length + 10];
    int offset = 0;
    // SEQUENCE (CRLDistributionPoints)
    payload[offset++] = 0x30;
    payload[offset++] = (byte)(encodedUrl.Length + 8);
    // SEQUENCE (DistributionPoint)
    payload[offset++] = 0x30;
    payload[offset++] = (byte)(encodedUrl.Length + 6);
    // [0] (distributionPoint: DistributionPointName)
    payload[offset++] = 0xA0;
    payload[offset++] = (byte)(encodedUrl.Length + 4);
    // [0] (fullName: GeneralNames)
    payload[offset++] = 0xA0;
    payload[offset++] = (byte)(encodedUrl.Length + 2);
    // [6] (uniformResourceIdentifier: IA5String)
    payload[offset++] = 0x86;
    payload[offset++] = (byte)(encodedUrl.Length);
    Buffer.BlockCopy(encodedUrl, 0, payload, offset, encodedUrl.Length);

    return new X509Extension("2.5.29.31", payload, critical: false);
}
Past 119 characters the outer payload length exceeds 0x7F and then you really start wanting a proper DER encoder. You definitely want one for variable numbers of URLs, or including any of the optional data from the extension.
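For example, assuming the certificateRequest object from the question, the helper above could be attached like this (just a usage sketch):
certificateRequest.CertificateExtensions.Add(MakeCdp("http://crl.example.com"));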

This is probably a bit late, but you could use BC as a utility to get the DER-encoded extension and import it into native .NET like so:
// req is a .NET Core 3.1 CertificateRequest object
req.CertificateExtensions.Add(
    new X509Extension(
        new Oid("2.5.29.31"),
        crlDistPoint.GetDerEncoded(), // this is your CRL extension
        false
    )
);
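For completeness, the crlDistPoint value referenced above can be built with the same BouncyCastle types the question already uses; a minimal sketch:
var uriGeneralName = new GeneralName(GeneralName.UniformResourceIdentifier, "http://crl.example.com");
var cdpName = new DistributionPointName(DistributionPointName.FullName, uriGeneralName);
var cdp = new DistributionPoint(cdpName, null, null);
var crlDistPoint = new CrlDistPoint(new[] { cdp });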

I ran into this problem and now there is an AsnWriter available (thanks @bartonjs, looks like you got the namespace you wanted :)), if you're on at least .NET 5.
So I've cobbled together a method which creates the extension using the writer:
/// <summary>Derived from https://github.com/dotnet/runtime/blob/main/src/libraries/System.Security.Cryptography/src/System/Security/Cryptography/X509Certificates/Asn1/DistributionPointAsn.xml.cs</summary>
private static X509Extension BuildDistributionPointExtension(string[] fullNames, ReasonFlagsAsn? reasons, string[]? crlIssuers, bool critical)
{
    var writer = new AsnWriter(AsnEncodingRules.DER);
    writer.PushSequence();
    writer.PushSequence(Asn1Tag.Sequence);
    writer.PushSequence(new Asn1Tag(TagClass.ContextSpecific, 0));
    writer.PushSequence(new Asn1Tag(TagClass.ContextSpecific, 0));
    // See https://github.com/dotnet/runtime/blob/main/src/libraries/Common/src/System/Security/Cryptography/Asn1/GeneralNameAsn.xml.cs for different value types
    for (int i = 0; i < fullNames.Length; i++)
        writer.WriteCharacterString(UniversalTagNumber.IA5String, fullNames[i], new Asn1Tag(TagClass.ContextSpecific, 6)); // GeneralName 6=URI
    writer.PopSequence(new Asn1Tag(TagClass.ContextSpecific, 0));
    writer.PopSequence(new Asn1Tag(TagClass.ContextSpecific, 0));
    if (reasons.HasValue)
        writer.WriteNamedBitList(reasons.Value, new Asn1Tag(TagClass.ContextSpecific, 1));
    if (crlIssuers?.Length > 0)
    {
        writer.PushSequence(new Asn1Tag(TagClass.ContextSpecific, 2));
        for (int i = 0; i < crlIssuers.Length; i++)
            writer.WriteCharacterString(UniversalTagNumber.IA5String, crlIssuers[i], new Asn1Tag(TagClass.ContextSpecific, 2)); // GeneralName 2=DnsName
        writer.PopSequence(new Asn1Tag(TagClass.ContextSpecific, 2));
    }
    writer.PopSequence(Asn1Tag.Sequence);
    writer.PopSequence();
    return new X509Extension(new Oid("2.5.29.31"), writer.Encode(), critical);
}

[Flags]
internal enum ReasonFlagsAsn
{
    Unused = 1 << 0,
    KeyCompromise = 1 << 1,
    CACompromise = 1 << 2,
    AffiliationChanged = 1 << 3,
    Superseded = 1 << 4,
    CessationOfOperation = 1 << 5,
    CertificateHold = 1 << 6,
    PrivilegeWithdrawn = 1 << 7,
    AACompromise = 1 << 8
}
Just be careful and do your own tests as I've only checked if the CRL is properly displayed in a cert viewer.
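A possible call site, assuming the CertificateRequest object from the original question (illustrative only):
certificateRequest.CertificateExtensions.Add(
    BuildDistributionPointExtension(new[] { "http://crl.example.com" }, reasons: null, crlIssuers: null, critical: false));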

Related

Packing bytes manually to send on network

I have an object that has the following variables:
bool firstBool;
float firstFloat; (0.0 to 1.0)
float secondFloat; (0.0 to 1.0)
int firstInt; (0 to 10,000)
I was using a ToString method to get a string that I can send over the network. Scaling up I have encountered issues with the amount of data this is taking up.
the string looks like this at the moment:
"false:1.0:1.0:10000" this is 19 characters at 2 bytes per so 38 bytes
I know that I can save on this size by manually storing the data in 4 bytes like this:
A|B|B|B|B|B|B|B
C|C|C|C|C|C|C|D
D|D|D|D|D|D|D|D
D|D|D|D|D|X|X|X
A = bool(0 or 1), B = int(0 to 128), C = int(0 to 128), D = int(0 to 16384), X = Leftover bits
I convert the float(0.0 to 1.0) to int(0 to 128) since I can rebuild them on the other end and the accuracy isn't super important.
I have been experimenting with BitArray and byte[] to convert the data into and out of the binary structure.
After some experiments I ended up with this serialization process (I know it needs to be cleaned up and optimized):
public byte[] Serialize()
{
    byte[] firstFloatBytes = BitConverter.GetBytes(Mathf.FloorToInt(firstFloat * 128)); // Convert the float to int from (0 to 128)
    byte[] secondFloatBytes = BitConverter.GetBytes(Mathf.FloorToInt(secondFloat * 128)); // Convert the float to int from (0 to 128)
    byte[] firstIntData = BitConverter.GetBytes(Mathf.FloorToInt(firstInt)); // Get the bytes for the int

    BitArray data = new BitArray(32); // create the size 32 bitarray to hold all the data
    int i = 0; // create the index value
    data[i] = firstBool; // set the 0 bit

    BitArray ffBits = new BitArray(firstFloatBytes);
    for (i = 1; i < 8; i++)
    {
        data[i] = ffBits[i - 1]; // Set bits 1 to 7
    }

    BitArray sfBits = new BitArray(secondFloatBytes);
    for (i = 8; i < 15; i++)
    {
        data[i] = sfBits[i - 8]; // Set bits 8 to 14
    }

    BitArray fiBits = new BitArray(firstIntData);
    for (i = 15; i < 29; i++)
    {
        data[i] = fiBits[i - 15]; // Set bits 15 to 28
    }

    byte[] output = new byte[4]; // create a byte[] to hold the output
    data.CopyTo(output, 0); // Copy the bits to the byte[]
    return output;
}
Getting the information back out of this structure is much more complicated than getting it into this form. I figure I can probably workout something using the bitwise operators and bitmasks.
This is proving to be more complicated than I was expecting. I thought it would be very easy to access the bits of a byte[] to manipulate the data directly, extract ranges of bits, then convert back to the values required to rebuild the object. Are there best practices for this type of data serialization? Does anyone know of a tutorial or example reference I could read?
Standard and efficient serialization methods are:
Using BinaryWriter / BinaryReader:
public byte[] Serialize()
{
    using (var s = new MemoryStream())
    using (var w = new BinaryWriter(s))
    {
        w.Write(firstBool);
        w.Write(firstFloat);
        ...
        return s.ToArray();
    }
}

public void Deserialize(byte[] bytes)
{
    using (var s = new MemoryStream(bytes))
    using (var r = new BinaryReader(s))
    {
        firstBool = r.ReadBoolean();
        firstFloat = r.ReadSingle();
        ...
    }
}
Using protobuf.net
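As a rough illustration of the protobuf.net option (a sketch only; the class shape and member numbers are assumptions, not taken from the question):
[ProtoContract]
public class State
{
    [ProtoMember(1)] public bool FirstBool { get; set; }
    [ProtoMember(2)] public float FirstFloat { get; set; }
    [ProtoMember(3)] public float SecondFloat { get; set; }
    [ProtoMember(4)] public int FirstInt { get; set; }
}

// Round-trip through a MemoryStream
using (var s = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(s, new State { FirstBool = true, FirstFloat = 0.5f, SecondFloat = 0.25f, FirstInt = 42 });
    s.Position = 0;
    State copy = ProtoBuf.Serializer.Deserialize<State>(s);
}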
BinaryWriter / BinaryReader is much faster (around 7 times). Protobuf is more flexible, easy to use, very popular and serializes into around 33% fewer bytes. (Of course these numbers are rough orders of magnitude and depend on what you serialize and how.)
Now basically BinaryWriter will write 1 + 4 + 4 + 4 = 13 bytes. You shrink it to 5 bytes by converting the values to bool, byte, byte, short first by rounding it the way you want. Finally it's easy to merge the bool with one of your bytes to get 4 bytes if you really want to.
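A minimal sketch of that 5-byte variant, assuming the same fields as the question (scaling by 127 instead of 128 is a choice made here so the quantized value still fits in one byte, not something mandated above):
public byte[] SerializeCompact()
{
    using (var s = new MemoryStream())
    using (var w = new BinaryWriter(s))
    {
        w.Write(firstBool);                // 1 byte
        w.Write((byte)(firstFloat * 127)); // 1 byte, quantized 0..127
        w.Write((byte)(secondFloat * 127));// 1 byte, quantized 0..127
        w.Write((short)firstInt);          // 2 bytes, 0..10000 fits in a short
        return s.ToArray();
    }
}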
I don't want to completely discourage manual serialization, but it has to be worth the price in terms of performance, because the code is quite unreadable. If you do it, use bit masks and binary shifts on bytes directly, but keep it as simple as possible. Don't use BitArray; it's slow and no more readable.
Here is a simple method for pack/unpack, but you lose accuracy converting a float to only 7/8 bits:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            foreach (Data data in Data.input)
            {
                Data.Print(data);
                Data results = Data.Unpack(Data.Pack(data));
                Data.Print(results);
            }
            Console.ReadLine();
        }
    }

    public class Data
    {
        public static List<Data> input = new List<Data>() {
            new Data() { firstBool = true, firstFloat = 0.2345F, secondFloat = 0.432F, firstInt = 12},
            new Data() { firstBool = true, firstFloat = 0.3445F, secondFloat = 0.432F, firstInt = 11},
            new Data() { firstBool = false, firstFloat = 0.2365F, secondFloat = 0.432F, firstInt = 9},
            new Data() { firstBool = false, firstFloat = 0.545F, secondFloat = 0.432F, firstInt = 8},
            new Data() { firstBool = true, firstFloat = 0.2367F, secondFloat = 0.432F, firstInt = 7}
        };

        public bool firstBool { get; set; }
        public float firstFloat { get; set; }  // (0.0 to 1.0)
        public float secondFloat { get; set; } // (0.0 to 1.0)
        public int firstInt { get; set; }      // (0 to 10,000)

        public static byte[] Pack(Data data)
        {
            byte[] results = new byte[4];
            results[0] = (byte)((data.firstBool ? 0x80 : 0x00) | (byte)(data.firstFloat * 128));
            results[1] = (byte)(data.secondFloat * 256);
            results[2] = (byte)((data.firstInt >> 8) & 0xFF);
            results[3] = (byte)(data.firstInt & 0xFF);
            return results;
        }

        public static Data Unpack(byte[] data)
        {
            Data results = new Data();
            results.firstBool = ((data[0] & 0x80) == 0) ? false : true;
            results.firstFloat = ((float)(data[0] & 0x7F)) / 128.0F;
            results.secondFloat = (float)data[1] / 256.0F;
            results.firstInt = (data[2] << 8) | data[3];
            return results;
        }

        public static void Print(Data data)
        {
            Console.WriteLine("Bool : '{0}', 1st Float : '{1}', 2nd Float : '{2}', Int : '{3}'",
                data.firstBool,
                data.firstFloat,
                data.secondFloat,
                data.firstInt
            );
        }
    }
}

How to simplify these methods to avoid confusion?

I've come here today to ask a question about these methods. I've taken the lead on a personal project as a hobby, and unfortunately I can't contact the old developer to ask what these methods even do. I'm pretty new to C#, so could anyone help me simplify them to avoid the confusion I'm having? If anyone could also tell me what they do, that would really help.
I'm just a little confused about them as of now. They were in the utilities folder. The project is an emulation server for a game; sending and receiving packets is the main focus.
public static int DecodeInt32(byte[] v)
{
    if ((v[0] | v[1] | v[2] | v[3]) < 0)
    {
        return -1;
    }
    return (v[0] << 0x18) + (v[1] << 0x10) + (v[2] << 8) + v[3];
}

public static int DecodeInt16(byte[] v)
{
    if ((v[0] | v[1]) < 0)
    {
        return -1;
    }
    return (v[0] << 8) + v[1];
}
Here is a part of the code that uses them; it might help in finding out what they do:
using (BinaryReader Reader = new BinaryReader(new MemoryStream(Data)))
{
    if (Data.Length < 4)
        return;

    int MsgLen = Utilities.DecodeInt32(Reader.ReadBytes(4));
    if ((Reader.BaseStream.Length - 4) < MsgLen)
    {
        this._halfData = Data;
        this._halfDataRecieved = true;
        return;
    }
    else if (MsgLen < 0 || MsgLen > 5120) // TODO: Const somewhere.
        return;

    byte[] Packet = Reader.ReadBytes(MsgLen);
    using (BinaryReader R = new BinaryReader(new MemoryStream(Packet)))
    {
        int Header = Utilities.DecodeInt16(R.ReadBytes(2));
        byte[] Content = new byte[Packet.Length - 2];
        Buffer.BlockCopy(Packet, 2, Content, 0, Packet.Length - 2);

        ClientPacket Message = new ClientPacket(Header, Content);
        try
        {
            Server.GetGame().GetPacketManager().TryExecutePacket(this, Message);
        }
        catch (Exception e)
        {
            ExceptionLogger.LogException(e);
        }
        this._deciphered = false;
    }

    if (Reader.BaseStream.Length - 4 > MsgLen)
    {
        byte[] Extra = new byte[Reader.BaseStream.Length - Reader.BaseStream.Position];
        Buffer.BlockCopy(Data, (int)Reader.BaseStream.Position, Extra, 0, (int)(Reader.BaseStream.Length - Reader.BaseStream.Position));
        this._deciphered = true;
        HandleMoreData(Extra);
    }
}
These methods reassemble an integer from raw bytes in big-endian (network) order: DecodeInt32 builds a 32-bit value from four bytes and DecodeInt16 a 16-bit value from two (the (v[0] | v[1] ...) < 0 check can never trigger, since byte values are non-negative). BinaryReader has the methods ReadInt16 and ReadInt32 (and many others), so you could replace the decoding methods.
int MsgLen = Utilities.DecodeInt32(Reader.ReadBytes(4));
becomes
int MsgLen = Reader.ReadInt32();
Note, though, that DecodeInt32 and DecodeInt16 read the bytes in big-endian (network) order, while BinaryReader.ReadInt32 and ReadInt16 assume little-endian, so make sure the byte order of your packets matches before swapping the calls.
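If the packets really are big-endian, a small sketch of how the byte order could be handled while still using BinaryReader (IPAddress.NetworkToHostOrder is in System.Net; Reader and R are the variables from the snippet above):
// Equivalent big-endian reads
int MsgLen = IPAddress.NetworkToHostOrder(Reader.ReadInt32());
int Header = IPAddress.NetworkToHostOrder(R.ReadInt16());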

Add number of characters and extend time of Rfc6238AuthenticationService

I am using Rfc6238AuthenticationService at https://github.com/aspnet/Identity/blob/85012bd0ac83548f7eab31f0585dae3836935d9d/src/Microsoft.AspNet.Identity/Rfc6238AuthenticationService.cs
which uses rfc6238 https://www.rfc-editor.org/rfc/rfc6238
internal static class Rfc6238AuthenticationService
{
    private static readonly DateTime _unixEpoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    private static readonly TimeSpan _timestep = TimeSpan.FromMinutes(3);
    private static readonly Encoding _encoding = new UTF8Encoding(false, true);

    private static int ComputeTotp(HashAlgorithm hashAlgorithm, ulong timestepNumber, string modifier)
    {
        // # of 0's = length of pin
        const int Mod = 1000000;

        // See https://www.rfc-editor.org/rfc/rfc4226
        // We can add an optional modifier
        var timestepAsBytes = BitConverter.GetBytes(IPAddress.HostToNetworkOrder((long)timestepNumber));
        var hash = hashAlgorithm.ComputeHash(ApplyModifier(timestepAsBytes, modifier));

        // Generate DT string
        var offset = hash[hash.Length - 1] & 0xf;
        Debug.Assert(offset + 4 < hash.Length);
        var binaryCode = (hash[offset] & 0x7f) << 24
                         | (hash[offset + 1] & 0xff) << 16
                         | (hash[offset + 2] & 0xff) << 8
                         | (hash[offset + 3] & 0xff);

        return binaryCode % Mod;
    }

    private static byte[] ApplyModifier(byte[] input, string modifier)
    {
        if (String.IsNullOrEmpty(modifier))
        {
            return input;
        }

        var modifierBytes = _encoding.GetBytes(modifier);
        var combined = new byte[checked(input.Length + modifierBytes.Length)];
        Buffer.BlockCopy(input, 0, combined, 0, input.Length);
        Buffer.BlockCopy(modifierBytes, 0, combined, input.Length, modifierBytes.Length);
        return combined;
    }

    // More info: https://www.rfc-editor.org/rfc/rfc6238#section-4
    private static ulong GetCurrentTimeStepNumber()
    {
        var delta = DateTime.UtcNow - _unixEpoch;
        return (ulong)(delta.Ticks / _timestep.Ticks);
    }

    public static int GenerateCode(byte[] securityToken, string modifier = null)
    {
        if (securityToken == null)
        {
            throw new ArgumentNullException(nameof(securityToken));
        }

        // Allow a variance of no greater than 90 seconds in either direction
        var currentTimeStep = GetCurrentTimeStepNumber();
        using (var hashAlgorithm = new HMACSHA1(securityToken))
        {
            return ComputeTotp(hashAlgorithm, currentTimeStep, modifier);
        }
    }

    public static bool ValidateCode(byte[] securityToken, int code, string modifier = null)
    {
        if (securityToken == null)
        {
            throw new ArgumentNullException(nameof(securityToken));
        }

        // Allow a variance of no greater than 90 seconds in either direction
        var currentTimeStep = GetCurrentTimeStepNumber();
        using (var hashAlgorithm = new HMACSHA1(securityToken))
        {
            for (var i = -2; i <= 2; i++)
            {
                var computedTotp = ComputeTotp(hashAlgorithm, (ulong)((long)currentTimeStep + i), modifier);
                if (computedTotp == code)
                {
                    return true;
                }
            }
        }

        // No match
        return false;
    }
}
Is it possible to add a character limit in this class and make it configurable (like 6 chars)? Also, is it possible to extend the lifetime of the token and make it configurable (like 120 seconds)?
Here is the place where the OTP is truncated to the defined length:
return binaryCode % Mod;
So you just need to change the value of Mod; the number of zeros is the number of digits, and the existing 1000000 already yields a 6-digit code (100000000 would give 8 digits, for example).
The token lifetime is governed by _timestep (3 minutes in the class above), which feeds into timestepNumber via GetCurrentTimeStepNumber, so change that value (e.g. TimeSpan.FromSeconds(120)) if you need a different window.
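A minimal sketch of what those two knobs could look like if you copy the class and parameterize it yourself (the names and defaults here are illustrative, not part of the original class):
// Hypothetical configurable copy of the relevant pieces
private static readonly TimeSpan _timestep = TimeSpan.FromSeconds(120); // token window

private static int ComputeTotp(HashAlgorithm hashAlgorithm, ulong timestepNumber, string modifier, int digits = 6)
{
    int mod = (int)Math.Pow(10, digits); // # of 0's = length of pin
    var timestepAsBytes = BitConverter.GetBytes(IPAddress.HostToNetworkOrder((long)timestepNumber));
    var hash = hashAlgorithm.ComputeHash(ApplyModifier(timestepAsBytes, modifier));
    var offset = hash[hash.Length - 1] & 0xf;
    var binaryCode = (hash[offset] & 0x7f) << 24
                     | (hash[offset + 1] & 0xff) << 16
                     | (hash[offset + 2] & 0xff) << 8
                     | (hash[offset + 3] & 0xff);
    return binaryCode % mod;
}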

Convert C# To NodeJS

Hi, I am trying to convert this C# function to NodeJS but it does not work. I don't really know what is wrong, so let me show some code and outputs.
C#:
private static byte[] ConvertMsg(byte[] message, byte type = 255, byte cmd = 255)
{
    int msgLength = message.Length;
    byte[] bArray = new byte[msgLength + 3];
    bArray[0] = type;
    bArray[1] = cmd;
    Buffer.BlockCopy(message, 0, bArray, 2, msgLength);
    bArray[msgLength + 2] = 0;
    return bArray;
}

static void Main()
{
    byte[] encrypted = ConvertMsg(Encoding.Default.GetBytes("hi"), 3, 3);
    Console.WriteLine($"Encrypted: {Convert.ToBase64String(encrypted)}");
    Console.ReadKey();
}
Output:
AwNoaQA=
NodeJS:
function ConvertMsg(message, type = 255, cmd = 255) {
    let length = message.length;
    let bArray = Buffer.alloc(length + 3);
    bArray[0] = type;
    bArray[1] = cmd;
    bArray.copy(message, 0, length);
    bArray[length + 2] = 0;
    return bArray;
}
let encrypted = ConvertMsg(Buffer.from("hi"),3,3);
console.log(encrypted.toString("base64"));
Output:
AwMAAAA=
As you can see, the output is not the same. Any help is much appreciated; please explain in your answer, as I would like to learn more. Thank you.
According to the Buffer documentation, the signature is .copy(target[, targetStart[, sourceStart[, sourceEnd]]]):
Copies data from a region of buf to a region in target even if the target memory region overlaps with buf.
Here:
// means: copy from 'bArray' starting at index 'length' into 'message' starting at index 0
bArray.copy(message, 0, length);
You do not copy the contents of message into bArray. You do the opposite thing: you copy bArray's contents (which are [3, 3, 0, 0, 0] by now) into message, and actually overwrite your message.
Then you output this bArray, which results in AwMAAAA=, the Base64 representation of [3, 3, 0, 0, 0].
You may want to change your function in this way:
function ConvertMsg(message, type = 255, cmd = 255) {
    let length = message.length;
    let bArray = Buffer.alloc(length + 3);
    bArray[0] = type;
    bArray[1] = cmd;
    // means: copy 'message' starting from 0 into 'bArray' starting from 2
    message.copy(bArray, 2);
    bArray[length + 2] = 0;
    return bArray;
}
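With the copy direction fixed, the same call should now produce the same Base64 string as the C# version:
let encrypted = ConvertMsg(Buffer.from("hi"), 3, 3);
console.log(encrypted.toString("base64")); // AwNoaQA=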

Binary To Corresponding ASCII String Conversion

Hi, I was able to convert an ASCII string to binary using a BinaryWriter, e.g. as 10101011. Now I need to convert the binary back to an ASCII string. Any idea how to do it?
This should do the trick... or at least get you started...
public Byte[] GetBytesFromBinaryString(String binary)
{
    var list = new List<Byte>();

    for (int i = 0; i < binary.Length; i += 8)
    {
        String t = binary.Substring(i, 8);
        list.Add(Convert.ToByte(t, 2));
    }

    return list.ToArray();
}
Once the binary string has been converted to a byte array, finish off with
Encoding.ASCII.GetString(data);
So...
var data = GetBytesFromBinaryString("010000010100001001000011");
var text = Encoding.ASCII.GetString(data);
If you have ASCII characters only, you could use Encoding.ASCII.GetBytes and Encoding.ASCII.GetString.
var text = "Test";
var bytes = Encoding.ASCII.GetBytes(text);
var newText = Encoding.ASCII.GetString(bytes);
Here is complete code for your answer:
FileStream iFile = new FileStream(@"c:\test\binary.dat", FileMode.Open);
long lengthInBytes = iFile.Length;
BinaryReader bin = new BinaryReader(iFile);
byte[] byteArray = bin.ReadBytes((int)lengthInBytes);
System.Text.Encoding encEncoder = System.Text.ASCIIEncoding.ASCII;
string str = encEncoder.GetString(byteArray);
Take this as a simple example:
public void ByteToString()
{
    Byte[] arrByte = { 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0 };
    string x = Convert.ToBase64String(arrByte);
}
This linked answer has interesting details about this kind of conversion:
binary file to string
Sometimes, instead of using the built-in tools, it's better to use "custom" code. Try this function:
public string BinaryToString(string binary)
{
    if (string.IsNullOrEmpty(binary))
        throw new ArgumentNullException("binary");

    if ((binary.Length % 8) != 0)
        throw new ArgumentException("Binary string invalid (must divide by 8)", "binary");

    StringBuilder builder = new StringBuilder();

    for (int i = 0; i < binary.Length; i += 8)
    {
        string section = binary.Substring(i, 8);
        int ascii = 0;

        try
        {
            ascii = Convert.ToInt32(section, 2);
        }
        catch
        {
            throw new ArgumentException("Binary string contains invalid section: " + section, "binary");
        }

        builder.Append((char)ascii);
    }

    return builder.ToString();
}
Tested with 010000010100001001000011 it returned ABC using the "raw" ASCII values.
