I am using the OPCFoundation/UA-.NETStandard components (version 1.4.371.60) to communicate with an OPC Server in one of our products for testing purposes. The whole system is in-house and on a separate network segment so security is not an issue in this case.
Recently a new problem has arisen with certain product versions: I can no longer connect.
I always connect with SecurityMode=none & SecurityPolicy=none. The error now is OpcException: Certificate validation failed with error code 0x8114000 and the description says that the minimum length requirement of 2048 was not met.
I have used UaExpert to connect to the same server and that is successful but I have no idea which library it uses.
I have tried overriding the following properties, but with no success.
application.ApplicationConfiguration.SecurityConfiguration.AutoAcceptUntrustedCertificates = true;
application.ApplicationConfiguration.SecurityConfiguration.MinimumCertificateKeySize = 1024;
application.ApplicationConfiguration.SecurityConfiguration.RejectSHA1SignedCertificates = false;
Am I missing something? Can I override and ignore this error somehow?
What you tried looks good.
Maybe there is a *.config.xml file somewhere that overrides the MinimumCertificateKeySize value with the current default value.
Another solution would be to create a new certificate for the OPC UA server, to be sure it is not using a deprecated key size ;)
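If you go down the new-certificate route and want to stay in C#, here is a minimal sketch using .NET's CertificateRequest class (available from .NET Core 2.0 / .NET Framework 4.7.2; the subject name and lifetime are placeholders):
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

// Sketch: issue a self-signed certificate whose RSA key meets the 2048-bit default minimum.
using (RSA rsa = RSA.Create(2048))
{
    var request = new CertificateRequest(
        "CN=MyOpcUaServer",              // placeholder subject
        rsa,
        HashAlgorithmName.SHA256,        // also avoids the SHA1-signed-certificate rejection
        RSASignaturePadding.Pkcs1);

    X509Certificate2 certificate = request.CreateSelfSigned(
        DateTimeOffset.UtcNow.AddDays(-1),
        DateTimeOffset.UtcNow.AddYears(2));
}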
I have managed to get it working as I want. The problem was in the way I was initialising the components. I had created a new CertificateValidator and then set up the ApplicationConfiguration (including the MinimumCertificateKeySize). What I needed to do was to update the validator with the application configuration, as it is the validator that needs to know the minimum certificate key size.
var certificateValidator = new CertificateValidator();
certificateValidator.CertificateValidation += (sender, eventArgs) =>
{
    // handle event
};

// Build the application configuration
var applicationConfiguration = new ApplicationConfiguration
{
    ApplicationUri = server.ToString(),
    ApplicationName = "UaClientTest",
    ApplicationType = ApplicationType.Client,
    CertificateValidator = certificateValidator,
    SecurityConfiguration = new SecurityConfiguration
    {
        AutoAcceptUntrustedCertificates = true,
        MinimumCertificateKeySize = 1024, /* Default is 2048, but the controller only has 1024 */
        RejectSHA1SignedCertificates = false
    },
    // more config here...
};

// IMPORTANT: update config in cert handling
certificateValidator.Update(applicationConfiguration);
I have a basic producer app and a consumer app. If I run both and have both start consuming on their respective topics, I have a great working system. My thought was that if I started the producer and sent a message, I would then be able to start the consumer and have it pick up that message. I was wrong.
Unless both are up and running, I lose messages (or they do not get consumed).
My consumer app looks like this for consuming...
Uri uri = new Uri("http://localhost:9092");
KafkaOptions options = new KafkaOptions(uri);
BrokerRouter brokerRouter = new BrokerRouter(options);
Consumer consumer = new Consumer(new ConsumerOptions(receiveTopic, brokerRouter));
List<OffsetResponse> offset = consumer.GetTopicOffsetAsync(receiveTopic, 100000).Result;
IEnumerable<OffsetPosition> t = from x in offset select new OffsetPosition(x.PartitionId, x.Offsets.Max());
consumer.SetOffsetPosition(t.ToArray());
IEnumerable<KafkaNet.Protocol.Message> msgs = consumer.Consume();
foreach (KafkaNet.Protocol.Message msg in msgs)
{
    // do some stuff here based on the message received
}
Unless I have that offset code in place, it starts at the beginning every time I start the application.
What is the proper way to manage topic offsets so messages are consumed after a disconnect happens?
If I run
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic chat-message-reply-XXX --consumer-property fetch-size=40000000 --from-beginning
I can see the messages, but when I connect my application to that topic, consumer.Consume() does not pick up the messages it has not already seen. I have tried this with and without running the above bat file to see if that makes any difference. When I look at the consumer.SetOffsetPosition(t.ToArray()) call (t specifically), it shows that the offset is the count of all messages for the topic.
Please help,
Set the auto.offset.reset configuration in your ConsumerOptions to earliest. When a consumer group starts consuming messages, it consumes from the latest offset, because the default value for auto.offset.reset is latest.
But I have looked at the kafka-net API now: it does not have an AutoOffsetReset property, and its consumer configuration options seem pretty limited. It also lacks documentation with method summaries.
I would suggest you use the Confluent .NET Kafka NuGet package, because it is owned by Confluent itself.
Also, why are you calling GetTopicOffsetAsync and setting that offset back again on the consumer? I think when you configure your consumer, you should just start reading messages with Consume().
Try this:
static void Main(string[] args)
{
    var uri = new Uri("http://localhost:9092");
    var kafkaOptions = new KafkaOptions(uri);
    var brokerRouter = new BrokerRouter(kafkaOptions);
    var consumerOptions = new ConsumerOptions(receivedTopic, brokerRouter);
    var consumer = new Consumer(consumerOptions);

    foreach (var msg in consumer.Consume())
    {
        var value = Encoding.UTF8.GetString(msg.Value);
        // Process value here
    }
}
In addition, enable logs in your KafkaOptions and ConsumerOptions; they will help you a lot:
var kafkaOptions = new KafkaOptions(uri)
{
    Log = new ConsoleLog()
};
var consumerOptions = new ConsumerOptions(topic, brokerRouter)
{
    Log = new ConsoleLog()
};
I switched over to use Confluent's C# .NET package and it now works.
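For reference, a minimal sketch of what that Confluent.Kafka consumer can look like (topic and group id are placeholders; the fixed GroupId is what lets the broker track committed offsets across restarts, and AutoOffsetReset.Earliest only applies the first time the group connects):
using System;
using System.Threading;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "my-consumer-group",              // fixed group id => committed offsets survive restarts
    AutoOffsetReset = AutoOffsetReset.Earliest  // only used when the group has no committed offset yet
};

using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
{
    consumer.Subscribe("chat-message-reply-XXX");

    var cts = new CancellationTokenSource();
    while (!cts.IsCancellationRequested)
    {
        ConsumeResult<Ignore, string> result = consumer.Consume(cts.Token);
        Console.WriteLine(result.Message.Value); // process the message here
    }

    consumer.Close(); // commit final offsets and leave the group cleanly
}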
I have been looking into how these two new settings will affect our C# code that connects to an LDAP server and performs user lookups.
Using the code below to connect to an AD, I have found a few scenarios that no longer work when these settings are switched on.
private static LdapConnection ConnectAndBind(
    string server,
    int port,
    int timeout,
    string userName,
    string pwd,
    AuthType authType,
    bool useSSL,
    bool useV3)
{
    var con = new LdapConnection(new LdapDirectoryIdentifier(server, port));

    if (useSSL)
    {
        con.SessionOptions.SecureSocketLayer = useSSL;
        con.SessionOptions.VerifyServerCertificate = VerifyServerCertificate;
        con.SessionOptions.QueryClientCertificate = QueryClientCertificate;
    }

    con.Timeout = new TimeSpan(0, 0, timeout);
    con.SessionOptions.ProtocolVersion = useV3 ? 3 : 2;

    try
    {
        con.AuthType = authType;
        con.Credential = new NetworkCredential(userName, pwd);
        con.Bind();
    }
    catch (Exception e)
    {
        throw new ProviderException(
            ProviderException.ErrorIdentifier.AuthenticationFailed,
            LanguageLogic.GetString("AuthenticationProvider.ConnectError"),
            e.Message);
    }

    return con;
}
This is used in the context of a WebForms/MVC ASP.NET (4.5) app; once connected, it is used to import user details into the app.
But at the moment, depending on how the registry keys for the two settings on the AD server are set, I am finding some situations where it does not connect (the error returned is that the supplied credentials are invalid).
The first two tables are roughly how I expected it to work, with the non-signed/non-SSL basic bind not working.
However, I cannot find a reason why, when channel binding is set to required (table 3), it does not work for the other 3 red entries.
Has anyone else been working on this who could shed some light on the matter? Would a newer version of .NET support this setting?
Thanks for any info
UPDATE 1
So I downloaded Softerra LDAP Browser. I get the same results using that, so I don't think it's my code.
As soon as I turn on the registry key for channel binding, I get "the specified credentials are invalid" for those connection methods over SSL.
I have updated the AD server with all the latest patches, but no change.
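For anyone experimenting with the signing side of this: System.DirectoryServices.Protocols does expose signing and sealing session options, which is what the LDAP signing requirement negotiates on a non-SSL bind. A sketch (assuming AuthType.Negotiate; whether this changes anything when channel binding is set to required over SSL depends on the OS/server patch level):
using System.DirectoryServices.Protocols;
using System.Net;

// Sketch: request LDAP signing and sealing on a non-SSL (port 389) bind.
// These options apply to Negotiate (Kerberos/NTLM) binds, not to basic binds.
var con = new LdapConnection(new LdapDirectoryIdentifier(server, 389));
con.SessionOptions.ProtocolVersion = 3;
con.SessionOptions.Signing = true;  // message integrity
con.SessionOptions.Sealing = true;  // message encryption
con.AuthType = AuthType.Negotiate;
con.Credential = new NetworkCredential(userName, pwd);
con.Bind();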
I am issuing (with my own Certificate Authority) a certificate in C# code (based on: .NET Core 2.0 CertificateRequest class).
In CertificateRequest, I am unable to add the certificate OCSP Authority Information Access (OID: 1.3.6.1.5.5.7.1.1) and certificate policies (OID: 2.5.29.32) extensions (similar results to: Authority Information Access extension).
I do not want to use external libraries, except perhaps ASN.1 libraries if needed.
Can anyone help with C# code to add these extensions, as I didn't find any suitable types in .NET?
certificateRequestObject.CertificateExtensions.Add(
new X509Extension("2.5.29.32", **[Authority Information Access text] to RawData?** , false));
[Authority Information Access text]
Authority Information Access 1.3.6.1.5.5.7.1.1
[1]Authority Info Access
Access Method=On-line Certificate Status Protocol (1.3.6.1.5.5.7.48.1)
Alternative Name:
URL=example.org
[2]Authority Info Access
Access Method=Certification Authority Issuer (1.3.6.1.5.5.7.48.2)
Alternative Name:
URL=example.org
Disclaimer: I strongly believe that you should not roll your own crypto/CA and should use standard CA software to issue certificates, since they are intended to solve this problem.
There is no built-in support for ASN.1 encoding/decoding in .NET (including .NET Core); you have to use 3rd party libraries.
For ASN.1 encoding you can use the ASN.1 library I developed: Asn1DerParser.NET
And the use for your particular case will be:
Byte[] encodedData = new Asn1Builder()
    .AddSequence(x => x.AddObjectIdentifier(new Oid("1.3.6.1.5.5.7.48.1"))
                       .AddImplicit(6, Encoding.ASCII.GetBytes("http://ocsp.example.com"), true))
    .GetEncoded();
var extension = new X509Extension("1.3.6.1.5.5.7.1.1", encodedData, false);
and add the extension item to your request. If you need to add more URLs, then add more SEQUENCE elements:
Byte[] encodedData = new Asn1Builder()
    .AddSequence(x => x.AddObjectIdentifier(new Oid("1.3.6.1.5.5.7.48.1"))
                       .AddImplicit(6, Encoding.ASCII.GetBytes("http://ocsp1.example.com"), true))
    .AddSequence(x => x.AddObjectIdentifier(new Oid("1.3.6.1.5.5.7.48.1"))
                       .AddImplicit(6, Encoding.ASCII.GetBytes("http://ocsp2.example.com"), true))
    .GetEncoded();
var extension = new X509Extension("1.3.6.1.5.5.7.1.1", encodedData, false);
I needed to add an AIA (Authority Information Access) extension using dotnet also. It is super cool that @Crypt32 shared the code from Asn1DerParser.NET. It made me curious, and I started looking at other code like BouncyCastle, but I didn't see any code that did the same thing. There is an AuthorityInformationAccess class, but I couldn't find any tests that created an AIA extension; maybe the implementation isn't finished. I could dig further, but I instead looked at the dotnet runtime code. While of course there isn't a dotnet AIA builder, there is a SubjectAlternativeNameBuilder I could learn from. So, I did just that. Essentially it uses the AsnWriter to encapsulate the mechanics of building an ASN.1 SEQUENCE. Below is an example where I add two certification authority issuer entries. A next step would be to encapsulate this in an AIA builder, but here is an example.
The parts that I struggled with were ensuring when to call writer.Encode and writer.WriteEncodedValue. After an hour or so everything made sense.
@Guru_07, I believe this allows you to avoid third party code, although it has been some time since you posted your question.
// AuthorityInfoAccessSyntax ::= SEQUENCE OF AccessDescription
// AccessDescription ::= SEQUENCE { accessMethod OID, accessLocation GeneralName }
List<byte[]> encodedUrls = new List<byte[]>();
List<byte[]> encodedSequences = new List<byte[]>();

// First AccessDescription: accessMethod = id-ad-caIssuers (1.3.6.1.5.5.7.48.2)
AsnWriter writer = new AsnWriter(AsnEncodingRules.DER);
writer.WriteObjectIdentifier("1.3.6.1.5.5.7.48.2");
encodedUrls.Add(writer.Encode());

// accessLocation = uniformResourceIdentifier [6] IA5String
writer = new AsnWriter(AsnEncodingRules.DER);
writer.WriteCharacterString(
    UniversalTagNumber.IA5String,
    "http://ocsp.example.com",
    new Asn1Tag(TagClass.ContextSpecific, 6));
encodedUrls.Add(writer.Encode());

// Wrap the accessMethod/accessLocation pair in a SEQUENCE
writer = new AsnWriter(AsnEncodingRules.DER);
using (writer.PushSequence())
{
    foreach (byte[] encodedName in encodedUrls)
    {
        writer.WriteEncodedValue(encodedName);
    }
}
encodedSequences.Add(writer.Encode());

// Second AccessDescription, same structure with the second URL
encodedUrls = new List<byte[]>();

writer = new AsnWriter(AsnEncodingRules.DER);
writer.WriteObjectIdentifier("1.3.6.1.5.5.7.48.2");
encodedUrls.Add(writer.Encode());

writer = new AsnWriter(AsnEncodingRules.DER);
writer.WriteCharacterString(
    UniversalTagNumber.IA5String,
    "http://ocsp2.example.com",
    new Asn1Tag(TagClass.ContextSpecific, 6));
encodedUrls.Add(writer.Encode());

writer = new AsnWriter(AsnEncodingRules.DER);
using (writer.PushSequence())
{
    foreach (byte[] encodedName in encodedUrls)
    {
        writer.WriteEncodedValue(encodedName);
    }
}
encodedSequences.Add(writer.Encode());

// Outer SEQUENCE OF AccessDescription
writer = new AsnWriter(AsnEncodingRules.DER);
using (writer.PushSequence())
{
    foreach (byte[] encodedSequence in encodedSequences)
    {
        writer.WriteEncodedValue(encodedSequence);
    }
}

var ext = new X509Extension(
    new Oid("1.3.6.1.5.5.7.1.1"),
    writer.Encode(),
    false);
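The extension can then be added to the request exactly as in the question (certificateRequestObject being the CertificateRequest instance from the original post):
certificateRequestObject.CertificateExtensions.Add(ext);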
I have a C# client which I want to monitor with Azure Application Insights.
I have added the following NuGet packages:
Microsoft.ApplicationInsights v2.9.1
Microsoft.ApplicationInsights.Agent.Intercept v2.4.0
Microsoft.ApplicationInsights.DependencyCollector v2.9.1
Microsoft.ApplicationInsights.PerfCounterCollector v2.9.1
Microsoft.ApplicationInsights.Web v2.9.1
Microsoft.ApplicationInsights.WindowsServer v2.9.1
Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel v2.9.1
Microsoft.AspNet.TelemetryCorrelation v1.0.5
System.Diagnostics.DiagnosticSource v4.5.1
The problem is that the Azure Portal recognizes my events and exceptions in the usage category, but not in the Live Metrics Stream. My client is connected and the Live Metrics Stream is available, but no telemetry data like tracked events or exceptions is shown. Even in Visual Studio there is no Application Insights data available during debugging.
I have tried several times to uninstall and reinstall all the NuGets and updated them to the newest version, but without any effect.
This is the related code for my Azure client. The Payload is just a class with a few properties which I want to track. It is mapped into a dictionary for the payload to send.
public override void Initialize()
{
    try
    {
        base.Initialize();

        var configuration = new TelemetryConfiguration();
        configuration.InstrumentationKey = Configuration.AnalyticsCodeId;

        var dependencies = new DependencyTrackingTelemetryModule();
        dependencies.Initialize(configuration);

        configuration.TelemetryInitializers.Add(new Microsoft.ApplicationInsights.Extensibility.OperationCorrelationTelemetryInitializer());
        configuration.TelemetryInitializers.Add(new ClientIpHeaderTelemetryInitializer());
        configuration.TelemetryInitializers.Add(new AccountIdTelemetryInitializer());

        customTelemetry = new AzureCustomTelemetryInitializer(Payload);
        configuration.TelemetryInitializers.Add(customTelemetry);

        client = new TelemetryClient(configuration);

        if (CheckTrackingIsAllowed())
            InitLiveMetric(configuration);
    }
    catch (Exception e)
    {
        Log.Write(e);
    }
}

private void InitLiveMetric(TelemetryConfiguration configuration)
{
    QuickPulseTelemetryProcessor processor = null;

    configuration.TelemetryProcessorChainBuilder
        .Use((next) =>
        {
            processor = new QuickPulseTelemetryProcessor(next);
            return processor;
        })
        .Build();

    var quickPulse = new QuickPulseTelemetryModule();
    quickPulse.Initialize(configuration);
    quickPulse.RegisterTelemetryProcessor(processor);
}

public override void SendEventAsync(string eventName, string modulName)
{
    if (!CheckTrackingIsAllowed())
        return;

    Task.Run(() =>
    {
        var p = MapAzurePayload(Payload);
        client.TrackEvent(eventName, p);
    });
}
This code seems to work properly, as I can see the tracked events and exceptions in the usage category in the Azure Portal. But as I said, not in Live Metrics, which would be very nice and should normally work with this code, I think.
Any ideas why the Live Metrics Stream is not working as intended?
Edit: Found the reason... The problem is that my first tracked event is sent while the client does not seem to be ready yet. If I delay the sending, it works as intended.
My solution is to delay the first sending of a tracked event. Not nice, but I have no other idea...
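For illustration, a sketch of that delay workaround (firstEventSent is a field I made up; the delay length is arbitrary and just gives the QuickPulse/Live Metrics channel time to connect before the first event is tracked):
private bool firstEventSent; // hypothetical field, not in the original code

public override void SendEventAsync(string eventName, string modulName)
{
    if (!CheckTrackingIsAllowed())
        return;

    Task.Run(async () =>
    {
        if (!firstEventSent)
        {
            await Task.Delay(TimeSpan.FromSeconds(5)); // arbitrary warm-up delay
            firstEventSent = true;
        }

        var p = MapAzurePayload(Payload);
        client.TrackEvent(eventName, p);
    });
}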
I am working on an application which is deployed to a TEST and then a LIVE webserver.
I want the class library I am working on to use the correct service endpoint when it is deployed.
Currently the code is as follows:
var data = new SettingsViewModel()
{
    ServiceURI = Constants.LIVE_ENDPOINT_SERVICE_ADDRESS,
    AutoSync = Constants.DEFAULT_AUTO_SYNC,
    AppDataFolder = Path.Combine(ApplicationData.Current.LocalFolder.Path, Constants.ROOT_FOLDER, Constants.DATA_FOLDER),
    MapKey = Constants.BASIC_MAP_KEY,
    Logging = false
};

#if DEBUG
data.ServiceURI = Constants.DEV_ENDPOINT_SERVICE_ADDRESS;
#endif
As you can see, this can only pick up the DEV or the LIVE endpoint; the code cannot distinguish whether the webserver is LIVE or TEST.
I thought about setting up an App.config file and getting the correct endpoint from there, but when I create a new item, the Config template is not listed. So how do I do this?
For now I can propose this solution:
public static class Constants
{
    public static string GetEndPoint()
    {
        // Debugging purposes
        if (System.Diagnostics.Debugger.IsAttached)
        {
            return DEV_ENDPOINT_SERVICE_ADDRESS;
        }
        else if (Environment.MachineName == "Test Server") // You need to know your test server machine name at hand.
        {
            return "Return test Server endpoint";
        }
        else
        {
            return "Return live server endpoint";
        }
    }
}
You can use it in your SettingsViewModel like this:
var data = new SettingsViewModel()
{
    ServiceURI = Constants.GetEndPoint(),
    AutoSync = Constants.DEFAULT_AUTO_SYNC,
    AppDataFolder = Path.Combine(ApplicationData.Current.LocalFolder.Path, Constants.ROOT_FOLDER, Constants.DATA_FOLDER),
    MapKey = Constants.BASIC_MAP_KEY,
    Logging = false
};
The drawback of this solution is that if you change your test server, you need to change it manually in your code.
Having done some research, I realise I need to clarify something. The application I am working on is a Windows RT application, and this does not allow config files. The solution I am meant to use is local settings, but these do not reference an external file like an App.config. If I want to change the location of an endpoint, then I am going to have to specify where that is in the code.
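For completeness, a minimal sketch of the local-settings approach (the "ServiceURI" key name is made up; the stored value can be changed at runtime, e.g. from a settings page, instead of in code):
using Windows.Storage;

// Read the endpoint from local settings, falling back to the built-in constant.
ApplicationDataContainer localSettings = ApplicationData.Current.LocalSettings;

string serviceUri = localSettings.Values["ServiceURI"] as string
    ?? Constants.LIVE_ENDPOINT_SERVICE_ADDRESS;

// Persist a different endpoint, e.g. after the user edits it on a settings page.
localSettings.Values["ServiceURI"] = "https://test.example.com/service";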